
ceph: a quick critique

November 21, 2013

The other day I went ahead and had a short rant about Ceph on Twitter:

This prompted a response by Ian Colle, and I somehow managed to get myself to write a short blog post explaining my thoughts.

A good place to start is the ceph-deploy tool. I think this tweet sums up how I feel about the existence of the tool in the first place:

Now the tool itself could be great (more on that later). And it’s OK to involve it in a quick start guide of sorts. But I would have hoped that the deep dive sections would provide some more insight into what is happening under the hood.
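For context, here is roughly what the quick start boils the whole bring-up down to. Treat it as a sketch from the docs of that era; the node names and the disk are my own placeholders:

admin# ceph-deploy new mon-node
admin# ceph-deploy install mon-node osd-node-1 osd-node-2
admin# ceph-deploy mon create-initial
admin# ceph-deploy osd prepare osd-node-1:/dev/sdb3 osd-node-2:/dev/sdb3
admin# ceph-deploy osd activate osd-node-1:/dev/sdb3 osd-node-2:/dev/sdb3
admin# ceph-deploy admin mon-node osd-node-1 osd-node-2

Convenient, sure. But every one of those lines hides a pile of per-node steps, which is exactly the part I wanted the deep dive to cover.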

That said, the Ceph guys have decided to go ahead with ceph-deploy. Maybe it cuts the docs down to half their size (bundling what used to be 10+ steps into a single ceph-deploy invocation), maybe it makes user errors rarer and support much easier. So I bit the bullet and went ahead with it. I installed Ubuntu 13.10, typed “apt-get install ceph*” on my admin node and my two test nodes, and tried to start hacking away. A day later I was nowhere near having a working cluster, with the monitor’s health output showing 2 OSDs, 0 in, 0 up. It wasn’t a full day of work, but it was frustrating. At the end of the day I gave up and decided to declare the Ubuntu Saucy packages broken.
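For anyone playing along at home, the status commands I kept staring at are along these lines; the exact output format varies by release, so take this as a sketch:

# ceph -s          # overall cluster status, including monitor quorum and OSD counts
# ceph osd stat    # one-line OSD summary: how many exist, how many are up and in
# ceph osd tree    # CRUSH view of hosts and OSDs with their up/down state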

Now I appreciate that Inktank may have nothing to do with the packages in the default Ubuntu repos. It may not provide them, and it may not test against them. In fact, most of their guides recommend using the repositories at ceph.com. But the packages are there. And if something is in the repo, people expect it to work.

Having finally bitten the bullet, I decided to go ahead with the “official” ceph-deploy and packages. This was not without its problems. Locating the packages for Ubuntu Saucy took a bit more time than it had to. And even having resolved that, I kept running into issues. Turns out that if at any point “you want to start over”, purgedata is not enough. Turns out that this is a known problem too. “apt-get install --reinstall” fixed things for me and voilà, I had a Ceph cluster.
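In case you hit the same wall, the start-over dance that eventually worked for me looked roughly like this (node names are placeholders, and the exact package list in the reinstall step is from memory):

admin# ceph-deploy purgedata node1 node2
admin# ceph-deploy forgetkeys
node1# apt-get install --reinstall ceph ceph-common
node2# apt-get install --reinstall ceph ceph-common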

Neat. “ceph health” showed my 2 OSDs up and running, and I could mount the pool from a client, etc. Let me take a look at ceph.conf:


# cat /etc/ceph/ceph.conf
[global]
fsid = 2e36c280-4b7f-4474-aa87-9fe317388060
mon_initial_members = foo
mon_host = W.X.Y.Z
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

This is it. No sections for my one monitor, my one MDS, my 2 OSDs. If you have read Configuring Ceph, congrats. You are still none the wiser about where all these configuration settings are stored. I’ll find out. Eventually.
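For comparison, the old mkcephfs-style configuration spelled the daemons out explicitly, with a section per daemon. Something along these lines (the second host name is made up to match my setup):

[mon.a]
    host = foo
    mon_addr = W.X.Y.Z:6789

[mds.a]
    host = foo

[osd.0]
    host = foo

[osd.1]
    host = bar

Call that boilerplate if you like, but at least you could see at a glance what was supposed to be running where.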

Was this the end of my problems? Almost. I went ahead, rebooted my small test cluster (2 servers; 1x MON/MDS/OSD, 1x OSD) and noticed the following:


ceph> health
HEALTH_WARN mds cluster is degraded

Thankfully that was an easy one. Clickety-click:


osd-0# mount /dev/sdb3 /var/lib/ceph/osd/ceph-0
osd-1# mount /dev/sdb3 /var/lib/ceph/osd/ceph-1
# ceph
ceph> health
HEALTH_OK
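To keep that from biting again on the next reboot, an fstab entry per OSD data partition should do it. A sketch, assuming the OSDs sit on /dev/sdb3 as above and were prepared with the default xfs:

osd-0# echo "/dev/sdb3 /var/lib/ceph/osd/ceph-0 xfs defaults,noatime 0 2" >> /etc/fstab
osd-1# echo "/dev/sdb3 /var/lib/ceph/osd/ceph-1 xfs defaults,noatime 0 2" >> /etc/fstab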

Does it work? Like a charm. But the frustration over what seem to be silly bugs kept mounting throughout the process. And it shouldn’t have. This is the simplest setup one could possibly come up with, just a couple of nodes. I was using the latest Ubuntu, not some niche distribution like Gentoo or Arch, nor a distro with outdated packages like CentOS/RHEL/Debian stable. I should have had this up and running in an hour, not a couple of days, so that I could hack on stuff of more direct interest to me.

Getting back to my original tweet: I exaggerated. You can certainly grab an expert from Inktank to help you set up Ceph. Or you can invest some time and do it on your own. But I still wanted this to be simpler.
