ceph: a quick critique

The other day I went ahead and had a short rant about Ceph on Twitter:

This prompted a response from Ian Colle, and I somehow managed to get myself to write a short blog post explaining my thoughts.

A good place to start is the ceph-deploy tool. I think this tweet sums up how I feel about the existence of the tool in the first place:

Now the tool itself could be great (more on that later). And it’s OK to involve it in a quick-start guide of sorts. But I would have hoped that the deep-dive sections provided some more insight into what is happening under the hood.
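
For context, the flow that ceph-deploy wraps up looks roughly like this for a minimal cluster. This is a sketch from memory rather than the official quick start; the hostnames are placeholders for my two test nodes, and the disk device is hypothetical:

# Rough shape of a minimal ceph-deploy run, from the admin node
ceph-deploy new foo                      # write an initial ceph.conf and mon keyring
ceph-deploy install foo bar              # push the packages to both nodes
ceph-deploy mon create-initial           # bootstrap the monitor and gather keys
ceph-deploy osd create foo:sdb bar:sdb   # prepare and activate one OSD per node
ceph-deploy mds create foo               # add a metadata server for CephFS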

That said, the ceph guys have decided to go ahead with ceph-deploy. Maybe it cut the docs size by half (bundling what used to be 10+ steps into a single ceph-deploy invocation), maybe it makes user errors fewer and support much easier. So I bit the bullet and went ahead with it. I installed Ubuntu 13.10, typed “apt-get install ceph*” on my admin and my two test nodes, and tried to start hacking away. One day later I was nowhere near having a working cluster, my monitor health displaying 2 OSDs, 0 in, 0 up. It wasn’t a full day of work but it was frustrating. At the end of the day I gave up and decided to declare the Ubuntu Saucy packages broken.

Now I appreciate that Inktank may have nothing to do with the packages in the default Ubuntu repos. It may not provide them, it may not test against them. In fact most of their guides recommend using the repositories at ceph.com. But they’re there. And if something is in the repo, people expect it to work.

Having finally given up on the distro packages, I decided to go ahead with the “official” ceph-deploy and packages. This was not without its problems. Locating the packages for Ubuntu Saucy took a little bit more time than it had to. And even having resolved that, I kept running into issues. Turns out that if at any point “you want to start over”, purgedata is not enough. This turns out to be a known problem too. “apt-get install --reinstall” fixed things for me and voilà, I had a ceph cluster.
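
Concretely, what got me unstuck was something like this (a sketch; foo and bar stand in for my nodes, and the exact package list is from memory):

# purgedata wipes the cluster data but left my installation broken...
ceph-deploy purgedata foo bar
# ...so the packages had to be reinstalled on each node before redeploying
apt-get install --reinstall ceph ceph-common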

Neat. “ceph health” showed my 2 OSDs up and running; I could mount the pool from a client, etc. Let me take a look at ceph.conf:


# cat /etc/ceph/ceph.conf
[global]
fsid = 2e36c280-4b7f-4474-aa87-9fe317388060
mon_initial_members = foo
mon_host = W.X.Y.Z
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

This is it. No sections for my one monitor, my one MDS, my 2 OSDs. If you have read Configuring Ceph, congrats. You are still none the wiser as to where all these configuration settings are stored. I’ll find out. Eventually.
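
For contrast, this is the kind of ceph.conf I was expecting to see, old-style per-daemon sections and all. A sketch matching my toy cluster, not something any tool generated for me:

[mon.foo]
host = foo
mon addr = W.X.Y.Z:6789

[mds.foo]
host = foo

[osd.0]
host = foo

[osd.1]
host = bar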

Was this the end of my problems? Almost. I went ahead, rebooted my small test cluster (2 servers; 1x MON/MDS/OSD, 1x OSD) and noticed the following:


ceph> health
HEALTH_WARN mds cluster is degraded

Thankfully that was an easy one. Clickety-click:


osd-0# mount /dev/sdb3 /var/lib/ceph/osd/ceph-0
osd-1# mount /dev/sdb3 /var/lib/ceph/osd/ceph-1
# ceph
ceph> health
HEALTH_OK
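
(To make the fix stick across reboots, the OSD partitions also want entries in /etc/fstab; something along these lines for my disks, where xfs is an assumption about how the partitions were formatted:)

# /etc/fstab on osd-0; osd-1 is identical with ceph-1
/dev/sdb3   /var/lib/ceph/osd/ceph-0   xfs   noatime   0   2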

Does it work? Like a charm. But the frustration over what seem to be silly bugs kept mounting throughout the process. And it shouldn’t have. This is the simplest setup one could possibly come up with, with just a couple of nodes. I was using the latest Ubuntu, not some niche distribution like Gentoo or Arch, nor a distro with outdated packages like CentOS/RHEL/Debian-stable. I should have had this up and running in an hour, not a couple of days, so that I could hack at stuff of more direct interest to me.

Getting back to my original tweet: I exaggerated. You can certainly grab an expert from Inktank to help you set up Ceph. Or you can invest some time on your own. But I still wanted this to be simpler.


6 Responses to “ceph: a quick critique”

  1. Neil Levine Says:

    As the product guy at Inktank, I’ll agree that the install process can always be made better, though we have come a long way since the days of mkcephfs (ceph-deploy’s predecessor). Here’s a question: what would you like it to be? I’d love to hear your ideal pseudo-process for installing Ceph, be it workflow, tools, etc.

    With regards to the config data, the monitors (which are distributed key/value stores) hold many of the details about the cluster. The ceph.conf file really only needs to hold the bare minimum to get a node into the cluster (hence the Mon host details being mandatory).

    We really want to avoid putting the burden on the user to maintain lots of config files across hundreds of nodes where possible (otherwise Puppet et al. become a dependency for running Ceph), especially when the Monitors can serve the needs of the OSD nodes. But it sounds like this comes at the cost of visibility for you. Would it help if we added more explicit steps that allowed you to see the entire config of the cluster during the install process?

    • mperedim Says:

      Hey Neil,

      Appreciate the feedback. Here are a couple of thoughts.

      From a workflow perspective the main requirement for me is to “just work”. I could probably file 3 documentation bugs based on all the above (may well do once I get some time):
      1. for purgedata, requiring purge as well
      2. a recommendation to use the official packages rather than whatever is bundled for Ubuntu or the user’s distribution of choice
      3. the recommendation to edit fstab so that the setup recovers after a reboot

      This is in addition to the bug I pointed out about the lack of Ubuntu Saucy packages in your official repo. I would probably add a 5th bug, an implementation one: purgedata requiring a purge sounds wrong to me (I shouldn’t have to reinstall packages to start from scratch).

      Regarding the lack of visibility, I see your point but I still disagree. Maybe it’s me being an old-school guy, preferring simple text files that I can easily comprehend and maintain over XML, let alone something even more obscure (the distributed key-value store falls into this category). I imagine that if I had to maintain a rather large production cluster I would probably have spent the time to look into the ceph CLI tool, and this might not have bothered me too much (it still would have; cf. my “black magic” quote on twitter: https://twitter.com/mperedim/status/403417348457000960).
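
      For reference, I gather the way to peek at the effective configuration of a running daemon is its admin socket; something like the following, with the daemon name matching my toy monitor and the socket path being the Ubuntu default:

      # dump the running config of the monitor through its admin socket
      ceph --admin-daemon /var/run/ceph/ceph-mon.foo.asok config show | less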

    • Tom Voss Says:

      Where can I find the ceph.com packages for Ubuntu Saucy, and how can I make ceph-deploy from my Admin node recognize and use them?

      If this package isn’t available, can you please explain why? Can you also make it very obvious in your documentation that one cannot simply install and configure a handful of Ubuntu nodes in preparation for installing Ceph, install ceph-deploy on an Admin machine (MacOS via easy_install in this case), configure communication amongst all of the nodes, and then run “ceph-deploy install” against Ubuntu Saucy? It is obviously very frustrating to have gone through all of these preparatory steps only to fail catastrophically when attempting the actual install.

      • Neil Levine Says:

        At Inktank, we only work with and support the LTS Ubuntu releases (12.04, soon 14.04). I think the Ubuntu team put the latest packages for Saucy into their Cloud Archive. Look for James Page (from Canonical) on the ML/IRC and he should be able to point you in the right direction.

      • Tom Voss Says:

        I hope you’ll understand my confusion then, as Ubuntu 13.04 (Raring) packages can be found in your repository.

        Would you suggest I fork ceph-deploy and modify it so that it works with Saucy, or should I work on doing a manual install on all of my servers? This seems like a lot of hoops to jump through when Saucy packages could simply be made available in your repository.

  2. Neil Levine Says:

    I’ll pass on your doc requests to our doc writer.

    I think the tension over the config file is that Ceph is designed for “webscale” deployments, where there are hundreds or thousands of nodes. In this environment, having a config file is as useful as maintaining a /etc/hosts file for the internet 🙂

    But I appreciate that for a starter user with just a few nodes, verbosity is a more useful approach. I’ll chat with the engineering team and see what ideas we can come up with.

    Neil
