Opennebula, ZFS and Xen – Part 1 (Get going)

I’ve been reading about and wanting to try OpenNebula for months. Ten days ago I managed to get a basic setup going and thought I’d document things a little bit on my blog as a reference.

Disclaimer: This is not an OpenNebula how-to but rather a series of posts capturing my own customizations. Hence it is intended as a supplement to the official documentation, not a replacement or a step-by-step HOW-TO.

For my setup I opted for CentOS and OpenNebula 1.4 for the frontend node (2.0 entered RC just a couple of days ago, so I may be upgrading soon), OpenSolaris with the Illumos bits to provide the Shared FS (hereafter the storage node), and Xen for the cluster nodes.

Installing the storage node

The storage node setup is pretty straightforward.

  • Grab a decent 64-bit system with a reasonable amount of RAM, since ZFS is well known to be memory-hungry. For my setup I chose a Sun X2100 M2 server lying unused in the datacenter, with a Dual-Core AMD Opteron 1214 CPU and 8GB RAM.
  • Install the latest development build of OpenSolaris, b134. One may either install OpenSolaris 2009.06 and upgrade from the dev repository or just pick up the latest ISO image from genunix.org.
  • Upgrade to Illumos; while not strictly necessary, this means you will have the latest and greatest ZFS bits on your system.
  • Create the ZFS datasets required for OpenNebula (Solaris die-hards may prefer “pfexec zfs” as a non-root user ;)) and assign ownership to the oneadmin user:


# zfs create rpool/export/home/cloud
# zfs set mountpoint=/srv/cloud rpool/export/home/cloud
# zfs create rpool/export/home/cloud/images
# zfs create rpool/export/home/cloud/one
# zfs create rpool/export/home/cloud/one/var
# chown -R oneadmin:cloud /srv/cloud
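
A quick sanity check of the layout; something along these lines is what one should expect, with the child datasets inheriting the /srv/cloud mountpoint (only the name and mountpoint columns are listed):

# zfs list -r -o name,mountpoint rpool/export/home/cloud
NAME                             MOUNTPOINT
rpool/export/home/cloud          /srv/cloud
rpool/export/home/cloud/images   /srv/cloud/images
rpool/export/home/cloud/one      /srv/cloud/one
rpool/export/home/cloud/one/var  /srv/cloud/one/var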

  • Create the cloud group and oneadmin user, if you haven’t already done so for the chown above. Make sure to note down the uid and gid, so that the same values can be used on the frontend and cluster nodes (a sample invocation follows this list).
  • Share the cloud ZFS datasets over NFS. Ideally the top-level dataset should be shared read-write only for the frontend node and the “one/var” sub-dataset read-write for the cluster nodes; to keep things simple, the example below uses a read-write share for the entire cluster subnet.
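
A minimal sketch of that user and group creation on the storage node; the uid/gid pair of 1001 is just a placeholder, any values that are free on all of the boxes will do:

# groupadd -g 1001 cloud
# useradd -u 1001 -g cloud -d /srv/cloud -s /bin/bash oneadmin

With the user and group in place, the share itself boils down to the following.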


# zfs set sharenfs='rw=@192.168.1.0/24' rpool/export/home/cloud
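
If you prefer the stricter split described above, top-level dataset for the frontend only and one/var for the cluster nodes, the property can be set per dataset; a sketch, with the frontend hostname being a placeholder:

# zfs set sharenfs='rw=frontend-node' rpool/export/home/cloud
# zfs set sharenfs='rw=@192.168.1.0/24' rpool/export/home/cloud/one/var

Child datasets inherit sharenfs from their parent unless it is explicitly overridden, which is what the second command does for one/var.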

The above should be enough to get things going for the default out-of-the-box NFS setup that OpenNebula uses from a storage perspective, the only difference being that the Shared FS does not “live” on the frontend but on an independent system.

Installing the frontend and cluster nodes

Once the storage node is properly set up, follow the OpenNebula documentation to set up the frontend and cluster nodes, using the NFS transfer driver.
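
For reference, and assuming the stock Xen and NFS drivers that ship with OpenNebula 1.4, registering a cluster node from the frontend looks roughly like this (the hostname is a placeholder):

frontend-node$ onehost create cluster-node-01 im_xen vmm_xen tm_nfs
frontend-node$ onehost list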

The only “catch” is that the Shared FS lives on a Solaris NFSv4 server, and one that behaves funnily with NFSv3 clients at that. Hence, you need to mount the Shared FS as NFSv4 in the /etc/fstab of your frontend and cluster nodes, and make sure that the NFSv4 domain of the clients and the server match:

frontend-node$ grep cloud /etc/fstab
nfs-server-IP:/srv/cloud /srv/cloud/ nfs4 noauto 0 0
frontend-node$ cat /etc/idmapd.conf
[General]

Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = mydomain.priv

[Mapping]

Nobody-User = nobody
Nobody-Group = nobody

# [Translation]
# Method = nsswitch
storage-node$ cat /var/run/nfs4_domain
mydomain.priv

Note that the NFS4 domain picked by the server is normally the DNS domain.
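
Should the server end up with the wrong domain, it can be overridden on the storage node; a sketch assuming the sharectl nfsmapid_domain property, which should be available on b134:

# sharectl set -p nfsmapid_domain=mydomain.priv nfs
# sharectl get -p nfsmapid_domain nfs
nfsmapid_domain=mydomain.priv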

Validating the setup

Once the setup is complete, one may use the sample ttylinux VM from the OpenNebula documentation to validate it. Just:

  • copy it under “/srv/cloud/images/ttylinux” on the storage server
  • use onevm create on the frontend node to create a new instance of it
  • figure out the cluster node it got deployed to with “onevm list”
  • use virt-viewer on the cluster node to verify that the VM has been properly launched and is running (a sample command trail follows)
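
Put together, and assuming the ttylinux image and template from the OpenNebula documentation, the trail looks more or less as follows; the VM id (5) and hostnames are illustrative, and if memory serves the Xen domains created by OpenNebula are named one-<vmid>:

frontend-node$ scp -r ttylinux/ oneadmin@storage-node:/srv/cloud/images/
frontend-node$ onevm create ttylinux.one
frontend-node$ onevm list
cluster-node-01$ virt-viewer one-5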


Responses to “Opennebula, ZFS and Xen – Part 1 (Get going)”

  1. preetham Says:

    Hi… I am trying to set up OpenNebula as the frontend on a Linux OS on one machine, and on the other machine I have Node1 running XCP, hosting VMs whose OS is CentOS. I need steps to integrate the OpenNebula (frontend) machine with the Node1 (XCP) machine. Could you please guide me on this or send the steps to install?

    Thanks in advance…

    • mperedim Says:

      I’ve never tried this, but I recall that this question comes up on the ONE mailing lists occasionally. According to http://www.mail-archive.com/users@lists.opennebula.org/msg09093.html it seems that XCP support has been left out to dry since the 3.4 release, with the community encouraged to pick it up.

      I have lately moved my relevant infrastructure to CloudStack (the company I worked for was bought by Citrix), so I no longer have a setup handy to provide any help on this.
