Disclaimer: This is not an OpenNebula how-to but rather a series of posts capturing my own customizations. Hence it is intended as a supplement to the official documentation, not a replacement or a step-by-step HOW-TO.
For my setup I opted for CentOS and OpenNebula 1.4 for the frontend node (2.0 entered RC just a couple of days ago, so I may be upgrading soon), OpenSolaris/Illumos to provide the Shared FS (hereafter the storage node), and Xen for the cluster nodes.
Installing the storage node
The storage node setup is pretty straightforward.
- Grab a decent 64-bit system with a reasonable amount of RAM, since ZFS is well known to be memory hungry. For my setup I chose a Sun X2100 M2 server lying unused in the datacenter, with a Dual-Core AMD Opteron 1214 CPU and 8GB RAM.
- Install the latest development build of OpenSolaris, b134. One may either install OpenSolaris 2009.06 and upgrade from the dev repository, or just pick up the latest ISO image from genunix.org.
- Upgrade to Illumos; while not strictly necessary, this ensures you have the latest and greatest ZFS bits on your system.
- Create the ZFS datasets required for OpenNebula (Solaris die-hards may prefer “pfexec zfs” as a non-root user ;)) and assign ownership to the oneadmin user
# zfs create rpool/export/home/cloud
# zfs set mountpoint=/srv/cloud rpool/export/home/cloud
# zfs create rpool/export/home/cloud/images
# zfs create rpool/export/home/cloud/one
# zfs create rpool/export/home/cloud/one/var
# chown -R oneadmin:cloud /srv/cloud
- Create the cloud group and oneadmin user. Make sure to note down the uid and gid, so that they are identical on your frontend and cluster nodes.
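A minimal sketch of the group/user creation; the uid/gid value of 1001 is a made-up example, pick your own and reuse the exact same numbers on the storage, frontend and cluster nodes:

```shell
# Example uid/gid only -- any values work, as long as they are
# identical across the storage, frontend and cluster nodes.
groupadd -g 1001 cloud
useradd -u 1001 -g cloud -d /srv/cloud/one -m oneadmin
```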
- Share the top-level cloud ZFS dataset. One should share the top-level dataset read-write only for the frontend node, and the “one/var” sub-dataset read-write for the cluster nodes. To keep things simple, this example showcases a read-write share for the entire cluster subnet:
# zfs set sharenfs='rw=@<cluster-subnet>/24' rpool/export/home/cloud
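To double-check that the share is exported as intended, two standard commands on the storage node will do:

```shell
# show the sharenfs property currently in effect on the dataset
zfs get sharenfs rpool/export/home/cloud
# list everything the Solaris NFS server is exporting right now
share
```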
The above should be enough to get things going for the default out-of-the-box NFS setup that OpenNebula uses from a storage perspective, the only difference being that the Shared FS does not “live” on the frontend but on an independent system.
Installing the frontend and cluster nodes
Once the storage node is properly set up, follow the OpenNebula documentation to set up the frontend and cluster nodes, using the NFS transfer driver.
The only “catch” is that the Shared FS lives on a Solaris NFSv4 server, and one that behaves oddly with NFSv3 clients at that. Hence, you need to mount the Shared FS as NFSv4 in /etc/fstab on your frontend and cluster nodes, and make sure that the NFSv4 domain of the clients and the server match:
frontend-node$ grep cloud /etc/fstab
nfs-server-IP:/srv/cloud /srv/cloud/ nfs4 noauto 0 0
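Before relying on the fstab entry, a one-off manual mount is a quick sanity check; nfs-server-IP is the same placeholder for the storage node's address used above:

```shell
# mount the share by hand and confirm it came up as NFSv4
mount -t nfs4 nfs-server-IP:/srv/cloud /srv/cloud
mount | grep /srv/cloud
```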
frontend-node$ cat /etc/idmapd.conf
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = mydomain.priv
Nobody-User = nobody
Nobody-Group = nobody
# Method = nsswitch
storage-node$ cat /var/run/nfs4_domain
Note that the NFSv4 domain picked by the server is normally the DNS domain.
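On the OpenSolaris side, the NFSv4 mapping domain can also be inspected and, if needed, pinned explicitly via sharectl; mydomain.priv here is just the example domain from the idmapd.conf above:

```shell
# show the NFSv4 mapping domain the server currently uses
sharectl get -p nfsmapid_domain nfs
# set it explicitly so it matches the Linux clients' idmapd.conf
sharectl set -p nfsmapid_domain=mydomain.priv nfs
```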
Validating the setup
Once the setup is complete, one may use the sample ttylinux VM from the OpenNebula documentation to validate it. Just:
- copy it under “/srv/cloud/images/ttylinux” on the storage server
- use onevm create on the frontend node to create a new instance of it
- figure out the cluster node it got deployed to with “onevm list”
- use virt-viewer on the cluster node to verify that the VM was properly launched and is running
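The steps above boil down to something like the following; the template file name ttylinux.one and the VM id are hypothetical, substitute your own:

```shell
# storage node: drop the image where the cluster expects it
cp ttylinux.img /srv/cloud/images/ttylinux/

# frontend node: instantiate the VM and find where it landed
onevm create ttylinux.one
onevm list

# cluster node it was deployed to: eyeball the console
virt-viewer one-<vmid>
```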