Archive for the ‘computers’ Category

NetScaler fun with OpenStack keys and userdata

April 17, 2016

One of the things that’s been bugging me about NetScaler and OpenStack is the lack of basic integration. Its management network is configured via DHCP on first boot, or via config drive and userdata if DHCP is not available, but it doesn’t import SSH keys or run userdata scripts as part of its initial configuration.

Thankfully, the above limitation may be easily alleviated using the nsbefore.sh and nsafter.sh boot-time configuration backdoors. Here is a sample nsbefore.sh for VPX, based on the OpenStack docs, that handles the import of SSH keys:

root@ns# cat /nsconfig/nsbefore.sh
#!/usr/bin/bash
# Fetch public key using HTTP
ATTEMPTS=10
FAILED=0
while [ ! -f /nsconfig/ssh/authorized_keys ]; do
  curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
  if [ $? -eq 0 ]; then
    cat /tmp/metadata-key >> /nsconfig/ssh/authorized_keys
    chmod 0600 /nsconfig/ssh/authorized_keys
    rm -f /tmp/metadata-key
    echo "Successfully retrieved public key from instance metadata"
    echo "*****************"
    echo "AUTHORIZED KEYS"
    echo "*****************"
    cat /nsconfig/ssh/authorized_keys
    echo "*****************"
  else
    FAILED=`expr $FAILED + 1`
    if [ $FAILED -ge $ATTEMPTS ]; then
      echo "Failed to retrieve public key from instance metadata after $FAILED attempts, quitting"
      break
    fi
    echo "Could not retrieve public key from instance metadata (attempt #$FAILED/$ATTEMPTS), retrying in 5 seconds..."
    ifconfig 0/1 # show the management interface state while we wait
    sleep 5
  fi
done
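The retry loop above is a pattern worth factoring out if you end up adding more boot-time fetches. Here is a minimal sketch of a reusable helper; the `retry` function name and its arguments are my own invention, not part of any NetScaler tooling:

```shell
#!/usr/bin/bash
# Hypothetical helper: run a command until it succeeds, up to a maximum
# number of attempts, sleeping between tries. Mirrors the loop above.
retry() {
  attempts=$1; delay=$2; shift 2
  failed=0
  until "$@"; do
    failed=$((failed + 1))
    if [ "$failed" -ge "$attempts" ]; then
      echo "giving up after $failed attempts" >&2
      return 1
    fi
    echo "attempt #$failed/$attempts failed, retrying in ${delay}s..."
    sleep "$delay"
  done
}

# Usage, e.g.:
# retry 10 5 curl -f -o /tmp/metadata-key \
#   http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
```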

Courtesy of the Red Hat documentation, a simple nsafter.sh that retrieves and runs userdata looks like this:

#!/usr/bin/bash

# Fetch userdata using HTTP
ATTEMPTS=10
FAILED=0
while [ ! -f /nsconfig/userdata ]; do
  curl -f http://169.254.169.254/openstack/2012-08-10/user_data > /tmp/userdata 2>/dev/null
  if [ $? -eq 0 ]; then
    cat /tmp/userdata >> /nsconfig/userdata
    chmod 0700 /nsconfig/userdata
    rm -f /tmp/userdata
    echo "Successfully retrieved userdata"
    echo "*****************"
    echo "USERDATA"
    echo "*****************"
    cat /nsconfig/userdata
    echo "*****************"
    /nsconfig/userdata
  else
    FAILED=`expr $FAILED + 1`
    if [ $FAILED -ge $ATTEMPTS ]; then
      echo "Failed to retrieve userdata from instance metadata after $FAILED attempts, quitting"
      break
    fi
    echo "Could not retrieve userdata from instance metadata (attempt #$FAILED/$ATTEMPTS), retrying in 5 seconds..."
    sleep 5
  fi
done

Simple enough. Now to put these to the test:

  1. Create a simple HEAT template
  2. # more template
    ################################################################################
    heat_template_version: 2015-10-15
    
    ################################################################################
    
    description: >
      Simple template to deploy a NetScaler with floating IP
    
    ################################################################################
    
    resources:
      testvpx:
        type: OS::Nova::Server
        properties:
          key_name: mysshkey
          image: NS_userdata
          flavor: m1.vpx
          networks:
            - network: private_network
          user_data_format: "RAW"
          user_data:
            get_file: provision.sh
    
      testvpx_floating_ip:
        type: OS::Neutron::FloatingIP
        properties:
          floating_network: external_network
    
      testvpx_float_association:
        type: OS::Neutron::FloatingIPAssociation
        properties:
          floatingip_id: { get_resource: testvpx_floating_ip }
          port_id: {get_attr: [testvpx, addresses, private_network, 0, port]}
    
  3. Import into Glance a NetScaler image containing the above nsbefore.sh and nsafter.sh changes; name it NS_userdata
  4. Create a simple test provisioning script
  5. # cat provision.sh
    #!/usr/bin/bash
    
    echo foo
    touch /var/tmp/foobar
    echo bar >> /var/tmp/foobar
    
    nscli -U :nsroot:nsroot add ns ip 172.16.30.40 255.255.255.0
    
  6. Create a stack and identify the NetScaler floating IP address
  7. # heat stack-create -f template vpx__userdata
    +--------------------------------------+------------------+--------------------+---------------------+--------------+
    | id                                   | stack_name       | stack_status       | creation_time       | updated_time |
    +--------------------------------------+------------------+--------------------+---------------------+--------------+
    | 540cb3d2-3b21-443c-a43b-10c745d28498 | vpx__userdata    | CREATE_IN_PROGRESS | 2016-04-17T16:49:49 | None         |
    +--------------------------------------+------------------+--------------------+---------------------+--------------+
    # nova list | grep testvpx
    | 77388ebc-97e8-4a74-b863-40e822cb88c7 | vpx__userdata-testvpx-t3r3avxl7unc        | ACTIVE | -          | Running     | private_network=192.168.100.200, 10.78.16.139
    

That should be it. To verify that everything went smoothly, SSH into the instance using your private SSH key and run “sh ns ip” to confirm that the provisioning script executed properly.

# ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i privatekey.pem nsroot@10.78.16.139
Warning: Permanently added '10.78.16.139' (RSA) to the list of known hosts.
###############################################################################
#                                                                             #
#        WARNING: Access to this system is for authorized users only          #
#         Disconnect IMMEDIATELY if you are not an authorized user!           #
#                                                                             #
###############################################################################

Last login: Sun Apr 17 16:51:06 2016 from 10.78.16.59
 Done
> sh ns ip
        Ipaddress        Traffic Domain  Type             Mode     Arp      Icmp     Vserver  State
        ---------        --------------  ----             ----     ---      ----     -------  ------
1)      192.168.100.200  0               NetScaler IP     Active   Enabled  Enabled  NA       Enabled
2)      172.16.30.40     0               SNIP             Active   Enabled  Enabled  NA       Enabled

NetScaler VPX: DHCP support

February 4, 2015

This is a quick recipe for enabling DHCP on your NetScaler VPX on KVM:

  1. Boot the KVM VPX instance per the instructions on the Citrix site.
  2. Create /nsconfig/nsbefore.sh

  3. #!/bin/sh

    mkdir /var/db # to store lease files
    mkdir /var/empty
    /sbin/dhclient -l /var/db/dhclient.leases.1 0/1

  4. Create /nsconfig/nsafter.sh

  5. #!/bin/sh

    ADDR=`grep fixed-address /var/db/dhclient.leases.1 | awk '{print $NF}' | sed -e 's/;//' | uniq | tail -1`
    SUBNET=`grep subnet-mask /var/db/dhclient.leases.1 | awk '{print $NF}' | sed -e 's/;//' | uniq | tail -1`
    GATEWAY=`grep routers /var/db/dhclient.leases.1 | awk '{print $NF}' | sed -e 's/;//' | uniq | tail -1`

    grep "$ADDR" /nsconfig/ns.conf
    if [ $? != 0 ]; then
      nscli -U :nsroot:nsroot "set ns config -IPAddress $ADDR -netmask $SUBNET"
      nscli -U :nsroot:nsroot "savec"
      sleep 5
      yes | nscli -U :nsroot:nsroot "reboot"
    fi

    grep "$GATEWAY" /nsconfig/ns.conf
    if [ $? != 0 ]; then
      nscli -U :nsroot:nsroot "add route 0.0.0.0 0.0.0.0 $GATEWAY"
      nscli -U :nsroot:nsroot "savec"
    fi

  6. Power off your NetScaler VPX and save the disk image as a DHCP-enabled template.
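The grep/awk/sed pipelines in nsafter.sh are easy to sanity-check offline before baking them into an image. The sketch below runs them against a fake dhclient lease file; the lease contents are made up for illustration:

```shell
#!/bin/sh
# Exercise the lease-parsing pipelines from nsafter.sh against a sample
# dhclient lease file, without touching a real NetScaler.
LEASES=/tmp/dhclient.leases.test
cat > "$LEASES" <<'EOF'
lease {
  interface "0/1";
  fixed-address 192.168.100.200;
  option subnet-mask 255.255.255.0;
  option routers 192.168.100.1;
}
EOF

ADDR=`grep fixed-address "$LEASES" | awk '{print $NF}' | sed -e 's/;//' | uniq | tail -1`
SUBNET=`grep subnet-mask "$LEASES" | awk '{print $NF}' | sed -e 's/;//' | uniq | tail -1`
GATEWAY=`grep routers "$LEASES" | awk '{print $NF}' | sed -e 's/;//' | uniq | tail -1`

echo "ADDR=$ADDR SUBNET=$SUBNET GATEWAY=$GATEWAY"
```

Running it should print the address, netmask and gateway extracted from the sample lease, confirming the pipelines cope with the trailing semicolons and repeated lease blocks.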

2012 Nexus 7 with F2FS

January 9, 2015

This turned out to be easier than anticipated, having largely followed the guide available at gadgetreactor. Some quick notes on fun facts I ran into:

  1. If you were thinking of using a VM with USB passthrough, think again. It may or may not work. Do yourself a favor and don’t waste a few hours with VirtualBox, VMware Player or whatever else trying to make it work.
  2. If you are a decent cloud netizen, most of your stuff is already in Dropbox/GDrive/GMail/etc. Thus, if your tablet is predominantly a couch computing device, you probably don’t need a backup in the first place. Worst case scenario, you’ll lose your Angry Birds high scores.
  3. TWRP, once booted, seems to expose a USB media device for read/write purposes. Hence, you don’t really need a USB OTG thumbdrive. Just make sure that TWRP is installed, boot into it and try to create/copy a file prior to wiping everything.
  4. There is some kind of issue with the current slim Gapps which requires the mini-gapps to be flashed before the full-gapps. If you just install the full-gapps, the Google Play Store and Services will be unavailable and your N7 largely useless.

That was it. In retrospect it was rather easy; most of the time and effort was spent trying to make USB passthrough work rather than just using my spare Windows laptop.

The device is quite responsive with the latest firmware. I really like the new “cards” layout when viewing open apps, but other than that the new interface is a little heavy on visuals. Overall it’s probably a welcome upgrade.

ceph: a quick critique

November 21, 2013

The other day I went ahead and had a short rant about Ceph on Twitter:

This prompted a response by Ian Colle and I somehow managed to get myself to write a short blog post explaining my thoughts.

A good place to start is the ceph-deploy tool. I think this tweet sums up how I feel about the existence of the tool in the first place:

Now the tool itself could be great (more on that later). And it’s OK to involve it in a quick start guide of sorts. But I would have hoped that the deep dive sections provided some more insight on what is happening under the hood.

That said, the Ceph guys have decided to go ahead with ceph-deploy. Maybe it cuts the docs size by half (bundling what used to be 10+ steps into a single ceph-deploy invocation), maybe it makes user errors fewer and support much easier. So I bit the bullet and went ahead with it. I installed Ubuntu 13.10, typed “apt-get install ceph*” on my admin and my two test nodes and tried to start hacking away. One day later I was nowhere near having a working cluster, my monitor health displaying 2 OSDs, 0 in, 0 up. It wasn’t a full day of work, but it was frustrating. At the end of the day I gave up and declared the Ubuntu Saucy packages broken.

Now, I appreciate that Inktank may have nothing to do with the packages in the default Ubuntu repos. It may not provide them, it may not test against them. In fact, most of their guides recommend using the repositories at ceph.com. But they’re there. And if something is in the repo, people expect it to work.

Having finally bitten the bullet, I decided to go ahead with the “official” ceph-deploy and packages. This was not without its problems. Locating the packages for Ubuntu Saucy took a little more time than it should have. Even having resolved that, I kept running into issues. Turns out that if at any point “you want to start over”, purgedata is not enough. Turns out this is a known problem too. “apt-get install --reinstall” fixed things for me and voila, I had a Ceph cluster.

Neat. “ceph health” showed my 2 OSDs up and running, I could mount the pool from a client, and so on. Let me take a look at ceph.conf:


# cat /etc/ceph/ceph.conf
[global]
fsid = 2e36c280-4b7f-4474-aa87-9fe317388060
mon_initial_members = foo
mon_host = W.X.Y.Z
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true

That’s it. No sections for my one monitor, my one MDS, my two OSDs. If you have read Configuring Ceph, congrats. You are still none the wiser as to where all these configuration settings are stored. I’ll find out. Eventually.

Was this the end of my problems? Almost. I went ahead, rebooted my small test cluster (2 servers; 1x MON/MDS/OSD, 1x OSD) and noticed the following:


ceph> health
HEALTH_WARN mds cluster is degraded

Thankfully that was an easy one. Clickety-click:


osd-0# mount /dev/sdb3 /var/lib/ceph/osd/ceph-0
osd-1# mount /dev/sdb3 /var/lib/ceph/osd/ceph-1
# ceph
ceph> health
HEALTH_OK

Does it work? Like a charm. But the frustration over what seem to be silly bugs was constantly mounting along the way. And it shouldn’t have been. This is the simplest setup one could possibly come up with, with just a couple of nodes. I was using the latest Ubuntu, not some niche distribution like Gentoo or Arch, nor a distro with outdated packages like CentOS/RHEL/Debian-stable. I should have had this up and running in an hour, not a couple of days, so that I could hack at stuff of more direct interest to me.

Getting back to my original tweet: I exaggerated. You can certainly grab an expert from Inktank to help you set up Ceph. Or you can invest some time on your own. But I still wanted this to be simpler.

Farewell old friend

January 29, 2010

James Gosling’s piece touched a sensitive chord; finding out that sun.com is no more touched another.

While my first contact with Unix was also through a VT220 terminal connected to a SunOS server, it was not love at first sight. Even when I ended up loving a distant descendant of Unix, I recall poking our Solaris admin at what used to be my day job 6 years ago over the inefficiency of the Solaris userland.

It was not till 2005 that I started dealing more frequently with Solaris, version 8 at the time [1]. And then Solaris 10 came around. x86 became a first-class citizen, allowing for a huge performance boost on affordable hardware. With it slowly came Zones (a superior virtualization technology reminiscent of FreeBSD jails), DTrace, Grub (allowing peaceful co-existence with other O/Ses), ZFS (literally the last word in filesystems, at least in the English alphabet ;)) and more. All of this came coupled with typically superior documentation and a consistent adherence to POLA for long-time users. And the same way I ended up hating Windows and loving Linux, in spite of starting my systems administrator career in a Windows environment, I ended up loving Solaris. Not that I hate Linux nowadays [*]; I’ve just grown too old to accept things breaking or changing for little reason every now and then.

In every end lies a new beginning, they say. Let’s hope that Ellison manages to monetize the numerous cool technologies that have been coming out of Santa Clara, and that at the same time the spirit of quality, technical and design excellence, well-thought-out customer support and the ever-present drive to push our overall computing experience to new frontiers that “Sun” represented will stay with us.

[1] Solaris 9 is like the new Star Wars trilogy; passionate Solaris users -such as the author- vehemently deny it ever existed.

[*] OK, that’s not entirely true; I do hate one particular “flavor” of it.

On Monty’s petition to save MySQL

January 2, 2010

I have grown so tired of reading about this that I decided a short blog post is in order.

Recently Dimitris, a good friend and close associate, posted to a popular Greek Open Source list a link to Monty’s plea for saving MySQL [1].

I was a little reluctant to answer at first, since it’s well known that I am a fan of everything under the Sun and could be accused of being biased [2]. That said, George Keramidas wrote a rather insightful comment:

My personal opinion is that Monty is fear mongering, not because he truly believes that he is saving MySQL but for an own personal agenda.

This gave me a little bit of courage, seeing that there are others who think like me. So I posted a short reply:

It’s actually quite simple. With Sun, a company in a really sad state (financially), Monty could make some money out of MariaDB. With Oracle he just doesn’t stand a chance.

Besides, claiming that “GPL and dual licensing were good when I was making shitloads of money, but now that someone else does it’s suddenly bad” is hypocritical, to say the least.

The follow-up by Dimitris kicked off with a rather surprising comment:

There may be some truth to the above statement (actually, it’s obviously true) [editor’s note: he was referring to the “could make some money” paragraph]

Afterwards, Dimitris tried to make the case that, in spite of Monty’s personal agenda, signing the petition could make for a better future for MySQL:

Things aren’t black and white, and this fight is not just for the code but for MySQL as a project (code, community, trademark, servers, foo). There are some examples of projects that were successfully forked with the right leadership, but there are many more that remained stale.
There is some merit to the above statement. Yet I still don’t think that signing the MySQL petition will help at all. Like any other FOSS project, MySQL can and will thrive if either (or both) of the following conditions hold true:
  1. It gets the backing of a large company, like Oracle
  2. It successfully forms a large community

Signing the Help MySQL petition certainly doesn’t help in the first direction and is irrelevant to the second (there are countless successful FOSS projects with either a GPL or a BSD-style license). It only helps:

Now, if anyone is really interested in helping the above interests, feel free to sign the petition. If you ask me, you won’t be doing MySQL any favors.

P.S. Dimitris and George, I liberally translated your e-mails. If you feel that a translation is off, feel free to let me know and I will correct it promptly.

[1]  http://helpmysql.org/el/

[2] Disclaimer: other than being a fan of numerous technologies they’ve brought us during the last 25 years, I have no affiliation with Sun Microsystems

Crippled by design

November 6, 2009

At times I can’t help feeling that Windows is crippled by design. This is what happened to me yesterday when I needed to install a fresh copy of Outlook on my netbook:

  1. Download a fresh copy of Office 2007 from the Internet (thank you MSDNAA) on my media PC; total time required: 15 minutes or so
  2. Put in an 8GB USB stick to copy the ISO to; got an out-of-disk-space error (total time: 1 minute)
  3. Verify that the drive indeed had (mysteriously) just 200+ MB of free space out of a 1GB total
  4. Fire up Computer Management; find out that I had indeed created just a 1GB partition
  5. Try to create a 2nd partition in the 7GB of available free space; find out that the relevant option is not available

WTF? In a moment of haste, instead of “turning it off and on again”, I first opted to search Google and stumbled upon a number of references describing the following problem:

By default, a USB flash drive is detected by Windows XP/Vista as removable media and thus will not display more than one partition. In addition, Windows will not allow you to multi-partition removable media.

I ended up transferring the ISO over my wireless network (my media PC is divorced from any wiring). It probably took the same time it had taken to download it from the Internet.

ARGH!

We travel light

January 24, 2009

Today, before joining Dimitri at Indifex, I thought I’d take a long walk in Patras, get some fresh air and get myself a long-overdue little present. Naturally this would not be possible if I had to drag my Dell Latitude D620 for three to four kilometers, so I thought I’d try my luck and see if I could work with the NC10 instead.

The Samsung NC10 proved very easy to carry around while walking in the city of Patras. That much was expected. Over the last couple of hours it has also proved a joy to use, and I have been genuinely productive with it. Hook it up to the 24″ screen, set up the applications you like (the Atom being x86-compatible certainly helps here) and one is good to go. I don’t miss my “bulky” Latitude and its even bulkier carrying bag one bit.

The bigger picture

January 8, 2009

Perforce is cool. Really cool. Sure, it may be centralized, require a network connection all the time and have a strange commit flow (yes, p4 open, I am looking at you). However, the GUI client is fantastic, the CLI tool excellent (with well-thought-out command names), p4 help almost always saves the day, the documentation is great and the technotes can be a lifesaver. And to add to that, they offer a proxy. A Perforce proxy is no substitute for a distributed SCM (since in turn it must always be connected to the main server), and it does not automatically mean that the user experience is identical to having the main repository on the local LAN, but it is a welcome add-on.

Lately people noticed our Perforce proxy being rather slow. Admittedly it’s an old machine, with a measly (by today’s standards) 512MB of RAM. So what was the verdict? Let’s buy a new one! YEAH! And let’s make sure that it can expand to 32GB; the entry-level model that grows to “just” 8GB is probably not enough.

I am frequently impressed by how often people end up confined in their micro-world and fail to see the big picture. Good product QA engineers who could be great if they got a better understanding of the underlying O/S. Same for development engineers. Customer support engineers who have an excellent grasp of the customer and provide prompt responses for documented issues, but with poor troubleshooting skills, resorting to product development engineers for the slightest of problems (this is not an excuse for poor documentation, btw). And the list goes on.

For what it’s worth, per the Perforce technotes, 8GB would probably be a bit too much for the new server. 512MB would be more than enough. With a careful setup (disabling unnecessary daemons of the underlying O/S), even 256MB could suffice (and yes, this means that we may not need to buy a new server at all).
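To put a rough number on that claim: the oft-quoted Perforce sizing guideline is on the order of 1.5KB of RAM per versioned file on the server (and a proxy caches on disk, so its needs are even more modest). A back-of-the-envelope sketch, with an entirely hypothetical depot size:

```shell
#!/bin/sh
# Back-of-the-envelope p4d memory estimate. FILES is a made-up depot size;
# the ~1.5KB/file figure is the commonly cited Perforce guideline, rounded
# up to 2KB to stay within integer arithmetic and err on the generous side.
FILES=100000
KB_PER_FILE=2
echo "estimated RAM: $((FILES * KB_PER_FILE / 1024)) MB"
```

Even with the generous rounding, a hundred-thousand-file depot lands well under 256MB, which is the point of the post: size against the actual workload, not against the biggest box in the catalog.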

On usability

January 8, 2009

You can tell that the usability of a system has a serious problem when a very simple task takes at the very least thirteen arcane commands at a Unix prompt. You can also tell that you need a usability expert/committee of some sort when this problem has been there for many years (if not more) and no one has done anything about it.

P.S. Just because there isn’t a huge market around them (as there is for web sites) doesn’t mean that command-line tools cannot be (un-)usable.