Wednesday, September 16, 2015

Running Kubernetes on a Raspberry Pi

Running the Docker engine on a Raspberry Pi is a breeze thanks to the Docker pirates at Hypriot: just download the image, flash it onto your Pi, and you are off to the races. I am not going to cover the installation process; it is well documented on the Hypriot website, and I also wrote a recipe for it in the Docker cookbook. Roughly, you download the .img file, dd it to your SD card, and boot your Pi.
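For reference, the flashing step from a Linux machine boils down to something like the following; the image file name and the /dev/sdX device are placeholders, so double-check the device (with lsblk for example) before writing:

# write the Hypriot image to the SD card (this destroys whatever is on it)
sudo dd if=hypriot-rpi.img of=/dev/sdX bs=4M
sync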
Having Docker on the Raspberry Pi offers tons of possibilities for hobbyists and home devices. It also triggered my interest because Kubernetes, one of the Docker orchestrators, can run standalone on a single node using Docker containers. I wrote a post several months ago about doing exactly that with docker-compose. So last weekend I decided to give it a try: running Kubernetes on a Pi using the Hypriot image that ships with the Docker engine.

Getting etcd to run

The first issue is that Kubernetes currently relies on etcd, which you therefore need to run on ARM. I decided to get the etcd source directly onto the Pi and update the Dockerfile to build it there. The etcd Dockerfile uses a Golang ONBUILD image, which was causing me grief, so I copied the content of the ONBUILD image into a new Dockerfile based on hypriot/rpi-golang and built it directly. You can see the Dockerfile in the repository linked at the end of this post. With that you have a Docker container running etcd on ARM.
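Roughly, the build on the Pi then looks like this; the image name and the clone location are my own choices, and the Dockerfile is the modified one based on hypriot/rpi-golang. The manifest used later by the Kubelet is what actually starts etcd, so the build is all that is needed here:

git clone https://github.com/coreos/etcd
cd etcd
# build an ARM etcd image using the new Dockerfile based on hypriot/rpi-golang
docker build -t etcd .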

Getting the Hyperkube to run on ARM

Now I needed the hyperkube binary to run on ARM. Hyperkube is a single binary that allows you to start all the Kubernetes components. Thankfully there are some binaries already available for ARM, which was handy because I struggled to compile Kubernetes directly on the Pi.
With that hyperkube binary on hand, I built an image based on the resin/rpi-raspbian:wheezy image. Quite straightforward:
FROM resin/rpi-raspbian:wheezy

RUN apt-get update
RUN apt-get -yy -q install iptables ca-certificates

COPY hyperkube /hyperkube
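Building it is a one-liner once the ARM hyperkube binary sits next to that Dockerfile; the image name simply matches what shows up in the docker images listing further down:

chmod +x hyperkube
docker build -t hyperkube .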

The Kubelet systemd unit

The Kubernetes agent that runs on every node in a cluster is called the Kubelet. The Kubelet is in charge of making sure that all the containers that are supposed to run on the node actually do run. It can also be pointed at a manifest to start specific containers at startup. There is a good post from Kelsey Hightower about it. Since the Hypriot image uses systemd, I took the systemd unit that creates a Kubelet service straight from Kelsey's post:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kubelet  \
--api-servers=http://127.0.0.1:8080 \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
The kubelet binary is downloaded directly from the same location as hyperkube. The manifest is a Kubernetes pod definition that starts all the containers needed to get a Kubernetes controller running: etcd, the API server, the scheduler, the controller manager and the service proxy, all using the hyperkube image built above.
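Wiring it up is standard systemd fare. A minimal sketch, assuming the kubelet binary, the unit file and the pod manifest are sitting in the current directory (paths other than /etc/kubernetes/manifests, which the unit references, are my own choices):

cp kubelet /usr/bin/kubelet && chmod +x /usr/bin/kubelet
# master.yaml is the pod manifest described above (the file name is hypothetical)
mkdir -p /etc/kubernetes/manifests && cp master.yaml /etc/kubernetes/manifests/
cp kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet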

Now the dirty hack

Kubernetes does something interesting: all containers in a pod share the same IP address. This is done by running a placeholder container that does nothing; the other containers in the pod simply share that container's network namespace. This is what is called the pause container. I did not find a way to specify a different image for the pause container in Kubernetes; it seems hard-coded to gcr.io/google_containers/pause:0.8.0, which of course is built for x86_64.
So the dirty trick consisted of taking the pause Go code from the Kubernetes source, compiling it on the Pi with hypriot/rpi-golang, sticking the binary in a scratch image, and tagging that image locally as gcr.io/google_containers/pause:0.8.0 to avoid the download of the real image, which only runs on x86_64. Yeah... right... I told you it was dirty, but it was the quickest way I could think of.
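In shell terms the trick boils down to a local build followed by a re-tag; the build context with the ARM pause binary and its two-line scratch Dockerfile is assumed:

# build the ARM pause binary into a scratch-based image
docker build -t pause-arm:0.8.0 .
# re-tag it locally so the Kubelet finds it and never pulls the x86_64 image
docker tag pause-arm:0.8.0 gcr.io/google_containers/pause:0.8.0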

Putting it all together

Now that you have all the images ready directly on the Pi, plus a Kubelet service, you can start it. The containers will be created and you will have a single-node Kubernetes cluster on the Pi. All that is left is to use the kubectl CLI against it. You can download an ARM version of kubectl from the official Kubernetes releases.
HypriotOS: root@black-pearl in ~
$ docker images
REPOSITORY                       TAG         
hyperkube                        latest
gcr.io/google_containers/pause   0.8.0
etcd                             latest
resin/rpi-raspbian               wheezy   
hypriot/rpi-golang               latest 

HypriotOS: root@black-pearl in ~
$ ./kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
kube-controller-black-pearl   5/5       Running   5          5m
HypriotOS: root@black-pearl in ~
$ ./kubectl get nodes
NAME          LABELS                               STATUS
black-pearl   kubernetes.io/hostname=black-pearl   Ready
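From there you can schedule a first pod; something along these lines should work, with the understanding that hypriot/rpi-busybox-httpd is just an example of an ARM-friendly image and not part of this setup:

./kubectl run web --image=hypriot/rpi-busybox-httpd --port=80
./kubectl get pods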

Get it

Everything is on GitHub at https://github.com/skippbox/k8s4pi, including a horrible bash script that does the entire build :)

Thursday, May 23, 2013

The LinuxTag Hack

What do you do when you go to LinuxTag, the premier open source conference in Berlin, Germany? You give a talk, you hand out tee-shirts at the CloudStack booth, you explain cloud computing, and you hack a CloudStack driver for SaltStack while patching libcloud.

The talk: Talking about clouds is nice and all, but after many years and many talks, I shamelessly admit that it gets a little old. So lately I have been working on big data, both as a backend to CloudStack (think Ceph, Riak CS, Gluster) and as a workload for a cloud. I am talking about using Apache Whirr or Apache Provisionr (incubating) to deploy "one-click" Hadoop clusters on public clouds. It is a long story that I will keep for another post, as I am trying to write this before going to bed, but check out the slides and keep an eye on Pallet and exoscale.

The booth: An open source booth is... well, a booth. I came with my pop-up banner, tablecloth, tee-shirts, postcards and USB stick/bottle openers; it feels a little bit like being a traveling salesman, not that I would know, but I imagine it like this. I have to explain that the 2XL and 3XL shirts will shrink quite a bit and will fit people's M or L frames perfectly. Then I point at the banner to showcase the magnificent CloudStack UI and explain that there is an API server behind it. Sometimes I launch DevCloud and do a live demo to bring them to their knees, sometimes I have to ask for help on IRC to answer a question, and sometimes a German developer wants to trade a tee-shirt for illegal substances not to be named. Life in the fast lane, let me tell you. But that's what it's like to build a community; it is very much an evangelization process.

The hack: Then there is the hack; going to an open source conference without writing code would be a sin. The OSS gods decided to put me next to Tom Hatch, the CTO of SaltStack. Tom is a funny guy with a deep voice and maddening Python skills. SaltStack is an alternative to Chef and Puppet, which are written in that foreign language, Ruby. It does configuration management, remote execution, cloud deployments and tons of other things. I was also happily surprised to find 300 folks on IRC, very chatty folks I might add. Anyway, SaltStack was not going to go anywhere because they did not have a CloudStack driver. EC2 yes, Rackspace yes, Joyent yes, OpenStack yes... but no CloudStack. I had to do something about it. A quick git clone and an RTFM later I was on my way. Thanks to my new best friend exoscale, a Swiss public cloud provider (based on CloudStack, of course) who gave me 50 Francs worth of Cloud Brownies, I could test everything live.

I did not want to run any silly master on my laptop behind crazy NAT nonsense, so I used salt-cloud: I copied the Linode driver and started hacking on it. It uses libcloud, which thankfully I had looked at just a couple of weeks ago while writing a bit of CloudStack documentation. There were a few issues with the libcloud driver, so I opened a couple of bugs there and committed patches to fix my own bugs; isn't that nice? Granted, they are not big bugs, but bugs they are, 329 and 330 to be exact.

I got it working and finished it tonight, which explains my excitement. I forked their repo on GitHub and made two pull requests that got merged right away. Let's get down to it, shall we? You need two configuration files: cloud and profile.

Your cloud conf defines your cloud provider and which driver it uses; that's where I define exoscale, my API keys and so on:

providers:
  exoscale:
    apikey:  
    secretkey: 
    host: api.exoscale.ch
    path: /compute
    securitygroup: default
    user: root
    private_key: 
    provider: cloudstack

Your profile conf defines the type of instance that you are going to start: it is a combination of the image, the service offering and the keypair used to access the instances it will create:

ubuntu-exoscale:
    provider: exoscale
    image: 1d16c78d-268f-47d0-be0c-b80d31e765d2 
    size: b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8 
    ssh_interface: public
    ssh_username: root
    keypair: exoscale

With this you can now list locations, sizes and images:

salt-cloud --list-locations exoscale
Password:
[INFO    ] Configuration file path: /Users/sebastiengoasguen/.saltcloud/cloud
[INFO    ] salt-cloud starting
cloudstack
  CH-GV2
    country: AU
    id: 1128bd56-b4d9-4ac6-a7b9-c715b187ce11

salt-cloud --list-sizes exoscale
Password:
[INFO    ] Configuration file path: /Users/sebastiengoasguen/.saltcloud/cloud
[INFO    ] salt-cloud starting
cloudstack
  Extra-large
    bandwidth: 0
    disk: 0
    id: 350dc5ea-fe6d-42ba-b6c0-efb8b75617ad
    price: 0
    ram: 16384
    uuid: edb4cd4ae14bbf152d451b30c4b417ab095a5bfe
...snip...

salt-cloud --list-images exoscale
[INFO    ] Configuration file path: /Users/sebastiengoasguen/.saltcloud/cloud
[INFO    ] salt-cloud starting
cloudstack
  CentOS 5.5(64-bit) no GUI (KVM)
    extra:
      format: QCOW2
      hypervisor: KVM
      os: CentOS 5.5 (64-bit)
    id: 77d32782-6866-43d4-9524-6fe346594d09
    uuid: 2fb9ae4b32b4ea5eafd3341166cf948cfe24aa7f
...snip...
And finally of course:
salt-cloud -p ubuntu-exoscale lll
This will start the instance in the cloud, set up key-based SSH access to it, and start bootstrapping SaltStack on it. Tomorrow I will check out their actual configuration management scheme and launch a couple of minions, hoping to create a MongoDB cluster or even a Hadoop cluster. Before you ask, lll is the name you give to an instance when you get tired.
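Once a master is in the picture and the minion has been bootstrapped against it, a quick sanity check looks something like this (the key acceptance step is only needed if auto-accept is turned off):

salt-key -a lll        # accept the new minion's key on the master
salt 'lll' test.ping   # make sure the minion answers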

The caveat to all of this is that my patches to libcloud need to be accepted before the CloudStack SaltStack driver is usable. Enjoy! Signing off. Busy day: talk, booth, hack.