
Wednesday, September 16, 2015

Running Kubernetes on a Raspberry PI

Running the Docker engine on a Raspberry Pi is a breeze thanks to the Docker pirates from Hypriot: just download the image, flash it on your Pi, and you are off to the races. I am not going to cover the installation process; it is well documented on the Hypriot website and I also wrote a recipe in the Docker cookbook. Roughly, download the .img file, dd it to your SD card, then boot your Pi.
Having Docker on the Raspberry Pi offers tons of possibilities for hobbyists and home devices. It also triggered my interest because Kubernetes, one of the Docker orchestrators, can run standalone on a single node using Docker containers. I wrote a post several months ago about doing it with docker-compose. So I decided to give it a try last weekend: running Kubernetes on a Pi using the Hypriot image that ships with the Docker engine.

Getting etcd to run

The first issue is that Kubernetes currently relies on etcd, which you need to run on ARM. I decided to get the etcd source directly on the Pi and update the Dockerfile to build it there. The upstream etcd Dockerfile uses a Golang ONBUILD image, which was causing me grief, so I copied the content of the ONBUILD image and created a new Dockerfile based on hypriot/rpi-golang to build it directly. You can see the Dockerfile. With that you have a Docker container running etcd on ARM.
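Roughly, the build went something like this (a sketch; the image name is illustrative and not necessarily the exact one from my Dockerfile):

# clone etcd on the Pi and build it with a Dockerfile based on hypriot/rpi-golang
$ git clone https://github.com/coreos/etcd.git && cd etcd
$ docker build -t etcd .
# quick sanity check that the ARM binary actually runs
$ docker run -d --name etcd etcd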

Getting the Hyperkube to run on ARM

Now, I needed the hyperkube binary for ARM. Hyperkube is a single binary that can start all the Kubernetes components. Thankfully there are some binaries already available for ARM, which was handy because I struggled to compile Kubernetes directly on the Pi.
With that hyperkube binary in hand, I built an image based on resin/rpi-raspbian:wheezy. Quite straightforward:
FROM resin/rpi-raspbian:wheezy

RUN apt-get update && apt-get -y -q install iptables ca-certificates

COPY hyperkube /hyperkube
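Building the image is then a one-liner, assuming the ARM hyperkube binary sits next to the Dockerfile (the image name is the one reused in the manifest later):

$ chmod +x hyperkube
$ docker build -t hyperkube .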

The Kubelet systemd unit

The Kubernetes agent running on every node of a cluster is called the Kubelet. The Kubelet is in charge of making sure that all the containers supposed to run on the node actually do run. It can also be pointed at a manifest to start specific containers at boot. There is a good post from Kelsey Hightower about it. Since the Hypriot image uses systemd, I took the systemd unit that creates a Kubelet service directly from Kelsey's post:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kubelet  \
--api-servers=http://127.0.0.1:8080 \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
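Installing and enabling the unit is standard systemd fare (the file name is arbitrary):

$ cp kubelet.service /etc/systemd/system/kubelet.service
$ systemctl daemon-reload
$ systemctl enable kubelet
$ systemctl start kubelet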
The kubelet binary is downloaded directly from the same location as hyperkube. The manifest is a Kubernetes Pod definition that starts all the containers needed for a Kubernetes controller: etcd, the API server, the scheduler, the controller manager and the service proxy, all using the hyperkube image built above.
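For reference, a trimmed-down version of such a manifest looks roughly like this (the image names match the ones built above; the flags and the etcd binary path are illustrative, and the controller manager, scheduler and proxy containers follow the same pattern as the API server):

$ cat <<'EOF' > /etc/kubernetes/manifests/master.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: etcd
    command: ["/usr/local/bin/etcd", "--addr=127.0.0.1:4001", "--bind-addr=0.0.0.0:4001", "--data-dir=/var/etcd/data"]
  - name: kube-apiserver
    image: hyperkube
    command: ["/hyperkube", "apiserver", "--insecure-bind-address=0.0.0.0", "--etcd-servers=http://127.0.0.1:4001", "--service-cluster-ip-range=10.0.0.0/24", "--allow-privileged=true", "--v=2"]
  # kube-controller-manager, kube-scheduler and kube-proxy containers are
  # declared the same way, each running /hyperkube with the corresponding
  # sub-command and pointing at the local API server.
EOF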

Now the dirty hack

Kubernetes does something interesting. All containers in a Pod actually share the same IP address. This is done by running an extra container that does nothing: the other containers in the Pod simply share its network namespace. It is called the pause container. I did not find a way to specify a different image for the pause container in Kubernetes; it seems hard coded to gcr.io/google_containers/pause:0.8.0, which of course is built for x86_64.
So the dirty trick consisted of taking the pause Go code from the Kubernetes source, compiling it on the Pi using hypriot/rpi-golang, sticking the binary in a scratch image, and tagging the result locally as gcr.io/google_containers/pause:0.8.0 to avoid the download of the real image, which only runs on x86_64. Yeah...right...I told you it was dirty, but that was the quickest way I could think of.
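In shell form, the hack boils down to something like this (paths and names are illustrative; the important part is the local tag that shadows the official image so the Kubelet never tries to pull it):

# compile the pause binary (copied from the Kubernetes source tree) with the hypriot Go image
$ docker run --rm -v $(pwd):/build hypriot/rpi-golang go build -o /build/pause /build/pause.go
# minimal Dockerfile: FROM scratch, ADD pause /pause, ENTRYPOINT ["/pause"]
$ docker build -t pause-arm .
# tag it locally so it masquerades as the official pause image
$ docker tag pause-arm gcr.io/google_containers/pause:0.8.0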

Putting it all together

Now that you have all the images ready directly on the Pi, plus a Kubelet service, you can start it. The containers will be created and you will have a single node Kubernetes cluster on the Pi. All that is left is to use the kubectl CLI to talk to it. You can download an ARM version of kubectl from the official Kubernetes releases.
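Something along these lines, where <kubernetes-release-url> is a placeholder for whatever the official release page points you to:

$ curl -L -o kubectl <kubernetes-release-url>/bin/linux/arm/kubectl
$ chmod +x kubectl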
HypriotOS: root@black-pearl in ~
$ docker images
REPOSITORY                       TAG         
hyperkube                        latest
gcr.io/google_containers/pause   0.8.0
etcd                             latest
resin/rpi-raspbian               wheezy   
hypriot/rpi-golang               latest 

HypriotOS: root@black-pearl in ~
$ ./kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
kube-controller-black-pearl   5/5       Running   5          5m
HypriotOS: root@black-pearl in ~
$ ./kubectl get nodes
NAME          LABELS                               STATUS
black-pearl   kubernetes.io/hostname=black-pearl   Ready

Get it

Everything is on GitHub at https://github.com/skippbox/k8s4pi including a horrible bash script that does the entire build :)

Tuesday, September 30, 2014

On Docker and Kubernetes on CloudStack


Docker has pushed containers to a new level, making it extremely easy to package and deploy applications within containers. Containers are not new; Solaris containers and OpenVZ are among several container technologies going back to 2005. But Docker has caught on quickly, as mentioned by @adrianco. The startup speed is not surprising for containers, and the portability is reminiscent of the Java goal to "write once, run anywhere". What is truly interesting with Docker is the availability of Docker registries (e.g. Docker Hub) to share containers, and the potential to change application deployment workflows.

Rightly so, we should soon see IT move to Docker-based application deployment, where developers package their applications and make them available to Ops, very much like we have been doing with war files. Embracing a DevOps mindset/culture should be easier with Docker. Where it becomes truly interesting is when we start thinking about an infrastructure whose sole purpose is to run containers: we can envision a bare operating system with the single goal of managing Docker based services. This should make the sysadmin's life easier.

The role of the Cloud with Docker

While the buzz around Docker has been truly amazing and a community has grown overnight, some may think that this signals the end of the cloud. I think that is far from the truth: Docker may indeed become the killer app of the cloud.

An IaaS layer is what it is: an infrastructure orchestration layer. Docker and its ecosystem will become the application orchestration layer.

The question then becomes: how do I run Docker in the cloud? And there is a straightforward answer: just install Docker in your cloud templates. Whether on AWS, GCE, Azure or your private cloud, you can prepare Linux based templates that provide Docker support. If you are aiming for a bare operating system whose sole purpose is to run Docker, then the new CoreOS Linux distribution might be your best pick. CoreOS provides rolling upgrades of the kernel, systemd based services, a distributed key value store (i.e. etcd) and a distributed service scheduling system (i.e. fleet).

exoscale, an Apache CloudStack based public cloud, recently announced the availability of CoreOS templates.

Like exoscale, any cloud provider, be it public or private, can make CoreOS templates available, providing Docker within the cloud instantly.

Docker application orchestration, here comes Kubernetes

Running one container is easy, but running multiple coordinated containers across a cluster of machines is not yet a solved problem. If you think of an application as a set of containers, starting these on multiple hosts, replicating some of them, accessing distributed databases, providing proxy services and fault tolerance is the true challenge.

However, Google came to the rescue and announced Kubernetes, a system that solves these issues and makes managing scalable, fault-tolerant container based apps doable :)

Kubernetes is of course available on Google's public cloud GCE, but also on Rackspace, Digital Ocean and Azure. It can also be deployed easily on CoreOS.

Kubernetes on CloudStack

Kubernetes is under heavy development; you can test it with Vagrant. Under the hood, aside from the Go code :), most of the deployment solutions use SaltStack recipes, but if you are a Chef, Puppet or Ansible user I am sure we will see recipes for those configuration management solutions soon.

But you surely got the same idea that I had :) Since Kubernetes can be deployed on CoreOS, and CoreOS is available on exoscale, we are just a breath away from running Kubernetes on CloudStack.

It took a little more than a breath, but you can clone kubernetes-exoscale and you will be up and running in 10 minutes, with a 3 node etcd cluster and a 5 node Kubernetes cluster running a Flannel overlay.

CloudStack supports EC2-like userdata, and the CoreOS templates on exoscale support cloud-init, so passing cloud-config files at instance deployment was straightforward. I used libcloud to provision all the nodes, created proper security groups to let the Kubernetes nodes access the etcd cluster and talk to each other, and in particular opened a UDP port to build the networking overlay with Flannel. Fleet is used to launch all the Kubernetes services. Try it out.
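To give a flavor of it, the userdata passed to an etcd node is a cloud-config document along these lines (a sketch; the discovery token is a placeholder and the exact units differ in the repository):

$ cat <<'EOF' > etcd-node.yaml
#cloud-config
coreos:
  etcd:
    # placeholder token, generate your own at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
EOF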

Conclusions

Docker is extremely easy to use, and taking advantage of a cloud you can get started quickly. CoreOS will put your Docker work on steroids with the ability to start apps as systemd services over a distributed cluster. Kubernetes will up that by giving you replication of your containers and proxy services for free (time).

We might see pure Docker based public clouds (e.g. think a Mesos cluster with a Kubernetes framework). These will look much more like PaaS, especially if they integrate a Docker registry and a way to automatically build Docker images (e.g. think a continuous deployment pipeline).

But a "true" IaaS is actually very complimentary, providing multi-tenancy, higher security as well as multiple OS templates. So treating docker as a standard cloud workload is not a bad idea. With three dominant public clouds in AWS, GCE and Azure and a multitude of "regional" ones like exoscale it appears that building a virtualization based cloud is a solved problem (at least with Apache CloudStack :)).

So instead of talking about cloudifying your applications, maybe you should start thinking about Dockerizing them and letting them loose on CloudStack.