Tuesday, June 09, 2015

Building an S3 object store with Docker, Cassandra and Kubernetes

Docker makes building distributed applications relatively painless. At the very least, deploying existing distributed systems/frameworks is made easier, since you only need to launch containers. Docker Hub is full of MongoDB, Elasticsearch, Cassandra images, etc. Assuming that you like what is inside those images, you can just grab them, run a container, and you are done.
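For example, assuming you like the official MongoDB image, a single database container really is a one-liner:

$ docker run -d --name mongo mongo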

With a cluster manager/container orchestration system like Kubernetes, running clustered versions of these systems, where you need to operate multiple containers across multiple nodes, is also made dead simple. Swear to God, it is!

Just check the list of examples and you will find everything that is needed to run a Redis, a Spark, a Storm, a Hazelcast, and even a GlusterFS cluster. Discovery of all the nodes can be a challenge, but with things like etcd, Consul, and registrator, service discovery has never been easier.

What caught my eye in the list of Kubernetes examples is the ability to run an Apache Cassandra cluster. Yes, a Cassandra cluster based on Docker containers. It caught my eye especially because my buddies at exoscale have written an S3 compatible object store that uses Cassandra for storage. It's called Pithos and, for those interested, it is written in Clojure.

So I wondered: let's run Cassandra in Kubernetes, then let's create a Docker image for Pithos and run it in Kubernetes as well. That should give me an S3 compatible object store, built using Docker containers.

To start we need a Kubernetes cluster. The easiest is to use Google Container Engine, but keep an eye on Kubestack, a Terraform plan to create one; it could easily be adapted for different cloud providers. If you are new to Kubernetes, check my previous post, or get the Docker cookbook in early release; I just pushed a chapter on Kubernetes. Whatever technique you use, before proceeding you should be able to use the kubectl client and list the nodes in your cluster. For example:

$ ./kubectl get nodes
NAME                              LABELS                                                   STATUS
k8s-cookbook-935a6530-node-hsdb   kubernetes.io/hostname=k8s-cookbook-935a6530-node-hsdb   Ready
k8s-cookbook-935a6530-node-mukh   kubernetes.io/hostname=k8s-cookbook-935a6530-node-mukh   Ready
k8s-cookbook-935a6530-node-t9p8   kubernetes.io/hostname=k8s-cookbook-935a6530-node-t9p8   Ready
k8s-cookbook-935a6530-node-ugp4   kubernetes.io/hostname=k8s-cookbook-935a6530-node-ugp4   Ready

Running Cassandra in Kubernetes

You can use the Kubernetes example straight up, or clone my own repo, where you can explore all the pod, replication controller, and service definitions:

$ git clone https://github.com/how2dock/dockbook.git
$ cd dockbook/ch05/examples

Then launch the Cassandra replication controller, increase the number of replicas and launch the service:

$ kubectl create -f ./cassandra/cassandra-controller.yaml
$ kubectl scale --replicas=4 rc cassandra
$ kubectl create -f ./cassandra/cassandra-service.yaml
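For reference, here is a trimmed-down sketch of what the Cassandra replication controller manifest contains; I am writing the API version, image tag, and ports from memory, so treat the version in the repo as authoritative:

apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
spec:
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
        - name: cassandra
          image: gcr.io/google_containers/cassandra:v5
          ports:
            - containerPort: 9042    # CQL clients
            - containerPort: 7000    # intra-node communication, used for discovery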

Once the image is downloaded you will have your Kubernetes pods in a running state. Note that the image currently used comes from the Google registry. That's because this image contains a discovery class specified in the Cassandra configuration. You could use the Cassandra image from Docker Hub, but you would have to put that Java class in there to allow all Cassandra nodes to discover each other. As I said, almost painless!

$ kubectl get pods --selector="name=cassandra"

Once Cassandra discovers all nodes and rebalances the database storage, you will get something like:

$ ./kubectl exec cassandra-5f709 -c cassandra nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.16.2.4  84.32 KB   256     46.0%             8a0c8663-074f-4987-b5db-8b5ff10d9774  rack1
UN  10.16.1.3  67.81 KB   256     53.7%             784c8f4d-7722-4d16-9fc4-3fee0569ec29  rack1
UN  10.16.0.3  51.37 KB   256     49.7%             2f551b3e-9314-4f12-affc-673409e0d434  rack1
UN  10.16.3.3  65.67 KB   256     50.6%             a746b8b3-984f-4b1e-91e0-cc0ea917773b  rack1

Note that you can also access the logs of a container in a pod with kubectl logs, which is very handy.
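For example, to look at the logs of the Cassandra container in one of the pods listed above:

$ kubectl logs cassandra-5f709 cassandra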

Launching Pithos S3 object store

Pithos is a daemon which "provides an S3 compatible frontend to a cassandra cluster". So if we run Pithos in our Kubernetes cluster and point it to our running Cassandra cluster, we can expose an S3 compatible interface.

To that end, I created a Docker image for Pithos, runseb/pithos, on Docker Hub. It's an automated build, so you can check out the Dockerfile there. The image contains the default configuration file; you will want to change it to edit your access keys and bucket store definitions, as in the sketch below. I launch Pithos as a Kubernetes replication controller and expose a service with an external load balancer created on Google Compute Engine. The Cassandra service that we launched earlier allows Pithos to find Cassandra using DNS resolution.
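As an illustration, the keystore section of the Pithos configuration maps an access key to a tenant and a secret. The exact schema below is from memory, so check the pithos.yaml shipped in the image:

keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:
      tenant: test@example.com
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

To bootstrap Pithos, we need to run a non-restarting pod which installs the Pithos schema in Cassandra. Let's do it: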

$ kubectl create -f ./pithos/pithos-bootstrap.yaml

Wait for the bootstrap to happen, i.e., for the pod to get into succeeded state. Then launch the replication controller. For now we will launch only one replica. Using an rc makes it easy to attach a service and expose it via a public IP address.

$ kubectl create -f ./pithos/pithos-rc.yaml
$ kubectl create -f ./pithos/spithos.yaml
$ ./kubectl get services --selector="name=pithos"
NAME      LABELS        SELECTOR      IP(S)            PORT(S)
pithos    name=pithos   name=pithos   10.19.251.29     8080/TCP
                                      104.197.27.250 

Since Pithos will serve on port 8080 by default, make sure that you open the firewall for the public IP of the load balancer.
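On GCE this is a one-liner with gcloud; the rule name here is arbitrary:

$ gcloud compute firewall-rules create pithos --allow tcp:8080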

Use an S3 client

You are now ready to use your S3 object store, offered by Pithos, backed by Cassandra, running on Kubernetes using Docker. Wow... a mouthful!!!

Install s3cmd and create a configuration file like so:

$ cat ~/.s3cfg
[default]
access_key = AKIAIOSFODNN7EXAMPLE
secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
check_ssl_certificate = False
enable_multipart = True
encoding = UTF-8
encrypt = False
host_base = s3.example.com
host_bucket = %(bucket)s.s3.example.com
proxy_host = 104.197.27.250 
proxy_port = 8080
server_side_encryption = True
signature_v2 = True
use_https = False
verbosity = WARNING

Note that we use an unencrypted proxy (the load balancer IP created by the Pithos Kubernetes service; don't forget to change it). The access and secret keys are the defaults stored in the Dockerfile.

With this configuration in place, you are ready to use s3cmd:

$ s3cmd mb s3://foobar
Bucket 's3://foobar/' created
$ s3cmd ls
2015-06-09 11:19  s3://foobar

If you wanted to use Boto, this would work as well:

#!/usr/bin/env python

from boto.s3.connection import S3Connection
from boto.s3.connection import OrdinaryCallingFormat

# Default Pithos access and secret keys from the Dockerfile
apikey = 'AKIAIOSFODNN7EXAMPLE'
secretkey = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

# Use path-style requests instead of virtual-hosted-style buckets
cf = OrdinaryCallingFormat()

# Point Boto at the Pithos load balancer instead of AWS
conn = S3Connection(aws_access_key_id=apikey,
                    aws_secret_access_key=secretkey,
                    is_secure=False,
                    host='104.197.27.250',
                    port=8080,
                    calling_format=cf)

conn.create_bucket('foobar')

And that's it. All of these steps may sound like a lot, but honestly, it has never been this easy to run an S3 object store. Docker and Kubernetes truly make running distributed applications a breeze.

Monday, May 04, 2015

Running VMs in Docker Containers via Kubernetes

A couple of weeks ago, Google finally published a technical paper describing Borg, the cluster management system that they built over the last ten years or more and that runs all Google services.

There are several interesting concepts in the paper, one of them of course being that they run everything in containers. Whether they use Docker or not is unknown; some parts of their workloads probably still use LMCTFY (Let Me Contain That For You). What struck me is that they say they do not use full virtualization. It makes sense in terms of timeline, considering that Borg started before the advent of hardware virtualization. However, their Google Compute Engine offers VMs as a service, so it is fair to wonder how they are running their VMs. This reminded me of John Wilkes' talk at MesosCon 2014. He discussed scheduling in Borg (without mentioning it) and, 23 minutes into his talk, mentions that they run VMs in containers.

Running VMs in containers does make sense when you think in terms of a cluster management system that deals with multiple types of workloads. You treat your IaaS (e.g., GCE) as a workload and contain it, so that you can pack all your servers and maximize utilization. It also allows you to run some workloads on bare metal for performance.

Therefore let's assume that GCE is just another workload for Google and that it runs through Borg.

Borg laid out the principles for Kubernetes, the cluster management system designed for containerized workloads and open sourced by Google in June 2014. So you are left asking:

"How can we run VMs in Kubernetes ?"

This is where Rancher comes in to help us prototype a little some-some. Two weeks ago, Rancher announced RancherVM, basically a startup script that creates KVM VMs inside Docker containers (not really doing it justice calling it a script...). It is available on GitHub and super easy to try. I will spare you the details and tell you to go to GitHub instead. The result is that you can build a Docker image that contains a KVM qcow image, and running the container starts the VM with the proper networking.

Privilege gotcha

With a Docker image now handy to run a KVM instance in it, using Kubernetes to start this container is straightforward: create a pod that launches this container. The only caveat is that the Docker host(s) that form your Kubernetes cluster need to have KVM installed, and your containers will need some level of privileges to access the KVM devices. While this can be tweaked with docker run parameters like --device and --cap-add, you can brute force it in a very insecure manner with --privileged. However, Kubernetes does not accept privileged containers by default (rightfully so). Therefore you need to start your Kubernetes cluster (i.e., the API server and the kubelet) with the --allow_privileged=true option.
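Outside of Kubernetes, the difference looks something like the two invocations below. This is illustrative only; check the RancherVM README for the exact devices and volumes the image expects:

$ docker run -it --device /dev/kvm --device /dev/net/tun --cap-add NET_ADMIN rancher/vm-rancheros
$ docker run -it --privileged rancher/vm-rancheros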

If you are new to Kubernetes, check out my previous post where I show you how to start a one-node Kubernetes "cluster" with Docker Compose. The only modifications from that post are that I am running this on a Docker host that also has KVM installed, that the compose manifest specifies --allow_privileged=true in the kubelet startup command, and that I modify /etc/kubernetes/manifests/master.json to specify a volume. This allows me not to tamper with the images from Google.
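The kubelet entry of the compose manifest then looks roughly like the fragment below; the image tag and the other flags are from the previous post, written from memory, with --allow_privileged being the addition:

kubelet:
  image: gcr.io/google_containers/hyperkube:v0.14.1
  privileged: true
  command: /hyperkube kubelet --address=0.0.0.0 --api_servers=http://localhost:8080 --config=/etc/kubernetes/manifests --allow_privileged=true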

Let's try it out

Build your RancherVM images:

$ git clone https://github.com/rancherio/vm.git
$ cd vm
$ make all

You will now have several RancherVM images:

$ sudo docker images
REPOSITORY                           TAG                 ...
rancher/vm-android                   4.4                 ...
rancher/vm-android                   latest              ...
rancher/ranchervm                    0.0.1               ...
rancher/ranchervm                    latest              ...
rancher/vm-centos                    7.1                 ...
rancher/vm-centos                    latest              ...
rancher/vm-ubuntu                    14.04               ...
rancher/vm-ubuntu                    latest              ...
rancher/vm-rancheros                 0.3.0               ...
rancher/vm-rancheros                 latest              ...
rancher/vm-base                      0.0.1               ...
rancher/vm-base                      latest              ...

Starting one of those will give you access to a KVM instance running in the container.

I will skip the startup of the Kubernetes components; check my previous post. Once you have Kubernetes running you can list the pods (i.e., groups of containers/volumes). You will see that the Kubernetes master itself is running as a pod.

$ ./kubectl get pods
POD         IP        CONTAINER(S)         IMAGE(S)                                     ...
nginx-127             controller-manager   gcr.io/google_containers/hyperkube:v0.14.1   ...
                      apiserver            gcr.io/google_containers/hyperkube:v0.14.1                                             
                      scheduler            gcr.io/google_containers/hyperkube:v0.14.1

Now let's define a RancherVM as a Kubernetes pod. We do this in a YAML file:

apiVersion: v1beta2
kind: Pod
id: ranchervm
labels:
  name: vm
desiredState:
  manifest:
    version: v1beta2
    containers:
      - name: master
        image: rancher/vm-rancheros
        privileged: true
        volumeMounts:
          - name: ranchervm
            mountPath: /ranchervm
        env:
         - name: RANCHER_VM
           value: "true"
    volumes:
      - name: ranchervm
        source:
          hostDir: 
            path: /tmp/ranchervm

To create the pod, use the kubectl CLI:

$ ./kubectl create -f vm.yaml 
pods/ranchervm
$ ./kubectl get pods
POD         IP            CONTAINER(S)         IMAGE(S)                                     ....
nginx-127                 controller-manager   gcr.io/google_containers/hyperkube:v0.14.1   ....
                          apiserver            gcr.io/google_containers/hyperkube:v0.14.1                                             
                          scheduler            gcr.io/google_containers/hyperkube:v0.14.1                                             
ranchervm   172.17.0.10   master               rancher/vm-rancheros                         ....

The RancherVM image specified contains RancherOS. The container will start automatically, but of course the actual VM will take a couple more seconds to start. Once it's up, you can ping it and you can ssh to the VM instance.

$ ping -c 1 172.17.0.10
PING 172.17.0.10 (172.17.0.10) 56(84) bytes of data.
64 bytes from 172.17.0.10: icmp_seq=1 ttl=64 time=0.725 ms

$ ssh rancher@172.17.0.10 
...
[rancher@ranchervm ~]$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[rancher@ranchervm ~]$ sudo system-docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
229a22962a4d        console:latest      "/usr/sbin/entry.sh    2 minutes ago       Up 2 minutes                            console             
cfd06aa73192        userdocker:latest   "/usr/sbin/entry.sh    2 minutes ago       Up 2 minutes                            userdocker          
448e03b18f93        udev:latest         "/usr/sbin/entry.sh    2 minutes ago       Up 2 minutes                            udev                
ff929cddeda9        syslog:latest       "/usr/sbin/entry.sh    2 minutes ago       Up 2 minutes                            syslog              

Amazing! I can feel that you are just wondering what the heck is going on :)

You want to kill the VM? Just kill the pod:

$ ./kubectl delete pod ranchervm

Remember that a pod is not necessarily a single container; it can contain several containers as well as volumes.

Let's go a step further, and scale the number of VMs by using a replication controller.

Using a Replication Controller to scale the VM

Kubernetes is quite nice; it builds on years of experience with fault tolerance at Google and provides mechanisms for keeping your services up, scaling them, and rolling out new versions. The replication controller is a primitive for managing the scale of your services.

So say you would like to automatically increase or decrease the number of VMs running in your datacenter: start them with a replication controller. This is defined in a YAML manifest like so:

id: ranchervm
kind: ReplicationController
apiVersion: v1beta2
desiredState:
  replicas: 1
  replicaSelector:
    name: ranchervm
  podTemplate:
    desiredState:
      manifest:
        version: v1beta2
        id: vm 
        containers:
          - name: vm
            image: rancher/vm-rancheros
            privileged: true
            volumeMounts:
              - name: ranchervm
                mountPath: /ranchervm
            env:
              - name: RANCHER_VM
                value: "true"
        volumes:
          - name: ranchervm
            source:
              hostDir:
                path: /tmp/ranchervm
    labels:
      name: ranchervm
  

This manifest defines a pod template (the one that we created earlier) and sets a number of replicas. Here we start with one. To launch it, use the kubectl binary again:

$ ./kubectl create -f vmrc.yaml 
replicationControllers/ranchervm
$ ./kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)               SELECTOR         REPLICAS
ranchervm    vm             rancher/vm-rancheros   name=ranchervm   1

If you list the pods, you will see that your container is running and hence your VM will start shortly.

$ ./kubectl get pods
POD               IP            CONTAINER(S)         IMAGE(S)                                     ...
nginx-127                       controller-manager   gcr.io/google_containers/hyperkube:v0.14.1   ...
                                apiserver            gcr.io/google_containers/hyperkube:v0.14.1                                                    
                                scheduler            gcr.io/google_containers/hyperkube:v0.14.1                                                    
ranchervm-16ncs   172.17.0.11   vm                   rancher/vm-rancheros                         ...

Why is this awesome? Because you can scale easily:

$ ./kubectl resize --replicas=2 rc ranchervm
resized

And Boom, two VMs:

$ ./kubectl get pods -l name=ranchervm
POD               IP            CONTAINER(S)   IMAGE(S)               ...
ranchervm-16ncs   172.17.0.11   vm             rancher/vm-rancheros   ...
ranchervm-279fu   172.17.0.12   vm             rancher/vm-rancheros   ...

Now of course, this little test is done on one node, but if you had a real Kubernetes cluster, it would schedule these pods on the available nodes. From a networking standpoint, RancherVM can optionally provide DHCP service. That means that you could let Kubernetes assign the IP to the pod, and the VMs would be networked over the overlay in place.

Now imagine that we had security groups via an OVS switch on all nodes in the cluster... we could have multi-tenancy with network isolation and full VM isolation, while still being able to run workloads in "traditional" containers. This has some significant impact on the current IaaS space, and even on Mesos itself.

Your Cloud as a containerized distributed workload, anyone ???

For more recipes like these, check out the Docker cookbook.

Tuesday, September 30, 2014

On Docker and Kubernetes on CloudStack

Docker has pushed containers to a new level, making it extremely easy to package and deploy applications within containers. Containers are not new; Solaris containers and OpenVZ are among several container technologies going back to 2005. But Docker has caught on quickly, as mentioned by @adrianco. The startup speed is not surprising for containers, and the portability is reminiscent of the Java goal to "write once, run anywhere". What is truly interesting with Docker is the availability of Docker registries (e.g., Docker Hub) to share containers, and the potential to change application deployment workflows.

Rightly so, we should soon see IT move to Docker-based application deployment, where developers package their applications and make them available to Ops, very much like we have been using WAR files. Embracing a DevOps mindset/culture should be easier with Docker. Where it becomes truly interesting is when we start thinking about an infrastructure whose sole purpose is to run containers. We can envision a bare operating system with a single goal: to manage Docker-based services. This should make sysadmins' lives easier.

The role of the Cloud with Docker

While the buzz around Docker has been truly amazing and a community has grown overnight, some may think that this signals the end of the cloud. I think that is far from the truth, as Docker may indeed become the killer app of the cloud.

An IaaS layer is what it is: an infrastructure orchestration layer. Docker and its ecosystem will become the application orchestration layer.

The question then becomes: how do I run Docker in the cloud? And there is a straightforward answer: just install Docker in your cloud templates. Whether on AWS, GCE, Azure, or your private cloud, you can prepare Linux-based templates that provide Docker support. If you are aiming for the bare operating system whose sole purpose is to run Docker, then the new CoreOS Linux distribution might be your best pick. CoreOS provides rolling upgrades of the kernel, systemd based services, a distributed key-value store (i.e., etcd), and a distributed service scheduling system (i.e., fleet).
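To give you a taste of fleet, a minimal unit file that runs a container looks like the sketch below, loosely following the CoreOS documentation of the time; hello.service and the busybox loop are just placeholders:

[Unit]
Description=Hello Docker
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo hello; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

You would submit it with fleetctl start hello.service, and fleet would schedule it on one of the machines in the cluster.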

exoscale, an Apache CloudStack based public cloud, recently announced the availability of CoreOS templates.

Like exoscale, any cloud provider, be it public or private, can make CoreOS templates available, providing Docker within the cloud instantly.

Docker application orchestration, here comes Kubernetes

Running one container is easy, but running multiple coordinated containers across a cluster of machines is not yet a solved problem. If you think of an application as a set of containers, then starting these on multiple hosts, replicating some of them, accessing distributed databases, providing proxy services, and ensuring fault tolerance is the true challenge.

However, Google came to the rescue and announced Kubernetes, a system that solves these issues and makes managing scalable, fault-tolerant, container based apps doable :)

Kubernetes is of course available on Google's public cloud, GCE, but also on Rackspace, DigitalOcean, and Azure. It can also easily be deployed on CoreOS.

Kubernetes on CloudStack

Kubernetes is under heavy development; you can test it with Vagrant. Under the hood, aside from the Go code :), most of the deployment solutions use SaltStack recipes, but if you are a Chef, Puppet, or Ansible user, I am sure we will see recipes for those configuration management solutions soon.

But you surely got the same idea that I had :) Since Kubernetes can be deployed on CoreOS and CoreOS is available on exoscale, we are just a breath away from running Kubernetes on CloudStack.

It took a little more than a breath, but you can clone kubernetes-exoscale and you will get running in 10 minutes, with a 3-node etcd cluster and a 5-node Kubernetes cluster running a Flannel overlay.

CloudStack supports EC2-like userdata, and the CoreOS templates on exoscale support cloud-init, hence passing some cloud-config files to the instance deployment was straightforward. I used libcloud to provision all the nodes and created the proper security groups to let the Kubernetes nodes access the etcd cluster and talk to each other, especially to open a UDP port to build the networking overlay with Flannel. Fleet is used to launch all the Kubernetes services. Try it out.
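The provisioning boils down to a few libcloud calls per node. Here is a minimal sketch, assuming exoscale API keys, an existing kubernetes security group, and a cloud-config file named node.yaml; these names are illustrative, not the exact ones from the repo:

#!/usr/bin/env python

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# exoscale is a CloudStack based cloud with a native libcloud driver
cls = get_driver(Provider.EXOSCALE)
driver = cls('your-api-key', 'your-secret-key')

# pick a compute offering and a CoreOS template
size = [s for s in driver.list_sizes() if s.name == 'Medium'][0]
image = [i for i in driver.list_images() if 'CoreOS' in i.name][0]

# the cloud-config file is passed as EC2-style userdata and
# picked up by cloud-init at boot
userdata = open('node.yaml').read()

node = driver.create_node(name='k8s-node-1',
                          size=size,
                          image=image,
                          ex_userdata=userdata,
                          ex_security_groups=['kubernetes'])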

Conclusions.

Docker is extremely easy to use, and by taking advantage of a cloud you can get started quickly. CoreOS will put your Docker work on steroids with the ability to start apps as systemd services over a distributed cluster. Kubernetes will up that by giving you replication of your containers and proxy services for free (time).

We might see pure Docker-based public clouds (e.g., think a Mesos cluster with a Kubernetes framework). These will look much more like PaaS, especially if they integrate a Docker registry and a way to automatically build Docker images (e.g., think a continuous deployment pipeline).

But a "true" IaaS is actually very complimentary, providing multi-tenancy, higher security as well as multiple OS templates. So treating docker as a standard cloud workload is not a bad idea. With three dominant public clouds in AWS, GCE and Azure and a multitude of "regional" ones like exoscale it appears that building a virtualization based cloud is a solved problem (at least with Apache CloudStack :)).

So instead of talking about cloudifying your application, maybe you should start thinking about Dockerizing your applications and letting them loose on CloudStack.

Friday, July 11, 2014

GCE Interface to CloudStack

Gstack, a GCE compatible interface to CloudStack

Google Compute Engine (GCE) is the Google public cloud. In December 2013, Google announced the General Availability (GA) of GCE. With AWS and Microsoft Azure, it is one of the three leading public clouds in the market. Apache CloudStack now has a brand new GCE compatible interface (Gstack) that lets users use the GCE clients (i.e., gcloud and gcutil) to access their CloudStack cloud. This has been made possible through the Google Summer of Code program.

Last summer, Ian Duffy, a student from Dublin City University, participated in GSoC through the Apache Software Foundation (ASF) and worked on an LDAP plugin to CloudStack. He did such a great job that he finished early and was made an Apache CloudStack committer. Since he was done with his original GSoC project, I encouraged him to take on a new one :) and he brought in a friend for the ride: Darren Brogan.

They remained engaged with the CloudStack community, and as a third-year project they worked on an Amazon EC2 interface to CloudStack using what they learned from the GCE interface. They got an A :). Since they loved it so much, Darren applied to the GSoC program and proposed to go back to Gstack, improve it, extend the unit tests, and make it compatible with the GCE v1 API.

Technically, Gstack is a Python Flask application that provides a REST API compatible with the GCE API and forwards the requests to the corresponding CloudStack API. The source is available on GitHub and the binary is downloadable via PyPI. Let me show you how to use it.
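To make that concrete, here is a purely illustrative sketch of the translation idea; this is not Gstack's actual source, and it omits OAuth handling and CloudStack request signing:

#!/usr/bin/env python

from flask import Flask, jsonify
import requests

app = Flask(__name__)
CLOUDSTACK_ENDPOINT = 'https://api.exoscale.ch/compute'

@app.route('/compute/v1/projects/<project>/zones')
def list_zones(project):
    # translate the GCE call into the equivalent CloudStack API call
    r = requests.get(CLOUDSTACK_ENDPOINT,
                     params={'command': 'listZones', 'response': 'json'})
    zones = r.json()['listzonesresponse'].get('zone', [])
    # reshape the CloudStack response into a GCE style resource list
    return jsonify(kind='compute#zoneList',
                   items=[{'name': z['name'], 'status': 'UP'} for z in zones])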

Installation and Configuration of Gstack

You can grab the Gstack binary package from PyPI using pip with a single command:

pip install gstack

Or, if you plan to explore the source and work on it, you can clone the repository and install it by hand. Pull requests are of course welcome.

git clone https://github.com/NOPping/gstack.git
sudo python ./setup.py install

Both of these installation methods will install a gstack and a gstack-configure binary in your path. Before running Gstack you must configure it. To do so, run:

gstack-configure

And enter your configuration information when prompted. You will need to specify the host and port you want gstack to run on, as well as the CloudStack endpoint that you want gstack to forward the requests to. In the example below we use the exoscale cloud:

$ gstack-configure
gstack bind address [0.0.0.0]: localhost
gstack bind port [5000]: 
Cloudstack host [localhost]: api.exoscale.ch
Cloudstack port [8080]: 443
Cloudstack protocol [http]: https
Cloudstack path [/client/api]: /compute

The information will be stored in a configuration file available at ~/.gstack/gstack.conf:

$ cat ~/.gstack/gstack.conf 
PATH = 'compute/v1/projects/'
GSTACK_BIND_ADDRESS = 'localhost'
GSTACK_PORT = '5000'
CLOUDSTACK_HOST = 'api.exoscale.ch'
CLOUDSTACK_PORT = '443'
CLOUDSTACK_PROTOCOL = 'https'
CLOUDSTACK_PATH = '/compute'

You are now ready to start Gstack in the foreground with:

gstack

That's all there is to running Gstack. To be able to use it as if you were talking to GCE, however, you need to use gcutil and configure it a bit.

Using gcutil with Gstack

The current version of Gstack only works with the stand-alone version of gcutil; do not use the version of gcutil bundled in the Google Cloud SDK. Instead, install the 0.14.2 version of gcutil. Gstack comes with a self-signed certificate for the local endpoint, gstack/data/server.crt; copy the certificate to the gcutil certificates file, gcutil/lib/httplib2/httplib2/cacerts.txt. A bit dirty, I know, but that's a work in progress.
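Something along these lines, appending so that the existing CA bundle is preserved (adjust the paths to wherever you unpacked gcutil):

$ cat gstack/data/server.crt >> gcutil/lib/httplib2/httplib2/cacerts.txt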

At this stage, your CloudStack API key and secret key need to be entered in the gcutil auth_helper.py file at gcutil/lib/google_compute_engine/gcutil/auth_helper.py.

Again, not ideal, but hopefully gcutil or the Cloud SDK will soon be able to configure those without touching the source. Darren and Ian opened a feature request with Google to pass the client_id and client_secret as options to gcutil; hopefully a future release of gcutil will allow us to do so.

Now, create a cached parameters file for gcutil, assuming you are running gstack on your local machine using the defaults that were suggested during the configuration phase. Modify ~/.gcutil_params with the following:

--auth_local_webserver
--auth_host_port=9999
--dump_request_response
--authorization_uri_base=https://localhost:5000/oauth2
--ssh_user=root
--fetch_discovery
--auth_host_name=localhost
--api_host=https://localhost:5000/

Warning: make sure to set the --auth_host_name variable to the same value as GSTACK_BIND_ADDRESS in your ~/.gstack/gstack.conf file. Otherwise you will see certificate errors.

With this setup complete, gcutil will issue requests to the local Flask application, get an OAuth token, issue requests to your CloudStack endpoint, and return the response in a GCE compatible format.

Example with exoscale.

You can now start issuing standard gcutil commands. For illustration purposes we use exoscale. Since there are several semantic differences, you will notice that for the project we use the account information from CloudStack; hence we pass our email address as the project value.

Let's start by listing the availability zones:

$ gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com listzones
+----------+--------+------------------+
| name     | status | next-maintenance |
+----------+--------+------------------+
| ch-gva-2 | UP     | None scheduled   |
+----------+--------+------------------+

Let's list the available images, and then the machine types, or in CloudStack terminology, the compute service offerings:

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com listimages
$ gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com listmachinetypes
+-------------+----------+------+-----------+-------------+
| name        | zone     | cpus | memory-mb | deprecation |
+-------------+----------+------+-----------+-------------+
| Micro       | ch-gva-2 |    1 |       512 |             |
+-------------+----------+------+-----------+-------------+
| Tiny        | ch-gva-2 |    1 |      1024 |             |
+-------------+----------+------+-----------+-------------+
| Small       | ch-gva-2 |    2 |      2048 |             |
+-------------+----------+------+-----------+-------------+
| Medium      | ch-gva-2 |    2 |      4096 |             |
+-------------+----------+------+-----------+-------------+
| Large       | ch-gva-2 |    4 |      8192 |             |
+-------------+----------+------+-----------+-------------+
| Extra-large | ch-gva-2 |    4 |     16384 |             |
+-------------+----------+------+-----------+-------------+
| Huge        | ch-gva-2 |    8 |     32184 |             |
+-------------+----------+------+-----------+-------------+

You can also list firewalls, which gstack maps to CloudStack security groups. To create a security group, use the firewall commands:

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com addfirewall ssh --allowed=tcp:22
$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com getfirewall ssh

To start an instance you can follow the interactive prompt given by gcutil. You will need to pass the --permit_root_ssh flag, another one of those semantic and access configuration differences that needs to be ironed out. The interactive prompt will let you choose the machine type and the image that you want, and it will then start the instance:

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com addinstance foobar
Selecting the only available zone: CH-GV2
1: Extra-large  Extra-large 16384mb 4cpu
2: Huge Huge 32184mb 8cpu
3: Large    Large 8192mb 4cpu
4: Medium   Medium 4096mb 2cpu
5: Micro    Micro 512mb 1cpu
6: Small    Small 2048mb 2cpu
7: Tiny Tiny 1024mb 1cpu
7
1: CentOS 5.5(64-bit) no GUI (KVM)
2: Linux CentOS 6.4 64-bit
3: Linux CentOS 6.4 64-bit
4: Linux CentOS 6.4 64-bit
5: Linux CentOS 6.4 64-bit
6: Linux CentOS 6.4 64-bit
<...snip>
INFO: Waiting for insert of instance . Sleeping for 3s.
INFO: Waiting for insert of instance . Sleeping for 3s.

Table of resources:

+--------+--------------+--------------+----------+---------+
| name   | network-ip   | external-ip  | zone     | status  |
+--------+--------------+--------------+----------+---------+
| foobar | 185.1.2.3    | 185.1.2.3    | ch-gva-2 | RUNNING |
+--------+--------------+--------------+----------+---------+

Table of operations:

+--------------+--------+--------------------------+----------------+
| name         | status | insert-time              | operation-type |
+--------------+--------+--------------------------+----------------+
| e4180d83-31d0| DONE   | 2014-06-09T10:31:35+0200 | insert         |
+--------------+--------+--------------------------+----------------+

You can of course list instances (with listinstances) and delete them:

$ ./gcutil --cached_flags_file=~/.gcutil_params --project=runseb@gmail.com deleteinstance foobar
Delete instance foobar? [y/n]
y 
WARNING: Consider passing '--zone=CH-GV2' to avoid the unnecessary zone lookup which requires extra API calls.
INFO: Waiting for delete of instance . Sleeping for 3s.
+--------------+--------+--------------------------+----------------+
| name         | status | insert-time              | operation-type |
+--------------+--------+--------------------------+----------------+
| d421168c-4acd| DONE   | 2014-06-09T10:34:53+0200 | delete         |
+--------------+--------+--------------------------+----------------+

Gstack is still a work in progress, but it is now compatible with the GCE GA v1.0 API. The few differences in API semantics need to be investigated further, and additional API calls need to be supported. However, it provides a solid base to start working on hybrid solutions between the GCE public cloud and a CloudStack based private cloud.

GSoC has been terrific for Ian and Darren; they both learned how to work with an open source community and ultimately became part of it through their work. They learned tools like JIRA, git, and Review Board, and became less shy about working publicly on mailing lists. Their work on Gstack and EC2stack is certainly of high value to CloudStack and should become the base for interesting products that use hybrid clouds.