Thursday, May 31, 2018

Welcome to the Kubeiverse


The Kubeiverse is expanding at an ever-increasing rate of knots. You'd be forgiven for finding it hard to keep up with all of the different tools and technologies available to help with deploying, scaling and managing applications on Kubernetes. You're not alone.

To help me learn and keep track, and hopefully help the wider community along the way, I'm going to attempt to explore many of these strange new tools and seek out new Kubeilizations.

As I travel, my mission is to better understand the problems each of these tools hopes to solve, how to install, configure and use them, and to explore some of the use cases that are out there in the wild.

Uncharted waters ahead!

Wednesday, May 30, 2018

Mid-Week fun with Draft, Kubernetes and Amazon ECR.


Today I’m going to be playing with Draft, an open source tool that is part of the Kubernetes ecosystem. Draft aims to remove the “friction” from containerised application development workflows.

Draft is not a tool aimed at production deployments. Its goal, from what I can tell, is to make it easier for developers to build and test their applications locally before the changes are moved into version control. The official Draft doco refers to this as the "inner loop" of a developer's workflow.
If we look at the high-level (and oversimplified) manual steps involved in testing your application locally, within a container on Kubernetes:

  • Cut some code.
  • Create a Dockerfile.
  • Build the image.
  • Push the image to a container image repo somewhere.
  • Create some Kubernetes manifests.
  • Run some kubectl magic and eventually you’ll have a running version of your application that you can access.
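Concretely, that manual loop might look something like this. All names here (registry, image name, manifest paths) are hypothetical, and each command is echoed rather than executed so you can see the shape of the workflow as a dry run:

```shell
# A sketch of the manual inner loop described above.
# Registry, image name and manifest paths are made-up examples.
set -eu

REGISTRY="registry.example.com"   # your container image registry
IMAGE="${REGISTRY}/dogs:dev"      # the image we build and push

echo docker build -t "$IMAGE" .             # build from your Dockerfile
echo docker push "$IMAGE"                   # push to the image repo
echo kubectl apply -f k8s/deployment.yaml   # create the Deployment
echo kubectl apply -f k8s/service.yaml      # expose it with a Service
echo kubectl get pods                       # check the app is running
```

Drop the `echo`s to run the commands for real.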

The goal of Draft is to distil this down to a few simple commands: draft create and draft up.

But hang on Mitch, I hear you say. Why do I need to test my application in a container? I’ve got everything I need right here on my system: dependencies, runtime, etc. Well, this is exactly why it is important to test within a container; the target environment isn’t necessarily going to have all, or any, of those things wired up in exactly the same way your local system does. If your plan is to containerise your application, run your tests within a containerised environment. That way, a whole class of potential “it worked on my laptop” issues never sees the light of day.

So let's take a look at how we get up and running with Draft.


Before we get started, we need the following things:
  • A running Kubernetes cluster (Minikube, for example)
  • Helm (for deploying your application to the Kubernetes cluster)

For the purpose of this post, I am going to assume that you already have a Kubernetes cluster up and running, minikube or otherwise. On the off chance that you do not, check out this awesome guide on how to get up and running with a local Kubernetes cluster.

Our workspace currently looks like this:

└── main.go

0 directories, 1 file


Installing and configuring Helm

Installing Helm using Homebrew is pretty straightforward:
brew install kubernetes-helm

Alternatively, the Helm binaries can be downloaded and set up by following the steps here.

If you are running a Kubernetes cluster with RBAC enabled, you'll need to make sure the RBAC prerequisites are met, as described here.

If you have an RBAC-enabled cluster, run something like the following command to deploy the RBAC resources: kubectl create -f tiller_rbac_stuff.yaml
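For reference, the manifest in question (tiller_rbac_stuff.yaml) typically contains a ServiceAccount for Tiller plus a ClusterRoleBinding granting it cluster-admin, along the lines of the example in the Helm docs:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

With this in place, you can initialise Helm with helm init --service-account tiller so that Tiller actually runs as that service account.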

The final step here is to run helm init. This init process deploys the Helm server-side component, called Tiller.

Installing Draft

I'm using macOS and I have Homebrew installed, so installing Draft was as simple as:

brew tap azure/draft && brew install draft 

If you have a slightly different rig, you can download a release binary for your specific OS from here.

Assuming that this is the first time you are running Draft on your system, run draft init to set Draft up correctly. This does things like downloading plugins, setting environment variables and a few other things.

Once that's done, we need to run the following command to configure the registry:

draft config set registry <registrypath> 

We are going to use Amazon ECR as a private repository, so set the value to be something like this: <accountid>.dkr.ecr.<region>.amazonaws.com
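With ECR, the registry path is built from your AWS account id and region. As a dry run (the account id and region below are made up; substitute your own):

```shell
# Hypothetical AWS account id and region -- substitute your own values.
set -eu
ACCOUNT_ID="123456789012"
REGION="ap-southeast-1"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Echoed as a dry run; drop the echo to apply the setting for real.
echo draft config set registry "$REGISTRY"
```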

As part of the init process, there are a few files and folders that get created for us in the ~/.draft folder. Let's see what they are:

cache: a cache of all the things.

logs: fairly self-explanatory. The logs from Draft are stored here.

packs: This folder contains what Draft calls packs. A pack represents a collection of template files tailored for specific programming languages. Draft uses these templates to bootstrap your project with all of the goodness you’ll need to deploy it to Kubernetes. There is a Dockerfile template and a selection of Helm chart templates.

plugins: This folder contains the standard Draft Pack Repository plugin. This is the plugin used by Draft for adding, removing, listing and fetching pack repositories. Based on what the readme says, it “Enables the Draft community to come up alternative forms of pack repositories by implementing their own plugin for fetching down these packs, so it made sense to initially spike the tooling as an entirely separate project.”

Switch to the folder containing your application source code.

Run draft create to pull in the boilerplate based on the language Draft has detected you are developing in.
--> Draft detected Go (100.000000%) 
--> Ready to sail

As you can see, Draft detected we're developing in Go.
What does our directory look like now?

├── Dockerfile
├── charts
│   └── go
│       ├── Chart.yaml
│       ├── charts
│       ├── templates
│       │   ├── NOTES.txt
│       │   ├── _helpers.tpl
│       │   ├── deployment.yaml
│       │   ├── ingress.yaml
│       │   └── service.yaml
│       └── values.yaml
├── draft.toml
└── main.go

There is quite a bit of additional stuff here now. I'm not going to dive into what each of these things is; I'll leave that for another day. Suffice it to say, Draft does a lot of the heavy lifting required for packaging applications in Docker images and creating the necessary Helm charts for deploying to Kubernetes.

I noticed that the Dockerfile uses the official golang image with the onbuild tag. This is a fairly large image, so I might want to consider something a bit lighter, such as one of the Alpine derivatives. The good thing is that I can easily make those changes by updating my Dockerfile and test them locally before this moves into my CI pipeline.
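As a sketch of what "a bit lighter" might look like (this is not what Draft generates; the image tags and paths are assumptions), a two-stage build keeps the Go toolchain out of the final image:

```dockerfile
# Build stage: compile the app with the full Go toolchain.
FROM golang:1.10-alpine AS build
WORKDIR /go/src/app
COPY . .
RUN go build -o /bin/app .

# Runtime stage: only the compiled binary ships.
FROM alpine:3.7
COPY --from=build /bin/app /bin/app
EXPOSE 8080
CMD ["/bin/app"]
```

The resulting image is a fraction of the size of the onbuild-based one.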

Setting up ECR Credentials Helper 

We’re going to be storing the artifact that we create, the Docker image, in Amazon ECR. ECR is a fully managed container registry compliant with the Docker Registry v2 API. Kubernetes also supports ECR; more details are available here.

We’ll be using the Amazon ECR Credential Helper to seamlessly get the access token required by the Docker CLI to authenticate with ECR. This blog post does a great job of going into this in much more detail.
The basic steps are:

Grab the credential helper:

go get -u 

Move the docker-credential-ecr-login binary into your $PATH. For example, on my laptop, I ran:

mv $GOPATH/bin/docker-credential-ecr-login /usr/local/bin/docker-credential-ecr-login 

The next step is to update (or in some cases create) ~/.docker/config.json so that it contains:

{
    "credsStore": "ecr-login"
}

The assumption is that you already have some AWS credentials available in one of the standard locations, such as the ~/.aws/credentials file, environment variables or an IAM role.
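If you don't already have credentials configured, a minimal ~/.aws/credentials file looks like this (the key pair below is AWS's documented example, not a real credential):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```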

Deploying to Kubernetes

At this point, we can run draft up to deploy the application into the target Kubernetes environment. (Thanks, Helm!)
Draft Up Started: 'dogs': 01CEPSA8NM8T4DZA5DG3ZW7R5Z
draft: Building Docker Image: SUCCESS ⚓  (1.0013s)
draft: Pushing Docker Image: SUCCESS ⚓  (8.2443s)
draft: Releasing Application: SUCCESS ⚓  (5.1915s)
Inspect the logs with `dogs logs 01CEPSA8NM8T4DZA5DG3ZW7R5Z`

What just happened?

A few things, as it happens. But it's worth pointing out that up to this point, we have packaged and deployed our application to a Kubernetes cluster without once touching the kubectl, docker or helm command-line utilities. I think that's kinda cool!

Firstly, Draft very kindly built us a Docker image.

If we run the aws cli command: 

aws ecr list-images --region ap-southeast-1 --repository-name dogs

We should be able to see that the newly minted image has been pushed to the ECR registry/repo we configured earlier. 

{
    "imageIds": [
        {
            "imageDigest": "sha256:b102a0eb7b4f8026fe7fabe...",
            "imageTag": "ed6ff22dff7b34d868ba31efc5d..."
        }
    ]
}


Draft then deployed our application into our Kubernetes cluster. If we run:

kubectl get pod

We should be able to see a pod running:

dogs-go-74c9fc5989-7lf46   1/1       Running   0          12m

Draft also, very kindly, created a service for us, which Kubernetes uses to expose our application to the outside world. With that, we are now in a position to test our application.

We do this by using draft connect.
When you run draft connect, Draft does some magical port-forwarding for you.
You can then use a browser to connect to http://localhost:<portnumber> and marvel at your working application in all of its containerised glory. Or you can use curl:

curl -i http://localhost:57985
Each time you run draft connect, the local port number changes. To provide a more consistent experience, I have taken to adding the --override-port flag. This allows me to switch back to the same browser window to view the changes to my app.
For example:

draft connect --override-port 57985:8080

That's it for now. I'm up and running with Draft, and I'm keen to explore how I can customise the tool, create my own packs and understand how it integrates with my CI process.

See you next time!

A little about Me

My name is Mitch Beaumont and I've been a technology professional since 1999. I began my career working as a desk-side support engineer for a medical devices company in a small town in the middle of England (Ashby-de-la-Zouch). I then joined IBM Global Services, where I began specialising in customer projects based on and around Citrix technologies. Following a couple of very enjoyable years with IBM, I relocated to London to work as a systems operations engineer for a large law firm, where I was responsible for the day-to-day operations and development of the firm's global Citrix infrastructure. In 2006 I was offered a position in Sydney, Australia. Since then I've had the privilege of working for and with a number of companies in various technology roles, including Solutions Architect and technical team leader.