Wednesday, July 04, 2018

AWS Fargate from the Command-Line

We all love a good command-line demo. Here is one I put together recently to demonstrate a couple of things: firstly, Docker multi-stage builds; and secondly, how a simple web service written in Go can be deployed to AWS Fargate using nothing but the command line.

 

What's cool about that, I hear you ask?

What's cool about it is that at no point during this demo am I deploying, configuring or going to have to manage ANY servers.

 

Is it "serverless" ... is it containers? YES!

Let’s take a look:

Here is the simple Go HTTP server.

package main

import (
    "log"
    "net/http"

    "github.com/gorilla/mux"
)

// YourHandler responds to every request with a simple HTML greeting.
func YourHandler(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("<h1>Hello Mitch Beaumont!</h1>\n"))
}

func main() {
    // Create a router and send all requests for "/" to YourHandler.
    r := mux.NewRouter()
    r.HandleFunc("/", YourHandler)

    // Listen on port 8000; log.Fatal surfaces any startup error.
    log.Fatal(http.ListenAndServe(":8000", r))
}
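
Before containerising anything, it's worth a quick local sanity check. A minimal sketch, assuming you have Go installed and a GOPATH-style workspace:

# Fetch the dependency and start the server in the background
go get github.com/gorilla/mux
go run main.go &

# The server should answer on port 8000
curl http://localhost:8000/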

As I mentioned, I'm going to use Docker's multi-stage build process to compile my HTTP server and then build a lightweight container image.

 

Why are you using multi-stage builds?  

When it comes to containers, few people would argue against the idea that smaller is better!

Multi-stage builds help us optimise the size of our container image by allowing us, from a software development perspective, to separate the wheat from the chaff.

In the case of our simple Go application, we need Go installed so that we can compile the application into a binary. Once compiled, those build-time dependencies are no longer required for the binary to run our simple web service. With multi-stage builds, we can use one container image, with all the required dependencies, to build the binary, and then copy the artifact (the binary) into a new container image with a minimal footprint (in our case, scratch).
The result is an optimised container image.
 
Here is how my multi-stage Dockerfile looks.

# STEP 1: build the executable binary

FROM golang:alpine AS builder
COPY . $GOPATH/src/github.com/mitchybawesome/http-server/
WORKDIR $GOPATH/src/github.com/mitchybawesome/http-server/

RUN apk add --no-cache git mercurial

# Get dependencies
RUN go get -d -v

# Build a statically linked binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o /go/bin/httpserver

# STEP 2: build a small image

# Start from scratch
FROM scratch

# Copy our static executable from the builder
COPY --from=builder /go/bin/httpserver /go/bin/httpserver
ENTRYPOINT ["/go/bin/httpserver"]
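
As a quick sanity check, you can build the image locally and look at its size; the final image should weigh in at just a few megabytes, since it contains little more than the binary itself:

# Build the image and inspect its size
docker build -t go-http-server .
docker images go-http-server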

Now that I have my container image, I need to push it into a repository. It just so happens that I have an Amazon ECR repo set up and ready to go. I'll use the following commands to log in to ECR, build the image, tag it and push it. (The account ID is a dummy.)

ACCOUNT_ID="000000000"
REGION="us-west-2" 
REPO="go-http-server"
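
# (Optional) If you don't already have an ECR repo, this is how you'd create one:
aws ecr create-repository --repository-name ${REPO} --region ${REGION}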
 
# Log in to ECR
$(aws ecr get-login --no-include-email --region ${REGION})

# Build docker image
docker build -t ${REPO} .

# Tag docker image
docker tag ${REPO}:latest ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest

# Push docker image
docker push ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest

Now that we have our container image available, the next step is to create an ECS cluster in which to run our container.

 

BUT YOU SAID NO SERVERS! 

I know, I know. Cool your jets, man!

Whilst it's true that I am creating a cluster, there are no actual instances being provisioned in my AWS account. The cluster is purely a management construct and a security boundary.

# Create Cluster
aws ecs create-cluster --cluster-name fargate-cluster --region ${REGION}
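
If you'd like to confirm the cluster was created before moving on, a quick describe should show it with an ACTIVE status:

# Check the cluster status
aws ecs describe-clusters --clusters fargate-cluster --region ${REGION}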

Next, we need to create a load balancer that will route requests to our running service. In order to create the load balancer, we need a few details about the environment. Specifically, we need to know the VPC into which we will be deploying the load balancer, and the subnets that we're going to connect it to.

vpcid=$(aws ec2 describe-vpcs | jq -r '.Vpcs[] | select(.IsDefault == true) | .VpcId')

The $vpcid variable now contains our default VPC ID. We will use this to filter the list of available subnets. For the purposes of this demo, I'll be creating all of my resources in the default VPC.

Notice the filter I've applied in the jq query to select the VPC which has the "IsDefault" flag set to "true".

subnets=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=${vpcid}" | jq -r '.Subnets[].SubnetId') && echo $subnets

Run the above command to output a list of subnet IDs. Keep these safe; we'll need them later.
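
A quick tip: the create-service command towards the end of this post wants these subnet IDs as a comma-separated list, so you could reshape the variable now. A small sketch (the variable name is my own):

# Convert the whitespace-separated IDs into a comma-separated list
subnet_list=$(echo $subnets | tr ' ' ',') && echo $subnet_list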

The next step in our process is to create a security group that will be attached to the load balancer. We use this security group to control traffic ingress from the public Internet.

If we did not create and assign a security group, the VPC's default security group would be assigned to the load balancer and we'd have a hard time accessing our service.

# Create ELB security group
aws ec2 create-security-group \
--description "security group for ALB" \
--group-name "security-group-for-alb" \
--vpc-id ${vpcid} \
--region ${REGION} | jq '.GroupId'

Grab the "GroupId" output from the previous command, and use it to define the ingress rules for the security group. You will also need the load balancer's GroupId for a later step, so keep it close.

In this example, we're allowing TCP traffic on port 80 from any source IP address to reach the load balancer.

# Configure ELB security group ingress rule to allow inbound HTTP from the Internet.
aws ec2 authorize-security-group-ingress \
--group-id <GroupId_from_previous_command> \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0

We're now ready to create our load balancer. This command creates an ALB (Application Load Balancer) and outputs the ARN (Amazon Resource Name) to the command line.

 

Why ALB?

We're using an ALB because of its support for dynamic port mappings and path-based routing. Both of these features translate to a more optimised deployment model in terms of infrastructure and, ultimately, cost.

 

Were you paying attention?

In an earlier step, I asked you to record some information. Do you remember what it was? 
I hope so, because you'll need that information to complete this command. (Hint: it's the subnets!)

# Create a load balancer and get the ELB ARN.
aws elbv2 create-load-balancer \
--name go-http-server \
--subnets subnet-111111 subnet-222222 subnet-333333 \
--security-groups <ALB_Security_GroupId> \
| jq '.LoadBalancers[].LoadBalancerArn'

Now that we have a load balancer created, we need to create a target group. The load balancer uses the target group to route requests to one or more registered targets, which in our case will be containers (or tasks) running our simple Go web service.

# Create a target group
aws elbv2 create-target-group \
--name fargate-targets \
--target-type ip \
--protocol HTTP \
--port 80 \
--vpc-id ${vpcid} | jq '.TargetGroups[].TargetGroupArn'

 

Joining the dots! 

Our target group now needs to be attached to a listener, and the listener needs to be attached to the load balancer to complete the setup. You'll need the load balancer ARN and the target group ARN, both of which were outputs from the previous commands, to complete this step.
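
If you didn't record them, both ARNs can be looked up again by name. Something like this should do the trick:

# Recover the load balancer and target group ARNs
aws elbv2 describe-load-balancers --names go-http-server | jq -r '.LoadBalancers[].LoadBalancerArn'
aws elbv2 describe-target-groups --names fargate-targets | jq -r '.TargetGroups[].TargetGroupArn'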

# Create a listener
aws elbv2 create-listener \
--load-balancer-arn <load_balancer_arn> \
--protocol HTTP \
--port 80 \
--default-actions Type=forward,TargetGroupArn=<target_group_arn>

 

How do we get our application deployed into the cluster?

Great question! We have an image that we pushed into ECR, but we somehow need to tell ECS that we want it to launch a container based off of that image. We do that by creating a task definition.

The task definition is a set of properties that allows us to model the run-time environment for our containerised Go web service. Within the task definition we specify, among other things, how much memory and CPU we want to allocate to our task.

Task definitions are written in JSON. I've taken the liberty of dropping an example of the task definition I'm using into a gist, which can be found here.

If you use this task definition, don't forget to update the image path!
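
In case that gist isn't handy, here's a rough sketch of the shape of my task definition. The sizes, role name and account details are illustrative; in particular, Fargate tasks need a task execution role so ECS can pull the image from ECR on your behalf:

# Write an example task definition (values are illustrative)
cat > go-http-server.json <<'EOF'
{
    "family": "go-http-server",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    "executionRoleArn": "arn:aws:iam::000000000:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "go-http-server",
            "image": "000000000.dkr.ecr.us-west-2.amazonaws.com/go-http-server:latest",
            "portMappings": [{"containerPort": 8000, "protocol": "tcp"}],
            "essential": true
        }
    ]
}
EOF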

# Register Task Definition
aws ecs register-task-definition \
--cli-input-json file://./go-http-server.json \
--region ${REGION} --query 'taskDefinition.taskDefinitionArn'

Our task needs a security group assigned to it so that we can control the kind and the sources of traffic that are allowed to reach it.

Record the "GroupId". You'll need it later.

# Create security group for the tasks
aws ec2 create-security-group \
--description "security group for fargate task" \
--group-name "security-group-for-fargate-task" \
--vpc-id ${vpcid} \
--region ${REGION} | jq '.GroupId'

I mentioned earlier that we'd need the GroupId of the load balancer's security group. Now is the time. Use that group ID to define and attach an ingress rule to the task security group.

Basically what we're doing here is telling the task that it can accept TCP connections over port 8000 from the load balancer.

Keeping our traffic flow rules tight!

# Configure security group ingress rule to allow the ELB to connect to tasks.
aws ec2 authorize-security-group-ingress \
--group-id <task_security_group_id> \
--protocol tcp \
--port 8000 \
--source-group <alb_security_group_id>

 

Wrapping up

The final step is creating our service. You'll need some of the outputs from the previous commands to complete this step, including: the name of the task definition, a comma-separated list of the subnets in the VPC to which the tasks need to be connected, the ID of the task security group, the ARN of the target group, and the name of the container (which you can get from the task definition you created earlier).

# Create Service
aws ecs create-service --cluster fargate-cluster --service-name go-http-server \
--task-definition <task_definition> --desired-count 2 --launch-type "FARGATE" \
--network-configuration "awsvpcConfiguration={subnets=[<comma_separated_list_subnets>],securityGroups=[<security_group>],assignPublicIp=ENABLED}" \
--load-balancers targetGroupArn=<target_group_arn>,containerName=<container_name>,containerPort=<container_port> \
--region ${REGION}
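
It can take a minute or two for the tasks to launch, register with the target group and pass health checks. If you'd like to block until the service settles before testing, the CLI has a handy waiter:

# Wait until the service reaches a steady state
aws ecs wait services-stable --cluster fargate-cluster --services go-http-server --region ${REGION}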

Let's now make sure we can connect to our service. Grab the DNSName of the load balancer by querying the load balancer ARN.


url=$(aws elbv2 describe-load-balancers \
--load-balancer-arns <load_balancer_arn> \
| jq -r '.LoadBalancers[].DNSName') && curl $url


Hopefully you get a response back! If you don't, perhaps check your security group rules.

 

Drum roll please

So, using nothing but the command line and NO SERVERS, we've packaged up a simple web service, pushed it to a secure repository and created a framework for scaling, securing, deploying and serving the web service.

In the next few posts, I'm going to explore some of the finer details around security, scaling and updating our simple web service.

As always, I'd love your feedback and thoughts.

~mitch



