Thursday, December 03, 2015

All Five!

Happy to report I've now passed all five AWS Certifications!

AWS Certified Solutions Architect - Associate

AWS Certified Developer - Associate

AWS Certified SysOps Administrator - Associate

AWS Certified Solutions Architect - Professional

AWS Certified DevOps Engineer - Professional



Tuesday, October 13, 2015

AWS Certified Solutions Architect - Professional



I'm pleased to report that this past week, whilst at AWS re:Invent, I sat and passed the AWS Certified Solutions Architect - Professional exam.

Having passed, I thought I'd share some of my thoughts and experiences. If you're planning on taking the exam, hopefully some of these "pearls" will help you prepare, at least mentally.

The Exam

To start with, it's important to remember that in order to qualify to take this exam you must already hold the AWS Certified Solutions Architect - Associate badge.

The exam is 170 minutes long and includes 80 questions. 

You'll be measured on 8 domains of knowledge, with the largest percentage of the marks going to "Security". This kind of makes sense considering AWS makes no secret about how security is of paramount importance in everything they do.

This white paper is a very very good read ... 

You can read all about the exam here ... 

The Questions 

Most of the questions are LONG scenario based questions. I felt a bit like I'd run a marathon by the time I clicked the "Submit" button at the end of the exam.

The best piece of advice I can offer here is READ the question completely and READ the answers completely. After all, this is a professional grade architecture exam and as Solutions Architects, there is an expectation that we can extract key requirements from a given scenario.

Unlike many of the other exams I've taken, where a set of answers typically includes two or three options which, to the trained eye, are obviously incorrect, most answer-sets in this exam contained a full range of answers that could all be correct. However, what you are being asked is to choose the best answer based on the scenario provided. Again, it's important to read and extract the key points or requirements from each question. 

Considering the number of questions and the time available, time management is really important: with 80 questions in 170 minutes, you have just over 2 minutes per question. 

I personally found working in 20 minute blocks with a target of 10 questions each block helped me manage my time.

That said, after going back to review my marked answers, I only ended up with about 9 minutes remaining on the clock.

Marking for Review

This brings me to another piece of useful advice for keeping time on your side. Use the "mark for review" tick box to help you push ahead with the exam if you end up getting stuck on a particular question; you can go back and have another crack at the end.

Preparation

In terms of study guides and resources, I personally found that the "Practice Exam", available for $40 USD through Kryterion, was an excellent starting point. Not only does it give you a feel for the questions and the time constraints you have to work with, it also provides a breakdown (once you've completed the exam) of how you performed in each of the domains. This helped guide me on some of the areas I needed to focus on.

Other materials I would recommend all candidates read ... 

AWS White Papers are a must:

Read through the product FAQs:

The reality is that there is no substitute for real-world experience. I have personally worked with the AWS platform for around 3 years, which gave me a solid foundation on which to prepare for the exam.

Conclusion

In closing, I think this is a great exam which really tests a broad set of skills. The exam prep guide really should be used as a starting point for planning your study. 

The exam tests your knowledge of a wide range of AWS services, which can be a challenge if the scope of your work has been limited to a smaller subset of the more commonly used services. 

Don't forget that Security carries a lot of weight in terms of the overall mark, so really make sure you understand things like IAM, roles, policies, federation and web identity. 
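To give you a flavour of the level expected, you should be able to glance at a simple, made-up policy like the one below and immediately see what it allows and under which condition (in this case, fetching objects from a bucket only when the caller has authenticated with MFA).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "s3:GetObject" ],
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}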

There is also a new "Well-Architected Framework" whitepaper that AWS published recently. This is definitely worth a read because it will help you understand the best practices that should inform your thinking when you make architectural decisions.


Good luck!

Tuesday, June 23, 2015

Teach a man to fish ...

Today somebody asked me a question which I thought warranted a blog post. For the purpose of this blog post "somebody" will be referred to as Jeff.

So, Jeff came to me with a problem. Jeff had set out to build a particular solution in AWS. During his investigations he found an off-the-shelf CloudFormation template which deployed the exact solution he wanted.

Jeff downloaded the CloudFormation template from GitHub, logged in to the AWS Management Console and ran through the "Create new Stack" wizard. Jeff was on top of the world: the solution was being built in front of his very eyes and, so far, all he'd had to do was a bit of googling and a few mouse clicks.

He was grinning like a Cheshire cat, life was good, CloudFormation was working its magic and he was going to be the office hero ... right up until the moment he saw the dreaded ROLLBACK_IN_PROGRESS message.

S**! Jeff thought to himself as he watched his beautiful solution being torn down, volume-by-volume, instance-by-instance, ELB-by-ELB.

He opened up the CloudFormation template using his trusted copy of Sublime and this is what he saw:

"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero. Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris. Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos."

Well, he didn't really see that, but I'm sure you can appreciate that to the untrained eye, CloudFormation templates can definitely look a little scary.

That's when Jeff decided to call me and ask for a little help. We walked through the template and tried to identify the reasons for the failure, which are beyond the scope of this post. One thing which did become clear from the silence on the other end of the line was that Jeff was struggling a little to keep up with my troubleshooting approach: how did we get from Error Message A to Solution B?

Jeff then reminded me of the famous quote: "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime".

Now, just to put a little context around my friend Jeff: he's a very smart developer. It would not take Jeff long to "learn how to fish". But what were the best resources to help Jeff "learn to fish"?

AWS have an awesome documentation library, and below I've included a few of what are, in my personal opinion, the best links for getting to grips with CloudFormation.

This first link is a great starting point for anyone wanting to start out with CloudFormation and understand the building blocks of a template:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html

This next link is the bible of AWS CloudFormation resources. It provides an invaluable breakdown of every resource type you can create through CloudFormation. Definitely my first stop when handcrafting and troubleshooting templates.

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html

I'm sure I'll hear from Jeff again. But I know that, armed with an arsenal of new links, he will try his absolute best to catch that fish on his own first. He may not succeed, but he will learn a lot with each attempt, and this:

"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero. Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris. Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos."

will start to look more like this:

  "Resources" : {
    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "InstanceType" : { "Ref" : "InstanceType" },
        "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
        "KeyName" : { "Ref" : "KeyName" },
        "ImageId" : "ami-2323232"
      }
    },

Wednesday, June 10, 2015

Programming monogamy

I decided it was high time I brought an end to my exclusive relationship with PHP and started playing the field a little.

I'm not suggesting that there is anything wrong with PHP. Exactly the opposite, in fact: not coming from a development background, I've been able to achieve some fantastic things with PHP. My journey from "total PHP newb" to "not so much of a PHP newb" has introduced me to a whole new world and some very interesting concepts. Starting to understand application development and some of its programming principles has helped improve my understanding of software development within my company. It has enabled me to have more constructive conversations with developers across my organisation, which was especially relevant when it came to refactoring some of our software solutions for AWS.

That aside, I recently met this little beauty who goes by the name of Ruby. Why Ruby and not Python, Node or some other scripting language? The main motivator was Rails. Rails is a web application framework that I've heard lots about and am keen to explore.

A lot of the PHP work I've done has been around building web applications and browser-based consoles for managing the environments and services we run. I never really got stuck into a framework for developing applications in PHP; I normally handcraft everything with a simple MVC structure, like the one below.
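For anyone wondering what I mean by that, the layout is nothing fancier than something along these lines (folder and file names purely illustrative):

index.php        (front controller: routes each request to a controller)
controllers/     (one class per page or feature)
models/          (data access and business logic)
views/           (templates that render the output)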

This practice got hammered home thanks to a cover-to-cover reading of "PHP for Absolute Beginners", which I'd highly recommend to anyone looking to get started with PHP or web application development in general.

Anyway, I digress. Since Ruby and Rails "appear" to go hand-in-hand, I decided to start learning some Ruby.

I wanted to start, as I did with PHP, with something really simple. Since most of what I do these days revolves around AWS, pulling back a list of EC2 instances and dumping them to the console seemed like a perfect place to start.

In my first little script below, I've created an empty hash and built a function which uses the AWS SDK for Ruby to return a list of instances and populate the hash with a subset of the information returned.

It then iterates through the hash, using the awesome .each method and spits out a "nicely" formatted report to the console.

Pretty basic, but it gave me the chance to get a grasp of some basic Ruby concepts, like symbols and the awesome .each method.

These simple scripts inevitably form the building blocks for larger and more complex solutions, so sit tight and let's see where Ruby and I go from here.

#!/usr/bin/ruby 

require "aws-sdk"

$instance_hash = Hash.new('Nothing New')

################################
# => Function: get_ec2_instances
# => returns all running EC2 Instances for my account.
###############################

def get_ec2_instances
 ec2 = Aws::EC2::Client.new(region: 'ap-southeast-2')

 resp = ec2.describe_instances()
 resp[:reservations].each do | reservations |
  reservations[:instances].each do | instances |
   $instance_hash[instances[:instance_id]] = 
    {
    "accountId" => reservations[:owner_id],
    "state" => instances[:state][:name], 
    "privateIp" => instances[:private_ip_address]
    }
  end
 end
end

get_ec2_instances

$instance_hash.each do |key,value|
 puts "Intstance Id: #{key}"
 value.each do |k,v|
  puts "#{k} : #{v}"
 end
 puts "-" * 25
end
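Running it produces output along these lines (instance IDs and addresses made up, obviously):

Instance Id: i-12345678
accountId : 123456789012
state : running
privateIp : 10.0.1.23
-------------------------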

Monday, June 01, 2015

Route53 + RaspberryPi + Cron + PHP = lazy admin.

Thanks to my recently AWS-connected Pi, performing a scheduled DNS cutover from the comfort of my own bed could not have been easier.

With a little cron magic and some "Aws\Route53\Route53Client" you can easily schedule changes to Route53 records / record sets. 


<?php
error_reporting(E_ALL);
ini_set("Display Errors", 1);
require 'vendor/autoload.php';
// Create client object for Route53
$r53Client = \Aws\Route53\Route53Client::factory(array());
// Create client object for SES
$SesClient = \Aws\Ses\SesClient::factory(array(
    'region' => 'us-east-1'
));
// Function for sending notifications if the record change fails or for confirmation that the change has been made.
function StackNotification($body, $cnameDns)
{
    global $SesClient;
    $stackSubject = 'DNS Update Confirmation ' . "[$cnameDns]";
    $SesClient->sendEmail(array(
        'Source' => 'blahblah@mitchyb.com',
        'Destination' => array(
            'ToAddresses' => array(
                'blahblah@mitchyb.com'
            )
        ),
        'Message' => array(
            'Subject' => array(
                'Data' => $stackSubject
            ),
            'Body' => array(
                'Html' => array(
                    'Data' => $body
                )
            )
        )
    ));
}
function updateRecord($elbDns, $cnameDns)
{
    global $r53Client;
    global $cloudFormationStackName;
    // Update DNS Records
    try {
        $command = $r53Client->changeResourceRecordSets(array(
            'HostedZoneId' => 'Z16PRLGBWGMRUY',
            'ChangeBatch' => (object) array(
                'Changes' => (object) array(
                    array(
                        'Action' => 'UPSERT',
                        'ResourceRecordSet' => array(
                            'Name' => $cnameDns,
                            'Type' => 'CNAME',
                            'TTL' => 60 * 5,
                            'ResourceRecords' => array(
                                array(
                                    'Value' => $elbDns
                                )
                            )
                        )
                    )
                )
            )
        ));
        
        $msg = "Route53 record updated to " . $elbDns;
        StackNotification($msg, $cnameDns);
    }
    catch (Exception $e) {
        $errorMsg = "Route53 record update failed with error: $e";
        trigger_error($errorMsg);
        
        StackNotification($errorMsg, $cnameDns);
        exit;
    }
}
// Call the record set update function.
updateRecord('offline.mitchyb.com', 'blog.mitchyb.com');
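All that's left to make it a truly lazy, scheduled cutover is a cron entry on the Pi pointing at the script. Something like this would fire it once at 2am on the 14th of June (the path, log file and timing are just placeholders):

0 2 14 6 * /usr/bin/php /home/pi/phpscripts/dnscutover.php >> /home/pi/dnscutover.log 2>&1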

Wednesday, May 27, 2015

Installing the AWS SDK for PHP onto my Raspberry Pi


Just some notes for my own reference on getting the AWS SDK for PHP working on my RaspberryPi.

1. Installed PHP and Apache (needed for this particular project)

pi@raspberrypi ~ $ sudo apt-get install apache2 php5 libapache2-mod-php5

2. Moved into my project folder and created the composer.json file (as per the SDK installation instructions).

nano composer.json

3. Popped in the required JSON.

{
    "require": {
        "aws/aws-sdk-php": "2.*"
    }
}

4. Ran the installer and ... bop-bow! ....

pi@raspberrypi ~/phpscripts/greenmode $ php composer.phar install
Loading composer repositories with package information
Installing dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.

  Problem 1

    - aws/aws-sdk-php 2.4.0 requires guzzle/guzzle ~3.7.0 -> satisfiable by guzzle/guzzle[v3.7.0, v3.7.1, v3.7.2, v3.7.3, v3.7.4].

... and

 guzzle/guzzle v3.9.3 requires ext-curl * -> the requested PHP extension curl is missing from your system.

5. Loaded up the php5-curl package ....

pi@raspberrypi /var/www $ sudo apt-get install php5-curl

6. Everything working!

pi@raspberrypi ~/phpscripts/greenmode $ php composer.phar install
Loading composer repositories with package information
Installing dependencies (including require-dev)
  - Installing symfony/event-dispatcher (v2.6.8)
    Downloading: 100%         

  - Installing guzzle/guzzle (v3.9.3)
    Downloading: 100%         

  - Installing aws/aws-sdk-php (2.8.7)
    Downloading: 100%    

Just to be sure ... let's do something .... 

A quick Ec2 iterator script ...

<?php

require 'vendor/autoload.php';

// Create the EC2 client using the "dev" profile and the Sydney region
$ec2Client = \Aws\Ec2\Ec2Client::factory(array(
        'profile' => 'dev',
        'region'  => 'ap-southeast-2'
));

// Iterate over all running instances and print their instance IDs
function allInstances()
{
        $iterator = $GLOBALS['ec2Client']->getIterator('describeInstances', array(
                'Filters' => array(
                        array(
                                'Name'   => 'instance-state-name',
                                'Values' => array('running')
                        )
                )
        ));

        foreach ($iterator as $object) {
                echo $object['InstanceId'] . PHP_EOL;
        }
}

allInstances();

... and we get our instances back. Sweet!

pi@raspberrypi ~/phpscripts/greenmode $ /usr/bin/php sdktest.php
i-4fcc3a81
i-4585708b
i-b4a6697a

i-960efa58

Tuesday, May 19, 2015

My First Lambda Function

I thought it was about time I got familiar with AWS Lambda.

AWS Lambda is an event-driven computing service which, at the time of writing, is available in US-EAST (Virginia), US-WEST (Oregon) and EU (Ireland).

Lambda allows custom functions, written in Node.js (server-side JavaScript), to be executed on demand in response to particular triggers, or events.

My Problem: We have an application which sends out thousands of emails a day. Our application uses another AWS service, SES (Simple Email Service), to send them.

When sending that many emails, it's not uncommon to get a lot of delivery failures, or bounces. These bounces eventually make their way back to a service mailbox, which someone or something can trawl through to build up a list of dead email addresses.

This to me seemed like a perfect candidate process for a bit of "cloudification", and so we begin.

My project uses the following services,

NOTE: I'm using region N.Virginia for all non-global services.

SES (N.Virginia)
SNS (N.Virginia)
DynamoDB (N.Virginia)
Lambda (N.Virginia)
IAM (Global)

SES (Simple Email Service)


The first thing we need to do is set up SES to forward bounce and complaint notifications to an SNS topic. You'll need to go to your verified domain or email address to modify the notification settings.



If you don't have an SNS topic set up, you can click on the "Click here to create a new Amazon SNS topic" link.


Simply enter a Topic Name and Display Name



Once you've done that, pick the SNS topic from the drop-down list. I didn't bother with Delivery notifications because they would generate a lot of notifications.


DynamoDB 


The next step in my little project was to create a new DynamoDB table in which to store the information I'm going to gather from my bounce notifications.

The details of DynamoDB table creation are beyond the scope of this article, but I've included some high-level steps below.



Create a new table; I called mine sesNotifications. I chose the hash and range primary key type, using the SNS topic ARN (Amazon Resource Name) as the hash and the SnsPublishTime timestamp as the range. This gives me a nice sorted range index based on the time the notification was received.


I also added the messageId as a global secondary index so that in the future, I could search for notifications based on the SES Message IDs (another project we're working on).



You'll next need to specify the read and write capacity for the table ... I'll leave that one to you.



Click continue through the remainder of the screens and your DynamoDB table will be created for you.
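If you'd rather script the table creation than click through the console, a CloudFormation resource along these lines should produce an equivalent table. The attribute names here follow what the Lambda function (further down) actually writes, and the capacity values are just placeholders:

    "sesNotifications" : {
      "Type" : "AWS::DynamoDB::Table",
      "Properties" : {
        "TableName" : "sesNotifications",
        "AttributeDefinitions" : [
          { "AttributeName" : "SnsTopicArn", "AttributeType" : "S" },
          { "AttributeName" : "SnsPublishTime", "AttributeType" : "S" },
          { "AttributeName" : "SnsMessageId", "AttributeType" : "S" }
        ],
        "KeySchema" : [
          { "AttributeName" : "SnsTopicArn", "KeyType" : "HASH" },
          { "AttributeName" : "SnsPublishTime", "KeyType" : "RANGE" }
        ],
        "GlobalSecondaryIndexes" : [ {
          "IndexName" : "SnsMessageId-index",
          "KeySchema" : [ { "AttributeName" : "SnsMessageId", "KeyType" : "HASH" } ],
          "Projection" : { "ProjectionType" : "ALL" },
          "ProvisionedThroughput" : { "ReadCapacityUnits" : "1", "WriteCapacityUnits" : "1" }
        } ],
        "ProvisionedThroughput" : { "ReadCapacityUnits" : "1", "WriteCapacityUnits" : "1" }
      }
    }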

Lambda 


Whilst we wait for our DynamoDB table to create, we can move on to the exciting part of the project: Lambda.

Jump to the Lambda management console and click on the massive blue button which says "Get Started Now" to begin building your first Lambda function.

Some points to note: Lambda functions are written in JavaScript, and they can be developed locally and uploaded as ZIP files including all of the necessary packages. I'm not there yet, so I'm going to do everything inline, via the GUI.

First things first, give the function a name and a description:



In the function code window, you can choose from a number of templates, or simply create your own:



My function was initially based on the SNS Message template, and here it is.

var aws = require('aws-sdk');
var ddb = new aws.DynamoDB({params: {TableName: 'sesNotification'}});
 
exports.handler = function(event, context) {
  var SnsMessageId = event.Records[0].Sns.MessageId;
  var SnsPublishTime = event.Records[0].Sns.Timestamp;
  var SnsTopicArn = event.Records[0].Sns.TopicArn;
  var SnsMessage = event.Records[0].Sns.Message;
  var LambdaReceiveTime = new Date().toString();
  
  var MessageContent = JSON.parse(SnsMessage);
  var SesNotify = MessageContent['notificationType'];
  var SesFailedTarget = MessageContent['bounce']['bouncedRecipients'][0]['emailAddress'];
  var SesFailedCode = MessageContent['bounce']['bouncedRecipients'][0]['diagnosticCode'];
  var SesMessageId = MessageContent['mail']['messageId'];

  
  var itemParams = {Item: {
        SnsTopicArn: {S: SnsTopicArn},
        SnsPublishTime: {S: SnsPublishTime},
        SnsMessageId: {S: SnsMessageId},
        LambdaReceiveTime: {S: LambdaReceiveTime},
        SnsMessage: {S: SnsMessage},
        SesNotificationType: {S: SesNotify},
        SesTarget: {S: SesFailedTarget},
        SesError: {S: SesFailedCode},
        SesMessageId: {S: SesMessageId}
  }};

  ddb.putItem(itemParams, function() {
    context.done(null, '');
  });
};
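For reference, the SnsMessage field the function parses is itself a JSON string. A trimmed-down SES bounce notification looks something like this (addresses and IDs made up):

{
  "notificationType": "Bounce",
  "bounce": {
    "bounceType": "Permanent",
    "bouncedRecipients": [
      {
        "emailAddress": "nobody@example.com",
        "diagnosticCode": "smtp; 550 5.1.1 user unknown"
      }
    ]
  },
  "mail": {
    "messageId": "00000000-example-message-id",
    "source": "blahblah@mitchyb.com"
  }
}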

With code in place, we need to assign a role to the Lambda function to allow it to interact with the DynamoDB table we've created.



To keep things simple at this stage, you could choose "Basic with Dynamo", this role will allow the function all of the rights it needs to interact with DynamoDB.

If you want to be a little more granular, you can use the "Basic execution role" and add a role policy that looks a bit like this ...

{
  "Version": "2012-10-17",
  "Statement":[
    {
      "Sid":"AllowDynamoDbAccess",
      "Effect":"Allow",
      "Action":["dynamodb:*"],
      "Resource":["arn:aws:dynamodb:us-east-1:<blahblahblah>:table/sesNotification"]    }
  ]
}

Once the role / policy has been created, assign it to the function and click the big blue "Create Lambda Function" button at the bottom of the screen.

SNS (Simple Notification Service) 


The final step in the project to bring it all together is to push the SNS notifications to our newly created Lambda function.

Hop on over to the SNS management console and track down the SNS topic we created earlier.



Click on the highlighted ARN to view the topic details, then click on the "Create Subscription" button.


The topic ARN will be auto-populated; choose AWS Lambda from the protocol list and select the new Lambda function from the Endpoint drop-down list.



Click Create Subscription.

And that's it. We should now be able to send a few emails to addresses which don't exist and wait for the bounces to start showing up in DynamoDB.

I have a small PHP script which I use to test sending emails via SES. I modified a few parameters to use a non-existent email address and, hey presto, this is what we get.



We've got the table items containing the SES Message ID, the failed target address and the Error Code. All very useful.
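For completeness, the test script itself is nothing special; something along the lines of the snippet below is all it takes. If you don't want to burn a real address, SES also provides a mailbox simulator address (bounce@simulator.amazonses.com) which always generates a hard bounce.

<?php
require 'vendor/autoload.php';

// SES client, same SDK v2 style as the Route53 script earlier in the blog
$sesClient = \Aws\Ses\SesClient::factory(array(
    'region' => 'us-east-1'
));

// Send a test message to an address that is guaranteed to bounce
$sesClient->sendEmail(array(
    'Source' => 'blahblah@mitchyb.com',
    'Destination' => array(
        'ToAddresses' => array('bounce@simulator.amazonses.com')
    ),
    'Message' => array(
        'Subject' => array('Data' => 'Bounce test'),
        'Body' => array(
            'Text' => array('Data' => 'Testing the SES -> SNS -> Lambda -> DynamoDB pipeline.')
        )
    )
));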


Friday, April 10, 2015

Elastic Beanstalk - Tagging Instances


We use Elastic Beanstalk a lot. We also use CloudFormation a lot.

We use CloudFormation to create Elastic Beanstalk applications and environments, a lot.

Something which came to light recently is that it’s not possible to apply tags to the instances instantiated as part of an Elastic Beanstalk autoscaling group. 

This is a problem for us because we use tagging as a way to manage cost and to control access to groups of resources (via IAM policy conditions and the like).

There are methods already documented for addressing this missing feature, but the problem I had with those methods is that they rely on static tags being defined within ebextension statements.
This means our dev teams need to include a custom .ebextensions\<blahblah>.config inside of each build they do.

So I thought I'd put my Friday to good use and come up with a method for getting my instances tagged, with the tag values assigned depending on the environment being deployed. 

Here it is. (This is by no means the only way, but it works well for us).


All of our applications are .NET. This technique also assumes your instances have the AWS PowerShell tools installed (which the default AMIs do) and that the IAM role you associate with your instances has the ec2:CreateTags right on the instances instantiated by Beanstalk.

  • The first challenge is applying the tags. As I said before, I don't want to hardcode my tags into a configuration file for each build; I'd rather keep things simple for our development teams and give them a single .ebextensions bundle. Beanstalk "Option Values" to the rescue. Whilst defining the environment, either through CloudFormation or using the Management Console, you can specify optional parameters, which are passed into the application stack as variables, PARAM1, PARAM2 etc. These option parameters provide me with my conduit for getting my tags into the stack.
  • In the case of a .NET application, the values of these parameters are passed to the <AppSettings> key within the application's web.config as a number of key-value pairs.
  • Next I need to capture what the value for each of these tags should be, for example, "Environment", "Application Version" etc. Nothing which can't be easily done thanks to a few parameters in my CloudFormation template.
"Environment": {
      "Description": "Environment setting (prod, stg, dev)",
      "Type": "String",
      "AllowedValues": [ "stg", "prod", "dev" ],
      "Default": "prod"
    },
  • Now that I have my tags and a way of getting those tags into the environment (PARAM1, PARAM2 etc), it's time for a little PowerShell-foo. The following section talks about what the script does.

## Ingest the environment variables from web.config

[xml]$data = Get-Content C:\inetpub\wwwroot\web.config

$data2 = $data.configuration.appSettings.add

$ebEnvironment = ($data2 | Select-Object -Property key,value | where key -Match "PARAM1").value
$ebFunction = ($data2 | Select-Object -Property key,value | where key -Match "PARAM2").value
$ebService = ($data2 | Select-Object -Property key,value | where key -Match "PARAM3").value
$ebOwner = ($data2 | Select-Object -Property key,value | where key -Match "PARAM4").value

# Apply tags to the instance - ReckonOneService
$instanceid = (Invoke-WebRequest http://169.254.169.254/latest/meta-data/instance-id).content
Set-DefaultAWSRegion -region ap-southeast-2
New-EC2Tag -Resource $instanceid -Tag @{ Key="Environment" ; Value=$ebEnvironment }
New-EC2Tag -Resource $instanceid -Tag @{ Key="Service" ; Value=$ebService }
New-EC2Tag -Resource $instanceid -Tag @{ Key="Function" ; Value=$ebFunction }
New-EC2Tag -Resource $instanceid -Tag @{ Key="Owner" ; Value=$ebOwner }
      1. As you can probably tell, this script reads the contents of the web.config file, specifically the contents of the Configuration\AppSettings section, under which Beanstalk very kindly places the keys PARAM1, PARAM2 etc and their corresponding values (which in this case are our tags).
      2. Once it's read the contents of the AppSettings key, we strip out the keys / values we're interested in, which are PARAM1, PARAM2, PARAM3 and PARAM4. Thanks to a little "Select-Object" and filtering, we're able to take the value of each of our PARAM keys and pop it into a variable (there's an example of the appSettings block being parsed just after this list).
      3. The next thing the script does is perform a "web-request" against the local metadata to retrieve the instanceId.
      4. Armed with our PARAM values and instanceID the script now proceeds to run the "New-EC2Tag" command to add the tags to the local instance.
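For reference, the appSettings block the script is parsing ends up looking roughly like this (the values are whatever you supplied as option values; these ones are made up):

<appSettings>
  <add key="PARAM1" value="prod" />
  <add key="PARAM2" value="WebFrontEnd" />
  <add key="PARAM3" value="ReckonOneService" />
  <add key="PARAM4" value="PlatformTeam" />
</appSettings>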

  • All that remains to be done is to tie this all together by getting the script onto the instance and having it run. Both of these steps are achieved using ebextensions.
  • I won't dive into what ebextensions are, except to say that they are a way to apply more advanced customisations to your instances and applications beyond the configuration options available through the management console.
  • Getting the script into the instance can be achieved either by passing the contents into a file using the ebextensions "files" section, or by storing the PowerShell script in a remote location (an S3 bucket, for example) and downloading it to the instance, also using the "files" section of the configuration file. I personally prefer storing the PowerShell scripts remotely and pulling them into the instance, for two reasons. Firstly, you don't need to worry about formatting the PowerShell correctly to comply with YAML or JSON standards (as you do when it's included inline in the configuration file), and secondly, it allows me to centrally store and maintain the scripts used to build my environment.

files:
 "C:\\cfn\\scripts\\applytags_sf.ps1":
  source: https://elasticbeanstalk-downloadables.s3.amazonaws.com/applytags.ps1
  authentication: S3Access

Resources:
 AWSEBAutoScalingGroup:
  Metadata:
   AWS::CloudFormation::Authentication:
    S3Access:
     type: S3
     roleName: aws-elasticbeanstalk-ec2-role
     buckets: elasticbeanstalk-downloadables

container_commands:
 "01_tag_instance":
  command: powershell.exe -ExecutionPolicy Unrestricted C:\\cfn\\scripts\\applytags_sf.ps1


  • Above you can see an example of the ebextension configuration file. I will try and walk you through what each of the sections does.
    1. Using the "files" section we instruct the bootstrap process to download our PowerShell script from a pre-determined S3 bucket location. 
    2. Remember to set the authentication type to "S3"; this is important when defining the authentication resource.
    3. Next we define a CloudFormation authentication resource for the S3 bucket containing our powershell script. Here we define which role should be used to inherit the rights from. (We're assuming you've granted your IAM role access to the bucket in question).
    4. Finally, we call the PowerShell script in the "container_commands" section. The reason we use the "container_commands" section is that these commands run after the package has been deployed, as opposed to "commands", which run before the package has been deployed.
  • Give this configuration file to your developers and ask them to place it in a folder called ".ebextensions" in the root of their Visual Studio project, as I've done in the screenshot below.

And that's about it. So let me walk you though what happens now:

  1. You set / capture the values of your tags using CloudFormation Parameters.
  2. The values are passed in to the instances and stored as key-value pairs in the <AppSettings> section of your .Net applications web.config file.
  3. During bootstrap of the instances the applytags.ps1 PowerShell script is pulled down to the instance and is executed.
  4. The script extracts the values from the web.config and then uses the AWS PowerShell toolkit command New-EC2Tag to create the tags and set them on the instance.
As I said, not the only solution I'm sure, but one which works for us quite nicely.



Tuesday, February 03, 2015

Cron and Git Pull

Assuming you're using SSH keys for authentication, you should be able to set up a simple cron job with syntax similar to this and get scheduled pulls happening.

*/30 * * * * cd /var/www/html/repos/myprojectname && git pull >> /dev/null
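If you'd rather keep a record of what each pull did than throw the output away, redirect both streams to a log file instead (the log path is just an example):

*/30 * * * * cd /var/www/html/repos/myprojectname && git pull >> /var/log/git-pull.log 2>&1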



A little about Me

My name is Mitch Beaumont and I've been a technology professional since 1999. I began my career working as a desk-side support engineer for a medical devices company in a small town in the middle of England (Ashby De La Zouch). I then joined IBM Global Services, where I began specialising in customer projects based on and around Citrix technologies. Following a couple of very enjoyable years with IBM, I relocated to London to work as a system operations engineer for a large law firm, where I was responsible for the day-to-day operations and development of the firm's global Citrix infrastructure. In 2006 I was offered a position in Sydney, Australia. Since then I've had the privilege of working for and with a number of companies in various technology roles, including as a Solutions Architect and technical team leader.