
Capture EC2 launch/termination events using CloudTrail, CloudWatch, and Lambda


Autoscaling lets your infrastructure grow and shrink in response to spikes in traffic or workload so that, at the same time, it's not costing you an arm and a leg. The catch is keeping track of what actually got launched and terminated.

We have a client who had exactly this problem, and we had to come up with a way to track autoscaling events. The goal was to track how many servers of each instance type were launched and how many were terminated. We tapped into CloudTrail events, sent them to CloudWatch Logs, and triggered a Lambda function to parse the logs and write to DynamoDB tables. Once the data is in DynamoDB, the rest is trivial. Even though the termination event doesn't tell us what instance type was terminated, it was just a matter of doing a left outer join (once you move the data to a relational database, that is). Even though this use case is very specific to our client, I thought explaining the way we went about it might help someone on the interweb. So what follows is a step-by-step walkthrough.

Create CloudTrail and CloudWatch

Create a new CloudTrail and specify an S3 bucket to store the logs.

Once the CloudTrail is created, you can enable CloudWatch Logs:

  1. Select the CloudTrail for which you need to enable CloudWatch Logs.
  2. You will see the option to enable CloudWatch Logs.

Now select your CloudTrail log group.
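If you prefer to script these steps instead of clicking through the console, here is a minimal sketch using the AWS SDK for Node.js. The trail name, bucket name, log group ARN, and role ARN below are placeholders, not values from the original setup.

// Hypothetical names; substitute your own trail, bucket, log group and role.
var aws = require('aws-sdk');
var cloudtrail = new aws.CloudTrail();

cloudtrail.createTrail({
    Name: 'ec2-activity-trail',                // assumed trail name
    S3BucketName: 'my-cloudtrail-logs-bucket'  // assumed bucket for the log files
}, function(err, data) {
    if (err) { return console.error(err); }
    // Point the trail at a CloudWatch Logs log group so Lambda can subscribe to it later.
    cloudtrail.updateTrail({
        Name: 'ec2-activity-trail',
        CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:123456789012:log-group:CloudTrail/logs:*',
        CloudWatchLogsRoleArn: 'arn:aws:iam::123456789012:role/CloudTrail_CloudWatchLogs_Role'
    }, function(err2) {
        if (err2) { return console.error(err2); }
        cloudtrail.startLogging({Name: 'ec2-activity-trail'}, console.log);
    });
});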

Create IAM Role for Lambda and DynamoDB

Choose whatever role name you want.

Select AWS Lambda as the service role type.

Attach the AmazonDynamoDBFullAccess policy. A note here: for demo purposes, I just chose full access. In production, you should only allow the minimum necessary access (a least-privilege sketch follows below).

Click the Create Role button.
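For reference, a narrower alternative to full access might look something like the sketch below: an inline policy that only allows writes to the two history tables. The role and policy names are made up, and the function would still need the usual Lambda logging permissions (for example, the AWSLambdaBasicExecutionRole managed policy).

var aws = require('aws-sdk');
var iam = new aws.IAM();

// Inline policy allowing PutItem only on the two tables the function writes to.
var policyDocument = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Action: ['dynamodb:PutItem'],
        Resource: [
            'arn:aws:dynamodb:us-east-1:123456789012:table/Titans-instance-start-state',
            'arn:aws:dynamodb:us-east-1:123456789012:table/Titans-instance-terminate-state'
        ]
    }]
};

iam.putRolePolicy({
    RoleName: 'ec2-event-logger-role',           // whatever role name you chose above
    PolicyName: 'ec2-event-logger-dynamodb-put', // assumed policy name
    PolicyDocument: JSON.stringify(policyDocument)
}, function(err, data) {
    if (err) { console.error(err); }
    else { console.log('Attached least-privilege policy', data); }
});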

Check the images below and use the following code.

Node.js code for storing launch and termination history in DynamoDB tables. We are going to store the data in two separate tables: one for instances created and the other for instances terminated.

var aws = require('aws-sdk');
const zlib = require('zlib');

exports.handler = (event, context, callback) => {
    // DynamoDB clients for the two history tables.
    const ddbStart = new aws.DynamoDB({params: {TableName: 'Titans-instance-start-state'}});
    const ddbStop = new aws.DynamoDB({params: {TableName: 'Titans-instance-terminate-state'}});

    // CloudWatch Logs delivers the payload base64-encoded and gzip-compressed.
    const payload = Buffer.from(event.awslogs.data, 'base64');
    zlib.gunzip(payload, (err, res) => {
        if (err) {
            return callback(err);
        }
        const parsed = JSON.parse(res.toString('utf8'));

        parsed.logEvents.forEach((logEvent) => {
            // Each log event message is a single CloudTrail record.
            const record = JSON.parse(logEvent.message);
            console.log('Decoded CloudTrail record:', record);

            if (!record.responseElements) {
                return; // failed API calls have no responseElements
            }

            const eventName = record.eventName;
            const awsRegion = record.awsRegion;
            const creationDate = record.eventTime;
            // Autoscaling-initiated calls carry invokedBy instead of userName.
            const userName = record.userIdentity.userName || record.userIdentity.invokedBy || 'unknown';
            const instancesSet = record.responseElements.instancesSet;
            console.log('userName', userName);
            console.log('instancesSet', instancesSet);

            if (eventName === 'RunInstances') {
                instancesSet.items.forEach((instance) => {
                    const itemParams = {Item: {
                        Id: {S: instance.instanceId},
                        Region: {S: awsRegion},
                        userName: {S: userName},
                        instanceType: {S: instance.instanceType},
                        createDate: {S: creationDate}
                    }};
                    ddbStart.putItem(itemParams, (err, data) => {
                        if (err) { context.fail(err); }
                        else { console.log(data); }
                    });
                });
            } else if (eventName === 'TerminateInstances') {
                // The terminate event does not include the instance type.
                instancesSet.items.forEach((instance) => {
                    const itemParams = {Item: {
                        Id: {S: instance.instanceId},
                        Region: {S: awsRegion},
                        userName: {S: userName},
                        createDate: {S: creationDate}
                    }};
                    ddbStop.putItem(itemParams, (err, data) => {
                        if (err) { context.fail(err); }
                        else { console.log(data); }
                    });
                });
            }
        });

        callback(null, `Successfully processed ${parsed.logEvents.length} log events.`);
    });
};

Now we will stream the logs to the Lambda function.

Now select your Lambda function.


This is the main configuration: filter the logs for instance launch and terminate events only.

Use this as the filter pattern:

{ $.eventName = "RunInstances" || $.eventName = "TerminateInstances" }
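If you would rather wire up the subscription from code than through the console, a sketch along these lines should work. The log group name, filter name, and Lambda ARN are placeholders, and you also have to grant CloudWatch Logs permission to invoke the function (for example, with lambda.addPermission).

var aws = require('aws-sdk');
var cwlogs = new aws.CloudWatchLogs();

cwlogs.putSubscriptionFilter({
    logGroupName: 'CloudTrail/logs',         // the log group your trail writes to
    filterName: 'ec2-run-terminate-events',  // assumed filter name
    filterPattern: '{ $.eventName = "RunInstances" || $.eventName = "TerminateInstances" }',
    destinationArn: 'arn:aws:lambda:us-east-1:123456789012:function:ec2-event-logger'
}, function(err, data) {
    if (err) { console.error(err); }
    else { console.log('Subscription filter created', data); }
});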

Create DynamoDB Tables
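The tables themselves only need the instance id as the partition key. If you prefer to create them from code rather than the console, here is a minimal sketch (the throughput values are arbitrary):

var aws = require('aws-sdk');
var ddb = new aws.DynamoDB();

['Titans-instance-start-state', 'Titans-instance-terminate-state'].forEach(function(tableName) {
    ddb.createTable({
        TableName: tableName,
        AttributeDefinitions: [{AttributeName: 'Id', AttributeType: 'S'}],
        KeySchema: [{AttributeName: 'Id', KeyType: 'HASH'}], // instance id as the partition key
        ProvisionedThroughput: {ReadCapacityUnits: 1, WriteCapacityUnits: 1}
    }, function(err, data) {
        if (err) { console.error(err); }
        else { console.log('Created table', tableName); }
    });
});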

We are now all set! Go ahead and create a few micro instances and test to see if your setup is working fine.

In my case, here is what the instance-create table looks like.

We use Redash extensively to visualize all sorts of data. Redash can connect to DynamoDB directly, but we chose to export this data every day to a couple of MySQL tables so that we could run a left outer join (a sketch of such an export follows below).
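The export itself is nothing fancy. Below is a rough sketch of the kind of daily job we mean, using the DynamoDB DocumentClient and the Node.js mysql package. The MySQL table and column names match the queries that follow, but the connection details are placeholders, and scan pagination (LastEvaluatedKey) is omitted for brevity; you would run the same loop for the terminate table into stopinstance.

var aws = require('aws-sdk');
var mysql = require('mysql');

var docClient = new aws.DynamoDB.DocumentClient();
var connection = mysql.createConnection({
    host: 'mysql.example.internal',  // placeholder connection details
    user: 'inventory',
    password: 'secret',
    database: 'myinventory'
});

// Copy every item from the start-state table into myinventory.startinstance.
docClient.scan({TableName: 'Titans-instance-start-state'}, function(err, data) {
    if (err) { throw err; }
    data.Items.forEach(function(item) {
        connection.query(
            'REPLACE INTO startinstance (id, createdate, instancetype, username, region) VALUES (?, ?, ?, ?, ?)',
            [item.Id, item.createDate, item.instanceType, item.userName, item.Region],
            function(qerr) { if (qerr) { console.error(qerr); } }
        );
    });
    connection.end(); // waits for queued queries before closing
});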

For example, the number of instances created per day

SELECT DATE(TIMESTAMP(REPLACE(REPLACE(createdate, 'T', ' '), 'Z', ''))) AS CrDate,
instancetype AS InstanceType,
count(*) AS CreateCount
FROM myinventory.startinstance
GROUP BY 1, 2
ORDER BY CrDate DESC, CreateCount DESC

The list of instances created and terminated:

SELECT T1.id AS InstanceID,
T1.createdate AS CreatedON,
T1.instancetype AS InstanceType,
T1.username AS CreatedBy,
T1.region AS region,
T2.createdate AS TerminatedON,
T2.username AS TerminatedBy
FROM myinventory.startinstance T1
LEFT OUTER JOIN myinventory.stopinstance T2 ON T1.id = T2.id
WHERE MONTH(CAST(REPLACE(REPLACE(T1.createdate, 'T', ' '), 'Z', '') AS DATETIME)) = 12
ORDER BY T1.createdate DESC
