Posts

Showing posts from October, 2019

Kubernetes Important Commands And Concepts

1. Listing all the resources in Kubernetes: kubectl api-resources -o name (lists the resources by name only).
2. Spec: every object in Kubernetes has a specification, provided by the user, which defines the desired state for that object to be in.
3. Status: the current, actual state of the object. Kubernetes continuously works to make the status match the desired state given in the spec.
4. kubectl get: lists objects in Kubernetes, e.g. kubectl get pods -n kube-system, kubectl get nodes. You can also get more detailed information about a single object, e.g. kubectl get nodes kube-node1 -o yaml (the YAML representation of that object).
5. kubectl describe node kube-node1: a readable overview of an object, but not in YAML format.
6. Pods can contain one or more containers and a set of resources shared by those containers. All containers in Kubernetes run as part of a pod. Example of a pod YAML (a minimal sketch also follows below): https://github.com/ankit630/IAC/blob/master/kubernetes/...
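Since the pod YAML linked above is truncated here, the following is a minimal sketch of a single-container pod applied with kubectl; the pod name, label and nginx image tag are illustrative assumptions, not the contents of the linked file.

# Minimal single-container pod; name, label and image are illustrative assumptions.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  labels:
    app: nginx-example
spec:
  containers:
    - name: nginx
      image: nginx:1.17
      ports:
        - containerPort: 80
EOF

The commands from points 4 and 5, e.g. kubectl get pod nginx-example -o yaml and kubectl describe pod nginx-example, can then be used to inspect the spec and status of the created object.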

Part 2 Using Athena to query S3 buckets

This is a continuation of my previous post on how you can use Athena to query the S3 buckets storing your CloudTrail logs, in order to better organize your security and compliance, which is a hard thing to achieve in legacy/large accounts with a large number of users.

Question: Identifying the 100 most used IAM keys.

IAM roles are usually a better approach than IAM keys for authentication, as IAM roles rotate their credentials roughly every 15 minutes, making the keys much harder to intercept and increasing the security of the account.

Answer:

SELECT
  useridentity.accesskeyid,
  useridentity.arn,
  eventname,
  COUNT(eventname) as frequency
FROM account_cloudtrail
WHERE sourceipaddress NOT LIKE '%.com'
  AND year = '2019'
  AND month = '01'
  AND day = '01'
  AND useridentity.accesskeyid LIKE 'AKIA%'
GROUP BY useridentity.accesskeyid, useridentity.arn, eventname ...
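Once heavily used access keys have been identified by the query above, it can help to map each key back to its owning IAM user and its last recorded activity. A minimal sketch using the IAM CLI; the key ID below is a placeholder, substitute one returned by the query.

# Show which IAM user owns the key and when/where it was last used.
# AKIAEXAMPLEKEYID is a placeholder, not a real key.
aws iam get-access-key-last-used --access-key-id AKIAEXAMPLEKEYID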

Part 1 Using Athena to query S3 buckets

While it's great to push all the log data gathered from various sources, like your load balancers, CloudTrail, application logs etc., to S3 buckets, as your infrastructure grows in size it becomes difficult to analyze such a huge amount of data spanning months or years. You can use the Athena service of AWS to query the data in S3 without needing to download and process it manually. This removes the requirement for extra processing, storage, etc. We are going to cover the most effective queries which can help you analyze and extract meaningful information from your S3 log data.

Question: Identifying all the users, events and accounts accessing a particular S3 bucket.

Answer:

SELECT DISTINCT
  account,
  eventname,
  useridentity.arn,
  useragent,
  vpcendpointid,
  json_extract_scalar(requestparameters, '$.bucketName') AS bucketName, ...
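The same queries can also be run from the command line instead of the Athena console. A minimal sketch using the AWS CLI; the database name, results bucket and the shortened query string are assumptions for illustration.

# Submit a query; database name and results bucket are placeholders.
QUERY_ID=$(aws athena start-query-execution \
  --query-string "SELECT DISTINCT account, eventname, useridentity.arn FROM account_cloudtrail LIMIT 10" \
  --query-execution-context Database=default \
  --result-configuration OutputLocation=s3://my-athena-results-bucket/ \
  --output text --query 'QueryExecutionId')

# Check the execution state (poll until SUCCEEDED), then fetch the result rows.
aws athena get-query-execution --query-execution-id "$QUERY_ID" \
  --query 'QueryExecution.Status.State' --output text
aws athena get-query-results --query-execution-id "$QUERY_ID"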

Command Logging & Kibana Plotting

Problem Statement: Monitor & track all the activities/commands used by users on a system.

Minimum Requirement(s):
  1) A separate ELK cluster for command logging
  2) The Snoopy Logger agent on all client machines
  3) The Filebeat agent (a minimal configuration sketch follows below)

Context: In order to track which commands are being fired by users, we'll need the bash_history of each specific user, and it becomes a tedious task when we have to track a specific user (or multiple users) in different machine...
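A minimal sketch of the Filebeat side, assuming Snoopy Logger is already installed and writes command lines via syslog to /var/log/auth.log (use /var/log/secure on RHEL/CentOS); the Logstash host below is a placeholder for the dedicated ELK cluster.

# Point Filebeat at the syslog file that receives Snoopy's command lines
# and ship only those lines to the command-logging ELK cluster.
cat <<'EOF' > /etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/auth.log
    include_lines: ['snoopy']
output.logstash:
  hosts: ["logstash.example.com:5044"]
EOF
systemctl restart filebeat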

[Solved] CannotPullContainerError: no space left on device

By default, the ECS service of AWS doesn't take free disk space on ECS instances into account when placing new tasks. It uses only CPU and memory resources for task placement. When an instance's disk fills up, ECS still tries to start new tasks on it, but they fail with the error “CannotPullContainerError: no space left on device”. Overfilled instances stay active in the cluster until a regular cluster roll replaces all instances.

Solution: The correct way to handle task placement is to let ECS know about free disk space (by setting a custom attribute) and to set a placement constraint on the task definition (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html). Once we have a custom attribute that indicates disk usage, we can configure the task definition to not place a task if the used disk space is greater than the configured threshold. This can be achieved by including a shell script that monitors free space and deregisters an instance (a minimal sketch of the attribute-setting part follows below). The script needs to be run throu...
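A minimal sketch of how such a script could set the custom attribute from the instance itself, assuming the instance profile is allowed to call ecs:PutAttributes; the attribute name, the 85% threshold and the /var/lib/docker path are assumptions.

#!/bin/bash
# Cluster name and container-instance ARN from the local ECS agent introspection endpoint.
CLUSTER=$(curl -s http://localhost:51678/v1/metadata | jq -r '.Cluster')
INSTANCE_ARN=$(curl -s http://localhost:51678/v1/metadata | jq -r '.ContainerInstanceArn')

# Percentage of used space on the Docker data volume.
USED=$(df --output=pcent /var/lib/docker | tail -1 | tr -dc '0-9')

# Flag the instance so a placement constraint can skip it when usage is 85% or more.
if [ "$USED" -ge 85 ]; then VALUE=false; else VALUE=true; fi

aws ecs put-attributes --cluster "$CLUSTER" \
  --attributes "name=disk-space-available,value=$VALUE,targetType=container-instance,targetId=$INSTANCE_ARN"

The matching placement constraint in the task definition would then be a memberOf constraint with an expression such as attribute:disk-space-available == true (again, the attribute name here is an assumption).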