
Tuesday, February 28, 2017

Custom CloudWatch RDS Monitoring Plugins Part-2

In part-1 we discussed the executable RDS monitoring script, which lets you pass any SQL, take the output of that SQL, and push the result to CloudWatch as a custom metric for monitoring; alarms created on this metric are raised whenever the threshold is crossed.

In our use case the result of the SQL execution is 0, which denotes that there is no error on the RDS instance. If there is any error, an error message is displayed and the result is non-zero, which causes CloudWatch to trigger an alarm.

Further, the SQL output is posted in the email body and sent to the DBA and DevOps DLs.

In this post we cover the configuration to be used along with the previous executable script. Once configured like this, you can schedule the script with cron on any server and use the awscli there to create the alarms that trigger alerts on the RDS instance, as sketched below.
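
As an illustration (the script path, metric and alarm names, threshold, and SNS topic ARN below are placeholders, not values from the original post), the scheduling and alarm creation could look like this:

 # Hypothetical crontab entry: run the part-1 script every 5 minutes
 */5 * * * * /opt/scripts/rds_custom_check.sh >> /var/log/rds_custom_check.log 2>&1

 # One-time alarm creation with awscli: fire when the custom metric is non-zero,
 # i.e. the SQL check reported an error
 aws cloudwatch put-metric-alarm \
   --alarm-name rds-custom-sql-check \
   --namespace Custom/RDS \
   --metric-name SqlCheckResult \
   --statistic Maximum \
   --period 300 \
   --evaluation-periods 1 \
   --threshold 0 \
   --comparison-operator GreaterThanThreshold \
   --alarm-actions arn:aws:sns:ap-south-1:123456789012:dba-devops-alerts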

Monday, February 27, 2017

Custom CloudWatch RDS Monitoring Plugins Part-1

Monitoring RDS instances is necessary for detecting issues. AWS RDS provides out-of-the-box metrics, i.e. system-level metrics, but there are occasions when you want to monitor things like blocked connections, advanced queues, etc. For this you can use the CloudWatch plugin below to monitor anything in RDS based on a custom query.

The plugin works on the logic that if the query executed on RDS does not produce any error message, the count is 0, which means OK; if something is wrong, an error message is printed and the count is non-zero, which means alarm.

If you are using sharding, you need to execute the same query on all of your shard databases. The script can run against a single database or against multiple databases in the sharded case.
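
The full executable script is covered in part-1 and isn't reproduced in this excerpt; a minimal sketch of the idea, assuming MySQL-based shards (the hosts, credentials, and metric names below are placeholders), might look like:

 #!/bin/bash
 # Sketch only: run a check query on each shard and push the count of error
 # lines to CloudWatch as a custom metric (0 = ok, non-zero = alarm)
 SHARDS="shard1.example.com shard2.example.com"   # placeholder shard endpoints
 CHECK_SQL="SELECT ..."                           # your custom check query
 for HOST in $SHARDS; do
   # Count only stderr lines: an error-free run yields a count of 0
   ERRORS=$(mysql -h "$HOST" -u monitor -p"$MONITOR_PASS" -e "$CHECK_SQL" 2>&1 >/dev/null | wc -l)
   aws cloudwatch put-metric-data \
     --namespace Custom/RDS \
     --metric-name SqlCheckResult \
     --dimensions InstanceHost="$HOST" \
     --value "$ERRORS"
 done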

Saturday, February 18, 2017

Why Security is a DevOps Concern

Since DevOps deals with rapid releases of the application over short periods of time using a combination of CI-CD and automation, security plays a very significant role in making the overall process more secure, so that you don't lose out to loopholes that someone could exploit to penetrate your systems or insert their malicious code.

Following are the key ways through which you can adopt security in your day-to-day activities:

1. Security as part of the team
Someone within the team should take responsibility for security, and decide, as and when required, whether to secure things themselves or involve the security team to get it secured.

2. Understand the Risks
Understanding the risks helps in involving security in your day-to-day operations and closing the loopholes. Once you understand the risks, you will automatically take the necessary steps to fix them.

3. Security is part of Everything
Security forms the core of everything, whether it is your network, systems, code, or monitoring.

4. User Experience is important
The end-user experience is important: if you enforce passwords that are too complex for your environment, users will write them down, and those notes can easily be exploited to gain access to your systems. So always weigh the user experience against the security policy you are enforcing.

Application Security Principles

If you are using the cloud to power your web or mobile applications, understanding security is a key aspect of delivering a good business application.

Following are the summarized security principles:

1. Data in Transit protection
Consumer data transiting networks should be adequately protected against tampering and eavesdropping. This can be done through encryption using SSL certificates, combined with network protection tools such as VPNs.

2. Asset protection
The assets storing or processing the data should be protected against physical tampering, loss, and damage. The cloud provider limits physical access; beyond that, securing access with key-based authentication, storing data in encrypted form, and backing up data can be used.

3. Separation Between Consumers
Prevent one malicious or compromised consumer from affecting the service or data of another. This can be done through individual user profiling, authentication, and database-level controls, where each consumer gets access only to their own account and data.

Thursday, February 16, 2017

Eternal Bash History for user command auditing in Linux

There are times when you need to track the commands executed by users. This includes all system users, irrespective of team, so that if things go wrong it can easily be traced who executed which command.

This also helps resolve disputes within a team when two users each claim that they haven't executed a command. And if you are installing something or doing some new configuration, you can refer back to the commands you executed.

Place the following configuration in /etc/bashrc:

 if [ "$BASH" ]; then  
 export HISTTIMEFORMAT="%Y-%m-%d_%H:%M:%S "
 export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND ; }"'echo "`date +'%y.%m.%d-%H:%M:%S:'`" $USER "("$ORIGINAL_USER")" "COMMAND: " "$(history 1 | cut -c8-)" >> /var/log/bash_eternal_history'
 alias ehistory='cat /var/log/bash_eternal_history'
 readonly PROMPT_COMMAND
 readonly HISTSIZE
 readonly HISTFILE
 readonly HOME
 readonly HISTIGNORE
 readonly HISTCONTROL
 fi

The output is appended to a log file under the /var/log directory. Execute the following commands to create the log file:

 touch /var/log/bash_eternal_history      # create the log file
 chmod 777 /var/log/bash_eternal_history  # world-writable so every user's shell can append
 chattr +a /var/log/bash_eternal_history  # append-only: existing entries cannot be edited or deleted



Tuesday, February 7, 2017

Pulling Messages from an Amazon SQS Queue to a file with Python

Amazon SQS is a high-throughput messaging queue from AWS.

You can send any type of messages or logs to SQS and then use a consumer (script) to pull those messages from SQS and take action based on them. One use case: push all your ELB logs to SQS, and from SQS send them anywhere, including your event notifier (SIEM) tools, batch processing, automation, etc.

The following generalized Python script pulls up to 10 messages at a time from SQS (the maximum SQS returns per receive call) and writes them to a file. If you want to increase throughput, you just need to run more processes. For example, if one process downloads 5 messages a minute, starting 10 processes of the script brings it to roughly 50 messages a minute. Note that neither Python nor SQS imposes a hard limit here; you can scale to any number of processes, but the underlying operating system is the limiting factor, depending on the overall CPU available and I/O operations.

After downloading these logs you can analyze them, forward them elsewhere through syslog, or write your own automation scripts.
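
The Python consumer itself isn't reproduced in this excerpt; as a rough sketch of the same poll-and-write loop using awscli instead (the queue URL and output path are placeholders, and single-line message bodies are assumed):

 #!/bin/bash
 # Sketch: long-poll up to 10 messages per request and append the bodies to a file
 QUEUE_URL="https://sqs.ap-south-1.amazonaws.com/123456789012/my-queue"  # placeholder
 OUTFILE="/var/log/sqs_messages.log"
 while true; do
   MSGS=$(aws sqs receive-message --queue-url "$QUEUE_URL" \
          --max-number-of-messages 10 --wait-time-seconds 20 \
          --query 'Messages[].[Body,ReceiptHandle]' --output text)
   [ -z "$MSGS" ] || [ "$MSGS" = "None" ] && continue
   # First column is the message body, second the receipt handle used to delete it
   while IFS=$'\t' read -r BODY HANDLE; do
     echo "$BODY" >> "$OUTFILE"
     aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$HANDLE"
   done <<< "$MSGS"
 done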

[Solved] S3 Bucket Creation Fails with IllegalLocationConstraintException Error

While creating a bucket using the s3api, the bucket creation fails with the following error message:


An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.


The error came up specifically in the Mumbai region, while the same command worked in the Singapore region.

Not Working

aws s3api create-bucket --bucket bucketname --region ap-south-1

Working

aws s3api create-bucket --bucket bucketname --region ap-southeast-1


The reason for this error is an additional parameter that needs to be passed for bucket creation using the s3api in the Mumbai region, i.e. --create-bucket-configuration with LocationConstraint=ap-south-1. Once you pass it, you should be able to create the bucket from the command line.

Working

aws s3api create-bucket --bucket bucketname --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1

Output

{
"Location": "http://bucketname.s3.amazonaws.com/"
}


RDS Alerts

RDS forms a crucial part of a web application, and any problem in it can lead to application downtime, reduced performance, 5xx errors, and a degraded user experience. RDS monitoring therefore plays an important part. Below is a list of parameters that can be monitored to measure the normal operation of RDS. Some of the monitoring metrics are provided by AWS, and the rest can be created using custom scripts.

Thresholds depend upon the size of the RDS instance (CPU cores, memory, etc.); the values below only give an idea of suitable thresholds.

1. CPU Utilization:- CPU utilization increases as the workload and processing on RDS increase. Alert threshold: [CPU Utilization] >= 80% for 5 minutes (see the illustrative awscli command after this list).

2. Database Connections:- An alert should be raised if database connections grow beyond a limit, because if the application cannot get a free connection, those requests will fail. Alert threshold: [Database Connections] >= 10000.

3. Disk Queue Depth:- Disk queue depth, which represents the pending I/O operations for the volume, can increase significantly if your RDS instance is doing many more I/O operations, resulting in increased latency. Adding more disks can be used to overcome this scenario.

4. Free Storage Space:- Represents the amount of storage space available on the RDS instance. Alert threshold: [Free Storage Space] < 2048 GB.
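
As an illustration, the CPU alert above could be created with awscli as follows (the alarm name, DB instance identifier, and SNS topic ARN are placeholders):

 aws cloudwatch put-metric-alarm \
   --alarm-name rds-cpu-high \
   --namespace AWS/RDS \
   --metric-name CPUUtilization \
   --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
   --statistic Average \
   --period 300 \
   --evaluation-periods 1 \
   --threshold 80 \
   --comparison-operator GreaterThanOrEqualToThreshold \
   --alarm-actions arn:aws:sns:ap-south-1:123456789012:dba-devops-alerts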

Wednesday, February 1, 2017

Using EOF to execute multiple commands on a remote server

You can run Linux commands on remote machines using a loop in a bash script.

If you want to run multiple commands on the remote server, you can use a here-document delimited by EOF, which opens a block in which you can enter the multiple commands you want to execute on the remote machine. Once you are done entering the commands, you repeat EOF on its own line to close the block.

The heredoc feeds its contents as input to a command. In our case we are feeding the commands to sudo -i so that they are executed as the root user.


 # Loop over the hosts in file.txt and run the heredoc commands as root on each
 for i in `cat file.txt`; do
   echo "###$i####"
   ssh -t -i key.pem -p 22 ec2-user@$i 'sudo -i << \EOF
 export https_proxy=http://proxy.example.com:3128; export http_proxy=http://proxy.example.com:3128
 yum install sendmail -y
 EOF'
 done