Posts

Showing posts from 2017

Installing Salt on CentOS 7

Install the latest version of Salt directly from the SaltStack repo instead of the EPEL repo, as the SaltStack repo provides the latest available version while the EPEL repo currently carries older packages due to dependency issues with lower-version packages. Create the following Salt repo file in the /etc/yum.repos.d directory: vim saltstack.repo
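A minimal sketch of creating that repo file from the shell; the baseurl and gpgkey shown are assumptions, so check the SaltStack documentation for the current repository URL for your release:
cat > /etc/yum.repos.d/saltstack.repo <<'EOF'
[saltstack]
name=SaltStack repo for RHEL/CentOS 7
baseurl=https://repo.saltstack.com/yum/redhat/7/x86_64/latest
enabled=1
gpgcheck=1
gpgkey=https://repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub
EOF
yum clean expire-cache && yum install -y salt-minion   # or salt-master, depending on the node's role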

Comparison between the Amazon AWS Transit VPC and IPsec VPN Environment

-> The AWS Transit VPC is a transit overlay connection solution between multiple spoke VPCs, and it reduces the overhead of configuring a mesh of VPN connections from your datacenter to each individual VPC. Instead, the spoke VPCs just need to connect to the Transit VPC for inter-spoke (VPC-to-VPC) connectivity and to extend the spokes to reach the remote networks/datacenters. This solution removes the pain of implementing individual IPsec VPN connections from your datacenters to each VPC located in a different region.
-> The Transit VPC uses the same IPsec VPN environment to connect to different AWS regions as a normal IPsec VPN connection between two different regions. I reviewed the output [mtr and ping] you shared ("mtr_result_manual_ipsec.png", "mtr_result_transit_vpc.png") and it looks like you are getting roughly the same latency over the Transit VPC setup and the manual IPsec setup, which is normal as both setups use an IPsec VPN tunnel betwe...

Enabling the JMX Port in Kafka

If you want to collect metrics for monitoring Kafka then you need to enable the JMX port for Kafka, which is 9999 by default. You configure this port in kafka/bin/kafka-server-start.sh by exporting JMX_PORT, which can then be used to pull Kafka metrics. The same port is also used by the Datadog agent to provide metrics for the Kafka cluster. Just add the following line in kafka-server-start.sh:
export JMX_PORT=${JMX_PORT:-9999}
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
Afterwards you will need to restart the Kafka broker service to make this active. Verify that the service is listening on port 9999:
# telnet localhost 9999
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
# netstat -tunlp | grep 9999
tcp        0      0 0.0.0.0:9999            0.0.0.0:*             ...
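Once the port is open you can query broker MBeans directly with the JmxTool class that ships with Kafka. A sketch only: the MBean name below is one common example, and localhost:9999 assumes the default configuration above.
kafka/bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec' \
  --reporting-interval 5000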

Manually allocating shards when Elasticsearch cluster is red

If you have a large number of shards with replicas holding a huge amount of data, it is possible for the ES cluster to go red. The cluster goes red because of issues with primary shards that become unassigned. Depending on the situation, there are a number of steps you can take to resolve the issue; as a last resort you may have to allocate the shards manually, but that really is the last resort. The best approach is to figure out what the issue with the cluster is, i.e. why it is not assigning the shards. As a preliminary step you should turn replication off, otherwise you will have a comparatively higher number of unassigned shards and recovery may take a lot of time. If you want to recover quickly it is better to set the replicas to 0 and then add them back at a later point in time.
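A minimal sketch of both steps, assuming Elasticsearch 5.x and hypothetical index/node names (allocate_stale_primary accepts possible data loss, so only use it after investigating why the shard is unassigned):
# 1. Temporarily drop replicas so recovery only has to deal with primaries
curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '
{ "index": { "number_of_replicas": 0 } }'
# 2. Force-allocate a stuck primary shard to a specific node (last resort)
curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '
{ "commands": [ { "allocate_stale_primary": {
      "index": "my_index", "shard": 0, "node": "node-1", "accept_data_loss": true } } ] }'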

Adding log rotation in Elasticsearch

Elasticsearch supports log rotation with built-in functionality; you just need to configure log4j2.properties for it. Copy the configuration below to the following location: vim /etc/elasticsearch/log4j2.properties
status = error

# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n ...

Listing the complete IP ranges of AWS

If you want to whitelist the IPs of AWS, use the following command to list the IP ranges of AWS (here filtered to the EC2 service in us-east-1):
curl https://ip-ranges.amazonaws.com/ip-ranges.json -s | jq '.prefixes[] | select(.region=="us-east-1" and .service=="EC2").ip_prefix'
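The same published JSON can be sliced other ways; for example, this sketch (assuming jq is installed) lists every region that appears in the ranges:
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[].region' | sort -u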

docker version and docker info commands

You can use the docker version command, which gives you information about the currently installed Docker version, API version, Go version and build.
[user@ankit63001 ~]$ docker images
REPOSITORY     TAG         IMAGE ID      CREATED       SIZE
hello-world     latest       1815c82652c0    5 days ago     1.84kB
[user@ankit63001 ~]$ docker version
Client:
 Version:   17.05.0-ce
 API version: 1.29
 Go version:  go1.7.5
 Git commit:  89658be
 Built:    Thu May 4 22:06:25 2017
 OS/Arch:   linux/amd64
Server:
 Version:   17.05.0-ce
 API version: 1.29 (minimum version 1.12)
 Go version:  go1.7.5
 Git commit:  89658be
 Built:    Thu May 4 22:06:25 2017
 OS/Arch:   linux/amd64
 Experimental: false
[user@ankit63001 ~]$
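docker info complements docker version with daemon-wide details such as container and image counts, the storage and logging drivers, and the kernel version. A sketch of two common invocations (the Go-template fields shown are examples of what can be selected):
docker info                                              # full daemon summary
docker info --format '{{.ServerVersion}} {{.Driver}}'   # print only selected fields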

Deploying an EC2 Instance Using Terraform

You can use the following Terraform script to deploy an instance in your AWS account. Terraform will create a t2.medium instance from the official RHEL 7.2 AMI (using its AMI ID) within the specified subnet, and will create a 30GB root block device and a 10GB EBS volume. The instance will use a predefined key and will add the specified tags to the instance being launched.
provider "aws" {
  access_key = "AKXXXXXXXXXXXXXXXXX"
  secret_key = "2YXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXxx"
  region = "ap-south-1"
}
resource "aws_instance" "instance_name" {
  ami = "ami-cdbdd7a2"
  count = 1
  instance_type = "t2.medium"
  security_groups = ["sg-f70674re"]
  subnet_id = "subnet-526bcb6d"
  root_block_device = {
    volume_type = "standard"
    volume_size = "30"
  }
  ebs_block_device = {
    device_name = "/dev/sdm"
    volume_ty...
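Once the file (for example main.tf) is saved, the usual Terraform workflow applies:
terraform init    # downloads the AWS provider plugin
terraform plan    # shows the instance and volumes that will be created
terraform apply   # creates the resources in your account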

Custom Cloudwatch Alarm Configuration Part-8

As discussed in the previous post regarding the alarm plugins, those plugins push the metric data to Cloudwatch using a cron running every minute or every 5 minutes, depending upon your requirements. Next we have to create alarms in Cloudwatch on the above metrics. The logic is that if a metric crosses the threshold value then an event is triggered, which could, for example, send a mail through SNS alerting that the value has crossed the threshold; if it again comes below the threshold then its state is changed from alarm to ok, which acts like a recovery. But unlike from the console, we are going to trigger this programmatically using the AWS CLI provided by AWS. The script works sequentially and uses an array which runs in a loop, and all the relevant alarms are created. The most important thing to be considered here is the name of the alarm which is to be created in Cloudwatch. Now you can put any name, but the name based on programmatic assump...
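For reference, this is roughly what one iteration of such a loop would issue; a sketch with hypothetical alarm/metric names, namespace, threshold and SNS topic ARN, not the author's exact script:
aws cloudwatch put-metric-alarm \
  --alarm-name "i-0abc1234_cw_memory" \
  --namespace "CustomMetrics" \
  --metric-name "MemoryUtilization" \
  --dimensions Name=InstanceId,Value=i-0abc1234 \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:ap-south-1:123456789012:ops-alerts \
  --ok-actions arn:aws:sns:ap-south-1:123456789012:ops-alerts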

Custom Cloudwatch Plugins CW_tcpConnections Part-7

The following Cloudwatch plugin can be used to determine the established TCP connections.
#!/bin/bash
#
# About : Check TCP connections
#
# Name : cw_tcpconnection.sh
DIR=$(dirname $0);
PLUGIN_NAME='cw_tcpconnection';
# Include configuration file
source ${DIR}/../conf/plugin.conf;
#Get Current Instance ID
INSTANCE_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`);
#Get Hostname
HOST_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/hostname`);
# Help
usage() {
  echo "Usage: $0 [-n ] [-d ] [-m ] [-h ] [-p ]" 1>&2;
  exit 1;
}
# Logger
logger(){
  SEVERITY=$1;
  MESSAGE=$2;
  DATE=`date +"[%Y-%b-%d %H:%M:%S.%3N]"`;
  echo -e "${DATE} [${SEVERITY}] [${PLUGIN_NAME}] [${INSTANCE_ID}] [${HOST_ID}] ${MESSAGE}" >> ${DIR}/../logs/appcwmon.log;
}
# Process Arguments
if [ $# -eq 0 ]; then
  # When no argument is passed
  logger ERROR "Invalid a...
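The core measurement behind such a plugin is a one-liner; a sketch of how the established-connection count could be gathered before being pushed to Cloudwatch (not the full plugin):
ESTABLISHED=$(netstat -ant | grep -c ESTABLISHED)   # count of established TCP connections
echo "Established TCP connections: ${ESTABLISHED}"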

Custom Cloudwatch Plugins CW_Rabbitmq Queue Message Length Part-6

The following Cloudwatch plugin helps to measure the number of messages in RabbitMQ (Unacked, Ready and Total messages), on which alarms can be configured later using the Cloudwatch API.
#!/bin/bash
#
# About : Check RabbitMQ Queue Message Length
#
# Name : cw_rabbitmq.sh
DIR=$(dirname $0);
PLUGIN_NAME='cw_rabbitmq';
# Include configuration file
source ${DIR}/../conf/plugin.conf;
#Get Current Instance ID
INSTANCE_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`);
#Get Hostname
HOST_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/hostname`);
# Help
usage() {
  echo "Usage: $0 [-n ] [-d ] [-m ] [-u ] [-p ] [-q ]" 1>&2;
  exit 1;
}
# Logger
logger(){
  SEVERITY=$1;
  MESSAGE=$2;
  DATE=`date +"[%Y-%b-%d %H:%M:%S.%3N]"`;
  echo -e "${DATE} [${SEVERITY}] [${PLUGIN_NAME}] [${INSTANCE_ID}] [${HOST_ID}] ${MESSAGE}" >> ${DIR}/../logs/appcwmon.log;
}
# Process Argu...
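The queue depths themselves can be read from rabbitmqctl; a sketch of the underlying query (the queue name is hypothetical and this assumes rabbitmqctl is run on the broker host):
rabbitmqctl list_queues name messages messages_ready messages_unacknowledged | grep my_queue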

Custom Cloudwatch Plugins CW_ProcessCount Part-5

You can monitor the number of processes running for a service, to determine whether the service is running on the server or not, using the following Cloudwatch plugin.
#!/bin/bash
#
# About : Check Process Running Status
#
# Name : cw_process.sh
DIR=$(dirname $0);
PLUGIN_NAME='cw_process';
# Include configuration file
source ${DIR}/../conf/plugin.conf;
#Get Current Instance ID
INSTANCE_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`);
#Get Hostname
HOST_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/hostname`);
# Help
usage() {
  echo "Usage: $0 [-n ] [-d ] [-m ] [-p ]" 1>&2;
  exit 1;
}
# Logger
logger(){
  SEVERITY=$1;
  MESSAGE=$2;
  DATE=`date +"[%Y-%b-%d %H:%M:%S.%3N]"`;
  echo -e "${DATE} [${SEVERITY}] [${PLUGIN_NAME}] [${INSTANCE_ID}] [${HOST_ID}] ${MESSAGE}" >> ${DIR}/../logs/appcwmon.log;
}
# Process Arguments
if [ $# -eq 0 ]; then
  # ...
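The count itself typically comes from pgrep or ps; a sketch of the measurement (the process name is hypothetical):
COUNT=$(pgrep -c -f nginx)    # number of processes whose command line matches "nginx"
echo "Process count: ${COUNT}"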

Custom Cloudwatch Plugins CW_Netconnection Part-4

Cloudwatch can be used to monitor the established connections to the VM. This helps in tracking connections in case your application is network intensive.
#!/bin/bash
#
# About : Check Local and Foreign Network Connections
#
# Name : cw_netconnection.sh
DIR=$(dirname $0);
PLUGIN_NAME='cw_netconnection';
# Include configuration file
source ${DIR}/../conf/plugin.conf;
#Get Current Instance ID
INSTANCE_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`);
#Get Hostname
HOST_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/hostname`);
# Help
usage() {
  echo "Usage: $0 [-n ] [-d ] [-m ] [-s ] -t [ LOCAL | FOREIGN ] -p " 1>&2;
  exit 1;
}
# Logger
logger(){
  SEVERITY=$1;
  MESSAGE=$2;
  DATE=`date +"[%Y-%b-%d %H:%M:%S.%3N]"`;
  echo -e "${DATE} [${SEVERITY}] [${PLUGIN_NAME}] [${INSTANCE_ID}] [${HOST_ID}] ${MESSAGE}" >> ${DIR}/../logs/appcwmon.log;
}
# Process A...

Custom Cloudwatch Plugins CW_MemoryUsage Part-3

The following plugin pushes the memory consumption of the VM to Cloudwatch, which you can use to set alarms, and also for autoscaling or taking actions when combined with events.
#!/bin/bash
#
# About : Check used memory in percentage
#
# Name : cw_memory.sh
DIR=$(dirname $0);
PLUGIN_NAME='cw_memory';
# Include configuration file
source ${DIR}/../conf/plugin.conf;
#Get Current Instance ID
INSTANCE_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`);
#Get Hostname
HOST_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/hostname`);
# Help
usage() {
  echo "Usage: $0 [-n ] [-d ] [-m ] " 1>&2;
  exit 1;
}
# Logger
logger(){
  SEVERITY=$1;
  MESSAGE=$2;
  DATE=`date +"[%Y-%b-%d %H:%M:%S.%3N]"`;
  echo -e "${DATE} [${SEVERITY}] [${PLUGIN_NAME}] [${INSTANCE_ID}] [${HOST_ID}] ${MESSAGE}" >> ${DIR}/../logs/appcwmon.log;
}
# Process Arguments ...
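A sketch of the measurement plus the push, with an assumed namespace and metric name; the full plugin wraps this in the argument handling and logging shown above:
USED_PCT=$(free -m | awk '/^Mem:/ {printf "%.2f", $3/$2*100}')   # used memory as a percentage
INSTANCE_ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
aws cloudwatch put-metric-data --namespace "CustomMetrics" \
  --metric-name "MemoryUtilization" --unit Percent --value "${USED_PCT}" \
  --dimensions InstanceId="${INSTANCE_ID}"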

Custom Cloudwatch Plugins CW_DiskUsage Part-2

Below is the plugin for monitoring the disk usage of a specific mount via Cloudwatch. This goes in the bin folder of the cloudwatch plugin directory, and you need to create a file named cw_diskusage.sh with the following script.
#!/bin/bash
#
# About : Percent of Disk usage by Mount based on Mount name
#
# Name : cw_diskusage.sh
DIR=$(dirname $0);
PLUGIN_NAME='cw_diskusage';
# Include configuration file
source ${DIR}/../conf/plugin.conf;
#Get Current Instance ID
INSTANCE_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`);
#Get Hostname
HOST_ID=(`wget -q -O - http://169.254.169.254/latest/meta-data/hostname`);
# Help
usage() {
  echo "Usage: $0 [-n ] [-d ] [-m ] [-f Mount Point]" 1>&2;
  exit 1;
}
# Logger
logger(){
  SEVERITY=$1;
  MESSAGE=$2;
  DATE=`date +"[%Y-%b-%d %H:%M:%S.%3N]"`;
  echo -e "${DATE} [${SEVERITY}] [${PLUGIN_NAME}] [${INSTANCE_ID}] [${HOST_ID}] ${MESSAGE}" >...
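The disk figure itself is a df one-liner; a sketch of how the percentage for a given mount could be extracted (the mount point is only an example):
MOUNT=/var
DISK_PCT=$(df -P "${MOUNT}" | awk 'NR==2 {gsub("%","",$5); print $5}')   # Use% column of the mount
echo "Disk usage for ${MOUNT}: ${DISK_PCT}%"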

Custom Cloudwatch Plugins Part-1

Cloudwatch is a hosted tool provided by AWS to monitor different resources in your cloud infrastructure. AWS provides you with various metrics (data) related to the resources to determine their state on a per-minute basis, which can be used for monitoring and to raise an alarm whenever a certain threshold is crossed. You can configure Cloudwatch with SNS to send a notification once the state of the alarm changes. Further, you can configure events and take actions on these alarms. The only limitation is that AWS provides you with certain metrics to monitor, but there are times when you want to monitor resources that AWS does not cover, like your services, established connections, processes, memory etc. For this you need to create your own custom Cloudwatch metrics, which you can push to Cloudwatch using the AWS CLI. Once the metric has been configured in Cloudwatch you can put alarms on these metrics. You need to push the metric regularly using t...
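Pushing such a custom data point is a single AWS CLI call; a minimal sketch with a hypothetical namespace, metric name, instance ID and value (this is what the per-minute cron would repeat):
aws cloudwatch put-metric-data \
  --namespace "CustomMetrics" \
  --metric-name "EstablishedConnections" \
  --dimensions InstanceId=i-0abc1234 \
  --unit Count --value 42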

Custom Cloudwatch RDS Monitoring Plugins Part-2

In Part-1 we discussed the executable RDS monitoring script which lets you pass any SQL, take the output of that SQL, push the result set to Cloudwatch and create alarms. This works as a custom metric for monitoring and will raise an alarm whenever the threshold is crossed. In our use case the result of the SQL execution is 0, which denotes there is no error on the RDS. If there is any error then an error message will be displayed and the result will be non-zero, which causes Cloudwatch to trigger an alarm. Further, the SQL output is posted in the email body and sent to the DBA and devops DL. In this post we are covering the configuration part to be used along with the previous executable script. Once you have configured it like this, you can schedule the script with the cron service on any server, and use the awscli on it to create the alarms and trigger alerts on the RDS.
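A hypothetical cron entry for that scheduling (the script path and interval are assumptions):
# Run the RDS check every 5 minutes and keep its output for troubleshooting
*/5 * * * * /opt/cwmon/bin/cw_rds_check.sh >> /var/log/cw_rds_check.log 2>&1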

Custom Cloudwatch RDS Monitoring Plugins Part-1

Monitoring RDS instances is necessary for detecting issues. AWS RDS provides out-of-the-box metrics, i.e. system-level metrics, but there are occasions when you want to monitor things like blocked connections, advanced queues etc. So you can use the below Cloudwatch plugin to monitor anything in RDS based on a custom query. The plugin works on the logic that if the query executed on RDS does not produce any error message then the count is 0, which means OK, and if something is wrong then an error message is printed whose count is non-zero, which means alarm. If you are using sharding then you would need to execute the same query on all your shard databases. The below script can run on 1 database or on a number of databases in the case of sharding.
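A sketch of that error-count logic using the mysql client; the host, user, query and DB_PASS environment variable are hypothetical, and the real plugin loops over every shard:
# Capture only stderr from the check query; anything written there counts as an error
ERRORS=$(mysql -h mydb.xxxxx.ap-south-1.rds.amazonaws.com -u monitor -p"${DB_PASS}" \
           -e "SELECT 1" 2>&1 >/dev/null | wc -l)
# ERRORS = 0 -> OK, push 0 to Cloudwatch; ERRORS > 0 -> push the non-zero count and alarm
echo "Error line count: ${ERRORS}"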

Why Security is a DevOps Concern

Since DevOps deals with rapid releases of the application over a short period of time using a combination of CI-CD and automation, security plays a very significant role in making the overall process secure, so that you don't lose out to loopholes which someone can take advantage of to penetrate your systems or insert their malicious code. Following are the key ways through which you can adopt security in your day-to-day activities.
1. Security as part of the team: Someone within the team should take the responsibility, and whether you secure things yourself or involve the security team should be decided as and when required.
2. Understand the Risks: Understanding the risks helps in involving security in your day-to-day operations and closing the loopholes. Once you understand the risks you will automatically take the necessary steps to fix them.
3. Security is part of Everything: Security forms the core of everything, whether th...

Application Security Principles

If you are using the cloud to power your web or mobile applications then understanding security is a key aspect of delivering a good business application. The security principles are summarized below:
1. Data in Transit protection: Consumer data transiting networks should be adequately protected against tampering and eavesdropping, which can be done using SSL certificates via encryption and a combination of network protection tools such as VPN networks etc.
2. Asset protection: The asset storing or processing the data should be protected against physical tampering, loss and damage. The cloud provider limits access; in addition, securing access with key-based authentication, storing data in encrypted format and backing up data can be used.
3. Separation Between Consumers: Preventing one malicious or compromised consumer from affecting the service or data of another. This can be done by internal user profiling, authentication and database where limited access to their...

Eternal Bash History for user command auditing in Linux

There are times when there is a need to track the commands executed by users. This includes all the system users irrespective of team, so that if things go wrong it can easily be tracked who executed which command. This also helps to resolve disputes within a team when 2 users claim that they haven't executed a command. Also, if you are installing or doing some new configuration, you can refer back to the commands you executed. Place the configuration in /etc/bashrc:
if [ "$BASH" ]; then
  export HISTTIMEFORMAT="%Y-%m-%d_%H:%M:%S "
  export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND ; }"'echo "`date +'%y.%m.%d-%H:%M:%S:'`" $USER "("$ORIGINAL_USER")" "COMMAND: " "$(history 1 | cut -c8-)" >> /var/log/bash_eternal_history'
  alias ehistory='cat /var/log/bash_eternal_history'
  readonly PROMPT_COMMAND
  readonly HISTSIZE
  readonly HISTFILE
  re...

Pulling Messages from an Amazon SQS Queue to a File with Python

The Amazon SQS Queue is a high-throughput messaging queue from Amazon AWS. You can send any type of messages or logs to SQS and then use a consumer (script) to pull those messages from SQS and take an action based on them. One of the use cases is that you can push all your ELB logs to SQS and then from SQS send them anywhere, including your event notifier (SIEM) tools, batch processing, automation etc. The following generalized Python script pulls 10 messages at a time from SQS (the polling batch provided by SQS) and writes them to a file. If you want to increase the number of messages you just need to run more processes. For example, if you want to download 50 messages/minute then you just need to start 10 processes of the script and it will start downloading 50 messages/minute. Kindly note that neither Python nor SQS has any limitation in this case and you can increase it to n number of processes, ...
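The author's script uses Python; as a quick way to see the same polling behaviour from the shell, this AWS CLI sketch pulls up to 10 messages in one call and appends them to a file (the queue URL is hypothetical):
aws sqs receive-message \
  --queue-url https://sqs.ap-south-1.amazonaws.com/123456789012/my-queue \
  --max-number-of-messages 10 >> sqs_messages.json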

[Solved] S3 Bucket Creation Fails with IllegalLocationConstraintException Error

While creating a bucket using the s3api, the bucket creation fails with the error message: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to. The error came up specifically in the Mumbai region, while the same command was working in the Singapore region.
Not Working: aws s3api create-bucket --bucket bucketname --region ap-south-1
Working: aws s3api create-bucket --bucket bucketname --region ap-southeast-1
The reason for this error is that additional parameters need to be passed for bucket creation in Mumbai using the s3api, i.e. --create-bucket-configuration and LocationConstraint=ap-south-1. Once you pass them you should be able to create the bucket at the command line.
Working: aws s3api create-bucket --bucket bucketname --region ap-south-1 --create-bucket-co...
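For completeness, the full form of the working command reconstructed from the parameters named above:
aws s3api create-bucket --bucket bucketname --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1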

RDS Alerts

The RDS forms a crucial part of a web application, and any problem in it can lead to application downtime, reduced performance, 5xx errors and a degraded user experience. RDS monitoring therefore plays an important part. Below is a list of parameters which can be monitored to measure the normal operation of the RDS. Some of the monitoring metrics are provided by AWS and the rest can be created using custom scripts. Thresholds depend upon the size of the RDS (CPU cores, memory etc.); we are just providing some idea about the connection thresholds.
1. CPU Utilization: CPU utilization increases as the workload and processing on the RDS increases. Alert threshold if [Cpu Utilization] >= 80% for 5 minutes.
2. Database Connections: If database connections increase beyond a limit it should be alerted, because if the application doesn't get free connections then it will result in errors as connectivity for those requests would break. Alert threshold if [Database Connect...

Using EOF to execute multiple commands on a remote server

You can run Linux commands on a remote machine using a loop in a bash script. If you want to run multiple commands on the remote server then you can use EOF (a here-document), which opens a buffer/file into which you can enter the multiple commands you want to execute on the remote machine. Once you are done entering commands, you use EOF again to end the input, which in effect closes the buffer/file. EOF allows you to redirect the collected input to some command; in our case we are redirecting it to sudo -i to execute those commands as the root user.
for i in `cat file.txt`; do
  echo "###$i####"
  ssh -t -i key.pem -p 22 ec2-user@$i 'sudo -i << \EOF
export https_proxy=http://proxy.example.com:3128; export http_proxy=http://proxy.example.com:3128;
yum install sendmail -y
EOF'
done

Auto AMI Backup of EC2 Instance

#!/bin/bash
#Script Details
#Script to create an AMI of the server (based on the cron time period) and delete AMIs older than 3 days.
#The time period can be controlled by the cron, e.g. daily AMI creation, every 3 days, or weekly AMI creation
#The retention time for the AMI is 3 days by default, however this can be customized to your requirement.
#Deletion of an AMI removes the associated snapshots
#You need to pass the instance ID as an argument to the script
#Credentials are fetched from the config file of the user
#Uses the SNS configuration for sending the AMI status
#Instance Name is determined from the Name tag assigned to the instance
#If the Name tag is not found then the script will exit with an error message mail.
#The AMI backup name will be the instance name followed by the date in YYYYMMDD format.
#The backed up AMI will have additional tags to identify the necessary information as follows
#The instance ID from which this AutoAMI was creat...
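The script body is truncated in this listing; the two AWS CLI calls at its core would look roughly like this sketch (the AMI name, retention handling and example image ID are assumptions, not the author's exact code):
# Create today's AMI without rebooting the instance (instance ID passed as $1)
AMI_ID=$(aws ec2 create-image --instance-id "$1" \
          --name "myserver_$(date +%Y%m%d)" --no-reboot \
          --query 'ImageId' --output text)
# Deregister AMIs older than the retention period (snapshot cleanup would follow)
aws ec2 deregister-image --image-id ami-0123456789abcdef0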