Sunday, May 29, 2016

Ansible terminology for IT operations

Change Management:-

Change management is the fundamental function, and everything else is built around it as the core idea. It defines what the system should look like and verifies that it actually is in that state; if it is not, you enforce the state. For example, a web server should have Apache installed, at version 2.4, and the service should be in a started state. Anything that deviates from this defined state dictates a change, and you mark that system as changed. Marking a system as changed matters most for production systems, because a production system shouldn't drift like that, and you will want to find the cause of the change.

With Ansible, if a system is already in the desired state, Ansible won't even try to change it; only when the state differs does it act. This property is called idempotence: running the same play repeatedly leaves the system in the same state.
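The same idea can be sketched in plain shell. This hypothetical `ensure_line` helper (a toy model, not how Ansible is implemented) checks the state first and only changes the file, and reports "changed", when the desired line is missing:

```shell
#!/bin/sh
# Idempotent "ensure a line exists in a file" -- check state first,
# change only if needed, and report whether a change happened.
ensure_line() {
    file=$1
    line=$2
    if grep -qxF "$line" "$file" 2>/dev/null; then
        echo "ok: already in desired state"      # no action taken
    else
        echo "$line" >> "$file"
        echo "changed: state enforced"
    fi
}

ensure_line /tmp/demo.conf "ServerName example.com"   # first run: changed
ensure_line /tmp/demo.conf "ServerName example.com"   # second run: ok
```

Run it twice: the first run enforces the state, the second run is a no-op, exactly the behaviour you see in Ansible's changed/ok task summary.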


Provisioning:-

Provisioning is built on top of change management, but it is focused on the role you are trying to establish. The most basic definition of provisioning is transitioning a system from one state to another, expected state.

Ansible provisioning is comparable to machine cloning or images in the cloud; the only difference is that Ansible actually installs and configures everything each time instead of creating images from a machine. Think of it as configuring an NTP server or a database server, or just spinning up a server to test your code and then terminating it.

The steps for provisioning are very simple. Say we want to provision a web server: first install a base OS such as Linux or Windows, then install the web server software such as Apache or Nginx, copy your configuration and your web files, install your security updates, and start the web service. Those are the steps Ansible will send to the server to provision it for you.
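The steps above can be sketched as an Ansible playbook. This is a minimal sketch; the host group, file paths and package names are assumptions for illustration, not from the post:

```yaml
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install the Apache web server
      yum:
        name: httpd
        state: present

    - name: Copy the web server configuration
      copy:
        src: files/httpd.conf
        dest: /etc/httpd/conf/httpd.conf

    - name: Apply package updates
      yum:
        name: '*'
        state: latest

    - name: Ensure Apache is started
      service:
        name: httpd
        state: started
```

Running this play twice shows the idempotence described above: the second run reports every task as ok instead of changed.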

Saturday, May 21, 2016

Using Rsyslog to forward Elastic Load Balancer (ELB) logs in AWS

The ELB logs provide great insight into the traffic received by your application. You can identify locations, requests, errors and attacks by analyzing the ELB logs. Your security team might also be interested in analyzing these logs.

The problem is that the logs are written either every hour or every 5 minutes. You can also cap them at a definite size of 5MB. If you choose 1 hour, the files will be big, so it makes sense to write the logs every 5 minutes, since you want to analyze the requests currently coming in on the ELB.
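The interval is controlled by the load balancer's access-log attributes. As a sketch, the attribute document you would pass to `aws elb modify-load-balancer-attributes` looks like this (the bucket name is a placeholder):

```json
{
  "AccessLog": {
    "Enabled": true,
    "S3BucketName": "my-elb-logs",
    "EmitInterval": 5
  }
}
```

EmitInterval accepts 5 or 60 (minutes) for a classic ELB.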

The problem in setting up Rsyslog is that the AWS logs are generated with a dynamic pattern: the date path (yyyy/mm/dd) keeps rotating, a new log file is generated every interval, and thirdly the logs are written to an S3 bucket, which is storage only and has very little computing power.

We used s3fs to mount the S3 bucket on the server, which provided easy access to the logs in S3. The other problem was that the logs of multiple applications were all written to a single directory. We wanted to process each application's logs separately, so we used the rsync command to sync the logs into separate directories. The advantage of using rsync is that we don't have to process the same log again and again: it only picks up the latest logs and does not copy a log which is already present.

We generated the rsync log, which records the path of each file being synced into the directory. We then directly cat each listed file and append its content to another file. This way, every new log file created by the ELB gets appended to a single file, which can easily be shipped to a remote server using rsyslog, pushed directly to Logstash if you are using an ELK setup, or handed to your security team so they can pull the logs into their own software for processing.

#!/bin/bash
##### Script to get ELB logs written in single file #####
##### Created By Ankit Mittal #####

year=$(date +%Y)
month=$(date +%m)
day=$(date +%d)

# Path to the ELB logs on the s3fs mount; the account id and region here are
# placeholders -- adjust them (and the cut field below) to your bucket layout
path=/mnt/s3/AWSLogs/123456789012/elasticloadbalancing/us-east-1/$year/$month/$day

# Bail out if a previous rsync is still running
# ([r]sync stops grep from matching its own process)
process_check=$(ps -ef | grep -c '[r]sync')
if [ "$process_check" -gt 0 ]; then
    echo "Rsync process already running"
    exit 1
fi

lb_sync() {
    logname=$1
    echo "------------Sync started at $(date)-------------" >> /var/log/rsync-elb.log
    # Copy only this application's log files that are not already synced
    rsync -avz --files-from=<(ls "$path"/*"$logname"* | cut -d / -f11) "$path"/ /var/log/"$logname"/
}

### application1/application2 stand for the application name appearing in the
### ELB file name pattern, through which we grep that application's files
lb_sync application1 > /tmp/application1
lb_sync application2 > /tmp/application2

merge_log() {
    listname=$1
    echo "-----------Started merging of logs $(date)--------------" >> /var/log/merging-elblogs.log
    # Append every file listed in the rsync output to a single per-application log
    for i in $(grep "$listname" /tmp/"$listname"); do
        cat "$path/$i" >> /var/log/"$listname".log
    done >> /var/log/merge-loop-output.log
}

merge_log application1
merge_log application2
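To then forward the merged file with rsyslog, an imfile input along these lines would do; the tag, facility and remote host are assumptions for illustration:

```
module(load="imfile")
input(type="imfile"
      File="/var/log/application1.log"
      Tag="elb-application1:"
      Severity="info"
      Facility="local6")
local6.* @@logserver.example.com:514
```

The `@@` prefix sends over TCP; a single `@` would use UDP instead.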

Sunday, May 15, 2016

Jenkins Installation

Check if anything is running on port 8080, which is used by Jenkins by default
 telnet localhost 8080  

Install the Java JDK, which is required by Jenkins
 mkdir /usr/java  
 cd /usr/java
 wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie"
 tar -xf jdk-8u92-linux-x64.tar.gz
 cd jdk1.8.0_92/
 update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_92/bin/java 100
 update-alternatives --config java
 update-alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_92/bin/javac 100
 update-alternatives --config javac
 update-alternatives --install /usr/bin/jar jar /usr/java/jdk1.8.0_92/bin/jar 100
 update-alternatives --config jar

Set up JAVA_HOME
 vi /etc/rc.d/rc.local  
 export JAVA_HOME=/usr/java/jdk1.8.0_92/      
 export JRE_HOME=/usr/java/jdk1.8.0_92/jre      
 export PATH=$PATH:/usr/java/jdk1.8.0_92/bin:/usr/java/jdk1.8.0_92/jre/bin

Import the Jenkins repo
 sudo wget -O /etc/yum.repos.d/jenkins.repo  

Import the keys for the repo
 sudo rpm --import  

Install the Jenkins package from the repo
 yum install jenkins  

Enable the jenkins service to start at boot and start it
 /bin/systemctl enable jenkins.service  
 /bin/systemctl restart jenkins.service

This completes the installation of Jenkins. You can access the web console by entering your server's IP followed by port 8080.