
Wednesday, December 14, 2016

AWS S3 bucket error: A client error (PermanentRedirect) occurred when calling the ListObjects operation

I was trying to sync a bucket across regions when I encountered the error below:

 A client error (PermanentRedirect) occurred when calling the ListObjects operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint: $bucketname.s3-ap-southeast-1.amazonaws.com  
 You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".  

Solution:-
In my case the Elastic IP was not associated with the EC2 instance, so it was trying to communicate via the S3 endpoint, which is why I was getting this error. Once I tried from an instance that had a public Elastic IP with internet access, the issue was resolved automatically.

Details:-
You can only use the S3 endpoint to connect to S3 buckets in your own region. If you are working across regions, as in my case where one bucket was in the Singapore region and the other was in the Mumbai region, the endpoint won't work.

In this case your instance must have a public IP associated with it, since all the communication goes over the internet. Once I associated the Elastic IP with my instance it worked completely fine. You can also get this error if you are using a NAT Gateway. The sync should only be performed from an instance whose subnet has an internet gateway attached.
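As the error message suggests, you can also address the redirect by naming the regions explicitly. A minimal sketch for my case (bucket names are placeholders; Singapore as the source, Mumbai as the destination):

 aws s3 sync s3://source-bucket s3://destination-bucket --source-region ap-southeast-1 --region ap-south-1  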

Friday, December 9, 2016

Enabling the S3 bucket logging from the Command line for multiple buckets

To enable S3 bucket logging in AWS, you first need to set an ACL on the target log bucket that grants the S3 Log Delivery group write and read-ACP permissions. You can set the ACL using the AWS CLI as follows:-

 aws s3api put-bucket-acl --bucket BucketName --grant-read-acp 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"' --grant-write 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"';  

Then copy the names of all the buckets for which you want logging enabled into a file, and run the following command in a loop so that each of those buckets writes its access logs under the log bucket.

 for i in `cat /tmp/bucketlist.txt`;do aws s3api put-bucket-logging --bucket $i --bucket-logging-status '{"LoggingEnabled":{"TargetPrefix":"S3logs/'$i'/","TargetBucket":"'S3logbucketname'"}}';done  
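To confirm that logging was applied, you can read the logging status back for each bucket with the same list file (a quick verification, not part of the original post):

 for i in `cat /tmp/bucketlist.txt`;do echo "== $i =="; aws s3api get-bucket-logging --bucket $i ;done  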



Thursday, December 8, 2016

Script to create the Security groups in AWS

You can use the AWS Console to create the security groups for your servers. But if you have a large number of servers with different security groups, or you are migrating your environment, doing that manually can take a lot of time and effort.

In those cases you can use the following script, which uses the AWS CLI to create the security groups.

You need to provide the following arguments to the script to create a security group:

  1. Name of the Security group
  2. VpcID
  3. Environment Name
  4. Meaningful name for the Security group Usage
  5. Description about the Security Group

 #!/bin/bash  
 #  
 # Create Security Group in the AWS 
 # Need to provide the Security group name , VpcID, Environment name, Usedfor and description
 name=$1;  
 vpcId=$2;  
 environment=$3;  
 usedFor=$4  
 description=$5;  
 # We need to provide the name of the Environment and  
 # action to perform  
 #  
 usage(){  
     echo -e "Usage:\n";  
     echo -e "$0 <Name> <vpc_id> <TAG:Environment> <TAG:UsedFor> <TAG:Description> \n";  
     exit 1;  
 }  
 # Five inputs are required to execute the script  
 if [ $# -ne 5 ];  
 then  
     usage;  
 fi;  
 #Create Security Group  
 groupId=`aws ec2 create-security-group --vpc-id $vpcId --group-name $name --description "$description" --query 'GroupId' --output text`;  
 if [[ $groupId == "" ]];  
 then  
     echo -e"Failed to create group";  
     exit 0;  
 fi;  
 echo -e "Group ID: $groupId";  
 echo "$name  $groupId" >> sg_lb_list.txt  
 #Assign TAGs  
 aws ec2 create-tags --resources $groupId --tags Key=Name,Value=$name Key=Environment,Value=$environment Key=UsedFor,Value=$usedFor Key=Description,Value="$description";  
 exit 0;  

Example

 ./create_security_group.sh Dev-SG-LB-App-Appserver vpc-2582cag7 Development Appname-ApplicationServer "Security Group for Appname Application Server";  


Uploading the Certificate to the Cloudfront using AWS Cli

If you are using the CloudFront CDN to deliver your images and videos, CloudFront provides its own endpoint.

You can then point your domain to this endpoint with a CNAME so that all requests to your domain are served from CloudFront. This reduces geographical latency, reduces the load on your servers for delivering static content, and improves the user experience of your application.

If you need to deliver the content securely to the end user over SSL, CloudFront can use your own SSL certificate, but it doesn't allow you to upload the certificate from the console itself. You need to do that through the AWS CLI, via IAM.

Once you upload the certificate you can select it in CloudFront and it will be applied to the CloudFront distribution.

To upload the certificate for CloudFront, run the command below on a server that has the AWS CLI installed.

 aws iam upload-server-certificate  --server-certificate-name wildcard.yourdomain.com  --certificate-body file://yourdomain.crt  --private-key file://yourdomain.key  --certificate-chain file://gd_bundle-g2-g1.crt  --path /cloudfront/  

This uploads the certificate to IAM under the /cloudfront/ path so it can be used with CloudFront.
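To verify the upload, you can list the server certificates stored under the CloudFront path (the certificate name will be whatever was passed to --server-certificate-name):

 aws iam list-server-certificates --path-prefix /cloudfront/  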

Thursday, December 1, 2016

Resolving 504 Gateway Timeout Issues

AWS ELB exposes two metrics that can help in diagnosing 5XX errors and identifying their possible root cause:

Metric “Sum HTTP 5XXs”  :  5XX responses returned by the backend service [due to unavailability/failure of the DB, S3, MQ etc]
Metric “Sum ELB 5XXs”  :  5XX responses returned by the ELB itself [when the idle timeout expires while waiting for a response from the backend service (504), or a failure in the ELB itself (503)]

A common reason for an idle timeout on the ELB is a long-running DB query, an application bug, or performance tuning issues with a microservice.
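If the backend genuinely needs more time, one mitigation is to raise the idle timeout on the classic ELB. A hedged sketch (load balancer name and timeout value are placeholders):

 aws elb modify-load-balancer-attributes --load-balancer-name my-load-balancer --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":120}}'  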

Wednesday, November 30, 2016

Setting StrictHostKeyChecking and SSH session timeout on a Linux server

While working on a Linux server it's common for the SSH session to time out because the session is idle. You can prevent the session timeout from both the server side and the client side.

Set ServerAliveInterval in your user's SSH config so the client sends a keep-alive every 120 seconds and the idle session is not dropped:

 vim .ssh/config  

 Host *
   ServerAliveInterval 120

 chmod 600 .ssh/config

If you are using a bastion host in your corporate environment to connect to the different servers, you can edit the sshd_config file and make an entry as below to increase the SSH session timeout on the server side:

 vim /etc/ssh/sshd_config  

 ## ssh idle timeout value
  ClientAliveInterval 120
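For the StrictHostKeyChecking part of the title: when you connect to many short-lived servers through a bastion you may also want to skip the interactive host-key prompt. A client-side example for an assumed internal range (note this weakens protection against man-in-the-middle attacks, so use it only where that trade-off is acceptable):

 vim .ssh/config

 Host 172.31.*
   StrictHostKeyChecking no
   UserKnownHostsFile /dev/null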

Saturday, August 13, 2016

About RabbitMQ


  1. RabbitMQ is a message broker written in Erlang. 
  2. It is open source but commercial support is offered. 
  3. RabbitMQ uses standardized messaging protocols. 
  4. A messaging protocol is comparable to HTTP, just optimized for messaging. 
  5. The protocol supported out of the box is AMQP (Advanced Message Queuing Protocol) 0-9-1.
  6. Other protocols are supported through the built-in plugin system. 
  7. There are RabbitMQ client frameworks available, but because RabbitMQ uses standard protocols you can interact with it from any programming language, even JavaScript. 
  8. RabbitMQ also supports clustering, ensuring high availability of the broker service even if hardware fails. 

Messaging Scenarios

Besides underpinning a complete microservices architecture, messaging can help out in many scenarios.

1. If you want legacy applications to talk to each other without data loss, messaging with queues is ideal. There are client libraries in many programming languages for your message broker that let you do that.

2. When data is coming from many places and needs to power a dashboard app, a messaging queue can easily accumulate it to give you real-time insight on your dashboard.

3. If you have a big monolithic app and want to simplify it without rewriting it all at once, first isolate one feature or layer to be handled by a service. Handle the communication between the monolith and the new service with messaging. Once that is done successfully, take the next layer, until the app is maintainable again. 

Microservice Architecture Example


The customer submits a new order from the web UI. From the web UI a message is sent to the registration service, which stores the order. Once complete, it sends a message to the notification service, which confirms to the customer that the order has been registered, for example by sending an email.  

When to use the Microservices Architecture

The architecture must have the capacity to scale. The project may start small, but the expectation is that it will grow fast. The architecture must be flexible enough to add and remove features with minimal impact. Microservices are ideal for that because new services can easily be added, and old or unsuccessful ones removed. Also, when demand rises, adding capacity to a service should be an easy job. Since messages sit in the queue and services pick up and process them one by one, capacity can be scaled up by starting another instance of the same microservice. Changes are also rapid, so deployment should be easy and reliable without affecting the overall system; since each service is autonomous it is easy to deploy.

That is much easier than deploying one big monolithic application. Also, since many developers are involved in building the whole system, making all of them work on the same monolithic code would turn into a mess. With microservices, teams can be formed to work on individual services completely separately from other teams. The biggest challenge is that the whole process must be reliable: orders and packages going missing is not an option. Since a message sits in the queue until a service processes it, reliability increases. Obviously we have to keep the broker service itself functioning, using clustering for example, or it becomes a single point of failure. 

Understanding the Microservices Architecture

Microservices is a style for building distributed systems. Each service is a small independent process. Using a messaging service, the microservices can be decoupled from each other, so much so that it is not even necessary to use the same programming language for each service. That's why the API has to be language agnostic, and a message broker facilitates this language-agnostic requirement. The services are called microservices because they are typically small; they focus on doing one isolated task in a distributed system. Each service should be autonomous. Microservices should keep code sharing to a minimum and should have their own databases, which don't need to be of the same type: one service can use a relational database while for another a NoSQL database might be more suitable. And because each service is autonomous, building the distributed system is very modular; each microservice can be built by a separate team and deployed separately.

An important question here is: if each microservice has its own database, what about data integrity? In a microservice architecture each microservice maintains its own version of an entity, holding only the relevant data received with the message. For example, the registration service gets a new order from the user interface; because it must be able to produce a list of received orders later, it stores the order in its database, but it only stores the data necessary to produce that list. It then sends a message to the dispatcher and finance services using the message broker. The message contains only the data the receiving service needs; that service then processes the order and keeps the order data it needs later. In this way the definition of an order is different for each service. There is no single database that holds everything there is to know about the order; the order data is distributed across the different microservices.

Messaging Patterns

The Messaging Patterns:-

1. Point-to-Point:- A message is sent directly from one service to another specific service. The purpose of the message is to tell the service to do some task, which is why it is known as a command. For example, a "register order" message commands the registration service to register the order. For the sending service this is a fire-and-forget action: when the message is sent, the processing is done by one or more services asynchronously, and the sending service does not wait for the result. A minimal command-queue sketch follows below.
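As a rough illustration (not part of the original post), the rabbitmqadmin CLI that ships with the RabbitMQ management plugin can demonstrate the command queue idea; the queue name and payload here are made up:

 # declare a durable command queue
 rabbitmqadmin declare queue name=register_order durable=true

 # the sender publishes a command and forgets about it
 rabbitmqadmin publish exchange=amq.default routing_key=register_order payload='{"order_id": 42}'

 # the receiving service later picks the message off the queue
 rabbitmqadmin get queue=register_order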






Introduction to Messaging Queues

A message broker is an intermediary between the services. It is capable of sending messages to and receiving messages from the services. It is an application running on some server, and the sending and receiving of messages is done through an API.

When a service sends a message to some other service, it doesn't directly call the recipient service. It sends the message to the message broker, and the message broker has an internal mechanism to route the message to the receiving service. It doesn't hand the message to the receiver immediately; it holds it until the receiver picks the message up.

For example, consider people as the services and the messages as letters you send to other people. When sending a letter you do it through the mail service, which acts as the message broker and routes the message to the receiver. The mail service doesn't hand the letter to the person directly; it is stored in the person's mailbox until they are ready to take it out. Such a mailbox is called a queue, and it operates on the first-in, first-out principle. 

Thursday, August 11, 2016

Creating multiple S3 Buckets using AWS Cli

If you want to create multiple buckets in AWS, they can be created using the AWS CLI by specifying the bucket name and the region in which the bucket should be created.

Create a text file containing the list of bucket names that need to be created, e.g.
 cd /tmp  
 vi bucketlist.txt

We are going to create the buckets in the Singapore region; change the region name if you want to create them somewhere else.

 for i in `cat bucketlist.txt`;do aws s3api create-bucket --bucket $i --create-bucket-configuration LocationConstraint=ap-southeast-1 ; done  

Sunday, July 31, 2016

Website Availability with New Relic

New Relic is a great tool to monitor web applications, see insights into the various transactions occurring in the application, and track code issues. It helps you identify response times from different geographical locations and optimize the user experience by reducing load times for your applications.

If you just want to monitor website availability you can use New Relic's ping monitor. It is simple and doesn't require any agent configuration upfront. The advantage is that traditional monitors only check the website's HTTP status, i.e. a 200 when the site loads, but won't check whether the content actually loaded. New Relic lets you monitor for a string on the loaded page, and if it doesn't find that string it raises an alarm.

You can configure a ping monitor for a website as follows; in our example we have simply put a monitor on the Google website, which you can replace with your own.

Webserver/Appserver config backup with Git Script

In a production environment there are always backups scheduled for the entire servers, using AMIs or snapshots, which take a system backup over a period of time and run daily in the non-peak hours.

But there are environments such as non-prod where many people have access to the configs so that they can tune them according to their requirements. While this speeds things up, since no one is accountable for the changes, multiple changes can end up breaking the configuration.

You can always troubleshoot the changes, but sometimes hardware or DNS problems arise which are outside your control. In those cases you can't keep the environment down, since that would affect testing in the lower environments; keeping the configs under Git makes it easy to see what changed and to roll back, as in the sketch below.
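The original Git script isn't reproduced here, so below is only a minimal sketch of the idea, assuming a hypothetical config directory /etc/httpd and a script run from cron:

 #!/bin/bash
 # snapshot the web/app server config into a local git repository
 CONF_DIR="/etc/httpd"            # hypothetical config path
 cd "$CONF_DIR" || exit 1
 # initialise the repository on the first run
 [ -d .git ] || git init
 git add -A
 # commit only when something actually changed
 git diff --cached --quiet || git commit -m "config backup $(date +%F_%T)"

Run from cron every few minutes, git log then tells you what changed and when, and git checkout lets you roll a bad config back.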

Monday, July 18, 2016

AWS Cli installation from awscli.zip

You can install the AWS CLI via pip alongside the boto SDK, as discussed in my previous post.

You can also download the AWS CLI bundle zip package and install the AWS CLI from it as follows:

 curl "http://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"  
 unzip awscli-bundle.zip  
  sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws  

Sunday, July 17, 2016

Adding a Slave Server to Jenkins

By default Jenkins runs builds on the Jenkins master. But if you are using Jenkins in a production environment, chances are that several builds for different components/microservices will need to be triggered in parallel.

In that case the IO/network/system performance of the master may degrade, and the UI or Jenkins itself might hang or respond slowly, which is not a desirable state. So it's recommended to use Jenkins slaves for the builds while the master only controls those nodes.

To add a Jenkins slave you need two servers: the first is your master server, and the other is the slave server on which the jobs created in Jenkins will be built.

In our case we are going to use 172.31.59.1 as the Jenkins master and 172.31.62.152 as the slave.

You need to install the Java JDK on both servers. For installing the Java JDK and setting up the environment variables, follow my previous posts on Java JDK installation and Jenkins installation.

Ensure the jenkins user exists on both servers, generate a key on the Jenkins master server using the ssh-keygen command, and copy it to both servers using ssh-copy-id to enable passwordless SSH from the Jenkins master; a sketch of these commands is shown below.
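A rough sketch of the key setup, assuming the jenkins user already exists on both servers (the slave IP is the one used in this example):

 # on the Jenkins master, as the jenkins user
 sudo -u jenkins ssh-keygen -t rsa

 # copy the public key to the slave
 sudo -u jenkins ssh-copy-id jenkins@172.31.62.152

 # verify passwordless login
 sudo -u jenkins ssh jenkins@172.31.62.152 hostname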

Once done go to the jenkins dashboard


Thursday, July 14, 2016

Proxy server instance ID returned instead of the EC2 instance ID

If you are using the AWS CLI to write your scripts, then while checking the EC2 instance ID from within the instance you might be using the following command:

 wget -q -O - http://169.254.169.254/latest/meta-data/instance-id  

The problem is that if your server is in a private subnet and uses a proxy to connect to the internet, the above command may return the proxy server's ID instead of your instance ID. To overcome this issue, use the following in your script:

 export NO_PROXY=169.254.169.254;  

This tells the tools not to send requests for 169.254.169.254 through the proxy; the request goes directly from the instance to the metadata service, so you get the instance ID of the instance rather than of the proxy server.


Resolved Error TCP segment of a reassembled PDU

Error code as noted in Wireshark:-
106   8.721506 0.000024 TCP 172.XX.XXX.XXX -> 172.XX.XX.XXX 368 [TCP segment of a reassembled PDU] 106

Problem statement:- Behind the ELB we were using HAProxy and sending an OPTIONS request in which the original response status was replaced with a 200 status using the CORS configuration. While HAProxy received the request from the ELB and responded back with a 200 status, the ELB was not able to respond back and the connection was terminated.

Resolution:- After recording a tcpdump, capturing the packets into a pcap file and analyzing it with Wireshark, we noticed that packet 106 was a [TCP segment of a reassembled PDU]. The HTTP packet was not complete, so Wireshark was also unable to see it as a valid HTTP packet; this is the same behaviour the ELB had.

According to RFC 2616, section 6, after receiving and interpreting a request message, a server responds with an HTTP response message. [2]

       Response      = Status-Line               ; Section 6.1
                       *(( general-header        ; Section 4.5
                        | response-header        ; Section 6.2
                        | entity-header ) CRLF)  ; Section 7.1
                       CRLF
                       [ message-body ]          ; Section 7.2

So after the header section, a CRLF (Carriage Return + Line Feed) is required to complete the header section.

In our case this was missing.

The ELB needs the full response to understand that it has been completed, so it's mandatory to be fully compliant with RFC 2616.

In order to fix the issue, we have to add a CRLF after the Content-Length: 0 line at the end of the file.

This can be done by doing this:

# echo >> /directory/file.http

Then you will see that the file is in Unix format, and Unix format does not use CRLF line terminators:

# file /directory/file.http
/directory/file.http: ASCII text

So the file needs to be converted, in order to do that there is a tool called unix2dos, on Red Hat it can be installed by issuing this command:

# yum install unix2dos -y

then to convert the file:

# unix2dos /directory/file.http
unix2dos: converting file /directory/file.http to DOS format ...

You will see that the file now has CRLF line terminators:

# file /directory/file.http
/directory/file.http: ASCII text, with CRLF line terminators

After doing this we needed to restart HAProxy so that it uses this new file, which rewrites the HTTP status from 503 to 200.

You can check the last line as

# cat -A /directory/file.http

The last line should be blank (ending with a CRLF, which cat -A shows as ^M$):

user-id^M$
Content-Length: 0^M$
^M$

Wednesday, July 13, 2016

Using pcap to analyze the network packets and troubleshooting web applications

If you are facing problems with a web request and getting an error status, you can monitor the responses on the server side and the client side by monitoring the packets sent over the network. You can use tcpdump for this. If you want greater insight into what's happening on your network, capture the packets and analyze them with a network packet analyzer such as Wireshark.

Alternatively you can use tcpdump itself to view the captured packets. A pcap file needs to be generated, which captures your packets over the network and is basically a binary file. You then query that file with tcpdump or Wireshark to see what's happening on your network.

To generate the pcap file for monitoring, use the following command:

 tcpdump -i eth0 -s 65535 -w request.pcap  

To analyze the pcap file use the following command

 tcpdump -qns 0 -X -r request.pcap  


You should see the time of the request, the IP from which the request was received, your server IP, the protocol (TCP or UDP), and the packet information.

Sunday, July 10, 2016

Installing Botosdk on RHEL

In my previous post I covered the installation of python pip, which can be used for the boto SDK and AWS CLI installation.

Follow these steps to install the botosdk and awscli on RHEL in Amazon AWS

 pip install boto3  
 pip install awscli

This completes the boto3  sdk and awscli installation.

Before you can use the command line you need to connect to AWS and authenticate. Use the following command for this:

 aws configure  

This will in turn ask for your Access Key and Secret Access Key, which you can generate from AWS IAM. You also need to enter the region to ensure you are connecting to the correct region, which is especially important (and good practice) if you are using multiple regions in AWS.

Once done you should be able to connect to your AWS environment. 
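To verify that the credentials are being picked up correctly, a quick check (assuming a reasonably recent AWS CLI) is:

 aws sts get-caller-identity  

This prints the account and IAM identity the CLI is using.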

Installing pip on Redhat Linux on Amazon AWS

RHEL doesn't come with the AWS CLI or boto SDK preinstalled, unlike Amazon Linux. To install the AWS CLI or the python boto SDK you first need to install python pip, through which you can install both.

But pip is not installed by default. Follow these steps to install python pip on RHEL 7 on Amazon AWS:

 cd /tmp  
 yum install wget -y
  wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
 rpm -ivh epel-release-latest*
 yum install python-pip

This completes the installation of python pip on RHEL 7.

Check the version of pip using the following command:

 [root@ip-xxx-xx-xx-xxx tmp]# pip -V  
 pip 7.1.0 from /usr/lib/python2.7/site-packages (python 2.7)

Thursday, June 2, 2016

Apache Maven Installation on Centos Linux

Apache Maven is widely used. Follow these steps to install Apache Maven on your system:

 cd /opt  
 wget http://redrockdigimark.com/apachemirror/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
  unzip apache-maven-3.3.9-bin.zip

This creates a directory named apache-maven-3.3.9 and completes the Apache Maven installation; a note on putting mvn on your PATH follows below.
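To run mvn without typing the full path you normally still need to put its bin directory on the PATH; a small sketch based on the /opt location used above:

 export M2_HOME=/opt/apache-maven-3.3.9  
 export PATH=$PATH:$M2_HOME/bin  
 mvn -version  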

Wednesday, June 1, 2016

Autoscaling Important points Consideration

When setting up an Auto Scaling environment in Amazon AWS, keep the following points in mind in order to build a better dynamic environment:


  • Always add tags when creating the Auto Scaling group so that all the instances launched by the group carry those tags and are easy to find in the console.

  • If you are using session persistence you can handle the session either from the AWS ELB or from the server. If you select the AWS ELB, the ELB manages your session based on the time duration specified. If you keep sessions on the server, the round-robin algorithm used by the ELB won't work effectively. To overcome this, save the session in a database and use memcached or some other cache for the sessions; this prevents any particular instance from being overloaded.

  • If you are using a configuration management tool which builds up the instance and deploys your stack and code as a new server comes up, you can take advantage of the lifecycle hooks provided by AWS Auto Scaling. There are both pre and post hooks, i.e. you can introduce a delay before the instance gets attached to the ELB and starts serving traffic, and similarly a delay during which the instance is deregistered from the ELB so traffic stops before actual termination. This is particularly helpful if you want to take something off the server before termination; a minimal CLI sketch follows below.
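A rough sketch of adding a termination lifecycle hook with the AWS CLI (group name, hook name and timeout are placeholders):

 aws autoscaling put-lifecycle-hook --auto-scaling-group-name my-asg --lifecycle-hook-name drain-before-terminate --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING --heartbeat-timeout 300 --default-result CONTINUE  

The instance then waits in the Terminating:Wait state until the timeout expires or the hook is completed, giving you time to pull data off the server.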

Sunday, May 29, 2016

Ansible role terminology towards IT operations

Change Management:-

Change management is the fundamental function and all the rest are built around it as the core idea. It defines what the system needs to look like and makes sure that is what it is; if the system is not in that state, you enforce it. For example, a web server should have Apache installed, at version 2.4, and in a started state; anything that deviates from this defined state dictates a change, and that system is marked as changed. Marking a system as changed matters most for production systems, because a production system shouldn't just change like that and you might want to find the cause.

In case of ansible if a system state is same even after change ansible wouldn't even try to change its state and this is called idempotent.



Provisioning:-

It is built on change management but is focused on the role you are trying to establish. The most basic definition of provisioning is transitioning a system from one state to another, expected, state.

Ansible provisioning can be compared to machine cloning or images in the cloud, with the only difference being that Ansible actually installs and configures everything every time instead of creating images from a machine. This is better understood with examples: say you want to configure an NTP server or a database server, or just want a server to test your code on and then terminate it.

The steps for provisioning are very simple. Say we want to provision a web server: first install a basic OS such as Linux or Windows, then install the web server software like Apache or Nginx, copy your configuration and your web files, install your security updates, and start the web service. Those are the steps Ansible will send to the server to provision it for you; a couple of ad-hoc command examples follow below.
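As a rough illustration (not the post's own example), assuming an inventory group called webservers and a yum-based distro, the same steps can be expressed as Ansible ad-hoc commands; in practice you would put them in a playbook:

 # make sure apache is installed (idempotent: a second run reports no change)
 ansible webservers -b -m yum -a "name=httpd state=present"

 # push the configuration file
 ansible webservers -b -m copy -a "src=./httpd.conf dest=/etc/httpd/conf/httpd.conf"

 # make sure the service is started
 ansible webservers -b -m service -a "name=httpd state=started"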



Saturday, May 21, 2016

Using Rsyslog to forward Elastic Load balancer(ELB) logs in AWS

The ELB logs provide great insight into the traffic received by your application. You can identify locations, requests, errors and attacks by analyzing the ELB logs, and your security team might be interested in analyzing them too.

The problem is that the logs are written either every hour or every 5 minutes. They can also be emitted at a definite size of 5MB. If you choose 1 hour the files will be big, so it makes sense to have the logs written every 5 minutes, since you want to analyze the requests currently coming to the ELB.

There are a few problems in setting up rsyslog for this: the AWS logs are generated with a dynamic naming pattern and the yyyy/mm/dd date path keeps rotating; a new log file is generated every time; and the logs are written to an S3 bucket, which is storage only and has very little compute attached.

We used s3fs to mount the S3 bucket on the server, which provided easy access to the logs on S3. The other problem was that the logs of multiple applications were all written into a single directory. We wanted to process each application's logs separately, for which we used the rsync command to sync the logs into separate directories. The advantage of using rsync is that we don't have to process the same log again and again; it only picks up the latest logs and does not copy logs that are already present.

We generate an rsync log which provides the paths of the files being synced to the directory, then cat each file and append its content to a single file. This way all the latest log files created by the ELB get appended to one file, which can easily be copied to a remote server using rsyslog, pushed directly to Logstash if you are using an ELK setup, or handed to your security team so they can feed the logs into their own software for processing.

#####Script to get ELB logs written in single file#####
#####Created By Ankit Mittal######

#!/bin/bash  
Source="/path-to-elb-logs/ELB/AWSLogs/912198634563/elasticloadbalancing/ap-southeast-1"
year=`date +%Y`
month=`date +%m`
date=`date +%d`
path="$Source/$year/$month/$date"
process_check=`ps -ef | grep rsync | wc -l`

lb_sync() {
logname="$1"
echo "------------Sync started at `date`------------- " >> /var/log/rsync-elb.log
rsync -avz --files-from=<(ls $path/*$logname* | cut -d / -f11) $path/ /var/log/$logname/
}

if [ $process_check -gt 1 ]
then
echo "Rsync process already running"
else
lb_sync application1-name >> /tmp/app1  ### application1-name is the pattern in the ELB log file names, used to grep that application's files
lb_sync application2-name >> /tmp/app2      
fi

merge_log() {
listname="$1"
echo "-----------Started merging of logs `date`--------------" >> /var/log/merging-elblogs.log
for i in `cat /tmp/$listname | grep $listname`;do cat $path/$i >> /var/log/"$listname".log;done >> /var/log/merge-loop-output.log
}

merge_log application1
merge_log application2


Sunday, May 15, 2016

Jenkins Installation

Check if anything is running on port 8080, which is used by Jenkins by default:
 telnet localhost 8080  

Install the Java JDK which is required by the Jenkins
 mkdir /usr/java  
 cd /usr/java
 wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.tar.gz
 tar -xf jdk-8u92-linux-x64.tar.gz
 cd jdk1.8.0_92/
 update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_92/bin/java 100
 update-alternatives --config java
 update-alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_92/bin/javac 100
 update-alternatives --config javac
 update-alternatives --install /usr/bin/jar jar /usr/java/jdk1.8.0_92/bin/jar 100
 update-alternatives --config jar

Setup the JAVA_HOME
 vi /etc/rc.d/rc.local  
 export JAVA_HOME=/usr/java/jdk1.8.0_92/      
 export JRE_HOME=/usr/java/jdk1.8.0_92/jre      
 export PATH=$PATH:/usr/java/jdk1.8.0_92/bin:/usr/java/jdk1.8.0_92/jre/bin

Import the Jenkins repo
 sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo  

Import the keys for the repo
 sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key  
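Install the Jenkins package itself (this step sits between adding the repo and enabling the service):

 yum install jenkins -y  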

Enable the Jenkins service to start at boot and restart it
 /bin/systemctl enable jenkins.service  
 /bin/systemctl restart jenkins.service

This completes the installation of Jenkins. You can access the web console by entering your IP followed by port 8080.


Saturday, April 23, 2016

Docker Terms

Docker Engine:- 
Sometimes called the Docker daemon or Docker runtime. It is downloaded via apt-get or yum on Linux whenever we install Docker. It is responsible for providing access to all of Docker's runtime and services.

Images:-
Images are what we launch docker containers from.
  docker run -it fedora /bin/bash  
In this example fedora is the image; it will launch a Fedora-based container. Images are composed of different layers. Whenever we don't specify an image version (tag), the latest Fedora version is pulled. To pull all tags of an image, use the -a flag with the docker pull command. To check the locally available images, use:
 docker images fedora  

Images have five fields associated with them.

Friday, April 22, 2016

Running docker on TCP

The Docker daemon listens on a local Unix socket by default, which is only reachable from the same machine. If you want the daemon to listen on TCP so it can be reached over the network, you can start it as follows:

 docker -H tcp://10.X.X.X:2375 -d &  


You can verify this by using the netstat command to check whether Docker is listening on the TCP port.


Granting normal users access on docker containers

Depending on your needs, you might want to give normal users, based on their role, permission to create Docker containers. You don't want to hand out sudo privileges to all users just so they can manage Docker containers.

If a normal user tries to run a Docker container, they will receive a permission denied error.


To overcome this problem we add the normal user to the docker group, which enables them to create Docker containers; the command is shown below. The docker group is created by default when you install Docker. You may want to restrict which users get this access depending on your environment, since membership of the docker group is effectively root-equivalent.
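A minimal sketch, assuming a hypothetical user named devuser (the user must log out and back in for the group change to take effect):

 usermod -aG docker devuser

 # verify after re-login
 su - devuser -c "docker ps"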

Stopping a virtual machine by vagrant

We have powered on the Ubuntu and CentOS machines using Vagrant. You can power off a virtual machine using the halt command; you need to be in the directory of the virtual machine to power it down. Use the following command to power down the virtual machine.

 vagrant halt  


You can confirm it from the Oracle Virtual Box also



Creating Centos Virtual machine using Vagrant

Create a CentOS 6.5 virtual machine using Vagrant as follows.

1. Add the CentOS box as follows

 vagrant box add centos65-x86_64-20140116 https://github.com/2creatives/vagrant-centos/releases/download/v6.5.3/centos65-x86_64-20140116.box  


2. Initialize this virtual machine as follows and then start it

 vagrant init centos65-x86_64-20140116  
 vagrant up



3. Details of the Virtual machine and authentication details can be found as follows

 vagrant ssh-config  


Thursday, April 21, 2016

Docker Installation

We are going to install Docker on the Ubuntu machine which we already created in our previous posts using Vagrant; check it out here.

You need sudo privileges on the machine to install Docker. We are simply installing Docker as root in our example; you can do the same or use sudo.

Make sure you are running kernel version 3.10 or higher when running Docker, because it gives better performance and supports the kernel namespace and cgroup features Docker relies on.


To install Docker on Ubuntu we are going to use apt-get and run the following command.

 apt-get install docker.io  


That's it, the Docker installation is complete.

You can check whether the Docker service is running on the Ubuntu machine as follows.
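The original screenshot isn't included here; two quick checks that should work on this setup (depending on the package version the client binary may be docker or docker.io):

 ps -ef | grep -i docker

 # the client also confirms it can reach the daemon
 sudo docker info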



Using Vagrant to run the Ubuntu Machine

Vagrant is particularly helpful in automating virtual machine setup, so that you can create virtual machines easily and start using them instead of performing a step-by-step installation. Vagrant uses prebuilt images (boxes) and downloads them automatically when you first bring the virtual machine up.

1. As a prerequisite you need Oracle VirtualBox, which actually runs the virtual machine. You can download the latest VirtualBox here.

2. You need to download and install the Vagrant. Vagrant can be downloaded here.

Once you have installed VirtualBox and Vagrant, it's a matter of a few commands to spin up a new virtual machine.

Open the Windows command line by typing cmd in the search option.

Create a new folder as C:\vm\test to use it for creating your first virtual machine using Vagrant.

Navigate to that folder in the command line panel


You need to initialize the image to be used by Vagrant. Simply type the following commands

 vagrant init ubuntu/trusty64 
 vagrant up 

Saturday, April 16, 2016

Apache Kafka Introduction

Introduction:-

Apache Kafka is a distributed publish-subscribe messaging system.
It was originally developed at LinkedIn and later became part of the Apache project.
Apache Kafka is written in Scala.


Advantages:-
Kafka is fast, scalable, durable and distributed by design, which means it can run as a cluster on different nodes. Kafka has a very high throughput: it can process billions of messages per day with low latency, bringing together near-real-time activity from different systems through different topics and queues; a few quickstart commands are sketched below.
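As a rough illustration of topics in practice, the console scripts that ship with Kafka from that era (ZooKeeper-based flags; topic name and hosts are placeholders) can be used like this:

 # create a topic with 3 partitions
 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic test

 # publish a few messages from stdin
 bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

 # consume them from the beginning
 bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning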