Posts from 2016

AWS S3 bucket error: A client error (PermanentRedirect) occurred when calling the ListObjects operation

I was trying to sync a bucket across regions when I encountered the error below:

A client error (PermanentRedirect) occurred when calling the ListObjects operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint: $bucketname.s3-ap-southeast-1.amazonaws.com. You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".

Solution: In my case the Elastic IP was not associated with the EC2 instance, so it was trying to communicate via the S3 endpoint, which is why I was getting this error. Once I tried from an instance having a public Elastic IP with internet access, it was resolved automatically.

Details: You can only use the s3 endpoint to conn...
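As the error message itself suggests, you can also resolve this by looking up the bucket's region and passing it explicitly. A minimal sketch, assuming a hypothetical bucket name and a Singapore-region bucket:

aws s3api get-bucket-location --bucket mybucket
aws s3 sync s3://mybucket /local/path --region ap-southeast-1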

Enabling the S3 bucket logging from the Command line for multiple buckets

To enable S3 bucket logging in AWS you first need to set up the ACL granting the log delivery group read and write permission on the bucket. You can set the ACL using the AWS CLI as follows:

aws s3api put-bucket-acl --bucket BucketName --grant-read-acp 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"' --grant-write 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"'

Then copy the names of all the buckets for which you want logging enabled into a file, and run the following command in a loop so that all of those buckets have their logs written under the log bucket:

for i in `cat /tmp/bucketlist.txt`;do aws s3api put-bucket-logging --bucket $i --bucket-logging-status '{"LoggingEnabled":{"TargetPrefix":"S3logs/'$i'/","TargetBucket":"'S3logbucketname'"}}';done

Script to create the Security groups in AWS

You can use the AWS Console to create the security groups for your servers. But if you have a large number of servers with different security groups, or you are involved in migrating your environment, it can take a lot of time and effort to do that manually. In those cases you can use the following script, which uses the AWS CLI to create the security groups. You need to provide the following arguments to the script:

Name of the security group
VpcID
Environment name
Meaningful name for the security group usage
Description of the security group

#!/bin/bash
#
# Create Security Group in the AWS
# Need to provide the Security group name, VpcID, Environment name, Usedfor and description
name=$1;
vpcId=$2;
environment=$3;
usedFor=$4
description=$5;
# We need to provide the name of the Environment and
# action to perform
#
usage(){
echo -e "Usage:\n";
echo -e "$0 ...
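At its core, such a script wraps the create-security-group call. A minimal sketch using the same variable names as above:

aws ec2 create-security-group --group-name "$name" --description "$description" --vpc-id "$vpcId"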

Uploading the Certificate to the Cloudfront using AWS Cli

If you are using the CloudFront CDN to deliver your images and videos, CloudFront provides its own endpoint. It is possible to point your domain to this endpoint with a CNAME so that all requests on your domain are served from CloudFront; this reduces the geographical latency, reduces the load on your servers for delivering static content, and improves the user experience of your application. There is often a requirement to deliver the content securely to the end user using an SSL certificate. CloudFront can use an SSL certificate, but it doesn't allow you to upload the certificate in the Console itself. You need to do that through the AWS CLI via IAM. Once you upload the certificate you can select it in CloudFront and it will be applied to the CloudFront distribution. To upload the certificate for CloudFront, enter the below command on a server having the AWS CLI installed. aws iam upload-server-certifi...
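The command is truncated above; for reference, a minimal sketch of the full upload-server-certificate call, assuming hypothetical certificate file names (the --path prefix /cloudfront/ is what makes the certificate selectable in CloudFront):

aws iam upload-server-certificate --server-certificate-name MyCert --certificate-body file://mycert.pem --private-key file://mykey.pem --certificate-chain file://mychain.pem --path /cloudfront/mycert/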

Resolving 504 Gateway Timeout Issues

AWS ELB provides two metrics which can help in diagnosing 5XX errors and identifying the possible root cause:

Metric "Sum HTTP 5XXs": 5XX responses returned by the backend service (due to unavailability or failure of a DB, S3, MQ, etc.)
Metric "Sum ELB 5XXs": 5XX responses returned by the ELB itself (when the idle timeout expires while waiting for a response from the backend service (504), or on a failure in the ELB itself (503))

A common reason for an idle timeout on the ELB is a long-running DB query, an application bug, or performance tuning issues with the microservice.
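You can pull these counts from CloudWatch on the CLI as well. A minimal sketch, assuming a hypothetical classic load balancer named my-elb and an arbitrary one-hour window:

aws cloudwatch get-metric-statistics --namespace AWS/ELB --metric-name HTTPCode_ELB_5XX --dimensions Name=LoadBalancerName,Value=my-elb --start-time 2016-01-01T00:00:00Z --end-time 2016-01-01T01:00:00Z --period 300 --statistics Sum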

Setting StrictHostKeyChecking and SSH session timeout on a Linux Server

While working on a Linux server it's common to face an SSH session timeout from the server due to an idle session. You can prevent the session timeout from both the server and the client side. Set ServerAliveInterval in your user account's SSH config to extend the session timeout, e.g. to 120 seconds:

vim .ssh/config

Host *
ServerAliveInterval 120

chmod 600 .ssh/config

If you are using a bastion host in your corporate environment to connect to the different servers, you can edit the sshd_config file and make an entry as below to increase the SSH session timeout:

vim /etc/ssh/sshd_config

## ssh idle timeout value
ClientAliveInterval 120
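The title also mentions StrictHostKeyChecking, a client-side option commonly set in the same .ssh/config to skip the interactive host-key prompt when connecting to frequently rebuilt hosts. A minimal sketch (keep in mind the security trade-off of disabling host-key verification):

Host *
StrictHostKeyChecking no
ServerAliveInterval 120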

About RabbitMQ

RabbitMQ is a message broker written in Erlang. It is open source, but commercial support is offered. RabbitMQ uses standardized messaging protocols. A messaging protocol plays a role similar to HTTP, just optimized for messaging. The protocol supported out of the box is AMQP (Advanced Message Queuing Protocol) version 0.9. Other protocols are supported through the built-in plugin system. There are RabbitMQ client frameworks available, but because RabbitMQ uses standard protocols you can interact with it from any programming language, even JavaScript. RabbitMQ also supports clustering, ensuring high availability of the broker service even if the hardware fails.

Messaging Scenarios

Besides underpinning a complete microservices architecture, messaging can help out in many scenarios. 1. If you want legacy applications to talk to each other without data loss, messaging with queues is ideal. There are libraries in many programming languages for your message broker that let you do that. 2. When data is coming from many places and needs to power a dashboard app, a messaging queue can easily accumulate it to give you real-time insight on your dashboard. 3. If you have a big monolithic app and want to simplify it without rewriting it all at once, first isolate one feature or layer to be handled by a service. Handle the communication between the monolith and the new service with messaging. Once that is done successfully, take on the next layer, until the app is maintainable again.

Microservice Architecture Example

The customer submits a new order from the web UI. From the web UI a message is sent to the registration service, which stores the order; once completed, it sends a message to the notification service, which confirms to the customer that the order has been registered, for example by sending an email.

When to use the Microservices Architecture

The architecture must have the capacity to scale. The project may start small, but the expectation is that it will grow fast. The architecture must be flexible enough to add and remove features with minimal impact. Microservices are ideal for that, because new services can easily be added and old or unsuccessful ones removed. Also, when demand rises, adding more capacity for each service should be an easy job. Since the messages are in the queue and services pick up and process messages one by one, scaling capacity up can be done by starting another instance of the same microservice. Also, changes are rapid based on requirements, so deployment should be easy and reliable without affecting the overall system; since each service is autonomous it is easy to deploy, much easier than deploying one big monolithic application. Also, since there are many developers involved in building the whole system, making all of them work on the same monolithic code would...

Understanding the Microservices Architecture

Microservices is a style for building distributed systems. Each service is a small, independent process. Using a messaging service, the microservices can be decoupled from each other. They are so decoupled that it is not even necessary to use the same programming language for each service. That's why the API has to be language agnostic, and a message broker facilitates this language-agnostic requirement. The services are called microservices because they are typically small: they focus on doing one isolated task in a distributed system. Each service should be autonomous. Microservices should keep code sharing to a minimum, and they should have their own databases, which don't need to be of the same type; one service can use a relational database while for another a NoSQL database might be more suitable. And because each service is autonomous, building the distributed system is very modular. Each microservice can be built by a separate team and can be separat...

Messaging Patterns

The messaging patterns: 1. Point-to-point: a message is sent directly from one service to another specific service. The purpose of the message is to tell the service to do some task, which is why it is known as a command. For example, a "register order" message gives the service the command to register the order. For the sending service this is a fire-and-forget action: when the message is sent, the processing is done by one or more services asynchronously, and the sending service does not wait for the result.
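A minimal point-to-point sketch using RabbitMQ's rabbitmqadmin tool, assuming a hypothetical queue name register_order and the management plugin enabled:

rabbitmqadmin declare queue name=register_order durable=true
rabbitmqadmin publish exchange=amq.default routing_key=register_order payload="order 123"
rabbitmqadmin get queue=register_order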

Introduction to Messaging Queues

A message broker is an intermediary between the services. It is capable of sending messages to and receiving messages from the services. It is an application running on some server, and the sending and receiving of messages is done through an API. When a service sends a message to some other service, it doesn't directly call the recipient service. It sends the message to the message broker, and the message broker has an internal mechanism to route the message to the receiving service. It doesn't hand the message directly to the receiving service, but holds it until the receiver picks the message up. For example, consider people as the services and the messages as letters you send to other people. When sending a letter you do it through the mail service, which acts as the message broker: it routes the message to the receiver, but the mail service doesn't hand over the letter to the person personally; it is stored in the person's mailbox unt...

Creating multiple S3 Buckets using AWS Cli

If you want to create multiple buckets in AWS, they can be created using the AWS CLI by specifying the bucket name and the region in which the bucket should be created. Create a text file with the list of bucket names that need to be created, e.g.:

cd /tmp
vi bucketlist.txt

We are going to create the buckets in the Singapore region; change the region name if you want to create them in some other region.

for i in `cat bucketlist.txt`;do aws s3api create-bucket --bucket $i --create-bucket-configuration LocationConstraint=ap-southeast-1 ; done

Website Availability with New Relic

New Relic is a great tool to monitor web applications, see insights into the various transactions occurring in the applications, and track code issues. It helps you identify the response times from different geographical locations and optimize the user experience by reducing the load times of your applications. If you just want to monitor website availability you can use New Relic's ping monitor. It is simple and doesn't require any agent configuration up front. The advantage is that traditional monitors check the website's HTTP status, i.e. a 200 for the site loading, but won't check whether the content actually loaded. This can sometimes cause an issue, whereas New Relic gives you functionality to monitor for a string on page load; if New Relic doesn't find that string, it raises an alarm. You can configure the ping for a website as follows; in our example we have simply put the monitor for the ...

Webserver/Appserver config backup with Git Script

In production environments there are always backups scheduled for the entire servers using AMIs or snapshots, which take a system backup over a period of time and run daily in the non-peak hours. But there are environments, such as non-prod, where many people have access to the configs so that they can tune them according to their requirements. While this speeds things up, since no one is accountable for the changes it can end up breaking the configuration as multiple changes are performed. Now you can always troubleshoot the changes, but sometimes situations arise, such as hardware or DNS problems, which are outside your control. In those cases you can't keep your environment down, since this would affect the testing in the lower environments.
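The excerpt cuts off before the script itself; a minimal sketch of the idea, assuming a hypothetical config directory, is to keep the config under git and commit on a schedule (e.g. from cron) so every change is recorded and attributable:

#!/bin/bash
# hypothetical config directory kept under git
cd /etc/httpd/conf || exit 1
git add -A
git commit -m "config backup $(date +%F-%T)"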

AWS Cli installation from awscli.zip

You can install the AWS CLI using the boto SDK as discussed in my previous post. You can also download the AWS CLI zip package and install the AWS CLI from it:

curl "http://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

Adding a Slave Server to Jenkins

By default Jenkins initiates builds from the Jenkins master. But if you are using Jenkins in a production environment, there are chances that several builds for different components/microservices will need to be triggered in parallel. In that case the IO/network/system performance of the master may degrade, and the UI or Jenkins itself might start to show reduced performance, hangs, etc., which is not a desirable state. So it is recommended to use Jenkins slaves for the builds while the master only controls those nodes. To add a Jenkins slave you need to have two servers: the first is your master server and the other is the slave server on which you are going to build the jobs created in Jenkins. In our case we are going to use 172.31.59.1 as the Jenkins master and 172.31.62.152 as the slave. You need to install the Java JDK on both servers. For installing the Java JDK and setting up the environment variables, follow my previous posts Java JD...

Proxy server instance id instead of the ec2 instance-id

If you are using the AWS CLI to write your scripts, then while checking the EC2 instance id from the instance you might be using the following command:

wget -q -O - http://169.254.169.254/latest/meta-data/instance-id

The problem is, if your server is on a private subnet and uses a proxy to connect to the internet, the above command will give you the proxy server's id instead of your instance-id. To overcome this issue use the following while writing your script:

export NO_PROXY=169.254.169.254

This resolves the IP 169.254.169.254 from the instance and does not forward the request to the proxy, so you get the instance id of the instance rather than the proxy server.
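Putting the two together in a script:

export NO_PROXY=169.254.169.254
instance_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
echo "$instance_id"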

Resolved: "TCP segment of a reassembled PDU" error

Error as noted in Wireshark:

106 8.721506 0.000024 TCP 172.XX.XXX.XXX -> 172.XX.XX.XXX 368 [TCP segment of a reassembled PDU] 106

Problem statement: Behind the ELB we were using HAProxy and sending an OPTIONS request, in which the original request status was replaced by a 200 status using the CORS configuration. While HAProxy received the request from the ELB and responded back with a 200 status, the ELB was not able to respond back and the connection was terminated.

Resolution: After recording the tcpdump, capturing the packets into the generated pcap file, and analyzing it via Wireshark, we noticed that packet 106 was a [TCP segment of a reassembled PDU]. The HTTP packet was not complete, so Wireshark was also unable to see the packet as a valid HTTP one; this is the same behavior the ELB exhibits. According to RFC 2616, section 6: After receiving and interpreting a request message, a server responds with an HTTP response message. [2] ...

Using pcap to analyze the network packets and troubleshooting web applications

If you are facing problems with a web request and getting an error status, you can monitor the responses on the server side and client side by monitoring the packets sent over the network. You can use tcpdump for this. If you want greater insight into what's happening over your network, you need to capture the packets and analyze them using a network packet analyzer tool such as Wireshark; otherwise you can use tcpdump itself to view the captured packets. A pcap file, which is basically a binary file, needs to be generated to capture your packets over the network. You then query that file using tcpdump or Wireshark to see what's happening in your network. To generate the pcap file for monitoring, use the following command:

tcpdump -i eth0 -s 65535 -w request.pcap

To analyze the pcap file, use the following command:

tcpdump -qns 0 -X -r request.pcap

You should see the time of the request, the IP from which the request was received, your se...
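On a busy server it can help to narrow the capture; a small variation of the same command, assuming the web traffic of interest is on port 80:

tcpdump -i eth0 -s 65535 -w request.pcap port 80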

Installing Botosdk on RHEL

In my previous post I covered the installation of python pip, which can be used for the boto SDK and AWS CLI installation. Follow these steps to install the boto SDK and AWS CLI on RHEL in Amazon AWS:

pip install boto3
pip install awscli

This completes the boto3 SDK and AWS CLI installation. Before you can use the command line you need to connect to AWS and authenticate. Use the following command for this:

aws configure

This will in turn ask for your Access Key and Secret Access Key, which you can generate from AWS IAM. You also need to enter the region endpoint to ensure you are connecting to the correct region in case you are using multiple regions in AWS; this is also a good practice to follow. Once done, you should be able to connect to your AWS environment.
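A quick way to verify that the configured credentials actually work is to ask AWS which identity you are calling as:

aws sts get-caller-identity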

Installing pip on Redhat Linux on Amazon AWS

RHEL doesn't come with the AWS CLI or boto SDK preinstalled, unlike Amazon Linux. To install the AWS CLI or the python boto SDK you need python pip, through which you can install both, but pip is not installed by default. Follow these steps to install python pip on RHEL 7 in Amazon AWS:

cd /tmp
yum install wget -y
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest*
yum install python-pip

This will complete the installation of python pip on RHEL 7. Check the version of pip using the following command:

[root@ip-xxx-xx-xx-xxx tmp]# pip -V
pip 7.1.0 from /usr/lib/python2.7/site-packages (python 2.7)

Apache Maven Installation on Centos Linux

Apache Maven is widely used; follow these steps to install Apache Maven on your system:

cd /opt
wget http://redrockdigimark.com/apachemirror/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
unzip apache-maven-3.3.9-bin.zip

This will create a directory named apache-maven-3.3.9 and completes the Apache Maven installation.
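To use it from anywhere, you can add Maven to the PATH and verify the install:

export PATH=/opt/apache-maven-3.3.9/bin:$PATH
mvn -version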

Autoscaling: Important Points to Consider

While setting up an Autoscaling environment in Amazon AWS, you should keep note of the following things in order to build a better dynamic environment. Always tag the instances while creating the Autoscaling group, so that all the instances which come up from the Autoscaling group have the tags associated with them and you can easily find them in the console (see the sketch after this paragraph). If you are using session persistence, you can handle the session either from the AWS ELB or from the server. If you select the AWS ELB, the ELB will manage your session on the basis of the time duration specified. If you handle the session on the server, then the round-robin algorithm used by the ELB won't work effectively. To overcome this problem, save the session in a database or use memcache or some other caching for the session; this will prevent the overloading of any particular instance. If you are using some configuration management tool which builds up your instance and dep...
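A minimal sketch of tagging an existing Autoscaling group from the CLI, assuming a hypothetical group name my-asg (PropagateAtLaunch makes instances launched by the group inherit the tag):

aws autoscaling create-or-update-tags --tags ResourceId=my-asg,ResourceType=auto-scaling-group,Key=Environment,Value=prod,PropagateAtLaunch=true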

Ansible terminology for IT operations

Change Management: Change management is the fundamental function, and all the rest are built around it as the core idea. It defines what the system needs to look like and makes sure that is what it is; if it's not in that state, you enforce it. For example, a webserver should have Apache installed at version 2.4 and in a started state; anything which deviates from this defined state dictates a change, and you mark that system as changed. Marking systems as changed makes more sense in production, because a production system shouldn't change like that and you might want to find the cause. In the case of Ansible, if a system's state already matches, Ansible won't even try to change it; this is called idempotence (see the sketch after this section). Provisioning: It is built on change management, but it is focused on a role you are trying to establish. The most basic definition of provisioning is that you are transitioning from one system state to another system ...
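A minimal sketch of the Apache example above as Ansible ad-hoc commands, assuming RHEL-family hosts in a hypothetical inventory group named webservers:

ansible webservers -m yum -a "name=httpd state=present"
ansible webservers -m service -a "name=httpd state=started"

Run a second time, both commands report no change, which is the idempotence described above.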

Using Rsyslog to forward Elastic Load balancer(ELB) logs in AWS

ELB logs provide great insight into the traffic being received by your application. You can identify locations, requests, errors, and attacks by analyzing the ELB logs, and your security team might be interested in analyzing them. The problem is that the logs are written either every hour or every 5 minutes (you can also have them written at a definite size of 5MB). If you choose 1 hour, the size of the file will be big, so it makes sense to write the logs every 5 minutes, since you want to analyze the current requests coming to the ELB. The problem in setting up Rsyslog is that the AWS logs are generated with a dynamic pattern, and the date yyyy/mm/dd keeps rotating. Another problem is that a new log file is generated every time; and thirdly, the logs are written to an S3 bucket, which is storage only and has very low computing power. We used s3fs to mount S3 on the server, which provided easy access to the logs on S3. The other problem was that all the multiple applica...
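A minimal s3fs mount sketch, assuming a hypothetical bucket name, mount point, and an s3fs credentials file already in place:

s3fs elb-logs-bucket /mnt/s3logs -o passwd_file=/etc/passwd-s3fs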

Jenkins Installation

Check if anything is running on port 8080, which is used by Jenkins by default:

telnet localhost 8080

Install the Java JDK, which is required by Jenkins:

mkdir /usr/java
cd /usr/java
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.tar.gz
tar -xf jdk-8u92-linux-x64.tar.gz
cd jdk1.8.0_92/
update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_92/bin/java 100
update-alternatives --config java
update-alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_92/bin/javac 100
update-alternatives --config javac
update-alternatives --install /usr/bin/jar jar /usr/java/jdk1.8.0_92/bin/jar 100
update-alternatives --config jar

Set up JAVA_HOME:

vi /etc/rc.d/rc.local

export JAVA_HOME=/usr/java/jdk1.8.0_92/
export JRE_HOME=/usr/java/jdk1.8.0_92/jre
...

Docker Terms

Docker Engine: sometimes called the docker daemon or docker runtime. It gets downloaded from apt-get or yum on Linux whenever we install Docker. It is responsible for providing access to the whole Docker runtime and services. Images: images are what we launch Docker containers from.

docker run -it fedora /bin/bash

In this example fedora is the image; it will launch a fedora-based container. Images are composed of different layers. Whenever we don't specify the image version it will pull the latest fedora version. To get all the image versions, use the -a flag in the docker pull command. To check all the locally available images, use:

docker images fedora

Images have 5 fields associated with them.
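A minimal sketch of both commands together (the listing shows the 5 fields: REPOSITORY, TAG, IMAGE ID, CREATED and SIZE):

docker pull -a fedora
docker images fedora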

Running docker on TCP

By default the Docker daemon listens on a Unix socket, which is only reachable from the local host. If you want to run Docker on TCP so that it can be reached over the network, you can start the daemon as follows:

docker -H 10.X.X.X:2375 -d &

You can verify this by using the netstat command to check whether Docker is running on the TCP port.
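A quick verification sketch:

netstat -tlnp | grep 2375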

Granting normal users access on docker containers

Depending on your needs, you might want to give normal users, according to their role, permission to create Docker containers. You don't want to go around giving sudo privileges to all users just so they can manage Docker containers. If such a user runs a Docker container, they will receive an error stating permission denied. To overcome this problem we are going to add the normal user to the docker group, which will enable them to create Docker containers. The docker group is created by default once you install Docker. You might want to restrict which users have access to Docker containers depending on your environment.
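A minimal sketch, assuming a hypothetical user name; the user needs to log in again for the group change to take effect:

usermod -aG docker username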

Stopping a virtual machine with Vagrant

We have powered on the Ubuntu and CentOS machines using Vagrant. You can power off a virtual machine using the halt command. You need to be in the directory of the virtual machine to power it down. Use the following command:

vagrant halt

You can confirm it from Oracle VirtualBox as well.

Creating Centos Virtual machine using Vagrant

Creating a CentOS 6.5 virtual machine using Vagrant is done as follows. 1. Add the CentOS image:

vagrant box add centos65-x86_64-20140116 https://github.com/2creatives/vagrant-centos/releases/download/v6.5.3/centos65-x86_64-20140116.box

2. Initialize this virtual machine and then start it:

vagrant init centos65-x86_64-20140116
vagrant up

3. The details of the virtual machine and its authentication details can be found with:

vagrant ssh-config

Docker Installation

We are going to install Docker on the Ubuntu machine which we already created using Vagrant in our previous posts; check it out here. You need sudo privileges on the machine to install Docker. We are simply installing Docker as root in our example; you can do the same or use sudo to achieve the same result. Make sure you are running kernel version 3.10 or higher when running Docker, because it gives better performance and supports kernel namespaces and related features. To install Docker on Ubuntu we use apt-get and run the following command:

apt-get install docker.io

That's it, the Docker installation is complete. You can then check whether the Docker service is running on the Ubuntu machine.
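A quick sketch for that check; either of these should respond if the daemon is up:

docker version
docker info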

Using Vagrant to run the Ubuntu Machine

Image
Vagrant is particularly helpful in automating virtual machine setup, so that you can create virtual machines easily and start using them instead of doing a step-by-step installation each time. Vagrant uses prebuilt images and will download them automatically when you start the virtual machine. 1. As a prerequisite you need Oracle VirtualBox, which actually runs the virtual machine. You can download the latest VirtualBox here. 2. You need to download and install Vagrant; it can be downloaded here. Once you have installed VirtualBox and Vagrant, it is a matter of a few commands to spin up a new virtual machine. Open the Windows command line using cmd in the search option. Create a new folder, C:\vm\test, to use for creating your first virtual machine with Vagrant, and navigate to that folder in the command-line panel. You need to initialize the image being used by Vagrant; simply type the following command: vagrant init ...

Apache Kafka Introduction

Introduction: Apache Kafka is a distributed publish-subscribe messaging system, originally developed at LinkedIn and later made part of the Apache project. Apache Kafka is written in Scala. Advantages: Kafka is fast, scalable, durable, and distributed by design, which means it can run as a cluster on different nodes. Kafka has a very high throughput: it can process billions of messages per day with low latency and in near real time, bringing activity from different systems together through different topics and queues.