
Wednesday, December 14, 2016

AWS S3 bucket error: A client error (PermanentRedirect) occurred when calling the ListObjects operation

I was trying to sync a bucket across regions when I encountered the below error:

 A client error (PermanentRedirect) occurred when calling the ListObjects operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint: $bucketname.s3-ap-southeast-1.amazonaws.com  
 You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".  

Solution:-
In my case the Elastic IP was not associated with the EC2 instance, so it was trying to communicate via the S3 VPC endpoint, which is why I was getting this error. Once I tried from an instance that had a public Elastic IP with internet access, the sync worked.

Details:-
The S3 VPC endpoint can only be used to reach buckets in your own region. If you are working across regions, as in my case where one bucket was in the Singapore region and the other was in the Mumbai region, the endpoint won't work.

In this case your instance must have a public IP associated with it, since all the communication goes over the internet. Once I associated the Elastic IP with my instance it worked fine. You can also get this error if you are using a NAT Gateway. The sync should only be performed from an instance whose subnet routes through an internet gateway.
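For a cross-region sync you can also tell the CLI both regions explicitly so it does not depend on the redirect. A minimal sketch, with placeholder bucket names and the Singapore and Mumbai regions from my case:

 aws s3 sync s3://source-bucket s3://destination-bucket --source-region ap-southeast-1 --region ap-south-1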

Friday, December 9, 2016

Enabling the S3 bucket logging from the Command line for multiple buckets

To enable S3 bucket logging in AWS you first need to set an ACL that grants the S3 Log Delivery group read-ACP and write permissions on the bucket that will receive the logs. You can set the ACL using the AWS CLI as follows:-

 aws s3api put-bucket-acl --bucket BucketName --grant-read-acp 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"' --grant-write 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"';  

Then copy the names of all the buckets for which you want logging enabled into a file, and run the following command in a loop so that each bucket writes its logs under the log bucket (replace S3logbucketname with the name of your log bucket):

 for i in `cat /tmp/bucketlist.txt`;do aws s3api put-bucket-logging --bucket $i --bucket-logging-status '{"LoggingEnabled":{"TargetPrefix":"S3logs/'$i'/","TargetBucket":"'S3logbucketname'"}}';done  
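After the loop finishes you can verify the configuration. A quick check, assuming the same /tmp/bucketlist.txt file:

 for i in `cat /tmp/bucketlist.txt`;do echo $i; aws s3api get-bucket-logging --bucket $i;done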



Thursday, December 8, 2016

Script to create the Security groups in AWS

You can use the AWS Console to create the Security Groups for your servers. But if you have a large number of servers with different security groups, or you are migrating your environment, doing it manually can take a lot of time and effort.

In those cases you can use the following script, which uses the AWS CLI to create the Security Groups.

You need to provide the following arguments to the script for creating the Security Group:

  1. Name of the Security group
  2. VpcID
  3. Environment Name
  4. A meaningful name describing what the Security Group is used for (UsedFor tag)
  5. Description of the Security Group

 #!/bin/bash
 #
 # Create a Security Group in AWS
 # Arguments: Security Group name, VPC ID, Environment name, UsedFor tag and description
 name=$1;
 vpcId=$2;
 environment=$3;
 usedFor=$4;
 description=$5;
 # Print usage and exit when the wrong number of arguments is supplied
 usage(){
     echo -e "Usage:\n";
     echo -e "$0 <Name> <vpc_id> <TAG:Environment> <TAG:UsedFor> <TAG:Description> \n";
     exit 1;
 }
 # Five arguments are required to execute the script
 if [ $# -ne 5 ];
 then
     usage;
 fi;
 # Create the Security Group and capture its ID
 groupId=`aws ec2 create-security-group --vpc-id $vpcId --group-name $name --description "$description" --query 'GroupId' --output text`;
 if [[ $groupId == "" ]];
 then
     echo -e "Failed to create group";
     exit 1;
 fi;
 echo -e "Group ID: $groupId";
 echo "$name  $groupId" >> sg_lb_list.txt
 # Assign tags to the new Security Group
 aws ec2 create-tags --resources $groupId --tags Key=Name,Value=$name Key=Environment,Value=$environment Key=UsedFor,Value=$usedFor Key=Description,Value="$description";
 exit 0;

Example

 ./create_security_group.sh Dev-SG-LB-App-Appserver vpc-2582cag7 Development Appname-ApplicationServer "Security Group for Appname Application Server";  
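The script only creates an empty Security Group; the rules still have to be added to it. As a sketch (the group ID, port and CIDR below are placeholders), an ingress rule can be added with:

 aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 10.0.0.0/16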


Uploading the Certificate to the Cloudfront using AWS Cli

If you are using the CloudFront CDN to deliver your images and videos, CloudFront provides its own endpoint for the distribution.

You can point your domain to this endpoint with a CNAME record so that all requests on your domain are served from CloudFront. This reduces geographical latency, takes the load of serving static content off your servers, and improves the user experience of your application.

If you need to deliver the content securely to the end user with an SSL certificate, CloudFront can use one, but it doesn't allow you to upload the certificate from the console itself. You need to do that through the AWS CLI, via IAM.

Once you upload the certificate you can select it in CloudFront and it will be applied to the CloudFront distribution.

To upload the certificate for CloudFront, run the below command on a server that has the AWS CLI installed.

 aws iam upload-server-certificate  --server-certificate-name wildcard.yourdomain.com  --certificate-body file://yourdomain.crt  --private-key file://yourdomain.key  --certificate-chain file://gd_bundle-g2-g1.crt  --path /cloudfront/  

This uploads the certificate to IAM under the /cloudfront/ path, which makes it available for selection in CloudFront.
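To confirm the certificate was stored under the CloudFront path, you can list the server certificates in IAM:

 aws iam list-server-certificates --path-prefix /cloudfront/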

Thursday, December 1, 2016

Resolving 504 Gateway Timeout Issues

AWS ELB exposes two metrics which can help in diagnosing 5XX errors and identifying the possible root cause:

Metric "Sum HTTP 5XXs": 5XX responses returned by the backend service (due to unavailability or failure of the DB, S3, MQ, etc.)
Metric "Sum ELB 5XXs": 5XX responses generated by the ELB itself (when the idle timeout expires while waiting for a response from the backend service (504), or on a failure in the ELB itself (503))
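To see which of the two metrics is spiking, you can pull the sums from CloudWatch. A sketch for a Classic ELB, assuming the load balancer name my-elb and an arbitrary time window (HTTPCode_ELB_5XX and HTTPCode_Backend_5XX are the CloudWatch names behind the two console graphs):

 aws cloudwatch get-metric-statistics --namespace AWS/ELB --metric-name HTTPCode_ELB_5XX --dimensions Name=LoadBalancerName,Value=my-elb --start-time 2016-12-01T00:00:00Z --end-time 2016-12-01T06:00:00Z --period 300 --statistics Sum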

A common reason for the idle timeout on the ELB is a long-running DB query, an application bug, or performance tuning issues with the microservice behind it.
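If the slow responses are expected, a stopgap is to raise the ELB idle timeout while the query or code path is tuned. A sketch for a Classic ELB, assuming the name my-elb and a 120 second timeout:

 aws elb modify-load-balancer-attributes --load-balancer-name my-elb --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":120}}"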