Posts

Showing posts from 2018

Git Cheat Sheet


(Solved) Free storage dropped to 0 on aurora/rds node

When the slow_log is enabled and long_query_time is set to 0, the instance logs every query hitting the DB. On an Aurora/RDS node this can fill the local storage with slow-log data until free storage drops to 0 and the database goes down. Always make sure these settings are not left on permanently; enable them only for a short period for debugging purposes.
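On RDS/Aurora these settings live in the DB parameter group (the master user lacks SUPER, so SET GLOBAL won't work). A minimal sketch of enabling the slow log with a sane threshold and reverting it afterwards; the parameter group name here is a placeholder:

# Enable the slow log with a 1-second threshold instead of 0
aws rds modify-db-parameter-group --db-parameter-group-name my-aurora-params \
  --parameters "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
               "ParameterName=long_query_time,ParameterValue=1,ApplyMethod=immediate"

# Once debugging is done, switch it off again
aws rds modify-db-parameter-group --db-parameter-group-name my-aurora-params \
  --parameters "ParameterName=slow_query_log,ParameterValue=0,ApplyMethod=immediate"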

TLS 1.3 support with nginx plus

TLS 1.3 support is now available, bringing the latest performance and security improvements to transport layer security. This includes 0-RTT support, for faster TLS session resumption. TLS 1.3 can be used to provide secure transport for both HTTPS and TCP applications. OpenSSL 1.1.1 or newer is required to provide TLS 1.3. At the time of writing, OpenSSL 1.1.1 ships with Ubuntu 18.10 and with FreeBSD 12 (shortly after its release).
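As an illustration, the nginx side of this comes down to a couple of directives; a minimal sketch with placeholder names and certificate paths (ssl_early_data enables 0-RTT and should be weighed against replay risks):

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    # Offer TLS 1.3 to capable clients, keep TLS 1.2 as fallback
    ssl_protocols TLSv1.2 TLSv1.3;
    # Optional 0-RTT session resumption
    ssl_early_data on;
}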

Storing AWS passwords securely on Mac

If you are managing a large number of AWS accounts in your organisation, it is better to use some federated solution. However, if you are using passwords only, you can use KeePassXC (https://keepassxc.org/download/#mac) to store all your passwords securely. KeePassXC maintains its own database, and you can back that database up to keep your passwords safe. It does not sync them externally, which lowers the threat of compromise. In case your laptop is corrupted, you can import the database created earlier and you will be able to see the credentials. Every time you open your database you have to unlock it before gaining access to the secrets. Its use is much like HashiCorp Vault; the difference is that instead of an application making the request, you keep your passwords in KeePassXC yourself.

Important points regarding the use of spot instances in AWS

1. If you are using the balanced orientation, which is a mix of cost orientation and availability orientation, instances will always be launched in the AZs that currently have the lowest pricing together with the longest duration without disruptions. Spot instance providers usually make choices based on the lowest price and the longest-lasting instances. But this does not mean capacity is spread evenly between AZs, i.e. balanced orientation is not always balanced distribution.
2. You can change this by selecting the availability orientation, but that option narrows down the possibility of long-running instance types if volatility increases, so choose it with consideration.
3. There can be an issue if your subnet does not have sufficient capacity to allocate more IP addresses. This happens when there are not enough free addresses in the subnet to satisfy the requested number of instances.
4. Also ...

Aliyun Cloud Important Points

There are two versions of the Aliyun CLI available: a Go version and a Python version. Make sure the Go version is installed, as the Python version is going to be deprecated. You can refer to the link below for Aliyun CLI installation: https://github.com/aliyun/aliyun-cli?spm=a2c63.p38356.a3.1.17414388la2EgQ
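Once the Go binary is on your PATH, initial setup is interactive; a quick sketch (the verification call is just one example of the aliyun <product> <ApiName> pattern):

# Enter AccessKey ID, AccessKey Secret and default region when prompted
aliyun configure

# Verify with any read-only call, e.g. listing ECS regions
aliyun ecs DescribeRegions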

Creating a VPN Tunnel

Create a VPN Gateway. Create a Customer Gateway and enter the office gateway IP as the customer gateway IP address. Create the IPSec connection, considering the following important points:
Local Network - VPC CIDR
Remote Network - Office network CIDR
Encryption Algorithm - aes192
Download the VPN configuration and share it with the network team. In the mail, mention the ports to be opened, usually 22, 80 and 443. Once the network team has applied the configuration on their end of the tunnel, the tunnel will come up in the IPSec connection section. Update the route table, and allow the required ports from the other end's tunnel NAT IP so that the traffic flows securely over the private tunnel.
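If you are building the same kind of tunnel on AWS, the equivalent resources can also be created from the CLI; a sketch with placeholder IDs, IPs and CIDRs:

# Virtual private gateway and customer gateway (office side)
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000

# The IPSec connection itself, with a static route to the office CIDR
aws ec2 create-vpn-connection --type ipsec.1 \
  --vpn-gateway-id vgw-0123456789abcdef0 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --options StaticRoutesOnly=true
aws ec2 create-vpn-connection-route --vpn-connection-id vpn-0123456789abcdef0 \
  --destination-cidr-block 10.20.0.0/16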

Most Important Security Practices

Remove all passwords, keys etc. from code and use vaults/JKS etc. to store them securely.
Review all exposed APIs in terms of sanitising input params, building rate controls, authentication, and source whitelisting.
Build DDoS protection by reviewing the perimeter architecture, implementing a WAF, and putting request rate limits at the load balancer.
Keep reviewing all security groups and firewall rules, and patch any system with vulnerable components.
Start secure code reviews for all releases, covering input sanitisation, query parameterisation and other OWASP items.

Best Practices with Mysql Databases

Stored procedures should not be used.
All queries taking more than 500 ms are classified as bad queries and will be treated as blocker bugs.
No unnecessarily complex joins, and no shared databases across multiple applications/services.
Every database should have its own access control.
Connection and throttle limits should be set up.
Schema migrations should not incur any downtime.
Every database should have a candidate master and multi-redundancy.
Every database should have orchestration set up with automatic failover.
All databases should be part of monitoring.

Database Proxy

Database proxy is a middleware which, once set up, ensures that all reads/writes from the application pass through it. It can serve the following purposes. 1) Balancing the load due to queries being performed on the database. In most setups, database slaves are used with a DNS record, which doesn't balance the queries actually being executed on the slaves. It has been observed that while one slave is heavily loaded, another is almost idle, which clearly indicates balancing is not done the right way and the overall performance of read queries degrades relative to the resources being used. 2) Routing/rejecting queries based on regex. This gives the engineering team the capability of blacklist filters on certain clauses, depending upon the current indexes on the table. This ensures queries executed from the mysql CLI client do not impact critical slaves. There is more that can be achieved with this feature. 3) Automatically shun slaves w...
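The post does not name a specific proxy, but the feature set matches tools like ProxySQL; purely as an illustration, a regex blacklist rule entered on ProxySQL's admin interface might look like this (the table and pattern are made up for the example):

# ProxySQL admin interface listens on 6032 by default
mysql -u admin -padmin -h 127.0.0.1 -P 6032 -e "
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, error_msg, apply)
VALUES (10, 1, 'SELECT .* FROM orders WHERE .* LIKE', 'Blocked: unindexed LIKE on orders', 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;"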

Engineering Best practices to be followed

1. All teams should use Confluence:- all team documents, on-call processes, how-tos, team details etc. should be published to Confluence itself. Documents should not be shared over email or texts. 2. Publish design documents for future releases:- design documents should have the following structure: status, authors, reviewers, overview, goals (both business-level and tech-level), design, architecture, tech stack, changes in existing systems, APIs (public and non-public), security, system infra details, testing, monitoring and alerting, disaster recovery, failover, production readiness checklist, FAQs. 3. Code quality:- a. supported IDEs and minimum versions; b. use of Bitbucket/GitLab and code style guidelines. 4. Code documentation and guidelines:- a. each and every code commit should carry its JIRA ID; b. release branches should be properly defined. 5. Code review:- a. publish a code review checklist; b. tools to track code reviews; c. cross-team review f...

[Solved] Error: (!log_opts) Could not complete SSL handshake with : 5

If you are working with NRPE and have just compiled it but are not able to start the NRPE service on port 5666, it is because you have not compiled it with SSL support. To resolve this issue, recompile it with SSL support:
cd nrpe-3.2.1
./configure --enable-ssl
If you have already used --enable-ssl, then you might not have added the IP address in /etc/nagios/nrpe.cfg:
allowed_hosts=127.0.0.1,192.168.33.5
In case you are using the xinetd or inetd services, then you need to change /etc/xinetd.d/nrpe instead:
only_from       = 127.0.0.1 192.168.33.5

Dividing a CIDR into AWS subnet ranges with a UI

In case you are creating a new VPC, have selected a CIDR range, and want to divide the CIDR into proper ranges along with the number of usable IPs, counts and ranges, you can use the following online tool to do everything in a UI, reducing errors and managing the information much more conveniently: http://www.davidc.net/sites/default/subnets/subnets.html Just enter the CIDR block range, like 10.0.0.0/16, and then you can start dividing the subnets as per your use case and incorporate the same in your VPC. AWS Subnet Calculator from CIDR Tool

[Solved] jenkins java.lang.IllegalArgumentException: Not valid encoding '%te'

After a fresh Jenkins installation, I was getting this error when I created a user:
Caused by: java.lang.IllegalArgumentException: Not valid encoding '%te'
        at org.eclipse.jetty.util.UrlEncoded.decodeHexByte(UrlEncoded.java:889)
        at org.eclipse.jetty.util.UrlEncoded.decodeUtf8To(UrlEncoded.java:522)
        at org.eclipse.jetty.util.UrlEncoded.decodeTo(UrlEncoded.java:577)
        at org.eclipse.jetty.server.Request.extractFormParameters(Request.java:547)
        at org.eclipse.jetty.server.Request.extractContentParameters(Request.java:471)
        at org.eclipse.jetty.server.Request.getParameters(Request.java:386)
        ... 91 more
Solution:- Just restart the Jenkins service; that should fix the issue.
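For reference, on a systemd-based host (assuming the default service name) the restart is simply:

sudo systemctl restart jenkins
# on older init systems: sudo service jenkins restart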

[Solved] Stderr: VBoxManage: error: The virtual machine 'master_default_1540967069723_95784' has terminated unexpectedly during startup with exit code 1 (0x1)

Error:- There was an error while executing `VBoxManage`, a CLI used by Vagrant for controlling VirtualBox. The command and stderr are shown below.
Command: ["startvm", "cddac55c-debe-470d-bb0a-d5badf0c19af", "--type", "gui"]
Stderr: VBoxManage: error: The virtual machine 'master_default_1540967069723_95784' has terminated unexpectedly during startup with exit code 1 (0x1)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine
Solution:- 1. A brief of what I was doing: I installed VirtualBox on my macOS using brew, then installed Vagrant and tried to bring up the VM using Vagrant, which resulted in the above error. 2. The problem is that macOS doesn't allow changes to kernel modules by an external application, due to which the installation of VirtualBox fails on macOS. 3. To resolve this issue, download the VirtualBox installer fr...

Installing the virtualbox on mac using brew

Use the following command to install VirtualBox on the Mac using brew:
brew cask install virtualbox

Managing Multiple VPC in Organization

If you are managing a very large infrastructure spawned across multiple public clouds and private datacenters, with a large number of external integrations with multiple merchants over tunnels, it is good to maintain the network details for all the public clouds (AWS VPCs), private datacenters etc., so that there is no overlap between your account and some other team's account with which you might have to peer or create a tunnel at a later point in time. It is good to maintain a wiki page for this, and every time new infrastructure needs to be created, update the wiki accordingly. For AWS you can prepare a spreadsheet with the following fields to relay the information correctly to other teams:- 1. Network details 2. CIDR 3. Broadcast IP 4. Netmask 5. Location 6. Comments. For private datacenters enter the following details:- 1. Subnet 2. Mask 3. Subnet details 4. VLAN ID 5. Zone/VLAN 6. Gateway

Enable or Disable passphrase on id_rsa key file

It's always good to set a passphrase whenever you generate an SSH key for server access, as it helps prevent unauthorised access in case your key is compromised. From the security point of view it is usually an audit requirement, since it acts as a second factor: both the passphrase and the key are needed to access the server. You can also enable Google authentication, in which case a passcode is generated in an application such as Google Authenticator; apart from the passphrase and key, a person accessing the server would then need to enter the Google Authenticator code as well, increasing security even further. I covered this in my previous post: Google Authenticator MFA for Linux systems. In case you forgot to enable the passphrase and want to enable it now, use the following command to add a passphrase without affecting your existing key file: ssh-keygen -p -f ~/.ssh/id_rsa ...

Elasticsearch monitoring

What is Elasticsearch? Elasticsearch is an open source distributed document store and search engine that stores and retrieves data structures in near real-time. Elasticsearch represents data in the form of structured JSON documents, and makes full-text search accessible via a RESTful API and web clients for languages like PHP, Python, and Ruby. A few key areas to monitor Elasticsearch in Datadog:
Search and indexing performance
Memory and garbage collection
Host-level system and network metrics
Cluster health and node availability
Resource saturation and errors
Search performance metrics: Query load: monitoring the number of queries currently in progress can give you a rough idea of how many requests your cluster is dealing with at any particular moment in time. Query latency: though Elasticsearch does not explicitly provide this metric, monitoring tools can help you use the available metrics to calculate the average ...
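Most of these metrics come straight from Elasticsearch's own REST endpoints, which any monitoring tool ultimately polls; a quick sketch with curl against the default host and port:

# Cluster health: status, node count, unassigned shards
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-node stats: JVM heap, GC, thread pools, search/indexing counters
curl -s 'http://localhost:9200/_nodes/stats?pretty'

# Index-level search stats: query_total and query_time_in_millis for latency math
curl -s 'http://localhost:9200/_stats/search?pretty'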

Important points for Elasticsearch Optimizations

Points to be taken care of before creating a cluster:
Volume of data
Nodes and capacity planning
Balancing, high availability, shard allocation
Understanding the queries that the cluster will serve
Config walk-through:
cluster.name: Represents the name of the cluster; it should be the same across the nodes in the cluster.
node.name: Represents the name of the particular node in the cluster. It must be unique for every node, and it is good to use the hostname.
path.data: Location where Elasticsearch stores the index data on disk. If you are planning to handle a huge amount of data in the cluster, it is good to point this at a separate EBS volume instead of the root volume.
path.logs: Location where Elasticsearch stores the server startup, indexing and other logs. It is also good to keep these off the root volume.
bootstrap.memory_lock: This is an important config in the ES config file. This needs to be set to "true". This config locks the amount of...
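Putting the settings above together, a minimal elasticsearch.yml sketch (names and paths are illustrative):

cluster.name: my-es-cluster        # same on every node in the cluster
node.name: es-node-01              # unique per node, ideally the hostname
path.data: /mnt/es-data            # dedicated EBS volume, not the root disk
path.logs: /mnt/es-logs            # keep logs off the root volume too
bootstrap.memory_lock: true        # lock the heap in RAM so it never swaps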

Issue sending Email from the Ec2 instances

I configured postfix recently on an EC2 instance and tried sending mail with all the security group rules and NACL rules in place. Although I was initially able to telnet to the Google mail servers on port 25, I soon started seeing "no connection" error messages in the logs, and ultimately I was not able to telnet at all and the mails were not going out or being received by the receiver. This problem only appeared on EC2 instances: Amazon throttles traffic on port 25 for all EC2 instances by default. It is, however, possible to have this throttling removed. To remove the limitation, you need to create a DNS A record in Route 53 pointing to the instance used as the mail server, such as postfix. With the root account, open the following link https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request and provide your use case for sending mail. Then you need to provide any reverse...

SSH upgrade on Ubuntu for PCI Compliance

In case your security team raises a concern regarding upgrading the OpenSSH server version on Ubuntu servers, kindly check the OpenSSH version shipped for your distribution release before making any changes, as this can affect overall reachability of the server. The following are the latest OpenSSH versions per distribution release: OpenSSH 6.6 is the most recent version on Ubuntu 14.04. OpenSSH 7.2 is the most recent version on Ubuntu 16.04. OpenSSH 7.6 is the most recent version on Ubuntu 18.04. OpenSSH 7.6 is supported on Ubuntu 18.04 only, and Ubuntu 14.04 is not compatible with it; that is why it is not upgraded during the patching activity. Like all other distributions, Ubuntu backports vulnerability fixes so that application compatibility doesn't break by changing versions between different releases. Don't make any changes to your server that are not compatible with your distribution version. Go on providing the version of the ubuntu you are runn...
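To show that a fix has been backported rather than chasing the upstream version number, check the installed package and its changelog; a sketch:

# Version actually installed (e.g. 1:7.2p2-4ubuntu2.x on 16.04)
apt-cache policy openssh-server

# CVE fixes backported by Ubuntu are listed in the package changelog
apt-get changelog openssh-server | less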

#4 Vault Installation Unsealing for access


#3 Vault Features


#2 How Vault Works


#1 Hashicorp Vault Introduction


#4 Principles of Infrastructure as code


#2 Features of Blockchain


#1 What is BlockChain


#3 Challenges with Dynamic Infrastructure


Managing Mysql Automated Failover on Ec2 instances with Orchestrator

Orchestrator is a free and open-source MySQL high availability and replication management tool whose major functionalities include MySQL/MariaDB master DB failover in seconds (10-20 secs, considering our requirements) and managing the replication topology (changing the replication architecture by drag-and-drop or via the Orchestrator CLI) with ease. Orchestrator has the following features:- 1. Discovery:- it actively crawls through the topologies and maps them, can read the basic replication status and configuration, and provides a slick visualisation of topologies, including replication problems. 2. Refactoring:- it understands replication rules and knows about binlog file:position, GTID, Pseudo-GTID and binlog servers. Refactoring replication topologies can be a matter of dragging and dropping a replica under another master. 3. Recovery:- it can detect master and intermediate-master failures. It can be configured to perform automated recovery or to allow the user to choose manual recovery. Master failover ...
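For reference, a couple of Orchestrator CLI invocations matching the features above; hostnames are placeholders and the exact flags can vary by version:

# Discover a topology by pointing orchestrator at any one member
orchestrator -c discover -i db-master-01:3306

# Relocate a replica under another master (topology refactoring)
orchestrator -c relocate -i db-replica-03:3306 -d db-master-02:3306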

Mysql Benchmarking with sysbench

Benchmarking helps to establish the performance parameters for a MySQL database on different instance sizes in the AWS cloud.
Tool: sysbench
MySQL version: 5.6.39-log MySQL Community Server (GPL)
EC2 instance type: r4.xlarge (30 GB RAM, 4 CPU)
DB size: 25 GB (10 tables)
sysbench /usr/share/sysbench/oltp_read_write.lua --threads=32 --events=0 --time=120 --mysql-user=root --mysql-password=XXXXXXXXX --mysql-port=3306 --tables=10 --delete_inserts=10 --index_updates=10 --non_index_updates=0 --table-size=10000000 --db-ps-mode=disable --report-interval=5 --mysql-host=10.1.2.3 run
sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 32
Report intermediate results every...
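The run phase above expects the test tables to exist already; with sysbench 1.0 the usual flow is prepare, run, cleanup with the same options and a different final verb:

# Create the 10 test tables of 10M rows each before benchmarking
sysbench /usr/share/sysbench/oltp_read_write.lua --threads=32 --tables=10 \
  --table-size=10000000 --mysql-user=root --mysql-password=XXXXXXXXX \
  --mysql-host=10.1.2.3 --mysql-port=3306 prepare

# Drop the test tables once done
sysbench /usr/share/sysbench/oltp_read_write.lua --tables=10 \
  --mysql-user=root --mysql-password=XXXXXXXXX --mysql-host=10.1.2.3 cleanup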

Upgrading AWS Instances to 5th Generation instances

Note: the latest AWS CLI version is required. Once you've installed and configured the AWS CLI, and created an AMI, please follow the steps below: 1) SSH into your Ubuntu instance. 2) Upgrade the kernel on your instance by running: sudo apt-get update && sudo apt-get install linux-image-generic 3) Stop the instance from the console or AWS CLI. 4) Using the AWS CLI, modify the instance attributes to enable ENA by running: aws ec2 modify-instance-attribute --instance-id <instance-id> --ena-support --region ap-southeast-1 5) Using the AWS CLI, modify the instance attributes to change to the desired instance type (for example m5.large): aws ec2 modify-instance-attribute --instance-id <instance-id> --instance-type Value=m5.large --region ap-southeast-1 6) Start your instance from the console or the AWS CLI. Once the instance boots, please confirm that the ENA module is in use on the network interface by runni...

Google Authenticator MFA for Linux systems


Kubernetes Installation Part-1 (Kubeadm, kubectl, kubelet)


Kubernetes Installation Requirements


DynamoDB table getting throttled

Usually when a table is throttled while the consumed capacity is well below the provisioned capacity, it is an indicator of a hot partition or a "burst" of read or write activity. A hot partition means one single partition is accessed more than the other partitions and hence more capacity is consumed by that partition. A hot partition can be caused by data distributed unevenly across partitions due to a hot partition key. A "burst" of read or write activity occurs when workloads rely on short periods of time with high usage, for example a batch data upload.

Tagging EBS Volumes with Same tags as on EC2 instances in Autoscaling

Propagate the tags from an instance in an Auto Scaling group to the EBS volumes attached to it. Although an Auto Scaling group can apply tags to its instances, these do not propagate to the instances' volumes, so some scripting in the user data section is needed at instance launch to properly tag the volumes of instances created by the Auto Scaling group. You can use a script like the one below, placed in the user data field of the Auto Scaling group's launch configuration, to apply the tags to the EBS volumes. You also need to attach an IAM role to the instance with permissions such as describe-instances and create-tags.
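A minimal sketch of such a user-data script, assuming we copy the instance's Name tag to its volumes (the tag chosen and the region lookup are assumptions):

#!/bin/bash
# Identify this instance and its region from the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')

# Read the Name tag off the instance itself
NAME=$(aws ec2 describe-tags --region "$REGION" \
  --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=Name" \
  --query 'Tags[0].Value' --output text)

# Find every EBS volume attached to this instance and tag it the same way
VOLUMES=$(aws ec2 describe-volumes --region "$REGION" \
  --filters "Name=attachment.instance-id,Values=$INSTANCE_ID" \
  --query 'Volumes[].VolumeId' --output text)
aws ec2 create-tags --region "$REGION" --resources $VOLUMES --tags "Key=Name,Value=$NAME"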

SSL certificate uploaded in AWS ACM but not available while create ELB/ALB

In case you have created an ACM SSL certificate but it is not available in the drop-down list to associate with your load balancer: the reason you are unable to attach it to your ELB is that the certificate has a key length of RSA-4096. Although it is possible to import an SSL certificate of 4096 bits into ACM, currently the ELB-supported certificate types are RSA_1024 and RSA_2048. If you are using any other type of certificate, it will unfortunately not be eligible for attachment to your ELB, which means that you won't be able to select it during the ELB creation process. ACM supports 4096-bit RSA (RSA_4096) certificates, but integrated services (such as ELBs) allow only the algorithms and key sizes they support to be associated with their resources. Note that ACM certificates (including certificates imported into ACM) are regional resources. Therefore, you must import your certificate into the same region as your ELB in order to associate it with th...

Runbook to resolve some of the most common issues in Linux

Check the inode usage of the particular FS:
df -ih
Check for recently created files by entering the FS which is showing high inode usage:
find $1 -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head
Check which directory is holding most of the files:
find . -type d -exec sh -c 'echo "$(find "$1" -maxdepth 1 -type f | wc -l) $1"' _ {} \; | sort -n | tail

Creating an ssh config file so you don't have to pass the key or username for multiple servers

If you are running your servers in different VPCs, with different CIDR ranges of IP addresses and different usernames, it is difficult to remember all the keys and usernames while connecting to the servers.
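An ~/.ssh/config file solves this: per-host blocks pick the right user and key automatically, so a plain ssh <alias> works. A sketch with hypothetical hosts, IPs and key paths:

# ~/.ssh/config
Host bastion
    HostName 203.0.113.25          # placeholder public IP
    User ec2-user
    IdentityFile ~/.ssh/prod-bastion.pem

# Match a whole VPC range by wildcard
Host 10.1.*
    User ubuntu
    IdentityFile ~/.ssh/dev-vpc.pem
    ProxyJump bastion              # hop through the bastion automatically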

Concepts for Rolling Deployment strategy for applications in production environment

A rolling deployment strategy is one wherein servers are taken out in batches for deployment instead of deploying onto all servers at once. This helps lower HTTP 5xx and other such issues when traffic on the application is high and a deployment needs to be performed. What applications qualify for this deployment: applications running behind an Elastic Load Balancer; applications which have been moved to a build-and-deploy model for NodeJS applications (wherein we build the deployment package once instead of doing on-the-fly compilation/installation).

Switching Roles between Different AWS Accounts

If you have infrastructure running in different AWS accounts, you don't need to log out and log in individually to each AWS account or use different browsers. You can simply switch roles between the different accounts. 1. Log in to your primary account; this is the entry-level account through which you can switch to any of your other AWS accounts. 2. Click on your username at the top of the screen and choose Switch Role, then choose Switch Role again. It will ask you for the following information: Account number:- (the account number you want to switch to from the existing account) Role:- (the role name which has been assigned to your user in IAM) Display name:- (an easy-to-recognise name, e.g. Production-Appname). 3. Click on Switch Role and you should be in the other account without logging out of your current account. 4. When done editing, revert back to your account. 5. You can add any number of accounts under Switch Role and move between d...

Kubernetes Architecture Explanation


Kubernetes Terminology

Kubernetes Master:- The master node in Kubernetes is a node which controls all the other nodes in the cluster. There can be several master nodes in a cluster if it is said to be highly available; if you have a single-node cluster, that node will always be a master node. kube-apiserver:- The main part of the cluster, which answers all the API calls. It uses a key-value store, such as etcd, as the persistent storage for the cluster configuration. etcd:- etcd is an open source distributed key-value store that provides shared configuration and service discovery for Container Linux clusters. etcd runs on each machine in a cluster and gracefully handles leader election during network partitions and the loss of the current leader. It is responsible for storing and replicating all Kubernetes cluster state. Service discovery:- The automatic detection of devices and the services offered by those devices on a computer networ...

Kubernetes Features


Creating Ubuntu Vms through Vagrant on Windows Hosts operating system


Important points to consider while creating a ZooKeeper and Kafka cluster

The Kafka queue is a highly scalable and extremely fast queuing service which really comes in handy when you have to handle a large volume of messages and build services that work in async mode, with an option to handle faults in the services without losing data, while at the same time keeping the system scalable enough to meet the ever-growing volume of messages pushed to the cluster. Following are some of the important points to consider while creating a highly available Kafka/ZooKeeper cluster:- 1. If you want to scale your Kafka nodes, you should consider keeping ZooKeeper on separate nodes. This is particularly useful for environments where Kafka message throughput is extremely large and more brokers will be required after a certain period of time to deal with fault tolerance while keeping the system scalable. Kafka in itself is a very scalable solution, and in case you are not getting data in TBs you can cons...

Introduction to Kubernetes


Introduction to Infrastructure as Code (IAC)


Goals of Infrastructure as Code


Setting Security team email for security related issues in AWS Account

Follow the steps below to set the security team email; this can come in handy for responding to security-related issues in the AWS account. 1. Sign in to the AWS Management Console and open the Billing and Cost Management console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose My Account. 3. Scroll down to the Alternate Contacts section, and then choose Edit. 4. For the fields (in this case, Security Contact) that you want to change, type your updated information, and then choose Update. These alternate contacts, which include the Security Contact, enable AWS to contact another person about issues with your account, even if you're not attending to this account regularly. Regards Ankit Mittal

Configuring the dynamic inventory for Ansible

If you are running a large number of servers in AWS or using Auto Scaling, it is not practical to maintain host entries in Ansible's hosts file. Instead, you can use a dynamic inventory, which is a Python script, ec2.py, available from the Ansible project and free to use. You can configure the script as follows and target the EC2 instances to make configuration changes via Ansible, even on Auto Scaling instances, using tags. 1. Download the Ansible dynamic inventory script: wget https://raw.github.com/ansible/ansible/devel/contrib/inventory/ec2.py 2. Make the script executable: chmod +x ec2.py 3. Attach a role to the EC2 instance from which you are going to run Ansible, with an appropriate policy granting the permissions Ansible needs for configuration management. 4. If you are using private instances, chances are you might receive an empty list when you run the ./ec2.py --list command to test the dynamic inven...
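Once ./ec2.py --list returns data, the inventory exposes instances grouped by their tags; a usage sketch (the tag names are illustrative):

# Ping every instance tagged Environment=dev
ansible -i ec2.py tag_Environment_dev -m ping

# Run a playbook only against instances tagged Role=web
ansible-playbook -i ec2.py --limit tag_Role_web site.yml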

Starting and Stopping Ec2 instances Automatically during Night for Non-prod Environment for Saving on the AWS Billing

The following script can be used to start and stop EC2 instances during non-productive hours for lower environments such as development, SIT, stage etc. Depending on the number of instances stopped, you can save on your AWS billing. You can schedule the script to run at the defined hours using a cron job. The advantage of the script is that every time you need a server to automatically start and stop, you just add the specified tag with the specified values, in our case ADMIN_EC2_STARTSTOP with the values stop or start, and the instance is automatically added to this setup. You don't have to change the script every time a new server is created. The script uses an additional tag, Environment, to identify which environment to start and stop, since it is possible that you want to stop the stage and dev environments at different times and on different days depending upon your requirement. So you don't have to change the script; the same script ...
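A minimal sketch of such a script using the two tags mentioned (the argument handling and cron schedule are assumptions):

#!/bin/bash
# Usage: ec2-startstop.sh <start|stop> <environment>, e.g. ec2-startstop.sh stop dev
ACTION=$1
ENV=$2

# Instances opt in via the ADMIN_EC2_STARTSTOP tag; Environment narrows the scope
IDS=$(aws ec2 describe-instances \
  --filters "Name=tag:ADMIN_EC2_STARTSTOP,Values=$ACTION" \
            "Name=tag:Environment,Values=$ENV" \
  --query 'Reservations[].Instances[].InstanceId' --output text)

[ -n "$IDS" ] && aws ec2 "${ACTION}-instances" --instance-ids $IDS

# Example cron entries: stop dev at 21:00, start at 08:00 on weekdays
# 0 21 * * 1-5 /opt/scripts/ec2-startstop.sh stop dev
# 0 8  * * 1-5 /opt/scripts/ec2-startstop.sh start dev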

Using aws config to monitor Stacks with Screenshots

AWS Config can be used to monitor the instances, security groups and other resources within your AWS account. This is especially useful for monitoring the compliance of instances and raising an alarm if someone creates an instance without the proper procedure, e.g. missing tags, changes in a security group, or a non-compliant instance type. AWS Config will mark them as non-compliant in the Config dashboard and can send a notification using the SNS service. It can easily be deployed using CloudFormation and helps in managing resources effectively. This is especially recommended in case you have many users in your organization with dashboard access who have the privilege to create instances. Follow the procedure described below to configure AWS Config in your environment for EC2 instances: you define which instances are compliant in your AWS account, any instance apart from them is marked as non-compliant, and you configure an alert for the same. ...

Installing php

3. PHP Installation Version: 5.3.17
export CFLAGS=-m64
export LDFLAGS=-m64
yum install libjpeg-devel
yum install libpng-devel
yum install libtiff-devel
yum install libtool-ltdl-devel
yum install freetype-devel
Install libmcrypt-devel
Install libmcrypt
tar -zxvf mm-1.4.2.tar.gz
make
make test
make install
tar -xvzf php-5.3.17.tar.gz
cd php-5.3.17
'./configure' '--prefix=/usr' '--with-libdir=lib64' '--libdir=/usr/lib64/' '--with-config-file-path=/etc' '--with-config-file-scan-dir=/etc/php.d' '--with-apxs2=/opt/www/apache/2.2.23/bin/apxs' '--with-libxml-dir=/usr' '--with-curl=/usr' '--with-mysql' '--with-zlib' '--with-zlib-dir=/usr' '--enable-sigchild' '--libexecdir=/usr/libexec' '--with-libdir=lib64' '--enable-sigchild' '--with-bz2' '--with-curl' '--with-exec-dir=/usr/bin' '--with-openss...

Upgrading Kernel Version on Aws Ec2 instance

There are multiple reasons for upgrading the kernel version on AWS EC2 instances; you will definitely want to upgrade for the bug fixes and stability that come with an updated version. In our case we had to install an antivirus required for audit purposes, but the issue arose because we were using an older version of the kernel which was not supported by the antivirus. This was particularly essential since the antivirus provides active defense against unknown threats, which is not included in the free antivirus. So we were required to update the kernel version. We were running kernel version 3.13.0-105-lowlatency; however, a different version was supported by the antivirus, so we were required to upgrade the kernel to 3.13.0-142-generic for the antivirus to work. Upgrading the kernel version on AWS EC2 instances can be a little tricky, because if you don't do it properly it will certainly cause startup failures. W...
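On Ubuntu a specific kernel build can be installed as a versioned package; a hedged sketch following Ubuntu's linux-image naming (snapshot the instance into an AMI first in case boot fails):

# Install the specific kernel build the antivirus supports
sudo apt-get update
sudo apt-get install linux-image-3.13.0-142-generic linux-headers-3.13.0-142-generic

# Regenerate the GRUB config and reboot into the new kernel
sudo update-grub
sudo reboot

# After reboot, confirm the running version
uname -r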

Machine Learning Use Cases in the Financial Services

With the rapidly changing digital age and our growing dependence on digital services in every domain (banking, payments, medical, ecommerce, investments etc.), companies need a technology delivery model suited to how the world and consumer needs are changing, so as to develop new products and capabilities that fit the digital age. We are considering Capital One's use case for machine learning. There were three main fields in which Capital One wanted to improve its banking relationship: fraud detection, credit risk and cybersecurity. Improving these areas involves distinguishing patterns, something the neural networks underlying machine learning accomplish far better than traditional, non-AI software. The goals were to approve as many transactions as possible by identifying fraud only when it's very likely to happen, make better decisions around credit risk, and track constantly evolving threats. Applying machine learning to these areas is a...

Using the Comments in the Python

You can use # for single-line comments:
# This is a comment
# this is also a comment
There is no concept of multi-line comments in Python, so you need to put a # in front of every line you want to comment out in the multi-line case.

Creating and Running python scripts

You can simply start by creating a file with the .py extension:
vim hello.py
Then just enter a print command:
print("Hello, World!")
Then you can execute this file with Python as follows:
python hello.py
If you want to make the file directly executable, add a shebang as its first line:
#!/usr/bin/env python
and mark it executable:
chmod +x hello.py
Afterwards you can simply execute it like:
./hello.py
It is good practice to keep executables in a bin directory:
mkdir bin
mv hello.py bin/hello
This way you make it a command which can be executed directly in the Linux terminal, which is how most commands in Linux actually work. You just need to make sure the directory is on your PATH, which can be done as follows:
export PATH=$PATH:$HOME/bin

REPL in the python

REPL in Python stands for Read, Evaluate, Print, Loop. It helps you work with Python the easy way when you start learning it initially. You can simply enter the REPL by typing python in the terminal window. Once you are done evaluating, exit using the exit() function; remember it needs the parentheses (), as just typing exit won't log you out.

History of the Python

1. Created by Guido van Rossum. 2. First appeared in 1991. 3. Used and supported by tech giants like Google and YouTube. 4. Two major versions supported for nearly a decade (Python 2 and Python 3). 5. Object-oriented scripting language. 6. Dynamic and strong type system. 7. Functional concepts (map, reduce, filter, etc.). 8. Whitespace-delimited, with pseudo-code-like syntax. 9. Used across a variety of disciplines, including: a) academia, b) data science, c) DevOps, d) web development. 10. Runs on all major operating systems. 11. Consistently high on the TIOBE index.

[Devops Job Openings] OPtum(UHG) opening for Linux Admin

Greetings from Naukri.com! We all know that in this competitive scenario everyone is growing fast, and in order to cope with the ever-changing environment it is imperative to tap the current opportunity before it is taken by someone else. Here is an opportunity for you; your profile has been shortlisted from the Naukri.com job portal.

Devops Manager || 4-10 Yrs || Noida

Job Synopsis: Devops Manager || 4-10 Yrs || Noida. Company: Broctagon IT Solutions Pvt. Ltd. Experience: 4 to 9 yrs. Location: Noida. Job Description: This job requires precision, leadership capabilities and someone who can streamline the workflow between deployment and development. This is a very critical position for Broctagon's strategic initiatives, and you will be responsible for running the entire infrastructure for our product line. We need someone well-versed in collaboration techniques. If you are someone who believes that everything can be automated using cutting-edge technologies, then hit the send button and share your profile with us! Key Responsibility Areas: Lead a team of DevOps engineers. Design, plan, execute on projects and deliverables. Manage web applications, desktop applications and deployment in all environments by using automated deployment methodologies. Particip...

Kubernetes Architecture Diagram
