Tuesday, February 18, 2020

[Solved] Difference between Session Variables and Global Variables in Amazon RDS MySQL

I recently faced this issue after changing parameters in an RDS parameter group and then querying them from within a MySQL RDS instance on Amazon AWS.

mysql> SHOW VARIABLES WHERE Variable_name LIKE 'character_set_%' OR Variable_name LIKE 'collation%';
+--------------------------+-------------------------------------------+
| Variable_name            | Value                                     |
+--------------------------+-------------------------------------------+
| character_set_client     | utf8                                      |
| character_set_connection | utf8                                      |
| character_set_database   | utf8mb4                                   |
| character_set_filesystem | binary                                    |
| character_set_results    | utf8                                      |
| character_set_server     | utf8mb4                                   |
| character_set_system     | utf8                                      |
| character_sets_dir       | /rdsdbbin/mysql-5.7.22.R5/share/charsets/ |
| collation_connection     | utf8_general_ci                           |
| collation_database       | utf8mb4_unicode_ci                        |
| collation_server         | utf8mb4_unicode_ci                        |
+--------------------------+-------------------------------------------+
11 rows in set (0.01 sec)

mysql> SHOW GLOBAL VARIABLES WHERE Variable_name LIKE 'character_set_%' OR Variable_name LIKE 'collation%';
+--------------------------+-------------------------------------------+
| Variable_name            | Value                                     |
+--------------------------+-------------------------------------------+
| character_set_client     | utf8mb4                                   |
| character_set_connection | utf8mb4                                   |
| character_set_database   | utf8mb4                                   |
| character_set_filesystem | binary                                    |
| character_set_results    | utf8mb4                                   |
| character_set_server     | utf8mb4                                   |
| character_set_system     | utf8                                      |
| character_sets_dir       | /rdsdbbin/mysql-5.7.22.R5/share/charsets/ |
| collation_connection     | utf8mb4_unicode_ci                        |
| collation_database       | utf8mb4_unicode_ci                        |
| collation_server         | utf8mb4_unicode_ci                        |
+--------------------------+-------------------------------------------+
11 rows in set (0.00 sec)

Cause:-
The session variables are being overridden because the MySQL client auto-detects which character set to use based on the operating system's locale settings.
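You can confirm this by pinning the client character set yourself instead of letting it auto-detect. A minimal sketch, where the endpoint and user are placeholder assumptions:

# Force the client/connection/results character set for this session.
mysql --default-character-set=utf8mb4 \
      -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com \
      -u admin -p

# With the charset pinned, SHOW VARIABLES should now report utf8mb4 for the
# character_set_client/connection/results session variables.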

Reproduce:-
To reproduce the case I used two different MySQL clients running on separate servers: one installed on an Ubuntu subsystem on my local machine and the other on an Ubuntu Linux server running on an EC2 instance. With the MySQL client on my local machine the session variables were not overridden. However, on the Ubuntu Linux server running on EC2 the session variables did get overridden.

Workaround/Resolution:-
Set the 'skip-character-set-client-handshake' parameter to 1 in your custom parameter group. This makes the server ignore the character set requested by the client and sets the session character set variables to the same values as your global variables.
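For reference, here is a minimal sketch of applying this through the AWS CLI; the parameter group and instance names below are illustrative assumptions:

# Set skip-character-set-client-handshake=1 in a custom parameter group
# (the group must already be attached to the instance).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-mysql57 \
    --parameters "ParameterName=skip-character-set-client-handshake,ParameterValue=1,ApplyMethod=pending-reboot"

# This is a static parameter, so reboot the instance for it to take effect.
aws rds reboot-db-instance --db-instance-identifier my-mysql-instance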

Wednesday, October 16, 2019

Kubernetes Important Commands And Concepts

1. To list all the resource types in Kubernetes you can use:
kubectl api-resources -o name (lists the resource types by name only)

2. Spec
Every object in Kubernetes has a spec, provided by the user, which defines the desired state for that object.

3. Status
Status represents the current, actual state of the object. Kubernetes works to make the actual status match the desired state given in the spec.

4. kubectl get:- gets the list of objects in Kubernetes, e.g. kubectl get pods -n kube-system, kubectl get nodes
You can get more detailed information about an object, e.g.:
kubectl get nodes kube-node1 -o yaml (YAML representation of the object)

5. kubectl describe node kube-node1 (readable overview of an object, but not in YAML format)

6. Pods can contain one or more containers and a set of resources shared by those containers. All containers in Kubernetes run as part of a pod.

Example of pod Yaml https://github.com/ankit630/IAC/blob/master/kubernetes/pods/ex-pod.yml

7. kubectl create -f ex-pod.yml (creates the pod in the Kubernetes cluster)

8. kubectl apply -f ex-pod.yml (applies changes in the manifest, such as an updated container image, to the existing pod)

9. kubectl edit pod ex-pod (apart from apply, edit can also be used to modify a pod; saving the file automatically applies the changes)

10. kubectl delete pod ex-pod (Used to delete the existing pod)

11. Namespaces allow you to organize the objects in a cluster; every object belongs to a namespace, and when no namespace is specified an object goes to the default namespace.

12. kubectl get namespaces (list the namespaces in cluster)

13. kubectl create ns ex-ns (creates the ex-ns namespace in kubernetes)

14. kubectl get pods -n ex-ns (list pods in the ex-ns namespace; see the combined walkthrough after this list)
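To tie these commands together, here is a minimal end-to-end sketch. The manifest is a hypothetical stand-in for the ex-pod.yml linked in item 6 (the pod name and nginx image are illustrative assumptions):

# Write a minimal pod manifest to ex-pod.yml.
cat > ex-pod.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ex-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.17
EOF

kubectl create ns ex-ns                # item 13: create a namespace
kubectl apply -f ex-pod.yml -n ex-ns   # items 7/8: create or update the pod in ex-ns
kubectl get pods -n ex-ns              # item 14: list pods in the namespace
kubectl describe pod ex-pod -n ex-ns   # item 5: readable overview of the pod
kubectl delete pod ex-pod -n ex-ns     # item 10: clean up
kubectl delete ns ex-ns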

Part 2 Using Athena to query S3 buckets

This is in continuation of my previous post on how you can use Athena to query the S3 buckets storing your CloudTrail logs, in order to better organize your security and compliance, which is a hard thing to achieve in legacy/large accounts with a number of users.

Question:- Identify the 100 most used IAM access keys. Using IAM roles for authentication is usually a better approach than IAM keys, as IAM roles rotate their credentials (roughly every 15 minutes), making the keys hard to intercept and increasing the security of the account.

Answer:-
SELECT useridentity.accesskeyid,
       useridentity.arn,
       eventname,
       COUNT(eventname) AS frequency
FROM account_cloudtrail
WHERE sourceipaddress NOT LIKE '%.com'
  AND year = '2019'
  AND month = '01'
  AND day = '01'
  AND useridentity.accesskeyid LIKE 'AKIA%'
GROUP BY useridentity.accesskeyid, useridentity.arn, eventname
ORDER BY frequency DESC
LIMIT 100
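If you prefer running this from the command line instead of the Athena console, here is a hedged sketch using the AWS CLI; the database name and results bucket are assumptions, and top-iam-keys.sql is assumed to hold the query above:

# Submit the query; this returns a QueryExecutionId.
aws athena start-query-execution \
    --query-string file://top-iam-keys.sql \
    --query-execution-context Database=cloudtrail_logs \
    --result-configuration OutputLocation=s3://my-athena-results/

# Once the execution succeeds, fetch the result rows.
aws athena get-query-results --query-execution-id <QueryExecutionId-from-above>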

Friday, October 11, 2019

Part 1 Using Athena to query S3 buckets

While it's great to push all the log data gathered from various sources like your load balancers, CloudTrail, application logs etc. to S3 buckets, as your infrastructure grows in size it becomes difficult to analyze such a huge amount of data spanning months or years.

You can use the Athena service of Amazon AWS to query the S3 data without needing to download and process it manually. This removes the requirement for extra processing capacity, storage and so on. We are going to cover the most effective queries, which can help you analyze and extract meaningful information from your S3 log data.

Question:- Identifying all the users, events and accounts accessing a particular S3 bucket
Answer:-
SELECT DISTINCT account,
       eventname,
       useridentity.arn,
       useragent,
       vpcendpointid,
       json_extract_scalar(requestparameters, '$.bucketName') AS bucketName,
       sourceipaddress
FROM unixcloudfusion_cloudtrail
WHERE year = '2019'
  AND month = '10'
  AND day = '09'
  AND eventsource = 's3.amazonaws.com'
  AND json_extract_scalar(requestparameters, '$.bucketName') = 'unixcloudfusion.analytics'
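Note that these queries filter on year/month/day partition columns, so a partition has to be registered before its data is queryable. A hedged sketch of registering one day's partition through the AWS CLI; the partition layout, account ID, region, bucket names and database name are all assumptions that need adjusting to your CloudTrail table:

# Register the 2019/10/09 partition so the query above can see it.
aws athena start-query-execution \
    --query-string "ALTER TABLE unixcloudfusion_cloudtrail ADD IF NOT EXISTS PARTITION (year='2019', month='10', day='09') LOCATION 's3://my-cloudtrail-bucket/AWSLogs/123456789012/CloudTrail/us-east-1/2019/10/09/'" \
    --query-execution-context Database=cloudtrail_logs \
    --result-configuration OutputLocation=s3://my-athena-results/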


Thursday, October 10, 2019

Command Logging & Kibana Plotting


Problem Statement: Monitor and track all the activities/commands run by users on a system


Minimum Requirement(s):
1) A separate ELK cluster for command logging
2) The Snoopy Logger agent on all client machines
3) The Filebeat agent


Context: In order to track all the commands being fired by users, we would normally need the bash_history of each specific user, which becomes a tedious task when we have to track a specific user (or multiple users) across different machines.

Solution:  Snoopy Logger is a tiny library that logs all executed commands (+ arguments) on your system.

Below is the link for more information on Snoopy, which also covers installing the Snoopy Logger:
https://github.com/a2o/snoopy

Through Snoopy Logger we get a single log file containing every command run by any user. You can specify a message format, and a filter chain for filtering logs, in Snoopy; based on that message format we then need to create a grok pattern in Logstash. We can also exclude repetitive internal commands with a drop filter in Logstash; the format for excluding a command is given below:

filter {
  # Drop events whose "command" field matches a noisy internal command.
  if [command] == "command-name" {
    drop {
      # Drop 100% of the matching events.
      percentage => 100
    }
  }
}
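For completeness, a hedged sketch of installing Snoopy and verifying it is logging; the install-script URL below is taken from the project README at the time of writing (check the GitHub link above for the current one), and the log destination varies by distro (on Ubuntu it is typically /var/log/auth.log):

# Download and run the Snoopy install script from the repository linked above.
wget -q -O install-snoopy.sh https://github.com/a2o/snoopy/raw/install/install/install-snoopy.sh
chmod 755 install-snoopy.sh
sudo ./install-snoopy.sh stable

# Run any command, then confirm Snoopy logged it.
ls -l /tmp
sudo tail /var/log/auth.log | grep snoopy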