
Tuesday, August 17, 2021

Terraform Certification Details

1. The duration of the Terraform exam is 1 hour.

2. You will have 50 to 60 questions, and you will be tested on Terraform version 0.12 and higher. If you have only worked on versions older than 0.12, note that there have been considerable changes in both the syntax and the logic since then.

3. The exam is online proctored, and the whole certification process is quite hands-off.

4. You will have to register on the HashiCorp website, from where you will be redirected to the exam portal. Make sure your system meets the requirements for the online exam.

5. The certification expires 2 years from the day you passed the exam.

Friday, August 6, 2021

[Solved] kubelet isn't running or healthy. The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error

Description:-

This issue came up while I was setting up a Kubernetes cluster on an AWS CentOS VM. Although the same steps had been followed every time, this particular run resulted in the error below:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Issue:-

If you google this issue, most solutions say it occurs because swap is enabled. That was not the case here; the AWS instance definitely did not have swap configured.

In this particular case, Docker was using the cgroupfs cgroup driver, which did not match the systemd driver the kubelet was configured with. Changing Docker's driver to systemd resolved the issue.

Create the file as follows:

[root@k8smaster ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@k8smaster ~]# systemctl restart docker
[root@k8smaster ~]# systemctl status docker
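A syntax error in daemon.json will stop the Docker daemon from starting at all, so it can be worth validating the JSON before the restart. A minimal sketch, using a temporary path instead of /etc/docker so it is safe to run anywhere:

```shell
# Write the daemon.json that switches Docker's cgroup driver to systemd.
# (The real path is /etc/docker/daemon.json; a temp dir is used here so
# this sketch can be run on any machine without touching Docker.)
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Validate the JSON syntax; exits non-zero and prints the error if the
# file is malformed.
python3 -m json.tool /tmp/docker-demo/daemon.json
```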

That's it; your problem should be resolved now. You can run the kubeadm reset command followed by the kubeadm init command to reinitialise your Kubernetes cluster, which should work this time.


Thursday, August 5, 2021

Preventing Googlebot from crawling certain website pages via Nginx

Sometimes you might want to stop Googlebot from crawling certain pages, in which case you can use the robots.txt file to disallow them.

But at times, during a migration or while testing newer changes, you allocate a small portion of traffic to new endpoints to verify whether things are working fine. The newer pages might not yet have certain components that Googlebot uses from an SEO perspective.

Also, routing a small part of traffic to newer pages might cause the bot to view the pages differently and mark them as copied content, which could affect search results.

So you can also prevent and control Googlebot from crawling pages at the Nginx webserver level.

First, two important things to note:-

1. Google has multiple bots, and no one knows all of them, though Google gives some idea about its bots. One thing is common: they all have "google" in the user agent.

2. This is not a replacement for robots.txt. Rather, we are implementing this because a small portion of traffic is being allocated to the new site, which gradually increases over time. We don't want both sites to be simultaneously visible, and this rule can be removed once the complete migration has occurred.

So you can detect Googlebot with the help of the $http_user_agent variable which Nginx provides, by looking for the string "google" in it. If the user agent contains "google", you can be reasonably certain that it is a Google bot.

Based on this, we can control Googlebot via the user agent in Nginx and restrict or proxy particular site pages using this approach.

So in the location directive you can send a 420 error to Googlebot, and you can reuse this error condition in all your if statements wherever required:

 location = / {
   error_page 420 = @google_bot;
   # Checking for google bot
   if ($http_user_agent ~* (google)) {
     return 420;
   }
 }
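The condition above is a plain case-insensitive substring match. As a quick standalone sketch of the same check outside Nginx (the sample string is Google's published Googlebot user agent):

```shell
# Case-insensitive check for "google" in a user-agent value, mirroring
# the nginx condition: if ($http_user_agent ~* (google)) { ... }
ua="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
if printf '%s' "$ua" | grep -qi google; then
  echo "crawler"    # prints "crawler" for this UA
else
  echo "visitor"
fi
```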
You can also proxy_pass and make Googlebot always land on the old page:
location @google_bot {
    proxy_pass $scheme://unixcloudfusion;
}
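Putting both pieces together, the relevant part of the server block might look like this. This is only a sketch: the `unixcloudfusion` upstream name is taken from the snippet above, and the normal handling of `location = /` is left to your existing configuration:

```nginx
server {
    listen 80;

    location = / {
        error_page 420 = @google_bot;

        # Any user agent containing "google" (case-insensitive) is
        # treated as a Google crawler and diverted via the 420 code.
        if ($http_user_agent ~* (google)) {
            return 420;
        }

        # ... normal handling of the new page for regular visitors ...
    }

    # Google crawlers always get the old page.
    location @google_bot {
        proxy_pass $scheme://unixcloudfusion;
    }
}
```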

[Resolved] Kubernetes showing older version of master after successful upgrade

Issue:- I recently updated my Kubernetes cluster:

[root@k8smaster ~]# kubeadm upgrade apply v1.21.3
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.3". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

So the upgrade message clearly shows the cluster was upgraded to "v1.21.3" on the master node. However, when I ran the command to verify:

[user@k8smaster ~]$ kubectl get nodes -o wide
NAME                            STATUS     ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster.unixcloudfusion.in    Ready      control-plane,master   9d    v1.21.2   172.31.36.208   <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://20.10.7
k8sworker1.unixcloudfusion.in   NotReady   <none>                 9d    v1.21.3   172.31.39.6     <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://20.10.7
k8sworker2.unixcloudfusion.in   Ready      <none>                 9d    v1.21.3   172.31.46.144   <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://20.10.7

Even after the update, the master's version still showed v1.21.2.

Resolution:-

The cluster is showing you the old version because you have not yet updated the kubelet on the master node; it is the kubelet that reports each node's version to the cluster (whose state is stored in etcd). Just run the command below to update both kubelet and kubectl:

[user@k8smaster ~]$ sudo yum install -y kubelet kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror, versionlock
Loading mirror speeds from cached hostfile
 * base: mirror.centos.org
 * epel: repos.del.extreme-ix.org
 * extras: mirror.centos.org

Updated:
  kubectl.x86_64 0:1.21.3-0
  kubelet.x86_64 0:1.21.3-0

Complete!
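Note that on some setups the package update does not restart the running kubelet for you; if the reported version is still stale after the install, reload and restart it (the standard step from the kubeadm upgrade procedure):

```shell
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```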

Once the update is complete for both kubectl and kubelet, verify the version again:

[user@k8smaster ~]$ kubectl get nodes -o wide
NAME                            STATUS     ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8smaster.unixcloudfusion.in    Ready      control-plane,master   9d    v1.21.3   172.31.36.208   <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://20.10.7
k8sworker1.unixcloudfusion.in   NotReady   <none>                 9d    v1.21.3   172.31.39.6     <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://20.10.7
k8sworker2.unixcloudfusion.in   Ready      <none>                 9d    v1.21.3   172.31.46.144   <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://20.10.7

So the issue is resolved, and the master node now shows the new version.