
Wednesday, September 28, 2022

Generating a token in Kubernetes using the kubeadm command for adding worker nodes

Issue:- Kubeadm prints a join command, including a token, when you first create a Kubernetes cluster. But what if you don't have that token handy later, when you need to add worker nodes to increase the cluster capacity?

Solution:- You can run the following command, which generates the full join command that can be used to add worker nodes to the master in the future.

$ kubeadm token create --print-join-command

kubeadm join 172.31.98.106:6443 --token ix1ien.29glfz1p04d7ymtd --discovery-token-ca-cert-hash sha256:1f202db500d698032d075433176dd62f5d0074453daa12ccdfffd637a966a771

Once the token has been generated, you can run the printed join command on the worker node to add it to the Kubernetes cluster.
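As a related tip (not part of the original post), kubeadm can also list the tokens that already exist, with their TTLs, and create a token with a custom lifetime instead of the default 24 hours. A quick sketch, run on the control-plane node:

```shell
# List existing bootstrap tokens with their expiry times
kubeadm token list

# Create a token that lives for 48 hours and print the full join command
kubeadm token create --ttl 48h --print-join-command
```

This is useful when you want to check whether an old token is still valid before generating a new one.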

[Solved] PersistentVolumeClaim pending while installing Elasticsearch using Helm

 


Issue:- 

When installing Elasticsearch using Helm, the Elasticsearch containers fail: the PersistentVolumeClaims for the master nodes go into the Pending state, and the containers remain Pending as well.

Error:- 


Persistent volume claim remains in the pending state

Effect:-

Was not able to install Elasticsearch, as the persistent volume claim was never ready for Elasticsearch to use.
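The post stops at the symptom, so as a general first diagnostic step (not part of the original write-up) you can inspect the pending claim to see why no volume is being bound; the usual causes are a missing default StorageClass or no matching PersistentVolume. The claim name below is the example naming used by the elastic Helm chart:

```shell
# Show the claims and their current status
kubectl get pvc

# The Events section at the bottom usually names the cause,
# e.g. "no persistent volumes available" or "storageclass not found"
kubectl describe pvc elasticsearch-master-elasticsearch-master-0

# Check whether the cluster has a (default) StorageClass to provision volumes
kubectl get storageclass
```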

[Solved] "stacktrace": ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes]

 

Issue:- 

When installing Elasticsearch using Helm, the Elasticsearch container fails with the exception AccessDeniedException[/usr/share/elasticsearch/data/nodes].

Error:- 

"cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "uncaught exception in thread [main]",
"stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];"

Effect:-

Was not able to install Elasticsearch; the pod keeps crashing again and again because the health check does not pass, and the failing liveness probe restarts the pod repeatedly.

Resolution:-
Follow these steps to resolve the issue:

1. The issue occurs because the elasticsearch user does not have permission on the /usr/share/elasticsearch/data/nodes directory.

2. You cannot fix this directly with the kubectl exec command, as Elasticsearch does not support sh or bash, and even if you could, the fix would be lost and the issue would arise again whenever the pod gets replaced.

3. To resolve the issue, use init containers together with runAsUser so that the proper permissions are in place for the elasticsearch user. I was able to create the following workaround for this issue:
replicas: 1
minimumMasterNodes: 1

volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
extraInitContainers: |
   - name: create
     image: busybox:1.35.0
     command: ['mkdir', '-p', '/usr/share/elasticsearch/data/nodes/']
     securityContext:
       runAsUser: 0
     volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
   - name: file-permissions
     image: busybox:1.35.0
     command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
     securityContext:
        runAsUser: 0
     volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
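Assuming the settings above are saved in a values.yaml for the official elastic Helm chart (the repository URL and chart name below follow the standard elastic Helm repository, not the original post), the install would look like:

```shell
# Add the elastic chart repository and install with the custom values
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch -f values.yaml

# Watch the init containers ("create", "file-permissions") run
# before the main elasticsearch container starts
kubectl get pods -w
```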

Explanation:-

Here we have added extraInitContainers using the busybox image. The first init container creates the /usr/share/elasticsearch/data/nodes/ directory, and the second changes the ownership of /usr/share/elasticsearch/ to user and group 1000 (the elasticsearch user). Both run with a securityContext of runAsUser: 0 (root) so they are allowed to make these changes, and both mount the same volume that the Elasticsearch container later mounts at /usr/share/elasticsearch/data. With the correct permissions in place, you should not get the AccessDeniedException again, and the pod should run fine this time.