Posts

Showing posts from September, 2022

Generating a token in Kubernetes using the kubeadm command for adding worker nodes

Issue:- Kubeadm prints a join command, including a token, when you first create a Kubernetes cluster. But what if you don't have that token handy later, when you need to add worker nodes to increase the cluster capacity?

Solution:- You can run the following command, which generates the full join command that can be used to add worker nodes to the master in the future.

[centos@kubemaster ~]$ kubeadm token create --print-join-command
kubeadm join 172.31.98.106:6443 --token ix1ien.29glfz1p04d7ymtd --discovery-token-ca-cert-hash sha256:1f202db500d698032d075433176dd62f5d0074453daa12ccdfffd637a966a771

Once the command has been generated, run it on the worker node to add that node to the Kubernetes cluster, as in the sketch below.
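For reference, a minimal sketch of the worker-side join and a follow-up check on the master; the IP, token, and hash are the ones generated above and will differ in your cluster.

# On the new worker node (as root), run the join command printed by the master:
sudo kubeadm join 172.31.98.106:6443 --token ix1ien.29glfz1p04d7ymtd \
    --discovery-token-ca-cert-hash sha256:1f202db500d698032d075433176dd62f5d0074453daa12ccdfffd637a966a771

# Back on the master, confirm the new node has registered; it may show NotReady
# for a short while until the CNI networking pods start on it.
kubectl get nodes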

[Solved] PersistentVolumeClaim pending while installing Elasticsearch using Helm

Issue:- When installing Elasticsearch using Helm, the Elasticsearch containers fail to start: the PersistentVolumeClaims for the multi-master nodes stay in the Pending state, so the containers remain Pending as well.

Error:- PersistentVolumeClaim remains in the Pending state.

Effect:- Elasticsearch could not be installed because the PersistentVolumeClaims were never bound and the storage was not ready for Elasticsearch. The diagnostic sketch below shows how to inspect the stuck claims.
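As a starting point, a small diagnostic sketch; the PVC name below assumes the default elastic/elasticsearch chart naming (elasticsearch-master) and will differ if the release or node group was renamed.

# List the claims created by the Elasticsearch StatefulSet and see why they are Pending
kubectl get pvc
kubectl describe pvc elasticsearch-master-elasticsearch-master-0

# A Pending claim usually means there is no default StorageClass, or no
# PersistentVolume matching the claim's size and access mode, so check both:
kubectl get storageclass
kubectl get pv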

[Solved] "stacktrace": ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];

Issue:- When installing Elasticsearch using Helm, the Elasticsearch container fails with the exception AccessDeniedException[/usr/share/elasticsearch/data/nodes].

Error:- "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "uncaught exception in thread [main]", "stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];"

Effect:- Elasticsearch could not be installed; the Elasticsearch pod keeps crashing because the health check never passes and the failing liveness probe restarts the pod over and over.

Resolution:- Follow these steps to resolve the issue (see the sketch after this list):
1. The issue occurs because the elasticsearch user does not have permission on the /usr/share/elasticsearch/data/nodes directory.
2. But you cannot directly use kubectl ...
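Since the excerpt is cut off here, the following is only a sketch of one common way to fix the ownership problem: adding an init container through the chart's extraInitContainers value that chowns the data directory before Elasticsearch starts. The values key, the volume name (elasticsearch-master), and the uid/gid 1000:1000 are assumptions based on the default elastic/elasticsearch chart and image; older chart versions may expect extraInitContainers as a string rather than a list.

# Hypothetical values file: run a root init container that fixes ownership of the
# data volume so the elasticsearch user (uid/gid 1000) can write to it.
cat > es-values.yaml <<'EOF'
extraInitContainers:
  - name: fix-data-permissions
    image: busybox
    securityContext:
      runAsUser: 0
    command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
    volumeMounts:
      - name: elasticsearch-master
        mountPath: /usr/share/elasticsearch/data
EOF

# Re-deploy the release with the extra init container applied
helm upgrade --install elasticsearch elastic/elasticsearch -f es-values.yaml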