
Friday, November 13, 2020

[Solved] Network Split and High Erlang Process Count on One Node in the RabbitMQ Cluster

Problem:- A network split occurred in the RabbitMQ cluster, splitting the cluster of node1, node2 and node3 into two partitions. In addition, the Erlang process count was continuously high and hitting the upper limit. After the network split, the main cluster node hung.
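Before tearing anything down, you can confirm that the cluster has really partitioned by checking the cluster status from any node; during a split the affected peers show up in the partitions section of the output.

rabbitmqctl cluster_status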

Cause:- The network split and high Erlang process count can occur when requests are not spread evenly across the nodes and the application instead uses a single node as its endpoint. Because of this, the Erlang process count stayed continuously high on that node, the node hung, and it was hard to restart the process again.
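To see how close a node is to the Erlang process limit, the current count and the configured limit can be read from the runtime with rabbitmqctl eval (a quick check, run on the affected node):

rabbitmqctl eval 'erlang:system_info(process_count).'   # current Erlang process count
rabbitmqctl eval 'erlang:system_info(process_limit).'   # configured upper limit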

Resolution:-

1. When a network split occurs, stop RabbitMQ on all the nodes using the following command.

service rabbitmq-server stop  
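If you have SSH access from an admin host, the stop can be scripted across all three nodes in one go (just a sketch; the user and hostnames are assumptions and need to match your environment):

for node in node1 node2 node3; do
  ssh root@"$node" 'service rabbitmq-server stop'   # stop RabbitMQ on each cluster node
done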

2. After that you have to first start the node whose Erlang cookie was shared with the other nodes. In our case the node3 cookie had been copied to node1 and node2 to create the RabbitMQ cluster, so we started node3 first and then the remaining nodes. On node3, move the old Mnesia data aside and start the service:

cd /var/lib/rabbitmq/mnesia/
mkdir -p /tmp/mnesia
mv * /tmp/mnesia/        # move the old Mnesia data out of the way so the node starts clean
service rabbitmq-server start
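Because clustering depends on every node sharing the same Erlang cookie, it is worth double-checking that node1 and node2 still carry the node3 cookie before they rejoin in step 4. On a typical package install the cookie lives at /var/lib/rabbitmq/.erlang.cookie (the path can differ depending on how RabbitMQ was installed); the value must be identical on all three nodes.

cat /var/lib/rabbitmq/.erlang.cookie   # compare this value across node1, node2 and node3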

3. Check the status in the RabbitMQ dashboard by opening the following URL in the browser: http://node3:15672 (username/password guest/guest, if not changed). At this point RabbitMQ should be up and running on node3, which should be visible in the dashboard.
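If a browser is not handy, the same check can be done from the command line through the management API (this assumes the management plugin is enabled and the default guest/guest credentials are still in place; newer RabbitMQ releases restrict the guest user to localhost, in which case run the check from node3 itself):

curl -u guest:guest http://node3:15672/api/overview   # overall broker status
curl -u guest:guest http://node3:15672/api/nodes      # per-node details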

4. After that, log in to node1 and execute the following steps, then log in to node2 and repeat the same process as mentioned below.

 service rabbitmq-server stop
 cd /var/lib/rabbitmq/mnesia/
 mkdir -p /tmp/mnesia
 mv * /tmp/mnesia/                      # clear the old Mnesia data on this node
 service rabbitmq-server start
 rabbitmqctl stop_app                   # stop the RabbitMQ application but keep the Erlang VM running
 rabbitmqctl reset                      # return the node to a blank state
 rabbitmqctl join_cluster rabbit@node3  # rejoin the cluster through node3
 rabbitmqctl start_app
 rabbitmqctl cluster_status             # verify that all three nodes are listed
At this point the cluster should be up and running with all 3 nodes, and you should consider putting a load balancer in front of it so that requests get distributed evenly across all nodes.
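Once a load balancer (HAProxy, an ELB in TCP mode, or similar) is placed in front of the cluster, you can verify that client connections really are being spread across the nodes by listing connections grouped by node from any cluster member:

rabbitmqctl -q list_connections node | sort | uniq -c   # connection count per node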
