
Tuesday, December 12, 2023

[Solved] Something went wrong when we tried to create 'main' for you: Cannot create branch. The branch name must match this regular expression: (bug|hotfix|feature|release)\/[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9-]+/*

 Error:-

While working in a new GitLab repository, my first attempt to commit some files into the empty repository failed with the following error, thrown by the pre-receive hook:

Something went wrong when we tried to create 'main' for you: Cannot create branch. The branch name must match this regular expression: (bug|hotfix|feature|release|main)\/[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9-]+/*

Cause:-

GitLab push rules can restrict branch names to follow certain standards, making it easier to tell why a branch was created in the first place, by enforcing a regex such as

(bug|hotfix|feature|release)\/[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9-]+/* 

So when I tried to create the main branch, GitLab still expected the name to match the regex above. Since main does not match, it threw the error above.

Solution :-

Following a naming convention is good practice, so rather than removing the rule permanently, temporarily disable it under

Repository --> Settings --> Repository --> Push rules --> Branch name

Remove the following value from the Branch name field:

(bug|hotfix|feature|release)\/[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9-]+/*

After that, save the push rules.

Once you have created the main branch, go back and restore the value so that new branches in your repository must again follow the regex pattern:

(bug|hotfix|feature|release)\/[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9-]+/*
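Before re-enabling the rule, you can sanity-check candidate branch names against the same pattern locally with grep. A small sketch (the branch names are made-up examples, and I anchor the regex for a whole-name match):

```shell
# Check proposed branch names against the push-rule pattern before pushing.
# The regex is the one from the error message, anchored at both ends.
regex='^(bug|hotfix|feature|release)/[a-zA-Z0-9]+-[0-9]+-[a-zA-Z0-9-]+$'

check() {
  if echo "$1" | grep -Eq "$regex"; then
    echo "ok: $1"
  else
    echo "rejected: $1"
  fi
}

check "feature/PROJ-123-add-login-page"   # matches the convention
check "my-random-branch"                  # would be rejected by the hook
```

This is also a quick way to see why main itself can never satisfy the rule.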

[Solved] dial unix .lima/colima/ha.sock: connect: connection refused

 Error:-

I have been using Colima instead of Docker Desktop for some time. Recently, while starting Colima, I got the following error:

errors inspecting instance: [failed to get Info from "/Users/ankitmittal/.lima/colima/ha.sock": Get "http://lima-hostagent/v1/info": dial unix /Users/ankitmittal/.lima/colima/ha.sock: connect: connection refused]

Cause:-

The issue is caused by a stale ha.sock file left over from a previous run, which Colima can no longer read properly.
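A minimal recovery sketch, assuming the default Colima profile: stop Colima, delete the stale socket so it gets recreated, and start again (the colima calls are guarded so the snippet is a no-op on a machine without Colima):

```shell
SOCK="$HOME/.lima/colima/ha.sock"

# Stop colima if it is installed, ignoring errors from an already-dead VM.
if command -v colima >/dev/null 2>&1; then
  colima stop --force || true
fi

# Remove the stale hostagent socket that colima can no longer read.
rm -f "$SOCK"

# Start colima again; a fresh ha.sock is created on startup.
if command -v colima >/dev/null 2>&1; then
  colima start
fi
```

If you use a named profile, the socket lives under ~/.lima/&lt;profile&gt; instead.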

Monday, December 4, 2023

[Solved] Error: updating RDS Cluster KMSKeyNotAccessibleFault: The specified KMS key [null] either doesn't exist, isn't enabled, or isn't accessible by the current user. Either specify a different key or access the key with a different user.

 Issue:-

While restoring an RDS cluster from a snapshot, I recently came across an issue with IAM permissions. I was deliberately not using admin permissions, sticking instead to the least privileges required to get the work done, and that is where I encountered this error:


Error: updating RDS Cluster KMSKeyNotAccessibleFault: The specified KMS key [null] either doesn't exist, isn't enabled, or isn't accessible by the current user. Either specify a different key or access the key with a different user.

Cause/Solution:-

The issue is caused by a missing IAM permission for the KMS key. To solve it, check CloudTrail for the DescribeKey event: you should find an event failing against an unknown key. When you dig further, you will find the key belongs to aws/secretsmanager.

If you select the option ManageMasterUserPassword: true, then you not only need the IAM permission secretsmanager:CreateSecret but also the KMS permission kms:DescribeKey on the aws/secretsmanager KMS key ARN.

Copy the ARN of the key referenced in CloudTrail and add it to the IAM role you are using; that should solve the issue.
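The extra statement in the IAM policy might look like the following sketch (the ARN is a placeholder; use the aws/secretsmanager key ARN you copied from the CloudTrail event):

```json
{
  "Effect": "Allow",
  "Action": ["kms:DescribeKey"],
  "Resource": "arn:aws:kms:<region>:<account-id>:key/<aws-secretsmanager-key-id>"
}
```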

In my case, AWS Support was not able to figure this out. They instead pointed me in the wrong direction, saying the key was somehow not being passed and was taking the value null, which was not the case here.


[Solved] KMSKeyNotAccessibleFault: The specified KMS key does not exist, is not enabled or you do not have permissions to access it.

 Error:-

While running Terraform, I came across an IAM permission issue that prevented access to the KMS key:

KMSKeyNotAccessibleFault: The specified KMS key does not exist, is not enabled or you do not have permissions to access it.

Cause:-

The issue is caused because the IAM role used by Terraform is missing the "kms:CreateGrant" permission.


Solution :-

To resolve the issue, grant the "kms:CreateGrant" permission on the relevant KMS keys in the IAM policy. That should solve the issue:

{
  "Action": [
    "kms:Sign",
    "kms:ReEncrypt*",
    "kms:GetPublicKey",
    "kms:GenerateDataKey*",
    "kms:Encrypt",
    "kms:DescribeKey",
    "kms:Decrypt",
    "kms:CreateGrant"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:kms:ap-south-1:121294719847:key/e96772364-f678-4589-82aa-396casdafu",
    "arn:aws:kms:ap-south-1:121294719847:key/6415234-e778-4355-a224-8f42341234",
    "arn:aws:kms:ap-south-1:121294719847:key/077b234-b165-4d5c-be78-a174ad23"
  ]
}

[Solved] * exec: "tfsec": executable file not found in $PATH

 Error:-

While running terragrunt plan in the GitLab runner, I came across the following error:

* exec: "tfsec": executable file not found in $PATH

Cause:-

The issue is caused because tfsec was not installed in the container image used by the runner.


Solution :-

To install tfsec, simply run the following script; it installs tfsec on the current machine:


curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
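Since the failure happened inside a GitLab runner, the install ultimately has to land in the job's container rather than on your laptop. A hypothetical .gitlab-ci.yml sketch (the job name, base image, and script are placeholders for your own pipeline):

```yaml
plan:
  image: alpine:3.19
  before_script:
    # the installer script needs curl and bash, which slim images may lack
    - apk add --no-cache curl bash
    - curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
  script:
    - terragrunt plan
```

Baking tfsec into the runner image itself avoids the per-job download.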


Tuesday, September 12, 2023

[Solved] Error saving credentials: error storing credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``

 Error:-

While building an image on an Ubuntu machine, I got the following error:

Error saving credentials: error storing credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``

"credsStore": "desktop",

Cause:-

The issue is caused because ~/.docker/config.json uses credsStore, which tells Docker to use the docker-credential-desktop helper, and that helper is not installed on this machine. Renaming the key to credStore (a key Docker ignores) or removing the entry stops Docker from invoking the missing helper.
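After the rename, the config file no longer selects the missing desktop helper. A sketch of the edited ~/.docker/config.json (the auths content here is illustrative, yours will differ):

```json
{
  "credStore": "desktop",
  "auths": {}
}
```

Deleting the line entirely has the same effect, since Docker then falls back to storing credentials in the config file itself.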

Saturday, August 5, 2023

[Solved] Failed to deploy artifacts: Could not transfer artifact gitlab-maven status code: 401, reason phrase: Unauthorized (401)

Error:-

While pushing artifact JAR files to the GitLab project package registry, I got the Unauthorized (401) error below:

[ERROR] Failed to execute goal:- Failed to deploy artifacts: Could not transfer artifact ExampleApp:jar:1.1 from/to gitlab-maven (https://gitlab-dedicated.com/api/v4/projects/1520/packages/maven): status code: 401, reason phrase: Unauthorized (401) ->

Scenario:-

It was a Maven project, with private runners building and pushing the artifacts to the project package registry. Usually we achieve this without any issue in other projects by using a deploy token with the mvn clean deploy goal. This project was different, though: it had multiple artifacts created via REST API requests and used Maven to push them to the GitLab package registry, so we could not run mvn clean deploy. Instead we ended up using mvn deploy:deploy-file to push the artifacts, which resulted in an authentication issue even with the deploy token, despite the deploy token having permission on the package registry as per the GitLab documentation.

Cause:-

Even after trying all the tokens, we were not able to push to the GitLab package registry. It came down to how our settings.xml file was written. If you are pushing packages directly to the package registry, you need to follow the GitLab documentation:

https://docs.gitlab.com/ee/user/packages/maven_repository/
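With a deploy token, the server entry in settings.xml must send the token as an HTTP header rather than as a username/password pair. A sketch based on that documentation (the id must match the repository id in your pom.xml, gitlab-maven here, and CI_DEPLOY_PASSWORD is assumed to hold your deploy token):

```xml
<settings>
  <servers>
    <server>
      <id>gitlab-maven</id>
      <configuration>
        <httpHeaders>
          <property>
            <name>Deploy-Token</name>
            <value>${CI_DEPLOY_PASSWORD}</value>
          </property>
        </httpHeaders>
      </configuration>
    </server>
  </servers>
</settings>
```

A plain username/password server entry is what silently fails with deploy tokens when using deploy:deploy-file.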

Tuesday, May 16, 2023

[Solved] Pressing enter produces ^M instead of a newline in the MAC/Linux

Error:-

While working on Terraform and running terraform apply, I came across an issue where Terraform asked for a yes confirmation, but after typing yes the terminal did not register the Enter key; it printed ^M every time Enter was pressed. As a result, terraform apply would not proceed.

Cause:-

The most likely cause is the stty terminal line settings: the terminal has stopped translating carriage returns into newlines.
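A sketch of the usual remedies, to be run in the affected terminal (they only make sense on an interactive tty, hence the guard):

```shell
if [ -t 0 ]; then
  # translate carriage return to newline on input, so Enter works again
  stty icrnl
  # or, to reset all line settings to sane defaults:
  # stty sane
fi
```

If the terminal is too broken to type into, `reset` (blind-typed followed by Enter, which may echo as ^M) usually recovers it as well.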

[Solved] 0/111 nodes are available: 111 node(s) had untolerated taint {eks.amazonaws.com/compute-type: fargate}

While deploying the application deployment for the GitLab runner, I recently faced the following error in EKS Fargate on AWS:


Error:-

0/111 nodes are available: 111 node(s) had untolerated taint {eks.amazonaws.com/compute-type: fargate}

 

Cause:-

Even though I had sufficient capacity in EKS, the pod was not getting scheduled in the EKS Fargate cluster. This is because of a missing Fargate profile, which lets you select which pods you want to run on Fargate and which you do not. That way you can separate deployments if you also have on-premise nodes or your own EKS cluster running on EC2 nodes.

In my case, I deployed into a separate namespace, and each namespace needs an associated Fargate profile, along with IAM roles for permissions on AWS resources such as ECR for image download. I had just created a new namespace and done the deployment in EKS, which is why the pods were not allocated to any nodes.
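Creating the missing profile can be sketched with the AWS CLI as below (everything in angle brackets is a placeholder for your own cluster, role, and namespace; the pod execution role must already exist with ECR pull permissions):

```shell
aws eks create-fargate-profile \
  --cluster-name <cluster-name> \
  --fargate-profile-name gitlab-runner \
  --pod-execution-role-arn arn:aws:iam::<account-id>:role/<pod-execution-role> \
  --selectors namespace=<your-namespace>
```

Pods in the selected namespace should then get scheduled onto Fargate instead of failing with the untolerated-taint message.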

Sunday, April 16, 2023

[Solved] Argocd invalid username and password

 

After installing Argo CD in Kubernetes, logging in to the Argo CD UI in the browser fails with the error invalid username or password.


Error:-

Invalid username or password

 

Cause:-

In most places you will find that the initial password for Argo CD is the argocd-server pod name, or argocd-server.namespace, but entering either of them still hits the same issue and login fails. Argo CD sets up a one-time initial password, and even decoding the password stored in the Secret may not work.

Check the solution below
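One thing worth checking along the way: recent Argo CD versions store the initial admin password in the argocd-initial-admin-secret Secret (the argocd namespace is an assumption, and the snippet is guarded so it is a no-op without kubectl). The username is admin:

```shell
# read and decode the auto-generated initial admin password
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 -d || true
  echo
fi
```

If that Secret is absent or the decoded value is rejected, follow the full solution below.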

[Solved] Error: SSL certificate problem: self signed certificate in certificate chain

While creating an Ubuntu machine in Vagrant, I recently faced an issue where the image download failed with an SSL error, as mentioned below.


Error:-

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'krec/ubuntu2004-x64' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
The box 'krec/ubuntu2004-x64' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Vagrant Cloud, please verify you're logged in via
`vagrant login`. Also, please double-check the name. The expanded
URL and error message are shown below:

URL: ["https://vagrantcloud.com/krec/ubuntu2004-x64"]
Error: SSL certificate problem: self signed certificate in certificate chain

 

Cause:-

If you're encountering a "self signed certificate in certificate chain" error when using Vagrant, it means the SSL certificate used by the server you're connecting to is not trusted by your system, because it is self-signed or not signed by a trusted authority. This can be a security risk, so there are two cases:

1. In some cases (testing or development), it may be acceptable to temporarily disable certificate validation.

2. If you need to use a self-signed certificate for SSL/TLS connections in a production environment, you can add the certificate to the trusted certificates on your system.

Based on your use case, you can implement either of the solutions mentioned below.
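For the testing-only case, Vagrant has a built-in escape hatch. A sketch using the box name from the error (check the flag against your Vagrant version; the snippet is guarded so it does nothing where Vagrant is not installed):

```shell
# one-off box download with certificate validation disabled (testing only)
if command -v vagrant >/dev/null 2>&1; then
  vagrant box add krec/ubuntu2004-x64 --insecure || true
fi
```

The equivalent per-project setting is `config.vm.box_download_insecure = true` in the Vagrantfile; for the production case, import the certificate into your OS trust store instead.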

Thursday, March 30, 2023

[Solved] creating EC2 Subnet: InvalidParameterValue: Value (us-east-2b) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e, us-east-1f.

 Issue:-

While creating a multi-region VPC setup through Terraform, terraform apply fails when it tries to create the subnets in the second VPC.


Error:-

 module.vpc2.aws_subnet.public[0]: Creating...  
 ╷  
 │ Error: creating EC2 Subnet: InvalidParameterValue: Value (us-west-2b) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-east-1a, us-east-1b, us-east-1c, us-east-1d, us-east-1e, us-east-1f.  
 │      status code: 400, request id: 79a19b0b-93d1-4a78-9c0c-124e429c78de  
 │   
 │  with module.vpc2.aws_subnet.public[1],  
 │  on .terraform/modules/vpc2/main.tf line 359, in resource "aws_subnet" "public":  
 │ 359: resource "aws_subnet" "public" {  

 

Cause:-

Even though I specified the providers, Terraform was still trying to create the us-west-2b subnet in the wrong region, i.e. us-east-1. Since that availability zone does not exist there, AWS throws the error that only us-east-1a through us-east-1f are available for creating subnets.
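The fix matching this cause, sketched in HCL (the module source and alias names are placeholders): pass the aliased us-west-2 provider to the second VPC module explicitly, otherwise the module inherits the default provider's region.

```hcl
provider "aws" {
  region = "us-east-1" # default provider, used by the first VPC
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "vpc2" {
  source = "terraform-aws-modules/vpc/aws"

  # without this block the module silently uses the default us-east-1 provider
  providers = {
    aws = aws.west
  }

  # ... vpc/subnet arguments ...
}
```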

Saturday, March 25, 2023

[Solved] forbidden: User "system:serviceaccount:default:app-name" cannot delete resource "pods" in API group "" in the namespace "default""

Issue:

When trying to delete a Kubernetes pod via the Go-client library, an error is encountered: "pods "app-name" is forbidden: User "system:serviceaccount:default:app-name" cannot delete resource "pods" in API group "" in the namespace "default""


Code:

The following code is used to delete the pod via the Go-client library:

err := ks.clientset.CoreV1().Pods(kubeData.PodNamespace).Delete(context.Background(), kubeData.PodName, metav1.DeleteOptions{})
if err != nil {
    log.Fatal(err)
}

The serviceaccount file that I was passing was:

{{- $sa := print .Release.Name "-" .Values.serviceAccount -}}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ $sa }}
  namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ $sa }}
rules:
  - apiGroups: ["apps"]
    verbs: ["patch", "get", "list"]
    resources:
      - deployments
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ $sa }}
rules:
  - apiGroups: ["apps"]
    verbs: ["delete", "get", "list"]
    resources:
      - pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ $sa }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ $sa }}
subjects:
  - kind: ServiceAccount
    name: {{ $sa }}
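Two problems stand out in that file. Pods live in the core API group ("" rather than "apps"), which is exactly what the error message says, and both Roles share the same name, so the second overwrites the first. A sketch of a single merged Role that avoids both (names mirror the Helm template above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ $sa }}
rules:
  - apiGroups: ["apps"]
    verbs: ["patch", "get", "list"]
    resources:
      - deployments
  - apiGroups: [""]   # Pods are in the core API group, not "apps"
    verbs: ["delete", "get", "list"]
    resources:
      - pods
```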

[Solved] MountVolume.SetUp failed for volume

Kubernetes Persistent Volume Claims (PVCs) abstract the underlying storage infrastructure, allowing developers to mount storage into a pod without knowing the details of the storage. However, sometimes the PVC may fail to mount, causing applications to fail. In this article, we will discuss the steps to troubleshoot and resolve such issues.


Issue:

When trying to mount a PVC in a Kubernetes pod, the mount fails with the following error:

"MountVolume.SetUp failed for volume [volume name] : failed to fetch token: cannot get auth token"


Error:

The error message "MountVolume.SetUp failed for volume [volume name] : failed to fetch token: cannot get auth token" indicates that the pod was not able to authenticate to the storage provider and obtain the required credentials to mount the volume.
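Before digging further, the usual first diagnostics are worth running; a sketch (everything in angle brackets is a placeholder):

```shell
# full mount error and events for the failing pod
kubectl describe pod <pod-name> -n <namespace>

# recent events in the namespace, oldest first
kubectl get events -n <namespace> --sort-by=.lastTimestamp

# confirm the pod's service account exists and check its annotations
kubectl get serviceaccount -n <namespace>
```

The events usually name the storage driver whose token request is failing.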

Saturday, February 4, 2023

[Solved] [WARNING]: An error occurred while calling ansible.utils.display.initialize_locale (unsupported locale setting). This may result in incorrectly calculated text widths that can cause Display to print incorrect line lengths

 ISSUE:-

While running ansible-playbook, it gives a warning every time, as shown below.


Warning:-

[WARNING]: An error occurred while calling ansible.utils.display.initialize_locale (unsupported locale setting). This may result in incorrectly calculated text widths that can cause Display to print incorrect line lengths


Cause:-

The issue occurs because the locale settings are not set up properly, so Ansible warns that its printed output may have incorrect line lengths, which can make troubleshooting harder and reduces the overall viewing experience. It is just a warning, not an error, so it does not affect how Ansible works.
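The warning goes away once a UTF-8 locale is exported before invoking Ansible; a sketch (en_US.UTF-8 is an assumption, substitute any UTF-8 locale installed on the host):

```shell
# export a UTF-8 locale for this shell session
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
# then run: ansible-playbook <playbook>.yml
```

Adding the two exports to ~/.bashrc (or the shell profile the runner uses) makes the fix permanent.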

Monday, January 30, 2023

[Solved] Fingerprint sha256 has already been taken

 ISSUE:-

While cloning a repository from GitLab, the following error occurs:


Error:-

The form contains the following error:
Fingerprint sha256 has already been taken

Cause:-

The issue was occurring because I had already added my laptop's SSH key to the company's GitLab account. When I then created my own personal GitLab account and tried to add the same key to clone a repository, GitLab complained that the key has already been taken.
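The usual way out is to generate a separate key pair for the personal account and add its public half to the personal GitLab profile. A sketch (the file name and comment are arbitrary; the empty passphrase is for the sketch only, use a real one in practice):

```shell
KEY="$HOME/.ssh/id_ed25519_personal"
mkdir -p "$HOME/.ssh"
# generate a dedicated key for the personal account if it does not exist yet
if [ ! -f "$KEY" ]; then
  ssh-keygen -t ed25519 -C "personal-gitlab" -N "" -f "$KEY"
fi
# this is the public half to paste into the personal GitLab profile
cat "$KEY.pub"
```

You can then point the personal remotes at this key with a Host alias in ~/.ssh/config.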

Monday, January 16, 2023

[Solved] URL: ["https://vagrantcloud.com/ubuntu/bionic64"] Error: SSL certificate problem: self signed certificate in certificate chain

 ISSUE:-

While downloading ubuntu/bionic64 using the Vagrantfile, I got the following error:


Error:-

 URL: ["https://vagrantcloud.com/ubuntu/bionic64"]  
 Error: SSL certificate problem: self signed certificate in certificate chain

Cause:-

The box download is failing because of an SSL certificate issue: the certificate chain presented to Vagrant contains a self-signed certificate that the system does not trust.

[Solved] The version of powershell currently installed on this host is less than the required minimum version.

 Issue:-

Vagrant failed to initialize at an early stage with an error that PowerShell is outdated.


Error:-

 Vagrant failed to initialize at a very early stage:  
 The version of powershell currently installed on this host is less than  
 the required minimum version. Please upgrade the installed version of  
 powershell to the minimum required version and run the command again.  
  Installed version: N/A  
  Minimum required version: 3  

Cause:-

Vagrant failed because of the older version of PowerShell installed on the host.

Monday, January 9, 2023

[Solved] From inside of a Docker container, how do I connect to the localhost of the machine?

To connect to the localhost of the machine from inside a Docker container, you can use the host.docker.internal hostname. This special DNS name is resolved to the internal IP address used by the host.

For example, if you want to connect to a service running on the host machine's localhost on port 8080, you can use the following command from inside the container: 

curl http://host.docker.internal:8080

Note that host.docker.internal works out of the box on Docker for Mac and Docker for Windows. On Linux it is not defined by default, so you either need to map it yourself or use the IP address of the host machine instead.
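On Linux, Docker 20.10 and later let you opt in to the same hostname via the host-gateway special value; a sketch with the image name as a placeholder:

```shell
# map host.docker.internal to the host on Linux (Docker 20.10+)
docker run --add-host=host.docker.internal:host-gateway <image-name>
```

With that mapping in place, the same curl command works unchanged inside the container.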

You can also use the --network host flag when starting the container to allow it to connect directly to the host's network interfaces. For example:

 docker run --network host <image-name>  

This can be useful if you need to connect to a service running on the host that is not exposed on localhost, or if you need to bind to a specific IP address or port on the host.

[Solved] Database is uninitialized and password option is not specified

 

Issue:- 

When launching a MySQL container from the mysql:5.7 image, the container stops with an error message saying to set the environment variable for the MySQL root password.

Error:-