Fixing EKS "You must be logged in to the server" Error

Problem Statement

When trying to connect to your Amazon EKS cluster using kubectl, you may encounter this common authentication error:

```
couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)
```

This occurs when your AWS credentials aren't properly recognized by the EKS cluster's API server. The error typically appears despite:

  1. Having the AWS CLI installed and configured
  2. Using aws configure with credentials that have EKS permissions
  3. Following standard Kubernetes configuration procedures

Solutions

1. Configure EKS Access Entry (Essential Step)

Important

EKS now uses access entries as the default authorization mechanism for new clusters. IAM permissions alone are not sufficient: your IAM principal must also be granted access inside the cluster, either through an access entry or the legacy aws-auth ConfigMap.

You must explicitly grant your IAM identity access to the EKS cluster:

Via AWS CLI:

```bash
aws eks create-access-entry \
    --cluster-name your-cluster-name \
    --principal-arn arn:aws:iam::ACCOUNT_ID:user/USER_NAME
```
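
Creating the entry authenticates the principal, but it still needs an associated access policy (or matching Kubernetes RBAC) before kubectl calls succeed. A minimal follow-up, assuming cluster-wide admin access is intended:

```bash
# Attach a managed access policy to the new access entry; without an
# associated policy (or matching Kubernetes RBAC), the entry
# authenticates but authorizes nothing.
aws eks associate-access-policy \
    --cluster-name your-cluster-name \
    --principal-arn arn:aws:iam::ACCOUNT_ID:user/USER_NAME \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
    --access-scope type=cluster
```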

Via AWS Console:

  1. Navigate to EKS → Your Cluster → Access tab
  2. Click "Add access entry"
  3. Enter your IAM ARN (user or role)
  4. Select STANDARD access type
  5. Associate appropriate access policies
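
Either way, you can confirm the entry exists afterwards:

```bash
# List all principals that currently have access entries on the cluster
aws eks list-access-entries --cluster-name your-cluster-name
```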

Terraform Example (Add to Cluster Config):

```hcl
# Look up the IAM user that needs cluster access
data "aws_iam_user" "admin" {
  user_name = "your-iam-username"
}

# Register the user as an authenticated principal on the cluster
resource "aws_eks_access_entry" "admin" {
  cluster_name  = aws_eks_cluster.your_cluster.name
  principal_arn = data.aws_iam_user.admin.arn
}

# Authorize the entry by attaching a managed access policy
resource "aws_eks_access_policy_association" "admin" {
  cluster_name  = aws_eks_cluster.your_cluster.name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
  principal_arn = aws_eks_access_entry.admin.principal_arn

  access_scope {
    type = "cluster"
  }
}
```
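
Note that access entries only take effect when the cluster's authentication mode allows them. If the cluster itself is managed in Terraform, a minimal sketch of the relevant setting (the access_config block on the aws_eks_cluster resource; API_AND_CONFIG_MAP keeps the legacy aws-auth ConfigMap working alongside access entries):

```hcl
resource "aws_eks_cluster" "your_cluster" {
  # ... name, role_arn, vpc_config, etc. ...

  access_config {
    # "API" or "API_AND_CONFIG_MAP" is required for access entries;
    # "CONFIG_MAP" clusters still rely on the aws-auth ConfigMap.
    authentication_mode = "API_AND_CONFIG_MAP"
  }
}
```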

2. Verify AWS Credentials Configuration

Common issues and resolution steps:

  1. Confirm active credentials:

     ```bash
     aws sts get-caller-identity
     ```

     Verify the output matches your expected IAM user/role.

  2. Region consistency:

     ```bash
     aws configure get region
     ```

     The region must match your EKS cluster's region.

  3. Environment variable conflicts:

     ```bash
     unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
     ```

     Environment variables override profile credentials, so remove any stale ones (a combined check follows this list).
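
All three checks can be run together (plain AWS CLI commands; aws configure list also reveals which credential source is winning):

```bash
# Which identity will kubectl authenticate as?
aws sts get-caller-identity --query Arn --output text

# Which region does the CLI resolve?
aws configure get region

# Which credential source wins (environment variables beat profiles)?
aws configure list
```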

3. Refresh Kubernetes Configuration

After making credential changes:

New Terminal Required

```bash
# Run in a NEW terminal so stale environment variables don't leak in
aws eks update-kubeconfig \
    --region YOUR_REGION \
    --name YOUR_CLUSTER_NAME
```
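
If kubectl still fails after the refresh, it can help to exercise token generation directly; this is the same call the kubeconfig's exec plugin makes on every kubectl request:

```bash
# Confirm the active context points at the EKS cluster
kubectl config current-context

# Exercise the exec credential plugin outside kubectl; a JSON token
# response here means AWS-side authentication works and the remaining
# problem is authorization (access entry/policy)
aws eks get-token --cluster-name YOUR_CLUSTER_NAME --region YOUR_REGION
```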

4. Validate AWS Config Files

Check for problematic entries in ~/.aws/config:

```ini
# REMOVE this entry if found; it can break the non-interactive
# credential call kubectl makes through the AWS CLI
# cli_auto_prompt = on
```

Remove invalid session tokens from ~/.aws/credentials:

```ini
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
# COMMENT OUT or delete an expired token:
# aws_session_token = ...
```
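
A quick way to spot both problem entries at once:

```bash
# Look for a leftover session token and the auto-prompt setting
grep -n 'aws_session_token' ~/.aws/credentials
grep -n 'cli_auto_prompt' ~/.aws/config
```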

5. Verify Account Consistency

Ensure:

  • The same AWS account was used for the CLI credentials and for cluster creation (the check below compares them)
  • No mismatch between root and IAM user credentials
  • An access entry exists for the correct principal if the cluster was created with different credentials
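
A quick comparison (the identity that created the cluster gets admin access by default; any other principal needs an access entry):

```bash
# Account behind the current CLI credentials
aws sts get-caller-identity --query Account --output text

# Account that owns the cluster (embedded in its ARN)
aws eks describe-cluster --name YOUR_CLUSTER_NAME \
    --query 'cluster.arn' --output text
```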

6. Network Configuration (Advanced)

For VPC connectivity issues:

```hcl
# Terraform (input name from the community terraform-aws-modules/eks module)
module "eks" {
  # ... existing module configuration ...
  cluster_endpoint_public_access = true
}
```

Via AWS Console: EKS → Your Cluster → Networking → Manage endpoint access

Security Note

Public endpoint access introduces security risks. Limit CIDR ranges in production.
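
If you do enable the public endpoint, the same module accepts a CIDR allowlist (input name from the community terraform-aws-modules/eks module; the range below is a documentation example, substitute your own):

```hcl
module "eks" {
  # ... existing module configuration ...
  cluster_endpoint_public_access       = true
  # Restrict the public endpoint to known networks instead of 0.0.0.0/0
  cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"]
}
```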

Solution Comparison

| Solution | Use Case | Difficulty | Security Impact |
|----------|----------|------------|-----------------|
| Access Entry | All new clusters | Low | ✅ Minimal |
| Terminal/Kubeconfig Reset | After credential updates | Low | None |
| Config Cleanup | When using temp tokens | Medium | ⚠️ Verify |
| Network Config | Private network issues | High | 🛑 Risk! |

Best Practices

  1. Always create access entries for any IAM entity needing cluster access
  2. Apply least-privilege access policies (prefer ViewPolicy; reach for AdminPolicy or a custom policy only when needed)
  3. Regularly refresh your kubeconfig with:

     ```bash
     aws eks update-kubeconfig ...
     ```

  4. Use dedicated IAM roles instead of root accounts
  5. Infrastructure as Code: manage access through Terraform/CloudFormation

After implementing these solutions, verify connection with:

```bash
kubectl get nodes
```
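
If the error persists after all of the above, raising kubectl's verbosity shows the credential exec call and the API server's exact HTTP response, which usually distinguishes an authentication failure (bad token) from an authorization one (missing access entry or policy):

```bash
# -v=6 logs each HTTP request, including the 401/403 behind the error
kubectl get nodes -v=6
```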

:::success
Most EKS authentication issues are resolved by configuring access entries and refreshing configurations in a new terminal.
:::