# Fixing EKS "You must be logged in to the server" Error
## Problem Statement

When trying to connect to your Amazon EKS cluster using `kubectl`, you may encounter this common authentication error:

```
couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)
```
This occurs when your AWS credentials aren't properly recognized by the EKS cluster's API server. The error typically appears despite:

- Having the AWS CLI installed and configured
- Using `aws configure` with credentials that have EKS permissions
- Following standard Kubernetes configuration procedures
## Solutions
### 1. Configure EKS Access Entry (Essential Step)

> **Important:** AWS now uses access entries as the primary authorization method for EKS clusters. On clusters using the newer access-entry authentication modes, IAM permissions alone are no longer sufficient.

You must explicitly grant your IAM identity access to the EKS cluster:
**Via AWS CLI:**

```bash
aws eks create-access-entry \
  --cluster-name your-cluster-name \
  --principal-arn arn:aws:iam::ACCOUNT_ID:user/USER_NAME
```
**Via AWS Console:**

- Navigate to EKS → Your Cluster → Access tab
- Click "Add access entry"
- Enter your IAM ARN (user or role)
- Select the `STANDARD` access type
- Associate appropriate access policies
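Once an entry has been added, you can confirm it exists before retrying `kubectl`. A minimal Python sketch (the sample JSON and ARNs are hypothetical) of checking the output of `aws eks list-access-entries --cluster-name your-cluster-name` for your principal:

```python
import json

def has_access_entry(list_entries_json: str, principal_arn: str) -> bool:
    """Return True if principal_arn appears in the JSON printed by
    `aws eks list-access-entries --cluster-name <name>`."""
    entries = json.loads(list_entries_json).get("accessEntries", [])
    return principal_arn in entries

# Example output shape (ARN hypothetical):
sample = '{"accessEntries": ["arn:aws:iam::123456789012:user/alice"]}'
print(has_access_entry(sample, "arn:aws:iam::123456789012:user/alice"))  # True
print(has_access_entry(sample, "arn:aws:iam::123456789012:user/bob"))    # False
```

If your ARN is missing from the list, the access entry step above has not taken effect for the identity you are actually using.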
**Terraform Example (Add to Cluster Config):**

```hcl
data "aws_iam_user" "admin" {
  user_name = "your-iam-username"
}

resource "aws_eks_access_entry" "admin" {
  cluster_name  = aws_eks_cluster.your_cluster.name
  principal_arn = data.aws_iam_user.admin.arn
}

resource "aws_eks_access_policy_association" "admin" {
  cluster_name  = aws_eks_cluster.your_cluster.name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
  principal_arn = aws_eks_access_entry.admin.principal_arn

  access_scope {
    type = "cluster"
  }
}
```
### 2. Verify AWS Credentials Configuration

Common issues and resolution steps:

**Confirm active credentials:**

```bash
aws sts get-caller-identity
```

Verify the output matches your expected IAM user/role.

**Region consistency:**

```bash
aws configure get region
```

The output must match your EKS cluster's region.

**Environment variable conflicts:**

```bash
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
```

(Remove any overriding environment variables, including a stale session token.)
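The identity check above can also be scripted: `aws sts get-caller-identity` prints JSON with `Account` and `Arn` fields. A small Python sketch (sample values hypothetical) comparing the account against the one you expect to own the cluster:

```python
import json

def check_identity(caller_identity_json: str, expected_account: str) -> bool:
    """Compare the Account field from `aws sts get-caller-identity`
    against the account that owns the EKS cluster."""
    identity = json.loads(caller_identity_json)
    print(f"Acting as: {identity['Arn']}")
    return identity["Account"] == expected_account

# Shape of `aws sts get-caller-identity` output (values hypothetical):
sample = '''{
    "UserId": "AIDAEXAMPLE",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/alice"
}'''
print(check_identity(sample, "123456789012"))  # True: same account
```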
### 3. Refresh Kubernetes Configuration

After making credential changes, regenerate your kubeconfig. Run this in a **new terminal** so stale environment variables from the old session don't carry over:

```bash
aws eks update-kubeconfig \
  --region YOUR_REGION \
  --name YOUR_CLUSTER_NAME
```
### 4. Validate AWS Config Files

Check for problematic entries in `~/.aws/config`:

```ini
# REMOVE this entry if found
# cli_auto_prompt = on
```
Remove invalid session tokens from `~/.aws/credentials`:

```ini
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
# COMMENT OUT the invalid token:
# aws_session_token = ...
```
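Because `~/.aws/credentials` is INI-formatted, stale session tokens can be located mechanically with the standard library. A Python sketch (sample file contents hypothetical) that flags any profile still carrying an `aws_session_token`:

```python
import configparser

def profiles_with_session_tokens(credentials_text: str) -> list[str]:
    """Return the names of profiles that carry an aws_session_token,
    the usual culprit when long-lived keys are mixed with an expired
    temporary session."""
    parser = configparser.ConfigParser()
    parser.read_string(credentials_text)
    return [s for s in parser.sections()
            if parser.has_option(s, "aws_session_token")]

sample = """
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = secret
aws_session_token = expired-token
"""
print(profiles_with_session_tokens(sample))  # ['default']
```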
### 5. Verify Account Consistency

Ensure that:

- The same AWS account is used for the CLI and for cluster creation
- There is no root/IAM user mismatch (only the identity that created the cluster has access initially)
- An access entry exists for the correct identity if the cluster was created with different credentials
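An account mismatch is easy to detect because the account ID is the fifth colon-separated field of every ARN. A small Python sketch comparing the account in your caller ARN against the cluster ARN (both ARNs hypothetical):

```python
def arn_account(arn: str) -> str:
    """ARNs look like arn:partition:service:region:account-id:resource,
    so the account ID sits at index 4 after splitting on colons."""
    return arn.split(":")[4]

caller_arn = "arn:aws:iam::123456789012:user/alice"            # hypothetical
cluster_arn = "arn:aws:eks:eu-west-1:123456789012:cluster/demo"  # hypothetical
print(arn_account(caller_arn) == arn_account(cluster_arn))  # True: same account
```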
### 6. Network Configuration (Advanced)

For VPC connectivity issues, enable the public API endpoint:

```hcl
# Terraform: enable the public endpoint on the EKS module
module "eks" {
  # ...existing module configuration...
  cluster_endpoint_public_access = true
}
```

In the AWS Console: EKS → Cluster → Networking → Manage Endpoint Access.

> **Security Note:** Public endpoint access introduces security risks. Limit allowed CIDR ranges in production.
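Whether the public endpoint is actually enabled can be read from `aws eks describe-cluster --name your-cluster-name`, which reports the flags under `cluster.resourcesVpcConfig`. A Python sketch against a hypothetical sample payload:

```python
import json

def endpoint_access(describe_cluster_json: str) -> tuple[bool, bool]:
    """Return (public, private) endpoint access flags from
    `aws eks describe-cluster --name <name>` output."""
    vpc = json.loads(describe_cluster_json)["cluster"]["resourcesVpcConfig"]
    return vpc["endpointPublicAccess"], vpc["endpointPrivateAccess"]

# Trimmed shape of describe-cluster output (values hypothetical):
sample = '''{
    "cluster": {
        "resourcesVpcConfig": {
            "endpointPublicAccess": false,
            "endpointPrivateAccess": true
        }
    }
}'''
public, private = endpoint_access(sample)
# Public access off + private on means kubectl only works from inside the VPC.
print(public, private)
```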
## Solution Comparison

| Solution | Use Case | Difficulty | Security Impact |
|---|---|---|---|
| Access Entry | All new clusters | Low | ✅ Minimal |
| Terminal/Kubeconfig Reset | After credential updates | Low | None |
| Config Cleanup | When using temp tokens | Medium | ⚠️ Verify |
| Network Config | Private network issues | High | 🛑 Risk |
## Best Practices

- Always create access entries for any IAM entity that needs cluster access
- Apply least-privilege access policies (prefer ViewPolicy over AdminPolicy; use custom policies where needed)
- Regularly refresh your kubeconfig with `aws eks update-kubeconfig ...`
- Use dedicated IAM roles instead of the root account
- Manage access through infrastructure as code (Terraform/CloudFormation)
After implementing these solutions, verify the connection with:

```bash
kubectl get nodes
```

:::success
Most EKS authentication issues are resolved by configuring access entries and refreshing configurations in a new terminal.
:::