AWS provides a convenient way to handle RBAC on EKS by mapping IAM identities to Kubernetes users. If you are new to RBAC, there is a nice introduction to RBAC in Kubernetes in my previous posts.
When working with Kubernetes clusters in AWS, the primary way to implement Kubernetes RBAC is with AWS IAM. EKS authenticates valid entities such as IAM users and roles by verifying the auth token in each request with a server-side webhook service (the AWS IAM Authenticator), and then relies on standard Kubernetes RBAC for authorization. In short, the IAM Authenticator is a tool that uses AWS IAM credentials to authenticate to a Kubernetes cluster.
AWS IAM Authenticator is an open-source project maintained by a Kubernetes Special Interest Group (SIG).
EKS RBAC with IAM
The API server forwards the IAM identity token in the request to the webhook service. The webhook service first verifies with AWS IAM that the token belongs to a valid IAM identity, and then consults the aws-auth ConfigMap on EKS to check whether that IAM identity corresponds to a valid cluster user.
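Under the hood, kubectl obtains that token through an exec credential plugin configured in the kubeconfig. A minimal sketch of the client side, assuming a hypothetical cluster named my-cluster:
aws eks get-token --cluster-name my-cluster
aws-iam-authenticator token -i my-cluster
Both commands print the same kind of pre-signed token that kubectl presents to the EKS API server as a bearer token.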
Amazon EKS uses a specific ConfigMap named aws-auth to manage the IAM roles and IAM users that are allowed to connect to and manage the cluster.
By default, the IAM entity (user or role) that creates the cluster is granted system:masters permissions in the cluster's RBAC configuration, regardless of what permissions it has in AWS. Hence, you should never use the root account to create the cluster. EKS lets you give access to other users by adding them to the aws-auth ConfigMap in the kube-system namespace. Initially, this ConfigMap contains only the role mapping for the worker nodes, so any additional users have to be mapped explicitly.
Adding users to your EKS cluster has two sides:
- IAM (Identity and Access Management on the AWS side)
- RBAC (Role-Based Access Control on the Kubernetes side).
New users and/or roles are declared via the aws-auth ConfigMap within Kubernetes. The aws-auth ConfigMap is read by aws-iam-authenticator and has several configuration options (see the mapAccounts sketch after this list):
- mapRoles
- mapUsers
- mapAccounts
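mapRoles and mapUsers are demonstrated in the demo below. For completeness, mapAccounts trusts every IAM identity in an entire AWS account; a minimal sketch with a placeholder account ID (the auto-mapped identities still need RoleBindings before they can do anything):
mapAccounts: |
  - "111122223333"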
Demo
We will create two users:
- An EKS admin with full administrator permissions
- A normal user who can manage only deployments/replicasets/pods in a particular namespace
The ConfigMap allows other IAM entities, such as users and roles, to access the EKS cluster. Let’s look at the aws-auth ConfigMap before we change anything; here is the default content of the file:
kubectl -n kube-system get configmap aws-auth -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: {{values}}
  resourceVersion: "3433443"
  creationTimestamp: "{{values}}"
data:
  mapRoles: |-
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::{{values}}:role/{{values}}
      username: system:node:{{values}}
To extend the system:masters permissions to other users and roles, you must add them to the aws-auth ConfigMap of the Amazon EKS cluster.
Let’s do the following tasks here:
- Create a new IAM user called eksadmin and export its credentials
- Update the aws-auth ConfigMap and add a mapUsers section.
- Map the IAM user ARN to the pre-defined system:masters group, which gives admin rights to our new user.
Create eksadmin user and export its credentials
To grant additional AWS users or roles the ability to interact with your cluster, you must add a new user or role into the aws-auth ConfigMap.
IAM_USER=eksadmin
aws iam create-user --user-name $IAM_USER
aws iam create-access-key --user-name $IAM_USER | tee /tmp/create_output.json
{
  "AccessKey": {
    "UserName": "eksadmin",
    "Status": "Active",
    "CreateDate": "date",
    "SecretAccessKey": <AWS Secret Access Key>,
    "AccessKeyId": <AWS Access Key>
  }
}
Save the credentials to a small script so they can later be exported as the default AWS credentials:
cat << EoF > eksadmin_creds.sh
export AWS_SECRET_ACCESS_KEY=$(jq -r .AccessKey.SecretAccessKey /tmp/create_output.json)
export AWS_ACCESS_KEY_ID=$(jq -r .AccessKey.AccessKeyId /tmp/create_output.json)
EoF
Update the aws-auth configMap and add the mapUsers section
kubectl -n kube-system edit configmap aws-auth
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: "{{values}}"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "3433443"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: {{values}}
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::{{values}}:role/{{values}}
      username: system:node:{{values}}
  mapUsers: |                       # ADD THIS section
    - userarn: arn:aws:iam::{{values}}:user/eksadmin
      username: eksadmin
      groups:
        - system:masters
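Alternatively, eksctl can apply the same mapping without hand-editing the ConfigMap. A sketch, assuming a hypothetical cluster named my-cluster and a placeholder account ID:
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn arn:aws:iam::111122223333:user/eksadmin \
  --username eksadmin \
  --group system:masters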
The content under mapRoles and mapUsers is not an object; it is a single string, so the literal block scalar indicator (| or |-) is required.
We can now test our new IAM user. Run source eksadmin_creds.sh && aws configure --profile eksadmin and use the user credentials for the access key and secret access key.
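No kubeconfig change is needed when switching identities: the exec credential plugin signs a fresh token with whatever AWS credentials are active in the shell. Either of the following works (profile name from the step above):
source eksadmin_creds.sh        # export the raw keys into the current shell
export AWS_PROFILE=eksadmin     # or point the AWS CLI (and the exec plugin) at the named profile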
Check which identity you are using by running:
$ aws sts get-caller-identity
{
  "UserId": "AIDAVF3YR75UYCMUXIBIA",
  "Account": "{{value}}",
  "Arn": "arn:aws:iam::{{value}}:user/eksadmin"
}
Verify that the new user can list the nodes running on the EKS cluster:
kubectl get node
We still need to create a user that has deployment permissions in the “development” namespace. Let’s start by creating the namespace:
kubectl create namespace development # using our eksadmin user
Create csaju user and export its credentials
unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID
IAM_USER=csaju
aws iam create-user --user-name $IAM_USER
aws iam create-access-key --user-name $IAM_USER | tee /tmp/create_output.json
{
  "AccessKey": {
    "UserName": "csaju",
    "Status": "Active",
    "CreateDate": "date",
    "SecretAccessKey": <AWS Secret Access Key>,
    "AccessKeyId": <AWS Access Key>
  }
}
cat << EoF > csaju_creds.sh
export AWS_SECRET_ACCESS_KEY=$(jq -r .AccessKey.SecretAccessKey /tmp/create_output.json)
export AWS_ACCESS_KEY_ID=$(jq -r .AccessKey.AccessKeyId /tmp/create_output.json)
EoF
Up until now, as the cluster operator, you’ve been accessing the cluster as the admin user. Let’s now see what happens when we access the cluster as the newly created csaju user. You should see something similar to the output below, where we’re now making API calls as csaju:
source csaju_creds.sh && aws sts get-caller-identity
{
  "Account": <AWS Account ID>,
  "UserId": <AWS User ID>,
  "Arn": "arn:aws:iam::<AWS Account ID>:user/csaju"
}
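At this point, csaju is a valid IAM identity but is not yet mapped in aws-auth or granted any RBAC permissions, so the API server should reject it outright:
kubectl -n development get pods
error: You must be logged in to the server (Unauthorized)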
Here is the role-deployment-manager.yaml file. We have the user, we have the role, and now we bind them together with a RoleBinding resource. Run the following (with your admin credentials, since csaju cannot create RBAC objects) to create the Role and RoleBinding:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: development
  name: deployment-manager
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-manager-binding
  namespace: development
subjects:
  - kind: User
    name: csaju
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f role-deployment-manager.yaml
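You can confirm that both objects were created (again with admin credentials):
kubectl -n development get role,rolebinding
kubectl -n development describe rolebinding deployment-manager-binding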
Here, the deployment-manager Role is bound to a cluster user named csaju; that Role allows specific actions (verbs) in the development namespace. This is what we need to remember for now. Next, we edit the aws-auth ConfigMap and add a new mapping for the csaju IAM user, this time with a deployment-manager group (the group entry is informational here, since the RoleBinding above binds the username csaju directly).
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: "{{values}}"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "3433443"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: {{values}}
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::{{values}}:role/{{values}}
      username: system:node:{{values}}
  mapUsers: |
    - userarn: arn:aws:iam::{{values}}:user/eksadmin
      username: eksadmin
      groups:
        - system:masters
    - userarn: arn:aws:iam::{{values}}:user/csaju
      username: csaju
      groups:
        - deployment-manager
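If you are using eksctl, you can list the identity mappings to double-check the final state (same hypothetical cluster name as before):
eksctl get iamidentitymapping --cluster my-cluster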
Create the AWS profile for the csaju user with aws configure --profile csaju, then export the new profile: export AWS_DEFAULT_PROFILE="csaju". This user should not be able to list cluster nodes:
kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "csaju" cannot list resource "nodes" in API group "" at the cluster scope
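The same user should, however, be able to manage deployments in the development namespace, matching the Role we created earlier:
kubectl -n development create deployment nginx --image=nginx
kubectl -n development get pods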
Tools to help with RBAC
Let’s now look at a few tools that help with RBAC. The built-in kubectl auth can-i is the most basic command for inspecting RBAC: it tells you whether a user is allowed to perform a given action. Examples:
kubectl auth can-i get pods
kubectl auth can-i get pod --namespace=development --as csaju
Rakkess: For single resources you can use kubectl auth can-i list deployments, but maybe you are looking for a complete overview? This is what rakkess is for. It lists the access rights of the current user across all server resources, similar to kubectl auth can-i --list.
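rakkess is distributed as a krew plugin named access-matrix; a quick sketch of install and usage (treat the exact flags as an assumption and check the rakkess README):
kubectl krew install access-matrix
kubectl access-matrix -n development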
RBAC Manager: A Kubernetes operator that simplifies the management of Role Bindings and Service Accounts.
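With RBAC Manager, a grant like the one above can be declared as a single custom resource. A rough sketch using its RBACDefinition CRD, here binding the built-in edit ClusterRole to csaju in the development namespace (treat the exact schema as an assumption and check the rbac-manager docs):
apiVersion: rbacmanager.reactiveops.io/v1beta1
kind: RBACDefinition
metadata:
  name: csaju-access
rbacBindings:
  - name: deployment-managers
    subjects:
      - kind: User
        name: csaju
    roleBindings:
      - clusterRole: edit
        namespace: development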
Conclusion
Hopefully, this gave you a better understanding of the basic concepts needed to manage AWS IAM users in EKS clusters. When you are in charge of an EKS cluster, you can effectively manage access and permissions in Kubernetes by combining AWS IAM identities with Kubernetes RBAC resources.