Install EKS-D with MicroK8s

What is EKS-D

Amazon EKS Distro (EKS-D) is a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service (Amazon EKS). It provides the latest upstream updates as well as extended security patching support. EKS-D follows the same Kubernetes version release cycle as Amazon EKS.

Deploying EKS-D

We will need to install MicroK8s from a specific channel that contains the EKS-D distribution.

sudo snap install microk8s --classic --channel 1.22-eksd/stable

MicroK8s channels are frequently updated with each release of EKS-D. Channels are made up of a track and an expected level of MicroK8s' stability. Try snap info microk8s to see which versions are currently published.
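
For example, to see only the EKS-D channels (the grep filter below is just an illustration):

snap info microk8s | grep eksd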

Enable AWS specific addons

The EKS-D channels package addons for the AWS-specific resources that integrate with Kubernetes. These addons are:

  • IAM Authenticator
  • Elastic Block Store CSI Driver
  • Elastic File System CSI Driver

Enabling IAM Authenticator

First we need to create an IAM role that is going to be mapped to users/groups in the Kubernetes cluster. The role can be created using the AWS CLI:

# get your account ID
ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')

# define a role trust policy that opens the role to users in your account (limited by IAM policy)
POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::'; echo -n "$ACCOUNT_ID"; echo -n ':root"},"Action":"sts:AssumeRole","Condition":{}}]}')

# create a role named KubernetesAdmin (will print the new role's ARN)
aws iam create-role \
  --role-name KubernetesAdmin \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn'

We then enable the aws-iam-authenticator addon, which installs the required workloads and configures the API server as necessary.

sudo microk8s enable aws-iam-authenticator

As an example, we can map our role to cluster admin by replacing <ROLE_ARN> with the ARN created above and updating the aws-iam-authenticator ConfigMap with the file below:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to your cluster)
    clusterID: aws-cluster-****
    server:
      # each mapRoles entry maps an IAM role to a username and set of groups
      # Each username and group can optionally contain template parameters:
      #  1) "{{AccountID}}" is the 12 digit AWS ID.
      #  2) "{{SessionName}}" is the role session name, with `@` characters
      #     transliterated to `-` characters.
      #  3) "{{SessionNameRaw}}" is the role session name, without character
      #     transliteration (available in version >= 0.5).
      mapRoles:
      # statically map arn:aws:iam::000000000000:role/KubernetesAdmin to a cluster admin
      - roleARN: <ROLE_ARN>
        username: kubernetes-admin
        groups:
        - system:masters
      # each mapUsers entry maps an IAM role to a static username and set of groups
      mapUsers:
      # map user IAM user Alice in 000000000000 to user "alice" in "system:masters"
      #- userARN: arn:aws:iam::000000000000:user/Alice
      #  username: alice
      #  groups:
      #  - system:masters
      # List of Account IDs to whitelist for authentication
      mapAccounts:
      # - <AWS_ACCOUNT_ID>
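
Assuming the manifest above is saved to a local file (the file name below is only an example), the role ARN can be substituted and the ConfigMap applied as follows:

# substitute the ARN printed by the create-role command above
ROLE_ARN="arn:aws:iam::<account-id>:role/KubernetesAdmin"
sed -i "s|<ROLE_ARN>|$ROLE_ARN|" aws-iam-authenticator-cm.yaml
sudo microk8s kubectl apply -f aws-iam-authenticator-cm.yaml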

We need to restart the aws-iam-authenticator DaemonSet after updating the ConfigMap to propagate our changes.

sudo microk8s kubectl rollout restart ds aws-iam-authenticator -n kube-system

We need to install the aws-iam-authenticator binary on any machine that will use IAM authentication to manage the Kubernetes cluster. The AWS authenticator is called by kubectl and produces a token; this token is used to map you to a Kubernetes user. The installation steps depend on the workstation you are on, so please follow the steps described in the official docs.
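
Once the binary is installed, you can check that it produces a token by invoking it with the same arguments used in the kubeconfig below (cluster ID and role ARN are placeholders):

aws-iam-authenticator token -i <cluster-id> -r <role-arn>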

Afterwards we can create a kubeconfig file and use it to authenticate to our MicroK8s cluster.

apiVersion: v1
clusters:
- cluster:
   server: <endpoint-url>
   certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
   cluster: kubernetes
   user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-id>"
        - "-r"
        - "<role-arn>"
  1. Replace <endpoint-url> with the endpoint of your cluster. If you intend to access the cluster from outside EC2 through the node’s public endpoints (IP/DNS), please see the respective document. Note that the MicroK8s snap configures the API server to listen on all interfaces.
  2. Replace <base64-encoded-ca-cert> with the base64 representation of the cluster’s CA. Copy this from the output of microk8s config (see the example after this list).
  3. Replace the aws-iam-authenticator command with the full path to where the aws-iam-authenticator binary is installed, if it is not on your PATH.
  4. Replace <cluster-id> with the cluster ID shown by sudo microk8s kubectl describe -n kube-system cm/aws-iam-authenticator | grep clusterID
  5. Replace <role-arn> with the ARN of the KubernetesAdmin role created earlier.
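
For example, the value for step 2 can be copied from the MicroK8s kubeconfig output:

sudo microk8s config | grep certificate-authority-data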

You can install kubectl and have it use the kubeconfig file you just created with the --kubeconfig parameter.
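
For example, assuming the kubeconfig above was saved as eksd-kubeconfig (the file name is only an illustration):

kubectl --kubeconfig ./eksd-kubeconfig get nodes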

Enabling EBS CSI Driver

First we need to grant the required permissions to the AWS user we will use for the driver by attaching a policy. The driver requires IAM permissions to talk to Amazon EBS to manage volumes on the user’s behalf. The example policy in the EBS CSI driver documentation defines these permissions; AWS also maintains a managed policy, available at ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy.

We can attach the policy to the user using the AWS CLI:

aws iam attach-user-policy \
  --user-name <user-name-here> \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
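
If you do not already have an access key for this user, one can be created with the AWS CLI:

aws iam create-access-key --user-name <user-name-here> --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text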

We can supply the credentials of the user we attached the policy to and deploy the driver with:

sudo microk8s enable aws-ebs-csi-driver -k <access-key-id> -a <secret-access-key>

To test the setup afterwards you can create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi

And use it in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: ebs-app
spec:
  containers:
  - name: ebs-app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
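
Assuming the PVC and pod manifests above are saved locally (the file names below are only examples), apply them with:

sudo microk8s kubectl apply -f ebs-claim.yaml
sudo microk8s kubectl apply -f ebs-app.yaml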

To verify everything:

sudo microk8s kubectl exec -ti ebs-app -- tail -f /data/out.txt
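
You can also confirm that the claim was bound and a volume was dynamically provisioned:

sudo microk8s kubectl get pvc ebs-claim
sudo microk8s kubectl get pv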

Enabling EFS CSI Driver

First we need to set up the proper IAM permissions. The driver requires IAM permission to talk to Amazon EFS to manage volumes on the user’s behalf. Use an IAM instance profile to grant all worker nodes the required permissions by attaching a policy to the instance profile of each worker.

You can create the instance profile using the AWS CLI:

POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["elasticfilesystem:DescribeAccessPoints","elasticfilesystem:DescribeFileSystems","elasticfilesystem:DescribeMountTargets","ec2:DescribeAvailabilityZones"],"Resource":"*"},{"Effect":"Allow","Action":["elasticfilesystem:CreateAccessPoint"],"Resource":"*","Condition":{"StringLike":{"aws:RequestTag/efs.csi.aws.com/cluster":"true"}}},{"Effect":"Allow","Action":"elasticfilesystem:DeleteAccessPoint","Resource":"*","Condition":{"StringEquals":{"aws:ResourceTag/efs.csi.aws.com/cluster":"true"}}}]}')
ROLE_POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}')

POLICY_ARN=$(aws iam create-policy --policy-name mk8s-ec2-policy --policy-document "$POLICY" --query "Policy.Arn" --output text)

aws iam create-role --role-name mk8s-ec2-role --assume-role-policy-document "$ROLE_POLICY" --description "Kubernetes EFS role (for the EFS CSI Driver for Kubernetes)."
aws iam attach-role-policy --role-name mk8s-ec2-role --policy-arn $POLICY_ARN

aws iam create-instance-profile --instance-profile-name mk8s-ec2-iprof
aws iam add-role-to-instance-profile --instance-profile-name mk8s-ec2-iprof --role-name mk8s-ec2-role

Attach the created instance profile to the instance using the AWS console.
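
Alternatively, the instance profile can be attached with the AWS CLI (replace the instance ID with your own):

aws ec2 associate-iam-instance-profile --instance-id <instance-id> --iam-instance-profile Name=mk8s-ec2-iprof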

Afterwards we need to create the EFS file system that will be used by the driver for provisioning volumes. We need to create the file system in the same availability zone that our workers are in. We can set up the EFS using the AWS CLI:

INSTANCE_ID="instance-id-here"

AVAILABILITY_ZONE=$(aws ec2 describe-instances --instance-id $INSTANCE_ID --query "Reservations | [0].Instances | [0].Placement.AvailabilityZone" --output text)

SUBNET_ID=$(aws ec2 describe-instances --instance-id $INSTANCE_ID --query "Reservations | [0].Instances | [0].SubnetId" --output text)

SG_ID=$(aws ec2 create-security-group --group-name mk8s-efs-sg --description "MicroK8s EFS testing security group" --query "GroupId" --output text)

aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 2049 --cidr 0.0.0.0/0

EFS_ID=$(aws efs create-file-system --encrypted --creation-token mk8s-efs --tags Key=Name,Value=mk8s-efs --availability-zone-name $AVAILABILITY_ZONE --query "FileSystemId" --output text)

aws efs create-mount-target --file-system-id $EFS_ID --subnet-id $SUBNET_ID --security-groups $SG_ID

We also need to make sure the security groups used by the worker instances have port 2049 open for inbound TCP connections.

aws ec2 authorize-security-group-ingress --group-id <sg-id-of-instance> --protocol tcp --port 2049 --cidr 0.0.0.0/0

Afterwards we can supply the ID of the EFS file system we just created and enable the addon:

sudo microk8s enable aws-efs-csi-driver -i <efs-id>
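
The addon is expected to register an EFS storage class (the efs-sc name is assumed from the claim below); you can list the available storage classes with:

sudo microk8s kubectl get storageclass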

To test the setup you can create a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

And use it in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: efs-app
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
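
As before, assuming the manifests above are saved locally (the file names below are only examples), apply them with:

sudo microk8s kubectl apply -f efs-claim.yaml
sudo microk8s kubectl apply -f efs-app.yaml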

To verify everything:

sudo microk8s kubectl exec -ti efs-app -- tail -f /data/out.txt

Links

IAM authenticator
Storage
