An EKS Kubernetes cluster has already been deployed in the cloud, and
access permissions have been granted to the StudentsGroup. You are
going to use kubectl to interact with the cluster.
Log in (as a non-root user) on your groupXY-server:
ubuntu@ip-10-30-0-74:~$
To install kubectl version 1.33.3, follow these
steps:
Use curl to download the specific version of
kubectl from the official Kubernetes release page.
curl -LO https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubectl
(If this fails because curl isn’t installed, then sudo apt-get install curl)
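Optionally, you can verify the downloaded binary against its published checksum before installing it. The checksum file comes from the same official release page:
curl -LO https://dl.k8s.io/release/v1.33.3/bin/linux/amd64/kubectl.sha256
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
If the file is intact, this prints "kubectl: OK".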
Use sudo to copy the kubectl binary to
/usr/local/bin, which is a common directory for
user-installed binaries. This command also sets the owner and group of
the binary to root and sets the correct permissions for
execution.
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
Check that the installation was successful by running the
kubectl version command. This will show you the version of
kubectl that is currently installed.
kubectl version --client
Note: The --client flag is used to show the
kubectl version without requiring a connection to a
Kubernetes cluster.
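If the installation worked, the output should look similar to this (recent kubectl releases also print a Kustomize version line):
Client Version: v1.33.3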
Use the AWS CLI to configure kubectl to connect to your Amazon EKS cluster. Replace ap-southeast-1 with the region your cluster is in. “my-eks-cluster” is the name of the EKS cluster.
aws eks --region ap-southeast-1 update-kubeconfig --name my-eks-cluster
Unfortunately, this will return an error:
An error occurred (AccessDeniedException) when calling the DescribeCluster operation:
User: arn:aws:sts::058264411872:assumed-role/group12-awscli/i-07a53a2bc30fbfbe0
is not authorized to perform: eks:DescribeCluster on resource:
arn:aws:eks:ap-southeast-1:058264411872:cluster/my-eks-cluster
That’s because your VM is running with the role
groupXY-awscli that you created earlier. That role has
permissions to talk to the EC2 and S3 APIs, but not the EKS API.
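If you want to confirm which role your CLI calls are using, you can ask the STS API; this call requires no special permissions:
aws sts get-caller-identity
The Arn field in the response should show the assumed role groupXY-awscli, matching the error message above.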
In the AWS web interface, go to IAM.
In the left navigation, select “Roles”. (Open the navigation pane using the hamburger menu if necessary.)
Click on your existing role “groupXY-awscli” (where XY is your group number). You should see it has some policies already attached. The little orange cubes mean these are AWS managed (predefined) policies.
On the right-hand side, click Add permissions > Create inline policy.

In the policy editor, allow all EKS actions (eks:*).
At the top, where it says Visual | JSON, click on JSON.
This shows you the internal form:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "eks:*",
            "Resource": "*"
        }
    ]
}
Now click Next.
For Policy name enter “groupXY-eks” (changing XY to your group number), then click Create Policy. You should find that your new policy is now included in your permissions.
Return to your VM, and repeat the command:
aws eks --region ap-southeast-1 update-kubeconfig --name my-eks-cluster
You should get a response similar to this:
Updated context arn:aws:eks:ap-southeast-1:058264411872:cluster/my-eks-cluster in /home/ubuntu/.kube/config
A kubectl config file has been written to the appropriate location, with credentials (a certificate) to talk to the Kubernetes API.
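If you’re curious, you can check which context kubectl will now use:
kubectl config current-context
This should print the cluster ARN from the message above. The full file is in /home/ubuntu/.kube/config if you want to look inside.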
Now send a request to the Kubernetes API:
kubectl get nodes
Unfortunately, this is still rejected:
error: You must be logged in to the server (Unauthorized)
At this point, although you’re able to connect to the API, your user still doesn’t have any authorization at the Kubernetes level to perform any actions.
Now you have to go to an area of the web interface you haven’t used before.
In the search box at the very top of the AWS web interface, enter “EKS” and jump to “Elastic Kubernetes Service”
You should get a list of clusters (with one cluster). Click on “my-eks-cluster”. You should get a busy page of information. Click on the “Access” tab in the middle, highlighted here:

The next page has a section headed “IAM access entries”. Click on “Create access entry”.

On the next page:

Click “Next” to get a summary:

Click “Create”
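As an aside, the same access entry can also be created from the CLI instead of the console. This is a sketch, assuming your role ARN matches the error message earlier and that you want the predefined AmazonEKSClusterAdminPolicy access policy (the console steps above are the intended path for this lab):
aws eks create-access-entry --region ap-southeast-1 --cluster-name my-eks-cluster --principal-arn arn:aws:iam::058264411872:role/groupXY-awscli
aws eks associate-access-policy --region ap-southeast-1 --cluster-name my-eks-cluster --principal-arn arn:aws:iam::058264411872:role/groupXY-awscli --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster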
Try the command at the CLI again. The command below should output the nodes along with their statuses, roles, ages, and versions:
kubectl get nodes
Expected Output:
NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-1-95.ec2.internal    Ready    <none>   42h   v1.33.2-eks-5e0fdde
ip-10-0-2-213.ec2.internal   Ready    <none>   42h   v1.33.2-eks-5e0fdde
...etc
This output indicates that kubectl is correctly
configured to communicate with your EKS cluster and that your nodes are
ready and operational.
Learn how to create and use a Kubernetes secret to store and access sensitive information. Deploy a new pod with a Kubernetes secret inside.
Create a password and base64 encode it. Take note of the output of the command.
echo -n 'choose a good password and put it here' | base64
If the command is successful, the output of the command above should look similar to this:
Y2hvb3NlIGEgZ29vZCBwYXNzd29yZCBhbmQgcHV0IGl0IGhlcmU=
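You can check the encoding by reversing it; decoding the example above returns the original phrase:
echo 'Y2hvb3NlIGEgZ29vZCBwYXNzd29yZCBhbmQgcHV0IGl0IGhlcmU=' | base64 --decode
Note that base64 is an encoding, not encryption: anyone with the encoded string can recover the password.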
Using a text editor, create a file called “secret.yaml” containing the text below.
Make sure to put your group number where it says groupXY.
YAML is sensitive to indentation, so ensure that you preserve the indentation when you copy and paste the text below.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-groupXY
type: Opaque
data:
  password: your base64 encoded secret
Apply the secret using kubectl
kubectl apply -f secret.yaml
If the command is successful you should see output like this:
secret/my-secret-groupXY created
Check if the secret was created successfully.
kubectl get secrets
View detailed information about the secret.
kubectl get secret my-secret-groupXY -o yaml
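Note that the data values in that output are only base64-encoded, not encrypted. Anyone with read access to the secret can recover the plaintext, for example:
kubectl get secret my-secret-groupXY -o jsonpath='{.data.password}' | base64 --decode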
Create a Pod that uses the secret. Save the following YAML to
pod-using-secret.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod-groupXY
spec:
  containers:
  - name: test-container
    image: nginx
    env:
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret-groupXY
          key: password
  restartPolicy: Never
Apply the YAML to create the Pod.
kubectl apply -f pod-using-secret.yaml
If the command is successful you should see output like this:
pod/secret-test-pod-groupXY created
Now check to make sure the container has been scheduled and is running:
kubectl get pods
If the container is running, you should see output like this:
NAME                      READY   STATUS    RESTARTS   AGE
secret-test-pod-groupXY   1/1     Running   0          8s
Exec into the pod and verify the environment variable.
kubectl exec -it secret-test-pod-groupXY -- /bin/bash
echo $SECRET_PASSWORD # This should print the password you created
If you see the password you entered, then everything has succeeded.
Exit the interactive session with the following command:
exit
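As an alternative to the interactive session, you can read a single environment variable in one shot:
kubectl exec secret-test-pod-groupXY -- printenv SECRET_PASSWORD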
Using kubectl, delete the pod you just created:
kubectl delete pod secret-test-pod-groupXY
If you see the output pod "secret-test-pod-groupXY" deleted, the delete has succeeded.
Similarly, delete the secret
kubectl delete secret my-secret-groupXY
Learn to deploy an application in Kubernetes with a Deployment that ensures Pods are scheduled on different nodes. This exercise will highlight Kubernetes’ scheduling capabilities, demonstrate the cluster’s self-healing mechanism, and teach you how to manage application deployment and removal effectively.
Replace groupXY in the provided resource manifests with your specific group number.
Generate a Kubernetes Secret to securely store sensitive information, to be utilized by your Deployment. This time we’ll use a shorter version of the command, which does the base64 encoding for you:
kubectl create secret generic my-secret-groupXY --from-literal=password='YourSecretPassword'
If your command is successful, a secret named my-secret-groupXY is now securely stored in your Kubernetes cluster.
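You can confirm the secret exists without revealing its value; describe shows the key and its size in bytes, but not the data itself:
kubectl describe secret my-secret-groupXY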
Prepare a YAML configuration file named
nginx-deployment-groupXY.yaml for your deployment. Ensure
you replace groupXY with your group number. This deployment
is configured to spread pods across different nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-groupXY
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-groupXY
  template:
    metadata:
      labels:
        app: nginx-groupXY
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - nginx-groupXY
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx-container
        image: nginx
        env:
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-secret-groupXY
              key: password
Deploy the application:
kubectl apply -f nginx-deployment-groupXY.yaml
If your command is successful, the deployment
nginx-deployment-groupXY will initiate with 2 replicas,
utilizing pod anti-affinity to ensure distribution across different
nodes for enhanced fault tolerance and availability.
Confirm that your Pods are deployed across different nodes by inspecting the node assignment:
kubectl get pods -o wide -l app=nginx-groupXY
If your command is successful, the NODE
column in the output will show different node names for each Pod,
indicating that the Pods are running on separate nodes as intended.
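Optionally, you can see the required anti-affinity rule at work. Assuming the cluster has exactly two worker nodes, a third replica has no node left that satisfies the rule, so it should remain Pending (scale back down afterwards):
kubectl scale deployment nginx-deployment-groupXY --replicas=3
kubectl get pods -l app=nginx-groupXY
kubectl scale deployment nginx-deployment-groupXY --replicas=2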
Manually delete one Pod to observe the Kubernetes self-healing process:
kubectl delete pod <name of the pod>
(The name will be something very long, like nginx-deployment-groupXY-6fd6bd8c9c-jh4gj - use copy and paste!)
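If you want to watch the replacement Pod appear in real time, you can run the get command with the watch flag in another terminal (press Ctrl-C to stop watching):
kubectl get pods -l app=nginx-groupXY -w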
Check to see if your pod has been rescheduled. If it was rescheduled, you will see a pod with an AGE of around 5s:
kubectl get pods
If your command is successful, Kubernetes’ Deployment controller will automatically create a new Pod to replace the deleted one, ensuring the desired state of your deployment is maintained without manual intervention.
Remove the Deployment to clean up resources:
kubectl delete deployment nginx-deployment-groupXY
If your command is successful, the Deployment and its associated Pods will be deleted from your cluster, showcasing Kubernetes’ ability to manage application lifecycles efficiently.
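The secret created at the start of this exercise is not removed when the Deployment is deleted, so clean it up separately:
kubectl delete secret my-secret-groupXY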