Kubernetes Management Tasks and Commands

Uploaded by Rafi Ulla
© All Rights Reserved

1. List all persistent volumes sorted by capacity, saving the full kubectl output to
the text file path given in the exam.
kubectl get pv --sort-by=.spec.capacity.storage > <file-path-given-in-exam>
(PersistentVolumes are cluster-scoped, so no -n namespace flag is needed.)
2. Deploy a single instance of a pod on every node (i.e. a DaemonSet). Do not alter
taints on any node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
3. Create a deployment as follows:
- Name: ngnix-app
- Using container nginx with version 1.11.10-alpine
- The deployment should contain 3 replicas
Next, deploy the application with new version 1.13.0-alpine by performing a
rolling update, and record that update.
Finally, roll back that update to the previous version 1.11.10-alpine.
Create the deployment using a YAML file:
kubectl apply -f <deployment-file>.yaml --record
kubectl rollout history deployment ngnix-app
kubectl set image deployment/ngnix-app ngnix-app=nginx:1.13.0-alpine --record
kubectl rollout history deployment ngnix-app
kubectl describe deployment ngnix-app
kubectl rollout undo deployment/ngnix-app
4. Create and configure the service front-end-service so it's accessible through
ClusterIP and routes to the existing pod named front-end.
kubectl expose pod front-end --name=front-end-service --type=ClusterIP --port=80
5. Scale the deployment webserver to 3 pods.
kubectl scale deployment <deploymentname> --replicas=3
6. Check to see how many nodes are ready (not including nodes tainted NoSchedule)
and write the number to the file given in the question.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}
{.spec.taints[*].effect}{"\n"}{end}' | grep -v NoSchedule | wc -l > <file-given-in-question>
7. From pods running high-CPU workloads, write the name of the pod consuming the most
CPU to the file given in the question.
kubectl top pods --sort-by=cpu --no-headers -l name=app | head -1 | awk '{print
$1}' > <file-given-in-question>
8. Create a deployment as follows:
- Name: ngnix-random
- Exposed via a service ngnix-random
- Ensure that the service & pod are accessible via their respective DNS records
- The container(s) within any pod(s) running as part of this deployment
should use the nginx image
Next, use the utility nslookup to look up the DNS records of the service & pod,
and write the output to the two files given in the question.
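One way to carry out task 8, as a sketch: the output file names service-dns.txt and pod-dns.txt stand in for the paths given in the exam, and the default namespace is assumed.

```shell
# Create the deployment and expose it via a service with the same name
kubectl create deployment ngnix-random --image=nginx
kubectl expose deployment ngnix-random --name=ngnix-random --port=80

# Pod DNS records have the form <ip-with-dashes>.<namespace>.pod.cluster.local,
# so fetch the pod IP first
POD_IP=$(kubectl get pods -l app=ngnix-random -o jsonpath='{.items[0].status.podIP}')

# Run nslookup from a throwaway busybox pod and save the output
kubectl run temp-dns --image=busybox:1.28 --rm -it --restart=Never \
  -- nslookup ngnix-random > service-dns.txt
kubectl run temp-dns --image=busybox:1.28 --rm -it --restart=Never \
  -- nslookup "$(echo $POD_IP | tr . -).default.pod.cluster.local" > pod-dns.txt
```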
9. Take a backup of the etcd cluster and save it to a file.
ETCDCTL_API=3 etcdctl snapshot save /tmp/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
Snapshot saved at /tmp/snapshot.db
(Certificate paths above are the kubeadm defaults; use the paths given in the exam.)
10. A Kubernetes worker node is in state NotReady. Investigate why this is the case,
and perform any appropriate steps to bring the node to a Ready state.
check kubelet service
systemctl status kubelet
systemctl enable kubelet
systemctl start kubelet
11. Configure a kubelet-managed static pod on the mentioned worker node. Details of
the pod name will be given in the exam.
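A minimal static pod manifest sketch; the pod name and image are placeholders for the details given in the exam. On kubeadm clusters the kubelet watches /etc/kubernetes/manifests and creates the pod automatically once the file is placed there.

```yaml
# /etc/kubernetes/manifests/static-pod-example.yaml (placeholder name)
apiVersion: v1
kind: Pod
metadata:
  name: static-pod-example   # replace with the name given in the exam
spec:
  containers:
  - name: app
    image: nginx             # replace with the image given in the exam
```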
12. You need to set up a k8s cluster of 1 master and 1 worker node using the kubeadm
tool. A kubeadm config file will be given; using this file you need to initialize
the master node.
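The steps for task 12 can be sketched as follows; kubeadm.config stands in for the file provided in the exam, and the join token/hash come from the output of kubeadm init.

```shell
# On the master node: initialize the control plane from the given config file
kubeadm init --config=kubeadm.config

# Set up kubectl access for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the worker node: run the join command printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```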
13. Given a partially-functioning Kubernetes cluster, identify symptoms of failure
on the cluster. In my case the Kubernetes API server was not running; I needed to
find the issue and fix it.
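A sketch of a typical investigation on a kubeadm cluster, where the API server runs as a static pod and kubectl itself is unavailable while it is down:

```shell
# kubectl can't reach a down API server, so inspect from the control-plane node
systemctl status kubelet                   # the kubelet must be running to start static pods
crictl ps -a | grep kube-apiserver         # is the apiserver container running or crashing?
crictl logs <kube-apiserver-container-id>  # read the actual error

# A common culprit is a typo in the static pod manifest; fixing and saving
# the file makes the kubelet recreate the pod
vi /etc/kubernetes/manifests/kube-apiserver.yaml
```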
14. Create a PersistentVolume using a hostPath, with the specification given in the question.
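A minimal hostPath PersistentVolume sketch for task 14; the name, capacity, access mode, and path are placeholders for the values given in the question.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example          # placeholder name
spec:
  capacity:
    storage: 1Gi            # placeholder capacity
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/pv-example  # placeholder host path
```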
15. You need to add an init container to a given pod specification file; this init
container will create some file.
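A sketch of task 15: an initContainer creates a file on a shared volume before the main container starts. The pod name, images, file path, and volume name are placeholders for the exam's values.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-init        # placeholder name
spec:
  initContainers:
  - name: create-file
    image: busybox
    # creates the file the question asks for (placeholder path)
    command: ["sh", "-c", "touch /work-dir/created-by-init"]
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  containers:
  - name: app
    image: nginx             # placeholder main container
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  volumes:
  - name: workdir
    emptyDir: {}
```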
16. Set the node named worker node as unavailable and reschedule all the pods
running on it.
kubectl drain node01 --ignore-daemonsets --delete-local-data
17. Create a pod as follows:
- Name: non-persistent-redis
- Container image: redis
- Volume name: app-cache
- Mount path: /data/redis
It should launch in the qa namespace and the volume must not be persistent.
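A sketch for task 17 using an emptyDir volume (non-persistent), assuming the mount path /data/redis from the task above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: qa
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: app-cache
      mountPath: /data/redis
  volumes:
  - name: app-cache
    emptyDir: {}   # ephemeral storage, deleted when the pod is removed
```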
18. Create a Kubernetes secret as follows:
- Name: super-secret
- Credential: mouse
Create a pod named pod-secrets-via-file, using the redis image, which mounts the
secret named super-secret at /secrets.
Create a second pod named pod-secrets-via-env, using the redis image, which
exports the credential as an environment variable.
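A sketch of all three objects for task 18; the secret key name "credential" and the env var name "CREDENTIAL" are assumptions, since the question only gives the value.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: super-secret
stringData:
  credential: mouse          # key name is an assumption
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /secrets    # mounts each secret key as a file here
  volumes:
  - name: secret-vol
    secret:
      secretName: super-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
  - name: redis
    image: redis
    env:
    - name: CREDENTIAL       # env var name is an assumption
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: credential
```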
19. Create a single app container with NGINX + REDIS (1 and 4 instances, as given
in the question).
20. Create a deployment with the given number of replicas and store it in a YAML file.
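One way to carry out task 20, as a sketch; the deployment name, image, replica count, and file name are placeholders for the exam's values.

```shell
# Generate the manifest without creating the object, write it to a file,
# then apply it
kubectl create deployment my-deployment --image=nginx --replicas=3 \
  --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml
```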

Common questions


To sort persistent volumes by capacity and save their details to a text file in Kubernetes, you can use the command: `kubectl get pv --sort-by=.spec.capacity.storage > pv.txt`. This command lists all persistent volumes sorted by their storage capacity and saves the output to a file named 'pv.txt' (PersistentVolumes are cluster-scoped, so no namespace flag is needed).

To expose a Kubernetes pod via a ClusterIP service, use `kubectl expose pod <pod-name> --name=<service-name> --type=ClusterIP --port=<port-number>`. This command creates a ClusterIP service named 'front-end-service' routing traffic to the existing pod named 'front-end', and sets it to listen on port 80. This form of service exposes the pods internally within the cluster through an internal IP.

To check the number of ready nodes in a Kubernetes cluster while excluding those tainted with NoSchedule, use: `kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec.taints[*].effect}{"\n"}{end}' | grep -v NoSchedule | wc -l`. This command lists nodes without the NoSchedule taint and outputs the count. You can redirect this output to a file by appending `> nodecount.txt`.

To create a DaemonSet in Kubernetes that runs a specific container on every node without altering node taints, define a YAML like this: Under 'apiVersion' use 'apps/v1', set 'kind' as 'DaemonSet', and include metadata with 'name' as 'elasticsearch'. For 'spec', set 'selector' to match a label (e.g., 'name: elasticsearch'). In 'template', under 'spec', add a 'containers' section with the desired container, for example, using the image 'quay.io/fluentd_elasticsearch/fluentd:v2.5.2'. Apply the configuration using `kubectl apply -f <filename>.yaml`.

To create a non-persistent Redis pod in Kubernetes within the 'qa' namespace, you need to define a pod configuration specifying `name: non-persistent-redis` and use `image: redis`. Do not set up a persistent volume; instead use an ephemeral volume for temporary storage, typically an emptyDir at the mount path '/data/redis'. Deploy it in the 'qa' namespace using `kubectl apply -f <your-pod-config>.yaml --namespace=qa`.

To perform a rolling update on a Kubernetes deployment to upgrade the container image and roll back to a previous version, first create the deployment using the desired image version, for example, `kubectl create deployment ngnix-app --image=nginx:1.11.10-alpine`. To upgrade, use `kubectl set image deployment/ngnix-app ngnix-app=nginx:1.13.0-alpine --record`, which performs the update while recording the change. You can check update history with `kubectl rollout history deployment/ngnix-app`. To roll back to the earlier version, use `kubectl rollout undo deployment/ngnix-app`. This ensures version control for deployment upgrades.

To identify and handle high CPU workload pods in Kubernetes, run `kubectl top pods --sort-by=cpu --no-headers -l name=app`. This lists all pods sorted by CPU usage. Use `head -1` to get the top-most entry with the highest CPU usage. This information can be processed with `awk '{print $1}'` to extract the pod name. This helps focus response efforts on the pod consuming the most CPU.

To diagnose a Kubernetes cluster where the API server is not functioning, first check logs and events using `kubectl logs` and `kubectl get events` (if the API server is partially reachable). Then, inspect the status of etcd, control plane components, and ensure connectivity to the API server. Verify configuration files and network settings. Restarting the API server, checking firewall rules, or ensuring that etcd is accessible might be necessary. These checks are often done by inspecting server access logs, verifying process status, and using `kubectl get cs` to check component statuses, addressing any failing services or misconfigurations as identified.

To initialize a Kubernetes cluster with kubeadm, first define your configuration in a `kubeadm.config` file with details for the API server, network settings, etc. Begin by running `kubeadm init --config=kubeadm.config` on the master node. This sets up the control plane. After master setup, copy the cluster configuration, for example, using `mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`. Join a worker node using the token and certificate from the master with `kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash <hash>`. This assembles the master and worker into the desired cluster configuration.

To drain a Kubernetes node and reschedule its pods on other nodes, use the command `kubectl drain <node-name> --ignore-daemonsets --delete-local-data`. This command marks the node as unschedulable, ensuring running pods are evicted and rescheduled on other available nodes. It skips DaemonSet-managed pods and deletes pods' local (emptyDir) data, making it suitable for maintenance and scaling activities.
