Kubernetes Management Tasks and Commands
To sort persistent volumes by capacity and save their details to a text file, use: `kubectl get pv --sort-by=.spec.capacity.storage > pv.txt`. This lists all persistent volumes sorted by their storage capacity and saves the output to a file named 'pv.txt'. Note that persistent volumes are cluster-scoped resources, so no `-n <namespace>` flag applies.
To expose a Kubernetes pod via a ClusterIP service, use `kubectl expose pod <pod-name> --name=<service-name> --type=ClusterIP --port=<port-number>`. For example, `kubectl expose pod front-end --name=front-end-service --type=ClusterIP --port=80` creates a ClusterIP service named 'front-end-service' that routes traffic to the existing pod 'front-end' and listens on port 80. A ClusterIP service exposes pods only within the cluster, through an internal IP.
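The service that `kubectl expose` creates is roughly equivalent to the following manifest (a sketch; the `app: front-end` selector is an assumption for illustration, since `kubectl expose` actually copies the pod's own labels into the selector):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-service
spec:
  type: ClusterIP
  selector:
    app: front-end   # assumed label; expose copies the pod's existing labels
  ports:
  - port: 80         # port the service listens on
    targetPort: 80   # port on the pod receiving the traffic
```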
To count the schedulable nodes in a Kubernetes cluster, excluding those tainted with NoSchedule, use: `kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.spec.taints[*].effect}{"\n"}{end}' | grep -v NoSchedule | wc -l`. The jsonpath expression prints one line per node with the node's name and taint effects; `grep -v` drops lines containing NoSchedule, and `wc -l` counts the remainder. You can redirect this output to a file by appending `> nodecount.txt`. If only Ready nodes should be counted, filter the node list for the Ready status first, e.g. with `kubectl get nodes | grep -w Ready`.
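The filtering stage of that pipeline is plain text processing, so it can be sketched against sample jsonpath output (the node names below are hypothetical):

```shell
# Sample output of the jsonpath expression: one line per node,
# node name followed by its taint effects (if any).
sample='node-a NoSchedule
node-b
node-c NoExecute
node-d NoSchedule'

# Drop nodes tainted NoSchedule, then count the rest.
printf '%s\n' "$sample" | grep -v NoSchedule | wc -l   # counts node-b and node-c
```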
To create a DaemonSet in Kubernetes that runs a specific container on every node without altering node taints, define a YAML manifest: set 'apiVersion' to 'apps/v1', 'kind' to 'DaemonSet', and 'metadata.name' to 'elasticsearch'. Under 'spec', set 'selector' to match a label (e.g., 'name: elasticsearch'), and give the pod 'template' that same label. In the template's 'spec', add a 'containers' entry with the desired image, for example 'quay.io/fluentd_elasticsearch/fluentd:v2.5.2'. Apply the configuration using `kubectl apply -f <filename>.yaml`.
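Putting those pieces together, a minimal manifest might look like this (a sketch; the pod template's labels must match the selector or the DaemonSet is rejected):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch   # must match spec.selector.matchLabels
    spec:
      containers:
      - name: elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
```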
To create a non-persistent Redis pod in Kubernetes within the 'qa' namespace, define a pod configuration specifying `name: non-persistent-redis` and `image: redis`. Do not set up a persistent volume; instead use an ephemeral `emptyDir` volume for temporary storage, mounted at a path such as '/data/redis'. Deploy it in the 'qa' namespace using `kubectl apply -f <your-pod-config>.yaml --namespace=qa`.
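A manifest along those lines (a sketch; the mount path `/data/redis` and volume name `cache` are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: qa
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache
      mountPath: /data/redis   # assumed mount path
  volumes:
  - name: cache
    emptyDir: {}               # ephemeral; data is lost when the pod is deleted
```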
To perform a rolling update on a Kubernetes deployment and later roll back, first create the deployment with the initial image, for example `kubectl create deployment nginx-app --image=nginx:1.11.10-alpine`. To upgrade, use `kubectl set image deployment/nginx-app nginx=nginx:1.13.0-alpine --record` (the container created by `kubectl create deployment` is named after the image, here 'nginx'; the `--record` flag, now deprecated, stores the command in the rollout history). Check the update history with `kubectl rollout history deployment/nginx-app`. To roll back, use `kubectl rollout undo deployment/nginx-app`, optionally with `--to-revision=<n>` to target a specific earlier revision. This gives version control over deployment upgrades.
To identify the pod with the highest CPU usage in Kubernetes, run `kubectl top pods --sort-by=cpu --no-headers -l name=app`. This lists the pods matching the label `name=app`, sorted by CPU usage in descending order. Pipe through `head -1` to keep the top entry, and through `awk '{print $1}'` to extract just the pod name. This helps focus response efforts on the pod consuming the most CPU.
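Since `kubectl top` emits plain whitespace-separated columns, the extraction stage can be sketched against sample output (the pod names and readings below are hypothetical):

```shell
# Sample `kubectl top pods --sort-by=cpu --no-headers` output:
# columns are NAME, CPU(cores), MEMORY(bytes), highest CPU first.
sample='app-7f9c   850m   120Mi
app-5b2d   310m   98Mi
app-9a1e   45m    64Mi'

# Top consumer is the first line; awk pulls out just the pod name.
printf '%s\n' "$sample" | head -1 | awk '{print $1}'   # prints app-7f9c
```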
To diagnose a Kubernetes cluster where the API server is not functioning, keep in mind that `kubectl` itself talks to the API server, so client-side commands such as `kubectl logs` or `kubectl get events` will likely fail. Work on the control-plane node instead: check the kubelet with `journalctl -u kubelet`, inspect the static pod manifests in /etc/kubernetes/manifests, and list control-plane containers with the container runtime (e.g., `crictl ps -a`) to read the API server's own logs. Verify that etcd is healthy and reachable, and check configuration files, certificates, and network/firewall settings. Restarting the API server container or the kubelet may be necessary. Once the API server responds again, `kubectl get events` and `kubectl get cs` (deprecated, but still informative) help confirm the remaining components are healthy.
To initialize a Kubernetes cluster with kubeadm, first define your configuration in a `kubeadm.config` file with details for the API server, network settings, and so on. Run `kubeadm init --config=kubeadm.config` on the master node to set up the control plane. After the master is up, copy the cluster configuration for kubectl: `mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`. Join each worker node using the token and CA certificate hash printed by `kubeadm init`: `kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash <hash>`. This assembles the master and workers into the desired cluster configuration.
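A minimal `kubeadm.config` might look like the following (a sketch; the API version, Kubernetes version, endpoint address, and pod subnet are placeholder assumptions to adapt to your environment):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0               # placeholder version
controlPlaneEndpoint: "10.0.0.10:6443"   # placeholder API server address
networking:
  podSubnet: "192.168.0.0/16"            # placeholder pod CIDR for the CNI plugin
```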
To drain a Kubernetes node and reschedule its pods on other nodes, use `kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data` (`--delete-local-data` is the older, deprecated spelling of the same flag). The command cordons the node, marking it unschedulable, then evicts its pods so their controllers recreate them on other available nodes. `--ignore-daemonsets` skips DaemonSet-managed pods, and `--delete-emptydir-data` permits evicting pods that use emptyDir volumes, whose data is lost. This makes drain suitable for maintenance and scaling activities; run `kubectl uncordon <node-name>` afterwards to return the node to service.