List PODs running on a specific node
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node_name>
List Taints of all nodes
kubectl describe nodes | grep Taint
Restart a daemonset, deployment or statefulset
kubectl rollout restart <daemonset|deployment|statefulset>/<name>
Get logs from a pod
kubectl logs <pod_name> -c <container_name> -n <namespace>
Connect to a container
kubectl exec -it <pod_name> -c <container_name> -n <namespace> -- /bin/bash
Service port forwarding
kubectl port-forward svc/[service-name] -n [namespace] [external-port]:[internal-port] --address 0.0.0.0
The --address 0.0.0.0 option binds the forwarded port on all interfaces, so the service can be reached from outside localhost.
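As a concrete sketch (namespace, service and ports are hypothetical), the full command can be built like this; it is echoed rather than executed, since it needs a live cluster:

```shell
# Forward local port 8080 to port 80 of a hypothetical service "my-service";
# --address 0.0.0.0 listens on all interfaces so other hosts can connect.
ns=default
svc=my-service
cmd="kubectl port-forward svc/$svc -n $ns 8080:80 --address 0.0.0.0"
echo "$cmd"   # echoed only; run the command directly against a real cluster
```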
Getting nodes memory/cpu usage
kubectl top nodes
Getting top pods sorted by CPU
kubectl top pods -A --sort-by='cpu'
Getting top pods sorted by memory
kubectl top pods -A --sort-by='memory'
kubectl top uses the Kubernetes Metrics API, so Metrics Server needs to be installed. It is installed by default in K3s.
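Before relying on kubectl top, the registered Metrics APIService can be checked. A minimal sketch (the APIService name matches the standard metrics-server deployment; the command is echoed because no cluster is assumed here):

```shell
# Check for the metrics.k8s.io APIService registered by metrics-server.
# Echoed only; run the command directly against a real cluster.
check="kubectl get apiservice v1beta1.metrics.k8s.io"
echo "$check"
```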
How to run curl in Kubernetes (for troubleshooting)
Run curl commands against any POD or service endpoint by spinning up a new pod from a utility image that contains curl. For example, the official curl Docker image supports multiple architectures (amd64 and arm64):
kubectl run -it --rm --image=curlimages/curl curly -- sh
The busybox image is another useful troubleshooting POD, but it does not contain the curl command (it includes wget instead).
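A hedged sketch of the busybox alternative, using wget in place of curl (the service URL is hypothetical); the command is echoed instead of executed, since it needs a cluster:

```shell
# busybox ships wget, not curl, so in-cluster probes use wget -qO- instead.
url="http://my-service.default.svc.cluster.local"   # hypothetical endpoint
cmd="kubectl run -it --rm --image=busybox bb -- wget -qO- $url"
echo "$cmd"
```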
Patching Helm manifest files on the fly using Kustomize
Helm provides the possibility of manipulating, configuring, and/or validating rendered manifests before they are installed, through the --post-renderer option. This enables the use of kustomize to apply configuration changes without the need to fork a public chart or requiring chart maintainers to specify every last configuration option for a piece of software.
kubectl kustomize <kustomization_directory>
kubectl apply -k <kustomization_directory>
Based on the procedure described in this post, kustomize can be used to apply patches to the manifest files generated by Helm before installing them.
Step 1: Create a working directory for the kustomize files
Step 2: Create a kustomize wrapper script within the directory:
#!/bin/bash
# save incoming YAML to file
cat <&0 > all.yaml
# modify the YAML with kustomize
kubectl kustomize . && rm all.yaml
The script simply saves all incoming manifest files from the Helm chart to a temporary file (all.yaml), then executes kubectl kustomize on the current directory to apply the kustomize transformations, and finally removes the temporary file.
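The wrapper's save-stdin/transform/cleanup pattern can be exercised locally with a stand-in transform: plain cat replaces kubectl kustomize . so the sketch runs without a kustomization directory or a cluster:

```shell
# Pipe fake rendered YAML through the same pattern the wrapper uses:
# save stdin to a temp file, "transform" it (cat stands in for
# kubectl kustomize .), and remove the temp file afterwards.
tmp=$(mktemp)
out=$(printf 'kind: ConfigMap\n' | sh -c "cat <&0 > $tmp && cat $tmp && rm $tmp")
echo "$out"
```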
Step 3: Create the kustomize files. In this example, an environment variable (POD_IP) within the longhorn-manager DaemonSet will be patched with a new value.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml
patches:
  - path: patch.yaml
    target:
      kind: DaemonSet
      name: "longhorn-manager"
This kustomization file indicates that the longhorn-manager DaemonSet has to be patched using patch.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-manager
spec:
  template:
    spec:
      containers:
        - name: longhorn-manager
          env:
            - name: POD_IP
              value: 0.0.0.0
              valueFrom:
NOTE: The key valueFrom needs to be set to null (left empty) in order to delete its previous value, since an environment variable cannot define both value and valueFrom.
Step 4: Execute a dry run of helm install to check the changes in the manifest files
helm install longhorn longhorn/longhorn -f ../longhorn_values.yml --post-renderer ./kustomize --debug --dry-run
Step 5: Deploy the helm chart
helm install longhorn longhorn/longhorn -f ../longhorn_values.yml --post-renderer ./kustomize --namespace longhorn-system
Ansible's helm module does not yet support the --post-renderer option. There is an open issue in the kubernetes.core Ansible collection for providing this functionality.
Move pods from one node to another
Sometimes a pod needs to be executed on another node, for example because it is pushing its current node to its limits in terms of resources while another node is less used.
The procedure is the following:
Step 1: Get information about the node where the pod is running
kubectl get pod <pod-name> -n <namespace> -o wide
Step 2: Cordon the node where the pod is currently running, so the Kubernetes scheduler cannot place new PODs on it
kubectl cordon <node>
Kubernetes cordon is an operation that marks or taints a node in your existing node pool as unschedulable. By using it on a node, you can be sure that no new pods will be scheduled for this node. The command prevents the Kubernetes scheduler from placing new pods onto that node, but it doesn’t affect existing pods on that node.
Step 3: Delete the POD. It is assumed that the POD is controlled by a ReplicaSet or StatefulSet, so after deleting it Kubernetes will automatically reschedule it on any node that is not cordoned
kubectl delete pod <pod> -n <namespace>
Step 4: Check that the POD has started on another node
Step 5: Uncordon the node, so it can be used again to schedule pods.
kubectl uncordon <node>
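The whole procedure can be sketched as a single function (pod, namespace and node names are hypothetical); the kubectl commands are echoed so the sketch runs without a cluster — drop the echo prefix to execute them for real:

```shell
# Cordon the source node, delete the pod so its controller (ReplicaSet,
# StatefulSet, ...) reschedules it on another node, then uncordon.
# In practice, verify the pod landed elsewhere before uncordoning.
move_pod() {
  local pod=$1 ns=$2 node=$3
  echo kubectl cordon "$node"
  echo kubectl delete pod "$pod" -n "$ns"
  echo kubectl uncordon "$node"
}
move_pod my-pod default node1
```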