
8 Reasons Why a Kubernetes Pod Is in a Pending State

When a Kubernetes pod is in a Pending state, the cluster has accepted the pod, but it is not yet running: either the scheduler has not found a node to place it on, or the node it was assigned to has not been able to start its containers (for example, because an image has not been pulled yet).

Several reasons could cause this, ranging from insufficient resources to scheduling constraints.


Here’s a systematic approach to troubleshooting a pod stuck in the Pending state:

Check Pod Events

Use kubectl describe pod <pod-name> to find detailed information about the pod’s scheduling attempts. The Events section may contain messages from the scheduler or other components indicating why the pod cannot be scheduled. For example:

  • FailedScheduling: This event provides reasons like unsatisfiable constraints (e.g., taints, affinities) or insufficient resources.
  • FailedAttachVolume or FailedMount: Indicates issues with attaching or mounting a volume, possibly due to missing PersistentVolume or storage class problems.
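
In addition to kubectl describe, the events for a specific pod can usually be listed directly; the command below is a small convenience, with the pod name and namespace as placeholders, and keep in mind that events are only retained for a limited time (about an hour by default):

kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name> --sort-by='.lastTimestamp'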

Check Resource Availability

Nodes might lack the CPU or memory the pod requests. When examining node details with kubectl describe nodes, consider:

  • Allocatable vs Capacity: Understand the difference; Capacity is the total resource amount, while Allocatable accounts for Kubernetes system reservations.
  • Resource Requests and Limits: Compare the sum of requests and limits of all pods scheduled on the node with the node’s allocatable resources to identify potential resource shortages.
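
To make the resource comparison concrete, this is roughly what the relevant part of a pod spec looks like; the pod name, image, and values below are placeholders, and the requests are what the scheduler has to fit into a node’s allocatable resources:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.25      # placeholder image
      resources:
        requests:            # the scheduler must find a node with this much free allocatable capacity
          cpu: "500m"
          memory: 256Mi
        limits:              # upper bound enforced at runtime, not used for scheduling decisions
          cpu: "1"
          memory: 512Mi

If a request exceeds the allocatable amount on every node, the pod can never be scheduled, no matter how idle the cluster is.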

Examine Taints and Tolerations

Nodes might have taints that repel pods unless the pod has a matching toleration. When checking taints:

  • Understand Taint Effects: Effects like NoSchedule, PreferNoSchedule, and NoExecute determine how strict the taint is.
  • Matching Tolerations: Ensure the pod’s tolerations match the key, value, and effect of the node’s taints to allow scheduling.
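
As a rough example, if a node has been tainted like this (the node name, key, and value are placeholders):

kubectl taint nodes <node-name> dedicated=gpu:NoSchedule

then a pod can only be scheduled onto it if its spec carries a matching toleration, along these lines:

tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"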

Review Node Affinity Settings

Node affinity rules might be too restrictive. When reviewing affinity settings:

  • Required vs Preferred Rules: Required rules must be met for scheduling, while preferred rules influence scheduling decisions without being mandatory (see the sketch after this list).
  • Label Matching: Ensure the nodes have labels that match the pod’s affinity label selectors.
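
A required rule of the kind shown later in this article blocks scheduling outright when no node matches; the preferred form only biases the scheduler towards matching nodes. A hedged sketch of the preferred form, with placeholder key and value:

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                  # 1-100; higher weights are favored more strongly
        preference:
          matchExpressions:
            - key: disktype        # placeholder label key
              operator: In
              values:
                - ssd              # placeholder label value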

Inspect PersistentVolumeClaims

PVC issues can prevent pods from being scheduled, especially for stateful applications. Key points to check:

  • PVC Status: Ensure the PVC status is Bound, indicating it’s successfully attached to a PersistentVolume.
  • StorageClass and Provisioner: Confirm the StorageClass exists and the dynamic provisioner (if used) is operational.
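
For reference, a minimal PVC looks roughly like the following; the claim name, storage class, and size are placeholders, and storageClassName must refer to a StorageClass that actually exists in the cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc               # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # must match an existing StorageClass
  resources:
    requests:
      storage: 10Gi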

Check Quotas and Limits

Namespace quotas can limit resource allocation, affecting pod scheduling. When examining quotas:

  • Resource Quotas: Look for ResourceQuota objects in the namespace and compare their limits with current usage.
  • Pod Count Limits: Besides CPU and memory, quotas can also limit the number of pods, which might be the cause of the issue.
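
As an illustration, a ResourceQuota that could stand in the way of new pods might look like this (the name and the limits are placeholders); once any of the hard limits is reached, the namespace cannot accept additional pods or resource requests:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota             # placeholder name
  namespace: <namespace>
spec:
  hard:
    pods: "10"                 # caps the total number of pods in the namespace
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi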

Validate Pod and Container Images

Issues with container images, such as incorrect names or inaccessible registry locations, can keep a pod in the Pending phase even after it has been scheduled:

  • Image Pull Errors: Common issues include private registries requiring authentication or typos in image names/tags.
  • Image Pull Policy: The policy might require a fresh pull of the image, which could fail due to connectivity issues or rate limiting.
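
For images hosted in a private registry, the pod spec usually needs to reference a pull secret; a rough sketch, with the secret name, registry, and image as placeholders:

spec:
  imagePullSecrets:
    - name: regcred                              # placeholder secret name
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # placeholder private image
      imagePullPolicy: IfNotPresent              # avoids re-pulling an image already present on the node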

Analyze Scheduler Logs

The Kubernetes scheduler logs can provide insights into the decision-making process:

  • Verbose Logging: Increasing log verbosity can reveal detailed scheduling decisions and failures.
  • Search for Pod Name: Filter logs by the pod name to trace specific scheduling attempts and the reasons behind any failures.

Commands to diagnose a Kubernetes pod stuck in the Pending state

Check Pod Events

kubectl describe pod <pod-name> -n <namespace>

Look for messages in the “Events” section that might indicate scheduling issues.

Check Resource Availability

kubectl describe nodes

Examine the “Allocatable” and “Capacity” sections, as well as the resources requested by pods under “Non-terminated Pods”.
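
To see how much of a single node is already spoken for, the “Allocated resources” summary near the end of the describe output is often the quickest check (adjust the number of context lines as needed):

kubectl describe node <node-name> | grep -A 8 "Allocated resources"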

Examine Taints and Tolerations

To list taints on all nodes:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

Ensure your pod’s tolerations match these taints.
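
To see the tolerations actually set on the pending pod (rather than what you expect from its manifest), you can query them directly:

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.tolerations}'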

Review Node Affinity Settings

Inspect your pod definition (pod.yaml) for the affinity section:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: <key>
              operator: In
              values:
                - <value>

Ensure that there are nodes with labels that match these requirements.
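
To verify this, list the nodes together with their labels, or filter by the label the affinity rule expects (replace the key and value accordingly):

kubectl get nodes --show-labels
kubectl get nodes -l <key>=<value>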

Inspect PersistentVolumeClaims

To check the status of PVCs:

kubectl get pvc -n <namespace>

Ensure that the PVCs your pod depends on are in the Bound state.
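
If a claim is stuck in Pending rather than Bound, its events and the list of available StorageClasses usually explain why:

kubectl describe pvc <pvc-name> -n <namespace>
kubectl get storageclass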

Check Quotas and Limits

To view resource quotas in a namespace:

kubectl describe quota -n <namespace>

Check if any resource quotas are close to being exceeded.

Validate Pod and Container Images

Ensure the container images in your pod specification are correct. To check image pull errors, look at the pod’s events:

kubectl describe pod <pod-name> -n <namespace> | grep -i "Failed"

This can help identify any issues with pulling the container image.
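
If the failure points to authentication against a private registry, one common fix is to create a registry secret and reference it from the pod via imagePullSecrets; the secret name and credential placeholders below should be replaced with your own values:

kubectl create secret docker-registry regcred \
  --docker-server=<registry-url> \
  --docker-username=<username> \
  --docker-password=<password> \
  -n <namespace>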

Analyze Scheduler Logs

First, find the name of the scheduler pod:

kubectl get pods -n kube-system | grep kube-scheduler

Then, view the logs of the scheduler (replace <scheduler-pod-name> with the actual name):

kubectl logs <scheduler-pod-name> -n kube-system

Consider increasing the log verbosity if needed by passing the -v flag (e.g., -v=4) to the kube-scheduler on startup for more detailed logs.

By applying these specific commands, you can methodically identify and resolve the issue causing the pod to remain in the Pending state.