
2 ways to get the memory usage of a pod in Kubernetes

To check the memory usage of a pod in Kubernetes, there are two main methods:

Using Direct Commands on Pods to get the memory usage in Kubernetes

This method doesn’t require any third-party tools. Start by listing the pods in the current namespace (add --all-namespaces to see every pod in the cluster) with

kubectl get pods

Then, access the shell of the pod you’re interested in with

kubectl exec -it [pod-name] -n [namespace] -- /bin/bash

replacing [pod-name] and [namespace] with your specific pod’s name and namespace.
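
For example, with a hypothetical pod named web-frontend-5d8f7c9b4-x2k5j running in a production namespace, the command would look like this:

kubectl exec -it web-frontend-5d8f7c9b4-x2k5j -n production -- /bin/bash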

Once inside, you can use commands like top and free -h to view CPU and RAM usage directly.
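
Keep in mind that top and free -h report the memory of the node the pod is scheduled on, not the container’s own usage or limit. To read the container’s actual usage, you can also query the cgroup filesystem; the exact path depends on whether the node uses cgroup v1 or v2, so the commands below are a sketch covering both cases:

cat /sys/fs/cgroup/memory.current                  # cgroup v2
cat /sys/fs/cgroup/memory/memory.usage_in_bytes    # cgroup v1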

Pro tips for running direct commands on Kubernetes pods:

  1. kubectl exec: Run commands inside a container of a specific pod, e.g., kubectl exec <pod-name> -- <command>.
  2. Interactive Commands/Shell Access: Use kubectl exec -it <pod-name> -- /bin/bash (or /bin/sh for lightweight containers).
  3. Viewing Logs: Quickly view logs with kubectl logs <pod-name> for troubleshooting without pod access.
  4. Filtering Pods: Combine kubectl get pods with grep to run commands on specific pods based on their names or statuses (see the sketch after this list).
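
As an example of tip 4, here is a small sketch that reads the memory usage of every pod whose name matches a pattern; the my-app pattern, the production namespace, and the cgroup v2 path are all assumptions for illustration:

for pod in $(kubectl get pods -n production --no-headers | grep my-app | awk '{print $1}'); do
  echo "$pod:"
  kubectl exec "$pod" -n production -- cat /sys/fs/cgroup/memory.current
done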

For more details, you might want to check the official Kubernetes documentation.

Installing the Metrics Server and using the kubectl top command to get the memory usage in Kubernetes

This approach involves installing the Metrics Server in your cluster, which then allows you to use kubectl top pod to see CPU and RAM usage for all pods or a specific pod by specifying its name.

This method provides a quick overview of resource usage without needing to access each pod individually. The most straightforward option is to run

kubectl top pod [pod-name] --containers

which shows the CPU and memory usage for each container in the specified pod. If you’re interested in a broader overview, omitting the pod name displays this information for all pods in the current namespace.
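
A minimal example, assuming a hypothetical pod named web-frontend with two containers (the numbers shown are purely illustrative):

kubectl top pod web-frontend --containers

POD            NAME    CPU(cores)   MEMORY(bytes)
web-frontend   app     12m          145Mi
web-frontend   proxy   2m           18Mi

kubectl top pod also accepts --all-namespaces to cover the whole cluster and --sort-by=memory to surface the heaviest pods first.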

For more detail on how much memory a pod is allowed to use, including its configured requests and limits, kubectl describe pod [pod-name] can be used. This command provides a comprehensive view of the pod’s configuration and recent events, though not its live resource usage.
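
If you only want the configured requests and limits without the rest of the describe output, a jsonpath query is a handy alternative (the pod name is again hypothetical):

kubectl get pod web-frontend -o jsonpath='{.spec.containers[*].resources}'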

To install the Metrics Server in Kubernetes, you typically use kubectl create (or kubectl apply) with the YAML manifest for the Metrics Server deployment.

The Metrics Server is maintained in the kubernetes-sigs/metrics-server repository on GitHub, which publishes the latest deployment YAML. You can download the manifest and then apply it using

kubectl create -f <yaml-file-path>

kubectl create -f https://raw.githubusercontent.com/pythianarora/total-practice/master/sample-kubernetes-code/metrics-server.yaml
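
Once the manifest has been applied, you can confirm that the Metrics Server is running before relying on kubectl top; the deployment normally lives in the kube-system namespace:

kubectl get deployment metrics-server -n kube-system
kubectl top nodes

If kubectl top nodes returns CPU and memory figures instead of an error, the metrics pipeline is working.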

Remember to check the official Kubernetes documentation or the Metrics Server GitHub page for the most up-to-date installation instructions and any prerequisites for your specific Kubernetes cluster configuration.

Pro tips:

  • kubectl create is used to create a new resource, defining that resource in the command or a YAML file. It’s more of a one-time operation.
  • kubectl apply is used to apply a configuration change to a resource, with the ability to update resources in place with modifications defined in a YAML file. It’s more versatile for managing resources over time, supporting both creating new resources and updating existing ones (see the example after this list).
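
As a quick illustration with the metrics-server.yaml file from above:

kubectl create -f metrics-server.yaml   # first run creates the resources; running it again fails with an "AlreadyExists" error
kubectl apply -f metrics-server.yaml    # safe to re-run; updates the live resources if the file has changed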

Each of these methods has its advantages, depending on your specific needs and the level of detail required.

For a quick check, kubectl top is very convenient, while direct command execution provides deeper insight when troubleshooting a specific pod. The Metrics Server and its PodMetrics API offer a balance between ease of use and detailed information, making them suitable for regular monitoring.
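
If the Metrics Server is installed, the same data can also be read directly from the PodMetrics API, which is useful for scripting; the production namespace below is just an example:

kubectl get pods.metrics.k8s.io -n production
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/production/pods"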
