How To Fix OOMKilled

Shubham - Oct 25 - Dev Community

OOMKilled occurs in Kubernetes when a container exceeds its memory limit, or when the node itself runs out of memory and the kernel's OOM killer terminates the container's process; the pod reports exit code 137.

A typical OOMKilled pod status looks like this:

NAME                READY     STATUS        RESTARTS     AGE
web-app-pod-1       0/1       OOMKilled     0            14m7s

"Pods must use less memory than the total available on the node; if they exceed this, Kubernetes will kill some pods to restore balance."


How to Fix OOMKilled Kubernetes Error (Exit Code 137)

  1. Identify OOMKilled Event: Run kubectl get pods and check if the pod status shows OOMKilled.
  2. Gather Pod Details: Run kubectl describe pod [pod-name] and review the output for the OOMKilled reason.

In the describe output, check the container's Last State and look for the following:

State:          Running
  Started:      Mon, 11 Aug 2024 19:15:00 +0200
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
  ...
  3. Analyze Memory Usage: Check memory usage patterns to identify whether the limit was exceeded by a short spike or by consistently high usage (see the sketch after this list).
  4. Adjust Memory Settings: Increase memory limits in the pod spec if necessary, or debug and fix any memory leaks in the application.
  5. Prevent Overcommitment: Ensure the sum of memory requests does not exceed node capacity by adjusting pod resource requests and limits.
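
A rough sketch of steps 3-5: watch per-container usage, then resize the request and limit on the owning workload. The web-app names and the 256Mi/512Mi values are placeholders; kubectl top needs the metrics-server add-on, and the real values should come from the usage you observe:

# Step 3: per-container memory usage for the failing pod
kubectl top pod web-app-pod-1 --containers

# Steps 4-5: raise the request/limit on the Deployment that owns the pod,
# keeping the sum of requests on each node below its allocatable memory
kubectl set resources deployment web-app \
  --requests=memory=256Mi \
  --limits=memory=512Mi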

Point worth noting:

"If a pod is terminate due to a memory issue. it doesn’t necessarily mean it will be removed from the node. If the node’s restart policy is set to ‘Always’, the pod will attempt to restart"

To check the QoS class of a pod, run this command:

kubectl get pod [pod-name] -o jsonpath='{.status.qosClass}'
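
The QoS class matters because it drives the oom_score_adj the kubelet assigns: BestEffort pods are killed first, Guaranteed pods (requests equal to limits) last. As a small sketch, you can list the class for every pod in the namespace:

kubectl get pods -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass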

To inspect the oom_score of a pod:

  1. Run kubectl exec -it [pod-name] -- /bin/bash to open a shell in the container.
  2. To see the oom_score, run cat /proc/[pid]/oom_score.
  3. To see the oom_score_adj, run cat /proc/[pid]/oom_score_adj.

The process with the highest oom_score is the first to be killed when the node runs out of memory.
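
If the container image ships a shell, a one-shot exec avoids the interactive session. This sketch reads the scores for the container's main process (PID 1 inside the container), with [pod-name] as a placeholder:

# Higher oom_score = killed sooner under memory pressure
kubectl exec [pod-name] -- sh -c 'cat /proc/1/oom_score /proc/1/oom_score_adj'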
