Kubernetes, an open source container orchestration platform, manages your containerized applications for you. With Kubernetes in place, you do not need to worry about scaling in, scaling out, or keeping the application available. Before an application is migrated to a Kubernetes cluster, however, the cluster itself needs to be highly available, with built-in disaster recovery, as well as secure, scalable, and optimized.

Kubernetes draws on resources from the underlying virtual or physical machines, which are then consumed by individual containers. The most common resources are CPU and RAM, though there are others. Kubernetes can cap how much of these resources containers consume, and it is general practice to set limits on CPU and memory usage. A CPU/memory limit is the maximum CPU/memory a container can use; it keeps the container from using everything available on the node. In theory this sounds good: it protects nodes from running out of resources and becoming unresponsive.

CPU and memory limits are implemented differently and behave differently. Hitting a memory limit is easy to detect: just check whether the pod's last restart status shows it was killed Out Of Memory (OOMKilled). To enforce CPU limits, on the other hand, Kubernetes uses kernel throttling and exposes usage metrics rather than cgroup-level metrics, which makes CPU throttling hard to detect. If an application goes above its CPU limit, it simply gets throttled.

And this is what the problem is.

Before we dig into CPU throttling, let's understand the need for a CPU limit and how it works.

What if we do not specify the CPU limit and/or request in Kubernetes?

If you do not specify a CPU limit, the container has no upper bound on CPU and can use all of the CPU available on the node. A CPU-intensive container can then slow down other containers on the same node and exhaust all available CPU. This in turn can trigger events such as Kubernetes components, like the kubelet, becoming unresponsive. The node then turns NotReady, and the containers on it are rescheduled to some other node.

What is CPU throttling and how does the CPU limit work?

CPU throttling makes sure that if an application goes above its specified limit, it gets throttled. Sometimes, however, containers are observed being throttled even when their CPU usage is nowhere near the limit. This happens because of a bug in the Linux kernel that unnecessarily throttles containers that have CPU limits.

Now, before we proceed, let's first understand how the CPU limit works in Kubernetes. Kubernetes uses CFS quotas to enforce CPU limits for the pods running an application. The Completely Fair Scheduler (CFS) is a process scheduler that allocates CPU to running processes based on time periods, not on available CPU power, and it is configured through two files, cfs_period_us and cfs_quota_us.

  • cpu.cfs_quota_us: the total available run-time within a period [in microseconds]
  • cpu.cfs_period_us: the length of a period [in microseconds]

cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us
o/p → 100000
Here, 100000 us = 100 ms. Kubernetes always uses a 100 ms period.

cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us
o/p → -1
Here, a value of -1 for cpu.cfs_quota_us indicates that the group has no bandwidth restriction in place.
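When Kubernetes applies a CPU limit, it fills in cpu.cfs_quota_us from the limit and the period. A quick sketch of that arithmetic (the 0.5-CPU limit here is just an illustrative value):

```shell
# quota_us = cpu_limit * period_us. For an illustrative limit of 0.5 CPU
# (500 millicores) and the default 100 ms period:
period_us=100000
limit_millicores=500
quota_us=$((period_us * limit_millicores / 1000))
echo "cpu.cfs_quota_us: $quota_us"   # 50000 us = 50 ms of CPU per 100 ms period
```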

Consider a single-threaded application running without any CPU constraint that needs 200 ms of processing time. The graph of the application completing its request would look something like the following.

In the second case, suppose we set a CPU limit of 0.4 CPU on the application: it then gets 40 ms of runtime in every 100 ms period. The request that took 200 ms in the previous case now takes 440 ms to complete.
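The 440 ms figure can be checked with a toy model of CFS bandwidth control: run up to the quota each period, then sit throttled until the next period starts. This is a sketch of the accounting only, not what the kernel actually executes:

```shell
# Toy model: a task needing 200 ms of CPU, limited to 40 ms per 100 ms period.
need=200     # total CPU time the request needs (ms)
quota=40     # runtime allowed per period (ms) -- a 0.4 CPU limit
period=100   # CFS period length (ms)
cpu=0; wall=0
while [ "$cpu" -lt "$need" ]; do
  run=$quota
  left=$((need - cpu))
  [ "$left" -lt "$run" ] && run=$left
  cpu=$((cpu + run))
  if [ "$cpu" -lt "$need" ]; then
    wall=$((wall + period))   # ran, then sat throttled for the rest of the period
  else
    wall=$((wall + run))      # finished partway through the final period
  fi
done
echo "wall-clock completion: ${wall} ms"   # 4 throttled periods + 40 ms = 440 ms
```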

From the above example, it is clear that the CPU limit is the real cause of the problem.


Let’s take the following example, where the worker node has 1 CPU. You can check this using the “cat /proc/cpuinfo” command on the Linux server.

Now, before we create a pod with a CPU limit, let’s deploy a “Metrics Server” on the cluster, an aggregator of resource usage data. We will need it to check metrics on the pods and nodes.

The Metrics Server can be deployed with the following commands, which first clone the “kubernetes-metrics-server” git repository and then create the required objects on the cluster.

git clone
cd kubernetes-metrics-server/
kubectl apply -f .

To create a pod with a CPU request and limit, create a file “pod-with-cpu-limit.yaml” with the following pod definition. It sets a limit of “1” and a request of “0.5” for the pod. Requests are what the container is guaranteed to get; limits make sure a container never goes above a certain value.
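The YAML itself did not survive in this copy; a definition along the following lines matches the description. The image (vish/stress) and its -cpus argument are borrowed from the CPU-limit walkthrough in the Kubernetes documentation and are assumptions here — the point is simply a limit of “1”, a request of “0.5”, and a workload that tries to use more than 1 CPU:

```shell
# Hypothetical pod definition matching the text: limit 1 CPU, request 0.5 CPU.
# vish/stress with "-cpus 2" tries to burn 2 CPUs, so it will hit the limit.
cat > pod-with-cpu-limit.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-pod
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args: ["-cpus", "2"]
EOF
```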

This will create the pod without any issue, since the limit does not exceed the number of CPUs actually on the worker node and the request is within the limit we specified.

Use the following commands to create the pod and inspect it.

kubectl apply -f pod-with-cpu-limit.yaml
kubectl get pods
kubectl describe pod cpu-demo-pod

Now, if you check the actual CPU usage of the pod using the following command, it is 1003m (about 1 CPU). The CPU is being throttled here, because the argument we passed to the pod makes it try to use more CPU than we specified in the limit.

Use the following command to check the usage.

kubectl top pods cpu-demo-pod
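Throttling is also visible on the node itself: under cgroup v1, the container's cpu.stat file reports nr_periods, nr_throttled, and throttled_time. A sketch of turning two of those counters into a throttle percentage — the sample values below are made up:

```shell
# On a cgroup v1 node, a pod's throttling counters live under its kubepods
# cgroup, e.g. /sys/fs/cgroup/cpu,cpuacct/kubepods/.../cpu.stat:
#   nr_periods     total enforcement periods elapsed
#   nr_throttled   periods in which the group hit its quota
#   throttled_time total time spent throttled (ns)
# Sample (made-up) values, and the fraction of periods that were throttled:
nr_periods=1000
nr_throttled=250
throttle_pct=$((100 * nr_throttled / nr_periods))
echo "throttled in ${throttle_pct}% of periods"
```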

But if you specify a CPU request and limit higher than what is available on the worker node, you will face issues and the pod will go into the Pending state.

First delete the existing pod, raise the CPU limit and request above what is available on the node, and try to create the pod again using the following commands.
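For instance, on a 1-CPU worker node, bumping the values to a limit of “2” and a request of “1.5” (illustrative numbers, both above what the node can offer) is enough to make the pod unschedulable:

```shell
# Hypothetical modified definition: the 1.5 CPU request exceeds the 1 CPU
# allocatable on the node, so the scheduler leaves the pod Pending.
cat > pod-with-cpu-limit.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo-pod
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "2"
      requests:
        cpu: "1.5"
EOF
```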

kubectl get pods
kubectl delete pod cpu-demo-pod
kubectl apply -f pod-with-cpu-limit.yaml

Check the reason for the failure by describing the Pod.

kubectl get pods
kubectl describe pod cpu-demo-pod

You can clearly see that, because the CPU request exceeds the CPUs actually available on the worker node, the pod could not be scheduled and stays in the Pending state.

Be careful while removing the CPU Limit

Removing CPU limits is not an easy decision with respect to cluster stability. If no CPU limit is set, the container is effectively limited only by the CPU available on the node. Before removing CPU limits, it is important to understand how your services work and what their CPU needs are.

We can try removing the CPU limit for latency-sensitive services; the limit cannot simply be removed at random from any service.

It is a good idea to isolate services that do not have CPU limits. This makes it easier to control such services/pods and to identify them if any resource-allocation issue occurs.
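One way to sketch that isolation, assuming a node pool labeled for the purpose — the pool=no-cpu-limit label and the pod below are hypothetical:

```shell
# Hypothetical: schedule limit-less pods onto a dedicated, labeled node pool
# (e.g. after "kubectl label node <node> pool=no-cpu-limit") so any resource
# contention they cause stays contained and easy to attribute.
cat > no-limit-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-limit-pod
spec:
  nodeSelector:
    pool: no-cpu-limit
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "0.5"   # a request only; deliberately no CPU limit
EOF
```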


If Docker containers or Kubernetes pods run on Linux systems, they can underperform due to throttling. Removing the CPU limit solves the throttling issue, though it can be dangerous. The issue can also be handled by upgrading the kernel to a version that fixes the unnecessary-throttling bug. Applications with no CPU limit can also be isolated on separate nodes, which makes it easier to find the culprit containers or pods that are affecting performance.