Resource usage metrics, such as container CPU and memory usage,
are available in Kubernetes through the Metrics API. These metrics can either be accessed directly
by users, for example with the `kubectl top` command,
or used by a controller in the cluster, such as the
Horizontal Pod Autoscaler, to make decisions.
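For instance, you can query current usage from the command line. This is a sketch; the exact output columns depend on your cluster and Metrics Server version, and the namespace is only an example:

```shell
# Show current CPU and memory usage for each node in the cluster.
kubectl top node

# Show current usage for pods in a given namespace
# (kube-system is used here as an example).
kubectl top pod --namespace=kube-system
```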
Through the Metrics API, you can get the amount of resources currently used by a given node or a given pod. This API doesn't store the metric values, so it's not possible, for example, to get the amount of resources used by a given node 10 minutes ago.
The API is no different from any other Kubernetes API: it is discoverable through the same endpoint, under the /apis/metrics.k8s.io/ path. The API is defined in the k8s.io/metrics repository, where you can find more information about it.
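Because the Metrics API is served like any other aggregated API, you can also query it directly. This sketch assumes the v1beta1 version of the API and uses placeholder namespace and pod names:

```shell
# List current resource usage for all nodes via the raw Metrics API endpoint.
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# Get usage for a single pod ("default" and "my-pod" are placeholders).
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/my-pod
```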
Note: The API requires Metrics Server to be deployed in the cluster. Otherwise it will not be available.
Metrics Server is a cluster-wide aggregator of resource usage data.
It is deployed by default in clusters created by the kube-up.sh script
as a Deployment object. If you use a different Kubernetes setup mechanism, you can deploy it using the provided
deployment YAML files.
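As a sketch, deploying from the manifests published in the kubernetes-sigs/metrics-server repository might look like this (the exact URL and manifest layout may differ for your release):

```shell
# Apply the Metrics Server deployment manifests from the
# kubernetes-sigs/metrics-server repository (URL is illustrative).
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```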
Metrics Server collects metrics from the Summary API, exposed by the kubelet on each node.
Metrics Server is registered with the main API server through the Kubernetes aggregation layer.
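You can check that the aggregated API has been registered and is available; this sketch assumes the v1beta1 API version:

```shell
# Verify that the metrics.k8s.io APIService is registered with the
# main API server and reports an Available condition.
kubectl get apiservice v1beta1.metrics.k8s.io
```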
Learn more about the metrics server in the design doc.