Hands-On Lab: Kubernetes Queue-Based Autoscaling with Custom Metrics

Build a full custom metrics pipeline in Kubernetes, expose Redis queue depth as a native cluster metric, and configure an HPA that scales your workers directly in response to queue load.
🏅 Included in the PRO membership!

By completing this lab, you will be able to:

  • Explain why CPU-based HPA fails for I/O-bound, queue-backed workloads and identify the right metric to use instead
  • Expose a Prometheus metric through the Kubernetes custom metrics API using the Prometheus adapter
  • Leverage queue depth as an application-level signal to drive horizontal scaling decisions
  • Configure an HPA that targets a business-meaningful metric rather than an infrastructure proxy like CPU
  • Wire a complete custom metrics pipeline from a Redis queue through to an HPA scaling decision
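The objectives above culminate in an HPA manifest along these lines. This is a minimal sketch: the metric name `redis_queue_depth`, the Deployment name, and the target value are illustrative assumptions, not the lab's exact values.

```yaml
# Sketch of an HPA targeting a custom metric served through the
# Kubernetes custom metrics API (names and values are assumptions).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker          # hypothetical worker Deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: redis_queue_depth # assumed custom metric name
      target:
        type: AverageValue
        averageValue: "10"      # scale up when avg queue depth per pod exceeds 10
```

Note the choice of `AverageValue` here: the controller divides the metric total across pods, so adding replicas directly lowers the per-pod value back toward the target.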
What you are going to learn

From CPU Guesses to Queue-Driven Scaling

This lab puts you hands-on with one of the most production-relevant autoscaling patterns in Kubernetes: scaling queue workers based on actual queue depth. You will deploy a four-component pipeline connecting a Redis exporter, Prometheus, the Prometheus adapter, and an HPA. Along the way, you will prove why CPU utilization fails as a reliable scaling signal for I/O-bound and queue-based workloads, and then build the solution that replaces it.
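The adapter link in that pipeline is configured with a discovery rule that maps a Prometheus series onto the custom metrics API. A hedged sketch of such a rule follows, in the shape used by the prometheus-adapter Helm chart's `values.yaml`; the series name `redis_queue_length` and the exposed metric name are assumptions for illustration.

```yaml
# Hypothetical prometheus-adapter rule (Helm chart values.yaml fragment).
rules:
  custom:
  - seriesQuery: 'redis_queue_length{namespace!="",service!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        service:   {resource: "service"}
    name:
      as: "redis_queue_depth"   # name the HPA will reference
    metricsQuery: 'max(redis_queue_length{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```

Once a rule like this is loaded, the metric becomes queryable at `/apis/custom.metrics.k8s.io/v1beta1`, which is exactly where the HPA controller looks for it.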

By the end of this lab, you will have configured the Prometheus adapter to serve a custom metric through the Kubernetes custom metrics API, written an HPA manifest that targets a named custom metric tied to a live Redis queue, and watched your worker pods scale from one to five replicas within seconds of load hitting the queue. You will also work through the scaling math, the stabilization window, and the trade-offs of choosing different target values.
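The scaling math mentioned above follows the standard HPA rule, desired = ceil(currentReplicas × currentMetric ÷ target). A few lines of Python make the one-to-five jump concrete; the specific numbers (a target of 10 jobs per pod, 50 jobs arriving) are illustrative, not the lab's exact values.

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target: float) -> int:
    """Standard HPA scaling rule: ceil(currentReplicas * (currentMetric / target))."""
    return math.ceil(current_replicas * (current_metric / target))

# One worker, 50 jobs in the queue (so avg 50 per pod), target 10 per pod:
print(desired_replicas(1, 50, 10))   # 5 -> scale from one to five replicas

# Five workers now average 10 jobs each, exactly on target: no further change.
print(desired_replicas(5, 10, 10))   # 5
```

The result is still clamped by `minReplicas`/`maxReplicas`, and the stabilization window delays scale-downs so a briefly drained queue does not immediately shed workers.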

Recommended Courses

The following courses are highly recommended before tackling this lab; they provide all the background you need to follow everything discussed here. That said, if you are already comfortable with Kubernetes fundamentals (Pods, Deployments, Services, the Horizontal Pod Autoscaler, etc.), feel free to jump in right away!

Lab Contents