Horizontal Pod Autoscaler
Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with beta support, on other application-provided metrics).
Prerequisites
Heapster monitoring needs to be deployed in the cluster, as the Horizontal Pod Autoscaler uses it to collect metrics. Deploy it as follows:
git clone https://github.com/kubernetes/heapster.git
cd heapster
kubectl create -f deploy/kube-config/influxdb/
deployment.extensions "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment.extensions "heapster" created
service "heapster" created
deployment.extensions "monitoring-influxdb" created
service "monitoring-influxdb" created
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io "heapster" created
kubectl cluster-info
Kubernetes master is running at https://139.59.81.67:6443
Heapster is running at https://139.59.81.67:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://139.59.81.67:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://139.59.81.67:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://139.59.81.67:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
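Before moving on, it can be useful to confirm that Heapster is actually reporting metrics. The commands below are an optional check, not part of the original walkthrough, and assume Heapster has had a minute or two to collect data after startup:
# If Heapster is reporting metrics, these commands print CPU and memory
# usage for nodes and pods instead of an error.
kubectl top nodes
kubectl top pods --all-namespaces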
Step One: Run & expose php-apache server
To demonstrate the Horizontal Pod Autoscaler, we will use a custom Docker image based on the php-apache image. The Dockerfile has the following content:
FROM php:5-apache
ADD index.php /var/www/html/index.php
RUN chmod a+rx index.php
It defines an index.php page which performs some CPU-intensive computations:
<?php
$x = 0.0001;
for ($i = 0; $i <= 1000000; $i++) {
$x += sqrt($x);
}
echo "OK!";
?>
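In this walkthrough we use the prebuilt k8s.gcr.io/hpa-example image, so building it yourself is not required. If you do want to build and push the image from the Dockerfile above, a sketch along these lines should work; the registry and tag are placeholders, not part of the original instructions:
# Build the image and push it to your own registry.
# <your-registry>/php-apache:v1 is a placeholder; substitute your own repository.
docker build -t <your-registry>/php-apache:v1 .
docker push <your-registry>/php-apache:v1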
First, we will start a deployment running the image and expose it as a service:
kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80
service "php-apache" created
deployment.apps "php-apache" created
Step Two: Create Horizontal Pod Autoscaler
Now that the server is running, we will create the autoscaler using kubectl autoscale. The following command creates a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods controlled by the php-apache deployment we created in the first step. Roughly speaking, the HPA will increase and decrease the number of replicas (via the deployment) to maintain an average CPU utilization across all Pods of 50%.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
deployment.apps "php-apache" autoscaled
We can check the current status of the autoscaler by running:
kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        0          10s
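The TARGETS column may show <unknown> for the current CPU utilization until Heapster has reported the first metrics and while no load is being sent to the server. If it stays that way, kubectl describe can help diagnose; this is an optional troubleshooting step, not part of the original walkthrough:
# Show the autoscaler's status and recent events, which report whether
# CPU metrics could be fetched for the php-apache pods.
kubectl describe hpa php-apache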