Scaling node pools

While you can run the Cluster Autoscaler, you can also manually scale your node pools up or down when you need finer control over your environment. For example, if you require 10 machines to run a process, you can manually set the scaling to run only those 10 machines. However, if you are also using the Cluster Autoscaler, you must stay within your minimum and maximum bounds.

Scaling up node pools

To scale up a node pool in a cluster, run the command that follows, replacing the value 5 with the actual number of replicas you need:

dkp scale nodepools ${NODEPOOL_NAME} --replicas=5 --cluster-name=${CLUSTER_NAME}

Your output should be similar to this example, indicating the scaling is in progress:

INFO[2021-07-26T08:54:35-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
INFO[2021-07-26T08:54:35-07:00] Nodepool example scaled to 5 replicas  clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:94"

After a few minutes, you can list the node pools with the command:

dkp get nodepools --cluster-name=${CLUSTER_NAME} --kubeconfig=${CLUSTER_NAME}.conf

Your output should be similar to this example, with the number of DESIRED and READY replicas increased to 5:

NODEPOOL                        DESIRED               READY               KUBERNETES VERSION
example                         5                     5                   v1.24.6
demo-cluster-md-0               4                     4                   v1.24.6
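Once the new replicas report READY, you can also confirm that the corresponding nodes joined the workload cluster. A quick check, reusing the ${CLUSTER_NAME}.conf kubeconfig from the previous command:

# List the nodes in the workload cluster; the new worker nodes should appear as Ready.
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes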

Scaling down node pools

To scale down a node pool, run the command:

dkp scale nodepools ${NODEPOOL_NAME} --replicas=4 --cluster-name=${CLUSTER_NAME}

Your output should be similar to this example, indicating the scaling is in progress:

INFO[2021-07-26T08:54:35-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
INFO[2021-07-26T08:54:35-07:00] Nodepool example scaled to 4 replicas  clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:94"

In a default cluster, the nodes to delete are selected at random. This behavior is controlled by CAPI's delete policy. However, when using the Konvoy CLI to scale down a node pool, you can specify which Kubernetes nodes to delete.

To do this, set the --nodes-to-delete flag with a list of nodes, as shown in the next command. This adds the annotation cluster.x-k8s.io/delete-machine=yes to each Machine object whose status.NodeRef matches one of the node names passed to --nodes-to-delete (a kubectl sketch of this mechanism follows the example output below).

dkp scale nodepools ${NODEPOOL_NAME} --replicas=3 --nodes-to-delete=<node-name> --cluster-name=${CLUSTER_NAME}

Your output should be similar to this example:

INFO[2021-07-26T08:54:35-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
INFO[2021-07-26T08:54:35-07:00] Nodepool example scaled to 3 replicas  clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:94"
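Because the CLI implements targeted scale-down through the cluster.x-k8s.io/delete-machine annotation, you can inspect or reproduce the mechanism with kubectl. A minimal sketch, assuming your current kubeconfig points at the cluster that hosts the CAPI resources and that <machine-name> is the Machine backing the node you want removed:

# Find the Machine that backs the node you want to remove.
kubectl get machines

# Mark that Machine so CAPI deletes it on the next scale-down.
kubectl annotate machine <machine-name> cluster.x-k8s.io/delete-machine=yes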

Scaling node pools when using the Cluster Autoscaler

If you configured the Cluster Autoscaler for the demo-cluster-md-0 node pool, the value of --replicas must be within its minimum and maximum bounds.

For example, assume you have set these annotations:

kubectl --kubeconfig=${CLUSTER_NAME}.conf annotate machinedeployment ${NODEPOOL_NAME} cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size=2
kubectl --kubeconfig=${CLUSTER_NAME}.conf annotate machinedeployment ${NODEPOOL_NAME} cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size=6
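You can read the annotations back to confirm which bounds are in effect; a quick check, using the same kubeconfig as the annotate commands above:

# Print the MachineDeployment's annotations, including the min/max size bounds.
kubectl --kubeconfig=${CLUSTER_NAME}.conf get machinedeployment ${NODEPOOL_NAME} -o jsonpath='{.metadata.annotations}'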

Try to scale the node pool to 7 replicas with the command:

dkp scale nodepools ${NODEPOOL_NAME} --replicas=7 --cluster-name=${CLUSTER_NAME}

This action results in an error similar to:

INFO[2021-07-26T09:46:37-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
Error: failed to scale nodepool: scaling MachineDeployment is forbidden: desired replicas 7 is greater than the configured max size annotation cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: 6

Similarly, scaling down to a number of replicas below the configured min-size annotation also returns an error.
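For example, with the minimum size annotation above set to 2, a command like the following would be rejected in the same way (the exact error text may differ):

dkp scale nodepools ${NODEPOOL_NAME} --replicas=1 --cluster-name=${CLUSTER_NAME}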