
Scale vSphere Node Pools

While you can run the Cluster Autoscaler, you can also manually scale your node pools up or down when you need finer control over your environment. For example, if you require 10 machines to run a process, you can manually set the scaling to run only those 10 machines. However, if you are also using the Cluster Autoscaler, you must stay within your minimum and maximum bounds.

Scaling up Node Pools

  1. To scale up a node pool in a cluster, run the command that follows, replacing the value 5 with the actual number of replicas you need:

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=5 --cluster-name=${CLUSTER_NAME}

Your output should be similar to this example, indicating the scaling is in progress:

CODE
INFO[2021-07-26T08:54:35-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
INFO[2021-07-26T08:54:35-07:00] Nodepool example scaled to 5 replicas  clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:94"

  2. After a few minutes, you can list the node pools with the command:

CODE
dkp get nodepools --cluster-name=${CLUSTER_NAME} --kubeconfig=${CLUSTER_NAME}.conf

Your output should be similar to this example, with the number of DESIRED and READY replicas increased to 5:

CODE
NODEPOOL                        DESIRED               READY               KUBERNETES VERSION
example                         5                     5                   v1.27.11
demo-cluster-md-0               4                     4                   v1.27.11
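To confirm that the new machines have actually joined the cluster as nodes, you can also query the cluster directly with kubectl. This is an optional verification step (the node names shown will depend on your cluster):

CODE
kubectl get nodes --kubeconfig=${CLUSTER_NAME}.conf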

Scaling Down Node Pools

  1. To scale down a node pool, run the command:

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=4 --cluster-name=${CLUSTER_NAME}

Output:

CODE
INFO[2021-07-26T08:54:35-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
INFO[2021-07-26T08:54:35-07:00] Nodepool example scaled to 4 replicas  clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:94"

In a default cluster, the nodes to delete are selected at random. This behavior is controlled by CAPI's delete policy. However, when using the Konvoy CLI to scale down a node pool, you can specify which Kubernetes Nodes you want to delete.

To do this, set the --nodes-to-delete flag with a list of nodes, as shown in the next command. This adds the annotation cluster.x-k8s.io/delete-machine=yes to each Machine object whose status.nodeRef matches a node name passed in --nodes-to-delete.
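The --nodes-to-delete flag expects Kubernetes Node names. If you are unsure of the exact names in your cluster, you can list them first, for example:

CODE
kubectl get nodes --kubeconfig=${CLUSTER_NAME}.conf -o name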

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=3 --nodes-to-delete=<> --cluster-name=${CLUSTER_NAME}

Output:

CODE
INFO[2021-07-26T08:54:35-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
INFO[2021-07-26T08:54:35-07:00] Nodepool example scaled to 3 replicas  clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:94"
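If you want to verify which Machine objects were marked for deletion, you can inspect their annotations on the management cluster before the scale-down completes. The following is a sketch that assumes the Machines live in the default namespace; adjust the namespace for your setup:

CODE
kubectl get machines -n default -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.cluster\.x-k8s\.io/delete-machine}{"\n"}{end}'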

Scaling Node Pools when Using Cluster Autoscaler

If you configured the cluster autoscaler for the demo-cluster-md-0 node pool, the value of --replicas must be within the minimum and maximum bounds.

  1. For example, assume you have these annotations:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf annotate machinedeployment ${NODEPOOL_NAME} cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size=2
kubectl --kubeconfig=${CLUSTER_NAME}.conf annotate machinedeployment ${NODEPOOL_NAME} cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size=6

  2. Try to scale the node pool to 7 replicas with the command:

CODE
dkp scale nodepools ${NODEPOOL_NAME} --replicas=7 -c demo-cluster

  3. This action results in an error similar to:

CODE
INFO[2021-07-26T09:46:37-07:00] Running scale nodepool command                clusterName=demo-cluster managementClusterKubeconfig= namespace=default src="nodepool/scale.go:82"
Error: failed to scale nodepool: scaling MachineDeployment is forbidden: desired replicas 7 is greater than the configured max size annotation cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: 6

Similarly, scaling down to a number of replicas less than the configured min-size also returns an error.
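Before scaling, you can read the configured bounds back from the MachineDeployment to see which replica counts are allowed. This sketch uses the same annotation names set above; the escaped dots are required by kubectl's jsonpath syntax:

CODE
kubectl --kubeconfig=${CLUSTER_NAME}.conf get machinedeployment ${NODEPOOL_NAME} -o jsonpath='{.metadata.annotations.cluster\.x-k8s\.io/cluster-api-autoscaler-node-group-min-size}{" "}{.metadata.annotations.cluster\.x-k8s\.io/cluster-api-autoscaler-node-group-max-size}{"\n"}'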
