Pre-provisioned Add Nodes to Existing Node Pool

This section covers the prerequisites and procedures for scaling nodes up or down in an existing DKP cluster.

Prerequisites

  • You must have the bootstrap node running with the SSH key/secrets created.

  • The exported environment variables must contain the addresses of the nodes that you need to add; see Pre-provisioned: Define Infrastructure.

  • Update the preprovisioned_inventory.yaml with the new host addresses.

  • Run the kubectl apply command to apply the updated inventory (a sketch follows this list).
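
A minimal sketch of the last two prerequisites, assuming the inventory file is named preprovisioned_inventory.yaml and the PreprovisionedInventory object is in the default namespace:

CODE
kubectl apply -f preprovisioned_inventory.yaml -n default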

Scale Up a Cluster Node

Follow these steps:

  1. Fetch the existing preprovisioned_inventory:

    CODE
    $ kubectl get preprovisionedinventory
  2. Edit the preprovisioned_inventory to add the additional IP addresses needed for the new worker nodes in the spec.hosts: section:

    CODE
    $ kubectl edit preprovisionedinventory <preprovisioned_inventory> -n default 
  3. Add any additional IPs that you require:

    CODE
    spec: 
        hosts: 
        - address: <worker.ip.add.1> 
        - address: <worker.ip.add.2> 

    After you edit the preprovisioned_inventory, fetch the machine deployment. The md in the name indicates that it manages worker machines.
    For example:

    CODE
    $ kubectl --kubeconfig ${CLUSTER_NAME}.conf get machinedeployment 
    NAME                     CLUSTER        AGE     PHASE     REPLICAS   READY   UPDATED   UNAVAILABLE 
    machinedeployment-md-0   cluster-name   9m10s   Running   4          4       4   
  4. Scale the worker nodes to the required number. In this example, we scale from 4 to 6 worker nodes:

    CODE
    $ kubectl --kubeconfig ${CLUSTER_NAME}.conf scale --replicas=6 machinedeployment machinedeployment-md-0
    
    machinedeployment.cluster.x-k8s.io/machinedeployment-md-0 scaled 
  5. Monitor the scaling with this command, adding the -w option to watch:

    CODE
    $ kubectl --kubeconfig ${CLUSTER_NAME}.conf get machinedeployment -w
    
    NAME                     CLUSTER        AGE   PHASE       REPLICAS   READY   UPDATED   UNAVAILABLE 
    machinedeployment-md-0   cluster-name   20m   ScalingUp   6          4       6         2 
  6. You can also check whether the machine deployment has finished scaling. The output should resemble this example:

    CODE
    $ kubectl --kubeconfig ${CLUSTER_NAME}.conf get machinedeployment
    
    NAME                     CLUSTER        AGE     PHASE     REPLICAS   READY   UPDATED   UNAVAILABLE 
    machinedeployment-md-0   cluster-name   3h33m   Running   6          6       6 
  7. Alternatively, run this command and check the NODENAME column; you should see the additional worker nodes listed and in the Running phase (a node-level check follows these steps):

    CODE
    $ kubectl --kubeconfig ${CLUSTER_NAME}.conf get machines -o wide
    
    NAME     CLUSTER     AGE     PROVIDERID     PHASE     VERSION     NODENAME
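
As an additional check (not part of the steps above), you can list the Kubernetes nodes directly with the workload cluster kubeconfig; the new workers should eventually report a Ready status:

CODE
kubectl --kubeconfig ${CLUSTER_NAME}.conf get nodes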

Scale Down a Cluster Node

Follow these steps:

  1. To scale the worker nodes, run this command (a worked example follows these steps):

    CODE
    kubectl scale machinedeployment <machinedeployment-name> --replicas <new number>

  2. For control plane nodes, run this command (a verification check follows these steps):

    CODE
    kubectl scale kubeadmcontrolplane ${CLUSTER_NAME}-control-plane --replicas <new number>
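
For example, applying step 1 to the machine deployment from the scale-up section and scaling it back from 6 to 4 workers (the deployment name and the --kubeconfig convention are reused from the examples above and may differ in your cluster):

CODE
kubectl --kubeconfig ${CLUSTER_NAME}.conf scale machinedeployment machinedeployment-md-0 --replicas 4

After scaling the control plane in step 2, you can confirm the new replica count by listing the kubeadmcontrolplane object that the command targets:

CODE
kubectl --kubeconfig ${CLUSTER_NAME}.conf get kubeadmcontrolplane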

Additional Notes for Scaling Down

It is possible for machines to get stuck in the provisioning stage when you scale down. You can use a delete operation to clear the stale machine:

CODE
kubectl delete machine ${CLUSTER_NAME}-control-plane-<hash>
CODE
kubectl delete machine <machinedeployment-name>-<hash>
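
If you are not sure which machine is stuck, reuse the command from the scale-up section to list the machines and inspect the PHASE column; the NAME column gives the value to pass to kubectl delete:

CODE
kubectl --kubeconfig ${CLUSTER_NAME}.conf get machines -o wide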
