Prerequisites

Before you begin, you must:

  - Set the CLUSTER_NAME environment variable to the name of your cluster.
  - Have the cluster's kubeconfig file, ${CLUSTER_NAME}.conf, in your working directory.

Replace a worker node

In certain situations, you may want to delete a worker node and have Cluster API replace it with a newly-provisioned machine.

  1. Identify the name of the node to delete.

    List the nodes:

    kubectl --kubeconfig ${CLUSTER_NAME}.conf get nodes
    CODE

    The output from this command resembles the following:

    NAME                                       STATUS   ROLES                  AGE   VERSION
    d2iq-e2e-cluster-1-control-plane-7llgd     Ready    control-plane,master   20h   v1.22.8
    d2iq-e2e-cluster-1-control-plane-vncbl     Ready    control-plane,master   20h   v1.22.8
    d2iq-e2e-cluster-1-control-plane-wbgrm     Ready    control-plane,master   19h   v1.22.8
    d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4   Ready    <none>                 20h   v1.22.8
    d2iq-e2e-cluster-1-md-0-74c849dc8c-n2skc   Ready    <none>                 20h   v1.22.8
    d2iq-e2e-cluster-1-md-0-74c849dc8c-nkftv   Ready    <none>                 20h   v1.22.8
    d2iq-e2e-cluster-1-md-0-74c849dc8c-sqklv   Ready    <none>                 20h   v1.22.8
    CODE

  2. Export a variable with the node name to use in the next steps:

    This example uses the name d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4.

    export NAME_NODE_TO_DELETE="d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4"
    CODE

  3. Delete the corresponding Machine resource. The first command finds the Machine whose status.nodeRef references the node; the second deletes that Machine:

    NAME_MACHINE_TO_DELETE=$(kubectl --kubeconfig ${CLUSTER_NAME}.conf get machine -ojsonpath="{.items[?(@.status.nodeRef.name==\"$NAME_NODE_TO_DELETE\")].metadata.name}")
    kubectl --kubeconfig ${CLUSTER_NAME}.conf delete machine "$NAME_MACHINE_TO_DELETE"
    CODE

    The output resembles the following:

    machine.cluster.x-k8s.io "d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4" deleted
    CODE

    The command blocks until the Machine resource is deleted, and then returns.

    A few minutes after the Machine resource is deleted, the corresponding Node resource is also deleted.
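    To wait for the Node resource to be removed without re-running kubectl get nodes by hand, a small polling sketch like the following can be used. Note that node_gone is a hypothetical helper name, not a kubectl or Cluster API command:

```shell
# Hedged sketch: "node_gone" is a hypothetical helper that succeeds once the
# named Node can no longer be found in the cluster. Assumes CLUSTER_NAME is set
# and ${CLUSTER_NAME}.conf exists, as in the steps above.
node_gone() {
    ! kubectl --kubeconfig "${CLUSTER_NAME}.conf" get node "$1" >/dev/null 2>&1
}

# Example: poll every 10 seconds until the deleted node disappears.
# until node_gone "$NAME_NODE_TO_DELETE"; do sleep 10; done
```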

  4. Observe the Machine resource replacement using this command:

    kubectl --kubeconfig ${CLUSTER_NAME}.conf get machinedeployment
    CODE

    The output resembles the following:

    NAME                      CLUSTER              REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE       AGE   VERSION
    d2iq-e2e-cluster-1-md-0   d2iq-e2e-cluster-1   4          3       4         1             ScalingUp   20h   v1.22.8
    CODE

    In this example, the MachineDeployment has 4 replicas, but only 3 are ready. One replica is unavailable, and the ScalingUp phase indicates that Cluster API is creating a new Machine to replace the deleted one.
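    To wait until the replacement is complete, you can compare the desired and ready replica counts from the MachineDeployment shown above. The following is a hedged sketch; md_ready is a hypothetical helper name:

```shell
# Hedged sketch: "md_ready" is a hypothetical helper that compares desired
# replicas (.spec.replicas) against ready replicas (.status.readyReplicas).
# Assumes CLUSTER_NAME is set and ${CLUSTER_NAME}.conf exists.
md_ready() {
    desired=$(kubectl --kubeconfig "${CLUSTER_NAME}.conf" get machinedeployment "$1" \
        -o=jsonpath='{.spec.replicas}')
    ready=$(kubectl --kubeconfig "${CLUSTER_NAME}.conf" get machinedeployment "$1" \
        -o=jsonpath='{.status.readyReplicas}')
    [ -n "$ready" ] && [ "$ready" -eq "$desired" ]
}

# Example: poll every 10 seconds until all replicas are ready again.
# until md_ready "${CLUSTER_NAME}-md-0"; do sleep 10; done
```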

  5. Identify the replacement Machine using this command:

    export NAME_NEW_MACHINE=$(kubectl --kubeconfig ${CLUSTER_NAME}.conf get machines \
        -l=cluster.x-k8s.io/deployment-name=${CLUSTER_NAME}-md-0 \
        -ojsonpath='{.items[?(@.status.phase=="Provisioning")].metadata.name}{"\n"}')
    echo "$NAME_NEW_MACHINE"
    CODE

    If the output is empty, the new Machine has likely already left the Provisioning phase and entered the Running phase.
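    To see every Machine in the deployment together with its current phase, so that the replacement is visible whether it is Provisioning or already Running, something like the following works. Note that list_machine_phases is a hypothetical helper name:

```shell
# Hedged sketch: "list_machine_phases" is a hypothetical helper that prints
# each Machine in the MachineDeployment with its phase, using kubectl's
# custom-columns output format. Assumes CLUSTER_NAME is set.
list_machine_phases() {
    kubectl --kubeconfig "${CLUSTER_NAME}.conf" get machines \
        -l=cluster.x-k8s.io/deployment-name="${CLUSTER_NAME}-md-0" \
        -o=custom-columns=NAME:.metadata.name,PHASE:.status.phase
}
```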

  6. Identify the replacement Node using this command:

    kubectl --kubeconfig ${CLUSTER_NAME}.conf get nodes \
        -o=jsonpath="{.items[?(@.metadata.annotations.cluster\.x-k8s\.io/machine==\"$NAME_NEW_MACHINE\")].metadata.name}"
    CODE

    The output should be similar to this example:

    d2iq-e2e-cluster-1-md-0-74c849dc8c-rc528
    CODE

    If the output is empty, the Node resource is not yet available, or does not yet have the expected annotation. Wait a few minutes, then repeat the command.
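    Rather than repeating the command by hand, the lookup can be retried automatically. The following is a hedged sketch; new_node_name is a hypothetical helper wrapping the annotation query above, and the retry cap of 30 attempts is an assumption:

```shell
# Hedged sketch: "new_node_name" is a hypothetical helper that prints the name
# of the Node annotated with the given Machine name, or nothing if no such
# Node exists yet. Assumes CLUSTER_NAME and NAME_NEW_MACHINE are set.
new_node_name() {
    kubectl --kubeconfig "${CLUSTER_NAME}.conf" get nodes \
        -o=jsonpath="{.items[?(@.metadata.annotations.cluster\.x-k8s\.io/machine==\"$1\")].metadata.name}"
}

# Example: retry every 10 seconds, up to 30 attempts.
# i=0
# until [ -n "$(new_node_name "$NAME_NEW_MACHINE")" ] || [ "$i" -ge 30 ]; do
#     i=$((i+1)); sleep 10
# done
# new_node_name "$NAME_NEW_MACHINE"
```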