Delete an AWS Cluster
A self-managed workload cluster cannot delete itself. If your workload cluster is self-managed, you must first create a bootstrap cluster and move the cluster lifecycle services to it before deleting the workload cluster.
If you did not make your workload cluster self-managed, as described in Make New Cluster Self-Managed, skip ahead to the instructions in Delete the Workload Cluster.
Create a Bootstrap Cluster and Move CAPI Resources
Follow these steps to create a bootstrap cluster and move CAPI resources:
Make sure your AWS credentials are up to date. Refresh the credentials using this command:
CODEdkp update bootstrap credentials aws --kubeconfig $HOME/.kube/config
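If the refresh fails because your shell session has no valid AWS credentials, re-export them first. The following is a minimal sketch that assumes you authenticate with static or temporary credentials exported as environment variables; the values shown are placeholders:
CODE# Placeholder values; substitute your own credentials or use a named profile.
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_SESSION_TOKEN=<your-session-token>  # only needed for temporary credentials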
The bootstrap cluster will host the Cluster API controllers that reconcile the cluster objects marked for deletion. Create a bootstrap cluster:
NOTE: To avoid using the wrong kubeconfig, the following steps use explicit kubeconfig paths and contexts.
CODEdkp create bootstrap --kubeconfig $HOME/.kube/config --with-aws-bootstrap-credentials=true
CODE✓ Creating a bootstrap cluster
✓ Initializing new CAPI components
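Before moving resources, you can optionally confirm that the CAPI controllers are running on the bootstrap cluster. This sketch assumes the standard Cluster API namespaces, capi-system for the core controller and capa-system for the AWS provider:
CODE# Both commands should list Running pods once initialization completes.
kubectl --kubeconfig $HOME/.kube/config get pods -n capi-system
kubectl --kubeconfig $HOME/.kube/config get pods -n capa-system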
Move the Cluster API objects from the workload cluster to the bootstrap cluster. The cluster lifecycle services on the bootstrap cluster are now ready, but the workload cluster configuration is still on the workload cluster. The move command moves the configuration, which takes the form of Cluster API Custom Resource objects, from the workload cluster to the bootstrap cluster. This process is also called a pivot.
CODEdkp move capi-resources \
  --from-kubeconfig ${CLUSTER_NAME}.conf \
  --from-context ${CLUSTER_NAME}-admin@${CLUSTER_NAME} \
  --to-kubeconfig $HOME/.kube/config \
  --to-context kind-konvoy-capi-bootstrapper
CODE✓ Moving cluster resources
You can now view resources in the moved cluster by using the --kubeconfig flag with kubectl. For example:
kubectl --kubeconfig $HOME/.kube/config get nodes
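To confirm the pivot succeeded, you can list the Cluster objects now present on the bootstrap cluster; the workload cluster should appear in the output. A minimal check using the bootstrap kubeconfig:
CODE# The workload cluster should be listed after a successful move.
kubectl --kubeconfig $HOME/.kube/config get clusters -A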
Use the cluster lifecycle services on the bootstrap cluster to check the workload cluster’s status:
CODEdkp describe cluster --kubeconfig $HOME/.kube/config -c ${CLUSTER_NAME}
CODENAME                                                             READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/aws-example                                              True                     91s
├─ClusterInfrastructure - AWSCluster/aws-example                 True                     103s
├─ControlPlane - KubeadmControlPlane/aws-example-control-plane   True                     91s
│ ├─Machine/aws-example-control-plane-55jh4                      True                     102s
│ ├─Machine/aws-example-control-plane-6sn97                      True                     102s
│ └─Machine/aws-example-control-plane-nx9v5                      True                     102s
└─Workers
  └─MachineDeployment/aws-example-md-0                           True                     108s
    ├─Machine/aws-example-md-0-cb9c9bbf7-hcl8z                   True                     102s
    ├─Machine/aws-example-md-0-cb9c9bbf7-rtdqw                   True                     102s
    ├─Machine/aws-example-md-0-cb9c9bbf7-td29r                   True                     102s
    └─Machine/aws-example-md-0-cb9c9bbf7-w64kg                   True                     102s
NOTE: After moving the cluster lifecycle services to the bootstrap cluster, remember to run DKP commands against the bootstrap cluster kubeconfig; the bootstrap cluster now manages the workload cluster, including its deletion.
Wait for the cluster control plane to be ready:
CODEkubectl --kubeconfig $HOME/.kube/config wait --for=condition=controlplaneready "clusters/${CLUSTER_NAME}" --timeout=20m
CODEcluster.cluster.x-k8s.io/aws-example condition met
Delete the Workload Cluster
Make sure your AWS credentials are up to date. Refresh the credentials using this command:
CODEdkp update bootstrap credentials aws --kubeconfig $HOME/.kube/config
NOTE: Persistent Volumes (PVs) are not deleted automatically by design to preserve your data. However, the PVs take up storage space if not deleted. You must delete PVs manually. Information for backup of a cluster and PVs is on the page, Back up your Cluster's Applications and Persistent Volumes.
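Since DKP does not remove PVs for you, list them on the workload cluster and delete the ones you no longer need. A minimal sketch; the PV name is a hypothetical placeholder, and depending on the reclaim policy you may also need to delete the backing EBS volume in AWS:
CODE# List all Persistent Volumes on the workload cluster.
kubectl --kubeconfig ${CLUSTER_NAME}.conf get pv
# Delete a specific PV that is no longer needed (placeholder name).
kubectl --kubeconfig ${CLUSTER_NAME}.conf delete pv <pv-name>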
To delete a cluster, use dkp delete cluster and pass the name of the cluster with the --cluster-name flag. Use kubectl get clusters to look up the details (--cluster-name and --namespace) of the Kubernetes cluster you want to delete.
NOTE: Do not use dkp get clusters, because that returns Kommander cluster details rather than Konvoy Kubernetes cluster details.
CODEkubectl get clusters
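If exactly one workload cluster exists in the default namespace, you can capture its name into the CLUSTER_NAME variable directly. A sketch under that assumption:
CODE# Store the first (and only) cluster name for use in later commands.
export CLUSTER_NAME=$(kubectl get clusters -o jsonpath='{.items[0].metadata.name}')
echo ${CLUSTER_NAME}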
Delete the Kubernetes cluster and wait a few minutes:
NOTE: Before deleting the cluster, DKP deletes all Services of type LoadBalancer on the cluster. Each Service is backed by an AWS Classic ELB, and deleting the Service deletes the ELB that backs it. To skip this step, use the --delete-kubernetes-resources=false flag. Do not skip this step if the VPC is managed by DKP: when DKP deletes the cluster, it also deletes the VPC, and if the VPC still contains AWS Classic ELBs, AWS does not allow the VPC to be deleted, so DKP cannot delete the cluster.
CODEdkp delete cluster --cluster-name=${CLUSTER_NAME} --kubeconfig $HOME/.kube/config
CODE✓ Deleting Services with type LoadBalancer for Cluster default/aws-example
✓ Deleting ClusterResourceSets for Cluster default/aws-example
✓ Deleting cluster resources
✓ Waiting for cluster to be fully deleted
Deleted default/aws-example cluster
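To verify the deletion completed, you can confirm that no Cluster objects remain on the bootstrap cluster:
CODE# Expect "No resources found" once the workload cluster is fully deleted.
kubectl --kubeconfig $HOME/.kube/config get clusters -A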
After the workload cluster is deleted, you can delete the bootstrap cluster.
Delete the Bootstrap Cluster
After you have moved the workload resources back to a bootstrap cluster and deleted the workload cluster, you no longer need the bootstrap cluster. You can safely delete the bootstrap cluster with these steps:
Make sure your AWS credentials are up to date. Refresh the credentials using this command:
CODEdkp update bootstrap credentials aws --kubeconfig $HOME/.kube/config
Delete the bootstrap cluster:
CODEdkp delete bootstrap --kubeconfig $HOME/.kube/config
CODE✓ Deleting bootstrap cluster
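Because the bootstrap cluster runs on kind, you can optionally confirm it is gone with the kind CLI, assuming the kind binary is installed on your machine:
CODE# An empty list means the bootstrap cluster was deleted.
kind get clusters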
Air-Gapped Environments
In an air-gapped environment, follow these steps:
Make sure your AWS credentials are up to date. Refresh the credentials using this command:
CODEdkp update bootstrap credentials aws --kubeconfig $HOME/.kube/config
Delete the provisioned Kubernetes cluster and wait a few minutes:
CODEdkp delete cluster --cluster-name=${CLUSTER_NAME}
Delete the kind Kubernetes cluster:
CODEdkp delete bootstrap
Known Limitations
The Konvoy version used to create the workload cluster must match the Konvoy version used to delete the workload cluster.
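To check which version your CLI reports before creating or deleting a cluster, you can run the version subcommand; this sketch assumes your dkp release provides one:
CODE# Compare this output against the version used to create the cluster.
dkp version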