vSphere: Create a New Cluster
Prerequisites
Before you begin, make sure you have created a vSphere Bootstrap cluster.
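If you want to confirm which cluster kubectl currently targets before you begin, a quick optional check (plain kubectl, not a DKP command) is:

```
# Show the context kubectl will use for the commands in this procedure;
# it should point at your bootstrap cluster.
kubectl config current-context
```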
Name your Cluster
Give your cluster a unique name suitable for your environment. The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation fails if the name contains capital letters. See the Kubernetes documentation for more naming information.

Set the CLUSTER_NAME environment variable with the command:

```
export CLUSTER_NAME=<my-vsphere-cluster>
```
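If you want to validate a candidate name against these rules before exporting it, the following shell check is an illustrative sketch of the character rules above, not part of the DKP CLI:

```
# Optional: verify the name uses only a-z, 0-9, '.', and '-' (no capital letters).
if [[ "${CLUSTER_NAME}" =~ ^[a-z0-9.-]+$ ]]; then
  echo "cluster name looks valid"
else
  echo "cluster name contains disallowed characters" >&2
fi
```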
Create a New vSphere Kubernetes Cluster
Follow these steps:
Use the following command to set the environment variables for vSphere:
```
export VSPHERE_SERVER=<example.vsphere.url>
export VSPHERE_USERNAME=<user@example.vsphere.url>
export VSPHERE_PASSWORD=<example_password>
```
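As an optional convenience, the following shell one-liner (not a DKP command) aborts with an error if any of the variables is unset:

```
# Fail fast if any credential variable is missing before proceeding.
: "${VSPHERE_SERVER:?is unset}" "${VSPHERE_USERNAME:?is unset}" "${VSPHERE_PASSWORD:?is unset}"
```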
Ensure your vSphere credentials are up-to-date by refreshing the credentials with the command:
```
dkp update bootstrap credentials vsphere
```
Generate the Kubernetes cluster objects by copying and editing this command to include the correct values, including the VM template name you assigned in the previous procedure:
To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command:

```
--registry-mirror-url=https://registry-1.docker.io --registry-mirror-username= --registry-mirror-password=
```
⚠️ IMPORTANT: Ensure your subnets do not overlap with your host subnet, because the subnets cannot be changed after cluster creation. If you need different Kubernetes subnets, you must set them at cluster creation time. The default subnets used in DKP are:
```
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
```
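If you are unsure whether your host subnet overlaps these defaults, one illustrative way to check is with python3's standard ipaddress module; replace <your_host_subnet_cidr> with your environment's subnet:

```
# Illustrative overlap check; assumes python3 is available on your workstation.
HOST_CIDR=<your_host_subnet_cidr>
python3 - "${HOST_CIDR}" <<'EOF'
import ipaddress, sys
host = ipaddress.ip_network(sys.argv[1], strict=False)
for default in ("192.168.0.0/16", "10.96.0.0/12"):
    if host.overlaps(ipaddress.ip_network(default)):
        print(f"WARNING: default subnet {default} overlaps host subnet {host}")
EOF
```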
The following example shows a common configuration. Other options and their corresponding flags are explained in vSphere Cluster Creation Customization Choices.
See dkp create cluster vsphere reference for the full list of cluster creation flags.
```
dkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--virtual-ip-interface <ip_interface_name> \
--vm-template <TEMPLATE_NAME> \
--dry-run \
--output=yaml \
> ${CLUSTER_NAME}.yaml
```
Additional flags are available for registry, HTTP proxy, FIPS, and other options to use during the cluster creation step above. For more information about these and other flags, refer to the CLI section of the documentation for dkp create cluster and select your provider.
Inspect or edit the cluster objects. Familiarize yourself with Cluster API before editing them, as edits can prevent the cluster from deploying successfully. See Customizing CAPI Components for a Cluster.
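For a quick overview of what the dry run generated, you can list the object kinds in the manifest; this is an illustrative shell command, not part of the DKP CLI:

```
# Count each Cluster API object kind in the generated multi-document YAML.
grep '^kind:' ${CLUSTER_NAME}.yaml | sort | uniq -c
```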
(Optional) Modify Control Plane Audit logs: you can modify the KubeadmControlPlane cluster-api object to configure different kubelet options. See Configuring the Control Plane if you wish to configure your control plane beyond the options that are available from flags.
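To find that object in the dry-run output for editing, one illustrative approach is:

```
# Locate the KubeadmControlPlane document (with line numbers) in the generated manifest.
grep -n 'kind: KubeadmControlPlane' ${CLUSTER_NAME}.yaml
```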
Create the cluster from the objects generated in the dry run. A warning appears in the console if the resource already exists, requiring you to remove the resource or update your YAML.

```
kubectl create -f ${CLUSTER_NAME}.yaml
```
NOTE: If you used the --output-directory flag in your dkp create .. --dry-run step above, create the cluster from the objects you created by specifying the directory:

```
kubectl create -f <existing-directory>/
```
Use the wait command to monitor the cluster control-plane readiness:

```
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
```
Output:
```
cluster.cluster.x-k8s.io/${CLUSTER_NAME} condition met
```
The READY status becomes True after the cluster control-plane becomes Ready in one of the following steps.

After DKP creates the objects on the API server, the Cluster API controllers reconcile them, creating infrastructure and machines. As the controllers progress, they update the Status of each object. Run the DKP describe command to monitor the current status of the cluster:
```
dkp describe cluster -c ${CLUSTER_NAME}
```
Check that all machines have a NODE_NAME assigned:

```
kubectl get machines
```
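If machines are still being provisioned, you can optionally stream updates with plain kubectl until each machine reports a node name:

```
# Watch machine objects as they are provisioned and join the cluster (Ctrl+C to stop).
kubectl get machines --watch
```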
Verify that the kubeadm control plane is ready with the command:
```
kubectl get kubeadmcontrolplane
```
The output appears similar to the following:
```
NAME                               CLUSTER              INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
d2iq-e2e-cluster-1-control-plane   d2iq-e2e-cluster-1   true          true                   3          3       3         0             14h   v1.28.7
```
Describe the kubeadm control plane and check its status and events with the command:
```
kubectl describe kubeadmcontrolplane
```
As they progress, the controllers also create Events, which you can list using the command:
```
kubectl get events | grep ${CLUSTER_NAME}
```

For brevity, this example uses grep. You can also use separate commands to get Events for specific objects, such as kubectl get events --field-selector involvedObject.kind="VSphereCluster" and kubectl get events --field-selector involvedObject.kind="VSphereMachine".
DKP uses the vSphere CSI driver as the default storage provider. Use a Kubernetes CSI compatible storage provider that is suitable for production. See the Kubernetes documentation topic Changing the Default Storage Class for more information.

If you are not using the default, you cannot deploy an alternate provider until after dkp create cluster finishes; however, you must determine the provider before Kommander installation.
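To check which storage class is currently the default on the new cluster, and to change it using the standard Kubernetes annotation (class names below are illustrative placeholders), you can use:

```
# The default class is marked "(default)" in the output.
kubectl get storageclass

# Unset the default annotation on the old class and set it on the new one
# (see the Kubernetes topic "Changing the Default Storage Class").
kubectl patch storageclass <old-default> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass <new-default> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```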
Known Limitations
Be aware of these limitations in the current release of DKP Konvoy.
The DKP Konvoy version used to create a bootstrap cluster must match the DKP Konvoy version used to create a workload cluster.
DKP Konvoy supports deploying one workload cluster.
DKP Konvoy generates a set of objects for one Node Pool.
DKP Konvoy does not validate edits to cluster objects.
The optional next step is to Make the vSphere Cluster Self-managed. This step is optional because, for example, if you are using an existing self-managed cluster to create a managed cluster, you would not want the managed cluster to be self-managed.