Create new vSphere Cluster
Prerequisites
Before you begin, make sure you have created a vSphere Bootstrap cluster.
Name your cluster
Give your cluster a unique name suitable for your environment.
Set the CLUSTER_NAME environment variable with the command:
export CLUSTER_NAME=<my-vsphere-cluster>
Create a New vSphere Kubernetes Cluster
Follow these steps:
1. Use the following command to set the environment variables for vSphere:
export VSPHERE_SERVER=example.vsphere.url
export VSPHERE_USERNAME=user@example.vsphere.url
export VSPHERE_PASSWORD=example_password
2. Ensure your vSphere credentials are up-to-date by refreshing the credentials with the command:
dkp update bootstrap credentials vsphere
3. Generate the Kubernetes cluster objects by copying and editing this command to include the correct values, including the VM template name you assigned in the previous procedure:
NOTE: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>
Ensure your subnets do not overlap with your host subnet, because they cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do so at cluster creation. The default subnets used in DKP are:
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
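Before generating the cluster objects, you can check that your host subnet does not overlap the defaults. A minimal sketch using Python's standard ipaddress module; the host subnets below are hypothetical examples:

```python
import ipaddress

# Default DKP cluster subnets (from the spec above)
POD_CIDR = ipaddress.ip_network("192.168.0.0/16")
SERVICE_CIDR = ipaddress.ip_network("10.96.0.0/12")

def overlaps_defaults(host_cidr: str) -> bool:
    """Return True if the given host subnet overlaps either default cluster subnet."""
    host = ipaddress.ip_network(host_cidr)
    return host.overlaps(POD_CIDR) or host.overlaps(SERVICE_CIDR)

# Hypothetical host subnets:
print(overlaps_defaults("172.16.0.0/24"))    # False: no overlap
print(overlaps_defaults("192.168.10.0/24"))  # True: inside the default pod CIDR
```

If the check returns True for your environment, override the defaults at cluster creation time rather than after.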
The following dkp create cluster example shows a common configuration. See the dkp create cluster vsphere reference for the full list of cluster creation options:
dkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--virtual-ip-interface <ip_interface_name> \
--vm-template <TEMPLATE_NAME> \
--dry-run \
--output=yaml \
> ${CLUSTER_NAME}.yaml
(Optional) Alternatively, you can create individual files with smaller manifests for ease of editing using the --output-directory flag. This creates multiple files in the specified directory, which must already exist:

--output-directory=<existing-directory>

For more information about this flag and others, refer to the CLI section of the documentation for dkp create cluster and select your provider.
4. (Optional) To configure the Control Plane and Worker nodes to use an HTTP proxy, set the following environment variables:
export CONTROL_PLANE_HTTP_PROXY=http://example.org:8080
export CONTROL_PLANE_HTTPS_PROXY=http://example.org:8080
export CONTROL_PLANE_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
export WORKER_HTTP_PROXY=http://example.org:8080
export WORKER_HTTPS_PROXY=http://example.org:8080
export WORKER_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
Replace example.org,example.com,example.net with your internal addresses.
localhost and 127.0.0.1 addresses should not use the proxy.
10.96.0.0/12 is the default Kubernetes service subnet.
192.168.0.0/16 is the default Kubernetes pod subnet.
kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local is the internal Kubernetes kube-apiserver service.
.svc,.svc.cluster,.svc.cluster.local are the internal Kubernetes services.
169.254.169.254 is the AWS metadata server.
.elb.amazonaws.com allows the worker nodes to communicate directly with the kube-apiserver ELB.
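The NO_PROXY entries above mix exact hostnames, CIDR blocks, and domain-suffix entries (those beginning with a dot). The following sketch illustrates the general matching convention for such a list; it is an illustration of the convention, not the exact logic used by any specific DKP component:

```python
import ipaddress

def bypasses_proxy(host: str, no_proxy: str) -> bool:
    """Illustrative matcher: exact host, domain-suffix (.example), or CIDR entries."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if entry.startswith("."):  # domain-suffix entry
            if host.endswith(entry) or host == entry[1:]:
                return True
        elif "/" in entry:  # CIDR entry; only matches IP-literal hosts
            try:
                if ipaddress.ip_address(host) in ipaddress.ip_network(entry):
                    return True
            except ValueError:
                continue  # host is not an IP literal
        elif host == entry:  # exact match
            return True
    return False

no_proxy = "example.org,localhost,127.0.0.1,10.96.0.0/12,.svc.cluster.local"
print(bypasses_proxy("kubernetes.default.svc.cluster.local", no_proxy))  # True
print(bypasses_proxy("10.100.0.1", no_proxy))                            # True
print(bypasses_proxy("registry-1.docker.io", no_proxy))                  # False
```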
5. (Optional) Create a Kubernetes cluster with HTTP proxy configured. This step assumes you did not already create a cluster in the previous steps:

NOTE: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>
dkp create cluster vsphere --cluster-name=${CLUSTER_NAME} \
--control-plane-http-proxy="${CONTROL_PLANE_HTTP_PROXY}" \
--control-plane-https-proxy="${CONTROL_PLANE_HTTPS_PROXY}" \
--control-plane-no-proxy="${CONTROL_PLANE_NO_PROXY}" \
--worker-http-proxy="${WORKER_HTTP_PROXY}" \
--worker-https-proxy="${WORKER_HTTPS_PROXY}" \
--worker-no-proxy="${WORKER_NO_PROXY}" \
--dry-run \
--output=yaml \
> ${CLUSTER_NAME}.yaml
6. Inspect or edit the cluster objects:
Familiarize yourself with Cluster API before editing the cluster objects as edits can prevent the cluster from deploying successfully.
The objects are Custom Resources defined by Cluster API components, and they belong in three different categories:
Cluster
A Cluster object has references to the infrastructure-specific and control plane objects. Because this is a vSphere cluster, there is an object that describes the infrastructure-specific cluster properties.
Control plane
A KubeadmControlPlane object describes the control plane, which is the group of machines that run the Kubernetes control plane components: the etcd distributed database, the API server, the core controllers, and the scheduler. The object describes the configuration for these components and also references an infrastructure-specific object that describes the properties of all control plane machines. Here, it references a VSphereMachineTemplate object.
Node pool
A node pool is a collection of machines with identical properties. For example, a cluster might have one node pool with large memory capacity and another node pool with GPU support. Each node pool is described by three objects: the MachinePool, an object that describes the configuration of Kubernetes components (for example, kubelet) deployed on each node pool machine, and an infrastructure-specific object that describes the properties of all node pool machines. Here, it references a KubeadmConfigTemplate and a VSphereMachineTemplate object.
For in-depth documentation about the objects, read Concepts in the Cluster API Book.
7. Modify control plane audit logs settings using the information contained in the page Configuring the Control Plane.
8. Create the cluster from the objects. A warning will appear in the console if the resource already exists and will require you to remove the resource or update your YAML.
kubectl create -f ${CLUSTER_NAME}.yaml
NOTE: If you used the --output-directory flag in your dkp create .. --dry-run step above, create the cluster from the objects you created by specifying the directory:
kubectl create -f <existing-directory>/
Output will be similar to below:
cluster.cluster.x-k8s.io/vsphere-example created
cluster.infrastructure.cluster.x-k8s.io/vsphere-example created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/vsphere-example-control-plane created
machinedeployment.cluster.x-k8s.io/vsphere-example-mp-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/vsphere-example-mp-0 created
9. Use the wait command to monitor the cluster control-plane readiness:
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
cluster.cluster.x-k8s.io/${CLUSTER_NAME} condition met
The READY status becomes True after the cluster control plane becomes Ready in one of the following steps.
After DKP creates the objects on the API server, the Cluster API controllers reconcile them, creating infrastructure and machines. As the controllers progress, they update the Status of each object.
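Conceptually, the kubectl wait above polls the Cluster object's status conditions until the named condition reports True. A simplified illustration of that condition check; the status payload below is a hand-written example shaped like a Cluster API object's .status, not live API output:

```python
# Hypothetical status payload shaped like a Cluster API object's .status
cluster_status = {
    "conditions": [
        {"type": "InfrastructureReady", "status": "True"},
        {"type": "ControlPlaneReady", "status": "True"},
    ]
}

def condition_met(status: dict, condition_type: str) -> bool:
    """Return True if the named condition exists and reports status 'True'."""
    return any(
        c["type"] == condition_type and c["status"] == "True"
        for c in status.get("conditions", [])
    )

print(condition_met(cluster_status, "ControlPlaneReady"))  # True
```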
10. Run the DKP describe command to monitor the current status of the cluster:
dkp describe cluster -c ${CLUSTER_NAME}
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/d2iq-e2e-cluster_name-1 True 13h
├─ClusterInfrastructure - VSphereCluster/d2iq-e2e-cluster_name-1 True 13h
├─ControlPlane - KubeadmControlPlane/d2iq-control-plane True 13h
│ ├─Machine/d2iq--control-plane-7llgd True 13h
│ ├─Machine/d2iq--control-plane-vncbl True 13h
│ └─Machine/d2iq--control-plane-wbgrm True 13h
└─Workers
└─MachineDeployment/d2iq--md-0 True 13h
├─Machine/d2iq--md-0-74c849dc8c-67rv4 True 13h
├─Machine/d2iq--md-0-74c849dc8c-n2skc True 13h
├─Machine/d2iq--md-0-74c849dc8c-nkftv True 13h
└─Machine/d2iq--md-0-74c849dc8c-sqklv True 13h
11. Check that all machines have a NODENAME assigned:
kubectl get machines
The output appears similar to the following:
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
d2iq-e2e-cluster-1-control-plane-7llgd d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-control-plane-7llgd vsphere://421638e2-e776-9af6-f683-5e105de5da5a Running 13h v1.26.6
d2iq-e2e-cluster-1-control-plane-vncbl d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-control-plane-vncbl vsphere://42168835-7fef-95c4-3652-ebcad3e10d36 Running 13h v1.26.6
d2iq-e2e-cluster-1-control-plane-wbgrm d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-control-plane-wbgrm vsphere://421642df-afc4-b6c2-9e61-5b86e7c37eac Running 13h v1.26.6
d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4 d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4 vsphere://4216f467-8483-73cb-a8b6-8d6a4a71e4b4 Running 14h v1.26.6
d2iq-e2e-cluster-1-md-0-74c849dc8c-n2skc d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-n2skc vsphere://42161cde-9904-4dd2-7a3e-cdfc7655f090 Running 14h v1.26.6
d2iq-e2e-cluster-1-md-0-74c849dc8c-nkftv d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-nkftv vsphere://42163a0d-eb8d-b5a6-82d5-188e24817c00 Running 14h v1.26.6
d2iq-e2e-cluster-1-md-0-74c849dc8c-sqklv d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-sqklv vsphere://42161dff-92a5-6da9-7ac1-e987e2c8fed2 Running 14h v1.26.6
12. Verify that the kubeadm control plane is ready with the command:
kubectl get kubeadmcontrolplane
The output appears similar to the following:
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
d2iq-e2e-cluster-1-control-plane d2iq-e2e-cluster-1 true true 3 3 3 0 14h v1.26.6
13. Describe the kubeadm control plane and check its status and events with the command:
kubectl describe kubeadmcontrolplane
14. As they progress, the controllers also create Events, which you can list using the command:
kubectl get events | grep ${CLUSTER_NAME}
For brevity, this example uses grep. You can also use separate commands to get Events for specific objects, such as kubectl get events --field-selector involvedObject.kind="VSphereCluster" and kubectl get events --field-selector involvedObject.kind="VSphereMachine".
Known Limitations
Be aware of these limitations in the current release of DKP Konvoy.
The DKP Konvoy version used to create a bootstrap cluster must match the DKP Konvoy version used to create a workload cluster.
DKP Konvoy supports deploying one workload cluster.
DKP Konvoy generates a set of objects for one Node Pool.
DKP Konvoy does not validate edits to cluster objects.
The optional next step is to Make the vSphere Cluster Self-managed. This step is optional because, for example, if you are using an existing, self-managed cluster to create a managed cluster, you would not want the managed cluster to be self-managed.