Prerequisites
Before you begin, make sure you have created a vSphere Bootstrap cluster.
Name your Cluster
Give your cluster a unique name suitable for your environment. The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.
Set the CLUSTER_NAME environment variable with the command:
CODE
export CLUSTER_NAME=<my-vsphere-cluster>
Create a New vSphere Kubernetes Cluster
Follow these steps:
Use the following command to set the environment variables for vSphere:
CODE
export VSPHERE_SERVER=<example.vsphere.url>
export VSPHERE_USERNAME=<user@example.vsphere.url>
export VSPHERE_PASSWORD=<example_password>
Ensure your vSphere credentials are up-to-date by refreshing the credentials with the command:
CODE
dkp update bootstrap credentials vsphere
Generate the Kubernetes cluster objects by copying and editing this command to include the correct values, including the VM template name you assigned in the previous procedure:
To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the --registry-mirror-url, --registry-mirror-username, and --registry-mirror-password flags on the dkp create cluster command, as shown below.
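For example, the flags might look like the following sketch, where the username and password placeholders stand in for your own Docker Hub credentials:
CODE
--registry-mirror-url=https://registry-1.docker.io \
--registry-mirror-username=<dockerhub-username> \
--registry-mirror-password=<dockerhub-password>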
⚠️ IMPORTANT: Ensure your subnets do not overlap with your host subnet, because the subnets cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do so at cluster creation. The default subnets used in DKP are:
CODE
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
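If you need different subnets, one option is to edit the clusterNetwork section of the manifest that the dry run below generates before you apply it. A minimal sketch, assuming the example CIDRs 10.100.0.0/16 and 10.101.0.0/16 do not overlap with your host subnet; see Customizing CAPI Components for a Cluster before making such edits:
CODE
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 10.100.0.0/16
    services:
      cidrBlocks:
        - 10.101.0.0/16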
CODE
dkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <xxx.yyy.zzz.000> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file <SSH_PUBLIC_KEY_FILE> \
--resource-pool <RESOURCE_POOL_NAME> \
--virtual-ip-interface <ip_interface_name> \
--vm-template <TEMPLATE_NAME> \
--dry-run \
--output=yaml \
> ${CLUSTER_NAME}.yaml
Expand the drop-downs for the registry, HTTP, FIPS, and other flags to use during the cluster creation step above. For more information regarding these flags and others, refer to the CLI section of the documentation for dkp create cluster and select your provider.
Flatcar
If you are provisioning your cluster with Flatcar OS, use the following flag to instruct the bootstrap cluster to make changes related to the Flatcar installation paths:
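In current DKP documentation this is the OS hint flag; verify the exact flag against the dkp create cluster reference for your DKP version:
CODE
--os-hint flatcar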
If using a REGISTRY MIRROR, use these FLAGS in your create cluster command:
(Optional) Use a registry mirror. Configure your cluster to use an existing local registry as a mirror when attempting to pull images previously pushed to your registry.
Export the environment variables with your registry information. These variables tell the cluster where to locate the local registry to use by defining the URL and other needed information:
CODE
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
REGISTRY_URL
: the address of an existing container registry accessible in the VPC that the new cluster nodes will be configured to use as a mirror registry when pulling images.
REGISTRY_CA
: (optional) the path on the bastion machine to the container registry CA. Konvoy will configure the cluster nodes to trust this CA. This value is only needed if the registry is using a self-signed certificate and the machine images are not already configured to trust this CA.
REGISTRY_USERNAME
: (optional) set to a user that has pull access to this registry.
REGISTRY_PASSWORD
: (optional) the password for that user; only needed if REGISTRY_USERNAME is set.
When creating the cluster, apply the variables you defined above during the dkp create cluster command with the flags needed for your environment:
CODE
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD}
See also: vSphere Registry Mirrors and Registry Mirror Tools.
FIPS Requirements
To create a cluster in FIPS mode, inform the controllers of the appropriate image repository and version tags of the official D2iQ FIPS builds of Kubernetes by adding the following flags to the dkp create cluster command:
CODE
--kubernetes-version=v1.27.11+fips.0 \
--etcd-version=3.5.6+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
HTTP ONLY
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.
CODE
--http-proxy <<http proxy list>>
--https-proxy <<https proxy list>>
--no-proxy <<no proxy list>>
To configure the Control Plane and Worker nodes to use an HTTP proxy:
CODE
export CONTROL_PLANE_HTTP_PROXY=http://example.org:8080
export CONTROL_PLANE_HTTPS_PROXY=http://example.org:8080
export CONTROL_PLANE_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
export WORKER_HTTP_PROXY=http://example.org:8080
export WORKER_HTTPS_PROXY=http://example.org:8080
export WORKER_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
Replace:
example.org,example.com,example.net with your internal addresses
localhost and 127.0.0.1 addresses should not use the proxy
10.96.0.0/12 is the default Kubernetes service subnet
192.168.0.0/16 is the default Kubernetes pod subnet
kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local is the internal Kubernetes kube-apiserver service
.svc,.svc.cluster,.svc.cluster.local are the internal Kubernetes services
Create a Kubernetes cluster with HTTP proxy configured using the flags and exported variables during the dkp create cluster command:
CODE
--control-plane-http-proxy=${CONTROL_PLANE_HTTP_PROXY} \
--control-plane-https-proxy=${CONTROL_PLANE_HTTPS_PROXY} \
--control-plane-no-proxy=${CONTROL_PLANE_NO_PROXY} \
--worker-http-proxy=${WORKER_HTTP_PROXY} \
--worker-https-proxy=${WORKER_HTTPS_PROXY} \
--worker-no-proxy=${WORKER_NO_PROXY} \
Individual manifests using the Output Directory flag:
You can create individual files with smaller manifests for easier editing by using the --output-directory flag. This creates multiple files in the specified directory, which must already exist:
CODE
--output-directory=<existing-directory>
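Because the directory must exist before you run the command, create it first and then pass it to the dry-run command shown earlier; the directory name here is only an illustration:
CODE
mkdir -p ${CLUSTER_NAME}-manifests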
Refer to the Cluster Creation Customization Choices section for more information on how to use optional flags such as the --output-directory flag.
Inspect or edit the cluster objects as needed, and familiarize yourself with Cluster API before making changes, as edits can prevent the cluster from deploying successfully. See Customizing CAPI Components for a Cluster.
(Optional) Modify Control Plane Audit logs - You can modify the KubeadmControlPlane Cluster API object to configure different kubelet options. See Configuring the Control Plane if you wish to configure your control plane beyond the existing options that are available from flags.
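For illustration only, a minimal sketch of where such settings live in the generated KubeadmControlPlane manifest; the audit-log values below are assumptions rather than DKP defaults, so consult Configuring the Control Plane for the supported options:
CODE
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: ${CLUSTER_NAME}-control-plane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          audit-log-maxage: "30"      # example value, not a DKP default
          audit-log-maxbackup: "10"   # example value, not a DKP default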
Create the cluster from the objects generated in the dry run. A warning will appear in the console if the resource already exists and will require you to remove the resource or update your YAML.
CODE
kubectl create -f ${CLUSTER_NAME}.yaml
NOTE: If you used the --output-directory flag in your dkp create .. --dry-run step above, create the cluster from the objects you created by specifying the directory:
CODE
kubectl create -f <existing-directory>/
OUTPUT:
CODE
cluster.cluster.x-k8s.io/vsphere-example created
cluster.infrastructure.cluster.x-k8s.io/vsphere-example created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/vsphere-example-control-plane created
machinedeployment.cluster.x-k8s.io/vsphere-example-mp-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/vsphere-example-mp-0 created
Use the wait command to monitor the cluster control-plane readiness:
CODE
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
Output:
CODE
cluster.cluster.x-k8s.io/${CLUSTER_NAME} condition met
The READY status becomes True after the cluster control-plane becomes Ready in one of the following steps.
After DKP creates the objects on the API server, the Cluster API controllers reconcile them, creating infrastructure and machines. As the controllers progress, they update the Status of each object. Run the DKP describe command to monitor the current status of the cluster:
CODE
dkp describe cluster -c ${CLUSTER_NAME}
OUTPUT:
CODE
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/d2iq-e2e-cluster_name-1 True 13h
├─ClusterInfrastructure - VSphereCluster/d2iq-e2e-cluster_name-1 True 13h
├─ControlPlane - KubeadmControlPlane/d2iq-control-plane True 13h
│ ├─Machine/d2iq--control-plane-7llgd True 13h
│ ├─Machine/d2iq--control-plane-vncbl True 13h
│ └─Machine/d2iq--control-plane-wbgrm True 13h
└─Workers
└─MachineDeployment/d2iq--md-0 True 13h
├─Machine/d2iq--md-0-74c849dc8c-67rv4 True 13h
├─Machine/d2iq--md-0-74c849dc8c-n2skc True 13h
├─Machine/d2iq--md-0-74c849dc8c-nkftv True 13h
└─Machine/d2iq--md-0-74c849dc8c-sqklv True 13h
Check that all machines have a NODE_NAME assigned:
CODE
kubectl get machines
OUTPUT:
CODE
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
d2iq-e2e-cluster-1-control-plane-7llgd d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-control-plane-7llgd vsphere://421638e2-e776-9af6-f683-5e105de5da5a Running 13h v1.27.11
d2iq-e2e-cluster-1-control-plane-vncbl d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-control-plane-vncbl vsphere://42168835-7fef-95c4-3652-ebcad3e10d36 Running 13h v1.27.11
d2iq-e2e-cluster-1-control-plane-wbgrm d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-control-plane-wbgrm vsphere://421642df-afc4-b6c2-9e61-5b86e7c37eac Running 13h v1.27.11
d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4 d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-67rv4 vsphere://4216f467-8483-73cb-a8b6-8d6a4a71e4b4 Running 14h v1.27.11
d2iq-e2e-cluster-1-md-0-74c849dc8c-n2skc d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-n2skc vsphere://42161cde-9904-4dd2-7a3e-cdfc7655f090 Running 14h v1.27.11
d2iq-e2e-cluster-1-md-0-74c849dc8c-nkftv d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-nkftv vsphere://42163a0d-eb8d-b5a6-82d5-188e24817c00 Running 14h v1.27.11
d2iq-e2e-cluster-1-md-0-74c849dc8c-sqklv d2iq-e2e-cluster-1 d2iq-e2e-cluster-1-md-0-74c849dc8c-sqklv vsphere://42161dff-92a5-6da9-7ac1-e987e2c8fed2 Running 14h v1.27.11
Verify that the kubeadm control plane is ready with the command:
CODE
kubectl get kubeadmcontrolplane
The output appears similar to the following:
CODE
NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
d2iq-e2e-cluster-1-control-plane d2iq-e2e-cluster-1 true true 3 3 3 0 14h v1.27.11
Describe the kubeadm control plane and check its status and events with the command:
CODE
kubectl describe kubeadmcontrolplane
As they progress, the controllers also create Events, which you can list using the command:
CODE
kubectl get events | grep ${CLUSTER_NAME}
For brevity, this example uses grep. You can also use separate commands to get Events for specific objects, such as kubectl get events --field-selector involvedObject.kind="VSphereCluster" and kubectl get events --field-selector involvedObject.kind="VSphereMachine".
DKP uses the vSphere CSI driver as the default storage provider. Use Kubernetes CSI-compatible storage that is suitable for production. See the Kubernetes documentation topic Changing the Default Storage Class for more information.
If you are not using the default, you cannot deploy an alternate provider until after the dkp create cluster command finishes. However, you must choose the provider before installing Kommander.
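For reference, the Kubernetes documentation topic mentioned above switches the default storage class with annotation patches similar to this sketch, where <current-default> and <my-storage-class> are placeholder storage class names:
CODE
kubectl patch storageclass <current-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass <my-storage-class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'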
Known Limitations
Be aware of these limitations in the current release of DKP Konvoy.
The DKP Konvoy version used to create a bootstrap cluster must match the DKP Konvoy version used to create a workload cluster.
DKP Konvoy supports deploying one workload cluster.
DKP Konvoy generates a set of objects for one Node Pool.
DKP Konvoy does not validate edits to cluster objects.
The optional next step is to Make the vSphere Cluster Self-managed. The step is optional because, as an example, if you are using an existing, self-managed cluster to create a managed cluster, you would not want the managed cluster to be self-managed.
Next Steps
vSphere Make the Cluster Self-Managed