vSphere Air-gapped: Create the Management Cluster
Name your cluster
Give your cluster a unique name suitable for your environment.
Set the CLUSTER_NAME environment variable with the command:
export CLUSTER_NAME=<my-vsphere-cluster>
DKP uses the local static provisioner as the default storage provider. However, localvolumeprovisioner is not suitable for production use. You should use a Kubernetes CSI-compatible storage provider that is suitable for production. You can choose from any of the storage options available for Kubernetes. To disable the default that Konvoy deploys, set the localvolumeprovisioner StorageClass as non-default, and then set your newly created StorageClass to be the default by following the commands in the Kubernetes documentation topic Changing the Default Storage Class.
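For example, the following commands mark the Konvoy-deployed StorageClass as non-default and promote a production CSI StorageClass. This is a minimal sketch based on the standard Kubernetes default-class annotation; the name my-csi-storageclass is a placeholder for whatever StorageClass your CSI driver provides:

# Mark the Konvoy-deployed localvolumeprovisioner StorageClass as non-default
kubectl patch storageclass localvolumeprovisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Set your production CSI StorageClass (placeholder name) as the default
kubectl patch storageclass my-csi-storageclass -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'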
Create a new vSphere Kubernetes cluster
Use the following steps to create a new, air-gapped vSphere cluster.
Configure your cluster to use an existing registry as a mirror when attempting to pull images:
IMPORTANT: The image must be created by Konvoy Image Builder in order to use the registry mirror feature.

export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
REGISTRY_URL: the address of an existing registry accessible in your environment that the new cluster nodes will be configured to use as a mirror registry when pulling images.
REGISTRY_CA: (optional) the path on the bastion machine to the registry CA. Konvoy configures the cluster nodes to trust this CA. This value is only needed if the registry uses a self-signed certificate and the machine images are not already configured to trust this CA.
REGISTRY_USERNAME: (optional) set to a user that has pull access to this registry.
REGISTRY_PASSWORD: (optional) only needed if a username is set.
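For example, assuming a registry reachable over HTTPS at registry.example.com (the hostname, port, CA path, and username below are placeholders for illustration only):

# Example values only; substitute the details of your own registry
export REGISTRY_URL=https://registry.example.com:443
export REGISTRY_CA=/home/<user>/registry-ca.crt
export REGISTRY_USERNAME=registry-user
export REGISTRY_PASSWORD=<password>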
Load the container image:
docker load -i konvoy-bootstrap-image-v2.5.1.tar

Or, if you are using podman:

podman load -i konvoy-bootstrap-image-v2.5.1.tar
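To confirm that the bootstrap image loaded successfully, you can list your local images (a quick check; the exact repository and tag depend on your DKP release). With docker, for example:

docker images | grep konvoy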
Create a Kubernetes cluster by copying the following command and substituting valid values for your environment:
dkp create cluster vsphere \
--cluster-name ${CLUSTER_NAME} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <CONTROL_PLANE_IP> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file </path/to/key.pub> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
--virtual-ip-interface <ip_interface_name> \
--extra-sans "127.0.0.1" \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--self-managed
A self-managed cluster refers to one in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are managing.
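While provisioning is in progress, you can also inspect the Cluster API objects directly with kubectl. This is a minimal sketch using standard Cluster API resources and labels; run it against the bootstrap cluster during creation, or against the new cluster after the CAPI resources have been moved to it:

kubectl get cluster ${CLUSTER_NAME}
kubectl get machines -l cluster.x-k8s.io/cluster-name=${CLUSTER_NAME}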
Cluster Verification
If you want to monitor or verify the installation of your clusters, refer to Verify your Cluster and DKP Installation.
As cluster creation progresses, the controllers create Events, which you can also monitor using the command:
kubectl get events | grep ${CLUSTER_NAME}
For brevity, this example uses grep. You can also use separate commands to get Events for specific objects, such as kubectl get events --field-selector involvedObject.kind="VSphereCluster" and kubectl get events --field-selector involvedObject.kind="VSphereMachine".