vSphere Air-gapped: Create the Management Cluster
This page describes how to create a self-managed, air-gapped management cluster. A self-managed cluster is one in which the CAPI resources and controllers that describe and manage it run on the same cluster they manage.
Name your Cluster
Give your cluster a unique name suitable for your environment. The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name contains capital letters. See the Kubernetes documentation for more naming information.
Set the CLUSTER_NAME environment variable with the command:
export CLUSTER_NAME=<my-vsphere-cluster>
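As a quick sanity check, you can verify the name against the allowed pattern before continuing. The snippet below is a minimal sketch, assuming a POSIX shell with grep -E available:
# Check that CLUSTER_NAME uses only a-z, 0-9, '.', '-' and starts/ends alphanumeric
echo "${CLUSTER_NAME}" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$' && echo "name OK" || echo "invalid cluster name"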
DKP uses the vSphere CSI driver as the default storage provider. Use Kubernetes-CSI-compatible storage that is suitable for production.
Create a New vSphere Kubernetes Cluster
Use the following steps to create a new, air-gapped vSphere cluster.
Configure your cluster to use an existing local registry as a mirror when attempting to pull images:
IMPORTANT: The image must be created by Konvoy Image Builder in order to use the registry mirror feature.
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
REGISTRY_URL: the address of an existing local registry accessible in your environment. The new cluster nodes will be configured to use it as a mirror registry when pulling images.
REGISTRY_CA: (optional) the path on the bastion machine to the registry CA. Konvoy will configure the cluster nodes to trust this CA. This value is only needed if the registry uses a self-signed certificate and the machine images are not already configured to trust this CA.
REGISTRY_USERNAME: (optional) set to a user that has pull access to this registry.
REGISTRY_PASSWORD: (optional) required only if a username is set.
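For example, a hypothetical setup might look like the following (the registry address, CA path, and credentials are illustrative placeholders, not defaults):
export REGISTRY_URL=https://registry.example.internal:5000
export REGISTRY_CA=/home/user/registry-ca.crt
export REGISTRY_USERNAME=registry-user
export REGISTRY_PASSWORD=registry-password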
Load the image, using either the docker or podman command:
docker load -i konvoy-bootstrap-image-v2.8.1.tar
OR
podman load -i konvoy-bootstrap-image-v2.8.1.tar
podman image tag konvoy-bootstrap:2.8.1 docker.io/konvoy-bootstrap:v2.8.1
If you use podman, the second command retags the loaded image to the fully qualified name that DKP expects.
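Optionally, confirm the bootstrap image is present before continuing (use podman image ls instead if you loaded with podman):
docker image ls | grep konvoy-bootstrap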
Create a Kubernetes cluster by copying the following command and substituting valid values for your environment:
dkp create cluster vsphere \
--cluster-name=${CLUSTER_NAME} \
--additional-tags=owner=$(whoami) \
--kubeconfig=<management-cluster-kubeconfig-path> \
--namespace ${WORKSPACE_NAMESPACE} \
--network <NETWORK_NAME> \
--control-plane-endpoint-host <CONTROL_PLANE_IP> \
--data-center <DATACENTER_NAME> \
--data-store <DATASTORE_NAME> \
--folder <FOLDER_NAME> \
--server <VCENTER_API_SERVER_URL> \
--ssh-public-key-file </path/to/key.pub> \
--resource-pool <RESOURCE_POOL_NAME> \
--vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
--virtual-ip-interface <ip_interface_name> \
--extra-sans "127.0.0.1" \
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD} \
--self-managed
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.
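For reference, the appended proxy flags might look like the following (the proxy address and no-proxy list are illustrative; adjust them for your network):
--http-proxy=http://proxy.example.internal:3128 \
--https-proxy=http://proxy.example.internal:3128 \
--no-proxy=127.0.0.1,localhost,.svc,.cluster.local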
For bootstrap and custom YAML cluster creation, refer to the Custom Installation and Additional Infrastructure Tools section of the documentation for vSphere: vSphere Infrastructure
Retrieve the kubeconfig and Explore the New vSphere Cluster
Follow these steps:
Fetch the kubeconfig file with the command:
dkp get kubeconfig --cluster-name ${CLUSTER_NAME} --kubeconfig <management-cluster-kubeconfig-path> -n ${WORKSPACE_NAMESPACE} > ${CLUSTER_NAME}.conf
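To confirm the kubeconfig works, you can run a quick sanity check against the new cluster:
kubectl --kubeconfig=${CLUSTER_NAME}.conf cluster-info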
List the nodes with the following command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
List the pods with the following command:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
NOTE: Wait for the Status to move to Ready while the calico-node pods are being deployed.
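If you prefer to block until the CNI is rolled out rather than polling the pod list, you can wait on the calico-node DaemonSet. This assumes Calico runs in the calico-system namespace, as is typical for operator-based installs:
kubectl --kubeconfig=${CLUSTER_NAME}.conf -n calico-system rollout status daemonset/calico-node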
Cluster Verification
If you want to monitor or verify the installation of your clusters, refer to Verify your Cluster and DKP Installation.
As cluster creation progresses, the controllers create Events, which you can also monitor using the command:
kubectl get events | grep ${CLUSTER_NAME}
For brevity, this example uses grep. You can also use separate commands to get Events for specific objects, such as kubectl get events --field-selector involvedObject.kind="VSphereCluster" and kubectl get events --field-selector involvedObject.kind="VSphereMachine".
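To see the Events in chronological order as creation progresses, you can also sort by creation timestamp:
kubectl get events --sort-by=.metadata.creationTimestamp | grep ${CLUSTER_NAME}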