# Create a new Air-gapped vSphere Cluster
## Prerequisites
Before you begin, ensure that you have created a Bootstrap Cluster.
## Create a new vSphere Kubernetes cluster
Use the following steps to create a new, air-gapped vSphere cluster.
Configure your cluster to use an existing registry as a mirror when attempting to pull images:

IMPORTANT: The image must be created by the konvoy-image-builder project in order to use the registry mirror feature.

```bash
export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
```

- `REGISTRY_URL`: the address of an existing registry accessible in the VPC. The new cluster nodes are configured to use it as a mirror registry when pulling images.
- `REGISTRY_CA`: (optional) the path on the bastion machine to the registry CA. Konvoy configures the cluster nodes to trust this CA. This value is only needed if the registry uses a self-signed certificate and the AMIs are not already configured to trust this CA.
- `REGISTRY_USERNAME`: (optional) set to a user that has pull access to this registry.
- `REGISTRY_PASSWORD`: (optional) the password for `REGISTRY_USERNAME`; only needed if a username is set.
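Before creating the cluster, it can help to sanity-check these settings from the bastion. The snippet below is a minimal sketch, not part of DKP: `validate_registry_url` is a hypothetical helper that checks the expected `<scheme>://<address>[:<port>]` shape, and the commented `curl` line shows one way to confirm the registry actually responds.

```shell
# Hypothetical helper: check that REGISTRY_URL looks like <https/http>://<address>[:<port>].
validate_registry_url() {
  [[ "$1" =~ ^https?://[^/:[:space:]]+(:[0-9]+)?$ ]]
}

# Example with a hypothetical registry address; substitute your own.
if validate_registry_url "${REGISTRY_URL:-https://registry.example.com:5000}"; then
  echo "registry URL format: ok"
else
  echo "registry URL format: invalid"
fi

# To confirm the registry answers from the bastion (requires network access):
# curl --cacert "${REGISTRY_CA}" -u "${REGISTRY_USERNAME}:${REGISTRY_PASSWORD}" "${REGISTRY_URL}/v2/"
```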
Create a Kubernetes cluster by copying the following command and substituting valid values for your environment:
NOTE: Ensure your subnets do not overlap with your host subnet, because they cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do so at cluster creation. The default subnets used in DKP are:

```yaml
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
```
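The overlap warning above can be checked mechanically before you create the cluster. The following is a rough sketch, not a DKP tool: it compares two IPv4 CIDR blocks in pure bash and tests a hypothetical host subnet against the default pod CIDR.

```shell
# Sketch: detect whether two IPv4 CIDR blocks overlap (pure bash, no dependencies).
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidrs_overlap() {
  local net1="${1%/*}" len1="${1#*/}" net2="${2%/*}" len2="${2#*/}"
  # Compare both networks under the shorter (less specific) prefix length.
  local shorter=$(( len1 < len2 ? len1 : len2 ))
  local mask=$(( (0xFFFFFFFF << (32 - shorter)) & 0xFFFFFFFF ))
  (( ( $(ip_to_int "$net1") & mask ) == ( $(ip_to_int "$net2") & mask ) ))
}

# Default DKP pod CIDR vs. a hypothetical host subnet:
cidrs_overlap "192.168.0.0/16" "192.168.10.0/24" && echo "overlap: choose different subnets"
cidrs_overlap "192.168.0.0/16" "10.10.0.0/24"   || echo "no overlap"
```

Run the same check against the services CIDR (`10.96.0.0/12`) as well if your hosts use `10.x` addressing.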
Create the cluster with the common configurations shown below:

```bash
dkp create cluster vsphere --cluster-name ${CLUSTER_NAME} \
  --network <NETWORK_NAME> \
  --control-plane-endpoint-host <CONTROL_PLANE_IP> \
  --data-center <DATACENTER_NAME> \
  --data-store <DATASTORE_NAME> \
  --folder <FOLDER_NAME> \
  --server <VCENTER_API_SERVER_URL> \
  --ssh-public-key-file </path/to/key.pub> \
  --resource-pool <RESOURCE_POOL_NAME> \
  --vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
  --virtual-ip-interface eth0 \
  --extra-sans "127.0.0.1" \
  --registry-mirror-url=${REGISTRY_URL} \
  --registry-mirror-cacert=${REGISTRY_CA} \
  --registry-mirror-username=${REGISTRY_USERNAME} \
  --registry-mirror-password=${REGISTRY_PASSWORD}
```
If your environment uses HTTP/HTTPS proxies, you must include the `--http-proxy`, `--https-proxy`, and `--no-proxy` flags and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.

Inspect the created cluster resources with the command:

```bash
kubectl get clusters,kubeadmcontrolplanes,machinedeployments
```
Use the `wait` command to monitor the cluster control-plane readiness:

```bash
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
```

Output:

```
cluster.cluster.x-k8s.io/${CLUSTER_NAME} condition met
```
The `READY` status becomes `True` after the cluster control plane becomes Ready in one of the following steps.

After DKP creates the objects on the API server, the Cluster API controllers reconcile them, creating infrastructure and machines. As the controllers progress, they update the Status of each object.

Run the DKP `describe` command to monitor the current status of the cluster:

```bash
dkp describe cluster -c ${CLUSTER_NAME}
```
```
NAME                                                                READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/e2e-airgapped-1                                             True                     13h
├─ClusterInfrastructure - VSphereCluster/e2e-airgapped-1            True                     13h
├─ControlPlane - KubeadmControlPlane/e2e-airgapped-1-control-plane  True                     13h
│ ├─Machine/e2e-airgapped-1-control-plane-7llgd                     True                     13h
│ ├─Machine/e2e-airgapped-1-control-plane-vncbl                     True                     13h
│ └─Machine/e2e-airgapped-1-control-plane-wbgrm                     True                     13h
└─Workers
  └─MachineDeployment/e2e-airgapped-1-md-0                          True                     13h
    ├─Machine/e2e-airgapped-1-md-0-74c849dc8c-67rv4                 True                     13h
    ├─Machine/e2e-airgapped-1-md-0-74c849dc8c-n2skc                 True                     13h
    ├─Machine/e2e-airgapped-1-md-0-74c849dc8c-nkftv                 True                     13h
    └─Machine/e2e-airgapped-1-md-0-74c849dc8c-sqklv                 True                     13h
```
As they progress, the controllers also create Events, which you can list using the command:

```bash
kubectl get events | grep ${CLUSTER_NAME}
```

For brevity, this example uses `grep`. You can also use separate commands to get Events for specific objects, such as `kubectl get events --field-selector involvedObject.kind="VSphereCluster"` and `kubectl get events --field-selector involvedObject.kind="VSphereMachine"`.
You can make the cluster self-managed with the information in the linked page.
Next Step:
You can explore your new cluster.
## Known limitations
NOTE: Be aware of these limitations in the current release of DKP Konvoy:

- The DKP Konvoy version used to create a bootstrap cluster must match the DKP Konvoy version used to create a workload cluster.
- DKP Konvoy supports deploying one workload cluster.
- DKP Konvoy generates a set of objects for one Node Pool.
- DKP Konvoy does not validate edits to cluster objects.