vSphere Create a New Air-gapped Cluster

Prerequisites

Before you begin, be sure that you have created a Bootstrap Cluster.

Name your Cluster

  1. Give your cluster a unique name suitable for your environment. The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation fails if the name contains capital letters. See the Kubernetes documentation for more naming information.

  2. The cluster-name you specify must match the cluster-name used when defining your inventory objects.

  3. Set the CLUSTER_NAME environment variable with the command:

    CODE
    export CLUSTER_NAME=<my-vsphere-cluster>
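
As an optional sanity check (a minimal sketch, assuming a POSIX shell), you can confirm the name uses only the allowed characters before continuing:

CODE
# Prints a warning if the name contains anything outside a-z, 0-9, '.', and '-'
echo "${CLUSTER_NAME}" | grep -Eq '^[a-z0-9.-]+$' || echo "invalid cluster name: ${CLUSTER_NAME}"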

IMPORTANT: Ensure your subnets do not overlap with your host subnet; the subnets cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do so at cluster creation. The default subnets used in DKP are:

  • CODE
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
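
If these defaults overlap with your host subnet, change the clusterNetwork block in the manifest generated by the dry run below before applying it. The CIDRs here are placeholders for illustration only:

CODE
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 172.16.0.0/16    # example non-overlapping pod subnet
    services:
      cidrBlocks:
      - 172.17.0.0/22    # example non-overlapping service subnet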

Create a New vSphere Kubernetes Cluster

Use the following steps to create a new, air-gapped vSphere cluster.

IMPORTANT: The image must be created by the konvoy-image-builder project in order to use the registry mirror feature.

  1. Configure your cluster to use an existing registry as a mirror when attempting to pull images, as done previously on the vSphere Air-gapped Seed the Registry page.

  2. Create the Kubernetes cluster objects by copying the following command and substituting valid values for your environment:

    CODE
    dkp create cluster vsphere \
      --cluster-name ${CLUSTER_NAME} \
      --network <NETWORK_NAME> \
      --control-plane-endpoint-host <CONTROL_PLANE_IP> \
      --data-center <DATACENTER_NAME> \
      --data-store <DATASTORE_NAME> \
      --folder <FOLDER_NAME> \
      --server <VCENTER_API_SERVER_URL> \
      --ssh-public-key-file </path/to/key.pub> \
      --resource-pool <RESOURCE_POOL_NAME> \
      --vm-template konvoy-ova-vsphere-os-release-k8s_release-vsphere-timestamp \
      --virtual-ip-interface eth0 \
      --extra-sans "127.0.0.1" \
      --registry-mirror-url=${REGISTRY_URL} \
      --registry-mirror-cacert=${REGISTRY_CA} \
      --registry-mirror-username=${REGISTRY_USERNAME} \
      --registry-mirror-password=${REGISTRY_PASSWORD} \
      --dry-run \
      --output=yaml \
    > ${CLUSTER_NAME}.yaml
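
    The registry variables referenced above are typically exported while seeding the registry; if they are not set in your current shell, export them first (placeholder values shown):

    CODE
    export REGISTRY_URL=<https/http>://<registry-address>:<registry-port>
    export REGISTRY_CA=<path to the CA certificate on the bastion>
    export REGISTRY_USERNAME=<username>
    export REGISTRY_PASSWORD=<password>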

The following sections describe HTTP, FIPS, and other flags to use during the cluster creation step above.

Flatcar

If your cluster uses Flatcar OS, use this flag to instruct the bootstrap cluster to make changes related to the installation paths:

CODE
--os-hint flatcar

FIPS Requirements

To create a cluster in FIPS mode, inform the controllers of the appropriate image repository and version tags of the official D2iQ FIPS builds of Kubernetes by adding the following flags to the dkp create cluster command:

CODE
--kubernetes-version=v1.28.7+fips.0 \
--etcd-version=3.5.10+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \

HTTP/HTTPS Proxy

If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.

CODE
--http-proxy <http proxy list>
--https-proxy <https proxy list>
--no-proxy <no proxy list>

  • To configure the Control Plane and Worker nodes to use an HTTP proxy:

    CODE
    export CONTROL_PLANE_HTTP_PROXY=http://example.org:8080
    export CONTROL_PLANE_HTTPS_PROXY=http://example.org:8080
    export CONTROL_PLANE_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
    
    export WORKER_HTTP_PROXY=http://example.org:8080
    export WORKER_HTTPS_PROXY=http://example.org:8080
    export WORKER_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"

  • Replace:

    • example.org,example.com,example.net with your internal addresses

    • localhost and 127.0.0.1 addresses should not use the proxy

    • 10.96.0.0/12 is the default Kubernetes service subnet

    • 192.168.0.0/16 is the default Kubernetes pod subnet

    • kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local is the internal Kubernetes kube-apiserver service

    • .svc,.svc.cluster,.svc.cluster.local are the internal Kubernetes services

    • 169.254.169.254 is the AWS metadata server

    • .elb.amazonaws.com is for the worker nodes to allow them to communicate directly to the kube-apiserver ELB

  • Create a Kubernetes cluster with an HTTP proxy configured by passing these flags and exported variables to the dkp create cluster command:

    CODE
      --control-plane-http-proxy="${CONTROL_PLANE_HTTP_PROXY}" \
      --control-plane-https-proxy="${CONTROL_PLANE_HTTPS_PROXY}" \
      --control-plane-no-proxy="${CONTROL_PLANE_NO_PROXY}" \
      --worker-http-proxy="${WORKER_HTTP_PROXY}" \
      --worker-https-proxy="${WORKER_HTTPS_PROXY}" \
      --worker-no-proxy="${WORKER_NO_PROXY}" \

Individual manifests using the Output Directory flag:

To make editing easier, you can create individual files containing smaller manifests by using the --output-directory flag. This creates multiple files in the specified directory, which must already exist:

CODE
--output-directory=<existing-directory>
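
For example (a sketch; ./manifests is an arbitrary name), create the directory first, then substitute the flag for --output=yaml and the > redirect in the command above:

CODE
mkdir -p ./manifests
# In the dkp create cluster command, replace:
#   --output=yaml > ${CLUSTER_NAME}.yaml
# with:
#   --output-directory=./manifests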

Refer to the Cluster Creation Customization Choices section for more information on how to use optional flags such as the --output-directory flag.

DKP uses the vSphere CSI driver as the default storage provider. Use Kubernetes CSI-compatible storage that is suitable for production. See the Kubernetes documentation Changing the Default Storage Class for more information.

If you are not using the default storage provider, you cannot deploy an alternate provider until after dkp create cluster finishes. However, you must choose it before installing Kommander.
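
For reference, marking a different storage class as the default later follows the standard Kubernetes annotation pattern (a sketch; the class names are placeholders):

CODE
# Remove the default annotation from the current default storage class
kubectl patch storageclass <current-default> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'

# Mark the alternate storage class as the default
kubectl patch storageclass <alternate-class> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'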

  1. Inspect the created cluster resources with the command:

    CODE
    kubectl get clusters,kubeadmcontrolplanes,machinedeployments

  2. (Optional) Edit the cluster objects. Familiarize yourself with Cluster API before making changes, as incorrect edits can prevent the cluster from deploying successfully. See Customizing CAPI Components for a Cluster.

  3. Create the cluster from the objects generated in the dry run. A warning appears in the console if a resource already exists; you must then remove the resource or update your YAML.

    CODE
    kubectl create -f ${CLUSTER_NAME}.yaml

    NOTE: If you used the --output-directory flag in your dkp create cluster --dry-run step above, create the cluster from the objects you created by specifying the directory:

    CODE
    kubectl create -f <existing-directory>/

  4. Use the wait command to monitor the cluster control-plane readiness:

    CODE
    kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m

    Output:

    CODE
    cluster.cluster.x-k8s.io/${CLUSTER_NAME} condition met

    The READY status becomes True after the cluster control plane becomes Ready, as shown in the following steps.

    After DKP creates the objects on the API server, the Cluster API controllers reconcile them, creating infrastructure and machines. As the controllers progress, they update the Status of each object.
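
    While the controllers reconcile, you can also watch the Machine objects converge (optional):

    CODE
    kubectl get machines --watch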

  5. Run the DKP describe command to monitor the current status of the cluster:

    CODE
    dkp describe cluster -c ${CLUSTER_NAME}

Output:

CODE
NAME                                                                READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/e2e-airgapped-1                                             True                     13h
├─ClusterInfrastructure - VSphereCluster/e2e-airgapped-1            True                     13h
├─ControlPlane - KubeadmControlPlane/e2e-airgapped-1-control-plane  True                     13h
│ ├─Machine/e2e-airgapped-1-control-plane-7llgd                     True                     13h
│ ├─Machine/e2e-airgapped-1-control-plane-vncbl                     True                     13h
│ └─Machine/e2e-airgapped-1-control-plane-wbgrm                     True                     13h
└─Workers
    └─MachineDeployment/e2e-airgapped-1-md-0                          True                     13h
      ├─Machine/e2e-airgapped-1-md-0-74c849dc8c-67rv4               True                     13h
      ├─Machine/e2e-airgapped-1-md-0-74c849dc8c-n2skc               True                     13h
      ├─Machine/e2e-airgapped-1-md-0-74c849dc8c-nkftv               True                     13h
      └─Machine/e2e-airgapped-1-md-0-74c849dc8c-sqklv               True                     13h
  6. As they progress, the controllers also create Events, which you can list using the command:

    CODE
    kubectl get events | grep ${CLUSTER_NAME}

    For brevity, this example uses grep. You can also use separate commands to get Events for specific objects, such as kubectl get events --field-selector involvedObject.kind="VSphereCluster" and kubectl get events --field-selector involvedObject.kind="VSphereMachine".
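
    For example, to scope Events to the vSphere infrastructure objects mentioned above:

    CODE
    kubectl get events --field-selector involvedObject.kind="VSphereCluster"
    kubectl get events --field-selector involvedObject.kind="VSphereMachine"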

Known Limitations

NOTE: Be aware of these limitations in the current release of DKP Konvoy.

  • The DKP Konvoy version used to create a bootstrap cluster must match the DKP Konvoy version used to create a workload cluster.

  • DKP Konvoy supports deploying one workload cluster.

  • DKP Konvoy generates a set of objects for one Node Pool.

  • DKP Konvoy does not validate edits to cluster objects.

Next Step

vSphere Make your Air-gapped Cluster Self-Managed
