Pre-provisioned Air-gapped New Cluster
If your cluster is in an air-gapped environment, you must provide additional arguments when creating the cluster. Other customizations that require different flags on the dkp create cluster command are also available; refer to Pre-provisioned Cluster Creation Customization Choices for more cluster customization options.
Name your Cluster
The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.
When specifying the cluster-name, you must use the same cluster-name as used when defining your inventory objects.
By default, the control-plane Nodes will be created in 3 different zones. However, the default worker Nodes will reside in a single Availability Zone. You may create additional node pools in other Availability Zones with the dkp create nodepool command.
Follow these steps:
Give your cluster a unique name suitable for your environment.
Set the environment variable:
export CLUSTER_NAME=<preprovisioned-example>
Before you create a new DKP cluster below, choose an external load balancer or virtual IP and use the corresponding dkp create cluster command flags.
Create an Air-gapped Kubernetes Cluster
Once you have defined the infrastructure and control plane endpoints, follow these steps to create a new pre-provisioned cluster.
DKP uses localvolumeprovisioner as the default storage provider for a pre-provisioned environment. However, localvolumeprovisioner is not suitable for production use. Instead, use a Kubernetes CSI-compatible storage solution that is suitable for production.
After disabling localvolumeprovisioner, you can choose from any of the storage options available for Kubernetes. To make that storage the default storage, use the commands shown in this section of the Kubernetes documentation: https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
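For example, after installing a production-grade CSI storage class, it can be made the default with a standard kubectl annotation patch (a sketch; <storage-class-name> is a placeholder for the name of your storage class):
kubectl patch storageclass <storage-class-name> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Setting the same annotation to "false" on the previous default storage class removes its default status.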
The following command relies on the pre-provisioned Cluster API infrastructure provider to initialize the Kubernetes control plane and worker nodes on the hosts defined in the inventory.
NOTE: To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>
NOTE: Ensure your subnets do not overlap with your host subnet because they cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do this at cluster creation. The default subnets used in DKP are:
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
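As a quick sanity check, you can list the IPv4 address ranges configured on each pre-provisioned host and confirm that none of them fall inside the default pod or service CIDRs above (a sketch using standard Linux tooling):
# List interface names with their IPv4 addresses and prefixes on this host
ip -o -4 addr show | awk '{print $2, $4}'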
Create cluster command - depending on the cluster size, it will take a few minutes to create the cluster:
The command below uses the default external load balancer to create Kubernetes cluster objects:
(Optional) If you have overrides for your clusters, you must specify the secret as part of the create cluster command, for example --override-secret-name=$CLUSTER_NAME-user-overrides. If this flag is not specified, the overrides for your nodes will not be applied.
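If the override secret does not exist yet, one way to create it is as a generic Kubernetes secret built from a local overrides file (a sketch; overrides.yaml is a hypothetical file containing your node overrides):
kubectl create secret generic ${CLUSTER_NAME}-user-overrides \
  --from-file=overrides.yaml=overrides.yaml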
dkp create cluster preprovisioned \
  --cluster-name ${CLUSTER_NAME} \
  --control-plane-endpoint-host <control plane endpoint host> \
  --control-plane-endpoint-port <control plane endpoint port, if different than 6443> \
  --pre-provisioned-inventory-file preprovisioned_inventory.yaml \
  --ssh-private-key-file <path-to-ssh-private-key> \
  --registry-mirror-url=${REGISTRY_URL} \
  --dry-run \
  --output=yaml \
  > ${CLUSTER_NAME}.yaml
Virtual IP ALTERNATIVE - if you do not have an external load balancer and wish to use a virtual IP provided by kube-vip, specify these flags as shown in the example below:
dkp create cluster preprovisioned \
  --cluster-name ${CLUSTER_NAME} \
  --control-plane-endpoint-host 196.168.1.10 \
  --virtual-ip-interface eth1 \
  --dry-run \
  --output=yaml \
  > ${CLUSTER_NAME}.yaml
The output from this command is shortened here for reading clarity, but should start like this:
Generating cluster resources
cluster.cluster.x-k8s.io/preprovisioned-example created
cont.........
Inspect or edit the cluster objects. Familiarize yourself with Cluster API before editing them, as edits can prevent the cluster from deploying successfully. See Pre-provisioned Customizing CAPI Clusters.
Create the cluster from the objects generated in the dry run. A warning will appear in the console if the resource already exists and will require you to remove the resource or update your YAML.
kubectl create -f ${CLUSTER_NAME}.yaml
NOTE: If you used the --output-directory flag in your dkp create .. --dry-run step above, create the cluster from the objects you created by specifying the directory:
kubectl create -f <existing-directory>/
Use the wait command to monitor the cluster control-plane readiness:
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=30m
Output:
cluster.cluster.x-k8s.io/preprovisioned-example condition met
NOTE: Depending on the cluster size, it will take a few minutes to create.
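While you wait, the underlying Cluster API objects can be inspected from another terminal to follow provisioning progress, for example:
# Watch the machines for the new cluster being provisioned
kubectl get machines --watch

# Show detailed status and conditions for the cluster object
kubectl describe cluster ${CLUSTER_NAME}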
When the command completes, you will have a running Kubernetes cluster! Use this command to get the Kubernetes kubeconfig for the new cluster and proceed to installing the DKP Kommander UI:
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
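As a quick check that the kubeconfig works, list the nodes of the new cluster before proceeding:
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes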
Azure requires changing the Calico CNI encapsulation type from the default IP-in-IP to VXLAN. If changing the Calico encapsulation, D2iQ recommends changing it after cluster creation, but before production.
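As one illustration of what such a change can involve, assuming Calico uses its default IPPool named default-ipv4-ippool and the change is applied against the new cluster's kubeconfig, the encapsulation can be switched from IP-in-IP to VXLAN by patching the IPPool resource. Treat this as a sketch, not the documented DKP procedure:
# Hypothetical example: switch the default Calico IPPool from IP-in-IP to VXLAN encapsulation
kubectl --kubeconfig=${CLUSTER_NAME}.conf patch ippool default-ipv4-ippool --type=merge \
  -p '{"spec":{"ipipMode":"Never","vxlanMode":"Always"}}'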
Audit logs
To modify Control Plane Audit log settings, use the information contained in the page Configure the Control Plane.
Further Optional Steps:
Cluster Verification
If you want to monitor or verify the installation of your clusters, refer to Verify your Cluster and DKP Installation.