Pre-provisioned Air-gapped: Define Environment
Fulfill the prerequisites for using pre-provisioned infrastructure in an air-gapped environment
The instructions below outline how to fulfill the prerequisites for using pre-provisioned infrastructure in an air-gapped environment.
Air-Gapped Registry Prerequisites
DKP in an air-gapped environment requires a local container registry of trusted images to enable production-level Kubernetes cluster management. In an environment with internet access, you retrieve artifacts from specialized repositories dedicated to them, such as container images from Docker Hub and Helm charts from a dedicated Helm chart repository. In an air-gapped environment, however, you need:
Local repositories to store Helm charts, container images, and other artifacts. Tools such as ECR, JFrog Artifactory, Harbor, and Nexus handle multiple types of artifacts in one local repository.
Bastion Host - If you have not set up a bastion host yet, refer to the Bastion Host section of the documentation.
The complete DKP air-gapped bundle, which contains all the DKP components needed for an air-gapped installation and can also be used to seed a local registry in a non-air-gapped environment. See Pre-provisioned: Loading the Registry.
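Once the bundle is downloaded onto the bastion host, you can extract it with tar. The snippet below is a minimal sketch; the exact top-level layout of the extracted contents may vary slightly by release, but it includes the kib/ directory used in the steps that follow.

```bash
# Extract the DKP air-gapped bundle into the current directory;
# the extracted contents include the kib/ directory referenced below.
tar -xzvf dkp-air-gapped-bundle_v2.8.1_linux_amd64.tar.gz
```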
Copy Air-gapped Artifacts onto Cluster Hosts
Using Konvoy Image Builder (KIB), you can copy the required artifacts (such as charts, Java packages, or OS packages like RPM or Deb) onto your cluster hosts.
Assuming you have downloaded dkp-air-gapped-bundle_v2.8.1_linux_amd64.tar.gz and extracted the tarball to a local directory as described above, the Kubernetes image bundle is located in kib/artifacts/images. To verify that the images and artifacts extracted there:

Verify the image bundles exist in kib/artifacts/images:

```bash
$ ls kib/artifacts/images/
kubernetes-images-1.28.7-d2iq.1.tar  kubernetes-images-1.28.7-d2iq.1-fips.tar
```
Verify the artifacts for your OS exist in the artifacts/ directory and export the appropriate variables:

```bash
$ ls kib/artifacts/
1.28.7_centos_7_x86_64.tar.gz
1.28.7_centos_7_x86_64_fips.tar.gz
1.28.7_redhat_7_x86_64.tar.gz
1.28.7_redhat_7_x86_64_fips.tar.gz
1.28.7_redhat_8_x86_64.tar.gz
1.28.7_redhat_8_x86_64_fips.tar.gz
1.28.7_rocky_9_x86_64.tar.gz
1.28.7_ubuntu_20_x86_64.tar.gz
containerd-1.6.28-d2iq.1-centos-7.9-x86_64.tar.gz
containerd-1.6.28-d2iq.1-centos-7.9-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-7.9-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.4-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.6-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rhel-8.6-x86_64_fips.tar.gz
containerd-1.6.28-d2iq.1-rocky-9.0-x86_64.tar.gz
containerd-1.6.28-d2iq.1-rocky-9.1-x86_64.tar.gz
containerd-1.6.28-d2iq.1-ubuntu-20.04-x86_64.tar.gz
images
pip-packages.tar.gz
```
For example, for RHEL 8.4 you would set:
```bash
export OS_PACKAGES_BUNDLE=1.28.7_redhat_8_x86_64.tar.gz
export CONTAINERD_BUNDLE=containerd-1.6.28-d2iq.1-rhel-8.4-x86_64.tar.gz
```
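If you are unsure which bundles match a node's operating system, you can check the node's /etc/os-release over SSH. This is a sketch; <ssh-user> and <node-address> are placeholders for your environment.

```bash
# Print the OS ID and version a node reports, e.g. "rhel 8.4",
# then pick the OS packages and containerd bundles that match.
ssh <ssh-user>@<node-address> '. /etc/os-release && echo "$ID $VERSION_ID"'
```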
Export the following environment variables, ensuring that all control plane and worker nodes are included:

```bash
export CONTROL_PLANE_1_ADDRESS="<control-plane-address-1>"
export CONTROL_PLANE_2_ADDRESS="<control-plane-address-2>"
export CONTROL_PLANE_3_ADDRESS="<control-plane-address-3>"
export WORKER_1_ADDRESS="<worker-address-1>"
export WORKER_2_ADDRESS="<worker-address-2>"
export WORKER_3_ADDRESS="<worker-address-3>"
export WORKER_4_ADDRESS="<worker-address-4>"
export SSH_USER="<ssh-user>"
export SSH_PRIVATE_KEY_FILE="<private key file>"
```

SSH_PRIVATE_KEY_FILE must be either the name of the SSH private key file in your working directory or an absolute path to the file in your user's home directory.
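Before generating the inventory, you can optionally confirm that each node is reachable with the exported credentials. This is a sketch assuming the variables above are set; BatchMode makes ssh fail fast instead of prompting for a password.

```bash
# Optional sanity check: try a non-interactive SSH login to every node.
for addr in "$CONTROL_PLANE_1_ADDRESS" "$CONTROL_PLANE_2_ADDRESS" "$CONTROL_PLANE_3_ADDRESS" \
            "$WORKER_1_ADDRESS" "$WORKER_2_ADDRESS" "$WORKER_3_ADDRESS" "$WORKER_4_ADDRESS"; do
  ssh -i "$SSH_PRIVATE_KEY_FILE" -o BatchMode=yes "$SSH_USER@$addr" hostname \
    || echo "unreachable: $addr"
done
```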
Generate an inventory.yaml to tell konvoy-image the IP addresses of the nodes in your cluster, so it knows where to upload the artifacts. This YAML is automatically picked up by the konvoy-image upload command in the next step. This inventory.yaml should exclude any GPU workers, which are handled in additional steps at the end.

```bash
cat <<EOF > inventory.yaml
all:
  vars:
    ansible_user: $SSH_USER
    ansible_port: 22
    ansible_ssh_private_key_file: $SSH_PRIVATE_KEY_FILE
  hosts:
    $CONTROL_PLANE_1_ADDRESS:
      ansible_host: $CONTROL_PLANE_1_ADDRESS
    $CONTROL_PLANE_2_ADDRESS:
      ansible_host: $CONTROL_PLANE_2_ADDRESS
    $CONTROL_PLANE_3_ADDRESS:
      ansible_host: $CONTROL_PLANE_3_ADDRESS
    $WORKER_1_ADDRESS:
      ansible_host: $WORKER_1_ADDRESS
    $WORKER_2_ADDRESS:
      ansible_host: $WORKER_2_ADDRESS
    $WORKER_3_ADDRESS:
      ansible_host: $WORKER_3_ADDRESS
    $WORKER_4_ADDRESS:
      ansible_host: $WORKER_4_ADDRESS
EOF
```
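Optionally, if Ansible happens to be installed on the machine where you run konvoy-image (an assumption; it is not required for the upload itself), you can confirm the generated file parses as a valid inventory:

```bash
# Render the inventory to JSON; errors here indicate a malformed inventory.yaml.
ansible-inventory -i inventory.yaml --list
```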
Upload the artifacts onto cluster hosts with the following command:
```bash
konvoy-image upload artifacts \
  --container-images-dir=./kib/artifacts/images/ \
  --os-packages-bundle=./kib/artifacts/$OS_PACKAGES_BUNDLE \
  --containerd-bundle=./kib/artifacts/$CONTAINERD_BUNDLE \
  --pip-packages-bundle=./kib/artifacts/pip-packages.tar.gz
```
The konvoy-image upload artifacts command copies all OS packages and other artifacts onto each of the machines in your inventory. When you create the cluster, the provisioning process connects to each node and runs commands to install those artifacts so that Kubernetes can run. KIB uses variable overrides to specify the base image and container images to use in your new machine image. The variable override files for NVIDIA and FIPS can be ignored unless you are adding an overlay feature.
Use the --overrides flag (for example, --overrides overrides/fips.yaml) to reference either the fips.yaml or offline-fips.yaml manifest located in the overrides directory, or see the override pages in the documentation.
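For instance, a FIPS upload combining the command above with the offline FIPS override might look like the following. This is a sketch using the RHEL 8 FIPS bundle names from the listing earlier; substitute the bundles that match your OS.

```bash
konvoy-image upload artifacts \
  --container-images-dir=./kib/artifacts/images/ \
  --os-packages-bundle=./kib/artifacts/1.28.7_redhat_8_x86_64_fips.tar.gz \
  --containerd-bundle=./kib/artifacts/containerd-1.6.28-d2iq.1-rhel-8.6-x86_64_fips.tar.gz \
  --pip-packages-bundle=./kib/artifacts/pip-packages.tar.gz \
  --overrides overrides/offline-fips.yaml
```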