This guide provides instructions for getting started with DKP to get your Kubernetes cluster up and running with basic configuration requirements in a vSphere environment. If you want to customize your vSphere environment, see vSphere Advanced Install.

DKP Prerequisites

Before using DKP to create a vSphere cluster, verify that you have:

  • An x86_64-based Linux® or macOS® machine.

  • The dkp binaries and Konvoy Image Builder (KIB) image bundle for Linux or macOS.

  • Docker® version 18.09.2 or later installed. You must have Docker installed on the host where the DKP Konvoy CLI runs. For example, if you are installing Konvoy on your laptop, ensure the laptop has a supported version of Docker.

On macOS, Docker runs in a virtual machine. Configure this virtual machine with at least 8GB of memory.
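The Docker version requirement can be checked from the shell. A minimal sketch, using sort -V to compare an example installed version (a stand-in value; on a real host it would come from docker --version) against the 18.09.2 minimum:

```shell
# Sketch: compare an installed Docker version against the 18.09.2 minimum.
# "installed" is a stand-in value; on a real host you could derive it with:
#   docker --version | awk '{print $3}' | tr -d ,
required="18.09.2"
installed="20.10.7"
# sort -V orders version strings; if the minimum sorts first, we are new enough
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "Docker ${installed} is new enough"
else
  echo "Docker ${installed} is too old; need >= ${required}"
fi
```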

Configure vSphere Prerequisites

Before installing, verify that your VMware vSphere Client environment meets the following basic requirements:

  • Access to a bastion VM or other network connected host.

    • You must be able to reach the vSphere API endpoint from where the Konvoy command line interface (CLI) runs.

  • vSphere account with credentials configured - this account must have Administrator privileges.

  • A Red Hat® subscription with a user name and password for downloading DVD ISOs.

  • For air-gapped environments, a bastion VM host template with access to a configured Docker registry

  • Valid vSphere values for the following:

    • vCenter API server URL

    • Datacenter name

    • Zone name that contains ESXi hosts for your cluster’s nodes

    • Datastore name for the shared storage resource to be used for the VMs in the cluster.

      • Use of PersistentVolumes in your cluster depends on Cloud Native Storage (CNS), available in vSphere v6.7.x with Update 3 and later versions. CNS depends on this shared Datastore’s configuration.

    • Datastore URL from the datastore record for the shared datastore you want your cluster to use.

      • You need this URL value to ensure that the correct Datastore is used when DKP creates VMs for your cluster in vSphere.

    • Folder name

    • Base template name, such as base-rhel-8 or base-rhel-7

    • Name of a Virtual Network that has DHCP enabled, for both air-gapped and non-air-gapped environments

    • Resource Pools - at least one resource pool is needed

      • Each host in the resource pool needs access to shared storage, such as NFS or VSAN, to make use of MachineDeployments and high-availability control planes.
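The datastore URL listed above can be read from the datastore's summary page in vCenter, or from VMware's govmomi CLI, govc (assumed installed and pointed at your vCenter via the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD variables). A sketch that parses a hypothetical sample of `govc datastore.info` output:

```shell
# Sketch: pull the datastore URL that DKP needs from `govc datastore.info`
# output. The sample text below is hypothetical and stands in for a live
# call such as:
#   govc datastore.info vsanDatastore
sample='Name:        vsanDatastore
  Path:      /dc1/datastore/vsanDatastore
  Type:      vsan
  URL:       ds:///vmfs/volumes/vsan:5238a205736fdb1f-c71f7ec7a0353662/'
DATASTORE_URL=$(printf '%s\n' "$sample" | awk '/URL:/ {print $2}')
echo "$DATASTORE_URL"
```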

Create directories for KIB and the DKP CLI

The command below creates a directory for working with KIB images and a directory for running DKP commands:

mkdir kib && mkdir dkp

Get the needed D2iQ Software

After downloading the KIB bundle into the kib directory, decompress it:

cd kib
tar -xvf konvoy-image-bundle-v1.12.0_linux_amd64.tar.gz
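Before extracting, you can list an archive's contents with tar -tzf to confirm you downloaded the right bundle. A sketch, demonstrated on a stand-in tarball (the layout shown is hypothetical; substitute the real konvoy-image-bundle file):

```shell
# Sketch: inspect an archive before extracting it. A throwaway tarball
# stands in for the real KIB bundle; on your machine you would run
#   tar -tzf konvoy-image-bundle-v1.12.0_linux_amd64.tar.gz
workdir=$(mktemp -d)
mkdir -p "$workdir/bundle"
echo demo > "$workdir/bundle/konvoy-image"   # hypothetical bundle layout
tar -czf "$workdir/bundle.tar.gz" -C "$workdir" bundle
tar -tzf "$workdir/bundle.tar.gz"            # lists contents without extracting
```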

Use this link to download DKP, then decompress it and copy or move the dkp binary into the dkp subdirectory you created above.

Create a folder and resource pool in vCenter for DKP cluster

To create a folder in vCenter, follow the steps below:

  1. Right click on the datacenter

  2. Select New Folder

  3. Select Host and Cluster folder

  4. Name folder "D2IQ"

To create a Resource Pool, follow the steps below:

  1. Right click on the vCenter cluster you plan to use for DKP

  2. Select New Resource Pool

  3. Adjust values if you need to restrict resources for DKP

Build template using KIB

Open the image definition for your OS version, replacing rhel-84.yaml with rhel-79.yaml if necessary, to create a compatible image:

cd ..
cd kib/images/ova
vi rhel-84.yaml

For help troubleshooting image builds, see the solutions guide.

Adjust the packer file for your vSphere cluster

download_images: true
build_name: "vsphere-rhel-84"
packer_builder_type: "vsphere" 
guestinfo_datasource_slug: ""
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/{{guestinfo_datasource_ref}}/"
packer:
  vcenter_server: ""
  vsphere_username: "administrator@vsphere.local"
  vsphere_password: "Password"
  cluster: "cluster1"
  datacenter: "dc1"
  datastore: "vsanDatastore"
  folder: "d2iq"
  insecure_connection: "true"
  network: "VM Network"
  resource_pool: "D2IQ"
  template: "rhel-boot-8.4"
  vsphere_guest_os_type: "rhel8_64Guest"
  guest_os_type: "rhel8-64"
  # goss params
  distribution: "RHEL"
  distribution_version: "8.4"

Create overrides for docker credentials

Navigate back to the kib root directory and then create the override file:

cd ..
cd ..

To override the Docker credentials, create the overrides file shown below:

vi overrides.yaml
image_registries_with_auth:
  - host: ""
    username: "<dockerhub-user>"
    password: "<dockerhub-password>"
    auth: ""
    identityToken: ""

Build VM template using KIB

Run the following command to build your template:

./konvoy-image build images/ova/rhel-84.yaml --overrides overrides.yaml

Create DKP cluster on vSphere

Export your vSphere Environment Variables

Copy the set of exports below into a text editor so that you can modify the values for your environment before pasting them into the CLI terminal. It is recommended to save this information for later reference.

export VSPHERE_SERVER="<vcenter-server-ip-address>"
export VSPHERE_PASSWORD='<password>'
export VSPHERE_USERNAME="<administrator@vsphere.local>"
export CLUSTER_NAME=dkp
export DKP_CLUSTER_NAME=${CLUSTER_NAME}   # later commands reference both names
openssl s_client -connect <vcenter_ip>:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
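The final openssl line above prints the certificate fingerprint that you will later pass to the --tls-thumb-print flag. A sketch of capturing it into a variable (the VSPHERE_TLS_THUMBPRINT name is illustrative), demonstrated against a locally generated self-signed certificate so it can run anywhere:

```shell
# Sketch: capture a certificate's SHA-1 fingerprint for --tls-thumb-print.
# A throwaway self-signed certificate stands in for the vCenter endpoint;
# against vCenter you would instead pipe in:
#   openssl s_client -connect ${VSPHERE_SERVER}:443 < /dev/null 2>/dev/null
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$workdir/key.pem" \
  -out "$workdir/cert.pem" -days 1 -subj "/CN=demo" 2>/dev/null
# -fingerprint prints "SHA1 Fingerprint=AB:CD:..."; cut keeps the hex part
VSPHERE_TLS_THUMBPRINT=$(openssl x509 -fingerprint -sha1 -noout \
  -in "$workdir/cert.pem" | cut -d= -f2)
echo "$VSPHERE_TLS_THUMBPRINT"
```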

Build the Bootstrap Cluster

cd ..
cd dkp 
./dkp create bootstrap --with-aws-bootstrap-credentials=false 

Create the DKP cluster deployment YAML

If you are not using self-signed certificates, remove the --tls-thumb-print= flag before running the command below:

dkp create cluster vsphere \
  --cluster-name="dkp" \
  --network="VM Network" \
  --control-plane-endpoint-host="<vip_for_api>" \
  --virtual-ip-interface="eth0" \
  --data-center="<dc1>" \
  --data-store="vsanDatastore" \
  --folder="${VSPHERE_FOLDER}" \
  --server="<vsphere_server_ip>" \
  --ssh-public-key-file=/root/.ssh/ \
  --resource-pool="DKP" \
  --vm-template=konvoy-ova-vsphere-rhel-84-1.23.7-1649344885 \
  --tls-thumb-print="<tls-thumbprint>" \
  --dry-run -o yaml > ${DKP_CLUSTER_NAME}.yaml

Create a new vSphere Kubernetes cluster

Create/deploy a cluster using the command below:

kubectl create -f ${DKP_CLUSTER_NAME}.yaml

If you wish to watch the cluster build, run the command below:

dkp describe cluster -c ${DKP_CLUSTER_NAME}

DKP can deploy MetalLB on vSphere. This is an advanced step because it requires a range of IP addresses for MetalLB to manage, so it is not necessary for this Quick Start. However, know that it is an option, and see the Configure MetalLB for vSphere documentation if it is necessary for your configuration.

Pivot the Cluster Controllers and Create CAPI Controllers on the Cluster

Create the CAPI components on the new cluster:

./dkp create capi-components --kubeconfig ${DKP_CLUSTER_NAME}.conf

Once created, move the configuration to the new cluster using the command below:

./dkp move --to-kubeconfig ${DKP_CLUSTER_NAME}.conf

You now have a Self-Managing Kubernetes Cluster deployed on vSphere.

Adjust the Storage class to only allow PVs on a specific VMware datastore

Delete the existing storage class:

kubectl delete sc vsphere-raw-block-sc --kubeconfig ${DKP_CLUSTER_NAME}.conf

Create a storage class yaml with the URL of the VMware datastore you want to use

vi sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: vsphere-raw-block-sc
allowVolumeExpansion: true
parameters:
  datastoreurl: "ds:///vmfs/volumes/vsan:5238a205736fdb1f-c71f7ec7a0353662/"
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
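Before applying the manifest, you can confirm it actually marks the class as the cluster default; Kubernetes reads the storageclass.kubernetes.io/is-default-class annotation for this. A sketch, with a heredoc standing in for your sc.yaml:

```shell
# Sketch: verify a StorageClass manifest is flagged as the default class.
# Kubernetes looks for the storageclass.kubernetes.io/is-default-class
# annotation; the heredoc below stands in for your sc.yaml.
workdir=$(mktemp -d)
cat > "$workdir/sc.yaml" <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: vsphere-raw-block-sc
EOF
if grep -q 'is-default-class: "true"' "$workdir/sc.yaml"; then
  echo "default StorageClass annotation present"
fi
```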

Apply the Storage class YAML to create a new default SC

kubectl apply -f sc.yaml --kubeconfig ${DKP_CLUSTER_NAME}.conf

Kommander Deployment

Deploy Kommander to the DKP Cluster

./dkp install kommander --kubeconfig ${DKP_CLUSTER_NAME}.conf

If you would like to watch the HelmReleases (hr) deploy, run the following command:

watch kubectl get hr -A --kubeconfig ${DKP_CLUSTER_NAME}.conf

Explore the new Kubernetes cluster

The kubeconfig file is written to your local directory and you can now explore the cluster.

List the Nodes with the command:

kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes

Log in to the DKP UI

You can now log in to the DKP UI to explore.

Delete the Kubernetes Cluster and Cleanup your Environment

Delete the provisioned Kubernetes cluster and wait a few minutes:

dkp delete cluster \
--cluster-name=${CLUSTER_NAME} \
--kubeconfig=${CLUSTER_NAME}.conf