
DKP Essential Upgrade

Upgrade your Konvoy environment within the DKP Essential License.

Prerequisites

Overview

To upgrade Konvoy for DKP Essential:

  1. Upgrade the Cluster API (CAPI) components.

  2. Upgrade the core addons.

  3. Upgrade the Kubernetes version.

If you have more than one Essential cluster, repeat all of these steps for each Essential cluster. For a full list of DKP Essential features, see DKP Essential.

  • For air-gapped environments, you must run konvoy-image upload artifacts to copy the artifacts onto the cluster hosts before you begin the Upgrade the CAPI Components section below.

    • CODE
      konvoy-image upload artifacts \
          --container-images-dir=./artifacts/images/ \
          --os-packages-bundle=./artifacts/$OS_PACKAGES_BUNDLE \
          --containerd-bundle=artifacts/$CONTAINERD_BUNDLE \
          --pip-packages-bundle=./artifacts/pip-packages.tar.gz
      
  • For air-gapped environments, seed the Docker registry as explained in Air-gapped Seed the Registry.

Upgrade the CAPI components

New versions of DKP come pre-bundled with newer versions of CAPI, newer versions of infrastructure providers, or new infrastructure providers. When using a new version of the DKP CLI, upgrade all of these components first.

Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
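For example, pointing dkp at the management cluster might look like the following. The kubeconfig filename here is an assumption; use the file written when the cluster was created.

```shell
# Point dkp (and kubectl) at the management cluster. The filename below is a
# placeholder; use the kubeconfig generated when you created the cluster.
export CLUSTER_NAME=my-management-cluster
export KUBECONFIG=${CLUSTER_NAME}.conf

# Alternatively, pass it per command:
#   dkp upgrade capi-components --kubeconfig=${CLUSTER_NAME}.conf
```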

Run the following upgrade command for the CAPI components:

If you created CAPI components using flags to specify values, use those same flags during Upgrade to preserve existing values while setting additional values.

CODE
dkp upgrade capi-components

The command should output something similar to the following:

CODE
✓ Upgrading CAPI components
✓ Waiting for CAPI components to be upgraded
✓ Initializing new CAPI components
✓ Deleting Outdated Global ClusterResourceSets

If the upgrade fails, review the prerequisites section and ensure that you’ve followed the steps in the DKP upgrade overview.

Upgrade the Core Addons

To install the core addons, DKP relies on the ClusterResourceSet Cluster API feature. In the CAPI component upgrade, we deleted the previous set of outdated global ClusterResourceSets because in past releases, some addons were installed using a global configuration. In order to support individual cluster upgrades, DKP now installs all addons with a unique set of ClusterResourceSets and corresponding referenced resources, all named using the cluster’s name as a suffix. For example: calico-cni-installation-my-aws-cluster.
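The naming convention can be sketched by deriving a few of the per-cluster resource names; the addon list and cluster name below are illustrative only:

```shell
# Per-cluster ClusterResourceSet names are the addon name suffixed with the
# cluster name (the addon list here is a partial, illustrative sample).
CLUSTER_NAME=my-aws-cluster
CRS_NAMES=""
for ADDON in calico-cni-installation tigera-operator cluster-autoscaler node-feature-discovery; do
  CRS_NAMES="${CRS_NAMES} ${ADDON}-${CLUSTER_NAME}"
done
echo "${CRS_NAMES}"
```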

If you modify any of the ClusterResourceSet definitions, those changes are not preserved when you run the command dkp upgrade addons. You must use the --dry-run -o yaml options to save the new configuration to a file, then reapply your changes after each upgrade.
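A sketch of that workflow using the flags named above; the output filename is a placeholder, and the dkp command is echoed here as a sketch rather than executed:

```shell
# Render the new addon configuration to a file instead of applying it, so local
# ClusterResourceSet changes can be diffed and reapplied after the upgrade.
CLUSTER_NAME=my-aws-cluster
ADDONS_FILE=addons-${CLUSTER_NAME}.yaml     # placeholder filename
echo "dkp upgrade addons aws --cluster-name=${CLUSTER_NAME} --dry-run -o yaml > ${ADDONS_FILE}"
```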

Your cluster comes preconfigured with a few different core addons that provide functionality to your cluster upon creation. These include: CSI, CNI, Cluster Autoscaler, and Node Feature Discovery. New versions of DKP may come pre-bundled with newer versions of these addons.

If you have more than one Essential cluster, ensure your dkp configuration references the cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.

To update the addons, upgrade them in each cluster using the dkp upgrade addons command, specifying the cluster infrastructure (choose aws, azure, vsphere, eks, gcp, or preprovisioned) and the name of the cluster.

Example commands for upgrading the core addons:

CODE
export CLUSTER_NAME=my-azure-cluster
dkp upgrade addons azure --cluster-name=${CLUSTER_NAME}

OR

CODE
export CLUSTER_NAME=my-aws-cluster
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME}

The output for the AWS example should be similar to:

CODE
Generating addon resources
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-my-aws-cluster upgraded
configmap/calico-cni-installation-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/tigera-operator-my-aws-cluster upgraded
configmap/tigera-operator-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-my-aws-cluster upgraded
configmap/aws-ebs-csi-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-my-aws-cluster upgraded
configmap/cluster-autoscaler-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-my-aws-cluster upgraded
configmap/node-feature-discovery-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-my-aws-cluster upgraded
configmap/nvidia-feature-discovery-my-aws-cluster upgraded

See Also

DKP upgrade addons for more CLI command help.

Upgrade the Kubernetes version

When upgrading the Kubernetes version of a cluster, first upgrade the control plane and then the node pools.

  1. Build a new image if applicable.

  2. Upgrade the Kubernetes version of the control plane. Each provider has its own command; the first example below is for AWS. Select the drop-down menu next to your provider for the correct command.
    NOTE: If you created your initial cluster with a custom AMI using the --ami flag, you must also set the --ami flag during the Kubernetes upgrade.

    CODE
    dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4

If you need to verify or discover your cluster name to use with this example, first run the kubectl get clusters command.
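For example, capturing the cluster name for use in the update commands might look like this; take the actual name from the output of kubectl get clusters, as the value below is a placeholder:

```shell
# Placeholder cluster name; substitute the name reported by 'kubectl get clusters'
# on the management cluster.
export CLUSTER_NAME=my-aws-cluster
```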

Azure
CODE
dkp update controlplane azure --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4 --compute-gallery-id <Azure Compute Gallery built by KIB for Kubernetes v1.25.4>
  • If these fields were specified in the override file during image creation, the flags must be used in upgrade:

    • --plan-offer, --plan-publisher and --plan-sku

    • CODE
      --plan-offer rockylinux-9
      --plan-publisher erockyenterprisesoftwarefoundationinc1653071250513
      --plan-sku rockylinux-9
vSphere
CODE
dkp update controlplane vsphere --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4 --vm-template <vSphere template built by KIB for Kubernetes v1.25.4>
GCP
CODE
dkp update controlplane gcp --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4 --image=projects/${GCP_PROJECT}/global/images/<GCP image built by KIB for Kubernetes v1.25.4>
Pre-provisioned
CODE
dkp update controlplane preprovisioned --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4
EKS
CODE
dkp update controlplane eks --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.7

Some providers support advanced options; for example, AWS accepts --ami and --instance-type. To see all of the options for your provider, run dkp update controlplane aws|vsphere|preprovisioned|gcp|azure --help.

NOTE: The command dkp update controlplane {provider} has a 30-minute default timeout for the update process to finish. If you see the error "timed out waiting for the condition", check the control plane node versions with kubectl get machines -o wide before trying again.

The output should be similar to:

CODE
Updating control plane resource controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane default/my-aws-cluster-control-plane
Waiting for control plane update to finish.
 ✓ Updating the control plane
If you are upgrading a FIPS cluster, also update the kube-proxy image to the FIPS build:

CODE
kubectl set image -n kube-system daemonset.v1.apps/kube-proxy kube-proxy=docker.io/mesosphere/kube-proxy:v1.25.4_fips.0

3. Upgrade the Kubernetes version of your node pools. Upgrading a node pool drains the existing nodes and replaces them with new nodes. To minimize downtime and maintain high availability of critical application workloads during the upgrade, we recommend deploying a Pod Disruption Budget (Disruptions) for your critical applications. For more information, refer to the Update Cluster Nodepools documentation.

a. First, get a list of all node pools available in your cluster by running the following command:

CODE
dkp get nodepool --cluster-name ${CLUSTER_NAME}

b. Select the node pool you want to upgrade with the command below:

CODE
export NODEPOOL_NAME=my-nodepool

c. Update the selected node pool using the command below, and execute the update command for each of the node pools listed by the previous command. The first example is for AWS; select the drop-down menu for your provider for the correct command.
NOTE: If you created your initial cluster with a custom AMI using the --ami flag, you must also set the --ami flag during the Kubernetes upgrade.

CODE
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4
Azure
CODE
dkp update nodepool azure ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4 --compute-gallery-id <Azure Compute Gallery built by KIB for Kubernetes v1.25.4>
  • If these fields were specified in the override file during image creation, the flags must be used in upgrade:

    • --plan-offer, --plan-publisher and --plan-sku

    • CODE
      --plan-offer rockylinux-9
      --plan-publisher erockyenterprisesoftwarefoundationinc1653071250513
      --plan-sku rockylinux-9
vSphere
CODE
dkp update nodepool vsphere ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4 --vm-template <vSphere template built by KIB for Kubernetes v1.25.4>
GCP
CODE
dkp update nodepool gcp ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4 --image=projects/${GCP_PROJECT}/global/images/<GCP image built by KIB for Kubernetes v1.25.4>
Pre-provisioned
CODE
dkp update nodepool preprovisioned ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4
EKS
CODE
dkp update nodepool eks ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.7

The output should be similar to the following, with the name of the infrastructure provider shown accordingly:

CODE
Updating node pool resource cluster.x-k8s.io/v1beta1, Kind=MachineDeployment default/my-aws-cluster-my-nodepool
Waiting for node pool update to finish.
 ✓ Updating the my-aws-cluster-my-nodepool node pool

4. Repeat this step for each additional node pool.
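Steps b and c, together with this repeat step, can be sketched as a single loop. The pool names below are placeholders, and the dkp command is echoed rather than executed so you can review it first:

```shell
# Upgrade every node pool in turn (AWS variant; pool names are placeholders).
CLUSTER_NAME=my-aws-cluster
UPDATED=""
for NODEPOOL_NAME in my-nodepool-1 my-nodepool-2; do
  echo "dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4"
  UPDATED="${UPDATED} ${NODEPOOL_NAME}"
done
```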

Additional considerations for upgrading a FIPS cluster:

To correctly upgrade the Kubernetes version of a FIPS cluster, run this command instead:

CODE
dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.25.4+fips.0 --ami=<ami-with-fips-id>

When all node pools have been updated, your upgrade is complete. For the overall process for upgrading to the latest version of DKP, refer back to DKP Upgrade.
