Upgrade your Konvoy environment under the DKP Enterprise license.

Prerequisites

Overview

To upgrade Konvoy for DKP Enterprise:

  1. Upgrade the Cluster API (CAPI) components.

  2. Upgrade the core addons.

  3. Upgrade the Kubernetes version.

  4. Upgrade the Managed clusters.

  5. Upgrade the Kubernetes version of Managed clusters.

Perform the first three steps on the management cluster first. Then, execute the addon and Kubernetes version upgrades on each additional managed cluster, one cluster at a time. For the managed clusters, use the KUBECONFIG of the management cluster and specify the name of the managed cluster to upgrade.

For a full list of DKP Enterprise features, see DKP Enterprise.

  • For pre-provisioned air-gapped environments, you must run konvoy-image upload artifacts before beginning the Upgrade the CAPI Components section below.

  • You must maintain your attached clusters manually. Review the documentation from your cloud provider for more information.


Upgrade the CAPI Components

New versions of DKP come pre-bundled with newer versions of CAPI, newer versions of infrastructure providers, or new infrastructure providers. When using a new version of the DKP CLI, upgrade all of these components first.

If you have more than one management cluster, you must upgrade the CAPI components on each of them.

Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.
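For example, a minimal sketch of targeting the management cluster (the kubeconfig path shown is a hypothetical example location):

```shell
# Point subsequent dkp commands at the management cluster by exporting
# KUBECONFIG; the path below is a hypothetical example.
export KUBECONFIG="${HOME}/.kube/management-cluster.conf"

# Equivalently, pass the flag per invocation instead of exporting:
#   dkp upgrade capi-components --kubeconfig="${HOME}/.kube/management-cluster.conf"
```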

Execute the upgrade command for the CAPI components.

dkp upgrade capi-components
CODE

The output resembles the following:

✓ Upgrading CAPI components
✓ Waiting for CAPI components to be upgraded
✓ Initializing new CAPI components
✓ Deleting Outdated Global ClusterResourceSets
CODE

If the upgrade fails, review the Prerequisites at the top of this page and ensure that you have followed the steps in the DKP Upgrade overview.

Upgrade the Core Addons

To install the core addons, DKP relies on the ClusterResourceSet Cluster API feature. In the CAPI component upgrade, we deleted the previous set of outdated global ClusterResourceSets because prior to DKP 2.4 some addons were installed using a global configuration. In order to support individual cluster upgrades, DKP 2.4 now installs all addons with a unique set of ClusterResourceSet and corresponding referenced resources, all named using the cluster’s name as a suffix. For example: calico-cni-installation-my-aws-cluster.
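As a sketch of this naming convention, the per-cluster resource names can be derived by suffixing each addon name with the cluster's name (the addon list here mirrors the core addons shown in the upgrade output later on this page; treat it as illustrative):

```shell
# Derive the per-cluster ClusterResourceSet names that DKP 2.4 uses.
CLUSTER_NAME=my-aws-cluster
for addon in calico-cni-installation tigera-operator aws-ebs-csi \
             cluster-autoscaler node-feature-discovery; do
  echo "${addon}-${CLUSTER_NAME}"
done
```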

If you have modified any of the ClusterResourceSet definitions, those changes are not preserved when you run the command dkp upgrade addons. You must use the --dry-run -o yaml options to save the new configuration to a file, and then reapply your changes after each upgrade.

Your cluster comes preconfigured with a few different core addons that provide functionality to your cluster upon creation. These include: CSI, CNI, Cluster Autoscaler, and Node Feature Discovery. New versions of DKP may come pre-bundled with newer versions of these addons.

Perform the following steps to update these addons:

  1. If you have any additional managed clusters, you will need to upgrade the core addons and Kubernetes version for each one.

  2. Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.

  3. Upgrade the core addons in a cluster using the dkp upgrade addons command, specifying the cluster infrastructure (choose aws, azure, vsphere, eks, gcp, or preprovisioned) and the name of the cluster.

Additional Considerations for EKS
  • The Kubernetes version for EKS clusters supported on DKP 2.4 is now v1.23. Because EKS has disabled support for in-tree EBS volume provisioning in favor of CSI volumes, DKP automatically deploys a new EBS CSI driver to your EKS cluster when you run the dkp upgrade addons command, and labels the Cluster object with konvoy.d2iq.io/csi=aws-ebs.

  • Before running the dkp upgrade addons command when deploying EKS clusters, you must add the necessary IAM policy to your worker instances. If you are using the default IAM instance profile name, run the following command:
    aws iam attach-role-policy --role-name nodes.cluster-api-provider-aws.sigs.k8s.io --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
    If you have customized your AWSMachineTemplate to use a different instance profile, review and add the policy to that profile.
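A hedged sketch of that policy attachment for a customized profile; the role name below is a hypothetical placeholder, and the command is printed rather than executed so you can review it first:

```shell
# Hypothetical role name; substitute the role behind your customized
# instance profile (see your AWSMachineTemplate). Remove the echo to
# execute the printed command.
ROLE_NAME=my-custom-worker-role
echo "aws iam attach-role-policy --role-name ${ROLE_NAME}" \
     "--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
```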

Example commands for upgrading the core addons:

export CLUSTER_NAME=my-azure-cluster
dkp upgrade addons azure --cluster-name=${CLUSTER_NAME}
CODE

OR

export CLUSTER_NAME=my-aws-cluster
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME}
CODE

The output for the AWS example should be similar to:

Generating addon resources
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-my-aws-cluster upgraded
configmap/calico-cni-installation-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/tigera-operator-my-aws-cluster upgraded
configmap/tigera-operator-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-my-aws-cluster upgraded
configmap/aws-ebs-csi-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-my-aws-cluster upgraded
configmap/cluster-autoscaler-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-my-aws-cluster upgraded
configmap/node-feature-discovery-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-my-aws-cluster upgraded
configmap/nvidia-feature-discovery-my-aws-cluster upgraded
CODE

4. After the addons are upgraded, you must remove deprecated addons. In DKP version 2.4, the nvidia-feature-discovery addon is no longer shipped on new clusters; it is instead handled by the nvidia-gpu-operator Kommander component. To remove it, execute the following steps:

a. List all cluster resource sets by running:

kubectl get clusterresourcesets
CODE
NAME                                                         AGE
aws-ebs-csi-my-aws-cluster                                   7m46s
calico-cni-installation-my-aws-cluster                       7m46s
cluster-autoscaler-my-aws-cluster                            7m46s
node-feature-discovery-my-aws-cluster                        7m46s
nvidia-feature-discovery-my-aws-cluster                      7m46s
CODE

b. Delete the ClusterResourceSet for nvidia-feature-discovery by running:

kubectl delete clusterresourceset nvidia-feature-discovery-my-aws-cluster
CODE

c. Delete the ConfigMap referenced by the ClusterResourceSet by running the following command; the ConfigMap is named nvidia-feature-discovery-${CLUSTER_NAME}. If there is no related ConfigMap, you can move on to the next step.

kubectl delete configmap nvidia-feature-discovery-my-aws-cluster
CODE

d. Get the kubeconfig for the cluster by running:

dkp get kubeconfig -c my-aws-cluster >> my-aws-cluster.conf
CODE

e. Delete the corresponding daemonset on the remote cluster by running the following command. If there is no related daemonset, you can move on to the next step.

kubectl --kubeconfig=my-aws-cluster.conf delete daemonset nvidia-feature-discovery-gpu-feature-discovery -n node-feature-discovery
CODE
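Steps a through e above can be collected into a single sketch. This version only prints the commands so you can review them before running; remove the echo prefixes to execute (the cluster name is an example):

```shell
# Print the nvidia-feature-discovery cleanup commands for one cluster,
# mirroring steps a-e above.
CLUSTER_NAME=my-aws-cluster
CRS_NAME="nvidia-feature-discovery-${CLUSTER_NAME}"
echo "kubectl delete clusterresourceset ${CRS_NAME}"
echo "kubectl delete configmap ${CRS_NAME}"
echo "dkp get kubeconfig -c ${CLUSTER_NAME} >> ${CLUSTER_NAME}.conf"
echo "kubectl --kubeconfig=${CLUSTER_NAME}.conf delete daemonset" \
     "nvidia-feature-discovery-gpu-feature-discovery -n node-feature-discovery"
```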

Now that you have finished updating your core addons, begin upgrading the Kubernetes version in the section below.

See Also

DKP upgrade addons for more CLI command help.

Upgrade the Kubernetes Version

When upgrading the Kubernetes version of a cluster, first upgrade the control plane and then the node pools. If you have any additional managed or attached clusters, you need to upgrade the core addons and Kubernetes version for each one.

  1. Build a new image if applicable.

    • If an AMI was specified when initially creating a cluster for AWS, you must build a new one with Konvoy Image Builder.

    • If an Azure Machine Image was specified for Azure, you must build a new one with Konvoy Image Builder.

    • If a vSphere template Image was specified for vSphere, you must build a new one with Konvoy Image Builder.

  2. Upgrade the Kubernetes version of the control plane. Each cloud provider has its own command. The example below is for AWS; select the drop-down menu next to your provider for the corresponding command.

    dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
    CODE
Azure
dkp update controlplane azure --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --compute-gallery-id <Azure Compute Gallery built by KIB for Kubernetes v1.24.6>
CODE
vSphere
dkp update controlplane vsphere --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --vm-template <vSphere template built by KIB for Kubernetes v1.24.6>
CODE
GCP
dkp update controlplane gcp --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --image <GCP image built by KIB for Kubernetes v1.24.6>
CODE
Pre-provisioned
dkp update controlplane preprovisioned --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
CODE
EKS
dkp update controlplane eks --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12
CODE

Due to environment restrictions, EKS clusters display different Kubernetes patch version numbers upon upgrade completion, as EKS does not allow you to specify a patch version.

The output should be similar to the below example, with the provider name corresponding to the CLI you executed from the choices above:

Updating control plane resource controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane default/my-aws-cluster-control-plane
Waiting for control plane update to finish.
 ✓ Updating the control plane
CODE

Some advanced options are available for various providers. To see all the options for your particular provider, run dkp update controlplane aws|azure|vsphere|gcp|preprovisioned|eks --help. For example, for AWS, --ami and --instance-type are among the available options.

Additional Considerations for upgrading a FIPS cluster:

To correctly upgrade the Kubernetes version of a FIPS cluster, run the following command instead:

dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6+fips.0 --ami=<ami-with-fips-id>
CODE

3. Upgrade the Kubernetes version of your node pools. Upgrading a node pool involves draining the existing nodes in the node pool and replacing them with new nodes. To ensure minimum downtime and maintain high availability of critical application workloads during the upgrade, we recommend deploying a Pod Disruption Budget (see Disruptions) for your critical applications. For more information, refer to the Update Cluster Nodepools documentation.
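As a minimal sketch of that recommendation, the fragment below writes a PodDisruptionBudget for a hypothetical application labeled app: critical-app; adjust the selector and minAvailable for your workload:

```shell
# Write a minimal PodDisruptionBudget manifest that keeps at least one
# replica of a hypothetical critical app running while nodes drain.
cat <<'EOF' > critical-app-pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: critical-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: critical-app
EOF
```

Apply it to the cluster before starting the node pool upgrade, for example with kubectl apply -f critical-app-pdb.yaml.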

a. First, get a list of all node pools available in your cluster by running the following command:

dkp get nodepool --cluster-name ${CLUSTER_NAME}
CODE

b. Select the nodepool you want to upgrade with the command below:

export NODEPOOL_NAME=my-nodepool
CODE

c. Then update the selected node pool using the command below. The first example shows the AWS command; select the drop-down menu for your provider for the correct command. Execute the update command for each of the node pools listed in the previous step:

dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
CODE
Azure
dkp update nodepool azure ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --compute-gallery-id <Azure Compute Gallery built by KIB for Kubernetes v1.24.6>
CODE
vSphere
dkp update nodepool vsphere ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --vm-template <vSphere template built by KIB for Kubernetes v1.24.6>
CODE
GCP
dkp update nodepool gcp ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 --image <GCP image built by KIB for Kubernetes v1.24.6>
CODE
Pre-provisioned
dkp update nodepool preprovisioned ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6
CODE
EKS
dkp update nodepool eks ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.23.12
CODE

The output should be similar to the following, with the name of the infrastructure provider shown accordingly:

Updating node pool resource cluster.x-k8s.io/v1beta1, Kind=MachineDeployment default/my-aws-cluster-my-nodepool
Waiting for node pool update to finish.
 ✓ Updating the my-aws-cluster-my-nodepool node pool
CODE

d. Repeat this step for each additional node pool.
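The per-node-pool repetition can be sketched as a loop. The node pool names below are hypothetical; in practice, take them from dkp get nodepool. The loop echoes each command so you can review it first:

```shell
CLUSTER_NAME=my-aws-cluster
KUBERNETES_VERSION=v1.24.6
# Hypothetical node pool names; in practice, list them with:
#   dkp get nodepool --cluster-name ${CLUSTER_NAME}
for NODEPOOL_NAME in md-0 md-1; do
  echo "dkp update nodepool aws ${NODEPOOL_NAME}" \
       "--cluster-name=${CLUSTER_NAME}" \
       "--kubernetes-version=${KUBERNETES_VERSION}"
done
```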

Additional Considerations for upgrading a FIPS cluster:

To correctly upgrade the Kubernetes version of a FIPS cluster, run the following command instead:

dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6+fips.0 --ami=<ami-with-fips-id>
CODE

When all node pools have been updated, your upgrade is complete. For the overall process of upgrading to the latest version of DKP, refer back to DKP Upgrade for more details.

Upgrade Managed Clusters

If you have managed clusters, follow these steps to upgrade each cluster:

  1. Using the kubeconfig of your management cluster, find your cluster names, and copy the information for all of your clusters:

    kubectl get clusters -A
    CODE
  2. Set your cluster variable:

    export CLUSTER_NAME=<your-managed-cluster-name>
    CODE
  3. Set your cluster's workspace variable:

    export CLUSTER_WORKSPACE=<your-workspace-namespace>
    CODE
  4. Then, upgrade the core addons (replacing aws with your infrastructure provider):

    dkp upgrade addons aws --cluster-name=${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE} 
    CODE
  5. Check to see if you have any cluster resource sets that need to be cleaned up:

    kubectl get clusterresourcesets -n ${CLUSTER_WORKSPACE}
    CODE
  6. Delete the ClusterResourceSet for nvidia-feature-discovery by running:

    kubectl delete clusterresourceset nvidia-feature-discovery-my-aws-cluster -n ${CLUSTER_WORKSPACE}
    CODE
  7. Delete the ConfigMap referenced by the ClusterResourceSet by running the following command; the ConfigMap is named nvidia-feature-discovery-${CLUSTER_NAME}. If there is no related ConfigMap, you can move on to the next step.

    kubectl delete configmap nvidia-feature-discovery-my-aws-cluster -n ${CLUSTER_WORKSPACE}
    CODE
  8. Get the kubeconfig for the managed cluster by running:

    dkp get kubeconfig -c ${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE} >> ${CLUSTER_NAME}.conf
    CODE
  9. Delete the corresponding daemonset on the remote cluster by running the following command. If there is no related daemonset, you can move on to the next step.

    kubectl --kubeconfig=${CLUSTER_NAME}.conf delete daemonset nvidia-feature-discovery-gpu-feature-discovery -n node-feature-discovery
    CODE

Upgrade Kubernetes Version on a Managed Cluster

After you have completed the previous steps for all managed clusters and updated the core addons, begin upgrading the Kubernetes version.

You should first complete the upgrade of your Kommander Management Cluster before upgrading any managed clusters.

  1. Use this command to start upgrading the Kubernetes version:

    dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 -n ${CLUSTER_WORKSPACE}
    CODE
  2. Get a list of all node pools available in your cluster, and then set the variable for the node pool you want to upgrade:

    dkp get nodepools -c ${CLUSTER_NAME} -n ${CLUSTER_WORKSPACE}
    
    export NODEPOOL_NAME=<my-nodepool>
    CODE
  3. Use this command to upgrade the node pools:

    dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.24.6 -n ${CLUSTER_WORKSPACE}
    CODE