
DKP 2.8.1 Known Issues and Limitations

The following items are known issues with this release.

AWS Custom AMI Required for Kubernetes Version

Previous versions of DKP defaulted to the upstream AMIs published by the Cluster API Provider AWS (CAPA) project when building AWS clusters if you did not specify your own AMI. However, those images are not currently available for the Kubernetes version used in the 2.8.1 release.

As a result, starting with this release of DKP, the behavior of the dkp create cluster aws command has changed. It no longer defaults to the upstream AMIs and instead requires that you either specify an AMI built with Konvoy Image Builder (KIB) or explicitly request that it use the upstream images.

For more information on using a custom AMI during cluster creation or an upgrade, refer to the custom AMI topics in the DKP documentation.
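For example, a cluster creation command that supplies a KIB-built AMI might look like the following. This is a minimal sketch: the cluster name and AMI ID are placeholders, and the --ami flag is assumed to behave as documented for the DKP CLI; check the CLI reference for your DKP version for the full set of AMI-related flags.

    # Placeholders: substitute your own cluster name and the ID of an AMI built with KIB.
    export CLUSTER_NAME=my-aws-cluster
    export AMI_ID=ami-0123456789abcdef0

    # Create the cluster with an explicitly specified AMI instead of relying on upstream defaults.
    dkp create cluster aws \
      --cluster-name=${CLUSTER_NAME} \
      --ami=${AMI_ID}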

vSphere and Ubuntu using Konvoy Image Builder Issue

Building a Konvoy Image Builder (KIB) image on vSphere using Ubuntu 20.04 with cloud-init 23.3.3-0ubuntu0~20.04.1 fails in DKP 2.8.1. The issue will be resolved in an upcoming KIB patch release.
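If you want to confirm whether a vSphere template or build host is affected before running KIB, you can check the installed cloud-init version on the Ubuntu 20.04 machine. This is an illustrative check, not an official workaround; builds fail when the version reported below is 23.3.3-0ubuntu0~20.04.1.

    # Report the cloud-init version installed on the Ubuntu 20.04 source machine.
    cloud-init --version

    # Equivalent check through the package manager.
    dpkg-query --show --showformat='${Version}\n' cloud-init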

Known CVEs

Since the release of DKP 2.6, DKP has been scanning Catalog applications for CVEs. Beginning with DKP 2.7, only the latest version of each Catalog application is scanned (and has its critical CVEs mitigated). For more information about the known CVEs for compatible catalog applications, refer to D2iQ Security Updates.

Custom Banner on Cluster is lost during an upgrade

If you used the Custom Banner UI functionality in a previous version of DKP and then upgrade to DKP 2.8, the banner is lost during the upgrade and must be recreated.

Rook Ceph Install Error

An issue can occur when installing rook-ceph on vSphere clusters running RHEL operating systems.

This issue occurs during the initial installation of rook-ceph, causing the object store used by Velero and Grafana Loki to be unavailable. If the installation of the Kommander component of DKP is unsuccessful because rook-ceph fails, you might need to apply the following workaround:

  1. Run this command to check whether the cluster is affected by this issue.

    kubectl describe CephObjectStores dkp-object-store -n kommander
  2. If this output appears, continue with the next step to apply the workaround. If you do not see this output, your cluster is not affected and you can skip the remaining steps.

    Name:         dkp-object-store
    Namespace:    kommander
    ...
      Warning  ReconcileFailed     7m55s (x19 over 52m)
      rook-ceph-object-controller  failed to reconcile CephObjectStore
      "kommander/dkp-object-store". failed to create object store deployments: failed
      to configure multisite for object store: failed create ceph multisite for
      object-store ["dkp-object-store"]: failed to commit config changes after
      creating multisite config for CephObjectStore "kommander/dkp-object-store":
      failed to commit RGW configuration period changes%!(EXTRA []string=[]): signal: interrupt
  3. Use kubectl exec to open a shell in the rook-ceph-tools pod.

    export WORKSPACE_NAMESPACE=<workspace namespace>
    CEPH_TOOLS_POD=$(kubectl get pods -l app=rook-ceph-tools -n ${WORKSPACE_NAMESPACE} -o name)
    kubectl exec -it -n ${WORKSPACE_NAMESPACE} ${CEPH_TOOLS_POD} -- bash
  4. Run these commands to set dkp-object-store as the default zonegroup.
    NOTE: The period update command may take a few minutes to complete.

    radosgw-admin zonegroup default --rgw-zonegroup=dkp-object-store
    radosgw-admin period update --commit
  5. Next, restart the rook-ceph-operator deployment so the CephObjectStore is reconciled.

    kubectl rollout restart deploy -n ${WORKSPACE_NAMESPACE} rook-ceph-operator
  6. After running the commands above, the CephObjectStore should report a Connected phase once the rook-ceph operator reconciles the object. This can take some time; see the additional check after these steps.

    kubectl wait CephObjectStore --for=jsonpath='{.status.phase}'=Connected dkp-object-store -n ${WORKSPACE_NAMESPACE} --timeout 10m
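As an additional, illustrative check (not part of the documented workaround), you can list the object store and confirm the phase it reports; recent Rook releases print a PHASE column for CephObjectStore resources.

    # List the object store; the PHASE column should read Connected once reconciliation succeeds.
    kubectl get cephobjectstore dkp-object-store -n ${WORKSPACE_NAMESPACE}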