
DKP 2.8.2 Known Issues and Limitations

The following items are known issues with this release.

Upstream Deprecations in DKP 2.8.2

If you are deploying DKP using CentOS 7.9, RHEL 7.9, or Oracle Linux RHCK 7.9 in a networked environment, you must maintain your own OS package mirrors so that DKP can build machine images. We cannot depend on upstream artifacts because they have reached their End of Life or depend on packages that have reached their End of Life. NCN-103128
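As a sketch, a locally maintained mirror can be wired in with a standard yum repository definition. The repo ID, name, and baseurl below are placeholders for your own mirror, not values shipped with DKP:

```
# /etc/yum.repos.d/local-mirror.repo
# Hypothetical example: point the OS at an internally maintained mirror.
# Replace baseurl with the address of your own package mirror.
[local-base]
name=Local CentOS 7.9 Base Mirror
baseurl=https://mirror.example.internal/centos/7.9/os/x86_64/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
```

After placing the file, `yum clean all && yum makecache` refreshes metadata from the mirror.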

CAPV controller can deadlock

A known bug in the upstream Cluster API vSphere provider (CAPV) can cause the CAPV controller to deadlock. This can block machines from being provisioned; the observed behavior is that new machines remain stuck in the Provisioning state. Restarting the capv-controller-manager allows provisioning to continue. NCN-101990
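Restarting the controller can be sketched as follows. This assumes the provider is installed in the default capv-system namespace; adjust the namespace if your deployment differs:

```shell
# Restart the deadlocked CAPV controller (default capv-system namespace assumed).
kubectl rollout restart deployment capv-controller-manager -n capv-system

# Confirm the stuck machines leave the Provisioning state.
kubectl get machines -A
```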

Custom Banner on Cluster is lost during an upgrade

If you used the "Custom Banner" UI functionality in a previous version of DKP and then upgrade to DKP 2.8, the banner is lost during the upgrade and must be recreated.

Rook Ceph Install Error

An issue might emerge when installing rook-ceph on vSphere clusters using RHEL operating systems.

This issue occurs during the initial installation of rook-ceph, causing the object store used by Velero and Grafana Loki to be unavailable. If the installation of the Kommander component of DKP is unsuccessful due to rook-ceph failing, you might need to apply the following workaround:

  1. Run this command to check if the cluster is affected by this issue.

    CODE
    kubectl describe CephObjectStores dkp-object-store -n kommander
  2. If this output appears, the workaround needs to be applied, so continue with the next step. If you do not see this output, the cluster is not affected and no further action is needed.

    CODE
    Name:         dkp-object-store
    Namespace:    kommander
    ...
      Warning  ReconcileFailed     7m55s (x19 over 52m)
      rook-ceph-object-controller  failed to reconcile CephObjectStore
      "kommander/dkp-object-store". failed to create object store deployments: failed
      to configure multisite for object store: failed create ceph multisite for
      object-store ["dkp-object-store"]: failed to commit config changes after
      creating multisite config for CephObjectStore "kommander/dkp-object-store":
      failed to commit RGW configuration period changes%!(EXTRA []string=[]): signal: interrupt
  3. Use kubectl exec to open a shell in the rook-ceph-tools pod.

    CODE
    export WORKSPACE_NAMESPACE=<workspace namespace>
    CEPH_TOOLS_POD=$(kubectl get pods -l app=rook-ceph-tools -n ${WORKSPACE_NAMESPACE} -o name)
    kubectl exec -it -n ${WORKSPACE_NAMESPACE} ${CEPH_TOOLS_POD} -- bash
  4. Run these commands to set dkp-object-store as the default zonegroup.
    NOTE: The period update command may take a few minutes to complete.

    CODE
    radosgw-admin zonegroup default --rgw-zonegroup=dkp-object-store
    radosgw-admin period update --commit
  5. Next, restart the rook-ceph-operator deployment so the CephObjectStore is reconciled.

    CODE
    kubectl rollout restart deploy -n ${WORKSPACE_NAMESPACE} rook-ceph-operator
  6. After running the commands above, the CephObjectStore should be Connected once the rook-ceph operator reconciles the object (this may take some time).

    CODE
    kubectl wait CephObjectStore --for=jsonpath='{.status.phase}'=Connected dkp-object-store -n ${WORKSPACE_NAMESPACE} --timeout 10m