Known issues and limitations

The following items are known issues with this release.

Use Static Credentials to Provision an Azure Cluster

Only static credentials can be used when provisioning an Azure cluster.
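For reference, a minimal sketch of a static-credential setup: the Azure service principal values are exported as environment variables before the cluster is created. The variable names below follow the Cluster API provider for Azure (CAPZ) conventions and are assumptions; adjust them to match your environment.

# Hypothetical example: export static Azure service principal credentials before provisioning
export AZURE_SUBSCRIPTION_ID="<subscription-id>"               # Azure subscription to deploy into
export AZURE_TENANT_ID="<tenant-id>"                           # Azure AD tenant of the service principal
export AZURE_CLIENT_ID="<service-principal-client-id>"         # static service principal ID
export AZURE_CLIENT_SECRET="<service-principal-client-secret>" # static service principal secret
CODE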

EKS Upgrades do not use Declared Kubernetes Version

Upgrading an EKS cluster to DKP version 2.4 does not use the exact Kubernetes version chosen prior to upgrading, because EKS environment restrictions do not allow you to specify a patch version. EKS clusters retain the selected major and minor version (1.x), while EKS automatically selects a different patch version (1.x.x).
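To confirm which patch version EKS actually selected after an upgrade, you can query the cluster directly. A generic kubectl check, not specific to DKP:

kubectl version              # reports the server (control plane) version, including the patch number
kubectl get nodes -o wide    # the VERSION column shows the kubelet version running on each node
CODE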

Intermittent Error Status when Creating EKS Clusters in the UI

When provisioning an EKS cluster through the UI, you may briefly see an error state because the EKS cluster can sporadically lose connectivity with the management cluster, which results in the following symptoms:

  • The UI shows the cluster is in an error state.

  • The kubeconfig generated and retrieved from Kommander ceases to work.

  • Applications created on the management cluster may not be immediately federated to managed EKS clusters.

After a few moments, the error resolves without any action on your part. A new kubeconfig generated and retrieved from Kommander then works properly, and the UI shows that the cluster is working again. In the meantime, you can continue to use the UI to work on the cluster, such as deploying applications, creating projects, and adding roles.
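If you do not want to wait for the UI, you can retrieve a fresh kubeconfig yourself once connectivity returns. A minimal sketch, assuming the standard Cluster API convention of a <cluster-name>-kubeconfig secret stored in the cluster's workspace namespace on the management cluster:

# Pull the kubeconfig for the managed EKS cluster from the management cluster
kubectl get secret <cluster-name>-kubeconfig -n <workspace-namespace> \
  -o jsonpath='{.data.value}' | base64 -d > <cluster-name>.conf

# Verify that the new kubeconfig works
kubectl --kubeconfig=<cluster-name>.conf get nodes
CODE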

Cluster Roles' information not available in the Projects section of the UI

Selecting a role of the Cluster Role type in the Projects > Roles section of a workspace displays an error message in the UI.

Workaround

You can still access a Cluster Role’s description and configuration from the UI. Take the following alternative path to view or edit the desired role:

  1. Select your workspace from the top navigation bar.

  2. Select Administration > Access Control from the sidebar.

  3. A table appears that lists roles from all Projects in the selected workspace.

  4. Select the Name or ID of the Cluster Role you want to access. A page opens that contains more information and configuration options for the role.

To access a Cluster Role’s description and configuration page, ensure your user has sufficient cluster view or edit rights. You will not be able to select roles for which you do not have access rights.
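If you also have kubectl access to the cluster, you can inspect the underlying Kubernetes ClusterRole directly; these are standard kubectl commands, not a DKP-specific workaround:

kubectl get clusterroles                          # list all ClusterRoles on the cluster
kubectl describe clusterrole <cluster-role-name>  # show the rules and labels of one role
CODE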

Resolve issues with failed HelmReleases

An issue with the Flux helm-controller can cause HelmReleases to fail with the error message Helm upgrade failed: another operation (install/upgrade/rollback) is in progress. This can happen when the helm-controller is restarted while a HelmRelease is still upgrading or installing.
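To identify which HelmReleases are affected and confirm the error message, you can inspect their status, for example:

# List HelmReleases in all namespaces with their readiness and status message
kubectl get helmreleases -A

# Inspect the status conditions of one HelmRelease for the
# "another operation (install/upgrade/rollback) is in progress" error
kubectl -n <namespace> get helmrelease <HELMRELEASE_NAME> -o yaml
CODE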

Workaround

To confirm whether the HelmRelease error was caused by the helm-controller restarting, first suspend and then resume the HelmRelease:

kubectl -n <namespace> patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
kubectl -n <namespace> patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'
CODE
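To follow the reconciliation attempt triggered by the suspend and resume, you can watch the HelmRelease status, for example:

# Watch the HelmRelease until it reports success or fails again with the same error
kubectl -n <namespace> get helmrelease <HELMRELEASE_NAME> -w
CODE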

You should see the HelmRelease attempt to reconcile. It then either succeeds (with status: Release reconciliation succeeded), which resolves the issue, or it fails again with the same error.

If the HelmRelease is still in a failed state, the failure is likely related to the helm-controller restarting. The following steps use the 'reloader' HelmRelease as an example of a stuck release.

To resolve the issue, follow these steps:

  1. List secrets containing the affected HelmRelease name:

    kubectl get secrets -n <namespace> | grep reloader
    CODE

    The output should look like this:

    kommander-reloader-reloader-token-9qd8b                        kubernetes.io/service-account-token   3      171m
    sh.helm.release.v1.kommander-reloader.v1                       helm.sh/release.v1                    1      171m
    sh.helm.release.v1.kommander-reloader.v2                       helm.sh/release.v1                    1      117m           
    CODE

    In this example, sh.helm.release.v1.kommander-reloader.v2 is the most recent revision.

  2. Find and delete the most recent revision secret, for example, sh.helm.release.v1.*.<revision>:

    kubectl delete secret -n <namespace> <most recent helm revision secret name>
    CODE
  3. Suspend and resume the HelmRelease to trigger a reconciliation:

    kubectl -n <namespace> patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": true}]'
    kubectl -n <namespace> patch helmrelease <HELMRELEASE_NAME> --type='json' -p='[{"op": "replace", "path": "/spec/suspend", "value": false}]'
    CODE

You should see the HelmRelease reconcile, and the upgrade or install eventually succeeds.
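To verify, you can check the release status once more, for example:

# Confirm the HelmRelease reports Ready=True with the message "Release reconciliation succeeded"
kubectl -n <namespace> get helmrelease <HELMRELEASE_NAME>
kubectl -n <namespace> describe helmrelease <HELMRELEASE_NAME>
CODE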