
AKS: Attach Cluster

Attach an existing AKS cluster

You can attach existing Kubernetes clusters to the Management Cluster. After attaching a cluster, you can use the UI to examine and manage it. The following procedure shows how to attach an existing Azure Kubernetes Service (AKS) cluster.

Before you Begin

This procedure requires the following items and configurations:

  • A fully configured and running Azure AKS cluster with administrative privileges.

  • The current version of DKP Enterprise is installed on your cluster.

  • kubectl is installed and configured to access your Management cluster.

This procedure assumes you have one or more existing, running Azure AKS clusters with administrative privileges. Refer to the Azure AKS documentation for setup and configuration information.

Attach AKS Clusters

Before attaching, ensure the KUBECONFIG environment variable points to the Management cluster's kubeconfig file by running:

CODE
export KUBECONFIG=<Management_cluster_kubeconfig>.conf
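
To confirm that kubectl now targets the Management cluster, you can, for example, check the active context and list its nodes:

CODE
kubectl config current-context
kubectl get nodes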

Ensure you have access to your AKS clusters

  1. Ensure you are connected to your AKS clusters. Enter the following commands for each of your clusters (if a cluster is missing from the context list, see the Azure CLI example after these steps):

    CODE
    kubectl config get-contexts
    kubectl config use-context <context for first AKS cluster>
  2. Confirm kubectl can access the AKS cluster:

    CODE
    kubectl get nodes
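
If one of your AKS clusters does not appear in the context list, you can merge its credentials into your local kubeconfig with the Azure CLI. The resource group and cluster names below are placeholders; the --admin flag requests the administrative credentials this procedure assumes:

CODE
az aks get-credentials --resource-group <resource_group> --name <aks_cluster_name> --admin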

Create a kubeconfig file for your AKS cluster

To get started, ensure kubectl is set up and configured with cluster-admin access to the cluster you want to attach to Kommander.

  1. Create the necessary service account:

    CODE
    kubectl -n kube-system create serviceaccount kommander-cluster-admin
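    # Optional check: confirm the service account now exists in kube-system.
    kubectl -n kube-system get serviceaccount kommander-cluster-admin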
  2. Create a token secret for the service account:

    CODE
    kubectl -n kube-system create -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: kommander-cluster-admin-sa-token
      annotations:
        kubernetes.io/service-account.name: kommander-cluster-admin
    type: kubernetes.io/service-account-token
    EOF
  3. Verify that the service account token is ready by running this command:

    CODE
    kubectl -n kube-system get secret kommander-cluster-admin-sa-token -oyaml

    Verify that the data.token field is populated. The output should be similar to this:

    CODE
    apiVersion: v1
    data:
      ca.crt: LS0tLS1CRUdJTiBDR...
      namespace: a3ViZS1zeXN0ZW0=
      token: ZXlKaGJHY2lPaUpTVX...
    kind: Secret
    metadata:
      annotations:
        kubernetes.io/service-account.name: kommander-cluster-admin
        kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
      creationTimestamp: "2022-08-19T13:36:42Z"
      name: kommander-cluster-admin-sa-token
      namespace: kube-system
      resourceVersion: "8554"
      uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
    type: kubernetes.io/service-account-token
  4. Configure the new service account for cluster-admin permissions:

    CODE
    cat << EOF | kubectl apply -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kommander-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kommander-cluster-admin
      namespace: kube-system
    EOF
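    # Optional sanity check: when impersonated, the service account should now
    # be allowed to perform any action; this is expected to print "yes".
    kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:kommander-cluster-admin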
  5. Set the following environment variables with the access data needed to produce a new kubeconfig file:

    CODE
    # Token for the service account, base64-decoded
    export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
    # Name of the currently active kubectl context
    export CURRENT_CONTEXT=$(kubectl config current-context)
    # Name of the cluster that the current context points to
    export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
    # Certificate authority data for that cluster (wrapped in quotes for the kubeconfig)
    export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
    # API server URL for that cluster
    export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
  6. Confirm these variables have been set correctly:

    CODE
    export -p | grep -E 'USER_TOKEN_VALUE|CURRENT_CONTEXT|CURRENT_CLUSTER|CLUSTER_CA|CLUSTER_SERVER'
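    # All five variables should be listed with non-empty values; if any is
    # blank, re-run the corresponding export from the previous step.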
  7. Generate a kubeconfig file that uses the environment variable values from the previous step:

    CODE
    cat << EOF > kommander-cluster-admin-config
    apiVersion: v1
    kind: Config
    current-context: ${CURRENT_CONTEXT}
    contexts:
    - name: ${CURRENT_CONTEXT}
      context:
        cluster: ${CURRENT_CONTEXT}
        user: kommander-cluster-admin
        namespace: kube-system
    clusters:
    - name: ${CURRENT_CONTEXT}
      cluster:
        certificate-authority-data: ${CLUSTER_CA}
        server: ${CLUSTER_SERVER}
    users:
    - name: kommander-cluster-admin
      user:
        token: ${USER_TOKEN_VALUE}
    EOF
  8. This process produces a file in your current working directory called kommander-cluster-admin-config. The contents of this file are used in Kommander to attach the cluster.
    Before importing this configuration, verify the kubeconfig file can access the cluster:

    CODE
    kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces
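
    If this command fails with an authentication or TLS error, one of the environment variables from step 5 was likely empty; re-check them with the command in step 6 and regenerate the file.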

Finalize attaching your cluster from the UI

Now that you have a kubeconfig file, go to the DKP UI and follow these steps:

  1. From the top menu bar, select your target workspace.

  2. On the Dashboard page, select the Add Cluster option in the Actions dropdown menu at the top right.

  3. Select Attach Cluster.

  4. Select the No additional networking restrictions card. Alternatively, if you must use network restrictions, stop following the steps below, and see the instructions on the page Attach a cluster WITH network restrictions.

  5. Upload the kubeconfig file you created in the previous section (or copy its contents) into the Cluster Configuration section.

  6. The Cluster Name field automatically populates with the name of the cluster in the kubeconfig. You can edit this field to give the cluster a different name.

  7. Add labels to classify your cluster as needed.

  8. Select Create to attach your cluster.

If a cluster has insufficient resources to deploy all the federated platform services, it can fail to stay attached in the DKP UI. If this happens, ensure the cluster has sufficient resources for all of its pods.
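
To spot pods that are failing to schedule or start on the attached cluster, you can, for example, list every pod that is not in the Running phase (note that pods which completed successfully also match this filter):

CODE
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get pods -A --field-selector=status.phase!=Running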

Next Steps:

Day 2 - Cluster Operations Management
