Create a separate service account when attaching existing clusters (for example, Amazon EKS or Azure AKS clusters)

If you already have a kubeconfig file that you can use to attach your cluster, go directly to Attach a Cluster with no Networking Restrictions (UI) or Attach a Cluster with Networking Restrictions.

The kubeconfig files generated for existing clusters are not usable out of the box, because they call provisioner-specific CLI commands (such as aws commands) and rely on locally obtained authentication tokens that are not compatible with DKP. A separate service account also gives you a dedicated identity for all DKP operations.
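
For example, a kubeconfig generated by aws eks update-kubeconfig typically contains an exec-based user entry similar to the following (illustrative values; the exact arguments depend on your AWS CLI version, region, and cluster name):

    users:
    - name: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          command: aws
          # Token acquisition depends on the local aws CLI and AWS credentials
          args:
          - --region
          - us-west-2
          - eks
          - get-token
          - --cluster-name
          - my-cluster

Every kubectl call made with such a file shells out to the aws binary and your local AWS credentials, neither of which is available to Kommander. The service account token produced in the steps below avoids this dependency.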

To get started, ensure you have kubectl set up and configured with cluster-admin access to the cluster you want to attach to Kommander.
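
You can confirm that your current credentials have that level of access with a kubectl auth can-i check; a yes response indicates you are allowed to perform any action on any resource:

    kubectl auth can-i '*' '*' --all-namespaces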

  1. Create the necessary service account:

    kubectl -n kube-system create serviceaccount kommander-cluster-admin
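
    Optionally, confirm that the service account exists before continuing:

    kubectl -n kube-system get serviceaccount kommander-cluster-admin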
  2. Create a token secret for the service account. Kubernetes 1.24 and later no longer create these token secrets automatically, so you must create one explicitly:

    kubectl -n kube-system create -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: kommander-cluster-admin-sa-token
      annotations:
        kubernetes.io/service-account.name: kommander-cluster-admin
    type: kubernetes.io/service-account-token
    EOF
  3. Verify that the service account token is ready by running this command:

    kubectl -n kube-system get secret kommander-cluster-admin-sa-token -oyaml

    Verify that the data.token field is populated. The output should be similar to this:

    apiVersion: v1
    data:
      ca.crt: LS0tLS1CRUdJTiBDR...
      namespace: a3ViZS1zeXN0ZW0=
      token: ZXlKaGJHY2lPaUpTVX...
    kind: Secret
    metadata:
      annotations:
        kubernetes.io/service-account.name: kommander-cluster-admin
        kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
      creationTimestamp: "2022-08-19T13:36:42Z"
      name: kommander-cluster-admin-sa-token
      namespace: kube-system
      resourceVersion: "8554"
      uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
    type: kubernetes.io/service-account-token
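
    For a quicker check, you can extract just the token field with a jsonpath query; any non-empty output means the token controller has populated the secret:

    kubectl -n kube-system get secret kommander-cluster-admin-sa-token -o jsonpath='{.data.token}'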
  4. Configure the new service account for cluster-admin permissions:

    cat << EOF | kubectl apply -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kommander-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kommander-cluster-admin
      namespace: kube-system
    EOF
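
    To verify that the binding took effect, you can impersonate the service account; the expected response is yes:

    kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:kommander-cluster-admin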
  5. Set the following environment variables with the access data needed to produce a new kubeconfig file:

    export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
    export CURRENT_CONTEXT=$(kubectl config current-context)
    export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
    export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
    export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
  6. Confirm these variables have been set correctly:

    export -p USER_TOKEN_VALUE CURRENT_CONTEXT CURRENT_CLUSTER CLUSTER_CA CLUSTER_SERVER
  7. Generate a kubeconfig file that uses the environment variable values from the previous step:

    cat << EOF > kommander-cluster-admin-config
    apiVersion: v1
    kind: Config
    current-context: ${CURRENT_CONTEXT}
    contexts:
    - name: ${CURRENT_CONTEXT}
      context:
        cluster: ${CURRENT_CONTEXT}
        user: kommander-cluster-admin
        namespace: kube-system
    clusters:
    - name: ${CURRENT_CONTEXT}
      cluster:
        certificate-authority-data: ${CLUSTER_CA}
        server: ${CLUSTER_SERVER}
    users:
    - name: kommander-cluster-admin
      user:
        token: ${USER_TOKEN_VALUE}
    EOF
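
    Because the generated file embeds a long-lived cluster-admin token, consider restricting its file permissions (a general security precaution, not a DKP requirement):

    chmod 600 kommander-cluster-admin-config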
  8. This process produces a file named kommander-cluster-admin-config in your current working directory. The contents of this file are used in Kommander to attach the cluster.
    Before importing this configuration, verify that the kubeconfig file can access the cluster:

    kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces

Next Step:

Use this kubeconfig to attach the cluster, following the steps in Attach a Cluster with no Networking Restrictions (UI) or Attach a Cluster with Networking Restrictions.

If a cluster has insufficient resources to deploy all the federated platform services, it fails to stay attached in the DKP UI. If this happens, check whether any pods are failing to get the resources they require.
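
One way to find such pods is to list those stuck in the Pending phase and inspect their scheduling events (generic Kubernetes troubleshooting; replace the placeholder namespace and pod name with your own):

    kubectl get pods --all-namespaces --field-selector=status.phase=Pending
    kubectl -n <namespace> describe pod <pod-name>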