EKS Attach a Cluster
You can attach existing Kubernetes clusters to the Management Cluster. After attaching the cluster, you can use the UI to examine and manage this cluster. The following procedure shows how to attach an existing Amazon Elastic Kubernetes Service (EKS) cluster.
This procedure assumes you have one or more existing, running Amazon EKS clusters and administrative privileges on them. Refer to the Amazon EKS documentation for setup and configuration information.
Install aws-iam-authenticator. This binary is used to access your cluster using kubectl.
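As a quick check once the binary is installed, you can confirm it is on your PATH; if you also use the AWS CLI, aws eks update-kubeconfig adds a kubectl context for your cluster (the region and cluster name below are placeholders):

# Confirm aws-iam-authenticator is available
aws-iam-authenticator version

# Optional: add a kubectl context for your EKS cluster using the AWS CLI
aws eks update-kubeconfig --region <region> --name <your-eks-cluster-name>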
Attach a Pre-existing EKS Cluster
Ensure that the KUBECONFIG environment variable is set to the Management cluster kubeconfig before attaching by running:
export KUBECONFIG=<Management_cluster_kubeconfig>.conf
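A quick, optional sanity check that the variable points at the Management cluster:

# The current context and nodes listed here should belong to the Management cluster
kubectl config current-context
kubectl get nodes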
Access your EKS clusters
Ensure you are connected to your EKS clusters. Enter the following commands for each of your clusters:
kubectl config get-contexts
kubectl config use-context <context for first eks cluster>
Confirm kubectl can access the EKS cluster:
kubectl get nodes
Create a kubeconfig File
To get started, ensure you have kubectl set up and configured with ClusterAdmin for the cluster you want to connect to Kommander.
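If you want to verify that your current context has that level of access before proceeding, one quick check (a sketch, not the only way to verify) is:

# Returns "yes" if your current user can perform any action in any namespace
kubectl auth can-i '*' '*' --all-namespaces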
Create the necessary service account:
kubectl -n kube-system create serviceaccount kommander-cluster-admin
Create a token secret for the serviceaccount:
kubectl -n kube-system create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kommander-cluster-admin-sa-token
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
type: kubernetes.io/service-account-token
EOF
For more information on Service Account Tokens, refer to this article in our blog.
Verify that the serviceaccount token is ready by running this command:
kubectl -n kube-system get secret kommander-cluster-admin-sa-token -oyaml
Verify that the data.token field is populated. The output should be similar to this:
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDR...
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVX...
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: kommander-cluster-admin
    kubernetes.io/service-account.uid: b62bc32e-b502-4654-921d-94a742e273a8
  creationTimestamp: "2022-08-19T13:36:42Z"
  name: kommander-cluster-admin-sa-token
  namespace: default
  resourceVersion: "8554"
  uid: 72c2a4f0-636d-4a70-9f1c-55a75f15e520
type: kubernetes.io/service-account-token
Configure the new service account for cluster-admin permissions:
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kommander-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kommander-cluster-admin
  namespace: kube-system
EOF
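To confirm the binding took effect, you can optionally check the service account's permissions by impersonating it:

# Should return "yes" once the ClusterRoleBinding is in place
kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:kommander-cluster-admin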
Set up the following environment variables with the access data that is needed for producing a new kubeconfig file:
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/kommander-cluster-admin-sa-token -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
Confirm these variables have been set correctly:
export -p | grep -E 'USER_TOKEN_VALUE|CURRENT_CONTEXT|CURRENT_CLUSTER|CLUSTER_CA|CLUSTER_SERVER'
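Each variable should be non-empty. As a rough illustration (the values below are invented placeholders; yours will differ), an EKS context is typically the cluster ARN and the server is an eks.amazonaws.com endpoint:

echo ${CURRENT_CONTEXT}   # e.g. arn:aws:eks:us-west-2:111122223333:cluster/my-eks-cluster
echo ${CLUSTER_SERVER}    # e.g. https://EXAMPLE1234567890.gr7.us-west-2.eks.amazonaws.com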
Generate a kubeconfig file that uses the environment variable values from the previous step:
cat << EOF > kommander-cluster-admin-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: kommander-cluster-admin
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: kommander-cluster-admin
  user:
    token: ${USER_TOKEN_VALUE}
EOF
This process produces a file in your current working directory called kommander-cluster-admin-config. The contents of this file are used in Kommander to attach the cluster.
Before importing this configuration, verify the kubeconfig file can access the cluster:
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces
Attach from UI or Attach Manually Through the CLI
There are two ways to attach the cluster. Choose one of the following based on your preference.
Finish Attaching the EKS Cluster from the UI
Now that you have the kubeconfig file, go to the DKP UI and follow the steps below:
From the top menu bar, select your target workspace.
On the Dashboard page, select the Add Cluster option in the Actions dropdown menu at the top right.
Select Attach Cluster.
Select the No additional networking restrictions card. Alternatively, if you must use network restrictions, stop following the steps below, and see the instructions on the page Attach a cluster WITH network restrictions.
Upload the kubeconfig file you created in the previous section (or copy its contents) into the Cluster Configuration section.
The Cluster Name field automatically populates with the name of the cluster in the kubeconfig. You can edit this field with the name you want for your cluster.
Add labels to classify your cluster as needed.
Select Create to attach your cluster.
If a cluster has limited resources to deploy all the federated platform services, it will fail to stay attached in the DKP UI. If this happens, ensure your system has sufficient resources for all pods.
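One way to spot platform service pods that are stuck is to list pods that are not in the Running phase on the attached cluster (a sketch that reuses the kubeconfig generated earlier):

# Show pods on the attached cluster that are not Running (includes Pending and Failed, but also completed jobs)
kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get pods -A --field-selector=status.phase!=Running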
Manually Attach a DKP CLI Cluster to the Management Cluster
These steps are only applicable if you do not set a WORKSPACE_NAMESPACE when creating a cluster. If you already set a WORKSPACE_NAMESPACE, then you do not need to perform these steps since the cluster is already attached to the workspace.
Starting with DKP 2.6, when you create a Managed Cluster with the DKP CLI, it attaches automatically to the Management Cluster after a few moments.
However, if you do not set a workspace, the attached cluster will be created in the default workspace. To ensure that the attached cluster is created in your desired workspace namespace, follow these instructions:
Confirm you have your MANAGED_CLUSTER_NAME variable set with the following command:
echo ${MANAGED_CLUSTER_NAME}
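If the variable is empty, set it to the name you used when creating the cluster (placeholder shown below):

export MANAGED_CLUSTER_NAME=<your-managed-cluster-name>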
Retrieve your kubeconfig from the cluster you have created without setting a workspace:
dkp get kubeconfig --cluster-name ${MANAGED_CLUSTER_NAME} > ${MANAGED_CLUSTER_NAME}.conf
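Optionally, verify the retrieved kubeconfig can reach the managed cluster before continuing:

kubectl --kubeconfig ${MANAGED_CLUSTER_NAME}.conf get nodes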
You can now either attach it in the UI as described in the previous section, or attach your cluster to the workspace you want in the CLI.
NOTE: This is only necessary if you never set the workspace of your cluster upon creation.
Retrieve the workspace where you want to attach the cluster:
kubectl get workspaces -A
Set the WORKSPACE_NAMESPACE environment variable:
export WORKSPACE_NAMESPACE=<workspace-namespace>
You need to create a secret in the desired workspace before attaching the cluster to that workspace. Retrieve the kubeconfig secret value of your cluster:
kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}{{ "\n"}}'
This returns a lengthy base64-encoded value. Copy the entire string into a Secret, using the template below as a reference, and create a new attached-cluster-kubeconfig.yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: <your-managed-cluster-name>-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: <your-managed-cluster-name>
type: cluster.x-k8s.io/secret
data:
  value: <value-you-copied-from-secret-above>
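As an alternative to copying the value by hand, the following sketch builds the same manifest from the environment variables set earlier; KUBECONFIG_VALUE is a helper variable introduced here for illustration, not part of the documented procedure:

# Capture the base64-encoded kubeconfig value and write the Secret manifest
export KUBECONFIG_VALUE=$(kubectl -n default get secret ${MANAGED_CLUSTER_NAME}-kubeconfig -o go-template='{{.data.value}}')
cat << EOF > attached-cluster-kubeconfig.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  labels:
    cluster.x-k8s.io/cluster-name: ${MANAGED_CLUSTER_NAME}
type: cluster.x-k8s.io/secret
data:
  value: ${KUBECONFIG_VALUE}
EOF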
Create this secret in the desired workspace:
kubectl apply -f attached-cluster-kubeconfig.yaml --namespace ${WORKSPACE_NAMESPACE}
Create this KommanderCluster object to attach the cluster to the workspace:
cat << EOF | kubectl apply -f -
apiVersion: kommander.mesosphere.io/v1beta1
kind: KommanderCluster
metadata:
  name: ${MANAGED_CLUSTER_NAME}
  namespace: ${WORKSPACE_NAMESPACE}
spec:
  kubeconfigRef:
    name: ${MANAGED_CLUSTER_NAME}-kubeconfig
  clusterRef:
    capiCluster:
      name: ${MANAGED_CLUSTER_NAME}
EOF
You can now view this cluster in your Workspace in the UI, and you can confirm its status by running the following command. It may take a few minutes to reach "Joined" status:
kubectl get kommanderclusters -A
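If you prefer to follow the status change as it happens, you can watch the resource in the target workspace:

# Watch the KommanderCluster objects in the workspace until the cluster reports Joined
kubectl get kommanderclusters -n ${WORKSPACE_NAMESPACE} --watch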
If you have several Essential Clusters and want to turn one of them to a Managed Cluster to be centrally administrated by a Management Cluster, refer to Platform Expansion: Convert a DKP Essential Cluster to a DKP Enterprise Managed Cluster.
Next Step
If needed, you can Delete the EKS Cluster from CLI. Otherwise, proceed to Day 2 - Cluster Operations Management.