EKS: Retrieve kubeconfig for EKS Cluster
This guide explains how to use the command line to interact with your newly deployed Kubernetes cluster.
Before you start, make sure you have created a workload cluster, as described in EKS: Create an EKS Cluster.
Explore the New Kubernetes Cluster
Follow these steps:
1. Get a kubeconfig file for the workload cluster:

When the workload cluster is created, the cluster lifecycle services generate a kubeconfig file for the workload cluster and write it to a Secret. The kubeconfig file is scoped to the cluster administrator.

Get the kubeconfig from the Secret, and write it to a file, using this command:

```
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
```
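If you want to see where that file comes from, you can read the Secret directly with kubectl against the management cluster. A minimal sketch, assuming the Cluster API convention of a Secret named `${CLUSTER_NAME}-kubeconfig` with the data key `value`, in the namespace where the cluster object was created (`default` here):

```
# Sketch: extract the admin kubeconfig straight from the Secret, using your
# current kubectl context (the management cluster). Secret name, key, and
# namespace are assumptions based on the Cluster API convention.
kubectl get secret ${CLUSTER_NAME}-kubeconfig \
  --output jsonpath='{.data.value}' | base64 --decode > ${CLUSTER_NAME}.conf
```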
2. List the Nodes using this command:

```
kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
```
```
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-122-211.us-west-2.compute.internal   Ready    <none>   35m   v1.23.9-eks-ba74326
ip-10-0-127-74.us-west-2.compute.internal    Ready    <none>   35m   v1.23.9-eks-ba74326
ip-10-0-71-155.us-west-2.compute.internal    Ready    <none>   35m   v1.23.9-eks-ba74326
ip-10-0-93-47.us-west-2.compute.internal     Ready    <none>   35m   v1.23.9-eks-ba74326
```
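Instead of re-running `get nodes` until every Node is Ready, you can block until they all report Ready. A small sketch using `kubectl wait`, with an assumed five-minute timeout:

```
# Wait (up to an assumed 5-minute timeout) for every Node to report Ready.
kubectl --kubeconfig=${CLUSTER_NAME}.conf wait --for=condition=Ready nodes --all --timeout=300s
```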
NOTE: It may take a few minutes for the STATUS to move to Ready while the Pod network is deployed. The Nodes' STATUS should change to Ready soon after the calico-node DaemonSet Pods are Ready.
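One way to watch that DaemonSet converge is `kubectl rollout status`; a sketch, assuming the DaemonSet runs in the calico-system namespace as shown in the Pod listing below:

```
# Block until every calico-node DaemonSet Pod is rolled out and available.
kubectl --kubeconfig=${CLUSTER_NAME}.conf rollout status daemonset/calico-node -n calico-system
```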
3. List the Pods using this command:
```
kubectl --kubeconfig=${CLUSTER_NAME}.conf get --all-namespaces pods
```
```
NAMESPACE                NAME                                             READY   STATUS     RESTARTS   AGE
calico-system            calico-kube-controllers-7d6749878f-ccsx9         1/1     Running    0          34m
calico-system            calico-node-2r6l8                                1/1     Running    0          34m
calico-system            calico-node-5pdlb                                1/1     Running    0          34m
calico-system            calico-node-n24hh                                1/1     Running    0          34m
calico-system            calico-node-qrh7p                                1/1     Running    0          34m
calico-system            calico-typha-7bbcb87696-7pk45                    1/1     Running    0          34m
calico-system            calico-typha-7bbcb87696-t4c8r                    1/1     Running    0          34m
calico-system            csi-node-driver-bz48k                            2/2     Running    0          34m
calico-system            csi-node-driver-k5mmk                            2/2     Running    0          34m
calico-system            csi-node-driver-nvcck                            2/2     Running    0          34m
calico-system            csi-node-driver-x4xnh                            2/2     Running    0          34m
kube-system              aws-node-2xp86                                   1/1     Running    0          35m
kube-system              aws-node-5f2kx                                   1/1     Running    0          35m
kube-system              aws-node-6lzm7                                   1/1     Running    0          35m
kube-system              aws-node-pz8c6                                   1/1     Running    0          35m
kube-system              cluster-autoscaler-789d86b489-sz9x2              0/1     Init:0/1   0          36m
kube-system              coredns-57ff979f67-pk5cg                         1/1     Running    0          75m
kube-system              coredns-57ff979f67-sf2j9                         1/1     Running    0          75m
kube-system              ebs-csi-controller-5f6bd5d6dc-bplwm              6/6     Running    0          36m
kube-system              ebs-csi-controller-5f6bd5d6dc-dpjt7              6/6     Running    0          36m
kube-system              ebs-csi-node-7hmm5                               3/3     Running    0          35m
kube-system              ebs-csi-node-l4vfh                               3/3     Running    0          35m
kube-system              ebs-csi-node-mfr7c                               3/3     Running    0          35m
kube-system              ebs-csi-node-v8krq                               3/3     Running    0          35m
kube-system              kube-proxy-7fc5x                                 1/1     Running    0          35m
kube-system              kube-proxy-vvkmk                                 1/1     Running    0          35m
kube-system              kube-proxy-x6hcc                                 1/1     Running    0          35m
kube-system              kube-proxy-x8frb                                 1/1     Running    0          35m
kube-system              snapshot-controller-8ff89f489-4cfxv              1/1     Running    0          36m
kube-system              snapshot-controller-8ff89f489-78gg8              1/1     Running    0          36m
node-feature-discovery   node-feature-discovery-master-7d5985467-52fcn    1/1     Running    0          36m
node-feature-discovery   node-feature-discovery-worker-88hr7              1/1     Running    0          34m
node-feature-discovery   node-feature-discovery-worker-h95nq              1/1     Running    0          35m
node-feature-discovery   node-feature-discovery-worker-lfghg              1/1     Running    0          34m
node-feature-discovery   node-feature-discovery-worker-prc8p              1/1     Running    0          35m
tigera-operator          tigera-operator-6dcd98c8ff-k97hq                 1/1     Running    0          36m
```
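With this many Pods, it can be easier to list only the ones that are not yet Running, such as the cluster-autoscaler Pod above, which is still in its init phase. A sketch using a field selector:

```
# Show only Pods whose phase is not Running. Note this selector also surfaces
# Succeeded Pods (e.g. completed Jobs), so an empty list is not required --
# just check that nothing is stuck in Pending or Init.
kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods --all-namespaces --field-selector=status.phase!=Running
```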