Create a New Air-gapped AWS Cluster
Prerequisites
Before you begin, make sure you have created a Bootstrap cluster.
DKP uses `localvolumeprovisioner` as the default storage provider. However, `localvolumeprovisioner` is not suitable for production use. You should use a Kubernetes CSI-compatible storage provider that is suitable for production.
You can choose from any of the storage options available for Kubernetes. To disable the default that Konvoy deploys, set the default `localvolumeprovisioner` StorageClass as non-default. Then set your newly created StorageClass to be the default by following the commands in the Kubernetes documentation called Changing the Default Storage Class.
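The swap described above can be sketched with the standard `kubectl patch` commands from the Kubernetes "Changing the Default Storage Class" procedure; `my-csi-storageclass` is a hypothetical name, substitute your production StorageClass:

```shell
# Mark the Konvoy-deployed default StorageClass as non-default, then promote
# your own. "my-csi-storageclass" is a placeholder name.
UNSET_DEFAULT='{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
SET_DEFAULT='{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass localvolumeprovisioner -p "${UNSET_DEFAULT}"
kubectl patch storageclass my-csi-storageclass -p "${SET_DEFAULT}"
kubectl get storageclass    # "(default)" should now appear next to my-csi-storageclass
```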
Create a New Cluster
Create a new AWS Kubernetes cluster in an Air-gapped environment in your AWS infrastructure. When you use existing infrastructure, DKP does not create, modify, or delete the following AWS resources:
Internet Gateways
NAT Gateways
Routing tables
Subnets
VPC
VPC Endpoints (for subnets without NAT Gateways)
An AWS subnet has Network ACLs that can control traffic in and out of the subnet. DKP does not modify the Network ACLs of an existing subnet. DKP uses Security Groups to control traffic. If a Network ACL denies traffic that is allowed by DKP-managed Security Groups, the cluster may not work correctly.
Set the environment variable to the name you assigned this cluster:
```shell
export CLUSTER_NAME=<aws-example>
```
The cluster name may only contain the following characters: `a-z`, `0-9`, `.`, and `-`. Cluster creation will fail if the name has capital letters. See Kubernetes for more naming information.
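The naming rule above can be checked locally before exporting the variable. The helper below is a hypothetical sketch, not part of DKP:

```shell
# Return success only if the name uses just a-z, 0-9, '.', and '-'.
validate_cluster_name() {
  case "$1" in
    ""|*[!a-z0-9.-]*) return 1 ;;
    *) return 0 ;;
  esac
}

validate_cluster_name "aws-example" && echo "ok"          # passes
validate_cluster_name "AWS-Example" || echo "rejected"    # capital letters fail
```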
Export variables for the existing infrastructure details:
```shell
export AWS_VPC_ID=<vpc-...>
export AWS_SUBNET_IDS=<subnet-...,subnet-...,subnet-...>
export AWS_ADDITIONAL_SECURITY_GROUPS=<sg-...>
export AWS_AMI_ID=<ami-...>
```
`AWS_VPC_ID`: the VPC ID where the cluster will be created. The VPC requires the following AWS VPC Endpoints to be already present:
- `ec2` - com.amazonaws.{region}.ec2
- `elasticloadbalancing` - com.amazonaws.{region}.elasticloadbalancing
- `secretsmanager` - com.amazonaws.{region}.secretsmanager
- `autoscaling` - com.amazonaws.{region}.autoscaling
- `ecr` - com.amazonaws.{region}.ecr.api (authentication)
- `ecr` - com.amazonaws.{region}.ecr.dkr (data transfer)
For more details, see the AWS documentation on accessing an AWS service using an interface VPC endpoint and the AWS VPC endpoints list.
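As a sketch, the six required endpoint service names can be generated for your region and compared with what `aws ec2 describe-vpc-endpoints` reports for the VPC (the region value is a placeholder):

```shell
# Print the VPC Endpoint service names the VPC must already have.
REGION="us-west-2"   # placeholder; use your cluster's region
required_endpoints() {
  for svc in ec2 elasticloadbalancing secretsmanager autoscaling ecr.api ecr.dkr; do
    echo "com.amazonaws.$1.${svc}"
  done
}
required_endpoints "${REGION}"

# Compare against the endpoints actually present (requires AWS credentials):
# aws ec2 describe-vpc-endpoints --filters "Name=vpc-id,Values=${AWS_VPC_ID}" \
#   --query 'VpcEndpoints[].ServiceName' --output text
```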
`AWS_SUBNET_IDS`: a comma-separated list of one or more private Subnet IDs, with each one in a different Availability Zone. The cluster control-plane and worker nodes will automatically be spread across these Subnets.
`AWS_ADDITIONAL_SECURITY_GROUPS`: a comma-separated list of one or more Security Group IDs to use in addition to the ones automatically created by CAPA.
`AWS_AMI_ID`: the AMI ID to use for control-plane and worker nodes. The AMI must be created by the konvoy-image-builder.
⚠️ IMPORTANT: Ensure your subnets do not overlap with your host subnet because they cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do this at cluster creation. The default subnets used in DKP are:

```yaml
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
```
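The no-overlap requirement can be checked before cluster creation. The following is a hypothetical pure-shell sketch of IPv4 CIDR overlap testing (simplified to IPv4; it is not a general-purpose tool):

```shell
# Convert a dotted-quad IPv4 address to an integer.
ip_to_int() {
  oldIFS=$IFS; IFS=.
  set -- $1
  IFS=$oldIFS
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed if two CIDR blocks overlap: compare both networks under the
# shorter (less specific) prefix mask.
cidrs_overlap() {
  net1=${1%/*}; net2=${2%/*}
  len1=${1#*/}; len2=${2#*/}
  min=$(( len1 < len2 ? len1 : len2 ))
  mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$net1") & mask )) -eq $(( $(ip_to_int "$net2") & mask )) ]
}

cidrs_overlap "192.168.0.0/16" "192.168.10.0/24" && echo "overlaps default pod subnet"
cidrs_overlap "10.96.0.0/12" "172.16.0.0/16" || echo "no overlap"
```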
⚠️ IMPORTANT: You must tag the subnets as described below to allow Kubernetes to create ELBs for Services of type `LoadBalancer` in those subnets. If the subnets are not tagged, they will not receive an ELB and the following error displays: `Error syncing load balancer, failed to ensure load balancer; could not find any suitable subnets for creating the ELB.` Set the tags as follows, where `<CLUSTER_NAME>` corresponds to the name set in the `CLUSTER_NAME` environment variable:

```
kubernetes.io/cluster = <CLUSTER_NAME>
kubernetes.io/cluster/<CLUSTER_NAME> = owned
kubernetes.io/role/internal-elb = 1
```
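A sketch of applying those tags with the aws CLI (the cluster name and subnet IDs below are placeholders); the helper only prints the `aws ec2 create-tags` commands so you can review them before piping the output to `sh`:

```shell
CLUSTER_NAME="aws-example"                # placeholder
AWS_SUBNET_IDS="subnet-aaa,subnet-bbb"    # placeholder

# Print one `aws ec2 create-tags` command per subnet.
subnet_tag_commands() {
  for subnet in $(echo "$2" | tr ',' ' '); do
    echo "aws ec2 create-tags --resources ${subnet} --tags" \
      "Key=kubernetes.io/cluster,Value=$1" \
      "Key=kubernetes.io/cluster/$1,Value=owned" \
      "Key=kubernetes.io/role/internal-elb,Value=1"
  done
}
subnet_tag_commands "${CLUSTER_NAME}" "${AWS_SUBNET_IDS}"
```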
⚠️ IMPORTANT: The AMI must be created with Konvoy Image Builder in order to use the registry mirror feature.
⚠️ If you do not already have a local registry set up, refer to the Local Registry Tools page for more information.
(Optional) Configure your cluster to use an existing local registry as a mirror when attempting to pull images previously pushed to your registry. Below is an example command for AWS ECR:
```shell
export REGISTRY_MIRROR_URL=<your_registry_url>
```
`REGISTRY_MIRROR_URL`: the address of an existing local registry, accessible in the VPC, that the new cluster nodes will be configured to use as a mirror registry when pulling images.

NOTE: Other local registries may use the options below:

JFrog:
- `CONTAINER_REGISTRY_CA`: (optional) the path on the bastion machine to the registry CA. This value is only needed if the registry is using a self-signed certificate and the AMIs are not already configured to trust this CA.
- `CONTAINER_REGISTRY_USERNAME`: (optional) set to a user that has pull access to this registry.
- `CONTAINER_REGISTRY_PASSWORD`: (optional) only needed if a username is set.
Create a Kubernetes cluster. The following example shows a common configuration. See dkp create cluster aws reference for the full list of cluster creation options:
In previous DKP releases, AMI images provided by the upstream CAPA project would be used if you did not specify an AMI. However, the upstream images are not recommended for production and may not always be available. Therefore, DKP now requires you to specify an AMI when creating a cluster. To create an AMI, use Konvoy Image Builder.
There are two approaches to supplying the ID of your AMI. Either provide the ID of the AMI or provide a way for DKP to discover the AMI using location, format and OS information:
Option One - Provide the ID of your AMI:
Use the example command below, keeping the existing flag that provides the AMI ID:
--ami AMI_ID
Option Two - Provide a path for your AMI with the information required for image discovery:
Where the AMI is published using your AWS Account ID:
--ami-owner AWS_ACCOUNT_ID
The base OS information:
--ami-base-os ubuntu-20.04
The format or string used to search for matching AMIs; ensure it references the Kubernetes version plus the base OS name:
--ami-format 'example-{{.BaseOS}}-?{{.K8sVersion}}-*'
```shell
dkp create cluster aws --cluster-name=${CLUSTER_NAME} \
  --vpc-id=${AWS_VPC_ID} \
  --ami=${AWS_AMI_ID} \
  --subnet-ids=${AWS_SUBNET_IDS} \
  --internal-load-balancer=true \
  --additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS}
```
(Optional) Add the registry mirror flag to the command above to pull from your local registry:
--registry-mirror-url=<YOUR_ECR_URL>
(Optional) The Control Plane and Worker nodes can be configured to use an HTTP proxy:
```shell
export CONTROL_PLANE_HTTP_PROXY=http://example.org:8080
export CONTROL_PLANE_HTTPS_PROXY=http://example.org:8080
export CONTROL_PLANE_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
export WORKER_HTTP_PROXY=http://example.org:8080
export WORKER_HTTPS_PROXY=http://example.org:8080
export WORKER_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
```
- Replace `example.org,example.com,example.net` with your internal addresses
- `localhost` and `127.0.0.1` addresses should not use the proxy
- `10.96.0.0/12` is the default Kubernetes service subnet
- `192.168.0.0/16` is the default Kubernetes pod subnet
- `kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local` is the internal Kubernetes kube-apiserver service
- `.svc,.svc.cluster,.svc.cluster.local` covers the internal Kubernetes services
- `169.254.169.254` is the AWS metadata server
- `.elb.amazonaws.com` allows the worker nodes to communicate directly with the kube-apiserver ELB
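The suffix-matching behavior of the NO_PROXY entries above can be illustrated with a small hypothetical helper (simplified: it handles exact hostnames and leading-dot suffixes, not the CIDR entries):

```shell
# Return success if the host would bypass the proxy for the given NO_PROXY list.
bypasses_proxy() {
  host=$1
  oldIFS=$IFS; IFS=,
  set -- $2
  IFS=$oldIFS
  for entry in "$@"; do
    case "$entry" in
      .*) case "$host" in *"$entry") return 0 ;; esac ;;  # leading dot: suffix match
      *)  [ "$host" = "$entry" ] && return 0 ;;           # otherwise: exact match
    esac
  done
  return 1
}

bypasses_proxy "internal-api.elb.amazonaws.com" ".elb.amazonaws.com,localhost" && echo "bypassed"
bypasses_proxy "example.org" "localhost,127.0.0.1" || echo "proxied"
```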
(Optional) Create a Kubernetes cluster with HTTP proxy configured. This step assumes you did not already create a cluster in the previous steps:
```shell
dkp create cluster aws --cluster-name=${CLUSTER_NAME} \
  --vpc-id=${AWS_VPC_ID} \
  --ami=${AWS_AMI_ID} \
  --subnet-ids=${AWS_SUBNET_IDS} \
  --internal-load-balancer=true \
  --additional-security-group-ids=${AWS_ADDITIONAL_SECURITY_GROUPS} \
  --registry-mirror-url=<YOUR_ECR_URL> \
  --control-plane-http-proxy="${CONTROL_PLANE_HTTP_PROXY}" \
  --control-plane-https-proxy="${CONTROL_PLANE_HTTPS_PROXY}" \
  --control-plane-no-proxy="${CONTROL_PLANE_NO_PROXY}" \
  --worker-http-proxy="${WORKER_HTTP_PROXY}" \
  --worker-https-proxy="${WORKER_HTTPS_PROXY}" \
  --worker-no-proxy="${WORKER_NO_PROXY}"
```
Inspect the created cluster resources:
```shell
kubectl get clusters,kubeadmcontrolplanes,machinedeployments
```
Wait for the cluster control-plane to be ready:
```shell
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=60m
```
Then, proceed to Explore New AWS Cluster.