GCP: Create the Management Cluster
Create a new Google Cloud Platform Kubernetes cluster in a non-air-gapped environment with the steps below.
DKP uses the GCP CSI driver as the default storage provider. Use Kubernetes CSI-compatible storage that is suitable for production.
Name your Cluster
Give your cluster a unique name suitable for your environment. The cluster name may only contain the following characters: a-z, 0-9, ., and -. Cluster creation will fail if the name has capital letters. See the Kubernetes documentation for more naming information. In GCP it is critical that the name is unique, as no two clusters in the same GCP account can have the same name.
Set the environment variable:
```bash
export CLUSTER_NAME=<gcp-example>
```
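If you want to sanity-check the name before proceeding, a quick shell test along these lines works; this check is an illustration rather than part of DKP:

```bash
# Illustration: require lowercase letters, digits, '.' and '-' only,
# starting and ending with an alphanumeric character (RFC 1123 style).
if [[ "${CLUSTER_NAME}" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]]; then
  echo "valid cluster name: ${CLUSTER_NAME}"
else
  echo "invalid cluster name: ${CLUSTER_NAME}" >&2
fi
```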
To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>
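For example, the mirror flags might be combined with the creation command like this sketch; <username> and <password> are placeholders for your Docker Hub credentials, and the remaining required flags are covered below:

```bash
# Sketch: route image pulls through an authenticated Docker Hub mirror.
# Replace the placeholders; other required flags are shown later on this page.
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --registry-mirror-url=https://registry-1.docker.io \
  --registry-mirror-username=<username> \
  --registry-mirror-password=<password>
```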
Create a New GCP Cluster
Availability zones (AZs) are isolated locations within data center regions from which public cloud services originate and operate. Because all the nodes in a node pool are deployed in a single availability zone, you may wish to create additional node pools to ensure your cluster has nodes deployed in multiple availability zones.
By default, the control-plane nodes will be created in 3 different zones. However, the default worker nodes will reside in a single zone. You may create additional node pools in other zones with the dkp create nodepool command, as in the sketch below. The default region for the availability zones is us-west1.
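As an illustration, adding a worker pool in a second zone might look like the following; the pool name and the exact flag names here are assumptions, so check dkp create nodepool gcp --help for the options your DKP version supports:

```bash
# Hypothetical sketch: add a worker pool in another zone of us-west1.
# The pool name and the --zone/--replicas flags are assumptions; verify
# them with `dkp create nodepool gcp --help` before running.
dkp create nodepool gcp gcp-example-extra \
  --cluster-name=${CLUSTER_NAME} \
  --zone=us-west1-b \
  --replicas=3
```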
Google Cloud Platform does not publish images. You must first build the image using Konvoy Image Builder.
Create an image using Konvoy Image Builder (KIB) and then export the image name:
```bash
export IMAGE_NAME=projects/${GCP_PROJECT}/global/images/<image_name_from_kib>
```
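For reference, building the image with KIB might look like the following sketch; the image definition path is an assumption that depends on the OS you are building, so consult the Konvoy Image Builder documentation for the definitions shipped with your release:

```bash
# Sketch, assuming KIB is installed and your GCP credentials are configured.
# The definition path images/gcp/ubuntu-2004.yaml is an assumption; choose
# the file matching your target OS.
konvoy-image build gcp images/gcp/ubuntu-2004.yaml \
  --project-id ${GCP_PROJECT}
```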
Ensure your subnets do not overlap with your host subnet, because they cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do so at cluster creation. The default subnets used in DKP are:
```yaml
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
```
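One way to review or adjust these defaults, assuming your DKP version supports dry-run output, is to render the cluster objects to a file, edit the cidrBlocks, and apply the result:

```bash
# Sketch: render the cluster objects without creating them so the
# clusterNetwork CIDR blocks can be edited before applying.
# Assumes --dry-run with YAML output is available in your DKP version.
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --project=${GCP_PROJECT} \
  --image=${IMAGE_NAME} \
  --dry-run \
  --output=yaml > ${CLUSTER_NAME}.yaml
```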
(Optional) Modify control plane audit logs. Users can modify the KubeadmControlPlane cluster-api object to configure different kubelet options. See the following guide if you wish to configure your control plane beyond the existing options that are available from flags.
(Optional) Determine what VPC network to use. All GCP accounts come with a preconfigured VPC network named default, which will be used if you do not specify a different network. To use a different VPC network for your cluster, create one by following the instructions in Create and Manage VPC Networks, then specify the --network <new_vpc_network_name> option on the create cluster command below, as in the sketch that follows. More information is available on GCP Cloud NAT and the network flag.
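For instance, pointing the cluster at a custom VPC network might look like this; <new_vpc_network_name> is a placeholder for the network you created:

```bash
# Sketch: create the cluster on a custom VPC network instead of "default".
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --project=${GCP_PROJECT} \
  --image=${IMAGE_NAME} \
  --network=<new_vpc_network_name> \
  --self-managed
```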
Create a Kubernetes cluster. The following example shows a common configuration. See dkp create cluster gcp reference for the full list of cluster creation options.
```bash
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --additional-tags=owner=$(whoami) \
  --project=${GCP_PROJECT} \
  --image=${IMAGE_NAME} \
  --self-managed
```
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful, as in the sketch below. More information is available in Configuring an HTTP/HTTPS Proxy.
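A proxied invocation might look like the following; the proxy endpoints and no-proxy list are placeholders for your own network:

```bash
# Sketch: the same create command in a proxied environment.
# Replace the proxy address and no-proxy entries with your network's values.
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --project=${GCP_PROJECT} \
  --image=${IMAGE_NAME} \
  --self-managed \
  --http-proxy=http://proxy.example.com:3128 \
  --https-proxy=http://proxy.example.com:3128 \
  --no-proxy=127.0.0.1,localhost,.svc,.cluster.local
```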
Wait for the cluster control-plane to be ready:
```bash
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
```
After the objects are created on the API server, the Cluster API controllers reconcile them. They create infrastructure and machines. As they progress, they update the Status of each object. Konvoy provides a command to describe the current status of the cluster:
```bash
dkp describe cluster -c ${CLUSTER_NAME}
```
A self-managed cluster refers to one in which the CAPI resources and controllers that describe and manage it are running on the same cluster they are managing. As part of the underlying processing when using the --self-managed flag, the DKP CLI:
- creates a bootstrap cluster
- creates a workload cluster
- moves the CAPI controllers from the bootstrap cluster to the workload cluster, making it self-managed
- deletes the bootstrap cluster
To understand how this process works step by step, you can find customizable steps in GCP Infrastructure under Custom Installation and Additional Infrastructure Tools.
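As a rough outline, the manual flow those steps correspond to might look like the following sketch; the exact subcommands and flags vary by DKP version, so treat this as an illustration rather than a definitive recipe:

```bash
# Sketch of what --self-managed automates (subcommand names assume DKP 2.x).
dkp create bootstrap                                   # 1. bootstrap cluster
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --project=${GCP_PROJECT} \
  --image=${IMAGE_NAME}                                # 2. workload cluster
dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
dkp move capi-resources \
  --to-kubeconfig=${CLUSTER_NAME}.conf                 # 3. make self-managed
dkp delete bootstrap                                   # 4. remove bootstrap
```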
Cluster Verification
If you want to monitor or verify the installation of your clusters, refer to:
Verify your Cluster and DKP Installation.