
Create a New GCP Cluster

Name your Cluster

  1. Give your cluster a unique name suitable for your environment. The cluster name may only contain the characters a-z, 0-9, ., and -; cluster creation fails if the name contains capital letters. See the Kubernetes documentation for more naming information.

    In GCP it is critical that the name is unique, as no two clusters in the same GCP account can have the same name.

  2. Set the environment variable:

    CODE
    export CLUSTER_NAME=<gcp-example>
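The naming rule from step 1 can be sanity-checked before exporting. A minimal sketch (the function name is illustrative; the start/end-with-alphanumeric constraint follows the Kubernetes RFC 1123 subdomain rules):

```shell
# Returns success if the name uses only a-z, 0-9, '.', and '-',
# and starts and ends with an alphanumeric character (RFC 1123).
valid_cluster_name() {
  [[ "$1" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]]
}

valid_cluster_name "gcp-example" && echo "ok"    # prints "ok"
valid_cluster_name "GCP-Example" || echo "invalid: capital letters"
```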

To increase Docker Hub's rate limit, use your Docker Hub credentials when creating the cluster by setting the following flags on the dkp create cluster command: --registry-mirror-url=https://registry-1.docker.io --registry-mirror-username=<username> --registry-mirror-password=<password>.
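For example, the registry-mirror flags attach to the same create command used later in this procedure (the credential placeholders are illustrative and must be replaced with your own):

```shell
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --registry-mirror-url=https://registry-1.docker.io \
  --registry-mirror-username=<your-dockerhub-username> \
  --registry-mirror-password=<your-dockerhub-password>
```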

Create a New GCP Cluster

Availability zones (AZs) are isolated locations within data center regions from which public cloud services originate and operate. Because all the nodes in a node pool are deployed in a single Availability Zone, you may wish to create additional node pools to ensure your cluster has nodes deployed in multiple Availability Zones.

By default, the control-plane Nodes will be created in 3 different zones. However, the default worker Nodes will reside in a single zone. You may create additional node pools in other zones with the dkp create nodepool command. The default region for the availability zones is us-west1.
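For instance, a worker pool in a second zone could be added with a command along these lines. This is a sketch: the --zone flag name and the pool name are assumptions, so confirm the exact options with `dkp create nodepool gcp --help`.

```shell
# Hypothetical flag names -- verify with `dkp create nodepool gcp --help`.
dkp create nodepool gcp example-nodepool-b \
  --cluster-name=${CLUSTER_NAME} \
  --zone=us-west1-b
```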

Google Cloud Platform does not publish images. You must first build the image using Konvoy Image Builder.

  1. Create an image using Konvoy Image Builder (KIB) and then export the image name:

    BASH
    export IMAGE_NAME=projects/${GCP_PROJECT}/global/images/<image_name_from_kib>
  2. Ensure your subnets do not overlap with your host subnet; the subnets cannot be changed after cluster creation. If you need to change the Kubernetes subnets, you must do so at cluster creation. The default subnets used in DKP are:

    CODE
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
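If the defaults above would overlap your host subnet, the CIDRs can be overridden at creation time. The sketch below assumes the create command accepts CIDR-override flags; the flag names are assumptions, so verify them with `dkp create cluster gcp --help`.

```shell
# Flag names are assumptions -- verify with `dkp create cluster gcp --help`.
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --pod-network-cidr=172.16.0.0/16 \
  --service-cidr=10.0.0.0/12
```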
  3. (Optional) Modify Control Plane Audit logs - you can modify the KubeadmControlPlane cluster-api object to configure different kubelet options. See the following guide if you wish to configure your control plane beyond the options that are available from flags.

  4. (Optional) Determine what VPC Network to use. All GCP accounts come with a preconfigured VPC Network named default, which will be used if you do not specify a different network.
    To use a different VPC network for your cluster, create one by following the instructions in Create and Manage VPC Networks, then specify the --network <new_vpc_network_name> option on the create cluster command below. More information is available on GCP Cloud NAT and the network flag.

  5. (Optional) Use a registry mirror. Configure your cluster to use an existing local registry as a mirror when attempting to pull images previously pushed to your registry.

Export Registry Variables

If you have a local registry, you must provide additional arguments when creating the cluster. These arguments define the URL so the cluster can locate the local registry to use. Set the environment variables with your registry information:

BASH
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
  • REGISTRY_URL: the address of an existing container registry, accessible in the VPC, that the new cluster nodes will be configured to use as a mirror registry when pulling images.

  • REGISTRY_CA: (optional) the path on the bastion machine to the container registry CA. Konvoy will configure the cluster nodes to trust this CA. This value is only needed if the registry is using a self-signed certificate and the AMIs are not already configured to trust this CA.

  • REGISTRY_USERNAME: optional, set to a user that has pull access to this registry.

  • REGISTRY_PASSWORD: optional, the password for REGISTRY_USERNAME; not needed if no username is set.

Set the flag during cluster creation:

CODE
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD}

See also: GCP Registry Mirrors

  6. Create a Kubernetes cluster. The following example shows a common configuration. See dkp create cluster gcp reference for the full list of cluster creation options.

    CODE
    dkp create cluster gcp \
    --cluster-name=${CLUSTER_NAME} \
    --additional-tags=owner=$(whoami) \
    --with-gcp-bootstrap-credentials=true \
    --project=${GCP_PROJECT} \
    --image=${IMAGE_NAME} \
    --dry-run \
    --output=yaml \
    > ${CLUSTER_NAME}.yaml

Expand the drop-downs for HTTP and other flags to use during the cluster creation step above. For more information regarding this flag or others, please refer to the CLI section of the documentation for dkp create cluster and select your provider.

HTTP Only:

If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.

CODE
--http-proxy <<http proxy list>>
--https-proxy <<https proxy list>>
--no-proxy <<no proxy list>>
  • To configure the Control Plane and Worker nodes to use an HTTP proxy:

    CODE
    export CONTROL_PLANE_HTTP_PROXY=http://example.org:8080
    export CONTROL_PLANE_HTTPS_PROXY=http://example.org:8080
    export CONTROL_PLANE_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
    
    export WORKER_HTTP_PROXY=http://example.org:8080
    export WORKER_HTTPS_PROXY=http://example.org:8080
    export WORKER_NO_PROXY="example.org,example.com,example.net,localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,169.254.169.254,.elb.amazonaws.com"
  • Replace:

    • example.org,example.com,example.net with your internal addresses

    • localhost and 127.0.0.1 addresses should not use the proxy

    • 10.96.0.0/12 is the default Kubernetes service subnet

    • 192.168.0.0/16 is the default Kubernetes pod subnet

    • kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local is the internal Kubernetes kube-apiserver service

    • .svc,.svc.cluster,.svc.cluster.local are the internal Kubernetes services

    • 169.254.169.254 is the AWS metadata server

    • .elb.amazonaws.com is for the worker nodes to allow them to communicate directly to the kube-apiserver ELB

  • Create a Kubernetes cluster with HTTP proxy configured using these flags during dkp create cluster command:

    CODE
    --control-plane-http-proxy=${CONTROL_PLANE_HTTP_PROXY} \
    --control-plane-https-proxy=${CONTROL_PLANE_HTTPS_PROXY} \
    --control-plane-no-proxy=${CONTROL_PLANE_NO_PROXY} \
    --worker-http-proxy=${WORKER_HTTP_PROXY} \
    --worker-https-proxy=${WORKER_HTTPS_PROXY} \
    --worker-no-proxy=${WORKER_NO_PROXY} \
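Combined with the dry-run command from the earlier step, a proxy-enabled invocation looks like this sketch:

```shell
dkp create cluster gcp \
  --cluster-name=${CLUSTER_NAME} \
  --project=${GCP_PROJECT} \
  --image=${IMAGE_NAME} \
  --control-plane-http-proxy=${CONTROL_PLANE_HTTP_PROXY} \
  --control-plane-https-proxy=${CONTROL_PLANE_HTTPS_PROXY} \
  --control-plane-no-proxy=${CONTROL_PLANE_NO_PROXY} \
  --worker-http-proxy=${WORKER_HTTP_PROXY} \
  --worker-https-proxy=${WORKER_HTTPS_PROXY} \
  --worker-no-proxy=${WORKER_NO_PROXY} \
  --dry-run \
  --output=yaml \
  > ${CLUSTER_NAME}.yaml
```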
Individual manifests using the Output Directory flag:

You can use the --output-directory flag to create individual files containing smaller manifests for easier editing. This creates multiple files in the specified directory, which must already exist:

CODE
--output-directory=<existing-directory>

Refer to the Cluster Creation Customization Choices section for more information on how to use optional flags such as the --output-directory flag.

  7. Inspect or edit the cluster objects. Familiarize yourself with Cluster API before editing the cluster objects, as edits can prevent the cluster from deploying successfully. See GCP Customizing CAPI Clusters.

  8. (Optional) Modify Control Plane Audit logs - you can modify the KubeadmControlPlane cluster-api object to configure different kubelet options. See the following guide if you wish to configure your control plane beyond the options that are available from flags.

  9. Create the cluster from the objects generated from the dry run. A warning will appear in the console if the resource already exists and will require you to remove the resource or update your YAML.

    CODE
    kubectl create -f ${CLUSTER_NAME}.yaml

    NOTE: If you used the --output-directory flag in your dkp create .. --dry-run step above, create the cluster from the objects you created by specifying the directory:

    CODE
    kubectl create -f <existing-directory>/.  
  10. Wait for the cluster control-plane to be ready:

    CODE
    kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
  11. After the objects are created on the API server, the Cluster API controllers reconcile them. They create infrastructure and machines. As they progress, they update the Status of each object. Konvoy provides a command to describe the current status of the cluster:

    CODE
    dkp describe cluster -c ${CLUSTER_NAME}
Output:
CODE
NAME                                                                      READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/gcp-example                                                       True                     52s
├─ClusterInfrastructure - GCPCluster/gcp-example
├─ControlPlane - KubeadmControlPlane/gcp-example-control-plane            True                     52s
│ ├─Machine/gcp-example-control-plane-6fbzn                               True                     2m32s
│ │ └─MachineInfrastructure - GCPMachine/gcp-example-control-plane-62g6s
│ ├─Machine/gcp-example-control-plane-jf6s2                               True                     7m36s
│ │ └─MachineInfrastructure - GCPMachine/gcp-example-control-plane-bsr2z
│ └─Machine/gcp-example-control-plane-mnbfs                               True                     54s
│   └─MachineInfrastructure - GCPMachine/gcp-example-control-plane-s8xsx
└─Workers
  └─MachineDeployment/gcp-example-md-0                                    True                     78s
    ├─Machine/gcp-example-md-0-68b86fddb8-8glsw                           True                     2m49s
    │ └─MachineInfrastructure - GCPMachine/gcp-example-md-0-zls8d
    ├─Machine/gcp-example-md-0-68b86fddb8-bvbm7                           True                     2m48s
    │ └─MachineInfrastructure - GCPMachine/gcp-example-md-0-5zcvc
    ├─Machine/gcp-example-md-0-68b86fddb8-k9499                           True                     2m49s
    │ └─MachineInfrastructure - GCPMachine/gcp-example-md-0-k8h5p
    └─Machine/gcp-example-md-0-68b86fddb8-l6vfb                           True                     2m49s
      └─MachineInfrastructure - GCPMachine/gcp-example-md-0-9h5vn

DKP uses the GCP CSI driver as the default storage provider. Use a Kubernetes CSI-compatible storage provider that is suitable for production. See the Kubernetes documentation topic Changing the Default Storage Class for more information.
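The Kubernetes procedure referenced above amounts to toggling the is-default-class annotation on the StorageClass objects; the class names below are placeholders for your own:

```shell
# Unset the current default, then mark your preferred class as default.
kubectl patch storageclass <current-default-class> -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass <your-preferred-class> -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```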

If you are not using the default storage provider, you cannot deploy an alternate provider until dkp create cluster has finished. However, you must choose the provider before installing Kommander.

Next Step

Make the New GCP Cluster Self-Managed
