
Create a Managed Cluster on VCD Using the DKP UI

After configuring VMware Cloud Director (VCD), you can use the DKP user interface to provision a VCD cluster quickly and easily.

Prerequisites

Ensure that you have fulfilled the VMware Cloud Director configuration prerequisites described in Cloud Director for Service Providers before you begin these procedures.

You must also create a VCD infrastructure provider before you can create additional VCD clusters.

Provision a VCD Cluster

Provisioning a production-ready cluster in VCD requires you to specify a fairly large number of parameters. The form is broken into the sections below to make it easier to complete.

Complete these procedures to provision a VCD cluster:

  • Provide basic cluster information

  • Specify an SSH user and public key

  • Define the cluster’s VCD resources

  • Configure node pools for the cluster

  • Specify pod and service CIDR values, if needed

  • Define a registry mirror, if needed

Provide Basic Cluster Information

In this section of the provisioning form, you give the cluster a name and provide some basic information:

  • Cluster Name: A valid Kubernetes name for the cluster.

  • Add Labels: Add any required Labels the cluster needs for your environment by selecting the + Add Label link. Changing a cluster Label may add or remove the cluster from Projects.

  • Infrastructure Provider: This field's value corresponds to the VCD infrastructure provider you created while fulfilling the prerequisites.

  • Kubernetes Version: Select a supported version of Kubernetes for this version of DKP.
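
A valid Kubernetes cluster name follows the RFC 1123 DNS label shape: lowercase alphanumerics and hyphens, starting and ending with an alphanumeric, at most 63 characters. The following sketch (not part of the DKP UI, just an illustration of the rule) checks a name before you type it into the form:

```python
import re

# RFC 1123 DNS label: lowercase letters, digits, and '-', which must
# start and end with an alphanumeric character.
NAME_RE = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_valid_cluster_name(name: str) -> bool:
    """Return True if name is a valid Kubernetes object name."""
    return len(name) <= 63 and bool(NAME_RE.match(name))
```

For example, vcd-prod-01 is accepted, while Prod_Cluster (uppercase, underscore) is not.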

Specify an SSH User

In this section, you specify the user name and public key for an SSH user who can access the VCD cluster after its creation. Key-based SSH avoids password authentication for administrative and debugging sessions, which improves security. DKP adds the public key to the authorized_keys file on each node, allowing the holder of the corresponding private key to access the nodes using SSH.

  1. Type an SSH Username to which the SSH Public Key belongs.
    If you leave this field blank, DKP assigns the user name konvoy to the public key you paste into the next field.

  2. Copy and paste an entire, valid SSH Public Key to go with the SSH Username in the previous field.
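
A common mistake is pasting a truncated or private key into this field. An OpenSSH public key is a single line of the form "key-type base64-payload [comment]", and the payload embeds a copy of the key type. This sketch (an illustration only, not DKP code) performs that rough sanity check before you paste:

```python
import base64
import struct

def looks_like_ssh_pubkey(line: str) -> bool:
    """Rough sanity check for a single-line OpenSSH public key.

    Verifies a known key-type prefix and that the base64 payload
    decodes and embeds the same type string. This is not a
    cryptographic validation of the key itself.
    """
    parts = line.strip().split()
    if len(parts) < 2 or parts[0] not in (
        "ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"
    ):
        return False
    try:
        blob = base64.b64decode(parts[1], validate=True)
    except Exception:
        return False
    if len(blob) < 4:
        return False
    # The payload begins with a length-prefixed copy of the key type.
    (tlen,) = struct.unpack(">I", blob[:4])
    return blob[4:4 + tlen].decode("ascii", "replace") == parts[0]
```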

Define the Cluster’s VCD Resources

This section of the form identifies already existing resources in your VMware Cloud Director configuration. Refer to your VCD configuration to find the necessary values.

  1. Provide the following values for the Resources that are specific to VCD:

    • Datacenter: Select the name of an existing Organization Virtual Datacenter (VDC) where you want to deploy the cluster.

    • Network: Select the Organization's virtual datacenter Network that the new cluster uses.

    • Organization: The Organization name under which you want to deploy the cluster.

    • Catalog: The name of the VCD Catalog that hosts the virtual machine templates used for cluster creation.

    • vApp Template: The vApp template used to create the virtual machines that comprise the cluster. A DKP cluster becomes a vApp in VCD.

    • Storage Profile: The name of the VCD storage profile you want to use for the virtual machines. The default value in DKP is "*", which selects the policy defined as the default storage policy in VCD.

Configure Node Pools

You need to configure node pool information for both your control plane nodes and your worker nodes. The form splits these information sets into two groups.

  1. Provide the control plane node pool name and resource sizing information:

    • Node Pool Name: DKP sets this field’s value to control-plane; you cannot change it.

    • Number of Nodes: Enter the number of control plane nodes to create for your new cluster.
      Valid values for production clusters are 3 or 5. You can enter 1 for a test cluster, but a single control plane node is not a valid production configuration. Enter an odd number so that internal leader election can maintain quorum and provide proper failover for high availability. The default value is 3 control plane nodes.

    • Placement Policy: The VM placement policy to apply to the control plane nodes.
      A VM placement policy defines the placement of a virtual machine on a host or group of hosts. It is a mechanism for cloud provider administrators to create a named group of hosts within a provider VDC. The named group of hosts is a subset of hosts within the provider VDC clusters that might be selected based on any criteria such as performance tiers or licensing. You can expand the scope of a VM placement policy to more than one provider VDC.

    • Sizing Policy: The VM sizing policy to apply to the control plane nodes.
      A VM sizing policy defines the compute resource allocation for virtual machines within an organization VDC. The compute resource allocation includes CPU and memory allocation, reservations, limits, and shares.

  2. Provide the worker node pool name and resource sizing information:

    • Node Pool Name: Enter a node pool name for the worker nodes. DKP sets this field’s default value to worker-0.

    • Replicas: Enter the number of worker nodes to create for your new cluster. The default value is 4 worker nodes.

    • Placement Policy: The VM placement policy to apply to the worker nodes.
      A VM placement policy defines the placement of a virtual machine on a host or group of hosts. It is a mechanism for cloud provider administrators to create a named group of hosts within a provider VDC. The named group of hosts is a subset of hosts within the provider VDC clusters that might be selected based on any criteria such as performance tiers or licensing. You can expand the scope of a VM placement policy to more than one provider VDC.

    • Sizing Policy: The VM sizing policy to apply to the worker nodes.
      A VM sizing policy defines the compute resource allocation for virtual machines within an organization VDC. The compute resource allocation includes CPU and memory allocation, reservations, limits, and shares.
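
The reason an odd control plane count is required is quorum arithmetic: a cluster of n members needs a majority (n // 2 + 1) to make progress, so an even member count adds cost without adding failure tolerance. A small sketch of the math (an illustration, not DKP code):

```python
def quorum(n: int) -> int:
    """Votes needed for a cluster of n members to make progress."""
    return n // 2 + 1

def failure_tolerance(n: int) -> int:
    """Members that can fail while the cluster keeps quorum."""
    return n - quorum(n)

# A fourth member does not improve tolerance over three:
for n in (1, 3, 4, 5):
    print(f"{n} members -> tolerates {failure_tolerance(n)} failure(s)")
```

Three nodes tolerate one failure, four nodes still tolerate only one, and five tolerate two; hence the valid production values of 3 or 5.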

Advanced Configuration

Specify CIDR Values (group)

In this section, you specify the CIDR blocks for Pods and for Services in your VCD cluster. These two ranges must not overlap each other or your existing network. Incorrect configuration can cause network conflicts and disrupt cluster operations.

  1. Specify the Pod Network CIDR to use in VCD clusters.
    The default value is 192.168.0.0/16.

  2. Specify the Service CIDR to use in VCD clusters.
    The default value is 10.96.0.0/12.
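
You can check the ranges for overlap before submitting the form. This sketch uses Python's standard ipaddress module with the DKP defaults; the node network value is a hypothetical example, so substitute your actual VCD network range:

```python
import ipaddress
import itertools

pod_cidr = ipaddress.ip_network("192.168.0.0/16")    # DKP default
service_cidr = ipaddress.ip_network("10.96.0.0/12")  # DKP default
node_network = ipaddress.ip_network("10.0.0.0/16")   # example only: your VCD network

ranges = {"pods": pod_cidr, "services": service_cidr, "nodes": node_network}
for (a, net_a), (b, net_b) in itertools.combinations(ranges.items(), 2):
    if net_a.overlaps(net_b):
        print(f"conflict: {a} {net_a} overlaps {b} {net_b}")
```

With the values above, nothing is printed because none of the ranges overlap; any printed line indicates a configuration you must change.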

Define a Registry Mirror (group)

A registry mirror is a local caching proxy for a container image repository. When clients request an image, the mirror first tries to serve the image from its cache. If the image is not available in the cache, the mirror fetches it from the primary repository, caches it, and then serves it to the client.

  1. Enter the URL of a container registry to use as a mirror in the cluster.

  2. Type the Username for the account to use to authenticate to the registry mirror.

  3. Type the Password for the account to authenticate to the registry mirror.

  4. Copy and paste a CA Certificate chain to use while communicating with the registry mirror using TLS.
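
When pasting a CA certificate chain, it is easy to truncate a block or paste a DER file by mistake. This sketch (an illustration, not DKP code) counts the PEM certificate blocks in the pasted text as a quick shape check; it does not validate the certificates cryptographically:

```python
import re

PEM_BLOCK = re.compile(
    r"-----BEGIN CERTIFICATE-----.+?-----END CERTIFICATE-----",
    re.DOTALL,
)

def count_pem_certs(chain: str) -> int:
    """Count complete PEM certificate blocks in a pasted chain."""
    return len(PEM_BLOCK.findall(chain))
```

A chain for a mirror behind an intermediate CA should usually yield a count of two or more; a count of zero means the pasted text is not PEM at all.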

Control Plane Endpoint

In this section, you specify the control plane endpoint host and port for the cluster’s API server. The host can be either an IP address or a DNS name that resolves to the cluster’s control plane IP address.

  1. Type the Host name as either the control plane endpoint IP or a hostname.

  2. Enter a Port value for the control plane endpoint port. The default value is 6443. To use an external load balancer, set this value to the load balancer’s listening port number.
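
The host and port combine into the API server URL that clients such as kubectl use. This sketch (hypothetical helper, not part of DKP) shows how the two values fit together and validates the port range:

```python
def control_plane_url(host: str, port: int = 6443) -> str:
    """Form the API server URL for a control plane endpoint.

    host may be an IP address or a resolvable DNS name; port defaults
    to the Kubernetes API server's standard 6443.
    """
    if not (0 < port < 65536):
        raise ValueError(f"invalid port: {port}")
    return f"https://{host}:{port}"
```

For example, an endpoint host of 10.0.0.10 with the default port yields https://10.0.0.10:6443, which is the server value that appears in the cluster's kubeconfig.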
