Prerequisites
Name your Cluster
Give your cluster a unique name suitable for your environment.
Set the environment variable:
CODE
export CLUSTER_NAME=<azure-example>
To create a cluster name that is unique, use the following command. This creates a unique name every time you run it, so use it with forethought:
CODE
export CLUSTER_NAME=azure-example-$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | fold -w 5 | head -n1)
echo $CLUSTER_NAME
Important Notes for Azure Environments
To use a custom Azure Image when creating your cluster, you must first create that image using Konvoy Image Builder (KIB) and then apply it with the --compute-gallery-id flag.
CODE
...
--compute-gallery-id "<Managed Image Shared Image Gallery Id>"
Important to remember: the --compute-gallery-id value uses the format /subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Compute/galleries/<gallery name>/images/<image definition name>/versions/<version id>.
Ensure your subnets do not overlap with your host subnet; the subnets cannot be changed after cluster creation, so any change to the Kubernetes subnets must be made at cluster creation time. The default subnets used in DKP are:
CODE
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
Availability zones (AZs) are isolated locations within data center regions from which public cloud services originate and operate. Because all the nodes in a node pool are deployed in a single Availability Zone, you may wish to create additional node pools to ensure your cluster has nodes deployed in multiple Availability Zones.
By default, the control-plane Nodes will be created in 3 different zones. However, the default worker Nodes will reside in a single Availability Zone. You may create additional node pools in other Availability Zones with the dkp create nodepool command. See Microsoft’s documentation for more information on Availability Options for Azure VMs.
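For example, a sketch of creating an additional node pool in a different Availability Zone (the node pool name and zone number are illustrative, and the exact flags may differ by DKP version; see the dkp create nodepool reference):
CODE
dkp create nodepool azure example-nodepool-az2 \
  --cluster-name=${CLUSTER_NAME} \
  --availability-zone 2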
Default Storage Provisioning: The cluster creation directions below describe how to create a cluster using Azure as the infrastructure provider, which uses the Azure Disk Container Storage Interface (CSI) as the default StorageClass.
If you are using Azure as a pre-provisioned environment: DKP uses localvolumeprovisioner as the default storage provider when creating a pre-provisioned Azure cluster. However, localvolumeprovisioner is not suitable for production use. You should use a Kubernetes CSI compatible storage solution that is suitable for production. You can choose from any of the storage options available for Kubernetes. To disable the default that Konvoy deploys, set the localvolumeprovisioner StorageClass as non-default, and then set your newly created StorageClass as the default by following the commands in the Kubernetes documentation called Changing the Default Storage Class.
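For example, a minimal sketch of the two changes described above using kubectl (the StorageClass names are illustrative; check kubectl get storageclass for the names in your cluster):
CODE
# Mark the Konvoy-deployed StorageClass as non-default
kubectl patch storageclass localvolumeprovisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# Mark your production-grade StorageClass as the default
kubectl patch storageclass <your-storage-class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'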
Create a New Azure Kubernetes Cluster
Generate the Kubernetes cluster objects. See dkp create cluster azure reference for the full list of cluster creation options.
(Optional) Use a local registry mirror. Configure your cluster to use an existing local registry as a mirror when attempting to pull images previously pushed to your registry.
Export Registry Variables
If you have a local registry, you must provide additional arguments when creating the cluster. These tell the cluster where to locate the local registry to use by defining the URL. Set the needed environment variable(s) with your registry information:
CODE
export REGISTRY_URL="<https/http>://<registry-address>:<registry-port>"
export REGISTRY_CA=<path to the CA on the bastion>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
REGISTRY_URL: the address of an existing container registry accessible in the VPC that the new cluster nodes will be configured to use as a mirror registry when pulling images.
REGISTRY_CA: (optional) the path on the bastion machine to the container registry CA. Konvoy will configure the cluster nodes to trust this CA. This value is only needed if the registry is using a self-signed certificate and the AMIs are not already configured to trust this CA.
REGISTRY_USERNAME: (optional) set to a user that has pull access to this registry.
REGISTRY_PASSWORD: (optional) only needed when a username is set.
When creating the cluster, apply the variables you defined above during the dkp create cluster command with the flags needed for your environment:
CODE
--registry-mirror-url=${REGISTRY_URL} \
--registry-mirror-cacert=${REGISTRY_CA} \
--registry-mirror-username=${REGISTRY_USERNAME} \
--registry-mirror-password=${REGISTRY_PASSWORD}
See also: Azure Registry Mirrors
Create a cluster:
CODE
dkp create cluster azure \
--cluster-name=${CLUSTER_NAME} \
--dry-run \
--output=yaml \
> ${CLUSTER_NAME}.yaml
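If you configured a local registry mirror above, append those flags to the same command; for example:
CODE
dkp create cluster azure \
  --cluster-name=${CLUSTER_NAME} \
  --registry-mirror-url=${REGISTRY_URL} \
  --registry-mirror-cacert=${REGISTRY_CA} \
  --registry-mirror-username=${REGISTRY_USERNAME} \
  --registry-mirror-password=${REGISTRY_PASSWORD} \
  --dry-run \
  --output=yaml \
  > ${CLUSTER_NAME}.yaml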
Expand the drop-downs below for HTTP, FIPS, and other flags to use during the cluster creation step above. For more information regarding these flags or others, refer to the dkp create cluster section of the CLI documentation and select your provider.
FIPS Requirements
To create a cluster in FIPS mode, inform the controllers of the appropriate image repository and version tags of the official D2iQ FIPS builds of Kubernetes by adding these flags to the dkp create cluster command:
CODE
--kubernetes-version=v1.28.7+fips.0 \
--etcd-version=3.5.10+fips.0 \
--kubernetes-image-repository=docker.io/mesosphere \
HTTP ONLY
If your environment uses HTTP/HTTPS proxies, you must include the flags --http-proxy, --https-proxy, and --no-proxy and their related values in this command for it to be successful. More information is available in Configuring an HTTP/HTTPS Proxy.
CODE
--http-proxy <<http proxy list>>
--https-proxy <<https proxy list>>
--no-proxy <<no proxy list>>
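For example, with illustrative values only (substitute the proxy addresses and no-proxy list for your environment):
CODE
--http-proxy http://proxy.example.com:3128 \
--https-proxy http://proxy.example.com:3128 \
--no-proxy 127.0.0.1,localhost,169.254.169.254,.svc,.cluster.local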
Custom DNS
To use a custom DNS on Azure, you need a DNS name under your control. Once the resource group has been created, you can create your hosted zone with the command below:
CODE
az network dns zone create --resource-group "d2iq-professional-services" --name <your-domain-name>
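To confirm the hosted zone was created and to view the name servers to delegate your domain to, a sketch (the zone name is a placeholder):
CODE
az network dns zone show --resource-group "d2iq-professional-services" --name <your-domain-name> --query nameServers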
You no longer need to create a cluster issuer. There are several documents that explain custom DNS in the Kommander component.
Using Custom Image
When creating your cluster, you will add this flag during the create process for your custom image: --compute-gallery-id "<Managed Image Shared Image Gallery Id>"
. See the Prerequisites section Azure Using Konvoy Image Builder for specific consumption of image commands.
The SKU and Image Name will default to the values found in the image YAML. Ensure you have named the correct YAML file for your OS in the konvoy-image build command.
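For example, the cluster creation command from the step above with the custom image flag added, using the gallery ID format shown in the Prerequisites:
CODE
dkp create cluster azure \
  --cluster-name=${CLUSTER_NAME} \
  --compute-gallery-id "/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Compute/galleries/<gallery name>/images/<image definition name>/versions/<version id>" \
  --dry-run \
  --output=yaml \
  > ${CLUSTER_NAME}.yaml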
Marketplace Image - Rocky Linux
To allow DKP to create a cluster with Marketplace-based images such as Rocky Linux, the following flags are available: --plan-offer, --plan-publisher, and --plan-sku. If these fields were specified in the override file during image creation, the flags must also be used in cluster creation.
If you see an error similar to "Creating a virtual machine from Marketplace image or a custom image sourced from a Marketplace image requires Plan information in the request." when creating a cluster, you must set these flags. For example, when creating a cluster with Rocky Linux VMs, add the flags to your dkp create cluster azure command:
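For example (the plan values are placeholders; use the offer, publisher, and SKU that correspond to the Marketplace image you built from):
CODE
dkp create cluster azure \
  --cluster-name=${CLUSTER_NAME} \
  --plan-offer=<plan offer> \
  --plan-publisher=<plan publisher> \
  --plan-sku=<plan sku> \
  --dry-run \
  --output=yaml \
  > ${CLUSTER_NAME}.yaml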
Individual manifests using the Output Directory flag:
You can create individual files with separate, smaller manifests for easier editing using the --output-directory flag. This creates multiple files in the specified directory, which must already exist:
CODE
--output-directory=<existing-directory>
Refer to the Cluster Creation Customization Choices section for more information on how to use optional flags such as the --output-directory flag.
For more information regarding this flag or others, refer to the dkp create cluster section of the CLI documentation and select your provider.
Output:
CODE
Generating cluster resources
Inspect or edit the cluster objects. Familiarize yourself with Cluster API before editing them, as edits can prevent the cluster from deploying successfully. See Azure Customizing CAPI Clusters.
For in-depth documentation about the objects, read Concepts in the Cluster API Book.
(Optional) Modify Control Plane Audit logs - Users can modify the KubeadmControlPlane cluster-api object to configure different kubelet options. See Configuring the Control Plane if you wish to configure your control plane beyond the existing options that are available from flags.
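As an illustrative sketch only (not a complete object, and not the only supported approach), audit-log settings typically appear as API server extra arguments in the generated KubeadmControlPlane object in ${CLUSTER_NAME}.yaml; see Configuring the Control Plane for the supported options:
CODE
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          # Illustrative audit-log settings; values are examples only
          audit-log-maxage: "30"
          audit-log-maxbackup: "10"
          audit-log-maxsize: "100"
          audit-log-path: /var/log/audit/kube-apiserver-audit.log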
Create the cluster from the objects generated during the dry run. A warning will appear in the console if the resource already exists, requiring you to remove the resource or update your YAML.
CODE
kubectl create -f ${CLUSTER_NAME}.yaml
NOTE: If you used the --output-directory flag in your dkp create .. --dry-run step above, create the cluster from the objects you created by specifying the directory:
CODE
kubectl create -f <existing-directory>/
Output will be similar to the following:
CODE
cluster.cluster.x-k8s.io/azure-example created
azurecluster.infrastructure.cluster.x-k8s.io/azure-example created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/azure-example-control-plane created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/azure-example-control-plane created
secret/azure-example-etcd-encryption-config created
machinedeployment.cluster.x-k8s.io/azure-example-md-0 created
azuremachinetemplate.infrastructure.cluster.x-k8s.io/azure-example-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/azure-example-md-0 created
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-azure-example created
configmap/calico-cni-installation-azure-example created
configmap/tigera-operator-azure-example created
clusterresourceset.addons.cluster.x-k8s.io/azure-disk-csi-azure-example created
configmap/azure-disk-csi-azure-example created
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-azure-example created
configmap/cluster-autoscaler-azure-example created
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-azure-example created
configmap/node-feature-discovery-azure-example created
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-azure-example created
configmap/nvidia-feature-discovery-azure-example created
Wait for the cluster control-plane to be ready:
CODE
kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
Output:
CODE
cluster.cluster.x-k8s.io/azure-example condition met
After the objects are created on the API server, the Cluster API controllers reconcile them. They create infrastructure and machines. As they progress, they update the Status of each object. Konvoy provides a command to describe the current status of the cluster:
CODE
dkp describe cluster -c ${CLUSTER_NAME}
Output:
CODE
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/azure-example True 3m4s
├─ClusterInfrastructure - AzureCluster/azure-example True 8m26s
├─ControlPlane - KubeadmControlPlane/azure-example-control-plane True 3m4s
│ ├─Machine/azure-example-control-plane-l8j9r True 3m9s
│ ├─Machine/azure-example-control-plane-slprd True 7m17s
│ └─Machine/azure-example-control-plane-xhxxg True 5m9s
└─Workers
└─MachineDeployment/azure-example-md-0 True 4m31s
├─Machine/azure-example-md-0-d67567c8b-2674r True 5m19s
├─Machine/azure-example-md-0-d67567c8b-mbmhk True 5m17s
├─Machine/azure-example-md-0-d67567c8b-pzg8k True 5m17s
└─Machine/azure-example-md-0-d67567c8b-z8km9 True 5m17s
As they progress, the controllers also create Events. List the Events using this command:
CODE
kubectl get events | grep ${CLUSTER_NAME}
For brevity, the example uses grep. It is also possible to use separate commands to get Events for specific objects, for example:
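CODE
kubectl get events --field-selector involvedObject.kind="AzureCluster"
kubectl get events --field-selector involvedObject.kind="AzureMachine"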
Output of the grep command:
CODE
15m Normal AzureClusterObjectNotFound azurecluster AzureCluster object default/azure-example not found
15m Normal AzureManagedControlPlaneObjectNotFound azuremanagedcontrolplane AzureManagedControlPlane object default/azure-example not found
15m Normal AzureClusterObjectNotFound azurecluster AzureCluster.infrastructure.cluster.x-k8s.io "azure-example" not found
8m22s Normal SuccessfulSetNodeRef machine/azure-example-control-plane-bmc9b azure-example-control-plane-fdvnm
10m Normal Machine controller dependency not yet met azuremachine/azure-example-control-plane-fdvnm Machine Controller has not yet set OwnerRef
12m Normal SuccessfulSetNodeRef machine/azure-example-control-plane-msftd azure-example-control-plane-z9q45
10m Normal SuccessfulSetNodeRef machine/azure-example-control-plane-nrvff azure-example-control-plane-vmqwx
12m Normal Machine controller dependency not yet met azuremachine/azure-example-control-plane-vmqwx Machine Controller has not yet set OwnerRef
14m Normal Machine controller dependency not yet met azuremachine/azure-example-control-plane-z9q45 Machine Controller has not yet set OwnerRef
14m Warning VMIdentityNone azuremachinetemplate/azure-example-control-plane You are using Service Principal authentication for Cloud Provider Azure which is less secure than Managed Identity. Your Service Principal credentials will be written to a file on the disk of each VM in order to be accessible by Cloud Provider. To learn more, see https://capz.sigs.k8s.io/topics/identities-use-cases.html#azure-host-identity
12m Warning ControlPlaneUnhealthy kubeadmcontrolplane/azure-example-control-plane Waiting for control plane to pass preflight checks to continue reconciliation: [machine azure-example-control-plane-msftd does not have APIServerPodHealthy condition, machine azure-example-control-plane-msftd does not have ControllerManagerPodHealthy condition, machine azure-example-control-plane-msftd does not have SchedulerPodHealthy condition, machine azure-example-control-plane-msftd does not have EtcdPodHealthy condition, machine azure-example-control-plane-msftd does not have EtcdMemberHealthy condition]
11m Warning ControlPlaneUnhealthy kubeadmcontrolplane/azure-example-control-plane Waiting for control plane to pass preflight checks to continue reconciliation: [machine azure-example-control-plane-nrvff does not have APIServerPodHealthy condition, machine azure-example-control-plane-nrvff does not have ControllerManagerPodHealthy condition, machine azure-example-control-plane-nrvff does not have SchedulerPodHealthy condition, machine azure-example-control-plane-nrvff does not have EtcdPodHealthy condition, machine azure-example-control-plane-nrvff does not have EtcdMemberHealthy condition]
9m52s Normal SuccessfulSetNodeRef machine/azure-example-md-0-84bd8b5f5b-b8cnq azure-example-md-0-bsc82
9m53s Normal SuccessfulSetNodeRef machine/azure-example-md-0-84bd8b5f5b-j8ldg azure-example-md-0-mjcbn
9m52s Normal SuccessfulSetNodeRef machine/azure-example-md-0-84bd8b5f5b-lx89f azure-example-md-0-pmq8f
10m Normal SuccessfulSetNodeRef machine/azure-example-md-0-84bd8b5f5b-pcv7q azure-example-md-0-vzprf
15m Normal SuccessfulCreate machineset/azure-example-md-0-84bd8b5f5b Created machine "azure-example-md-0-84bd8b5f5b-j8ldg"
15m Normal SuccessfulCreate machineset/azure-example-md-0-84bd8b5f5b Created machine "azure-example-md-0-84bd8b5f5b-lx89f"
15m Normal SuccessfulCreate machineset/azure-example-md-0-84bd8b5f5b Created machine "azure-example-md-0-84bd8b5f5b-pcv7q"
15m Normal SuccessfulCreate machineset/azure-example-md-0-84bd8b5f5b Created machine "azure-example-md-0-84bd8b5f5b-b8cnq"
15m Normal Machine controller dependency not yet met azuremachine/azure-example-md-0-bsc82 Machine Controller has not yet set OwnerRef
15m Normal Machine controller dependency not yet met azuremachine/azure-example-md-0-mjcbn Machine Controller has not yet set OwnerRef
15m Normal Machine controller dependency not yet met azuremachine/azure-example-md-0-pmq8f Machine Controller has not yet set OwnerRef
If you plan to change the Calico encapsulation, D2iQ recommends changing it after cluster creation but before production use.
DKP uses Azure CSI as the default storage provider. You can use a Kubernetes CSI compatible storage solution that is suitable for production. See the Kubernetes documentation called Changing the Default Storage Class for more information.
If you are not using the default, you cannot deploy an alternate storage provider until after dkp create cluster has finished. However, you must decide which one to use before installing Kommander.
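To confirm which StorageClass is currently marked as the default, for example:
CODE
kubectl get storageclass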
Known Limitations
Be aware of these limitations in the current release of Konvoy.
The Konvoy version used to create a bootstrap cluster must match the Konvoy version used to create a workload cluster.
Konvoy supports deploying one workload cluster.
Konvoy generates a set of objects for one Node Pool.
Konvoy does not validate edits to cluster objects.
Next Step
Azure Make new Cluster Self-Managed