Pre-provisioned: Modify the Calico Installation

Set the Interface

By default, Calico automatically detects the IP address to use for each node using the first-found method. This is not always appropriate for your nodes; in that case, you must modify Calico’s configuration to use a different method. One alternative is the interface method, where you provide the name of the interface whose address Calico should use. Follow the steps in this section to modify Calico’s configuration.
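
In addition to first-found and interface, the operator supports several other autodetection methods. The following excerpt is a sketch based on the operator.tigera.io/v1 Installation API; verify the field names against your operator version, and use only one method at a time:

CODE
spec:
  calicoNetwork:
    nodeAddressAutodetectionV4:
      # interface: ens192            # interface name (regular expression)
      # cidrs: ["10.0.0.0/16"]       # pick an address within a CIDR
      # canReach: "8.8.8.8"          # use the interface that routes to this IP
      # kubernetes: NodeInternalIP   # use the node's InternalIP from the Kubernetes API
      firstFound: true               # default: first valid address on a valid interface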

On Azure, you do not need to set the interface. Proceed to the Change the Encapsulation Type section below.

In this example, all cluster nodes use ens192 as the interface name.
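
If you are not sure which interface name your nodes use, you can check directly on a node. This is a minimal sketch that assumes SSH access to the node; the ip tool from iproute2 is standard on most Linux distributions:

CODE
# List IPv4 addresses with their interface names (second column)
ip -o -4 addr show
# Example output: 2: ens192    inet 10.0.46.17/24 brd 10.0.46.255 scope global ens192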

  1. Get the pods running on your cluster with this command:

CODE
kubectl get pods -A --kubeconfig ${CLUSTER_NAME}.conf
CODE
NAMESPACE                NAME                                                                READY   STATUS            RESTARTS        AGE
calico-system            calico-kube-controllers-57fbd7bd59-vpn8b                            1/1     Running           0               16m
calico-system            calico-node-5tbvl                                                   1/1     Running           0               16m
calico-system            calico-node-nbdwd                                                   1/1     Running           0               4m40s
calico-system            calico-node-twl6b                                                   0/1     PodInitializing   0               9s
calico-system            calico-node-wktkh                                                   1/1     Running           0               5m35s
calico-system            calico-typha-54f46b998d-52pt2                                       1/1     Running           0               16m
calico-system            calico-typha-54f46b998d-9tzb8                                       1/1     Running           0               4m31s
default                  cuda-vectoradd                                                      0/1     Pending           0               0s
kube-system              coredns-78fcd69978-frwx4                                            1/1     Running           0               16m
kube-system              coredns-78fcd69978-kkf44                                            1/1     Running           0               16m
kube-system              etcd-ip-10-0-121-16.us-west-2.compute.internal                      0/1     Running           0               8s
kube-system              etcd-ip-10-0-46-17.us-west-2.compute.internal                       1/1     Running           1               16m
kube-system              etcd-ip-10-0-88-238.us-west-2.compute.internal                      1/1     Running           1               5m35s
kube-system              kube-apiserver-ip-10-0-121-16.us-west-2.compute.internal            0/1     Running           6               7s
kube-system              kube-apiserver-ip-10-0-46-17.us-west-2.compute.internal             1/1     Running           1               16m
kube-system              kube-apiserver-ip-10-0-88-238.us-west-2.compute.internal            1/1     Running           1               5m34s
kube-system              kube-controller-manager-ip-10-0-121-16.us-west-2.compute.internal   0/1     Running           0               7s
kube-system              kube-controller-manager-ip-10-0-46-17.us-west-2.compute.internal    1/1     Running           1 (5m25s ago)   15m
kube-system              kube-controller-manager-ip-10-0-88-238.us-west-2.compute.internal   1/1     Running           0               5m34s
kube-system              kube-proxy-gclmt                                                    1/1     Running           0               16m
kube-system              kube-proxy-gptd4                                                    1/1     Running           0               9s
kube-system              kube-proxy-mwkgl                                                    1/1     Running           0               4m40s
kube-system              kube-proxy-zcqxd                                                    1/1     Running           0               5m35s
kube-system              kube-scheduler-ip-10-0-121-16.us-west-2.compute.internal            0/1     Running           1               7s
kube-system              kube-scheduler-ip-10-0-46-17.us-west-2.compute.internal             1/1     Running           3 (5m25s ago)   16m
kube-system              kube-scheduler-ip-10-0-88-238.us-west-2.compute.internal            1/1     Running           1               5m34s
kube-system              local-volume-provisioner-2mv7z                                      1/1     Running           0               4m10s
kube-system              local-volume-provisioner-vdcrg                                      1/1     Running           0               4m53s
kube-system              local-volume-provisioner-wsjrt                                      1/1     Running           0               16m
node-feature-discovery   node-feature-discovery-master-84c67dcbb6-m78vr                      1/1     Running           0               16m
node-feature-discovery   node-feature-discovery-worker-vpvpl                                 1/1     Running           0               4m10s
tigera-operator          tigera-operator-d499f5c8f-79dc4                                     1/1     Running           1 (5m24s ago)   16m
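
To check only the calico-node pods instead of scanning the full list, you can filter by label. This sketch assumes the standard k8s-app=calico-node label that operator-managed Calico applies to its node pods:

CODE
kubectl get pods -n calico-system -l k8s-app=calico-node --kubeconfig ${CLUSTER_NAME}.conf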

If a calico-node pod is not ready on your cluster, you must edit the default Installation resource. To do so, run:

CODE
kubectl edit installation default --kubeconfig ${CLUSTER_NAME}.conf

  2. Change the value of spec.calicoNetwork.nodeAddressAutodetectionV4 to interface: ens192:

CODE
spec:
  calicoNetwork:
    ...
    nodeAddressAutodetectionV4:
      interface: ens192

  3. Save the resource. You may need to delete the node-feature-discovery worker pod in the node-feature-discovery namespace if that pod has failed. After you delete it, Kubernetes replaces the pod as part of its normal reconciliation, as shown in the sketch below.
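
The following commands are a sketch of the cleanup and verification steps. The pod name is taken from the example output above; substitute the failed pod’s name from your own cluster:

CODE
# Delete the failed worker pod; its DaemonSet re-creates it automatically
kubectl delete pod -n node-feature-discovery node-feature-discovery-worker-vpvpl --kubeconfig ${CLUSTER_NAME}.conf

# Watch the calico-node pods become Ready after the configuration change
kubectl get pods -n calico-system -w --kubeconfig ${CLUSTER_NAME}.conf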

Change the Encapsulation Type

Calico can leverage different network encapsulation methods to route traffic for your workloads. Encapsulation is useful when running on top of an underlying network that is not aware of workload IPs. Common examples of this include:

  • Public cloud environments where you don’t own the hardware.

  • AWS across VPC subnet boundaries.

  • Environments where you cannot peer Calico over BGP to the underlay or easily configure static routes.

WARNING: Switching encapsulation modes can cause disruption to in-progress connections. You can do this safely when the cluster is first deployed. However, if user workloads are already running on the cluster, plan accordingly for interruption.

Provider Specific Settings

The encapsulation type to use depends on your cloud provider. IP-in-IP is Calico’s default encapsulation method and works on most providers, but not on Azure.

Azure supports only the VXLAN encapsulation type. Therefore, if you install on Azure pre-provisioned VMs, you must set the encapsulation mode to VXLAN.
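
Before making changes, you can inspect the encapsulation currently in effect. This sketch uses the same resources the steps below modify; note that on the IPPool resource itself, encapsulation is expected to appear as the ipipMode and vxlanMode fields rather than a single encapsulation field (an assumption based on the projectcalico.org v3 API):

CODE
# Encapsulation as configured on the Installation resource
kubectl get installation default -o yaml --kubeconfig ${CLUSTER_NAME}.conf

# Encapsulation as applied to the default IP pool (ipipMode/vxlanMode)
kubectl get ippool default-ipv4-ippool -o yaml --kubeconfig ${CLUSTER_NAME}.conf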

D2iQ recommends changing the encapsulation type after cluster creation but before the cluster is in production. To change the encapsulation type, follow these steps:

  1. Remove the existing default-ipv4-ippool IPPool resource from the cluster. The resource must be deleted so that it can be re-created after you edit the Installation resource. Run the following command to delete it:

    CODE
    kubectl delete ippool default-ipv4-ippool --kubeconfig ${CLUSTER_NAME}.conf

  2. Edit the Installation resource with the following command:

    CODE
    kubectl edit installation default --kubeconfig ${CLUSTER_NAME}.conf

  3. Change the value of encapsulation to VXLAN, as shown below, and save the resource:

    CODE
    spec:
      calicoNetwork:
        ipPools:
          - encapsulation: VXLAN
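
After you save the resource, the operator re-creates the deleted IP pool. As a quick verification sketch, confirm the new pool uses VXLAN; in the IPPool spec this is expected to show up as vxlanMode: Always (per the projectcalico.org v3 API):

CODE
kubectl get ippool default-ipv4-ippool -o yaml --kubeconfig ${CLUSTER_NAME}.conf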

VXLAN

VXLAN is a tunneling protocol that encapsulates Layer 2 Ethernet frames in UDP packets, enabling you to create virtualized Layer 2 subnets that span Layer 3 networks. It has a slightly larger header than IP-in-IP, which causes a slight reduction in performance relative to IP-in-IP.
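
If you want encapsulation only when traffic crosses subnet boundaries, the Installation API also accepts a cross-subnet variant; same-subnet traffic is then routed without encapsulation. A sketch, assuming the VXLANCrossSubnet value from the operator.tigera.io/v1 API:

CODE
spec:
  calicoNetwork:
    ipPools:
      # Encapsulate only across subnet boundaries;
      # same-subnet traffic is routed natively
      - encapsulation: VXLANCrossSubnet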

IPIP

IP-in-IP is an IP tunneling protocol that encapsulates one IP packet in another. An outer packet header is added that carries the tunnel entry point and the tunnel exit point. The Calico implementation of this protocol uses BGP to determine the exit point, making the protocol unusable on networks that do not pass BGP.
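
IP-in-IP likewise has a cross-subnet variant, under the same assumption about operator API values:

CODE
spec:
  calicoNetwork:
    ipPools:
      # IP-in-IP only across subnet boundaries; requires a network
      # that passes BGP between nodes
      - encapsulation: IPIPCrossSubnet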

For more information about limitations when using Calico with Windows, see the Calico documentation: Calico for Windows VXLAN.
