
CLI: Create and Configure the Tunnel

Connect a Remote Cluster

Create a tunnel connector

Create a tunnel connector on the Management cluster for the remote cluster.

  1. Establish a variable for the connector by replacing the <connector_name> placeholder with the connector's name:

    CODE
    connector=<connector_name>
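
    This step assumes the ${namespace} and ${gateway} variables are still set from creating the tunnel gateway in the previous section. If you are working in a fresh shell, re-establish them first; a minimal sketch (both values are placeholders):

    CODE
    namespace=<workspace_namespace>   # namespace of the workspace the cluster will join
    gateway=<gateway_name>            # name of the tunnel gateway created earlier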
  2. Create the TunnelConnector object:

    CODE
    cat > connector.yaml <<EOF
    apiVersion: kubetunnel.d2iq.io/v1alpha1
    kind: TunnelConnector
    metadata:
      namespace: ${namespace}
      name: ${connector}
    spec:
      gatewayRef:
        name: ${gateway}
    EOF
    
    kubectl apply -f connector.yaml

    After you create the TunnelConnector object, DKP generates a manifest.yaml that contains the configuration for the components the tunnel requires on a specific remote cluster.

  3. Verify the connector exists:

    CODE
    kubectl get tunnelconnector -n ${namespace} ${connector}
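
    To check the connector's current state directly (the same field the loop in the next step polls), query its status:

    CODE
    kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.state}"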
  4. Wait for the tunnel connector to reach Listening state, then wait for the agent manifest to become available:

    CODE
    while [ "$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.state}")" != "Listening" ]
    do
      sleep 5
    done
    
    manifest=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.tunnelAgent.manifestsRef.name}")
    while [ -z "${manifest}" ]
    do
      sleep 5
      manifest=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.tunnelAgent.manifestsRef.name}")
    done
    
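    The loops above poll indefinitely. A bounded variant is sketched below; the five-minute budget is illustrative:

    CODE
    # Poll for up to 5 minutes (60 attempts x 5 seconds), then give up.
    state=""
    for attempt in $(seq 1 60); do
      state=$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.state}")
      [ "${state}" = "Listening" ] && break
      sleep 5
    done
    [ "${state}" = "Listening" ] || { echo "Timed out waiting for tunnel connector" >&2; exit 1; }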

  5. Fetch the manifest.yaml to use it in the following section:

    CODE
    kubectl get secret -n ${namespace} ${manifest} -o jsonpath='{.data.manifests\.yaml}' | base64 -d > manifest.yaml
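
    To confirm the exported file parses cleanly before you apply it, a client-side dry run is one option:

    CODE
    kubectl apply --dry-run=client -f manifest.yaml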

When attaching several clusters, ensure that you fetch the manifest.yaml of the cluster you are attempting to attach. Using the wrong combination of manifest.yaml and cluster will cause the attachment to fail.
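
For example, when attaching several clusters from the same shell, one way to avoid mix-ups is to include the connector name in each exported file (the naming scheme is only a suggestion):

CODE
kubectl get secret -n ${namespace} ${manifest} -o jsonpath='{.data.manifests\.yaml}' | base64 -d > manifest-${connector}.yaml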

Set up the managed cluster

In the following commands, the --kubeconfig flag ensures that you set the context to the Attached or Managed cluster. For alternatives and recommendations around setting your context, refer to Provide Context for Commands with a kubeconfig File.

  1. Apply the manifest.yaml file to the Attached or Managed cluster and deploy the tunnel agent:

    CODE
    kubectl apply --kubeconfig=<managed_cluster_kubeconfig.conf> -f manifest.yaml
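
    Alternatively, export the kubeconfig for the whole shell session instead of passing the flag on every command; this is standard kubectl behavior:

    CODE
    export KUBECONFIG=<managed_cluster_kubeconfig.conf>
    kubectl apply -f manifest.yaml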
  2. Check the status of the created pods:

    CODE
    kubectl get pods --kubeconfig=<managed_cluster_kubeconfig.conf> -n kubetunnel

    After a short time, expect to see a post-kubeconfig pod that reaches Completed state and a tunnel-agent pod that stays in Running state.

    CODE
    NAME                           READY   STATUS      RESTARTS   AGE
    post-kubeconfig-j2ghk          0/1     Completed   0          14m
    tunnel-agent-f8d9f4cb4-thx8h   0/1     Running     0          14m
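
    To watch the pods converge without re-running the command, append kubectl's standard watch flag:

    CODE
    kubectl get pods --kubeconfig=<managed_cluster_kubeconfig.conf> -n kubetunnel -w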

Add the Attached or Managed cluster into Kommander

When you create a cluster using the DKP CLI, it does not attach automatically.

  1. On the Management cluster, wait for the tunnel to be connected by the tunnel agent:

    CODE
    while [ "$(kubectl get tunnelconnector -n ${namespace} ${connector} -o jsonpath="{.status.state}")" != "Connected" ]
    do
      sleep 5
    done
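
    Alternatively, watch the connector interactively and interrupt once it connects; the columns shown depend on the printer columns the CRD defines:

    CODE
    kubectl get tunnelconnector -n ${namespace} ${connector} -w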
  2. Establish variables for the managed cluster. Replace the <private_cluster> placeholder with the name of the managed cluster:

    CODE
    managed=<private_cluster>
    display_name=${managed}
  3. Create the KommanderCluster object:

    CODE
    cat > kommander.yaml <<EOF
    apiVersion: kommander.mesosphere.io/v1beta1
    kind: KommanderCluster
    metadata:
      namespace: ${namespace}
      name: ${managed}
      annotations:
        kommander.mesosphere.io/display-name: ${display_name}
    spec:
      clusterTunnelConnectorRef:
        name: ${connector}
    EOF
    
    kubectl apply -f kommander.yaml
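
    To confirm the object was created and inspect its phase (the same field the next step polls):

    CODE
    kubectl get kommandercluster -n ${namespace} ${managed} -o jsonpath='{.status.phase}'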
  4. Wait for the Attached or Managed cluster to join:

    CODE
    while [ "$(kubectl get kommandercluster -n ${namespace} ${managed} -o jsonpath='{.status.phase}')" != "Joined" ]
    do
      sleep 5
    done
    
    kubefed=$(kubectl get kommandercluster -n ${namespace} ${managed} -o jsonpath="{.status.kubefedclusterRef.name}")
    while [ -z "${kubefed}" ]
    do
      sleep 5
      kubefed=$(kubectl get kommandercluster -n ${namespace} ${managed} -o jsonpath="{.status.kubefedclusterRef.name}")
    done
    
    kubectl wait --for=condition=ready --timeout=60s kubefedcluster -n kube-federation-system ${kubefed}
    
    kubectl get kubefedcluster -n kube-federation-system ${kubefed}

    After the command completes, your cluster becomes visible in the DKP UI and you can start using it. Its metrics are accessible through dashboards such as Grafana and Karma.

Create a network policy for the tunnel server

This step is optional, but improves security by restricting which remote hosts can connect to the tunnel.

  1. Apply a network policy that restricts tunnel access to specific namespaces and IP blocks.
    The following example permits connections from:
    - Pods running in the kommander and kube-federation-system namespaces.
    - Remote clusters with IP addresses in the ranges 192.0.2.0 to 192.0.2.255 and 203.0.113.0 to 203.0.113.255.
    - Pods running in namespaces that carry the label kubetunnel.d2iq.io/networkpolicy with a value matching the connector name and namespace, ${connector}-${namespace}.

    CODE
    cat > net.yaml <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      namespace: ${namespace}
      name: ${connector}-deny
      labels:
        kubetunnel.d2iq.io/tunnel-connector: ${connector}
        kubetunnel.d2iq.io/networkpolicy-type: "tunnel-server"
    spec:
      podSelector:
        matchLabels:
          kubetunnel.d2iq.io/tunnel-connector: ${connector}
      policyTypes:
      - Ingress
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      namespace: ${namespace}
      name: ${connector}-allow
      labels:
        kubetunnel.d2iq.io/tunnel-connector: ${connector}
        kubetunnel.d2iq.io/networkpolicy-type: "tunnel-server"
    spec:
      podSelector:
        matchLabels:
          kubetunnel.d2iq.io/tunnel-connector: ${connector}
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: "kube-federation-system"
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: "kommander"
        - namespaceSelector:
            matchLabels:
              kubetunnel.d2iq.io/networkpolicy: ${connector}-${namespace}
        - ipBlock:
            cidr: 192.0.2.0/24
        - ipBlock:
            cidr: 203.0.113.0/24
    EOF
    
    kubectl apply -f net.yaml
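
    To verify that both policies exist, list them by the label set in the manifest above:

    CODE
    kubectl get networkpolicy -n ${namespace} -l kubetunnel.d2iq.io/tunnel-connector=${connector}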
  2. To allow applications running in another namespace to access the attached cluster, add the label kubetunnel.d2iq.io/networkpolicy=${connector}-${namespace} to that target namespace; the following example labels ${namespace}:

    CODE
    kubectl label ns ${namespace} kubetunnel.d2iq.io/networkpolicy=${connector}-${namespace}

    All pods in the labeled namespace can now reach the attached cluster's services.
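
    To confirm the label was applied:

    CODE
    kubectl get namespace ${namespace} --show-labels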

Next Step:

Use the Remote Cluster
