Leverage Multiple AWS Accounts for Kubernetes Cluster Deployments

Objective

You can leverage multiple AWS accounts in your organization to meet specific business needs, reflect your organizational structure, or implement a multi-tenancy strategy. Common scenarios include:

  • Implementing isolation between environment tiers such as development, testing, acceptance, and production.

  • Implementing separation of concerns between management clusters and workload clusters.

  • Reducing the impact of security events and incidents.

For additional benefits of using multiple AWS accounts, refer to the following white paper.

This document describes how to use the D2iQ Kubernetes Platform (DKP) to deploy a management cluster and multiple workload clusters across multiple AWS accounts.

Assumptions

This guide assumes you have some understanding of Cluster API concepts and basic DKP provisioning workflows on AWS.

  • Cluster API Concepts

  • Getting Started with DKP on AWS (AWS Quick Start)

Glossary

  • Management cluster - The cluster that runs in AWS and is used to create target clusters in different AWS accounts.

  • Target account - The account where the target cluster is created.

  • Source account - The AWS account where the CAPA controllers for the management cluster run.

Prerequisites

Before you begin deploying DKP on AWS, configure the prerequisites for your environment, either non-air-gapped or air-gapped.

Deploy DKP on AWS

  1. Deploy a management cluster in your AWS source account.
    AWS: create Kubernetes AWS cluster

  2. Configure a trust relationship between the source and target accounts and update the management cluster configuration:

Step 1:

DKP leverages the Cluster API Provider for AWS (CAPA) to provision Kubernetes clusters declaratively. You declare the desired state of the cluster in a cluster configuration YAML file, which is generated with:


dkp create cluster aws --cluster-name=${CLUSTER_NAME} \
--dry-run \
--output=yaml \
> ${CLUSTER_NAME}.yaml
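The --dry-run flag writes the Cluster API objects to ${CLUSTER_NAME}.yaml without creating anything in AWS, so you can review or edit them before applying. A minimal sketch of that workflow follows; the cluster name is a placeholder, and the dkp call itself needs AWS credentials, so it is shown commented out:

```shell
# Placeholder cluster name; substitute your own.
export CLUSTER_NAME=dkp-example

# Generate the manifest without creating any AWS resources:
# dkp create cluster aws --cluster-name=${CLUSTER_NAME} \
#   --dry-run --output=yaml > ${CLUSTER_NAME}.yaml

# Once generated, list the object kinds the manifest declares:
# grep '^kind:' ${CLUSTER_NAME}.yaml

echo "review ${CLUSTER_NAME}.yaml before creating the cluster"
```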

Step 2:

Configure a trust relationship between the source and target accounts.

Follow all the prerequisite steps in both the source and target accounts

  1. Create all policies and roles in both the management and workload accounts.

    a. The prerequisite IAM policies for DKP are documented here: Configure AWS IAM policies.

  2. Establish a trust relationship in the workload account for the management account.

    a. Go to your target (workload) account.

    b. Search for the role control-plane.cluster-api-provider-aws.sigs.k8s.io.

    c. Navigate to the Trust Relationship tab and select Edit Trust Relationship.

    d. Add the following statement:

    {
      "Effect": "Allow",
      "Principal": {
         "AWS": "arn:aws:iam::${mgmt-aws-account}:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
      },
      "Action": "sts:AssumeRole"
    }
  3. Grant the role in the source (management cluster) account permission to call the sts:AssumeRole API.

    a. Log in to the source AWS account and attach the following inline policy to the control-plane.cluster-api-provider-aws.sigs.k8s.io role:

    {
      "Version": "2012-10-17",
      "Statement": [
         {
           "Effect": "Allow",
           "Action": "sts:AssumeRole",
           "Resource": [
    "arn:aws:iam::${workload-aws-account}:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
           ]
         }
      ]
    }
  4. Modify the management cluster configuration file and update the AWSCluster object with the following details:

    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    metadata:
    spec:
      identityRef:
         kind: AWSClusterRoleIdentity
         name: cross-account-role
    …
    …
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSClusterRoleIdentity
    metadata:
      name: cross-account-role
    spec:
      allowedNamespaces: {}
      roleARN: "arn:aws:iam::${workload-aws-account}:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
      sourceIdentityRef:
        kind: AWSClusterControllerIdentity
        name: default
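The console-based IAM changes in steps 2 and 3 above can also be scripted with the AWS CLI. The sketch below is a hedged example, not part of the official procedure: the account IDs are placeholders, and the aws calls need credentials for the respective accounts, so they are shown commented out.

```shell
# Placeholders: substitute your real account IDs.
MGMT_ACCOUNT=111111111111
WORKLOAD_ACCOUNT=222222222222
ROLE=control-plane.cluster-api-provider-aws.sigs.k8s.io

# Trust policy for the workload-account role (step 2).
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::${MGMT_ACCOUNT}:role/${ROLE}" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Inline policy for the management-account role (step 3).
cat > assume-role-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": ["arn:aws:iam::${WORKLOAD_ACCOUNT}:role/${ROLE}"]
    }
  ]
}
EOF

# Sanity-check both documents before uploading them.
python3 -m json.tool trust-policy.json > /dev/null
python3 -m json.tool assume-role-policy.json > /dev/null
echo "policy documents are valid JSON"

# With workload-account credentials (step 2):
# aws iam update-assume-role-policy --role-name "${ROLE}" \
#   --policy-document file://trust-policy.json

# With management-account credentials (step 3):
# aws iam put-role-policy --role-name "${ROLE}" \
#   --policy-name assume-workload-role \
#   --policy-document file://assume-role-policy.json
```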

After performing the steps above, your management cluster is configured to create new managed clusters in the target (workload) AWS account.
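The AWSClusterRoleIdentity from step 4 can also be kept as a standalone manifest and applied to the management cluster on its own, which makes it easy to confirm the identity object exists before creating workload clusters. A minimal sketch, with a placeholder account ID; the kubectl calls assume a kubeconfig pointing at the management cluster, so they are commented out:

```shell
# Placeholder account ID in the roleARN; substitute the workload account.
cat > cross-account-identity.yaml <<'EOF'
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSClusterRoleIdentity
metadata:
  name: cross-account-role
spec:
  allowedNamespaces: {}
  roleARN: "arn:aws:iam::222222222222:role/control-plane.cluster-api-provider-aws.sigs.k8s.io"
  sourceIdentityRef:
    kind: AWSClusterControllerIdentity
    name: default
EOF
echo "wrote cross-account-identity.yaml"

# Apply and confirm against the management cluster:
# kubectl apply -f cross-account-identity.yaml
# kubectl get awsclusterroleidentity cross-account-role
```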

Next Steps: