
Velero with AWS S3 Buckets - Configure Velero

Customize Velero to configure a non-default backup storage location.

  1. Create a ConfigMap to enable Velero to use AWS S3 buckets as backup storage location:

    CODE
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: ${WORKSPACE_NAMESPACE}
      name: velero-overrides
    data:
      values.yaml: |
        configuration:
          backupStorageLocation:
          - bucket: ${BUCKET}
            provider: "aws"
            config:
              region: <AWS_REGION> # such as us-west-2
              s3ForcePathStyle: "false"
              insecureSkipTLSVerify: "false"
              s3Url: ""
              # profile should be set to the AWS profile name mentioned in the secret
              profile: default
            credentials:
              # With the proper IAM permissions with access to the S3 bucket,
              # you can attach the EC2 instances using the IAM Role, OR fill in "existingSecret" OR "secretContents" below.
              #
              # Name of a pre-existing secret (if any) in the Velero namespace
              # that should be used to get IAM account credentials.
              existingSecret: velero-aws-credentials
              # The key must be named "cloud", and the value corresponds to the entire content of your IAM credentials file.
              # For more information, consult the documentation for the velero plugin for AWS at:
              # [AWS] https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/README.md
              secretContents: 
                # cloud: |
                #   [default]
                #   aws_access_key_id=<REDACTED>
                #   aws_secret_access_key=<REDACTED>
    EOF
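The overrides above reference a pre-existing secret named velero-aws-credentials. If that secret does not exist yet, you can create it from a standard AWS shared-credentials file, following the pattern in the velero-plugin-for-aws README. The file name credentials-velero is only a convention, and the [default] profile name must match the profile value in the overrides:

```shell
# Write an AWS shared-credentials file (placeholder values, not real keys)
cat <<'EOF' > credentials-velero
[default]
aws_access_key_id=<REDACTED>
aws_secret_access_key=<REDACTED>
EOF
```

Then create the secret so that the key is named cloud, for example: kubectl create secret generic velero-aws-credentials -n ${WORKSPACE_NAMESPACE} --from-file=cloud=credentials-velero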
  2. Patch the Velero AppDeployment to reference the created ConfigMap with the Velero overrides:

    1. To update Velero in all clusters in a workspace:

      CODE
      cat << EOF | kubectl -n ${WORKSPACE_NAMESPACE} patch appdeployment velero --type="merge" --patch-file=/dev/stdin
      spec:
        configOverrides:
          name: velero-overrides
      EOF
    2. To update Velero for a specific cluster in a workspace, see Customize an Application per Cluster.
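Before moving on, you can read the override reference back from the AppDeployment to confirm the patch landed. This quick check only assumes the spec.configOverrides.name field set by the patch above:

```shell
# Should print the name of the overrides ConfigMap: velero-overrides
kubectl get appdeployment velero -n ${WORKSPACE_NAMESPACE} \
  -o jsonpath='{.spec.configOverrides.name}'
```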

  3. Check that the velero-overrides ConfigMap is referenced by the HelmRelease object:

    CODE
    kubectl get hr -n kommander velero -o jsonpath='{.spec.valuesFrom[?(@.name=="velero-overrides")]}'

    The output looks like this if the deployment is successful:

    CODE
    {"kind":"ConfigMap","name":"velero-overrides"}
  4. Verify that the Velero pod is running:

    CODE
    kubectl get pods -A --kubeconfig=${CLUSTER_NAME}.conf | grep velero
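Beyond the pod check, you can confirm that Velero accepted the storage settings. This is an extra sanity check, assuming the Velero CRDs are installed on the cluster; the backup location should eventually report the phase Available:

```shell
# List Velero backup storage locations across all namespaces;
# the PHASE column should read "Available" once the S3 bucket is reachable
kubectl get backupstoragelocations -A --kubeconfig=${CLUSTER_NAME}.conf
```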

 

You can also configure Velero by editing the kommander.yaml file and rerunning the installation. To follow this alternative configuration path, see the following section:

Alternative Configuration path for Management/Essential clusters

Configure Velero on the Management Cluster

  1. Refresh the kommander.yaml file to add the Velero customization:
    ⚠ Before running this command, ensure that kommander.yaml is the configuration file you are currently using for your environment; otherwise, your previous configuration will be lost.

    CODE
    dkp install kommander -o yaml --init > kommander.yaml
  2. Configure DKP to load the plugins and to include the secret in the apps.velero section:
    This process has been tested to work with plugins for AWS v1.1.0 and Azure v1.5.1. More recent versions of these plugins can be used, but have not been tested by D2iQ.

    CODE
    ...
      velero:
        values: |
          configuration:
            backupStorageLocation:
              bucket: ${BUCKET}
              config:
                region: <AWS_REGION> # such as us-west-2
                s3ForcePathStyle: "false"
                insecureSkipTLSVerify: "false"
                s3Url: ""
                # profile should be set to the AWS profile name mentioned in the secret
                profile: default
          credentials:
            # With the proper IAM permissions with access to the S3 bucket,
            # you can attach the EC2 instances using the IAM Role, OR fill in "existingSecret" OR "secretContents" below.
            #
            # Name of a pre-existing secret (if any) in the Velero namespace
            # that should be used to get IAM account credentials.
            existingSecret: velero-aws-credentials
            # The key must be named "cloud", and the value corresponds to the entire content of your IAM credentials file.
            # For more information, consult the documentation for the velero plugin for AWS at:
            # [AWS] https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/README.md
            secretContents: 
              # cloud: |
              #   [default]
              #   aws_access_key_id=<REDACTED>
              #   aws_secret_access_key=<REDACTED>
    ...
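The fragment above elides its surrounding keys. As noted in the previous step, it belongs in the apps.velero section of kommander.yaml, so the nesting looks roughly like this (sibling keys omitted; a sketch, not a complete file):

```yaml
apps:
  velero:
    values: |
      configuration:
        backupStorageLocation:
          bucket: ${BUCKET}
          # ...remaining values as shown above...
```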
  3. Use the modified kommander.yaml configuration to install this Velero configuration:

    CODE
    dkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf
  4. Check that the velero-overrides ConfigMap is referenced by the HelmRelease object:

    CODE
    kubectl get hr -n kommander velero -o jsonpath='{.spec.valuesFrom[?(@.name=="velero-overrides")]}'

    The output looks like this if the deployment is successful:

    CODE
    {"kind":"ConfigMap","name":"velero-overrides"}
  5. Verify that the Velero pod is running:

    CODE
    kubectl get pods -A --kubeconfig=${CLUSTER_NAME}.conf | grep velero
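If the pod is not in a Running state, the Velero logs usually point at the misconfigured value. A troubleshooting step, assuming Velero is deployed in the kommander namespace on the management cluster (adjust the namespace if your platform places it elsewhere):

```shell
# Inspect the Velero deployment logs for credential or bucket errors
kubectl logs deployment/velero -n kommander --kubeconfig=${CLUSTER_NAME}.conf
```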

Next Step:

Velero with AWS S3 Buckets - Establish a Backup Location
