Customize Velero to allow the configuration of a non-default backup location.
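The commands on this page assume that the WORKSPACE_NAMESPACE, BUCKET, and CLUSTER_NAME environment variables are set in your shell. A minimal sketch with placeholder values (substitute your own):
CODE
# Placeholder values, not defaults; replace them with your workspace
# namespace, your S3 bucket name, and your cluster name.
export WORKSPACE_NAMESPACE=<workspace-namespace>
export BUCKET=<aws-s3-bucket-name>
export CLUSTER_NAME=<cluster-name>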
Create a ConfigMap for the Velero configuration:
CODE
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ${WORKSPACE_NAMESPACE}
  name: velero-overrides
data:
  values.yaml: |
    configuration:
      backupStorageLocation:
        - bucket: ${BUCKET}
          provider: "aws"
          config:
            region: <AWS_REGION> # such as us-west-2
            s3ForcePathStyle: "false"
            insecureSkipTLSVerify: "false"
            s3Url: ""
            # profile should be set to the AWS profile name mentioned in the secret
            profile: default
    credentials:
      # With the proper IAM permissions granting access to the S3 bucket,
      # you can attach the IAM Role to the EC2 instances, OR fill in "existingSecret" OR "secretContents" below.
      #
      # Name of a pre-existing secret (if any) in the Velero namespace
      # that should be used to get IAM account credentials.
      existingSecret: velero-aws-credentials
      # The key must be named "cloud", and the value corresponds to the entire content of your IAM credentials file.
      # For more information, consult the documentation for the velero plugin for AWS at:
      # [AWS] https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/README.md
      secretContents:
      # cloud: |
      #   [default]
      #   aws_access_key_id=<REDACTED>
      #   aws_secret_access_key=<REDACTED>
EOF
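The existingSecret above points to a velero-aws-credentials secret that must already exist in the workspace namespace. If you have not created it yet, the following is a minimal sketch; it assumes your IAM credentials are stored in a local file named credentials-velero (a hypothetical file name) and uses aws as the secret key, to match the --credential flag used later on this page:
CODE
# credentials-velero is a hypothetical local file containing an IAM
# credentials profile; the key name "aws" matches the
# --credential=velero-aws-credentials=aws flag used further below.
kubectl create secret generic velero-aws-credentials \
  -n ${WORKSPACE_NAMESPACE} \
  --from-file=aws=credentials-velero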
Patch the Velero AppDeployment to reference the created ConfigMap with the Velero overrides. To update Velero in all clusters in a workspace:
CODE
cat << EOF | kubectl -n ${WORKSPACE_NAMESPACE} patch appdeployment velero --type="merge" --patch-file=/dev/stdin
spec:
  configOverrides:
    name: velero-overrides
EOF
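To confirm that the patch landed, you can read the field back (the jsonpath mirrors the patch above):
CODE
kubectl get appdeployment velero -n ${WORKSPACE_NAMESPACE} \
  -o jsonpath='{.spec.configOverrides.name}'
# Expected output: velero-overrides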
To update Velero for a specific cluster in a workspace, see Customize an Application per Cluster.
Check that the HelmRelease object references the ConfigMap:
CODE
kubectl wait --for=jsonpath='{.spec.valuesFrom[1].name}'=velero-overrides HelmRelease/velero -n ${WORKSPACE_NAMESPACE}
Verify that the Velero pod is running:
CODE
kubectl get pods -A --kubeconfig=${CLUSTER_NAME}.conf | grep velero
You can also configure Velero by editing the kommander.yaml file and rerunning the installation, as described in the following alternative path.
Alternative Configuration path for Management/Essential clusters
Output the Kommander configuration to kommander.yaml:
CODE
dkp install kommander -o yaml --init > kommander.yaml
Add the Velero configuration to the apps.velero.values section:
NOTE: This process has been tested to work with plugins for AWS v1.1.0. Newer versions of these plugins can be used, but have not been tested by D2iQ.
CODE
...
velero:
  values: |
    configuration:
      backupStorageLocation:
        bucket: ${BUCKET}
        config:
          region: <AWS_REGION> # such as us-west-2
          s3ForcePathStyle: "false"
          insecureSkipTLSVerify: "false"
          s3Url: ""
          # profile should be set to the AWS profile name mentioned in the secret
          profile: default
    credentials:
      # With the proper IAM permissions granting access to the S3 bucket,
      # you can attach the IAM Role to the EC2 instances, OR fill in "existingSecret" OR "secretContents" below.
      #
      # Name of a pre-existing secret (if any) in the Velero namespace
      # that should be used to get IAM account credentials.
      existingSecret: velero-aws-credentials
      # The key must be named "cloud", and the value corresponds to the entire content of your IAM credentials file.
      # For more information, consult the documentation for the velero plugin for AWS at:
      # [AWS] https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/README.md
      secretContents:
      # cloud: |
      #   [default]
      #   aws_access_key_id=<REDACTED>
      #   aws_secret_access_key=<REDACTED>
...
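For orientation, the velero block above sits under the top-level apps key of kommander.yaml. A minimal sketch of the nesting, with all sibling keys elided:
CODE
# Nesting sketch only; other keys in kommander.yaml are elided.
apps:
  velero:
    values: |
      configuration:
        ...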
Run dkp install kommander using the kommander.yaml configuration file:
CODE
dkp install kommander --installer-config kommander.yaml --kubeconfig=${CLUSTER_NAME}.conf
After running dkp install kommander, check that the HelmRelease object references the ConfigMap:
CODE
kubectl wait --for=jsonpath='{.spec.valuesFrom[1].name}'=velero-overrides HelmRelease/velero -n kommander
Check the Helm releases to verify that the new Velero configuration has been applied:
CODE
kubectl get helmrelease -n ${WORKSPACE_NAMESPACE} --kubeconfig=${CLUSTER_NAME}.conf
Verify that the Velero pod is running:
CODE
kubectl get pods -A --kubeconfig=${CLUSTER_NAME}.conf | grep velero
Create the backup storage location:
CODE
velero backup-location create -n ${WORKSPACE_NAMESPACE} <aws-backup-location-name> \
  --provider aws \
  --bucket ${BUCKET} \
  --config region=<AWS_REGION> \
  --credential=velero-aws-credentials=aws
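Once the location exists, you can target it explicitly when creating a backup. A brief usage sketch, where <my-backup-name> is a placeholder:
CODE
# <my-backup-name> is a placeholder; <aws-backup-location-name> is the
# backup storage location created above.
velero backup create <my-backup-name> \
  -n ${WORKSPACE_NAMESPACE} \
  --storage-location <aws-backup-location-name>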
Check that the backup storage location is Available and references the correct S3 bucket:
CODE
kubectl get backupstoragelocations -n ${WORKSPACE_NAMESPACE} -o yaml
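Alternatively, you can list the backup locations and their phase with the velero CLI:
CODE
velero backup-location get -n ${WORKSPACE_NAMESPACE}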