Configure the Control Plane
Users can modify the KubeadmControlPlane
Cluster API object to configure different kubelet options. Use the following guide if you need to configure your control plane beyond the existing options that are available as flags.
Prerequisites
Before you begin, make sure you have created your cluster using a bootstrap cluster from the respective Infrastructure Providers section.
Modifying Audit Logs
To modify control plane options, get the appropriate Cluster API
objects that describe the cluster by running the following command.
The example below uses AWS, but the same approach works for gcp, azure, preprovisioned, and vsphere clusters.
dkp create cluster aws -c {MY_CLUSTER_NAME} -o yaml --dry-run >> {MY_CLUSTER_NAME}.yaml
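The dry-run output is a multi-document YAML file. As a rough sketch of how you might pull out just the KubeadmControlPlane document for inspection, the snippet below splits the file on its `---` separators with awk. The file name cluster.yaml and its three abbreviated documents are stand-ins for your real {MY_CLUSTER_NAME}.yaml, and the multi-character record separator assumes GNU awk:

```shell
# Stand-in for the real dry-run output: a multi-document YAML file.
cat > cluster.yaml <<'EOF'
kind: AWSCluster
metadata:
  name: my-cluster
---
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
---
kind: MachineDeployment
metadata:
  name: my-cluster-md-0
EOF

# Treat each '---'-separated document as one record and print only the
# document that declares a KubeadmControlPlane (multi-character RS is a
# GNU awk feature).
awk 'BEGIN { RS = "---\n" } /kind: KubeadmControlPlane/' cluster.yaml
```

This prints only the KubeadmControlPlane document, which is convenient when the full dry-run output runs to hundreds of lines.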
When you open {MY_CLUSTER_NAME}.yaml
with your favorite text editor, look for the KubeadmControlPlane
object for your cluster. Below is an example object:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          audit-log-maxage: "30"
          audit-log-maxbackup: "10"
          audit-log-maxsize: "100"
          audit-log-path: /var/log/audit/kube-apiserver-audit.log
          audit-policy-file: /etc/kubernetes/audit-policy/apiserver-audit-policy.yaml
          cloud-provider: aws
          encryption-provider-config: /etc/kubernetes/pki/encryption-config.yaml
        extraVolumes:
        - hostPath: /etc/kubernetes/audit-policy/
          mountPath: /etc/kubernetes/audit-policy/
          name: audit-policy
        - hostPath: /var/log/kubernetes/audit
          mountPath: /var/log/audit/
          name: audit-logs
      controllerManager:
        extraArgs:
          cloud-provider: aws
          configure-cloud-routes: "false"
      dns: {}
      etcd:
        local:
          imageTag: 3.5.7
      networking: {}
      scheduler: {}
    files:
    - content: |
        # Taken from https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh
        # Recommended in Kubernetes docs
        apiVersion: audit.k8s.io/v1
        kind: Policy
        rules:
          # The following requests were manually identified as high-volume and low-risk,
          # so drop them.
          - level: None
            users: ["system:kube-proxy"]
            verbs: ["watch"]
            resources:
              - group: "" # core
                resources: ["endpoints", "services", "services/status"]
          - level: None
            # Ingress controller reads 'configmaps/ingress-uid' through the unsecured port.
            # TODO(#46983): Change this to the ingress controller service account.
            users: ["system:unsecured"]
            namespaces: ["kube-system"]
            verbs: ["get"]
            resources:
              - group: "" # core
                resources: ["configmaps"]
          - level: None
            users: ["kubelet"] # legacy kubelet identity
            verbs: ["get"]
            resources:
              - group: "" # core
                resources: ["nodes", "nodes/status"]
          - level: None
            userGroups: ["system:nodes"]
            verbs: ["get"]
            resources:
              - group: "" # core
                resources: ["nodes", "nodes/status"]
          - level: None
            users:
              - system:kube-controller-manager
              - system:kube-scheduler
              - system:serviceaccount:kube-system:endpoint-controller
            verbs: ["get", "update"]
            namespaces: ["kube-system"]
            resources:
              - group: "" # core
                resources: ["endpoints"]
          - level: None
            users: ["system:apiserver"]
            verbs: ["get"]
            resources:
              - group: "" # core
                resources: ["namespaces", "namespaces/status", "namespaces/finalize"]
          - level: None
            users: ["cluster-autoscaler"]
            verbs: ["get", "update"]
            namespaces: ["kube-system"]
            resources:
              - group: "" # core
                resources: ["configmaps", "endpoints"]
          # Don't log HPA fetching metrics.
          - level: None
            users:
              - system:kube-controller-manager
            verbs: ["get", "list"]
            resources:
              - group: "metrics.k8s.io"
          # Don't log these read-only URLs.
          - level: None
            nonResourceURLs:
              - /healthz*
              - /version
              - /swagger*
          # Don't log events requests.
          - level: None
            resources:
              - group: "" # core
                resources: ["events"]
          # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
          - level: Request
            users: ["kubelet", "system:node-problem-detector", "system:serviceaccount:kube-system:node-problem-detector"]
            verbs: ["update","patch"]
            resources:
              - group: "" # core
                resources: ["nodes/status", "pods/status"]
            omitStages:
              - "RequestReceived"
          - level: Request
            userGroups: ["system:nodes"]
            verbs: ["update","patch"]
            resources:
              - group: "" # core
                resources: ["nodes/status", "pods/status"]
            omitStages:
              - "RequestReceived"
          # deletecollection calls can be large, don't log responses for expected namespace deletions
          - level: Request
            users: ["system:serviceaccount:kube-system:namespace-controller"]
            verbs: ["deletecollection"]
            omitStages:
              - "RequestReceived"
          # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
          # so only log at the Metadata level.
          - level: Metadata
            resources:
              - group: "" # core
                resources: ["secrets", "configmaps"]
              - group: authentication.k8s.io
                resources: ["tokenreviews"]
            omitStages:
              - "RequestReceived"
          # Get responses can be large; skip them.
          - level: Request
            verbs: ["get", "list", "watch"]
            resources:
              - group: "" # core
              - group: "admissionregistration.k8s.io"
              - group: "apiextensions.k8s.io"
              - group: "apiregistration.k8s.io"
              - group: "apps"
              - group: "authentication.k8s.io"
              - group: "authorization.k8s.io"
              - group: "autoscaling"
              - group: "batch"
              - group: "certificates.k8s.io"
              - group: "extensions"
              - group: "metrics.k8s.io"
              - group: "networking.k8s.io"
              - group: "node.k8s.io"
              - group: "policy"
              - group: "rbac.authorization.k8s.io"
              - group: "scheduling.k8s.io"
              - group: "settings.k8s.io"
              - group: "storage.k8s.io"
            omitStages:
              - "RequestReceived"
          # Default level for known APIs
          - level: RequestResponse
            resources:
              - group: "" # core
              - group: "admissionregistration.k8s.io"
              - group: "apiextensions.k8s.io"
              - group: "apiregistration.k8s.io"
              - group: "apps"
              - group: "authentication.k8s.io"
              - group: "authorization.k8s.io"
              - group: "autoscaling"
              - group: "batch"
              - group: "certificates.k8s.io"
              - group: "extensions"
              - group: "metrics.k8s.io"
              - group: "networking.k8s.io"
              - group: "node.k8s.io"
              - group: "policy"
              - group: "rbac.authorization.k8s.io"
              - group: "scheduling.k8s.io"
              - group: "settings.k8s.io"
              - group: "storage.k8s.io"
            omitStages:
              - "RequestReceived"
          # Default level for all other requests.
          - level: Metadata
            omitStages:
              - "RequestReceived"
      path: /etc/kubernetes/audit-policy/apiserver-audit-policy.yaml
      permissions: "0600"
    - content: |
        #!/bin/bash
        # CAPI does not expose an API to modify KubeProxyConfiguration
        # this is a workaround to use a script with preKubeadmCommand to modify the kubeadm config files
        # https://github.com/kubernetes-sigs/cluster-api/issues/4512
        for i in $(ls /run/kubeadm/ | grep 'kubeadm.yaml\|kubeadm-join-config.yaml'); do
          cat <<EOF>> "/run/kubeadm//$i"
        ---
        kind: KubeProxyConfiguration
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        metricsBindAddress: "0.0.0.0:10249"
        EOF
        done
      path: /run/kubeadm/konvoy-set-kube-proxy-configuration.sh
      permissions: "0700"
    - content: |
        [metrics]
          address = "0.0.0.0:1338"
          grpc_histogram = false
      path: /etc/containerd/conf.d/konvoy-metrics.toml
      permissions: "0644"
    - content: |
        #!/bin/bash
        systemctl restart containerd
        SECONDS=0
        until crictl info
        do
          if (( SECONDS > 60 ))
          then
            echo "Containerd is not running. Giving up..."
            exit 1
          fi
          echo "Containerd is not running yet. Waiting..."
          sleep 5
        done
      path: /run/konvoy/restart-containerd-and-wait.sh
      permissions: "0700"
    - contentFrom:
        secret:
          key: value
          name: my-cluster-etcd-encryption-config
      owner: root:root
      path: /etc/kubernetes/pki/encryption-config.yaml
      permissions: "0640"
    format: cloud-config
    initConfiguration:
      localAPIEndpoint: {}
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
        name: '{{ ds.meta_data.local_hostname }}'
    joinConfiguration:
      discovery: {}
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
        name: '{{ ds.meta_data.local_hostname }}'
    preKubeadmCommands:
    - systemctl daemon-reload
    - /run/konvoy/restart-containerd-and-wait.sh
    - /run/kubeadm/konvoy-set-kube-proxy-configuration.sh
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSMachineTemplate
      name: my-cluster-control-plane
      namespace: default
    metadata: {}
  replicas: 3
  rolloutStrategy:
    rollingUpdate:
      maxSurge: 1
    type: RollingUpdate
  version: v1.27.11
If you use the above example as-is, ensure the Kubernetes version number on the final line matches the version you intend to deploy.
You can now configure the fields below for the log backend. The log backend writes audit events to a file in JSON format. You can configure the log audit backend using the kube-apiserver
flags shown below:
audit-log-maxage
audit-log-maxbackup
audit-log-maxsize
audit-log-path
See the upstream Kubernetes audit documentation for more information.
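These flags trade retention against disk usage: the worst case is one active log file plus audit-log-maxbackup rotated files, each up to audit-log-maxsize megabytes. A quick back-of-the-envelope check using the values from the example object above:

```shell
maxbackup=10   # audit-log-maxbackup: number of rotated files kept
maxsize=100    # audit-log-maxsize: max megabytes per file before rotation
# One active file plus the rotated backups, each capped at maxsize MB:
echo "$(( (maxbackup + 1) * maxsize )) MB"   # prints "1100 MB"
```

Size the volume backing /var/log/kubernetes/audit on the control plane nodes with this worst case in mind.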
After modifying the values appropriately, you can create the cluster by running the command below:
kubectl create -f {MY_CLUSTER_NAME}.yaml
Once the cluster is created, users can get the corresponding kubeconfig for the cluster by running the following command:
dkp get kubeconfig -c {MY_CLUSTER_NAME} >> {MY_CLUSTER_NAME}.conf
Viewing the Audit Logs
Fluent Bit is disabled on the management cluster by default. To view the audit logs, run the command below:
dkp diagnose --kubeconfig={MY_CLUSTER_NAME}.conf
A file similar to support-bundle-2022-08-15T02_28_48.tar.gz
will be created. Untar the file using a command similar to the example below:
tar -xzf support-bundle-2022-08-15T02_28_48.tar.gz
Navigate to the node-diagnostics subdirectory of the extracted bundle with a command like the one shown below:
cd support-bundle-2022-08-15T02_28_48/node-diagnostics
Finally, to find the audit logs run the following command:
$ find . -type f | grep audit.log
./ip-10-0-142-117.us-west-2.compute.internal/data/kube_apiserver_audit.log
./ip-10-0-148-139.us-west-2.compute.internal/data/kube_apiserver_audit.log
./ip-10-0-128-181.us-west-2.compute.internal/data/kube_apiserver_audit.log
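Each line of kube_apiserver_audit.log is one JSON audit event. As a quick sketch of how to get a feel for the log's contents with standard tools, the snippet below tallies events per verb; the two sample events are fabricated stand-ins with far fewer fields than real entries:

```shell
# Fabricated two-line sample in the JSON-lines format the log backend writes.
cat > kube_apiserver_audit.log <<'EOF'
{"kind":"Event","level":"Metadata","verb":"get","user":{"username":"system:apiserver"}}
{"kind":"Event","level":"RequestResponse","verb":"create","user":{"username":"kubernetes-admin"}}
EOF

# Count events per verb to see which request types dominate the log.
grep -o '"verb":"[a-z]*"' kube_apiserver_audit.log | sort | uniq -c | sort -rn
```

For anything beyond quick triage, a JSON-aware tool such as jq gives more reliable results than pattern matching.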