If your cluster workload changes, use a different instance type for your worker machines. To ensure your worker machines’ operating system is up to date, use a different machine image that includes a more recent patch version of the operating system.
By default, Konvoy groups your worker machines in the worker node pool. If you change properties of the machines and apply the change, the machines may be destroyed and re-created, disrupting their running workloads.
This tutorial describes how to update the properties of worker machines without disrupting your cluster workload. You create a new node pool with up-to-date properties, move your workload from the worker node pool to the new node pool, and then scale down the worker node pool.
Follow these steps:
Use this command to list all node pools and identify the node pool with worker machines:
konvoy get nodepools
Create a new node pool called worker2, copying the properties of the worker node pool:
konvoy create nodepool worker2 --from worker
Edit cluster.yaml to change the machine image and other properties of the worker2 node pool if needed. If necessary, update the count.
This is an excerpt of an edited cluster.yaml. Note that, compared to the worker node pool, the worker2 node pool has twice as many nodes, uses a different instance type and a different machine image, and allocates twice as much space for image and container storage.
kind: ClusterProvisioner
apiVersion: konvoy.mesosphere.io/v1beta1
spec:
  nodePools:
  - name: worker
    count: 4
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      imagefsVolumeEnabled: true
      imagefsVolumeSize: 160
      imagefsVolumeType: gp2
      imagefsVolumeDevice: xvdb
      type: m5.2xlarge
      imageID: ami-01ed306a12b7d1c96
  - name: worker2
    count: 8
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      imagefsVolumeEnabled: true
      imagefsVolumeSize: 320
      imagefsVolumeType: gp2
      imagefsVolumeDevice: xvdb
      type: p2.xlarge
      imageID: ami-079f731edfe27c29c
Apply the change to your infrastructure:
konvoy up
Move your workload from the machines in the worker pool to the machines in the worker2 pool by draining the worker nodes. For more information on draining, see Safely Drain a Node.
konvoy drain nodepool worker
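Draining evicts the Pods running on the worker nodes so they are rescheduled elsewhere; as the linked Safely Drain a Node page explains, evictions respect any PodDisruptionBudgets you have defined. As a minimal sketch (the my-app name and label are hypothetical, not part of your cluster), a budget that keeps at least two replicas of an application available during the move might look like:

```yaml
# Hypothetical PodDisruptionBudget: an eviction is refused if it would
# leave fewer than 2 available Pods matching the app: my-app label.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
```

With a budget like this in place, the drain waits for evicted Pods to be rescheduled and become available again rather than dropping the application below two replicas at once.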
Verify that your workload has been rescheduled and is healthy. To list all Pods that are not in the Running phase, use this command:
kubectl get pods --all-namespaces=true --field-selector=status.phase!=Running
Scale down the worker node pool to zero, and apply the change:
konvoy scale nodepool worker --count=0
konvoy up