Managing Kubernetes worker groups
To meet the resource requirements of applications running in Kubernetes clusters, you can create worker nodes with different numbers of CPUs and amounts of RAM. Creating workers with different flavors is possible by using worker groups.
When creating a Kubernetes cluster, you can specify the configuration of only one worker group, the default worker group. After the cluster is created, add as many worker groups as you need. If required, you can also edit the number of workers in a group later.
Limitations
- Worker groups are not available for Kubernetes version 1.15.x.
- The default worker group in a Kubernetes cluster cannot be removed or replaced because it is part of the initial cluster stack. However, you can stop using it by cordoning and draining its node and letting the autoscaler scale it down to zero.
- In Kubernetes version 1.21.x and earlier, autoscaling to zero nodes is not supported.
Prerequisites
- A Kubernetes cluster is created, as described in Creating and deleting Kubernetes clusters.
To add a worker group
- On the Kubernetes clusters screen, click a Kubernetes cluster.
- On the cluster right pane, navigate to the Groups tab.
- In the Workers section, click Add.
- In the Add worker group window, specify a name for the group.
- In the Worker group section, select a flavor for each worker, and then decide whether you want to allow automatic scaling of the worker group:
  - With Autoscaling enabled, the number of workers is automatically increased if there are pods stuck in the pending state due to insufficient resources, and reduced if there are workers with no pods running on them. For scaling of the worker group, set its minimum and maximum size. Note that some types of pods can prevent the autoscaler from removing a worker; for a list of such pod types, refer to the official Kubernetes Cluster Autoscaler documentation.
  - With Autoscaling disabled, the number of worker nodes that you set remains fixed.
- In the Labels section, enter labels that will be used to specify supplementary parameters for this Kubernetes cluster as key/value pairs. For example: selinux_mode=permissive.
- Click Add.
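As noted in the Autoscaling description above, some pods prevent the autoscaler from removing a worker. A pod can also be marked explicitly so that the Cluster Autoscaler never evicts it, which blocks scale-down of the node it runs on. A minimal sketch using the autoscaler's safe-to-evict annotation (the pod name and image here are examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod                # example name
  annotations:
    # "false" tells the Cluster Autoscaler not to evict this pod,
    # so the node running it will not be scaled down
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
    - name: app
      image: nginx                # example image
```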
When the worker group is created, you can assign pods to these worker nodes, as explained in Assigning Kubernetes pods to specific nodes.
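For instance, assuming the nodes of the new worker group carry a label such as group=highcpu (a hypothetical label; use the labels actually set on your nodes), a pod can be assigned to those nodes with a nodeSelector:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-heavy-app             # example name
spec:
  nodeSelector:
    group: highcpu                # hypothetical label on the new group's nodes
  containers:
    - name: app
      image: nginx                # example image
```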
To edit the number of workers in a group
- On the Kubernetes cluster right pane, navigate to the Groups tab.
- In the Workers section, click the pencil icon for the default worker group or the ellipsis icon for all other groups, and then select Edit.
- In the Edit workers window, enable or disable Autoscaling, or change the number of workers in the group.
- Click Save.
To delete a worker group
Click the ellipsis icon next to the required worker group, and then select Delete. The worker group will be deleted along with all of its workers, and its data will be lost.
To remove the default worker group
- Enable the Kubernetes autoscaler and set the minimum node count of the default worker group to 0:
  - On the Kubernetes cluster right pane, open the Groups tab.
  - In the Workers section, click the pencil icon next to the default worker group, and then select Edit.
  - In the Edit workers window, enable Autoscaling and set the number of workers in the group to 0.
  - Click Save.
  This allows the autoscaler to scale the group down automatically once the node is empty.
- Prevent new workloads from being scheduled on the default worker node:
  kubectl cordon <node-name>
- Drain existing workloads from the default node:
  kubectl drain <node-name> --ignore-daemonsets --delete-local-data --force
  This evicts all pods (except DaemonSet pods) and prepares the node for removal. On kubectl 1.20 and later, use --delete-emptydir-data instead of the deprecated --delete-local-data flag.
- Wait for the autoscaler to remove the node. After the last pod is evicted, the node will be automatically deleted after the standard autoscaler timeout (approximately 10 minutes).
Once the default worker node is removed, your new worker group will take over and run all workloads going forward.
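To confirm that the node is gone, you can check the node list; the cordoned node shows a SchedulingDisabled status until the autoscaler deletes it (replace <node-name> with the actual node name):

```shell
# List cluster nodes and check the default worker node's status
kubectl get nodes
# Optionally watch for changes until the node disappears from the list
kubectl get nodes --watch
```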