Creating and deleting Kubernetes clusters

Limitations

  • Only users who have access to the corresponding project can perform operations on Kubernetes clusters.
  • To create two or more Kubernetes clusters in one private network, you need to split the network into subnets by using the flannel_network_cidr label (see the sketch after this list).
  • In Kubernetes version 1.21.x and earlier, autoscaling to zero nodes is not supported.
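
For example, here is a minimal sketch of how two non-overlapping pod network ranges could be derived for two clusters sharing one private network. It assumes that the flannel_network_cidr label gives each cluster its own slice of the 10.100.0.0/16 range reserved for pod-level networking (see Prerequisites below); the cluster names and resulting CIDRs are illustrative only.

  import ipaddress

  # Default pod-level network reserved by Kubernetes clusters.
  default_pod_network = ipaddress.ip_network("10.100.0.0/16")

  # Split it into two /17 subnets, one per cluster in the same private network.
  # Each value would be passed via the flannel_network_cidr label.
  subnets = list(default_pod_network.subnets(prefixlen_diff=1))

  for name, subnet in zip(["cluster-a", "cluster-b"], subnets):
      print(f"{name}: flannel_network_cidr={subnet}")
  # cluster-a: flannel_network_cidr=10.100.0.0/17
  # cluster-b: flannel_network_cidr=10.100.128.0/17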

Prerequisites

  • The Kubernetes-as-a-service component is installed by a system administrator. It can be deployed along with the compute cluster or later.
  • You have a network that will interconnect the Kubernetes master and worker nodes. It can be either a shared physical network or a virtual network linked to a physical one via a virtual router. The virtual network needs to have a gateway and a DNS server specified.
  • An SSH key is added. It will be installed on both the master and worker nodes.
  • You have enough resources for all of the Kubernetes nodes, taking their flavors into account.
  • The network where you create a Kubernetes cluster must not overlap with these default networks (a quick check follows this list):

    • 10.100.0.0/16—Used for pod-level networking
    • 10.254.0.0/16—Used for allocating Kubernetes cluster IP addresses
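
The overlap requirement can be verified with the short Python sketch below; the 192.168.10.0/24 network is only an example of a planned cluster network.

  import ipaddress

  # Ranges reserved by Kubernetes clusters by default.
  reserved = [
      ipaddress.ip_network("10.100.0.0/16"),  # pod-level networking
      ipaddress.ip_network("10.254.0.0/16"),  # Kubernetes cluster IP addresses
  ]

  planned = ipaddress.ip_network("192.168.10.0/24")  # example cluster network

  for net in reserved:
      if planned.overlaps(net):
          print(f"Conflict: {planned} overlaps the reserved range {net}")
          break
  else:
      print(f"{planned} does not overlap the reserved ranges")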

To create a Kubernetes cluster

  1. Go to the Kubernetes clusters screen, and then click Create on the right. A window will open where you can set your cluster parameters.
  2. Enter the cluster name, and then select a Kubernetes version and an SSH key.
  3. In the Network section, select a network that will interconnect the Kubernetes nodes in the cluster. If you select a virtual network, decide whether you need access to your Kubernetes cluster via a floating IP address:

    • If you select None, the Kubernetes API will not be accessible via a floating IP address.
    • If you select For Kubernetes API, a floating IP address will be assigned to the master node or to the load balancer if the master node is highly available.
    • If you select For Kubernetes API and nodes, floating IP addresses will be additionally assigned to all of the Kubernetes nodes (masters and workers).

    Then, choose whether to enable High availability for the master node. If you enable high availability, three master node instances will be created and will work in Active/Active mode.

  4. In the Master node section, select a flavor for the master node. For production clusters, it is strongly recommended to use a flavor with at least 2 vCPUs and 8 GiB of RAM.
  5. Optionally, enable Integrated monitoring to automatically deploy the cluster-wide monitoring solution, which includes the following components: Prometheus, Alertmanager, and Grafana.

    This feature is experimental and not intended for use in production environments.

  6. In the Container volume section, select a storage policy, and then enter the size for volumes on both master and worker nodes.
  7. In the Default worker group section, select a flavor for each worker, and then decide whether you want to allow automatic scaling of the worker group:

    • With Autoscaling enabled, the number of workers will be automatically increased if pods are stuck in the Pending state due to insufficient resources, and reduced if some workers have no pods running on them. To bound scaling, set the minimum and maximum size of the worker group.

      Some types of pods can prevent the autoscaler from removing a worker. For the list of such pod types, refer to the official Kubernetes Cluster Autoscaler documentation; one such case is sketched after this procedure.

    • With Autoscaling disabled, the number of worker nodes that you set will remain fixed.

  8. In the Labels section, enter labels that specify supplementary parameters for this Kubernetes cluster in the key=value format. For example: selinux_mode=permissive. Currently, only the selinux_mode and flannel_network_cidr labels are supported; you can use other labels at your own risk. For the full list of available labels, refer to the OpenStack documentation.

  9. Click Create.
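
As noted in step 7, some pods keep the autoscaler from removing the worker they run on; one documented case is a pod annotated with cluster-autoscaler.kubernetes.io/safe-to-evict: "false". The sketch below assumes the kubernetes Python client and a hypothetical deployment named my-app in the default namespace, and adds this annotation to the deployment's pod template so the autoscaler will not evict its pods during scale-down.

  from kubernetes import client, config

  # Assumes a kubeconfig obtained via the "Kubernetes access" instructions is available locally.
  config.load_kube_config()
  apps = client.AppsV1Api()

  # Annotate the pod template of a hypothetical deployment so the cluster autoscaler
  # does not evict its pods when considering worker removal.
  patch = {
      "spec": {
          "template": {
              "metadata": {
                  "annotations": {
                      "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
                  }
              }
          }
      }
  }
  apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)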

Creation of the Kubernetes cluster will start. The master and worker nodes will appear on the Virtual machines screen, while their volumes will show up on the Volumes screen.

After the cluster is ready, click Kubernetes access for instructions on accessing the dashboard. You can also access the Kubernetes master and worker nodes via SSH, by using the assigned SSH key and the user name core (see the sketch below).
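
For example, a minimal SSH connection to a node could look as follows, using the paramiko library; the node address and key path are placeholders, to be replaced with a node's floating IP address and the private part of the SSH key selected at cluster creation.

  import os
  import paramiko

  node_ip = "203.0.113.10"                        # placeholder: floating IP of a master or worker node
  key_path = os.path.expanduser("~/.ssh/id_rsa")  # placeholder: private key matching the assigned SSH key

  ssh = paramiko.SSHClient()
  ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
  ssh.connect(node_ip, username="core", key_filename=key_path)  # nodes accept the "core" user

  _, stdout, _ = ssh.exec_command("hostname")
  print(stdout.read().decode().strip())
  ssh.close()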

To delete a Kubernetes cluster

On the Kubernetes clusters screen, click the required Kubernetes cluster, and then click Delete. The master and worker VMs will be deleted along with their volumes.