Creating and deleting Kubernetes clusters

When creating a Kubernetes cluster, you can specify supplementary parameters by using the following labels:

selinux_mode
Choose the SELinux mode. Possible values: enforcing, permissive, and disabled. The default is permissive. Starting with version 1.31.x, you can disable SELinux to have the Envoy proxy automatically installed with Cilium.
cilium_ebpf_enabled [starting with version 1.30.x]
Choose whether to use eBPF-based host-routing to optimize the host-internal packet routing. The default is false.
cilium_ipv4pool [starting with version 1.30.x]
Configure the IP pool for assigning pod IPs. The default is 10.100.0.0/16.
cilium_ipv4pool_mask_size [starting with version 1.30.x]
Set the mask size of the subnet assigned to each node. The default is 24.
cilium_routing_mode [starting with version 1.30.x]
Enable native-routing mode or tunneling mode. Possible values: tunnel and native. The default is tunnel.
cilium_hubble_enabled [starting with version 1.30.x]
Choose whether to enable Hubble, a fully distributed networking and security observability platform. The default is false, to maximize performance.
cilium_tag [starting with version 1.30.x]
Specify a tag for the operator and agent used to provision the Cilium node.

To see the full list of supported labels, refer to the OpenStack documentation.
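
If you provision clusters through the OpenStack command-line client rather than the admin panel, the same labels can be passed to the --labels option of openstack coe cluster create. A minimal sketch, assuming the Magnum CLI plugin is installed and a cluster template named k8s-template exists (both the template and cluster names are placeholders):

    openstack coe cluster create my-cluster \
      --cluster-template k8s-template \
      --labels selinux_mode=permissive,cilium_routing_mode=native,cilium_hubble_enabled=true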

Limitations

  • Only users who have access to the corresponding project can perform operations on Kubernetes clusters.
  • In Kubernetes version 1.21.x and earlier, autoscaling to zero nodes is not supported.

Prerequisites

  • The Kubernetes-as-a-service component is installed by a system administrator. It can be deployed along with the compute cluster or later.
  • You have a network that will interconnect the Kubernetes master and worker nodes. It can be either a shared physical network or a virtual network linked to a physical one via a virtual router. The virtual network must have a gateway and a DNS server specified (a CLI sketch follows this list).
  • An SSH key is added. It will be installed on both the master and worker nodes.
  • You have enough resources for all of the Kubernetes nodes, taking their flavors into account.
  • The network where you create the Kubernetes cluster must not overlap with these default networks:

    • 10.100.0.0/16—Used for pod-level networking
    • 10.254.0.0/16—Used for allocating Kubernetes cluster IP addresses
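
For reference, a virtual network that satisfies these prerequisites could be prepared with the OpenStack CLI along the following lines. This is a sketch: all names and addresses are placeholders, and the subnet range is chosen so that it does not overlap with the default networks above.

    openstack network create k8s-net
    openstack subnet create k8s-subnet --network k8s-net \
      --subnet-range 192.168.128.0/24 --gateway 192.168.128.1 \
      --dns-nameserver 8.8.8.8
    openstack router create k8s-router
    openstack router set k8s-router --external-gateway <physical-network>
    openstack router add subnet k8s-router k8s-subnet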

To create a Kubernetes cluster

  1. Go to the Kubernetes clusters screen, and then click Create on the right. A window will open where you can set your cluster parameters.
  2. Enter the cluster name, and then select a Kubernetes version and an SSH key.
  3. In the Network section:

    1. Select a network that will interconnect the Kubernetes nodes in the cluster.
    2. When selecting a virtual network, decide whether you need access to your Kubernetes cluster via a floating IP address:

      • If you select None, you will not have external access to the Kubernetes API.
      • If you select For Kubernetes API, a floating IP address will be assigned to the master node or to the load balancer if the master node is highly available.
      • If you select For Kubernetes API and nodes, floating IP addresses will be additionally assigned to all of the Kubernetes nodes (masters and workers).
    3. If you require access to the Kubernetes cluster and your virtual network is linked to multiple physical networks via routers, select the physical network from which the floating IP address will be assigned.
    4. Choose whether to enable High availability for the master node. With high availability enabled, three master node instances will be created and will work in Active/Active mode.

  4. In the Master node section, select a flavor for the master node. For production clusters, it is strongly recommended to use a flavor with at least 2 vCPUs and 8 GiB of RAM.
  5. Optionally, enable Integrated monitoring to automatically deploy the cluster-wide monitoring solution, which includes the following components: Prometheus, Alertmanager, and Grafana.

    This feature is experimental and not intended for use in production environments.

  6. In the Container volume section, select a storage policy, and then enter the size for volumes on both master and worker nodes.
  7. In the Default worker group section, select a flavor for each worker, and then decide whether you want to allow automatic scaling of the worker group:

    • With Autoscaling enabled, the number of workers will be automatically increased if there are pods stuck in the pending state due to insufficient resources, and reduced if there are workers with no pods running on them. To bound the scaling, set the worker group's minimum and maximum size.

      Some types of pods can prevent the autoscaler from removing a worker. For a list of such pod types, refer to the official Kubernetes Autoscaler documentation. One documented safeguard is shown in the example after this list.

    • With Autoscaling disabled, the number of worker nodes that you set will be permanent.
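
    For example, the cluster autoscaler honors the safe-to-evict annotation: a pod annotated as not safe to evict keeps its worker from being removed during scale-down. A minimal sketch, with a hypothetical pod name:

      kubectl annotate pod my-pod cluster-autoscaler.kubernetes.io/safe-to-evict=false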

  8. In the Labels section, enter labels that will be used to specify supplementary parameters for this Kubernetes cluster as key/value pairs. For example: selinux_mode=permissive.
  9. Click Create.

Creation of the Kubernetes cluster will start. The master and worker nodes will appear on the Virtual machines screen, while their volumes will show up on the Volumes screen.
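
If you prefer the command line, the provisioning progress can also be followed with the Magnum CLI (a sketch; my-cluster is a placeholder name):

    openstack coe cluster list
    openstack coe cluster show my-cluster -c status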

After the cluster is ready, click Kubernetes access for instructions on how to access the dashboard. You can also access the Kubernetes master and worker nodes via SSH, using the assigned SSH key and the user name core.
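
For example, assuming the private part of the assigned SSH key is stored in ~/.ssh/id_k8s and the master node's floating IP address is 203.0.113.10 (both placeholders):

    ssh -i ~/.ssh/id_k8s core@203.0.113.10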

To delete a Kubernetes cluster

On the Kubernetes clusters screen, click the required Kubernetes cluster, and then click Delete. The master and worker VMs will be deleted along with their volumes.
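
The equivalent Magnum CLI call would be along these lines (the cluster name is a placeholder):

    openstack coe cluster delete my-cluster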