6.1. Creating and Deleting Kubernetes Clusters

Limitations:

  • Only users who have access to the corresponding project can perform operations on Kubernetes clusters.

Prerequisites:

  • The Kubernetes-as-a-service component is installed by a system administrator. It can be deployed along with the compute cluster or later.

  • You have a network that will interconnect the Kubernetes master and worker nodes. It can be either a shared physical network or a virtual network linked to a physical one via a virtual router. A virtual network must have a gateway and a DNS server specified.

  • An SSH key is added. It will be installed on both the master and worker nodes.

  • You have enough resources for all of the Kubernetes nodes, taking their flavors into account.

  • The network where you create the Kubernetes cluster must not overlap with these default networks:

    • 10.100.0.0/24—Used for pod-level networking

    • 10.254.0.0/16—Used for allocating Kubernetes cluster IP addresses
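
    For example, you can check that a candidate CIDR stays clear of these reserved ranges with Python's standard ipaddress module; the 192.168.10.0/24 network below is a placeholder for your own CIDR:

      import ipaddress

      # Ranges reserved by the Kubernetes service (must not overlap your network):
      reserved = [
          ipaddress.ip_network("10.100.0.0/24"),  # pod-level networking
          ipaddress.ip_network("10.254.0.0/16"),  # Kubernetes cluster IP addresses
      ]

      my_network = ipaddress.ip_network("192.168.10.0/24")  # placeholder CIDR

      for net in reserved:
          if my_network.overlaps(net):
              print(f"Conflict: {my_network} overlaps the reserved range {net}")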

6.1.1. Creating a Kubernetes Cluster

  1. Go to the Kubernetes clusters screen, and then click Create on the right. A window will open where you can set your cluster parameters.

  2. Enter the cluster name, and then select a Kubernetes version and an SSH key.

  3. In the Network section, select a network that will interconnect the Kubernetes nodes in the cluster. If you select a virtual network, decide whether you need access to your Kubernetes cluster via a floating IP address:

    • If you select None, you will not have access to the Kubernetes API.

    • If you select For Kubernetes API, a floating IP address will be assigned to the master node or to the load balancer if the master node is highly available.

    • If you select For Kubernetes API and nodes, floating IP addresses will be additionally assigned to all of the Kubernetes nodes (masters and workers).

    Then, choose whether or not to enable High availability for the master node. If you enable high availability, three master node instances will be created and will work in Active/Active mode.

    [Screenshot: vhc-creating-and-deleting-kubernetes-clusters-1.png]
  4. In the Master node section, select a flavor for the master node. For production clusters, it is strongly recommended to use a flavor with at least 2 vCPUs and 8 GiB of RAM.

  5. Optionally, enable Integrated monitoring to automatically deploy the cluster-wide monitoring solution, which includes Prometheus, Alertmanager, and Grafana.

    Note

    This feature is experimental and not supported in production environments.

  6. In the Container volume section, select a storage policy, and then enter the size for volumes on both master and worker nodes.

  7. In the Default worker group section, select a flavor for each worker, and then decide whether you want to allow automatic scaling of the worker group:

    • With Autoscaling enabled, the number of workers is increased automatically when pods are stuck in the Pending state due to insufficient resources, and reduced when some workers have no pods running on them. To bound this scaling, set the minimum and maximum size of the worker group (see the sketch after this step).

    • With Autoscaling disabled, the number of worker nodes remains fixed at the value you set.

      [Screenshot: vhc-creating-and-deleting-kubernetes-clusters-2.png]
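
    The scaling behavior can be pictured with the following minimal Python sketch. It only illustrates the decision rule described above; the function and parameter names are invented for this example and do not reflect the autoscaler's actual implementation:

      def desired_worker_count(current, pending_pods, idle_workers, min_size, max_size):
          """Illustrative decision rule for worker group autoscaling."""
          if pending_pods > 0:
              # Pods stuck in Pending due to insufficient resources: add a worker,
              # but never exceed the group's maximum size.
              return min(current + 1, max_size)
          if idle_workers > 0:
              # Workers with no pods running on them: remove them,
              # but never drop below the group's minimum size.
              return max(current - idle_workers, min_size)
          return current  # otherwise, keep the group size unchanged
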
  8. In the Labels section, enter labels in the key=value format to specify supplementary parameters for this Kubernetes cluster. For example: selinux_mode=permissive. Currently, only the selinux label is supported; you can use other labels at your own risk. For the full list of supported labels, refer to the OpenStack documentation.
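
    As an illustration of the format only (not part of the product), such label strings can be parsed into key-value pairs as follows:

      labels = ["selinux_mode=permissive"]  # labels in the key=value format
      parsed = dict(label.split("=", 1) for label in labels)
      print(parsed)  # {'selinux_mode': 'permissive'}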

  9. Click Create.

Creation of the Kubernetes cluster will start. The master and worker nodes will appear on the Virtual machines screen, while their volumes will show up on the Volumes screen.

After the cluster is ready, click Kubernetes access for instructions on how to access the dashboard. You can also access the Kubernetes master and worker nodes via SSH, using the assigned SSH key and the user name core.
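
For example, assuming a node's floating IP address is 203.0.113.10 and the assigned key is stored at ~/.ssh/my_cluster_key (both are placeholders), the SSH connection can be scripted with Python's standard subprocess module:

  import os
  import subprocess

  # Connect to a master or worker node as the "core" user, using the SSH key
  # selected at cluster creation. The IP address and key path are placeholders.
  subprocess.run([
      "ssh", "-i", os.path.expanduser("~/.ssh/my_cluster_key"),
      "core@203.0.113.10",
  ])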

6.1.2. Deleting a Kubernetes Cluster

On the Kubernetes clusters screen, click the required Kubernetes cluster, and then click Delete. The master and worker VMs will be deleted along with their volumes.