Managing Kubernetes clusters
Kubernetes clusters are created and managed by self-service users, as described in "Managing Kubernetes clusters" in the Self-Service Guide. In the admin panel, you can view Kubernetes cluster details, view master and worker groups, change service parameters, update the Kubernetes version, and delete Kubernetes clusters.
Virtuozzo Hybrid Infrastructure uses a soft anti-affinity policy for Kubernetes cluster nodes. According to this policy, Kubernetes nodes are distributed across compute nodes by group: master nodes are spread separately from worker nodes. Because the policy applies within each group, a compute node can still host both a master node and a worker node. In addition, because the policy is soft, if there are not enough compute nodes to evenly distribute the Kubernetes nodes of one group, some of them may be placed on the same compute node.
For Kubernetes service users to be able to use cluster autoscaling, the cluster must have a valid certificate issued by a trusted certificate authority, instead of a self-signed certificate.
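To check whether an HTTPS endpoint presents a CA-issued or a self-signed certificate, you can inspect it with openssl. The following is a minimal sketch: <endpoint-address> and port 443 are placeholders for your environment, and the command only prints the issuer, subject, and validity dates of the presented certificate (for a self-signed certificate, the issuer and subject are identical):
# echo | openssl s_client -connect <endpoint-address>:443 2>/dev/null | openssl x509 -noout -issuer -subject -dates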
Limitations
- Kubernetes versions 1.15.x–1.22.x are no longer supported. Kubernetes clusters created with these versions are marked with the Deprecated tag.
- When a Kubernetes cluster is created, its configuration files contain the IP address or DNS name of the compute API endpoint. Modifying this IP address or DNS name will make it impossible to perform Kubernetes management operations. One of the following scenarios applies:
  - If high availability for the management node is disabled, the compute API is accessed via the IP address of the management node. In this case, changing this IP address or creating the management node HA is prohibited.
  - If high availability for the management node is enabled, the compute API is accessed via the virtual IP address. In this case, changing this virtual IP address or destroying the management node HA is prohibited.
  - If a DNS name for the compute API is configured, changing this DNS name is prohibited.
- Kubernetes cluster certificates are issued for five years. To renew the certificates, use the vinfra service compute k8saas rotate-ca command. Alternatively, you can use the openstack coe ca rotate command, as described in the OpenStack documentation.
- The default Kubernetes network plugin does not support network policies. Starting with version 1.29.3, Kubernetes clusters can be created with the Cilium network plugin, which does support them (see the example after this list).
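For example, on a cluster created with the Cilium network plugin, Kubernetes service users can apply a standard Kubernetes NetworkPolicy with kubectl. The following is a minimal sketch: the policy name and namespace are illustrative, and kubectl is assumed to be already configured with the cluster kubeconfig:
# kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
This example policy blocks all incoming traffic to pods in the default namespace. On clusters that use the default network plugin, such policies are accepted but not enforced.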
Prerequisites
- The compute cluster is created, as described in Creating the compute cluster.
- The Kubernetes service is installed during the compute cluster deployment or later, as described in Provisioning Kubernetes clusters.
To view the details of a Kubernetes cluster
Admin panel
On the Compute > Kubernetes screen, click a Kubernetes cluster to open its right pane.
Command-line interface
Use the following command:
vinfra service compute k8saas show <cluster>
<cluster>
- Cluster ID or name
For example, to view the details of the Kubernetes cluster k8s1, run:
# vinfra service compute k8saas show k8s1
+----------------------------------+--------------------------------------------+
| Field                            | Value                                      |
+----------------------------------+--------------------------------------------+
| action_status                    | CREATE_COMPLETE                            |
| boot_volume_size                 | 10                                         |
| boot_volume_storage_policy       | default                                    |
| containers_volume_size           | 10                                         |
| containers_volume_storage_policy | default                                    |
| create_timeout                   | 60                                         |
| external_network_id              | 10cc4d59-adac-4ec1-8e0a-df5015b82c64       |
| id                               | 749737ae-2452-4a98-a057-b59b1c579a85       |
| key_name                         | key1                                       |
| master_flavor                    | medium                                     |
| master_node_count                | 1                                          |
| name                             | k8s1                                       |
| network_id                       | d037623b-0db7-40c2-b38a-9ac34fbd1cc5       |
| nodegroups                       | - action_status: CREATE_COMPLETE           |
|                                  |   flavor: medium                           |
|                                  |   id: c3b4ec41-b8c1-4dae-9e1c-aa586b99a62c |
|                                  |   is_default: true                         |
|                                  |   name: default-master                     |
|                                  |   node_count: 1                            |
|                                  |   role: master                             |
|                                  |   status: ACTIVE                           |
|                                  |   version: v1.22.2                         |
|                                  | - action_status: CREATE_COMPLETE           |
|                                  |   flavor: small                            |
|                                  |   id: 65b80f19-0920-48b7-84e0-d0c63c46e99f |
|                                  |   is_default: true                         |
|                                  |   name: default-worker                     |
|                                  |   node_count: 3                            |
|                                  |   role: worker                             |
|                                  |   status: ACTIVE                           |
|                                  |   version: v1.22.2                         |
| project_id                       | d8a72d59539c431381989af6cb48b05d           |
| status                           | ACTIVE                                     |
| user_id                          | 5846f988280f42199ed030a22970d48e           |
| worker_pools                     | - flavor: small                            |
|                                  |   node_count: 3                            |
+----------------------------------+--------------------------------------------+
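The vinfra command-line client follows the OpenStack client conventions, so it may also accept output formatters and column filters such as -f json and -c. This is an assumption; check vinfra service compute k8saas show --help on your installation. If supported, these options are convenient for scripting, for example:
# vinfra service compute k8saas show k8s1 -c status -f json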
To view master and worker groups
- On the Compute > Kubernetes screen, click a Kubernetes cluster.
- On the cluster right pane, navigate to the Groups tab.
- List all of the nodes in a group by clicking the arrow icon next to the required node group.
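If you also have kubectl access to the cluster (for example, via a kubeconfig file obtained by its owner from the self-service panel), the same split into master and worker nodes is visible from inside Kubernetes. This is a sketch that assumes kubectl is already configured; the exact node role labels depend on the Kubernetes version:
# kubectl get nodes --show-labels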
To renew the Kubernetes cluster certificates
Use the following command:
vinfra service compute k8saas rotate-ca <cluster>
<cluster>
- Cluster ID or name
For example, to renew the CA certificates for the Kubernetes cluster k8s1, run:
# vinfra service compute k8saas rotate-ca k8s1
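As noted in the limitations above, the same rotation can also be performed with the OpenStack client, for example:
# openstack coe ca rotate k8s1
After the rotation, cluster users will typically need to obtain a new kubeconfig file, because client certificates signed by the old CA are no longer trusted.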
To delete a Kubernetes cluster
Admin panel
- On the Compute > Kubernetes screen, click a Kubernetes cluster.
- On the cluster right pane, click Delete.
- Click Delete in the confirmation window.
Command-line interface
Use the following command:
vinfra service compute k8saas delete <cluster>
<cluster>
- Cluster ID or name
For example, to delete the Kubernetes cluster k8s1, run:
# vinfra service compute k8saas delete k8s1
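To confirm that the cluster has been removed, you can list the remaining Kubernetes clusters (assuming the list subcommand is available in your vinfra version):
# vinfra service compute k8saas list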