Virtuozzo Hybrid Infrastructure 4.5 (4.5.0-284)

Issue date: 2021-02-15

Applies to: Virtuozzo Hybrid Infrastructure 4.5

Virtuozzo Advisory ID: VZA-2021-007

1. Overview

In this release, Virtuozzo Hybrid Infrastructure provides a wide range of new features that enhance the end-user experience and service providers’ interoperability. The improvements cover compute services, networking, storage core, monitoring, and the administrative user interface. Additionally, this release delivers stability improvements and fixes issues found in previous releases.

2. New Features

  • [Compute] Project management for domain administrators. Domain administrators can now manage projects within their assigned domain. With this permission, domain administrators can perform operations with projects by using the OpenStack command-line tool.
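
    As an illustration, a domain administrator could manage projects with the standard OpenStack CLI along these lines (the credentials, domain, and project names below are placeholders; the exact operations available depend on the role assigned to the account):

```shell
# Authenticate as a domain administrator against the Identity (keystone) API.
# All values are placeholders for your environment.
export OS_AUTH_URL=https://infra.example.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=domain-admin
export OS_PASSWORD=secret
export OS_USER_DOMAIN_NAME=mydomain
export OS_DOMAIN_NAME=mydomain

# Create and list projects within the assigned domain.
openstack project create --domain mydomain --description "Dev workloads" dev-project
openstack project list --domain mydomain
```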

  • [Compute] Support for Kubernetes version 1.18. Kubernetes version 1.15 is no longer available in the self-service panel. Now, all management operations with Kubernetes clusters are supported for version 1.18.

  • [Compute] Volume size statistics per storage policy. Usage statistics for volumes can be aggregated per storage policy.

  • [Compute] Memory overcommitment. RAM overcommitment enables provisioning virtual machines with more RAM than the amount of physical RAM available on all compute nodes. The RAM overcommitment ratio is set for the entire compute cluster. This feature improves compute cluster efficiency for hybrid cloud disaster recovery.
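
    As an illustrative calculation (the node counts, RAM sizes, and ratio below are hypothetical, not product defaults), the total RAM that can be provisioned to virtual machines is the physical RAM of all compute nodes multiplied by the cluster-wide overcommitment ratio:

```python
def provisionable_ram_gib(physical_ram_gib: float, overcommit_ratio: float) -> float:
    """Return the total RAM, in GiB, that can be provisioned to VMs,
    given the combined physical RAM of all compute nodes and the
    cluster-wide RAM overcommitment ratio."""
    return physical_ram_gib * overcommit_ratio

# Hypothetical cluster: 3 nodes with 256 GiB each, ratio 1.5
total_physical = 3 * 256  # 768 GiB
print(provisionable_ram_gib(total_physical, 1.5))  # -> 1152.0
```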

  • [Compute] Persistent SSL certificate. A custom SSL certificate used for secure communication with a highly available cluster will not be overwritten after changing the high availability configuration.

  • [Compute] New balancing protocol for load balancers. Added support for the UDP protocol for load balancers in the self-service panel.
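
    For users with API access, the self-service operation has a CLI equivalent in the OpenStack load-balancer (Octavia) plugin; a sketch, with placeholder names, subnet, and port:

```shell
# Create a load balancer, then attach a UDP listener and a UDP pool to it.
# "lb1", "private-subnet", and port 53 are placeholders.
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
openstack loadbalancer listener create --name listener1 \
  --protocol UDP --protocol-port 53 lb1
openstack loadbalancer pool create --name pool1 \
  --lb-algorithm ROUND_ROBIN --listener listener1 --protocol UDP
```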

  • [Compute] Security groups support. Cloud administrators can control incoming and outgoing traffic to virtual machines by assigning virtual machines to security groups. A security group is a set of firewall rules that are applied to virtual network adapters.
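
    A minimal sketch of the workflow with the OpenStack CLI (the group name, VM name, and CIDR are placeholders): create a group, add a firewall rule, and assign the group to a virtual machine:

```shell
# Create a security group that allows inbound SSH only from a trusted subnet.
openstack security group create --description "SSH from office" ssh-office
openstack security group rule create --ingress --protocol tcp \
  --dst-port 22 --remote-ip 203.0.113.0/24 ssh-office

# Apply the group's rules to a VM's virtual network adapters.
openstack server add security group my-vm ssh-office
```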

  • [Networking] Inbound firewall rules for nodes. Cluster administrators can now filter incoming traffic on cluster nodes by using fully customizable access rules. Configuring inbound firewall rules helps prevent access to the cluster from untrusted sources. Rules applied to a specific network or traffic type can restrict incoming traffic to individual IP addresses and subnet ranges.

  • [Storage core] Fencing of slow and pre-failed storage disks. Storage disks with low performance are now automatically detected and marked as slow. Slow disks are fenced off from cluster I/O to avoid degrading cluster performance. After receiving an alert, cluster administrators can troubleshoot the hardware problem or replace the slow disk before it fails.

  • [Monitoring and alerting] Fine-grained logging for virtual machines. Added more details to the system log. Now, it contains full log messages and audit log entries of operations with virtual machines.

  • [UI] Online node status detection. Improved the internal mechanism for determining node availability. Nodes are now displayed in the admin panel in their current state without any delay. When a node status changes, an alert is generated and stored in the alert log.

  • [Other enhancements] Worker groups support for Kubernetes clusters. Added Kubernetes worker groups that enable creating multiple worker nodes with different numbers of CPUs and amounts of RAM within a single Kubernetes cluster. Workers with different flavors help meet the system requirements of applications running in Kubernetes clusters.
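
    For environments that expose the OpenStack container infrastructure (Magnum) CLI plugin, adding a worker group with a different flavor can be sketched as follows (the cluster, group, and flavor names are placeholders):

```shell
# Add a worker group with a larger flavor to an existing Kubernetes cluster.
openstack coe nodegroup create --node-count 2 --flavor large-flavor \
  my-cluster big-workers

# List all worker groups of the cluster.
openstack coe nodegroup list my-cluster
```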

3. Important Notes

  • Kubernetes version 1.15 will be deprecated in future releases. Use the currently supported version 1.18 when planning your containerized environments.

  • Fibre Channel bus adapters are no longer supported. We are discontinuing support of Fibre Channel as an option when creating iSCSI target groups.

  • Legacy iSCSI targets (created in 2.4 and earlier versions) are deprecated. We are discontinuing support for TGTD-based iSCSI targets. Such targets are marked in the admin panel as legacy because they do not support the ALUA mode and their LUNs are not highly available. To enable high availability for them, detach a volume from an older target group and attach it to a newly created one.

  • Erasure coding redundancy change. Changing the redundancy scheme is only possible for backup storage. If you have ever changed the encoding scheme for your backup storage cluster with the help of the technical support team, re-apply your redundancy settings in version 4.5 to ensure that all data is encoded with the selected scheme.

4. Bug Fixes

  • [Updates] Failed to complete an upgrade from version 3.5.5 due to an unsafe PostgreSQL restart. (VSTOR-39354)

  • [Updates] A software update task may block recovery of other tasks. (VSTOR-39344)

  • [Updates] Validation fails while upgrading a high availability cluster from version 4.0.0-734. (VSTOR-37858)

  • [Compute service] Fixed an issue with the load balancer service running as a WSGI application. (VSTOR-37514)

  • [Compute service] Failed to convert a VMDK image to the QCOW2 format while uploading via the admin panel. (VSTOR-39535)

  • [Compute service] The load balancer service uses public endpoints instead of the internal ones. (VSTOR-37396)

  • [Compute service] noVNC 1.1.0 does not provide a token in a request to websockify. (VSTOR-37855)

  • [Compute service] The networking service crashed unexpectedly because the libvirt domain was running on the wrong node. (VSTOR-40363)

  • [Compute service] The billing metering service upgrade fails if ‘gnocchi-storage-config’ is empty. (VSTOR-38060)

  • [Compute service] PostgreSQL fails when the root partition has insufficient free space. (VSTOR-37898)

  • [Compute service] Due to stale allocations, resource providers cannot be deleted. (VSTOR-37844)

  • [Compute service] The billing metering service creates a large number of small files on the storage, thus affecting the MDS performance. (VSTOR-39003)

  • [Compute service] The orchestration service uses a public keystone endpoint for internal communications. (VSTOR-37793)

  • [Compute service] Trial license keys for six months and one year are reported as invalid. (VSTOR-37289)

  • [Compute service] The number of subscriptions in the Redis server may become too large. (VSTOR-37487)

  • [Compute service] The compute creation wizard does not check availability for load balancer and Kubernetes repositories. (VSTOR-33894)

  • [Compute service] Load balancer creation fails if a VM without an IP address is added to the member list. (VSTOR-39489)

  • [Compute service] The block storage service stops sending lock heartbeats after any connection issue. (VSTOR-37608)

  • [Compute service] Cannot change the IP configuration of OVS bridge interfaces. (VSTOR-37399)

  • [User interface] Impossible to resize more columns in the table components. (VSTOR-31985)

  • [User interface] The management node does not return a clear backup status if high availability is enabled. (VSTOR-32254)

  • [Installer] In the installation wizard, it is not possible to turn on the network time if it was turned off at the previous step. (VSTOR-30581)

  • [Monitoring and alerting] During node availability detection, an incorrect schedule interval is used to calculate the expiration date. (VSTOR-33502)

5. Known Issues

  • The deployment of compute add-on services fails due to unset environment variables. (VSTOR-30850)

  • A Kubernetes cluster cannot be created on a physical network without DHCP. (VSTOR-38799)

  • The built-in keystone authorization does not work in Kubernetes. (VSTOR-32458)

  • The soft anti-affinity policy for Kubernetes and load balancer VMs is used in the high availability mode. (VSTOR-30671)

  • A placement cannot be selected after VM creation. (VSTOR-40292)

  • An unclear error message is shown in the admin panel during compute cluster creation. (VSTOR-33893)

  • No error message is shown when a live migration fails. (VSTOR-39553)

  • An SSD disk is not recognized if it is managed by specific disk controllers. (VSTOR-36155)

  • An automatic update during node installation can break checking for updates. (VSTOR-38763)

  • An SNMP trap is not sent when a network interface is down. (VSTOR-32192)

6. Installing the Update

You can upgrade Virtuozzo Hybrid Infrastructure 4.0 to 4.5 in the SETTINGS > UPDATE section. A reboot is required to complete the upgrade. Upgraded nodes will be rebooted automatically, one at a time. During the reboot, the storage service and the admin panel might be unavailable on cluster configurations without service or data redundancy.

The JSON file with the list of new and updated packages is available at https://docs.virtuozzo.com/vza/VZA-2021-007.json.