Virtuozzo Hybrid Infrastructure 5.4 Update 3 (5.4.3-100)
Issue date: 2023-07-03
Applies to: Virtuozzo Hybrid Infrastructure 5.4
Virtuozzo Advisory ID: VZA-2023-017
1. Overview
In this release, Virtuozzo Hybrid Infrastructure delivers new features covering core storage, system configuration, updates, documentation, and the compute service. Additionally, this release delivers stability improvements and addresses issues found in previous releases.
2. New Features
[Compute service] Support for a clean installation of Kubernetes version 1.25.
[Core storage] Node recovery. After a system disk failure, you can now recover the system disk and the stored data by reinstalling Virtuozzo Hybrid Infrastructure from an ISO image.
[Core storage] Reduced memory consumption of the metadata service in large clusters. The metadata service has been optimized to use less RAM in very large clusters, by approximately 50% in some cases.
[System configuration] Rollback to the previous version. In case of an unsuccessful upgrade to the latest version, you can roll back the configuration of the core and backup storage services to the previous working version.
[Updates] Improved eligibility checks for updates. Added a hardware compatibility check that detects unsupported and unmaintained adapters. During an upgrade, the system now checks the hardware configuration of the nodes and reports any such adapters.
[Documentation] Expanded the Benchmarking and performance guide. Added more information about storage performance troubleshooting to the product documentation. The new sections explain how to configure clusters for specific use cases, troubleshoot performance issues, and identify the performance limits of specific operations and the factors that affect performance. (An illustrative baseline benchmark is sketched after this list.)
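The following is a minimal, generic sketch of the kind of baseline disk benchmark discussed in the guide, using the fio tool; the test file path, block size, and run time are placeholders and are not taken from the product documentation:

    # Random-write baseline with direct I/O; adjust the path and parameters to your environment.
    fio --name=randwrite-baseline --filename=/mnt/test/fio.bin --rw=randwrite \
        --bs=4k --iodepth=32 --numjobs=4 --size=1G --runtime=60 --time_based \
        --ioengine=libaio --direct=1 --group_reporting

Compare the reported IOPS and latency against the performance limits described in the new documentation sections.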
3. Important Notes
Starting from the next major release, Virtuozzo Hybrid Infrastructure will run on Linux kernel 5.x. To be able to update your cluster without service downtime, your hardware must be supported. Before the update, manually run the hardware compatibility check command-line tool to ensure that your nodes have no unsupported hardware adapters (a generic way to list adapters is sketched after these notes).
Starting from this release, compute nodes are automatically returned to operation after a reboot following a hardware node failure.
Single-MDS clusters and management nodes with disabled high availability cannot be recovered. To avoid data loss, the primary management node can only be recovered during an upgrade to 6.0. Recovered nodes can only be updated to newer versions with the help of the technical support team.
The configuration rollback is not possible for the object storage, file storage, and iSCSI services.
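The exact name of the hardware compatibility check tool is not repeated here; as a rough manual cross-check, you can list the network and storage adapters on each node and compare them against the hardware compatibility list. A minimal generic example, assuming standard Linux utilities:

    # List network and storage controllers with vendor and device IDs;
    # this is only a manual cross-check, not a replacement for the built-in compatibility check.
    lspci -nn | grep -Ei 'ethernet|network|raid|sas|scsi|nvme|fibre'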
4. Bug Fixes
A node with a replaced network interface card cannot connect to a cluster. (VSTOR-65772)
Improved the chunk balancing logic. (VSTOR-65774)
The S3 service drops requests that do not match the configured server name. (VSTOR-67214)
A cluster update gets stuck due to duplicate compute endpoint entries in the database. (VSTOR-67420)
Cannot manage a load balancer because the listener is stuck with the ‘PENDING_UPDATE’ status. (VSTOR-67456)
Fixed the alphabetical sorting of projects in the self-service panel. (VSTOR-67541)
During a failover, the high availability IP address can be set on two nodes simultaneously for several minutes. (VSTOR-67610)
The self-service panel does not allow connecting a virtual machine to a network that has the ‘access_as_shared’ permission (see the note after this list). (VSTOR-67924)
After an MTU update, a connection through a floating IP address breaks. (VSTOR-68406)
After being shelved, a virtual machine can have an incorrect ‘Active’ status. (VSTOR-68560)
After a number of retries, Gnocchi redeployment hangs indefinitely instead of stopping and returning an error. (VSTOR-68976)
The physical network tab does not show more than 40 domains and projects. (VSTOR-69050)
For the S3 service, an incorrect error ‘500 Internal Server Error’ is returned instead of ‘400 Bad Request’. (VSTOR-69105)
The OpenStack Block Storage (Cinder) Volume agent is down on some nodes after an update. (VSTOR-69330)
S3 geo-replication does not work with buckets if a bucket name has uppercase letters. (VSTOR-69471)
Fixed the false-positive alert ‘Disk cache settings are not optimal’ and inability to assign disks in certain scenarios. (VSTOR-69762)
Fixed the error ‘Volume <VOLUME_ID> is attached to unexpected servers’ that occurs during live migration of virtual machines. (VSTOR-69773)
An evacuation task is stuck in the running state and affects other actions, such as unfencing a node. (VSTOR-69821)
It is impossible to view the usage statistics of a load balancer’s balancing pools. (VSTOR-69831)
When an e-mail address is used to set an S3 quota for a single user, a default quota is set instead. (VSTOR-70383)
Important stability and performance improvements for S3, core storage, compute, and the admin panel. (VSTOR-59507, VSTOR-67164, VSTOR-68116, VSTOR-68981, VSTOR-69123, VSTOR-69164, VSTOR-69222, VSTOR-69463, VSTOR-69819, VSTOR-69884, VSTOR-70402)
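For context on VSTOR-67924 above: ‘access_as_shared’ is the standard OpenStack network RBAC action that shares a network with selected projects. A minimal generic sketch of how such a permission is typically granted with the OpenStack CLI, assuming placeholder project and network names:

    # Share the network 'private-net' with the project 'demo-project'
    # (both names are placeholders; requires configured OpenStack CLI credentials).
    openstack network rbac create --target-project demo-project \
        --action access_as_shared --type network private-net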
5. Known Issues
After a node deployment, the following false-positive alert appears in the node details: ‘Some disks stop responding and degrade the cluster performance. The disks are isolated from the cluster I/O. To troubleshoot the problem, check the disk connectivity, S.M.A.R.T. status, and dmesg output on the node.’ (VSTOR-70760)
After three consecutive crashes, the shaman service moves to the suspended state and needs to be resumed manually once the problem is resolved. (VSTOR-70746)