Product release: Virtuozzo Infrastructure Platform 2.5

Issue date: 2019-01-15

Applies to: Virtuozzo Infrastructure Platform 2.5

Virtuozzo Advisory ID: VZA-2019-017

1. Overview

This product was formerly known as Virtuozzo Storage. With this release, Virtuozzo Infrastructure Platform offers a wide range of new features for compute virtualization and software-defined networking, as well as enhancements and stability improvements. It also addresses issues found in the previous releases.

2. New Features

  • Compute virtualization. Run virtual machines on Virtuozzo Infrastructure Platform nodes in the hyper-converged mode (storage and compute on the same node) or the traditional way (storage and compute on separate nodes). Virtual machine management: run, resize, and migrate virtual machines, and open a console to them. Private software-defined networking for virtual machines (VXLAN). Storage policies for virtual machines. Easy-to-use data redundancy options for virtual machine volumes. Easy-to-configure high availability for the compute service and virtual machines. Supported guest operating systems: CentOS 6, CentOS 7, RHEL 6, RHEL 7, Debian 9, Ubuntu 16.04, Ubuntu 18.04, Windows 7, Windows 8.1, Windows 10, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, Windows Server 2019. Full Windows guest support with automatic installation of paravirtualization drivers (virtio) during installation from ISO.

  • iSCSI targets get better performance and high availability via multi-path. The new iSCSI target subsystem and high availability engine reduce downtime during node failures by 2-4 times, making failovers barely noticeable to end-user applications. The subsystem uses Asymmetric Logical Unit Access (ALUA) in the Active-Passive mode (see the configuration sketch after this list).

  • Fibre Channel support. Export block storage over Fibre Channel. See the Installation Guide for the list of supported cards.

  • RoCE, InfiniBand, and iWARP RDMA support. Up to 25% lower I/O latency and reduced CPU utilization on InfiniBand, RoCE (RDMA over Converged Ethernet), and iWARP (Internet Wide-Area RDMA Protocol).

  • New comprehensive monitoring: Built-in monitoring with pre-configured Prometheus and Grafana. Grafana dashboards for nodes, disks, network, latency, performance, and storage services. New charts with tooltips in the admin panel, including I/O latency charts and new physical and logical space charts. Zoomable charts on dashboards: from 30 minutes to 1 week. Virtual machine performance monitoring: CPU, RAM, disks, and networking (see the sample Prometheus query after this list).

  • Performance improvements for all-flash configurations. Get more IOPS with multi-threaded I/O on all-flash clusters. The new fast-path approach in the core of Virtuozzo Infrastructure Platform reduces latency and implements fast multi-threaded I/O handling in the Linux kernel.

  • New comprehensive command-line tool. Support for more operations. Unified output for all commands: as a text table, in JSON, and in XML (see the example session after this list).

  • Built-in ReadyKernel eliminates system update downtime. Based on the kpatch technology, ReadyKernel live-patches the running Linux kernel, applying kernel hotfixes and CVE fixes in seconds (see the example after this list).

  • New infrastructure networking. Simplifies cluster-wide traffic and firewall configuration. Traffic types are easily assigned to cluster networks on a single screen, which minimizes the chance of misconfiguration. New traffic types for the compute service and SNMP have also been added.

  • UI and UX improvements: A new navigation menu, new controls, and a fresh UI. The ability to send problem reports. Improved admin panel performance when listing resources of nodes with more than 250 local disks and 100 network interfaces.

  • Single repository for all components. Easy and integrated cluster updates from the unified RPM repository.

  • Other enhancements: A new high availability engine significantly improves cluster reaction time to node failures for all services. A new internal cluster DNS service improves discovery of cluster services. Better overall stability and performance.
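
To illustrate the multi-path setup mentioned above, a minimal dm-multipath configuration on a Linux initiator host that honors ALUA path priorities could look like the sketch below. The vendor and product strings are placeholders that must be matched to the actual SCSI inquiry data of the exported LUNs; none of the values are taken from this release.

    # /etc/multipath.conf fragment (illustrative only)
    devices {
        device {
            vendor               "VSTORAGE"      # placeholder, not a documented value
            product              ".*"
            path_grouping_policy group_by_prio   # group paths by ALUA priority
            prio                 alua            # derive priorities from ALUA states
            hardware_handler     "1 alua"        # let the kernel track ALUA transitions
            path_checker         tur             # TEST UNIT READY health checks per path
            failback             immediate       # fail back once the active path recovers
        }
    }

In the Active-Passive mode, only the optimized path group carries I/O; on a node failure, dm-multipath switches to the passive group as soon as its ALUA state changes.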
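
The monitoring stack ships pre-configured, so no manual setup is required. For readers who want to query the bundled Prometheus directly, an expression of the following shape returns a per-node 95th-percentile I/O latency; the metric and label names are hypothetical placeholders, as these notes do not list the exported metrics.

    # Metric and label names below are hypothetical placeholders.
    histogram_quantile(0.95,
      sum by (node, le) (rate(disk_io_latency_seconds_bucket[5m])))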
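
Because every command of the new command-line tool can emit machine-readable output, cluster state is easy to consume from scripts. The session below is illustrative only: the command name and flags are assumptions rather than documented syntax, which should be taken from the product documentation.

    # List cluster nodes as the default text table,
    # then as JSON for scripting (command name and flags are assumptions).
    vinfra node list
    vinfra node list --format json | jq '.[].host'   # field name is hypothetical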
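
ReadyKernel patches the running kernel in memory, so no reboot is needed. A typical live-patching session could look like the sketch below; the readykernel subcommand names are assumptions for illustration, not documented syntax.

    # Check for and apply a live patch for the running kernel
    # (subcommand names are assumptions, not documented syntax).
    readykernel check
    readykernel update
    # The kernel release string stays the same: the fix is applied in memory.
    uname -r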

3. Bug Fixes

  • Improved S3 cluster creation. Automatic high availability management for the object storage configuration service. (VSTOR-2948, VSTOR-8883)

  • A persistent iSCSI portal is now used during configuration. (VSTOR-18256)

4. Known Issues

  • High availability for iSCSI/FC does not work for the iSCSI initiator in Windows 7 and Windows 10 due to the lack of the Active-Active mode and persistent reservations. (VSTOR-5621, VSTOR-18121)

  • It is impossible to add a node to the management node high availability cluster if one of the nodes included in the management node high availability configuration is offline. As a workaround, remove all nodes from the management node high availability cluster and recreate it from scratch. (VSTOR-10950, VSTOR-16716, VSTOR-17690)

  • The “MDS + cache” disk role cannot be released without releasing all the corresponding cache disks or releasing the whole node. (VSTOR-11567)

  • It is not possible to cancel an ongoing migration. (VSTOR-12379)

  • An SPLA license may stop working with a “bad request” error if the local time is set to a value in the past. (VSTOR-12495)

  • In some cases, I/O may hang or cluster performance may degrade on iWARP cards after a node failure. (VSTOR-12872)

  • The chart zoom on disk performance graphs on the node screen cannot be reset to the initial state. (VSTOR-13622)

  • S.M.A.R.T. alerts for system disks are not shown in the admin panel. (VSTOR-13811)

  • The admin panel web page needs to be refreshed manually after creating the management node high availability configuration. (VSTOR-14800)

  • The browser page needs to be refreshed manually after flavor creation. (VSTOR-15252)

  • The compute overview may not work while nodes are being added to or released from the compute cluster. (VSTOR-15978)

  • The “System + Metadata” disk role is not available in the advanced mode during storage cluster creation or when joining nodes to a storage cluster. (VSTOR-16523)

  • It is not possible to scale down virtual machine RAM if the compute cluster has no free memory. (VSTOR-16644)

  • In some cases, a node may be shown as “offline” for a while after creating the management node high availability configuration. (VSTOR-16823)

  • A virtual machine might not obtain a DHCP address from a network that is not assigned to its adapter. (VSTOR-16839)

  • The “Network has undefined speed” alert is displayed for network interfaces with an unplugged link. (VSTOR-17286)

  • The storage dashboard and the compute overview may report different physical space values, because the compute overview also takes into account licensed space. (VSTOR-17297)

  • The browser cache and cookies need to be cleared manually after destroying the compute cluster. Otherwise, the newly created compute cluster will have an empty compute dashboard. (VSTOR-17752)

  • The user cannot release the master management node from the management node high availability configuration. (VSTOR-17852, VSTOR-18259)

  • The storage cluster name must be shorter than 50 characters. (VSTOR-17902)

  • The admin panel does not prevent migration of a virtual machine with a public NIC to a node that is not connected to an underlying public network. (VSTOR-17921)

  • It is not possible to use VLANs in virtual machines in the “private” backnet. (VSTOR-17943)

  • No error is shown on an attempt to migrate a virtual machine to a node with no free RAM. (VSTOR-18053)

  • There is no “crashed” state for virtual machines. Such virtual machines are displayed as “ACTIVE” even though they are no longer operational. (VSTOR-18054)

  • Virtual machine live migration to a node with a different CPU may fail without an error message. (VSTOR-18061)

  • No error message is shown when creating an NFS cluster on a node without the “NFS” traffic type. (VSTOR-18068)

  • Installation from ISO is possible only with the US keyboard layout. This may result in issues with installations via IPMI with non-US locales. (VSTOR-18277)

  • A node cannot be released from the compute cluster if it is included in the management node high availability configuration. Release the node from the management node high availability configuration first, then release it from the compute cluster. (VSTOR-18299)

  • The user cannot reassign the “Compute private” and “Compute API” traffic types to other networks after the compute cluster has been deployed. (VSTOR-18491)

  • If only one network, “Private”, is used in a cluster, the “Admin panel” traffic type must be unassigned from the “Public” network before creating the management node high availability configuration on top of that single “Private” network. (VSTOR-18730)

  • In some cases, the “vstoradmin” user needs to be added to the “vstorage-users” group manually (“usermod -a -G vstorage-users vstoradmin; systemctl restart vstorage-ui-backend”) after adding a new node to the management node high availability configuration; see the commented sequence below. (VSTOR-19274)
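
For convenience, the workaround is shown below as a commented sequence; the commands are taken verbatim from the issue description.

    # Add the vstoradmin user to the vstorage-users group...
    usermod -a -G vstorage-users vstoradmin
    # ...then restart the admin panel backend to pick up the new membership.
    systemctl restart vstorage-ui-backend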

The JSON file is available at https://docs.virtuozzo.com/vza/VZA-2019-017.json.