8.6. Managing CPU Pools¶
Warning
This feature is experimental. Libvirt may not be aware of new CPU features that are already in use in CPU pools, which can lead to issues with migration to destination nodes that do not have these unreported CPU features. In addition, CPU pools only work on Intel Ivy Bridge and newer CPUs; they do not work on AMD processors.
In Virtuozzo Hybrid Server, you can avoid stopping virtual environments on a node (e.g., for node maintenance) by temporarily migrating them live to another node. For live migration to be possible, the CPUs on the source and destination nodes must be manufactured by the same vendor, and the CPU features of the destination node must be the same as or exceed those of the source node.
Such a requirement may lead to two issues:
If the target node has more CPU features than the source node, live migration back to the source node will not be possible.
If a node in a high availability cluster fails and its virtual environments are relocated to another node, that destination node may have a CPU from a different vendor or with a different set of features. This will prevent live migration back to the original node when it comes back online.
CPU pools solve these two issues by dividing your Virtuozzo Hybrid Server nodes into groups (pools) in which live migration between any two nodes is always guaranteed. This is achieved by determining CPU features common for all nodes in a pool and masking (disabling) the rest of the CPU features on nodes that have more of them. So a CPU pool is a group of nodes with equal CPU features.
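To get a feel for what such a mask is built from, you can compare the raw CPU feature flags of two nodes. A minimal sketch, assuming plain shell tools (the /tmp/node1.flags and /tmp/node2.flags file names are placeholders for this example). On each node, collect the sorted flag list:
# grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sort -u > /tmp/node1.flags
Then, with both files copied to one node, print the flags common to both:
# comm -12 /tmp/node1.flags /tmp/node2.flags
Conceptually, this intersection is the kind of common feature set a CPU pool preserves; the cpupools tool computes and applies the actual mask itself.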
Note the following:
Adding nodes with the same CPUs to different CPU pools does not prevent live migration between such nodes.
If the xsave instructions supported by the CPUs on the source and destination nodes use different buffer sizes, migration will fail.
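The XSAVE area sizes a CPU advertises come from CPUID leaf 0Dh. As a rough way to compare them between nodes, assuming the third-party cpuid utility is installed (the tool is not part of Virtuozzo Hybrid Server):
# cpuid -1 -l 0xd -s 0
Subleaf 0 reports the XSAVE area sizes for the enabled state components; if the reported sizes differ between the source and destination nodes, expect the migration failure described above.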
8.6.1. Adding Nodes to CPU Pools¶
Note
Nodes with CPUs from different vendors cannot be added to the same CPU pool.
A node that is to be added to a CPU pool must not have any running virtual machines or containers on it. To meet this requirement while avoiding virtual environment downtime, you can migrate all running virtual machines and containers live to a different node (and migrate them back live after the node has been added to a pool).
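For example, assuming a running virtual machine named MyVM and a destination node at 10.29.26.130 (both names are illustrative), the live migration could look like this:
# prlctl migrate MyVM 10.29.26.130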
The easiest way to add a node to a CPU pool is to run the following command on it:
# cpupools join
The node will be added to a default CPU pool.
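To check which default pool the node has landed in, run cpupools stat (described in Monitoring CPU Pools below) and look for the node's ID in the output:
# cpupools stat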
Default pools have the following features and limitations:
The naming pattern is default_{intel}N, e.g., default_intel0, default_intel1, etc.
A preset, unchangeable basic CPU feature mask provides maximum hardware compatibility at the expense of advanced CPU features. Different CPU feature masks are used for different CPU vendors.
Nodes which do not support the basic CPU feature mask are placed in different default CPU pools, e.g., default_intel1, default_intel2, etc.
Nodes cannot be added to specific default CPU pools on purpose.
For best performance, you can make sure that as many common CPU features as possible are enabled for the nodes in a pool by moving the required nodes to a custom CPU pool. To do this:
On the node to be added to a custom CPU pool, run the cpupools move command. For example:
# cpupools move mypool
The node will be moved to the CPU pool mypool. If the CPU pool does not exist, it will be created.
Note
Custom CPU pools are created with the same basic CPU feature mask as default pools.
Custom CPU pools are created with the same basic CPU feature mask as default pools.
On any node in the new pool, run the cpupools recalc command to update the CPU feature mask and make sure that as many common CPU features as possible are enabled. For example:
# cpupools recalc mypool
Now that the node is in the desired CPU pool, you can migrate its virtual machines and containers back live.
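Continuing the illustrative example above, a virtual machine temporarily moved to 10.29.26.130 could be pulled back with:
# prlctl migrate 10.29.26.130/MyVM localhost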
The general recommendation is to group nodes with CPUs of similar microarchitecture, generation, or family, as such CPUs have similar feature sets. This way, most CPU features will remain available on the nodes after the CPU feature mask is applied to the pool. This approach helps ensure the best possible performance for the nodes while guaranteeing live migration compatibility.
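A quick way to see which microarchitecture and family a node's CPU belongs to is, for example:
# lscpu | grep -E 'Model name|CPU family'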
8.6.2. Monitoring CPU Pools¶
To see which CPU pools exist in your cluster and which nodes are in them, run the cpupools stat command on any node in the cluster. For example:
# cpupools stat
default_intel0:
320117e17894401a
bec9df1651b041d8
eaea4fc0ddb24597
mypool:
ca35929579a448db
* f9f2832d4e5f4996
The identifiers listed are Virtuozzo Storage node IDs, which you can obtain with the shaman -v stat command. For example:
# shaman -v stat
Cluster 'vstor1'
Nodes: 5
Resources: 1
NODE_IP STATUS NODE_ID RESOURCES
10.29.26.130 Active bec9df1651b041d8 0 CT
* 10.29.26.134 Active f9f2832d4e5f4996 0 CT
10.29.26.137 Active ca35929579a448db 0 CT
10.29.26.141 Active 320117e17894401a 0 CT
M 10.29.26.68 Active eaea4fc0ddb24597 1 CT
...
Note
The asterisk marks the current node (on which the command has been run).
8.6.3. Removing Nodes from CPU Pools¶
To remove the current node from a CPU pool, run the cpupools leave command on it:
# cpupools leave
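Afterwards, the node's ID should no longer be listed under any pool in the cpupools stat output, which you can verify from any remaining node in the cluster:
# cpupools stat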