Monitoring compute nodes

You can monitor the compute node status on the Compute > Nodes screen. Nodes in the compute cluster can have the following statuses:

Healthy
The node operates normally.
Configuring
The node configuration (the default CPU model for VMs or the compute role) is changing.
Fenced
The node has become available after a failure, and it is now fenced from scheduling new VMs on it.
Critical
The node has encountered a critical problem and is not operational.
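You can also check node states from the command line. A minimal example, assuming the vinfra CLI is available on the node you are logged in to:

# vinfra service compute node list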

To check compute services on a node

Admin panel

On the Infrastructure > Nodes screen, click the line with a compute node. On the right pane, the Compute services tab provides information about the compute controller and worker services deployed on the node. Healthy compute services are highlighted in green, failed services in red, and services disabled on a fenced node in yellow.

Command-line interface

Use the following command:

vinfra service compute node show <node>
<node>
Node ID or hostname

For example, to view the details of the compute node node001, run:

# vinfra service compute node show node001
+----------------+------------------------------------------+
| Field          | Value                                    |
+----------------+------------------------------------------+
| fenced_reason  |                                          |
| host           | node001.vstoragedomain                   |
| host_ip        | 192.168.128.113                          |
| hypervisor     | hypervisor_type: QEMU                    |
|                | id: f36f9331-11a8-43c9-a90b-dbda9bdf9a00 |
|                | is_evacuating: false                     |
|                | state: up                                |
|                | status: enabled                          |
|                | vms: 0                                   |
| id             | 52565ca3-5893-8f6b-62ce-2f07b175b549     |
| in_maintenance | False                                    |
| orig_hostname  | node001                                  |
| placements     | []                                       |
| roles          | - controller                             |
|                | - compute                                |
| services       | - name: cinder-scheduler                 |
|                |   state: healthy                         |
|                | - name: cinder-volume                    |
|                |   state: healthy                         |
|                | - name: neutron-dhcp-agent               |
|                |   state: healthy                         |
|                | - name: neutron-l3-agent                 |
|                |   state: healthy                         |
|                | - name: neutron-metadata-agent           |
|                |   state: healthy                         |
|                | - name: neutron-openvswitch-agent        |
|                |   state: healthy                         |
|                | - name: nova-compute                     |
|                |   state: healthy                         |
|                | - name: nova-conductor                   |
|                |   state: healthy                         |
|                | - name: nova-scheduler                   |
|                |   state: healthy                         |
| state          | healthy                                  |
+----------------+------------------------------------------+
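If you only need a quick health check, you can filter this output with standard shell tools. The sketch below counts the services that report the healthy state; it relies on the YAML-style layout shown above, and the expected count (9 for the controller and compute roles in this example) depends on the services deployed on the node:

# vinfra service compute node show node001 | grep -c 'state: healthy'
9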

To view compute node details

Admin panel

On the Compute > Nodes screen, click a compute node. You can view the following compute node information:

  • Virtual CPU and RAM reservations:

    • Reserved for the system and storage services
    • Provisioned to virtual machines located on the node
    • Free virtual CPUs and RAM left on the node

    The number of virtual CPUs is the product of the number of physical CPUs on a node and the CPU overcommitment ratio; likewise, the amount of RAM is the product of the amount of physical RAM on a node and the RAM overcommitment ratio. A worked example based on the sample output shown later in this section is given after this list. To learn more about physical CPU and RAM reservations for system and storage services, refer to Server requirements.

  • Hosted virtual machines and their resource consumption
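For example, in the sample --with-stats output shown later in this section, the node has 4 physical CPU cores and 32 virtual CPUs in total, which corresponds to a CPU overcommitment ratio of 8 (the ratio is inferred from the sample values; your configured ratio may differ):

4 physical cores x 8 (CPU overcommitment ratio) = 32 virtual CPUs in total
32 virtual CPUs - 24 reserved for system and storage services = 8 virtual CPUs free for VMs
25110126592 bytes of RAM - 21378670592 bytes reserved = 3731456000 bytes available to VMs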

Command-line interface

Use the following command:

vinfra service compute node show <node> --with-stats
<node>
Node ID or hostname
--with-stats
Get node information with statistics

For example, to view the details of the compute node node001, run:

# vinfra service compute node show node001 --with-stats
+----------------+------------------------------------------+
| Field          | Value                                    |
+----------------+------------------------------------------+
| fenced_reason  |                                          |
| host           | node001.vstoragedomain                   |
| host_ip        | 192.168.128.113                          |
| hypervisor     | hypervisor_type: QEMU                    |
|                | id: f36f9331-11a8-43c9-a90b-dbda9bdf9a00 |
|                | is_evacuating: false                     |
|                | state: up                                |
|                | status: enabled                          |
|                | vms: 0                                   |
| id             | 52565ca3-5893-8f6b-62ce-2f07b175b549     |
| in_maintenance | False                                    |
| orig_hostname  | node001                                  |
| placements     | []                                       |
| roles          | - controller                             |
|                | - compute                                |
| services       | - name: cinder-scheduler                 |
|                |   state: healthy                         |
|                | - name: cinder-volume                    |
|                |   state: healthy                         |
|                | - name: neutron-dhcp-agent               |
|                |   state: healthy                         |
|                | - name: neutron-l3-agent                 |
|                |   state: healthy                         |
|                | - name: neutron-metadata-agent           |
|                |   state: healthy                         |
|                | - name: neutron-openvswitch-agent        |
|                |   state: healthy                         |
|                | - name: nova-compute                     |
|                |   state: healthy                         |
|                | - name: nova-conductor                   |
|                |   state: healthy                         |
|                | - name: nova-scheduler                   |
|                |   state: healthy                         |
| state          | healthy                                  |
| statistics     | compute:                                 |
|                |   block_capacity: 0                      |
|                |   block_usage: 0                         |
|                |   cpu_usage: 0.0                         |
|                |   vcpus: 0                               |
|                |   vcpus_free: 8                          |
|                |   vm_mem_capacity: 3731456000.0          |
|                |   vm_mem_free: 3731456000.0              |
|                |   vm_mem_reserved: 0                     |
|                |   vm_mem_usage: 0                        |
|                | datetime: '2023-01-10T13:04:18.280858'   |
|                | physical:                                |
|                |   cpu_cores: 4                           |
|                |   cpu_usage: 14.212499999994177          |
|                |   mem_free: 534638592                    |
|                |   mem_total: 25110126592                 |
|                |   swap_free: 0                           |
|                |   swap_total: 0                          |
|                |   vcpus_total: 32                        |
|                | reserved:                                |
|                |   cpus: 3                                |
|                |   memory: 21378670592                    |
|                |   vcpus: 24                              |
+----------------+------------------------------------------+
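The memory values in the statistics appear to be reported in bytes. If you need a particular value in a human-readable form, you can extract it with standard shell tools. This is a minimal sketch based on the table layout shown above; the field position ($4) is an assumption about that layout, not a vinfra option:

# vinfra service compute node show node001 --with-stats | grep 'mem_total' | awk '{printf "%.1f GiB\n", $4/1024/1024/1024}'
23.4 GiB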