Managing high availability configuration

Once the management node high availability (HA) configuration is created, it includes three nodes and protects your infrastructure from the failure of one management node. You can also expand the HA configuration to up to five nodes, to keep your cluster operational even if two management nodes fail simultaneously.

Though it is possible to have four nodes in the management node HA, such a configuration should be considered intermediate: an odd number of nodes is highly recommended, because HA relies on a majority quorum, and a four-node configuration tolerates no more failures than a three-node one.

Depending on the number of nodes, management of the HA configuration may differ:

  • With a three-node configuration, that is, with the minimum number of nodes, removing a node from the HA configuration is not possible without destroying the management node HA. If one of the HA nodes fails, you need to replace it with a healthy one.
  • With a five-node configuration, you can remove up to two nodes without destroying the management node HA. If one of the HA nodes fails, you can either replace it with a healthy one in one iteration, or you can remove the failed node, and then add another one instead.

When replacing a node in the HA configuration, you need to consider its role in the compute cluster:

  • A node that is added to the compute cluster and used to host virtual machines has the Worker role. In the vinfra command-line tool, this role is called compute.
  • A management node or a node that is included into the HA configuration has the Controller role. Such nodes are automatically added to the compute cluster, but they are not used to host VMs.
  • A node that is not added to the compute cluster is called a storage node.

The replacement options depend on the Worker role, that is, on whether a node belongs to the compute cluster. Worker nodes can only be replaced with other nodes from the compute cluster. Nodes without the Worker role can be replaced with either compute or storage nodes. The figure below shows all possible replacement options in the five-node HA configuration.
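Before planning a replacement, you can check which nodes belong to the compute cluster and therefore hold the Worker role. A possible check, assuming your deployment provides the `vinfra node list` and `vinfra service compute node list` commands (verify the exact command names for your vinfra version):

```shell
# List all nodes in the cluster, with their IDs and hostnames
vinfra node list

# List the nodes that belong to the compute cluster; nodes missing
# from this output are storage nodes and can only replace HA members
# that do not hold the Worker role
vinfra service compute node list
```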

Destroying the HA configuration is required for network migration, which enables changing the network configuration.

Limitations

  • A node that is used by the backend services cannot be replaced in the HA configuration.
  • If one or more management nodes enter maintenance mode, a failure of another management node may affect the high availability of the cluster.
  • While the management node HA is being destroyed, management of the compute cluster may be unavailable.

Prerequisites

To add nodes to the HA configuration

Admin panel

  1. Go to Settings > System settings > Management node high availability.
  2. In the High availability nodes section, click Options, and then click Add node.
  3. In the Add node window, select one or two nodes to be added to the HA configuration, and then click Add.

Command-line interface

Use the following command:

vinfra cluster ha node add --nodes <nodes>
--nodes <nodes>
A comma-separated list of node IDs or hostnames

For example, to add the nodes node001 and node002 to the HA configuration, run:

# vinfra cluster ha node add --nodes node001,node002
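After adding nodes, it is worth confirming that the configuration now lists the expected members. A minimal check, assuming your vinfra version provides the `vinfra cluster ha show` command (verify with `vinfra cluster ha --help`):

```shell
# Display the current HA configuration, including its member nodes
# and virtual IP addresses
vinfra cluster ha show
```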

To replace nodes in the HA configuration

Admin panel

  1. Go to Settings > System settings > Management node high availability.
  2. Click the ellipsis icon next to the node that you wish to replace in the HA configuration, and then click Replace.
  3. In the Replace node window, select the node that will be added to the HA configuration instead of the removed node, and then click Replace.

Once an offline node is removed from the high availability configuration, it remains removed even after it becomes available again.

Command-line interface

Use the following command:

vinfra cluster ha update [--virtual-ip <network:ip>] [--nodes <nodes>] [--force]

--virtual-ip <network:ip>

HA configuration mapping in the format:

  • network: network to include in the HA configuration (must include at least one of these traffic types: Internal management, Admin panel, Self-service panel, or Compute API).
  • ip: virtual IP address that will be used in the HA configuration.

Specify this option multiple times to create an HA configuration for multiple networks.

--nodes <nodes>
A comma-separated list of node IDs or hostnames
--force
Skip checks for minimal hardware requirements

For example, to update the management node HA configuration so that it includes the nodes node001, node002, and node005, run:

# vinfra cluster ha update --nodes node001,node002,node005
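Because --virtual-ip can be specified multiple times, one command can assign a separate virtual IP to each network. A sketch, assuming networks named Private and Public exist in your configuration (the network names and IP addresses below are placeholders):

```shell
# Assign one virtual IP address per network; repeat --virtual-ip
# once for each network included in the HA configuration
vinfra cluster ha update \
    --virtual-ip Private:10.10.10.100 \
    --virtual-ip Public:203.0.113.100
```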

To remove nodes from the HA configuration

Admin panel

  1. Go to Settings > System settings > Management node high availability.
  2. Click the ellipsis icon next to the node that you wish to remove from the HA configuration, and then click Remove.
  3. In the confirmation window, click Remove.

Command-line interface

Use the following command:

vinfra cluster ha node remove [--force] <node>

--force
Skip the compute cluster state check and forcibly remove the node(s). This option is required when removing multiple nodes or offline nodes.
<node>
Node ID(s) or hostname(s) to be removed. Note that the HA configuration must have at least three nodes to remain operational.

For example, to remove the nodes node002 and node005 from the HA configuration, run:

# vinfra cluster ha node remove node002 node005 --force

To destroy the HA configuration

Admin panel

  1. Go to Settings > System settings > Management node high availability.
  2. In the High availability nodes section, click Options, and then click Destroy HA configuration.
  3. In the confirmation window, click Destroy.

Once the high availability configuration is destroyed, you can log in to the admin panel at the IP address of the management node, on the same port 8888.

Command-line interface

Use the following command:

vinfra cluster ha delete