Changing network configuration
You can change your network configuration and the IP addresses assigned to cluster nodes by using network migration.
Limitations
- DHCP can be enabled for the source network but must be disabled for the target network. After migration, IP addresses obtained via DHCP will become static.
- Migration from IPv4 to IPv6 is not supported.
Prerequisites
- All of the connected node interfaces are online.
- Each network interface has only one IP address.
- High availability is disabled, as described in Managing high availability configuration. You can enable high availability later, if required.
- If a network is the default gateway network, all nodes connected to it must use the same default gateway.
- If you have restricted outbound traffic in your cluster, you need to manually add a rule that will allow outbound traffic on TCP and UDP ports 60000–60100, as described in Configuring outbound firewall rules.
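For illustration only, such a rule might look like the following sketch, which assumes your nodes use firewalld and its direct rules; the authoritative procedure is the one in Configuring outbound firewall rules:
# firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp --dport 60000:60100 -j ACCEPT
# firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p udp --dport 60000:60100 -j ACCEPT
# firewall-cmd --reload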
To migrate a network from the source configuration to the target one
Admin panel
- On the Infrastructure > Networks screen, click the cogwheel icon next to the network name.
- In the network summary window, click Migrate.
- In the Migrate network: <network name> window, review the current network configuration and the important information about potential risks, and edit the new network configuration, if required.
If you plan to move your cluster to another location, which implies a manual cluster shutdown, select Cluster relocation with shutdown is planned.
Then, click Next.
- On the next step, specify new IP addresses for cluster nodes, and click Try new configuration. Then, confirm your action by clicking Continue in the Try new configuration window.
- If you plan cluster relocation, you can shut down your cluster nodes and then turn them on in a new datacenter, as described in Shutting down and starting up the cluster. After cluster relocation, click Resume.
- Wait until the new configuration is created, and then click Apply.
While network migration is in progress, users cannot perform other tasks in the admin panel. Moreover, self-service users may not have access to the portal and will need to wait until the migration is complete.
- If the connectivity checks fail, fix the found issues and try again. If the specified new IP addresses are not available or not valid, you can change them in the wizard and click Retry. For other network issues, revert to your old network configuration by clicking Revert, fix the issue, and try again.
- Wait until the migration is complete on all the connected interfaces, and then click Done.
- If you migrate a network with the Internal management or VM private traffic type, manually restart all running virtual machines to be able to access them via the VNC console.
Command-line interface
- Start the network migration by using the following command:
vinfra cluster network migration start --network <network> [--subnet <subnet>] [--netmask <netmask>] [--gateway <gateway>] [--shutdown] [--node <node> <address>]
--network <network>
- Network ID or name
--subnet <subnet>
- New network subnet
--netmask <netmask>
- New network mask
--gateway <gateway>
- New network gateway
--shutdown
- Prepare the cluster to be shut down manually for relocation
--node <node> <address>
- New node address in the format <node> <address>, where:
<node>: node ID or hostname
<address>: IPv4 address
This option can be used multiple times.
For example:
# vinfra cluster network migration start --network "Private" \
--subnet 192.168.128.0 --netmask 255.255.255.0 --node node001 192.168.128.11 \
--node node002 192.168.128.12 --node node003 192.168.128.13
+----------------------------+--------------------------------------------------+
| Field                      | Value                                            |
+----------------------------+--------------------------------------------------+
| configuration              | network_id: 3e3619b7-2c93-4e90-a187-135c6f8b9060 |
| link                       | href: /api/v2/network/migration/2d4ec3a9-<...>/  |
|                            | method: GET                                      |
|                            | rel: network-migration-details                   |
| operation                  | network-migration                                |
| progress                   | 0.0                                              |
| single_interface_migration | False                                            |
| state                      | preparing                                        |
| task_id                    | 2d4ec3a9-7714-479d-a03c-1efbe6ffecf5             |
| transitions                | 0                                                |
+----------------------------+--------------------------------------------------+
- View the current network migration details. For example:
# vinfra cluster network migration show
+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| link                       | href: /api/v2/network/migration/2d4ec3a9-<...>/ |
|                            | method: GET                                     |
|                            | rel: network-migration-details                  |
| operation                  | network-migration                               |
| progress                   | 1.0                                             |
| single_interface_migration | False                                           |
| state                      | test-passed                                     |
| task_id                    | 2d4ec3a9-7714-479d-a03c-1efbe6ffecf5            |
| transitions                | 5                                               |
+----------------------------+-------------------------------------------------+
The output shows that the new network configuration has been tested and can be applied.
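If you want to poll the status from the shell until the state changes, you can wrap the show command in a standard loop, for example (a convenience sketch; it assumes watch and grep are available where you run vinfra):
# watch -n 10 'vinfra cluster network migration show | grep -E "progress|state"'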
- If you plan cluster relocation, you can shut down your cluster nodes and then turn them on in a new datacenter, as described in Shutting down and starting up the cluster. After cluster relocation, run:
# vinfra cluster network migration resume
- Continue the network migration and apply the new network configuration. For example:
# vinfra cluster network migration apply
- If you migrate a network with the Internal management or VM private traffic type, manually restart all running virtual machines to be able to access them via the VNC console.
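As a rough sketch, restarting all running virtual machines from the shell might look like the loop below. It assumes your build provides the vinfra service compute server list and reboot subcommands and that the second whitespace-separated column of the listing is the server ID; verify both against your vinfra version before running anything like this.
# vinfra service compute server list | grep ACTIVE | awk '{print $2}' | \
  while read vm; do vinfra service compute server reboot "$vm"; done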
If the connectivity checks fail, fix the found issues and try again. If the specified new IP addresses are not available or not valid, you can change them by using the following command:
vinfra cluster network migration retry [--subnet <subnet>] [--netmask <netmask>] [--node <node> <address>]
--subnet <subnet>
- New network subnet
--netmask <netmask>
- New network mask
--node <node> <address>
- New node address in the format <node> <address>, where:
<node>: node ID or hostname
<address>: IPv4 address
This option can be used multiple times.
For example:
# vinfra cluster network migration retry --subnet 192.168.128.0 \
--netmask 255.255.255.0 --node node001 192.168.128.12 --node node002 192.168.128.13 \
--node node003 192.168.128.14
+----------------------------+-------------------------------------------------+
| Field                      | Value                                           |
+----------------------------+-------------------------------------------------+
| link                       | href: /api/v2/network/migration/2d4ec3a9-<...>/ |
|                            | method: GET                                     |
|                            | rel: network-migration-details                  |
| operation                  | network-migration                               |
| progress                   | 0.9                                             |
| single_interface_migration | False                                           |
| state                      | failed-to-apply                                 |
| task_id                    | 2ce42f0e-6401-47c1-a52f-33e7c68d0df4            |
| transitions                | 5                                               |
+----------------------------+-------------------------------------------------+
For other network issues, revert to your old network configuration with vinfra cluster network migration revert, fix the issue, and try again.
To troubleshoot a failed migration
- Connect to your cluster via SSH.
- Investigate /var/log/vstorage-ui-backend/celery.log to find the root cause (see the example after this list).
- Fix the issue.
- Go back to the wizard screen and click Retry.
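For example, to skim the log for recent errors, you can filter it with standard tools on the node you connected to (the grep pattern is only an illustration of what to look for; actual messages vary):
# grep -iE "error|traceback" /var/log/vstorage-ui-backend/celery.log | tail -n 20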