Changing network configuration
You can change your network configuration and IP address assignment to cluster nodes by using network migration.
Limitations
- DHCP can be enabled for the source network but must be disabled for the target network. After migration, IP addresses obtained via DHCP will become static.
- Migration from IPv4 to IPv6 is not supported.
Prerequisites
- All of the connected node interfaces are online.
- High availability is disabled, as described in Managing high availability configuration. You can enable high availability later, if required.
- If a network is the default gateway network, all nodes connected to it must use the same default gateway.
- If you have restricted outbound traffic in your cluster, you need to manually add a rule that will allow outbound traffic on TCP and UDP ports 60000–60100, as described in Configuring outbound firewall rules.
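Before starting the migration, you can confirm from one node that another node accepts connections on the migration ports. Below is a minimal sketch using Python's standard socket module; the node name in the usage comment is illustrative, not part of the product:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (illustrative node name and ports from the rule above):
#   port_reachable("node001.vstoragedomain", 60000)
#   port_reachable("node001.vstoragedomain", 60100)
```

Note that this only checks TCP reachability; UDP traffic on the same port range cannot be verified with a plain connect call.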
To migrate a network from the source configuration to the target one
Admin panel
- On the Infrastructure > Networks screen, click the table cell with the name of the required network.
- On the network right pane, click Migrate to open the Migrate network: <network name> dialog.
- On the Network configuration step, specify a new configuration for your network and review the important information about potential risks. If the network includes multiple subnets, you can select Do not change for the subnets that you want to keep unchanged.
  If you plan to move your cluster to another location, which requires manually shutting down the cluster, select Cluster relocation with shutdown is planned.
  Then, click Next.
- On the Changes overview step, review the changes to your network configuration and click Next.
- On the IP address configuration step, specify the desired IP addresses for the cluster nodes or keep the suggested default values, and click Try new configuration.
  In the Try new configuration window, confirm your action by clicking Continue.
- If you are planning cluster relocation, shut down the cluster nodes and start them in a new datacenter, as described in Shutting down and starting up the cluster. After the cluster relocation is complete, click Resume.
- If the connectivity checks fail, resolve the reported issues and try again:
  - If the specified new IP addresses are unavailable or invalid, update them and click Retry.
  - For other network issues, revert to your old network configuration by clicking Revert, fix the issue, and try again.
- Wait until the new configuration is created and validated, and then click Migrate.
  While network migration is in progress, users cannot perform other tasks in the admin panel. Moreover, self-service users may not have access to the portal and will need to wait until the migration is complete.
- On the Migration step, wait until the migration is complete on all the connected interfaces, and then click Done.
- If you migrate a network with the Internal management or VM private traffic type, manually restart all running virtual machines to restore access to them via the VNC console.
Command-line interface
- Prepare a configuration template for network migration:
  - Generate a JSON file based on the current network configuration:
    vinfra cluster network migration generate-config --network <network> [--output <file>] [--compact]
    --network <network> - Network ID or name to generate the configuration for
    --output <file> - Output file path (default: stdout)
    --compact - Generate a compact configuration without comments and examples
    For example:
    # vinfra cluster network migration generate-config --network MyNetwork --output config.json --compact
    Configuration template written to: config.json
    Operation successful.
    # cat config.json
    {
      "network": {
        "id": "de31378d-4915-4344-8e61-0d21d9491c04",
        "name": "MyNetwork",
        "shutdown_required": false
      },
      "subnet_mappings": [
        {
          "old_subnet": "10.11.12.0/24",
          "new_subnet": "",
          "type": "ipv4",
          "gateway": "",
          "range": { "start": "", "end": "" },
          "exclude": { "ips": [] }
        },
        {
          "old_subnet": "2001:db8:abcd:1234::/64",
          "new_subnet": "",
          "type": "ipv6",
          "gateway": "",
          "range": { "start": "", "end": "" },
          "exclude": { "ips": [] }
        }
      ],
      "nodes": [
        { "node_id": "048fea61-ca0a-00a7-5baf-cbd10b096b64", "name": "node001.vstoragedomain", "new_ips": "" },
        { "node_id": "fab23eb2-41d4-6695-313f-754418ff2e45", "name": "node002.vstoragedomain", "new_ips": "" },
        { "node_id": "261b750f-59e7-4fac-d83d-75f243463b57", "name": "node003.vstoragedomain", "new_ips": "" }
      ]
    }
- In the generated configuration file, do the following:
  - In the new_subnet section, specify the new subnet in CIDR notation.
  - In the range section, specify an IP range for automatic assignment (for example, from 10.83.0.10 to 10.83.0.100, or use '*' as the end value), or leave the values empty to use the entire subnet range.
  - In the exclude section, list IP addresses to exclude from automatic assignment (for example, ['10.83.0.50', '10.83.0.60-10.83.0.65']), or leave the values empty.
  - If you do not plan to migrate a subnet, remove its corresponding section.
  - In the new_ips section, leave the values empty for automatic IP address assignment, or specify the desired IP addresses for the cluster nodes.
  The resulting file may look as follows:
  # vi config.json
  {
    "network": {
      "id": "de31378d-4915-4344-8e61-0d21d9491c04",
      "name": "MyNetwork",
      "shutdown_required": false
    },
    "subnet_mappings": [
      {
        "old_subnet": "10.11.12.0/24",
        "new_subnet": "10.20.12.0/24",
        "type": "ipv4",
        "gateway": ""
      }
    ],
    "nodes": [
      { "node_id": "048fea61-ca0a-00a7-5baf-cbd10b096b64", "name": "node001.vstoragedomain", "new_ips": "10.20.12.11" },
      { "node_id": "fab23eb2-41d4-6695-313f-754418ff2e45", "name": "node002.vstoragedomain", "new_ips": "10.20.12.12" },
      { "node_id": "261b750f-59e7-4fac-d83d-75f243463b57", "name": "node003.vstoragedomain", "new_ips": "10.20.12.13" }
    ]
  }
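If you fill in many templates, the editing step can also be scripted. The sketch below, written against the template structure shown above, sets the new IPv4 subnet and assigns sequential node addresses using Python's standard ipaddress module; the starting host index is an assumption for illustration, not product behavior:

```python
import ipaddress

def fill_template(config: dict, new_subnet: str, first_host_index: int = 10) -> dict:
    """Set new_subnet on every IPv4 mapping and assign sequential node
    addresses from the new subnet, starting at first_host_index
    (index 10 of a /24 is the .11 address)."""
    hosts = list(ipaddress.ip_network(new_subnet).hosts())
    for mapping in config["subnet_mappings"]:
        if mapping["type"] == "ipv4":
            mapping["new_subnet"] = new_subnet
    for offset, node in enumerate(config["nodes"]):
        node["new_ips"] = str(hosts[first_host_index + offset])
    return config
```

Loading the file with json.load, running fill_template, and writing it back with json.dump would reproduce a file similar to the example above.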
- Start the network migration, either by using the prepared configuration file or by specifying the required network parameters:
  vinfra cluster network migration start <network> [--shutdown] [--subnet <subnet>] [--gateway <gateway>] [--gateway-v6 <gateway>] [--node <node> <ip_addresses>] [--range <range>] [--exclude <exclude>] [--config <config_file>]
  --network <network> - Network ID or name (required unless --config is used)
  --subnet <subnet> - New network subnets, comma separated. This option can be used multiple times. Example: 10.100.0.0/16,10.101.0.0/16.
  --gateway <gateway> - New network IPv4 gateway
  --gateway-v6 <gateway> - New network IPv6 gateway
  --shutdown - Prepare the cluster to be shut down manually for relocation
  --node <node> <ip_addresses> - New node IP address configuration in the format:
    <node>: node ID or hostname
    <ip_addresses>: IPv4 and IPv6 addresses, comma separated
    This option can be used multiple times. Example: node1 10.101.30.1,fd12::1.
  --range <range> - IP range for automatic assignment (applied to --subnet). This option can be used multiple times. Examples: 10.70.30.20-10.70.30.50, 10.70.30.20-*.
  --exclude <exclude> - List of IP addresses to exclude from automatic assignment (applied to --subnet). This option can be used multiple times. Examples: 10.70.30.30, 10.70.30.32-10.70.30.35.
  --config <config_file> - JSON configuration file for migration (required unless --network is used)
  Example 1:
  # vinfra cluster network migration start --config config.json
  Example 2:
  # vinfra cluster network migration start --network "Private" --subnet 10.20.12.0/24 \
      --node node001 10.20.12.11 --node node002 10.20.12.12 --node node003 10.20.12.13
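To sanity-check --range and --exclude values before starting a migration, you can model their effect yourself. The following sketch approximates the semantics described above (subnet hosts, narrowed to a start-end range, minus excluded addresses or ranges); it is an illustration, not the product's actual assignment algorithm:

```python
import ipaddress

def candidate_ips(subnet: str, ip_range: str = None, exclude=()) -> list:
    """Return the addresses eligible for automatic assignment in a subnet,
    modeling the --range and --exclude option semantics."""
    hosts = list(ipaddress.ip_network(subnet).hosts())
    if ip_range:
        start, end = ip_range.split("-")
        lo = ipaddress.ip_address(start)
        hi = hosts[-1] if end == "*" else ipaddress.ip_address(end)
        hosts = [h for h in hosts if lo <= h <= hi]
    banned = set()
    for item in exclude:
        if "-" in item:  # an excluded range like 10.70.30.32-10.70.30.35
            a, b = map(ipaddress.ip_address, item.split("-"))
            banned.update(h for h in hosts if a <= h <= b)
        else:
            banned.add(ipaddress.ip_address(item))
    return [h for h in hosts if h not in banned]
```

Running it against the values you intend to pass on the command line shows exactly which addresses remain available for the nodes.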
- View the current network migration details. For example:
  # vinfra cluster network migration show
  +----------------------------+-------------------------------------------------+
  | Field                      | Value                                           |
  +----------------------------+-------------------------------------------------+
  | link                       | href: /api/v2/network/migration/2d4ec3a9-<...>/ |
  |                            | method: GET                                     |
  |                            | rel: network-migration-details                  |
  | operation                  | network-migration                               |
  | progress                   | 1.0                                             |
  | single_interface_migration | False                                           |
  | state                      | test-passed                                     |
  | task_id                    | 2d4ec3a9-7714-479d-a03c-1efbe6ffecf5            |
  | transitions                | 5                                               |
  +----------------------------+-------------------------------------------------+
  The output shows that the new network configuration has been tested and can be applied.
- If you are planning cluster relocation, shut down the cluster nodes and start them in a new datacenter, as described in Shutting down and starting up the cluster. After the cluster relocation is complete, run:
  # vinfra cluster network migration resume
- Continue the network migration and apply the new network configuration. For example:
  # vinfra cluster network migration apply
- If you migrate a network with the Internal management or VM private traffic type, manually restart all running virtual machines to restore access to them via the VNC console.
If the connectivity checks fail, resolve the reported issues and try again. If the specified new IP addresses are unavailable or invalid, you can change them by using the following command:
vinfra cluster network migration retry [--subnet <subnet>] [--node <node> <ip_addresses>]
                                       [--gateway <gateway>] [--gateway-v6 <gateway>]
                                       [--range <range>] [--exclude <exclude>]
--subnet <subnet> - New network subnets, comma separated. This option can be used multiple times. Example: 10.100.0.0/16,10.101.0.0/16.
--gateway <gateway> - New network IPv4 gateway
--gateway-v6 <gateway> - New network IPv6 gateway
--node <node> <ip_addresses> - New node IP address configuration in the format:
  <node>: node ID or hostname
  <ip_addresses>: IPv4 and IPv6 addresses, comma separated
  This option can be used multiple times. Example: node1 10.101.30.1,fd12::1.
--range <range> - IP range for automatic assignment (applied to --subnet). This option can be used multiple times. Examples: 10.70.30.20-10.70.30.50, 10.70.30.20-*.
--exclude <exclude> - List of IP addresses to exclude from automatic assignment (applied to --subnet). This option can be used multiple times. Examples: 10.70.30.30, 10.70.30.32-10.70.30.35.
For example:
# vinfra cluster network migration retry --subnet 10.20.12.0/24 --node node001 10.20.12.21 \
    --node node002 10.20.12.22 --node node003 10.20.12.23
For other network issues, revert to your old network configuration with vinfra cluster network migration revert, fix the issue, and try again.
To troubleshoot a failed migration
- Connect to your cluster via SSH.
- Investigate /var/log/vstorage-ui-backend/celery.log to find the root cause.
- Fix the issue.
- Go back to the wizard screen and click Retry.
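When celery.log is large, a small filter helps surface the relevant lines for the investigation step above. Below is a minimal sketch; the matched markers (ERROR, CRITICAL, Traceback) are assumptions about typical log contents, not a documented format:

```python
import re

def last_errors(log_path: str, limit: int = 20) -> list:
    """Return the last `limit` log lines that mention an error marker."""
    pattern = re.compile(r"ERROR|CRITICAL|Traceback", re.IGNORECASE)
    with open(log_path, encoding="utf-8", errors="replace") as f:
        matches = [line.rstrip("\n") for line in f if pattern.search(line)]
    return matches[-limit:]

# Usage (path from the troubleshooting step above):
#   for line in last_errors("/var/log/vstorage-ui-backend/celery.log"):
#       print(line)
```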