Split network plane for live migration¶
This spec proposes to separate the network plane used for live migration from the management network, in order to avoid the network performance impact caused by the data transfer that live migration generates.
When we do live migration with the QEMU/KVM driver, we use the hostname of the target compute node as the target of live migration, so the RPC calls and the live migration traffic travel on the same network plane. Live migration therefore affects network performance, and the impact is significant when many live migrations occur concurrently, even if CONF.libvirt.live_migration_bandwidth is set.
The OpenStack deployer plans a specific network plane for live migration, separated from the management network. As the data transfer of live migration flows on this dedicated network plane, its effect on network performance is confined to that plane, and the management network is unaffected. End users will not notice this change.
Add a new option, CONF.my_live_migration_ip, to the configuration file, with None as the default value. When pre_live_migration() executes on the destination host, it sets this option into pre_migration_data if it is not None. When driver.live_migration() executes on the source host and this option is present in pre_migration_data, the IP address is used in place of the target hostname when building the URI from CONF.libvirt.live_migration_uri; if it is None, the mechanism remains as it is now.
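As a minimal sketch of the destination-side step (the function signature and variable names here are illustrative, not Nova's real API): pre_live_migration() publishes the dedicated IP through the pre_migration_data dict only when the deployer has configured it.

```python
# Hypothetical stand-in for CONF.my_live_migration_ip; None by default.
my_live_migration_ip = "172.168.1.5"

def pre_live_migration(pre_migration_data, live_migration_ip):
    """Destination host: advertise the dedicated IP only when configured."""
    if live_migration_ip is not None:
        pre_migration_data["my_live_migration_ip"] = live_migration_ip
    return pre_migration_data

print(pre_live_migration({}, my_live_migration_ip))
# {'my_live_migration_ip': '172.168.1.5'}
print(pre_live_migration({}, None))
# {}  (default: nothing is added, existing behaviour is kept)
```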
This spec focuses on the QEMU/KVM driver; the implementations for other drivers should be completed in separate blueprints.
Alternatives¶

Configure the live migration URI like this:
live_migration_uri = "qemu+tcp://%s.INTERNAL/system"
Then modify the DNS configuration in the OpenStack deployment:
target_hostname            192.168.1.5
target_hostname.INTERNAL   220.127.116.11
But requiring such DNS changes in order to deploy and use OpenStack may not be practical at many organizations due to procedural limitations.
Data model impact¶

None.

REST API impact¶

None.
Security impact¶

This feature has no negative impact on security. Splitting data transfer from management traffic improves security somewhat by reducing the chance of a management plane denial of service.
Other end user impact¶
No impact on end users.
Performance impact¶

With a dedicated network plane for live migration, the data transfer no longer affects the management network's performance; the impact of live migration on network performance is limited to its own network plane.
Other deployer impact¶
The added configuration option CONF.my_live_migration_ip will be available for all drivers, with a default value of None. Thus, when OpenStack is upgraded, the existing live migration mechanism is preserved; if CONF.my_live_migration_ip has been set, it will be used for the live migration target URI. Deployers who want to use this feature will have to plan a separate network plane in advance.
Developer impact¶

All drivers can implement this feature using the same mechanism.
Add the new configuration option CONF.my_live_migration_ip to the [DEFAULT] group.
Modify the existing implementation of live migration: when pre_live_migration() executes on the destination host, set the option into pre_migration_data if it is not None.

In the QEMU/KVM driver, when driver.live_migration() executes on the source host and this option is present in pre_migration_data, use the IP address in place of the target hostname when building the URI from CONF.libvirt.live_migration_uri; if it is None, the mechanism remains as it is now.
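The source-side selection can be sketched as follows (a hypothetical helper, not the actual libvirt driver code; the URI template mirrors the default form of CONF.libvirt.live_migration_uri):

```python
# Stand-in for CONF.libvirt.live_migration_uri.
LIVE_MIGRATION_URI = "qemu+tcp://%s/system"

def choose_migration_uri(dest_hostname, pre_migration_data):
    """Source host: prefer the dedicated IP, else keep today's behaviour."""
    ip = pre_migration_data.get("my_live_migration_ip")
    return LIVE_MIGRATION_URI % (ip if ip is not None else dest_hostname)

print(choose_migration_uri("compute-2", {"my_live_migration_ip": "172.168.1.5"}))
# qemu+tcp://172.168.1.5/system
print(choose_migration_uri("compute-2", {}))
# qemu+tcp://compute-2/system  (unchanged default workflow)
```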
Changes will be made to live migration, so related unit tests will be added.
Instructions for the new configuration option CONF.my_live_migration_ip will be added to the OpenStack Configuration Reference manual.
Operators can plan a specific network plane for live migration, e.g. 172.168.*.*, split from the management network (192.168.*.*), then add the option to nova.conf on every nova-compute host according to the planned IP addresses, like this: CONF.my_live_migration_ip=18.104.22.168.
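As a sketch, the corresponding nova.conf fragment on one compute host (addresses taken from the example above) might look like:

```ini
[DEFAULT]
# IP address of this host on the dedicated live-migration network plane.
# Leave unset (default: None) to keep the existing hostname-based behaviour.
my_live_migration_ip = 18.104.22.168
```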
The default value of the new option is None, so the live migration workflow is the same as the original by default.