Note
Only cloud administrators can perform live migrations. If your cloud is configured to use cells, you can perform live migration within but not between cells.
Migration enables an administrator to move a virtual-machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.
The migration types are:
Non-live migration (sometimes referred to simply as migration): the instance is shut down, moved to another hypervisor, and restarted. The instance recognizes that it was rebooted.
Live migration (or true live migration): the instance keeps running throughout the migration, with almost no downtime. The types of live migration are:
Shared storage-based live migration: both hypervisors have access to shared storage.
Block live migration: no shared storage is required, but it is incompatible with read-only devices such as CD-ROMs and Configuration Drive (config_drive).
Volume-backed live migration: instances are backed by volumes rather than ephemeral disk, so no shared storage is required.
The following sections describe how to configure your hosts and compute nodes for migrations by using the KVM and XenServer hypervisors.
Prepare at least three servers. In this example, we refer to the servers as HostA, HostB, and HostC:
HostA is the cloud controller, and runs services such as nova-api and nova-scheduler.
HostB and HostC are the compute nodes that run nova-compute.
Ensure that NOVA-INST-DIR (set with state_path in the nova.conf file) is the same on all hosts; a sample nova.conf snippet showing this setting follows.
In this example, HostA is the NFSv4 server that exports the NOVA-INST-DIR/instances directory. HostB and HostC are NFSv4 clients that mount it.
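For reference, a minimal nova.conf sketch of the relevant settings, assuming the default state path of /var/lib/nova (adjust to match your deployment):
[DEFAULT]
state_path = /var/lib/nova
instances_path = $state_path/instances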
Configuring your system
Configure your DNS or /etc/hosts and ensure it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from the others:
$ ping HostA
$ ping HostB
$ ping HostC
Ensure that the UID and GID of your Compute and libvirt users are identical across all of your servers. This ensures that the permissions on the NFS mount work correctly.
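As a quick check, the following commands should print identical uid and gid values on every host; the nova and libvirt-qemu account names shown are common Ubuntu defaults and may differ in your deployment:
$ id nova
$ id libvirt-qemu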
Ensure that the nova user (the user that owns the nova-compute service) can use SSH between HostB and HostC without a password and without StrictHostKeyChecking. Direct access from one compute host to another is needed to copy VM files across, and to detect whether the source and target compute nodes share a storage subsystem.
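A minimal sketch of one way to set this up, run as the nova user on HostB; the public key must then be appended to ~/.ssh/authorized_keys for the nova user on HostC (and the reverse for the other direction). Disabling StrictHostKeyChecking in the per-user SSH config, as shown, is one option among several:
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
$ echo "StrictHostKeyChecking no" >> ~/.ssh/config
$ ssh nova@HostC true
The last command should complete without a password prompt.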
Export NOVA-INST-DIR/instances from HostA, and ensure it is readable and writable by the Compute user on HostB and HostC.
For more information, see: SettingUpNFSHowTo or CentOS/Red Hat: Setup NFS v4.0 File Server
Configure the NFS server at HostA by adding the following line to the /etc/exports file:
NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)
Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses of HostB and HostC. Then restart the NFS server:
# /etc/init.d/nfs-kernel-server restart
# /etc/init.d/idmapd restart
Enable the 'execute/search' bit on your shared directory so that qemu can use the images within it. On all hosts, run the following command:
$ chmod o+x NOVA-INST-DIR/instances
Configure NFS on HostB and HostC by adding the following line to the /etc/fstab file:
HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0
Ensure that you can mount the exported directory:
$ mount -a -v
Check that HostA can see the NOVA-INST-DIR/instances/ directory:
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/
Perform the same check on HostB and HostC, paying special attention to the permissions (Compute should be able to write):
$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
$ df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1            921514972   4180880 870523828   1% /
none                  16498340      1228  16497112   1% /dev
none                  16502856         0  16502856   0% /dev/shm
none                  16502856       368  16502488   1% /var/run
none                  16502856         0  16502856   0% /var/lock
none                  16502856         0  16502856   0% /lib/init/rw
HostA:               921515008 101921792 772783104  12% /var/lib/nova/instances ( <--- this line is important.)
Update the libvirt configuration so that remote calls can be made securely. Several methods enable secure remote access over TCP (for example, the libvirtd TCP socket with SASL authentication or with TLS, or an SSH tunnel to libvirtd's UNIX socket); they are not documented in detail here.
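As an illustrative sketch only, the libvirtd TCP socket with SASL authentication involves settings along these lines in /etc/libvirt/libvirtd.conf (SASL credentials are created separately with saslpasswd2):
listen_tls = 0
listen_tcp = 1
auth_tcp = "sasl"
On Ubuntu, the -l (listen) flag must also be added to the daemon options in /etc/default/libvirt-bin:
libvirtd_opts="-d -l"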
Restart libvirt. After you run the command, ensure that libvirt is successfully restarted:
# stop libvirt-bin && start libvirt-bin
$ ps -ef | grep libvirt
root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
Configure your firewall to allow libvirt to communicate between nodes. By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. Based on the secure remote access TCP configuration you chose, be careful which ports you open, and always understand who has access. For information about ports that are used with libvirt, see the libvirt documentation.
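For example, with iptables the default ports mentioned above could be opened as follows; these rules are a minimal sketch, not a complete firewall policy:
# iptables -I INPUT -p tcp --dport 16509 -j ACCEPT
# iptables -I INPUT -p tcp --dport 49152:49261 -j ACCEPT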
Configure the downtime required for the migration by adjusting these parameters in the nova.conf file:
live_migration_downtime
live_migration_downtime_steps
live_migration_downtime_delay
The live_migration_downtime parameter sets the maximum permitted downtime for a live migration, in milliseconds. This setting defaults to 500 milliseconds.
The live_migration_downtime_steps parameter sets the total number of incremental steps to reach the maximum downtime value. This setting defaults to 10 steps.
The live_migration_downtime_delay parameter sets the amount of time to wait between each step, in seconds. This setting defaults to 75 seconds.
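Taken together, and assuming these options live in the [libvirt] section of nova.conf alongside the live migration flag shown below, the defaults quoted above would appear as:
[libvirt]
live_migration_downtime = 500
live_migration_downtime_steps = 10
live_migration_downtime_delay = 75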
You can now configure other options for live migration. In most cases, you will not need to configure any options. For advanced configuration options, see the OpenStack Configuration Reference Guide.
Prior to the Kilo release, the Compute service did not use the libvirt live migration function by default. To enable this function, add the following line to the [libvirt] section of the nova.conf file:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
On versions older than Kilo, the Compute service does not use libvirt’s live migration by default because there is a risk that the migration process will never end. This can happen if the guest operating system uses blocks on the disk faster than they can be migrated.
Configuring KVM for block migration is exactly the same as the configuration described in the section called Shared storage, except that NOVA-INST-DIR/instances is local to each host rather than shared. No NFS client or server configuration is required.
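Once both hosts are configured, a block live migration can be requested with the nova command-line client by adding the --block-migrate option; the instance ID and target host shown here are placeholders:
$ nova live-migration --block-migrate INSTANCE_UUID HostC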
Note
You must use compatible XenServer hypervisors: they must support the Storage XenMotion feature. See your XenServer manual to make sure that your edition has this feature.