Libvirt: AIO mode for disk devices

https://blueprints.launchpad.net/nova/+spec/libvirt-aio-mode

Libvirt and qemu provide two different modes for asynchronous IO (AIO): "native" and "threads". Nova currently always uses the default "threads" mode. Depending on the type of storage backing a guest disk, the "native" mode can deliver better IO performance.

Problem description

Storage devices presented to instances can be backed by a variety of storage backends: an image residing in the file system of the hypervisor, a block device passed through to the guest, or a device provided over the network. Images can have different formats (raw, qcow2 etc.) and block devices can be backed by different technologies (Ceph, iSCSI, Fibre Channel etc.).

These different image formats and block devices require different hypervisor settings for optimal IO performance. Libvirt/qemu offer a configurable asynchronous IO mode that increases performance when it matches the underlying image or block device type.
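For illustration, the AIO mode appears as the io attribute on the disk's <driver> element in the generated guest XML. The fragments below are hypothetical examples (paths and device names are placeholders), not output of the current nova driver:

```xml
<!-- File-backed qcow2 disk: default mode, userspace threads -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='threads'/>
  <source file='/var/lib/nova/instances/example/disk'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- Block-device-backed disk: native kernel AIO -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```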

Right now nova sticks with the default setting, using userspace threads for asynchronous IO.

Use Cases

A deployer or operator wants to make sure that the users get the best possible IO performance based on the hardware and software stack that is used.

Users may have workloads that depend on optimal disk performance.

Both users and deployers would prefer that the nova libvirt driver automatically picks the asynchronous IO mode that best fits the underlying hardware and software.

Proposed change

The goal is to enhance the nova libvirt driver to let it choose the disk IO mode based on the knowledge it already has about the device in use.

For cinder volumes, different LibvirtVolumeDriver implementations exist for the different storage types. A new interface will be added to let the respective LibvirtVolumeDriver choose the AIO mode.

For ephemeral storage, the XML is generated by LibvirtConfigGuestDisk, which can also distinguish between file, block and network attachment of the guest disk.
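A minimal sketch of how such a hook could look, using simplified stand-in classes (the method name get_io_mode and the class bodies below are hypothetical, not the final interface):

```python
# Hypothetical sketch: each LibvirtVolumeDriver subclass reports the AIO
# mode that suits its storage type, and the libvirt driver copies that
# value into the io='...' attribute of the generated <driver> element.

class LibvirtBaseVolumeDriver:
    def get_io_mode(self):
        """Hypothetical new interface; 'threads' is the safe default."""
        return 'threads'


class LibvirtISCSIVolumeDriver(LibvirtBaseVolumeDriver):
    def get_io_mode(self):
        # iSCSI volumes appear as block devices, where native AIO pays off.
        return 'native'


class LibvirtConfigGuestDisk:
    """Stand-in for the real config class that renders the disk XML."""
    def __init__(self):
        self.driver_io = None  # rendered as io='...' in the XML


conf = LibvirtConfigGuestDisk()
conf.driver_io = LibvirtISCSIVolumeDriver().get_io_mode()
print(conf.driver_io)  # native
```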

Restrictions on when to use native AIO mode

  • Native AIO mode will not be enabled for sparse images, as it can cause qemu threads to block when filesystem metadata needs to be updated. This issue is far less likely to appear with preallocated images. For the full discussion, see the IRC log in [4].

  • AIO mode has no effect for disks that use the in-qemu network clients (any disk with <disk type='network'>); it is only relevant when using the in-kernel network drivers (source: danpb).

In the scenarios above, the default AIO mode (threads) will be used.

Cases where AIO mode is beneficial

  • Raw images and pre-allocated images in qcow2 format

  • Cinder volumes that are located on iSCSI, NFS or FC devices.

  • Quobyte (reported by Silvan Kaiser)
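The restrictions and beneficial cases above can be condensed into a simple decision rule. The following is an illustrative sketch under those assumptions (pick_aio_mode is a hypothetical helper, not nova code):

```python
def pick_aio_mode(source_type, preallocated=True):
    """Return the AIO mode for a guest disk.

    Native AIO is used for block devices and fully preallocated files;
    sparse files and <disk type='network'> sources keep the 'threads'
    default, per the restrictions discussed above.
    """
    if source_type == 'block':
        return 'native'
    if source_type == 'file' and preallocated:
        return 'native'
    return 'threads'


print(pick_aio_mode('block'))                     # native
print(pick_aio_mode('file', preallocated=False))  # threads
print(pick_aio_mode('network'))                   # threads
```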

Alternatives

An alternative implementation would be to let the user specify the AIO mode for disks, similar to the currently configurable caching mode, which distinguishes between file and block devices. However, the AIO mode that best fits a given storage type does not depend on the workload running in the guest, so it is preferable not to burden the operator with additional configuration parameters.

Another option would be to stick with the current approach - using the libvirt/qemu defaults. As there is no single AIO mode that fits best for all storage types, this would leave many users with inefficient settings.

Data model impact

No changes to the data model are expected; code changes only affect the libvirt/qemu driver, and persistent data is not affected.

REST API impact

None

Security impact

None

Notifications impact

None

Other end user impact

None

Performance Impact

IO performance for instances running on backends that can exploit the native IO mode will be improved. There is no adverse effect on other components.

Other deployer impact

None

Developer impact

None

Implementation

Assignee(s)

Primary assignee:

alexs-h

Work Items

  • Collect performance data for comparing AIO modes on different storage types

  • Implement AIO mode selection for cinder volumes

  • Implement AIO mode selection for ephemeral storage

Dependencies

None

Testing

Unit tests will be provided that verify the libvirt XML changes generated by this feature.

In addition, CI systems that run libvirt/qemu will exercise the new AIO mode selection automatically.

Documentation Impact

Wiki pages that cover IO configuration with libvirt/qemu as a hypervisor should be updated.

References

History

None

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.