Attaching host devices to virtual machines
By attaching devices from compute nodes to virtual machines, you can reduce network latency in virtual machines or accelerate graphics workloads inside the guest operating system. The following PCI devices are supported:
- Graphics cards. With GPU passthrough, you can assign an entire physical GPU to a single virtual machine, while vGPU divides the video RAM of a physical GPU among multiple virtual machines. You can use both capabilities, GPU passthrough and vGPU, on the same node.
- Network adapters with Single Root I/O Virtualization (SR-IOV) capabilities. The SR-IOV technology splits a single physical adapter (physical function) into several virtual adapters (virtual functions). Each virtual function appears as a separate PCI device, so one physical adapter can serve multiple virtual machines (see the sketch after this list).
- Host bus adapters. To attach HBA devices to virtual machines, use the steps described for GPU passthrough.
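For reference, on a Linux compute node, virtual functions are usually created through the adapter's sysfs entries. The following is a minimal sketch assuming a hypothetical interface name `eth0`; the `sriov_totalvfs`, `sriov_numvfs`, and `virtfn*` entries are the standard kernel interface, but your platform may handle this step for you.

```python
# Sketch: enable SR-IOV virtual functions on a NIC through the standard
# Linux sysfs interface. The interface name "eth0" is an assumption;
# substitute the SR-IOV-capable adapter present on your compute node.
from pathlib import Path

IFACE = "eth0"  # assumed interface name
DEVICE = Path(f"/sys/class/net/{IFACE}/device")

def enable_virtual_functions(num_vfs: int) -> None:
    """Create num_vfs virtual functions on the physical function (requires root)."""
    total = int((DEVICE / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{IFACE} supports at most {total} virtual functions")
    # Reset to 0 first: the kernel rejects changing a non-zero VF count directly.
    (DEVICE / "sriov_numvfs").write_text("0")
    (DEVICE / "sriov_numvfs").write_text(str(num_vfs))

def list_virtual_functions() -> list[str]:
    """Return the PCI addresses of the virtual functions that were created."""
    return sorted(p.resolve().name for p in DEVICE.glob("virtfn*"))

if __name__ == "__main__":
    enable_virtual_functions(4)
    print(list_virtual_functions())
```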
Limitations
- PCI device passthrough and GPU virtualization are only available on servers that support Input/Output Memory Management Unit (IOMMU). For a list of IOMMU-supporting hardware, refer to this article. You can also verify that IOMMU is active on a node (see the check sketched after this list).
- vGPU is supported for NVIDIA GPU cards.
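As a quick check of the first limitation, a node with IOMMU enabled exposes device groups under `/sys/kernel/iommu_groups`. The following sketch only inspects the current node and its kernel command line; it does not replace the hardware compatibility check.

```python
# Sketch: check whether the kernel has IOMMU enabled on this node by
# looking for populated IOMMU groups in sysfs and for IOMMU-related
# boot parameters.
from pathlib import Path

def iommu_groups() -> list[str]:
    """Return the IOMMU group numbers exposed by the kernel, if any."""
    groups = Path("/sys/kernel/iommu_groups")
    return [p.name for p in groups.iterdir()] if groups.is_dir() else []

def iommu_boot_parameters() -> list[str]:
    """Return IOMMU-related parameters found on the kernel command line."""
    return [p for p in Path("/proc/cmdline").read_text().split() if "iommu" in p]

if __name__ == "__main__":
    groups = iommu_groups()
    if groups:
        print(f"IOMMU is active: {len(groups)} groups found")
    else:
        print("No IOMMU groups found; check BIOS/firmware settings and "
              "kernel parameters:", iommu_boot_parameters())
```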
Procedure overview
- Prepare compute nodes depending on the host device you are going to pass through or virtualize (a sketch for inspecting available GPUs follows this list).
- Reconfigure the compute cluster to enable PCI passthrough or vGPU support.
- Create virtual machines with attached PCI or vGPU devices.
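As part of preparing compute nodes, it can help to see which GPUs a node exposes and which IOMMU group each belongs to. The following sketch reads the standard Linux sysfs layout; it is an illustration of the preparation step, not a platform-specific tool.

```python
# Sketch: enumerate display-class PCI devices (GPUs) on a compute node
# together with their IOMMU groups, which is useful when deciding what
# to pass through or virtualize.
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def display_devices() -> list[tuple[str, str]]:
    """Return (PCI address, IOMMU group) pairs for display-class devices."""
    result = []
    for dev in PCI_DEVICES.iterdir():
        pci_class = int((dev / "class").read_text(), 16)
        if (pci_class >> 16) == 0x03:  # base class 0x03 = display controller
            group_link = dev / "iommu_group"
            group = group_link.resolve().name if group_link.exists() else "none"
            result.append((dev.name, group))
    return sorted(result)

if __name__ == "__main__":
    for address, group in display_devices():
        print(f"{address}  IOMMU group {group}")
```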