Attaching host devices to virtual machines

By attaching physical devices from compute nodes to virtual machines, you can reduce VM network latency or accelerate graphics rendering inside the guest operating system. The following PCI devices are supported:

Graphics cards

GPU passthrough dedicates an entire physical GPU to a single virtual machine, allowing the VM's GPU driver to interact with it as if it were directly connected via PCIe. This direct access bypasses the virtualization layer, delivering performance nearly identical to a native hardware setup.
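With libvirt-based hypervisors, a passed-through PCI device is typically described by a hostdev element in the VM's domain XML. The sketch below shows the general shape of such an entry; the PCI address 0000:65:00.0 is a hypothetical example and must be replaced with the address of the actual GPU on your node:

```xml
<!-- Sketch of a libvirt hostdev entry for PCI passthrough.
     The address below (0000:65:00.0) is an example only. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With managed='yes', libvirt detaches the device from its host driver before the VM starts and reattaches it after the VM stops.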

The virtual GPU (vGPU) technology enables sharing a single physical GPU among multiple virtual machines by dividing its video RAM between them. This ensures high-performance graphics, broad application compatibility, and cost efficiency across virtualized workloads.
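On Linux hosts, vGPU instances are commonly exposed through the kernel's mediated device (mdev) interface. The following sketch shows how such instances can be listed and created, assuming the NVIDIA vGPU manager is installed; the PCI address 0000:65:00.0 and the profile name nvidia-63 are hypothetical placeholders:

```shell
# Sketch only: the PCI address and profile name below are examples
# and must match the actual GPU and vGPU profile on your node.

# List the vGPU profiles that the physical GPU offers.
ls /sys/class/mdev_bus/0000:65:00.0/mdev_supported_types

# Create a vGPU instance of a chosen profile. The UUID identifies
# the mediated device that is later attached to a virtual machine.
UUID=$(uuidgen)
echo "$UUID" > /sys/class/mdev_bus/0000:65:00.0/mdev_supported_types/nvidia-63/create
```

Because these paths exist only on hosts with a vGPU-capable driver loaded, the commands are hardware-bound configuration steps rather than a portable script.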

You can use both capabilities, GPU passthrough and vGPU, on the same node.

Network adapters with Single Root I/O Virtualization (SR-IOV) capabilities
The SR-IOV technology enables splitting a single physical adapter (physical function) into several virtual adapters (virtual functions). Each virtual function appears as a separate PCI device, so one physical adapter can serve multiple virtual machines.
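On Linux, virtual functions are typically created through the sysfs interface of the physical function. The sketch below assumes a hypothetical interface name enp5s0f0 whose driver supports SR-IOV; the commands must run as root on the compute node:

```shell
# Sketch only: enp5s0f0 is a placeholder for the actual
# SR-IOV-capable adapter on your node.

# Check how many virtual functions the adapter supports.
cat /sys/class/net/enp5s0f0/device/sriov_totalvfs

# Create four virtual functions on the physical function.
echo 4 > /sys/class/net/enp5s0f0/device/sriov_numvfs

# Each virtual function now appears as its own PCI device.
lspci | grep -i 'virtual function'
```

These are hardware-bound configuration steps; the exact number of supported virtual functions depends on the adapter model and firmware.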
Host bus adapters
To attach HBA devices to virtual machines, use the steps described for GPU passthrough.

Limitations

  • PCI device passthrough and GPU virtualization are available only on servers that support Input/Output Memory Management Unit (IOMMU). For a list of IOMMU-supporting hardware, refer to this article.
  • vGPU is supported only for NVIDIA GPUs.
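A quick way to confirm that the IOMMU is enabled in the firmware and the kernel is to check the boot log and the IOMMU groups exposed in sysfs (Intel VT-d reports as DMAR, AMD-Vi as AMD-Vi):

```shell
# Look for IOMMU initialization messages in the kernel log.
dmesg | grep -i -e DMAR -e IOMMU

# An active IOMMU also exposes device groups here; an empty
# directory means the IOMMU is disabled or unsupported.
ls /sys/kernel/iommu_groups
```

If both checks come up empty, enable VT-d or AMD-Vi in the server firmware and the corresponding IOMMU kernel parameters before attempting passthrough.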