Discover IOV and VMware NetQueue on a Budget

April 28, 2009

While researching advances in I/O virtualization under VMware, we uncovered a “low cost” way to explore the advantages of IOV without investing in 10GbE equipment: the Intel 82576 Gigabit Network Controller, which supports 8 receive queues per port. This little gem comes in a 2-port by 1Gbps PCI Express package (E1G142ET) for around $170 each online, and in a 4-port by 1Gbps package (full or half-height, E1G144ET) for around $450 each online.

Enabling VMDq/NetQueue is straightforward:

  1. Enable NetQueue in VMkernel using VMware Infrastructure 3 Client:
    1. Choose Configuration > Advanced Settings > VMkernel.
    2. Select VMkernel.Boot.netNetqueueEnabled.
  2. Enable the igb module in the service console of the ESX Server host:

    # esxcfg-module -e igb
  3. Set the required load option for igb to turn on VMDq:
    The option IntMode=3 must be set to load the driver in VMDq mode. A value of 3 for the IntMode parameter selects MSI-X and automatically sets the number of receive queues to the maximum supported (devices based on the 82575 controller enable 4 receive queues per port; devices based on the 82576 controller enable 8 receive queues per port). The number of receive queues used by the igb driver in VMDq mode cannot be changed.

    For a single port, use the command:

    # esxcfg-module -s "IntMode=3" igb

    For two or more ports, use a comma-separated list of values, as shown in the following example (the parameter is applied to the igb-supported interfaces in the order they are enumerated on the PCI bus; a complete dual-port example follows this list):

    # esxcfg-module -s "IntMode=3,3, ... 3" igb

  4. Reboot the ESX Server system.

    Note: If you are using jumbo frames, you also need to set netPktHeapMinSize to 32 and netPktHeapMaxSize to 128. For more information, see VMware KB 1004593, Set netPktHeapMinSize=32 and netPktHeapMaxSize=128 to Enable Jumbo Frames and NetQueue.
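
Putting the console-side steps together for the dual-port E1G142ET mentioned above (a sketch of the sequence we'd expect; step 1 is still done through the VI Client, and the final reboot applies the new load options):

    # esxcfg-module -e igb
    # esxcfg-module -s "IntMode=3,3" igb
    # reboot

Here "IntMode=3,3" supplies one value per port, so both 82576 ports come up in VMDq/MSI-X mode with the full 8 receive queues each.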

After the reboot, you may want to verify that VMDq is enabled (see VMware’s KB 1009010 for details).
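
For a quick console-side sanity check before digging into the KB article, esxcfg-module can echo back the load options (the exact output wording varies by ESX release; shown here for the dual-port configuration):

    # esxcfg-module -g igb
    igb enabled = 1 options = 'IntMode=3,3'

An empty options string means the -s setting from step 3 didn't take effect and the driver loaded without VMDq.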

The relevant feature list is quite good:

Intel® QuickData Technology
  • DMA Engine: enhances data acceleration across the platform (network, chipset, processor), thereby lowering CPU usage
  • Direct Cache Access (DCA): enables the adapter to pre-fetch data from memory, thereby avoiding cache misses and improving application response time

MSI-X support
  • Minimizes the overhead of interrupts
  • Allows load balancing of interrupt handling between multiple cores/CPUs

Low Latency Interrupts
  • Based on the sensitivity of the incoming data, can bypass the automatic moderation of time intervals between interrupts

Header splits and replication in receive
  • Helps the driver focus on the relevant part of the packet without the need to parse it

Multiple queues: 8 queues per port
  • Network packet handling without waiting or buffer overflow, providing efficient packet prioritization

Tx/Rx IP, SCTP, TCP, and UDP checksum offloading capabilities (IPv4, IPv6)
  • Lower processor usage
  • Checksum and segmentation capability extended to new standard packet types

Tx TCP segmentation offload (IPv4, IPv6)
  • Increased throughput and lower processor usage
  • Compatible with the large send offload feature (in Microsoft Windows* Server OSs)

Receive and Transmit Side Scaling for Windows* environments and Scalable I/O for Linux* environments (IPv4, IPv6, TCP/UDP)
  • Directs interrupts to the processor cores in order to improve the CPU utilization rate

IPsec Offload
  • Offloads IPsec processing onto the adapter instead of the software to significantly improve I/O throughput and CPU utilization (for Windows* Server 2008 and Vista*)

LinkSec
  • A Layer 2 data-protection solution that provides encryption and authentication between two individual devices (routers, switches, etc.)
  • These adapters are prepared to provide LinkSec functionality when the ecosystem supports this new technology

Virtual Machine Device Queues (VMDq)
  • Offloads the data-sorting functionality from the hypervisor to the network silicon, thereby improving data throughput and CPU usage
  • Provides a QoS feature on the Tx data through round-robin servicing, preventing head-of-line blocking
  • Sorts based on MAC addresses and VLAN tags

Next-generation VMDq
  • Enhanced QoS feature providing weighted round-robin servicing for the Tx data
  • Provides loopback functionality: data transfers between virtual machines within the same physical server need not go out to the wire and come back in, improving throughput and CPU usage
  • Supports replication of multicast and broadcast data

PCI-SIG SR-IOV implementation (eight virtual functions per port)
  • Provides an implementation of the PCI-SIG standard for I/O virtualization. The physical configuration of each port is divided into multiple virtual ports, and each virtual port is assigned directly to an individual virtual machine, bypassing the virtual switch in the hypervisor and resulting in near-native performance.
  • Integrated with Intel® VT for Directed I/O (VT-d) to provide data protection between virtual machines by assigning separate physical memory addresses to each virtual machine

IPv6 offloading
  • Checksum and segmentation capability extended to the new standard packet type

Advanced packet filtering
  • 24 exact-matched packets (unicast or multicast)
  • 4096-bit hash filter for unicast and multicast frames
  • Lower processor usage
  • Promiscuous (unicast and multicast) transfer mode support
  • Optional filtering of invalid frames

VLAN support with VLAN tag insertion, stripping, and packet filtering for up to 4096 VLAN tags
  • Ability to create multiple VLAN segments

2 comments

  1. Hello,

    According to

    http://www.intel.com/support/network/adapter/pro100/sb/CS-031492.htm

    IOV isn’t currently supported in VMware – do you know if that’s changed?

    Cheers, Michael



  2. Does IntMode=3 really exist?
    At the following link, only IntMode values up to 2 are mentioned:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1026094



