Introduction to SR-IOV and vNIC for IBM i

 

This is the first in a series of articles on frequently overlooked Power Systems features and the value they offer IBM i customers.  I'll start with sharing Ethernet adapters via SR-IOV, and the added benefits that vNIC technology provides on top of SR-IOV.

Whether you have an existing system that is already capable of these features, or you are considering migrating to new hardware, you can only benefit from knowing what your options are.

What Is SR-IOV?

SR-IOV (Single Root Input/Output Virtualization) is a hardware specification that allows multiple operating systems to simultaneously use a single I/O adapter in a virtualized environment.  It is not unique to the Power Hypervisor (PHYP).  You can find SR-IOV used heavily in x86-based virtualization, such as VMware or Hyper-V – a fact that only complicates searches for information related to IBM i.

More to the point for the IBM i administrator, it allows a single SR-IOV capable adapter to be shared by multiple LPARs.  You can split a single adapter with two ports, dedicating each port to a separate LPAR, or you can go more granular and share different percentages of the bandwidth of a single physical port between multiple partitions.  When sharing a single physical port, you specify the minimum percentage of outgoing bandwidth each partition gets, and each partition can burst higher when bandwidth is available.  It is also possible to limit the maximum outgoing bandwidth a given partition will use, although this can only be done from the HMC CLI, not the HMC GUI.

What is vNIC?

vNIC is a Power virtualization technology built into PowerVM that combines VIOS (Virtual I/O Server) virtualization with SR-IOV adapters to deliver the performance and flexibility of SR-IOV along with the additional flexibility and redundancy of a fully virtualized solution.  I expect to expand on VIOS in much more detail in a future article.  For now, I'll just say that vNIC provides automated active/passive failover and supports the use of Live Partition Mobility.  If you already use VIOS, you should strongly consider SR-IOV adapters with vNIC rather than Shared Ethernet Adapters (SEA) unless you need the active/active load sharing configuration that is only available with SEA.  If you don't use VIOS, watch for a future article on why you should.

Why SR-IOV?

- Better use of limited resources.  10G Ethernet adapters have become common in enterprise configurations, and most of them have multiple ports.  Without SR-IOV, each adapter is usually dedicated to a single partition, often leaving the extra ports unused while additional adapters are dedicated to other partitions, leaving even more ports unused.  How many of these ports are utilized to their full capacity?  Not as many as you might think (seriously, collect some performance stats and see for yourself).  More adapters used at a fraction of their capacity means more cabling and more network switch ports, all used at a fraction of their capacity.  That gets costly, for both the server and network budgets, especially when working with 10G ports.

- More flexibility.  Once you have connected ports to network switches, you can add partitions that use those ports without any additional cabling or network configuration.  This is especially true if you configure those ports as trunks and use VLAN tagging in the IBM i TCP/IP configuration to reach different networks and IP address ranges (a sketch follows this list).

- Better performance than other shared configurations.  Compared to traditional server-based networking configurations (VIOS Shared Ethernet Adapters or IBM i NWS Virtual Ethernet), SR-IOV connections perform much better.  Virtual Ethernet connections have processor overhead and many tuning parameters that limit performance.  SR-IOV establishes a hypervisor-managed path to the hardware that is second only to a dedicated adapter.  In the real world, SR-IOV performs effectively the same as a dedicated adapter, and better than any server-virtualized adapter.
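
To make the VLAN tagging point above concrete, here is a minimal sketch of the IBM i side, assuming the SR-IOV logical port reports as resource CMN05 and the switch port is configured as a trunk carrying VLANs 101 and 102.  The names and VLAN IDs are hypothetical, and support for multiple active line descriptions on one resource depends on your adapter and OS release:

  CRTLINETH LIND(PROD101) RSRCNAME(CMN05) VLANID(101) LINESPEED(*AUTO) DUPLEX(*AUTO)
  CRTLINETH LIND(TEST102) RSRCNAME(CMN05) VLANID(102) LINESPEED(*AUTO) DUPLEX(*AUTO)

Each line description then gets its own TCP/IP interface in the matching IP range, so adding a partition to another network becomes a configuration change rather than a cabling change.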

Who should use SR-IOV?

- Large enterprises should consider SR-IOV and vNIC technology to achieve high-bandwidth connectivity to enterprise-scale 10G (and up) infrastructure.  Automatic failover (vNIC) to redundant connections ensures connectivity that leverages the highly redundant network infrastructures that exist in high-end enterprises.

- Small businesses should consider SR-IOV and vNIC technology to get the maximum capacity out of their investment in network connectivity.  Fewer adapters, less cabling, and fewer network ports are easier on the budget, while still providing the ability to adapt to changing business needs.  SR-IOV adapters can be shared between partitions without any server-based virtualization, resulting in a simple-to-maintain shared configuration when other virtualization functions are not required.

What else do I need to know?

- For all of the following, see the SR-IOV FAQ for details.  It can be found at: https://community.ibm.com/community/user/power/viewdocument/sr-iov-vnic-and-hnv-information

  • You must have an SR-IOV capable adapter, so make sure your IBM Sales Representative or Business Partner knows you want SR-IOV when ordering a new system.
  • SR-IOV adapters must be placed in specific slots.  On Power 9 and Power 10 hardware, this includes most of the slots in the system.
  • There are limits on the number of SR-IOV enabled adapters per system.  As of November 2022, the maximum number of SR-IOV shared adapters is the lower of 32 or the number of SR-IOV slots in the system.  This is not really limiting for most customers.
  • There are limits on how many shared (logical) ports can be assigned to a physical port, depending on the specific adapter (ranging from 4 to 60).
  • There are limits on how many shared (logical) ports can be assigned per adapter (ranging from 48 to 120).
  • SR-IOV adapters in shared mode require hypervisor memory (see the FAQ).
  • Pay particular attention to limitations for 1G ports on supported adapters, especially 1G SFP+ in 10G+ adapters as these may not be supported for SR-IOV.
  • HMC is required for SR-IOV support.
  • VIOS is required for vNIC.  VIOS is NOT required for SR-IOV.
  • Sharing a Link Aggregation (e.g. LACP) of multiple ports is not allowed.  This is not as bad as it sounds: in a VIOS SEA configuration, link aggregation is effectively a redundancy measure rather than a performance measure, because SEA simply does not have the capacity to use more than a single link's bandwidth.  In practically all cases where link aggregation is used with VIOS, vNIC with failover is a better solution.  In the rare case that it is necessary, link aggregation can be managed at the IBM i OS level with the CRTLINETH RSRCNAME(*AGG) command if the SR-IOV physical ports are 100% dedicated to a single partition (a sketch follows this list).  See https://www.ibm.com/support/pages/configuring-ethernet-link-aggregation
  • Changing the minimum capacity of an SR-IOV logical port is disruptive, so plan accordingly.  Remember that the value is a minimum and all logical ports can burst higher, so barring any specific continuous outgoing bandwidth requirement, you are better off estimating low.
  • Bandwidth splitting on SR-IOV adapters applies to outgoing bandwidth only.  There is no way to split incoming bandwidth, so consider anticipated incoming traffic when deciding how many partitions can share a port.
  • SR-IOV cards are not owned by any partition, so adapter firmware updates are typically included in system firmware updates.  If necessary, there is a separate procedure to install adapter firmware updates on their own.
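
For the link aggregation case noted above, the aggregate line description is built directly over the dedicated SR-IOV port resources.  This is only a rough sketch with placeholder resource names and policy values; check the IBM support page referenced above for the exact parameters and restrictions at your OS level:

  CRTLINETH LIND(AGGLINE) RSRCNAME(*AGG) AGGPCY(*LNKAGG *DFT) AGGRSCL(CMN05 CMN07)

Remember that this only applies when the physical ports are 100% dedicated to one partition; shared SR-IOV ports cannot participate in a link aggregation.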

How to configure an SR-IOV port on IBM i

Rather than including a bunch of HMC screenshots that duplicate existing resources, I’ll direct you to the excellent reference material in the “Selected References” below, especially the Redpaper.  These references will show you how to put an SR-IOV adapter in shared or hypervisor mode and how to configure a logical port for a partition.  There is no difference between doing this for AIX and IBM i.  The specific web interface may change a bit with each HMC release, but the concepts remain the same.

Once the resource is created, the easiest way to determine the resource name is to select the partition from the HMC and get the CMNxx resource name from the “Hardware Virtualized I/O” page for SR-IOV, or the “vNIC” page for a vNIC.  It will also show up along with all of the other resources in WRKHDWRSC *CMN, or STRSST.  Once the resource name is located, configure it exactly as you would any other Ethernet resource by creating a Line description, IP address, etc.
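
As an illustration, assuming the logical port reports as CMN05 and using made-up names and addresses, the IBM i side is the same handful of commands you would use for a dedicated port:

  WRKHDWRSC TYPE(*CMN)                                 /* locate the CMNxx resource   */
  CRTLINETH LIND(SRIOV01) RSRCNAME(CMN05) LINESPEED(*AUTO) DUPLEX(*AUTO)
  VRYCFG CFGOBJ(SRIOV01) CFGTYPE(*LIN) STATUS(*ON)
  ADDTCPIFC INTNETADR('10.1.1.10') LIND(SRIOV01) SUBNETMASK('255.255.255.0')
  STRTCPIFC INTNETADR('10.1.1.10')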

You can dynamically add and remove SR-IOV and vNIC resources to/from a running partition.  Make sure that if you remove one, there are not any configurations using that resource.
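
Before removing a logical port or vNIC, a quick check along these lines (using the hypothetical names from the example above) confirms nothing is still using it:

  WRKCFGSTS CFGTYPE(*LIN) CFGD(SRIOV01)                /* verify the line is not active */
  ENDTCPIFC INTNETADR('10.1.1.10')                     /* end any interface on the line */
  VRYCFG CFGOBJ(SRIOV01) CFGTYPE(*LIN) STATUS(*OFF)
  DLTLIND LIND(SRIOV01)                                /* optionally delete the line description */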

What if you need help implementing SR-IOV or vNIC on an IBM i?

Whether you have a large environment or small, implementing new technologies can be challenging.  If you need help beyond the available documentation, the IBM i Technology Services team (formerly known as Lab Services) is available to help with implementation planning, execution, and knowledge transfer.  See https://www.ibm.com/it-infrastructure/services/lab-services for contact information or speak to your IBM Sales Representative or Business Partner.  If you are planning a new hardware purchase, you can include implementation services by the Technology Services team in your purchase.

Disclaimer

I am an employee of IBM on the IBM i Technology Services team (formerly known as Lab Services).  The opinions in this post are my own and don't necessarily represent IBM's positions, strategies, or opinions.

Selected References

I often find that researching topics related to Power Systems provides a wealth of information relating to AIX and VIOS, and substantially less that relates directly to IBM i.  Having spent a few years administering AIX systems, I am familiar with the many excellent AIX blogs that are available.  Many of these references are very AIX-focused, but don't let that dissuade you from reading them -- they are also excellent resources for IBM i administrators.

 

IBM Redpaper: IBM Power Systems SR-IOV Technical Overview and Introduction

https://www.redbooks.ibm.com/abstracts/redp5065.html

 

IBM Support: Configuring Ethernet Link Aggregation

https://www.ibm.com/support/pages/configuring-ethernet-link-aggregation

 

IBM Community: SR-IOV FAQ

https://community.ibm.com/community/user/power/viewdocument/sr-iov-vnic-and-hnv-information

 

AIX for System Administrators – SR-IOV & vNIC summary pages

http://aix4admins.blogspot.com/2016/01/sr-iov-vnic.html

http://aix4admins.blogspot.com/2017/03/vnic_20.html

 

YouTube – Replay of the May 28th Power Systems Virtual User Group webinar covering Single Root I/O Virtualization (SR-IOV), presented by expert Chuck Graham

https://youtu.be/1ANyxQaSXOI


TL;DR 

SR-IOV lets you share Ethernet adapter cards across multiple IBM i partitions without using VIOS.  vNIC adds automatic active/passive failover if you also use VIOS.

 

2 comments:

  1. Hi Vincent,

    I've noticed that if I create the vNIC adapter without specific VLAN tagging, on IBM i I need to create the LIND with "Ethernet standard=*ALL" to get traffic flowing. If I create the LIND using, let's say, "Ethernet standard=*ETHV2" and then add a VLAN on the interface, traffic does not flow.

    Do you know why this happens?

    Regards,
    Tsvetan

    Reply: Sorry, I do not. I could not find any related information when I checked, either. Different SR-IOV adapters have different behavior. I have seen APARs where any change to a logical port tends to stop traffic until a vary off/on, so it might not be the Ethernet standard specifically, but rather the vary off/on needed to change it that "fixed" the issue. I would recommend opening a support ticket to get the real experts on the case. By the way, which adapter feature code?

