PowerVM VIOS for IBM i

 

In this post, I will discuss the pros and cons of creating a completely virtualized IBM i environment with redundant VIOS (Virtual I/O Server).  You can just look at the name of this website to understand where I stand on that issue.  Many IBM i administrators try to avoid VIOS for several reasons.  To be completely honest, a LONG time ago I was even one of them.  That was a mistake.  I want to make the case for why you should consider VIOS for I/O virtualization on your next system.

In the present day, there are many options for virtualizing the workload on an IBM Power server.  The options range from no virtualization at all (a non-partitioned system) to having every I/O device and processor completely virtualized and mobile.  According to the 2022 Fortra (formerly HelpSystems) survey, 22% of you have a single partition and 25% have two partitions.  If that’s you, you probably don’t need VIOS... yet. 

It is also common to find particularly critical partitions with dedicated processors and dedicated I/O resources on the same Power servers as fully virtualized partitions that are sharing resources. 

I’m a big fan of virtualizing everything, but I understand that is not always optimal.  Fortunately, PowerVM has the flexibility to provide the right choice for you on a partition-by-partition basis.

Why should you virtualize I/O? 

Ask yourself a question:  If you have more than one partition, why don’t you buy a separate Power system for each partition? 

Your business probably requires multiple partitions for a reason: workload splitting, different applications, development/testing environments, etc.  You also have good reasons to consolidate your separate workloads onto a smaller number of more powerful systems.  Usually, those reasons relate to things like cost, allowance for growth, limited floor space, power, or cooling requirements.

The same reasons apply to why you should virtualize your I/O resources.  Ethernet infrastructure (especially 10G) is a limited resource.  Switches, cabling and SFPs all add to expenses and complexity.

Sharing Fibre Channel ports for storage also reduces the number of ports needed on SAN switches, as well as the amount of cabling.  This saves money and time.

If you use external (SAN) storage, you can even use Live Partition Mobility (LPM) to move running partitions between physical servers.  This is a very common practice in the AIX world, but fairly rare for IBM i.  More to come on that.
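
To give you a feel for how an LPM move works, it is driven from the HMC, either in the GUI or from the command line.  Here is a minimal, hedged sketch of the command line version; the managed system names (P10-A, P10-B) and the partition name (PROD1) are made up for illustration.

# Validate that partition PROD1 can move from server P10-A to P10-B
migrlpar -o v -m P10-A -t P10-B -p PROD1

# If validation passes, perform the live migration
migrlpar -o m -m P10-A -t P10-B -p PROD1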

External storage also allows you to leverage technologies such as FlashCopy to create backups with almost zero downtime, or to create test or reporting copies practically instantly.  It will also greatly simplify server migrations and enable storage-based replication for High Availability and Disaster Recovery solutions.  I’ll write a future article that delves deeper into the benefits of external storage, as it is a technology that deserves a deep dive.

When you have a fully virtualized PowerVM infrastructure in place, creating a new partition becomes a very simple thing.  There is no longer any need to assign any physical resources.  Just create new virtual resources with the HMC GUI and your partition (IBM i, AIX, or Linux) is ready to go.  Okay, you might need to do some zoning and maybe assign some storage before you can use it, but the partition will be ready to go.
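
If you prefer scripting, the HMC command line can do the same thing.  Here is a rough, hedged sketch of creating an IBM i partition with shared processors; the system and partition names are made up and the attribute list is abbreviated, so treat it as a starting point rather than a recipe.

# Create an IBM i partition with shared (virtualized) processors and no physical I/O
mksyscfg -r lpar -m P10-A -i "name=NEWIBMI,profile_name=default,lpar_env=os400,min_mem=8192,desired_mem=16384,max_mem=32768,proc_mode=shared,min_proc_units=0.5,desired_proc_units=1.0,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=uncap"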

Redundancy is critical

Proper virtualization leverages redundancy to improve reliability.  Ideally, every virtualized resource should have a backup. 

Virtual Ethernet connections should be based on vNIC with multiple backing adapters for automatic failover, or Shared Ethernet Adapters backed by multiple physical adapters in multiple VIOS.  Each adapter should connect to the network via separate network switches.  Eliminate all single points of failure and you will eliminate many potential problems before they happen.
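
To give you a feel for the VIOS side of this, here is a hedged sketch of creating a Shared Ethernet Adapter with failover on one VIOS.  The device names (ent0 physical adapter, ent4 trunk virtual adapter, ent5 control channel, ent6 resulting SEA) are examples from a hypothetical configuration, and recent VIOS levels can negotiate failover without a dedicated control channel, so check the documentation for your level.

# Create a Shared Ethernet Adapter that bridges the virtual network to the physical adapter,
# with automatic failover to the matching SEA on the second VIOS
mkvdev -sea ent0 -vadapter ent4 -default ent4 -defaultid 1 -attr ha_mode=auto ctl_chan=ent5

# Confirm the failover attributes on the new SEA device
lsdev -dev ent6 -attr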

Storage should have multiple paths through multiple Fibre Channel cards, owned by multiple VIOS partitions, connected through multiple SAN switches (fabrics) to multiple storage ports.  Again, eliminate those single points of failure.
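
On the VIOS side, a couple of read-only commands make it easy to confirm that the virtual Fibre Channel plumbing really is spread across multiple physical ports.  A quick hedged example; the output format varies slightly by VIOS level.

# Physical Fibre Channel ports, their fabric connectivity, and available NPIV capacity
lsnports

# Which client virtual Fibre Channel adapters are mapped to which physical ports
lsmap -all -npiv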

A properly implemented virtual infrastructure is more reliable than individual physical adapters directly mapped to partitions.

Don’t fear the VIOS

If I had any musical talent, I’d make a version of the classic “Don’t Fear the Reaper” song as “Don’t Fear the VIOS”.  I don’t, so I’ll just stick with text.  Trust me.  It’s better this way.

Many IBM i administrators want to avoid VIOS because it is based on AIX, which is an unfamiliar technology.  As I mentioned before, I was one of those until I spent a few years at a company which used VIOS extensively.

Let me be very clear about this.  AIX guys are NOT smarter than IBM i guys.  They just understand a different command syntax.  They might be smarter than Windows guys, but who isn’t, right?

AIX users should NOT be the only ones that benefit from VIOS in their environments.  VIOS is intended to be implemented as an appliance, similar to the HMC, but exclusively in software.  A connection to the HMC is the primary means of configuration.  There is also a command line environment consisting of a subset of simplified AIX commands plus some commands that are specific to VIOS.  It is well documented with both online help and manuals, but you will rarely need to use it.

The fact is, once you have done the basic install of VIOS, all your ongoing monitoring and configuration can be completed from the modern Enhanced HMC GUI.  If you want to add a partition, map a new Fibre Channel port, configure a new vNIC, etc., you do it all with clicks in a web interface.  The only time you MUST use the command line on the VIOS is for a few commands during an install and to install software updates.  Software updates are usually a painless process that involves an install to an alternate boot disk and a simple reboot to activate.  The alternate disk install also means the updates are completely reversible in case of problems.  Remember that you want to have redundant connections to multiple VIOS, so that reboot will not be disruptive to your environment. 
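
To illustrate just how small that command line exposure is, here is a hedged sketch of one common update flow.  The disk name and directory are examples, and you should always follow the instructions that come with the specific fix pack and VIOS level.

# Clone the running VIOS rootvg to a spare disk so you have a bootable fallback
alt_root_vg -target hdisk1

# Apply the downloaded update filesets, then reboot the VIOS to activate them
updateios -dev /home/padmin/update -install -accept
shutdown -restart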

I should mention that just because you usually don’t have to use the command line interface doesn’t mean you won’t want to use the command line interface.  There is a massive amount of information to be had from those simple commands.  Watch for a future post where I publish and explain some of my favorite information gathering VIOS commands.
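
As a small preview, these are the kinds of commands I mean.  All of them are read-only and safe to run from the padmin shell.

ioslevel            # the VIOS software level
lsdev -virtual      # the virtual devices owned by this VIOS
lsmap -all          # virtual SCSI mappings to client partitions
lsmap -all -npiv    # virtual Fibre Channel (NPIV) mappings
lsnports            # physical Fibre Channel ports and NPIV capacity
errlog              # the hardware and software error log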

The benefits of VIOS outweigh the costs, especially if you are using external storage.

Licensing topics

Fun fact: you are probably already licensed for VIOS.  PowerVM is required for partitioning, and all editions include VIOS.  If you have PowerVM licenses for your server, you are already entitled to install VIOS.  You can download it from IBM Entitled System Support by going to “My Entitled Software”, then “By Product”, and selecting 5765-VE3. 

Another important consideration for those of you with extra processors not licensed for IBM i: VIOS is not IBM i, so you do not need IBM i licenses for the processors running VIOS.  That means the processor overhead of handling the I/O virtualization carries no premium beyond the cost of activating the processor.  You can make sure you are in compliance by using HMC shared processor pools to limit the IBM i partitions to the number of licensed processors, and putting your VIOS (and Linux) partitions in a separate uncapped pool.
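
As a hedged example of what that looks like from the HMC command line (the system and pool names are made up, and the exact attribute names can vary by HMC level, so verify against the lshwres/chhwres documentation):

# List the shared processor pools on the server and their current limits
lshwres -r procpool -m P10-A

# Cap the pool that contains the IBM i partitions at the number of licensed processors
chhwres -r procpool -m P10-A -o s --poolname IBMI_POOL -a "max_pool_proc_units=4"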

Another virtualization topic specific to IBM i is the way the O/S and most applications are licensed.  I mentioned earlier that Live Partition Mobility, moving a running partition to a different server, is a common practice for AIX shops but pretty rare for IBM i.  I think one of the key reasons that has been true historically is that the AIX O/S and its applications are generally not tied to a particular machine, while the IBM i O/S and its applications are almost always licensed to a specific processor serial number.  That means moving an active IBM i partition to another Power server can result in license problems.  Fortunately, IBM recently announced Virtual Serial Numbers that can be attached to a partition and migrate with it.  If Live Partition Mobility appeals to you, look into getting a Virtual Serial Number. 

I should mention that since LPM moves memory over a network to the other server, LPM on IBM i may require a much more robust network environment than an equivalent AIX partition.  IBM i uses single-level storage, so it tends to have a large amount of very active memory.  There are certainly memory size and activity limits that could preclude the use of LPM for very large environments.  As always, your environment matters, and your results may vary.

 

iVirtualization (AKA i hosting i)

There is another option for virtualizing I/O and disk resources for a client partition: the iVirtualization functionality built into IBM i since V6.1.  This functionality allows you to virtualize Ethernet adapters owned by the host partition and to create virtual disk objects that are shared with a client partition as virtual SCSI disks.

The *NWS* commands that support this are all native IBM i commands that will look familiar to IBM i administrators.  Don’t kid yourself.  To someone who has never used them, they are no less complex than the corresponding VIOS commands.
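
For the record, this is roughly what that looks like.  It is a hedged sketch with made-up object names, partition number, and sizes, not a complete procedure; the NWSD resource name and the rest of the values all depend on your configuration.

/* Create a network server description for the hosted client partition */
CRTNWSD NWSD(CLIENT1) RSRCNAME(CTL01) TYPE(*GUEST) PTNNBR(3)

/* Create a 100 GB storage space and link it to the NWSD as a virtual disk */
CRTNWSSTG NWSSTG(CL1DSK1) NWSSIZE(102400) FORMAT(*OPEN)
ADDNWSSTGL NWSSTG(CL1DSK1) NWSD(CLIENT1)

/* Vary on the NWSD to present the disk to the client partition */
VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)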

In some limited situations, iVirtualization might be a viable option.  For example, on a small system where the internal NVMe drives sit on a single backplane and cannot be split between multiple VIOS for redundancy. 

Another case where iVirtualization might be preferred is a small Linux test partition hosted from an existing IBM i partition with internal disk and no VIOS infrastructure.

I would not use it with external storage in any case, as you would lose all of the benefits of multipathing.

Now here are the primary reasons I would recommend VIOS over iVirtualization:

- License costs.  Hosting on IBM i means paying for an IBM i license for work that could be free.

- Performance.  The numbers I have seen consistently show that client partitions do not perform as well as an equivalent VIOS configuration.  This is especially problematic with an IBM i client, because performance scales with the number of virtual disks, and more disks mean more objects and more overhead on the host.

- Completely manual configuration.  The HMC GUI configuration that is available with VIOS does not work with iVirtualization, so everything has to be configured with commands.

- No redundancy.  When the host is down, the clients are down.  To be fair, you could use multiple host partitions and mirror disks in the client, but you can do that with VIOS too.

- No LPM.  Live Partition Mobility is not supported for clients of iVirtualization.

- No development.  If you look at the table of changes in the IBM i Virtualization Summary referenced below, you will see that there has been only one change to iVirtualization since 2015, compared to constant development and improvement for VIOS.

 

What if you need help implementing VIOS with IBM i?

Whether you have a large environment or small, implementing new technologies can be challenging.  If you need help beyond the available documentation, the IBM i Technology Services team (formerly known as Lab Services) is available to help with implementation planning, execution, and knowledge transfer.  See https://www.ibm.com/it-infrastructure/services/lab-services for contact information or speak to your IBM Sales Representative or Business Partner.  If you are planning a new hardware purchase, you can include implementation services by the Technology Services team in your purchase.

Disclaimer

I am an employee of IBM on the IBM i Technology Services team (formerly known as Lab Services).  The opinions in this post are mine and don't necessarily represent IBM's positions, strategies, or opinions.

 

References:

2022 IBM i Marketplace Survey Results - Fortra

https://www.fortra.com/resources/guides/ibm-i-marketplace-survey-results

 

IBM i Virtualization Summary

https://www.ibm.com/support/pages/node/1135420

 
