vSphere, HP EVA and path policies


HP EVA storage arrays come in two types: active/passive and active/active. The active/active EVA arrays can handle I/O requests on both storage processors. With ESX 3.5 and an active/active EVA array you had two options for the path selection policy: Fixed and MRU (Most Recently Used). Most people used the Fixed path policy to manually load balance I/O over both storage processors. This process is very labor-intensive (it has to be done on every ESX host) and is prone to errors (e.g. a fixed path configured to the wrong storage processor, or path thrashing).

With the release of vSphere and the vStorage APIs for Multipathing, the round robin path selection policy is now fully supported. The HP EVA active/active storage arrays support ALUA, which stands for Asymmetric Logical Unit Access. ALUA applies when the access characteristics of one port differ from those of another port. ALUA allows a LUN to be accessed via its optimized path (through the owning storage processor) and via a non-optimized path (through the non-owning storage processor). I/O through the non-owning storage processor, i.e. the non-optimized path, comes with a performance penalty because the I/O has to be transmitted over the internal connection between the storage processors, which has limited bandwidth.
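You can verify from the host side that a LUN was claimed by the ALUA-aware plug-in. A minimal sketch, assuming the vSphere 4 esxcli namespace and a placeholder device ID (substitute your own naa. identifier):

```shell
# List all NMP devices; EVA LUNs claimed through ALUA should report
# "Storage Array Type: VMW_SATP_ALUA"
esxcli nmp device list

# Show the paths of a single LUN (the naa. ID below is a placeholder);
# the output distinguishes the optimized paths (owning storage processor)
# from the non-optimized ones via the ALUA target port group state
esxcli nmp path list --device naa.600508b4000f0e2d0000500000ab0000
```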

With ALUA the ESX host is aware of the non-optimized paths. So when you use the round robin path policy, ESX will load balance the I/O over the optimized paths (the paths to the storage processor that owns the LUN) and use the non-optimized paths only in case of a failure on the optimized paths. In my opinion this has two advantages: all optimized paths are used, and manual load balancing isn't necessary anymore, which saves a lot of configuration work.
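Switching an existing LUN to round robin can be done per device. A sketch, assuming the vSphere 4 esxcli syntax and a placeholder device ID:

```shell
# Set the path selection policy for one LUN to Round Robin;
# with the ALUA SATP, I/O will then rotate over the optimized paths only
# (the naa. ID is a placeholder for your own device ID)
esxcli nmp device setpolicy --device naa.600508b4000f0e2d0000500000ab0000 --psp VMW_PSP_RR
```

Remember this is a per-host setting, so it has to be repeated on every ESX host in the cluster.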

So there you have it! I recommend using the round robin path policy with an active/active EVA array. If you don't want to use the round robin path policy, I would recommend the MRU policy. The MRU policy is ALUA aware and will also use the optimized paths.
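Rather than changing every LUN individually, the choice can also be made the default for everything the ALUA plug-in claims. A sketch, again assuming the vSphere 4 esxcli syntax (device ID is a placeholder); the second command applies the IOPS=1 tweak suggested in the comments below:

```shell
# Make Round Robin the default path selection policy for all LUNs claimed
# by the ALUA storage array type plug-in, so newly presented EVA LUNs pick
# it up automatically (already-claimed LUNs may still need a per-device
# setpolicy or a reclaim)
esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR

# Optionally lower the Round Robin I/O operation limit from the default
# 1000 to 1, so the host switches paths after every command
# (the naa. ID is a placeholder for your own device ID)
esxcli nmp roundrobin setconfig --device naa.600508b4000f0e2d0000500000ab0000 --iops 1 --type iops
```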

9 Responses to vSphere, HP EVA and path policies

  1. Eric van der Meer says:

    Thanks for sharing this great info with us!

  2. Duncan says:

    I think either MRU or RR makes more sense than Fixed in this case. If you use MRU you are actually using ALUA. When you pick your preferred controller on a per-LUN basis you will have load balancing.

  3. Ted Steenvoorden says:

    Hi Duncan, I agree with you, but for some reason the HP engineer recommends Fixed. I will try to clear this up.

  4. Matthias says:

    Hi Ted,
    HP actually recommends MRU or Round Robin with the EVA 4/6/8x00 arrays.
    We also think you should set the IOPS counter to 1 instead of 1000 to enable true load balancing.
    You don’t need any multipathing plug-in for HP’s arrays and vSphere 4; the default plug-ins work with HP’s arrays. There will be a Best Practices paper for EVA and XP soon at http://www.hp.com/go/hpcft .

  5. Good post. Most of this information will be detailed in a white paper I am authoring. Until then I wanted to shed some light on the confusion about MRU vs. Fixed. In the ESX 3.5 days we recommended the use of Fixed with the EVA because it provided a more deterministic way to know where your I/O will end up after, say, a controller failure and restore. As mentioned in this post, ESX 3.5 wasn’t ALUA compliant. ESX 4.0 is ALUA compliant; furthermore, Round Robin and MRU are ALUA aware, so they will both give priority to the active-optimized paths to the LUN. Fixed, on the other hand, ignores all ALUA settings in ESX 4.0 and sends I/O down the selected path.

    In the EVA configuration/user guide you will find the recommendation that the following LUN presentation policies are supported with VMware:
    – No Preference
    – Path A or B failover/failback

    I personally recommend that customers always take the time to use the latter when configuring LUNs, as it makes things much simpler and easier later. When set to No Preference, LUN ownership will alternate per LUN between the two controllers. After a failover, say a controller fails and is restored, with MRU and the No Preference setting it is really NOT guaranteed that your I/O access paths will be the same, because the controllers will again arbitrarily decide which LUN is owned by which controller. With the Fixed path policy, however, you will reuse the same I/O paths as before the failover, and even if they are not optimal at first, the EVA will eventually make the path optimal. As mentioned in this post, using Fixed is much more tedious because fixed paths must be matched on all servers and all LUNs. So for simplicity, the LUN presentation should always be set to Path A or B failover/failback, alternating LUNs between Path A and Path B. MRU is now more suitable than Fixed because in a large ESX cluster it is very likely that more than one EVA port on the optimal controller for a LUN will be used for I/O by servers in the cluster, without additional administrator intervention. But again, Round Robin really should be the setting of choice to maximize port utilization and significantly reduce configuration time.

    R/

  6. Ted Steenvoorden says:

    Thanks for your detailed response. Through VMware support I got the response of an HP engineer who recommended the Fixed path policy, so I decided to mention this in the article as HP’s recommendation. Guess there are more recommendations. 😉 I have removed that line from the article.

  7. Savannah says:

    Awesome blog!

    I thought about starting my own blog too, but I’m just too lazy, so I guess I’ll just have to keep checking yours out.
    LOL,
