
My first LeftHand iSCSI VI architecture

I’m currently reviewing a design for a new virtual infrastructure. The VI uses multiple 10 Gb links to connect to a very large LeftHand SAN.
I’m more of a Fibre Channel guy, but I believe this solution will smoke most mid-range FC SANs. I cannot wait to deploy the VI on the SAN.
First, though, I need to get used to some differences between iSCSI and Fibre Channel configurations.

The “problem”, or rather my latest challenge, is creating a LUN provisioning scheme where multiple clusters can connect to all the LUNs when a disaster occurs and a cluster has failed. LeftHand presents the LUNs as separate targets instead of using the LUN ID as the unique identifier. I’m used to designing a LUN ID scheme per cluster; that way, if a cluster fails, the “destination” cluster can connect to the LUNs of the failed cluster using the same LUN IDs as the original cluster.

But when a LeftHand LUN is presented to the ESX server, it is addressed by a unique target ID instead of a unique LUN ID (e.g. vmhba1:2:0).
I have done some testing and discovered that the assigned target ID can differ from ESX server to ESX server.
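
To make the difference concrete, here is a minimal sketch of how an ESX runtime name of the form vmhbaC:T:L decomposes, and why per-host target numbering undermines a LUN-ID-based scheme. The adapter name and target numbers below are made up for illustration; they are not from my actual environment.

```python
# Hypothetical runtime paths for the *same* LeftHand volume as seen
# from two different ESX hosts. Runtime name format: vmhba<adapter>:<target>:<lun>.
paths = {
    "esx01": "vmhba40:2:0",
    "esx02": "vmhba40:5:0",  # same volume, but a different target ID on this host
}

def parse(path):
    """Split a vmhbaC:T:L runtime name into (adapter, target, lun)."""
    adapter, target, lun = path.split(":")
    return adapter, int(target), int(lun)

for host, path in paths.items():
    adapter, target, lun = parse(path)
    print(f"{host}: adapter={adapter} target={target} lun={lun}")

# With an FC array the LUN ID (the last field) is the stable part you
# design your scheme around. Here every LeftHand volume shows up as
# LUN 0 behind its own target, and the target number is assigned per
# host, so it cannot serve as a cluster-wide identifier.
```

The point of the sketch: the only field you would normally key a provisioning scheme on (the LUN ID) is always 0, and the field that does vary (the target ID) is not guaranteed to be consistent across hosts.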

I’m curious whether the target ID is used when creating the UUID of the VMFS datastore.
And I’m especially interested in what will happen when multiple ESX hosts communicate with the same LUN while each host uses a different “path”.

Maybe there isn’t a problem at all and different target IDs will work just fine, but it seems I need to stop thinking in FC solutions and get used to iSCSI “quirks”.

I’ve read the field guide for VMware infrastructures and googled terms like “iSCSI LUN schemes”, but I cannot seem to find any real-life scenarios.
Maybe my Google skills are pitiful at the moment; perhaps someone can shed some light on this and share how they solved this “problem”.

  1. Ken Cline
    March 30, 2009 at 9:17 pm

    Hey Frank,

    Nice article – please keep us updated on how you progress. I’m interested in the solution to your (possible) problem!


  2. Paul Petty
    April 14, 2009 at 1:04 pm

    That’s a good question, and one that I’m just discovering for myself too. We have an ESX cluster connected to several HP FC EVAs at present, but we also have a large(ish) LeftHand iSCSI SAN from which we want to provision some data volumes for VMs. When we present multiple LUNs from the iSCSI SAN to each host, they all appear as LUN ID 0 but with different iSCSI target IDs. Like you, coming from a fibre world, this seems strange and very off-putting to me.
    I carried on testing and added 3 LUNs to each node in the cluster, then created a VMFS on the first LUN. To my surprise this new VMFS worked and appears on all of the nodes that the LUN is presented to, and likewise for the additional VMFS disks too.
    All seems well in testing so far, but if there is any definitive answer out there I’d like to hear it.


  3. Ray
    June 7, 2009 at 12:43 pm

    We’ve only used iSCSI, with a combination of SANmelody (single and multiple paths/HA) as well as EqualLogic. While everything “just worked”, I saw what you mentioned, and no amount of Googling helped (as you found out).
    I even went to the point of making sure the Send Targets were in the same order on each host; nope, still the same (well, different 🙂).

    I’d be interested to know if there is a doc on this somewhere. It’s been over a year for us now with this setup, and all good!


  4. Beto
    September 4, 2009 at 12:49 pm

    I suppose one of the easiest ways to verify this is to test it before it goes into production. Set up the various nodes of your cluster with some test LUNs, then go yank a cable or otherwise instigate a failover and see how things work.

    Also, I’m not sure if this applies to LeftHand, but I know that with other vendors’ storage it has been shown that NFS-connected (NAS) datastores can actually get better performance than iSCSI LUNs. This may be something else to consider if you’ve got cycles to burn in your testing phase, although if you’re coming from an FC world this may also go against everything you know 🙂 (see this thread as a starting point in the debate: http://communities.vmware.com/message/737269)

    Best of luck in your deployment!
