XenServer 6.0.2 showing only one active path

Only one path active on a XenServer host that is part of the pool

While creating a new SR on XenServer 6.0.2 against an EMC VNX 5300, one host in the pool shows only one active path while all of the other hosts show all of their paths as active. Running multipath -ll on that host also shows only one path, which indicates that not all paths on that host have access to the LUN.

From the host that has the issue, log off all the active paths with iscsiadm -m node -U all, and then log back in with iscsiadm -m node -L all.

Restart the multipathd service with /etc/init.d/multipathd restart. This will bring all of the paths on the host back to active, as shown in the sketch below.
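
A minimal sketch of the full recovery sequence, run only on the affected host (this assumes the stock open-iscsi and device-mapper multipath tools shipped with XenServer 6.0.2):

# Log out of every iSCSI session on this host
iscsiadm -m node -U all

# Log back in to all recorded targets
iscsiadm -m node -L all

# Restart multipathd so that it re-evaluates the paths
/etc/init.d/multipathd restart

# Verify that every path to the LUN now shows as active
multipath -ll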

How to map the iSCSI interfaces of XenServer 5.6/6.0.2 to an EMC VNX 5300

First, define a storage interface on XenServer: choose “Configure” under “Management Interfaces” and then select “New Interface”. Specify a name, choose the NIC that will be configured for storage traffic, and supply an IP address. This needs to be repeated for every interface that is connected to the storage; an equivalent xe CLI sketch follows below.
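
If you prefer the CLI, roughly the same configuration can be done with the xe command. This is only a sketch: the hostname, NIC device (eth1), UUID placeholder, and addresses are examples, so substitute your own values.

# Find the PIF (physical interface) UUID of the NIC to be used for storage traffic
xe pif-list host-name-label=<hostname> device=eth1 params=uuid

# Assign it a static IP on the storage network
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.168.10.11 netmask=255.255.255.0

# Mark it as a dedicated storage interface
xe pif-param-set uuid=<pif-uuid> disallow-unplug=true other-config:management_purpose="Storage"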

Once this is done, perform discovery of the targets and log in to the EMC array. This can be done via the CLI as well as via XenCenter.

In my experience, XenCenter is the easier option.

To do this via XenCenter, select “New Storage” and choose “Software iSCSI”. Enter a name and then, under Location, provide the following information:

Target host: this will be the IP address of the target Storage Processor (controller). Specify all of the IP addresses, separated by commas.

XenServer New Storage Repo

Target IQN: here you will see the IQNs of the target storage processors. If the array has 4 ports, you will see 4 IQNs. Choose the wildcard entry highlighted with an asterisk (*).

This will log in to all of the targets on the EMC VNX box so that they can be mapped.

From the command line, the equivalent discovery and login can be run from the host, as sketched below.
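
A sketch of those CLI steps (the storage processor address 192.168.10.50 is only a placeholder; repeat the discovery against each SP port you configured):

# Discover the targets presented by the VNX storage processor
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Log in to all of the discovered targets
iscsiadm -m node -L all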

Once all of the messages report a successful login, open EMC Unisphere and select “Connectivity Status” under “Hosts” on the left-hand side. This opens the Host Initiators window, where you will find one entry showing just an IQN; this is the initiator that has just logged in. Select “Edit” at the bottom and provide the “host name” and “IP address”; the IP address is the XenServer management IP. Make sure you choose the initiator type “CLARiiON Open” and failover mode “4 (ALUA)”. ALUA is the latest failover mode as per EMC.

EMC VNX Replicator (video) and setting up a mirrored LUN with EMC Unisphere

Video source: YouTube

Article source: TechRepublic

Setting up a mirrored LUN with EMC Unisphere

Lauren Malhoit takes you through the basic setup for replicating a LUN to a recovery site using EMC’s Unisphere GUI.

A few weeks ago I wrote a blog about setting up vSphere SRM 5.0 with EMC MirrorView. Now that’s all well and good, but if you don’t have a LUN from your protected site being replicated to your recovery site, none of that will work! LUN is an acronym for Logical Unit Number. It’s basically a portion of your storage, created on the storage side of things, that you can present to servers (in the previous case, ESXi servers). In this blog, I’ll take you through the basic setup for replicating a LUN to the recovery site using EMC’s Unisphere GUI. I’ll be using some SRM terms like “protected” and “recovery” sites to make things clearer.

I’m assuming you already have a LUN set up and presented to the ESXi servers that you’d like to replicate. This would be the LUN your production servers are already sitting on. So, at this point, just open a browser and log in to the Unisphere GUI where you can find this particular LUN. You should be able to set up Unisphere so that you can see both sites and simply click on the site you need to configure as you follow the steps below.

On the protected side

  1. Go to Storage>>Pools/RAID Groups, click on the RAID Groups tab and find the LUN within the RAID group.
  2. Right click on that LUN and click Properties.
  3. Look at the user blocks under “Capacity” and write down that number for later use. Make sure you note the exact number of user blocks (see the worked example after this list)!
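
As a rough worked example of why the exact count matters (the 100 GB figure is hypothetical): Unisphere reports user capacity in 512-byte blocks, so a 100 GB LUN is 100 × 1024 × 1024 × 1024 / 512 = 209,715,200 user blocks. MirrorView expects the secondary image to match the primary’s block count, so you enter this exact number later rather than a rounded GB value.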

On the recovery side

  1. Highlight the RAID group with enough contiguous storage to support the LUN you’re trying to replicate.  Obviously, you’ll need the same amount of space on both sides.
  2. Right click that RAID group and click the Create LUN link.
  3. Select the RAID type (for example, RAID 5).
  4. Under LUN properties enter the User Capacity that you noted before in Step 3 on the protected side and change the dropdown menu to say “blocks.”  Click Apply.

Back to the protected side…

  1. Hover over the Replicas tab at the top and click on Mirrors.
  2. On the left side click on the Create Mirror link.
  3. Choose the Mirror Type (synchronous or asynchronous).
  4. Name the mirror; for this example I’ll just call it SRMmirror1.
  5. Under Primary Storage expand one of the storage processors (SP A or SP B) and choose the LUN you would like to replicate (the one whose User Capacity you noted earlier).
  6. Now right click on the mirror you just created, SRMmirror1, and click on Add Secondary Image.
  7. As you did in the previous two steps, expand the storage processors, this time on the recovery side, to find the LUN on the recovery site. The only LUN that should appear is the one you just created with the exact number of blocks you need. Select that LUN and click OK.

Back to the recovery side…

  1. Go to Storage>>Pools/RAID Groups.
  2. Click on the RAID groups tab and select the RAID Group where your LUN is.
  3. Right click on the LUN and select Add to storage group.  Move the LUN over to the storage group that will automatically present it to the ESXi servers.

It should start syncing at this point.  Depending on the amount of data and the link you have between your two sites this could take several days or weeks to complete the initial sync.  You can play with the sync settings as well to tell it how often you’d like it to sync and whether you’d like it to sync when the previous sync starts or after it ends.

This is a very basic description of what needs to be done to replicate LUNs using EMC Unisphere and MirrorView.  There are many more configurations that go into setting up storage properly, and especially using MirrorView snapshots (to fully utilize SRM failover testing), etc. I recommend getting help from your storage admin if you have one!  If you don’t, I would definitely suggest reading through the various admin guides and consulting with someone knowledgeable on this subject before jumping in head first.

About Lauren Malhoit

Lauren Malhoit has been in the IT field for over 10 years and has acquired several data center certifications. She’s currently a Technology Evangelist for Cisco focusing on ACI and Nexus 9000.

VNX 5300/VMware: Troubleshoot ESXi connectivity to the SAN via iSCSI

Troubleshoot VMware ESXi/ESX to iSCSI array connectivity:

Note: A rescan is required after every storage presentation change to the environment.

1. Log in to the ESXi/ESX host and verify that the VMkernel interface (vmk) on the host can vmkping the iSCSI targets with this command:

# vmkping target_ip

If you are running an ESX host, also check that the Service Console interface (vswif) on the host can ping the iSCSI target with:

# ping target_ip

Note: Pinging the storage array only applies when using the Software iSCSI initiator. In ESXi, ping and ping6 both run vmkping. For more information about vmkping, see Testing VMkernel network connectivity with the vmkping command (1003728).
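
As a small sketch for confirming which VMkernel interface carries the iSCSI traffic before pinging (interface names such as vmk1 are placeholders), list the VMkernel NICs and their IP addresses:

# esxcfg-vmknic -l

On ESXi 5.1 and later, vmkping can also be forced out of a specific VMkernel interface, which helps confirm that the iSCSI-bound vmk (rather than the management vmk) can reach the target:

# vmkping -I vmk1 target_ip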

2. Use netcat (nc) to verify that you can reach the iSCSI TCP port (default 3260) on the storage array from the host:

# nc -z target_ip 3260

Example output:

Connection to 10.1.10.100 3260 port [tcp/http] succeeded!

Note: The netcat command is available with ESX 4.x and ESXi 4.1 and later.

3. Verify that the host's Host Bus Adapters (HBAs) are able to access the shared storage. For more information, see Obtaining LUN pathing information for ESX or ESXi hosts (1003973).
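
A sketch of the commands commonly used to check pathing from the host itself (adapter and device names will differ per environment). On ESX/ESXi 4.x, list the paths with:

# esxcfg-mpath -l

On ESXi 5.x, the equivalent information is available through esxcli:

# esxcli storage core path list

Each presented LUN should show at least one path in an active state; a LUN with no paths at all usually points back to masking, zoning, or target login problems.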

4. Confirm that no firewall is interfering with iSCSI traffic. For details on the ports and firewall requirements for iSCSI, see Port and firewall requirements for NFS and SW iSCSI traffic (1021626). For more information, see Troubleshooting network connection issues caused by firewall configuration (1007911).
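
On ESXi 5.x, the host firewall can be checked quickly from the shell (the exact name of the software iSCSI ruleset may vary by build, so treat this as a sketch):

# esxcli network firewall ruleset list

Look for the iSCSI-related ruleset and confirm it is enabled; outbound TCP 3260 must be allowed for the software initiator to reach the array.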

Note: Check the SAN and switch configuration, especially if you are using jumbo frames (supported from ESX 4.x). To test the ping to a storage array with jumbo frames from an ESXi/ESX host, run this command:

# vmkping -s MTUSIZE IPADDRESS_OF_SAN -d

Where MTUSIZE is 9000 minus a 216-byte header, which is 8784, and the -d option indicates “do not fragment”.

5. Ensure that the LUNs are presented to the ESXi/ESX hosts. On the array side, ensure that the LUN IQNs and access control list (ACL) allow the ESXi/ESX host HBAs to access the array targets. For more information, see Troubleshooting LUN connectivity issues on ESXi/ESX hosts (1003955).

Additionally, ensure that the HOST ID on the array for the LUN (on ESX it shows up under LUN ID) does not exceed 255, which is the maximum LUN ID. Any LUN that has a HOST ID greater than 255 may not show as available under Storage Adapters, even though on the array it may reside in the same storage group as the other LUNs that have host IDs of 255 or lower. This limitation exists in all versions of ESXi/ESX from ESX 2.x to ESXi 5.x, and is documented in the configuration maximums guide for the particular version of ESXi/ESX having the issue.

6. Verify that a rescan of the HBAs displays presented LUNs in the Storage Adapters view of an ESXi/ESX host. For more information, see Performing a rescan of the storage on an ESXi/ESX host (1003988).
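
The rescan can also be triggered from the command line. On ESX/ESXi 4.x (vmhba## is a placeholder for your iSCSI adapter):

# esxcfg-rescan vmhba##

On ESXi 5.x, all adapters can be rescanned at once:

# esxcli storage core adapter rescan --all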

7. Verify your CHAP authentication. If CHAP is configured on the array, ensure that the authentication settings for the ESXi/ESX hosts are the same as the settings on the array. For more information, see Checking CHAP authentication on the ESXi/ESX host (1004029).

8. Consider pinging any ESXi/ESX host iSCSI initiator (HBA) from the array’s targets. This is done from the storage array side.

9. Verify that the storage array is listed on the Storage/SAN Compatibility Guide. For more information, see Confirming ESXi/ESX host hardware (System, Storage, and I/O) compatibility (1003916).

Note: Some array vendors have a minimum-recommended microcode/firmware version to operate with VMware ESXi/ESX. This information can be obtained from the array vendor and the VMware Hardware Compatibility Guide.

10. Verify that the physical hardware is functioning correctly, including:

◦ The Storage Processors (sometimes known as heads) on the array

◦ The storage array itself

◦ The SAN and switch configuration, especially if you are using jumbo frames (supported from ESX 4.x). To test the ping to a storage array with jumbo frames from ESXi/ESX, run this command:

# vmkping -s MTUSIZE STORAGE_ARRAY_IPADDRESS

Where MTUSIZE is 9000 minus a 216-byte header, which is 8784.

Note: Consult your storage array vendor if you require assistance.

11. Perform some form of network packet tracing and analysis, if required. For more information, see:

◦ Capturing virtual switch traffic with tcpdump and other utilities (1000880)

◦ Troubleshooting network issues by capturing and sniffing network traffic via tcpdump (1004090)
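
As a starting point for a capture on ESXi 5.x, tcpdump-uw can trace traffic on a VMkernel interface (vmk1 and the output file name are placeholders; capture on the vmk that carries your iSCSI traffic):

# tcpdump-uw -i vmk1 -s 1514 -w /tmp/iscsi-trace.pcap port 3260

The resulting pcap file can then be copied off the host and examined in Wireshark to confirm whether the array is answering the initiator’s login and SCSI traffic.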