Only one active path showing on XenServer 6.0.2

Only one path active on a XenServer host that is part of the pool
While creating a new SR on XenServer 6.0.2 against an EMC VNX 5300, one host may show only 1 active path while all the other hosts show all the active paths. When multipath -ll is run on that host, it also shows only one path. This means that not all paths on that host have access to the LUN.
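A quick way to confirm the problem is to compare the affected host with a healthy pool member (a rough sketch; how many sessions and paths you should expect depends on the number of storage NICs and VNX ports you have):

    iscsiadm -m session    # one line per logged-in iSCSI session
    multipath -ll          # the LUN should show one path per session

On the broken host you will typically see a single session and a single path where the other hosts show several.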

From the host which has the issue, log off all the active paths:

iscsiadm -m node -U all, and then log back in with iscsiadm -m node -L all.

Then restart the multipathd service with /etc/init.d/multipathd restart. This will bring all the paths on the host back to active.
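Put together, the recovery on the affected host looks roughly like this (re-check the paths afterwards):

    # Log out of all iSCSI targets on the affected host
    iscsiadm -m node -U all

    # Log back in to all targets
    iscsiadm -m node -L all

    # Restart multipathd so it rescans the paths
    /etc/init.d/multipathd restart

    # Verify that all expected paths are active again
    multipath -ll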


How to map the iSCSI interfaces of XenServer 6.0.2 to an EMC VNX 5300

This applies to mapping the iSCSI interfaces of XenServer 5.6 and 6.0.2 to an EMC VNX 5300.

First define a storage interface on XenServer. To do that, choose Configure under “Management Interfaces” and then select “New Interface”. Specify a name, choose the NIC that will be configured for storage traffic, and supply an IP address. Repeat this for every interface that is connected to the storage.
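If you prefer the CLI, roughly the same thing can be done per NIC with the xe pif-* commands (a sketch; the device name, UUID, IP and netmask below are placeholders to substitute with your own values):

    # Find the PIF (physical interface) UUID of the NIC to dedicate to storage
    xe pif-list device=eth2 params=uuid,device,IP

    # Give it a static IP on the storage network
    xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=10.0.0.11 netmask=255.255.255.0

    # Optionally mark it as a dedicated storage interface
    xe pif-param-set uuid=<pif-uuid> disallow-unplug=true
    xe pif-param-set uuid=<pif-uuid> other-config:management_purpose="Storage"

Repeat this for each NIC that carries storage traffic.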

Once this is done, perform target discovery and log the host in to the EMC. This can be done via the CLI as well as via XenCenter.

In my experience it is easiest to use XenCenter.

To do this via XenCenter, select New Storage and choose “Software iSCSI”. Give it a name, and then under Location provide the following information:

Target host: the IP address of the target Storage Processor or controller. Specify all the IP addresses, separated by commas.

XenServer New Storage Repo

Target IQN: here you will find the IQNs of the target storage processors. If the array has 4 ports, you will see 4 IQNs. Choose the entry marked with an asterisk (*).

This will log the host in to all the targets on the EMC VNX box so they can be mapped.

From the command line, the following needs to be run on the host in order to log in.
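The commands are roughly the following (a sketch; 10.0.0.50 stands for one of the VNX iSCSI target portal IPs and must be replaced with your own, and the discovery step is repeated per portal):

    # Discover the targets presented by the VNX on this portal
    iscsiadm -m discovery -t sendtargets -p 10.0.0.50

    # Log in to all discovered targets
    iscsiadm -m node -L all

    # Confirm the sessions are up
    iscsiadm -m session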

Once all the messages report a successful login, open EMC Unisphere and select “Connectivity Status” on the left-hand side under “Hosts”. This pops up the Host Initiators window, where you will find an entry whose name is just an IQN, as shown below; this is the initiator that has just logged in. Select “Edit” at the bottom and fill in the host name and IP address, where the IP address is the XenServer management IP. Make sure you choose the initiator type “CLARiiON Open” and failover mode “ALUA (4)”. ALUA is the latest failover mode as per EMC.
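On the XenServer side, ALUA also has to be reflected in the multipath configuration. XenServer ships a default /etc/multipath.conf that normally already contains an entry for EMC CLARiiON/VNX (“DGC”) devices, so check it before changing anything; the stanza below is only an illustration of what such an ALUA-aware device entry commonly looks like, and the exact keywords differ between multipath-tools versions (older builds use prio_callout instead of prio):

    device {
            vendor                  "DGC"
            product                 ".*"
            path_grouping_policy    group_by_prio
            prio                    alua
            failback                immediate
            no_path_retry           60
    }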

How I increased IOPS 200 times with XenServer and PVS

Virtual eXperience

UPDATED INFO HERE: http://virtexperience.com/2014/03/10/an-update-about-my-experience-with-pvs/

In my previous blog post I describe how the new PVS 7.1 cache in RAM with overflow to disk does not give you any more IOPS than cache to disk: http://virtexperience.com/2013/11/05/pvs-7-1-ram-cache-with-overflow-to-disk/

During that test, I discovered that intermediate buffering in the PVS device can improve your performance 3 times on XenServer, but with some more experimenting I got up to 200 times the IOPS on a PVS device booted on XenServer. Here is a little description of what I did and how I measured it.

First, a little disclaimer: these are observations made in a lab environment. DO NOT implement this in production without proper testing.

I’m using Iometer to test, and here is my Iometer setup for this test and my previous cache-to-RAM test:

  • Target Disk: c:\
  • Maximum disk size 204800 (100MB)
  • Default Access specification
  • Update frequency 10 secs
  • Run time 1 minute
