How to map the iSCSI interface of XenServer 5.6/6.0.2 to an EMC VNX 5300

First, define a storage interface on XenServer. In XenCenter, choose Configure under “Management Interfaces” and then select “New Interface”. Specify a name, choose the NIC that will be configured for storage traffic, and supply an IP address. This needs to be repeated for every interface that is connected to the storage.
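
The same configuration can also be done from the XenServer host CLI. The sketch below is a minimal example only; the PIF UUID, device name and IP addressing are placeholders for your own environment.

# List the physical interfaces and note the UUID of the NIC to dedicate to storage
xe pif-list params=uuid,device,IP

# Assign a static IP address to that NIC (values are examples only)
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=192.168.50.11 netmask=255.255.255.0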

Once this is done, perform discovery of the targets and log the host in to the EMC array. This can be performed via the CLI as well as via XenCenter.

In my experience, it is easiest to use XenCenter.

To perform this via XenCenter, select New Storage and choose “Software iSCSI”. Enter a name and then, under Location, provide the following information:

Target Host: the IP address of the target Storage Processor (controller). Specify all of the IP addresses, separated by commas.

XenServer New Storage Repo

Target IQN: here you will find the IQNs of the target Storage Processors. If the array has 4 ports, you will see 4 IQNs. Choose the wildcard entry, which is highlighted with an asterisk (*).

This will log the host in to all of the targets on the EMC VNX box so that LUNs can be mapped.

Alternatively, the login can be performed from the host command line; a sketch of the typical commands is shown below.
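
The original post does not include the exact commands, so the following is a minimal sketch using the open-iscsi tools available in the XenServer control domain; the portal IP and target IQN are placeholders for your own values.

# Discover the targets presented by a VNX iSCSI portal (example IP)
iscsiadm -m discovery -t sendtargets -p 192.168.50.100:3260

# Log in to a discovered target; repeat per portal/IQN,
# or use "iscsiadm -m node --loginall=all" to log in to all discovered targets
iscsiadm -m node -T iqn.1992-04.com.emc:cx.example.a0 -p 192.168.50.100:3260 --login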

Once all of the messages report a successful login, open EMC Unisphere and select “Connectivity Status” on the left-hand side under “Hosts”. This pops up the Host Initiators window, where you will find an entry showing just an IQN, as shown below; this is the newly logged-in initiator. Select “Edit” at the bottom and provide the host name and IP address (use the XenServer management IP). Make sure you choose the initiator type “CLARiiON Open” and failover mode 4 (ALUA). ALUA is the latest failover mode as per EMC.

Microsoft Releases iSCSI Software Target 3.3 to Windows Server 2008 R2 Users – Reposted

Microsoft released on Monday iSCSI Software Target 3.3, a Windows Server 2008 R2 addition that allows for shared block storage in storage area networks using the iSCSI protocol.

According to Microsoft’s announcement, iSCSI Software Target 3.3 is the first release that can be used in a production environment. The product enables “storage consolidation and sharing on a Windows Server by implementing the iSCSI (Internet Small Computer Systems Interface) protocol, which supports SCSI-block access to [a] storage device over a TCP/IP network,” according to the product overview at Microsoft’s Download Center.

This type of storage architecture offers a number of benefits, according to the product overview. It can be used to achieve high availability with Microsoft’s Hyper-V hypervisor using the “live migration” feature. Storage for application servers can be consolidated, including on a Windows failover cluster. Finally, Microsoft iSCSI Software Target 3.3 supports the remote booting of diskless computers “from a single operating system image using iSCSI.”

Microsoft’s team ran this release of the software through extensive testing, particularly with Windows Server failover clusters and Hyper-V, according to the announcement. One scenario involved using Microsoft iSCSI Software Target in a “two-node Failover Cluster,” with 92 Hyper-V virtual machines storing data to one of the nodes. The team introduced a failure in the main node and found that all 92 virtual machines switched to the second node without a noticeable effect on the underlying application.

Microsoft is recommending using Service Pack 1 with Windows Server 2008 R2 for this release of Microsoft iSCSI Software Target. The product can be installed in a Hyper-V virtual machine. It doesn’t work with a core installation of Windows Server 2008 R2.

About the Author

Kurt Mackie is senior news producer for the 1105 Enterprise Computing Group.

CLUSTERING SERVER 2012 R2 WITH ISCSI STORAGE

Wednesday, December 31, 2014

Source: Exit The Fast Lane

Yay, last post of 2014! Haven’t invested in the hyperconverged Software Defined Storage model yet? No problem, there’s still time. In the meantime, here is how to cluster Server 2012 R2 using tried and true EqualLogic iSCSI shared storage.

EQL Group Manager

First, prepare your storage array(s) by logging into EQL Group Manager. This post assumes that your basic array IP, access and security settings are in place. Set up your local CHAP account to be used later. Your organization’s security access policies or requirements might dictate a different standard here.

image

Create and assign an Access Policy to the VDS/VSS volume in Group Manager, otherwise this volume will not be accessible. This will make subsequent steps easier when it’s time to configure ASM.

image

Create some volumes in Group Manager now so you can connect your initiators easily in the next step. It’s a good idea to create your cluster quorum LUN now as well.

image

Host Network Configuration

First configure the interfaces you intend to use for iSCSI on your cluster nodes. Best practice says that you should limit your iSCSI traffic to a private Layer 2 segment, not routed and only connecting to the devices that will participate in the fabric. This is no different from Fibre Channel in that regard, unless you are using a converged methodology and sharing your higher-bandwidth NICs. If you are using Broadcom NICs you can choose either jumbo frames or hardware offload; the larger frames will likely net the greater performance benefit.

Each host NIC used to access your storage targets should have a unique IP address able to reach the network of those targets within the same private Layer 2 segment. While these NICs can technically be teamed using the native Windows LBFO mechanism, best practice says that you shouldn’t, especially if you plan to use MPIO to load balance traffic. If your NICs will be shared (not dedicated to iSCSI alone), then LBFO teaming is supported in that configuration.

To keep things clean and simple I’ll be using 4 NICs: 2 dedicated to LAN, 2 dedicated to iSCSI SAN. Both LAN and SAN connections are physically separated onto their own switching fabrics as well; this is also a best practice.

image
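
As a rough sketch, the dedicated iSCSI NICs can be addressed and tuned from PowerShell. The interface aliases, addresses and jumbo-frame value below are examples only, and the advanced-property value accepted can vary by NIC driver.

# Assign static addresses to the two dedicated iSCSI interfaces (example values)
New-NetIPAddress -InterfaceAlias "iSCSI-A" -IPAddress 192.168.50.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-B" -IPAddress 192.168.51.11 -PrefixLength 24

# Enable jumbo frames only if your switches and array are configured for them
Set-NetAdapterAdvancedProperty -Name "iSCSI-A" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Set-NetAdapterAdvancedProperty -Name "iSCSI-B" -RegistryKeyword "*JumboPacket" -RegistryValue 9014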

MPIO – the manual method

First, start the MS iSCSI service, which you will be prompted to do, and check its status in PowerShell using Get-Service -Name msiscsi.

image
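
A minimal sketch of doing the same from PowerShell, using only the standard cmdlets and the built-in msiscsi service:

# Start the Microsoft iSCSI Initiator service and set it to start automatically
Start-Service -Name msiscsi
Set-Service -Name msiscsi -StartupType Automatic

# Confirm it is running
Get-Service -Name msiscsi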

Next, install MPIO using Install-WindowsFeature Multipath-IO

Once installed and your server has been rebooted, you can set additional options in PowerShell or via the MPIO dialog under File and Storage Services -> Tools.

image

Open the MPIO settings and tick “Add support for iSCSI devices” under Discover Multi-Paths, then reboot again. Any change you make here will ask you to reboot, so make all of your changes at once so you only have to do this one time.

image
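
The equivalent can be scripted; here is a minimal sketch using the built-in MPIO cmdlets (a reboot is still required after claiming iSCSI devices):

# Install the MPIO feature if not already present
Install-WindowsFeature Multipath-IO

# Claim iSCSI-attached devices with the Microsoft DSM
# (equivalent to ticking "Add support for iSCSI devices")
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Review the hardware IDs the Microsoft DSM will claim
Get-MSDSMSupportedHW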

The easier way to do this from the outset is to use the EqualLogic Host Integration Tools (HIT Kit) on your hosts. If you don’t want to use HIT for some reason, you can skip from here down to the “Connect to iSCSI targets” section.

Install EQL HIT Kit (The Easier Method)

The EqualLogic HIT Kit will make it much easier to connect to your storage array as well as configure the MPIO DSM for the EQL arrays. Better integration, easier to optimize performance, better analytics. If there is a HIT Kit available for your chosen OS, you should absolutely install and use it. Fortunately there is indeed a HIT Kit available for Server 2012 R2.

image

Configure MPIO and PS group access via the links in the resulting dialog.

image

In ASM (launched via the “configure…” links above), add the PS group and configure its access. Connect to the VSS volume using the CHAP account and password specified previously. If the VDS/VSS volume is not accessible on your EQL array, this step will fail!

image

Connect to iSCSI targets

Once your server is back up from the last reboot, launch the iSCSI Initiator tool and you should see any discovered targets, assuming they are configured and online. If you used the HIT Kit you will already be connected to the VSS control volume and will see the Dell EQL MPIO tab.

image

Choose an inactive target in the discovered targets list and click Connect. Be sure to enable multi-path in the pop-up that follows, then click Advanced.

image

Enable CHAP log on and specify the user name and password set up previously:

image

If your configuration is good, the status of your target will change to Connected immediately. Once your targets are connected, the raw disks will be visible in Disk Manager and can be brought online by Windows.

image
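
This step can also be scripted with the built-in iSCSI cmdlets. The sketch below assumes a single example portal address, target IQN and CHAP account; substitute your own values and repeat per target and initiator path as needed.

# Register the EQL group IP as a target portal (example address)
New-IscsiTargetPortal -TargetPortalAddress 192.168.50.100

# List discovered targets and their connection state
Get-IscsiTarget

# Connect one target with MPIO and one-way CHAP, persisting across reboots
Connect-IscsiTarget -NodeAddress "iqn.2001-05.com.equallogic:example-volume" `
    -IsMultipathEnabled $true -IsPersistent $true `
    -AuthenticationType ONEWAYCHAP -ChapUsername "chapuser" -ChapSecret "ExampleSecret123"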

When you create new volumes on these disks, save yourself some pain down the road and give them the same label you assigned in Group Manager! The following information can be pulled out of the ASM tool for each volume:

image
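
For reference, a minimal sketch of bringing a new disk online and labeling it from PowerShell; the disk number and volume label are placeholders for your own values.

# Find raw (uninitialized) disks that just appeared from the array
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }

# Initialize, partition and format one of them, matching the label used in Group Manager
Initialize-Disk -Number 4 -PartitionStyle GPT
New-Partition -DiskNumber 4 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQL-DATA-01"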

Failover Clustering

With all the storage pre-requisites in place you can now build your cluster. Setting up a Failover Cluster has never been easier, assuming all your ducks are in a row. Create your new cluster using the Failover Cluster Manager tool and let it run all compatibility checks.

image
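
A minimal PowerShell sketch of the same steps; the node names, cluster name and cluster IP below are examples only.

# Install the Failover Clustering feature on every node
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Run the full validation report, then create the cluster
Test-Cluster -Node "NODE1","NODE2"
New-Cluster -Name "CLU01" -Node "NODE1","NODE2" -StaticAddress 10.0.10.50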

Make sure your patches and software levels are identical between cluster nodes or you’ll likely fail the clustering pre-check with differing DSM versions:

image

Once the cluster is built, you can manipulate your cluster disks and bring any online as required. Cluster disks cannot be brought online until all nodes in the cluster can access the disk.

image

Next add your cluster disks to Cluster Shared Volumes to enable multi-host read/write and HA.

image
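
A quick sketch of doing this from PowerShell; the disk name is whatever the cluster assigned (e.g. “Cluster Disk 1”).

# Add any remaining eligible disks to the cluster, then promote one to a CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"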

The new status will be reflected once this change is made.

image

Configure your Quorum to use the disk witness volume you created earlier. This disk does not need to be a CSV.

image
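
The same quorum change can be made from PowerShell; the witness disk name below is an example and should match the cluster disk created for the quorum LUN.

# Use the small quorum LUN created earlier as the disk witness
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk Quorum"

# Confirm the resulting quorum configuration
Get-ClusterQuorum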

Check your cluster networks and make sure that the iSCSI network is set to not allow cluster network communication, and that your cluster (LAN) network is set up to allow cluster network communication as well as client connections. This can of course be further segregated if desired, using additional NICs to separate cluster and client communication.

image
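
For reference, a minimal sketch of checking and setting these roles in PowerShell. The network names (“iSCSI”, “LAN”) are examples and should match the names shown in Failover Cluster Manager; role 0 disables cluster use, 1 is cluster-only, 3 is cluster and client.

# Review current cluster network roles
Get-ClusterNetwork | Format-Table Name, Role, Address

# Exclude the iSCSI network from cluster communication; allow cluster + client on the LAN network
(Get-ClusterNetwork -Name "iSCSI").Role = 0
(Get-ClusterNetwork -Name "LAN").Role = 3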

Now your cluster is complete and you can begin adding HA VMs (if using Hyper-V), SQL, File Server, or other roles as required.

References:

http://blogs.technet.com/b/keithmayer/archive/2013/03/12/speaking-iscsi-with-windows-server-2012-and-hyper-v.aspx

http://blogs.technet.com/b/askpfeplat/archive/2013/03/18/is-nic-teaming-in-windows-server-2012-supported-for-iscsi-or-not-supported-for-iscsi-that-is-the-question.aspx

VNX 5300/VMware: Troubleshoot ESXi connectivity to SAN via iSCSI connection

Troubleshoot VMware ESXi/ESX to iSCSI array connectivity:

Note: A rescan is required after every storage presentation change to the environment.

1. Log into the ESXi/ESX host and verify that the VMkernel interface (vmk) on the host can vmkping the iSCSI targets with this command:

# vmkping target_ip

If you are running an ESX host, also check that the Service Console interface (vswif) on the host can ping the iSCSI target with:

# ping target_ip

Note: Pinging the storage array only applies when using the Software iSCSI initiator. In ESXi, ping and ping6 both run vmkping. For more information about vmkping, see Testing VMkernel network connectivity with the vmkping command (1003728).

2. Use netcat (nc) to verify that you can reach the iSCSI TCP port (default 3260) on the storage array from the host:

# nc -z target_ip 3260

Example output:

Connection to 10.1.10.100 3260 port [tcp/http] succeeded!

Note: The netcat command is available with ESX 4.x and ESXi 4.1 and later.

3. Verify that the Host Bus Adapters (HBAs) are able to access the shared storage. For more information, see Obtaining LUN pathing information for ESX or ESXi hosts (1003973).

4. Confirm that no firewall is interfering with iSCSI traffic. For details on the ports and firewall requirements for iSCSI, see Port and firewall requirements for NFS and SW iSCSI traffic (1021626). For more information, see Troubleshooting network connection issues caused by firewall configuration (1007911).

Note: Check the SAN and switch configuration, especially if you are using jumbo frames (supported from ESX 4.x). To test the ping to a storage array with jumbo frames from an ESXi/ESX host, run this command:

# vmkping -s MTUSIZE IPADDRESS_OF_SAN -d

Where MTUSIZE is 9000 minus a header of 216, which is 8784, and the -d option indicates “do not fragment”.

5. Ensure that the LUNs are presented to the ESXi/ESX hosts. On the array side, ensure that the LUN IQNs and access control list (ACL) allow the ESXi/ESX host HBAs to access the array targets. For more information, see Troubleshooting LUN connectivity issues on ESXi/ESX hosts (1003955).

Additionally, ensure that the HOST ID on the array for the LUN (on ESX it shows up under LUN ID) is not greater than 255; the maximum LUN ID is 255. Any LUN that has a HOST ID greater than 255 may not show as available under Storage Adapters, even though on the array it may reside in the same storage group as the other LUNs with lower host IDs. This limitation exists in all versions of ESXi/ESX from ESX 2.x to ESXi 5.x; the details can be found in the configuration maximums guide for the particular version of ESXi/ESX having the issue.

6. Verify that a rescan of the HBAs displays presented LUNs in the Storage Adapters view of an ESXi/ESX host. For more information, see Performing a rescan of the storage on an ESXi/ESX host (1003988).
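
On ESXi 5.x the rescan and a quick check of what the host sees can also be done from the shell; a brief sketch (exact namespaces vary by version):

# esxcli storage core adapter rescan --all

# esxcli iscsi adapter list

# esxcli storage core device list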

7.Verify your CHAP authentication. If CHAP is configured on the array, ensure that the authentication settings for the ESXi/ESX hosts are the same as the settings on the array. For more information, see Checking CHAP authentication on the ESXi/ESX host (1004029).

8. Consider pinging any ESXi/ESX host iSCSI initiator (HBA) from the array’s targets. This is done from the iSCSI host.

9. Verify that the storage array is listed on the Storage/SAN Compatibility Guide. For more information, see Confirming ESXi/ESX host hardware (System, Storage, and I/O) compatibility (1003916).

Note: Some array vendors have a minimum-recommended microcode/firmware version to operate with VMware ESXi/ESX. This information can be obtained from the array vendor and the VMware Hardware Compatibility Guide.

10. Verify that the physical hardware is functioning correctly, including:

◦ The Storage Processors (sometimes known as heads) on the array

◦ The storage array itself

◦ The SAN and switch configuration, especially if you are using jumbo frames (supported from ESX 4.x). To test the ping to a storage array with jumbo frames from ESXi/ESX, run this command:

# vmkping -s MTUSIZE STORAGE_ARRAY_IPADDRESS

Where MTUSIZE is 9000 minus a header of 216, which is 8784.

Note: Consult your storage array vendor if you require assistance.

11. Perform some form of network packet tracing and analysis, if required. For more information, see:

◦ Capturing virtual switch traffic with tcpdump and other utilities (1000880)

◦ Troubleshooting network issues by capturing and sniffing network traffic via tcpdump (1004090)