VDSPowerCli From Fling – PowerShell for the Distributed Switch


Note: The functionality of this Fling has since been introduced into a release of PowerCLI. Whenever possible, use the latest supported version of PowerCLI.

PowerShell is a scripting language Microsoft developed to help administrators manage the Windows environment. Third parties can write their own snap-ins (dynamic-link libraries) to implement new commands, called cmdlets. With VDSPowerCli, users can use the cmdlets provided by PowerCLI to manage the vSphere Distributed Switch (VDS).


VDSPowerCli gives you the ability to manage:

  • VMware vSphere Distributed Switch
  • Distributed Port Group
  • Distributed Port
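Since the Fling's functionality was later folded into PowerCLI, a minimal sketch with the VDS cmdlets might look like this (the vCenter address, datacenter, switch, and port group names are all placeholders):

```
# Placeholders throughout: vCenter address, datacenter, switch and port group names
Connect-VIServer -Server vcenter.example.com

# Create a distributed switch in a datacenter, then a port group on it
New-VDSwitch -Name "DSwitch01" -Location (Get-Datacenter -Name "DC01")
New-VDPortgroup -VDSwitch (Get-VDSwitch -Name "DSwitch01") -Name "PG-VLAN10" -VlanId 10

# List all distributed switches and their port groups
Get-VDSwitch | Get-VDPortgroup
```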

Fling Labs: vCenter Cluster Performance Tool

Source: Fling Labs

vCenter Cluster Performance Tool is a PowerShell script that uses vSphere PowerCLI to obtain performance data for a cluster by aggregating information from individual hosts.

You can specify the following options in the script:

  • An “interval” of 20s or 300s. The default, 20s, corresponds to real-time statistics; 300s corresponds to the 5-minute interval statistics.
  • A stats query flag that lists the counter IDs available on the vCenter Server. You can then pass the desired counter ID from that list to obtain performance metrics for the cluster.

Key features:

  • Gathers all data of the specified interval type available on each host in the specified cluster
  • Provides an easy and quick way of obtaining performance data for a vCenter cluster
  • Saves data in a CSV file, which can easily be fed into any charting software
  • Also generates a chart in PNG format for visualization
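The script itself isn't reproduced here, but the aggregation idea can be sketched with standard PowerCLI cmdlets (the cluster name, counter, and output path below are assumptions, not the script's actual code):

```
# Gather real-time (20s) CPU stats from every host in a cluster and save to CSV
$vmHosts = Get-Cluster -Name "Cluster01" | Get-VMHost
Get-Stat -Entity $vmHosts -Stat "cpu.usage.average" -Realtime |
    Export-Csv -Path C:\temp\cluster-stats.csv -NoTypeInformation
```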

TechNet Magazine Videos

source: https://technet.microsoft.com/en-us/magazine/cc511304.aspx



Wednesday, December 31, 2014

Source: Exit The Fast Lane

Yay, last post of 2014! Haven’t invested in the hyperconverged Software Defined Storage model yet? No problem, there’s still time. In the meantime, here is how to cluster Server 2012 R2 using tried-and-true EqualLogic iSCSI shared storage.

EQL Group Manager

First, prepare your storage array(s) by logging into EQL Group Manager. This post assumes that your basic array IP, access, and security settings are in place. Set up your local CHAP account to be used later. Your organization’s security access policies or requirements might dictate a different standard here.


Create and assign an Access Policy to the VDS/VSS in Group Manager, otherwise this volume will not be accessible. This will make subsequent steps easier when it’s time to configure ASM.

Create some volumes in Group Manager now so you can connect your initiators easily in the next step. It’s a good idea to create your cluster quorum LUN now as well.


Host Network Configuration

First, configure the interfaces you intend to use for iSCSI on your cluster nodes. Best practice says that you should limit your iSCSI traffic to a private Layer 2 segment, not routed and only connecting to the devices that will participate in the fabric. This is no different from Fibre Channel in that regard, unless you are using a converged methodology and sharing your higher-bandwidth NICs.

If you are using Broadcom NICs, you can choose Jumbo Frames or hardware offload, but not both; the larger frames will likely net a greater performance impact. Each host NIC used to access your storage targets should have a unique IP address able to reach those targets within the same private Layer 2 segment. While these NICs can technically be teamed using the native Windows LBFO mechanism, best practice says that you shouldn’t, especially if you plan to use MPIO to load balance traffic. If your NICs will be shared (not dedicated to iSCSI alone), then LBFO teaming is supported in that configuration.

To keep things clean and simple I’ll be using 4 NICs: 2 dedicated to LAN, 2 dedicated to iSCSI SAN. Both LAN and SAN connections are physically separated onto their own switching fabrics as well, which is also a best practice.


MPIO – the manual method

First, start the MS iSCSI service (you will be prompted to do so) and check its status in PowerShell using Get-Service -Name msiscsi.
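This step can also be scripted end to end; a minimal sketch, assuming Server 2012 R2 defaults:

```
Start-Service -Name msiscsi                        # start the Microsoft iSCSI Initiator service
Set-Service -Name msiscsi -StartupType Automatic   # have it start on every boot
Get-Service -Name msiscsi                          # verify it shows Running
```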


Next, install MPIO using Install-WindowsFeature Multipath-IO.

Once installed and your server has been rebooted, you can set additional options in PowerShell or via the MPIO dialog under File and Storage Services -> Tools.


Open the MPIO settings and tick “Add support for iSCSI devices” under Discover Multi-Paths, then reboot again. Any change you make here will prompt for a reboot, so make all your changes at once so you only have to do this one time.
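If you prefer to skip the dialog, the same claim can be made with the MPIO module's cmdlets (a reboot is still required afterward):

```
# Equivalent of ticking "Add support for iSCSI devices" in the MPIO dialog
Enable-MSDSMAutomaticClaim -BusType iSCSI
Get-MSDSMAutomaticClaimSettings     # confirm the iSCSI bus type now shows True
```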


The easier way to do this from the onset is using the EqualLogic Host Integration Tools (HIT Kit) on your hosts. If you don’t want to use HIT for some reason, you can skip from here down to the “Connect to iSCSI Storage” section.

Install EQL HIT Kit (The Easier Method)

The EqualLogic HIT Kit will make it much easier to connect to your storage array as well as configure the MPIO DSM for the EQL arrays. Better integration, easier to optimize performance, better analytics. If there is a HIT Kit available for your chosen OS, you should absolutely install and use it. Fortunately there is indeed a HIT Kit available for Server 2012 R2.


Configure MPIO and PS group access via the links in the resulting dialog.


In ASM (launched via the “configure…” links above), add the PS group and configure its access. Connect to the VSS volume using the CHAP account and password specified previously. If the VDS/VSS volume is not accessible on your EQL array, this step will fail!


Connect to iSCSI targets

Once your server is back up from the last reboot, launch the iSCSI Initiator tool and you should see any discovered targets, assuming they are configured and online. If you used the HIT Kit you will already be connected to the VSS control volume and will see the Dell EQL MPIO tab.


Choose an inactive target in the discovered targets list and click Connect. Be sure to enable multi-path in the pop-up that follows, then click Advanced.


Enable CHAP logon and specify the user and password set up previously.
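The GUI steps above can also be sketched with the built-in iSCSI cmdlets; the portal address and CHAP credentials below are placeholders:

```
# Placeholder portal IP and CHAP credentials -- substitute your own
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.10

# Connect every discovered-but-inactive target with MPIO and one-way CHAP
Get-IscsiTarget | Where-Object { -not $_.IsConnected } |
    Connect-IscsiTarget -IsMultipathEnabled $true `
        -AuthenticationType ONEWAYCHAP `
        -ChapUsername "chapuser" -ChapSecret "chapsecret" `
        -IsPersistent $true     # reconnect automatically after a reboot
```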


If your configuration is good the status of your target will change to Connected immediately. Once your targets are connected, the raw disks will be visible in Disk Manager and can be brought online by Windows.
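Bringing the raw disks online can likewise be done from PowerShell; the disk number and volume label below are placeholders (identify yours with Get-Disk first):

```
Get-Disk                                          # identify the new EQL disks
Set-Disk -Number 2 -IsOffline $false              # bring the raw disk online
Initialize-Disk -Number 2 -PartitionStyle GPT     # write a GPT partition table
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQL-Data01"
```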


When you create new volumes on these disks, save yourself some pain down the road and give them the same label you assigned in Group Manager! The following information can be pulled from the ASM tool for each volume:


Failover Clustering

With all the storage prerequisites in place you can now build your cluster. Setting up a Failover Cluster has never been easier, assuming all your ducks are in a row. Create your new cluster using the Failover Cluster Manager tool and let it run all compatibility checks.
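The cluster build can be scripted as well; node names, cluster name, and address here are placeholders:

```
# Run the full validation report first, then create the cluster
Test-Cluster -Node node1,node2
New-Cluster -Name CLUSTER01 -Node node1,node2 -StaticAddress 192.168.1.50
```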


Make sure your patches and software levels are identical between cluster nodes or you’ll likely fail the clustering pre-check with differing DSM versions:


Once the cluster is built, you can manipulate your cluster disks and bring any online as required. A cluster disk cannot be brought online until every node in the cluster can access it.


Next add your cluster disks to Cluster Shared Volumes to enable multi-host read/write and HA.
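Adding a disk to Cluster Shared Volumes is a single cmdlet; the disk name below is a placeholder (list yours with Get-ClusterResource):

```
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Get-ClusterSharedVolume     # confirm the CSV and its mount point under C:\ClusterStorage
```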


The new status will be reflected once this change is made.


Configure your Quorum to use the disk witness volume you created earlier. This disk does not need to be a CSV.
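In PowerShell, assuming the witness disk resource is named "Cluster Disk 2", this might look like:

```
# Node majority plus a disk witness
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"
```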


Check your cluster networks and make sure that the iSCSI network is set to not allow cluster network communication, and that your cluster network is set up to allow cluster network communication as well as client connections. This can of course be further segregated, if desired, using additional NICs to separate cluster and client communication.
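These roles map to numeric values on the cluster network objects; the network names below are placeholders:

```
# 0 = no cluster communication (iSCSI), 1 = cluster only, 3 = cluster + client
(Get-ClusterNetwork -Name "iSCSI").Role = 0
(Get-ClusterNetwork -Name "LAN").Role = 3
```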


Now your cluster is complete and you can begin adding highly available roles as required: Hyper-V VMs, SQL Server, File Server, and so on.




Exchange PowerShell commands that I use

Exchange PShell

Basic Cmdlets

Get-Mailbox
Lists mailboxes

Get-MailboxStatistics <Mailbox>
Detailed statistics for a particular mailbox

Get-Mailbox -OrganizationalUnit <OU name>
Lists mailboxes in a particular OU

Get-Mailbox | Set-Mailbox -ProhibitSendQuota 500MB
Strings together two cmdlets to set ProhibitSendQuota to 500 MB on all mailboxes

Get-Mailbox -OrganizationalUnit MIS | Set-Mailbox -ProhibitSendQuota 500MB
The same, scoped to the MIS OU

This command gives the status of all databases on all mailbox servers.

This command runs a set of tests to check replication health.
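For reference, on Exchange 2010 and later the cmdlets matching those two descriptions are likely the following (an assumption on my part, not necessarily the originals):

```
# Status of all databases on all mailbox servers
Get-MailboxDatabase -Status | Format-Table Name,ServerName,Mounted

# Runs a set of tests to check replication health on the local server
Test-ReplicationHealth
```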

Update-Recipient -Identity "Madeleine lastname"
Resolves an issue with a move request

Get-MessageTrackingLog -Start "02/14/2012 12:01 AM" -End "02/14/2012 11:59 PM" -Recipients

Export DL group membership
Get-DistributionGroupMember -Identity "testdl" | Export-Csv C:\MyFile.Csv

Export a mailbox to a .pst file for archiving – AD membership in domainname\Mailbox Support required
New-MailboxExportRequest -Mailbox EXAMPLE -FilePath \\exchangeserver\PSTFiles\example.pst


Get-MailboxExportRequest -Status Completed | Remove-MailboxExportRequest

To allow users to send on behalf of a distribution group:
Set-DistributionGroup "Wellness Team" -GrantSendOnBehalfTo cleffler, jporto
Note that this replaces the existing list, so you must specify all user IDs that need the right in a single command (here, cleffler and jporto).

New-DynamicDistributionGroup -Name GI_Staff -OrganizationalUnit domainname/domainname -RecipientFilter { ((RecipientType -eq 'UserMailbox') -and (Office -eq 'St. Louis Gastroenterology') -and (-not (Title -eq 'Physician'))) }

New-DynamicDistributionGroup -Name GI_Doctors -OrganizationalUnit domainname/name -RecipientFilter { ((RecipientType -eq 'UserMailbox') -and (Office -eq 'St. Louis Gastroenterology') -and (Title -eq 'Physician')) }

Finally, move the dynamic distribution group to the proper OU in Active Directory Users and Computers.