RDM to VMDK – reposted blog

Source: How to convert a physical RDM into a VMDK disk

I found several VMs with the primary disk in VMDK format, followed by a second disk (dedicated to the applications, mainly databases, app servers or mail) in physical RDM format.

Since part of the project is to build a complete data protection solution based entirely on the VMware VADP libraries, we suggested that the customer convert all the RDM disks into VMDK disks. With the latest vSphere releases there are really no performance differences to justify RDM disks, except for Microsoft clusters or other situations requiring disks shared between VMs; on the other hand, it is not possible to take snapshots of physical RDM disks, which prevents backups of those disks and forces the use of backup agents inside the guest OS.

To validate the procedure, and to reassure the customer, I ran a quick test to show the conversion process. This task can only be done once the infrastructure is upgraded at least to vSphere 4, since (based on KB1005241) a Storage vMotion of a virtual RDM on ESX 3.5 does not convert it to VMDK, while this is possible with vSphere 4.0 or newer versions.

In my lab, I created a simple Windows 2003 VM with a primary 20 GB VMDK disk and a secondary 5 GB physical RDM disk:

Physical RDM disk

To complete the conversion, the VM must be shut down at least once, so you need to schedule the activity with the customer.

Once the VM is stopped, you need to edit its settings and remove the RDM disk. Write down the Virtual Device Node (0:1 in my example), because later you will have to reconnect the disk with the same value.

Remove RDM disk

Select the option “Deletes files from datastore”. Don’t be scared: since it’s a physical RDM, the only thing that will be deleted is the pointer to the RDM disk, not the disk itself.

Then, add a new RDM disk to the VM using the same LUN you removed before:

Add a new RDM disk

This time, choose “virtual” compatibility mode and select the same device node value as before. This lets the guest OS think the disk is the same as before.

Add virtual RDM

Now you can start the VM again. The downtime can be reduced to a couple of minutes if you perform all the operations quickly. Then initiate a Storage vMotion: it will automatically convert the virtual RDM disk into a VMDK disk without further downtime (but be prepared for more downtime if you do not have a Storage vMotion license and thus need a cold migration):

Disk converted into VMDK format
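
If you want to double-check the before and after state from a script rather than the GUI, below is a minimal sketch assuming pyVmomi and a reachable vCenter; the vCenter address, credentials and the VM name are placeholders. It prints each virtual disk's label, unit number, backing file and whether it is a physical RDM, virtual RDM or plain VMDK, so the 5 GB disk should show up as a physicalMode RDM before the change and as a plain VMDK once the Storage vMotion completes.

```python
# Hedged sketch: list a VM's disks and their backing type with pyVmomi.
# Connection details and the VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; validate certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "win2003-rdm-test")  # hypothetical VM name
view.DestroyView()

for dev in vm.config.hardware.device:
    if not isinstance(dev, vim.vm.device.VirtualDisk):
        continue
    backing = dev.backing
    if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
        kind = "RDM (%s)" % backing.compatibilityMode   # 'physicalMode' or 'virtualMode'
    else:
        kind = "VMDK"
    # unitNumber, together with the owning SCSI controller, is the (x:y) Virtual Device Node
    print("%s  unit %s  %s  [%s]" % (dev.deviceInfo.label, dev.unitNumber,
                                     backing.fileName, kind))

Disconnect(si)
```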


SAY HELLO TO VMOTION-COMPATIBLE SHARED-DISKS WINDOWS CLUSTERING ON VSPHERE


As you dive into the inner-workings of the new version of VMware vSphere (aka ESXi), one of the gems you will discover to your delight is the enhanced virtual machine portability feature that allows you to vMotion a running pair of clustered Windows workloads that have been configured with shared disks.

I pause here now to let you complete the obligatory jiggy dance. No? You have no idea what I just talked about up there, do you? Let me break it down for you:
In vSphere 6.0, you can configure two or more VMs running Windows Server Failover Clustering (or MSCS for pre-Windows 2012 OSes) using common, shared virtual disks (RDM) among them AND still be able to successfully vMotion any of the clustered nodes without inducing failure in WSFC or the clustered application. What's the big deal about that? Well, it is the first time VMware has ever officially supported such a configuration without any third-party solution, formal exception, or a number of caveats. Simply put, this is now an official, out-of-the-box feature that does not have any exceptions or special requirements other than the following:
  • The VMs must be in “Hardware 11” compatibility mode – which means that you are either creating and running the VMs on ESXi 6.0 hosts, or you have converted your old template to Hardware 11 and deployed it on ESXi 6.0
  • The disks must be connected to virtual SCSI controllers that have been configured for “Physical” SCSI Bus Sharing mode
  • And the disk type *MUST* be of the “Raw Device Mapping” type. VMDK disks are *NOT* supported for the configuration described in this document.
What is the value of this new feature?
Concurrent host and guest clustering provides a much-improved high-availability option for virtualized workloads. It is a configuration that makes virtualization that much more superior to a physical configuration, and it is something that has been in high demand among our customers for a very long time. With this configuration, customers are able to provide application-level high availability using the Windows Failover Clustering feature, with which most Windows administrators are already familiar, while at the same time providing machine-level resilience for both the ESXi hosts and the virtual machines using vSphere HA and vMotion.
  • vSphere HA ensures availability in a vSphere cluster by ensuring that, in the event of unplanned outage affecting an ESXi host, the VMs running on that host are automatically powered on on other ESXi hosts in the cluster.
  • In the event of a planned outage, a vSphere administrator can move a VM from an ESXi host while the VM is up and running and its application and services continue to be accessible to the end-user or dependent process/service. This movement of the VM is done using vMotion.
  • vMotion can also be used (manually or automatically) for resource-balancing in a given vSphere cluster through the VMware Dynamic Resource Scheduling (DRS) feature.
These three features satisfy all the requirements for Windows VM portability defined by Microsoft – see the “Host-based failover clustering and migration for Exchange” section of the Exchange 2013 virtualization whitepaper.
Configuring shared-disk clustered VMs that support vMotion on vSphere 6.0 is not overly complicated. I shall now proceed to describe the process (I will skip any mention of configuring MSCS or Windows Clustering itself – see our “Setup for Failover Clustering and Microsoft Cluster Service” guide for a better and more comprehensive description of this process).
Configuring vMotion-compatible shared-disk VMs (we assume here that the vSphere cluster is operational and contains ESXi 6.0 hosts)
  • Verify that the VM’s hardware version is at hardware 11

Hardware 11

  • If it’s not, you must upgrade it to 11

Upgrade Hardware 11

  • Ensure that the VMs are powered off
  • Add a virtual SCSI controller to the VMs

Add SCSI Controller

  • Set the SCSI controller’s bus sharing mode to “Physical”

SCSI Bus Sharing

  • Add a new RDM disk to the first VM

Add RDM Disks

    • (1) It is recommended that you store the mapping file in a location that is centrally and easily accessible to all the ESXi hosts in the vSphere cluster (you never know which host may house the VM at a point in future)
    • (2) Ensure that the new disk is connected to the “Physical Mode” SCSI controller configured in previous steps

Connect to the Correct SCSI Controller

  • Power on and log into the VM. Configure and format the disk in disk manager as desired.
  • Add an “existing disk” to the second VM.

Add Existing Disk

  • Ensure that you are selecting the disk that you added to the first VM (we are sharing disks here, remember?)
  • Ensure that this disk is also connected to the “Physical Mode” SCSI controller configured in previous steps
  • Power on this VM and log into Windows.
  • The disk should be visible in disk manager on both VMs
  • Repeat this process for all other VMs that will be sharing this disk (up to 5 such VMs are supported on vSphere)
That, my friend, is the extent of the configuration steps required to share vMotion-compatible disks among VMs in vSphere 6.0
Now, go ahead and migrate any of the VMs while they are powered on.
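
Before you do, you may want to sanity-check the three requirements above from a script. Here is a hedged sketch assuming pyVmomi; the vCenter details and the node names (wsfc-node-1, wsfc-node-2) are placeholders. For each node it prints the hardware version (expect vmx-11), the bus-sharing mode of each SCSI controller (expect physicalSharing on the shared one) and any RDM disks attached.

```python
# Hedged pre-flight check for shared-disk cluster nodes (pyVmomi assumed).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
nodes = [v for v in view.view if v.name in ("wsfc-node-1", "wsfc-node-2")]  # placeholder names
view.DestroyView()

for vm in nodes:
    print(vm.name, "hardware version:", vm.config.version)       # expect 'vmx-11'
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualSCSIController):
            # The controller carrying the shared disks must report 'physicalSharing'
            print("   ", dev.deviceInfo.label, "bus sharing:", dev.sharedBus)
        elif isinstance(dev, vim.vm.device.VirtualDisk) and isinstance(
                dev.backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            # The same LUN should appear on every node that shares the disk
            print("   ", dev.deviceInfo.label, "RDM ->", dev.backing.deviceName)

Disconnect(si)
```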
There is a catch – or two …. ok, maybe three catches
Yes, you knew this was coming, didn’t you?
  • VMware still does not support Storage vMotion for VMs that are configured with shared disks. If you attempt to perform a Storage vMotion on such workloads, the attempt will fail.
  • While it is technically possible to successfully use a VMDK (instead of an RDM) disk for the configuration described above, please be advised that VMware does not support such a configuration. You will be able to vMotion the workloads successfully and things will appear to behave optimally and without a hitch. PLEASE DO NOT DO SO. Such a configuration may lead to instability and data corruption. Please see Configuring Microsoft Cluster Service fails with the error: Validate SCSI-3 Persistent Reservation for more information.
  • Insufficient bandwidth WILL hamper your vMotion operation and cause service interruption for your clustered workloads. Wait, are you surprised? How do you suppose we get a running VM from one host to another? Teleportation? No, we copy it over the wire incrementally. We strive to complete the copy and the switch-over very rapidly. If the vMotion network is congested or insufficient (say you try to vMotion a running “monster VM” with 128 vCPUs and 4TB of RAM over a 1GbE link that is shared with other traffic), the copy and switch-over operation will take a very long time – long enough for the VM to lose heartbeat with its peer nodes and, consequently, trigger a failover or shutdown of its cluster service for lack of quorum.

To avoid the issue described in the previous paragraph (and to ensure the overall health and functionality of your vMotion operations), VMware recommends the following:

    • Put the vMotion VMkernel portgroup on a 10GbE (or faster) network
    • If you do not have 10GbE or faster network facilities, create more than one vMotion VMkernel portgroup in the vSphere cluster, with a separate 1GbE NIC for each portgroup
    • If using 1GbE NICs, consider enabling jumbo frames at all levels of the network stack (from the physical switches all the way to the in-guest network card) – a quick MTU audit sketch follows this list
    • If none of the recommended options above is possible, consider tuning the cluster services inside the guest OS to tolerate longer heartbeat timeouts. See Tuning Failover Cluster Network Thresholds for more information and recommended settings.
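
The MTU audit mentioned above can be scripted as well. Here is a rough sketch, assuming pyVmomi and placeholder connection details, that lists every vmkernel NIC with its portgroup and MTU so undersized vMotion interfaces stand out (it does not work out which vmknic is vMotion-enabled; cross-check that in the host's networking view):

```python
# Hedged audit: print every host's vmkernel NICs with portgroup and MTU (pyVmomi assumed).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vnic in host.config.network.vnic:        # vmk0, vmk1, ...
        # 'portgroup' is populated for standard vSwitch vmknics; it is empty on a dVS
        print("%-25s %-6s portgroup=%-20s MTU=%s" %
              (host.name, vnic.device, vnic.portgroup or "(dvs)", vnic.spec.mtu))

view.DestroyView()
Disconnect(si)
```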

That’s all. Nothing fanciful or complicated – we took care of the complexities for you, so go ahead and vMotion that shared-disk clustered workload. But don’t forget the RDM.

UPGRADING TO VMWARE VSPHERE 5.5

Friday, October 25, 2013

Source: Exit | the | Fast | Lane

Like all good stewards of all things virtual, I need to stay current on the very foundations of everything we do: hypervisors. So this post contains my notes on upgrading from vSphere 5.1 to the new vSphere 5.5 build. This isn't meant to be an exhaustive step-by-step 5.5 upgrade guide, as that's already been done, and done very well (see my resources section at the bottom for links). This is purely my experience upgrading, with a few interesting call-outs along the way that I felt worth writing down should anyone else encounter them.

The basic upgrade sequence goes like this, depending on which of these components you have in play:

  1. vCloud components
  2. View server components
  3. vCenter (and vSphere Clients)
  4. SRM/ VR/ vCOPS/ VDP/ VSA
  5. ESXi hosts
  6. VM Tools (and virtual hardware –see my warning in the clients section)
  7. vShield components
  8. View agent

The environment I’m upgrading consists of the following:

  • 2 x Dell PE R610 hosts running ESXi 5.1 on 1GB SD
  • 400GB of local SAS
  • 3TB Equallogic iSCSI storage
  • vCenter 5.1 VM (Windows)

vCenter 5.5

As was the case in 5.1, there are still two ways to go with regard to vCenter: Windows-based or the vCenter Server Appliance (vCSA). The Windows option is fully featured and capable of supporting the scaling maximums as well as all published configurations. The vCSA is more limited, but in 5.5 a bit less so. Most get excited about the vCSA because it's a Linux appliance (no Windows license), it uses vPostgres (no SQL license) and it is dead simple to set up via OVF deployment. The vCSA can use only Oracle as an external database; MS SQL Server support appears to have been removed. The scaling maximums of the vCSA have increased to 100 hosts/3000 VMs with the embedded database, which is a great improvement over the previous version. There are a few things that still require the Windows version, however, namely vSphere Update Manager and vCenter Linked Mode.

Although I also deployed the vCSA, my first step was upgrading my existing Windows vCenter instance. The Simple Install method should perform a scripted install of the four main components. This method didn't work completely for me, as I had to install the Inventory Service and vCenter Server components manually. This is also the dialog from which you would install the VUM service on your Windows vCenter server.

I did receive an error about VPXD failing to install after the vCenter Server installation ran for the first time. A quick reboot cleared this up. With vCenter upgraded, the vSphere Client also needs to be upgraded anywhere you plan to access the environment using that tool. VMware is making it loud and clear that the preferred method to manage vSphere moving forward is the web client.

vCenter can support up to 10,000 active VMs now on a single instance, which is a massive improvement. If you plan to possibly power on more than 2000 VMs simultaneously, make sure to increase the number of ephemeral ports during the port configuration dialog of the vCenter setup.

Alternatively, the vCSA is very easy to deploy and configure with some minor tweaks necessary to connect to AD authentication. Keep in mind that the maximum number of VMs supported on the vCSA with the embedded DB is only 3000. To get the full 10,000 out of the vCSA you will have to use an Oracle DB instance externally. The vCSA is configured via the web browser and is connected to by the web and vSphere Clients the same way as its Windows counterpart.

If you fail to connect to the vCSA through the web client and receive a “Failed to connect to VMware Lookup Service…” error like this:

…from the admin tab in the vCSA, select yes to enable the certificate regeneration option and restart the appliance.

Upgrading ESXi hosts:

The easiest way to do this is starting with hosts in an HA cluster attached to shared storage, as many will have in their production environments. With vCenter upgraded, move all VMs to one host, upgrade the other, rinse and repeat until all hosts are upgraded. Zero downtime. For your lab environments, if you don't have the luxury of shared storage, two vCenter servers can be used to make this easier, as I'll explain later. If you don't have shared storage in your production environment and want to try that method, do so at your own risk.

There are two ways to upgrade your ESXi hosts: local ISO (scripted or not) or vSphere Update Manager. I used both methods, one on each host, to update my hosts.

ISO Method

The ISO method simply requires that you attach the 5.5 ISO to a device capable of booting your server: either USB or Virtual Media via IPMI (iDRAC). Boot, walk through the steps, upgrade. If you’ve attached your ISO to Virtual Media, you can specify within vCenter to boot your host directly to the virtual CD to make this a little easier. Boot Options for your ESXi hosts are buried on the Processors page for each host in the web or vSphere Client.

VUM method:

Using VUM is an excellent option, especially if you already have this installed on your vCenter server.

  • In VUM console page, enter “admin view”
  • Download patches and upgrades list
  • Go to ESXi images and import 5.5 ISO
  • Create baseline during import operation

  • Scan hosts for non-compliance, attach baseline group to the host needing upgrade
  • Click Remediate, walk through the screens and choose when to perform the operation

The host will go into maintenance mode, disconnect from vCenter and sit at 22% completion for 30 minutes, or longer depending on your hardware, while everything happens behind the scenes. Feel free to watch the action in KVM or IPMI.

When everything is finished and all is well, the host will exit maintenance mode and vCenter will report a successful remediation.
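
If you want to confirm the result for every host without clicking through each one, here is a quick hedged check assuming pyVmomi (connection details are placeholders). It prints each host's reported ESXi build, maintenance-mode flag and connection state:

```python
# Hedged post-upgrade check: report build, maintenance mode and connection state per host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    product = host.summary.config.product        # AboutInfo, e.g. "VMware ESXi 5.5.0 build-..."
    print(host.name, "|", product.fullName,
          "| maintenance:", host.runtime.inMaintenanceMode,
          "| state:", host.runtime.connectionState)

view.DestroyView()
Disconnect(si)
```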

Pros/ Cons:

  • The ISO method is very straightforward and likely how you built your ESXi host to begin with. Boot to media, upgrade datastores, upgrade host. Speed will vary by the media used and server hardware, whether connected to USB directly or to the ISO via Virtual Media.
    • This method requires a bit more hand-holding. IPMI, virtual media, making choices, ejecting media at the right time… nothing earth-shattering, but not light-touch like VUM.
  • If you already have VUM installed on your vCenter server and are familiar with its operation, then upgrading it to 5.5 should be fairly painless. The process is also entirely hands-off: you press go and the host gets upgraded magically in the background.
    • The downside is that the vSphere ISO is stored on, and the update procedure is processed from, your vCenter server. This could add time delays in loading everything from vCenter to your physical ESXi hosts, depending on your infrastructure.
    • This method is also only possible using the Windows version of vCenter and is one of the few remaining required uses of the vSphere Client.

No Shared Storage?

Upgrading hosts with no shared storage is a bit more trouble, but still totally doable without having to manually SCP VM files between hosts. The key is using two vCenter instances, and the vCSA works great as the second instance for this purpose. Simply transfer your host ownership between vCenter instances. As long as both hosts are “owned” by the same vCenter instance, any changes to inventory will be recognized and committed. Any VMs reported as orphaned should also be resolved this way. You can transfer vCenter ownership back and forth any number of times without negatively impacting your ESXi hosts. Just keep all hosts on one or the other! The screenshot below shows two hosts owned by the Windows vCenter (left) and the vCSA on the right showing those same hosts disconnected. No VMs were negatively impacted doing this.

The reason this is necessary is because VM migrations are controlled and handled by vCenter, cold migrations included which is all you’ll be able to do here. Power off all VMs, migrate all VMs off one host, fire up vCenter on the other host, transfer host ownership, move the final vCenter VM off then upgrade that host. Not elegant but it will work and hopefully save some pain.
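
If you would rather script the hand-off than click through the Add Host wizard each time, here is a hedged sketch assuming pyVmomi; the host name, credentials and single-datacenter assumption are placeholders, not the article's exact procedure. The force flag is what lets the second vCenter take over a host currently managed by the first.

```python
# Hedged sketch: re-point an ESXi host at a second vCenter instance (pyVmomi assumed).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.local",      # the vCenter that should take ownership
                  user="administrator@vsphere.local", pwd="********", sslContext=ctx)
content = si.RetrieveContent()
datacenter = next(e for e in content.rootFolder.childEntity
                  if isinstance(e, vim.Datacenter))   # assumes a single datacenter

spec = vim.host.ConnectSpec(hostName="esxi01.example.local",
                            userName="root", password="********",
                            force=True)               # take over from the other vCenter
# Note: if the task fails with an SSL verification fault, it usually reports the host's
# thumbprint; set spec.sslThumbprint to that value and submit again.
task = datacenter.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)
print("Add-host task submitted:", task.info.key)

Disconnect(si)
```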

vSphere Clients

The beloved vSphere Client is now deprecated in this release and comes with a heavy push towards exclusive use of the Flash-based web client. My advice, start getting very familiar with the web client as this is where the future is heading, like it or not. Any new features enabled in vCenter 5.5 will only be accessible via the web client.

Here you can see this clearly called out on the, now, deprecated vSphere Client login page:

WARNING – A special mention needs to be made about virtual hardware version 10. If you upgrade your VMs to the latest hardware version, you will LOSE the ability to edit their settings in the vSphere Client. Yep, one step closer to complete obsolescence. If you're not yet ready to give up using the vSphere Client, you may want to hold off upgrading the virtual hardware for a bit.
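
If you want to see how exposed you already are, here is a quick hedged inventory sketch (pyVmomi assumed, connection details are placeholders) that counts your VMs by virtual hardware version, so you know how many would become web-client-only once moved to version 10:

```python
# Hedged inventory: count VMs by virtual hardware version (e.g. 'vmx-09', 'vmx-10').
import ssl
from collections import Counter
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
versions = Counter(vm.config.version for vm in view.view if vm.config)
view.DestroyView()

for version, count in sorted(versions.items()):
    print(version, count)

Disconnect(si)
```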

vSphere Web Client

The web client is not terribly different from the fat client but its layout and operational methods will take some getting used to. Everything you need is in there, it may just take a minute to find it. The recent tasks pane is also off to the right now, which I really like.


The familiar Hosts and Clusters view:

Some things just plain look different, most notably the vSwitch configuration. Also the configuration items you’ve grown used to being in certain property menus are now spread out and stored in different places. Again, not bad just…different.

I also see no Solutions and Applications section in the web client, only vApps. So things like the Dell Equallogic Virtual Storage Manager would have to be accessed via the old vSphere Client.

Client Integration Plugin

To access features like VM consoles, file transfers to datastores and OVF deployments via the web client, the 153MB Client Integration plugin must be installed. Attempting to use any features that require this should prompt you for the client install. One of the places it can also be found is by right-clicking while browsing within a datastore.

The installer will launch and require you to close IE before continuing.

VM consoles now appear in additional browser tabs which will take some getting used to. Full screen mode looks very much like a Windows RDP session which I like and appreciate.

Product Feature Request

This is a very simple request (plea) to VMware to please combine the Hosts and Clusters view with the VMs and Templates view. I have never seen any value in having these separated. Datacenters, hosts, VMs, folders and templates should all be visible and manageable from a single pane. There should be only 3 management sections separating the primary vSphere inventories: Compute (combined view), storage and network. Clean and simple.

Resources:

vSphere 5.5 upgrade order (KB): Link

vSphere 5.5 Configuration Maximums: Link

Awesome and extensive vSphere 5.5 guide: Link

Resolution for vCSA FQDN error: Link

CITRIX XENDESKTOP AND PVS: A WRITE CACHE PERFORMANCE STUDY

Thursday, July 10, 2014

Source: Exit | the | Fast | Lane


If you’re unfamiliar, PVS (Citrix Provisioning Server) is a vDisk deployment mechanism available for use within a XenDesktop or XenApp environment that uses streaming for image delivery. Shared read-only vDisks are streamed to virtual or physical targets in which users can access random pooled or static desktop sessions. Random desktops are reset to a pristine state between logoffs while users requiring static desktops have their changes persisted within a Personal vDisk pinned to their own desktop VM. Any changes that occur within the duration of a user session are captured in a write cache. This is where the performance demanding write IOs occur and where PVS offers a great deal of flexibility as to where those writes can occur. Write cache destination options are defined via PVS vDisk access modes which can dramatically change the performance characteristics of your VDI deployment. While PVS does add a degree of complexity to the overall architecture, since its own infrastructure is required, it is worth considering since it can reduce the amount of physical computing horsepower required for your VDI desktop hosts. The following diagram illustrates the relationship of PVS to Machine Creation Services (MCS) in the larger architectural context of XenDesktop. Keep in mind also that PVS is frequently used to deploy XenApp servers as well.


PVS 7.1 supports the following write cache destination options (from Link):

  • Cache on device hard drive – Write cache can exist as a file in NTFS format, located on the target-device’s hard drive. This write cache option frees up the Provisioning Server since it does not have to process write requests and does not have the finite limitation of RAM.
  • Cache on device hard drive persisted (experimental phase only) – The same as Cache on device hard drive, except cache persists. At this time, this write cache method is an experimental feature only, and is only supported for NT6.1 or later (Windows 7 and Windows 2008 R2 and later).
  • Cache in device RAM – Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.
  • Cache in device RAM with overflow on hard disk – When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first.
  • Cache on a server – Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk IO and network traffic.
  • Cache on server persistent – This cache option allows for the saving of changes between reboots. Using this option, after rebooting, a target device is able to retrieve changes made from previous sessions that differ from the read only vDisk image.

Many of these were available in previous versions of PVS, including cache to RAM, but what makes v7.1 more interesting is the ability to cache to RAM with the ability to overflow to HDD. This provides the best of both worlds: extreme RAM-based IO performance without the risk since you can now overflow to HDD if the RAM cache fills. Previously you had to be very careful to ensure your RAM cache didn’t fill completely as that could result in catastrophe. Granted, if the need to overflow does occur, affected user VMs will be at the mercy of your available HDD performance capabilities, but this is still better than the alternative (BSOD).

Results

Even when caching directly to HDD, PVS shows lower IOPS/user numbers than MCS does on the same hardware. We decided to take things a step further by testing a number of different caching options. We ran tests on both Hyper-V and ESXi using our three standard user VM profiles against Login VSI's low, medium and high workloads. For reference, below are the standard user VM profiles we use in all Dell Wyse Datacenter enterprise solutions:

| Profile Name | Number of vCPUs per Virtual Desktop | Nominal RAM (GB) per Virtual Desktop | Use Case |
|---|---|---|---|
| Standard | 1 | 2 | Task Worker |
| Enhanced | 2 | 3 | Knowledge Worker |
| Professional | 2 | 4 | Power User |

We tested three write caching options across all user and workload types: cache on device HDD, RAM + Overflow (256MB) and RAM + Overflow (512MB). Doubling the amount of RAM cache on the more intensive workloads paid off big, netting a reduction of host IOPS to nearly 0. That's almost 100% of user-generated IO absorbed completely by RAM. We didn't capture the IOPS generated in RAM here using PVS, but as the fastest medium available in the server, and from previous work done with other in-RAM technologies, I can tell you that 1600MHz RAM is capable of tens of thousands of IOPS per host. We also tested thin vs thick provisioning using our high-end profile when caching to HDD, just for grins. Ironically, thin provisioning outperformed thick for ESXi, while the opposite proved true for Hyper-V. To achieve these impressive IOPS numbers on ESXi it is important to enable intermediate buffering (see links at the bottom). I've highlighted the more impressive RAM + overflow results in red below. Note: IOPS per user below indicates IOPS generation as observed at the disk layer of the compute host. This does not mean these sessions generated close to no IOPS.

| Hypervisor | PVS Cache Type | Workload | Density | Avg CPU % | Avg Mem Usage (GB) | Avg IOPS/User | Avg Net KBps/User |
|---|---|---|---|---|---|---|---|
| ESXi | Device HDD only | Standard | 170 | 95% | 1.2 | 5 | 109 |
| ESXi | 256MB RAM + Overflow | Standard | 170 | 76% | 1.5 | 0.4 | 113 |
| ESXi | 512MB RAM + Overflow | Standard | 170 | 77% | 1.5 | 0.3 | 124 |
| ESXi | Device HDD only | Enhanced | 110 | 86% | 2.1 | 8 | 275 |
| ESXi | 256MB RAM + Overflow | Enhanced | 110 | 72% | 2.2 | 1.2 | 284 |
| ESXi | 512MB RAM + Overflow | Enhanced | 110 | 73% | 2.2 | 0.2 | 286 |
| ESXi | HDD only, thin provisioned | Professional | 90 | 75% | 2.5 | 9.1 | 250 |
| ESXi | HDD only, thick provisioned | Professional | 90 | 79% | 2.6 | 11.7 | 272 |
| ESXi | 256MB RAM + Overflow | Professional | 90 | 61% | 2.6 | 1.9 | 255 |
| ESXi | 512MB RAM + Overflow | Professional | 90 | 64% | 2.7 | 0.3 | 272 |
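
To put those per-user numbers in perspective, here is a quick back-of-the-envelope calculation using the ESXi Standard-workload rows above (values copied straight from the table); it shows the aggregate write-cache IOPS the compute host's local disks have to absorb for a full 170-session host:

```python
# Values taken from the ESXi / Standard rows in the table above.
esxi_standard = {
    "Device HDD only":      {"density": 170, "iops_per_user": 5.0},
    "256MB RAM + Overflow": {"density": 170, "iops_per_user": 0.4},
    "512MB RAM + Overflow": {"density": 170, "iops_per_user": 0.3},
}

for cache_type, row in esxi_standard.items():
    host_iops = row["density"] * row["iops_per_user"]
    print("%-22s ~%.0f host disk IOPS" % (cache_type, host_iops))

# Roughly 850 IOPS hitting the disks with HDD-only caching versus about 68 and 51 IOPS
# with RAM + overflow -- the "near 0" host-level reduction described in the text.
```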

For Hyper-V we observed a similar story, except that we did not enable intermediate buffering, at the recommendation of Citrix. This is important! Citrix strongly recommends not using intermediate buffering on Hyper-V as it degrades performance. Most other numbers are well in line with the ESXi results, save for the cache-to-HDD numbers being slightly higher.

| Hypervisor | PVS Cache Type | Workload | Density | Avg CPU % | Avg Mem Usage (GB) | Avg IOPS/User | Avg Net KBps/User |
|---|---|---|---|---|---|---|---|
| Hyper-V | Device HDD only | Standard | 170 | 92% | 1.3 | 5.2 | 121 |
| Hyper-V | 256MB RAM + Overflow | Standard | 170 | 78% | 1.5 | 0.3 | 104 |
| Hyper-V | 512MB RAM + Overflow | Standard | 170 | 78% | 1.5 | 0.2 | 110 |
| Hyper-V | Device HDD only | Enhanced | 110 | 85% | 1.7 | 9.3 | 323 |
| Hyper-V | 256MB RAM + Overflow | Enhanced | 110 | 80% | 2 | 0.8 | 275 |
| Hyper-V | 512MB RAM + Overflow | Enhanced | 110 | 81% | 2.1 | 0.4 | 273 |
| Hyper-V | HDD only, thin provisioned | Professional | 90 | 80% | 2.2 | 12.3 | 306 |
| Hyper-V | HDD only, thick provisioned | Professional | 90 | 80% | 2.2 | 10.5 | 308 |
| Hyper-V | 256MB RAM + Overflow | Professional | 90 | 80% | 2.5 | 2.0 | 294 |
| Hyper-V | 512MB RAM + Overflow | Professional | 90 | 79% | 2.7 | 1.4 | 294 |

Implications

So what does it all mean? If you're already a PVS customer, this is a no-brainer: upgrade to v7.1 and turn on “cache in device RAM with overflow to hard disk” now. Your storage subsystems will thank you. The benefits are clear on ESXi and Hyper-V alike. If you're deploying XenDesktop soon and debating MCS vs PVS, this is a very strong mark in the “pro” column for PVS. The fact of life in VDI is that we always run out of CPU first, but that doesn't mean we get to ignore or undersize for IO performance, as that's important too. Enabling RAM to absorb the vast majority of user write cache IO allows us to stretch our HDD subsystems even further, since their burden is diminished. Cut your local disk costs by two-thirds, or stretch those shared arrays 2 or 3x. PVS cache in RAM + overflow allows you to design your storage around capacity requirements with less need to overprovision spindles just to meet IO demands (resulting in wasted capacity).

References:

DWD Enterprise Reference Architecture

http://support.citrix.com/proddocs/topic/provisioning-7/pvs-technology-overview-write-cache-intro.html

When to Enable Intermediate Buffering for Local Hard Drive Cache