Source: My Virtual Vision by Kees Baggerman

During a lab setup of XenDesktop 7.6 I used a Microsoft SQL Server 2008 R2 instance that I had installed earlier while setting up the rest of my lab environment. While the database setup had worked seamlessly for other environments, it seemed that I couldn’t access the SQL server from the XenDesktop Setup wizard.

I first tried the obvious things: using a service account didn’t help. After that I tried the SA account (just to see whether it was actually a rights issue on the service account), but that didn’t work either.

The issue:

I couldn’t create the database from the XenDesktop wizard. I tried several accounts, but they either couldn’t connect to the database or didn’t have the rights to access the database server.


Apparently changing the user didn’t have the effect I wanted, so I logged on to the SQL server just to make sure my SA password was still valid. It was, because I was able to log on to SQL Server Management Studio with the SA credentials. Since I was already logged on to the SQL server, I opened the Event Viewer and found the following errors:

[Screenshot: Event Viewer error entries]


My friend Google then found the following topic: SSPI handshake failed with error code 0x8009030c, which led me to How to Configure an SPN for SQL Server Site Database Servers. It seems that during the installation of SQL Server the SPNs for the SQL service weren’t registered.

Solving this issue:

With the command ‘setspn -L %hostname%’ you can list the SPNs that are registered for a certain server.

[Screenshot: setspn -L output]

When I did this for my SQL server it didn’t list the SQL services, so I had to register the SPN manually. Again I googled and found the following article: Register a Service Principal Name for Kerberos Connections.

This article described the following switches to manually register the SPN:

To register the SPN manually, the administrator must use the Setspn.exe tool that is provided with the Microsoft Windows Server 2003 Support Tools. For more information, see the Windows Server 2003 Service Pack 1 Support Tools KB article.
Setspn.exe is a command line tool that enables you to read, modify, and delete the Service Principal Names (SPN) directory property. This tool also enables you to view the current SPNs, reset the account’s default SPNs, and add or delete supplemental SPNs.
The following example illustrates the syntax used to manually register an SPN for a TCP/IP connection:
setspn -A MSSQLSvc/<FQDN>:1433 accountname
Note: If an SPN already exists, it must be deleted before it can be reregistered. You do this by using the setspn command together with the -D switch. The following examples illustrate how to manually register a new instance-based SPN. For a default instance, use:
setspn -A MSSQLSvc/<FQDN>:1433 accountname
For a named instance, use:
setspn -A MSSQLSvc/<FQDN>:<instancename> accountname

So I ran the command:

‘setspn -a MSSQLSvc/SQL001:1433 administrator’

The following screen output appeared:

[Screenshot: setspn output confirming the SPN registration]

After I registered the SPN for the SQL server I listed the server’s SPNs again, and this time the SQL service was listed. After a reboot I was able to connect to the database from the XenDesktop wizard.
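Put together, the whole fix amounts to a short setspn sequence. The hostname SQL001, port 1433 and the account mirror the lab example above; substitute your own values, and note that setspn must be run with domain-admin rights from a domain-joined machine:

```shell
setspn -L SQL001                               # list the SPNs currently registered for the host
setspn -D MSSQLSvc/SQL001:1433 administrator   # only if a stale SPN exists: delete it first
setspn -A MSSQLSvc/SQL001:1433 administrator   # register the SPN for the default instance
setspn -L SQL001                               # verify that MSSQLSvc now shows up in the list
```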



I was setting up a new lab environment based on vSphere 5.5 and XenDesktop 7.6. When I wanted to deploy a new image within XenDesktop I got an error message: ‘Unable to upload disk’.

I was running this setup on Nutanix hardware, which I had split up into two different Nutanix clusters to do some inter-cluster testing. My lab setup was built with vSphere 5.5, vCenter 5.5 and two VMware clusters, again to allow for inter-cluster testing.

From a Nutanix perspective I’ve created two storage pools and two containers.

Just for the general understanding of our definition of a storage pool and container:

Storage Pool
  • Key Role: Group of physical devices
  • Description: A storage pool is a group of physical storage devices including PCIe SSD, SSD, and HDD devices for the cluster. The storage pool can span multiple Nutanix nodes and is expanded as the cluster scales. In most configurations only a single storage pool is leveraged.

Container
  • Key Role: Group of VMs/files
  • Description: A container is a logical segmentation of the storage pool and contains a group of VMs or files (vDisks). Some configuration options (e.g. RF) are configured at the container level, but are applied at the individual VM/file level. Containers typically have a 1-to-1 mapping with a datastore (in the case of NFS/SMB).

I created the XenDesktop environment and the Windows 7 image and was ready to start deploying desktops, expecting blazing performance. Instead of pushing out desktops, the XenDesktop console threw an error: ‘Unable to upload disk’.

The first thing I did was run all the tests within the XenDesktop console, just to make sure the XenDesktop installation and configuration were OK (which they were, of course :)).

The next step was to run the error through Google, and apparently I wasn’t the only one with this issue.

None of them resembled my exact issue, though, so I took another look at my environment and found I’d made a rookie mistake: I had created two containers with the same name within the same VMware vCenter configuration (two different clusters). vSphere merged my two different datastores based on the name, resulting in a single datastore in VMware, while only one cluster was configured in XenDesktop, and thus the disk upload failed.
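The name collision can be sketched in a few lines: vSphere keys NFS datastores by name, so a second container mounted under the same name simply collapses into the first entry. The cluster and container names below are hypothetical:

```python
# Two Nutanix containers from different clusters, exported with the same name.
containers = [
    {"cluster": "NTNX-A", "name": "ctr-xd"},  # container on the first cluster
    {"cluster": "NTNX-B", "name": "ctr-xd"},  # same name on the second cluster
]

# vSphere identifies NFS datastores by name, so the later mount with the
# same name overwrites the earlier one in the inventory.
datastores = {}
for c in containers:
    datastores[c["name"]] = c

print(len(datastores))  # 1 -- only one datastore visible, hence the failed upload
```

Giving each container a unique name yields two distinct datastore entries, which is what XenDesktop expects.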

Within the ‘passive’ Nutanix cluster I removed the old container and created a new one (did I mention it took me about 5 minutes to do that?), and after that I was able to deploy the newly built image.

Kees Baggerman

Kees Baggerman is a senior performance and solution engineer for desktop and application virtualization solutions at Nutanix. Kees has driven numerous Microsoft, Citrix, and RES infrastructure functional/technical designs, migrations, and implementation engagements over the years.


December 22, 2014

Written by

Carl Webster


Between August and November 2014, I have worked on three HP Moonshot Proof of Concepts (two in the USA and one in the Middle East). While Moonshot is not the solution to every desktop problem in the world, it has some very strong advantages. I decided to reach out to a couple of the Citrix leads on these PoCs and have them tell you why they selected to use Moonshot. Before I give you their comments, a couple of items:

  • Don’t ask for the company names, you will not get them
  • Don’t ask for the individual’s names, you will not get them
  • Don’t ask for their contact information, you will not get it
  • Don’t ask me to put you in contact with them, not going to happen
  • Don’t ask me to put them in contact with you, not going to happen

Both of these individuals read my blog and I will send them the link to this article. If you have any questions for them or me, leave a comment and if they are allowed to, they will respond to the comments.

I only edited their responses to correct typos, line formatting and wrapping in their responses and had them correct a couple of sentences that were not clear to me.

The questions I asked them to respond to: “Why did you select HP Moonshot over traditional VDI?  Because doesn’t Moonshot cost way more per user than traditional VDI?”

Medical Related Field

Note: This customer will deliver around 2,000 desktops with Moonshot.

Cost comparison: Moonshot HDI vs. VDI

My friend Carl asked me last week: “Why did you select HP Moonshot over traditional VDI?  Because doesn’t Moonshot cost way more per user than traditional VDI?”

During my lifetime of layovers in the world’s most “cozy” terminal (much love EWR), I teetered away from basically disagreeing with the question, but I’m feeling more accommodating since then. Comparing the two methods is a tough one.

On one hand we have user-dedicated PCs, and on the other we have a populated blade chassis, shared via your virtualization of choice. Totally an apples-and-oranges situation. So the difference may be jarring, in that the Moonshot m700 cartridges do not require any hypervisor. Every m700 cartridge has 4 self-contained systems and supports booting directly from Citrix PVS 7.1.

For those that have done HDI in the past with other solutions, this one is much smaller, at around 5U… get ready for this: 180 PCs in 5U. Maybe I’m easy to impress, but that is kind of amazing. 180 PCs, each with a fully dedicated 8GB of RAM, a four-core APU, and an onboard SSD. If this PC were sitting on your desk all by its lonesome, it would be a pretty great business-class system.

You could get the same specs from traditional VDI, but you would need a system that supported almost 1.5TB of memory and 720 cores, and then you would still need a virtualization layer to share it all out.
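The aggregate numbers above check out with quick arithmetic, assuming the 45-cartridge chassis with 4 nodes per m700 cartridge described in this post:

```python
# Back-of-envelope check of the Moonshot m700 aggregate figures.
nodes = 45 * 4        # 45 m700 cartridges x 4 self-contained systems each
ram_gb = nodes * 8    # 8GB of dedicated RAM per node
cores = nodes * 4     # four-core APU per node

print(nodes, ram_gb, cores)  # 180 1440 720 -> ~1.5TB RAM and 720 cores
```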

What end users want is an experience like they are using a dedicated computer, and what better solution than one which is exactly that?! So this is why I almost disagree with the question. The cost difference, as little as it may be, is now negligible, because the experience is head and shoulders above any traditional VDI experience I have encountered.

It is all about experience and alignment, the end-user knows what they want from their experience. It is up to us “techies” to get them to a solution that is in alignment with the business case.

Retail Related Field

Note: This customer will deliver around 750 desktops with Moonshot.

My answer would be:

As a long time Citrix admin, I knew the advantages of centralized compute environments. For many years I was at a community bank and we used terminals connecting to Citrix published desktops on MetaFrame XP.  This in essence was the first type of VDI.  We were able to support nearly 400 users with an IT staff of 4 full time employees. There was only a user side Linux device, and all user compute was load balanced across the farm.  Roaming profiles and redirected folders stayed on shares in the data center.  This gave a measure of security for PII data, knowing it was not on remote hard drives that could be lost or stolen.  Also there is an economic benefit to this model as terminals usually cost less than PCs and have a far longer useful life than PCs.  Using terminals also gives a centralized management framework that allows for minimal daily maintenance for the user end points.

So the concepts of VDI have strong advantages for organizations concerned with user data security, user hardware life cycles, and IT management with a small staff.

I am now at a larger organization with multiple corporate sites and several hundred retail stores. I had been trying for a year or more to raise interest in traditional VDI at my current company. We have a very robust VMware environment and SAN.  We also use XenApp to provide multiple user apps across the country to our retail stores and other corporate sites.

Additionally, we have a large number of onsite consultants working on multiple projects. My suggestion was to use VDI to provide all the advantages above on a new project. The retail store managers needed a way to have more robust applications and other access that could not be accommodated on a POS sales register.  Also, each consultant was issued a company laptop. The motivation was to keep data assets safe as possible and under company control.

My suggestion was to use VDI and terminals for a new store user system and for consultants. Including the consultants could enforce the traditional controls but allow for BYOD to reduce hardware expense.

But there was a lot of resistance because of the general understanding that VDI could go very badly. There is another problem with IOPS when it comes to VDI. All IOPS coming out of virtual desktops are typically treated as “equal” by the hypervisor. This causes a lack of consistent user experience (as user workloads vary). Imagine a user running a zip file compression or running an on-demand virus scan on the same host as the CEO who needs his desktop to work on his board meeting presentation. I researched several hybrid and flash based storage systems aligned with VDI deployments. My conclusion was that the total VDI solution was viable now because of the new storage options.

But that was not the only barrier.  The organization is very committed to vendor standardization and not enabling a sprawl of siloes of independent solutions.  So the addition of new VDI-centric storage was not agreeable.  And without that enhancement, the usual VDI IOPs concern remained.

Another hurdle turned out to be the business side.  As they came to understand the shared nature of VDI resources, there was growing resistance.  No one wanted a system that was not completely “theirs”. Even after explaining the IT benefits and small probabilities of user bottlenecks, it was still not well thought of. So traditional VDI was not seen as a safe and reliable solution to match the company culture and expectations.

Then I discovered the HP Moonshot platform and the Converged System 100. Immediately I knew that it had great potential.  Hosted Desktop Infrastructure solves all the concerns I encountered.  It matched our existing hardware vendor. It provides substantial dedicated CPU, GPU, and RAM for every user. And because of the nature of Citrix Provisioning and its ability to cache in memory, the user IOPs to disk are greatly reduced.  Plus Citrix Provisioning frees the onboard 64GB SSD for other uses.  It could hold persistent data, or business apps.  We use it as the page file location.

The use of XenDesktop and Receiver also creates a user system that can be available anytime on multiple devices.

I will say there is one caveat. We decided to segregate the CS100 server components on dedicated VMware hosts. We also used a new HP 3PAR system as the underlying storage for the entire design. This was mainly because it started as a POC. But because of its success, and the vendor match, the additional hosts and storage were accepted.

Another motivation for making that “giant leap” to Moonshot was the vision behind it. Having that chassis in your Data Center does more than enable HDI. Other server cartridges are available and more will be available in the future. I think it’s the beginning of a new phase of hardware consolidation and server computing. Also, the power consumption is impressive.  It only requires 33 watts typical for a cartridge running 4 Windows systems with a Quad core AMD APU, 8GB RAM, and an SSD.

Another plus is each Windows node has 2 x 1Gb NICs. This may not be meaningful when you think of an end-user station, but having it there gives you more options. We use one NIC as a normal LAN link. The second is used as a direct link to a dedicated iSCSI LUN on the 3PAR. Having a permanent storage partition per system has enabled us to add business data that is unique to each store location.

I am a big fan of HP HDI and Moonshot in general.  I know our particular situation will not match a lot of businesses.  But people should sit down and think about the potential it offers in terms of consolidation, energy savings, flexibility of architectures, end user mobility and user computing resources.  I believe it is a game changer on several levels.

There you go.

If you have any questions or comments, leave them in the comments section.



Citrix Director 7.6 Deep-Dive Part 5: Monitoring & Troubleshooting Anonymous User Sessions

Citrix Blog Repost

Anonymous (unauthenticated) user session support

A new feature of XenDesktop 7.6.

Instead of requiring users to log into Citrix Receiver with Active Directory user credentials, a combination of network security and authentication within the application itself is relied upon.

Anonymous Session Support refers to running sessions as a set of pooled, local user accounts.

1.  This feature is popular in XenApp in the healthcare industry, since their applications typically have server back-ends with their own logons, separate from users’ AD accounts. Thus, the Windows account running the client application is irrelevant.

2.  Anonymous Session Support consists of a pool of local user accounts that are managed by XenDesktop and typically named AnonXYZ, where XYZ is a unique 3-digit value.

More information on Anonymous Session Support feature is available here.

With anonymous sessions, the end user will not know the actual username.

Each anonymous session is assigned a random name such as ANON001, ANON002, etc.

1.  Citrix Director helps administrators view details of each XenApp session via User Search. But here is the catch: how do you view details of an anonymous user session, when it does not use Active Directory credentials and the end user has no way of knowing the username?

2.  The Help Desk admin needs a way to search for the user’s specific anonymous session and bring up the Help Desk and User Details views in order to follow their standard troubleshooting processes.
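The pooled account naming scheme described above (a fixed prefix plus a unique three-digit suffix) can be sketched in a couple of lines; the prefix and count here are illustrative, not the product’s actual provisioning code:

```python
# Sketch of the assumed AnonXYZ naming scheme for pooled anonymous accounts.
def anon_accounts(count, prefix="Anon"):
    """Generate `count` account names with a zero-padded 3-digit suffix."""
    return [f"{prefix}{i:03d}" for i in range(1, count + 1)]

print(anon_accounts(3))  # ['Anon001', 'Anon002', 'Anon003']
```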

EndPoint Search

Endpoint Search is new functionality introduced in Citrix Director 7.6 that can be leveraged to view details of anonymous user sessions. Typically, the end user will know the name of their endpoint, as many times there is a sticker attached to the screen or device with the device (endpoint) name. When the end user calls into the help desk, they can now tell the Help Desk admin the endpoint name so the Help Desk administrator can start the troubleshooting process using Director.

1.  Sessions running on a particular endpoint device can be viewed through the Endpoint Search functionality.

2.  Administrators can search for the client device, and a list of all the sessions launched from that particular client is provided (as shown in the screenshot below), from which the administrator can choose the required session to view its details.

3.  Searching for an endpoint can be expensive across a large number of Sites.

In order to improve performance, we have provided the ability to “group” endpoints, which restricts the search to a defined group. This is accomplished via the Director Config Tool. How do you group endpoints? All you have to do is run the Director Config Tool, select /createsitegroups, provide the IP and a name, and you’re done! Once the configuration is complete, the “Select a group” option will be available as part of the search view.

Note: Endpoint Search results include all clients from which a session is launched irrespective of whether the session is an anonymous user session or not.

If Director is monitoring multiple Sites, the landing page after login will have a search option for endpoints.

Within another view of Director, administrators can search for endpoint sessions using the new Search button on the ribbon bar of Director:

Below is a screenshot of the list of sessions running on a particular client machine:

Note: The endpoint names must be unique in order for Director to be able to search and return the appropriate session.

Details of Anonymous User Session in Client Details view:

  • Session Details: The Anonymous field in the Session Details panel indicates whether or not the session is anonymous (as mentioned in the note above, Endpoint Search is not limited to anonymous user sessions).

Activity Manager and Machine Details Panel are similar to the User Details Page.

Note:  Shadow is disabled for Anonymous Sessions, as Anonymous user accounts are guest accounts that do not have permissions for Shadowing.

  • Logon Duration: The logon duration in the Client Details page covers only the current session plus the 7-day average of logons from that client device; unlike when viewing a specific user, the 7-day average is the average for that Delivery Group. The durations of the individual logon steps (brokering, etc.) are the same as in the User Details page. More on logon duration can be found here.
  • Personalization: The Reset Profile and PvD Reset buttons are disabled and the panel displays “not available”, as anonymous user accounts do not use Citrix personalization components.
  • HDX Insight: Network data from HDX Insight is not available for anonymous users.

Anonymous User Sessions In Filters View:

Director also facilitates filtering out all anonymous user sessions through the Sessions view on the Filters page.

This provides the ability to quickly perform global actions (e.g. logoff) on anonymous sessions as needed.

Navigate to the Filters > Sessions page and use the filters to select “Anonymous is Yes”.



Note: In the screenshot above, observe that the Endpoint Name column is clickable. Clicking an endpoint name leads to the same behavior as Endpoint Search.


Extending Director’s Help Desk functionality to include searching and troubleshooting endpoints and machines allows the Help Desk to expand its troubleshooting use cases and enables one tool and one process for first-call resolution.

Citrix Director 7.6 Deep-Dive Part 4: Troubleshooting Machines



XenDesktop 7.6 now includes machine details in Citrix Director. These details enable IT administrators to get more insight about the machines in use. The machine details page consists of machine utilization, infrastructure details, number of sessions, and hotfix details. With this new addition, the administrators can view machine-level details on the Director console itself.

As shown in the screenshot below, after logging into Director, you can now search for a machine directly by selecting “Machine” in the dropdown list on the left and then entering the name of the machine in the “Search for machine” field on the right.

The Director administrator can now configure Site groups as an additional search filter to narrow down results to these specific groups. Create the groups on the Director server by running the configuration tool from a command prompt:

C:\inetpub\wwwroot\Director\tools\DirectorConfig.exe /createsitegroups

Then provide a Site group name and the IP address of the Site’s Delivery Controller to create the Site group, as shown in the following screenshot:

After the Site groups are created, the administrator can select a group from the newly added “Select a group” field:

All machines that match the search string entered appear in the “Search for machine” dropdown. The administrator can then select the appropriate machine to navigate to the machine details page.

The machine details page has five sections:

  1. Machine Details
  2. Machine Utilization – CPU and memory usage
  3. Sessions – The total number of connected and disconnected sessions
  4. Infrastructure Panel – Hypervisor and Delivery Controller sections
  5. Hotfixes

Machine Details

The panel consists of the following fields:

  1. Machine name: The domain\machine name of the machine selected.
  2. Display name: The display name of the machine as configured while creating and publishing the Delivery Group.
  3. Delivery Group: The Delivery Group that contains the machine selected.
  4. Machine Catalog: The catalog that contains the machine selected.
  5. Remote PC access: Indicates whether the selected machine is configured for Remote PC Access.
  6. Site name: The Site name with which the machine is associated.
  7. Registration state: Indicates whether the machine is registered with the Delivery Controller.
  8. OS type: Indicates the operating system running on the machine.
  9. Allocation type: Indicates whether the allocation is static or random.
  10. Machine IP: Gives the IP address of the machine (IPv4/IPv6).
  11. Organizational unit: Gives the organizational unit with which the machine is associated in Active Directory.
  12. VDA version: Gives the version of the XenDesktop VDA installed on the machine.
  13. Host: Indicates the name of the hypervisor host as configured on Studio.
  14. Server: Indicates the name of the hypervisor as seen on the hypervisor console, such as the XenCenter/vSphere/SCVMM console.
  15. VM name: Indicates the name of the virtual machine as seen on the hypervisor console.
  16. vCPU: Indicates the number of vCPUs allocated on the hypervisor for the machine.
  17. Memory: Indicates the memory allocated on the hypervisor for the machine.
  18. Hard disk: Indicates the hard disk allotted to the machine on the hypervisor.
  19. Avg. disk sec/transfer: The average time in seconds per disk transfer, as seen in the Performance Monitor tool on the machine.
  20. Current disk queue length: The disk queue length as seen on the performance monitor tool on the machine.
  21. Load evaluator index: This field, which is only present for server OS machines, gives a measure of the load on the server machine distributed across CPU, memory, disk and session count.

The Director admin can perform some additional operations on machine details page:

a)      Power Control – The Power Control dropdown allows the user to shut down, restart, force restart, force shut down, and start a virtual machine. To perform these power control operations on Remote PC machines, you must configure the XenDesktop Wake on LAN feature.

b)      Manage Users – You can now assign users to the machine directly from Director console. To do so, click the Manage Users button, which opens up the popup below:

c)      Maintenance Mode – You can now set the maintenance mode for the machine from the Director console by clicking on the Maintenance Mode button on the machine details panel. You can turn it off by clicking the same button again.

Machine Utilization

The Machine Utilization panel displays memory and CPU usage over the past minute so IT admins can monitor the load on the machine from the Director console. This enables help desk admins to solve issues related to slow and poor performance in user sessions because of either CPU or memory usage overload. The panel is updated every five seconds.


Sessions

The Sessions panel shows the total number of sessions associated with the machine, including the number of connected and disconnected sessions. The numbers are hyperlinks that redirect to the Filters page.


Infrastructure

The Infrastructure panel is divided into two sections, hypervisor status and Delivery Controller.

Hypervisor Status – The alerts set on the hypervisor host are shown in this section. (Note: Alerts set on a Hyper-V host are currently not supported.)

Delivery Controller – This panel consists of multiple fields that are explained below:

a)      Status: The status of the Delivery Controller, either online or offline. Offline can mean, for example, that the Director server is unable to reach the Delivery Controller, or that the Broker Service on the Delivery Controller is not running.

b)       Services: Shows the number of core services that are currently not available, including Citrix AD Identity Service, Broker Service, Central Configuration Service, Hosting Unit Service, Configuration Logging Service, Delegated Administration Service, Machine Creation Services and Monitor Service. Just like the alerts in the Hosts table, the administrator can click the alerts’ text and see a pop up displaying the name of the service, the time the service failed, and the location of that service.

c)      Site Database: Indicates whether the site database is connected. For example, the Delivery Controller is unable to contact the Site database; there is an issue with the database configuration; or there is version mismatch between the database and the service.

d)      License Server: Indicates whether you can connect to the license server configured for the Site. For example, the Controller is unable to contact the license server, or, if they are running on the same machine, the service may be stopped.

e)      Configuration Logging Database: Indicates whether the Configuration Logging Database is connected. For example, the Citrix Configuration Logging Service on the Controller is not running.

f)      Monitoring Database: Indicates whether the Monitoring Services Database is connected. For example, the Delivery Controller is unable to contact the Monitoring Services Database, or the Citrix Monitoring Service on the Controller is not running.


Hotfixes

The Hotfixes panel consists of details pertaining to the hotfixes installed on the machine selected. Details displayed include component, component version, hotfix name, hotfix file name, links to Knowledge Center articles, and effective date.

Count The Ways – Flash as Local Storage to an ESXi Host

Posted: 21 Jul 2014   By: Joel Grace

When performance trumps all other considerations, flash technology is a critical component in achieving the highest level of performance. By deploying Fusion ioMemory, a VM can achieve near-native performance results. This is known as pass-through (or direct) I/O.

The process of achieving direct I/O involves passing the PCIe device through to the VM, where the guest OS sees the underlying hardware as its own physical device. The ioMemory device is then formatted with a file system by the guest OS, rather than presented as a Virtual Machine File System (VMFS) datastore. This provides the lowest latency and the highest IOPS and throughput. Multiple ioMemory devices can also be combined to scale to the demands of the application.

Another option is to use ioMemory as a local VMFS datastore. This solution provides high VM performance while maintaining the ability to utilize features like thin provisioning, snapshots, VM portability and Storage vMotion. With this configuration, the ioMemory can be shared by VMs on the same ESXi host, with specific virtual machine disks (VMDKs) stored there for application acceleration.

Either of these options can be used for each of the following design examples.

Benefits of Direct I/O:

  • Raw hardware performance of flash within a VM
  • Ability to use RAID across ioMemory cards to drive higher performance within the VM
  • Use of any file system to manage the flash storage

Considerations of Direct I/O:

  • The ESXi host may need to be rebooted and the CPU VT flag enabled
  • The Fusion-io VSL driver will need to be installed in the guest VM to manage the device
  • Once assigned to a VM, the PCIe device cannot be shared with any other VMs

Benefits Local Datastore:

  • High performance of flash storage for VM VMDKs
  • Maintains VMware functions like snapshots and Storage vMotion

Considerations Local Datastore:

  • Not all VMDKs for a given VM have to reside on local flash; use shared storage for OS VMDKs and flash for application data VMDKs

SQL/SIOS

Many enterprise applications reveal their own high availability (HA) features when deployed in bare metal environments. These elements can be used inside VMs to provide an additional layer of protection to an application, beyond that of VMware HA.

Two great SQL examples of this are Microsoft’s Database Availability Groups and SteelEye DataKeeper. Fusion-io customers leverage these technologies in bare metal environments to run all-flash databases without sacrificing high availability. The same is true for virtual environments.

By utilizing shared-nothing cluster aware application HA, VMs can still benefit from the flexibility provided by virtualization (hardware abstraction, mobility, etc.), but also take advantage of local flash storage resources for maximum performance.


Benefits:

  • Maximum application performance
  • Maximum application availability
  • Maintains the software-defined datacenter

Operational Considerations:

  • 100% virtualization is a main goal, but performance is critical
  • Does the virtualized application have additional HA features?
  • A SAN/NAS-based datastore can be used for Storage vMotion if hosts need to be taken offline for maintenance

CITRIX

The Citrix XenDesktop and XenApp application suites also present interesting use cases for local flash in VMWare environments. Often times these applications are deployed in a stateless fashion via Citrix Provisioning Services, where several desktop clones or XenApp servers are booting from centralized read-only golden images. Citrix Provisioning Services stores all data changes during the users’ session in a user-defined write cache location.  When a user logs off or the XenApp server is rebooted, this data is flushed clean. The write cache location can be stored across the network on the PVS servers, or on local storage devices. By storing this data on a local Fusion-io datastore on the ESXi host, it drastically reduces access time to active user data making for a better Citrix user experience and higher VM density.


Maximum application performance
Reduced network load between VMs and the Citrix PVS server
Avoids slow performance when the SAN is under heavy IO pressure
More responsive applications for a better user experience

Operational Considerations

Citrix Personal vDisks (persistent desktop data) should be directed to the PVS server storage for resiliency.
PVS vDisk images can also be stored on ioDrives in the PVS server, further increasing performance while eliminating the dependence on the SAN altogether.
ioDrive capacity is determined by Citrix write-cache sizing best practices, typically a 5GB .vmdk per XenDesktop instance.

70 desktops x 5GB write cache = 350GB total cache size (365GB ioDrive could be used in this case).
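The sizing arithmetic above can be captured in a small helper. This is a hypothetical sketch (the function name and defaults are illustrative, not from Citrix documentation), assuming one fixed-size write-cache .vmdk per XenDesktop instance as in the 5GB rule of thumb:

```python
def write_cache_capacity_gb(desktops: int, cache_per_desktop_gb: int = 5) -> int:
    """Total local flash capacity (GB) needed for PVS write caches,
    assuming a fixed-size write-cache .vmdk per XenDesktop instance."""
    return desktops * cache_per_desktop_gb

# 70 desktops x 5GB write cache = 350GB total cache size
print(write_cache_capacity_gb(70))  # -> 350
```

Round the result up to the next available ioDrive capacity (here, a 365GB ioDrive covers the 350GB requirement).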


VMware users can boost their systems to achieve maximum performance and acceleration using flash memory. Flash memory maintains maximum application availability during heavy I/O pressure and makes applications more responsive, providing a better user experience. Flash can also reduce network load between VMs and the Citrix PVS server.

Joel Grace, Sales Engineer
Copyright © 2014 SanDisk Corporation. All rights reserved.

The Citrix Blog repost: To infinity and beyond w/HP Moonshot for XenDesktop & a little wizardry!


To infinity and beyond w/HP Moonshot for XenDesktop & a little wizardry!

HP Moonshot for XenDesktop

When the moon hits your eye like a big idea oh my that’s innovation…Or something like that.

YouTube link to XenDesktop and HP Moonshot wizard

Earlier in the year at Citrix Synergy 2014, I had a standing-room-only session where we presented the Moonshot for XenDesktop offering for the first time and also talked about the new XenApp for Moonshot demo we had just shown in the keynote earlier that day. One of the ideas we demonstrated during my session was the XenDesktop and HP Moonshot wizard for creating pooled physical desktops and also controlling power management for pooled physical desktops.

Currently, XenDesktop uses the HCL function to talk to a hypervisor management stack, like System Center 2012 R2 Virtual Machine Manager, to provision virtual machines and control the power of those virtual machines on Hyper-V. In the case of HP Moonshot there is no hypervisor, which means there is no management stack, so the question then is: how do you provision pooled physical bare-metal desktops with PVS and control power without the hypervisor and the management stack?

In comes the XenDesktop SDK and PVS PowerShell, which allow us to have the same provisioning and power management without the hypervisor or management stack. Since that Synergy session, the master of the QuickShadow code and long-time Citrite Ronn Martin @Ronn_Martin has been hard at work building a new version of the Moonshot Wizard that we would love for our customers using HP Moonshot and XenDesktop 7.1, 7.5, and 7.6 to test. Of course we test our code as well, so a special thanks to Patrick Brennan from Citrix for being my wingman testing and improving the code all these months. I couldn’t have done this without ya! While this is a great innovation concept between Citrix and HP, it is not part of the native Citrix PVS or Studio console and is therefore considered unsupported. However, we welcome any feedback and will do our best to answer any technical questions that may arise using the forum link below.

The Moonshot Setup wizard for PVS can be downloaded using the URL below:

• This has been tested with PVS 7.1 and 7.6.

The Moonshot power management for XenDesktop 7.1 can be downloaded using the URL below (thanks to Ronn for getting that link to me!):

The Moonshot power management for XenDesktop 7.5 and 7.6 can be downloaded using the URL below:

XenDesktop and HP Moonshot Wizard YouTube Demo

The XenDesktop HP Moonshot tools provided are not currently Citrix-supported tools. Use of these tools should be tested carefully and confined to a lab environment. Any feedback or suggestions should be entered into the following Citrix Discussions URL:

Thank you and we look forward to your feedback! @TonySanchez_CTX


My Virtual Vision

My thoughts on application delivery