SHORTNAME ISSUES WITH CITRIX XENAPP AND CITRIX PVS (Repost from My Virtual Vision)

source: MY VIRTUAL VISION

I was working on a new Citrix PVS image for one of my current projects. The tests before running P2PVS.exe were good and all my apps worked like they should, so I prepped the server for imaging (preparing antivirus, the last PVS optimizations and a full pre-scan of my VM) and ran P2PVS.

After a successful P2PVS I ran into a couple of problems: the RES Workspace Manager service didn’t start, and one of the key components of this image (an Outlook plugin) gave a weird error that didn’t show up at first. I did some troubleshooting and found that the RES Software WM service couldn’t start because the executable wasn’t found, even though I could browse to the directory and res.exe was there. Luckily Barry Schiffer came up with the idea to check the short names that are used to start the RES WM service; a lot of software still uses short names for the Office folder as well, so apparently we had short name issues with Citrix XenApp and Citrix PVS.

I soon realized that I had used the XenApp 6.x (Windows 2008 R2) – Optimization Guide to optimize the image, and one of its recommendations was to set NtfsDisable8dot3NameCreation to 1, which disables 8.3 short name creation:

[Screenshot: the NtfsDisable8dot3NameCreation registry value as shown in the optimization guide]

Note: Even in 2012 some applications still rely on 8.3 names. Scanning for commonly used short directory names (e.g. C:\PROGRA~1) can help reveal affected programs.

After changing this value back to “0” we were able to run a successful P2PVS, and the errors in RES Workspace Manager and the key application were gone. Here’s some background info on why you might disable short names and how to check and change this from the command line.

To disable or enable the 8.3 naming convention, you can use the fsutil 8dot3name set command:

C:\Windows\system32>fsutil 8dot3name set /?
usage : set [0 through 3] | [<Volume Path> 1 | 0]

I was also pointed to the Microsoft documentation page on 8.3 name creation, which gives the following sample commands.

The following command disables 8dot3 name creation on all volumes:

fsutil 8dot3name set 1

The following command disables 8dot3 name creation on the C drive:

 fsutil 8dot3name set C: 1
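
To check the current state from the command line, a quick sanity check could look like this (a sketch using standard Windows tools, not part of the original post):

fsutil 8dot3name query C:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisable8dot3NameCreation
dir /x C:\

If dir /x shows no short name next to Program Files, anything that hard-codes C:\PROGRA~1 (like the applications above) will break.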

Kees Baggerman

Kees Baggerman is a senior performance and solution engineer for desktop and application virtualization solutions at Nutanix. Over the years Kees has driven functional and technical designs, migrations and implementation engagements for numerous Microsoft, Citrix and RES infrastructures.



(Old Post) XP Performance Optimizations for XenDesktop and Provisioning Server vDisks

Performance Optimizations for XenDesktop and Provisioning Server vDisks
By Paul Wilson

This is an older post on XP XenDesktop, but some of the information still applies.
Introduction
Many environments have already managed to streamline the image building process and are familiar with the many Windows XP performance and optimization tips. Both XenDesktop (XD) and Provisioning Server (PVS) support the Windows XP operating system and can benefit from performance enhancements. For those who are familiar with these performance enhancements, this blog may provide little in the way of new information. None of the optimizations below are required, but they are listed here for your convenience if they make sense in your environment.

The optimizations are organized into three sections: those that apply to the current user profile, those that apply to all users on the machine, and recommendations before creating the vDisk. The first section deals with items that can be set in the default user profile. The second section deals with settings that the administrator can set for all users that work on the machine. The final section recommends a few things to do before taking the vDisk image. When available, each section provides a link to the page on Microsoft's website that explains the setting further.

Settings for the Default User Profile
This section lists a few of the settings that will improve the user experience but are set at the user profile level. The recommendation is to create a generic user, apply the applicable settings and, when completed, replace the default user profile with the generic user profile; the steps for doing so are found at the end of this section.

Force Offscreen Compositing for Internet Explorer
Turning this setting on removes the flickering that may appear when using Internet Explorer through XenDesktop by telling Internet Explorer to fully render the page prior to displaying it. This is especially helpful on Internet Explorer 7.

Open Internet Explorer
Select Tools >> Internet Options from the menu
Select the Advanced tab
In the Browsing section, enable the checkbox for "Force offscreen compositing even under Terminal Services"
Click OK to save the changes
Restart Internet Explorer
More information available at http://support.microsoft.com/kb/271246/en-us

Remove the Menu Delay
The Start menu has a built-in delay of 400 milliseconds. To speed the menu response time, follow these steps to remove the delay:

Start the Registry Editor (Regedit.exe)
Navigate to HKEY_CURRENT_USER\Control Panel\Desktop
Set the value of MenuShowDelay to 0
Exit the Registry Editor
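
If you prefer to script this change for the generic user, the same setting can be applied with reg.exe (a sketch; MenuShowDelay is a string value in milliseconds):

reg add "HKCU\Control Panel\Desktop" /v MenuShowDelay /t REG_SZ /d 0 /f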

Remove Unnecessary Visual Effects
Disable unnecessary visual effects such as menu animations and shadow effects, which generally just slow down the response time of the desktop.

Right-click My Computer
Click Properties
Click Advanced
Click the Settings button under the Performance section
Click "Adjust for best performance"
If you want to keep the XP Visual Style, scroll to the bottom and check the last box titled "Use visual styles on windows and buttons"

Disable the desktop cleanup wizard
To stop the wizard from automatically running every 60 days:

Right-click a blank spot on the desktop, and then click Properties to open the Display Properties dialog box
Click the Desktop tab
Click Customize desktop to open the Desktop Items dialog box
Disable the "Run Desktop Cleanup Wizard every 60 days" setting
Click OK twice to close the dialog boxes
More information available at http://support.microsoft.com/kb/320154

Disable Automatic Searching of Network Printers and Shares
Automatic search periodically polls your network to check for new shared resources and adds relevant icons into My Network Places if anything is found. If you wish to prevent XP from regularly searching your network unnecessarily then follow these steps:

Open the Control Panel
Select Folder Options. If you use the Control Panel Category View you'll find Folder Options under Appearance and Themes
Click the View tab
In the Advanced Settings list, disable the "Automatically Search for Network Folders and Printers" setting
Click OK

Disable the Windows XP Tour Notifier
If you did not turn this off before you logged in as your base user for the default profile, you can manually disable the prompt on a per-user basis by following these steps:

Start Registry Editor (Regedit.exe)
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Applets\Tour
On the Edit menu, point to New, click DWORD Value, and type RunCount
Set the data value to 0 (zero), and then click OK
Quit Registry Editor
More information available at http://support.microsoft.com/kb/311489
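
If you are scripting the profile build, the equivalent reg.exe command would be something like this (a sketch, assuming you are logged on as the generic user):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Applets\Tour" /v RunCount /t REG_DWORD /d 0 /f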

Turn off Automatic Updates
Since you are running a read-only image, using automatic updates will cause the operating system to continually download the same updates each time the image is booted. The best course of action is to turn it off. You have three options that can be used to disable the service:

Use the PVS Optimizer tool and leave the "Disable automatic update service" box checked
~ or ~

In the Services Control Panel, change the Startup Type of the Automatic Updates service to "Disabled"
~ or ~

Run GPEDIT.MSC and navigate to: Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Windows Update. Set the "Configure Automatic Updates" setting to "Disabled"
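
For a scripted build, the second option (disabling the service) can also be done from a command prompt (a sketch; wuauserv is the service name for Automatic Updates on XP):

sc config wuauserv start= disabled
sc stop wuauserv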

Turn off Language Bar
If there is no need for the language tool bar (the pen icon in the systray) you can disable it using either of these two methods.

Right-click taskbar > Toolbars and uncheck the "Language Bar" option.
~ or ~

Navigate to Control Panel > Regional and Language Options > Languages (tab) > Details (button) > Language Bar (button at bottom). Disable the "Show the Language bar on the desktop" and "Show additional Language bar icons in the taskbar" settings.

Make the User Profile the Default User Profile
When you are done completing all the User Profile Settings (using a generic user) you can copy the profile over to the default user using the process below.

Log in as an administrator (the local Administrator is recommended), not as the base user for the profile, because you cannot copy a profile that is in use.

Right-click on My Computer
Choose Properties
Select the Advanced tab
Click the Settings button under the "User Profiles" section
Select your base user profile where the changes above were made and click Copy To
Click the Browse button and browse to C:\Documents and Settings\Default User
Click OK once to save the path
Click the Change button under "Permitted to use"
Enter Everyone
Click OK to save
Click Yes to confirm overwriting of the default user profile
NOTE: Before copying it over, be sure to remove any user or machine specific data for the ICA Client, the ICA Streaming Client, Password Manager, and EdgeSight. Since the image prep for these items is beyond the scope of this blog, I will save it for a topic another day.

If you would like to know more about user profile management in general, check out David Wagner's blog on the Citrix User Profile Manager available at http://community.citrix.com/blogs/citrite/davidwag/.
Settings for the Machine
This section provides a list of the optimizations that will affect all users of the image. These settings are usually set after logging in as an administrator.

Power Configuration Settings
Two of the power settings can adversely affect the performance of PVS. One of them is the hard disk power savings. If the PVS server is using a local hard disk for the vDisk cache, you do not want the operating system to power down the local drive. The other setting is the Hibernate setting. The PVS Optimizer tool will disable hibernating, but you can manually do it as well. Here are the steps for disabling the power settings:

Open Control Panel
Select the Power Options applet
Select the Power Schemes tab
For the default power scheme, set the "Turn off hard disks" setting to Never
Select the Hibernate tab
Disable the "Enable Hibernation" setting
Click OK to save the settings
Delete the C:\hiberfil.sys hidden file
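
If you prefer the command line, XP's powercfg.exe can make roughly the same changes (a sketch; "Home/Office Desk" is assumed to be the active power scheme, so substitute your own). Disabling hibernation this way should also remove hiberfil.sys for you:

powercfg /change "Home/Office Desk" /disk-timeout-ac 0
powercfg /hibernate off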

Permanently Remove the Language Bar
If there is no need for the language tool bar to be installed at all, you can permanently remove it by running the following command from a command-prompt:

Regsvr32.exe /u msutb.dll

To reinstall it because you later found out you should not have removed it, you can run this command:

Regsvr32.exe msutb.dll
Disable TCP Checksum Offloading
This performance optimization is highly recommended by both Citrix and Microsoft for all Windows XP workstations that will be communicating over the network with other Microsoft resources. To work around this problem, turn off TCP checksum offloading on the network adapter using these steps:

Start the Registry Editor (Regedit.exe)
Navigate to the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
In the right pane, make sure that the DisableTaskOffload registry entry exists. If this entry does not exist, follow these steps to add the entry:
On the Edit menu, point to New, and then click DWORD Value.
Type DisableTaskOffload, and then press Enter
Click DisableTaskOffload.
On the Edit menu, click Modify
Type 1 in the Value data box, and then press Enter
Exit Registry Editor
More information available at http://support.microsoft.com/kb/904946/
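
For unattended builds, the same value can be set with a single reg.exe command (a sketch of the change described above):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f
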
Turn off Security Center
To disable the Security Center service so that users are not prompted when the firewall or anti-virus updates are out of date, perform the following steps (a scriptable alternative is shown after the steps):

Open the Services Control Panel
Edit the Security Center service properties and set the Startup Type to "Disabled"
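
As a scriptable alternative (a sketch; wscsvc is the Security Center service name):

sc config wscsvc start= disabled
sc stop wscsvc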

Disable Last Access Time Stamp
Windows XP has a habit of time stamping every file it reads with the date and time it was last accessed. While this is a nice feature, it is not always necessary in PVS environments where the files are statically supplied from a standard image and no backup software will be used. Putting a timestamp on a recently read file creates a write access every time a read is executed. With Provisioning Server, these writes go to the vDisk cache file, increasing network traffic if the cache is on the PVS server. To disable the last access timestamp behavior, complete the following steps:

Start a command prompt
Type FSUTIL behavior set disablelastaccess 1 and press Enter
Requires a reboot to take effect
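
To confirm the change took effect after the reboot, you can query the same setting (a quick verification step, not part of the original list):

fsutil behavior query disablelastaccess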

Disable the Windows XP Tour Notifier for New Users
Windows XP likes to notify all new users that an XP tour can be taken. While this is a nice feature for new users, it typically is annoying for existing users. To suppress the XP tour prompt for all new users, follow these steps:

Start Registry Editor (Regedit.exe)
Navigate to the registry key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Applets\Tour
If the Tour key does not exist, follow these steps to create it:
From the Edit menu choose New
Click Key and type Tour as the key name
On the Edit menu, point to New, and then click DWORD Value
Type RunCount as the name for the new value
Set the data value for the RunCount value to 0 (zero), and then click OK
Quit the Registry Editor
More information available at http://support.microsoft.com/kb/311489
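
The machine-wide equivalent as a single command would be something like this (a sketch; reg add creates the Tour key if it does not exist):

reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Applets\Tour" /v RunCount /t REG_DWORD /d 0 /f
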
Turn off System Restore
System Restore is the feature that allows a computer system to be rolled back, or restored, to a point before certain events took place, for example, prior to specific software or hardware installations. When you are using a standard mode (read-only) vDisk, there is no reason to have the System Restore feature enabled. The PVS Optimizer tool disables the System Restore feature, but if you are not using that tool, you should complete the following steps:

Right-click My Computer, and then click Properties
On the System Restore tab, click to select the "Turn off System Restore" (or "Turn off System Restore on all drives") check box
Click OK, and then click Yes when you are prompted to confirm that you want to turn off System Restore
To re-enable System Restore, follow the same steps, but clear the check box
More information available at http://support.microsoft.com/kb/264887
Disable Windows Indexing Services
Windows Indexing service adds overhead to the PVS vDisk by reading the files from the vDisk for indexing. Use one of the following three methods to disable Indexing:

Use the PVS Optimizer tool and leave the "Disable Indexing Services" setting enabled
~ or ~

To turn off indexing at the drive level, perform these steps:
Open My Computer
Right-click on the drive on which you wish to disable the Indexing Service
Select Properties
Under the General tab, disable the "Allow the Indexing Service to index this disk for fast file searching" setting
~ or ~

To disable the indexing service at the service level, perform these steps:
Click Start, Run, type services.msc then press Enter or click OK
Scroll to the "Indexing Service" in the right-hand pane and double-click it
Change the Startup type to "Manual" or "Disabled" and click Apply
Click the Stop button and wait for the service to stop then click OK
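
The third option can also be scripted (a sketch; cisvc is the Indexing Service name on Windows XP):

sc config cisvc start= disabled
sc stop cisvc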

Modify the Windows Service Timeout
In environments with shift changes and large numbers of virtual machines rebooting, some virtual machines may fail to register because the Windows service timeout may be reached before the Citrix Desktop Service starts. The default Windows service timeout is 30 seconds, which may not be long enough for all the services to start when the virtual machines are coming online simultaneously. We recommend changing the 30-second default to 120 seconds to give the services time to start before the Citrix Desktop Service starts. The timeout value is specified in milliseconds, so 120 seconds = 120000 ms. The following registry change lengthens the Windows service timeout period (a scriptable equivalent is shown after the steps).

Start Registry Editor (Regedit.exe)
Navigate to the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
If the ServicesPipeTimeout value is not present, use the following steps to create it:
Click the Control subkey
On the Edit menu, point to New, and then click DWORD Value
Type ServicesPipeTimeout, and then press Enter
Right-click the ServicesPipeTimeout key and then click Modify
Click Decimal
Type 120000, and then click OK
Quit the Registry Editor
Reboot for the changes to take effect
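
As a scriptable equivalent of the steps above (a sketch; 120000 is the decimal value in milliseconds):

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 120000 /f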

Disable Remaining Unnecessary Services
You can go through the list of other services that are configured on Windows XP and disable any that will not be used in your environment. Two possible candidates are the Wireless Zero Configuration service and the Themes service; a couple of example commands are shown below.
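
For example (a sketch; WZCSVC and Themes are the XP service names for Wireless Zero Configuration and Themes, and they should only be disabled if they really are unused in your environment):

sc config WZCSVC start= disabled
sc config Themes start= disabled
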
Enable ClearType
To enable ClearType and make any adjustments to suit your eyes, go to the Microsoft Typography pages and follow the simple instructions. You can adjust ClearType in the Control Panel after installing the software at the link.
Recommendations Before Imaging
Below is a list of recommendations that can be completed right before creating the vDisk image. Most of these are designed to optimize the layout of the files on the disk so that the PVS server can operate at maximum efficiency.
Zero Deleted Files
SDelete is a secure file delete utility that can be used to free and cleanup unused space on the image. In short, it zeroes out any files that have been freed up by the operating system and helps the image run faster. For more information about how it works or to download it, visit the URL below. The recommended options are -z and -c. (sdelete -z -c)

Usage: sdelete [-p passes] [-s] [-q] <file or directory>

sdelete [-p passes] [-z|-c] [drive letter]

More information available at http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx
Defragment the Local Disk
Run some type of disk defragmenter tool to optimize the files on the image. It is best to run the utility after removing the pagefile.sys and hiberfil.sys files. If you will use a page file, re-enable it after you defragment the disk so that the page file is contiguous. The Windows Defragmenter can be found at Start >> Programs >> Accessories >> System Tools >> Disk Defragmenter.
Flush the DNS cache
Flush the DNS Cache using the ipconfig /flushdns command. This prevents any IP addresses cached on the read-only disk from interfering with DNS resolution at a later date.
Run ChkDsk
Verify the file system has no missing file links or pieces by running chkdsk /f from a command prompt.
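
Putting the last three recommendations together, a pre-imaging pass from the command prompt might look like this (a sketch; run it after SDelete and before shutting the VM down for imaging, and note that chkdsk on the system drive will offer to schedule the check for the next reboot):

ipconfig /flushdns
defrag c: -f
chkdsk c: /f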

VIRTUAL PROVISIONING SERVER – A SUCCESSFUL REAL WORLD EXAMPLE (Citrix log Repost)

I am an avid supporter of virtualizing Provisioning Server.  Servers today are just too powerful and it is a waste of resources to run things on bare metal.  Let’s face it, the average enterprise 1U rack or blade server has at least 2 sockets, 8+ cores and tons of RAM.  Running a single instance of Windows on one of these servers is a complete waste of resources.  I have often heard people say that you can only get 300 – 500 targets on a virtual PVS.  I have also seen customers thinking that they have to place a virtual PVS on each hypervisor host along with the target devices, so that the number of targets per PVS is limited and all traffic remains on the physical host and virtual switch.  I would like to finally debunk these myths and let you know that PVS virtualizes just fine, even in large environments, and you do not have to treat it any differently than other infrastructure servers that run as virtual machines.   I would like to take this opportunity to provide a real world customer example showing that Provisioning Server is an excellent candidate to virtualize for all environments, even large ones.

Real World Example

First, for the sake of privacy I will not be disclosing the name or any other identifying information about the customer, but I will provide some basic technical details as it relates to the virtual PVS deployment as well as some data showing how well virtual PVS is scaling.

Environment basics

  • Hypervisor is VMware 4.1 for both servers and Windows 7 desktops
  • PVS 5.6 SP1 is virtualized on same hosts along with other supporting server VMs
  • Windows 7 32-bit is virtualized on separate VMware hosts dedicated to desktop VMs
  • All hosts are connected with 10Gb Ethernet
  • There are 5000+ concurrent Windows 7 virtual machines being delivered by virtual PVS
  • All virtual machines (both Windows 7 and PVS) have one NIC.  PVS traffic and production Windows traffic traverses the same network path
  • Each virtual PVS was configured as a Windows 2008 R2 VM with 4 vCPUs and 40 GB RAM
  • The PVS Store is a local disk (VMDK) unique to each PVS server
  • Each Windows 7 VM has a unique hard disk (VMDK) that hosts the PVS write cache

So, how many target devices do you think that we could successfully get on a single virtual PVS; 300, 500, 1000???  Well, check out the screen shot below which was taken in the middle of the afternoon during peak workload time:

As you can see, on the first three PVS servers, we are running almost 1500 concurrent target devices.  How is performance holding up from a network perspective? The console screen shot was taken from PVS 01 so the task manager data represents 1482 connected target devices.  From the task manager graph, you can see that we are averaging 7% network utilization with occasional spikes of 10%.  Since this is a 10Gb interface, that means sustained networking for 1500 Windows 7 target devices is 700 – 1000 Mb/s. In theory, a single 1 Gig interface would support this load.

How about memory and CPU usage?  Check out the Task Manager screen shot below, taken from PVS 01 at the same time as the previous screen shot:

From a CPU perspective, you can see that we are averaging 13% CPU utilization with 1482 concurrently connected target devices.  Memory usage is only showing 6.74 GB committed; however, take note of the Cached memory (a.k.a. System Cache or File Cache).  The PVS server has used just under 34 GB RAM for file caching.  This extreme use of file cache is due to the fact that there are multiple different Windows 7 VHD files being hosted on the PVS server.  Windows will use all available free memory to cache the blocks of data being requested from these VHD files, thus reducing and almost eliminating the disk I/O on the virtual PVS servers.

At 1500 active targets, these virtual PVS servers are not even breaking a sweat.  So how many target devices could one of these virtual PVS servers support?  My customer has told me that they have seen it comfortably support 2000+ with plenty of head room still available.  It will obviously take more real world testing to validate where the true limit will be, but I would be very comfortable saying that each one of these virtual PVS servers could support 3000 active targets.

It is important to note that this customer is very proficient in all aspects of infrastructure and virtualization.  In fact, in my 13+ years of helping customers deploy Citrix solutions, the team working at this customer is by far the most proficient that I have ever worked with. They properly designed and optimized their network, storage and VMware environment to get the best performance possible.  While I will not be able to go into deep detail about their configuration, I will provide some of the specific Citrix/PVS optimizations that have been implemented.

There are Advanced PVS Stream Service settings that can be configured on the PVS server.  These settings refer to the threads and ports available to service target devices.  For the most optimal configuration, it is recommended that there be at least one thread per active target device. For more information on this setting, refer to Thomas Berger’s blog post: http://blogs.citrix.com/2011/07/11/pvs-secrets-part-3-ports-threads/

For this customer we increased the port range so that 58 UDP ports were used along with 48 threads per port for a total of 2784 threads.  Below is a screen shot of the settings that were implemented:

It is also important to note that we gave 3GB RAM to each Windows 7 32-bit VM.  It is important to make sure that you do not starve your target devices of memory. In the same way that the PVS server will use its System Cache RAM so that it does not have to keep reading the VHD blocks from disk, the Windows target devices will use System Cache RAM so that they do not have to keep requesting the same blocks of data from the PVS server.  Too little RAM in the target means that the network load on the PVS server will increase.   For more detailed information on how System Cache memory on PVS and target devices can affect performance, I highly recommend you read my white paper entitled Advanced Memory and Storage Considerations for Provisioning Services: http://support.citrix.com/article/ctx125126

Conclusion

Based on this real world example, you should not be afraid to virtualize Provisioning Server.  If you are virtualizing Provisioning Server, make sure you take the following into consideration:

It is also important that all of our other best practices for PVS and VDI are not overlooked.  In this real world example, we also followed and implemented the applicable best practices as defined in the two links below:

  • Provisioning Services 5.6 Best Practices

http://support.citrix.com/article/CTX127549

  • Windows 7 Optimization Guide

http://support.citrix.com/article/CTX127050

As a final note before I wrap up, I would like to address XenServer, as I know that I will get countless questions since this real world example used VMware.  There have been discussions in the past that seem to suggest that XenServer does not virtualize PVS very well.  However, it is important to note that XenServer has made some significant improvements over the last year, which enable it to virtualize PVS just fine.  If you are using XenServer then make sure you do the following:

  • Use the latest version of XenServer: 5.6 SP2 (giving Dom0 4 vCPUs)
  • Use IRQBalance.  You can find more details on it here:

http://support.citrix.com/article/CTX127970

  • Use SR-IOV if you can (but it is not required).  You can find more details on it here:

http://blogs.citrix.com/2010/09/14/citrix-provisioning-server-gets-virtual-with-sr-iov/

http://support.citrix.com/article/CTX126624

http://blogs.citrix.com/2010/09/12/performance-with-a-little-help-from-our-friends/

I hope you find that this real world example is useful and helps to eliminate some of the misconceptions about the ability to virtualize PVS.

Cheers,

Dan Allen

34 Comments

  1. Jay

    Thanks Dan. Great article. It appears NIC teaming/bonding is not required huh?

    • Dan Allen

      Good question. Bonding NICs within the Hypervisor is still something that should be done to provide higher availability and throughput. VMware supports LACP, so a single PVS VM can send traffic simultaneously over 2 NICs. At this point in time XenServer supports bonding to provide greater overall throughput and availability for the XenServer host, but a single VM can only have its traffic transmitted over a single NIC at any moment in time.

      • Nicholas Rintalan

        Also keep in mind, Jay, that Dan’s environment was 10 Gb. And assuming the networking infrastructure across the board is truly 10 Gb (i.e. switch side as well), then NIC teaming/bonding isn’t really an issue as you said. But if this was a 1 Gb environment (and I find that most still are today, but that’s changing quickly…), NIC teaming/bonding all of a sudden becomes critically important…because we’ll start hitting that 1 Gb bottleneck with anywhere from 500-1000 target devices. So that’s when it would have been critical for Dan (in this vSphere environment) to enable static LACP and make sure he has 2+ Gb of effective throughput for the stream traffic. The lack of LACP on the XS side is what makes virtualizing PVS “tough” in a 1 Gb environment if you’re trying to scale to 1000+ targets on each box.

        Hope that helps clear this up.

        -Nick Rintalan, Citrix Consulting

  2. Scott Cochran

    Great information Dan. Virtualizing Provisioning Server and using CIFS for the vDisk Store is something we have long avoided but the more data we see the more our minds are put at ease. I notice this example is not using CIFS for the vDisk store, it would be interesting to see the performance data of a real world example showing CIFS vDisk store(s) used in large scale…

    Another design element I noticed in this example is a single NIC/network being used for PvS Streaming and Production VM traffic. In the past I have seen recommendations to multi-home the PvS Targets and use separate networks to isolate PvS vDisk Streaming traffic from Production traffic in order to provide better scalability and maximum performance. Have you seen any data that proves or disproves this theory?

    Thanks,

    Scott

    • Dan Allen

      Great question about multi-homing PVS and targets. I have seen those recommendations as well. While there is nothing technically wrong with multi-homing and isolating the PVS traffic, in most situations it is overkill and is not required. With XenServer and VMware, PVS targets support the optimized network drivers that are installed with the hypervisor guest tools. These are fast and efficient drivers that have no issues handling production Windows and PVS storage traffic over the same network path. In my experience, the added complexity of trying to create a multi-homed target and manage a separate network for streaming traffic is just not worth it.
      Cheers,
      Dan

  3. Jorge Ponce de Leon

    Very good post Dan, thanks a lot! … Just a question: how many different vDisks are you managing in this case? Just to know how much memory per vDisk you need to consider for caching in PVS.

    • Dan Allen

      I believe we had 5 or 6 different vDisks active at any one time. For effective caching, you should typically plan on 2 – 3 GB per vDisk.

  4. Joern Meyer

    Hi Dan, great post and thank you for your answers to all the questions so far. Could you share some information about the network interfaces used for the VMs (PVS servers and targets)? We found out that we reach the best performance using VMXNet, but I think we all know the problems PVS had with VMXNet 3 in the past. And what about CPU overcommitment on the PVS server hosts? Do you have that?

    Thanks, Joern

    • Dan Allen

      We used VMXNet3 for both server VMs and Win 7 target VMs. We released a patch back in January to fix the target device issues with VMXNet3. Check out http://support.citrix.com/article/CTX128160.

      CPUs on the hosts with PVS server VMs are technically overcommitted as there can be more VMs and active vCPUs than physical CPUs, but this customer has a well architected hypervisor solution such that total CPU host utilization is monitored so that overall host CPU utilization is within normal range. And of course there are affinity rules to prevent PVS VMs from running on the same host.

  5. Adam Oliver

    Great article Dan! This will help a lot with my own knowledge!

  6. Chris

    Dan,

    Do you have an opinion on running the virtual PVS servers on the same XenServer hosts as the virtual Win 7 machines?

    • Dan Allen

      I would not run PVS on the same hosts that support the Windows 7 VMs.
      -Dan

      • Todd Stocking

        Any particular reason why you wouldn’t want to run VMs on the same host that PVS is virtualized on? We want to maximize our hosts, and with 96GB of RAM and 12 cores (24 with HT) we would prefer to be able to use some of the available resources for provisioned XenApp servers. Thoughts?

  7. Norman Vadnais

    I don’t see this article mentioning the amount of RAM on the Win7 desktops or the size of the persistent disk allocated to each. Since proper sizing of the client helps attain maximum throughput of PVS, I would think those details are important.

    Can we get those?

    • Dan Allen

      The Windows 7 VMs each have 3GB RAM and each VM has a 6GB disk for the PVS write cache, event logs, etc.

  8. Ionel

    Hi Dan,
    You posted this link:
    http://support.citrix.com/article/CTX127549
    How did you find it? I cannot find it here:
    http://support.citrix.com/product/provsvr/psv5.6/
    or searching on support or searching on google

    • Dan Allen

      Strange. When I click the link as you reposted it in your comment and in the body of my blog, it works fine for me. Also, if I google “CTX127549” it comes up as the first hit for me. Can you try it again?

  9. R. S. Sneeden

    Dan,

    Great article. I did this very thing last year, albeit for a much smaller environment (~300 XenDesktops). I’m curious as to the VMware host configuration? CPU type and RAM. I’m currently designing an environment roughly the same size as your customer.

    -rs

    • Dan Allen

      4U rack mount servers. Quad Socket Eight Core Intels with Hyper-Threading (32 physical cores, 64 logical with hyper-threading). 512 GB RAM per physical host.

  10. khanh

    Dan, what kind of disk did you use for Windows 7 with the boot storms and logon storms? Also, how did you move all the log files to the cache drive, and do the logs delete themselves or will the drive fill up if we don’t delete them?

    • Dan Allen

      We set Eventlogs to overwrite events as necessary and set them to a fixed size on the write cache disk (Drive D:). You can do this with a GPO. The write cache disks are on an EMC SAN connected to the VMware hosts via FC.

  11. Daniel Marsh

    Hi Dan,
    Provisioning Server best practice (CTX117374) says to disable TCP task offload – was this done in this environment? I’m curious about the CPU usage; it’s always higher in our environments with far fewer clients. I always figured it was because TCP offload was disabled.
    Regards, Dan.

    • Dan Allen

      Yes, we disabled TCP task offload. As you can see from the above results, our CPU usage was OK.

  12. Lucab

    CIFS for storing VHDs on PVS is a terrible choice!!! Windows does not cache a network share the way it caches a local disk! If you want to have a single repository for all PVS servers you need a cluster FS like Sanbolic MelioFS!!!

    • Dan Allen

      Lucab,
      Did you even read the article that I linked to? If you actually read the article, then you will understand that making the registry changes I detail will actually allow Windows to cache the network share data. With that being said, there is nothing wrong with using a clustered file system like Melio.
      Cheers,
      Dan

  13. Jurjen

    Dan, how did you come up with the threads and ports numbers exactly? The blogpost from Thomas suggests to use the number of CPU cores. Just wondering if you had done some testing with different numbers to come to this conclusion.

    • Dan Allen

      Actually, Thomas suggested increasing it to make sure that when you multiply the threads per port by the total number of ports, you end up with one thread per active target. He then said that Citrix lab testing suggested that Streaming Service performance is best when the number of cores equals or is greater than the threads per port. However, if you are going to go large like we did at the customer highlighted in my article, then you need to go past that threads-per-core ratio. No worries, it will scale just fine as you can see from my customer’s results. For large environments, you definitely want a value much higher than the default of 8!

  14. Jason

    Dan –

    You mentioned that the write cache are on an EMC SAN. Would local disk on the host be acceptable? Or would that greatly impact performance?

    • Johnny Ma

      It depends on how many you are running and how many local disks you have in the host. Typically it is fine, but you may run into an IOPS issue if you have too many on there; that can be solved by adding a few SSDs if you really are inclined to use local storage.

  15. Paul Kothe

    This was an outstanding post and very helpful for me. I have a question about the storage of the vDisk images. It is mentioned in the other blog and in other whitepapers that we should use block level storage for the PVS disk that houses the VHD files. I am using NFS for my storage repositories and was wondering if that is still not considered block level storage, since it is a file on the NFS SR and not a true LUN, and would it make a difference in a small environment? I am trying to shoehorn this into less than 100 user environments and making the numbers work has been hard. I like VDI-in-a-Box but I LOVE PVS 6, and management of the images is so easy compared to VDI-in-a-Box. I am about to deploy an 8 user XenDesktop on a single host and am planning on virtualizing all of it. Exchange is in the cloud so I feel comfortable with it. The 8 users are only using IE so the load will be very light. So should I set up an iSCSI LUN for the PVS vDisk store or just use a thin provisioned NFS disk?

  16. Samurai Jack

    I also have a question about the storage of the vDisk. If you do not want to use local storage or NFS, can the vDisks be placed on VMFS or an iSCSI or FC LUN with one or more PVS servers accessing it for HA capabilities? How would this work, or is NFS the way to go?

  17. Vic Dimevski

    Awesome article :)

  18. Jeff Lindholm

    Hello, I hope this thread has not gotten too old; it’s a great post – if this were a forum instead of a blog it would be “pinned” ;-)

    I have a question on the configuration above, on the hosts that have 1482 machines on them, and that you say you think would go to 2000 or 3000, what are you using for your IP subnets?

    I am assuming you are using something like a /21 for 2048 hosts; for 3000 you would need to go to something like a /20 for 4096 hosts.

    I ask because I am in an environment where we successfully deployed several hundred physical CAD workstations using PVS. I am using a pair of physical blade servers on 10Gig, and 10Gig or multiple-1Gig links to the edge switches in the closets and of course Gig to the desktops.

    Now we would like to expand this environment, but of course if I go beyond the 255 hosts in a /24 subnet then I have some decisions to make. I don’t know if our network group will like it.

    We currently still have NetBIOS and WINS active, which I think we can eliminate, but I would be worried about broadcasts in general on such large subnets. Was this something the team in question considered?

    To my knowledge, you still can’t easily get a Provisioning Server working with multiple NICs (each in a different VLAN) due to limited multiple-NIC support for the PXE portion of the solution, and I want to avoid complex/problematic setups. But I would be interested to hear if this has been addressed.

    Aside from that, of course if I can efficiently just run multiple virtual PVS servers across a few physical hosts so that I can have say 1/pair per subnet, I have a little more flexibility. I am getting some new HP Gen8 blades that will support SR-IOV and 256GB of RAM or more, so I could give 30-50GB RAM to each virtual PVS server.

    To avoid the overhead associated with copying lots of images when I need to make an update, I was going to look into the Melio product so that I could have, say, 10 PVS servers that all “see” the same storage.

    -Jeff

  19. Ray Zayas

    Dan,

    How did you determine the increase in the Buffers per thread from the default of 24?

    Thanks for the help!
