Citrix PVS Database Move

CTX130499

How to Migrate a Provisioning Services Database to a New SQL Server

Objective

This article will cover the steps necessary to migrate an existing PVS database to a new database on an existing SQL server or to a new database on a new SQL server.

Instructions

  1. Back up the existing PVS database.
  2. Restore the PVS Database on the new SQL server following Microsoft best practices for database restoration.
  3. Shut down all target devices.
  4. Run the configuration wizard on the first PVS server and choose “Join existing farm”.
  5. Specify the new server and database and finish running the wizard.
  6. Complete the previous step for all PVS servers in the farm.
  7. Begin booting the target devices.
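
For steps 1 and 2, a minimal sketch of the backup and restore using sqlcmd is shown below. The server and instance names, database name, and file paths are placeholders for illustration only; adjust the RESTORE options (for example, the WITH MOVE file locations) to match your environment.

REM Back up the existing PVS farm database on the old SQL server (names and paths are examples)
sqlcmd -S OLDSQL\PVS -E -Q "BACKUP DATABASE [ProvisioningServices] TO DISK = N'D:\Backup\ProvisioningServices.bak' WITH INIT"

REM Restore it on the new SQL server, relocating the data and log files as needed
sqlcmd -S NEWSQL\PVS -E -Q "RESTORE DATABASE [ProvisioningServices] FROM DISK = N'D:\Backup\ProvisioningServices.bak' WITH MOVE 'ProvisioningServices' TO N'E:\SQLData\ProvisioningServices.mdf', MOVE 'ProvisioningServices_log' TO N'F:\SQLLogs\ProvisioningServices_log.ldf'"

The logical file names ('ProvisioningServices' and 'ProvisioningServices_log') are also assumptions; check them with RESTORE FILELISTONLY before running the restore.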

My environment is using Provisioning Server 5.6.1.1045 (soon to be upgraded). Can’t wait to see the RAM overflow feature.

The services run as a domain account. Startup type by service name:

Citrix PVS Ramdisk Server = Manual

Citrix PVS Soap Server = Automatic

Citrix PVS Stream Service = Automatic
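
If you need to set these startup types from a script, sc.exe can be used. The service key names differ from the display names above and vary by PVS version, so this sketch first looks up the key name; <ServiceKeyName> is a placeholder for whatever that command returns.

REM Look up the service key name from the display name shown above
sc getkeyname "Citrix PVS Soap Server"
REM Set the startup type using the key name returned (placeholder shown; run from an elevated prompt)
sc config <ServiceKeyName> start= auto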

WINDOWS 2008 R2 REMOTE DESKTOP AND XENAPP 6 TUNING TIPS UPDATE (Repost)

Following the great article Terminal Server & XenApp Tuning Tips published on this website by Pierre Marmignon, this article describes all the tips that I have found, tested, and validated for tuning Windows 2008 R2 and XenApp 6.

Source: CitrixTools.Net

Please note that:
–       This information is provided “as is” and using these tips is at your own risk.
–       All of these tuning tips have been tested and validated only on VMs running on VMware vSphere 4; let us know your feedback for any other platform (either hypervisor or physical server).
Windows 2008 R2 OS Tuning Tips for Remote Desktop Service and XenApp 6
 
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
  KeepAliveTime (REG_DWORD) = 180000
    Purpose: Determines how often TCP sends keep-alive transmissions.
  KeepAliveInterval (REG_DWORD) = 100
    Purpose: Determines how often TCP repeats keep-alive transmissions when no response is received.
  TcpMaxDataRetransmissions (REG_DWORD) = 10
    Purpose: Determines how many times TCP retransmits an unacknowledged data segment on an existing connection.
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
  MaxWorkItems (REG_DWORD) = 512
    Purpose: Server service optimization.
  MaxMpxCt (REG_DWORD) = 2048
    Purpose: Server service optimization.
  MaxFreeConnections (REG_DWORD) = 100
    Purpose: Server service optimization.
  MinFreeConnections (REG_DWORD) = 32
    Purpose: Server service optimization.
HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
  UtilizeNTCaching (REG_DWORD) = 0
    Purpose: Disables caching.
HKLM\SYSTEM\CurrentControlSet\Services\MRXSmb\Parameters
  OplocksDisabled (REG_DWORD) = 1
    Purpose: Disables opportunistic locking.
HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
  UseOpportunisticLocking (REG_DWORD) = 0
    Purpose: Disables opportunistic locking.
HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
  EnableOplocks (REG_DWORD) = 0
    Purpose: Disables opportunistic locking.
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
  EnableRSS (REG_DWORD) = 0
    Purpose: Disables Receive Side Scaling.
  EnableTCPA (REG_DWORD) = 0
    Purpose: Disables TCP acceleration.
  EnableTCPChimney (REG_DWORD) = 0
    Purpose: Disables TCP Chimney Offload.
HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
  DisableBandwidthThrottling (REG_DWORD) = 1
    Purpose: The default is 0. By default, the SMB redirector throttles throughput across high-latency network connections in some cases to avoid network-related timeouts. Setting this registry value to 1 disables this throttling, enabling higher file transfer throughput over high-latency network connections.
  MaxThreads (REG_DWORD) = 17
    Purpose: Maximum concurrent threads.
  DisableLargeMtu (REG_DWORD) = 0
    Purpose: The default is 1. By default, the SMB redirector does not transfer payloads larger than approximately 64 KB per request. Setting this registry value to 0 enables larger request sizes, which can improve file transfer speed.
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
  EnableWsd (REG_DWORD) = 0
    Purpose: The default is 1 for client operating systems. By default, Windows Scaling Diagnostics (WSD) automatically disables TCP receive window auto-tuning when heuristics suspect a network switch component might not support the required TCP option (scaling). Setting this registry value to 0 disables this heuristic and allows auto-tuning to stay enabled. When no faulty networking devices are involved, applying the setting can enable more reliable high-throughput networking via TCP receive window auto-tuning.
HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
  FileInfoCacheEntriesMax (REG_DWORD) = 32768
    Purpose: The default is 64 with a valid range of 1 to 65536. This value is used to determine the amount of file metadata that can be cached by the client. Increasing the value can reduce network traffic and increase performance when a large number of files are accessed.
  DirectoryCacheEntriesMax (REG_DWORD) = 4096
    Purpose: The default is 16 with a valid range of 1 to 4096. This value is used to determine the amount of directory information that can be cached by the client. Increasing the value can reduce network traffic and increase performance when large directories are accessed.
  FileNotFoundCacheEntriesMax (REG_DWORD) = 32768
    Purpose: The default is 128 with a valid range of 1 to 65536. This value is used to determine the amount of file name information that can be cached by the client. Increasing the value can reduce network traffic and increase performance when a large number of file names are accessed.
  MaxCmds (REG_DWORD) = 32768
    Purpose: The default is 15. This parameter limits the number of outstanding requests on a session. Increasing the value can use more memory, but can improve performance by enabling deeper request pipelining. Increasing the value in conjunction with MaxMpxCt can also eliminate errors encountered due to large numbers of outstanding long-term file requests, such as FindFirstChangeNotification calls. This parameter does not affect connections with SMB 2 servers.
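
As a convenience, a few of the values above can be collected into a .reg file and imported with regedit. The sketch below covers only the three TCP keep-alive/retransmission entries (the data is in hexadecimal: 0x2BF20 = 180000, 0x64 = 100, 0xA = 10); as with everything on this page, test it first and use it at your own risk.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"KeepAliveTime"=dword:0002bf20
"KeepAliveInterval"=dword:00000064
"TcpMaxDataRetransmissions"=dword:0000000a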
Windows 2008 CPU Tuning (for VM only)
 
Because I’m working with VMware vSphere 4, I assume that the hypervisor manages my processors and I don’t want any CPU management inside the VM, so I force all my VMs to the “min” power scheme (High Performance) with the following command: “powercfg -setactive scheme_min“.
I also force the Processor Performance Boost Policy, the Minimum and Maximum Processor Performance States, and the Processor Performance Core Parking Maximum and Minimum Cores to the maximum (see http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv-R2.mspx).
The following commands set Processor Performance Boost Policy to 100 percent on the current power plan:
Powercfg -setacvalueindex scheme_current sub_processor 45bcc044-d885-43e2-8605-ee0ec6e96b59 100
Powercfg -setactive scheme_current
The following commands set the Processor Performance State parameter to 100 percent on the current power plan:
Powercfg -setacvalueindex scheme_current sub_processor 893dee8e-2bef-41e0-89c6-b55d0929964c 100
Powercfg -setactive scheme_current
Core parking is a new feature in Windows Server 2008 R2: the processor power management (PPM) engine and the scheduler work together to dynamically adjust the number of cores available to execute threads. To turn off core parking, set the Minimum Cores parameter to 100 percent by using the following commands:
Powercfg -setacvalueindex scheme_current sub_processor bc5038f7-23e0-4960-96da-33abaf5935ec 100
Powercfg -setactive scheme_current
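To confirm which plan is active and that the processor values took effect, the plan can be queried afterwards; a quick check using the built-in aliases (output formatting varies by OS build):
Powercfg -getactivescheme
Powercfg -query scheme_current sub_processor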
Additional Windows Explorer Tuning
 
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer
  UseDesktopIniCache (REG_DWORD) = 1
  NoRemoteRecursiveEvents (REG_DWORD) = 1
  NoRemoteChangeNotify (REG_DWORD) = 1
  StartRunNoHOMEPATH (REG_DWORD) = 1
  NoRecentDocsNetHood (REG_DWORD) = 1
  NoDetailsThumbnailOnNetwork (REG_DWORD) = 1
HKLM\SYSTEM\CurrentControlSet\Services\MRXSmb\Parameters
  InfoCacheLevel (REG_DWORD) = 16
HKCR\*\shellex\PropertySheetHandlers\CryptoSignMenu
  SuppressionPolicy (REG_DWORD) = 1048576
HKCR\*\shellex\PropertySheetHandlers\{3EA48300-8CF6-101B-84FB-666CCB9BCD32}
  SuppressionPolicy (REG_DWORD) = 1048576
HKCR\*\shellex\PropertySheetHandlers\{883373C3-BF89-11D1-BE35-080036B11A03}
  SuppressionPolicy (REG_DWORD) = 1048576
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\SCAPI
  Flags (REG_DWORD) = 1051650
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager
  SafeDllSearchMode (REG_DWORD) = 1
  SafeProcessSearchMode (REG_DWORD) = 1
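
These Explorer policy values can also be pushed from a script with reg.exe instead of editing the registry by hand; a short sketch for two of the entries above (test before any production use):

REM Suppress remote recursive events and remote change notifications in Explorer
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoRemoteRecursiveEvents /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoRemoteChangeNotify /t REG_DWORD /d 1 /f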
Windows 2008 R2 – RDP Tuning  
HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\UserOverride\Control Panel\Desktop
  AutoEndTasks (REG_SZ) = 1
    Purpose: Determines whether user processes end automatically when the user logs off.
  WaitToKillAppTimeout (REG_SZ) = 20000
    Purpose: Determines how long the system waits for user processes to end after the user attempts to log off.
  MenuShowDelay (REG_SZ) = 10
    Purpose: Changes the Start menu display interval.
  CursorBlinkRate (REG_SZ) = -1
    Purpose: Specifies how much time elapses between each blink of the selection cursor.
  DisableCursorBlink (REG_DWORD) = 1
    Purpose: Enables/disables cursor blink.
  DragFullWindows (REG_SZ) = 0
    Purpose: Specifies what appears on the screen while a user drags a window; only the outline of the window moves.
  SmoothScroll (REG_DWORD) = 0
    Purpose: Controls smooth scrolling (0 disables it).
  Wallpaper (REG_SZ) = (none)
    Purpose: Sets the wallpaper to “None”.
  InteractiveDelay (REG_DWORD) = 40
    Purpose: Optimizes Explorer and Start menu response times.
HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\UserOverride\Control Panel\Desktop\WindowMetrics
  MinAnimate (REG_SZ) = 0
    Purpose: Disabled; windows do not animate while being resized.
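
Because the UserOverride key path contains spaces, quote it when setting these values from the command line; for example, for the first two REG_SZ entries above (the same pattern applies to the ICA-Tcp key in the next section):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\UserOverride\Control Panel\Desktop" /v AutoEndTasks /t REG_SZ /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\UserOverride\Control Panel\Desktop" /v WaitToKillAppTimeout /t REG_SZ /d 20000 /f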
XenApp 6.0 – ICA tuning
  
HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\ICA-Tcp\UserOverride\Control Panel\Desktop
  AutoEndTasks (REG_SZ) = 1
    Purpose: Determines whether user processes end automatically when the user logs off.
  WaitToKillAppTimeout (REG_SZ) = 20000
    Purpose: Determines how long the system waits for user processes to end after the user attempts to log off.
  MenuShowDelay (REG_SZ) = 10
    Purpose: Changes the Start menu display interval.
  CursorBlinkRate (REG_SZ) = -1
    Purpose: Specifies how much time elapses between each blink of the selection cursor.
  DisableCursorBlink (REG_DWORD) = 1
    Purpose: Enables/disables cursor blink.
  DragFullWindows (REG_SZ) = 0
    Purpose: Specifies what appears on the screen while a user drags a window; only the outline of the window moves.
  SmoothScroll (REG_DWORD) = 0
    Purpose: Controls smooth scrolling (0 disables it).
  Wallpaper (REG_SZ) = (none)
    Purpose: Sets the wallpaper to “None”.
  InteractiveDelay (REG_DWORD) = 40
    Purpose: Optimizes Explorer and Start menu response times.
HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\ICA-Tcp\UserOverride\Control Panel\Desktop\WindowMetrics
  MinAnimate (REG_SZ) = 0
    Purpose: Disabled; windows do not animate while being resized.
Please note that using these tips is at your own risk.
All these tips have been tested with XenApp 6 servers running on VMware vSphere 4 and should be tested in your own environment.
Author : Julien Sybille
CCEA & CCA XenDesktop
14 comments
By: Jonathan Pitre (JakeLD)
10 December, 2010

Very nice article Julien!

According to Thomas Koetzing’s paper on “Optimizing Logon and Logoff 1.4”, available at http://www.thomaskoetzing.de/index.php?option=com_docman&task=doc_download&gid=135

the registry key HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\UserOverride doesn’t work anymore with Windows 2003 and, I assume, 2008/2008 R2.

Here what he had to say on page 22: “With Windows 2003 those global keys don’t work anymore and has to be set on a per user basis.”

I tested it myself with existing profiles and it didn’t work…but I just read this morning that the profile must be NEW. So who’s right, you or Thomas ? 🙂

Also I suggest the following registry keys:

HKLM\SYSTEM\CurrentControlSet\Control\Processor
Key: “Capabilities” (dword)
Value: 0007e666
http://support.microsoft.com/kb/2000977

HKLM\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters
Key: “DisableTaskOffload” (dword)
Value: “1”

http://support.microsoft.com/kb/904946
http://support.citrix.com/article/CTX117491

Regards,

Jonathan Pitre

By: Helmut Hauser (Houzer)
16 December, 2010

Other tuning options are:

1st) Turn off TCP Offloading (at the NIC AND the OS)

OS:
%SYSTEMROOT%\SYSTEM32\netsh.exe int tcp set global chimney=disabled
%SYSTEMROOT%\SYSTEM32\netsh.exe int tcp set global rss=disabled

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
“EnableRSS”=dword:00000000
“EnableTCPChimney”=dword:00000000
“EnableTCPA”=dword:00000000
“DisableTaskOffload”=dword:00000001

2nd) If dealing with APP-V 4.6 64 Bit RDS Client on W2K8R2 this one could be for you [SCCM related]:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SoftGrid\4.5\Client\Configuration]
“RequireAuthorizationIfCached”=dword:00000000
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SoftGrid\4.5\Client\Network]
“AllowDisconnectedOperation”=dword:00000001
“Online”=dword:00000000
“DOTimeoutMinutes”=dword:ffffff
“LimitDisconnectedOperation”=dword:00000000
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SoftGrid\4.5\Client\Permissions]
“ToggleOfflineMode”=dword:00000000

3rd)

other optimizations:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
“EnablePMTUBHDetect”=dword:00000001
“KeepAliveTime”=dword:00007530
“KeepAliveInterval”=dword:00001388
“TcpMaxDataRetransmissions”=dword:00000005
“EnableBcastArpReply”=dword:00000001
“DisableTaskOffload”=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
“TreatHostAsStableStorage”=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
“NtfsDisableLastAccessUpdate”=dword:00000001
“DontVerifyRandomDrivers”=dword:00000001
“NtfsDisable8dot3NameCreation”=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters]
“MaxTokenSize”=dword:0000FFFF

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
“DelayedDesktopSwitchTimeout”=dword:00000005

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
“ServicesPipeTimeout”=dword:0001d4c0

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Browser\Parameters]
“IsDomainMaster”=”FALSE”
“MaintainServerList”=”No”

As always – do NOT use this in a production environment.
Test it in a lab.
Be very careful when touching the autotuning TCP/IP stack. This can backfire.
Talk with the networking gents.

WHY USE HP MOONSHOT FROM THE CUSTOMER’S VIEW POINT (Repost of Carl Webster article)

December 22, 2014

Written by

Carl Webster

Source: WHY USE HP MOONSHOT FROM THE CUSTOMER’S VIEW POINT

Between August and November 2014, I have worked on three HP Moonshot Proof of Concepts (two in the USA and one in the Middle East). While Moonshot is not the solution to every desktop problem in the world, it has some very strong advantages. I decided to reach out to a couple of the Citrix leads on these PoCs and have them tell you why they selected to use Moonshot. Before I give you their comments, a couple of items:

  • Don’t ask for the company names, you will not get them
  • Don’t ask for the individual’s names, you will not get them
  • Don’t ask for their contact information, you will not get it
  • Don’t ask me to put you in contact with them, not going to happen
  • Don’t ask me to put them in contact with you, not going to happen

Both of these individuals read my blog and I will send them the link to this article. If you have any questions for them or me, leave a comment and if they are allowed to, they will respond to the comments.

I only edited their responses to correct typos, line formatting and wrapping in their responses and had them correct a couple of sentences that were not clear to me.

The questions I asked them to respond to: “Why did you select HP Moonshot over traditional VDI?  Because doesn’t Moonshot cost way more per user than traditional VDI?”

Medical Related Field

Note: This customer will deliver around 2,000 desktops with Moonshot.

Costs comparison: Moonshot HDI vs VDI

My friend Carl asked me last week: “Why did you select HP Moonshot over traditional VDI?  Because doesn’t Moonshot cost way more per user than traditional VDI?”

During my lifetime of layovers in the world’s most “cozy” terminal (much love EWR), I teetered away from basically disagreeing with the question, but I’m feeling more accommodating since then. Comparing the two methods is a tough one.

On one hand we have user-dedicated PCs, and on the other we have a populated blade chassis, shared via your virtualization of choice. Totally an apples and oranges situation. So the difference may be jarring, in that the Moonshot m700 cartridges do not require any hypervisor. Every m700 cartridge has 4 self-contained systems and supports booting directly from Citrix PVS 7.1.

For those that have done HDI in the past with other solutions, this one is much smaller, at around 5U….get ready for this, 180 PCs in 5U. Maybe I’m easy to impress, but that is kind of amazing. 180 PCs, each with a fully dedicated 8 GB of RAM, a four-core APU, and an onboard SSD. If this PC was sitting on your desktop all by its lonesome, that would be a pretty great business-class system.

You could get the same specs from traditional VDI, but you would need a system that supported almost 1.5tb of memory and 720 cores, and then would require a virtualization layer to share it all out.

What end users want is an experience like they are using a dedicated computer, and what better solution than one which is exactly that?! So this is why I almost disagree with the question. The cost difference, as little as it may be, is now negligible because the experience is head and shoulders above any traditional VDI experience I have encountered.

It is all about experience and alignment, the end-user knows what they want from their experience. It is up to us “techies” to get them to a solution that is in alignment with the business case.

Retail Related Field

Note: This customer will deliver around 750 desktops with Moonshot.

My answer would be:

As a long time Citrix admin, I knew the advantages of centralized compute environments. For many years I was at a community bank and we used terminals connecting to Citrix published desktops on MetaFrame XP.  This in essence was the first type of VDI.  We were able to support nearly 400 users with an IT staff of 4 full time employees. There was only a user side Linux device, and all user compute was load balanced across the farm.  Roaming profiles and redirected folders stayed on shares in the data center.  This gave a measure of security for PII data, knowing it was not on remote hard drives that could be lost or stolen.  Also there is an economic benefit to this model as terminals usually cost less than PCs and have a far longer useful life than PCs.  Using terminals also gives a centralized management framework that allows for minimal daily maintenance for the user end points.

So the concepts of VDI have strong advantages for organizations concerned with user data security, user hardware life cycles, and IT management with a small staff.

I am now at a larger organization with multiple corporate sites and several hundred retail stores. I had been trying for a year or more to raise interest in traditional VDI at my current company. We have a very robust VMware environment and SAN.  We also use XenApp to provide multiple user apps across the country to our retail stores and other corporate sites.

Additionally, we have a large number of onsite consultants working on multiple projects. My suggestion was to use VDI to provide all the advantages above on a new project. The retail store managers needed a way to have more robust applications and other access that could not be accommodated on a POS sales register.  Also, each consultant was issued a company laptop. The motivation was to keep data assets safe as possible and under company control.

My suggestion was to use VDI and terminals for a new store user system and for consultants. Including the consultants could enforce the traditional controls but allow for BYOD to reduce hardware expense.

But there was a lot of resistance because of the general understanding that VDI could go very badly. There is another problem with IOPS when it comes to VDI. All IOPS coming out of virtual desktops are typically treated as “equal” by the hypervisor. This causes a lack of consistent user experience (as user workloads vary). Imagine a user running a zip file compression or running an on-demand virus scan on the same host as the CEO who needs his desktop to work on his board meeting presentation. I researched several hybrid and flash based storage systems aligned with VDI deployments. My conclusion was that the total VDI solution was viable now because of the new storage options.

But that was not the only barrier.  The organization is very committed to vendor standardization and not enabling a sprawl of siloes of independent solutions.  So the addition of new VDI-centric storage was not agreeable.  And without that enhancement, the usual VDI IOPs concern remained.

Another hurdle turned out to be the business side.  As they came to understand the shared nature of VDI resources, there was growing resistance.  No one wanted a system that was not completely “theirs”. Even after explaining the IT benefits and small probabilities of user bottlenecks, it was still not well thought of. So traditional VDI was not seen as a safe and reliable solution to match the company culture and expectations.

Then I discovered the HP Moonshot platform and the Converged System 100. Immediately I knew that it had great potential.  Hosted Desktop Infrastructure solves all the concerns I encountered.  It matched our existing hardware vendor. It provides substantial dedicated CPU, GPU, and RAM for every user. And because of the nature of Citrix Provisioning and its ability to cache in memory, the user IOPs to disk are greatly reduced.  Plus Citrix Provisioning frees the onboard 64GB SSD for other uses.  It could hold persistent data, or business apps.  We use it as the page file location.

The use of XenDesktop and Receiver also creates a user system that can be available anytime on multiple devices.

I will say there is one caveat.  We decided to segregate the CS100 server components on dedicated VMware hosts. We also used a new HP 3PAR system as the underlying storage for all of the design. This was mainly because it started as a POC.  But because of its success, and vendor match, the additional hosts and storage was something that was accepted.

Another motivation for making that “giant leap” to Moonshot was the vision behind it. Having that chassis in your Data Center does more than enable HDI. Other server cartridges are available and more will be available in the future. I think it’s the beginning of a new phase of hardware consolidation and server computing. Also, the power consumption is impressive.  It only requires 33 watts typical for a cartridge running 4 Windows systems with a Quad core AMD APU, 8GB RAM, and an SSD.

Another plus is that each Windows node has 2 x 1Gb NICs.  This may not be meaningful when you think of an end user station.  But having it there gives you more options. We use 1 NIC as a normal LAN link.  The second is used as a direct link to a dedicated iSCSI LUN on the 3PAR.  Having a permanent storage partition per system has enabled us to add business data that is unique to each store location.

I am a big fan of HP HDI and Moonshot in general.  I know our particular situation will not match a lot of businesses.  But people should sit down and think about the potential it offers in terms of consolidation, energy savings, flexibility of architectures, end user mobility and user computing resources.  I believe it is a game changer on several levels.


There you go.

If you have any questions or comments, leave them in the comments section.

Thanks

Webster

VIRTUAL PROVISIONING SERVER – A SUCCESSFUL REAL WORLD EXAMPLE (Citrix log Repost)

I am an avid supporter of virtualizing Provisioning Server.  Servers today are just too powerful and it is a waste of resources to run things on bare metal.  Let’s face it, the average enterprise 1U rack or blade server has at least 2 sockets, 8+ cores and tons of RAM.  Running a single instance of Windows on one of these servers is a complete waste of resources.  I have often heard people say that you can only get 300 – 500 targets on a virtual PVS.  I have also seen customers thinking that they have to place a virtual PVS on each hypervisor host along with the target devices so that the number of targets per PVS is limited and all traffic remains on the physical host and virtual switch.  I would like to finally debunk these myths and let you know that PVS virtualizes just fine, even in large environments, and you do not have to treat it any differently than other infrastructure servers that run as virtual machines.   I would like to take this opportunity to provide a real world customer example showing that Provisioning Server is an excellent candidate to virtualize for all environments, even large ones.

Real World Example

First, for the sake of privacy I will not be disclosing the name or any other identifying information about the customer, but I will provide some basic technical details as it relates to the virtual PVS deployment as well as some data showing how well virtual PVS is scaling.

Environment basics

  • Hypervisor is VMware 4.1 for both servers and Windows 7 desktops
  • PVS 5.6 SP1 is virtualized on same hosts along with other supporting server VMs
  • Windows 7 32-bit is virtualized on separate VMware hosts dedicated to desktop VMs
  • All hosts are connected with 10Gb Ethernet
  • There are 5000+ concurrent Windows 7 virtual machines being delivered by virtual PVS
  • All virtual machines (both Windows 7 and PVS) have one NIC.  PVS traffic and production Windows traffic traverses the same network path
  • Each virtual PVS was configured as a Windows 2008 R2 VM with 4 vCPUs and 40 GB RAM
  • The PVS Store is a local disk (VMDK) unique to each PVS server
  • Each Windows 7 VM has a unique hard disk (VMDK) that hosts the PVS write cache

So, how many target devices do you think we could successfully get on a single virtual PVS: 300, 500, 1000???  Well, check out the screen shot below, which was taken in the middle of the afternoon during peak workload time:

As you can see, on the first three PVS servers, we are running almost 1500 concurrent target devices.  How is performance holding up from a network perspective? The console screen shot was taken from PVS 01 so the task manager data represents 1482 connected target devices.  From the task manager graph, you can see that we are averaging 7% network utilization with occasional spikes of 10%.  Since this is a 10Gb interface, that means sustained networking for 1500 Windows 7 target devices is 700 – 1000 Mb/s. In theory, a single 1 Gig interface would support this load.

How about memory and CPU usage?  Check out the task manager screen shot below, taken from PVS 01 at the same time as the previous screen shot:

From a CPU perspective, you can see that we are averaging 13% CPU utilization with 1482 concurrently connected target devices.  Memory usage is only showing 6.74 GB committed; however, take note of the Cached memory (a.k.a. System Cache or File Cache).  The PVS server has used just under 34 GB RAM for file caching.  This extreme use of file cache is due to the fact that there are multiple different Windows 7 VHD files being hosted on the PVS server.  Windows will use all available free memory to cache the blocks of data being requested from these VHD files, thus reducing and almost eliminating the disk I/O on the virtual PVS servers.

At 1500 active targets, these virtual PVS servers are not even breaking a sweat.  So how many target devices could one of these virtual PVS servers support?  My customer has told me that they have seen it comfortably support 2000+ with plenty of head room still available.  It will obviously take more real world testing to validate where the true limit will be, but I would be very comfortable saying that each one of these virtual PVS servers could support 3000 active targets.

It is important to note that this customer is very proficient in all aspects of infrastructure and virtualization.  In fact, in my 13+ years of helping customers deploy Citrix solutions, the team working at this customer is by far the most proficient that I have ever worked with. They properly designed and optimized their network, storage and VMware environment to get the best performance possible.  While I will not be able to go into deep details about their configuration, I will provide some of the specific Citrix/PVS optimizations that have been implemented.

There are Advanced PVS Stream Service settings that can be configured on the PVS server.  These settings typically refer to the threads and ports available to service target devices.  For most optimal configuration it is recommended that there be at least one thread per active target device. For more information on this setting, refer to Thomas Berger’s blog post: http://blogs.citrix.com/2011/07/11/pvs-secrets-part-3-ports-threads/

For this customer we increased the port range so that 58 UDP ports were used along with 48 threads per port for a total of 2784 threads.  Below is a screen shot of the settings that were implemented:

It is also important to note that we gave 3GB RAM to each Windows 7 32-bit VM.  It is important to make sure that you do not starve your target devices for memory. In the same way that the PVS server will use its System Cache RAM so that it does not have to keep reading the VHD blocks from disk, the Windows target devices will use System Cache RAM so that they do not have to keep requesting the same blocks of data from the PVS server.  Too little RAM in the target means that the network load on the PVS server will increase.   For more detailed information on how System Cache memory on PVS and target devices can affect performance, I highly recommend you read my white paper entitled Advanced Memory and Storage Considerations for Provisioning Services: http://support.citrix.com/article/ctx125126

Conclusion

Based on this real world example, you should not be afraid to virtualize Provisioning Server.  If you are virtualizing Provisioning Server make sure you take the following into consideration:

It is also important that all of our other best practices for PVS and VDI are not overlooked.  In this real world example, we also followed and implemented the applicable best practices as defined in the two links below:

  • Provisioning Services 5.6 Best Practices

http://support.citrix.com/article/CTX127549

  • Windows 7 Optimization Guide

http://support.citrix.com/article/CTX127050

As a final note before I wrap up, I would like to address XenServer, as I know that I will get countless questions since this real world example used VMware.  There have been discussions in the past that seem to suggest that XenServer does not virtualize PVS very well.  However, it is important to note that XenServer has made some significant improvements over the last year, which enable it to virtualize PVS just fine.  If you are using XenServer then make sure you do the following:

  • Use the latest version of XenServer: 5.6 SP2 (giving Dom0 4 vCPUs)
  • Use IRQBalance.  You can find more details on it here:

http://support.citrix.com/article/CTX127970

  • Use SR-IOV, if you can (but not required).  You can find more details on it here:

http://blogs.citrix.com/2010/09/14/citrix-provisioning-server-gets-virtual-with-sr-iov/

http://support.citrix.com/article/CTX126624

http://blogs.citrix.com/2010/09/12/performance-with-a-little-help-from-our-friends/

I hope you find that this real world example is useful and helps to eliminate some of the misconceptions about the ability to virtualize PVS.

Cheers,

Dan Allen

34 Comments

  1. Jay

    Thanks Dan. Great article. It appears NIC teaming/bonding is not required huh?

    • Dan Allen

      Good question. Bonding NICs within the Hypervisor is still something that should be done to provide higher availability and throughput. VMware supports LACP, so a single PVS VM can send traffic simultaneously over 2 NICs. At this point in time XenServer supports bonding to provide greater overall throughput and availability for the XenServer host, but a single VM can only have its traffic transmitted over a single NIC at any moment in time.

      • Nicholas Rintalan

Also keep in mind, Jay, that Dan’s environment was 10 Gb. And assuming the networking infrastructure across the board is truly 10 Gb (i.e. switch side as well), then NIC teaming/bonding isn’t really an issue as you said. But if this was a 1 Gb environment (and I find that most still are today but that’s changing quickly…), NIC teaming/bonding all of a sudden becomes critically important…because we’ll start hitting that 1 Gb bottleneck with anywhere from 500-1000 target devices. So that’s when it would have been critical for Dan (in this vSphere environment) to enable static LACP and make sure he has 2+ Gb of effective throughput for the stream traffic. The lack of LACP on the XS side is what makes virtualizing PVS “tough” in a 1 Gb environment if you’re trying to scale to 1000+ targets on each box.

        Hope that helps clear this up.

        -Nick Rintalan, Citrix Consulting

  2. Scott Cochran

    Great information Dan. Virtualizing Provisioning Server and using CIFS for the vDisk Store is something we have long avoided but the more data we see the more our minds are put at ease. I notice this example is not using CIFS for the vDisk store, it would be interesting to see the performance data of a real world example showing CIFS vDisk store(s) used in large scale…

    Another design element I noticed in this example is a single NIC/network being used for PvS Streaming and Production VM traffic. In the past I have seen recommendations to multi-home the PvS Targets and use separate networks to isolate PvS vDisk Streaming traffic from Production traffic in order to provide better scalability and maximum performance. Have you seen any data that proves or disproves this theory?

    Thanks,

    Scott

    • Dan Allen

Great question about multi-homing PVS and targets. I have seen those recommendations as well. While there is nothing technically wrong with multi-homing and isolating the PVS traffic, in most situations it is overkill and is not required. With XenServer and VMware, PVS targets support the optimized network drivers that are installed with the hypervisor guest tools. These are fast and efficient drivers that have no issues handling production Windows and PVS storage traffic over the same network path. In my experience, the added complexity of trying to create a multi-homed target and manage a separate network for streaming traffic is just not worth it.
      Cheers,
      Dan

  3. Jorge Ponce de Leon

Very good post Dan, thanks a lot! … Just a question: how many different vDisks are you managing in this case? Just to know about how much memory per vDisk you need to consider to cache in PVS.

    • Dan Allen

      I believe we had 5 or 6 different vDisks active at any one time. For effective caching, you should typically plan on 2 – 3 GB per vDisk.

  4. Joern Meyer

Hi Dan, great post and thank you for your answers to all the questions so far. Could you share some information about the network interfaces used for the VMs (PVS servers and targets)? We found out that we reach the best performance using VMXNet, but I think we all know the problem PVS had with VMXNet 3 in the past. And what about CPU overcommitment on the PVS server hosts? Do you have that?

    Thanks, Joern

    • Dan Allen

We used VMXNet3 for both server VMs and Win 7 target VMs. We released a patch back in January to fix the target device issues with VMXNet3. Check out http://support.citrix.com/article/CTX128160.

      CPUs on the hosts with PVS server VMs are technically overcommitted as there can be more VMs and active vCPUs than physical CPUs, but this customer has a well architected hypervisor solution such that total CPU host utilization is monitored so that overall host CPU utilization is within normal range. And of course there are affinity rules to prevent PVS VMs from running on the same host.

  5. Adam Oliver

    Great article Dan! This will help a lot with my own knowledge!

  6. Chris

    Dan,

    Do you have an opinion on running the virtual PVS servers on the same XenServer hosts as the virtual Win 7 machines?

    • Dan Allen

      I would not run PVS on the same hosts that support the Windows 7 VMs.
      -Dan

      • Todd Stocking

Any particular reason why you wouldn’t want to run VMs on the same host that PVS is virtualized on? We want to maximize our hosts, and with 96GB of RAM and 12 cores (24 with HT) we would prefer to be able to use some of the available resources for provisioned XenApp servers. Thoughts?

  7. Norman Vadnais

I don’t see this article mentioning the amount of RAM on the Win7 desktops or the size of the persistent disk allocated to each. Since proper sizing of the client helps attain maximum throughput of PVS, I would think those details are important.

    Can we get those?

    • Dan Allen

The Windows 7 VMs each have 3GB RAM, and each VM has a 6GB disk for the PVS write cache, EventLogs, etc…

  8. Ionel

    Hi Dan,
    You posted this link:
    http://support.citrix.com/article/CTX127549
How did you find it? I cannot find it here:
    http://support.citrix.com/product/provsvr/psv5.6/
    or searching on support or searching on google

    • Dan Allen

Strange. When I click the link as you reposted it in your comment and in the body of my blog, it works fine for me. Also, if I google “CTX127549” it comes up as the first hit for me. Can you try it again?

  9. R. S. Sneeden

    Dan,

    Great article. I did this very thing last year, albeit for a much smaller environment (~300 XenDesktops). I’m curious as to the VMware host configuration? CPU type and RAM. I’m currently designing an environment roughly the same size as your customer.

    -rs

    • Dan Allen

      4U rack mount servers. Quad Socket Eight Core Intels with Hyper-Threading (32 physical cores, 64 logical with hyper-threading). 512 GB RAM per physical host.

  10. khanh

Dan, what kind of disk did you use for Windows 7 with boot storms and logon storms? Also, how did you move all the log files to the cache drive, and do the logs delete themselves or will the drive fill up if we don’t delete them?

    • Dan Allen

      We set Eventlogs to overwrite events as necessary and set them to a fixed size on the write cache disk (Drive D:). You can do this with a GPO. The write cache disks are on an EMC SAN connected to the VMware hosts via FC.

  11. Daniel Marsh

    Hi Dan,
Provisioning Server best practice (CTX117374) says to disable TCP task offload – was this done in this environment? I’m curious about the CPU usage; it’s always higher in our environments with far fewer clients. I always figured it was because TCP offload was disabled.
    Regards, Dan.

    • Dan Allen

      Yes, we disabled TCP task offload. As you can see from the above results, our CPU usage was OK.

  12. Lucab

CIFS for storing VHDs on PVS is a terrible choice!!! Windows does not cache a network share the way it caches a local disk! If you want to have a unique repository for all PVS servers you need a cluster FS like Sanbolic MelioFS!!!

    • Dan Allen

      Lucab,
Did you even read the article that I linked to? If you actually read the article, then you will understand that making the registry changes I detail will actually allow Windows to cache the network share data. With that being said, there is nothing wrong with using a clustered file system like Melio.
      Cheers,
      Dan

  13. Jurjen

    Dan, how did you come up with the threads and ports numbers exactly? The blogpost from Thomas suggests to use the number of CPU cores. Just wondering if you had done some testing with different numbers to come to this conclusion.

    • Dan Allen

Actually, Thomas suggested increasing it to make sure that when multiplying the threads by the total number of ports, you end up with one per active target. He then said that Citrix lab testing suggested that performance is best for the Streaming Service when the number of cores equals or is greater than the threads per port. However, if you are going to go large like the customer highlighted in my article, then you need to go past that threads-per-core ratio. No worries, it will scale just fine as you can see from my customer’s results. For large environments, you definitely want a value much higher than the default of 8!

  14. Jason

    Dan –

    You mentioned that the write cache are on an EMC SAN. Would local disk on the host be acceptable? Or would that greatly impact performance?

    • Johnny Ma

      It depends on how many you are running and how many local disks you have in the host. Typically it is fine but you may run into an IOPS issue if you have too many on there but that can be solved by adding in a few SSDs if you really are inclined to use local storage.

  15. Paul Kothe

This was an outstanding post and very helpful for me. I have a question about the storage of the vDisk images. It is mentioned in the other blog and in other whitepapers that we should use block level storage for the PVS disk that houses the VHD files. I am using NFS for my storage repositories and was wondering if that is still not considered block level storage, since it is a file on the NFS SR and not a true LUN, and would it make a difference in a small environment? I am trying to shoehorn this into less than 100 user environments and making the numbers work has been hard. I like VDI-in-a-Box but I LOVE PVS 6, and management of the images is so easy compared to VDI-in-a-Box. I am about to deploy an 8 user XenDesktop on a single host and am planning on virtualizing all of it. Exchange is in the cloud so I feel comfortable with it. The 8 users are only using IE so the load will be very little. So should I set up an iSCSI LUN for the PVS vDisk store or just use a thin provisioned NFS disk?

  16. Samurai Jack

I also have a question about the storage of vDisks: if you do not want to use local storage or NFS, can they be placed on VMFS or an iSCSI or FC LUN with one or more PVS servers accessing it for HA capabilities? How would this work, or is NFS the way to go?

  17. Vic Dimevski

    Awesome article :)

  18. Jeff Lindholm

Hello, I hope this thread has not gotten too old, it’s a great post – if this were a forum vs. a blog it would be “pinned” ;-)

    I have a question on the configuration above, on the hosts that have 1482 machines on them, and that you say you think would go to 2000 or 3000, what are you using for your IP subnets?

    I am assuming you are using something like a /21 for 2048 hosts, for 3000 you would need to go to something like a /20 for 4096 hosts.

    I ask because I am in an environment where we successfully deployed several hundred physical CAD workstations using PVS. I am using a pair of physical blade servers on 10Gig, and 10Gig or multiple-1Gig links to the edge switches in the closets and of course Gig to the desktops.

Now we would like to expand this environment, but of course if I go beyond the 255 hosts in a /24 subnet then I have some decisions to make. I don’t know if our network group will like it.

We currently still have NetBIOS and WINS active, which I think we can eliminate, but I would be worried about broadcasts in general on such large subnets. Was this something the team in question considered?

To my knowledge, you still can’t easily get a Provisioning Server working with multiple NICs (each in a different VLAN) due to having limited multiple NIC support for the PXE portion of the solution, and I want to avoid complex/problematic setups. But I would be interested if this has been addressed.

    Aside from that, of course if I can efficiently just run multiple virtual PVS servers across a few physical hosts so that I can have say 1/pair per subnet, I have a little more flexibility. I am getting some new HP Gen8 blades that will support SR-IOV and 256GB of RAM or more, so I could give 30-50GB RAM to each virtual PVS server.

To avoid the overhead associated with copying lots of images when I need to make an update, I was going to look into the Melio product so that I could have, say, 10 PVS servers that all “see” the same storage.

    -Jeff

  19. Ray Zayas

    Dan,

    How did you determine the increase in the Buffers per thread from the default of 24?

    Thanks for the help!

Less really can be more when it comes to desktop virtualization. 180 PC-on-a-Chip desktops—minus the hypervisor—with Citrix XenDesktop, HP Moonshot, and AMD. Part 1


Source: The Citrix Blog


 


Last week at HP Discover in Barcelona, Spain, HP unveiled a revolutionary new member of the Moonshot platform called the Converged System 100 for Hosted Desktops, designed exclusively with AMD for Citrix XenDesktop. This new architecture is unlike anything else in the industry, and Citrix was there side-by-side on the show floor unveiling this new jointly designed platform to customers and partners. The interest from attendees was unbelievable, and the simplicity of this new platform with XenDesktop and Provisioning Services made attendees really understand that desktop virtualization can be made simpler with an architecture like this. Let’s take a look at the hardware and dive deeper into this new and exciting game-changing architecture.

Chassis

The Moonshot 1500 platform is a 4.3U chassis that has an impressive array of compute, graphics, storage, and networking. This new Proliant m700 server cartridge for HDI, or Hosted Desktop Infrastructure, was designed for those key knowledge workers who need direct, unfiltered access to hardware that has traditionally been managed by a hypervisor in the VDI world.  By providing this level of hardware access, users can be assured that they will not have to share any hardware resources with anyone else, which could potentially impact others in a traditional VDI architecture.  With this new architecture, users now have access to their own dedicated processors, graphics, storage, and networking, which improves the user experience and ultimately productivity.

Inside the Moonshot chassis are 45 dedicated plug-n-play m700 server cartridges. Each m700 cartridge has 4 PC-on-a-Chip nodes, or systems, that are powered by the chassis. With 4 PC-on-a-Chip nodes x 45 cartridges, that gives us a total of 180 dedicated PCoC systems. Each cartridge consumes an impressively low amount of power: typically 33 watts in active use, 20 watts at idle, and a maximum of 63 watts. That’s about 8 watts per node on average, which is equivalent to a small radio, but with the power and HDX experience of a boom box! For an entire chassis, the total amount of power that these 45 cartridges, or 180 nodes, would consume on average is about 1,500 watts, which is about the equivalent of a home microwave. Of course mileage may vary, but you get the point on how power savings can be applied here.

The image below showcases the Moonshot chassis fully loaded with 45 cartridges.

 

 

 

Cartridges

Each HP Proliant m700 is powered by a PC-on-a-Chip architecture designed by HP and AMD. Each node on a cartridge has an AMD Opteron X2150 APU with four x86 cores at 1.5 GHz and AMD Radeon 8000 Series graphics. The graphics and processor are a single piece of silicon die called an Accelerated Processing Unit, or APU, and offer 128 Radeon cores at up to 500 MHz.  This type of graphics card is perfectly suited to the knowledge worker who has light graphics requirements, such as DirectX 11 enabled applications like Microsoft Office 2013. This allows for a smaller footprint for a SoC and provides HP and AMD the flexibility to have 4 nodes per cartridge. Each node has a dedicated 8GB of Enhanced ECC DDR3 PC3-12800 SDRAM at 1600 MHz for a total of 32GB per cartridge. For storage, each cartridge has an integrated storage controller with a dedicated 32GB SanDisk iSSD per node, located on the mezzanine storage kit, for a total of 128GB of space. Each iSSD is rated to perform up to 400 IOPS, which is more than sufficient for most traditional VDI or SBC users. Each node also has its own pair of 1Gb Broadcom NICs, allowing for a combined 2Gb of dedicated network bandwidth per node. This makes for greater design choices, such as allowing each node to have access to different VLANs for boot and production traffic if desired. For node deployment, the BIOS allows each node a series of simple boot methods: boot via local iSSD, boot via PXE, and boot one time via PXE or HDD. Each of the m700 nodes also has the capability to leverage Wake-On-LAN, or WOL, using a magic packet. This enables even nodes that are powered off in the chassis to be powered on straight from the Provisioning Services console!

 

Networking

Inside the chassis is a simple and easy to leverage series of integrated switches. There are two switches, segmented as switch A and switch B. Each Wolff switch can provide up to 4 x 40Gb of stackable uplinks per switch. These Wolff switches are fully manageable switches with Layer 2 and Layer 3 routing functionality as well as QoS, SNMP, and sFlow functions. With each node having two dedicated 1Gb NICs and each cartridge delivering 8Gb of potential traffic, these switches are ready to handle any type of HDI workload scenario.

 

XenDesktop and HDX

So far you have read about the hardware and its exciting capabilities, but is there a specific version of XenDesktop for the Moonshot platform? Yes there is. The HP Converged System 100 will only be supported by Citrix for those customers using XenDesktop 7.1 and Provisioning Services 7.1. While it’s possible that previous versions of XenDesktop may work, the main feature that only XenDesktop 7.1 provides is the capability for the Standard VDA to leverage the native GPU for Direct X enabled applications, for example, without the need for the HDX 3D Pro VDA, which was previously always required for leveraging GPUs. (The HDX 3D Pro VDA is required for higher end CAD applications, which also require a higher end GPU than what is inside the m700 cartridge; think NVIDIA K2 and XenServer GPU pass-through with HP BL380 Gen 8 blades for those higher end users, which is a separate architecture from Moonshot.) For those of us that have been keeping up to speed with XenDesktop, Derek Thorslund posted a great blog about what the XenDesktop 7.1 VDA can provide for native graphics. Throughout the development of the Moonshot platform, Citrix, HP, and AMD worked very closely on the HDX side. During this time Citrix developers were able to enhance the current 7.1 Standard VDA WDDM driver to provide optimizations that can now leverage the AMD graphics cards which are standard on the Moonshot HDI platform. This new WDDM driver enhancement allows for a superior HDX experience that can directly leverage the GPU on each node! The example below shows the Device Manager with the Citrix WDDM driver as well as the AMD Radeon GPU. It is important to note again that this new AMD optimization is specifically designed and supported for the XenDesktop 7.1 Standard VDA only and not the HDX 3D Pro VDA, which is not supported by Citrix on the CS100 Moonshot platform at the time of writing this article.  This new enhancement, in the form of a hotfix (MSP), is available now on Citrix.com.

 http://support.citrix.com/article/CTX139622

http://support.citrix.com/article/CTX139621

Below is a YouTube demonstration showcasing all these pieces in real-time!

XenDesktop on Moonshot


180 bare-metal nodes to Windows 7 in minutes

In most situations there are going to be a few ways to deliver Windows to bare-metal nodes before the XenDesktop and Provisioning Services client installers can be deployed. The current HP supported method of delivering Windows 7 x64 to a node is using Windows Deployment Services, or WDS. WDS is a free role of the Windows 2008 R2 SP1 and Windows 2012/R2 operating systems that can be enabled. Once we have our master image created, the fun part begins. In the next series I’ll show the simple process of leveraging WDS to deploy Windows 7 to our master node in just a matter of a few minutes. Then I’ll demonstrate the PowerShell capabilities from Moonshot to PVS and how we’re able to build all 180 nodes just with PVS and Studio. More to come so check back soon….

Thank You

@TonySanchez_CTX

Citrix 7.6 Administer profiles within and across OUs

Administer profiles within and across OUs

Updated: 2013-07-31

Within OUs

You can control how Profile management administers profiles within an Organizational Unit (OU). In Windows Server 2008 environments, use Windows Management Instrumentation (WMI) filtering to restrict the .adm or .admx file to a subset of computers in the OU. WMI filtering is a capability of Group Policy Management Console with Service Pack 1 (GPMC with SP1). For more information on WMI filtering, see http://technet.microsoft.com/en-us/library/cc779036(WS.10).aspx and http://technet.microsoft.com/en-us/library/cc758471(WS.10).aspx. For more information on GPMC with SP1, see http://www.microsoft.com/DOWNLOADS/details.aspx?FamilyID=0a6d4c24-8cbd-4b35-9272-dd3cbfc81887&displaylang=en.

The following methods let you manage computers with different OSs using a single Group Policy Object (GPO) in a single OU. Each method is a different approach to defining the path to the user store:

  • Hard-coded strings
  • Profile management variables
  • System environment variables

Hard-coded strings specify a location that contains computers of just one type. This allows profiles from those computers to be uniquely identified by Profile management. For example, if you have an OU containing only Windows 7 computers, you might specify \\server\profiles$\%USERNAME%.%USERDOMAIN%\Windows7 in Path to user store. In this example, the Windows7 folder is hard-coded. Hard-coded strings do not require any setup on the computers that run the Profile Management Service.

Profile management variables are the preferred method because they can be combined flexibly to uniquely identify computers and do not require any setup. For example, if you have an OU containing Windows 7 and Windows 8 profiles running on operating systems of different bitness, you might specify \\server\profiles$\%USERNAME%.%USERDOMAIN%\!CTX_OSNAME!!CTX_OSBITNESS! in Path to user store. In this example, the two Profile management variables might resolve to the folders Win7x86 (containing the profiles running on the Windows 7 32-bit operating system) and Win8x64 (containing the profiles running on the Windows 8 64-bit operating system). For more information on Profile management variables, see Profile Management Policies.

System environment variables require some configuration; they must be set up on each computer that runs the Profile Management Service. Where Profile management variables are not suitable, consider incorporating system environment variables into the path to the user store as follows.

On each computer, set up a system environment variable called %ProfVer%. (User environment variables are not supported.) Then, set the path to the user store as:

\\upmserver\upmshare\%username%.%userdomain%\%ProfVer%

For example, set the value for %ProfVer% to Win7 for your Windows 7 32-bit computers and Win7x64 for your Windows 7 64-bit computers. For Windows Server 2008 32-bit and 64-bit computers, use 2k8 and 2k8x64 respectively. Setting these values manually on many computers is time-consuming, but if you use Provisioning Services, you only have to add the variable to your base image.

An example of how to script this is at:

http://forums.citrix.com/thread.jspa?threadID=241243&tstart=0

This sample script includes lines for Windows Server 2000, which is unsupported by Profile management.
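
If the forum script does not fit your environment, a minimal PowerShell sketch along the same lines is shown below. It assumes the value names used in the example above (Win7, Win7x64, 2k8, 2k8x64) and should be adapted to whatever folder names you actually choose.

  # Set the %ProfVer% system environment variable based on OS version and bitness.
  # Values mirror the examples above: Win7/Win7x64 for Windows 7, 2k8/2k8x64 for
  # Windows Server 2008. Run once per machine (or bake into the PVS base image).
  $os   = Get-WmiObject Win32_OperatingSystem
  $is64 = $os.OSArchitecture -like '64*'
  if ($os.Version -like '6.1*' -and $os.ProductType -eq 1) {
    $value = if ($is64) { 'Win7x64' } else { 'Win7' }    # Windows 7 client
  } else {
    $value = if ($is64) { '2k8x64' } else { '2k8' }      # Windows Server 2008
  }
  # Must be a machine-level (system) variable; user variables are not supported.
  [Environment]::SetEnvironmentVariable('ProfVer', $value, 'Machine')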

Tip: In Windows Server 2008 R2 and Windows Server 2012, you can speed up the creation and application of environment variables using Group Policy; in Group Policy Management Editor, click Computer Configuration > Preferences > Windows Settings > Environment, and then Action > New > Environment Variable.

Across OUs

You can control how Profile management administers profiles across OUs. Depending on your OU hierarchy and GPO inheritance, you can separate into one GPO a common set of Profile management policies that apply to multiple OUs. For example, Path to user store and Enable Profile management must be applied to all OUs, so you might store these separately in a dedicated GPO, enabling only these policies there (and leaving them unconfigured in all other GPOs).

You can also use a dedicated GPO to override inherited policies. For information on GPO inheritance, see the Microsoft Web site.

Citrix: PVS Configure Personal vDisks

Configure Personal vDisks

Updated: 2013-04-11

Citrix XenDesktop with personal vDisk technology is a high-performance enterprise desktop virtualization solution that makes VDI accessible to workers who require personalized desktops, by using pooled-static virtual machines.

Provisioning Services target devices that use personal vDisks are created using the Citrix XenDesktop Setup Wizard. Within a Provisioning Services farm, the wizard creates target devices, adds target devices to an existing site’s collection, and then assigns an existing vDisk, which is in standard image mode, to that device.

The wizard also creates XenDesktop virtual machines to associate with each Provisioning Services target device. A catalog exists in Citrix Desktop Studio that allows you to preserve the assignment of users to desktops; the same users are assigned the same desktop for later sessions. In addition, a dedicated storage disk is created (before logon) for each user so they can store all personalizations to that desktop (personal vDisk). Personalizations include any changes to the vDisk image or desktop that are not made as a result of an image update, such as application settings, adds, deletes, modifications, or documents. Target devices using personal vDisks can also be reassigned a different vDisk if that vDisk is from the same base vDisk lineage. For additional information on using personal vDisks with XenDesktop, refer to XenDesktop’s About Personal vDisks topic.

Inventory is run when a Provisioning Services vDisk is configured or updated. The method selected to configure or update a vDisk image for use as a personal vDisk image may determine when vDisk inventory runs in your deployment. The content that follows identifies the different methods from which you can choose, provides the high-level tasks associated with each method, and indicates at which point inventory runs for each method.

After configuring and adding a new personal vDisk image, do not use your golden VM as the machine template, because it creates an unnecessarily large write cache disk (the size of your original HDD).

Configure and deploy a new personal vDisk image

Configuration methods include:

Provisioning Services, then capture image, then XenDesktop

  1. Install and configure the OS on a VM.
  2. Install the Provisioning Services target device software on the VM.
  3. Run the Provisioning Services Imaging Wizard to configure the vDisk.
  4. Reboot.
  5. The Provisioning Services Imaging Wizard’s second stage runs to capture the personal vDisk image.
  6. From the Console, set the target device to boot from the vDisk.
  7. Configure the VM to boot from the network, then reboot.
  8. Install XenDesktop software on the VM, then configure with advanced options for personal vDisk.
  9. Manually run inventory, then shut the VM down.
  10. From the Console, place the vDisk in Standard Image Mode. Image is ready for deployment.

Provisioning Services, then XenDesktop, then capture image

  1. Install and configure the OS in a VM.
  2. Install the Provisioning Services target device software on the VM.
  3. Install XenDesktop software and configure with advanced options for personal vDisks enabled.
  4. Reboot.
  5. Log on to the VM.
  6. Run the Provisioning Services Imaging Wizard on the VM to configure the vDisk. (Inventory automatically runs after the VM successfully shuts down and reboots.)
  7. The Imaging Wizard’s second stage runs to capture the personal vDisk image.
  8. Shut the VM down.
  9. From the Console, place the personal vDisk image in Standard Image Mode. The personal vDisk is ready for deployment.
  10. Before using a VM template to provision multiple VMs to a XenDesktop site, verify that the new vDisk can successfully boot from the VM created to serve as the machine template (not the golden VM), and verify that the write cache disk is recognized successfully:
    1. Place the vDisk image in Private Image mode.
    2. Boot the new vDisk image from the VM.
    3. Format the new write cache partition manually.
    4. Shut down the VM. During the shutdown process, when prompted, run the personal vDisk inventory.
    5. Turn this VM into a template.

XenDesktop, then Provisioning Services, then capture image

  1. Install and configure the OS in a VM.
  2. Install XenDesktop software on the VM, then configure with advanced options for personal vDisk enabled.
  3. Reboot.
  4. Log on to the VM, then shut it down. Inventory automatically runs at shutdown.
  5. Log on to the VM, then install the Provisioning Services target device software.
  6. Run the Provisioning Services Imaging Wizard on the VM to configure the vDisk.
  7. Reboot. (Inventory automatically runs after the VM successfully shuts down and reboots.)
  8. The Imaging Wizard’s second stage runs to capture the personal vDisk image.
  9. Shut the VM down.
  10. Place the vDisk in Standard Image Mode. The personal vDisk is ready for deployment.
  11. Before using a VM template to provision multiple VMs to a XenDesktop site, verify that the new vDisk can successfully boot from the VM created to serve as the machine template (not the golden VM), and verify that the write cache disk is recognized successfully:
    1. Place the vDisk image in Private Image mode.
    2. Boot the new vDisk image from the VM.
    3. Format the new write cache partition manually.
    4. Shut down the VM. During the shutdown process, when prompted, run the personal vDisk inventory.
    5. Turn this VM into a template.

MCS

  1. Install and configure the OS in a MCS VM.
  2. Install XenDesktop software and configure with advanced options for personal vDisks.
  3. Reboot the VM.
  4. Log on to the VM, and then shut the VM down. Inventory automatically runs at shutdown.
  5. The personal vDisk image is ready for deployment.

Update an existing personal vDisk image

Methods for updating an existing personal vDisk image include using:

  • Provisioning Services
  • MCS

Updates for both Provisioning Services and MCS must be done on VMs that do not have a personal vDisk.

Provisioning Services

  1. Create a new version of the vDisk image.
  2. Boot the VM from the vDisk image in Maintenance Mode.
  3. Install updates on the new vDisk version.
  4. Shut the VM down. Inventory runs automatically when the VM shuts down.
  5. Promote the new version to either Test or Production. Other VMs will have access to the updated vDisk version the next time they reboot.
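
For farms that automate image maintenance, these same steps can be driven from the PVS PowerShell interface. The sketch below is an assumption-laden illustration based on the PVS 7.x object-oriented snap-in (Citrix.PVS.SnapIn); the cmdlet names, parameters, and the store/site/vDisk names used here should all be verified against your snap-in’s documentation before use.

  # Hypothetical sketch: create a maintenance version of a vDisk and, after the
  # updates have been installed and the maintenance VM shut down, promote it.
  # Cmdlet and parameter names are assumptions - confirm them with Get-Help.
  Add-PSSnapin Citrix.PVS.SnapIn

  # Step 1: create a new (maintenance) version of the vDisk.
  New-PvsDiskMaintenanceVersion -DiskLocatorName 'Win7-Gold' `
                                -StoreName 'Store1' -SiteName 'Site1'

  # ... boot a maintenance-mode VM from the new version, install the updates,
  # shut it down (inventory runs automatically), then:

  # Step 5: promote the new version (use the test option to promote to Test
  # rather than Production, if your snap-in exposes one).
  Invoke-PvsPromoteDiskVersion -DiskLocatorName 'Win7-Gold' `
                               -StoreName 'Store1' -SiteName 'Site1'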

MCS

  1. Boot the ‘golden’ VM.
  2. Install updates on the VM.
  3. Shut the VM down. Inventory automatically runs when the VM is shut down.

For additional information on how to create a Provisioning Services target device that uses a personal vDisk, refer to Deploy virtual desktops to VMs using the XenDesktop Setup Wizard. To view the properties of a Provisioning Services target device configured to use a personal vDisk, refer to Configure target devices that use personal vDisks.
