Group Policy integration in XenApp for 2008 R2: Reposted

As I described in my previous post, our primary goal for XenApp management was to enable template-based management of XenApp servers. We realized that most environments use Group Policies and Active Directory OUs to define these server templates. Most XenApp environments need GPOs in some capacity to configure RDS, profiles, and sessions, lock down servers, and configure the operating system.

GPO integration therefore reduces the number of consoles used for common management tasks. This sounds counter-intuitive at first: the Group Policy Management Console (GPMC) is an extra console… But the reality is that tasks can be performed entirely in the Active Directory consoles. Creating a new app silo or farm? Create a new OU, drop servers there, and assign a new Group Policy Object to that OU. Adding servers to the farm? Just drop them into the right OU. Maintaining dev, test, and production farms? Just link the high-level policies to the right OUs, and override any farm-specific setting using child OUs.

Additionally, GPO integration means all GPO management features now apply to XenApp settings as well. GPMC supports backup/restore, migration, and Resultant Set of Policy (planning and modeling). AGPM supports offline editing, configuration logging, change control, role-based delegation, and more.

Finally, GPO integration allows separation of management roles within IT. XenApp administrators can delegate server provisioning more easily, knowing that the only required step is the correct OU assignment for the server – something non-XA admins can understand and perform without specific XA delegation.

How will it work?

When you install the XenApp for 2008 R2 Management Console, it will include extensions to GPMC and GP Editor. GP Editor will display new Citrix policy nodes under the existing Computer and User nodes. These apply to all servers and/or users under the scope of that GPO (generally the list of OUs the GPO is linked to). The GP Editor extension is also installed on all XenApp servers, so the Local GPO editor (gpedit.msc) will also display the XA settings that apply to that computer alone.

This picture shows GPEdit after XenApp management consoles are installed:

In this example, I’ve selected “User Configuration”, “Policies”, and then “Citrix Policies”. The UI is the same as the policy editor found in the native XenApp MMC console. The difference is that these policies are associated with the Group Policy itself, rather than any one farm! In other words, this policy will apply to all computers and users under the scope of this policy, even if the computers are in multiple XenApp farms.

Note that we didn’t use standard ADMX files to represent our policies; ADMX couldn’t handle our filtering requirements. Our policies support session filtering based on client-side parameters – AAC tags, client IP range, client name, etc. – as well as computer filtering based on IMA Worker Group membership.

You can set any number of policies under “Computer Configuration” and “User Configuration” for a single GPO. Each policy has its own filter – in the example above I’ve set policies for any user connecting from an IP address outside the 10.15.* range, representing remote users.
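The filter described above boils down to a subnet-membership test. The sketch below is purely illustrative Python – the real evaluation happens inside the Citrix Group Policy engine, and the network boundary (10.15.0.0/16) is an assumption standing in for “10.15.*”:

```python
import ipaddress

# Hypothetical sketch of the "client IP outside 10.15.*" session filter.
# Illustrative only -- not Citrix's implementation or API.
INTERNAL_NET = ipaddress.ip_network("10.15.0.0/16")

def policy_applies(client_ip: str) -> bool:
    """Return True when the 'remote users' policy should apply,
    i.e. the client connects from outside the 10.15.* range."""
    return ipaddress.ip_address(client_ip) not in INTERNAL_NET

print(policy_applies("10.15.3.7"))    # internal client -> False
print(policy_applies("192.168.1.5"))  # remote client -> True
```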

All Group Policy rules and features apply to the Citrix extension: loopback processing, enforced policies, and ACL and WMI filtering. For example, the following Modeling Report shows Windows and XenApp policies side-by-side:

I’ve launched this simulation from the “Citrix Group Policy Modeling” wizard, added to GPMC when you install the XenApp extension. That wizard replicates all the steps of “Group Policy Modeling”, with one extra page where you can enter the client IP, name, and AAC filters you want to simulate.

Note that you can see which Group Policy “won” for every single setting. You can also see which XenApp filters “matched” the simulation, and the resulting policy group within the GPO.

We know, however, that some XenApp admins cannot effectively use Group Policy – they either lack delegated control over the XenApp OU, or they are not using Active Directory at all. In that case, you can fully manage your farm using the Policy node in the XenApp Delivery Services console. I’ve shown how that is done in this blog post.

Does this mean we have two policy systems, one in IMA, and another with Group Policies?

Not at all! I will describe how the IMA Policies and Citrix Group Policies extension work together in my next post.

Learn more about XenApp for R2

  • Download the tech preview for XenApp for Windows Server 2008 R2.
  • Register for the TechTalk hosted by Sridhar Mullapudi.

Horizon Migration Tool – XenApp to Horizon (eek!)

Source: Fling Labs


The Horizon Migration Tool helps you migrate published applications and desktops from XenApp to Horizon View. One XenApp farm is migrated to one or more Horizon View farms.

The GUI wizard-based tool helps you:

  • Validate the View agent status on RDS hosts (from View connection server, and XenApp server)
  • Create farms
  • Validate application availability on RDS hosts
  • Migrate application/desktop to one or multiple farms (new or existing)
  • Migrate entitlements to new or existing applications/desktops; combinations of application entitlements are supported
  • Check environment
  • Identify incompatible features and configuration



XenApp 7.x Architecture and Sizing



Peter Fine here from Dell CCC Solution Engineering, where we just finished an extensive refresh of our XenApp recommendation within the Dell Wyse Datacenter for Citrix solution architecture. Although not called “XenApp” in XenDesktop versions 7 and 7.1 of the product, the name has returned officially for version 7.5. XenApp is still architecturally linked to XenDesktop from a management infrastructure perspective, but it can also be deployed as a standalone architecture from a compute perspective. The best part of all now is flexibility. Start with XenApp or start with XenDesktop, then seamlessly integrate the other at a later time with no disruption to your environment. All XenApp really is now is a Windows Server OS running the Citrix Virtual Delivery Agent (VDA). That’s it! XenDesktop, on the other hand, is a Windows desktop OS running the VDA.


The logical architecture depicted below displays the relationship between the two use cases, outlined in red. All of the infrastructure that controls the brokering, licensing, etc. is the same between them. This simplification of architecture comes as a result of XenApp shifting from the legacy Independent Management Architecture (IMA) to XenDesktop’s FlexCast Management Architecture (FMA). It just makes sense, and we are very happy to see Citrix make this move. You can read more about the individual service components of XenDesktop/XenApp here.


Expanding the architectural view to include the physical and communication elements, XenApp fits quite nicely with XenDesktop and complements any VDI deployment. For simplicity, I recommend using compute hosts dedicated to XenApp and XenDesktop, respectively, for simpler scaling and sizing. Below you can see the physical management and compute hosts on the far left side, with each of their respective components considered within. Management will remain the same regardless of what type of compute host you ultimately deploy, but there are several different deployment options. Tier 1 and Tier 2 storage are handled the same way when XenApp is in play, and can make use of local or shared disk depending on your requirements. XenApp also integrates nicely with PVS, which can be used for deployment and easy scale-out scenarios. I have another post queued up for PVS sizing in XenDesktop.


From a stack view perspective, XenApp fits seamlessly into an existing XenDesktop architecture or can be deployed into a dedicated stack. Below is a view of a Dell Wyse Datacenter stack tailored for XenApp running on either vSphere or Hyper-V using local disks for Tier 1. XenApp slips easily into the compute layer here with our optimized host configuration. Be mindful of the upper scale when utilizing a single management stack, as 10K users and above is generally considered very large for a single farm. The important point to note is that the network, management, and storage layers are completely interchangeable between XenDesktop and XenApp. Only the host config in the compute layer changes slightly for XenApp-enabled hosts, based on our optimized configuration.


Use Cases

There are a number of use cases for XenApp, which ultimately relies on Windows Server’s RDSH role (formerly Terminal Services). The age-old and most obvious use case is hosted shared sessions, i.e. many users logging into and sharing the same Windows Server instance via RDP. This is useful for managing access to legacy apps, providing a remote access/VPN alternative, or controlling access to an environment that can only be reached via the XenApp servers. The next step up naturally extends to application virtualization, where instead of multiple users being presented with and working from a full desktop, they simply launch the applications they need from another device. These virtualized apps, of course, consume a full shared session on the backend even though the user only interacts with a single application. Either scenario can now be deployed easily via Delivery Groups in Citrix Studio.


XenApp also complements full XenDesktop VDI through the use of application off-load. It is entirely viable to load every application a user might need within their desktop VM, but this comes at a performance and management cost. Every VDI user on a given compute host will have a percentage of allocated resources consumed by running these applications, which all have to be kept up to date and patched unless they are part of the base image. Leveraging XenApp with XenDesktop provides the ability to off-load applications and their loads from the VDI sessions to the XenApp hosts. Let XenApp absorb those burdens for the applications where it makes sense. Now instead of running MS Office in every VM, run it from XenApp and publish it to your VDI users. Patch it in one place, shrink your gold images for XenDesktop, and free up resources for the more intensive, non-XenApp-friendly apps you really need to run locally. Best of all, your users won’t be able to tell the difference!



We performed a number of tests to identify the optimal configuration for XenApp. There are a number of ways to go here: physical, virtual, or PVS streamed to physical/virtual using a variety of caching options. There are also a number of ways in which XenApp can be optimized. Citrix wrote a very good blog article covering many of these optimization options, most of which we confirmed. The one outlier turned out to be NUMA, where we really didn’t see much difference with it turned on or off. We ran through the following test scenarios using the core DWD architecture with LoginVSI light and medium workloads for both vSphere and Hyper-V:

  • Virtual XenApp server optimization on both vSphere and Hyper-V to discover the right mix of vCPUs, oversubscription, RAM and total number of VMs per host
  • Physical Windows 2012 R2 host running XenApp
  • The performance impact and benefit of NUMA enabled to keep the RAM accessed by a CPU local to its adjacent DIMM bank.
  • The performance impact of various provisioning mechanisms for VMs: MCS, PVS write cache to disk, PVS write cache to RAM
  • The performance impact of an increased user idle time to simulate a less than 80+% concurrency of user activity on any given host.

To identify the best XenApp VM config we tried a number of configurations, including 1.5x CPU core oversubscription, a few very beefy VMs, and many smaller VMs. It is important to note that we based this on the 10-core Ivy Bridge part, the E5-2690v2, which features Hyper-Threading and Turbo Boost. These things matter! The highest density and best user experience came with 6 x VMs, each outfitted with 5 x vCPUs and 16GB RAM. Of the delivery methods we tried (outlined in the table below), Hyper-V netted the best results regardless of provisioning methodology. We did not see a density difference between PVS caching methods, but PVS cache in RAM completely removed any IOPS generated against the local disk. I’ll go more into PVS caching methods and results in another post.

Interestingly, of all the scenarios we tested, the native Server 2012 R2 + XenApp combination performed the poorest. PVS streamed to a physical host is another matter entirely, but unfortunately we did not test that scenario. We also saw no benefit from enabling NUMA. There was a time when a CPU accessing an adjacent CPU’s remote memory banks across the interconnect paths hampered performance, but given the current architecture in Ivy Bridge and its fat QPIs, this doesn’t appear to be a problem any longer.

The “Dell Light” workload below was adjusted to account for less than the 80%+ user concurrency we typically plan for in traditional VDI. Citrix has observed that real-world XenApp users tend not to work all at the same time. Fewer users working concurrently means freed resources and the opportunity to run more total users on a given compute host.

The net of this study shows that the hypervisor and XenApp VM configuration matter more than the delivery mechanism. MCS and PVS ultimately netted the same performance results but PVS can be used to solve other specific problems if you have them (IOPS).


* CPU % for ESX Hosts was adjusted to account for the fact that Intel E5-2600v2 series processors with the Turbo Boost feature enabled will exceed the ESXi host CPU metrics of 100% utilization. With E5-2690v2 CPUs the rated 100% in ESXi is 60000 MHz of usage, while actual usage with Turbo has been seen to reach 67000 MHz in some cases. The Adjusted CPU % Usage is based on 100% = 66000 MHz usage and is used in all charts for ESXi to account for Turbo Boost. Windows Hyper-V metrics by comparison do not report usage in MHz, so only the reported CPU % usage is used in those cases.
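The adjustment described in the footnote above is a simple normalization. A sketch with the numbers from the footnote (illustrative only; the function name is mine):

```python
# Normalizing ESXi CPU % for Turbo Boost, per the footnote above.
# ESXi rates 100% for this host at 60000 MHz, but Turbo-boosted readings
# can exceed that, so the charts re-base 100% to 66000 MHz.
ADJUSTED_FULL_MHZ = 66000  # the 100% reference used in the charts

def adjusted_cpu_pct(measured_mhz: float) -> float:
    """Convert a raw ESXi MHz reading into the adjusted CPU % used in the charts."""
    return 100.0 * measured_mhz / ADJUSTED_FULL_MHZ

# A reading at the nominal 60000 MHz ceiling no longer shows as 100%:
print(round(adjusted_cpu_pct(60000), 1))  # 90.9
print(round(adjusted_cpu_pct(66000), 1))  # 100.0
```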

** The “Dell Light” workload is a modified VSI workload to represent a significantly lighter type of user. In this case the workload was modified to produce about 50% idle time.

†Avg IOPS observed on disk is 0 because it is offloaded to RAM.

Summary of configuration recommendations:

  • Enable Hyper-Threading and Turbo for oversubscribed performance gains.
  • NUMA did not show a significant impact whether enabled or disabled.
  • 1.5x CPU oversubscription per host produced excellent results. 20 physical cores x 1.5 oversubscription netting 30 logical vCPUs assigned to VMs.
  • Virtual XenApp servers outperform dedicated physical hosts with no hypervisor so we recommend virtualized XenApp instances.
  • Using 10-Core Ivy Bridge CPUs, we recommend running 6 x XenApp VMs per host, each VM assigned 5 x vCPUs and 16GB RAM.
  • PVS cache in RAM (with HD overflow) reduces the user IO generated to disk to almost nothing, but may require greater RAM densities on the compute hosts. 256GB is a safe high-water mark using PVS cache in RAM, based on a 21GB cache per XenApp VM.

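The recommendations above reduce to simple arithmetic; here is a sketch using only the numbers stated in this post (variable names are mine, for illustration):

```python
# Sizing arithmetic from the recommendations above.
physical_cores = 20      # 2 x 10-core E5-2690v2 per host
oversubscription = 1.5   # recommended CPU oversubscription ratio
vms_per_host = 6
vcpus_per_vm = 5
ram_per_vm_gb = 16
pvs_ram_cache_gb = 21    # PVS cache-in-RAM per XenApp VM

# 20 physical cores x 1.5 = 30 vCPUs, exactly covering 6 x 5-vCPU VMs.
total_vcpus = int(physical_cores * oversubscription)
assert total_vcpus == vms_per_host * vcpus_per_vm

# RAM demand with PVS cache-in-RAM: VM memory plus per-VM cache.
vm_ram_gb = vms_per_host * (ram_per_vm_gb + pvs_ram_cache_gb)
print(total_vcpus)  # 30
print(vm_ram_gb)    # 222 -- under the 256GB high-water mark, leaving host headroom
```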

Dell Wyse Datacenter for Citrix – Reference Architecture

XenApp/ XenDesktop Core Concepts

Citrix Blogs – XenApp Scalability

  1. Do you have anything on XenApp 7.5 + HDX 3D? This is super helpful, but there is even less information on sizing for XenApp when GPUs are involved.


  2. Unfortunately we don’t yet have any concrete sizing data for XenApp with graphics, but this is teed up for us to tackle next. I’ll add some of the architectural considerations, which will hopefully help.


  3. Two questions:
    1. Did you include antivirus in your XenApp scalability considerations? If not, physical box overhead with Win 2012 R2 and 1 AV instance is minimal compared to 6 PVS-streamed VMs outfitted with 6 AV instances respectively (I am not recommending going physical, though).
    2. When suggesting PVS cache in RAM to improve scalability of XenApp workloads, do you consider CPU, not the IO, to be the main culprit? After all, you only have 20 cores in a 2 socket box, while there are numerous options to fix storage IO.

    PS. Some of your pictures are not visible


  4. Hi Alex,

    1) Yes, we always use antivirus in all testing that we do at Dell. Real world simulation is paramount. Antivirus used here is still our standard McAfee product, not VDI-optimized.

    2) Yes, CPU is almost always the limiting factor and exhausts first, ultimately dictating the limits of compute scale. You can see here that PVS cache in RAM didn’t change the scale equation, even though it did use slightly less CPU, but it all but eliminates the disk IO problem. We didn’t go too deep on the higher IO use cases with cache in RAM but this can obviously be considered a poor man’s Atlantis ILIO.

    Thanks for stopping by!


