Free utilities compiled from VirtXpert Blog

Hypervisors

  • VMware vSphere Hypervisor (ESXi):  The best platform, IMO; hopefully you can spring for at least Essentials Plus.
  • Citrix XenServer:  Let's face it, at a startup or SMB we may not be able to afford a commercial license.  XenServer provides live migration for free, but good luck supporting it.
  • KVM:  Open source platform, not for the faint of heart.

Storage

  • Nexenta Community Edition:  Turn bare metal equipment into a fully functional NAS.  The 18TB limit on the community edition would be fine for most (all?) SMBs.
  • FreeNAS:  A very solid open source project with years of support and development behind it.
  • NAS4Free:  I haven't used it; it looks similar to FreeNAS without the name recognition, and it's still being maintained.
  • FalconStor Virtual SAN appliance:  Missing some features (snapshots and HA), but it's yet another storage option.
  • QuadStor:  An open source application that turns a Linux VM or physical machine into a fully featured storage appliance.  I really need to get this in my lab.  HA, dedupe, VAAI!
  • UberAlign:  A utility from Nicholas Weaver (@lynxbat) to help fix alignment issues for VMs.

Backup and Recovery

Monitoring

  • vCenter Operations Manager Foundation:  Another freebie when you purchase vSphere.  You paid for the awesomest hypervisor, why not take advantage of the tools included with it?
  • Veeam One:  You virtualized your infrastructure, which is great, but you can't just poke your head into the datacenter and see if everything is flashing green.  Veeam lets you monitor your virtual infrastructure.
  • RVTools:  I haven’t had a chance to test this yet, lots of people swear by it.
  • Xangati:  Another handy tool to watch over your gear, free version is good for one server which, if you have one server, is actually great.
  • Nagios and Icinga:  The de facto open source monitoring tool and its fork.  Developers can't get along forever.
  • Opsview:  Based on Nagios with a nicer GUI, though honestly it's been 4 years since I used Nagios Core.
  • op5 Monitor:  Another Nagios-based option; open source code for unlimited monitoring and a free version with limited support for up to 20 devices.
  • Dell Foglight for Virtualization (Free):  A package of several free resources from Dell/Quest.  I am glad to see Dell is still maintaining the free products.  Vladan Seget has a nice write up here on the features.
  • Monitor.us:  Web-based monitoring application; some limitations, as all freemium products have.
  • ScaleXtreme:  Web-based monitoring application; some limitations, as all freemium products have.
  • vCheck:  A PoSH script by Alan Renouf that emails you the status of your environment (a minimal sketch of the idea follows this list).
  • Netwrix VMware Change Reporter:  Similar to Alan's script (not actually tested), but monitors changes to VM settings.
  • VirtualIQ Free:  For up to 2 hosts/5 CPUs and 25 VMs.
  • Indeni Health Check:  Free, light version of Indeni’s Dynamic Knowledge base.
  • IgniteFree:  Database monitoring tool from Confio for MS SQL and Oracle.  Wish there was a PostgreSQL version!
  • Event-O-Matic:  A PoSH/PowerCLI script generator from Luc Dekens that assists in collecting specific log files.
  • ExtraHop Discovery Edition:  Watch your apps for real-time performance problems.
  • SolarWinds Alert Central:  Helps you escalate alerts to specific individuals or groups instead of sending everything to a group.
  • AlienVault OSSIM OpenSource:  Awful lot of free monitoring tools, huh? This one attempts to correlate logs and monitoring from various open source tools to help you identify the cause of an issue.  This may also be the easiest Snort install you could do!
  • Tripwire SecureScan:  Free for up to 100 IPs.
  • Opvizor:  Free edition supports a single cluster with up to 48 hours of data.
  • Opvizor Snapwatcher:  Free during beta; find and nuke snapshots, even corrupt ones.
  • CloudPhysics:  Community edition of the powerful monitoring solution that backs alerts with aggregated data.
  • Datadog:  Monitor up to 5 hosts with various dashboards.  Free version only keeps data for 1 day.
  • VMTurbo Virtual Health Monitor:  Monitor your VM environment in real time.
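
Since so many of these tools boil down to "run a script, get an email," here is a minimal PowerCLI sketch in the spirit of vCheck — not Alan's actual script — that reports powered-off VMs and week-old snapshots. It assumes the PowerCLI snap-in/module is loaded, and the vCenter and mail server names are placeholders:

    # Minimal vCheck-style daily report (illustrative only; all names are placeholders)
    Connect-VIServer -Server vcenter.example.com

    # Collect powered-off VMs and snapshots older than 7 days
    $offVMs   = Get-VM | Where-Object { $_.PowerState -eq 'PoweredOff' } | Select-Object Name
    $oldSnaps = Get-VM | Get-Snapshot |
        Where-Object { $_.Created -lt (Get-Date).AddDays(-7) } |
        Select-Object VM, Name, Created, SizeGB

    # Email the results as a simple HTML report
    $body = ($offVMs | ConvertTo-Html -Fragment) + ($oldSnaps | ConvertTo-Html -Fragment) | Out-String
    Send-MailMessage -From 'vcheck@example.com' -To 'admin@example.com' `
        -Subject 'Daily vSphere report' -Body $body -BodyAsHtml -SmtpServer 'smtp.example.com'

Schedule it as a Windows task and you have the poor man's version of what vCheck does far more thoroughly.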

Logging

  • VMware Syslog Collector and Dump Collector:  Included in the vCenter download – at some point there is going to be a problem or error, and having your logs centrally collected will be nice.  It seems like busy work now, but you will be glad when you need it (a PowerCLI sketch for pointing hosts at a collector follows this list).
  • Splunk:  The free version, as most do, has some limitations, but as you grow you can remove them with a paid version.
  • Graylog:  A newer logging tool in the OS world, built on MongoDB.  Installation script on GitHub.
  • Syslog-ng:  Moar syslog.
  • nxlog:  You need a way to get your Windows logs to these cool syslog servers!
  • Nagios Log Monitor:  Check out my write-up here.
  • ELK:  A combination of open source tools; check out the write-up by Larry Smith Jr and his #vBrownBag presentation.
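
Whichever collector you pick, your ESXi hosts still need to forward to it. A hedged PowerCLI sketch — the syslog hostname is a placeholder, and Syslog.global.logHost is the standard ESXi advanced setting:

    # Point every host at a central syslog target and open the host firewall for it
    Connect-VIServer -Server vcenter.example.com
    Get-VMHost | Get-AdvancedSetting -Name 'Syslog.global.logHost' |
        Set-AdvancedSetting -Value 'tcp://syslog.example.com:514' -Confirm:$false
    Get-VMHost | Get-VMHostFirewallException -Name 'syslog' |
        Set-VMHostFirewallException -Enabled:$true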

Automation

  • VMware PowerCLI:  Free and included – have I mentioned all the stuff that's included with your paid vSphere license? I think I did (a quick example follows this list).
  • vCenter Orchestrator:  Yeah, you guessed it, included when you buy.
  • Windows PowerShell:  Installed with Windows – Its there, why click around when you can write a script (I am guilty of not doing this enough)
  • AutoIT:  Another popular scripting platform
  • Puppet:  I will start by saying I don't know enough about it and need to learn more; the Enterprise version is free for up to 10 nodes.
  • Ansible:  Another configuration management tool, similar to Puppet yet, so far, easier to use.  Free for up to 10 nodes.
  • Chef:  Another automation platform popular with the DevOps movement; free and paid versions, like Puppet and Ansible.
  • CloudBolt C2:  Claims to offer an easy-to-use IaaS tool, taking shots at vCAC.  Free for up to 100 VMs.
  • ESX Deployment Appliance:  Free appliance to help automate the installation of ESX/ESXi
  • Ultimate Deployment Appliance:  Helps automate the deployment of both Windows and Linux.
  • VMware WebCommander:  Part of the Fling labs, publish PowerShell and PowerCLI scripts via a self service portal to your users
  • Windows vCenter to VCSA Fling:  Fling from VMware to migrate a Windows vCenter to the vCenter Server Appliance.
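
To show why PowerCLI earns the top spot on this list, here is a small illustrative sketch — the template, cluster, and datastore names are made up — that clones five web servers from a template instead of clicking through the New VM wizard five times:

    # Deploy five VMs from a template (all names are hypothetical placeholders)
    Connect-VIServer -Server vcenter.example.com
    1..5 | ForEach-Object {
        New-VM -Name ('web{0:d2}' -f $_) `
            -Template 'Win2012R2-Template' `
            -ResourcePool (Get-Cluster 'Prod-Cluster') `
            -Datastore 'datastore1'
    }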

Networking

  • Zen Load Balancer:  Sometimes you need more than one server (web, View Connection Server, etc.), but real load balancers get pricey.
  • VyOS:  The replacement for Vyatta.
  • pfSense:  Another OS load balancer.
  • Untangle:  Free firewall; really just several other OS projects packaged together, but it has a nice GUI and means you don't have to do the packaging yourself.
  • Smoothwall:  Yet another OS firewall.
  • Endian:  Yet another OS firewall.
  • PacketFence:  Open source Network Access Control.
  • Sophos UTM Home:  Only really good for a home lab, but many of us have home labs, so here's a firewall.
  • OpenDaylight:  SDN is the future, why not get started here?
  • Kemp Load Balancer:  A seemingly free, yet fully functional and optimized, load balancer.

VDI

Patching

  • Windows WSUS:  I think it goes without saying that Windows needs to be patched.
  • RedHat Spacewalk:  Yes, even Linux needs patches; you don't want a vulnerable Apache web server kicking around, do you?
  • VMware Update Manager (VUM):  Something something, included with vSphere, something.  I wish VMware Go was still around; it was free and made patching a breeze.

Training

  • VMware MyLearn:  Lots of great free classes from new features, DR training and even advanced topics such as vCloud Director.
  • VMware Learning:  An extension of MyLearn (IMO) with lots of great free videos, versus the slide/formal training style on MyLearn.
  • Local VMUG:  Great opportunity to meet new people and learn about the latest from the VMware community and vendors.
  • Codecademy:  Coding = automation.
  • #vBrownBag:  Community contributed podcasts primarily focused on VMware but also covering various technology topics, including OpenStack.
  • VMware Hands On Labs (HOL):  Free hands-on training on various topics directly from VMware.
  • Virtual Design Master:  A free, live competition that puts aspiring architects' skills to the test.  Currently in Season 2.
  • Phil Wiffen's TwistedEthics.com Free Training List:  A great list of free training resources Phil has curated.

Community Contributed Certification Resources 

Misc

Collaboration

  • Slack:  A group messaging tool with search, paid versions retain more history
  • Asana:  A great task manager, share tasks with others as well
  • Trello:  A slightly different take on managing your todo items
  • MangoApps:  As with any free SaaS service, some limitations
  • SocialCast:  Free for up to 50 users with some limitations
  • Sharetronix:  Hey, you're still reading? Cool, you're probably wondering what this is all about.  My other passion is collaboration; I hate email, and "social" seems like a much better way to communicate.
  • Zimbra OpenSource:  Open source edition of Zimbra for email and calendaring

XENAPP 7.X ARCHITECTURE AND SIZING

Thursday, May 08, 2014

Source: Exit | the | Fast | Lane (http://weestro.blogspot.com/2014/05/xenapp-7x-architecture-and-sizing.html)


Peter Fine here from Dell CCC Solution Engineering, where we just finished an extensive refresh of our XenApp recommendation within the Dell Wyse Datacenter for Citrix solution architecture.  Although not called “XenApp” in XenDesktop versions 7 and 7.1 of the product, the name has returned officially for version 7.5. XenApp is still architecturally linked to XenDesktop from a management infrastructure perspective but can also be deployed as a standalone architecture from a compute perspective. The best part of all now is flexibility. Start with XenApp or start with XenDesktop, then seamlessly integrate the other at a later time with no disruption to your environment. All XenApp really is now is a Windows Server OS running the Citrix Virtual Delivery Agent (VDA). That’s it! XenDesktop, on the other hand, = a Windows desktop OS running the VDA.

Architecture

The logical architecture depicted below displays the relationship between the two use cases, outlined in red. All of the infrastructure that controls the brokering, licensing, etc. is the same between them. This simplification of the architecture comes as a result of XenApp shifting from the legacy Independent Management Architecture (IMA) to XenDesktop’s FlexCast Management Architecture (FMA). It just makes sense, and we are very happy to see Citrix make this move. You can read more about the individual service components of XenDesktop/XenApp here.

[Image: logical architecture showing the XenApp and XenDesktop use cases]

Expanding the architectural view to include the physical and communication elements, XenApp fits quite nicely with XenDesktop and complements any VDI deployment. For simplicity, I recommend using compute hosts dedicated to XenApp and XenDesktop, respectively, for simpler scaling and sizing. Below you can see the physical management and compute hosts on the far left side with each of their respective components considered within. Management will remain the same regardless of what type of compute host you ultimately deploy, but there are several different deployment options. Tier 1 and tier 2 storage are treated the same way when XenApp is in play, and can make use of local or shared disk depending on your requirements. XenApp also integrates nicely with PVS, which can be used for deployment and easy scale-out scenarios.  I have another post queued up for PVS sizing in XenDesktop.

[Image: physical architecture with management and compute hosts]

From a stack view perspective, XenApp fits seamlessly into an existing XenDesktop architecture or can be deployed into a dedicated stack. Below is a view of a Dell Wyse Datacenter stack tailored for XenApp running on either vSphere or Hyper-V using local disks for Tier 1. XenDesktop slips easily into the compute layer here with our optimized host configuration. Be mindful of the upper scale utilizing a single management stack, as 10K users and above is generally considered very large for a single farm. The important point to note is that the network, management, and storage layers are completely interchangeable between XenDesktop and XenApp. Only the host config in the compute layer changes slightly for XenApp-enabled hosts based on our optimized configuration.

[Image: Dell Wyse Datacenter stack view for XenApp]

Use Cases

There are a number of use cases for XenApp, all of which ultimately rely on Windows Server’s RDSH role (terminal services). The age-old and most obvious use case is hosted shared sessions, i.e. many users logging into and sharing the same Windows Server instance via RDP. This is useful for managing access to legacy apps, providing a remote access/VPN alternative, or controlling access to an environment that can only be reached via the XenApp servers. The next step up naturally extends to application virtualization, where instead of multiple users being presented with and working from a full desktop, they simply launch the applications they need to use from another device. These virtualized apps, of course, consume a full shared session on the backend even though the user only interacts with a single application. Either scenario can now be deployed easily via Delivery Groups in Citrix Studio.

[Image: hosted shared sessions and published applications via Delivery Groups]

XenApp also complements full XenDesktop VDI through the use of application off-load. It is entirely viable to load every application a user might need within their desktop VM, but this comes at a performance and management cost. Every VDI user on a given compute host will have a percentage of allocated resources consumed by running these applications, which all have to be kept up to date and patched unless they are part of the base image. Leveraging XenApp with XenDesktop provides the ability to off-load applications and their loads from the VDI sessions to the XenApp hosts. Let XenApp absorb those burdens for the applications that make sense. Now instead of running MS Office in every VM, run it from XenApp and publish it to your VDI users. Patch it in one place, shrink your gold images for XenDesktop, and free up resources for other more intensive, non-XenApp-friendly apps you really need to run locally. Best of all, your users won’t be able to tell the difference!

[Image: application off-load from XenDesktop VDI to XenApp]

Optimization

We performed a number of tests to identify the optimal configuration for XenApp. There are a number of ways to go here: physical, virtual, or PVS streamed to physical/virtual using a variety of caching options. There are also a number of ways in which XenApp can be optimized. Citrix wrote a very good blog article covering many of these optimization options, most of which we confirmed. The one outlier turned out to be NUMA, where we really didn’t see much difference with it turned on or off. We ran through the following test scenarios using the core DWD architecture with Login VSI light and medium workloads for both vSphere and Hyper-V:

  • Virtual XenApp server optimization on both vSphere and Hyper-V to discover the right mix of vCPUs, oversubscription, RAM and total number of VMs per host
  • Physical Windows 2012 R2 host running XenApp
  • The performance impact and benefit of NUMA enabled to keep the RAM accessed by a CPU local to its adjacent DIMM bank.
  • The performance impact of various provisioning mechanisms for VMs: MCS, PVS write cache to disk, PVS write cache to RAM
  • The performance impact of increased user idle time, to simulate less than 80% concurrency of user activity on any given host.

To identify the best XenApp VM config we tried a number of configurations, including a mix of 1.5x CPU core oversubscription, fewer very beefy VMs, and many less beefy VMs. It’s important to note that we based this on the 10-core Ivy Bridge part (E5-2690v2) that features Hyper-Threading and Turbo Boost. These things matter! The highest density and best user experience came with 6 x VMs, each outfitted with 5 x vCPUs and 16GB RAM.  Of the delivery methods we tried (outlined in the table below), Hyper-V netted the best results regardless of provisioning methodology. We did not see a density difference between PVS caching methods, but PVS cache in RAM completely removed any IOPS generated against the local disk. I’ll go more into PVS caching methods and results in another post.

Interestingly, of all the scenarios we tested, the native Server 2012 R2 + XenApp combination performed the poorest. PVS streamed to a physical host is another matter entirely, but unfortunately we did not test that scenario. We also saw no benefit from enabling NUMA. There was a time when a CPU accessing an adjacent CPU’s remote memory banks across the interconnect paths hampered performance, but given the current architecture in Ivy Bridge and its fat QPIs, this doesn’t appear to be a problem any longer.

The “Dell Light” workload below was adjusted to account for less than the 80% user concurrency we typically plan for in traditional VDI. Citrix observed that XenApp users in the real world tend not to work all at the same time. Fewer users working concurrently means freed resources and the opportunity to run more total users on a given compute host.

The net of this study shows that the hypervisor and XenApp VM configuration matter more than the delivery mechanism. MCS and PVS ultimately netted the same performance results but PVS can be used to solve other specific problems if you have them (IOPS).

[Image: density and performance results by hypervisor and provisioning method]

* CPU % for ESX Hosts was adjusted to account for the fact that Intel E5-2600v2 series processors with the Turbo Boost feature enabled will exceed the ESXi host CPU metrics of 100% utilization. With E5-2690v2 CPUs the rated 100% in ESXi is 60000 MHz of usage, while actual usage with Turbo has been seen to reach 67000 MHz in some cases. The Adjusted CPU % Usage is based on 100% = 66000 MHz usage and is used in all charts for ESXi to account for Turbo Boost. Windows Hyper-V metrics by comparison do not report usage in MHz, so only the reported CPU % usage is used in those cases.

** The “Dell Light” workload is a modified VSI workload to represent a significantly lighter type of user. In this case the workload was modified to produce about 50% idle time.

†Avg IOPS observed on disk is 0 because it is offloaded to RAM.

Summary of configuration recommendations:

  • Enable Hyper-Threading and Turbo for oversubscribed performance gains.
  • NUMA did not show a tremendous impact either enabled or disabled.
  • 1.5x CPU oversubscription per host produced excellent results: 20 physical cores x 1.5 oversubscription nets 30 logical vCPUs assigned to VMs.
  • Virtual XenApp servers outperform dedicated physical hosts with no hypervisor, so we recommend virtualized XenApp instances.
  • Using 10-core Ivy Bridge CPUs, we recommend running 6 x XenApp VMs per host, each VM assigned 5 x vCPUs and 16GB RAM (see the quick arithmetic sketch after this list).
  • PVS cache in RAM (with HD overflow) will reduce the user IO generated to disk to almost nothing but may require greater RAM densities on the compute hosts. 256GB is a safe high water mark using PVS cache in RAM, based on a 21GB cache per XenApp VM.
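
The sizing arithmetic above is simple enough to sanity-check in a few lines of PowerShell. The figures come straight from the recommendations in this post; the script itself is just an illustration, not part of the study:

    # Back-of-the-napkin XenApp host sizing using this article's figures
    $physCores  = 20    # 2 x E5-2690v2, 10 cores each
    $overSub    = 1.5   # CPU oversubscription ratio
    $vCpusPerVM = 5
    $ramPerVM   = 16    # GB assigned per XenApp VM
    $pvsCache   = 21    # GB PVS RAM cache per XenApp VM

    $vmsPerHost = [math]::Floor(($physCores * $overSub) / $vCpusPerVM)   # 6
    $vmRamGB    = $vmsPerHost * $ramPerVM                                # 96
    $cacheGB    = $vmsPerHost * $pvsCache                                # 126
    "{0} VMs per host, {1} GB of VM RAM, {2} GB of PVS cache" -f $vmsPerHost, $vmRamGB, $cacheGB

96GB for the VMs plus 126GB of PVS cache, plus hypervisor overhead, is how you arrive at 256GB as a safe high water mark.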

Resources:

Dell Wyse Datacenter for Citrix – Reference Architecture

XenApp/ XenDesktop Core Concepts

Citrix Blogs – XenApp Scalability

4 comments:
  1. Do you have anything on XenApp 7.5 + HDX 3D? This is super helpful, but there is even less information on sizing for XenApp when GPUs are involved.

  2. Unfortunately we don’t yet have any concrete sizing data for XenApp with graphics, but this is teed up for us to tackle next. I’ll add some of the architectural considerations, which will hopefully help.

  3. Two questions:
    1. Did you include antivirus in your XenApp scalability considerations? If not, physical box overhead with Win 2012 R2 and 1 AV instance is minimal, when compared to 6 PVS streamed VMs outfitted with 6 AV instances respectively (I am not recommending to go physical though).
    2. When suggesting PVS cache in RAM to improve scalability of XenApp workloads, do you consider CPU, not the IO, to be the main culprit? After all, you only have 20 cores in a 2 socket box, while there are numerous options to fix storage IO.

    PS. Some of your pictures are not visible

  4. Hi Alex,

    1) Yes, we always use antivirus in all testing that we do at Dell. Real world simulation is paramount. Antivirus used here is still our standard McAfee product, not VDI-optimized.

    2) Yes, CPU is almost always the limiting factor and exhausts first, ultimately dictating the limits of compute scale. You can see here that PVS cache in RAM didn’t change the scale equation, even though it did use slightly less CPU, but it all but eliminates the disk IO problem. We didn’t go too deep on the higher IO use cases with cache in RAM but this can obviously be considered a poor man’s Atlantis ILIO.

    Thanks for stopping by!
