XenApp 6.x Servers removed from farm still showing up in Delivery Console

A XenApp 6.x server could not be removed from the farm and continued to appear in the Delivery Services Console. The graceful removal method was not possible because the server was provisioned with Citrix Provisioning Services.

On a data collector, open a command prompt and list the farm's servers:

qfarm

Confirm the stale server still appears in the list, then run a data store consistency check:

dscheck /full servers /clean

Force the deletion of the server record (note that SERVERNAME is case sensitive):

dscheck /full servers /deletemf SERVERNAME

Run the clean pass once more:

dscheck /full servers /clean

The server should now be gone from the Delivery Services Console.
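For repeat cleanups, the same sequence can be wrapped in a small batch script. This is a minimal sketch to run on a data collector, assuming qfarm and dscheck are on the PATH; the pause is just a safety stop so you can verify the qfarm output before the forced delete:

@echo off
rem Remove a stale XenApp 6.x server record from the farm data store.
rem Usage: remove-stale-server.cmd SERVERNAME   (SERVERNAME is case sensitive)
if "%~1"=="" (
    echo Usage: %~nx0 SERVERNAME
    exit /b 1
)

rem Show the current farm server list so the stale entry can be confirmed
qfarm
echo About to force-delete %1 from the data store. Press Ctrl+C to abort.
pause

rem Consistency check, forced delete, then a final clean pass
dscheck /full servers /clean
dscheck /full servers /deletemf %1
dscheck /full servers /clean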

CITRIX XENDESKTOP AND PVS: A WRITE CACHE PERFORMANCE STUDY

Thursday, July 10, 2014 | Source: Exit the Fast Lane


If you're unfamiliar, PVS (Citrix Provisioning Server) is a vDisk deployment mechanism available for use within a XenDesktop or XenApp environment that delivers images by streaming. Shared read-only vDisks are streamed to virtual or physical targets from which users access random pooled or static desktop sessions. Random desktops are reset to a pristine state between logoffs, while users requiring static desktops have their changes persisted within a Personal vDisk pinned to their own desktop VM. Any changes that occur during a user session are captured in a write cache. This is where the performance-demanding write IOs occur, and where PVS offers a great deal of flexibility as to where those writes land. Write cache destination options are defined via PVS vDisk access modes, which can dramatically change the performance characteristics of your VDI deployment.

While PVS does add a degree of complexity to the overall architecture, since it requires its own infrastructure, it is worth considering because it can reduce the amount of physical computing horsepower required for your VDI desktop hosts. The following diagram illustrates the relationship of PVS to Machine Creation Services (MCS) in the larger architectural context of XenDesktop. Keep in mind that PVS is frequently used to deploy XenApp servers as well.

[Diagram: PVS and MCS in the XenDesktop architecture]

PVS 7.1 supports the following write cache destination options (from the Citrix documentation; see the references below):

  • Cache on device hard drive – Write cache can exist as a file in NTFS format, located on the target device's hard drive. This write cache option frees up the Provisioning Server, since it does not have to process write requests, and it is not bound by the finite size of RAM.
  • Cache on device hard drive persisted (experimental phase only) – The same as Cache on device hard drive, except cache persists. At this time, this write cache method is an experimental feature only, and is only supported for NT6.1 or later (Windows 7 and Windows 2008 R2 and later).
  • Cache in device RAM – Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.
  • Cache in device RAM with overflow on hard disk – When the RAM cache size is set to zero, the target device write cache is written only to the local disk. When it is non-zero, writes go to RAM first and overflow to the local hard disk once the RAM cache fills.
  • Cache on a server – Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk IO and network traffic.
  • Cache on server persistent – This cache option allows for the saving of changes between reboots. Using this option, after rebooting, a target device is able to retrieve changes made in previous sessions that differ from the read-only vDisk image.

Many of these options were available in previous versions of PVS, including cache in RAM, but what makes v7.1 more interesting is the ability to cache to RAM with overflow to HDD. This provides the best of both worlds: extreme RAM-based IO performance without the risk, since the cache can now spill to HDD if RAM fills. Previously you had to be very careful to ensure your RAM cache never filled completely, as that could end in catastrophe. Granted, if overflow does occur, the affected user VMs are at the mercy of your available HDD performance, but that is still far better than the alternative (a BSOD).

Results

Even when caching directly to HDD, PVS shows lower IOPS per user than MCS does on the same hardware. We decided to take things a step further by testing a number of different caching options. We ran tests on both Hyper-V and ESXi using our three standard user VM profiles against the Login VSI low, medium, and high workloads. For reference, below are the standard user VM profiles we use in all Dell Wyse Datacenter enterprise solutions:

Profile Name | vCPUs per Virtual Desktop | Nominal RAM (GB) per Virtual Desktop | Use Case
Standard     | 1                         | 2                                    | Task Worker
Enhanced     | 2                         | 3                                    | Knowledge Worker
Professional | 2                         | 4                                    | Power User

We tested three write caching options across all user and workload types: cache on device HDD, RAM + overflow (256MB), and RAM + overflow (512MB). Doubling the RAM cache on the more intensive workloads paid off in a big way, cutting host IOPS to nearly zero; almost 100% of user-generated IO was absorbed by RAM. We didn't capture the IOPS generated within RAM here, but as the fastest medium available in the server, and based on previous work with other in-RAM technologies, 1600MHz RAM is capable of tens of thousands of IOPS per host. We also tested thin vs thick provisioning using our high-end profile when caching to HDD, just for grins. Ironically, thin provisioning outperformed thick on ESXi, while the opposite proved true on Hyper-V. To achieve these IOPS numbers on ESXi it is important to enable intermediate buffering (see the references at the bottom). The RAM + overflow rows in the tables below show the effect most clearly. Note: IOPS per user below reflects IO as observed at the disk layer of the compute host; near-zero figures mean the writes were absorbed in RAM, not that the sessions generated almost no IO.

Hypervisor | PVS Cache Type              | Workload     | Density | Avg CPU % | Avg Mem Usage (GB) | Avg IOPS/User | Avg Net KBps/User
ESXi       | Device HDD only             | Standard     | 170     | 95%       | 1.2                | 5             | 109
ESXi       | 256MB RAM + Overflow        | Standard     | 170     | 76%       | 1.5                | 0.4           | 113
ESXi       | 512MB RAM + Overflow        | Standard     | 170     | 77%       | 1.5                | 0.3           | 124
ESXi       | Device HDD only             | Enhanced     | 110     | 86%       | 2.1                | 8             | 275
ESXi       | 256MB RAM + Overflow        | Enhanced     | 110     | 72%       | 2.2                | 1.2           | 284
ESXi       | 512MB RAM + Overflow        | Enhanced     | 110     | 73%       | 2.2                | 0.2           | 286
ESXi       | HDD only, thin provisioned  | Professional | 90      | 75%       | 2.5                | 9.1           | 250
ESXi       | HDD only, thick provisioned | Professional | 90      | 79%       | 2.6                | 11.7          | 272
ESXi       | 256MB RAM + Overflow        | Professional | 90      | 61%       | 2.6                | 1.9           | 255
ESXi       | 512MB RAM + Overflow        | Professional | 90      | 64%       | 2.7                | 0.3           | 272
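To put the per-user figures in host terms, multiply by session density. From the ESXi Standard rows above: HDD-only caching works out to roughly 170 x 5 = 850 disk IOPS per host, while 512MB RAM + overflow leaves only about 170 x 0.3 = 51 IOPS for the disk subsystem to service, a reduction of around 94%.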

On Hyper-V we observed a similar story, but we did not enable intermediate buffering, at the recommendation of Citrix. This is important! Citrix strongly recommends against intermediate buffering on Hyper-V, as it degrades performance there. Most other numbers are well in line with the ESXi results, save for the cache-to-HDD figures being slightly higher.

Hypervisor | PVS Cache Type              | Workload     | Density | Avg CPU % | Avg Mem Usage (GB) | Avg IOPS/User | Avg Net KBps/User
Hyper-V    | Device HDD only             | Standard     | 170     | 92%       | 1.3                | 5.2           | 121
Hyper-V    | 256MB RAM + Overflow        | Standard     | 170     | 78%       | 1.5                | 0.3           | 104
Hyper-V    | 512MB RAM + Overflow        | Standard     | 170     | 78%       | 1.5                | 0.2           | 110
Hyper-V    | Device HDD only             | Enhanced     | 110     | 85%       | 1.7                | 9.3           | 323
Hyper-V    | 256MB RAM + Overflow        | Enhanced     | 110     | 80%       | 2                  | 0.8           | 275
Hyper-V    | 512MB RAM + Overflow        | Enhanced     | 110     | 81%       | 2.1                | 0.4           | 273
Hyper-V    | HDD only, thin provisioned  | Professional | 90      | 80%       | 2.2                | 12.3          | 306
Hyper-V    | HDD only, thick provisioned | Professional | 90      | 80%       | 2.2                | 10.5          | 308
Hyper-V    | 256MB RAM + Overflow        | Professional | 90      | 80%       | 2.5                | 2.0           | 294
Hyper-V    | 512MB RAM + Overflow        | Professional | 90      | 79%       | 2.7                | 1.4           | 294
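As an aside on the intermediate buffering setting referenced above: it is controlled on the PVS target device through the BNIStack driver parameters in the registry. The value name below is an assumption taken from the Citrix article listed in the references, so confirm the exact name and data values there before changing anything (and remember Citrix's advice to leave it off on Hyper-V):

rem Inspect the current intermediate-buffering setting on a PVS target device
rem (key and value name assumed from the Citrix article in the references)
reg query "HKLM\SYSTEM\CurrentControlSet\Services\BNIStack\Parameters" /v WcHDNoIntermediateBuffering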

Implications

So what does it all mean? If you're already a PVS customer, this is a no-brainer: upgrade to v7.1 and turn on "cache in device RAM with overflow on hard disk" now. Your storage subsystems will thank you. The benefits are clear on ESXi and Hyper-V alike. If you're deploying XenDesktop soon and debating MCS vs PVS, this is a very strong mark in the "pro" column for PVS. The fact of life in VDI is that we always run out of CPU first, but that doesn't mean we get to ignore or undersize IO performance. Letting RAM absorb the vast majority of user write cache IO stretches your HDD subsystems much further, since their burden is diminished: cut your local disk costs by two-thirds, or stretch those shared arrays two to three times further. PVS cache in RAM + overflow lets you design your storage around capacity requirements, with far less need to overprovision spindles just to meet IO demands (and waste capacity in the process).
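To make the two-thirds claim concrete, a back-of-the-napkin example from the Hyper-V Professional rows above, assuming roughly 150 IOPS per 10K SAS spindle (the spindle figure is an illustrative assumption, not part of the study): 90 users x 12.3 IOPS = ~1,107 host IOPS with HDD-only caching, which needs 7-8 spindles for IO alone; with 512MB RAM + overflow, 90 x 1.4 = ~126 IOPS, which one or two capacity-sized spindles can absorb.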

References:

DWD Enterprise Reference Architecture

http://support.citrix.com/proddocs/topic/provisioning-7/pvs-technology-overview-write-cache-intro.html

When to Enable Intermediate Buffering for Local Hard Drive Cache

HP SYSTEM MANAGEMENT HOMEPAGE SHOWS NO ITEMS

Source: Exit The Fast Lane

If you've installed the ProLiant Support Pack (8.x) after the fact, or built a new server with SmartStart without enabling SNMP, then you've probably seen something like this after the install:

[Screenshot: System Management Homepage with no component items displayed]

All of the HP agents are started and reporting "all is well," but no specific component information is displayed. This is because the Management Homepage relies on SNMP, which is either not installed or not configured properly. Even if you don't have an enterprise SNMP trap receiver, you need to configure the service on the local server to send updates to itself, at least:

  • Ensure the SNMP service is installed, then open its properties.
  • On the Traps tab, enter a community name of your choosing. The typical names are "public" and "private", public being read-only and private read-write. Make sure the loopback address is added to the trap destinations area.
  • On the Security tab, enter the community name you just created in the accepted community names box and set its permissions to READ WRITE. Ensure that traps sent from localhost are allowed to be received.
  • Restart the SNMP service, which will also restart all of the HP management agents.
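The same configuration can be scripted. This is a minimal sketch against the standard Windows SNMP service registry layout rather than the GUI; the community name "public" and the loopback trap destination are just the examples from above, so substitute your own values:

rem Accept the community "public" with READ WRITE rights
rem (community rights: 1=NONE, 2=NOTIFY, 4=READ ONLY, 8=READ WRITE, 16=READ CREATE)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ValidCommunities" /v public /t REG_DWORD /d 8 /f

rem Send traps for the "public" community to the loopback address
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\TrapConfiguration\public" /v 1 /t REG_SZ /d 127.0.0.1 /f

rem Accept SNMP packets from localhost
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\PermittedManagers" /v 1 /t REG_SZ /d localhost /f

rem Restart SNMP (/y skips the prompt; the dependent HP agents restart with it)
net stop SNMP /y && net start SNMP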

 

Launch the System Management Homepage again and it should look more like this:

[Screenshot: System Management Homepage with component data displayed]

Another unfortunate scenario in which this issue can arise is when you have installed an unsupported OS on a given server platform. For instance, Server 2008 R2 on a DL380 G4. In this case many of the PSP components will not be installed and therefore will not work correctly.
40 comments:
  1. Thanks for the info, solved the same problem for me.


  2. Thanks – Great post, but no luck resolving this one for me.

    The only difference I get is that my SNMP service is NOT dependent on the HP services, therefore they don't restart when I restart the SNMP service.

    Perhaps they're still not talking to each other properly. Anyone got any thoughts on what to do?

    Mick


  3. The HP services are dependent on the SNMP service. These are:

    HP Foundation Agents
    HP NIC Agents
    HP Insight Server Agents
    HP Insight Storage Agents

    A couple of things you could try. You could manually create the dependencies in the registry by adding SNMP to the “DependOnService” key for each HP required service. This will at least ensure that SNMP is fully started before the HP services start.

    You might also re-install the PSP and force the updates to install over what is installed already. With SNMP already installed/started whatever associations are missing should be created.

    Peter
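A minimal sketch of that registry approach, driven through sc.exe; the service key name CqMgHost for HP Foundation Agents is an assumption, so confirm each key name with sc getkeyname first:

rem Look up the real service key name from the display name
sc getkeyname "HP Foundation Agents"

rem Review the current dependency list before changing it
sc qc CqMgHost

rem CAUTION: depend= replaces the whole dependency list, so repeat any
rem existing dependencies, separated by forward slashes (e.g. depend= SNMP/Tcpip)
sc config CqMgHost depend= SNMP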


  4. Thanks for sharing this information, this solved the issue for me too although SNMP was running!


  5. Ensure you have the management agents installed.


  6. In my case, installing the ProLiant Support Pack on a Windows 2008 R2 64-bit server failed with missing dependencies, namely the Enhanced System Management Controller driver and the iLO Advanced driver.
    Since I could not find the 64-bit R2 version of the driver, I ended up installing the 2008 64-bit version. It worked when I started the install in Windows 2008 SP1 compatibility mode. After that I was able to install the ProLiant Support Pack successfully, and the System Management Homepage showed all items.


  7. You the man. Saved me a few hours trying to figure this out on my own. Many thanks


  8. Worked a charm, thanks for sharing.


  9. To whoever posted the workaround of installing the 2008 64-bit version in compatibility mode:

    Thank you, thank you, thank you!! Just wish I'd found this post 3 hours and about 20 (very very slow) HP downloads earlier 😉


  10. Thank you for the post. Worked perfect.


  11. Thank You VERY MUCH! Not much out there on this fix… saved me probably another hour of searching.. Big Beer for you!


  12. Worked for me, thanks.


  13. Thaaaank You!!!!!!!!!!!!!!!!!!!!!!


  14. Many thanks. Saved additional hours of hair pulling.


  15. “Another unfortunate scenario in which this issue can arise is when you have installed an unsupported OS on a given server platform. For instance, Server 2008 R2 on a DL380 G4. In this case many of the PSP components will not be installed and therefore will not work correctly.”

    So, is there any way at all to get HP Insight to work on a DL380 G4 with 2008 R2?


  16. Cheers dude worked a treat for me!


  17. Great post, thanks for your help.


  18. Worked perfect thanks!


  19. Thank you very much!


  20. Thanks!


  21. Very clear and helpful, thanks!


  22. Adding my thanks. I reckon I expected the installer to notify me of any missing dependencies (I did not have SNMP installed)…


  23. If you're using Linux as your preferred OS and you're experiencing the same issue, try the following:

    1. /etc/init.d/hp-snmp-agents [re]start
    2. /etc/init.d/snmpd [re]start
    3. cpqacuxe -stop && sleep 5 && cpqacuxe --enable-remote

    Make sure your /etc/snmp/snmpd.conf is correct.

    Best regards


  24. Thanks for the info, very useful 🙂


  25. Cheers big ears


  26. FAB – Great. Saved me loads of time tracking this down. Thanks muchly.


  27. worked great, thanks!


  28. it worked for me, thanks.


  29. Brilliant…


  30. Thanks that worked great!!


  31. Thanks it worked for me as well


  32. Steven @ June 28, 2012 3:52 AM

    Any chance you might explain or show an example of a ‘correct’ snmp.conf?


  33. Just adding my thanks as well. It's best to use the SmartStart CD to assist you in installing the Windows OS, but my ML350 Gen8 didn't come with any CDs/DVDs except documentation, which I did not read (oops).


  34. I have solved the problem following your instruction. Thanks


  35. Thanks for this information, this solved the issue for me too although SNMP was running!

    Really great tip.


  36. THANKS … A LOT


  37. Hi,

    In my case I have already configured an SNMP trap for the NOC server, but the HP System Management Homepage doesn't show any details; it's completely blank.

    I configured and checked everything as per the screenshot above, with no luck. Can anyone help me out with this?


  38. it worked for me, thanks.
