RDM to VMDK – reposted blog

Source: How to convert a physical RDM into a VMDK disk

I found several VMs with the primary disk in VMDK format, followed by a second disk (dedicated to the applications, mainly databases, app servers or mail) in physical RDM format.

Since part of the project is to have a complete data protection solution based entirely on the VMware VADP libraries, we suggested that the customer convert all the RDM disks into VMDK disks. With the latest vSphere releases there are really no performance differences to justify RDM disks, except for Microsoft clusters or other situations requiring disks shared between VMs; on the other hand, it's not possible to take snapshots of physical RDM disks, which prevents backing up those disks with VADP and forces the use of backup agents inside the guest OS.

To validate the procedure, and to reassure the customer, I ran a quick test to show the conversion process. This task can only be done once the infrastructure has been upgraded at least to vSphere 4, since (based on KB 1005241) a Storage vMotion of a virtual RDM on ESX 3.5 does not convert it to a VMDK, while this is possible with vSphere 4.0 or newer versions.

In my lab, I created a simple Windows 2003 VM with a primary 20 GB VMDK disk and a secondary 5 GB physical RDM disk:

Physical RDM disk

To complete the conversion, the VM must be shut down at least once, so you need to schedule the activity with the customer.

Once the VM is stopped, you need to edit its settings and remove the RDM disk. Write down the Virtual Device Node (0:1 in my example), because later you will have to reconnect the disk with the same value.

Remove RDM disk

Select the option “Deletes files from datastore”. Don’t be scared: since it’s a physical RDM, the only thing that will be deleted is the pointer to the RDM disk, not the disk itself.

Then, add a new RDM disk to the VM using the same LUN you removed before:

Add a new RDM disk

This time, choose "virtual" compatibility mode and select the same device node value as before. This lets the guest OS think the disk is the same as before.

Add virtual RDM

Now you can start the VM again. The downtime can be reduced to a couple of minutes if you do the whole operation quickly. Then initiate a Storage vMotion: it will automatically convert the virtual RDM disk into a VMDK disk without further downtime (but be prepared for downtime if you do not have a Storage vMotion license and therefore need a cold migration):

Disk converted into VMDK format
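For reference, here is a rough PowerCLI sketch of the same remove / re-add / Storage vMotion sequence. The VM, LUN and datastore names are placeholders, and the cmdlet parameters should be double-checked against your PowerCLI version – this is a sketch of the idea, not a tested script:

# Hypothetical names - adjust to your environment
Connect-VIServer vcenter.lab.local
$vm = Get-VM "w2k3-test"
Shutdown-VMGuest -VM $vm -Confirm:$false          # the VM must be powered off once

# 1. Remove the physical RDM. "Delete permanently" only removes the mapping
#    (pointer) file, not the data on the LUN itself.
$rdm = Get-HardDisk -VM $vm -DiskType RawPhysical
$lun = $rdm.ScsiCanonicalName                     # note the backing LUN before removal
Remove-HardDisk -HardDisk $rdm -DeletePermanently -Confirm:$false

# 2. Re-add the same LUN in *virtual* compatibility mode, then verify in the
#    client that it landed on the same virtual device node (0:1 in the example).
New-HardDisk -VM $vm -DiskType RawVirtual -DeviceName "/vmfs/devices/disks/$lun" | Out-Null

Start-VM -VM $vm

# 3. Storage vMotion the VM; choosing an explicit thin/thick destination format
#    (rather than "same as source") is what converts the virtual RDM to a VMDK.
#    Newer PowerCLI releases expose this as -DiskStorageFormat on Move-VM.
Move-VM -VM $vm -Datastore (Get-Datastore "DS-VMFS-01")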

UPGRADING TO VMWARE VSPHERE 5.5

Friday, October 25, 2013

Source: Exit | the | Fast | Lane

Like all good stewards of all things virtual, I need to stay current on the very foundations of everything we do: hypervisors. So this post contains my notes on upgrading from vSphere 5.1 to the new vSphere 5.5 build. This isn't meant to be an exhaustive step-by-step 5.5 upgrade guide, as that's already been done, and done very well (see my resources section at the bottom for links). This is purely my experience upgrading, with a few interesting call-outs along the way that I felt were worth writing down should anyone else encounter them.

The basic upgrade sequence goes like this, depending on which of these components you have in play:

  1. vCloud components
  2. View server components
  3. vCenter (and vSphere Clients)
  4. SRM/ VR/ vCOPS/ VDP/ VSA
  5. ESXi hosts
  6. VM Tools (and virtual hardware –see my warning in the clients section)
  7. vShield components
  8. View agent

The environment I’m upgrading consists of the following:

  • 2 x Dell PE R610 hosts running ESXi 5.1 on 1GB SD
  • 400GB of local SAS
  • 3TB Equallogic iSCSI storage
  • vCenter 5.1 VM (Windows)

vCenter 5.5

As was the case in 5.1, there are still two ways to go with regard to vCenter: Windows-based or the vCenter Server Appliance (vCSA). The Windows option is fully featured and capable of supporting the scaling maximums as well as all published configurations. The vCSA is more limited, but a bit less so in 5.5. Most get excited about the vCSA because it's a Linux appliance (no Windows license), it uses vPostgres (no SQL license) and it is dead simple to set up via OVF deployment. The vCSA can use only Oracle as an external database; MS SQL Server support appears to have been removed. The scaling maximums of the vCSA have increased to 100 hosts / 3,000 VMs with the embedded database, which is a great improvement over the previous version. There are a few things that still require the Windows version, however, namely vSphere Update Manager and vCenter Linked Mode.

Although I also deployed the vCSA, my first step was upgrading my existing Windows vCenter instance. The Simple Install method should perform a scripted install of the four main components. This method didn't work completely for me, as I had to install the Inventory and vCenter services manually. This is also the dialog from which you would install the VUM service on your Windows vCenter server.

I did receive an error about VPXD failing to install after the vCenter Server installation ran for the first time. A quick reboot cleared this up. With vCenter upgraded, the vSphere Client also needs to be upgraded anywhere you plan to access the environment using that tool. VMware is making it loud and clear that the preferred method to manage vSphere moving forward is the web client.

vCenter can support up to 10,000 active VMs now on a single instance, which is a massive improvement. If you plan to possibly power on more than 2000 VMs simultaneously, make sure to increase the number of ephemeral ports during the port configuration dialog of the vCenter setup.

Alternatively, the vCSA is very easy to deploy and configure with some minor tweaks necessary to connect to AD authentication. Keep in mind that the maximum number of VMs supported on the vCSA with the embedded DB is only 3000. To get the full 10,000 out of the vCSA you will have to use an Oracle DB instance externally. The vCSA is configured via the web browser and is connected to by the web and vSphere Clients the same way as its Windows counterpart.

If you fail to connect to the vCSA through the web client and receive a “Failed to connect to VMware Lookup Service…” error like this:

…from the admin tab in the vCSA, select yes to enable the certificate regeneration option and restart the appliance.

Upgrading ESXi hosts:

The easiest way to do this is with hosts in an HA cluster attached to shared storage, as many will have in their production environments. With vCenter upgraded, move all VMs to one host, upgrade the other, rinse and repeat until all hosts are upgraded. Zero downtime. For lab environments where you don't have the luxury of shared storage, two vCenter servers can be used to make this easier, as I'll explain later. If you don't have shared storage in your production environment and want to try that method, do so at your own risk.
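If you like to script it, the evacuation part of this dance is a one-liner in PowerCLI. A small sketch with placeholder host names, assuming a DRS/HA cluster with shared storage and vMotion available:

Connect-VIServer vcenter.lab.local

# Evacuate host 1: -Evacuate vMotions its powered-on VMs to the remaining
# host(s) in the cluster.
Set-VMHost -VMHost "esx01.lab.local" -State Maintenance -Evacuate | Out-Null

# ...upgrade esx01 via ISO or VUM, reboot, then bring it back into service:
Set-VMHost -VMHost "esx01.lab.local" -State Connected | Out-Null

# Rinse and repeat for esx02.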

There are two ways to go to upgrade your ESXi hosts: local ISO (scripted or not) or vSphere Update Manager. I used both methods, one on each, to update my hosts.

ISO Method

The ISO method simply requires that you attach the 5.5 ISO to a device capable of booting your server: either USB or Virtual Media via IPMI (iDRAC). Boot, walk through the steps, upgrade. If you’ve attached your ISO to Virtual Media, you can specify within vCenter to boot your host directly to the virtual CD to make this a little easier. Boot Options for your ESXi hosts are buried on the Processors page for each host in the web or vSphere Client.

VUM method:

Using VUM is an excellent option, especially if you already have this installed on your vCenter server.

  • In VUM console page, enter “admin view”
  • Download patches and upgrades list
  • Go to ESXi images and import 5.5 ISO
  • Create baseline during import operation

  • Scan hosts for non-compliance, attach baseline group to the host needing upgrade
  • Click Remediate, walk through the screens and choose when to perform the operation

The host will go into maintenance mode, disconnect from vCenter and sit at 22% completion for 30 minutes, or longer depending on your hardware, while everything happens behind the scenes. Feel free to watch the action in KVM or IPMI.

When everything is finished and all is well, the host will exit maintenance mode and vCenter will report a successful remediation.
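The same scan/attach/remediate flow can also be driven from the Update Manager PowerCLI cmdlets if you prefer scripting it. A rough sketch, assuming the VUM PowerCLI plug-in is installed and the upgrade baseline has already been created from the imported ISO (names are examples):

Connect-VIServer vcenter.lab.local

$esx      = Get-VMHost "esx02.lab.local"
$baseline = Get-Baseline -Name "ESXi 5.5 Upgrade"      # example baseline name

Attach-Baseline -Baseline $baseline -Entity $esx
Test-Compliance -Entity $esx                            # scan for non-compliance
Get-Compliance  -Entity $esx

# Remediation puts the host in maintenance mode, upgrades and reboots it;
# expect it to sit at ~22% in vCenter for a good while.
Remediate-Inventory -Entity $esx -Baseline $baseline -Confirm:$false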

Pros/ Cons:

  • The ISO method is very straightforward and likely how you built your ESXi host to begin with. Boot to media, upgrade datastores, upgrade host. Speed will vary by the media used and the server hardware, whether connected to USB directly or to an ISO via Virtual Media.
    • This method requires a bit more hand-holding: IPMI, virtual media, making choices, ejecting media at the right time… nothing earth-shattering, but not light-touch like VUM.
  • If you already have VUM installed on your vCenter server and are familiar with its operation, then upgrading it to 5.5 should be fairly painless. The process is also entirely hands-off: you press go and the host gets upgraded magically in the background.
    • The downside to this is that the vSphere ISO is stored on, and the update procedure is processed from, your vCenter server. This could add time delays loading everything from vCenter to your physical ESXi hosts, depending on your infrastructure.
    • This method is also only possible using the Windows version of vCenter and is one of the few remaining required uses of the vSphere Client.

No Shared Storage?

Upgrading hosts with no shared storage is a bit more trouble, but still totally doable without having to manually SCP VM files between hosts. The key is using two vCenter instances, and the vCSA works great as the second instance for this purpose. Simply transfer host ownership between the vCenter instances. As long as both hosts are "owned" by the same vCenter instance, any changes to inventory will be recognized and committed. Any VMs reported as orphaned should also be resolved this way. You can transfer vCenter ownership back and forth any number of times without negatively impacting your ESXi hosts. Just keep all hosts on one or the other! The screenshot below shows two hosts owned by the Windows vCenter (left) and the vCSA (right) showing those same hosts disconnected. No VMs were negatively impacted doing this.

The reason this is necessary is that VM migrations are controlled and handled by vCenter, cold migrations included, which is all you'll be able to do here. Power off all VMs, migrate them off one host, fire up vCenter on the other host, transfer host ownership, move the final vCenter VM off, then upgrade that host. Not elegant, but it will work and hopefully save some pain.
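A hedged PowerCLI sketch of the ownership transfer itself, with placeholder names (the same thing can of course be done from either client):

$vcWin  = Connect-VIServer vcenter-win.lab.local
$vcVcsa = Connect-VIServer vcsa.lab.local

# Optionally disconnect and remove the host from the first vCenter's inventory...
Get-VMHost -Server $vcWin -Name "esx01.lab.local" |
    Set-VMHost -State Disconnected |
    Remove-VMHost -Confirm:$false

# ...then add it to the second one. -Force takes ownership even if the host
# still believes it is managed by the other vCenter. Running VMs are untouched.
Add-VMHost -Server $vcVcsa -Name "esx01.lab.local" `
    -Location (Get-Datacenter -Server $vcVcsa -Name "Lab") `
    -User root -Password 'supersecret' -Force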

vSphere Clients

The beloved vSphere Client is now deprecated in this release, which comes with a heavy push towards exclusive use of the Flash-based web client. My advice: start getting very familiar with the web client, as this is where the future is heading, like it or not. Any new features enabled in vCenter 5.5 will only be accessible via the web client.

Here you can see this clearly called out on the now-deprecated vSphere Client login page:

WARNING – A special mention needs to be made about virtual hardware version 10. If you upgrade your VMs to the latest hardware version, you will LOSE the ability to edit their settings in the vSphere Client. Yep, one step closer to complete obsolescence. If you're not yet ready to give up using the vSphere Client, you may want to hold off upgrading the virtual hardware for a bit.
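Before touching virtual hardware, a quick PowerCLI one-liner can report where your VMs currently stand (in PowerCLI 5.x the property is called Version; newer releases also expose HardwareVersion):

Get-VM | Sort-Object Version | Select-Object Name, Version, PowerState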

vSphere Web Client

The web client is not terribly different from the fat client, but its layout and operational methods will take some getting used to. Everything you need is in there; it may just take a minute to find it. The recent tasks pane is also off to the right now, which I really like.

The familiar Hosts and Clusters view:

Some things just plain look different, most notably the vSwitch configuration. Also the configuration items you’ve grown used to being in certain property menus are now spread out and stored in different places. Again, not bad just…different.

I also see no Solutions and Applications section in the web client, only vApps. So things like the Dell Equallogic Virtual Storage Manager would have to be accessed via the old vSphere Client.

Client Integration Plugin

To access features like VM consoles, file transfers to datastores and OVF deployments via the web client, the 153MB Client Integration plugin must be installed. Attempting to use any features that require this should prompt you for the client install. One of the places it can also be found is by right-clicking while browsing within a datastore.

The installer will launch and require you to close IE before continuing.

VM consoles now appear in additional browser tabs which will take some getting used to. Full screen mode looks very much like a Windows RDP session which I like and appreciate.

Product Feature Request

This is a very simple request (plea) to VMware to please combine the Hosts and Clusters view with the VMs and Templates view. I have never seen any value in having these separated. Datacenters, hosts, VMs, folders and templates should all be visible and manageable from a single pane. There should be only 3 management sections separating the primary vSphere inventories: Compute (combined view), storage and network. Clean and simple.

Resources:

vSphere 5.5 upgrade order (KB): Link

vSphere 5.5 Configuration Maximums: Link

Awesome and extensive vSphere 5.5 guide: Link

Resolution for vCSA FQDN error: Link

Scaled up VM-Level Protection now GA (Reblogged from Virtual Geek)

Source: Virtual Geek

[UPDATED – 11/20/14, 7:32AM ET: VSAN notes, VCOPS link]

For people who love the idea of replication (local for recovery purposes and remote for DR purposes), but want it as software only (no hardware dependency at all), and with VM-level granularity – a new choice is now here.

“Hello World” from Recoverpoint for Virtual Machines.

You can get more here.

Think:

  • software-only VM-level IO splitter
  • software-only Recoverpoint Appliance
  • Rich services – deep device counts, broad replication RPOs (from sync to async, time based, change based).
  • Snap-and-replicate techniques (think vSphere Replication, as an example) give you point-in-time copies. RecoverPoint is a continuous replication (journalled I/O) technique – you can recover to any point in time.
  • Local and Remote replicas
  • Super efficient: compressed, deduped – in our experience the WAN efficiency is one of the highest of any replication approach on the market
  • Larger scale than vSphere Replication.   Scale target is about 1000 VMs per vSphere cluster.

For EMCers, EMC partners, and EMC customers (talk to your partner/EMCer) you can play with Recoverpoint for Virtual Machines using vLab starting NOW (http://portal.demoemc.com)

BTW – vLab is our massive at-scale self-service portal for all our products and solutions (tens of thousands of labs every month, many times that in VMs created and destroyed) – brought to the world on an Enterprise Hybrid Cloud built on the vRealize Suite on Vblock, by the way!

This means that ANY storage model – whether it’s EMC XtremIO, a customer using VSAN, a customer using an NFS datastore – anything – can have rich replication capabilities.   BTW, one important note – there is a restriction on the current release where the VMs that are protected can be on anything, but the Recoverpoint journal needs to be on a VMFS datastore.   This will be lifted in the future (clearly it must be if RP4V will be included with EMC’s Project Mystic – aka EMC’s EVO:RAIL++ appliance).

Also, a great link I saw after posting – the always awesome Matt Cowger has created a VCOPS aka vRealize Operations adapter for Recoverpoint for VMs, aka RP4VM4VCOPS 🙂  Get it here: http://www.exaforge.com/rp4vm4vcops/

Check out the demo from Itzik Reich below.   Expect it to be available for anyone to use freely (of course, if you want support, you need to purchase) and easily downloadable soon!

Count The Ways – Flash as Local Storage to an ESXi Host

Posted: 21 Jul 2014   By: Joel Grace

When performance trumps all other considerations, flash technology is a critical component to achieve the highest level of performance. By deploying Fusion ioMemory, a VM can achieve near-native performance results. This is known as pass-through (or direct) I/O.

The process of achieving direct I/O involves passing the PCIe device through to the VM, where the guest OS sees the underlying hardware as its own physical device. The ioMemory device is then formatted with a file system by the guest OS, rather than presented as a virtual machine file system (VMFS) datastore. This provides the lowest latency and the highest IOPS and throughput. Multiple ioMemory devices can also be combined to scale to the demands of the application.
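For what it's worth, the pass-through assignment itself can be scripted with PowerCLI. A hedged sketch with placeholder host, VM and device names – it assumes VT-d is enabled and the card has already been marked for passthrough on the host:

$esx = Get-VMHost "esx01.lab.local"
$vm  = Get-VM "sql01"                      # must be powered off, with full memory reservation

# Find the ioMemory card among the host's PCI devices available for passthrough.
$pci = Get-PassthroughDevice -VMHost $esx -Type Pci |
       Where-Object { $_.Name -match "Fusion" }

# Hand the device to the VM; the guest then needs the VSL driver installed.
Add-PassthroughDevice -VM $vm -PassthroughDevice $pci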

Another option is to use ioMemory as a local VMFS datastore. This solution provides high VM performance while maintaining the ability to utilize features like thin provisioning, snapshots, VM portability and Storage vMotion. With this configuration, the ioMemory can be shared by VMs on the same ESXi host, and specific virtual machine disks (VMDKs) can be stored there for application acceleration.
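Creating that local datastore can likewise be scripted. A sketch under the assumption that the card shows up as a local disk once the ESXi VSL driver is installed; the device identifier below is a placeholder:

$esx = Get-VMHost "esx01.lab.local"

# Identify the flash device's canonical name (naa./eui. identifier).
Get-ScsiLun -VMHost $esx -LunType disk |
    Select-Object CanonicalName, CapacityGB, Vendor

# Carve a VMFS datastore out of it for VMDK placement.
New-Datastore -VMHost $esx -Name "ioMemory-local" -Path "eui.xxxxxxxxxxxxxxxx" -Vmfs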

Either of these options can be used for each of the following design examples.

Benefits of Direct I/O:

  • Raw hardware performance of flash within a VM with direct I/O
  • Provides the ability to use RAID across ioMemory cards to drive higher performance within the VM
  • Use of any file system to manage the flash storage

Considerations of Direct I/O:

  • The ESXi host may need to be rebooted and the CPU VT flag enabled
  • The Fusion-io VSL driver will need to be installed in the guest VM to manage the device
  • Once assigned to a VM, the PCI device cannot be shared with any other VMs

Benefits of Local Datastore:

  • High performance of flash storage for VM VMDKs
  • Maintain VMware functions like snapshots and Storage vMotion

Considerations of Local Datastore:

  • Not all VMDKs for a given VM have to reside on local flash; use shared storage for the OS and flash for application data VMDKs

SQL/SIOS

Many enterprise applications provide their own high availability (HA) features when deployed in bare-metal environments. These features can also be used inside VMs to provide an additional layer of protection to an application, beyond that of VMware HA.

Two great SQL examples of this are Microsoft's AlwaysOn Availability Groups and SteelEye DataKeeper. Fusion-io customers leverage these technologies in bare-metal environments to run all-flash databases without sacrificing high availability. The same is true for virtual environments.

By utilizing shared-nothing cluster aware application HA, VMs can still benefit from the flexibility provided by virtualization (hardware abstraction, mobility, etc.), but also take advantage of local flash storage resources for maximum performance.

Benefits:

  • Maximum application performance
  • Maximum application availability
  • Maintains the software-defined datacenter

Operational Considerations:

  • 100% virtualization is a main goal, but performance is critical
  • Does the virtualized application have additional HA features?
  • A SAN/NAS-based datastore can be used for Storage vMotion if a host needs to be taken offline for maintenance

Citrix

The Citrix XenDesktop and XenApp application suites also present interesting use cases for local flash in VMware environments. Oftentimes these applications are deployed in a stateless fashion via Citrix Provisioning Services, where several desktop clones or XenApp servers boot from centralized read-only golden images. Citrix Provisioning Services stores all data changes during the user's session in a user-defined write cache location. When a user logs off or the XenApp server is rebooted, this data is flushed clean. The write cache can be stored across the network on the PVS servers, or on local storage devices. Storing this data on a local Fusion-io datastore on the ESXi host drastically reduces access time to active user data, making for a better Citrix user experience and higher VM density.

Benefits:

  • Maximum application performance
  • Reduced network load between VMs and the Citrix PVS server
  • Avoids slow performance when the SAN is under heavy I/O pressure
  • More responsive applications for a better user experience

Operational Considerations

  • Citrix Personal vDisks (persistent desktop data) should be directed to the PVS server storage for resiliency.
  • PVS vDisk images can also be stored on ioDrives in the PVS server, further increasing performance while eliminating the dependence on the SAN altogether.
  • ioDrive capacity is determined by Citrix write cache sizing best practices, typically a 5GB .vmdk per XenDesktop instance.

70 desktops x 5GB write cache = 350GB total cache size (365GB ioDrive could be used in this case).


VMware users can boost their systems to achieve maximum performance and acceleration using flash memory. Flash memory maintains maximum application availability during heavy I/O pressure and makes your applications more responsive, providing a better user experience. Flash can also reduce network load between VMs and the Citrix PVS server. Click here to learn more about how flash can boost performance in your VMware system.

Joel Grace, Sales Engineer
Source: http://www.fusionio.com/blog/count-the-ways.
Copyright © 2014 SanDisk Corporation. All rights reserved.

Migrate XenServer VM to VMware ESXi 5.5

The following was found on Experts-Exchange

1. Download and install VMware vCenter Converter Standalone 5.0
2. Select "Convert machine"
3. Select source type "Powered-on machine"
4. Put the details for the remote machine
5. Enter the vCenter or ESXi/ESX host details
6. Give it a name and select a location
7. Select a datastore
8. Finish and wait

Convert from XenServer 5.6SP2 to VMWare ESXi 5.

The following is taken from http://didyourestart.blogspot.com/2013/02/convert-from-xenserver-56sp2-to-vmware.html

It's best practice to rebuild rather than convert. I only converted machines that couldn't be rebuilt, were being replaced soon (but not ready to replace just yet), or when I was short on time and had to move them immediately.

Server 2008 / 2008 R2

  1. Download and install VMWare Converter 4.3, yes, the older version
  2. Disable any services necessary (ie, IIS, etc)
  3. Ensure you're logged in through the default view, not RDP.
  4. Uninstall XenTools and reboot
  5. Go into Device Manager
  6. You’ll see that the SCSI Controller doesn’t have a driver.
    1. VMWare converter won’t see the disks because of this
  7. Right click the SCSI Controller
  8. Update Driver Software
  9. Browse my computer for driver software
  10. Let me pick from a list of device drivers on my computer
  11. (Standard IDE ATA/ATAPI controller)
    1. IDE Channel
    2. If you get the wrong one you’ll likely see a BSOD upon reboot
  12. Reboot
  13. Open VM Converter
  14. Convert Machine
  15. Select “This local Machine”
  16. Note that “View source details…” lights up. Click it
  17. Ensure that a Source disk is listed (if you didn’t change the controller driver then none will be listed and it will error when you attempt to convert)
  18. Type in the info for one of your VMWare hosts
  19. Select your datastore target
  20. Change RAM, CPU, etc as fit
  21. Finish and wait
  22. Once it's completed, shut down the VM in XenServer
  23. In the VMWare console edit the VM.
  24. Delete the CDROM and Hard Disk
  25. Add a new Hard Disk as the SCSI 0:0 and point to the VMDK
  26. Add new CDROM with basic settings
  27. Start the machine and install tools
  28. Note that the VM Version is listed as 4
    1. Shutdown the VM
    2. Right click the VM and choose the option for “Upgrade Virtual Hardware”
    3. It should now show as a vmx-09
  29. Change the nic to vmxnet3 if desired
  30. Boot and change IP address if needed
  31. Uninstall VMWare converter

Since writing the Windows 2008 section, I have tried something new that worked amazingly well with little downtime. I did this with Windows 2008 RTM x32 and Windows 2008 R2 successfully.

  1. Download and install VMware Converter 4.3. A newer version may work better.
  2. Open VM Converter
  3. Convert Machine
  4. Select “This local Machine”
  5. Type in the info for one of your VMWare hosts
  6. Select your datastore target
  7. I had to edit the devices and change the controller to IDE
  8. Finish and wait
  9. At this point it's extremely important to remember that we don't want both VMs on at the same time. BUT I wanted to ensure that my new VMware VM would boot…
  10. Change Settings
    1. Change network to an isolated network off production.
  11. Delete the CDROM and Hard Disk
  12. Add a new Hard Disk as the SCSI 0:0 and point to VMDK
  13. Add new CDROM with basic settings
  14. Start the machine
  15. Uninstall XenServer Tools
  16. Reboot
  17. Install VMWare Tools
  18. Shutdown
  19. Note that the VM Version is listed as 4
    1. Shutdown the VM
    2. Right click the VM and choose the option for “Upgrade Virtual Hardware”
    3. It should now show as a vmx-09
  20. Boot the server and ensure it boots
  21. Shutdown VMWare VM
  22. Shutdown XenServer VM
  23. Edit VMWare VM and change NIC to production network
  24. Boot and change IP address if needed
  25. Uninstall VMWare converter

Windows 2008: You may have to delete the NIC (which was listed as Flexible) and add a new E1000 NIC.

Check the InitialKeyboardIndicators key, which may get messed up.
This is found under HKEY_USERS\.DEFAULT\Control Panel\Keyboard.
It would be set to 2147483648 after conversion.
Changing this back to 0 made it work as expected.
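A quick way to put it back, for example from an elevated PowerShell prompt inside the converted guest (path and value as above):

Set-ItemProperty -Path "Registry::HKEY_USERS\.DEFAULT\Control Panel\Keyboard" `
    -Name "InitialKeyboardIndicators" -Value "0"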

Migrating XenServer 6.1 to VMware 5.1

Now this is the story all about how
Our life got flipped, turned upside down
And we’d like to take a minute, so just sit right there
And we'll tell you all about how we moved to VMware.

In Xenserver Enterprise born and raised
In the server room where we spent most of our days
Chilling out, maxing, relaxing all cool
And shooting all the servers into the pool
When a couple of updates, they were up to no good
Started making trouble in our neighbourhood
We had numerous crashes and the users got scared
So we said “We’re moving all the servers onto VMware”

We asked for advice and it became clear
VMware is the game that we should play here
If anything I could say that this software was rare
But I thought nah, forget it, let’s get on VMware!

We started moving servers about seven or eight
And couple of days later we were almost straight
Looked at our kingdom we were finally there
Sitting all our servers on Vmware!

The real story of how and why we moved is slightly more complex.

We originally made the decision to invest in Xenserver in 2009 and by the end of that year had bought Xenserver Enterprise, 2 Dell R710 servers, a Dell AX4-5 SAN and also a QLogic 5602 fibre switch. We then had to wait for Xenserver 5.5 Update 2 to allow us to use our hardware. (Yeah I know we should have spent more time with the HCL.)

Well, after waiting we soon had XenServer up and running, and over the next couple of months we moved most of our infrastructure into Xen using the Xen convert tool. Things worked well and we had very little trouble. Towards the end of 2010 we put the 5.6 update on. Still no problems.

We then had a spate of issues with a hardware server mysteriously rebooting, which we thought we had nailed down to a faulty memory module and/or a needed hotfix. No big issue; we simply bought a new set of memory and applied the hotfixes. We then added a new HP P2000 SAN due to needing more storage. Again all was good. Update to XenServer 6. Still all is good with the (virtual) world.

Fast forward to Feb 2013. Our virtual infrastructure needed expanding, so we went and bought a spanking new HP DL380, hooked it into the infrastructure, and then – disappointment. We needed the 6.1 update to add our new server to the farm. No problem, we think. One evening: shut down the VMs, do a rolling upgrade, power up the VMs. 2-3 hours' work and then home.

Little did we know.

At first everything seemed to go well. The update went onto the server and the VMs booted back up. The overall update time was about 5 hours due to a small issue with a couple of VMs not wanting to power up. No biggie; this kind of thing has happened before and I know the fix.

However, this was just the opening salvo in what would become a two-week campaign of intimidation and fear from XenServer. During the next two weeks I tried updating the Xen Tools on each VM (where it would let me), removing XenTools (again, if it would let me), and applying hotfixes to XenServer. And throughout this, VMs would hang; I would have to kill VMs and go through the destroy-domain procedure, recover VMs from snapshots, and detach the VHDs, rescan the SR and re-attach the VHD to recover the VM. The list of problems seemed endless.

After much research, reading of forums and speaking to people, the only real solution seemed to be a move away from XenServer. The question of which hypervisor to use was never in doubt – VMware.

We went for VMware Essentials, as we couldn't see ourselves using more than 3 physical servers. Alongside this, we decided to go with Veeam as our backup/DR solution.

So how did we do it?

We started off by trimming down the number of virtual boxes in our Xen environment so that they would run on just 2 older (Dell R710) physical servers. Then we took a server (HP DL380) that had been slated for Exchange (but not implemented), upped its memory and rebuilt the RAID array. We then created our VMware infrastructure using the 2 new HP DL380s. This was nice and easy and didn't cause any issues. Then came the most important part – moving the VMs.

This, as it turned out, was nice and straightforward. We just used VMware Converter and treated the Xen VMs as though they were physical boxes, putting them onto the RAID array of one of the servers. Once we had converted about half the VMs, we started moving the remaining VHDs onto a single SAN (the older, smaller Dell AX4-5).

Then came the fun. We removed the HP SAN from the Xen pool, reconfigured the zoning on the switch, and then reconfigured the P2000 from a 3.6 TB RAID 10 to a 5.4 TB RAID 5 config. There was a small issue installing the FC card and attaching the HBA in VMware, but a quick search online and a small update/hack later it was attached. We then moved all the VMDKs from the server to the SAN.

We moved the remaining VMs to the VMware infrastructure but left one of the XenServers running, because we had a completely screwed-up Linux install that was happily running a webserver. We moved the last VHD to the local storage on that XenServer and promptly put the whole nightmare out of our heads.

As for VMware – well, what can I say: two weeks later and everything is still running smoothly. We have Veeam doing a nightly backup to a local server, and our users aren't complaining. The final stage of the move will involve removing the final XenServer and then adding the Dell into the mix. We will then use the AX4-5 as an offsite replica target for Veeam to copy the essential VMs to every night.

Bye bye Citrix XenServer

http://www.vmguru.com/2013/10/bye-bye-citrix-xenserver/

As we are in the week of the obituaries, let's do another one. A few weeks ago, when vSphere 5.5 was released, I updated our Enterprise Hypervisor Comparison. As Citrix and Red Hat had both released a new version of their hypervisor products, I also added those. Normally I only need to check for new features added or product limits which have been raised. But this time was different!

In the column for the new Citrix XenServer 6.2, I had to remove features which were previously included in the product. WTF?

I rarely come across any XenServer deployments, and when I speak to colleagues, customers, etc., I often hear that Citrix XenServer is dead. The number of XenServer deployments I see and the number of customers changing to Hyper-V or vSphere seem to support this theory. Instead of adding new features and upgrading product limits, I had to retire numerous features.

Features retired in XenServer 6.2:

  • Workload Balancing and associated functionality (e.g. power-consumption based consolidation);
  • XenServer plug-in for Microsoft’s System Center Operations Manager;
  • Virtual Machine Protection and Recovery (VMPR);
  • Web Self Service;
  • XenConvert (P2V).


Features with no further development and removal in future releases:

  • Microsoft System Center Virtual Machine Manager (SCVMM) support;
  • Integrated StorageLink (iSL);
  • Distributed Virtual Switch (vSwitch) Controller (DVSC). The Open vSwitch remains fully supported and developed.

It has never been a secret that Microsoft and Citrix joined forces, but as expected Citrix XenServer had no place there, as Microsoft invested big in Hyper-V. Now it seems that Citrix has killed XenServer. With version 6.2 they moved XenServer to a fully open-source model, essentially giving it back to the community. Of course, much of XenServer already was open source, using code from the Xen Project, the Linux kernel and the Xen Cloud Platform (XCP) initiative. But with the retirement of many existing features, it seems that Citrix is stripping XenServer of all the Citrix add-ons before giving the basic core back to the open-source community.

Citrix still delivers a supported commercial distribution of XenServer, but when an identical free version is available… At the feature and functionality level, the only difference is that the free version of XenServer will not be able to use XenCenter for automated installation of security fixes, updates and maintenance releases. Free Citrix XenServer does include XenCenter for server management, but not for patch management. I doubt many customers will buy a version of XenServer for patch management alone.

It's interesting to see that Gartner has moved Citrix out of the Leaders quadrant and placed it in the Visionaries quadrant. Visionaries in the x86 server virtualization infrastructure market have a differentiated approach or product, but they aren't meeting their potential from an execution standpoint.

[Images: Gartner Magic Quadrant for x86 Server Virtualization Infrastructure, 2012 and 2013]

So it looks like Citrix has given up on XenServer and is going to focus on their core business, the desktop and the ecosystem of products around it.

Within their partnership with Microsoft they cannot (or may not) compete with Hyper-V, although XenServer has, in the past, always been a better product than Hyper-V. With the battle over application delivery intensifying, their focus needs to be on their main portfolio. VMware is targeting Citrix's application delivery platform with VMware Horizon Workspace, and on the desktop front Citrix faces two enemies: Microsoft Remote Desktop Services is targeting their Server Based Computing/XenApp platform, while VMware Horizon View is battling Citrix XenDesktop.

I wonder when we will hear that Citrix finally killed XenServer …..

How To Convert A Xen Virtual Machine To VMware

https://www.howtoforge.com/how-to-convert-a-xen-virtual-machine-to-vmware

This article explains how you can convert a Xen guest to a VMware guest. The steps described here assume advanced VMware and Xen knowledge.

Additional software requirements:

  • qemu
  • VMware Server 1.xx
  • VMware Converter
  • Knoppix LiveCD or the distribution’s first CD

Xen -> VMware VM Migration Steps (Kernel Step)

The kernel on the VM to be migrated must support fully virtualized operation. The kernels used for para-virtualized machines using RHEL/Fedora/CentOS as a guest do not support fully virtualized operation by default. The best way to deal with this is to also install a standard kernel in the machine, port the machine, and finally remove the Xen kernel.

1. Since this is a highly risky procedure, FIRST CREATE A BACK-UP OF YOUR VIRTUAL MACHINE!!!

2. Download a kernel with the same version number and architecture as the Xen kernel, except it should be a generic one. Use the distribution CD/DVD or any other repository to get it.

3. Use RPM tools to install the kernel.

4. Modify /etc/modprobe.conf to add the proper SCSI and network card modules:

alias eth0 xennet
alias scsi_hostadapter xenblk

will be replaced by

alias eth0 pcnet32
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix

Modify /etc/inittab by removing the # in front of the getty lines and putting a comment in front of the line containing the Xen console:

1:2345:respawn:/sbin/mingetty --noclear tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

This is a one-way action: once the kernel modules have been modified, you won't be able to properly start the machine under Xen any more, and you will receive a kernel panic error message.

Xen – > VMware VM Migration Steps (Disk Step)

To convert a Xen machine image to the .vmdk format used by VMware, a tool called qemu will be used. QEMU is a generic and open-source machine emulator and virtualizer. It is also a fast processor emulator that uses dynamic translation to achieve good emulation speed.

1. Download qemu from the DAG repository. Use the EL5 package for whichever Fedora/RHEL5/CentOS5 release you use.

http://dag.wieers.com/rpm/packages/qemu/

2. Convert the XEN machine to VMware:

qemu-img convert -O vmdk <source_xen_machine> <destination_vmware.vmdk>

3. At this point, we have a valid VMware Server 1.xx disk image. This can be powered on on any VMware Server instance. We need to do it anyway in order to build a .vmx file that will be used later. This stage also confirms whether the newly converted machine runs properly.

3.1 Create a new virtual machine. Do not create a new HDD, but use the previously created vmdk.

3.2 Power it on in order to validate that it is usable and to allow the machine to reconfigure itself.

4. Move the VMware Server virtual machine to a Windows workstation running VMware Converter.

5. Using VMware Converter, convert the VMware Server virtual machine to VMware ESXi.

Xen -> VMware VM Migration Steps (ESX Step)

1. Configure the virtual machine to boot first from CD-ROM drive.

2. Modify the machine’s HDD SCSI controller type from BUS Logic to LSI Logic.

Edit Virtual Machine Settings > SCSI Controller 0 > Change type > LSI Logic.

3. Boot using Knoppix or the distribution’s first CD.

4. Mount the VM’s disk and chroot to it.

5. Get the disk architecture using fdisk -l, and modify /etc/fstab accordingly.

6. Create a new initrd image. You also must know the version of the running kernel. For example, if you are running kernel 2.6.18-1234, then the initrd command would look like this:

# mkinitrd -v -f /boot/initrd-2.6.18-1234.img 2.6.18-1234

7. Edit /boot/grub/menu.lst to boot from this initrd.

8. Keep your fingers crossed and reboot the machine.

Don’t forget to re-configure your network card.

External references:

http://communities.vmware.com/docs/DOC-8300

Experts-Exchange

Xen isn't one of the platforms directly targeted as a source image by Converter, but you can use it to bring in the VMs as if they were running on a physical computer. You will probably want to uninstall any Xen tools before beginning the conversion process so they don't interfere when you boot the VMs up for the first time.

Also note that Converter converts LVM volumes to EXT3 partitions unless you use the cold cloning method; vCenter Converter does not maintain LVMs on the resulting virtual machine when converting a Linux operating system (http://kb.vmware.com/kb/1019398).

Citrix PVS 7.6 issues with v10 VMs on vSphere 5.5

Posted by citrixgeek1 on November 20, 2014

http://citrixgeeks.com/2014/11/20/pvs-7-6-issues-with-v10-vms-on-vsphere-5-5/

You know, there are only so many delays I’m willing to deal with in a day.

First, there’s the bug earlier that bit me during install.  Can’t have a space in the name of the OU.

Now, I find another one that gave me the redass.  HARD.

So you’ve got vSphere 5.5.  Excellent.  Citrix says it’s supported.  Everything looks fine.  The customer wants v10 VMs, which is a pain (mostly because VMware’s web interface is a kludgy, bug-ridden POS), but whatever.  NOTE:  Yes, I’m a VCP, too, so don’t think I’m just “hatin on the competition”.  It does need work!

So you build your base image, optimize it, and install the PVS Target device driver.

Reboot, and it hangs loading Windows.  I actually removed the bootux disabled entry using bcdedit just so I could see what was going on.

What’s the problem?

With v10 VMs, VMware attaches the virtual CDROM using SATA, not IDE.  Apparently the PVS target device driver can’t deal with that, so the VM never finishes loading.  NOTE:  It ONLY does this when there’s a vDisk attached – if you remove the vDisk from the target device, Windows will boot every time, so it’s not like the driver just outright breaks something.  Even more infuriating.
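If you want to know up front which of your v10 VMs carry the SATA (AHCI) controller, a rough PowerCLI sketch like the one below will list them (it simply inspects each VM's virtual hardware; names and filtering are up to you):

Get-VM |
    Where-Object {
        $_.ExtensionData.Config.Hardware.Device |
            Where-Object { $_ -is [VMware.Vim.VirtualAHCIController] }
    } |
    Select-Object Name, PowerState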

The solution?  Switch the CDROM to IDE.  Then, don’t forget to remove the SATA adapter from the VM.  Then after you’ve done that, make sure you go into device mangler and remove all the dead stuff – the SATA adapter itself, as well as any ATA channels that are no longer present.  You should still see two ATA channels present after the removal.  Basically, you want to remove all the grayed out items.  How?

Open an administrative command prompt, and enter "set devmgr_show_nonpresent_devices=1".

Then, “start devmgmt.msc”

Then click View, then Show hidden devices. Then expand the IDE ATA/ATAPI controllers section and remove all that stuff.

Again, remove only the grayed out items.

While you’re in there, check the Network Adapters, and remove all the grayed out NICs, too (but you already did that, right)?  *IF* you found any grayed out NICs and removed them, you should uninstall and reinstall the target device driver to ensure it binds to the correct NIC.

Then go ahead and re-run the imaging wizard, and you should FINALLY be able to pull an image of your VM.

Me?  I’m pretty disappointed in Citrix.  vSphere 5.5 has been out for a while now, and PVS 7.6 was only just released a couple months ago.  One would think they could have accounted for this, or at least made prominent note of it somewhere telling people about the problem.

But alas, here I am having to blog and complain about it.  Maybe next time..

Provisioningly,
CG1