RDM to VMDK – reposted blog

Source: How to convert a physical RDM into a VMDK disk

I found several VMs with the primary disk in VMDK format, followed by a second disk (dedicated to the applications, mainly databases, app servers or mail) in physical RDM format.

Since a part of the project is to have a complete data protection solution based entirely on the VMware VADP libraries, we suggested that the customer convert all the RDM disks into VMDK disks. With the latest vSphere releases, there are really no performance differences to justify RDM disks except for Microsoft clusters or other situations requiring shared disks between VMs; on the contrary, it’s not possible to take snapshots of physical RDM disks, which prevents backups of those disks and forces the use of backup agents inside the guest OS.

To validate the procedure, and to reassure the customer, I ran a quick test to show the conversion process. This task can only be done once the infrastructure is upgraded at least to vSphere 4, since (based on KB1005241) a Storage vMotion of a virtual RDM on ESX 3.5 does not convert it to VMDK, while this is possible with vSphere 4.0 or newer versions.

In my lab, I created a simple Windows 2003 VM with a primary 20 GB VMDK disk and a secondary 5 GB physical RDM disk:

Physical RDM disk

To complete the conversion, the VM must be shut down at least once, so you need to schedule the activity with the customer.

Once the VM is stopped, you need to edit its settings and remove the RDM disk. Write down the Virtual Device Node (0:1 in my example), because later you will have to reconnect the disk with the same value.
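
If you prefer to pull this information from a script rather than the GUI, here is a minimal pyVmomi sketch (not part of the original post; the vCenter address, credentials and VM name are placeholders) that lists a VM’s disks and flags physical RDMs together with their SCSI device node:

```python
# Hypothetical helper: list a VM's disks and flag RDMs with their SCSI node
# (e.g. 0:1) so the value can be noted before the mapping is removed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "win2003-rdm-test")  # placeholder name

devices = vm.config.hardware.device
controllers = {d.key: d for d in devices
               if isinstance(d, vim.vm.device.VirtualSCSIController)}

for disk in (d for d in devices if isinstance(d, vim.vm.device.VirtualDisk)):
    node = "{}:{}".format(controllers[disk.controllerKey].busNumber, disk.unitNumber)
    backing = disk.backing
    if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
        print("SCSI {} -> RDM ({}), LUN {}".format(
            node, backing.compatibilityMode, backing.deviceName))
    else:
        print("SCSI {} -> VMDK {}".format(node, backing.fileName))

Disconnect(si)
```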

Remove RDM disk

Select the option “Deletes files from datastore”. Don’t be scared: since it’s a physical RDM, the only thing that will be deleted is the pointer to the RDM disk, not the disk itself.

Then, add a new RDM disk to the VM using the same LUN you removed before:

Add a new RDM disk

This time, choose “virtual” compatibility mode, and select the same device node value as before. This lets the guest OS think the disk is the same one as before.

Add virtual RDM

Now you can start the VM again. The downtime can be reduced to a couple of minutes if you perform all the operations quickly. Then initiate a Storage vMotion: it will automatically convert the virtual RDM disk into a VMDK disk without further downtime (but be prepared for downtime if you do not have a Storage vMotion license and therefore need a cold migration):

Disk converted into VMDK format
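
For reference, here is a hedged pyVmomi sketch of that last step: a relocate (Storage vMotion) request that asks for thin-provisioned flat VMDK backings on the target datastore, which is what turns the virtual RDM into a regular VMDK. The function and variable names are mine, not from the original post, and you would still need to wait on the returned task:

```python
# Hypothetical sketch: Storage vMotion a VM and convert its disks (including
# the virtual RDM) to thin-provisioned flat VMDKs on the target datastore.
from pyVmomi import vim

def convert_disks_to_vmdk(vm, target_datastore):
    spec = vim.vm.RelocateSpec()
    spec.datastore = target_datastore
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            locator = vim.vm.RelocateSpec.DiskLocator()
            locator.diskId = dev.key
            locator.datastore = target_datastore
            # Requesting a flat (thin) backing drops the RDM mapping file
            locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
                diskMode="persistent", thinProvisioned=True)
            spec.disk.append(locator)
    return vm.RelocateVM_Task(spec=spec)

# task = convert_disks_to_vmdk(vm, datastore)  # monitor task.info.state until 'success'
```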

UPGRADING TO VMWARE VSPHERE 5.5

Friday, October 25, 2013

Source: Exit | the | Fast | Lane

Like all good stewards of all things virtual, I need to stay current on the very foundations of everything we do: hypervisors. So this post contains my notes on upgrading to the new vSphere 5.5 build from 5.1. This isn’t meant to be an exhaustive step-by-step 5.5 upgrade guide, as that’s already been done, and done very well (see my resources section at the bottom for links). This is purely my experience upgrading, with a few interesting call-outs along the way that I felt worth writing down should anyone else encounter them.

The basic upgrade sequence goes like this, depending on which of these components you have in play:

  1. vCloud components
  2. View server components
  3. vCenter (and vSphere Clients)
  4. SRM/ VR/ vCOPS/ VDP/ VSA
  5. ESXi hosts
  6. VM Tools (and virtual hardware – see my warning in the clients section)
  7. vShield components
  8. View agent

The environment I’m upgrading consists of the following:

  • 2 x Dell PE R610 hosts running ESXi 5.1 on 1GB SD
  • 400GB of local SAS
  • 3TB Equallogic iSCSI storage
  • vCenter 5.1 VM (Windows)

vCenter 5.5

As was the case in 5.1, there are still two ways to go with regard to vCenter: Windows-based or the vCenter Server Appliance (vCSA). The Windows option is fully featured and capable of supporting the scaling maximums as well as all published configurations. The vCSA is more limited, but a bit less so in 5.5. Most get excited about the vCSA because it’s a Linux appliance (no Windows license), it uses vPostgres (no SQL license) and is dead simple to set up via OVF deployment. The vCSA can use only Oracle as an external database; MS SQL Server support appears to have been removed. The scaling maximums of the vCSA have increased to 100 hosts/3000 VMs with the embedded database, which is a great improvement over the previous version. There are a few things that still require the Windows version, however, namely vSphere Update Manager and vCenter Linked Mode.

Although I did also deploy the vCSA, my first step was upgrading my existing Windows vCenter instance. The Simple Install method should perform a scripted install of the 4 main components. This method didn’t work completely for me, as I had to install the Inventory and vCenter services manually. This is also the dialog from which you would install the VUM service on your Windows vCenter server.

I did receive an error about VPXD failing to install after the vCenter Server installation ran for the first time. A quick reboot cleared this up. With vCenter upgraded, the vSphere Client also needs to be upgraded anywhere you plan to access the environment using that tool. VMware is making it loud and clear that the preferred method to manage vSphere moving forward is the web client.

vCenter can support up to 10,000 active VMs now on a single instance, which is a massive improvement. If you plan to possibly power on more than 2000 VMs simultaneously, make sure to increase the number of ephemeral ports during the port configuration dialog of the vCenter setup.

Alternatively, the vCSA is very easy to deploy and configure with some minor tweaks necessary to connect to AD authentication. Keep in mind that the maximum number of VMs supported on the vCSA with the embedded DB is only 3000. To get the full 10,000 out of the vCSA you will have to use an Oracle DB instance externally. The vCSA is configured via the web browser and is connected to by the web and vSphere Clients the same way as its Windows counterpart.

If you fail to connect to the vCSA through the web client and receive a “Failed to connect to VMware Lookup Service…” error like this:

…from the admin tab in the vCSA, select yes to enable the certificate regeneration option and restart the appliance.

Upgrading ESXi hosts:

The easiest way to do this is starting with hosts in an HA cluster attached to shared storage, as many will have in their production environments. With vCenter upgraded, move all VMs to one host, upgrade the other, rinse and repeat until all hosts are upgraded. Zero downtime. For your lab environments, if you don’t have the luxury of shared storage, 2 x vCenter servers can be used to make this easier, as I’ll explain later. If you don’t have shared storage in your production environment and want to try that method, do so at your own risk.

There are two ways to go to upgrade your ESXi hosts: local ISO (scripted or not) or vSphere Update Manager. I used both methods, one on each, to update my hosts.

ISO Method

The ISO method simply requires that you attach the 5.5 ISO to a device capable of booting your server: either USB or Virtual Media via IPMI (iDRAC). Boot, walk through the steps, upgrade. If you’ve attached your ISO to Virtual Media, you can specify within vCenter to boot your host directly to the virtual CD to make this a little easier. Boot Options for your ESXi hosts are buried on the Processors page for each host in the web or vSphere Client.

VUM method:

Using VUM is an excellent option, especially if you already have this installed on your vCenter server.

  • In the VUM console page, enter “admin view”
  • Download patches and upgrades list
  • Go to ESXi images and import 5.5 ISO
  • Create baseline during import operation

  • Scan hosts for non-compliance, attach baseline group to the host needing upgrade
  • Click Remediate, walk through the screens and choose when to perform the operation

The host will go into maintenance mode, disconnect from vCenter and sit at 22% completion for 30 minutes, or longer depending on your hardware, while everything happens behind the scenes. Feel free to watch the action in KVM or IPMI.

When everything is finished and all is well, the host will exit maintenance mode and vCenter will report a successful remediation.
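
Once the remediation completes, a quick script check can confirm that every host reports the new version. This is a hypothetical pyVmomi snippet (the connection object is assumed to exist), not something from the original post:

```python
# Print the product name and build of every host vCenter knows about.
from pyVmomi import vim

def report_host_versions(content):  # content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        product = host.config.product
        print("{}: {} (build {})".format(host.name, product.fullName, product.build))
```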

Pros/ Cons:

  • The ISO method is very straightforward and likely how you built your ESXi host to begin with. Boot to media, upgrade datastores, upgrade host. Speed will vary by the media used and server hardware, whether connected to USB directly or to an ISO via Virtual Media.
    • This method requires a bit more hand-holding. IPMI, virtual media, making choices, ejecting media at the right time…nothing earth-shattering, but not light touch like VUM.
  • If you already have VUM installed on your vCenter server and are familiar with its operation, then upgrading it to 5.5 should be fairly painless. The process is also entirely hands-off: you press go and the host gets upgraded magically in the background.
    • The downside to this is that the ESXi ISO is stored on, and the update procedure is processed from, your vCenter server. This could add time delays to load everything from vCenter to your physical ESXi hosts, depending on your infrastructure.
    • This method is also only possible using the Windows version of vCenter and is one of the few remaining required uses of the vSphere Client.

No Shared Storage?

Upgrading hosts with no shared storage is a bit more trouble, but still totally doable without having to manually SCP VM files between hosts. The key is using 2 vCenter instances, and the vCSA works great as the second instance for this purpose. Simply transfer your host ownership between vCenter instances. As long as both hosts are “owned” by the same vCenter instance, any changes to inventory will be recognized and committed. Any VMs reported as orphaned should also be resolved this way. You can transfer vCenter ownership back and forth any number of times without negatively impacting your ESXi hosts. Just keep all hosts on one or the other! The screenshot below shows 2 hosts owned by the Windows vCenter (left) and the vCSA on the right showing those same hosts disconnected. No VMs were negatively impacted doing this.

The reason this is necessary is that VM migrations, cold migrations included (which is all you’ll be able to do here), are controlled and handled by vCenter. Power off all VMs, migrate them off one host, fire up vCenter on the other host, transfer host ownership, move the final vCenter VM off, then upgrade that host. Not elegant, but it will work and hopefully save some pain.
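
For what it’s worth, the ownership handover itself can also be scripted. The sketch below is my own hedged illustration (the datacenter lookup, host name and root password are placeholders): adding a host to the second vCenter with force=True is what takes over management from the first one.

```python
# Hypothetical sketch: force-add an ESXi host to this vCenter, taking ownership
# away from whichever vCenter currently manages it.
from pyVmomi import vim

def take_ownership(si, host_fqdn, root_password):
    datacenter = si.RetrieveContent().rootFolder.childEntity[0]  # first datacenter
    spec = vim.host.ConnectSpec(hostName=host_fqdn,
                                userName="root",
                                password=root_password,
                                force=True)  # reclaim even if another vCenter owns it
    return datacenter.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)
```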

vSphere Clients

The beloved vSphere Client is now deprecated in this release and comes with a heavy push towards exclusive use of the Flash-based web client. My advice: start getting very familiar with the web client, as this is where the future is heading, like it or not. Any new features enabled in vCenter 5.5 will only be accessible via the web client.

Here you can see this clearly called out on the now-deprecated vSphere Client login page:

WARNING – A special mention needs to be made about virtual hardware version 10. If you upgrade your VMs to the latest hardware version, you will LOSE the ability to edit their settings in the vSphere Client. Yep, one step closer to complete obsolescence. If you’re not yet ready to give up using the vSphere Client, you may want to hold off upgrading the virtual hardware for a bit.
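
If you want to see how exposed you already are, virtual hardware versions are easy to query. Here is a hypothetical pyVmomi snippet (connection assumed) that lists VMs already on vmx-10, i.e. the ones that can no longer be edited with the legacy vSphere Client:

```python
# Return the names of VMs running virtual hardware version 10 ("vmx-10").
from pyVmomi import vim

def vms_on_vmx10(content):  # content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return sorted(vm.name for vm in view.view
                  if vm.config and vm.config.version == "vmx-10")
```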

vSphere Web Client

The web client is not terribly different from the fat client but its layout and operational methods will take some getting used to. Everything you need is in there, it may just take a minute to find it. The recent tasks pane is also off to the right now, which I really like.


The familiar Hosts and Clusters view:

Some things just plain look different, most notably the vSwitch configuration. Also, the configuration items you’ve grown used to finding in certain property menus are now spread out and stored in different places. Again, not bad, just…different.

I also see no Solutions and Applications section in the web client, only vApps. So things like the Dell Equallogic Virtual Storage Manager would have to be accessed via the old vSphere Client.

Client Integration Plugin

To access features like VM consoles, file transfers to datastores and OVF deployments via the web client, the 153MB Client Integration plugin must be installed. Attempting to use any features that require this should prompt you for the client install. One of the places it can also be found is by right-clicking while browsing within a datastore.

The installer will launch and require you to close IE before continuing.

VM consoles now appear in additional browser tabs which will take some getting used to. Full screen mode looks very much like a Windows RDP session which I like and appreciate.

Product Feature Request

This is a very simple request (plea) to VMware to please combine the Hosts and Clusters view with the VMs and Templates view. I have never seen any value in having these separated. Datacenters, hosts, VMs, folders and templates should all be visible and manageable from a single pane. There should be only 3 management sections separating the primary vSphere inventories: Compute (combined view), storage and network. Clean and simple.

Resources:

vSphere 5.5 upgrade order (KB): Link

vSphere 5.5 Configuration Maximums: Link

Awesome and extensive vSphere 5.5 guide: Link

Resolution for VCA FQDN error: Link

Scaled up VM-Level Protection now GA (Reblogged from Virtual Geek)

Source: Virtual Geek

[UPDATED – 11/20/14, 7:32AM ET: VSAN notes, VCOPS link]

For people who love the idea of replication (local for recovery purposes and remote for DR purposes), but want it as software only (no hardware dependency at all), and with VM-level granularity – a new choice is now here.

“Hello World” from Recoverpoint for Virtual Machines.

You can get more here.

Think:

  • software-only VM-level IO splitter
  • software-only Recoverpoint Appliance
  • Rich services – deep device counts, broad replication RPOs (from sync to async, time based, change based).
  • Snap-and-replicate techniques (think vSphere Replication as an example) = copies. RecoverPoint is a continuous replication (journaled IO) technique – you can recover to any point in time.
  • Local and Remote replicas
  • Super efficient: compressed, deduped – in our experience the WAN efficiency is one of the highest of any replication approach on the market
  • Larger scale than vSphere Replication.   Scale target is about 1000 VMs per vSphere cluster.

For EMCers, EMC partners, and EMC customers (talk to your partner/EMCer) you can play with Recoverpoint for Virtual Machines using vLab starting NOW (http://portal.demoemc.com)

BTW – vLab is our massive at-scale (tens of thousands of labs every month, many times that in VMs created and destroyed) self-service portal for all our products and solutions – brought to the world on an Enterprise Hybrid Cloud built on the vRealize Suite on Vblock, by the way!

This means that ANY storage model – whether it’s EMC XtremIO, a customer using VSAN, a customer using an NFS datastore – anything – can have rich replication capabilities.   BTW, one important note – there is a restriction on the current release where the VMs that are protected can be on anything, but the Recoverpoint journal needs to be on a VMFS datastore.   This will be lifted in the future (clearly it must be if RP4V will be included with EMC’s Project Mystic – aka EMC’s EVO:RAIL++ appliance).

Also, a great link I saw after posting – the always awesome Matt Cowger has created a VCOPS aka vRealize Operations adapter for Recoverpoint for VMs, aka RP4VM4VCOPS 🙂  Get it here: http://www.exaforge.com/rp4vm4vcops/

Check out the demo from Itzik Reich below.   Expect it to be available for anyone to use freely (of course, if you want support, you need to purchase) and easily downloadable soon!

Count The Ways – Flash as Local Storage to an ESXi Host

Posted: 21 Jul 2014   By: Joel Grace

When performance trumps all other considerations, flash technology is a critical component for achieving it. By deploying Fusion ioMemory, a VM can achieve near-native performance results. This is known as pass-through (or direct) I/O.

The process of achieving direct I/O involves passing the PCIe device through to the VM, where the guest OS sees the underlying hardware as its own physical device. The ioMemory device is then formatted with a file system by the OS, rather than presented as a Virtual Machine File System (VMFS) datastore. This provides the lowest latency and the highest IOPS and throughput. Multiple ioMemory devices can also be combined to scale to the demands of the application.
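
As a quick illustration (my own hedged sketch, not from the original post), the vSphere API exposes per-host PCI passthrough state, which is handy for checking whether a card can be handed straight to a VM:

```python
# Hypothetical pyVmomi snippet: report PCI devices on a host that are capable
# of passthrough (DirectPath I/O) and whether passthrough is enabled/active.
from pyVmomi import vim

def passthrough_report(host):  # host is a vim.HostSystem
    for info in host.config.pciPassthruInfo:
        if info.passthruCapable:
            print("PCI {}: enabled={} active={}".format(
                info.id, info.passthruEnabled, info.passthruActive))
```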

Another option is to use ioMemory as a local VMFS datastore. This solution provides high VM performance while maintaining the ability to use features like thin provisioning, snapshots, VM portability and Storage vMotion. With this configuration, the ioMemory can be shared by VMs on the same ESXi host, and specific virtual machine disks (VMDKs) can be stored there for application acceleration.

Either of these options can be used for each of the following design examples.

Benefits of Direct I/O:

  • Raw hardware performance of flash within a VM with Direct I/O
  • Provides the ability to use RAID across ioMemory cards to drive higher performance within the VM
  • Use of any file system to manage the flash storage

Considerations of Direct I/O:

  • The ESXi host may need to be rebooted and the CPU VT flag enabled
  • The Fusion-io VSL driver will need to be installed in the guest VM to manage the device
  • Once assigned to a VM, the PCI device cannot be shared with any other VMs

Benefits of a Local Datastore:

  • High performance of flash storage for VM VMDKs
  • Maintains VMware functions like snapshots and Storage vMotion

Considerations of a Local Datastore:

  • Not all VMDKs for a given VM have to reside on local flash; use shared storage for OS VMDKs and flash for application data VMDKs

SQL/SIOS

Many enterprise applications provide their own high availability (HA) features when deployed in bare metal environments. These capabilities can be used inside VMs to provide an additional layer of protection to an application, beyond that of VMware HA.

Two great SQL examples of this are Microsoft’s Database Availability Groups and SteelEye DataKeeper. Fusion-io customers leverage these technologies in bare metal environments to run all-flash databases without sacrificing high availability. The same is true for virtual environments.

By utilizing shared-nothing cluster aware application HA, VMs can still benefit from the flexibility provided by virtualization (hardware abstraction, mobility, etc.), but also take advantage of local flash storage resources for maximum performance.

Benefits:

  • Maximum application performance
  • Maximum application availability
  • Maintains software-defined datacenter

Operational Considerations:

  • 100% virtualization is a main goal, but performance is critical
  • Does the virtualized application have additional HA features?
  • A SAN/NAS-based datastore can be used for Storage vMotion if a host needs to be taken offline for maintenance

CITRIX

The Citrix XenDesktop and XenApp application suites also present interesting use cases for local flash in VMware environments. Oftentimes these applications are deployed in a stateless fashion via Citrix Provisioning Services, where several desktop clones or XenApp servers boot from centralized read-only golden images. Citrix Provisioning Services stores all data changes during the user’s session in a user-defined write cache location. When a user logs off or the XenApp server is rebooted, this data is flushed clean. The write cache location can be stored across the network on the PVS servers, or on local storage devices. Storing this data on a local Fusion-io datastore on the ESXi host drastically reduces access time to active user data, making for a better Citrix user experience and higher VM density.

Benefits:

  • Maximum application performance
  • Reduced network load between VMs and the Citrix PVS server
  • Avoids slow performance when the SAN is under heavy IO pressure
  • More responsive applications for a better user experience

Operational Considerations:

  • Citrix Personal vDisks (persistent desktop data) should be directed to the PVS server storage for resiliency.
  • PVS vDisk images can also be stored on ioDrives in the PVS server, further increasing performance while eliminating the dependence on the SAN altogether.
  • ioDrive capacity is determined by Citrix write cache sizing best practices, typically a 5GB .vmdk per XenDesktop instance.

70 desktops x 5GB write cache = 350GB total cache size (365GB ioDrive could be used in this case).
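
The same rule-of-thumb sizing can be expressed as a one-line calculation; the 5GB-per-desktop figure below is simply the value quoted above:

```python
# Total PVS write cache capacity needed for a pool of stateless desktops.
def write_cache_capacity_gb(desktops, cache_per_desktop_gb=5):
    return desktops * cache_per_desktop_gb

print(write_cache_capacity_gb(70))  # 350 (GB) -> a 365GB ioDrive fits
```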

VMware users can boost their systems to achieve maximum performance and acceleration using flash memory. Flash memory maintains maximum application availability during heavy I/O pressure and makes applications more responsive, providing a better user experience. Flash can also reduce network load between VMs and the Citrix PVS server. Click here to learn more about how flash can boost performance in your VMware system.

Joel Grace, Sales Engineer
Source: http://www.fusionio.com/blog/count-the-ways.
Copyright © 2014 SanDisk Corporation. All rights reserved.