Virtualizing Citrix XenApp on vSphere, part II

December 15, 2009

 

This is the second part of my articles on virtualizing XenApp on vSphere. You can read the first part here.

vSphere design considerations

I recommend creating a dedicated VMware cluster for virtualized XenApp servers. The reason for this is the use of vSphere functionality and the resulting licensing costs. Because XenApp VMs are all highly resource intensive, there is little benefit in VMware DRS: DRS is useful when there are a lot of VMs with different resource demands, but with XenApp VMs that is not the case. Furthermore, I do not recommend using vMotion on XenApp VMs, because end users could experience a delay in their XenApp session during a vMotion. Based on these facts, a vSphere Standard edition license provides enough functionality for the VMware cluster with XenApp servers, which I will call the VMware XenApp cluster in the rest of this article. With vSphere Standard, the XenApp servers are still protected against hardware failures by VMware HA, and the Standard license keeps the cost of a virtualized XenApp server low. There is one caveat: if you use distributed vSwitches in your environment, you are stuck with standard vSwitches under vSphere Standard, unless you buy the vSphere Enterprise Plus license for the VMware XenApp cluster.

Hardware considerations

Selecting the right server hardware is essential for successfully virtualizing XenApp servers. I recommend using hardware based on the Intel Nehalem processor architecture; at the time of writing, this means the Intel Xeon 55XX series. The Nehalem architecture provides support for hardware-assisted memory management unit (MMU) virtualization, which gives a significant performance increase.

The number of physical CPU cores should be matched with the number of vCPUs. For virtualized XenApp servers, do not overcommit CPU resources. For example, if you intend to run 4 XenApp servers with 2 vCPUs each, your ESX server should have 8 cores. It is possible to slightly overcommit, but then you should monitor your CPU ready time for bottlenecks.
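As a quick sanity check, this rule of thumb can be expressed in a few lines of Python (a sketch with illustrative numbers, not measurements from my environment):

```python
# Sketch: check planned vCPU count against physical cores on an ESX host.
# The host and VM layout below are illustrative; substitute your own.

def vcpu_overcommit_ratio(physical_cores, vms):
    """vms is a list of vCPU counts, one entry per planned XenApp VM."""
    total_vcpus = sum(vms)
    return total_vcpus / physical_cores

# Four 2-vCPU XenApp VMs on a dual quad-core host (8 cores):
ratio = vcpu_overcommit_ratio(8, [2, 2, 2, 2])
print(f"vCPU:core ratio = {ratio:.2f}")  # 1.00 -> no overcommit

# A fifth VM pushes the ratio above 1.0 -> watch your CPU ready time:
ratio = vcpu_overcommit_ratio(8, [2, 2, 2, 2, 2])
print(f"vCPU:core ratio = {ratio:.2f}")  # 1.25 -> slight overcommit
```

Anything above a ratio of 1.0 means vCPUs can end up waiting for a physical core, which shows up as CPU ready time.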

I used diskless HP ProLiant BL460c G6 servers for building a VMware XenApp cluster, but any modern server will do. These servers are equipped with two quad-core Intel E5540 CPUs and 24 GB of RAM. With this configuration I managed to run 5 XenApp VMs with good performance. With 6 XenApp VMs, the CPU ready time climbs above 5% and performance is no longer acceptable.

Because XenApp VMs are disk I/O intensive (especially during logon hours), I recommend creating dedicated datastores within the VMware XenApp cluster, each sized to hold a maximum of four XenApp VMs. This way the XenApp VMs make optimal use of the I/O queues each LUN provides. Also, create the datastores on the fastest LUNs you can provide; I used RAID 1 Fibre Channel LUNs for the XenApp VM datastores.
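To illustrate, a rough datastore sizing calculation might look like this (the disk sizes and headroom percentage are assumptions for the example, not recommendations, and the sketch assumes fully reserved memory so no VM swap file space is needed):

```python
# Sketch: size a dedicated XenApp datastore for at most four VMs.
# Disk sizes and headroom are illustrative assumptions.

def datastore_size_gb(vms_per_datastore, os_disk_gb, app_disk_gb,
                      headroom_pct=20):
    # Per VM: OS disk + application disk. With a full memory reservation
    # the VM swap file is zero-sized, so only the virtual disks count.
    per_vm = os_disk_gb + app_disk_gb
    raw = vms_per_datastore * per_vm
    # Leave some headroom for snapshots, logs and VM configuration files.
    return raw * (1 + headroom_pct / 100)

print(datastore_size_gb(4, os_disk_gb=20, app_disk_gb=20))  # 192.0
```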

Application Landscape

Each XenApp VM will contain some standard applications and custom applications specific to the business. In this article I focus only on the standard applications. Here’s a quick overview of the standard application set I use on a XenApp VM:

– Microsoft Internet Explorer 7
– Microsoft Office 2003
– Antivirus software

I used Internet Explorer 7 because Internet Explorer 8 showed more CPU spikes. I don’t know the reason for this; maybe some web application has not been optimized for IE8, but I didn’t have the time to investigate the problem. By default, IE8 also starts two iexplore.exe processes for each user, which requires a little more resources. Office 2003 was used because the customer was still using Office 2003. This is a plus, because Office 2007 requires more resources. I also recommend running antivirus software on your XenApp VMs.

With this configuration I managed to run 5 XenApp VMs on a single ESXi host, with each XenApp VM supporting 25 users. That makes a total of 150 users on one physical server, with a user experience within the normal range.

The following chart shows the CPU usage of the ESXi host with 5 XenApp servers running.

The following chart shows the CPU usage of a XenApp VM.


Please note that the CPU ready time stays below 5%.
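For reference, vCenter reports CPU ready as a millisecond "summation" counter per sample interval, while esxtop shows a percentage. Assuming the real-time statistics interval of 20 seconds, the conversion can be sketched as follows (the sample values are made up for illustration):

```python
# Sketch: convert vCenter's CPU ready "summation" counter (milliseconds
# per sample interval) into a percentage, similar to esxtop's %RDY.
# Assumes the real-time statistics interval of 20 seconds.

def cpu_ready_pct(ready_ms, interval_s=20, vcpus=1):
    # Ready time as a fraction of the sample interval, averaged per vCPU.
    total_pct = ready_ms / (interval_s * 1000) * 100
    return total_pct / vcpus

# A 2-vCPU XenApp VM reporting 1600 ms of ready time in a 20 s sample:
print(f"{cpu_ready_pct(1600, vcpus=2):.1f}% ready per vCPU")  # 4.0%
```

In this example the VM is still under the 5% threshold mentioned above; values consistently above it indicate CPU contention.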


Virtualizing Citrix XenApp on vSphere, part I

October 26, 2009

Today, many organizations that use XenApp are still bound to the x86 platform because of legacy applications that don’t run on an x64 platform. Sometimes an application does run on x64 but is not supported by the vendor there. By sticking to the x86 platform, modern server hardware can’t be fully utilized, because of the 4 GB memory limit of the x86 platform.

Virtualization overcomes this limitation. Virtualizing a XenApp server has always been a challenge, but with the maturity of vSphere and current CPUs there is no longer any reason not to virtualize a XenApp server. In this article I will share my experience implementing a virtualized XenApp production environment for a customer and give you my recommendations for a successful virtualization of XenApp.

This article is split into three parts. In the first part I focus on the configuration of the XenApp VM. In the second part I will look at the application landscape and the underlying ESX host, and in the last part I will look at the performance results.

Building a XenApp VM

The configuration of a XenApp VM is crucial to the performance of a virtualized XenApp server and is directly linked to the hardware configuration of the underlying ESX host. As with every VM, there are four main performance aspects that need to be optimized for the corresponding workload: CPU, memory, disk I/O and network I/O.

CPU
The most important resource for the XenApp VM is CPU. XenApp VMs tend to use a lot of CPU resources, and this is the most likely first bottleneck. In creating your XenApp VMs there are two scenarios: scale-out or scale-up. In the scale-out scenario, a lot of light XenApp VMs are created with one vCPU each. In the scale-up scenario, fewer VMs are created with two vCPUs each. The main objective is not to overcommit your physical CPUs. Say you have an ESX host with two quad-core CPUs, for a total of 8 cores. If you create eight 1-vCPU VMs, each VM can schedule a dedicated core. The same applies to 2-vCPU VMs: if you create four 2-vCPU VMs, each VM can schedule a dedicated set of cores.

Depending on the workload of your XenApp VMs, one of these scenarios fits best. If you have light workloads, the scale-out scenario might be best, but in most situations the scale-up scenario does the better job. In most circumstances, using 2 vCPUs allows for more users and a more consistent user experience. With the improvements made to the CPU scheduler in vSphere, such as further relaxed co-scheduling, SMP XenApp VMs are no longer a problem. If you are using a host with Intel Nehalem CPUs, enable the CPU and memory hardware assistance (more on this in part II).
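The dedicated-core math behind the two scenarios can be sketched in a few lines (the host size is illustrative; adjust for your own hardware):

```python
# Sketch: how many XenApp VMs can each get a dedicated set of cores
# in the scale-out (1 vCPU) and scale-up (2 vCPU) scenarios.
# The 8-core host is an illustrative example.

def vms_with_dedicated_cores(cores, vcpus_per_vm):
    # Without overcommitting, each VM needs its own set of cores.
    return cores // vcpus_per_vm

cores = 8  # dual quad-core host
for label, vcpus in (("scale-out", 1), ("scale-up", 2)):
    n = vms_with_dedicated_cores(cores, vcpus)
    print(f"{label}: {n} VMs x {vcpus} vCPU = {n * vcpus} vCPUs on {cores} cores")
```

Both layouts fill the host exactly; the difference is whether the users are spread over eight light VMs or four heavier ones.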

Disk I/O
The second most important resource is disk I/O. This will be explained further in the next part of this article, but for now I recommend using two virtual disks for a XenApp VM: one for the operating system and one for the applications. For optimal disk I/O performance, make sure you align the file system in the guest OS.

Memory
The next resource for the XenApp VM is memory. With memory there is one simple rule: don’t overcommit memory for your XenApp VMs. Depending on the workload of the XenApp server, configure the XenApp VM with the corresponding amount of memory. In most situations this will be 4096 MB of RAM (assuming you are using a 32-bit OS). Make sure you also set a memory reservation of the same size. This way the XenApp VM has all of its configured RAM available and the VMware balloon driver cannot degrade its performance.
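As a sketch, you can sanity-check that full reservations for all XenApp VMs on a host fit in physical RAM (the per-VM overhead figure below is a rough assumption for illustration, not an official VMware number):

```python
# Sketch: verify that full memory reservations for all XenApp VMs fit
# in host RAM. The per-VM overhead is a rough illustrative assumption.

def fits_in_host(host_ram_mb, vm_count, vm_ram_mb, overhead_mb_per_vm=200):
    # With a full reservation, each VM claims its configured RAM plus
    # some virtualization overhead; the remainder stays with the host.
    reserved = vm_count * (vm_ram_mb + overhead_mb_per_vm)
    return reserved, reserved <= host_ram_mb

# Five 4 GB XenApp VMs on a 24 GB host:
reserved, ok = fits_in_host(24 * 1024, vm_count=5, vm_ram_mb=4096)
print(f"{reserved} MB reserved, fits: {ok}")  # 21480 MB reserved, fits: True
```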

Network I/O
The last resource for the XenApp VM is network I/O. I haven’t seen any XenApp implementation where network I/O is the bottleneck, but for best results use the new VMXNET 3 vNIC. VMXNET 3 places less load on the CPU, which is always welcome.

Other considerations
I recommend using Windows Server 2003 R2 x86 for building the XenApp VM, because Windows Server 2008 uses a lot more resources. This will probably be much better with Windows Server 2008 R2, but at the time of writing, XenApp is not certified for use with Windows Server 2008 R2. Furthermore, I recommend removing the CD-ROM and floppy drives; the floppy drive can be completely disabled in the BIOS of the VM. Always install the latest VMware Tools to provide the optimized drivers for the virtual SCSI and network devices, and to install the balloon driver.

So let’s summarize the preferable XenApp VM configuration:

2 vCPUs
4096 MB of RAM, with 4096 MB reserved
1 LSI Logic Parallel SCSI controller
2 virtual disks, one for the OS and one for applications
1 VMXNET 3 vNIC
CD-ROM drive removed
Floppy drive removed
CPU and memory hardware assistance enabled if using Intel Nehalem processors

Again, depending on your environment, another configuration could be more desirable. A consistent server configuration is very important in a XenApp farm, so I recommend building a dedicated template for deploying XenApp VMs.

In the next part of this article I will look at the application landscape and the underlying ESX host for building your virtualized XenApp farm.

Continue to part II.


XenApp on vSphere

September 29, 2009

If you’re a XenApp user making a virtualization platform choice, you need to read the new Project VRC paper that came out last week. The paper is titled "VRC, VSI and Clocks Reviewed". The results have shifted dramatically since the January findings: instead of trailing, ESX 4.0 now has a 4% advantage over XenServer 5.5 in the number of Terminal Services users it can support on a server.

You can read the VRC whitepaper here (requires free registration).