Performance of App-V and ThinApp

We were recently asked to provide evidence that virtualising an application would not affect its performance.

The request was quite reasonable. The application in question was a high-performance engineering application: Patran by MSC Software. Patran has some configurable parameters to optimise performance on high-performance workstations. Not much point in optimising it if the virtualisation caused a loss of performance.

My first thought was that virtualisation really shouldn’t affect performance. Application virtualisation redirects the file system and registry to alternate locations. You can see this quite clearly in the structure of the package. This might affect IO-intensive operations, but not operations in memory. But this is just theory, and I can quite understand that an engineering manager would want to see more than a theory.

My second thought was to look for data on performance from the vendors (in this case VMware for ThinApp and Microsoft for App-V). But I didn’t find anything useful, which is odd.

So then we looked at the problem again, and began to realise that it could be really quite difficult. How would you demonstrate that the virtualised app was in no way slower than the physical app? How would you create controlled tests? You could for a few benchmarks, obviously, but not for every function.

The problem became harder when the testers showed some results that indicated the virtualised app was significantly slower. The test was to use Fraps to measure the Frames Per Second (FPS) when running a test model. Patran needs to render the graphical model on the screen as the user manipulates it. The test showed that the virtualised app rendered the model 33% slower than the physical app.

I was surprised by this, as the rendering clearly happens in memory on the graphics card, and has nothing to do with IO. But then I looked at the data a bit more and found that the result was not really 33%. What really happened is that rendering is done at either 30 FPS or 60 FPS. In this one test the virtualised app hit 30 more often than 60, and vice versa for the physical app. Still, we were not going to be able to wait for every adverse test result and then work out whether it was significant or not.
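
To see why the raw number was misleading, here is a little illustrative sketch (the figures are invented, not our test data). If the samples lock to either 30 or 60 FPS, the proportion of time spent at each level drives the average, and a fairly modest shift in that proportion shows up as an alarming-looking percentage gap.

```python
# Illustrative only (not the actual Fraps data): when rendering snaps to either
# 30 or 60 FPS, the *proportion* of one-second samples at each level drives the
# average, so a shift in that proportion looks like a large percentage gap.
def average_fps(samples_at_30, samples_at_60):
    """Mean FPS for a run made up of one-second samples at 30 or 60 FPS."""
    total_frames = samples_at_30 * 30 + samples_at_60 * 60
    total_seconds = samples_at_30 + samples_at_60
    return total_frames / total_seconds

native = average_fps(samples_at_30=20, samples_at_60=80)       # mostly at 60
virtualised = average_fps(samples_at_30=80, samples_at_60=20)  # mostly at 30

print(f"Native:      {native:.1f} FPS")
print(f"Virtualised: {virtualised:.1f} FPS")
print(f"Apparent slowdown: {100 * (1 - virtualised / native):.0f}%")
```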

The route we took was to take some benchmarking software and virtualise it. That would mean that all the benchmarks would run virtualised, and the same benchmarks could be run normally. The software I chose was PassMark PerformanceTest.

PerformanceTest has a wide range of benchmarks: CPU, Disk, Memory and Graphics. The tests showed that on every benchmark the virtualised app performed about the same as the native app, with no significant difference.

Here is the summary overall:

Test      Rating    CPU       G2D      G3D      Mem       Disk
Native    1924.4    3443.5    480.0    583.9    1674.3    3117.5
ThinApp   1915.1    3462.7    462.3    581.0    1706.3    3206.6

And here’s the summary for 3D Graphics:

Test      3D Graphics Mark   DirectX 9 Simple   DirectX 9 Complex   DirectX 10   DirectX 11   DirectCompute
Native    584                41.1               22.6                4.4          9.7          315.1
ThinApp   581                41.0               22.6                4.4          9.6          313.5
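
To put a number on "about the same", here is a quick sketch that computes the relative difference between the ThinApp and native scores in the overall summary above. Every group comes out within a few per cent either way.

```python
# Quick check of the overall PassMark summary: relative difference between the
# ThinApp run and the native run for each benchmark group (values from the
# table above).
native = {"Rating": 1924.4, "CPU": 3443.5, "G2D": 480.0,
          "G3D": 583.9, "Mem": 1674.3, "Disk": 3117.5}
thinapp = {"Rating": 1915.1, "CPU": 3462.7, "G2D": 462.3,
           "G3D": 581.0, "Mem": 1706.3, "Disk": 3206.6}

for test, native_score in native.items():
    delta = 100 * (thinapp[test] - native_score) / native_score
    print(f"{test:>6}: {delta:+.1f}%")
```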

Based on this, it seems fairly unlikely that an application would perform significantly worse by being virtualised.

Check your BIOS Power Management Settings

I have been working on a large End User Computing programme for a while and have not found the time to blog, so now it is time to catch up with a few snippets.

This one is about Virtual Desktop Infrastructure (VDI) and the BIOS settings of the physical servers. Here’s the summary: VDI depends on high performance hosts, but by default hosts are typically configured for a balance of performance and energy efficiency. Check your BIOS. It may not be what you think.

I first came across this a while ago when working on a new VDI deployment of Windows 7 on VMware View, running on Dell blade servers to an EqualLogic SAN. We noticed that the desktops and applications were not launching as quickly as expected, even with low loads on the servers, networks and storage. We did a lot of work to analyse what was happening. It’s not easy with non-persistent VDI, because you don’t know what machine the user will log on to. The end result was a surprising one.

The problem statement was: “Opening Outlook and Word seems to be sluggish, even though the host resources are not being fully used. Performance is not slow. It is just not as good as we were expecting”.

My company, Airdesk, is usually called in after the IT team have been unable to resolve the problem for a while. If the problem were obvious it would have been solved already. This means that we were looking for a more obscure cause. For example, in this case, it was not a simple case of CPU, memory or disk resources, because these are easily monitored in the vSphere console. So already we knew that we were looking for something more hidden. Here’s a good article on Troubleshooting ESX/ESXi virtual machine performance issues. Let’s assume the IT team has done all that and still not found the problem.

My approach to troubleshooting is hypothesis-based. We identify all the symptoms. We identify the things that could cause those symptoms. We devise tests to rule them out. It’s not as easy as that, because you can’t always restructure the production environment for testing. You need tools to tell you what is going on.

In this case the tools we used were:

  • vSphere console to monitor the hosts and the virtual machines from outside
  • Performance Monitor to monitor processor and disk activity from inside the virtual machine
  • Process Monitor (Sysinternals) to see what was actually happening during the launch
  • WinSAT to provide a consistent benchmark of performance inside the virtual machine
  • ExMon for monitoring the Outlook-Exchange communication

The tools told us that the application launch was CPU-bound, but there was no significant CPU load on the hosts. CPU Ready time (the measure of delay in scheduling CPU resources on a host) was normal. We could see spikes of disk latency, but these did not explain the time delay in opening applications.
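
As an aside, the vSphere real-time charts report CPU Ready as a summation in milliseconds per sampling interval, so to judge whether a value is normal you convert it to a percentage of the interval. A rough sketch of the arithmetic, assuming the usual 20-second real-time interval:

```python
# Convert a vSphere CPU Ready summation value (milliseconds accumulated over a
# sampling interval) into a percentage of the interval. The real-time charts
# normally use a 20-second interval; divide by vCPU count for a per-vCPU
# figure. Anything beyond a few per cent usually warrants investigation.
def cpu_ready_percent(ready_ms, interval_seconds=20, vcpus=1):
    return (ready_ms / (interval_seconds * 1000 * vcpus)) * 100

# Example: 400 ms of ready time in a 20 s sample on a 2-vCPU desktop = 1%.
print(f"{cpu_ready_percent(400, vcpus=2):.1f}%")
```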

Our conclusion was that the virtual machines were not getting access to the CPU that the vSphere console said was available to them. What could cause that? Something perhaps that throttled the performance of the CPU? Intel SpeedStep maybe? The vSphere console showed that it was configured for High Performance. But we decided to check the BIOS on the hosts and, sure enough, they were configured for Active Power Controller (hardware-based power management for energy efficiency).


We changed the BIOS settings, and the result was immediate. Performance of the virtual desktop was electric. Click on an application and, bang, it was open. We potentially saved tens of thousands of pounds by finding the cause rather than throwing resources at the problem.

You have two types of setting in Dell BIOS:

  1. Processor settings, which can be optimized for different workloads
  2. Power Management settings, which give a choice between energy efficiency and performance.

In our case we wanted to configure the processors for a general-purpose workload but we also wanted to provide immediate access to full resources, without stepping the processor down to save power based on average utilisation. So the Maximum Performance power setting was the one we needed. You could also set the BIOS Power Management to be OS-Controlled, and allow the vSphere setting to take effect. The point of this post is that the vSphere setting said the hosts were in High Performance mode, while the troubleshooting showed they were not.
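
If you want to see what your hosts report from the vSphere side, here is a rough pyVmomi sketch (not part of the original troubleshooting, and the vCenter address and credentials are placeholders). Bear in mind the caveat above: this shows the power policy the host thinks it has, which the BIOS may quietly be overriding.

```python
# Rough sketch using pyVmomi: list the power policy each host reports to
# vSphere. Hostname and credentials are placeholders. This is a starting point
# for the investigation, not proof that the BIOS is behaving.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        policy = host.config.powerSystemInfo.currentPolicy
        print(f"{host.name}: {policy.shortName} ({policy.name})")
finally:
    Disconnect(si)
```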

That was a little while ago. I was reminded of it recently while designing the infrastructure for a global VDI environment based on XenServer (the hypervisor) and XenDesktop (the VDI) running on HP blade servers to a 3PAR SAN. In the Low Level Design we said “Max performance MUST be set in the host server BIOS”.

Sure enough, in the HP BIOS the default power setting is for Balanced Power and Performance, and this needs to be changed. In a XenServer VDI environment it needs to be set to maximum performance. See this technote from Citrix on How to Configure a XenServer Host’s BIOS Power Regulator for Maximum Performance.

[Screenshot: Power Settings Default at Startup]

If you are not managing the BIOS power management settings on your virtualisation hosts, you are not getting the results you should.

Virtual or Versatile Desktop

There is a lot of industry talk about Virtual Desktops at the moment. This is the desktop OS running as a virtual machine on a server in the datacenter. It sounds like the solution to all those difficult desktop problems, but it is more like a niche within a niche. Much more interesting is the Versatile Desktop. The Versatile Desktop is a personal computing device that is able to run different desktops at different times.

Personal computing requirements are hugely varied, so it is not surprising that the industry provides many different solutions apart from the common or garden desktop. The virtual desktop certainly has a place among them. Here’s how I see the logic flow:

  • Standard requirement: standard workstation or laptop
  • When that won’t work: a desktop published over terminal services or Citrix
  • When that won’t work (because the application mix or the personalisation requirements cannot run on a shared server): a desktop running on a virtual machine.

But is a remote connection to a virtual desktop the best way to do this? Perhaps we could just boot the client device into different desktops locally – a Versatile Desktop.

Part of the attraction of the virtual desktop is that we know that virtualization is highly effective for servers, so: why not for desktops? The reason is that we are usually trying to do something entirely different. For servers we are trying to partition one hardware device to run multiple OS’s at the same time. There are cases where we want the desktop to do that too, for example when running a development lab of several virtual servers on one desktop machine. But mostly we want the desktop hardware to be able to run different OS’s at different times. Either different users with a different desktop requirement, or the same user requiring different desktops for different things.

Bearing in mind that we can already do this easily with terminal services, the problem only arises when terminal services cannot work, for example when:

  1. the user is offline or on a slow connection
  2. the applications do not work over terminal services or the desktop needs to be heavily personalised
  3. the user requires specialised features on the local device such as: power saving; advanced graphics and audio; wireless and WWAN – and all the other features of a full spec device.

One way to do this is with a client hypervisor (like VMware Workstation). The problem with a hypervisor is that, almost by definition, it cannot give us the full features of the local device. The hypervisor emulates the native hardware drivers with generic variants, or passes through to the hardware but only for one OS. So for the virtual machine OS we may as well not have a full-featured device. If we didn’t need a high-spec local device then fine, but then why have it?

A better way to do this would be somehow to switch between different OS’s stored on the hard disk. We could store different OS’s on different partitions of the hard disk. Then let’s say we had a function key or a small graphical menu so we could just switch between different OS’s. We could boot one high performance desktop for one purpose, and a different OS for another. Both would provide a full OS: available offline; running any applications and fully personalised; and with the full features of the local device. The way to do this is with Unified Extensible Firmware Interface (UEFI).

UEFI is a replacement for the old fashioned BIOS. BIOS is 16-bit with 1 MByte of addressable space, and so is inherently limited in what it can do. There is no mouse in BIOS. UEFI can be 32-bit or 64-bit and so can run a full GUI. In effect, we can have a pre-OS graphical interface that enables the user to choose what to do next. The UEFI can boot any UEFI-aware OS, including Windows 7, Linux and Mac OS X.

UEFI began life as EFI for Intel Itanium processors in 2000. The specification is now controlled by the industry-wide Unified EFI Forum, and is at version 2.3.

  • Apple uses UEFI to boot Mac OS X and Windows 7 on the same machine: so-called Boot Camp.
  • Acer uses it on their Aspire One D250 to boot Windows 7 and Android
  • HP uses it on notebooks to provide System Diagnostics and a UEFI Boot Mode.

UEFI provides the opportunity for a Versatile Desktop. With UEFI the user could select:

  • an iPad-like touch screen interface for casual or social usage
  • a production desktop for business usage or heavyweight applications; and different production desktops for different users or purposes
  • a Linux client for a seamless remote desktop over terminal services.

So how does the Versatile Desktop compare to the Virtual Desktop?

Pro

  • Direct access to the full features of the hardware
  • Instantly available

Con

  • UEFI implementations are proprietary. It depends what the vendor lets you do.

Is this a practical proposition? We will look in another post at the opportunities for using UEFI with the HP Elitebook.

Citrix: Off your Trolley Express

Until recently it has been possible to automate the installation of most software on a Windows computer using Group Policy. Group Policy is a standard component of a Windows domain and so there is no additional cost. Starting with version 11.2 Citrix no longer recommend using Group Policy to install the Citrix Online plug-in. Are they off their trolley?

Windows Installer

Microsoft introduced Windows Installer with Windows 2000 as the preferred method for installing software on Windows computers. Windows Installer is a service in Windows that provides the standard mechanisms for software installations. The vendor creates an installation package with an .msi file. The msi is a database that contains instructions and resources for Windows Installer. When the user runs the msi, Windows Installer performs the actions indicated for the package. The benefit for the user and the vendor is that there is a standard process for:

  • identifying what is installed
  • installing, repairing and removing an application
  • updating an application with a patch or an upgrade
  • performing custom steps (depending on the existing hardware or OS configuration, for example)
  • managing the User Interface and silent installation
  • and many other standard software installation processes: feature selection; rollback; logging; advertisement.

Windows Installer is now at Version 5.0 and most vendors have adapted their installations one way or another to use the Windows Installer service. Vendor installation packages have gradually evolved. In a first stage, many vendors adapted their existing installation routine simply to run as Custom Actions within the Windows Installer package. This defeated the object of using Windows Installer, since it could only have the most limited information about the package. But at least the basics of the installation worked in a standard way.

In a second stage, vendors have adapted their installation to be performed as a native Windows Installer package, with the properties and actions handled directly by Windows Installer. The vendor typically uses Custom Actions only for tasks that are not handled by the Windows Installer service.

Most vendors have continued to include an executable Setup with the installation package. This might typically perform a few pre-installation functions, and then run the msi. For example, the Setup might install .NET Framework 2.0 as a pre-requisite. The Setup can also customize the msi depending, for example, on the OS language, by generating a Transform (with an mst file extension). Provided you know what the Setup does, you can perform those tasks independently and just extract and run the msi directly.
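
As an illustration (the package and transform paths below are hypothetical), installing an extracted msi silently with a transform and a verbose log only needs the standard Windows Installer switches. The sketch just wraps the command in Python for repeatability.

```python
# Hypothetical sketch: run an msi extracted from a vendor Setup, applying a
# language transform and installing silently with verbose logging. The file
# paths are placeholders; /i, TRANSFORMS=, /qn and /l*v are the standard
# Windows Installer command-line options.
import subprocess

result = subprocess.run([
    "msiexec", "/i", r"C:\Packages\Vendor\Product.msi",
    r"TRANSFORMS=C:\Packages\Vendor\1033.mst",   # transform generated by Setup
    "/qn",                                        # silent, no UI
    "/l*v", r"C:\Logs\Product-install.log",       # verbose log
], check=False)

print("Windows Installer exit code:", result.returncode)
```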

Group Policy Software Installation

Who cares whether the installation is a Windows Installer msi or a non-Windows Installer Setup, as long as it works? The answer is: Group Policy. Also starting with Windows 2000, Microsoft introduced Group Policy to control the configuration of computers in the domain. Group Policy uses client side extensions to perform different types of actions defined in domain policies. One of these extensions is Software Installation. The Software Installation client side extension tells Windows Installer what installation actions to run.

  • The Group Policy Software Installation policy knows whether it has run or not. It knows what users or computers it needs to run for. It knows not to run over a slow link. It can use a WMI filter to run on certain classes of computer and not others. Depending on the policy configuration it passes commands to Windows Installer to perform the installation, upgrade or removal of software.
  • Windows Installer then performs those actions the same as if the command line were executed manually. It reports back to the client side extension whether the installation was successful. The client side extension reports back to the Group Policy service whether the policy has been completed successfully or not.

Group Policy is a standard component of Windows domains, and therefore there is no additional cost for using it to install software. Without Group Policy you need to use some kind of third party tool. Although technically you can run a script, this method does not provide the control of the installation that you have with Group Policy, unless you develop, in effect, your own custom client side extension. Group Policy Software Installation operates only on Windows Installer database (msi) and transform (mst) files. It does not operate on Setup (exe) files. So if the vendor package comes as a Setup, and does not unpack as an msi, it cannot be installed by Group Policy Software Installation.

Many enterprises will already have a separate software installation tool, like SCCM or Altiris Software Delivery Solution. But if you do not, and you use Group Policy Software Installation, then you need an msi. Now that most vendors have adapted their installations to use Windows Installer, the great majority of products can be installed using Group Policy Software Installation:

  • either directly
  • or by extracting the msi from the Setup
  • or by re-packaging older or simpler products using something like Wise Package Studio.

Problems

Rather ironically, having set the standard and provided the tools, Microsoft were the first major vendor to break ranks. Office 2007 has a Setup that runs a series of separate msi’s but it uses a Patch file (msp) instead of a Transform (mst) to customize the installation. Group Policy Software Installation cannot run a Patch file. If you want to customize the installation of Office 2007 (for example to select which components to install) you cannot use Group Policy. Microsoft simply recommend that you use their client management tool SCCM. But if you were quite happily proceeding with all your software installations using Group Policy, it was a bit of a shock to find that you can’t install Office 2007 that way. Fortunately there is a workaround that enables you to perform a standard installation without customization, and therefore with Group Policy. Here are the deployment options MS recommend for installing Office 2010. You will see that they don’t include Group Policy Software Installation.

Now (since Version 11.2) Citrix have taken a similar approach for the Citrix Online plug-ins, for similar reasons. The Citrix "client" now consists of several components or plug-ins:

  1. Web plug-in that provides the core XenApp ICA client functions and enables connection to a XenApp farm using a web browser (always required)
  2. Desktop Viewer that provides controls and preferences for a published desktop (optional)
  3. USB handler that controls what happens when you plug in a USB device during a session (optional)
  4. Program Neighborhood Agent (PNA) that reads a configuration from a XenApp Web Interface server and configures shortcuts in the Start menu for the published applications
  5. Single Sign-on that captures the user logon details and enables the PNA to pass them through to the Web Interface server (optional, for the PNA)
  6. HDX media stream for Flash Player for client side rendering of Flash content (optional)

Why so complicated? Citrix are trying to provide a client that works both for published applications (connection to a Terminal Server) and virtual desktops (connection to a Virtual Machine running a Windows client OS like Windows 7) based on a combination of plug-ins. This is Citrix making a big move to dominate the market for Virtual Machine-based desktops by adapting their ICA protocol and client services for connections to a remote VM.

Each plug-in is an msi. However Citrix have developed a custom setup controller called Trolley Express to control the running of the individual msi’s. Trolley Express does the following:

  • manages the sequence of msi’s and their rollback in the event of failure
  • manages upgrades and removal
  • provides an overall log file, and a log for each msi
  • passes the OS language to the individual msi’s
  • passes command line parameters to the msi’s in a transform.

It’s not very much really. I don’t see anything here that could not have been developed as an msi wrapper with nested msi’s, or indeed as separate msi’s with component options. Here’s an extract from the log file to show what Trolley Express is doing.

But Citrix have gone much further than just using a custom setup. They have developed a whole proprietary client management system. The Merchandising Server acts as a client management server, and the Receiver acts as an agent performing the plug-in installation and configuration determined by the Server. This operates independently of Microsoft domains. You could run it on a campus and control the client on any computer connecting to a Citrix service. You can use it for the Citrix Access Gateway (SSL VPN) client as well as the XenApp server client. There is a receiver for Windows, Mac, iPad and Smart Phone.

Installation of the client with Group Policy is still possible, and it works faultlessly, but Citrix do not recommend it. They say:

"Citrix does not recommend extracting the .msi files in place of running the installer packages [an exe]. However, there might be times when you have to extract the plugin .msi files from CitrixOnlinePluginFull.exe manually, rather than running the installer package (for example, company policy prohibits using the .exe file). If you use the extracted .msi files for your installation, using the .exe installer package to upgrade or uninstall and reinstall might not work properly. The Administrative installation option available in some previous versions of the plugin is not supported with this release. To customize the online plugin installation, see ‘To configure and install the online plugin using the plugin installer and commandline parameters [only available for the .exe]’."

This seems a big jump, from an msi that can be installed by Group Policy, to a full client management system and no msi. But it is really the same way that other complex clients are managed: SCCM itself; Microsoft Forefront Client Security; McAfee ePolicy Orchestrator. They all use a server to install and configure the clients and agents. Citrix provide the Merchandising server as a virtual appliance, so you don’t even need an additional license for the OS or database.

In summary:

  1. You can install nearly everything on a Windows computer using Group Policy Software Installation, and it is a standard component of Windows domains with no extra cost
  2. First Office 2007 and now the Citrix Online Plug-in are not recommended for Group Policy installation – although they can be made to work
  3. Do you need to buy a software delivery tool after all? Nearly, but not quite. For Citrix you can use the Merchandising Server appliance.

I am all in favour of client management tools like SCCM and Altiris where you need them. But I also like to reduce costs where you don’t. For the moment you can still get by with Group Policy Software installation.

Thick or thin client

The standard user desktop can be delivered in radically different ways. While this is interesting technically, what difference does it make to your business? Some of the claims are just plain confusing or misleading.

The standard user desktop can be delivered in radically different ways: standard PC; netbook; virtualized applications; remote desktop; virtual desktop; virtual disk; the list goes on. It is a big subject, so it is hard to know where to start. There are use cases for different types of desktop that seem obvious, but the more you look into it the less obvious it is.

Let’s explode the problem to see what is actually happening. Then we can form a better view of how the methods differ. On the standard PC we have the following subsystems, all connected by the motherboard:

  • Processor
  • Hard drive
  • RAM Memory
  • Graphics
  • Network interface
  • Interfaces for different types of devices
  • Services like power and cooling
  • A BIOS that controls how they work together.

I am sorry this is so basic but we have to start somewhere. Obviously we could move different parts to different places. We could put the graphics controller and a few other bits and pieces near to the monitor and the user, and put the rest miles away in a cupboard somewhere. What would this achieve? There would be less noise and heat and it would take less space on the desk. Nothing to break or steal. Sounds good. What have we got? A Terminal. We could obviously explode our PC in lots of different ways to achieve different results. The different explosions that the engineers have given us today are:

  1. Remote KVM
    • Put the PC in a cupboard and operate the keyboard, video and mouse (KVM) remotely.
    • What exactly do we need locally? Just something to transmit the KVM signals presumably.
    • But KVM switches work only over short distances. Over long distances they need an IP communication protocol, with something as a server and something as a client.
    • Here’s how Adder do it: Infinity Transmitter and Receiver
  2. Remote PC
    • Put the PC in a cupboard and connect to it remotely using a remote communications protocol.
    • Strip it down so it shares components with other PC’s like power supply and cooling (a Blade PC).
    • Use a terminal with an operating system on the desk, to run a remote desktop client that communicates with a remote desktop server service.
    • Here’s how HP do it: HPBlade-Thin
  3. Remote Disk, or Remote Boot
    • With the remote PC I still need a terminal locally to run a remote desktop to it. The terminal has a processor, memory, and connections of its own. To avoid all this duplication, why not keep those local and just put the hard drive in a cupboard? Then, obviously, instead of lots of hard drives I could use space on shared disks.
    • The trouble is, I have to get the OS or some of it at least into local memory. RAM is volatile, so I have to do it each time the machine starts up.
    • This works OK if the OS is a stripped down utility like a public kiosk, but not with a full desktop.
  4. Shared OS (aka Terminal Services)
    • Put the PC in a cupboard, but make it a very large PC and use the Windows OS to share out sessions to different users.
    • The OS method of sharing needs to be pretty good to make sure that one part-share of a big PC is as effective as a whole share of a smaller PC.
    • Windows has this built in as Remote Desktop Services. XenApp is a more specialized version: Citrix ICA Connection
  5. Shared hardware (aka Virtual Machine)
    • Instead of having lots of separate PC’s in a cupboard, I could use a Hypervisor on one large machine to share the same physical hardware between different OS instances and allocate one instance to each user.
    • The trouble is, the hypervisor has to run on an Intel processor the same as the virtual machine, so there are only so many multiples I can achieve. I could give one user a very powerful virtual machine, like a workstation. Or I could give 10 or 20 users a smaller machine, like a low-spec PC. But I can’t give lots of users a virtual workstation.
  6. No desktop!
    • Deliver every application into a web browser, instead of a full desktop. Cut out the middleman and go straight from a simple browser on a thin client or even an iPhone to an application on a server somewhere remote.
    • This is essentially what Google Apps are doing.
    • This works fine if every application you need is web-enabled, but if you need even one that isn’t (say, Adobe Photoshop) then you need something else.

All this remoting and sharing. Many of the descriptions are confusing. Many of the claims are misleading. We just need to understand how the box on the user’s desk communicates with the box in the cupboard, and what happens in the box.

To use a full desktop like Windows 7 remotely, we need to use an IP protocol running over ethernet. Everything you see at the remote end has to come via the TCP/IP connection and the communications protocol. A normal graphics cable to your monitor runs at around 4 Gbps or more. Over TCP/IP this has to come down to say 100 Mbps over a LAN, or down to some fraction of 1 Mbps over a WAN. Obviously if you have 10 users on a 2 Mbps WAN leased line, then the most they can have at the same time is 200 Kbps. To come down from 4 Gbps to 200 Kbps means that something has to give in what you can see on the screen and how fast you can work on the desktop.
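
Here is that arithmetic written out, using the figures from the paragraph above.

```python
# The arithmetic from the paragraph above, worked through: how much of the
# local graphics bandwidth survives once a desktop is delivered over a shared
# WAN link. The 4 Gbps and 2 Mbps figures are the ones used in the text.
local_graphics_bps = 4e9            # ~4 Gbps on a local monitor cable
wan_link_bps = 2e6                  # 2 Mbps leased line
users = 10

per_user_bps = wan_link_bps / users                 # 200 Kbps each, at best
reduction = local_graphics_bps / per_user_bps

print(f"Per user on the WAN: {per_user_bps / 1e3:.0f} Kbps")
print(f"Reduction from local graphics bandwidth: {reduction:,.0f}x")
```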

  1. Microsoft use their own Remote Desktop Protocol (RDP). Remote Desktop Services (RDS) runs on the box in the cupboard, and Remote Desktop Connection (RDC) runs on the local box. They communicate using RDP.
  2. Citrix provide a heavily optimized proprietary protocol, Independent Computing Architecture (ICA). XenApp runs on the box in the cupboard. A Citrix client runs on the local box ("plug-in" for Windows, "receiver" for Linux). ICA does a lot of clever things to make the local response appear fast and consistent.

What is happening on the box in the cupboard is exactly the same as you could do if it were under your desk, with the same result. You can run multiple user sessions on one OS, or multiple virtual machines on one physical machine. You can break the physical hardware down from one box into separate Blade servers and SAN storage. You can add specialist graphics accelerators. What you are going to get as a remote desktop is exactly the same as if you were there, except it has to come over one of the remote communication protocols.

There is one thing to add. If we share the hardware of the remote box, we introduce an array of new problems about connecting users to the right machine and configuring that machine to have the right resources. For shared OS these are handled by Remote Desktop Services or XenApp. For shared hardware the connection broker, the virtual disk and so on are solving new problems created by virtualizing the box, not adding new features to the user’s experience.

So when people talk about, say, "a virtual application running on your thin client" what they mean is: "an application running on a box in a cupboard that you can interact with using a remote communication protocol". A "virtual desktop" is a virtual machine running on a box in a cupboard that you can interact with using a remote communication protocol. A "virtual disk" is shared storage for the box in the cupboard that you can interact with using a remote communication protocol.

What difference does it all make in practice?

A. Ergonomics

If you put the PC in a cupboard and connect to it remotely with a thin client there is no doubt it will take less space and create less heat and noise on the desk. You have just moved them somewhere else. You now have two processors, two lots of memory, two power supplies. But clearly, if the desk ergonomics are important, then a remote desktop works well.

B. Security

With the thin client there is nothing much to steal. No information is stored on the client after the user logs off. But with a well managed PC there is very little user data on the PC anyway and if you really want to, even that can be wiped each time.

In a very insecure environment, having literally only the bits of the remote desktop graphics present locally provides less opportunity to exploit. In a normal workplace it won’t make a difference.

C. Robustness

The only difference is a hard drive and a fan. With new solid state disks even this difference is gone. Hard drives don’t fail that often, and with a properly managed desktop, rebuilding the OS image is quick, easy and remote.

D. Cost

A huge subject, and I am not even going to try to generalize, but bear in mind that the thin client is only a remote access device to a desktop provided somewhere else, so it is an added cost, not a reduced cost. By and large, if you need a license to run something on the desktop, you need the same licenses to run it on a remote desktop.

E. Ease of management

Much the same tools are required to manage the servers supplying a remote desktop as the PC desktop. If the desktop is properly automated, one thousand PC’s are not more difficult to manage than the servers supplying a remote desktop to one thousand devices. There is some extra complexity to managing the PC desktop (for example, Wake on LAN for automated patching), but this is balanced by the tools needed for the added complexity of remote desktop sharing, such as load balancing, printing and profile management.

F. Performance

The performance aspect is fascinating and a topic in itself. There is a trade-off between:

  • the amount of graphics needed to see what is going on, and
  • the amount of data that would need to be transferred between my desktop and a server (for example, a file server or application server), and
  • the amount of bandwidth available to me, and
  • the synchronicity of the transaction (I need to see graphics right now, but I can wait for a file to be printed).

If I have a lot of data going between desktop and server and not much bandwidth (for example running a report in a remote finance system), then it might be better if only the graphics have to be sent to me. If I am working with video (for example watching a training DVD), then I want it very local.

The XenApp ICA protocol will compress the graphics of a remote desktop to around 200 Kbps or less. This means we can get a perfectly adequate remote desktop over a WAN, provided we don’t need demanding video. We can open, edit and save a 10MB Powerpoint presentation in a blink with a remote desktop, whereas opening it directly over a 200 Kbps WAN connection would be hopeless.
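
To put numbers on that, here is the transfer-time arithmetic, ignoring protocol overhead and any compression of the file itself.

```python
# Why opening the file remotely wins: moving a 10 MB PowerPoint file over a
# 200 Kbps link versus only sending compressed screen traffic while the file
# stays next to the application. Transfer-time arithmetic only.
file_size_bits = 10 * 1024 * 1024 * 8   # 10 MB file
link_bps = 200_000                       # ~200 Kbps WAN connection

transfer_seconds = file_size_bits / link_bps
print(f"Pulling the file over the WAN: ~{transfer_seconds / 60:.0f} minutes")
# With a remote desktop, only the screen updates cross the WAN, so the file
# opens at server speed and the session stays within the ~200 Kbps budget.
```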

The determining factor for the desktop is really where the data has to live, from every consideration:

  • where the user can have sufficiently responsive access to it
  • where it can be held securely and backed up
  • where you can authenticate to get to it
  • where all the people who need to get to it from different places can reach it
  • how it integrates with application data.

So for example, in a five person remote office, users will want fast access to their own personal data but unless you have a local server with UPS, cooling, and backup it may be better to put the data in a central data center and use a remote desktop to get to it.

G. Flexibility

Let’s say you have a wide range of different applications, used by different people on different desktops. Let’s also say that some are incompatible with others, or have different pre-requisites. Perhaps some users require Office 2003 while others are using Office 2007. Some applications might require Access 2000. Isn’t a local desktop, or a remote virtual desktop, more flexible to deal with this variety?

As long as the applications are not incompatible, you can install them all on a Shared OS remote desktop. You can control what shortcuts people actually see using Group Policy Preferences, or other third party solutions like AppSense. You can use application virtualization, to an extent, to isolate incompatible applications from each other.

To conclude:

Obviously there are many entirely different use cases where one type of desktop delivery works better than another. The aim of this blog is not remotely to generalise across different use cases. The aim is just to see what is actually going on when we remove the PC from the desk.

Citrix EdgeSight 5.1 on Windows Server 2008

EdgeSight is the monitoring product Citrix obtained when they purchased Reflectent Software in 2006. In XenApp 5 it now replaces the old Resource Manager. The Basic features (replacing Resource Manager) are included with the XenApp Enterprise Edition license. You need additional EdgeSight licenses to run the Advanced features and the Endpoint monitoring features. You may also find you need an extra MS SQL Server license.

From the Installation Guide:

  • Basic agents provide the Resource Management capability that is included in XenApp-Enterprise Edition and require only that you have a XenApp Enterprise license available on your Citrix Licensing Server.
  • Advanced agents provide the fully featured version of EdgeSight for XenApp and require that you have either a XenApp-Platinum Edition license or an EdgeSight for XenApp license available on your Citrix Licensing Server.

When you install, you choose what features you are licensed for. You can also change this later in the configuration panel.

[Screenshot: EdgeSight license options]

EdgeSight requires an installation of MS SQL Server Reporting Services, which in turn requires IIS. It also needs an MS SQL database engine. This presents some interesting choices. I usually aim to use a shared MS SQL Server resource, so it can be properly administered. However a non-production service can often use the free MS SQL Server Express edition.

But EdgeSight collects a lot of performance data. It is also rather unusual in creating a separate file group for each reporting period. By default it creates eight 500 MB database files. MS SQL Server Express has a 4 GB database size limit. So it is not really suitable for EdgeSight in production use.
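
A quick back-of-envelope check makes the point. The eight 500 MB files are the EdgeSight defaults described above, and 4 GB is the Express data file limit.

```python
# Back-of-envelope check: the EdgeSight default file groups against the
# SQL Server Express 4 GB database size limit.
file_groups = 8
file_size_mb = 500
express_limit_mb = 4 * 1024

initial_mb = file_groups * file_size_mb
print(f"Initial data files: {initial_mb} MB of a {express_limit_mb} MB limit")
print("Headroom before any data growth:", express_limit_mb - initial_mb, "MB")
```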

[Screenshot: EdgeSight database file groups]

If you have a shared Reporting Services installation you should be able to use that. If not, you need to install Reporting Services somewhere else. However an installation of Reporting Services requires a full license of MS SQL Server. MS SQL Server Express Advanced contains Reporting Services. But you can only use the Express Reporting Services against a database on the same server.

So we end up with EdgeSight 5.1 on Windows Server 2008 in practice requiring either a shared Reporting Services, or a full SQL Server license.

If you have a large Citrix server farm you may consider that an additional SQL Server license is a minor cost. But if you have a small farm and can no longer use Resource Manager, you may be surprised to find you need to pay for an additional SQL Server license to use EdgeSight. You might just get away with it by changing the default file group sizes and setting aggressive purging.

VMware ESXi on HP Proliant G6

I am not exactly sceptical, more like cautious about virtualisation. But ESXi on a Proliant DL380 G6 is a rocket machine.

Virtualisation is great, but I am a little bit cautious about where it fits in. Obviously it is great for development and testing. And it is good for under-used and/or poor quality software that requires isolation. But if you have a file server, a Citrix server or a SQL server that is using most of two processors and most of its memory, then you don’t get far by making it a virtual machine.

But the new HP Proliant DL380 G6 has 2 processors with quad cores. It has a maximum of 144 GB of memory. And space for sixteen 300 GB SAS disks. This is a mighty machine.

The unit of cost in a data center today is power. The DL380 has dual power supplies like anything else, but it consumes no more power than before. So you have more memory, more cores, more storage, but not more data center costs.

Normally you might think of running the OS on a pair of mirrored disks, using up two of your slots. Or if you are brave you might consider Boot from SAN. But the DL380 has another option. You can insert a $5 2GB flash card onto the motherboard and run the whole partitioning software (hypervisor) from there. It really is almost as if the hypervisor is an extension of the BIOS, and why not? Its function is to enable the OS to access the hardware, but in this case it is multiple OS’s. I don’t think it can be long before partitioning the hardware is just a natural part of building the system.

Buy the flash card and download the HP version of ESXi: HP VMware ESXi 4.0 Getting Started

Install ESXi on the flash card:

[Screenshots: ESXi installation on the flash card]

Then when ESXi is up and running, you can manage your virtual machines from the vSphere client.

[Screenshot: vSphere client]