The standard user desktop can be delivered in radically different ways. While this is interesting technically, what difference does it make to your business? Some of the claims are just plain confusing or misleading.
It can be delivered as a standard PC; a netbook; virtualized applications; a remote desktop; a virtual desktop; a virtual disk; the list goes on. It is a big subject, so it is hard to know where to start. There are use cases for the different types of desktop that seem obvious, but the more you look into it the less obvious it becomes.
Let’s explode the problem to see what is actually happening. Then we can form a better view of how the methods differ. On the standard PC we have the following subsystems, all connected by the motherboard:
- Hard drive
- RAM
- Network interface
- Interfaces for different types of devices
- Services like power and cooling
- A BIOS that controls how they work together
I am sorry this is so basic, but we have to start somewhere. Obviously we could move different parts to different places. We could put the graphics controller and a few other bits and pieces near the monitor and the user, and put the rest miles away in a cupboard somewhere. What would this achieve? There would be less noise and heat, and it would take less space on the desk. Nothing to break or steal. Sounds good. What have we got? A terminal. We could obviously explode our PC in lots of different ways to achieve different results. The explosions the engineers have given us today are:
- Remote KVM
  - Put the PC in a cupboard and operate the keyboard, video and mouse (KVM) remotely.
  - What exactly do we need locally? Just something to transmit the KVM signals, presumably.
  - But KVM switches work only over short distances. Over long distances they need an IP communication protocol, with something acting as a server and something as a client.
  - Here’s how Adder do it: Infinity Transmitter and Receiver
- Remote PC
  - Put the PC in a cupboard and connect to it remotely using a remote communications protocol.
  - Strip it down so it shares components such as power supply and cooling with other PCs (a Blade PC).
  - Use a terminal with an operating system on the desk to run a remote desktop client that communicates with a remote desktop server service.
  - Here’s how HP do it:
- Remote Disk, or Remote Boot
  - With the remote PC I still need a terminal locally to run a remote desktop to it. The terminal has a processor, memory, and connections of its own. To avoid all this duplication, why not keep those local and just put the hard drive in a cupboard? Then, instead of lots of hard drives, I could use space on shared disks.
  - The trouble is, I have to get the OS, or some of it at least, into local memory. RAM is volatile, so I have to do it each time the machine starts up.
  - This works OK if the OS is a stripped-down utility like a public kiosk, but not with a full desktop.
- Shared OS (aka Terminal Services)
  - Put the PC in a cupboard, but make it a very large PC and use the Windows OS to share out sessions to different users.
  - The OS method of sharing needs to be pretty good to make sure that one part-share of a big PC is as effective as a whole share of a smaller PC.
  - Windows has this built in as Remote Desktop Services. XenApp is a more specialized version.
- Shared hardware (aka Virtual Machine)
  - Instead of having lots of separate PCs in a cupboard, I could use a hypervisor on one large machine to share the same physical hardware between different OS instances and allocate one instance to each user.
  - The trouble is, the virtual machines all share the processors of that one physical machine, so there are only so many multiples I can achieve. I could give one user a very powerful virtual machine, like a workstation. Or I could give 10 or 20 users a smaller machine, like a low-spec PC. But I can’t give lots of users a virtual workstation.
- No desktop!
  - Deliver every application into a web browser, instead of a full desktop. Cut out the middleman and go straight from a simple browser on a thin client, or even an iPhone, to an application on a remote server.
  - This is essentially what Google Apps are doing.
  - This works fine if every application you need is web-enabled, but if you need even one that isn’t (say, Adobe Photoshop) then you need something else.
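The "only so many multiples" point under Shared hardware can be put into rough numbers. Here is a sizing sketch in Python; the host size and CPU overcommit ratio are invented figures for illustration, not vendor guidance:

```python
# Hypothetical sizing sketch: how many virtual desktops fit on one host?
# HOST_CORES, HOST_RAM_GB and the overcommit ratio are assumed figures.
HOST_CORES = 16
HOST_RAM_GB = 96

def vm_density(vcpus, ram_gb, cpu_overcommit=4):
    """VMs per host, limited by RAM and by (overcommitted) CPU."""
    by_cpu = (HOST_CORES * cpu_overcommit) // vcpus
    by_ram = HOST_RAM_GB // ram_gb
    return min(by_cpu, by_ram)

print(vm_density(vcpus=2, ram_gb=4))    # low-spec PC: 24 per host
print(vm_density(vcpus=8, ram_gb=32))   # workstation-class: 3 per host
```

The asymmetry is the point: the same host carries a couple of dozen low-spec desktops or only a handful of workstation-class ones, which is why you cannot give lots of users a virtual workstation.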
All this remoting and sharing: many of the descriptions are confusing, and many of the claims are misleading. We just need to understand how the box on the user’s desk communicates with the box in the cupboard, and what happens in the box.
To use a full desktop like Windows 7 remotely, we need an IP protocol running over Ethernet. Everything you see at the remote end has to come via the TCP/IP connection and the communications protocol. A normal graphics cable to your monitor runs at around 4 Gbps or more. Over TCP/IP this has to come down to, say, 100 Mbps over a LAN, or to some fraction of 1 Mbps over a WAN. Obviously if you have 10 users on a 2 Mbps WAN leased line, then the most they can have at the same time is 200 Kbps each. To come down from 4 Gbps to 200 Kbps means that something has to give in what you can see on the screen and how fast you can work on the desktop.
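The arithmetic in that paragraph is worth making explicit. A small sketch, using exactly the figures quoted above:

```python
# Back-of-envelope arithmetic: a 2 Mbps WAN leased line shared between
# 10 concurrent users, compared with a local graphics cable.
GRAPHICS_CABLE_BPS = 4_000_000_000   # ~4 Gbps monitor cable
WAN_LINE_BPS = 2_000_000             # 2 Mbps leased line
USERS = 10

per_user_bps = WAN_LINE_BPS // USERS             # what each user gets
reduction = GRAPHICS_CABLE_BPS // per_user_bps   # how much has to give

print(per_user_bps // 1000, "Kbps per user")          # 200 Kbps per user
print(f"{reduction:,}x less than the local cable")    # 20,000x less
```

A 20,000-fold reduction is what the remote display protocol has to absorb through compression and clever redrawing.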
- Microsoft use their own Remote Desktop Protocol (RDP). Remote Desktop Services (RDS) runs on the box in the cupboard, and Remote Desktop Connection (RDC) runs on the local box. They communicate using RDP.
- Citrix provide a heavily optimized proprietary protocol, Independent Computing Architecture (ICA). XenApp runs on the box in the cupboard. A Citrix client runs on the local box ("plug-in" for Windows, "receiver" for Linux). ICA does a lot of clever things to make the local response appear fast and consistent.
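To make the client side concrete: Remote Desktop Connection reads its settings from a plain-text .rdp file. A minimal example might look like this (the hostname and username are placeholders, not real machines):

```text
screen mode id:i:2
full address:s:cupboard-server.example.com
username:s:alice
compression:i:1
```

Opening the file launches RDC against the named box in the cupboard; `screen mode id:i:2` requests a full-screen session, so the remote desktop fills the local display.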
What happens on the box in the cupboard is exactly what could happen if it were under your desk, with the same result. You can run multiple user sessions on one OS, or multiple virtual machines on one physical machine. You can break the physical hardware down from one box into separate Blade servers and SAN storage. You can add specialist graphics accelerators. What you are going to get as a remote desktop is exactly the same as if you were there, except that it has to come over one of the remote communication protocols.
There is one thing to add. If we share the hardware of the remote box, we introduce an array of new problems about connecting users to the right machine and configuring that machine to have the right resources. For shared OS these are handled by Remote Desktop Services or XenApp. For shared hardware the connection broker, the virtual disk and so on are solving new problems created by virtualizing the box, not adding new features to the user’s experience.
So when people talk about, say, "a virtual application running on your thin client" what they mean is: "an application running on a box in a cupboard that you can interact with using a remote communication protocol". A "virtual desktop" is a virtual machine running on a box in a cupboard that you can interact with using a remote communication protocol. A "virtual disk" is shared storage for the box in the cupboard that you can interact with using a remote communication protocol.
What difference does it all make in practice?
A. Space and ergonomics

If you put the PC in a cupboard and connect to it remotely with a thin client, there is no doubt it will take less space and create less heat and noise on the desk. You have just moved them somewhere else: you now have two processors, two lots of memory, two power supplies. But clearly, if the desk ergonomics are important, a remote desktop works well.
B. Security

With the thin client there is nothing much to steal: no information is stored on the client after the user logs off. But with a well-managed PC there is very little user data on the PC anyway, and if you really want to, even that can be wiped each time.

In a very insecure environment, having literally nothing but the remote desktop graphics present locally gives an attacker less to work with. In a normal workplace it won’t make a difference.
C. Reliability

The only difference is a hard drive and a fan, and with new solid state disks even that difference is gone. Hard drives don’t fail that often, and with a properly managed desktop, rebuilding the OS image is quick, easy and remote.
D. Cost

A huge subject, and I am not even going to try to generalize, but bear in mind that the thin client is only a remote access device to a desktop provided somewhere else, so it is an added cost, not a reduced cost. By and large, if you need a license to run something on the desktop, you need the same licenses to run it on a remote desktop.
E. Ease of management
Much the same tools are required to manage the servers supplying a remote desktop as to manage the PC desktop. If the desktop is properly automated, one thousand PCs are no more difficult to manage than the servers supplying remote desktops to one thousand devices. There is some extra complexity in managing the PC desktop (for example, Wake-on-LAN for automated patching), but this is balanced by the tools needed for the added complexity of remote desktop sharing, such as load balancing, printing and profile management.
F. Performance

The performance aspect is fascinating and a topic in itself. There is a trade-off between:
- the amount of graphics needed to see what is going on, and
- the amount of data that would need to be transferred between my desktop and a server (for example, a file server or application server), and
- the amount of bandwidth available to me, and
- the synchronicity of the transaction (I need to see graphics right now, but I can wait for a file to be printed).
If I have a lot of data going between desktop and server and not much bandwidth (for example running a report in a remote finance system), then it might be better if only the graphics have to be sent to me. If I am working with video (for example watching a training DVD), then I want it very local.
The XenApp ICA protocol will compress the graphics of a remote desktop to around 200 Kbps or less. This means we can get a perfectly adequate remote desktop over a WAN, provided we don’t need demanding video. We can open, edit and save a 10 MB PowerPoint presentation in a blink with a remote desktop, whereas opening it directly over a 200 Kbps WAN connection would be hopeless.
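The gap between the two approaches is easy to quantify. A sketch using the figures above (10 MB file, 200 Kbps WAN, decimal megabytes for simplicity):

```python
# Moving the whole file across the WAN versus moving only screen updates.
FILE_MB = 10
WAN_KBPS = 200

file_bits = FILE_MB * 8 * 1_000_000          # 10 MB expressed as bits
transfer_seconds = file_bits / (WAN_KBPS * 1000)
print(f"Opening the file across the WAN: ~{transfer_seconds:.0f} seconds")

# With a remote desktop the file stays on the fast LAN next to the server;
# only the compressed graphics (~200 Kbps) ever cross the WAN.
```

Nearly seven minutes to open the file directly, against a blink for the remote desktop, because the heavy data never leaves the data center.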
The determining factor for the desktop is really where the data has to live, from every consideration:
- where the user can have sufficiently responsive access to it
- where it can be held securely and backed up
- where you can authenticate to get to it
- where all the people who need to get to it from different places can reach it
- how it integrates with application data.
So, for example, in a five-person remote office, users will want fast access to their own personal data, but unless you have a local server with UPS, cooling and backup, it may be better to put the data in a central data center and use a remote desktop to get to it.
Let’s say you have a wide range of different applications, used by different people on different desktops. Let’s also say that some are incompatible with others, or have different prerequisites. Perhaps some users require Office 2003 while others are using Office 2007. Some applications might require Access 2000. Isn’t a local desktop, or a remote virtual desktop, more flexible for dealing with this variety?
As long as the applications are not incompatible, you can install them all on a Shared OS remote desktop. You can control which shortcuts people actually see using Group Policy Preferences, or third-party solutions like AppSense. You can use application virtualization, to an extent, to isolate incompatible applications from each other.
Obviously there are many entirely different use cases where one type of desktop delivery works better than another. The aim of this blog is not remotely to generalize across different use cases. The aim is just to see what is actually going on when we remove the PC from the desk.