Versatile Desktop™ through UEFI

Versatile Desktop is the ability to run different business desktops on the same client device. We can already do this easily through terminal services, but only when we are online, and then without the full features of the client device, such as enhanced graphics or audio.

Unified Extensible Firmware Interface (UEFI) makes it easier than before to run different desktops locally, with the full features of the device. This post looks at how widespread and practical UEFI is as a means of achieving the Versatile Desktop.

We need the Versatile Desktop anywhere that we might previously have used two or more separate physical computers. Examples are:

  1. Television production with a dedicated computer for video editing (no anti-virus) and a standard desktop for other applications
  2. A sales person who is mostly out of the office and needs an unrestricted desktop with admin access when travelling, but no admin access when back on the network
  3. A lawyer with a Windows 7 laptop for the corporate document management system and a Mac for personal use
  4. A software developer with a specialised setup for development tools and a separate desktop for games
  5. A finance department with a dedicated computer and smartcard authentication for a legacy online banking application
  6. A support organisation with desktops on different VPNs for different clients.

The UEFI standard

The Unified Extensible Firmware Interface (UEFI) is an industry standard for an interface layer between the hardware devices and the operating system. Its main purpose is to provide pre-OS services and to load an OS.

So the computer has multiple hardware devices; firmware controls the devices; UEFI drivers and applications perform pre-OS functions and load an OS.

[Diagram: UEFI]

UEFI is a replacement for BIOS. The UEFI standard is organised by the UEFI Forum, which includes all major computer hardware vendors. Most new-design motherboards now ship with UEFI support. However you might not notice it, because the OEM vendor may stick with a traditional BIOS, or may run UEFI in a compatibility mode that behaves like BIOS.

UEFI is a technical standard for pre-OS execution, but it also provides advantages to the user:

  • a graphical user interface
  • networking
  • authentication
  • access to applications provided by the vendor, for example hardware diagnostics or device configuration
  • faster startup.

The most important, in terms of the Versatile Desktop, is that UEFI makes it easier than before to install multiple OSes and to select which one to start. You can have either a default OS, or an option to choose the OS at startup.
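On the Windows side, the boot menu can be scripted with the standard bcdedit tool. A minimal sketch ({GUID} is a placeholder for the identifier that /enum displays for the entry you want):

rem List the boot manager entries and note each OS entry's identifier
bcdedit /enum

rem Make one entry the default ({GUID} comes from the /enum output)
bcdedit /default {GUID}

rem Show the OS choice menu for 10 seconds at every startup
bcdedit /timeout 10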

UEFI Versatile Desktops

Here are a few examples of UEFI in action.

1. Acer

  • The Aspire One D250 Model 1613 has a dual boot option between Windows 7 and Android.

2. Apple Mac

  • Boot Camp is the name for Apple’s implementation of loading the OS through EFI
  • Boot Camp Assistant enables the user to install Windows 7
  • At startup you have an option to choose which OS to run

3. HP notebooks

  • Most HP notebooks implement UEFI
  • HP System Diagnostics is a UEFI application
  • HP notebooks can be switched to UEFI Boot Mode (disabled by default)

So with UEFI we can have graphical and networking system applications before the OS runs; a choice between different full desktop OSes; and perhaps a minimalist quick-starting desktop with access to other remote desktops via Citrix.

Virtual or Versatile Desktop

There is a lot of industry talk about Virtual Desktops at the moment. This is the desktop OS running as a virtual machine on a server in the datacenter. It sounds like the solution to all those difficult desktop problems, but it is more like a niche within a niche. Much more interesting is the Versatile Desktop. The Versatile Desktop is a personal computing device that is able to run different desktops at different times.

Personal computing requirements are hugely varied, so it is not surprising that the industry provides many different solutions apart from the common or garden desktop. The virtual desktop certainly has a place among them. Here’s how I see the logic flow:

  • Standard requirement: standard workstation or laptop
  • When that won’t work: a desktop published over terminal services or Citrix
  • When that won’t work (because the application mix or the personalisation requirements cannot run on a shared server): a desktop running on a virtual machine.

But is a remote connection to a virtual desktop the best way to do this? Perhaps we could just boot the client device into different desktops locally – a Versatile Desktop.

Part of the attraction of the virtual desktop is that we know that virtualization is highly effective for servers, so why not for desktops? The reason is that we are usually trying to do something entirely different. For servers we are trying to partition one hardware device to run multiple OSes at the same time. There are cases where we want the desktop to do that too, for example when running a development lab of several virtual servers on one desktop machine. But mostly we want the desktop hardware to be able to run different OSes at different times: either different users each needing a different desktop, or the same user needing different desktops for different tasks.

Bearing in mind that we can already do this easily with terminal services, the problem only arises when terminal services cannot work, for example when:

  1. the user is offline or on a slow connection
  2. the applications do not work over terminal services or the desktop needs to be heavily personalised
  3. the user requires specialised features on the local device such as: power saving; advanced graphics and audio; wireless and WWAN – and all the other features of a full spec device.

One way to do this is with a client hypervisor (like VMware Workstation). The problem with a hypervisor is that, almost by definition, it cannot give us the full features of the local device. The hypervisor presents generic emulated devices in place of the native hardware, or passes the hardware through, but only to one OS. So for the virtual machine OS we may as well not have a full featured device. If we didn’t need a high spec local device then fine, but then why have it?

A better way to do this would be somehow to switch between different OSes stored on the hard disk. We could store different OSes on different partitions of the hard disk. Then let’s say we had a function key or a small graphical menu so we could just switch between them. We could boot one high performance desktop for one purpose, and a different OS for another. Both would provide a full OS: available offline; running any applications and fully personalised; and with the full features of the local device. The way to do this is with the Unified Extensible Firmware Interface (UEFI).

UEFI is a replacement for the old-fashioned BIOS. BIOS is 16-bit with 1 MB of addressable space, and so is inherently limited in what it can do. There is no mouse in BIOS. UEFI can be 32-bit or 64-bit and so can run a full GUI. In effect, we can have a pre-OS graphical interface that enables the user to choose what to do next. UEFI can boot any UEFI-aware OS, including Windows 7, Linux and Mac OS X.

UEFI began life as EFI for Intel Itanium processors in 2000. The specification is now controlled by the industry-wide Unified EFI Forum, and is at version 2.3.

  • Apple uses UEFI to boot Mac OS X and Windows 7 on the same machine: so-called Boot Camp.
  • Acer uses it on their Aspire One D250 to boot Windows 7 and Android
  • HP uses it on notebooks to provide System Diagnostics and a UEFI Boot Mode.

UEFI provides the opportunity for a Versatile Desktop. With UEFI the user could select:

  • an iPad-like touch screen interface for casual or social usage
  • a production desktop for business usage or heavyweight applications; and different production desktops for different users or purposes
  • a Linux client for a seamless remote desktop over terminal services.

So how does the Versatile Desktop compare to the Virtual Desktop?

Pro

  • Direct access to the full features of the hardware
  • Instantly available

Con

  • UEFI implementations are proprietary. It depends on what the vendor lets you do.

Is this a practical proposition? We will look in another post at the opportunities for using UEFI with the HP EliteBook.

Migrating applications to Windows 7

One of the biggest challenges when upgrading to Windows 7 is in testing and preparing applications. This blog puts together a few conclusions that might assist you in planning the work.

The extended lifespan of Windows XP and Server 2003 has been a sort of "peace dividend" or "pension holiday". When you do come to upgrade it is important not to underestimate the cost and uncertainty involved in application compatibility. But at the same time you don’t need to accept that the migration will take forever.

The problem is that applications can be incompatible with Windows 7 in many different ways. Some of these are trivial and easily solved. Some are harder to solve. Some are hard to find and impossible to solve. You don’t know until you test. The same applies to running the applications in Citrix on Server 2008 R2, with the added complication of 64-bit. Here are a few examples to illustrate:

Standard third party application: Lotus Notes

  • The current version 8.5.1 is not certified on Windows 7. Does it work OK or not? Do you wait until later this year for 8.5.2, or go ahead with 8.5.1? There is a patch, Fix Pack 1, that is certified but it adds complexity to the installation.
  • You would think it would be quite simple to find out: ask the vendor. But most vendors do not certify previous versions. That does not mean they don’t run perfectly well. In this case, although 8.5.1 is not certified, the release notes for Fix Pack 1 contain only trivial changes and 8.5.1 appears to work fine, so there is no reason to delay.

Specialised third party application: legal software

  • The installation fails on Vista/Windows 7. Examination of the logs and the Windows Installer file shows there is a custom action to copy templates into the user profile path. The path is hard-coded and fails.
  • The solution is to customise the installer to remove the custom action and replicate it in a custom script. Inform the vendor so they can modify the installer.
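As a sketch of that kind of fix (the package, transform and path names are illustrative, not the vendor’s):

rem Install with a transform that removes the broken custom action
msiexec /i LegalApp.msi TRANSFORMS=RemoveCopyTemplates.mst /qb /l*v %TEMP%\LegalApp.log

rem Replicate the custom action in a script: copy the templates into the real profile path
xcopy "%ProgramFiles%\LegalApp\Templates" "%APPDATA%\LegalApp\Templates" /e /i /y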

Custom system: membership database

  • This is an old system with a SystemBuilder 4GL graphical interface to a Unidata database. The version of SystemBuilder being used is not certified or even tested on Vista/Windows 7. The SBClient application contains an OEM Pervasive SQL client that is also obsolete. The client does mail merge to Word 2003, so it would need to be tested if used with Word 2007 or 2010.
  • There is a new version of SystemBuilder that, amazingly for such an old product, is certified both on Windows 7 and on Server 2008 R2 Terminal Services. The new version seems to work perfectly with the old system. However you need to change the client side and the server side of the graphical interface at the same time, so it would be a big bang change to a critical system.
  • But, after packaging the old version using Wise Package Studio, it seems to work fine on both Windows 7 and on Server 2008 Terminal Services, so there is no need to upgrade.

Other Gotchas

  • Applications with OEM versions of Crystal Reports 11 or earlier do not install on Windows 7. Crystal Reports 11 is not supported on Windows 7, and you can’t upgrade an OEM edition, but it can be modified to install successfully.
  • Applications using the common VB6 SendKeys function raise an error on Windows 7. SendKeys does not work with UAC. UAC can only be turned off per computer, not per application, so there is no workaround except to turn UAC off entirely.
  • In XP you can use the Printer button on the PageSetupDialog to set different printer properties for an application. In the Vista/Windows 7 API it’s gone. There’s no error, it’s just not there. But in .NET Framework it’s still there! This might seem rather obscure, but the point is: you would have to do a lot of testing to discover this and then find out whether it matters to your users of that application.

Obviously you could wait till your applications are fully tested or upgraded to the latest certified versions, but this could take impossibly long. If you have just one core application that is not ready, you can’t upgrade the desktop.

A lot of people seem to be combining application virtualization with a Windows 7 rollout. Perhaps surprisingly, application virtualization is largely irrelevant to compatibility across OSes. With a virtualized app, the same DLLs run within the OS with exactly the same results. If the application faults natively, it will fault when virtualized. Virtualization can be used to implement a compatibility fix, but you still need the fix.

The best way to approach this is with a structured testing environment and a full set of delivery options. Then, for the difficult applications, you can set a time limit.

Structured Testing Environment

  • Wise Package Studio or similar, to examine the internal structure of the application and check for conflicts between applications.
  • A full VMware testing environment with VMware Workstation and ESXi, so you can create all the packaging and testing environments you need and, most importantly, leave them running so that users can log on remotely to test.
  • Scripted or automated tests and test accounts for each application.
  • Microsoft Application Compatibility Toolkit for testing and distributing fixes
  • Thorough documentation and audit trail of the testing.

Delivery options

  • Native installation for compatible and well behaved applications
  • Citrix XenApp published applications, or perhaps virtual desktop, for incompatible applications
  • Virtualization for conflicting applications (e.g. applications that require different versions of common components) or badly behaved applications (e.g. applications that change the default behaviour of the OS)

Most larger organisations already use several delivery options. What is new is to work out the interdependencies of different applications and which platforms they need to sit on. For example, if the incompatible app does a mail merge to Word or a report export to Excel, then the back end platform needs to have Office. It won’t be able to merge and export to the front end. This means that you also have to consider the user profile settings across different delivery platforms. If the user changes a default printer on the Windows 7 front end, should the same change be made to the back end or not?

With this approach, structured testing and multiple delivery options, you can set a time limit for preparing applications for Windows 7 migration. You can migrate the core desktop to Windows 7, while migrating older applications when they are ready.

Citrix: Off your Trolley Express

Until recently it has been possible to automate the installation of most software on a Windows computer using Group Policy. Group Policy is a standard component of a Windows domain and so there is no additional cost. Starting with version 11.2 Citrix no longer recommend using Group Policy to install the Citrix Online plug-in. Are they off their trolley?

Windows Installer

Microsoft introduced Windows Installer with Windows 2000 as the preferred method for installing software on Windows computers. Windows Installer is a service in Windows that provides the standard mechanisms for software installations. The vendor creates an installation package with an .msi file. The msi is a database that contains instructions and resources for Windows Installer. When the user runs the msi, Windows Installer performs the actions indicated for the package. The benefit for the user and the vendor is that there is a standard process for:

  • identifying what is installed
  • installing, repairing and removing an application
  • updating an application with a patch or an upgrade
  • performing custom steps (depending on the existing hardware or OS configuration, for example)
  • managing the User Interface and silent installation
  • and many other standard software installation processes: feature selection; rollback; logging; advertisement.
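These standard mechanisms are all visible at the command line. For an illustrative package.msi, the same Windows Installer verbs work for any well-formed package:

rem Install silently, with verbose logging
msiexec /i package.msi /qn /l*v install.log

rem Repair the installation
msiexec /f package.msi

rem Apply a patch
msiexec /p update.msp /qn

rem Remove the application
msiexec /x package.msi /qn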

Windows Installer is now at version 5.0 and most vendors have adapted their installations one way or another to use the Windows Installer service. Vendor installation packages have gradually evolved. In a first stage, many vendors adapted their existing installation routine simply to run as Custom Actions within the Windows Installer package. This defeated the object of using Windows Installer, since the service had only the most limited information about the package. But at least the basics of the installation worked in a standard way.

In a second stage, vendors have adapted their installation to be performed as a native Windows Installer package, with the properties and actions handled directly by Windows Installer. The vendor typically uses Custom Actions only for tasks that are not handled by the Windows Installer service.

Most vendors have continued to include an executable Setup with the installation package. This might typically perform a few pre-installation functions, and then run the msi. For example, the Setup might install .NET Framework 2.0 as a pre-requisite. The Setup can also customize the msi depending, for example, on the OS language, by generating a Transform (with an mst file extension). Provided you know what the Setup does, you can perform those tasks independently and just extract and run the msi directly.

Group Policy Software Installation

Who cares whether the installation is a Windows Installer msi or a non-Windows Setup, as long as it works? The answer is: Group Policy. Also starting with Windows 2000 Microsoft introduced Group Policy to control the configuration of computers in the Domain. Group Policy uses client side extensions to perform different types of actions defined in domain policies. One of these extensions is Software Installation. The Software Installation client side extension tells Windows Installer what installation actions to run.

  • The Group Policy Software Installation policy knows whether it has run or not. It knows what users or computers it needs to run for. It knows not to run over a slow link. It can use a WMI filter to run on certain classes of computer and not others. Depending on the policy configuration it passes commands to Windows Installer to perform the installation, upgrade or removal of software.
  • Windows Installer then performs those actions the same as if the command line were executed manually. It reports back to the client side extension whether the installation was successful. The client side extension reports back to the Group Policy service whether the policy has been completed successfully or not.

Group Policy is a standard component of Windows domains, and therefore there is no additional cost for using it to install software. Without Group Policy you need to use some kind of third party tool. Although technically you can run a script, this method does not provide the control of the installation that you have with Group Policy, unless in effect you develop your own custom client side extension. Group Policy Software Installation operates only on Windows Installer database (msi) and transform (mst) files. It does not operate on Setup (exe) files. So if the vendor package comes as a Setup, and does not unpack as an msi, it cannot be installed by Group Policy Software Installation.

Many enterprises will already have a separate software installation tool, like SCCM or Altiris Software Delivery Solution. But if you do not, and you use Group Policy Software Installation, then you need an msi. Now that most vendors have adapted their installations to use Windows Installer, the great majority of products can be installed using Group Policy Software Installation:

  • either directly
  • or by extracting the msi from the Setup (see the sketch below)
  • or by re-packaging older or simpler products using something like Wise Package Studio.
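Extraction is vendor-specific: some Setups take a documented switch, and many simply unpack the msi into a temporary folder before launching it. A rough sketch of catching an unpacked msi (the names and paths are illustrative):

rem Launch the vendor Setup; many wrappers unpack their msi before installing
start "" \\server\packages\VendorSetup.exe

rem While the Setup's first dialog is still on screen, find the unpacked msi
dir /s /b "%TEMP%\*.msi"

rem Copy it out for use with Group Policy Software Installation
copy "%TEMP%\VendorApp\VendorApp.msi" \\server\GPSIPackages\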

Problems

Rather ironically, having set the standard and provided the tools, Microsoft were the first major vendor to break ranks. Office 2007 has a Setup that runs a series of separate msi’s but it uses a Patch file (msp) instead of a Transform (mst) to customize the installation. Group Policy Software Installation cannot run a Patch file. If you want to customize the installation of Office 2007 (for example to select which components to install) you cannot use Group Policy. Microsoft simply recommend that you use their client management tool SCCM. But if you were quite happily proceeding with all your software installations using Group Policy, it was a bit of a shock to find that you can’t install Office 2007 that way. Fortunately there is a workaround that enables you to perform a standard installation without customization, and therefore with Group Policy. Here are the deployment options MS recommend for installing Office 2010. You will see that they don’t include Group Policy Software Installation.

Now (since Version 11.2) Citrix have taken a similar approach for the Citrix Online plug-ins, for similar reasons. The Citrix "client" now consists of several components or plug-ins:

  1. Web plug-in that provides the core XenApp ICA client functions and enables connection to a XenApp farm using a web browser (always required)
  2. Desktop Viewer that provides controls and preferences for a published desktop (optional)
  3. USB handler that controls what happens when you plug in a USB device during a session (optional)
  4. Program Neighborhood Agent (PNA) that reads a configuration from a XenApp Web Interface server and configures shortcuts in the Start menu for the published applications
  5. Single Sign-on that captures the user logon details and enables the PNA to pass them through to the Web Interface server (optional, for the PNA)
  6. HDX media stream for Flash Player for client side rendering of Flash content (optional)

Why so complicated? Citrix are trying to provide a client that works both for published applications (connection to a Terminal Server) and virtual desktops (connection to a Virtual Machine running a Windows client OS like Windows 7) based on a combination of plug-ins. This is Citrix making a big move to dominate the market for Virtual Machine-based desktops by adapting their ICA protocol and client services for connections to a remote VM.

Each plug-in is an msi. However Citrix have developed a custom setup controller called Trolley Express to control the running of the individual msi’s. Trolley Express does the following:

  • manages the sequence of msi’s and their rollback in the event of failure
  • manages upgrades and removal
  • provides an overall log file, and a log for each msi
  • passes the OS language to the individual msi’s
  • passes command line parameters to the msi’s in a transform.

It’s not very much really. I don’t see anything here that could not have been developed as an msi wrapper with nested msi’s, or indeed as separate msi’s with component options. Here’s an extract from the log file to show what Trolley Express is doing.

But Citrix have gone much further than just using a custom setup. They have developed a whole proprietary client management system. The Merchandising Server acts as a client management server, and the Receiver acts as an agent performing the plug-in installation and configuration determined by the Server. This operates independently of Microsoft domains. You could run it on a campus and control the client on any computer connecting to a Citrix service. You can use it for the Citrix Access Gateway (SSL VPN) client as well as the XenApp server client. There is a receiver for Windows, Mac, iPad and Smart Phone.

Installation of the client with Group Policy is still possible, and it works faultlessly, but Citrix do not recommend it. They say:

"Citrix does not recommend extracting the .msi files in place of running the installer packages [an exe]. However, there might be times when you have to extract the plugin .msi files from CitrixOnlinePluginFull.exe manually, rather than running the installer package (for example, company policy prohibits using the .exe file). If you use the extracted .msi files for your installation, using the .exe installer package to upgrade or uninstall and reinstall might not work properly. The Administrative installation option available in some previous versions of the plugin is not supported with this release. To customize the online plugin installation, see ‘To configure and install the online plugin using the plugin installer and commandline parameters [only available for the .exe]’."

This seems a big jump, from an msi that can be installed by Group Policy, to a full client management system and no msi. But it is really the same way that other complex clients are managed: SCCM itself; Microsoft Forefront Client Security; McAfee ePolicy Orchestrator. They all use a server to install and configure the clients and agents. Citrix provide the Merchandising server as a virtual appliance, so you don’t even need an additional license for the OS or database.

In summary:

  1. You can install nearly everything on a Windows computer using Group Policy Software Installation, and it is a standard component of Windows domains with no extra cost
  2. First Office 2007 and now the Citrix Online Plug-in are not recommended for Group Policy installation – although they can be made to work
  3. Do you need to buy a software delivery tool after all? Nearly, but not quite. For Citrix you can use the Merchandising Server appliance.

I am all in favour of client management tools like SCCM and Altiris where you need them. But I also like to reduce costs where you don’t. For the moment you can still get by with Group Policy Software installation.

Thick or thin client

The standard user desktop can be delivered in radically different ways. While this is interesting technically, what difference does it make to your business? Some of the claims are just plain confusing or misleading.

The options include: standard PC; netbook; virtualized applications; remote desktop; virtual desktop; virtual disk; the list goes on. It is a big subject, so it is hard to know where to start. There are use cases for different types of desktop that seem obvious, but the more you look into it the less obvious it is.

Let’s explode the problem to see what is actually happening. Then we can form a better view of how the methods differ. On the standard PC we have the following subsystems, all connected by the motherboard:

  • Processor
  • Hard drive
  • RAM
  • Graphics
  • Network interface
  • Interfaces for different types of devices
  • Services like power and cooling
  • A BIOS that controls how they work together.

I am sorry this is so basic but we have to start somewhere. Obviously we could move different parts to different places. We could put the graphics controller and a few other bits and pieces near to the monitor and the user, and put the rest miles away in a cupboard somewhere. What would this achieve? There would be less noise and heat and it would take less space on the desk. Nothing to break or steal. Sounds good. What have we got? A Terminal. We could obviously explode our PC in lots of different ways to achieve different results. The different explosions the engineers have given us today are:

  1. Remote KVM
    • Put the PC in a cupboard and operate the keyboard, video and mouse (KVM) remotely.
    • What exactly do we need locally? Just something to transmit the KVM signals presumably.
    • But KVM switches work only over short distances. Over long distances they need an IP communication protocol, with something as a server and something as a client.
    • Here’s how Adder do it, with the Infinity Transmitter and Receiver.
  2. Remote PC
    • Put the PC in a cupboard and connect to it remotely using a remote communications protocol.
    • Strip it down so it shares components with other PCs like power supply and cooling (a Blade PC).
    • Use a terminal with an operating system on the desk, to run a remote desktop client that communicates with a remote desktop server service.
    • Here’s how HP do it, with the Blade PC.
  3. Remote Disk, or Remote Boot
    • With the remote PC I still need a terminal locally to run a remote desktop to it. The terminal has a processor, memory, and connections of its own. To avoid all this duplication why not keep those local and just put the hard drive in a cupboard? Then, obviously, instead of lots of hard drives I could use space on shared disks.
    • The trouble is, I have to get the OS or some of it at least into local memory. RAM is volatile, so I have to do it each time the machine starts up.
    • This works OK if the OS is a stripped down utility like a public kiosk, but not with a full desktop.
  4. Shared OS (aka Terminal Services)
    • Put the PC in a cupboard, but make it a very large PC and use the Windows OS to share out sessions to different users.
    • The OS method of sharing needs to be pretty good to make sure that one part-share of a big PC is as effective as a whole share of a smaller PC.
    • Windows has this built in as Remote Desktop Services. XenApp is a more specialized version.
  5. Shared hardware (aka Virtual Machine)
    • Instead of having lots of separate PCs in a cupboard, I could use a Hypervisor on one large machine to share the same physical hardware between different OS instances and allocate one instance to each user.
    • The trouble is, the hypervisor has to run on the same Intel processor as the virtual machines, so there are only so many multiples I can achieve. I could give one user a very powerful virtual machine, like a workstation. Or I could give 10 or 20 users a smaller machine, like a low-spec PC. But I can’t give lots of users a virtual workstation.
  6. No desktop!
    • Deliver every application into a web browser, instead of a full desktop. Cut out the middleman and go straight from a simple browser on a thin client or even an iPhone to an application on a server somewhere remote.
    • This is essentially what Google Apps are doing.
    • This works fine if every application you need is web-enabled, but if you need even one that isn’t (say, Adobe Photoshop) then you need something else.

All this remoting and sharing. Many of the descriptions are confusing. Many of the claims are misleading. We just need to understand how the box on the user’s desk communicates with the box in the cupboard, and what happens in the box.

To use a full desktop like Windows 7 remotely, we need to use an IP protocol running over Ethernet. Everything you see at the remote end has to come via the TCP/IP connection and the communications protocol. A normal graphics cable to your monitor runs at around 4 Gbps or more. Over TCP/IP this has to come down to, say, 100 Mbps over a LAN, or to some fraction of 1 Mbps over a WAN. Obviously if you have 10 users on a 2 Mbps WAN leased line, then the most they can each have at the same time is 200 Kbps. To come down from 4 Gbps to 200 Kbps means that something has to give in what you can see on the screen and how fast you can work on the desktop.

  1. Microsoft use their own Remote Desktop Protocol (RDP). Remote Desktop Services (RDS) runs on the box in the cupboard, and Remote Desktop Connection (RDC) runs on the local box. They communicate using RDP.
  2. Citrix provide a heavily optimized proprietary protocol, Independent Computing Architecture (ICA). XenApp runs on the box in the cupboard. A Citrix client runs on the local box ("plug-in" for Windows, "receiver" for Linux). ICA does a lot of clever things to make the local response appear fast and consistent.

What is happening on the box in the cupboard is exactly the same as you could do if it were under your desk, with the same result. You can run multiple user sessions on one OS, or multiple virtual machines on one physical machine. You can break the physical hardware down from one box into separate Blade servers and SAN storage. You can add specialist graphics accelerators. What you are going to get as a remote desktop is exactly the same as if you were there, except it has to come over one of the remote communication protocols.

There is one thing to add. If we share the hardware of the remote box, we introduce an array of new problems about connecting users to the right machine and configuring that machine to have the right resources. For shared OS these are handled by Remote Desktop Services or XenApp. For shared hardware the connection broker, the virtual disk and so on are solving new problems created by virtualizing the box, not adding new features to the user’s experience.

So when people talk about, say, "a virtual application running on your thin client" what they mean is: "an application running on a box in a cupboard that you can interact with using a remote communication protocol". A "virtual desktop" is a virtual machine running on a box in a cupboard that you can interact with using a remote communication protocol. A "virtual disk" is shared storage for the box in the cupboard that you can interact with using a remote communication protocol.

What difference does it all make in practice?

A. Ergonomics

If you put the PC in a cupboard and connect to it remotely with a thin client there is no doubt it will take less space and create less heat and noise on the desk. You have just moved them somewhere else. You now have two processors, two lots of memory, two power supplies. But clearly, if the desk ergonomics are important, then a remote desktop works well.

B. Security

With the thin client there is nothing much to steal. No information is stored on the client after the user logs off. But with a well managed PC there is very little user data on the PC anyway and if you really want to, even that can be wiped each time.

In a very insecure environment, having literally only the bits of the remote desktop graphics present locally provides less opportunity to exploit. In a normal workplace it won’t make a difference.

C. Robustness

The only difference is a hard drive and a fan. With new solid state disks even this difference is gone. Hard drives don’t fail that often, and with a properly managed desktop, rebuilding the OS image is quick, easy and remote.

D. Cost

A huge subject, and I am not even going to try to generalize, but bear in mind that the thin client is only a remote access device to a desktop provided somewhere else, so it is an added cost, not a reduced cost. By and large, if you need a license to run something on the desktop, you need the same licenses to run it on a remote desktop.

E. Ease of management

Much the same tools are required to manage servers supplying the remote desktop as the PC desktop. If the desktop is properly automated, one thousand PCs are not more difficult to manage than servers supplying remote desktops to one thousand devices. There is some extra complexity to managing the PC desktop (for example, Wake on LAN for automated patching) but this is balanced by the tools needed for the added complexity of remote desktop sharing, such as load balancing, printing and profile management.

F. Performance

The performance aspect is fascinating and a topic in itself. There is a trade-off between:

  • the amount of graphics needed to see what is going on, and
  • the amount of data that would need to be transferred between my desktop and a server (for example, a file server or application server), and
  • the amount of bandwidth available to me, and
  • the synchronicity of the transaction (I need to see graphics right now, but I can wait for a file to be printed).

If I have a lot of data going between desktop and server and not much bandwidth (for example running a report in a remote finance system), then it might be better if only the graphics have to be sent to me. If I am working with video (for example watching a training DVD), then I want it very local.

The XenApp ICA protocol will compress the graphics of a remote desktop to around 200 Kbps or less. This means we can get a perfectly adequate remote desktop over a WAN, provided we don’t need demanding video. We can open, edit and save a 10 MB PowerPoint presentation in a blink with a remote desktop, whereas opening it directly over a 200 Kbps WAN connection would be hopeless.

The determining factor for the desktop is really where the data has to live, from every consideration:

  • where the user can have sufficiently responsive access to it
  • where it can be held securely and backed up
  • where you can authenticate to get to it
  • where all the people who need to get to it from different places can reach it
  • how it integrates with application data.

So for example, in a five person remote office, users will want fast access to their own personal data but unless you have a local server with UPS, cooling, and backup it may be better to put the data in a central data center and use a remote desktop to get to it.

G. Flexibility

Let’s say you have a wide range of different applications, used by different people on different desktops. Let’s also say that some are incompatible with others, or have different prerequisites. Perhaps some users require Office 2003 while others are using Office 2007. Some applications might require Access 2000. Isn’t a local desktop, or a remote virtual desktop, more flexible in dealing with this variety?

As long as the applications are not incompatible, you can install them all on a Shared OS remote desktop. You can control what shortcuts people actually see using Group Policy Preferences, or other third party solutions like AppSense. You can use application virtualization, to an extent, to isolate incompatible applications from each other.

To conclude:

Obviously there are many entirely different use cases where one type of desktop delivery works better than another. The aim of this blog is not remotely to generalise across different use cases. The aim is just to see what is actually going on when we remove the PC from the desk.

Windows 7 Deployment Part 7

There are several different tools for installing drivers in Windows 7. This blog aims to describe them and show how they differ.

Driver installation tools for Windows 7:

  • DISM
  • DPInst
  • DrvInst
  • PnpEnum
  • PnPUnattend
  • PnPUtil

DISM

Deployment Image Servicing and Management (dism.exe) is the new tool for modifying Windows images. It replaces the individual tools that were introduced for Vista images. There is plenty of documentation about DISM.

DISM is a "framework" tool that gives access to different "providers". The DISM host itself controls things like logging and rebooting. The different providers do the work with their own command line options, called by the DISM host.

  • CBSProvider (Component Based Servicing)
  • OSProvider (OS updates)
  • WIMProvider (handling the WIM file)
  • SMIProvider (Settings Management)
  • DMIProvider (Driver Management)

DISM is a tool for servicing both Online and Offline images. Dism /image:[path] refers to a mounted offline image. Dism /online refers to the current running image. However, you cannot use DISM to add or remove drivers from an online Windows 7 image. The facility to do this does not exist. The commands /add-driver and /remove-driver apply only to offline images. When the image is Online:

  • you can add and remove updates
  • you can enable and disable features
  • you cannot add or remove drivers.

Servicing in Audit mode, with the image online, uses PnPUnattend, not DISM.
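A few representative commands show the split. C:\mount stands for a mounted offline image:

rem Offline image: driver servicing is available
dism /image:C:\mount /add-driver /driver:C:\Drivers /recurse

rem Online image: you can enumerate drivers and manage features and updates,
rem but /add-driver and /remove-driver are not accepted
dism /online /get-drivers
dism /online /enable-feature /featurename:TelnetClient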

DPInst

Driver Package Installer (DPInst.exe) is part of the Driver Install Frameworks (DIFx) provided to vendors to enable them to distribute drivers. DIFx gives vendors three tools of varying complexity to install drivers. The intention is that the user just clicks Run and is insulated from the method used.

  • DPInst
  • DIFxApp
  • DIFxAPI
  1. DPInst is a very simple installer. Put the inf and sys files together in a folder with a copy of DPInst.exe. Execute DPInst and the driver is imported into the Driver Store. By default, the tool searches the current directory and tries to install all driver packages found.
  2. DIFxApp is a plug-in or extension to Windows Installer or InstallShield. It provides the actions that enable, for example, a Windows Installer msi to install drivers.
  3. DIFxApi enables the vendor to write custom installers.

So by using DPInst you would effectively be authoring a driver installation package on behalf of the vendor. Why does Microsoft provide DPInst at all? Because it has simple features to enable a vendor to distribute a driver, including:

  • localisation
  • customisation of text, icons and bitmaps
  • an option to add an EULA

It might be fun to use this to add drivers during deployment, but it is not what it is for.

DrvInst

DrvInst.exe is the Driver Installation Module of Windows 7. When Windows detects new hardware, DrvInst is the module that selects drivers from the Driver Store and sets up the driver for the hardware.

PnpEnum

PnpEnum.exe is a Microsoft utility that enumerates the Plug and Play hardware IDs in a system. It is not part of the operating system. It is supplied as part of the Microsoft Platform Support Reporting tools. It is also part of Microsoft Deployment Toolkit (MDT).

In MDT 2010 it is used by the ZTIDrivers.wsf script as part of the task of importing drivers:

  • PnpEnum.exe outputs the hardware IDs in pnpenum.xml
  • this is matched against the list of MDT Out-of-Box Drivers in drivers.xml
  • matching drivers are copied to C:\Drivers
  • C:\Drivers is defined as the Driver Path in the PnpCustomizationsNonWinPE component of the offlineServicing or auditSystem pass.

PnPUnattend

PnPUnattend.exe is part of the operating system. During the auditSystem pass of setup (if there is one configured to run) it automatically imports drivers in the path defined in the unattend.xml PnpCustomizationsNonWinPE component.

This is specifically to install drivers from a path, unattended, as part of the Audit pass of setup. The command line options are: /s to search the driver path without importing; and /l to show logging information.
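You can also run it by hand from an elevated command prompt, which is a convenient way to test the configured driver path before resealing the image:

rem Search the configured driver path and report matches without installing
pnpunattend auditsystem /s /l

rem Import and install the matching drivers
pnpunattend auditsystem /l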

PnPUtil

PnPUtil.exe is part of the operating system. PnPUtil is a command-line tool to add and remove third party plug and play drivers. After Windows 7 is deployed, you can use PnPUtil to add or update specific drivers.

  • PnPUtil -a imports the specified driver or drivers
  • PnPUtil -e lists the third party drivers that have already been added (but it does not provide a facility to pipe the output to a file)
  • PnPUtil -d deletes the specified third party driver.
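For example (the folders and the oem number are illustrative):

rem Add all driver packages under a folder to the Driver Store
pnputil -a C:\Drivers\Video\*.inf

rem Add a package and also install it on any matching devices
pnputil -i -a C:\Drivers\Nic\netcard.inf

rem List the third party packages, then delete one by its published name
pnputil -e
pnputil -d oem23.inf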

Windows 7 Deployment Part 6

Importing a block of drivers into an image takes quite a bit of time. This is not important before deployment, but during deployment it can add many minutes to the imaging process. During deployment you really want a process to inspect the target computer and obtain just the drivers required for it. For this we need specialist tools. Microsoft Deployment Toolkit (MDT) 2010 does this. It is interesting to see how it does it.

In Microsoft Deployment Toolkit (MDT) 2010, open the Deployment Workbench and import drivers into the Out-of-Box Drivers folder.

[Screenshot: Workbench Import Drivers]

You can also filter the drivers into Selection Profiles.

[Screenshot: Workbench Selection Profile]

In the Task Sequence, select the Preinstall action Inject Drivers.

[Screenshot: Task Sequence Preinstall]

When we study the Setup methods and the unattend.xml syntax, there is no such thing as "Preinstall", and no component to "inject" drivers, so what is going on?

Here is what happens when you use the MDT Preinstall task sequence to inject drivers:

  1. MDT uses its own tool PnpEnum.exe in the WinPE pass to:
    • enumerate the hardware devices on the computer
    • identify drivers for these devices in the Out-of-Box Drivers folder on the server
    • copy them into a folder in the offline image, C:\Drivers
  2. Setup performs an Offline Servicing pass
    • The folder C:\Drivers is specified as the Driver Path in the PnpCustomizationsNonWinPE component of the offlineServicing pass
    • Deployment Image Servicing and Management (DISM) does its stuff to import the drivers
  3. During the Specialize pass, Setup selects and installs drivers from the Driver Store as normal.

The DISM import process took 1 minute on a test VM, because only the required drivers were imported, not all the drivers in the Out-of-Box Drivers folder. Apart from that, the process is the same as the DISM process performed in Offline Servicing.

Here are the details of the process:

  • The MDT scripted installation process running in WinPE executes the ZTIDrivers.wsf script
  • ZTIDrivers runs PnpEnum.exe from the Deployment Share on the server. PnpEnum is an MDT tool that enumerates the hardware devices in the computer. The script pipes the output to a local file PnpEnum.xml.
  • ZTIDrivers looks at the Selection Profile for this Task Sequence and finds the folders on the network with the drivers matching the profile.
  • ZTIDrivers processes each hardware device in PnpEnum.xml and checks if there is a matching driver.
  • If there is, the driver is copied to C:\Drivers.
  • The ZTIDrivers script ends.
  • By default there is an offlineServicing pass, and if there are drivers in C:\Drivers they will be imported.

So the trick that MDT has performed is to find only the required drivers, instead of a large block of drivers. The process of importing and selecting the drivers is exactly the same as if you had used Windows Deployment Services (WDS) with an unattend.xml file built with Windows System Image Manager (WSIM).

Windows 7 Deployment Part 5

If you have only a few standard models of computer in the organisation then you can maintain specific Windows 7 images for each. But if you have many models you may want to be able to add or update drivers without capturing a new image.

This piece looks at the different ways you can add drivers, and what happens when you do.

You can add or remove drivers in two main ways:

  1. Servicing the image as a file
  2. Running Windows 7 in Audit mode.

You can do either of these at two different places in the deployment workflow:

  1. On a base image, recapturing the image as an updated or customised version of the base before it is deployed anywhere
  2. While deploying an image to a specific production computer.

This gives you four potential methods:

  1. Servicing the image before deployment
  2. Servicing the image during deployment
  3. Running in Audit mode before deployment
  4. Running in Audit mode during deployment.

First, a bit of background about how Windows 7 manages drivers. Windows 7 keeps a database of approved drivers in a special folder called the Driver Store. This folder has Full Control permissions only for System and Trusted Installer. Therefore the interactive session cannot write to it, and a system process is needed to import drivers into it.

The processes involved in using a driver from the Driver Store are:

  • Windows detects new hardware
  • Windows selects the best match drivers in the Driver Store and sets up the hardware with the drivers.

The default Driver Store in Windows 7 has in the order of 650 sets of drivers.

[Screenshot: Driver Store folder count]

1. Servicing before deployment

Before deployment the Windows 7 image exists as a file. The file can be opened and manipulated. Deployment Image Servicing and Management (DISM) is the new tool to do this. DISM is supplied as part of the Windows Automated Installation Kit (WAIK), which you set up on any deployment workstation or server. DISM mounts the image file (.wim) as a folder in Windows, giving you the opportunity to add files or edit the registry.

After mounting the image with dism /mount-wim, you add drivers to the driver store with dism /add-driver. You then commit the changes and unmount the image. Obviously you could do something like add a set of Dell drivers to a default image and commit it as a customised image for Dells.
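Assuming the image file is C:\Images\install.wim and the extra drivers sit under C:\Drivers, the sequence looks like this:

rem Mount the first image in the wim to an empty folder
dism /mount-wim /wimfile:C:\Images\install.wim /index:1 /mountdir:C:\Mount

rem Import every driver package found under the folder, recursively
dism /image:C:\Mount /add-driver /driver:C:\Drivers /recurse

rem Commit the changes and unmount
dism /unmount-wim /mountdir:C:\Mount /commit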

Dism /add-driver reads each driver INF file provided in the command line and does the following:

  1. Imports the driver package into a new folder in the Driver Store
  2. Indexes the INF file as oemN.inf in the %SystemRoot%\inf folder
  3. For boot critical drivers only:
    • creates the registry keys in the offline registry hive
    • copies files into %SystemRoot%\System32 and %SystemRoot%\System32\Drivers
    • copies files into %ProgramFiles%\[Vendor].

Dism /add-driver does not run the DrvInst process to select the driver. This only happens online.

For three devices from one vendor this took 11 minutes. Now these drivers will be loaded on boot or evaluated by Plug and Play on hardware detection the same as if they had been shipped with Windows.

[Screenshot: Driver Store folder count after the import]

You can also use DISM to:

  • enable or disable Windows features
  • apply Windows updates (but not service packs)
  • add or remove a language pack, and configure international settings

2. Servicing during deployment

You can modify the Windows 7 image after it is copied to the computer but before it boots by performing an Offline Servicing pass in Setup.

To add drivers during the Offline Servicing pass you need to configure the sysprep unattend.xml file.

To configure the file with Windows System Image Manager (WSIM), which is also part of the WAIK:

  1. Open the Distribution Share.
  2. Right click the Out-of-Box Drivers folder and select Insert Driver Path to Pass 2 offlineServicing. The required component is added to your unattend.xml file.
  3. The path can be the Distribution Share, or it can be a folder in the image like C:\Drivers, or both. If a network share, you can specify credentials.

Note that WSIM is just a GUI for editing the unattend. The Out-of-Box Drivers folder shown in the Distribution Share is a way of adding the correct path.

Here is what happens when you perform Setup with an offlineServicing pass:

  • WinPE, after partitioning the disk and laying down the image file, moves into an offlineServicing pass and calls Deployment Image Servicing and Management (DISM)
  • DISM imports the driver package from the Driver Path you specified into a folder in the Driver Store, in the same way as it does before deployment.

After the offlineServicing pass, Windows boots as normal into the Specialize pass where DrvInst is called to select matching drivers from the Driver Store based on driver ranking.

3. Audit mode before deployment

Another way to add or update drivers before deployment is to perform an Audit pass. The Audit pass is a way of booting Windows without going through the normal Windows Welcome that completes setup. The difference between servicing an offline image and an Audit pass is that in Audit mode the OS is actually running. DISM before deployment will install drivers and updates, but it will not install user applications or service packs. You can do these things with an Audit pass.

Specifying an Audit pass instead of OOBE during capture of the base image:

[Screenshot: Sysprep set to Audit mode]

To add drivers automatically during the Audit pass you need to configure the sysprep unattend.xml file:

  1. Open WSIM.
  2. Right click the Out-of-Box Drivers folder and select Insert Driver Path to Pass 5 auditSystem. The required component is added to your unattend.xml file.
  3. As with Offline Servicing, the path can be the Distribution Share, or it can be a folder in the image like C:\Drivers, or both.

Here is what happens when you select an auditSystem pass and specify the Driver Path:

  • Windows 7 boots into Audit mode
  • Setup launches PnPUnattend.exe
  • PnpUnattend imports the driver package into a folder in the Driver Store
  • indexes the INF file as oemN.inf in the %SystemRoot%\inf folder
  • prompts the user for unsigned drivers
  • at the end of the import, DrvInst runs to install drivers for detected hardware.

On a test VM this process took 9 minutes. You then need to run sysprep to generalize the image again before re-capturing it.

Note that in the Audit pass we used a similar component in the unattend as we did for offlineServicing but:

  • In Offline Servicing setup used DISM
  • In Offline Servicing setup does not run DrvInst to set up the hardware with the driver
  • In Audit mode setup used PnpUnattend
  • In Audit mode setup does run DrvInst to set up the hardware with the driver.

You can also use Audit mode to:

  • install software
  • apply service packs
  • check that the image fully works.

4. Audit mode during deployment

You can also perform an Audit pass while deploying the image to its final destination computer, after Windows boots, but before OOBE. Once in Audit mode you can install drivers and software. The computer will remain in Audit mode until you run sysprep /oobe /reboot.

Obviously during deployment you want this all to be automatic. You can configure this in the unattend.xml:

  • use the PnpCustomizationsNonWinPE component of the auditSystem pass to specify a Driver Path
  • use the Deployment component of the auditUser pass to reseal at the end of Audit and reboot into OOBE.

Setup will call PnpUnattend.exe to import the drivers in C:\Drivers into the Driver Store and then use DrvInst.exe to install them.

Summary

  1. You can install drivers (and customize the Windows 7 image in other ways) before or during deployment.
  2. You can use Offline Servicing or Audit mode at either time.
  3. Offline Servicing uses DISM. Audit mode uses PnPUnattend.
  4. The process of importing drivers into the Driver Store and selecting a driver to match a Plug and Play hardware device is the same in either case.
  5. Importing drivers takes a significant amount of time. This does not matter before deployment. But during deployment you will want to minimise the number of drivers you import.
  6. Offline Servicing and Audit mode use the same PnpCustomizationsNonWinPE component in the unattend.xml file to specify a driver path for the additional drivers.

Windows 7 Deployment Part 4

Time to deploy Windows 7. You take a look at your desktop and laptop inventory. Is it going to be easier to create an image for each model, or to create one image and to add the different drivers and components required for each model? Adding drivers sounds more efficient, but creating different images is more straightforward. What are the trade-offs? And what tools do you need? The more you think about it the less clear it can seem.

If you have a limited range of models, and you decide to create one image for each model, you can use Windows Deployment Services (WDS) alone to create and deploy the image. You will:

  • Build your reference image
  • Run Sysprep to generalize it
  • Capture it
  • Add an unattend.xml to automate the deployment
  • Deploy it to the same make and model of computer.

However the default sysprep /generalize command during image capture strips out computer-specific information. This includes the device detection and driver selection information. In the sysprep /specialize pass during image deployment the computer re-runs the device detection and driver selection, and re-installs the drivers.

This adds minutes to each deployment. It can be a few minutes for a simple VM image, up to twenty minutes or more for a laptop. You can save a lot of time by using the sysprep option PersistAllDeviceInstalls in your Unattend to avoid stripping out and reloading the same drivers.

Here is the Unattend:

<unattend xmlns="urn:schemas-microsoft-com:unattend">
   <settings pass="generalize">
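      <!-- processorArchitecture="x86" matches a 32-bit image; a 64-bit image needs "amd64" -->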
      <component name="Microsoft-Windows-PnpSysprep" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="x86">
         <PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
      </component>
   </settings>
</unattend>

This has to go into an Unattend file used during image capture, not image deployment. Using WDS you will need to add the custom unattend.xml file to the reference image and run sysprep at the command line instead of the GUI before shutting down and capturing the image.

[Screenshot: the custom unattend.xml and the sysprep command line, customizing the Generalize pass in WDS]

This command line translates into the following:

  • Run the Generalize pass now
  • Run the Out of Box Experience when the computer next starts
  • Shutdown instead of restart, so the image can be captured with WinPE
  • Use the provided xml
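Put together, that is the following, run from %WINDIR%\System32\sysprep on the reference machine:

sysprep /generalize /oobe /shutdown /unattend:unattend.xml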

This is just a small tweak to your workflow when using WDS to deploy images to the same make and model of computer. Don’t forget, you need to sysprep and capture an image that has not been joined to the domain.

In Microsoft Deployment Toolkit (MDT) 2010 the process of creating the reference image and capturing it is automated in a Task Sequence. However there is no opportunity to interrupt the task sequence to change the unattend.xml. Instead you modify the unattend file for the task sequence beforehand in the Workbench.

[Screenshot: customise unattend]

Select the OS Info in the Task Sequence.

[Screenshot: add to unattend]

Edit the unattend.xml and save it.

There is an option in the WDS Client (aka Windows Deployment Wizard) to Prepare to capture the machine, but stop before running sysprep. This would give you an opportunity to customise the image, including the unattend file.

[Screenshot: custom prepare]

But there is no option to resume the capture, so you would choose this option if you want to hand the image over to a different capture process.

Windows 7 Deployment Part 3

If you want to perform completely unattended imaging, you need to change the computer so that it boots from the network when there is an imaging task. There are two ways you can do this:

  • You can change the boot device order so that a network boot is tried first. This is changed in the BIOS setup.
  • You can leave the boot order as it is but, when required, edit the boot configuration of the hard disk so there is no bootable partition. Then the boot sequence will fail through to the network.
Change the BIOS

Normally when the computer boots there is a BIOS option, F12, to break out of the boot sequence and perform a network boot. The computer then looks for a PXE server on the network to download an OS into a RAM disk and boot from that. Obviously you have to be at the computer to press the F12 key.

However if you press F2 to enter the BIOS setup you can change the boot device order to put the network boot first. When this is done, the computer will always attempt a network boot first. It will register itself with a PXE server and download a boot loader. This only takes a few seconds. The computer is then under the control of the boot loader program and can be told what to do next. It can:

  1. Wait a few seconds for user input before continuing with a normal hard disk boot
  2. Automatically proceed to download and boot from a boot image.

Being under the control of the PXE server provides the opportunity to automate the imaging task. However you need a server deployment tool that can make use of this. WDS and MDT do not do this.

To change the boot order you need to visit the computer. But if you are going to visit the computer you may as well change the boot order when you re-image, and then you have no further need to change it. So changing the boot order is really only relevant if you want to be able to perform unattended imaging of clients in the future. There are two ways you can set the boot order remotely.

You can push out a BIOS update to the computers. If the computers are one or two years old this may not be a bad idea. You can use the vendor’s tools to do this.

Here is an example of the HP BIOS update tools provided by Altiris.

[Screenshot: HP BIOS update in Altiris]

And the equivalent for Dell.

[Screenshot: Dell BIOS update in Altiris]

Or you can use Intel vPro. The Active Management Technology (AMT) feature enables you to create a security context between computers at the BIOS or firmware level. Once this security context is created you can use it to manipulate the BIOS remotely. AMT is a seriously heavyweight feature that enables secure management of the computer before the OS is booted, "Out of Band". You can power on, take an inventory (e.g. recognise which computer it is) and perform operations on the disk without booting. To do this you need an AMT management tool.

[Screenshot: AMT management tool]

Change the Boot Configuration

In Windows 7 the boot configuration is stored in the Boot Configuration Data store. It is edited with BCDEdit. For fully automated imaging your deployment server can edit the boot configuration and then restart the computer to boot into an imaging task.
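As a sketch of how the deployment server might do this with BCDEdit ({default} is the standard well-known identifier for the default boot entry):

rem Keep a copy of the boot configuration so it can be restored after imaging
bcdedit /export C:\BCD-Backup

rem Remove the default OS entry so the disk no longer boots
bcdedit /delete {default} /f

rem Restart: the boot sequence fails through to the network and the PXE imaging task takes over
shutdown /r /t 0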