Windows 10 Licensing on Cloud

You probably know that, until recently, the Microsoft license did not permit you to run a Windows Client OS on cloud infrastructure. This has now changed. The exact license terms are difficult to find, and the cases where the changes could make a difference are limited. Here is a summary.

The clause that restricts you is the one that permits you to run a virtualised copy of Windows only "on (a) device(s) dedicated to Customer’s use". Here is the relevant document: Licensing Windows Desktop OS for Virtual Machines.

The key parts of this are:

  • Virtual Desktop Access (VDA) Rights are what you need to access a virtual copy of the Windows client OS. "VDA Rights" are not the same as a "VDA Subscription". VDA Rights are what you acquire either with Software Assurance on a copy of Windows, or with a VDA Subscription if you are running something else.
  • VDA Rights are subject to the restriction above, to run only on dedicated hardware.

To state the obvious, this means no Windows 10 in Azure or AWS running on shared infrastructure. Under these terms, for example, you cannot use Azure to provide a DR facility for enterprise desktops.

In May 2016 a Microsoft blog said that Windows 10 would be coming to Azure through a partnership with Citrix, using XenDesktop: Microsoft and Citrix Partner to Help Customers Move to the Cloud. This was picked up widely in the press. The Citrix offer was announced in April 2017: Citrix XenDesktop Essentials for Azure.

On the face of it this is a significant change. Yes, it has a minimum requirement of 25 users, but still it is:

  • a monthly subscription, not a long term contract
  • pay for capacity if you use it, and not if you don’t.

The curious thing about this is that there is no corresponding announcement from Microsoft, and no apparent change in Windows 10 licensing. So what exactly has changed?

  • The Citrix offer requires the customer to have an "Enterprise Agreement"
  • This EA will cover all users and devices in the organisation, already permitting them to access virtual Windows 10 Enterprise through VDA Rights (although restricted to dedicated hardware).

So the change is that, provided you have an Enterprise Agreement, and use XenDesktop Essentials with a minimum of 25 accounts, you do not need to use explicitly dedicated hardware.

Separately, in May 2017 Microsoft introduced a new offer: Azure Hybrid Use Benefit for Windows Server and Windows Client. This is not explicitly related to the Citrix XenDesktop Essentials offer. It allows customers to upload a Windows 10 Enterprise image to Azure, but "Only Enterprise customers with Windows 10 Enterprise E3/E5 per user or Windows VDA per user… are eligible".

You can already run a Windows desktop in Amazon Web Services (AWS). Here the licensing terms are more straightforward:

  • For a regular Windows "desktop experience" you get a licensed copy of Windows Server Datacenter Edition. Desktop Experience is a feature of Windows Server that adds some of the features of a Windows client. Datacenter Edition is the license that allows you to run multiple virtual copies of the OS on one host.
  • For a minimum of 200 machines per month, you can Bring Your Own License (BYOL), provided you have VDA Rights (see above).
  • This prices the license component of the VM at $4 per month, subject to the 200-machine minimum.

So in summary:

  1. You can already run a virtual desktop (a real dedicated desktop, not a session) using a Windows Server OS on Azure or AWS without restriction
  2. You can already run a virtual desktop using your own Windows client licenses on any dedicated hardware, if you have VDA Rights through Software Assurance or a VDA Subscription.
  3. As a special case of 2) above, you can already do this on AWS with a minimum of 200 desktops
  4. You can now (2017) run a virtual desktop with your own Windows client licenses in Azure, if you have a Microsoft Enterprise agreement.

To use a virtual desktop on any scale you will still need the surrounding infrastructure: a machine composer; a broker; and a client. XenDesktop Essentials provides a way of obtaining these on a monthly rental, compared to the normal annual subscription or perpetual license.

Windows 10 Performance on AWS

Amazon Web Services (AWS) offers a range of Windows 10 virtual desktops, called WorkSpaces. Let’s see how they perform.

The summary is that:

  1. A Standard Windows 10 WorkSpace performs similarly to a top of the range Dell laptop
  2. A Graphics Windows 10 WorkSpace performs similarly to a high performance Dell workstation.

That’s useful to know. If you want to give people access to a good all-round machine, then the Standard WorkSpace will do it. And if you want to give them access to a high performance machine occasionally, then a Graphics WorkSpace will do it. Meanwhile they can carry around a tablet like the Surface Pro for everyday convenience, and still have access to the whole range of Office 365 applications.

The costings are a bit of a surprise, but that will have to follow in another post.

First, the definition of the WorkSpaces. AWS offers four levels of performance for Windows 10:

Value: 1 vCPU, 2 GiB Memory, 10 GB User Storage
Standard: 2 vCPU, 4 GiB Memory, 50 GB User Storage
Performance: 2 vCPU, 7.5 GiB Memory, 100 GB User Storage
Graphics: 8 vCPU, 15 GiB Memory, 1 GPU, 4 GiB Video Memory, 100 GB User Storage

The Windows 10 WorkSpaces run a copy of Windows Server 2016, using one Datacenter Edition license for all copies running on the same host. So it is not quite accurate to call it a Windows 10 desktop. AWS describe it as "a Windows 10 desktop experience, powered by Windows Server 2016." It makes no practical difference to the functionality, or the benchmarking.

An AWS WorkSpace is a virtual machine with a rudimentary system for brokering the machines to different users, and a remote access client. This, again, makes no difference to functionality or performance, but it explains why we have these categories (Value, Standard etc.) rather than the usual mix of EC2 virtual machines.

The software I use for benchmarking is PassMark PerformanceTest. I have been using it for some time. It is a good product, and I have my own benchmarks from different types of machines to compare with. The methodology is very simple: start the machine; install the software; run the benchmark. Ideally you might do several runs, but I have not found that to be necessary.

Let’s get to the results. First the benchmarks for the different WorkSpaces.

Computer Value Standard Performance Graphics
CPU Mark 1774.9 3527.4 2450.8 7879.3
2D Graphics Mark 297.9 513.2 344.3 460.8
Memory Mark 742.3 1494.8 1647.9 1869.7
Disk Mark 801.2 805.8 880.9 1252.2
3D Graphics Mark N/A N/A N/A 3988.4
PassMark Rating 751.5 1223.2 1010.6 2652.2

The Performance WorkSpace is a surprise. This is configured with the same 2 vCPU as the Standard, and with more memory. But the results are lower than for the Standard. I checked this twice, and I ran the test again on the following day to confirm. The figures here are the best obtained. A possible reason is that this is configured with only one physical core, with hyperthreading enabled, whereas the Standard is two physical cores, with hyperthreading disabled. Whatever the reason, it is obviously not worth paying more for the Performance WorkSpace, unless you need the additional memory. It could really be called a "Memory" WorkSpace.
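One way to see the anomaly is to work out CPU Mark per vCPU from the table above. This is only a rough sketch: the per-vCPU figure ignores memory, clock speed and instance-family differences.

```python
# CPU Mark per vCPU for each WorkSpace bundle (figures from the table above)
bundles = {
    "Value":       (1774.9, 1),
    "Standard":    (3527.4, 2),
    "Performance": (2450.8, 2),
    "Graphics":    (7879.3, 8),
}

per_vcpu = {name: mark / vcpus for name, (mark, vcpus) in bundles.items()}
for name, score in per_vcpu.items():
    print(f"{name:12s} {score:7.1f} CPU Mark per vCPU")
```

The Performance bundle comes out at roughly 1225 per vCPU against roughly 1764 for the Standard, which is consistent with the single-physical-core explanation.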

Here is the comparison with other machines. First the Standard WorkSpace compared with a Dell Latitude E7240, a good quality laptop.

Computer Standard E7240
CPU Mark 3527.4 3495.3
2D Graphics Mark 513.2 563.6
Memory Mark 1494.8 1166.1
Disk Mark 805.8 2186.2
3D Graphics Mark N/A 457.4
PassMark Rating 1223.2 1719.6

The Standard WorkSpace is comparable to a top of the range laptop like the E7240 (although that model is a bit old now). The CPU benchmark is comparable, although the SSD on the physical laptop is much faster than the virtualised SSD on the WorkSpace. The WorkSpace CPU is two cores on an Intel Xeon E5-2676, while the laptop CPU was two cores (four threads) on an Intel Core i5-4210U.

Here is the Graphics WorkSpace compared with a Dell Precision M6700 mobile workstation (again, a bit old now):

Computer Graphics M6700
CPU Mark 7879.3 9520.0
2D Graphics Mark 460.8 754.0
Memory Mark 1869.7 2232.1
Disk Mark 1252.2 589.5
3D Graphics Mark 3988.4 956.0
PassMark Rating 2652.2 2075.0

We can see that:

  • CPU is comparable – 8 cores on an Intel Xeon E5-2670 against 4 cores (8 threads) on an Intel Core i7-3940XM
  • Disk is better than the Standard, not as good as the Dell laptop SSD, but better than the Dell workstation SATA
  • The graphics are outstanding

My overall impression is that I would be happy with the Standard WorkSpace as a substitute for a laptop, and very happy with the Graphics WorkSpace as a substitute for a workstation.

The Future Desktop

I have been doing a bit of work with data visualization recently, using Tableau. It got me thinking about the way we use data to produce information, and how that is changing.

One of my early career challenges was to analyse what effect promotions had on overall product sales. In Unilever at that time the standard practice was to run product promotions with the supermarkets every few months. The idea was to gain more prominent shelf space, and so increase sales. The promotion had to offer something extra (money off, extra free, two for one) and manufacturing had to be geared up to support the extra volume. The annual financial plan had to be modelled on the anticipated peaks and troughs of volume. In fact you could say that the whole operation was geared around these promotions.

But the question was: did we actually increase overall profitability; or did we displace volume from one cycle to another? My job was to look at the evidence to see what we could conclude about the effectiveness of promotion on profit.

The trouble is, I had no tools. I could get data about production, physical sales to the supermarket and market share by getting reports from the "mainframe", but I had no tools to analyse them. I had to draw graphs by hand. I plotted sales volume against market share and drew these up on paper and on acetates (remember those?). The results were presented to the Board, and I was asked to go and discuss them. I could make only the vaguest conclusions: promotions did not seem to increase market share in any sustained way; sales volume seemed to fall after a promotion by as much as it had increased; average price sold and profitability went down as much as sales volume went up.

At that time there were no computers on desks. Now the purpose of the desk is to hold the computer. Today I would be able to draw nice graphs, with bubbles expanding and floating upwards. But would it make any difference? No, because there was no useful data to make the correlation between the promotion and the effect on consumer behaviour. The real difference between then and now is not the computer. It is the data.

One of my pet peeves is the phrase "the pace of change is increasing". No, it is not. The pace of change is a constant. If it were increasing, it would either have to change direction and start slowing down at some point, or it would have to increase ad infinitum, which would be an absurdity. The phrase is a rhetorical device to encourage action. But you have to consider that if your call to action is a logical absurdity then there is something wrong.

OK, so what is changing, because something is? It is the availability of data about the world and our actions in it. The steadily lower cost of technology is making more and more data available, and giving us better tools to turn the data into usable information. We have more information, so we can act with more knowledge. We can use the data to gain a new insight into the behaviour of the world. It may be what we guessed intuitively, without data, or it may be new. So instead of "the pace of change is increasing" we have "the availability of data and information and knowledge is constantly increasing". We can respond in two ways:

  1. Collect more data. This is what the Internet of Things is about.
  2. Use the data more effectively. This is what Data Visualization is about.

Performance Measurement

This is about our experience recently on a project to improve the performance and stability of a set of engineering applications after migration to a new datacentre. We had really excellent data produced by the application centre business analysts. These showed in detail that applications were significantly slower than previously, across a wide range of transactions. On average, transactions were taking 25% longer (let’s say). Someone set the objective that we would not be satisfied until 90% of transactions were within the benchmark figure for each transaction.

On the face of it this was going to be difficult, because we knew that there would always be variability, and this new target effectively outlawed variation. We did not know the previous variability. If the benchmark transaction times had been met, say, only 70% of the time previously, then there was no reason to expect them to be met 90% of the time now.
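A quick way to check is to compute the historical hit rate directly. The numbers below are made up for illustration, not taken from the project data.

```python
# Illustrative only: what fraction of pre-migration runs met the benchmark?
benchmark = 10.0  # seconds allowed for this transaction (hypothetical)
old_times = [8.2, 9.1, 9.8, 10.4, 8.7, 12.5, 9.3, 11.0, 9.9, 8.8]

within = sum(1 for t in old_times if t <= benchmark)
pct = 100.0 * within / len(old_times)
print(f"{pct:.0f}% of historical runs met the benchmark")
```

If the historical figure is already below 90%, the new target demands less variability than existed before the move.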

The first and obvious variable was the user site. We found that, if we excluded the sites with known poor networks, or those sites which seemed to have a much higher incidence of poor results (because that is how we knew they had a poor network), then the number of transactions outside the benchmark dropped significantly. But they were still a lot more than 10%. Obviously the site and the network did not account for all poor performance.

The second obvious factor was the performance of the computing platform (Citrix XenDesktop). We could not tell if a poor test result correlated with a general experience for other users of poor performance on the platform at that precise time. But the general feeling was that the platform must have periods of poor performance. So the number of virtual machines was increased; the number of users per virtual machine reduced; and in some cases the number of vCPU’s per virtual machine increased. It made no difference. There continued to be a significant number of transactions outside the benchmark times.

One of the issues for us was that we could not reproduce the problem on demand. The analysts had all experienced a bad transaction. But it was not repeatable. So we knew that we were looking for erratic rather than predictable results. When we looked at the test data again, we found that the Average time (the average time taken for a number of instances of the same transaction) was very misleading. We found that the Median value was indeed well below the benchmark transaction time. Most people were experiencing good performance most of the time, but some people were experiencing poor performance some of the time. The measurements at the time of poor performance were extreme, so they made the averages less useful.

The example I think of is taking a train to work. It normally takes 30 minutes. Four times out of five the train runs on time, but the fifth time it is cancelled and you have a 20 minute delay for the next train, which also runs more slowly, taking 40 minutes. It is not useful to say that the journey takes on average 36 minutes. You would not be on time to work more often if you allowed 36 minutes. Instead the conclusion is that the service is unreliable, which is quite a different thing.
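The train example can be put into numbers with Python's statistics module; the journey times below follow the description above.

```python
from statistics import mean, median, pstdev

# Five commutes (minutes): four on-time 30-minute journeys, and one
# cancellation costing a 20-minute wait plus a 40-minute slow train.
journeys = [30, 30, 30, 30, 60]

print("mean:  ", mean(journeys))    # 36 minutes: the misleading average
print("median:", median(journeys))  # 30 minutes: the typical journey
print("stdev: ", pstdev(journeys))  # 12 minutes: the variability is the story
```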

So we plotted the actual times in a scatter graph, and it was immediately clear that the real problem was not performance, but reliability. We also calculated the standard deviation, as a more accurate representation of variability, which told us the same thing. Examples:

[Scatter graph examples: Transaction 7 response times]

We decided that, instead of looking at the things that affect performance (vCPU, vRAM, disk latency, network latency) we would look at the things that affect reliability. We started by analysing each transaction with SysInternals Process Monitor and Wireshark, to understand what exactly caused time to be taken. The results were a revelation. We found a set of causes that we would not have guessed existed:

  • A benchmark transaction had been exported from the old system without its version history. The transaction attempted to validate the version number by checking prior versions before giving up and running.
  • An export to Excel failed if Excel was already open in the background. It continued to fail silently until the user ran it with Excel closed.
  • A transaction called an external module signed with a certificate from the vendor. The transaction attempted to check the revocation status of the certificate. If the user had an invalid proxy server configuration, there was a delay before a timeout expired and the transaction continued. Run a second time, there was no check and it was fast.
  • When the user logged on, the application searched various non-existent locations for a user configuration. After around 20 seconds it found one and began.
  • Running a transaction for the first time caused the data to be cached locally. The second time it ran from cache and was fast, so the recorded time depended on whether it was a first or subsequent run.
  • A report wrote to an Excel file at a network location. The data was transferred to the remote file in very small packets, taking a long time. Another report wrote to a local file, which was then copied to the remote destination, and completed in a fraction of the time.
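The last cause in the list, many small writes to a network file versus one local write followed by a copy, can be sketched like this. The function and file names are hypothetical; over a high-latency SMB link the first variant pays a round trip per write.

```python
import os
import shutil
import tempfile

def report_direct(remote_path, rows):
    # Anti-pattern: many tiny writes straight to the network file.
    with open(remote_path, "w") as f:
        for row in rows:
            f.write(row + "\n")

def report_buffered(remote_path, rows):
    # Faster variant: build the file locally in one go, then copy it.
    with tempfile.NamedTemporaryFile("w", delete=False, suffix=".csv") as f:
        f.write("\n".join(rows) + "\n")
        local_path = f.name
    shutil.copy(local_path, remote_path)
    os.remove(local_path)
```

Both produce an identical file; only the pattern of IO against the remote share differs.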

The conclusions? It is important to look at the data statistically to see whether the problem is about performance or reliability; and you need to understand the makeup of the transaction to know what may cause it to take longer than expected.

End of an era

We are seeing the end of an era in how we think of, and manage, the corporate desktop.

The corporate desktop is only about 12 to 15 years old. In a short burst, Microsoft introduced a range of technologies that made it practical to administer personal computers on a large scale: Active Directory, Group Policy, Windows Installer etc. Microsoft called it IntelliMirror, although that name has disappeared. We take it all for granted now. It’s how desktops work.

Having an administered desktop like this was very important to the overall architecture of IT services. Devices on the LAN were safe and were allowed to access corporate data. Other devices were not. That’s why software like Flash, Java and Adobe Reader could be allowed to be out of date, and why people stuck with Windows XP and IE 8. They were on the LAN, so they were safe.

As things have evolved, it is getting to the point where this just isn’t the case anymore. The basic design has come to the end of the road. The effort to keep it up to date and secure is too great, and the benefit is no longer there.

I know you can keep the desktop up to date and secure. But it’s a lot of work and it is easy for it to break down. For the user this is all a waste of effort and cost. There’s no benefit to them. It is just a cost, a nuisance, and a constraint. As a minimum you need:

  1. Disk encryption, with boot PIN or password.
  2. Constant updates to Java, Flash, Adobe Reader, Chrome, Firefox. Not just regular, like every three months, but every few days.
  3. Every app virtualised, except the ones that won’t virtualise.
  4. Special treatment for web apps that need older versions of Internet Explorer and Java.
  5. A certificate infrastructure, and network access control, to test whether the device is one of yours or not.
  6. Security and audit controls to prevent, detect and respond to intrusions.

But mostly now the aim is to allow people to reach the main corporate services, like e-mail, from any device, and from anywhere. Not in all organisations, I know, but mostly I think. And why not?

If I can get to a service with Chrome, then I also don’t need to get to it on a company desktop. Any device with a browser will do. Web services and Cloud services don’t require a corporate desktop, and in many cases can’t tell if the client is a corporate desktop or not.

Take Office 365 as an example. I see a lot of organisations adopting it. The whole point of Office 365 is that you can use it on and off the network, and from any device (more or less). Office 365 has no method to detect whether your device is a corporate desktop or not. It can detect the IP address, and the type of device (Windows, iOS etc.), but it can’t detect whether the computer is joined to your domain, or has a machine certificate, or is encrypted, or the software is up to date – all the things that make a corporate desktop.

I think now we are looking ahead to a different paradigm.

  1. Device enrollment of any kind of device with something like Intune or AirWatch
  2. A corporate user identity, with different levels of authentication and authorisation for different services e.g. an Azure AD identity with Yubikey MFA for the finance and HR systems.
  3. Corporate applications delivered as separate services that you sign up to, and delivered mostly virtually or as web services, with no data on the end device.

I think this also means we will not need the monolithic, outsourced, integrated IT organisation. When IT is delivered as separate managed services, it does not need to be managed as a single entity. I would expect to see: Corporate Systems; Line of Business Systems; Local Systems.

How would this work in practice? Let’s say I am in engineering in a UK subsidiary of a global business. I get an Azure AD identity and a Yubikey from HR when I join. I pick my devices (a phone, a laptop) from a list, and they are delivered direct to me by the vendor. If I want, I download a corporate clean image, otherwise I just use Windows 10 OEM. I go to the Corporate Intranet new starters page, and enroll both devices in the Device Management system. They auto-discover the Office 365 e-mail and chat. I get a phone ID, which I key in to the phone on my desk.

From a portal I download the apps for my expenses and time reporting from Corporate Services. They only download onto an enrolled device. If I un-enroll, or fail to authenticate, they are wiped. Most of them will be virtual or web apps.

My engineering apps, like Autodesk, come from my Engineering Services. They will only install on an enrolled device. I can do what I like with the app, but I can’t get any important data without my Yubikey.

My own department pays the vendor for the devices. It pays Corporate services per employee. It has whatever Local Services it wants, for example its own helpdesk. Apps have a subscription per month.

OK, it’s not perfect, but it is a lot less complicated and easier to manage. It makes IT a set of services instead of an organisation.

Performance of App-V and ThinApp

We were recently asked to provide evidence that virtualising an application would not affect its performance.

The request was quite reasonable. The application in question was a high-performance engineering application: Patran by MSC Software. Patran has some configurable parameters to optimise performance on high-performance workstations. Not much point in optimising it if the virtualisation caused a loss of performance.

My first thought was that virtualisation really shouldn’t affect performance. Application virtualisation redirects the file system and registry to alternate locations. You can see this quite clearly in the structure of the package. This might affect IO intensive operations, but not operations in memory. But this is just theory, and I can quite understand an engineering manager would want to see more than a theory.

My second thought was to look for data on performance from the vendors (in this case VMware for ThinApp and Microsoft for App-V). But I didn’t find anything useful, which is odd.

So then we looked at the problem again, and began to realise that it could be really quite difficult. How would you demonstrate that in no way was the virtualised app slower than the physical app? How would you create controlled tests? For a few benchmarks, obviously, but not for every function.

The problem became harder when the testers showed some results that indicated the virtualised app was significantly slower. The test was to use Fraps to measure the Frames Per Second (FPS) when running a test model. Patran needs to render the graphical model on the screen as the user manipulates it. The test showed that the virtualised app rendered the model 33% slower than the physical app.

I was surprised by this, as the rendering clearly happens in memory on the graphics card, and has nothing to do with IO. But then I looked at the data a bit more and found that the result was not really 33%. What really happened is that rendering runs at either 30 FPS or 60 FPS. In this one test, the virtualised app hit 30 more often than 60, and vice versa for the physical app. Still, it was not going to be possible to wait for each adverse test result and then find out whether it was significant or not.

The route we took was to take some benchmarking software and to virtualise it. That would mean that all the benchmarks would run virtualised, and the same benchmarks could be run normally. The software I took was PassMark PerformanceTest.

PerformanceTest has a wide range of benchmarks: for CPU, Disk, Memory and Graphics. The tests showed that for every benchmark the virtual app performed about the same, with no significant difference.

Here is the summary overall:

Test Rating CPU G2D G3D Mem Disk
Native 1924.4 3443.5 480.0 583.9 1674.3 3117.5
ThinApp 1915.1 3462.7 462.3 581.0 1706.3 3206.6
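Expressed as percentage differences from native (computed from the summary table above), the deltas are all small:

```python
# Summary figures from the table above
native  = {"Rating": 1924.4, "CPU": 3443.5, "G2D": 480.0,
           "G3D": 583.9, "Mem": 1674.3, "Disk": 3117.5}
thinapp = {"Rating": 1915.1, "CPU": 3462.7, "G2D": 462.3,
           "G3D": 581.0, "Mem": 1706.3, "Disk": 3206.6}

deltas = {k: 100.0 * (thinapp[k] - native[k]) / native[k] for k in native}
for key, delta in deltas.items():
    print(f"{key:6s} {delta:+5.1f}%")
```

The overall rating differs by about half a percent, and no individual component differs by more than about 4%, in either direction.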

And here’s the summary for 3D Graphics:

Test 3D Graphics Mark DirectX 9 Simple DirectX 9 Complex DirectX 10 DirectX 11 DirectCompute
Native 584 41.1 22.6 4.4 9.7 315.1
ThinApp 581 41.0 22.6 4.4 9.6 313.5

Based on this, it seems fairly unlikely that an application would perform significantly worse by being virtualised.

Check your BIOS Power Management Settings

I have been working on a large End User Computing programme for a while, and not found the time to blog, so now it is time to catch up with a few snippets.

This one is about Virtual Desktop Infrastructure (VDI) and the BIOS settings of the physical servers. Here’s the summary: VDI depends on high performance hosts, but by default hosts are typically configured for a balance of performance and energy efficiency. Check your BIOS. It may not be what you think.

I first came across this a while ago when working on a new VDI deployment of Windows 7 on VMware View, running on Dell blade servers to an EqualLogic SAN. We noticed that the desktops and applications were not launching as quickly as expected, even with low loads on the servers, networks and storage. We did a lot of work to analyse what was happening. It’s not easy with non-persistent VDI, because you don’t know what machine the user will log on to. The end result was a surprising one.

The problem statement was: “Opening Outlook and Word seems to be sluggish, even though the host resources are not being fully used. Performance is not slow. It is just not as good as we were expecting”.

My company, Airdesk, is usually called in after the IT team have been unable to resolve the problem for a while. If the problem were obvious it would have been solved already. This means that we were looking for a more obscure cause. For example, in this case, it was not a simple case of CPU, memory or disk resources, because these are easily monitored in the vSphere console. So already we knew that we were looking for something more hidden. Here’s a good article on Troubleshooting ESX/ESXi virtual machine performance issues. Let’s assume the IT team has done all that and still not found the problem.

My approach to troubleshooting is hypothesis-based. We identify all the symptoms. We identify the things that could cause those symptoms. We devise tests to rule them out. It’s not as easy as that, because you can’t always restructure the production environment for testing. You need tools to tell you what is going on.

In this case the tools we used were:

  • vSphere console to monitor the hosts and the virtual machines from outside
  • Performance Monitor to monitor processor and disk activity from inside the virtual machine
  • Process Monitor (Sysinternals) to see what was actually happening during the launch
  • WinSAT to provide a consistent benchmark of performance inside the virtual machine
  • Exmon for monitoring the Outlook-Exchange communication

The tools told us that the application launch was CPU-bound, but there was no significant CPU load on the hosts. CPU Ready time (the measure of delay in scheduling CPU resources on a host) was normal. We could see spikes of disk latency, but these did not explain the time delay in opening applications.
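For reference, the CPU Ready figure in vCenter is a summation in milliseconds per sampling interval, and converting it to a percentage makes it comparable across charts. This is the conversion VMware documents for real-time charts, which sample every 20 seconds:

```python
def cpu_ready_percent(ready_ms, interval_s=20):
    """Convert vCenter's cpu.ready summation (ms accumulated per
    sampling interval) into a percentage of that interval."""
    return ready_ms * 100.0 / (interval_s * 1000)

# 1000 ms of ready time in a 20 s real-time sample is 5% CPU Ready
print(cpu_ready_percent(1000))  # 5.0
```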

Our conclusion was that the virtual machines were not getting access to the CPU that the vSphere console said was available to them. What could cause that? Something perhaps that throttled the performance of the CPU? Intel SpeedStep maybe? The vSphere console showed that it was configured for High Performance. But we decided to check the BIOS on the hosts and, sure enough, they were configured for Active Power Controller (hardware-based power management for energy efficiency).


We changed the BIOS settings, and the result was immediate. Performance of the virtual desktop was electric. Click on an application and, bang, it was open. We potentially saved tens of thousands of pounds by finding the cause rather than throwing resources at it.

You have two types of setting in Dell BIOS:

  1. Processor settings, which can be optimized for different workloads
  2. Power Management settings, which give a choice between efficiency and power.

In our case we wanted to configure the processors for a general-purpose workload but we also wanted to provide immediate access to full resources, without stepping the processor down to save power based on average utilisation. So the Maximum Performance power setting was the one we needed. You could also set the BIOS Power Management to be OS-Controlled, and allow the vSphere setting to take effect. The point of this post is that the vSphere setting said the hosts were in High Performance mode, while the troubleshooting showed they were not.

That was a little while ago. I was reminded of it recently while designing the infrastructure for a global VDI environment based on XenServer (the hypervisor) and XenDesktop (the VDI) running on HP blade servers to a 3PAR SAN. In the Low Level Design we said “Max performance MUST be set in the host server BIOS”.

Sure enough, in the HP BIOS the default power setting is for Balanced Power and Performance, and this needs to be changed. In a XenServer VDI environment it needs to be set to maximum performance. See this technote from Citrix on How to Configure a XenServer Host’s BIOS Power Regulator for Maximum Performance.

[Screenshot: HP BIOS power settings default at startup]

If you are not managing the BIOS power management settings on your virtualisation hosts, you are not getting the results you should.

Desktop Paradigm

This article is about managing the replacement for the traditional Windows XP desktop. It may sound like a straightforward upgrade of the desktop OS, or it may already seem like a complicated upgrade because of the business applications that don’t run on Windows 7. But in my view it is more than that. The old desktop paradigm that has been in place for more than twenty years is coming to an end. Without a paradigm we face a bundle of difficult choices.

A paradigm is a pattern or model; a world view underlying the theories and methodology of a particular subject. With a desktop paradigm we don’t have to give too much thought to individual components.

IT has never been too hot on empirical evidence. We tend to use words like "best practice" or "industry standard" or "most people" when in fact we have very little evidence to support our generalizations (like this one!). I have very rarely seen in IT anything you could call empirical evidence. However we do know what people buy (because companies release sales revenue) and we can assume that vendors try to sell what they think people buy. We can assume that the market leaders are selling more of what most people want to buy. This means that our understanding of "best practice" or "most people" is in fact an evolving view of the marketplace.

The Windows Desktop is a paradigm. There is a client OS that provides a blue space for applications to run in. It has a bundle of things running in the background. You can buy things from other vendors that run in the foreground. You can buy a block of hardware that has most things in it. You can buy things that you plug in to standard slots to extend it. Some of these, like the serial and VGA ports, keyboard and mouse, are decades old. You have to worry about how you get the client OS onto the hardware, and how you get applications into the OS. You do things like "install" applications, and "patch" OSes.

Much of how we manage the Windows desktop came in with IntelliMirror. IntelliMirror was a set of technologies introduced by Microsoft with Windows 2000, although the term itself soon disappeared. IntelliMirror included Active Directory, Group Policy, Roaming Profiles, Folder Redirection, Offline Files, Special Folders, Distributed File System, Windows Installer, Remote Installation, Sysprep. These represent a paradigm for managing the desktop.

Desktop security is a paradigm too. We use security software like anti-virus; a local firewall; a network firewall and proxy server to protect the perimeter of the local network; a DMZ for access in to web servers; a VPN for remote access. We might add Terminal Services (or Citrix) as a common variation on the standard desktop, using a thin client connecting to a session on a server.

We have had a few iterations since the desktop paradigm took shape: Windows Hardware Quality Labs (WHQL) for drivers, User Account Control (UAC). But we have not had to build a business case for any of it. It is just the desktop. Everyone has it, just in different flavours.

The way we use a desktop, what we expect from it, and how we manage it, are just the way it is. The discussions we have about it are in the margins: do we need SCCM or not for software distribution; what product should we use for license management; which AV is best? We don’t discuss commissioning a private UEFI; or building a custom hardware device; although in a large organization we could do either.

But now, as Windows XP comes up to End of Life, things are not so clear. It is not just a question of migrating to Windows 8. The desktop paradigm has changed. What is different?

  • It has become clear that a large number of people (most?) use only e-mail and browser a large part (most?) of the time. It turns out they barely need a desktop at all. A smart phone or tablet is sufficient, maybe even better, for this. This leads to a segmentation of the market. Instead of giving everyone a standard PC or laptop, maybe a lot of people don’t really need one.
  • If I synchronize my e-mail, calendar, contacts and data on all my devices, and have access to them anywhere I go, then why provide them from a computer room in an office building? Why not provide them from a remote data centre? There is no DMZ. Everything I access is remote from me. I authenticate securely using a password and a PIN.
  • If the data center is remote from my own offices, and has highly specialised power, air conditioning and security requirements, why run it myself?
  • If I am using a smart phone or tablet for most of my communication and collaboration, and I can’t run Microsoft Office on it, then do I really need it? Maybe I could make do with something simpler, like Google Docs.
  • If a tablet is not joined to any "domain", and is not "managed" by anyone except me, why do I need Active Directory, Group Policy and all of the IntelliMirror technology? And if I don’t need it for the tablet, why do I need it for a laptop, just because it runs Windows? And if it runs Windows RT, why would it need to be "managed" when other tablets do not?
  • If my tablet or smart phone connects to a guest wireless network, then why can’t I use my own personal laptop as well?

In some ways these have appeared, up to now, to be additive problems. Do we allow people to use a Mac at work? Do we let them use their iPhone for e-mail instead of a Blackberry? Can they connect their iPad to the company network? Can they add iTunes to their work laptop? But in a way they are subtractive problems. Once we do all this, what is left? We have a minority of people who need a "desktop" as the computing environment for specialist business applications that do not run any other way.

This means that we need to start evaluating things on their merits. What is the business case for Microsoft Office vs. Google Docs? What is the business case for a (Windows) PC vs. a tablet? What is the business case for hosting (in a third party data centre) vs. running my own data centre? What is the business case for a virtual desktop over physical? This is complicated, because the questions are interdependent.

If I use Microsoft Office at all, then I need a license for it. To use Office I need a Windows PC, virtual or physical, and I need a license for that. If I have a license for Office and for Windows, then I may as well use it for everything else. However if I don’t really need Office, then I don’t really need Windows, and I may as well use an Android or iOS tablet, or a Linux PC. If I need to use SAP I could do it with a browser application built with HTML5, on a Mac or anything else I like. If I don’t have a Windows PC, then any Windows applications I need can be published to me as a virtual application. But if the applications I need are incompatible (perhaps a specialised engineering application), leading me to a dedicated virtual desktop rather than shared, then I need a Microsoft VDA license and I may as well use a PC.

At another level, it probably makes sense to run my remote services (like e-mail) from a third party data centre. But if I need a computer room on site for anything (like my data), and I already have the power and air conditioning for it, then I may as well use it.

When you have a multi-dimensional decision making process, you need one or two fixed points to build the decision around. Every business is different, but as we are moving away from an established desktop paradigm it makes sense to venture a few fixed points of what the new paradigm is.

  1. Like it or not, people are spending more money on more devices, and finding ways they are useful. If you spread the costs over three or five years they are really not that expensive compared to, for example, office space or furniture. If it makes people productive I say give them a tablet AND a smart phone.
  2. People only need MS Office, with a conventional Windows PC, if they produce reports (financial reports, presentations, large documents). Other people don’t need it. They can use OpenOffice or Google Docs instead, and use an MS Office viewer or PDF to read reports produced with MS Office. In a PDF you can add notes and comments to a report that was produced in MS Office, although of course you cannot edit it.
  3. In for a penny, in for a pound. Office workers used to need a desk when they worked with paper. Then they needed a desk to put a screen on. Now I think a lot of people no longer need a desk at all. The screen serves more to cut us off from other people than to enable us to communicate. Round tables, cafe style, pull up a chair, are more useful than desks. You might instead have quiet rooms, like a library, where people can go if they need to work on a report. Quiet room means no conversation, no phones, no audio. This also solves the problem of noise in open plan offices.
  4. For a long time my view has been that corporate assets (like data) belong behind their own perimeter firewall, and all end user devices should be authenticated and authorised in the same way, whether on LAN or WAN. This means that ALL devices accessing the assets, including corporate Windows PC’s, need to have strong authentication, and need to be able to protect confidential data.

Paradigms take years to develop, and evolve incrementally. Although IT love to play the game of thinking about what will be, in most cases it is perfectly fine to follow the trend. What makes now different is that Windows XP is going end of life. Large organisations need to replace XP desktops on a massive scale. They really do need to decide whether to replace XP desktops with Windows 8 desktops, or whether to strike out in a new direction.

Cloud and Windows 365

The idea of a Cloud Desktop is appealing, but can it exist?

Microsoft does not allow service provider licensing for Windows 7. You can have a monthly subscription for a remote desktop on Terminal Services running on Windows Server, but not for Windows 7. This has been clarified recently in a note from Microsoft: Delivery of Desktop-like Functionality through Outsourcer Arrangements and Service Provider License Agreements.

Terminal Services means that the user shares the resources of the server with other users. To be reliable it needs to be very tightly controlled. The user cannot be an admin and cannot install software. The user cannot access high quality graphics, video and audio because they do not have direct, exclusive access to the hardware.

Note that “The hosting hardware must be dedicated to, and for the benefit of the customer, and may not be shared by or with any other customers of that partner”. This is very curious. It means that you can buy a Windows 7 remote desktop running on a PC blade in a datacenter, but not on a VM (unless that also runs on dedicated hardware), even though Microsoft receive exactly the same license fee in both cases.

This is obviously an artificial restriction. One possible reason for this could be that Microsoft will soon introduce their own Windows 365 online desktop. A Windows 365 online desktop makes a lot of sense when used with Office 365, because all the data is then highly connected. You really can connect from nearly anywhere, with nearly any type of device.

At the moment with Office 365 that is not the case. Microsoft say that: “Because this infrastructure is located online, you can access it virtually anywhere from a desktop, laptop, or mobile phone”. You can access it, certainly, but you can use it properly only if the PC or Mac has Office installed locally.

Cloud and Office 365

Cloud is a brilliant marketing concept, but it can be difficult sometimes to pin down exactly what it means. This post looks at what Microsoft is offering in Office 365.

Office 365 is Microsoft’s version of cloud services for office applications. It provides "secure anywhere access to professional email, shared calendars, IM, video conferencing, and document collaboration". It is also a business (or multi-user) version of Windows Live, and a replacement for the earlier incarnation Business Productivity Online Services (BPOS).

My focus in this blog is what Office 365 delivers for a medium sized business. There are plenty of resources giving you the details of Office 365 features. The aim here is to show what it is, and discuss how you might use it.

Here is the admin portal. You can administer users, services and subscriptions here. Click on any of the images below to see a larger version with the details.

Office365 Admin Portal

Here is the user portal. This gives you access to Outlook, the SharePoint Team Site and Lync instant messaging.

Office365 Portal

SharePoint Team Site portal

Office365 SharePoint Home

Working with documents, either in the browser or by opening the application on the desktop

Office365 Documents

Using Word Web App. If you are thinking of using Web Apps instead of Office, you need to do a feature comparison to understand what you may be missing. For example:

  • In Word, no headers and footers, no section breaks
  • In Excel, no data sorting.

Of course there are far more differences than these, and you need to decide for yourself if they are relevant, but I mention these to show that it is not an academic comparison of features you never use.

Office365 Word

Using Outlook Web Access (OWA)

Office365 Outlook

Outlook options

Office365 Outlook Options

Outlook attachment, from the PC not SharePoint. You can map a drive to a SharePoint library in order to have direct access to the shared files from Outlook.

Office365 Outlook Attachment

Exchange mailbox administration

Office365 Mailbox

Exchange options

Office365 Exchange Phone and Voice

Forefront protection

Office365 Forefront for Exchange

Office 365 is a service operated by Microsoft, and of course pricing is set by Microsoft. Here is the UK pricing. Key points to note about the pricing plans:

All the pricing plans come with Exchange. Office 365 is essentially an online Exchange service plus other things on top.

The Small Business pricing plan adds Office Web Apps, somewhere to store files online (SharePoint) and an Instant messaging service (Lync).

The Midsize and Enterprise plans add SharePoint and Lync to Exchange. They have scaled up capacity and integrate with your own Active Directory. Different plans (E1 to E4) successively add features:

  • E1: Web Apps are view-only. You will need something else (Office on the desktop) to create files.
  • E2: Adds full Web Apps
  • E3: Adds Office Professional on the desktop
  • E4: Adds an on-premises Lync server for PBX

There are more feature differences that I have not mentioned, but they also add progressively through the plans.

There are also two Kiosk plans. These are like E1 and E2 but have cut-down versions of Exchange and SharePoint.

Features and pricing are changing all the time, so you will need to review features carefully before selecting a plan. However you can change plans at any time for any user, so you are not locked in to the wrong plan.

So what, really, is Office 365?

  1. It is subscription licensing, per user per month with the ability to scale down as well as up
  2. It is an online Exchange service operated by Microsoft
  3. It is an online file server or collaboration service, using SharePoint
  4. Being an online service, naturally, you can access it from anywhere
  5. You don’t need to run your own mail server, file server, mail filtering, archiving, backup server, intranet server, remote access. But you still need to run a print server, directory server, application server, management server.
  6. If you want to use the features of Microsoft Office (Word, Excel, Powerpoint and Outlook etc.) then you still need a PC or a Mac. You can’t do it from an iPad or Android tablet, or from a thin client. Office 365 is not a web-based version of Office. The exception to this is if the heavily cut down Web Apps version is sufficient.

Secure authentication for remote access

Being an online service you don’t have to provide remote access to your LAN. Your data is equally available from anywhere, so it works well for a distributed organisation. You also don’t have to provide backup and DR. But there is a curious anomaly: no two-factor authentication. Remote access creates a vulnerability to impersonation, since you cannot know who is entering the user’s credentials. Login details can easily be obtained if a user logs in from an insecure device or, for example, if the user loses a device that is configured for access, or just by guessing.

Two-factor authentication using a hardware or software token protects against this. Office 365 does not provide two-factor authentication. In this sense it is like opening your firewall to allow access to your servers: you just wouldn’t do it.

Office 365 uses Active Directory Federation Services (ADFS) to link your own directory of users with Microsoft. In your main premises the user is actually authenticating to your own AD. Remotely the user authenticates using your ADFS Proxy accessible from the Internet. The ADFS Proxy can require a more secure authentication for external access. Security vendors like RSA (with SecurID) can integrate their two-factor authentication with your ADFS Proxy, and so enforce strong authentication for remote access to Office 365.

Integration with other services

Being online and operated by Microsoft, there is the problem of how to integrate with other third party services. RIM have recently introduced Blackberry Business Cloud Services to integrate Office 365 with the Blackberry service. Microsoft Dynamics CRM Online will also be integrated. SharePoint Online allows you to use your own SharePoint intranet applications. As far as integrating with non-Microsoft services, that seems unlikely. I can’t at the moment see how you would integrate with EMC Documentum or Autonomy WorkSite.

You can still obtain Hosted Exchange, SharePoint and CRM separately, if the server-side features of Office 365 are not sufficient. These are multi-tenant versions of the servers run by third parties. These also use subscription licensing. And of course you can still outsource the operation of dedicated services to run in a data center somewhere else.

Remoteness

When you change from using existing infrastructure on the LAN to using Office 365 on the Internet, you need to provide additional Internet bandwidth. Arguably e-mail does not need fast connections because it is asynchronous, but SharePoint, as the library of shared documents, does.

If you use WAN acceleration devices like Cisco WAAS or Blue Coat PacketShaper at remote sites, compression will no longer work because it requires a device at both ends, so you will need additional bandwidth at remote sites too.

Mix and match

The plans themselves are pretty much for marketing purposes. You can mix and match E (Midsize and Enterprise) and K (Kiosk) plans in the same organisation, and indeed you can simply add or remove components for any number of users. This means that, in effect, each component has a unique price that you can evaluate, and can be assigned to each user depending on their needs.

Costs and Benefits

So, the big question: if you are a 1,000 person organisation, is Office 365 a reasonable alternative to doing it yourself?

Exchange is going to cost from £16k per annum (Kiosk), through £31k (basic), to £52k (full). Archiving adds £23k. You will have to compare that with your own costs of running Exchange Server for 1,000 users.
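Using the annual figures just quoted, a quick back-of-the-envelope calculation (my own arithmetic, not Microsoft pricing) gives the per-user monthly cost for 1,000 users:

```shell
#!/bin/sh
# Convert the quoted annual Exchange totals (GBP, 1,000 users)
# into per-user monthly figures.
users=1000
for plan in "kiosk 16000" "basic 31000" "full 52000"; do
    set -- $plan
    awk -v name="$1" -v total="$2" -v n="$users" \
        'BEGIN { printf "%s: %.2f GBP per user per month\n", name, total / (n * 12) }'
done
# Prints:
# kiosk: 1.33 GBP per user per month
# basic: 2.58 GBP per user per month
# full: 4.33 GBP per user per month
```

At under £5 per user per month even at the top end, the comparison with running Exchange yourself comes down mostly to the staff and infrastructure you can actually shed.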

Office Pro Plus will cost £100 per user per annum. You can make a direct comparison of what it would cost to buy through Office 365 or through Volume Licensing. There is no difference in the end result: Office on the desktop and Web Apps online with both.

Web Apps will cost £47 per user per annum, as an alternative to the installed version of Office. You need to have SharePoint as well, to be able to use Web Apps. It can be SharePoint online or on-premises. There is no other way to obtain Web Apps as an alternative to Office installed on the desktop.

You also need to add the cost of additional bandwidth to get to Office 365 over the Internet. Your additional costs will depend on circumstances, but will be substantial.

To use Office Pro Plus you still need to run a full desktop service on Windows or Mac, or on terminal services. You will still need to run servers for:

  • Active Directory
  • DHCP and DNS
  • Print server
  • Other business applications like the finance system
  • Management of the PC’s: anti-virus, software distribution, patching, image deployment
  • Probably file server and backup server for data that is not in SharePoint. For example, SharePoint has an upload/download paradigm. I would expect a lot of people to hold data on the PC. Normally this would be redirected to a file server. So would a user roaming profile.

To run these servers, of course, you still need a computer room and IT staff. Therefore the cost-saving with Exchange Online and SharePoint Online is the incremental cost of running these on-premise in addition to the existing on-premise servers.

The mix and match aspect is important. Most of the organisations I know have Office, Exchange and SharePoint users ranging from expert to not at all. Although you can provide different editions of Office, that’s it. Office 365 Kiosk allows you to identify a body of users who only ever have light usage, and to license them at a significantly lower cost while still being integrated in the same infrastructure (the corporate directory, calendars, intranet).

If you have no existing infrastructure then there is a strategic choice to make between online and on-premise. But that is a rare situation. Most businesses already have an infrastructure of IT services. They can choose to migrate services to Office 365 over time. For example, an upgrade to Exchange would be a good time to consider it. You really have to want to outsource Exchange and/or SharePoint for Office 365 to make sense.

Personally I don’t buy the argument about "allowing your valuable IT staff to concentrate on strategic matters". It either makes economic sense or it doesn’t. However I do think that if you remove routine tasks from IT staff then it is easier to focus on managing the remainder. The difficulty with managing IT is complexity, and so the less complexity the better.

You can obtain a trial of the Office 365 Enterprise Plan E3 here. You can also obtain a trial of the Kiosk Plan K2 here, if you are interested to see how it could work in a mix and match environment.

If you would like to contact Airdesk we can work through a cost-benefits analysis of online vs on-premise with you.

The Cloud is not a disruptive technology. It is a pricing plan.