Blog

End of an era

We are seeing the end of an era in how we think of, and manage, the corporate desktop.

The corporate desktop is only about 12 to 15 years old. In a short burst, Microsoft introduced a range of technologies that made it practical to administer personal computers on a large scale: Active Directory, Group Policy, Windows Installer and so on. Microsoft called it IntelliMirror, although that name has disappeared. We take it all for granted now. It’s how desktops work.

Having an administered desktop like this was very important to the overall architecture of IT services. Devices on the LAN were safe and were allowed to access corporate data. Other devices were not. That’s why software like Flash, Java and Adobe Reader could be allowed to be out of date, and why people stuck with Windows XP and IE 8. They were on the LAN, so they were safe.

As things have evolved, it is getting to the point where this just isn’t the case anymore. The basic design has come to the end of the road. The effort to keep it up to date and secure is too great, and the benefit is no longer there.

I know you can keep the desktop up to date and secure. But it’s a lot of work, and it is easy for it to break down. For the user this is all wasted effort and cost. There’s no benefit to them. It is just a cost, a nuisance, and a constraint. As a minimum you need:

  1. Disk encryption, with boot PIN or password.
  2. Constant updates to Java, Flash, Adobe Reader, Chrome and Firefox. Not just at regular intervals, like every three months, but every few days.
  3. Every app virtualised, except the ones that won’t virtualise.
  4. Special treatment for web apps that need older versions of Internet Explorer and Java.
  5. A certificate infrastructure, and network access control, to test whether the device is one of yours or not.
  6. Security and audit controls to prevent, detect and respond to intrusions.

But mostly now the aim is to allow people to reach the main corporate services, like e-mail, from any device, and from anywhere. Not in all organisations, I know, but mostly I think. And why not?

If I can get to a service with Chrome, then I don’t need a company desktop to reach it. Any device with a browser will do. Web services and Cloud services don’t require a corporate desktop, and in many cases can’t tell whether the client is a corporate desktop or not.

Take Office 365 as an example. I see a lot of organisations adopting it. The whole point of Office 365 is that you can use it on and off the network, and from any device (more or less). Office 365 has no method to detect whether your device is a corporate desktop or not. It can detect the IP address, and the type of device (Windows, iOS etc.), but it can’t detect whether the computer is joined to your domain, or has a machine certificate, or is encrypted, or the software is up to date – all the things that make a corporate desktop.

I think now we are looking ahead to a different paradigm.

  1. Device enrollment of any kind of device with something like Intune or AirWatch.
  2. A corporate user identity, with different levels of authentication and authorisation for different services e.g. an Azure AD identity with Yubikey MFA for the finance and HR systems.
  3. Corporate applications delivered as separate services that you sign up to, and delivered mostly virtually or as web services, with no data on the end device.

I think this also means we will not need the monolithic, outsourced, integrated IT organisation. When IT is delivered as separate managed services, it does not need to be managed as a single entity. I would expect to see: Corporate Systems; Line of Business Systems; Local Systems.

How would this work in practice? Let’s say I am in engineering in a UK subsidiary of a global business. I get an Azure AD identity and a Yubikey from HR when I join. I pick my devices (a phone, a laptop) from a list, and they are delivered direct to me by the vendor. If I want, I download a corporate clean image, otherwise I just use Windows 10 OEM. I go to the Corporate Intranet new starters page, and enroll both devices in the Device Management system. They auto-discover the Office 365 e-mail and chat. I get a phone ID, which I key in to the phone on my desk.

From a portal I download the apps for my expenses and time reporting from Corporate Services. They only download onto an enrolled device. If I un-enroll, or fail to authenticate, they are wiped. Most of them will be virtual or web apps.

My engineering apps, like Autodesk, come from my Engineering Services. They will only install on an enrolled device. I can do what I like with the app, but I can’t get any important data without my Yubikey.

My own department pays the vendor for the devices. It pays Corporate Services per employee. It has whatever Local Services it wants, for example its own helpdesk. Apps have a subscription per month.

OK, it’s not perfect, but it is a lot less complicated and easier to manage. It makes IT a set of services instead of an organisation.

Performance of App-V and ThinApp

We were recently asked to provide evidence that virtualising an application would not affect its performance.

The request was quite reasonable. The application in question was a high-performance engineering application: Patran by MSC Software. Patran has some configurable parameters to optimise performance on high-performance workstations. Not much point in optimising it if the virtualisation caused a loss of performance.

My first thought was that virtualisation really shouldn’t affect performance. Application virtualisation redirects the file system and registry to alternate locations. You can see this quite clearly in the structure of the package. This might affect IO-intensive operations, but not operations in memory. But this is just theory, and I can quite understand that an engineering manager would want to see more than a theory.

My second thought was to look for data on performance from the vendors (in this case VMware for ThinApp and Microsoft for App-V). But I didn’t find anything useful, which is odd.

So then we looked at the problem again, and began to realise that it could be really quite difficult. How would you demonstrate that the virtualised app was in no way slower than the physical app? How would you create controlled tests? You could for a few benchmarks, obviously, but not for every function.

The problem became harder when the testers showed some results that indicated the virtualised app was significantly slower. The test was to use Fraps to measure the Frames Per Second (FPS) while running a test model. Patran needs to render the graphical model on the screen as the user manipulates it. The test showed that the virtualised app rendered the model 33% slower than the physical app.

I was surprised by this, as the rendering clearly happens in memory on the graphics card, and has nothing to do with IO. But when I looked at the data more closely I found that the result was not really 33%. What really happened is that rendering is done at either 30 FPS or 60 FPS. In this one test, the virtualised app hit 30 more often than 60, and vice versa for the physical app. Still, we could not simply wait for an adverse test result and then work out afterwards whether it was significant or not.
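The effect is easy to demonstrate with a little arithmetic. Here is a sketch in Python, with made-up sample counts (not the actual test data), of how quantisation at 30 or 60 FPS can turn a small real difference into a large apparent one:

```python
# Hypothetical illustration: with vsync, each sampled frame rate snaps to
# 60 FPS (frame ready within one refresh) or 30 FPS (misses a refresh).
# A small shift in how often each app hits 60 produces a large-looking
# difference in the mean, even when real rendering times are close.

def mean_fps(samples):
    return sum(samples) / len(samples)

# Made-up samples: the physical app hits 60 FPS in 8 of 10 samples,
# the virtualised app in only 4 of 10.
physical = [60] * 8 + [30] * 2    # mean = 54 FPS
virtual = [60] * 4 + [30] * 6     # mean = 42 FPS

difference = 1 - mean_fps(virtual) / mean_fps(physical)
print(f"Apparent slowdown: {difference:.0%}")  # about 22% here
```

A shift of four samples out of ten produces a headline slowdown of over twenty percent, even though every individual frame rendered at one of the same two rates.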

The route we took was to take some benchmarking software and to virtualise it. That would mean that all the benchmarks would run virtualised, and the same benchmarks could be run normally. The software I took was PassMark PerformanceTest.

PerformanceTest has a wide range of benchmarks: for CPU, Disk, Memory and Graphics. The tests showed that for every benchmark the virtual app performed about the same, with no significant difference.

Here is the summary overall:

Test      Rating   CPU      G2D     G3D     Mem      Disk
Native    1924.4   3443.5   480.0   583.9   1674.3   3117.5
ThinApp   1915.1   3462.7   462.3   581.0   1706.3   3206.6

And here’s the summary for 3D Graphics:

Test      3D Graphics Mark   DirectX 9 Simple   DirectX 9 Complex   DirectX 10   DirectX 11   DirectCompute
Native    584                41.1               22.6               4.4          9.7          315.1
ThinApp   581                41.0               22.6               4.4          9.6          313.5
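One way to sanity-check the figures above is to express each delta as a percentage of the native score. A small Python sketch, using the numbers from the summary table (the dictionary layout is mine, not PassMark’s):

```python
# Relative difference between native and virtualised (ThinApp) scores,
# taken from the PassMark PerformanceTest summary table above.
native  = {"Rating": 1924.4, "CPU": 3443.5, "G2D": 480.0,
           "G3D": 583.9, "Mem": 1674.3, "Disk": 3117.5}
thinapp = {"Rating": 1915.1, "CPU": 3462.7, "G2D": 462.3,
           "G3D": 581.0, "Mem": 1706.3, "Disk": 3206.6}

for test in native:
    delta = (thinapp[test] - native[test]) / native[test]
    print(f"{test:6s} {delta:+.1%}")
```

Every delta comes out within about 4% of zero, and some are positive (ThinApp faster), which is what you would expect from run-to-run noise rather than a systematic virtualisation penalty.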

Based on this, it seems fairly unlikely that an application would perform significantly worse by being virtualised.

Check your BIOS Power Management Settings

I have been working on a large End User Computing programme for a while, and not found the time to blog, so now it is time to catch up with a few snippets.

This one is about Virtual Desktop Infrastructure (VDI) and the BIOS settings of the physical servers. Here’s the summary: VDI depends on high performance hosts, but by default hosts are typically configured for a balance of performance and energy efficiency. Check your BIOS. It may not be what you think.

I first came across this a while ago when working on a new VDI deployment of Windows 7 on VMware View, running on Dell blade servers to an EqualLogic SAN. We noticed that the desktops and applications were not launching as quickly as expected, even with low loads on the servers, networks and storage. We did a lot of work to analyse what was happening. It’s not easy with non-persistent VDI, because you don’t know what machine the user will log on to. The end result was a surprising one.

The problem statement was: “Opening Outlook and Word seems to be sluggish, even though the host resources are not being fully used. Performance is not slow. It is just not as good as we were expecting”.

My company, Airdesk, is usually called in after the IT team have been unable to resolve the problem for a while. If the problem were obvious it would have been solved already. This means that we were looking for a more obscure cause. For example, in this case, it was not a simple case of CPU, memory or disk resources, because these are easily monitored in the vSphere console. So already we knew that we were looking for something more hidden. Here’s a good article on Troubleshooting ESX/ESXi virtual machine performance issues. Let’s assume the IT team has done all that and still not found the problem.

My approach to troubleshooting is hypothesis-based. We identify all the symptoms. We identify the things that could cause those symptoms. We devise tests to rule them out. It’s not as easy as that, because you can’t always restructure the production environment for testing. You need tools to tell you what is going on.

In this case the tools we used were:

  • vSphere console to monitor the hosts and the virtual machines from outside
  • Performance Monitor to monitor processor and disk activity from inside the virtual machine
  • Process Monitor (Sysinternals) to see what was actually happening during the launch
  • WinSAT to provide a consistent benchmark of performance inside the virtual machine
  • Exmon for monitoring the Outlook-Exchange communication

The tools told us that the application launch was CPU-bound, but there was no significant CPU load on the hosts. CPU Ready time (the measure of delay in scheduling CPU resources on a host) was normal. We could see spikes of disk latency, but these did not explain the time delay in opening applications.
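For reference, vSphere reports CPU Ready as a summation in milliseconds per sampling interval, so it has to be converted before comparing against the usual guidance. A small sketch of the conversion (the real-time charts use a 20-second interval; the 5% figure is a common rule of thumb, not a hard limit):

```python
# Convert a vSphere CPU Ready "summation" value (milliseconds of ready
# time accumulated over a sampling interval) into a percentage.
# The usual rule of thumb is to investigate when ready time exceeds
# roughly 5% per vCPU.

def cpu_ready_percent(summation_ms, interval_s=20, vcpus=1):
    return summation_ms / (interval_s * 1000 * vcpus) * 100

# e.g. a summation of 400 ms over a 20 s real-time interval:
print(cpu_ready_percent(400))   # 2.0 -> normal, as in our case
print(cpu_ready_percent(2000))  # 10.0 -> would indicate CPU contention
```

In our case the converted figures were well under the threshold, which is what ruled out CPU scheduling contention and pushed us to look elsewhere.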

Our conclusion was that the virtual machines were not getting access to the CPU that the vSphere console said was available to them. What could cause that? Something perhaps that throttled the performance of the CPU? Intel SpeedStep maybe? The vSphere console showed that it was configured for High Performance. But we decided to check the BIOS on the hosts and, sure enough, they were configured for Active Power Controller (hardware-based power management for energy efficiency).


We changed the BIOS settings, and the result was immediate. Performance of the virtual desktop was electric. Click on an application and, bang, it was open. We potentially saved tens of thousands of pounds by finding the cause rather than throwing resources at the problem.

You have two types of setting in Dell BIOS:

  1. Processor settings, which can be optimized for different workloads
  2. Power Management settings, which give a choice between energy efficiency and performance.

In our case we wanted to configure the processors for a general-purpose workload but we also wanted to provide immediate access to full resources, without stepping the processor down to save power based on average utilisation. So the Maximum Performance power setting was the one we needed. You could also set the BIOS Power Management to be OS-Controlled, and allow the vSphere setting to take effect. The point of this post is that the vSphere setting said the hosts were in High Performance mode, while the troubleshooting showed they were not.

That was a little while ago. I was reminded of it recently while designing the infrastructure for a global VDI environment based on XenServer (the hypervisor) and XenDesktop (the VDI) running on HP blade servers to a 3PAR SAN. In the Low Level Design we said “Max performance MUST be set in the host server BIOS”.

Sure enough, in the HP BIOS the default power setting is for Balanced Power and Performance, and this needs to be changed. In a XenServer VDI environment it needs to be set to maximum performance. See this technote from Citrix on How to Configure a XenServer Host’s BIOS Power Regulator for Maximum Performance.

Power Settings Default at Startup-Cropped

If you are not managing the BIOS power management settings on your virtualisation hosts, you are not getting the results you should.

Desktop Paradigm

This article is about managing the replacement for the traditional Windows XP desktop. It may sound like a straightforward upgrade of the desktop OS, or it may already seem like a complicated upgrade because of the business applications that don’t run on Windows 7. But in my view it is more than that. The old desktop paradigm that has been in place for more than twenty years is coming to an end. Without a paradigm we face a bundle of difficult choices.

A paradigm is a pattern or model; a world view underlying the theories and methodology of a particular subject. With a desktop paradigm we don’t have to give too much thought to individual components.

IT has never been too hot on empirical evidence. We tend to use words like "best practice" or "industry standard" or "most people" when in fact we have very little evidence to support our generalizations (like this one!). I have very rarely seen in IT anything you could call empirical evidence. However we do know what people buy (because companies release sales revenue) and we can assume that vendors try to sell what they think people buy. We can assume that the market leaders are selling more of what most people want to buy. This means that our understanding of "best practice" or "most people" is in fact an evolving view of the marketplace.

The Windows Desktop is a paradigm. There is a client OS that provides a space for applications to run in. It has a bundle of things running in the background. You can buy things from other vendors that run in the foreground. You can buy a block of hardware that has most things in it. You can buy things that you plug in to standard slots to extend it. Some of these, like the serial and VGA ports, keyboard and mouse, are decades old. You have to worry about how you get the client OS onto the hardware, and how you get applications into the OS. You do things like “install” applications and “patch” operating systems.

Much of how we manage the Windows desktop came in with IntelliMirror. IntelliMirror was a set of technologies introduced by Microsoft with Windows 2000, although the term itself soon disappeared. IntelliMirror included Active Directory, Group Policy, Roaming Profiles, Folder Redirection, Offline Files, Special Folders, Distributed File System, Windows Installer, Remote Installation, Sysprep. These represent a paradigm for managing the desktop.

Desktop security is a paradigm too. We use security software like anti-virus; a local firewall; a network firewall and proxy server to protect the perimeter of the local network; a DMZ for access in to web servers; a VPN for remote access. We might add Terminal Services (or Citrix) as a common variation on the standard desktop, using a thin client connecting to a session on a server.

We have had a few iterations since the desktop paradigm took shape: Windows Hardware Quality Labs (WHQL) for drivers, UAC. But we have not had to build a business case for any of it. It is just the desktop. Everyone has it, just in different flavours.

The way we use a desktop, what we expect from it, and how we manage it, are just the way it is. The discussions we have about it are in the margins: do we need SCCM or not for software distribution; what product should we use for license management; which AV is best? We don’t discuss commissioning a private UEFI; or building a custom hardware device; although in a large organization we could do either.

But now, as Windows XP comes up to End of Life, things are not so clear. It is not just a question of migrating to Windows 8. The desktop paradigm has changed. What is different?

  • It has become clear that a large number of people (most?) use only e-mail and browser a large part (most?) of the time. It turns out they barely need a desktop at all. A smart phone or tablet is sufficient, maybe even better, for this. This leads to a segmentation of the market. Instead of giving everyone a standard PC or laptop, maybe a lot of people don’t really need one.
  • If I synchronize my e-mail, calendar, contacts and data on all my devices, and have access to them anywhere I go, then why provide them from a computer room in an office building? Why not provide them from a remote data centre? There is no DMZ. Everything I access is remote from me. I authenticate securely using a password and a PIN.
  • If the data center is remote from my own offices, and has highly specialised power, air conditioning and security requirements, why run it myself?
  • If I am using a smart phone or tablet for most of my communication and collaboration, and I can’t run Microsoft Office on it, then do I really need it? Maybe I could make do with something simpler, like Google Docs.
  • If a tablet is not joined to any "domain", and is not "managed" by anyone except me, why do I need Active Directory, Group Policy and all of the IntelliMirror technology? And if I don’t need it for the tablet, why do I need it for a laptop, just because it runs Windows? And if it runs Windows RT, why would it need to be "managed" when other tablets do not?
  • If my tablet or smart phone connects to a guest wireless network, then why can’t I use my own personal laptop as well?

In some ways these have appeared, up to now, to be additive problems. Do we allow people to use a Mac at work? Do we let them use their iPhone for e-mail instead of a Blackberry? Can they connect their iPad to the company network? Can they add iTunes to their work laptop? But in a way they are subtractive problems. Once we do all this, what is left? We have a minority of people who need a "desktop" as the computing environment for specialist business applications that do not run any other way.

This means that we need to start evaluating things on their merits. What is the business case for Microsoft Office vs. Google Docs? What is the business case for a (Windows) PC vs. a tablet? What is the business case for hosting (in a third party data centre) vs. running my own data centre? What is the business case for a virtual desktop over physical?

This is complicated, because the questions are interdependent. If I use Microsoft Office at all, then I need a license for it. To use Office I need a Windows PC, virtual or physical, and I need a license for that. If I have a license for Office and for Windows, then I may as well use it for everything else. However if I don’t really need Office, then I don’t really need Windows, and I may as well use an Android or iOS tablet, or a Linux PC. If I need to use SAP I could do it with a browser application built with HTML5, on a Mac or anything else I like. If I don’t have a Windows PC, then any Windows applications I need can be published to me as virtual applications. But if the applications I need are incompatible (perhaps a specialised engineering application), leading me to a dedicated virtual desktop rather than shared, then I need a Microsoft VDA license and I may as well use a PC.

At another level, it probably makes sense to run my remote services (like e-mail) from a third party data centre. But if I need a computer room on site for anything (like my data), and I already have the power and air conditioning for it, then I may as well use it.

When you have a multi-dimensional decision making process, you need one or two fixed points to build the decision around. Every business is different, but as we are moving away from an established desktop paradigm it makes sense to stick a toe in the water with regard to what the new paradigm is.

  1. Like it or not, people are spending more money on more devices, and finding ways they are useful. If you spread the costs over three or five years they are really not that expensive compared to, for example, office space or furniture. If it makes people productive I say give them a tablet AND a smart phone.
  2. People only need MS Office, with a conventional Windows PC, if they produce reports (financial reports, presentations, large documents). Other people don’t need it. They can use OpenOffice or Google Docs instead, and use an MS Office viewer or PDF to read reports produced with MS Office. In a PDF you can add notes and comments to a report that was produced in MS Office, although of course you cannot edit it.
  3. In for a penny, in for a pound. Office workers used to need a desk when they worked with paper. Then they needed a desk to put a screen on. Now I think a lot of people no longer need a desk at all. The screen serves more to cut us off from other people than to enable us to communicate. Round tables, cafe style, pull up a chair, are more useful than desks. You might instead have quiet rooms, like a library, where people can go if they need to work on a report. Quiet room means no conversation, no phones, no audio. This also solves the problem of noise in open plan offices.
  4. For a long time my view has been that corporate assets (like data) belong behind their own perimeter firewall, and all end user devices should be authenticated and authorised in the same way, whether on LAN or WAN. This means that ALL devices accessing the assets, including corporate Windows PC’s, need to have strong authentication, and need to be able to protect confidential data.

Paradigms take years to develop, and evolve incrementally. Although IT love to play the game of thinking about what will be, in most cases it is perfectly fine to follow the trend. What makes now different is that Windows XP is going end of life. Large organisations need to replace XP desktops on a massive scale. They really do need to decide whether to replace XP desktops with Windows 8 desktops, or whether to strike out in a new direction.

Cloud Cuckoo

Cloud is a great marketing concept. It creates an impression of something new and better. But is it really new and better, or is it for the birds, up there in Cloud Cuckoo Land?

There’s no need to define Cloud services. It has been done by the National Institute of Standards and Technology (NIST) in their Cloud Computing Definition. You could write a PhD thesis on the abuse of the term in advertising. Let’s assume for the moment that it is something like buying your IT as a service. What is new about that?

  • Running your IT in a third party data center is not new
  • Paying someone else to run it is not new
  • Financing your IT assets over a payment term is not new
  • Buying specific services, like payroll, on a subscription is not new

What is better about it?

  • Having your IT remote rather than local is no faster or cheaper than it was
  • Paying someone else to run it is not better or cheaper than it was
  • The cost of financing assets is not lower
  • Running specific services, like e-mail, on a subscription has not become better or cheaper than it used to be

Of course it has always made sense to run some of your IT remotely, like a public website. Nothing has changed in the arguments for and against, so we don’t need to repeat them.

Nothing has changed, either, in the basic economics. Remote data is still more expensive, by roughly the same factor as before. Having someone else do something for you is still generally more expensive than doing it yourself, or not doing it at all. Yes, there are economies of scale in data centers, but there always have been.

So what’s up?

The significant change has been Virtualisation, or time-slicing of computing resources. Virtualisation is done by a tiny piece of software (maybe 300MB) that separates different workloads and their use of resources like CPU and memory. It is a kind of extended BIOS, nothing more. If the operating system allowed complete separation of workloads you would not need it.

Time-slicing enables the units of computing resource to be rented out. Up to now, the unit of resource has been physical. No matter how you finance a server, someone has to buy it and allocate it to a workload. With time-slicing the resource can be taken from a pool and put back when not used. You pay for the amount of resource used, and the level of guaranteed availability of the resource.

Of course you still have to have software to make use of the computing resource. It would be difficult if you had to buy the software as an asset, even if you were renting the CPU cycles. The Microsoft Service Provider Licensing Agreement (SPLA) allows a service provider to charge for usage rather than sell the license.

If we take something like Exchange, there are now three models for obtaining the service:

  1. Buy the hardware and software
  2. Subscribe to seats in a shared Exchange system
  3. Rent the hardware and software

It does not really matter where you run it (on premise or at a third party data center) or who runs it (yourself or subcontracted). There is nothing new about those options. Of the three models above, only the last is new.

So why would you want to rent the hardware and software to run your own system, rather than buy it outright or subscribe to a service instead? You would need to want a custom dedicated service (otherwise subscribe) as well as flexibility to scale resources up and down inside a three year period (otherwise buy).
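The trade-off can be sketched with deliberately made-up numbers. None of the prices below are real; the point is only that the answer flips with utilisation:

```python
# Toy comparison of buying a server outright vs renting equivalent
# capacity. All figures are hypothetical. Renting tends to win when
# utilisation is low or bursty; buying wins when the workload runs
# flat-out for the whole term.

TERM_MONTHS = 36
BUY_CAPEX = 9000           # server purchase price (hypothetical)
BUY_RUN_PER_MONTH = 150    # power, space, support (hypothetical)
RENT_PER_MONTH_FULL = 500  # renting the same capacity at 100% use

def buy_cost():
    return BUY_CAPEX + BUY_RUN_PER_MONTH * TERM_MONTHS

def rent_cost(avg_utilisation):
    # pay only for the fraction of capacity actually used
    return RENT_PER_MONTH_FULL * avg_utilisation * TERM_MONTHS

print(buy_cost())        # 14400 over three years, regardless of use
print(rent_cost(0.25))   # -> 4500.0, renting far cheaper at 25% use
print(rent_cost(1.0))    # -> 18000.0, buying cheaper at constant full load
```

The shape of the result, not the numbers, is what matters: the rental line passes through zero and scales with use, while the purchase line starts high and stays flat. That is exactly the flexibility-inside-a-term argument for renting.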

What rental achieves, which is quite valuable, is to remove the capital expenditure aspect of the decision making. With the physical server model it was easy. You had to buy a number of boxes and do some sizing. With the Virtualisation model the decisions are more complex. You need to create a virtualisation infrastructure as well as the traditional server infrastructure.

Services like Dropbox, iCloud, Google Apps, Office 365 are not technological innovations. Services like Windows Azure and Amazon S3 are technological innovations, because they enable you to rent computing resources rather than buy, using Virtualisation.

So is it Cloud, or Cloud Cuckoo Land? I think the question you need to ask as a CIO is "can we rent it?". That is what’s new about Cloud.

Cloud and Windows 365

The idea of a Cloud Desktop is appealing, but can it exist?

Microsoft does not allow service provider licensing for Windows 7. You can have a monthly subscription for a remote desktop on Terminal Services running on Windows Server, but not for Windows 7. This has been clarified recently in a note from Microsoft: Delivery of Desktop-like Functionality through Outsourcer Arrangements and Service Provider License Agreements.

Terminal Services means that the user shares the resources of the server with other users. To be reliable it needs to be very tightly controlled. The user cannot be an admin and cannot install software. The user cannot access high quality graphics, video and audio because they do not have direct, exclusive access to the hardware.

Note that “The hosting hardware must be dedicated to, and for the benefit of the customer, and may not be shared by or with any other customers of that partner”. This is very curious. It means that you can buy a Windows 7 remote desktop running on a PC blade in a datacenter, but not on a VM (unless that also runs on dedicated hardware), even though Microsoft receive exactly the same license fee in both cases.

This is obviously an artificial restriction. One possible reason for this could be that Microsoft will soon introduce their own Windows 365 online desktop. A Windows 365 online desktop makes a lot of sense when used with Office 365, because all the data is then highly connected. You really can connect from nearly anywhere, with nearly any type of device.

At the moment with Office 365 that is not the case. Microsoft say that: “Because this infrastructure is located online, you can access it virtually anywhere from a desktop, laptop, or mobile phone”. You can access it, certainly, but you can use it properly only if the PC or Mac has Office installed locally.

Cloud and Office 365

Cloud is a brilliant marketing concept, but it can be difficult sometimes to pin down exactly what it means. This post looks at what Microsoft is offering in Office 365.

Office 365 is Microsoft’s version of cloud services for office applications. It provides "secure anywhere access to professional email, shared calendars, IM, video conferencing, and document collaboration". It is also a business (or multi-user) version of Windows Live, and a replacement for the earlier incarnation Business Productivity Online Services (BPOS).

My focus in this blog is what Office 365 delivers for a medium sized business. There are plenty of resources giving you the details of Office 365 features. The aim here is to show what it is, and discuss how you might use it.

Here is the admin portal. You can administer users, services and subscriptions here. Click on any of the images below to see a larger version with the details.

Office365 Admin Portal

Here is the user portal. This gives you access to Outlook, the SharePoint Team Site and Lync instant messaging.

Office365 Portal

SharePoint Team Site portal

Office365 SharePoint Home

Working with documents, either in the browser or by opening the application on the desktop

Office365 Documents

Using Word Web App. If you are thinking of using Web Apps instead of Office, you need to do a feature comparison to understand what you may be missing. For example:

  • In Word, no headers and footers, no section breaks
  • In Excel, no data sorting.

Of course there are far more differences than these, and you need to decide for yourself if they are relevant, but I mention these to show that it is not an academic comparison of features you never use.

Office365 Word

Using Outlook Web Access (OWA)

Office365 Outlook

Outlook options

Office365 Outlook Options

Outlook attachment, from the PC not SharePoint. You can map a drive to a SharePoint library in order to have direct access to the shared files from Outlook.

Office365 Outlook Attachment

Exchange mailbox administration

Office365 Mailbox

Exchange options

Office365 Exchange Phone and Voice

Forefront protection

Office365 Forefront for Exchange

Office 365 is a service operated by Microsoft, and of course pricing is set by Microsoft. Here is the UK pricing. Key points to note about the pricing plans:

All the pricing plans come with Exchange. Office 365 is essentially an online Exchange service plus other things on top.

The Small Business pricing plan adds Office Web Apps, somewhere to store files online (SharePoint) and an Instant messaging service (Lync).

The Midsize and Enterprise plans add SharePoint and Lync to Exchange. They have scaled up capacity and integrate with your own Active Directory. Different plans (E1 to E4) successively add features:

  • E1: Web Apps are view-only. You will need something else (Office on the desktop) to create files.
  • E2: Adds full Web Apps
  • E3: Adds Office Professional on the desktop
  • E4: Adds an on-premises Lync server for PBX

There are more feature differences that I have not mentioned, but they also add progressively through the plans.

There are also two Kiosk plans. These are like E1 and E2 but have cut-down versions of Exchange and SharePoint.

Features and pricing are changing all the time, so you will need to review features carefully before selecting a plan. However you can change plans at any time for any user, so you are not locked in to the wrong plan.

So what, really, is Office 365?

  1. It is subscription licensing, per user per month with the ability to scale down as well as up
  2. It is an online Exchange service operated by Microsoft
  3. It is an online file server or collaboration service, using SharePoint
  4. Being an online service, naturally, you can access it from anywhere
  5. You don’t need to run your own mail server, file server, mail filtering, archiving, backup server, intranet server, remote access. But you still need to run a print server, directory server, application server, management server.
  6. If you want to use the features of Microsoft Office (Word, Excel, PowerPoint, Outlook etc.) then you still need a PC or a Mac. You can’t do it from an iPad or Android tablet, or from a thin client. Office 365 is not a web-based version of Office. The exception is if the heavily cut-down Web Apps are sufficient.

Secure authentication for remote access

Because it is an online service, you don’t have to provide remote access to your LAN. Your data is equally available from anywhere, so it works well for a distributed organisation. You also don’t have to provide backup and DR. But there is a curious anomaly: no two-factor authentication. Remote access creates a vulnerability to impersonation, since you cannot know who is entering the user’s credentials. Login details can easily be obtained if a user logs in from an insecure device, if the user loses a device that is configured for access, or simply by guessing.

Two-factor authentication using a hardware or software token protects against this, but Office 365 does not provide it. In this sense it is like opening your firewall to allow direct access to your servers: you just wouldn’t do it.

Office 365 uses Active Directory Federation Services (ADFS) to link your own directory of users with Microsoft. On your main premises the user actually authenticates to your own AD. Remotely, the user authenticates through your ADFS Proxy, which is accessible from the Internet. The ADFS Proxy can require more secure authentication for external access. Security vendors such as RSA can integrate their two-factor authentication (for example SecurID) with your ADFS Proxy, and so enforce strong authentication for remote access to Office 365.
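The split described here (internal users authenticating straight to AD, remote users routed through the ADFS Proxy with an extra factor) can be sketched as a toy model. Everything below is purely illustrative: the function names, the stub directory and the token check are my own, not a real ADFS or SecurID API.

```python
# Toy model of the Office 365 federated sign-in flow described above.
# All names are illustrative; this is not a real ADFS or SecurID API.

# Stub user directory and token store for the sketch:
_DIRECTORY = {"alice": "s3cret"}
_TOKENS = {"alice": "123456"}

def check_ad_credentials(user, password):
    """Stand-in for authenticating against your own Active Directory."""
    return _DIRECTORY.get(user) == password

def check_second_factor(user, token):
    """Stand-in for a two-factor check (e.g. an RSA SecurID code)."""
    return token is not None and _TOKENS.get(user) == token

def authenticate(user, password, token=None, *, internal):
    """Route a sign-in the way the federated setup does."""
    if internal:
        # On the LAN: the user authenticates directly against your own AD.
        return check_ad_credentials(user, password)
    # Remote: the request goes via the Internet-facing ADFS Proxy,
    # which can be configured to demand a second factor as well.
    if not check_ad_credentials(user, password):
        return False
    return check_second_factor(user, token)

# Internal login needs only the password; remote login also needs the token.
print(authenticate("alice", "s3cret", internal=True))             # True
print(authenticate("alice", "s3cret", internal=False))            # False
print(authenticate("alice", "s3cret", "123456", internal=False))  # True
```

The point of the sketch is only that the policy lives at the proxy: the same credentials succeed internally but are not sufficient externally.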

Integration with other services

Being online and operated by Microsoft, there is the problem of how to integrate with third-party services. RIM have recently introduced BlackBerry Business Cloud Services to integrate Office 365 with the BlackBerry service. Microsoft Dynamics CRM Online will also be integrated. SharePoint Online allows you to use your own SharePoint intranet applications. As for integrating with non-Microsoft services, that seems unlikely. I can’t at the moment see how you would integrate with EMC Documentum or Autonomy WorkSite.

You can still obtain Hosted Exchange, SharePoint and CRM separately, if the server-side features of Office 365 are not sufficient. These are multi-tenant versions of the servers run by third parties, and they also use subscription licensing. And of course you can still outsource the operation of dedicated servers to run in a data centre somewhere else.

Remoteness

When you change from using existing infrastructure on the LAN to using Office 365 on the Internet, you need to provide additional bandwidth to it. Arguably e-mail does not need fast connections because it is asynchronous, but SharePoint, as the library of shared documents, will.

If you use WAN acceleration devices like Cisco WAAS or Blue Coat PacketShaper at remote sites, compression will no longer work because it requires a device at both ends, so you will need additional bandwidth at remote sites too.

Mix and match

The plans themselves are pretty much for marketing purposes. You can mix and match E (Midsize and Enterprise) and K (Kiosk) plans in the same organisation, and indeed you can simply add or remove components for any number of users. This means that, in effect, each component has a unique price that you can evaluate, and can be assigned to each user depending on their needs.

Costs and Benefits

So, the big question: if you are a 1,000-person organisation, is Office 365 a reasonable alternative to doing it yourself?

Exchange will cost £16k per annum (Kiosk), £31k (basic) or £52k (full). Archiving adds £23k. You will have to compare that with your own costs of running Exchange Server for 1,000 users.

Office Pro Plus will cost £100 per user per annum. You can make a direct comparison of what it would cost to buy through Office 365 or through Volume Licensing. There is no difference in the end result: Office on the desktop and Web Apps online with both.

Web Apps will cost £47 per user per annum, as an alternative to the installed version of Office. You need to have SharePoint as well, to be able to use Web Apps. It can be SharePoint online or on-premises. There is no other way to obtain Web Apps as an alternative to Office installed on the desktop.

You also need to add the cost of additional bandwidth to get to Office 365 over the Internet. Your additional costs will depend on circumstances, but will be substantial.
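Putting the figures above together, the mix-and-match arithmetic looks something like this. The per-user rates come from the post (£16/£52 per user per year for Kiosk/full Exchange, £100 for Office Pro Plus); the 700/300 split between full and Kiosk users is invented purely for illustration, and archiving, Web Apps and bandwidth are ignored.

```python
# Rough annual cost sketch for a 1,000-user organisation, using the
# per-user figures from the post. The 700/300 split is illustrative only.

EXCHANGE_FULL = 52     # pounds per user per year (full Exchange plan)
EXCHANGE_KIOSK = 16    # pounds per user per year (Kiosk plan)
OFFICE_PRO_PLUS = 100  # pounds per user per year (Office on the desktop)

def annual_cost(full_users, kiosk_users):
    """Total annual licence cost for a given mix of full and Kiosk users."""
    full = full_users * (EXCHANGE_FULL + OFFICE_PRO_PLUS)
    kiosk = kiosk_users * EXCHANGE_KIOSK
    return full + kiosk

# Everyone on the full plan with Office Pro Plus:
print(annual_cost(1000, 0))   # 152000
# Mix and match: 300 light users moved to Kiosk:
print(annual_cost(700, 300))  # 111200
```

Even with these crude assumptions, moving a body of light users onto Kiosk plans cuts the licence bill noticeably, which is why the mix-and-match aspect matters.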

To use Office Pro Plus you still need to run a full desktop service on Windows or Mac, or on terminal services. You will still need to run servers for:

  • Active Directory
  • DHCP and DNS
  • Print server
  • Other business applications like the finance system
  • Management of the PCs: anti-virus, software distribution, patching, image deployment
  • Probably file server and backup server for data that is not in SharePoint. For example, SharePoint has an upload/download paradigm, so I would expect a lot of people to hold data on the PC. Normally this would be redirected to a file server, as would a user’s roaming profile.

To run these servers, of course, you still need a computer room and IT staff. Therefore the cost-saving with Exchange Online and SharePoint Online is the incremental cost of running these on-premise in addition to the existing on-premise servers.

The mix and match aspect is important. Most of the organisations I know have Office, Exchange and SharePoint users ranging from expert to not at all. Although you can provide different editions of Office, that’s it. Office 365 Kiosk allows you to identify a body of users who only ever have light usage, and to license them at a significantly lower cost while still being integrated in the same infrastructure (the corporate directory, calendars, intranet).

If you have no existing infrastructure then there is a strategic choice to make between online and on-premise. But that is a rare situation. Most businesses already have an infrastructure of IT services. They can choose to migrate services to Office 365 over time. For example, an upgrade to Exchange would be a good time to consider it. You really have to want to outsource Exchange and/or SharePoint for Office 365 to make sense.

Personally I don’t buy the argument about "allowing your valuable IT staff to concentrate on strategic matters". It either makes economic sense or it doesn’t. However I do think that if you remove routine tasks from IT staff then it is easier to focus on managing the remainder. The difficulty with managing IT is complexity, and so the less complexity the better.

You can obtain a trial of the Office 365 Enterprise Plan E3 here. You can also obtain a trial of the Kiosk Plan K2 here, if you are interested to see how it could work in a mix and match environment.

If you would like to contact Airdesk we can work through a cost-benefits analysis of online vs on-premise with you.

The Cloud is not a disruptive technology. It is a pricing plan.

Complexity in IT

In a previous post I said I thought that problems in IT are caused by complexity, and not by the pace of change, poor management or lack of skills (although any of those may contribute).

Here are some interesting thoughts from David Gelernter. Gelernter is Professor of Computer Science at Yale.

In his book Mirror Worlds he says:

"Information structures are, potentially, the most complicated structures known to man. Precisely because software is so easy to build, complexity is a deadly software killer."

"Programs that amount to a quarter of a million lines of text (there are about 7000 lines in this book, so picture 36 volumes of programs) are not in the least unusual."

"It is very hard to make programs come out right. After a decent amount of effort they tend to be mostly right, with just a few small bugs. Fixing those small bugs (a bug can be small but catastrophic under the wrong circumstances) might take ten times longer than the entire specification, design, construction, and testing effort up to this point."

"If you are a software designer and you can’t master and subdue monumental complexity, you’re dead: your machines don’t work….Hence "managing complexity" must be your goal."

I don’t think many businesses or customers of IT fully recognise this. They think of IT as fiddly, not for them, full of jargon. They are happy to say they don’t really understand it. They don’t realize that hardly anyone does!

Why is IT so difficult?

A friend of mine, a very experienced and senior non-executive director, asked me why, in all the organisations he knows, IT is the area that causes the most difficulty. There are several common explanations, but I am not sure they add up. This leads me to a different explanation, with interesting consequences.

IT causes difficulty in many ways, for example:

  • results not achieved on time, not what was expected or promised, and not within budget
  • catastrophic failure of critical systems, or loss of data
  • systems hard to use, difficult to change, not integrated with other systems, expensive to maintain, hard to replace
  • problems with staff, and with suppliers: poor quality, high turnover, unreliable.

So my friend can be reasonably confident that a five year multi-billion pound engineering project will be completed successfully, while a one year million pound IT project is unlikely to run to plan. Why is that?

Possible explanations:

  1. IT is changing so fast that whatever you plan is obsolete within a short time
  2. People in IT generally lack professional training and skills
  3. People in the business don’t understand IT, and the people in IT don’t understand the business.

I have doubts about these explanations. They have a superficial truth, but for me they don’t explain the level of difficulty in managing IT successfully.

1. The rate of change

The IT industry is constantly producing new things, that’s true. But in other respects the rate of change is fairly slow. The way we do computing is not fundamentally very different from say ten years ago. Many of the same companies are selling many of the same products. If you started a project five years ago, no matter how large, it is difficult to see what technology has changed sufficiently to cause the project to fail.

2. Training and skills

Because things in IT go wrong, it is easy retrospectively to identify faults in the skills of the individuals as the cause, but it is not necessarily so. When things are difficult or impossible to achieve even the highest level of skill may not be sufficient. It is hard to imagine that in general the training and skills of people in IT are lower than in Sales, Marketing, Procurement, Distribution. Maybe those areas just aren’t as difficult, and so the managers appear to be more successful.

3. Understanding

There is a high threshold in getting to grips with the language of IT, certainly. But at the level at which IT and other people in the business need to communicate this really should not be relevant. Medicine has its own language, but doctors don’t seem to have the same problem communicating with patients. I suspect that problems in understanding are more to do with trust than with language.

So if these explanations don’t account for the difficulty with IT, what are we left with? My view is that the root cause is complexity. IT systems are the link between a human intention and a computer chip. Human intentions are imprecise and hard to define, but chips are strictly binary. The layers of software in between intention and chip are hugely complex. To produce a predictable outcome is extremely difficult.

If it is true that the root cause of difficulty in managing IT is complexity then there are two consequences. The first is that we should aim to minimise the complexity in every possible way; and the second is that we need people who manage complexity very well.

Versatile Desktop™ through UEFI

Versatile Desktop is the ability to run different business desktops on the same client device. We can already do this easily through terminal services, but only if we are online, and without the full features of the client device such as enhanced graphics or audio.

Unified Extensible Firmware Interface (UEFI) makes it easier than before to run different desktops locally, with the full features of the device. This post looks at how widespread and practical UEFI is as a means of achieving the Versatile Desktop.

We need the Versatile Desktop anywhere that we might previously have used two or more separate physical computers. Examples are:

  1. Television production with a dedicated computer for video editing (no anti-virus) and a standard desktop for other applications
  2. A sales person who is mostly out of the office and needs an unrestricted desktop with admin access when travelling, but no admin access when back on the network
  3. A lawyer with a Windows 7 laptop for the corporate document management system and a Mac for personal use
  4. A software developer with a specialised setup for development tools and a separate desktop for games
  5. A finance department with a dedicated computer and smartcard authentication for a legacy online banking application
  6. A support organisation with desktops on different VPNs for different clients.

The UEFI standard

The Unified Extensible Firmware Interface (UEFI) is an industry standard for an interface layer between the hardware devices and the operating system. Its main purpose is to provide pre-OS services and to load an OS.

So the computer has multiple hardware devices; firmware controls the devices; UEFI drivers and applications perform pre-OS functions and load an OS.

UEFI diagram

UEFI is a replacement for the BIOS. The UEFI standard is organised by the UEFI Forum, which includes all the major computer hardware vendors. Most new-design motherboards now ship with UEFI support. However, you might not notice it, because the OEM vendor may stick with a BIOS, or may run UEFI in compatibility mode so that it behaves like a BIOS.

UEFI is a technical standard for pre-OS execution, but it also provides advantages to the user:

  • a graphical user interface
  • networking
  • authentication
  • access to applications provided by the vendor, for example hardware diagnostics or device configuration
  • faster startup.

The most important, in terms of the Versatile Desktop, is that UEFI makes it easier than before to install multiple OSes and to select which one to start. You can have either a default OS, or an option to choose the OS at startup.

UEFI Versatile Desktops

Here are a few examples of UEFI in action.

1. Acer

  • The Aspire One D250 Model 1613 has a dual-boot option between Windows 7 and Android (Acer Aspire One D250-1613)

2. Apple Mac

  • Boot Camp is the name for Apple’s implementation of loading the OS through EFI
  • Boot Camp Assistant enables the user to install Windows 7 (Boot Camp Assistant)
  • At startup you have an option to choose which OS to run (Boot Camp Startup)

3. HP notebooks

  • Most HP notebooks implement UEFI
  • HP System Diagnostics is a UEFI application (HP System Diagnostics UEFI)
  • HP notebooks can be switched to UEFI Boot Mode (disabled by default)

So with UEFI we can have graphical and networking system applications before the OS runs; a choice between different full desktop OSes; and perhaps a minimalist quick-starting desktop with access to other remote desktops via Citrix.