The problem of enterprise patching

A colleague was talking to me yesterday about his recent experience in implementing Microsoft System Center Configuration Manager (SCCM) for a customer. He is using the System Center Updates Publisher (SCUP) to deliver Dell firmware and software to clients. This got me thinking again about the best tools to use for keeping your non-Microsoft software up to date.

Keeping something like Adobe Flash Player up to date, for example, is a small problem that encapsulates a much larger one: how do you ensure that the clients with access to corporate data are adequately secure? Adobe Flash Player is ubiquitous. Updates come out frequently, and some of them fix critical security vulnerabilities. One way or another you will need to make sure that clients in the enterprise either don’t use Flash Player, or are reasonably up to date.

It is not easy to do.

  • Updates come out frequently, so you need to know when an update is required
  • There are Flash Players for Windows, Mac, Linux and Solaris
  • There are different trains: train 10 is current, but train 9 is still required on older operating systems
  • Even when you know an update is required, you need a mechanism to perform the update on the computers that require it.

Flash Player is probably one of the simplest products to update. Something like Adobe Reader is much more complicated. You can’t just run the latest patch. Version 9.2 will update 9.0 and 9.1, but version 9.3 will only update 9.2. That’s even before we get onto security patches like 9.3.3.
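
As a sketch of the logic involved, the Reader upgrade chain described above can be captured in a small lookup table. The versions are the ones mentioned in the text; this is an illustration, not Adobe’s official support matrix:

```python
# Illustrative sketch of the Adobe Reader 9.x update chain described above.
# Versions and ordering are taken from the text, not an official Adobe matrix.

# Each entry maps an installed version to the single update that applies next.
NEXT_UPDATE = {
    "9.0": "9.2",    # 9.2 updates 9.0 and 9.1 directly
    "9.1": "9.2",
    "9.2": "9.3",    # 9.3 only updates 9.2
    "9.3": "9.3.3",  # security patch on top of 9.3
}

def updates_needed(installed: str, target: str = "9.3.3") -> list[str]:
    """Return the ordered list of updates to get from `installed` to `target`."""
    path = []
    current = installed
    while current != target:
        if current not in NEXT_UPDATE:
            raise ValueError(f"No known update path from {current}")
        current = NEXT_UPDATE[current]
        path.append(current)
    return path

# A machine on 9.0 needs three sequential installs: 9.2, then 9.3, then 9.3.3.
```

That chain of sequential installs is exactly the kind of bookkeeping you want a patching tool to do for you.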

Windows Server Update Services (WSUS) takes care of patching for Microsoft products. The WSUS server component is free. It uses an online catalogue to determine what patches are available, and the built-in Windows Updates client to determine what is required on a given machine. But what about third party products, or different operating systems?

Any desktop systems management tool will perform an update that is given to it, but what is missing is:

  • alerting that the update is available
  • producing a dynamic collection or definition of computers that need it
  • specifying the command line to use when installing it.

BigFix Patch Management is a product that aims to do all this. What’s different about BigFix is that it provides the patching tasks for you to run. You just need to approve them for the update to be distributed to computers that need it. BigFix is highly suited to a heterogeneous environment because it works across different applications and operating systems. The company came onto the Gartner Magic Quadrant in late 2009 and was acquired by IBM in July 2010.

If you have a more uniform client base (say, mainly Windows clients) and you are already using WSUS for Microsoft patching, then you may not want to add another client management agent.

EminentWare is an interesting product that supports third party application patching on Windows clients using WSUS. EminentWare currently supports a fairly small range of products, but they account for probably the largest number of patches that need to be distributed. With EminentWare you can also create your own patches to distribute through WSUS. This is very handy for something like a Lotus Notes Fix Pack.

Secunia Corporate Software Inspector (CSI) also uses WSUS to push out security updates. Secunia’s main focus is on advisories, through their Vulnerability Intelligence services. Earlier in 2010 they added a capability to push out patches through WSUS, although I don’t have a list of which products can be updated this way.

The advantage of both the EminentWare and Secunia CSI approach is that you don’t need to run another client agent. The client for both detection and remediation is the built-in Windows Updates mechanism.

If you already use SCCM to manage clients, then the free SCUP extension enables you to add catalogues from other publishers to obtain and publish updates. This relies on the software vendors publishing a catalogue for SCUP. Unfortunately Adobe does not. Currently Citrix is the only software vendor that does. But at least it is a standard mechanism, and there is an opportunity for third parties to add to it.

If you already use Altiris to manage clients, then:

  • Patch Management Solution is available for Windows, Mac and Linux
  • Altiris maintains the catalogue for Microsoft and Adobe products, but not others
  • The Altiris client identifies where the patch is required and installs it.

Ideally what we want is a combination of these things:

  1. a public catalogue of patches from different vendors, independent of the distribution tool
  2. a generic query filter to identify machines where the patch is required
  3. integration with existing distribution and reporting tools.

That shouldn’t be hard, but it doesn’t exist.
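
To make the wish-list concrete, here is a rough sketch of what a vendor-independent catalogue entry and a generic applicability query might look like. Every field name and value here is invented for illustration; no such standard exists, which is the point:

```python
# Hypothetical vendor-independent patch catalogue entry plus a generic
# applicability check. All field names and values are invented for illustration.

catalogue_entry = {
    "vendor": "Adobe",
    "product": "Flash Player",
    "version": "10.1.85.3",
    "applies_if": {"product": "Flash Player", "version_below": "10.1.85.3"},
    "install_command": "install_flash_player.exe -install",  # hypothetical silent switch
}

def version_tuple(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of integers."""
    return tuple(int(p) for p in v.split("."))

def needs_patch(machine: dict, entry: dict) -> bool:
    """Generic applicability query: does this machine need this patch?"""
    rule = entry["applies_if"]
    installed = machine.get("software", {}).get(rule["product"])
    if installed is None:
        return False  # product not installed, nothing to patch
    return version_tuple(installed) < version_tuple(rule["version_below"])

# A dynamic collection is then just a filter over the inventory:
inventory = [
    {"name": "PC-01", "software": {"Flash Player": "10.0.45.2"}},
    {"name": "PC-02", "software": {"Flash Player": "10.1.85.3"}},
    {"name": "PC-03", "software": {}},
]
collection = [m["name"] for m in inventory if needs_patch(m, catalogue_entry)]
# collection is ["PC-01"]
```

The distribution tool would then only need to run `install_command` against the machines in the collection and report back.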

If only it were that simple

Microsoft Forefront Client Security (FCS) server components do not run on 64 bit servers. OK, that’s no problem, we will have a dedicated 32 bit server. It should be simple enough, shouldn’t it?

Hang on a sec. We have to install the FCS Distribution component to configure the updates on WSUS. The WSUS server is 64 bit, and the FCS Distribution component will not install on it.

OK, let’s just install WSUS on the FCS Server. It doesn’t matter if we have two WSUS servers bringing down updates, in fact it’s quite neat and tidy. We could use the FCS copy of WSUS for FCS updates only.

Hang on a sec. The workstation can only have one WSUS server in Group Policy, and so it has to get its Windows Updates from the same server as FCS updates. That means the FCS server has to become the WSUS server. But we have distributed WSUS servers. That means they all have to get their updates from the FCS server. That’s putting the cart before the horse. I thought FCS was just making use of the updates distribution service, but now it is telling me how it has to run.
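
The constraint comes from the Windows Update Group Policy settings: a client points at exactly one intranet update server. The policy boils down to a couple of registry values (the server name and port below are examples), so there is nowhere to take Forefront updates from one server and everything else from another:

```
HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\WindowsUpdate
    WUServer       = http://wsus01:8530   (where updates come from)
    WUStatusServer = http://wsus01:8530   (where status is reported)
```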

What is this Distribution component anyway? It only sets the update frequency, adds the Forefront updates to the Product list and sets up automatic approval. We can do that natively in WSUS 3.0. So maybe we don’t need it anyway, and can keep the existing WSUS server.

OK, moving on, let’s use our existing SQL 2005 Enterprise (this is on 64 bit, by the way). It is good practice to use an off box SQL Server, and it makes good use of an Enterprise license. Wait a minute. We need Reporting Services. And that needs IIS. OK, let’s install these on the SQL Server. And Integration Services, because it needs those as well.

Hang on a sec. The Collection database cannot be separated from the Collection server component. And that won’t install on 64 bit. And my SQL Server 2005 is 64 bit, so after all I need a dedicated SQL Server, even though I have a perfectly good SQL Server already.

But we can use SQL Express for a small service like this, can’t we? Nope, it needs Integration Services, which is not supported on SQL Express. I give in. I can’t take any more. We’ll buy a SQL Standard license, or dedicate a SQL Enterprise license to the server.

Right. Let’s get on with the installation.

What’s this? It won’t install. But it couldn’t be simpler! We are just doing a standard installation on a standard Server 2008 32 bit server. It keeps failing after the pre-requisites check but while creating the Collection database. What can it be? Nothing in the setup log.

We dig deeper into the logs. It is trying to create the database on the C: drive even though we set up SQL to store databases on the D: drive, which is standard good practice. It recognises that the default folder is on D: but changes this to C:. There is not enough room on C:, so it fails.

OK, let’s create a very small database on C:, then move it in SQL Server Management Studio to D: and expand it. FCS will be none the wiser. Yup, that’s OK. It’s just a dumb installer script.

Right, let’s get on. We want to get the client onto Windows 7. I assume we are going to install the client, and then tell it where the updates (and security policy) come from. I can create the FCS policy, and use the existing Windows Updates policy. Let’s find the latest client. But where is the client? There isn’t one. What do you mean there isn’t one? I mean there isn’t one. You have to install the old client, dated 2007. The WSUS server interrogates the client, checks whether there is a MOMserver setting, and updates it with a newer client. Then the next time round the WSUS server interrogates the client, finds it is newer, and applies all the updates for the newer client. It could take forever! So we need to write a script that tells the client to run Windows Updates straightaway, to get itself up to date after the client is first installed.

Status: 0xc00000e9 An unexpected I/O error has occurred

Here is an unusual error we experienced recently. We were rebuilding an HP Blade server remotely using the iLO. But Windows setup was failing part way through.

We were rebuilding an HP Blade server with Windows 2008 64-bit. It should be simple enough. You need an ISO of SmartStart and one of Windows. Then, using the Virtual Media feature of Advanced iLO, you attach the SmartStart ISO and run through the setup dialogue. Then on request you attach the Windows ISO. Windows setup begins, using the parameters you entered in the SmartStart routine.

But we found this error every time:


"The installation was cancelled. Windows could not apply unattend settings during pass (null)."

Normally with a Sysprep error you have a part finished machine and you can extract the sysprep log files to see what the problem is. But in this case, being managed by SmartStart, the only option was to abandon the setup.

So we tried leaving SmartStart out and running the plain Windows setup. This time we got a different error. Windows setup would start copying the installation files, and then somewhere in the middle stop with the error from the title: Status: 0xc00000e9, An unexpected I/O error has occurred.

So the first impression is that there is something wrong with the Windows ISO. But we tried different versions of Windows with the same results, so that seemed unlikely.

A search for the Status message shed no light on it except confirming that it is a problem with reading the media, but we already guessed that.

Eventually we had an idea. We copied the ISO from the VM SAN drive where it was sitting to a physical drive on a PC, and attached the iLO to that. That did the trick: with the ISO on local physical storage, setup ran through without the I/O error.


Software Distribution and Altiris

Automated software distribution is immensely important in a managed IT operation. If you don’t have effective tools, software either becomes out of date or very time consuming and costly to manage. Altiris provides a powerful set of tools to manage software distribution more easily.

You can do a lot without buying any tools. Group Policy provides a software installation mechanism for Windows environments. You can install anything provided it is in Windows Installer format (an msi file) and has the parameters to allow a silent install. You can also use old-fashioned startup or logon scripts to install software.

Fairly quickly, however, you start to find applications where this is difficult or even impossible. It may only be 5% of all applications, but they still have to be installed. The Oracle client, for example. Or you may have some tweaks that need to be applied to registry or ini files after installation.

One response people adopt is to let software get out of date. But most software revisions are issued to fix problems, especially security problems. We use Qualys to provide reports on software vulnerabilities. If software has vulnerabilities you can be fairly sure that someone will try to exploit them. Running out of date versions of common applications such as Adobe Reader, Flash Player, Java, and QuickTime is very unwise.

If you have out of date software you can lock the whole environment down to reduce the scope for damage. For example you can prevent downloads from the internet. But this creates an unproductive and restrictive workplace, and it is not needed for security as such; it is needed only to compensate for not updating the software.

Some people argue that old software versions are required for compatibility. But you can generally run more than one version and specify which is used.

Another response is to allow users to be administrators of their own machines, so they can install software and cut down on your workload. This is also very unwise in a business environment. It is not a question of trust but of reducing the risk of harm. Windows Vista builds this concept into the security model of the whole OS and has User Account Control (UAC) to prevent administrative actions from the desktop.

A good set of tools just makes it easier and less costly to manage software distribution.

The first thing Altiris provides is an inventory. This enables you to have collections of machines with different characteristics. If you install the same software on all machines then all machines are the same and you don’t have too much need for an inventory. However with the proper tools you can have a less restrictive approach. For example you can let users self-select applications to install.

The inventory enables you to identify computers with a specific version of an application whether or not it was installed programmatically. So you can simply have a collection of computers where the version of Adobe Reader is less than 8.1.2. The collection is automatically kept up to date when the software inventory runs. You can have another collection, Adobe Reader KeepUpToDate, which is all computers where Adobe Reader should be the latest version. You can then distribute updates to all computers in the Adobe Reader less than 8.1.2 collection AND in the KeepUpToDate collection.
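
As a sketch (with made-up inventory data), the collection logic just described amounts to two filters and an intersection:

```python
# Sketch of the dynamic-collection logic described above. The inventory data
# is invented; the 8.1.2 threshold comes from the example in the text.

def version_lt(a: str, b: str) -> bool:
    """Compare dotted version strings numerically, e.g. 8.0.0 < 8.1.2."""
    return tuple(map(int, a.split("."))) < tuple(map(int, b.split(".")))

inventory = {
    "PC-01": {"Adobe Reader": "8.0.0", "keep_up_to_date": True},
    "PC-02": {"Adobe Reader": "8.1.2", "keep_up_to_date": True},
    "PC-03": {"Adobe Reader": "7.0.9", "keep_up_to_date": False},  # exempt
}

# Collection 1: Adobe Reader below 8.1.2
outdated = {pc for pc, attrs in inventory.items()
            if version_lt(attrs["Adobe Reader"], "8.1.2")}

# Collection 2: machines flagged to keep Adobe Reader current
keep_current = {pc for pc, attrs in inventory.items()
                if attrs["keep_up_to_date"]}

# Distribute the update to the intersection; re-evaluating after each
# inventory run keeps the collection current automatically.
targets = outdated & keep_current
# targets is {"PC-01"}
```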

The engine of software distribution is the Altiris Task Server. Task Server enables you to specify a range of tasks for a collection of computers. The task types include: run script; defrag; deliver software; inventory; ipconfig; power control; backup; service control; and several others. You can assemble these tasks into jobs. So for example a job might be: 1) Wake a machine that is shut down. 2) Install a pre-requisite if not there (e.g. .NET Framework). 3) Run a script to back up some settings. 4) Uninstall an application. 5) Install an application. 6) Stop a service. 7) Apply settings. 8) Start a service. 9) Run a script to check the service is working. Each task in the job reports a return code, which can determine whether the job continues.
Likewise more complex server operations such as upgrades can be automated to make them accurate and repeatable, for example: run a backup of data the day before an upgrade; then before the upgrade itself stop the service (say for Domino) and make an incremental backup of changes since yesterday; then upgrade or move the service; restart the service and check it is running correctly. This can all be automated so it is the same on every server.
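
The pattern in both examples is the same: an ordered list of tasks, each gated on the return code of the one before. A minimal sketch, with hypothetical task names:

```python
# Minimal sketch of a job runner in the style described above: tasks run in
# order, and a non-zero return code stops the job. Task names are made up.

def run_job(tasks):
    """Each task is (name, callable returning an int). Stop on first failure."""
    completed = []
    for name, task in tasks:
        code = task()
        if code != 0:
            return completed, f"failed at '{name}' (return code {code})"
        completed.append(name)
    return completed, "ok"

job = [
    ("wake machine",            lambda: 0),
    ("install .NET if missing", lambda: 0),
    ("back up settings",        lambda: 0),
    ("uninstall old version",   lambda: 0),
    ("install application",     lambda: 0),
]
done, status = run_job(job)
# status is "ok"; done lists all five task names in order
```

A real tool adds retries, scheduling and reporting on top, but the return-code gating is the heart of it.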

The Altiris software delivery package bundles together the components you need for an installation. In one package you can have several different options. You could have one Citrix client package, with different transforms for the web-only client, PN Agent set to sign in automatically, PN Agent not set to sign in, and so on.

Software packages can be trickled down to the client depending on bandwidth. A large package such as a service pack could be trickled to the clients overnight and then installed from local source. Packages can also be multicast, so if 20 clients need to be updated it can be done in one multicast instead of 20 separate downloads.

You can select a local workstation to be a task server or a package server. In a small office without a dedicated server one workstation can obtain the package and distribute it to the other local clients. Task servers work by "tickling" the client with a UDP packet to tell it there is a job waiting. For an organisation, say, with a large number of shops or small regional offices this enables you to manage software without distributing it over the WAN or putting servers at every location.

Altiris lets you schedule tasks many ways. One of the standard problems is when to update laptops. When people come in to work they often need to start their laptop to get papers for a meeting or check their diary. It is very inconvenient if a large software update kicks off. Likewise at the end of the day they may be hurrying to get away in time for a train. Not a good time to start a 20 minute update. But if you let people just choose whether to install or not, the update may never get installed. Altiris enables you to either notify or warn users, and lets them defer the task for a set period. So you could schedule the update for 12:00 am and let users defer for up to 24 hours.
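
The deferral policy amounts to a simple deadline check, sketched here with the 24-hour window from the example:

```python
# Sketch of the deferral policy described above: the user may postpone an
# update, but only up to a fixed deadline after it is first offered.

from datetime import datetime, timedelta

MAX_DEFERRAL = timedelta(hours=24)  # the 24-hour window from the example

def may_defer(offered_at: datetime, now: datetime) -> bool:
    """True while the user is still allowed to postpone the update."""
    return now < offered_at + MAX_DEFERRAL

offered = datetime(2010, 11, 1, 0, 0)  # scheduled for midnight
assert may_defer(offered, datetime(2010, 11, 1, 8, 30))     # morning: can defer
assert not may_defer(offered, datetime(2010, 11, 2, 0, 0))  # deadline reached
```

Once the deadline passes, the task runs regardless, so the update always gets installed without ruining anyone’s morning.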

Altiris provides a software portal on the intranet that lets users choose software they need. This helps to avoid the sense that IT is controlling what people can do. If a user needs Visio or AutoCAD to do a piece of work they can select it from the software portal. A workflow will route a request for approval to a manager. When it is approved the software will be installed automatically. The cost will be charged to the appropriate cost centre and the license count amended. If additional licenses are required a workflow will be triggered to buy them. Without something like Altiris the request can get lost in the system and take weeks to get nowhere.

Most software today is provided as a Windows Installer msi package and can be customised with a transform file. A transform is used to select options and enter serial numbers. A few products are still not provided as MSIs. If they have a silent command line, Altiris can use that instead. Otherwise they can be packaged into Windows Installer packages using Wise Package Studio. Sometimes you require more than one version of an application, or incompatible applications. Software virtualisation is a newer technique for dealing with this: the packaged software runs through a filter that provides its own registry and file system, isolating it from other applications.

You can think in terms of around £60 per machine for these tools. This sounds a lot. When you pay maybe £250 for a first rate computer with an OS, an extra £60 for some utilities is something you would rather avoid. You may also already have inventory and license management tools, so you don’t want to pay for them twice. Plus there are some Open Source tools that will do some of this.

I think it is more useful to look at it as an annual cost. For an enterprise of say 1000 machines, the capital cost is £60,000. Over five years, including annual maintenance, you pay £90,000, or £18,000 per year. For that you get lower staff costs and a better service to users.

Is it safe? Trust me

A law firm is an attractive target for computer-based crime. It has lots of confidential information, lots of people with access to the core systems and lots to lose if the information is disclosed. Information needs to be protected, but it also needs to be accessible over networks. So how can you make sure you trust the user or the computer that is accessing it?

Article published in Legal Week in May 2008.

In the past, computer security was mainly about using good password practices, stopping viruses and blocking access at the firewall. But now computer crime is more purposeful, and the protection needs to be more sophisticated. If an Eastern European gang can fit a card reader into a cash machine without anyone noticing, just think what they might be able to do inside a law firm.

So the questions are: when a user account authenticates to a system, how can I know it really is the person it is supposed to be? And when I trust a computer, by storing information on it or by allowing it to connect to the network, how can I know it is the computer I think it is?

Typically, you start by having good password policies and tightly restricting administrator-level access. Obviously, no-one works as an administrator of their own computer. This is a good deterrent for opportunistic attack, but not for purposeful crime. There are a number of ways in which good policies can be subverted. Bear in mind that a password only has to leak once, and if that is not detected it can be used maliciously for a long time.

A keylogger records keystrokes on the computer and so can be used to capture logon details. A hardware keylogger requires no rights to install. It can be hidden inside the keyboard. If you think how easy it is for equipment to be stolen from offices, you can see how easy it is for equipment to be subverted instead.

Normally you will have different levels of administrative access, but a junior IT support person can escalate their rights in several ways. The simplest would be to install software on a workstation and to wait for a more senior administrator to log on to it. With the captured logon details he can log on to a server and repeat the escalation up to enterprise administrator.

Passwords can also just ‘leak’. The domain administrator password might be fiendishly difficult, but a password for another administrator account might be commonly known. The end result is that when a password is supplied for access to a service, you cannot assume the user is who they claim to be.

It has come to the point where two-factor authentication should be required for any administrator account and for any account with access to highly confidential information. It is routinely used for banking transactions and for remote access, so it is not a strange idea. Two-factor authentication inside the organisation is now much easier to do than it used to be, and is almost as easy as password authentication.

The most familiar form of two-factor authentication is the one-time password (OTP), used for example by RSA SecurID. A hardware device generates a unique number that, when combined with a password, authenticates the user. Many services such as SAP and Citrix support it. However, to use it to protect the logon on computers within the office requires changing the authentication mechanism in Windows, which is a big step. It can also be quite expensive for widespread use.
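
To illustrate the general idea, here is a sketch of counter-based OTP generation using the open HOTP algorithm from RFC 4226. Note that SecurID uses RSA’s own time-based algorithm, so this is an analogue of how OTP tokens work, not a description of SecurID itself:

```python
# Sketch of counter-based OTP generation (HOTP, RFC 4226). RSA SecurID uses
# its own time-based algorithm; this shows the general principle only.

import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and a counter."""
    msg = struct.pack(">Q", counter)            # counter as 8 big-endian bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The token and the server share the secret and stay in step on the counter,
# so a captured code is useless once the counter has moved on.
```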

The other main form of two-factor authentication is the ‘smart card’. This is the same as the bank card chip and PIN. The chip is a secure microprocessor which holds a digital certificate. Entering the PIN unlocks the certificate’s private key, which is used to authenticate the user.
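
In outline, the exchange looks like the sketch below. It is a deliberately simplified model: an HMAC stands in for the real public-key signature, and the class and key names are invented:

```python
# Deliberately simplified model of smart card challenge-response. A real card
# signs the challenge with the certificate's private key, which never leaves
# the chip; an HMAC stands in for that here so the sketch runs on its own.

import hashlib
import hmac
import os

class SmartCard:
    """Toy card: the key never leaves the card; the PIN gates its use."""
    def __init__(self, key: bytes, pin: str):
        self._key, self._pin = key, pin
        self._unlocked = False

    def enter_pin(self, pin: str) -> bool:
        self._unlocked = (pin == self._pin)
        return self._unlocked

    def sign(self, challenge: bytes) -> bytes:
        if not self._unlocked:
            raise PermissionError("PIN required")
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

# Authentication: the server sends a random challenge, the card signs it,
# and the server verifies the response, proving possession of the card
# (something you have) plus knowledge of the PIN (something you know).
card = SmartCard(key=b"issued-at-enrolment", pin="1234")
challenge = os.urandom(16)
card.enter_pin("1234")
response = card.sign(challenge)
expected = hmac.new(b"issued-at-enrolment", challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```

Because the challenge is random each time, a captured response cannot be replayed, unlike a captured password.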

Unlike OTP, it is now comparatively easy and inexpensive to implement smart card authentication for Windows. Windows Vista and Server 2008 support smart card authentication built-in. In Active Directory Group Policy you can specify that a user account requires a smart card for interactive logon, or you can specify that a server requires smart card authentication. The user certificate is stored in Active Directory as part of the account properties. You can use the built-in Windows Certificate Authority to generate certificates, or you can use a third-party certificate authority.

It used to be that you needed to have a smart card reader with accompanying driver, but smart card devices from companies such as Gemalto now come in a USB format and the driver is built in to Vista. So you simply plug in the USB stick, enter a PIN and you are authenticated. Many non-IT professionals will find this easier than remembering complex and expiring passwords. I would certainly offer it as an alternative to those who dislike passwords.

Once you adopt smart card authentication, there are some interesting avenues for it. You can use the smart card to store certificates from a public certificate authority, for example VeriSign, and this enables you to authenticate uniquely outside the organisation as well as inside. You can combine the smart card with storage on the same USB drive and use it to encrypt the data on the drive. You can give a smart card to your clients so they can authenticate securely to your extranet and access client confidential information.

You can even combine the smart card with building access systems. Gemalto smart cards, for example, can be integrated with the Mifare chip, which is commonly used for swipe cards and is used in the London Oyster card.

But what about trusting the computer? Why do you need to do that?

The information on a computer is protected by a logon but it can be accessed directly from the disk simply by booting with a different operating system. There is no need to steal the computer. Just boot a computer from a USB drive and read or write what you want. You could drop a batch file into the startup folder of an administrator account so it runs silently when they log on. This applies to a laptop left in a meeting room, a desktop at night, or a server in a remote office.

To protect the whole disk, you need something outside the disk. The trusted platform module (TPM) is the answer. A TPM chip is effectively a smart card on the motherboard of the computer. TPM began life as a means primarily to protect digital assets against piracy, but has a number of other useful security functions.

You can use it to: prevent tampering because the state of the computer can be stored in the TPM chip and checked when the computer starts; secure encryption keys, so that information is only unlocked if the TPM allows it; and allow or disallow a computer to connect to the network (or connect remotely over VPN, for example).

Windows Vista and Server 2008 use the TPM chip to provide the security for BitLocker whole disk encryption. You do not even need to supply a password. As long as the boot sequence of the computer is unchanged, the disk can be unlocked. Then, once the system has started, you can authenticate the user with a smart card. Active Directory provides the infrastructure for key recovery and both TPM and BitLocker can be managed through Group Policy.

There are a few general points about these technologies. They require an infrastructure to manage them, so you need to plan carefully how you are going to use them and what impact they will have. They are not expensive to implement. In my view they are not inconvenient, and may even be more convenient than current arrangements. But if there is a culture in the firm of being not entirely welcoming to new IT security measures, IT people may be reluctant to recommend them. The question the managing partner or IT director needs to ask is: ‘Is it safe?’.

From The Times July 2, 2008

Body Shop ‘snoop’ John Shevlin fined for insider dealing

A former IT technician at Body Shop, the ethical retailer, has been fined for market abuse in a rare victory for the Financial Services Authority in its battle against insider dealing.

The City regulator said yesterday that it had fined John Shevlin £85,000 after he was found to have gained inside knowledge by snooping on confidential e-mails between executives.

Mr Shevlin, who worked at the beauty company’s head office in London, borrowed more than his annual salary to bet that Body Shop’s share price would fall, having obtained a sneak preview of an unexpectedly bleak Christmas trading update.

As an IT technician, it is likely that Mr Shevlin had privileged access to executives’ passwords, enabling him to access their computers without their knowledge, the FSA said.

Top tips for portable storage

Like a lot of technology, portable storage is remarkably complicated once you get into it. You need some more space so you go to get an external drive, but there are hundreds of different types. What’s the difference? What do I need?

If you are short of time, here’s the summary:

  • To carry a small number of files – SanDisk Micro
  • To carry more data, including large files, but still lightweight – Seagate FreeAgent.

No need to carry a laptop around. Just carry the data and plug it into an available PC. Use SyncToy to keep data up to date in several places.

The SanDisk Micro lets you carry around a surprising amount of data. It is tiny and inexpensive. There is no good reason not to use one. You can encrypt the data in case you lose it. There are a couple of limitations, though:

  • It is limited in size, currently 4-8 GB
  • It is slow to access, even with the faster "extreme" flash versions

If you need more, a top tip is the Seagate FreeAgent range of portable drives.

  • They are formatted as NTFS instead of FAT32. This means they can hold large files, for example a backup image of your system (FAT32 limits a single file to 4 GB).
  • They are much faster drives. They operate at 7200 RPM, the same speed as your internal desktop drive. Other drives, to save money, operate at 4200 or 5400 RPM, but you would need to know what you are looking for to spot this.
  • They appear in your Explorer window as a regular drive, not as a removable drive. Some backup utilities will not work with removable storage.
  • They are small and light, less than 200g.
  • They are inexpensive. You can buy several of them and use them to take backups of your data off site.
  • You can use the free utility from Microsoft, SyncToy, to match up data on the external and internal drives.
  • You can use the Altiris Recovery Solution to make a regular snapshot of your entire computer so you can recover it if anything goes wrong.
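
SyncToy itself is a point-and-click Microsoft tool, but the core "echo" idea (copy anything that is missing or newer from source to destination) is simple enough to sketch:

```python
# SyncToy is a Microsoft GUI tool; this sketch just shows the basic "echo"
# idea: copy files that are missing or newer from source to destination.

import shutil
from pathlib import Path

def echo_sync(src: Path, dst: Path) -> list[str]:
    """One-way sync: copy files from src that are missing or newer in dst."""
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(str(f.relative_to(src)))
    return copied
```

Run it against the external drive before you leave and the copy you carry is current; run it the other way when you get back.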

They are not really suitable for archiving. If you want to make a long term copy of your data and put it somewhere safe, you need a long term storage medium like Iomega Rev drives. The same applies if you have a large collection of photos and video and you want to make a copy just in case.

They are also not the same as extra external storage for your PC. You can get much larger drives, but they are heavier and need a separate power supply.

But they certainly save lugging a laptop around, if you are going from office to office and just need to carry some files with you.

SharePoint and Wikis

Atlassian Confluence is the leading Enterprise Wiki. Confluence now has a Connector for Microsoft Office SharePoint Server. This gives you the best of both worlds.

Atlassian Confluence is the leading Enterprise Wiki. It is a powerful tool for enabling collaboration in an organisation, in ways not possible with previous methods. In a wiki users create content and structure for themselves, allowing people to make connections and develop ideas. For example, plenty of intranets have forums for things like Classified, Announcements etc. But you can’t just create your own forum and discuss your ideas with whoever you want. In a wiki you can.

SharePoint is the new publishing platform from Microsoft. Microsoft recognise that e-mails, documents, web pages and forum postings are all the same really: just information with various characteristics like formatting and replying. So they put everything in one store and called it SharePoint. SharePoint has a "wiki", but it is really just a standard page with an edit button. It ticks the box on the feature list, but does not compare with a proper wiki.

SharePoint comes in two flavours. Windows SharePoint Services (WSS) comes free with Windows Server. Microsoft Office SharePoint Server (MOSS) is a separate product requiring its own client licenses, and is very much more capable. This is a good marketing strategy: because you get SharePoint for free, it is very widely adopted. In my view, SharePoint is likely to gradually overtake the traditional file system as the way of finding and creating documents. This makes it more difficult for a proper wiki to be adopted.

Now Atlassian have introduced a Connector for SharePoint. There is some mutual interest here. Microsoft jointly announced a strategic partnership with Atlassian, and reading between the lines it is something their large customers wanted.

The connector enables you to run a proper wiki in SharePoint, without moving between two different environments. You can:

  • Search across both sets of content in one search
  • Link content between them
  • Store documents in SharePoint and use them in Confluence
  • Embed Confluence pages in the SharePoint portal
  • Use a single sign-on.

Some people are just using Confluence for their entire Intranet. But if you are using SharePoint for your document store and as a portal for web services, you may want to integrate them by using the Confluence SharePoint Connector.

The SharePoint Connector currently works only with the full version, MOSS. If you want to use Confluence as your wiki and WSS as your portal, you can still join them: you can embed wiki pages in the portal, and you can use LDAP to provide a common username and password. But you get no single sign-on and no combined search.

BlackBerry enterprise resilience

Research in Motion, makers of BlackBerry, are a spectacularly successful company. They had a great idea and made it work, when you would have put odds on Nokia or Sony beating them.

However, the BlackBerry Enterprise Server software that makes BlackBerries work for businesses is spectacularly bad. This piece is intended not as a criticism of RIM, but as a comment on the business of creating enterprise software.

There is nothing really special about BlackBerry except the business concept. It’s a neat device, but not obviously better than the alternatives. What makes it unique is the push delivery: a message is sent straight to the device the moment it arrives, instead of the device polling for new mail.

The BlackBerry Enterprise Server (BES) is what connects the corporate e-mail system to the carriers’ mobile network, to push the messages out. Company executives are very fond of their BlackBerry, so BES is a very important service. Never mind the ERP systems, BES is right up there as mission critical. If there were a disaster, the one service the executives would want to work is BlackBerry.

However, there’s a flaw. You can’t make the BES service resilient against failure of the server or the network. If either fails, someone has to go and change things around to get the service working again. If they are on holiday, or can’t find the instructions, or it’s the weekend, then you just have to wait.

BlackBerry provide two methods for disaster recovery, but both are workarounds. Let’s just have a look at the problems:

  • The license key can only be installed on one server at a time, and the service stops if you install it on another. That’s very severe. It means you can’t easily set up a test server, or a replacement, or a standby unless it is switched off. You actually have to buy a complete second license in order to have a standby. That’s just extraordinary.
  • The BlackBerry service should run under an account so that it has network access. Of all daft things, the database connection is set up in the name of the person who is logged on, instead of the service account. If you want to put the database on a different server, as is normal, you need to allow the computer system account to access the remote server. Plain daft.
  • The installation creates a database, but it also modifies the base schema of the database system. This means you can’t really run it on a generic database server, as you can nearly every other database in the world. Even dafter. It also means that if you back up the database and restore it on another server, you won’t be able to use it until you have done a "pretend" installation of BES on that server.
  • You can set up a second server, but if the first one fails you have to re-configure the service so that users are switched over to the other server. So even though you have set up a second server and bought a second license, it can’t be used until you manually switch over. There is no concept of connecting to the first available server, as there is with most services.
  • Normally when you connect something like BES to an external SQL server, you use an alias for the SQL server. So instead of setting up a connection to ServerA, you connect to an alias of the server, say BESSQL. This means that if the SQL server fails, you just change the alias to point to a different SQL server; you don’t have to change the configuration of the connecting server at all. BES can’t do this. It is coded to connect only to a physical server name.
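The last two bullets describe the standard "connect to the first available server" pattern that BES lacks. A minimal sketch of the idea, with purely illustrative host names and ports, might look like this:

```python
# Sketch of the "first available server" pattern BES is missing.
# Each endpoint is tried in order, and the first one that accepts a TCP
# connection wins. Host names and ports here are purely illustrative.
import socket

def connect_first_available(endpoints, timeout=2.0):
    """Return an open socket to the first reachable (host, port), or raise."""
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err  # this server is down; try the next one
    raise ConnectionError(f"no server available: {last_error}")

# A client configured like this keeps working when the primary fails:
#   connect_first_available([("bes-primary", 3101), ("bes-standby", 3101)])
```

The SQL alias trick in the last bullet achieves the same end one level down: the client connects to a name, and the name can be repointed without reconfiguring the client at all.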

Here’s a contrast. The RSA SecurID Authentication Manager is another mission critical product. When you enter a one-time password using one of those key fobs, the Authentication Manager validates it and lets you in. If the Authentication Manager is not working, no-one can use the service. Unlike BES, the Authentication Manager is designed to be resilient:

  • You get a second spare license with the first.
  • All data is automatically replicated to the second server.
  • Authentication agents (running on the servers that are being secured) automatically balance between the available Authentication Manager servers.

Why the difference?

  1. BlackBerry has a very strong consumer pull in the business. IT people are never going to suggest not using BlackBerry because of the quality of the server software.
  2. RIM’s distribution is through the mobile carriers. A bit like Nokia, RIM could sell independently, but they don’t. The mobile carriers are also not really selling to their customers based on the quality of RIM’s software.
  3. Developing enterprise grade software is actually extraordinarily expensive.
  4. In contrast, RSA is a technical sell. There are plenty of other strong authentication methods. RSA are just aiming to be the best, which includes providing true enterprise grade software.

So, as I said, this is not in any way a criticism of RIM. Their business model makes providing this software less important than continuing to make a highly attractive consumer service.

Desktop Automation

Desktop automation has been around for years. Ghost was written in 1996 to enable IT support people to clone a computer. ZENworks was introduced by Novell in 1998 to help manage server and desktop configurations across the network. So it’s very far from a new idea. What’s interesting about it, however, is that it never stops evolving.

Article published in Legal Week in December 2007.

The basic aims of automation are simple: to reduce your IT costs; deliver services more quickly; and be more reliable and secure. In itself automation does not do anything that you could not do manually. It’s just that you would have more staff than you need, and the chances of everything being done consistently, correctly, and quickly would be low. If you are not heavily automated, you are almost certainly inefficient and not delivering a top class IT service. In a rising market this may not be your top priority, but you should be aware of what you are missing, and if you need to reduce costs or improve performance this is the way to do it.

Here are a few examples of things you should be able to do if you are fully automated:

  • Rebuild a training room of PCs with a different configuration in perhaps 15 minutes.
  • Have a PA (not the IT helpdesk) set up a new starter with all the correct settings and equipment (phone, BlackBerry, laptop, accounts for different systems) in about 10 minutes. Just as important, have them disable a temp’s or contractor’s account and retrieve equipment immediately when they leave.
  • Run a fully up to date operating system and software everywhere, all the time; run incompatible software on the same machines; have the minimum number of licenses that are actually being used; and not be constrained by the difficulty of doing it.
  • Provide complete support to a person with a laptop in a hotel or airport overseas, the same as if they were in the office.
  • Have next to no faults; no manual activities that require an IT person to visit the desk; no repeat requests to sort out the same problem; no onsite IT support staff.
  • Receive alerts for any errors and out-of-compliance events.

These may not sound very interesting compared to, say, a client extranet but they are all part of operating efficiently. In fact I’d say they are a good test of whether you are. The tools to do this are readily available.

You can achieve much of what you need with the tools supplied as standard with Windows Server. Group Policy enables you to control the configuration of PCs and servers down to the last degree. You can also use Group Policy to deploy software, as long as it is available in MSI format. Windows Deployment Services will deploy the operating system and enable you, for example, to upgrade more easily to Vista. Distributed File System will replicate a library of software applications and system images across all your sites, so they are available for rapid local installation.

If you are short of cash, there are open source tools like OCS Inventory for inventory and software distribution, and OTRS for the helpdesk. The problem is that by the time you have implemented these, the true cost is likely to be as much as, or more than, the commercial tools.

To go further than the tools supplied with Windows or open source, you need an integrated toolset like Altiris or LANDesk. For example, Altiris Deployment Solution uses multicast to deploy an OS to multiple computers in a training room at the same time without saturating the network. The pre-boot operating system lets you control what happens on a machine remotely even before it has loaded Windows, which means you can rebuild it, or upgrade to Vista, without ever visiting the desk. Integrated tools give the helpdesk a complete inventory and history of the PC when they are trying to solve a problem. A packager like Wise Package Studio will enable you to build an MSI for your custom code so it can be installed automatically with the software distribution tool. Other tools, from people like Quest and MTech, enable a department PA to provision multiple services to users automatically without contacting the helpdesk, and with less risk of mistakes or delays.

One of the largest law firms is introducing a world wide desktop automation system. They are going to be able to roll out a large number of in-house customisations of Microsoft Office to integrate with their Document Management system. This would just not be realistic without heavy automation. On the other hand, I visited a smaller firm where the IT director did not think he would be able to persuade the partners to buy the tools, even though the department was overstaffed and costing the practice more than if they used the tools. Some people seem more comfortable with staff costs than paying for automation tools.

Several firms I have spoken to recently are introducing thin client systems with no desktop software at all. This simplifies desktop administration, but you then have a significant potential problem with application compatibility on the servers. Software virtualisation is a new technique to deal with this. The software runs in an isolated layer separated by a filter from the operating system, so you can run incompatible software in separate layers. This allows you to run, for example, different versions of the Oracle client and to determine which version runs when you open an application, without doing extensive compatibility testing. Even the user settings and registry keys are isolated. A problem package can be rolled back leaving the system exactly as it was – not the case with a standard software installation. As a result you need fewer servers and can load balance more flexibly across your Citrix infrastructure.

Automation has a significant impact on staffing in IT. You need far fewer people, but with a higher skill level. Basically you no longer need the jack of all trades, but you do need specialists. The work on the ground can be done by facilities staff instead of IT people. The technical work however needs highly skilled people.

There are a number of gotchas, where things can go badly wrong. The wrong instruction sent out by mistake to the wrong computers can obviously cause far more destruction in one go than an incompetent desktop support person working on one machine. For example a custom software package will usually change the machine registry. If it makes the wrong change you can’t just reverse it. There is no record of what was there before. You should never ever let an inexperienced person distribute a software package, or a Group Policy.

This presents a staffing dilemma. A software packaging or group policy expert is wasted working in one place for a long time. A package is a package, deployed to one machine in one company, or ten thousand machines in ten companies. On the other hand, outsourcing without automating just swaps your management for theirs. It is unlikely to change the skills levels or the outcomes much. We feel that the way of the future is to buy in automated desktop and server management as a service. For example, we have a software library appliance that contains OS images and standard applications with automated installations. Add your licenses and you are done.