Migrating applications to Windows 7

One of the biggest challenges when upgrading to Windows 7 is testing and preparing applications. This post puts together a few conclusions that might assist you in planning the work.

The extended lifespan of Windows XP and Server 2003 has been a sort of "peace dividend" or "pension holiday". When you do come to upgrade, it is important not to underestimate the cost and uncertainty involved in application compatibility. But at the same time you don’t need to accept that the migration will take forever.

The problem is that applications can be incompatible with Windows 7 in many different ways. Some of these are trivial and easily solved. Some are harder to solve. Some are hard to find and impossible to solve. You don’t know until you test. The same applies to running the applications in Citrix on Server 2008 R2, with the added complication of 64-bit. Here are a few examples to illustrate:

Standard third party application: Lotus Notes

  • The current version, 8.5.1, is not certified on Windows 7. Does it work or not? Do you wait until later this year for 8.5.2, or go ahead with 8.5.1? There is a patch, Fix Pack 1, that is certified, but it adds complexity to the installation.
  • You would think it would be quite simple to find out: ask the vendor. But most vendors do not certify previous versions, which does not mean those versions don’t run perfectly well. In this case, although 8.5.1 is not certified, the release notes for Fix Pack 1 contain only trivial changes and 8.5.1 appears to work fine, so there is no reason to delay.

Specialised third party application: legal software

  • The installation fails on Vista/Windows 7. Examination of the logs and the Windows Installer file shows a custom action that copies templates into the user profile path. The path is hard-coded to the pre-Vista profile layout (the profile root moved from "Documents and Settings" to "Users"), so the copy fails.
  • The solution is to customise the installer to remove the custom action and replicate it in a custom script. Inform the vendor so they can modify the installer.
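As an illustration, the replacement script might look something like the following Python sketch. This is only a sketch: in practice the script would more likely be VBScript or PowerShell wrapped around the install, and the `Templates` subfolder name is an assumption. The essential point is that the profile path is resolved at run time instead of being hard-coded:

```python
import os
import shutil

def copy_templates(source_dir, subfolder="Templates"):
    """Replacement for the installer's custom action: copy the
    application's template files into the current user's profile.

    The subfolder name "Templates" is illustrative, not taken from
    the real application.
    """
    # Resolve the profile path at run time -- never hard-code it, since
    # the layout changed between XP ("Documents and Settings") and
    # Vista/Windows 7 ("Users").
    profile = os.environ.get("USERPROFILE") or os.path.expanduser("~")
    dest = os.path.join(profile, subfolder)
    os.makedirs(dest, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(source_dir)):
        src = os.path.join(source_dir, name)
        if os.path.isfile(src):
            shutil.copy2(src, dest)
            copied.append(name)
    return dest, copied
```

Run per user (for example as an active-setup or logon action) rather than once per machine, since the destination differs for every profile.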

Custom system: membership database

  • This is an old system with a SystemBuilder 4GL graphical interface to a Unidata database. The version of SystemBuilder being used is not certified or even tested on Vista/Windows 7. The SBClient application contains an OEM Pervasive SQL client that is also obsolete. The client does mail merge to Word 2003, so it would need to be tested if used with Word 2007 or 2010.
  • There is a new version of SystemBuilder that, amazingly for such an old product, is certified both on Windows 7 and on Server 2008 R2 Terminal Services. The new version seems to work perfectly with the old system. However, you need to change the client side and the server side of the graphical interface at the same time, so it would be a big-bang change to a critical system.
  • But after packaging the old version using Wise Package Studio, it seems to work fine on both Windows 7 and Server 2008 Terminal Services, so there is no need to upgrade.

Other gotchas

  • Applications with OEM versions of Crystal Reports 11 or earlier do not install on Windows 7. Crystal Reports 11 is not supported on Windows 7, and you can’t upgrade an OEM edition, but the installer can be modified to install successfully.
  • Applications using the common VB6 SendKeys function raise an error on Windows 7, because SendKeys does not work with UAC. UAC can only be turned off per computer, not per application, so there is no workaround except to turn UAC off entirely.
  • In XP you can use the Printer button on the PageSetupDialog to set different printer properties for an application. In the Vista/Windows 7 API it’s gone: there’s no error, it’s just not there. But in the .NET Framework it’s still there! This might seem rather obscure, but the point is that you would have to do a lot of testing to discover this, and then find out whether it matters to the users of that application.

Obviously you could wait until your applications are fully tested or upgraded to the latest certified versions, but this could take impossibly long. If just one core application is not ready, you can’t upgrade the desktop.

A lot of people seem to be combining application virtualization with a Windows 7 rollout. Perhaps surprisingly, application virtualization is largely irrelevant to compatibility across operating systems. With a virtualized app, the same DLLs run within the OS with exactly the same results: if the application faults natively, it will fault when virtualized. Virtualization can be used to deliver a compatibility fix, but you still need the fix.

The best way to approach this is with a structured testing environment and a full set of delivery options. Then, for the difficult applications, you can set a time limit.

Structured Testing Environment

  • Wise Package Studio or similar, to examine the internal structure of the application and check for conflicts between applications.
  • A full VMware testing environment with VMware Workstation and ESXi, so you can create all the packaging and testing environments you need and, most importantly, leave them running so that users can log on remotely to test.
  • Scripted or automated tests and test accounts for each application.
  • Microsoft Application Compatibility Toolkit for testing and distributing fixes.
  • Thorough documentation and audit trail of the testing.

Delivery options

  • Native installation for compatible and well behaved applications
  • Citrix XenApp published applications, or perhaps virtual desktop, for incompatible applications
  • Virtualization for conflicting applications (e.g. applications that require different versions of common components) or badly behaved applications (e.g. applications that change the default behaviour of the OS)

Most larger organisations already use several delivery options. What is new is working out the interdependencies of different applications and which platforms they need to sit on. For example, if an incompatible app does a mail merge to Word or a report export to Excel, then the back-end platform needs to have Office: the app cannot reach across to Office on the front end. This means you also have to consider user profile settings across the different delivery platforms. If the user changes a default printer on the Windows 7 front end, should the same change be made to the back end or not?
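One way to make the interdependency check concrete is a simple placement model: list what each application depends on, where it has been placed, and what each platform carries, then look for mismatches. This is only a sketch, and every application, dependency, and platform name below is invented for illustration:

```python
def misplaced_dependencies(app_needs, placement, platform_apps):
    """Report (app, dependency, platform) triples where an application's
    dependency is not installed on the platform it has been placed on."""
    problems = []
    for app, needs in app_needs.items():
        platform = placement[app]
        available = platform_apps.get(platform, set())
        for dep in sorted(needs):
            if dep not in available:
                problems.append((app, dep, platform))
    return problems

# Hypothetical example: the legal app mail-merges to Word and exports to
# Excel, so its XenApp silo must carry Office; here Word is missing.
app_needs = {"LegalApp": {"Word", "Excel"}}
placement = {"LegalApp": "XenApp"}
platform_apps = {"XenApp": {"Excel"}}
problems = misplaced_dependencies(app_needs, placement, platform_apps)
```

Even a crude model like this forces the question of where each dependency lives before the rollout, rather than after a user's mail merge fails.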

With this approach, structured testing and multiple delivery options, you can set a time limit for preparing applications for Windows 7 migration. You can migrate the core desktop to Windows 7, while migrating older applications when they are ready.

Intel and McAfee

Intel announced on 19 Aug 2010 that it will buy McAfee for around $8bn. This has caused some surprise. Intel does not sell directly to the end-user, and it does not develop application software. It is not obvious what it achieves by acquiring a software vendor. Here’s my guess as to why Intel is doing it.

There is a complex pattern of change going on in the architecture of the server computer. As parts get cheaper and more powerful, they can be reconfigured in many ways. The basic model of one box and one chip per business function (e.g. the mail server, the domain controller) no longer exists.

Virtualisation and cloud computing are just marketing words, but underneath is a continuous evolution and adaptation of components. The BIOS (a very small piece of control code) becomes the EFI (much larger) and then the hypervisor (larger still). Virtualisation is not a new concept; it just signifies that the hardware has temporarily outstripped the operating system in its ability to run diverse tasks. The hardware is sitting there saying "give me more", but the OS can’t isolate the tasks well enough, so we put a thin layer in between to share the hardware. The next step is for the "OS" to shrink to something task-based, like Windows Server Core. Likewise, cloud computing is not a new technology; it signifies that fibre optic networks are cheap enough to move servers off site, where they can share resources like cooling and power supply more easily.

One aspect of this continuous evolution and adaptation is that the security risks are changing. It used to be accepted that "inside" was inherently safer than "outside": outside you need two-factor authentication and strong encryption, while inside you can get away with the odd admin password passed over the network in the clear. Now you can’t assume this. For example, on shared hardware you need to process security keys (used for disk encryption) outside of shared memory, where they could otherwise be discovered by other virtual machines on the same physical host.

As a result there is a lot of work going on to improve the manageability and security of computers below the operating system layer.

  • faster and stronger encryption
  • better protection of encryption keys and passwords
  • more isolation of different virtual machines
  • detection of unexpected state changes.

For Intel this includes initiatives such as Active Management Technology (AMT), Virtualization Technology (VT), and Trusted Execution Technology (TXT), all of which have been evolving over the past five years and more.

So I think Intel must have acquired McAfee in order to adapt its antivirus technology for implementation in hardware. This would enable the physical host to scan virtual guests and preserve the integrity of the system: the host would be able to detect if a guest had been altered, or if shared drivers for graphics and audio had been tampered with. It might even be easier to stop the AV process running away with the CPU, which happens frequently in software.

Why McAfee? I don’t know; I am not aware of any decisive technical superiority among AV vendors. Perhaps because they have a reasonably good name, client base and income stream. Why not invent from scratch? Only because it would take too long. These are just guesses, mind you.

Outsourcing IT is not the answer

Most large businesses I have come across wonder, at some point, how better to manage their IT operations. IT consumes a lot of money, but often does not seem to be doing what you want, almost wilfully. You ask for something to be done, and three weeks later nothing at all seems to have happened. Surely they are all just incompetent. Outsourcing has been around a long time as a solution to this feeling of lost control.

Outsourcing sounds like it should make sense. ICHA (or whoever) do lots of IT and must know how to do it better than we do. They are specialists where we are amateurs. They must have lots of highly skilled experts who can be called on to deal with the tricky technical stuff only when required. It all sounds so efficient. And now they even have technical experts and support centres in India and China, where costs are so much lower. How could it fail to be both more effective and less costly than our current operations?

And yet. When you start talking costs, they always seem remarkably close to your current costs. Service levels always sound more as though they are trying to avoid things than commit to them. TUPE means, of course, that you simply can’t release your staff (who were so incompetent, remember?) and use ICHA’s. And the shared data centres you were going to use instead of paying for your own? It would cost millions to make the move, and actually the services are going to be run from your own data centre after all. In the end it seems as though your own people and facilities are going to be sold back to you at a premium, but managed by someone else. So the pitch comes down to this: "Don’t worry your pretty little head about this IT stuff. Just tell us what you want and we will manage it for you." "Core business" is the key phrase. By the time you have got this far down the track, it would be really embarrassing to go back to the Board and say, "It doesn’t add up, I must have misunderstood what IT is about", so it goes ahead anyway.

Here’s why outsourcing in this way doesn’t work.

Most of IT Operations is simply deploying vendors’ kit. It may be in large quantities, it may be very expensive, but it is still just kit. Most kit from most vendors is at the upper bounds of complexity and capability. As a random example, RSA SecurID can provide strong authentication for five plumbers, or for 100,000 staff spread around international offices. It works the same way. To implement this stuff effectively you need to be fairly expert. But then day to day it requires little more than following the book for how you add users, change settings or whatever. Mostly it just works. And when it doesn’t you really need the expert to fix it.

Now the problem is that it does not make sense for IT Operations to hire experts. You only set a system up once, and change it rarely, but you administer it every day. So you tend to hire the administrators, and then try to get by on that. Systems are put together by people who are not experts, and so they don’t get done, or they fail. I don’t mean to say that the people in IT Operations are not very capable: you may have a small group of people who are indeed expert in some things. It’s just that they can’t possibly have the variety and depth of experience of people who do this all the time. Is it enough? Well, perhaps, but probably not.

And then when you go to the market for outside help, the transaction costs are high. It takes time to brief people and for them to understand what you are trying to do, and that time, one way or another, must be covered in their costs. It also looks like real money: £50,000 for a project is a lot of cash to justify, with business cases and cost-benefit analyses. Fred not achieving very much in a year is much harder to see.

Outsourcing does not solve this.

The outsourcer is going to sell you back your own staff and kit. Yes, there may be some changes in the way some things are done, and you may have a few redundancies. But fundamentally you have the same faulty systems being run by the same people. Just when you would expect to have access to experts to solve problems or make things work better, they don’t seem to be available. Why is that? Well, an expert in something like Active Directory can be charged out at high rates to client implementation projects. He is not going to be assigned to your problem just because you’d like it. And if he is assigned to a chargeable project to help you, he won’t know any more about you than any other new supplier would.