Windows Hello for Business and MFA

As an end-user computing specialist, I spend most of my time on security-related matters. Good cyber security is the most difficult part of the design to get right, with a balance between security and ease of use. It is quite easy to implement the standard security controls. What is more difficult is to deal with all the exceptions and operational difficulties in a secure way.

One small example of this is the configuration of Windows Hello for Business (WHB). WHB is an excellent authentication method but, like anything, it has potential flaws too.

Before WHB

Before WHB, a member of staff could typically log on to any corporate device. It had to be a corporate device, because only that would recognise the domain account. But it could be any corporate device. In fact, roaming profiles were designed to enable anyone to log on to any device.

There are two problems with this. First, because it relies only on a simple password, the password needs to be reasonably long and complex. This increases the risk that the user will write the password down. Where do they do this? They know they should not put it on a post-it note stuck to the computer. So they write it down in a notebook kept with the computer. If the computer is stolen with the notebook, the thief has access to the computer as that person.

The second problem is that, if someone gets hold of a password (for example by phishing), they only need to get hold of a device, any device, to gain access. There is no protection other than knowledge of the password combined with access to any device. An insider might easily obtain a password, and have access to another device to use it. Indeed, people might even voluntarily disclose their password, or arrange to have a password changed, so that another person can use it on another device (e.g. maternity leave).

With WHB

WHB counters these problems. It uses a one-time event to create an association between a specific user and a specific device. The one-time event uses a second authentication method to verify the identity of the user. When the identity is confirmed, a unique PIN is created, valid only for that device. The association is bound up in the Trusted Platform Module (TPM), a hardware component on the motherboard of the computer. When the PIN is supplied, it validates the association between user and device and unlocks the credentials to be used for access to network resources, for example the email account. The email service (e.g. Exchange) knows absolutely nothing about the PIN. It doesn’t even know there is a PIN. What it knows (through Conditional Access) is that the user supplied valid credentials from a managed device protected by a TPM.

We all have experience of something similar, when we create a PIN for a mobile phone. And, just like a phone, facial recognition or fingerprint can be used with WHB as a proxy for the PIN. The difference is that, with the personal phone, there was no separate verification of the identity at the outset. The person with the PIN is just the person who set up the phone.

Two flaws

There are two flaws with this authentication method. The first is in the one-time event; the second is in the way WHB is configured.

For the first, you need to know that the person setting up WHB is who they say they are. That might be quite obvious if they come into an office to set it up. But if you send out devices to be set up at home, you don’t have an assurance that the device gets to the right person. There has to have been a secure association created in the first place, between the user and the method they use to verify their identity.

The way I think of the verification of identity, or multi-factor authentication (MFA), is that it is like showing your photo ID to pick up a building pass. You need to come into the building, where people can see you, and you need to supply a proof of identity. Then you pick up the pass, and the pass in future lets you into the building. But that depends on having a valid proof of identity in the first place. The second method (building pass) is piggy-backing on the first method (photo ID).

When setting up WHB for the first time, staff typically use the Microsoft Authenticator app on their mobile phone. But setting up the Authenticator app does not prove your identity. It only proves that you know the password. So there is a circular logic if you set up the Authenticator app at the same time as setting up WHB. The steps in this circular logic are:

  1. User starts to set up WHB on a device, by supplying a password
  2. If the account does not already have a second factor method associated with it, then the user is prompted to set it up
  3. User downloads Microsoft Authenticator app on phone
  4. User receives prompt on phone to validate their identity
  5. User sets up PIN associated with that identity.

At no time did the user prove their identity other than by supplying the password of the account. WHB does not know who owns the phone. In the future, any prompt for MFA will prove that it is the same person who set up the MFA; but not who that person really is. So the second factor (Microsoft Authenticator app on a mobile phone) must be set up in a secure way that validates the identity of the person setting it up.

This is actually quite difficult to do. When an account is first created, it has no second authentication factor associated with it, only a password. A vulnerability exists until the second factor is set up securely and verifiably by the owner of the account.

The physical way to do this is to set up the second factor for the account as a one-time event similar to obtaining a building pass. The member of staff comes into the office. Someone validates their identity and enables the registration of the phone as a second factor. Any pre-existing registration is deleted. Then the member of staff receives the device and sets up WHB. The logical way to do this is with a Conditional Access policy. The policy can require specific conditions to allow the user to register security information. For example, it can require this to be done from the corporate LAN. Now the steps in this logic are:

  1. User enters the building, where their identity is verified
  2. User proceeds, as before, to set up device with WHB, but this time the second factor is a phone locked to a verified identity.
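As a sketch of the Conditional Access approach, such a policy can be created with the Microsoft Graph PowerShell SDK. It targets the “Register security information” user action and blocks it from everywhere except a trusted named location. The display name and the location ID are placeholders, and the policy starts in report-only mode:

```powershell
# Sketch: block registration of security info except from a trusted named location.
# Assumes the Microsoft Graph PowerShell SDK is installed; "<trusted-location-id>"
# is a placeholder for the GUID of an existing named location (e.g. the corporate LAN).
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$params = @{
    displayName = "Restrict security info registration to the corporate LAN"
    state       = "enabledForReportingButNotEnforced"   # test in report-only mode first
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeUserActions = @("urn:user:registersecurityinfo") }
        locations    = @{
            includeLocations = @("All")
            excludeLocations = @("<trusted-location-id>")
        }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```

With this in place, a user can only register (or re-register) the Authenticator app from the trusted location, which is what binds the second factor to a verified identity.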

The second flaw is that the configuration of WHB enables it; it does not enforce it. The user still has the option to sign in with a password. This means that anyone can sign in with only a password and gain full access to the device and the data of the user of that account. This is exactly the problem WHB was designed to solve. How did that happen? The user will be nagged to set up WHB, but they don’t have to.

The way to prevent this is to configure Conditional Access policies to require multi-factor authentication for every access, even on managed devices. You might say that is absurd. Surely the possession of a managed device is the second factor. You have the password, and you have the device. But the critical point is that the WHB PIN (not password) is what proves ownership of the device. When using the PIN, the user does not need to respond to an MFA prompt when they log on. Supplying the PIN counts as performing MFA, because it was created with MFA. The MFA is valid (by default) for 90 days and, every time you supply the PIN, you revalidate and extend the MFA.
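A sketch of that policy, again with the Microsoft Graph PowerShell SDK. The excluded account ID is a placeholder: it is common practice to exclude an emergency “break-glass” account from a policy that requires MFA everywhere.

```powershell
# Sketch: require MFA for all users on all cloud apps, even from managed devices.
# A WHB sign-in satisfies this control, because the PIN or biometric counts as MFA.
# Assumes an existing Connect-MgGraph session with Policy.ReadWrite.ConditionalAccess.
$params = @{
    displayName = "Require MFA for all access"
    state       = "enabledForReportingButNotEnforced"   # test in report-only mode first
    conditions  = @{
        users        = @{
            includeUsers = @("All")
            excludeUsers = @("<break-glass-account-id>")   # placeholder
        }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```

Because the WHB PIN counts as MFA, users on set-up devices see no extra prompt; only a password-only sign-in is forced to prove a second factor.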

This is just one example of what I mean about striking the right balance between security and ease of use. It is easy to enable WHB, but it takes a few extra steps to make sure it is associated with a verified identity.

AppLocker or WDAC?

This is a short piece on the question of whether to use AppLocker or Windows Defender Application Control (WDAC) for application control on a Windows desktop. As technicians, we can sometimes get too interested in what technology is best, or what is newest. But the more important matter is what best meets the requirement.

WDAC is the newer technology, and a significant advance on AppLocker. You can read about the differences in Microsoft’s WDAC overview documentation. So, in a Microsoft environment (Windows 10/11 desktop, 365 Apps, Intune, SharePoint etc.) we should assume we would use WDAC unless there are reasons not to. What could those reasons be?

Cyber security is important, of course. But it needs to be a part of a productive work environment. The most secure desktop is one that cannot be used. And it needs to be part of a holistic approach. For example, if we do not allow a user to have local administrator privileges on a device, the exposure to malware is much lower than if we do. If we require MFA to log on to a device, the risk of a malicious user is much lower than if we do not.

In my view, application control should be transparent to the user. Software that is legitimate should just run. Software that is illegitimate should not run, with a message about the reason. If a new piece of software is introduced, it should either just run, or not run. There should not be a long delay while IT staff rejig the rules to allow it to run. An example would be a piece of finance software. Let’s say we are coming up for year-end, and the finance team have an update to one of the applications they use. They should be able to install it, and it should run. It should not take a month to develop and test application control rules.

AppLocker is much easier and less risky to update than WDAC. AppLocker XML files are simple text files that you can edit manually. WDAC XML files are also text files, but it is not practical to edit them manually. AppLocker uses the Subject Name of a certificate to identify a signed file; it is the same subject name regardless of the certificate used to sign. WDAC uses the thumbprint, and the same name might be used in multiple different certificates with different thumbprints. A mistake in an AppLocker policy might cause some processes not to run. A mistake in a WDAC policy might cause Windows not to boot. If it cannot boot, the only solution is to re-image the device. Imagine doing that for 30,000 or 50,000 devices!

I think the right approach is to use WDAC, but with a process in place to make it relatively quick and safe to update. What is this approach?

  1. Use file path rules so that most administratively installed applications are allowed anyway
  2. Use “snippets” to extend the existing policies (snippets are policies created from a single application, and merged with the main policy)
  3. Use Supplemental policies for discrete areas of the business e.g. finance, or Assistive Technology, applications
  4. Use the WDAC Wizard for creating the base policy and applying updates
  5. Maintain a strict workflow for testing and deploying a policy update.

Let’s say you have a new application and it is blocked by current WDAC policy. There are several ways you could update the policy:

  • Scan the whole device and create a new policy. But this creates a significant risk of introducing new faults.
  • Read the event log or the Microsoft Defender audit of AppControl events to create rules for what was blocked. But this will only catch the first file that was blocked, not subsequent files that would have been blocked if that file had been allowed.
  • Scan the application itself, to create a policy that allows just that one application, then add this to the existing policy.

My preferred workflow is this:

  • Understand where the application saves all files including temp files and installation files
  • Copy all of them to a temp folder
  • Look to see whether the exe and dll files are signed or not. If they are, you will be able to use a Publisher rule. If they are not, see if you can install to a different location. For example, quite a few applications will allow a per-user or a per-machine install. Always use a per-machine install if you can, into a folder requiring admin rights. If you cannot, then you are going to have to use a hash, although this means any update to the file will invalidate the rule.
  • Scan that temp folder to create a snippet
  • Merge the snippet into the base, or create a supplemental policy
  • Apply to a selection of test devices and make sure they still boot!

You need to keep strict version control of policies and snippets. To achieve this, you should update the policy ID. Policies have several identifiers. The file name itself is irrelevant. When you deploy it to Windows, the binary file is named with the policy GUID. The “Name” and “Id” (visible in the policy) are also just labels. The “BasePolicyID” and “PolicyID” are the two GUIDs that Windows uses to identify the policy. When you merge two policies, or merge a policy and a snippet, these GUIDs are not changed: you will see in the Event Log that Windows considers it to be the same policy. So, to keep track of which policy version is actually applied, you really want to update the GUID. You can do this in PowerShell with Set-CIPolicyIdInfo.
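As a sketch (the file name, policy name and version string are illustrative):

```powershell
# Give the policy a fresh PolicyID (and BasePolicyID, for a base policy) so the
# Event Log shows which version is actually active. -ResetPolicyID generates and
# returns the new GUID.
Set-CIPolicyIdInfo -FilePath .\BasePolicy-v2.xml -ResetPolicyID -PolicyName "Corp Base Policy"

# Optionally stamp a version number as well, for your own record-keeping.
Set-CIPolicyVersion -FilePath .\BasePolicy-v2.xml -Version "2.0.0.0"
```

Recording the returned GUID against the policy file in your version control is what lets you match an Event Log entry back to a specific release.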

If you follow this approach, WDAC will work like a charm!

Government Commercial Problems with IT Procurement

Working in IT, I come across procurement problems frequently. The root cause, it seems to me, is that government procurement rules are implicitly designed for a steady state, whereas IT projects implement change, which is inherently imprecise. These rules need a radical overhaul. The new Procurement Bill, currently (Feb 2023) going through the House of Commons, aims to do this.

Problems

What sort of problems? 1) Long delays. A procurement that might be a simple executive decision in the private sector can be a three or six month exercise in the public sector. On a project, delay has a cost. This cost often outweighs the potential benefit of the procurement process. 2) Inflexibility as requirements evolve. Sometimes you don’t know exactly what you need until you talk to suppliers. But you can’t talk to suppliers without a formal procurement process.

I cannot give specific cases, for reasons of client confidentiality. But I can highlight the areas of the procurement rules that create these problems. The intention of the public procurement policy is clear and legitimate: to achieve “the best mix of quality and effectiveness for the least outlay over the period of use of the goods or services bought”. The question is whether the rules do this in practice.

I must say at the outset, these thoughts are from a “user” perspective. I have no great knowledge of the procurement rules, only my experience in performing procurements as part of an IT project. The amount of regulation and guidance applying to procurement is vast, and I don’t know how anyone could master it. The scope is vast too: hundreds of billions of pounds of contracts, of every conceivable type, ranging in value from billions down to £10,000. I don’t believe it is realistic to try to codify the rules for this vast enterprise, but that is what the policy does.

Long delays

I led a piece of work to implement a small piece of software that integrated two different systems. There are four products that do this. It is quite a niche area, with not much published information. The value of the purchase would be small, in relation to the two systems being integrated. The products are priced by volume of usage, with annual subscriptions. There were various technical complications about integrating with the two specific systems in our case.

The obvious thing to do was to buy a few licences and learn on the job. We were not allowed to do this. The rules said that no purchase of any kind could be made without a selection process, in this case to decide which ones to trial. The public information was not sufficient to justify the selection of a single product to trial. The next obvious thing was to talk to vendors. We were strictly not allowed to do this. Talking informally to any vendor would prejudice a fair selection.

So we developed our selection criteria as best we could (based on what we could glean from the published information), and then carried out a systematic trial of all four products sequentially. The trial involved actually implementing all four products, and asking staff to evaluate their experience when using them. The experience was almost identical, as we expected.

Some of our important selection criteria were technical, for example compliance with security requirements, and licensing terms. For these, we had to ask the vendors to respond to an RFP. As you can imagine, the responses were inadequate to provide any assurance, without speaking further to the vendors.

After going through the selection process, amazingly, we had not actually completed the procurement. All the vendors sold licences through resellers, as you would expect. So, after the selection, we needed to pick a reseller. You’ve guessed it! We needed a procurement to pick a reseller to sell us the licences for the product we had selected. Fortunately, we were able to use the Crown Commercial Services framework to ask for quotes.

The end result was that we purchased a few licences for the product we expected to pick at the beginning, but many months later and at considerably greater cost than the cost of the licences.

The basic problem here is that we do not live in a world of perfect information. At the outset, we cannot know all the ins and outs of different products. Vendors design their product information to highlight advantages and hide weaknesses. Vendors do not publish real prices. Vendors do not respond to RFPs with full and honest answers to questions.

Think of it from the vendor’s point of view. Some government department wants to make a small purchase. The department invents a long and complicated process and invites them to participate. What should they do? Obviously, just send them the data sheet and the price list. Why would they go to the effort and expense of responding when the total profit if they won would be less than the cost of responding?

Inflexibility

I led a project to upgrade the technology of an existing system, the purpose of which was to enable integration with another system. Sorry if that is a bit obscure: the reason is confidentiality.

The original system was contracted for before the integration even existed. We were not allowed to select our new network supplier on the basis that the integration was built into their product. This service was not in the scope of their new contract, because no-one at the time knew we would need to do this. It would have required a completely fresh procurement of the primary product, which would have taken at least a year.

In this case we were allowed to vary the existing contract. The rules on variation are highly complex. They require a good understanding of Annex A – Regulations 72 and 73 of the Guidance on Amendments to Contracts 2016. We were allowed to vary the contract but only provided the contract used different technology to do the same thing.

This gave us a few big challenges to negotiate. One, we needed a new type of support for the new technology not provided in the original contract. Two, we needed a third party (at additional cost) to provide a service to assist in the integration.

After something like a year we had completed the integration. At this point there was less than a year to run on the existing contract. But we could not extend the contract. The rules on extension are especially severe: they are one of the “red lines” for IT procurement. So the next stage had to be a full procurement of the whole service, having just completed the transformation of the previous service.

The basic problem here is that we don’t live in a world of isolated products and services. They are all inter-related in some way. It is not possible to have perfect foreknowledge of all the ways the services might need to change in the future.

Observations

I have a few observations.

  1. Procurement rules do not take account of the cost of complying, in relation to the value obtained.
  2. They assume the availability of adequate market information to make perfect choices without speaking to vendors.
  3. They also assume vendors can and will respond with accurate detailed information about what they offer.
  4. They do not take sufficient account of the relationships with other products and services, and the way these all evolve over time.
  5. It is simply not possible to comply with the rules intelligently, without having a large and skilled Commercial department.
  6. A Commercial department cannot have full knowledge of the product or service being procured and, therefore, there will be extensive delay or bad choices made.
  7. Delay is built in to the system, and the cost of delay is not accounted for.
  8. The cost and delay of procurement means that people are incentivised to wrap up products and services into large contracts that preclude innovation and competition – the exact opposite of what is intended.

Procurement Bill

The original Public Contracts Regulation 2015 stemmed directly from the EU Public Contracts Directive. The intention was to make contracts open across Europe.

But the idea that you can regulate all procurement across all of Europe with a value of more than £138,760 (Jan 2022 threshold) seems unrealistic. Let’s say you have an organisation of 10,000 staff. Let’s say a contract might run for 5 years (printing, laptops, software etc.). The threshold means that any contract worth about £3 per member of staff per year must be subject to a full, open, procurement. Let’s say the vendor profit on the procurement is 20%, or £27,752. The procurement process will cost more than that!

The explicit aim of the current Public Procurement Policy is to obtain value for money. But people don’t need rules to enable them to obtain value for money when buying a holiday, or a car, or the weekly shopping. People will do this for themselves. What the public needs is rules to prevent corruption. Anything that knowingly does not obtain value for money is corrupt. The new Procurement Bill says it aims to do this: “Integrity must sit at the heart of the process. It means there must be good management, prevention of misconduct, and control in order to prevent fraud and corruption.”

I will leave it to others to describe the changes in the new bill. But it is interesting to consider how it might affect the two cases I mentioned.

  • A below-threshold contract is one worth more than £12,000 and less than (I think) £138,760
  • For a below-threshold contract, the contracting authority “may not restrict the submission of tenders by reference to an assessment of a supplier’s suitability to perform the contract [including technical ability]”. I take that to mean that all procurements must be open to all potential suppliers and not shortlisted. That is admirable, and I see no difficulty in making all these tenders public. But for obscure and specialised requirements the result is likely to be a deluge of irrelevant tenders and/or no valid submissions at all.
  • This does not apply to frameworks, so the best way to procure anything below-threshold will always be through a framework. But frameworks can only sell commodities. They can’t sell niche specialised products.
  • Modifying an existing contract is covered in Section 74 and Schedule 8. I think a contract extension is limited to 10% of the term, i.e. 6 months of a five year contract. This is still not enough where a change of circumstances occurs during the contract.
  • The provision for additional goods, services or works during a contract seems less restrictive than before. “A modification is a permitted modification if (a) the modification provides for the supply of goods, services or works in addition to the goods, services or works already provided for in the contract, (b) using a different supplier would result in the supply of goods, services or works that are different from, or incompatible with, those already provided for in the contract, (c) the contracting authority considers that the difference or incompatibility would result in (i) disproportionate technical difficulties in operation or maintenance or other significant inconvenience, and (ii) the substantial duplication of costs for the authority, and (d) the modification would not increase the estimated value of the contract by more than 50 per cent.” That seems to be a lot more flexible than before.

The scope of government contracts, even just IT contracts, is vast and I don’t know how it is possible to codify the rules governing them except by introducing a great deal of bureaucracy and expense.

Curiously, the word “integrity”, despite being one of the bill’s objectives, only occurs once in the bill, other than in the statement of the objective. It occurs in the context of the supplier’s integrity. But, when a private sector organisation contracts with a vendor, the organisation is relying on the integrity of the staff, not the vendor. If the staff act with integrity, the organisation is confident the best choice will be made.

Speaking for an SME, I’m glad the bill has provisions to make it easier for small businesses to obtain contracts from government. But I have difficulty seeing how that will work in practice. Bidding is an expensive process. The way a small business manages the cost of bidding is to screen the opportunities for a competitive advantage. This might be having a good reputation with previous clients, or offering a high quality of service, or having strong skills in a particular area. These are intangibles that are screened out in a bureaucratic tendering process.