This post is a few thoughts about Windows Containers and the impact they are likely to have on enterprise IT infrastructure.
Containers are very new to Windows. They have been around for a while in Linux, but even there not for long by the standards of infrastructure technologies. On Windows they were introduced with Windows Server 2016, and they are still at a very early stage of maturity. Even so, I expect them to make a big impact.
Here are the best two resources that I know of for a general discussion around containers:
- Containers 101 from Puppet, the DevOps people
- Containers: Docker, Windows and Trends by Mark Russinovich, CTO of Microsoft Azure.
This post is specifically about Windows containers and their possible impact on enterprise IT infrastructure. I am not a great believer in trying to predict technology; I am simply interested in how much effort we should put into understanding containers, and perhaps into starting to use them for IT services in the enterprise.
Containers
If you are already working with containers you can skip this. If you are not working with containers, and you want to know roughly what they are so that you can appreciate the impact, then this is all you really need to know.
Containers are just another form of task virtualisation.
- A virtual machine runs an OS kernel on virtualised hardware
- A virtual application runs a process on a virtualised file system
- A container runs a session on a partitioned kernel.
From the point of view of enterprise infrastructure, the key attributes of a container are:
- No boot time. The kernel has already booted on the hardware, so a container starts and stops about as fast as a session logon would.
- No persistence. There is no dedicated storage, so nothing persists outside the container unless you explicitly mount a volume.
- Because of the first two, a different model for scalability (there is a short illustration after this list).
- Efficiency, because the same OS kernel is shared between containers.
- Isolation between different containers, but not between the container and the OS.
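To make the first two attributes concrete, here is a minimal sketch, assuming a Windows Server 2016 host with Docker installed and the 2016-era microsoft/nanoserver base image (the official images later moved to mcr.microsoft.com):

```powershell
# Starts in seconds: no kernel boot, just a new session on the shared kernel.
docker run --rm microsoft/nanoserver cmd /c echo hello

# Nothing persists: this file disappears when the container exits...
docker run --rm microsoft/nanoserver cmd /c "echo data > c:\out.txt"

# ...unless you mount a host directory into the container as a volume.
docker run --rm -v c:\data:c:\data microsoft/nanoserver cmd /c "echo data > c:\data\out.txt"
```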
One of the most interesting things about containers on Windows is how they use or extend existing technologies:
- The Host Compute Service and Host Network Service, which partition kernel resources between containers, are provided by Hyper-V. For example, container networks are implemented as Hyper-V virtual switches.
- The networking modes for containers (NAT, transparent, L2 bridge, overlay) are part of Windows Server 2016 Software Defined Networking (SDN).
- The container file system is implemented as a Virtual Hard Disk (VHD).
- Image layering is implemented by a file system filter driver. Container volumes are implemented as reparse points.
Docker provides the commands to operate containers, but the underlying technology that creates and runs containers on Windows is Windows itself. This makes containers on Windows a robust, large-scale solution for compute, storage and networking infrastructure.
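You can see some of this reuse directly on a container host. A minimal sketch, assuming a Windows Server 2016 host with Docker and the Hyper-V PowerShell module installed:

```powershell
# Container networks are created through Docker but implemented by Windows;
# "transparent" is one of the Windows container network drivers.
docker network create -d transparent TransparentNet

# The default "nat" network and the new one both appear here...
docker network ls

# ...and each is backed by a Hyper-V virtual switch on the host.
Get-VMSwitch

# Image layers live on the host file system, managed by the filter driver
# (this is the default Docker data root on Windows).
Get-ChildItem C:\ProgramData\docker\windowsfilter
```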
There are also some limitations or features of containers specifically on Windows:
- no Active Directory domain membership, or Group Policy
- no user interface processes: no Explorer; no desktop; no GUI
- no kernel-mode device drivers (such as antivirus or disk encryption drivers)
- two options for the base image: Windows Server Core or Nano Server (see the sketch after this list).
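A minimal sketch of the two base images, again using the 2016-era Docker Hub names:

```powershell
# Server Core: the larger image, with most of the Windows Server API
# surface, suitable for existing applications.
docker pull microsoft/windowsservercore

# Nano Server: a far smaller image, aimed at new cloud-first applications.
docker pull microsoft/nanoserver

# An interactive session in a Server Core container; note there is no
# desktop or Explorer, just a console.
docker run -it microsoft/windowsservercore powershell
```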
Using Containers
For a developer, on Windows or Linux, containers are a wonderful thing. You can create a complex infrastructure very quickly. You can create different versions of the infrastructure. You can add or remove infrastructure components as needed. You can do it on a micro scale, then scale it up to production size with exactly the same images and deployment tools. You can easily do it with Windows and Linux together. There is no dependency on the infrastructure team: they only need to provide hosts, not install and configure applications. And if you design with a micro-service architecture, you can scale up and out just by adding replicas (there is a sketch below).
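A minimal sketch of that replica model, assuming Docker swarm mode (which on Windows Server 2016 needed a post-RTM update) and a hypothetical stateless image called myorg/orders-api:

```powershell
# Initialise a single-node swarm for development.
docker swarm init

# Run the hypothetical service with two replicas.
docker service create --name orders --replicas 2 myorg/orders-api

# Scale out for production load with exactly the same image and tooling.
docker service scale orders=10
```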
But I think there are a number of problems with implementing current general-purpose enterprise applications in containers. To summarise: most enterprise applications have evolved over ten or twenty years in parallel with the three-tier application model, running on Windows servers as Windows services. There is little benefit in implementing the same model in containers:
- Many services use Active Directory as the AAA provider. They assume the servers are part of a domain, both for administrators and users.
- Most services already have a scalability and availability model, for example based on Windows clusters, or load balanced web front ends.
- Most services can already be partitioned. For example, IIS already runs distinct sites with their own application pool. SQL Server already runs distinct instances with their own security context.
- Services are often closely coupled with data. For example, it would make no sense to run a DNS or DHCP service as an ephemeral instance with non-persistent data.
- Virtual machines already pool the resources of the hardware. There is certainly a benefit in reducing the number of instances of the kernel, but I don’t know if the benefit would be sufficient to merit the change.
I see containers more as part of a trend away from enterprise infrastructure altogether. In many medium-sized enterprises, at least, the foundation of enterprise infrastructure was Active Directory, Exchange, and SharePoint. When you already run Exchange in-house, it makes sense to prefer applications that run on Windows Server, with IIS for the front end, SQL Server for the back end, and Active Directory for AAA. Now this is moving to Office 365, and not for economic reasons. The primary reason in my experience, and often unstated, is an organisational desire to move away from the complexity of running IT services.
Once you have Office 365 instead of on-premises Exchange, it increasingly makes sense to use other SaaS services too. It is all about marginal cost. If you already have a Windows infrastructure, the marginal cost of running something like SAP on-premises is lower than it would be with no existing Windows infrastructure. The more services move to SaaS, the higher the marginal cost of the on-premises Windows infrastructure that remains.
For an enterprise of any scale, some part of the organisation is already remote from the datacentre. As long as the SaaS service is within about 50 milliseconds of the user, and the application is designed for a thin client, there is no intrinsic difference in cost or performance.
Once the balance of enterprise software moves to SaaS, the dominant three-tier architecture for enterprise applications is no longer necessary, or even suitable. SaaS applications are by definition multi-tenant, multi-site, parallel, and continuous. Interestingly, Microsoft moved first to what it calls Service Fabric, and only then to containers within Service Fabric. The real architectural change is in Service Fabric.
On an even larger scale, if you move away from the traditional (only twenty years old!) enterprise infrastructure, you also move away from the divide between the Windows and Unix/Linux operating systems. As an enterprise, you don't know or care which OS a SaaS application runs on. As a developer, you can use any language on any OS for any component of the architecture. Microsoft becomes a cloud infrastructure services vendor rather than a "Windows" vendor. We can see this already with Linux containers on Hyper-V and Linux hosts in Service Fabric.