Containers in the Enterprise

This post is a few thoughts about Windows Containers and the impact they are likely to have on enterprise IT infrastructure.

Containers are very new to Windows. They have been around for a while in Linux, but even there they are young in terms of infrastructure technologies. In Windows they were introduced in Windows Server 2016, and they are still at a very early stage of maturity. I expect them to make a big impact.


This post is specifically about Windows containers, and their possible impact on enterprise IT infrastructure. I am not a great believer in trying to predict technology. I am just interested in considering how much effort we should put into understanding containers, and perhaps starting to use them for IT services in the enterprise.


If you are already working with containers, you can skip this. If you are not, and you want to know roughly what they are so that you can appreciate the impact, then this is all you really need to know.

Containers are just another form of task virtualisation.

  • A virtual machine runs an OS kernel on virtualised hardware
  • A virtual application runs a process on a virtualised file system
  • A container runs a session on a partitioned kernel.

From the point of view of enterprise infrastructure, the key attributes of a container are:

  1. No boot time. The kernel has already booted up on the hardware. The container starts and stops as fast as a session logon would.
  2. No persistence. There is no dedicated storage, so nothing persists outside the container.
  3. Because of 1 and 2, a different model for scalability.
  4. Efficiency, because the same OS kernel is shared between containers.
  5. Isolation between different containers, but not between the container and the OS.

One of the most interesting things about containers on Windows is how they use or extend existing technologies:

Docker provides the commands to operate containers, but the underlying technology that creates and runs containers on Windows is Windows itself. This makes containers on Windows a robust, large-scale solution for compute, storage and networking infrastructure.

There are also some limitations or features of containers specifically on Windows:

  • no Active Directory domain membership, or Group Policy
  • no user interface processes: no Explorer; no desktop; no GUI
  • no kernel mode device drivers (like AV or encryption)
  • two options for the base image: Windows Server Core; or Nano Server.
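Both base images are published by Microsoft on the Docker Hub and can be pulled in advance, for example:

```powershell
# Pull the two Windows base images (as named at the time of writing)
docker pull microsoft/windowsservercore
docker pull microsoft/nanoserver
```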

Using Containers

For a developer, on Windows or Linux, containers are a wonderful thing. You can create a complex infrastructure very quickly. You can create different versions of the infrastructure. You can add or remove infrastructure components as needed. You can do it on a micro scale; then scale it up to production size with exactly the same images and deployment tools. You can easily do it with both Windows and Linux together. There is no dependency on the infrastructure team. They will only need to provide hosts, not install and configure applications. If you design with a new micro-service architecture, you can scale up and out just by adding replicas.

But I think there are a number of problems with implementing current general purpose enterprise applications in containers. To summarise, most enterprise applications have evolved over ten or twenty years in parallel with the three tier application model running on Windows servers using Windows services. There is little benefit to implementing the same model in containers:

  1. Many services use Active Directory as the AAA provider. They assume the servers are part of a domain, both for administrators and users.
  2. Most services already have a scalability and availability model, for example based on Windows clusters, or load balanced web front ends.
  3. Most services can already be partitioned. For example, IIS already runs distinct sites with their own application pool. SQL Server already runs distinct instances with their own security context.
  4. Services are often closely coupled with data. For example it would make no sense to run a DNS or DHCP service as an ephemeral instance with non-persistent data.
  5. Virtual machines already pool the resources of the hardware. There is certainly a benefit in reducing the number of instances of the kernel, but I don’t know if the benefit would be sufficient to merit the change.

I see containers more as part of a trend away from enterprise infrastructure altogether. In many medium sized enterprises, at least, the foundation of enterprise infrastructure was Active Directory, Exchange, and SharePoint. When you already run Exchange in-house, then it makes sense to prefer other applications to run on Windows Server, with IIS for the front end, SQL Server for the back end, and Active Directory for AAA. Now this is moving to Office 365, and not for economic reasons. The primary reason in my experience, and often unstated, is an organisational desire to move away from the complexity of running IT services.

Once you have Office 365 instead of on-premise Exchange, then it makes sense increasingly to use SaaS services. It is all about marginal costs. If you already have a Windows infrastructure, then the marginal cost of running something like SAP on-premise is lower than it would be if you had no existing Windows infrastructure. The more services move to SaaS, the higher the marginal cost of running on-premise Windows infrastructure.

For an enterprise of any scale, some part of the enterprise is already going to be remote from the datacentre. As long as the SaaS service is located less than about 50 milliseconds away from the user, and provided the application is designed for thin client, then there is no difference in the intrinsic cost or performance.

Once the balance of enterprise software moves to SaaS, then the dominant three tier architecture for enterprise applications is no longer necessary, or even suitable. SaaS applications are by definition multi-tenant, multi-site, parallel, and continuous. Interestingly, Microsoft has moved first to what it calls Service Fabric, and only secondly to Containers in Service Fabric. The real architectural change is in Service Fabric.

On an even larger scale, if you move away from the traditional (only twenty years old!) enterprise infrastructure, you also move away from the divide between Windows and Unix/Linux OS. As an enterprise, you don’t know or care about the OS that a SaaS application runs on. As a developer you can use any language on any OS for any component of the architecture. Microsoft becomes a cloud infrastructure services vendor, and no longer a "Windows" vendor. We can see this already with Linux containers on Hyper-V and Linux hosts in Service Fabric.

Windows Containers: Hyper-V

An option with Windows Containers is to run a container in Hyper-V Isolation Mode. This post shows what happens when we do this.

When we run a container normally, the processes running in the container are running on the kernel of the host. The Process ID and the Session ID of the container process are the same as on the host.

When we run a container in Hyper-V Isolation Mode, a utility VM is created and the container runs within that. We need to have the Hyper-V role installed on the host. Then we need to add --isolation hyperv to the docker run command.
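For reference, a minimal run command for an interactive Server Core container in Hyper-V Isolation Mode might look like this:

```powershell
# Requires the Hyper-V role installed on the host
docker run -it --isolation hyperv microsoft/windowsservercore powershell
```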

Here are some of the main differences.

The processes in the container are isolated from the host OS kernel. The Session 0 processes do not appear on the host. Session 1 in the container is not Session 1 on the host, and the Session 1 processes of the container do not appear on the host.


Get Process Hyper-V Container


Get Process Hyper-V Host Same SI

There is no mounted Virtual Hard Disk (VHD):

Disk Management Hyper-V

Instead we have a set of processes for the Hyper-V virtual machine:

Hyper-V Processes on Host

A set of inbound rules is not automatically created on the host Windows firewall. There are no rules for ICC, RDP, DNS, DHCP as there are when we create a standard container:

Firewall Rules Hyper-V Host

But the container is listening on port 135, and we can connect from the host to the container on that port, as we can with a standard container:

Netstat Hyper-V Container Established

And if we create another, standard, container, they each respond to a ping from the other.

Hyper-V does not add to the manageability of containers. The Hyper-V containers do not appear in the Hyper-V management console.

Hyper-V Manager

So in summary: in Hyper-V Isolation Mode the container processes are fully isolated; but the container is not on an isolated network, and is still open to connections from the host and from other containers by default.

Windows Containers: Data

A container is an instance of an image. The instance consists of the read-only layers of the image, with a unique copy-on-write layer, or sandbox. The writable layer is disposed of when we remove the container. So clearly we need to do something more to make data persist across instances. Docker provides two ways to do this.

When Docker creates a container on Windows, the container is instantiated as a Virtual Hard Disk (VHD). You can see the disk mounted without a drive letter, in Disk Management on the host. Docker keeps track of the layers, but the file operations take place inside the VHD.

Host Disk Manager

If we use the interactive PowerShell console to create a new directory in the container, C:\Logs, then this is created directly inside the VHD:

Sandbox Logs

When Docker removes the container, the VHD is also removed and the directory is gone.

Docker provides two ways to mount a directory on the host file system inside the container file system, so that data can persist across instances:

  1. Bind mount
  2. Volume mount.

A bind mount is simply a link to a directory on the host. A volume mount is a link to a directory tracked and managed by Docker. Docker recommends generally using volumes. You can read more about it in the Docker Storage Overview.

The parameter you commonly see to specify a mount is -v or --volume. A newer parameter, and the one Docker recommends, is --mount. This has a more explicit syntax.
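For illustration, a bind mount with the --mount syntax might look like this (C:\HostLogs is a hypothetical folder that must already exist on the host):

```powershell
# Bind mount: link a host directory straight into the container file system
docker run -it --rm --mount type=bind,src=C:\HostLogs,dst=C:\Logs microsoft/windowsservercore powershell
```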

In this example, we mount a volume on the host called MyNewDockerVolume to C:\MyNewDockerVolume in the container:

docker run -it --rm --name core --mount type=volume,src=MyNewDockerVolume,dst=C:\MyNewDockerVolume microsoft/windowsservercore powershell

If the volume does not already exist, it is created inside the docker configuration folder on the host:

Docker volumes MyNewDockerVolume

The Hyper-V Host Compute Service (vmcompute.exe) carries out the file operations inside the VHD, for example:

CreateFile: \Device\HarddiskVolume13\MyNewDockerVolume. Desired Access: Generic Read/Write, Disposition: OpenIf, Options: Directory, Open Reparse Point, Attributes: N, ShareMode: Read, Write, AllocationSize: 0, OpenResult: Created
FileSystemControl: \Device\HarddiskVolume13\MyNewDockerVolume. Control: FSCTL_SET_REPARSE_POINT

Now if we look in the VHD, in Explorer, we see the directory implemented as a shortcut:

Sandbox MyNewDockerVolume

In PowerShell, we can see that the directory mode is “l”, to signify a reparse point, or link:

Dir MyNewDockerVolume

Files already in the volume will be reflected in the folder in the container. Files written to the folder in the container will be redirected to the volume.

Windows reparse points come in several flavours: they can point to a directory or a file, and they can be junctions or soft links (“symbolic links” or “symlinks”). If we use the command prompt instead of PowerShell we can see that the Docker volume is implemented as a directory symlink:

Dir in Command

Working with data in Windows Containers requires keeping three things in mind:

  1. The difference between bind mount and volume mount
  2. The different syntax for --volume and --mount
  3. Differences in behaviour between Docker on Linux and Windows hosts.

The first two are well documented. The third is newer and less well documented. The main differences I can find are:

  • You cannot mount a single file
  • The target folder in the container must be empty
  • Docker allows plugins for different drivers. On Linux you can use different storage drivers to connect remote volumes. On Windows the only driver is “local” and so the volume must be on the same host as the container.

If you reference a VOLUME in the Dockerfile to create an image, then the volume will be created automatically, if it does not already exist, without needing to specify it in the docker run command.
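A minimal sketch of this, assuming a hypothetical C:\Logs folder:

```dockerfile
FROM microsoft/windowsservercore
# Docker will create and mount a volume for C:\Logs automatically at run time
VOLUME C:\\Logs
```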

Windows Containers: Build

This post is a building block for working with containers on Windows. I have covered elsewhere installing the Containers feature with Docker, and running containers with the Docker command line. We can’t do much that is useful without building our own images. Doing this tells us a lot about what we can and cannot do with containers on Windows.

Some preamble:

  1. A container is not persistent. It is an instance of an image. You can make changes inside a running container, for example installing or configuring an application, but unless you build a new container image with your changes, they will not be saved.
  2. A Windows container has no GUI. Any installation or configuration will be done at the command line.
  3. Therefore we should make our changes in a script, containing the instructions to build a new image.
  4. This script is a Dockerfile.

The command to build an image is: docker image build with a range of options, including the path to the Dockerfile.
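For example, to build an image from a Dockerfile in the current directory, with a hypothetical name and tag:

```powershell
# Build an image from the Dockerfile in the current directory (.)
docker image build -t mycompany/web:1.0 .
```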

You can also run: docker container commit to create a new image from a running container. This gives scope for configuring a container interactively before saving it as a new image. But, since the only interface for configuring the container is the command line, and since the same commands can be performed in the Dockerfile, this has limited use.
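For example, assuming a running container named core, and a hypothetical name for the new image:

```powershell
# Save the current state of the container "core" as a new image
docker commit core mycompany/core-configured
```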

Building an image in Docker is a similar idea to building an image for OS deployment. The Dockerfile is like the task sequence in MDT or SCCM, being a scripted set of tasks. The documentation is here: Dockerfile reference. An example is this one, from Microsoft, for IIS on Windows Server Core:

FROM microsoft/windowsservercore
RUN powershell -Command Add-WindowsFeature Web-Server
ADD ServiceMonitor.exe /ServiceMonitor.exe
ENTRYPOINT ["C:\ServiceMonitor.exe", "w3svc"]

The basic structure of a Dockerfile is:

  • FROM to specify the image that the new image is developed from
  • ADD or COPY from source to destination to put new files into the image
  • RUN to execute commands to configure the image
  • CMD to specify a command to start the container with, if no other command is specified
  • EXPOSE to indicate what port or ports the application listens on
  • ENTRYPOINT to specify the services or executables that should run automatically when a container is created.

We can immediately see some implications:

  1. We don’t have to build every part of the end image in one Dockerfile. We can chain images together. For example, we could build a generic web server FROM microsoft/iis, then build specific web sites with other components in new images based on that.
  2. Adding a single feature is easy, like: Add-WindowsFeature Web-Server. But configuring it with all the required options will be considerably more complicated: add website; application pool; server certificate etc.
  3. We may want to bundle sets of commands into separate scripts and run those instead of the individual commands.
  4. There is no RDP to the container, no remote management, no access to Event Logs: and arguably we don’t need to manage the container in the same way. But we can add agents to the image, for example a Splunk agent.
  5. Static data can be included in the image, of course, but if we want dynamic data then we need to decide which folders it will be in, so we can mount external folders to these when we run the container.

It is rather like doing a scripted OS deployment without MDT. I would not be surprised if a GUI tool emerges soon to automate the build scripting.

You may find a number of Dockerfiles for Windows using the Deployment Image Servicing and Management (DISM) tool. There is a confusing choice of tools and no particular need to use DISM (or reason not to). DISM is typically used for offline servicing of Windows Imaging Format (WIM) images. For example it can be used to slipstream updates and packages into a WIM image by mounting it. But in the case of Docker images the changes are made by instantiating a temporary container for each RUN, and the DISM commands are executed online. This means we can use three different types of command to do the same thing:

  • Install-WindowsFeature from the ServerManager module in PowerShell
  • Enable-WindowsOptionalFeature from the DISM module in PowerShell
  • dism.exe /online /enable-feature from DISM.

Just to make life interesting and keep us busy, the commands to add a feature use different names for the same feature!
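For example, adding the IIS web server in a Dockerfile: the ServerManager module calls the feature Web-Server, while the DISM module and dism.exe call it IIS-WebServer:

```dockerfile
# Three equivalent ways to add IIS, each in a RUN instruction
RUN powershell -Command Install-WindowsFeature Web-Server
RUN powershell -Command Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServer
RUN dism.exe /online /enable-feature /featurename:IIS-WebServer
```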

Windows Containers: Portainer GUI

When you first set up Containers on Windows Server 2016, you would imagine there would be some kind of management console. But there is none. You have to work entirely from the command line. Portainer provides a management GUI that makes it easier to visualise what is going on.

The Windows Container feature itself only provides the base Host Compute and Host Network Services, as Hyper-V extensions. There is no management console for these. Even if you install the Hyper-V role, as well as the Containers feature, you can’t manage images and containers from the Hyper-V management console.

Images and containers are created and managed by a third party application, Docker. Docker also has no management console. It is managed from the Docker CLI, either in PowerShell or the Command Prompt.

There is a good reason for this. But, for me at least, it makes it hard to visualise what is going on. Portainer is a simple management UI for Docker. It is open source, and itself runs as a container. It works by connecting to the Docker engine on the host server, then providing a web interface to manage Docker.

Portainer Dashboard

Portainer Dashboard

Setting up the Portainer container will also give us a better idea of how to work with Docker. Docker has a daunting amount of documentation for the command line, and it is not easy to get to grips with it.

Configure Docker TCP Socket

The first step in setting up Portainer is to enable the Docker service to listen on a TCP socket. By default Docker only allows a named pipe connection between client and service.

Quick version:

  • create a file with notepad in C:\ProgramData\docker\config
  • name the file daemon.json
  • add this to the file:
    {"hosts": ["tcp://","npipe://"]}
  • restart the Docker service.

The long version is: the Docker service can be configured in two different ways:

  1. By supplying parameters to the service executable
  2. By creating a configuration file, daemon.json, in C:\ProgramData\docker\config.

The parameters for configuring the Docker service executable are here: Daemon CLI Reference. To start Docker with a listening TCP socket on port 2375, use the parameter

-H tcp://

This needs to be configured either directly in the registry, at HKLM\SYSTEM\CurrentControlSet\Services\Docker; or with the Service Control command line:

sc config Docker binPath= "\"C:\Program Files\docker\dockerd.exe\" --run-service -H tcp://"

The syntax is made difficult by the spaces, which require quotation with escape characters.

The easier way is to configure the Docker service with a configuration file read at startup, daemon.json. The file does not exist by default. You need to create a new text file and save it in the default location C:\ProgramData\docker\config. The daemon.json file only needs to contain the parameters you are explicitly configuring. To configure a TCP socket, add this to the file:

 "hosts": ["tcp://","npipe://"]

Other options for the configuration file for Docker in Windows are documented here: Miscellaneous Options. For example you can specify a proxy server to use when pulling images from the Docker Hub.

Just to add complexity:

  • the Docker service will not start if the same parameter is set in service startup and in the configuration file
  • You can change the location of the configuration file by specifying a parameter for the service:
    sc config Docker binPath= "\"C:\Program Files\docker\dockerd.exe\" --run-service --config-file \"[path to file]\""

Ports 2375 (unencrypted) and 2376 (encrypted with TLS) are the standard ports. You will obviously want to use TLS in a production environment, but the Windows Docker package does not include the tools to do this. Standard Windows certificates can’t be used. Instead you will need to follow the documentation to create OpenSSL certificates.

Allow Docker Connection Through Firewall

Configure an inbound rule in the Windows firewall to allow TCP connections to the Docker service on port 2375 or 2376. This needs to be allowed for all profiles, because the container virtual interface is detected as on a Public network.

netsh advfirewall firewall add rule name="Docker" dir=in action=allow protocol=TCP localport=2375 enable=yes profile=domain,private,public

Note that, by default, containers do not have access to services and sockets on the host.

Pull the Portainer Image

Back in an elevated PowerShell console, pull the current Portainer image from the Portainer repository in the Docker Hub:

docker pull portainer/portainer

If we look in the images folder in C:\ProgramData\docker\windowsfilter we can see that we have downloaded 6 new layers. We already had two Nano Server layers, because we pulled those down previously.

Portainer Layers

If we look at the image history, we can see the layers making up the image:

docker image history portainer/portainer

Portainer Image History

The two base layers of the Portainer image are Windows Nano Server. We already had a copy of the Nano Server base image, but ours was update 10.0.14393.1593, so we have downloaded a layer for the newer update 10.0.14393.1715. We can also see the action that created each layer.

If we inspect the image, with:

docker image inspect portainer/portainer

we can see some of the things we need to set it up:

  1. The container is going to run portainer.exe when it starts
  2. The exposed port is 9000
  3. The volume (or folder) to mount externally is C:\Data

Set up Portainer container

Quick version:

  1. Create a folder in the host called: C:\ProgramData\Containers\Portainer
  2. Open an elevated PowerShell console on the host
  3. Run this command:
    docker run -d --restart always --name portainer -v C:\ProgramData\Containers\Portainer:C:\Data -p 9000:9000 portainer/portainer

The long version is: we need the command line to run the Portainer image:

  1. Standard command to create a container: docker run
  2. We want to run the container detached as a free standing container, with no attached console: -d or --detach
  3. There is no need to remove the container if it is stopped. Instead, we want to restart the container automatically if, for example, the host is rebooted: --restart always
  4. We can give the container a name, to make it easier to manage: --name portainer
  5. Portainer reads information about images and containers directly from Docker, so it does not need to store that. But it needs to store its own configuration, for example settings and user passwords. To do this, we need to save the configuration data outside the container. We can do this in Docker by mounting an external folder in the file system of the container. The folder in the container has already been designated as C:\Data in the image, but the folder in the host can be anything you choose. In this example we are using C:\ProgramData\Containers\Portainer. The folder needs to exist before using this: -v C:\ProgramData\Containers\Portainer:C:\Data
  6. The Portainer process is listening on port 9000 (see above). We can connect to this directly from the host itself, without doing anything more. But the outside world has no access to it. The container is running on a virtual switch with NAT enabled. This does port forwarding from the host to the container. We need to decide what port on the host we would like to be forwarded to port 9000 on the container. If we don’t specify a port, Docker will assign a random port and we can discover it through docker container inspect portainer. Otherwise we can specify a port on the host, which in this case can also be 9000: -p 9000:9000
  7. The image to run: portainer/portainer
  8. We don’t need to specify a command to run, since the image already has a default command: portainer.exe

Putting the parameters together, the full command is:

docker run -d --restart always --name portainer -v C:\ProgramData\Containers\Portainer:C:\Data -p 9000:9000 portainer/portainer

Connect to Portainer

Using a browser on your desktop, connect to the published Portainer port on the remote host (9000 in the example above), and set up a password for the admin user:

Portainer Setup

Set up the Docker host as the endpoint:

Portainer Setup Endpoint

Note that the endpoint is the IP address of the host virtual interface on the container subnet. This address is also the gateway address for the container, but in this context it is not acting as a gateway. The virtual interface on the host is listening on port 2375 for Docker connections.

And we are in:

Portainer Dashboard

We can also connect directly from a browser on the host to the container. For this, we need to use the IP address of the container itself, which we can find from docker container inspect portainer.

The Portainer Container

We don’t need to set up a firewall rule to allow access to the container on port 9000. Docker sets up a bunch of rules automatically when the container is created:

Container Automatic Firewall Rules

These rules include: DHCP; ICMP; and DNS. They also include port 9000 on the host, which we specified would be forwarded to port 9000 in the container:

Container Automatic Firewall Rule for Portainer

In Portainer, when we set up the endpoint (being Docker on the host) we need to specify the address of the virtual interface of the host that is on the same subnet as the container. This is because Windows does not allow the container to connect through the virtual interface to a service listening on the physical interface.

If we look at the TCP connections on the host, with: netstat -a -p tcp, we see that there is no active connection to Portainer in the container, although my browser is in fact connected from outside:

Portainer Host TCP Connections

However, if we look at the NAT sessions, with Get-NetNATSession, we see the port forwarding for port 9000 to the container:

Host Get-NetNATSession

Docker has attached a virtual hard disk to the host, being the file system of the container:

Host Disk Manager

If we give it a drive letter we can see inside:

Portainer Container System Drive

The portainer executable is in the root of the drive. C:\Data is the folder that we mounted in the docker run command. Other folders like css and fonts are part of the application. These are contained in the first layer of the image, after the Nano Server layers. The layer was created by the COPY command in the Portainer Dockerfile used to create the image:

FROM microsoft/nanoserver
COPY dist /
VOLUME C:\\data
ENTRYPOINT ["/portainer.exe"]

And here is the portainer process running on the host in Session 2, using:

Get-Process | Where-Object {$_.SI -eq 2} | Sort-Object SI

Portainer Process Running on Host


You can see in the Portainer GUI for creating endpoints that we can connect to Docker with TLS. This assumes we have set up Docker with certificates and specified encrypted TCP connections, covered in the Docker daemon security documentation.

We should also connect to Portainer over an encrypted connection. We can do this by adding more parameters to the docker run command: Securing Portainer using SSL.

More about using Portainer

You can read more about using Portainer in the Portainer documentation.

Windows Containers: Properties

If we create an instance of an image in interactive mode, and run a PowerShell console in it, then we can see inside the container.

In a previous post I used the Nano Server image, because it is small and quick. But Nano Server is a cut down OS so, for the purposes of seeing how a container works, let’s take a look inside a Windows Server Core container. The question of when Nano Server can be used in place of Core is a subject for another time.

The Docker command to do this is:

docker run --rm -it --name core microsoft/windowsservercore powershell

The system information, with systeminfo, shows a Windows server where some of the properties belong to the container, and some to the host. For example, the language and locale belong to the container, but the BIOS and boot time belong to the host:

Container SystemInfo

The TCP/IP information, with ipconfig /all shows that the container has its own:

  • hostname
  • Hyper-V virtual ethernet adapter
  • MAC address
  • IP address in the private Class B subnet, which we saw previously was allocated to the Hyper-V virtual switch
  • gateway, which we saw previously was the Hyper-V virtual ethernet adapter on the host
  • DNS server addresses.

Container IPConfig All

I can connect to the outside world with ping, and get a reply:

Container Ping World

The running processes, from Get-Process, show the PowerShell process, as well as what look like typical user processes. If I run Get-Process | Sort-Object SI I can see that there are two sessions: a system session in Session 0, and a user session in Session 2.

Container Get Process Sort SI

I can start other processes. For example, if I start Notepad, then I see it running as a new process in Session 2.

Container Start Notepad

The services, from Get-Service, show normal system services. It is easier to see if I filter for running services, with:

Get-Service | Where-Object {$_.Status -eq "Running"} | Sort-Object DisplayName

Container Get Service Filter and Sort

I have listening ports, shown with Get-NetTCPConnection, but nothing connected:

Container Get TCP Connection

There are three local user accounts, shown with Get-LocalUser:

Container Get Local User

PowerShell tells me that it is being executed by the user ContainerAdministrator:

Container Get Process PowerShell UserName

In summary, I have something that looks similar to an operating system running a user session. It can start a new process and it can communicate with the outside world.

Let’s see what it looks like from outside. From the host I can ping the container:

Host Ping Container

I can telnet from the host to port 135 (one of the ports that I saw was listening) in the container, and make a connection:

Host Telnet Container

But I can’t make a connection from outside the host. I already know there is no route to the container subnet. What happens if I supply a route? Still no reply. I am not really surprised. The connection would have to go through the host, and there is nothing in the host firewall to allow a connection to the container.

World Ping Container

If I start another container, though, it can ping and get a reply from the first container:

Container Ping Container

If I look in Task Manager on the host, there is no obvious object that looks like a container. I don’t even know what size I would expect it to be. But I notice that the PowerShell process in the container shows as the same process on the host.

Get-Process PowerShell in the container:

Container Get Process PowerShell

Get-Process PowerShell on the host:

Host Get Process PowerShell

The process ID 3632 is the same process. All three PowerShell processes, including the one in the container, are using the same path to the executable. You could say that the container is a virtual bubble (session, namespace or whatever you want to call it) executing processes on the host:

Host Get Process PowerShell Path

If I look at all the processes on the host, I can see that the container’s Session 2 is also Session 2 on the host. Here are the host processes filtered by session:

Get-Process | Where-Object {$_.SI -eq 0 -or $_.SI -eq 2} | Sort-Object SI

Host Get Process Filter and Sort SI

Session 0 (the System session) has a lot more processes than shown inside the container, but Session 2 is the same. Processes like lsass, wininit, csrss are the normal processes associated with a session.

The host does not see the user who is executing the processes. In the container the user is ContainerAdministrator, but there is no such user on the host, and the host does not have the username:

Host Get Process PowerShell UserName

A container is ephemeral. But if I create files inside the container they must be stored somewhere.

In the image folder of the host I can see a new layer has been created:

Docker Images Folder

The layerchain.json file tells me that the layer is chained to the Windows Server Core base image layers. The layer has a virtual hard disk drive called “sandbox”, which sounds like the kind of place that changes would be saved.

If I look in Disk Manager, I can see that a new hard disk drive has been attached to the host:

Host Disk Manager

The disk is the same size as the apparent system drive inside the container. It is shown as Online, but with no drive letter. However, if I give it a drive letter, then I can see the same file system that I was able to see inside the container:

Container Sandbox Disk on Host

So the file system of the container is created by mounting a Hyper-V virtual hard disk drive. This only exists for the lifetime of the container. When the container is removed, any changes are lost.

In summary:

  • From inside, the container appears to have similar properties to a normal virtual machine.
  • The container has a network identity, with a host name, virtual Ethernet adapter and IP address.
  • It can communicate with the outside world, and with other containers, but the outside world cannot (until we change something) connect to it.
  • It has a file system, based on the image the container was created from.
  • On the host, the container processes are implemented as a distinct session.
  • The file system of the container is implemented as a virtual hard disk drive attached to the host.
  • Files can be saved to the virtual hard disk drive, but they are discarded when the container is removed.
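If files do need to survive the container, Docker can map a host folder into the container as a volume when the container is run. A minimal sketch (the host path C:\containerdata is just an example; create it on the host first):

```powershell
# Map the host folder C:\containerdata to C:\data inside the container.
# Files written to C:\data remain on the host after the container is removed.
docker run --rm -it -v C:\containerdata:C:\data microsoft/nanoserver powershell
```

Anything written to the sandbox disk is still discarded; only the mapped folder persists.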

Windows Containers: Run an Image

A container is an instance of an image. When we “run” the image, a container is created. Let’s see what happens when we do this.

In previous posts I covered installing the Windows Containers feature, and downloading a base image of Windows Nano Server or Windows Server Core.

Docker is the daemon or service that manages images and containers. It is managed at the command line, in PowerShell or the Command Prompt. To use Windows Containers we need to get familiar with the Docker commands.
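A quick way to check that the client can talk to the daemon, and to see where Docker is keeping its data:

```powershell
docker version   # shows the versions of both the client and the daemon (server)
docker info      # summarises containers, images and the storage root directory
```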

I am using the base image of Nano Server as an example, because it is simple and small. When you see how it works, it is easy to imagine how other images based on Windows Server Core might also work.

Let's just run the Nano Server image and see what happens. In an elevated PowerShell console:

docker run microsoft/nanoserver

The console blinks, briefly changes to a C:\ prompt, then returns to the PowerShell prompt. It seems that nothing happened at all!

Docker Run Nano

We can try:

docker container ls

to see if any containers exist. It shows none. But:

docker container ls -a

(or --all) shows a container that has exited.

Docker Container LS

So there was a container but it exited. I can see that the container has an ID, and a name.

I can start the container with:

docker start [ID]

but it exits again. Clearly it is configured to do nothing and then exit.

Docker Start Container

This container is no use to me, since it just runs and exits. I can remove it with:

docker container rm [ID]

Docker Container Remove

How can I get it to hang around, so I can see what it is? Normally a server waits for something to do, but this container seems to exit if it has nothing to do. It acts more like a process than a server. I could try giving it a command that continues until stopped, like:

docker run microsoft/nanoserver ping -t localhost

Now I can see that the container continues to perform the ping. If I disconnect the terminal with Ctrl+C, the container is still running.

Docker Run Nano Ping

If I run:

docker container attach [ID]

then the PowerShell console attaches again to the output of the running ping process.

Docker Container Attach

I need to run:

docker container stop [ID]

to stop the ping process, and:

docker container rm [ID]

to remove the container.
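As an aside, the two steps can be combined: Docker can force-remove a running container in one command.

```powershell
# -f (--force) stops the container if it is running, then removes it
docker container rm -f [ID]
```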

If I want to run a container and see what it is doing, then I can run an interactive container. Putting the commands together:

    • create the container:
docker run [image]
    • remove it when it exits:
--rm
    • double dash is for a full word, or an abbreviation:
--interactive, --tty
    • single dash is for a concatenation:
-i -t
    • so -[interactive][tty] is:
-it
    • give the container a name so that I don’t have to find the ID or the random name:
--name [friendly name]
    • a command parameter at the end is the executable to run in the container:
powershell


docker run --rm -it --name nano microsoft/nanoserver powershell

gives me a running container with an attached PowerShell console:

Docker Run Interactive

Now we can see the inside of the container as well as the outside, and get a good idea of how it works.

Windows Containers: Base Images

The Containers feature on Windows Server 2016 runs applications in containers. A container is an instance of an OS image. Let’s explore what an image is.

For Windows containers we can start with one of two base images:

  1. Windows Server Core
  2. Windows Nano Server

The base images are provided by Microsoft, and kept in the Microsoft repository in the Docker Hub registry.

To get a copy of the current Nano Server base image, we use the command:

docker pull microsoft/nanoserver

This downloads, extracts and saves the image in the Docker folder at C:\ProgramData\docker\windowsfilter.

  • Docker commands are run by the docker client from either the Command Prompt or PowerShell
  • The command processor must run elevated
  • The place where the images are stored, by default, can be changed in the configuration of Docker
  • By default Docker pulls the latest version of the image. Other versions can be specified explicitly.
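For example, the storage location can be changed in the Docker configuration file, by default C:\ProgramData\docker\config\daemon.json. This is a sketch only: the key is data-root in current Docker versions (some older versions used graph), and d:\docker is an example path:

```json
{
  "data-root": "d:\\docker"
}
```

The Docker service must be restarted for the change to take effect.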

An image is a set of files. Here are the image files for Nano Server:

Nano Image Base Layer

The image consists of files and registry hives. Here are the contents of one of the folders for the Nano Server image:

Nano Image Folder

The files look like a standard system drive:

Nano Image Files

The Hives folder is a collection of registry hives:

Nano Image Registry Hives

You can load the registry hives into Regedit in the normal way:

Nano Image Registry Software Hive

We also have a Utility VM folder with two Hyper-V Virtual Hard Disk (vhdx) files:

Nano Image Utility VM

These are used when the container is run in a small Hyper-V virtual machine instead of directly on the host OS (Hyper-V isolation mode).
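Hyper-V isolation can be selected per container at run time with the --isolation flag, assuming the Hyper-V role is available on the host:

```powershell
# Run the container inside a minimal Hyper-V VM instead of directly on the host kernel
docker run --rm -it --isolation=hyperv microsoft/nanoserver cmd
```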

In my example of the Nano Server base image, there were two folders:

Nano Image Pull

Each folder represents a layer. When an image is modified, the changes are saved in a new layer.

The command:

docker image ls

shows one image:

Nano Image List 1593

The command:

docker image history microsoft/nanoserver

shows two layers. The first layer is the original release of Nano Server, 10.0.14393.0, and the second layer is an update, 10.0.14393.1593. You can see the name, date, action that created it, and size of each layer:

Nano Image History

The command:

docker image inspect microsoft/nanoserver

shows the details of the image. These include:

  • The unique ID of the image
  • The OS version
  • The unique ID of each of the two layers
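The inspect output is JSON, and individual fields can be extracted with a Go template via --format. For example (field names as they appear in the Windows image metadata):

```powershell
# Print just the OS and OS version of the image
docker image inspect --format "{{.Os}} {{.OsVersion}}" microsoft/nanoserver
```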

If we look back at the Microsoft repository on Docker Hub, we can see the tags for different updates:

Update 10.0.14393.1066
Update 10.0.14393.1198
Update 10.0.14393.1358
Update 10.0.14393.1480
Update 10.0.14393.1593
Update 10.0.14393.1715

Update 1715 is newer than the one I pulled recently. If I run the command:

docker pull microsoft/nanoserver

again, I get the latest image. If I run the command with a tag appended, I get that specific image. In this case they are different update levels, but they could be different configurations or any other variation.
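For example, to pin a specific update level from the tag list above, rather than taking whatever is latest:

```powershell
docker pull microsoft/nanoserver:10.0.14393.1715
```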

Now a third folder is added in C:\ProgramData\docker\windowsfilter:

Nano Image Pull 1715

The command

docker image ls

shows that I have two images:

Nano Image List 1715

The command:

docker image history microsoft/nanoserver

again shows two layers in the latest image. One layer is the new update, and the other layer is the same original layer as in the previous version:

Nano Image History 1715

The image name “microsoft/nanoserver” refers, by default, to the latest version of the image, consisting only of the original layer and the newest layer. Docker keeps track of images and layers in a local database:

Docker Local Database

In summary:

  1. Windows containers are instances of an image
  2. An image is a set of files and registry hives
  3. An image comprises one or more layers
  4. All Windows container images start from either Windows Server Core or Nano Server
  5. The layers may comprise updates, roles or features, language, applications, or any other change to the original OS.

Windows Containers: Add Feature

The Windows Server 2016 Containers feature enables Windows Server 2016 to run applications in “containers”. Let’s take a look at what this feature is.

There are plenty of guides on the Internet for how to set up containers on Windows. The purpose here is not so much to provide the instructions, as to see and understand how the new Containers feature is implemented.

Step 1: Build a standard Windows server. It can be a physical or virtual server.

Step 2: Install the Containers feature.

Windows Containers Feature

This creates a new service: the Hyper-V Host Compute Service. Note that several Hyper-V components are already installed by default in the server OS, without adding the Hyper-V role explicitly. The Containers feature extends the default Hyper-V services.

Hyper-V Host Compute Service for Containers

The Hyper-V Host Compute service is the one that will partition access to the Windows kernel between different containers.

Next, install the PowerShell module for Docker. There are two steps to obtain the module:

  1. Add the Microsoft NuGet package provider
  2. Add the PowerShell Docker module.

NuGet is the Microsoft package manager for open source .NET packages:

  • Install-PackageProvider -Name NuGet -Force

Then the PowerShell module for Docker:

  • Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

Next, we need to add the Docker components. Docker is a third party application that manages containers, on Linux and now on Windows. Microsoft provides the API (in the Hyper-V Host Compute Service) and Docker provides the application that uses the API to run containers. The documentation for Docker comes from Docker, not from Microsoft. The command to install the Docker package is:

  • Install-Package -Name docker -ProviderName DockerMsftProvider

I have broken these out as separate steps for clarity. If you install the PowerShell Docker module you will be prompted first for NuGet. The Docker package (last step above) will also add the Containers feature, if you have not already done so.

Docker is installed as a service (daemon) and a client to operate the service.

Docker Daemon Service
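The service can be confirmed from PowerShell (the service name is docker):

```powershell
Get-Service docker | Select-Object Name, Status, StartType
```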

The Docker installation has these two executables.

Docker Executables

The file dockerd.exe is the Docker service.

Docker Properties

The file docker.exe is the client. Like a lot of open source tools, Docker is managed at the command line. You can run the docker client executable in the Command Prompt.

Docker Client

The Containers feature also creates an internal network where the containers will run by default. This consists of:

  1. A Hyper-V virtual switch
  2. A subnet used for the virtual network (always 172.17.nnn.0/20)
  3. A virtual NIC on the host server that is presented to the virtual switch
  4. Two new rules in the Windows firewall.

By default the Containers feature sets up a NAT switch. A Windows component, WinNAT, maps ports on the host to IP addresses and ports on the container network.
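To make a container reachable from outside the host, a host port can be published through WinNAT when the container is run. A hedged sketch (the image and ports are examples):

```powershell
# Publish host port 8080 to port 80 in the container
docker run -d -p 8080:80 [image]

# Inspect the resulting NAT mapping on the host
Get-NetNatStaticMapping
```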

Here is the virtual switch:

Docker Virtual Network

And the NAT component:

Container VMSwitch and NAT

The host NIC on this virtual switch:

Hyper-V Virtual Ethernet Adapter

The Hyper-V Virtual Ethernet Adapter shown in the normal Network and Sharing Centre:

Hyper-V HNS Internal NIC

You can create other types of virtual switches later.
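For example, Windows also supports a transparent network driver, which attaches containers directly to the physical network instead of going through NAT (the network name here is just an example):

```powershell
docker network ls                                    # list the existing container networks
docker network create -d transparent TransparentNet
```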

The installation also creates two default firewall rules:

Docker Automatic Firewall Rules

The Inter-Container Communication (ICC) default rule allows anything from the virtual container network:

Docker Automatic Firewall Rules ICC to Docker Network

and RDP:

Docker Automatic Firewall Rules RDP

It is not obvious why the Containers feature creates a firewall rule for RDP. It does not enable RDP on the host. And the containers do not support RDP.

In summary:

  • The Windows Containers feature is enabled as an extension of the default Hyper-V services.
  • The Hyper-V Host Compute Service allows containers to run processes on the Windows kernel. The Hyper-V Host Network Service creates the internal logical networks for the containers.
  • There is no need to install the Hyper-V role itself, unless you want to run containers in a VM (called Hyper-V Isolation Mode).
  • Docker is a third party application that uses the Windows Containers feature to create and run containers.
  • The Docker package installs the Docker components on top of the Windows Containers feature.
  • The Docker package installation also creates a virtual network for containers. This has a Hyper-V virtual switch with NAT networking, and a Hyper-V virtual NIC on the host attached to the switch.

So far, we have installed the Containers feature and the Docker components. We still can’t do anything until we obtain an image to create containers from.