Welcome to the workshop :)
You will be provided with a virtual machine which is already prepped for the lab.
You will build images and push them to Docker Hub during the workshop, so they are available to use later. You'll need a Docker ID to push images.
Exercises are shown like this:
This is something you do yourself...
copy and paste this code
You'll be given the connection details for your Windows Server 2016 VM during the workshop.
You can connect to the VM using RDP on Windows, Microsoft Remote Desktop from the Mac App Store or Remmina on Linux.
RDP into the server VM. The server name will be something like:
dwwx-dcus1800.centralus.cloudapp.azure.com
Now run a script to make sure everything is up to date.
Open a PowerShell prompt from the start menu and run:
cd C:\scm\docker-windows-workshop
.\workshop\lab-vm\update.ps1
Do not use PowerShell ISE for the workshop! It has a strange relationship with some docker commands.
Here we go :)
We'll start with the basics and get a feel for running Docker on Windows.
You'll see how to run task containers, interactive containers and background containers, and explore the filesystem and processes inside Docker containers.
Task containers just do one thing and then exit. Useful for automation, but we'll start with a simple example.
Print out the host name from a Windows container:
docker container run microsoft/nanoserver hostname
When the process in a container ends, the container stops too. Your task container still exists, but it's in the 'Exited' state.
List all containers:
docker container ls --all
Note that the container ID is the container's hostname.
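If you want to double-check that, you can print just the ID of the most recently created container and compare it with the hostname output above:
docker container ls --all --last 1 --format "{{.ID}}"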
Interactive containers start and connect your console to the container. You can use it like an RDP session to explore a container.
Start a Windows Server Core container and connect to it:
docker container run --interactive --tty --rm `
microsoft/windowsservercore powershell
The Windows Server Core container is pretty much Windows Server 2016 without the GUI.
Explore the container environment:
ls C:\
Get-Process
Get-WindowsFeature
Now run
exit
to leave the PowerShell session, which stops the container process.
Background or "detached" containers are how you'll run most applications. They start in the background and keep running, like a Windows Service.
Run SQL Server as a detached container:
docker container run --detach --name sql `
--env ACCEPT_EULA=Y `
--env sa_password=DockerCon!!! `
microsoft/mssql-server-windows-express:2016-sp1
As long as SQL Server keeps running, Docker keeps the container running in the background.
You can check what's happening inside a container from the host, using the Docker CLI.
Check the logs and processes in the SQL container:
docker container top sql
docker container logs sql
You can't connect to this SQL container, because you started it without making ports accessible.
We'll see how to do that later - but for now you can run commands inside the container using the CLI.
Check what the time is inside the SQL container:
docker container exec sql `
powershell "Invoke-SqlCmd -Query 'SELECT GETDATE()'"
The SQL Server container is still running in the background.
Connect a PowerShell session to the container:
docker container exec -it sql powershell
The SQL data files live inside the container - you can find the MDF data and LDF log files for the standard databases.
Look at the default SQL data directory:
cd 'C:\Program Files\Microsoft SQL Server'
ls .\MSSQL13.SQLEXPRESS\MSSQL\data
Processes in a Windows Server container are actually running on the server.
Check the processes running in the container:
Get-Process
One is sqlservr. There are two powershell processes: one is the container startup script and the other is this PowerShell session.
Processes in containers run as standard Windows user accounts.
Compare the user accounts for the processes:
Get-Process -Name sqlservr,powershell -IncludeUserName
Containers have the usual Windows accounts, and a special ContainerAdministrator user.
On the Windows Server host, you can see the container processes.
Open another PowerShell terminal and run:
Get-Process -Name powershell -IncludeUserName
You'll see the PowerShell sessions from the container - with the same IDs but with a blank username. The container user doesn't map to any user on the host.
Windows Server container processes run natively on the host, which is why they are so efficient.
Container processes run as an unknown user on the host, so a rogue container process wouldn't be able to access host files or other processes.
Close the second PowerShell window, and exit the interactive Docker session in the first PowerShell window:
exit
The container is still running.
We don't need any of these containers, so you can remove them all. The --force flag removes containers even if they're still running:
docker container rm --force `
$(docker container ls --quiet --all)
Now you should understand different ways of running containers and connecting to containers, and how container processes run natively on the server.
So far we've used Microsoft's container images. Next you'll learn how to build your own.
You package your own Windows apps as Docker images, using a Dockerfile.
The Dockerfile syntax is straightforward. In this section you'll walk through two Dockerfiles which package websites to run in Windows Docker containers.
Have a look at the Dockerfile for this app. It builds a simple ASP.NET website that displays the host name of the server. There are only two instructions:
The Dockerfile copies a simple .aspx file into the content directory for the default IIS website.
You package an app by building a Docker image from a Dockerfile.
Switch to the directory and build the Dockerfile:
cd "$env:workshop\docker\101-dockerfiles-and-images\hostname-app"
docker image build --tag hostname-app .
The output shows Docker executing each instruction in the Dockerfile, and tagging the final image.
Now you can run a container from your image to run the app.
Run a detached container with the HTTP port published:
docker container run --detach --publish 80:80 `
--name app hostname-app
Any traffic coming into the server on port 80 will be managed by Docker and processed by the container.
When you're connected to the host, you can browse the website using the container's local (virtual) IP address.
Get the container IP address and browse to it:
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app
firefox "http://$ip"
Let's see how lightweight the containerized application is.
Run five containers from the same image:
for ($i=0; $i -lt 5; $i++) {
& docker container run --detach --publish-all --name "app-$i" hostname-app
}
The --publish-all flag publishes the container ports to random ports on the host.
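To see which host ports were assigned, ask Docker for the port mappings of one of the new containers:
docker container port app-0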
You now have multiple instances of the app running. The Docker image is the same, but each instance will show its own container ID.
Browse to all the new containers:
for ($i=0; $i -lt 5; $i++) {
$ip = & docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' "app-$i"
firefox "http://$ip"
}
You'll see that each site displays a different hostname, which is the container ID Docker generates.
On the host you have six w3wp processes running, which are the IIS worker processes for each container.
Check the memory and CPU usage for the apps:
Get-Process -Name w3wp | select Id, Name, WorkingSet, Cpu
The worker processes usually average around 40MB of RAM and <1 second of CPU time.
This is a simple ASP.NET website running in Docker, with just two lines in a Dockerfile. But there are two issues we need to fix:
The cold-start issue is because the IIS service doesn't start a worker process until the first HTTP request comes in.
IIS stores request logs in the container filesystem, but Docker is only listening for logs on the standard output from the startup program.
Check the logs from one of the app containers:
docker container logs app-0
The logs are locked inside the container filesystem - Docker doesn't know about them.
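You can prove the entries are there by reading the IIS log file from inside one of the containers. This is a sketch which assumes the default IIS log path - and the file may take a minute to appear, because IIS buffers log writes:
docker container exec app-0 powershell `
 "Get-Content C:\inetpub\logs\LogFiles\W3SVC1\*.log | Select-Object -Last 5"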
The next Dockerfile fixes those issues. These are the main features:
Build an image from this new Dockerfile.
cd "$env:workshop\docker\101-dockerfiles-and-images\tweet-app"
docker image build --tag tweet-app .
This is a static HTML site, but you run it in a container in the same way as the last app:
docker container run --detach --publish 8080:80 `
--name tweet-app tweet-app
You can reach the site by browsing to your Docker host externally on port 8080, or on the host itself by using the container IP address.
Find the container IP address and browse to it:
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' tweet-app
firefox "http://$ip"
Feel free to hit the Tweet button, sign in and share your workshop progress :)
You've built your own images from Dockerfiles. Right now they only live on the computer where you ran the docker image build command.
Next you'll see how to share those images by pushing them to Docker Hub, so anyone can run your apps in containers.
Images are the portable package that contains your application - your binaries, all the dependencies and the default configuration.
You share images by pushing them to a registry. Docker Hub is the most popular public registry. Most enterprises run their own private registry. You work with them in the same way.
You've built two images but you can't push them to a registry yet. To push to Docker Hub your images need to have your username in the image tag.
Start by capturing your Docker ID in a variable:
$env:dockerId='<insert-your-docker-id-here>'
Make sure you use your Docker ID, which is the username you use on Docker Hub. Mine is sixeyed, so I run $env:dockerId='sixeyed'.
Now you can tag your images. This is like giving them an alias - Docker doesn't copy the image, it just links a new tag to the existing image.
Add a new tag for your images which includes your Docker ID:
docker image tag hostname-app "$env:dockerId/hostname-app"; `
docker image tag tweet-app "$env:dockerId/tweet-app"
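The new tag is just an alias - you can confirm both tags point to the same image by comparing the image IDs:
docker image inspect --format '{{.Id}}' hostname-app "$env:dockerId/hostname-app"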
You can list all your local images tagged with your Docker ID. You'll see the images you've built, with the newest at the top:
docker image ls -f reference="$env:dockerId/*"
The image tags start with your Docker ID, so these can be pushed to Docker Hub.
You can use any tag for local images - you can use the microsoft prefix if you want - but you can't push those tags to Docker Hub unless you have access to the organization.
Log in to Docker Hub with your Docker ID:
docker login --username "$env:dockerId"
You have access to your own user image repositories on Docker Hub, and you can also be granted access to organization repositories.
Docker Hub is the public registry for Docker images.
Upload your images to the Hub:
docker image push $env:dockerId/hostname-app; `
docker image push $env:dockerId/tweet-app
You'll see the upload progress for each layer in the Docker image.
Open your user page on Docker Hub and you'll see the images are there.
firefox "https://hub.docker.com/r/$env:dockerId/hostname-app"
These are public images, so anyone can run containers from your images - and the apps will work in exactly the same way everywhere.
The logical size of those images is over 10GB each, but the bulk of that is in the Windows Server Core base image.
Those layers are already stored in Docker Hub, so they don't get uploaded - only the new parts of the image get pushed.
Docker shares layers between images, so every image that uses Windows Server Core will share the cached layers for that image.
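You can see the layers that make up one of your images - and how little of it is new on top of the base image - with the history command:
docker image history "$env:dockerId/hostname-app"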
Remove all containers:
docker container rm --force `
$(docker container ls --quiet --all)
You have a good understanding of the Docker basics now: Dockerfiles, images, containers and registries.
That's really all you need to get started Dockerizing your own applications.
Our demo app is a simple ASP.NET WebForms app which uses SQL Server for storage. It's a full .NET Framework app, which uses .NET version 4.7.2.
Right now the web app is a monolith. By the end of the workshop we'll have broken it down, but first we need to get it running.
Check out the Dockerfile for the application. It uses Docker to compile the app from source, and package it into an image.
Build the image:
cd $env:workshop
docker image build -t dwwx/signup-web `
-f .\docker\frontend-web\v1\Dockerfile .
The v1 Dockerfile is simple, but inefficient. The v2 Dockerfile splits the NuGet restore and MSBuild steps - which makes repeated builds faster - and it relays the application log file.
Build the image:
cd $env:workshop
docker image build -t dwwx/signup-web:v2 `
-f .\docker\frontend-web\v2\Dockerfile .
That's it!
You don't need Visual Studio or .NET 4.7.2 installed to build the app - you just need the source repo and Docker.
Try running the app in a container:
docker container run `
-d -p 8020:80 --name app `
dwwx/signup-web:v2
You can browse to port 8020 on your Docker host (that's your Windows Server 2016 VM). Or you can browse direct to the container:
Get the container's IP address and launch the browser:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app
firefox "http://$ip/app"
Oops.
Remember the app needs SQL Server, and there's no SQL Server on this machine. We'll run it properly next, but first let's clean up that container.
Remove the app container:
docker container rm -f app
Now we'll run the database in a container too - using Docker Compose to manage the whole app. Check out the v1 manifest, it specifies SQL Server and the web app.
Now run the app using compose:
docker-compose -f .\app\v1.yml up -d
You now have two containers running. One is the web app image you've just built from source, and the other is SQL Server from Microsoft's public image.
List all the running containers:
docker container ls
As before, browse to port 8020 on your Docker host or browse direct to the container:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_signup-web_1
firefox "http://$ip/app"
But let's check it really works. Click the Sign Up button, fill in the form and click Go! to save your details.
Check the data has been saved in the SQL container:
docker container exec app_signup-db_1 `
powershell `
"Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"
We're in a good place now. This could be a 10-year old WebForms app, and now you can run it in Docker and move it to the cloud - no code changes!
It's also a great starting point for modernizing the application.
Monoliths can run in containers just fine. But they aren't modern apps - they're just old apps running in containers.
You can rebuild a monolith into microservices, but that's a long-term project.
We'll do it incrementally instead, by breaking features out of the monolith and running them in separate containers - starting with the app's homepage.
Check out the new homepage. It's a static HTML site which uses Vue.js - it will run in its own container, so it can use a different technology stack from the main app.
The Dockerfile is really simple - it just copies the HTML content into an IIS image.
Build the homepage image:
docker image build `
-t dwwx/homepage `
-f .\docker\frontend-reverse-proxy\homepage\Dockerfile .
You can run the homepage on its own - great for fast iterating through changes.
Run the homepage:
docker container run -d -p 8040:80 --name home dwwx/homepage
The homepage is available on port 8040 on your Docker host, so you can browse there or direct to the container:
Get the homepage container's IP and launch the browser:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' home
firefox "http://$ip"
The new homepage looks good, starts quickly and is packaged in a small Nano Server image.
It doesn't work on its own though - click Sign Up and you'll get an error.
To use the new homepage without changing the original app we can run a reverse proxy in another container.
We're using Nginx. All requests come to Nginx, and it proxies content from the homepage container or the original app container, based on the requested route.
Nginx can do a lot more than that - in the nginx.conf configuration file we're setting up caching, and you can also use Nginx for SSL termination.
Build the reverse proxy image:
docker image build `
-t dwwx/reverse-proxy `
-f .\docker\frontend-reverse-proxy\reverse-proxy\Dockerfile .
Check out the v2 manifest - it adds services for the homepage and the proxy.
Only the proxy has ports specified. It's the public entrypoint to the app: the other containers can access each other, but the outside world can't reach them.
Upgrade to v2:
docker-compose -f .\app\v2.yml up -d
Compose compares the running state to the desired state in the manifest and starts new containers.
The reverse proxy is published to port 8020, so you can browse there or to the new Nginx container:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip"
Now you can click through to the original Sign Up page.
Check nothing's broken.
Click the Sign Up! button, fill in the form and click Go! to save your details.
Check the new data is there in the SQL container:
docker container exec app_signup-db_1 powershell `
"Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"
So now we have a reverse proxy which lets us break UI features out of the monolith.
We're running a new homepage with Vue, but we could easily use a CMS for the homepage by running Umbraco in a container - or we could replace the Sign Up form with a separate component using Blazor.
These small units can be independently deployed, scaled and managed. That makes it easy to release front end changes without regression testing the whole monolith.
Docker makes it easy to run features in separate containers, and takes care of communication between containers.
Right now the web application loads reference data direct from the database - that's the list of countries and roles in the dropdown boxes.
We're going to provide that reference data through an API instead.
The new component is a simple REST API. You can browse the source for the Reference Data API - there's one controller to fetch countries, and another to fetch roles.
The API uses a new technology stack:
We can use new technologies without impacting the monolith, because this component runs in a separate container.
Check out the Dockerfile for the API.
It uses the same principle to compile and package the app using containers, but the images use .NET Core running on Nano Server.
Build the API image:
docker image build `
-t dwwx/reference-data-api `
-f .\docker\backend-rest-api\reference-data-api\Dockerfile .
You can run the API on its own, but it needs to connect to SQL Server.
The image bundles a default database connection string, and you can override it when you run containers with an environment variable.
Run the API, connecting it to the existing SQL container:
docker container run -d -p 8060:80 --name api `
-e ConnectionStrings:SignUpDb="Server=signup-db;Database=SignUp;User Id=sa;Password=DockerCon!!!" `
dwwx/reference-data-api
The API is available on port 8060 on your Docker host, so you can browse there or direct to the container:
Get the API container's IP and launch the browser:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' api
firefox "http://$ip/api/countries"
Replace /countries with /roles to see the other dataset.
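You can also call the API from PowerShell instead of the browser - a quick check using Invoke-RestMethod against the same container IP:
Invoke-RestMethod "http://$ip/api/countries"
Invoke-RestMethod "http://$ip/api/roles"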
Now we can run the app and have the reference data served by the API. Check out the v3 manifest - it adds a service for the REST API.
The manifest also configures the web app to use the API - using Dependency Injection to load a different implementation of the reference data loader.
Upgrade to v3:
docker-compose -f .\app\v3.yml up -d
There are lots of containers running now - the original web app and database, the new homepage and reverse proxy, and the new REST API.
List all the running containers:
docker container ls
The entrypoint is still the proxy listening on port 8020, so you can browse there or to the container:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip"
Now when you click through to the original Sign Up page, the dropdowns are loaded from the API.
The new REST API writes log entries to the console, which Docker can read from the container.
The logs will show that the countries and roles controllers have been called - the request came from the web app.
Check the logs:
docker container logs app_reference-data-api_1
The API uses a different ORM from the main app, but the entity classes are shared, so the reference data codes match up.
Click the Sign Up! button, fill in the form and click Go! to save your details.
Check the new data is there in the SQL container:
docker container exec app_signup-db_1 powershell `
"Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"
Now we've got a small, fast REST API providing a reference data service. It's only available to the web app right now, but we could easily make it publicly accessible.
How? Just by adding a new routing rule in the reverse proxy that's already part of our app. It could direct /api requests into the API container.
That's something you can try out yourself.
Hint: the location blocks in nginx.conf are where you need to start.
Right now the app saves data by synchronously connecting to SQL Server. That's a bottleneck which will stop the app performing if there's a peak in traffic.
We'll fix that by using a message queue instead - running in a Docker container.
When you sign up the web app will publish an event message on the queue, which a message handler picks up and actions. The message handler is a .NET Framework console app running in another container.
The new component is a simple .NET console app. You can browse the source for the save message handler - the work is all done in the Program class.
This is a full .NET Framework app, so it can continue to use the original Entity Framework logic from the monolith. It's a low-risk approach to updating the architecture.
Check out the Dockerfile for the message handler.
It uses the same principle to compile and package the app using containers, and the images use .NET Framework running on Windows Server Core.
Build the message handler image:
docker image build `
-t dwwx/save-handler `
-f .\docker\backend-async-messaging\save-handler\Dockerfile .
Check out the v4 manifest - it adds services for the message handler and the message queue.
The message queue is NATS, a high-performance in-memory queue which is ideal for communication between containers.
The manifest also configures the web app to use messaging - using Dependency Injection to load a different implementation of the prospect save handler.
Upgrade to v4:
docker-compose -f .\app\v4.yml up -d
You now have a message queue and a message handler running in containers.
The message handler writes console log entries, so you can see that it has connected to the queue and is listening for messages.
Check the handler logs:
docker container logs app_signup-save-handler_1
You should see that the handler is connected and listening.
The entrypoint is still the proxy listening on port 8020, so you can browse there or to the container:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip"
Now when you submit data, the web app publishes an event and the handler makes the database save.
Click the Sign Up! button, fill in the form and click Go! to save your details.
The UX is the same, but the save is asynchronous. You can see that in the logs for the message handler.
Check the handler logs:
docker container logs app_signup-save-handler_1
You should see that the handler has received and actioned a message, and that it gets an ID back from the database.
To be sure, let's check the data really has been saved in the database.
Check the new data is there in the SQL container:
docker container exec app_signup-db_1 powershell `
"Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"
Now we've got an event driven architecture! Well, not completely - but for one key path through our application, we have event publishing.
You can easily extend the app now by adding new message handlers which subscribe to the same event.
A new message handler could insert data into Elasticsearch and let users run their own analytics with Kibana.
The app uses SQL Server for storage, which isn't very friendly for business users to get reports. Next we'll add self-service analytics, using enterprise-grade open-source software.
We'll be running Elasticsearch for storage and Kibana to provide an analytics front-end.
To get data into Elasticsearch when a user signs up, we just need another message handler, which will listen to the same messages published by the web app.
The new handler is a .NET Core console application. The code is in QueueWorker.cs - it subscribes to the same event messages, then enriches the data and stores it in Elasticsearch.
The new message handler only uses the message library from the original app, so there are no major dependencies and it can use a different tech stack.
The Dockerfile follows a similar pattern - stage 1 compiles the app, stage 2 packages it.
Build the image in the usual way:
cd $env:workshop; `
docker image build --tag dwwx/index-handler `
--file .\docker\backend-analytics\index-handler\Dockerfile .
The Elasticsearch team maintain their own Docker image for Linux containers, but not yet for Windows.
It's easy to package your own image to run Elasticsearch in Windows containers, but we'll use one I've already built: sixeyed/elasticsearch.
The Dockerfile downloads Elasticsearch and installs it on top of the official OpenJDK image.
Same story with Kibana, which is the analytics UI that reads from Elasticsearch.
We'll use sixeyed/kibana.
The Dockerfile downloads and installs Kibana, and it packages a startup script with some default configuration.
In the v5 manifest, none of the existing containers get replaced - their configuration hasn't changed. Only the new containers get created:
cd "$env:workshop"; `
docker-compose -f .\app\v5.yml up -d
Go back to the sign-up page in your browser. It's the same IP address because the app container hasn't been replaced here.
Add another user and you'll see the data still gets added to SQL Server, but now both message handlers have log entries showing they handled the event message.
Check the data in the SQL container, and the logs from both message handlers:
docker container exec app_signup-db_1 powershell `
"Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"; `
docker container logs app_signup-save-handler_1; `
docker container logs app_signup-index-handler_1
You can add a few more users with different roles and countries, if you want to see a nice spread of data in Kibana.
Kibana is also a web app running in a container, listening on port 5601.
Get the Kibana container's IP address and browse:
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_kibana_1; `
firefox "http://$($ip):5601"
The Elasticsearch index is called prospects, and you can navigate around the data in Kibana.
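You can also query Elasticsearch directly over its REST API on port 9200. This is a sketch which assumes the Elasticsearch service is named elasticsearch in the v5 manifest - check docker container ls if the container name is different:
$esIp = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_elasticsearch_1
Invoke-RestMethod "http://$($esIp):9200/prospects/_search" | ConvertTo-Json -Depth 5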
The new event-driven architecture lets you add powerful features without updating the original monolith.
There's no regression testing to do for this release, the new analytics functionality won't impact the original app, and power users can build their own Kibana dashboards.
Containerized applications give you new opportunities for monitoring. You export metrics from each container, collect them centrally and show your whole application health in a dashboard.
The metrics collector and dashboard run in containers too, so now you can run the exact same metrics stack in dev that you use in production.
Apps running in Windows containers already collect metrics. Windows Performance Counters run in containers in the same way that they do on Windows Server.
You can export IIS Performance Counters from web containers to get key metrics without having to change your code - you package an exporter utility alongside your web application.
Here's a new version of the web application Dockerfile. It packages a metrics exporter utility.
The utility app reads from Windows Performance Counters and publishes them as an API on port 50505.
The web code is unchanged. The exporter comes from the dockersamples/aspnet-monitoring sample app.
Build the new version of the web app image, which includes the metrics exporter:
cd $env:workshop; `
docker image build --tag dwwx/signup-web:v3 `
--file ./docker/metrics-runtime/signup-web/Dockerfile .
You can run the new version in a container just to check the metrics you get out.
docker container run -d -P `
-e ConnectionStrings:SignUpDb='Server=signup-db;Database=SignUp;User Id=sa;Password=DockerCon!!!' `
--name web-v3 dwwx/signup-web:v3
Windows containers connect to the same default Docker network, so this container will use the existing database and message queue.
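You can check which containers are attached to the default nat network - the new web container should be listed alongside the database and message queue containers:
docker network inspect nat --format '{{ range .Containers }}{{ .Name }} {{ end }}'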
Browse to the app and refresh the page a few times:
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' web-v3; `
firefox "http://$ip/app/SignUp"
This starts the w3wp worker process, which will start recording metrics in IIS and .NET Performance Counters.
Now you can look at the metrics which the exporter utility makes available:
firefox "http://$($ip):50505/metrics"
The metrics API uses the Prometheus format. Prometheus is the most popular metrics server for cloud-native apps, and the format is widely used.
Now that we know how the metrics look, let's remove the new container:
docker rm -f web-v3
Runtime metrics can tell you how hard your app is working. In this case there are key details from the IIS runtime, which you can put into your application dashboard:
Using the exporter utility gives you all this without changing code - perfect for legacy apps.
The next level of detail is application-level metrics, recording details about what your app is doing. You surface those through a metrics API in the same way as the runtime metrics.
We'll add application metrics to the message handlers, so we can see the flow of messages through the system.
The message handlers already have code to record metrics when they handle messages.
You can see this in the Program.cs file for the SQL Server handler, and the QueueWorker.cs file for the Elasticsearch handler.
Both handlers use a community Prometheus package on NuGet, prometheus-net. It's a .NET Standard library, so you can use it from .NET Framework and .NET Core apps.
Prometheus uses a time-series database - it grabs metrics on a schedule and stores every value along with a timestamp. You can aggregate across dimensions or drill down to specific values.
You should record metrics at a fairly coarse level - "Event count" in this example. Then add detail with labels, like the processing status and the hostname of the handler.
There's a new Dockerfile for the save handler and a new Dockerfile for the index handler. They package the same code, but they set default config values to enable the metrics API.
cd $env:workshop; `
docker image build -t dwwx/save-handler:v2 `
-f .\docker\metrics-application\save-handler\Dockerfile . ; `
docker image build -t dwwx/index-handler:v2 `
-f .\docker\metrics-application\index-handler\Dockerfile .
The build should be super-fast, because of the cache.
You can run containers with the new message handler apps to see what sort of metrics they expose.
Run the new version of the SQL Server handler:
docker container run -d `
-e ConnectionStrings:SignUpDb='Server=signup-db;Database=SignUp;User Id=sa;Password=DockerCon!!!' `
--name save-v2 dwwx/save-handler:v2
The save message handler is a .NET Framework console app. The Prometheus NuGet package adds a self-hosted HTTP server for the metrics API.
Check out the metrics:
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' save-v2; `
firefox "http://$($ip):50505/metrics"
Port 50505 isn't standard - it's just the port I've chosen.
The index message handler records similar metrics about messages handled, and the processing status.
Run the new version of the Elasticsearch handler:
docker container run -d -P --name index-v2 dwwx/index-handler:v2
The index message handler is a .NET Core console app. The same Prometheus NuGet package publishes the metrics API with a self-hosted web server.
Check out the metrics:
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' index-v2; `
firefox "http://$($ip):50505/metrics"
The raw data is very basic. Prometheus will make it more useful.
Now that we know how the metrics look, let's remove the new containers:
@('save-v2', 'index-v2') | foreach { docker container rm -f $_ }
Metrics about what your application is actually doing give you useful insight into your application health, and how work is being distributed among containers.
You need to change code to get this level of detail, but all the major languages have Prometheus client libraries which make it very easy to capture and export metrics.
Exposing metrics endpoints from all your app containers is the first step to getting consistent monitoring.
Next you need to run two other components - a metrics server, which grabs and stores all the metrics from your containers, and a dashboard which presents the data in a usable way.
We'll do that by running Prometheus and Grafana - the leading tools in this space - in containers alongside our app.
Prometheus is a metrics server. It runs a time-series database to store instrumentation data, polls configured endpoints to collect data, and provides an API (and a simple Web UI) to retrieve the raw or aggregated data.
The Prometheus team maintain a Docker image for Linux, but we'll use a Windows Docker image from dockersamples/aspnet-monitoring.
Prometheus uses a simple configuration file, listing the endpoints to scrape for metrics.
We'll use this Dockerfile to bundle a custom prometheus.yml file on top of the existing Prometheus image.
cd $env:workshop; `
docker image build -t dwwx/prometheus `
-f ./docker/metrics-dashboard/prometheus/Dockerfile .
Now you have a Docker image that will run Prometheus with your custom config.
Grafana is a dashboard server. It can connect to data sources and provide rich dashboards to show the overall health of your app.
There isn't an official Windows variant of the Grafana image, but we can use the one from dockersamples/aspnet-monitoring.
Grafana has an API you can use to automate setup, and we'll use that to build a custom Docker image.
To make a custom Grafana image, you need to configure a data source, create users and deploy your own dashboard. The Grafana Dockerfile does that.
It uses data source provisioning and dashboard provisioning, which are standard Grafana features, and the Grafana API to set up a read-only user.
Build the custom Grafana image:
cd $env:workshop; `
docker image build -t dwwx/grafana `
-f ./docker/metrics-dashboard/grafana/Dockerfile .
Now you can deploy the updated application. The v6 manifest uses the upgraded web app and message handlers, and includes containers for Prometheus and Grafana.
Update the running application:
docker-compose -f .\app\v6.yml up -d
Compose will recreate changed services and start new ones.
Browse to the new proxy container, and send some load - refresh the sign up page a few times, and then submit the form:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip"
The web application and the message handlers are collecting metrics now, and Prometheus is scraping them.
Browse to the Prometheus UI to see the metrics:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_prometheus_1
firefox "http://$($ip):9090"
Try looking at the process_cpu_seconds_total metric in Graph view. This shows the amount of CPU time used by the message handlers, which is exported from a standard .NET performance counter.
The Prometheus UI is good for browsing the collected metrics and building up complex queries.
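The same queries are available over the Prometheus HTTP API, which is what Grafana uses. A quick sketch, querying the same metric from PowerShell:
Invoke-RestMethod "http://$($ip):9090/api/v1/query?query=process_cpu_seconds_total" | ConvertTo-Json -Depth 5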
But the Prometheus UI isn't fully-featured enough for a dashboard - that's why we have Grafana.
The Grafana container is already running with a custom dashboard, reading the application and runtime metrics from Prometheus.
Browse to the Grafana container:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_grafana_1
firefox "http://$($ip):3000"
Log in with the credentials for the read-only account created in the Grafana Docker image:
You'll see the dashboard showing real-time data from the app. The app dashboard is set as the homepage for this user.
The dashboard shows how many HTTP requests are coming in to the web app, and how many events the handlers have received, processed and failed.
It also shows memory and CPU usage for the apps inside the containers, so at a glance you can see how hard your containers are working and what they're doing.
Containerized apps run on dynamic container platforms, maybe with hundreds of containers running across dozens of servers in production.
A metrics dashboard like this is essential to being ready for production - so when you go live you can be confident that your app is working correctly.
There's one missing piece from this dashboard - metrics from the Docker platform itself. I cover that in Monitoring Containerized Application Health.
This web app uses log4net, which is configured to write log entries to a file on the C drive.
Those log entries are being written inside the container; we just need to read them from the file back out to Docker.
The new Dockerfile uses a Docker volume for the path C:\logs, which is where the log file gets written. That means the log data is stored outside of the container, using Docker's pluggable volume system.
And the startup script has been extended, so it ends by tailing the log file - relaying all the log entries to the console, which Docker is monitoring.
Tag the image as v4, which includes logging:
cd $env:workshop; `
docker image build -t dwwx/signup-web:v4 `
-f ./docker/prod-logging/signup-web/Dockerfile .
The v7 manifest uses the upgraded web app, which echoes the existing log4net log entries back out to Docker. It also maps the log volume to a local directory on the VM.
Create the log folder and update the application:
mkdir C:\web-logs
docker-compose -f .\app\v7.yml up -d
The container startup script writes some initial output.
Check that before you open the website:
docker container logs app_signup-web_1
You'll see the steps from the script prefixed with STARTUP:
When the app is running, there are additional log entries written by log4net.
Check out the app by browsing to the new container, and saving some data:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip"
Now look at the logs again:
docker container logs app_signup-web_1
You'll see entries from log4net flagged with DEBUG and INFO levels.
The app is writing the log file at the path C:\logs inside the container, but that's a volume which is being mapped to C:\web-logs on the VM.
It's transparent to the container, but that log file actually exists on the VM.
You can see the same data from the host:
cat C:\web-logs\SignUp.log
You can manage log storage outside of the container, and use a different storage device.
This is a simple pattern to get logs from existing apps into Docker, without changing application code.
You can use it to echo logs from any source in the container - like log files or the Event Log.
Docker has a pluggable logging system, so as long as you get your logs into Docker, it can automatically ship them to Splunk, Elasticsearch etc.
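For example, you can cap how much log data Docker keeps locally with the json-file driver options. This is just a sketch - the container name web-logged is only for illustration, and you don't need to run it for the workshop:
docker container run -d `
--log-driver json-file `
--log-opt max-size=10m --log-opt max-file=5 `
--name web-logged dwwx/signup-web:v4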
A key benefit of containers is that you deploy the same image in every environment, so what's in production is exactly what you tested.
You should package images with a default set of config for running in dev, but you need a way to inject configuration from the environment into the container.
We'll do this next, extending the web image to read external configuration.
The app has a Web.config file, which contains setup that will be the same in every environment, and a log4net.config file and connectionStrings.config file which will change.
The updated Dockerfile sets that up using environment variables to inject a different source path for the config files.
The startup script checks those variables, and if they're set it overwrites the default config files with the new sources.
Tag the image as v5, which includes variable configuration:
cd $env:workshop; `
docker image build `
-t dwwx/signup-web:v5 `
-f ./docker/prod-config/signup-web/Dockerfile .
There's also an updated Dockerfile for the save handler, which adds the same config-loading logic.
Tag this image as v3, which includes variable configuration:
docker image build `
-t dwwx/save-handler:v3 `
-f ./docker/prod-config/save-handler/Dockerfile .
We'll swap out the configuration files later when we run the app in a Docker swarm cluster.
For now we need to make sure we haven't broken the app, so we'll run it with the default config in the image. The v8 manifest just updates to the new images.
Update the application:
docker-compose -f .\app\v8.yml up -d
Check out the app by browsing to the new container, and saving some data:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip"
It all works as before, using the default config in the image.
The logs will show that the startup script did run the config logic:
docker container logs app_signup-web_1
docker container logs app_signup-save-handler_1
You'll see STARTUP: Loading config files - but the source path isn't specified, so the default files are not overwritten.
Your containerized apps should have default configuration settings bundled in the image, so the team can use docker container run without needing any extra setup.
But you need to be able to inject external configuration data into your container at runtime, which is what this pattern does. The config source is configurable, so it can come from the files in the image, or from the container platform.
We'll see that in action shortly.
Old apps were often written with an assumption of availability - they didn't check that their dependencies were available, they just tried to use them, expecting them to exist.
That assumption doesn't work in dynamic cloud and container environments. Apps should check that their dependencies are available before they start, and exit if they're not.
We'll add that functionality to our old ASP.NET app by packaging a dependency checker utility in the Docker image.
The utility is just a .NET Framework console app. It uses the same configuration structure as the ASP.NET app, so it will use the same settings as the app.
In the Program class the app uses Polly to wrap a SQL connection check. It retries three times to connect to SQL Server, and if the third attempt fails, the utility returns an exit code of 1.
A new stage in the updated Dockerfile builds the dependency checker from source. The output is copied into the final Docker image, alongside the original ASP.NET app.
There's also a new environment variable, DEPENDENCY_CHECK_ENABLED, used to turn the dependency check off in dev environments.
In the startup script that flag gets checked, and if it's set then the dependency checker gets run.
Tag the image as v6, which includes the dependency check:
cd $env:workshop; `
docker image build `
-t dwwx/signup-web:v6 `
-f ./docker/prod-dependencies/signup-web/Dockerfile .
We can test the dependency check by removing the database container, so when we run the new web app it should fail.
Remove the database container:
docker container rm -f app_signup-db_1
Now run the app container on its own, so you can see the dependency check in action.
Run the container interactively to see the output:
docker container run -it `
-e DEPENDENCY_CHECK_ENABLED=1 `
dwwx/signup-web:v6
You'll see the check fires and fails, and then the container exits.
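You can confirm the failure by checking the exit code of that container - assuming the startup script relays the checker's exit code, it should be non-zero:
docker container inspect --format '{{ .State.ExitCode }}' `
$(docker container ls --all --last 1 --quiet)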
Ah - but what about the data we saved earlier? It was stored in the filesystem of the database container we've just destroyed, so it's gone forever.
Let's make sure that doesn't happen again. This Dockerfile for the database is based on Microsoft's SQL Server image, but it adds a Docker volume for data storage.
The initialization script checks that storage location, and it will either create a new database, or attach the existing database files if they exist.
We can run persistent database containers from this image by mapping the volume, or run it without a volume map to get a disposable database.
Build the image from the Dockerfile:
cd $env:workshop; `
docker image build `
-t dwwx/signup-db `
-f ./docker/prod-dependencies/signup-db/Dockerfile .
The v9 manifest uses the upgraded web app and database images. It also mounts the database volume from a folder on the host.
Create the folder and update the application:
mkdir C:\mssql
docker-compose -f .\app\v9.yml up -d
The database container will start before the web container because we're using Docker Compose on a single machine.
You'll see the database container and web container logging startup instructions:
docker container logs app_signup-db_1
docker container logs app_signup-web_1
In a cluster you don't have ordering guarantees, which is why we need the dependency check in the web app.
Let's just check the app is still working:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip"
This pattern makes your old apps ready to run in a dynamic environment. In a production cluster you can't have ordering dependencies - that's too limiting for the platform.
Explicit dependencies mean the container stops if the app can't work. The cluster starts a replacement container - the dependencies may be available by then. If not, the new container stops and the cluster starts another one.
It's important to keep this functionality optional, so in dev and test environments you can run just part of the stack without the container constantly failing.
Healthchecks are the final piece to making your old apps behave like new apps when they're running in containers.
The healthcheck should exercise key logic in your app and make sure it's functioning properly. You can do that by adding a dedicated /health API endpoint to your app.
We'll use the other option, bundling a healthcheck utility in the Docker image.
The utility is another .NET Framework console app. It just makes an HTTP GET to the app running locally in the container.
In the Program class the utility expects the site to return a 200 OK response within 200 milliseconds. If it doesn't do that, the health check returns an exit code of 1, meaning failure.
There's another new stage in the updated Dockerfile - it builds the health check utility from source.
Then the output is copied into the final Docker image, alongside the original ASP.NET app and the dependency checker.
There's also a HEALTHCHECK instruction, which tells Docker to run the utility every 30 seconds. Docker records the result of executing each healthcheck.
Tag the image as v7, which includes the health check:
cd $env:workshop; `
docker image build `
-t dwwx/signup-web:v7 `
-f ./docker/prod-health/signup-web/Dockerfile .
The v10 manifest uses the upgraded web app, and it specifies a different schedule for the healthcheck.
Update the application:
docker-compose -f .\app\v10.yml up -d
The app still works in the same way:
$ip = docker container inspect `
--format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_proxy_1
firefox "http://$ip/app"
Containers with a healthcheck report their current status.
You should see that the web app is showing as Up and healthy:
docker container list
And you can see the results of the healthcheck executions by inspecting the container:
docker container inspect app_signup-web_1
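If you just want the health status rather than the full JSON, pull out the Health section with a format string:
docker container inspect --format '{{ .State.Health.Status }}' app_signup-web_1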
A healthcheck lets the container platform test if your application is working correctly. If it's not the platform can kill an unhealthy container and start a replacement.
This is perfect for old apps which have known issues - if they're known to crash and stop responding, the healthcheck means Docker will repair the app with minimal downtime and with no manual intervention.
Production readiness with these patterns means our legacy ASP.NET WebForms app will behave just like a cloud-native app when we deploy it to a Docker cluster.
In production environments, you will run multiple Docker engines in a cluster and manage services rather than individual containers.
The clustering technology built into Docker is called swarm mode - you can easily create a swarm across dozens of machines just by running one command on each.
You can also run a single-node swarm for dev and test environments.
The two most popular container orchestrators are Docker swarm and Kubernetes.
Kubernetes has many more integration points than swarm which makes it easier for cloud providers to offer managed Kubernetes services, like AKS on Azure.
But Kubernetes doesn't support Windows containers - yet. Windows support went into beta in 2018, and is expected to GA in Q1 of 2019.
We don't need any of these containers, so we'll remove them all.
docker container rm --force `
$(docker container ls --quiet --all)
You can run a single node swarm, which gives you all the functionality of swarm mode but without high availability or the opportunity to scale horizontally.
Switching to swarm mode is easy:
docker swarm init
Your RDP session will flicker here. That's due to a networking change Windows makes to support swarm-mode networking.
That makes your node a swarm manager. The output is the command you would use to join other nodes to the swarm, but for now we'll stick with a single node.
Shortly we'll deploy the workshop app to the swarm, but before that we'll just explore swarm mode with some simple services.
Services are the unit of deployment in swarm mode. You don't run containers, you create services which Docker deploys as containers on the swarm.
You can create services imperatively with the command line, or declaratively using compose files.
Create a simple service which pings itself:
docker service create `
--entrypoint "ping -t localhost" `
--name pinger microsoft/nanoserver
The declarative approach is better. We'll use that for the final app.
Services are a first-class resource in swarm mode. You can list services, list the containers running the service (called "tasks"), and check the logs for the service:
docker service ls
docker service ps pinger
docker service logs pinger
Services are an abstraction over containers. You don't care which node is running the containers, you just specify a service level, and the swarm maintains it.
Scale up the service by running more replicas:
docker service update --replicas=3 pinger
Scaling up the service adds more task containers. This is a single node swarm so they will all be running on your VM, but it's the same command for a swarm of any size.
You can see the extra tasks, and their logs:
docker service ps pinger
docker service logs pinger
Swarm mode supports automatic updates. You upgrade your app by updating the service with a new image.
This is a zero-downtime update for services with multiple replicas, because Docker replaces one container at a time, so your app stays available.
Update the image for the pinger service:
docker service update --image microsoft/windowsservercore pinger
This replaces the container image. The start command is still the same, so when new tasks start, they will run the original command.
Check the task list for the service, and you can see the rollout happens gradually.
Some containers will be using the original definition with Nano Server, and some will be using Windows Server Core:
docker service ps pinger
Docker ensures new tasks are healthy before carrying on with the rollout. You have fine control over how the rollout happens.
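For example, you can set how many tasks are replaced in parallel and the delay between batches - a sketch of the relevant flags, which you don't need to run now:
docker service update `
--update-parallelism 2 `
--update-delay 10s `
--image microsoft/windowsservercore pinger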
The service definition is stored in the swarm, securely persisted among the manager nodes.
Check the service details:
docker service inspect pinger
Swarm saves the service definition, so you can easily roll back an update if the new version of the app has a problem.
You don't need to use a compose file, because the swarm has all the details.
docker service update --rollback pinger
The rollback happens in the same way, with task containers being replaced with versions using the original definition:
docker service ps pinger
And you can see the logs are still collected from all the containers:
docker service logs pinger
You can stop and remove all the task containers just by removing the service:
docker service rm pinger
Production swarm clusters typically have 3 manager nodes for high availability, and as many worker nodes as you need for your workloads.
One swarm can be a cluster of hundreds of worker nodes, and you manage your services in exactly the same way for any size of swarm.
Docker takes care of service levels, so if a server goes down and takes containers with it, Docker spins up replacements on other nodes.
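You can see that behaviour by draining a node, which tells the swarm to reschedule its containers elsewhere - a sketch with a placeholder node name, not something to run on a single-node swarm:
docker node update --availability drain <node-name>
docker node update --availability active <node-name>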
Now we're ready to deploy the real application to a real swarm.
Exit swarm mode, which will remove your single-node cluster:
docker swarm leave -f
And remove any running containers:
docker container rm -f $(docker container ls -aq)
It's time to buddy up!
Your workshop VM is in the same virtual network as your neighbour's, so you can create a swarm between you:
If you don't like the sound of this, you can continue with the single-node swarm on your own VM, or join my swarm.
This part is for the manager
On the manager's VM, get the machine's internal IP address:
ipconfig
The internal address will start with 10.0.0. Use it to create your swarm:
docker swarm init `
--listen-addr <ip-address> `
--advertise-addr <ip-address>
The output of docker swarm init is a swarm join command which you need to share with your buddies.
You can use the Google sheet for this. Or memorize the token...
This part is for the worker(s)
Run the docker swarm join command which your manager has shared with you.
Now you can sit back for a while. In swarm mode all the work gets scheduled on the manager node, so you'll be a passenger for the next few steps.
This part is for the manager
Check you have all the expected nodes in the swarm:
docker node ls
The output will list all the nodes in the swarm. You should have one manager and multiple workers - and they should all be in the ready state.
You deploy apps to the swarm using Docker Compose files. There are some attributes which only apply to swarm mode (like the deploy section), and some which are ignored in swarm mode (like depends_on).
You can combine multiple compose files to make a single file. That's useful for keeping the core solution in one compose file like v11-core.yml, and adding environment-specific overrides in other files like v11-dev.yml and v11-prod.yml.
Everyone can do this part
cd $env:workshop
docker-compose `
-f .\app\v11-core.yml `
-f .\app\v11-prod.yml config > docker-stack.yml
The generated docker-stack.yml file contains the merged contents, ready for deployment. It also uses Docker config objects and Docker secrets.
A Docker Swarm cluster does more than just manage containers. There's a resilient, encrypted data store in the cluster which you can use with your containers.
Communication between swarm nodes is encrypted too, so you can safely store confidential data like passwords and keys in the swarm.
Docker surfaces config data as files inside the container, so it's all transparent to your app.
This part is for the manager
There are two ways to store configuration data in Docker swarm. You use config objects for data which isn't confidential.
Store the log4net.config file in the swarm:
docker config create `
netfx-log4net `
./app/configs/log4net.config
Configs aren't secret, so you can read the values back out of the swarm.
Check the config object is stored:
docker config inspect --pretty netfx-log4net
This is an XML config file. You can store any type of data in the swarm.
This part is for the manager
Store the connectionStrings.config file in the swarm:
docker secret create `
netfx-connectionstrings `
./app/secrets/connectionStrings.config
Secrets are secret - you cannot read the original plain text back out of the swarm.
Check the secret object is stored:
docker secret inspect --pretty netfx-connectionstrings
It's still XML, but it's only delivered as plain text inside the container that needs it.
This part is for the manager
Deploy the stack:
docker stack deploy -c docker-stack.yml signup
Docker creates all the resources in the stack: an overlay network, and a set of services. It will deploy service tasks across the swarm, so you should see containers running on many nodes.
Application stacks are first-class objects in swarm mode. You can see the stacks which are running, and the services which are in each stack:
docker stack ls
docker stack ps signup
You can navigate around the services, and make changes to the deployment. But your stack file is the source of truth, which lets you work declaratively.
The swarm keeps your app running at the desired service level. You can manually remove containers from worker nodes, have workers leave the swarm, or even stop the worker VMs - Docker will keep the app running.
You can add more nodes to the swarm just by running the swarm join command, and immediately add capacity.
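If you need the join command again, any manager can print it - with the token - for workers or for additional managers:
docker swarm join-token worker
docker swarm join-token manager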
Thanks for coming to the workshop. We hope it was useful and we'll be glad to have your feedback.
The content for this workshop will stay online and you don't need a Windows Server VM to follow along - you can do everything with Docker Desktop on Windows 10.
But before you go...
Use Play with Docker and the Play with Docker labs
Follow @EltonStoneman and @stefscherer on Twitter
Read Docker on Windows, the book
And watch these Pluralsight training courses:
Don't have Pluralsight? Ping @EltonStoneman and he'll send you a 1-month trial code.