Mike Lindegarde... Online

Things I'm likely to forget.

Debugging ASP.NET Core Web APIs with Swagger

Debugging APIs

Debugging .NET based RESTful APIs isn't really that difficult.  Once you have your code base successfully passing all unit tests, it's just a matter of having the right tools and knowing the URLs for all of the endpoints you need to test.  Usually you're doing this to verify that your API works as expected (authentication / authorization, HTTP status codes, location headers, response bodies, etc...)

For a long time now I've been using an excellent Chrome App called Postman.  Postman offers a lot of great features:

  1. Slick user interface
  2. Ability to save API calls as Collections
  3. You can access your Collections from any computer (using Chrome)
  4. It supports Environments (which allow you to setup environment variables)
  5. You can share Collections and Environments
  6. Test automation

So why not just stick with Postman?  Simple: it doesn't lend itself well to exploring an API.  That's not a problem for the API developer (usually); however, it is a problem for third parties looking to leverage your API (be it another team or another company).  Swagger does an excellent job documenting your API and making it much easier for other users to explore and test.

Using Swagger with an ASP.NET Core 1.0 Web API

Like most things in the .NET world, adding Swagger boils down to adding a NuGet package to your project.  I would assume you could still use the NuGet Package Manager Console; however, we'll just add the required package to our project.json file:

"dependencies": {
    "Microsoft.NETCore.App": {
        "version": "1.0.0",
        "type": "platform"
    },
    "Swashbuckle": "6.0.0-beta901"
}

Next you'll need to add a few lines to your Startup.cs file:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    // Register the Swagger generator.
    services.AddSwaggerGen();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseMvc();

    // Serve the generated Swagger JSON and the Swagger UI.
    app.UseSwagger();
    app.UseSwaggerUi();
}

Now you should be able to run your app and explore your API using Swagger by appending /swagger/ui to the Web API's base URL.  It would probably be a good idea to set your project's Launch URL to the Swagger UI's URL.  You can set this by right clicking on your project, selecting Properties, and navigating to the Debug tab.
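If you'd rather set it by hand, the launch URL lives in Properties/launchSettings.json.  Here's a minimal sketch; the profile name is made up for illustration, and yours will match your own project:

```json
{
  "profiles": {
    "SampleApi": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "swagger/ui"
    }
  }
}
```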

Security via OperationFilters

In most situations you're going to need to add some sort of Authorization header to your API call.  Fortunately Swashbuckle provides a relatively easy way to add new fields to the Swagger UI.

The following class will take care of adding the Authorization field to the Swagger UI:

public class AuthorizationHeaderParameterOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        var filterPipeline = context.ApiDescription.ActionDescriptor.FilterDescriptors;
        var isAuthorized = filterPipeline.Select(filterInfo => filterInfo.Filter).Any(filter => filter is AuthorizeFilter);
        var allowAnonymous = filterPipeline.Select(filterInfo => filterInfo.Filter).Any(filter => filter is IAllowAnonymousFilter);

        if (isAuthorized && !allowAnonymous)
        {
            if (operation.Parameters == null)
                operation.Parameters = new List<IParameter>();

            operation.Parameters.Add(new NonBodyParameter
            {
                Name = "Authorization",
                In = "header",
                Description = "access token",
                Required = false,
                Type = "string"
            });
        }
    }
}

With that in place you simply need to tell Swashbuckle about it in your Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.ConfigureSwaggerGen(options =>
    {
        options.SingleApiVersion(new Info
        {
            Version = "v1",
            Title = "Sample API",
            Description = "This is a sample API",
            Contact = new Contact
            {
                Name = "Mike",
                Email = "email@example.com"
            }
        });

        // Register the operation filter defined above.
        options.OperationFilter<AuthorizationHeaderParameterOperationFilter>();
    });
}

If you run your API project you should now see the Authorization field added to the "Try it out!" section of the Swagger UI for the selected endpoint.

That's all there is to it.  You now have a self-documenting API that is both easy to explore and test using the Swagger UI.  To add even more value to the Swagger UI you should look into using the attributes and XML Documentation support that Swashbuckle offers.
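To give a feel for the attribute support, here's a hypothetical action; Widget, the route, and _repository are all made up for illustration, not part of the sample project:

```csharp
/// <summary>
/// Gets a single widget by its identifier.
/// </summary>
[HttpGet("{id}")]
[ProducesResponseType(typeof(Widget), 200)]
[ProducesResponseType(typeof(void), 404)]
public IActionResult GetWidget(int id)
{
    // Swashbuckle surfaces the summary and response types in the UI.
    Widget widget = _repository.Find(id);
    return widget == null ? (IActionResult)NotFound() : Ok(widget);
}
```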

Git Your Own Hub

The setup


Cloud based version control has two major benefits:  it makes version control readily accessible to everyone and it provides an offsite backup.  Several cloud based providers are available: GitHub, BitBucket, Visual Studio Team Services, Google Cloud Platform, and CodePlex to name a few.  However, most (if not all) of them have their limitations.  For the most part, if you want to keep your repositories private, you'll end up paying at some point.


GitHub has changed its plans so that all paid plans now include unlimited private repositories.  When I first started using GitLab, GitHub limited you to just a few private repositories on most of the reasonably priced plans.  However, if you are a large company with several developers, or you simply don't like the idea of your code being hosted on someone else's server, GitLab is still a solid alternative to GitHub.

Enter GitLab

GitLab is a pretty solid repository manager.  It offers most of the features that the more well known options offer, and you can host it locally or you can use GitLab.com.  If you have a Linux box available, use it.  Otherwise you can leverage VirtualBox to set up an Ubuntu VM.


GitLab's website does an excellent job of walking you through the installation process.  Rather than trying to reinvent the wheel, I'm just going to direct you to their website: https://about.gitlab.com/downloads/.


Putting Ubuntu 14.04 to Work via VirtualBox

I'm pretty sure I've done this before


Hosting Ubuntu in a VM makes it incredibly easy to set up a Linux box that you can use to host a Git server, WordPress blog, your Rails apps, Node.js, Apache, mail server, MySQL RDBMS, TeamCity, etc...  Really, there's no reason not to set up an Ubuntu VM.

This isn't the first time I've attempted blogging, nor is this the first time I've put together a post about using VirtualBox to host an Ubuntu VM.  Each time I write this post I discover that the process has gotten easier and easier... mostly.

You no longer need to use "VBoxManage setextradata" to set up port forwarding, Ubuntu has gotten easier and easier to use, and most major software packages have some sort of setup / installation package you can download and easily install.

However, I continue to write this post every few years because I continue to have problems trying to install the VirtualBox Guest Additions...


Getting a basic installation of Ubuntu set up is incredibly straightforward.  You pretty much need just two things:

  1. VirtualBox
  2. The latest stable distribution of Ubuntu for desktops (if you're reading this you should probably avoid Ubuntu Server)

Helpful hint:  if you have an old install of VirtualBox laying around from the last time you messed around with virtual machines, delete it and install the newest release.  This will ensure you have the newest version of the Guest Additions.

Install the Guest Additions

I'm going to assume you can figure out how to install VirtualBox and get a basic instance of Ubuntu up and running.  For the most part the options are pretty straightforward.  Give the VM as much memory as you can spare and allocate a virtual disk drive that matches your intended use for the VM.

Make sure that you go into the VM's settings (in VirtualBox) and enable 3D acceleration.  With that taken care of, open a terminal and execute the following:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install build-essential module-assistant
$ sudo m-a prepare
$ sudo apt-get install xserver-xorg xserver-xorg-core

With that taken care of, use the VirtualBox menu to mount the Guest Additions CD in the VM.  Ubuntu should automatically prompt you to begin installation.  If it does not, you can either remount the CD or run the following command:

sudo sh /media/cdrom/VBoxLinuxAdditions.run

Once the install completes you should reboot the virtual machine.  Hopefully when Ubuntu finishes rebooting you'll be all set.

Hello World

As nifty as it is to have Ubuntu running in a VM on your Windows box, it doesn't do you a lot of good unless you can get to it from the outside world.  Regardless of the port you want to open up, you basically have two ways to accomplish your goal:

  1. Configure your VM to use a Bridged Adapter
  2. Setup port forwarding

Bridged Adapter

With bridged networking your VM essentially appears to be on your network.  There is no need to set up port forwarding.  You can simply configure your network router to forward traffic to the IP address of the VM.  Unless you have a compelling reason not to, I'd recommend using this configuration if you want to open something running on your VM to the outside world.
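If you'd rather script the switch than click through the GUI, VirtualBox's command line tool can do it.  A sketch, assuming your VM is named "Ubuntu" and your host adapter is eth0 (both are placeholders):

```shell
# Attach the VM's first NIC to the host's physical adapter (run while the VM is powered off).
VBoxManage modifyvm "Ubuntu" --nic1 bridged --bridgeadapter1 "eth0"
```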

Port Forwarding

With port forwarding your host OS has the opportunity to buffer / forward packets as it sees fit.  I'll let you explore why you would use port forwarding over a bridged adapter on your own.   Here's how you do it:

  1. Open the settings for your VM in Virtual Box
  2. Go to the Network tab
  3. Ensure Attached to is set to NAT
  4. Click on Port Forwarding
  5. In the dialog that appears enter whatever Host and Guest port are appropriate for your use case.  You should leave the two IP columns blank.
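The same rule can be created from the command line.  This sketch forwards host port 2222 to guest port 22 for a VM named "Ubuntu" (the rule name, ports, and VM name are all examples):

```shell
# Rule format: "name,protocol,host IP,host port,guest IP,guest port" (blank IPs mean all interfaces).
VBoxManage modifyvm "Ubuntu" --natpf1 "guestssh,tcp,,2222,,22"
```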

Wrap up

With that you should be set to access your VM from anywhere in the world.  Please keep in mind that this probably isn't the most secure thing to do.  I wouldn't really recommend leaving any port open to the world unless you really know what you're doing. 


Using WCF to Monitor Your Windows Services



Having come of age as a professional developer in an era where putting business logic in your database was considered sacrilege, I never used the database for anything more than storing data.  Using SQL Server was (is) a last resort.

A few months back I was working on a project at a company that has its roots firmly planted in a database oriented approach to development.  I get that it's impossible to rewrite a massive legacy system every time contemporary programming practices change.  However, I was surprised how quickly (and frequently) developers turned to the database or logging as a solution.

One such example: we needed a way to monitor and manage a Windows service.  Certainly logging provided a low level means of monitoring; however, it didn't provide an effective way to manage the Windows service.  One suggestion was to use a table in a database as a control mechanism.  That could work, but what about a more direct approach?

Setting up the solution

In this example we'll create two console applications.  One will use Topshelf to start a Windows service (this post doesn't cover Topshelf).  The other will be a normal console application that'll communicate with the Windows service via WCF.  Generally I prefer to put a WPF application in the system tray; however, I'm keeping it simple for this example.

Create a blank Visual Studio 2015 solution named ServiceMonitorDemo.

The Windows service project

Add a new C# Console Application to your solution named ServiceMonitorDemo.Service.  The first thing you'll need to do is add Topshelf to the project:

Install-Package Topshelf

With that taken care of, you'll need to write two service contracts.  For this tutorial we're going to use a duplex channel.  You'll need one interface for each direction through the channel.

First, write the contract that other programs will use to communicate with the service:

using System.ServiceModel;

namespace ServiceMonitorDemo.Service.Contracts
{
    [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IDemoServiceCallbackChannel))]
    public interface IDemoServiceChannel
    {
        [OperationContract(IsOneWay = true)]
        void Connect();

        [OperationContract(IsOneWay = false)]
        bool DisplayMessage(string message);
    }
}

You'll notice this service contract is decorated with the ServiceContract attribute.  This is how you tell .NET what interface to use for the callback contract.  The callback contract is used by the service to communicate with connected clients.  You'll define the callback interface shortly.

Notice that the contract consists of two methods:

  • Connect - This method is used to add the calling client to our list of connected clients.
  • DisplayMessage - This is used as an example of bidirectional communication and to show how clients can control the service through WCF.

The callback contract is pretty straightforward:

using System.ServiceModel;
using ServiceMonitorDemo.Model;

namespace ServiceMonitorDemo.Service.Contracts
{
    public interface IDemoServiceCallbackChannel
    {
        [OperationContract(IsOneWay = true)]
        void UpdateStatus(StatusUpdate status);

        [OperationContract(IsOneWay = true)]
        void ServiceShutdown();
    }
}

Again, our simple callback contract has just two methods:

  • UpdateStatus - Used by the service to push the service's status out to all connected clients.
  • ServiceShutdown - Used to notify clients that the service is shutting down.  WCF does not handle an abruptly dropped channel cleanly, so we need to make sure the code takes care of opening and closing connections correctly.
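To give a feel for what closing connections correctly looks like in practice, here's a hypothetical broadcast helper; the _clients field is an assumption (a list the service keeps of connected callback channels), not code from the sample project:

```csharp
// Hypothetical helper; assumes a field: List<IDemoServiceCallbackChannel> _clients.
private void Broadcast(StatusUpdate status)
{
    // Iterate over a copy so dead channels can be removed as we go.
    foreach (IDemoServiceCallbackChannel client in _clients.ToArray())
    {
        // WCF callback channels also implement ICommunicationObject.
        ICommunicationObject channel = (ICommunicationObject)client;

        if (channel.State == CommunicationState.Opened)
            client.UpdateStatus(status);   // One-way call; won't block on a reply.
        else
            _clients.Remove(client);       // Faulted or closed: stop tracking it.
    }
}
```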

With that out of the way we need to take care of writing the actual service.  For this example the service won't do anything exciting, it'll simply post a status object to all connected clients.  The code for this is pretty long, so I'm only going to post important sections here.  You can find the completed example solution on GitHub.

In order to accept connections to the service you'll need to initialize the named pipe:

_host = new ServiceHost(this);

NetNamedPipeBinding binding = new NetNamedPipeBinding();
binding.ReceiveTimeout = TimeSpan.MaxValue;

_host.AddServiceEndpoint(typeof(IDemoServiceChannel),
    binding,
    new Uri(Uri));

_host.Open();

This code simply creates a new host using the current object for the ServiceHost.  A named pipe binding is added to the host.  Clients connect to this endpoint via the Connect method on the channel service contract defined above.  Our service implements the Connect method as follows:

public void Connect()
{
    // Grab the callback channel for the connecting client and track it
    // so the service can push status updates to it later (_clients is
    // the service's list of connected callback channels).
    IDemoServiceCallbackChannel callback =
        OperationContext.Current.GetCallbackChannel<IDemoServiceCallbackChannel>();

    _clients.Add(callback);
}

The only other important detail here is that the DemoService class is implemented as a singleton and decorated with the following attribute:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]

In this case I have a single service running and I want to share data with all connected clients.  With this in mind, using InstanceContextMode.Single makes sense.  You can also use per session and per call context modes.  You can find a good overview of the differences on Code Project.

The rest of the code for this project is pretty much boilerplate setup and tear down.

The Console Application

Fortunately this entire application is less than 100 lines long.  Again, I'll refer you to the GitHub repository to see the full implementation.  Below you'll find the most important section of the code:

private void Connect()
{
    try
    {
        DuplexChannelFactory<IDemoServiceChannel> channelFactory = new DuplexChannelFactory<IDemoServiceChannel>(
            new InstanceContext(this),
            new NetNamedPipeBinding(),
            new EndpointAddress(Uri));

        _channel = channelFactory.CreateChannel();
        _channel.Connect();

        _isConnected = true;
        Console.WriteLine("Channel connected.");
    }
    catch (Exception)
    {
        Console.WriteLine("Failed to connect to channel.");
    }
}


I wouldn't recommend taking the above code and dumping it into your production code.  It's designed to demonstrate how to establish a connection to the Windows service via a WCF named pipe.

As long as the console application is connected to the service, the service will continue to trigger the UpdateStatus method.  In a real-world implementation, UpdateStatus would most likely toggle some sort of visual status indicator on a WPF application (e.g. a red / green light).  In tutorial land, displaying a message in the console works just fine.
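For completeness, a tutorial-land callback implementation might be nothing more than this; StatusUpdate's members are whatever you defined in ServiceMonitorDemo.Model, so the Message property here is an assumption:

```csharp
public void UpdateStatus(StatusUpdate status)
{
    // A real client might flip a red / green indicator in a WPF tray app;
    // here we just echo the pushed status to the console.
    Console.WriteLine($"Status received: {status.Message}");
}
```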

Wrap Up

If you've cloned, forked, or downloaded a zip file of the repository, you should be able to run the Windows service as a service by navigating to the project's binary folder (Debug or Release, depending on your active build configuration) in your preferred terminal / command prompt and running:

ServiceMonitorDemo.Service.exe install
ServiceMonitorDemo.Service.exe start

You can then run a few instances of the ServiceMonitorDemo.Monitor and see what happens.

GitHub repository Link

Developer Street Cred

Important Information in the Next Section...


I did my share of text editor coding and command line / terminal compiling back in the day.  However, between 2004 and 2013 I was pretty much doing all of my programming in Visual Studio.  In 2013 I started using my Mac Mini and eventually my MacBook Pro to do some work.  That led to using Eclipse and Android Studio for some Android development.  Then I started to play with Rails, Node.js, etc...

Long story short, I realized the Windows command prompt doesn't compare to using the terminal on OS X (and of course Linux / Unix based distributions).


[Screenshots omitted: the stock Windows command prompt does not equal the OS X terminal, but the setup below gets pretty close.]

Getting There

I'm no expert when it comes to the different options available to Windows users (or terminals in general).  However, I can tell you what I did to at least improve my situation when working on a Windows box.

Step 1: Install Babun

Head over to the Babun homepage and follow the instructions there.  Pretty straightforward.

Step 2: Find Your Theme

I ended up going with the powerlevel9k theme for ZSH.  I used that as a starting point and made a few modifications from there.  You can either follow the instructions on the linked page or do the following:

  1. Download the powerlevel9k.zsh-theme file
  2. Move the theme file to C:\Users\[username]\.babun\cygwin\home\[username]\.oh-my-zsh\custom.
  3. Navigate to C:\Users\[username]\.babun\cygwin\home\[username].
  4. Open .zshrc in your favorite text editor.
  5. Make the following change:
# Set name of the theme to load.
# Look in ~/.oh-my-zsh/themes/
# Optionally, if you set this to "random", it'll load a random theme each
# time that oh-my-zsh is loaded.
ZSH_THEME="powerlevel9k"

I made a slight change to the default theme.  To do this, I created a copy of the file I downloaded and renamed it powerlevel9k-modified.zsh-theme.  I then changed one line in the file:

  local current_path='%C'

To see the change you'll need to once again edit your .zshrc file to reflect the "new" theme.

Step 3: Locate a Powerline font

You can find the Powerline fonts on GitHub.  I'm currently using DejaVu Sans Mono for Powerline.  Simply download and install the ttf file (double click on it once you've downloaded it).  You'll then need to edit your ~/.minttyrc file to use the font:

Font=DejaVu Sans Mono for Powerline

Step 4: Set Your Color Scheme

I'm a fan of the Solarized color scheme.  You can find both the light and dark versions on GitHub.  I opted not to edit my config file.  Instead I created a ~/.solar folder and put the sol.dark file there.  You'll then need to add the following line to the end of your ~/.babunrc file:

source ~/.solar/sol.dark

Step 5: Enable Syntax Highlighting

Reading the most current documentation suggests this step may no longer be necessary; however, when I did my setup I had to do this:

cd ~/.oh-my-zsh/custom/plugins
git clone git://github.com/zsh-users/zsh-syntax-highlighting.git

Then you'll need to edit your ~/.zshrc file as follows:

# Which plugins would you like to load? (plugins can be found in ~/.oh-my-zsh/plugins/*)
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(svn zsh-syntax-highlighting)

That should ensure you have proper syntax highlighting in your terminal.
