Tags: | Categories: Articles Posted by RTomlinson on 2/9/2014 3:24 AM | Comments (0)

Dan Pink Motivation Image

I’m seeing this more and more from interview candidates, as well as in my own experience. Of course you could argue that the whole reason a candidate is at an interview is that they’re unhappy in their current position. Yet what’s clear is that developers, including myself, thrive on using and developing tools and technologies and, most importantly, on removing things (processes/tools/technologies) that just get in the way of productivity. For me, that is the mark of a good developer!

There are some recurring themes that run through the reasoning applied to mandating certain tools and processes, and the majority of them are completely unfounded or illogical. There are, absolutely, some necessary reasons for mandating processes; take SOX (the Sarbanes-Oxley Act) for example. Often, though, when you dig into these compliance requirements they turn out to be misinterpreted, or intentionally applied over-strictly out of fear or lack of knowledge.

In an attempt to steer this post away from being a rant (which I’ve clearly already started doing), I want to address some common excuses:

The “single unified way” excuse

This is often related to using the same tool between teams, or employing a technology that becomes the standard. When you do the latter you are saying that the technology that was decided upon, at that time, by a certain person or people is always the tool of choice regardless of the situation, and you take the context out of decision making. Particularly in our industry this is detrimental to the organisation. It discourages innovation and limits the ability of your development team to look outside of their technical skill set. Forcing separate teams to use the same tools means that a team can’t play to its strengths, and hinders self-direction.

The “we don’t do that here” excuse

This is synonymous with the “we write our own” or “nobody knows it so we don’t do it” excuses. Embarrassingly, I’ve actually used these myself, and it wasn’t until I ventured out of my own bubble that I realised how damaging they can be. Making poor choices over tools or processes that get in the way of developers happens a lot, and there are situations where the choice at the time was the right one. When those choices end up getting in the way and people aren’t heard, resentment grows.

The “{insert position here} wouldn’t allow that” excuse

I’ve seen this more in relation to tooling and it’s completely illogical, particularly when applied to SaaS/cloud tools (bug trackers, GitHub, etc.). When you force tooling to be hosted internally you add significant overhead and pain. Some businesses can deal with this, and in those cases it’s applicable, but they are the 5% minority. Reinventing the wheel and giving yourself more to maintain takes your focus away from the core of what you do. The US Army, Navy, Air Force and Department of Justice are using cloud services; you have no excuse.

In an industry of constant change it’s important to enable your team as well as provide guidance. This doesn’t mean giving everyone free rein, but it does mean giving them the opportunity to investigate new technologies, to innovate, and to bring that back into the business. After all, that’s how technology companies ultimately survive. If you don’t, you risk losing your developers to companies that do.

Tags: , | Categories: Articles Posted by RTomlinson on 1/3/2013 5:43 AM | Comments (0)

All exceptions in our architecture at Tombola are caught and logged. We use an exception driven approach because the thought of someone having a poor experience on the site and experiencing errors is painful for us and just isn’t good enough. To then consider that we aren’t aware of every issue customers may be having is both bad for business and makes us, as a team, look shite.

We have two distinct types of exception: custom/handled exceptions and unhandled exceptions, the latter being the scary ones. The custom exceptions are there to control flow and business logic (for example, it’s a valid exception if someone attempts to deposit over their deposit limit). Throughout our service and repository layers we handle these differently, but in all cases they are logged (to a SQL backend). I won’t cover the reasoning behind this approach as Terry has already covered the topic in this post and detailed the performance impact in this post.
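As a rough illustration of a custom/handled exception of this kind (the class name and properties here are hypothetical, not our actual implementation):

```csharp
using System;

// Hypothetical sketch of a handled, business-logic exception.
// The name and shape are illustrative only.
public class DepositLimitExceededException : Exception
{
    public decimal AttemptedAmount { get; private set; }
    public decimal DepositLimit { get; private set; }

    public DepositLimitExceededException(decimal attemptedAmount, decimal depositLimit)
        : base(string.Format("Deposit of {0} exceeds the limit of {1}", attemptedAmount, depositLimit))
    {
        AttemptedAmount = attemptedAmount;
        DepositLimit = depositLimit;
    }
}
```

A service method can then throw this when a deposit exceeds the limit, while the catch-all logging still records it to the SQL backend.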

When exceptions occur we want to know about them instantly. With a spare 60” TV in the office it made sense to make use of it with useful information.

This is the resulting product (for the sake of the demo this is me firing sample messages). Apologies for the poor quality. It’s worth playing in HD.

What the video doesn’t show is that each exception type plays a sound when it is displayed (using HTML5 audio).
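For illustration only (the element ids, file paths and type-to-sound mapping are assumptions, not our actual markup), the per-exception-type sound could be wired up with HTML5 audio like this:

```html
<!-- One audio element per exception type; paths are illustrative -->
<audio id="sound-software" src="/sounds/software.mp3" preload="auto"></audio>
<audio id="sound-database" src="/sounds/database.mp3" preload="auto"></audio>

<script>
    // Hypothetical helper: play the sound matching the incoming exception type
    function playExceptionSound(exceptionType) {
        var audio = document.getElementById('sound-' + exceptionType.toLowerCase());
        if (audio) {
            audio.play();
        }
    }
</script>
```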

How it was built

When an exception occurs, for any country (currently we operate in the UK, Spain and Italy), an NServiceBus message is fired off from our database logger to an MSMQ queue on an app server. This is feature toggled, to allow us to turn it on/off, and looks like this (an example of a software exception from the logger):

try
{
    if (FeatureToggles.EnableExceptionMessageOnBus)
        tombolaBus.SendException(message, exceptionStr, ExceptionType.Software);
}
catch (Exception exception)
{
    Log("Failed to add message to bus", exception);
}

Once the message reaches the queue on the app server, we have a “handler” process running as a Windows service using the NServiceBus host. This process is incredibly simple: it acts as an NServiceBus server endpoint to handle exception messages. When it receives a message from the bus (taken off the queue) it calls a SignalR proxy to fire the message to all connected clients:

public class ExceptionMessageHandler : IHandleMessages<ExceptionMessage>
{
    private static readonly ILog log = LogManager.GetLogger(typeof(ExceptionMessageHandler));

    public void Handle(ExceptionMessage message)
    {
        log.DebugFormat("New message received: {0}", message.Message);
        SignalRProxyConnection.SendException(message);
    }
}


The website dashboard

The dashboard (as you can see from the video) was built using Twitter Bootstrap and SignalR 1.0 alpha. It consists of a single hub that calls a JavaScript function on the client to display the message. The client takes the country and exception type and styles the message up to display it in the correct lane. The hub is almost not worth showing… but I will:

public class ExceptionHub : Hub
{
    public void SendException(ExceptionMessage exception)
    {
        // Pushes the exception to all connected dashboard clients
        // ("displayException" stands in for the client-side JavaScript function)
        Clients.All.displayException(exception);
    }
}

In order to call this hub in the web project from another project, as I have done with the ExceptionMessageHandler, SignalR provides a proxy (IHubProxy). Once the proxy is created you can invoke methods on the remote hub and pass parameters.

public static class SignalRProxyConnection
{
    private static bool connected;
    private static IHubProxy proxy;
    private static HubConnection hubConnection;
    private static string connectionUrl = ConfigurationManager.AppSettings["SignalRDashboardUrl"];

    public static void SendException(ExceptionMessage message)
    {
        if (!connected)
            Connect();

        if (hubConnection.State == ConnectionState.Connected)
            proxy.Invoke("SendException", message);
    }

    private static void Connect()
    {
        hubConnection = new HubConnection(connectionUrl);
        proxy = hubConnection.CreateHubProxy("ExceptionHub");
        hubConnection.Start().Wait();
        connected = true;
    }
}

You can see here I’m calling the “SendException” method on the ExceptionHub and passing the message that was received from the queue on the app server.

Further enhancements

There is a great deal more that could be done to enhance this project. There is no security implementation on the hub to stop anyone creating a proxy connection and firing random messages at it. This isn’t a huge problem for us, as it’s an internal application that’s not accessible from outside the domain.

There’s also a lack of persistence. If a message is fired and there are no connected clients then the message is lost (well, not strictly true, because it’s still in our database, but that involves somebody actively looking). There is scope to use a NoSQL persistence store to warehouse this data and retrieve it for newly connected clients.


The beauty in this solution is just how simple these technologies make it. The heavy lifting is done by SignalR, NServiceBus and, to some extent, Twitter Bootstrap. There is very little code involved, yet the solution is distributed and extensible. It enables us to be proactive in solving customer problems before customer support inform us, and we can actively look to make our product better.

Tags: , , , | Categories: Articles Posted by RTomlinson on 11/12/2012 1:16 AM | Comments (0)

NuGet isn't just a means of acquiring and sharing libraries externally. It's a great way to distribute your shared libraries internally, within and across your development team(s). There’s no need to check out an updated copy of a shared library, build it on a developer’s machine and re-reference it within Visual Studio. I’ll show you how to set up TeamCity as a NuGet server to make sharing dependencies a little easier.

Setting up TeamCity

It’s extremely simple to enable TeamCity to act as a NuGet Server.

Go to the Administration section and select “NuGet Settings” and enable the NuGet server.

teamcity admin section

You will see that TeamCity has its own WCF service URL called the “Authenticated Feed URL”. This is what we will use to tell Visual Studio where to look for our NuGet feed (more on this later).

Configure your build to produce a NuGet package

If your project configuration in TeamCity doesn’t already have a build step that compiles your project, you will need to add one first. In this example I’ll simply use the built-in Visual Studio runner type, which uses MSBuild to compile a solution.

nuget package runner type

Once you have a compiled binary you can use TeamCity’s built-in NuGet Pack runner to output a .nupkg (NuGet package) of your compiled library. TeamCity will ask for a specification file; this is usually a .nuspec file that you add to your project and that provides detailed information about the package. However, you can also simply use the .csproj file to provide this information.

specification file
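For reference, a minimal .nuspec is just a small XML file like the sketch below (the id, version, authors and description values are placeholders):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.SharedLibrary</id>
    <version>1.0.0</version>
    <authors>My Team</authors>
    <description>Shared internal library distributed via the TeamCity NuGet feed.</description>
  </metadata>
</package>
```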

The two build configurations are shown below:

build steps screenshot

Configure Visual Studio to use TeamCity NuGet server

Firstly, open up Visual Studio and open the NuGet Package Manager. At the bottom left select “Settings…”.


In the “Package Sources” dialog add a new source and specify the source as the TeamCity feed URL that I mentioned earlier.
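Alternatively, the same package source can be registered from the command line with nuget.exe (the source name and URL below are placeholders; substitute your own TeamCity feed URL):

```shell
nuget sources Add -Name "TeamCity" -Source "http://teamcity.example.com/httpAuth/app/nuget/v1/FeedService.svc/"
```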


Now when you go back to manage NuGet packages you will have the new package source as an option.


Now when your team updates a shared dependency, it will be built by TeamCity on check-in, a NuGet package will be produced, and it will be updatable from within Visual Studio using the package manager.

Tags: , , , | Categories: Articles Posted by RTomlinson on 10/25/2012 6:28 AM | Comments (0)

I'm still amazed that there are software companies out there delivering software far too infrequently. Most of the time there are no valid excuses for monthly or quarterly (or longer) deployment cycles. Even if the business isn't demanding a faster time to market, the Dev/Operations team should be continuously evaluating and improving their processes for the benefits that continuous delivery gives.

Historically, writing, testing and delivering software was scary for me and the teams I worked in. In the past I've worked in teams where we would write a month's worth of software, have a test team test the release, fix those bugs, test again and then release. The process was long and painful; we dreaded the release, and the following few days were nervous ones.

Since then I've been on a path to continuous delivery and work with a great team who see the value in doing so.

Why releasing often is so important

Here's just a few points:

  • Continuous feedback - With every check-in we build and run tests (unit and other variants) and continue through our deployment pipeline. The sooner developers' code is integrated, compiled and tested, the sooner we can fix issues. This also holds true for deployments.
  • Problems are localized - When problems do occur it's usually pretty easy to spot if only a few changes are being deployed at any one time. With larger deployments, if the shit hits the fan, then it's going to take longer to diagnose and resolve.
  • Competitive advantage - Not much explaining here. If you can release features incrementally to your customers the advantages are two-fold. They benefit from the features themselves and they benefit from a learned user experience. That is to say that smaller changes are easier to adapt to than larger rollouts of features.
  • Deployment confidence - The more you do something the more confident you get in doing it. The same goes for most tasks and skills. 

How often is often?

This really depends on many factors, but when you have changes waiting daily there really isn't any reason not to release daily. There are many release strategies you should consider; the most common are release branching and releasing from trunk. In the case of the latter it is common to use feature toggles for unfinished or unreleasable code, but that's too big a subject so I'll save it for another post.

At tombola, we release daily for each country. Everything is driven from our TeamCity build server that runs custom MsBuild scripts to compile, transform configs and deploy (using MsDeploy) to several environments (including the cloud). 

teamcity build

For live deployments we do blue/green deployments. That means that we hit the run button for the live configuration that is inactive (see image above). The load balancer will then drain out the users from the active node and bring the inactive node up to make it active. The benefit of this approach is the ability to roll back very quickly if we have any issues simply by switching the nodes back.

How to get there?

The process of getting to daily (or even weekly) releases doesn't have to be taken in one leap. Continuous integration is usually the first step. CI products like TeamCity are free, extremely configurable and easy to get up and running. Start by building your product on check-in and informing your developers when issues occur (using something like the TeamCity tray notifier). Then move on to automated deployments to your development environments. Over time this will give you the confidence to add automation to your stage/QA deployment pipeline.

Ultimately the key is to evaluate your processes. If you have processes that take a long time and hold the pipeline up, look at addressing those first. If you have a test team that is a bottleneck, look at what you can automate with regard to testing, or look at whose remit testing is and see if you can disperse it amongst other teams.

Dev/Ops should be introspective and be focussed on continuous improvement in their development and deployment processes. Your deployment process is never perfect and is never set in stone.

Tags: , | Posted by RTomlinson on 5/22/2012 5:19 AM | Comments (0)

The software release process usually involves deploying a mixture of bug fixes and feature releases. Bug fixes usually come more frequently than features (or large abstractions), and software teams often hit a decision point over how to release small fixes while larger, longer-running fixes or features are in development. In an environment where the team practises branch by feature, the pain is eased somewhat: features are done in completely separate branches and, more often than not, with this branching strategy fixes are done on the mainline (trunk). In a team where continuous delivery or continuous deployment is employed, the problem becomes more complex.

Release branching

Release branching involves creating a completely separate branch to which production-ready code (or potentially production-ready code) is merged, and from which it is deployed.

With this strategy the benefit is that the release manager can merge over only the chosen revisions that contain bug fixes; satisfaction comes from knowing that only known revisions are applied and, more importantly, that feature revisions are excluded.

If you've adopted continuous delivery/deployment, this adds a manual step to the process. It's also not error-proof, as it's often difficult to exclude feature revisions that have been checked in, and the management and maintenance can, over time, become painful.

Builds, tests and deployments can then be done from the "stable" release branch to promote to staging or live (production) environments and development environments can continue to use trunk.

Branch by abstraction with feature toggles

Another method is to use a mixture of branch by abstraction and feature toggling. This allows you to maintain a single trunk (WIN!!) and apply bug fixes on the trunk, while larger or longer-running fixes or features can be effectively toggled off until they are ready to be deployed. The team can continue to fix and check in without having to worry that a half-implemented feature will make it to live.

Branch by abstraction involves abstracting classes away from their implementation whilst in a development environment, yet still maintaining the current class in the live environment. In practical terms that could mean creating a new class that implements an interface and (if using IoC) injecting your feature class rather than the existing implementation.

Given the scenario whereby we're implementing a new feature for card payments:

  1. We would create a completely new concrete implementation of ICardWithdrawalService as FeatureXCardWithdrawalService.
  2. Create a configuration toggle. This could simply be a true/false app setting in a Web.config or App.config, with a transform for each environment targeted.
  3. At the point of IoC bootstrapping register the FeatureXCardWithdrawalService instead of the existing implementation to be injected.
  4. Where view (front end view) specific feature changes are applied the same toggle can be checked (ideally using a central helper to manage these).
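The steps above can be sketched as follows (FeatureXCardWithdrawalService, the ICardWithdrawalService members and the setting name are placeholders from the scenario; adapt the registration to your own IoC container):

```csharp
using System.Configuration;

public interface ICardWithdrawalService
{
    bool Withdraw(decimal amount);
}

// Existing implementation stays live until the toggle is flipped
public class CardWithdrawalService : ICardWithdrawalService
{
    public bool Withdraw(decimal amount) { /* existing behaviour */ return true; }
}

// New feature implementation, developed on trunk behind the toggle
public class FeatureXCardWithdrawalService : ICardWithdrawalService
{
    public bool Withdraw(decimal amount) { /* new behaviour */ return true; }
}

public static class Bootstrapper
{
    public static ICardWithdrawalService ResolveCardWithdrawalService()
    {
        // Step 2: read the toggle from app settings (transformed per environment)
        bool featureXEnabled = bool.Parse(
            ConfigurationManager.AppSettings["EnableFeatureXCardWithdrawals"] ?? "false");

        // Step 3: register/resolve the toggled implementation at bootstrap time
        return featureXEnabled
            ? (ICardWithdrawalService)new FeatureXCardWithdrawalService()
            : new CardWithdrawalService();
    }
}
```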

Developers and release managers need not be concerned with branching and merging with this strategy, and thus time is saved. However, concerns come from knowing when a toggle should be employed, and from managing toggles throughout the codebase. Developers should be encouraged to check in early and often, and it could be tempting to reduce the number of toggles by not checking work in for longer fixes until they're complete.

The quality gate

Neither strategy is without dangers. The decision as to which is best is very much dependent on the product, the team, the release strategy (and number of releases), the environment, etc. The key in both is the quality gate at each step in the continuous delivery/deployment process. Ensuring that each stage of the test pyramid provides adequate coverage of your product and your existing features will define the quality and readiness of a release.


Posted by RTomlinson on 5/22/2011 6:56 AM | Comments (0)

Follow The Leader


Software development is still in its infancy. When you consider other industries, such as manufacturing, writing software has a very short history. The word “software” was first used in print in 1958 and many of the software visionaries today are still alive. As a result we don’t have hundreds of years of research, guidelines and experience to have rigidity in our daily practices.

As we learn and look to the evangelists and proposers of best practice there stands the inevitability that many of us will play follow the leader and follow their gospel without forethought and questioning of WHY.

Current trends, and there are many of them, such as development methodologies (TDD, BDD and DDD), practices such as unit testing, and patterns such as inversion of control, dependency injection and the repository pattern, have a heavy focus at the minute, and so they should. The problem comes when best practice is followed and implemented without the reasoning, justification and understanding of why. In other industries early adoption is made more difficult by legislation and industry governance. That's not to say that early adoption is necessarily bad, but in a rapidly changing market developers need to be aware of the pitfalls.

So what is the answer?

  • Question everything. Yes the repository pattern makes sense, but is it appropriate in your situation? WHY are you using it and what are the benefits? Consider the impact of alternative solutions.
  • Think ahead. Be hype-agnostic and consider that your decision to adopt a particular technology/framework/pattern has future impact. Consider product support when choosing a particular technology. Is it developed by a single person as a personal project? Will it be dropped shortly down the line? How active is the supporting community?
  • Keep it simple. Don't become an architect astronaut. Don't spend forever architecting and adopting every pattern that is currently popular or that your favorite Microsoft evangelist is touting.

Of course there is never any guarantee. Even if a technology/framework is from a large organisation (read Microsoft) this does not mean that it is the right choice for you and it doesn't mean that it will continue to be supported.

Speak to your fellow developers. Whether it be in your team, at a local developer event or old colleagues. I find they usually throw up ideas that you haven't even considered and your products will usually be better for it.

Tags: , | Categories: Articles, Tutorials Posted by RTomlinson on 2/22/2011 10:26 PM | Comments (0)

This is the first in a series of WP7 tips that I picked up whilst developing StackO for Windows Phone 7. I have to say that I am no Silverlight (or WP7) expert and there may be better ways to do certain things than what I write about in these posts; if there are, I’d love to hear them, so please leave comments. Hopefully some of these tips will help you along a little quicker in getting your apps to market.

TheNounProject.com is an absolutely excellent resource for iconography. Their mission is:

"sharing, celebrating and enhancing the world's visual language".

There are over 500 icons there that fit perfectly with the Metro style that WP7 adopts, providing visual metaphors to use throughout your application, absolutely free.

These icons are downloaded as SVGs, and the WP7 ApplicationBar will only accept PNG images. In addition, you will most probably want these images to be themeable.

WP7 theming is pretty clever. If you use a PNG icon that has a transparent background and the image itself is completely white, then with the dark theme your image will remain white; if you switch to the light theme, the image is changed automatically to black. This tutorial details the steps to take the SVGs from TheNounProject and convert them to theme-aware icons for WP7.

Firstly, choose an appropriate image. For StackO, I used the “Eye” icon.

Download the icon to somewhere local on your machine.

Next you need to download and install ImageMagick. Again, there might be other ways to do this but this was the first solution I found.

Open up a Command Prompt and navigate to the location of the SVG you downloaded. We are going to use ImageMagick's convert tool to do the conversion: set the background to transparent, fill the image with white and resize it to 48px (the recommended image size for ApplicationBar icons).

Use the following command:

convert -background none -fill white -resize 48 noun_project_388_1.svg whiteeye_42.png


The output is exactly what we want.

Try it out. Add it to your project and set the IconUrl to the PNG. Now if you go into your WP7 Settings and change the theme from dark to light you will see that the icon is changed for you. Brilliant!

Tags: , | Categories: Articles Posted by RTomlinson on 2/20/2011 2:55 AM | Comments (0)

UPDATE: For support and feedback go to http://stacko.uservoice.com

Background to the app

I started messing around with Silverlight for Windows Phone 7 about a week ago, having no experience with Silverlight whatsoever. After having my HTC HD7 for almost a month, I wanted a project that would give me exposure to the WP7 platform. The reason for this is twofold. Firstly, the mobile development market is absolutely flying at the minute (not surprising given that the mobile market has already overtaken the PC market in sales) and I don’t want to be left behind, having spent the last 4 years developing for the web. Secondly, my main focus this year is on new learning. Not that I’m not learning constantly, as developers do pretty much every day, but learning outside my remit at work and away from the web.

I spend a lot of time on the StackExchange family of sites, and StackOverflow in particular. I even browsed the StackOverflow site for common issues and requests to come up with an idea. After a little research I realised that StackOverflow itself had an API, and so did the other StackExchange sites. Then I came across a post by lfoust showing screenshots of a mockup for a StackExchange client for WP7, and discovered that he had actually implemented a superb C# API wrapper for StackExchange.

I set out developing an app using the mockups displayed on this post.

What does it do?

The app’s main features are:

  • Browse latest Voted, Featured and Hot questions
  • Browse top users by Reputation and Latest users
  • Search for Questions
  • Search for Users
  • View questions and answers, including associated tags and view, vote and score counts.
  • View user information and achievements
  • Watch questions and add/remove them from your watched list

What does it look like?



What I learned

The learning curve for this project was quite steep, having had only limited exposure to Silverlight and the Windows Phone 7 platform in particular. A lot went into the development of this app and below is a summary of the main learning outcomes:

  • The WP7 ecosystem: Windows Phone 7 is a superb platform to develop on and Microsoft have done a great job in supplying some exceptional tools to develop with. That said, there are a lot of controls missing and a lot of things that I don’t like. One example in particular is that the WP7 Marketplace is still not great when it comes to categorizing apps and searching them. People have to remember, though, that it is still very early days for WP7 and Microsoft have a lot of resource going into improving it. Things will improve, and these days Microsoft are getting really good at listening to consumer feedback and community-driven development.
  • MVVM: As with any new development it is very easy to just want to “knock something up”. I spent a lot of time doing the research so that I didn’t do just that. That mindset usually results in huge problems and a monolithic application. Separation of concerns and reduced coupling are very important, and the result is a more solid app.
    If you’re going to do any Silverlight/WPF development you need to have a good understanding of MVVM. There are frameworks out there such as MVVM Light, but I decided to go with a simple implementation without the help of an MVVM framework. The con of this (as I found out) is that I had to do a lot of the wiring up myself that I’m sure a framework would have provided.
  • Databinding: The databinding model in Silverlight is immense. It’s very powerful and works great with MVVM. Once you master databinding you will understand the benefits of IValueConverter and the ability to manipulate data whilst databinding to your controls.


StackO (the very well-thought-out name :)) is far from complete. There are lots of features I would like to continue to develop and improve. I want a lot of that to come from the community. I intentionally left out a lot of features in the hope that they will be requested. If you have a Windows Phone 7 device I would really appreciate you downloading it and giving it a go and providing any feedback you can (good or bad). It’s completely FREE!

Tags: , | Categories: Articles, Tutorials Posted by RTomlinson on 2/11/2011 11:02 PM | Comments (0)

Very simple one here, but one that may not seem completely obvious to those new to Silverlight.

The Scenario

A very common scenario in Silverlight is to bind the visibility of a control to a boolean value in your code. At first you may think you can just use inline binding as normal:

<Image Source="AnImage.png" Visibility="{Binding Accepted}" />


Today I was trying to do something similar in a new project I'm working on: a StackExchange (StackOverflow family of sites) application for Windows Phone 7 that brings the popular family of sites to mobile. In particular I was trying to indicate which answer was the accepted one. To do this I used the Image control (as above) to display a green tick icon, similar to that on the StackOverflow website, and bound the Accepted property to control its visibility.

The Solution

Unfortunately the Visibility property is of type System.Windows.Visibility. Therefore we need to implement IValueConverter to take our boolean property and convert it to System.Windows.Visibility. You can then add this IValueConverter implementation to your XAML as a resource and specify the converter as part of the binding in your XAML.


public class VisibilityConverter : IValueConverter
{
    public object Convert(
        object value,
        Type targetType,
        object parameter,
        CultureInfo culture)
    {
        bool visibility = (bool)value;
        return visibility ? Visibility.Visible : Visibility.Collapsed;
    }

    public object ConvertBack(
        object value,
        Type targetType,
        object parameter,
        CultureInfo culture)
    {
        Visibility visibility = (Visibility)value;
        return (visibility == Visibility.Visible);
    }
}


And the XAML:


<helpers:VisibilityConverter x:Key="VisibilityConverter"></helpers:VisibilityConverter>

<Image Source="/Images/Icons/correct_answer.png" Visibility="{Binding Accepted, Converter={StaticResource VisibilityConverter}}"></Image>


Again it may seem a little obvious but I hope this comes in handy for those new to Silverlight databinding.

Tags: , , | Categories: Articles, Labs Posted by RTomlinson on 8/19/2010 8:49 PM | Comments (0)

.NET 4.0 sees the introduction of Code Contracts. Code Contracts allow the developer to specify rules and "assumptions on your code in the form of pre-conditions, post-conditions and object invariants" (in the words of DevLabs). What this means in real terms is that we are able to apply contractual rules to methods or properties where typically we would have applied other input sanitizing methods.

Take, for example, an SMS messaging application where we have a dispatching service class. This class is responsible for sending out messages and contains a single Send method. A pre-condition to this method may be that the message length can only be 140 characters (as is typically the case). Code Contracts allows us to design our object and specify this contract as a condition to the method.

There are three ways to utilise this feature.

  1. Runtime Code Checking - Contract checks are woven into your compiled code, so violated conditions are caught as the program runs.
  2. Static Code Analysis - A static checker flags contract violations at build time, without executing the code.
  3. Document Generation - Although I haven't used this facility, it can generate XML documentation from your contracts.
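To make the first option concrete, pre- and post-conditions sit directly in a method body as Contract calls. A minimal sketch, assuming runtime checking is enabled; the Account class here is hypothetical and not part of the sample that follows:

```csharp
using System.Diagnostics.Contracts;

public class Account
{
    private int _balance;

    public int Withdraw(int amount)
    {
        // Pre-condition: evaluated on entry when runtime checking is enabled
        Contract.Requires(amount > 0);

        // Post-condition: Contract.Result<int>() stands for the return value,
        // and Contract.OldValue captures _balance as it was on entry
        Contract.Ensures(Contract.Result<int>() == Contract.OldValue(_balance) - amount);

        _balance -= amount;
        return _balance;
    }
}
```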

I start by creating an interface for my message dispatcher that contains my single Send method.

    using System.Diagnostics.Contracts;

    namespace CodeContractsFirstLook.Contracts
    {
        [ContractClass(typeof(MessageDispatcherServiceContract))]
        public interface IMessageDispatcherService
        {
            bool Send(IMessage message);
        }
    }

As you can see, this dispatcher service interface is decorated with the ContractClass attribute, which is part of the System.Diagnostics.Contracts namespace. This tells the checker which class contains the contracts for this interface. Let's see what the contract class (MessageDispatcherServiceContract) looks like:

    using System.Diagnostics.Contracts;

    namespace CodeContractsFirstLook.Contracts
    {
        [ContractClassFor(typeof(IMessageDispatcherService))]
        public sealed class MessageDispatcherServiceContract : IMessageDispatcherService
        {
            bool IMessageDispatcherService.Send(IMessage message)
            {
                Contract.Requires(message.messageBody.Length < 140);
                return default(bool);
            }
        }
    }

The contract is specified by the Contract.Requires call. What we are saying here is that whenever a class implements the IMessageDispatcherService interface and its Send method is called, the Requires condition is evaluated as a pre-condition to the method. In this case the condition is that the message body is less than 140 characters in length.

By default we must return something where the method signature has a return type, otherwise our code won't compile. Here, I use the default keyword to return the default value for the type bool.
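The same pattern extends to post-conditions: inside a contract class, Contract.Result<T>() stands in for the eventual return value, which is why the dummy default return is harmless. A hedged sketch only; the IDeliveryReportService interface below is hypothetical, not part of the sample solution:

```csharp
using System.Diagnostics.Contracts;

namespace CodeContractsFirstLook.Contracts
{
    [ContractClass(typeof(DeliveryReportServiceContract))]
    public interface IDeliveryReportService
    {
        int DeliveredCount();
    }

    [ContractClassFor(typeof(IDeliveryReportService))]
    public sealed class DeliveryReportServiceContract : IDeliveryReportService
    {
        int IDeliveryReportService.DeliveredCount()
        {
            // Post-condition: implementations must never return a negative count
            Contract.Ensures(Contract.Result<int>() >= 0);
            return default(int);
        }
    }
}
```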

Now let's take a look at the actual MessageDispatcherService that implements the IMessageDispatcherService interface and therefore will execute our interface contract:

    using CodeContractsFirstLook.Contracts;

    namespace CodeContractsFirstLook
    {
        public class MessageDispatcherService : IMessageDispatcherService
        {
            public bool Send(IMessage message)
            {
                // It doesn't matter what we do here;
                // our contracts will be executed
                return true;
            }
        }
    }

As our interface has a contract associated with it, whenever the method above is executed our contract code runs to evaluate the message body length. We can verify that this works with a simple console application.

    using System;
    using CodeContractsFirstLook.Messaging;
    using CodeContractsFirstLook;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                var message = new SmsMessage();
                var messageDispatcher = new MessageDispatcherService();

                message.messageBody = "This will certainly pass";
                Console.WriteLine(string.Format("Was message dispatching successful? {0} ",
                    messageDispatcher.Send(message)));

                message.messageBody = "A very long message that surely must be over one hundred and forty characters by now...surely! Let me copy and paste into word to do a character count....a yes....we're good!";
                Console.WriteLine(string.Format("Was message dispatching successful? {0} ",
                    messageDispatcher.Send(message)));
            }
        }
    }
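With runtime checking enabled, the second Send call above violates the pre-condition and, by default, triggers an assertion failure. If you would rather observe failures yourself, the Contract.ContractFailed event can be hooked; the handler below is illustrative only, not part of the sample solution:

```csharp
using System;
using System.Diagnostics.Contracts;

class Program
{
    static void Main(string[] args)
    {
        // Subscribe before exercising any contracted code. Calling
        // SetHandled() suppresses the default assert/escalation behaviour.
        Contract.ContractFailed += (sender, e) =>
        {
            Console.WriteLine("Contract violated: {0}", e.Message);
            e.SetHandled();
        };
    }
}
```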


In this post I've covered a very simplified overview of Code Contracts and a basic introduction to how you can use the .NET 4.0 System.Diagnostics.Contracts namespace to specify pre-conditions on methods, specifically using interface contracts.

This is very much the tip of the iceberg when it comes to the subject of Code Contracts.

One of the biggest benefits is the loose coupling of contracts from class code that interface contracts provide. The "Design-By-Contract" pattern seems a much neater implementation than the usual input-sanitisation and exception-throwing mechanisms that we typically deal with day-to-day.
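Beyond pre- and post-conditions, the same namespace supports the object invariants mentioned earlier: a private method marked with [ContractInvariantMethod] is re-checked after every public operation when runtime checking is on. A hedged sketch extending the SmsMessage class; the invariant shown is my own, not from the sample:

```csharp
using System.Diagnostics.Contracts;

public class SmsMessage
{
    public string messageBody;

    // Invariant method: re-evaluated at the end of every public method
    // when runtime contract checking is enabled.
    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(messageBody == null || messageBody.Length < 140);
    }
}
```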

What you need to get started

To get started you will need to download the Code Contracts project from DevLabs. This will add a "Code Contracts" panel to your project's properties (right-click on your project in Visual Studio and go to Properties). In order to perform runtime checks you will need to select the "Perform Runtime Contract Checking" option.

Download my solution code sample:

CodeContractsFirstLook.zip (66.36 kb)