Coding and Dismantling Stuff

Don't thank me, it's what I do.

About the author

Russell is a .Net developer based in Lancashire in the UK.  His day job is as a C# developer for the UK's largest online white-goods retailer, DRL Limited.

His weekend job entails alternately demolishing and constructing various bits of his home, much to the distress of his fiancée Kelly, 3-year-old daughter Amelie, and menagerie of pets.


Automating Test Coverage with PartCover, NUnit and Nant

Hi all,

I've been playing around recently with test coverage using PartCover - for those of you unfamiliar with PartCover, it's a free, open-source alternative to test coverage tools such as NCover or dotCover.

I should mention that the following is all based on the existing PartCover tool. Apparently there's a port in progress which resolves a couple of issues with the current PartCover (see https://github.com/sawilde/opencover), so I may revisit this topic in future as the port becomes stable.

So far I've mainly used PartCover through its Browser GUI, but this is a manual process that I have to make time for regularly. With an eye to automating things, there's also a command-line version of the tool, which uses the same configuration but writes its results out to a report file. If I can get this command into a NAnt build file, then every time I commit my code to my repository, I can get Jenkins-CI (my continuous integration tool) to run PartCover over my newly checked-in code. I could also potentially fail my build if I don't have a minimum percentage of code coverage (though it's important to point out that, on its own, this is a very poor marker of test quality!).

Step 1 - Required Files and Tools

To end up where I want to, we're going to need 6 things:

  1. An existing .Net solution and accompanying unit tests (of course I'll be using the soon-to-be-released FhemDotNet source in my examples)
  2. The MsBuild tool to compile the solution output
  3. A test runner to execute my tests - NUnit in my case
  4. The PartCover test coverage tool
  5. NAnt to allow me to automate things
  6. A continuous integration tool to run my NAnt script every time I check in a change.

The command line tools needed by my NAnt script (NUnit, PartCover and NAnt itself) are all to be included in my source code repository, so I have a folder structure something like the following:

c:/source/FhemDotNet/trunk/                  This is the project root on my local machine
    build/                                   My automated build process will produce all output in here
        FhemDotNet.Repository.Tests/         An example test project
            FhemDotNet.Repository.Tests.dll  And an example test dll
    src/                                     The FhemDotNet source code lives here - this is the real app
    tests/                                   These are test code projects, NUnit runs over these
    tools/NAnt/                              My automated build tool of choice
    tools/NUnit/                             My test runner of choice
    tools/PartCover/                         My test coverage tool
    FhemDotNet.sln                           My Visual Studio solution
    FhemDotNet.PartCover.xml                 The XML configuration shared by the PartCover GUI and the PartCover console app
    FhemDotNet.build                         The XML script used by NAnt
    build.bat                                A simple batch file to run the NAnt executable with the FhemDotNet.build file

Carrying the tools around with the code means that I can just check out the code and I'm good to go, but of course this trick only works with software that has an appropriate license, so I'm guessing dotCover and NCover are no-gos here.
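Incidentally, the build.bat in the project root is nothing more than a one-liner; something like the following sketch (the exact switch usage is an assumption - NAnt will also pick up the solitary .build file on its own if you omit it):

```
@echo off
rem Run NAnt against the build file in the project root
tools\NAnt\nant.exe -buildfile:FhemDotNet.build
```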

Step 2 - Configuring PartCover and NUnit

Using the above folder structure, once I've built my solution, I can open up a command line prompt, navigate to my project root, and run the following command to execute the tests in my DLL:

tools\nunit\nunit-console.exe build\FhemDotNet.Repository.Tests\FhemDotNet.Repository.Tests.dll

This seems to work: I get the standard NUnit console output saying which tests were run, which passed or failed, and which were ignored. It's important to get this command running correctly, because this is the information we're now going to feed into PartCover.

Let's fire up the PartCover GUI, then go to File > Run Target. The following screen should appear; I'm going to enter the information into the fields as below (note that for simplicity's sake I'm looking at only one assembly - typically I'd list each assembly I want to include in my PartCover analysis):

Executable File - ..\nunit\nunit-console.exe
Working Directory - ..\..\build\
Working Arguments - FhemDotNet.Repository.Tests\FhemDotNet.Repository.Tests.dll
Rules - [FhemDotNet.Repository]*

Click the Start button: you should see NUnit pop up, your tests should run, and you should be able to start navigating your code hierarchy to see where you do and don't have code coverage. If this doesn't work correctly, go back to the Run Target window and tweak the settings. Once you're happy with things, save the configuration by going back to the Run Target window one more time and clicking Save. I'm going to save my config in my project root and call the file "FhemDotNet.PartCover.xml".
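For reference, the saved configuration is a small XML file; mine looks roughly like the sketch below (the element names are from memory of PartCover's settings format, so check them against the file the GUI actually writes out):

```xml
<PartCoverSettings>
  <Target>..\nunit\nunit-console.exe</Target>
  <TargetWorkDir>..\..\build\</TargetWorkDir>
  <TargetArgs>FhemDotNet.Repository.Tests\FhemDotNet.Repository.Tests.dll</TargetArgs>
  <Rule>+[FhemDotNet.Repository]*</Rule>
</PartCoverSettings>
```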

Step 3 - Automating with NAnt

Let's fire up Notepad, and enter the following NAnt script - we're going to save this file in the project root alongside the PartCover configuration file, and call it "FhemDotNet.build".

<project name="FhemDotNet" default="test">
    <property name="partcover.dir" value="tools\PartCover" />

    <target name="test">
        <mkdir dir="build\reports" />
        <exec program="${partcover.dir}\PartCover.exe" workingdir="${partcover.dir}">
            <arg value="--settings &quot;..\..\FhemDotNet.PartCover.xml&quot;" />
            <arg value="--output &quot;..\..\build\reports\PartCover.xml&quot;" />
        </exec>
    </target>
</project>

This is an extremely minimal build file!  The code to actually produce and position your test DLLs is missing (though I've attached a more complete build file to put this into context).  However, there's enough here for us to run it.  Before we do, the important part to note is that I'm setting the "workingdir" attribute to the PartCover executable's directory.  Even though this means I've got to navigate back up my folder structure to get to my PartCover XML configuration file, it's required so that I can reuse the same XML file both here in my script and in the PartCover Browser.

To run the above script, open a command prompt, navigate to your project root, and enter "tools\nant\nant.exe".  The NAnt tool should find your one and only .build file, then fire up PartCover, which in turn will fire up NUnit, and you'll get a PartCover report at "build\reports\PartCover.xml".

Step 4 - Failing a Build on Code Coverage

When I first attempted this, I was hoping the PartCover report would have a nice clear "PercentCoverage" field, or at worst two fields, "LinesCovered" and "TotalLines".  No such luck!  It turns out there's not much summary information in the report that it spits out.  No worries, we can fix this with a bit of XPath and NAnt's xmlpeek task.  I've added the following lines to my build script immediately after the exec task. They're pretty straightforward: they sum up all of the lines of code seen and the lines of code that our tests hit, then use these numbers to fail the build.

        <property name="Test.ActualCoverage" value="${double::parse(Test.LinesCovered) / double::parse(Test.NumLines)}" />
        <property name="Test.MinimumCoverage" value="0.5" />

        <fail if="${double::parse(Test.ActualCoverage) &lt; double::parse(Test.MinimumCoverage)}"
              message="The solution currently has ${double::parse(Test.ActualCoverage) * 100}% coverage, less than the required ${double::parse(Test.MinimumCoverage) * 100}%" />
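The xmlpeek tasks that populate Test.LinesCovered and Test.NumLines sit between the exec task and the property lines above. A sketch, using the XPath expressions discussed in the comments below (the element and attribute names assume the shape of PartCover's report file, so adjust them to match yours - and note that combining two sum() calls in one expression needs a reasonably recent NAnt):

```xml
        <xmlpeek file="build\reports\PartCover.xml"
                 property="Test.LinesCovered"
                 xpath="sum(//Type/Method/pt[@visit > 0]/@len)" />
        <xmlpeek file="build\reports\PartCover.xml"
                 property="Test.NumLines"
                 xpath="sum(//Type/Method/pt/@len) + sum(//Type/Method[count(pt)=0]/@bodysize)" />
```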

Incidentally, when I was writing the above NAnt code I was surprised how many times I had to parse properties to cast them to the right type - if anyone can suggest a way around this I'd be keen to hear!

A Final Word of Warning

Finally, I'm just going to reiterate a point I made earlier. In the wrong hands, using test coverage as a measure of unit testing quality or effectiveness is a big mistake.  Your tests are only as good as your asserts. You can have 100% code coverage, but it only tells you something useful if your asserts cover all of the important external behaviour of your code (e.g. its return values, which dependencies it interacts with and how, expected exceptions, etc.).

In my case, I do a lot of work in fairly small chunks - maybe 20 minutes each way on the train to and from work, and an hour or so in an evening, so I'm often interrupted in my TDD / BDD cycle. I find coverage tools are a great safety net for this.

Full Example NAnt Script

As promised, here's a more complete NAnt script that shows the technique above in use in context.

FhemDotNet.build (4.18 kb)

Update - 19th June 2011

A quick update: based on a discussion in the comments below, I've found the discussion over at https://github.com/sawilde/partcover.net4/issues/46. It looks like there are actually two different ways to calculate the coverage through PartCover, which I've copied from the linked discussion for convenience:

  1. sum(pt[@visit>0].@len)/method.@bodySize - This is pretty much what I've described above, and is calculated based on the size of the IL code fragments that PartCover is seeing.
  2. pt[@visit>0].count()/pt.count() - This alternative method uses what the poster refers to as "Sequence Points".

I must admit I'm not an expert in how IL lengths and sequence points compare, but perhaps this second, simpler calculation will more closely match the source code that you see in Visual Studio? Anyone with any more detail on this, please feel free to add to the discussion below!
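To make the difference between the two concrete, here's a small Python sketch that computes both figures from a toy report fragment (the element and attribute names are my assumptions based on the XPath expressions in this post, not taken from a real PartCover report):

```python
import xml.etree.ElementTree as ET

# A toy fragment shaped like a PartCover report. The element and attribute
# names (Type/Method/pt, visit, len, bodysize) are assumed from the XPath
# expressions discussed in this post, not copied from a real report file.
report = """
<PartCoverReport>
  <Type name="FhemDotNet.Repository.ThermostatRepository">
    <Method name="GetThermostats" bodysize="30">
      <pt visit="2" len="12" />
      <pt visit="1" len="10" />
      <pt visit="0" len="8" />
    </Method>
  </Type>
</PartCoverReport>
"""

def coverage_by_il_length(root):
    """Method 1: sum(pt[@visit>0]/@len) divided by the total IL body size."""
    covered = sum(int(pt.get("len"))
                  for pt in root.iter("pt")
                  if int(pt.get("visit")) > 0)
    total = sum(int(m.get("bodysize")) for m in root.iter("Method"))
    return covered / total

def coverage_by_sequence_points(root):
    """Method 2: visited sequence points divided by all sequence points."""
    points = list(root.iter("pt"))
    visited = [pt for pt in points if int(pt.get("visit")) > 0]
    return len(visited) / len(points)

root = ET.fromstring(report)
print(coverage_by_il_length(root))        # (12 + 10) / 30
print(coverage_by_sequence_points(root))  # 2 / 3
```

On this fragment the IL-length method gives 22/30 (about 73%), while the sequence-point method gives 2/3 (about 67%) - so the two really can disagree on the same test run.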



Comments (10) -

Shaun Wilde (Australia) - 17 May 2011 11:23

Nicely done. I agree with you about the use of coverage results when testing - sometimes people put too much effort into getting 100% coverage over actually thinking about what they are doing. It's just a tool; use it wisely. I'll refer to this article from the PartCover wiki.

russell (United Kingdom) - 24 May 2011 09:59

Hi Shaun,

Good to hear from you, and looking forward to an entry in the PartCover wiki! :-)  Keep up the good work - for this type of continuous integration, especially for open source, PartCover is to the best of my knowledge the only tool out there.

Arne De Herdt (Belgium) - 17 June 2011 09:06

I've tried the build script you've placed on the site; however, I'm receiving the following error in the output from TeamCity:

C:\Robinson\trunk\Scripts\NantScripts\NantTestsRunner.build(371,6): Failed to select node with XPath expression 'sum(//Type/Method/pt/@len)+sum(//Type/Method[count(pt)=0]/@bodysize)'.

Yet when I look at the generated XML file, the data is filled in and present, so I'm a bit confused as to why I'm receiving this error.

russell (United Kingdom) - 17 June 2011 13:42

Hi Arne,

Have you tried testing the XPath against your generated file using an online XPath tester, such as http://chris.photobooks.com/xml/default.htm? If the file you're putting in matches the XPath that's in your code, then you might think about:

1. Are you pointing at the same file in your script as the one you're manually checking?
2. Are you on a different version of NAnt that maybe doesn't have the same XPath engine? (Doubtful, I know!)

Let me know how you get on, good luck.

Russ

Arne De Herdt (Belgium) - 17 June 2011 13:50

I had to split up the XPath expressions. Apparently our NAnt doesn't allow 'sum() + sum()' in a single expression.

At the moment I'm able to run our tests through the Gallio framework, and I'm getting results from the XML file using NAnt and a small XPath function I wrote that allows me to run your logic.
I've noticed something strange, however. PartCover says I only have 12% coverage and doesn't see that some code is called through the tests, whereas NCover does see this.

At the moment I only feed Gallio the DLL that includes the tests and let Gallio include the other DLLs as needed. Perhaps I need to specify more details?

The Chairman (Germany) - 17 June 2011 22:37

Well done tutorial, Russell. I integrated PartCover with NAnt more than a year ago and wish I'd had some of this precious advice then.

Important note: make sure you have at least NAnt version 0.91-alpha1 (May 29, 2010) installed. The release notes state they have improvements for "more advanced XPath functions" with xmlpeek in this version. Find my answer to a question regarding this issue on StackOverflow here:

http://stackoverflow.com/questions/6383680/nant-xmlpeek-issue/6391981#6391981

The Chairman (Germany) - 17 June 2011 22:44

@arne: PartCover and NCover coverage rates (NCover actually has three different coverage rates) are hardly comparable. I tried to integrate both into our CI environment and they just won't match. Stay with the tool you feel more comfortable with and don't try to compare.

Arne De Herdt (Belgium) - 19 June 2011 10:25

@The Chairman: I've been in contact with the developers of PartCover, and the results I'm getting seem to depend on how I calculate the values. The formula I used from Russell seems to be pseudo-coverage, and gives different results depending on how the test is executed and the functions are entered/exited.

I've also received some more detailed information on this at https://github.com/sawilde/partcover.net4/issues/46; with that information I should be able to apply the same logic that NCover does. The reason I want to use the same logic is that all our unit tests now "fail" due to insufficient coverage, which needs to be resolved by either using the same logic or adjusting the tests. I'd prefer going for the same logic instead of manipulating the tests.

russell (United Kingdom) - 19 June 2011 10:58

Thanks for the discussion so far guys, very interesting! I've added an update to the post so that no one can miss it.

Shaun Wilde (Australia) - 19 June 2011 23:48

FYI: I have started a new code coverage tool called OpenCover (though I'll continue to support PartCover) for reasons I have detailed here: http://scubamunki.blogspot.com/2011/06/opencover-first-beta-release.html. It gathers metrics by sequence point rather than IL; PartCover can do IL-only coverage (requiring no PDBs), hence why it has two sets of data in its report files.

Also, Daniel Palme's ReportGenerator project (http://reportgenerator.codeplex.com/) is very useful for PartCover (and now OpenCover).
