Coding and Dismantling Stuff

Don't thank me, it's what I do.

About the author

Russell is a .Net developer based in Lancashire in the UK. His day job is as a C# developer for the UK's largest online white-goods retailer, DRL Limited.

His weekend job entails alternately demolishing and constructing various bits of his home, much to the distress of his fiancée Kelly, 3-year-old daughter Amelie, and menagerie of pets.

TextBox

  1. Fix dodgy keywords Google is scraping from my blog
  2. Complete migration of NHaml from Google Code to GitHub
  3. ReTelnet Mock Telnet Server à la Jetty
  4. Learn to use Git
  5. Complete beta release FHEMDotNet
  6. Publish FHEMDotNet on Google Code
  7. Learn NancyFX library
  8. Pull RussPAll/NHaml into NHaml/NHaml
  9. Open Source Blackberry Twitter app
  10. Other stuff

Living With Tests - Some Gems Courtesy of Thoughtworks' Dan Moore

A few weeks ago I was fortunate to watch a presentation given by Dan Moore at the Thoughtworks office over in Manchester. The guy giving the talk was my sort of geek - I got the impression that he wasn't a lover of giving presentations, but he loved the tech enough that he just had to share it. I've been there! There were a few good little nuggets I took from the talk that, looking at my to-do list, I really wanted to blog about - for example page objects, the "mystery guest" and making tests fail elegantly - so without further ado, here we go.

Red, Elegant Red (Burgundy?), Green, Refactor

Everyone doing serious TDD knows the whole red-green-refactor cycle. But one area I'd never really given much thought to is how your tests fail. The flow is currently:

  1. Write a new test
  2. Get your app to build
  3. Watch your test fail
  4. Add the code to make it pass
  5. Refactor to make the code more manageable / flexible / whatever.

But there's a bit missing here! After you watch the test fail, and before you write the new functionality, ask yourself: is your failure a nice clear failure? Is your test failing elegantly? The errors you see in your tests at this point are likely to be pretty similar to the errors you'll see in your app one day when something goes wrong. (And don't kid yourself that you're watertight 'cos you've got tests!)

Make sure the failures you see in your red cycle are meaningful and helpful when they crop up in a log file one day. Rather than a bare "NullReferenceException", maybe they need a more meaningful description attached, for example "Order ABC123 has a null products collection".
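
As a rough illustration (the OrderService, Order and OrderNumber names here are made up, not from the talk), a guard clause like this turns a bare NullReferenceException into a failure you can actually diagnose from a log file:

public void TakePaymentForProducts(Order order)
{
  // Fail loudly and descriptively here, rather than letting a
  // NullReferenceException surface a few stack frames later.
  if (order.Products == null)
  {
    throw new InvalidOperationException(
      string.Format("Order {0} has a null products collection", order.OrderNumber));
  }

  // ... total up the products and call the payment service ...
}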

Separate Active vs Regression Integration Tests

Integration tests are expensive. It doesn't pay to have too many of them - in fact, the more time you spend doing cost-benefit analysis and the less time you spend writing the darn things, probably the better. Having said that, some integration tests, doing things like end-to-end smoke testing, are invaluable.

But when you're writing new integration tests, make sure you've got a way of separating these new tests from your existing stable integration tests. That way you can do things like configure your build environment to run the new integration tests on every check-in, and keep the stable integration tests in a nightly run. You're also going to be a lot happier just running the new tests locally while you're working on that area of functionality.
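
For example, as a minimal sketch assuming NUnit (MSTest and friends have similar category/trait mechanisms), you can tag the new tests with a category and let the build server include or exclude them:

using NUnit.Framework;

// New, still-settling integration tests get the "Active" category;
// once they've proven stable they move over to "Regression".
[TestFixture]
[Category("Active")]
public class CheckoutSmokeTests
{
  [Test]
  public void PlaceOrder_EndToEnd_TakesPaymentAndRaisesConfirmation()
  {
    // ... end-to-end smoke test goes here ...
  }
}

The NUnit console runner can then be pointed at one set or the other via its /include and /exclude switches, so the check-in build runs the Active set and the nightly build runs the lot.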

That one sounds obvious as hell when I've just written it down. Well, it felt new at the time.

Page Objects

In an earlier post on integration tests, I talked about the idea of building product-specific test APIs. Unbeknownst to me at the time, there's actually a name for this idea (or at least for a particular flavour of it): "Page Objects". There's a write-up on the subject over at the Selenium Wiki. It's nice to be able to attach a proper name to a pattern!
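
To make that concrete, here's a rough sketch of a page object (assuming the Selenium WebDriver .Net bindings; the URL, element IDs and HomePage class are all made up). The test only ever talks to LoginPage, so when the markup changes there's one place to fix:

using OpenQA.Selenium;

public class LoginPage
{
  private readonly IWebDriver _driver;

  public LoginPage(IWebDriver driver)
  {
    _driver = driver;
    _driver.Navigate().GoToUrl("http://localhost/login");
  }

  public HomePage LogInAs(string username, string password)
  {
    _driver.FindElement(By.Id("username")).SendKeys(username);
    _driver.FindElement(By.Id("password")).SendKeys(password);
    _driver.FindElement(By.Id("login-button")).Click();
    return new HomePage(_driver);
  }
}

public class HomePage
{
  private readonly IWebDriver _driver;

  public HomePage(IWebDriver driver)
  {
    _driver = driver;
  }

  // ... assertable properties about the home page go here ...
}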

The Mystery Guest

The Mystery Guest was one of my favourite little phrases from the talk - it so accurately sums up one of my pet hates in a unit test. Mystery Guests are the random little constants that crop up in your tests that no-one has bothered to label, and which, in retrospect, leave you with no idea what the hell they're there for.

Suppose you have the following test which uses some builder objects to create some products, and then asserts that a PaymentService is correctly exercised:

public void TakePaymentForProducts_TwoProducts_ChargesTotalCorrectly()
{
  // Arrange
  var productList = new List<Product> {
    ProductBuilder.Create(50),
    ProductBuilder.Create(120)
  };

  Mock<IPaymentService> paymentService = new Mock<IPaymentService>();

  // Act
  var orderService = new OrderService(paymentService.Object);
  orderService.TakePaymentForProducts(productList);

  // Assert
  paymentService.Verify(x => x.TakePayment(170));
}

What's so special about that 170 in the assert? Wouldn't it be nicer if we had a const in our Arrange section that says, "const expectedTotal = 170;", and then in our assert we've got "paymentService.Verify(x => x.TakePayment(expectedTotal));"?
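
Something like this, for instance (same hypothetical ProductBuilder as above, assuming the builder takes a decimal price):

// Arrange - name the mystery guests
const decimal firstPrice = 50;
const decimal secondPrice = 120;
const decimal expectedTotal = firstPrice + secondPrice;

var productList = new List<Product> {
  ProductBuilder.Create(firstPrice),
  ProductBuilder.Create(secondPrice)
};

// ... act as before ...

// Assert
paymentService.Verify(x => x.TakePayment(expectedTotal));

Six months from now, the test tells you what the 170 was supposed to mean without anyone having to reach for a calculator.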

Fluent Builder Patterns

On another familiar note for me, it was great to see an emphasis on good ol' fluent builder patterns. I've already written a little about these, so I won't go on too much here, except to say that they're awesome.
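
For anyone who hasn't met them, a minimal sketch (the Product and ProductBuilder types are made up, and it's a slightly different flavour to the static ProductBuilder.Create call in the test above): each With... method returns the builder itself, so tests read almost like a sentence and only spell out the values they actually care about.

public class ProductBuilder
{
  private string _name = "Some product";
  private decimal _price = 9.99m;

  public ProductBuilder WithName(string name)
  {
    _name = name;
    return this;
  }

  public ProductBuilder WithPrice(decimal price)
  {
    _price = price;
    return this;
  }

  public Product Build()
  {
    return new Product { Name = _name, Price = _price };
  }
}

// Usage: var product = new ProductBuilder().WithPrice(50).Build();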

Jetty - Something New!

And the last thing I took away was something I'd not come across before - a Java tool called Jetty. As far as I can tell, this tool allows you to spin up mock HTTP-based servers from within your tests. I'm definitely considering spinning up fake FHEM servers for my FhemDotNet project, so maybe I need to look more into this.
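
Jetty itself is a Java thing, but the same idea translates to .Net - here's a very rough sketch using System.Net.HttpListener (with a made-up canned response, and nothing Jetty-specific about it):

using System.Net;
using System.Text;
using System.Threading;

public class FakeHttpServer
{
  // Spin up an in-process listener that answers one request with a canned
  // body, so the code under test can point at http://localhost:8123/
  // instead of a real server.
  public void Start()
  {
    var listener = new HttpListener();
    listener.Prefixes.Add("http://localhost:8123/");
    listener.Start();

    new Thread(() =>
    {
      var context = listener.GetContext();
      var body = Encoding.UTF8.GetBytes("fake FHEM response");
      context.Response.OutputStream.Write(body, 0, body.Length);
      context.Response.Close();
      listener.Stop();
    }).Start();
  }
}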

Thanks again Thoughtworks, another excellent session.

