Practical Decisions on Testing: Using TDD

I think this will be the final post in this unplanned series on "Practical Decisions on Testing". Previous posts have covered:

In this post I wanted to consider the topic of test driven development (TDD). It's a technique that tends to divide developers into two camps: those that swear by it, and those that won't touch it (even if they are on board with unit testing itself). Over the course of my career I've used it on occasion and found some value, but on balance I fall mostly into the "tests after" camp, and using tests to drive the design of the code isn't something that has really become part of my programming workflow.

In particular, the full TDD approach, where you start by writing tests against classes and methods that don't yet exist, so the code doesn't initially even compile, isn't something I've found grabs me, perhaps due to my background in statically typed languages, primarily C#.

A peer of mine described the approach I generally follow as "spike and stabilise", a term which I think (now that I Google it) is used in slightly different contexts, but it still fits well: write some code that looks like it'll do the job, and then write tests to verify it, providing what I think I recall Martin Fowler calling the "double lock" that the code is correct. Then, with the tests in place, the red-green-refactor approach can be adopted to improve the code quality whilst maintaining working functionality.
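
To illustrate the "stabilise" half, here's a minimal sketch in C#. The ShippingCalculator and its free-shipping rule are made up purely for the example, and I've used xUnit for the tests, though any framework works the same way: the production code is spiked first, then tests are written afterwards to lock its behaviour in place before any refactoring.

```csharp
using Xunit;

// Hypothetical production code, "spiked" first without any tests.
public static class ShippingCalculator
{
    // Orders of £50 or more ship free; everything else pays a flat rate.
    public static decimal CalculateShipping(decimal orderTotal) =>
        orderTotal >= 50m ? 0m : 4.99m;
}

// The "stabilise" step: tests written afterwards to pin down the behaviour,
// giving the red-green safety net for any later refactoring.
public class ShippingCalculatorTests
{
    [Fact]
    public void OrdersUnderTheThresholdPayTheFlatRate() =>
        Assert.Equal(4.99m, ShippingCalculator.CalculateShipping(49.99m));

    [Fact]
    public void OrdersAtOrOverTheThresholdShipFree() =>
        Assert.Equal(0m, ShippingCalculator.CalculateShipping(50m));
}
```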

Having said that, I realise there are a couple of occasions where I do use a TDD approach, which I'll detail here. It may be that others who, like me, don't generally follow strict TDD will see there are some situations where it can be a particularly useful technique. Or others, who would like to be doing full TDD but aren't yet, can see these as ways into the approach.

The first is following a bug report. Many bugs raised in the web development projects I work on are around rendering issues on different browsers and devices. A true logic error is rare - as it should be with a solid unit testing approach. If one does appear though, I take that as an affront to the test coverage, and so the first thing to do is to recognise there's a missing (or incorrect) test case, and to create a failing test for it. Then we can move on to amend the code to fix the bug, ensuring the test (and all others) go green, and the bug won't reoccur.
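
As a sketch of that, suppose the bug report says VAT comes out wrong on zero-value orders. Before touching the production code, I'd add something like the following failing test (the VatCalculator here is hypothetical, again with xUnit for the example):

```csharp
using Xunit;

public class VatCalculatorTests
{
    // VatCalculator is a made-up class standing in for the real code under test.
    // This test is written before the fix: it reproduces the reported bug, so it
    // fails first (red) and only goes green once the production code is corrected.
    // Once in place, it also stops the bug from quietly coming back.
    [Fact]
    public void Vat_OnZeroValueOrder_IsZero()
    {
        var vat = VatCalculator.Calculate(netAmount: 0m, rate: 0.20m);

        Assert.Equal(0m, vat);
    }
}
```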

A second comes when working with something that is pure logic, like an algorithm. In that case I find it can be a productive way to work, creating tests and the code to make them pass iteratively, building up the logic as I go. So whilst I don't find it useful to write tests first for, say, a not-yet-compiling MVC controller, a single function with some relatively complex logic calculating a value seems to fit the technique much better for me.
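
To make that concrete, here's a made-up example of the kind of pure function I'd happily build test-first (the DeliveryDateCalculator and its dispatch rules are invented for illustration). In practice each test case gets added one at a time, with just enough implementation written to turn it green before moving on:

```csharp
using System;
using Xunit;

// Hypothetical pure logic of the sort that suits test-first development:
// the class and its rules are invented purely for this example.
public static class DeliveryDateCalculator
{
    // Orders dispatch the next working day, skipping weekends.
    public static DateTime NextDispatchDate(DateTime ordered)
    {
        var date = ordered.Date.AddDays(1);
        while (date.DayOfWeek == DayOfWeek.Saturday || date.DayOfWeek == DayOfWeek.Sunday)
            date = date.AddDays(1);
        return date;
    }
}

public class DeliveryDateCalculatorTests
{
    [Fact]
    public void MidweekOrderDispatchesTheNextDay() =>
        Assert.Equal(new DateTime(2024, 6, 12),   // Wednesday
            DeliveryDateCalculator.NextDispatchDate(new DateTime(2024, 6, 11))); // Tuesday

    [Fact]
    public void FridayOrderDispatchesOnMonday() =>
        Assert.Equal(new DateTime(2024, 6, 17),   // Monday
            DeliveryDateCalculator.NextDispatchDate(new DateTime(2024, 6, 14))); // Friday
}
```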
