7 Sep 2009

Code Coverage Metrics and TDD

I am wary of metrics as a "Done" marker. The most common is something like "code coverage must be at least 75%". That is a strange position to take:

  • we are happy with a quarter of the entire codebase remaining untested;
  • there is no consideration of what should be tested (the hard stuff or the easy stuff).
"I know that half of my advertising is wasted—I just don't know which half."--John Wanamaker

I prefer to use metrics simply for insight into my mastery of practices.

With Test Driven Development (TDD), one should always write a failing test before adding production code. Thus, if I practice TDD perfectly, my code coverage should be 100%.
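As a sketch of that discipline (a hypothetical `add` function, using Python's built-in `unittest`): the test exists before the production code, fails when first run, and only then is just enough code written to pass it.

```python
import unittest

# Step 1 ("red"): write the test first. Run at this point, it fails,
# because `add` does not yet exist.
class AddTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2 ("green"): write just enough production code to pass the test.
# Every line of `add` exists only because a failing test demanded it,
# so every line is covered.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()
```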

I never manage to practice TDD perfectly.

Inevitably, within a programming session I'll inadvertently add a loop, conditional, or exception block without first writing a failing test. This may be down to the language, the IDE, an ingrained idiom, or simply a lapse in concentration.
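A minimal sketch of that lapse (hypothetical `average` function): the happy path was test-driven, but the guard clause was added reflexively, with no failing test to justify it. A coverage tool such as coverage.py would report that branch as never executed.

```python
# Production code: the happy path was test-driven, but the guard clause
# below slipped in without a failing test first -- an ingrained idiom.
def average(values):
    if not values:          # untested branch: no test exercises this arm
        return 0.0
    return sum(values) / len(values)

# The only test covers the non-empty case, so a branch-coverage report
# flags the `if not values` arm -- exactly the visibility described above.
def test_average_of_non_empty_list():
    assert average([2, 4, 6]) == 4.0
```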

The coverage metric gives me some visibility of those events, and that awareness may improve my TDD practice.