
The Value of Measuring Code Coverage

Code coverage is the percentage of production code that is executed by your automated test suite. Writing automated tests has been a widely accepted industry practice for years now, particularly for unit testing. Test-driven development (TDD) is a practice I use that helps achieve high coverage: in theory, perfect adherence to TDD results in 100% line and branch (conditional) coverage.
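
To make the distinction between line and branch coverage concrete, here is a minimal sketch - the article is not tied to any particular language, so I am assuming Python, and the function and test names are purely illustrative. A single test that exercises only the matching promo code executes every line, so line coverage reports 100%, yet the path where the condition is false is never taken, which only branch coverage will reveal.

    def apply_discount(total, code):
        """Apply a flat discount for a known promo code."""
        if code == "SAVE10":
            total -= 10  # only executed when the condition is true
        return max(total, 0)

    def test_discount_code_reduces_total():
        # This one test executes every line of apply_discount, so line
        # coverage is 100% - but the branch where the promo code does NOT
        # match is never taken, so branch coverage reports a gap.
        assert apply_discount(50, "SAVE10") == 40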

Is there value in measuring code coverage? I had the opportunity to evaluate this on a recent personal project where I had no pre-existing development tooling to leverage and had to set up everything from scratch. In my drive to deliver business value early, I focused on build and deployment automation and neglected continuous integration - after all, I was the only developer. But this left me without a way to measure code coverage. Worse, I was without peers and their code reviews to keep me honest.

Over a few months the code base grew and I encountered a number of quality issues that left me dissatisfied. Having had code coverage measurement in place for years on previous projects, I developed an uncomfortable feeling that my coverage might not be as good as I assumed. I knew I was not always adhering to TDD, particularly for user interface-related code or when experimenting with new technologies and frameworks (which I was doing a lot of). So I finally set up continuous integration, which turned out to be easy enough that I should have done it far sooner.

What was my code coverage? I had 78% line coverage and 65% branch coverage. I considered this a failing grade, since my normal standard is 85%+ line coverage and 80%+ branch coverage. As I reviewed the gaps, I found some easy places to add coverage, which meant I had clearly missed writing tests during development. In adding those tests, I found a minor defect. Other gaps were higher effort and lower value to automate. But in at least one case, once I spent a little time creating a few test helpers, the seemingly higher-effort tests suddenly became much easier to write.
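
As a hypothetical sketch of the kind of test helper I mean (not the actual helpers from my project), a small builder function can construct a valid object with sensible defaults, so each test only spells out the fields it cares about:

    import dataclasses

    @dataclasses.dataclass
    class Invoice:
        customer: str
        total: float
        currency: str = "USD"
        paid: bool = False

    def make_invoice(**overrides):
        """Test helper: builds a valid Invoice with sensible defaults so each
        test only specifies the fields it actually cares about."""
        defaults = {"customer": "Test Customer", "total": 100.0}
        defaults.update(overrides)
        return Invoice(**defaults)

    def is_overdue(invoice, days_outstanding):
        """Example production rule: unpaid invoices over 30 days are overdue."""
        return not invoice.paid and days_outstanding > 30

    def test_old_unpaid_invoice_is_overdue():
        # The helper keeps the test focused on the two inputs that matter here.
        assert is_overdue(make_invoice(paid=False), days_outstanding=45)

    def test_paid_invoice_is_never_overdue():
        assert not is_overdue(make_invoice(paid=True), days_outstanding=45)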

I never did address many of the gaps, especially in the earliest code I wrote, much of which was written in 'experimental' mode with little coverage. I ended up at 85% line coverage and 75% branch coverage, meeting my line standard but still falling slightly short on branch coverage.

Subsequently I worked on a new feature that was heavy on user interface work, experimentation, and new framework features. As a result, I used TDD less and wrote fewer tests. But now I was able to check the code coverage, confirm it was unacceptably low, and add more tests afterwards.

So in my experience, I believe measuring code coverage adds value and is a necessary tool in a software developer's toolbox. Measuring code coverage does not mean just running a coverage tool on your continuous integration server - you need to review the coverage results regularly and take action to address the gaps. Having sufficient code coverage should be part of your definition of done.
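
One way to make coverage part of the definition of done is to have the continuous integration build fail when coverage drops below your thresholds. As a sketch - again assuming Python, and assuming the coverage tool can emit a Cobertura-style coverage.xml report (coverage.py does, via 'coverage xml') - a small gate script could enforce separate line and branch thresholds:

    import sys
    import xml.etree.ElementTree as ET

    # Thresholds matching the standards described above.
    LINE_THRESHOLD = 0.85
    BRANCH_THRESHOLD = 0.80

    def check_coverage(report_path="coverage.xml"):
        """Fail the build if line or branch coverage is below the thresholds.

        Assumes a Cobertura-style XML report whose root element carries
        line-rate and branch-rate attributes.
        """
        root = ET.parse(report_path).getroot()
        line_rate = float(root.get("line-rate", 0))
        branch_rate = float(root.get("branch-rate", 0))
        print(f"line coverage: {line_rate:.0%}, branch coverage: {branch_rate:.0%}")
        if line_rate < LINE_THRESHOLD or branch_rate < BRANCH_THRESHOLD:
            sys.exit("Coverage below threshold - add tests before merging.")

    if __name__ == "__main__":
        check_coverage()

Many coverage tools also offer a built-in minimum-threshold option; a custom check like this is mainly useful when you want to enforce line and branch thresholds separately.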

If you find this article helpful, please make a donation.
