
Why Coding is not Enough

If the goal of software development is to produce working software, then developers need to know more than just how to code - they need to know how to prevent or eliminate functional and non-functional defects.

Too many developers think their job is complete once a feature has been coded. Sometimes they think that it is the tester's job to find defects. Sometimes they think defects in released code are unavoidable and normal, so not worth worrying about. Sometimes they believe their code is perfect - it cannot possibly have defects. I encounter developers with these attitudes with unfortunate frequency. I also encounter development managers who are surprised to encounter such attitudes. A while back I talked to one manager who was shocked to learn that one group of developers under her was assuming their code worked if it compiled successfully - there were no reviews or testing of any sort being done. So I hope with this article to raise awareness amongst developers that coding is simply not enough to produce working software, and to raise awareness amongst development managers that they need to ensure the appropriate systems are in place to support this.

The reality is that even the most diligent developers inject defects into their code at a surprisingly high rate. Defect rates are often expressed as the number of defects per one thousand lines of code (KLOC). Industry statistics on defect rates are rather hard to find and vary significantly, partly because the definition of defect varies. According to the book Best Kept Secrets of Peer Code Review, several studies have reported defect rates in the range of 10 to 100 defects per KLOC. This works out to one defect per 10 to 100 lines of code.

On my most recent project I decided to calculate the defect rate for a particularly error-prone feature. Counting only defects found by independent testers after code reviews and unit testing were done, and using a KLOC count that excludes comments and blank lines, this feature had 20 defects in roughly 850 lines of code - a defect rate of roughly 24 defects per KLOC, or one defect for every 40 lines of code. This may seem reasonable, but remember that this is after multiple code reviews and automated unit testing had already found and eliminated a number of defects. (How many I do not know, as those kinds of defects are not tracked.) And there may be yet-to-be-found defects still lurking in this code. So the actual defect injection rate is higher, perhaps much higher.
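If you want to run the same calculation on your own project, the arithmetic is simple enough to sketch in a few lines of Python. This is just an illustration of the formula above - the function name and inputs are my own, not from any particular tool:

```python
def defect_rate_per_kloc(defect_count: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC).

    lines_of_code should exclude comments and blank lines,
    matching how the count was done in the example above.
    """
    return defect_count / (lines_of_code / 1000.0)


def lines_per_defect(defect_count: int, lines_of_code: int) -> float:
    """Average number of lines of code per defect."""
    return lines_of_code / defect_count


# The feature discussed above: 20 defects in ~850 lines of code.
rate = defect_rate_per_kloc(20, 850)      # ~23.5 defects per KLOC
spacing = lines_per_defect(20, 850)       # ~42.5 lines per defect
print(f"{rate:.1f} defects/KLOC, one defect every {spacing:.0f} lines")
```

Remember that, as noted above, a number like this only counts the defects you found - the true injection rate is higher.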

Defect rates have such a wide variance, even between developers working on the same code base, that they are unfortunately not a reliable metric for predicting defect counts. My main point in discussing them is to emphasize just how frequently defects are introduced.

Coding, therefore, is simply not enough. Every developer needs to have a personal system for preventing and eliminating defects, which should integrate into the system / processes used by the development team to produce high-quality working software. For ideas on how to assemble such a system check out my definition of done that identifies a number of defect elimination activities.

If you find this article helpful, please make a donation.

2 Comments on “Why Coding is not Enough”

  1. Mike T. says:

    If you haven’t read it yet, you might want to consider the TSP/PSP process (http://www.sei.cmu.edu/tsp/tools/index.cfm). We looked into it when I was at Intuit and it deals with a number of these sorts of issues with personal and group code reviews, reducing errors, etc. Not all of it works with Agile methodologies, but I think it could be adapted and has some great potential.

  2. I have read enough about the PSP/TSP to be intrigued by them and I would like to study them more. I am a bit worried that they are high-discipline processes that would make adoption difficult.
