I wrote previously about the process I went through in adopting test-driven development (TDD). In this article I discuss my experience with TDD: the benefits, the limitations, and the techniques I use when doing TDD.
This section covers the benefits, as I see them, of doing TDD. This does not include the benefits of doing automated unit testing, which I am a big fan of and have been doing for years using a non-TDD approach (i.e. writing tests after writing production code).
- Using TDD provides great code coverage, especially conditional code coverage. Strictly following Robert C. Martin’s three rules of TDD should result in 100% coverage for both statements and conditionals. I allow myself to deviate from these rules at times, but still obtain 90+% coverage.
Previously, when writing tests after coding, part of my process for ensuring I was done with a feature was to check the code coverage results to identify gaps and add missing tests. I have found that this code coverage check is mostly a formality when doing TDD.
- TDD helps avoid the tedium that I have experienced at times writing the tests after coding. Often while doing TDD I am able to establish what I call the red-green rhythm: write a failing test (unit test result bar goes red), then get it to pass (bar changes to green). This rhythm makes it more enjoyable to write the tests, although the tedium is not completely eliminated. As a result, I find it takes less discipline to write the tests first than afterwards: I do not have to force myself to write a bunch of tests after getting some functionality in place.
- When I started using TDD I was initially uncomfortable with writing the simplest possible code to get a failing test to pass (TDD rule #3) when I knew that the final production code would be different. As my experience using TDD increased, I began to see a number of benefits of following this rule:
- Writing more code than the minimum necessary to pass the test runs the risk of having logic in the production code (statements or conditions) not covered by the tests.
- At times, when the design for the method / class was a little fuzzy to me, writing the simplest possible code actually proved helpful, as I could then write another failing test which then clarified for me what the design would need to be.
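A minimal sketch of this progression, using a hypothetical discount calculator in Python (the function name and business rules are my own invention for illustration, not from the article):

```python
# Step 1: the first failing test drives the simplest possible implementation.
def test_no_discount_below_threshold():
    assert discount(total=50) == 0

# Simplest code that passes: return exactly what the one test expects.
def discount(total):
    return 0

# Step 2: a second failing test clarifies what the design needs to be...
def test_ten_percent_discount_at_100_or_more():
    assert discount(total=100) == 10

# ...and forces the implementation to generalize; the earlier test
# still passes, so the suite stays green.
def discount(total):
    return total * 0.10 if total >= 100 else 0
```

The point of the sketch is that the intermediate `return 0` is not wasted work: the second red test is what tells me the hard-coded answer is no longer enough.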
- Using TDD, especially when strictly following the three rules, seems to eliminate the question / debate about how much to test. When introducing unit testing to developers (not TDD, just the use of automated unit tests), I get asked this question time and time again. I have a standard answer I use, but with strict TDD the question itself no longer applies.
When adopting a new practice it is important to know the contexts in which the practice is less applicable. I have come across a number of situations in which TDD seemed less helpful. These were the situations when I was most likely to deviate from the three rules of TDD or abandon TDD entirely.
- I prefer to have the design of the method / class I am working on fairly clear in my head before I start writing tests. Sometimes I do this on paper, but sometimes I do this by working with the actual code, which is a deviation from TDD. Since adopting TDD I have tried to do this design in the test code instead of in production code, with mixed results. Sometimes this has worked fine, and sometimes it has felt unnatural and less productive. I have the feeling that as I gain experience with TDD I will grow more comfortable doing design within test code.
- When I code a non-trivial algorithm, I often extract logic into separate methods on the same class or invoke new methods on collaborating classes. I typically write these additional methods as the minimum needed to get my failing unit test to pass, which means I am strictly following TDD. The issue is that my normal unit testing practice is to test methods individually as much as possible, especially methods on other classes, and the rules of TDD do not require me to do so. In essence, my original failing unit test ends up being an integration-style test for the overall algorithm, while I want individual unit tests for the methods making up the algorithm. So the limitation of TDD is not that it cannot be applied (I am applying it) but that it is not enough to ensure what I consider a sufficient level of testing. See the Techniques section below for how I address this limitation.
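To illustrate the limitation, here is a hypothetical sketch in Python (the class and method names are invented for this example): one failing test drives the whole algorithm, and the extracted helper methods are only ever exercised indirectly.

```python
class ShippingCalculator:
    def quote(self, weight_kg, distance_km):
        # Non-trivial algorithm assembled from extracted helpers.
        return self._base_rate(weight_kg) + self._distance_fee(distance_km)

    def _base_rate(self, weight_kg):
        # Extracted while making the quote() test pass; it has no test of its own.
        return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

    def _distance_fee(self, distance_km):
        # Likewise only covered indirectly through quote().
        return 0.1 * distance_km

# The single TDD-driven test is effectively an integration-style test
# for the overall algorithm.
def test_quote_combines_weight_and_distance():
    assert ShippingCalculator().quote(weight_kg=2, distance_km=100) == 17.0
```

The three rules are satisfied, yet neither helper has a test that pins down its individual behavior, which is the gap the technique below is meant to close.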
- In situations where automated unit tests are not applicable, TDD obviously does not apply. Some people would insist that everything be unit tested, and I agree that it is a goal to aspire to, but in some circumstances I feel that unit testing is not pragmatic. Situations where I am unlikely to use automated unit tests and hence TDD include:
- Prototyping or other exploratory-style work such as an architectural spike. However, when trying to understand the behavior of third-party libraries, I often do find it helpful to do this via unit tests.
- User interfaces such as web pages or emails. Automated tests can be used to verify that the web page or email content is produced without failure, but the actual content and formatting is best checked by a human.
I have adopted several techniques, beyond the three rules, to fit TDD to my style of coding. I prefer to think of them as tips or tricks of the trade rather than firm rules.
- I developed a technique to address the limitation of TDD discussed above regarding the creation of new methods on collaborating classes under a single failing unit test. When I go to create a new method on a different class, I recursively apply TDD. So before creating the new method, I create a new test for it on a different test class corresponding to this other class. This means that I now have two failing tests, not one, so I modify the TDD rules and just run the tests of this second class while working on this new method. Once I am done with the method, I can run the suite, confirm I have just the one original failing test, and resume working on the original method.
- There are often times when, upon getting the current tests to pass, there are multiple scenarios to select from for the next test. Which one to pick? My own preference is to choose a scenario that will fail given the current production code, not one that will automatically pass. The reason for this preference is that it helps maintain the rhythm of alternating between failing and passing tests. After all of these failure-inducing scenarios are tested, I go back and add the scenarios I expect to pass. At this point, while I am still technically following TDD, it feels like my old approach of writing the tests afterwards: the production code is finished, and I am testing the remaining scenarios to confirm it is correct.
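A small Python sketch of this selection, using an invented grading function. Given the production code so far, one candidate scenario would go red and one would go green immediately; the red one is chosen first:

```python
# Production code after the last green bar: only two grades handled so far.
def classify(score):
    return "A" if score >= 90 else "B"

# Preferred next test: it FAILS against the code above (classify(79)
# currently returns "B"), so it keeps the red-green rhythm going.
def test_scores_below_80_are_c():
    assert classify(79) == "C"

# Getting it to pass means extending the production code.
def classify(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"

# A scenario expected to pass already, such as classify(85) == "B",
# is saved for the confirmation pass at the end.
```

Writing `classify(85) == "B"` first would have produced an immediate green bar with no change to the production code, breaking the alternation the article describes.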
Overall, I have found test-driven development to be a very effective process for producing high-quality code, and I plan to continue using it. I highly recommend that every developer experiment with TDD and evaluate the benefits and limitations for themselves.
If you find this article helpful, please make a donation.