
My Definition of Done

I recently wrote about why you need a definition of done, and it only seems logical to follow up by presenting the definition of done I use for developing software.

I use two guiding principles as the basis for constructing my definition.

  1. Potentially releasable: Ideally the software can be released (or shipped) once it is done. I've seen many people, particularly in the context of Scrum, use the similar term "potentially shippable".
  2. I would trust my life to my code: I just wrote an entire article on this titled Would You Trust Your Life To Your Code?

These principles are deliberately idealistic in order to set high expectations and motivate continuous improvement when I fall short of reaching them.

Different definitions of done can be created based on different levels or scopes. The two primary scopes are:

  1. Done for a feature / user story.
  2. Done for a release.

For this article I am using my definition of done for developing a feature (user story).

My definition of done is essentially a checklist with items grouped into categories. The lists of items and categories are not meant to dictate the process or chronological order in which the items are done. For example, automated unit testing is listed in a separate category from coding, but it is typically done at the same time or beforehand when doing test-driven development.

Without further ado, here is my definition of done.

Coding

  1. Code meets functional requirements.
  2. Code meets non-functional requirements. Typical ones include:
    • Performance (capacity, scalability)
    • Usability
    • Security
    • Maintainability
  3. Code is deployment-ready. This means environment-specific settings are extracted from the code base. A past article I wrote on designing for deployability provides more context on this.
  4. Code is checked into version control.
  5. Code complies with coding & architectural standards.
  6. Code has been cleaned up. The goal is to ensure the code is easily readable and has a good design. In the past I have used the terms polishing code and refactoring to describe this. Robert C. Martin's book Clean Code: A Handbook of Agile Software Craftsmanship provides the best explanation of this that I have seen. Achieving this goes a long way towards meeting the maintainability requirement.
  7. All TODO-style comments in the code have been addressed and removed.
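Item 3 above, extracting environment-specific settings from the code base, can be sketched in a few lines. This is a minimal illustration, not the approach from my deployability article; `AppConfig` and the property/variable names are hypothetical, and the resolution order (JVM system property, then environment variable, then a development default) is just one reasonable convention.

```java
// Sketch: environment-specific settings resolved at runtime rather than
// hard-coded, so the same build artifact can be promoted across environments.
public class AppConfig {

    // Resolution order: JVM system property, then environment variable,
    // then a safe development-only default.
    public static String databaseUrl() {
        String fromProperty = System.getProperty("app.db.url");
        if (fromProperty != null) {
            return fromProperty;
        }
        String fromEnv = System.getenv("APP_DB_URL");
        if (fromEnv != null) {
            return fromEnv;
        }
        // Development default; production must supply a real value.
        return "jdbc:h2:mem:dev";
    }
}
```

With this in place, deployment scripts set `-Dapp.db.url=...` (or the environment variable) per environment, and no code change is needed to move a build from test to production.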

Static Code Analysis

  1. Code has been analyzed by static code analysis tools. The two primary tools I use for Java development are the Eclipse compiler and FindBugs. Other Java tools include PMD, Checkstyle, and Architecture Rules.
  2. All errors and warnings found by the tools have either been corrected or have been suppressed with a comment indicating the reason for suppression.
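In Java, item 2 above typically looks like the sketch below: a `@SuppressWarnings` annotation scoped as narrowly as possible, with an adjacent comment justifying the suppression. `LegacyAdapter` and its documented contract are hypothetical, invented purely for illustration.

```java
import java.util.List;

public class LegacyAdapter {

    // Suppression justified per item 2 above: the (hypothetical) legacy API
    // returns a raw List that its documentation guarantees contains only
    // Strings, so the unchecked cast is safe here. Scope the annotation to
    // the single method rather than the whole class.
    @SuppressWarnings("unchecked")
    public static List<String> names(List rawNames) {
        return (List<String>) rawNames;
    }
}
```

The comment is the important part: a reviewer (or a future maintainer) can see at a glance why the warning was deemed a false positive rather than silently ignored.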

Testing

  1. Automated unit tests are written. The tests should be high quality (e.g. not brittle).
  2. Automated integration tests are written that verify interactions with external systems such as the application database or third-party application / web services.
  3. Code coverage achieved by the automated tests is measured and sufficient coverage is achieved. I use Cobertura for measuring code coverage. I do not like using a numeric percentage target as the sole definition of sufficient coverage, because this can encourage people to write poor-quality tests that merely execute the code, rather than verify its correct behavior, in order to meet the target. My true definition of sufficient coverage is that the tests execute and verify all code that could reasonably be incorrect. Having said this, I generally aim for at least 80% line (statement) coverage overall, and often achieve 90%+ coverage for individual classes. I am still debating what a reasonable target is for branch (conditional) coverage. I currently aim for at least 50% overall, but I have the feeling that a target of 75% would be better.
  4. Functional testing by someone other than the developer has been done. Ideally this testing will be done by the customer, involve exercising the complete feature being coded in the way that users would use it, and be fully automated. More frequently I have seen this testing done manually (especially for user interfaces) by business analysts or testers who act as proxies for the customer. The key idea is to have someone other than the developer do testing to validate the assumptions and interpretations made (often implicitly) by the developer.

    I use the vague term "functional testing" rather than the more common terms "system testing" or "user acceptance testing" because projects can differ dramatically in what is done for system or user acceptance testing. If acceptance testing is done in a waterfall fashion as a separate phase near the end of the project then it cannot be part of the definition of done for a feature (but it is still part of the definition of done for the release). So I use the term "functional testing" to indicate this potential differentiation. Ideally, based on lean principles, all testing including system and user acceptance testing should be done as part of the work on a feature and not artificially delayed till later.
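To make concrete what "high quality (e.g. not brittle)" means in item 1 above, here is a minimal sketch of a behavior-focused unit test. In practice this would be a JUnit test; plain assertions keep the example self-contained. `PriceCalculator` is a hypothetical class under test, not code from the article.

```java
// Hypothetical class under test: computes an order total with a discount.
public class PriceCalculator {
    public static long totalCents(long unitCents, int quantity, int discountPercent) {
        long gross = unitCents * quantity;
        return gross - (gross * discountPercent / 100);
    }
}

// A non-brittle test verifies observable behavior (the total charged),
// not internal details, so it survives refactoring of the implementation.
class PriceCalculatorTest {
    static void run() {
        assert PriceCalculator.totalCents(250, 4, 0) == 1000;  // no discount
        assert PriceCalculator.totalCents(250, 4, 10) == 900;  // 10% off 1000
    }
}
```

A brittle version of this test would instead assert on how the discount was computed internally; it would break on any refactoring even when the behavior stayed correct.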

Reviewed

  1. Design / approach has been reviewed by the technical lead / architect.
  2. Detailed peer review / inspection has been done. If pair programming is being used then the peer review is automatically done at the same time as the coding. Otherwise, the reviewer should focus on issues that are less likely to be found by the static code analysis or automated testing. This can include items such as security holes, concurrency issues, and correctly meeting requirements.
  3. Issues identified by reviewers have been resolved to the reviewers' satisfaction.

Other

  1. Required documentation has been updated. This may include online help, user manual, or operations manual.
  2. Build and deployment scripts and related configuration files have been updated.
  3. No known defects are outstanding unless the customer has agreed to defer them, in which case they should be logged.
  4. No known tasks related to the feature are outstanding.

That concludes my definition of done. I would appreciate hearing about what you use for a definition of done. In particular, if there is anything you think should be added or removed from my definition please let me know via a comment below.

If you find this article helpful, please make a donation.
