
Using Feature Done Checklists

I have written previously about the importance of having a definition of done that the team understands and adheres to. At the level of features (use cases, user stories, etc.), a comprehensive definition of done will often consist of a number of items, some involving non-developer roles such as tester, business analyst, and architect. Tracking the status of a given feature and ensuring it gets to done is thus non-trivial.

One solution to this problem that I have worked with and now recommend is to use a feature done checklist to track the completion of the definition of done for a specific feature. In essence, the checklist consists of the major elements of the definition of done listed as rows in a table, with additional columns for the person verifying completion of each item to enter their name and the date. Here is a simple example:

Feature:

Item                                        Person Verifying    Date Verified
Coding completed
Peer reviewed
Architect reviewed
Testing completed
Meets user requirements
All defects resolved and tasks completed
Feature is Done!

You need to decide whether to use digital or printed checklists. I prefer using paper-based checklists for several reasons:

  • I find it more meaningful to sign a piece of paper than to add my name electronically, so I tend to treat the completion of the checklist more seriously than I would an electronic form. This attitude may be due to my engineering background.
  • A paper copy is more tangible and real. It can be waved around at a team meeting to celebrate the completion of a feature or posted on a team wall. It can be placed on a coworker’s keyboard in order to focus their attention on an outstanding item.
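
If your team does opt for an electronic checklist, the table above maps naturally onto a simple record per feature. Here is a minimal sketch in Python; the item descriptions come from the example table, while the class and field names are hypothetical rather than taken from any particular tool:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class ChecklistItem:
        """One row of the feature done checklist."""
        description: str
        person_verifying: Optional[str] = None  # name of the person signing off the item
        date_verified: Optional[date] = None

    @dataclass
    class FeatureChecklist:
        """The checklist for a single feature: one row per definition-of-done item."""
        feature: str
        items: List[ChecklistItem] = field(default_factory=lambda: [
            ChecklistItem("Coding completed"),
            ChecklistItem("Peer reviewed"),
            ChecklistItem("Architect reviewed"),
            ChecklistItem("Testing completed"),
            ChecklistItem("Meets user requirements"),
            ChecklistItem("All defects resolved and tasks completed"),
            ChecklistItem("Feature is Done!"),
        ])

        def is_done(self) -> bool:
            # The feature is done only when every item has been signed off.
            return all(item.date_verified is not None for item in self.items)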

Evolving the Checklist

I have found it useful on occasion to evolve the basic feature done checklist beyond the team’s definition of done to provide support for the team’s development process. This is usually done as part of continuous improvement to resolve issues identified during retrospectives. Here are some examples:

  • On one project we had issues with developers rushing into coding new features without fully understanding the requirements, so we added an initial checklist item for feature clarification to focus attention on the activity.
  • For each item you can specify the role (e.g. developer, tester, architect) responsible for signing that it is done. I strongly recommend doing this for two reasons: it removes confusion over who signs off each item, and it reduces the risk of the feature's developer quickly signing off every item, without doing the necessary checking, just so they can say the feature is done.
  • You can specify dependencies between the items. On one project we had a problem with people treating the items as a linear sequence of activities, so we defined dependencies to explicitly show that some activities could be done in parallel.
  • You can add a feature start date to indicate when work began on the feature. Combined with the feature done date, this lets you calculate the elapsed time spent on the feature. In lean terminology, this is the cycle time for the feature, which you want to minimize. When aggregated across the team, this gives the team's overall throughput in completing features (see the sketch following this list).
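
To make these evolutions concrete, here is a sketch of how an electronic version of the evolved checklist might capture the responsible role, item dependencies, and the dates needed for the cycle time and throughput calculations. It builds on the hypothetical record sketched earlier; all names and dates are illustrative assumptions, not from any specific tool:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class EvolvedChecklistItem:
        description: str
        role: str                      # role responsible for signing off this item
        depends_on: List[str] = field(default_factory=list)  # items to sign off first
        person_verifying: Optional[str] = None
        date_verified: Optional[date] = None

    def cycle_time_days(started: date, done: date) -> int:
        # Lean cycle time: elapsed days from starting the feature to signing it off as done.
        return (done - started).days

    def throughput(done_dates: List[date], period_start: date, period_end: date) -> int:
        # Team throughput: number of features completed within a period (e.g. an iteration).
        return sum(1 for d in done_dates if period_start <= d <= period_end)

    # Both reviews depend only on coding, so they can proceed in parallel
    # rather than as a linear sequence of activities.
    items = [
        EvolvedChecklistItem("Feature clarification", role="business analyst"),
        EvolvedChecklistItem("Coding completed", role="developer",
                             depends_on=["Feature clarification"]),
        EvolvedChecklistItem("Peer reviewed", role="developer",
                             depends_on=["Coding completed"]),
        EvolvedChecklistItem("Architect reviewed", role="architect",
                             depends_on=["Coding completed"]),
    ]

    print(cycle_time_days(date(2011, 5, 3), date(2011, 5, 12)))  # feature cycle time: 9 days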

Checklist Challenges

I have encountered a number of challenges when using feature done checklists. Some of these issues are due to the checklists themselves, but many are really due to using a strict definition of done – the checklists just make the issue visible.

  1. The checklist needs to be meaningful to the team - you must avoid the risk of it becoming just a form that people sign without doing the work or checking that the item entails. (This is sometimes called rubber-stamping or pencil-whipping.) I have used several strategies to mitigate this risk:
    • Use a paper-based checklist rather than electronic.
    • Keep the checklist to a single page with as few items as possible for each role to complete.
    • For items involving ‘checking’ activities, assign these to roles / people whose primary responsibility is checking, separate from the developers doing the coding.
  2. Extra effort is involved in getting a feature fully complete. This is often not accounted for in estimates, particularly if the estimate is provided by the developer. The time required for checking by other roles can be accommodated by having separate estimated tasks for these activities. What I find particularly challenging is accounting for extra time needed by the developer to resolve issues found in reviews or testing.
  3. Having a thorough definition of done with multiple hand-offs of the feature to various roles will typically extend the time required to complete the feature. The mitigation is to allow as many items as possible to be done concurrently. The best example is using pair programming as the means of doing the peer code review.
  4. A single role / person with multiple review items per feature can become a bottleneck that delays feature completion. This is especially problematic if you are using a methodology like Scrum which focuses on getting features (user stories) done within the iteration – a backlog of checklist items tends to form at the bottleneck near the end of the iteration, jeopardizing their completion.
  5. One problem with paper-based checklists is that they can get misplaced, or you end up hunting for whoever has the form. This can be addressed by defining a location where completed checklists are returned. One option I like is to post them on a team wall, which provides the additional benefit of visibility.


