The human visual and perceptual system works less well than we think it does. We believe that we observe everything in our field of vision, but in reality our limited processing power means that we only truly process - and thus see - what we are paying attention to. We become blind to everything else. This phenomenon is called inattentional blindness. Our brain is very good at filling in the gaps in our perception with what we expect to see, making it very difficult to see the unexpected. A famous psychology experiment identified the phenomenon, and you can share the experience of the participants by viewing this YouTube video.
There was a recent event on my development team that perfectly illustrated inattentional blindness. I had raised a defect regarding misaligned cell borders in a PDF report and attached a screenshot with some of the misalignments circled in red. This screenshot is shown below with the text labels blurred:
The defect was fixed and the report was tested by two or three different testers with no issues found. A few months later, after the change had been promoted to production, someone noticed something wrong. (Can you spot the problem in the above screenshot?) One of the cells was missing a label - it was completely empty. In the screenshot I had produced, I had circled the misaligned border of this very cell. The developer, all the testers, and I had completely missed seeing the empty cell, probably because our attention was on the border alignment issue.
Inattentional blindness is therefore highly relevant to I.T. - it has important implications for quality control procedures such as testing and reviews. For example, I have read about a testing study that gave detailed test scripts to motivated non-tester business users and had them test a system concurrently with experienced testers. (I forget where I read this - I believe in material from Cem Kaner or James Bach.) The first discovery was that these users failed to notice obvious problems not covered by the test scripts - they were blind to these issues because their attention was focused on the scripts. The second, more surprising, discovery was that these users also missed a significant number of problems that should have been discovered by following the test scripts. The blindness was much worse than one would expect.
So are we doomed to miss the obvious and have defects abound? The book The Invisible Gorilla: How Our Intuitions Deceive Us explores the topic of inattentional blindness and offers some rays of hope. Psychology studies of this phenomenon have found that experts fare much better at noticing the unexpected, perhaps because their expertise lets them devote less focused attention to the normal state of affairs. Changing one's mindset to expect the unexpected also helps: code review studies have found that reviewers who expect to find defects will indeed find more defects. When doing reviews, or when verifying results as a tester, you need to ensure that your attention is focused on evaluating what you are looking at and determining whether it is correct, rather than just scanning over the code or the system output without careful analysis. Toyota developed a technique to help with this when inspecting car assembly work: the inspector points at each object as they inspect it, as a cue to mentally analyze whether what they see is actually correct. A similar technique could be used for reviews and testing. This is also one reason for the recommendation to have someone other than the programmer review or test the functionality they wrote: the programmer is very familiar with it and expects it to work, so is much more likely to miss any defects that are present.
If you find this article helpful, please make a donation.