Wednesday, November 5, 2008

Forget Quality, Focus on Value

In my last blog entry I said that it is not a tester’s job to find bugs. Now I’ll take it one step further – testing isn’t “quality assurance” either. I have never liked this term to describe testing, because testing can’t assure quality. You can’t test quality into a product; it’s either there or it isn’t.

But there is another reason, too. My observation is that companies aren’t looking for quality in their software products; they want value. So what’s the difference?

Let’s use an analogy. You might want a high quality car, but that doesn’t mean you are willing to pay $250,000 for a Rolls Royce. There is a limit on what your transportation is worth to you, especially if you could buy a house for the same price. Neither would you be willing to walk or take the bus while you saved up the money to buy a Rolls.

Instead, you would buy a car that you could afford that still met your needs for transportation and safety. For some people that might be a used Honda, for others a new Lexus, and for some a Bentley might make sense. It all depends on your perception of the value of transportation and the trade-off in time and money you have to make to get it.

Software is no different. Companies buy or build software because it provides functionality they need. While they might say they want zero defects, they are rarely willing to pay a lot more or wait a long time to get it. Instead, they will balance the utility of having the software against the inconvenience of any defects it might have. They will reject the software only if those defects prevent their enjoyment of the functionality in some material way.

The proof of this is everywhere. Companies ship or install software with known defects all the time. Most software companies have backlogs of reported defects that number in the hundreds or even thousands, and some of them have been there for a long time. So why don’t they get fixed? Because to fix a defect you have to trade it off against adding a new feature or delaying a release. Quite simply, it’s a value judgment.

So why does this distinction matter?

It matters because it causes a disconnect between the testers and the users. Testers report their results as a list of defects, usually categorized by severity measured by system impact. For example, a severity 1 might cause a crash or loss of data, severity 2 means a feature doesn’t work, and so forth. I have been in too many release meetings where the status is reported as “X number of sev 1, sev 2 and sev 3” issues while the users’ eyes glaze over. What ensues is a negotiation over downgrading the severity of an issue so that some arbitrary policy – “No sev 1 issues!” – can be satisfied.

At the end, the testers come away resentful that their hard work in uncovering these issues has been devalued or ignored. Worse, they fear – sometimes rightly – that they will be blamed for the poor quality once it goes into production. Users, on the other hand, become frustrated that testers seem to find never-ending reasons to delay or deny their enjoyment of the software.

What to do?

The easiest place to start is by changing the conversation to be value-oriented. Instead of reporting “We found a high severity defect” or even “The product drop-down list is corrupt”, say “You will not be able to enter an order”. This shifts the perspective from quality to value. You are no longer talking about the impact on the system, you are talking about the impact on the user. Then let the users evaluate the risk and the priority.
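To make the shift concrete, here is a minimal sketch of what this could look like in a defect-tracking report. The field names, severity scale, and wording are hypothetical illustrations, not anything from this post; the point is simply that a report can carry both views and lead with the user-impact one:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    """Hypothetical defect record pairing the system-impact view
    with a value-oriented statement of the impact on the user."""
    summary: str      # system-impact view: what broke
    severity: int     # e.g. 1 = crash/data loss, 2 = feature doesn't work
    user_impact: str  # value-oriented view: what the user cannot do

def report(defect: Defect) -> str:
    # Lead with the user impact; keep severity as supporting detail
    # so the users, not the testers, judge the risk and priority.
    return f"{defect.user_impact} (sev {defect.severity}: {defect.summary})"

d = Defect(summary="The product drop-down list is corrupt",
           severity=2,
           user_impact="You will not be able to enter an order")
print(report(d))
# → You will not be able to enter an order (sev 2: The product drop-down list is corrupt)
```

The severity number still exists for the engineers; it just stops being the headline.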

Then, try not to prejudge what is and isn’t critical to the users. You might think that not being able to enter an order would be unthinkable, but the users might decide that the majority of orders are transmitted electronically anyway, and that waiting for a fix will delay the availability of a new product or service that generates more revenue than the handful of manually entered orders.

Next, check your virgin sense of quality at the door. Don’t imagine that you are there to find bugs or “assure quality”; realize that you are there to deliver value, and value is whatever the users say it is. Your job is to help them understand the trade-offs between enjoying the functionality and dealing with the defects so they can make an informed decision.

Finally, make sure the users understand where you stand. Ask them what is most important to them about the software and test that first. Don’t let them get away with hazy or missing business requirements; insist that you know the critical processes the software must support and make sure they work. Treat users like both a customer and a partner: a customer because you work for them, and a partner because you must work together to get anything done.

And let’s not forget to lose “QA”. How about something that creates value…like Business Process Assurance? See “A New Twist on Test Processes” at http://www.stickyminds.com/s.asp?F=S7069_COL_2. Also, “To Win, Change the Game” at http://www.developer.com/java/article.php/614401.