Friday, December 31, 2010

The Value of Values

As the year and decade end (or not, depending on how you think about the year 0), I have been reflecting on what has come before and what might or should happen next.

In my humble opinion, Worksoft has done a masterful job of making test automation feasible. That’s a credit to a world-class product team as well as many years of experience from founders, employees and customers - not to mention robust infusions of capital. Until application developers learn to behave ;-), our automation may not always be a pretty process, but it works and works well.

The company has also produced new products at an astonishing rate, displaying productivity without sacrificing quality. Many of these new products enable the integration of testing into an overall change management process, making automation even more powerful and valuable. I find this all exciting.

But looking ahead, I realize that once you can easily execute as many tests as fast as you want, the challenge moves from how to test to what to test. What to test refers to the coverage matrix, which I maintain is the set of business processes and rule outcomes that should be verified.

I believe we have the capture of business process workflow variations down to a science. What we need next are the rules and data, not because we want to simulate or generate them at runtime, but because we need to know how many variations exist and how to test them. This is what optimal and measurable coverage is all about: having one test case (and no more) for each variation or set of rule outcomes.
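To make that concrete, here is a rough Python sketch of what I mean by a coverage matrix. It has nothing to do with any particular tool, and the rule names and outcomes are invented purely for illustration:

    # A minimal sketch, assuming hypothetical rule names and outcomes -
    # not taken from any real configuration or product.
    from itertools import product

    # Each business rule, with the outcomes an expert says it can produce.
    rules = {
        "customer_class": ["standard", "preferred"],
        "order_size":     ["below_minimum", "normal", "bulk"],
        "credit_check":   ["pass", "fail"],
    }

    # The coverage matrix: exactly one test case per combination of outcomes.
    coverage_matrix = [dict(zip(rules, combo)) for combo in product(*rules.values())]

    for i, case in enumerate(coverage_matrix, start=1):
        print(f"Test case {i}: {case}")

    print(f"{len(coverage_matrix)} variations to cover")   # 2 * 3 * 2 = 12

The count is the point: once you know how many combinations exist, you know exactly how many test cases you need, and whether you have one (and only one) for each.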

In a structured and configurable environment like SAP, it is tempting to try to extract the rules from the application directly. This would have the advantage of being automated and thorough, of course. The downside is that it may be both more and less than you need. It may be more than you need because the 80/20 rule applies equally to functionality, so you could be inundated with rules you don’t use.

It will also be less than you need, both in quality and quantity. Frankly, it is illogical to test something by asking it what it does and then making sure it does that. That is not a test. A test asserts what the application should do and verifies whether it does. Otherwise it is like asking the student to write the exam.
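A tiny (and entirely made-up) example of the difference, with a hypothetical pricing function standing in for the application:

    # Hypothetical pricing function standing in for the application under test.
    def application_price(customer_class, quantity):
        return quantity * (9.0 if customer_class == "preferred" else 10.0)

    # Circular "test": ask the system what it does, then check that it does it.
    observed = application_price("preferred", 100)
    assert application_price("preferred", 100) == observed   # always passes

    # Real test: the expected value comes from an independent source (the expert).
    expected_from_expert = 900.0
    assert application_price("preferred", 100) == expected_from_expert

The first assertion can never fail; the second one can, which is what makes it a test.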

You will also get less than you need because some rules are tribal rather than configured. I am often struck by user ingenuity in grafting functionality onto an application. For example, users may adopt a reference-only field, such as comments, and endow it with special meaning as a workaround for having a new field added. Or they may simply know from experience or actual company policy that certain customers receive specific treatment.

So I have come to believe that – unless you are blessed with current, complete and testable requirements – the best way to capture rules and data is to ask an expert to give them to you. That way you have independent verification. My experience, however, is that while experts are generally accommodating with input data values, they resist having to predict result values, which may require complex calculations or lookups. In an existing application, the output values are often captured from the system itself, apparently on the theory that the system is working. The risk inherent in this habit, of course, is that the system might be in error, though the expert could be wrong as well.

The temptation then arises to write the rules so they can be executed and thus generate the values. Again, this has the ostensible benefit of being more efficient, but in fact it may cost more and be less effective. It will be costly because you are, in effect, rewriting the application functionality. This is a massive undertaking and every bit as likely to introduce error as the development of the application you are testing. It is less effective because it introduces ambiguity into the verification: is the application or the test calculation in error?

But however you arrive at them, it all comes down to data values in the end. The fact is you have to have them to automate test execution. For now, most test automation systems – Worksoft Certify included – store data in tables and rows that are tied to variables and to business processes, but not to rules and outcomes. In other words, the relationships of the data values to each other and to the coverage matrix are not explicit.
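Here is one way to picture what making those relationships explicit might look like; again, just a sketch in Python with made-up names, not a description of how any product stores its data:

    # A minimal sketch: each data row records not only the variable values
    # it supplies, but also which rule outcomes it is meant to exercise.
    # The field names and rule names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class TestDataRow:
        values: dict                                   # variable name -> data value
        outcomes: dict = field(default_factory=dict)   # rule name -> outcome exercised

    rows = [
        TestDataRow(values={"customer": "ACME", "qty": 500, "expected_price": 4500.0},
                    outcomes={"customer_class": "preferred", "order_size": "bulk"}),
        TestDataRow(values={"customer": "Initech", "qty": 1, "expected_price": 10.0},
                    outcomes={"customer_class": "standard", "order_size": "below_minimum"}),
    ]

    # Because the link is explicit, coverage against the matrix is measurable.
    covered = {tuple(sorted(r.outcomes.items())) for r in rows}
    print(f"{len(covered)} distinct outcome combinations are exercised by the data")

Once the data knows which outcomes it covers, the question of what to test stops being a matter of faith and becomes simple arithmetic.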

And that’s the next frontier.
