Forget Regression Testing
I must confess: I don't believe in regression testing. It's not that I doubt it can work in principle; it's that, in practice, it doesn't.
After spending over 20 years in software testing and working with hundreds of IT shops, I have to report that effective regression testing--which is supposed to protect against unintended impact of intended changes--is about as common as development projects that are ahead of schedule and under budget. There are exceptions; some IT shops no doubt do a great job. But it's also true that some people have survived jumping off the Golden Gate Bridge, which is not a strong recommendation for doing it.
So why doesn't regression testing work? I think it has to do with the way it is perceived and practiced.
For starters--and I'm only being halfway facetious here--what's with that name, anyway? Regression sounds bad--too much like repression, depression, and oppression. It sounds pejorative. And it doesn't seem to add a lot of value: it just makes sure that what used to work still works. Most IT shops are rewarded for what they create--more features, new applications, cooler technologies. What glory is there in maintaining the status quo?
Furthermore, what's the difference among regression, system, and acceptance testing? Sure, there are subtle distinctions among their definitions, but at the end of the day, aren't they all about making sure the stuff works? Regression testing is about proving a negative: making sure that there is no unexpected impact from changes. How do you prove that something you didn't expect to happen, didn't happen?
What's in a change?
Regression testing assumes you know what the application used to do before you make any changes, which implies a known set of requirements. But the fact is, few IT shops maintain current application requirements, relying instead on the individual expertise of testers conscripted from the ranks of business users. Thus, regression testing is only as complete as the knowledge of the person(s) performing the tests.
Aside from the inconsistency of this approach, there is a deeper problem. A "change" to an application's functionality is not necessarily the result of modifications to its code. In today's complex environment of interdependent, multi-tiered applications, myriad factors can affect operations: a new version of the operating system, middleware, or database, or a change to the network topology or hardware configuration, can all result in downtime.
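If you're wondering how a shop would even notice changes of that kind, here is a minimal sketch of the idea: fingerprint the whole stack and treat any delta--not just a code delta--as a "change" that warrants validation. The component list and the version-lookup commands are illustrative assumptions, not a prescription for your environment.

```python
# Sketch: treat a delta anywhere in the stack as a "change" worth validating.
# The component list and version commands below are illustrative assumptions.
import json
import platform
import subprocess

def stack_fingerprint() -> dict:
    """Capture versions of the pieces that can silently change behavior."""
    psql = subprocess.run(["psql", "--version"], capture_output=True, text=True)
    java = subprocess.run(["java", "-version"], capture_output=True, text=True)
    return {
        "os": platform.platform(),
        "database": psql.stdout.strip(),    # psql prints its version to stdout
        "middleware": java.stderr.strip(),  # java prints its version to stderr
    }

def drifted_components(baseline_path: str) -> list:
    """Return the components whose versions differ from a saved baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    return [name for name, value in stack_fingerprint().items()
            if baseline.get(name) != value]

if __name__ == "__main__":
    drift = drifted_components("stack_baseline.json")
    if drift:
        print("Environment changed (%s); validation needed." % ", ".join(drift))
```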
The numbers bear this out: I recently reviewed the production trouble tickets for a large IT shop and found that fewer than 25% of the incidents could truly be described as defects in the software itself. The rest were caused by ancillary system resources. Regression testers borrowed from the ranks of users can hardly be expected to know about, let alone understand and test for, the potential impact of changes of this nature. But if regression testing sounds bad and is poorly practiced, what's the answer? Well, how about renaming, redefining, and repositioning it?
What's in a name?
First, lose the name "regression testing". It not only conjures up negative images, it also frames the job as proving a negative, which is impossible.
Instead, start talking about Business Process Validation (BPV), which assures that critical business processes are still available and accurate after changes have been made.
Notice what we are going to assure: not what the system used to do, whatever that is, but that critical business processes are still available and accurate. The problem with regression testing is that what used to work is undefined. If you don't know everything the software used to do, how can you test it?
Consider the word "critical": it means that BPV is concerned with those aspects of an application that are essential. In other words, we're not talking about every possible error condition, every bug ever detected, or each and every combination, boundary, and data type. This distinction is crucial because it implies risk management: if you know you don't have enough time or resources for complete coverage, then prioritization is key.
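To make that concrete, here is a minimal sketch of what such prioritization might look like: score each business process by impact and likelihood of breakage, then spend the available testing hours from the top of the list down. The process names, scores, and budget are invented for illustration.

```python
# Sketch: risk-based selection of which business processes to validate.
# Process names, scores, and the budget are invented for illustration.
from dataclasses import dataclass

@dataclass
class BusinessProcess:
    name: str
    impact: int       # cost to the business if it breaks (1-5)
    likelihood: int   # how likely recent changes are to affect it (1-5)
    hours: float      # estimated effort to validate

    @property
    def risk(self) -> int:
        return self.impact * self.likelihood

def select(processes: list, budget_hours: float) -> list:
    """Greedily pick the riskiest processes that fit the testing budget."""
    chosen = []
    for p in sorted(processes, key=lambda p: p.risk, reverse=True):
        if p.hours <= budget_hours:
            chosen.append(p)
            budget_hours -= p.hours
    return chosen

candidates = [
    BusinessProcess("order entry", impact=5, likelihood=4, hours=8),
    BusinessProcess("invoicing", impact=5, likelihood=2, hours=6),
    BusinessProcess("report archive", impact=1, likelihood=1, hours=4),
]
for p in select(candidates, budget_hours=12):
    print(f"validate: {p.name} (risk {p.risk})")
```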
Now think about a business process. The "business" half of this term places this type of testing squarely outside of development. The "process" half says that these are user scenarios--examples of everyday tasks that run the business--not mathematically derived test cases. Taken together, this means that the business-user community is both a participant in and a beneficiary of BPV.
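Here is a sketch of what a scenario framed in business terms might look like, written against a tiny in-memory stand-in for a real system (the order and invoice functions are invented for illustration). Notice that it reads as an everyday task--a clerk takes an order, the customer gets a correct invoice--not as a boundary-value exercise.

```python
# Sketch: a BPV scenario expressed as an everyday business task,
# written against a tiny in-memory stand-in for the real system.
from dataclasses import dataclass

PRICES = {"widget": 10.0}  # illustrative price list

@dataclass
class Order:
    customer: str
    items: list  # list of (item_name, quantity) pairs

@dataclass
class Invoice:
    customer: str
    total: float

def place_order(customer: str, items: list) -> Order:
    return Order(customer, items)

def generate_invoice(order: Order) -> Invoice:
    total = sum(PRICES[name] * qty for name, qty in order.items)
    return Invoice(order.customer, total)

def test_order_to_invoice_scenario():
    """A clerk takes an order and the customer receives a correct invoice."""
    order = place_order("ACME", [("widget", 3)])
    invoice = generate_invoice(order)
    # The business outcome is what matters: available and accurate.
    assert invoice.total == 3 * PRICES["widget"]
    assert invoice.customer == "ACME"
```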
Finally, let's examine the phrase "after changes have been made". Notice that the word "changes" is not qualified: a change is not just a modification to application software; it can be to the hardware or to any aspect of the environment--even to the business process itself. Critical applications reside in highly complex, interconnected environments, relying on multiple tiers and layers of functionality. It is unrealistic and downright naive to apply application-centric test techniques to integrated IT environments.
The implication is that the test environment should be a microcosm of your production environment--not just in its configuration, but in its processing cycles. Thus, BPV testing can't be cadged from a corner of a development system, making do with volatile data and unstable software configurations. It requires a tightly controlled and well-managed test environment.
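As a sketch of what "tightly controlled" might mean in practice, assuming a pytest-based suite: refuse to run if the test configuration has drifted from the production profile, and restore a known data baseline before every run. The file names and the restore helper here are assumptions for illustration.

```python
# Sketch (pytest): a fixture enforcing a controlled BPV environment.
# The profile files and the restore helper are illustrative assumptions.
import json
import pytest

def restore_from(snapshot_path: str) -> None:
    """Stand-in for reloading the known-good test dataset."""
    # In a real shop this would reload a database snapshot or similar.
    print(f"restoring data from {snapshot_path}")

@pytest.fixture(autouse=True)
def controlled_environment():
    # Refuse to run if the test config no longer mirrors production.
    with open("test_env.json") as f, open("prod_profile.json") as g:
        if json.load(f) != json.load(g):
            pytest.fail("test environment no longer mirrors production")
    # Start every scenario from the same known data baseline.
    restore_from("baseline_snapshot.sql")
    yield
```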
The last detail to settle is exactly where Business Process Validation fits in your delivery process. The most logical place is as a prerequisite to promotion into production. In other words, BPV should be the gateway to operations, the point at which the business confirms that it can carry on uninterrupted. It happens last.
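In pipeline terms, that gateway might look something like the sketch below: BPV runs as the final stage, and promotion is blocked unless it passes. The shell script names are placeholders for your own promotion machinery, not any particular CI product's syntax.

```python
# Sketch: BPV as the last gate before promotion to production.
# The script names (run_bpv_suite.sh, promote_to_production.sh)
# are placeholders for whatever your shop actually uses.
import subprocess
import sys

def run(cmd: list) -> int:
    """Run a pipeline step and return its exit code."""
    return subprocess.run(cmd).returncode

def main() -> int:
    # Earlier stages (build, unit tests, system tests) are assumed done.
    if run(["./run_bpv_suite.sh"]) != 0:
        print("BPV failed: critical business processes not confirmed; "
              "promotion blocked.")
        return 1
    return run(["./promote_to_production.sh"])

if __name__ == "__main__":
    sys.exit(main())
```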
Why is that so important? Because someone in a deadline crunch might slight a process as obscure as regression testing, but who would dare fail to assure that critical business processes are available and accurate? Perish the thought!