Test Smarter: The Top 3 Testing Myths
Welcome to my blog. My hope is to share 20+ years of testing experience, gained the hard way, so that your path can be easier. The best way to start is to dispel the basic myths that have been circulating in the industry ever since I got started. From there, we can move on to more positive subjects, such as how to really succeed as a tester by testing smarter. Along the way I’ll also share links to other columns and articles, by me and by others, that expand and expound on the topic at hand.
So, here are the top 3 myths we need to lose so we can move on to what testing is really all about:
Myth #1: Your job as a tester is to find bugs. This premise is fundamentally flawed because, if finding bugs is the job, you are not done until you have found them all, and that means proving a negative: that no bugs remain when you are finished. Sorry, but that is neither possible nor practical.
It is not possible because the potential combinations of user behavior, system environment, and code conditions are essentially infinite, so you will never be finished. It is not practical because it would take too long and cost too much to even try. By the time you completed the exhaustive testing necessary to even approach full coverage, the software would be obsolete and your users would have found another solution. Perfect tax software is worthless on April 16th.
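To put some numbers on that, here is a quick back-of-the-envelope sketch in Python. Every figure in it is made up for illustration, but even these modest assumptions put exhaustive testing out of reach:

```python
# A back-of-the-envelope illustration (all numbers are invented) of why
# exhaustive testing of even a modest application is out of reach.

fields, values_per_field = 20, 10   # one screen: 20 inputs, 10 interesting values each
environments = 5 * 4 * 3            # e.g. browsers x operating systems x locales

combinations = (values_per_field ** fields) * environments
print(f"{combinations:.1e} combinations")
# ~6.0e+21 -- at 1,000 tests per second, that is roughly 200 billion years
```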
Further, this myth leads to the belief that the more bugs you find, the better. This in turn encourages testers to engage in bizarre behavior and to focus on the fringes of functionality in the hopes of finding more bugs. Anyone can get software to fail if they try hard enough, but frankly users are more interested in what works than what doesn’t. They developed or bought the software because they need to use it, and finding random reasons why they can’t will not make you popular.
Reporting weird bugs invites developer derision and user frustration because it diverts time and resources from the real goal: making sure the software does what the users need. If you don’t believe users will sacrifice quality for functionality, look no further than the 50,000+ known bugs shipped with Windows itself.
This myth also discourages testers from exercising the core functionality – the so-called “happy path” – that is most widely used because it is less likely to yield bugs. But just because a feature has worked every previous time doesn’t mean it can’t fail this time, and if it is vital to the user then a single failure is one too many. Waving the list of bugs you found in the weeds will do nothing to placate users if mainstream functionality fails. For more on this, see “When Bugs Don’t Bug Me” at http://www.itmanagement.earthweb.com/entdev/article.php/621801.
So what is your job as a tester? It’s actually much more fun and interesting than just finding problems…but you’ll have to stay tuned to get the rest of the story.
Myth #2: Test new functionality first. This one has a certain surface appeal, since of course new functionality is unproven. The problem is that if functionality is new, users aren’t using it yet; if it is old, the odds are they are. A broken new feature may be embarrassing, but breaking functionality the enterprise already relies on could be catastrophic.
This myth leads to the dangerous practice of focusing on changes first and only doing regression testing if there is time left over. The fact is that testing must be risk-based, and change is only one risk factor. Others include volume, financial impact, and exposure. Clearly a new feature may be high risk for many reasons, but the fact that it is new is only one of them.
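To make that concrete, here is a minimal sketch of risk-based prioritization. The factor names, scales, and weights are purely illustrative assumptions that you would tune for your own organization:

```python
# A minimal sketch of risk-based test prioritization. The factors, scales,
# and weights are illustrative assumptions, not a prescribed formula.

from dataclasses import dataclass

@dataclass
class TestArea:
    name: str
    changed: bool          # modified in the current release?
    usage_volume: int      # 1 (rarely used) .. 5 (used constantly)
    financial_impact: int  # 1 (cosmetic) .. 5 (revenue or compliance critical)
    exposure: int          # 1 (internal only) .. 5 (customer-facing)

def risk_score(area: TestArea) -> int:
    """Combine the risk factors; change is only one input among several."""
    return (2 if area.changed else 0) + area.usage_volume \
        + area.financial_impact + area.exposure

areas = [
    TestArea("new report designer", changed=True,  usage_volume=1, financial_impact=2, exposure=2),
    TestArea("invoice posting",     changed=False, usage_volume=5, financial_impact=5, exposure=4),
    TestArea("login",               changed=False, usage_volume=5, financial_impact=3, exposure=5),
]

# Test the highest-risk areas first. Note that unchanged core functionality
# can outrank the brand-new feature.
for area in sorted(areas, key=risk_score, reverse=True):
    print(f"{area.name}: risk {risk_score(area)}")
```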
And, since the overwhelming amount of testing is still performed manually (more on that next), this inevitably means that regression testing is slighted. There just isn’t time to test everything that is changing plus everything that isn’t supposed to change.
The irony is that over its lifetime, software tends to become less reliable, not more. This is caused by a variety of factors: developer turnover, repeated patches, constant enhancement, and changing technology landscapes. The volume of code and the functionality it supports continues to expand while developer knowledge is lost to time and turnover, and as a result the software becomes increasingly complex and less stable. For more on this, see “Anatomy of Ugly: How Good Code Goes Bad” at http://www.computerworld.com/printthis/2005/0,4814,106105,00.html.
Thus, each successive change increases the probability of unintended consequences. Coupled with decreasing regression test coverage, this is a formula for disaster. Happily, changing your strategy can make the most of your efforts and win you popularity with developers and users alike.
Myth #3: You can only automate regression tests. The overwhelming majority of testing is still performed manually, even though test automation tools have been around for decades. You have to wonder why. Part of the reason is that most testing is centered on new functionality, as previously discussed; the rest is the fault of code-based test automation tools.
Originally, test automation tools were based on an approach called record and play, or capture/playback, which required that testers perform the test manually as it was recorded into a script file. This meant, of course, that you could not create the test until you could perform it, and you could not perform it until the software was available and working as expected.
Later, the scripting languages became more sophisticated and theoretically you could code tests in advance. Unfortunately, the effort to develop and maintain the code was so onerous that it didn’t make sense for any functionality that was in a state of flux. Thus, testers still needed to wait for the application to stabilize, and even then they only automated regression tests for features that weren’t likely to change. When those features did change, it was usually easier just to start over than to modify the code.
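To see why the maintenance burden was so heavy, here is a hypothetical hard-coded script in the record-and-play spirit, sketched with Selenium WebDriver. The page, locators, data, and wording are all invented for the example:

```python
# A hypothetical recorded-style script: every locator, value, and step is
# hard-coded, so almost any change to the screen breaks the test.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/orders/new")                    # assumed URL

driver.find_element(By.ID, "customer_id").send_keys("10042")    # breaks if the field is renamed
driver.find_element(By.ID, "product_code").send_keys("SKU-7")   # breaks if a required field is added before it
driver.find_element(By.ID, "submit_btn").click()                # breaks if the button becomes a menu item

assert "Order confirmed" in driver.page_source                  # breaks if the wording changes
driver.quit()
```

Multiply that fragility by hundreds of scripts and it is easy to see why throwing tests away was often cheaper than fixing them.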
What makes this whole idea silly is that we test software because of change. Even so-called regression tests can be impacted by new capabilities. By adopting an automation approach so sensitive to change that affected tests are easier to throw away than to fix, you lose the primary benefit of automation: reusability that enables cumulative coverage. And if you lose that benefit, the whole value proposition of test automation is lost: you spend so much developing and re-developing tests, or attempting to maintain them, that you don’t save enough time and money to pay for it. That is why shelfware is rampant among test tools. For more on this, see “The Demise of Record/Play/Script” at http://www.stickyminds.com/s.asp?F=S10939_MAGAZINE_2.
The good news is that test automation has evolved away from code to a data model that not only allows tests to be easily and quickly developed before the code is available, but also supports automated maintenance that makes reusability a reality.
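To illustrate the idea (this is my own minimal sketch, not any particular product), imagine the tests living purely as data while one small adapter knows how to drive the application:

```python
# A minimal sketch of data-driven tests: the tests are data, and only one
# small adapter knows "how". The keywords and the fake adapter are invented.

# Test cases are just rows of keywords and values -- they can be drafted
# before the application screens even exist.
tests = [
    [("open", "orders/new"), ("enter", "customer", "10042"),
     ("enter", "product", "SKU-7"), ("submit",), ("expect", "Order confirmed")],
    [("open", "orders/new"), ("enter", "customer", ""),
     ("submit",), ("expect", "Customer is required")],
]

def run(test, app):
    """Dispatch each keyword to the adapter. When the interface changes,
    only the adapter changes; the test data is reused as-is."""
    for keyword, *args in test:
        getattr(app, keyword)(*args)

class FakeApp:
    """Stand-in adapter so the sketch runs; a real one would drive the UI."""
    def open(self, page): self.page, self.fields, self.message = page, {}, ""
    def enter(self, field, value): self.fields[field] = value
    def submit(self):
        self.message = "Order confirmed" if self.fields.get("customer") else "Customer is required"
    def expect(self, text): assert text in self.message, f"expected {text!r}"

for test in tests:
    run(test, FakeApp())
print("all tests passed")
```

Because the “how” lives in one place, a change to the interface means fixing the adapter once while every test remains reusable.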
So now we have cleared away the misconceptions and set the stage for an exploration of how to test smarter, giving you the opportunity to be successful and enjoy yourself along the way.
See you next time.