Wednesday, January 7, 2009

Do the Math: Automation is No Longer Optional

I don’t get it. Software development has evolved from a slow, painstaking, manual discipline to a rapid, iterative, highly automated process. Computers have escaped glass-walled rooms with raised floors to land on palm-sized devices. Applications have shifted from back-office productivity aids to front-line competitive weapons.

Yet the majority of functional testing is still done manually.

It’s not for lack of technology – automation tools have been around for decades. I should know, I co-founded a record/play/script company in the 1980s called AutoTester and followed up with Worksoft to take the coding out of automation. And of course the test tool ecosystem has expanded exponentially with a broad range of companies and products.

Nor is it a lack of need. The graph below is my favorite because it says it all: over time, functionality grows but testing resources don’t. Unless you can employ automation to create a cumulative library of tests, you are doomed to steadily decreasing coverage and increasing risk with each succeeding release.

The risk is also exploding. Whereas before software failure likely meant that the accounting staff had to revert to calculators for a day or two, today errors can instantaneously affect thousands or even millions of customers, tainting the company’s brand, reducing revenue and driving up costs. In some infamous cases, inadequately tested software has cost hundreds of millions, even billions, of dollars.

So what gives?

As far as I can tell, it’s a combination of four factors:

1. An undefined test process. You can’t automate what isn’t defined, and manual tests are by their nature informal, usually relying on the knowledge and skill of the individual tester. If you don’t have a handle on what you need to test, you can’t predict the time and effort for automation. See “The Importance of Testing Software Requirements” for a deeper dive into this topic.

2. Lack of software testability. Many modern application interfaces are dynamically generated on the fly, resulting in windows and objects that have no useful or persistent names. A lack of testability standards may also cripple automation by presenting components that tools can’t interact with. These combine to make automation either impossible or so time-consuming that the payback isn’t there. See “Automated Testability Tips” for more.

3. Inadequate resources. Too many companies think that all they need to do is write a check to the tool vendor and automation is on the way. But buying a tool is like joining a health club: you still have to invest the effort. And you can’t expect existing staff to keep up with manual testing while automating in their spare time. See “Join the Club”.

4. Unrealistic expectations. Record and play demos paint a false picture of what automation is all about, so management believes that it requires nothing more than capturing the manual process and therefore expects immediate results. This leads to a failure to instill development disciplines and allocate the proper time, attention and resources to getting the process defined and automated. See “The Demise of Record Play Script” for a better understanding of the true costs of record and play.
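The testability problem in point 2 above is worth making concrete. When an application generates object IDs on the fly, test scripts are forced into brittle workarounds such as matching on a stable prefix. A minimal Python sketch, in which the element IDs and the prefix-plus-hex naming convention are purely hypothetical:

```python
import re

# Hypothetical UI element IDs: the suffix is regenerated on every build,
# but (we assume for illustration) the prefix before "_" stays stable.
ui_elements = ["btnSubmit_7f3a", "txtAccount_91bc", "lblStatus_0042"]

def find_element(stable_prefix, elements):
    """Locate an element by its stable prefix rather than its full,
    dynamically generated ID. Fails loudly if the match is ambiguous."""
    pattern = re.compile(re.escape(stable_prefix) + r"_[0-9a-f]+$")
    matches = [e for e in elements if pattern.match(e)]
    if len(matches) != 1:
        raise LookupError(f"expected one match for {stable_prefix!r}, got {matches}")
    return matches[0]
```

If no such stable hook exists at all, even this workaround fails, which is exactly why the payback for automation evaporates without testability standards.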

So what to do?

First of all, face up to the fact that manual testing is no longer a viable option. Don’t get me wrong – you will never automate 100% of your testing, simply because some tests are not worth automating and others, like usability tests, depend on actual user interaction. But the vast majority of functional testing can, should – even must – be automated in order to achieve anything close to comprehensive coverage on time and within budget.

Second, develop and present a reasonable plan for automation that takes into account the current state of your test process and sets realistic expectations. Accept a strategy for gradual progress, and reject any plan that fails to account for the additional time and effort that will be required to get established. Granted, automation saves time and resources, but that’s only after you get it into place.

Third, don’t expect to exclude your experts. Unless you have clear, current and comprehensive test case documentation (and who does?), you will need access to your best and brightest to help you identify the most critical aspects of the application and the business rules that govern them.

And fourth, attack the test data and environment issue head-on. Simply put, automation requires repeatable, stable data, and this is one of the most challenging components of any automation strategy. In fact, it’s such an important topic that it deserves its own separate treatment.
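To illustrate the repeatability requirement: a test-data generator seeded with a fixed value produces identical data on every run, so automated checks can rely on stable expected values. A minimal Python sketch; the function name and record fields are invented for the example:

```python
import random

def make_test_customers(n, seed=42):
    """Generate a repeatable batch of test customer records.
    Seeding a private RNG guarantees the same data on every run,
    independent of anything else using the random module."""
    rng = random.Random(seed)
    return [{"id": i, "balance": rng.randint(0, 10_000)} for i in range(n)]

# Two runs with the same seed yield identical data sets.
assert make_test_customers(3) == make_test_customers(3)
```

The same principle applies to databases: restoring a known snapshot before each run plays the role of the fixed seed.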

So stay tuned.


OpenID mcfinder said...

Hi Linda, I agree with your perception of the current state of software test automation. To me it is amazing that in a world dominated by sat-nav, iPhones, email and HDTV the vast majority of software quality testing is still carried out manually. Although when you take a closer look, maybe it isn't quite so surprising...

Most of the traditional automation tools that dominate the market are based on a complex scripting language that is getting on for 20 years old. This approach (in my opinion) brings with it a number of flaws which have limited the usefulness of these tools.

You are right that automation is not easy - and anyone who thinks these things work out of the box without any looking after is living in a dream world. These scripts need writing; they don't write themselves, and writing a bank of scripts can take months... and to write those scripts you need a team of specialists; expensive specialists with a demanding and costly educational programme.

So if your application delivery is time sensitive, there is little point in even starting down the automation road, because by the time you have got yourself into a position to automate your first test, the deadline has long since gone!

The real problem for me however is maintenance of these scripts. We live in a fast-paced, ever-changing world: every time the application under test changes, the scripts that were written to originally test the application have to be re-written to accommodate the changes. This is a time-consuming task and can often take longer than manually testing the application in the first place. This high maintenance burden is one of the big reasons why so much automation software is just gathering dust on shelves. As soon as the application changes the tools are useless, unless significant time and resource is thrown at them.
This lack of flexibility is the reason why such tools are not used on the high risk applications that are constantly being changed or updated. QA teams simply cannot cope with the continual manual intervention needed to re-write scripts. So even if automation is being used, it is on the 'safe' part of the application that doesn't change too much...the parts that often change (and as such are more risky to the business) are still tested manually.

Automation thus far has failed to deliver and until it drags itself into the 21st century, manual testing will continue to dominate.

January 19, 2009 at 9:14 AM  
Blogger Skarlso said...


I have to disagree. Although I know that applications change constantly and the test scripts have to be updated, BUT..

In my experience as an automation worker (for 5 years now) I learned that you have to take that into account. And you HAVE to write the automation framework in a way that if you have to update it, the change is as minimal as possible and in only one place. I currently have a huge framework and if a change occurs I only have to change some XML files... Configureability (I know that that is not a real word :DDD ) is key! :)
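The XML-file approach Skarlso describes can be sketched roughly as follows; the locator file format, element names and IDs are assumptions for illustration, not his actual framework. Test scripts refer only to logical names, so when the application's UI identifiers change, only the XML needs editing:

```python
import xml.etree.ElementTree as ET

# Hypothetical locator file: the single place that changes when the
# application under test changes its UI identifiers.
LOCATORS_XML = """
<locators>
  <element name="login_button" id="btnLogin_v2"/>
  <element name="user_field"   id="txtUser"/>
</locators>
"""

def load_locators(xml_text):
    """Map logical element names to concrete UI identifiers."""
    root = ET.fromstring(xml_text)
    return {e.get("name"): e.get("id") for e in root.findall("element")}

locators = load_locators(LOCATORS_XML)
```

Scripts then look up `locators["login_button"]` instead of hard-coding `btnLogin_v2`, which is what keeps the maintenance cost of an application change confined to one place.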

February 13, 2009 at 1:04 AM  
