Don't get better at scraping burnt toast; get better at not burning the toast in the first place.
"When you make toast, do you want to burn the toast and scrape it, burn the toast and scrape it - or do you want to make the toast right before it gets to your inspectors?"
Dr. William Edwards Deming
Inspecting and fixing after the product is built is "scraping burnt toast": inherently ineffective and unscalable. Learning not to "burn the toast" in the first place means improving how the development process works.
Tests are a way to clearly communicate expectations; establish expectations before, not after, development.
Natural language is prone to misunderstanding and false agreement. Tests, as a more structured medium, communicate expectations more clearly and ensure we get real agreement (or disagreement).
It takes less effort to communicate expectations before starting and then meet them than to proceed with the wrong expectations and correct course after having built the wrong thing.
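To make the contrast concrete, here is a minimal sketch (all names and the discount rule are hypothetical, not from the original post) of an expectation captured as executable tests before the implementation exists. The prose requirement "discounts apply to orders over $100" leaves the boundary ambiguous; the tests do not:

```python
import pytest


def apply_discount(total):
    # Minimal implementation, written after the tests below were agreed.
    return total * 0.9 if total > 100.00 else total


def test_no_discount_at_exactly_100():
    # "Over $100" was agreed to exclude $100.00 itself -- exactly the
    # kind of detail that prose requirements leave to false agreement.
    assert apply_discount(100.00) == 100.00


def test_ten_percent_discount_above_100():
    assert apply_discount(100.01) == pytest.approx(90.009)
```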
Automated regression testing leads not to tens, nor even hundreds, of tests, but to thousands. Automated regression tests therefore need to be deliberately structured for maintainability and performance.
Even minor problems with an automated testing approach or toolset will become untenable as the number of tests escalates.
The issues you face scaling an automated regression test suite are much the same as those for scaling a development code base: duplication, clunky and/or complicated setup, inconsistent conventions, and so on. In essence, good automated regression test suite practice is good development practice.
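As a sketch of what that means in practice (pytest, with hypothetical names; the original post names no tools), the same refactoring discipline you would apply to production code applies to tests: duplicated setup is pulled into one shared fixture, so a setup change is made once rather than across thousands of tests:

```python
import pytest


class FakeApiClient:
    # Hypothetical in-memory stand-in for the system under test.
    def __init__(self):
        self.users = set()

    def create_user(self, name):
        self.users.add(name)

    def log_in(self, name):
        return name in self.users


@pytest.fixture
def client_with_user():
    # Setup that would otherwise be copy-pasted into every test lives
    # here once; when it changes, it changes in one place.
    client = FakeApiClient()
    client.create_user("alice")
    return client


def test_known_user_can_log_in(client_with_user):
    assert client_with_user.log_in("alice")


def test_unknown_user_cannot_log_in(client_with_user):
    assert not client_with_user.log_in("bob")
```

Only two tests share the fixture here, but the payoff compounds: at thousands of tests, setup that lives in one place is the difference between a maintainable suite and an untenable one.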
Tests are a product asset. They are a maintenance deliverable to make subsequent changes easier.
Because tests clearly communicate how a product behaves, they help with both the design and verification of subsequent changes. An executable test suite (including data and environment dependencies) should be an expected part of a delivered product, not something that is used and thrown away after a project ends.
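One way to read "including data and environment dependencies" (a sketch under assumed tooling, pytest and SQLite, not the author's prescription): each test provisions the data it needs, so the delivered suite runs anywhere the product does, with nothing extra to hand over:

```python
import sqlite3

import pytest


@pytest.fixture
def orders_db(tmp_path):
    # The suite carries its own schema and seed data; no shared
    # database needs to be handed over alongside it.
    db = sqlite3.connect(str(tmp_path / "orders.db"))
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    db.execute("INSERT INTO orders VALUES (1, 42.50)")
    db.commit()
    yield db
    db.close()


def test_order_total_is_preserved(orders_db):
    (total,) = orders_db.execute(
        "SELECT total FROM orders WHERE id = 1"
    ).fetchone()
    assert total == 42.50
```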
Automated testing is not enough. You also want ongoing exploratory, sapient testing.
Machines are good at inspecting for known, expected behaviour. Humans are good at exploring unexpected behaviour. Human testing should focus on improving our understanding of the boundaries and attributes of the solution space, to help inform the design and development of future solutions.