As I've said elsewhere in this blog (http://qa4software.blogspot.com/2010/07/test-specifications-dont-have-variables.html), test cases don't have variables; they have specifications. It follows, then, that automated test cases also don't have variables. In practice, that means scripts don't take variable arguments and they don't have config files. Config files are just a way to populate variables, so when I talk about variables below, I'm also talking about config files, and vice versa.
If we want to assure the quality of our software doesn't degrade over time, we need to do regression testing. Regression tests are deterministic and therefore repeatable. They don't vary. Variable arguments, including those populated via config files, seriously undermine the ability to do regression testing.
Test planning generally works like this:
1) Review the Product Requirements Document and Development Response for new features.
2) Create new test cases in the test case planning database for the new features.
3) Automate the test cases.
4) Execute the automated test cases.
Not all automation approaches are created equal. Take these two execution synopses:
$> testcase123
$> testutility.pl -A 123
In the first case, the script name equals the test case because it doesn't take variables. We can track our test planning and execution by script name from test case design through execution. If we find a bug, we can tell the developer he can reproduce the bug by running 'testcase123'.
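A variable-free test case in the first style can be sketched like this (the script name, the fixed value, and the system-under-test call are all hypothetical; the real product call would replace the stub):

```python
#!/usr/bin/env python3
# testcase123 -- a hypothetical self-contained test case (first approach).
# The input value 123 is baked into the script, so every run executes the
# identical, repeatable test: no arguments, no config file.

def system_under_test(timeout):
    # Stand-in for the real product call being tested.
    return "OK" if timeout > 0 else "FAIL"

def run():
    result = system_under_test(123)   # fixed value -- no argument parsing
    assert result == "OK", "testcase123 failed"
    return result

if __name__ == "__main__":
    print(run())
```

Because the script takes nothing from outside, "run testcase123" fully specifies how to reproduce the result.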
In the second case, we can't just refer to the script name when we talk about the test case; the full name of the test case has to be 'testutilityA123', because the script name no longer uniquely identifies the test case. Config files are rarely used to specify just one variable, so we should really identify test cases with something like 'testutilityA123B432C484D543'. Of course, that doesn't actually happen. People shorten the name to just 'testutility', as if all the variables passed to the utility result in the same test case. Then they incorrectly record all the test results against the same test case in the test case database.
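To make the naming problem concrete, here's a small sketch (variable names and values taken from the example above) of what the true identity of a config-driven test case looks like once every variable is accounted for:

```python
# Hypothetical sketch: the true identity of a config-driven test case is
# the utility name plus every variable it was run with, not the utility
# name alone.
config = {"A": 123, "B": 432, "C": 484, "D": 543}   # values from the config file

test_case_id = "testutility" + "".join(
    f"{key}{value}" for key, value in sorted(config.items())
)
# test_case_id is "testutilityA123B432C484D543" -- each distinct config is
# a distinct test case, not another run of "testutility".
```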
The usual objection to the first approach is that you have to write more scripts. While that's true, the second approach requires you to write config files and include them in the test case specifications along with the utility script. There are no config files to keep track of in the first approach. Essentially, in the second approach the automation of the test has been split into a utility and config files.
In the first approach, the script is the test case, so you end up with one file per test case:
testcase123
testcase456
...
In the second approach, the-script-plus-the-config is the test case, so you have the utility plus one config file per test case:
testutility
config123
config456
...
You're not really avoiding creating files in the second approach. But you are guaranteeing that QA engineers will fail to track and manage their config files. In the first approach, you can create testcase456 from testcase123 by simply copying the file and making the necessary changes internally.
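Cloning a test case in the first approach is just a file copy plus an internal edit. A minimal sketch (file names and contents are hypothetical stand-ins for real test scripts):

```python
import shutil
from pathlib import Path

# Stand-in for an existing self-contained test script.
Path("testcase123.py").write_text('EXPECTED = "OK"  # hard-coded test input\n')

# Creating testcase456 is a single file copy...
shutil.copy("testcase123.py", "testcase456.py")

# ...followed by editing the hard-coded values inside the copy.
text = Path("testcase456.py").read_text().replace('"OK"', '"READY"')
Path("testcase456.py").write_text(text)
```

There is no separate config file to create, version, or keep in sync; the new test case is complete the moment the copy is edited.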
The next objection to the first approach is that there is a lot of duplicated code between the copied test scripts. The answer to that is utility modules. Factor the common code out into library functions for reuse.
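Here's a hedged sketch of that factoring (all names hypothetical; in practice the shared module would live in its own file and each test script would import it, but everything is inlined here to keep the sketch self-contained):

```python
# --- testlib: common code written once, shared by every test case ---
def setup_environment():
    return {"server": "localhost", "port": 8080}   # shared fixture logic

def check(actual, expected, name):
    assert actual == expected, f"{name} failed: got {actual!r}"
    return True

# --- testcase123: one test case, one file ---
def testcase123():
    env = setup_environment()
    return check(env["server"], "localhost", "testcase123")

# --- testcase456: a copied sibling that varies only its own assertions ---
def testcase456():
    env = setup_environment()
    return check(env["port"], 8080, "testcase456")
```

Each test script stays a single self-contained test case, while the duplication lives in one library that all of them reuse.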
Obviously, I'm a proponent of the first approach. The test script is the fully automated test case, so test cases can be managed as a single file and regression testing is greatly simplified.