There are two kinds of test data generated by testing. As I've said elsewhere in this blog, it's the Release Candidate that separates them. If we want to run an efficient testing organization, we need to understand how we use each kind of data so we can decide how to store and summarize it when managing our software projects.
Pre-Release Candidate Testing
Testing we do prior to having a completed product doesn't count toward qualifying the product for shipping to customers. We may do a lot of testing during development. We may test every integration build or every check-in. That produces a lot of test results data. Testing during this phase focuses on maintaining product quality during development, so we're trying to find bugs. We're managing by exception. We want to know about regressions as soon as possible. We're very interested in the failures and not at all in tests that pass. We file bugs for the failures. The passing tests we ignore.
The Data - We need to keep test failure forensic data (cores, logs, config, test logs, etc.) for debugging and associate it with the bugs we file. But beyond that we have no use for the test results data. After filing bugs, any data not associated with a bug can be discarded.
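That triage policy can be sketched in a few lines. This is a minimal illustration, not any particular tool's API; the `TestResult` record and `triage_pre_rc` function are hypothetical names invented for the example.

```python
# Sketch of pre-Release Candidate triage: failures (with their forensic
# artifacts) survive long enough to be attached to bugs; passes are dropped.
from dataclasses import dataclass, field

@dataclass
class TestResult:
    name: str
    passed: bool
    # Forensic artifacts for debugging: cores, logs, config, test logs.
    artifacts: dict = field(default_factory=dict)

def triage_pre_rc(results):
    """Keep only failures for bug filing; passing results are discarded."""
    return [r for r in results if not r.passed]

results = [
    TestResult("io_stress", passed=False, artifacts={"core": "core.1234"}),
    TestResult("smoke", passed=True),
]
bugs_to_file = triage_pre_rc(results)
print([r.name for r in bugs_to_file])  # only the failing test remains
```

The key point is that the passing results never make it into long-term storage at all; only the failures, with their artifacts, go on to the bug tracker.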
Release Candidate Testing
By contrast, testing we do on a Release Candidate is official. We very much care about all the test results and we want to see them all pass. We want to manage execution of a Test Plan through to completion. After all, the goal of this testing is to ship the product. We should have already found the bugs; now we need all the test results to prove the product is ready.
The Data - We need to keep the test results data forever. We're going to use it for comparison in future regression testing. These are the results that customers and partners will want to see. Of course, we keep the forensic data with bugs as usual but unlike the test results generated by Pre-Release Candidate testing, we need to keep all of the results data too.
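The retention-and-comparison workflow above can be sketched as follows. Again this is a hedged illustration with invented names (`archive_rc_results`, `find_regressions`), not a real test-management API; it assumes results are a simple mapping of test name to pass/fail.

```python
# Sketch of Release Candidate data handling: every result is archived
# permanently, then serves as the baseline for future regression testing.
def archive_rc_results(results, archive):
    """Store every RC result (here, in an in-memory dict standing in
    for permanent storage)."""
    archive.update(results)

def find_regressions(baseline, new_results):
    """Tests that passed in the archived RC baseline but fail now."""
    return sorted(
        name for name, passed in new_results.items()
        if baseline.get(name) and not passed
    )

archive = {}
archive_rc_results({"smoke": True, "io_stress": True}, archive)

# A later run can be compared against the archived RC results.
later_run = {"smoke": True, "io_stress": False}
print(find_regressions(archive, later_run))  # ['io_stress']
```

Note the asymmetry with the pre-RC sketch: here the passing results are exactly the data we keep, because they are what future runs get compared against and what customers and partners want to see.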