Establishing a Supporting Infrastructure
Running automated, regularly scheduled build and testing processes should involve minimal distraction if they are set up properly and all required infrastructure is in place. At minimum, that infrastructure should include a source-control system, a tool for automated test execution, and a reporting mechanism to track the results of the automated build and testing.
Most development teams use a source-control system to store the code they are working on. Some teams store their tests in the source-control system as well; others leave testing to the discretion of individual developers. In some cases, developers test the code on their own machines, but the tests never make it into the source-control system. In other cases, they do some testing if time permits and none when in a crunch. Either way, the value of tests that exist only on an individual developer's machine is minimal: such tests typically verify nothing more than that the code functioned properly at the time it was written. Once stored in the source-control system, a test can be run over and over to validate that new code didn't break existing functionality or introduce problems that could impact reliability.
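As a simple illustration, promoting a developer's local test into the shared repository is just a matter of committing it alongside the code. This is a minimal sketch assuming a Subversion repository; the test file name and commit message are hypothetical:

  # Hypothetical test written on a developer's machine; once committed,
  # the scheduled regression process can run it on every build
  svn add test/com/example/CartTest.java
  svn commit -m "Add regression test for shopping-cart totals" test/com/example/CartTest.java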
Regression suite generation and execution should be scheduled to run regularly (for example, nightly, or several times a day in a continuous integration process) in whatever manner makes the most sense for your specific development environment and team. This could be a combination of an Ant script and the Windows scheduler or CruiseControl, or a shell script and a crontab on UNIX. Each run scans the project, generates additional test cases for new and updated code, executes all tests that make up the regression suite, and then reports all failures along with cumulative coverage information. This report can be e-mailed to individual developers, or they can import it directly into their IDE, effectively triggering their review of the failures and the new/changed code. This cycle is then repeated every time the regression process runs.
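On UNIX, for instance, the entire cycle can be driven by a single crontab entry that invokes the build script. This is a minimal sketch, assuming ant is on the PATH and the regression process is wrapped in an Ant target named all (as in the example later in this article); all paths are illustrative:

  # Run the regression process nightly at 2:00 a.m. and append the
  # console output to a log file
  0 2 * * * cd /home/build/JPetStore && ant all >> /home/build/logs/regression.log 2>&1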
Building a Behavioral Regression Test Suite
To demonstrate this process, I use Parasoft JTest to create a Behavioral Regression Test Suite for the JPetStore 4.0 project from iBatis (sourceforge.net/project/showfiles.php?group_id=60632). This is a simple web application that lets you purchase pets online.
First, automatically generate an initial regression suite for the JPetStore project. This one-time process scans the project and generates a test class for each Java file. In the JPetStore example, 40 new test classes containing 702 test cases were generated. Because the regression suite did not yet include any manually written test cases, the tool executed only the automatically generated ones, reporting coverage of 91 percent of the source code:
  [exec] Executed Test Cases: 702
  [exec] Runtime Exceptions: 0
  [exec] Assertion Failures: 0/0 verified
  [exec] Contract Violations: 0
  [exec] Profiling Problems: 0
  [exec] Unverified Outcomes: 0
  [exec] Coverage:
  [exec]   Line: 91% [994/1090 executable lines]
Next, set up the automated testing tool to run from Ant. Configure the appropriate Ant properties to point to the tool installation directory and to the Eclipse workspace where the JPetStore project is located; for example:
  <target name="all">
    <echo message="Testing 'JPetStore'" />
    <exec dir="." executable="${testtoolcli}">
      <arg line="-data ${workspace} ${config} -resource 'JPetStore'
                 -report '${workspace}/JPetStore/Report.xml' -publish
                 -localsettings '${workspace}/JPetStore/examples.properties'"/>
    </exec>
  </target>
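The properties referenced in this target (testtoolcli, workspace, and config) can be defined in the build file or passed in when the scheduled job invokes Ant. A minimal sketch with illustrative paths follows; the value of config depends on the testing tool's own command-line options, so it is elided here:

  # Invoke the regression target with the required properties
  ant -Dtesttoolcli=/usr/local/jtest/jtestcli \
      -Dworkspace=/home/build/workspace \
      -Dconfig="..." \
      all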