
Unit Testing of J2ME Applications, or Using Inheritance in Test Development

06.06.2007 · Posted in Articles

This article shares my impressions and thoughts about unit testing of J2ME applications, and is partly an introduction to MoMEUnit – an instance of the xUnit architecture for unit testing of J2ME applications. It is no secret that using the JUnit test framework for testing J2ME applications is problematic. The main problem is that J2ME is almost JDK 1.0: it does not include the reflection API and, of course, it does not support annotations. This makes it impossible to designate a method that implements a test via a method signature or an annotation. In other words, it is impossible to maintain an implicit association between the test method and the name of the test case.

In my humble opinion, if that is not possible, we should not try to do it. To my mind, the solution is not to use the “test<TestName>” method-name convention to implement tests, but to designate only one method:

public void test() throws Throwable

as the method for test implementation. With this approach we do not implement a test per method and do not maintain an implicit association between the test method name and the test case name, which (recall) is not possible anyway. Instead we create a test per class; in J2ME it is possible to keep an implicit association between a class name and a test case name. Another problem that arises is sharing a fixture between a group of tests. This is where inheritance can help.

In MoMEUnit we implement different tests as different subclasses of TestCase. Sharing the same fixture between different tests is realized via inheritance (as mentioned above). One approach is to create a subclass of TestCase that defines instance variables to store the state of the fixture, overrides setUp() and/or tearDown() to initialize the fixture and release resources after the test run, and implements the test() method. Other tests then extend this test, sharing the same fixture. The instance variables, of course, should be declared at least “protected” so that they are accessible from the subclasses. Another, more general approach is to create an abstract TestCase subclass that defines the instance variables and overrides setUp() and/or tearDown(); all test cases then extend this abstract class and share the same fixture. The second approach is, of course, more structured, but to my mind it is a bit of a waste of time and memory.

For example, instead of the classic JUnit TestCase subclass:

// sorry for stupid examples.
import junit.framework.TestCase;

public class ArithmeticTest extends TestCase
{
    private int arg1;

    private int arg2;

    public ArithmeticTest(String name)
    {
        super(name);
    }

    protected void setUp()
    {
        this.arg1 = 2;
        this.arg2 = 5;
    }

    public void testAdd()
    {
        int res = arg1 + arg2;
        assertEquals("I don't know what happened", 7, res);
    }

    public void testSubtract()
    {
        int res = arg2 - arg1;
        assertEquals("And what again ?", 3, res);
    }
}

In MoMEUnit we will have:

//"AddTest.java" file

import momeunit.framework.TestCase;

// Using the Test suffix in the class name is not required; you can use any name.
// This is only for illustration purposes.

public class AddTest extends TestCase
{
    protected int arg1; // at least protected to be accessible by subclasses

    protected int arg2; // at least protected to be accessible by subclasses

    protected void setUp()
    {
        this.arg1 = 2;
        this.arg2 = 5;
    }

    public void test()
    {
        int res = arg1 + arg2;
        assertEquals("I don't know what happened", 7, res);
    }
}

// "SubtractTest.java" file

// No imports because SubtractTest and AddTest are in the same package

// Using the Test suffix in the class name is not required; you can use any name.
// This is only for illustration purposes.
public class SubtractTest extends AddTest
{
    public void test()
    {
        int res = arg2 - arg1;
        assertEquals("And what again ?", 3, res);
    }
}
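
For comparison, the second, more general approach described above (an abstract TestCase subclass that only holds the fixture) might look roughly like this; ArithmeticFixture and DivideTest are names made up for the illustration:

// "ArithmeticFixture.java" file

import momeunit.framework.TestCase;

// Abstract base class that defines and initializes the shared fixture.
// It implements no test() itself; concrete test cases extend it.
public abstract class ArithmeticFixture extends TestCase
{
    protected int arg1; // at least protected to be accessible by subclasses

    protected int arg2;

    protected void setUp()
    {
        this.arg1 = 2;
        this.arg2 = 5;
    }
}

// "DivideTest.java" file (same package, so no imports)

public class DivideTest extends ArithmeticFixture
{
    public void test()
    {
        int res = arg2 / arg1;
        assertEquals("Integer division is broken ?", 2, res);
    }
}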

Using inheritance gives another nice feature: we can define one fixture for one group of test cases and add an additional fixture, or change the existing one, for another group. That can be helpful in some cases. For example:

public class AdditionalFeatureTest extends OtherTest
{
    protected AnObject arg3 = null;

    protected void setUp()
    {
        super.setUp();
        this.arg3 = new AnObject( … );
    }

    public void test()
    {
        assertTrue("Why additional feature doesn't work ?",
            target.additionalFeature( arg3, … ));
    }
}

It may look like implementing different tests as different classes is cumbersome. To my mind, it is not. Using an IDE like Eclipse or NetBeans, it is almost as simple and easy as creating an additional method. Besides, writing a class like this

public class MultiplyTest extends AddTest
{
    public void test()
    {
    }
}

even by hand is not complex.

Of course, implementing different tests as different classes increases the size of a J2ME application. But we are just testing; this is not a production J2ME application. Besides, as mentioned above, in J2ME it is possible to keep an implicit association between a test case name and a class name, but not a method name. This forces us to create anonymous classes or use other means that also increase application size. In my humble opinion, a test per class is the optimal solution for J2ME.

MoMEUnit has, of course, other changes relative to the JUnit framework that overcome other restrictions of the J2ME platform and minimize memory usage.
MoMEUnit is, of course, free software. It is available under the Common Public License.

Intelligent Testing Framework, or How to Avoid Running Every Test Every Time

01.18.2007 · Posted in Articles

Background

Anyone working on any large TDD (Test Driven Development) project knows the drill:

– Make changes to code
– Update with latest version of code base
– Fix inconsistencies and conflicts
– Start running tests
– Make cup of tea / have lunch / go home – depending on how far into the project you are
– Fix one or two test failures – fingers crossed you have not screwed up the other tests
– Try to check in code
– Swear loudly because someone has made changes to the code base that conflict with your changes
– Repeat ad nauseam

OK – I am exaggerating a bit, but full end-to-end system tests can take a while on a large system – particularly if you have to reset the system to a known state before each test. If you are attempting to get a release out regularly, getting the build ready for the next release involves banning commits of new code and days of pain and torment fixing all the build issues.

Dividing the tests into groups (e.g. using TestNG) and building the project from separate projects can help – but in the world of Spring, AOP, proxies, ORM technology and so on – it is very hard to tell which group of tests you need to run to validate a code change.

The solution given here (which has been successfully implemented) goes a long way towards eliminating these problems. It uses the JDK 5 Instrumentation technology to record at run time which tests touch which classes – AND conversely which tests you need to run for each class change. Before anyone screams – what about new classes? what about files that are not classes? – I am well aware of these issues, but this technique will find the correct tests 90% of the time, and I rely on the build server to catch the remaining 10%. It certainly beats either not running the tests or waiting a long time for the tests to complete in their entirety.

Technology

JUnit was used as the underlying test framework; it supports both individual Tests and Test Suites (groups of tests run together).

JDK 5 officially introduced Instrumentation into the Java world. Instrumentation allows classes to be changed as they are loaded, for a variety of reasons: to add code that collects coverage statistics, to remove unneeded log4j messages (http://surguy.net/articles/removing-log-messages.xml), to collect performance statistics, and so on.

The instrumentation is performed by code in a JAR file termed an ‘Agent’. The location of the agent JAR is passed in as a JVM argument -javaagent:jarpath[=options]. The manifest within the JAR file specifies which class to use for the Agent.
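
For illustration, if the Agent class were com.example.Agent (a made-up name), the JAR's MANIFEST.MF would contain a line such as:

Premain-Class: com.example.Agent

and the test JVM would be launched with something like -javaagent:agent.jar.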

On start-up the JVM calls the ‘premain’ method of the Agent, passing in an Instrumentation object with which the Agent can register an instance of ClassFileTransformer (typically itself):

public static void premain(String options, Instrumentation instrumentation) {
    instrumentation.addTransformer(new MyTransformer());
}

Any classes that are loaded by the JVM will now pass through the transform method on MyTransformer before being available to the application.
Unfortunately the JVM specification does not specify a set of tools to perform the bytecode transformations. I chose ASM for performance, although there are other frameworks which are simpler to use, e.g. BCEL.
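
As a rough sketch of what such a transformer can look like, the following uses the ASM visitor API to prepend a call to a hypothetical static Agent.record(String) method at the start of every method of every project class. The package prefix, the Agent class name and the ASM API level are assumptions for this example; a real implementation would filter more carefully.

import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class MyTransformer implements ClassFileTransformer {

    public byte[] transform(ClassLoader loader, final String className,
            Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
            byte[] classfileBuffer) {

        // Only instrument project classes, and never the Agent itself
        // (otherwise the injected call would recurse into record()).
        if (className == null
                || className.equals("com/example/Agent")
                || !className.startsWith("com/example/")) {
            return null; // null means "leave the class unchanged"
        }

        ClassReader reader = new ClassReader(classfileBuffer);
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_MAXS);

        ClassVisitor visitor = new ClassVisitor(Opcodes.ASM9, writer) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String desc,
                    String signature, String[] exceptions) {
                MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
                return new MethodVisitor(Opcodes.ASM9, mv) {
                    @Override
                    public void visitCode() {
                        super.visitCode();
                        // Inject: Agent.record("<name of the containing class>");
                        visitLdcInsn(className.replace('/', '.'));
                        visitMethodInsn(Opcodes.INVOKESTATIC, "com/example/Agent",
                                "record", "(Ljava/lang/String;)V", false);
                    }
                };
            }
        };
        reader.accept(visitor, 0);
        return writer.toByteArray();
    }
}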

Design

Run the tests using JUnit test suites. There are two variants of each Test Suite: a ‘Non Intelligent’ version, which runs all the tests (this is used on the build server), and an ‘Intelligent’ version, which looks for stored dependency data and uses it to work out which tests to run.
Each Test Suite has an associated Agent Dataset that is persisted between each run containing:

● Which classes are used by which test
● The failed tests from the last run
● The time of the last run

The Agent Dataset starts out empty before the Test Suite is first run.
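
A minimal sketch of such a dataset (the class and field names are made up for illustration) could be a serializable holder like this:

import java.io.Serializable;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Persisted between runs of a given Test Suite.
public class AgentDataset implements Serializable {

    // class name -> names of the tests that touched that class
    private Map<String, Set<String>> testsByClass = new HashMap<String, Set<String>>();

    // tests that failed during the last run; these are always re-run
    private Set<String> failedTests = new HashSet<String>();

    // when the suite was last run, compared against class file modification dates
    private long lastRunTimestamp;

    // getters and setters omitted for brevity
}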

Non Intelligent Test Suite Version Workflow:
1. Run all the tests in the test suite

Intelligent Test Suite Version Workflow:
1. Load the Agent Dataset from the last run. If there is none then proceed as per Non Intelligent Version.
2. Inject the old dependency database into the Agent (otherwise we will lose a lot of the dependency information as only a subset of the tests are being run).
3. Create a new set of tests to run. Add all the previously failed tests to it. Iterate through the project classes within the dataset. For each project class, compare the modification date of the class file against the timestamp within the dataset (i.e. when the tests were last run); if the class file has been modified, add all the tests that touched that class file to the set of tests to run (see the sketch after this list).
4. Run only the tests that are required
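
Step 3 above could be sketched roughly as follows, reusing the hypothetical AgentDataset holder from the previous sketch (with the obvious getters) and assuming the compiled classes live under a single directory:

import java.io.File;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TestSelector {

    // Works out which tests need to run, given the dataset saved by the previous run.
    public static Set<String> selectTests(AgentDataset dataset, File classesDir) {
        Set<String> testsToRun = new HashSet<String>();

        // Always re-run whatever failed last time.
        testsToRun.addAll(dataset.getFailedTests());

        for (Map.Entry<String, Set<String>> entry : dataset.getTestsByClass().entrySet()) {
            String className = entry.getKey();
            File classFile = new File(classesDir, className.replace('.', '/') + ".class");

            // If the class file changed since the last run, every test that touched it must run.
            if (classFile.lastModified() > dataset.getLastRunTimestamp()) {
                testsToRun.addAll(entry.getValue());
            }
        }
        return testsToRun;
    }
}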

Common Workflow (for both Test Suite versions):
1. Instrument all the methods on all the classes within the project to add a static callback to the ‘record’ method on the Agent, passing the name of the containing class.
2. At the beginning of each test make a static call to the Agent to specify which test is currently being run.
3. Use the ‘record’ method on the Agent to log which classes have been used by which tests (see the sketch after this list).
4. At the end of the test suite save the data collected by the Agent together with a timestamp and a list of failed tests.
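
Steps 2 and 3 need very little on the Agent side; a bare-bones sketch of the bookkeeping (again with illustrative names, matching the com.example.Agent assumed by the transformer sketch) might be:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Agent {

    private static String currentTest;

    // class name -> names of the tests that touched that class
    private static final Map<String, Set<String>> testsByClass =
            new HashMap<String, Set<String>>();

    // Called at the beginning of each test (step 2 of the common workflow).
    public static synchronized void startTest(String testName) {
        currentTest = testName;
    }

    // Called from the instrumented methods (step 3): records that the
    // currently running test has touched the given class.
    public static synchronized void record(String className) {
        if (currentTest == null) {
            return; // class touched outside any test, e.g. during suite set-up
        }
        Set<String> tests = testsByClass.get(className);
        if (tests == null) {
            tests = new HashSet<String>();
            testsByClass.put(className, tests);
        }
        tests.add(currentTest);
    }
}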

Implementation

The Agent consisted of the following components:
● Dependency Database
● Getters and setters for the Dependency Database
● A ‘premain’ method to register an instance of the Agent as a transformer
● A transformer which uses ASM to add a static callback to ‘record’ on the Agent, passing the name of the calling class
● A ‘record’ method to record which classes were touched by which tests into the Dependency Database
● A ‘startTest’ method to tell the Agent which test is currently running

In addition there is a helper class to save and load the Agent Dataset and perform the calculation to determine which tests need to be run.

Due to issues with the coverage tool (Emma) there is also an Ant task to instrument the classes before Emma does its own instrumentation. This is more an issue with Emma than with anything else.

This took about four days to implement by someone who had not touched ASM before. The lack of a standard toolset and of documentation and examples made it harder than it needed to be. For example, JDK 5 uses attributes within the class file to store generics information, whereas the sample code I was using did not handle attributes. This was not a hard issue to fix, but the lack of documentation made it hard to work out what was happening.

Conclusion

I believe this is an extremely elegant solution in that it is almost completely orthogonal to the rest of the project code (the change to the existing code is minimal – just a few extra test suites, a call to ‘startTest’ on the Agent, and a call to ‘saveAgentData’ on the helper class at the end of the suite). Using this technique we achieved up to a 10-fold performance improvement in running test suites.
As a way to make TDD more productive and less painful I heartily recommend it.