testdriven.com Wrangling quality out of chaos

Archive for January, 2007

Good Agile Karma

01.25.2007 · Posted in Links

Successful practitioners will have noticed that Agile relies heavily on discipline, rather than genius. For this reason, there is an emphasis on the practice of the basics: putting people first, focusing on value, delivering high-quality work frequently, and reflecting regularly to continue improving. Average teams, even in the early stages of Agile implementation, can achieve dramatic performance improvement if they are disciplined. As we do these things, the effects of our words and actions actively create, and re-create over time, the environment in which our teams and projects operate.

Authors: Gunjan Doshi, Deborah Hartmann
Published: InfoQ, January 25, 2007
See also: Good Agile Karma reminder, Cheat sheet


Agile is on sale, Old geeks keep going, Code quality matters

01.24.2007 · Posted in Blogosphere

Reginald Braithwaite explains what he’s learned from sales, and how it applies to selling agile development techniques.

David J. Anderson thinks we need New Rules for Old Geeks.

Levent Gurses discusses Code Quality in Eclipse and adds two tools, Simian and CAP, to the list of Eclipse Code Quality plugins described in Paul Duvall’s developerWorks article.

Joey Beninghove posts a short list of TDD Resources for .Net.

"When it comes to automated tests the first impression everyone (including myself) thinks that that would certainly increase the amount of time a certain feature is developed in. However as I started to practice TDD, Test First Programming, increasing the number of tests I came to the conclusion that automated tests actually allow you and the team to develop faster than without." (How TDD improves development speed and is very cost effective, Dan Bunea)

Danny Lesandrini reviews SQL Refactor.

The first product release of AgileTrack is available. "AgileTrack is a development tool which assists in managing projects, tasks, and techniques leading to their completion." (AgileTrack: Agile/Extreme Programming Project Management, Iteration Planning, and Task Tracking Tool)

Levent Gurses addresses transparency in agile development: "Transparency is a major dynamic associated with agile development. At the roots of the people-centric model advocated by agile development is a philosophy of collaborative, non-punitive accountability that can be defined as transparency. Thanks to their social methodology, agile projects offer better transparency to clients, business partners and corporate decision makers. When broken down, this concept consists of management components such as individual responsibility, commitment, and accountability." (On the Way to Agile Transparency: Climbing the Big Wall)

"Agile software development requires input from all team members, and such collaboration is most effective when everyone participates." (Online Collaboration and Agile Software Development, Dr. Dobb’s)

Checkstyle: Java coding standards formatting tool

01.24.2007 · Posted in Links

Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard. It automates the process of checking Java code, sparing humans this boring (but important) task. This makes it ideal for projects that want to enforce a coding standard.

Plug-ins are available for major IDEs, and an Ant task makes it possible to integrate checking into the build process.

Agile tracks at QCon London, March 12-16, 2007

01.24.2007 · Posted in Events

There will be two agile-related tracks at the upcoming QCon conference in London, March 12-16, 2007.

Agile Foundations
Reflecting on our Agile Journey – How do we reach Mastery?

Track descriptions:

Agile Foundations
Host: Deborah Hartmann

What makes Agility work?

Agile software development is state-of-the-art – numerous teams world-wide are using XP, Scrum, Crystal, and other known or custom-made Agile methodologies – but not every project is an unqualified winner.

What would enhance the chance of success? How do we even measure success? In this conference track we’ll concentrate on fundamental factors that make agility work, for both developers and their client organizations, including feedback mechanisms to evaluate agile implementation progress, and practices that can help stabilize teams and projects.

Reflecting on our Agile Journey – How do we reach Mastery?
Host: Deborah Hartmann

Facilitator: Diana Larsen, co-author of Agile Retrospectives: Making Good Teams Great

Agile software methodologies are maturing. The "innovator" questions of first-time implementation are giving way to more sophisticated, thoughtful conversations as we grapple with local constraints, and sometimes with the organization itself. To help us examine these issues, this track combines Open Space with lectures to create a day rich in collaboration and expert input. It’s a chance to share stories of failure and success, and to deepen our knowledge of what makes organizations and teams healthy and responsive to customer needs. Participants are free to move between lectures and Open Space – something our lecturers will also be doing.

Leaders in our field are spending more time teaching, leaving less time for peer-review… and yet feedback may be our most important tool for professional development! Open Space brings participants together to wrestle with the topics we are passionate about. Participants from all tracks, including novices, are welcome to join our experts in Open Space. Different levels of experience and diverse opinions will make our sessions more meaningful, so skeptics are welcome, too! This is the perfect venue for serious face-to-face debate, as we strive to develop a more mature community of practice. Participants will include: Jeff Sutherland, Rachel Davies, Dave Thomas, and other experts in the field.

Registration page.

Java Reference Guide: Continuous Integration

01.24.2007 · Posted in Links

I’ve recently had a lot of discussions about the notion of Test-Driven Development and Continuous Integration and how they can be used together to achieve superior code quality and performance. We have discussed a key facet of Test-Driven Development in the Enterprise Java Testing section of the Java Reference Guide, namely defining unit test cases with JUnit, but I thought this was a good opportunity to discuss how it can be integrated into a Continuous Integration framework to perform these tests on each developer commit.

Author: Steven Haines
Last updated: informit.com, January 12, 2007

Refactoring, Ruby Edition

01.24.2007 · Posted in Links

The content found on this site will eventually be compiled into Refactoring, Ruby Edition. With Martin Fowler’s permission, the existing examples found in Refactoring: Improving the Design of Existing Code are being converted from Java to Ruby, and any additional refactorings that pertain to Ruby development will be explored. The first phase of the project is the conversion process; the second is the Ruby-specific content addition.

This website will be continually updated to show our progress.

Authors: Ali Aghareza, Stephen Chu, Jay Fields & John Hume

froglogic Pre-Announces Automated Mac OS X GUI Testing Tool Squish/Mac

01.24.2007 · Posted in Java

Hamburg, Germany – 2007-01-24. Squish/Mac will become available as a new edition of the powerful, platform-independent testing framework Squish. This new edition will feature the automatic creation and execution of tests for native Mac OS X Carbon and Cocoa applications.

Squish is the market-leading cross-platform GUI testing tool for applications based on a variety of GUI technologies such as Swing/SWT/RCP/Eclipse, Trolltech’s Qt, Tk, Four J’s Genero and XView. Additionally, Squish supports automatically testing Web and Ajax applications running in different web browsers.

Squish is a cross-platform solution that runs natively on Windows, Linux/Unix, embedded Linux and Mac OS X. The new Squish/Mac edition takes advantage of Squish’s mature testing framework while adding test automation support for the native Mac OS X GUI technologies Carbon and Cocoa.

"While the popularity of Mac OS X is constantly growing, no tools for automated testing of native Mac OS X applications are available today. After the huge success of Squish with different GUI technologies, we are very excited to fill this gap by providing a solid and powerful test solution to companies targeting the Mac OS X market," explained Rainer Schmid, froglogic’s Mac OS X Engineer.

Squish offers a versatile testing framework with a choice of popular test scripting languages (Python, JavaScript, Tcl and Perl) extended by test-specific functions, open interfaces, add-ons, integrations into test management systems, an IDE that supports the creation and debugging of tests and a set of command line tools facilitating fully automated test runs.

Similar to preexisting Squish editions, tests for Mac applications can be automatically recorded or written manually. Using Squish Spy, verification and synchronization points can be inserted as easily as in every other edition by visually exploring the internal structure of a Mac application.

Previews of Squish/Mac will become available in the next weeks. If you are interested in exploring and providing feedback for Squish/Mac, please contact us at squish@froglogic.com. If you would like to evaluate any other Squish edition, please visit http://www.froglogic.com/evaluate or contact squish@froglogic.com. For more information, visit http://squish.froglogic.com.

About froglogic

froglogic GmbH is a software company based in Hamburg, Germany. Their flagship product is Squish, the market-leading automated testing tool for GUI applications based on Qt, Tk or XView and for HTML/Ajax-based web applications running in different web browsers. froglogic also offers services in the areas of QA/automated testing and Qt C++ programming and consulting. More about froglogic at http://www.froglogic.com.

actiWATE: Web Application Testing Environment

01.24.2007 · Posted in Links

actiWATE R1.0 (Actimind Web Application Testing Environment) is a free software platform intended to simplify the test automation process.

The major component of actiWATE is a Java-based Framework that emulates Internet browser functionality and provides a convenient and intuitive action-based API for test script development. Action-based tests are easier to write and comprehend, and therefore easier to maintain through the regression testing process.

Automated tests in actiWATE consist of Java code using the actiWATE API. Therefore these tests can be executed by means of different tools (for example, the JUnit test runner).

actiWATE collects testing information in log files and generates test failure reports to simplify locating and correcting problems.

actiWATE can now also be used for testing AJAX-based web applications.

New release of actiWATE R1.0 – Web Application Testing Environment

01.24.2007 · Posted in Advisories

A new version of actiWATE R1.0 is available for download from our web site.

Here is a short list of actiWATE R1.0 features and improvements:

– Support for AJAX technology
– Handling of different character sets
– Support for data and connection timeouts
– actiWATE TWA now supports IE 7.0
– Extended JavaScript support: DOM Attributes, ‘Enumerator’ object, dynamic creation and loading of SCRIPT and IFRAME elements

and other improvements…

More information about changes in actiWATE R1.0 may be found here.

Feel free to contact me with any questions regarding the new version of actiWATE.

With best regards,

Vladimir Kornev

Project Coordinator
Actimind, Inc.

Web Seminar on Continuous Integration, January 23, 2007

01.18.2007 · Posted in Events

Continuous Integrated Testing for .NET: What’s in it for You?
An SD Times Web Seminar

Tuesday, January 23 2007, 2:00 pm Eastern, 11:00 am Pacific

You already know that it’s better, faster and cheaper to fix bugs early in your application development cycles. But the last thing you need is more work in your busy day. Enter Continuous Integrated Testing (CIT), an approach that combines development and testing practices and tools to let you test while you build, increasing quality and saving you time.

Author and industry analyst Ian Hayes, president of Clarity Consulting, will outline through pragmatic, quantifiable examples how developers, managers and executives can benefit from the use of CIT. Compuware CIT specialist Mike Koza will demonstrate how CIT can be easily and methodically integrated into a Microsoft-based development and testing organization, whether you’re using agile, traditional or hybrid methodologies.

Attendee Bonus: One Web Seminar attendee will receive an Apple iPod nano, courtesy of Compuware.

Speaker: Ian S. Hayes, Clarity Consulting:
Ian S. Hayes is the founder and president of Clarity Consulting, Inc. As an industry analyst and consultant, he actively advises Fortune 1000 companies on enhancing IT value by better targeting IT investments, improving the effectiveness of IT execution, and establishing measurement programs that tie IT performance to delivered business value. Hayes is the author of three IT books, has chaired numerous industry conferences, and has had articles and reports published in Computerworld, Information Week, Software Magazine and the Cutter IT Journal.

Speaker: Mike Koza, Compuware:
Mike Koza is a senior practitioner at Compuware supporting its application development and software quality solutions. He brings over 17 years of experience in the information technology space ranging from application development, project management and consulting with organizations that include Electronic Data Systems and Oakland County. He manages and consults on Compuware’s Continuous Integrated Testing solution for customers and events around the world.

Moderator: David Rubinstein, Editor-in-Chief, SD Times:
David Rubinstein brings more than 25 years of newspaper experience to his role as editor-in-chief of SD Times. Over the past six years, he has covered a wide range of software development issues, from the application development lifecycle to emerging standards and specifications, as well as the business behind the software business. Rubinstein writes a regular column in SD Times that examines the development industry as a whole.

Event Registration Page: https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=34999&sessionid=1&key=B4AC245E891F33B1D755B404ACA9D5A1&partnerref=bzmedia3&sourcepage=register

Continuous Integration Using Team Foundation Build

01.18.2007 · Posted in Links

Organizations always need a repeatable and reliable method to create a regularly available public build. In my previous organizations, I used in-house tools; I even used a continuous integration build type. I just did not know at that time that it had a name!

What is continuous integration? Continuous integration is the process of generating a build whenever a programmer checks code into the source control server. When you use this process, it is a good idea to have a dedicated build server where you can synchronize and compile the sources, and run unit tests.

Software development life cycle processes evolved over time. Development teams realized the importance of having periodic builds. It started with a weekly build. Then it grew tighter when "nightlies" began, and tighter still with "hourlies." As the benefits of frequent builds became more obvious, organizations wanted more builds. And now we have continuous integration. But nightlies still exist. Many organizations still count on them to get a formal, reliable build.

Authors: Khushboo Sharan, Kishore M. N.
Published: MSDN, January 2006

Unit Testing with Stubs and Mocks

01.18.2007 · Posted in Links

I was on site with some clients the other day, and they asked me about unit testing and mock objects. I decided to write up some of the discussion we had as a tutorial on creating dependencies (collaborators) for unit testing. We discuss two options, stubbing and mock objects and give some simple examples that illustrate the usage, and the advantages and disadvantages of both approaches.

It is common in unit tests to mock or stub collaborators of the class under test so that the test is independent of the implementation of the collaborators. It is also a useful thing to be able to do to control precisely the test data that are used by the test, and verify that the unit is behaving as expected.

Author: Dave Syer
Published: January 15, 2007

Intelligent Testing Framework, or How to Avoid Running Every Test Every Time

01.18.2007 · Posted in Articles


Anyone working on any large TDD (Test Driven Development) project knows the drill:

– Make changes to code
– Update with latest version of code base
– Fix inconsistencies and conflicts
– Start running tests
– Make cup of tea / have lunch / go home – depending on how far into the project you are
– Fix one or two test failures – fingers crossed you have not screwed up the other tests
– Try to check in code
– Swear loudly because someone has made changes to the code base that conflict with your changes
– Repeat ad nauseam

OK – I am exaggerating a bit, but full end-to-end system tests can take a while on a large system – particularly if you have to reset the system to a known state before each test. If you are attempting to get a release out regularly, getting the build ready for the next release involves banning commits of new code and days of pain and torment fixing all the build issues.

Dividing the tests into groups (e.g. using TestNG) and building the project from separate projects can help – but in the world of Spring, AOP, proxies, ORM technology and so on – it is very hard to tell which group of tests you need to run to validate a code change.

The solution given here (which has been successfully implemented) goes a long way towards eliminating these problems. It uses the JDK 5 Instrumentation technology to record at run time which tests touch which classes – AND conversely which tests you need to run for each class change. Before anyone screams – What about new classes? What about files that are not classes? I am well aware of these issues – but this technique will find the correct tests 90% of the time – and I rely on the build server to catch the remaining 10%. It certainly beats either not running the tests or waiting a long time for the tests to complete in their entirety.

Technology

JUnit was used as the underlying test framework which supports both individual Tests and Test Suites (groups of tests run together).

JDK 5 officially introduced Instrumentation into the Java world. Instrumentation allows classes to be changed as they are loaded, for a variety of reasons: to add code that collects coverage statistics, to remove unneeded log4j messages (http://surguy.net/articles/removing-log-messages.xml), to collect performance statistics, and so on.

The instrumentation is performed by code in a JAR file termed an ‘Agent’. The location of the agent JAR is passed in as a JVM argument: -javaagent:jarpath[=options]. The manifest within the JAR file specifies which class to use for the Agent.
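For reference, the agent manifest is a single attribute naming the agent class; for the MyTransformer agent used in the snippet below, it would read:

```
Premain-Class: MyTransformer
```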

On start up the JVM calls the ‘premain’ method of the Agent passing in an Instrumentation object with which the Agent can register an instance of ClassFileTransformer (typically itself):

public static void premain(String options, Instrumentation instrumentation) {
    instrumentation.addTransformer(new MyTransformer());
}

Any classes that are loaded by the JVM will now pass through the transform method on MyTransformer before being available to the application.
Unfortunately the JVM specification does not specify a set of tools to perform the bytecode transformations. I chose ASM for performance, although there are other frameworks which are simpler to use, e.g. BCEL.
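As an illustrative sketch of how the pieces fit together (this version only records which classes are loaded and performs no actual bytecode rewriting – the rewriting is where ASM would come in):

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal agent skeleton. Returning null from transform() tells the
// JVM to keep the original bytecode unchanged; a real implementation
// would rewrite classfileBuffer with ASM and return the new bytes.
public class MyTransformer implements ClassFileTransformer {

    static final Set<String> loadedClasses = ConcurrentHashMap.newKeySet();

    public static void premain(String options, Instrumentation instrumentation) {
        instrumentation.addTransformer(new MyTransformer());
    }

    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> classBeingRedefined,
                            ProtectionDomain protectionDomain,
                            byte[] classfileBuffer) {
        if (className != null) {
            // Internal names use '/'; convert to the usual dotted form.
            loadedClasses.add(className.replace('/', '.'));
        }
        return null; // null = leave the class bytes untouched
    }
}
```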


Run the tests using JUnit test suites. There are two variants of each Test Suite – one ‘Non Intelligent’ version – which runs all the tests (this is used on the build server) – and one ‘Intelligent’ version which looks to see if there is any stored dependency data and uses that to work out which tests to run.
Each Test Suite has an associated Agent Dataset that is persisted between each run containing:

● Which classes are used by which test
● The failed tests from the last run
● The time of the last run

The Agent Dataset starts out empty before the Test Suite is first run.

Non Intelligent Test Suite Version Workflow:
1. Run all the tests in the test suite

Intelligent Test Suite Version Workflow:
1. Load the Agent Dataset from the last run. If there is none then proceed as per Non Intelligent Version.
2. Inject the old dependency database into the Agent (otherwise we will lose a lot of the dependency information as only a subset of the tests are being run).
3. Create a new set of tests to run. Add all the previously failed tests to it. Iterate through the project classes within the dataset. For each project class – compare the modification date of the class file against the timestamp within the dataset (i.e. when the tests were last run) – if the class file has been modified then add all the tests that touched that class file to the set of tests to run.
4. Run only the tests that are required
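A rough sketch of the selection logic in step 3, assuming the dependency data is held as a map from compiled class files to the tests that touched them (all names here are invented for illustration):

```java
import java.io.File;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch: pick the tests to run by comparing each class
// file's modification time against the timestamp of the last run.
public class TestSelector {

    public static Set<String> selectTests(Map<File, Set<String>> dependencies,
                                          Set<String> previouslyFailed,
                                          long lastRunTimestamp) {
        // Always re-run the tests that failed last time.
        Set<String> toRun = new TreeSet<>(previouslyFailed);
        for (Map.Entry<File, Set<String>> entry : dependencies.entrySet()) {
            // Class file changed since the last run: re-run every test
            // that was recorded as touching it.
            if (entry.getKey().lastModified() > lastRunTimestamp) {
                toRun.addAll(entry.getValue());
            }
        }
        return toRun;
    }
}
```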

Common Workflow (for both Test Suite versions):
1. Instrument all the methods on all the classes within the project to add in a static call back to the ‘record’ method on the Agent passing the name of the containing class.
2. At the beginning of each test make a static call to the Agent to specify which test is currently being run.
3. Use the ‘record’ method on the agent to log which classes have been used by which tests.
4. At the end of the test suite save the data collected by the Agent together with a timestamp and a list of failed tests.
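The recording side of this workflow (steps 2 and 3 above) can be sketched as a pair of static methods on the Agent; the names below are invented, and a real implementation would also persist this map at the end of the suite:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the Agent's recording side. startTest() marks
// which test is running; record() is the static callback the instrumented
// methods invoke, passing the name of their containing class.
public class Agent {

    private static volatile String currentTest = null;
    private static final Map<String, Set<String>> classToTests = new ConcurrentHashMap<>();

    public static void startTest(String testName) {
        currentTest = testName;
    }

    public static void record(String className) {
        if (currentTest != null) {
            classToTests.computeIfAbsent(className, k -> ConcurrentHashMap.newKeySet())
                        .add(currentTest);
        }
    }

    // Query used later when working out which tests a class change affects.
    public static Set<String> testsTouching(String className) {
        return classToTests.getOrDefault(className, Collections.emptySet());
    }
}
```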


The Agent consisted of the following components:
● Dependence Database
● Getters and Setters for the Dependency Database
● A ‘premain’ method to register an instance of the Agent as a transformer
● A transformer which uses ASM to add a static callback to ‘record’ on the Agent passing the name of the calling class.
● A ‘record’ method to record which classes were touched by which tests into the Dependency Database
● A ‘startTest’ method to tell the Agent which test is currently running

In addition, there is a helper class to save and load the Agent Dataset and perform the calculation to determine which tests need to be run.

Due to issues with the coverage tool (Emma) there is also an Ant task to instrument the classes before Emma does its own instrumentation. This is more an issue with Emma than with anything else.

This took about 4 days to implement by someone who had not touched ASM before. The lack of a standard toolset, documentation, and examples made this harder than it needed to be. For example, JDK 5 uses attributes within the class file to store information for generics, whereas the sample code I was using did not handle attributes. This was not a hard issue to fix, but the lack of documentation made it hard to work out what was happening.


I believe this is an extremely elegant solution in that it is almost completely orthogonal to the rest of the project code (the change to the existing code is minimal – just a few extra test suites, a call to ‘startTest’ on the Agent, and to ‘saveAgentData’ on the helper class at the end of the suite). Using this technique we achieved up to a tenfold performance improvement in running test suites.
As a way to make TDD more productive and less painful I heartily recommend it.

DbUnit 2.2 released

01.16.2007 · Posted in Advisories

The dbunit-team is pleased to announce the dbunit-2.2.jar release!

This is a long-awaited new DbUnit official release, generated more than 2 years after DbUnit 2.1!

Changes in this version include:

New Features:

o Enable TestCase compositions Issue: 1473744. Thanks to Andres Almiray.
o Migrate SCM to Subversion
o Added pom.xml so it can be built by Maven 2. Issue: 1482990.
o XmlDataSetWriter now has a flag to include column’s name as comment.
o Added org.dbunit.util.search and org.dbunit.database.search packages, whose classes can be used to search table dependencies for a given table. Issue: 1273949.
o Add "transaction" attribute to ant tasks to wrap operations in a single transaction. Can make operations faster. Issue: 1264212. Thanks to John Lewis.
o Properly support writing NCLOBs to Oracle. Thanks to Cris Daniluk.
o Support CSV files from a URL (e.g. jar file) + CSV fixes Issue: 1114490. Thanks to Dion Gillard.
o new HsqldbDataTypeFactory for working with booleans in HsqlDB. Thanks to Klas Axell.

Fixed Bugs:

o Support for MySQL 5.0 boolean datatype Issue: 1494257. Thanks to Bas Cancrinus.
o Typo in howto example. Thanks to Jeremy Frens.
o Fix the "driver in classpath" / "driver not in classpath" handling to always work regardless of the configured driver.
o Typo in test class AbstractDataSetTest Issue: 1114487. Thanks to Dion Gillard.

For a manual installation, you can download the dbunit-2.2.jar here:


Have fun!

JUnitFactory: an experimental *characterization* test generator

01.16.2007 · Posted in Java

JUnitFactory is a free experimental project from AgitarLabs. You send it Java code and it sends back JUnit characterization tests for your code.

We have made our technology freely available to academic institutions and test researchers, but some of them don’t have the spare computing or IT resources to set up a test server, so we thought it would be a fun experiment to set up a dozen CPUs and have students and researchers use the test generator over the web. As the experiment progresses, we plan to make APIs available so that researchers and anyone interested in automated test generation can experiment with their own characterization test generation algorithms and strategies.

Before anyone starts complaining about the evil of automated test generation, let me restate that JUnitFactory can only generate characterization tests; and let me clarify what is meant by characterization tests for those who may not be familiar with the term or concept.

In "Working With Legacy Code", Michael Feathers defines characterization tests as tests that characterize and record what the code actually does – not what it’s *supposed* to do. They exercise the code with a range of inputs, and record return values, object state, etc., for each set of inputs.

Characterization tests are useful for working with legacy code which is defined, again by Michael Feathers, as any code without tests.

CTs can come in handy as change detectors when there is a large body of code with little or no tests and provide a safety net when you need to make some changes to it. If the same set of inputs results in different behavior between the original and the revised version of the code, some assertions in the characterization tests will fail. Some of those failures may point to unexpected and/or unwanted changes in behavior that you might want to know about and address.

Colloquially, and in the simplest possible terms, characterization tests will tell you things like the following:

In the original version of the code, ‘Util.foobar("abc", 123)’ returned ’42’. After you made the change, it returns ’43’.

The test has no idea whether the right return value is ’42’ or ’43’, but it lets you know that it has changed, and you, the developer, have to decide which one is right and what to do about it. Sorry, no free lunch.
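In code, a characterization test of the hypothetical Util.foobar is nothing more than a recorded-behavior check; the Util class below is a made-up stand-in, and the expected value is simply whatever the current code happened to return when the test was generated:

```java
// A characterization test in miniature, using the article's hypothetical
// Util.foobar example. Note the expected value is not derived from any
// specification; it was recorded from what the code actually did.
public class UtilCharacterizationTest {

    // Stand-in for the legacy code under test (invented for illustration).
    static class Util {
        static int foobar(String s, int n) {
            return (s.length() * n) % 327; // whatever the legacy logic happens to be
        }
    }

    // Pins down current behavior: foobar("abc", 123) returned 42 when
    // recorded. If a later change makes it return anything else, this
    // check fails and flags the behavioral change for the developer.
    public static boolean characterize() {
        return Util.foobar("abc", 123) == 42;
    }
}
```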

By now it should be amply clear that automatically generated characterization tests are NOT a replacement for TDD or Unit tests lovingly crafted by developers. But, especially with some user input in the selection of test data and test assertions (which is possible with JUnitFactory), they can give developers a valuable jump-start in working with legacy code and complement their manual testing efforts.

Unfortunately, for the time being, we only have a client plug-in for Eclipse. If enough people find the JUnitFactory useful, we plan to add support for other IDEs. As a matter of fact, if you want to help with that let us know – but keep in mind that JUnitFactory is designed to be a free service so all clients will have to be freely available or, better, open-source.

If you don’t use Eclipse, there’s a "toy" web-based demo of the CT generation also – but it only works on one class with no dependencies.

Anyway, if you have any interest in JUnit generation, go to the website, request an invitation and give it a try.

Remember, it’s still experimental and mostly for fun so don’t expect too much or bang on it too hard.


Alberto Savoia
Agitar Software Laboratories

TeamCity 1.2 released

01.16.2007 · Posted in Advisories

JetBrains, the creator of intelligent, productivity-enhancing applications, today announced the general availability of TeamCity 1.2, their innovative teamwork product for continuous integration and effective build management, which supports both Java and .NET platforms.

TeamCity automates and coordinates key collaborative processes to eliminate manual systems and delays, providing tight integration with multiple build and test tools, real-time unit test frequency, server-side inspections and code coverage analysis.

Along with usability improvements and fixes to existing functionality (see release notes at http://www.jetbrains.com/teamcity/download/release_notes12.html), this new release is mainly targeted at .NET platform developers; in the next major release we plan to extend the list of supported IDEs with an Eclipse plugin and implement multiple enhancements anticipated by Java developers.

Test-drive our demo server with TeamCity 1.2 installed at http://teamcity.jetbrains.com/.

Download TeamCity 1.2 at http://www.jetbrains.com/teamcity/download/index.html.

Urbancode announces AnthillPro 3.1.2

01.16.2007 · Posted in Advisories

Urbancode is pleased to announce the release of version 3.1.2 of AnthillPro.

AnthillPro is dedicated to addressing not just the problems of build management, but also to automating the wider array of application lifecycle issues like promotions and deployments.

Changes since the 3.0 release include:

* Graphical Workflow Definition – Easily lay out which jobs should run in parallel.

* More centralized configuration – More configuration items are configured once and reused by multiple projects. For instance, working directory scripts and cleanup policies.

* Jira Integration – AnthillPro will now comment Jira issues addressed by a build, and generate reports listing which issues are touched by a build.

* Activity Removal – The activity concept was difficult for users. It has been removed: the parallelization of jobs moved to the workflow definition, and the selection of which agents run jobs moved to the job itself.

* Job Iterations – Jobs can be configured to run multiple times in a workflow allowing for easy configuration and distribution of similar test cases, or stress testing efforts.

* General Improvements – There have been dozens of bug fixes, and small improvements added since 3.0.


froglogic Announces Automated GUI/Web Testing Tool Squish 3.1

01.15.2007 · Posted in Advisories

Hamburg, Germany – 2007-01-15 froglogic GmbH today announced version 3.1 of the leading, cross-platform automated UI testing tool Squish for Qt, Web/HTML/Ajax, Four J’s, Tk and XView applications. Squish is being successfully used in QA departments across the world in companies such as Reuters Financial Software, EADS, Siemens, Synopsys, Xilinx and Trolltech.

This new release of Squish comes with many improvements and new features for all Squish editions. The new features were mainly driven by the feedback of Squish’s users and aim to make Squish even more powerful, robust and easy to use. A list of the major new features can be found at the end of this announcement.
"One of Squish’s major advantages is its versatility with regard to supported platforms and technologies while offering a tight integration into each supported technology. This new release comes with many general improvements as well as enhancements specific to individual editions, such as improved support for version 4 of Trolltech’s Qt and Qtopia 4.x products and support for new Ajax/HTML toolkits," said Reginald Stadlbauer, co-founder and CEO of froglogic.

Squish offers a versatile testing framework with a choice of popular test scripting languages (Python, JavaScript, Tcl and Perl) extended by test-specific functions, open interfaces, add-ons, integrations into test management systems, an IDE that supports the creation and debugging of tests and a set of command line tools facilitating fully automated test runs.

In addition to these improvements to the existing Squish editions, new editions broadening the scope of supported technologies will be released later this year. The first, Squish/Java for testing Swing/AWT/SWT/Eclipse/RCP applications, will become available in Q1 2007. Additionally, froglogic offers new services to help customers be more effective and efficient in their test automation efforts.

Squish 3.1 is available for customers and evaluators in their download area now. If you are interested in any of the existing or upcoming Squish editions or would like to learn more about froglogic’s service offerings please contact squish@froglogic.com or visit http://squish.froglogic.com.

New features of Squish 3.1

(General improvements)

– Perl support for test scripts
– Event monitoring and continuous test result logging in the improved control bar
– Point & click insertion of synchronization points
– Logging of call stacks in case of application crashes
– Interactive script console in debugger and improved variable watcher
– Improved automatic object naming (automatic creation of more meaningful and readable symbolic names)
– Import Excel files as test data, export test results in Excel format
– Template support for test scripts
– Stress/Monkey Testing

(Squish for Qt)

– New and much more robust object identification and naming scheme (names expressed in previous scheme will continue to work)
– Improved support for Qt 4 widgets and classes
– Qt 4.2.x support
– Windows x64 support
– Reduced size and improved performance of Qt wrapper
– Support for non-intrusive testing on Intel Mac OS X
– Support for sending low-level windowing system events to native, non-Qt controls

(Squish for Web)

– XPath support to efficiently access and query nodes in the web application’s DOM document from test scripts
– Screenshot comparisons
– New functions to synchronize on page loading completed
– New event types to catch opening of popup windows
– More specialized support for popular JS/Ajax frameworks such as Backbase and qooxdoo

(Squish for XView)

– Added events which can be handled by Squish’s event handlers to catch opening Notice windows

About froglogic

froglogic GmbH is a software company based in Hamburg, Germany. Its flagship product is Squish, the market-leading automated testing tool for GUI applications based on Qt, Tk or XView and for HTML/Ajax-based web applications running in different web browsers. froglogic also offers services in the areas of QA/automated testing as well as Qt/C++ programming and consulting. More about froglogic at http://www.froglogic.com.

Revenge of the Anti-test, Developer optimism, Nullity defined

01.10.2007 · Posted in Blogosphere

Charles Miller finds an interesting class of bugs: the ones that fix themselves spontaneously from one build to another. Well then, are those tests really green? (Revenge of the Anti-Test!)

Laurent Ploix has advice for optimistic developers. "We tend to under-estimate the development charges, we like to forget about some tasks (debug, documentation…), and we tend to believe that a code that works on one machine works everywhere. Almost everybody knows that this is nonsense:" (But it works on my machine!)

James Carr has been encountering tests that only contain assertNull and assertNotNull (TDD Anti-Pattern: The Nullifier).

Alex Ruiz has released version 0.2 of his testng-abbot tool. It provides flexible assertions and component Fixtures. (Alex Ruiz’s weblog)
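The "Nullifier" anti-pattern is easy to reproduce: a test that asserts only non-nullness stays green even when the result is completely wrong. A minimal sketch (class and method names invented for illustration, not taken from Carr's post):

```java
// Illustrative "Nullifier" anti-pattern: the production code is buggy,
// yet the null-check "test" stays green.
public class NullifierDemo {

    // Buggy implementation: ignores its input and returns a useless value.
    static String extractFirstWord(String sentence) {
        return "";  // wrong, but not null
    }

    public static void main(String[] args) {
        String word = extractFirstWord("Hello world");

        // Nullifier-style assertion: passes, proves almost nothing.
        if (word == null) {
            throw new AssertionError("word was null");
        }

        // A meaningful assertion would expose the bug immediately, e.g.:
        //   if (!word.equals("Hello")) throw new AssertionError(word);
        System.out.println("nullifier test passed");
    }
}
```

The fix is simply to assert on the actual expected value, not merely on the result's existence.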

Shane O’Sullivan releases the first beta of his build tool for the Dojo JavaScript toolkit.

Folks on JavaRanch attempt to explain Agile to management in 30 seconds (http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=42&t=000814).

Following in the bytesteps of the Ambient Orb Ant Task, the Eclipse XPS plug-in shows the results of JUnit test runs using the built-in LEDs on Dell XPS (eclipse-xps – Google Code). Some screenshots.

Jeremy Rainer found a few "excellent" presentations: Selenium (Jason Huggins), Building Testable AJAX Applications (Adam Conners and Joe Walnes), Behaviour Driven Development (Dave Astels), Scrum Et Al (Ken Schwaber), Doubling the Value of Automated Tests (Rick Mugridge), and Using Open Source Tools for Performance Testing (Goranka Bjedov). See the list here.

Hudson 1.72 is out: "Hudson now uses a ‘slave agent’ program on the slave machine, which maintains a single bi-directional stream with the master." (Hudson 1.72 and new remoting infrastructure)

SpringContracts: DbC and Spring integration

01.10.2007 · Posted in Links

SpringContracts is a Java solution for Design by Contract, with seamless integration into the Spring Framework.

Main Features

* Gives you the freedom to express contracts – preconditions, postconditions and invariants – in a flexible way.
* Lets you configure the behaviour of contract validation via Spring’s application context.
* Provides a pluggable way to switch the language in which conditions are described.
* Comes with built-in support for Expression Language (Commons EL), extended with first-order logic constructs.
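To see what Design by Contract buys you, here is a plain-Java sketch with hand-rolled checks (the Account class and its rules are invented for illustration; SpringContracts lets you declare such conditions via Spring configuration instead of cluttering method bodies with them):

```java
// Hand-rolled Design by Contract sketch. Preconditions and postconditions
// guard each method; the invariant (balance >= 0) guards the object.
public class Account {
    private long balance;  // invariant: balance >= 0

    public void deposit(long amount) {
        if (amount <= 0)  // precondition
            throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    public void withdraw(long amount) {
        if (amount <= 0)  // precondition
            throw new IllegalArgumentException("amount must be positive");
        if (amount > balance)  // precondition
            throw new IllegalStateException("insufficient funds");
        long before = balance;
        balance -= amount;
        // postcondition and invariant check
        if (balance != before - amount || balance < 0)
            throw new IllegalStateException("contract violated");
    }

    public long getBalance() {
        return balance;
    }
}
```

A DbC library moves these inline checks out into declarative contracts, so the method body contains only the business logic.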

rcodetools: Ruby code manipulation tools

01.10.2007 · Posted in Links

rcodetools is a collection of Ruby code manipulation tools. It includes xmpfilter and editor-independent Ruby development helper tools, as well as Emacs and Vim interfaces.

Currently, rcodetools comprises:

* xmpfilter: automagic Test::Unit assertions/RSpec expectations and code annotations
* rct-complete: 100% accurate method/class/constant etc. completion
* rct-doc: document browsing and code navigator
* rct-meth-args: precise method info (meta-prog. aware) and TAGS generation

Evolving an Embedded Domain-Specific Language in Java [PDF]

01.10.2007 · Posted in Links

This paper describes the experience of evolving a domain-specific language embedded in Java over several generations of a test framework. We describe how the framework changed from a library of classes to an embedded language. We describe the lessons we have learned from this experience for framework developers and language designers.

Describes the evolution of jMock from framework to domain-specific language.
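The shift the paper describes, from a library of classes to an embedded language, can be sketched with an invented fluent interface (names are hypothetical, not jMock's actual API):

```java
// Invented before/after sketch of a library evolving into an embedded DSL.
public class FluentDsl {

    // "Library of classes" style: separate configuration steps, easy to misuse:
    //   Expectation e = new Expectation();
    //   e.method = "transfer"; e.count = 1; e.result = true;

    // "Embedded language" style: each method returns `this`, so chained
    // calls read like a sentence describing the expectation.
    static class Expectation {
        String method;
        int count;
        Object result;

        Expectation expects(int times)  { this.count = times;  return this; }
        Expectation method(String name) { this.method = name;  return this; }
        Expectation will(Object value)  { this.result = value; return this; }

        @Override
        public String toString() {
            return "expect " + count + "x " + method + " -> " + result;
        }
    }

    public static void main(String[] args) {
        Expectation e = new Expectation().expects(1).method("transfer").will(true);
        System.out.println(e);  // prints: expect 1x transfer -> true
    }
}
```

The chained form is the same data as the setter form; the DSL's value is that the call sequence itself documents intent and the compiler steers users toward valid combinations.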

Author: Steve Freeman
Presented: OOPSLA 2006