testdriven.com Wrangling quality out of chaos

Archive for the ‘Articles’ Category

Test Automation in the Age of Continuous Delivery

09.30.2014 · Posted in Agile Testing, Articles


I spend a lot of my time with clients figuring out the minutiae of how to implement optimal test automation strategies that support their transition to Continuous Delivery. The goal is typically to be able to release software after each 2-week iteration (Sprint).

When faced with such a compressed development and testing schedule, most teams quickly realize that they can’t afford to have a separate QA automation group — there is just not enough time for after-the-fact automation and bug investigations that have to be done with that approach. Instead, the teams are adopting “whole team automation” strategies where system, integration and unit tests are owned and developed by the entire team.

The first and most persistent question I hear from folks who are going through this transition is: “who writes the system/integration tests?” This is usually followed by a discussion about the tester’s lack of knowledge of the chosen development language (Java, C#, etc.) and a claim that some cockamamie, XML-based tool will save the day. My advice is to recognize that the fundamental issue is not the tester’s coding skills, but the lack of proper collaboration on the team. Learning to write most test code in Java or C# is not much more difficult than expressing the same script in XML or an equivalent QA-only tool — what is difficult is writing ALL of it from scratch and without proper support.

Allow me to elaborate: when tests are well written, using proper layering patterns such as Page Objects for Selenium and the ‘builder/fluent’ style, it is quite easy to add tests or to modify them when requirements change. Once the initial design is laid out, the rest of the code is quite simple to write, and when a tester needs help the developer is right there. When testers pair with developers, they become proficient very quickly; pairing is a very efficient way to transfer skills. Now, when you check in code, you are checking in “units of business value” — that is, code with the proper amount of tests to mitigate the risks to quality.
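As a sketch of what that layering looks like, here is a minimal, framework-free Page Object in Java with a fluent interface. The LoginPage, its locators, and the tiny Browser interface are all invented for illustration; in real Selenium code the Browser role would be played by a WebDriver instance.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a browser driver, so the sketch stays self-contained.
// In real Selenium code this would be a WebDriver instance.
interface Browser {
    void type(String locator, String text);
    void click(String locator);
}

// Page Object: tests never touch locators directly, only intent-level methods.
class LoginPage {
    private final Browser browser;

    LoginPage(Browser browser) { this.browser = browser; }

    // Fluent style: each step returns the page, so tests read as a sentence.
    LoginPage withUser(String name)     { browser.type("#user", name); return this; }
    LoginPage withPassword(String pass) { browser.type("#password", pass); return this; }
    LoginPage submit()                  { browser.click("#login"); return this; }
}

public class PageObjectSketch {
    public static void main(String[] args) {
        List<String> actions = new ArrayList<>();
        // A recording Browser, standing in for a real driver in this sketch.
        Browser recording = new Browser() {
            public void type(String l, String t) { actions.add("type " + l + "=" + t); }
            public void click(String l)          { actions.add("click " + l); }
        };

        new LoginPage(recording).withUser("joe").withPassword("secret").submit();

        // The fluent chain produced three intent-level interactions.
        System.out.println(actions);
    }
}
```

Because a test reads as a chain of intent-level steps, a requirements change usually means editing one Page Object method rather than every test that touches the screen.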

Let’s take a look at a real-world example:

Imagine an application which has a Web UI and both internal and external RESTful APIs. We chose Cucumber/Selenium for automating the Web UI and rest-assured for automating the web services. How would this work in a fully integrated team?

The Product Owner comes to the planning meeting with a rough draft of Cucumber feature/scenario scripts that support the user stories that may be chosen for the upcoming Sprint. The requirements are reviewed and clarified with developers and testers; additional scenarios are discovered and added to the Cucumber ‘feature’ files. Agreement is reached on which scenarios need to be automated with system/integration tests.
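Such a draft might look like the following Cucumber scenarios (the feature and step names are invented for illustration):

```gherkin
Feature: Account login
  As a registered user I want to log in so that I can see my dashboard

  Scenario: Successful login
    Given a registered user "joe1234" with password "secret"
    When the user logs in with valid credentials
    Then the dashboard page is shown

  Scenario: Rejected login
    Given a registered user "joe1234" with password "secret"
    When the user logs in with an invalid password
    Then an error message "Invalid credentials" is shown
```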

The developer starts writing the UI code in a story branch. She lays out the fields and controls for a new screen, generates the Page Object stubs, and checks in the code. The tester, if available, begins the Selenium automation while the developer works on the JavaScript code for client-side validation and on the server-side code driven by rest-assured tests. After some time they both merge their code into the story branch and meet for a pairing session where they do some exploratory testing, review the unit tests and the integration tests, and together refactor some of the tests to be more readable. The developer gets additional code reviews while the tester runs a quick exploratory test session based on the code changes for this story. After the system/integration tests pass the pre-commit tests on the build cloud, the code is checked into master and all of the necessary tests are re-run. Voila! The build is still green. The entire process took less than a day and the right level of quality has been achieved. There is joint ownership of the tests and the tests are well written. If they break in the future, the developer will have no qualms about fixing them right away or rolling back the commit to keep the build green — “software by accretion” is now part of the culture.



Taming Legacy Grails Code with Test Generation

04.14.2013 · Posted in Articles, Automatic Testing, Selenium

In his seminal book, Working Effectively with Legacy Code, Michael Feathers defines “legacy code” as code without tests. Unfortunately, this is exactly what my team wound up creating towards the end of our Groovy/Grails project. We were hoping that our Selenium tests would cover both the JavaScript in the browser and the Groovy code in the controllers. Unfortunately, due to several issues that are outside the scope of this article, we could not get a robust set of tests that ran fast enough, so we decided that the pragmatic thing to do was to abandon the Selenium tests — the project had a rather large QA component, so the risk to the business was well mitigated. We soon realized that the untested code in the controllers was giving us problems, with typos not being found for many days because Groovy 1.8 is a dynamic language.

To quickly rectify the situation we decided to implement a record/playback strategy. In most situations I would be against using record/playback automation tools, but the beautiful thing about software development is that it’s so contextual, and I think this was the right move for the moment.

We wrote a small code generator that produced simple Selenium tests using the HtmlUnitDriver. The generator was triggered by a call in a ‘MyFilters.groovy’ file when the application was tested manually. For every controller/action combination, the generator intercepted the params map and generated the test with the values in the map. The initial assertions were simple — they just checked that nothing obviously bad had happened, such as 404 or 500 errors. It took about 1.5 hours to write the generator, and then we stepped through the application manually and generated many dozens of tests. Because there was no Ajax involved, the tests were 100% reliable and ran in about 90 seconds.
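The record side of such a generator can be sketched as follows, in Java for brevity (the original was a Groovy filter, and the BaseGeneratedTest/openAndCheck names here are invented): given the intercepted controller, action, and params map, it emits the source of a replay test.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a record/playback test generator: from an intercepted
// controller/action and its params map, emit the source of a simple test
// that replays the request and asserts nothing obviously bad happened.
// BaseGeneratedTest and openAndCheck are hypothetical names.
public class TestSourceGenerator {

    static String generate(String controller, String action, Map<String, String> params) {
        // Rebuild the request URL from the intercepted params map.
        StringBuilder url = new StringBuilder("/" + controller + "/" + action);
        String sep = "?";
        for (Map.Entry<String, String> p : params.entrySet()) {
            url.append(sep).append(p.getKey()).append("=").append(p.getValue());
            sep = "&";
        }
        String className = capitalize(controller) + capitalize(action) + "GeneratedTest";
        return "class " + className + " extends BaseGeneratedTest {\n"
             + "    void testReplay() {\n"
             + "        // fails on a 404/500 response or an error page\n"
             + "        openAndCheck(\"" + url + "\");\n"
             + "    }\n"
             + "}\n";
    }

    static String capitalize(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("id", "42");
        params.put("sort", "name");
        System.out.println(generate("book", "show", params));
    }
}
```

Stepping through the application manually then produces one such replay test per controller/action/params combination seen.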

This simple approach allowed us to be sure that when we do refactor our code, we will catch the basic mistakes that a compiler would catch for us. We got an additional bonus when we caught a bug in our sample data that violated a Hibernate constraint.

In our situation, writing a simple record/playback tool was the right move. It took less than a day to implement and it has already paid for itself many times over.

Using just enough automation to dramatically improve the testing process

05.10.2011 · Posted in Articles

At one of my recent engagements, I was assigned to test the reports feature. The setup was as follows: reports were produced via a third-party tool and ran on a separate server that talked directly to the database server. There was only one Report Analyst for the entire company — needless to say, he was a very busy guy. Additionally, setting up the reports server when the reports configuration changed was tedious and error prone, so testers always put off reports testing to the end of the release cycle. Thus reports were always late, dreaded by testers, and often had to miss the main product release and ship separately.

I started to look for opportunities to apply automation to fix some of these pains. First I looked at automating the checking of the final report. This was a non-starter because the report UI was using copious amounts of custom JavaScript and was not amenable to web automation tools such as Selenium/WebDriver.

To establish testing priority, I examined the history of the testing effort in this area, which had been ongoing for several years. It was obvious that most of the bugs were related to schema drift, but there was no easy way to determine which reports were affected by schema changes, because report templates used a custom format to represent the data model.

Then it hit me: the reports server probably has diagnostic logging capability to trace the SQL that it sends to the DB server. Sure enough, after a quick search through docs, I was able to see the generated SQL.

Next I ran each report by hand to capture the generated SQL for the most common scenarios. With SQL in hand, I created plain old JUnit tests whose sole purpose was to break when the DB schema changed. These tests ran in a few seconds and I was allowed to check them into the main continuous integration server suite. Now when one of these tests broke, we could disable it and file a bug against the reports component.
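The shape of those tests can be sketched as follows; the report names and SQL strings are invented, and the query executor is injected so the sketch runs without a database. In the real suite each captured SQL string would be executed through JDBC inside a JUnit test, with any SQLException failing the build.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Schema-drift checks: one captured SQL string per report, and a check that
// simply executes it and reports which reports no longer run. Report names
// and SQL are invented; the executor is injected so this sketch runs without
// a database -- in the real suite it would be stmt.execute(sql) in JUnit.
public class SchemaDriftChecks {

    static final Map<String, String> CAPTURED_SQL = new LinkedHashMap<>();
    static {
        CAPTURED_SQL.put("DailyLogins",  "SELECT userName, loginTime FROM rwm.Loginlog");
        CAPTURED_SQL.put("FailedOrders", "SELECT orderId, status FROM sales.Orders WHERE status = 'FAILED'");
    }

    // Returns the names of reports whose captured SQL no longer executes.
    static List<String> brokenReports(Predicate<String> sqlStillRuns) {
        List<String> broken = new ArrayList<>();
        for (Map.Entry<String, String> e : CAPTURED_SQL.entrySet()) {
            if (!sqlStillRuns.test(e.getValue())) {
                broken.add(e.getKey());
            }
        }
        return broken;
    }

    public static void main(String[] args) {
        // Simulate schema drift: any query touching the `status` column fails.
        Predicate<String> fakeDb = sql -> !sql.contains("status");
        System.out.println(brokenReports(fakeDb));
    }
}
```

Each broken entry corresponds to a report whose template needs the Analyst's attention, which is exactly the early-warning signal the CI suite provided.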

Failed tests notified the Reports Analyst about upcoming work much earlier in the cycle, and he was able to schedule his time much better. On the testing side, I wrote a script to deploy the reports server with a single shell command. With the tedious and error-prone setup eliminated, testers were happy to do exploratory testing as soon as the new reports were checked in. When a bug was fixed, we replaced the SQL string in the JUnit test and re-enabled it.

To conclude, while it was too expensive to automate reports checking completely, applying the right kind of automation made the process a lot more manageable, predictable and less stressful for everyone.

Programmer/Tester collaboration example

04.14.2011 · Posted in Articles

Most testers of enterprise applications have some programming background, and some even have degrees in computer science, but most would not qualify as professional programmers, and that is the way it should be, because great testers should be great analysts, competent in the skills of testing. Recently, a tester described to me how she was testing a migration from one DB vendor to another. She saved the result set of an SQL query, imported it into an editor, and converted it into a query for the target database by wrapping each item in quotes and separating the items with commas.

Something like:


SELECT * FROM rwm.Loginlog WHERE userName in("joe1234", "mary2234");

This is certainly a reasonable test technique for this situation, and the manual step of ‘generating’ the test SQL is not very difficult. Still, doing manual test preparation that does not require any thinking will take its toll on the tester and should be automated if affordable. In this case the tester, though competent in SQL, did not know how to automate the entire operation. I decided to give it a shot and time-boxed it at one hour. I decided to try writing a stored procedure that generated the required SQL code. I had never seen a MySQL stored procedure before, but with Google as my pair-programming partner I was able to write it in about 45 minutes. (Caveat: there may be a much more elegant way of doing it, but my code works and does the job.)

File: p1.sql


DELIMITER //

CREATE PROCEDURE p1()
BEGIN
  DECLARE done INT DEFAULT FALSE;
  DECLARE colValue VARCHAR(255);
  DECLARE cur1 CURSOR FOR SELECT userName FROM rwm.Loginlog;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

  SET @outsql = 'SELECT * FROM rwm.Loginlog WHERE userName in( ';

  OPEN cur1;

  read_loop: LOOP
    FETCH cur1 INTO colValue;
    IF done THEN
      LEAVE read_loop;
    END IF;
    SET @outsql = CONCAT(@outsql, '"', colValue, '",');
  END LOOP;

  CLOSE cur1;

  -- replace the trailing comma with the closing parenthesis
  SET @outsql = LEFT(@outsql, LENGTH(@outsql) - 1);
  SET @outsql = CONCAT(@outsql, ');');
END //

DELIMITER ;


And you install and run it like this:

source p1.sql
call p1();
select @outsql;

With this code in hand, the tester can easily modify it to suit many similar situations without needing any more help from a programmer.

Elisabeth Hendrickson (http://twitter.com/testobsessed) suggested that saving the results from both databases and comparing the files using Unix tools is another time-tested approach, and I am sure there are at least a dozen others. The moral of the story is that most programmers would love to help you avoid doing mindless, repetitive work – just ask them for help.

Automated GUI Testing: Squish Success at Perforce Systems

09.30.2009 · Posted in Articles


Perforce Software, founded in 1995, markets and supports Perforce, the Fast Software Configuration Management (SCM) system. Perforce is an award-winning SCM system used to version and manage source code and all digital assets. Perforce SCM seamlessly handles distributed development and multi-platform environments and is used by more than 330,000 developers worldwide.

Recently, Perforce adopted Squish for GUI performance testing of their SCM system’s flagship GUI, the Perforce Visual Client (P4V). We discussed Perforce’s use of Squish with Tim Brazil, a Perforce Performance Lab engineer.

Why Squish?

In mid-2009 Tim Brazil chose Squish to round out his testing arsenal and to help ensure that performance remains high as new features and bugfixes are applied to P4V. P4V, written in C++ and using the Qt GUI library, provides a graphical interface on Windows, Mac OS X, Linux, Solaris, and FreeBSD.

One feature in particular Tim liked was Squish’s support for multiple standard scripting languages for writing tests—these currently include JavaScript, Perl, Python, and Tcl. Using a standard scripting language avoids the need for testers to learn an obscure proprietary testing language, and means they can benefit from the large number of third party modules available for standard scripting languages.

Squish provides a complete GUI-based testing IDE capable of recording and playing back GUI tests, but for Tim, it was also Squish’s support for command line use that was particularly appealing.

"I prefer to work in a command-line environment that facilitates the use of scripting languages," Tim explained. "It is evident that froglogic’s approach took engineers like myself into account when they designed the product. For example, features like the envars file, suite.conf file, squishserver, and squishrunner, allow me to design a fairly complex test environment with relatively minimal work."

Tim has many years of experience in the testing field and is a strong proponent of using both automated and manual testing.

"I believe the best software testing solution is to use a good mix of automation and manual tools," said Tim. "By utilizing the strengths of both, producing superior products can be achieved."

"The benefits of automated testing include reliability, repeatability, comprehensiveness, and speed," Tim continues. "Furthermore, automated tests can quickly and dependably navigate through test scenarios and are, therefore, more likely to uncover subtle timing problems." He pointed out that manual testing alone is often both slow and fallible, and that even with a comprehensive test plan at hand it can be difficult to reliably recreate the exact test actions and environment.

Squish at Perforce

Tim uses Squish to test the performance of P4V, specifically on Windows Professional, Mac OS X, and SLED (SUSE Linux Enterprise Desktop). Squish is used to test the nightly build as well as the previous three P4V releases. A daily performance report is generated, identifying performance trends and pinpointing any possible areas of concern as P4V continually evolves.

Each test Tim uses is designed to work independently—if one test fails, it has no effect on subsequent tests. No third party tools are used; instead, a custom Perl script (runner.pl) runs the Squish tests. This script runs each set of Squish P4V tests for each version of P4V on each client machine, accumulating results into an XML file (see the screenshot below). Once the tests are complete, the XML results file is automatically parsed and converted into an HTML report for the test team’s review each morning.

Squish in Practice

We asked Tim what features of Squish he liked most, apart from the multiple scripting language and command-line tools support he’d already mentioned.

Squish’s Object Map came to mind: "At previous companies I have used graphical test tools that were heavily invested in using coordinates to identify objects. This was a maintenance nightmare. Squish’s objects.map is unique and greatly facilitates test readability, robustness, and maintenance."

Squish uniquely identifies application objects such as widgets, using the values of their properties. For every identified object, Squish also creates a corresponding symbolic name—the symbolic name is the one that is normally used inside test scripts. This means that if a developer changes one of an object’s properties, the test engineer only has to update the corresponding object property once in the Object Map to reflect the change. The symbolic name used for the object will continue to work in all the tests where it is used.

In some cases, an object property’s value may vary depending on the platform the application is being run on. Squish’s Object Map can accommodate such challenges since the properties used to identify an object can not only be matched for equality, but also using wildcard or regular expression matches—a feature that Tim has found to be particularly useful.

"One of the biggest challenges I’ve faced with automated GUI testing is the ability to maintain a robust test as the product changes and evolves over time," Tim continues. "This is a problem because GUI application performance can vary on different operating systems. One particular issue we’ve had is being able to identify when a web page or GUI application object is ready to be tested, since load times vary. The wait*() functions Squish provides have been invaluable in helping me tune a test so that it provides consistent behavior when run on various platforms."

In addition to Squish’s documented features, Tim has found Squish’s technical support team very helpful: "I was impressed with froglogic’s support from the moment I started to evaluate Squish. Just because something isn’t stated in the documentation does not mean it cannot be done. Squish’s technical support team has quickly and efficiently provided solutions that work."


Perforce’s Performance Lab depends on the reliability and repeatability of Squish tests to check application performance as well as behavior across multiple platforms. Squish’s usefulness and flexibility have allowed Perforce to rapidly adopt Squish as an integral part of their performance quality monitoring process. This has led to time and cost savings compared with the previous manual testing, and at the same time ensured that tests are automatically and reliably repeated to ensure product quality.

Making a third party tool an integral part of P4V’s testing process is a significant commitment, but one that Perforce has chosen to make with Squish.

froglogic’s team would like to thank Tim for taking the time to share Perforce’s experience with Squish, and we are looking forward to a continued successful relationship.

More about Squish and froglogic at http://www.froglogic.com

Testing WCF Service Applications (Part 3 of 4) — Mocking the Async Service

01.01.2009 · Posted in Articles

Up to this point, we have tested the service and we have tested the client — both in isolation. We have written unit tests and our code has good coverage. Unfortunately, my clients are not always synchronous. In a Silverlight client, for instance, the framework will not permit you to make synchronous service requests. As it turns out, writing tests for asynchronous service clients is not straightforward. Thankfully, there are some hacks that you can take advantage of to write effective asynchronous tests.

Testing WCF Service Applications (3 of 4) — Mocking the Asynchronous Service

Testing WCF Service Applications (Part 2 of 4) — Mocking the Service

12.15.2008 · Posted in Articles

So far, I have outlined how to test your WCF service. I simply took advantage of the WCF architecture and tested the service directly, outside of the actual service harness. Now I need to set my sights on the client. This becomes a bit more difficult, but I wouldn’t say that it is necessarily hard. I will start by giving a typical textbook example of hooking up to our service, and then I will tell you what is wrong with it. I will continue by modifying the code to be more testable so that the service can be mocked.

Testing WCF Service Applications (Part 2 of 4) — Mocking the Service

Unit Testing, TDD and the Shuttle Disaster

12.15.2008 · Posted in Articles

It turns out that TDD principles also apply for engineering the space shuttle’s main engine.

This article quotes a report that compares the bottom-up and the top-down approaches for designing jet/rocket engines. It turns out that the bottom-up approach is quite similar to TDD, and likewise successful.

Testing WCF Service Applications (1 of 4)

11.29.2008 · Posted in Articles

One of the most beautiful things about the WCF framework is the way it was designed to be more testable than ASPX services. When you design your WCF interface, you are mostly just designing an interface with the WCF ServiceContract attributes. The WCF framework uses your interface to determine the actual contract and transport mechanism so you don’t have to.

Read More: Testing WCF Service Applications (1 of 4)

Automated GUI Testing Interview on Squish 4.0: Talking with froglogic’s founders

11.13.2008 · Posted in Articles

Squish is the leading automated GUI testing tool supporting applications based on cross-platform GUI technologies such as Java Swing/AWT, Java SWT/Eclipse RCP, C++/Qt, Web/HTML/Ajax and more. Squish is renowned for its dedicated toolkit support, use of open scripting languages, great flexibility, and robust test creation and execution.

While the development of the upcoming Squish 4.0 is on-going, Qtrac Ltd.’s Mark Summerfield talked with some of the people behind the product.

In this first interview, Mark asked froglogic’s founders, Harri Porten and Reginald Stadlbauer, to give an overview of Squish 4.0’s features. In the following interviews Mark will have deeper technical discussions with the responsible developers working on specific features.

Read the interview at http://blog.froglogic.com/2008/10/squish-40-interview-talking-with-froglogics-founders/

My Year With TDD

10.24.2008 · Posted in Articles

I have now been using TDD (Test Driven Development) as my primary development practice for over a year, and I wanted to reflect on what it has done for me professionally. In reality, the past year has been great for my professional career in many ways.

My Year With TDD

Java and Web GUI Testing: Squish’s Extensibility and Integratability

09.25.2008 · Posted in Articles

Today I’d like to write about some of Squish’s features which are not so well known but which make Squish a really powerful GUI testing solution. Squish is a cross-platform, automated GUI testing tool with dedicated support for applications based on Java Swing, AWT, SWT, Eclipse RCP, Web/HTML/AJAX, Qt and many more.

Extensibility: When we started to create Squish, our main focus was to support all the standard components of the supported GUI technologies (Qt, Swing, SWT, etc.) very well. But we soon found that nearly every GUI application uses custom components. These are often complex, interactive controls visualizing data in some way. While basic support for all custom components can be provided out of the box (and is provided that way by Squish), there is no way a GUI testing tool can know all the internals (such as internal objects) of such custom controls and support them in a robust way.

This made us think about extension capabilities. The first Squish edition where we offered an extension API was Squish for Web, the automated GUI testing tool for web applications. The reason is probably that, especially in the world of AJAX GUIs, nearly all the controls used are custom controls.

This extension API allows users to implement a few functions in JavaScript to let Squish know how to query the internal information of custom AJAX controls. That way, every custom AJAX control can be supported just as well as the standard HTML and AJAX controls already known to Squish.

Later we introduced a similar mechanism for our Squish for Java tool, which features automated testing of Java GUIs such as Swing, AWT, SWT and Eclipse RCP. Here we provide a few Java interfaces which you need to implement for your custom Java controls to allow Squish to drill down into the details of your custom controls.

Since these extensions solve many problems of GUI testing, we are now introducing similar extension APIs for our Squish for Qt edition. Also our new Squish edition for testing native Windows applications, which will be part of the upcoming Squish 4.0, will be completely built on this extension mechanism even internally.

Integratability: Many vendors of testing tools want to completely lock their users into their world, so they ship the complete tool set needed to automate QA. We realized early on that GUI testing is just one part of the whole QA process, and we also saw that most of our users already have automation systems in place for their unit testing. So they usually prefer to integrate Squish into their existing automation system instead of setting up a completely separate automation process just for GUI tests.

So we decided to focus on making Squish easy to integrate instead of developing our own test management solution. To allow that, we provide easy-to-use command line tools to start and control Squish test runs and to generate an easy-to-process XML report from any test run.

In addition, we now also offer ready-made integrations for CruiseControl, Maven, Ant, Eclipse TPTP, HP Quality Center and more, so Squish can be used as part of the whole test automation effort regardless of the tools in use.

We will also eventually release our own test management system, which we use internally. But this will be just another option, not a replacement for our integrations; rather the opposite. We will always ensure that Squish stays easy to integrate, since we really think that our users should have the choice of what they want to use.

To end this article here are some links with further details on Squish’s extension and integration capabilities:

Squish for Web’s extension API: http://www.froglogic.com/download/book/ug-jsapi.html
Squish for Java’s extension API: http://www.froglogic.com/download/book/ug-javaextapi.html
Squish’s integration plugins: http://www.froglogic.com/download/book/addons.html

Java GUI Test Automation: Squish for Java Success at Ericsson

06.17.2008 · Posted in Articles

"Squish has proved to be an excellent and popular replacement for the GUI automation tool which was used in previous automation campaigns," said Shane McCarron from Ericsson AB.

We had the pleasure of interviewing Shane McCarron, a Senior Designer at Ericsson AB, who uses froglogic’s Squish for the automated GUI testing of several Java GUI applications in different divisions. We talked about their test automation and why they chose Squish over the competition.

Froglogic: What’s your name and position?

Shane: Shane McCarron, Senior Designer.

Froglogic: What’s the name of the company you work for?

Shane: LM Ericsson Ltd.

Froglogic: Can you briefly describe the software you are testing with Squish?

Shane: It is a suite of Java GUI applications used to manage a telecommunications network.

Froglogic: How did you learn about Squish?

Shane: I read about it in a wiki article discussing vendors that provide cross-platform GUI automation tools. Squish was said to support multiple Java Virtual Machines (JVMs), which was a key requirement of ours.

Froglogic: When did you start to use Squish?

Shane: In July 2007

Froglogic: What are the main reasons you decided to use Squish for your automated GUI tests?


Squish supports testing of multiple JVMs from a single test case
Tests can be written in JavaScript and this was seen as a benefit as there was already some knowledge of this language in our area and we didn’t have to learn a proprietary language
We were impressed with the Spy feature which lets you pick an object on a GUI and then set related synchronisation points in your test script
The step through debugging feature of the IDE was also seen as a very beneficial facility

Froglogic: What are Squish’s main advantages over the competition? Why did you choose Squish over its competitors?


Multiple JVM support
JavaScript based test scripts
Ability to integrate existing tests written in other languages e.g. Perl
Modular based development which means commonly used functions can be placed in scripts which are shared by multiple tests
Technical support – this was gauged by our experiences when we had questions during the review process
Competitive cost of the product and its licenses

Froglogic: What’s your favorite feature of Squish?

Shane: The debugger is excellent – when a breakpoint is encountered it is possible to see at that point in time the value of test specific variables and quickly get to the bottom of any problem with the logic of a test.

Froglogic: What is your most wanted feature wish for Squish?

Shane: Formatting and auto-completion of code e.g. in other IDEs when you type a function name and the opening bracket a prompt will appear indicating the parameters that must be passed to the function for that particular call. [Editor’s note: froglogic works on a highly improved IDE for Squish 4.0]

Froglogic: Are you satisfied with froglogic’s technical support service?

Shane: Yes. We have found froglogic to be very approachable and knowledgeable. The turnaround time for resolving issues is quite fast too.

Froglogic: What have been your biggest challenges in creating your automated GUI tests with Squish?

Shane: We were early adopters of Squish for Java so there were some teething problems there but I think our main challenge is that the GUIs we are testing are rather complex.

Froglogic: Where do you see the main benefit of automated testing? Shorter testing cycles, decreased use of human resource, better coverage, test reliability, and/or improved product quality?

Shane: We believe automated regression testing to be the quickest way to ensure that the introduction of new functionality has not broken existing core functionality of our GUIs. Where large tests involving many repetitive actions are concerned the automated test has to be considered more reliable too.

Froglogic: Do you use any other 3rd party or internal tools complementing your testing effort?

Shane: Yes, for example we use internal Perl scripts to check a database at various points in our tests to validate the actions taking place in the GUI.

Froglogic: Is there anything else you’d like to add?

Shane: Squish has proved to be an excellent and popular replacement for the GUI automation tool which was used in previous automation campaigns.

Froglogic: Thank you, Shane, for your time!

More about froglogic and Squish at http://www.froglogic.com.

Test Driven Development in .NET

03.02.2008 · Posted in Articles

A few days ago I had a session at a .NET conference in Germany (BASTA Spring 2008). I talked about software quality assurance and especially test driven development. Because many people asked me about the details and the source code of my talk, I wrote an article about TDD (including TDD of web UIs) with Visual Studio 2008 and WatiN: Bug Busters – Test Driven Development in .NET.

The article shows a sample application that is developed according to test driven development principles. It shows how to add a data-access and business layer (based on LINQ to SQL) and how to build unit tests for ASP.NET forms.

I hope you find it interesting to read.

Kind regards,

Interview: Squish for Java GUI Testing

10.25.2007 · Posted in Articles

"My Java Swing application has a lot of graphical interfaces, so it’s impossible to validate all of them by hand. Squish permits us to increase the test coverage without increasing the duration of the validation," said Vincent Laigle from SAGEM.

We had the pleasure of interviewing Vincent Laigle, the Validation Team Leader at SAGEM, who uses froglogic’s Squish for the automated GUI testing of a Java Swing application. We talked about their test automation process and why they chose Squish over HP’s (formerly Mercury) Quick Test Pro.

Froglogic: Can you briefly describe the software you are testing with Squish?

SAGEM: It is a Java program which runs on an embedded Linux platform. It's best described as a kind of mail software which is plugged into a radio network.

Froglogic: How did you learn about Squish and when did you start using it?

SAGEM: I learned about it from Aptus. I asked them to search for a Java GUI test tool similar to Quick Test Pro but which also runs on Linux. After a successful evaluation of Squish we started to use it in June 2007.

Froglogic: What are the main reasons you decided to use Squish for your automated GUI tests? What are Squish’s main advantages over the competition?

SAGEM: Squish has several great features. Here is a list of the most important ones for us:

– It works well with Java applications using Swing/AWT (Editor’s note: Squish for Java also supports testing Java SWT and Eclipse RCP applications)

– The ability to modify and customize Squish's object name generator, i.e. to control which object properties are used to generate object names when recording scripts

– Built-in support for data-driven testing and data tables

– Support for easy-to-learn programming languages for test scripts (JavaScript, Python, Perl, Tcl)

Froglogic: Are you satisfied with froglogic’s technical support service?

SAGEM: Yes. The team at froglogic is very responsive and helpful!

Froglogic: What have been your biggest challenges in creating your automated GUI tests?

SAGEM: The software is sold on a kind of palm device with a touch screen. So there is no keyboard but a lot of graphical interfaces and elements.

Our customer often changes his mind about the presentation of the interfaces. So I have to create test scripts which can be easily adjusted in the future, without the need to redo them for each change in the user interface.

Squish is very good for that: To abstract the test data from the test scripts, I use a lot of data tables. This combined with a few sed and awk scripts allows me to easily adjust the scripts to UI changes (for example to adjust properties in the object names) without the need to re-record the script.

This is something which is not possible with other GUI test tools like HP’s Quick Test Pro!
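
(Editor's note: the data-table idea can be sketched in plain Python. The column names and the `send_message` stub below are hypothetical; a real Squish script would drive the GUI and use Squish's built-in data-table support instead.)

```python
import csv
import io

# The test data lives outside the script body, so UI or requirement changes
# only require editing this table (or running sed/awk over it), not
# re-recording the whole script.
DATA_TABLE = """recipient\tsubject\texpected_status
alice\thello\tsent
bob\t\trejected
"""

def send_message(recipient, subject):
    """Stand-in for the recorded GUI interaction driven by the real tool."""
    return "sent" if subject else "rejected"

failures = []
for record in csv.DictReader(io.StringIO(DATA_TABLE), delimiter="\t"):
    status = send_message(record["recipient"], record["subject"])
    if status != record["expected_status"]:
        failures.append(record)

print(len(failures))  # prints: 0
```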

Froglogic: Approximately how many test cases do you have by now?

SAGEM: By now I have approximately 300 test cases with a total of about 9000 lines of script code and about 3200 lines of data tables. I’ve automated tests for approximately 30% of my application’s functionality so far.

Froglogic: Where do you see the main benefit of automated testing?

SAGEM: My application has a lot of graphical interfaces, so it's impossible to validate all of them by hand. Squish (and automated tests in general) makes it possible to increase the test coverage without increasing the duration of the validation process.

Froglogic: Is there anything else you’d like to add?

SAGEM: Squish for Java is a very good product, which is very customizable and adaptable. I had big problems with Quick Test Pro because the objects in my application change a lot. Quick Test Pro doesn't allow modifying the object name repositories and name generation in a clean, automated way. It is also not possible to easily edit the scripts outside its IDE.

As a result, with Quick Test Pro I had to re-record my scripts over and over again. Squish, on the other hand, lets me use grep/sed/awk to modify my scripts and object map. Having a flexible tool like Squish saves a lot of time!

Froglogic: Thank you for your time!

Test-Driven GUI Development with FEST

10.01.2007 · Posted in Articles

JavaWorld published the article Test-Driven GUI Development with FEST, written by yours truly :)

This article introduces FEST, demonstrates what separates it from other GUI testing frameworks and provides a hands-on introduction to Swing GUI testing with this developer testing library. Video demonstrations and working code examples are included.

At the same time, we just released a new version of FEST (0.4). For more details please see the release announcement.

Regression Therapy – Contentful Testing

08.11.2007 · Posted in Articles

Regression testing is usually seen as the poorer cousin of "proper" domain-abstracted assertion-based testing. Often rightly so!

However, with the right support in place, I have found that this form of testing can work very well in certain contexts.

This article addresses one such context: testing the view content generated by a web app. I discuss the background, then present a concrete example in the form of a plug-in for Rails.
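
To make the idea concrete before diving into the Rails plug-in, here is a minimal golden-master sketch in Python rather than Ruby; the `render` stub and file names are hypothetical:

```python
import tempfile
from pathlib import Path

def render(name):
    """Stand-in for the web app's view rendering under test."""
    return f"<h1>Hello, {name}!</h1>\n"

def matches_golden(output, golden_path):
    """On the first run, record the output as the golden master;
    on later runs, flag any drift from the recorded content."""
    golden = Path(golden_path)
    if not golden.exists():
        golden.write_text(output)  # record the reviewed, known-good output
        return True
    return output == golden.read_text()

golden_file = Path(tempfile.mkdtemp()) / "greeting.golden.html"
first = matches_golden(render("world"), golden_file)    # records the output
second = matches_golden(render("world"), golden_file)   # unchanged: passes
changed = matches_golden(render("mars"), golden_file)   # drifted: fails
print(first, second, changed)  # prints: True True False
```

The trade-off, as the article's framing suggests, is that such tests assert on concrete content rather than abstracted domain properties, so every intentional view change requires re-reviewing and re-recording the golden output.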