Wrangling quality out of chaos

Using just enough automation to dramatically improve the testing process

05.10.2011 · Posted in Articles by David Vydra

At one of my recent engagements, I was assigned to test the reports feature. The setup was as follows: reports were produced via a third-party tool and ran on a separate server that talked directly to the database server. There was only one Report Analyst for the entire company; needless to say, he was a very busy guy. Additionally, setting up the reports server whenever the reports configuration changed was tedious and error-prone, so testers always put off reports testing until the end of the release cycle. As a result, reports were always late, dreaded by testers, and often missed the main product release and had to ship separately.

I started to look for opportunities to apply automation to fix some of these pains. First, I looked at automating the checking of the final report. This was a non-starter because the report UI used copious amounts of custom JavaScript and was not amenable to web automation tools such as Selenium/WebDriver.

To establish testing priority, I examined the history of the testing effort in this area, which had been ongoing for several years. It was obvious that most of the bugs were related to schema drift, but there was no easy way to determine which reports were affected by schema changes because the report templates used a custom format to represent the data model.

Then it hit me: the reports server probably has diagnostic logging capability to trace the SQL that it sends to the DB server. Sure enough, after a quick search through the docs, I was able to see the generated SQL.

Next I ran each report by hand to capture the generated SQL for the most common scenarios. With the SQL in hand, I created plain old JUnit tests whose sole purpose was to break when the DB schema changed. These tests ran in a few seconds, and I was allowed to check them into the main continuous integration suite. Now when one of these tests broke, we could disable it and file a bug against the reports component.
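
A minimal sketch of what one of these canary tests might look like, using JUnit and plain JDBC. The connection string, credentials, table names and the captured query are all invented for illustration; in the real project the SQL string came straight out of the reports server's trace log.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    import org.junit.Test;

    // Schema-drift canary for one report. It does not check report output;
    // it only fails when the captured query no longer runs against the schema.
    public class MonthlySalesReportSchemaTest {

        // Captured verbatim from the reports server's SQL trace log
        // (the query and column names here are invented for illustration).
        private static final String CAPTURED_SQL =
            "SELECT o.order_id, o.order_date, c.customer_name, o.total_amount "
          + "FROM orders o JOIN customers c ON c.customer_id = o.customer_id "
          + "WHERE o.order_date >= ? AND o.order_date < ?";

        @Test
        public void capturedReportQueryStillMatchesSchema() throws Exception {
            // Connection details are placeholders; ours came from the CI environment.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://db-host/appdb", "ci_user", "ci_password");
                 PreparedStatement stmt = conn.prepareStatement(CAPTURED_SQL)) {
                stmt.setDate(1, java.sql.Date.valueOf("2011-01-01"));
                stmt.setDate(2, java.sql.Date.valueOf("2011-02-01"));
                try (ResultSet rs = stmt.executeQuery()) {
                    // A renamed table or column makes the driver throw here,
                    // which fails the test and flags the schema drift.
                    rs.getMetaData();
                }
            }
        }
    }

Because the test does nothing but execute the captured query, it stays fast and has no assertions to maintain; any schema change that breaks the query surfaces as a driver exception.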

The failed tests notified the Report Analyst about upcoming work much earlier in the cycle, so he was able to schedule his time much better. On the testing side, I wrote a script to deploy the reports server with a single shell command. With the tedious and error-prone setup eliminated, testers were happy to do exploratory testing as soon as new reports were checked in. When the bug was fixed, we replaced the SQL string in the JUnit test and re-enabled it.
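
When a canary broke, the test was parked rather than deleted. Roughly, the JUnit 4 workflow might look like the sketch below, with a made-up ticket id standing in for the real bug reference.

    import org.junit.Ignore;
    import org.junit.Test;

    public class MonthlySalesReportSchemaTest {

        // Parked until the report template catches up with the schema change.
        // REPORTS-142 is an illustrative ticket id, not one from the project.
        @Ignore("REPORTS-142: column renamed; report template pending update")
        @Test
        public void capturedReportQueryStillMatchesSchema() throws Exception {
            // Same body as the canary sketch above. Once the fixed report is
            // checked in, the SQL constant is refreshed from the trace log
            // and the @Ignore annotation is removed.
        }
    }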

To conclude, while it was too expensive to automate report checking completely, applying the right kind of automation made the process a lot more manageable and predictable, and much less stressful for everyone.
