Antigen is a tool to take an Ant build script, combine it with a GUI and wrap it up as an executable jar file. Its primary purpose is to create powerful graphical installers from Ant scripts.
Schema Unit Test (SUT) introduces a framework for testing XML Schema.
It includes a namespace and vocabulary for embedding test cases into sample XML documents, and a Java implementation using JUnit for testing a W3C Schema with embedded Schematron schema.
The basic idea of DDTUnit is to provide an XML description (XML Schema based) of test data and combine it with the simplicity of JUnit. The XML is only used to define data structures. All program flow is coded in plain old Java.
Kent Beck makes a post on Accountability in practice, in which he points to the Open Quality Dashboard: "I particularly like contrasting the graphs at the bottom of the Analysis tab between projects. I hope other software vendors will publish their quality data." (See also: [url=http://developers.slashdot.org/article.pl?sid=05/02/24/1549253&tid=156&tid=8]Slashdot discussion[/url])
Michael Hunter was a codeslinger, but learned that Slow And Steady Wins The Race: "Exploratory testing is a great way to get a handle on an area, and it can be a very effective technique on its own. However, if you treat it as advance scouting and take some time to use your findings to draw up an actual test plan, you can take your testing to the next level."
Phillip J. Eby discovers DocTest and is graced by a revelation: Stream-of-Consciousness Testing. "Ordinarily, I hesitate over every new test, trying to figure out how to translate from the result I want, to a test expressed in unittest-ese. With doctest, however, the testing just ‘disappeared’ from perception. It was like I was just writing down my thoughts while playing with the interactive interpreter, only it was even better than that, because the interpreter and my notes were a single, continuous stream, and my work was saved in a file where I could edit and re-run it at will." (See also Grig Georghiu’s articles Python unit testing part 2: the doctest module and Agile Documentation with doctest and epydoc)

Mike Roberts decides to do away with Web Forms in CruiseControl.NET, and explains why: "Web Forms are hard things to unit test. Basically you can’t. This is because of how closely tied all Page implementations are into the ASP.NET framework. To introduce testability you have to keep your code-behinds very thin, but once you’ve got a few controls on your page this is tricky. Also, any logic that you put in the .aspx file itself is even harder to test, and this includes any templates, grid setup or whatever."
Having added capacity for test coverage, Fred Grott wonders how else to improve J2MEUnit: "An example might be use ANT, ANT-Contrib, etc. features to determine test methods since we have no reflection in J2ME. Hmm, ANTLR possibly?"
Chris Webb quotes Jon Axon’s method for [url=http://spaces.msn.com/members/cwebbbi/Blog/cns!1pi7ETChsJ1un_2s41jm9Iyg!146.entry]MDX Automated Unit Testing[/url]: "The gist of it is that you produce a fixed baseline cube with correct behaviour, and then copy-and-paste this within the same database to create a version of the cube for refining; only the latter is subsequently altered."
Jonathan Cogley wonders whether the Ultimate Pair Programming Setup might not be so ultimate, and asks: "How do you pair? What is the best configuration you have found?"
(This is getting old, but an interesting blog entry nonetheless) BigCanOfTuna gripes against Refactoring, The Most Hated Word in I.T.: "It is a simple concept that most people do not understand, and because of this, it gets used in the most inappropriate ways."
PortletUnit is a JUnit-based Java unit testing framework for testing JSR-168 portlets. It is built on ServletUnit and Pluto. It provides a mock portlet container, just as ServletUnit provides a mock servlet container.
The doctest module searches for pieces of text that look like interactive Python sessions, and then executes those sessions to verify that they work exactly as shown. There are several common ways to use doctest:
* To check that a module’s docstrings are up-to-date by verifying that all interactive examples still work as documented.
* To perform regression testing by verifying that interactive examples from a test file or a test object work as expected.
* To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of "literate testing" or "executable documentation".
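A minimal illustration of the first use (the `average` function is hypothetical, not from the doctest documentation): the interactive examples live in the docstring, and `doctest.testmod()` re-runs them and reports any mismatch.

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers.

    >>> average([1, 2, 3, 4])
    2.5
    >>> average([10])
    10.0
    """
    # float() keeps the division exact even under Python 2's integer division
    return float(sum(values)) / len(values)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # silently passes; prints a diff for any failing example
```

Running the module directly produces no output when all examples pass; add `-v` on the command line (or `verbose=True`) to see each example being checked.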
This book describes how to apply the ICONIX Process (a minimal, use-case driven modeling process) in an agile software project. It’s full of practical advice for avoiding common "agile" pitfalls. Further, the book defines a core agile subset — so those of you who want to "get agile" need not spend years learning to do it. Instead, you can simply read this book and apply the core subset of techniques.
The book follows a real-life .NET/C# project from inception and UML modeling, to working code — through several iterations. You can then go on-line to compare the finished product with the initial set of use cases.
The book also introduces several extensions to the core ICONIX Process, including combining Test-Driven Development (TDD) with up-front design to maximize both approaches (with examples using Java and JUnit). And the book incorporates persona analysis to drive the project’s goals and reduce requirements churn.
Brian Marick has a book planned for the end of the year: Scripting for Testers. "Prodded by Bret Pettichord, I’ve finally committed to writing Scripting for Testers. The manuscript is due by the end of the year, to be published in Dave Thomas and Andy Hunt’s Pragmatic Bookshelf."
Mike Gunderloy comments on MSF Agile: Microsoft does agile, sort of. Doug Neumann has more to say about VSTS Team Build in Highlights from the Dec. CTP: "Team Build is the out-of-the-box automated build process that does much more than compile code. With Team Build, you get a reproducible build, your tests are run, your code is analyzed, and your completed work items are updated with a build number. Reporting leverages a data warehouse that underlies everything in Team Foundation to give you some great views of what’s happening on your development project."
Roy Osherove attempts an answer to Employing TDD and unit testing with Waterfall methodologies: "How do you enforce such Agile rules as ‘Write unit tests for your code’?"

Scott Hanselman posts [url=http://www.hanselman.com/blog/PermaLink,guid,d835178f-a649-45f5-907f-28ad1177d8d5.aspx]What Great .NET Developers Ought To Know (More .NET Interview Questions)[/url]: "Groking these questions may not make you a good or bad developer, but it WILL save you time when problems arise."
Marcus Ahnve tricks JUnitPerf into asserting Unit Testing Performance.
Fred Grott finds a Neat Wiki Plugin that lets you update your MediaWiki pages from inside Eclipse.
On the fun side, Paul Wilson relates two quotes from the waterfall, and K. Scott Allen identifies The 5 Stages Of Mocking: "When I first began reading about mock objects, I did not like what I was reading. In fact, I disliked the idea so much, I entered…"
When using agile methods, it is not uncommon for the contents of a release to change dramatically. This can create problems for up-front user interface design as it is created with the expectation that certain functionality will be present in the released system. As UI design is created across features using techniques such as scenarios, it is especially vulnerable to changes. A user interface design also assumes that functionality will be implemented in an order that makes navigating from one feature to another possible. However, if the customer’s priorities do not match the flow created by the UI designer then the design may have to change substantially.
Author: Owen Rogers
Posted: February 12, 2005
DejaGnu is a framework for testing other programs. Its purpose is to provide a single front end for all tests. Think of it as a custom library of Tcl procedures crafted to support writing a test harness. A test harness is the testing infrastructure created to support a specific program or tool. Each program can have multiple test suites, all supported by a single test harness. DejaGnu is written in Expect, which in turn uses Tcl (Tool Command Language).
On traditional projects, folks with Quality somewhere in their title (Quality Assurance, Quality Engineers, et al) perform Independent Verification and Validation (IV&V) activities to assess the quality of the system. Often these teams also review design artifacts. Sometimes they also have a hand in defining and/or enforcing the process by which the software is made.
Agile project teams generally reject the notion that they need an independent group to assess their work products or enforce their process. They value the information that testing provides and they value testing activities highly. Indeed, Extreme Programming (XP) teams value testing so much, they practice Test-Driven Development (TDD), writing and executing test code before writing the production code to pass the tests. However, even though agile teams value testing, they don’t always value testers. And they’re particularly allergic to the auditing or policing aspects of heavyweight, formal QA.
So how can testers make themselves useful on a team that does not see much use in traditional, formal QA methodologies? Here’s what I’ve been doing.
Author: Elisabeth Hendrickson
Published: January 19, 2005
Agile teams accept change as inevitable and tailor their processes accordingly. Short iterations mean that stakeholders can see steady progress and provide frequent feedback. Continuous integration means that if one part of the system isn’t playing nicely with others, the team will find out almost immediately. Merciless refactoring, where programmers improve the code internally without changing its external behavior, prevents code from becoming fragile over time. Extensive automated unit tests ensure that fixing one bug won’t introduce two more.
Author: Elisabeth Hendrickson
Presented: Pacific Northwest Software Quality Conference, October 2004
CLUnit is a tool that allows Common Lisp users to define and run unit tests. Although similar in purpose to the venerable RT, we believe that CLUnit is not only more powerful in the types of tests that can be specified, but also much simpler to use. In addition, CLUnit was designed to be used in an environment characterized by frequent interactive unit test runs. For this reason, I believe that it is better suited for interactive use than RT and other alternatives.
CLUnit is written in ANSI Common Lisp. It has been tested (so far) under Franz Allegro, Xanalys LispWorks, CMUCL, CLISP, and Corman Lisp. CLUnit is available for general use under the LGPL. It is provided as part of CLOCC.
Symbian Test Unit is a unit testing framework for the Symbian OS platform. It is used by developers to implement unit tests in C on the Symbian platform. Symbian Test Unit is Open Source software released under the Lesser General Public License.
Symbian Test Unit Benefits:
* Syntax is very close to that of recent JUnit versions
* Expandability and flexibility of settings
* Documented code
* Documentation is in CppDoc (Doxygen) format
* Doesn’t need installation
BugHuntress is a pioneering tool for automated testing of Palm applications, which requires no additional hardware.
* Uses black box testing technology
* Imitates QA engineer’s testing steps
* Tests on real devices and their emulators
* Is well suited to Palm regression testing
* Lists, compares, and exports logs to text files
* Has expandable capabilities
* Requires minimum system resources
The Brew Test Unit is a unit testing framework for the Brew platform. It is used by developers to implement unit tests in C under the Brew platform. Brew Test Unit is Open Source Software released under Lesser General Public License.
* Syntax is very close to that of recent JUnit versions
* Expandability and flexibility of settings
* Documented code
* Documentation is in CppDoc (Doxygen) format
Enhance your automated testing solution: VisualCoder v1.0 allows non-developers, QA staff, and non-techies to author tests visually, with drag-and-drop simplicity. Reuse your NUnit tests visually to create new test programs, adding another dimension to your testing.
VisualCoder v1.0 introduces "V" programming that is "real" visual and simple. A built-in class and member function generator works behind the curtain to extract any reusable block of code in your (test) program and make it available for reuse in the UI, further simplifying the layout and structure of an otherwise complex test program.

XML results streaming also guarantees scalability in executing and storing the results of long-running, results-heavy test programs. Your team will also find the rich, drill-down generated reports to be an indispensable tool in after-run analysis. Once you lay your hands on this tool, you’ll never do automated testing without it again.
Don’t worry if you’re not an NUnit enthusiast: support for your environment and test framework is just a plugin away. The NUnit plugin ships with the tool to serve as a pattern for writing a test plugin, and we can also help out if you stumble in the process.
Highly recommended! Try it out now and reap the benefits of "real" visual programming.
It’s no exaggeration to say that Web services are revolutionizing application-to-application communication. Web services are already being used extensively in corporate intranet environments and are making their way into commercial use, too. But because Web services are relatively new, techniques to test Web services programmatically are not widely known. In this column I’ll show you a technique to quickly write test automation that verifies Web service functionality.
Author: James McCaffrey
Published: MSDN Magazine, March 2005
We write software in order to serve some specific set of needs for some specific set of people. When I’m trying to understand what software to write, I apply this principle in the form of a few questions: Whose needs will the software serve? What needs will trigger those people to interact with the software? What roles will the software play in satisfying those needs?
Author: Dale H. Emery
Published: February 22, 2005
Michael Hunter is Grading On A Curve: "A topic I’ve been pondering of late is grading test cases. If I have two test cases that appear to do exactly the same thing, how do I decide which one to keep and which one to turn off? If I am wading through a large number of failing or unstable test cases that I inherited from someone else, how do I decide which ones are worth spending time on and which should just be thrown away?"
Johan Danforth is asking who is testing the tester?: "I’ve been using JUnit and NUnit in smaller and larger projects, but I’ve never used it in the way you often see it described in books written by the TDD-gurus and the way it is described in this book. I can’t help it, but it feels kind of stupid to create a test as first thing you do."
Steve Hayes points out the issue with forgetting to refactor: "I looked at what I had done, and it worked, and the interfaces were ok (thanks to TDD for that), but the internals of each class just weren’t where they should be."
MSF for Agile Software Development has been released.

Miguel Jimenez sets up .NET Development Trees for better team performance: "I’ve been reading Mike Roberts’ series of posts about How to Setup a .NET Development Tree and in the last project I worked we created a very very very similar tree to host the whole project."
Ranjan D. Sakalley has an idea for NUnit and automatic method input parameter verification: "I have tried to create an attribute to add to the existing ones in NUnit, which can be placed over a class. The NUnit core identifies this attribute, and redirects it to a custom test case builder which is added to create and run test cases for boundary values for value types, and nulls for reference types."
Adrian Sutton shows How To Simulate Key Events In Swing JUnit Tests.
Narayanaswamy Balasubramaniam has been Using JUnit in Web Application Server and presents a quick graphical setup procedure.
Adrian Sutton tries Allowing Tests To Fail: "How do we preserve the ability of unit tests to ensure no regressions while still being able to use them as a to do list?" He also experiences a little annoyance trying to set up Continuous Integration.
Carlos Villela tells A little horror story on Legacy Code: "You try and follow Michael Feathers’ brilliant Working Effectively With Legacy Code, and manage to isolate a tiny bit of functionality, test it and make sure you can fix the behaviour properly. That gets you trapped in a maze of circular dependencies with some 30 other packages."
Frank Sommers spends an hour setting up a Jini lookup service and wonders whether [url=http://www.artima.com/forums/flat.jsp?forum=106&thread=94371]complete test coverage is desirable, or even attainable?[/url] "Were our tests, or the Jini JSK’s tests, flawed? How could we account for environmental exigencies in those tests? How deep should we aim for in our test coverage? Should we strive to cover all the permutations of code and its environment in our test suites? Is such complete test coverage of code even attainable?"
In the context of Evolutionary Design, Martin Fowler explains Abundant Mutation: "Evolutionary Design expects the team to modify the design as the project proceeds; both to cope with requirements changes and to take advantage of learning. You can think of this as adding mutations to the design that react to these changes. This mutation is a good and necessary thing, but like many good things you can get too much of it."
Sound advice from John D. Mitchell: Use less milk: "My little girl has this habit of pouring way too much milk into her bowl of cereal. Then, she whines and complains when we tell her to drink up the extra milk after the cereal is gone so it doesn’t go to waste. Yesterday, she got quite snippy when I dared to suggest that she try pouring less milk into the bowl. Gee, she sounds like a lot of managers and developers of software."
Robert Hurlbut will make a presentation on TDD at the Vermont .NET User Group in April.
Release, to my mind, is the delivery of business value to the client – the process of taking your wonderful code, packaging it up, and delivering it in a nice little bundle to your client. In this diagram, a release is simply something that happens when you have no stories left. Not only does this jar with the agile/XP concept of “release early, release often”, but it radically oversimplifies the kind of checks that might need to be done prior to a release being ready (here simply called “System Testing”).
Author: Sam Newman
Blogged: February 14, 2005
The architecture is simple. There is one class, yet to be written but trivial, that opens an assembly and gets all the types from it; this is essentially a one-liner. The class in the middle is responsible for iterating through the types, finding all attributed test methods, and creating a sorted mapping from application methods to the test methods that test them. And finally, there is an output class, whose responsibility it is to produce the final, consumable output.
[Currently in beta]
A Test List tells a story about the behavior expected from the module/class under test. It is composed of one-liners, each line describing what a specific unit test tries to achieve. [..]
I find it very valuable to have such a Test List for every Python module that I write, especially if the list is easy to generate from the unit tests that I write. I will show later in this post how the combination of doctest and epydoc makes it trivial to achieve this goal.
Author: Grig Georghiu
Published: February 16, 2005
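One way such a Test List can be generated, sketched here with hypothetical names (the article's own mechanism uses doctest and epydoc): keep the one-liner as the first line of each test method's docstring and collect those lines by introspection.

```python
import inspect
import unittest

class FeedTests(unittest.TestCase):
    """Hypothetical unit tests for a blog-feed module."""

    def test_new_entry_appears_first(self):
        """A newly added entry appears at the top of the feed."""
        self.assertTrue(True)  # placeholder body

    def test_empty_feed_renders(self):
        """An empty feed renders without errors."""
        self.assertTrue(True)  # placeholder body

def build_test_list(test_class):
    """Build the Test List: one line per unit test, taken from the first
    line of each test method's docstring."""
    lines = []
    for name, method in inspect.getmembers(test_class, inspect.isfunction):
        if name.startswith("test_") and method.__doc__:
            lines.append(method.__doc__.strip().splitlines()[0])
    return lines
```

Printing `"\n".join(build_test_list(FeedTests))` then reads as the story the tests tell about the module, one behavior per line.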