I write virtually all of my code using TestDrivenDevelopment and UnitTesting. I believe strongly in AcceptanceTesting, but have had limited success in getting organizations to buy in. (Instead, they seem to fall back on manual testing by a bunch of QA people.) It's easy to write code really fast if it doesn't have to work. All too often, untested code ''doesn't'' work. Testing our code gives us confidence that it ''will'' work when called upon. The output of a test run gives us feedback, during development, on how we're progressing. It also lets us know if a change has caused a regression in other functionality. Testing also gives us the information to decide whether we're ready to release.

In order to meet these goals, we need several types of testing:
 * Programmer Tests, which tell us that our code does what we intended, and
 * Customer Tests, which tell us that our code does what the customer wants.
These tests operate at different levels:
 * UnitTesting, which tells us that our methods, classes, functions, and subroutines work as we intended.
 * Integration Testing, which tells us that our code plays well with other code, subsystems, or systems.
 * System Testing or AcceptanceTesting, which tells us that the system works as a whole.
You can add more levels than this, but if you have fewer, you're probably missing some valuable information about the system you're building.
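
To make the distinction concrete, here is a minimal sketch of a programmer test at the unit level, written with JUnit 4. The Money class is hypothetical, defined inline only to keep the sketch self-contained; the point is the shape of the test, not the class under test.

{{{
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A minimal programmer test at the unit level: it checks that one small
// piece of code does what we intended, and nothing more.
public class MoneyTest {

    // Hypothetical class under test, inlined so the sketch compiles on its own.
    static class Money {
        final int amount;
        Money(int amount) { this.amount = amount; }
        Money add(Money other) { return new Money(amount + other.amount); }
        @Override public boolean equals(Object o) {
            return o instanceof Money && ((Money) o).amount == amount;
        }
        @Override public int hashCode() { return amount; }
    }

    @Test
    public void addingTwoAmountsSumsThem() {
        assertEquals(new Money(12), new Money(5).add(new Money(7)));
    }

    @Test
    public void addingZeroLeavesTheAmountUnchanged() {
        assertEquals(new Money(5), new Money(5).add(new Money(0)));
    }
}
}}}

A customer test covers the same ground from the outside: instead of calling Money directly, it exercises the feature the way the customer described it, typically through a tool like FitNesse (see the links below).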

To get the benefits of testing, you must, of course, run the tests. This seems obvious when you think about it, but I've been amazed at how often it's neglected. Why?

One reason is that it's too much work to run the tests. If your AcceptanceTesting is based on a person sitting at a keyboard and manually checking the system, you will never be able to afford to run those tests frequently. While you may find manual tests useful for exploring the system for things that can break, it's important to AutomateTasks so the tests can be repeated without much effort or concentration.
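
As a sketch of what such automation might look like, here is a hypothetical acceptance check written with HttpUnit (one of the tools linked below). The URL, form name, field names, and expected text are all invented for illustration; the point is that a check a person would otherwise perform at the keyboard becomes repeatable on demand.

{{{
import static org.junit.Assert.assertTrue;

import org.junit.Test;

import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebResponse;

// A hypothetical automated acceptance check: log in through the real web
// interface and verify that the welcome page appears. All names are invented.
public class LoginAcceptanceTest {

    @Test
    public void validUserReachesWelcomePage() throws Exception {
        WebConversation wc = new WebConversation();
        WebResponse loginPage = wc.getResponse("http://localhost:8080/app/login");

        // Fill in and submit the login form, just as a tester would by hand.
        WebForm form = loginPage.getFormWithName("login");
        form.setParameter("username", "alice");
        form.setParameter("password", "secret");
        WebResponse welcome = form.submit();

        // The same check a manual tester would make, now repeatable for free.
        assertTrue(welcome.getText().indexOf("Welcome, alice") >= 0);
    }
}
}}}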

Another reason is that the tests are too slow. Since I write virtually all of my code using TestDrivenDevelopment with UnitTesting, I expect my suite of unit tests to run in just a few seconds. Anything longer really throws off the rhythm of my development. Integration tests, especially those going against a database, are more difficult to make really fast. I've gotten by, in some cases, by disciplining myself to run them prior to checking in changes or after making schema changes, but that's an error-prone practice. It's better to have a ContinuousIntegration machine that runs these slower tests after every checkin. The sooner you notice a problem, the easier it is to fix.
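
One way to set that up, assuming JUnit 4, is to gather the slower tests into a suite of their own, so the local build runs only the fast unit tests while the ContinuousIntegration machine runs the slow suite on every checkin. The two test classes below are hypothetical stubs standing in for real database-backed tests.

{{{
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// A sketch of a suite that collects the slow, database-backed tests so the
// ContinuousIntegration machine can run them on every checkin. The member
// classes are empty stand-ins; real ones would talk to a real database.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    SlowIntegrationSuite.OrderDaoIntegrationTest.class,
    SlowIntegrationSuite.CustomerDaoIntegrationTest.class
})
public class SlowIntegrationSuite {

    public static class OrderDaoIntegrationTest {
        @Test
        public void ordersRoundTripThroughTheDatabase() {
            assertTrue(true); // stand-in for a real save-and-reload check
        }
    }

    public static class CustomerDaoIntegrationTest {
        @Test
        public void customersRoundTripThroughTheDatabase() {
            assertTrue(true); // stand-in for a real save-and-reload check
        }
    }
}
}}}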

----

Some other things that look interesting:

 * HtmlTestingUsingXpath
 * [http://idiacomputing.com/moin/PrimaryKeyGeneration#head-d7949da7a424b68c0ae6279e11b7e8718331321e DbUnit and autonumbered primary keys]
 * [http://www.myloadtest.com/free-packet-sniffer/ WireShark] and other network testing recommendations.
 * [http://sourceforge.net/projects/jsptest JspTest] is a new jUnit extension for testing JSP pages outside the container. It's brand new (April 2006) and doesn't have any documentation yet.
 * [http://patterntesting.sourceforge.net/whatis.html Pattern Testing] allows you to check coding/design standards across the project. This looks very interesting, but I haven't tried it. (thanks to Jeff Waltzer)
 * [http://www.jdemo.de/ JDemo Framework] mentioned by Ilja Preuss on T''''''estFirstUserInterfaces @ yahoogroups.com
 * [http://developer.spikesource.com/wiki/index.php/Projects:TestGen4WebDocs TestGen4WebDocs] is a F''''''ireFox plugin for recording web interaction to be played back later as a test. It reportedly works with [http://ftp.mozilla.org/pub/mozilla.org/firefox/releases/1.5b2/ Firefox 1.5 Beta 2] but not Firefox 1.0.7 or Firefox 1.5 RC 1.
 * [http://selenium.thoughtworks.com/ Selenium] and [https://addons.mozilla.org/extensions/moreinfo.php?id=1157 Selenium Recorder] may have eclipsed T''''''estGen4WebDocs. See also:
  * [http://wiki.openqa.org/display/SEL Selenium Confluence]
  * apparently [http://openqa.org/selenium "new" website]
  * [http://redhanded.hobix.com/inspect/theSoundsOfSeleniumTestingYourWeblickation.html The Sounds of Selenium Testing Your Weblickation]
  * [http://agiletesting.blogspot.com/2005/03/web-app-testing-with-python-part-2.html Selenium and Twisted]
 * ''Agile Security Testing of Web-Based Systems via HTTPUnit'' ([http://www.agile2005.org/RP4.pdf PDF]) describes, among other things, how to bypass HTML form field length limitations.

 * [http://groups.yahoo.com/group/testdrivendevelopment/message/17719 Re: Automated acceptance Tests for a Web application] by Cory Foy -- using FitNesse -- a tiny case study on the TDD list.
 * [http://groups.yahoo.com/group/testdrivendevelopment/message/17826 Web application testing using FitNesse with Selenium-rc] by Bob Runstein on the TDD list.
