
It's easy to write code really fast if it doesn't have to work. All too often, untested code doesn't work. Testing our code gives us confidence that it will work when called upon. The output of a test run gives us feedback, during development, on how we're progressing. It also lets us know if a change has caused a regression in other functionality. Testing also gives us the information to decide whether we're ready to release.

In order to meet these goals, we need several types of testing:

 * Programmer Tests, which tell us that our code does what we intended, and
 * Customer Tests, which tell us that our code does what the customer wants.

These tests operate at different levels:

 * UnitTesting, which tells us that our methods, classes, functions, and subroutines work as we intended.
 * Integration Testing, which tells us that our code plays well with other code, subsystems, or systems.
 * System Testing or AcceptanceTesting, which tells us that the system works as a whole.

You can add more levels than this, but if you have fewer, you're probably missing some valuable information about the system you're building.
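
To make the first of these levels concrete, here is a minimal programmer test written with Python's built-in unittest module. The apply_discount function it exercises is a hypothetical stand-in for your own production code, not part of any real library.

{{{#!python
import unittest

def apply_discount(price, percent):
    """Hypothetical production code: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    """A programmer test: verifies the code does what we intended."""

    def test_ten_percent_off(self):
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_rejects_impossible_discount(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
}}}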

To get the benefits of testing, you must, of course, run the tests. That seems obvious when you think about it, but I've been amazed at how often it's neglected. Why?

One reason is that it's too much work to run the tests. If your AcceptanceTesting is based on a person sitting at a keyboard and manually checking the system, you will never be able to afford to run these tests frequently. While manual testing remains useful for exploring the system for things that can break, it's important to AutomateTasks so the checks can be repeated without much effort or concentration.
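
As a sketch of what such automation can look like, the test below replaces a person loading a page by hand and eyeballing the result. It uses only the Python standard library; the URL and the expected text are placeholder assumptions, not part of any particular application.

{{{#!python
import unittest
from urllib.request import urlopen

class LoginPageAcceptanceTest(unittest.TestCase):
    """An automated stand-in for a manual check of the login page.
    The URL and expected text are placeholders for your application."""

    def test_login_page_is_reachable(self):
        with urlopen("http://localhost:8080/login") as response:
            self.assertEqual(response.status, 200)
            body = response.read().decode("utf-8")
        self.assertIn("Log in", body)

if __name__ == "__main__":
    unittest.main()
}}}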

Another reason is that the tests are too slow. I write virtually all of my code using TestDrivenDevelopment with UnitTesting and I expect my suite of unit tests to run in just a few seconds. Anything longer really throws off the rhythm of my development. Integration tests, especially those going against a database, are more difficult to make really fast. I've gotten by, in some cases, by just disciplining myself to run them prior to checking in changes or after making schema changes, but that's an error-prone practice. It's better to have a ContinuousIntegration machine that runs these slower tests after every checkin. The sooner you notice a problem, the easier it is to fix.
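
One way to keep the fast suite fast while still running the slow tests after every checkin is to gate the slow ones behind a flag that only the ContinuousIntegration machine sets. Here is a minimal sketch in Python's unittest, assuming a hypothetical RUN_SLOW_TESTS environment variable:

{{{#!python
import os
import sqlite3
import unittest

# Hypothetical flag: the ContinuousIntegration machine sets RUN_SLOW_TESTS=1;
# developers running the suite locally leave it unset and skip the slow tests.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

class FastUnitTest(unittest.TestCase):
    """Runs in milliseconds, so it's cheap to run after every change."""

    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)

@unittest.skipUnless(RUN_SLOW, "slow integration test; run on the CI machine")
class DatabaseIntegrationTest(unittest.TestCase):
    """Exercises a database round trip (sqlite in memory, for the sketch)."""

    def test_round_trip(self):
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (x INTEGER)")
        conn.execute("INSERT INTO t VALUES (42)")
        self.assertEqual(conn.execute("SELECT x FROM t").fetchone()[0], 42)

if __name__ == "__main__":
    unittest.main()
}}}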


Some other things that look interesting:

 * [http://groups.yahoo.com/group/testdrivendevelopment/message/17719 Re: [TDD] Re: Automated acceptance Tests for a Web application] by Cory Foy -- using FitNesse
