Sunday, March 12, 2006

Test Driven Heresy

In my view, some of Test Driven Development's greatest benefits are:

  • Guards developers against breaking each other's code

  • Protects a project's quality from going backwards as bugs accumulate with added features.

  • Provides a series of short term goals (tests that pass, code that runs) for developers to accomplish, fostering a feeling of progress.

  • Enables components to be created and tested in isolation from the complexities of the greater system. At integration time, bugs are greatly reduced, and those that remain are most likely integration issues rather than faults in either the system or the new component.

  • Enforces disciplines such as minimizing dependencies. Your new component can't be dependent on other components in the system if they don't exist in your test environment.

  • Provides confidence that tested components work, eliminating them from suspicion when the system fails, which simplifies fault finding.


These benefits are promoted as more than paying for the costs of TDD, chiefly the significantly greater volume of code to create, maintain, version control and so on. Sometimes creating the test can be much more complex than creating the component it tests!

But hold on, what if your environment means that the benefits don't pay for the costs? What if you are a sole, experienced developer working on hardware control code? You don't have the team issues that TDD helps to solve, reducing your benefit, and TDD is much harder to do with external devices that are outside your control, increasing the cost. You may be writing code much as you have for many years, code simple enough that it rarely has errors that are not immediately apparent. Here you have a choice: religiously follow TDD, assuming the gurus know best, or do your own cost/benefit analysis and (shock) choose to just write code and test informally by running the system.

So here's my more liberal TDD for such situations:

  • Tests don't have to pass or fail to be of benefit. Sometimes evaluating the output requires heuristics way beyond the code you are testing. Just use a “test” to exercise the code and output values as it progresses, then manually inspect the output to determine whether it is doing the right thing. You still gain the TDD benefits of building and running code in isolation from the system, the satisfaction of seeing it work, and the discipline of designing the code from a user's point of view, and many errors will be apparent anyway, especially those that throw exceptions. Note that tools such as JUnit and NUnit don't encourage this kind of use, believing that a test should only output information on failure. It is still possible, however.

  • For a given major project called BlueWhale, set up two additional projects called BlueWhale.Tests and BlueWhale.RnD. BlueWhale.RnD is your playground for trying new things. Here tests are not required to pass or fail. You can begin creating a new class just above the test itself, for convenience, without the hassle of making new files or changing the production code. It might have dependencies that aren't dependable. It doesn't matter because you might change tack and blow it away anyhow. When a test is working, and the test subject becomes worthy of inclusion in the system, graduate the test by moving it into BlueWhale.Tests. Here it should pass or fail, and follow all the usual TDD requirements. BlueWhale.RnD is also a place to demote code that was in the production system, has been replaced, but may still be of value in the future.

  • Apply Pareto's 80/20 principle to test coverage. Some things are too obvious to write a test for. Your time would be better spent elsewhere (such as writing more tests for more critical areas), or single-stepping through the code in the debugger, inspecting variables as it goes, or simply printing the output and scrutinizing it. Testing high level code inherently tests the low level code it uses (though some errors will escape this), so perhaps aim to cover most code at the high level and drill down with more tests over time.

  • You can cut corners and add dependencies in tests that you wouldn't in production code. Runtime performance is likely not an issue. You can use third party libraries, alternative scripting languages (Python, Ruby) and external tools (GNU Diff). The worst that can happen is that a change causes these tests to go on passing when the code under test fails, but the more likely scenario is that they will fail due to their dependencies. So fix them. No harm done.

  • You still gain the benefits of designing the code to be testable, writing from a user's point of view, developing without the hassles of the larger system and so on.
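As a sketch of the exploratory style described above, here is what a non-pass/fail “test” might look like in Python. The names (ramp_setpoint, explore_ramp) and the hardware-flavoured scenario are hypothetical, invented for illustration; the point is that the test exercises the code in isolation and prints its progress for manual inspection rather than asserting:

```python
# An exploratory "test": it exercises the code and outputs values as it
# progresses, leaving a human to judge whether the behaviour looks right.
# The function and scenario below are illustrative, not from a real system.

def ramp_setpoint(current, target, max_step):
    """Move `current` toward `target` by at most `max_step` per call."""
    delta = target - current
    if abs(delta) <= max_step:
        return target
    return current + max_step if delta > 0 else current - max_step

def explore_ramp():
    # Not pass/fail: just run the component away from the larger system
    # and print intermediate values so the output can be inspected.
    value = 0.0
    for tick in range(6):
        value = ramp_setpoint(value, 5.0, 1.5)
        print(f"tick {tick}: setpoint = {value}")

if __name__ == "__main__":
    explore_ramp()
```

Once the printed progression looks right, the prints can be replaced with assertions and the test graduated into the Tests project, as described above.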

Confident Programming

I propose that a core goal in the setup of a development environment, including the choice of language, in-house code libraries, version control, build process, requirements management, third party components and so on, is to enable developers to write application code confidently.

To write code confidently requires:

  • The ability to hold a complete concept of the problem at hand in the brain at once.

  • Definition and separation of the problem from other problems.

  • Definition and separation of the problem from its dependencies.

  • The ability to experiment and undo. Version control to allow experimentation without affecting other developers or production code.

  • Coding conventions – to avoid having to think about how to name or format things.

  • In-house code libraries – basically a collection of solved problems. These must be robust, reliable, organised, well maintained and easy to add to. A graduate developer once asked me “how can you write reliable code on top of unreliable libraries?”. Good question – if it's possible, it's probably not worth the effort. My answer was that you build libraries that are reliable from the ground up.


Without these things, an astute developer can be overwhelmed by all the possible side effects and implications of what they are doing. A less attentive developer will overlook these things, leaving them to be discovered in testing, when major changes are required, or worse, in the hands of customers. Testing becomes more important as a developer is less able to know, at coding time, that what they are doing is correct, or will remain correct as the code it is built on shifts over time. Application code becomes more platform specific, less agile, and includes more low level detail.

While much has been said about defensively finding and removing bugs from software, there is disproportionately little discussion of how to create an environment where bugs are not created in the first place.