I intended to talk about testing a lot more on this blog when I started out, before I got thoroughly sidetracked by the coolness of Perl 6. However, seeing another furor about testing on the various programming sites last week, and having had varied experiences with it in my own work over the same period, made me think it was time to revisit the topic.
First the good. A lot of the Perl 6 work I've been doing is perfectly suited to TDD. The tests are easy to write and nicely concise, with the median test just a single line long, and the longest maybe six lines. The tests run pretty quickly, even as slow as Rakudo is today. They provide direct and useful feedback about the code, and they let me make larger changes with the confidence that the tests will find any problems I create along the way.
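To give a feel for the shape of these tests: my real ones are in Perl 6, but here is a rough Python analogue, with a toy `vec_add` function standing in for the library code under test (both names are invented for illustration):

```python
def vec_add(a, b):
    # Toy stand-in for the kind of small library function under test.
    return tuple(x + y for x, y in zip(a, b))

# Each test is essentially one line: call the function, compare the result.
assert vec_add((1, 2), (3, 4)) == (4, 6)
assert vec_add((1, 2), (0, 0)) == (1, 2)
```

When the tests are this short, writing them first (or at least alongside the code) costs almost nothing.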
I guess even here I wouldn't be following the rigorous ways of TDD. Typically I code a little bit of the library first, to get a feel for where it is going, then write tests for what I have done, plus more tests that occur to me. Then I make those tests pass, rinse, and repeat. Sometimes the code leads the tests, sometimes the other way around.
But as I said, I find this really effective for this sort of work.
My problem with advocates of TDD, then, is that a lot of them seem to imagine that this sort of work is the only sort of work. But it isn't! Testing that two vectors are approximately equal (within a tolerance) is trivial. Testing that two B-reps are approximately equal (within a tolerance) is monstrously hard. Seriously, I've been doing professional work with B-reps for fifteen years now, and I have no idea how I would practically go about such a thing. (If I had code to fill the B-rep with cubes of varying sizes, you could then compare the cubes, checking that all the cube vertices of one B-rep were inside the cubes of the other, and that all the voids were likewise empty. But that's a pair of complicated O(N^3) algorithms, and it still wouldn't handle a vast horde of common cases, like NMT B-reps and open shells.)
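To make the cube idea a bit more concrete, here is a crude Python sketch over a uniform grid. It assumes you already had a reliable point-classification routine for each solid (the `inside_a`/`inside_b` predicates, which are hypothetical); it uses fixed-size cells rather than varying ones, and it ignores tolerance entirely. Note that this loop is the *easy* O(N^3) part -- building a trustworthy inside/outside test for an arbitrary B-rep is where the real work hides:

```python
def voxel_compare(inside_a, inside_b, bounds, n=32):
    """Compare two solids by sampling an n*n*n grid of cell centers over
    `bounds` = ((xmin, ymin, zmin), (xmax, ymax, zmax)). Returns True if
    every sample point is classified the same way by both solids."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    for i in range(n):
        x = x0 + (x1 - x0) * (i + 0.5) / n
        for j in range(n):
            y = y0 + (y1 - y0) * (j + 0.5) / n
            for k in range(n):
                z = z0 + (z1 - z0) * (k + 0.5) / n
                # Any disagreement at a sample point means the solids differ.
                if inside_a((x, y, z)) != inside_b((x, y, z)):
                    return False
    return True

# Sanity check with analytic solids standing in for B-reps:
unit_sphere = lambda p: p[0]**2 + p[1]**2 + p[2]**2 <= 1.0
half_space = lambda p: p[0] <= 0.0
assert voxel_compare(unit_sphere, unit_sphere, ((-1, -1, -1), (1, 1, 1)), n=8)
assert not voxel_compare(unit_sphere, half_space, ((-1, -1, -1), (1, 1, 1)), n=8)
```

Even this toy version only works because the analytic predicates are exact; with real B-reps, open shells and non-manifold cases break the very notion of "inside" that it relies on.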
To put this in concrete terms: last week I was working on a bug involving the orientation of B-rep faces on a simple box model. So I wrote up a test to check them. It took me several hours to write approximately 150 lines of code for the test, and it was fairly hard work. At the end of that, the test ran -- and confirmed that the model was correct as far as it could tell. (Admittedly, the test would have been easier to write in Perl 6 (with a well-written B-rep library) rather than C++; a lot of common B-rep operations are terribly verbose in C++.)
The end result is that I write a good many unit tests, but they test around the edges of things. So you can have unit tests for an assembly structure, but the assembly components are straight lines rather than B-reps. The tests can be quite helpful, but they are hardly conclusive proof that your code is working.
A third example is a project I considered when I was dreaming of buying an iPhone. Wouldn't it be great, I thought, to have a little button accordion app, so you could fire up an accordion at any moment? The program would obviously consist of two main units: the user interface that allows you to press the buttons, and the music-making engine.
How would you go about unit testing something like that? One of the major components requires someone pressing buttons on the iPhone to test it properly. The other component generates audio. You could mock both components, but that would only let you test the interface between the two, which is the trivial part of the program.
And if the app were to be any good, both components would need to be finely tuned to get the proper feel. It seems to me this would require hours of sitting there playing the thing, and unit testing would help little, if at all...