Forests, trees, testing and design


31 March 2006

During the early banter-and-have-fun part of my recent .NET Rocks interview I dissed TDD (Test-Driven Development, not the text translation service for the deaf). Of course this wasn’t the focus of the real interview, and prior to sitting down behind the mic I hadn’t thought about TDD at all, so we were all just having some fun. As such, my comments were flippant and short – meaning I was having fun, and that I surely didn't have the time or opportunity to really express my thoughts on design or testing. That would be an entirely different show all by itself.

Nor do I have time just now to write a long discussion of my thoughts on design or testing. But I did respond to some of the comments on Jeffrey Palermo’s blog – and here are some more thoughts:

As a testing methodology, TDD (as I understand it) is totally inadequate. I was unable to express this point in the interview due to the format, but the idea that you’d have developers write your QA unit tests is unrealistic. And it was pointed out on Palermo’s blog that TDD isn’t about writing comprehensive tests anyway, but rather about writing a single test for a method – which is exceedingly limited from a testing perspective. Of course you could argue that since the vast majority of applications have no tests at all, it is a huge win if TDD can get companies to write even one test per method. That is infinitely better than the status quo (that whole division-by-zero thing), and so TDD is hugely beneficial regardless of whether it is actually the best approach or not. I’d go with this: some testing is better than no testing. But one test per method is pretty lame and certainly doesn’t qualify as real testing.

But the real key is that developers are terrible testers (with perhaps a few exceptions). This is because developers test to make sure something works. Actual testers, on the other hand, test to find ways something doesn't work. Testers focus on the edge cases, the exceptional cases, the scenarios that a developer (typically?) ignores. Certainly there's no way to provide this level of comprehensive testing in a single test against a method – which really supports another person's comment on Palermo's blog: that TDD isn't about testing.

Having spent a few months being employed specifically to do that type of QA unit testing, I can tell you that I suck at it. I just don't think in that "negative" manner, and it is a serious effort for me to methodically and analytically work my way through every possible permutation in which a method can be used and misused. I’ve observed that this is true for virtually all developers. As a group, we tend to be optimists – testing only to make sure things work, not to find out if they fail.

But as someone on Palermo's blog pointed out, TDD is mis-named. TDD isn't about testing (thankfully), nor apparently is it about development. It is, he said, a design methodology.

That's fine, but I don't buy into TDD as a "design methodology" either. You can't "design" a system when you are focused at the method level. There’s that whole forest-and-trees thing... Of course I am not a TDD expert – by a long shot – so for all I know there’s some complementary part of TDD that looks at the forest level. But frankly I don’t care a whole lot, because I use a modified CRC (class, responsibility and collaboration) approach. This approach is highly regarded within the agile/XP community, and is to my mind the best way to do OO design that anyone has come up with to this point. (David West's Object Thinking book really illustrates how CRC works in an agile setting.) The CRC approach I use sees the forest, then carefully focuses in on the trees, pruning back the brush as needed.

Now I could see where TDD (as I understand it) would be complementary to CRC, in that it could be used to write the prove-it-works tests for the objects' methods based on the OO model from CRC – but I’m speculating that this isn’t what the TDD people are after. Nor do I see much value at that point – because the design is done, so whether you write the test before or immediately after doesn't really matter; the test isn't driving the design.

But to my mind writing the tests is a must. In the interview I mentioned that I (like all developers) have always written little test harnesses for my code. What I didn’t get to say (due to the banter format) was that for years now I’ve been writing those tests in NUnit, so they provide a form of continual regression testing. In talking to Jim Newkirk about TDD and his views on this, he said that this is exactly what he does and recommends doing.
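To make the developer-test vs. tester-test distinction concrete, here is a hypothetical sketch in the xUnit style. The method and test names are invented for illustration, and it’s shown in Python’s unittest rather than NUnit – but the pattern is the same one NUnit uses: the developer’s test proves the happy path works, while the test-minded person probes for the ways it fails.

```python
import unittest

def parse_quantity(text):
    """Hypothetical business method: parse a user-entered order quantity."""
    value = int(text.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

class DeveloperTests(unittest.TestCase):
    # The typical developer test: prove that it works.
    def test_parses_a_normal_quantity(self):
        self.assertEqual(parse_quantity("5"), 5)

class TesterTests(unittest.TestCase):
    # A real tester probes the edge and misuse cases the developer skipped.
    def test_rejects_zero(self):
        with self.assertRaises(ValueError):
            parse_quantity("0")

    def test_rejects_negative(self):
        with self.assertRaises(ValueError):
            parse_quantity("-3")

    def test_rejects_non_numeric(self):
        with self.assertRaises(ValueError):
            parse_quantity("five")

    def test_tolerates_surrounding_whitespace(self):
        self.assertEqual(parse_quantity("  7  "), 7)
```

Run the whole suite on every build (python -m unittest, or the NUnit GUI/console in .NET) and these little test harnesses become exactly the continual regression testing I’m describing.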

And as long as we all realize that these are developer tests, and that some real quality assurance tests are also required (which should be written by test-minded people), then that is good. Certainly this is my view: developers must write NUnit tests (or use VSTS or whatever), and I strongly encourage clients to hire real testers to complete the job – to flesh out the testing to cover the edge and exceptional cases.

None of which, of course, is part of the design process, because that’s done using a modified CRC approach.