The Unit of Testing is the Unit of Maintainability
Every time I talk to a client that has been through some sort of agile transformation, whatever that means, I end up having an interesting conversation with the local development teams about testing.
For some reason, many developers think that the unit test + integration test + functional test combination is just for beginners. These people have been through a lot, you know. They used to have all sorts of trouble delivering software that made any sense to the business, but after years and many dollars spent on consulting and recruitment they are racing like pros now.
And that means no training wheels required. And apparently this leads to “cherry-picking which tests make sense for us”. Yeah, I am not so sure about that.
Tests are executable specifications, and what we are doing by dropping a level of testing is leaving that level as unspecified behaviour. John Regehr has a brilliant series of blog posts on how undefined behaviour affects the C programming language, and I find this quote particularly important for our discussion:
If any step in a program’s execution has undefined behaviour, then the entire execution is without meaning. This is important: it’s not that evaluating (1 << 32) has an unpredictable result, but rather that the entire execution of a program that evaluates this expression is meaningless. Also, it’s not that the execution is meaningful up to the point where undefined behaviour happens: the bad effects can actually precede the undefined operation.
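Translating that back into testing terms: a unit test is a small, executable statement of how a piece of code is supposed to behave. Here is a minimal sketch of the idea, using JUnit 5 and an invented DiscountCalculator class (both the class and its rules are made up for illustration, not taken from any real codebase):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, invented for this example; amounts are in cents.
class DiscountCalculator {
    // Orders of 10_000 cents ($100.00) or more get 10% off; negative totals are invalid.
    long discountFor(long orderTotalInCents) {
        if (orderTotalInCents < 0) {
            throw new IllegalArgumentException("order total cannot be negative");
        }
        return orderTotalInCents >= 10_000 ? orderTotalInCents / 10 : 0;
    }
}

// Each test is one sentence of the specification: drop them and that sentence
// exists nowhere except inside the implementation itself.
class DiscountCalculatorTest {
    private final DiscountCalculator calculator = new DiscountCalculator();

    @Test
    void ordersOfOneHundredDollarsOrMoreGetTenPercentOff() {
        assertEquals(1_500, calculator.discountFor(15_000));
    }

    @Test
    void smallerOrdersGetNoDiscount() {
        assertEquals(0, calculator.discountFor(9_999));
    }

    @Test
    void negativeTotalsAreRejected() {
        assertThrows(IllegalArgumentException.class, () -> calculator.discountFor(-1));
    }
}

Delete those three tests and nothing in the codebase says what should happen to a negative total any more; the behaviour still exists, it is just unspecified.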
Now, it is completely ok for you not to care enough about some piece of software to write a specification for its behaviour. What happens, though, is that you create a black-box around that piece.
And treating something as a black box is mostly an economic issue.
Back in the 90s, I knew a lot about PC hardware. Every time I had a hardware problem I would open up my machine, run a few tests, identify the faulty part, and replace it. Upgrades were done the same way: I’d buy some circuit board, open the computer, and swap the old part for the new one.
This was required because computers were expensive beasts. These days I really can’t be bothered; I just go to the shop and get a new one. Apple can’t be bothered either, so they often just replace my faulty hardware. For free.
The problem is that, for most software, the economics haven’t flipped like that yet: its pieces are still too expensive to simply throw away and replace.
Regardless of what building blocks you choose, the way we usually build systems is by layering abstractions on top of each other. These abstractions are not meaningless; they play an important role in managing complexity and have distinct responsibilities.
If an abstraction, or even a whole level in your system, is not worth some sort of automated specification, you should probably ask yourself whether that piece is required at all. There are some cases where this is just fine (it often happens with automatically generated code, for example), but in most scenarios every class in a system is expensive enough to deserve its own specification, so that people can understand how it was supposed to work.
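To make the “a specification per layer” idea concrete, here is a minimal sketch. The ExchangeRateSource and PriceConverter names are invented for this example, and the test uses JUnit 5 with a hand-rolled stub; the point is only that the layer built on top gets its own executable description of how it is supposed to behave, independent of whatever sits underneath it:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// A lower layer: where exchange rates actually come from is somebody else's problem.
interface ExchangeRateSource {
    double rateFor(String fromCurrency, String toCurrency);
}

// The layer we care about here: it only knows how to apply a rate to an amount.
class PriceConverter {
    private final ExchangeRateSource rates;

    PriceConverter(ExchangeRateSource rates) {
        this.rates = rates;
    }

    double convert(double amount, String from, String to) {
        return amount * rates.rateFor(from, to);
    }
}

// The specification for this layer, written against a fixed-rate stub so it
// does not depend on how any real rate source behaves.
class PriceConverterTest {
    @Test
    void appliesTheRateReportedByTheSource() {
        ExchangeRateSource fixedRate = (from, to) -> 2.0;
        PriceConverter converter = new PriceConverter(fixedRate);

        assertEquals(20.0, converter.convert(10.0, "USD", "AUD"), 1e-9);
    }
}

Each layer gets a test like this one, and whoever has to change PriceConverter later can read exactly what it was supposed to do without reverse-engineering it.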
And when you have to change some piece of code without any specification, there are basically two strategies.
The first is to be extremely careful, changing only the handful of lines you know are absolutely required for whatever you need the code to do. This approach is obviously not refactoring-friendly, so eventually the class ends up with a lot of comments saying something like:
//I have no idea what this does but it's not safe to remove it
The other approach is to do exactly what I do with computer hardware these days: just replace the whole black box with something else. As you can probably imagine, this is very expensive.
If you consider yourself mature enough, whatever that means, to drop levels of testing, there isn’t much I can do but beg you to pay attention to the black boxes (and minefields) you may be leaving behind.