I just read Your Unit Tests Lie to You by Janusz Gorycki and was going to leave a comment there, but thought it more appropriate to spin my comments off into their own thing. For those who haven't read the article, its basic premise is to grab hold of the nearest "test infected" reader and shake the warm and fuzzy out of them. It paints the short-sightedness of many recent "unit testing" converts as living in a dream world where unit tests should replace formal testing. It follows with many sentiments I've read (and written about here) for a while now. It's not that I disagree with what is being said in the article, or with its tone for that matter; most of it is spot on. Unit testing is definitely not a silver bullet. If you read my blog often, you no doubt get that. The article ends:

So please, don't fire your QA department just yet. Their job is still important, even if you unit test.

So, to Janusz: the fundamental problem here is a general ignorance of the purposes behind a unit test suite. I agree 100% that that's the primary factor behind the problem he describes. What don't we agree on? Semantics. But semantics are important! How far do we have to go for a true zen understanding of this issue? Not far. Indulge me.

When Is a Unit Test Not a Unit Test?

Here's my thesis: you may use a unit testing framework, but what you write are developer tests. Even if they are technically unit tests, it is against everyone's interest to call them that. A picky, useless distinction, you say? Hear me out.

There is a vast difference between the gamut of possible automated tests one could write and what is known colloquially as a unit test. A number of different kinds of automated tests are written against frameworks that are built on top of unit testing frameworks. That doesn't make them unit tests, and it doesn't make sense to call them unit tests. A square is a rectangle, but does that make every rectangle a square? An automated acceptance or integration test is subject to a completely different set of problems (in areas such as specification, maintenance, and complexity) than a unit test. In fact, about the only thing they share is their lifecycle and execution model, which many times has been retrofitted into the JUnit lifecycle and execution model.
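To make the distinction concrete, here is a minimal sketch in plain Java (no framework, all class names invented for illustration) of two tests that share the same setup-execute-assert lifecycle, the model JUnit popularized, yet differ completely in scope and in the class of failures they can catch:

```java
import java.util.HashMap;
import java.util.Map;

// A trivial in-memory store, standing in for a real persistence layer.
class InMemoryUserStore {
    private final Map<String, String> users = new HashMap<>();
    void save(String id, String name) { users.put(id, name); }
    String find(String id) { return users.get(id); }
}

// A component that depends on the store.
class UserService {
    private final InMemoryUserStore store;
    UserService(InMemoryUserStore store) { this.store = store; }
    String greet(String id) { return "Hello, " + store.find(id); }
}

public class LifecycleDemo {
    // "Unit" test: one class, one behaviour, nothing else involved.
    static void testStoreSavesAndFinds() {
        InMemoryUserStore store = new InMemoryUserStore();
        store.save("42", "Ada");
        if (!"Ada".equals(store.find("42"))) {
            throw new AssertionError("store did not return saved user");
        }
    }

    // "Integration" test: identical lifecycle shape, but it wires real
    // components together, so it fails for a different class of reasons
    // (wiring, contracts between components) than the unit test above.
    static void testServiceGreetsSavedUser() {
        InMemoryUserStore store = new InMemoryUserStore();
        store.save("42", "Ada");
        UserService service = new UserService(store);
        if (!"Hello, Ada".equals(service.greet("42"))) {
            throw new AssertionError("service did not greet saved user");
        }
    }

    public static void main(String[] args) {
        testStoreSavesAndFinds();
        testServiceGreetsSavedUser();
        System.out.println("both tests passed");
    }
}
```

Both methods run under the same execution model, which is exactly why the shared framework tempts people to lump them together under one name.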

"Unit Testing" and Linguistic Drift

I've recently seen a number of different incarnations (1, 2, 3) of the argument that we should eschew the term "best practice" because of its linguistic drift and the general propensity of people to turn off their brains when spoon-fed answers, never having to derive the solution for themselves. Similarly, the popularity of unit testing frameworks and the sheer frequency with which the term has been used have, in a sense, set the idea of developer testing back considerably. Cedric Beust makes the point that in many cases we've confused [developer] testing terminology with JUnit terminology, and TestNG was in part a response to that. Here we oversimplify the problems we choose to bite off and the goals we strive toward. They're not realistic. Is it any wonder so many people fall flat when trying to adopt "unit testing"?

While there may be some overlap with the goals of validation and verification, most in the know consider the true benefits of "unit testing" to be a different animal altogether. We seem surprised to find the benefits of doing developer testing have little to do with what "testers" do. This discord causes a lot of confusion, and has sparked a lot of articles. Some, appropriately, draw the conclusion that developer testing, while it fits a rigid definition of what testing is, shares little with what a typical "tester" is responsible for (true V&V). Quite often developers are the only ones doing any automation work, including this automated "developer" testing. Developers tend to do it for all kinds of different reasons, too. We'll use a suite of automated tests to proceed without fear of integration errors. That adds value totally independent of validation and verification practices. If our suite catches regressions before we hand a single version off to testers, that saves both developers and testers time. I could go on and on, but the term "unit testing" conveys very little of these kinds of benefits to the development lifecycle, and as it turns out, causes a great deal of confusion.

The Right Usage

When we're talking about pure unit tests, that is, black-box testing components in isolation (which requires isolating their dependencies) rather than in concert, we as developers can certainly find merit in the practice. But what we've learned is that this idea of unit testing, while quite beneficial as a development practice, shares very few goals with "testing" as we know it from the classical definition of the word (end-to-end V&V). Perhaps a few years ago this distinction was not as clear, but we now know better. We know the danger in trying to bend unit tests into something they're not.
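As an illustration of what "isolated dependencies" means in practice, the sketch below (hypothetical names throughout, a hand-rolled fake rather than a mocking library) hands the component under test a fixed in-memory stand-in for its real dependency, so the test exercises only that one unit:

```java
// The dependency's contract. A real implementation might call out
// to a service or database; the test never touches either.
interface TaxRateSource {
    int ratePercentFor(String region);
}

// The unit under test: pure arithmetic over whatever rate it is given.
class PriceCalculator {
    private final TaxRateSource rates;

    PriceCalculator(TaxRateSource rates) {
        this.rates = rates;
    }

    long grossCents(long netCents, String region) {
        return netCents + netCents * rates.ratePercentFor(region) / 100;
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        // A fixed in-memory fake isolates the calculator: no network,
        // no configuration, and the test can only fail if the
        // calculator itself is wrong.
        TaxRateSource fake = region -> 20;
        PriceCalculator calc = new PriceCalculator(fake);

        long gross = calc.grossCents(10_000, "UK");
        if (gross != 12_000) {
            throw new AssertionError("expected 12000, got " + gross);
        }
        System.out.println("gross = " + gross + " cents");
    }
}
```

Note what this test deliberately does not verify: that real tax rates are correct, that the service is reachable, or that the pieces integrate, which is precisely why its goals differ from classical end-to-end V&V.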

My point is essentially this: knowing how to use a unit testing framework, a very simplistic construct on the surface, to do all of these wonderful things is certainly not something that just "falls out" of developing a unit of code. It's not something we should expect developers to simply deliver each iteration as part of their deliverable, as if it were "done." Effort invested to develop and maintain test code alongside the code under test is not free.

Only if you are persistent will you come to understand that there is sometimes very little benefit in terms of "full-blown," classical validation and verification, the whole reason you set out on this crazy "unit testing" kick. You either stick with it, or write it off at this point. If you stick with it, well, I'll save you the trouble of figuring this out for yourself: you learn it's only another development practice, one that needs to be balanced with other sound practices. It's hard work. Really hard.

Then why go to all the trouble? It's my contention, and I assume most will agree, that developer testing is useful. It drives out better designs and code, and it has the potential to thwart a host of would-be regressions. It can stamp out integration errors. It can help you learn a new framework or prototype a new feature. But that usefulness comes with a very real cost.

If we truly acknowledge that the type of practice we've talked about has merit as a development process, it makes very little sense to continue referring to it as "unit testing." If testing modules of code, driving development with these tests, maintaining them alongside the code, and other automation activities are worthy practices for developers, we should call a spade a spade. Let's stop confusing each other. We're not talking about delivering 100% code coverage, we're not talking about replacing testers (unless you've got the balls to go without them, anyway), and we're surely not talking about writing or releasing bug-free software. It's a lot easier to cut through the hype and confusion if we can all learn to say developer testing and understand it as such. Let's put the term "unit test" to rest.

R.I.P. unit tests. Long live "developer tests!"