Students and clients often approach me asking for my take on whether unit tests should be written before or after the code they test. They often tell me that they’ve read or been told that if they write the unit test “after”, it can’t be considered TDD – Test-Driven Development – which is apparently the goal.
This question betrays a deep misunderstanding of TDD. It is true that historically TDD was mostly done test-first. And most programmers doing TDD today also write test-first. But they don’t write the tests first because TDD somehow demands it. They write them first because TDD works best that way.
Let me rephrase that. Test-Driven Development is not concerned with the types of tests you write or even when you write them. Its primary concern, as the name implies, is driving development. TDD is a programming technique. It’s just another way for programmers to write, well, programs.
This brings us back to the beginning. The question of whether the tests should be written “before” or “after” reflects an inability to separate the wheat from the chaff, the important from the trivial. The TDD technique instills a disciplined approach to software development. The discipline is the wheat; the tests are a trivial mechanical extension.
There are different ways to do TDD. Obviously, you can do TDD using unit tests. But you can also use acceptance tests and integration tests. You can use a combination of different types of tests. You might have to test different parts of your application in different ways, due to technological limitations or for other reasons. If you cannot use unit tests, you can usually still do TDD. In fact, you can do TDD using a REPL environment.
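To make the technique concrete, here is a minimal sketch of the classic test-first cycle using Python’s standard `unittest` module. The `apply_discount` function and its behavior are hypothetical examples, not from any particular codebase: the point is only the order of events – the tests below were (notionally) written first and watched fail before the implementation existed.

```python
import unittest

# Hypothetical example: a small price-calculation function driven
# test-first. In the red-green cycle, the TestCase below is written
# first, fails because apply_discount doesn't exist yet, and only
# then is the function written to make it pass.

def apply_discount(price, percent):
    """Return `price` reduced by `percent`, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

if __name__ == "__main__":
    unittest.main()
```

The same loop works with acceptance or integration tests, or interactively in a REPL – the driving question (“what should the code do next?”) is the same regardless of the test’s type.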
Some programmers see adherence to the test-first methodology as necessary for working in a disciplined way. But as we grow adept at TDD, we often find that we don’t need to follow all the strictures of test-first development. We might write a number of related tests all at once, before we write the code, instead of one test at a time. Or we might skip a test or two, if we believe the behavior they describe is trivial, mundane, or implied.
Sometimes we realize that we can avoid writing a number of tests by simply using a better design in our code. This is significant. One of the goals of doing test-first development is to ensure that our code is testable. How can it not be testable, if we first write the test?
So we learn over time which design patterns are more testable, robust and extensible. They often evolve almost on their own. We also learn which structures produce less complexity. And we start using those patterns and structures, by default. We sometimes feel that their role is more boilerplate than logic. The most important take-away is that at this point, many programmers can “see” the tests in their mind’s eye. They can drive the development using virtual, unwritten tests.
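One pattern that tends to emerge this way is dependency injection: test-first work quickly punishes code with hidden dependencies, so experienced practitioners inject them by default. A hedged illustration (the function and names are invented for this sketch):

```python
import time

# Hypothetical sketch of a design that test-first work tends to
# produce: the clock is passed in as a dependency instead of being
# called directly inside the function, so a test can control "now"
# without patching or test doubles.

def make_greeting(name, clock=time.time):
    """Return a greeting whose wording depends on the current hour."""
    hour = time.localtime(clock()).tm_hour
    period = "morning" if hour < 12 else "afternoon"
    return f"Good {period}, {name}"

# In a test, a fixed clock stands in for the real one:
fixed_morning = lambda: time.mktime((2024, 1, 1, 9, 0, 0, 0, 1, -1))
assert make_greeting("Ada", clock=fixed_morning) == "Good morning, Ada"
```

A programmer who reaches for this shape automatically is, in effect, driving the design with a test they never had to write down.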
My friend and colleague Yuval Mazor calls this “Post-TDD-ism”. The idea is that we are no longer bound to just one way to do TDD. New tools, new techniques, new approaches and languages, as well as experience, all push us forward. We can do Test-Driven Development using different types of tests, and if we are mentally disciplined enough, those tests can be mental. As an interesting corollary, postmodernism started with architecture, as did software design patterns. TDD is about the architecture.
One warning, though. Whether you write the tests before, or after, or not at all, you have to strike a balance. You have to keep other goals in mind. Automated tests help us prevent regression, provide immediate feedback, facilitate better teamwork, and create a “live” spec that describes our application’s requirements and behavior. When you intentionally neglect to write a test, you have to take those goals into consideration. This is part of the discipline.
So what do you think? Do you always write tests? Do you think there are better reasons to always write the tests first? Or are you a post-TDD-er too?
In my experience, the tests I write AFTER the code just seem a little less…complete. I think about the tests better when they’re the only thing there. If the tests are written later, they tend to follow the code, rather than the code being guided by the tests. But the #1 reason: it’s far easier to skip the tests under pressure when they come after the code.
I completely agree with your sentiment, especially that last point about pressure. Writing the tests first, by the book, is a way to “force your hand”, so to speak. It forces you into a sort of mental discipline, a “wash-rinse-repeat” kind of thing. I’d even go so far as to say it’s a tenet of proper TDD.
But not everybody works that way. Not everybody has the mental discipline. And those who have the discipline may also choose to use shortcuts (whether right or wrong). Writing the tests first does not guarantee TDD. As I see it, TDD is primarily about what goes on in your head, and is best facilitated by the test-first approach.
Not sure I agree. It might be OK if you’re some sort of ninja coder, but I see devs regularly bypass tests and do anything they can to avoid writing them. Anything they write after the fact is a test written to pass the code they’ve written, bugs included. I can appreciate doing this on the odd occasion if you’re writing the codebase yourself, but I wouldn’t trust colleagues with this practice unless they were sufficiently senior or cared deeply enough about curating their code.
Actually, that’s kind of my point. One of the questions I’m trying to answer is: why is it OK if you’re a “ninja coder”, or a “sufficiently senior” (and caring) colleague? What are they doing differently that makes their code trustworthy, even when they don’t write the tests first? My next post, on Visualizing TDD, delves a bit deeper into the mental process.
The important thing to note is that I strongly recommend doing TDD test-first, by the book. This article is about the essence of TDD, rather than the technical process.
As both Sean and Chris touched on, the difference is in what’s driving what.
Tests written before code *specify* and verify what the code *should do*.
Tests written after code only *document* and verify what the code that was written *does*.
It’s a subtle, but I think crucial, distinction.
As to what makes it OK for the “ninja coder” and company to diverge, nothing does. While a certain amount of boilerplate will become apparent to the seasoned developer and will tempt them to short-circuit the process, the danger is that they will not know when they’ve assumed too much about what needs to be driven by a test and what doesn’t. Good habits are best left unbroken.
I partially agree with your first point. Tests written at different stages have different roles. But I would not say that the difference is specify vs. document. If you work Agile, all your code is a kind of live document, formalized by your tests (whether unit, integration, acceptance, or any other). And if you don’t work Agile, none of your code is documentation, regardless of what methodology you use to write it. So when Agile and TDD are combined, tests written before code serve in both the specification and documentation roles. And if you run your tests automatically, then I find it difficult to distinguish between a “before” and “after” stage as far as determining what the code “should do”.
I would suggest that the difference between them has to do more with what happens while you code. When you write your tests before the code, one-by-one, in an orderly fashion, you force yourself to think about the design in a gradual, adaptive, and structured way. While it might be theoretically possible to do that when the tests are written after the code, it is far more difficult and you lose the safety guards that test-first TDD gives you. In other words, the real difference is in the way the methodology affects the outcome.
As for your second point, about ninja coders, I do not believe it is “OK” for them to diverge. Actually, I’m not really trying to judge. I believe I’m describing a common everyday phenomenon. And sometimes the lines are blurred. A good analogy might be cooking. When you use a recipe to cook, you have to follow the instructions to get it right. But a master cook, a chef, might – without really thinking about it – combine steps, or do them in a slightly different order, and still get it right. Is it OK for the chef to diverge? Who knows! Maybe it depends on the results.