Why Testing After Is a Bad Practice

Matti Bar-Zeev - Jan 28 '22 - Dev Community

In this post I will try and give you my 2 cents on why writing tests after you have so-called “working” code is a bad practice, and why you should avoid it and strive to go tests-first (also known as TDD).

Some background

There you have it, your completed feature which actually works!

Well, at least as far as you’ve manually checked it. It involves dozens of files, several new ones and several modified old ones, but you’re pretty proud of what you’ve accomplished.

You look at your Jira board (or whatever your poison is these days) and see that ticket saying “Write tests for it”, and so you grab your coffee mug and set off to get your hands dirty with some assertions.

You don’t understand what the big fuss about TDD and writing tests first is. You’ve been coding like this for a few years now and everything seems to be working fine. I mean, every piece of code has its bugs, right? And your code is sometimes hard to introduce new features into… or perhaps that happens more often than you’d like to admit? And what about those tests which are cluttered with mocks and too complex for no real reason… hmmm.

That unsettling feeling you have creeping down your spine is rightfully there.

Here are a few points on how writing tests-after contributes to the symptoms I’ve mentioned above, among others:

Unconsciously complying with a given “reality”

It is within our human nature: when there is a certain situation at hand, we try to comply with it, even to the point of justifying wrongs.

Our code is a given “reality”. It’s there and it works (for all we know). The tests you write now will tend to establish and support the existing reality you’ve created.
They will do so by going through the paths you covered in your code; they will focus on the happy paths more often and will be biased. It is not surprising that tests-after usually produce poorer code coverage. It becomes easier for us to overlook certain cases, since we wish to comply with what’s already there.

For instance, let’s take the common “add” function: when you write a test for it after the code is implemented, you will attempt to add a few numbers and see that it works the way you remember your code working. But if you wrote a test first, you would start to think about how the function handles a situation where it does not get the arguments it expects, neither in number nor in type.

In this sense, TDD kinda forces you to think of edge cases prior to writing the actual code. In many cases, this has proven to produce much more resilient code.
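
To make this concrete, here is a minimal tests-first sketch, assuming a Jest-style test runner; the `./add` module, its path and the error behaviour asserted for bad input are assumptions the tests force you to decide on before any implementation exists:

```typescript
// Tests written before the implementation exists (assuming a Jest-style runner).
// The './add' module and the error behaviour below are assumptions for this sketch.
import { add } from './add';

describe('add', () => {
  it('adds two numbers (the happy path)', () => {
    expect(add(2, 3)).toBe(5);
  });

  it('throws on a non-numeric argument', () => {
    // Deciding this up front prevents silently returning "23" via string concatenation.
    expect(() => add('2' as unknown as number, 3)).toThrow(TypeError);
  });

  it('throws when an argument is missing', () => {
    // The cast is only needed because the sketch deliberately calls add with too few arguments.
    expect(() => (add as Function)(2)).toThrow(TypeError);
  });
});
```

Only once these cases are pinned down does the implementation get written, and it has to satisfy them rather than the other way around.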

Emotional attachment to our code

We get emotionally attached to our work. You can see it in every PR you’ve submitted, when you’re asked to modify something. It takes a lot of self-discipline to acknowledge that something you’ve made requires a change, and in many cases people will go to great lengths debating minor issues. Search your feelings, you will know it to be true.

“What does testing after have to do with it?” you might ask -
Tests have a tendency to expose your code’s design weaknesses. When your code is too complex or too tightly coupled, writing tests after will surface the bad design.
Although the tests indicate that the design is wrong, you will find that many choose to ignore the red lights and not refactor the code, but instead somehow make the tests suffer for the lack of testability.

This can manifest itself in a lack of tests, in overlooking certain use cases and in over-mocking.
Practicing TDD helps to avoid such cases and helps us better design our code. TDD increases your code’s testability by default, and code with good testability is also flexible code which can be modified with greater ease.
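
As a small illustration of what that flexibility can look like, here is a sketch with made-up names (the price/VAT function is hypothetical): when a function receives its collaborator as an argument instead of importing a concrete module, the test can pass in a plain stub and needs no mocking framework at all.

```typescript
// Hypothetical example of designing for testability via dependency injection.
type FetchPrice = (productId: string) => Promise<number>;

// The price source is passed in, so nothing here is hard-wired to a real service.
export async function priceWithVat(
  productId: string,
  fetchPrice: FetchPrice,
  vatRate = 0.17
): Promise<number> {
  const base = await fetchPrice(productId);
  return base * (1 + vatRate);
}

// The test replaces the dependency with a plain async function: no mocks to restore.
it('adds VAT on top of the base price', async () => {
  const stubFetchPrice: FetchPrice = async () => 100;
  expect(await priceWithVat('sku-1', stubFetchPrice)).toBeCloseTo(117);
});
```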

Over-mocking

Practicing tests-after usually produces tests which have a lot more mocking in them. When this happens it implies that your code is tightly coupled to modules it probably shouldn’t be, and/or that the code’s separation of concerns (SoC) is lacking. When you wrote the code nothing stopped you from coupling it tightly, but now the tests surface it.

Over-mocking means that your tests become more complex and less readable. Moreover, in some testing frameworks it may put extra load on the test runner.

As you probably know, mocks also require maintenance: you need to clean them up, restore them, apply them, and that can be so frustrating later on when you’re trying to figure out why a certain test is not passing, only to find out that you forgot to restore a mock.
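
As a concrete example of that maintenance cost, here is a small sketch assuming Jest; the `./logger` and `./report` modules are hypothetical. The `afterEach` restore is the part that is easy to forget, and forgetting it is exactly how an unrelated test starts failing later:

```typescript
// Assumes the Jest framework; the logger and report modules are made up for this sketch.
import * as logger from './logger';
import { generateReport } from './report';

describe('report generation', () => {
  afterEach(() => {
    // Restore every spy after each test; without this, the silenced `warn`
    // leaks into later tests and hides their log output too.
    jest.restoreAllMocks();
  });

  it('warns when the report is empty', () => {
    const warnSpy = jest.spyOn(logger, 'warn').mockImplementation(() => {});
    generateReport([]);
    expect(warnSpy).toHaveBeenCalledWith('empty report');
  });
});
```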

Gets neglected at the end

At the beginning I wrote that you have that Jira ticket for “Write tests for it”. I don’t know why you didn’t stop me there and then :D

This is the place to say that you should not have such a ticket. Writing tests is not an additional task; it is an inseparable part of the development task for your feature. What’s more, when you leave it to the end, it is the easiest task to postpone to “never” in the eyes of your product team - after all, as they see it, the feature is “working” and done.

Sometimes developers will just write dummy tests which have no value but somehow increase the code coverage. That’s even worse than not writing the tests at all, since it gives a false feeling that the code is well covered and protected when it is actually not.

Wrapping up

Many of the coding issues we experience on a daily basis can be avoided if we practice TDD more. I’m not saying that the transition should be binary, this or that, but I hope that what I’ve written here will help you insist a bit more (even in that inner debate you’re having with yourself) on the quality you would like your code to have.

I know that reality sometimes demands we spit out the code as fast as we can, but we, as professionals, should always strive to make our work better and improve as we go.

Do you agree? Share your thoughts with the rest of us :)

Hey! If you liked what you've just read check out @mattibarzeev on Twitter 🍻

Photo by Jennifer Bedoya on Unsplash
