After writing a post for the Upsource blog about evaluating tests during code review, I found myself compelled to write something more general on why we should be writing automated tests. If you already know the value of tests, you might find something here that will help convince your colleagues who are less sold on the idea.
Why do we write automated tests?
In my first post about code reviews, I mention that a lot of activities we often do in a code review can be automated – checking style, formatting, code coverage etc. The nice thing about automating these things is that, once automated, you no longer need a human to give you confidence that your code meets certain criteria.
Tests provide a similar level of security. Once you’ve written automated tests that prove your code behaves in a particular way, they continue to give you confidence in that code well into the future, as long as they keep passing – provided they are running in some sort of Continuous Integration environment like TeamCity.
Another reason to have automated tests is that the tests can often be easier to reason about than the production code – they should state the desired functionality clearly, they should be readable, and they should provide insight into which things the author of the code has considered (and possibly hint at areas they have not thought about).
What sort of tests are we talking about?
There’s a huge range of automated tests we could be writing. In the Java world, these might be unit tests written with JUnit, TestNG, Spock, or similar, or end-to-end tests driven by a tool like Selenium or Robotium.
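As a sketch of the smallest kind of test mentioned above, here is a unit test for a hypothetical `PriceCalculator` class (the class and method names are invented for illustration). In a real project this would be a JUnit `@Test` method using `Assertions.assertEquals`; a `main` method is used here only so the example runs standalone.

```java
// A hypothetical class under test and a unit test for it.
// In a real project the checks below would be JUnit @Test methods;
// plain Java is used here so the sketch is self-contained.
public class PriceCalculatorTest {

    // Production code (normally in its own file).
    static class PriceCalculator {
        // Applies a percentage discount to a price given in cents.
        int discountedPrice(int priceInCents, int discountPercent) {
            return priceInCents - (priceInCents * discountPercent / 100);
        }
    }

    public static void main(String[] args) {
        PriceCalculator calculator = new PriceCalculator();

        // A 10% discount on 200 cents should be 180 cents.
        int result = calculator.discountedPrice(200, 10);
        if (result != 180) {
            throw new AssertionError("expected 180 but was " + result);
        }

        // A 0% discount should leave the price unchanged.
        if (calculator.discountedPrice(200, 0) != 200) {
            throw new AssertionError("0% discount changed the price");
        }

        System.out.println("all tests passed");
    }
}
```

Note how each check states the expected behaviour explicitly – that is what makes the test useful as ongoing documentation, not just a one-off check.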
It would be rare that new code, whether a bug fix or new feature, wouldn’t need a new or updated test to cover it. Even changes for “non-functional” reasons like performance can frequently be proved via a test.
Adding a test for a bug fix or feature is also often a good way to document what the changed code does (with an implication of “why?”). This is particularly useful in open source projects, where documentation typically lags behind implementation.
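A regression test can document a bug fix like this. Everything here – the class, the method, and the bug itself – is hypothetical; in a real project this would be a JUnit test whose name or comments reference the issue-tracker entry.

```java
// A regression test documenting a (hypothetical) bug fix: before the
// fix, trimDisplayName threw a NullPointerException on null input.
public class UserNameRegressionTest {

    // Production code under test (normally in its own file).
    static String trimDisplayName(String name) {
        if (name == null) {
            return "";          // the fix: treat null as an empty name
        }
        return name.trim();
    }

    public static void main(String[] args) {
        // Documents the fixed behaviour: a null name no longer blows up.
        if (!trimDisplayName(null).equals("")) {
            throw new AssertionError("null name should become empty string");
        }
        // And the original behaviour still works.
        if (!trimDisplayName("  alice  ").equals("alice")) {
            throw new AssertionError("whitespace should be trimmed");
        }
        System.out.println("regression test passed");
    }
}
```

The test itself answers “why does this null check exist?” far more durably than a commit message would.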
But writing tests is hard!
Yes, it’s true. But then so is writing production code. Solving difficult problems is something that we, as developers, are good at. Plus we enjoy it! And when you have a set of tests that prove your production code works, not only do you have much more confidence in your solution, you also have the freedom and safety to refactor the code later.
What happens when tests break?
If there are failing tests, clearly something has to change – either the production code or the test code. This is a decision you have to make, based on one of two conclusions:
- The tests are supposed to fail: for example, a new feature has been added that needs a change in existing behaviour. In this case, the existing tests need to be altered, or replaced with new ones that assert the correct behaviour.
- The tests are supposed to pass: two possibilities spring to mind – firstly, these tests relied on some side effects that have been changed by new features or fixes, intentionally or not. In this case, these existing tests need to be updated (ideally to be less dependent upon undocumented side effects). Secondly, some new code has broken existing functionality. In which case, the new code needs to be re-written to not affect existing functionality.
Note that under no circumstances should failing tests result in no action – either the tests need changing, or the production code does.
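The first of those two possibilities – a test that relied on an undocumented side effect – can be sketched like this (all names here are hypothetical). The fragile version assumed a shared counter started at zero because no earlier test had touched it; the robust rewrite establishes the state it needs explicitly.

```java
// A sketch of a test that depended on an undocumented side effect,
// and a more robust rewrite. Names are hypothetical.
public class CounterTest {

    static class Counter {
        private int count = 0;
        void increment() { count++; }
        int value()      { return count; }
        void reset()     { count = 0; }
    }

    // Shared state like this is the kind of hidden dependency that
    // breaks when new code (or a new test) changes it.
    static final Counter shared = new Counter();

    public static void main(String[] args) {
        // Fragile version (commented out): assumes nobody has touched
        // `shared` yet, so it breaks if another test runs first.
        // shared.increment();
        // assert shared.value() == 1;

        // Robust version: pin down the starting state explicitly,
        // so the test no longer depends on execution order.
        shared.reset();
        shared.increment();
        if (shared.value() != 1) {
            throw new AssertionError("expected 1 after reset + increment");
        }
        System.out.println("counter test passed");
    }
}
```

Updating a test this way fixes the failure while also removing the hidden dependency that caused it.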
Advice on writing good tests
In these days of Agile and DevOps, as developers we are responsible for so much more than writing lines of code to implement features or fix bugs. For one thing, we’re also expected to write lines of code to test these bugs and features in realistic ways. Generally, we have not been trained to think this way, but professional testers have. If you have a QA/test team, or have access to testers (or developers with a solid background in testing), ask for their help when writing tests – get them to review your tests, or, if possible, have them pair with you when you’re coming up with scenarios.
While we included some pointers on evaluating tests in the post on reviewing automated tests, professional testers can suggest far more ways to check the code. For just some examples, read the article Doing Terrible Things To Your Code.
Tests are about so much more than just checking that your code does what you thought it did. Automated tests are a way to explore the limitations of your code, to discover how it behaves under a range of inputs, and to “document” the expected behaviour (both under normal use and exceptional circumstances), by coding those requirements into tests that get executed in your CI environment.