What to look for in a Code Review
This is part 1 of 6 posts on what to look for in a code review. See other posts from the series.
Let’s talk about code reviews. If you take only a few seconds to search for information about code reviews, you’ll see a lot of articles about why code reviews are a Good Thing (for example, this post by Jeff Atwood).
You’ll also see plenty of documentation on how to use code review tools like our very own Upsource.
What you don’t see so much of is guidance on what to look for when you’re reviewing someone else’s code.
Probably the reason there’s no definitive article on what to look for is that there are a lot of different things to consider. And, like any other set of requirements (functional or non-functional), individual organisations will have different priorities for each aspect.
Since this is a big topic to cover, the aim of this article is to outline just some of the things a reviewer could be looking out for when performing a code review. Deciding on the priority of each aspect and checking them consistently is a sufficiently complex subject to be an article in its own right.
What do you look for when you’re reviewing someone else’s code?
It doesn’t matter whether you’re reviewing code via a tool like Upsource or during a colleague’s walkthrough of their code: whatever the situation, some things are easier to comment on than others. Some examples:
- Formatting: Where are the spaces and line breaks? Are they using tabs or spaces? How are the curly braces laid out?
- Style: Are the variables/parameters declared as final? Are method variables defined close to the code where they’re used or at the start of the method?
- Naming: Do the field/constant/variable/param/class names conform to standards? Are the names overly short?
- Test coverage: Is there a test for this code?
These are all valid things to check – you want to minimise context switching between different areas of code and reduce cognitive load, so the more consistent your code looks, the better.
However, having humans look for these things is probably not the best use of time and resources in your organisation, as many of these checks can be automated. There are plenty of tools that can ensure that your code is consistently formatted, that standards around naming and the use of the final keyword are followed, and that common bugs caused by simple programming errors are found. For example, you can run IntelliJ IDEA’s inspections from the command line, so you don’t have to rely on all team members having the same inspections running in their IDE.
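To make that concrete, here’s a small, invented Java snippet (the class and names are hypothetical) containing the kind of simple programming error that automated inspections typically flag on their own, before a human reviewer ever sees the change:

```java
class DiscountService {

    private static final String VIP = "VIP";

    double discountFor(String customerType) {
        // Compares references rather than contents, so the check can silently fail
        // for a String read from a file, a database or user input. This is exactly
        // the sort of mechanical mistake static inspections catch reliably.
        if (customerType == VIP) {
            return 0.2;
        }
        return 0.0;
    }
}
```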
What should you look for?
What sort of things are humans really good at? What can we spot in a code review that we can’t delegate to a tool?
It turns out there’s a surprisingly large number of things. This is certainly not an exhaustive list, nor will we go into any one of them in great detail here. Instead, this should be the start of a conversation in your organisation about which things you currently look for in a code review, and what, perhaps, you should be looking for.
Design
- How does the new code fit with the overall architecture?
- Does the code follow SOLID principles, Domain Driven Design and/or other design paradigms the team favours?
- What design patterns are used in the new code? Are these appropriate?
- If the codebase has a mix of standards or design styles, does this new code follow the current practices? Is the code migrating in the correct direction, or does it follow the example of older code that is due to be phased out?
- Is the code in the right place? For example, if the code is related to Orders, is it in the Order Service?
- Could the new code have reused something in the existing code? Does the new code provide something we can reuse in the existing code? Does the new code introduce duplication? If so, should it be refactored to a more reusable pattern, or is this acceptable at this stage?
- Is the code over-engineered? Does it build for reusability that isn’t required now? How does the team balance considerations of reusability with YAGNI?
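As a small illustration of that last question, here’s a hypothetical Java sketch (all names invented) of the kind of speculative flexibility a reviewer might query:

```java
// An interface, an implementation and a factory for a calculation that currently
// has exactly one variant: flexibility that nothing is asking for yet.
interface DeliveryCharge {
    long chargeInPence(long orderTotalInPence);
}

class StandardDeliveryCharge implements DeliveryCharge {
    @Override
    public long chargeInPence(long orderTotalInPence) {
        return orderTotalInPence >= 5000 ? 0 : 395;   // free delivery over £50
    }
}

class DeliveryChargeFactory {
    // A reviewer might ask whether YAGNI applies: until a second charging rule
    // actually exists, a single method on one class would do the same job.
    static DeliveryCharge forRegion(String region) {
        return new StandardDeliveryCharge();
    }
}
```

Whether this indirection is premature or sensible groundwork is a judgement call for the team, which is precisely why a tool can’t make it for you.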
Readability & Maintainability
- Do the names (of fields, variables, parameters, methods and classes) actually reflect the thing they represent?
- Can I understand what the code does by reading it?
- Can I understand what the tests do?
- Do the tests cover a good subset of cases? Do they cover happy paths and exceptional cases? Are there cases that haven’t been considered?
- Are the exception error messages understandable?
- Are confusing sections of code either documented, commented, or covered by understandable tests (according to team preference)?
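For that last point, here’s a hedged sketch (JUnit 4 is assumed, and all of the names are invented) of a test whose name and scenario explain a slightly surprising rule, instead of leaving a confusing line of production code unexplained:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LoyaltyPointsTest {

    // Minimal stand-in for the production code under review.
    static class LoyaltyPoints {
        int pointsFor(double amountSpent) {
            return (int) Math.floor(amountSpent / 10);   // whole points only, always rounded down
        }
    }

    @Test
    public void spendingJustUnderTenPoundsEarnsNoPoints() {
        // The rounding rule is documented by the test name and the chosen example,
        // not by a comment buried in the production method.
        assertEquals(0, new LoyaltyPoints().pointsFor(9.99));
    }
}
```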
Functionality
- Does the code actually do what it was supposed to do? If there are automated tests to ensure correctness of the code, do those tests really check that the code meets the agreed requirements?
- Does the code look like it contains subtle bugs, like using the wrong variable for a check, or accidentally using an ‘and’ instead of an ‘or’?
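As an invented illustration of such a bug (the class and parameters are hypothetical), both mistakes below compile cleanly and are easy to skim past:

```java
class AddressValidator {

    // Intent: an address is incomplete if *any* of its parts is missing.
    boolean isComplete(String street, String city, String postcode) {
        // Bug 1: "city" is checked twice and "postcode" is never checked at all.
        // Bug 2: using 'and' (&&) instead of 'or' (||) means the check only fails
        //        when every field is missing, not when any one of them is.
        if (street == null && city == null && city == null) {
            return false;
        }
        return true;
    }
}
```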
Have you thought about…?
- Are there potential security problems with the code?
- Are there regulatory requirements that need to be met?
- For areas that are not covered with automated performance tests, does the new code introduce avoidable performance issues, like unnecessary calls to a database or remote service?
- Does the author need to create public documentation, or change existing help files?
- Have user-facing messages been checked for correctness?
- Are there obvious errors that will stop this working in production? Is the code going to accidentally point at the test database, or is there a hardcoded stub that should be swapped out for a real service?
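And a small, hypothetical sketch of that last question (the class name and URL are invented): this works on the author’s machine, but nothing swaps the stubbed endpoint for the real one before it reaches production:

```java
class PaymentGatewayClient {

    // A reviewer should ask how this becomes the real endpoint in production:
    // configuration, an environment variable or service discovery, rather than
    // a constant baked into the source.
    private static final String GATEWAY_URL = "http://localhost:8080/fake-payment-gateway";

    void charge(String cardToken, long amountInPence) {
        // ... build and send the request to GATEWAY_URL ...
    }
}
```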
Look out for follow-up posts on this blog covering these topics in more detail.
What makes “good” code is a topic that every developer has an opinion on. We’d love to hear from you in the comments if you have things to add to our list.
UPDATE: Find more posts on "What to look for in a Code Review" here.