In the last two posts we touched on how well those approaches help the author learn and improve, since both of them focus primarily on getting the code correct. The Design Evolution approach additionally mentions the value of the whole team seeing how the code evolves.
This third approach for code reviews puts learning at the centre of the process — the purpose is not to check if the code is correct, but to share with other developers what changes have been made.
In this example, I’m assuming a slightly different approach to development: the team isn’t working on feature branches, but instead commits directly to master (or some other centralised development trunk), in small, self-contained commits that pass all the automated tests. In these circumstances it’s important to have a CI system that runs all the relevant checks, and to treat any failures there as the highest priority.
When I work on a fix or feature, I’ll make my changes in small, self-contained commits that pass all the tests, as usual. This should be the default way of working, of course, but it matters even more when you’re committing to the main development branch: it ensures the whole team isn’t blocked by some breakage you introduced. Upsource integrates with your CI server, so you can see at a glance which revisions passed all the checks.
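A minimal sketch of that rhythm, run in a throwaway repository so it’s self-contained. Here `run-tests.sh` is a trivial stand-in for whatever actually runs your test suite, and in a real team you would be pulling from and pushing to a shared remote rather than committing locally:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

# ...one small, self-contained change...
printf 'echo "all tests pass"\n' > run-tests.sh
echo "parse dates leniently" > change.txt

sh ./run-tests.sh                 # all checks must pass before the commit lands
git add .
git commit -q -m "Parse dates leniently in the importer"
git log --oneline                 # one small, green commit on the trunk
```

The point is the ordering: the full suite runs locally before each commit lands, so every revision on the shared branch is one the rest of the team can safely build on.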
In this code review process, I can either make each small change a tiny review in its own right, or I can batch them up into a single review when the feature is complete¹. The first approach has the advantage that each review will be small, but there will be many of them. The second approach means the review will probably be bigger, but gives me much more freedom to experiment and evolve my design during implementation.
In Upsource, I can either choose to add each revision to the review via the UI…
…or I can use the IntelliJ IDEA plugin to select which code review to add the commit to…
…or I can tag the commit with an issue number or code review ID to have it automatically added to a review.
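As a sketch of that tagging convention: Upsource can attach a commit to a review automatically when the commit message mentions the review ID or an issue key. `PROJ-123` and `CR-42` below are made-up placeholders, and the exact matching rules depend on your Upsource project configuration. Again using a throwaway repository so the example is self-contained:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo "guard against empty input" > parser.txt
git add parser.txt
# Mention the issue key and review ID in the message so the server
# can pick the commit up without any manual step in the UI.
git commit -q -m "PROJ-123: guard against empty input in the parser (CR-42)"
git log -1 --pretty=%s
```

The appeal of this route is that it costs nothing at commit time and leaves a permanent, greppable link between the change and its review.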
Using this process, we’ll see groups of commits on the same branch belonging to different code reviews.
When the reviewer comes to look at the code, they have very different goals to either a gateway reviewer or a design evolution reviewer. Here, the reviewer’s goal is to read the code in order to understand what functionality was introduced and to see how the application code is evolving. Of course, the reviewer is still free to make comments and suggestions. If a reviewer doesn’t understand the code for any reason (poor naming, confusing design, missing tests to document features, etc.), this is an important point to raise. Code is likely to be read more often than it is written, and if a team member cannot understand the code right now, while the current state of the code base and the context of the fix or feature are fresh in their mind, imagine how much worse it will be in the future when all of that context is lost.
I can make changes based on these suggestions and commit them straight to the development branch too. I’ll then get the reviewer to look over these changes as well, to see whether they address the original problem.
As always, I can guide the reviewer, and future code-readers, by leaving comments in Upsource to document the more obscure code, or make notes of possible future changes.
By having these comments in Upsource but not attached to a review, future readers of this code will be able to see what my intentions were.
Here, the main beneficiary of the code review is the reviewer, not the code author. It’s a way to encourage ownership of the whole code base by the whole team, without everyone having to work on every section of the code.
It also emphasises code readability. While enforcing standards or design approaches will help readability, these may be too heavyweight for many teams. Peer-reviewing code can improve readability by focusing on small but important things like naming and understandable tests.
This process does not act as a gateway. The code is already merged, and it’s assumed to be working because all the tests pass. Any suggested changes can be made later, and committed directly to the development branch as well.
The reviewer can be junior or new to the team, since they’re not acting as a guardian of the code design. Many teams see this as a good way to get people up to speed on the whole code base at fairly minimal cost to the more experienced developers on the team.
The advantages of this approach:

- No waiting for the review to finish before merging the changes into production-ready code
- Shared code ownership reduces the bus factor for the team
- Focus on code readability can improve the code’s maintainability
- Since the aim is to help all the team understand the code, reviewers can (and should) be developers of all levels, and junior or new developers’ comments are as valuable as those of more experienced team members — your code should be understandable to everyone.
- A reviewer is likely to spend far less time on this kind of review than on a strict-correctness review like a gateway review.
- There are likely to be far fewer iterations than a design evolution review, and probably fewer than a gateway review. This means not only will the review take less time, but also there’s likely to be less work-in-progress since code authors won’t be working on other things while also iterating over the code review.
The downsides of this approach:

- If you do not have automated tests to catch important problems (automated acceptance tests to “prove” the fix/feature does what it should; automated regression tests; automated performance tests for perf-critical systems; automated security tests, etc.) or dedicated departments (QA, performance, security, etc.) to own those areas, it’s possible that code that doesn’t work, or breaks something, could go into production, due to the focus on readability rather than strict correctness².
- Given that the code is already merged and “working”, there may be a temptation to review it “later”, so code reviews (and any subsequent feedback) may happen when the context of the original fix is no longer fresh in the author’s mind.
The anti-patterns for the previous two code review approaches are things that block the review from finishing, and therefore block the code from going into production. Since this approach assumes the code is already going into production, and the review is for learning and sharing, there’s nothing in the process that blocks the code going live.
This process is likely to suit:

- Teams with a high level of trust, where developers only commit code that meets the team’s standards.
- Or code bases that don’t need strict control over the design and can tolerate multiple different programming approaches.
- Teams with shared code ownership
- Teams with all key requirements tested (either automated tests or a QA team)
¹ “Complete” meaning the code passes all appropriate tests (e.g. acceptance tests) and/or meets the acceptance criteria set in the issue.
² Of course, these reviews are still more likely to catch problems than no reviews at all.