Upsource
Code Review and Project Analytics
Webinar recording: “Code Review Best Practices”
The recording of our webinar “Code Review Best Practices” is now available on the JetBrains YouTube channel.
In the past, we’ve looked at the specifics of what to look for in a code review and at code review best practices in Upsource. In this webinar recording we take a step back to look at what we’re trying to achieve with our code reviews, so we can maximize the value of the review and minimize the pain.
https://youtu.be/a9_0UUUNt-Y
About the presenter:
Trisha has developed Java applications for a range of industries, including finance, manufacturing, software and non-profit, for companies of all sizes. She has expertise in high-performance Java systems, is passionate about enabling developer productivity, and dabbles with Open Source development. Trisha is a Java Champion and a Developer Advocate for Upsource and IntelliJ IDEA.
Q&A
As promised, here are Trisha’s responses to the questions that were left unanswered during the webinar.
Q: How do I get developers to see code reviews as useful?
If you’re already doing reviews and the developers don’t think they’re helpful, then you need to understand what value you want the reviews to deliver and make sure the team understands it too. You may need to tweak the review process so it delivers that value. There’s also the possibility that if you’re doing reviews and the developers don’t see them as useful, they really might not be useful – take a step back and figure out what’s going wrong.
If you’re not already doing reviews, the best way to persuade developers that code reviews are a good thing is to make sure they get something out of them. For example, you could focus on the learning aspect. Make the objective of the reviews “to share knowledge and learn from each other”. This way the reviewers get something out of the code (they learn stuff) and the authors get something out of it (they taught something to someone). Once code reviews are part of the process and developers are getting value from them, you should start to see improvements in quality/readability. At this point, if you need to, you can tweak the process’s goals.
Q: Is it helpful to have output from Sonar in the code review tool? Or does it just add noise?
Yes, I think it’s useful for output from automated tools to be visible as part of the review process. For example, Upsource shows the results of the TeamCity build – this way, a reviewer doesn’t waste time reviewing code that doesn’t compile or has failing tests.
Similarly for static analysis tools – their output can save a reviewer from checking these things manually, and if some of the results are a matter of opinion, the reviewer can comment on whether it’s worth fixing the issue or whether it’s not that important.
Q: I hear conflicting views on annotations. Why do some say that annotations should be limited or avoided altogether?
As an author, I believe you should annotate your code in the context of code review comments, not code comments – e.g. comments in Upsource, not comments in the Java code. You can explain which aspects you considered when you picked a particular design/approach and the trade-offs that you’re aware of. This can reduce the number of questions a reviewer might have about the code, as they don’t need to make assumptions about what you have considered.
Some people don’t like annotations because the code should be “self-documenting” – it should be clear from the code what it’s doing. And I agree, but the code should say what it’s doing, whereas annotations can say why you chose to do it that way.
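To make that distinction concrete, here is a small, made-up Java example of mine (not from the webinar; the class and method names are hypothetical). The source states what the code does through its names; the why belongs in an author annotation on the review.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class InvoiceCalculator {

    // The code itself says *what* it does: the method and variable names carry
    // the intent, so no "adds tax to the subtotal" comment is needed.
    public BigDecimal totalWithTax(BigDecimal subtotal, BigDecimal taxRate) {
        BigDecimal tax = subtotal.multiply(taxRate)
                                 .setScale(2, RoundingMode.HALF_UP);
        return subtotal.add(tax);
    }
}
```

The why – for example, “I chose BigDecimal over double because these are monetary amounts and rounding matters” – would then go into an author comment on the review in Upsource, where it helps the reviewer without cluttering the source.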
Q: How do we prevent obvious rule breaks?
This is where automation comes in. There are lots of static analysis tools which can check things like formatting, common bugs, use of the wrong idioms, and use of the wrong language in the wrong place (e.g. a language you might use for testing making it into production code).
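As a deliberately simple, made-up illustration, this is the kind of obvious rule break that tools such as IntelliJ IDEA inspections or Error Prone flag automatically, so a human reviewer never has to point it out:

```java
public class UserLookup {

    // An "obvious rule break" that static analysis catches: comparing strings
    // with == tests reference equality, not content equality.
    public boolean isAdmin(String role) {
        return role == "admin";   // flagged automatically by static analysis
    }

    // What the automated check pushes you towards instead:
    public boolean isAdminSafely(String role) {
        return "admin".equals(role);   // null-safe content comparison
    }
}
```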
Q: Should a junior developer also be the gatekeeper when they are doing the review? They may lack experience.
‘It depends’, of course. If the criterion for passing the review is “even the juniors can understand the code”, then, of course, she/he can be the gatekeeper. But usually, a formal gateway review is set up precisely because it needs to be a senior person who says yes to the code.
A combination may be possible – if juniors, or the whole team, or some subset, have looked at the code, made suggestions, and improvements have been made, then the gatekeeper (presumably a busy senior person) needs to do far less work as they only need to make suggestions based on things others haven’t thought of yet.
Q: Is there a potential for power imbalance in the gateway review model (e.g. indirect discrimination)? Do you have any ideas on how to tackle this, if this is the review model used?
Yes, there is definitely a potential for power imbalance. And this is where some of the anti-patterns come in, if the gatekeeper is always looking for problems, looking to say no. One way to get around this is to have more than one reviewer so they can keep each other honest, or you can rotate reviewers, so the burden is not just on a single gatekeeper. It’s also important that gatekeepers have their code reviewed too, to help them understand what it’s like to be reviewed.
We often think that it’s the gatekeeper’s responsibility to go through all the code meticulously, find every potential error and make sure everything is fixed before approving. I believe, however, that the gatekeeper should, instead, be checking that the code has been checked (by the automated tools, by the author’s peers) and that there’s nothing obviously wrong with it. Their job should be to approve it, not block it.
Q: How do you deal with an aggressive/unhelpful code review?
Having clear objectives for the review and the review process should a) prevent unhelpful code reviews and b) highlight when a reviewer is being aggressive or unfair rather than following the objectives. Having more than one reviewer may also help (see the answer to the previous question).
There is a problem if the person who sets all the rules is also the person who does all the reviews. Under these circumstances, you may have to sit down with that person to let them know your concerns and get some clarity over the review objectives, or you may even need outside help or to speak to that person’s manager. Remember that a code review should be helping code to get into production, not preventing code from reaching your users; you may need to remind an unhelpful reviewer’s manager of this fact.
Q: Who should explain/show the code? The one that wrote it or the one that does the review?
I had assumed the code author would walk through the code and show it to the reviewer, but actually, you raise a good point: there’s definitely a use case for the reviewer to walk through the code in front of the author, to check whether they’ve understood it correctly. If the reviewer is less experienced, having them do this can be especially worthwhile, and it’s a good way for code authors to see how their code is read and understood.
Q: Is there a specific format/tone for code review comments? Anything to be avoided?
Yes, you should be nice! I think a good basic guideline is don’t say something you wouldn’t say to a person’s face. When writing comments, try to think about the person who’s going to read them (and how they might “hear” it) and when reading comments try to bear in mind that the commenter is (probably) trying to help and may have been short on time when they wrote it.
Be constructive, be specific. Be clear what your issue/question is, and be clear on what you want the reader to do – answer the question, make a change, or don’t make a change but learn from it for the future.
Q: How do you do code reviews on a big project with no automated tests?
If you have no automated tests, then you really need code reviews! In fact, a project like this can use the code review process to focus on the things you want to change and make those changes with every bug fix and every new feature – e.g. every code review should include automated tests, and address some tech debt (maybe separating business logic from infrastructure code).
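One way to start is sketched below with JUnit 5 (my assumption – the webinar doesn’t prescribe a framework, and ShippingCostCalculator is a hypothetical legacy class): before making the change under review, pin down the current behaviour with a characterization test, so each review adds at least one automated test to lean on.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A characterization test: it records what the legacy code does *today*,
// giving the change under review an automated safety net.
class ShippingCostCalculatorTest {

    @Test
    void existingBehaviourForHeavyParcelsIsPreserved() {
        ShippingCostCalculator calculator = new ShippingCostCalculator();

        // The expected value was captured by running the current production
        // code, not derived from a spec - that is the point of the test.
        assertEquals(42.50, calculator.costFor(12.0 /* kg */, "EU"), 0.001);
    }
}
```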
It can be difficult to sell the business on this, as you might find these code reviews start to slow down the process. If you can try doing the code reviews for a period of time (maybe 3 months?), you should start to see an improvement in quality (e.g. a reduction in bugs) that will help people buy into continuing with them.
Q: How do you include remote workers in a pair-programming like experience?
I think you have to settle for some really excellent screen-sharing tools, particularly with the ability for either user to take control of the cursor. Without being able to share control, you’ve basically got a driver and a bored navigator.
You could also try a really lightweight code review process which is similar to pairing, like maybe reviewing every half day. This isn’t the same thing as having conversations at the desk, of course. If anyone has alternative answers to this question, I would love to hear them in the comments.