Upsource 2017.1.1922 Is Here

Please welcome a fresh update for Upsource 2017.1 (build 1922). This minor release brings a number of bug fixes and improvements, including fixes for the IntelliJ IDEA plugin and for several usability problems. Please check the Release Notes for the full list of changes.

Download the latest Upsource 2017.1.1922 and enjoy all the fresh goodies!

The Drive to Develop!

The JetBrains Upsource Team


Welcome Upsource 2017.1.1892

Introducing a bug-fix update for Upsource 2017.1 (build 1892). This minor update brings a number of useful fixes and enhancements, including several fixes for the IntelliJ IDEA plugin and improvements to the TeamCity integration, such as showing the build status on the review page:

TeamCity Integration

To get more juicy details about the new build, please check the full list of changes in the Release Notes.
Download Upsource 2017.1.1892 today and enjoy all the fresh goodies!

The Drive to Develop!

The JetBrains Upsource Team


Automating Your Code Review Workflow with Upsource

The last three posts on code review workflows show that there’s more than one way to do a code review, and that the approach you take will depend upon what you want to achieve with your reviews and the type of team you’re working with.

In this post, we’ll cover how Upsource aims to make your workflow, whatever it looks like, simpler by automating as much of it as possible. We’ll look at the kinds of things that can easily be automated, when this might be useful, and how to achieve it.

Automatically Create Reviews

If it’s predictable who reviews code in your workflow, you don’t need to manually create reviews and add reviewers; you can have Upsource do this for you. In this example, we’ll assume that you have a single individual who reviews all code that is committed.

This is achieved using Custom Workflows.

Steps:

  1. Log in as an administrator and go to the administration page by clicking the settings icon.
  2. Select the project you want to enable automatic code reviews for, and click its Edit project button.
  3. Click on the Custom workflows link on the right of the page.
  4. Under the “Create reviews automatically” heading, click the Enable check box, then click the Add trigger button that appears.
  5. For our specific example, we’re not going to enter any authors or branch/path information, because we want all code by all authors to be reviewed by our designated reviewer. You can see that Upsource suggests all the users in this project, so you can simply select one from the list.

    Trigger for automatically creating reviews

    Figure 1: Configuring a trigger for creating reviews

  6. Once this has been saved, you can see the details of the trigger:

    Trigger is set

    Figure 2: Viewing the trigger details

  7. Next time anyone checks in code, a code review for this commit will be created automatically, to be reviewed by Reviewer:
Review automatically created

Figure 3: The automatically created review

Notes:

By default, this creates a new review for every commit. You can optionally set the review to automatically track the branch it is on, which may be suitable if the team uses feature branches or a similar workflow.


Figure 4: You can optionally set the created review to track the branch

There are other options to allow commits to be bundled into a specific review, for example using issue IDs or code review IDs to automatically add a commit to a review.

Suitable for:

  • May be used for a gateway code review process, where either one individual usually approves changes to the code, or there’s a set of reviewers who are all automatically added to all reviews. It is less suitable if the author usually makes a series of commits but only wants them to be reviewed when the code is “finished”.
  • May be used for design evolution reviews; in this workflow you may choose to have all commits reviewed by all (or most) of the team.

Vary the Automatically Assigned Reviewers

The previous automation is useful if you a) know which developers will be required as reviewers and optionally b) can assign these based on either branches or code paths. Most teams are more flexible around who they assign to reviews, particularly if they’re following an iterative approach or using code review for knowledge sharing. Upsource can assign randomly selected reviewers from a given pool of suitable developers.

In this example, we’re going to specify a pool of developers who can be randomly assigned to review any code from any developer.

Steps:

  1. We’re going to enable a second automated workflow alongside the previous one. Firstly, we need to set up the automatic creation of code reviews, as we did in the last example. The only difference is that this time we do not assign any reviewers (see Figure 1; in this second example the reviewer field is left empty).
  2. While we’re in the Custom workflows section, we go to the “Assign review participants automatically” heading. Here, we click the Enable check box, then click the Add trigger button that appears.

    Assign reviewers trigger

    Figure 5 – Assign review participants automatically trigger

  3. As usual, Upsource will suggest users for both the “Created by” and “Users” fields. We’re going to leave the “Created by” field blank, because we want this to apply to all developers. But you can imagine that if you wanted this workflow to apply to just the junior/new developers, you could add them to this box.
  4. We’ll select a set of reviewers for all code reviews. This could be just the most senior/experienced people on the team, or could be all developers if you’re aiming for shared code ownership with your reviews.
  5. Now, we have the option of either assigning everyone in this pool to be a reviewer, or getting Upsource to select a random subset of these reviewers for each review. We’re going to configure Upsource to pick one person randomly, so the reviewers don’t get overloaded with too many reviews, and so that everyone gets to see all parts of the code.

    Pick random reviewer

    Figure 6 – Configure Upsource to select a single reviewer randomly from the pool

  6. Click the Save changes button.
  7. Now, whenever anyone checks in code, a review will automatically be created, and a random reviewer assigned.

    Review created

    Figure 7 – Review created and random reviewer assigned

Suitable for:

  • Teams who want developers to break out of silos, encouraging them to review code outside their usual zone
  • Knowledge sharing – by assigning random reviewers to each commit, the whole team can eventually get to see changes to the whole code base.
  • Iterative design – this can be used to assign the whole team as reviewers or watchers to every commit, or it can be used to randomly select reviewers. This latter option may be a good way to reduce some of the noise when using code reviews for design evolution.

Automatically Add Commits to a Review by Issue Number

Many teams already tag every commit with the ID of the bug/feature ticket in the issue tracker. If this is standard practice for your team, and you want to create a code review for every issue, then you can turn on the feature in Upsource that will do this for you.

Steps:

  1. Follow steps 1-3 from the first example to navigate to the Custom workflows page.
  2. Go to “Add revisions to a review automatically” and tick the checkbox. As before, click Add trigger to define the settings.

    Create Issue ID Trigger

    Figure 8 – Create a trigger on Issue ID

  3. Don’t forget to click Save changes.
  4. Unless this is combined with one of the other automated workflows, you will still need to create a code review manually for the first commit whose commit message contains a keyword in the right format (either a Jira or YouTrack issue ID, or a predefined issue ID format). Subsequent commits with this issue ID will then be added to the code review for the issue automatically, as illustrated by the example commit messages below.

    Commits added to Review

    Figure 9 – Commits with the same issue number are added to the same review automatically
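
For illustration, here is what such commit messages might look like. This is only a sketch: PRJ-123 is a made-up Jira/YouTrack-style issue ID, so substitute whatever ID format your tracker (or your predefined pattern) actually uses.

    # first commit: its message carries the issue ID, so once a review exists
    # for PRJ-123, later commits can join it automatically
    git commit -m "PRJ-123: validate empty user names on the signup form"

    # a follow-up commit tagged with the same issue ID is added to the same review
    git commit -m "PRJ-123: add regression test for empty user names"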

Suitable for:

  • Teams who do not work on feature branches but use issue numbers to group their commits.
  • Teams who already tag all commits with issue numbers, or who would like to do this.
  • Applies to all code review workflows.

Automatic Integration with GitHub

This has been covered in previous blog posts. Upsource’s bi-directional integration with GitHub makes it easy to synchronise pull requests (reviews) and comments, so no matter whether a reviewer or author prefers to use the GitHub UI or Upsource, all the information is available in both tools.

Suitable for:

  • Teams who use GitHub
  • Open Source projects using GitHub where external contributors use the GitHub UI and internal team members have access to an Upsource instance.

Summary

Upsource has a number of options to help you automate your code review workflow. Just as many tools automate the routine checks on your code (e.g. static analysis tools and automated tests) and should be used to minimise the load on your code authors and reviewers, there are ways to ease the creation of code reviews according to your workflow.

One of the options described above might suit your needs; more likely, a combination of them will cover the vast majority of cases where code reviews need to be created for your team, so you don’t even need to remember to create a review and assign people to it.


Upsource 2017.2 Roadmap

It’s been a month since we released Upsource 2017.1 and, of course, we are moving forward. Today, we would like to share our plans for the 2017.2 release, which is planned for Q3 2017.


Support for External Inspection Engines

Upsource is renowned for its built-in IntelliJ IDEA engine, which we use, among other things, to show inspections in code. It’s also behind the Code Analysis Summary that you see in reviews. However, there are a lot of tools on the market that specialize in static code analysis, and we don’t want to limit ourselves to inspections from IntelliJ alone. We are working to support external inspection runners, whether that’s ESLint or SonarQube, or the IntelliJ global inspections and ReSharper inspections bundled with TeamCity, with the goal of presenting their results alongside our own inspections.

Suggested Revisions in Reviews

We have employed advanced statistical analysis to suggest revisions that should be added to a review. Similar to the reviewer suggestions that were implemented several releases ago, this is another powerful tool that helps you review code more efficiently.

Achievements

Upsource keeps growing and offering more and more cool features to make code review and code browsing better. We want to make it easier to discover new features and to add some fun to your interactions with Upsource. We plan to introduce an achievements/badges system to accomplish these goals and open the door to other engaging and useful experiences.

“Discussions” page

We are working on a new page that lists all of the discussions that are happening in a project chronologically and lets you search and filter the discussions.

NPM Support

To improve “Go to declaration” and “Find usages” in JavaScript code, we are working to support TypeScript definitions for packages that are listed in your package.json file.
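
For context, the package.json in question is the standard npm manifest; a purely illustrative example might look like this (the package names and versions are placeholders, not a recommendation). Upsource would use the TypeScript definitions available for packages declared this way to resolve symbols in the JavaScript code that uses them.

    {
      "name": "sample-frontend",
      "version": "1.0.0",
      "dependencies": {
        "lodash": "^4.17.4",
        "react": "^15.5.4"
      }
    }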

Better Python Support

While basic Python support is available in Upsource, pip isn’t. We are working to address that.

Reactions

Everyone loves reactions in Slack, right? Reactions help you give feedback in a fast and compact form, saving time for everyone. We are eager to provide you with such an ability in Upsource.

We share this public roadmap to give you insight into our current direction. Unfortunately, we can’t promise to include everything listed here in the upcoming version, but we’ll definitely do our best. Also, we’re moving to shorter release cycles and aim to provide you with shiny new features as soon as they are ready. So, stay tuned for updates, and don’t forget to share your feedback by adding your comments here, using our issue tracker, or just by contacting our support team.

Your Upsource Team

The Drive to Develop!


Upsource 2017.1 Bug Fix Is Out

Upsource 2017.1.1821 is available! This minor update includes a number of important bug fixes, as well as a couple of enhancements, including Elixir and Twig syntax highlighting support.


Check out the full Release Notes for more details.

Download the latest Upsource version today to enjoy even more productive code review.


Code Review for Knowledge Sharing

In the last two posts we talked a bit about how effective those approaches are at helping the author learn and improve, because both of them focus on getting the code correct. The Design Evolution approach additionally mentions the value of the whole team seeing the evolution of the code.

This third approach for code reviews puts learning at the centre of the process — the purpose is not to check if the code is correct, but to share with other developers what changes have been made.

In this example, I’m assuming a slightly different approach to development. This team develops against master (or some other centralised development branch), and submits small self-contained commits that pass all the automated tests.

The Process

In this example we assume the team is not working on feature branches, but instead commits directly to some main development trunk. In these circumstances it’s important to have a CI system that runs all relevant checks, and to address any failures there with the highest priority.

When I work on a fix or feature, as usual I’ll make changes in small, self-contained commits that pass all the tests. This should be the default way of working, of course, but when you’re committing to the main development branch it is especially important: it ensures the whole team isn’t blocked by some breakage you introduced. Upsource integrates with your CI server so you can see at a glance which revisions passed all the checks.

Upsource shows the result of the Continuous Integration build

In this code review process, I can either make each small change a tiny review in its own right, or I can batch them up into a single review when the feature is complete1. The first approach has the advantage that the review will be small, but there will be many of them. The second approach means the review will probably be bigger, but gives me much more freedom to experiment and evolve my design during implementation.

In Upsource, I can either choose to add each revision to the review via the UI


…or I can use the IntelliJ IDEA plugin to select which code review to add the commit to…

…or I can tag the commit with an issue number or code review ID to have it automatically added to a review.

Using this process, we’ll see groups of commits on the same branch belonging to different code reviews.

When the reviewer comes to look at the code, they have very different goals to either a gateway reviewer or a design evolution reviewer. Here, the reviewer’s goal is to read the code to understand what functionality was introduced and to see how the application code is evolving. Of course, the reviewer is still free to make comments and suggestions. If a reviewer doesn’t understand the code for any reason (poor naming, confusing design, missing tests to document features, etc.) this is an important point to raise — code is likely to be read more often than written, and if a team member cannot understand the code right now, when the current state of the code base and the context of the need for the fix or feature is fresh in their mind, imagine how much worse it will be in the future when all of this context is lost.

I can make changes based on these suggestions and also commit these straight into the development branch. I’ll get the reviewer to look over these changes too, to see if they address the original problem.

As always, I can guide the reviewer, and future code-readers, by leaving comments in Upsource to document the more obscure code, or make notes of possible future changes.

By having these comments in Upsource but not attached to a review, future readers of this code will be able to see what my intentions were.

Discussion

Here the main beneficiary of the code review is the reviewer, not the code author. It’s a way to encourage ownership of the whole code base by the whole team, without everyone having to work on all sections of the code.

It also emphasises code readability. While enforcing standards or design approaches will help readability, these may be too heavy-weight for many teams. Peer-reviewing code can improve the readability by focusing on small but important things like naming and understandable tests.

This process does not act as a gateway. The code is already merged and is assumed to be working, since all tests pass. Any suggested changes can be made later, and made directly to the development branch as well.

The reviewer can be junior or new; they’re not acting as the guardian of the code design. Many teams see this as a good way to get people up to speed on their whole code base with fairly minimal cost to the more experienced developers on the team.

Advantages

  • No waiting for the review to finish before merging the changes into production-ready code
  • Shared code ownership reduces the bus factor for the team
  • Focus on code readability can improve the code’s maintainability
  • Since the aim is to help all the team understand the code, reviewers can (and should) be developers of all levels, and junior or new developers’ comments are as valuable as those of more experienced team members — your code should be understandable to everyone.
  • A reviewer is likely to spend far less time on this kind of review than on a strict-correctness review like a gateway review.
  • There are likely to be far fewer iterations than a design evolution review, and probably fewer than a gateway review. This means not only will the review take less time, but also there’s likely to be less work-in-progress since code authors won’t be working on other things while also iterating over the code review.

Disadvantages

  • If you do not have automated tests to catch important things (automated acceptance tests to “prove” the fix/feature does what it should; automated regression tests; automated performance tests for perf-critical systems; automated security tests, etc.) or dedicated departments (QA, performance, security, etc.) to own those areas, it’s possible that code that doesn’t work, or breaks something, could go into production, due to the focus on readability rather than strict correctness2.
  • Given the code is already merged and “working”, there may be a temptation to review the code “later”, so code reviews (and the subsequent feedback) may happen when the context of the original fix is no longer so fresh in the author’s mind.

Anti-patterns

The anti-patterns for the previous two code reviews are things that block the code review finishing, and therefore block the code going into production. Since this approach assumes the code is going into production and the review is for learning and sharing, there’s nothing in the code review process that blocks the code going live.

Suitable for

  • Teams with the highest level of trust that the developers only commit code that meets the team’s standards.
  • Or code bases that don’t need strict control over the design and can tolerate multiple different programming approaches.
  • Teams with shared code ownership
  • Teams with all key requirements tested (either automated tests or a QA team)

 

1 “Complete” meaning passes all appropriate tests, e.g. acceptance tests, and/or meets the acceptance criteria set in the issue
2 Of course, these reviews are still more likely to catch problems than no reviews at all


Upsource 2017.1 Is Out


Please welcome Upsource 2017.1, the first major product update this year! We’ve worked hard over the last several months to deliver a set of fresh goodies to make your code review process smoother and more productive. We believe that productive teamwork is the key to a successful project, and we are here to make your team’s code collaboration soar higher than ever.

This time we introduce cross-project full-text search, a revised Reviews page, review progress, browser notifications, squash/rebase support, new workflows, and more.
Watch a short What’s New video to see the main Upsource 2017.1 goodies in action.

And now let’s take a closer look at new features of 2017.1.

Cross-Project Full-Text Search

Full text search

Upsource has always been an expert in searching and navigating inside the codebase. To expand our search capabilities, we’ve implemented cross-project full-text search. Upsource searches across all branches and even finds deleted code.

If you’re considering a refactoring that affects multiple projects, want to retrieve a piece of code you deleted last week, or are wondering whether someone from your team or company is already using a new web API that appeared in Chrome last week, Upsource will definitely help.

Review Suggestions in Revisions List

Review Suggestions
Upsource now detects situations when someone is modifying files that were authored by someone else in the past, and suggests creating a review. In one click, the review will be created and the owner of the code will be assigned to review the changes.

Review Progress

Review Progress

Have you ever experienced a situation where reviews you create are simply ignored by the reviewers? Do they really ignore you, or are they just too busy? Now you can check how many files were viewed and when your colleague last looked at the review: simply open the review and hover over the reviewer’s avatar.

Merge Status on Reviews Page

Merge branch without conflicts

Upsource 2017.1 checks and informs you whether a branch can be merged without conflicts. Please note that this feature works only for GitHub projects.

Browser Notifications

Browser Notifications

Now you can choose to receive notifications in your browser to keep track of important events without switching from your IDE. You can get notified when a review is created, reopened, closed or accepted, and more. Note that you need to have an Upsource tab open in your browser.

Squash/Rebase Support

Squash/Rebase Support

Upsource 2017.1 supports squashed Git revisions, and updates the history to display the new commit IDs and commit messages from the VCS. The review timeline also explains why old revisions have disappeared from the review.

New Code Review Workflows

We’ve added two new workflows to automate your code review processes:

  • Close Reviews Automatically: closes the review automatically after all reviewers have accepted changes.
  • Resolve Discussion Automatically: marks all discussions as resolved upon closing the review.

Code Review Workflows Enhancements

  • Add revisions to reviews automatically: we’ve added a new option that lets you ignore merge commits.
  • Create a review automatically if the commit message contains an issue ID.

You can find these workflows under Administration->Projects->Edit project->Custom Workflows.

GitLab Support

We’ve added basic GitLab support, which lets you see pull requests from GitLab in Upsource. At the moment Upsource supports only gitlab.com. This is an initial step towards full-scale GitLab support. We’ll keep working to extend this functionality in future releases.

Docker support

Upsource 2017.1 is available as a Docker image. Enjoy easy setup with recommended system settings applied out of the box!
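
As a rough sketch of what getting started might look like, assuming the image is published as jetbrains/upsource (the volume paths, port and version tag below are illustrative; check the image’s documentation on Docker Hub for the exact options):

    docker run -it --name upsource \
      -v /srv/upsource/data:/opt/upsource/data \
      -v /srv/upsource/conf:/opt/upsource/conf \
      -v /srv/upsource/logs:/opt/upsource/logs \
      -v /srv/upsource/backups:/opt/upsource/backups \
      -p 8080:8080 \
      jetbrains/upsource:2017.1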

Enhancements

Revised Reviews Page

Revised Reviews Page
The Reviews page now sports five predefined searches that let you quickly jump to reviews you are interested in: Created, Assigned, Rejected, Mentioned, and Completed. If you’ve ever felt overwhelmed with reviews requiring your attention, the new design will help you focus.

Redesigned Administration Area

Redesigned Create Project

We’ve revised and redesigned the Create/edit project form. What was previously one long page is now a series of tabs, with all the fields rearranged in a more logical way.

New User Profile

User Profile
Please welcome a new, fully interactive User Profile! Open your commits, check and comment on the changes, create issues and add labels right from the Recent Activity feed. Your profile page can now serve as your landing page, providing all the details and actions you need to process your commits and reviews.

Support for Gradle advanced settings

We’ve enhanced code intelligence for Gradle so that you can configure Gradle properties and a Gradle init script. For example, you can specify your proxy server location via Gradle properties.
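
For example, standard Gradle proxy settings can be supplied in a gradle.properties file like the one below (the host and port are placeholders):

    # gradle.properties
    systemProp.http.proxyHost=proxy.example.com
    systemProp.http.proxyPort=8080
    systemProp.https.proxyHost=proxy.example.com
    systemProp.https.proxyPort=8080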

Support for Android Studio Projects

Upsource 2017.1 handles Android projects just like pure Java, Kotlin, and other projects: it now resolves dependencies and creates a code model for Android projects.

Query Assist

Query assist
In this version we’ve added two new query keywords, closed-by and closed, to filter reviews by the user who closed the review and by the closing date.
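
As an illustration, the new keywords can be used along these lines (the username and date below are placeholders, and the exact value syntax may differ from this sketch):

    closed-by: jane.doe
    closed: 2017-05-12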

Fira Code

If you like your functional code extra-fancy, Fira Code, the font added in IntelliJ IDEA 2016.3, is now available in Upsource.

The brand-new and shiny Upsource 2017.1 is waiting for your feedback! Download the latest version today and enjoy all the fresh goodies. Please check the Release Notes if you are thirsty for more details.

The Upsource Team
The Drive to Develop


Code Review for Design Evolution

In the last post we covered a review process where the code review is performed after all the code is written with the goal of approving the suggested changes.

If getting the design right is important to your team, then a collaborative and iterative approach may be appropriate — the code can be checked frequently while it’s being built, so design feedback comes at the right time.

In this example, I’m going to assume the team works using feature branches. As with anything, this approach has its supporters and detractors, but it’s a reasonable model to showcase iterative code reviews.

The Process

In our example, I’ll be the developer who works on a bug fix or feature. I create a new branch for this fix/feature, and sketch out a rough start to the solution.

This could be:

  • Implementing the first task of the story
  • Prototyping an end-to-end implementation
  • Writing automated acceptance tests to document my understanding of the problem and the possible edge cases.

Ideally, this is contained in a single commit — commits should be small and frequent, so this first step is not a large amount of work. The commit should still compile and pass the automated checks, which may mean ignoring tests or inserting feature toggles. I commit this initial sketch to my new branch.
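
As a small illustration of the feature-toggle idea (a sketch only: the class and flag names are invented for this example, and in a real project the flag would come from configuration rather than a constant):

    public class ReportService {

        // Toggle for the half-finished export path; stays off until the feature is ready.
        private static final boolean NEW_EXPORT_ENABLED = false;

        public String export(String reportData) {
            if (NEW_EXPORT_ENABLED) {
                return exportWithNewPipeline(reportData); // work in progress, reviewed early
            }
            return exportLegacy(reportData); // existing, tested behaviour
        }

        private String exportWithNewPipeline(String reportData) {
            // First sketch committed to the branch for early review.
            throw new UnsupportedOperationException("new export pipeline not implemented yet");
        }

        private String exportLegacy(String reportData) {
            return "LEGACY:" + reportData;
        }
    }

This keeps every commit compiling and green on CI while the new code path is still being written and reviewed.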

Initial commit

Using Upsource, I create a branch review for the feature branch.

Create branch review

I then invite reviewers. With this type of review, I may want feedback from a number of developers; I’m not simply looking for sign-off. With Upsource, I can select watchers as well as reviewers, who can contribute their comments if they like.

Reviewers and watchers added

I can use Upsource to make notes on known assumptions, to ask questions, and to explain my choice of approach.

Code author comments

The goal of reviewers at this stage is to see if my initial assumptions are correct or complete and to check if I’m going in the right direction. The first comments are likely to be questions, some suggestions, or notes on potential gotchas, rather than critiques of the specifics of the code, which is likely to evolve.

Early suggestions

While this commit is being reviewed, I almost certainly have more ideas to implement and more tests to write, so I’ll probably continue working on this branch even while the initial code is waiting for review.

I should be quick to respond to any questions, as these not only help reviewers understand the problem or code better, but may also help me to think about the problem differently. And suggestions from the review are incorporated during implementation.

When I’ve completed the next set of changes, I commit this. It doesn’t have to address everything raised in the initial code review — this is an iterative approach and some things may be outstanding until an answer is found or a particular task is worked on.

Committing more changes

Because I created a branch review, all commits to that branch are automatically added to the review. Reviewers and watchers will be notified of the update, and Upsource highlights in bold any changes and lets you select which commits to view.

New changes highlighted

As the code is evolving from a sketch to a working piece, reviewers should be checking: is the new code going into the correct services / modules / classes? Is the design aligned with the code base and the team’s current practices? Have earlier questions or concerns been answered?

Comments about design

In the last post we showed how labels on comments make it easier for an author to understand the context. Upsource also makes it easy to see which discussions are still active and which have been settled, as authors or reviewers can “Resolve” a discussion.

Resolve discussions

The discussion is still around, so you can see the history of decisions, but you can more easily focus on the things that haven’t been decided or addressed.

Show only relevant discussions

The process of soliciting feedback via code review and acting upon it then iterates until the code is “done”1. At that point, one or more of the reviewers will accept the code.

Discussion

This process addresses the main problem in the Gateway process — in a Design Evolution review, feedback on the design is given early so there’s plenty of opportunity to change direction.

In a review like this, the reviewer(s) doesn’t have to be more senior, as the aim is not to say simply “yes” or “no” to merging the code. You can invite the whole team if you want; everyone is in a good position to ask questions, think of edge cases and make suggestions. The team does need to have a policy in place about who has the last say in design decisions to prevent deadlock, and you need at least one person who’s responsible for declaring the fix/feature is “done”1. Their job is potentially easier than that of a reviewer in a gateway review, as they’ve been able to rely on several pairs of eyes to identify potential problems.

These reviews can generate a lot of noise and opinions, and developers on teams practising these types of reviews may be participating in many reviews, so there’s a context-switching cost. To address this problem, the team may batch together work in the same area to reduce the cost of context switching. This can improve the overall design, since the whole team is collaborating and may identify a design that works for a range of cases. However, if the team is using feature branches this does increase the risk of merge conflicts.

Review feedback needs to happen quickly in these iterative reviews, so as not to block the developer or leave it until too late to critique an approach.

Advantages

  • Design issues can be caught early and addressed before the code is fully implemented
  • Each review iteration contains a small number of changes, so should be quick to review
  • No pressure to find defects or showstoppers in the code; the goal is to check the overall approach
  • Varied opinions from the team can lead to more information and create a better design
  • If reviews include many developers in the team, everyone learns the whole code base and sees how it’s evolving
  • It’s a good approach for learning — you see other people’s thought processes, how developers with different experiences approach problems, and how others spot opportunities for design patterns or architectural approaches.

Disadvantages

  • The length of the code review is determined by how long it takes to implement the feature; it’s not a single pass through a completed piece of work, and reviewers are invited to check every commit added to the review
  • Developers are invited to many reviews that need their attention, as well as working on their own code
  • Too many differing opinions without clear leadership can prevent progress

Anti-patterns

If review feedback is not provided quickly, the developer is either going to be a) blocked, so they may start another piece of work, leading to more work-in-progress or b) continuing to code and submit commits, which may lead to all the feedback being received at or near the end of the implementation. If feedback is not received and acted upon quickly, the review will have the same disadvantages as the gateway code review.

Another failure case is the opposite problem — too many opinions, possibly contradictory feedback, and unproductive arguments will all block progress on the fix/feature.

Alternatively, you may find that one of the reviewers has a clear idea of the direction the code should take but cannot communicate this to the author. The review can end up in a ping-pong between the author trying to implement the review feedback and the reviewer suggesting changes.

To overcome these problems:

  • Reviewers should give feedback quickly and code authors should act upon the feedback as soon as they can
  • Someone needs to have the power to break deadlock. Either the code author can be responsible for deciding what to implement (implying a level of trust in all developers), or the team can allocate a sponsor who can break deadlocks in design discussions.
  • Reviewers with clear design ideas can be encouraged to submit code directly to the branch.
  • “Done” needs to be defined. In the same way a Gateway review needs to define what’s a showstopper and deserves a “no”, an Evolutionary review relies on knowing when the code is ready and deserves a “yes”.

Suitable for

  • Projects where getting the design right is key. It may be better to incur the cost of longer and noisier reviews than the cost of adding tech debt to the code base
  • Teams where shared code ownership is important
  • Developers comfortable working with iterative design
  • Developers who are comfortable exposing unfinished ideas to their peers
  • Teams with a level of trust where all participants feel comfortable leaving feedback
  • Teams who have one or more people who can break deadlock on design decisions
  • Teams who prefer to coach new/junior team members by collaborating with them on their design and encouraging them to view and critique other code

 

1 Someone needs to decide what “done” means. Maybe all tests passing and all “potential bug” comments addressed. Or maybe the testers are happy with it and the team is satisfied with the readability of the code. Of course, your definition depends upon what the team values — speed to production; correctness; maintainability; etc. Like the CAP theorem, you can’t have all of them; you have to understand the implications of your preferences.


Code Review As A Gateway

We’ve talked a lot about what to look for in a Code Review, but we haven’t covered the process at all. There are many different reasons to perform code reviews in your team or organisation, and the process you choose should be driven by what you’re trying to achieve with the reviews.

The first example we’re going to explore is Code Review as a Gateway. In this process, a developer picks up or is assigned a task, implements the code to the best of his/her ability, and when s/he’s “finished”, submits it for review. The reviewer checks the code and, if everything is OK, the code is merged into the main development branch.

This was a fairly typical approach in organisations with a formal software development process – each stage had a gateway which needed one or more people to sign off to say the work had been done correctly. In these days of more collaborative, agile approaches, however, this is less common, particularly if the team is colocated. But it is a similar process to that of GitHub pull requests, so that’s the example we’re going to explore.

The process

In this example, we’re working on an open source project hosted on GitHub. I have decided to pick up one of the issues listed. Generally it’s a good idea to mention on the mailing list, or discuss with the project owner, that you want to work on this, and maybe have a conversation about the approach you want to take. The project owner may be able to point you to some example code that fits the pattern they want, and/or point out some gotchas to be aware of.

I’ll go away and implement the fix or feature. If I have doubts I’ll probably chat to the owner either on the ticket or via chat/email, but eventually I’ll submit a pull request with what I think is the complete working code.

GitHub pull request

In Upsource, pull requests are automatically converted into Code Reviews:

New Code Review from Pull Request

At this point, the project owner reviews the whole thing with one question in mind – is it good enough to merge? Usually there will be some comments, questions or feedback from the reviewer. Like GitHub, Upsource lets you leave these comments either at the review level:

Review Level Comment

…or at the code level:

Code Level Comment

Comments in Upsource are automatically synched with GitHub and vice versa, so reviewers are free to comment in either tool.

Comments are synced with GitHub

As the code author, it’s my job to look at the comments and either implement suggestions, defend my decisions, or ask questions. In Upsource, comment labels can help clarify whether this is something that really needs to be addressed before the code is merged, or if it’s simply a question to be answered or a comment to note.

Labels for comments

If there are comments that need to be addressed by updating the code, then I will make the changes and then add them to this code review.

Make code changes

It’s possible that we then begin a cycle of suggesting improvements or additions and receiving updated code. In theory, on each iteration there are fewer comments, and fewer/smaller changes that need to be made. Finally, the reviewer should be happy enough with the code to merge it, closing the pull request.

Merge

You can even merge from Upsource

Merge via Upsource

Discussion

Because this is a gateway type of process, often the reviewer is more senior, or at least in a position to have final veto over the code. Therefore, during a code review of this type, the author is often defending their design and implementation, and the reviewer is trying to think of reasons why this should not be merged into the main code base.

This can lead to a certain level of antagonism: as the code author, you’ve already invested time writing the code, you’ve thought it through to the best of your ability, and you have a solution that you think works; as a reviewer, it seems that your goal is to think of everything that could be wrong with the code before it is merged in, so you can prevent bad/incorrect/suboptimal code sneaking into your code base.

Many of us have been through these sorts of reviews, and I suspect it’s a reason people react negatively to the idea of code reviews – it feels like a combative experience where no one wins.

This type of review process can work well, however, when the goal of the reviewer is not to prevent bad code from going into the code base, but to add functionality to the project/application, i.e. rewarding reviewers on how quickly they merge in the code. Of course, the danger here is that reviewers will not look at the code properly and simply say “yes”. But given how much can be automated (compilation, tests, inspections, static analysis, performance, security etc), in this kind of review the emphasis could be on making sure the right checks have been automated for this code (e.g. ensuring unit / functional / performance / security tests exist) and ensuring that those checks are testing the right things.

What gateway reviews are really horrible for is checking the design of the code. Really it’s too late at that stage – either the poor author has to throw everything away and start again (is there anything more demoralising as a developer?) or the code needs to be merged in even with its non-standard, non-optimal design. If design and architecture matter to your team and your application, these things need to be discussed before starting the implementation. A gateway code review then simply checks the code follows the ideas discussed, and if it doesn’t, the reviewer assumes the code author had a good reason for this, and perhaps starts a discussion about why the original design expectations didn’t match what was actually implemented.

Let’s distill some of this discussion into the pros and cons of this approach to code reviews.

Advantages

  • Someone on the team is ensuring that code that’s merged into the main code base meets the required levels of quality (however that is measured).
  • If the process is geared towards accepting, rather than rejecting, code1, these types of reviews can be relatively quick and straightforward.
  • If the process is geared towards accepting code, and either expectations around code style/approaches are very clear or a certain amount of inconsistency within the code base can be tolerated, this process can encourage contributions from a wider community of developers (e.g. open source projects on GitHub)

Disadvantages

  • The major disadvantage is that the work has already been done. So either the reviewer needs to recognise that the work has been implemented and therefore focus their feedback on only preventing major problems from creeping in, or the code author needs to accept that re-work from a code review could be extensive.
  • If the code review is focussed on getting the code absolutely correct2, each iteration of comments/code changes can be fairly long, and there can be quite a lot of these iterations.
  • Handing code over to be reviewed blocks progress, so developers following this kind of process often have more than one fix/feature in progress, sometimes several. A lot of work-in-progress and a lot of context switching can be detrimental to a team’s productivity.
  • This is not the best way to up-skill developers. Suggestions given once the code is written can be seen as “you did this wrong” instead of “there’s a nice way of doing this thing”, and developers are not as receptive to these suggestions as they are to those received while the solution is being created.

Anti-patterns

When gateway code reviews are focussed on getting the code absolutely correct2, these long and plentiful cycles of review/change/re-review/more-changes can take longer than writing the initial code. That makes authors and reviewers dread the death-march of the code review – authors leave it longer before requesting a review, meaning much more code goes into each review, and reviewers wait longer before opening and commenting on reviews, both of which lead to even longer reviews. Without clear guidelines on what a reviewer is supposed to be looking for and what is (and is not) a showstopper, even the reviewer can be unable to bring this cycle to a satisfactory end.

To overcome these problems:

  • Code reviews should contain a small set of changes that’s easy to review.
  • Reviewers should have a clear set of guidelines on what they should be looking for.
  • The goal should be to accept the code unless a showstopper is identified.
  • Code can be annotated by the author in advance in the code review tool to explain approaches or justify design decisions up front to preempt questions.

Also remember to praise good solutions and nice code, as well as drawing attention to potential problems. This can alleviate the feeling that code reviews are all about criticism.

Suitable for

  • Teams with high levels of trust with a shared vision of the code – it’s much easier to accept the code of developers you trust.
  • Code bases that don’t need strict control over the design and can tolerate multiple different programming approaches – it’s too late to discuss design or architecture in a gateway code review.
  • Teams with alternative channels to discuss design and architecture, e.g. via mail, chat, issue tracker, or even in person, and a process that either prevents implementation before a design is agreed, or actively allows collaboration and feedback during implementation.
  • Projects where waste can be tolerated – if it’s OK to throw away contributions. This way an initial “reject” won’t lead to lots of explanations and re-work, but possibly to an alternative contribution from a different developer.

 

1 Note: a good way to achieve this is to make sure everyone is clear and consistent with what they’re looking for during the review, and what they consider to be a showstopper.

2 For some definition of correct, from the reviewer’s point of view.


Upsource 3.5 (Build 3616) Is Available

Hi Everyone!


Christmas time is just around the corner, and we have a nice little surprise for you: welcome a fresh Upsource 3.5.3616. This update brings a number of bug fixes, addressing issues such as the star/unstar project problem and the missing diff summary in some reviews.

Check the Release Notes for more details.

Download the latest Upsource 3.5.3616 today and enjoy all the new goodies it brings.

P.S. We wish you a merry Christmas and a very happy and productive New Year!
