Software is written to accomplish goals and to communicate with other developers. Code reviews are about people reading code. I propose an approach to code reviews designed to maximize the value of humans reading code.

This code review methodology does not focus on catching bugs per se. No code review process can be expected to find all bugs in a piece of software. This approach instead focuses on locating faulty assumptions and uncovering unclear solutions. It minimizes the time cost of code reviews while increasing the team’s exposure to the codebase.

Goals

Here are the goals of this approach to code reviews:

  • Maximize supportability of code by:
    • Reaching consensus on naming and approach
  • Share knowledge throughout the team
  • Find test coverage gaps
  • Find logical errors

These are the sorts of things that the ad-hoc “another set of eyes” approach can give you. And in truth, we don’t hope for this system to give us much more than that. We are human after all.

Note that we aren’t enforcing code style here. I consider that a problem solved by developer tools such as linters and IDEs. Although humans can adapt to spotting whitespace differences and missing semicolons, I’d rather see that energy go towards the more interesting problems.
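For teams that haven’t automated this yet, here is a minimal sketch of what I mean, assuming a JavaScript project that already uses ESLint and Prettier (the tool names are assumptions; substitute whatever fits your stack). A pre-commit hook can reject style problems before they ever reach a reviewer:

    #!/bin/sh
    # .git/hooks/pre-commit -- reject commits that fail formatting or lint checks.
    # Tool choices here are assumptions; use your project's own linter/formatter.
    npx prettier --check . || exit 1
    npx eslint . || exit 1

With something like this in place, the review session can stay focused on naming, structure, and assumptions rather than whitespace.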

Prerequisites

  1. Coder brings a mind open to:
    • Alternate approaches to solving the problem at hand
    • Learning new paradigms or design patterns

That’s it! These rules are here to remind us that we are not our code. We should be egoless with our software. What matters in this exercise is the impression the code makes on others, not who wrote it.

The Process

A code review is a 20-minute session between two people: the Coder, who wrote the code being reviewed, and the Reviewer.

  1. First, the Coder explains the problem space and the intent of the solution.
  2. Then the Reviewer takes control, navigating the source code and verbalizing her thoughts. Some suggested reviewer tasks:
    • Scan the directory structure.
    • If you were solving this problem, where would that code go? Try to find it there.
    • Look at the files that were changed. Any surprises? Here's a command for that in git: git diff <start commit> <end commit> | grep '+++' (see the sketch after this list).
    • Review the test case descriptions, ignoring the test code. Are there any coverage gaps?
    • Review the tests, looking for logical errors in coverage.
    • Review the changed code, in place.
  3. Coder and Reviewer discuss differences of assumption they find. They have a dialog about their points of view and agree on the clearest way to accomplish the goals.
  4. Coder documents todo items generated by the dialog.
  5. After the session, the Coder updates the codebase to match the verbal consensus.
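As a concrete aid for the “files that were changed” step above, here is a small sketch of the git side (the revision names are placeholders):

    # List the files touched between two commits.
    git diff --name-only <start commit> <end commit>

    # Equivalent to the grep approach mentioned in the list: diff headers for
    # changed files start with '+++', so anchoring the pattern avoids matching
    # a '+++' that happens to appear inside added code.
    git diff <start commit> <end commit> | grep '^+++'

Either way, the Reviewer should then open those files in the working tree rather than reading the diff hunks in isolation.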

This method puts nearly all of the focus on the Reviewer’s reactions to the code. In truth, what we are doing is performing a Usability Test on the code itself. The Reviewer takes on the role of a future developer, sitting down to read or modify this code. From this point of view, the source code is the application under test, and future developers are the users.

I don’t recommend reviewing diffs, as convenient as GitHub makes it. Simply looking at diffs removes the code from its context, hiding its role in the application as a whole.

Scheduling

Here we focus on practicality and human factors. Code reviews can be a huge time sink. We want as many eyes as possible looking at as much code as possible, without burdening the team.

I chose 20-minute sessions as the smallest productive unit of time. Any less and immersion in the task becomes difficult. I’m in favor of regularity, so I propose holding these sessions daily at a consistent time. I’d recommend occupying one of those times that tends to be a slump or a distraction already, such as 2 PM or right after a regular standup.

This regularity lets developers plan their time a bit more and avoid disruption. It might even serve as a good transitional task to bring them into the coding mindset more smoothly.

As for picking slump times, I think it may be beneficial to get a person at his grumpiest. Of course it will depend on the person and the software, but looking at new code when one is least receptive might help uncover assumptions.

Rotation

Finally, everyone should be reviewing everyone’s code, regardless of experience level. A junior developer’s reaction to advanced code can still be valuable. Don’t forget that experience in a subject matter and software quality are not the same!

Ideally, no more than 2-3 days should pass between reviews of any coder’s work. I assume that in that timeframe, each developer has made 2-3 ‘interesting’ decisions worth bringing up in review. Teams with an odd number of developers can give a different person a day off each day.

In summary

This method should be straightforward to implement in a department, especially if code style is already enforced. Ideally, it will be low-impact while maximizing the value of humans looking at code.