
Worrying about maintainability is often a waste of time…

We’ve shared a number of articles on the technical aspects of code reviews, like the “What to look for in a code review” series. Code reviews, of course, also involve a great deal of human interaction, and there’s always room for subjectivity. Today we’d like to share a guest post that looks at what we may mean by “maintainability”.

About the author:

Wes Higbee is passionate about helping companies achieve remarkable results with technology and software. As a consultant, Wes helps people eradicate emotional blind spots with technology. He’s had extensive experience developing software and working with teams to improve how software is developed. Wes launched Full City Tech to leverage his expertise to help companies rapidly deliver high-quality software to delight customers.

Code reviews have to have a purpose at the end of the day. We don’t do them for their own sake, hopefully. For example, a code review might identify a bug, it might be a mentoring opportunity, it might be a learning opportunity for the reviewer, or it might verify that everything is okay.

Unfortunately, there are many distractions that detract from getting value out of a code review. Many of these stem from simply being human.

One common distraction is discerning whether or not code is maintainable. Seems like a good thing to watch out for. Wouldn’t you like to know if code is unmaintainable?

The problem lies in determining what maintainability is. As humans, we have a tendency to answer a hard question by first substituting an easier question, and then answering that easier question instead. This is especially true of hard questions like “Is this code maintainable?”.

Since maintainability is subjective, individuals will instead substitute a question, like one of the following:

  • Do I like this code?
  • Am I familiar with this coding style?
  • Is this code similar to the code that I write?
  • Do I dislike a particular design pattern?
  • Do I think that the person that wrote this is a neophyte?
  • How old is the code? (assumption that older = less maintainable)
  • Is this using an older framework? Or, am I familiar with this framework?
  • Do I like the person that wrote the code?
  • Did I write this code?
  • Do I agree or disagree with it?
  • Do I agree or disagree with adding this feature?

Now you might be thinking “I don’t do that”. The problem is, you’re not aware of this substitution. Nobody is. We all do it subconsciously. It’s human nature.

The automatic process at work here is the same one that makes you run when you see something that looks like a bear. It might just be a shadow behind a tree in the forest; it doesn’t matter, you’re going to run. You don’t perform a quantitative risk analysis and then decide whether to run or stay; you run before you know you’re running.

When you do code reviews the same process is at work. When you look at code you have automatic reactions to it. And those automatic reactions are answers to the list of substituted questions above.

You’ll justify your reactions in terms of things like maintainability, because you think that’s what you’re looking for. So your feedback will seem reasonable, even though it’s specious.

For example, consider code that takes you a while to read and comprehend. You’ll feel compelled to call this code unmaintainable: if it takes you a while to understand, surely it’ll take everyone a while!

Now, it could be the case that the code is hard to read. It’s also very possible that you’re just not familiar with a given coding style.

If you don’t distinguish between these two situations, then you will waste time changing code that doesn’t need to be changed, just because you’re not familiar with a given coding style.
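To make this concrete, here’s a minimal, invented Python illustration: two equivalent snippets that differ only in style. A reviewer steeped in one style may read the other more slowly, but that says more about familiarity than maintainability.

```python
users = [
    {"name": "Ada", "active": True},
    {"name": "Bob", "active": False},
]

# Imperative style: explicit loop and accumulator.
active_names = []
for user in users:
    if user["active"]:
        active_names.append(user["name"])

# Comprehension style: the same logic as a single expression.
active_names_alt = [u["name"] for u in users if u["active"]]

# Both produce the same result; neither is inherently unmaintainable.
assert active_names == active_names_alt == ["Ada"]
```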

You might think that the solution to this is to have a long list of criteria for people to refer to when doing reviews. The problem with that: people can’t keep a long list of criteria in mind.

Subconsciously, people will summarize lists of criteria. And then map that summary into another set of substituted questions. That’s why we have the idea of “unmaintainable code” in the first place! It summarizes a whole host of problems we might find with code.

So we’re back at square one: criteria won’t help.

Instead, here are three simple steps you can follow. If you do these three things, you’ll substantially cut waste in your code reviews.

  1. Embrace It
    What you need to do, to avoid as much wasted time as possible, is to first accept that this is normal, subconscious human behavior. In fact, without this, you probably couldn’t give feedback. It’s a blessing and a curse. 

Accept that when you do a review, you will substitute questions like those above for maintainability. 

If you embrace this as normal, you can learn to identify substitutions in your own reviews. If you think it’s a sign of a character flaw, or “Not Me”, your reviews will continue to produce waste because you won’t accept that it’s a part of you too.

  2. Identify Substitutions
    For each aspect of reviews — maintainability is only one example of a substitution — use your imagination to identify likely substitutions.

 If I told you that one code base had 100% test code coverage, and another had 50%, and you only had time to do one review, which would you pick to review? 

This is another example of a specious substitution. “What percentage of this code is covered with tests?” is substituted for “What’s the probability of defects in this new code?” The assumption is that Test Code Coverage = Quality. In a review, one might rubber-stamp code with 100% coverage, whereas code with 50% might be flagged. Sounds reasonable, right? But it’s not.

Another example: instead of Code Coverage = Quality, you might assume Lack of Tests = Problem. But is that really the case?
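Here’s a small, hypothetical example (pytest-style, names invented) of why coverage is a specious stand-in for quality. These tests execute every line of the function, so a coverage tool reports 100%, yet the happy-path assertion is so weak that most bugs in the arithmetic would slip through.

```python
import pytest

def apply_discount(price, rate):
    """Return the price after applying a fractional discount rate."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def test_apply_discount_runs():
    # Covers the happy path, but asserts almost nothing: it would
    # still pass if the function returned price * (1 + rate).
    assert apply_discount(100, 0.2) is not None

def test_apply_discount_rejects_bad_rate():
    # Covers the guard clause.
    with pytest.raises(ValueError):
        apply_discount(100, 2)
```

Line coverage here is 100%, but the probability of defects is barely constrained at all.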
 
One good way to find substitutions — if you have been doing reviews for a while — is to review past reviews. Look for false positives and false negatives and work backward to postulate what substitutions might be culpable.

  3. Filter Feedback
    Don’t rush to give feedback. Leave time to think.

When you find feedback to give, ask yourself: “Will changing this be worthwhile?” This will help you filter false positives. 

Keep track of your reversals: the feedback you give and later walk back. Look for trends so you can avoid spending time on these in the future. For example, if you notice code coverage is often misleading, then stop measuring it during reviews.

When it comes to false negatives (missing things), they’ll be less likely if you aren’t distracted by false positives. You find what you look for, and if you’re looking for the wrong things you can’t attend to finding the right things. For example, if you stop looking at code coverage, you might start looking at individual tests. Maybe you comment out code that a test inspects and see whether the test fails.
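As a sketch of that last idea (invented names, pytest-style): temporarily break the line a test is supposed to inspect, and check that the test actually fails.

```python
def total_with_tax(subtotal, tax_rate):
    tax = subtotal * tax_rate  # mutation check: try replacing this with `tax = 0`
    return subtotal + tax

def test_total_with_tax():
    # With the real logic this passes. With the mutated line
    # (`tax = 0`) it fails, which is what we want: the test is
    # genuinely sensitive to the code it covers. A weaker assertion,
    # like `is not None`, would pass either way.
    assert total_with_tax(100, 0.5) == 150
```

Mutation testing tools automate this kind of check, but even doing it by hand on a couple of lines tells you more than a coverage number does.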

When it comes to maintainability, filtering feedback might mean asking about the likelihood of change. If it’s low, why bother? What if you never change the code? If it’s high, you likely know something about the future that can help you filter and frame the feedback you give.