Every team I’ve worked with thinks they have a code review process, until you look closely. Then you realise there are actually twelve different processes happening at once. Some people review for style. Others for safety. Some pick at syntax. Others treat reviews like design reviews. Most do a mix without realising it.
The result is predictable: inconsistent feedback, slow turnarounds, frustrated engineers, and an invisible tax on every change you try to ship.
On a long-running headless multisite project, that tax built up so much that reviews became the bottleneck for the entire team. Not because anyone was doing anything wrong, but because no one was reviewing with the same mental model.
So I started researching what I was seeing: the layers, the patterns, the points where good reviews quietly turn bad. What I found was a simple hierarchy that anyone can follow. And once we used it, the whole system became clearer and calmer almost overnight.
This post walks through that hierarchy, how to recognise where your team is today, and what it looks like to move toward reviews that actually help you ship.
Why do we do Code Reviews?
The importance of an efficient, thorough code review process cannot be overstated. At the heart of this process lies a shared understanding and alignment within the engineering team regarding the objectives of code reviews. It’s crucial to recognise that the scope of code reviews extends far beyond identifying bugs. They are multifaceted tools designed to enhance code maintainability and readability, foster knowledge sharing among team members, and continually refine and improve our solutions.
The Hierarchy of Code Reviews
An engineering team must agree on what matters most in code reviews. If you’re familiar with Maslow’s hierarchy of needs, the diagram below shouldn’t seem too strange: it’s the hierarchy of code reviews we developed for this project. As with Maslow’s pyramid, the most critical factors sit at the bottom.

Placing Readability at the base of this hierarchy emphasises its fundamental importance. This prioritisation is not meant to undermine the significance of security or robustness, but to acknowledge that code clarity is the bedrock upon which all other qualities rest. Here’s why I believe this:
Readable code is more straightforward to review, understand, and verify for robustness and security. Reviewers can more effectively identify logic errors, potential security vulnerabilities, and unintended side effects when the code is clear and understandable. In contrast, even if a piece of code is secure and robust, it is more challenging to validate its security and robustness if it is not readable, as the underlying logic and potential edge cases are obscured by complexity.
Additionally, in an agency setting like ours, projects involve teams of developers and can span years, with team members rotating in and out. Readable code ensures that new team members can quickly understand the codebase, making it easier to maintain, update, and scale. This is critical for a project’s long-term success and viability, as obscure code can lead to increased development time, higher costs, and a greater risk of introducing regressions during updates or enhancements.
Finally, promoting a culture of writing readable code fosters an environment of learning and knowledge sharing among team members. It encourages best practices and coding standards, helping less experienced engineers learn from the codebase and contribute more effectively. This, in turn, enhances team cohesion and efficiency.
Decoding the Layers
Is the Code Readable?
- Is the code easy to read and comprehend?
- Does it clarify the business requirements (code is written to be read by a human, not by a computer)?
- Are variables, functions and classes named appropriately?
- Do the domain models map intuitively to the real world, reducing cognitive load?
- Does it use consistent coding conventions?
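To make this concrete, here’s a small, hypothetical illustration (the order-total example below isn’t code from the project). Both versions behave identically, but only the second lets a reviewer check the business rule at a glance:

```typescript
// Hard to review: the intent is buried in terse names.
function calc(d: { p: number; q: number }[], t: number): number {
  return d.reduce((a, x) => a + x.p * x.q, 0) * (1 + t);
}

// Easier to review: the same logic, but the business rule reads straight off the code.
interface OrderLine {
  unitPrice: number;
  quantity: number;
}

function calculateOrderTotal(lines: OrderLine[], taxRate: number): number {
  const subtotal = lines.reduce(
    (total, line) => total + line.unitPrice * line.quantity,
    0
  );
  return subtotal * (1 + taxRate);
}
```

Nothing about the behaviour changed, only the names and structure, yet the second version is far easier to verify for robustness and security later on.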
Is the Code Robust?
- Does the code do what it’s supposed to?
- Does it handle edge cases?
- Is it adequately tested to ensure it stays correct even when other engineers modify it?
- Is it performant enough for this use case?
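To make the testing point concrete, here’s a minimal sketch of the kind of edge-case test I’d hope to see in review. It assumes Vitest and the hypothetical calculateOrderTotal function from the readability example above:

```typescript
import { describe, expect, it } from "vitest";
// Hypothetical module from the readability example above.
import { calculateOrderTotal } from "./order";

describe("calculateOrderTotal", () => {
  it("returns 0 for an empty order", () => {
    // Edge case: an order with no lines should not blow up or produce NaN.
    expect(calculateOrderTotal([], 0.2)).toBe(0);
  });

  it("applies the tax rate to the line subtotal", () => {
    const lines = [{ unitPrice: 10, quantity: 3 }];
    expect(calculateOrderTotal(lines, 0.2)).toBeCloseTo(36);
  });
});
```

Tests like these are cheap to write, and they keep the code correct when the next engineer changes it.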
Is the Code Secure?
- Does the code have vulnerabilities?
- Is the data stored safely?
- Is personally identifiable information (PII) handled correctly?
- Could the code be used to induce a denial of service (DoS)?
- Is input validation comprehensive enough?
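As an example of what “comprehensive enough” validation can look like, here’s a minimal sketch that checks an untrusted payload at the boundary before it reaches any business logic. It assumes the zod library and a hypothetical contact-form handler; your project may well use different tooling:

```typescript
import { z } from "zod";

// Hypothetical schema: validate untrusted input at the boundary.
const ContactFormSchema = z.object({
  email: z.string().email().max(254),
  // Cap the message length so the endpoint can't be abused for storage or processing load.
  message: z.string().min(1).max(2000),
});

export function handleContactSubmission(payload: unknown) {
  const result = ContactFormSchema.safeParse(payload);
  if (!result.success) {
    // Reject early, and avoid logging the raw payload in case it contains PII.
    return { status: 400 as const, errors: result.error.flatten().fieldErrors };
  }
  return { status: 202 as const, data: result.data };
}
```

The exact rules matter less than the habit: every input from outside the system gets validated somewhere a reviewer can find it.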
Is the Code Elegant?
- Does the code leverage well-known patterns?
- Does it achieve what it needs to do without sacrificing simplicity and conciseness?
- Would you be excited to work on this code?
- Would you be proud of this code?
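“Elegant” is subjective, but it usually shows up as well-known patterns doing the heavy lifting. Here’s a hypothetical illustration: both functions below behave identically, but the second replaces nested conditionals with a plain lookup table:

```typescript
// Works, but a reviewer has to trace every branch to see the mapping.
function shippingLabelVerbose(country: string): string {
  let label = "";
  if (country === "GB") {
    label = "Standard UK delivery";
  } else {
    if (country === "IE") {
      label = "Ireland delivery";
    } else {
      label = "International delivery";
    }
  }
  return label;
}

// The same behaviour expressed as a lookup table with a default.
const SHIPPING_LABELS: Record<string, string> = {
  GB: "Standard UK delivery",
  IE: "Ireland delivery",
};

function shippingLabel(country: string): string {
  return SHIPPING_LABELS[country] ?? "International delivery";
}
```

A reviewer can scan the table in seconds, and supporting a new country becomes a one-line change.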
Is the Code Inspiring?
- Does the code leave the codebase better than it was?
- Does it inspire other engineers to improve their code as well?
- Is it cleaning up unused code, improving documentation, or introducing better patterns through small-scale refactoring?
What Issues do we face with Code Reviews, and how can we solve them?
Once we aligned on what good code looks like, the next bottleneck was the pace of high-quality reviews in practice. There are two common pain points within code reviews:
- Slow turnaround time
- Low-quality feedback
I was introduced to the Code Review Quadrant by Dr. Michaela Greiler (Dr. McKayla) in her Code Review Workshop. The quadrant categorises code reviews into four types based on the speed of the review process and the value of the feedback provided: Blocking Reviews, Omissible Reviews, Value Reviews, and Power Reviews.
Blocking Reviews are those in which engineers have to wait a long time for feedback, only to receive low-quality feedback. This is very detrimental to both engineering productivity and team morale.
Omissible Reviews happen quickly, but the feedback quality is low; think “LGTM”-style reviews. Whilst this style doesn’t block the engineer for long, if no real feedback is ever given, why review at all?
Value Reviews are a typical style of review in which feedback is thorough but takes a significant amount of time to receive. These reviews are frustrating because they require the engineer to context-switch frequently. That said, they’re much better than the previous two.
Engineering teams want to enter the Power Reviews quadrant to deliver the best code review experience. Power Reviews are where high-quality and high-value feedback is given promptly.

The team must solve as many of their pain points as possible to move into the Power Reviews quadrant. These pain points can usually be solved by:
- Communicating better
- Reducing reviewer burden
- Increasing process speed
Here’s a breakdown of each:
Communicate better
Communication occurs across multiple areas of the code review process, from the code itself to feedback and the code review description. Since most code reviews happen asynchronously, we must ensure that the intended meaning of our words is conveyed accurately.
Consider using the Observation, Impact, and Request rule when leaving feedback. In practice, that looks something like this:
| Step | Example | Guidance |
|---|---|---|
| Observation | “This method has 100 lines” | Describe your observations objectively and neutrally. Refer to the behaviour if you have to talk about the author. |
| Impact | “This makes it hard for me to grasp the essential logic of this method” | Explain the impact the observation has on you. Talk about yourself, as you only know the impact on yourself. |
| Request | “I suggest extracting the logic out into additional methods with expressive names” | Use an I-message to express your wish or proposal. |
Below are some tips for respectful and constructive feedback:
- Don’t make demands; ask questions instead.
- Make it about the code, not the person.
- Use I-messages (“I think”, “I suggest”, “for me”…).
- Give genuine and authentic feedback.
- Avoid belittling words like “simple”, “easy”, and “obviously”.
- Avoid generalisation and exaggeration (“always”, “never”…).
- Explain the reasoning behind your feedback.
- Offer guidance, not just criticism.
Reduce reviewer burden
When a teammate comes to review code and sees a large PR, they will either have to carve out a large chunk of time to review it or skim over it and point out anything obvious. To help reduce the reviewer’s workload and increase feedback quality at the same time, the team should do the following:
Submit high-quality code changes.
- Self-review your work before asking someone else to review it.
- Ensure all automated tests are passing.
Have small, coherent changes.
The larger a PR, the less helpful the feedback you’ll get.

Don’t try to cram two or three tickets into the same PR. This leads to bloated PRs, less mergeable code, and slower code reviews. If you feel a ticket is too big and will result in a significant PR, raise the issue with the team and see if the work can be broken down.
Write a good code review description.
When a reviewer looks at a PR, they don’t have the context you would have if you wrote the code. They won’t know what you’ve already tried, why it didn’t work, or why you chose the approach that you have.
You must remember the following:
- Your reviewer was probably not involved in the fix.
- Your reviewer probably has no idea what you were doing.
- Your reviewer probably has no idea what issues you encountered along the way.
- It’s your responsibility to decrease the reviewer’s burden.
Writing a good code review description helps extract some of this knowledge from your head and gets it in writing for the reviewer to consume. The focus should be on the what and the why, not the how!
Consider the following as a starting point:
- What does this change accomplish?
- Why was this change necessary?
- Why did you come up with this solution?
- Have you considered alternative solutions?
- If so, why did you decide against them?
Automate what can be automated.
Projects should have a good set of automated checks. These can be as basic as PHPCS, stylelint, Prettier, and ESLint, right through to unit tests, end-to-end tests, and visual regression testing (VRT). Engineering teams should continue to add to these checks where it makes sense.
Increase process speed
By sharing code reviews across the team, it should be possible to speed up the process, as no single team member will become a bottleneck.
Code Review Policy Template
For many projects, engineers have a shared understanding of how to handle code reviews. That said, I implore you to put it in writing; not only does that enable new engineers to understand the process, but it also ensures engineers always have something to refer back to. Consider the following questions when putting together a code review policy:
- How many reviewers are required?
- Who should be on the review?
- How large should a code change be?
- How fast should the turnaround time be?
- When should a PR be reviewed?
- What should a code review description look like?
These will differ for each project and will likely change throughout the project lifecycle, which is even more reason to have them in writing.
Where is the project now?
That’s a good question: we’ve implemented the ideas above, and the project now generally fluctuates between 15 and 30 open PRs at a time, depending on how close we are to a release.
The team shares code reviews and deployments to production. We have a great set of automated tests, and engineers provide enough context in the tickets to make it easy to understand why the code was written the way it is. We’ve improved how we break down tickets, and we communicate earlier when overlapping work could cause issues at merge time. We’re hoping this last one will be solved by a move to trunk-based development 🤞
The team is much happier with the process now. We’re not perfect, but we’re much more efficient than we were a year ago!
The hierarchy isn’t a theory. It’s DevEx. It’s removing friction from one of the most painful parts of engineering. We saw huge improvements simply by creating shared language — and once we had that language, the whole development flow felt calmer.
So if you’re unsure where to start, here’s a tiny, low-risk experiment: add a section to your next PR description called “What I want reviewers to focus on.”
It forces clarity from the author and gives reviewers permission to be intentional.
That one tweak alone can change the entire tone of a review.