The Code Review That Made Me Quit (Almost)
Four years ago, I submitted a pull request I was proud of. I'd spent a week building a new feature, writing tests, and documenting everything. I was feeling pretty good.
Then the code review came back. Forty-seven comments. Not about bugs or logic errors. About variable naming, spacing, whether I should use a ternary operator or an if statement. One comment just said "lol what is this" pointing to a function name.
I didn't quit. But I thought about it. And I stopped volunteering to take on challenging work for six months.
That review taught me what NOT to do. It took me years to learn what good code review culture actually looks like.
What Code Review Is Actually For
Here's where most teams get it wrong: they think code review is about finding bugs. It's not. Not primarily.
What code review is REALLY for:
- Knowledge sharing: Now 3 people understand this code, not just 1
- Catching blind spots: "Did you consider what happens when this API times out?"
- Maintaining consistency: "We actually have a util function that does this already"
- Mentorship: "Here's a pattern I learned that might help here"
- Building trust: "I've got your back" vs. "I'm looking for mistakes"
Bug catching? That's what tests are for. If you're catching lots of bugs in code review, your testing strategy is broken.
The Rules That Transformed Our Reviews
After that terrible review experience, and after giving some terrible reviews myself (I'm not innocent), here's what we implemented:
Rule 1: Clarify Your Intent With Prefixes
Every comment starts with a prefix that signals what you're asking:
- [BLOCKER] This must be fixed before merge (security, data loss, crashes)
- [QUESTION] I'm trying to understand this, can you explain?
- [SUGGESTION] Consider this alternative (author decides)
- [NITPICK] Personal preference, ignore if you want
- [LEARNING] I didn't know you could do this, cool!
- [PRAISE] This is great, nice work
Real example from last week:
[QUESTION] Why are we using a recursive approach here? I might be missing something.
[SUGGESTION] We could use the existing `UserValidator` class here instead of inline validation. Up to you.
[BLOCKER] This will throw a NullPointerException if `user.profile` is null. We need a null check.
[PRAISE] This abstraction is really clean. Makes the code way more readable.
This simple change cut our "I thought you meant I HAD to change this" conversations by probably 80%.
Rule 2: The 30-Minute Rule
If you've been reviewing code for 30 minutes and you're not done, stop. You're either:
- Looking at a PR that's too big (should be broken up)
- In the weeds on nitpicks (zoom out)
- Missing context (talk to the author first)
We track PR review time. Our best reviews average 15-20 minutes. The longest reviews that still improved code quality ran about 25 minutes. Anything longer is usually bikeshedding.
Rule 3: Praise in Public, Nitpick in Private
We have a #code-wins Slack channel. When you see great code in a review, you screenshot it and share it there.
When you have 15 nitpicks about formatting? You either:
- Automate it with a linter or formatter (best option; there's a minimal setup sketch after this list)
- Let it go (second best option)
- Leave ONE comment suggesting the author run the formatter (okay option)
- Leave 15 comments about indentation (you're now the villain)
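On the "automate it" option: the setup doesn't have to be elaborate. Here's a minimal sketch using Prettier. The specific options are illustrative, not a recommendation; the point is that the team picks them once and never debates them in a review again.

```js
// .prettierrc.js — one shared formatter config, committed to the repo.
// The options below are examples; any consistent choice works.
module.exports = {
  semi: true,           // end statements with semicolons
  singleQuote: true,    // 'single' quotes instead of "double"
  tabWidth: 2,          // two-space indentation
  trailingComma: 'es5', // trailing commas where valid in ES5
};
```

Run `npx prettier --write .` locally or wire it into a pre-commit hook or CI check, and indentation stops being review material entirely.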
Rule 4: The "Build On" Approach
Instead of "This is wrong," we say "What if we..."
Before:
"This function is doing too much. It should be broken up."
After:
"[SUGGESTION] What if we extracted the validation logic into a separate function? Then this would be easier to test, and we could reuse the validation elsewhere. Something like:
function validateUserInput(data) {
  // validation logic here
}

function createUser(data) {
  const validData = validateUserInput(data);
  // creation logic here
}
Just a thought though—if you prefer keeping it together for now, that's fine too."
Same feedback. Completely different feeling.
What Good Reviews Look Like
Here's a real review I gave last month (sanitized). The PR was adding a new payment method:
[PRAISE] Love how you broke this into small, focused commits. Made the review way easier.
[QUESTION] In PaymentProcessor.js line 45 - what happens if the payment gateway times out? Do we retry, or show an error to the user?
[BLOCKER] We need to log payment failures to our audit system (compliance requirement). Can you add a call to AuditLogger.logPaymentFailure() in the catch block?
[SUGGESTION] Consider extracting the card validation logic into our existing CardValidator utility. We're doing similar validation in 3 other places. But if you want to tackle that in a follow-up PR, that works too.
[LEARNING] I didn't know you could use optional chaining with dynamic properties like that (line 78). TIL!
Overall: This is solid. The error handling is thorough, and I like how you're using the existing payment abstractions. Just need the audit logging added and clarification on the timeout behavior.
Developer fixed the blocker, explained the timeout behavior (I'd missed that it was handled upstream), and merged. Total time: 18 minutes for me to review, 10 minutes for them to address.
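(If you hadn't seen that trick either: optional chaining works with bracket access, so dynamic property lookups don't blow up on missing objects. The names below are made up for illustration.)

```js
const user = { billingAddress: { postalCode: '94107' } };
const field = 'billingAddress';

// Optional chaining with a dynamic (bracket) property: if `user` or the
// looked-up field is missing, the expression evaluates to undefined
// instead of throwing.
const postal = user?.[field]?.postalCode;               // '94107'
const missing = user?.['shippingAddress']?.postalCode;  // undefined, no crash
```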
How to Review When You're Junior
Early in my career, I was terrified to review senior engineers' code. "What could I possibly tell them?"
Turns out: a lot. Here's what junior reviewers are GREAT at:
- Asking "why" questions: "Why did you choose this approach over X?" (Senior might have a great reason, or might realize they didn't consider X)
- Pointing out unclear code: "I don't understand what this function does" (if you don't get it, the next person won't either)
- Finding missing documentation: "What's supposed to be in the `options` parameter?" (senior knows, but forgot to document)
- Catching edge cases: "What if this array is empty?" (fresh eyes catch things)
Real example: A junior on my team reviewed my code and asked "Why are we using a Map here instead of an object?" I said "better performance." They asked "Did you benchmark it?" I hadn't. We tested. Object was faster for our use case. They saved us from premature optimization.
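The "did you benchmark it?" check took maybe ten minutes. Something in this spirit (a rough sketch only; real numbers depend on key shapes, data sizes, and your runtime, and a proper benchmarking tool beats `console.time` for anything that matters):

```js
// Quick-and-dirty lookup comparison: plain object vs. Map.
const N = 1_000_000;
const obj = {};
const map = new Map();
for (let i = 0; i < N; i++) {
  obj['key' + i] = i;
  map.set('key' + i, i);
}

console.time('object lookups');
let objSum = 0;
for (let i = 0; i < N; i++) objSum += obj['key' + i];
console.timeEnd('object lookups');

console.time('map lookups');
let mapSum = 0;
for (let i = 0; i < N; i++) mapSum += map.get('key' + i);
console.timeEnd('map lookups');
```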
Your "dumb questions" aren't dumb. They're valuable.
The Response Time Contract
Nothing kills momentum like PRs sitting for days. We have a team agreement:
- Urgent fixes (production broken): 1 hour
- Normal PRs: 4 business hours
- Large refactors (500+ lines): 24 hours, but author schedules a 30-min walkthrough first
If you can't review within that time, you comment "I'll review this by [specific time]" so the author isn't left hanging.
We track this in our team metrics. It's not about punishment—it's about knowing when we're overloaded and need to adjust.
How to Author a PR That Gets Good Reviews
Here's the secret: reviewers give better reviews when you make it easy for them.
Write a Great Description
Bad:
"Fixed the bug"
Good:
"Fixed race condition in payment processing
**Problem:** Users clicking 'Pay' twice within 100ms were getting charged twice.
**Solution:** Added idempotency check using Redis with 5-second TTL on payment requests.
**Testing:** Added integration test simulating rapid clicks. Also tested manually in staging with network throttling.
**Risks:** Very low. If Redis is down, payments still work (we just lose the double-click protection temporarily).
**Screenshots:** [before/after behavior]"
I'll review the second one in 10 minutes. The first one will take me 40 minutes just to figure out what you did.
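As an aside, the idempotency check that description mentions can be tiny. A sketch, assuming an ioredis-style client; the key format and function names here are hypothetical:

```js
const Redis = require('ioredis');
const redis = new Redis();

// Returns true if this payment request hasn't been seen in the last
// 5 seconds. SET ... NX only succeeds when the key doesn't exist yet;
// EX 5 makes it expire automatically.
async function claimPaymentRequest(requestId) {
  const result = await redis.set(`payment:req:${requestId}`, '1', 'EX', 5, 'NX');
  return result === 'OK';
}

async function handlePayment(requestId, processPayment) {
  if (!(await claimPaymentRequest(requestId))) {
    return { status: 'duplicate_ignored' }; // likely a double-click
  }
  return processPayment();
}
```

Matching the "Risks" note above, you'd wrap the Redis call in a try/catch and fall through to normal processing if Redis is unavailable.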
Keep PRs Small
Data from our team last year:
- PRs under 200 lines: reviewed in 15 minutes on average, 90% approved first try
- PRs 200-500 lines: reviewed in 35 minutes, 60% approved first try
- PRs over 500 lines: reviewed in 2+ hours, 30% approved first try
Big PRs get worse reviews because reviewers get fatigued and start skimming.
Tag the Right People
Don't tag the whole team. Tag:
- Someone who knows this part of the codebase
- Someone who will learn from reviewing this
- Someone whose code you recently reviewed (reciprocity)
What to Do When Reviews Get Heated
It happens. Someone feels attacked. Someone feels ignored. Here's the protocol:
- Take it off GitHub. If there are more than 3 back-and-forth comments on one point, jump on a call. Text-based arguments escalate fast.
- Assume positive intent. "They're trying to help" not "They're trying to prove I'm wrong."
- Focus on outcomes. "What are we trying to achieve?" not "Who's right?"
- Escalate if needed. If you can't resolve it in 15 minutes, pull in a third person or your lead.
Last month, two engineers spent 2 hours arguing in PR comments about whether to use TypeScript enums or union types. I intervened: "Either approach works. Flip a coin and move on. We're building a feature, not writing a language spec."
Metrics We Actually Track
We look at:
- Time to first review: Are PRs sitting?
- Number of review rounds: Are we catching things early, or doing 5 rounds?
- PR size distribution: Are PRs getting too big?
- Comments per PR: Are we bikeshedding or giving meaningful feedback?
We DON'T track "lines of code reviewed per person" or "number of comments per reviewer." Those incentivize the wrong behavior.
Start Here This Week
- Implement comment prefixes ([BLOCKER], [SUGGESTION], etc.) in your next 3 reviews
- Set a team agreement on review response time (suggest 4 hours for normal PRs)
- Create a #code-wins channel and share one great piece of code you see this week
- Review PR descriptions in your last 5 PRs - are they helping reviewers or making them work harder?
- Track time spent reviewing for one week - you'll be shocked where the time goes
Code review isn't about finding mistakes. It's about building better software together. When someone reviews your code, they're investing their time in making you better. Return the favor.