The difference between a senior and a junior engineer is rarely raw knowledge. It's the rate at which they can find the actual cause of a bug. This is a tour of techniques and habits that compound over a career — boring on day one, irreplaceable by year five.

We'll start with the mindset, then walk through the toolkit.

The debugging mindset

Most "tough bugs" aren't tough. They're poorly investigated. A few habits change that:

  • Reproduce the bug before guessing. If you can't reproduce it, you can't fix it. You can only hope.
  • Narrow the surface. Halve the code involved every iteration. Comment out, branch off, simplify the input. Most bugs reveal themselves the moment the search space gets small enough.
  • Trust nothing. "It can't be the database" is how three days of debugging starts.
  • Read the error message twice. It is shocking how often the answer is in the first sentence of the stack trace.

When in doubt, slow down. The fastest way through a tough bug is to ask better questions, not type faster.

Browser DevTools, beyond the basics

Most engineers use 5% of what DevTools can do. Here are the high-leverage parts:

The Console is for evidence, not output

Stop using console.log like a print statement. A few habits make a huge difference (a sketch after this list pulls them together):

  • console.log({ user, posts }) instead of console.log(user, posts). The object literal preserves variable names, making the output dramatically easier to read.
  • console.table(items) for arrays of objects. It gives you a sortable table.
  • console.trace() when you need to know how you got here.
  • console.group() / console.groupEnd() to keep related logs together.
  • debug(someFunction) (a Console utility, not console.debug) to pause whenever that function is called.

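Here's a sketch that pulls those together; user and posts are invented for the demo:

    // Paste into any Console. The data is made up for illustration.
    const user = { id: "u_1", name: "Ada" };
    const posts = [
      { id: 1, title: "Hello", views: 120 },
      { id: 2, title: "Again", views: 48 },
    ];

    console.log({ user, posts }); // keys preserve the variable names
    console.table(posts);         // sortable table of the array
    console.group("post processing");
    console.log("dropping drafts");
    console.groupEnd();
    console.trace("how did we get here?");
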
And when you're really stuck, conditional breakpoints in the Sources panel. Right-click a line number, choose "Add conditional breakpoint," and the debugger pauses only when your condition is true. This single feature saves hours.
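
The same effect is available in code when you're already editing the file: a guarded debugger statement. The id check below is a made-up example, not from a real app.

    function renderRow(user) {
      // Pauses only for the row you care about, like a conditional breakpoint.
      if (user.id === "u_123") debugger;
      // ...render logic...
    }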

The Network panel is your second brain

For any bug that involves data, start in Network. Things to make a habit:

  • Filter by Fetch/XHR to see only API calls.
  • Sort by Time to find slow endpoints.
  • Use the "Initiator" column to find which file actually triggered a request.
  • Right-click → "Copy → Copy as fetch" to reproduce a request in the Console (sketched after this list).
  • Throttle to "Slow 4G" before declaring "everything is fast on my machine."

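"Copy as fetch" gives you something like the snippet below, which you can paste into the Console and tweak; the URL and headers here are placeholders, not from a real app:

    // Top-level await works in the Console.
    const res = await fetch("https://api.example.com/posts?page=2", {
      method: "GET",
      headers: { accept: "application/json" },
      credentials: "include", // keep cookies so the replay matches the original
    });
    console.log(res.status, await res.json());
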
A surprising amount of "the data is wrong" turns out to be "the request is being made from the wrong place" or "the cache is returning stale data."

Performance tab for what people actually click

INP regressions are mostly invisible until you record them. Open the Performance panel, hit Record, click around the part of the app that feels janky, and stop. The recording shows you, in milliseconds, exactly which functions held the main thread. Almost every "the page feels slow" report we've debugged has come down to one or two oversized functions, easily found this way.
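
If you want the same signal outside a recording, the Long Tasks API can log main-thread blockers as they happen. This is a companion sketch, not a DevTools feature:

    // Logs any task that blocks the main thread for more than 50 ms.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log(`Long task: ${Math.round(entry.duration)} ms`, entry);
      }
    }).observe({ type: "longtask", buffered: true });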

Source maps that actually work

In production, your code is minified. Without working source maps, every stack trace points to something like a.js:1. With them, you can debug production builds as if they were local code. Things that go wrong:

  • Source maps not uploaded to your error tracker.
  • Source maps publicly served on the CDN (security risk for some teams) or not served at all (debug pain).
  • Stale source maps from a previous deploy.

The simplest fix: upload source maps to Sentry (or your equivalent) on every deploy and don't serve them publicly. Treat the upload as a required deploy step, not a nice-to-have.
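
If you happen to build with Vite, for example, one setting covers the "don't serve them publicly" half; other bundlers have equivalents:

    // vite.config.js: "hidden" emits source maps without the
    // sourceMappingURL comment, so you can upload them to your error
    // tracker without shipping them to the CDN.
    import { defineConfig } from "vite";

    export default defineConfig({
      build: {
        sourcemap: "hidden",
      },
    });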

Structured logs in production

console.log is a development tool. In production, you need logs you can search, filter, and correlate.

Two changes go a long way (a sketch follows the list):

  • JSON logs: every log line is a JSON object with fields like level, message, userId, requestId. Your log aggregator can index those fields.
  • A request ID on every log line, propagated through the request from edge to database. When you investigate "the request that broke," every relevant log shows up under the same ID.
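
A minimal hand-rolled sketch of both ideas; the field names are illustrative, and in practice a library like pino does this with less ceremony:

    function log(level, message, fields = {}) {
      // One JSON object per line, so the aggregator can index every field.
      console.log(JSON.stringify({
        level,
        message,
        timestamp: new Date().toISOString(),
        ...fields,
      }));
    }

    log("info", "order created", { userId: "u_123", requestId: "req_abc" });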

If your production debugging strategy is "grep through cloud logs and hope," you're one bad incident away from realizing this matters.

Reproducing customer bugs

The hardest bugs are the ones one customer reports and nobody on the team has seen. A workflow that helps:

  1. Get the request ID from your error tracker or support ticket.
  2. Find every log line with that ID. Trace what happened.
  3. Look at session replay if you have it (LogRocket, Sentry Replay). It's a different debugging modality but can save hours.
  4. Reproduce locally using the actual user's data shape, not made-up data. Surprisingly often, the bug only happens for a specific account.

A tool that makes this much easier is impersonation: a way for engineers to log in as a user account in production for debugging. Build it carefully — audit every impersonation, never store credentials, and treat it like the privileged operation it is — but it pays for itself fast.
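
A sketch of the shape this might take; auditLog and sessionStore are hypothetical stand-ins for whatever your stack provides:

    async function impersonate({ actorId, targetUserId, reason }) {
      if (!reason) throw new Error("impersonation requires a reason");
      // Audit first: who, whom, when, why.
      await auditLog.record({
        action: "impersonate",
        actorId,
        targetUserId,
        reason,
        at: new Date().toISOString(),
      });
      // Short-lived, clearly marked session. Never the user's credentials.
      return sessionStore.create({
        userId: targetUserId,
        impersonatedBy: actorId, // lets the UI show an "impersonating" banner
        expiresInMinutes: 30,
      });
    }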

Backend debugging without panic

When a production endpoint is misbehaving, three things usually save the day:

  • A clear error message in the logs. If your error says "something went wrong," go fix that first.
  • A trace. OpenTelemetry across your services lets you see exactly which call took 4 seconds (sketched after this list). Without it, you're guessing.
  • A way to drop into the running process. In Node.js, this is --inspect and Chrome DevTools. In containerized environments, it's whatever you've set up — but you should set up something.
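
A sketch using @opentelemetry/api, assuming an SDK is configured elsewhere in the process; the service and span names, and paymentsClient, are made up:

    import { trace } from "@opentelemetry/api";

    const tracer = trace.getTracer("checkout-service");

    async function chargeCard(orderId) {
      return tracer.startActiveSpan("payments.charge", async (span) => {
        try {
          span.setAttribute("order.id", orderId);
          return await paymentsClient.charge(orderId); // hypothetical client
        } finally {
          span.end(); // the trace now shows exactly how long this call took
        }
      });
    }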

For database issues, EXPLAIN ANALYZE is the single most useful command in any debugger's vocabulary. The query plan is the truth; everything else is theory.
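
A hypothetical example with node-postgres; the table and query are invented:

    import { Client } from "pg";

    const client = new Client(); // connection details come from PG* env vars
    await client.connect();
    const { rows } = await client.query(
      "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = $1",
      [42],
    );
    console.log(rows.map((r) => r["QUERY PLAN"]).join("\n")); // the plan, row by row
    await client.end();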

Knowing when to stop

A few signs you should step away and come back:

  • You've been at it for over two hours without progress.
  • You're guessing instead of testing hypotheses.
  • The bug seems to "move" every time you look at it.
  • Your reproductions are inconsistent.

The fastest way through these is not the next attempt. It's a walk, a sandwich, and a fresh look. Engineers who push through end up shipping fixes that paper over the real problem. Engineers who pause and think tend to find the bug in fifteen minutes after lunch.

A small debugging checklist

When you're stuck, run through these in order:

  1. Have I actually reproduced the bug, or just heard about it?
  2. Am I running the code I think I am (right branch, latest commit, rebuilt)?
  3. Are my caches clean? (Hard reload, clear DB cache, restart dev server.)
  4. Is the error message telling me literally what's wrong?
  5. Have I read the network tab for the relevant request?
  6. Have I checked logs around the time of the failure?
  7. Have I tried explaining the bug out loud to a teammate, or to a duck?

Number seven sounds silly. It works. The act of articulating the problem in plain English forces you to confront the assumptions you've been making.

The unglamorous truth

Most production bugs are caused by:

  • A type that wasn't quite what the developer thought (null instead of undefined, string instead of number).
  • A timezone or locale issue.
  • A race condition between two requests.
  • A cache returning data that's older than the developer expected.
  • An environment variable that's set differently in production than in development.

That's roughly 80% of the bug landscape. If you internalize that list, you'll start suspecting the right things first, and your debugging will get faster on its own.

Debugging isn't about being clever. It's about being methodical when everyone else is panicking. Build the habits, build the tooling, and the bugs become much smaller adversaries than they look.