
Backend Engineer Interview Questions

These twenty interview questions are designed to give a hiring panel a defensible, structured read on a backend engineer candidate in roughly two hours of interview time. They're calibrated to surface the four signals that actually predict performance in this role: behavioural evidence (what they've done), technical depth (how deeply they understand their craft), situational judgement (how they reason under ambiguity), and values alignment (how they treat people, including the ones they disagree with). Use them as a starting set — replace any question that doesn't fit your context, but keep the four-category balance.

Why structured beats unstructured

Decades of meta-analysis (most famously Schmidt & Hunter's research) show that structured interviews predict job performance roughly 2.5× better than unstructured ones. The mechanism is simple: every candidate gets the same questions, every interviewer scores against the same rubric independently before debate, and the panel's job in the debrief is to surface evidence, not to argue over feelings. For backend engineer hires, where you're often comparing candidates from very different backgrounds, structure is the thing that makes the comparison fair.

The twenty questions

Twenty structured interview questions for Backend Engineer roles, mixing behavioural, technical, situational, and values questions. Score each 1–5; calibrate independently before the debrief.

  1. Behavioural · Tell me about the most complex bug you've debugged in production.
     Look for: Structured debugging; hypothesis-driven; clear comms.

  2. Behavioural · Describe a project that shipped late. What did you learn?
     Look for: Owns slip; identifies root cause; doesn't blame others.

  3. Behavioural · When have you pushed back on a product decision, and how did it land?
     Look for: Disagrees with data; commits afterwards; respectful.

  4. Behavioural · Tell me about a time you raised the bar on code quality on your team.
     Look for: Concrete examples; influence without authority; lasting change.

  5. Behavioural · Walk me through a technical decision you regret. What would you do differently?
     Look for: Real reflection; not theatre; specific learning.

  6. Behavioural · Tell me about your highest-impact contribution related to distributed systems.
     Look for: Owns outcome; specific evidence; learned something.

  7. Technical · How would you design a rate limiter for a public API serving 100k req/sec?
     Look for: Token bucket vs leaky bucket; distributed state; trade-offs.

  8. Technical · Walk me through what happens when you type a URL and hit enter.
     Look for: DNS, TCP/TLS, HTTP, render. Depth varies by seniority.

  9. Technical · How do you decide between SQL and NoSQL for a new service?
     Look for: Workload shape; consistency needs; team familiarity.

  10. Technical · Walk me through how you'd add an index to a hot table without locking it.
      Look for: Online index build; staged rollout; validation.

  11. Technical · Describe how you'd build idempotent webhook delivery.
      Look for: Dedupe key; storage; retries; observability.

  12. Technical · How do you approach data modelling? Walk me through your process.
      Look for: Structured; pragmatic; comfortable with trade-offs.

  13. Situational · A teammate's PR has subtle correctness issues. How do you handle the review?
      Look for: Direct, kind, specific; pairs in person if needed.

  14. Situational · You're paged at 3am for a service you don't own. Walk me through your first 30 minutes.
      Look for: Stabilise; communicate; document; hand off.

  15. Situational · Product wants a feature in a week that you think needs four. What do you do?
      Look for: Surface trade-offs; offer scope cuts; commit clearly.

  16. Situational · A senior engineer dismisses your design. How do you respond?
      Look for: Curious; data-led; doesn't capitulate or stonewall.

  17. Situational · You inherit a service with no tests and active incidents. What's your plan for week 1?
      Look for: Stabilise first; characterisation tests; carve safe areas.

  18. Values · When have you done the right thing for a customer at a cost to yourself or your team?
      Look for: Concrete; specific; humble.

  19. Values · Tell me about a time you changed your mind based on someone else's argument.
      Look for: Open; specific; gracious.

  20. Values · What's something you believe about your craft that most peers don't?
      Look for: Distinctive; reasoned; not contrarian theatre.
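To help panellists calibrate answers to question 7, here is a minimal single-process token-bucket sketch. It shows only the core algorithm; a real limiter at 100k req/sec would keep the bucket state in shared storage (e.g. Redis) with atomic refill, which strong candidates should raise unprompted. All names and numbers are illustrative.

```python
import time

class TokenBucket:
    """Single-process token bucket: refills continuously, allows bursts
    up to `capacity`. Illustrative only; not distributed, not thread-safe."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(12)]  # burst of 12 against capacity 10
# The first 10 requests pass; the last 2 are rejected until the bucket refills.
```

A good answer contrasts this with a leaky bucket (smooths output rate rather than permitting bursts) and discusses where the state lives when the API runs on many hosts.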
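For question 11, the core of idempotent webhook delivery is deduplicating on a delivery ID so retries never double-apply. The sketch below uses an in-memory set and list as stand-ins; in production the dedupe key would live in a unique-keyed table or a Redis key with a TTL, and the key insert would share a transaction with the side effect. The function and field names are hypothetical.

```python
processed: set[str] = set()   # stand-in for a unique-keyed dedupe store
ledger: list[str] = []        # stand-in for the real side effect

def handle_webhook(delivery_id: str, payload: str) -> str:
    """Apply a webhook's side effect at most once per delivery ID.

    The dedupe key typically comes from the sender (e.g. a delivery-ID
    header) or, failing that, a hash of the payload.
    """
    if delivery_id in processed:
        return "duplicate"      # acknowledge without re-applying
    ledger.append(payload)      # apply the side effect exactly once
    processed.add(delivery_id)  # in production: same transaction as the effect
    return "applied"

handle_webhook("evt_1", "order.paid")
handle_webhook("evt_1", "order.paid")  # a retried delivery is a no-op
```

Strong candidates also cover retry backoff on the sender side and observability (a dead-letter queue plus metrics on duplicate rates).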
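Question 17 mentions characterisation tests, which are worth spelling out for panellists who haven't used the term: tests that pin down what untested legacy code currently does, warts and all, before anyone changes it. A hypothetical example (the pricing function and its quirks are invented for illustration):

```python
def legacy_price_cents(quantity: int) -> int:
    """Inherited pricing logic, preserved as found: 400 cents per unit,
    10% discount strictly above 10 units, negative quantities silently
    clamped to zero."""
    if quantity < 0:
        quantity = 0
    total = quantity * 400
    return total * 9 // 10 if quantity > 10 else total

def test_characterises_current_behaviour():
    # Assert what the code DOES, not what we wish it did, so any
    # behaviour change during refactoring is caught immediately.
    assert legacy_price_cents(10) == 4000  # no discount at exactly 10
    assert legacy_price_cents(11) == 3960  # discount kicks in above 10
    assert legacy_price_cents(-3) == 0     # negatives clamped, not rejected

test_characterises_current_behaviour()  # would be collected by pytest in a real suite
```

A candidate who reaches for tests like these before refactoring is showing exactly the "stabilise first" instinct the question probes for.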

Scoring rubric

A simple 1–5 rubric every interviewer should use. Score independently before debrief; argue with evidence, not feelings.

  • 1 · Strong no: Significant gaps against the must-haves. Cannot do the role today and unlikely to grow into it within a reasonable runway.
  • 2 · No: Some signal in the right direction, but the gaps outweigh the strengths for this role at this stage.
  • 3 · Mixed: Could do parts of the role well, but meaningful gaps remain. Lean on references and the working session to disambiguate.
  • 4 · Yes: Clearly capable. Demonstrated outcomes against most of the must-haves. Hire if comp and timing align.
  • 5 · Strong yes: Will raise the bar on the team. Demonstrated outcomes across all the must-haves and most of the nice-to-haves.

Panel design

A five-stage loop, roughly four hours of candidate time, that gives a defensible read on a backend engineer candidate.

Recruiter screen · 30 min · Recruiter or hiring manager

Confirm role fit, comp expectations, timing, and the must-have requirements. No deep technical questions — that's the next stage's job.

Hiring manager · 45 min · Hiring manager

Mission alignment, ownership, judgement under ambiguity. Lead with behavioural questions; probe for the specific decisions and trade-offs they personally owned.

Working session · 90 min · Hiring manager + 1 peer

Realistic, paid work simulation that mirrors the actual role. Score against a written rubric the candidate sees in advance.

Peer interview · 45 min · Two future teammates

Craft depth and collaboration. Mix of technical and situational questions. Look for how they reason, not just whether they get the right answer.

Cross-functional · 45 min · One partner from a function they'll work with daily

Communication, partnership, and judgement on cross-team trade-offs. The 'will I want to work with this person?' read.

Scoring playbook

  • Score each question independently, in writing, before any debrief discussion. The single biggest source of interview noise is the first interviewer's opinion anchoring everyone else's.
  • Capture a one-sentence quote of evidence per dimension. 'They said X, which suggests Y' — not 'they were great'.
  • In the debrief, surface evidence first, opinions second. Every disagreement should be resolvable by going back to what was actually said.
  • If the panel splits, lean on the working session output and the references. They're more predictive than any one interview.
  • Document the decision and the why. The next time you hire for this role, future-you will thank present-you.

Bias guardrails

Structure removes the easy bias; these guardrails remove the rest.

  • Use the same questions, in the same order, with every candidate at the same stage. Variation creates noise; structure removes it.
  • Score on demonstrated evidence, not on credentials, brand names, or 'culture fit' — the latter is where bias hides.
  • Calibrate rubrics quarterly with the panel. Re-watch a small sample of past interviews and re-score them blind; surface where panellists drift.
  • Track demographic breakdowns at every stage of the funnel. If pass-through rates diverge by more than 20% at any stage, investigate.
  • Never ask about protected characteristics (age, family, religion, disability, etc.) or proxies for them. Train every interviewer on what to do if a candidate volunteers this information.
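The funnel check in the guardrails above is easy to automate. The sketch below flags any stage where a group's pass-through rate diverges from the stage's overall rate by more than 20% in relative terms (one reasonable reading of the threshold; an absolute-percentage-point rule is another). Stage names, group labels, and counts are all illustrative.

```python
def flag_divergent_stages(funnel: dict, threshold: float = 0.20) -> list:
    """funnel maps stage -> group -> (entered, passed).
    Returns (stage, group) pairs whose pass-through rate diverges from
    the stage's overall rate by more than `threshold`, relatively.
    Assumes every group has entered > 0."""
    flagged = []
    for stage, groups in funnel.items():
        entered = sum(e for e, _ in groups.values())
        passed = sum(p for _, p in groups.values())
        overall = passed / entered
        for group, (e, p) in groups.items():
            if abs(p / e - overall) / overall > threshold:
                flagged.append((stage, group))
    return flagged

funnel = {
    "screen": {"group_a": (100, 50), "group_b": (100, 30)},  # overall 40%
    "onsite": {"group_a": (40, 20), "group_b": (40, 21)},    # overall ~51%
}
flag_divergent_stages(funnel)  # both screen groups diverge by 25% relative
```

Flagged stages are a prompt to investigate, not proof of bias: small samples produce noisy rates, so check counts before drawing conclusions.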

Legal notes (US, EU, UK)

Not legal advice — talk to your employment counsel for jurisdiction-specific guidance.

  • In the US, every question must be job-related and non-discriminatory under federal law: Title VII covers race, colour, religion, sex, and national origin, with the ADEA (age), ADA (disability), and GINA (genetic information) covering the rest. State and city laws add more (e.g. NYC's AEDT law for AI scoring).
  • In the EU, the AI Act classifies hiring AI as high-risk; document your scoring methodology and keep humans in the loop on every reject. GDPR rights to access, rectification, and erasure apply to interview notes and scores.
  • In the UK, the Equality Act 2010 covers nine protected characteristics. Reasonable adjustments must be offered (e.g. extra time, alternative format) when a candidate discloses a relevant condition.
  • Globally: don't ask about salary history (illegal in many US states and EU member states); do publish your salary band in the JD; do offer reasonable accommodations on request.

Frequently asked questions

How many of these twenty questions should we actually ask?

Pick eight to ten across the four categories — roughly two behavioural, three technical, two situational, and one or two values. Asking more rarely adds signal and steals time from the candidate's chance to ask their own questions, which is one of the most underrated signals for spotting strong vs weak hires.

Should we share the questions with candidates in advance?

For technical and situational questions, yes — strong candidates do better when they've had time to think, and the goal is to see their best work, not to surprise them. For behavioural questions, share the topic ('we'll ask about a time you owned a tough trade-off') without the exact wording. The signal you lose to preparation is small; the signal you gain by seeing structured, considered answers is large.

How do we use AI to support this interview process without introducing bias?

Use AI for the parts that benefit from consistency: ranking applicants against the documented requirements, scoring async video answers against a published rubric, and flagging panellist drift over time. Don't use AI to make the final decision, score on protected characteristics, or replace the working session. Audit pass-through rates by demographic group quarterly.

What's the single highest-leverage change we can make to our backend engineer interview loop?

Add an independent scoring step before the debrief. Every panellist scores every dimension, in writing, before they hear anyone else's view. The cost is fifteen minutes per interviewer; the benefit is removing the dominant source of interview noise (the first opinion anchoring everyone else's).

How should we adapt this set for a remote-only or hybrid version of the role?

For remote: add a written async response question (a one-page write-up of how they'd approach a real problem) — it's the closest signal you'll get to how they'll actually communicate on the job. For hybrid: ask explicitly about how they handle the boundary between in-office collaboration and async work; it's a real skill and a common failure mode.

What's a fair rubric for the working session output?

Score on three things: did they understand the actual problem (not just the literal ask), did they make defensible trade-offs they can articulate, and did they communicate the result clearly. Depth matters more than polish for backend engineer candidates — a rough output with strong reasoning beats a polished one with weak reasoning every time.

Run structured interviews in Screeq.

Build rubrics, score independently, calibrate panels — all native.