Technical interviews have long followed a familiar format. Whether it’s LeetCode, HackerRank, or a live session, candidates are given a problem and asked to solve it under pressure. Typically, this has involved no external tools, no searching, limited help. That model made sense, but it’s becoming harder to justify.

AI is now embedded in everyday engineering workflows. Developers don’t work in isolation anymore. They use tools to write, debug, and even help design systems. Coding is less about recalling syntax and more about guiding, evaluating, and refining AI-assisted output.

Yet many companies still pretend these tools don’t exist, in an attempt to get a view of a candidate’s ‘raw ability’. As a result, we’re evaluating engineers in a way that no longer reflects how they actually work.

Traditional interviews were designed to test problem-solving ability. In practice, they often measure something much narrower. Candidates are rewarded for pattern recognition, memorisation, and performing under artificial constraints. Platforms like LeetCode have turned this into a training loop where repetition often matters more than understanding.

Most engineers spend their time navigating unfamiliar codebases, debugging unclear issues, and making decisions with incomplete information. They collaborate, use documentation, and increasingly work alongside AI tools, yet many interviews still strip all of this away. On the job, engineers are expected to use every tool available to them. In interviews, those same tools are often restricted, inconsistently allowed, or simply ignored in how candidates are evaluated.

Being a strong engineer today is less about writing every line manually and more about how you work with tools. It’s about asking the right questions, spotting mistakes, and knowing when something doesn’t feel right.

Engineers are now expected to:

  • Frame ambiguous problems clearly so both humans and AI can act on them
  • Evaluate and validate what AI produces
  • Think in systems, not just functions

This shift moves the emphasis away from output and toward judgment, which isn’t something you can measure with a timed algorithm question.

What better interviews look like

If interviews are meant to predict performance, they need to evolve alongside the role.

That doesn’t mean removing structure, but it does mean redesigning it around the reality of a position.

A modern approach removes artificial constraints. Instead of restricting AI, companies can allow it and observe how candidates use it. The question isn’t whether a candidate used AI; it’s whether they used it effectively. Two candidates can talk equally well about their AI setup yet have very different levels of underlying engineering judgment. One is steering the tool, while the other is being steered by it, and it can be tricky to tell the difference.

It also means rethinking the problems themselves.

Rather than abstract puzzles, interviews could focus on tasks that resemble real work:

  • Debugging a broken feature with incomplete logs, unclear reproduction steps, and subtle edge cases.
  • Improving existing code with constraints around performance, readability, and maintainability.
  • Designing a small but realistic component with defined inputs, failure modes, and integration points.
  • Walking through a system design problem with tradeoffs around scalability, reliability, and complexity.

These introduce ambiguity, tradeoffs, and context, which are all things central to real engineering but often missing from interviews.

Evaluation criteria also need to shift. AI can generate working code quickly. What it can’t do is explain why an approach works, when it might fail, or how it fits into a larger system. A strong candidate isn’t just someone who reaches an answer, but someone who can explain their thinking, question assumptions, and adapt. Some companies are already experimenting with AI collaboration rounds, where candidates use AI tools and then walk through their decisions, how they prompted, what they trusted, and what they would change. This feels like a better way to test how someone works.

The concerns (and why they mainly miss the point)

A common concern is that allowing AI makes it harder to assess true ability. But this assumes the goal is to isolate individuals from their tools. In reality, the goal is to understand how they perform with them, as this is now largely the environment they’ll work in. There’s also the fear that AI lowers the bar. If anything, it only changes the bar. Using AI effectively requires clarity, critical thinking, and the ability to spot subtle errors. It shifts the challenge from producing code to evaluating it, which isn’t any easier, just different.

The companies that adapt their interviews will hire better engineers. The ones that don’t will keep optimising for the wrong skills.

This is a topic that has come up a lot recently with our partners, which is why we’ll be hosting a round table event: a safe space for hiring managers to thrash out ideas on how best to interview. If you’re interested in attending, please reach out to adam@zebrapeople.com
