Interviewing tactics for a post-LLM world

Now that candidates can use AI in interviews, how do you make the right hire?
January 21, 2026

How can you ensure you hire the right talent? 

In the post-LLM world, traditional take-home assignments are becoming obsolete, and remote interviews make it hard to tell whether candidates are genuinely responding or simply reading out the results of a prompt. On the one hand, we want our prospects to be up to date with the latest technologies; on the other, we don’t want them to outsource their day-to-day jobs to LLMs.

To separate critical thinkers from prompt readers, we need to tailor our interview processes accordingly.

The problem

Today, it can be practically impossible to distinguish between human and machine intelligence, especially if the candidate has set up a system that can listen to your questions and present results for the candidate to read aloud to an interviewer. 

Caught unprepared, more and more companies are changing their recruitment processes. Some are removing the take-home assignment or opting for on-site interviews to mitigate the risk, increasing their chances of attracting better candidates at the cost of a longer recruitment cycle.

But these solutions come at the cost of candidates feeling as though they’re being interrogated – so what’s a more sustainable path forward?

Interview strategies

The answer lies in designing interviews that embrace LLM usage rather than trying to prevent it. Instead of treating AI assistance as cheating, we can create scenarios where candidates must demonstrate skills that go beyond what an LLM can provide: contextual judgment, critical thinking, real-world experience, and the ability to validate and critique AI-generated outputs.

The following three strategies transform the interview from a test of memory or raw problem-solving into an evaluation of how candidates leverage modern tools while applying genuine expertise. Each approach reveals different aspects of a candidate’s capabilities, and they can be used individually or in combination depending on the role and seniority level.

Going deep

It is very difficult for an LLM to go deep into a specific subject without having been specifically trained on it. This means that we need to move beyond surface-level answers and probe for genuine expertise in areas where LLMs typically struggle or provide incomplete responses. Interviewers should design questions and scenarios that require candidates to demonstrate real-world understanding, nuanced judgment, and the ability to reason about complex systems.

For example, during an interview, you could ask the candidate to explain something they’ve built that they are proud of. Then ask them to go deep into specific design decisions they made, trade-offs they considered, and challenges they faced. Don’t stop at the first layer of explanation; keep probing into the details. Someone who has truly worked on the project will be able to provide insights that go beyond surface-level descriptions, while an LLM would struggle to maintain coherence and depth.

Ask them what they learned from the experience, how they would approach it differently now, and how they handled specific technical challenges.

Listening to the candidate’s explanations, interviewers should look for:

  • Depth of understanding: Does the candidate demonstrate a deep grasp of the subject matter, or are they providing generic answers?
  • Contextual judgment: Can the candidate explain why certain decisions were made based on real-world constraints?
  • Critical thinking: Does the candidate question assumptions, consider alternatives, and reflect on lessons learned?
  • Experience-based insights: Are the candidate’s explanations enriched with personal experiences and anecdotes that an LLM would not possess?

Reading code

It is well known that engineers spend most of their time reading code rather than writing it. Knowing this, many companies will have a stage in their interview process where they ask the candidate to read and explain what a certain block of code is doing.

With the advancements of LLMs, this process has become much simpler. However, we can still up our game. We could provide the candidate with a much larger codebase (one with many lines of code, high complexity, outdated documentation, and maybe even multiple languages) and ask them to explain the inner workings of the project. We would allow them to use an LLM, but provide one whose shortcomings we are already aware of. For example, a smaller or more summary-oriented model may produce confident but incorrect explanations when reasoning across a large, messy codebase. This lets us assess whether a candidate can critically evaluate LLM output rather than trust a plausible but unreliable narrative.

We would be able to observe the following:

  • How well does the candidate leverage the LLM?
  • How much of the LLM’s output does the candidate take for granted?
  • Will the candidate ask clarifying questions to the interviewer to get a better scope of the assignment, or solely rely on the output of the LLM?
  • Will the candidate be able to identify the issues with the codebase using an LLM as well as their own experience?

All of the above would give us a clear understanding of the candidate’s skills and mindset.

Reviewing code

Code reviews are an important part of an engineer’s day. Now that LLMs can generate code faster than ever, change requests have only become more difficult to parse.

While some argue that engineers can also use LLMs to help review the code, it’s much easier said than done. LLMs, without the full context of the codebase, can easily become confused and come to incorrect conclusions. 

To circumvent this in an interview setting, we could present the candidate with a large change request to review, one that has been partially generated by an LLM. To make things more interesting, we could add comments to the code that inaccurately describe what it does, and add even more confusion by including an incorrect README file. Finally, to spice things up, the change request could span multiple programming languages.

Due to the complexity of the task, a candidate won’t be able to rely on an LLM; rather, they’ll need to draw on their own experience, providing a more reliable way to discern their competence.
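To make this concrete, here is a minimal, hypothetical example of the kind of snippet such a change request could contain. The function name and docstring are deliberately inaccurate, exactly as the exercise prescribes; a strong reviewer should flag the mismatch between what the documentation claims and what the code actually does.

```python
def deduplicate(items):
    """Remove duplicate entries while preserving their original order."""
    # Trap for the reviewer: the docstring above is intentionally false.
    # set() does drop duplicates, but sorted() then reorders the result,
    # so the "preserving their original order" claim does not hold.
    return sorted(set(items))
```

A reviewer who trusts the docstring (or an LLM summary of it) would approve a function that silently reorders data; spotting this kind of drift between comments and behavior is precisely the judgment the exercise is meant to surface.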

This methodology allows us to observe the following:

  • How well can the candidate give feedback? Giving and receiving feedback constructively is a crucial skill for any engineer.
  • Is the candidate able to make their own observations about the code, or will they just take the LLM’s output for granted?
  • How well does the candidate leverage the LLM? Can they pinpoint sections of the code to keep the LLM focused? Will they ask verifying questions to ensure accuracy?
  • Will they ask for the confusion to be clarified, or will they just try to muddle through it?

How to evaluate candidates

Evaluating a candidate’s usage of LLMs during interviews requires a nuanced approach. The goal is not to penalize candidates for using modern tools, but to assess their ability to combine AI assistance with critical thinking, domain expertise, and collaborative skills.

Technical judgment is crucial. Interviewers should observe whether the candidate treats LLM output as a starting point rather than an unquestioned answer. Strong candidates will validate, cross-check, and test the information provided by the LLM, and can spot hallucinations, inaccuracies, or gaps in its explanations.

Problem decomposition is another key area. Candidates who excel will break down complex tasks into smaller, focused questions for the LLM, guiding it to specific code sections, clarifying ambiguous requirements, or asking for alternative solutions. This demonstrates their ability to use the LLM as a tool for exploration rather than a shortcut for answers.

Domain knowledge must also be evident. The best candidates supplement LLM output with their own experience and expertise, recognizing when the LLM’s suggestions are contextually incorrect or outdated. They do not rely solely on the LLM, but use it to enhance their own understanding and decision-making.

Collaboration and communication are essential as well. Candidates should engage with the interviewer, ask clarifying questions, and seek feedback. They should be able to explain their reasoning, including where and why they relied on the LLM, and demonstrate a willingness to critique and improve upon its output.

By focusing on these criteria, interviewers can identify candidates who use LLMs as effective tools and who possess the judgment, expertise, and communication skills needed for modern engineering roles.

Final thoughts

The interview process must evolve alongside technological advances. Rather than fighting LLMs, embrace them as part of the evaluation process. Design scenarios that reveal how candidates think, collaborate, and apply judgment when AI assistance is available. The goal isn’t to eliminate LLM usage; it’s to identify engineers who can leverage these tools effectively while maintaining critical thinking and domain expertise.