I see AI tools as part of the modern engineer’s toolkit, so I welcome their use in interviews.
As an engineering leader at an AI organization, I’ve redesigned our technical interview process around a simple principle. If our engineers use AI tools daily to build better products faster, our interviews should reflect that reality.
The new interview process focuses on identifying engineers who can use AI effectively to solve technical problems while maintaining a deep technical understanding. In our field, the biggest value comes from those who pair AI with critical thinking, system design, and solid engineering judgment.
To achieve this, our interview process consists of an in-person technical discussion, a home assignment, and a final deep-dive conversation. During each stage, we encourage the use of AI, and by being clear about what we’re evaluating and how exactly this approach aligns with our daily work, we’re able to identify the candidate best suited for our team.
1. The whiteboard session
Interviews at my organization start with a quick screening, and after we’ve gained a good sense of the candidate, they proceed to a technical interview in person.
During this interview, two senior engineers from the team meet with the candidate in our office. At this stage, we ask the candidate to walk us through a system they’ve built at some point in their career. The aim is for them to pick something they know well, maybe even something they’re proud of. In doing so, we:
- Create a comfortable, safe, and authentic space.
- See how deeply they understand their work.
- Get a feel for how they communicate and respond to challenges.
The team asks questions, pushes on assumptions, and dives deep into their architectural decisions. We explore how they designed their solutions for scale, what safety nets they implemented to ensure reliability, and how they handled edge cases. If time allows, we move into collaborative system design, working through a challenge from our own domain to see how they approach our complex problems in real time.
Throughout this process, we’re evaluating core engineering fundamentals, system design thinking, scalability considerations, and engineering judgment. In addition, we explore questions like: “Looking back at this solution, where could AI have been used effectively?” or “How might you design it differently with AI capabilities in mind?” This gives us insight into how they consider AI as part of their engineering toolkit, but the foundation remains their ability to apply engineering knowledge.
Finding cultural alignment
Beyond technical assessment, this meeting also gauges culture fit. These candidates could be joining the team and working closely with our engineers, so chemistry and the ability to work well together matter.
We’re specifically looking for engineers who can receive feedback gracefully and adapt their thinking when presented with new details or information. When we push back on their architectural decisions or suggest alternatives, we watch how they respond. Do they get defensive, or do they engage with the critique? Can they explain their reasoning calmly while remaining open to other perspectives?
Can they articulate their ideas professionally and, when appropriate, offer alternative suggestions? We’re drawn to candidates who show genuine curiosity about software engineering and technology, those who ask additional questions, seem energized by technical challenges, and demonstrate a drive to continuously improve and raise the bar.
2. Home assignment
I know home assignments are controversial. Many companies have abandoned them.
I haven’t. And here’s why: real engineering doesn’t happen under the artificial pressure of someone watching you code for 60 minutes. It happens when you have space to think, experiment, and iterate to get to the best solution.
Our take-home task asks the candidate to build a small app that connects to an external API (currently, OpenAI) and displays a response.
The assignment is short, out of respect for the candidate’s time, and relevant to their experience, so they aren’t being asked to grind through theoretical exercises they’ll never use. It’s also practical: it relies on the same OpenAI API we use on our team.
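For context, a bare-bones submission might look something like the sketch below. This is a minimal illustration under a few assumptions: it uses the OpenAI Python SDK, reads the key from an environment variable, and picks an arbitrary model name that a candidate would be free to swap.

```python
import os
import sys

from openai import OpenAI

# Read the API key from the environment rather than hardcoding it.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def ask(prompt: str) -> str:
    """Send one prompt to the chat completions endpoint and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    question = " ".join(sys.argv[1:]) or "Say hello in one sentence."
    print(ask(question))
```

A real submission layers a small UI or CLI, error handling, and a README on top of something like this – and that layering is exactly where the differences between candidates start to show.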
Candidates are encouraged to use AI for this exercise. But this isn’t just about allowing AI use – it’s about verifying they’re using it correctly. Can they craft effective prompts that produce quality code? Do they know how to iterate and refine AI output rather than accepting the first result? When AI generates boilerplate or suggests an approach, do they understand it well enough to modify and improve it?
Engineers who are already integrating AI into their daily workflow tend to know when to lean on AI for rapid prototyping and when to step in with human judgment for architectural decisions. This assignment helps us identify candidates who don’t just know about AI tools, but have developed the skills to use them as force multipliers in their engineering practice.
Here’s what I’ve found this exercise reveals:
- Can the candidate design a project well? We look for logical code organization that would scale if the app grew. Are components and functions separated thoughtfully, or is everything crammed into a single file? Do they structure the project in a way that makes it easy to add features or modify existing ones? (A sketch of the kind of layout we like follows this list.)
- Are the setup instructions they leave clear and thoughtful? This reveals empathy for future collaborators. Can someone else clone the repo and get it running without hunting for missing dependencies or unclear steps? Do they document any assumptions or prerequisites? This small detail often predicts how they’ll communicate in a team setting.
- Are best practices followed? We’re looking for evidence they understand production-ready code: proper error handling (especially when calling external APIs), meaningful variable and function names, appropriate use of environment variables for API keys, and sensible project structure. Corner-cutting red flags include hardcoded secrets, no error boundaries, or functions named things like “doStuff” or “handleThing.”
- Is the code clean, readable, and not over-engineered? Can someone read through their solution and understand the flow without needing extensive comments? Conversely, do they avoid unnecessary complexity?
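To make the first two points concrete, here is a rough, hypothetical layout of the kind that reads well. The file names are illustrative, not a required structure:

```
take-home/
├── README.md          # setup steps, required env vars, stated assumptions
├── .env.example       # documents OPENAI_API_KEY without committing a secret
├── app.py             # entry point: CLI or UI wiring only
├── openai_client.py   # API calls, error handling, retries
└── tests/
    └── test_client.py # a few unit tests around the client wrapper
```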
To date, only one candidate has failed to submit the take-home task, and quite a few have told me, “I really enjoyed working on the assignment.”
This is what an assignment should feel like – a chance to shine, not a trap to survive.
(And fun fact: this specific task was so effective, other teams outside the AI org started using it too.)
3. Final interview
If the assignment goes well, the candidate meets with me for a final session.
We start by walking through the code of their take-home assignment. I want to see if they really understand what they built. I don’t mind if AI helped; I want to hear how they decided to use it. If they can’t explain the decisions behind the AI-generated output, that’s a red flag for me.
I’ve had candidates who copy-pasted AI output without understanding it. When I asked about a particular function, they’d say “the AI wrote that part” and couldn’t explain the logic or trade-offs involved. One candidate had implemented a complex state management solution that was completely overkill for this simple task. When pressed, they admitted that they had simply accepted what the AI suggested without fully understanding whether it was appropriate.
Conversely, I had a candidate who demonstrated how they used AI to generate the API integration code, then explained exactly why they modified the error handling logic: “The AI suggested a basic try-catch, but I added specific handling for rate limits and network timeouts because I know OpenAI’s API can be unpredictable.” Not only could they articulate what the code did, but also why they made those specific improvements.
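For readers who want a picture of what that kind of refinement looks like, here is a hedged sketch: a retry wrapper around the call that backs off on rate limits and transient network failures instead of relying on a bare try-catch. It assumes the OpenAI Python SDK’s exception classes, and the backoff numbers are arbitrary illustrations rather than anything we mandate.

```python
import time

from openai import APIConnectionError, APITimeoutError, OpenAI, RateLimitError

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def ask_with_retries(prompt: str, max_attempts: int = 3) -> str:
    """Call the API, retrying on rate limits and transient network failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # arbitrary choice for illustration
                messages=[{"role": "user", "content": prompt}],
                timeout=30,  # fail fast instead of hanging on a slow connection
            )
            return response.choices[0].message.content
        except (RateLimitError, APITimeoutError, APIConnectionError):
            if attempt == max_attempts:
                raise
            time.sleep(2 ** (attempt - 1))  # exponential backoff: 1s, 2s, 4s, ...
```

The point isn’t this exact code; it’s that the candidate could explain why each of those choices was made.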
From there, we talk about the stuff that doesn’t show up in code:
- What challenges did they face?
- What motivates them?
- What kind of team do they want to join?
- Where do they want to grow?
- What have they learned from past experiences?
- How well do they handle criticism and feedback?
It’s not just about the answers. It’s about the alignment.

Final thoughts
This process didn’t come out of a book. It came from trying, adjusting, and listening to my team and the candidates themselves.
This approach acknowledges the reality of modern engineering. On the job, engineers will have access to AI tools. They’ll use Stack Overflow. They’ll leverage existing libraries. What matters is how they use these tools to solve real-world problems.
We no longer need engineers who can code everything from scratch. Rather, teams need those who can efficiently leverage all resources at their disposal while maintaining a deep understanding of what’s happening under the hood.