Spaghetti and Hammers

Interviewing in the age of AI

April 26, 2026 | 2 Minute Read

AI is everywhere. I was skeptical at first, like many others, but I can’t deny that it has become a huge productivity boost. My whole team says the same: tasks that used to take almost two days can now be done in a single morning, mostly because the boilerplate can be delegated to an AI agent after a few clear instructions. Just as importantly, that work can happen in the background while you focus on other things. Or, if you have the capacity, you can run your own pool of AI agents tackling two or three independent tasks at the same time.

Don’t get me wrong, I do believe there’s a lot of work developers still need to do, especially on planning and designing. But depending on what you are working on, a lot can be delegated to AI agents. Do you really need a developer to implement a bunch of CRUD endpoints without any particularly complex domain logic? Probably not. Should you ensure the unit tests cover edge case scenarios to avoid surprises? I say yes.

AI is a tool. And I was never a big fan of evaluating other developers on how they use a specific tool. But AI is a bit more than the vim vs. emacs fight, right?

While I see the huge benefits of using AI, I’ve also seen less experienced developers use AI blindly, without understanding what they are doing, producing worse code than they would have written by hand, and delivering it more slowly because of all the iterations their PRs went through. So yes, being a good developer will multiply your AI skills.

So if our daily job has changed this much, how should we adapt our recruitment process?

I don’t have a clear answer yet. And maybe that’s the point: if the work has changed this much, we probably need to revisit what signal we are actually trying to get from interviews.

My take is that we should still care about the fundamentals. If you don’t understand good programming practices yourself, how can you review the work done by an agent? Instead of focusing so much on writing dummy algorithms, maybe exploring domain modelling, tradeoffs, testing, and code review would give us more accurate signals. And perhaps we should add a dedicated interview section to understand how developers leverage AI for daily tasks, and where they draw the line.

  • Should we keep asking the same questions we asked before?
  • Does it make sense to keep doing leetcode challenges when AI writes most code?
  • Are take-home challenges still a good signal?
  • Or are take-home challenges an even better tool nowadays, since we can set more realistic tasks where the candidate focuses on what really matters and delegates the boilerplate to an AI agent?
  • Maybe a live system design interview is the only place where we can properly evaluate a candidate’s experience?