Should you reject a candidate for using AI in a job interview?
A hiring manager recently faced this situation: a candidate used ChatGPT during a technical screen-share exercise without asking permission and presented the AI-generated code as their own work. The company encourages AI use and even pays for GitHub Copilot subscriptions, but it never explicitly said AI was banned in interviews.
Should they reject the candidate for cheating? Or is it their fault for not stating the rules?
This scenario is becoming common, and it reveals a deeper problem: most companies haven’t thought through what they’re actually testing for when AI tools are part of daily work.
The answer isn’t “ban AI” or “allow AI.” It’s this: be explicit about what you’re measuring, then design interviews where understanding can’t be faked.
The real issue is understanding.
If a candidate hides AI use and presents generated code as their own thinking during an assessment meant to evaluate their baseline skills, that’s misrepresentation. Live coding exercises exist to test things you can’t see on a résumé. Otherwise, how do we learn how someone thinks through problems, what they do when stuck, or how they validate their work?
If they quietly outsourced the thinking to ChatGPT, you have no signal on any of that.
But here’s the complication: your company encourages AI on the job, and when you don’t clarify interview rules, some candidates will assume AI use is fair game. Not because they’re trying to cheat, but because the world changed fast and a screen-share exercise can be interpreted as “show me how you solve this with your normal tools,” not “prove you can do this without assistance.”
The distinction that matters isn’t “used AI” versus “didn’t use AI.” It’s whether they understand what they produced.
Red flags (AI as a crutch):
- Copy-pasting output without reading it
- Unable to explain what the code does
- Doesn’t catch obvious errors
- Can’t adapt when requirements change
Green flags (AI as a tool):
- Uses AI to generate a starting point, then validates and improves
- Catches errors and fixes them
- Explains reasoning and trade-offs
- Knows when to use AI and when not to
If they can’t interpret and validate results, it doesn’t matter whether they used AI or copied from Stack Overflow.
When to reject
Reject the candidate if AI use was hidden and undisclosed, if they can’t explain and defend the solution under follow-up questioning, and if your intent was to measure baseline problem-solving ability.

Consider continuing only if AI usage was disclosed (or you asked and they were honest), if they can walk you through the logic and explain their reasoning, and if you’re comfortable evaluating “AI-assisted performance” for this role.
Regardless of what you do with that one candidate, the bigger win is fixing your interview process, so you’re not guessing next time.
Stop telling yourself ‘it should be obvious’
Say the rule out loud before every exercise:
Baseline lane (No AI): “For this exercise, please don’t use AI tools or outside help. We’re evaluating your baseline problem-solving.”
Real-world lane (AI allowed): “For this exercise, you can use AI as you would at work. We’ll evaluate how you use it and how you validate the output.”
This removes the gray area instantly.
At Poly, we make it clear at the outset which exercises allow AI and which don’t. Sometimes we use proctored tests when we want to assess raw skills. Sometimes we want to see how well someone uses tools. Both matter. The key is to be explicit about which one you’re testing.
Design interviews where candidates can’t fake understanding.
Detection traps turn hiring into an arms race. The better move is to make faking collapse naturally.
Ask questions like “Walk me through your approach,” “Why did you choose that method?”, and “Where could this break?” These prompt discussion and give the candidate room to demonstrate understanding.
Add a change request midway by asking for a pivot: “Now handle this edge case,” “Now reduce memory use,” or “Now assume the input is messy.”
People who understand can adapt. This works whether they used AI, Stack Overflow, or wrote it themselves.
Use a 2-lane interview
Lane 1: Prepare a short baseline test (No AI). Give the candidate 10-15 minutes. This small exercise tells you whether the fundamentals are real.
Lane 2: Prepare a realistic exercise (AI allowed). Now you’re testing modern skills: tool judgment, verification, and adaptation, not just typing.
Both lanes matter. Both are legitimate. The key is to be intentional about what you’re testing in each scenario.
What about behavioral questions?
When candidates use AI to answer these questions, that’s misrepresentation. When we ask things like, “Tell us about a time you had to resolve conflict with a colleague,” we expect to hear about real experiences.
If you suspect AI in the responses, follow up with specific, probing questions that require real context: “What specifically did that person say?”, “How did you feel in that moment?”, or even “What happened the next day?”
Real stories have depth. Fake ones don’t. If they can’t answer without consulting a screen, it’s a red flag about whether their experience is real.
The bottom line
There’s no universal rule for AI in interviews. The question is “what are we testing for, and why?”
Sometimes you need to test raw skills without AI to assess baseline capability, understand training needs, or validate experience claims. And sometimes you need to test with AI to reflect real work conditions, test effective tool use, or evaluate judgment.
Both matter. Both are legitimate. The key is to be intentional.
Decide what you’re testing for in each question. Define your expectations explicitly. Watch how people work, not just what they produce. Design interviews where understanding can’t be faked, with or without tools.
And remember: the goal isn’t to catch people cheating. It’s to understand whether they can do the job.