The Wrong Question

Every week, teachers ask some version of the same thing: "How do I stop students from using AI on this assignment?"

Wrong question. Students are using AI. They'll keep using AI. Treating every assignment as a cat-and-mouse game between you and ChatGPT means you've already lost — because that game is one you cannot win. AI detection tools are unreliable. Bans are unenforceable. The arms race of "more restrictive assignment" versus "better AI output" will always favor the AI.

The right question is different: How do I design assignments that require something AI cannot produce?

That reframe matters. It shifts your energy from enforcement to design. And it turns out that assignments AI can't do aren't exotic or particularly hard to create — they're often just better assignments.


What AI Can and Cannot Do

Before redesigning anything, understand what you're designing against.

AI is very good at:

- Producing polished, well-organized prose on virtually any topic
- Summarizing, synthesizing, and arguing from general knowledge
- Generating plausible-sounding reflections, examples, and analysis in a generic voice

AI cannot:

- Draw on a specific student's recent memories, conversations, or observations
- Engage with hyper-local, very recent events that sit outside its training data
- Show the authentic intermediate steps of a student's own thinking
- Defend work in a live, unscripted conversation

The design insight: assignments AI can't do are assignments that require the student to be present as themselves — in a specific time and place, drawing on specific experience that only they have.


Four Categories of AI-Resistant Assignment Design

1. Personal Reflection (The Irreducible Self)

AI can write about a student's topic. It cannot write about a student's specific memory, conversation, or observation from last Tuesday.

Before:

"Write a 2-page reflection on what leadership means to you."

This is completely AI-submittable. A chatbot will produce a competent, generic reflection on leadership that could belong to anyone.

After:

"Write a 2-page reflection on a moment in the last month when you saw someone make a decision you disagreed with. What did you do? What would you do differently, and why?"

Now the assignment requires something real: a recent specific memory. The student has to have actually observed something and thought about it. AI can hallucinate a plausible memory — but students who submit fabricated personal reflections are detectable in conversation, and the assignment's value (genuinely thinking about your own behavior) is lost only on students who cheat.

The principle: make recency and specificity non-optional. "In the last two weeks." "In our classroom." "In your neighborhood." "A conversation you had with a family member this semester."

Specificity defeats generative AI by default. The more local and recent the requirement, the less useful any AI becomes.


2. Local Context (The Grounded Claim)

AI has little to no visibility into what's happening on your street, in your school district, or at your local city council meeting.

Before:

"Write a persuasive essay arguing that your city should invest more in public transportation."

Generic claim, generic research. AI produces it effortlessly — and the essay could have been written by anyone, anywhere.

After:

"Write a persuasive essay arguing for or against the proposed bus service changes currently before your city council. Use at least two sources from local news published in the last 30 days."

This is far harder to outsource. Local news sources are sparse or absent in AI training data, and the specific proposal is unlikely to be in any model's knowledge base. Students have to read actual local journalism, form an opinion about an actual local decision, and argue it. The assignment is no longer abstractly about transportation policy; it's about a real thing happening in their real community.

The principle: local specificity defeats AI by default. Anything tied to your specific community, classroom discussions, or recent school events requires real engagement.

This category is especially powerful in social studies, civics, environmental science, and economics — any subject where connecting abstract concepts to real local cases is already good pedagogy.


3. Process-Over-Product (Show Your Thinking)

AI can produce a polished final product. It cannot show the messy intermediate steps that demonstrate real thinking.

Before:

"Write a 3-page analysis of the symbolism in The Great Gatsby."

After:

"Submit three drafts of your thesis statement across the week. For each draft, write one sentence explaining what you changed and why. Your final submission should include a 300-word note reflecting on how your thinking changed between draft 1 and draft 3."

This assignment is hard to outsource because the process is the assignment. A student who submits three AI-generated drafts with fabricated process notes has done more thinking — about how thinking looks — than they would have by submitting a single AI essay. The assignment creates a forcing function.

Variations:

- Require timestamped, in-class work samples (notes, outlines, photos of a lab notebook) alongside the final product
- Ask for a short recorded video in which the student narrates one revision decision
- Have students annotate their own final draft, marking where and why their argument changed

The principle: if you grade the process, the product becomes an afterthought. Students who genuinely engage with the process develop the skills. Students who fake the process reveal themselves in conversation.


4. Socratic Defense (The Live Test)

Nothing defeats AI like requiring students to explain their work in real time, in your presence.

Before:

"Write a 1,000-word essay arguing a position on climate policy."

After:

"Write a 500-word position paper on climate policy. On Friday, you'll have a 5-minute one-on-one conversation with me about your argument. I'll ask you three questions about claims you made. Your grade includes both the written paper and your ability to defend it in conversation."

This approach — sometimes called Socratic oral assessment — is the single most effective check on AI-generated work. If a student submits an AI essay, they'll be unable to answer questions about it. Most students know this, and the oral requirement changes the calculation: suddenly it's less work to actually write the essay than to memorize an AI version and fake ownership.

A student who genuinely wrote their paper can answer questions about it. A student who didn't, can't. That distinction is reliable and immediate.

The principle: any assignment with a required oral defense is automatically AI-resistant. You don't need AI detectors. You need a five-minute conversation.

For more on Socratic questioning as a classroom technique, see How to Run a Socratic Discussion That Doesn't Fall Flat.


Before/After: Redesigns Across Subjects

| Subject | Before | After |
|---------|--------|-------|
| English | Analyze the theme of isolation in Frankenstein | Find a news story from this month that connects to Frankenstein's isolation. Analyze the parallel in 300 words. |
| History | Explain the causes of the French Revolution | Interview someone over 50 in your family about a time they witnessed social or political unrest. What parallels do you see to the French Revolution? |
| Science | Write a lab report on the experiment results | Keep a real-time lab notebook during the experiment. Submit photos and written notes taken during class, plus your analysis written after. |
| Math | Solve these 10 problems | Choose one problem and record a 2-minute video explaining your reasoning as you work through it. |
| Social Studies | Write about a community issue | Attend one local public meeting (city council, school board, community group) and report what you observed. What surprised you? What would you change? |


Common Objections

"Won't students just describe a fake memory or fake local event?"

Some might. But the requirement to be specific about recent events creates risk for students who fabricate — they have to maintain the fiction in conversation. A student who writes "I attended last Tuesday's school board meeting" can be asked a single follow-up question: "What was on the agenda?" Students who fabricated can't answer it. Students who went can.

The deeper point: assignments that require real presence don't need to be cheat-proof. They need to have a built-in verification mechanism — usually a conversation.

"I don't have time to do oral assessments with 150 students."

You don't have to assess every student. Random sampling works: tell students that oral defenses will be held for a randomly selected 30% of submissions, drawn on submission day. The uncertainty alone changes the calculation, because any student might be selected. A student who submits an AI essay and faces a 30% chance of being called on to defend it faces real risk.
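For teachers comfortable with a few lines of code, the random draw itself is easy to automate. Here's a minimal sketch; the roster names and the 30% fraction are placeholders for illustration, not prescriptions:

```python
import random

def pick_for_oral_defense(roster, fraction=0.30, seed=None):
    """Randomly select a fraction of students for oral defense.

    Sampling without replacement on submission day means every
    student faces the same chance of being chosen, which is what
    creates the deterrent effect.
    """
    rng = random.Random(seed)
    k = max(1, round(len(roster) * fraction))
    return sorted(rng.sample(roster, k))

# Hypothetical roster for illustration
roster = [f"Student {i:02d}" for i in range(1, 31)]
selected = pick_for_oral_defense(roster, fraction=0.30, seed=42)
print(f"{len(selected)} of {len(roster)} students selected:")
print(", ".join(selected))
```

Passing a `seed` makes the draw reproducible if a student asks how selection worked; omit it for a fresh draw each time.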

"What about students who struggle with verbal communication?"

Oral defense doesn't have to mean live performance. It can be a recorded video explanation, a written annotation of their own essay, or a structured reflection form that asks questions only the author could answer. The goal is verifying authorship, not conducting a formal oral examination.

"I already assign process journals and nobody actually does them."

This is a grading design problem, not a process design problem. If process documentation is optional or ungraded, students skip it. If it's worth 40% of the assignment grade and is due in stages (draft 1 due Monday, reflection due Wednesday, final draft due Friday), it happens.


The Frame Shift: "Thinking-Proofing" vs. "AI-Proofing"

There's a more useful frame than AI-proofing: thinking-proofing.

The goal isn't to design assignments AI can't do — it's to design assignments that require genuine reasoning from a specific person in a specific context. That's a higher bar, and a better one.

An assignment that requires real thinking is automatically harder for AI to fake. But it's also more valuable for students regardless of AI. A student who does a genuine Socratic defense of their argument has developed a real skill. A student who responds to a real local policy issue has practiced real civic reasoning. A student who tracks their own thinking across drafts has developed metacognition that will transfer to other contexts.

The "AI era" turns out to be a useful forcing function. It has made us notice that assignments that can be completed without thinking were probably not developing thinking in the first place.


What Remains

Some assessments — standardized test prep, required writing formats, district-mandated prompts — can't be redesigned. That's fine. The goal isn't to eliminate traditional assignments. It's to ensure that enough of your assessment portfolio captures real student thinking that you can distinguish students who reason from students who don't.

A practical target: ensure at least a third of your assessments are AI-resistant by design. The rest can stay as-is, with the understanding that some AI use is probable, and your job is to develop thinking in the portions that can't be faked.


Extending This to Every Assignment

One pattern worth noting: the four assignment categories above — personal reflection, local context, process documentation, Socratic defense — aren't just AI-resistant. They're what many educators call authentic assessment: tasks that resemble real-world cognitive work rather than school-only tasks.

Students who argue about real local issues are practicing real civic reasoning. Students who defend their arguments in conversation are developing real communication skills. Students who track their own thinking are developing real metacognition.

The AI era has pushed us toward the kinds of assessments that were always best. That's not a bad outcome.

One way to extend this principle to every assignment, including ones you can't fully redesign: ThinkingEngine runs Socratic dialogue one-on-one with each student. You can require a structured thinking session before or after any traditional assignment — and get a session transcript for each student that shows how they actually reasoned, before they handed anything in.

It doesn't replace good assignment design. But it means even a traditional essay assignment can have a "thinking layer" attached — one that requires students to be present as themselves, defending their reasoning in real time, before submission.

Try a free session — no account required.


Ready to bring critical thinking into your classroom?

ThinkingEngine guides students through Socratic dialogue — questions that build reasoning, not recall. Free to start, no setup required.

Start Free →