The Feedback Most Teachers Give (And Why It Doesn't Work)
You spend twenty minutes writing detailed feedback on an essay. You identify every weakness, annotate every unclear sentence, explain exactly what's missing.
The student reads it. They see a 3/10. They close the document.
A week later, you read their next essay. It has the same problems.
This is not a motivation problem. It's not a respect-for-teachers problem. It's a feedback design problem.
Most feedback tells students what they got wrong. It gives them the answer. And giving students the answer to a thinking problem does not teach them to think. It teaches them to avoid getting the answer wrong next time. Those are not the same thing.
The goal of formative feedback is not to correct errors. It's to create productive confusion that pushes students to engage with their own reasoning at a level they couldn't reach alone. Most teacher feedback does the opposite: it resolves the confusion instantly, so the student never has to do the hard cognitive work of figuring out what they actually don't understand.
What Good Feedback Actually Does
Let me be specific about what this looks like in practice.
Before (the feedback most teachers write):
"Your thesis is too vague. You need to be more specific about what you mean by 'success.' The New Deal is a complex topic and this claim doesn't account for the nuance."
This tells the student their thesis is bad and points vaguely toward what's missing. It answers the question. It doesn't prompt thinking.
After (feedback that creates productive confusion):
"What does 'success' mean in your argument? If you're measuring economic recovery, what specific data would support your claim? If you're measuring institutional change, what evidence would show the New Deal changed government's role? You can't defend a claim until you know which one you're making."
This doesn't answer the question. It forces the student to confront the gap in their own thinking and figure out what they actually mean. They can't complete the revision without doing the reasoning work.
Here's the key distinction: the student should not be able to answer your feedback by rereading the prompt. If they can, the feedback isn't asking them to think harder. It's asking them to listen harder. Those produce very different results.
The Question-First Principle
Good feedback almost always comes in question form.
Not:
"You need to cite more sources."
Yes:
"What would you look for in a source that would tell you whether your argument is right or wrong? What would count as strong evidence here, and what would count as weak evidence?"
The first tells them what's missing. The second forces them to think about why it's missing and what they'd need to actually defend the claim.
Students who receive question-first feedback learn to ask those questions of themselves before they submit. Students who receive answer-first feedback learn to look for a checklist before they submit. One of these habits transfers to higher-level work. The other doesn't.
The test: Would this feedback make sense if you handed it to a student who had never seen the assignment? If yes, you've probably stated too much. If no, the student has to engage with their specific thinking, which is what you actually want.
Three Feedback Strategies That Actually Change Thinking
1. Identify the reasoning gap, not the error
A student's thesis is weak. Your instinct is to mark it and explain what's missing.
Before you do that, ask yourself: why is the thesis weak? Is it:
- A knowledge gap? (They don't know what 'specific' means in argument writing.)
- A modeling gap? (They've never seen a strong thesis for this kind of prompt.)
- A reasoning gap? (They have the knowledge but didn't apply it.)
Feedback for a knowledge gap is an instruction: "Here's what a specific claim looks like."
Feedback for a modeling gap is an example: "Compare your thesis to this one."
Feedback for a reasoning gap is a question: "What would a critic say about your claim, and what would you say back?"
Most weak student work comes from a reasoning gap. Teachers usually treat it like a knowledge gap. That's why the feedback doesn't transfer.
Before:
"This thesis is too broad. Try to be more specific."
After:
"If a classmate read this thesis and asked you 'But what specifically?' could you answer them without just restating the same words? If not, the thesis isn't specific enough. Write the answer you'd give them."
2. Get the timing right
Feedback is most effective when the thinking work is still active.
If a student writes an essay in week one and gets feedback in week three, the cognitive context is gone. They're not thinking about the argument anymore. They're reading a grade and a checklist.
Better timing options:
- After a Socratic discussion: Students who've just argued a position in dialogue have strong opinions and reasoning in active memory. Asking them to write a revised thesis immediately after captures that energy and shows them what the discussion accomplished.
- Before the next draft: Not after the final submission. Before the revision. The feedback has to arrive while the student is still in the thinking, not after they've moved on to the next assignment.
- One-on-one: Written feedback is a compromise. Talking to a student about their thinking in thirty seconds does more than a page of annotations. If you can do one-on-one feedback for the highest-stakes assignments, do it.
3. Use the student's own language
The most underused feedback move: quote back what the student said and ask them to explain it.
Before:
"You say the New Deal 'helped people' but that's vague. What people? Helped how?"
After:
"You wrote that the New Deal 'helped people in real ways.' What did you mean by 'real ways'? If someone asked you to defend this claim, what specific evidence would you use? Write it out as if you're explaining it to someone who's skeptical."
The second version doesn't tell them what's missing. It forces them to work with their own language and find the gap themselves.
A rule of thumb: If your feedback could be answered by rereading the assignment prompt, it's not feedback that changes thinking. It's feedback that checks whether students read carefully. Those are different purposes, and conflating them is why so much feedback produces so little learning.
Before/After: Two Feedback Scenarios
The Weak Thesis (Essay Revision)
Before:
Teacher comment on a first draft: "Your thesis is too vague. Be more specific about what you mean by 'the American Dream.' This is a complex topic. Make sure your argument is clear."
Student response: Revises to "The American Dream is harder to achieve today than it was in the 1950s." Technically more specific. Still vague. Still lets the student write the whole essay without actually arguing anything. No new thinking.
After:
Teacher comment: "Your thesis argues that the American Dream is harder to achieve. But who is this claim about? All Americans? Some Americans? And harder compared to when? And harder how -- financially, socially, legally? If you were arguing this with a friend who disagreed, what's the one thing they'd ask you to prove? That's your thesis."
Student response: Revises to "For first-generation immigrants from Southeast Asia, the socioeconomic mobility achieved by their children's generation in the 1980s and 90s was a product of specific policy conditions (affirmative action in higher education, immigration reform, suburban expansion) that have since reversed, making it substantially harder for today's first-generation immigrants to replicate that mobility."
The student did the reasoning. The teacher just asked the right question.
The AI Work (Critical Analysis Assignment)
Before:
Teacher comment on student analysis of an AI-generated argument: "You're not critical enough of this source. AI outputs need to be evaluated carefully. Think more about what the AI got wrong."
Student response: Adds a sentence about how AI sometimes makes things up. Submits. Nothing changes.
After:
Teacher comment: "You wrote that this AI argument has a 'logical flaw.' Which specific claim in the output is flawed? What would make it not flawed? What criteria would you use to evaluate whether this argument is strong or weak -- independent of who or what made it? Write out the criteria, then apply them to the AI's argument."
Student response: Actually engages with the reasoning, identifies the specific claim, constructs evaluation criteria, and applies them. Now they can transfer that skill to evaluating any argument, AI-generated or otherwise.
What This Requires You to Give Up
Giving feedback that makes students think harder is more work upfront and less satisfying in the short term.
It's more work because you're writing questions, not answers. Questions take longer to craft well. You have to understand the specific reasoning gap well enough to prompt it without resolving it.
It's less satisfying because there's no tidy moment of resolution. You don't get the quick nod a student gives when they read "you need to be more specific." Instead you see them read a question they can't immediately answer, and their eyes go slightly unfocused for a second. That's the productive confusion moment. That's the learning.
The trade-off is worth it. Students who get question-first feedback learn to ask better questions of themselves. Students who get answer-first feedback learn to look for what they're supposed to do next. The first group can transfer their thinking to new contexts. The second group can't.
You have limited time for feedback. Use it to push thinking, not to correct mistakes. The mistakes will resolve themselves once the thinking is better.
What This Looks Like With ThinkingEngine
ThinkingEngine runs structured Socratic dialogue with students one-on-one, adapted to each student's reasoning level. You can assign an argumentation session as homework before a discussion or revision: students write a thesis, defend it in a dialogue where the AI asks about evidence, warrants, and counterarguments, and submit the transcript.
You review the transcripts before class. You can see which students constructed tight arguments, which students are conflating positions with claims, and which students can't handle a direct question about their evidence. Class time becomes targeted: you work on the specific reasoning gaps the transcripts revealed.
It's the difference between writing feedback and actually seeing what students can do when thinking is the only option.
Try a free Socratic session with your students this week.
Related Articles
- Your Students Are Already Using AI. Here's How to Use That Against Them (In a Good Way) - The feedback principles above apply directly to AI-generated student work
- How to Teach Students to Argue (Not Just Debate) - The discussion formats that build the reasoning skills good feedback assumes
- How to Run a Socratic Discussion That Doesn't Fall Flat - Using Socratic dialogue as a feedback tool before written work
- Teaching Critical Thinking with AI: A Practical Guide for Teachers - How to use AI as a thinking partner without losing the cognitive load that makes learning stick
Ready to bring critical thinking into your classroom?
ThinkingEngine guides students through Socratic dialogue — questions that build reasoning, not recall. Free to start, no setup required.
Start Free →