Part 2: When the AI Reflection Tool Both Worked and Failed (And What That Taught Us)
(Continuing from Part 1, where I introduced an AI reflection tool to help teachers experience the same metacognitive practice we ask of students)
I handed my staff an AI reflection protocol. Simple premise: use it to think through a lesson, a student interaction, a classroom challenge. The AI would ask thoughtful questions, probe for specifics, help surface insights.
Here's a snippet of the prompt structure I used:
"You are a supportive instructional coach helping a teacher reflect on their practice. Ask 3-4 focused questions that encourage deep thinking about teaching decisions. After each response, briefly summarize what you're hearing before asking the next question. End by offering the teacher a choice: 'Would you like to explore any of these areas further, or would a summary of your reflections be more helpful?'"
What happened next revealed everything about both the promise and the challenges of AI in education.
The Successes
Several teachers found genuine value:
One teacher said the questions were more probing than those she would ask herself; they pushed her thinking in productive ways
A PE teacher spotted opportunities to connect with students that he hadn't considered before
Even a teacher who strongly prefers paper-based reflection admitted: "I got a new perspective I hadn't thought about before."
The AI was doing something right. It was asking substantive questions that required actual thought. It was helping teachers see their practice from new angles.
This is AI's strength: It can be endlessly patient, non-judgmental, and curious in ways that create psychological safety.
The Failures
But there were significant problems:
Multiple teachers described a repetitive questioning loop—the AI kept asking more questions even when they'd run out of mental energy
One counselor felt interrogated and had to explicitly tell the AI to stop
A teacher lost their train of thought amid the excessive prompting and came away frustrated
This is AI's weakness: It has no social awareness, no sense of when enough is enough, no ability to read the room.
Even with my attempt to structure stopping points in the prompt, the tool sometimes missed the cues that a human conversation partner would naturally pick up on.
What This Revealed
Here's what became clear through this experience:
1. AI can be a valuable reflective partner—but only with the right design constraints.
The difference between "helpful" and "frustrating" came down to how well the prompt managed conversation flow, gave users control, and created natural exit points.
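To make that concrete, here's a hedged sketch of what one design constraint could look like in code: a hard cap on the number of questions, enforced by the loop itself rather than by the prompt, plus an explicit check-in after every answer. The function names and the cap of four are my own illustrative choices, not part of the original protocol.

```python
# A sketch of conversation-flow constraints enforced in code rather than
# in the prompt alone. MAX_QUESTIONS and the check-in wording are
# illustrative choices, not the tool my staff used.
MAX_QUESTIONS = 4  # a hard ceiling the AI cannot talk its way past

def run_reflection(ask_ai, get_user_input):
    """Drive the reflection loop with explicit exit points.

    ask_ai(history) -> the next coaching question (any LLM call)
    get_user_input(prompt) -> the teacher's typed response
    """
    history = []
    for question_number in range(1, MAX_QUESTIONS + 1):
        question = ask_ai(history)
        answer = get_user_input(f"Q{question_number}: {question}\nYour thoughts: ")
        history.append((question, answer))

        # Built-in exit point: the user controls whether to continue.
        keep_going = get_user_input("Another question, or wrap up? (more/done): ")
        if keep_going.strip().lower() != "more":
            break

    return history  # hand off to a summarizer, or just save it

# Example wiring (hypothetical): run_reflection(ask_ai=my_llm_call, get_user_input=input)
```

The point of the structure is that the exit is guaranteed: the AI can be as persistent as it likes, but the loop stops at four questions, or the moment the teacher says so.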
2. Understanding AI means experiencing both its capacity and its limitations firsthand.
My teachers didn't just learn that AI can ask good questions. They learned that it can also miss social cues, be repetitive, and require explicit direction. That's valuable knowledge as they think about student use.
3. People are in radically different places with this technology.
Remember—only 4 out of 20 had used AI this way before. For most of my staff, this was their first experience with AI as anything other than a search engine or content generator.
Some found it immediately useful. Some were uncomfortable. Some were curious but cautious. All of those responses are valid.