Tuesday, January 20, 2026

The Formative Assessment Paradox: When "Helpful" AI Might Undermine Learning


Following up on the Brookings AI report - here's where implementation gets complicated.

In my building, we're using AI tools (Curipod, Newsela, Brisk) in what we think are thoughtful ways - only for formative work, not summative assessment. Students get instant feedback on writing mechanics, which frees teachers to conference more deeply on content and thinking.

Sounds responsible, right?

But here's what the Brookings report helped me see:

Formative experiences are where learning actually happens. If students spend 100 hours getting AI feedback during daily practice and only 5 hours getting human feedback on final assessments, which experience is truly shaping how they think?

The critical question isn't whether AI gives accurate feedback.

It's what students are learning ABOUT learning when they use it repeatedly.

Are they developing:

✓ Internal standards for quality, or external dependence on validation?

✓ Self-monitoring skills, or waiting for the tool to identify problems?

✓ Transferable revision strategies, or compliance with AI suggestions?

✓ Metacognitive awareness, or algorithmic responsiveness?

A concrete example from today:

I observed a science class where students were writing CER (Claim-Evidence-Reasoning) responses. The process:

  1. Complete a graphic organizer (thinking structure)
  2. Draft in Newsela for technical feedback (Does it have claim/evidence/reasoning? Grammar on target?)
  3. Revise based on AI feedback
  4. Work with a peer for content feedback (Does the science actually make sense?)

Students told me they valued the immediacy - they could revise quickly instead of waiting. The teacher was clear: "This is part of the writing process, not the whole process."

But here's my question:

Even in this thoughtful implementation, what are students actually learning?

→ Are they internalizing WHAT makes a strong claim, or learning WHERE to put claims in the structure?

→ Are they developing judgment about evidence quality, or recognizing that "adding evidence" satisfies the AI?

→ Can they transfer these skills to contexts without AI scaffolding?

This is the distinction Brookings identifies between AI-enriched and AI-diminished learning. Same tool. Same task. Same well-designed lesson. But potentially very different cognitive outcomes depending on what's happening inside students' heads.

I don't have the answer yet. But I know we need systems to tell the difference:

  • How do we assess cognitive transfer beyond the AI tool?
  • What does student self-revision look like without AI support?
  • Can students articulate their own thinking process, not just follow AI guidance?

The work isn't choosing between AI or no AI.

The work is building the structures to ensure formative AI use amplifies student thinking capacity rather than creating learned helplessness - even when the implementation looks good on the surface.

What are you seeing in your context? How are you distinguishing between AI-supported learning and AI-dependent behavior?

📄 Brookings Report: https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world/

#EducationalLeadership #AIinEducation #FormativeAssessment #Metacognition #CriticalThinking #ScienceEducation

Monday, January 12, 2026


Part 2: When the AI Reflection Tool Both Worked and Failed (And What That Taught Us)

(Continuing from Part 1, where I introduced an AI reflection tool to help teachers experience the same metacognitive practice we ask of students)

I handed my staff an AI reflection protocol. Simple premise: use it to think through a lesson, a student interaction, a classroom challenge. The AI would ask thoughtful questions, probe for specifics, help surface insights.

Here's a snippet of the prompt structure I used:

"You are a supportive instructional coach helping a teacher reflect on their practice. Ask 3-4 focused questions that encourage deep thinking about teaching decisions. After each response, briefly summarize what you're hearing before asking the next question. End by offering the teacher a choice: 'Would you like to explore any of these areas further, or would a summary of your reflections be more helpful?'"

What happened next revealed everything about both the promise and the challenges of AI in education.

The Successes

Several teachers found genuine value:

  • One said the questions were more probing than she'd ask herself—it pushed her thinking in productive ways

  • A PE teacher identified new connection opportunities with students he hadn't considered

  • Even a teacher who strongly prefers paper-based reflection admitted: "I got a new perspective I hadn't thought about before."

The AI was doing something right. It was asking substantive questions that required actual thought. It was helping teachers see their practice from new angles.

This is AI's strength: It can be endlessly patient, non-judgmental, and curious in ways that create psychological safety.

The Failures

But there were significant problems:

  • Multiple teachers described a repetitive questioning loop—the AI kept asking more questions even when they'd run out of mental energy

  • One counselor felt interrogated and had to explicitly tell the AI to stop

  • A teacher lost their train of thought due to excessive prompting and felt frustrated

This is AI's weakness: It has no social awareness, no sense of when enough is enough, no ability to read the room.

Even with my attempt to structure stopping points in the prompt, the tool sometimes missed the cues that a human conversation partner would naturally pick up on.

What This Revealed

Here's what became clear through this experience:

1. AI can be a valuable reflective partner—but only with the right design constraints.

The difference between "helpful" and "frustrating" came down to how well the prompt managed conversation flow, gave users control, and created natural exit points.

2. Understanding AI means experiencing both its capacity and its limitations firsthand.

My teachers didn't just learn that AI can ask good questions. They learned that it can also miss social cues, be repetitive, and require explicit direction. That's valuable knowledge as they think about student use.

3. People are in radically different places with this technology.

Remember—only four out of 20 had used AI this way before. For most of my staff, this was their first experience with AI as anything other than a search engine or content generator.

Some found it immediately useful. Some were uncomfortable. Some were curious but cautious. All of those responses are valid.


Sunday, January 11, 2026

We Ask Students to Reflect. But Who Helps Teachers Do the Same?


Part 1:

As educators, we're constantly asking students to be metacognitive.

"What strategies did you use?" "Why did you make that choice?" "What would you do differently next time?"

We know that reflection deepens learning. We build it into our lessons, our assessments, our feedback cycles.

But here's what I've been thinking about: Who's asking teachers those same questions?

And more importantly—in a way that's truly personal, non-evaluative, and focused on growth rather than judgment?

The Challenge We Don't Talk About Enough

Teacher reflection is hard. Not because we don't value it, but because it requires time, space, and the right conditions.

Reflecting with a colleague can feel vulnerable—especially about lessons that didn't go well. Reflecting alone can feel isolating—we don't always know what questions to ask ourselves. Formal evaluation cycles are valuable but inherently high-stakes.

What if there was a middle ground? A way to process our teaching that felt safe, personal, and genuinely helpful?

That question led me to experiment with AI as a reflective thought partner for my staff.

The Setup

Out of 20 teachers in my building, only four had previously used AI as anything more than a content generator. Most had never experienced AI as a conversational partner—something that could ask follow-up questions, probe for specifics, help them think through complexity.

As we work to build teacher understanding and capacity around AI, I'm acutely aware that everyone is starting from a different place. Different comfort levels. Different prior experiences. Different levels of skepticism or curiosity.

So I designed a simple activity: an AI-powered reflection tool that would guide teachers through thinking about their practice.

The goal wasn't to evaluate them. It was to give them the same metacognitive experience we're constantly asking students to have.

Why This Matters Now

AI is becoming increasingly pervasive in education. Our students are using it. Parents are asking about it. Districts are developing policies around it.

But before we can help students use AI responsibly and effectively, we need to understand it ourselves. Not just what it can do, but what it does well, what it struggles with, and how it actually feels to interact with it as a learning tool.

This wasn't just about reflection. It was about experiential learning with a technology that's rapidly changing our profession.


Coming in Part 2: What happened when teachers actually used it—the successes, the failures, and what it revealed about AI as an educational tool.

