Tuesday, January 20, 2026

The Formative Assessment Paradox: When "Helpful" AI Might Undermine Learning

Following up on the Brookings AI report - here's where implementation gets complicated.

In my building, we're using AI tools (Curipod, Newsela, Brisk) in what we think are thoughtful ways - only for formative work, not summative assessment. Students get instant feedback on writing mechanics, which frees teachers to conference more deeply on content and thinking.

Sounds responsible, right?

But here's what the Brookings report helped me see:

Formative experiences are where learning actually happens. If students spend 100 hours getting AI feedback during daily practice and only 5 hours getting human feedback on final assessments, which experience is truly shaping how they think?

The critical question isn't whether AI gives accurate feedback.

It's what students are learning ABOUT learning when they use it repeatedly.

Are they developing:

✓ Internal standards for quality, or external dependence on validation?

✓ Self-monitoring skills, or waiting for the tool to identify problems?

✓ Transferable revision strategies, or compliance with AI suggestions?

✓ Metacognitive awareness, or algorithmic responsiveness?

A concrete example from today:

I observed a science class where students were writing CER (Claim-Evidence-Reasoning) responses. The process:

  1. Complete a graphic organizer (thinking structure)
  2. Draft in Newsela for technical feedback (Does it have claim/evidence/reasoning? Grammar on target?)
  3. Revise based on AI feedback
  4. Work with a peer for content feedback (Does the science actually make sense?)

Students told me they valued the immediacy - they could revise quickly instead of waiting. The teacher was clear: "This is part of the writing process, not the whole process."

But here's my question:

Even in this thoughtful implementation, what are students actually learning?

→ Are they internalizing WHAT makes a strong claim, or learning WHERE to put claims in the structure?

→ Are they developing judgment about evidence quality, or recognizing that "adding evidence" satisfies the AI?

→ Can they transfer these skills to contexts without AI scaffolding?

This is the distinction Brookings identifies between AI-enriched and AI-diminished learning. Same tool. Same task. Same well-designed lesson. But potentially very different cognitive outcomes depending on what's happening inside students' heads.

I don't have the answer yet. But I know we need systems to tell the difference:

  • How do we assess cognitive transfer beyond the AI tool?
  • What does student self-revision look like without AI support?
  • Can students articulate their own thinking process, not just follow AI guidance?

The work isn't choosing between AI or no AI.

The work is building the structures to ensure formative AI use amplifies student thinking capacity rather than creating learned helplessness - even when the implementation looks good on the surface.

What are you seeing in your context? How are you distinguishing between AI-supported learning and AI-dependent behavior?

📄 Brookings Report: https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world/

#EducationalLeadership #AIinEducation #FormativeAssessment #Metacognition #CriticalThinking #ScienceEducation
