Monday, February 16, 2026

I Asked AI to Analyze How I Use AI as a Principal

I've been thinking a lot about how students should use AI. Then it hit me: I've been using AI myself for two years now, first with ChatGPT and more recently with Claude. So I decided to get curious about my own usage patterns.

Try This Yourself:

I gave Claude this two-part prompt:

  1. "Research a typical job description for a middle school principal"
  2. "Based on that job description, analyze our chats to see where, how, and to what percentage I use you to help meet my responsibilities as a leader and principal"

The results were eye-opening.

Instructional Leadership (40%)

  • Supervising and evaluating teachers with consistency and evidence
  • Analyzing student achievement data to identify patterns and drive instruction
  • Designing professional development that builds capacity

Communication & Stakeholder Engagement (30%)

  • Writing clear, actionable weekly updates for staff
  • Making complex educational ideas accessible
  • Creating parent communications that inform without overwhelming

Operational Management (15%)

  • Solving scheduling challenges and resource allocation
  • Building systems that streamline processes
  • Creating tools that make daily operations smoother

Student Support & School Climate (10%)

  • Supporting individual student needs
  • Developing metacognitive reflection tools
  • Managing safety and behavior systems

Strategic Planning & Vision (5%)

  • Developing frameworks for AI integration
  • Planning district-wide initiatives
  • Building long-term action plans

What struck me most was realizing AI helps me do the cognitive work—the analysis, the clarification, the pattern recognition—more efficiently. That efficiency gives me time back. Time to be in classrooms and hallways instead of tied to my office. Time to have actual conversations with staff rather than being buried in spreadsheets. Time to be present with my family instead of bringing work home every night.

I'm not claiming AI makes me a better principal. But it does help me find better balance between the work that requires my brain and the work that requires my presence.

Your Turn:

If you use AI regularly in your work, try this analysis with your own role. What would you discover? Where are you using AI most? What does that reveal about the actual demands of your job versus what the job description says?


Tuesday, February 10, 2026

Teaching AI Literacy: A Driver's Ed Approach for K-12

We need to talk about AI in schools.

Not another think piece about whether AI will replace teachers or destroy student writing. We need a practical conversation about how we're actually going to teach kids to use these tools responsibly.

Because here's the thing: we've been down this road before.

We gave kids unfettered access to smartphones and social media without teaching them how to use these tools thoughtfully. We're now dealing with mental health crises, misinformation epidemics, eroded attention spans, and a generation that struggles to distinguish reliable information from garbage.

We have a chance to get ahead of AI. Let's not waste it.

The Driver's Ed Model

What if we approached AI literacy the way we approach driver's education?

I know what you're thinking: "AI isn't a car. Kids can access it anytime on their phones." True. But that's exactly why we need a developmental framework.

We can't control AI access the way we control car keys. But we CAN teach judgment, critical thinking, and responsible use—progressively, intentionally, before problems become crises.

We don't expect kids to figure out driving on their own. We have a clear progression: observe, practice with supervision, earn independence gradually.

AI deserves the same intentional approach.

A Developmental Framework

This is a starting point—an idea to spark conversation and be refined through dialogue.

Elementary School (K-5): Passengers Learning the Road

Young students don't use AI independently, but they ARE learning about it.

They're understanding what AI is and where we encounter it daily. Critically, they're learning that AI is not alive—it doesn't think, feel, remember, or care. It recognizes patterns and predicts responses. This matters because young kids naturally anthropomorphize, and when Alexa responds or a toy "talks back," it can seem sentient.

They're also building foundational questioning skills, developing domain knowledge, and learning that good questions lead to better answers.

Students are passengers, but active ones—learning to read road signs and understand how things work before they ever touch the wheel.

Middle School (6-8): Driver's Ed Classroom—No Independent Access

Here's where we need to pause and think carefully. Most social media platforms require users to be 13+, and there's growing momentum to raise that age to 16. If we're recognizing that younger teens aren't developmentally ready for unsupervised social media, why would AI chatbots be different? They require similar critical evaluation skills and can be just as persuasive or potentially manipulative.

As students progress through middle school, they:

  • Learn how AI works, its limitations, bias, and potential for error

  • Watch teachers model AI use and analyze outputs together as a class

  • Practice crafting good questions and understanding that domain knowledge matters—the more you know, the better questions you ask

  • By 8th grade, may use controlled sandbox platforms (like school.ai) where teachers monitor all activity

  • Never have individual, open access to tools like ChatGPT, Gemini, or Claude

The critical focus: authenticity and intellectual integrity

We're already seeing middle schoolers submit AI-generated work—sometimes without citation, sometimes with citation as if that makes it okay. This reveals a fundamental misunderstanding.

Students need to grapple with what makes work authentically theirs:

  • School's goal isn't to produce outputs—it's to develop your capabilities

  • If AI did the thinking, you didn't learn—even if you cited it

  • What's the difference between AI helping you think versus doing your thinking?

  • How do you prove you actually understand something?

Drawing on Tony Frontier's work on intellectual character, this is about understanding authentic learning. Every interaction with AI becomes an opportunity to discuss purpose and integrity.

Key principle: No unsupervised AI access in middle school. All use happens in controlled environments with teacher oversight.

Early High School (9-10): The Learner's Permit

Students start using AI tools with supervision—there's always a "teacher in the car."

Here, AI becomes a tool to support learning across content areas. Example in English class: "Use AI to generate 5 possible themes in The Great Gatsby, then evaluate each one. Which are well-founded? Which are off-base? Provide textual evidence." The student does the higher-order thinking—AI provides a starting point.

Students build skills in effective prompting, source verification, and understanding when AI helps versus hurts learning. They make some choices about AI use within clear boundaries and continue developing their sense of authenticity and intellectual integrity—all in the context of real coursework.

Late High School (11-12): Junior License to Full License

Students who demonstrate competency and mature judgment earn progressive independence.

Grade 11 brings restricted independence—like a junior driver's license. Students can use AI for certain tasks (research, brainstorming, feedback) but restrictions apply. They're building judgment about when and where AI use is appropriate.

By grade 12, students can earn a full license through demonstrated mature, ethical use. They make independent decisions about AI as a tool and can articulate WHY they're using or not using it for specific tasks. They understand that AI amplifies what you bring to it—and that you can't prompt your way to expertise.

The Foundation: Learning How to Learn

School's goal is to teach students how to learn, build deep domain knowledge, and ask increasingly sophisticated questions.

Here's the AI connection: The better your domain knowledge and questioning skills, the better you can use AI. Garbage in, garbage out.

As Justin Reich emphasizes, domain knowledge matters MORE in an AI world, not less. You need to know enough to ask the right questions, recognize good versus bad responses, and know what follow-up questions to ask.

We can't move forward with AI in education without having this discussion.

This is a starting point, not a finished blueprint. There's research yet to be done, questions to answer, details to work out.

So here's my question for you:

What's missing? What would you add or change? What does your experience tell you about developmental readiness for AI use? How does this compare to what you're seeing in your school or district?

Let's talk about it. Let's refine it together. Our kids are worth getting this right.




Tuesday, January 20, 2026

The Formative Assessment Paradox: When "Helpful" AI Might Undermine Learning


Following up on the Brookings AI report - here's where implementation gets complicated.

In my building, we're using AI tools (Curipod, Newsela, Brisk) in what we think are thoughtful ways - only for formative work, not summative assessment. Students get instant feedback on writing mechanics, which frees teachers to conference more deeply on content and thinking.

Sounds responsible, right?

But here's what the Brookings report helped me see:

Formative experiences are where learning actually happens. If students spend 100 hours getting AI feedback during daily practice and only 5 hours getting human feedback on final assessments, which experience is truly shaping how they think?

The critical question isn't whether AI gives accurate feedback.

It's what students are learning ABOUT learning when they use it repeatedly.

Are they developing:

✓ Internal standards for quality, or external dependence on validation?

✓ Self-monitoring skills, or waiting for the tool to identify problems?

✓ Transferable revision strategies, or compliance with AI suggestions?

✓ Metacognitive awareness, or algorithmic responsiveness?

A concrete example from today:

I observed a science class where students were writing CER (Claim-Evidence-Reasoning) responses. The process:

  1. Complete a graphic organizer (thinking structure)
  2. Draft in Newsela for technical feedback (Does it have claim/evidence/reasoning? Grammar on target?)
  3. Revise based on AI feedback
  4. Work with a peer for content feedback (Does the science actually make sense?)

Students told me they valued the immediacy - they could revise quickly instead of waiting. The teacher was clear: "This is part of the writing process, not the whole process."

But here's my question:

Even in this thoughtful implementation, what are students actually learning?

→ Are they internalizing WHAT makes a strong claim, or learning WHERE to put claims in the structure?

→ Are they developing judgment about evidence quality, or recognizing that "adding evidence" satisfies the AI?

→ Can they transfer these skills to contexts without AI scaffolding?

This is the distinction Brookings identifies between AI-enriched and AI-diminished learning. Same tool. Same task. Same well-designed lesson. But potentially very different cognitive outcomes depending on what's happening inside students' heads.

I don't have the answer yet. But I know we need systems to tell the difference:

  • How do we assess cognitive transfer beyond the AI tool?
  • What does student self-revision look like without AI support?
  • Can students articulate their own thinking process, not just follow AI guidance?

The work isn't choosing between AI or no AI.

The work is building the structures to ensure formative AI use amplifies student thinking capacity rather than creating learned helplessness - even when the implementation looks good on the surface.

What are you seeing in your context? How are you distinguishing between AI-supported learning and AI-dependent behavior?

📄 Brookings Report: https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world/

#EducationalLeadership #AIinEducation #FormativeAssessment #Metacognition #CriticalThinking #ScienceEducation

Monday, January 12, 2026

Part 2: When the AI Reflection Tool Both Worked and Failed (And What That Taught Us)

(Continuing from Part 1, where I introduced an AI reflection tool to help teachers experience the same metacognitive practice we ask of students)

I handed my staff an AI reflection protocol. Simple premise: use it to think through a lesson, a student interaction, a classroom challenge. The AI would ask thoughtful questions, probe for specifics, help surface insights.

Here's a snippet of the prompt structure I used:

"You are a supportive instructional coach helping a teacher reflect on their practice. Ask 3-4 focused questions that encourage deep thinking about teaching decisions. After each response, briefly summarize what you're hearing before asking the next question. End by offering the teacher a choice: 'Would you like to explore any of these areas further, or would a summary of your reflections be more helpful?'"
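For anyone who wants to experiment with a protocol like this outside a chat window, here's a rough sketch of the session as a small loop. To be clear, this is not the actual tool I used: the `ask` function stands in for whatever AI service you'd call, and the hard turn cap is one way to guarantee the exit point my prompt alone couldn't always enforce. Function names and structure are illustrative assumptions.

```python
# Sketch of the reflection protocol as a small, model-agnostic loop.
# `ask` stands in for any chat-model call (e.g., an API client); the
# hard turn cap guarantees a stopping point even if the model keeps
# wanting to ask "one more question."

COACH_PROMPT = (
    "You are a supportive instructional coach helping a teacher reflect "
    "on their practice. Ask 3-4 focused questions that encourage deep "
    "thinking about teaching decisions. After each response, briefly "
    "summarize what you're hearing before asking the next question. "
    "End by offering the teacher a choice: 'Would you like to explore "
    "any of these areas further, or would a summary of your reflections "
    "be more helpful?'"
)

MAX_QUESTIONS = 4  # hard stop, so the session can't loop forever


def run_reflection(ask, get_teacher_input):
    """Run one reflection session.

    ask(system, history) -> str        : a call to your chosen AI model
    get_teacher_input(question) -> str : collects the teacher's answer
                                         (empty string means "I'm done")
    """
    history = []
    for _ in range(MAX_QUESTIONS):
        question = ask(COACH_PROMPT, history)
        answer = get_teacher_input(question)
        if not answer.strip():  # teacher opts out -- respect it
            break
        history.append((question, answer))
    return history
```

A real version would pass `history` to your provider's chat API, but the structure is the point: the model proposes, the teacher can always exit, and the loop ends no matter what. That's the design constraint my original prompt was reaching for.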

What happened next revealed everything about both the promise and the challenges of AI in education.

The Successes

Several teachers found genuine value:

  • One said the questions were more probing than she'd ask herself—it pushed her thinking in productive ways

  • A PE teacher identified new connection opportunities with students he hadn't considered

  • Even a teacher who strongly prefers paper-based reflection admitted: "I got a new perspective I hadn't thought about before."

The AI was doing something right. It was asking substantive questions that required actual thought. It was helping teachers see their practice from new angles.

This is AI's strength: It can be endlessly patient, non-judgmental, and curious in ways that create psychological safety.

The Failures

But there were significant problems:

  • Multiple teachers described a repetitive questioning loop—the AI kept asking more questions even when they'd run out of mental energy

  • One counselor felt interrogated and had to explicitly tell the AI to stop

  • A teacher lost their train of thought due to excessive prompting and felt frustrated

This is AI's weakness: It has no social awareness, no sense of when enough is enough, no ability to read the room.

Even with my attempt to structure stopping points in the prompt, the tool sometimes missed the cues that a human conversation partner would naturally pick up on.

What This Revealed

Here's what became clear through this experience:

1. AI can be a valuable reflective partner—but only with the right design constraints.

The difference between "helpful" and "frustrating" came down to how well the prompt managed conversation flow, gave users control, and created natural exit points.

2. Understanding AI means experiencing both its capacity and its limitations firsthand.

My teachers didn't just learn that AI can ask good questions. They learned that it can also miss social cues, be repetitive, and require explicit direction. That's valuable knowledge as they think about student use.

3. People are in radically different places with this technology.

Remember—only 4 out of 20 had used AI this way before. For most of my staff, this was their first experience with AI as anything other than a search engine or content generator.

Some found it immediately useful. Some were uncomfortable. Some were curious but cautious. All of those responses are valid.


Sunday, January 11, 2026

We Ask Students to Reflect. But Who Helps Teachers Do the Same?

Part 1:

As educators, we're constantly asking students to be metacognitive.

"What strategies did you use?" "Why did you make that choice?" "What would you do differently next time?"

We know that reflection deepens learning. We build it into our lessons, our assessments, our feedback cycles.

But here's what I've been thinking about: Who's asking teachers those same questions?

And more importantly—in a way that's truly personal, non-evaluative, and focused on growth rather than judgment?

The Challenge We Don't Talk About Enough

Teacher reflection is hard. Not because we don't value it, but because it requires time, space, and the right conditions.

Reflecting with a colleague can feel vulnerable—especially about lessons that didn't go well. Reflecting alone can feel isolating—we don't always know what questions to ask ourselves. Formal evaluation cycles are valuable but inherently high-stakes.

What if there was a middle ground? A way to process our teaching that felt safe, personal, and genuinely helpful?

That question led me to experiment with AI as a reflective thought partner for my staff.

The Setup

Out of 20 teachers in my building, only four had previously used AI as anything more than a content generator. Most had never experienced AI as a conversational partner—something that could ask follow-up questions, probe for specifics, help them think through complexity.

As we work to build teacher understanding and capacity around AI, I'm acutely aware that everyone is starting from a different place. Different comfort levels. Different prior experiences. Different levels of skepticism or curiosity.

So I designed a simple activity: an AI-powered reflection tool that would guide teachers through thinking about their practice.

The goal wasn't to evaluate them. It was to give them the same metacognitive experience we're constantly asking students to have.

Why This Matters Now

AI is becoming increasingly pervasive in education. Our students are using it. Parents are asking about it. Districts are developing policies around it.

But before we can help students use AI responsibly and effectively, we need to understand it ourselves. Not just what it can do, but what it does well, what it struggles with, and how it actually feels to interact with it as a learning tool.

This wasn't just about reflection. It was about experiential learning with a technology that's rapidly changing our profession.


Coming in Part 2: What happened when teachers actually used it—the successes, the failures, and what it revealed about AI as an educational tool.


Monday, August 11, 2025

Unleashed - AIDA in Use

Using the AIDA Framework to Deepen My Learning from Unleashed

This past summer, our Administrative Team was asked to read Unleashed: The Unapologetic Leader’s Guide to Empowering Everyone Around You as part of our shared professional learning. Like many assigned readings, there’s a temptation to approach it as a task—get through the chapters, check the box, and be ready to nod along in the group discussion.

But I wanted more than that. If I was going to invest my time in this book, I wanted to walk away with clear, practical applications I could bring into my leadership practice. I didn’t just want to read it; I wanted to use it.

That decision reminded me of how our students often experience assigned books or class texts. They, too, are given material with the expectation that they’ll get through it, but the real goal is for them to engage, think, and apply what they’ve learned. We want them to go beyond surface-level reading and turn ideas into something meaningful.

To make sure I was practicing what I preach, I used my AIDA Framework—Assist, Investigate, Dialogue, Apply—not just as a reading guide, but as a metacognitive structure to deepen my thinking and ensure the book would leave a lasting impact.


Putting AIDA into Practice

Assist
I began by deliberately connecting new concepts to what I already knew—past leadership experiences, district initiatives, and professional development I’ve been part of. For example, when the authors described “empowering others through trust,” I immediately thought of our own culture-building efforts. These connections grounded my reading and made me more aware of what I was bringing to the table before taking in new information.

Investigate
From there, I sought out supporting and contrasting information—research studies, survey data from my school, and examples from other districts. When I read about distributed leadership models, I went looking for case studies, teacher-leadership frameworks, and Hattie’s work on collective efficacy. This expanded my perspective beyond the book’s examples.

Dialogue
Here’s where the lines started to blur. Investigate had me gathering facts, but my mind quickly jumped into evaluating them: Would our school culture support distributed leadership? How might these ideas look in a middle school setting? I noticed that Dialogue—the reflective and analytical stage—often arrived before Investigate was “complete.” While the blending felt natural, I realized that separating them more clearly could lead to richer insights, because I’d have a fuller set of information before I began interpreting it.

Apply
Finally, I considered how these insights could be put into action. For example, after reflecting on empowerment and trust, I sketched out how I could revise our Collaborative Action Team’s meeting norms to ensure every voice has influence. That was an immediate takeaway I could implement, not just an idea I admired in the abstract.


Why This is More Than a Reading Strategy

Using AIDA with Unleashed reminded me that it’s not a tool for getting faster answers—it’s a metacognitive framework for slowing down and thinking better. Each phase forces a different mental posture:

  • Assist makes me conscious of what I already know and how it shapes my perspective.

  • Investigate ensures I have real evidence before forming judgments.

  • Dialogue pushes me to wrestle with meaning, implications, and context.

  • Apply moves reflection into action.

By working through these phases, I uncovered insights I might have missed if I had skimmed the book and gone straight to “what does this mean for me?” I saw patterns in my own leadership approach, noticed gaps between our school’s current practices and the ideals in Unleashed, and identified concrete changes worth testing.


Sample Moments Where AIDA Helped Me See More Clearly

  • While reading about empowerment, Assist surfaced memories of past initiatives where trust either accelerated or stalled progress.

  • In Investigate, looking at our 360 survey data gave me a factual baseline for how empowered staff currently feel—something I might have skipped without this phase.

  • In Dialogue, I compared that data to the book’s leadership levers, which highlighted a gap between intent and perception in our building.

  • In Apply, I drafted a plan to increase teacher input in decision-making for the upcoming school year.


Lessons for Next Time
The Assist phase felt strong and distinct, but Investigate and Dialogue overlapped more than I’d like. To keep them separate in future projects, I’ll:

  1. Fully document findings in Investigate before starting any interpretation.

  2. Use separate note pages for “what I found” and “what I think.”

  3. Time-box the phases so I’m not tempted to blend them midstream.


AIDA doesn’t replace thinking—it organizes it. Using it with Unleashed proved that it can slow down the learning process just enough to deepen understanding, surface new insights, and make application more intentional. It’s a framework I’ll continue to use for professional reading, team learning, and even personal projects where deep thinking matters.


