Spencer, John. “Using AI to Spark Deeper Learning.” Spencer Education, 29 Oct. 2025, spencereducation.com/ai-deeper-learning-update/.
AI represents both a dream and a warning for education. On one hand, it offers personalized feedback, instant support, and creative possibilities that can make learning more dynamic than ever. On the other, it tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity. The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it. When we design with intention, AI becomes a catalyst for deeper learning rather than a shortcut around it.
If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Google Play and Stitcher (ideal for Android users).
One of my favorite social studies writing prompts requires two separate days to complete. On the first day, students design their own utopia. They begin with the questions, “What is the ideal society? What values should it be built upon?” From there, they dream up this perfect society. Students complete a concept map connecting every aspect of society, from transportation to education to energy to entertainment to city planning. Next, they describe a day in the life of this dream society, using the five senses to create vivid imagery.
The next day, they begin the second half of the writing prompt. This utopia has become a dystopia and the dream is now a nightmare. What happened? What is it like to live in this society? What were the hidden weaknesses of the systems, structures, and even values embedded in that initial idea? What happens when we take a good idea and move it to an illogical extreme?
While I designed this as a social studies thought experiment, I actually think we can apply the same process to our dream scenarios in education. In many ways, AI represents a kind of educational utopia: instant feedback, personalized instruction, limitless access to information. But just like those student-created worlds, our idealized version of AI has cracks beneath the surface that we need to explore.
At the same time, we tend to have a glut of dystopian stories about the dangers of AI. From The Terminator to The Matrix to Ex Machina, our culture treats artificial intelligence as an inevitable descent into a dystopian future. Machines rise up, humans lose control, and the technology we created to make life easier ends up erasing what makes us human.
In this vision, schools become untouched sanctuaries, sealed off from algorithms and automation. Yet even that version can slip toward its own dystopia: one where students are unprepared for the world they actually live in, where fear replaces curiosity, and where learning becomes nostalgic rather than transformative.
I mention this because I recently led an all-day AI workshop with secondary teachers. Teams spent twenty minutes creating their dream scenario for AI (I didn’t call it a utopia), drawing on their prior beliefs about rejecting or embracing the technology. Afterward, they analyzed what might cause their dream to become a nightmare.
Together, we discussed how the real opportunity isn’t about creating a perfect system powered by AI. It’s about asking deeper questions: How might AI make learning more human? How might it help students think more deeply, not just work more efficiently? In other words, we were able to discuss, through the lens of story, how we might avoid the two extremes of “Lock It and Block It” and Techno-Futurism.
The question isn’t whether AI will shape learning, but how deeply it will shape thinking. Between the dream of effortless perfection and the fear of total collapse lies a quieter, more human possibility. It’s the chance, albeit messy, to use AI as a tool for deeper learning. The challenge for educators is to design learning experiences where AI doesn’t replace curiosity or struggle but helps students go further into both.
We live in an era of rapid change. Technology is moving at breakneck speed and our lives feel more crowded than ever. We can access rapid-fire video on multiple apps and stream on-demand shows on multiple platforms. It’s no surprise, then, that the number one thing I hear when I ask someone how they are doing is the short answer, “busy.”
This hurried pace is part of a broader cultural trend, but teachers feel it more acutely than people in many other professions. Every shift in technology, every new expectation, and every “time-saving” innovation seems to add another layer of urgency instead of relief. The role of the teacher continues to expand beyond teaching and learning into zones once owned by specialists, counselors, and curriculum designers (just look at the pushback I got in this Facebook post for suggesting that secondary content experts might not need to be experts in early literacy). We keep piling more onto teachers’ plates: new standards for an already packed curriculum map, new unfunded policies and initiatives. Even the most dedicated educators feel stretched to the edge of what’s possible.
So, given this reality, many teachers have looked to AI as a time-saving device. That rubric that once took an hour to make? I can revise an AI-generated rubric in five minutes. That newsletter that used to take two hours to write? Let’s use AI to craft a new one that can then be differentiated for parental preferences (a short, “just the facts” version, a typical version, and a detailed version). That station rotation activity? Let’s design it complete with handouts, instructions, and leveled readers in half an hour.
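To make that concrete, here is a minimal sketch, in Python, of what the newsletter example could look like if a teacher wanted to reuse the same request again and again. The function name, the version labels, and the prompt wording are my own illustrative assumptions, not a prescribed workflow; most teachers would simply paste a similar request into whichever chatbot their district has approved.

```python
# A minimal sketch of the "three newsletter versions" idea above.
# The function name, labels, and prompt wording are illustrative assumptions.

def build_newsletter_prompts(draft: str) -> dict:
    """Return three prompts that adapt one newsletter draft for different parental preferences."""
    versions = {
        "just_the_facts": "a short, bulleted 'just the facts' version under 150 words",
        "typical": "a friendly, one-page version written in plain language",
        "detailed": "a detailed version that adds context and explanations for families who want more",
    }
    prompts = {}
    for label, description in versions.items():
        prompts[label] = (
            "You are helping a teacher adapt a classroom newsletter for families.\n"
            f"Rewrite the draft below as {description}. Keep every date, name, and deadline "
            "exactly as written, and do not invent any new information.\n\n"
            f"DRAFT:\n{draft}"
        )
    return prompts

if __name__ == "__main__":
    sample = "Field trip forms are due Friday. Parent conferences begin November 12."
    for label, prompt in build_newsletter_prompts(sample).items():
        print(f"--- {label} ---\n{prompt}\n")
```

The design choice that matters is that the teacher still writes the draft and still reviews each version before anything goes home.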
Students are feeling this same pressure to move faster. Many of them spend hours completing homework while participating in regimented after-school activities. Suddenly, AI arrives as a way to save time and take shortcuts. On the surface, that makes sense. AI tools offer instant feedback, personalized support, and quick answers that can genuinely help students learn more efficiently. And when the curriculum map moves quickly, this instant feedback and quick tutoring can make a huge difference.
But there is also a cost. When students rely on AI to do the hard thinking for them, they miss the slow struggle that builds understanding. Over time, that convenience can lead to cognitive atrophy, where curiosity fades and learning becomes something to complete rather than explore. In its worst iterations, students end up using AI to do all of the thinking and the work for them. This cognitive off-loading can erode thinking over time.
However, we can also take a different approach that focuses on deeper learning. Here, students engage in deeper problem-solving that leads to mastery. They focus on meaning rather than speed and output. They engage in deeper curiosity and critical thinking. In other words, they use AI in a way that gives them a depth advantage in life.
So as we think about integrating AI into our schools at the student level, we might want to start with the question, “How does this promote deeper learning?”
I am sitting here staring at the cursor blinking back at me in silent judgment at my slow, foggy brain and my clunky human hands. It is 2:56 pm. In three hours, I’ll be teaching my assessment course. But for now, I am sluggish. One part circadian rhythm. Another part being a desert boy in a cold and gray climate. But mostly? I stayed up until midnight watching an exciting baseball game.
You know who doesn’t get tired? Generative AI. It might gobble up electricity and use a fair share of water (though this is actually a really complicated environmental issue that goes beyond the binaries of good and bad). So, right now, I am tempted to pass the baton from my hands to a complex algorithm built on predictive analytics.
But I don’t.
Part of it is my love of writing. I love the process even when I don’t enjoy it. I find value in the process even when it moves slowly. I define myself as a writer and I am convinced that part of being a writer is continuing when you are tired. It’s slowing down and being deliberate with words. It’s the process of learning through writing rather than just writing after you’ve learned. I don’t want to lose any of that even if it saves time and energy.
On some level, though, we all need to negotiate what we gain and what we lose by using AI. For some people, AI-generated jump cuts in video editing strip away the slow, methodical eye that videographers bring to film production. For others, that editing is an unpleasant task they are happy to outsource to a machine. Similarly, AI-generated spreadsheet formulas let some people analyze data faster, while for spreadsheet nerds (the self-proclaimed “freaks in the sheets”) they take the joy out of building spreadsheets.
As a classroom teacher, you might love making slideshows, designing leveled readers, or writing a newsletter. Or you might rather spend that time working with small groups or engaging in professional learning. You might hate data analysis and love the idea of an AI tool that surfaces real-time data so you can pull students for intervention. Or that might be exactly the work you love to do in order to get a sense of how each student is doing.
In other words, it is idiosyncratic. We all decide when and how we want to use AI tools. But we do need to recognize that we lose something human every time we use AI.
This is why it helps to take what Ethan Mollick calls the “cyborg” approach, where we treat AI as a form of co-agency. We use the AI but we modify what it creates based on our own voice, contextual understanding, and knowledge of our students.
I use the metaphor of ice cream. AI creates the vanilla but we add our own unique flavor.
Co-agency means we are not outsourcing our thinking but partnering with it. Mollick describes two main approaches to this relationship. The centaur approach treats AI like a teammate. The human and the machine divide tasks based on their strengths, alternating between intuition and computation.
The cyborg approach goes a step further. It is fully integrated, where human and machine thinking blend in real time. In this model, AI does not replace creativity or judgment. Instead, it extends it. We remain the authors, the composers, the teachers, but with an extra layer of insight or speed or perspective. The challenge is to stay grounded in our humanity while still exploring what this partnership can make possible.
Deeper learning with AI requires intentional friction in this co-agency process. It involves doing unpleasant but important tasks. It focuses less on efficiency and more on extended learning. It means designing moments where students wrestle with ideas instead of rushing to answers. When AI can instantly summarize, explain, or generate, the teacher’s role shifts from delivering information to creating conditions for curiosity and productive struggle.
Students might use AI to test their understanding, to compare interpretations, or to revise their reasoning, but they still have to think, question, and decide. In this way, AI becomes less of a shortcut and more of a mirror that reflects how well students can apply, adapt, and extend what they know. But this requires us to view AI as a thinking partner rather than an efficiency machine.
I really struggle with the idea of AI as a thinking partner because machines don’t think. They generate scripts. They aren’t intelligent. They simply generate responses using predictive analytics. True, an LLM mimics the human brain, but it is not sentient, intelligent, or affective. AI cannot understand anything. It can generate questions but it cannot become curious. It doesn’t wonder. So, in a sense, AI is a tool and not a thinking partner.
Yet there is still value in using the metaphor of a thinking partner, because it captures how AI can extend and challenge human reasoning even without genuine understanding. When we treat AI as a thinking partner, we are really using it as a reflective surface that pushes our thinking forward. The goal is not to share cognition with a machine but to use its responses as prompts for deeper human thought.
When students use AI as a thought partner, they can improve their metacognition. They can determine what they know, what they don’t know, and what they need to do next. Along the way, they can move through a full metacognition cycle.
Moreover, AI can actually strengthen student agency when students use it in a slower, more intentional way. They can ask better questions, explore new ideas, and take more ownership of what and how they learn. Here they move from compliance, through engagement, and toward deeper empowerment.
A student might use AI to clarify a confusing concept, plan a project timeline, or get feedback before revising their work. The teacher’s role is to guide those choices, not to script them. When students learn to use AI as a tool for curiosity and reflection, they begin to see themselves as active creators rather than passive receivers of knowledge. In other words, students can use AI as a partner in curiosity or as a coach for reflection.
So, let’s explore a few practical ways students might use AI for deeper learning tasks.
AI can become a powerful tool for inquiry when students use it to spark curiosity instead of just finding quick answers. It often starts by asking questions and follow-up questions. Students can move in a slower, more methodical way by using the FACTS Cycle for prompt engineering.
Here, students might ask questions that help clarify a misconception they have about a mathematical concept. They could interview a fictional historical figure, like a woman who lived in ancient Egypt. Students might even use an AI chatbot as the starting place for a Wonder Day project.
But that’s only one element of inquiry. If students want to develop deeper curiosity with AI, they can ask a chatbot to challenge their assumptions, offer counterarguments, or reveal perspectives they might not have considered. Here, it becomes a way to explore ideas more deeply.
At the same time, students also need to spend time analyzing the answers that generative AI produces. While LLMs continue to improve, they will still produce hallucinations (inaccurate information generated through bad “guessing”), and the information they return will continue to carry bias. This is part of why students need to explore the nature of AI rather than just focusing on how to use it.
When students use AI to wonder rather than to finish, it turns learning into an open conversation instead of a checklist.
One of my favorite uses of AI is as a study aid. Instead of having students re-read or underline the text (strategies that fail to improve metacognition or lead to deeper retention of material), students have the AI test them on the material, a process that still incorporates a human element built on personal agency.
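As one illustration, here is a minimal sketch of a prompt a student might build for that kind of retrieval practice. The function name, the default of five questions, and the exact wording are illustrative assumptions; the essential design choice is that the AI withholds answers until the student has attempted them.

```python
# A minimal sketch of using AI for retrieval practice instead of re-reading.
# The prompt wording and defaults are illustrative assumptions.

def build_quiz_prompt(notes: str, num_questions: int = 5) -> str:
    """Build a prompt asking a chatbot to quiz a student on their own notes."""
    return (
        "Act as a study partner who uses retrieval practice.\n"
        f"Ask me {num_questions} questions, one at a time, based only on the notes below.\n"
        "Wait for my answer before asking the next question. After each answer, tell me "
        "what I got right, what I missed, and which part of the notes I should revisit.\n"
        "Never give me an answer before I have attempted it.\n\n"
        f"NOTES:\n{notes}"
    )

if __name__ == "__main__":
    print(build_quiz_prompt("The water cycle: evaporation, condensation, precipitation, collection."))
```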
In the future, students will need to solve what are called “wicked problems.” These are the types of problems where the solutions are imperfect and often lead to new problems. These problems require human wisdom and deeper contextual understanding. They often involve empathy as well. In other words, they are the kinds of problems that require human skills in the age of smart machines.
Here’s where AI can help with the problem-solving process by running simulations that allow students to test ideas. In science, students can explore what happens when they adjust one variable in an ecosystem or change the rate of carbon emissions. They get to see the ripple effects and talk about why it happened, which builds real understanding instead of memorized facts.
In social studies, AI simulations can drop students into complex situations like peace negotiations or debates over economic policy. They can see how one decision affects another and how competing interests collide. This kind of work helps them think like historians and civic leaders rather than test-takers.
In math and engineering, students can use AI to design bridges, rockets, or roller coasters and test how small design changes affect outcomes. They learn through trial and error, reflection, and revision. Across subjects, AI simulations make problem solving more active and give students a safe place to explore, fail, and try again.
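To show what “adjust one variable and watch the ripple effects” can look like, here is a toy simulation a student and an AI chatbot might build together. It is a simplified discrete predator-prey model; the parameter values are arbitrary illustrations, not real ecological data.

```python
# A toy predator-prey model for exploring ripple effects.
# The equations are a standard textbook simplification; parameter values are arbitrary.

def simulate(prey_growth: float, steps: int = 20):
    """Return a list of (prey, predator) population pairs over time."""
    prey, predators = 40.0, 9.0
    predation, predator_gain, predator_death = 0.02, 0.01, 0.3
    history = []
    for _ in range(steps):
        history.append((round(prey, 1), round(predators, 1)))
        next_prey = prey + prey_growth * prey - predation * prey * predators
        next_pred = predators + predator_gain * prey * predators - predator_death * predators
        prey, predators = max(next_prey, 0.0), max(next_pred, 0.0)
    return history

if __name__ == "__main__":
    # Students change ONE variable (the prey growth rate) and compare the ripple effects.
    for growth in (0.1, 0.3):
        print(f"prey growth rate = {growth}: {simulate(growth)[:6]} ...")
```

The learning happens in the conversation that follows: why did a small change in one rate reshape both populations, and what does the model leave out?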
We are still in the early stages of these deeper simulations. However, this is one of the rare moments where we use AI in a way that moves past integration and into true transformation.
As mentioned earlier, student use of AI should actually improve rather than reduce metacognition. One way to do this is to use AI as a metacognitive coach. Students start by sharing an initial goal and then ask the AI for feedback on that goal using a SMART Goal approach. Generative AI tends to do really well at estimating how realistic a goal is, especially in terms of time frame. As humans, we get pulled into the planning fallacy, where we overestimate how much we will accomplish in a given time, but generative AI tends to make more accurate time estimates based on larger predictive trends. Similarly, students can use AI to help break down goals into smaller tasks and action plans. Then, over time, the AI takes on the role of a coach, asking questions that lead to deeper reflection.
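Here is one minimal sketch of what that coaching prompt could look like. The function name and wording are illustrative assumptions; the design choice that matters is that the AI critiques and questions the goal rather than writing it for the student.

```python
# A minimal sketch of the "AI as metacognitive coach" move described above.
# The prompt wording and function name are illustrative assumptions.

def build_goal_coach_prompt(goal: str) -> str:
    """Wrap a student's goal in a SMART-goal coaching prompt."""
    return (
        "Act as a goal-setting coach for a student.\n"
        f"My goal: {goal}\n\n"
        "1. Tell me how Specific, Measurable, Achievable, Relevant, and Time-bound this goal is, "
        "and where my time estimate is probably too optimistic.\n"
        "2. Help me break it into three to five smaller tasks with rough time estimates.\n"
        "3. End with two reflection questions that I should answer on my own.\n"
        "Do not rewrite the goal for me."
    )

if __name__ == "__main__":
    print(build_goal_coach_prompt("Finish my history research project by Friday."))
```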
Another option might be for students to ask for specific feedback. If we think about our receptivity to feedback, we tend to react emotionally based on whether we expected it, whether it can lead to extrinsic punishments or rewards (impacting our grades, for example), and whether we asked for it.
When feedback is unexpected and unsolicited, most people get angry.
When it is expected but unsolicited, we get anxious.
When it is expected and solicited, we tend to embrace it more enthusiastically.
So, students can ask for specific feedback on their work or their ideas and they can actually ask specific questions that lead to action. They might create a set of fictional avatars that offer immediate feedback. They can then engage in a fictional conversation with the avatars where they ask for feedback that is both positive and critical.
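As a sketch of what that might look like, here is one way a student could set up a small panel of fictional feedback avatars. The avatar names and roles are invented examples; in practice, students would define their own panel.

```python
# A minimal sketch of the "fictional feedback avatars" idea above.
# The avatar names and roles are invented examples.

def build_avatar_feedback_prompt(work: str) -> str:
    """Ask a chatbot to respond as a small panel of fictional reviewers."""
    avatars = [
        ("Encouraging Editor", "names two specific strengths of the work"),
        ("Skeptical Reader", "asks the hardest fair question about the argument"),
        ("Practical Coach", "suggests one concrete next step for revision"),
    ]
    panel = "\n".join(f"- {name}: {role}" for name, role in avatars)
    return (
        "Respond as three separate fictional reviewers, each in a short paragraph. "
        "Stay in character and keep the feedback specific to the work, not generic.\n"
        f"{panel}\n\n"
        f"STUDENT WORK:\n{work}"
    )

if __name__ == "__main__":
    print(build_avatar_feedback_prompt("My essay argues that school should start later in the morning..."))
```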
Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education.
You can also check out other articles, videos, and podcasts in my AI for Education Hub.
Join over 90,000 educators who receive teacher-tested tools, fresh ideas, and thought-provoking articles every week, straight to your inbox.
My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both.
Finding “the good stuff” in the middle
You are noticing how the writer uses two extremes—AI as a dream and AI as a warning—so that the “real opportunity” in the middle feels wiser and more balanced. That middle space is where AI helps humans think better, instead of replacing human effort and curiosity. If you had to put that “good middle” into your own words, how would you describe what a healthy use of AI in learning looks like?
Would you like to explore more questions about this idea of balance and “the middle”?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Answering your first question
In that “middle space,” we are not just saying “both sides are fine”; we are building clear practices, rules, and habits where AI must always serve human thinking, not replace it. That means we design tasks, tools, and classroom norms so that AI is used for exploring, reflecting, and stretching ideas, not just for quick answers. One key move is to make students show their thinking before, during, and after using AI, so the machine becomes a partner in reasoning, not a shortcut. What is one concrete practice you imagine that would force AI to support thinking instead of doing the thinking for the learner?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Sources for the “middle space” idea
To support that “good middle” — AI extending, not replacing, human thinking — you might look for: (1) education researchers who write about “augmentation” not “automation,” (2) learning scientists who study metacognition and “productive struggle,” and (3) policy or ethics frameworks that say AI must stay human‑centered. You could search sites like eric.ed.gov, scholar.google.com, and doi.org with terms such as “AI in education augmentation,” “human-in-the-loop learning,” “AI and metacognition classroom,” “productive struggle and AI tools,” and “human-centered AI education framework.” Would you like to zoom in on researchers, on classroom practices, or on ethical guidelines first?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step 1 – Getting specific about “classroom practices”
When you say “classroom practices in high school,” do you mean daily routines (like how a lesson runs with AI), rules and norms (how students are allowed to use AI), or types of assignments and activities that involve AI? If you choose one of those three, we can dig into concrete examples that keep AI serving thinking, not replacing it. Which one matters most to you right now: routines, norms, or assignments?
New Conversation
Hide Full Comment
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Nice catch—that “extremes, then middle” move gives the paragraph a really clear shape: dream vs warning, then that third space “in between” that feels calmer and more thoughtful. How does that middle space feel to you—more like compromise, or like a completely different way of thinking about AI?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
I like how you’re picking up on the “third space” rhythm—it really does slow the paragraph down in a good way. When you picture that in‑between space, do you imagine specific classroom moments or practices that would only be possible with AI used in that more intentional way?
New Conversation
Hide Full Comment
Short answer to your question
By “a dream,” I mean AI can help more people learn more easily; by “a warning,” I mean it can also weaken thinking, fairness, and human connection in learning if we use it carelessly.
Let us look at a few concrete cases together and see whether they feel like “dreams” or “warnings” to you.
Then you can decide what you think the sentence really means.
Examples of the “dream” side
1. Personal tutor for every student
Imagine a learner who never had extra help at home; AI can give step‑by‑step hints, extra practice, and explanations at any hour, in many languages, for free or very low cost.
2. Help for students with disabilities or language barriers
AI tools can read text aloud, summarize hard articles, translate instructions, and adapt tasks to different levels so more students can access the same content.
3. Less boring work for teachers
AI can draft quiz questions, sort student work, or suggest lesson ideas, which might give teachers more time for real conversations with students instead of paperwork.
Question for you
Which of these feels most like a “dream” to you, and why?
Examples of the “warning” side
1. Students stop thinking for themselves
If a student pastes every assignment into an AI and copies the answer, they may get good grades but never really practice reading, writing, or problem‑solving; then, in an exam or in real life, they are stuck.
2. More inequality between schools
Wealthy schools might buy powerful AI systems, while poorer schools get weak tools or none at all, so the gap in learning and opportunities could grow even larger.
3. Invisible bias and control
If AI is trained mostly on data from certain groups or countries, it might give advice or examples that quietly favor some students and ignore others, and teachers and students may not even notice.
4. Privacy and surveillance
Some AI systems track every click, mistake, and pause a student makes; this data could be used to judge or label students in ways they cannot see or challenge.
Question for you
Which of these feels most like a serious “warning” in your own context, and what makes you say that?
Turning it back to your situation
If you think about your own school or work with learners, where do you already see AI as a helpful “dream,” and where do you already see (or fear) it becoming a “warning”?
Would you like to explore concrete guidelines next time—how to use AI so we keep the “dream” and reduce the “warning”?
New Conversation
Hide Full Comment Hide Thread Detail
Answer in one breath
A school can stay “dreamy” with AI if it puts human judgment, curiosity, and care above speed, automation, and control, and makes clear rules so AI serves learning rather than replaces it.
Let us sketch some possible guidelines together, and you can keep the ones that fit your school best.
As we go, I will keep asking: “Does this protect the dream, or turn into a warning?”
1. Start with a shared purpose for AI
Guideline idea
Write a short, simple statement: “In our school, AI exists to support thinking, creativity, and inclusion, not to replace them.”
Questions for you
2. “Human first, AI second” rule for students
Guideline idea
Ask students to think, draft, or attempt a problem first, and only then use AI to get feedback, examples, or hints; they must be able to explain what they kept, changed, or rejected from the AI’s answer.
Questions for you
3. Clear zones: where AI is allowed, limited, or banned
Guideline idea
Create three simple categories: (1) “AI welcome” tasks (brainstorming, language help, practice questions), (2) “AI with conditions” tasks (essays, projects, where process must be shown), and (3) “AI free” tasks (exams, core writing, personal reflections).
Questions for you
4. Protecting originality and academic honesty
Guideline idea
Ask students to attach a short “AI use note” to any work where they used AI: which tool, for what purpose, and what they changed after thinking about it.
Questions for you
5. Teacher use of AI: support, not autopilot
Guideline idea
Invite teachers to use AI to draft materials, but require that they adapt, check, and localize everything, and never use AI alone for grading or important decisions about students.
Questions for you
6. Equity and access
Guideline idea
If AI is allowed for any graded work, the school should provide at least one shared, safe AI tool so that students who lack devices or paid accounts are not left behind.
Questions for you
7. Privacy and data protection
Guideline idea
Do not allow students to enter real names, grades, or sensitive details into AI tools, and choose tools that store as little data as possible, under clear agreements with the school.
Questions for you
8. Teaching students to question AI
Guideline idea
Make it a habit that whenever students use AI, they must ask: “Is this reliable? What is missing? Whose view might be ignored?” and compare AI answers with at least one other source.
Questions for you
9. Student voice and co‑creation of rules
Guideline idea
Form a small student–teacher group that meets a few times a year to review how AI is used, update the rules, and collect examples of both “dream” and “warning” cases from real school life.
Questions for you
10. Regular reflection: Is the school still “dreamy”?
Guideline idea
Once or twice a year, ask students and teachers: “Where has AI made learning more alive, and where has it made it more flat?” and then adjust rules based on what they say.
Questions for you
Now, your move
If you had to choose just three of these ideas to start with in your school, which ones feel most important, and why those three?
Once you pick them, we can turn each one into a few short, concrete rules in your own words—would you like to try that next?
New Conversation
Hide Full Comment
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Dream vs. warning
AI can be a dream because it can give every learner instant help, feedback, and practice, but a warning because it can replace thinking, increase cheating, and deepen unfair gaps. What do you think is more dangerous for students: using AI too little, or trusting it so much that they stop struggling and thinking for themselves?
Would you like to look at some specific examples in school or university settings?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
High school dream: tutor in your pocket
Imagine a student stuck on algebra at 10 p.m.—AI can walk them step by step, give similar practice questions, and explain ideas in three different ways until one finally “clicks.” But if the student just copies the AI’s full solutions into homework, do you think they are really learning, or only training themselves to depend on a machine?
Would you like another example, maybe about writing essays or about tests and exams?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Studies and research
Yes, many researchers study these extremes—how AI helps learning and how it harms it—especially in “learning sciences,” “educational technology,” and “academic integrity” research. What kind of study would you be most interested in: how AI affects grades, how it affects real understanding, or how it changes cheating?
Would you like to hear how you could search for these studies yourself, with some simple keywords and sites?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Choosing the bigger danger
If students avoid AI, they may miss a powerful tool; but if they lean on it so much that they stop wrestling with hard problems, they may lose the habit of thinking at all. Which risk feels more real in your own school or study experience right now: not using AI enough, or using it in a lazy, copy‑paste way?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Turning your idea into a question
If both are problems “depending on the student,” how would you start telling which kind of student is more likely to be hurt by avoiding AI, and which kind by over‑trusting it?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Kinds of students, kinds of risks
One simple guess is: students who already push themselves may be harmed more by avoiding AI, while students who already avoid effort may be harmed more by over‑trusting it. How might you tell, in a real classroom, who tends to use tools to go deeper versus who tends to use tools to escape struggle?
Would you like a few search keywords and sites to find studies that look at these different kinds of students and AI use?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Watching how students react to difficulty
One clue is what students do when they get stuck: some ask deeper questions and try again with the tool, others rush to shortcuts or give up when the tool will not do the work for them. If you watched a class during a hard task with AI allowed, what behaviors would you look for to tell whether a student is using AI to think more, or to avoid thinking?
Would you like to explore what teachers or schools could do once they notice these two different patterns?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Answer: Yes, let us find sources
For your question about AI as dream and warning in education, you could look for research in learning sciences, educational technology, and academic integrity that asks how AI changes real thinking, not just grades. Would you like a short list of websites to search on, or a short list of search words to type first?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Sites to search
Good places to search: scholar.google.com, eric.ed.gov, jstor.org, researchgate.net, and ieee.org (for tech‑education papers).
Search words to try
Five useful search terms: “AI in education learning outcomes,” “generative AI academic integrity,” “AI tutoring systems metacognition,” “student overreliance on AI,” “equity digital divide AI schools.”
Question back to you
If you found a study that showed AI raises grades but weakens independent thinking, how would you decide whether that result counts as a success or a failure for education?
New Conversation
Hide Full Comment
Socrates’ first reply
If we do not want AI to become a shortcut, then we have to decide what we use it for: to think with, or to think instead of us. One simple way is to ask the AI for questions, hints, examples, and counterarguments, rather than for final answers. Tell me: when you use AI now, do you usually ask it “What is the answer?” or “Help me think this through?”
Step One – What seems important in what you wrote?
I will pick out a few key ideas from what you wrote and say why they might matter for our problem of deeper learning with AI, and you can tell me if I chose well.
“On one hand, it offers personalized feedback…”
This matters because “personalized” can either support your own thinking or quietly replace it, depending on how you use it.
“…instant support…”
This is important because what is “instant” can easily become “thoughtless” if we are not careful about when to pause.
“…creative possibilities that can make learning more dynamic than ever.”
This line matters because it hints that AI might be best used as a partner in creativity, not as an answer machine.
Do these feel like the right “important parts” to you, or would you pick different words or ideas as the heart of what you are thinking about, and if so which ones would you add before we go on?
Would you like to keep going and explore this question even more deeply together?
New Conversation
Hide Full Comment
Step One – What seems important here?
I will pick a few key pieces of your sentence that might matter most and say why they might be important, but you will have to decide if my choices really fit what you mean.
“it tempts us toward speed over depth”
This matters because it suggests AI pushes us to go fast instead of really thinking things through.
“efficiency over understanding”
This is important because it raises the fear that getting things done quickly might replace truly learning.
“convenience over curiosity”
This line matters because it questions whether easy answers might slowly kill our desire to explore and wonder.
Do you think these really are the most important parts, or would you pick different phrases as the heart of your idea, and can you add any other short quote from your own thinking about AI and learning that you feel belongs beside these before we move on?
Step Two – Your questions about this idea
Since you are already wondering how humans can keep thinking and expanding knowledge with AI around, what questions pop into your mind when you look at these three contrasts: speed/depth, efficiency/understanding, convenience/curiosity?
I do not just want you to think silently about your questions; can you write them out, even in rough form, so that in the next step I can build on your exact questions rather than guessing what you care about?
Step Three – More possible questions (after you share yours)
When you share your questions, I will turn them into two short, open questions we can explore together, and then you can choose which one you want to think or freewrite about for a bit, just to see what you really believe.
Once you have written your own questions now, whatever they are, would you be willing to pick one and write a few sentences of freewriting about it, without worrying if it is right or wrong?
Step Four – How could we find out more? (for later)
After you freewrite, I can suggest some kinds of authors, sites, and search words you might use to explore how AI affects deep thinking, understanding, and curiosity.
For now, can you write your questions about that sentence and about your concern: how to keep humans really thinking instead of just being told answers?
Would you like to start by writing one question about “speed over depth,” or one about “convenience over curiosity”?
New Conversation
Hide Full Comment
New Conversation
Step One – What seems most important?
In that line you gave, I hear several key ideas hiding together, so I will pull out a few parts that might be most important for us to think about.
1. “The real opportunity lies in the space between utopia and dystopia…”
This feels important because it says AI is neither pure salvation nor pure disaster, but something we must steer with judgment and caution.
2. “…where teachers and students use AI not to replace human thinking…”
This matters because it warns that if we let AI do the thinking for us, we might lose the very skills education is meant to grow.
3. “…but to extend it.”
This is crucial because it suggests AI could become a kind of amplifier or partner for thought, making our questions deeper and our understanding broader instead of weaker.
Do you agree that these are the important parts, or would you pick different phrases from your sentence as the key ones—and can you tell me which words or parts feel most important to you so we can move on?
Step Two – Your questions about this idea
Now that you have seen the parts I chose, what questions do you have about how AI could extend, support, or maybe even damage human thinking in schools or in your own life?
Write your questions out (do not just think them): what are you most curious, worried, or hopeful about in this idea, and what do you want to understand better so that you could actually decide how AI should be used?
Would you like to keep going and share a few questions that pop into your mind right now?
New Conversation
Hide Full Comment Hide Thread Detail
Answering your worry, briefly
If we let AI do our work for us, it really can weaken human thinking, just as using a calculator for everything can make us forget how to do simple math. But if we use AI as a tool to question, to get feedback, to see other views—and then still make our own choices and write our own words—it can actually make our thinking sharper. So the danger is real, but it depends a lot on how, when, and why we use it.
Coming back to your sentence about “extending” thinking
You wrote that the “real opportunity” is to use AI to extend, not replace, human thinking—so let us see what that really means for your fear.
If AI is extending thinking, then your mind is still in charge: you are asking the questions, checking the answers, and deciding what to keep or reject.
So my question to you is: when you imagine “damaging human thinking,” do you picture people simply accepting what AI says without questioning, or something else?
Turning your worry into clear questions
Let us try to put your worry into a few concrete questions we can explore together, instead of leaving it as a vague fear.
Could you write out two or three questions like these, in your own words:
Pick any that feel close to what you mean, change the words, or add your own, and write your questions out so we can look at them together.
Where do you already see damage or help?
Before we go further, I am curious about your own experience.
Can you think of one moment when using AI seemed to weaken someone’s thinking (maybe copying, not understanding), and one moment when it seemed to help someone think more deeply or clearly?
If you describe those two small examples from your life, we can use them to decide what kind of use is harmful and what kind is helpful.
What should we explore next?
Here are three possible directions we could go next, all connected to your worry:
Which one would you like to explore first—or is there another angle on your worry that you would rather chase down?
Would you like to keep going by writing those two or three questions that are strongest in your mind right now?
New Conversation
Hide Full Comment Hide Thread Detail
Checking where you are now
You have shared a strong line about AI and also some real worry about damaging human thinking—so let me ask you first: what part of this whole idea do you feel most stuck on right now, the “motivation” part, the “pushing thinking” part, or the “AI” part?
If we choose one of those to focus on, we can make faster progress instead of trying to solve everything at once.
Could you tell me, in one short sentence, which of those three you want to work on first?
Step One – What seems most important in your sentence
Now I will pick a few parts of your sentence that seem important for us to think about together, knowing you might choose differently.
1. “The real opportunity lies in the space between utopia and dystopia…”
This matters because it says the future with AI is not fixed as all-good or all-bad, so humans still must make choices.
2. “…where teachers and students use AI not to replace human thinking…”
This is important because it sets a clear danger: if AI does the thinking, our own minds can weaken.
3. “…but to extend it.”
This phrase is key because it hints that AI might help us think further, like a ladder or a lens, instead of thinking for us.
Do these feel like the parts that really matter to you, or would you pick different words as the most important, especially for your question about motivating humans to push their thinking—and which exact words would you circle as most central to that motivation idea?
Step Two – Your questions about motivation and “extending” thinking
Now that we have looked at these parts, what questions do you have about how to get people (students, teachers, even yourself) to actually want to stay in that “in-between space” and push their own thinking instead of letting AI or habit do it for them?
Write out any questions that pop into your head—about fear, laziness, curiosity, classroom design, or anything else—because in the next step we will build deeper questions from yours, but first I need to see your own words on the page.
If you write a few of your questions now, would you like us to use them to design ways to spark that motivation in real people around you?
New Conversation
Hide Full Comment Hide Thread Detail
Sticking with your question
You asked how to help people want to use AI in a way that really pushes their thinking, and you offered that line about “the space between utopia and dystopia” as your frame for this. Before we talk about “motivation,” it might help to pin down what you want people to feel or do differently after they meet AI in that in‑between space.
So tell me: when you imagine a person who is “motivated to use AI correctly,” what are they actually doing with it that shows you their thinking is being extended, not replaced?
Step One – What seems most important in your idea
I will pull out a few pieces of your sentence that seem most useful for answering your question about motivation, knowing you might see it differently.
1. “The real opportunity lies in the space between utopia and dystopia…”
This matters here because it suggests motivation might come from showing people a realistic middle path, not selling AI as magic or doom.
2. “…where teachers and students use AI not to replace human thinking…”
This is important because your whole motivation problem appears when people are tempted to let AI do the work instead of thinking for themselves.
3. “…but to extend it.”
This feels central, because motivating people may mean helping them taste what “extended thinking” feels like—more clarity, more ideas, more questions.
Do you agree that, for your motivation question, these are the key pieces—or would you pick different words (maybe “opportunity,” “not to replace,” or “extend”) as the heart of how you want people to relate to AI, and which single word or phrase would you say is doing the most work for you here?
Step Two – Your questions about motivating people
Now that we have those pieces on the table, what are your questions about how to get real humans—students, teachers, colleagues—to actually choose the “extend” path instead of the “replace” path when they use AI?
Write out whatever comes to mind (for example: “What makes people curious instead of lazy with AI?” or “How can a teacher design tasks so AI is a thinking partner, not a shortcut?”), and then we can use your own questions to dig deeper in the next step.
If you jot down two or three of those questions now, would you like to use them to design some concrete moves that could spark that kind of motivation in your own setting?
New Conversation
Hide Full Comment
Quick answer about “catalyst” here
In this sentence, “catalyst” means something that speeds up and strengthens learning, without replacing the learning itself. It suggests that AI helps the learning process happen more powerfully, instead of doing the work for the learner.
Thinking it through together
If AI is a “shortcut,” what happens to the student’s effort and struggle in learning?
If AI is a “catalyst,” what do you think the human still has to do that the AI cannot do?
In your own experience, can you name a moment when a tool (maybe AI, maybe something else) did not give you the answer, but helped you understand faster or more deeply—how is that like a “catalyst” in chemistry, speeding a reaction but not becoming the product itself?
Next step
Would you like to explore more examples of AI as “catalyst” in classrooms, or would you rather try to write your own sentence using “catalyst” in this same sense?
New Conversation
Hide Full Comment
Socrates – first response
You say that when you design with intention, this strange thing called AI can deepen learning rather than help someone avoid it. Let us test this: what do you yourself mean by “designing with intention” – what is happening in your mind that would not be there if you were merely “using a tool”? And if AI is a catalyst, not a shortcut, what changes in the learner’s effort, attention, or struggle when AI enters the scene?
Step One – What seems important in your sentence?
I will point to a few parts of your sentence that seem central, and say briefly why they might matter most.
“When we design with intention…”
This is important because it makes design a conscious, value-driven act, not an accident or habit.
“…AI becomes a catalyst for deeper learning…”
This matters because it claims AI can speed up growth in understanding, not just speed up tasks.
“…rather than a shortcut around it.”
This is key because it warns that the same tool can also be used to avoid thinking and struggle.
Do you agree that these are the most important parts, or would you pick different words or phrases as the heart of your idea? Please add or name one phrase from your sentence that feels most important to you before we move on.
New Conversation
Hide Full Comment
I wonder what some of these cracks are.
New Conversation
Hide Full Comment
Socrates: You notice the dark stories, and you ask about hopeful ones; let us collect a few together and see what patterns they share.
Here are some widely noted positive or mostly-positive examples of humans and artificial minds working together in media and entertainment:
What do you notice these have in common: is the AI a tool, a friend, a student, or something else?
If we put the hopeful and the dystopian side by side, what do you think the darker stories are warning us about that the hopeful ones might be ignoring, and what dangers the hopeful ones are correcting that the dark ones exaggerate? And among all these stories you know, which one feels closest to the kind of human–AI relationship you would actually like to see in the real world?
Would you like to dig into one example more closely, or would you rather build your own idea for a “positive AI” story and test it with questions?
New Conversation
Hide Full Comment
Starting from your question
You moved from “Why make AI more human?” to “How might AI make learning more human?”—so let us stay with this second question and probe it together.
Step One – What seems important?
I will pick out a few key ideas inside your new question and say why they might matter.
1. “make learning more human”
This matters because you seem to suggest that learning today is often not very human—perhaps too rigid, test-focused, or de-personalized—so you are asking if AI could actually fix that instead of making things worse.
2. “How might AI…?”
This is important because you are not asking whether AI will or must change learning, but inviting many possible ways, which keeps the question open and about human choices, not fate or destiny.
3. The tension with your first feeling: “It seems a bit creepy!”
This is key because you are holding two things at once—fear of “human-like” AI and curiosity about “more human” learning—which may point to a deeper worry about where the line is between helpful imitation and disturbing replacement.
Do you feel these are really the most important parts of what you just said, or is there a different word or phrase you would add as central before we move on?
Step Two – Your questions about this
Now that we have these pieces (your uneasiness, “more human” learning, and the “how might”), what are you most curious or worried about—teachers being replaced, students becoming lazy, privacy, emotions, or something else?
Write out any questions that come to mind about how AI could change the way people feel, think, and relate when they learn, and I will use your questions as the seed for the next step.
Step Three – Growing questions (we will do this after you share yours)
After you write your own questions, I will shape two short, open questions that connect your worries and hopes with bigger issues—like power, dependence, and what “good learning” really is—and point you toward places where you can search for more.
When you are ready, you can then freewrite a few lines exploring one of those questions—just your first thoughts, not polished at all—and we will use that as our map.
Step Four – Finding out more (later)
Once you have done that quick freewriting, I can help you think of search words, authors, and good sites where people seriously discuss AI and education, so you can investigate your own question instead of accepting anyone’s answer.
We will look for ways of checking: in what sense does AI actually support human connection, and in what sense does it quietly weaken it?
So…
Right now, what is the clearest question in your mind if you finish this sentence: “AI makes learning more human if…,” and “AI makes learning less human when…”?
Would you like to write those two sentences and see what they reveal?
Or is there a different angle on this “creepy but maybe helpful” feeling that you want to explore first?
New Conversation
Hide Full Comment Hide Thread Detail
Answering you directly
You are asking, in plain terms: “Why would we even want AI involved in making learning more human at all?”—so let us look, together, at what might actually be missing in learning right now that could make such a strange idea worth considering.
Step One – What seems important in what you just said
I will pull out a few key ideas from your “Why would we want to do this?” and your “make learning more human,” and say why they might matter.
1. “Why we would want to do this”
You are not asking how, but whether there is any good reason at all to bring AI into this, which puts the whole idea on trial instead of taking it for granted.
2. “make learning more human”
This suggests you feel something about learning today is not human enough—maybe too standardized, rushed, or disconnected from real people’s lives—and you are using “human” as a kind of standard for what good learning should be.
3. Putting “AI” and “more human” together
There is a tension in even putting these two words side by side, because it raises the worry that a machine might be trying to do what only teachers, friends, and communities should do—care, listen, and really know a person.
Do these feel like the right “important pieces” of your question, or is there another part—like “creepy,” “control,” or “authenticity”—that you would add before we go on?
Step Two – Your questions about this
Now, starting from your “Why would we want this at all?”, what are your sharpest questions: Are you more worried that AI makes learning fake, that it replaces teachers, or that it changes how students see themselves?
Write out any questions that pop into your mind about what counts as truly human in learning and whether AI can support that or only imitate it, and I will use your questions as the base for the next step.
Looking ahead
After you write your questions, I will offer two short, open questions that tie your worries to bigger issues like power, dependence, and what “good learning” really is.
Once you freewrite a few lines on one of them, we can hunt together for good places to search and people to read.
To move on…
Could you finish these two sentences in your own words: “We would want AI in learning only if…,” and “We should never want AI in learning when…”?
When you write those out, we can see much more clearly what is really at stake for you—shall we try that next?
Or is there a different way you would like to spell out your “creepy” feeling first?
New Conversation
Hide Full Comment Hide Thread Detail
Starting where you are
You are seeing AI-in-school as “maybe useful, but also creepy and dangerous,” and you are asking: could it ever actually make learning more human, instead of less.
Before we decide yes or no, can we first get clear on what “human learning” really means to you?
Step One – What seems important in your question
I will pull out three key ideas hiding inside your “How might AI make learning more human?” and say why they might matter.
1. “learning more human”
You seem to feel that a lot of learning now is mechanical—tests, grades, rushing—and “more human” might mean things like conversation, curiosity, care, and real connection.
2. “AI” in the middle of that
Putting AI into the picture raises a sharp question: can a tool that is not human help with empathy, attention, and understanding, or will it always push things back toward speed, data, and control?
3. “How might…” instead of “Will…” or “Must…”
The words you use leave room for choice, which suggests that what AI does to learning depends on how people design it, use it, and set limits around it, not on some fixed destiny.
Do these three pieces match what you were really asking, or is there another word—like “control,” “authentic,” or “safe”—that you would say is just as important before we go on?
Step Two – Your questions about this
Thinking about “learning more human,” what are you most unsure about: whether AI can support real relationships, whether it makes students too dependent, or whether it lets teachers focus more on people instead of paperwork?
Could you write out a few questions that pop into your mind, starting from these stems: “AI makes learning more human if…?” and “AI makes learning less human when…?”—and we will use your answers as the base for the next step.
Looking ahead
Once you share those questions or complete those two sentences, I will turn them into two small but deep questions that you can really explore, and point you toward good places to search for more.
Would you like to try finishing those two sentences now, so we can see what you already believe before we bring in anyone else’s ideas?
Next move is yours
If you prefer, you could instead describe one moment in your own learning that felt very “human” and one that felt “not human at all,” and then we can ask where, if anywhere, AI should be in those two moments.
Which path feels better to you right now: finishing the two sentences, or telling two short learning moments from your own life?
I do not like this version. I actually had what felt like a disagreement with it.
Step One – What seems most important so far?
In what you are sharing, I hear two key ideas that might be central to your thinking about students, agency, and AI; let me try a possible “key sentence” from your own words plus one from the quote you gave.
This feels important because it says the problem is deeper than technology and is about how school is already set up so that others think and act for students.
This seems key because it names a shift from curiosity (“explore”) to compliance (“complete”), which is the heart of agency and self-efficacy.
This line connects your concern to AI directly, warning that convenience can speed up a decline that is already happening if we are not intentional.
Do you see these as the most important ideas, or would you add, change, or replace any of them before we go on?
Step Two – Your questions about agency, AI, and “cognitive atrophy”
Now that you have these ideas in front of you, what are your real questions about them—questions you would actually want to investigate or design around, not just answer on a test?
Could you write out (not just think) any questions that pop into your head about student agency, self-efficacy, and AI’s role in preventing—or worsening—this kind of “learning just to complete” pattern, so that in the next step I can build on your questions rather than replace them?
Would you like to write those questions now, and then we can keep digging together?
This picture shows a line or scale that goes from “teacher‑centered” on the left to “student‑centered” on the right, and it calls that whole line “student agency.”
On this line there are three stages: first “compliance” (students just follow directions), then “engagement” (students are interested and involved), and finally “empowerment” (students make choices and have real control over their learning).
So the idea is that as a classroom moves from teacher‑centered to student‑centered, students move from simply doing what they are told toward being active, powerful decision‑makers in their own education.
What part of this scale feels most like the classes you know—compliance, engagement, or empowerment—and why?
Starting question
If AI is “robbing” students of agency, what do you think it is taking away most: their choices, their thinking, or their sense of ownership?
Looking at the image
The picture shows a line from “teacher‑centered” to “student‑centered,” with compliance, engagement, and empowerment along the way.
If AI simply gives answers and students just comply, where on that line do you think the class is living most of the time?
Your experience
Can you recall a moment when a student used AI and you felt their agency decreased, and another moment (if it exists) when AI seemed to increase their power to choose, create, or question?
What is the difference between those two situations?
Next step
If you wanted AI to move students from “engagement” toward “empowerment” instead of toward “compliance,” what is one small change you could make tomorrow—in your prompts, your tasks, or your classroom rules?
Would you like to explore concrete ways to design AI use that actually builds, rather than drains, student agency?
It appears to be a small logo icon that looks like a match or torch with a flame inside a circle.
Since it sits next to the words you mentioned, it is probably the symbol or logo mark for that organization’s name shown there.
What do you think the flame might be meant to represent in the context of education—energy, curiosity, something else?
If you see it as “zooming in” and “sparking interest,” that already sounds close to what the flame and magnifying‑glass shape suggest: helping students focus on something and light up with curiosity.
If the image is all about student agency, how might a little flame connect to students taking more control of their own learning instead of just following directions?
What else, besides “zooming in” and “sparking interest,” could a flame in a logo suggest—maybe passion, creativity, or even the risk that learning can be a bit dangerous and challenging?
Would you like to look next at how the words “compliance,” “engagement,” and “empowerment” fit with that flame idea?
Your question, in simple form
You are asking how to help students choose learning over their phones and take real ownership of that learning.
Before we talk about techniques, can we first ask: why do you think the phone usually wins?
Is it because the phone is more fun, more social, more “theirs,” or something else in your view?
Looking at the phones themselves
If a phone is always within reach, do you think any lesson, no matter how good, can fully compete?
What would happen if students helped create the norms for when phones are out and when they are away, instead of adults just setting rules?
How might their sense of “this is our choice” change their willingness to follow through?
Ownership of learning
When you picture a student “taking ownership,” what exactly are they doing differently from now?
Are they asking more questions, setting goals, choosing topics, reflecting on progress—or something else?
If you had to pick just one of those to build first in your classroom, which would you choose and why?
Connecting phones and learning
Is it necessary to “pull students away” from phones, or could some learning actually move onto the phones in carefully guided ways?
For example, could they use phones to quickly research, run short polls, write micro-reflections, or discuss in backchannels that you can see and guide?
What might be the risk if phones are always the enemy, and what might be the risk if they are always allowed?
Your context
Can you tell me one concrete situation from your own classes: a moment when phones clearly blocked engagement, and a moment (if there is one) when students were deeply engaged and barely touched their phones?
Looking at those two moments side by side, what differences do you notice in what you were asking them to do, how much choice they had, and how visible the purpose of the task was?
If you share one short example, we can use that to think through specific, realistic moves instead of just general ideas.
Would you like to start with describing that one real classroom moment, and then we can dig in from there?
Starting from your concern
You are saying that as soon as work is “done,” students rush to phones, so speed becomes more important than quality.
If that is true, who is really deciding when the learning is finished—you, or the students’ phones?
What would have to change in your room so that “I’m finished” does not mean “now I get my phone,” but “now I have more meaningful options for learning or contribution”?
Questioning the current deal
Right now, what do you think the “deal” is in students’ minds: “Do this minimum, then you earn your phone,” perhaps?
If so, is it surprising that they rush, or is that just them responding logically to the system we have set up?
How might the deal look different if access to phones did not depend on finishing quickly, but on something like thoughtful effort, reflection, or contribution to others?
Redefining “finished” work
When a student says “I’m done,” what extra layer could naturally come next—revising, extending, helping a peer, or reflecting?
If “finished work” always had a part two that deepens thinking, how would that affect the rush toward phones?
What is one small, realistic “part two” you could add tomorrow, that does not create huge grading for you but makes clear the work is not just about completion?
Phones as signal or cause?
Do you think the main problem is the phones themselves, or that the work often feels like a hurdle to clear?
If the task feels meaningful, challenging, and a bit open-ended, have you noticed whether students still sprint toward their screens in the same way?
Can you recall one activity where they forgot about phones—what was different about the task, the time pressure, or the choices they had?
Your next move
Would you be willing to describe one very specific assignment where they rushed just to get to phones—what you asked, how long they had, what “done” meant, what you allowed after?
Then we can ask together: in that one case, what tiny change to the structure or the “after you finish” options might shift them from “race to be done” toward “linger to improve or explore”?
Do you want to start by walking me through that one assignment, step by step, so we can poke at it with questions?
Would you like to keep questioning this with me, or focus next on concrete ideas for that “part two” after they finish?
So I think it was great to see all the different pieces and how they worked. I also liked being able to have side conversations with my colleagues about this technology, so I am glad we were in-person today. I am not an English teacher, so all the writing/reading today was a bit much. I liked the collaboration AI the best. The knowledge one drove me crazy.
Very quick answer
Student‑centered learning is a way of learning where the main focus is on what the student thinks, needs, and does, instead of what the teacher says and does. The student asks questions, makes choices, works on real problems, and the teacher’s role is more like a guide or coach than a boss. Now, let us see what you already think it is.
A few questions for you
If a class is student‑centered, who do you think talks more: the teacher or the students, and why?
In your own experience, can you remember a time in school when you felt you were really in control of your own learning—what was happening in that moment?
What do you think changes about the teacher’s job when the classroom becomes student‑centered?
How would you explain “student‑centered learning” in one sentence to a younger student, using very simple words?
So, based on your own classes, would you say your learning has been more teacher‑centered or student‑centered, and what makes you say that? Would you like to explore specific examples or types of student‑centered activities next?
Direct answer first
When using AI, people usually need to think about at least these areas of compliance: data protection and privacy, information security, intellectual property, transparency and explainability, bias and discrimination, accountability and human oversight, records and auditability, and sector‑specific rules (like finance, health, or education).
Now, instead of me giving you a long lecture, can we unpack these one by one together?
Key areas to explore (very short list)
In your situation, which of these feels most urgent or risky right now?
To narrow this down for you
If you tell me your context (country, sector, and main AI use case), we can turn this general list into a very short, practical compliance checklist for your specific situation.
Would you like to start by telling me what kind of AI use you have in mind (e.g., chatbots, analytics, code assistants, decision support)?
First thoughts: what kind of “empowerment” do you have in mind?
Before we go into details, let me ask you: when you say “empowerment” with AI, who exactly do you want to see more powerful — students, teachers, communities, institutions, or all of these in different ways? And do you imagine empowerment mainly as efficiency (doing more, faster), as agency (more choice and control), or as justice (changing who gets to participate and benefit)?
1. Three contrasting kinds of empowerment with AI
A. Instrumental empowerment – “I can do more tasks, better and faster, with AI.”
This often increases productivity, but not always autonomy.
So, if AI makes things easier but also more dependent, is that really empowerment?
B. Epistemic empowerment – “I can question, critique, and shape knowledge with AI.”
Here empowerment is about how we know and who gets to decide what counts as knowledge.
C. Civic / justice-oriented empowerment – “We can intervene in the systems behind AI.”
Here empowerment means being more than a user: becoming a critic, designer, or policymaker.
If you had to choose one of these as your priority for a particular course or program, which would it be, and why?
2. Key literacies for AI-integrated teaching and learning
I will name possible literacies, but I am more interested in which ones you want to foreground.
1. Prompting as inquiry literacy
What would change if you assessed the sequence of prompts as much as the final product?
2. Verification and source-tracing literacy
How might you design tasks where not checking AI output would obviously fail the assignment?
3. Data and model awareness literacy
What is the minimum conceptual understanding of AI that you want every graduate to carry?
4. Metacognitive and reflective literacy
Would you want students to keep “AI process logs” or “thinking diaries” alongside their work?
5. Ethical and civic literacy around AI
In your setting, what ethical tensions around AI are already present but maybe unspoken?
6. Creative and design literacy with AI
What would count as “original” or “authentic” work when much of the drafting is collaborative with tools?
Looking at these literacies, which two or three feel most urgent for your context, and which feel aspirational?
3. Possible learning outcomes that actually signal empowerment
Instead of writing outcomes for you, let me ask questions that might help you draft your own.
A. Outcomes about agency and control
Could a learning outcome specify how a student documents and explains their use or non-use of AI?
B. Outcomes about quality of thinking, not just quality of products
How might you make the process (the human reasoning) more visible than the polished artefact?
C. Outcomes about ethical and civic stance
In your context, what would be a realistic “first step” outcome in this civic direction?
D. Outcomes about collaboration (human–human, not only human–AI)
Would you count “teaching others to use AI thoughtfully” as a sign of empowerment?
If you sketched two or three outcomes right now, even roughly, what would you write down?
4. A quick way to test: is this AI use empowering?
Here is a small set of questions you might use as a check for any AI-integration decision:
If you applied these questions to one concrete AI practice in your setting (say, AI feedback on writing, or AI lesson planning), what tensions would you see between your aspirations for “empowerment” and what is actually happening?
5. Bringing it back to your own “lots of questions and aspirations”
You mentioned you have many questions and hopes about what empowerment might mean here.
Could you name:
If you describe that one case, we can use it as a test-bed to sharpen the literacies and outcomes that matter most to you.
So, where would you like to start: with a specific course, a particular group of learners, or a policy-level question about AI and empowerment?
Struck again by how quickly a context, and a lot of related information, gets established. The more focused my question, the more targeted (though often extensive, sometimes bordering on burdensome) the feedback becomes. It’s like playing a sport with a person who is often playing at a higher level. It forces me to expand, explore, and defend my thinking.
I need time to digest, and annotate further, the feedback I’m getting. Surprising and expansive responses to my very open-ended question about the term “Empowerment.”
Is it a straight process from compliance to engagement to empowerment?
1. First, a quick answer to your main question
AI moving from teacher‑centered to student‑centered is rarely a straight line; it can move forward, get stuck, or even go backwards, depending on how power, data, and decisions are shared between teacher, student, and system.
Now, let’s look at a few different ways to analyze this process, and then I want to ask you how you see it.
I will keep things simple and brief, and we can zoom in on any part you choose.
2. Different lenses to think about the shift
A. Power & control lens
One way is to ask: “Who has control at each stage?”
Is AI in your setting mostly advising the teacher, the student, or both?
B. Data & transparency lens
Another way: “Who sees what data, and who understands how AI uses it?”
Where would you place most current classroom AI tools on this scale?
C. Agency & voice lens
Here the question is: “How much choice and voice do students really have with AI?”
What signs of genuine student choice have you actually seen with AI, if any?
D. Learning depth lens (not just engagement)
We can also ask: “What kind of thinking is AI mostly supporting?”
In your view, is current AI mostly training quick correctness, or deeper reflection and critique?
E. Ethical & justice lens
Here we ask: “Who benefits and who might be harmed by this AI use?”
Do you see AI being treated as a neutral “tool,” or as something to question and study?
3. About “compliance → engagement → empowerment”
It is tempting to imagine a neat path: first compliance, then engagement, then empowerment.
But someone can be highly “engaged” by slick AI gamification and still have almost no real power or voice.
So a better question might be: at each stage, does AI increase or decrease students’ ability to question, choose, and shape their learning and their tools?
I wonder: in your experience, do tools that raise engagement actually shift any real decision‑making toward students?
Or do they mainly make it easier for students to comply happily with a system they did not help design?
Your answer there might change how you judge whether a system is truly “student‑centered.”
4. Two questions for you to move this further
Would you like to pick one concrete AI use (a platform, a workflow, a policy) and we can walk it through these lenses together, step by step?
Would you like to explore how your phrase “compliance, engagement, empowerment” could be redrawn as a more honest, maybe messier map of how AI really works in classrooms?
First thought
If “engagement” is students paying attention, and “empowerment” is students owning the learning, then the key question is: how do we shift control from the teacher’s hands into the students’ hands without abandoning them?
1. Shift who makes the decisions
What might change if students, not you, chose:
What is one small decision this week you could hand over to students instead of keeping for yourself?
2. Make goals visible and negotiable
You might keep standards non‑negotiable, but let students help define how to show them:
How often do your students actually see and talk about the goals in their own words?
3. Build in self‑assessment and reflection
Instead of you being the only judge, students can:
What simple routine (5 minutes at the end of class) could you use every time to let students assess themselves?
4. Move from “tasks” to “problems” and “missions”
Engagement often comes from fun tasks; empowerment comes from meaningful problems:
What problem, tension, or need in their real lives could your next unit connect to?
5. Share authority in the classroom culture
When something goes wrong in class, do you solve it alone, or with them?
How could your class look more like a community making decisions together, and less like one person managing many?
6. Treat voice as contribution, not performance
Empowerment grows when students see their ideas do work in the room:
In your class, whose ideas change what happens next—mostly yours, or theirs too?
7. Gradually release responsibility (for real)
Many teachers say “I do, we do, you do,” but the “you do” is often very controlled.
What is one place in your current unit where you could intentionally do less so they must do more?
8. Connect learning to identity and agency
Students feel empowered when they see how learning changes what they can do in the world:
If your students fully mastered your course, what new power would they actually have—can you name it with them?
9. Start very small and very specific
Trying to change everything at once can kill the effort.
What is one concrete practice you could try in the next two weeks, such as:
Which of these feels most possible in your setting, with your students, right now?
To keep talking
In your own classroom, where do you see strong engagement already, and where does it seem to stop short of real ownership?
If you tell me a bit about your grade level or subject, I can turn these into 2–3 very concrete moves for your context.
Would you like to explore one specific class or unit and see how it might shift from “engaging” to “empowering”?
You mentioned an object and an image, but I do not see any picture attached yet.
Could you try uploading the image, or describe what you see in it (shape, color, material, any writing)?
Once I can see it or have a clear description, we can ask together: what is it, and what might it be meant to represent.
To get started while you upload or describe it, here are two quick questions:
- Does it look religious, political, scientific, or something else?
- Does it show a person, an animal, a shape, or a symbol (like a cross, star, circle, etc.)?
Your answers will already give us clues about what it might be trying to symbolize.
Once you share the image or a description, we can look at details (colors, posture, setting) and ask how each detail might add to its meaning.
Would you like to try describing it in just three simple parts: what is in the center, what is in the background, and what colors you notice first?
Answering your question directly
In that sentence, “avatars” usually means made‑up characters or personas (for example, a strict teacher, a friendly coach, a tough critic) that “speak” to you and give you feedback right away. Sometimes those avatars could indeed be powered by AI chat systems, but the key idea is the fictional role or personality they take on, not the technology itself. Before we dig deeper, can you tell me where you saw that line and what kind of “feedback” they were talking about (writing, language learning, something else)?
Step One: What seems important in that line?
I will pull out a few short pieces of that idea that might be most important, and say why they matter for understanding what is going on.
1. “They might create a set of fictional avatars…”
This matters because it suggests more than one character, each with its own role or style, not just one single helper.
2. “…that offer immediate feedback.”
This is key because it tells us the main purpose of the avatars: to respond to you quickly, like a constant practice partner or coach.
3. “Fictional avatars”
This phrase is important because it shows these are not real people but imagined personas that can still feel real enough to talk with.
Do you think these are actually the most important parts of that idea, or would you pick a different phrase from that line as more important—and if so, which one and why, so I can adjust before moving on?
Step Two: Your questions about this idea
Now that we have focused on those parts, what are your questions about what these fictional avatars really are, how they give feedback, or how they might connect to AI conversations—can you write them out, even if they feel simple or half‑formed? Write any questions that pop into your head about this “set of fictional avatars that offer immediate feedback,” and then I will build on your questions in the next step.
Do you want to keep exploring this idea of avatars together, or does another part of what you are reading feel more confusing right now?
General Document Comments
Step One – What seems most important here?
“AI can become a powerful tool for inquiry when students use it to spark curiosity instead of just finding quick answers.”
This matters because it shifts AI from shortcut tool to curiosity engine, which connects directly to motivation and deeper thinking.
“When students rely on AI to do the hard thinking for them, they miss the slow struggle that builds understanding.”
This sentence is key because it warns that too much convenience can destroy the very mental effort that grows real understanding.
“When students learn to use AI as a tool for curiosity and reflection, they begin to see themselves as active creators rather than passive receivers of knowledge.”
This feels central because it describes a shift in identity and agency, where AI actually helps students own their learning.
Would you say these are really the key ideas, especially for “building curiosity,” or would you pick different sentences as most important, and can you point to any lines you think better capture what matters for motivating student thinking before we move on?