Spencer, John. “Using AI to Spark Deeper Learning.” Spencer Education, 29 Oct. 2025, spencereducation.com/ai-deeper-learning-update/.
AI represents both a dream and a warning for education. On one hand, it offers personalized feedback, instant support, and creative possibilities that can make learning more dynamic than ever. On the other, it tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity. The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it. When we design with intention, AI becomes a catalyst for deeper learning rather than a shortcut around it.
If you enjoy this blog but you’d like to listen to it on the go, just click on the audio below or subscribe via iTunes/Apple Podcasts (ideal for iOS users) or Google Play and Stitcher (ideal for Android users).
One of my favorite social studies writing prompts requires two separate days to complete. On the first day, students design their own utopia. They begin with the question, “What is the ideal society? What values should it be built upon?” From there, they dream up this perfect situation. Students complete a concept map connecting every aspect of society, from transportation to education to energy to entertainment to city planning. Next, they describe a day in the life of this dream society using the five senses to create greater imagery.
The next day, they begin the second half of the writing prompt. This utopia has become a dystopia and the dream is now a nightmare. What happened? What is it like to live in this society? What were the hidden weaknesses of the systems, structures, and even values embedded in that initial idea? What happens when we take a good idea and move it to an illogical extreme?
While I designed this as a social studies thought experiment, I actually think we can apply the same process to our dream scenarios in education. In many ways, AI represents a kind of educational utopia: instant feedback, personalized instruction, limitless access to information. But just like those student-created worlds, our idealized version of AI has cracks beneath the surface that we need to explore.
At the same time, we tend to have a glut of dystopian stories about the dangers of AI. From The Terminator to The Matrix to Ex Machina, our culture treats artificial intelligence as an inevitable descent into a dystopian future. Machines rise up, humans lose control, and the technology we created to make life easier ends up erasing what makes us human.
In this vision, schools become untouched sanctuaries, sealed off from algorithms and automation. Yet even that version can slip toward its own dystopia: one where students are unprepared for the world they actually live in, where fear replaces curiosity, and where learning becomes nostalgic rather than transformative.
I mention this because I recently led an all-day AI workshop with secondary teachers. They spent twenty minutes in teams creating their dream scenario with AI (I didn’t call it a utopia) based on their prior beliefs about rejecting or embracing AI. Afterward, they analyzed what might cause their dream to become a nightmare.
Together, we discussed how the real opportunity isn’t about creating a perfect system powered by AI. It’s about asking deeper questions: How might AI make learning more human? How might it help students think more deeply, not just work more efficiently? In other words, we were able to discuss, through the lens of story, how we might avoid the two extremes of Lock It and Block It and Techno-Futurism.
The question isn’t whether AI will shape learning, but how deeply it will shape thinking. Between the dream of effortless perfection and the fear of total collapse lies a quieter, more human possibility. It’s the chance, albeit messy, to use AI as a tool for deeper learning. The challenge for educators is to design learning experiences where AI doesn’t replace curiosity or struggle but helps students go further into both.
We live in an era of rapid change. Technology is moving at breakneck speed, and our lives feel more crowded than ever. We can access rapid-fire video on multiple apps and stream on-demand shows on multiple platforms. It’s no surprise, then, that the number one thing I hear when I ask someone how they are doing is the short answer, “busy.”
This hurried pace is a broader cultural trend, but teachers feel it more acutely than many other professions. Every shift in technology, every new expectation, and every “time-saving” innovation seems to add another layer of urgency instead of relief. The role of the teacher continues to expand beyond teaching and learning into zones once owned by specialists, counselors, and curriculum designers (just look at the pushback I got in this Facebook post for suggesting that secondary content experts might not need to be experts in early literacy). We keep piling more onto teachers’ plates: new standards for the already packed curriculum map, new unfunded policies and initiatives. Even the most dedicated educators feel stretched to the edge of what’s possible.
So, given this reality, many teachers have looked to AI as a time-saving device. That rubric that once took an hour to make? I can revise an AI-generated rubric in five minutes. That newsletter that used to take two hours to write? Let’s use AI to craft a new one that can then be differentiated for parental preferences (a short, “just the facts” version, a typical version, and a detailed version). That station rotation activity? Let’s design it complete with handouts, instructions, and leveled readers in half an hour.
Students are feeling this same pressure to move faster. Many of them spend hours completing homework while participating in regimented after-school activities. Suddenly, AI arrives as a way to save time and take shortcuts. On the surface, that makes sense. AI tools offer instant feedback, personalized support, and quick answers that can genuinely help students learn more efficiently. And when the curriculum map moves quickly, this instant feedback and quick tutoring can make a huge difference.
But there is also a cost. When students rely on AI to do the hard thinking for them, they miss the slow struggle that builds understanding. Over time, that convenience can lead to cognitive atrophy, where curiosity fades and learning becomes something to complete rather than explore. In its worst iterations, students end up using AI to do all of the thinking and the work for them. This cognitive off-loading can erode thinking over time.
However, we can also take a different approach that focuses on deeper learning. Here, students engage in deeper problem-solving that leads to mastery. They focus on meaning rather than speed and output. They engage in deeper curiosity and critical thinking. In other words, they use AI in a way that gives them the depth advantage in life.
So as we think about integrating AI into our schools at the student level, we might want to start with the question, “How does this promote deeper learning?”
I am sitting here staring at the cursor blinking back at me in silent judgment at my slow, foggy brain and my clunky human hands. It is 2:56 pm. In three hours, I’ll be teaching my assessment course. But for now, I am sluggish. One part circadian rhythm. Another part being a desert boy in a cold and gray climate. But mostly? I stayed up until midnight watching an exciting baseball game.
You know who doesn’t get tired? Generative AI. It might gobble up electricity and use a fair share of water (though this is actually a really complicated environmental issue that goes beyond the binaries of good and bad). So, right now, I am tempted to pass the baton from my hands to a complex algorithm built on predictive analytics.
But I don’t.
Part of it is my love of writing. I love the process even when I don’t enjoy it. I find value in the process even when it moves slowly. I define myself as a writer and I am convinced that part of being a writer is continuing when you are tired. It’s slowing down and being deliberate with words. It’s the process of learning through writing rather than just writing after you’ve learned. I don’t want to lose any of that even if it saves time and energy.
On some level, though, we all need to negotiate what we gain and what we lose by using AI. For some people, AI-generated jump cuts in video editing take out the slow, methodical eye that videographers develop through film production. To others, editing is an unpleasant task they can happily outsource to a machine. For some, AI-generated spreadsheet formulas let them analyze data faster. For spreadsheet nerds (the self-proclaimed “freaks in the sheets”), that takes the joy out of making spreadsheets.
As a classroom teacher, you might love making slideshows, designing leveled readers, or writing a newsletter. Or you might rather hand those tasks off so you can spend more time in small groups or engaged in professional learning. You might hate data analysis and love the idea of an AI tool that surfaces real-time data to help you pull students for intervention. Or that might be something you love to do in order to get a sense of how each student is doing.
In other words, it is idiosyncratic. We all decide when and how we want to use AI tools. But we do need to recognize that we lose something human every time we use AI.
This is why it helps to take what Ethan Mollick calls the “cyborg” approach, where we treat AI as a form of co-agency. We use the AI but we modify what it creates based on our own voice, contextual understanding, and knowledge of our students.
I use the metaphor of ice cream. AI creates the vanilla but we add our own unique flavor.
Co-agency means we are not outsourcing our thinking but partnering with it. Mollick describes two main approaches to this relationship. The centaur approach treats AI like a teammate. The human and the machine divide tasks based on their strengths, alternating between intuition and computation.
The cyborg approach goes a step further. It is fully integrated, where human and machine thinking blend in real time. In this model, AI does not replace creativity or judgment. Instead, it extends it. We remain the authors, the composers, the teachers, but with an extra layer of insight or speed or perspective. The challenge is to stay grounded in our humanity while still exploring what this partnership can make possible.
Deeper learning with AI requires intentional friction in this co-agency process. It involves doing unpleasant but important tasks. It focuses less on efficiency and more on extended learning. It means designing moments where students wrestle with ideas instead of rushing to answers. When AI can instantly summarize, explain, or generate, the teacher’s role shifts from delivering information to creating conditions for curiosity and productive struggle.
Students might use AI to test their understanding, to compare interpretations, or to revise their reasoning, but they still have to think, question, and decide. In this way, AI becomes less of a shortcut and more of a mirror that reflects how well students can apply, adapt, and extend what they know. But this requires us to view AI as a thinking partner rather than an efficiency machine.
I really struggle with the idea of AI as a thinking partner because machines don’t think. They generate scripts. They aren’t intelligent. They simply generate responses using predictive analytics. True, an LLM is loosely modeled on the human brain, but it is not sentient, intelligent, or affective. AI cannot understand anything. It can generate questions, but it cannot become curious. It doesn’t wonder. So, in a sense, AI is a tool and not a thinking partner.
Yet there is still value in using the metaphor of a thinking partner, because it captures how AI can extend and challenge human reasoning even without genuine understanding. When we treat AI as a thinking partner, we are really using it as a reflective surface that pushes our thinking forward. The goal is not to share cognition with a machine but to use its responses as prompts for deeper human thought.
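To make that earlier phrase “predictive analytics” concrete, here is a toy sketch. Everything in it is invented for illustration (a real LLM learns billions of parameters over tokens, not a handful of word counts), but the core mechanic is the same: turn observed patterns into probabilities, then sample the next token.

```python
import random

# Toy bigram "language model": counts of which word tends to follow which.
# The vocabulary and counts are invented for illustration only.
bigram_counts = {
    "deeper": {"learning": 8, "thinking": 4, "questions": 2},
    "learning": {"happens": 5, "requires": 3},
}

def next_word_distribution(word):
    """Turn raw follow-counts into a probability distribution."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample_next(word, rng):
    """Sample the next word in proportion to its probability."""
    dist = next_word_distribution(word)
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# The model "predicts" without understanding anything about depth or learning.
print(next_word_distribution("deeper"))
print(sample_next("deeper", random.Random(42)))
```

Nothing in that sketch wonders, doubts, or cares about the answer, which is exactly the point of the distinction above.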
When students use AI as a thought partner, they can improve their metacognition. They can determine what they know, what they don’t know, and what they need to do next. Along the way, they can move through this metacognition cycle.
Moreover, AI can actually strengthen student agency when students use it in a slower, more intentional way. They can ask better questions, explore new ideas, and take more ownership of what and how they learn. Here they move past compliance and engagement toward deeper empowerment.
A student might use AI to clarify a confusing concept, plan a project timeline, or get feedback before revising their work. The teacher’s role is to guide those choices, not to script them. When students learn to use AI as a tool for curiosity and reflection, they begin to see themselves as active creators rather than passive receivers of knowledge. In other words, students can use AI as a partner in curiosity or as a coach for reflection.
So, let’s explore a few practical ways students might use AI for deeper learning tasks.
AI can become a powerful tool for inquiry when students use it to spark curiosity instead of just finding quick answers. That often starts with asking questions, then follow-up questions. Students can move in a slower, more methodical way by using the FACTS Cycle for prompt engineering.
Here, students might ask questions that help clarify a misconception they have about a mathematical concept. They could interview a fictional historical figure, like a woman who lived in ancient Egypt. Students might even use an AI chatbot as the starting place for a Wonder Day project.
But that’s only one element of inquiry. If students want to develop deeper curiosity with AI, they can ask a chatbot to challenge their assumptions, offer counterarguments, or reveal perspectives they might not have considered. Here, it becomes a way to explore ideas more deeply.
At the same time, students also need to spend time analyzing the answers that generative AI produces. While LLMs continue to improve, they will still produce hallucinations (inaccurate information generated from bad “guessing”). The information will continue to carry bias as well. This is part of why students need to explore the nature of AI rather than just focusing on how to use it.
When students use AI to wonder rather than to finish, it turns learning into an open conversation instead of a checklist.
One of my favorite uses of AI is as a study aid. Instead of having students re-read or underline the text (strategies that fail to improve metacognition or lead to deeper retention of material), students have the AI test them on the material. The process still incorporates a human element built on personal agency: the AI generates the questions, but the student does the recalling, reviews the feedback, and decides what to restudy.
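If it helps to picture that self-testing loop, here is a minimal sketch in Python. Everything in it is illustrative: `ask_model` is a hypothetical stand-in for whatever chatbot a class actually uses, and the design point is that the human does the recalling while the AI only asks and responds.

```python
def retrieval_practice(topics, recall_attempt, ask_model):
    """Student-directed self-quizzing: the learner picks the topics and
    does the remembering; the model only asks questions and gives feedback."""
    log = []
    for topic in topics:
        question = ask_model(f"Quiz me with one question about {topic}.")
        answer = recall_attempt(question)  # the human does the recall
        feedback = ask_model(
            f"My answer was: {answer}. What did I get right, "
            "and what should I restudy?"
        )
        log.append({"topic": topic, "question": question, "feedback": feedback})
    return log

def stub_model(prompt):
    # Canned response so the sketch runs without a live chatbot.
    return f"[model response to: {prompt}]"

demo = retrieval_practice(
    ["photosynthesis"],
    recall_attempt=lambda q: "plants turn light into sugar",
    ask_model=stub_model,
)
print(demo[0]["question"])
```

The structure keeps agency with the student: the loop never answers on the learner’s behalf, it only prompts recall and returns feedback to act on.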
In the future, students will need to solve what are called “wicked problems.” These are the types of problems where the solutions are imperfect and often lead to new problems. These problems require human wisdom and deeper contextual understanding. They often involve empathy as well. In other words, they are the kinds of problems that require human skills in the age of smart machines.
Here’s where AI can help with the problem-solving process by running simulations that allow students to test ideas. In science, students can explore what happens when they adjust one variable in an ecosystem or change the rate of carbon emissions. They get to see the ripple effects and talk about why it happened, which builds real understanding instead of memorized facts.
In social studies, AI simulations can drop students into complex situations like peace negotiations or debates over economic policy. They can see how one decision affects another and how competing interests collide. This kind of work helps them think like historians and civic leaders rather than test-takers.
In math and engineering, students can use AI to design bridges, rockets, or roller coasters and test how small design changes affect outcomes. They learn through trial and error, reflection, and revision. Across subjects, AI simulations make problem solving more active and give students a safe place to explore, fail, and try again.
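As one small illustration of the science example above, here is a minimal predator-prey simulation of the kind a student might co-build with AI and then perturb one variable at a time. The coefficients are invented for illustration, not drawn from any real ecosystem.

```python
def simulate(steps, prey=40.0, predators=9.0, prey_growth=0.1,
             predation=0.01, predator_gain=0.005, predator_death=0.1):
    """Discrete-time predator-prey model. All coefficients are illustrative."""
    history = [(prey, predators)]
    for _ in range(steps):
        new_prey = prey + prey_growth * prey - predation * prey * predators
        new_predators = (predators + predator_gain * prey * predators
                         - predator_death * predators)
        prey, predators = max(new_prey, 0.0), max(new_predators, 0.0)
        history.append((prey, predators))
    return history

baseline = simulate(100)
# "Adjust one variable": doubling prey growth ripples into predator numbers too.
boosted = simulate(100, prey_growth=0.2)
print(baseline[-1])
print(boosted[-1])
```

The learning happens in the comparison: students change one parameter, watch the ripple effects, and have to explain why the whole system shifted.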
We are still in the early stages of these deeper simulations. However, this is one of the rare moments where we use AI in a way that moves past integration and into true transformation. See the continuum below.
As mentioned earlier, the student use of AI should actually improve rather than reduce metacognition. One way to do this is to use AI as a metacognitive coach. Students start by sharing an initial goal and then ask the AI for feedback on that goal using a SMART Goal approach. Generative AI tends to do really well at estimating how realistic a goal is, especially in terms of time frame. As humans, we get pulled into the planning fallacy, where we overestimate how much we will accomplish in a given time. But generative AI tends to make accurate time estimates based on larger predictive trends. Similarly, students can use AI to help break down goals into smaller tasks and action plans. Then, over time, the AI takes on the role of a coach, asking questions that prompt deeper reflection.
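As a toy illustration of correcting for the planning fallacy, a student could pad their own naive time estimates with a buffer. The 1.5x multiplier and the task names below are arbitrary examples, not research-backed constants.

```python
def buffered_timeline(tasks, buffer=1.5):
    """Pad naive time estimates (in hours) to counter the planning fallacy.
    The 1.5x default buffer is illustrative, not an empirical constant."""
    return {task: hours * buffer for task, hours in tasks.items()}

estimates = {"draft outline": 1.0, "write first draft": 3.0, "revise": 2.0}
plan = buffered_timeline(estimates)
print(plan)
print(sum(plan.values()))
```

The value isn’t the arithmetic; it’s the conversation the padded numbers start about why our first estimates are almost always too optimistic.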
Another option might be for students to ask for specific feedback. If we think about our receptivity to feedback, we tend to react emotionally to feedback based on whether we expected it, whether the feedback can lead to extrinsic punishment or rewards (impacting our grades, for example), or whether we asked for it.
When feedback is unexpected and unsolicited, most people get angry.
When it is expected but unsolicited, we get anxious.
When it is expected and solicited, we tend to embrace it more enthusiastically.
So, students can ask for specific feedback on their work or their ideas and they can actually ask specific questions that lead to action. They might create a set of fictional avatars that offer immediate feedback. They can then engage in a fictional conversation with the avatars where they ask for feedback that is both positive and critical.
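That avatar exercise can even be scaffolded with a simple prompt builder. This sketch is only an illustration; the persona names and wording are invented examples of what solicited, specific feedback requests might look like.

```python
def build_feedback_prompt(work_summary, personas, focus):
    """Assemble one prompt asking several fictional avatars for solicited,
    specific feedback. The persona names are examples, not a fixed list."""
    lines = [f"Here is my work: {work_summary}", ""]
    for name, stance in personas.items():
        lines.append(f"As {name} ({stance}), give one strength and one critique.")
    lines.append(f"Focus your feedback on: {focus}")
    return "\n".join(lines)

personas = {
    "Skeptical Editor": "questions every claim",
    "Encouraging Coach": "highlights what is working",
}
prompt = build_feedback_prompt(
    "a persuasive essay on later school start times",
    personas,
    focus="evidence and counterarguments",
)
print(prompt)
```

Because the student names the focus and the personas, the feedback arrives expected and solicited, which is exactly the condition under which we embrace it.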
Subscribe to my newsletter and get A Beginner’s Guide to Artificial Intelligence in Education.
You can also check out other articles, videos, and podcasts in my AI for Education Hub.
Join over 90,000 educators who receive teacher-tested tools, fresh ideas, and thought-provoking articles every week, straight to your inbox.
My goal is simple. I want to make something each day. Sometimes I make things. Sometimes I make a difference. On a good day, I get to do both. More about me.
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step One — What’s important here?
Key sentence 1: “AI represents both a dream and a warning for education.”
This is important because it sets up AI as something double‑sided, not simply good or bad, and that frame shapes how we understand everything that follows.
Key sentence 2: “The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This matters because it names a “middle space” and suggests the human mind should stay central, with AI acting more like a tool that stretches our abilities.
Your turn for key sentence 3:
Could you choose one more sentence from your text that feels important to you, write it out, and then say (in one or two simple sentences) why you think it matters for understanding the whole passage and any background you connect to it (for example, your own experience with school, technology, or AI)?
Once you add that third quote and your reason, we can move on to sharing questions about the text and dig into why people move to extremes on AI instead of staying in this “middle space.”
Would you like to pick that third sentence now, or do you want to talk first about why you think some people see AI as almost all “dream” and others as almost all “warning”?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step One — What’s important here? (continuing)
Key sentence 1: “AI represents both a dream and a warning for education.”
This sentence is important because it shows AI as a tension between hope and risk, which shapes how we read every other line.
Key sentence 2: “The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This one matters because it says the real work happens in a “middle space,” where humans stay thinkers and AI stretches, not replaces, their thinking.
Key sentence 3 (yours): “When we design with intention, AI becomes a catalyst for deeper learning rather than a shortcut around it.”
This feels central because it gives a condition (“design with intention”) that flips AI from something that might weaken learning into something that can actually deepen it, which connects very directly to your own experience of intentional designing.
Next, we will look at your questions about these lines—especially about what “design with intention” really means in practice when you are planning learning with AI.
Step Two — Your questions about these ideas
You mentioned that “design with intention” carries a lot of weight and makes you want to know what it actually looks like, especially since you already do a lot of intentional designing and sometimes feel the satisfaction of doing that with AI.
Could you now write out your questions—whatever comes to mind—about these three sentences (for example: what counts as “deeper learning,” how to tell “catalyst” from “shortcut,” or how intentional design with AI feels different from “just using a tool”) so we can build on them next?
If you put your questions into words, even roughly, where do you think you feel the biggest uncertainty: in the “deeper learning” part, the “with intention” part, or the “using AI” part?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step Three — More possible questions about your passage
1. Working with extremes, or tuning them out?
How can educators design intentionally with AI while responding wisely to “dream” and “doom” voices?
You might poke around places like EdWeek, Brookings, or Edutopia to see how others are handling polarized views.
2. Extending human thinking, not replacing it
In day‑to‑day writing, what is the line between AI “support” and AI doing the thinking for you?
You could explore sites like Pearson, NEA, or OER Commons for examples of AI used in writing and feedback.
Now your turn
What are your questions now about (1) working with extreme voices, (2) what “personalized feedback, instant support” really looks like in a young writer’s workflow, and (3) what “extending human thinking” actually means in practice?
Could you freewrite for a few minutes about one or more of these—just pour out rough thoughts without worrying about polish—so we can use your own words as the base for the next step?
If you started freewriting right now, which of the three pulls you most: the extremes, the young writer’s workflow, or the “extending thinking” idea?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step Four — How do we know? (building from your freewriting-in-progress)
1. Where to look for stories of students using AI feedback
If you want concrete stories of students using AI to revise, you might search teacher‑facing places like:
2. Some search words you could try
Here are a few simple keyword sets you might type into those sites, especially if you are hunting for that “nitty‑gritty story” level you mentioned:
3. Next move: from abstract to story-shape
As you skim anything you find, you could watch for: a specific learner, a concrete task, the exact AI move, and what changed in the student’s thinking or text.
Once you have even one small scene like that, you can use it as a model or contrast case to tell your own “nitty‑gritty” stories about students extending their thinking with AI feedback rather than outsourcing it.
If you wanted to draft one tiny scene right now—a single student, a single writing move, a single AI nudge—what moment from your experience comes to mind first?
New Conversation
Hide Full Comment
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step One • What’s important here?
Here are three sentences that seem central:
1. “AI represents both a dream and a warning for education.”
This is important because it sets up AI as something double-sided, not simply good or bad.
2. “On the other, it tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity.”
This matters because it explains clearly what people are afraid AI will do to learning.
3. “The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This is key because it suggests a middle path that might explain why people do not need to go to extremes.
If you look at these three, do they match what you see as the most important ideas, or would you pick different lines instead?
In the next step, we will turn these important ideas into your own questions.
Which parts make you most curious, or most uncertain, when you read them?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step Two • Your questions about these ideas
Now that you have these three key ideas in mind, what questions do they raise for you about how AI should (or should not) be used in education? I do not just want you to think your questions silently — I want you to write them out in words, so that in the next step we can take your questions and build on them together to go deeper.
My questions will grow out of yours, and will help you explore the “dream,” the “warning,” and the “middle space” you described, but first I need to see your own thoughts on the page. Please write any questions that pop into your mind — even if they feel messy, simple, or half-formed.
In the next step, I will take your written questions and combine them into two clear, big questions you can explore further.
What are you wondering right now about AI as a dream, a warning, or a middle path for education?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step Three • More possible questions about AI in education
Here are two big questions you might explore, building from your ideas about “dream,” “warning,” and the “middle space” for AI in education:
1) How can teachers and schools tell when AI is deepening learning versus just speeding it up?
2) What shared rules or norms could help students use AI to extend, not replace, their own thinking?
To explore these, you might search on sites like Google Scholar, ERIC, or Edutopia for research and examples about AI and learning. You could try keywords such as: “AI in education ethics,” “academic integrity and AI tools,” “AI for formative feedback,” “critical thinking with AI support,” and “guidelines for responsible AI use in classrooms.”
My questions are only a starting point; what really matters are your own questions. Would you like to freewrite for a few minutes on one of these two questions (or your own version of them), just to see what you already think, and then bring that writing back here so we can look for next steps together?
New Conversation
Hide Full Comment Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step Four • How do we know more about this?
If you freewrite about how AI is both “dream and warning” and about that “middle space,” we can look for what you most want to understand: the psychology of temptation (speed, convenience), the design of learning spaces, or the ethics of using AI as a kind of “co‑thinker.” Then we can search for writers and researchers who are already wrestling with those same questions, and compare your thinking with theirs.
You might search on broad sites like Google Scholar, ERIC, Edutopia, JSTOR, or OER Commons to see how others frame AI as both risk and possibility in education. Helpful search phrases could include: “AI as cognitive partner in learning,” “human‑centered AI in education,” “student agency and AI tools,” “critical digital pedagogy and automation,” and “emergent behavior of AI in classrooms.”
Once you have a bit of your own writing on the page, we can match it with specific authors, keywords, and sources that fit your way of seeing AI as emergent, unpredictable, and weird inside learning spaces, not just as a neutral tool.
You said each of your words — emergent, unpredictable, weird, discourse, maker space — “mean a lot” to you.
What if you picked just one of those words, freewrote for a short time about what it means for AI in education, and then brought that small piece of writing back here so we can hunt together for the best questions and sources that grow from it?
Which word are you most drawn to explore first, if you had to choose only one right now?
New Conversation
Hide Full Comment
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step One – What’s important?
“AI represents both a dream and a warning for education.”
This is important because it sets up AI as something double-sided, not simply good or bad.
“On the other, it tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity.”
This is important because it names the specific risks that could quietly damage real learning.
“The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This is important because it points to a “middle path” where human thinking stays at the center and AI is a tool.
If you look at these three ideas together, they seem to explain why some people rush to call AI a miracle and others rush to call it a disaster. Do these feel like the three most important points to you, or is there a different sentence in your text that you think matters more?
You also said you are curious why people move to extremes on AI. How do these three sentences help you explain that swing toward “all good” or “all bad”?
Would you like to keep digging into this middle space together, or is there a different angle on AI and extremes that you want to explore next?
New Conversation
Hide Full Comment
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
Step One: What’s important?
1. “AI represents both a dream and a warning for education.”
This is important because it shows that AI is not simply “good” or “bad,” but something that creates hope and fear at the same time.
2. “It tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity.”
This is important because it names the risk that AI can slowly change what we value in learning without us noticing.
3. “The real opportunity lies in the space between utopia and dystopia…”
This is important because it suggests that wise use of AI means staying in a thoughtful middle ground, not in extreme optimism or extreme fear.
What questions are starting to form in your mind now about AI, education, and this “space between” the two extremes—could you write them out, even if they feel unfinished or confused?
Step Two: Your questions about the text
You mentioned people moving to extremes on AI, so now I am wondering what you most want to ask about how these extremes show up in classrooms, teachers, or students. Could you write out your questions—any that pop into your head—so we can use them as raw material to explore more deeply in the next step?
Would you like to keep going from here?
Step One – What’s important?
1. “AI represents both a dream and a warning for education.”
This is important because it shows that AI in education is not simply “good” or “bad” but carries hope and danger at the same time.
2. “It tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity.”
This is important because it names the specific risks that can quietly change how students think and learn.
3. “The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This is important because it suggests a “middle path” where AI supports, rather than replaces, human judgment and effort.
What questions start to form in your mind when you look at these three ideas together—about the dream, the temptation, and the “middle space”—and could you write those questions out so we can work with them?
Would you like to keep going and see what grows from your questions?
Step One – What’s important?
I will pull out a few key sentences from this paragraph that seem central to its message, and I may be wrong, so we can adjust together.
“AI represents both a dream and a warning for education.”
This is important because it sets up AI as something double-sided, not simply good or bad.
“On the other, it tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity.”
This matters because it explains the specific dangers that might push learning in a shallow direction.
“The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This feels central because it describes a middle path between extreme hopes and extreme fears about AI.
Now that you have looked at these three sentences, what questions pop into your mind about them, about AI, or about why people push toward extremes—could you write those questions out so we can use them for the next step?
Step Two – Your questions about AI and “the middle space”
You just read those three sentences about AI as dream/warning and the “space between” extremes, so now I am wondering what your questions are about why people get so polarized and what it would really mean to use AI to extend, not replace, human thinking.
In two or three written questions, can you jot down whatever you are wondering right now—about fear, over-excitement, partnership in learning, or this “middle space of making”—so that in the next step I can help you stretch and deepen those questions instead of replacing them with my own?
If it helps, you might start your questions with things like:
What are the first 2–3 questions that come to your mind right now—will you write them out so we can keep going?
I am curious to see where your own questions pull us next—do you want to try writing them now so we can move to the next step together?
Step One · What seems most important here?
In a moment I will pull out a few lines that might be central to your paragraph and say why, just as a first guess that you can challenge or change.
1. “AI represents both a dream and a warning for education.”
This feels important because it sets up the whole tension of your paragraph: hope and danger together, not separately.
2. “The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This seems key because it names a third path, a “middle space,” and says what human beings should still do.
3. “When we design with intention, AI becomes a catalyst for deeper learning rather than a shortcut around it.”
This matters because it suggests that the outcome depends on human design choices, not on the technology alone.
Do you think these are really the most important lines, or would you pick different ones and why; can you add at least one sentence from your paragraph that feels central to you before we move on?
Step Two · Your questions about your ideas
Now that we have some key lines, what questions do you have about them—about the “dream and warning,” the “space between,” or “design with intention”?
Can you write out whatever questions come to your mind right now (even if they feel rough or simple), so we can use them as the starting point for deeper questions in the next step?
Step Three · Building on your questions
Since I do not yet see your questions, let me offer two that many people have about why we go to extremes on AI and how it connects to what you wrote:
1. How does fear and hype push people away from the “middle space” you describe?
2. What would it look like in real classrooms when AI truly extends, rather than replaces, human thinking?
Would you pick one of these (or change them) and freewrite for a few minutes—just writing your first thoughts, examples, doubts, and feelings, without worrying about being “right”?
Once you have written a bit about one question, share that thinking with me, and we will use it to search for better evidence and clearer ideas.
Step Four · Where could we look for answers?
After you show me some of your freewriting, I can suggest kinds of authors, sites, and search words that connect to your specific ideas, but here are some likely directions:
Places to search
Possible search keywords
Would you like to explore more why people jump to utopia or dystopia about AI, or go deeper into how to hold that “middle space” in actual learning situations?
Step One · What seems most important here?
I will pull out a few sentences that look central to the whole idea and say why they might matter, but you will decide if they really are the key ones.
“AI represents both a dream and a warning for education.”
This feels important because it sets up the whole tension: AI is both exciting and dangerous at the same time, which shapes everything that follows.
“On the other, it tempts us toward speed over depth, efficiency over understanding, and convenience over curiosity.”
This matters because it names the specific risks in simple pairs, showing how good things (speed, efficiency) can quietly push out deeper learning.
“The real opportunity lies in the space between utopia and dystopia, where teachers and students use AI not to replace human thinking but to extend it.”
This is key because it offers a middle path: instead of “all good” or “all bad,” AI becomes a tool that stretches human thought rather than doing the thinking for us.
Do you agree that these are three of the most important sentences, or would you add or replace any, and why? I need to hear what you see as most important before we move on to the next step.
Would you like to keep going and dig into your own questions about this?
Step Two · Your questions about this “middle space”
You said you agree about that “in‑between” space being where the real work happens, so now I am curious what questions this raises for you about how AI can extend human thinking without replacing it. In one or two sentences each, can you write out every question that pops into your mind right now about this paragraph—about the dream, the warning, or the design of that middle space—so we can use your questions as the foundation for the next step?
If you write your questions out now, we can turn them into deeper, guiding questions next. Want to try putting those questions into words so we can shape them together?
Answering what you just wrote
You are wondering why people go to extremes about technology, whether AI is just another tool or something different, and how we can enjoy what it offers in education without getting lost in fear or hype. Let us keep using your paragraph about AI in education as our shared example, and let your own questions lead us instead of any fixed theory from me. Can we treat AI as “weird but workable,” something we learn to live with wisely instead of calling it a miracle or a monster?
Step Three · Growing your questions about AI and education
You have already raised big themes: extremes, hope and worry, AI as “weird,” and how not to be overwhelmed, so now let us turn all of that into two clear questions you could really explore. Here are two that try to pull your concerns together and point you toward learning more:
If you like, you could freewrite for a few minutes on one of these: what do you think a “healthy,” non-extreme use of AI in learning would actually look like in a real classroom or in your own study life? Once you spill out your first thoughts (even if they feel messy), we can use them to choose what you want to explore next and where to look.
Step Three · Your turn to respond
When you finish that little freewrite, I will help you find good paths for exploring your ideas instead of just drifting with fear or excitement. Would you like to try that freewrite now, while the questions are fresh?
First, answering you directly
If AI often surprises you, feels emergent, and does not behave like a hammer or a pen, then you are already sensing that it is not an “ordinary” tool but something more like a partner whose responses co‑create the work with you. The question, though, is not only what AI is, but what we become when we treat it as either a neutral tool, a magical mind, or something in between. So maybe the deeper issue is: what kind of relationship do you want to have with something that can feel both creative and unpredictable?
Step Four · How could you explore your own view of AI?
You described AI as emergent, unpredictable, exciting or threatening, and as a “dream and a warning” in education, so now we can look for ways you might investigate these ideas instead of just holding them as impressions. To explore this, you could look at:
Here are some places where you can search for sources and follow your questions further:
To get good results, here are five keywords or search phrases you might try:
Would you like to pick one small question (for example, “Is AI more like a tool or a collaborator?”) and explore it next, or would you rather first sharpen your own definition of “emergent” in your words?
I am wondering if this teacher ever had students take the good ideas to an extreme on purpose. I know how students start to spiral. Or was this what the teacher really wanted?
My AI concern is that we are pushing students further and further from face-to-face interactions. Many adults sit behind computer screens, and meetings happen on Zoom. People forget how to have conversations, share ideas, and debate without everything dissolving into argument and blame.
First take: what is he basically arguing here?
- He is arguing that the speed and pressure of society are landing especially hard on teachers, so their jobs keep getting more intense instead of easier.
- He lists how responsibilities have quietly expanded far beyond instruction into things like counseling, curriculum design, and specialist roles, while new standards and policies keep stacking on top.
- The emotional core is that even highly committed teachers are being pushed to the breaking point by this constant “more, more, more” with no real support or subtraction.
If you had to put his main claim into one sentence in your own words, how would you say it for a friend who is not in education? Want to dig next into how his word choices build that sense of pressure and urgency?
Next move: sharpen your own one-sentence claim
- How would you boil his point down into one clear line that a non-teacher friend would instantly get?
– Maybe start with a stem like: “He is saying that when ________, teachers end up ________.”
- What word or phrase would you choose to capture that feeling at the end: “burnout,” “vertigo,” “collapse,” “overload,” or something else that fits your voice better?
- How might you tweak that sentence depending on whether you are talking to fellow teachers, administrators, or people completely outside schools?
Write out that one-sentence claim in your own words, then map out (step-by-step) how you might build a short paragraph around it: what would you add first, second, third? Anything you want to work on next as you revise this—tone, audience, or adding your own example?
Dialing in your own claim and next steps
- One next move is to decide how strong you want that final feeling word to be: are teachers “overwhelmed,” “drowning,” “maxed out,” or something else in your voice?
- Another move is to choose who you are really talking to here: teachers, policymakers, parents, or just “the public,” because that will shape how blunt or explanatory you want to be.
- You might then sketch a quick plan:
– Step 1: Your one-sentence claim in your own words.
– Step 2: One concrete detail from Spencer that best proves it.
– Step 3: One quick connection to your own experience or observations.
What word are you leaning toward for that end-feeling, and who is your main audience for this line right now? Want to see some options for how you could structure the follow-up paragraph around that audience choice?
Getting students to stop thinking of AI as an answer machine and start treating it as a tool. I feel like these are the same conversations teachers had about using calculators.
Do humans make them think, though? The calculations of the input?
Dig into the core idea
Want to see more ways to break this paragraph open and really work with its meaning?
Think of this image as a spectrum or path you can move along, not a fixed label.
- On the far left is teacher-centered / compliance
– Students mostly follow directions, complete tasks, and “do school.”
- In the middle is engagement
– Students are interested and participating, but the teacher still makes most of the key decisions.
- On the right is student-centered / empowerment
– Students have real agency: they help shape the goals, the methods, and even the assessment of their learning.
So the “way” to take this image is to ask:
1. Where am I (or my class) right now on this line—compliance, engagement, or empowerment?
2. Where do I *want* to be?
3. What concrete moves would shift us one step to the right?
– e.g., giving choices, co-creating norms, letting students design projects, reflecting on their own learning.
If you tell me your role (student, teacher, tutor, etc.) and your context, I can help you map out specific next steps to move from compliance → engagement → empowerment.
We do. = All together
This image lays out a spectrum of “student agency” that runs from teacher-centered to student-centered, with three stages along the way: compliance, engagement, and empowerment.
What stands out to you first: the words (compliance / engagement / empowerment), or the idea of teacher-centered vs. student-centered?
Do you want to talk about what each word means in your own classroom/setting, or more about the overall message of the graphic?
Big idea of the image
Want to see more about what each word on that line (compliance, engagement, empowerment) is really saying about classroom power and voice?
Nice insight you already have
Want to dig into how you might put that overlap idea into your own words so it shows what you think about power and voice in classrooms?
Direct answers to your two questions
Want to see more about how you could write your stance so it names when limited compliance makes sense, when it becomes harmful, and how you want to rebalance toward engagement/empowerment?
How this connects to AI
Want to see more about how you could name specific classroom moves where AI either shuts down student agency or expands it?
A professor talked about a young relative of hers who went to apply for a job as a journalist. When he said that he would not touch AI, they told him, "Don't apply." There's more.
They do. = Students only
I do. = Teacher
First connection: What “wicked problems” are really about
Want to see more connections to AI specifically?
I have seen what Peter is doing in his class and with the after-school college essay help using AI, and I think this is a good use for it. But I am also concerned that a student may not react well to some feedback. We need to help students process feedback as well as use the technology.