

Author: Zara Abrams

Classrooms are adapting to the use of artificial intelligence

Psychologists can help maximize the smart adoption of these tools to enhance learning.

Generative artificial intelligence (AI) promises to touch nearly every part of our lives, and education is one of the first sectors grappling with this fast-moving technology. With easy and free-to-access tools like ChatGPT, everything related to teaching, learning, and assessment is subject to change.

“In many ways, K–12 schools are at the forefront of figuring out practical, operational ways to use AI, because they have to,” said Andrew Martin, PhD, a professor of educational psychology and chair of the educational psychology research group at the University of New South Wales in Sydney. “Teachers are facing a room full of people who are very much at the cutting edge of a technology.”

AI has been used in classrooms for years, quietly powering learning management tools such as Google Classroom, Canvas, and Turnitin. But the recent democratization of generative AI tools such as ChatGPT, and the rush to commercialize similar technologies across sectors, are providing new challenges and opportunities for students and educators alike.

In a growing movement to find out how to safely and effectively use AI to enhance learning, educational psychologists are playing a critical role. They are studying how AI tools can lighten the workload on teachers—without interfering with the social aspects of learning—as well as how intelligent tutoring systems can personalize education while keeping students motivated. They are also exploring whether educators can leverage tools such as ChatGPT without hindering the broader goals of learning.

One question should always be at the forefront, said educational psychologist Ally Skoog-Hoffman, PhD, senior director of research and learning at the Collaborative for Academic, Social, and Emotional Learning (CASEL): “How are we using AI and technology as tools to elevate the conditions and the experiences of education for students without sacrificing the human connection that we absolutely know is integral to learning?”

How children view AI

Psychologists have studied human–technology interaction for decades. A new line of research now seeks to understand how people, including children, interact with chatbots and other virtual agents.

“Little kids learn from characters, and our tools of education already [rely on] the parasocial relationships that they form,” said David Bickham, PhD, a health communication researcher based at Boston Children’s Hospital, during a panel discussion on AI in the classroom. “How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?”

In a series of qualitative studies, Randi Williams, PhD, a program manager at the Algorithmic Justice League, a nonprofit focused on making AI more equitable, observed playful interactions between young children and robots, including the children’s attempts to both teach the agents and learn from them. Williams and her colleagues also found that children viewed agents with a more humanlike and emotive voice as friendlier and more intelligent (Proceedings of the 2017 Conference on Interaction Design and Children, 2017). But many questions remain, including how to study and foster such relationships while protecting the safety and privacy of minors—issues that psychologists are well poised to address.

Among adolescents, the use of generative AI is already widespread. In a 2024 Common Sense Media survey of 1,045 teenagers ages 13 to 18, 7 in 10 reported using at least one such tool, with homework help the most common reason. About half of those who used generative AI for schoolwork did so with permission from a teacher. A similar number checked the veracity of generative AI outputs against outside sources, suggesting that many students are aware of the fallibility of such tools (The Dawn of the AI Era, Common Sense Media, 2024).

“Teens have quite a sophisticated and nuanced view of AI,” said Beck Tench, PhD, an information scientist based at the Center for Digital Thriving, which explores the role of technology in young people’s lives and is part of the Project Zero initiative at the Harvard Graduate School of Education. “They report that they feel conflicted, and are having just as many excitements and concerns as we do as adults,” including worries about misinformation, awareness that it will change their work prospects, and enthusiasm about its potential to advance science, creativity, and humanity (Teen and Young Adult Perspectives on Generative AI, Common Sense Media, Hopelab, and Center for Digital Thriving, 2024).

The Center for Digital Thriving offers guidelines for talking to youth about generative AI, including asking children what school rules seem fair and whether they have ever heard about AI getting something wrong.

Intelligent tutoring

Much of the conversation so far about AI in education centers on how to prevent cheating—and ensure learning is actually happening—now that so many students are turning to ChatGPT for help.

A majority of teachers surveyed by the Center for Democracy and Technology, a nonprofit focused on technology policy, said they have used AI detection software to check whether a student’s work was their own, but those tools can also be fallible—in a way that could exacerbate inequities (Up in the Air, Center for Democracy and Technology, 2024). Black teenagers were about twice as likely as their peers to tell Common Sense that they had schoolwork incorrectly flagged as being AI-generated (The Dawn of the AI Era, Common Sense Media, 2024).

Some schools are adapting by changing the nature of assessment, Martin said. In Australia, for example, senior year science projects are traditionally submitted in written form, but students must now also present their findings orally and respond to questions in real time. On the whole, teachers told the Center for Democracy and Technology they need better guidance and training on what responsible use is and how to respond if they suspect a student is cheating by using AI tools.

On the bright side, educators are increasingly relying on AI tools such as Curipod, Gradescope, and Twee to automate certain tasks and lighten their workload, said Nicole Barnes, PhD, APA’s senior director for schools and education. That includes generating new ideas for lesson plans and activities, writing parent–teacher letters, adapting materials for different age groups and neurodiverse learners, and getting a second opinion on how to improve existing materials.

Intelligent tutoring systems are another major focus for researchers, developers, and education technology companies. These AI-powered systems promise to help personalize the learning experience for each student, tailoring style, pace, and assessment to the individual and making lessons more accessible to students learning English or those with disabilities. Khan Academy, McGraw Hill, and Carnegie Learning are among the companies offering AI tools, while the Los Angeles Unified School District invested millions in “Ed,” a custom chatbot that survived for just a few months after the financial collapse of the company that built it.

“It’s sort of a gold rush right now for edtech companies to sell districts the right thing, without having any data to support their claims,” said educational psychologist Stephen Aguilar, PhD, an associate professor of education at the University of Southern California who studies how such technologies relate to student motivation and engagement.

As an alternative to commercial offerings, which are expensive and difficult to customize, some researchers are working on open-source intelligent tutoring systems. OATutor—built by Zachary Pardos, PhD, an associate professor of education at the University of California, Berkeley, and his colleagues—uses generative AI to learn from an instructor’s own teaching style and materials, then creates new and improved worksheets and lesson plans. This bespoke learning tool can allow teachers to replace textbook homework questions with interactive exercises that cater to each student’s mastery level and do not require grading.

“The teacher can spend less time adapting to the technology, so it feels more like an extension of her class that helps unburden her, rather than another professional development task,” said Pardos, who is also publishing journal articles on OATutor to add to the knowledge base about adapting and scaling generative AI in education.

A key task for psychologists, Aguilar said, will be to study how using AI tools relates to students’ motivation to learn. Intelligent tutoring systems still lag far behind human teachers, Barnes said, in their ability to detect whether a student is feeling frustrated, anxious, or uncertain about the content they’re learning.

“These systems often treat responses as black and white, but the reality is far more nuanced,” Barnes said. “Every answer elicits an emotional response from students, whether positive or negative.” Teachers detect these nuances and adjust instruction accordingly—existing AI tutors do not.

Future intelligent tutors are poised to collect more nuanced data on students as they learn—everything from heart rate to facial expressions, Bickham said—and to know when to call on a teacher to step in. That could ultimately shift teachers into more of a facilitator role.

“The teacher role has the potential to evolve from the person who’s really directing the education to a person who is kind of managing the experience,” he said.

Social and relational shifts

Ask ChatGPT for homework help and you’ll get a polite, friendly response, Martin said, which makes it easy to forget you’re not interacting with a sentient being. The tool may therefore represent a social opportunity cost if children use it to answer questions they might otherwise ask their parents, peers, or siblings.

“The more you rely on generative AI to help you with your schoolwork, the less you might be inclined to meet up with friends in person or online after school to brainstorm around an essay,” Martin said.

Teenagers also report talking to generative AI about relationships, identity, and sexuality, including to find answers to questions they’re afraid to ask adults and to have the feeling of talking to a friend who won’t judge them (Teen and Young Adult Perspectives on Generative AI, Common Sense Media, Hopelab, and Center for Digital Thriving, 2024).

“It’s striking to me that young people are sharing their deepest, darkest secrets and questions to a company that can collect that information and use it,” Tench said.

To help students learn about the downsides of using such technologies, CASEL has partnered with Common Sense Media to apply its five social and emotional learning (SEL) competencies (self-awareness, self-management, responsible decision-making, relationship skills, and social awareness) to the digital space. The goal is to empower students to bring social and emotional awareness to difficult online situations. For example, how can teenagers with body image concerns navigate a social media feed rife with photos edited by AI?

CASEL is also exploring whether AI can be used to teach SEL. Because young people today are beginning to enmesh their online and offline lives, virtual SEL lessons could be useful, Skoog-Hoffman said.

Young people may develop a cyber identity that differs from their real-world social identity. How do those concepts relate to one another and influence behavior, both online and in person? Before AI can safely be used to teach SEL, more research is needed to understand these concepts, Skoog-Hoffman said, as well as whether skills such as empathy can be practiced and acquired in a digital context (with a chatbot, for example).

“For youth, online and in-person interactions are starting to become more seamless,” she said. “That could change the way teens are learning about relationships and interpersonal skills, and as educators, it’s time for us to adapt.”





