Vallis, C., Wilson, S. and Casey, A. (2025) ‘Fear and Awe: Making Sense of Generative AI Through Metaphor’, Journal of Interactive Media in Education, 2025(1), p. 14. Available at: https://doi.org/10.5334/jime.972.
In this paper, we advance the examination of generative AI (GenAI) in educational contexts in two distinct ways. First, we introduce and evaluate an innovative workshop model designed to explore GenAI metaphors, helping participants to articulate their own and their peers’ responses to the technology. These workshops included students, academics and educational support staff at a large Australian university. Second, we analyse the metaphors and collaborative discussions generated during these workshops, which reveal a diverse spectrum of perspectives on GenAI, from practical tool to existential threat. Our analysis surfaces tensions that align closely with prevailing attitudes and current literature on metaphor and GenAI, particularly the tensions between human versus machine agency, and the known versus the unknowable capabilities of the technology. Through these contributions, we propose a qualitative approach that maps the complex interplay of perceptions surrounding emerging technologies, inviting critical reflection on GenAI’s ethical and societal dimensions. Our work offers a structured yet flexible approach for facilitating meaningful dialogue about the implications of GenAI in education that moves past fear and awe.
Submitted on Dec 12, 2024
Accepted on May 27, 2025
Published on Aug 26, 2025
Peer Reviewed
CC Attribution 4.0
Across higher education, generative Artificial Intelligence (GenAI) incites much fear and awe. Educators find themselves inundated with suggestions for implementing this technology, with untested ideas and possibilities that promise to reshape teaching and learning (UNESCO 2023; Jensen et al. 2024). This enthusiasm is undampened by the complex reality of the technology's inherent limitations and of how people actually understand and use it. Institutions are struggling with ChatGPT as a threat to academic integrity, original thinking, and information accuracy (Derakhshan & Ghiasvand 2024; Tlili et al. 2023). The technology can generate plausible but false information, its inner workings are not transparent or easily understood, and its capabilities tend to be overestimated or misunderstood (Gupta et al. 2024; Tlili et al. 2023).
GenAI is often seen as both inevitable and beneficial for education, a technology that educators and students should be ‘embracing’ through prompt engineering and developing ‘AI literacy’ (Ding et al. 2024; Walter 2024). Many discussions within the education sector describe GenAI as a disruptive force requiring significant changes in teaching practices, assessment methods, and the overall learning environment (Jensen et al. 2024). Practical suggestions include a renewed emphasis on skills like evaluation and creativity in using GenAI (Bearman et al. 2024; Kizilcec et al. 2024). GenAI is also promoted as a solution for personalising learning, increasing access, and reducing educators’ workload, although this framing ignores systemic inequities and context-specific needs (Santos Ferreira, Lemgruber & Cabrera 2023). Other researchers call for more critical analysis of how this technology might reshape educational relationships, practices, and values, and the nature of knowledge creation itself (Cox et al. 2023; Bender 2024).
One thing seems clear — teaching and learning with GenAI demands different strategies and skills (Markauskaite et al. 2022; UNESCO 2023). In this study we offer an approach that moves beyond technical ‘how-to’ discussions to explore the deeper assumptions and concerns that shape GenAI adoption in education. Our aims are two-fold: to develop and evaluate a workshop model using metaphor analysis, and to examine how collaborative discussion and reflection on metaphors can assist participants in understanding GenAI technology in creative and critical ways. We are guided by the following research question: How can metaphor assist in collective sensemaking of generative AI in higher education?
Metaphors extend far beyond literature and poetry. According to Lakoff and Johnson (2003), metaphors are cognitive frameworks that structure how we understand abstract concepts by linking them to our lived experiences. This ‘imaginative rationality’ is particularly relevant to understanding complex technologies, where we make sense of abstract computational processes through more familiar human experiences (Fraser 2018). Metaphors frame our understanding and interaction with technology (Weller 2023).
In educational settings, metaphors work on both conceptual and emotional levels, influencing practice, relationships and understanding (Cameron 2003; Cortazzi & Jin 2020). However, over-reliance on simplistic metaphors can mislead learners by oversimplifying complex ideas (Cameron 2003; Fraser 2018). Similarly, metaphors shape understanding of GenAI’s capabilities and limitations (Gupta et al. 2024). Anthropomorphic metaphors, for instance, falsely equate human and algorithmic capabilities (Kajava & Sawhney 2023; Bender 2024).
Carbonell, Sánchez-Esguevillas and Carro (2016) use conceptual metaphor theory (drawing on Lakoff and Johnson’s work) and Causal Layered Analysis (CLA) to examine how metaphors about the brain can influence the design, conceptualisation, and future trajectory of artificial intelligence and computational systems. Similarly, Kajava and Sawhney (2023) use conceptual metaphor theory to analyse how metaphorical language and personification in AI policy documents attribute agency and shape technological understanding.
Other research takes a critical approach to evaluating existing metaphorical discourse about ChatGPT by proposing alternative metaphors that spotlight ethical concerns about AI’s use of human writing (Anderson 2023). Gupta et al. (2024) developed a two-dimensional taxonomy mapping AI metaphor along an anthropomorphic (human-like to non-human) axis and an evaluative (functional to critical to rhetorical) axis, using a collaborative autoethnographic methodology.
Contemporary metaphors for AI in education perpetuate historical patterns: rhetoric around AI-enabled personalisation and automation continues industrial metaphors of efficiency and standardisation (Santos Ferreira, Lemgruber & Cabrera 2023), reflecting persistent technological determinism. Such metaphors risk reducing education to technical processes, at the expense of human interaction and cultural context.
Given how profoundly metaphors shape our perceptions and practices, this research critically examines and interprets the metaphorical frameworks we use to discuss AI in education. Critical metaphor analysis offers an accessible method for surfacing underlying assumptions and their implications for practice (Jensen 2006; Fraser 2018). We examine the meanings embedded in metaphors that either reinforce or challenge existing power structures in educational technologies (Grisoni & Page 2010). By developing metaphorical understanding, students, educators and researchers can more thoughtfully integrate generative AI into education, while preserving core educational values and human relationships. As Krauss notes, "meaning making is a critical element to human existence and learning" (2005: 767), and even more so in relation to technology.
Our aim was to make meaning of the sudden impact of generative AI in our context and to “understand the complex world of human experience and behavior from the point-of-view of those involved in the situation of interest” (Krauss 2005: 764). Informed by the research outlined in this article, we designed a series of workshops, aimed at exploring GenAI in teaching and learning through creating and critiquing metaphors.
We facilitated these workshops at a large business school in a metropolitan Australian university. Use of the artefacts of the workshops (sticky notes, whiteboard summaries, and discussion summaries read back to participants) for research was approved under university ethics, with consent being sought and given by all participants. Our approach combined creative, material and social methods to articulate and process the complex changes around technologies in educational practice and research. We valued the embodied experiences, the physical and material aspects of human interaction during the workshops (Gourlay 2024). We resisted the strong temptation to include technological activities, choosing to avoid prompt engineering and the ‘how-to’ aspects. Understanding and using technological functions takes time, can detract from interpersonal engagement, and can divert attention from reflecting on GenAI’s purpose and wider context.
The workshops were deliberately open to collaboration with multidisciplinary groups “who bring their own experiences, ideas, processes, and educational values” (Wardak, Wilson & Zeivots 2024: 200). Similar design thinking activities are valued in business education to support interdisciplinary learning and help students develop skills like critical thinking, collaboration, and adaptability (Vallis & Redmond 2021). Three workshops were facilitated, with participants as described in Table 1.
Table 1
Participant Groups and Characteristics.

WORKSHOP | PARTICIPANTS | DESCRIPTION | NUMBER
1 | Educational Support Staff | Education academics, learning designers, project officers, and media producers. | 15
2 | Students | Business students from various disciplines. | 3
3 | Academic staff | Academics and members of a research group focused on business education. | 10
During the workshops, participants followed a semi-structured co-design thinking approach (Vallis et al. 2022), with adjustments to timing and contextual focus to suit the specific group. We encouraged participants to connect personally with each other through metaphor. Participants collaboratively generated ideas on sticky notes and refined their metaphors to articulate their perspectives on GenAI. A gallery of metaphors was created, and participants evaluated these ideas by discussing their metaphors, adding ticks for agreement, crosses for disagreement, and question marks for curiosity on the sticky notes. We asked participants to collectively reflect on their metaphors and how they might apply GenAI metaphors in learning and professional practice in educational contexts. A more detailed explanation of a workshop session has been documented elsewhere (Vallis, Wilson & Casey 2024).
The artefacts of these workshops provided rich, detailed data which we thematically analysed over four phases. Once we were familiar with the workshop artefacts, we began coding metaphor elements, impacts of GenAI on education, and participant views. We clustered these codes into groups, comparing, contrasting and constructing meaning across metaphors and connecting these understandings to educational practice. We undertook this research as a reflexive process by immersing ourselves in the data, “reading, reflecting, questioning, imagining, wondering, writing, retreating, returning” (Braun & Clarke 2021: 332).
We evaluated the groups of codes as categories, according to their internal consistency and external distinction (Patton 2014). Starting with granular, descriptive codes that stayed close to participants’ exact words, we first examined how well these specific data points fit together as cohesive groups. Our second pass looked at how clearly these emerging categories differed from one another, as we worked to synthesise the specific codes into broader interpretive patterns. When we found data points that were outliers or overlapped across categories, it indicated a need to refine our categorisation. We then compared the raw data back to our developing categories, revising and refining to ensure our broader themes accurately captured both specific participant expressions and larger patterns of meaning. The goal was to achieve what Patton describes as “useful understanding”, where both detailed insights and practical needs are combined for real-world settings (2014: 810).
In our context, metaphor was used to help participants make sense of GenAI in teaching and learning. Creative sensemaking was also integral to the research process (Vallis et al. 2023), which mirrored our workshop engagement in collaboratively combining physical and digital practices. Tactile artefacts like sticky notes were continuously rearranged and relocated across physical surfaces, and documented through photography. Material and digital modes were entwined and inseparable in our inquiry.
To honour the depth and breadth of perspectives across our diverse participant groups while identifying meaningful patterns, we analysed the metaphors holistically rather than segmenting by participant type. This analytical approach aligned with our theoretical framework that metaphors reflect broader cultural and conceptual understandings that transcend individual roles. Our initial analysis revealed that similar metaphorical themes emerged across student, academic, and professional staff workshops, further supporting this decision. Organising them by conceptual similarities rather than by participant type enabled us to identify deeper patterns in how GenAI is understood and imagined across the educational community.
Rather than limiting the richness of metaphorical expression, our categorisation was designed to interpret and differentiate the dimensions along which participants conceptualised GenAI. The metaphors we categorised indicate how GenAI is imagined in educational contexts, particularly the relational, emotional, and cognitive tensions experienced by multidisciplinary staff and students. Unlike previous studies on AI metaphors, our categories capture how these metaphors emerge through collaborative and social meaning-making, linking them to pedagogical concerns and lived experience in higher education.
Finally, we returned to the data and iteratively synthesised it as we discussed and wrote our findings to ensure the themes were distinct and captured the complexity and richness of the participant data. Figure 1 presents a sample of the participants’ metaphors.
Figure 1
A sample of the participants’ metaphors during the coding process.
There are two parts to this section. In the first, we report on the landscape our participants described with the metaphors they chose and discussed. The categories of metaphor identified in the analysis were not only present to different degrees in each workshop, but are also alluded to in the current literature, giving us some confidence that the conceptual landscape we mapped out is representative of the wide range of responses to GenAI in higher education. Verbatim metaphors from the workshops are in italics. In the second part, we unpack the critical reflections undertaken by workshop participants on their collaborative generation of metaphors.
We begin with the concrete, often technical metaphors of ‘Functions’, where participants sought to relate their GenAI experiences to their existing understanding of tools and machines. After analysing ‘Roles’ metaphors that compare relationships to GenAI to human ones, we progress through to the more abstract metaphors in ‘Qualities’, and ‘Agency’. This analysis provides a detailed map of participants’ cognitive, affective, and social engagement with GenAI. Table 2 provides a summary of the categories, their descriptions and some sample metaphors.
Table 2
Categories of metaphors used to describe Generative AI, with definitions and examples.

CATEGORY | DEFINITION | SAMPLE METAPHORS
Functions | Metaphors that conceptualise GenAI in terms of its tasks and practical capabilities as tools or materials. | Swiss army knife, bricks and mortar, ideas generator.
Roles | Metaphors that position GenAI in human-like relationships and social positions. | Helper, study buddy, frenemy.
Qualities | Metaphors that describe GenAI's characteristics, particularly its unknowable, unreliable, or magical nature. | Black box, outer planet, slippery slope.
Agency | Metaphors that attribute volition, intention, or autonomous action to GenAI, often expressing concerns about control and power. | Competitor, invader, sinister robot.
Within this category, we grouped metaphors that revealed how participants conceptualised GenAI’s practical capabilities, particularly the tasks and functions they believed it could perform. We captured these expressions to respect the participants’ original ideas and to support the coding process by noting the context around the metaphors. Many of these functions were mechanical, including administrative tasks, boring tasks, grammar and sentence checker, proof reader, advanced search. These metaphors and metonymic expressions suggested a view of GenAI as an efficient tool for routine tasks.
Beyond these basic operations, participants also recognised higher-order applications of GenAI, such as email improver, trip planner, write a poem or essays or stories. It could be useful as a bug finder and powerful amazing for coding. This is supported by literature on the coding abilities of GenAI, particularly ChatGPT (see, for example, Coello, Alimam & Kouatly 2024). However, these descriptions often came with qualifiers about the need for human judgment. One participant expressed this ambivalence towards GenAI as a careless code writer.
A significant theme emerged around GenAI as a creative enabler that can assist idea generation and divergent thinking (Habib et al. 2024). Metaphors like antidote to blank page, ideas generator and sounding board, indicated that participants valued GenAI’s ability to initiate creative processes. Related to this idea were metaphors of GenAI as a container of a wide and disparate assortment of ideas, objects and information (dish tray, kitchen sink, scattergun).
Participants strongly agreed with the metaphor of GenAI as an augmenting device that helped them perform tasks more efficiently (indicated by ticks added to sticky notes). These included metaphors like toolbox, fast robot, and super computer. GenAI was like a Swiss army knife, a multipurpose tool that can solve many problems. A smaller but distinct subset of metaphors positioned GenAI as workable material, such as egg, clay, cement and bricks and mortar, suggesting its versatility and potential to support creativity. Such building metaphors connect to established educational psychology concepts such as ‘constructivism’ and ‘scaffolding’, among many others.
The overall affect in this category was predominantly positive, particularly regarding work-related functions. In workshops, participants' metaphors about functions garnered the most ticks of agreement. These metaphors suggest GenAI was seen as saving us from workplace tedium and creative blocks. There is also a hint of being saved by the last minute party planner. Workshop discussions suggested that GenAI could initiate teaching and learning processes, but participants wanted to oversee the accuracy of its outputs. More subtle tensions emerged in qualified metaphors like illegal logo creator, bringing in a sense of guilt and underlying ethical concerns about intellectual property. In this sense, the originality and ethical use of GenAI ideas were questioned, since they rely on pre-existing data that is not attributed (Habib et al. 2024).
Many metaphors equated GenAI with a tool for cognitive offloading, similar to tools such as "calculators or spellcheckers to reduce the cognitive demands of a task" (Dawson 2020: 37). Tool metaphors frame technology as functional, neutral, and designed to assist humans in achieving specific goals, like a hammer (Weller 2023). This metaphor emphasises utility and control, positioning technology as something we use, rather than something autonomous or relational. By labelling GenAI as a tool, we downplay its complexity, embedded values and biases, and its potential influence on human behaviour (Weller 2023). We take comfort in the idea that GenAI is just a machine at our disposal. However, participants were unsettled by the possibility of offloading tasks typically requiring human judgment and creativity to GenAI. A translator can substitute words but cannot draw on lived experience to assess meaning or cultural context.
Participants frequently anthropomorphised GenAI through role-based metaphors, a tendency documented in other research (Kajava & Sawhney 2023; Bender 2024). These metaphors revealed complex perceptions of human-AI relationships in teaching and learning contexts and the relational nature of their perceptions. Three distinct themes emerged in how participants conceptualised these relationships: GenAI as helper, as learning partner, and as friend/foe.
One group of metaphors positioned GenAI in a helper role. Technology such as ChatGPT was characterised as a learning assistant, as in other studies (Punar Özçelik & Yangın Ekşi 2024). Similar terms that were used included supporter and guide. Sometimes the GenAI helper referred to a particular vocation, task or environment, such as research for example, where participants saw it as a research assistant or research associate. In other cases, the metaphor articulated the non-human, technological nature of the assistance, a virtual helper, personal assistant (24/7), or artificial executive helper.
Participants across all workshops saw GenAI as someone to learn from or with, as a learning partner and advisor. The learning relationships ranged from the instructional, such as he explains to me, to more collaborative, such as study buddy and friendly coach, which also imply a level of support or companionship. The metaphor of personal tutor suggested that this learning support could be tailored to meet the individuals’ specific needs, a contentious claim in educational design discourse (see for example Yan et al. (2024) for the affirmative and Costello et al. (2023) for the negative). Other metaphors focused on a more defined exchange in the learning process, such as roleplay partner, or sparring partner, that engages students in interactive dialogue and critical evaluation of AI-generated responses (Walter 2024).
Metaphors also revealed striking dualities in how participants conceptualised their relationship with GenAI. There was a sense of both familiarity (smart buddy, virtual pal) and emotional distance (rational friend, uncritical companion). It was not clear whether GenAI was considered a helpful and supportive friend, and/or one who does not challenge the participants’ behaviour or decisions. Some participants explicitly framed GenAI as an enemy to “keep close”, while another noted failed attempts at human-like interaction: “we don’t get along like friends; sometimes I say thank you but don’t get a response [from ChatGPT].”
Some participants cast GenAI in subservient roles through metaphors like servant or [ChatGPT as a] slave—the latter qualified with “if you can control it”—suggesting increasing unease about who was the master. While some metaphors reflected a more celebratory tone, such as the prompting queen, many more foregrounded adversarial roles, from a cheating assistant to the unreliable drunk reviewer.
Overall, participants were cautious about the roles GenAI might play in education, acknowledging the 'relationship' as being different to a human one. A high proportion of metaphors in this category combined words suggesting that GenAI is both friend and foe. This tension was described as a double-edged sword, a friendly terminator or a frenemy, conceptualisations also used in, for example, Derakhshan and Ghiasvand (2024) and Wysel (2023). This ambivalence crystallised in the metaphor of feeding the beast. Ironically, we sustain an insatiable system that could harm us, one that demands ever more attention, resources, and energy.
When a metaphor frames our understanding and attitudes without scrutiny, it constrains how we perceive and interact with technology. For example, the partner metaphor emphasises collaboration and mutual influence and captures the interactive nature of GenAI, and how it responds to input and shapes output in unexpected ways. On the other hand, such metaphors can lead to misplaced trust and harm when the technology fails to meet our expectations. We are less likely to question the outputs or influence of a helper or coach. This framing also masks the limitations, biases and values embedded in its design.
Anthropomorphising generative AI can over-attribute autonomy or intent. For example, ‘Autonomous’ is used as a metaphor to refer to technologies and “computational artefacts that are able to achieve a goal without having their course of action fully specified by a human programmer” (Johnson & Verdicchio 2017: 576), but this appears to be widely misinterpreted.
In the research literature, GenAI is often conceptualised through complex and contradictory metaphors about its perceived qualities for transforming education (Lodge, Yang et al. 2023; McKnight & Shipp 2024). Participant metaphors mirrored much of this discussion, clustering around themes of GenAI as an unknowable, unreliable, unbounded force that was almost magical.
Participants appeared unsettled by technology that is a black box. The black box metaphor particularly emphasises the opaque and ambiguous nature of AI and its lack of algorithmic transparency (Bearman & Ajjawi 2023). Beyond its opacity, participants saw it as a vast, unknown space that is not readily perceived or grasped, and perhaps unknowable. For example, we only see the tip of the iceberg, and it is an outer planet, yet to be discovered. This technology seemed infinite and inscrutable. The unknowable quality and trajectory of GenAI was encapsulated as a toddler, with a question mark over “How will it grow up?”
This uncertainty extended to outputs, with GenAI described as an unreliable hallucinator, plagiarism machine, a yes man, a people pleaser, and a verbose passenger on a plane. The risks of relying too heavily on GenAI in education were clearly articulated by participants. GenAI was a "cool" operator, projecting smooth confidence and capability that masked its limitations. Similarly, the superficial literary genius metaphor indicated distrust of GenAI text that mimics expertise without authenticity or substance. The metaphor of a really bad feedback loop implied processes that could worsen rather than improve outcomes, trapping users in a vicious cycle.
Some metaphors reflected concerns about the unanticipated, unbounded growth and spread of GenAI. Participants referred to a rapidly growing network like expanding railway lines that can “travel far and wide”. Others used biological terms, comparing GenAI to a spider’s web, a contagion or allergy that spreads and worsens over time and is difficult to contain, like mold travelling through invisible spores. GenAI could be a slippery slope which was perceived as being fast, cheap and out of control.
Participants feared GenAI as an uncontrollable force. One participant used water as a metaphor for GenAI, articulating the tension between its essential, life-sustaining nature and its dangerous, uncontrollable flow: water may take the shape of the container it fills, but it can also spill over and flood. The discussion around this metaphor suggested the need for harmony and balance to sustain life and to avoid extreme change, such as an apocalypse. The infinity blade metaphor represented both its boundless creative and destructive power. Dystopian fears associated with technological misuse were common, although one participant positively described GenAI as the future of education.
GenAI was also described as having a magical or mythical quality. It was likened to a prophet, in deference to its ability to analyse patterns and reveal seemingly supernatural insights from data. Comparisons to wishing wells and genie in a computer express both excitement about transformative possibilities and anxiety about uncontrollable power—what happens when the genie is out of the bottle? One educator described GenAI as a new layer in the matrix, arguing it was “better to take the blue pill to know the rules of the game”. This framing suggests an artificial overlay on academic reality, and a parallel, inauthentic dimension of educational practice that reflects an underlying anxiety about a forced complicity with a system that educators don’t fully trust or understand.
GenAI was often attributed agency and volition, with a sense of dread. Participants conceptualised agency as finite or limited, a zero-sum resource, with GenAI “stealing” and “taking agency” from teachers and students by being a crutch and laziness promoter. Beneath this dread was fear of GenAI as a job rival and competitor. These replacement metaphors suggest a binary either/or mental model of human-AI relations. In this sense, these job replacement metaphors reflect sociocultural anxiety about professional obsolescence.
For example, the Frankenstein metaphor channels long-held fears about monstrous technology and the extraordinary power of AI in education. Such cultural narratives influence how we envision GenAI as a threatening human-like entity whose creation is pivotal and irreversible, raising unresolved questions about human responsibility and control in AI development (Dishon 2024). In contrast, the Hercules metaphor offers a heroic sociotechnical imaginary who uses his innate power defensively, hinting at other powerful uses for GenAI.
Furthermore, participant metaphors emphasised GenAI's superhuman qualities: lacking facial expression, maintaining total rationality, operating 24/7, and performing like an endurance runner that never gets tired or broken. These metaphors expressed awe for GenAI as surpassing human limitations, while hinting at deeper anxieties about human obsolescence.
The academic HAL metaphor from 2001: A Space Odyssey exemplified this cultural anxiety about AI as an autonomous agent that could threaten humanity. Metaphors like HAL are deeply embedded in cultural narratives and collective imaginaries that actively shape how educators anticipate, interpret, and respond to AI integration in academic settings. Just as the film poses questions about whether humanity is ready for such transformative power, educators worry about AI systems making critical decisions without sufficient ethical safeguards or “guard rails”.
Participants expressed complex emotional and relational responses through metaphors like sinister robot, creep, and invader. The technology was anthropomorphised as a deceptive bullshit artist, a dumb cheater and an agent of stealth. Some described it as a Judas that betrays you, others as a necessary evil. GenAI was positioned as a thief and bank robber, suggesting an antagonistic and anti-social relationship. It had a “desperate wish to be human”, that gave GenAI an unlikable try-hard quality and inauthenticity. The metaphor of a reverse Robin Hood indicated exploitative power dynamics and amplifying of inequality through this technology.
One participant referenced the Swampman thought experiment, where lightning creates an exact physical copy of a person with identical memories (Davidson 2001). This sparked a discussion about AI's knowledge versus human experience. The participant argued that, like the Swampman, AI's outputs appear knowledgeable but lack lived experience: it can talk but doesn't understand. The metaphor was an affirmation of the value of human experience, regardless of AI's capabilities.
Towards the end of the workshop participants reflected on their experience and considered whether they might use metaphors in their teaching, research or broader professional practice to make sense of GenAI, further develop their understanding, or support conversations with peers. This reflexive discussion suggested participants saw the value of metaphors for sensemaking, sharing their lived experiences, facilitating connections with others and raising their awareness of both GenAI and the pervasiveness of metaphors. It also led participants to reflect on the nature of knowledge in the age of GenAI. The following sections examine how workshop participants critically reflected on their collective language choices.
Many metaphors seemed to be attempts to make sense of GenAI as an entity neither human nor fully machine. Participants noted that making sense of GenAI with others using metaphors was a good way to “get you started”, and that the “frame of metaphor” helped them take a step back, with one teacher remarking that it had “unlocked some things” to consider. This was supported by the workshop’s methodology, grounded in material and interpersonal collaboration, purposefully excluding the use of GenAI. Metaphors were used as material artefacts to be engaged with to “explore embedded meanings” (Grisoni & Page 2010: 14). Participants suggested this stepping back, and the variety of metaphors shared, helped surface the complexity of integrating GenAI in education. Discussing multiple, nuanced metaphors in educational contexts supported deeper understanding (Cameron 2003). The collective exploration of metaphors can be seen as facilitating joint meaning-making (Carvalho et al. 2022).
The metaphors elicited and explored in the workshop included both conventional and unconventional metaphors. Fraser (2018) suggests that unconventional, novel metaphors tend to be more cognitively demanding, requiring deeper engagement. The variety of metaphors explored through the collective process resulted in a greater diversity of perspectives and “alternative visions to the transformation-oriented techno-optimism” than is common in educational technology discourses (Houlden & Veletsianos 2022: 613). The workshop provided a space for ‘open-ended’ interpretation about the future (Cox et al. 2023), exploring fear, awe and everything in between.
This collective sensemaking process helped participants to think about the ethics of GenAI (Vallis et al. 2023). Lakoff and Johnson (2003) explain how our conceptual system defines our everyday realities. While our reasoning relies on metaphorical connections, we are often unaware of this. Participants suggested that the workshop heightened their awareness, with comments such as “I’m leaving with a head full of metaphors” and “will start seeing them everywhere”. The metaphor ‘gallery’ encouraged reflective discussions about GenAI’s capabilities and risks, and its societal and ethical impacts, through playful, inquiry-driven activities (Gupta et al. 2024).
Participants acknowledged that their metaphors were closely connected to their “lived experience”, noting how this helped them to see the cross-cultural dimensions of GenAI. As Houlden and Veletsianos note, “the kinds of stories we tell are not neutral and can never be isolated from the worlds we live in” (2022: 610). Metaphors have the capacity to reveal beliefs, values, and identities across cultures (Cortazzi & Jin 2020). The sharing of metaphors helped surface “slightly different” experiences, with participants describing this as “peeling back the layers of the onion”. For instance, one student felt their experiences with ChatGPT were more robotic, while another felt they were more conversational.
This collective sensemaking made participants feel they were “not alone” in exploring this inscrutable technology. Bailey (2024) notes that the affective dimension of metaphors is often neglected in favour of cognitive mapping. Even those meeting for the first time appeared comfortable sharing ideas through metaphor. This resonates with Grisoni and Page, who note that “metaphors seemed to provide a safe enough way for us to progress our shared knowing and examine the difficulties of our inquiry” (2010: 23). For teachers who were less experienced with GenAI, the workshop was a supportive introduction to thinking about its potential roles in their practice. The metaphors shared could also inform conversations with peers about GenAI beyond the workshop.
The metaphors generated acted as ‘props’ that visualised participants’ experiences, perspectives and emotions. They helped us to see more clearly how metaphors act as a “container for emotional and unconscious forces” and connect us to each other in “vivid, memorable and emotion-rousing representations” (Grisoni & Page 2010: 15).
As much as participants appreciated technology for automating routine tasks and overcoming initial barriers in work and creative tasks, even if imperfectly, many also expressed fear that GenAI could take agency from humans and make educators redundant. Discourse around AI tends toward reductive oppositions: human versus machine capabilities, and control versus autonomy (Johnson & Verdicchio 2017). Similarly, educational technology tends to promise individual, automated learning as more productive and efficient than classroom teaching (Bayne 2015). This may play into narratives of technology and humans as separate, as an either-or proposition rather than entwined, as we argue.
This binary thinking extended to discussions about knowledge creation. Participants debated whether ChatGPT outputs could be considered an idea generator or whether they were “common knowledge”. Some felt “new knowledge” emerged from the relationship between user and Large Language Model (LLM) rather than from the LLM alone, with creativity resulting from “bringing in ideas from other contexts”. These contested ideas around knowledge-making highlighted fundamental questions about whether GenAI is “producing”, “active” or “agentic”.
We noticed a strong anthropomorphising tendency, similar to other research (Bender 2024; Gupta et al. 2024; Tlili et al. 2023). Language matters: the metaphors we use shape society’s understanding and governance of artificial intelligence systems. The use of anthropomorphic metaphors and attributions of agency to AI can be misleading and obscure the human, material dimensions of AI development and use in education (Kajava & Sawhney 2023). GenAI technology focuses on automating specific cognitive tasks and processes such as natural language processing, rather than human-like general intelligence, hence Siemens et al. (2022) prefer the term ‘artificial cognition’ to AI.
We are susceptible to accepting AI-generated language as if it emerged from human thought, even when we are aware that the underlying technology is pattern-matching at scale. We are so used to creating meaning from text that our instinct is to “follow the line of words as a dog does a hare” (Costello 2023). This anthropomorphising tendency may diminish our humanity, as seen in the metaphor that reduces the complex human mind to a computer (Bender 2024). This tension emerged repeatedly in participants’ attempts to position GenAI’s capabilities relative to human cognition, as a machine with a brain, on a par with human thinking capabilities. The computer as a brain (and vice versa) has long been a problematic metaphor in technology, seen as diminishing human ability. The metaphor “afford[s] the human mind less complexity than is owed, and the computer more wisdom than is due” (Baria & Cross 2021: 2).
Metaphors such as frenemy and infinity blade reflect a deep ambivalence towards GenAI, where participants are pulled in different emotional directions, unable to take a definitive stance. Their relationship with GenAI in education is complex, and their experiences are still being processed and reimagined. After analysing hundreds of metaphors generated across different settings and times, we attest that all metaphors serve a purpose. Whether we say ChatGPT is a friend, an enemy or a frenemy, we value all of the insights these metaphors illuminate. We may favour the more ambivalent frenemy because it creates cognitive dissonance, and possibly deeper, more critical reflection about the technology’s characteristics. That said, even complex metaphors such as frenemy can become assumptions or clichés, slipping into our daily thinking without question.
Our research indicates that prevalent metaphors can constrain educational understanding of AI’s possibilities and limitations, and encourage adversarial or competitive views of AI. By examining this dynamic relationship between public discourse and sensemaking in higher education, we nurture more sophisticated dialogues about AI’s role in society. Unpacking the complexities of generative AI, questioning its inevitability in education, and recognising diverse perspectives all require sustained effort and imagination. Our attitudes and metaphors are so deeply embedded in our feeling, thinking and culture as to be invisible. Yet participants can also shift their thinking through discussion and critical reflection on their own lived experience.
Our findings have direct implications for educational practice and policy. The workshop model provides educators with a social, structured way to engage critically with GenAI that assumes no technical expertise. For example, a learning designer could use metaphor analysis to help a teaching team surface their assumptions about GenAI before redesigning assessments. Metaphor workshops could also be facilitated to engage students in meaningful conversations about GenAI use.
At the institutional level, our findings suggest moving beyond binary policies of permitting or prohibiting GenAI. Instead, institutions could develop frameworks that acknowledge both opportunities, as the augmenting device metaphors suggest, and the ethical risks reflected in black box concerns. Using metaphor workshops when developing AI policies could help gather diverse stakeholder perspectives, while professional development could address specific fears surfaced through metaphors, such as anxiety about AI replacing teaching roles. Institutions could also design programs that explicitly address the gap between anthropomorphic perceptions and the actual capabilities of GenAI.
We acknowledge that our workshop-based elicitation, along with the prompts and facilitation, may influence the types of metaphors generated, and that these metaphors may differ from how participants naturally conceptualise GenAI in everyday contexts. As a result, our categories might reflect aspects of the workshop design in addition to participants’ authentic conceptualisations.
Although the workshops were valuable experiences, time constraints limited our ability to fully analyse and discuss metaphors with participants. Longer or sequential workshops would allow deeper sensemaking of human-AI relationships in education. Conducting more workshops with students would also be helpful.
This research points to critical questions about agency and decision-making in educational AI implementation for further investigation: Who determines which systems are used? How are tasks delegated between AI and humans, and what are the implications for educators and students? Extended workshops could help participants understand how AI systems could reconfigure roles and relationships in educational contexts, and potentially redistribute power and responsibility.
Future studies might examine how metaphors and attitudes toward GenAI evolve over time, how it is conceptualised across different educational cultures, and how metaphor analysis could inform more ethical and equitable AI implementation.
Our research revealed the complexity and diversity of views around generative AI through personal, embodied research methods. While existing research has elicited metaphors for educational technology from teachers and students, the workshop format used in our study and its practical focus are unique and have the potential to inform critical AI literacy efforts in higher education. Workshop discussions allowed participants to express vulnerability about technology in ways that may not emerge in more structured settings like focus groups or interviews. The workshop format, particularly the use of handwritten sticky notes, created a democratic process that transcended language barriers: participants could express ideas without being constrained by English fluency or software autocorrection. Lodge, Thompson and Corrin (2023) recognise sensemaking as a critical GenAI research area in higher education and emphasise the ethical issues that surface when engaging with LLMs. Our joint sensemaking process integrated cognitive, humanistic and social perspectives (Markauskaite et al. 2022), shifting away from technical AI literacy toward understanding the human values and collaborative practices behind technology in education.
Fear and awe need not define our relationship with generative AI, but we need new ways to play and to engage with it. Our foray into designing, developing, facilitating and researching metaphor workshops is just one approach in an emerging field.
The study was approved by and adhered to the university’s Human Research Ethics Committee protocols (Project number: [2019/892]).
This article was supported by the strategic Connected Learning at Scale (CLaS) initiative. The authors thank the students, academics, learning designers, media producers and project officers who played an active role in the design, development and implementation at the University of Sydney Business School.
The authors have no competing interests to declare.
Conception of the study was by the lead author, Carmen Vallis. All three authors contributed substantially to data collection, analysis, and drafting and revising the article. All approved the final version and agree with the order in which authors are listed in this article. They also agree to be accountable for this research and to answer any questions about its accuracy or integrity.