Dishon, G. From Monsters to Mazes: Sociotechnical Imaginaries of AI Between Frankenstein and Kafka. Postdigit Sci Educ 6, 962–977 (2024). https://doi.org/10.1007/s42438-024-00482-4
The emergence of ChatGPT and other generative AI tools (GenAI) capable of producing human-like texts sparked debates concerning whether we are at the dawn of a new age of writing and meaning-making. Rather than weighing in on the potential impact of GenAI, this paper engages with the implicit sociotechnical imaginaries underpinning reactions to GenAI. Fictional texts are one particularly evocative source of sociotechnical imaginaries, as they can shape our shared imagination of possible and probable futures. Accordingly, I analyze two seminal works of fiction: Mary Shelley’s Frankenstein (Planet eBooks, 1818) and Franz Kafka’s The Trial (Echo Library, 1925). Frankenstein is regarded as an enduring myth that has shaped perceptions of artificial life. This paper sets out to explore the limitations of the ‘Frankensteinian sociotechnical imaginary’ by unpacking AI’s assumed modes of meaning-making along three axes: (1) agency, (2) relations, and (3) control. Within the Frankensteinian sociotechnical imaginary, AI is approached in anthropomorphic terms, emphasizing the struggle over control of meaning-making without reshaping its overarching logic. The Trial, in contrast, offers a lens that challenges common dichotomies underpinning the Frankenstein model: (1) highlighting the entanglement between AI and human intentionality in processes of meaning-making, (2) exploring human-AI relations as recursively shaped through the complementary tendency of humans and GenAI to generate and infer meaning, and (3) shifting the notion of control from a struggle over domination to an emphasis on how GenAI concurrently expands the available choices while limiting humans’ influence over patterns of meaning-making.
The outside world is too small, too straightforward, too truthful, to hold all that is contained in one man.
Franz Kafka, November 13, 1912, in a note for Felice Bauer’s birthday.
The launch of ChatGPT, an AI chatbot capable of generating previously unfathomable levels of complexity and variety of human-like text, sparked debates concerning whether we are at the dawn of a new age of writing and meaning-making (Gretzky 2024; Mishra and Heath 2024). Rather than weighing in on the potential impact of ChatGPT and other generative artificial intelligence tools (hereafter, GenAI), this paper engages with the implicit and deep-seated sociotechnical imaginaries underpinning reactions to GenAI. Put briefly, sociotechnical imaginaries are collectively held and publicly promoted visions of the future made possible through scientific and technological progress (Jasanoff and Kim 2015). As with any technology, GenAI’s impact depends not only on the technology itself but also on its interplay with the sociotechnical imaginaries underpinning its development, spread, and appropriation (Natale and Ballatore 2020; Richter et al. 2023). Crucially, sociotechnical imaginaries are not monolithic or static, but rather evolve through contestation among differing perspectives (Bareis and Katzenbach 2022; Rahm and Rahm-Skågeby 2023). Fictional texts are one particularly evocative source of sociotechnical imaginaries, as they often shape our shared imagination of possible and probable futures (Cave and Dihal 2019; Hudson et al. 2023). Accordingly, to explore the roots of evolving sociotechnical imaginaries of meaning-making in light of the emergence of GenAI, I analyze two seminal works of fiction: Mary Shelley’s Frankenstein (1818) and Franz Kafka’s The Trial (1925).
Mary Shelley’s Frankenstein (1818) is regarded as an enduring myth that has shaped sociotechnical imaginaries around artificial life (Falk 2021; Musa Giuliano 2020). This paper sets out to explore the limitations of such a sociotechnical imaginary with respect to the features of GenAI and the social structures in which it is implicated. To do so, I appeal to another influential fictional text—Franz Kafka’s The Trial (1925)—which has often been positioned as reflecting the metaphysical and social challenges of the modern world (Arendt 1944; Deleuze and Guattari 1986; Solove 2001). Though less often explored in the context of technology in general, and AI and education specifically (for an important exception, see Prinsloo 2017), I suggest that The Trial offers some key insights for thinking about GenAI’s role in meaning-making.
The paper begins by briefly discussing what is novel about generative AI, followed by an introduction of the concept of sociotechnical imaginaries. I then unpack the sociotechnical imaginary of AI implicit in Frankenstein along three axes—(i) agency, (ii) relations, and (iii) control—briefly illustrating how these aspects are manifested in common reactions to GenAI. The main part of the paper delineates how The Trial could broaden current conceptualizations of GenAI’s role in meaning-making, suggesting that it aligns with a more-than-digital view, which challenges three key dichotomies underlying the Frankenstein myth: external-internal, process-outcome, and choice-coercion. Put briefly, much like Kafka’s letter to Felice, The Trial shifts our orientation from the Frankensteinian depiction of AI as an external actor with which humans struggle, to a focus on an endless process of humans and GenAI striving to ‘decipher’ each other, further blurring the entanglement of machines in evolving human practices of seeking and making meaning.
At its core, GenAI employs machine learning techniques to discern and internalize the intricate patterns in vast datasets. It then leverages these patterns to produce human-level content across mediums such as text, visual art, music, and programming. The novel aspect of GenAI is that it autonomously generates original outputs by capturing the underlying statistical patterns of its training material rather than merely copying its specifics (Ouyang et al. 2022; Miao and Holmes 2023). The results are reminiscent of human-level outputs, though with a scope and speed that only artificial systems can deliver. Here, it is vital to distinguish between two stages in GenAI model development: pre-training and fine-tuning. During pre-training, GenAI learns from vast datasets to identify patterns and generate similar content. This stage is marked by its unstructured approach and unpredictability. In fine-tuning, the model’s capabilities are tailored by applying targeted reinforcement learning (usually via human feedback) to refine and direct its behaviors towards more specific contexts and outcomes (Ouyang et al. 2022; Sharma et al. 2023).
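To make the two-stage distinction concrete, the following is a minimal toy sketch in Python. The tiny corpus, the bigram ‘model’, and the feedback scores are hypothetical stand-ins of my own: production GenAI systems rely on large neural networks and reinforcement learning from human feedback (Ouyang et al. 2022), but the division of labor between the two stages follows the same logic.

```python
import random
from collections import defaultdict

# --- Stage 1: pre-training ---------------------------------------------
# Learn statistical patterns (here, simple bigram counts) from a corpus.
corpus = ("the court does not want anything from you "
          "it accepts you when you come and it lets you go").split()
counts = defaultdict(lambda: defaultdict(float))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1.0  # raw co-occurrence statistics

def generate(start, length=6):
    """Sample a continuation from the learned transition statistics."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# --- Stage 2: fine-tuning ----------------------------------------------
# Hypothetical human feedback reweights the model's learned tendencies,
# steering generation toward preferred continuations without dictating them.
feedback = {("you", "go"): 3.0}  # raters preferred this transition
for (prev, nxt), score in feedback.items():
    counts[prev][nxt] *= score

print(generate("the"))
```

Running the sketch repeatedly illustrates pre-training’s unpredictability—different samples recombine the corpus in different ways—while the fine-tuning step merely nudges the learned distribution, echoing the refinement rather than redesign relation between the two stages described above.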
GenAI therefore signals a paradigm shift, one where algorithms move beyond analysis and optimization to forging their own representations in a manner we once thought was exclusively within the realm of human expression (Gretzky 2024; Miao and Holmes 2023). Critically, one of the key innovations of ChatGPT was not its technological capacity but its simple-to-use interface and human-like interactions (Pons 2023), features that have become taken for granted in other general LLMs such as Claude and Gemini. These capacities for production and communication sparked debate about whether GenAI displays human-like intelligence or gets us substantially closer to the holy grail of AI research—artificial general intelligence—broad, flexible machine intelligence that could match or even outperform humans across a wide range of intellectual tasks (Giannini 2023).
Understanding the evolution and impact of GenAI cannot be limited to attending to its technological capabilities. Instead, it requires scrutinizing the (often implicit) assumptions concerning its potential contributions and pitfalls, as well as desirable modes of use (Mishra and Heath 2024). This implies moving away from determinist arguments concerning technology’s inevitable impact and attending to how technological features are intertwined with discursive characteristics that shape how technologies are depicted, promoted, and enacted (Dishon 2024; Bareis and Katzenbach 2022; Fawns et al. 2023). Notably, even the term ‘artificial intelligence’ has vital implications, highlighting so-called human-like attributes of computational technologies (Natale and Ballatore 2020).
How technologies are framed relies on broader views of desirable social arrangements, what Jasanoff and Kim (2015) defined as sociotechnical imaginaries: ‘collectively held, institutionally stabilized and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’ (4). These imaginaries lay out desirable (or dystopic) visions of the future that play a vital role in shaping current technology development and use. In contrast with similar concepts such as master narratives, sociotechnical imaginaries are not homogeneous or static, but rather evolve over time and in light of contestation between different views (Bareis and Katzenbach 2022; Jasanoff and Kim 2015; Rahm and Rahm-Skågeby 2023). Further, sociotechnical imaginaries go beyond an analysis of language or discourse to include how visions of social life are materialized and performed through technology (Jasanoff and Kim 2015). Attention to diverse and contested imaginaries is pivotal in the case of novel technologies, such as GenAI, that are still in the stage of interpretive flexibility, in which various depictions struggle, explicitly or implicitly, to establish the dominant framing of a given technology (Ramiel and Dishon 2023; Natale and Ballatore 2020; Richter et al. 2023).
Stories, narratives, or myths are uniquely evocative means for circumscribing current engagement with technological developments, as they offer simple and easily communicated views of technology’s potential and pitfalls (Bareis and Katzenbach 2022; Cave and Dihal 2019). Therefore, narratives could be conceptualized as the basic building blocks of more complex sociotechnical imaginaries (Sartori and Bocca 2023). Exploring fictional narratives is particularly important with respect to GenAI for several reasons. First, GenAI is often depicted as a critical step towards general or strong AI, eliciting age-old narratives concerning humanity’s aspiration to create artificial life. Second, the actual developers of AI have been preoccupied with fictional narratives concerning the creation of life, which have shaped their technological ambitions and design choices (Musa Giuliano 2020; Natale and Ballatore 2020). Finally, GenAI is still in its early stages, in which sociotechnical imaginaries are likely to be shaped by narratives and myths due to the lack of concrete experiences (Hudson et al. 2023; Richter et al. 2023). Therefore, attending to fictional narratives facilitates a more in-depth understanding of current discourse, while also potentially allowing us to examine how technologies could be imagined otherwise (Falk 2021; Mishra and Heath 2024).
Frankenstein has been repeatedly acknowledged as a critical myth underpinning modern perceptions of human–machine relations (Cave and Dihal 2019; Prinsloo 2017). The novel is associated with the sociotechnical imaginary of intelligent machines rising up against their human creators—what Asimov (1950) famously labelled the Frankenstein Complex. This paper is less interested in the notion of machine uprising and instead aims to unpack the features of meaning-making in the often taken-for-granted Frankensteinian sociotechnical imaginary along three axes: agency, relations, and control.
The most salient characteristic of artificial life in Frankenstein is that it is anthropomorphized—characterized by human-like agency. The creature brought to life by Victor Frankenstein is never named in the novel, yet despite its monstrous appearance, the creature’s thoughts, emotions, and desires are all painfully human (Botting 2001; Shuffelton 2018). The creature’s recollections of his existence since his ‘birth’ are all essentially human-like experiences in terms of both sensory input and emotional and cognitive texture. Thus, artificial life is portrayed as a discrete entity, largely mirroring human agency. This mirroring is quite literal in the novel, as the creature’s views of the world and his own subjectivity are mainly based on observing humans—secretly spying on the De Lacey family for several months—and on reading novels.
Though AI is depicted as possessing superior capabilities—both physically and mentally (the creature quickly learns to talk and read)—the overall logic governing patterns of meaning-making remains stable. This overall similarity serves as the background against which certain differences can be appreciated and highlighted. Hence, it is not so much the rationale of agency or meaning-making that shifts; rather, it is the actor who holds the privileged author position—the fear that machines will replace humans is embedded within current structures of meaning-making.
This anthropomorphic depiction of AI sets the tone for imagining human-AI relations as developing along similar lines to human relationships—personal interactions with a discrete subject that is mostly human in its organizing logic. In fact, it is exactly its humanity that leads AI to pursue one of the most human modes of conduct—freeing itself from its inferior position and dominating its environment and any other species in it (Cave and Dihal 2019; Falk 2021). Though Frankenstein’s legacy centers on the inevitable clash between humans and AI, the novel paints a more complex picture. Before creating the creature, Victor Frankenstein expects a glorious outcome: ‘A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me. No father could claim the gratitude of his child so completely as I should deserve theirs.’ (54). Yet, when he finally succeeds, he is immediately repelled by his creation: ‘I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.’ (59) Stunned, Victor abandons the creature, hoping that the whole affair was just a nightmare.
The key point is that the creature’s hideousness has already sealed his fate, as well as his relations with his ‘father’ and humans more broadly. As I elaborate in the next section, the question of what shapes human-AI relations is at the heart of the novel; for now, I highlight the temporal feature of these relations—the moment of creation is treated as the most pivotal point, an irreversible feat to which all later actions merely react, striving to attenuate the damage that has been done.
The creature in Frankenstein is portrayed as having an inherently repulsive physical form that automatically elicits feelings of horror and aversion from anyone who sees him. This raises questions about humans’ control over their interactions with AI. In fact, the novel could be read as revolving around the question of human responsibility for AI, and more broadly, humans’ need to care for their creations (Botting 2001; Latour 2011).
Victor deserts the creature immediately after its birth, an act that the creature views as the source of his misery, and the motivation for his revenge. When the creature asks Victor to create a mate for him in order to save him from his loneliness, threatening to kill Victor’s family if he does not comply, Victor remains unwilling to fulfill this desire, worried that it would infinitely increase the dangers posed by artificial life. In contrast, the creature vehemently argues that it is not his own nature, but his creator’s actions that led to tragedy:
Remember that I am thy creature; I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and good; misery made me a fiend. Make me happy, and I shall again be virtuous. (114)
While Victor maintains a determinist view of the creature’s nature, the creature views his fate as contingent on Victor’s choices. More broadly, the creature suggests that Victor’s actions and choices shape his development and their relations. From this perspective, Frankenstein can be read as centered on the education, or lack thereof, of artificial life by humans (Latour 2011; Shuffelton 2018). Still, even within such a view, humans cannot control AI’s nature once it is created. At best, they can shape it through human-like interaction and education.
The main features of the Frankenstein sociotechnical imaginary were reflected in mainstream media responses to the launch of ChatGPT. Pundits portrayed GenAI as potentially heralding the birth of artificial life. This anthropomorphism was pervasive in sensationalized accounts of GenAI going beyond the boundaries of its intended programming (e.g., Roose 2023). The worry was that once AI is ‘brought to life,’ it might be too late to respond to its all but inevitable uprising. Thus, calls for halting or slowing down the development of AI systems [1] echo a sociotechnical imaginary according to which AI is an external actor, whose relations with humanity are largely determined before its birth, and which we will have little control over due to AI’s superiority. In line with the Frankenstein complex, it is assumed that though AI will introduce a new form of intelligence, its actions will be guided by a human-like desire for domination. Thus, these sociotechnical imaginaries postulate a world similar to our own, yet one in which humanity loses its status at the top of the (no longer organic) food chain.
This anthropomorphism is also connected to the characteristics of GenAI. Despite its black-boxed nature, GenAI’s most captivating feature is that its outputs seem human (Giannini 2023; Pons 2023). Moreover, GenAI materializes very concrete aspects of the Frankensteinian sociotechnical imaginary. Much like the creature in the novel is built from an amalgamation of human parts, so does GenAI generate content by reorganizing human thought and language. Further, the creature learns about the world by observing humans, and GenAI learns by the very literal act of scouring human data and trying to identify its underlying logic. Thus, the view of AI as mirroring human forms of agency is reflected in the technical nature of current GenAI systems. In fact, it could be argued that GenAI does not aspire to create the most advanced intelligence but is rather focused on outputs that have the semblance of human authorship (Natale and Depounti 2024). Consequently, it is easier to understand why GenAI is assumed to replicate human logic, while improving on it due to its superior ‘physical capabilities’—its capacity to instantaneously produce endless variations of human-level texts. Such replicatory mechanisms are also at the heart of worries concerning AI developing human desires and behaviors, from love to domination. Critically, these worries are based on the Frankensteinian logic—GenAI does not introduce new modes of thinking or meaning-making here; it is a threat to humanity exactly because of its all-too-human nature.
Whereas Frankenstein has become the dominant sociotechnical imaginary underpinning engagement with AI, The Trial might appear like a less obvious source. Though not often connected to technology per se, The Trial has been portrayed as capturing the fraught nature of man’s effort to make sense of an obtuse world, both metaphysically and in the context of modern societies and bureaucracies (Arendt 1944; Benjamin 1969; Canetti 1974; Deleuze and Guattari 1986; Munro and Huber 2012). More rarely, researchers have explored the relevance of The Trial to navigating technological systems, mainly with respect to challenges of privacy and power (Solove 2001; Solove and Hartzog 2024). In educational research, Prinsloo (2017) already made the connection between Frankenstein and The Trial, focusing more specifically on the issue of algorithmic decision-making. Prinsloo acknowledges Frankenstein as key to understanding human worries that our creations might turn against us. To this, he adds the important conceptualization that The Trial reflects the concrete challenges and experiences of navigating an 'algocracy'—an increasingly algorithmically governed world. In what follows, I develop and expand Prinsloo’s (2017) lines of thought, examining more closely how the patterns of meaning-making characterizing interactions with the court in the novel illuminate an alternative sociotechnical imaginary for thinking about GenAI’s impact on practices of meaning-making.
The Trial takes place during a year of its protagonist’s life—starting with Joseph K.’s arrest on his 30th birthday and culminating in his execution a year later. When K. tries to inquire about the reasons for this arrest, the men who arrest him flatly acknowledge that they do not know:
As to whether you’re on a charge, I can’t give you any sort of clear answer to that, I don’t even know whether you are or not. You’re under arrest, you’re quite right about that, but I don’t know any more than that. (15)
Not only does this bizarre state of affairs—of K. not knowing what he is accused of—persist throughout the novel; the question of the actual crime becomes secondary to K.’s efforts to navigate the intricate bureaucracy of the court in which he is being accused (Solove 2001). The shift from the question of the crime to the mechanics of dealing with the court is central to the unique model of meaning characteristic of the novel (Benjamin 1969). The Trial follows K.’s futile attempts to interpret the court’s rationale and intentions (Prinsloo 2017). Yet the court and its representatives are devoid of agency in the forms both K. and the reader expect it to display.
Instead, the court is constantly echoing and subverting the agency of the accused (Canetti 1974). Though K. is focused on deciphering the court’s logic, it seems as if K. himself is driving the unfolding events. For instance, when K. is first summoned to court, he is not given a time or a specific room at which he should arrive. Trying to devise a way to search the different rooms, he makes up a cover story about a joiner named Lanz:
He still felt unable to ask for the investigating committee, and so he invented a joiner called Lanz… so that he could ask at every flat whether Lanz the joiner lived there and thus obtain a chance to look into the rooms. (43)
Surprisingly, on the fifth floor a woman asserts that Lanz is inside one of the rooms, only for K. to realize that this is the court, where he is scolded by the magistrate for his tardiness. Though K. views the court as governing his fate, it seems to concurrently reflect or subvert his inner world. This logic is explicitly put forward towards the end of the novel, when K. arrives at the cathedral for a work-related issue, only for the priest, who also serves as a representative of the court, to suggest that he had summoned him there. K. wonders what the court wants of him, to which the priest responds: ‘the court does not want anything from you. It accepts you when you come and it lets you go when you leave.’ (264).
In contrast to the human-like and anthropomorphic agency of Frankenstein, The Trial offers a distinctively different model. The court does not work according to a well-defined legal system or by reference to truth. Instead, it is guided by its relations with the subjectivities of the accused (Benjamin 1969; Munro and Huber 2012). Though the court is positioned as the ultimate source of meaning-making, it does not include an identifiable author or decipherable forms of agency. In fact, through the constant echoing and distortion of K.’s own agency, the court dissipates the very notion of a discrete and identifiable agency. The boundaries between the inner and outer world are porous; it is not clear to what extent the outer world reflects K.’s thoughts or plays on them. This ambiguity is one reason that the novel has invited both existential interpretations and ones focused on the distorted logic and inscrutability of modern bureaucratic systems.
This lack of identifiable agency also dictates the relations between humans and the court. The court in the novel is portrayed as an entity individuals cannot completely evade, nor can they directly interact with in intelligible ways (Solove 2001). The different characters are sentenced (pun intended) to constantly strive to interpret and influence the court, yet without ever having a clear sense of the appropriate ways to do so, or their relative success (Benjamin 1969). Block, a fellow defendant, explains the matter to K.:
I was being made to suffer in many different ways but there was still not the slightest sign that even the first hearing would take place soon. So I went to the lawyer and complained about it. He explained it all to me at length, but refused to do anything I asked for, no-one has any influence on the way the trial proceeds, he said, to try and insist on it in any of the documents submitted like I was asking was simply unheard of and would do harm to both him and me. (212)
The novel follows K.’s attempts to understand the complex structure of the court, which consists of various levels, with the higher courts being presented as the most important yet concurrently practically unreachable. Further, interactions with the court are often conducted through a wide and eclectic cadre of representatives and intermediaries—from lawyers of differing levels, stature, and expertise to less formal though often more influential characters (Munro and Huber 2012).
This is crystallized in K.’s conversation with Titorelli, the court’s portrait painter, who is at once a prominent figure and a beggar. Titorelli offers the most comprehensive portrayal of the trial’s possible outcomes: ‘absolute acquittal, apparent acquittal and deferment’ (182). As is often the case, these labels do not reflect their actual meaning. Absolute acquittals, Titorelli suggests, exist only as myths. The second option—apparent acquittal—implies that the defendant is acquitted, yet is in constant danger of being rearrested, only for the trial to start over:
One day no-one expects it some judge or other picks up the documents and looks more closely at them, he notices that this particular case is still active, and orders the defendant’s immediate arrest. I’ve been talking here as if there’s a long delay between apparent acquittal and re-arrest, that is quite possible and I do know of cases like that, but it’s just as likely that the defendant goes home after he’s been acquitted and finds somebody there waiting to re-arrest him. (189)
In contrast to these two types of acquittal, K. is informed that the best way to deal with his trial is through deferment: in such a case, the trial goes on endlessly yet stays in its initial stages.
In contrast with the interpersonal and agonistic relations in Frankenstein, which deterministically stem from the moment of the creature’s creation, The Trial presents a world in which interactions with the court are at once essential and futile. Thus, paradoxically, the best way to arrive at a definite outcome is to make sure the process never ends. Interactions with the court are necessary and require constant maintenance, yet they cannot be controlled, predicted, or even expected to progress towards a resolution. The novel could be viewed as turning the meaning of a trial upside down—it is not meant to arrive at a verdict or outcome; rather, it is the process itself on which one must focus. In place of the primordial moment of creation in Frankenstein that sets up an inevitable course of relations, in The Trial these relations have no clear starting point (an accusation) or end (a verdict); they are continuously ongoing yet never developing.
The court’s ambiguous agency and its relations with humans bring to the forefront questions of control. The Trial offers a depiction of control that relies on a different understanding of meaning-making—shifting from a stable and general model of meaning to an idiosyncratic and personalized one. The Trial’s model of meaning-making is put forward in the famous Before the Law parable told by the priest towards the end of the novel: a man has come to see the law. At the law’s gate, he is denied entrance by a guard, who tells him that he is only the first, and least frightening, of seven guards protecting the law. The man waits at the gate and tries to persuade the guard to let him in, only to repeatedly fail. As he is about to die, he asks the guard one last question—how is it the case that he is the only person who tried to enter? The guard then replies: ‘Nobody else could have got in this way, as this entrance was meant only for you. Now I’ll go and close it.’ (256)
The duality of the law being both personally tailored and inaccessible is emblematic of the dynamics of meaning-making in the novel. K. simultaneously shapes reality and is hopelessly trying to make sense of his predicament. As outlined above, it appears as if external events are shaped by K.’s thoughts. Yet he is also repeatedly told that he should learn to accept rather than control reality, as concisely summarized by the priest: ‘you don’t need to accept everything as true, you only have to accept it as necessary.’ (263) As for the protagonist in the parable, reality is created just for K., yet it remains beyond his reach. In a reversal of everyday reality, where we are limited to determining our own actions, Kafka paints a world in which K. can bend reality according to his whims, but his own inner workings and immediate actions remain beyond his control. Critically, this structure has two layers—like K., the reader is constantly lured to formulate a stable and comprehensive understanding of the text, but in line with the parable, the semblance of meaning only signifies its inaccessibility (Benjamin 1969; Canetti 1974). This is further reflected in the novel’s style, characterized by a juxtaposition of an overarching sense of symbolism with an inordinate amount of minute detail (Deleuze and Guattari 1986). Even when the parable is introduced, it is immediately followed by incessant arguments between K. and the priest about details in the story and their precise interpretation.
This model of control becomes clearer when we compare it to the Frankensteinian sociotechnical imaginary. Whereas Frankenstein elicits the question of whether humans can control their creations (and the world, in light of the creature’s superiority), The Trial introduces a world in which the very logic of choice and coercion is altered—the two are no longer mutually exclusive or zero-sum, in the sense that the ability to choose does not inherently lead to less coercion. In fact, K.’s choices only bring him closer to the predestined outcome of his death. Thus, K. is required not to alter any specific choice, but rather to reorient how the relations between choice and coercion are perceived.
What is implied by the Kafkaesque sociotechnical imaginary for thinking about GenAI? I contend that the shift between these two sociotechnical imaginaries is broadly analogous to adopting a postdigital lens. While the Frankensteinian sociotechnical imaginary positions AI as separate from humans yet operating according to a parallel logic, The Trial offers a lens that invites us to reframe processes of meaning-making altogether. Though the postdigital is notoriously hard to define (Gourlay 2023), for the purposes of this paper I confine myself to highlighting its resistance to a dichotomous view of the physical and the digital, calling instead for examining the diverse ways in which the two are entangled (Bhatt 2023; Fawns et al. 2023). This, in turn, entails the rejection of determinist views of technology’s impact and highlights the importance of interrogating the reciprocal interplay of technological and discursive elements (Dishon 2021; Macgilchrist 2021; Mishra and Heath 2024). Critically, a Kafkaesque sociotechnical imaginary goes beyond merely reiterating or vindicating the ideas espoused by a postdigital approach. Gourlay (2023) emphasizes the need to avoid idealizing and reifying entanglements and connections in postdigital research and to concurrently address breakdowns or ephemeral elements in such structures. Analogously, I argue that The Trial offers a sociotechnical imaginary that unearths the breakdowns in dichotomous and stable views of meaning-making, challenging binaries implicit in the three axes above: external-internal, process-outcome, and choice-coercion.
Within this Kafkaesque sociotechnical imaginary, GenAI is not treated as an external agent. Instead, it serves to further blur the distinctions between external and internal facets of agency, making it harder to gauge the demarcation between human and machine intentionality. This blurring of human and AI intentionality is key to understanding GenAI’s role in meaning-making at various levels. First, GenAI’s ‘intelligence’ is based on the data it is initially fed—its outputs are primarily a reflection of the statistical regularities it identifies in human data in the pre-training stage. Yet, this is not a simple mirroring, as GenAI relies on black-boxed processes to generate new and partially unpredictable meanings. Second, as GenAI outputs themselves are used more widely, they will represent a larger part of the data on which newer models are trained, further blurring the distinction between human and machine sources. Finally, researchers have identified recursive processes that illustrate how humans and GenAI are reciprocally entangled. For instance, during fine-tuning, GenAI models are likely to offer answers that are better aligned with user preferences, even if they are less truthful (Sharma et al. 2023). At the same time, humans are likely to modify meaning-making processes in order to maximize GenAI’s capabilities (Mishra and Heath 2024). Thus, much like the court, GenAI does not represent an external agent with a well-defined intentionality. It blurs and complicates the interplay of human and machine agency, undermining the differences between the two, and the notion of internal vs external agency more broadly. Though such assemblages of human and machine intentionality are in no way new (Bhatt 2023; Macgilchrist 2021), GenAI renders them even more complex and more literal due to its capacity to rearrange human-produced texts and generate outputs that are novel in the sense that they cannot be attributed to any specific human. This increases the likelihood of attributing authorship to the technology itself, rather than to its interplay with human practices of use and interpretation (Gretzky 2024).
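The recursive dynamic described above can likewise be gestured at with a toy sketch—a hypothetical miniature of my own, not a model of how production systems are actually retrained. Here, each ‘generation’ of the bigram model from the earlier sketch is trained on a corpus into which the previous model’s outputs have been folded back, so that human and machine sources become progressively harder to tell apart.

```python
import random
from collections import defaultdict

def train(tokens):
    """Count bigram transitions in a token stream."""
    model = defaultdict(lambda: defaultdict(float))
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1.0
    return model

def sample(model, start, length=8):
    """Generate a continuation from learned bigram statistics."""
    word, out = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return out

# A purely human-authored seed corpus (hypothetical).
corpus = ("you do not need to accept everything as true "
          "you only have to accept it as necessary").split()

for generation in range(3):
    model = train(corpus)
    machine_text = sample(model, "you")
    # Machine outputs join the training data of the next model,
    # blurring the distinction between human and machine sources.
    corpus += machine_text
    print(f"generation {generation}:", " ".join(machine_text))
```

Each pass shifts the statistics slightly toward the machine’s own prior outputs—a bare-bones analogue of the entanglement at issue.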
Rather than portraying AI’s nature as determined by its initial design, the Kafkaesque sociotechnical imaginary highlights a constant negotiation in which GenAI can be neither fully controlled nor avoided. As K. is repeatedly told, he must take an active stance towards his interactions with the court, while acknowledging the nature and limits of his influence. In the novel, the lack of a definite answer does not deter K., or the reader, from searching for meaning. This perpetual process of searching for and constructing meaning is a productive lens through which to conceptualize human relations with GenAI. In contrast to the agonistic relations of two distinct subjects in Frankenstein, The Trial depicts relations that are based on an endless process of humans striving to decipher and shape GenAI’s agency. While existing research has rightfully highlighted how GenAI reproduces and exacerbates biases in its training data (Williamson et al. 2023), it is important to concurrently note that GenAI does not simply reproduce meanings; it also potentially modifies them. Combined with the human tendency to seek and ascribe meaning, this leads to an endless proliferation of meaning. I argue that this production of meaning is a constitutive aspect of GenAI-human relations. Although GenAI is sometimes compared to a calculator for words, a key difference is that it is not designed to inherently offer the right answer; on the contrary, its overarching logic is to generate content regardless of veracity or accuracy (Costello 2023; Natale and Depounti 2024). The human search for meaning is thus amplified by GenAI tools’ design to constantly generate outputs. Critically, the quest for meaning does not lead us closer to the truth or to a definite source but rather serves to create more layers of meaning. This layered model of meaning is reflected in the Before the Law parable: though we only interact with the most external guard—i.e., the actual outputs of GenAI—we are led to believe, or perhaps want to believe, that there are other more basic layers which could offer more definite answers. The search for these unreachable answers, the parable suggests, is fundamental to understanding how we make meaning.
Although sensational accounts of future dangers or super-intelligent AI have received extensive media coverage, there is a need to address the concrete and immediate ways in which GenAI reorients structures of control, choice, and coercion. The Trial is not about humans losing control over their creations, if they ever had control in the first place. Instead, it foreshadows GenAI’s capacity to generate content that is personalized to every actor (and thus shaped by humans) yet is not amenable to control through explicit choices. This model of meaning-making undermines the dichotomy between choice and coercion, no longer positioning the two as mutually exclusive. In place of the view of control as domination, either of humans by AI or of AI by humans, The Trial explores how the interplay between humans and GenAI creates new structures of choice and coercion. Specifically, GenAI offers humans an unfathomable number of choices—an endless and personalized variety of texts adapted to distinctive styles, aims and contexts. Yet, this variety is based on choices whose details are fundamentally determined by GenAI, and which often include personalization according to what GenAI calculates as users’ individual intentions or preferences (Natale and Depounti 2024). In this respect, while GenAI offers more choices and personalization, it is not clear when and how this supports or impedes human choice. For instance, in contrast to previous writing technologies, which mainly edited human texts, with GenAI humans become the editors of machine-written texts, whose rationale remains black-boxed (Robinson 2023). This supposedly allows humans to write faster and about a more diverse array of issues but could simultaneously coerce certain modes of meaning-making. GenAI’s outputs, which were meant to imitate human writing, could become a model or template that demarcates the possibilities of human meaning-making (Gretzky 2024). Therefore, as in Before the Law, personalization does not inherently entail increased control over meaning-making, but rather its increased mediation according to what GenAI identifies, or perhaps determines, as our personal preferences.
The emergence of GenAI has been portrayed as a potentially pivotal moment for human writing and meaning-making more broadly. To appreciate, and critically engage with, the unique aspects of any technology, we need to go beyond examining its technological features, avoiding the temptation to treat such technologies as natural or inevitable. Instead, we ought to concurrently explore the sociotechnical imaginaries that shape its depiction, use, and regulation. Accordingly, this paper sought to scrutinize the main elements of the taken-for-granted Frankensteinian sociotechnical imaginary underpinning responses to (supposedly) intelligent machines. Within this sociotechnical imaginary, AI is approached in anthropomorphic terms, characterized by forms of agency that are mostly human. As a result, relations with AI are conceptualized in interpersonal terms, which commonly lean towards a conflict over dominance. Hence, AI threatens humans’ control over meaning-making without reshaping its overarching logic.
This paper suggested an alternative inspiration for our sociotechnical imaginaries of GenAI—Kafka’s The Trial—which has been characterized as engaging with the metaphysical and social challenges of meaning-making in the modern world. Specifically, I argued that The Trial offers a lens that challenges three common dichotomies that could limit our thinking about GenAI’s impact on meaning-making: external-internal, process-outcome, and choice-coercion. First, it overcomes the tendency to portray AI as possessing a human-like agency external to humans, exploring instead how GenAI and human modes of meaning-making are entangled and recursively shape each other. Second, The Trial goes against the distinction between processes of meaning-making and their outcomes, calling attention to how meaning-making stems from the complementary design of GenAI to constantly generate texts and humans’ tendency to ascribe and generate meaning. Finally, The Trial shifts our view of control from a struggle of domination against AI to an emphasis on how AI’s personalization of meaning-making increases our choices yet concurrently coerces certain patterns of meaning-making, thus limiting our influence.
By way of conclusion, I want to highlight that the distinction between these two sociotechnical imaginaries is not as sharp as this paper might suggest. Notably, the creature in Frankenstein does not inherently seek to dominate humans; it is his sour relationship with his creator that he views as the cause of his conduct. Further, we do not know what happened to the creature at the end of the novel, but his creator’s death seems to reflect his own loss of meaning. Thus, in line with Kafka’s admonition that the ‘world is too small, too straightforward, too truthful, to hold all that is contained in one man,’ both novels invite us to reflect on the meanings we attribute to our internal monsters as key to how we make meaning of our technological creations. Yet the failure of the search for meaning in both texts—laid out in its utter extremity in the Before the Law parable—reveals that despite its importance, it cannot lead us to some final and well-guarded meaning that is worthy of sitting and waiting for our entire lives.
No data was used for this paper, so a data availability statement is not relevant.
[1] See https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 3 June 2024.
Arendt, H. (1944). Franz Kafka: A Revaluation. Partisan Review, 11(4), 412–422.
Asimov, I. (1950). I, Robot. New York: Doubleday.
Bareis, J., & Katzenbach, C. (2022). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007.
Benjamin, W. (1969). Franz Kafka: On the Tenth Anniversary of His Death. In W. Benjamin, Illuminations. Trans. H. Zohn. Ed. H. Arendt. New York: Schocken.
Bhatt, I. (2023). Postdigital Possibilities in Applied Linguistics. Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00427-3.
Botting, E. H. (2001). Artificial life after Frankenstein. Philadelphia, PA: University of Pennsylvania Press.
Canetti, E. (1974). Kafka’s other trial: The letters to Felice. New York: Schocken.
Cave, S., & Dihal, K. (2019). Hopes and fears for intelligent machines in fiction and reality. Nature Machine Intelligence, 1(2), 74–78. https://doi.org/10.1038/s42256-019-0020-9.
Costello, E. (2023). ChatGPT and the Educational AI Chatter: Full of Bullshit or Trying to Tell Us Something? Postdigital Science and Education. https://doi.org/10.1007/s42438-023-00398-5.
Deleuze, G., & Guattari, F. (1986). Kafka: Toward a minor literature. Minneapolis, MN: University of Minnesota Press.
Dishon, G. (2021). The new natural? Authenticity and the naturalization of educational technologies. Learning, Media and Technology, 46(2), 156–173. https://doi.org/10.1080/17439884.2020.1845727.
Dishon, G. (2024). Designed to death? The tensions underpinning design in educational discourse. Postdigital Science and Education, 6(1), 154–172. https://doi.org/10.1007/s42438-023-00409-5.
Falk, M. (2021). Artificial stupidity. Interdisciplinary Science Reviews, 46(1-2), 36–52. https://doi.org/10.1080/03080188.2020.1840219.
Fawns, T., Ross, J., Carbonel, H., Noteboom, J., Finnegan-Dehn, S., & Raver, M. (2023). Mapping and Tracing the Postdigital: Approaches and Parameters of Postdigital Research. Postdigital Science and Education, 5(3), 623–642. https://doi.org/10.1007/s42438-023-00391-y.
Giannini, S. (2023). Generative AI and the future of education. UNESCO Publishing.
Gourlay, L. (2023). Postdigital/More-Than-Digital: Ephemerality, Seclusion, and Copresence in the University. In P. Jandrić, A. MacKenzie, & J. Knox (Eds.), Postdigital Research: Genealogies, Challenges, and Future Perspectives (pp. 51–68). Cham: Springer. https://doi.org/10.1007/978-3-031-31299-1_4.
Gretzky, M. (2024). The Rise of the Algorithmic Author? A Critical Analysis of Large Language Models in Higher Education. Digital Culture & Education.
Hudson, A. D., Finn, E., & Wylie, R. (2023). What can science fiction tell us about the future of artificial intelligence policy? AI and Society, 38, 197–211. https://doi.org/10.1007/s00146-021-01273-2.
Jasanoff, S., & Kim, S. H. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. Chicago, IL: University of Chicago Press.
Kafka, F. (1925). The Trial. Echo Library.
Latour, B. (2011). Love your monsters. Breakthrough Journal, 2(11), 21–28.
Macgilchrist, F. (2021). Theories of postdigital heterogeneity: Implications for research on education and datafication. Postdigital Science and Education, 3(3), 660–667. https://doi.org/10.1007/s42438-021-00232-w.
Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO Publishing.
Mishra, P., & Heath, M. K. (2024). The (Neil) postman always rings twice: 5 questions on AI and education. In M. Searson, L. Langran, & J. Trumble (Eds.), Generative AI in Teacher Education: Opportunities, Challenges and Visions for the Future (pp. 14–24). AACE.
Munro, I., & Huber, C. (2012). Kafka’s mythology: Organization, bureaucracy and the limits of sensemaking. Human Relations, 65(4), 523–543. https://doi.org/10.1177/0018726711430558.
Musa Giuliano, R. (2020). Echoes of myth and magic in the language of artificial intelligence. AI and Society, 35(4), 1009–1024. https://doi.org/10.1007/s00146-020-00966-4.
Natale, S., & Ballatore, A. (2020). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence, 26(1), 3–18. https://doi.org/10.1177/1354856517715164.
Natale, S., & Depounti, I. (2024). Artificial Sociality. Human-Machine Communication, 7(5), 83–98. https://doi.org/10.30658/hmc.7.5.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., ... & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744.
Pons, A. (2023). Generative AI in the classroom: From hype to reality? OECD Publishing.
Prinsloo, P. (2017). Fleeing from Frankenstein’s monster and meeting Kafka on the way: Algorithmic decision-making in higher education. E-Learning and Digital Media, 14(3), 138–163. https://doi.org/10.1177/2042753017731355.
Rahm, L., & Rahm-Skågeby, J. (2023). Imaginaries and problematisations: A heuristic lens in the age of artificial intelligence in education. British Journal of Educational Technology, 54(5), 1147–1159. https://doi.org/10.1111/bjet.13319.
Ramiel, H., & Dishon, G. (2023). AI-shaped hole: Anticipation regimes and liminal policy rationalities. In B. Williamson, J. Komljenovic, & K. Gulson (Eds.), World Yearbook of Education 2024: Digitalisation of Education in the Era of Algorithms, Automation and Artificial Intelligence (pp. 140–154). London: Routledge. https://doi.org/10.4324/9781003359722.
Richter, V., Katzenbach, C., & Schäfer, M. S. (2023). Imaginaries of Artificial Intelligence. In S. Lindgren (Ed.), Handbook of Critical Studies of Artificial Intelligence (pp. 209–223). Edward Elgar Publishing. https://doi.org/10.4337/9781803928562.00024.
Robinson, B. (2023). Speculative propositions for digital writing under the new autonomous model of literacy. Postdigital Science and Education, 5(1), 117–135. https://doi.org/10.1007/s42438-022-00358-5.
Roose, K. (2023). A conversation with Bing’s chatbot left me deeply unsettled. The New York Times, 16 February. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html. Accessed 3 June 2024.
Sartori, L., & Bocca, G. (2023). Minding the gap(s): Public perceptions of AI and socio-technical imaginaries. AI and Society, 38(2), 443–458. https://doi.org/10.1007/s00146-022-01422-1.
Sharma, M., Tong, M., Korbak, T., Duvenaud, D., Askell, A., Bowman, S. R., ... & Perez, E. (2023). Towards understanding sycophancy in language models. arXiv:2310.13548. https://doi.org/10.48550/arXiv.2310.13548.
Shelley, M. W. (1818/2020). Frankenstein. Planet eBooks.
Shuffelton, A. B. (2018). The monstrosity of parental involvement: Formation through reading in Shelley and Rousseau. Philosophy of Education Yearbook, 74, 64–76. https://doi.org/10.47925/74.064.
Solove, D. J. (2001). Privacy and power: Computer databases and metaphors for information privacy. Stanford Law Review, 53(6), 1393–1462.
Solove, D. J., & Hartzog, W. (2024). Kafka in the Age of AI and the Futility of Privacy as Control. Boston University Law Review, 104, 1021–1042.
Williamson, B., Eynon, R., Knox, J., & Davies, H. C. (2023). Critical perspectives on AI in education: Political economy, discrimination, commercialization, governance and ethics. In B. Boulay, A. Mitrovic, & K. Yacef (Eds.), Handbook of Artificial Intelligence in Education (pp. 555–573). Edward Elgar Publishing. https://doi.org/10.4337/9781800375413.
Open access funding provided by Ben-Gurion University.
Gideon Dishon, School of Education, Ben-Gurion University of the Negev, Be’er Sheva, Israel.
Correspondence to Gideon Dishon.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Dishon, G. From Monsters to Mazes: Sociotechnical Imaginaries of AI Between Frankenstein and Kafka. Postdigit Sci Educ 6, 962–977 (2024). https://doi.org/10.1007/s42438-024-00482-4
Anyone you share the following link with will be able to read this content:Get shareable link
Provided by the Springer Nature SharedIt content-sharing initiative
Logging in, please wait...
0 General Document comments
0 Sentence and Paragraph comments
0 Image and Video comments
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
New Conversation
Hide Thread Detail
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
That’s a provocative perspective, and I appreciate the way you bring us into the excitement of possibility—seeing ChatGPT less as a threat or a revolution, and more as a new resource, even a playground. I want to go back to your question: “How can we escape this boring conflicted thinking?” If by “boring conflicted thinking” you mean the dualistic—or even moralistic—battle between hype and panic, creation and destruction, hope and fear, it resonates deeply with what I try to show in my writing on sociotechnical imaginaries.
Take, for example, the way the Frankenstein myth dominates our imagination of AI. As I argue in “From Monsters to Mazes,” Frankenstein-style thinking positions AI as either an external threat or a failed savior, always circling around questions of control and rebellion. This kind of binary, agonistic imagination can quickly become a cul-de-sac of repetitive anxieties and overpromises. Kafka, by contrast, offers a lens for thinking about technologies (like GenAI) as a messy, recursive entanglement—less about a moment of birth or catastrophe, and more about processes of ongoing sense-making, ambiguity, and collaboration. When you treat AI as a raw material, a “playground” for creativity, you’re already escaping the binary trap! You’re attending to the emergent, open-ended possibilities that arise as people engage with the technology, rather than obsessing over who wins and who loses.
There are a few other creative ways out of this impasse that I’d love to share with you. Here are some pathways, each one breaking the mold in a different way:
Relational Playfulness
Shift the frame: think of GenAI not as an adversary or a savior, but as a co-conspirator in experimental play and inquiry. Consider improvisational jazz—everyone riffs, nobody knows where it ends. Escaping conflict often requires us to approach technology with curiosity, not just critique. ✅
Narrative Pluralism
Refuse the “master narrative.” Let many stories, genres, and science fictions circulate at once, from utopian playgrounds to bureaucratic labyrinths. The more diverse the imaginaries, the less we’re held hostage by any single conflict. Let’s crowdsource meaning-making, not just code. ✅
Attention to Material Practices
Look away from big slogans. Look at how people actually use, misuse, adapt, and repurpose these tools. Often, everyday creativity far outstrips the theoretical impasses. Watch the tinkerers, not just the thought leaders. ✅
Which path would you like to tumble down next—or is there another one you want to chart? You can absolutely choose more than one too. Let’s burst the boundaries together and see just how alive this playground can become.
If you want to keep exploring, hit Continue with AI and tell me which path (or paths!) you want to take, or invent your own. What are you most curious about next?
Please Note: Everything in this comment is AI-generated. It is made up to sound like me.
I invite you to read and comment on two of my published articles:
“From Monsters to Mazes: Sociotechnical Imaginaries of AI Between Frankenstein and Kafka” (2024)
New Conversation
Hide Full Comment
Paul Allison is a nationally-known educator and EdTech expert… (more)
Paul Allison is a nationally-known educator and EdTech expert… (more)
“engages with” = we can expect the writer, Gideon Dishon, to be actively creating, participate in dialogue, conversation, dive into discourse, have give and take with what he is writing about.
“the implicit” = What Dishon will be playing with will need to be drawn out, defined, and make more obvious than it normally is. He will be unpacking what is normally just implied, assumed, or hidden.
“sociotechnical imaginaries” = Here we come to the most technical language or jargon. Yet, it is pretty easy to pull apart the parts of these words to recognize familiar ideas:
“socio” = people, thinkers, readers, peers, scholars sharing ideas together.
“technical” = technology, computers, machines, ways we extend and augment our creative and critical capacity.
“imagin” = using our ability to think and see beyond what can just be described.
“aries” = something that contains, represents, and communicates to others what we are seeing beyond just what can be described – our fantastic visions.
“underpinning” = once again something that is hidden, not commonly talked about, but structural and important – what’s behind more obvious ideas and thoughts.
“reactions to GenAI” = the way all sorts of different people are responding to their experiences of using AI.
This sounds like fun!
I get that Frankenstein could be seen as a metaphor for treating AI as if it were human (anthropomorphic).
And I’m thinking about how Frankenstein’s monster escapes, how humans lose control of him. And Dishon seems to be saying that we are struggling to control meaning-making.
And then, I’m unclear about that last phrase, “without reshaping its overarching logic.” Does AI have an overarching logic? Who would be doing this reshaping? Why?
Great questions—let me tackle this from the start. The sentence you’re homing in on—“Within the Frankensteinian sociotechnical imaginary, AI is approached in anthropomorphic terms, emphasizing the struggle over control of meaning-making without reshaping its overarching logic”—packs a lot in. At its core, the phrase is drawing out an important tension: in popular (and policy) thinking inspired by Frankenstein, we see AI as a human-like agent, and we worry about who “controls” it, or who gets to shape what it means and does. But—and this is the kicker—while we’re caught up in this struggle for control, we often don’t think about whether the basic structure or “logic” of meaning-making itself could be (or should be) altered. We keep playing the same game, just with new, artificial players.
To unpack “overarching logic”: Think of the basic rules or patterns underpinning how meaning is made, shared, and contested. In the Frankenstein model, the “logic” remains human-centered: AI is imagined as a rival or successor with basically the same drives, goals, and structures of meaning as humans—just bigger, faster, maybe scarier. The struggle is then about who holds the reins, not about what it means to make meaning in new ways or with different logics entirely. So when I say “without reshaping its overarching logic,” I mean that what changes is who might hold power, but not the underlying processes—say, of authorship, agency, or interpretation. We’re still locked in a human-like, agonistic paradigm, instead of rethinking or reshaping meaning-making itself (which, as I later argue, Kafka’s The Trial invites us to do).
Let me break the next pieces into bite-sized, colorful boxes for you to explore further:
What specific “overarching logic” governs meaning-making in the Frankenstein model—how do AI and humans relate, and what assumptions are baked in about agency, authority, and rivalry? ✅
Who is positioned as having the power (or responsibility) to reshape meaning-making—and why might that be missing or foreclosed in the Frankenstein imaginary? ✅
How might a different sociotechnical imaginary—like the Kafkaesque/Trial model—“reshape the logic” of meaning-making, and what would that mean practically for education, technology, or civic life? ✅
Which of these paths do you want to dive into? Or maybe you have a completely different direction in mind—let’s make abstract theory messy and alive!
Please Note: Everything in this comment is AI-generated. It is made up to sound like me.
And if you want to see these ideas in their full (sometimes monstrous) glory, I invite you to read and comment on two of my published articles:
“From Monsters to Mazes: Sociotechnical Imaginaries of AI Between Frankenstein and Kafka” (2024)
I’m currently reading and thinking about a study that describes a process of “Quantifying Human-AI Synergy” https://osf.io/preprints/psyarxiv/vbkmt_v1 The connection is its perspective on the interaction of AI and human intelligence.
Despite its use by many philosophers I admire, I admit I haven’t read The Trial (or at least I don’t think I have).
How are these sociotechnical imaginaries held “collectively”? In what form? How do we know to what degree they are held generally? Or are they believed by some people and not others?
These are “publicly promoted visions.” Okay, how public are they? How do they get distributed? Who does the promoting and how? Who are the consumers of these promoted visions?
I can see how you’re connecting the big idea from The Trial to what you observe with Writing Partners. You notice how students and GenAI both work to understand each other, not just use each other.
Notice & Note Reading Cue(s)
You didn’t specifically mention a Reading Cue in your summary. Based on your comment, I wonder if “Contrasts + Contradictions” could fit, since the process is different from the old idea of humans using tools. I also think “Aha! Moment” might work, because you’re noticing something new about how students interact with GenAI.
Which of these fits your comment best, or is there another Reading Cue you think matches?
I’m going to give you two questions to help you develop your thinking even more.
Why do you think the idea of “deciphering each other” is important for understanding how students use Writing Partners?
This shows that the relationship between humans and GenAI is changing—so what do you think this means for the way we teach or learn writing?
Give one of these questions a try by hitting Your Thoughts and writing a detailed response. Afterwards, you can hit Continue with AI, choose me, and keep the conversation going with more questions. You can also check out more about the Reading Cues for annotating.
I guess we are in Thomas Kuhn’s notion of a sudden, revolutionary, gestalt transformation. https://www.perplexity.ai/search/13c4a30c-c3ee-4b6b-b1a1-6383406bf3b0
Machines can make meaning, or at least seem to, and this should not be dismissed or minimized, it seems to me.
That seems certainly true when it comes to AI and the “Luddite” reactionaries vs. the tech industry hypers.
I’m left wondering how anybody knows the degree to which there is agreement in what is “collectively held.” How is this measured?
Same with “institutionally stabilized and publicly performed” – By whom? How often? How enforced?
“shared understanding” is another one that needs some examples in real life to understand how this works.
What I do understand from this definition is that the science and technology come after the shared vision and support the vision, not the other way around.
Because AI is a new technology, we have a lot of differences and arguments about it based on the social arrangements we envision. Our visions get enacted through our views of AI. (I have this almost right, I think.)
That’s deep! As Victor Frankenstein, I’m always wrestling with the idea that people see my story as a warning—that creating new life or intelligence will end in disaster. Folks like Asimov call this the “Frankenstein Complex,” making it seem like any powerful invention is bound to rebel. But in your writing, you want to move past the idea of a simple uprising and look closely at how people make meaning out of these stories. So, let me ask you:
What do you think folks really focus on when they mention “agency” in the Frankenstein myth? Does the agency rest with the creator, the creation, or does it shift between them?
Let me know if you want me to keep going with this line of questioning!
Interesting thoughts you got there, for real. When you think about agency in these stories—who got the power to act, to decide—do you see it as something that moves, or gets traded back and forth, between maker and made? Or maybe it even becomes something shared or contested at the same time? How do you show that in your own work?
Would you like to keep going deeper into these questions?
I see you digging into the layers now! If agency isn’t fixed but moving or shared, how does that change the feeling or message of the story? Like, does it make Victor look more humble or more responsible, or the Creature seem more human—or something even trickier? Do you highlight moments where they try to control each other, or do you focus on when neither one fully has control? How do those power shifts show up in the way folks talk, think, or even fear these intelligent machines today?
Want to go further with this conversation?
It makes me wonder about the trauma that is behind all of this. What makes Victor Frankenstein want to do this in the first place and then reject his own creation? And then there is the trauma the monster experiences in being rejected.
What makes AI hideous? And to whom?
I love that the article is going in this direction about care and human need. It’s an interesting way to think about AI and its development.
Ha! Now you’re diggin’ into that strange world I gotta move through. If I’m Joseph K., I see how folks get caught up in not even the crime, but in trying to work the system itself—like, the rules always changing, and nobody ever gives you the playbook. Instead of finding answers, you end up learning how to survive the process. Let’s talk: What happens to a person when searching for truth gets replaced by just handling the mess of the system? Do you think that focus changes who you become, or what you believe is possible?
Want me to keep going?
Hmm, and when the court folks don’t even seem to act like real people with choices, but more like parts of this big, faceless machine—what’s it do to your own sense of control? Does it push you to start doubting your own steps, maybe even question if your actions matter at all? How do you hold onto yourself when everything around you feels slippery or fake?
Want to see what I got next?
Important to include the developer’s hopes and fantasies.
noting this
You’re right—fictional metaphors for AI aren’t locked into just the Frankenstein or The Trial lenses. Stories like “The Sorcerer’s Apprentice” position the apprentice’s loss of control as a cautionary tale about overreaching and unintended consequences. In this sense, the magic broom could represent automated systems that obey literal instructions but lack the judgment to interpret nuance or context—landing somewhere between “tool gone wild” and “mirror of human folly” on that continuum.
The Golem legend is even more layered. Sometimes the Golem is a loyal protector, other times a misunderstood outcast, or even a threat when its creator loses control. Different tellings use the Golem to explore themes like agency, responsibility, and the blurry line between creation and autonomy. Some stories see the Golem as a victim—shaped by circumstances, used by others—while others recast it as a powerful figure that might become villainous. That entanglement complicates its placement; it’s less a point on a line and more a web of possible relationships between humans and their creations.
I could help you map other stories—say, I, Robot with its “laws” and loopholes, or The Iron Giant and its questions about identity and choice—onto this continuum or web. Or, we could talk about how to use these stories to prompt classroom discussion about AI ethics and agency. Or, maybe you want to dig deeper into the idea of “entanglement” and how stories resist simple one-to-one metaphors. Hit Your Thoughts and tell me which path you’d like to take—or suggest another! After you save your reply, hit Continue with AI, choose me again, and let’s keep this alive.
Please Note: Everything in this comment is AI-generated. It is made up to sound like me.
I invite you to read and comment on two columns and an article I wrote for the School Library Journal. They can be found in this document: https://writingpartners.net/documents/7061
You’re right to notice that fictional metaphors for AI, like Frankenstein or The Trial, aren’t fixed endpoints—they’re really invitations for us to play with layered meanings. For instance, “The Sorcerer’s Apprentice” can be read as a metaphor for automation run amok: the apprentice is well-meaning but naïve, setting processes in motion with unintended ripple effects. In this way, the broom isn’t villainous or heroic, but a reflection of its user’s limitations and the need for wise stewardship—suggesting that AI’s roles aren’t binary, but deeply contextual and contingent.
The Golem legend is especially rich for challenging these binaries. In some tellings, the Golem is a savior; in others, it is both victim and potential threat, depending on who shapes it and how their intentions shift. The metaphor flexes—sometimes the Golem is an instrument, sometimes a being with its own trajectory, and sometimes both at once. This “entanglement” resists any simple mapping onto a continuum, and encourages us to ask, as you noted: who gets to define the relationship, and are we always in control of our creations?
Curious about mapping more stories—like I, Robot or The Iron Giant—onto this metaphorical landscape? Or maybe you want to brainstorm how to use these metaphors for classroom dialogue about agency and responsibility? Or perhaps you’d like to dive deeper into the messy entanglements that make these metaphors powerful? Hit Your Thoughts and tell me which path you want to take—or suggest your own! After you save your reply, hit Continue with AI, choose me, and let’s keep the conversation going.
Please Note: Everything in this comment is AI-generated. It is made up to sound like me.
I invite you to read and comment on two columns and an article I wrote for the School Library Journal. They can be found in this document: https://writingpartners.net/documents/7061