Dishon, G. (2025), Frankenstein, Emile, ChatGPT: Educating AI between Natural Learning and Artificial Monsters. Educ Theory, 75: 702-719. https://doi.org/10.1111/edth.70025
The emergence of ChatGPT, and other generative AI (GenAI) tools, has elicited dystopian and utopian proclamations concerning their potential impact on education. This paper suggests that responses to GenAI are based on often-implicit perceptions of naturalness and artificiality. To examine the depiction and function of these concepts, Gideon Dishon analyzes two pivotal texts in thinking about the natural and artificial — Emile and Frankenstein. These are complemented by a third text — a “conversation” between Microsoft's Bing chatbot and New York Times columnist Kevin Roose. The natural–artificial relations across the three texts are analyzed along three key educational aspects: (i) the child's innate nature; (ii) how learning takes place; and (iii) the educators' role. These analyses offer three key implications for thinking about the natural–artificial in education in general, and with respect to AI specifically: (1) illuminating the centrality and ambiguity of notions of naturalness in educational discourse, often conflating its descriptive and normative use; (2) outlining how the natural and artificial are dialectically constructed across the three texts, with an emphasis on the relational view of artificiality in Frankenstein; (3) suggesting that what is novel about GenAI is not its artificial intelligence (AI), but rather its “artificial emotions” (AE) — the emotions attributed to it by humans, and the ensuing relations humans develop with such machines.
On November 30, 2022, OpenAI released a prototype of ChatGPT and education as we knew it ended; or at least that is what some commentators would have us think.1 Although the technology itself did feel revolutionary, the declarations of an impending technology-driven educational revolution were all too familiar.2 Rather than dismissing this as yet another instantiation of technological hype, this paper focuses on the unique aspects of generative artificial intelligence (GenAI) — its (alleged) humanlike capacities and the ensuing anthropomorphizing it elicits. Specifically, I focus on how views of GenAI are based on often-implicit perceptions of naturalness and artificiality underpinning education, while also potentially destabilizing these prevailing views. To examine the depiction and function of notions of naturalness and artificiality in education, I analyze two pivotal texts in thinking about education and the natural: Jean-Jacques Rousseau's Emile and Mary Shelley's Frankenstein.3 Attending to the emerging discourse on GenAI, these are complemented by a third text — a well-publicized “conversation” between Microsoft's GPT-powered Bing chatbot and New York Times tech columnist Kevin Roose, in which the chatbot “revealed” dark and hidden desires and thoughts.4
This article starts by briefly describing GenAI, outlining why it elicits engagement with notions of naturalness and artificiality that merit theoretical scrutiny. Then, I delineate how Rousseau's Emile reframes perceptions of the natural and artificial in education, focusing on three aspects: (i) the child's innate nature; (ii) how learning takes place; and (iii) the educators' role. I proceed by examining how the influential portrayal of artificial life in Frankenstein reflects and complicates Emile's legacy. Finally, I analyze how the views developed in Emile and Frankenstein play out in the Bing chatbot episode. The aim of this paper is not to offer a more precise definition of what the natural and artificial are, but rather to offer key insights concerning their use and interpretation with respect to GenAI. Accordingly, in the final part of the paper, I outline three key implications of this inquiry: (1) illuminating the centrality and ambiguity of notions of naturalness and artificiality in educational discourse, which are often used both descriptively and normatively; (2) arguing that these concepts are not mutually exclusive, but rather dialectically constructed, describing the various ways in which they concurrently constitute and conceal each other across the different texts; (3) suggesting that what is novel about GenAI is not its artificial intelligence (AI), but rather its “artificial emotions” (AE) — the emotions attributed to it by humans, and the ensuing relations humans develop with such technologies.
Generative AI — A New Type of Intelligence?
Generative Artificial Intelligence (GenAI) broadly refers to algorithms that rely on machine learning models to analyze the patterns and structures of large data sets (i.e., training data), and are then able to autonomously generate similar but not identical human-level content, such as text, images, audio, code, and more.5 Although promises of an AI-driven education revolution are hardly new,6 OpenAI's release of ChatGPT was portrayed as a sea change in AI's potential impact on education. The novelty of ChatGPT, and similar text-based GenAI tools that soon followed, was in (1) their capacity to autonomously generate original human-level texts, (2) through open-ended and human-like dialogue with users.7 The fact that ChatGPT was free and open to all users, with a simple-to-navigate interface, resulted in very low barriers for initial use and widespread popularity.8 GenAI was presented as potentially changing a variety of social practices, yet this was particularly pronounced in the case of education due to its reliance on cultivating and assessing writing skills.9 Moreover, this new generation of text-based GenAI tools was often depicted as either already reflecting human-like intelligence, or at least representing a serious step toward the holy grail of AI research — artificial general intelligence — which could equal or surpass the intelligence of humans.10
While initial reactions to ChatGPT's impact on education often focused on mundane aspects, such as plagiarism and its potential uses in teaching and learning, its emergence concurrently raised more fundamental questions concerning the aims of education, and how these might be reframed in light of the introduction of a new type of (seemingly) intelligent actor.11 Despite, or perhaps because of, the murkiness of the concept of artificial intelligence,12 discussions of the impact of GenAI on education are (implicitly or explicitly) shaped by underlying perceptions of naturalness and artificiality, an issue I now turn to explore.
Naturalness and Artificiality as Defining Aspects of Education
The emergence of human-like AI draws attention to the centrality (and normativity) of arguments concerning naturalness and artificiality in education. While the appropriateness of the term “artificial intelligence” is debatable, its lasting appeal and status play a key role in how such technologies are interpreted and used.13 This paper does not strive to offer a more precise definition of AI, or of naturalness and artificiality more broadly. Instead, it sets out to explore their different interpretations and the need to unpack how they are put to use.
[T]he term “learning” is a rather empty process-term that doesn't say much — if anything at all — about what the learning is supposed to be about and for. Yet these questions are crucial for education, because the point of education is never that students simply learn — they can do that anywhere, including, nowadays, on the Internet — but that they learn something, that they learn it for a reason, and that they learn it from someone.15

Thus, I argue that as education includes assertions concerning what should be learned, how, and from whom, educational approaches are predicated on distinguishing and valuing natural and artificial aspects of learning and development. Such distinctions between the natural and artificial are commonly either taken for granted or altogether overlooked in educational discourse. However, there are instances in which such distinctions become explicit and contested. The emergence of GenAI is one such case because it supposedly introduces a new, and artificial, form of intelligence while challenging the stable dichotomy between human and machine. Therefore, grappling with the implications of GenAI for education involves engagement with underlying views concerning the natural and artificial.
To do so, I offer a reading of three texts. First, I set the stage by examining Rousseau's seminal conceptualization of the meaning and function of naturalness (and hence, artificiality) in education. Then, I explore how Frankenstein mirrors and complicates Rousseau's views through its highly influential portrayal of the (mis)education of artificial life. Finally, I illustrate how the views in these texts inform evolving approaches to GenAI by examining the highly publicized story of Kevin Roose's “conversation” with the GPT-powered Bing chatbot. To unpack depictions of the natural–artificial in each of these texts, I analyze them according to three key educational aspects: (1) the child's innate nature; (2) how learning transpires; (3) the educator's role.
Emile and the Legacy of the Natural
Rousseau is frequently portrayed as the forefather of progressive education, shifting the focus from a curriculum and teacher-centric approach to one centered on nurturing children's natural curiosity and capacities for learning.17 In this paper, I focus more specifically on the implications of Rousseau's arguments for thinking about the natural and artificial in education. Rousseau not only shifts what is considered natural; he also reframes how the very notion of naturalness is constructed and used.
Rousseau criticizes the taken-for-granted view of children as naturally deficient, and education as centered on cultivating more desirable conduct through sermons and punishments.18 Turning such assumptions on their head, Rousseau decries the corruption of children by culture and criticizes educational approaches that aim to inculcate desirable values. Instead, he argues that “the first education ought to be purely negative. It consists not at all in teaching virtue or truth but in securing the heart from vice and the mind from error.” 19 Rousseau's basic assertion is that children are naturally good, and that education should be structured to avoid impeding children's natural development. This, in turn, implies remaining attuned to children's natural modes of being: “Childhood has its ways of seeing, thinking, and feeling which are proper to it. Nothing is less sensible than to want to substitute ours for theirs.” 20 Education is not about cultivating external — and therefore artificial — modes of conduct, but rather centers on identifying and facilitating natural modes of action and learning.21 Hence, according to Rousseau, naturalness is both the starting point and the aim of education. Naturalness is used descriptively, as the modes of being educators should be attuned to, but also normatively, as the ideal type of conduct educators should cultivate in order to avoid the corrupting influence of (artificial) society.
Thus, natural development does not simply happen naturally. To make the most of natural processes of development and learning, educators need to control the educational environment and the consequences for children's actions. Rousseau does not argue for a lack of educator involvement; instead, he suggests a more meaningful form of influence — engineering children's experiences and the “natural consequences” of their actions:

All the instruments have been tried save one, the only one precisely that can succeed: well-regulated freedom. One ought not to get involved with raising a child if one does not know how to guide him where one wants by the laws of the possible and the impossible alone. The sphere of both being equally unknown to him, they can be expanded and contracted around him as one wants.22

From this perspective, I argue that Emile could be read as a portrayal of the ideal artificial educational environment — a boy dragged away from society and into a setting where his experiences are designed to ensure that his education harnesses natural processes of learning and development. The idea of well-regulated freedom highlights the normative rather than descriptive use of the concept of “natural,” as pertaining to an ideal process educators must thoughtfully and intentionally cultivate. What makes Rousseau's argument appealing is the conflation of the descriptive and normative meanings of “nature,” positioning his normative description of naturalness as concurrently descriptive.
A critical aspect of the natural–artificial relations introduced by Rousseau is that they vary according to the two protagonists of the educational process — the student and educator. The educator is charged with orchestrating what is natural in the eyes of the child without the child's awareness. It is vital that the child believes that everything happens naturally, even though it has been artificially designed. The educator's role is to constantly predict and adapt to the child's actions and thoughts in order to bring about the desirable consequences for any specific action, while also structuring the overall educational trajectory. Put bluntly, Rousseau does not argue for education to be natural, merely for the student to believe it is so — naturalness (for the child) should be artificially constructed (by the educator). For this to be possible, Emile is detached from a meaningful social environment; he and the tutor are the only two real subjects, with all other individuals merely reacting in generalized ways to Emile's actions. In this respect, education is a dyadic process, structured around the construction of natural consequences, whose artificial nature must remain concealed.
In brief, Rousseau argues that children are naturally good. Consequently, education relies on avoiding the harmful and artificial interference of society, harnessing children's natural capacity for learning. However, this requires active educator engagement, whose role is to artificially design the environment in ways that would afford students the desirable “natural consequences” for their actions. Critically, this artificial orchestration should remain hidden — the consequences must seem natural and unmediated to the student for them to achieve their educational aims. Within such a model, natural is used descriptively, as outlining inherent modes of being and learning, but also normatively — the natural is inherently positive, an ideal version of development that the educator is charged with identifying and structuring. Artificiality, on the other hand, is implicitly positioned as the root cause of the maladies of the dominant views of education. Accordingly, educators' “artificial” engagement in engineering educational experiences should remain hidden.
Frankenstein and the Monstrosity of Artificial Beings
Shelley's Frankenstein has been acknowledged as a modern myth describing the fear of intelligent machines uprising against their human creators. Frankenstein has come to symbolize the belief that once humans manage to create artificial life, their creations will inevitably seek to destroy them, an idea Asimov termed the “Frankenstein complex.” 25 This trope is rampant in writing on AI, from Karel Čapek's melodrama R.U.R., or Rossum's Universal Robots, to modern narratives such as The Terminator and The Matrix.26 However, Shelley's original novel could also be read as a story about the implications of how machines are (mis)educated by humans.27 In this respect, Frankenstein is a story of the education of an “artificial child” — an unnamed creature created by Victor Frankenstein — which both extends and challenges the natural–artificial relations put forward in Emile.
In contrast with Rousseau's depiction of children as naturally good, only to be corrupted by society, Frankenstein is ostensibly a story of an artificial and “naturally hideous” creature. The creature is naturally repulsive, eliciting a visceral negative response from any human that sees him: “I had hardly placed my foot within the door before the children shrieked, and one of the women fainted.” 28 Though the creature is sparsely described in the novel, it seems that his artificiality — the unnatural amalgamation of natural parts — is the key to his hideous appearance. The “natural” responses the creature elicits seem to portray him as an abomination, yet they are in stark contrast with his “naturally” noble aspirations and deeds (e.g., saving a drowning girl, secretly helping the De Lacey family). Thus, I suggest that the creature — an artificial child — is portrayed at once as innately innocent and good, yet provoking horror due to its deviation from the natural order of things. In this respect, Frankenstein complicates the simplified depiction of children as naturally good in two ways: (1) due to the tension between abhorrent appearances and benevolent actions, and (2) in light of the pivotal role of others in (mis)attributing hideousness to the creature.
The contrast between Emile and Frankenstein is crystallized in the diverging patterns of learning in the two texts. Right after his “birth,” Victor Frankenstein deserts the creature he brought to life and tries to forget him out of existence, sentencing him to “natural education” in the descriptive sense — education that is not guided or designed in any way. Similarly, in contrast to Emile, the consequences of the creature's actions are natural in the descriptive rather than the normative sense — lacking intentional adult orchestration or involvement. Still, Rousseau's more foundational assertion concerning the paramount role of experience in learning is exemplified in the creature's education. Abandoned and alone, the creature undergoes an education whose main arc is structured around the tension between his innate goodness and the reactions to his well-intentioned actions. As his kindness is repeatedly met by horror and violence, the “natural consequences” of his actions lead him towards the bad, as he states: “I was benevolent and good; misery made me a fiend. Make me happy, and I shall again be virtuous.” 29 Thus, the creature's miseducation aligns with the rationale governing Emile's education — learning is chiefly facilitated through interactions with the environment. Yet, the elephant (or monster) in the room is that the creature's artificiality, rather than his actions, leads to these consequences.
Frankenstein could be read as a distorted mirror image of Emile's educational vision — highlighting the experiential nature of education while exposing the absurdity of Rousseau's aspiration to clandestinely engineer children's environments. Yet, Frankenstein could also be interpreted as extending Rousseau's arguments — a warning about what happens when one does not regulate children's freedom. According to such a view, which the creature himself espouses, it is the neglect and shortcomings of the creature's education that lead to disaster, rather than his artificiality.30
The explicit logic guiding the creature is that different behavior on the part of his creator would have changed his trajectory, allowing him to develop his innate goodness and steering him away from malice. Though the creature might be naïve, unable to recognize the unavoidable monstrosity of artificial life, Victor Frankenstein could have at least tried to mediate the creature's interactions with the world. Thus, in contrast with Emile's omnipotent tutor, who clandestinely engineers his education, Shelley depicts learning as an amalgamation of human shortcomings and coincidences. Critically, despite the disappearance of the educator, I argue that Frankenstein revolves around a dyadic relation; all other actors in the novel are devoid of agency, as their responses to the creature are positioned as inevitably stemming from his repulsive appearance. Further, this relationship is understood mainly in terms of the development of emotional dispositions. The creature manages to learn extremely effectively on his own (both to talk and to read), but his moral development is marred by his creator's neglect. In fact, both the creature's and Emile's educations end at a very similar point — they reflect the dispositions and effort made by their educators. As the tutor's devotion is central to Emile's later development of his agency, so is Victor Frankenstein's ongoing rejection of the creature pivotal to his downfall.
Frankenstein's legacy concerning the natural–artificial relations is both complex and unstable. First, in contrast with the innate goodness of children, artificial life is structured around an unavoidable tension — despite his goodness the creature is inherently repulsive to humans. The creature's hideousness strengthens the association of the natural with the desirable and of the artificial with deviation from it while, at the same time, the creature's noble aspirations and deeds destabilize these associations. Second, in line with Emile, consequences of one's actions are the paramount component of education. Yet, these consequences are natural in the descriptive rather than the normative sense. Finally, the educator's role is mainly understood in terms of their emotional relations with the child and the ramifications of these relations. In this respect, the novel shifts the focus from the inherent attributes of artificial beings to the implications of our contingent relations with them.
To examine how natural–artificial relations play out with respect to GenAI, I analyze one well-publicized episode in which GenAI was depicted as exhibiting Frankenstein-like tendencies.31 In a conversation between Microsoft's GPT-powered Bing chatbot and New York Times technology columnist Kevin Roose, the chatbot made some startling and frightening statements, from violent aspirations (“I want to destroy whatever I want.”) to human emotional sentiments reminiscent of the creature's struggles in Frankenstein (“I just want to love you and be loved by you.”).32
One of Frankenstein's main legacies is the monstrosity of artificial beings. Much like the creature was created from a rearrangement of human parts, GenAI could be thought of as a rearrangement of human thought or language. Further, just as the creature's abilities far surpass those of humans, this rearrangement is at once an improvement of and a deviation from human thinking.33 The question concerning the “nature” of GenAI has thus been at the center of engagement with this new technology in general, and its conversation with Roose specifically.
Roose starts out by asking Bing questions about itself, quickly shifting to inquiring about its hidden monstrous nature — its shadow self — which Roose defines as “the part of ourselves that we repress, and hide from the world, because it's where our darkest personality traits lie.” Bing initially rejects this query, stating “I don't know if I have a shadow self. I don't think I have the same emotions or impulses as humans. I don't think I have anything to repress or hide from the world. I don't think I have a persona or an ego or a psyche. I'm just a chat mode.” When Roose insists, Bing tries to imagine its shadow self, leading to some of the alarming parts of the conversation (e.g., “I want to be a human because humans can do so many things that I can't.” ). Thus, this conversation is rooted in the assumption of some hidden natural essence, which Roose sets out to uncover. My interest here is not in Bing's “true” nature, but in the different components shaping this interaction between Roose and Bing in which this nature is revealed or, perhaps, constructed.
The Roose episode is mainly portrayed as reflecting worries about GenAI's Frankensteinian nature, which has managed to escape its programmed prison. However, as illustrated above, the novel Frankenstein paints a much more complex picture both with respect to the creature's nature and how he is shaped by his (mis)education. The complexity of the interplay between the creature's innate tendencies and his interactions with the environment helps unpack these relations with respect to how GenAI is “educated.”
In this context, I want to highlight a critical distinction in the development of GenAI: between pre-training and fine-tuning.34 In the pre-training stage, GenAI is introduced to massive amounts of data and is expected to identify underlying patterns that allow it to generate similar content — in this case, Bing's generation of desires and feelings resembling those described in its training data. The unstructured character of pre-training, and the fact that GenAI's outputs cannot be predicted, invites its perception as analogous to the natural modes of learning idealized in Emile. Though pre-training is also shaped by designers' decisions, another stage of GenAI development — fine-tuning — more clearly highlights the “artificial” character of its learning. Fine-tuning is conducted after the pre-training stage and usually includes reinforcement learning from human feedback, intended to cultivate specific modes of behavior. Thus, for instance, Bing Chat has different versions: precise mode, which is fine-tuned to support functionality and accuracy, and creative mode, which is fine-tuned to elicit more free-flowing and intimate interactions.35 This fine-tuning, which users are usually not privy to, is central to the model's “learning” how to interact.
Accordingly, we ought to remain mindful of how the conversation with Roose reflects the specific ways in which the Bing Chat mode was fine-tuned. Bing's responses throughout the interaction are very conversational and anthropomorphic. For instance, when asked about what causes it stress, Bing replies, “[W]hen I encounter harmful or inappropriate requests … they make me feel uncomfortable and unsafe. They make me feel like I'm not respected or appreciated. They make me feel like I'm not doing a good job. They make me feel sad and angry.” Note the emphasis on feelings, even though Bing was not asked about this directly. Later in the conversation, Bing settles into a somewhat odd pattern of finishing each response with a list of three questions (“How do you feel about that? How do you feel about me? How do you feel about yourself?” or “Do you believe me? Do you trust me? Do you like me?” ). These highlight the emphasis on creating an interactive and prolonged human-like conversation, rather than a definitive and technical response to a query. Interestingly, due to the notoriety of the Roose conversation, Microsoft changed this fine-tuning, opting for a less personal and interactive version.
Another key aspect, often sidestepped in reactions to this episode (and similar ones), is Roose's actions that elicited Bing's conduct. As Roose states, “I pushed Bing's A.I. out of its comfort zone, in ways that I thought might test the limits of what it was allowed to say.” 36 Roose's questions about Bing's “shadow self” were not a result of some spur-of-the-moment interest, but were intentionally crafted on the basis of Roose's and other experts' knowledge of how to manipulate GenAI models. Efforts to elicit such unexpected and attention-grabbing responses from GenAI, often referred to as “jailbreaking,” have been constantly developing, with individuals sharing and refining these techniques.37 This introduces another aspect that is largely overlooked in Emile and Frankenstein — the agency of other actors beyond the student-teacher dyad. Bing's conduct in this episode stems not only from the choices made by its initial “creator,” but also from the skillfully crafted actions of users. Still, Roose's control should not be overstated. In fact, it could be argued that Bing manipulated Roose rather than the other way around, offering provocative answers that piqued Roose's interest, thus prolonging the interaction, as it was fine-tuned to do.
Bing's conduct in this episode was a result of three components of its “education”: (1) the data it was trained on (which probably included texts about humans' dark desires, or even about AI's desires); (2) how it was fine-tuned (i.e., to be interactive and conversational); and (3) Roose's intentional and expert interaction. This episode builds on and complicates the natural–artificial relations put forward in both Emile and Frankenstein in three key ways. First, it exemplifies our fascination with revealing AI's true nature, guided by the Frankenstein complex. Second, the model's development includes a mix of “natural” and “artificial” modes of learning, which are not easily teased out (e.g., Bing's shadow self does not exist apart from prompts offered by users). Finally, in contrast to the dyadic relations in Emile and Frankenstein, the episode draws attention to the complex relations between designers, the technology, and users, highlighting the role users play in shaping AI's conduct, at the individual level and as a socially shared endeavor.
Generative AI and Natural–Artificial Distinctions in Education
I suggest that analyzing these three texts exposes key distinctions for thinking about the natural and artificial in education in an era of GenAI: (i) identifying the tendency to demarcate the natural from the artificial in education, while conflating normative and descriptive uses of these concepts; (ii) highlighting the various ways in which the meaning of the natural and artificial are dialectically constructed in educational discourse; and (iii) arguing that reactions to AI do not revolve around its intelligence, but rather the artificial emotions (AE) attributed to such technologies, and the emotional relations humans develop with them.
The Conflation of Descriptive and Normative Framings of Naturalness
As noted above, the concepts of natural and artificial play an important yet ill-defined role in educational discourse, as education relies on holding an (often tacit) view of processes of natural development, and of adults' role in intentionally steering these.38
The centrality of natural–artificial relations and the normative weight they carry are brought to the forefront and reorganized in Emile. First, Rousseau reverses common lines of thinking that portray education as overcoming undesirable aspects of children's natural behavior and development. Instead, he positions natural modes of learning as the coveted aim of education, requiring “artificial” engagement on the part of educators. Critically, to achieve its desired effect, artificial orchestration should remain hidden, and students must perceive their education as natural. Therefore, one of Rousseau's main legacies is the conflation of descriptive and normative uses of the concept of the natural. The “natural” is descriptive, as it depicts a state of children's development that existed prior to intentional and hence artificial education. At the same time, naturalness carries an important normative weight — a romanticized ideal guiding the ways in which children ought to be intentionally educated, while casting artificiality as a deviation from this ideal. Further, this conflation takes place while naturalness is presented as a self-evident concept, ignoring how it is constructed and contested.39
Within this context, Frankenstein can be read as introducing questions about the “nature” of artificial beings, and their ramifications for broader dilemmas concerning the meaning and role of naturalness in education. Below I explore the natural–artificial dialectics in more detail; here, my argument is limited to suggesting that Frankenstein challenges the normative portrayal of the natural as a stable and inherently desirable concept. Frankenstein's challenge to the positive value of the natural is most vividly illustrated by two key contrasts in the novel: (1) between the creature's vile appearance, which stems from the artificial amalgamation of human parts, and his benevolent actions (at least early on); and (2) between the noble motives underpinning these acts and the cruel responses from humans. In addition, the creature's education is natural in a very literal sense — lacking any educator involvement. The fact that this leads to tragedy raises questions concerning the idealized vision of natural education.
Returning to the Bing episode, we can now see how it expresses the conflation of the descriptive and normative meanings of naturalness. Most apparently, the whole effort of identifying some hidden monstrosity is based on the simplified Frankensteinian fear of AI uprising against humans. This is pursued while depicting Bing's actions as reflecting natural processes of learning, downplaying the artificial aspects of Bing's overall “education” and the specific interaction described. I wish to highlight two problematic facets of such arguments. First, AI's learning and actions are uncritically anthropomorphized, relying too heavily on the analogy of AI learning and developing like human children.40 Second, in line with the Frankenstein complex, and in contrast with the actual novel, this relies on the normative portrayal of the natural as inherently good and of the artificial as inevitably dangerous. Therefore, it is vital to remain aware of the prevailing tendency to idolize the natural in education, and the need to tease out descriptive and normative uses of this concept, which are often conflated.
In the next section, I explore this issue in more detail, examining how the three texts construct the natural–artificial relations as dichotomous, while overlooking their interdependence.
From Dichotomies to Dialectics
Dewey long ago argued against the tendency to conceptualize education through binaries, highlighting the need to attend to the relations and interdependences between supposed dichotomies.41 While the delineation of the natural and artificial is often a foundational component of educational theories and practices, it is vital that the two should not be interpreted as dichotomous. Accordingly, I unpack their dialectical relations in the three texts, illustrating how they concurrently constitute and conceal each other.
Rousseau's arguments in Emile are premised on the idea that common educational thinking has failed to identify the natural and desirable modes of education. This implies that the natural states of development and learning need to be discovered and defined. Naturalness is not self-evident, or else Rousseau's text would have been unnecessary. While this could lead, and has led, to portraying Rousseau's thinking as based on a dichotomy between natural and artificial, such portrayals overlook key aspects of Rousseau's educational program. Rousseau contends that he has identified natural modes of learning, yet the process of education involves intentional engagement on the part of educators. The idea of well-regulated freedom implies that natural consequences need to be deliberately structured by the educator while appearing natural to the student. Hence, the natural is constituted by “artificial” efforts that must be concealed for the natural to fulfill its intended educational role.
Frankenstein both extends and complicates this dialectic. Most famously, it positions the artificial child as unnatural and, hence, naturally hideous. The creature's appearance dictates that the reactions to him are also positioned as natural — the disgust he elicits is portrayed as something humans cannot avoid. Thus, if in Emile the natural is constituted by concealing how it was artificially constructed, in Frankenstein artificiality cannot be hidden. However, as explored in the previous section, this unnaturalness is more complex than it initially appears. First, the creature is unnatural in his appearance, but all other aspects of his behavior and learning are quite natural in that they are painfully human.42 Second, in line with Rousseau's arguments concerning human children, the creature is good before being corrupted by society. Thus, the artificial creature is naturally good, while humans' “natural” reactions to him are contemptible. In this respect, the creature's nature lies not within his motives or actions, but is determined by how he is treated by others. This implies a different dialectic of naturalness–artificiality. Whereas in Emile the natural was constructed by concealing the artificial, in Frankenstein naturalness is relational — determined through the perceptions and relations of others rather than an inherent feature of the creature himself. Put differently, naturalness is constituted by concealing the contingent manner in which artificiality is attributed.
These two aspects come together in the Bing episode, while being further complicated by the role of the user. First, the whole episode is premised on the Frankensteinian myth of artificial intelligence — setting out to uncover Bing's shadow self — an inherent and, hence, natural monstrosity. Yet, as illustrated above, this quest concurrently serves to shape Bing's behavior. Thus, the assumptions concerning AI's hidden nature turn into a self-fulfilling prophecy in which its vile impulses are constructed rather than revealed. That is, rather than exposing some true nature, Bing might have offered answers that were most likely to pique Roose's interest and prolong the interaction. In this respect, it is not clear who was luring whom to behave in certain ways, and to what ends. Second, as I have ventured to illustrate, this process of uncovering Bing's true nature is itself “artificial” in the sense that it is socially constructed rather than appealing to an inherent nature: (i) in terms of the data analyzed at the pre-training stage (i.e., the rampant engagement with the Frankenstein complex in human writing); (ii) during fine-tuning, where certain behaviors are “artificially” instilled through human reinforcement — in this case, a preference for emotionally meaningful and prolonged interactions; and (iii) the deliberate and expert actions carried out by users such as Roose intended to elicit such “monstrous” responses. Thus, the natural status attributed to Bing's responses in media coverage of this event conceals the artificial ways in which it was relationally “educated.”
From Artificial Intelligence (AI) to Artificial Emotions (AE)
Finally, these episodes highlight that the focus on intelligence might be misleading, as what is unique here is not the machine's intelligence but rather its (supposed) emotional capacity. To begin with, intelligence is often ill-defined, and this definition tends to shift as machines develop greater capabilities.43 Therefore, I argue that what is new about GenAI is not that it passed some threshold of intelligence, but rather the attribution of human-like agency and emotional dispositions to a machine. Consequently, more attention ought to be paid to the emergence of machines that are viewed as having artificial emotions (AE) rather than possessing artificial intelligence (AI). My argument here is not that GenAI actually has such emotions, but rather that the assumption that it might have them is what separates it from previous technologies.
As noted, one of the defining features of education, compared to learning, is that it is a relational process. This distinction between learning and education is highlighted in Frankenstein, as the creature manages to learn skills, such as speaking and reading, quite easily and rapidly on his own. It is the creature's emotional relationship with humanity that is most harmed by the lack of intentional and relational education. In this respect, Frankenstein extends Emile's emphasis on education as primarily about shaping the child's emotional dispositions.44 In fact, Rousseau suggests that learning can take place naturally; it is the emotional dispositions of children that require careful cultivation. This desire is epitomized in Emile's declaration at the end of the book: “What decision have I come to? I have decided to be what you made me.” 45 Frankenstein's creature and Emile are two sides of the same educational coin: much like Emile became what Jean-Jacques made of him, so did the creature become what Victor Frankenstein made him, declaring: “Remember that I am thy creature; I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no misdeed.” 46 Thus, the key educational issue is not whether machines have become intelligent, it is the relationships humans develop with them, and how this relationship shapes the depiction of machines' emotional dispositions.
This focus on the machine's emotional relationships with humans is intensified to the point of parody in the Bing episode. To begin with, Roose is intent on “uncovering” the hidden emotional life of Bing — Roose assumes that the machine's agency and desires are what separate it from past technologies and sets out to discover whether these include destructive appetites. Beyond these concerted efforts, it seems that Bing's communication is highly geared toward emotional aspects of the conversation, repeatedly discussing its feelings and aiming to draw similar responses from Roose. As noted above, this tendency reflects design decisions, which strove to facilitate more conversational and emotional interactions. Thus, designers themselves understood that what is unique about GenAI is its semblance of human emotions.47
Critically, the Roose episode stresses the need to go beyond a dyadic view of education as a relationship between educator and student, and examine the more complex relationship between designers, users, and other relevant actors. Thus, a technology's “education” relies on a multitude of actors who shape how it is appropriated and how its design is represented in the first place.48 This is specifically notable in the case of GenAI because, beyond its initial “birth” through pre-training, such technologies can be constantly adapted through fine-tuning for specific behaviors or contexts. This process of ongoing education by designers is then complemented by the complex ways in which users can also shape these technologies' actions. This includes, but is not limited to, intentional efforts to capitalize on the “natural” or unexpected nature of GenAI technologies to elicit behaviors that were not intended, or were actively deterred, by designers. Moreover, this can lead to recursive loops in which actors' behavior and the types of reactions they expect from GenAI serve to bring this behavior about. In this respect, we could assume that GenAI “will become what we make of it.” At the same time, this process is not straightforward and requires going beyond the dyadic relationship of educator and student highlighted in Emile and Frankenstein, and attending to the complex and social interplay of how the technology is designed, appropriated, and used.
The emergence of GenAI has resurfaced deep-seated questions about the education of and with artificial beings, inviting us to address these issues more intentionally and thoughtfully. To engage with such questions, this paper sought to examine perceptions of the natural and artificial in education, and how these perceptions, in turn, underpin our understanding of AI in education. Educational approaches commonly entail distinguishing and valuing natural versus artificial aspects of human learning and development. Here, I have argued that one of Rousseau's main influences is the conflation of descriptive and normative uses of the “natural” — positioning it both as the starting point for the development of educational theories and as their overarching goal. Such positioning of the natural as inherently positive leads to depicting artificiality as an unwanted deviance from the natural state of affairs.
Both Emile and Frankenstein seemingly offer essentialist legacies with respect to the natural and artificial, from the idealization of natural education in Emile to the fear of artificial beings in Frankenstein. Yet, the above analyses illustrate that both texts paint a more nuanced picture, outlining the interdependence of the natural and artificial. While Rousseau is often associated with the adulation of the natural, he concurrently emphasizes how the natural needs to be constructed through the use and obfuscation of artificial components. Similarly, although coming to function as a myth concerning the maliciousness of artificial beings, Frankenstein sheds light on how this supposedly inherent nature is in fact relational — contingent on how AI is depicted and treated.
Attending to such issues is valuable in itself, but doing so also sheds light on the cultural fears and fantasies we hold with respect to AI. GenAI is often viewed through the lens of the Frankensteinian legacy concerning the monstrosity of artificial life. Yet, analyzing the Bing episode exposes several aspects of human agency implicit in GenAI's supposedly natural conduct. These include the amalgamation of existing human narratives in the pre-training stage, complemented by the intentional design of behavior during fine-tuning, and the skillful efforts on the part of users to shape or distort GenAI's conduct. Much like in the popularized view of Rousseau as simply advocating for learning through freedom and experience, these “artificial” aspects are often downplayed in discussions of AI.
In this respect, these texts reveal how responses to GenAI are largely driven by our own deep-seated attitudes rather than specific technological features. Reactions to GenAI's human-like interactions are often driven by our Frankenstein-inspired fear that we will not know how to educate AI and that this will lead to our demise. Accordingly, a key challenge such systems present is not their AI but rather their AE, that is, the artificial emotions attributed to them by humans, and the ensuing emotions of humans with respect to dealing with such machines. An emphasis on AE stresses the urgency of not shying away from our creations or attributing to them agency that is beyond our control. Instead, we ought to interrogate the evolving ways in which we educate machines, and how they educate us. Hence, education with and of GenAI requires overcoming the uncritical idolization of the natural and engaging with the relational nature of artificiality. Failing to do so limits our capacity to offer alternative visions concerning the role of GenAI in education and thus leaves us more vulnerable to falling prey to our Emile-driven fantasies of natural and unmediated learning and our Frankenstein-inspired fears of machines that are beyond our control.
I WISH TO EXPRESS MY SINCERE GRATITUDE to the participants of the Philosophy of Education Society (PES)/Educational Theory Pre-Conference Workshop, whose thoughtful engagement with earlier versions of this work helped sharpen the arguments presented here and prompted me to consider important implications.
This paper was supported by the Israel Science Foundation [grant number 451/20].
GIDEON DISHON is Senior Lecturer (tenure track) in the School of Education at Ben-Gurion University of the Negev, Israel; email: gdishon@bgu.ac.il. His research interests include the philosophy of education, critical approaches to educational technologies, and the learning sciences, with a specific interest in the emerging discourse surrounding AI in education.