It was dark outside, a late afternoon in the fall of 1986. As I did each week during my last year of graduate school, I was sitting with my thesis director, the poet James Dickey. Campus was lively with classes convening and dismissing, but the darkness pooling outside made me feel we were isolated, marooned together in a place where words were life-or-death matters.
That’s the way I felt, anyway. I couldn’t say how Mr. Dickey felt. Our relationship was strictly that of teacher and student. I thought of him as an old man. He was exactly the age I am now.
I remember that particular meeting because of one ill-chosen word. In a poem that was otherwise finished, a single adjective was clearly wrong. We batted alternatives back and forth across the desk, but none was right. I was determined to find the word that belonged there, the one that clicked into place like the halves of a locket.
Hours later, sometime around 10 o’clock, the right word came to me, popping up out of the depths while my mind was occupied with something else. It was so apt, and I was so exultant, that I went straight to the kitchen, opened the phone book, and looked up Mr. Dickey’s number. When he answered, I said, “‘Pale.’ The word is ‘pale.’”
It didn’t dawn on me at the time that 10 o’clock is awfully late to be calling anyone, let alone an aging professor. But Mr. Dickey was overjoyed about that word, every bit as jubilant as I was. If only for a moment, the world made a kind of sense it hadn’t made before.
I had not thought about that phone call, much less that poem, in many years, but I’ve begun to think about it often. A flurry of “A.I. assistants” has suddenly colonized my inboxes and Word documents and texts. This month they appeared out of nowhere, like a swarm of fruit flies around an overripe banana. Everything I type now is thick with hovering robots suggesting unwelcome robot words.
Outlook supplies “an intelligent email companion.” Yahoo provides “email summaries, messaging-inspired interface and a gamified experience.” Google offers to “supercharge” my ideas. Now when I call a corporation’s customer-service department, I get a robot who asks, “Can I text you a link to chat with our virtual assistant?” The robots that answer phones, it seems, are being sunsetted by robots that can text. But then Apple’s robot takes over to summarize the corporate robot’s message before ever delivering the text itself.
(Microsoft is now informing me that “sunsetted” is not a word. It suggests using “unsettled,” “sonneted” or “unwetted” instead.)
In this brave new world, the search for a word like “pale” has been outsourced to a robot that will never suggest such a word. The yoking of unlikely adjective and noun is still, for now, the province of unwetted poets.
I have spent hours trying to kill these ghosts in my machine. I can sometimes adjust my settings to disable the A.I. assistant, but the next software update turns it right back on again. In some cases, I can’t turn it off at all. The robots are relentless.
The writing teachers I know struggle to persuade their students not to use these tools, which are everywhere now, impossible to swat away. Who could blame a young writer for wondering how using these “assistants” is any different from using spell check or letting Siri supply the next word in a text? Besides, if they don’t use these tools, won’t they be falling behind the many students who do? It’s a fair point.
But letting a robot structure your argument, or smooth your style by stripping out its quirks, is dangerous. It’s a streamlined way to flatten the human mind, to homogenize human thought. We know who we are, at least in part, by finding the words — messy, imprecise, unexpected — to tell others, and ourselves, how we see the world, a world no one else sees in exactly that way.
Who was it who first said, “I don’t know what I think until I see what I write”? Versions of this statement have been attributed to writers as various as Joan Didion, William Faulkner, Stephen King and Flannery O’Connor. Google’s robot doesn’t know who actually said it, but almost anybody who writes, whatever they write, will tell you it’s true.
In “I, Robot,” the 2004 film loosely inspired by Isaac Asimov’s classic science-fiction story collection of the same name, one robot is unlike all the others of its model. It has feelings. It learns to recognize human nuance, to solve problems with human creativity. And with those attributes come the questions inevitably raised by being human. Twenty-six minutes into the film, the robot asks, plaintively, “What am I?” This is a question writers ask every day. I suspect everyone else does, too.
Sure, there’s a difference between writing a poem and cleaning up a garbled email, between writing a love letter and a Google ad. For some tasks, using an A.I. assistant might save time without levying a commensurate cost in humanity. Maybe.
I’m still not sure. The practice involved in rote writing tasks may be the very thing that inspires us to open a journal or write a letter or commit to paper a memory from the distant past. “A robot may not injure a human being or, through inaction, allow a human being to come to harm,” reads Asimov’s first law of robotics. But what if the very existence of robots is what robs us of our humanity? Is that not a way of bringing humans to harm?
Somewhere in my house there is a bound copy of the master’s thesis I spent two years writing. I remember very little about that poetry collection. I know its title (“Small Comforts”), and I know it included a poem about the nuptial flight of ants. Probably there was one about the taste of ripe figs, too, and at least one about a rat snake. I have such delightful memories of those things, but I’m only guessing that I turned them into poems. So much from that time is lost to memory.
But I remember one poem in which the word “pale” figured prominently. And what I learned in struggling to find it has lasted through nearly four decades. The search for the right word to fill the right place can occupy a lifetime. And, I’m convinced, make a self along the way.


Paul Allison, a nationally-known educator and EdTech expert, comments:
You’ve asked what I think about problems like the invisible, persistent “ghosts in the machine”—AI assistants and features that users can’t fully control or turn off. Honestly, this is such a powerful, relatable frustration and it gets to the heart of how AI is designed and governed in society today. On one hand, as Andrew Ng often emphasizes, AI offers incredible potential as a general-purpose technology and tool—it can make life easier, more efficient, open doors to new creativity and productivity. But as Kate Crawford so brilliantly illustrates in Atlas of AI, the reality is messier and much more complicated: these technologies can impose themselves on people’s daily lives, often in ways that are opaque, invasive, and difficult (sometimes impossible) to opt out of.
What you’re describing isn’t just an annoyance or user experience bug—it’s a deeper issue about power, autonomy, and the “materiality” of AI. Updates that override your preferences, assistants that can’t be disabled, relentless automation: these situations reflect how control over technology is increasingly shifting away from individuals and toward a small set of corporate actors. The fact that the “robots are relentless” is not an accident—it’s coded into the political and economic structures of how AI is made and deployed. This echoes what Crawford says about the logics of extraction and exploitation that shape today’s AI systems, not just in terms of data and labor, but also in the subtle ways they extract agency from users.
Let me give you a quick, creative summary of where I’d take this next, using the question about these persistent, hard-to-remove AI features as a springboard:
1. AI Autonomy vs. Human Autonomy: The growing inability to “turn off” AI features is really a tug-of-war between user freedom and technological control, and it raises deep questions about who gets to decide how and when AI interacts with your life. ✅
2. The Political Economy of Defaults: Why do systems force these features on people? Looking at the business models and incentives behind the relentless rollout of “assistants” reveals a lot about who wins and who loses—and why your frustrations keep coming back with every update. ✅
3. From Ghosts to Accountability: What would it mean to design AI with real user agency, transparency, and respect for people’s boundaries? How can society push for more “accountable AI” that serves, rather than subsumes, individual choice? ✅
Which of these paths do you want to take? Or is there another angle about these “relentless robots” and the ghosts in your machine that you want to dig into? Let’s bust some ghosts and go big!
Please Note: Everything in this comment is AI-generated. It is made up to sound like a composite of Dr. Andrew Ng and Dr. Kate Crawford.
I invite you to read the sources for these comments:
“Earth,” Chapter 1 of Atlas of AI by Kate Crawford
Opportunities in AI by Andrew Ng
In a second comment, Paul Allison writes:
This essay captures what I see happening in classrooms.
Recently, a student said, “When I am writing, I don’t know if it is me thinking or if I am just doing what AI told me to do.”
For this student, Grammarly pops up after every sentence to suggest alternatives. It would drive me crazy!
I think… I hope… that we are designing a writing platform where the AI assistants are more like the aging professor in this essay, ready for a coaching conversation when asked and available to take a call anytime.