Collateral Damage
Yes, this is about AI.
I am very interested in the stories we tell ourselves. Specifically, I think a lot about the stories that grant us permission to do things we would not do if we lived purely by our own moral compass.
For example, I tell myself that eating a vegan diet in order to combat climate change is pointless. Forgoing one more (extremely delicious) factory-farmed salmon fillet can’t really matter in the face of the fossil-fuel-gobbling industries that control most of the planet. Plus, I recycle. I compost. I save and reuse cardboard boxes and takeout containers and glass jars. I print on the blank side of used paper. I’ve never owned a car (although that record is likely to come to an end soon). Surely these efforts are enough.
If I look past the story, it’s clear that I only consider them “enough” because my home hasn’t yet been flooded. My local food supply hasn’t been extinguished by famine. My parents haven’t frozen to death from a power outage during a winter storm. Environmental catastrophes continue to feel like events that happen “elsewhere,” and while I know they will eventually affect me directly, I continue to hope that “eventually” occurs after I’m dead.
Put another way: If the consequences of climate change were knocking down my front door, I might rethink my food choices. Or, maybe I’d consider it “too late” and keep spooning yogurt into my mouth. It’s hard to say how we’ll react to a situation until we’re in it.
However, I didn’t sit down to write about climate change. I sat down to write about another existential threat, one that is knocking down my front door. Specifically, I sat down to write about very smart people’s indifference to the ignoble origins and destructive capacities of AI.
First, so we’re all on the same page, when I say “AI,” I specifically mean large language models (LLMs). These are essentially computer-powered word predictors or, as one writer put it, spicy autocomplete machines. ChatGPT, Claude, Grok, and Gemini are all LLMs. They were “born” after their parent companies (OpenAI, Anthropic, xAI, and Google, respectively) stole enormous amounts of copyrighted work and fed it into the machines to teach them language patterns. Everything they’re doing now depends on regurgitating these patterns, just on a massive scale.
As a writer who very much believes in the value of the written word as an art, a craft, and a vocation, I consider everything about AI to be unethical. Its very foundation is built upon thievery. By stealing our words, these companies stole our land, so to speak, and we all know how that goes. Next thing you know, we’re being exterminated. (And before you start pointing to author/publisher settlements, what’s a fair price for something that was stolen? Did the Louvre ask for a “fair price” for the Mona Lisa? No. It demanded the painting’s return.)
Of course, not everyone cares about the origins of AI. Or perhaps they “care” the way I “care” about what factory farming is doing to the environment.
A friend of mine who is very ambitious and an incredibly hard worker recently interviewed for and then accepted a job at a popular AI company. When she first told me she was considering the role, I reacted . . . well, poorly. Judgmentally. Combatively. After all, didn’t she have a tech job that paid her nicely? I later apologized; after all, I want to be a supportive friend, and this, she told me, was a way to further her career. (I want to see her succeed!) Plus, she’d be doing interesting work. (Interesting work is the dream!) And also, she said, her new employer was “one of the more ethical AI companies.”
I had an even more troubling interaction at dinner with the longtime partner of one of my friends. He was in town because, while he already had a Very Important Position at a popular technology company, he was interviewing for a Very Important Position at a different but equally well-known AI company that would, presumably, pay him even more money. When I expressed some disappointment and frustration, he replied, “Well, wouldn’t you rather have the good guys on the inside?”
There are no ethics in companies built on stolen property, and there are no good guys inside bad companies.
But how angry can I really be? These people treat AI much the way I treat climate change. They look past whatever they see as its harms or evils because those issues are theoretical to them; their lives aren’t impacted, nor are the lives of pretty much anyone they care about—at least not directly in ways they can see or are willing to acknowledge. I tell them that my life is impacted. People who previously couldn’t tell a semicolon from a semaphore are informing me, the professional writer, where and when I can use em dashes. They’re insisting that yes, they’ve always written in choppy sentences, bulleted lists, and endless series of three. Even if they believe I’m good at my job, they also believe they can produce something equally good, or at least good enough, with the machine. And the scary part is that, for them, the output probably is good enough. They’re not the professionals. How would they know any better?
For now, I still have clients. I continue to make a living doing this “word stuff,” and I continue to feel certain that I can do it better than any machine. And yet I can hear potters and seamstresses of yore saying the same thing. Catastrophe is breathing right outside my door.
At night I dream of a world where movies are gone. We sit down in the theater to watch four DraftKings ads followed by Serena Williams injecting herself. Finally, Guardians of the Galaxy 38 comes on, featuring CGI actors performing an AI-written script. Not that it matters, since everyone spends the movie scrolling their phones.
I dream of cavernous, empty art museums. Their land is sold. Their beautiful architecture is torn down. A ChatGPT query for “art museum” leads to a Sora feed: A popular yellow sea sponge driving a Formula One car while smoking weed. A famous wrestler-turned-actor twerking in a rainbow-colored tutu. An anonymous, buxom woman force-feeding herself unpeeled bananas.
To live in this world, we might tell ourselves it’s always been this way. More likely, we will claim that these are just sacrifices that have to be made. That’s what we’ve done with every other “societal advancement,” after all. Sure, the Pinta giant tortoise and the ivory-billed woodpecker are extinct, but you can have a vinyl record cleaning kit delivered to your doorstep tomorrow for just $19.81. And have you ever even heard of a Pinta giant tortoise before? That’s barely even collateral damage, if you think about it.
And there you have it: I’m barely collateral damage, if you think about it. If devastation isn’t standing at your door, there’s no need to think about the consequences.
At least not yet.


Take it a step further. How many people are eating boxed mac and cheese or instant mashed potatoes? If you listen to music, how much of it features real drum beats, guitar strings, or natural singing voices?
So much is already done through artificial means. I don’t think anyone knows where that line is going to be crossed.
"Even if they believe I’m good at my job, they also believe they can produce something equally good, or at least good enough, with the machine. And the scary part is that, for them, the output probably is good enough. They’re not the professionals. How would they know any better?"
The thing is, as you and I know, many of “the professionals” think similarly about the good-enough metric. And much of the work done by “the professionals” is no better than what others produce using the machine.