AI, Artifice, And Authenticity
The era of badly-automated homogenized engagement slop is upon us. I desperately want to believe there's a renaissance for authenticity on the other end.
I've always been fascinated by America's obsession with artifice.
Our "top engineers" can't engineer. Our journalists can't tell the truth. Our wealthiest innovators can't innovate. Our leaders can't lead. Our most "insightful" and popular podcasters have heads full of cottage cheese and pebbles.
The walls of our chain restaurants are plastered with fake Americana, our kitchens are full of French bistro prints from Target so we can pretend we're well traveled, and when authoritarianism came knocking, American exceptionalism proved as hollow as a Dollar Store fake chocolate Easter bunny.
Donald Trump, with his tacky gold-spray-painted bullshit, is perfectly representative of a culture being devoured by its obsession with artifice. Trump exposes how many Americans can't tell the difference between authenticity and the laziest bullshit imaginable, and I find that equal parts fascinating and terrifying.

Now AI is colliding with shaky American artifice at unimaginable scale. AI engagement slop routinely tops the music charts, pummels vast swaths of the electorate with Facebook stories that aren't real and never happened, and is steadily supplanting whatever's left of real journalism.
It's the golden age for hyperscaled, badly-automated bullshit, overseen by the least ethical assholes America has to offer. "If a lot of people click on it it must be good" is the gold standard for the country's deepest thinkers and top titans of industry.
Peppered within this morass are islands of educators, artists, writers, and other creatives wondering where we go from here. Does America finally get devoured by a deadly homogenized hyper-commercialized sameness, or does badly automated artifice at impossible scale create a renaissance for authenticity?
Every so often I'll see interesting flashpoints on this subject, like this recent Wall Street Journal article about how Fortune, already a cornerstone of lazy "CEO said a thing!" journalism, is going full steam ahead on replacing what used to pass as human insight with high-volume automated cack. Often without telling readers.
The implication in the piece is that this is simply unavoidable, and the author is engaged in an act of savvy adaptation by not just using AI clerically, but by having it do most or all of the writing. That this approach has repeatedly resulted in widespread (and costly to fix) plagiarism, errors, and publisher embarrassment is treated as somewhat tangential.

Despite not being very long, this recent Wired piece about AI in journalism (paywalled, here's an archive copy) also triggered a lot of conversation – and something akin to a wave of nausea – among many writers I admire.
Wired asked a small selection of journalists how they're integrating AI into their workflows. Again: many writers aren't just using AI for invoicing, editing, and research, they're deeply interweaving automation into their personal voices in a way that feels decidedly creepy.
Some of the writers in the Wired piece seem very excited that they no longer need to bother with the tedious task of actually writing things:
“I feel like I’m cheating in a way that feels amazing,” says [Alex] Heath. “I never did this because I liked being a writer. I like reporting, learning new things, having an edge, and telling people things that will make them feel smart six months from now.”
You'll notice he said feel smart, not be smart.
The craft of writing, and thereby the act of learning, is viewed by some as a sort of nuisance that gets in the way of the important stuff: engagement, subscription growth, attention, and being recognized as a person of importance and insight.
The problem is if you skip the learning, and the making of human mistakes through writing, you're going to be boring. You're not going to have an "edge." Hyper-reliance on AI, which can only generate output based on what's already been said elsewhere, is inevitably going to result not just in mass homogenization, but in an erosion of personal growth, experience, and intellectual evolution.
"The journey is the reward," as the saying goes.
This worry that AI is short-circuiting the learning process is a big conversation over in academia and the sciences, which are full of people taking shortcuts toward understanding and knowledge so they can rush into a job market with no jobs (or grant money) and confidently pretend they understand things.
I liked this piece and quote by Berkeley computational scientist Minas Karamanis:
"What's great about science is its people. The slow, stubborn, sometimes painful process by which a confused student becomes an independent thinker. If we use these tools to bypass that process in favor of faster output, we don't just risk taking away what's great about science. We take away the only part of it that wasn't replaceable in the first place."
Some writers quoted in the Wired piece appear to have convinced themselves that they can remain immune from all the creeping, unremarkable sameness and non-learning involved with heavy AI reliance, so long as they do a good job convincing a pile of code to better understand them on a personal level:
Like Heath, [Jasmine] Sun has fed Claude past articles she’s written and notes on her style. But she’s also instructed Claude to focus only on enhancing and developing her voice and taste, and never to be sycophantic. She tells Claude it “should never write a sentence for her. Your goal is to elicit out of Jasmine by providing feedback.”
Here’s part of the instructions Sun has shared with her Claude editor: “You are not a co-writer. You cannot perceive—you don’t have experiences, sources, scenes, or emotions to draw from. Your role is to help Jasmine write like the best version of herself—not just who she is on the page now, but who she’s trying to become as a writer. That means understanding both her current voice and her aspirations, including the writers and qualities she’s reaching toward.”
This, as independent reporter Marisa Kabas correctly notes, is a dramatic misunderstanding of how LLMs actually work (despite the journalist in question ostensibly covering the sector for a living):
Telling a machine that can’t perceive that it can’t perceive won’t make it perceive that it can’t perceive. It can’t make you be the best version of yourself because it doesn’t know what that means, nor does it know what it means to aspire. Just because some are given a human name and users are taught to address it collegially doesn’t make it real. You cannot force a machine to become human. You’re stuck, for better or worse, with your fellow humans for perceiving your aspirations. Perhaps the problem is that you don’t like how humans perceive you.
You'll see this anthropomorphizing a lot in media, and it's a direct extension of tech industry misrepresentation of what modern AI is and what it can actually do.
That's driven, in turn, by the fact that many tech journalists are an extension of tech industry marketing, more fascinated with the nature of wealth and wealth accumulation than any genuine understanding of technology or how it works (deep dives into the guts of engineering don't get those sweet clicks, you see).
There's no faster way for a tech journalist to loudly demonstrate they don't know how this technology works than by attributing human intentions and motivations to software. There is a lot of terrible journalism on AI obsessed with this sort of weird anthropomorphism:
[Image: a collection of anthropomorphizing AI headlines. Caption: code can't apologize, dumbass]
Tech companies, who are endlessly trying to pretend we're just a few additional billion and another week or two away from sentience, love this shit. It doesn't just advertise that this software is more sophisticated than it actually is, it helps distance them from their very human failures and bad choices.
If you're a writer who doesn't understand how AI really works or the often-ugly motivations of its unsubtly malevolent creators, how good of a job are you doing in integrating it into your workflow in a way that doesn't undermine your work? How can I trust you to inform me honestly about this or anything else? Especially if you're so eager to short-circuit your personal growth and learning?
And these are journalists who cover AI. You can assume everyday citizens, students, or younger aspiring writers (who in the U.S. already suffer from abysmal media literacy standards) understand even less about these new tools.
I think a lot about this 2025 New York Times discussion between Casey Newton and Kevin Roose about AI. In it, they regularly attribute human qualities to LLMs in a way that betrays a very deep misunderstanding of how AI software works (again, despite being paid very well to professionally cover the sector).
The conversation included several implications that AI makes a good, symmetrical replacement for mental health therapy (decidedly incorrect), and that chatbots are actively trying to cheat or deceive their users:
ROOSE So these are some of the amazing and wonderful things that today’s A.I. systems are capable of. But we should also say there are limitations that still remain.
NEWTON That’s right. If you don’t pay close attention to them, they tend to be bad at certain common-sense things. For technical reasons, they don’t have great memories yet; they’re not amazing at long-term planning. Also, they’re not always aligned with human values: They might lie or cheat or steal to get what they want.
That's...not an actual thing that's happening. This software is an algorithmic, stochastic parrot that presents you with a combination of words it probabilistically deems the right answer based on an ocean of existing writing. It's a simulacrum of human intent and understanding; it's not a perfectly symmetrical substitute for human consciousness sneakily trying to fuck with you.
If you're not able to understand whether or not your laptop software is sentient, there are probably going to be flaws in your adoption and analysis of it. Automation has uses, but there's been no shortage of journalists who've gotten too cozy with "AI" only to embrace their own ruin. This sort of thing is happening a lot.
That said, Roose eventually does come around to a point I believe in. Or at least very much want to believe in:
ROOSE I have a somewhat more optimistic take on this: I think that yes, people will lose opportunities and jobs because of this technology, but I wonder if it’s going to catalyze some counterreaction. I’ve been thinking a lot recently about the slow-food movement and the farm-to-table movement, both of which came up in reaction to fast food. Fast food had a lot going for it — it was cheap, it was plentiful, you could get it in a hurry. But it also opened up a market for a healthier, more artisanal way of doing things. And I wonder if something similar will happen in creative industries — a kind of creative renaissance for things that feel real and human and aren’t just outputs from some A.I. company’s slop machine.
The only thing I really have of any professional worth is my voice and experience, cultivated by a quarter century of writing. Much of it profoundly terrible. I'm a direct byproduct of those failures and inconsistent growth. I'm not interested in shortcuts, I'm interested in honest, genuine communication and connection.
Hopefully ethical human beings will still play a meaningful role in this brave new world. Writer Hamilton Nolan welcomed the sea change set forth in the Wired piece, noting that a growing tide of homogenized, badly automated sameness is good news for authentic voices that actually have something to say:
If you are a professional writer, I want you to use AI. Because this industry is competitive. I’ll take any advantage I can get. And if you want to make your writing suck, that’s all the better for me. One less person outshining me.
The tepid, conformist nature of your AI-assisted prose will only make my unexpected bons mots stand out more sharply. While you lean on a technological crutch of grammatical mediocrity to drag your essays over the finish line, I’ll be metaphorically zipping past you on my “magic carpet” of words emerging directly from my own declining and unpredictable brain.
I desperately want to believe that the rise of AI will result in a deeper desire for authenticity. But then I remember all the times I've stood in line at the grocery store and observed what the average American actually eats. Or spent some time with the top 20 U.S. podcast list. And I quickly have second thoughts.
As I've mentioned previously, most of the problems with AI are caused by very ordinary human failures, many of which have been compounded over decades. Unethical, rich weirdos (who have long had an ironic disdain for the humanities) are rushing to layer software automation on top of vast, complex sociological systems they don't understand and already didn't work in the public interest.
Authoritarianism and automated tech enshittification have combined to accelerate the deterioration of once-serviceable artifice, revealing a vast, neglected brokenness beneath. The hype cycle has gotten so surreal, it feels like a satirical novel written by some alien construct with moderate to severe brain damage.

There's white hot public rage right now against AI. Not just because AI undermines labor, recklessly consumes energy, and is propped up by financial shell games, but because younger Americans are more clearly seeing through the veneer we've used to wallpaper over decades of very ordinary human failures.
In better days, U.S. artifice was just effective enough to maintain some semblance of order. As our institutional cornerstones buckle and crumble from broad corruption and neglect, the sheer laziness of the stage play is coming into stark relief. Especially if you're young, hungry, and have never known anything else.
Into that mix comes a fascism-friendly extraction class that's desperately trying to construct a massive, hyper-commercialized, ethics-optional, badly-automated clickbait ouroboros that shits ad engagement money, pummeling the electorate with a steady stream of superficial infotainment agitslop at impossible scale.
I'm not actually sure authenticity wins this standoff, but I desperately want to believe it has a chance to put up a respectable fight.