DYLAN TWENEY
Storylines

Writing with (and without) AI

Can we learn to love our brilliant yet pathological robot coworkers?
Dylan Tweney 5 min read
Robots and books. Photo by otzinoz/Flickr.

Artificial intelligence complicates things for writers.

It's complicating everything for everyone these days, sure. We all have AI in our faces all day long, whether we want it or not (and mostly we don't).

But for writers, the existence of chatbots that can generate text — let's not call it "writing" exactly, but content — seems like an existential threat.

Or perhaps it's an existential upgrade. Who can tell yet?

I've been experimenting with conversational AI tools for the past two or three years, and I've used them extensively in my work for the past year and a half. There's no doubt they've helped me do research and complete assignments I would otherwise have struggled with. But I am deeply conflicted about them.

Modern AI is a powerful tool. Used thoughtfully and carefully, it can boost your productivity and enrich your writing, if you know what you're doing. But if you are careless, it will rapidly drag your work down into a sea of mediocrity. I know, because I've seen it happen.

Modern chatbots are built on “large language models” (LLMs) that are essentially super-sophisticated text prediction engines. After distilling terabytes of actual human language, these LLMs can string together words in a way that sounds convincingly like the way a human would write or speak. The results are impressive across a staggeringly wide range of topics. But by the same token, LLMs' responses tend to be generic, reflecting the most predictable arrangements of words.

To put it plainly: They produce extremely average content.
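To make the "most predictable arrangements of words" idea concrete, here is a deliberately crude sketch: a bigram model that, given a word, always emits the single most common next word seen in its (tiny, made-up) training text. This is nowhere near a real LLM, which uses neural networks trained on terabytes of text, but the core objective of predicting the likeliest continuation is the same, and so is the failure mode: the output is, by construction, the most average phrase available.

```python
from collections import Counter, defaultdict

# Toy illustration only (not a real LLM): count which word follows
# which in a tiny training text, then always emit the most common
# continuation. The training sentences below are invented for the demo.
training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat sat by the fish ."
).split()

# Tally next-word frequencies for every word.
next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

def generate(start, length=5):
    """Greedily emit the single most probable next word at every step."""
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Because the model greedily takes the statistical mode at every step, it produces the same maximally predictable sentence every time; real chatbots add enormous sophistication and some randomness, but the gravitational pull toward the probable, and therefore the generic, is built into the objective.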

This is why em dashes, god bless them, have emerged as a supposed “tell” for AI-generated writing. Because many human writers overuse em dashes — and editors like me have been complaining about this for decades, even if we're guilty of the same sin — chatbots have also picked up this writerly tic. A similar issue crops up with words that many writers overuse, like “delve” or “crucial,” as well as tired constructions like “From (something) to (something else unrelated)” or sentences that start “With (some obvious trend), ... (some vaguely related point).”

In addition to being prone to clichés, chatbot output has nothing to do with the truth. Chatbots are good at producing convincing-sounding strings of text, but they are not built to understand or validate facts; confabulation is what they are made to do. It's not fair to call a factual error a "hallucination" when the LLM is completely unconcerned with the truth value of its statements. It's more accurate to call it bullshit.

The people developing AI may eventually come up with a language model that's capable of checking its statements against the real world and gradually building a more accurate mental model of reality. Some say that's exactly how the human mind works, as Anil K. Seth wrote a few years ago. If they succeed at this, they might wind up creating a true intelligence.

But until then, LLMs do not have a lived experience from which they can speak. I love the way George Saunders writes about this:

A piece of fiction is infused with, let’s call them, “overtones” – the result of those thousands of micro-decisions we’re always talking about here. ... those choices are infused, on a molecular level, with every place we’ve ever lived, been, or seen; with the thousands of different artists we’ve been along the way; with our evolving preferences and biases in terms of language and form, and by the archaeological layers of outdated preferences and biases of that type. 
When I read your work, I am receiving that entire history, in every phrase and every omitted phrase.
—George Saunders, "I Doubt the Machine Because the Machine is Not You"

And that’s why it often feels like there is no heart or soul to AI-generated copy. There is no lived experience behind it, no unique, reality-inflected being making decisions to generate those words, so there is no one for the reader to connect to. It's all scrim, like a Potemkin village version of writing.

~~~

Another thing: The makers of today’s AI chatbots built their technologies by absorbing the creative work of millennia and reducing it to statistical patterns represented in neural networks by numeric vectors. The work of thousands of writers and artists, spanning everything from your most mundane Facebook posts to the published work of Nobel Prize winners, has been reduced to statistical probabilities without any acknowledgement or compensation to the creators. As a result, LLMs can do a convincing job of imitating almost any artist or author, including you. (For a deeper dive into this, see Alicia McKay: What the AI bros won't tell you.)

The companies making AI aren’t doing it to help you. AI tools might be helpful to you, yes, and the companies are happy to take $20 a month from you if you find their tools valuable enough. But their main customers are big companies willing to spend a lot of money to save even more money. If those companies can use AI to reduce headcount, let go of freelancers, or pay everyone less, they will certainly do it.

~~~

So when you write using AI tools, you need to approach them with your eyes open. You’re using a tool (working with a collaborator, if you prefer) whose training was based on taking others’ work without compensation or consent, whose purposes include replacing you, and whose output is remarkably sophisticated and convincing but completely disconnected from the truth.

In a human coworker, those qualities would be pathological.

So here's how I suggest you work with AI: regard it as a brilliant, solicitous, but untrustworthy coworker.

What does that mean? I treat AI as an assistant, a tool to be used consciously and deliberately, not as a replacement for a coworker (or for my own work), and not as a collaborator.

I never let AI write the first draft. I place too much value on the work of thinking things through by trying to write about them, and I appreciate the way that my own individual style emerges through the writing process, as Saunders describes.

I use AI to help me with research, as a brainstorming partner, and for editing suggestions. I use it to boil down transcripts into highlights or more usable Q&As.

I always take the time to think through what I'm asking for, and when I have a request, I write very clear instructions for it to follow (essentially, I give the AI an assignment brief).

And I often reject its suggestions. Grammarly, for example, is indispensable for catching typos and the kinds of usage mistakes that I make all the time. But its suggestions for how to "improve" my sentences would make them sound wordy, evasive, and bureaucratic, so I almost always reject them.

In short, I am very deliberate about staying in control of the process.

For me, AI is an extremely powerful tool. But it's a dangerous one.

How about you: Are you working with AI tools to help with writing? If so, what works for you?
