AI Writing Assistants in Academic Work: An Honest Accounting
A graduate student I know has two documents open on her screen at all times. One is her dissertation chapter. The other is a chat window with an AI assistant. When she gets stuck on a paragraph, she pastes it into the chat and asks for feedback. Sometimes the feedback is useful. Sometimes the AI rewrites the paragraph for her, and she accepts the rewrite. She told me recently that she had begun to notice something strange: when she reread her own chapter, she could no longer identify which sentences she had written and which the machine had produced. They had converged on a single style, competent and slightly bloodless, and she could not remember what her own voice had sounded like before.
This is the quiet problem of AI writing assistants in academic work. The tools are not useless, and the panic about cheating, while warranted in some cases, has obscured a subtler question: what happens to a writer who outsources the parts of writing that are actually cognition?
There is a real case for these tools, and it deserves to be made clearly. Writing well involves many subtasks, and some of them are not especially creative. When you have drafted a paragraph and you cannot tell whether it is clear, an AI assistant can read it back to you and flag the sentences that would confuse a reader. It can catch ambiguous pronouns, the kind where "it" could refer to three different antecedents and you have stopped being able to see the problem. It can notice when your topic sentence promises one thing and your paragraph delivers another. These are editorial functions, and they are genuinely useful, particularly for writers who do not have a patient colleague willing to read their drafts.
AI is also useful for what might be called pre-writing. If you are staring at a blank page and you know roughly what you want to argue but not how to structure it, asking a model to suggest three possible outlines can break the paralysis. The outlines will usually be generic, but that is fine. You are not going to use them. You are going to react to them, notice which one irritates you least, and start writing from there. The tool has served as a rubber duck with opinions.
There is a related use case in literature reviews, where the volume of sources can genuinely overwhelm. Asking a model to summarize a dense methods section so you can decide whether a paper is relevant to your question is defensible. You are using the tool to triage, not to replace your reading. You still go back and read the papers that matter. What you have bought yourself is a faster filter.
So far, so reasonable. The trouble begins when the tool stops being a filter and starts being the writer.
Consider an undergraduate assigned a five-page paper on Hobbes. She has done the reading. She has underlined passages. She has a vague sense that something about the state of nature feels wrong to her, that Hobbes is smuggling in assumptions she wants to push back on. But she cannot yet say what those assumptions are, and the paper is due in two days. She opens ChatGPT and types: "Write a critical essay on Hobbes's conception of the state of nature." The model produces a competent essay. It makes arguments. She reads it, nods, edits a few sentences, and submits it.
What has she lost? Not the grade, probably. The essay is fine. What she has lost is the hour or two of genuine confusion that would have preceded a real argument. The struggle of figuring out what you think is not an obstacle to writing. It is the writing. When the philosopher Bernard Williams wrote that philosophy is an attempt to make sense of what you already half-know, he was describing something true of most academic writing at its best. You sit with the material until the vague discomfort in your mind sharpens into a claim. The discomfort, in Robert and Elizabeth Bjork's framing, is a desirable difficulty doing its work. AI bypasses this process entirely. It delivers a polished claim without the sharpening.
This matters beyond any single assignment because writing is how academic thinking actually happens. The argument does not exist fully formed in your head, waiting to be transcribed. It emerges through the friction of trying to put it into sentences. Joan Didion put this plainly: she wrote entirely, she said, to find out what she was thinking. If you never sit with that friction, you never develop the capacity to have the thought in the first place.
There is also the voice problem, which is more insidious than it first appears. Prose generated by current models has a recognizable shape. The sentences tend toward the same medium length. Transitions are handled with a small vocabulary of connective phrases. Paragraphs resolve too neatly. Even when the output is grammatically impeccable, it has a flatness to it, a sense that no particular person is speaking. Readers notice this, even when they cannot articulate what they are noticing.
A writer's voice, meanwhile, is developed by writing. Not by editing someone else's output, and not by accepting suggestions. It emerges over hundreds of drafts in which you make particular choices: this word rather than that one, a short sentence here because the previous one was long, a willingness to begin a sentence with "and" because the rhythm needs it. Each choice is small. The accumulation is what makes a voice. If you skip the accumulation, there is nothing to accumulate.
The graduate student I mentioned at the beginning eventually noticed what had happened and stopped pasting her drafts into the chat window. She said the first few weeks were uncomfortable. Her sentences felt worse to her. She could hear her own awkwardness. But after a month she noticed that she had started writing paragraphs she actually liked, sentences with a small surprise in them, arguments that sounded like her own mind at work. She was, in a sense, relearning how to write. She was also, by her own account, thinking more clearly than she had in a year.
The honest rule, if there is one, is something like this. Use AI for the tasks where what matters is the output: summaries for your own reference, a second pair of eyes on a draft, a generic outline to push against. Do not use it for the tasks where what matters is the process: forming the argument, writing the first draft, deciding what you actually believe. The distinction is sometimes hard to hold, because the tools are designed to be helpful everywhere. But the question you can ask yourself is simple. Is this a step I want to skip, or a step I need to take? The machine will happily skip either one. You are the only one who knows the difference.