This post is an essay by John Warner that was published in Inside Higher Ed. Here’s a link to the original.
He takes a smart approach to the problem of how to teach writing to college students in the era of AI, where an algorithm can produce an adequate essay in response to the instructor’s prompt without any effort by the student. As he notes, the real problem is that the longstanding system of college essay writing has focused primarily on producing generic documents rather than on actually writing.
The way that I’d frame the argument is this: Writing is not producing coherent and grammatically correct essays. Instead, writing is thinking. You don’t write in order to put on paper an existing set of preformed ideas. AI does that quite well — drawing on the patterns it has found in its deep dive into existing literature to produce a plausible piece of English prose. It takes what’s already out there and reforms it in response to the prompt. It’s not thinking. It doesn’t understand the underlying processes and causal structures that produce the results it’s talking about. It simply uses probability to predict the most likely word, sentence, or paragraph to follow in a given sequence. AI relates to writing the same way that truthiness relates to truth. It’s an exercise in plausible fakery.
By contrast, we write because writing is the only way for us to figure out what we think. Simple ideas can be put on paper with ease and without thought. But anything more complex requires you to work out on paper where a line of thinking is leading. You don’t know what you think about a subject until you finish writing about it.
This is what makes writing so difficult. It’s a long, drawn-out struggle to figure out what you think. The gratification comes from finding that you can indeed work through difficult problems all on your own. The challenge for a writing instructor is to move students from producing documents to crafting ideas on paper. The aim for anyone learning how to write is to strengthen your ability to think and to express this thinking, rather than to acquire an algorithm for producing an essay without serious thought. AI already has this algorithm. It’s mastered the five-paragraph essay. And it’s always going to be better at algorithmic document production than you are.

Teach Writing, Not Document Production
If we want students to learn to write, AI tools shouldn’t have much of a role. If we don’t think students need to learn to write anymore, I’m not sure what we’re doing here.
The challenge of generative AI in education is to fundamentally determine what matters.
I think writing matters. Here’s why:
- If we want students to learn how to write, they must write.
- Knowing how to write is, essentially, knowing how to think. Thinking is important.
- Syntax production is a by-product of writing, but not all production of syntax comes from the act of writing.
- We know we are in the presence of writing when the text has been produced by a unique intelligence acting to intentionally communicate a message that fulfills a purpose and meets the needs of a specific audience.
- Because they operate on probabilistic patterns, and without the capacity to think, feel or communicate with intention, large language models do not write—they produce syntax.
- Automated syntax production could have many useful applications in the world.
- Inviting students to use LLMs as part of the production of text in a writing course makes this course not a writing course, but a course in document production.
- A document-production course could be a useful experience for students. Since the appearance of ChatGPT, I have spent a fair bit of time thinking about the documents I have been required to produce as part of my various professional capacities, and the list of things I would’ve wished for the aid of an LLM is long: The faculty activity report, interview transcripts, boilerplate charts I used to have to do every quarter for a market research tracking study, the progress email for the superior who didn’t seem to do much but justified his existence by pretending he was supervising the work of underlings … The list goes on and on.
- One of the reasons I can think of instances where automating document production would be useful is that my ability to write allows me to draw distinctions around when writing is necessary and when the text production can be offloaded. I learned this by doing lots of reading and writing in the absence of automated text production/processing technology.
- Many students arrive in college never having experienced writing (see above for definition) in school contexts.
- These students have primarily been asked to produce writing-related simulations for the purpose of limited assessments, often divorced from anything like an authentic writing situation requiring the work of a unique intelligence.
- This pattern of students producing writing simulations long predates the appearance of ChatGPT. (See: Warner, John, Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities, JHUP 2018.)
- LLMs make it possible to produce these simulations without any student effort or engagement.
- If a student has only experienced the simulation, me telling them it is important for them to learn to really “write” is going to have a limited impact, because they don’t have a firm grasp on the genuine experience I’m urging them to embrace.
- When we tell students to “write” something and we mean the above definition, but they are envisioning something else—e.g., producing a document for a grade—some students will inevitably choose to use an LLM to outsource the text production.
- Allowing students to turn in LLM syntax productions and receive credit in a course that is designed to develop their writing is not acceptable. It makes a mockery of the supposed work of education.
- The reasons students will make this unacceptable choice are varied and include, but are not limited to, the following (close paraphrases from firsthand discussions with students):
  - I had more important things to do.
  - The assignment was dumb and seemed pointless.
  - I don’t care about this class.
  - I had too much stuff to do and it was just easier to check something off the list.
  - I had to work.
  - I didn’t understand the assignment.
  - Everyone else is using it and they’re doing fine.
  - I was pretty sure [the LLM] would do a better job than me.
- It’s that last one that kills me. What kind of system have we made where a student is certain the simulation carries more value than whatever they have to contribute?
- The root challenge of this dilemma is the transactional model of school, which treats the product that can be graded as more important than the experience of learning. Under this model, an A earned while learning nothing has significantly more value than a C earned while learning something potentially transformative to one’s own thinking.
- The transactional model predates generative AI by decades, and the biggest difference pre- and post-ChatGPT is that we no longer can trust the simulation because it may have been generated automatically. Put another way, learning wasn’t necessarily happening before, either, but for some reason we found solace that students were at least doing the simulation themselves.
- AI companies are actively selling their products as the solution to the “problem” of school where all activities can be outsourced.
- Many institutions are “partnering” with the companies that are actively selling cheating tools to students.
- Learning is important. The simulation didn’t mean much before; it means even less now.
- Most students want to learn, but in many cases, it is not apparent to students how the work of school is related to learning. I don’t like thinking this way, either, but this belief comes from talking to thousands of students across a couple of decades.
- Once students see the genuine pleasure (a word I don’t use lightly) and benefits of doing their own learning, they are more likely to see the value in continuing with these practices.
- Reducing assessment to experiences that can be strictly monitored and proctored reduces what students may learn in the name of increasing “integrity,” as determined by the credential-certification entity (teacher, institution).
- Opting for a strictly monitored and proctored assessment may be a desirable choice in some cases.
- Students will, one day, go into a world where they are not strictly monitored, where they will have to exercise judgment over the choices they make as they do their labor. Integrity will be a choice they have to make for themselves.
- An instructor can take incredible care in designing meaningful writing experiences, tailoring assessment to important learning outcomes, transparently communicating the importance and value of having this experience, and students will still choose to outsource the whole thing to a large language model.
- This choice will drive the instructor bonkers, because what else are you supposed to do?
- Nothing. Nothing else. The instructor has done everything possible.
- In the end, the education belongs to the student.
- The best we can do is give them a proposition worth taking up, to do something hard that requires deep consideration and lots of friction (and even frustration) where the challenge itself is the reward, and the good faith attempt at meeting the challenge is rewarded because in that attempt learning has happened.
- To achieve this, we have to teach writing, not document production.
