I’ve been commenting on student writing for over forty years, and I’ve watched the same struggle play out again and again: students master the rules we give them—about thesis statements, topic sentences, and transitions—but their writing stays wooden and lifeless.
They follow our prescriptions and produce essays nobody wants to read.
Here’s the puzzling part: these same students read excellent writing all the time. They admire it, they’re moved by it, they know it when they see it. But somehow they can’t translate what makes that writing work into their own practice.
Then large language models showed up, and suddenly I saw what composition scholars like Anne Beaufort, Nancy Sommers, and Linda Flower have been telling us for decades: we’ve been teaching writing backward.
Think about how AI actually learns to write. It doesn’t memorize rules about thesis statements or five-paragraph essays. But it doesn’t figure things out on its own either. Instead, it ingests millions of examples of writing, and then human experts teach it what works and what doesn’t.
That human guidance turns out to be crucial. AI can’t figure out on its own what makes writing coherent or persuasive.
Experts in argument and style evaluate its attempts and give specific feedback: “This explanation is clear because you define terms upfront.” “This argument rambles because you introduce claims without evidence or examples.” “This integration of evidence is smooth; that one feels clunky.”
Through thousands of these expert judgments, the model learns patterns—not just any patterns, but the ones that human experts have identified as marking good writing.
Here’s what stopped me cold: AI often produces more natural, readable prose than students who’ve dutifully memorized our rules.
This tells us something important, something researchers have been saying for years: how humans actually learn to write has more in common with how AI learns than with how we typically teach writing in schools or college.
Real writers don’t learn primarily by memorizing rules. They learn by reading widely, absorbing patterns from good writing, and getting guidance from more experienced writers about what works. They encounter thousands of examples while developing what Donald Schön called “reflection-in-action”—the ability to judge quality while you’re actually writing.
That’s essentially what happens with AI, minus the consciousness. It processes massive amounts of text and learns from expert judgment about what distinguishes effective from ineffective prose.
Yet we keep teaching through what David Russell calls “general writing skills instruction”—abstract rules divorced from real contexts: the five-paragraph essay, the research paper template, thesis-support-conclusion formulas.
We act like writing is a technical skill you master by learning specifications, like assembling IKEA furniture.
The results speak for themselves. Students can recite the rules, but their writing feels mechanical and disconnected from anything they’d actually want to read. Patrick Hartwell showed decades ago that teaching explicit grammar rules doesn’t improve writing quality. Yet we persist.
What if we taught writing the way AI learns—through extensive examples plus expert guidance about what works—and then taught students to go beyond what AI can do? That would align with what composition research keeps telling us: writing develops through immersion, apprenticeship, and extensive practice with real feedback.
Why Our Formulas Don’t Work
Walk into most high school English classes and you’ll hear the familiar litany: “Your introduction needs a hook, background, and thesis statement.” “Each body paragraph needs a topic sentence, evidence, and analysis.” “Use transitions like ‘furthermore’ and ‘in conclusion.’”
These formulas have dominated instruction for decades despite mounting evidence they don’t work.
James Berlin called this approach “current-traditional rhetoric”—treating writing as arranging pre-existing thoughts according to prescribed patterns. Get the structure right, the thinking goes, and content will follow.
Researchers have been pushing back for forty years. Janet Emig’s 1971 study showed that real writers don’t work this way—they don’t start with complete theses and then hunt for support. They discover what they think through writing itself.
Linda Flower and John Hayes demonstrated that writing involves recursive problem-solving, not linear execution of a plan. You figure things out as you go, looping back, reconceiving, trying again.
Nancy Sommers found that experienced writers revise for meaning—they reconsider arguments, restructure for logic. Student writers trained in formulas just polish: fix grammar, swap word choices, and tweak sentences.
The formulas teach them to think of revision as cleanup rather than as thinking. As the great historian Edmund S. Morgan once told me (apparently paraphrasing Ernest Hemingway): “There’s no writing, only rewriting.”
Ann Penrose and Cheryl Geisler showed that novice writers treat sources as information to mine for supporting quotes, while expert writers engage sources as arguments to understand and position against. Formula-based instruction reinforces the novice approach: find three sources supporting your thesis, plug them in.
So why do formulas persist? David Russell points to institutional pressures: they make grading easier, create the appearance of standards, and let writing be taught by people with minimal composition training. They promise efficiency—teach the five-paragraph essay once, students apply it everywhere.
But as Kathleen Blake Yancey notes, that promise is hollow. The five-paragraph essay doesn’t transfer. Students who can crank it out on demand can’t adapt to situations requiring different approaches. They’ve learned a template, not guiding principles.
Lisa Delpit adds that formula-based instruction may particularly hurt students from non-dominant backgrounds. When we teach writing as rule-following, we privilege kids already familiar with academic conventions while mystifying those conventions for everyone else. We fail to make explicit what actually makes writing work.
Research shows what does work: Anne Beaufort’s longitudinal studies found students improve through sustained engagement with real writing tasks, extensive practice, and substantive feedback from more experienced practitioners.
Paul Prior demonstrated that learning to write in any field involves apprenticeship—watching experts work, getting coached on your attempts, gradually internalizing quality standards through practice with feedback.
Sound familiar? That’s remarkably close to how AI learns: extensive examples plus expert guidance about what works.
How AI Actually Learns to Write
Understanding AI’s learning process reveals both why it beats formula-following students and what better teaching might look like.
Stage One: Gorging on Examples
The model ingests millions of text examples—essays, articles, books, reports. Through massive exposure, it identifies patterns: how words tend to follow each other, how sentences connect, what structures appear in different genres.
This resembles what Vygotsky called learning through immersion—you learn by exposure to how practitioners actually work. The model sees vast variation in how real writers write, not how formulas say they should write.
But at this stage, the model can’t judge quality. It can match patterns but not distinguish excellent from mediocre writing.
Stage Two: Learning from Expert Judgment
Here’s where it gets interesting for teaching. Human experts evaluate the model’s outputs with nuanced commentary, much like Nancy Sommers describes expert response to student writing:
“This explanation works—you define terms before using them, move simple to complex, use concrete examples.”
“This argument fails—you introduce claims without support, circle back repetitively, ignore obvious objections.”
“This narrative has good pacing and detail, but I don’t see how it connects to your larger point.”
These experts understand writing the way Anne Beaufort describes expertise: they’ve internalized standards through extensive practice, can articulate why approaches work or fail, and recognize patterns across varied examples.
Through thousands of these evaluations, the model identifies patterns correlating with expert judgment. This is classic apprenticeship: novice produces attempts, expert evaluates, novice gradually internalizes expert standards through repeated practice with feedback.
Stage Three: Comparative Judgment
Experts also compare outputs: “Response A is clearer than B because…” This helps the model grasp quality differences.
Carol Berkenkotter and Thomas Huckin call this developing “genre knowledge”—understanding not just what features texts have but why certain features work better in certain situations. The model learns contextual appropriateness, not just abstract rules.
The Result
The trained model can produce prose exhibiting effective writing features because it’s learned patterns that experts identify as marking quality. It explains clearly because experts taught it what clarity looks like. It integrates evidence smoothly because experts showed it thousands of examples.
That’s why AI often beats formula-following students: it’s learned from expert guidance about what actually works, not abstract rules. As Charles Bazerman might say, it’s been “enculturated into literate practices” through exposure and expert guidance rather than rule memorization.
But—crucially—AI lacks what Flower and Hayes identified as writing’s central cognitive processes: goal-setting, planning for meaning, and metacognitive awareness. It generates text without intentionality, and it revises for smoothness without reconceiving for meaning.
What This Means for Teaching: The Case for Example-Based Instruction
If AI learns through examples plus expert guidance—and research shows that’s how humans learn effectively—shouldn’t we teach that way?
Composition scholars have long pushed for approaches centered on examples and authentic practice. James Moffett advocated “learning by doing” through real communication. Elizabeth Wardle and Doug Downs developed a “Writing About Writing” curriculum in which students analyze exemplary texts while learning about writing research. Charles Bazerman showed that writers learn through extensive reading within genres, gradually internalizing how particular communities construct knowledge through text.
You learn to write academic arguments by reading hundreds of them, not by memorizing simplistic formulas.
Moving from Rules to Examples
Instead of starting with abstractions—“essays need thesis statements”—start with concrete examples and expert analysis, as Wendy Bishop and other process folks recommended.
Show three opening paragraphs with commentary:
“This works because it establishes real stakes—it identifies a puzzle existing scholarship hasn’t addressed. Notice how it defines key terms immediately. The final sentence suggests the essay’s direction without giving everything away.”
This is what David Bartholomae calls “inventing the university”—helping students see how expert writers actually operate, making visible the moves that create effective prose.
Students aren’t just seeing examples—they’re seeing them through expert eyes, learning what experienced readers notice and value. Marilyn Sternglass’s research showed students develop as writers through sustained engagement with expert readers who make their judgments visible and teachable.
Making Expert Thinking Visible
Flower and Hayes identified that expert writers engage in complex recursive processes: setting goals, generating ideas, organizing, drafting, reviewing, revising. Novices often skip steps or do them superficially.
AI training makes expert evaluation visible in new ways: “This paragraph doesn’t achieve its goal because…” “This evidence doesn’t support your claim because…”
We can do the same. Don’t just mark essays with grades. Make your expert reading visible:
“I’m noticing your second paragraph makes a claim but provides no evidence. You write: ‘Morrison challenges conventional narratives of freedom.’ Interesting claim—but then you jump to something else.
Compare this to the professional essay where the writer made a similar claim, then quoted a specific passage, explained how it works, and connected it to the larger argument.”
This is what philosopher Donald Schön called “reflection-in-action”—making explicit the tacit knowledge experts use when evaluating writing.
Teaching Real Revision
Harvard’s Nancy Sommers found that experienced writers revise substantively—reconceiving arguments, restructuring logic, cutting what doesn’t serve the purpose. Novice writers just polish.
AI can smooth prose—matching patterns experts identify as creating clarity. What it can’t do is recognize an entire essay needs reconceiving because the argument doesn’t actually work.
Teach that deeper reconceptualization. Show multiple drafts of strong essays getting better through substantive changes:
“The first draft buried the main argument in paragraph five. The revision moves it to the opening and rebuilds around it. Why is that better? Readers get the central claim upfront.”
“The original had three examples all showing the same thing. The revision cuts one and adds a complicating example the argument must account for. Why stronger? Engaging complexity makes arguments more sophisticated.”
Teaching Expert Engagement with Sources
Penrose and Geisler showed novice writers treat sources as repositories of information while experts engage them as arguments. AI can find and summarize sources but can’t experience genuine intellectual engagement.
Research writing should mean what Penrose and Geisler call “epistemic transformation”—using sources to develop understanding, not just support predetermined claims.
Show how expert scholars engage sources:
“Notice how this scholar summarizes three existing interpretations, identifies what they share and where they differ, then positions her argument as addressing what all three miss. She’s identifying a gap that makes her contribution necessary.”
Writing as Situated Practice
Anne Beaufort showed that writing expertise develops through sustained practice within discourse communities, where you learn five knowledge domains: discourse community knowledge, subject matter knowledge, genre knowledge, rhetorical knowledge, and writing process knowledge.
AI training inadvertently shows this: the model learns genre conventions, rhetorical principles, and community standards through extensive examples and expert guidance.
We should teach similarly. Don’t teach “writing” abstractly. Teach writing-in-contexts: How do historians construct arguments? What counts as evidence in literary analysis? How do scientists report findings?
Show examples from specific genres with commentary:
“This scientific article follows IMRAD structure—Introduction, Methods, Results, Discussion. That’s not arbitrary; it reflects how scientific communities construct knowledge: state the question, show methods so others can evaluate and replicate, report findings objectively, interpret significance.”
A Research-Based Approach
If we’re teaching the way AI learns—examples plus expert guidance—aligned with composition research, here’s what it looks like:
▪ Extensive Reading as Apprenticeship: Students read constantly, actively. Every reading assignment includes craft attention:
“We’re reading this essay because it demonstrates inductive structure beautifully. The writer begins with specific observations, accumulates evidence, and only in paragraph five articulates the general pattern. This works because…”
This is what cognitive apprenticeship theory describes: making expert thinking visible so novices can learn to think similarly.
▪ Portfolios and Reflection: Kathleen Blake Yancey showed that students develop as writers when they develop metacognitive awareness—understanding their own processes and making deliberate choices.
Have students maintain portfolios with reflective commentary:
“Include three effective opening paragraphs from your reading. For each, explain what makes it work. Then include an opening from your own writing and explain your strategy and why you chose it.”
This develops what Flower and Hayes call “metacognitive knowledge”—conscious awareness of writing strategies and when to use them.
▪ Workshopping: Peter Elbow and Donald Murray showed writers develop through giving and receiving feedback. But Anne Ruggles Gere’s research showed effective peer response requires training students to read like experts.
Teach them to provide the kind of commentary AI trainers give:
“Don’t just say ‘good job’ or ‘this is confusing.’ Be specific: ‘Your second paragraph confuses me because the pronoun “it” could refer to either X or Y. I had to reread to figure out what you meant.’”
▪ Conferences: Donald Murray emphasized one-on-one expert guidance. This mirrors AI training’s core principle: expert evaluation of specific attempts with detailed feedback.
Regular conferences where you provide expert commentary on drafts:
“You’ve made an interesting claim in paragraph three, but I’m not seeing how it connects to your opening question. Let’s talk through the logic. What’s the relationship between this claim and your larger argument?”
This is scaffolding within what Vygotsky called the “zone of proximal development”—expert guidance on tasks just beyond what students can do alone.
Beyond What AI Can Do
Here’s where we must go beyond AI, drawing on what composition research says actually matters.
Purpose and Rhetorical Situation
AI can match patterns but has no sense of purpose or situation. Lloyd Bitzer emphasized that effective writing responds to an exigence, a real problem that calls it into being, and is shaped by audience and constraints.
Teach rhetorical analysis:
“What problem makes this writing necessary? Who’s your audience—what do they know, value, and need? What constraints affect you—length, genre, time, evidence?”
This is what Carolyn Miller calls “genre as social action”—understanding writing as purposeful response to situations, not just matching patterns.
Invention and Discovery
Janet Emig and James Britton showed that writing is itself a mode of learning, not just transcribing pre-existing thought.
Humans, unlike AI, can and should: “Start writing before you know your argument. Write to discover what you think. Follow tangents. Let contradictions emerge. The mess is where thinking happens.”
This aligns with Ann Berthoff’s work on writing as making meaning.
Voice and Presence
AI-generated text is characteristically voiceless. Peter Elbow and Donald Murray emphasized voice as essential—not just personal voice but the sense of a particular intelligence at work.
Teach that effective writing sounds like someone, that what Aristotle called “ethos” emerges from presence in prose:
“Read this from Baldwin: ‘I was trying to claim my inheritance. And that inheritance was not peace, but war.’ You hear Baldwin—his directness, his refusal of sentimentality, his complexity. Now this AI paragraph: ‘Civil rights activism involved complex negotiations.’ Technically correct, utterly voiceless.”
Intellectual Honesty
Wayne Booth emphasized rhetoric’s ethical dimensions. AI will generate confident assertions regardless of evidence. Human writers must practice intellectual honesty.
Teach what Booth called “the rhetoric of assent”—earning trust through honesty about limitations, fair treatment of opposing views, genuine engagement with complexity:
“When your evidence complicates your argument, acknowledge it. This doesn’t weaken your case—it strengthens your credibility. Readers trust writers who engage counterevidence honestly.”
What This Looks Like in Practice
Bringing it all together, here’s a research-based writing pedagogy for an AI age:
▪ Immersion: Following Bazerman, Beaufort, and others, students read extensively within specific discourse communities—not just for content but for craft. Build libraries of exemplary writing with expert commentary. Students encounter hundreds of examples while learning to read as writers.
▪ Cognitive Apprenticeship: Following Allan Collins and John Seely Brown, make your expert thinking visible: “When I read this opening, I’m looking for: What’s the question? Why does it matter? Where’s this going? This opening answers all three.”
▪ Portfolios and Reflection: Following Yancey, use portfolios with substantial reflection to develop metacognitive awareness. Students include multiple drafts showing revision, examples from reading they learned from, reflective commentary analyzing their development.
▪ Workshops and Conferences: Following Elbow, Murray, and Gere, create workshop communities where students learn from each other with guidance in expert-style feedback. Provide one-on-one conferences with detailed expert feedback on drafts—time-intensive but research shows it’s among the most effective interventions.
▪ Genre-Based Assignments: Following Russell, Beaufort, and writing-across-the-curriculum research, teach writing within specific situations and communities rather than as generic skill. Don’t assign “an essay.” Assign: “Write a literary analysis for an academic audience” or “Write an op-ed for general readers.” Each requires different approaches learned through genre study.
▪ Writing-About-Writing: Following Wardle and Downs, have students read composition research and apply it to their practice: “Read Sommers on revision. Then analyze your own process. Are you revising substantively or cosmetically? What would change if you revised like the experienced writers Sommers describes?”
The Emerging Research on AI
Dylan Shaffer and others found that students who use AI most effectively are those who already have strong writing skills—they can evaluate AI output, identify weaknesses, and revise purposefully.
Students who haven’t internalized standards of effective writing can’t distinguish effective from ineffective AI output; the tool just helps them produce more sophisticated-sounding but still problematic prose. Once those standards are in place, students can use AI productively.
Anna Mills and others suggest we should teach with AI, not against it—using it to help students understand how writing works, what makes prose effective, and when to trust or revise AI suggestions. This requires that students have the very expertise formula-based instruction fails to develop.
Why This Matters Now
AI training models what composition research has long advocated: learning through extensive examples, expert guidance about quality, sustained practice with substantive feedback, development of internalized standards.
The scholars I’ve cited—Sommers, Flower and Hayes, Beaufort, Bazerman, Penrose and Geisler, Yancey, Murray, Elbow, and many others—have been arguing for approaches aligned with how both AI and effective human writers actually learn.
Maybe now we’ll finally listen.
Teach writing through extensive reading paired with expert commentary on craft. Have students compare weak and strong writing side by side. Make your own evaluative thinking visible when you respond to their work.
Require real revision based on substantive feedback, not cosmetic editing. Build in metacognitive reflection so students understand their own processes. Treat writing as apprenticeship within particular discourse communities. And above all, teach writing as epistemic—as a way of discovering what one thinks, not merely reporting it.
Then teach what AI cannot do. Teach students to write with purpose in response to real rhetorical situations. Teach revision not just as the rearrangement of sentences, but as rethinking an argument.
Teach genuine inquiry driven by authentic questions. Teach voice as the presence of a mind at work on the page. Teach ethical engagement with sources and ideas.
This approach prepares students to use AI’s pattern-matching tools intelligently while cultivating what makes writing irreducibly human: purpose, judgment, discovery, voice, honesty, and responsibility.
It’s time to teach writing the way research—and now AI—shows it’s actually learned: through examples, expert guidance, sustained practice, and development of conscious understanding that allows purposeful choice.
That’s writing instruction adequate to our moment, grounded in research, and responsive to technological change.
Raise the Bar
We stand at a fork in the road.
Either we continue to teach writing as a series of formulas and students will quite rationally outsource the task to machines. Or we raise the bar.
What AI cannot do—at least not reliably—is think courageously, notice what is strange, press evidence until it yields something unexpected, or risk an argument that might fail.
If we teach students only to produce predictable essays, we are training them to compete with an algorithm that will always be faster, cheaper, and more compliant.
If we teach them to cultivate judgment—to make claims that surprise, to analyze rather than summarize, to wrestle seriously with counterargument, to connect ideas across domains—then we are teaching something no template can automate.
The choice is ours.
We can teach students to write like AI. Or we can teach them to write better than AI.
If we do not make that choice consciously, it will be made for us. And the prose will no longer belong to them.
