Posted in Academic writing, History, Writing

On Writing: How the King James Bible Shaped the English Language and Still Teaches Us How to Write

When you’re interested in improving your writing, it’s a good idea to have some models to work from.  I’ve presented some of my favorite models in this blog.  These have included a number of examples of good writing by both academics (Max Weber, E.P. Thompson, Jim March, and Mary Metz) and nonacademics (Frederick Douglass, Elmore Leonard).

Today I want to explore one of the two most influential forces in shaping the English language over the years:  The King James Bible.  (The other, of course, is Shakespeare.)  Earlier I presented one analysis by Ann Wroe, which focused on the thundering sound of the prose in this extraordinary text.  Here I want to draw on two other pieces of writing that explore the powerful model that this Bible provides us all for how to write in English with power and grace.  One is by Adam Nicolson, who wrote a book on the subject (God’s Secretaries: The Making of the King James Bible).  The other, which I reprint in full at the end of this post, is by Charles McGrath.  

The impulse to produce a Bible in English arose with the English Reformation, as a Protestant vernacular alternative to the Latin version that was canonical in the Catholic Church.  The text was commissioned in 1604 by King James, who succeeded Elizabeth I after her long reign, and it was constructed by a committee of 54 scholars.  They went back to the original texts in Hebrew and Greek, but they drew heavily on earlier English translations. 

The foundational translation was written by William Tyndale, who was executed for heresy near Brussels in 1536, and this was reworked into what became known as the Geneva Bible by Calvinists living in Switzerland.  One aim of the committee was to produce a version more compatible with the English and Scottish forms of the faith, but for James the primary impetus was to remove the anti-royalist tone embedded in the earlier text.  Recent scholars have concluded that 84% of the words in the King James New Testament and 76% in the Old Testament are Tyndale’s.

As Nicolson puts it, the language of the King James Bible is an amazing mix — “majestic but intimate, the voice of the universe somehow heard in the innermost part of the ear.”

You don’t have to be a Christian to hear the power of those words—simple in vocabulary, cosmic in scale, stately in their rhythms, deeply emotional in their impact. Most of us might think we have forgotten its words, but the King James Bible has sewn itself into the fabric of the language. If a child is ever the apple of her parents’ eye or an idea seems as old as the hills, if we are at death’s door or at our wits’ end, if we have gone through a baptism of fire or are about to bite the dust, if it seems at times that the blind are leading the blind or we are casting pearls before swine, if you are either buttering someone up or casting the first stone, the King James Bible, whether we know it or not, is speaking through us. The haves and have-nots, heads on plates, thieves in the night, scum of the earth, best until last, sackcloth and ashes, streets paved in gold, and the skin of one’s teeth: All of them have been transmitted to us by the translators who did their magnificent work 400 years ago.

Wouldn’t it be lovely if we academics could write in a way that sticks in people’s minds for 400 years?  Well, maybe that’s a bit too much to hope for.  But even if we can’t aspire to be epochally epigrammatic, there are still lessons we can learn from Tyndale and the Group of 54.  

One such lesson is the power of simplicity.  Too often scholars feel the compulsion to gussy up their language with jargon and Latinate constructions in the name of professionalism.  The implicit message is that if any idiot can understand what you’re saying, then you’re not being a serious scholar.  But the magic of the King James Bible is that it uses simple Anglo-Saxon words to make the most profound statements.  Listen to this passage from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favor to men of skill, but time and chance happeneth to them all.

Or this sentence from Paul’s letter to the Philippians:

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Or the stunning opening line of the Gospel of John:

In the beginning was the Word, and the Word was with God, and the Word was God.

This is a text that can speak clearly to the untutored while at the same time elevating them to a higher plane.  For us it’s a model for how to match simplicity with profundity.


Why the King James Bible Endures

By CHARLES McGRATH

The King James Bible, which was first published 400 years ago next month, may be the single best thing ever accomplished by a committee. The Bible was the work of 54 scholars and clergymen who met over seven years in six nine-man subcommittees, called “companies.” In a preface to the new Bible, Miles Smith, one of the translators and a man so impatient that he once walked out of a boring sermon and went to the pub, wrote that anything new inevitably “endured many a storm of gainsaying, or opposition.” So there must have been disputes — shouting; table pounding; high-ruffed, black-gowned clergymen folding their arms and stomping out of the room — but there is no record of them. And the finished text shows none of the PowerPoint insipidness we associate with committee-speak or with later group translations like the 1961 New English Bible, which T.S. Eliot said did not even rise to “dignified mediocrity.” Far from bland, the King James Bible is one of the great masterpieces of English prose.

The issue of how, or even whether, to translate sacred texts was a fraught one in those days, often with political as well as religious overtones, and it still is. The Roman Catholic Church, for instance, recently decided to retranslate the missal used at Mass to make it more formal and less conversational. Critics have complained that the new text is awkward and archaic, while its defenders (some of whom probably still prefer the Mass in Latin) insist that’s just the point — that language a little out of the ordinary is more devotional and inspiring. No one would ever say that the King James Bible is an easy read. And yet its very oddness is part of its power.

From the start, the King James Bible was intended to be not a literary creation but rather a political and theological compromise between the established church and the growing Puritan movement. What the king cared about was clarity, simplicity, doctrinal orthodoxy. The translators worked hard on that, going back to the original Hebrew, Greek and Aramaic, and yet they also spent a lot of time tweaking the English text in the interest of euphony and musicality. Time and again the language seems to slip almost unconsciously into iambic pentameter — this was the age of Shakespeare, commentators are always reminding us — and right from the beginning the translators embraced the principles of repetition and the dramatic pause: “In the beginning God created the Heaven, and the Earth. And the earth was without forme, and void, and darkenesse was upon the face of the deepe: and the Spirit of God mooved upon the face of the waters.”

The influence of the King James Bible is so great that the list of idioms from it that have slipped into everyday speech, taking such deep root that we use them all the time without any awareness of their biblical origin, is practically endless: sour grapes; fatted calf; salt of the earth; drop in a bucket; skin of one’s teeth; apple of one’s eye; girded loins; feet of clay; whited sepulchers; filthy lucre; pearls before swine; fly in the ointment; fight the good fight; eat, drink and be merry.

But what we also love about this Bible is its strangeness — its weird punctuation, odd pronouns (as in “Our Father, which art in heaven”), all those verbs that end in “eth”: “In the morning it flourisheth, and groweth up; in the evening it is cut downe, and withereth.” As Robert Alter has demonstrated in his startling and revealing translations of the Psalms and the Pentateuch, the Hebrew Bible is even stranger, and in ways that the King James translators may not have entirely comprehended, and yet their text performs the great trick of being at once recognizably English and also a little bit foreign. You can hear its distinctive cadences in the speeches of Lincoln, the poetry of Whitman, the novels of Cormac McCarthy.

Even in its time, the King James Bible was deliberately archaic in grammar and phraseology: an expression like “yea, verily,” for example, had gone out of fashion some 50 years before. The translators didn’t want their Bible to sound contemporary, because they knew that contemporaneity quickly goes out of fashion. In his very useful guide, “God’s Secretaries: The Making of the King James Bible,” Adam Nicolson points out that when the Victorians came to revise the King James Bible in 1885, they embraced this principle wholeheartedly, and like those people who whack and scratch old furniture to make it look even more ancient, they threw in a lot of extra Jacobeanisms, like “howbeit,” “peradventure,” “holden” and “behooved.”

This is the opposite, of course, of the procedure followed by most new translations, starting with Good News for Modern Man, a paperback Bible published by the American Bible Society in 1966, whose goal was to reflect not the language of the Bible but its ideas, rendering them into current terms, so that Ezekiel 23:20, for example (“For she doted upon their paramours, whose flesh is as the flesh of asses, and whose issue is like the issue of horses”) becomes “She was filled with lust for oversexed men who had all the lustfulness of donkeys or stallions.”

There are countless new Bibles available now, many of them specialized: a Bible for couples, for gays and lesbians, for recovering addicts, for surfers, for skaters and skateboarders, not to mention a superheroes Bible for children. They are all “accessible,” but most are a little tone-deaf, lacking in grandeur and majesty, replacing “through a glasse, darkly,” for instance, with something along the lines of “like a dim image in a mirror.” But what this modernizing ignores is that the most powerful religious language is often a little elevated and incantatory, even ambiguous or just plain hard to understand. The new Catholic missal, for instance, does not seem to fear the forbidding phrase, replacing the statement that Jesus is “one in being with the Father” with the more complicated idea that he is “consubstantial with the Father.”

Not everyone prefers a God who talks like a pal or a guidance counselor. Even some of us who are nonbelievers want a God who speaketh like — well, God. The great achievement of the King James translators is to have arrived at a language that is both ordinary and heightened, that rings in the ear and lingers in the mind. And that all 54 of them were able to agree on every phrase, every comma, without sounding as gassy and evasive as the Financial Crisis Inquiry Commission, is little short of amazing, in itself proof of something like divine inspiration.

 

Posted in Academic writing, Writing

Elmore Leonard’s Master Class on Writing a Scene

As you may have figured out by now, I’m a big fan of Elmore Leonard.  I wrote an earlier post about the deft way he leads you into a story and introduces a character on the very first page of a book.  He never gives his readers fits the way we academic writers do ours, by making them plow through half a paper before they finally discover its point.

Here I want to show you one of the best scenes Leonard ever wrote — and he wrote a lot of them.  It’s from the book Be Cool, which is the sequel to another called Get Shorty.  Both were turned into films starring John Travolta as Chili Palmer.  Chili is a loan shark from back east who heads to Hollywood to collect on a marker, but what he really wants is to make movies.  As a favor, he looks up a producer who owes someone else money, and instead of collecting he pitches a story.  The rest of the series is about the cinematic mess that ensues.


In the scene below, Chili runs into a minor thug floating in a backyard swimming pool.  In the larger story this is a nothing scene, but it’s stunning how Leonard turns it into a tour de force.  In a virtuoso display of writing, he shows Chili effortlessly taking the thug apart while also mesmerizing him.  Chili the movie maker rewrites the scene as he’s acting it out and then directs the thug on the raft in how to play his own part more effectively.

Watch how Chili does it:

He got out of there, went into the living room and stood looking around, seeing it now as the lobby of an expensive health club, a spa: walk through there to the pool where one of the guests was drying out. From here Chili had a clear view of Derek, the kid floating in the pool on the yellow raft, sun beating down on him, his shades reflecting the light. Chili walked outside, crossed the terrace to where a quart bottle of Absolut, almost full, stood at the tiled edge of the pool. He looked down at Derek laid out in his undershorts.

He said, “Derek Stones?”

And watched the kid raise his head from the round edge of the raft, stare this way through his shades and let his head fall back again.

“Your mother called,” Chili said. “You have to go home.”

A wrought-iron table and chairs with cushions stood in an arbor of shade close to the house. Chili walked over and sat down. He watched Derek struggle to pull himself up and begin paddling with his hands, bringing the raft to the side of the pool; watched him try to crawl out and fall in the water when the raft moved out from under him. Derek made it finally, came over to the table and stood there showing Chili his skinny white body, his titty rings, his tats, his sagging wet underwear.

“You wake me up,” Derek said, “with some shit about I’m suppose to go home? I don’t even know you, man. You from the funeral home? Put on your undertaker suit and deliver Tommy’s ashes? No, I forgot, they’re being picked up. But you’re either from the funeral home or—shit, I know what you are, you’re a lawyer. I can tell ’cause all you assholes look alike.”

Chili said to him, “Derek, are you trying to fuck with me?”

Derek said, “Shit, if I was fucking with you, man, you’d know it.”

Chili was shaking his head before the words were out of Derek’s mouth.

“You sure that’s what you want to say? ‘If I was fuckin with you, man, you’d know it?’ The ‘If I was fucking with you’ part is okay, if that’s the way you want to go. But then, ‘you’d know it’—come on, you can do better than that.”

Derek took off his shades and squinted at him.

“The fuck’re you talking about?”

“You hear a line,” Chili said, “like in a movie. The one guy says, ‘Are you trying to fuck with me?’ The other guy comes back with, ‘If I was fuckin with you, man . . .’ and you want to hear what he says next ’cause it’s the punch line. He’s not gonna say, ‘You’d know it.’ When the first guy says, ‘Are you trying to fuck with me?’ he already knows the guy’s fuckin with him, it’s a rhetorical question. So the other guy isn’t gonna say ‘you’d know it.’ You understand what I’m saying? ‘You’d know it’ doesn’t do the job. You have to think of something better than that.”

“Wait,” Derek said, in his wet underwear, weaving a little, still half in the bag. “The first guy goes, ‘You trying to fuck with me?’ Okay, and the second guy goes, ‘If I was fucking with you . . . If I was fucking with you, man . . .’”

Chili waited. “Yeah?”

“Okay, how about, ‘You wouldn’t live to tell about it?’”

“Jesus Christ,” Chili said, “come on, Derek, does that make sense? ‘You wouldn’t live to tell about it’? What’s that mean? Fuckin with a guy’s the same as taking him out?” Chili got up from the table. “What you have to do, Derek, you want to be cool, is have punch lines on the top of your head for every occasion. Guy says, ‘Are you trying to fuck with me?’ You’re ready, you come back with your line.” Chili said, “Think about it,” walking away. He went in the house through the glass doors to the bedroom.

Don’t you wish you could be Elmore Leonard and write a scene like that, or be Chili Palmer and construct it on the fly?  I sure do, and I’m not sure which role would be the more gratifying.

You could have a lot of fun picking apart the things that make the scene work.  Chili the movie maker walking into the living room and suddenly “seeing it as the lobby of an expensive health spa.”  Derek with “his skinny white body, his titty rings, his tats, his sagging wet underwear.”  The way Derek talks: “The fuck’re you talking about?”  Derek struggling to come up with the right line to replace the lame one he thought up himself.  Chili explaining the core dilemma of the writer, that you can’t ever set up a punchline and then fail to deliver.

But instead of explaining his joke, let’s just learn from his example.  Deliver what you promise.  Reward the effort that your readers invest in engaging with your work.  Have your key insight ready, deliver it on cue, and then walk away.  Never step on the punchline.

Posted in Academic writing, Wit, Writing

Wit (and the Art of Writing)

 

They laughed when I told them I wanted to be a comedian. Well they’re not laughing now.

Bob Monkhouse

Wit is notoriously difficult to analyze, and any effort to do so is likely to turn out dry and witless.  But two recent authors have done a remarkably effective job of trying to make sense of what constitutes wit, and they manage to do so wittily.  That’s a risky venture, which most sensible people would avoid like COVID-19.  One book is Wit’s End by James Geary; the other is Humour by Terry Eagleton.  The epigraph above comes from Eagleton.  Both have the good sense to reflect on the subject without analyzing it to death or trampling on the punchline.  Eagleton uses Freud as a negative case in point:

Children, insists Freud, lack all sense of the comic, but it is possible he is confusing them with the author of a notoriously unfunny work entitled Jokes and Their Relation to the Unconscious.

Interestingly, Geary says that wit begins with the pun.

Despite its bad reputation, punning is, in fact, among the highest displays of wit. Indeed, puns point to the essence of all true wit—the ability to hold in the mind two different ideas about the same thing at the same time.

In poems, words rhyme; in puns, ideas rhyme. This is the ultimate test of wittiness: keeping your balance even when you’re of two minds.

Groucho’s quip upon entering a restaurant and seeing a previous spouse at another table—“Marx spots the ex.”


Instead of avoiding ambiguity, wit revels in it, using paradoxical juxtaposition to shake you out of a trance and ask you to consider an issue from a strikingly different angle.  Arthur Koestler described the pun as “two strings of thought tied together by an acoustic knot.”  There’s an echo here of Emerson’s epigram, “A foolish consistency is the hobgoblin of little minds…”  Misdirection can lead to comic relief but it can also produce intellectual insight.

Geary goes on to show how the joke is integrally related to other forms of creative thought:

There is no sharp boundary splitting the wit of the scientist, inventor, or improviser from that of the artist, the sage, or the jester. The creative experience moves seamlessly from the “Aha!” of scientific discovery to the “Ah” of aesthetic insight to the “Ha-ha” of the pun and the punch line.  “Comic discovery is paradox stated—scientific discovery is paradox resolved,” Koestler wrote.

He shows that wit and metaphor have a lot in common.

If wit consists, as we say, in the ability to hold in the mind two different ideas about the same thing at the same time, this is exactly the function of metaphor. A metaphor carries the attention from the concrete to the abstract, from object to concept. When that direction is reversed, and attention is brought back from concept to object, the mind is surprised. Mistaking the figurative for fact is therefore a signature trick of wit.

Hence it is said that kleptomaniacs don’t understand metaphor because they take things literally.

Both wit and metaphor have these qualities in common:  “brevity, novelty, and clarity.”

Read my lips. Shoot from the hip. Wit switch hits. Wit ad-libs. It teaches new dogs lotsa old tricks. Throw spaghetti ’gainst the wall—wit’s what sticks. You can’t beat it or repeat it, not even with a shtick. Wit rocks the boat. That’s all she wrote.

Eagleton picks up Geary’s theme of how wit and metaphor are grounded in the “aha” of incongruity.

There are many theories of humour in addition to those we have looked at. They include the play theory, the conflict theory, the ambivalence theory, the dispositional theory, the mastery theory, the Gestalt theory, the Piagetian theory and the configurational theory. Several of these, however, are really versions of the incongruity theory, which remains the most plausible account of why we laugh. On this view, humour springs from a clash of incongruous aspects – a sudden shift of perspective, an unexpected slippage of meaning, an arresting dissonance or discrepancy, a momentary defamiliarising of the familiar and so on. As a temporary ‘derailment of sense’, it involves the disruption of orderly thought processes or the violation of laws or conventions. It is, as D. H. Munro puts it, a breach in the usual order of events.

“The Duke’s a long time coming today,” said the Duchess, stirring her tea with the other hand.


He talks about how humor gives us license to be momentarily freed from the shackles of reason and order, a revolt of the id against the superego.  But the key is that reason and order are quickly restored, so the lapse of control is risk-free.

As a pure enunciation that expresses nothing but itself, laughter lacks intrinsic sense, rather like an animal’s cry, but despite this it is richly freighted with cultural meaning. As such, it has a kinship with music. Not only has laughter no inherent meaning, but at its most riotous and convulsive it involves the disintegration of sense, as the body tears one’s speech to fragments and the id pitches the ego into temporary disarray. As with grief, severe pain, extreme fear or blind rage, truly uproarious laughter involves a loss of physical self-control, as the body gets momentarily out of hand and we regress to the uncoordinated state of the infant. It is quite literally a bodily disorder.

It is just the same with the fantasy revolution of carnival, when the morning after the merriment the sun will rise on a thousand empty wine bottles, gnawed chicken legs and lost virginities and everyday life will resume, not without a certain ambiguous sense of relief. Or think of stage comedy, where the audience is never in any doubt that the order so delightfully disrupted will be restored, perhaps even reinforced by this fleeting attempt to flout it, and thus can blend its anarchic pleasures with a degree of conservative self-satisfaction.

Like Geary, Eagleton shows how a key to wit is its ability to hone down an issue to a sharp point, which is captured in a verbal succinctness that is akin to poetry.

Wit has a point, which is why it is sometimes compared to the thrust of a rapier. It is rapier-like in its swift, shapely, streamlined, agile, flashing, glancing, dazzling, dexterous, pointed, clashing, flamboyant aspects, but also because it can stab and wound.

A witticism is a self-conscious verbal performance, but it is one that minimises its own medium, compacting its words into the slimmest possible space in an awareness that the slightest surplus of signification might prove fatal to its success. As with poetry, every verbal unit must pull its weight, and the cadence, rhythm and resonance of a piece of wit may be vital to its impact. The tighter the organisation, the more a verbal slide, ambiguity, conceptual shift or trifling dislocation of syntax registers its effect.

There is a strong lesson for writers in this discussion of wit.  Sharpen the argument, tighten the prose, focus on “brevity, novelty, and clarity.”  Learn from the craft of the poet and the comedian.  Less is more.

One problem with academic writing in particular is that it takes itself too seriously.  It pays for us to keep our wit about us as we write scholarly papers, acknowledging that we don’t know quite as much about the subject as we are letting on.  Conceding a bit of weakness can be quite appealing.  Oscar Wilde:  “I can resist anything but temptation.”

Everyday life involves sustaining a number of polite fictions: that we take a consuming interest in the health and well-being of our most casual acquaintances, that we never think about sex for a single moment, that we are thoroughly familiar with the later work of Schoenberg and so on. It is pleasant to drop the mask for a moment and strike up a comedic solidarity of weakness.

It is as though we are all really play-actors in our conventional social roles, sticking solemnly to our meticulously scripted parts but ready at the slightest fluff or stumble to dissolve into infantile, uproariously irresponsible laughter at the sheer arbitrariness and absurdity of the whole charade.

And don’t forget what Mel Brooks said:  Tragedy is when you cut your finger, and comedy is when someone else walks into an open sewer and dies.

Posted in Academic writing, Higher Education, Teaching, Writing

I Would Rather Do Anything Else than Grade Your Final Papers — Robin Lee Mozer

If the greatest joy that comes from retirement is that I no longer have to attend faculty meetings, the second greatest joy is that I no longer have to grade student papers.  I know, I know: commenting on student writing is a key component of being a good teacher, and there’s a real satisfaction that comes from helping someone become a better thinker and better writer.

But most students are not producing papers to improve their minds or hone their writing skills.  They’re just trying to fulfill a course requirement and get a decent grade.  And this creates a strong incentive not for excellence but for adequacy.  It encourages people to devote most of their energy toward gaming the system.

The key skill is to produce something that looks and feels like a good answer to the exam question or a good analysis of an intellectual problem.  Students have a powerful incentive to achieve the highest grade for the lowest investment of time and intellectual effort.  This means aiming for quantity over quality (puff up the prose to hit the word count) and form over substance (dutifully refer to the required readings without actually drawing meaningful content from them).  Glibness provides useful cover for the absence of content.  It’s depressing to observe how the system fosters discursive means that undermine the purported aims of education.

Back in the days when students turned in physical papers and then received them back with handwritten comments from the instructor, I used to get a twinge in my stomach when I saw that most students didn’t bother to pick up their final papers from the box outside my office.  I felt like a sucker for providing careful comments that no one would ever see.  At one point I even asked students to tell me in advance if they wanted their papers back, so I only commented on the ones that might get read.  But this was even more depressing, since it meant that a lot of students didn’t even mind letting me know that they really only cared about the grade.  The fiction of doing something useful was what helped keep me going.

So, like many other faculty, I responded with joy to a 2016 piece that Robin Lee Mozer wrote in McSweeney’s called “I Would Rather Do Anything Else than Grade Your Final Papers.”  As a public service to teachers everywhere, I’m republishing her essay here.  Enjoy.

 

I WOULD RATHER DO ANYTHING ELSE THAN GRADE YOUR FINAL PAPERS

Dear Students Who Have Just Completed My Class,

I would rather do anything else than grade your Final Papers.

I would rather base jump off of the parking garage next to the student activity center or eat that entire sketchy tray of taco meat leftover from last week’s student achievement luncheon that’s sitting in the department refrigerator or walk all the way from my house to the airport on my hands than grade your Final Papers.

I would rather have a sustained conversation with my grandfather about politics and government-supported healthcare and what’s wrong with the system today and why he doesn’t believe in homeowner’s insurance because it’s all a scam than grade your Final Papers. Rather than grade your Final Papers, I would stand in the aisle at Lowe’s and listen patiently to All the Men mansplain the process of buying lumber and how essential it is to sight down the board before you buy it to ensure that it’s not bowed or cupped or crook because if you buy lumber with defects like that you’re just wasting your money even as I am standing there, sighting down a 2×4 the way my father taught me 15 years ago.

I would rather go to Costco on the Friday afternoon before a three-day weekend. With my preschooler. After preschool.

I would rather go through natural childbirth with twins. With triplets. I would rather take your chemistry final for you. I would rather eat beef stroganoff. I would rather go back to the beginning of the semester like Sisyphus and recreate my syllabus from scratch while simultaneously building an elaborate class website via our university’s shitty web-based course content manager and then teach the entire semester over again than grade your goddamn Final Papers.

I do not want to read your 3AM-energy-drink-fueled excuse for a thesis statement. I do not want to sift through your mixed metaphors, your abundantly employed logical fallacies, your incessant editorializing of your writing process wherein you tell me As I was reading through articles for this paper I noticed that — or In the article that I have chosen to analyze, I believe the author is trying to or worse yet, I sat down to write this paper and ideas kept flowing into my mind as I considered what I should write about because honestly, we both know that the only thing flowing into your mind were thoughts of late night pizza or late night sex or late night pizza and sex, or maybe thoughts of that chemistry final you’re probably going to fail later this week and anyway, you should know by now that any sentence about anything flowing into or out of or around your blessed mind won’t stand in this college writing classroom or Honors seminar or lit survey because we are Professors and dear god, we have Standards.

I do not want to read the one good point you make using the one source that isn’t Wikipedia. I do not want to take the time to notice that it is cited properly. I do not want to read around your 1.25-inch margins or your gauche use of size 13 sans serif fonts when everyone knows that 12-point Times New Roman is just. Fucking. Standard. I do not want to note your missing page numbers. Again. For the sixth time this semester. I do not want to attempt to read your essay printed in lighter ink to save toner, as you say, with the river of faded text from a failing printer cartridge splitting your paper like Charlton Heston in The Ten Commandments, only there, it was a sea and an entire people and here it is your vague stand-in for an argument.

I do not want to be disappointed.

I do not want to think less of you as a human being because I know that you have other classes and that you really should study for that chemistry final because it is organic chemistry and everyone who has ever had a pre-med major for a roommate knows that organic chemistry is the weed out course and even though you do not know this yet because you have never even had any sort of roommate until now, you are going to be weeded out. You are going to be weeded out and then you will be disappointed and I do not want that for you. I do not want that for you because you will have enough disappointments in your life, like when you don’t become a doctor and instead become a philosophy major and realize that you will never make as much money as your brother who went into some soul-sucking STEM field and landed some cushy government contract and made Mom and Dad so proud and who now gives you expensive home appliances like espresso machines and Dyson vacuums for birthday gifts and all you ever send him are socks and that subscription to that shave club for the $6 middle-grade blades.

I do not want you to be disappointed. I would rather do anything else than disappoint you and crush all your hopes and dreams —

Except grade your Final Papers.

The offer to take your chemistry final instead still stands.

Posted in Academic writing, Educational Research, Higher Education, Writing

Getting It Wrong — Rethinking a Life in Scholarship

This post is an overview of my life as a scholar.  I presented an oral version in my job talk at Stanford in 2002.  The idea was to make sense of the path I’d taken in my scholarly writing up to that point.  What were the issues I was looking at and why?  How did these ideas develop over time?  And what lessons can we learn from this process that might be of use to scholars who are just starting out?

This piece first appeared in print as the introduction to a 2005 book called Education, Markets, and the Public Good: The Selected Works of David F. Labaree.  As a friend told me after hearing about the book, “Isn’t this kind of compilation something that’s published after you’re dead?”  So why was I doing this as a mere youth of 58?  The answer: Routledge offered me the opportunity.  Was there ever an academic who turned down the chance to publish something when it arose?  The book was part of a series called — listen for the drum roll — The World Library of Educationalists, which must have a place near the top of the list of bad ideas floated by publishers.  After the first year, when a few libraries rose to the bait, annual sales of this volume never exceeded single digits.  Its rank on the Amazon bestseller list is normally in the two millions.

Needless to say, no one ever read this piece in its originally published form.  So I tried again, this time slightly adapting it for a 2011 volume edited by Wayne Urban called Leaders in the Historical Study of American Education, which consisted of autobiographical sketches by scholars in the field.  It now ranks in the five millions on Amazon, so the essay still never found a reader.  As a result, I decided to give the piece one more chance at life in my blog.  I enjoyed reading it again and thought it offered some value to young scholars just starting out in a daunting profession.  I hope you enjoy it too.

The core insight is that research trajectories are not things you can  carefully map out in advance.  They just happen.  You learn as you go.  And the most effective means of learning from your own work — at least from my experience — arises from getting it wrong, time and time again.  If you’re not getting things wrong, you may not be learning much at all, since you may just be continually finding what you’re looking for.  It may well be that what you need to find are the things you’re not looking for and that you really don’t want to confront.  The things that challenge your own world view, that take you in a direction you’d rather not go, forcing you to give up ideas you really want to keep.

Another insight I got from this process of reflection is that it’s good to know the central weaknesses in the way you do research.  Everyone has them.  Best to acknowledge where you’re coming from and learn to live with that.  These weaknesses don’t discount the value of your work; they just put limits on it.  Your way of doing scholarship is probably better at producing some kinds of insights than others.  That’s OK.  Build on your strengths and let others point out your weaknesses.  You have no obligation and no ability to give the final answer on any important question.  Instead, your job is to make a provocative contribution to the ongoing scholarly conversation and let other scholars take it from there, countering your errors and filling in the gaps.  There is no last word.

Here’s a link to a PDF of the 2011 version.  Hope you find it useful.

 

Adventures in Scholarship

Instead of writing an autobiographical sketch for this volume, I thought it would be more useful to write about the process of scholarship, using my own case as a cautionary tale.  The idea is to help emerging scholars in the field to think about how scholars develop a line of research across a career, both with the hope of disabusing them of misconceptions and showing them how scholarship can unfold as a scary but exhilarating adventure in intellectual development.  The brief story I tell here has three interlocking themes:  You need to study things that resonate with your own experience; you need to take risks and plan to make a lot of mistakes; and you need to rely on friends and colleagues to tell you when you’re going wrong.  Let me explore each of these points.

Study What Resonates with Experience

First, a little about the nature of the issues I explore in my scholarship and then some thoughts about the source of my interest in these issues. My work focuses on the historical sociology of the American system of education and on the thick vein of irony that runs through it.  This system has long presented itself as a model of equal opportunity and open accessibility, and there is a lot of evidence to support these claims.  In comparison with Europe, this upward expansion of access to education came earlier, moved faster, and extended to more people.  Today, virtually anyone can go to some form of postsecondary education in the U.S., and more than two-thirds do.  But what students find when they enter the educational system at any level is that they are gaining equal access to a sharply unequal array of educational experiences.  Why?  Because the system balances open access with radical stratification.  Everyone can go to high school, but quality of education varies radically across schools.  Almost everyone can go to college, but the institutions that are most accessible (community colleges) provide the smallest boost to a student’s life chances, whereas the ones that offer the surest entrée into the best jobs (major research universities) are highly selective.  This extreme mixture of equality and inequality, of accessibility and stratification, is a striking and fascinating characteristic of American education, which I have explored in some form or another in all my work.

Another prominent irony in the story of American education is that this system, which was set up to instill learning, actually undercuts learning because of a strong tendency toward formalism.  Educational consumers (students and their parents) quickly learn that the greatest rewards of the system go to those who attain its highest levels (measured by years of schooling, academic track, and institutional prestige), where credentials are highly scarce and thus the most valuable.  This vertically-skewed incentive structure strongly encourages consumers to game the system by seeking to accumulate the largest number of tokens of attainment – grades, credits, and degrees – in the most prestigious programs at the most selective schools.  However, nothing in this reward structure encourages learning, since the payoff comes from the scarcity of the tokens and not the volume of knowledge accumulated in the process of acquiring these tokens.  At best, learning is a side effect of this kind of credential-driven system.  At worst, it is a casualty of the system, since the structure fosters consumerism among students, who naturally seek to gain the most credentials for the least investment in time and effort.  Thus the logic of the used-car lot takes hold in the halls of learning.

In exploring these two issues of stratification and formalism, I tend to focus on one particular mechanism that helps explain both kinds of educational consequences, and that is the market.  Education in the U.S., I argue, has increasingly become a commodity, which is offered and purchased through market processes in much the same way as other consumer goods.  Educational institutions have to be sensitive to consumers, by providing the mix of educational products that the various sectors of the market demand.  This promotes stratification in education, because consumers want educational credentials that will distinguish them from the pack in their pursuit of social advantage.  It also promotes formalism, because markets operate based on the exchange value of a commodity (what it can be exchanged for) rather than its use value (what it can be used for).  Educational consumerism preserves and increases social inequality, undermines knowledge acquisition, and promotes the dysfunctional overinvestment of public and private resources in an endless race for degrees of advantage.  The result is that education has increasingly come to be seen primarily as a private good, whose benefits accrue only to the owner of the educational credential, rather than a public good, whose benefits are shared by all members of the community even if they don’t have a degree or a child in school.  In many ways, the aim of my work has been to figure out why the American vision of education over the years made this shift from public to private.

This is what my work has focused on in the last 30 years, but why focus on these issues?  Why this obsessive interest in formalism, markets, stratification, and education as arbiter of status competition?  Simple. These were the concerns I grew up with.

George Orwell once described his family’s social location as the lower upper middle class, and this captures the situation of my own family.  In The Road to Wigan Pier, his meditation on class relations in England, he talks about his family as being both culture rich and money poor.[1]  Likewise for mine.  Both of my grandfathers were ministers.  On my father’s side the string of clergy went back four generations in the U.S.  On my mother’s side, not only was her father a minister but so was her mother’s father, who was in turn the heir to a long clerical lineage in Scotland.  All of these ministers were Presbyterians, whose clergy has long had a distinctive history of being highly educated cultural leaders who were poor as church mice.  The last is a bit of an exaggeration, but the point is that their prestige and authority came from learning and not from wealth.  So they tended to value education and disdain grubbing for money.  My father was an engineer who managed to support his family in a modest but comfortable middle-class lifestyle.  He and my mother plowed all of their resources into the education of their three sons, sending all of them to a private high school in Philadelphia (Germantown Academy) and to private colleges (Lehigh, Drexel, Wooster, and Harvard).  Both of my parents were educated at elite schools (Princeton and Wilson) – on ministerial scholarships – and they wanted to do the same for their own children.

What this meant is that we grew up taking great pride in our cultural heritage and educational accomplishments and adopting a condescending attitude to those who simply engaged in trade for a living.  Coupled with this condescension was a distinct tinge of envy for the nice clothes, well decorated houses, new cars, and fancy trips that the families of our friends experienced.  I thought of my family as a kind of frayed nobility, raising the flag of culture in a materialistic society while wearing hand-me-down clothes.  From this background, it was only natural for me to study education as the central social institution, and to focus in particular on the way education had been corrupted by the consumerism and status-competition of a market society.  In doing so I was merely entering the family business.  Someone out there needed to stand up for substantive over formalistic learning and for the public good over the private good, while at the same time calling attention to the dangers of a social hierarchy based on material status.  So I launched my scholarship from a platform of snobbish populism – a hankering for a lost world where position was grounded on the cultural authority of true learning and where mere credentialism could not hold sway.

Expect to Get Things Wrong

Becoming a scholar is not easy under the best of circumstances, and we may make it even harder by trying to imbue emerging scholars with a dedication to getting things right.[2]  In doctoral programs and tenure reviews, we stress the importance of rigorous research methods and study design, scrupulous attribution of ideas, methodical accumulation of data, and cautious validation of claims.  Being careful to stand on firm methodological ground is not in itself a bad thing for scholars, but trying to be right all the time can easily make us overly cautious, encouraging us to keep so close to our data and so far from controversy that we end up saying nothing that’s really interesting.  A close look at how scholars actually carry out their craft reveals that they generally thrive on frustration.  Or at least that has been my experience.  When I look back at my own work over the years, I find that the most consistent element is a tendency to get it wrong.  Time after time I have had to admit failure in the pursuit of my intended goal, abandon an idea that I had once warmly embraced, or backtrack to correct a major error.  In the short run these missteps were disturbing, but in the long run they have proven fruitful.

Maybe I’m just rationalizing, but it seems that getting it wrong is an integral part of scholarship.  For one thing, it’s central to the process of writing.  Ideas often sound good in our heads and resonate nicely in the classroom, but the real test is whether they work on paper.[3]  Only there can we figure out the details of the argument, assess the quality of the logic, and weigh the salience of the evidence.  And whenever we try to translate a promising idea into a written text, we inevitably encounter problems that weren’t apparent when we were happily playing with the idea over lunch.  This is part of what makes writing so scary and so exciting:  It’s a high wire act, in which failure threatens us with every step forward.  Can we get past each of these apparently insuperable problems?  We don’t really know until we get to the end.

This means that if there’s little risk in writing a paper there’s also little potential reward.  If all we’re doing is putting a fully developed idea down on paper, then this isn’t writing; it’s transcribing.  Scholarly writing is most productive when authors are learning from the process, and this happens only if the writing helps us figure out something we didn’t really know (or only sensed), helps us solve an intellectual problem we weren’t sure was solvable, or makes us turn a corner we didn’t know was there.  Learning is one of the main things that makes the actual process of writing (as opposed to the final published product) worthwhile for the writer.  And if we aren’t learning something from our own writing, then there’s little reason to think that future readers will learn from it either.  But these kinds of learning can only occur if a successful outcome for a paper is not obvious at the outset, which means that the possibility of failure is critically important to the pursuit of scholarship.

Getting it wrong is also functional for scholarship because it can force us to give up a cherished idea in the face of the kinds of arguments and evidence that accumulate during the course of research.  Like everyone else, scholars are prone to confirmation bias.  We look for evidence to support the analysis we prefer and overlook evidence that supports other interpretations.  So when we collide with something in our research or writing that deflects us from the path toward our preferred destination, we tend to experience this deflection as failure.  However, although these experiences are not pleasant, they can be quite productive.  Not only do they prompt us to learn things we don’t want to know, they can also introduce arguments into the literature that people don’t want to hear.  A colleague at the University of Michigan, David Angus, had both of these benefits in mind when he used to pose the following challenge to every candidate for a faculty position in the School of Education:  “Tell me about some point when your research forced you to give up an idea you really cared about.”

I have experienced all of these forms of getting it wrong.  Books never worked out the way they were supposed to, because of changes forced on me by the need to come up with remedies for ailing arguments.  The analysis often turned in a direction that meant giving up something I wanted to keep and embracing something I preferred to avoid.  And nothing ever stayed finished.  Just when I thought I had a good analytical hammer and started using it to pound everything in sight, it would shatter into pieces and I would be forced to start over.  This story of misdirection and misplaced intentions starts, as does every academic story, with a dissertation.

Marx Gives Way to Weber

My dissertation topic fell into my lap one day during the final course in my doctoral program in sociology at the University of Pennsylvania, when I mentioned to Michael Katz that I had done a brief study of Philadelphia’s Central High School for an earlier class.  He had a new grant for studying the history of education in Philadelphia, and Central was the lead school.  He needed someone to study the school, and I needed a topic, advisor, and funding; by happy accident, it all came together in 15 minutes.  I had first become interested in education as an object of study as an undergraduate at Harvard in the late 1960s, where I majored in Students for a Democratic Society and minored in sociology.  In my last year or two there, I worked on a Marxist analysis of Harvard as an institution of social privilege (is there a better case?), which whetted my appetite for educational research.

For the dissertation, I wanted to apply the same kind of Marxist approach to Central High School, which seemed to beg for it.  Founded in 1838, it was the first high school in the city and one of the first in the country, and it later developed into the elite academic high school for boys in the city.  It looked like the Harvard of public high schools.  I had a model for this kind of analysis, Katz’s study of Beverly High School, in which he explained how this high school, shortly after its founding, came to be seen by many citizens as an institution that primarily served the upper classes, thus prompting the town meeting to abolish the school in 1861.[4]  I was planning to do this kind of study about Central, and there seemed to be plenty of evidence to support such an interpretation, including its heavily upper-middle-class student body, its aristocratic reputation in the press, and its later history as the city’s elite high school.

That was the intent, but my plan quickly ran into two big problems in the data I was gathering.  First, a statistical analysis of student attainment and achievement at the school over its first 80 years showed a consistent pattern:  only one-quarter of the students managed to graduate, which meant it was highly selective; but grades and not class determined who made it and who didn’t, which meant it was – surprise – highly meritocratic.  Attrition in modern high schools is strongly correlated with class, but this was not true in the early years at Central.  Middle class students were more likely to enroll in the first place, but they were no more likely to succeed than working class students.  The second problem was that the high school’s role in the Philadelphia school system didn’t fit the Marxist story of top-down control that I was trying to tell.  In the first 50 years of the high school, there was a total absence of bureaucratic authority over the Philadelphia school system.  The high school was an attractive good in the local educational market, offering elevated education in a grand building at a collegiate level (it granted bachelor degrees) and at no cost.  Grammar school students competed for access to this commodity by passing an entrance exam, and grammar school masters competed to get the most students into Central by teaching to the test.  The power that the high school exerted over the system was considerable but informal, arising from consumer demand from below rather than bureaucratic dictate from above.

Thus my plans to tell a story of class privilege and social control fell apart at the very outset of my dissertation; in its place, I found a story about markets and stratification:  Marx gives way to Weber.  The establishment of Central High School in the nation’s second largest city created a desirable commodity with instant scarcity, and this consumer-based market power not only gave the high school control over the school system but also gave it enough autonomy to establish a working meritocracy.  The high school promoted inequality: it served a largely middle class constituency and established an extreme form of educational stratification.  But it imposed a tough meritocratic regime equally on the children of the middle class and working class, with both groups failing most of the time.

Call on Your Friends for Help

In the story I’m telling here, the bad news is that scholarship is a terrain that naturally lures you into repeatedly getting it wrong.  The good news is that help is available if you look for it, which can turn scholarly wrong-headedness into a fruitful learning experience.  Just ask your friends and colleagues.  The things you most don’t want to hear may be just the things that will save you from intellectual confusion and professional oblivion.  Let me continue with the story, showing how colleagues repeatedly saved my bacon.

Markets Give Ground to Politics

Once I completed the dissertation, I gradually settled into being a Weberian, a process that took a while because of the disdain that Marxists hold for Weber.[5]  I finally decided I had a good story to tell about markets and schools, even if it wasn’t the one I had wanted to tell, so I used this story in rewriting the dissertation as a book.  When I had what I thought was a final draft ready to send to the publisher, I showed it to my colleague at Michigan State, David Cohen, who had generously offered to give it a reading.  His comments were extraordinarily helpful and quite devastating.  In the book, he said, I was interpreting the evolution of the high school and the school system as a result of the impact of the market, but the story I was really telling was about an ongoing tension for control of schools between markets and politics.[6]  The latter element was there in the text, but I had failed to recognize it and make it explicit in the analysis.  In short, he explained to me the point of my own book; so I had to rewrite the entire manuscript in order to bring out this implicit argument.

Framing this case in the history of American education as a tension between politics and markets allowed me to tap into the larger pattern of tensions that always exist in a liberal democracy:  the democratic urge to promote equality of power and access and outcomes, and the liberal urge to preserve individual liberty, promote free markets, and tolerate inequality.  The story of Central High School spoke to both these elements.  It showed a system that provided equal opportunity and unequal outcomes.  Democratic politics pressed for expanding access to high school for all citizens, whereas markets pressed for restricting access to high school credentials through attrition and tracking.  Central see-sawed back and forth between these poles, finally settling on the grand compromise that has come to characterize American education ever since:  open access to a stratified school system.  Using both politics and markets in the analysis also introduced me to the problem of formalism, since political goals for education (preparing competent citizens) value learning, whereas market goals (education for social advantage) value credentialing.

Disaggregating Markets

The book came out in 1988 with the title, The Making of an American High School.[7]  With politics and markets as my new hammer, everything looked like a nail.  So I wrote a series of papers in which I applied the idea to a wide variety of educational institutions and reform efforts, including the evolution of high school teaching as work, the history of social promotion, the history of the community college, the rhetorics of educational reform, and the emergence of the education school.

Midway through this flurry of papers, however, I ran into another big problem.  I sent a draft of my community college paper to David Hogan, a friend and former member of my dissertation committee at Penn, and his critique stopped me cold.  He pointed out that I was using the idea of educational markets to refer to two things that were quite different, both in concept and in practice.  One was the actions of educational consumers, the students who want education to provide the credentials they need in order to get ahead; the other was the actions of educational providers, the taxpayers and employers who want education to produce the human capital that society needs in order to function.  The consumer sought education’s exchange value, providing selective benefits for the individual who owns the credential; the provider sought education’s use value, providing collective benefits to everyone in society, even those not in school.

This forced me to reconstruct the argument from the ground up, abandoning the politics and markets angle and constructing in its place a tension among three goals that competed for primacy in shaping the history of American education.  “Democratic equality” referred to the goal of using education to prepare capable citizens; “social efficiency” referred to the goal of using education to prepare productive workers; and “social mobility” referred to the goal of using education to enable individuals to get ahead in society.  The first was a stand-in for educational politics, the second and third were a disaggregation of educational markets.

Abandoning the Good, the Bad, and the Ugly

Once formulated, the idea of the three goals became a mainstay in my teaching, and for a while it framed everything I wrote.  I finished the string of papers I mentioned earlier, energized by the analytical possibilities inherent in the new tool.  But by the mid-1990s, I began to be afraid that its magic power would start to fade on me soon, as had happened with earlier enthusiasms like Marxism and politics-and-markets.  Most ideas have a relatively short shelf life, as metaphors quickly reach their limits and big ideas start to shrink upon close examination.  That doesn’t mean these images and concepts are worthless, only that they are bounded, both conceptually and temporally.  So scholars need to strike while the iron is hot.  Michael Katz once made this point to me with the Delphic advice, “Write your first book first.”  In other words, if you have an idea worth injecting into the conversation, you should do so now, since it will eventually evolve into something else, leaving the first idea unexpressed.  Since the evolution of an idea is never finished, holding off publication until the idea is done is a formula for never publishing.

So it seemed like the right time to put together a collection of my three-goals papers into a book, and I had to act quickly before they started to turn sour.  With a contract for the book and a sabbatical providing time to put it together, I now had to face the problem of framing the opening chapter.  In early 1996 I completed a draft and submitted it to American Educational Research Journal.  The reviews knocked me back on my heels.  They were supportive but highly critical.  One in particular, which I later found out was written by Norton Grubb, forced me to rethink the entire scheme of competing goals.  He pointed out something I had completely missed in my enthusiasm for the tool-of-the-moment.  In practice my analytical scheme with three goals turned into a normative scheme with two:  a Manichean vision of light and darkness, with Democratic Equality as the Good, and with Social Mobility and Social Efficiency as the Bad and the Ugly.  This ideologically colored representation didn’t hold up under close scrutiny.  Grubb pointed out that social efficiency is not as ugly as I was suggesting.  Like democratic equality and unlike social mobility, it promotes learning, since it has a stake in the skills of the workforce.  Also, like democratic equality, it views education as a public good, whose benefits accrue to everyone and not just (as with social mobility) to the credential holder.

This trenchant critique forced me to start over, putting a different spin on the whole idea of competing goals, abandoning the binary vision of good and evil, reluctantly embracing the idea of balance, and removing the last vestige of my original bumper-sticker Marxism.  As I reconstructed the argument, I put forward the idea that all three of these goals emerge naturally from the nature of a liberal democracy, and that all three are necessary.[8]  There is no resolution to the tension among educational goals, just as there is no resolution to the problem of being both liberal and democratic.  We need an educational system that makes capable citizens and productive workers while also enabling individuals to pursue their own aspirations.  And we all act out our support for each of these goals according to which social role is most salient to us at the moment.  As citizens, we want graduates who can vote intelligently; as taxpayers and employers, we want graduates who will increase economic productivity; and as parents, we want an educational system that offers our children social opportunity.  The problem is the imbalance in the current mix of goals, as the growing primacy of social mobility over the other two goals privileges private over public interests, stratification over equality, and credentials over learning.

Examining Life at the Bottom of the System

With this reconstruction of the story, I was able to finish my second book, published in 1997, and get it out the door before any other major problems could threaten its viability.[9]  One such problem was already coming into view.  In comments on my AERJ goals paper, John Rury (the editor) pointed out that my argument relied on a status competition model of social organization – students fighting for scarce credentials in order to move up or stay up – that did not really apply to the lower levels of the system.  Students in the lower tracks in high school and in the open-access realms of higher education (community colleges and regional state universities) lived in a different world from the one I was talking about.  They were affected by the credentials race, but they weren’t really in the race themselves.  For them, the incentives to compete were minimal, the rewards remote, and the primary imperative was not success but survival.

Fortunately, however, there was one place at the bottom of the educational hierarchy I did know pretty well, and that was the poor beleaguered education school.  From 1985 to 2003, while I was teaching in the College of Education at Michigan State University, I received a rich education in the subject.  I had already started a book about ed schools, but it wasn’t until the book was half completed that I realized it was forcing me to rethink my whole thesis about the educational status game.  Here was an educational institution that was the antithesis of the Harvards and Central High Schools that I had been writing about thus far.  Residing at the very bottom of the educational hierarchy, the ed school was disdained by academics, avoided by the best students, ignored by policymakers, and discounted by its own graduates.  It was the perfect case to use in answering a question I had been avoiding:  What happens to education when credentials carry no exchange value and the status game is already lost?

What I found is that life at the bottom has some advantages, but they are outweighed by disadvantages.  On the positive side, the education school’s low status frees it to focus efforts on learning rather than on credentials, on the use value rather than exchange value of education; in this sense, it is liberated from the race for credentials that consumes the more prestigious realms of higher education.  On the negative side, however, the ed school’s low status means that it has none of the autonomy that prestigious institutions (like Central High School) generate for themselves, which leaves it vulnerable to kibitzing from the outside.  This institutional weakness also has made the ed school meekly responsive to its environment, so that over the years it obediently produced large numbers of teachers at low cost and with modest professional preparation, as requested.

When I had completed a draft of the book, I asked for comments from two colleagues at Michigan State, Lynn Fendler and Tom Bird, who promptly pointed out several big problems with the text.  One had to do with the argument in the last few chapters, where I was trying to make two contradictory points:  ed schools were weak in shaping schools but effective in promoting progressive ideology.  The other problem had to do with the book’s tone:  as an insider taking a critical position about ed schools, I sounded like I was trying to enhance my own status at the expense of colleagues.  Fortunately, they were able to show me a way out of both predicaments.  On the first issue, they helped me see that ed schools were more committed to progressivism as a rhetorical stance than as a mode of educational practice.  In our work as teacher educators, we have to prepare teachers to function within an educational system that is hostile to progressive practices.  On the second issue, they suggested that I shift from the third person to the first person.  By announcing clearly both my membership in the community under examination and my participation in the problems I was critiquing, I could change the tone from accusatory to confessional.  With these important changes in place, The Trouble with Ed Schools was published in 2004.[10]

Enabling Limitations

In this essay I have been telling a story about grounding research in an unlovely but fertile mindset, getting it wrong repeatedly, and then trying to fix it with the help of friends.  However, I don’t want to leave the impression that I think any of these fixes really resolved the problems.  The story is more about filling potholes than about re-engineering the road.  It’s also about some fundamental limitations in my approach to the historical sociology of American education, which I have been unwilling and unable to fix since they lie at the core of my way of seeing things.  Intellectual frameworks define, shape, and enable the work of scholars.  Such frameworks can be helpful by allowing us to cut a slice through the data and reveal interesting patterns that are not apparent from other angles, but they can only do so if they maintain a sharp leading edge.  As an analytical instrument, a razor works better than a baseball bat, and a beach ball doesn’t work at all.  The sharp edge, however, comes at a cost, since it necessarily narrows the analytical scope and commits a scholar to one slice through a problem at the expense of others.  I’m all too aware of the limitations that arise from my own cut at things.

One problem is that I tend to write a history without actors.  Taking a macro-sociological approach to history, I am drawn to explore general patterns and central tendencies in the school-society relationship rather than the peculiarities of individual cases.  In the stories I tell, people don’t act.  Instead, social forces contend, social institutions evolve in response to social pressures, and collective outcomes ensue.  My focus is on general processes and structures rather than on the variations within categories.  What is largely missing from my account of American education is the radical diversity of traits and behaviors that characterizes educational actors and organizations.  I plead guilty to these charges.  However, my aim has been not to write a tightly textured history of the particular but to explore some of the broad socially structured patterns that shape the main outlines of American educational life.  My sense is that this kind of work serves a useful purpose—especially in a field such as education, whose dominant perspectives have been psychological and presentist rather than sociological and historical; and in a sub-field like history of education, which can be prone to the narrow monograph with little attention to the big picture; and in a country like the United States, which is highly individualistic in orientation and tends to discount the significance of the collective and the categorical.

Another characteristic of my work is that I tend to stretch arguments well beyond the supporting evidence.  As anyone can see in reading my books, I am not in the business of building an edifice of data and planting a cautious empirical generalization on the roof.  My first book masqueraded as a social history of an early high school, but it was actually an essay on the political and market forces shaping the evolution of American education in general—a big leap to make from historical data about a single, atypical school.  Likewise my second book is a series of speculations about credentialing and consumerism that rests on a modest and eclectic empirical foundation.  My third book involves minimal data on education in education schools and maximal rumination about the nature of “the education school.”  In short, validating claims has not been my strong suit.  I think the field of educational research is sufficiently broad and rich that it can afford to have some scholars who focus on constructing credible empirical arguments about education and others who focus on exploring ways of thinking about the subject.

The moral of this story, therefore, may be that scholarship is less a monologue than a conversation.  In education, as in other areas, our field is so expansive that we can’t cover more than a small portion, and it’s so complex that we can’t even gain mastery over our own tiny piece of the terrain.  But that’s ok.  As participants in the scholarly conversation, our responsibility is not to get things right but to keep things interesting, while we rely on discomfiting interactions with our data and with our colleagues to provide the correctives we need to make our scholarship more durable.

[1]  George Orwell,  The Road to Wigan Pier (New York: Harcourt, Brace, 1958).

[2]  I am grateful to Lynn Fendler and Tom Bird for comments on an earlier draft of this portion of the essay.  As they have done before, they saved me from some embarrassing mistakes.  I presented an earlier version of this analysis in a colloquium at the Stanford School of Education in 2002 and in the Division F Mentoring Seminar at the American Educational Research Association annual meeting in New Orleans later the same year.  A later version was published as the introduction to Education, Markets, and the Public Good: The Selected Works of David F. Labaree (London: Routledge Falmer, 2007).  Reprinted with the kind permission of Taylor and Francis.

[3]  That doesn’t mean it’s necessarily the best way to start developing an idea.  For me, teaching has always served better as a medium for stimulating creative thought.  It’s a chance for me to engage with ideas from texts about a particular topic, develop a story about these ideas, and see how it sounds when I tell it in class and listen to student responses.  The classroom has a wonderful mix of traits for these purposes: it forces discipline and structure on the creative process while allowing space for improvisation and offering the chance to reconstruct everything the next time around.  After my first book, most of my writing had its origins in this pedagogical process.  But at a certain point I find that I have to test these ideas in print.

[4]  Michael B. Katz, The Irony of Early School Reform: Educational Innovation in Mid-Nineteenth Century Massachusetts (Boston: Harvard University Press, 1968).

[5]  Marx’s message is rousing and it can fit on a bumper sticker:  Workers of the world, unite!  But Weber’s message is more complicated, pessimistic, and off-putting:  The iron cage of rationalization has come to dominate the structure of thought and social action, but we can’t stop it or even escape from it.

[6]  He also pointed out, in passing, that my chapter on the attainment system at the high school – which incorporated 17 tables in the book (30 in the dissertation), and which took me two years to develop by collecting, coding, keying, and statistically analyzing data from 2,000 student records – was essentially one big footnote in support of the statement, “Central High School was meritocratic.”  Depressing but true.

[7]  David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988).

[8]  David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal 34:1 (Spring, 1998): 39-81.

[9]  David F. Labaree,  How to Succeed in School Without Really Learning: The Credentials Race in American Education (New Haven: Yale University Press, 1997).

[10] David F. Labaree,  The Trouble with Ed Schools (New Haven: Yale University Press, 2004).

Posted in Academic writing, Writing

Academic Writing Issues #9: Metaphors — The Poetry of Everyday Life

Earlier I posted a piece about mangled metaphors (Academic Writing Issues # 6), which focused on the trouble that writers get into when they use a metaphor without taking into account the root comparison that is embedded within it.  Example:  talking about “the doctrine set forth in Roe v. Wade and its progeny” — a still-born metaphor if there ever was one.  So writers need to be wary of metaphors, especially those that have become cliches, thus making the original reference dormant.

But don’t let these problems put you off using metaphors altogether.  Actually, it’s nearly impossible to write without any metaphors, since they are so central to communication.  Literal meanings are useful, and in scientific writing precision is important to maintain clarity.  But literal language is boring, pedestrian.  It just plods along, telling a story without conveying what the story means.  Metaphor is how we create a richness of meaning, which comes from not just telling what something is but showing what it’s related to.  Metaphors create depth and resonance, and they stick in your mind.

Think about the power of a great book title, which captures the essence of the text in a vivid image:  Bowling Alone; The Bell Curve; The Unbearable Lightness of Being; The Botany of Desire.

In the piece below, David Brooks talks about metaphors as the poetry of everyday life in a 2011 column from the New York Times.  I think you’ll like it.

 

April 11, 2011

Poetry for Everyday Life

By DAVID BROOKS

Here’s a clunky but unremarkable sentence that appeared in the British press before the last national election: “Britain’s recovery from the worst recession in decades is gaining traction, but confused economic data and the high risk of hung Parliament could yet snuff out its momentum.”

The sentence is only worth quoting because in 28 words it contains four metaphors. Economies don’t really gain traction, like a tractor. Momentum doesn’t literally get snuffed out, like a cigarette. We just use those metaphors, without even thinking about it, as a way to capture what is going on.

In his fine new book, “I Is an Other,” James Geary reports on linguistic research suggesting that people use a metaphor every 10 to 25 words. Metaphors are not rhetorical frills at the edge of how we think, Geary writes. They are at the very heart of it.

George Lakoff and Mark Johnson, two of the leading researchers in this field, have pointed out that we often use food metaphors to describe the world of ideas. We devour a book, try to digest raw facts and attempt to regurgitate other people’s ideas, even though they might be half-baked.

When talking about relationships, we often use health metaphors. A friend might be involved in a sick relationship. Another might have a healthy marriage.

When talking about argument, we use war metaphors. When talking about time, we often use money metaphors. But when talking about money, we rely on liquid metaphors. We dip into savings, sponge off friends or skim funds off the top. Even the job title stockbroker derives from the French word brocheur, the tavern worker who tapped the kegs of beer to get the liquidity flowing.

The psychologist Michael Morris points out that when the stock market is going up, we tend to use agent metaphors, implying the market is a living thing with clear intentions. We say the market climbs or soars or fights its way upward. When the market goes down, on the other hand, we use object metaphors, implying it is inanimate. The market falls, plummets or slides.

Most of us, when asked to stop and think about it, are by now aware of the pervasiveness of metaphorical thinking. But in the normal rush of events, we often see straight through metaphors, unaware of how they refract perceptions. So it’s probably important to pause once a month or so to pierce the illusion that we see the world directly. It’s good to pause to appreciate how flexible and tenuous our grip on reality actually is.

Metaphors help compensate for our natural weaknesses. Most of us are not very good at thinking about abstractions or spiritual states, so we rely on concrete or spatial metaphors to (imperfectly) do the job. A lifetime is pictured as a journey across a landscape. A person who is sad is down in the dumps, while a happy fellow is riding high.

Most of us are not good at understanding new things, so we grasp them imperfectly by relating them metaphorically to things that already exist. That’s a “desktop” on your computer screen.

Metaphors are things we pass down from generation to generation, which transmit a culture’s distinct way of seeing and being in the world. In his superb book “Judaism: A Way of Being,” David Gelernter notes that Jewish thought uses the image of a veil to describe how Jews perceive God — as a presence to be sensed but not seen, which is intimate and yet apart.

Judaism also emphasizes the metaphor of separateness as a path to sanctification. The Israelites had to separate themselves from Egypt. The Sabbath is separate from the week. Kosher food is separate from the nonkosher. The metaphor describes a life in which one moves from nature and conventional society to the sacred realm.

To be aware of the central role metaphors play is to be aware of how imprecise our most important thinking is. It’s to be aware of the constant need to question metaphors with data — to separate the living from the dead ones, and the authentic metaphors that seek to illuminate the world from the tinny advertising and political metaphors that seek to manipulate it.

Most important, being aware of metaphors reminds you of the central role that poetic skills play in our thought. If much of our thinking is shaped and driven by metaphor, then the skilled thinker will be able to recognize patterns, blend patterns, apprehend the relationships and pursue unexpected likenesses.

Even the hardest of the sciences depend on a foundation of metaphors. To be aware of metaphors is to be humbled by the complexity of the world, to realize that deep in the undercurrents of thought there are thousands of lenses popping up between us and the world, and that we’re surrounded at all times by what Steven Pinker of Harvard once called “pedestrian poetry.”

Posted in Education policy, Educational Research, History of education, Teaching, Writing

James March: Education and the Pursuit of Optimism

This post is about a 1975 paper by James G. March, which was published in, of all places, the Texas Tech Journal of Education.  Given that provenance, it’s something you likely have never encountered before unless someone actually handed it to you.  I used it in a number of my classes and wanted to share it with you.

March was a fascinating scholar who had a long and distinguished career as an organizational theorist, teaching at Carnegie-Mellon and later at the Stanford business and education schools. He died last year.  I had the privilege of getting to know him in retirement after I moved to Stanford.  He was the rare combination of cutting-edge social scientist and ardent humanist, who among his other accomplishments published a half dozen volumes of poetry.

This paper shows both sides of his approach to issues.  In it he explores the role that education has played in the U.S., in particular its complex relationship with all-American optimism.  Characteristically, in developing his analysis, he relies not on social science data but on literature — among others, Tolstoy, Cervantes, Solzhenitsyn, and Borges.

I love how he frames the nature of teaching and learning in a way that is vastly distant from the usual language of social efficiency and human capital production — and also distant from the chipper American faith that education can fix everything.  A tragic worldview pervades his discussion, reflecting the perspective of the great works of literature upon which he draws.

I find his argument particularly salient for teachers, who have been the majority of my own students over the years.  It’s common for teachers to ask the impossible of themselves, trying to fulfill the promise that education will save all their students.  Too often the result is a sense of failure, burnout, or both.

Below I distill some of the core insights from this paper, but there is no substitute for reading and reveling in the original, which you can find here.

He starts out by asserting that “The modern history of American education is a history of optimism.”  The problem with this is that it blinds us to the limited ability of social engineering in general and education in particular to realize our greatest hopes.

By insisting that great action be justified by great hopes, we encourage a belief in the possibility of magic. For examples, read the litany of magic in the literature on free schools, Montessori, Head Start, Sesame Street, team teaching, open schools, structured schools, computer-assisted instruction, community control, and hot lunches. Inasmuch as there appears to be rather little magic in the world, great hopes are normally difficult to realize. Having been seduced into great expectations, we are abandoned to a choice between failure and delusion.

The temptations of delusion are accentuated both by our investment in hope and by the potential for ambiguity in educational outcomes. To a substantial extent we are able to believe whatever we want to believe, and we want to believe in the possibility of progress. We are unsure about what we want to accomplish, or how we would know when we had accomplished it, or how to allocate credit or blame for accomplishment or lack of it. So we fool ourselves.

The conversion of great hopes into magic, and magic into delusion describes much of modern educational history. It continues to be a dominant theme of educational reform in the United States. But there comes a time when the conversion does not work for everyone. As we come to recognize the political, sociological, and psychological dynamics of repeated waves of optimism based on heroic hopes, our willingness to participate in the process is compromised.

As an antidote to the problem, he proposes three paradoxical principles for action:  pessimism without despair; irrelevance without loss of faith; and optimism without hope.

Pessimism without despair:  This means embracing the essential connection between education and life, without expecting the most desirable outcome.  It is what it is.  The example is Solzhenitsyn’s character Shukhov, learning to live in a prison camp.  The message is this:  Don’t set unreasonable expectations for what’s possible, defining anything else as failure.  Small victories in the classroom are a big deal.

Irrelevance without loss of faith:  This means recognizing that you can’t control events, so instead you do what you can wherever you are.  His example is General Kutuzov in War and Peace.  He won the war against Napoleon by continually retreating and by restraining his officers from attacking the enemy.  Making things happen is overrated.  There’s a lot the teacher simply can’t accomplish, and you need to recognize that.

Optimism without hope:  The aim here is to do what is needed rather than what seems to be effective.  His example is Don Quixote, a man who cuts a ridiculous figure by tilting at windmills, but who has a beneficial impact on everyone he encounters.  The message for teachers is that you set out to do what you think is best for your students, because it’s the right thing to do rather than because it is necessarily effective.  This is moral-political logic for schooling instead of the usual utilitarian logic.

So where does this leave you as a teacher, administrator, policymaker?

  • Don’t let anyone convince you that schooling is all about producing human capital, improving test scores, or pursuing any other technical and instrumentalist goal.

  • Its origins are political and moral: to form a nation state, build character, and provide social opportunity.

  • Teaching is not a form of social engineering, making society run more efficiently.

  • It’s not about fixing social problems, for which it is often ill suited.

  • Instead, it’s a normative practice organized around shaping the kind of people we want to be — about doing what’s right instead of what’s useful.

Posted in Academic writing, Writing

Academic Writing Issues #7 — Writing the Perfect Sentence

The art of writing ultimately comes down to the art of writing sentences.  In his lovely book, How to Write a Sentence, Stanley Fish explains that the heart of any sentence is not its content but its form.  The form is what defines the logical relationship between the various elements within the sentence.  The same formal set of relationships within a sentence structure can be filled with an infinite array of possible bits of content.  If you master the forms, he says, you will be able to harness them to your own aims in producing content.  His core counter-intuitive admonition is this:  “You shall tie yourself to forms and the forms shall set you free.”  Note the perfect form in Lewis Carroll’s nonsense poem Jabberwocky:

Twas brillig, and the slithy toves

Did gyre and gimble in the wabe;

All mimsy were the borogoves,

And the mome raths outgrabe.

I strongly recommend reading the book, which I used for years in my class on academic writing.  You’ll learn a lot about writing and you’ll also accumulate a lovely collection of stunning quotes.

Below is a piece Fish published in the New Statesman in 2011, which deftly summarizes the core argument in the book.  Enjoy.  Here’s a link to the original.

 

How to write the perfect sentence

Stanley Fish

Published 17 February 2011

In learning how to master the art of putting words together, the trick is to concentrate on technique and not content. Substance comes second.

Look around the room you’re sitting in. Pick out four items at random. I’m doing it now and my items are a desk, a television, a door and a pencil. Now, make the words you have chosen into a sentence using as few additional words as possible. For example: “I was sitting at my desk, looking at the television, when a pencil fell off and rolled to the door.” Or: “The television close to the door obscured my view of the desk and the pencil I needed.” Or: “The pencil on my desk was pointed towards the door and away from the television.” You will find that you can always do this exercise – and you could do it for ever.

That’s the easy part. The hard part is to answer this question: what did you just do? How were you able to turn a random list into a sentence? It might take you a little while but, in time, you will figure it out and say something like this: “I put the relationships in.” That is to say, you arranged the words so that they were linked up to the others by relationships of cause, effect, contiguity, similarity, subordination, place, manner and so on (but not too far on; the relationships are finite). Once you have managed this – and you do it all the time in speech, effortlessly and unselfconsciously – hitherto discrete items participate in the making of a little world in which actors, actions and the objects of actions interact in ways that are precisely represented.

This little miracle you have performed is called a sentence and we are now in a position to define it: a sentence is a structure of logical relationships. Notice how different this is from the usual definitions such as, “A sentence is built out of the eight parts of speech,” or, “A sentence is an independent clause containing a subject and a predicate,” or, “A sentence is a complete thought.” These definitions seem like declarations out of a fog that they deepen. The words are offered as if they explained everything, but each demands an explanation.

When you know that a sentence is a structure of logical relationships, you know two things: what a sentence is – what must be achieved for there to be focused thought and communication – and when a sentence that you are trying to write goes wrong. This happens when the relationships that allow sense to be sharpened are missing or when there are too many of them for comfort (a goal in writing poetry but a problem in writing sentences). In such cases, the components of what you aspired to make into a sentence stand alone, isolated; they hang out there in space and turn back into items on a list.

Armed with this knowledge, you can begin to look at your own sentences and those of others with a view to discerning what is successful and unsuccessful about them. As you do this, you will be deepening your understanding of what a sentence is and introducing yourself to the myriad ways in which logical structures of verbal thought can be built, unbuilt, elaborated upon and admired.

My new book, How to Write a Sentence, is a light-hearted manual of instruction designed to teach you how to do these things – how to write a sentence and how to appreciate in analytical detail the sentences produced by authors who knock your socks off. These two aspects – lessons in sentence craft and lessons in sentence appreciation – reinforce each other; the better able you are to appreciate great sentences, the closer you are to being able to write one. An intimate knowledge of what makes sentences work is one prerequisite for writing them.

Consider the first of those aspects – sentence craft. The chief lesson here is: “It’s not the thought that counts.” By that, I mean that skill in writing sentences is a matter of understanding and mastering form not content. The usual commonplace wisdom is that you have to write about something, but actually you don’t. The exercise I introduced above would work even if your list was made up of nonsense words, as long as each word came tagged with its formal identification – actor, action, object of action, modifier, conjunction, and so on. You could still tie those nonsense words together in ligatures of relationships and come up with perfectly formed sentences like Noam Chomsky’s “Colourless green ideas sleep furiously,” or the stanzas of Lewis Carroll’s “Jabberwocky”.

If what you want to do is become facile (in a good sense) in producing sentences, the sentences with which you practise should be as banal and substantively inconsequential as possible; for then you will not be tempted to be interested in them. The moment that interest comes to the fore, the focus on craft will be lost. (I know that this sounds counter-intuitive, but stick with me.)

I call this the Karate Kid method of learning to write. In that 1984 cult movie (recently remade), the title figure learns how to fight not by participating in a match but by repeating (endlessly and pointlessly, it seems to him) the purely formal motions of waxing cars and painting fences. The idea is that when you are ready either to compete or to say something that is meaningful and means something to you, the forms you have mastered and internalised will generate the content that would have remained inchoate (at best) without them.

These points can be illustrated with sentences that are too good to be tossed aside. In the book, I use them to make points about form, but I can’t resist their power or the desire to explain it. When that happens, content returns to my exposition and I shift into full appreciation mode, caressing these extraordinary verbal productions even as I analyse them. I become like a sports commentator, crying, “Did you see that?” or “How could he have pulled that off?” or “How could she keep it going so long and still not lose us?” In the end, the apostle of form surrenders to substance, or rather, to the pleasure of seeing substance emerge through the brilliant deployment of forms.

As a counterpoint to that brilliance, let me hazard an imitation of two of the marvels I discuss. Take Swift’s sublimely malign sentence, “Last week I saw a woman flayed and you will hardly believe how much it altered her person for the worse.” And then consider this decidedly lame imitation: “Last night I ate six whole pizzas and you would hardly believe how sick I was.”

Or compare John Updike’s description in the New Yorker of the home run that the baseball player Ted Williams hit on his last at-bat in 1960 – “It was in the books while it was still in the sky” – to “He had won the match before the first serve.” My efforts in this vein are lessons both in form and humility.

The two strands of my argument can be brought together by considering sentences that are about their own form and unfolding; sentences that meditate on or burst their own limitations, and thus remind us of why we have to write sentences in the first place – we are mortal and finite – and of what rewards may await us in a realm where sentences need no longer be fashioned. Here is such a sentence by the metaphysical poet John Donne:

If we consider eternity, into that time never entered; eternity is not an everlasting flux of time, but time is a short parenthesis in a long period; and eternity had been the same as it is, though time had never been.

The content of the sentence is the unreality of time in the context of eternity, but because a sentence is necessarily a temporal thing, it undermines that insight by being. (Asserting in time the unreality of time won’t do the trick.) Donne does his best to undermine the undermining by making the sentence a reflection on its fatal finitude. No matter how long it is, no matter what its pretension to a finality of statement, it will be a short parenthesis in an enunciation without beginning, middle or end. That enunciation alone is in possession of the present – “is” – and what the sentence comes to rest on is the declaration of its already having passed into the state of non-existence: “had never been”.

Donne’s sentence is in my book; my analysis of it is not. I am grateful to the New Statesman for the opportunity to produce it and to demonstrate once again the critic’s inadequacy to his object.

Stanley Fish is Davidson-Kahn Distinguished University Professor of Humanities and Law at Florida International University. His latest book is “How to Write a Sentence: and How to Read One” (HarperCollins, £12.99)

https://www.newstatesman.com/books/2011/02/write-sentence-comes

 

 

Posted in Academic writing, Writing

Academic Writing Issues #5 — Failing to Use Dynamic Verbs

Many people have complained that academic writers are addicted to the passive voice, doing anything to avoid using the first person:  “Data were gathered.”  I wonder who did that?  But in some ways a bigger problem is that we refuse to use the kind of dynamic verbs that can energize our stories and drive the argument forward.  Below is a lovely piece by Constance Hale, originally published in 2012 as part of Draft, the New York Times series on writing.  In it she explains the difference between static verbs and power verbs.  Yes, she says, static verbs have their uses; but when we rely too heavily on them, we drain all energy, urgency, and personality from our authorial voices.  We can also end up lulling our readers to sleep.

She gives us some excellent examples of how we can use the full array of verbs at our disposal to tell compelling, nuanced, and engaging stories.  Enjoy.

Here’s a link to the original version.

 

New York Times

APRIL 16, 2012, 9:00 PM

Make-or-Break Verbs

By CONSTANCE HALE

Draft is a series about the art and craft of writing.

This is the third in a series of writing lessons by the author.

A sentence can offer a moment of quiet, it can crackle with energy or it can just lie there, listless and uninteresting.

What makes the difference? The verb.

Verbs kick-start sentences: Without them, words would simply cluster together in suspended animation. We often call them action words, but verbs also can carry sentiments (love, fear, lust, disgust), hint at cognition (realize, know, recognize), bend ideas together (falsify, prove, hypothesize), assert possession (own, have) and conjure existence itself (is, are).

Fundamentally, verbs fall into two classes: static (to be, to seem, to become) and dynamic (to whistle, to waffle, to wonder). (These two classes are sometimes called “passive” and “active,” and the former are also known as “linking” or “copulative” verbs.) Static verbs stand back, politely allowing nouns and adjectives to take center stage. Dynamic verbs thunder in from the wings, announcing an event, producing a spark, adding drama to an assembled group.

Static Verbs
Static verbs themselves fall into several subgroups, starting with what I call existential verbs: all the forms of to be, whether the present (am, are, is), the past (was, were) or the other more vexing tenses (is being, had been, might have been). In Shakespeare’s “Hamlet,” the Prince of Denmark asks, “To be, or not to be?” when pondering life-and-death questions. An aging King Lear uses both is and am when he wonders about his very identity:

“Who is it that can tell me who I am?”

Jumping ahead a few hundred years, Henry Miller echoes Lear when, in his autobiographical novel “Tropic of Cancer,” he wanders in Dijon, France, reflecting upon his fate:

“Yet I am up and about, a walking ghost, a white man terrorized by the cold sanity of this slaughter-house geometry. Who am I? What am I doing here?”

Drawing inspiration from Miller, we might think of these verbs as ghostly verbs, almost invisible. They exist to call attention not to themselves, but to other words in the sentence.

Another subgroup is what I call wimp verbs (appear, seem, become). Most often, they allow a writer to hedge (on an observation, description or opinion) rather than commit to an idea: Lear appears confused. Miller seems lost.

Finally, there are the sensing verbs (feel, look, taste, smell and sound), which have dual identities: They are dynamic in some sentences and static in others. If Miller said I feel the wind through my coat, that’s dynamic. But if he said I feel blue, that’s static.

Static verbs establish a relationship of equals between the subject of a sentence and its complement. Think of those verbs as quiet equals signs, holding the subject and the predicate in delicate equilibrium. For example, I, in the subject, equals feel blue in the predicate.

Power Verbs
Dynamic verbs are the classic action words. They turn the subject of a sentence into a doer in some sort of drama. But there are dynamic verbs — and then there are dynamos. Verbs like has, does, goes, gets and puts are all dynamic, but they don’t let us envision the action. The dynamos, by contrast, give us an instant picture of a specific movement. Why have a character go when he could gambol, shamble, lumber, lurch, sway, swagger or sashay?

Picking pointed verbs also allows us to forgo adverbs. Many of these modifiers merely prop up a limp verb anyway. Strike speaks softly and insert whispers. Erase eats hungrily in favor of devours. And whatever you do, avoid adverbs that mindlessly repeat the sense of the verb, as in circle around, merge together or mentally recall.

This sentence from “Tinkers,” by Paul Harding, shows how taking time to find the right verb pays off:

“The forest had nearly wicked from me that tiny germ of heat allotted to each person….”

Wick is an evocative word that nicely gets across the essence of a more commonplace verb like sucked or drained.

Sportswriters and announcers must be masters of dynamic verbs, because they endlessly describe the same thing while trying to keep their readers and listeners riveted. We’re not just talking about a player who singles, doubles or homers. We’re talking about, as announcers described during the 2010 World Series, a batter who “spoils the pitch” (hits a foul ball), a first baseman who “digs it out of the dirt” (catches a bad throw) and a pitcher who “scatters three singles through six innings” (keeps the hits to a minimum).

Imagine the challenge of writers who cover races. How can you write about, say, all those horses hustling around a track in a way that makes a single one of them come alive? Here’s how Laura Hillenbrand, in “Seabiscuit,” described that horse’s winning sprint:

“Carrying 130 pounds, 22 more than Wedding Call and 16 more than Whichcee, Seabiscuit delivered a tremendous surge. He slashed into the hole, disappeared between his two larger opponents, then burst into the lead… Seabiscuit shook free and hurtled into the homestretch alone as the field fell away behind him.”

Even scenes that at first blush seem quiet can bristle with life. The best descriptive writers find a way to balance nouns and verbs, inertia and action, tranquillity and turbulence. Take Jo Ann Beard, who opens the short story “Cousins” with static verbs as quiet as a lake at dawn:

“Here is a scene. Two sisters are fishing together in a flat-bottomed boat on an olive green lake….”

When the world of the lake starts to awaken, the verbs signal not just the stirring of life but crisp tension:

“A duck stands up, shakes out its feathers and peers above the still grass at the edge of the water. The skin of the lake twitches suddenly and a fish springs loose into the air, drops back down with a flat splash. Ripples move across the surface like radio waves. The sun hoists itself up and gets busy, laying a sparkling rug across the water, burning the beads of dew off the reeds, baking the tops of our mothers’ heads.”

Want to practice finding dynamic verbs? Go to a horse race, a baseball game or even a walk-a-thon. Find someone to watch intently. Describe what you see. Or, if you’re in a quiet mood, sit on a park bench, in a pew or in a boat on a lake, and then open your senses. Write what you see, hear and feel. Consider whether to let your verbs jump into the scene or stand by patiently.

Verbs can make or break your writing, so consider them carefully in every sentence you write. Do you want to sit your subject down and hold a mirror to it? Go ahead, use is. Do you want to plunge your subject into a little drama? Go dynamic. Whichever you select, give your readers language that makes them eager for the next sentence.

Constance Hale, a journalist based in San Francisco, is the author of “Sin and Syntax” and the forthcoming “Vex, Hex, Smash, Smooch.” She covers writing and the writing life at sinandsyntax.com.

Posted in Academic writing, Writing

Academic Writing Issues #4 — Failing to Listen for the Music

All too often, academic writing is tone deaf to the music of language.  Just as we tend to consider unprofessional any writing that is playful, engaging, funny, or moving, so too with writing that is musical.  A professional monotone is the scholar’s voice of choice.  This stance leads to two big problems.  One is that it puts off the reader, exactly the person we should be trying to draw into our story.  Why so easily abandon one of the great tools of effective rhetoric?  Another is that it alienates academic writers from their own words, forcing them to adopt the generic voice of the pedant rather than the particular voice of the person who is the author.

For better or for worse — usually for worse — we as scholars are contributing to the literary legacy of our culture, so why not do so in a way that sometimes sings, or at least doesn’t end on a false note?  Speaking of which, consider a quote from one of the masters of English prose, Abraham Lincoln, from the last paragraph of his first inaugural address.  Picture him talking at the brink of the nation’s most terrible war, and then listen to his melodic phrasing:

I am loath to close. We are not enemies, but friends. We must not be enemies. Though passion may have strained it must not break our bonds of affection. The mystic chords of memory, stretching from every battlefield, and patriot grave, to every living heart and hearthstone, all over this broad land, will yet swell the chorus of the Union, when again touched, as surely they will be, by the better angels of our nature.

In the English language, there are two rhetorical storehouses that for centuries have grounded writers like Lincoln — Shakespeare and the King James Bible.  Both are compulsively quotable, and both provide models for how to combine meaning and music in the way we write.

Take a look at this lovely piece by Ann Wroe, an appreciation of the music of the King James Bible, which makes all the other translations sound tone deaf.

Published in the Economist

March 30, 2011

IN THE BEGINNING WAS THE SOUND

By Ann Wroe


The King James Bible is 400 years old this year, and the music of its sentences is still ringing out. But what exactly made it so good? Ann Wroe gives chapter and verse…

Like many Catholics, I came late to the King James Bible. I was schooled in the flat Knox version, and knew the beautiful, musical Latin Vulgate well before I was introduced to biblical beauty in my own tongue. I was around 20, sitting in St John’s College Chapel in Oxford in the glow of late winter candlelight, though that fond memory may be embellished a little. A reading from the King James was given at Evensong. The effect was extraordinary: as if I had suddenly found, in the house of language I had loved and explored all my life, a hidden central chamber whose pillars and vaulting, rhythm and strength had given shape to everything around them.

The King James now breathes venerability. Even online it calls up crammed, black, indented fonts, thick rag paper and rubbed leather bindings—with, inside the heavy cover, spidery lists of family ancestors begotten long ago. To read it is to enter a sort of communion with everyone who has read or listened to it before, a crowd of ghosts: Puritan women in wide white collars, stern Victorian fathers clasping their canes, soldiers muddy from killing fields, serving girls in Sunday best, and every schoolboy whose inky fingers have burrowed to 2 Kings 18: 27, where Rabshakeh says, “Hath my master not sent me to the men which sit on the wall, that they may eat their own dung, and drink their own piss with you?”

When it appeared, moreover, it was already familiar, in the sense that it borrowed freely from William Tyndale’s great translation of a century before. Deliberately, and with commendable modesty, the members of King James’s translation committees said they did not seek “to make a new translation, nor yet to make of a bad one a good one, but to make a good one better”. What exactly they borrowed and where they improved is a detective job for scholars, not for this piece. So where it mentions “translators” Tyndale is included among them, the original and probably the best; for this book still breathes him, as much as them.

In both his time and theirs this was a modern translation, the living language of streets, docks, workshops, fields. Ancient Israel and Jacobean England went easily together. The original writers of the books of the Old Testament knew about pruning trees, putting on armour, drawing water, the readying of horses for battle and the laying of stones for a wall; and in the King James all these activities are still evidently familiar, the jargon easy, and the language light. “Yet man is born unto trouble, as the sparks fly upward”, runs the wonderful phrase in Job 5: 7, and we are at a blacksmith’s door in an English village, watching hammer strike anvil, or kicking a rolling log on our own cottage hearth. “Hard as a piece of the nether millstone” brings the creak of a 17th-century mill, as well as the sweat of more ancient hands. In both worlds, “seedtime and harvest” are real seasons. This age-old continuity comforts us, even though we no longer know or share it.

By the same token, the reader of the King James lives vicariously in a world of solid certainties. There is nothing quaint here about a candle or a flagon, or money in a tied leather purse; nothing arcane about threads woven on a handloom, mire in the streets or the snuffle of swine outside the town gates. This is life. Everything is closely observed, tactile, and has weight. When Adam and Eve sew fig-leaves together to cover their shame they make “aprons” (Genesis 3: 7), leather-thick and workmanlike, the sort a cobbler might wear. Even the colours invoked in the King James—crimson, scarlet, purple—are nouns rather than adjectives (“though your sins be as scarlet”, Isaiah 1: 18), sold by the block as solid powder or heaped glossy on a brush. And God’s intervention in this world, whether as artist, builder, woodsman or demolition man, is as physical and real as the materials he works with.

English, of course, was richer in those days, full of neesings and axletrees, habergeons and gazingstocks, if indeed a gazingstock has a plural. Modern skin has spots: the King James gives us botches, collops and blains, horridly and lumpily different. It gives us curious clutter, too, a whole storehouse of tools and knick-knacks whose use is now half-forgotten—snuffdishes, besoms, latchets and gins, and fashions seemingly more suited to a souped-up motor than to the daughters of Jerusalem:

The chains, and the bracelets, and the mufflers,
The bonnets, and the ornaments of the legs, and the
headbands, and the tablets, and the earrings,
The rings, and nose jewels,
The changeable suits of apparel, and the mantles, and the
wimples, and the crisping pins…  (Isaiah 3: 19-22)

“Crisping pins” have now been swallowed up (in the Good News version) in “fine robes, gowns, cloaks and purses”. And so we have lost that sharp, momentary image of varnished nails pushing pins into unruly frizzes of hair, and lipsticked mouths pursed in concentration, as the daughters of Zion prepare to take on the town. These women are “froward”, a word that has been lost now, but which haunts the King James like a strutting shadow with a shrill, hectoring voice. Few lines are longer-drawn out, freighted with sighs, than these from Proverbs 27:15: “A continual dropping in a very rainy day and a contentious woman are alike.”

Other characters cause trouble, too. In the King James, people are aggressively physical. They shoot out their lips, stretch forth their necks and wink with their eyes; they open their mouths wide and say “Aha, aha”, wagging their heads, in ways that would get them arrested in Wal-Mart. They do not simply refuse to listen, but pull away their shoulders and stop their ears; they do not merely trip, but dash their feet against stones. Sex is peremptory: men “know” women, lie with them, “go in unto” them, as brisk as the women are available. “Begat” is perhaps the word the King James is best known for, list after list of begetting. The curt efficiency of the word (did no one suggest “fathered”?) makes the erotic languor of the Song of Solomon, with its lilies and heaps of wheat, shine out like a jewel.

The world in which these things happen has a particular look and feel that comes not just from the original authors, but often from the translators and the words they favoured. Mystery colours much of it. They like “lurking places of the villages” (Psalms 10: 8), “secret places of the stairs” (Song of Solomon 2:14), and things done “privily”, or “close”. God hides in “pavilions” that seem as mysterious as the shifting dunes of the desert, or the white flapping tents of the clouds. The word “creeping” is used everywhere to suggest that something lives; very little moves fast here, and heads and bellies are bent close to the earth. Even flying is slow, through the thick darkness. People go forth abroad, and waters come down from above, with considerable effort, as though through slowly opening layers. Elements are divided into their constituent parts: the waters of the sea, a flame of fire. A rainbow curves brightly away from the astonished, struggling observer, “in sight like unto an emerald” (Revelation 4: 3). But the grandeur of the language gives momentousness even to the corner of a room, a drain running beside a field, a patch of abandoned ground:

I went by the field of the slothful, and by the vineyard of the
man void of understanding;
And lo, it was all grown over with thorns, and nettles had
covered the face thereof, and the stone wall thereof was
broken down.
Then I saw, and considered it well; I looked upon it, and
received instruction.
Yet a little sleep, a little slumber, a little folding of the hands
to sleep…  (Proverbs 24: 30-33)

In such places shepherds “abide” with their sheep, motionless as figures made of stone. This landscape is carved broad and deep, like a woodcut, with sharply folded mountains, thick woven water, stylised trees and cities piled and blocked as with children’s bricks (all the better to be scattered by God later, no stone upon another). A sense of desolation haunts these streets and gates, echoing and shelterless places in which even Wisdom runs wild and cries. Yet within them sometimes we find a scene paced as tensely as in any modern novel, as when a young man in Proverbs steps out,

Passing through the street near her corner; and he went the
way to her house,
In the twilight, in the evening, in the black and dark night:
And, behold, there met him a woman with the attire of an
harlot, and subtil of heart.  (Proverbs 7: 8-10)

Just as stained glass shines more brightly for being set in stone, so the King James gains in splendour by comparison with the Revised Standard, Good News, New International and Heaven-knows-what versions that have come later. Thus John’s magnificent “The Word was with God, and the Word was God” (John 1: 1) has become “The Word was already existing”, scholarship usurping splendour. That lilting line in Genesis (1: 8), “And the evening and the morning were the second day” (note that second “the”, so apparently expendable, yet so necessary to the music) becomes “There was morning, and there was evening”, a broken-backed crawl. The fig-leaf aprons are now reduced to “coverings for themselves”. And the garden planted “eastward in Eden” (Genesis 2: 8), another of the King James’s myriad and scarcely conscious touches of grace, has become “to the east, in Eden”, a place from which the magic has drained away.

Everywhere modern translations are more specific, doubtless more accurate, but always less melodious. The King James, deeply scholarly as it is, displaying the best learning of the day, never forgets that the word of God must be heard, understood and retained by the simple. For them—children repeating after the teacher, workers fidgeting in their best clothes, Tyndale’s own whistling ploughboy—rhythm and music are the best aids to remembering. This is language not for silent study but for reading and declaiming aloud. It needs to work like poetry, and poetry it is.

The King James is famous for its monosyllables, great drumbeats of description or admonition: “And God said, Let there be light: and there was light” (Genesis 1: 3); “The fool hath said in his heart, There is no God” (Psalms 14: 1); “In the sweat of thy face shalt thou eat bread” (Genesis 3: 19). These are fundaments, bases, bricks to build with. Yet its rhythms are also far cleverer than that, endlessly and subtly adjusted. Typically, a King James sentence has two parts broken by a pause around the mid-point, with the first part slightly more declaratory and the second slightly more explanatory: the stronger syllables massed towards the beginning, the weaker crowding softly towards their end. “Surely there is a vein for the silver, and a place for gold where they fine it” (Job 28: 1); “He buildeth his house as a moth, and as a booth that the keeper maketh” (Job 27: 18). But sometimes the order is inverted, and the words too: “As the bird by wandering, as the swallow by flying, so the curse causeless shall not come” (Proverbs 26: 2); “Out of the south cometh the whirlwind: and cold out of the north” (Job 37: 9). Perhaps the whirlwind itself has disordered things. This contrapuntal system even allows for a bit of bathos and fun: “Divers weights are an abomination unto the Lord; and a false balance is not good” (Proverbs 20: 23).

Certain devices were available then which modern writers may well envy. The old English language allowed rhythms and syncopations that cannot be employed any more. Consider the use of “even”, dropped in with an almost casual flourish: “And the stars of heaven fell unto the earth, even as a fig tree casteth her untimely figs, when she is shaken of a mighty wind” (Revelation 6: 13). Or “neither”, used in the same way: “Many waters cannot quench love, neither can the floods drown it” (Song of Solomon 8: 7). Modern translations separate those two thoughts, but the beauty lies in their conjunction with a word as light as air.

Undoubtedly the King James has been enhanced for us by the music that now curls round it. “For unto us a child is born” (Isaiah 9: 6) can’t now be read without Handel’s tripping chorus, or “Man that is born of a woman” without Purcell’s yearning melancholy (“He cometh forth like a flower, and is cut down” Job 14: 2). Even “To every thing there is a season”, from Ecclesiastes (3: 1), is now overlaid with the nasal, gently stoned tones of the Byrds. Yet the King James also lured these musicians in the beginning, snaring them with stray lines that were already singing. “Stay me with flagons, comfort me with apples, for I am sick of love” (Song of Solomon 2: 5). “Thou hast heard me from the horns of the unicorns” (Psalms 22: 21). “The heavens declare the glory of God; and the firmament sheweth his handywork” (Psalms 19: 1). “I am a brother to dragons, and a companion to owls” (Job 30: 29). Or this, also from the Book of Job, possibly the most beautiful of all the Bible’s books—a passage that flows from one astonishingly random and sudden question, “Hast thou entered into the treasures of the snow?” (Job 38:22):

Hath the rain a father? Or who hath begotten the drops of
dew?
Out of whose womb came the ice? And the hoary frost of
heaven, who hath gendered it?
The waters are hid as with a stone, and the face of the deep
is frozen.
Canst thou bind the sweet influences of Pleiades, or loose
the bands of Orion?  (Job 38:28-31)

The beauty of this is inherent, deep in the original mind and eye that formed it. But again, the translators have made choices here: “hid” rather than “hidden”, “gendered” rather than “engendered”, all for the very best rhythmic reasons.  We can trust them; we know that they would certainly have employed “hidden” and “engendered” if the music called for it. Unfailingly, their ear is sure. And if we suspect that rhythm sometimes matters more than meaning, that is fine too: it leaves space for the sacred and numinous, that which cannot be grasped, that which lies beyond all words, to move within the lines.

That subtle notion of divinity, however, is seldom uppermost in the Old Testament. This God smites a lot. Three close-printed columns of Young’s Concordance are filled with his smiting, lightly interspersed with other people’s. Mere men use hand weapons, bows and arrows, or, with Jacobean niftiness, “the edge of the sword”; but the God of the King James simply smites, whether Moabites or Jebusites, vines or rocks or first-born, like a broad, bright thunderbolt. No other word could be so satisfactory, the opening consonants clenched like a fist that propels God’s anger down, and in, and on. We know that these are tough workman’s hands: this is the God who “stretcheth out the north over the empty place, and hangeth the earth upon nothing” (Job 26: 7). Smiting must have survived after the King James; but perhaps it was now so soft with over-use, so bruised, that it faded out of the language.

This God surprises, too. He “hisses unto” people, perhaps a cross between a whistle and a whoop, as if marshalling a yard of hens. God goes before, “preventing” us; he whips off our disguises, our clothes or our leaves, “discovering” us, and the shock of the original meanings of those words alerts us to the origins of power itself. “Who can stay the bottles of heaven?” cries a voice in Job 38: 37; and we suspect God again, like a teenage yob this time, lurking in his pavilion of cloud.

At moments like this it also seems that the translators themselves might be mystified, fingers scratching neat beards while they survey the incomprehensible words. Did they really understand, for example, that odd medical diagnosis in Proverbs: “The blueness of a wound cleanseth away evil; so do stripes the inward parts of the belly” (20: 30)? Or these lines from the last chapter of Ecclesiastes, the mystifying staple of so many funerals?

…they shall be afraid of that which is high, and fears shall
be in the way, and the almond tree shall flourish, and
the grasshopper shall be a burden, and desire shall fail;
because man goeth to his long home, and the mourners
go about the streets:
Or ever the silver cord be loosed, or the golden bowl be
broken, or the pitcher be broken at the fountain, or the
wheel broken at the cistern.
Then shall the dust return to the earth as it was… (Ecclesiastes 12: 5-7)

These are surreal images, unlikely litter in the fields and streets; but they are made all the more potent by the heavy phrasing, the inevitability of the building lines, and the conscious repetition, broken, broken, broken. We know our translators have plenty of synonyms up their sleeves. They choose not to use them. When these lines are read, though we barely know what they mean, they spell despair. And they are meant to, as the man in the pulpit in a moment reminds us:

Vanity of vanities, saith the preacher; all is vanity (Ecclesiastes 12: 8)

Yet often, too, a spirit of playfulness seems to be at work. Consider, lastly, the rain. This is ordinary rain most of the time, malqosh in the Hebrew, and all modern translations make it so. But in the King James we also have “the latter rain”, and “small rain”, and we are alerted to their delicacy and difference. Small rain (“as the small rain upon the tender herb” Deuteronomy 32: 2) is presumably the sort that blows in the air, that makes no imprint on a puddle; the Irish would call it a soft day. And latter rain, perhaps, is the sort that skulks at the end of an afternoon, or suddenly cascades down in an autumn gust, or patters for a desultory few minutes after a day of approaching thunder—and then we open our mouths wide to it, laughing, grateful, as for the word of God.

Ann Wroe is obituaries and briefings editor of The Economist and author of “Being Shelley”.