Posted in Academic writing, History, Writing

On Writing: How the King James Bible Shaped the English Language and Still Teaches Us How to Write

When you’re interested in improving your writing, it’s a good idea to have some models to work from.  I’ve presented some of my favorite models in this blog.  These have included a number of examples of good writing by both academics (Max Weber, E.P. Thompson, Jim March, and Mary Metz) and nonacademics (Frederick Douglass, Elmore Leonard).

Today I want to explore one of the two most influential forces in shaping the English language over the years:  The King James Bible.  (The other, of course, is Shakespeare.)  Earlier I presented one analysis by Ann Wroe, which focused on the thundering sound of the prose in this extraordinary text.  Today I want to draw on two other pieces of writing that explore the powerful model that this bible provides us all for how to write in English with power and grace.  One is by Adam Nicolson, who wrote a book on the subject (God’s Secretaries: The Making of the King James Bible).  The other, which I reprint in full at the end of this post, is by Charles McGrath.

The impulse to produce a bible in English arose with the English Reformation, as a Protestant vernacular alternative to the Latin version that was canonical in the Catholic Church.  The text was commissioned in 1604 by King James I, who succeeded Elizabeth I after her long reign, and it was constructed by a committee of 54 scholars.  They went back to the original texts in Hebrew and Greek, but they drew heavily on earlier English translations.

The foundational translation was written by William Tyndale, who was executed for heresy in 1536, and this was reworked into what became known as the Geneva Bible by Calvinists who were living in Switzerland.  One aim of the committee was to produce a version that was more compatible with the beliefs of the English and Scottish versions of the faith, but for James the primary impetus was to remove the anti-royalist tone that was embedded within the earlier text.  Recent scholars have concluded that 84% of the words in the King James New Testament and 76% in the Old Testament are Tyndale’s.

As Nicolson puts it, the language of the King James Bible is an amazing mix — “majestic but intimate, the voice of the universe somehow heard in the innermost part of the ear.”

You don’t have to be a Christian to hear the power of those words—simple in vocabulary, cosmic in scale, stately in their rhythms, deeply emotional in their impact. Most of us might think we have forgotten its words, but the King James Bible has sewn itself into the fabric of the language. If a child is ever the apple of her parents’ eye or an idea seems as old as the hills, if we are at death’s door or at our wits’ end, if we have gone through a baptism of fire or are about to bite the dust, if it seems at times that the blind are leading the blind or we are casting pearls before swine, if you are either buttering someone up or casting the first stone, the King James Bible, whether we know it or not, is speaking through us. The haves and have-nots, heads on plates, thieves in the night, scum of the earth, best until last, sackcloth and ashes, streets paved in gold, and the skin of one’s teeth: All of them have been transmitted to us by the translators who did their magnificent work 400 years ago.

Wouldn’t it be lovely if we academics could write in a way that sticks in people’s minds for 400 years?  Well, maybe that’s a bit too much to hope for.  But even if we can’t aspire to be epochally epigrammatic, there are still lessons we can learn from Tyndale and the Group of 54.

One such lesson is the power of simplicity.  Too often scholars feel the compulsion to gussy up their language with jargon and Latinate constructions in the name of professionalism.  If any idiot can understand what you’re saying, then you’re not being a serious scholar.  But the magic of the King James Bible is that it uses simple Anglo-Saxon words to make the most profound statements.  Listen to this passage from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favor to men of skill, but time and chance happeneth to them all.

Or this sentence from Paul’s letter to the Philippians:

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Or the stunning opening line of the Gospel of John:

In the beginning was the Word, and the Word was with God, and the Word was God.

This is a text that can speak clearly to the untutored while at the same time elevating them to a higher plane.  For us it’s a model for how to match simplicity with profundity.


Why the King James Bible Endures

By CHARLES McGRATH

The King James Bible, which was first published 400 years ago next month, may be the single best thing ever accomplished by a committee. The Bible was the work of 54 scholars and clergymen who met over seven years in six nine-man subcommittees, called “companies.” In a preface to the new Bible, Miles Smith, one of the translators and a man so impatient that he once walked out of a boring sermon and went to the pub, wrote that anything new inevitably “endured many a storm of gainsaying, or opposition.” So there must have been disputes — shouting; table pounding; high-ruffed, black-gowned clergymen folding their arms and stomping out of the room — but there is no record of them. And the finished text shows none of the PowerPoint insipidness we associate with committee-speak or with later group translations like the 1961 New English Bible, which T.S. Eliot said did not even rise to “dignified mediocrity.” Far from bland, the King James Bible is one of the great masterpieces of English prose.

The issue of how, or even whether, to translate sacred texts was a fraught one in those days, often with political as well as religious overtones, and it still is. The Roman Catholic Church, for instance, recently decided to retranslate the missal used at Mass to make it more formal and less conversational. Critics have complained that the new text is awkward and archaic, while its defenders (some of whom probably still prefer the Mass in Latin) insist that’s just the point — that language a little out of the ordinary is more devotional and inspiring. No one would ever say that the King James Bible is an easy read. And yet its very oddness is part of its power.

From the start, the King James Bible was intended to be not a literary creation but rather a political and theological compromise between the established church and the growing Puritan movement. What the king cared about was clarity, simplicity, doctrinal orthodoxy. The translators worked hard on that, going back to the original Hebrew, Greek and Aramaic, and yet they also spent a lot of time tweaking the English text in the interest of euphony and musicality. Time and again the language seems to slip almost unconsciously into iambic pentameter — this was the age of Shakespeare, commentators are always reminding us — and right from the beginning the translators embraced the principles of repetition and the dramatic pause: “In the beginning God created the Heaven, and the Earth. And the earth was without forme, and void, and darkenesse was upon the face of the deepe: and the Spirit of God mooved upon the face of the waters.”

The influence of the King James Bible is so great that the list of idioms from it that have slipped into everyday speech, taking such deep root that we use them all the time without any awareness of their biblical origin, is practically endless: sour grapes; fatted calf; salt of the earth; drop in a bucket; skin of one’s teeth; apple of one’s eye; girded loins; feet of clay; whited sepulchers; filthy lucre; pearls before swine; fly in the ointment; fight the good fight; eat, drink and be merry.

But what we also love about this Bible is its strangeness — its weird punctuation, odd pronouns (as in “Our Father, which art in heaven”), all those verbs that end in “eth”: “In the morning it flourisheth, and groweth up; in the evening it is cut downe, and withereth.” As Robert Alter has demonstrated in his startling and revealing translations of the Psalms and the Pentateuch, the Hebrew Bible is even stranger, and in ways that the King James translators may not have entirely comprehended, and yet their text performs the great trick of being at once recognizably English and also a little bit foreign. You can hear its distinctive cadences in the speeches of Lincoln, the poetry of Whitman, the novels of Cormac McCarthy.

Even in its time, the King James Bible was deliberately archaic in grammar and phraseology: an expression like “yea, verily,” for example, had gone out of fashion some 50 years before. The translators didn’t want their Bible to sound contemporary, because they knew that contemporaneity quickly goes out of fashion. In his very useful guide, “God’s Secretaries: The Making of the King James Bible,” Adam Nicolson points out that when the Victorians came to revise the King James Bible in 1885, they embraced this principle wholeheartedly, and like those people who whack and scratch old furniture to make it look even more ancient, they threw in a lot of extra Jacobeanisms, like “howbeit,” “peradventure,” “holden” and “behooved.”

This is the opposite, of course, of the procedure followed by most new translations, starting with Good News for Modern Man, a paperback Bible published by the American Bible Society in 1966, whose goal was to reflect not the language of the Bible but its ideas, rendering them into current terms, so that Ezekiel 23:20, for example (“For she doted upon their paramours, whose flesh is as the flesh of asses, and whose issue is like the issue of horses”) becomes “She was filled with lust for oversexed men who had all the lustfulness of donkeys or stallions.”

There are countless new Bibles available now, many of them specialized: a Bible for couples, for gays and lesbians, for recovering addicts, for surfers, for skaters and skateboarders, not to mention a superheroes Bible for children. They are all “accessible,” but most are a little tone-deaf, lacking in grandeur and majesty, replacing “through a glasse, darkly,” for instance, with something along the lines of “like a dim image in a mirror.” But what this modernizing ignores is that the most powerful religious language is often a little elevated and incantatory, even ambiguous or just plain hard to understand. The new Catholic missal, for instance, does not seem to fear the forbidding phrase, replacing the statement that Jesus is “one in being with the Father” with the more complicated idea that he is “consubstantial with the Father.”

Not everyone prefers a God who talks like a pal or a guidance counselor. Even some of us who are nonbelievers want a God who speaketh like — well, God. The great achievement of the King James translators is to have arrived at a language that is both ordinary and heightened, that rings in the ear and lingers in the mind. And that all 54 of them were able to agree on every phrase, every comma, without sounding as gassy and evasive as the Financial Crisis Inquiry Commission, is little short of amazing, in itself proof of something like divine inspiration.

 

Posted in Academic writing, Writing

Elmore Leonard’s Master Class on Writing a Scene

As you may have figured out by now, I’m a big fan of Elmore Leonard.  I wrote an earlier post about the deft way he leads you into a story and introduces a character on the very first page of a book.  He never gives his readers fits the way we academic writers do ours, by making them plow through half a paper before they finally discover its point.

Here I want to show you one of the best scenes Leonard ever wrote — and he wrote a lot of them.  It’s from the book Be Cool, which is the sequel to another called Get Shorty.  Both were turned into films starring John Travolta as Chili Palmer.  Chili is a loan shark from back east who heads to Hollywood to collect on a marker, but what he really wants is to make movies.  As a favor, he looks up a producer who owes someone else money, and instead of collecting he pitches a story.  The rest of the series is about the cinematic mess that ensues.


In the scene below, Chili runs into a minor thug floating in a backyard swimming pool.  In the larger story this is a nothing scene, but it’s stunning how Leonard turns it into a tour de force.  In a virtuoso display of writing, he shows Chili effortlessly take the thug apart while also mesmerizing him.  Chili the movie maker rewrites the scene as he’s acting it out and then directs the thug on the raft how to play his own part more effectively.

Watch how Chili does it:

He got out of there, went into the living room and stood looking around, seeing it now as the lobby of an expensive health club, a spa: walk through there to the pool where one of the guests was drying out. From here Chili had a clear view of Derek, the kid floating in the pool on the yellow raft, sun beating down on him, his shades reflecting the light. Chili walked outside, crossed the terrace to where a quart bottle of Absolut, almost full, stood at the tiled edge of the pool. He looked down at Derek laid out in his undershorts.

He said, “Derek Stones?”

And watched the kid raise his head from the round edge of the raft, stare this way through his shades and let his head fall back again.

“Your mother called,” Chili said. “You have to go home.”

A wrought-iron table and chairs with cushions stood in an arbor of shade close to the house. Chili walked over and sat down. He watched Derek struggle to pull himself up and begin paddling with his hands, bringing the raft to the side of the pool; watched him try to crawl out and fall in the water when the raft moved out from under him. Derek made it finally, came over to the table and stood there showing Chili his skinny white body, his titty rings, his tats, his sagging wet underwear.

“You wake me up,” Derek said, “with some shit about I’m suppose to go home? I don’t even know you, man. You from the funeral home? Put on your undertaker suit and deliver Tommy’s ashes? No, I forgot, they’re being picked up. But you’re either from the funeral home or—shit, I know what you are, you’re a lawyer. I can tell ’cause all you assholes look alike.”

Chili said to him, “Derek, are you trying to fuck with me?”

Derek said, “Shit, if I was fucking with you, man, you’d know it.”

Chili was shaking his head before the words were out of Derek’s mouth.

“You sure that’s what you want to say? ‘If I was fuckin with you, man, you’d know it?’ The ‘If I was fucking with you’ part is okay, if that’s the way you want to go. But then, ‘you’d know it’—come on, you can do better than that.”

Derek took off his shades and squinted at him.

“The fuck’re you talking about?”

“You hear a line,” Chili said, “like in a movie. The one guy says, ‘Are you trying to fuck with me?’ The other guy comes back with, ‘If I was fuckin with you, man . . .’ and you want to hear what he says next ’cause it’s the punch line. He’s not gonna say, ‘You’d know it.’ When the first guy says, ‘Are you trying to fuck with me?’ he already knows the guy’s fuckin with him, it’s a rhetorical question. So the other guy isn’t gonna say ‘you’d know it.’ You understand what I’m saying? ‘You’d know it’ doesn’t do the job. You have to think of something better than that.”

“Wait,” Derek said, in his wet underwear, weaving a little, still half in the bag. “The first guy goes, ‘You trying to fuck with me?’ Okay, and the second guy goes, ‘If I was fucking with you . . . If I was fucking with you, man . . .’”

Chili waited. “Yeah?”

“Okay, how about, ‘You wouldn’t live to tell about it?’”

“Jesus Christ,” Chili said, “come on, Derek, does that make sense? ‘You wouldn’t live to tell about it’? What’s that mean? Fuckin with a guy’s the same as taking him out?” Chili got up from the table. “What you have to do, Derek, you want to be cool, is have punch lines on the top of your head for every occasion. Guy says, ‘Are you trying to fuck with me?’ You’re ready, you come back with your line.” Chili said, “Think about it,” walking away. He went in the house through the glass doors to the bedroom.

Don’t you wish you could be Elmore Leonard and write a scene like that, or be Chili Palmer and construct it on the fly?  I sure do, and I’m not sure which role would be the more gratifying.

You could have a lot of fun picking apart the things that make the scene work.  Chili the movie maker walking into the living room and suddenly “seeing it as the lobby of an expensive health spa.”  Derek with “his skinny white body, his titty rings, his tats, his sagging wet underwear.”  The way Derek talks: “The fuck’re you talking about?”  Derek struggling to come up with the right line to replace the lame one he thought up himself.  Chili explaining the core dilemma of the writer, that you can’t ever set up a punchline and then fail to deliver.

But instead of explaining his joke, let’s just learn from his example.  Deliver what you promise.  Reward the effort that your readers invest in engaging with your work.  Have your key insight ready, deliver it on cue, and then walk away.  Never step on the punchline.

Posted in Academic writing

Patricia Limerick: Dancing with Professors

 

In this post, I feature a lovely piece by historian Patricia Limerick called “Dancing with Professors: The Trouble with Academic Prose,” which was published in the Observer in 2015.

Everyone disparages academic writing, and for good reason.  No one reads journal articles for fun.  Limerick, whose work shows she knows something about good writing, finds the problem in the way academics try so hard to sound professional.  From this perspective, writing with clarity and grace carries the stigma of amateurism.  If it’s readily understandable to a layperson, it’s not tenurable.

“We must remember,” she says, “that professors are the ones nobody wanted to dance with in high school.”  We’re not approachable or accessible, and we like it that way.  Any loser can be popular; the academic aspires to be profound.

So we learn turgid writing in graduate school, as part of our induction into the profession, and we stay in this mode for the rest of our careers — long after we have lost the need to shore up our initially shaky credibility as serious scholars.  We constrain ourselves from taking flight with language even after the shackles of grad school have fallen away.

Don’t miss her discussion of how academic writers are like buzzards on a tree limb.  Really.  It’ll stick with you for a long long time.

Enjoy.


Dancing with Professors:

The Trouble with Academic Prose

Patricia Nelson Limerick

Professor of History, University of Colorado

In ordinary life, when a listener cannot understand what someone has said, this is the usual exchange:

Listener: I cannot understand what you are saying.

Speaker: Let me try to say it more clearly.

But in scholarly writing in the late 20th century, other rules apply. This is the implicit exchange:

Reader: I cannot understand what you are saying.

Academic Writer: Too bad. The problem is that you are an unsophisticated and untrained reader. If you were smarter, you would understand me.

The exchange remains implicit, because no one wants to say, “This doesn’t make any sense,” for fear that the response, “It would, if you were smarter,” might actually be true.

While we waste our time fighting over ideological conformity in the scholarly world, horrible writing remains a far more important problem. For all their differences, most right-wing scholars and most left-wing scholars share a common allegiance to a cult of obscurity. Left, right and center all hide behind the idea that unintelligible prose indicates a sophisticated mind. The politically correct and the politically incorrect come together in the violence they commit against the English language.

University presses have certainly filled their quota every year, in dreary monographs, tangled paragraphs and impenetrable sentences. But trade publishers have also violated the trust of innocent and hopeful readers. As a prime example of unprovoked assaults on innocent words, consider the verbal behavior of Allan Bloom in “The Closing of the American Mind,” published by a large mainstream press. Here is a sample:

“If openness means to ‘go with the flow,’ it is necessarily an accommodation to the present. That present is so closed to doubt about so many things impeding the progress of its principles that unqualified openness to it would mean forgetting the despised alternatives to it, knowledge of which makes us aware of what is doubtful in it.”

Is there a reader so full of blind courage as to claim to know what this sentence means? Remember, the book in which this remark appeared was a lamentation over the failings of today’s students, a call to arms to return to tradition and standards in education. And yet, in 20 years of paper grading, I do not recall many sentences that asked, so pathetically, to be put out of their misery.

Jump to the opposite side of the political spectrum from Allan Bloom, and literary grace makes no noticeable gains. Contemplate this breathless, indefatigable sentence from the geographer, Allan Pred, and Mr. Pred and Bloom seem, if only in literary style, to be soul mates.

“If what is at stake is an understanding of geographical and historical variations in the sexual division of productive and reproductive labor, of contemporary local and regional variations in female wage labor and women’s work outside the formal economy, of on-the-ground variations in the everyday content of women’s lives, inside and outside of their families, then it must be recognized that, at some nontrivial level, none of the corporal practices associated with these variations can be severed from spatially and temporally specific linguistic practices, from language that not only enable the conveyance of instructions, commands, role depictions and operating rules, but that also regulate and control, that normalize and spell out the limits of the permissible through the conveyance of disapproval, ridicule and reproach.”

In this example, 124 words, along with many ideas, find themselves crammed into one sentence. In their company, one starts to get panicky. “Throw open the windows; bring in the oxygen tanks!” one wants to shout. “These words and ideas are nearly suffocated. Get them air!” And yet the condition of this desperately packed and crowded sentence is a perfectly familiar one to readers of academic writing, readers who have simply learned to suppress the panic.

Everyone knows that today’s college students cannot write, but few seem willing to admit that the professors who denounce them are not doing much better. The problem is so blatant that there are signs that the students are catching on. In my American history survey course last semester, I presented a few writing rules that I intended to enforce inflexibly. The students looked more and more peevish; they looked as if they were about to run down the hall, find a telephone, place an urgent call and demand that someone from the A.C.L.U. rush up to campus to sue me for interfering with their First Amendment rights to compose unintelligible, misshapen sentences.

Finally one aggrieved student raised her hand and said, “You are telling us not to write long, dull sentences, but most of our reading is full of long, dull sentences.”

As this student was beginning to recognize, when professors undertake to appraise and improve student writing, the blind are leading the blind. It is, in truth, difficult to persuade students to write well when they find so few good examples in their assigned reading.

The current social and judicial context for higher education makes this whole issue pressing. In Colorado, as in most states, the legislators are convinced that the university is neglecting students and wasting state resources on pointless research. Under those circumstances, the miserable writing habits of professors pose a direct and concrete danger to higher education. Rather than going to the state legislature, proudly presenting stacks of the faculty’s compelling and engaging publications, you end up hoping that the lawmakers stay out of the library and stay away, especially, from the periodical room, with its piles of academic journals. The habits of academic writers lend powerful support to the impression that research is a waste of the writers’ time and of the public’s money.

Why do so many professors write bad prose?

Ten years ago, I heard a classics professor say the single most important thing — in my opinion — that anyone has said about professors. “We must remember,” he declared, “that professors are the ones nobody wanted to dance with in high school.”

This is an insight that lights up the universe — or at least the university. It is a proposition that every entering freshman should be told, and it is certainly a proposition that helps to explain the problem of academic writing. What one sees in professors, repeatedly, is exactly the manner that anyone would adopt after a couple of sad evenings sidelined under the crepe-paper streamers in the gym, sitting on a folding chair while everyone else danced. Dignity, for professors, perches precariously on how well they can convey this message: “I am immersed in some very important thoughts, which unsophisticated people could not even begin to understand. Thus, I would not want to dance, even if one of you unsophisticated people were to ask me.”

Think of this, then, the next time you look at an unintelligible academic text. “I would not want the attention of a wide reading audience, even if a wide audience were to ask for me.” Isn’t that exactly what the pompous and pedantic tone of the classically academic writer conveys?

Professors are often shy, timid and fearful people, and under those circumstances, dull, difficult prose can function as a kind of protective camouflage. When you write typical academic prose, it is nearly impossible to make a strong, clear statement. The benefit here is that no one can attack your position, say you are wrong or even raise questions about the accuracy of what you have said, if they cannot tell what you have said. In those terms, awful, indecipherable prose is its own form of armor, protecting the fragile, sensitive thoughts of timid souls.

The best texts for helping us understand the academic world are, of course, Lewis Carroll’s Alice’s Adventures in Wonderland and Through the Looking Glass. Just as devotees of Carroll would expect, he has provided us with the best analogy for understanding the origin and function of bad academic writing. Tweedledee and Tweedledum have quite a heated argument over a rattle. They become so angry that they decide to fight. But before they fight, they go off to gather various devices of padding and protection: “bolsters, blankets, hearthrugs, tablecloths, dish covers and coal scuttles.” Then, with Alice’s help in tying and fastening, they transform these household items into armor. Alice is not impressed: “‘Really, they’ll be more like bundles of old clothes than anything else, by the time they’re ready!’ she said to herself, as she arranged a bolster round the neck of Tweedledee, ‘to keep his head from being cut off,’ as he said.” Why this precaution? Because, Tweedledee explains, “it’s one of the most serious things that can possibly happen to one in a battle — to get one’s head cut off.”

Here, in the brothers’ anxieties and fears, we have an exact analogy for the problems of academic writing. The next time you look at a classically professorial sentence — long, tangled, obscure, jargonized, polysyllabic — think of Tweedledum and Tweedledee dressed for battle, and see if those timid little thoughts, concealed under layers of clauses and phrases, do not remind you of those agitated but cautious brothers, arrayed in their bolsters, blankets, dish covers and coal scuttles. The motive, too, is similar. Tweedledum and Tweedledee were in terror of being hurt, and so they padded themselves so thoroughly that they could not be hurt; nor, for that matter, could they move. A properly dreary, inert sentence has exactly the same benefit; it protects its writer from sharp disagreement, while it also protects him from movement.

Why choose camouflage and insulation over clarity and directness? Tweedledee, of course, spoke for everyone, academic or not, when he confessed his fear. It is indeed, as he said, “one of the most serious things that can possibly happen to one in a battle — to get one’s head cut off.” Under those circumstances, logic says: tie the bolster around the neck, and add a protective hearthrug or two. Pack in another qualifying clause or two. Hide behind the passive-voice verb. Preface any assertion with a phrase like “it could be argued” or “a case could be made.” Protecting one’s neck does seem to be the way to keep one’s head from being cut off.

Graduate school implants in many people the belief that there are terrible penalties to be paid for writing clearly, especially writing clearly in ways that challenge established thinking in the field. And yet, in academic warfare (and I speak as a veteran) your head and your neck are rarely in serious danger. You can remove the bolster and the hearthrug. Your opponents will try to whack at you, but they will seldom, if ever, land a blow — in large part because they are themselves so wrapped in protective camouflage and insulation that they lose both mobility and accuracy.

So we have a widespread pattern of professors protecting themselves from injury by wrapping their ideas in dull prose, and yet the danger they try to fend off is not a genuine danger. Express yourself clearly, and it is unlikely that either your head — or, more important, your tenure — will be cut off.

How, then, do we save professors from themselves? Fearful people are not made courageous by scolding; they need to be coaxed and encouraged. But how do we do that, especially when this particular form of fearfulness masks itself as pomposity, aloofness and an assured air of superiority?

Fortunately, we have available the world’s most important and illuminating story on the difficulty of persuading people to break out of habits of timidity, caution, and unnecessary fear. I borrow this story from Larry McMurtry, one of my rivals in the interpreting of the American West, though I am putting the story to a use that Mr. McMurtry did not intend.

In a collection of his essays, In a Narrow Grave, Mr. McMurtry wrote about the weird process of watching his book Horseman, Pass By being turned into the movie Hud. He arrived in the Texas Panhandle a week or two after filming had started, and he was particularly anxious to learn how the buzzard scene had gone. In that scene, Paul Newman was supposed to ride up and discover a dead cow, look up at a tree branch lined with buzzards and, in his distress over the loss of the cow, fire his gun at one of the buzzards. At that moment, all of the other buzzards were supposed to fly away into the blue Panhandle sky.

But when Mr. McMurtry asked people how the buzzard scene had gone, all he got, he said, were “stricken looks.”

The first problem, it turned out, had to do with the quality of the available local buzzards — who proved to be an excessively scruffy group. So more appealing, more photogenic buzzards had to be flown in from some distance and at considerable expense.

But then came the second problem: how to keep the buzzards sitting on the tree branch until it was time for their cue to fly.

That seemed easy. Wire their feet to the branch, and then, after Paul Newman fires his shot, pull the wire, releasing their feet, thus allowing them to take off.

But, as Mr. McMurtry said in an important and memorable phrase, the film makers had not reckoned with the “mentality of buzzards.” With their feet wired, the buzzards did not have enough mobility to fly. But they did have enough mobility to pitch forward.

So that’s what they did: with their feet wired, they tried to fly, pitched forward, and hung upside down from the dead branch, with their wings flapping.

I had the good fortune a couple of years ago to meet a woman who had been an extra for this movie, and she added a detail that Mr. McMurtry left out of his essay: namely, the buzzard circulatory system does not work upside down, and so, after a moment or two of flapping, the buzzards passed out.

Twelve buzzards hanging upside down from a tree branch: this was not what Hollywood wanted from the West, but that’s what Hollywood had produced.

And then we get to the second stage of buzzard psychology. After six or seven episodes of pitching forward, passing out, being revived, being replaced on the branch and pitching forward again, the buzzards gave up. Now, when you pulled the wire and released their feet, they sat there, saying in clear, nonverbal terms: “We tried that before. It did not work. We are not going to try it again.” Now the film makers had to fly in a high-powered animal trainer to restore buzzard self-esteem. It was all a big mess. Larry McMurtry got a wonderful story out of it; and we, in turn, get the best possible parable of the workings of habit and timidity.

How does the parable apply? In any and all disciplines, you go to graduate school to have your feet wired to the branch. There is nothing inherently wrong with that: scholars should have some common ground, share some background assumptions, hold some similar habits of mind. This gives you, quite literally, your footing. And yet, in the process of getting your feet wired, you have some awkward moments, and the intellectual equivalent of pitching forward and hanging upside down. That experience, especially if you do it in a public place like a seminar, provides no pleasure. One or two rounds of that humiliation, and the world begins to seem like a treacherous place. Under those circumstances, it does indeed seem to be the choice of wisdom to sit quietly on the branch, to sit without even the thought of flying, since even the thought might be enough to tilt the balance and set off another round of flapping, fainting and embarrassment.

Yet when scholars get out of graduate school and get Ph.D.’s, and, even more important, when scholars get tenure, the wire is truly pulled. Their feet are free. They can fly whenever and wherever they like. Yet by then the second stage of buzzard psychology has taken hold, and they refuse to fly. The wire is pulled, and yet the buzzards sit there, hunched and grumpy. If they teach in a university with a graduate program, they actively instruct young buzzards in the necessity of keeping their youthful feet on the branch.

This is a very well-established pattern, and it is the ruination of scholarly activity in the modern world. Many professors who teach graduate students think that one of their principal duties is to train students in the conventions of academic writing.

I do not believe that professors enforce a standard of dull writing on graduate students in order to be cruel. They demand dreariness because they think that dreariness is in the students’ best interests. Professors believe that a dull writing style is an academic survival skill because they think that is what editors want, both editors of academic journals and editors of university presses. What we have here is a chain of misinformation and misunderstanding, where everyone thinks that the other guy is the one who demands dull, impersonal prose.

Let me say again what is at stake here: universities and colleges are currently embattled, distrusted by the public and state funding institutions. As distressing as this situation is, it provides the perfect setting and the perfect timing for declaring an end to scholarly publication as a series of guarded conversations between professors.

The redemption of the university, especially in terms of the public’s appraisal of the value of research and publication, requires all the writers who have something they want to publish to ask themselves the question: Does this have to be a closed communication, shutting out all but specialists willing to fight their way through the thickest of jargon? Or can this be an open communication, engaging specialists with new information and new thinking, but also offering an invitation to nonspecialists to learn from this study, to grasp its importance, and by extension, to find concrete reasons to see value in the work of the university?

This is a country in need of wisdom, and of clearly reasoned conviction and vision. And that, at the bedrock, is the reason behind this campaign to save professors from themselves and to detoxify academic prose. The context is a bit different, but the statement that Willy Loman made to his sons in Death of a Salesman keeps coming to mind: “The woods are burning boys, the woods are burning.” In a society confronted by a faltering economy, racial and ethnic conflicts, and environmental disasters, “the woods are burning,” and since we so urgently need everyone’s contribution in putting some of these fires out, there is no reason to indulge professorial vanity or timidity.

Ego is, of course, the key obstacle here. As badly as most of them write, professors are nonetheless proud and sensitive writers, resistant to criticism. But even the most desperate cases can be redeemed and persuaded to think of writing as a challenging craft, not as existential trauma. A few years ago, I began to look at carpenters and other artisans as the emotional model for writers. A carpenter, let us say, makes a door for a cabinet. If the door does not hang straight, the carpenter does not say, “I will not change that door; it is an expression of my individuality; who cares if it will not close?” Instead, the carpenter removes the door and works on it until it fits. That attitude, applied to writing, could be our salvation. If we thought more like carpenters, academic writers could find a route out of the trap of ego and vanity. Escaped from that trap, we could simply work on successive drafts until what we have to say is clear.

Colleges and universities are filled with knowledgeable, thoughtful people who have been effectively silenced by an awful writing style, a style with its flaws concealed behind a smokescreen of sophistication and professionalism. A coalition of academic writers, graduate advisers, journal editors, university press editors and trade publishers can seize this moment and pull the wire. The buzzards can be set free: free to leave that dead tree branch, free to regain their confidence, free to soar.

Posted in Academic writing, Wit, Writing

Wit (and the Art of Writing)

 

They laughed when I told them I wanted to be a comedian. Well, they’re not laughing now.

Bob Monkhouse

Wit is notoriously difficult to analyze, and any effort to do so is likely to turn out dry and witless.  But two recent authors have done a remarkably effective job of trying to make sense of what constitutes wit, and they manage to do so wittily.  That’s a risky venture, which most sensible people would avoid like COVID-19.  One book is Wit’s End by James Geary; the other is Humour by Terry Eagleton.  The epigraph comes from Eagleton.  Both have the good sense to reflect on the subject without analyzing it to death or trampling on the punchline.  Eagleton uses Freud as a negative case in point:

Children, insists Freud, lack all sense of the comic, but it is possible he is confusing them with the author of a notoriously unfunny work entitled Jokes and Their Relation to the Unconscious.

Interestingly, Geary says that wit begins with the pun.

Despite its bad reputation, punning is, in fact, among the highest displays of wit. Indeed, puns point to the essence of all true wit—the ability to hold in the mind two different ideas about the same thing at the same time.

In poems, words rhyme; in puns, ideas rhyme. This is the ultimate test of wittiness: keeping your balance even when you’re of two minds.

Groucho’s quip upon entering a restaurant and seeing a previous spouse at another table—“Marx spots the ex.”


Instead of avoiding ambiguity, wit revels in it, using paradoxical juxtaposition to shake you out of a trance and ask you to consider an issue from a strikingly different angle.  Arthur Koestler described the pun as “two strings of thought tied together by an acoustic knot.”  There’s an echo here of Emerson’s epigram, “A foolish consistency is the hobgoblin of little minds…”  Misdirection can lead to comic relief but it can also produce intellectual insight.

Geary goes on to show how the joke is integrally related to other forms of creative thought:

There is no sharp boundary splitting the wit of the scientist, inventor, or improviser from that of the artist, the sage, or the jester. The creative experience moves seamlessly from the “Aha!” of scientific discovery to the “Ah” of aesthetic insight to the “Ha-ha” of the pun and the punch line.  “Comic discovery is paradox stated—scientific discovery is paradox resolved,” Koestler wrote.

He shows that wit and metaphor have a lot in common.

If wit consists, as we say, in the ability to hold in the mind two different ideas about the same thing at the same time, this is exactly the function of metaphor. A metaphor carries the attention from the concrete to the abstract, from object to concept. When that direction is reversed, and attention is brought back from concept to object, the mind is surprised. Mistaking the figurative for fact is therefore a signature trick of wit.

Hence it is said that kleptomaniacs don’t understand metaphor because they take things literally.

Both wit and metaphor have these qualities in common:  “brevity, novelty, and clarity.”

Read my lips. Shoot from the hip. Wit switch hits. Wit ad-libs. It teaches new dogs lotsa old tricks. Throw spaghetti ’gainst the wall—wit’s what sticks. You can’t beat it or repeat it, not even with a shtick. Wit rocks the boat. That’s all she wrote.

Eagleton picks up Geary’s theme of how wit and metaphor are grounded in the “aha” of incongruity.

There are many theories of humour in addition to those we have looked at. They include the play theory, the conflict theory, the ambivalence theory, the dispositional theory, the mastery theory, the Gestalt theory, the Piagetian theory and the configurational theory. Several of these, however, are really versions of the incongruity theory, which remains the most plausible account of why we laugh. On this view, humour springs from a clash of incongruous aspects – a sudden shift of perspective, an unexpected slippage of meaning, an arresting dissonance or discrepancy, a momentary defamiliarising of the familiar and so on. As a temporary ‘derailment of sense’, it involves the disruption of orderly thought processes or the violation of laws or conventions. It is, as D. H. Munro puts it, a breach in the usual order of events.

“The Duke’s a long time coming today,” said the Duchess, stirring her tea with the other hand.


He talks about how humor gives us license to be momentarily freed from the shackles of reason and order, a revolt of the id against the superego.  But the key is that reason and order are quickly restored, so the lapse of control is risk free.

As a pure enunciation that expresses nothing but itself, laughter lacks intrinsic sense, rather like an animal’s cry, but despite this it is richly freighted with cultural meaning. As such, it has a kinship with music. Not only has laughter no inherent meaning, but at its most riotous and convulsive it involves the disintegration of sense, as the body tears one’s speech to fragments and the id pitches the ego into temporary disarray. As with grief, severe pain, extreme fear or blind rage, truly uproarious laughter involves a loss of physical self-control, as the body gets momentarily out of hand and we regress to the uncoordinated state of the infant. It is quite literally a bodily disorder.

It is just the same with the fantasy revolution of carnival, when the morning after the merriment the sun will rise on a thousand empty wine bottles, gnawed chicken legs and lost virginities and everyday life will resume, not without a certain ambiguous sense of relief. Or think of stage comedy, where the audience is never in any doubt that the order so delightfully disrupted will be restored, perhaps even reinforced by this fleeting attempt to flout it, and thus can blend its anarchic pleasures with a degree of conservative self-satisfaction.

Like Geary, Eagleton shows how a key to wit is its ability to hone down an issue to a sharp point, which is captured in a verbal succinctness that is akin to poetry.

Wit has a point, which is why it is sometimes compared to the thrust of a rapier. It is rapier-like in its swift, shapely, streamlined, agile, flashing, glancing, dazzling, dexterous, pointed, clashing, flamboyant aspects, but also because it can stab and wound.

A witticism is a self-conscious verbal performance, but it is one that minimises its own medium, compacting its words into the slimmest possible space in an awareness that the slightest surplus of signification might prove fatal to its success. As with poetry, every verbal unit must pull its weight, and the cadence, rhythm and resonance of a piece of wit may be vital to its impact. The tighter the organisation, the more a verbal slide, ambiguity, conceptual shift or trifling dislocation of syntax registers its effect.

There is a strong lesson for writers in this discussion of wit.  Sharpen the argument, tighten the prose, focus on “brevity, novelty, and clarity.”  Learn from the craft of the poet and the comedian.  Less is more.

One problem with academic writing in particular is that it takes itself too seriously.  It pays for us to keep our wit about us as we write scholarly papers, acknowledging that we don’t know quite as much about the subject as we are letting on.  Conceding a bit of weakness can be quite appealing.  Oscar Wilde:  “I can resist anything but temptation.”

Everyday life involves sustaining a number of polite fictions: that we take a consuming interest in the health and well-being of our most casual acquaintances, that we never think about sex for a single moment, that we are thoroughly familiar with the later work of Schoenberg and so on. It is pleasant to drop the mask for a moment and strike up a comedic solidarity of weakness.

It is as though we are all really play-actors in our conventional social roles, sticking solemnly to our meticulously scripted parts but ready at the slightest fluff or stumble to dissolve into infantile, uproariously irresponsible laughter at the sheer arbitrariness and absurdity of the whole charade.

And don’t forget what Mel Brooks said:  Tragedy is when you cut your finger, and comedy is when someone else walks into an open sewer and dies.

Posted in Academic writing, Higher Education, Teaching, Writing

I Would Rather Do Anything Else than Grade Your Final Papers — Robin Lee Mozer

If the greatest joy that comes from retirement is that I no longer have to attend faculty meetings, the second greatest joy is that I no longer have to grade student papers.  I know, I know: commenting on student writing is a key component of being a good teacher, and there’s a real satisfaction that comes from helping someone become a better thinker and better writer.

But most students are not producing papers to improve their minds or hone their writing skills.  They’re just trying to fulfill a course requirement and get a decent grade.  And this creates a strong incentive not for excellence but for adequacy.  It encourages people to devote most of their energy toward gaming the system.

The key skill is to produce something that looks and feels like a good answer to the exam question or a good analysis of an intellectual problem.  Students have a powerful incentive to attain the highest grade for the lowest investment of time and intellectual effort.  This means aiming for quantity over quality (puff up the prose to hit the word count) and form over substance (dutifully refer to the required readings without actually drawing meaningful content from them).  Glibness provides useful cover for the absence of content.  It’s depressing to observe how the system fosters discursive means that undermine the purported aims of education.

Back in the days when students turned in physical papers and then received them back with handwritten comments from the instructor, I used to get a twinge in my stomach when I saw that most students didn’t bother to pick up their final papers from the box outside my office.  I felt like a sucker for providing careful comments that no one would ever see.  At one point I even asked students to tell me in advance if they wanted their papers back, so I only commented on the ones that might get read.  But this was even more depressing, since it meant that a lot of students didn’t even mind letting me know that they really only cared about the grade.  The fiction of doing something useful was what helped keep me going.

So, like many other faculty, I responded with joy to a 2016 piece that Robin Lee Mozer wrote in McSweeney’s called “I Would Rather Do Anything Else than Grade Your Final Papers.”  As a public service to teachers everywhere, I’m republishing her essay here.  Enjoy.

 

I WOULD RATHER DO ANYTHING ELSE THAN GRADE YOUR FINAL PAPERS

Dear Students Who Have Just Completed My Class,

I would rather do anything else than grade your Final Papers.

I would rather base jump off of the parking garage next to the student activity center or eat that entire sketchy tray of taco meat leftover from last week’s student achievement luncheon that’s sitting in the department refrigerator or walk all the way from my house to the airport on my hands than grade your Final Papers.

I would rather have a sustained conversation with my grandfather about politics and government-supported healthcare and what’s wrong with the system today and why he doesn’t believe in homeowner’s insurance because it’s all a scam than grade your Final Papers. Rather than grade your Final Papers, I would stand in the aisle at Lowe’s and listen patiently to All the Men mansplain the process of buying lumber and how essential it is to sight down the board before you buy it to ensure that it’s not bowed or cupped or crook because if you buy lumber with defects like that you’re just wasting your money even as I am standing there, sighting down a 2×4 the way my father taught me 15 years ago.

I would rather go to Costco on the Friday afternoon before a three-day weekend. With my preschooler. After preschool.

I would rather go through natural childbirth with twins. With triplets. I would rather take your chemistry final for you. I would rather eat beef stroganoff. I would rather go back to the beginning of the semester like Sisyphus and recreate my syllabus from scratch while simultaneously building an elaborate class website via our university’s shitty web-based course content manager and then teach the entire semester over again than grade your goddamn Final Papers.

I do not want to read your 3AM-energy-drink-fueled excuse for a thesis statement. I do not want to sift through your mixed metaphors, your abundantly employed logical fallacies, your incessant editorializing of your writing process wherein you tell me As I was reading through articles for this paper I noticed that — or In the article that I have chosen to analyze, I believe the author is trying to or worse yet, I sat down to write this paper and ideas kept flowing into my mind as I considered what I should write about because honestly, we both know that the only thing flowing into your mind were thoughts of late night pizza or late night sex or late night pizza and sex, or maybe thoughts of that chemistry final you’re probably going to fail later this week and anyway, you should know by now that any sentence about anything flowing into or out of or around your blessed mind won’t stand in this college writing classroom or Honors seminar or lit survey because we are Professors and dear god, we have Standards.

I do not want to read the one good point you make using the one source that isn’t Wikipedia. I do not want to take the time to notice that it is cited properly. I do not want to read around your 1.25-inch margins or your gauche use of size 13 sans serif fonts when everyone knows that 12-point Times New Roman is just. Fucking. Standard. I do not want to note your missing page numbers. Again. For the sixth time this semester. I do not want to attempt to read your essay printed in lighter ink to save toner, as you say, with the river of faded text from a failing printer cartridge splitting your paper like Charlton Heston in The Ten Commandments, only there, it was a sea and an entire people and here it is your vague stand-in for an argument.

I do not want to be disappointed.

I do not want to think less of you as a human being because I know that you have other classes and that you really should study for that chemistry final because it is organic chemistry and everyone who has ever had a pre-med major for a roommate knows that organic chemistry is the weed out course and even though you do not know this yet because you have never even had any sort of roommate until now, you are going to be weeded out. You are going to be weeded out and then you will be disappointed and I do not want that for you. I do not want that for you because you will have enough disappointments in your life, like when you don’t become a doctor and instead become a philosophy major and realize that you will never make as much money as your brother who went into some soul-sucking STEM field and landed some cushy government contract and made Mom and Dad so proud and who now gives you expensive home appliances like espresso machines and Dyson vacuums for birthday gifts and all you ever send him are socks and that subscription to that shave club for the $6 middle-grade blades.

I do not want you to be disappointed. I would rather do anything else than disappoint you and crush all your hopes and dreams —

Except grade your Final Papers.

The offer to take your chemistry final instead still stands.

Posted in Academic writing, Educational Research, Higher Education, Writing

Getting It Wrong — Rethinking a Life in Scholarship

This post is an overview of my life as a scholar.  I presented an oral version in my job talk at Stanford in 2002.  The idea was to make sense of the path I’d taken in my scholarly writing up to that point.  What were the issues I was looking at and why?  How did these ideas develop over time?  And what lessons can we learn from this process that might be of use to scholars who are just starting out?

This piece first appeared in print as the introduction to a 2005 book called Education, Markets, and the Public Good: The Selected Works of David F. Labaree.  As a friend told me after hearing about the book, “Isn’t this kind of compilation something that’s published after you’re dead?”  So why was I doing this as a mere youth of 58?  The answer: Routledge offered me the opportunity.  Was there ever an academic who turned down the chance to publish something when the chance arose?  The book was part of a series called — listen for the drum roll — The World Library of Educationalists, which must have a place near the top of the list of bad ideas floated by publishers.  After the first year, when a few libraries rose to the bait, annual sales of this volume never exceeded single digits.  Its rank on the Amazon bestseller list is normally in the two millions.

Needless to say, no one ever read this piece in its originally published form.  So I tried again, this time slightly adapting it for a 2011 volume edited by Wayne Urban called Leaders in the Historical Study of American Education, which consisted of autobiographical sketches by scholars in the field.  It now ranks in the five millions on Amazon, so the essay still never found a reader.  As a result, I decided to give the piece one more chance at life in my blog.  I enjoyed reading it again and thought it offered some value to young scholars just starting out in a daunting profession.  I hope you enjoy it too.

The core insight is that research trajectories are not things you can carefully map out in advance.  They just happen.  You learn as you go.  And the most effective means of learning from your own work — at least from my experience — arises from getting it wrong, time and time again.  If you’re not getting things wrong, you may not be learning much at all, since you may just be continually finding what you’re looking for.  It may well be that what you need to find are the things you’re not looking for and that you really don’t want to confront.  The things that challenge your own world view, that take you in a direction you’d rather not go, forcing you to give up ideas you really want to keep.

Another insight I got from this process of reflection is that it’s good to know what are the central weaknesses in the way you do research.  Everyone has them.  Best to acknowledge where you’re coming from and learn to live with that.  These weaknesses don’t discount the value of your work; they just put limits on it.  Your way of doing scholarship is probably better at producing some kinds of insights than others.  That’s OK.  Build on your strengths and let others point out your weaknesses.  You have no obligation and no ability to give the final answer on any important question.  Instead, your job is to make a provocative contribution to the ongoing scholarly conversation and let other scholars take it from there, countering your errors and filling in the gaps.  There is no last word.

Here’s a link to a PDF of the 2011 version.  Hope you find it useful.

 

Adventures in Scholarship

Instead of writing an autobiographical sketch for this volume, I thought it would be more useful to write about the process of scholarship, using my own case as a cautionary tale.  The idea is to help emerging scholars in the field to think about how scholars develop a line of research across a career, both with the hope of disabusing them of misconceptions and showing them how scholarship can unfold as a scary but exhilarating adventure in intellectual development.  The brief story I tell here has three interlocking themes:  You need to study things that resonate with your own experience; you need to take risks and plan to make a lot of mistakes; and you need to rely on friends and colleagues to tell you when you’re going wrong.  Let me explore each of these points.

Study What Resonates with Experience

First, a little about the nature of the issues I explore in my scholarship and then some thoughts about the source of my interest in these issues. My work focuses on the historical sociology of the American system of education and on the thick vein of irony that runs through it.  This system has long presented itself as a model of equal opportunity and open accessibility, and there is a lot of evidence to support these claims.  In comparison with Europe, this upward expansion of access to education came earlier, moved faster, and extended to more people.  Today, virtually anyone can go to some form of postsecondary education in the U.S., and more than two-thirds do.  But what students find when they enter the educational system at any level is that they are gaining equal access to a sharply unequal array of educational experiences.  Why?  Because the system balances open access with radical stratification.  Everyone can go to high school, but the quality of education varies radically across schools.  Almost everyone can go to college, but the institutions that are most accessible (community colleges) provide the smallest boost to a student’s life chances, whereas the ones that offer the surest entrée into the best jobs (major research universities) are highly selective.  This extreme mixture of equality and inequality, of accessibility and stratification, is a striking and fascinating characteristic of American education, which I have explored in some form or another in all my work.

Another prominent irony in the story of American education is that this system, which was set up to instill learning, actually undercuts learning because of a strong tendency toward formalism.  Educational consumers (students and their parents) quickly learn that the greatest rewards of the system go to those who attain its highest levels (measured by years of schooling, academic track, and institutional prestige), where credentials are highly scarce and thus the most valuable.  This vertically-skewed incentive structure strongly encourages consumers to game the system by seeking to accumulate the largest number of tokens of attainment – grades, credits, and degrees – in the most prestigious programs at the most selective schools.  However, nothing in this reward structure encourages learning, since the payoff comes from the scarcity of the tokens and not the volume of knowledge accumulated in the process of acquiring these tokens.  At best, learning is a side effect of this kind of credential-driven system.  At worst, it is a casualty of the system, since the structure fosters consumerism among students, who naturally seek to gain the most credentials for the least investment in time and effort.  Thus the logic of the used-car lot takes hold in the halls of learning.

In exploring these two issues of stratification and formalism, I tend to focus on one particular mechanism that helps explain both kinds of educational consequences, and that is the market.  Education in the U.S., I argue, has increasingly become a commodity, which is offered and purchased through market processes in much the same way as other consumer goods.  Educational institutions have to be sensitive to consumers, by providing the mix of educational products that the various sectors of the market demand.  This promotes stratification in education, because consumers want educational credentials that will distinguish them from the pack in their pursuit of social advantage.  It also promotes formalism, because markets operate based on the exchange value of a commodity (what it can be exchanged for) rather than its use value (what it can be used for).  Educational consumerism preserves and increases social inequality, undermines knowledge acquisition, and promotes the dysfunctional overinvestment of public and private resources in an endless race for degrees of advantage.  The result is that education has increasingly come to be seen primarily as a private good, whose benefits accrue only to the owner of the educational credential, rather than a public good, whose benefits are shared by all members of the community even if they don’t have a degree or a child in school.  In many ways, the aim of my work has been to figure out why the American vision of education over the years made this shift from public to private.

This is what my work has focused on in the last 30 years, but why focus on these issues?  Why this obsessive interest in formalism, markets, stratification, and education as arbiter of status competition?  Simple. These were the concerns I grew up with.

George Orwell once described his family’s social location as the lower upper middle class, and this captures the situation of my own family.  In The Road to Wigan Pier, his meditation on class relations in England, he talks about his family as being both culture rich and money poor.[1]  Likewise for mine.  Both of my grandfathers were ministers.  On my father’s side the string of clergy went back four generations in the U.S.  On my mother’s side, not only was her father a minister but so was her mother’s father, who was in turn the heir to a long clerical lineage in Scotland.  All of these ministers were Presbyterians, whose clergy has long had a distinctive history of being highly educated cultural leaders who were poor as church mice.  The last is a bit of an exaggeration, but the point is that their prestige and authority came from learning and not from wealth.  So they tended to value education and disdain grubbing for money.  My father was an engineer who managed to support his family in a modest but comfortable middle-class lifestyle.  He and my mother plowed all of their resources into the education of their three sons, sending all of them to a private high school in Philadelphia (Germantown Academy) and to private colleges (Lehigh, Drexel, Wooster, and Harvard).  Both of my parents were educated at elite schools (Princeton and Wilson) – on ministerial scholarships – and they wanted to do the same for their own children.

What this meant is that we grew up taking great pride in our cultural heritage and educational accomplishments and adopting a condescending attitude to those who simply engaged in trade for a living.  Coupled with this condescension was a distinct tinge of envy for the nice clothes, well-decorated houses, new cars, and fancy trips that the families of our friends experienced.  I thought of my family as a kind of frayed nobility, raising the flag of culture in a materialistic society while wearing hand-me-down clothes.  From this background, it was only natural for me to study education as the central social institution, and to focus in particular on the way education had been corrupted by the consumerism and status-competition of a market society.  In doing so I was merely entering the family business.  Someone out there needed to stand up for substantive over formalistic learning and for the public good over the private good, while at the same time calling attention to the dangers of a social hierarchy based on material status.  So I launched my scholarship from a platform of snobbish populism – a hankering for a lost world where position was grounded on the cultural authority of true learning and where mere credentialism could not hold sway.

Expect to Get Things Wrong

Becoming a scholar is not easy under the best of circumstances, and we may make it even harder by trying to imbue emerging scholars with a dedication for getting things right.[2]  In doctoral programs and tenure reviews, we stress the importance of rigorous research methods and study design, scrupulous attribution of ideas, methodical accumulation of data, and cautious validation of claims.  Being careful to stand on firm ground methodologically in itself is not a bad thing for scholars, but trying to be right all the time can easily make us overly cautious, encouraging us to keep so close to our data and so far from controversy that we end up saying nothing that’s really interesting.  A close look at how scholars actually carry out their craft reveals that they generally thrive on frustration.  Or at least that has been my experience.  When I look back at my own work over the years, I find that the most consistent element is a tendency for getting it wrong.  Time after time I have had to admit failure in the pursuit of my intended goal, abandon an idea that I had once warmly embraced, or backtrack to correct a major error.  In the short run these missteps were disturbing, but in the long run they have proven fruitful.

Maybe I’m just rationalizing, but it seems that getting it wrong is an integral part of scholarship.  For one thing, it’s central to the process of writing.  Ideas often sound good in our heads and resonate nicely in the classroom, but the real test is whether they work on paper.[3]  Only there can we figure out the details of the argument, assess the quality of the logic, and weigh the salience of the evidence.  And whenever we try to translate a promising idea into a written text, we inevitably encounter problems that weren’t apparent when we were happily playing with the idea over lunch.  This is part of what makes writing so scary and so exciting:  It’s a high wire act, in which failure threatens us with every step forward.  Can we get past each of these apparently insuperable problems?  We don’t really know until we get to the end.

This means that if there’s little risk in writing a paper there’s also little potential reward.  If all we’re doing is putting a fully developed idea down on paper, then this isn’t writing; it’s transcribing.  Scholarly writing is most productive when authors are learning from the process, and this happens only if the writing helps us figure out something we didn’t really know (or only sensed), helps us solve an intellectual problem we weren’t sure was solvable, or makes us turn a corner we didn’t know was there.  Learning is one of the main things that makes the actual process of writing (as opposed to the final published product) worthwhile for the writer.  And if we aren’t learning something from our own writing, then there’s little reason to think that future readers will learn from it either.  But these kinds of learning can only occur if a successful outcome for a paper is not obvious at the outset, which means that the possibility of failure is critically important to the pursuit of scholarship.

Getting it wrong is also functional for scholarship because it can force us to give up a cherished idea in the face of the kinds of arguments and evidence that accumulate during the course of research.  Like everyone else, scholars are prone to confirmation bias.  We look for evidence to support the analysis we prefer and overlook evidence that supports other interpretations.  So when we collide with something in our research or writing that deflects us from the path toward our preferred destination, we tend to experience this deflection as failure.  However, although these experiences are not pleasant, they can be quite productive.  Not only do they prompt us to learn things we don’t want to know, they can also introduce arguments into the literature that people don’t want to hear.  A colleague at the University of Michigan, David Angus, had both of these benefits in mind when he used to pose the following challenge to every candidate for a faculty position in the School of Education:  “Tell me about some point when your research forced you to give up an idea you really cared about.”

I have experienced all of these forms of getting it wrong.  Books never worked out the way they were supposed to, because of changes forced on me by the need to come up with remedies for ailing arguments.  The analysis often turned in a direction that meant giving up something I wanted to keep and embracing something I preferred to avoid.  And nothing ever stayed finished.  Just when I thought I had a good analytical hammer and started using it to pound everything in sight, it would shatter into pieces and I would be forced to start over.  This story of misdirection and misplaced intentions starts, as does every academic story, with a dissertation.

Marx Gives Way to Weber

My dissertation topic fell into my lap one day during the final course in my doctoral program in sociology at the University of Pennsylvania, when I mentioned to Michael Katz that I had done a brief study of Philadelphia’s Central High School for an earlier class.  He had a new grant for studying the history of education in Philadelphia, and Central was the lead school.  He needed someone to study the school, and I needed a topic, advisor, and funding; by happy accident, it all came together in 15 minutes.  I had first become interested in education as an object of study as an undergraduate at Harvard in the late 1960s, where I majored in Students for a Democratic Society and minored in sociology.  In my last year or two there, I worked on a Marxist analysis of Harvard as an institution of social privilege (is there a better case?), which whetted my appetite for educational research.

For the dissertation, I wanted to apply the same kind of Marxist approach to Central High School, which seemed to beg for it.  Founded in 1838, it was the first high school in the city and one of the first in the country, and it later developed into the elite academic high school for boys in the city.  It looked like the Harvard of public high schools.  I had a model for this kind of analysis, Katz’s study of Beverly High School, in which he explained how this high school, shortly after its founding, came to be seen by many citizens as an institution that primarily served the upper classes, thus prompting the town meeting to abolish the school in 1861.[4]  I was planning to do this kind of study about Central, and there seemed to be plenty of evidence to support such an interpretation, including its heavily upper-middle-class student body, its aristocratic reputation in the press, and its later history as the city’s elite high school.

That was the intent, but my plan quickly ran into two big problems in the data I was gathering.  First, a statistical analysis of student attainment and achievement at the school over its first 80 years showed a consistent pattern:  only one-quarter of the students managed to graduate, which meant it was highly selective; but grades and not class determined who made it and who didn’t, which meant it was – surprise – highly meritocratic.  Attrition in modern high schools is strongly correlated with class, but this was not true in the early years at Central.  Middle-class students were more likely to enroll in the first place, but they were no more likely to succeed than working-class students.  The second problem was that the high school’s role in the Philadelphia school system didn’t fit the Marxist story of top-down control that I was trying to tell.  In the first 50 years of the high school, there was a total absence of bureaucratic authority over the Philadelphia school system.  The high school was an attractive good in the local educational market, offering elevated education in a grand building at a collegiate level (it granted bachelor’s degrees) and at no cost.  Grammar school students competed for access to this commodity by passing an entrance exam, and grammar school masters competed to get the most students into Central by teaching to the test.  The power that the high school exerted over the system was considerable but informal, arising from consumer demand from below rather than bureaucratic dictate from above.

Thus my plans to tell a story of class privilege and social control fell apart at the very outset of my dissertation; in its place, I found a story about markets and stratification:  Marx gives way to Weber.  The establishment of Central High School in the nation’s second largest city created a desirable commodity with instant scarcity, and this consumer-based market power not only gave the high school control over the school system but also gave it enough autonomy to establish a working meritocracy.  The high school promoted inequality: it served a largely middle-class constituency and established an extreme form of educational stratification.  But it imposed a tough meritocratic regime equally on the children of the middle class and working class, with both groups failing most of the time.

Call on Your Friends for Help

In the story I’m telling here, the bad news is that scholarship is a terrain that naturally lures you into repeatedly getting it wrong.  The good news is that help is available if you look for it, which can turn scholarly wrong-headedness into a fruitful learning experience.  Just ask your friends and colleagues.  The things you most don’t want to hear may be just the things that will save you from intellectual confusion and professional oblivion.  Let me continue with the story, showing how colleagues repeatedly saved my bacon.

Markets Give Ground to Politics

Once I completed the dissertation, I gradually settled into being a Weberian, a process that took a while because of the disdain that Marxists hold for Weber.[5]  I finally decided I had a good story to tell about markets and schools, even if it wasn’t the one I had wanted to tell, so I used this story in rewriting the dissertation as a book.  When I had what I thought was a final draft ready to send to the publisher, I showed it to my colleague at Michigan State, David Cohen, who had generously offered to give it a reading.  His comments were extraordinarily helpful and quite devastating.  In the book, he said, I was interpreting the evolution of the high school and the school system as a result of the impact of the market, but the story I was really telling was about an ongoing tension for control of schools between markets and politics.[6]  The latter element was there in the text, but I had failed to recognize it and make it explicit in the analysis.  In short, he explained to me the point of my own book; so I had to rewrite the entire manuscript in order to bring out this implicit argument.

Framing this case in the history of American education as a tension between politics and markets allowed me to tap into the larger pattern of tensions that always exist in a liberal democracy:  the democratic urge to promote equality of power and access and outcomes, and the liberal urge to preserve individual liberty, promote free markets, and tolerate inequality.  The story of Central High School spoke to both these elements.  It showed a system that provided equal opportunity and unequal outcomes.  Democratic politics pressed for expanding access to high school for all citizens, whereas markets pressed for restricting access to high school credentials through attrition and tracking.  Central see-sawed back and forth between these poles, finally settling on the grand compromise that has come to characterize American education ever since:  open access to a stratified school system.  Using both politics and markets in the analysis also introduced me to the problem of formalism, since political goals for education (preparing competent citizens) value learning, whereas market goals (education for social advantage) value credentialing.

Disaggregating Markets

The book came out in 1988 with the title, The Making of an American High School.[7]  With politics and markets as my new hammer, everything looked like a nail.  So I wrote a series of papers in which I applied the idea to a wide variety of educational institutions and reform efforts, including the evolution of high school teaching as work, the history of social promotion, the history of the community college, the rhetorics of educational reform, and the emergence of the education school.

Midway through this flurry of papers, however, I ran into another big problem.  I sent a draft of my community college paper to David Hogan, a friend and former member of my dissertation committee at Penn, and his critique stopped me cold.  He pointed out that I was using the idea of educational markets to refer to two things that were quite different, both in concept and in practice.  One was the actions of educational consumers, the students who want education to provide the credentials they need in order to get ahead; the other was the actions of educational providers, the taxpayers and employers who want education to produce the human capital that society needs in order to function.  The consumer seeks education’s exchange value, providing selective benefits for the individual who owns the credential; the producer seeks education’s use value, providing collective benefits to everyone in society, even those not in school.

This forced me to reconstruct the argument from the ground up, abandoning the politics and markets angle and constructing in its place a tension among three goals that competed for primacy in shaping the history of American education.  “Democratic equality” referred to the goal of using education to prepare capable citizens; “social efficiency” referred to the goal of using education to prepare productive workers; and “social mobility” referred to the goal of using education to enable individuals to get ahead in society.  The first was a stand-in for educational politics, the second and third were a disaggregation of educational markets.

Abandoning the Good, the Bad, and the Ugly

Once formulated, the idea of the three goals became a mainstay in my teaching, and for a while it framed everything I wrote.  I finished the string of papers I mentioned earlier, energized by the analytical possibilities inherent in the new tool.  But by the mid-1990s, I began to be afraid that its magic power would start to fade on me soon, as had happened with earlier enthusiasms like Marxism and politics-and-markets.  Most ideas have a relatively short shelf life, as metaphors quickly reach their limits and big ideas start to shrink upon close examination.  That doesn’t mean these images and concepts are worthless, only that they are bounded, both conceptually and temporally.  So scholars need to strike while the iron is hot.  Michael Katz once made this point to me with the Delphic advice, “Write your first book first.”  In other words, if you have an idea worth injecting into the conversation, you should do so now, since it will eventually evolve into something else, leaving the first idea unexpressed.  Since the evolution of an idea is never finished, holding off publication until the idea is done is a formula for never publishing.

So it seemed like the right time to put together a collection of my three-goals papers into a book, and I had to act quickly before they started to turn sour.  With a contract for the book and a sabbatical providing time to put it together, I now had to face the problem of framing the opening chapter.  In early 1996 I completed a draft and submitted it to American Educational Research Journal.  The reviews knocked me back on my heels.  They were supportive but highly critical.  One in particular, which I later found out was written by Norton Grubb, forced me to rethink the entire scheme of competing goals.  He pointed out something I had completely missed in my enthusiasm for the tool-of-the-moment.  In practice my analytical scheme with three goals turned into a normative scheme with two:  a Manichean vision of light and darkness, with Democratic Equality as the Good, and with Social Mobility and Social Efficiency as the Bad and the Ugly.  This ideologically colored representation didn’t hold up under close scrutiny.  Grubb pointed out that social efficiency is not as ugly as I was suggesting.  Like democratic equality and unlike social mobility, it promotes learning, since it has a stake in the skills of the workforce.  Also, like democratic equality, it views education as a public good, whose benefits accrue to everyone and not just (as with social mobility) to the credential holder.

This trenchant critique forced me to start over, putting a different spin on the whole idea of competing goals, abandoning the binary vision of good and evil, reluctantly embracing the idea of balance, and removing the last vestige of my original bumper-sticker Marxism.  As I reconstructed the argument, I put forward the idea that all three of these goals emerge naturally from the nature of a liberal democracy, and that all three are necessary.[8]  There is no resolution to the tension among educational goals, just as there is no resolution to the problem of being both liberal and democratic.  We need an educational system that makes capable citizens and productive workers while also enabling individuals to pursue their own aspirations.  And we all act out our support for each of these goals according to which social role is most salient to us at the moment.  As citizens, we want graduates who can vote intelligently; as taxpayers and employers, we want graduates who will increase economic productivity; and as parents, we want an educational system that offers our children social opportunity.  The problem is the imbalance in the current mix of goals, as the growing primacy of social mobility over the other two goals privileges private over public interests, stratification over equality, and credentials over learning.

Examining Life at the Bottom of the System

With this reconstruction of the story, I was able to finish my second book, published in 1997, and get it out the door before any other major problems could threaten its viability.[9]  One such problem was already coming into view.  In comments on my AERJ goals paper, John Rury (the editor) pointed out that my argument relied on a status competition model of social organization – students fighting for scarce credentials in order to move up or stay up – that did not really apply to the lower levels of the system.  Students in the lower tracks in high school and in the open-access realms of higher education (community colleges and regional state universities) lived in a different world from the one I was talking about.  They were affected by the credentials race, but they weren’t really in the race themselves.  For them, the incentives to compete were minimal, the rewards remote, and the primary imperative was not success but survival.

Fortunately, however, there was one place at the bottom of the educational hierarchy I did know pretty well, and that was the poor beleaguered education school.  From 1985 to 2003, while I was teaching in the College of Education at Michigan State University, I received a rich education in the subject.  I had already started a book about ed schools, but it wasn’t until the book was half completed that I realized it was forcing me to rethink my whole thesis about the educational status game.  Here was an educational institution that was the antithesis of the Harvards and Central High Schools that I had been writing about thus far.  Residing at the very bottom of the educational hierarchy, the ed school was disdained by academics, avoided by the best students, ignored by policymakers, and discounted by its own graduates.  It was the perfect case to use in answering a question I had been avoiding:  What happens to education when credentials carry no exchange value and the status game is already lost?

What I found is that life at the bottom has some advantages, but they are outweighed by disadvantages.  On the positive side, the education school’s low status frees it to focus efforts on learning rather than on credentials, on the use value rather than exchange value of education; in this sense, it is liberated from the race for credentials that consumes the more prestigious realms of higher education.  On the negative side, however, the ed school’s low status means that it has none of the autonomy that prestigious institutions (like Central High School) generate for themselves, which leaves it vulnerable to kibitzing from the outside.  This institutional weakness also has made the ed school meekly responsive to its environment, so that over the years it obediently produced large numbers of teachers at low cost and with modest professional preparation, as requested.

When I had completed a draft of the book, I asked for comments from two colleagues at Michigan State, Lynn Fendler and Tom Bird, who promptly pointed out several big problems with the text.  One had to do with the argument in the last few chapters, where I was trying to make two contradictory points:  ed schools were weak in shaping schools but effective in promoting progressive ideology.  The other problem had to do with the book’s tone:  as an insider taking a critical position about ed schools, I sounded like I was trying to enhance my own status at the expense of colleagues.  Fortunately, they were able to show me a way out of both predicaments.  On the first issue, they helped me see that ed schools were more committed to progressivism as a rhetorical stance than as a mode of educational practice.  In our work as teacher educators, we have to prepare teachers to function within an educational system that is hostile to progressive practices.  On the second issue, they suggested that I shift from the third person to the first person.  By announcing clearly both my membership in the community under examination and my participation in the problems I was critiquing, I could change the tone from accusatory to confessional.  With these important changes in place, The Trouble with Ed Schools was published in 2004.[10]

Enabling Limitations

In this essay I have been telling a story about grounding research in an unlovely but fertile mindset, getting it wrong repeatedly, and then trying to fix it with the help of friends.  However, I don’t want to leave the impression that I think any of these fixes really resolved the problems.  The story is more about filling potholes than about re-engineering the road.  It’s also about some fundamental limitations in my approach to the historical sociology of American education, which I have been unwilling and unable to fix since they lie at the core of my way of seeing things.  Intellectual frameworks define, shape, and enable the work of scholars.  Such frameworks can be helpful by allowing us to cut a slice through the data and reveal interesting patterns that are not apparent from other angles, but they can only do so if they maintain a sharp leading edge.  As an analytical instrument, a razor works better than a baseball bat, and a beach ball doesn’t work at all.  The sharp edge, however, comes at a cost, since it necessarily narrows the analytical scope and commits a scholar to one slice through a problem at the expense of others.  I’m all too aware of the limitations that arise from my own cut at things.

One problem is that I tend to write a history without actors.  Taking a macro-sociological approach to history, I am drawn to explore general patterns and central tendencies in the school-society relationship rather than the peculiarities of individual cases.  In the stories I tell, people don’t act.  Instead, social forces contend, social institutions evolve in response to social pressures, and collective outcomes ensue.  My focus is on general processes and structures rather than on the variations within categories.  What is largely missing from my account of American education is the radical diversity of traits and behaviors that characterizes educational actors and organizations.  I plead guilty to these charges.  However, my aim has been not to write a tightly textured history of the particular but to explore some of the broad socially structured patterns that shape the main outlines of American educational life.  My sense is that this kind of work serves a useful purpose—especially in a field such as education, whose dominant perspectives have been psychological and presentist rather than sociological and historical; and in a sub-field like history of education, which can be prone to the narrow monograph with little attention to the big picture; and in a country like the United States, which is highly individualistic in orientation and tends to discount the significance of the collective and the categorical.

Another characteristic of my work is that I tend to stretch arguments well beyond the supporting evidence.  As anyone can see in reading my books, I am not in the business of building an edifice of data and planting a cautious empirical generalization on the roof.  My first book masqueraded as a social history of an early high school, but it was actually an essay on the political and market forces shaping the evolution of American education in general—a big leap to make from historical data about a single, atypical school.  Likewise my second book is a series of speculations about credentialing and consumerism that rests on a modest and eclectic empirical foundation.  My third book involves minimal data on education in education schools and maximal rumination about the nature of “the education school.”  In short, validating claims has not been my strong suit.  I think the field of educational research is sufficiently broad and rich that it can afford to have some scholars who focus on constructing credible empirical arguments about education and others who focus on exploring ways of thinking about the subject.

The moral of this story, therefore, may be that scholarship is less a monologue than a conversation.  In education, as in other areas, our field is so expansive that we can’t cover more than a small portion, and it’s so complex that we can’t even gain mastery over our own tiny piece of the terrain.  But that’s ok.  As participants in the scholarly conversation, our responsibility is not to get things right but to keep things interesting, while we rely on discomfiting interactions with our data and with our colleagues to provide the correctives we need to make our scholarship more durable.

[1]  George Orwell,  The Road to Wigan Pier (New York: Harcourt, Brace, 1958).

[2]  I am grateful to Lynn Fendler and Tom Bird for comments on an earlier draft of this portion of the essay.  As they have done before, they saved me from some embarrassing mistakes.  I presented an earlier version of this analysis in a colloquium at the Stanford School of Education in 2002 and in the Division F Mentoring Seminar at the American Educational Research Association annual meeting in New Orleans later the same year.  A later version was published as the introduction to Education, Markets, and the Public Good: The Selected Works of David F. Labaree (London: Routledge Falmer, 2007).  Reprinted with the kind permission of Taylor and Francis.

[3]  That doesn’t mean it’s necessarily the best way to start developing an idea.  For me, teaching has always served better as a medium for stimulating creative thought.  It’s a chance for me to engage with ideas from texts about a particular topic, develop a story about these ideas, and see how it sounds when I tell it in class and listen to student responses.  The classroom has a wonderful mix of traits for these purposes: it forces discipline and structure on the creative process while allowing space for improvisation and offering the chance to reconstruct everything the next time around.  After my first book, most of my writing had its origins in this pedagogical process.  But at a certain point I find that I have to test these ideas in print.

[4]  Michael B. Katz, The Irony of Early School Reform: Educational Innovation in Mid-Nineteenth Century Massachusetts (Cambridge, MA: Harvard University Press, 1968).

[5]  Marx’s message is rousing and it can fit on a bumper sticker:  Workers of the world, unite!  But Weber’s message is more complicated, pessimistic, and off-putting:  The iron cage of rationalization has come to dominate the structure of thought and social action, but we can’t stop it or even escape from it.

[6]  He also pointed out, in passing, that my chapter on the attainment system at the high school – which incorporated 17 tables in the book (30 in the dissertation), and which took me two years to develop by collecting, coding, keying, and statistically analyzing data from 2,000 student records – was essentially one big footnote in support of the statement, “Central High School was meritocratic.”  Depressing but true.

[7]  David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988).

[8]  David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal 34:1 (Spring 1997): 39-81.

[9]  David F. Labaree, How to Succeed in School Without Really Learning: The Credentials Race in American Education (New Haven: Yale University Press, 1997).

[10] David F. Labaree,  The Trouble with Ed Schools (New Haven: Yale University Press, 2004).

Posted in Academic writing, Writing

Academic Writing Issues #9: Metaphors — The Poetry of Everyday Life

Earlier I posted a piece about mangled metaphors (Academic Writing Issues #6), which focused on the trouble that writers get into when they use a metaphor without taking into account the root comparison that is embedded within it.  Example:  talking about “the doctrine set forth in Roe v. Wade and its progeny” — a stillborn metaphor if there ever was one.  So writers need to be wary of metaphors, especially those that have become clichés, thus making the original reference dormant.

But don’t let these problems put you off from using metaphors altogether.  Actually, it’s nearly impossible to write without any metaphors, since they are so central to communication.  Literal meanings are useful, and in scientific writing precision is important to maintain clarity.  But literal language is boring, pedestrian.  It just plods along, telling a story without conveying what the story means.  Metaphor is how we create a richness of meaning, which comes from not just telling what something is but showing what it’s related to.  Metaphors create depth and resonance, and they stick in your mind.

Think about the power of a great book title, which captures the essence of the text in a vivid image:  Bowling Alone; The Bell Curve; The Unbearable Lightness of Being; The Botany of Desire.

In the piece below, David Brooks talks about metaphors as the poetry of everyday life in a 2011 column from the New York Times.  I think you’ll like it.

 

April 11, 2011

Poetry for Everyday Life

By DAVID BROOKS

Here’s a clunky but unremarkable sentence that appeared in the British press before the last national election: “Britain’s recovery from the worst recession in decades is gaining traction, but confused economic data and the high risk of hung Parliament could yet snuff out its momentum.”

The sentence is only worth quoting because in 28 words it contains four metaphors. Economies don’t really gain traction, like a tractor. Momentum doesn’t literally get snuffed out, like a cigarette. We just use those metaphors, without even thinking about it, as a way to capture what is going on.

In his fine new book, “I Is an Other,” James Geary reports on linguistic research suggesting that people use a metaphor every 10 to 25 words. Metaphors are not rhetorical frills at the edge of how we think, Geary writes. They are at the very heart of it.

George Lakoff and Mark Johnson, two of the leading researchers in this field, have pointed out that we often use food metaphors to describe the world of ideas. We devour a book, try to digest raw facts and attempt to regurgitate other people’s ideas, even though they might be half-baked.

When talking about relationships, we often use health metaphors. A friend might be involved in a sick relationship. Another might have a healthy marriage.

When talking about argument, we use war metaphors. When talking about time, we often use money metaphors. But when talking about money, we rely on liquid metaphors. We dip into savings, sponge off friends or skim funds off the top. Even the job title stockbroker derives from the French word brocheur, the tavern worker who tapped the kegs of beer to get the liquidity flowing.

The psychologist Michael Morris points out that when the stock market is going up, we tend to use agent metaphors, implying the market is a living thing with clear intentions. We say the market climbs or soars or fights its way upward. When the market goes down, on the other hand, we use object metaphors, implying it is inanimate. The market falls, plummets or slides.

Most of us, when asked to stop and think about it, are by now aware of the pervasiveness of metaphorical thinking. But in the normal rush of events, we often see straight through metaphors, unaware of how they refract perceptions. So it’s probably important to pause once a month or so to pierce the illusion that we see the world directly. It’s good to pause to appreciate how flexible and tenuous our grip on reality actually is.

Metaphors help compensate for our natural weaknesses. Most of us are not very good at thinking about abstractions or spiritual states, so we rely on concrete or spatial metaphors to (imperfectly) do the job. A lifetime is pictured as a journey across a landscape. A person who is sad is down in the dumps, while a happy fellow is riding high.

Most of us are not good at understanding new things, so we grasp them imperfectly by relating them metaphorically to things that already exist. That’s a “desktop” on your computer screen.

Metaphors are things we pass down from generation to generation, which transmit a culture’s distinct way of seeing and being in the world. In his superb book “Judaism: A Way of Being,” David Gelernter notes that Jewish thought uses the image of a veil to describe how Jews perceive God — as a presence to be sensed but not seen, which is intimate and yet apart.

Judaism also emphasizes the metaphor of separateness as a path to sanctification. The Israelites had to separate themselves from Egypt. The Sabbath is separate from the week. Kosher food is separate from the nonkosher. The metaphor describes a life in which one moves from nature and conventional society to the sacred realm.

To be aware of the central role metaphors play is to be aware of how imprecise our most important thinking is. It’s to be aware of the constant need to question metaphors with data — to separate the living from the dead ones, and the authentic metaphors that seek to illuminate the world from the tinny advertising and political metaphors that seek to manipulate it.

Most important, being aware of metaphors reminds you of the central role that poetic skills play in our thought. If much of our thinking is shaped and driven by metaphor, then the skilled thinker will be able to recognize patterns, blend patterns, apprehend the relationships and pursue unexpected likenesses.

Even the hardest of the sciences depend on a foundation of metaphors. To be aware of metaphors is to be humbled by the complexity of the world, to realize that deep in the undercurrents of thought there are thousands of lenses popping up between us and the world, and that we’re surrounded at all times by what Steven Pinker of Harvard once called “pedestrian poetry.”

Posted in Academic writing, Capitalism, History

E.P. Thompson: Time, Work-Discipline, and Industrial Capitalism

This post is a tribute to a wonderful essay by the great British historian of working-class history, E. P. Thompson.  His classic work is The Making of the English Working Class, published in 1963.  The paper I’m touting here provides a lovely window into the heart of his craft, which is an unlikely combination of Oxbridge erudition and Marxist analysis.

It’s the story of the rise of a new sense of time in the world that emerged with the arrival of capitalism, at which point suddenly time became money.  If you’re making shoes to order in a precapitalist workshop, you work until the order is completed and then you take it easy.  But if your labor is being hired by the hour, then your employer has an enormous incentive to squeeze as much productivity as possible out of every minute you are on the clock. The old model is more natural for humans: work until you’ve accomplished what you need and then stop.  Binge and break.  Think about the way college students spend their time when they’re not being supervised — a mix of all-nighters and partying.

Thompson captures the essence of the change between natural time and the time clock with this beautiful epigraph from Thomas Hardy’s Tess of the D’Urbervilles.

Tess … started on her way up the dark and crooked lane or street not made for hasty progress; a street laid out before inches of land had value, and when one-handed clocks sufficiently subdivided the day.

This quote and his analysis have had a huge impact on the way I came to see the world as a scholar of history.

Here’s a link to the paper, which was published in the journal Past and Present in 1967.  Enjoy.


Posted in Academic writing, Uncategorized

Academic Writing Issues #8 — Getting Off to a Fast Start

The introduction to a paper is critically important.  This is where you try to draw in readers, tell them what you’re going to address, and show why this issue is important.  It’s also a place to show a little style, demonstrating that you’re going to take readers on a fun ride.  Below are two exemplary cases of opening strong:  one from a detective novel, the other from an academic book.

If you want to see how to draw in the reader quickly, a good place to look is the work of a genre writer.  Authors who make a living from their writing need to make their case up front — to catch readers in the first paragraph and make them want to keep going.  Check out writers of mystery, detective, spy, or science fiction novels.  They’ve got to be good on the first page or the reader is just going to put the book down and pick up another.

One of my favorite genre writers is Elmore Leonard, who’s a master of the opening page.  Here’s the opening page of his novel Glitz:

THE NIGHT VINCENT WAS SHOT he saw it coming. The guy approached out of the streetlight on the corner of Meridian and Sixteenth, South Beach, and reached Vincent as he was walking from his car to his apartment building. It was early, a few minutes past nine.

Vincent turned his head to look at the guy and there was a moment when he could have taken him and did consider it, hit the guy as hard as he could. But Vincent was carrying a sack of groceries. He wasn’t going to drop a half gallon of Gallo Hearty Burgundy, a bottle of prune juice and a jar of Ragú spaghetti sauce on the sidewalk. Not even when the guy showed his gun, called him a motherfucker through his teeth and said he wanted Vincent’s wallet and all the money he had on him. The guy was not big, he was scruffy, wore a tank top and biker boots and smelled. Vincent believed he had seen him before, in the detective bureau holding cell. It wouldn’t surprise him. Muggers were repeaters in their strungout state, often dumb, always desperate. They came on with adrenaline pumping, hoping to hit and get out. Vincent’s hope was to give the guy pause.

He said, “You see that car? Standard Plymouth, nothing on it, not even wheel covers?” It was a pale gray. “You think I’d go out and buy a car like that?” The guy was wired or not paying attention. Vincent had to tell him, “It’s a police car, asshole. Now gimme the gun and go lean against it.”

What he should have done, either put the groceries down and given the guy his wallet or screamed in the guy’s face to hit the deck, now, or he was fucking dead. Instead of trying to be clever and getting shot for it.

Quite a grabber, isn’t it — right from the opening sentence.  For me the key is the deft and concise way he manages to introduce his main character — Vincent, the street-wise detective.  Instead of an extensive physical description or character analysis, he provides a list of what’s in his bag of groceries.  Specific details like Gallo Hearty Burgundy and Ragú spaghetti sauce tell you clearly what kind of guy he is:  not a man of refinement on the world stage but a single guy in a seedy part of town with proletarian tastes.  And the next paragraph shows him as the wise-guy cop who can’t resist sticking it to a guy even though it might well not be the smartest move under the circumstances.  One page and you already know Vincent and want to stick with him for a while.

The second example comes from the opening of the first chapter of a 1968 book by the educational sociologist Philip Jackson called Life in Classrooms.

On a typical weekday morning between September and June some 35 million Americans kiss their loved ones goodby, pick up their lunch pails and books, and leave to spend their day in that collection of enclosures (totaling about one million) known as elementary school classrooms. This massive exodus from home to school is accomplished with a minimum of fuss and bother. Few tears are shed (except perhaps by the very youngest) and few cheers are raised. The school attendance of children is such a common experience in our society that those of us who watch them go hardly pause to consider what happens to them when they get there. Of course our indifference disappears occasionally. When something goes wrong or when we have been notified of his remarkable achievement, we might ponder, for a moment at least, the meaning of the experience for the child in question, but most of the time we simply note that our Johnny is on his way to school, and now, it is time for our second cup of coffee.

Parents are interested, to be sure, in how well Johnny does while there, and when he comes trudging home they may ask him questions about what happened today or, more generally, how things went. But both their questions and his answers typically focus on the highlights of the school experience — its unusual aspects — rather than on the mundane and seemingly trivial events that filled the bulk of his school hours. Parents are interested, in other words, in the spice of school life rather than its substance.

Teachers, too, are chiefly concerned with only a very narrow aspect of a youngster’s school experience. They, too, are likely to focus on specific acts of misbehavior or accomplishment as representing what a particular student did in school today, even though the acts in question occupied but a small fraction of the student’s time. Teachers, like parents, seldom ponder the significance of the thousands of fleeting events that combine to form the routine of the classroom.

And the student himself is no less selective. Even if someone bothered to question him about the minutiae of his school day, he would probably be unable to give a complete account of what he had done. For him, too, the day has been reduced in memory into a small number of signal events — “I got 100 on my spelling test,” “A new boy came and he sat next to me” — or recurring activities — “We went to gym,” “We had music.” His spontaneous recall of detail is not much greater than that required to answer our conventional questions.

This concentration on the highlights of school life is understandable from the standpoint of human interest. A similar selection process operates when we inquire into or recount other types of daily activity. When we are asked about our trip downtown or our day at the office we rarely bother describing the ride on the bus or the time spent in front of the watercooler. Indeed, we are more likely to report that nothing happened than to catalogue the pedestrian actions that took place between home and return. Unless something interesting occurred there is little purpose in talking about our experience.

Yet from the standpoint of giving shape and meaning to our lives these events about which we rarely speak may be as important as those that hold our listener’s attention. Certainly they represent a much larger portion of our experience than do those about which we talk. The daily routine, the “rat race,” and the infamous “old grind” may be brightened from time to time by happenings that add color to an otherwise drab existence, but the grayness of our daily lives has an abrasive potency of its own. Anthropologists understand this fact better than do most other social scientists, and their field studies have taught us to appreciate the cultural significance of the humdrum elements of human existence. This is the lesson we must heed as we seek to understand life in elementary classrooms.

Notice how he draws you into observing the daily life of school from the perspective of its main participants — parents, teachers, and students.  He’s showing you how the routine of schooling is so familiar to everyone that it becomes invisible.  Ask students what happened in school today and they’re likely to say, “Nothing.”  Of course, a lot actually happened but none of it is noteworthy.  You only hear about something that broke the routine:  there was a concert in assembly; Jimmy threw up in the lunchroom.

This is his point.  Students are learning things from the regular process of schooling.  They stand in line, wait for the bell, get evaluated, respond to commands.  This is not the formal curriculum, made up of school subjects, but the hidden curriculum of doing school.  The process of schooling, he suggests, may in fact have a bigger impact on the student than its formal content.  He draws you into this idea and leaves you wanting to know more.  That’s good writing.

Posted in Academic writing, Writing

Academic Writing Issues #7 — Writing the Perfect Sentence

The art of writing ultimately comes down to the art of writing sentences.  In his lovely book, How to Write a Sentence, Stanley Fish explains that the heart of any sentence is not its content but its form.  The form is what defines the logical relationship between the various elements within the sentence.  The same formal set of relationships within a sentence structure can be filled with an infinite array of possible bits of content.  If you master the forms, he says, you will be able to harness them to your own aims in producing content.  His core counter-intuitive admonition is this:  “You shall tie yourself to forms and the forms shall set you free.”  Note the perfect form in Lewis Carroll’s nonsense poem Jabberwocky:

Twas brillig, and the slithy toves

Did gyre and gimble in the wabe;

All mimsy were the borogoves,

And the mome raths outgrabe.

I strongly recommend reading the book, which I used for years in my class on academic writing.  You’ll learn a lot about writing and you’ll also accumulate a lovely collection of stunning quotes.

Below is a piece Fish published in the New Statesman in 2011, which deftly summarizes the core argument in the book.  Enjoy.  Here’s a link to the original.

 

How to write the perfect sentence

Stanley Fish

Published 17 February 2011

In learning how to master the art of putting words together, the trick is to concentrate on technique and not content. Substance comes second.

Look around the room you’re sitting in. Pick out four items at random. I’m doing it now and my items are a desk, a television, a door and a pencil. Now, make the words you have chosen into a sentence using as few additional words as possible. For example: “I was sitting at my desk, looking at the television, when a pencil fell off and rolled to the door.” Or: “The television close to the door obscured my view of the desk and the pencil I needed.” Or: “The pencil on my desk was pointed towards the door and away from the television.” You will find that you can always do this exercise – and you could do it for ever.

That’s the easy part. The hard part is to answer this question: what did you just do? How were you able to turn a random list into a sentence? It might take you a little while but, in time, you will figure it out and say something like this: “I put the relationships in.” That is to say, you arranged the words so that they were linked up to the others by relationships of cause, effect, contiguity, similarity, subordination, place, manner and so on (but not too far on; the relationships are finite). Once you have managed this – and you do it all the time in speech, effortlessly and unselfconsciously – hitherto discrete items participate in the making of a little world in which actors, actions and the objects of actions interact in ways that are precisely represented.

This little miracle you have performed is called a sentence and we are now in a position to define it: a sentence is a structure of logical relationships. Notice how different this is from the usual definitions such as, “A sentence is built out of the eight parts of speech,” or, “A sentence is an independent clause containing a subject and a predicate,” or, “A sentence is a complete thought.” These definitions seem like declarations out of a fog that they deepen. The words are offered as if they explained everything, but each demands an explanation.

When you know that a sentence is a structure of logical relationships, you know two things: what a sentence is – what must be achieved for there to be focused thought and communication – and when a sentence that you are trying to write goes wrong. This happens when the relationships that allow sense to be sharpened are missing or when there are too many of them for comfort (a goal in writing poetry but a problem in writing sentences). In such cases, the components of what you aspired to make into a sentence stand alone, isolated; they hang out there in space and turn back into items on a list.

Armed with this knowledge, you can begin to look at your own sentences and those of others with a view to discerning what is successful and unsuccessful about them. As you do this, you will be deepening your understanding of what a sentence is and introducing yourself to the myriad ways in which logical structures of verbal thought can be built, unbuilt, elaborated upon and admired.

My new book, How to Write a Sentence, is a light-hearted manual of instruction designed to teach you how to do these things – how to write a sentence and how to appreciate in analytical detail the sentences produced by authors who knock your socks off. These two aspects – lessons in sentence craft and lessons in sentence appreciation – reinforce each other; the better able you are to appreciate great sentences, the closer you are to being able to write one. An intimate knowledge of what makes sentences work is one prerequisite for writing them.

Consider the first of those aspects – sentence craft. The chief lesson here is: “It’s not the thought that counts.” By that, I mean that skill in writing sentences is a matter of understanding and mastering form not content. The usual commonplace wisdom is that you have to write about something, but actually you don’t. The exercise I introduced above would work even if your list was made up of nonsense words, as long as each word came tagged with its formal identification – actor, action, object of action, modifier, conjunction, and so on. You could still tie those nonsense words together in ligatures of relationships and come up with perfectly formed sentences like Noam Chomsky’s “Colourless green ideas sleep furiously,” or the stanzas of Lewis Carroll’s “Jabberwocky”.

If what you want to do is become facile (in a good sense) in producing sentences, the sentences with which you practise should be as banal and substantively inconsequential as possible; for then you will not be tempted to be interested in them. The moment that interest comes to the fore, the focus on craft will be lost. (I know that this sounds counter-intuitive, but stick with me.)

I call this the Karate Kid method of learning to write. In that 1984 cult movie (recently remade), the title figure learns how to fight not by participating in a match but by repeating (endlessly and pointlessly, it seems to him) the purely formal motions of waxing cars and painting fences. The idea is that when you are ready either to compete or to say something that is meaningful and means something to you, the forms you have mastered and internalised will generate the content that would have remained inchoate (at best) without them.

These points can be illustrated with sentences that are too good to be tossed aside. In the book, I use them to make points about form, but I can’t resist their power or the desire to explain it. When that happens, content returns to my exposition and I shift into full appreciation mode, caressing these extraordinary verbal productions even as I analyse them. I become like a sports commentator, crying, “Did you see that?” or “How could he have pulled that off?” or “How could she keep it going so long and still not lose us?” In the end, the apostle of form surrenders to substance, or rather, to the pleasure of seeing substance emerge though the brilliant deployment of forms.

As a counterpoint to that brilliance, let me hazard an imitation of two of the marvels I discuss. Take Swift’s sublimely malign sentence, “Last week I saw a woman flayed and you will hardly believe how much it altered her person for the worse.” And then consider this decidedly lame imitation: “Last night I ate six whole pizzas and you would hardly believe how sick I was.”

Or compare John Updike’s description in the New Yorker of the home run that the baseball player Ted Williams hit on his last at-bat in 1960 – “It was in the books while it was still in the sky” – to “He had won the match before the first serve.” My efforts in this vein are lessons both in form and humility.

The two strands of my argument can be brought together by considering sentences that are about their own form and unfolding; sentences that meditate on or burst their own limitations, and thus remind us of why we have to write sentences in the first place – we are mortal and finite – and of what rewards may await us in a realm where sentences need no longer be fashioned. Here is such a sentence by the metaphysical poet John Donne:

If we consider eternity, into that time never entered; eternity is not an everlasting flux of time, but time is a short parenthesis in a long period; and eternity had been the same as it is, though time had never been.

The content of the sentence is the unreality of time in the context of eternity, but because a sentence is necessarily a temporal thing, it undermines that insight by being. (Asserting in time the unreality of time won’t do the trick.) Donne does his best to undermine the undermining by making the sentence a reflection on its fatal finitude. No matter how long it is, no matter what its pretension to a finality of statement, it will be a short parenthesis in an enunciation without beginning, middle or end. That enunciation alone is in possession of the present – “is” – and what the sentence comes to rest on is the declaration of its already having passed into the state of non-existence: “had never been”.

Donne’s sentence is in my book; my analysis of it is not. I am grateful to the New Statesman for the opportunity to produce it and to demonstrate once again the critic’s inadequacy to his object.

Stanley Fish is Davidson-Kahn Distinguished University Professor of Humanities and Law at Florida International University. His latest book is “How to Write a Sentence: and How to Read One” (HarperCollins, £12.99)

https://www.newstatesman.com/books/2011/02/write-sentence-comes