Posted in Academic writing, Wit, Writing

Wit (and the Art of Writing)


They laughed when I told them I wanted to be a comedian. Well they’re not laughing now.

Bob Monkhouse

Wit is notoriously difficult to analyze, and any effort to do so is likely to turn out dry and witless.  But two recent authors have done a remarkably effective job of making sense of what constitutes wit, and they manage to do so wittily.  That’s a risky venture, which most sensible people would avoid like COVID-19.  One book is Wit’s End by James Geary; the other is Humour by Terry Eagleton.  The epigraph comes from Eagleton.  Both have the good sense to reflect on the subject without analyzing it to death or trampling on the punchline.  Eagleton uses Freud as a negative case in point:

Children, insists Freud, lack all sense of the comic, but it is possible he is confusing them with the author of a notoriously unfunny work entitled Jokes and Their Relation to the Unconscious.

Interestingly, Geary says that wit begins with the pun.

Despite its bad reputation, punning is, in fact, among the highest displays of wit. Indeed, puns point to the essence of all true wit—the ability to hold in the mind two different ideas about the same thing at the same time.

In poems, words rhyme; in puns, ideas rhyme. This is the ultimate test of wittiness: keeping your balance even when you’re of two minds.

Groucho’s quip upon entering a restaurant and seeing a previous spouse at another table—“Marx spots the ex.”


Instead of avoiding ambiguity, wit revels in it, using paradoxical juxtaposition to shake you out of a trance and ask you to consider an issue from a strikingly different angle.  Arthur Koestler described the pun as “two strings of thought tied together by an acoustic knot.”  There’s an echo here of Emerson’s epigram, “A foolish consistency is the hobgoblin of little minds…”  Misdirection can lead to comic relief but it can also produce intellectual insight.

Geary goes on to show how the joke is integrally related to other forms of creative thought:

There is no sharp boundary splitting the wit of the scientist, inventor, or improviser from that of the artist, the sage, or the jester. The creative experience moves seamlessly from the “Aha!” of scientific discovery to the “Ah” of aesthetic insight to the “Ha-ha” of the pun and the punch line.  “Comic discovery is paradox stated—scientific discovery is paradox resolved,” Koestler wrote.

He shows that wit and metaphor have a lot in common.

If wit consists, as we say, in the ability to hold in the mind two different ideas about the same thing at the same time, this is exactly the function of metaphor. A metaphor carries the attention from the concrete to the abstract, from object to concept. When that direction is reversed, and attention is brought back from concept to object, the mind is surprised. Mistaking the figurative for fact is therefore a signature trick of wit.

Hence it is said, kleptomaniacs don’t understand metaphor because they take things literally.

Both wit and metaphor have these qualities in common:  “brevity, novelty, and clarity.”

Read my lips. Shoot from the hip. Wit switch hits. Wit ad-libs. It teaches new dogs lotsa old tricks. Throw spaghetti ’gainst the wall—wit’s what sticks. You can’t beat it or repeat it, not even with a shtick. Wit rocks the boat. That’s all she wrote.

Eagleton picks up Geary’s theme of how wit and metaphor are grounded in the “aha” of incongruity.

There are many theories of humour in addition to those we have looked at. They include the play theory, the conflict theory, the ambivalence theory, the dispositional theory, the mastery theory, the Gestalt theory, the Piagetian theory and the configurational theory. Several of these, however, are really versions of the incongruity theory, which remains the most plausible account of why we laugh. On this view, humour springs from a clash of incongruous aspects – a sudden shift of perspective, an unexpected slippage of meaning, an arresting dissonance or discrepancy, a momentary defamiliarising of the familiar and so on. As a temporary ‘derailment of sense’, it involves the disruption of orderly thought processes or the violation of laws or conventions. It is, as D. H. Munro puts it, a breach in the usual order of events.

“The Duke’s a long time coming today,” said the Duchess, stirring her tea with the other hand.


He talks about how humor gives us license to be momentarily freed from the shackles of reason and order, a revolt of the id against the superego.  But the key is that reason and order are quickly restored, so the lapse of control is risk free.

As a pure enunciation that expresses nothing but itself, laughter lacks intrinsic sense, rather like an animal’s cry, but despite this it is richly freighted with cultural meaning. As such, it has a kinship with music. Not only has laughter no inherent meaning, but at its most riotous and convulsive it involves the disintegration of sense, as the body tears one’s speech to fragments and the id pitches the ego into temporary disarray. As with grief, severe pain, extreme fear or blind rage, truly uproarious laughter involves a loss of physical self-control, as the body gets momentarily out of hand and we regress to the uncoordinated state of the infant. It is quite literally a bodily disorder.

It is just the same with the fantasy revolution of carnival, when the morning after the merriment the sun will rise on a thousand empty wine bottles, gnawed chicken legs and lost virginities and everyday life will resume, not without a certain ambiguous sense of relief. Or think of stage comedy, where the audience is never in any doubt that the order so delightfully disrupted will be restored, perhaps even reinforced by this fleeting attempt to flout it, and thus can blend its anarchic pleasures with a degree of conservative self-satisfaction.

Like Geary, Eagleton shows how a key to wit is its ability to hone down an issue to a sharp point, which is captured in a verbal succinctness that is akin to poetry.

Wit has a point, which is why it is sometimes compared to the thrust of a rapier. It is rapier-like in its swift, shapely, streamlined, agile, flashing, glancing, dazzling, dexterous, pointed, clashing, flamboyant aspects, but also because it can stab and wound.

A witticism is a self-conscious verbal performance, but it is one that minimises its own medium, compacting its words into the slimmest possible space in an awareness that the slightest surplus of signification might prove fatal to its success. As with poetry, every verbal unit must pull its weight, and the cadence, rhythm and resonance of a piece of wit may be vital to its impact. The tighter the organisation, the more a verbal slide, ambiguity, conceptual shift or trifling dislocation of syntax registers its effect.

There is a strong lesson for writers in this discussion of wit.  Sharpen the argument, tighten the prose, focus on “brevity, novelty, and clarity.”  Learn from the craft of the poet and the comedian.  Less is more.

One problem with academic writing in particular is that it takes itself too seriously.  It pays for us to keep our wit about us as we write scholarly papers, acknowledging that we don’t know quite as much about the subject as we are letting on.  Conceding a bit of weakness can be quite appealing.  Oscar Wilde:  “I can resist anything but temptation.”

Everyday life involves sustaining a number of polite fictions: that we take a consuming interest in the health and well-being of our most casual acquaintances, that we never think about sex for a single moment, that we are thoroughly familiar with the later work of Schoenberg and so on. It is pleasant to drop the mask for a moment and strike up a comedic solidarity of weakness.

It is as though we are all really play-actors in our conventional social roles, sticking solemnly to our meticulously scripted parts but ready at the slightest fluff or stumble to dissolve into infantile, uproariously irresponsible laughter at the sheer arbitrariness and absurdity of the whole charade.

And don’t forget what Mel Brooks said:  Tragedy is when you cut your finger, and comedy is when someone else walks into an open sewer and dies.

Posted in Academic writing, Higher Education, Teaching, Writing

I Would Rather Do Anything Else than Grade Your Final Papers — Robin Lee Mozer

If the greatest joy that comes from retirement is that I no longer have to attend faculty meetings, the second greatest joy is that I no longer have to grade student papers.  I know, I know: commenting on student writing is a key component of being a good teacher, and there’s a real satisfaction that comes from helping someone become a better thinker and better writer.

But most students are not producing papers to improve their minds or hone their writing skills.  They’re just trying to fulfill a course requirement and get a decent grade.  And this creates a strong incentive not for excellence but for adequacy.  It encourages people to devote most of their energy toward gaming the system.

The key skill is to produce something that looks and feels like a good answer to the exam question or a good analysis of an intellectual problem.  Students have a powerful incentive to attain the highest grade for the lowest investment of time and intellectual effort.  This means aiming for quantity over quality (puff up the prose to hit the word count) and form over substance (dutifully refer to the required readings without actually drawing meaningful content from them).  Glibness provides useful cover for the absence of content.  It’s depressing to observe how the system fosters discursive means that undermine the purported aims of education.

Back in the days when students turned in physical papers and then received them back with handwritten comments from the instructor, I used to get a twinge in my stomach when I saw that most students didn’t bother to pick up their final papers from the box outside my office.  I felt like a sucker for providing careful comments that no one would ever see.  At one point I even asked students to tell me in advance if they wanted their papers back, so I only commented on the ones that might get read.  But this was even more depressing, since it meant that a lot of students didn’t even mind letting me know that they really only cared about the grade.  The fiction of doing something useful was what helped keep me going.

So, like many other faculty, I responded with joy to a 2016 piece that Robin Lee Mozer wrote in McSweeney’s called “I Would Rather Do Anything Else than Grade Your Final Papers.”  As a public service to teachers everywhere, I’m republishing her essay here.  Enjoy.



Dear Students Who Have Just Completed My Class,

I would rather do anything else than grade your Final Papers.

I would rather base jump off of the parking garage next to the student activity center or eat that entire sketchy tray of taco meat leftover from last week’s student achievement luncheon that’s sitting in the department refrigerator or walk all the way from my house to the airport on my hands than grade your Final Papers.

I would rather have a sustained conversation with my grandfather about politics and government-supported healthcare and what’s wrong with the system today and why he doesn’t believe in homeowner’s insurance because it’s all a scam than grade your Final Papers. Rather than grade your Final Papers, I would stand in the aisle at Lowe’s and listen patiently to All the Men mansplain the process of buying lumber and how essential it is to sight down the board before you buy it to ensure that it’s not bowed or cupped or crook because if you buy lumber with defects like that you’re just wasting your money even as I am standing there, sighting down a 2×4 the way my father taught me 15 years ago.

I would rather go to Costco on the Friday afternoon before a three-day weekend. With my preschooler. After preschool.

I would rather go through natural childbirth with twins. With triplets. I would rather take your chemistry final for you. I would rather eat beef stroganoff. I would rather go back to the beginning of the semester like Sisyphus and recreate my syllabus from scratch while simultaneously building an elaborate class website via our university’s shitty web-based course content manager and then teach the entire semester over again than grade your goddamn Final Papers.

I do not want to read your 3AM-energy-drink-fueled excuse for a thesis statement. I do not want to sift through your mixed metaphors, your abundantly employed logical fallacies, your incessant editorializing of your writing process wherein you tell me As I was reading through articles for this paper I noticed that — or In the article that I have chosen to analyze, I believe the author is trying to or worse yet, I sat down to write this paper and ideas kept flowing into my mind as I considered what I should write about because honestly, we both know that the only thing flowing into your mind were thoughts of late night pizza or late night sex or late night pizza and sex, or maybe thoughts of that chemistry final you’re probably going to fail later this week and anyway, you should know by now that any sentence about anything flowing into or out of or around your blessed mind won’t stand in this college writing classroom or Honors seminar or lit survey because we are Professors and dear god, we have Standards.

I do not want to read the one good point you make using the one source that isn’t Wikipedia. I do not want to take the time to notice that it is cited properly. I do not want to read around your 1.25-inch margins or your gauche use of size 13 sans serif fonts when everyone knows that 12-point Times New Roman is just. Fucking. Standard. I do not want to note your missing page numbers. Again. For the sixth time this semester. I do not want to attempt to read your essay printed in lighter ink to save toner, as you say, with the river of faded text from a failing printer cartridge splitting your paper like Charlton Heston in The Ten Commandments, only there, it was a sea and an entire people and here it is your vague stand-in for an argument.

I do not want to be disappointed.

I do not want to think less of you as a human being because I know that you have other classes and that you really should study for that chemistry final because it is organic chemistry and everyone who has ever had a pre-med major for a roommate knows that organic chemistry is the weed out course and even though you do not know this yet because you have never even had any sort of roommate until now, you are going to be weeded out. You are going to be weeded out and then you will be disappointed and I do not want that for you. I do not want that for you because you will have enough disappointments in your life, like when you don’t become a doctor and instead become a philosophy major and realize that you will never make as much money as your brother who went into some soul-sucking STEM field and landed some cushy government contract and made Mom and Dad so proud and who now gives you expensive home appliances like espresso machines and Dyson vacuums for birthday gifts and all you ever send him are socks and that subscription to that shave club for the $6 middle-grade blades.

I do not want you to be disappointed. I would rather do anything else than disappoint you and crush all your hopes and dreams —

Except grade your Final Papers.

The offer to take your chemistry final instead still stands.

Posted in Academic writing, Educational Research, Higher Education, Writing

Getting It Wrong — Rethinking a Life in Scholarship

This post is an overview of my life as a scholar.  I presented an oral version in my job talk at Stanford in 2002.  The idea was to make sense of the path I’d taken in my scholarly writing up to that point.  What were the issues I was looking at and why?  How did these ideas develop over time?  And what lessons can we learn from this process that might be of use to scholars who are just starting out?

This piece first appeared in print as the introduction to a 2005 book called Education, Markets, and the Public Good: The Selected Works of David F. Labaree.  As a friend told me after hearing about the book, “Isn’t this kind of compilation something that’s published after you’re dead?”  So why was I doing this as a mere youth of 58?  The answer: Routledge offered me the opportunity.  Was there ever an academic who turned down the chance to publish something when it arose?  The book was part of a series called — listen for the drum roll — The World Library of Educationalists, which must have a place near the top of the list of bad ideas floated by publishers.  After the first year, when a few libraries rose to the bait, annual sales of this volume never exceeded single digits.  Its rank in the Amazon bestseller list is normally in the two millions.

Needless to say, no one ever read this piece in its originally published form.  So I tried again, this time slightly adapting it for a 2011 volume edited by Wayne Urban called Leaders in the Historical Study of American Education, which consisted of autobiographical sketches by scholars in the field.  It now ranks in the five millions on Amazon, so the essay still never found a reader.  As a result, I decided to give the piece one more chance at life in my blog.  I enjoyed reading it again and thought it offered some value to young scholars just starting out in a daunting profession.  I hope you enjoy it too.

The core insight is that research trajectories are not things you can carefully map out in advance.  They just happen.  You learn as you go.  And the most effective means of learning from your own work — at least from my experience — arises from getting it wrong, time and time again.  If you’re not getting things wrong, you may not be learning much at all, since you may just be continually finding what you’re looking for.  It may well be that what you need to find are the things you’re not looking for and that you really don’t want to confront.  The things that challenge your own world view, that take you in a direction you’d rather not go, forcing you to give up ideas you really want to keep.

Another insight I got from this process of reflection is that it’s good to know the central weaknesses in the way you do research.  Everyone has them.  Best to acknowledge where you’re coming from and learn to live with that.  These weaknesses don’t discount the value of your work; they just put limits on it.  Your way of doing scholarship is probably better at producing some kinds of insights than others.  That’s OK.  Build on your strengths and let others point out your weaknesses.  You have no obligation and no ability to give the final answer on any important question.  Instead, your job is to make a provocative contribution to the ongoing scholarly conversation and let other scholars take it from there, countering your errors and filling in the gaps.  There is no last word.

Here’s a link to a PDF of the 2011 version.  Hope you find it useful.


Adventures in Scholarship

Instead of writing an autobiographical sketch for this volume, I thought it would be more useful to write about the process of scholarship, using my own case as a cautionary tale.  The idea is to help emerging scholars in the field to think about how scholars develop a line of research across a career, both with the hope of disabusing them of misconceptions and showing them how scholarship can unfold as a scary but exhilarating adventure in intellectual development.  The brief story I tell here has three interlocking themes:  You need to study things that resonate with your own experience; you need to take risks and plan to make a lot of mistakes; and you need to rely on friends and colleagues to tell you when you’re going wrong.  Let me explore each of these points.

Study What Resonates with Experience

First, a little about the nature of the issues I explore in my scholarship and then some thoughts about the source of my interest in these issues. My work focuses on the historical sociology of the American system of education and on the thick vein of irony that runs through it.  This system has long presented itself as a model of equal opportunity and open accessibility, and there is a lot of evidence to support these claims.  In comparison with Europe, this upward expansion of access to education came earlier, moved faster, and extended to more people.  Today, virtually anyone can go to some form of postsecondary education in the U.S., and more than two-thirds do.  But what students find when they enter the educational system at any level is that they are gaining equal access to a sharply unequal array of educational experiences.  Why?  Because the system balances open access with radical stratification.  Everyone can go to high school, but quality of education varies radically across schools.  Almost everyone can go to college, but the institutions that are most accessible (community colleges) provide the smallest boost to a student’s life chances, whereas the ones that offer the surest entrée into the best jobs (major research universities) are highly selective.  This extreme mixture of equality and inequality, of accessibility and stratification, is a striking and fascinating characteristic of American education, which I have explored in some form or another in all my work.

Another prominent irony in the story of American education is that this system, which was set up to instill learning, actually undercuts learning because of a strong tendency toward formalism.  Educational consumers (students and their parents) quickly learn that the greatest rewards of the system go to those who attain its highest levels (measured by years of schooling, academic track, and institutional prestige), where credentials are highly scarce and thus the most valuable.  This vertically-skewed incentive structure strongly encourages consumers to game the system by seeking to accumulate the largest number of tokens of attainment – grades, credits, and degrees – in the most prestigious programs at the most selective schools.  However, nothing in this reward structure encourages learning, since the payoff comes from the scarcity of the tokens and not the volume of knowledge accumulated in the process of acquiring these tokens.  At best, learning is a side effect of this kind of credential-driven system.  At worst, it is a casualty of the system, since the structure fosters consumerism among students, who naturally seek to gain the most credentials for the least investment in time and effort.  Thus the logic of the used-car lot takes hold in the halls of learning.

In exploring these two issues of stratification and formalism, I tend to focus on one particular mechanism that helps explain both kinds of educational consequences, and that is the market.  Education in the U.S., I argue, has increasingly become a commodity, which is offered and purchased through market processes in much the same way as other consumer goods.  Educational institutions have to be sensitive to consumers, by providing the mix of educational products that the various sectors of the market demand.  This promotes stratification in education, because consumers want educational credentials that will distinguish them from the pack in their pursuit of social advantage.  It also promotes formalism, because markets operate based on the exchange value of a commodity (what it can be exchanged for) rather than its use value (what it can be used for).  Educational consumerism preserves and increases social inequality, undermines knowledge acquisition, and promotes the dysfunctional overinvestment of public and private resources in an endless race for degrees of advantage.  The result is that education has increasingly come to be seen primarily as a private good, whose benefits accrue only to the owner of the educational credential, rather than a public good, whose benefits are shared by all members of the community even if they don’t have a degree or a child in school.  In many ways, the aim of my work has been to figure out why the American vision of education over the years made this shift from public to private.

This is what my work has focused on in the last 30 years, but why focus on these issues?  Why this obsessive interest in formalism, markets, stratification, and education as arbiter of status competition?  Simple. These were the concerns I grew up with.

George Orwell once described his family’s social location as the lower upper middle class, and this captures the situation of my own family.  In The Road to Wigan Pier, his meditation on class relations in England, he talks about his family as being both culture rich and money poor.[1]  Likewise for mine.  Both of my grandfathers were ministers.  On my father’s side the string of clergy went back four generations in the U.S.  On my mother’s side, not only was her father a minister but so was her mother’s father, who was in turn the heir to a long clerical lineage in Scotland.  All of these ministers were Presbyterians, whose clergy has long had a distinctive history of being highly educated cultural leaders who were poor as church mice.  The last is a bit of an exaggeration, but the point is that their prestige and authority came from learning and not from wealth.  So they tended to value education and disdain grubbing for money.  My father was an engineer who managed to support his family in a modest but comfortable middle-class lifestyle.  He and my mother plowed all of their resources into the education of their three sons, sending all of them to a private high school in Philadelphia (Germantown Academy) and to private colleges (Lehigh, Drexel, Wooster, and Harvard).  Both of my parents were educated at elite schools (Princeton and Wilson) – on ministerial scholarships – and they wanted to do the same for their own children.

What this meant is that we grew up taking great pride in our cultural heritage and educational accomplishments and adopting a condescending attitude to those who simply engaged in trade for a living.  Coupled with this condescension was a distinct tinge of envy for the nice clothes, well decorated houses, new cars, and fancy trips that the families of our friends experienced.  I thought of my family as a kind of frayed nobility, raising the flag of culture in a materialistic society while wearing hand-me-down clothes.  From this background, it was only natural for me to study education as the central social institution, and to focus in particular on the way education had been corrupted by the consumerism and status-competition of a market society.  In doing so I was merely entering the family business.  Someone out there needed to stand up for substantive over formalistic learning and for the public good over the private good, while at the same time calling attention to the dangers of a social hierarchy based on material status.  So I launched my scholarship from a platform of snobbish populism – a hankering for a lost world where position was grounded on the cultural authority of true learning and where mere credentialism could not hold sway.

Expect to Get Things Wrong

Becoming a scholar is not easy under the best of circumstances, and we may make it even harder by trying to imbue emerging scholars with a dedication to getting things right.[2]  In doctoral programs and tenure reviews, we stress the importance of rigorous research methods and study design, scrupulous attribution of ideas, methodical accumulation of data, and cautious validation of claims.  Being careful to stand on firm ground methodologically in itself is not a bad thing for scholars, but trying to be right all the time can easily make us overly cautious, encouraging us to keep so close to our data and so far from controversy that we end up saying nothing that’s really interesting.  A close look at how scholars actually carry out their craft reveals that they generally thrive on frustration.  Or at least that has been my experience.  When I look back at my own work over the years, I find that the most consistent element is a tendency for getting it wrong.  Time after time I have had to admit failure in the pursuit of my intended goal, abandon an idea that I had once warmly embraced, or backtrack to correct a major error.  In the short run these missteps were disturbing, but in the long run they have proven fruitful.

Maybe I’m just rationalizing, but it seems that getting it wrong is an integral part of scholarship.  For one thing, it’s central to the process of writing.  Ideas often sound good in our heads and resonate nicely in the classroom, but the real test is whether they work on paper.[3]  Only there can we figure out the details of the argument, assess the quality of the logic, and weigh the salience of the evidence.  And whenever we try to translate a promising idea into a written text, we inevitably encounter problems that weren’t apparent when we were happily playing with the idea over lunch.  This is part of what makes writing so scary and so exciting:  It’s a high wire act, in which failure threatens us with every step forward.  Can we get past each of these apparently insuperable problems?  We don’t really know until we get to the end.

This means that if there’s little risk in writing a paper there’s also little potential reward.  If all we’re doing is putting a fully developed idea down on paper, then this isn’t writing; it’s transcribing.  Scholarly writing is most productive when authors are learning from the process, and this happens only if the writing helps us figure out something we didn’t really know (or only sensed), helps us solve an intellectual problem we weren’t sure was solvable, or makes us turn a corner we didn’t know was there.  Learning is one of the main things that makes the actual process of writing (as opposed to the final published product) worthwhile for the writer.  And if we aren’t learning something from our own writing, then there’s little reason to think that future readers will learn from it either.  But these kinds of learning can only occur if a successful outcome for a paper is not obvious at the outset, which means that the possibility of failure is critically important to the pursuit of scholarship.

Getting it wrong is also functional for scholarship because it can force us to give up a cherished idea in the face of the kinds of arguments and evidence that accumulate during the course of research.  Like everyone else, scholars are prone to confirmation bias.  We look for evidence to support the analysis we prefer and overlook evidence that supports other interpretations.  So when we collide with something in our research or writing that deflects us from the path toward our preferred destination, we tend to experience this deflection as failure.  However, although these experiences are not pleasant, they can be quite productive.  Not only do they prompt us to learn things we don’t want to know, they can also introduce arguments into the literature that people don’t want to hear.  A colleague at the University of Michigan, David Angus, had both of these benefits in mind when he used to pose the following challenge to every candidate for a faculty position in the School of Education:  “Tell me about some point when your research forced you to give up an idea you really cared about.”

I have experienced all of these forms of getting it wrong.  Books never worked out the way they were supposed to, because of changes forced on me by the need to come up with remedies for ailing arguments.  The analysis often turned in a direction that meant giving up something I wanted to keep and embracing something I preferred to avoid.  And nothing ever stayed finished.  Just when I thought I had a good analytical hammer and started using it to pound everything in sight, it would shatter into pieces and I would be forced to start over.  This story of misdirection and misplaced intentions starts, as does every academic story, with a dissertation.

Marx Gives Way to Weber

My dissertation topic fell into my lap one day during the final course in my doctoral program in sociology at the University of Pennsylvania, when I mentioned to Michael Katz that I had done a brief study of Philadelphia’s Central High School for an earlier class.  He had a new grant for studying the history of education in Philadelphia, and Central was the lead school.  He needed someone to study the school, and I needed a topic, advisor, and funding; by happy accident, it all came together in 15 minutes.  I had first become interested in education as an object of study as an undergraduate at Harvard in the late 1960s, where I majored in Students for a Democratic Society and minored in sociology.  In my last year or two there, I worked on a Marxist analysis of Harvard as an institution of social privilege (is there a better case?), which whetted my appetite for educational research.

For the dissertation, I wanted to apply the same kind of Marxist approach to Central High School, which seemed to beg for it.  Founded in 1838, it was the first high school in the city and one of the first in the country, and it later developed into the elite academic high school for boys in the city.  It looked like the Harvard of public high schools.  I had a model for this kind of analysis, Katz’s study of Beverly High School, in which he explained how this high school, shortly after its founding, came to be seen by many citizens as an institution that primarily served the upper classes, thus prompting the town meeting to abolish the school in 1861.[4]  I was planning to do this kind of study about Central, and there seemed to be plenty of evidence to support such an interpretation, including its heavily upper-middle-class student body, its aristocratic reputation in the press, and its later history as the city’s elite high school.

That was the intent, but my plan quickly ran into two big problems in the data I was gathering.  First, a statistical analysis of student attainment and achievement at the school over its first 80 years showed a consistent pattern:  only one-quarter of the students managed to graduate, which meant it was highly selective; but grades and not class determined who made it and who didn’t, which meant it was – surprise – highly meritocratic.  Attrition in modern high schools is strongly correlated with class, but this was not true in the early years at Central.  Middle-class students were more likely to enroll in the first place, but they were no more likely to succeed than working-class students.  The second problem was that the high school’s role in the Philadelphia school system didn’t fit the Marxist story of top-down control that I was trying to tell.  In the first 50 years of the high school, there was a total absence of bureaucratic authority over the Philadelphia school system.  The high school was an attractive good in the local educational market, offering elevated education in a grand building at a collegiate level (it granted bachelor’s degrees) and at no cost.  Grammar school students competed for access to this commodity by passing an entrance exam, and grammar school masters competed to get the most students into Central by teaching to the test.  The power that the high school exerted over the system was considerable but informal, arising from consumer demand from below rather than bureaucratic dictate from above.

Thus my plans to tell a story of class privilege and social control fell apart at the very outset of my dissertation; in its place, I found a story about markets and stratification:  Marx gives way to Weber.  The establishment of Central High School in the nation’s second largest city created a desirable commodity with instant scarcity, and this consumer-based market power not only gave the high school control over the school system but also gave it enough autonomy to establish a working meritocracy.  The high school promoted inequality: it served a largely middle-class constituency and established an extreme form of educational stratification.  But it imposed a tough meritocratic regime equally on the children of the middle class and working class, with both groups failing most of the time.

Call on Your Friends for Help

In the story I’m telling here, the bad news is that scholarship is a terrain that naturally lures you into repeatedly getting it wrong.  The good news is that help is available if you look for it, which can turn scholarly wrong-headedness into a fruitful learning experience.  Just ask your friends and colleagues.  The things you most don’t want to hear may be just the things that will save you from intellectual confusion and professional oblivion.  Let me continue with the story, showing how colleagues repeatedly saved my bacon.

Markets Give Ground to Politics

Once I completed the dissertation, I gradually settled into being a Weberian, a process that took a while because of the disdain that Marxists hold for Weber.[5]  I finally decided I had a good story to tell about markets and schools, even if it wasn’t the one I had wanted to tell, so I used this story in rewriting the dissertation as a book.  When I had what I thought was a final draft ready to send to the publisher, I showed it to my colleague at Michigan State, David Cohen, who had generously offered to give it a reading.  His comments were extraordinarily helpful and quite devastating.  In the book, he said, I was interpreting the evolution of the high school and the school system as a result of the impact of the market, but the story I was really telling was about an ongoing tension for control of schools between markets and politics.[6]  The latter element was there in the text, but I had failed to recognize it and make it explicit in the analysis.  In short, he explained to me the point of my own book; so I had to rewrite the entire manuscript in order to bring out this implicit argument.

Framing this case in the history of American education as a tension between politics and markets allowed me to tap into the larger pattern of tensions that always exist in a liberal democracy:  the democratic urge to promote equality of power and access and outcomes, and the liberal urge to preserve individual liberty, promote free markets, and tolerate inequality.  The story of Central High School spoke to both these elements.  It showed a system that provided equal opportunity and unequal outcomes.  Democratic politics pressed for expanding access to high school for all citizens, whereas markets pressed for restricting access to high school credentials through attrition and tracking.  Central see-sawed back and forth between these poles, finally settling on the grand compromise that has come to characterize American education ever since:  open access to a stratified school system.  Using both politics and markets in the analysis also introduced me to the problem of formalism, since political goals for education (preparing competent citizens) value learning, whereas market goals (education for social advantage) value credentialing.

Disaggregating Markets

The book came out in 1988 with the title, The Making of an American High School.[7]  With politics and markets as my new hammer, everything looked like a nail.  So I wrote a series of papers in which I applied the idea to a wide variety of educational institutions and reform efforts, including the evolution of high school teaching as work, the history of social promotion, the history of the community college, the rhetorics of educational reform, and the emergence of the education school.

Midway through this flurry of papers, however, I ran into another big problem.  I sent a draft of my community college paper to David Hogan, a friend and former member of my dissertation committee at Penn, and his critique stopped me cold.  He pointed out that I was using the idea of educational markets to refer to two things that were quite different, both in concept and in practice.  One was the actions of educational consumers, the students who want education to provide the credentials they need in order to get ahead; the other was the actions of educational providers, the taxpayers and employers who want education to produce the human capital that society needs in order to function.  The consumer sought education’s exchange value, providing selective benefits for the individual who owns the credential; the producer sought education’s use value, providing collective benefits to everyone in society, even those not in school.

This forced me to reconstruct the argument from the ground up, abandoning the politics and markets angle and constructing in its place a tension among three goals that competed for primacy in shaping the history of American education.  “Democratic equality” referred to the goal of using education to prepare capable citizens; “social efficiency” referred to the goal of using education to prepare productive workers; and “social mobility” referred to the goal of using education to enable individuals to get ahead in society.  The first was a stand-in for educational politics, the second and third were a disaggregation of educational markets.

Abandoning the Good, the Bad, and the Ugly

Once formulated, the idea of the three goals became a mainstay in my teaching, and for a while it framed everything I wrote.  I finished the string of papers I mentioned earlier, energized by the analytical possibilities inherent in the new tool.  But by the mid-1990s, I began to be afraid that its magic power would start to fade on me soon, as had happened with earlier enthusiasms like Marxism and politics-and-markets.  Most ideas have a relatively short shelf life, as metaphors quickly reach their limits and big ideas start to shrink upon close examination.  That doesn’t mean these images and concepts are worthless, only that they are bounded, both conceptually and temporally.  So scholars need to strike while the iron is hot.  Michael Katz once made this point to me with the Delphic advice, “Write your first book first.”  In other words, if you have an idea worth injecting into the conversation, you should do so now, since it will eventually evolve into something else, leaving the first idea unexpressed.  Since the evolution of an idea is never finished, holding off publication until the idea is done is a formula for never publishing.

So it seemed like the right time to put together a collection of my three-goals papers into a book, and I had to act quickly before they started to turn sour.  With a contract for the book and a sabbatical providing time to put it together, I now had to face the problem of framing the opening chapter.  In early 1996 I completed a draft and submitted it to American Educational Research Journal.  The reviews knocked me back on my heels.  They were supportive but highly critical.  One in particular, which I later found out was written by Norton Grubb, forced me to rethink the entire scheme of competing goals.  He pointed out something I had completely missed in my enthusiasm for the tool-of-the-moment.  In practice my analytical scheme with three goals turned into a normative scheme with two:  a Manichean vision of light and darkness, with Democratic Equality as the Good, and with Social Mobility and Social Efficiency as the Bad and the Ugly.  This ideologically colored representation didn’t hold up under close scrutiny.  Grubb pointed out that social efficiency is not as ugly as I was suggesting.  Like democratic equality and unlike social mobility, it promotes learning, since it has a stake in the skills of the workforce.  Also, like democratic equality, it views education as a public good, whose benefits accrue to everyone and not just (as with social mobility) to the credential holder.

This trenchant critique forced me to start over, putting a different spin on the whole idea of competing goals, abandoning the binary vision of good and evil, reluctantly embracing the idea of balance, and removing the last vestige of my original bumper-sticker Marxism.  As I reconstructed the argument, I put forward the idea that all three of these goals emerge naturally from the nature of a liberal democracy, and that all three are necessary.[8]  There is no resolution to the tension among educational goals, just as there is no resolution to the problem of being both liberal and democratic.  We need an educational system that makes capable citizens and productive workers while also enabling individuals to pursue their own aspirations.  And we all act out our support for each of these goals according to which social role is most salient to us at the moment.  As citizens, we want graduates who can vote intelligently; as taxpayers and employers, we want graduates who will increase economic productivity; and as parents, we want an educational system that offers our children social opportunity.  The problem is the imbalance in the current mix of goals, as the growing primacy of social mobility over the other two goals privileges private over public interests, stratification over equality, and credentials over learning.

Examining Life at the Bottom of the System

With this reconstruction of the story, I was able to finish my second book, published in 1997, and get it out the door before any other major problems could threaten its viability.[9]  One such problem was already coming into view.  In comments on my AERJ goals paper, John Rury (the editor) pointed out that my argument relied on a status competition model of social organization – students fighting for scarce credentials in order to move up or stay up – that did not really apply to the lower levels of the system.  Students in the lower tracks in high school and in the open-access realms of higher education (community colleges and regional state universities) lived in a different world from the one I was talking about.  They were affected by the credentials race, but they weren’t really in the race themselves.  For them, the incentives to compete were minimal, the rewards remote, and the primary imperative was not success but survival.

Fortunately, however, there was one place at the bottom of the educational hierarchy I did know pretty well, and that was the poor beleaguered education school.  From 1985 to 2003, while I was teaching in the College of Education at Michigan State University, I received a rich education in the subject.  I had already started a book about ed schools, but it wasn’t until the book was half completed that I realized it was forcing me to rethink my whole thesis about the educational status game.  Here was an educational institution that was the antithesis of the Harvards and Central High Schools that I had been writing about thus far.  Residing at the very bottom of the educational hierarchy, the ed school was disdained by academics, avoided by the best students, ignored by policymakers, and discounted by its own graduates.  It was the perfect case to use in answering a question I had been avoiding:  What happens to education when credentials carry no exchange value and the status game is already lost?

What I found is that life at the bottom has some advantages, but they are outweighed by disadvantages.  On the positive side, the education school’s low status frees it to focus efforts on learning rather than on credentials, on the use value rather than exchange value of education; in this sense, it is liberated from the race for credentials that consumes the more prestigious realms of higher education.  On the negative side, however, the ed school’s low status means that it has none of the autonomy that prestigious institutions (like Central High School) generate for themselves, which leaves it vulnerable to kibitzing from the outside.  This institutional weakness also has made the ed school meekly responsive to its environment, so that over the years it obediently produced large numbers of teachers at low cost and with modest professional preparation, as requested.

When I had completed a draft of the book, I asked for comments from two colleagues at Michigan State, Lynn Fendler and Tom Bird, who promptly pointed out several big problems with the text.  One had to do with the argument in the last few chapters, where I was trying to make two contradictory points:  ed schools were weak in shaping schools but effective in promoting progressive ideology.  The other problem had to do with the book’s tone:  as an insider taking a critical position about ed schools, I sounded like I was trying to enhance my own status at the expense of colleagues.  Fortunately, they were able to show me a way out of both predicaments.  On the first issue, they helped me see that ed schools were more committed to progressivism as a rhetorical stance than as a mode of educational practice.  In our work as teacher educators, we have to prepare teachers to function within an educational system that is hostile to progressive practices.  On the second issue, they suggested that I shift from the third person to the first person.  By announcing clearly both my membership in the community under examination and my participation in the problems I was critiquing, I could change the tone from accusatory to confessional.  With these important changes in place, The Trouble with Ed Schools was published in 2004.[10]

Enabling Limitations

In this essay I have been telling a story about grounding research in an unlovely but fertile mindset, getting it wrong repeatedly, and then trying to fix it with the help of friends.  However, I don’t want to leave the impression that I think any of these fixes really resolved the problems.  The story is more about filling potholes than about re-engineering the road.  It’s also about some fundamental limitations in my approach to the historical sociology of American education, which I have been unwilling and unable to fix since they lie at the core of my way of seeing things.  Intellectual frameworks define, shape, and enable the work of scholars.  Such frameworks can be helpful by allowing us to cut a slice through the data and reveal interesting patterns that are not apparent from other angles, but they can only do so if they maintain a sharp leading edge.  As an analytical instrument, a razor works better than a baseball bat, and a beach ball doesn’t work at all.  The sharp edge, however, comes at a cost, since it necessarily narrows the analytical scope and commits a scholar to one slice through a problem at the expense of others.  I’m all too aware of the limitations that arise from my own cut at things.

One problem is that I tend to write a history without actors.  Taking a macro-sociological approach to history, I am drawn to explore general patterns and central tendencies in the school-society relationship rather than the peculiarities of individual cases.  In the stories I tell, people don’t act.  Instead, social forces contend, social institutions evolve in response to social pressures, and collective outcomes ensue.  My focus is on general processes and structures rather than on the variations within categories.  What is largely missing from my account of American education is the radical diversity of traits and behaviors that characterizes educational actors and organizations.  I plead guilty to these charges.  However, my aim has been not to write a tightly textured history of the particular but to explore some of the broad socially structured patterns that shape the main outlines of American educational life.  My sense is that this kind of work serves a useful purpose—especially in a field such as education, whose dominant perspectives have been psychological and presentist rather than sociological and historical; and in a sub-field like history of education, which can be prone to the narrow monograph with little attention to the big picture; and in a country like the United States, which is highly individualistic in orientation and tends to discount the significance of the collective and the categorical.

Another characteristic of my work is that I tend to stretch arguments well beyond the supporting evidence.  As anyone can see in reading my books, I am not in the business of building an edifice of data and planting a cautious empirical generalization on the roof.  My first book masqueraded as a social history of an early high school, but it was actually an essay on the political and market forces shaping the evolution of American education in general—a big leap to make from historical data about a single, atypical school.  Likewise my second book is a series of speculations about credentialing and consumerism that rests on a modest and eclectic empirical foundation.  My third book involves minimal data on education in education schools and maximal rumination about the nature of “the education school.”  In short, validating claims has not been my strong suit.  I think the field of educational research is sufficiently broad and rich that it can afford to have some scholars who focus on constructing credible empirical arguments about education and others who focus on exploring ways of thinking about the subject.

The moral of this story, therefore, may be that scholarship is less a monologue than a conversation.  In education, as in other areas, our field is so expansive that we can’t cover more than a small portion, and it’s so complex that we can’t even gain mastery over our own tiny piece of the terrain.  But that’s ok.  As participants in the scholarly conversation, our responsibility is not to get things right but to keep things interesting, while we rely on discomfiting interactions with our data and with our colleagues to provide the correctives we need to make our scholarship more durable.

[1]  George Orwell,  The Road to Wigan Pier (New York: Harcourt, Brace, 1958).

[2]  I am grateful to Lynn Fendler and Tom Bird for comments on an earlier draft of this portion of the essay.  As they have done before, they saved me from some embarrassing mistakes.  I presented an earlier version of this analysis in a colloquium at the Stanford School of Education in 2002 and in the Division F Mentoring Seminar at the American Educational Research Association annual meeting in New Orleans later the same year.  A later version was published as the introduction to Education, Markets, and the Public Good: The Selected Works of David F. Labaree (London: Routledge Falmer, 2007).  Reprinted with the kind permission of Taylor and Francis.

[3]  That doesn’t mean it’s necessarily the best way to start developing an idea.  For me, teaching has always served better as a medium for stimulating creative thought.  It’s a chance for me to engage with ideas from texts about a particular topic, develop a story about these ideas, and see how it sounds when I tell it in class and listen to student responses.  The classroom has a wonderful mix of traits for these purposes: it forces discipline and structure on the creative process while allowing space for improvisation and offering the chance to reconstruct everything the next time around.  After my first book, most of my writing had its origins in this pedagogical process.  But at a certain point I find that I have to test these ideas in print.

[4]  Michael B. Katz, The Irony of Early School Reform: Educational Innovation in Mid-Nineteenth Century Massachusetts (Boston: Harvard University Press, 1968).

[5]  Marx’s message is rousing and it can fit on a bumper sticker:  Workers of the world, unite!  But Weber’s message is more complicated, pessimistic, and off-putting:  The iron cage of rationalization has come to dominate the structure of thought and social action, but we can’t stop it or even escape from it.

[6]  He also pointed out, in passing, that my chapter on the attainment system at the high school – which incorporated 17 tables in the book (30 in the dissertation), and which took me two years to develop by collecting, coding, keying, and statistically analyzing data from 2,000 student records – was essentially one big footnote in support of the statement, “Central High School was meritocratic.”  Depressing but true.

[7]  David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988).

[8]  David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal 34:1 (Spring 1997): 39-81.

[9]  David F. Labaree, How to Succeed in School Without Really Learning: The Credentials Race in American Education (New Haven: Yale University Press, 1997).

[10] David F. Labaree,  The Trouble with Ed Schools (New Haven: Yale University Press, 2004).

Posted in Academic writing, Writing

Academic Writing Issues #9: Metaphors — The Poetry of Everyday Life

Earlier I posted a piece about mangled metaphors (Academic Writing Issues # 6), which focused on the trouble that writers get into when they use a metaphor without taking into account the root comparison that is embedded within it.  Example:  talking about “the doctrine set forth in Roe v. Wade and its progeny” — a still-born metaphor if there ever was one.  So writers need to be wary of metaphors, especially those that have become cliches, thus making the original reference dormant.

But don’t let these problems put you off using metaphors altogether.  Actually, it’s nearly impossible to write without any metaphors, since they are so central to communication.  Literal meanings have their uses, and in scientific writing precision is important for maintaining clarity.  But literal language is boring, pedestrian.  It just plods along, telling a story without conveying what the story means.  Metaphor is how we create a richness of meaning, which comes from not just telling what something is but showing what it’s related to.  Metaphors create depth and resonance, and they stick in your mind.

Think about the power of a great book title, which captures the essence of the text in a vivid image:  Bowling Alone; The Bell Curve; The Unbearable Lightness of Being; The Botany of Desire.

In the piece below, David Brooks talks about metaphors as the poetry of everyday life in a 2011 column from the New York Times.  I think you’ll like it.


April 11, 2011

Poetry for Everyday Life


Here’s a clunky but unremarkable sentence that appeared in the British press before the last national election: “Britain’s recovery from the worst recession in decades is gaining traction, but confused economic data and the high risk of a hung Parliament could yet snuff out its momentum.”

The sentence is only worth quoting because in 28 words it contains four metaphors. Economies don’t really gain traction, like a tractor. Momentum doesn’t literally get snuffed out, like a cigarette. We just use those metaphors, without even thinking about it, as a way to capture what is going on.

In his fine new book, “I Is an Other,” James Geary reports on linguistic research suggesting that people use a metaphor every 10 to 25 words. Metaphors are not rhetorical frills at the edge of how we think, Geary writes. They are at the very heart of it.

George Lakoff and Mark Johnson, two of the leading researchers in this field, have pointed out that we often use food metaphors to describe the world of ideas. We devour a book, try to digest raw facts and attempt to regurgitate other people’s ideas, even though they might be half-baked.

When talking about relationships, we often use health metaphors. A friend might be involved in a sick relationship. Another might have a healthy marriage.

When talking about argument, we use war metaphors. When talking about time, we often use money metaphors. But when talking about money, we rely on liquid metaphors. We dip into savings, sponge off friends or skim funds off the top. Even the job title stockbroker derives from the French word brocheur, the tavern worker who tapped the kegs of beer to get the liquidity flowing.

The psychologist Michael Morris points out that when the stock market is going up, we tend to use agent metaphors, implying the market is a living thing with clear intentions. We say the market climbs or soars or fights its way upward. When the market goes down, on the other hand, we use object metaphors, implying it is inanimate. The market falls, plummets or slides.

Most of us, when asked to stop and think about it, are by now aware of the pervasiveness of metaphorical thinking. But in the normal rush of events, we often see straight through metaphors, unaware of how they refract perceptions. So it’s probably important to pause once a month or so to pierce the illusion that we see the world directly. It’s good to pause to appreciate how flexible and tenuous our grip on reality actually is.

Metaphors help compensate for our natural weaknesses. Most of us are not very good at thinking about abstractions or spiritual states, so we rely on concrete or spatial metaphors to (imperfectly) do the job. A lifetime is pictured as a journey across a landscape. A person who is sad is down in the dumps, while a happy fellow is riding high.

Most of us are not good at understanding new things, so we grasp them imperfectly by relating them metaphorically to things that already exist. That’s a “desktop” on your computer screen.

Metaphors are things we pass down from generation to generation, which transmit a culture’s distinct way of seeing and being in the world. In his superb book “Judaism: A Way of Being,” David Gelernter notes that Jewish thought uses the image of a veil to describe how Jews perceive God — as a presence to be sensed but not seen, which is intimate and yet apart.

Judaism also emphasizes the metaphor of separateness as a path to sanctification. The Israelites had to separate themselves from Egypt. The Sabbath is separate from the week. Kosher food is separate from the nonkosher. The metaphor describes a life in which one moves from nature and conventional society to the sacred realm.

To be aware of the central role metaphors play is to be aware of how imprecise our most important thinking is. It’s to be aware of the constant need to question metaphors with data — to separate the living from the dead ones, and the authentic metaphors that seek to illuminate the world from the tinny advertising and political metaphors that seek to manipulate it.

Most important, being aware of metaphors reminds you of the central role that poetic skills play in our thought. If much of our thinking is shaped and driven by metaphor, then the skilled thinker will be able to recognize patterns, blend patterns, apprehend the relationships and pursue unexpected likenesses.

Even the hardest of the sciences depend on a foundation of metaphors. To be aware of metaphors is to be humbled by the complexity of the world, to realize that deep in the undercurrents of thought there are thousands of lenses popping up between us and the world, and that we’re surrounded at all times by what Steven Pinker of Harvard once called “pedestrian poetry.”

Posted in Academic writing, Capitalism, History

E.P. Thompson: Time, Work-Discipline, and Industrial Capitalism

This post is a tribute to a wonderful essay by the great British historian of working-class history, E. P. Thompson.  His classic work is The Making of the English Working Class, published in 1966.  The paper I’m touting here provides a lovely window into the heart of his craft, which is an unlikely combination of Oxbridge erudition and Marxist analysis.

It’s the story of the new sense of time that emerged with the arrival of capitalism, when suddenly time became money.  If you’re making shoes to order in a precapitalist workshop, you work until the order is completed and then you take it easy.  But if your labor is being hired by the hour, then your employer has an enormous incentive to squeeze as much productivity as possible out of every minute you are on the clock.  The old model is more natural for humans: work until you’ve accomplished what you need and then stop.  Binge and break.  Think about the way college students spend their time when they’re not being supervised — a mix of all-nighters and partying.

Thompson captures the essence of the change between natural time and the time clock with this beautiful epigraph from Thomas Hardy’s Tess of the D’Urbervilles.

Tess … started on her way up the dark and crooked lane or street not made for hasty progress; a street laid out before inches of land had value, and when one-handed clocks sufficiently subdivided the day.

This quote and his analysis have had a huge impact on the way I came to see the world as a scholar of history.

Here’s a link to the paper, which was published in the journal Past and Present in 1967.  Enjoy.


Posted in Academic writing, Uncategorized

Academic Writing Issues #8 — Getting Off to a Fast Start

The introduction to a paper is critically important.  This is where you try to draw in readers, tell them what you’re going to address, and show why this issue is important.  It’s also a place to show a little style, demonstrating that you’re going to take readers on a fun ride.  Below are two exemplary cases of opening strong: one from a detective novel, the other from an academic book.

If you want to see how to draw in the reader quickly, a good place to look is the work of a genre writer.  Authors who make a living from their writing need to make their case up front — to catch readers in the first paragraph and make them want to keep going.  Check out writers of mystery, detective, spy, or science fiction novels.  They’ve got to be good on the first page or the reader is just going to put the book down and pick up another.

One of my favorite genre writers is Elmore Leonard, who’s a master of the opening page.  Here’s the opening page of his novel Glitz:

THE NIGHT VINCENT WAS SHOT he saw it coming. The guy approached out of the streetlight on the corner of Meridian and Sixteenth, South Beach, and reached Vincent as he was walking from his car to his apartment building. It was early, a few minutes past nine.

Vincent turned his head to look at the guy and there was a moment when he could have taken him and did consider it, hit the guy as hard as he could. But Vincent was carrying a sack of groceries. He wasn’t going to drop a half gallon of Gallo Hearty Burgundy, a bottle of prune juice and a jar of Ragú spaghetti sauce on the sidewalk. Not even when the guy showed his gun, called him a motherfucker through his teeth and said he wanted Vincent’s wallet and all the money he had on him. The guy was not big, he was scruffy, wore a tank top and biker boots and smelled. Vincent believed he had seen him before, in the detective bureau holding cell. It wouldn’t surprise him. Muggers were repeaters in their strungout state, often dumb, always desperate. They came on with adrenaline pumping, hoping to hit and get out. Vincent’s hope was to give the guy pause.

He said, “You see that car? Standard Plymouth, nothing on it, not even wheel covers?” It was a pale gray. “You think I’d go out and buy a car like that?” The guy was wired or not paying attention. Vincent had to tell him, “It’s a police car, asshole. Now gimme the gun and go lean against it.”

What he should have done, either put the groceries down and given the guy his wallet or screamed in the guy’s face to hit the deck, now, or he was fucking dead. Instead of trying to be clever and getting shot for it.

Quite a grabber, isn’t it — right from the opening sentence.  For me the key is the deft and concise way he manages to introduce his main character — Vincent, the street-wise detective.  Instead of an extensive physical description or character analysis, he provides a list of what’s in Vincent’s bag of groceries.  Specific details like Gallo Hearty Burgundy and Ragú spaghetti sauce tell you clearly what kind of guy he is:  not a man of refinement on the world stage but a single guy in a seedy part of town with proletarian tastes.  And the next paragraph shows him as the wise-guy cop who can’t resist sticking it to a guy even though it might well not be the smartest move under the circumstances.  One page and you already know Vincent and want to stick with him for a while.

The second example comes from the opening of the first chapter of a 1968 book by the educational sociologist Philip Jackson called Life in Classrooms.

On a typical weekday morning between September and June some 35 million Americans kiss their loved ones goodby, pick up their lunch pails and books, and leave to spend their day in that collection of enclosures (totaling about one million) known as elementary school classrooms. This massive exodus from home to school is accomplished with a minimum of fuss and bother. Few tears are shed (except perhaps by the very youngest) and few cheers are raised. The school attendance of children is such a common experience in our society that those of us who watch them go hardly pause to consider what happens to them when they get there. Of course our indifference disappears occasionally. When something goes wrong or when we have been notified of his remarkable achievement, we might ponder, for a moment at least, the meaning of the experience for the child in question, but most of the time we simply note that our Johnny is on his way to school, and now, it is time for our second cup of coffee.

Parents are interested, to be sure, in how well Johnny does while there, and when he comes trudging home they may ask him questions about what happened today or, more generally, how things went. But both their questions and his answers typically focus on the highlights of the school experience — its unusual aspects — rather than on the mundane and seemingly trivial events that filled the bulk of his school hours. Parents are interested, in other words, in the spice of school life rather than its substance.

Teachers, too, are chiefly concerned with only a very narrow aspect of a youngster’s school experience. They, too, are likely to focus on specific acts of misbehavior or accomplishment as representing what a particular student did in school today, even though the acts in question occupied but a small fraction of the student’s time. Teachers, like parents, seldom ponder the significance of the thousands of fleeting events that combine to form the routine of the classroom.

And the student himself is no less selective. Even if someone bothered to question him about the minutiae of his school day, he would probably be unable to give a complete account of what he had done. For him, too, the day has been reduced in memory into a small number of signal events — “I got 100 on my spelling test,” “A new boy came and he sat next to me” — or recurring activities — “We went to gym,” “We had music.” His spontaneous recall of detail is not much greater than that required to answer our conventional questions.

This concentration on the highlights of school life is understandable from the standpoint of human interest. A similar selection process operates when we inquire into or recount other types of daily activity. When we are asked about our trip downtown or our day at the office we rarely bother describing the ride on the bus or the time spent in front of the watercooler. Indeed, we are more likely to report that nothing happened than to catalogue the pedestrian actions that took place between home and return. Unless something interesting occurred there is little purpose in talking about our experience.

Yet from the standpoint of giving shape and meaning to our lives these events about which we rarely speak may be as important as those that hold our listener’s attention. Certainly they represent a much larger portion of our experience than do those about which we talk. The daily routine, the “rat race,” and the infamous “old grind” may be brightened from time to time by happenings that add color to an otherwise drab existence, but the grayness of our daily lives has an abrasive potency of its own. Anthropologists understand this fact better than do most other social scientists, and their field studies have taught us to appreciate the cultural significance of the humdrum elements of human existence. This is the lesson we must heed as we seek to understand life in elementary classrooms.

Notice how he draws you into observing the daily life of school from the perspective of its main participants — parents, teachers, and students.  He’s showing you how the routine of schooling is so familiar to everyone that it becomes invisible.  Ask students what happened in school today and they’re likely to say, “Nothing.”  Of course, a lot actually happened but none of it is noteworthy.  You only hear about something that broke the routine:  there was a concert in assembly; Jimmy threw up in the lunchroom.

This is his point.  Students are learning things from the regular process of schooling.  They stand in line, wait for the bell, get evaluated, respond to commands.  This is not the formal curriculum, made up of school subjects, but the hidden curriculum of doing school.  The process of schooling, he suggests, may in fact have a bigger impact on the student than its formal content.  He draws you into this idea and leaves you wanting to know more.  That’s good writing.

Posted in Academic writing, Writing

Academic Writing Issues #7 — Writing the Perfect Sentence

The art of writing ultimately comes down to the art of writing sentences.  In his lovely book, How to Write a Sentence, Stanley Fish explains that the heart of any sentence is not its content but its form.  The form is what defines the logical relationship between the various elements within the sentence.  The same formal set of relationships within a sentence structure can be filled with an infinite array of possible bits of content.  If you master the forms, he says, you will be able to harness them to your own aims in producing content.  His core counter-intuitive admonition is this:  “You shall tie yourself to forms and the forms shall set you free.”  Note the perfect form in Lewis Carroll’s nonsense poem Jabberwocky:

Twas brillig, and the slithy toves

Did gyre and gimble in the wabe;

All mimsy were the borogoves,

And the mome raths outgrabe.

I strongly recommend reading the book, which I used for years in my class on academic writing.  You’ll learn a lot about writing and you’ll also accumulate a lovely collection of stunning quotes.

Below is a piece Fish published in the New Statesman in 2011, which deftly summarizes the core argument in the book.  Enjoy.  Here’s a link to the original.


How to write the perfect sentence

Stanley Fish

Published 17 February 2011

In learning how to master the art of putting words together, the trick is to concentrate on technique and not content. Substance comes second.

Look around the room you’re sitting in. Pick out four items at random. I’m doing it now and my items are a desk, a television, a door and a pencil. Now, make the words you have chosen into a sentence using as few additional words as possible. For example: “I was sitting at my desk, looking at the television, when a pencil fell off and rolled to the door.” Or: “The television close to the door obscured my view of the desk and the pencil I needed.” Or: “The pencil on my desk was pointed towards the door and away from the television.” You will find that you can always do this exercise – and you could do it for ever.

That’s the easy part. The hard part is to answer this question: what did you just do? How were you able to turn a random list into a sentence? It might take you a little while but, in time, you will figure it out and say something like this: “I put the relationships in.” That is to say, you arranged the words so that they were linked up to the others by relationships of cause, effect, contiguity, similarity, subordination, place, manner and so on (but not too far on; the relationships are finite). Once you have managed this – and you do it all the time in speech, effortlessly and unselfconsciously – hitherto discrete items participate in the making of a little world in which actors, actions and the objects of actions interact in ways that are precisely represented.

This little miracle you have performed is called a sentence and we are now in a position to define it: a sentence is a structure of logical relationships. Notice how different this is from the usual definitions such as, “A sentence is built out of the eight parts of speech,” or, “A sentence is an independent clause containing a subject and a predicate,” or, “A sentence is a complete thought.” These definitions seem like declarations out of a fog that they deepen. The words are offered as if they explained everything, but each demands an explanation.

When you know that a sentence is a structure of logical relationships, you know two things: what a sentence is – what must be achieved for there to be focused thought and communication – and when a sentence that you are trying to write goes wrong. This happens when the relationships that allow sense to be sharpened are missing or when there are too many of them for comfort (a goal in writing poetry but a problem in writing sentences). In such cases, the components of what you aspired to make into a sentence stand alone, isolated; they hang out there in space and turn back into items on a list.

Armed with this knowledge, you can begin to look at your own sentences and those of others with a view to discerning what is successful and unsuccessful about them. As you do this, you will be deepening your understanding of what a sentence is and introducing yourself to the myriad ways in which logical structures of verbal thought can be built, unbuilt, elaborated upon and admired.

My new book, How to Write a Sentence, is a light-hearted manual of instruction designed to teach you how to do these things – how to write a sentence and how to appreciate in analytical detail the sentences produced by authors who knock your socks off. These two aspects – lessons in sentence craft and lessons in sentence appreciation – reinforce each other; the better able you are to appreciate great sentences, the closer you are to being able to write one. An intimate knowledge of what makes sentences work is one prerequisite for writing them.

Consider the first of those aspects – sentence craft. The chief lesson here is: “It’s not the thought that counts.” By that, I mean that skill in writing sentences is a matter of understanding and mastering form not content. The usual commonplace wisdom is that you have to write about something, but actually you don’t. The exercise I introduced above would work even if your list was made up of nonsense words, as long as each word came tagged with its formal identification – actor, action, object of action, modifier, conjunction, and so on. You could still tie those nonsense words together in ligatures of relationships and come up with perfectly formed sentences like Noam Chomsky’s “Colourless green ideas sleep furiously,” or the stanzas of Lewis Carroll’s “Jabberwocky”.

If what you want to do is become facile (in a good sense) in producing sentences, the sentences with which you practise should be as banal and substantively inconsequential as possible; for then you will not be tempted to be interested in them. The moment that interest comes to the fore, the focus on craft will be lost. (I know that this sounds counter-intuitive, but stick with me.)

I call this the Karate Kid method of learning to write. In that 1984 cult movie (recently remade), the title figure learns how to fight not by participating in a match but by repeating (endlessly and pointlessly, it seems to him) the purely formal motions of waxing cars and painting fences. The idea is that when you are ready either to compete or to say something that is meaningful and means something to you, the forms you have mastered and internalised will generate the content that would have remained inchoate (at best) without them.

These points can be illustrated with senten­ces that are too good to be tossed aside. In the book, I use them to make points about form, but I can’t resist their power or the desire to explain it. When that happens, content returns to my exposition and I shift into full appreciation mode, caressing these extraordinary verbal productions even as I analyse them. I become like a sports commentator, crying, “Did you see that?” or “How could he have pulled that off?” or “How could she keep it going so long and still not lose us?” In the end, the apostle of form surrenders to substance, or rather, to the pleasure of seeing substance emerge though the brilliant deployment of forms.

As a counterpoint to that brilliance, let me hazard an imitation of two of the marvels I discuss. Take Swift’s sublimely malign sentence, “Last week I saw a woman flayed and you will hardly believe how much it altered her person for the worse.” And then consider this decidedly lame imitation: “Last night I ate six whole pizzas and you would hardly believe how sick I was.”

Or compare John Updike’s description in the New Yorker of the home run that the baseball player Ted Williams hit on his last at-bat in 1960 – “It was in the books while it was still in the sky” – to “He had won the match before the first serve.” My efforts in this vein are lessons both in form and humility.

The two strands of my argument can be brought together by considering sentences that are about their own form and unfolding; sentences that meditate on or burst their own limitations, and thus remind us of why we have to write sentences in the first place – we are mortal and finite – and of what rewards may await us in a realm where sentences need no longer be fashioned. Here is such a sentence by the metaphysical poet John Donne:

If we consider eternity, into that time never entered; eternity is not an everlasting flux of time, but time is a short parenthesis in a long period; and eternity had been the same as it is, though time had never been.

The content of the sentence is the unreality of time in the context of eternity, but because a sentence is necessarily a temporal thing, it undermines that insight by being. (Asserting in time the unreality of time won’t do the trick.) Donne does his best to undermine the undermining by making the sentence a reflection on its fatal finitude. No matter how long it is, no matter what its pretension to a finality of statement, it will be a short parenthesis in an enunciation without beginning, middle or end. That enunciation alone is in possession of the present – “is” – and what the sentence comes to rest on is the declaration of its already having passed into the state of non-existence: “had never been”.

Donne’s sentence is in my book; my analysis of it is not. I am grateful to the New Statesman for the opportunity to produce it and to demonstrate once again the critic’s inadequacy to his object.

Stanley Fish is Davidson-Kahn Distinguished University Professor of Humanities and Law at Florida International University. His latest book is “How to Write a Sentence: and How to Read One” (HarperCollins, £12.99)



Posted in Academic writing, Uncategorized

Academic Writing Issues #6 — Mangling Metaphors

Metaphor is an indispensable tool for the writer.  It carries out an essential function by connecting what you’re talking about with other related issues that the reader already recognizes.  This provides a comparative perspective, which gives a richer context for the issue at hand.  Metaphor also introduces a playful characterization of the issue by making figurative comparisons that are counter-intuitive, finding similarities in things that are apparently opposite.  The result is to provoke the reader’s thinking in ways that straightforward exposition cannot.

Metaphors can — and often do — go disastrously wrong.  Here’s Bryan Garner on the subject:  “Skillful use of metaphor is one of the highest attainments of writing; graceless and even aesthetically offensive use of metaphors is one of the commonest scourges of writing.”  A particular problem, especially for academics, is using a shopworn metaphor that has become a cliché, which has lost all value through overuse.  Cases in point: lens; interrogate; path; bottom line; no stone unturned; weighing the evidence.

In this post, I provide two pieces that speak to the issue of metaphors.  One is a section on the subject from Bryan Garner’s Modern American Usage, which provides some great examples of metaphor gone bad.  A second is an extended excerpt from Matt Taibbi’s delightfully vicious takedown of Thomas Friedman’s best-seller, The World Is Flat.


Bryan Garner on Metaphor

METAPHORS. A. Generally. A metaphor is a figure of speech in which one thing is called by the name of something else, or is said to be that other thing. Unlike similes, which use like or as, metaphorical comparisons are implicit—not explicit. Skillful use of metaphor is one of the highest attainments of writing; graceless and even aesthetically offensive use of metaphors is one of the commonest scourges of writing.

Although a graphic phrase often lends both force and compactness to writing, it must seem contextually agreeable. That is, speaking technically, the vehicle of the metaphor (i.e., the literal sense of the metaphorical language) must accord with the tenor of the metaphor (i.e., the ultimate, metaphorical sense), which is to say that the means must fit the end. To illustrate the distinction between the vehicle and the tenor of a metaphor, in the statement that essay is a patchwork quilt without discernible design, the makeup of the essay is the tenor, and the quilt is the vehicle. It is the comparison of the tenor with the vehicle that makes or breaks a metaphor.

A writer would be ill advised, for example, to use rustic metaphors in a discussion of the problems of air pollution, which is essentially a problem of the bigger cities and outlying areas. Doing that mismatches the vehicle with the tenor.

  B. Mixed Metaphors. The most embarrassing problem with metaphors occurs when one metaphor crowds another. It can happen with CLICHÉS—e.g.:
  • “It’s on a day like this that the cream really rises to the crop.” (This mingles the cream rises to the top with the cream of the crop.)
  • “He’s really got his hands cut out for him.” (This mingles he’s got his hands full with he’s got his work cut out for him.)
  • “This will separate the men from the chaff.” (This mingles separate the men from the boys with separate the wheat from the chaff.)
  • “It will take someone willing to pick up the gauntlet and run with it.” (This mingles pick up the gauntlet with pick up the ball and run with it.)
  • “From now on, I am watching everything you do with a fine-toothed comb.” (Watching everything you do isn’t something that can occur with a fine-toothed comb.)

The purpose of an image is to fix the idea in the reader’s or hearer’s mind. If jarringly disparate images appear together, the audience is left confused or sometimes laughing, at the writer’s expense.

The following classic example comes from a speech by Boyle Roche in the Irish Parliament, delivered in about 1790: “Mr. Speaker, I smell a rat. I see him floating in the air. But mark me, sir, I will nip him in the bud.” Perhaps the supreme example of the comic misuse of metaphor occurred in the speech of a scientist who referred to “a virgin field pregnant with possibilities.”

  C. Dormant Metaphors. Dormant metaphors sometimes come alive in contexts in which the user had no intention of reviving them. In the following examples, progeny, outpouring, and behind their backs are dormant metaphors that, in most contexts, don’t suggest their literal meanings. But when they’re used with certain concrete terms, the results can be jarring—e.g.:
  • “This Note examines the doctrine set forth in Roe v. Wade and its progeny.” “Potential Fathers and Abortion,” 55 Brooklyn L. Rev. 1359, 1363 (1990). (Roe v. Wade, of course, legalized abortion.)
  • “The slayings also have generated an outpouring of hand wringing from Canada’s commentators.” Anne Swardson, “In Canada, It Takes Only Two Deaths,” Wash. Post (Nat’l Weekly ed.), 18–24 Apr. 1994, at 17. (Hand-wringing can’t be poured.)
  • “But managers at Hyland Hills have found that, for whatever reasons, more and more young skiers are smoking behind their backs. And they are worried that others are setting a bad example.” Barbara Lloyd, “Ski Area Cracks Down on Smoking,” N.Y. Times, 25 Jan. 1996, at B13. (It’s a fire hazard to smoke behind your back.)

Yet another pitfall for the unwary is the CLICHÉ-metaphor that the writer renders incorrectly, as by writing taxed to the breaking point instead of stretched to the breaking point.

Garner, Bryan. Garner’s Modern American Usage (pp. 534-535). Oxford University Press. Kindle Edition.


Matt Taibbi on The World Is Flat

Start with the title.

The book’s genesis is a conversation Friedman has with Nandan Nilekani, the CEO of Infosys. Nilekani casually mutters to Friedman: “Tom, the playing field is being leveled.” To you and me, an innocent throwaway phrase–the level playing field being, after all, one of the most oft-repeated stock ideas in the history of human interaction. Not to Friedman. Ten minutes after his talk with Nilekani, he is pitching a tent in his company van on the road back from the Infosys campus in Bangalore:

As I left the Infosys campus that evening along the road back to Bangalore, I kept chewing on that phrase: “The playing field is being leveled.”  What Nandan is saying, I thought, is that the playing field is being flattened… Flattened? Flattened? My God, he’s telling me the world is flat!

This is like three pages into the book, and already the premise is totally fucked. Nilekani said level, not flat. The two concepts are completely different. Level is a qualitative idea that implies equality and competitive balance; flat is a physical, geographic concept that Friedman, remember, is openly contrasting–ironically, as it were–with Columbus’s discovery that the world is round.

Except for one thing. The significance of Columbus’s discovery was that on a round earth, humanity is more interconnected than on a flat one. On a round earth, the two most distant points are closer together than they are on a flat earth. But Friedman is going to spend the next 470 pages turning the “flat world” into a metaphor for global interconnectedness. Furthermore, he is specifically going to use the word round to describe the old, geographically isolated, unconnected world.

“Let me… share with you some of the encounters that led me to conclude that the world is no longer round,” he says. He will literally travel backward in time, against the current of human knowledge.

To recap: Friedman, imagining himself Columbus, journeys toward India. Columbus, he notes, traveled in three ships; Friedman “had Lufthansa business class.” When he reaches India–Bangalore to be specific–he immediately plays golf. His caddy, he notes with interest, wears a cap with the 3M logo. Surrounding the golf course are billboards for Texas Instruments and Pizza Hut. The Pizza Hut billboard reads: “Gigabites of Taste.” Because he sees a Pizza Hut ad on the way to a golf course, something that could never happen in America, Friedman concludes: “No, this definitely wasn’t Kansas.”

After golf, he meets Nilekani, who casually mentions that the playing field is level. A nothing phrase, but Friedman has traveled all the way around the world to hear it. Man travels to India, plays golf, sees Pizza Hut billboard, listens to Indian CEO mutter small talk, writes 470-page book reversing the course of 2000 years of human thought. That he misattributes his thesis to Nilekani is perfect: Friedman is a person who not only speaks in malapropisms, he also hears malapropisms. Told level; heard flat. This is the intellectual version of Far Out Space Nuts, when NASA repairman Bob Denver sets a whole sitcom in motion by pressing “launch” instead of “lunch” in a space capsule. And once he hits that button, the rocket takes off.

And boy, does it take off. Predictably, Friedman spends the rest of his huge book piling one insane image on top of the other, so that by the end–and I’m not joking here–we are meant to understand that the flat world is a giant ice-cream sundae that is more beef than sizzle, in which everyone can fit his hose into his fire hydrant, and in which most but not all of us are covered with a mostly good special sauce. Moreover, Friedman’s book is the first I have encountered, anywhere, in which the reader needs a calculator to figure the value of the author’s metaphors.

God strike me dead if I’m joking about this. Judge for yourself. After the initial passages of the book, after Nilekani has forgotten Friedman and gone back to interacting with the sane, Friedman begins constructing a monstrous mathematical model of flatness. The baseline argument begins with a lengthy description of the “ten great flatteners,” which is basically a highlight reel of globalization tomahawk dunks from the past two decades: the collapse of the Berlin Wall, the Netscape IPO, the pre-Y2K outsourcing craze, and so on. Everything that would give an IBM human resources director a boner, that’s a flattener. The catch here is that Flattener #10 is new communications technology: “Digital, Mobile, Personal, and Virtual.” These technologies Friedman calls “steroids,” because they are “amplifying and turbocharging all the other flatteners.”

According to the mathematics of the book, if you add an IPac to your offshoring, you go from running to sprinting with gazelles and from eating with lions to devouring with them. Although these 10 flatteners existed already by the time Friedman wrote “The Lexus and the Olive Tree”–a period of time referred to in the book as Globalization 2.0, with Globalization 1.0 beginning with Columbus–they did not come together to bring about Globalization 3.0, the flat world, until the 10 flatteners had, with the help of the steroids, gone through their “Triple Convergence.” The first convergence is the merging of software and hardware to the degree that makes, say, the Konica Minolta Bizhub (the product featured in Friedman’s favorite television commercial) possible. The second convergence came when new technologies combined with new ways of doing business. The third convergence came when the people of certain low-wage industrial countries–India, Russia, China, among others–walked onto the playing field. Thanks to steroids, incidentally, they occasionally are “not just walking” but “jogging and even sprinting” onto the playing field.

Now let’s say that the steroids speed things up by a factor of two. It could be any number, but let’s be conservative and say two. The whole point of the book is to describe the journey from Globalization 2.0 (Friedman’s first bestselling book) to Globalization 3.0 (his current bestselling book). To get from 2.0 to 3.0, you take 10 flatteners, and you have them converge–let’s say this means squaring them, because that seems to be the idea–three times. By now, the flattening factor is about a thousand. Add a few steroids in there, and we’re dealing with a flattening factor somewhere in the several thousands at any given page of the book. We’re talking about a metaphor that mathematically adds up to a four-digit number. If you’re like me, you’re already lost by the time Friedman starts adding to this numerical jumble his very special qualitative descriptive imagery. For instance:

And now the icing on the cake, the ubersteroid that makes it all mobile: wireless. Wireless is what allows you to take everything that has been digitized, made virtual and personal, and do it from anywhere.

Ladies and gentlemen, I bring you a Thomas Friedman metaphor, a set of upside-down antlers with four thousand points: the icing on your uber-steroid-flattener-cake!

Let’s speak Friedmanese for a moment and examine just a few of the notches on these antlers (Friedman, incidentally, measures the flattening of the world in notches, i.e. “The flattening process had to go another notch”; I’m not sure where the notches go in the flat plane, but there they are.) Flattener #1 is actually two flatteners, the collapse of the Berlin Wall and the spread of the Windows operating system. In a Friedman book, the reader naturally seizes up in dread the instant a suggestive word like “Windows” is introduced; you wince, knowing what’s coming, the same way you do when Leslie Nielsen orders a Black Russian. And Friedman doesn’t disappoint. His description of the early 90s:

The walls had fallen down and the Windows had opened, making the world much flatter than it had ever been–but the age of seamless global communication had not yet dawned.
How the fuck do you open a window in a fallen wall? More to the point, why would you open a window in a fallen wall? Or did the walls somehow fall in such a way that they left the windows floating in place to be opened?

Four hundred and seventy-three pages of this, folks. Is there no God?

© 2012 New York Press All rights reserved.

Posted in Academic writing, Writing

Academic Writing Issues #5 — Failing to Use Dynamic Verbs

Many people have complained that academic writers are addicted to the passive voice, doing anything to avoid using the first person:  “Data were gathered.”  I wonder who did that?  But in some ways a bigger problem is that we refuse to use the kind of dynamic verbs that can energize our stories and drive the argument forward.  Below is a lovely piece by Constance Hale, originally published in 2012 as part of Draft, the New York Times series on writing.  In it she explains the difference between static verbs and power verbs.  Yes, she says, static verbs have their uses; but when we rely too heavily on them, we drain all energy, urgency, and personality from our authorial voices.  We can also end up lulling our readers to sleep.

She gives us some excellent examples of how we can use the full array of verbs at our disposal to tell compelling, nuanced, and engaging stories.  Enjoy.

Here’s a link to the original version.


New York Times

APRIL 16, 2012, 9:00 PM

Make-or-Break Verbs


Draft is a series about the art and craft of writing.

This is the third in a series of writing lessons by the author.

A sentence can offer a moment of quiet, it can crackle with energy or it can just lie there, listless and uninteresting.

What makes the difference? The verb.

Verbs kick-start sentences: Without them, words would simply cluster together in suspended animation. We often call them action words, but verbs also can carry sentiments (love, fear, lust, disgust), hint at cognition (realize, know, recognize), bend ideas together (falsify, prove, hypothesize), assert possession (own, have) and conjure existence itself (is, are).

Fundamentally, verbs fall into two classes: static (to be, to seem, to become) and dynamic (to whistle, to waffle, to wonder). (These two classes are sometimes called “passive” and “active,” and the former are also known as “linking” or “copulative” verbs.) Static verbs stand back, politely allowing nouns and adjectives to take center stage. Dynamic verbs thunder in from the wings, announcing an event, producing a spark, adding drama to an assembled group.

Static Verbs
Static verbs themselves fall into several subgroups, starting with what I call existential verbs: all the forms of to be, whether the present (am, are, is), the past (was, were) or the other more vexing tenses (is being, had been, might have been). In Shakespeare’s “Hamlet,” the Prince of Denmark asks, “To be, or not to be?” when pondering life-and-death questions. An aging King Lear uses both is and am when he wonders about his very identity:

“Who is it that can tell me who I am?”

Jumping ahead a few hundred years, Henry Miller echoes Lear when, in his autobiographical novel “Tropic of Cancer,” he wanders in Dijon, France, reflecting upon his fate:

“Yet I am up and about, a walking ghost, a white man terrorized by the cold sanity of this slaughter-house geometry. Who am I? What am I doing here?”

Drawing inspiration from Miller, we might think of these verbs as ghostly verbs, almost invisible. They exist to call attention not to themselves, but to other words in the sentence.

Another subgroup is what I call wimp verbs (appear, seem, become). Most often, they allow a writer to hedge (on an observation, description or opinion) rather than commit to an idea: Lear appears confused. Miller seems lost.

Finally, there are the sensing verbs (feel, look, taste, smell and sound), which have dual identities: They are dynamic in some sentences and static in others. If Miller said I feel the wind through my coat, that’s dynamic. But if he said I feel blue, that’s static.

Static verbs establish a relationship of equals between the subject of a sentence and its complement. Think of those verbs as quiet equals signs, holding the subject and the predicate in delicate equilibrium. For example, I, in the subject, equals feel blue in the predicate.

Power Verbs
Dynamic verbs are the classic action words. They turn the subject of a sentence into a doer in some sort of drama. But there are dynamic verbs — and then there are dynamos. Verbs like has, does, goes, gets and puts are all dynamic, but they don’t let us envision the action. The dynamos, by contrast, give us an instant picture of a specific movement. Why have a character go when he could gambol, shamble, lumber, lurch, sway, swagger or sashay?

Picking pointed verbs also allows us to forgo adverbs. Many of these modifiers merely prop up a limp verb anyway. Strike speaks softly and insert whispers. Erase eats hungrily in favor of devours. And whatever you do, avoid adverbs that mindlessly repeat the sense of the verb, as in circle around, merge together or mentally recall.

This sentence from “Tinkers,” by Paul Harding, shows how taking time to find the right verb pays off:

“The forest had nearly wicked from me that tiny germ of heat allotted to each person….”

Wick is an evocative word that nicely gets across the essence of a more commonplace verb like sucked or drained.

Sportswriters and announcers must be masters of dynamic verbs, because they endlessly describe the same thing while trying to keep their readers and listeners riveted. We’re not just talking about a player who singles, doubles or homers. We’re talking about, as announcers described during the 2010 World Series, a batter who “spoils the pitch” (hits a foul ball), a first baseman who “digs it out of the dirt” (catches a bad throw) and a pitcher who “scatters three singles through six innings” (keeps the hits to a minimum).

Imagine the challenge of writers who cover races. How can you write about, say, all those horses hustling around a track in a way that makes a single one of them come alive? Here’s how Laura Hillenbrand, in “Seabiscuit,” described that horse’s winning sprint:

“Carrying 130 pounds, 22 more than Wedding Call and 16 more than Whichcee, Seabiscuit delivered a tremendous surge. He slashed into the hole, disappeared between his two larger opponents, then burst into the lead… Seabiscuit shook free and hurtled into the homestretch alone as the field fell away behind him.”

Even scenes that at first blush seem quiet can bristle with life. The best descriptive writers find a way to balance nouns and verbs, inertia and action, tranquillity and turbulence. Take Jo Ann Beard, who opens the short story “Cousins” with static verbs as quiet as a lake at dawn:

“Here is a scene. Two sisters are fishing together in a flat-bottomed boat on an olive green lake….”

When the world of the lake starts to awaken, the verbs signal not just the stirring of life but crisp tension:

“A duck stands up, shakes out its feathers and peers above the still grass at the edge of the water. The skin of the lake twitches suddenly and a fish springs loose into the air, drops back down with a flat splash. Ripples move across the surface like radio waves. The sun hoists itself up and gets busy, laying a sparkling rug across the water, burning the beads of dew off the reeds, baking the tops of our mothers’ heads.”

Want to practice finding dynamic verbs? Go to a horse race, a baseball game or even a walk-a-thon. Find someone to watch intently. Describe what you see. Or, if you’re in a quiet mood, sit on a park bench, in a pew or in a boat on a lake, and then open your senses. Write what you see, hear and feel. Consider whether to let your verbs jump into the scene or stand by patiently.

Verbs can make or break your writing, so consider them carefully in every sentence you write. Do you want to sit your subject down and hold a mirror to it? Go ahead, use is. Do you want to plunge your subject into a little drama? Go dynamic. Whichever you select, give your readers language that makes them eager for the next sentence.

Constance Hale, a journalist based in San Francisco, is the author of “Sin and Syntax” and the forthcoming “Vex, Hex, Smash, Smooch.” She covers writing and the writing life at

Posted in Academic writing, Writing

Academic Writing Issues #4 — Failing to Listen for the Music

All too often, academic writing is tone deaf to the music of language.  Just as we tend to consider unprofessional any writing that is playful, engaging, funny, or moving, so too with writing that is musical.  A professional monotone is the scholar’s voice of choice.  This stance leads to two big problems.  One is that it puts off the reader, exactly the person we should be trying to draw into our story.  Why so easily abandon one of the great tools of effective rhetoric?  Another is that it alienates academic writers from their own words, forcing them to adopt the generic voice of the pedant rather than the particular voice of the person who is the author.

For better or for worse — usually for worse — we as scholars are contributing to the literary legacy of our culture, so why not do so in a way that sometimes sings, or at least doesn’t end on a false note?  Speaking of which, consider a quote from one of the masters of English prose, Abraham Lincoln, from the last paragraph of his first inaugural address.  Picture him talking at the brink of the nation’s most terrible war, and then listen to his melodic phrasing:

I am loath to close. We are not enemies, but friends. We must not be enemies. Though passion may have strained, it must not break our bonds of affection. The mystic chords of memory, stretching from every battlefield, and patriot grave, to every living heart and hearthstone, all over this broad land, will yet swell the chorus of the Union, when again touched, as surely they will be, by the better angels of our nature.

In the English language, there are two rhetorical storehouses that for centuries have grounded writers like Lincoln — Shakespeare and the King James Bible.  Both are compulsively quotable, and both provide models for how to combine meaning and music in the way we write.

Take a look at this lovely piece by Ann Wroe, an appreciation of the music of the King James Bible, which makes all the other translations sound tone deaf.

Published in the Economist

March 30, 2011


By Ann Wroe


The King James Bible is 400 years old this year, and the music of its sentences is still ringing out. But what exactly made it so good? Ann Wroe gives chapter and verse…

Like many Catholics, I came late to the King James Bible. I was schooled in the flat Knox version, and knew the beautiful, musical Latin Vulgate well before I was introduced to biblical beauty in my own tongue. I was around 20, sitting in St John’s College Chapel in Oxford in the glow of late winter candlelight, though that fond memory may be embellished a little. A reading from the King James was given at Evensong. The effect was extraordinary: as if I had suddenly found, in the house of language I had loved and explored all my life, a hidden central chamber whose pillars and vaulting, rhythm and strength had given shape to everything around them.

The King James now breathes venerability. Even online it calls up crammed, black, indented fonts, thick rag paper and rubbed leather bindings—with, inside the heavy cover, spidery lists of family ancestors begotten long ago. To read it is to enter a sort of communion with everyone who has read or listened to it before, a crowd of ghosts: Puritan women in wide white collars, stern Victorian fathers clasping their canes, soldiers muddy from killing fields, serving girls in Sunday best, and every schoolboy whose inky fingers have burrowed to 2 Kings 18: 27, where Rabshakeh says, “Hath my master not sent me to the men which sit on the wall, that they may eat their own dung, and drink their own piss with you?”

When it appeared, moreover, it was already familiar, in the sense that it borrowed freely from William Tyndale’s great translation of a century before. Deliberately, and with commendable modesty, the members of King James’s translation committees said they did not seek “to make a new translation, nor yet to make of a bad one a good one, but to make a good one better”. What exactly they borrowed and where they improved is a detective job for scholars, not for this piece. So where it mentions “translators” Tyndale is included among them, the original and probably the best; for this book still breathes him, as much as them.

In both his time and theirs this was a modern translation, the living language of streets, docks, workshops, fields. Ancient Israel and Jacobean England went easily together. The original writers of the books of the Old Testament knew about pruning trees, putting on armour, drawing water, the readying of horses for battle and the laying of stones for a wall; and in the King James all these activities are still evidently familiar, the jargon easy, and the language light. “Yet man is born unto trouble, as the sparks fly upward”, runs the wonderful phrase in Job 5: 7, and we are at a blacksmith’s door in an English village, watching hammer strike anvil, or kicking a rolling log on our own cottage hearth. “Hard as a piece of the nether millstone” brings the creak of a 17th-century mill, as well as the sweat of more ancient hands. In both worlds, “seedtime and harvest” are real seasons. This age-old continuity comforts us, even though we no longer know or share it.

By the same token, the reader of the King James lives vicariously in a world of solid certainties. There is nothing quaint here about a candle or a flagon, or money in a tied leather purse; nothing arcane about threads woven on a handloom, mire in the streets or the snuffle of swine outside the town gates. This is life. Everything is closely observed, tactile, and has weight. When Adam and Eve sew fig-leaves together to cover their shame they make “aprons” (Genesis 3: 7), leather-thick and workmanlike, the sort a cobbler might wear. Even the colours invoked in the King James—crimson, scarlet, purple—are nouns rather than adjectives (“though your sins be as scarlet”, Isaiah 1: 18), sold by the block as solid powder or heaped glossy on a brush. And God’s intervention in this world, whether as artist, builder, woodsman or demolition man, is as physical and real as the materials he works with.

English, of course, was richer in those days, full of neesings and axletrees, habergeons and gazingstocks, if indeed a gazingstock has a plural. Modern skin has spots: the King James gives us botches, collops and blains, horridly and lumpily different. It gives us curious clutter, too, a whole storehouse of tools and knick-knacks whose use is now half-forgotten—snuff-dishes, besoms, latchets and gins, and fashions seemingly more suited to a souped-up motor than to the daughters of Jerusalem:

The chains, and the bracelets, and the mufflers,
The bonnets, and the ornaments of the legs, and the
headbands, and the tablets, and the earrings,
The rings, and nose jewels,
The changeable suits of apparel, and the mantles, and the
wimples, and the crisping pins…  (Isaiah 3: 19-22)

“Crisping pins” have now been swallowed up (in the Good News version) in “fine robes, gowns, cloaks and purses”. And so we have lost that sharp, momentary image of varnished nails pushing pins into unruly frizzes of hair, and lipsticked mouths pursed in concentration, as the daughters of Zion prepare to take on the town. These women are “froward”, a word that has been lost now, but which haunts the King James like a strutting shadow with a shrill, hectoring voice. Few lines are longer-drawn out, freighted with sighs, than these from Proverbs 27:15: “A continual dropping in a very rainy day and a contentious woman are alike.”

Other characters cause trouble, too. In the King James, people are aggressively physical. They shoot out their lips, stretch forth their necks and wink with their eyes; they open their mouths wide and say “Aha, aha”, wagging their heads, in ways that would get them arrested in Wal-Mart. They do not simply refuse to listen, but pull away their shoulders and stop their ears; they do not merely trip, but dash their feet against stones. Sex is peremptory: men “know” women, lie with them, “go in unto” them, as brisk as the women are available. “Begat” is perhaps the word the King James is best known for, list after list of begetting. The curt efficiency of the word (did no one suggest “fathered”?) makes the erotic languor of the Song of Solomon, with its lilies and heaps of wheat, shine out like a jewel.

The world in which these things happen has a particular look and feel that comes not just from the original authors, but often from the translators and the words they favoured. Mystery colours much of it. They like “lurking places of the villages” (Psalms 10: 8), “secret places of the stairs” (Song of Solomon 2:14), and things done “privily”, or “close”. God hides in “pavilions” that seem as mysterious as the shifting dunes of the desert, or the white flapping tents of the clouds. The word “creeping” is used everywhere to suggest that something lives; very little moves fast here, and heads and bellies are bent close to the earth. Even flying is slow, through the thick darkness. People go forth abroad, and waters come down from above, with considerable effort, as though through slowly opening layers. Elements are divided into their constituent parts: the waters of the sea, a flame of fire. A rainbow curves brightly away from the astonished, struggling observer, “in sight like unto an emerald” (Revelation 4: 3). But the grandeur of the language gives momentousness even to the corner of a room, a drain running beside a field, a patch of abandoned ground:

I went by the field of the slothful, and by the vineyard of the
man void of understanding;
And lo, it was all grown over with thorns, and nettles had
covered the face thereof, and the stone wall thereof was
broken down.
Then I saw, and considered it well; I looked upon it, and
received instruction.
Yet a little sleep, a little slumber, a little folding of the hands
to sleep…  (Proverbs 24: 30-33)

In such places shepherds “abide” with their sheep, motionless as figures made of stone. This landscape is carved broad and deep, like a woodcut, with sharply folded mountains, thick woven water, stylised trees and cities piled and blocked as with children’s bricks (all the better to be scattered by God later, no stone upon another). A sense of desolation haunts these streets and gates, echoing and shelterless places in which even Wisdom runs wild and cries. Yet within them sometimes we find a scene paced as tensely as in any modern novel, as when a young man in Proverbs steps out,

Passing through the street near her corner; and he went the
way to her house,
In the twilight, in the evening, in the black and dark night:
And, behold, there met him a woman with the attire of an
harlot, and subtil of heart.  (Proverbs 7: 8-10)

Just as stained glass shines more brightly for being set in stone, so the King James gains in splendour by comparison with the Revised Standard, Good News, New International and Heaven-knows-what versions that have come later. Thus John’s magnificent “The Word was with God, and the Word was God” (John 1: 1), has become “The Word was already existing”, scholarship usurping splendour. That lilting line in Genesis (1: 8), “And the evening and the morning were the second day” (note that second “the”, so apparently expendable, yet so necessary to the music) becomes “There was morning, and there was evening”, a broken-backed crawl. The fig-leaf aprons are now reduced to “coverings for themselves”. And the garden planted “eastward in Eden” (Genesis 2: 8), another of the King James’s myriad and scarcely conscious touches of grace, has become “to the east, in Eden”, a place from which the magic has drained away.

Everywhere modern translations are more specific, doubtless more accurate, but always less melodious. The King James, deeply scholarly as it is, displaying the best learning of the day, never forgets that the word of God must be heard, understood and retained by the simple. For them—children repeating after the teacher, workers fidgeting in their best clothes, Tyndale’s own whistling ploughboy—rhythm and music are the best aids to remembering. This is language not for silent study but for reading and declaiming aloud. It needs to work like poetry, and poetry it is.

The King James is famous for its monosyllables, great drumbeats of description or admonition: “And God said, Let there be light: and there was light” (Genesis 1: 3); “The fool hath said in his heart, There is no God” (Psalms 14: 1); “In the sweat of thy face shalt thou eat bread” (Genesis 3: 19). These are fundaments, bases, bricks to build with. Yet its rhythms are also far cleverer than that, endlessly and subtly adjusted. Typically, a King James sentence has two parts broken by a pause around the mid-point, with the first part slightly more declaratory and the second slightly more explanatory: the stronger syllables massed towards the beginning, the weaker crowding softly towards their end. “Surely there is a vein for the silver, and a place for gold where they fine it” (Job 28: 1); “He buildeth his house as a moth, and as a booth that the keeper maketh” (Job 27: 18). But sometimes the order is inverted, and the words too: “As the bird by wandering, as the swallow by flying, so the curse causeless shall not come” (Proverbs 26: 2); “Out of the south cometh the whirlwind: and cold out of the north” (Job 37: 9). Perhaps the whirlwind itself has disordered things. This contrapuntal system even allows for a bit of bathos and fun: “Divers weights are an abomination unto the lord; and a false balance is not good” (Proverbs 20: 23).

Certain devices were available then which modern writers may well envy. The old English language allowed rhythms and syncopations that cannot be employed any more. Consider the use of “even”, dropped in with an almost casual flourish: “And the stars of heaven fell unto the earth, even as a fig tree casteth her untimely figs, when she is shaken of a mighty wind” (Revelation 6: 13). Or “neither”, used in the same way: “Many waters cannot quench love, neither can the floods drown it” (Song of Solomon 8: 7). Modern translations separate those two thoughts, but the beauty lies in their conjunction with a word as light as air.

Undoubtedly the King James has been enhanced for us by the music that now curls round it. “For unto us a child is born” (Isaiah 9: 6) can’t now be read without Handel’s tripping chorus, or “Man that is born of a woman” without Purcell’s yearning melancholy (“He cometh forth like a flower, and is cut down” Job 14: 2). Even “To every thing there is a season”, from Ecclesiastes (3: 1), is now overlaid with the nasal, gently stoned tones of Simon & Garfunkel. Yet the King James also lured these musicians in the beginning, snaring them with stray lines that were already singing. “Stay me with flagons, comfort me with apples, for I am sick of love” (Song of Solomon 2: 5). “Thou hast heard me from the horns of the unicorns” (Psalms 22: 21). “The heavens declare the glory of God; and the firmament sheweth his handywork” (Psalms 19: 1). “I am a brother to dragons, and a companion to owls” (Job 30: 29). Or this, also from the Book of Job, possibly the most beautiful of all the Bible’s books—a passage that flows from one astonishingly random and sudden question, “Hast thou entered into the treasures of the snow?” (Job 38:22):

Hath the rain a father? Or who hath begotten the drops of
dew?
Out of whose womb came the ice? And the hoary frost of
heaven, who hath gendered it?
The waters are hid as with a stone, and the face of the deep
is frozen.
Canst thou bind the sweet influences of Pleiades, or loose
the bands of Orion?  (Job 38:28-31)

The beauty of this is inherent, deep in the original mind and eye that formed it. But again, the translators have made choices here: “hid” rather than “hidden”, “gendered” rather than “engendered”, all for the very best rhythmic reasons.  We can trust them; we know that they would certainly have employed “hidden” and “engendered” if the music called for it. Unfailingly, their ear is sure. And if we suspect that rhythm sometimes matters more than meaning, that is fine too: it leaves space for the sacred and numinous, that which cannot be grasped, that which lies beyond all words, to move within the lines.

That subtle notion of divinity, however, is seldom uppermost in the Old Testament. This God smites a lot. Three close-printed columns of Young’s Concordance are filled with his smiting, lightly interspersed with other people’s. Mere men use hand weapons, bows and arrows, or, with Jacobean niftiness, “the edge of the sword”; but the God of the King James simply smites, whether Moabites or Jebusites, vines or rocks or first-born, like a broad, bright thunderbolt. No other word could be so satisfactory, the opening consonants clenched like a fist that propels God’s anger down, and in, and on. We know that these are tough workman’s hands: this is the God who “stretcheth out the north over the empty place, and hangeth the earth upon nothing” (Job 26: 7). Smiting must have survived after the King James; but perhaps it was now so soft with over-use, so bruised, that it faded out of the language.

This God surprises, too. He “hisses unto” people, perhaps a cross between a whistle and a whoop, as if marshalling a yard of hens. God goes before, “preventing” us; he whips off our disguises, our clothes or our leaves, “discovering” us, and the shock of the original meanings of those words alerts us to the origins of power itself. “Who can stay the bottles of heaven?” cries a voice in Job 38: 37; and we suspect God again, like a teenage yob this time, lurking in his pavilion of cloud.

At moments like this it also seems that the translators themselves might be mystified, fingers scratching neat beards while they survey the incomprehensible words. Did they really understand, for example, that odd medical diagnosis in Proverbs: “The blueness of a wound cleanseth away evil; so do stripes the inward parts of the belly” (20: 30)? Or these lines from the last chapter of Ecclesiastes, the mystifying staple of so many funerals?

…they shall be afraid of that which is high, and fears shall
be in the way, and the almond tree shall flourish, and
the grasshopper shall be a burden, and desire shall fail;
because man goeth to his long home, and the mourners
go about the streets:
Or ever the silver cord be loosed, or the golden bowl be
broken, or the pitcher be broken at the fountain, or the
wheel broken at the cistern.
Then shall the dust return to the earth as it was… (Ecclesiastes 12: 5-7)

These are surreal images, unlikely litter in the fields and streets; but they are made all the more potent by the heavy phrasing, the inevitability of the building lines, and the conscious repetition, broken, broken, broken. We know our translators have plenty of synonyms up their sleeves. They choose not to use them. When these lines are read, though we barely know what they mean, they spell despair. And they are meant to, as the man in the pulpit in a moment reminds us:

Vanity of vanities, saith the preacher; all is vanity (Ecclesiastes 12: 8)

Yet often, too, a spirit of playfulness seems to be at work. Consider, lastly, the rain. This is ordinary rain most of the time, malqosh in the Hebrew, and all modern translations make it so. But in the King James we also have “the latter rain”, and “small rain”, and we are alerted to their delicacy and difference. Small rain (“as the small rain upon the tender herb” Deuteronomy 32: 2) is presumably the sort that blows in the air, that makes no imprint on a puddle; the Irish would call it a soft day. And latter rain, perhaps, is the sort that skulks at the end of an afternoon, or suddenly cascades down in an autumn gust, or patters for a desultory few minutes after a day of approaching thunder—and then we open our mouths wide to it, laughing, grateful, as for the word of God.

Ann Wroe is obituaries and briefings editor of The Economist and author of “Being Shelley”