Posted in Academic writing, History, Writing

On Writing: How the King James Bible Shaped the English Language and Still Teaches Us How to Write

When you’re interested in improving your writing, it’s a good idea to have some models to work from.  I’ve presented some of my favorite models in this blog.  These have included a number of examples of good writing by both academics (Max Weber, E.P. Thompson, Jim March, and Mary Metz) and nonacademics (Frederick Douglass, Elmore Leonard).

Today I want to explore one of the two most influential forces in shaping the English language over the years:  the King James Bible.  (The other, of course, is Shakespeare.)  Earlier I presented one analysis, by Ann Wroe, which focused on the thundering sound of the prose in this extraordinary text.  Here I draw on two other pieces of writing that explore the powerful model this Bible provides us all for how to write in English with power and grace.  One is by Adam Nicolson, who wrote a book on the subject (God’s Secretaries: The Making of the King James Bible).  The other, which I reprint in full at the end of this post, is by Charles McGrath.

The impulse to produce a bible in English arose with the English reformation, as a Protestant vernacular alternative to the Latin version that was canonical in the Catholic church.  The text was commissioned in 1604 by King James, who succeeded Elizabeth I after her long reign, and it was constructed by a committee of 54 scholars.  They went back to the original texts in Hebrew and Greek, but they drew heavily on earlier English translations. 

The foundational translation was written by William Tyndale, who was executed for heresy in Antwerp in 1536, and this was reworked into what became known as the Geneva Bible by English Calvinists living in Switzerland.  One aim of the committee was to produce a version more compatible with the English and Scottish versions of the faith, but for James the primary impetus was to remove the anti-royalist tone embedded within the earlier text.  Recent scholars have concluded that 84% of the words in the King James New Testament and 76% in the Old Testament are Tyndale’s.

As Nicholson puts it, the language of the King James Bible is an amazing mix — “majestic but intimate, the voice of the universe somehow heard in the innermost part of the ear.”

You don’t have to be a Christian to hear the power of those words—simple in vocabulary, cosmic in scale, stately in their rhythms, deeply emotional in their impact. Most of us might think we have forgotten its words, but the King James Bible has sewn itself into the fabric of the language. If a child is ever the apple of her parents’ eye or an idea seems as old as the hills, if we are at death’s door or at our wits’ end, if we have gone through a baptism of fire or are about to bite the dust, if it seems at times that the blind are leading the blind or we are casting pearls before swine, if you are either buttering someone up or casting the first stone, the King James Bible, whether we know it or not, is speaking through us. The haves and have-nots, heads on plates, thieves in the night, scum of the earth, best until last, sackcloth and ashes, streets paved in gold, and the skin of one’s teeth: All of them have been transmitted to us by the translators who did their magnificent work 400 years ago.

Wouldn’t it be lovely if we academics could write in a way that sticks in people’s minds for 400 years?  Well, maybe that’s a bit too much to hope for.  But even if we can’t aspire to be epochally epigrammatic, there are still lessons we can learn from Tyndale and the Group of 54.

One such lesson is the power of simplicity.  Too often scholars feel the compulsion to gussy up their language with jargon and Latinate constructions in the name of professionalism.  The unspoken assumption is that if any idiot can understand what you’re saying, then you’re not being a serious scholar.  But the magic of the King James Bible is that it uses simple Anglo-Saxon words to make the most profound statements.  Listen to this passage from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favor to men of skill, but time and chance happeneth to them all.

Or this sentence from Paul’s letter to the Philippians:

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Or the stunning opening line of the Gospel of John:

In the beginning was the Word, and the Word was with God, and the Word was God.

This is a text that can speak clearly to the untutored while at the same time elevating them to a higher plane.  For us it’s a model for how to match simplicity with profundity.


Why the King James Bible Endures

By CHARLES McGRATH

The King James Bible, which was first published 400 years ago next month, may be the single best thing ever accomplished by a committee. The Bible was the work of 54 scholars and clergymen who met over seven years in six nine-man subcommittees, called “companies.” In a preface to the new Bible, Miles Smith, one of the translators and a man so impatient that he once walked out of a boring sermon and went to the pub, wrote that anything new inevitably “endured many a storm of gainsaying, or opposition.” So there must have been disputes — shouting; table pounding; high-ruffed, black-gowned clergymen folding their arms and stomping out of the room — but there is no record of them. And the finished text shows none of the PowerPoint insipidness we associate with committee-speak or with later group translations like the 1961 New English Bible, which T.S. Eliot said did not even rise to “dignified mediocrity.” Far from bland, the King James Bible is one of the great masterpieces of English prose.

The issue of how, or even whether, to translate sacred texts was a fraught one in those days, often with political as well as religious overtones, and it still is. The Roman Catholic Church, for instance, recently decided to retranslate the missal used at Mass to make it more formal and less conversational. Critics have complained that the new text is awkward and archaic, while its defenders (some of whom probably still prefer the Mass in Latin) insist that’s just the point — that language a little out of the ordinary is more devotional and inspiring. No one would ever say that the King James Bible is an easy read. And yet its very oddness is part of its power.

From the start, the King James Bible was intended to be not a literary creation but rather a political and theological compromise between the established church and the growing Puritan movement. What the king cared about was clarity, simplicity, doctrinal orthodoxy. The translators worked hard on that, going back to the original Hebrew, Greek and Aramaic, and yet they also spent a lot of time tweaking the English text in the interest of euphony and musicality. Time and again the language seems to slip almost unconsciously into iambic pentameter — this was the age of Shakespeare, commentators are always reminding us — and right from the beginning the translators embraced the principles of repetition and the dramatic pause: “In the beginning God created the Heaven, and the Earth. And the earth was without forme, and void, and darkenesse was upon the face of the deepe: and the Spirit of God mooved upon the face of the waters.”

The influence of the King James Bible is so great that the list of idioms from it that have slipped into everyday speech, taking such deep root that we use them all the time without any awareness of their biblical origin, is practically endless: sour grapes; fatted calf; salt of the earth; drop in a bucket; skin of one’s teeth; apple of one’s eye; girded loins; feet of clay; whited sepulchers; filthy lucre; pearls before swine; fly in the ointment; fight the good fight; eat, drink and be merry.

But what we also love about this Bible is its strangeness — its weird punctuation, odd pronouns (as in “Our Father, which art in heaven”), all those verbs that end in “eth”: “In the morning it flourisheth, and groweth up; in the evening it is cut downe, and withereth.” As Robert Alter has demonstrated in his startling and revealing translations of the Psalms and the Pentateuch, the Hebrew Bible is even stranger, and in ways that the King James translators may not have entirely comprehended, and yet their text performs the great trick of being at once recognizably English and also a little bit foreign. You can hear its distinctive cadences in the speeches of Lincoln, the poetry of Whitman, the novels of Cormac McCarthy.

Even in its time, the King James Bible was deliberately archaic in grammar and phraseology: an expression like “yea, verily,” for example, had gone out of fashion some 50 years before. The translators didn’t want their Bible to sound contemporary, because they knew that contemporaneity quickly goes out of fashion. In his very useful guide, “God’s Secretaries: The Making of the King James Bible,” Adam Nicolson points out that when the Victorians came to revise the King James Bible in 1885, they embraced this principle wholeheartedly, and like those people who whack and scratch old furniture to make it look even more ancient, they threw in a lot of extra Jacobeanisms, like “howbeit,” “peradventure,” “holden” and “behooved.”

This is the opposite, of course, of the procedure followed by most new translations, starting with Good News for Modern Man, a paperback Bible published by the American Bible Society in 1966, whose goal was to reflect not the language of the Bible but its ideas, rendering them into current terms, so that Ezekiel 23:20, for example (“For she doted upon their paramours, whose flesh is as the flesh of asses, and whose issue is like the issue of horses”) becomes “She was filled with lust for oversexed men who had all the lustfulness of donkeys or stallions.”

There are countless new Bibles available now, many of them specialized: a Bible for couples, for gays and lesbians, for recovering addicts, for surfers, for skaters and skateboarders, not to mention a superheroes Bible for children. They are all “accessible,” but most are a little tone-deaf, lacking in grandeur and majesty, replacing “through a glasse, darkly,” for instance, with something along the lines of “like a dim image in a mirror.” But what this modernizing ignores is that the most powerful religious language is often a little elevated and incantatory, even ambiguous or just plain hard to understand. The new Catholic missal, for instance, does not seem to fear the forbidding phrase, replacing the statement that Jesus is “one in being with the Father” with the more complicated idea that he is “consubstantial with the Father.”

Not everyone prefers a God who talks like a pal or a guidance counselor. Even some of us who are nonbelievers want a God who speaketh like — well, God. The great achievement of the King James translators is to have arrived at a language that is both ordinary and heightened, that rings in the ear and lingers in the mind. And that all 54 of them were able to agree on every phrase, every comma, without sounding as gassy and evasive as the Financial Crisis Inquiry Commission, is little short of amazing, in itself proof of something like divine inspiration.

 

Posted in Economic growth, Education policy, Higher Education

Hausmann: The Education Myth

In this post I reprint a piece by Ricardo Hausmann (an economist at Harvard’s Kennedy School), which was published in Project Syndicate in 2015. Here’s a link to the original.  If you can’t get past the paywall, here’s a link to a PDF.

What I like about this piece is the way Hausmann challenges a central principle that guides educational policy, both domestic and international.  This is the belief that education is the central engine of economic growth.  According to this credo, increasing education is how we can increase productivity, GDP, and standard of living.  Hausmann shows, however, that the impact of education on economic growth is a lot less than promised.  Other factors appear to be more important than education in expanding economies, so investing in those factors may be a lot more efficient than the costly process of increasing access to tertiary education.

As the former chief economist at the Inter-American Development Bank and the head of the Harvard Growth Lab, he seems to know something about this subject.  See what you think.


The Education Myth

TIRANA – In an era characterized by political polarization and policy paralysis, we should celebrate broad agreement on economic strategy wherever we find it. One such area of agreement is the idea that the key to inclusive growth is, as then-British Prime Minister Tony Blair put it in his 2001 reelection campaign, “education, education, education.” If we broaden access to schools and improve their quality, economic growth will be both substantial and equitable.

As the Italians would say: magari fosse vero. If only it were true. Enthusiasm for education is perfectly understandable. We want the best education possible for our children, because we want them to have a full range of options in life, to be able to appreciate its many marvels and participate in its challenges. We also know that better educated people tend to earn more.

Education’s importance is incontrovertible – teaching is my day job, so I certainly hope it is of some value. But whether it constitutes a strategy for economic growth is another matter. What most people mean by better education is more schooling; and, by higher-quality education, they mean the effective acquisition of skills (as revealed, say, by the test scores in the OECD’s standardized PISA exam). But does that really drive economic growth?

In fact, the push for better education is an experiment that has already been carried out globally. And, as my Harvard colleague Lant Pritchett has pointed out, the long-term payoff has been surprisingly disappointing.

In the 50 years from 1960 to 2010, the global labor force’s average time in school essentially tripled, from 2.8 years to 8.3 years. This means that the average worker in a median country went from less than half a primary education to more than half a high school education.

How much richer should these countries have expected to become? In 1965, France had a labor force that averaged less than five years of schooling and a per capita income of $14,000 (at 2005 prices). In 2010, countries with a similar level of education had a per capita income of less than $1,000.

In 1960, countries with an education level of 8.3 years of schooling were 5.5 times richer than those with 2.8 years of schooling. By contrast, countries that had increased their education from 2.8 years of schooling in 1960 to 8.3 years of schooling in 2010 were only 167% richer. Moreover, much of this increase cannot possibly be attributed to education, as workers in 2010 had the advantage of technologies that were 50 years more advanced than those in 1960. Clearly, something other than education is needed to generate prosperity.

As is often the case, the experience of individual countries is more revealing than the averages. China started with less education than Tunisia, Mexico, Kenya, or Iran in 1960, and had made less progress than them by 2010. And yet, in terms of economic growth, China blew all of them out of the water. The same can be said of Thailand and Indonesia vis-à-vis the Philippines, Cameroon, Ghana, or Panama. Again, the fast growers must be doing something in addition to providing education.

The experience within countries is also revealing. In Mexico, the average income of men aged 25-30 with a full primary education differs by more than a factor of three between poorer municipalities and richer ones. The difference cannot possibly be related to educational quality, because those who moved from poor municipalities to richer ones also earned more.

And there is more bad news for the “education, education, education” crowd: Most of the skills that a labor force possesses were acquired on the job. What a society knows how to do is known mainly in its firms, not in its schools. At most modern firms, fewer than 15% of the positions are open for entry-level workers, meaning that employers demand something that the education system cannot – and is not expected to – provide.

When presented with these facts, education enthusiasts often argue that education is a necessary but not a sufficient condition for growth. But in that case, investment in education is unlikely to deliver much if the other conditions are missing. After all, though the typical country with ten years of schooling had a per capita income of $30,000 in 2010, per capita income in Albania, Armenia, and Sri Lanka, which have achieved that level of schooling, was less than $5,000. Whatever is preventing these countries from becoming richer, it is not lack of education.

A country’s income is the sum of the output produced by each worker. To increase income, we need to increase worker productivity. Evidently, “something in the water,” other than education, makes people much more productive in some places than in others. A successful growth strategy needs to figure out what this is.

Make no mistake: education presumably does raise productivity. But to say that education is your growth strategy means that you are giving up on everyone who has already gone through the school system – most people over 18, and almost all over 25. It is a strategy that ignores the potential that is in 100% of today’s labor force, 98% of next year’s, and a huge number of people who will be around for the next half-century. An education-only strategy is bound to make all of them regret having been born too soon.

This generation is too old for education to be its growth strategy. It needs a growth strategy that will make it more productive – and thus able to create the resources to invest more in the education of the next generation. Our generation owes it to theirs to have a growth strategy for ourselves. And that strategy will not be about us going back to school.

Ricardo Hausmann, a former minister of planning of Venezuela and former Chief Economist at the Inter-American Development Bank, is a professor at Harvard’s John F. Kennedy School of Government and Director of the Harvard Growth Lab.

Posted in Academic writing, Writing

Elmore Leonard’s Master Class on Writing a Scene

As you may have figured out by now, I’m a big fan of Elmore Leonard.  I wrote an earlier post about the deft way he leads you into a story and introduces a character on the very first page of a book.  He never gives his readers fits the way we academic writers do ours, by making them plow through half a paper before they finally discover its point.

Here I want to show you one of the best scenes Leonard ever wrote — and he wrote a lot of them.  It’s from the book Be Cool, which is the sequel to another called Get Shorty.  Both were turned into films starring John Travolta as Chili Palmer.  Chili is a loan shark from back east who heads to Hollywood to collect on a marker, but what he really wants is to make movies.  As a favor, he looks up a producer who owes someone else money, and instead of collecting he pitches a story.  The rest of the series is about the cinematic mess that ensues.


In the scene below, Chili runs into a minor thug floating in a backyard swimming pool.  In the larger story this is a nothing scene, but it’s stunning how Leonard turns it into a tour de force.  In a virtuoso display of writing, he shows Chili effortlessly taking the thug apart while also mesmerizing him.  Chili the movie maker rewrites the scene as he’s acting it out and then directs the thug on the raft in how to play his own part more effectively.

Watch how Chili does it:

He got out of there, went into the living room and stood looking around, seeing it now as the lobby of an expensive health club, a spa: walk through there to the pool where one of the guests was drying out. From here Chili had a clear view of Derek, the kid floating in the pool on the yellow raft, sun beating down on him, his shades reflecting the light. Chili walked outside, crossed the terrace to where a quart bottle of Absolut, almost full, stood at the tiled edge of the pool. He looked down at Derek laid out in his undershorts.

He said, “Derek Stones?”

And watched the kid raise his head from the round edge of the raft, stare this way through his shades and let his head fall back again.

“Your mother called,” Chili said. “You have to go home.”

A wrought-iron table and chairs with cushions stood in an arbor of shade close to the house. Chili walked over and sat down. He watched Derek struggle to pull himself up and begin paddling with his hands, bringing the raft to the side of the pool; watched him try to crawl out and fall in the water when the raft moved out from under him. Derek made it finally, came over to the table and stood there showing Chili his skinny white body, his titty rings, his tats, his sagging wet underwear.

“You wake me up,” Derek said, “with some shit about I’m suppose to go home? I don’t even know you, man. You from the funeral home? Put on your undertaker suit and deliver Tommy’s ashes? No, I forgot, they’re being picked up. But you’re either from the funeral home or—shit, I know what you are, you’re a lawyer. I can tell ’cause all you assholes look alike.”

Chili said to him, “Derek, are you trying to fuck with me?”

Derek said, “Shit, if I was fucking with you, man, you’d know it.”

Chili was shaking his head before the words were out of Derek’s mouth.

“You sure that’s what you want to say? ‘If I was fuckin with you, man, you’d know it?’ The ‘If I was fucking with you’ part is okay, if that’s the way you want to go. But then, ‘you’d know it’—come on, you can do better than that.”

Derek took off his shades and squinted at him.

“The fuck’re you talking about?”

“You hear a line,” Chili said, “like in a movie. The one guy says, ‘Are you trying to fuck with me?’ The other guy comes back with, ‘If I was fuckin with you, man . . .’ and you want to hear what he says next ’cause it’s the punch line. He’s not gonna say, ‘You’d know it.’ When the first guy says, ‘Are you trying to fuck with me?’ he already knows the guy’s fuckin with him, it’s a rhetorical question. So the other guy isn’t gonna say ‘you’d know it.’ You understand what I’m saying? ‘You’d know it’ doesn’t do the job. You have to think of something better than that.”

“Wait,” Derek said, in his wet underwear, weaving a little, still half in the bag. “The first guy goes, ‘You trying to fuck with me?’ Okay, and the second guy goes, ‘If I was fucking with you . . . If I was fucking with you, man . . .’ “

Chili waited. “Yeah?”

“Okay, how about, ‘You wouldn’t live to tell about it?’”

“Jesus Christ,” Chili said, “come on, Derek, does that make sense? ‘You wouldn’t live to tell about it’? What’s that mean? Fuckin with a guy’s the same as taking him out?” Chili got up from the table. “What you have to do, Derek, you want to be cool, is have punch lines on the top of your head for every occasion. Guy says, ‘Are you trying to fuck with me?’ You’re ready, you come back with your line.” Chili said, “Think about it,” walking away. He went in the house through the glass doors to the bedroom.

Don’t you wish you could be Elmore Leonard and write a scene like that, or be Chili Palmer and construct it on the fly?  I sure do, and I’m not sure which role would be the more gratifying.

You could have a lot of fun picking apart the things that make the scene work.  Chili the movie maker walking into the living room and suddenly seeing it “as the lobby of an expensive health club, a spa.”  Derek with “his skinny white body, his titty rings, his tats, his sagging wet underwear.”  The way Derek talks: “The fuck’re you talking about?”  Derek struggling to come up with the right line to replace the lame one he thought up himself.  Chili explaining the core dilemma of the writer, that you can’t ever set up a punchline and then fail to deliver.

But instead of explaining his joke, let’s just learn from his example.  Deliver what you promise.  Reward the effort that your readers invest in engaging with your work.  Have your key insight ready, deliver it on cue, and then walk away.  Never step on the punchline.

Posted in Uncategorized

Public Schools for Private Gain: The Declining American Commitment to Serving the Public Good

This post is a piece I published in Kappan in November, 2018.  Here’s a link to the original.

Public schools for private gain:

The declining American commitment to serving the public good

When schooling comes to be viewed mainly as a source of private benefit, both schools and society suffer grave consequences.

By David F. Labaree

We Americans tend to talk about public schooling as though we know what that term means. But in the complex educational landscape of the 21st century — where charter schools, private schools, and religious schools compete with traditional public schools for resources and support — it’s becoming less and less obvious what makes a school “public” at all.


A school is public, one might argue, if it meets certain formal criteria: It is funded by the public, governed by the public, and openly accessible to the public. But in that case, what should we make of charter schools, which are broadly understood to be public schools even though many are governed by private organizations? And how should we categorize religious schools that enroll students using public vouchers or tax credits, or public schools that use exams to restrict access? For that matter, don’t private schools often serve public interests, and don’t public schools often promote students’ private interests?

In short, our efforts to distinguish between public and nonpublic schools often oversimplify the ways in which today’s schools operate and the complex roles they play in our society. And such distinctions matter because they shape our thinking about educational policy. After all, if we’re unclear which schools deserve what kinds of funding and support, then how do we justify a system of elementary, secondary, and higher education that consumes more than $800 billion in taxes every year and occupies 10 to 20 or more years of every person’s life?

To clarify what we mean by public schooling, it’s helpful to broaden the discussion by considering not just the formal features of schools (their funding, governance, and admissions criteria) but also their aims. That is, to what extent do they pursue the public good, and to what extent do they serve private interests?

A public good is one that benefits all members of the community, whether or not they contribute to its upkeep or make use of it personally. In contrast, private goods benefit individuals, accruing only to those people who are able to take advantage of them. Thus, schooling is a public good to the extent that it helps everyone (including people who don’t have children in school); it is, by nature, inclusive. And schooling is a private good to the extent that it provides individuals with knowledge, skills, and credentials they can use to distinguish themselves from other people and get ahead in life; it is a form of private property, whose benefits are exclusive to those who own them.

People, organizations, and governments that create public goods tend to face what is known as the “free-rider” problem: If you can’t prevent people from enjoying goods for free, then they’ll have little incentive to pay for them. For example, if I can hang out at my local park whenever I want, then why should I donate to the park clean-up fund that my neighbors organized? I can get a free ride on them, enjoying a clean park without chipping in any of my own money.

The standard solution to the free-rider problem is to make it mandatory for everybody to support certain public goods (for example, efforts to reduce air pollution, fight crime, and monitor food safety) by using mechanisms such as general taxation. Indeed, this is how we’ve always supported our public schools. You may pay tuition to send your children to an exclusive, ivy-covered academy — or you might not have kids at all — but even so, you are required to pay taxes that fund schools for the whole community. Your family may not benefit personally from the services provided by, say, the elementary school down the road, but you do benefit, along with your neighbors, from having a well-funded school nearby. If local kids get a decent education and grow up to become gainfully employed, law-abiding citizens, that is a public good. It makes the entire community a better, safer, and happier place to live.

For much of American history, schooling has been understood in this way, first and foremost. For example, at the founding of our educational system, in the early 19th century, schools were supposed to turn young people into virtuous and competent citizens, a public good that was strongly political in nature. By the turn of the 20th century, schooling was still regarded mainly as a public good, but the mission had begun to shift from politics (creating citizens) to economics (training capable workers who can help promote broad prosperity). Over the subsequent decades, however, growing numbers of Americans came to view schooling mainly as a private good, producing credentials that allow individuals to get ahead, or stay ahead, in the competition for money and social status.

In this article, I argue that this shift in how Americans have viewed schooling — from conceiving of it mainly as a public good to defining it mostly as a private good — has led to dramatic changes in both the quality of the education that students receive and the kind of society we expect our schools to create. The institution that for much of our history helped bring us together into a community of citizens is increasingly dispersing us into a social hierarchy defined by the level of education we’ve attained.

The social functions of U.S. schooling: A short history[1]

In the early 19th century, the United States created a system of universal public schooling for the same reason that other emerging nations have done so over the years: to turn subjects of the King into citizens of the state.

Historically, public schooling has been the midwife of the nation state, whose viability depends on its ability to convert the occupants of a particular territory into members of an imagined community, who come to see themselves for the first time as French, say, or American. This mission was particularly important for the U.S. because it was a republic entering a world that had long demonstrated hostility toward the survival of such states. From ancient Rome to the Italian city states of the Renaissance, republics tended either to succumb to a tyrant or be destroyed in a Hobbesian war among irreconcilable interests.

As the Founders well knew, the survival of the American republic depended on its ability to form individuals into a republican community in which citizens were imbued with a commitment to the public good. Further, when the Common School Movement emerged in the 1820s and 30s, it faced an additional challenge, because the civic virtue of the fragile new republic was under vigorous assault from the possessive individualism of the emerging free-market economy. Horace Mann, the leader of the movement in Massachusetts, put the case this way: “It may be an easy thing to make a Republic; but it is a very laborious thing to make Republicans; and woe to the republic that rests upon no better foundations than ignorance, selfishness, and passion.”[2]

The key characteristic of the new common school was not its curriculum or pedagogy but its commonality. It brought young people together into a single building where they engaged in a shared social and cultural experience, meant to counter the differences of social class that posed a serious threat to republican identity. Ideally, students would learn, in age-graded classrooms, to belong to a community of equals.

Further, the goal wasn’t just to teach them to internalize democratic norms but also to make it possible for capitalism to coexist with republicanism. For the free market to function, the state had to relax its control over individuals, allowing them to make their own decisions as rational actors. By learning to regulate their own thoughts and behaviors within the space of the classroom, students would become prepared for both commerce and citizenship, able to pursue their self-interests in the economic marketplace while at the same time participating in the political marketplace of ideas.

However, by the end of the 19th century, the survival of the republic was no longer in question. At that point, the U.S. was emerging as a world power, with booming industrial production, large-scale immigration, and a growing military presence. And while there was some pressure to turn peasant immigrants from Southern and Eastern Europe into American citizens, policy makers were even more concerned with turning them into modern industrial workers. In the roaring economy of the Progressive Era, then, the mission of schooling evolved: The most pressing goal was to strengthen the nation’s human capital (to put it in today’s terms).

Note, though, that schooling continued to be defined as a public good. When the workforce became more skilled and productivity increased, the whole country benefited. Overall, Americans’ standard of living improved. Thus, there remained a strong rationale for everyone to contribute to the education of other people’s children. And that rationale continues to resonate somewhat today. Even now, politicians and policy makers often talk about “investing” public funds in education as a way to promote economic growth, lifting all boats.

It was only in the 20th century that schooling came to be regarded as the primary means for individuals to obtain a good job. As their enrollments skyrocketed, high schools gave up the longstanding practice of providing a common course of study for all students and, instead, differentiated the curriculum, providing separate tracks designed for different career trajectories: the industrial course for factory workers, the business course for clerical workers, and the academic course for those bound for college (and then for work in management and the professions). As one school board president in the 1920s put it, “For a long time, all boys were trained to be President . . . Now we are training them to get jobs.”[3]

The new vocationalism lacked the grandeur of the mission set for the Common School, but it did address parents’ primary concern: how to ensure their children ended up with a good income and a secure social position, ideally by landing a job in the upper ranks of the new occupational hierarchy. Such work tended to be safer, cleaner, less manual, more mental, more secure, more prestigious, and better paid. And, crucially, each step up in the hierarchy required a higher level of education.

This new function of schooling — allocating desirable jobs — was in some ways just the flip side of the idea that schools exist to produce capable workers. What a policy maker views as a process of strengthening the nation’s human capital looks, to the individual student, like a way to attain personal status. For the student, school becomes purely a contest to obtain better educational qualifications and get better jobs. And from this angle, school is a decidedly private good. The pursuit of high-status jobs is a zero-sum game. If you get hired for a position, then I don’t.

All but gone is the assumption that the purpose of schooling is to benefit the community at large. Less and less often do Americans conceive of education as a cooperative effort in nation-building or a collective investment in workforce development. Increasingly, rather, school comes to be viewed as an intense competition among individuals to get ahead in society and avoid being left behind. It has begun to look, to a great extent, like a means of creating winners and losers in the pursuit of academic merit, with the results determining who becomes winners and losers in life.

Consequences of the rise of schooling as a private good

When schools become a mechanism for allocating social status, they provoke intense competition over invidious educational distinctions. But while schooling may serve as a very private good, that doesn’t mean it can’t also function, at the same time, as a public good.

At one level, everyone who attends a school benefits personally from the knowledge, skills, and socialization they gain there, as well as from any diplomas they receive, which certify their learning and provide a signal to the job market about their relative employability for a variety of occupational positions. Viewed from this angle, even students at the most traditional public schools accrue private goods.

And at another level, everyone in society benefits from having a well-educated and successful group of fellow citizens and co-workers. One of the core concepts of neoclassical economics is that the pursuit of private and personal gain often has public benefits. People with more education tend to commit fewer crimes, participate more fully in public life, vote more often, and contribute to civil society through engagement with a variety of nongovernmental organizations. They are more likely to assume positions of political, social, and economic leadership and to populate the professions. And they tend to be more productive workers, which helps both to spur economic growth and to increase the standard of living for the population as a whole. The fact that these benefits may be unintended consequences, resulting indirectly from people seeking personal gain and glory, doesn’t make them any less significant.

Consider the classic statement of this phenomenon by Adam Smith: “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest . . . Nobody but a beggar chuses [sic] to depend chiefly upon the benevolence of his fellow-citizens.”[4] From this perspective, the competition for educational advantage benefits not only the individuals who gain the credentials but also the public at large. When we strengthen the level of skill in the workforce, everybody’s quality of life improves. And if true, this solves the free-rider problem: Rather than compelling people to contribute to the public good, we can simply encourage them to pursue their private interests, trusting that this will, over the long haul, produce the greatest benefits for everybody.

The problem is that, whether or not this theory is correct, few of us can afford to wait for the long haul. Encouraging individuals to pursue their private interests doesn’t do much for the vast numbers of people who have serious obstacles to confront in the short term. Moreover, while a rising tide of economic growth may raise all boats, this doesn’t change the fact that most kids are born in dinghies, not yachts.

We know from decades of research that children from lower-income backgrounds tend to attend worse schools than those born into affluent families, are less likely to be in the high-level reading group or the honors track, and are much less likely to graduate from high school. If they go to college, they are less likely to attend a four-year institution and are less likely to earn a degree. And every year, it becomes less and less likely that a person who was born in a dinghy will ever end up owning a yacht, much less raise their children in one.

For those families that do enjoy greater wealth, the public benefits of schooling are easy to miss, whereas the private benefits are material, immediate, and personal. When push comes to shove, the latter are simply more compelling. It’s no surprise that affluent parents will deploy their economic, social, and cultural capital to gain as many educational advantages as they can for their children. They move to the best school district they can afford or send their kids to private school; they make sure they get into the classes with the best teachers, gain access to the gifted program in elementary school and the advanced placement program in high school. And they push their children toward the most selective college they can attend. To do anything less would be a disservice.

Sure, in the name of fairness and justice they could choose to send their children to the same lousy schools that less fortunate people are forced to attend. But even if they support efforts to improve the quality of educational opportunities for other people’s children, what kind of parent would put their children’s future at risk for a political principle?

In short, the pursuit of private educational goods drives most parents’ immediate decisions, while efforts to promote the public good are deferred to the indeterminate realm of political action for possible resolution in the distant future. It’s not that anybody wants to punish other people’s children; it’s just that they need to take care of their own. But when the public good is forever postponed, the effects are punishing indeed. And when schooling comes to be viewed solely as a means of private advancement, the consequences are dismal for both school and society:

  • Over time, the market rewards the accumulation of educational credentials more than it values knowledge and skills. For example, employers will pay a higher salary to a person who squeaked out a college degree than one who excelled in all four years of college but left one credit short of a diploma.
  • As a result, students learn early on that the goal is to acquire as many grades, credits, and degrees as possible rather than the knowledge and skills that these tokens are supposed to represent. So much the better if you can find ways to game the system (by, for example, studying only what’s likely to be on the test, buttering up the teacher, or just plain cheating). Only a sucker pays the sticker price.
  • In turn, schooling becomes more and more stratified, in two related ways: First, students have incentives to pursue the highest level of schooling they can (a graduate degree is better than a 4-year degree, which is better than a 2-year degree, and so on). Second, they have incentives to get into the highest-status institutions they can, at every level.
  • Cooperative learning becomes a dangerous waste of time. Students have no incentive to learn from their classmates, only to maximize their own ranking relative to them.
  • Families with more economic and cultural and social capital begin to hoard educational opportunities for their own children, elbowing others aside for access to the most desirable schools, teachers, and other resources.
  • This in turn threatens the legitimacy of the whole system, undermining the claim that people succeed according to their educational merit.
  • Moreover, people with the highest-status degrees and jobs tend to marry each other and pass their concentrated levels of advantage on to their own children, which only widens the divide across subsequent generations.
  • Enjoying greater wealth, those parents choose to send their children to private schools, or they choose to live in neighborhoods with elite public schools — in any case, the nominally “public” school hardly differs from the private academy, except that while it enjoys public subsidies, its boundaries have been drawn up in a way that denies access to other people’s children. (In effect, such a school is a public resource turned toward private ends.)

My point is that over the last several decades, as schooling has come to be viewed mainly as a source of private benefit rather than as a public good, the consequences have been dramatic for both schools and society. Increasingly prized as a resource by affluent families, traditional public schooling has become a mechanism by which to reinforce their advantages. And as a result, it has become harder and harder to distinguish what is truly public about our public schools.

At a deeper level, as we have privatized our vision of public schooling, we have shown a willingness to back away from the social commitment to the public good that motivated the formation of the American republic and the common school system. We have grown all too comfortable in allowing the fate of other people’s children to be determined by the unequal competition among consumers for social advantage through schooling. The invisible hand of the market may work for the general benefit in the economic activities of the butcher and the baker but not in the political project of creating citizens.

[1] The discussion in this section is drawn from the following: David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal, 34:1 (Spring, 1997), pp. 39-81; David F. Labaree, “Founding the American School System,” in Someone Has to Fail: The Zero-Sum Game of Public Schooling (Cambridge, MA: Harvard University Press, 2010), pp. 42-79.

[2] Horace Mann, Fifth annual report to the Massachusetts Board of Education (Boston: Board of Education, 1841).

[3] Robert S. and Helen Merrill Lynd, Middletown (New York: Harcourt, Brace and World, 1929), p. 194.

[4] Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, Edwin Cannan, ed. (Chicago: University of Chicago Press, 1776/1976), book 1, chapter 2, p. 18.

 

 

Posted in Empire, History, Resilience, War

Resilience in the Face of Climate Change and Epidemic: Ancient Rome and Today’s America

Tell me if you think this sounds familiar:  In its latter years (500-700 CE), the Roman Empire faced a formidable challenge from two devastating environmental forces — dramatic climate change and massive epidemics.  As Mark Twain is supposed to have said, “History doesn’t repeat itself, but it often rhymes.”

During our own bout of climate change and ravaging disease, I’ve been reading Kyle Harper’s book The Fate of Rome: Climate, Disease, and the End of Empire.  The whole time, rhymes were running through my head.  We all know that things did not turn out well for Rome, whose civilization went through the most devastating collapse in world history.  The state disintegrated, the population fell by half, and the European standard of living did not return to its level of 500 until a thousand years later.


So Rome ended badly, but what about us?  The American empire may be in eclipse, but it’s not as though the end is near.  Rome was dependent on animal power and a fragile agricultural base, and its medical “system” did more harm than good.  All in all we seem much better equipped to deal with climate change and disease than they were.  So I’m not suggesting that we’re headed for the same calamitous fall that befell Roman civilization, but I do think we can learn something important by observing how they handled their own situation.

What’s so interesting about the fall of Rome is that it took so long.  The empire held on for 500 years, even under circumstances where its fall was thoroughly overdetermined.  The traditional story of the fall is about fraying political institutions in an overextended empire, surrounded by surging “barbarian” states that were prodded into existence by Rome’s looming threat.

To this political account, Harper adds the environment.  The climate was originally very kind to Rome, supporting growth during a long period of warm and wet weather known as the Roman Climate Optimum (200 BCE to 150 CE).  But then conditions grew increasingly unstable, leading to the Late Antique Little Ice Age (450-700), with massive crop failures brought on by a drop in solar energy and major volcanic eruptions.  In the midst of this arose a series of epidemics, fostered (like our own) by the opening up of trade routes, which culminated in the bubonic plague (541-749) that killed off half of the populace.

What kept Rome going all this time was a set of resilient civic institutions.  That’s what I think we can learn from the Roman case.  My fear is that our own institutions are considerably more fragile.  In this analysis, I’m picking up on a theme from an earlier blog post:  The Triumph of Efficiency over Effectiveness: A Brief for Resilience through Redundancy.

Here is how Harper describes the institutional framework of this empire:

Rome was ruled by a monarch in all but name, who administered a far-flung empire with the aid, first and foremost, of the senatorial aristocracy. It was an aristocracy of wealth, with property requirements for entry, and it was a competitive aristocracy of service. Low rates of intergenerational succession meant that most aristocrats “came from families that sent representatives into politics for only one generation.”

The emperor was the commander-in-chief, but senators jealously guarded the right to the high posts of legionary command and prestigious governorships. The imperial aristocracy was able to control the empire with a remarkably thin layer of administrators. This light skein was only successful because it was cast over a foundational layer of civic aristocracies across the empire. The cities have been called the “load-bearing” pillars of the empire, and their elites were afforded special inducements, including Roman citizenship and pathways into the imperial aristocracy. The low rates of central taxation left ample room for peculation by the civic aristocracy. The enormous success of the “grand bargain” between the military monarchy and the local elites allowed imperial society to absorb profound but gradual changes—like the provincialization of the aristocracy and bureaucracy—without jolting the social order.

The Roman frontier system epitomized the resilience of the empire; it was designed to bend but not break, to bide time for the vast logistical superiority of the empire to overwhelm Rome’s adversaries. Even the most developed rival in the orbit of Rome would melt before the advance of the legionary columns. The Roman peace, then, was not the prolonged absence of war, but its dispersion outward along the edges of empire.

The grand and decisive imperial bargain, which defined the imperial regime in the first two centuries, was the implicit accord between the empire and “the cities.” The Romans ruled through cities and their noble families. The Romans coaxed the civic aristocracies of the Mediterranean world into their imperial project. By leaving tax collection in the hands of the local gentry, and bestowing citizenship liberally, the Romans co-opted elites across three continents into the governing class and thereby managed to command a vast empire with only a few hundred high-ranking Roman officials. In retrospect, it is surprising how quickly the empire ceased to be a mechanism of naked extraction, and became a sort of commonwealth.

Note that last part:  Rome “became a sort of commonwealth.”  It conquered much of the Western world and incorporated one-quarter of the earth’s population, but the conquered territories were generally better off under Rome than they had been before — benefiting from citizenship, expanded trade, and growing standards of living.  It was a remarkably stratified society, but its benefits extended even to the lower orders.  (For more on this issue, see my earlier post about Walter Scheidel’s book on the social benefits of war.)

At the heart of the Roman system were three cultural norms that guided civic life: self-sufficiency, reciprocity, and patronage.  Let me focus on the last of these, which seems to be dangerously absent in our own society at the moment.

The expectation of paternalistic generosity lay heavily on the rich, ensuring that less exalted members of society had an emergency lien on their stores of wealth. Of course, the rich charged for this insurance, in the form of respect and loyalty, and in the Roman Empire there was a constant need to monitor the fine line between clientage and dependence.

A key part of the grand bargain engineered by Rome was the state’s responsibility to feed its citizens.

The grain dole was the political entitlement of an imperial people, under the patronage of the emperor.

Preparation for famine — a chronic threat to premodern agricultural societies — was at the center of the system’s institutional resilience.  This was particularly important in an empire as thoroughly city-centered as Rome.  Keep in mind that Rome during the empire was the first city in the world to have 1 million residents; the second was London, 1,500 years later.

These strategies of resilience, writ large, were engrained in the practices of the ancient city. Diversification and storage were adapted to scale. Urban food storage was the first line of redundancy. Under the Roman Empire, the monumental dimensions of storage facilities attest the political priority of food security. Moreover, cities grew organically along the waters, where they were not confined to dependence on a single hinterland.

When food crisis did unfold, the Roman government stood ready to intervene, sometimes through direct provision but more often simply by the suppression of unseemly venality.

The most familiar system of resilience was the food supply of Rome. The remnants of the monumental public granaries that stored the food supply of the metropolis are still breathtaking.

Wouldn’t it be nice if we in the U.S. could face the challenges of climate change and pandemic as a commonwealth?  If so, we would be working to increase the resilience of our system: by sharing the burden and spreading the wealth; by building up redundancy to store up for future challenges; by freeing ourselves from the ideology of economic efficiency in the service of social effectiveness.  Wouldn’t that be nice.

Posted in Culture, History, Politics, Populism, Sociology

Colin Woodard: Maps that Show the Historical Roots of Current US Political Faultlines

This post is a commentary on Colin Woodard’s book American Nations: A History of the Eleven Rival Regional Cultures of North America.  

Woodard argues that the United States is not a single national culture but a collection of national cultures, each with its own geographic base.  The core insight for this analytical approach comes from “Wilbur Zelinsky of Pennsylvania State University [who] formulated [a] theory in 1973, which he called the Doctrine of First Effective Settlement. ‘Whenever an empty territory undergoes settlement, or an earlier population is dislodged by invaders, the specific characteristics of the first group able to effect a viable, self-perpetuating society are of crucial significance for the later social and cultural geography of the area, no matter how tiny the initial band of settlers may have been,’ Zelinsky wrote. ‘Thus, in terms of lasting impact, the activities of a few hundred, or even a few score, initial colonizers can mean much more for the cultural geography of a place than the contributions of tens of thousands of new immigrants a few generations later.’”

I’m suspicious of theories that smack of cultural immutability and cultural determinism, but Woodard’s account is more sophisticated than that.  His is a story of the power of founders in a new institutional setting, who lay out the foundational norms for a society that lacks any cultural history of its own or has expelled the preexisting cultural group (in the U.S. case, Native Americans).  So part of the story is about the acculturation of newcomers into an existing worldview.  But another part is the highly selective nature of immigration, since new arrivals often seek out places to settle that are culturally compatible.  They may target a particular destination because of its cultural characteristics, creating a pipeline of like-minded immigrants; or they may choose to move on to another territory if the first port of entry is not to their taste.  Once established, these cultures often expanded westward as the country developed, extending the size and geographical scope of each nation.

Why does he insist on calling them nations?  At first this bothered me a bit, but then I realized he was using the term “nation” in Benedict Anderson’s sense as “imagined communities.”  Tidewater and Yankeedom are not nation states; they are cultural components of the American state.  But they do act as nations for their citizens.  Each of these nations is a community of shared values and worldviews that binds people together who have never met and often live far away.  The magic of the nation is that it creates a community of common sense and purpose that extends well beyond the reach of normal social interaction.  If you’re Yankee to the core, you can land in a strange town in Yankeedom and feel at home.  These are my people.  I belong here.

He argues that these national groupings continue to have a significant impact on the cultural geography of the US, shaping people’s values, styles of social organization, views of religion and government, and ultimately how they vote.  The kicker is the alignment between the spatial distribution of these cultures and current voting patterns.  He lays out this argument succinctly in a 2018 op-ed he wrote for the New York Times.  I recommend reading it.

The whole analysis is neatly summarized in the two maps he deployed in that op-ed, which I have reproduced below.

The Map of America’s 11 Nations

11 Nations Map

This first map shows the geographic boundaries of the various cultural groupings in the U.S.  It all started on the east coast with the founding cultural binary that shaped the formation of the country in the late 18th century — New England Yankees and Tidewater planters.  He argues that they are direct descendants of the two factions in the English civil war of the mid 17th century, with the Yankees as the Calvinist Roundheads, who (especially after being routed by the Restoration in England) sought to establish a new theocratic society in the northeast founded on strong government, and the Tidewater planters as the Anglican Cavaliers, who sought to reproduce the decentralized English aristocratic ideal on Virginia plantations.  In between was the Dutch entrepot of New York, focused on commerce and multiculturalism (think “Hamilton”), and the Quaker colony in Pennsylvania, founded on equality and suspicion of government.  The US constitution was an effort to balance all of these cultural priorities within a single federal system.

Then came two other groups that didn’t fit well into any of these four cultural enclaves.  The immigrants to the Deep South originated in the slave societies of the British West Indies, bringing with them a rigid caste structure and a particularly harsh version of chattel slavery.  Immigrants to Greater Appalachia came from the Scots-Irish clan cultures of Northern Ireland and the Scottish borderlands, with a strong commitment to individual liberty, resentment of government, and a taste for violence.

Tidewater and Yankeedom dominated the presidency and federal government for the country’s first 40 years.  But in 1828 the US elected its first president from rapidly expanding Appalachia, Andrew Jackson.  And by then the massive westward expansion of the Deep South, along with the extraordinary wealth and power that accrued from its cotton-producing slave economy, created the dynamics leading to the Civil War.  This pitted the four nations of the northeast against Tidewater and Deep South, with Appalachia split between the two, resentful of both Yankee piety and Southern condescension.  The multiracial and multicultural nations of French New Orleans and the Mexican southwest (El Norte) were hostile to the Deep South and resented its efforts to expand its dominion westward.

The other two major cultural groupings emerged in the mid 19th century.  The thin strip along the west coast consisted of Yankees in the cities and Appalachians in the back country, combining the utopianism of the former with the radical individualism of the latter.  The Far West is the one grouping that is based not on cultural geography but physical geography.  A vast arid area unsuited to farming, it became the domain of the only two entities powerful enough to control it — large corporations (railroad and mining), which exploited it, and the federal government, which owned most of the land and provided armed protection from Indians.

So let’s jump ahead and look at the consequences of this cultural landscape for our current political divisions.  Examine the electoral map for the 2016 presidential race, which shows the vote in Woodard’s 11 nations.

The 2016 Electoral Map

2016 Vote Map

Usually you see voting maps with results by state.  Here instead we see voting results by county, which allows for a more fine-grained analysis.  Woodard assigns each county to one of the 11 “nations” and then shows the red or blue vote margin for each cultural grouping.

It’s striking to see how well the nations match the vote.  The strongest vote for Clinton came from the Left Coast, El Norte, and New Netherland, with substantial support from Yankeedom, Tidewater, and Spanish Caribbean.  Midlands was only marginally supportive of the Democrat.  Meanwhile the Deep South and Far West were modestly pro-Trump (about as much as Yankeedom was pro-Clinton), but the true kicker was Appalachia, which voted overwhelmingly for Trump (along with New France in southern Louisiana).

Appalachia forms the heart of Trump’s electoral base of support.  It’s an area that resents intellectual, cultural, and political elites; that turns away from mainstream religious denominations in favor of evangelical sects; and that lags behind in the 21st century information economy.  As a result, this is the heartland of populism.  It’s no wonder that the portrait on the wall of Trump’s Oval Office portrays Andrew Jackson.

Now one more map, this time showing where in the country people have been social distancing and where they haven’t, as measured by how much they were traveling away from home (using cell phone data).  It comes from a piece Woodard recently published in Washington Monthly.

Social Distancing Map

Once again, the patterns correspond nicely to the 11 nations.  Here’s how Woodard summarizes the data:

Yankeedom, the Midlands, New Netherland, and the Left Coast show dramatic decreases in movement – 70 to 100 percent in most counties, whether urban or rural, rich, or poor.

Across much of Greater Appalachia, the Deep South and the Far West, by contrast, travel fell by only 15 to 50 percent. This was true even in much of Kentucky and the interior counties of Washington and Oregon, where Democratic governors had imposed a statewide shelter-in-place order.

Not surprisingly, most of the states where governors imposed stay-at-home orders by March 27 are located in or dominated by one or a combination of the communitarian nations. This includes states whose governors are Republicans: Ohio, New Hampshire, Vermont, and Massachusetts.

Most of the laggard governors lead states dominated by individualistic nations. In the Deep South and Greater Appalachia you find Florida’s Ron DeSantis, who allowed spring breakers to party on the beaches. There’s Brian Kemp of Georgia who left matters in the hands of local officials for much of the month and then, on April 2, claimed to have just learned the virus can be transmitted by asymptomatic individuals. You have Asa Hutchinson of Arkansas, who on April 7 denied mayors the power to impose local lockdowns. And then there’s Mississippi’s Tate Reeves, who resisted action because “I don’t like government telling private business what they can and cannot do.”

Nothing like a pandemic to show what your civic values are.  Is it all about us or all about me?

Posted in Academic writing

Patricia Limerick: Dancing with Professors

 

In this post, I feature a lovely piece by historian Patricia Limerick called “Dancing with Professors: The Trouble with Academic Prose,” which was published in the Observer in 2015.

Everyone disparages academic writing, and for good reason.  No one reads journal articles for fun.  Limerick, whose work shows she knows something about good writing, finds the problem in the way academics try so hard to sound professional.  From this perspective, writing with clarity and grace carries the stigma of amateurism.  If it’s readily understandable to a layperson, it’s not tenurable.

“We must remember,” she says, “that professors are the ones nobody wanted to dance with in high school.”  We’re not approachable or accessible, and we like it that way.  Any loser can be popular; the academic aspires to be profound.

So we learn turgid writing in graduate school, as part of our induction into the profession, and we stay in this mode for the rest of our careers — long after we have lost the need to shore up our initially shaky credibility as serious scholars.  We constrain ourselves from taking flight with language even after the shackles of grad school have fallen away.

Don’t miss her discussion of how academic writers are like buzzards on a tree limb.  Really.  It’ll stick with you for a long long time.

Enjoy.

Typewriter

Dancing with Professors:

The Trouble with Academic Prose

Patricia Nelson Limerick

Professor of History, University of Colorado

In ordinary life, when a listener cannot understand what someone has said, this is the usual exchange:

Listener: I cannot understand what you are saying.

Speaker: Let me try to say it more clearly.

But in scholarly writing in the late 20th century, other rules apply. This is the implicit exchange:

Reader: I cannot understand what you are saying.

Academic Writer: Too bad. The problem is that you are an unsophisticated and untrained reader. If you were smarter, you would understand me.

The exchange remains implicit, because no one wants to say, “This doesn’t make any sense,” for fear that the response, “It would, if you were smarter,” might actually be true.

While we waste our time fighting over ideological conformity in the scholarly world, horrible writing remains a far more important problem. For all their differences, most right-wing scholars and most left-wing scholars share a common allegiance to a cult of obscurity. Left, right and center all hide behind the idea that unintelligible prose indicates a sophisticated mind. The politically correct and the politically incorrect come together in the violence they commit against the English language.

University presses have certainly filled their quota every year, in dreary monographs, tangled paragraphs and impenetrable sentences. But trade publishers have also violated the trust of innocent and hopeful readers. As a prime example of unprovoked assaults on innocent words, consider the verbal behavior of Allan Bloom in “The Closing of the American Mind,” published by a large mainstream press. Here is a sample:

“If openness means to ‘go with the flow,’ it is necessarily an accommodation to the present. That present is so closed to doubt about so many things impeding the progress of its principles that unqualified openness to it would mean forgetting the despised alternatives to it, knowledge of which makes us aware of what is doubtful in it.”

Is there a reader so full of blind courage as to claim to know what this sentence means? Remember, the book in which this remark appeared was a lamentation over the failings of today’s students, a call to arms to return to tradition and standards in education. And yet, in 20 years of paper grading, I do not recall many sentences that asked, so pathetically, to be put out of their misery.

Jump to the opposite side of the political spectrum from Allan Bloom, and literary grace makes no noticeable gains. Contemplate this breathless, indefatigable sentence from the geographer Allan Pred, and Mr. Pred and Mr. Bloom seem, if only in literary style, to be soul mates.

“If what is at stake is an understanding of geographical and historical variations in the sexual division of productive and reproductive labor, of contemporary local and regional variations in female wage labor and women’s work outside the formal economy, of on-the-ground variations in the everyday content of women’s lives, inside and outside of their families, then it must be recognized that, at some nontrivial level, none of the corporal practices associated with these variations can be severed from spatially and temporally specific linguistic practices, from language that not only enable the conveyance of instructions, commands, role depictions and operating rules, but that also regulate and control, that normalize and spell out the limits of the permissible through the conveyance of disapproval, ridicule and reproach.”

In this example, 124 words, along with many ideas, find themselves crammed into one sentence. In their company, one starts to get panicky. “Throw open the windows; bring in the oxygen tanks!” one wants to shout. “These words and ideas are nearly suffocated. Get them air!” And yet the condition of this desperately packed and crowded sentence is a perfectly familiar one to readers of academic writing, readers who have simply learned to suppress the panic.

Everyone knows that today’s college students cannot write, but few seem willing to admit that the professors who denounce them are not doing much better. The problem is so blatant that there are signs that the students are catching on. In my American history survey course last semester, I presented a few writing rules that I intended to enforce inflexibly. The students looked more and more peevish; they looked as if they were about to run down the hall, find a telephone, place an urgent call and demand that someone from the A.C.L.U. rush up to campus to sue me for interfering with their First Amendment rights to compose unintelligible, misshapen sentences.

Finally one aggrieved student raised her hand and said, “You are telling us not to write long, dull sentences, but most of our reading is full of long, dull sentences.”

As this student was beginning to recognize, when professors undertake to appraise and improve student writing, the blind are leading the blind. It is, in truth, difficult to persuade students to write well when they find so few good examples in their assigned reading.

The current social and judicial context for higher education makes this whole issue pressing. In Colorado, as in most states, the legislators are convinced that the university is neglecting students and wasting state resources on pointless research. Under those circumstances, the miserable writing habits of professors pose a direct and concrete danger to higher education. Rather than going to the state legislature, proudly presenting stacks of the faculty’s compelling and engaging publications, you end up hoping that the lawmakers stay out of the library and stay away, especially, from the periodical room, with its piles of academic journals. The habits of academic writers lend powerful support to the impression that research is a waste of the writers’ time and of the public’s money.

Why do so many professors write bad prose?

Ten years ago, I heard a classics professor say the single most important thing — in my opinion — that anyone has said about professors. “We must remember,” he declared, “that professors are the ones nobody wanted to dance with in high school.”

This is an insight that lights up the universe — or at least the university. It is a proposition that every entering freshman should be told, and it is certainly a proposition that helps to explain the problem of academic writing. What one sees in professors, repeatedly, is exactly the manner that anyone would adopt after a couple of sad evenings sidelined under the crepe-paper streamers in the gym, sitting on a folding chair while everyone else danced. Dignity, for professors, perches precariously on how well they can convey this message: “I am immersed in some very important thoughts, which unsophisticated people could not even begin to understand. Thus, I would not want to dance, even if one of you unsophisticated people were to ask me.”

Think of this, then, the next time you look at an unintelligible academic text. “I would not want the attention of a wide reading audience, even if a wide audience were to ask for me.” Isn’t that exactly what the pompous and pedantic tone of the classically academic writer conveys?

Professors are often shy, timid and fearful people, and under those circumstances, dull, difficult prose can function as a kind of protective camouflage. When you write typical academic prose, it is nearly impossible to make a strong, clear statement. The benefit here is that no one can attack your position, say you are wrong or even raise questions about the accuracy of what you have said, if they cannot tell what you have said. In those terms, awful, indecipherable prose is its own form of armor, protecting the fragile, sensitive thoughts of timid souls.

The best texts for helping us understand the academic world are, of course, Lewis Carroll’s Alice’s Adventures in Wonderland and Through the Looking Glass. Just as devotees of Carroll would expect, he has provided us with the best analogy for understanding the origin and function of bad academic writing. Tweedledee and Tweedledum have quite a heated argument over a rattle. They become so angry that they decide to fight. But before they fight, they go off to gather various devices of padding and protection: “bolsters, blankets, hearthrugs, tablecloths, dish covers and coal scuttles.” Then, with Alice’s help in tying and fastening, they transform these household items into armor. Alice is not impressed: “Really, they’ll be more like bundles of old clothes than anything else, by the time they’re ready!” she said to herself, as she arranged a bolster round the neck of Tweedledee, “to keep his head from being cut off,” as he said. Why this precaution? Because, Tweedledee explains, “it’s one of the most serious things that can possibly happen to one in a battle — to get one’s head cut off.”

Here, in the brothers’ anxieties and fears, we have an exact analogy for the problems of academic writing. The next time you look at a classically professorial sentence — long, tangled, obscure, jargonized, polysyllabic — think of Tweedledum and Tweedledee dressed for battle, and see if those timid little thoughts, concealed under layers of clauses and phrases, do not remind you of those agitated but cautious brothers, arrayed in their bolsters, blankets, dish covers and coal scuttles. The motive, too, is similar. Tweedledum and Tweedledee were in terror of being hurt, and so they padded themselves so thoroughly that they could not be hurt; nor, for that matter, could they move. A properly dreary, inert sentence has exactly the same benefit; it protects its writer from sharp disagreement, while it also protects him from movement.

Why choose camouflage and insulation over clarity and directness? Tweedledee, of course, spoke for everyone, academic or not, when he confessed his fear. It is indeed, as he said, “one of the most serious things that can possibly happen to one in a battle — to get one’s head cut off.” Under those circumstances, logic says: tie the bolster around the neck, and add a protective hearthrug or two. Pack in another qualifying clause or two. Hide behind the passive-voice verb. Preface any assertion with a phrase like “it could be argued” or “a case could be made.” Protecting one’s neck does seem to be the way to keep one’s head from being cut off.

Graduate school implants in many people the belief that there are terrible penalties to be paid for writing clearly, especially writing clearly in ways that challenge established thinking in the field. And yet, in academic warfare (and I speak as a veteran) your head and your neck are rarely in serious danger. You can remove the bolster and the hearthrug. Your opponents will try to whack at you, but they will seldom, if ever, land a blow — in large part because they are themselves so wrapped in protective camouflage and insulation that they lose both mobility and accuracy.

So we have a widespread pattern of professors protecting themselves from injury by wrapping their ideas in dull prose, and yet the danger they try to fend off is not a genuine danger. Express yourself clearly, and it is unlikely that either your head — or, more important, your tenure — will be cut off.

How, then, do we save professors from themselves? Fearful people are not made courageous by scolding; they need to be coaxed and encouraged. But how do we do that, especially when this particular form of fearfulness masks itself as pomposity, aloofness and an assured air of superiority?

Fortunately, we have available the world’s most important and illuminating story on the difficulty of persuading people to break out of habits of timidity, caution, and unnecessary fear. I borrow this story from Larry McMurtry, one of my rivals in the interpreting of the American West, though I am putting the story to a use that Mr. McMurtry did not intend.

In a collection of his essays, In a Narrow Grave, Mr. McMurtry wrote about the weird process of watching his book Horseman, Pass By being turned into the movie Hud. He arrived in the Texas Panhandle a week or two after filming had started, and he was particularly anxious to learn how the buzzard scene had gone. In that scene, Paul Newman was supposed to ride up and discover a dead cow, look up at a tree branch lined with buzzards and, in his distress over the loss of the cow, fire his gun at one of the buzzards. At that moment, all of the other buzzards were supposed to fly away into the blue Panhandle sky.

But when Mr. McMurtry asked people how the buzzard scene had gone, all he got, he said, were “stricken looks.”

The first problem, it turned out, had to do with the quality of the available local buzzards — who proved to be an excessively scruffy group. So more appealing, more photogenic buzzards had to be flown in from some distance and at considerable expense.

But then came the second problem: how to keep the buzzards sitting on the tree branch until it was time for their cue to fly.

That seemed easy. Wire their feet to the branch, and then, after Paul Newman fires his shot, pull the wire, releasing their feet, thus allowing them to take off.

But, as Mr. McMurtry said in an important and memorable phrase, the film makers had not reckoned with the “mentality of buzzards.” With their feet wired, the buzzards did not have enough mobility to fly. But they did have enough mobility to pitch forward.

So that’s what they did: with their feet wired, they tried to fly, pitched forward, and hung upside down from the dead branch, with their wings flapping.

I had the good fortune a couple of years ago to meet a woman who had been an extra for this movie, and she added a detail that Mr. McMurtry left out of his essay: namely, the buzzard circulatory system does not work upside down, and so, after a moment or two of flapping, the buzzards passed out.

Twelve buzzards hanging upside down from a tree branch: this was not what Hollywood wanted from the West, but that’s what Hollywood had produced.

And then we get to the second stage of buzzard psychology. After six or seven episodes of pitching forward, passing out, being revived, being replaced on the branch and pitching forward again, the buzzards gave up. Now, when you pulled the wire and released their feet, they sat there, saying in clear, nonverbal terms: “We tried that before. It did not work. We are not going to try it again.” Now the film makers had to fly in a high-powered animal trainer to restore buzzard self-esteem. It was all a big mess. Larry McMurtry got a wonderful story out of it; and we, in turn, get the best possible parable of the workings of habit and timidity.

How does the parable apply? In any and all disciplines, you go to graduate school to have your feet wired to the branch. There is nothing inherently wrong with that: scholars should have some common ground, share some background assumptions, hold some similar habits of mind. This gives you, quite literally, your footing. And yet, in the process of getting your feet wired, you have some awkward moments, and the intellectual equivalent of pitching forward and hanging upside down. That experience — especially if you do it in a public place like a seminar — provides no pleasure. One or two rounds of that humiliation, and the world begins to seem like a treacherous place. Under those circumstances, it does indeed seem to be the choice of wisdom to sit quietly on the branch, to sit without even the thought of flying, since even the thought might be enough to tilt the balance and set off another round of flapping, fainting and embarrassment.

Yet when scholars get out of graduate school and get Ph.D.’s, and, even more important, when scholars get tenure, the wire is truly pulled. Their feet are free. They can fly whenever and wherever they like. Yet by then the second stage of buzzard psychology has taken hold, and they refuse to fly. The wire is pulled, and yet the buzzards sit there, hunched and grumpy. If they teach in a university with a graduate program, they actively instruct young buzzards in the necessity of keeping their youthful feet on the branch.

This is a very well-established pattern, and it is the ruination of scholarly activity in the modern world. Many professors who teach graduate students think that one of their principal duties is to train students in the conventions of academic writing.

I do not believe that professors enforce a standard of dull writing on graduate students in order to be cruel. They demand dreariness because they think that dreariness is in the students’ best interests. Professors believe that a dull writing style is an academic survival skill because they think that is what editors want, both editors of academic journals and editors of university presses. What we have here is a chain of misinformation and misunderstanding, where everyone thinks that the other guy is the one who demands dull, impersonal prose.

Let me say again what is at stake here: universities and colleges are currently embattled, distrusted by the public and state funding institutions. As distressing as this situation is, it provides the perfect setting and the perfect timing for declaring an end to scholarly publication as a series of guarded conversations between professors.

The redemption of the university, especially in terms of the public’s appraisal of the value of research and publication, requires all the writers who have something they want to publish to ask themselves the question: Does this have to be a closed communication, shutting out all but specialists willing to fight their way through thickets of jargon? Or can this be an open communication, engaging specialists with new information and new thinking, but also offering an invitation to nonspecialists to learn from this study, to grasp its importance, and by extension, to find concrete reasons to see value in the work of the university?

This is a country in need of wisdom, and of clearly reasoned conviction and vision. And that, at the bedrock, is the reason behind this campaign to save professors from themselves and to detoxify academic prose. The context is a bit different, but the statement that Willy Loman made to his sons in Death of a Salesman keeps coming to mind: “The woods are burning, boys, the woods are burning.” In a society confronted by a faltering economy, racial and ethnic conflicts, and environmental disasters, “the woods are burning,” and since we so urgently need everyone’s contribution in putting some of these fires out, there is no reason to indulge professorial vanity or timidity.

Ego is, of course, the key obstacle here. As badly as most of them write, professors are nonetheless proud and sensitive writers, resistant to criticism. But even the most desperate cases can be redeemed and persuaded to think of writing as a challenging craft, not as existential trauma. A few years ago, I began to look at carpenters and other artisans as the emotional model for writers. A carpenter, let us say, makes a door for a cabinet. If the door does not hang straight, the carpenter does not say, “I will not change that door; it is an expression of my individuality; who cares if it will not close?” Instead, the carpenter removes the door and works on it until it fits. That attitude, applied to writing, could be our salvation. If we thought more like carpenters, academic writers could find a route out of the trap of ego and vanity. Escaped from that trap, we could simply work on successive drafts until what we have to say is clear.

Colleges and universities are filled with knowledgeable, thoughtful people who have been effectively silenced by an awful writing style, a style with its flaws concealed behind a smokescreen of sophistication and professionalism. A coalition of academic writers, graduate advisers, journal editors, university press editors and trade publishers can seize this moment — and pull the wire. The buzzards can be set free — free to leave that dead tree branch, free to regain their confidence, free to soar.

Posted in Academic writing, Wit, Writing

Wit (and the Art of Writing)

 

They laughed when I told them I wanted to be a comedian. Well, they’re not laughing now.

Bob Monkhouse

Wit is notoriously difficult to analyze, and any effort to do so is likely to turn out dry and witless.  But two recent authors have done a remarkably effective job of trying to make sense of what constitutes wit and they manage to do so wittily.  That’s a risky venture, which most sensible people would avoid like COVID-19.  One book is Wit’s End by James Geary; the other is Humour by Terry Eagleton.  The epigraph comes from Eagleton.  Both have the good sense to reflect on the subject without analyzing it to death or trampling on the punchline.  Eagleton uses Freud as a negative case in point:

Children, insists Freud, lack all sense of the comic, but it is possible he is confusing them with the author of a notoriously unfunny work entitled Jokes and Their Relation to the Unconscious.

Interestingly, Geary says that wit begins with the pun.

Despite its bad reputation, punning is, in fact, among the highest displays of wit. Indeed, puns point to the essence of all true wit—the ability to hold in the mind two different ideas about the same thing at the same time.

In poems, words rhyme; in puns, ideas rhyme. This is the ultimate test of wittiness: keeping your balance even when you’re of two minds.

Groucho’s quip upon entering a restaurant and seeing a previous spouse at another table—“Marx spots the ex.”

Geary Cover

Instead of avoiding ambiguity, wit revels in it, using paradoxical juxtaposition to shake you out of a trance and ask you to consider an issue from a strikingly different angle.  Arthur Koestler described the pun as “two strings of thought tied together by an acoustic knot.”  There’s an echo here of Emerson’s epigram, “A foolish consistency is the hobgoblin of little minds…”  Misdirection can lead to comic relief but it can also produce intellectual insight.

Geary goes on to show how the joke is integrally related to other forms of creative thought:

There is no sharp boundary splitting the wit of the scientist, inventor, or improviser from that of the artist, the sage, or the jester. The creative experience moves seamlessly from the “Aha!” of scientific discovery to the “Ah” of aesthetic insight to the “Ha-ha” of the pun and the punch line.  “Comic discovery is paradox stated—scientific discovery is paradox resolved,” Koestler wrote.

He shows that wit and metaphor have a lot in common.

If wit consists, as we say, in the ability to hold in the mind two different ideas about the same thing at the same time, this is exactly the function of metaphor. A metaphor carries the attention from the concrete to the abstract, from object to concept. When that direction is reversed, and attention is brought back from concept to object, the mind is surprised. Mistaking the figurative for fact is therefore a signature trick of wit.

Hence it is said that kleptomaniacs don’t understand metaphor, because they take things literally.

Both wit and metaphor have these qualities in common:  “brevity, novelty, and clarity.”

Read my lips. Shoot from the hip. Wit switch hits. Wit ad-libs. It teaches new dogs lotsa old tricks. Throw spaghetti ’gainst the wall—wit’s what sticks. You can’t beat it or repeat it, not even with a shtick. Wit rocks the boat. That’s all she wrote.

Eagleton picks up Geary’s theme of how wit and metaphor are grounded in the “aha” of incongruity.

There are many theories of humour in addition to those we have looked at. They include the play theory, the conflict theory, the ambivalence theory, the dispositional theory, the mastery theory, the Gestalt theory, the Piagetian theory and the configurational theory. Several of these, however, are really versions of the incongruity theory, which remains the most plausible account of why we laugh. On this view, humour springs from a clash of incongruous aspects – a sudden shift of perspective, an unexpected slippage of meaning, an arresting dissonance or discrepancy, a momentary defamiliarising of the familiar and so on. As a temporary ‘derailment of sense’, it involves the disruption of orderly thought processes or the violation of laws or conventions. It is, as D. H. Munro puts it, a breach in the usual order of events.

“The Duke’s a long time coming today,” said the Duchess, stirring her tea with the other hand.

Eagleton Cover

He talks about how humor gives us license to be momentarily freed from the shackles of reason and order, a revolt of the id against the superego.  But the key is that reason and order are quickly restored, so the lapse of control is risk free.

As a pure enunciation that expresses nothing but itself, laughter lacks intrinsic sense, rather like an animal’s cry, but despite this it is richly freighted with cultural meaning. As such, it has a kinship with music. Not only has laughter no inherent meaning, but at its most riotous and convulsive it involves the disintegration of sense, as the body tears one’s speech to fragments and the id pitches the ego into temporary disarray. As with grief, severe pain, extreme fear or blind rage, truly uproarious laughter involves a loss of physical self-control, as the body gets momentarily out of hand and we regress to the uncoordinated state of the infant. It is quite literally a bodily disorder.

It is just the same with the fantasy revolution of carnival, when the morning after the merriment the sun will rise on a thousand empty wine bottles, gnawed chicken legs and lost virginities and everyday life will resume, not without a certain ambiguous sense of relief. Or think of stage comedy, where the audience is never in any doubt that the order so delightfully disrupted will be restored, perhaps even reinforced by this fleeting attempt to flout it, and thus can blend its anarchic pleasures with a degree of conservative self-satisfaction.

Like Geary, Eagleton shows how a key to wit is its ability to hone down an issue to a sharp point, which is captured in a verbal succinctness that is akin to poetry.

Wit has a point, which is why it is sometimes compared to the thrust of a rapier. It is rapier-like in its swift, shapely, streamlined, agile, flashing, glancing, dazzling, dexterous, pointed, clashing, flamboyant aspects, but also because it can stab and wound.

A witticism is a self-conscious verbal performance, but it is one that minimises its own medium, compacting its words into the slimmest possible space in an awareness that the slightest surplus of signification might prove fatal to its success. As with poetry, every verbal unit must pull its weight, and the cadence, rhythm and resonance of a piece of wit may be vital to its impact. The tighter the organisation, the more a verbal slide, ambiguity, conceptual shift or trifling dislocation of syntax registers its effect.

There is a strong lesson for writers in this discussion of wit.  Sharpen the argument, tighten the prose, focus on “brevity, novelty, and clarity.”  Learn from the craft of the poet and the comedian.  Less is more.

One problem with academic writing in particular is that it takes itself too seriously.  It pays for us to keep our wit about us as we write scholarly papers, acknowledging that we don’t know quite as much about the subject as we are letting on.  Conceding a bit of weakness can be quite appealing.  Oscar Wilde:  “I can resist anything but temptation.”

Everyday life involves sustaining a number of polite fictions: that we take a consuming interest in the health and well-being of our most casual acquaintances, that we never think about sex for a single moment, that we are thoroughly familiar with the later work of Schoenberg and so on. It is pleasant to drop the mask for a moment and strike up a comedic solidarity of weakness.

It is as though we are all really play-actors in our conventional social roles, sticking solemnly to our meticulously scripted parts but ready at the slightest fluff or stumble to dissolve into infantile, uproariously irresponsible laughter at the sheer arbitrariness and absurdity of the whole charade.

And don’t forget what Mel Brooks said:  Tragedy is when you cut your finger, and comedy is when someone else walks into an open sewer and dies.

Posted in Pandemic, Resilience, Systems

The Triumph of Efficiency over Effectiveness: A Brief for Resilience through Redundancy

The current covid-19 pandemic has shown a lot of things that are wrong in American society, including terrible leadership, a frail social safety net, and a lack of investment in public goods.  But one that has particularly struck me is the way our socioeconomic structure has been taken over by the logic of efficiency at the expense of the logic of effectiveness.  In the name of efficiency, we have focused heavily on keeping costs down in both our economy and our health system.

Industry does this by developing global supply chains that take advantage of cheap third world labor and the low cost of shipping, and also by instituting just-in-time delivery of supplies to factories.  The former puts us at the mercy of events on the other side of the world, and the latter leaves us with no inventory to tide us over until supplies resume.  As we have seen, the result is that production can shut down overnight, with no easy way to get it going again any time soon.

There is a similar pattern with health care.  In the interest of cost efficiency, we have reduced the number of hospital beds and the amount of critical care supplies to what is needed during ordinary times.  Excess capacity, in both production and health care, is deemed wastefully inefficient.

The core problem with this strategy is that effectiveness depends on a certain degree of inefficiency.  To be effective, a system of production or medicine needs a cushion of excess capacity in order to tide it over during difficult times.  Both need a store of supplies that is considerably in excess of what is required under more routine circumstances.   And both need a certain amount of redundancy:  multiple suppliers of the same goods, multiple hospitals providing the same service.  For a system of production, health care, or national security to be resilient in the face of extreme demands, we have to be willing to subsidize the kind of excess capacity that we will need in a crisis.

The military has long understood this, so it is continually preparing for war in times of peace.  When a threat emerges, you don’t have time to spend a year or two getting up to speed with training, munitions, transportation, and — yes — hospital beds.  Because of this, we now see naval hospital ships gliding into the harbors of New York and Los Angeles to provide a small assist during our severe shortage of medical capacity.  What have the ships been doing for the last few years?  Preparing for a future emergency.  That’s very inefficient, but it’s also critically important for national survival.

Hospital ship

A healthy society — one with a strong survival instinct — needs to be willing to provide public subsidies for health emergencies that may be infrequent but are totally inevitable.  We need to build up excess capacity in the face of future uncertainty.  Industry already seems to be getting the idea that the fetish of lean productive capacity may be hazardous to the survival of many firms.  It seems likely that in the future firms will recruit multiple suppliers, instead of relying on a single one on the other side of the world, and will build up inventory.  They can’t afford another disaster like this one.

What worries me is that our system of health and public welfare may not take the same prudent steps in planning for an uncertain future.  In the last 50 years, our public sector has been hard-wired to the ethic of efficiency, in which prudent capacity building is seen as reckless waste and where major responsibilities of government are outsourced to private providers.

But if we show a little foresight, we might learn the lesson of the current pandemic and shore up our public capacity for withstanding future shocks to our system.

Posted in Higher Education, History

The Exceptionalism of American Higher Education

This post is an op-ed I published on my birthday (May 17) in 2018 on the online international opinion site, Project Syndicate.  The original is hidden behind a paywall; here are PDFs in English, Spanish, and Arabic.

It’s a brief essay about what is distinctive about the American system of higher education, drawn from my book, A Perfect Mess: The Unlikely Ascendancy of American Higher Education.

Web Image

The Exceptionalism of American Higher Education

 By David F. Labaree

STANFORD – In the second half of the twentieth century, American universities and colleges emerged as dominant players in the global ecology of higher education, a dominance that continues to this day. In terms of the number of Nobel laureates produced, eight of the world’s top ten universities are in the United States. Forty-two of the world’s 50 largest university endowments are in America. And, when ranked by research output, 15 of the top 20 institutions are based in the US.

Given these metrics, few can dispute that the American model of higher education is the world’s most successful. The question is why, and whether the US approach can be exported.

While America’s oldest universities date to the seventeenth and eighteenth centuries, the American system of higher education took shape in the early nineteenth century, under conditions in which the market was strong, the state was weak, and the church was divided. The “university” concept first arose in medieval Europe, with the strong support of monarchs and the Catholic Church. But in the US, with the exception of American military academies, the federal government never succeeded in establishing a system of higher education, and states were too poor to provide much support for colleges within their borders.

In these circumstances, early US colleges were nonprofit corporations that had state charters but little government money. Instead, they relied on student tuition, as well as donations from local elites, most of whom were more interested in how a college would increase the value of their adjoining property than they were in supporting education.

As a result, most US colleges were built on the frontier rather than in cities; the institutions were used to attract settlers to buy land. In this way, the first college towns were the equivalent of today’s golf-course developments – verdant enclaves that promised a better quality of life. At the same time, religious denominations competed to sponsor colleges in order to plant their own flags in new territories.

What this competition produced was a series of small, rural, and underfunded colleges led by administrators who had to learn to survive in a highly competitive environment, and where supply long preceded demand. As a result, schools were positioned to capitalize on the modest advantages they did have. Most were highly accessible (there was one in nearly every town), inexpensive (competition kept a lid on tuition), and geographically specific (colleges often became avatars for towns whose names they took). By 1880, there were five times as many colleges and universities in the US as in all of Europe.

The unintended consequence of this early saturation was a radically decentralized system of higher education that fostered a high degree of autonomy. The college president, though usually a clergyman, was in effect the CEO of a struggling enterprise that needed to attract and retain students and donors. Although university presidents often begged for, and occasionally received, state money, government funding was neither sizeable nor reliable.

In the absence of financial security, these educational CEOs had to hustle. They were good at building long-term relationships with local notables and tuition-paying students. Once states began opening public colleges in the mid-nineteenth century, the new institutions adapted to the existing system. State funding was still insufficient, so leaders of public colleges needed to attract tuition from students and donations from graduates.

By the start of the twentieth century, when enrollments began to climb in response to a growing demand for white-collar workers, the mixed public-private system was set to expand. Local autonomy gave institutions the freedom to establish a brand in the marketplace, and in the absence of strong state control, university leaders positioned their institutions to pursue opportunities and adapt to changing conditions. As funding for research grew after World War II, college administrators started competing vigorously for these new sources of support.

By the middle of the twentieth century, the US system of higher education reached maturity, as colleges capitalized on decentralized and autonomous governance structures to take advantage of the lush opportunities for growth that arose during the Cold War. Colleges were able to leverage the public support they had developed during the long lean years, when a university degree was highly accessible and cheap. With the exception of the oldest New England colleges – the “Ivies” – American universities never developed the elitist aura of Old World institutions like Oxford and Cambridge. Instead, they retained a populist ethos – embodied in football and fraternities and flexible academic standards – that continues to serve them well politically.

So, can other systems of higher learning adapt the US model of educational excellence to local conditions? The answer is straightforward: no.  You had to be there.

In the twenty-first century, it is not possible for colleges to emerge with the same degree of autonomy that American colleges enjoyed some 200 years ago before the development of a strong nation state. Today, most non-American institutions are wholly-owned subsidiaries of the state; governments set priorities, and administrators pursue them in a top-down manner. By contrast, American universities have retained the spirit of independence, and faculty are often given latitude to channel entrepreneurial ideas into new programs, institutes, schools, and research. This bottom-up structure makes the US system of higher education costly, consumer-driven, and deeply stratified. But this is also what gives the system its global edge.