Posted in Higher Education, Populism, Sports

Nobel prizes are great, but college football is why American universities dominate the globe

This post is a reprint of a piece I published in Quartz in 2017.  Here’s a link to the original.  It’s an effort to explore the distinctively populist character of American higher education. 

The idea is that a key to understanding the strong public support that US colleges and universities have managed to generate is their ability to reach beyond the narrow constituency for their elevated intellectual accomplishments.  The magic is that they are elite institutions that can also appeal to the populace.  And the peculiar world of college football provides a window into how the magic works.  

If you drove around the state of Michigan, where I used to live, you would see farmers on tractors wearing caps that typically bore the logo of either the University of Michigan (maize and blue) or Michigan State (green and white).  Maybe they or their kids attended the school, or maybe they were using its patented seed; but more often than not, it was because they were rooting for the football team.  It’s hard to overestimate the value for the higher ed system of drawing a broad base of popular support.


Nobel prizes are great, but college football is why American universities dominate the globe

David F. Labaree


College football costs too much. It exploits players and even damages their brains. It glorifies violence and promotes a thuggish brand of masculinity. And it undermines the college’s academic mission.

We hear this a lot, and much of it is true. But consider, for the moment, that football may help explain how the American system of higher education has become so successful. According to rankings computed by Jiao Tong University in Shanghai, American institutions account for 32 of the top 50 and 16 of the top 20 universities in the world. Also, between 2000 and 2014, 49% of all Nobel recipients were scholars at US universities.

In doing research for a book about the American system of higher education, I discovered that the key to its strength has been its ability to combine elite scholarship with populist appeal. And football played a key role in creating this alchemy.

American colleges developed their skills at attracting consumers and local supporters in the early nineteenth century, when the basic elements of the higher education system came together.

These colleges emerged in a very difficult environment, when the state was weak, the market strong, and the church divided. Unlike European forebears, who could depend on funding from the state or the established church, American colleges arose as not-for-profit corporations that received only sporadic funding from church denominations and state governments and instead had to rely on students and local donors. Often adopting the name of the town where they were located, these colleges could only survive, much less thrive, if they were able to attract and retain students from nearby towns and draw donations from alumni and local citizens.

In this quest, American colleges and universities have been uniquely and spectacularly successful. Go to any American campus and you will see that nearly everyone seems to be wearing the brand—the school colors, the logo, the image of the mascot, team jerseys. Unlike their European counterparts, American students don’t just attend an institution of higher education; they identify with it. It’s not just where they enroll; it’s who they are. In the US, the first question that twenty-year-old strangers ask each other is “Where do you go to college?” And half the time the question is moot because the speakers are wearing their college colors.

Football, along with other intercollegiate sports, has been enormously helpful in building the college brand. It helps draw together all of the members of the college community (students, faculty, staff, alumni, and local fans) in opposition to the hated rival in the big game. It promotes a loyalty that lasts for a lifetime, which translates into a broad political base for gaining state funding and a steady flow of private donations.

Thus one advantage that football brings to the American university is financial. It’s not that intercollegiate sports turn a large profit; in fact, the large majority lose money. Instead, it’s that they help mobilize a stream of public and private funding. And now that state appropriations for public higher education are in steady decline, public universities, like their private counterparts, are increasingly dependent on private funding.

Another advantage that football brings is that it softens the college’s elitism. Even the most elite American public research universities (Michigan, Wisconsin, Berkeley, UCLA) have a strong populist appeal. The same is true of a number of elite privates (think Stanford, Vanderbilt, Duke, USC). In large part this comes from their role as a venue for popular entertainment supported by their accessibility to a large number of undergraduate students. As a result, the US university has managed to avoid much of the social elitism of British institutions such as Oxford and Cambridge and the academic elitism of the German university dominated by the graduate school. Go to a college town on game day, and nearly every car, house, and person is sporting the college colors.

This broad support is particularly important these days, now that the red-blue political divide has begun to affect colleges as well. A recent study showed that, while most Americans still believe that colleges have a positive influence on the country, 58% of Republicans do not. History strongly suggests that football is going to be more effective than Nobel prizes in winning back their loyalty.

So let’s hear it for college football. It’s worth two cheers at least.

Posted in Academic writing, History, Writing

On Writing: How the King James Bible Shaped the English Language and Still Teaches Us How to Write

When you’re interested in improving your writing, it’s a good idea to have some models to work from.  I’ve presented some of my favorite models in this blog.  These have included a number of examples of good writing by both academics (Max Weber, E.P. Thompson, Jim March, and Mary Metz) and nonacademics (Frederick Douglass, Elmore Leonard).

Today I want to explore one of the two most influential forces in shaping the English language over the years:  The King James Bible.  (The other, of course, is Shakespeare.)  Earlier I presented one analysis by Ann Wroe, which focused on the thundering sound of the prose in this extraordinary text.  Today I want to draw on two other pieces of writing that explore the powerful model that this bible provides us all for how to write in English with power and grace.  One is by Adam Nicolson, who wrote a book on the subject (God’s Secretaries: The Making of the King James Bible).  The other, which I reprint in full at the end of this post, is by Charles McGrath.  

The impulse to produce a bible in English arose with the English Reformation, as a Protestant vernacular alternative to the Latin version that was canonical in the Catholic Church.  The text was commissioned in 1604 by King James, who succeeded Elizabeth I after her long reign, and it was constructed by a committee of 54 scholars.  They went back to the original texts in Hebrew and Greek, but they drew heavily on earlier English translations. 

The foundational translation was written by William Tyndale, who was executed for heresy in Antwerp in 1536, and this was reworked into what became known as the Geneva Bible by Calvinists who were living in Switzerland.  One aim of the committee was to produce a version that was more compatible with the beliefs of English and Scottish versions of the faith, but for James the primary impetus was to remove the anti-royalist tone that was embedded within the earlier text.  Recent scholars have concluded that 84% of the words in the King James New Testament and 76% in the Old Testament are Tyndale’s.

As Nicolson puts it, the language of the King James Bible is an amazing mix — “majestic but intimate, the voice of the universe somehow heard in the innermost part of the ear.”

You don’t have to be a Christian to hear the power of those words—simple in vocabulary, cosmic in scale, stately in their rhythms, deeply emotional in their impact. Most of us might think we have forgotten its words, but the King James Bible has sewn itself into the fabric of the language. If a child is ever the apple of her parents’ eye or an idea seems as old as the hills, if we are at death’s door or at our wits’ end, if we have gone through a baptism of fire or are about to bite the dust, if it seems at times that the blind are leading the blind or we are casting pearls before swine, if you are either buttering someone up or casting the first stone, the King James Bible, whether we know it or not, is speaking through us. The haves and have-nots, heads on plates, thieves in the night, scum of the earth, best until last, sackcloth and ashes, streets paved in gold, and the skin of one’s teeth: All of them have been transmitted to us by the translators who did their magnificent work 400 years ago.

Wouldn’t it be lovely if we academics could write in a way that sticks in people’s minds for 400 years?  Well, maybe that’s a bit too much to hope for.  But even if we can’t aspire to be epochally epigrammatic, there are still lessons we can learn from Tyndale and the Group of 54.  

One such lesson is the power of simplicity.  Too often scholars feel the compulsion to gussy up their language with jargon and Latinate constructions in the name of professionalism.  If any idiot can understand what you’re saying, then you’re not being a serious scholar.  But the magic of the King James Bible is that it uses simple Anglo-Saxon words to make the most profound statements.  Listen to this passage from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favor to men of skill, but time and chance happeneth to them all.

Or this sentence from Paul’s letter to the Philippians:

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Or the stunning opening line of the Gospel of John:

In the beginning was the Word, and the Word was with God, and the Word was God.

This is a text that can speak clearly to the untutored while at the same time elevating them to a higher plane.  For us it’s a model for how to match simplicity with profundity.


Why the King James Bible Endures


The King James Bible, which was first published 400 years ago next month, may be the single best thing ever accomplished by a committee. The Bible was the work of 54 scholars and clergymen who met over seven years in six nine-man subcommittees, called “companies.” In a preface to the new Bible, Miles Smith, one of the translators and a man so impatient that he once walked out of a boring sermon and went to the pub, wrote that anything new inevitably “endured many a storm of gainsaying, or opposition.” So there must have been disputes — shouting; table pounding; high-ruffed, black-gowned clergymen folding their arms and stomping out of the room — but there is no record of them. And the finished text shows none of the PowerPoint insipidness we associate with committee-speak or with later group translations like the 1961 New English Bible, which T.S. Eliot said did not even rise to “dignified mediocrity.” Far from bland, the King James Bible is one of the great masterpieces of English prose.

The issue of how, or even whether, to translate sacred texts was a fraught one in those days, often with political as well as religious overtones, and it still is. The Roman Catholic Church, for instance, recently decided to retranslate the missal used at Mass to make it more formal and less conversational. Critics have complained that the new text is awkward and archaic, while its defenders (some of whom probably still prefer the Mass in Latin) insist that’s just the point — that language a little out of the ordinary is more devotional and inspiring. No one would ever say that the King James Bible is an easy read. And yet its very oddness is part of its power.

From the start, the King James Bible was intended to be not a literary creation but rather a political and theological compromise between the established church and the growing Puritan movement. What the king cared about was clarity, simplicity, doctrinal orthodoxy. The translators worked hard on that, going back to the original Hebrew, Greek and Aramaic, and yet they also spent a lot of time tweaking the English text in the interest of euphony and musicality. Time and again the language seems to slip almost unconsciously into iambic pentameter — this was the age of Shakespeare, commentators are always reminding us — and right from the beginning the translators embraced the principles of repetition and the dramatic pause: “In the beginning God created the Heaven, and the Earth. And the earth was without forme, and void, and darkenesse was upon the face of the deepe: and the Spirit of God mooved upon the face of the waters.”

The influence of the King James Bible is so great that the list of idioms from it that have slipped into everyday speech, taking such deep root that we use them all the time without any awareness of their biblical origin, is practically endless: sour grapes; fatted calf; salt of the earth; drop in a bucket; skin of one’s teeth; apple of one’s eye; girded loins; feet of clay; whited sepulchers; filthy lucre; pearls before swine; fly in the ointment; fight the good fight; eat, drink and be merry.

But what we also love about this Bible is its strangeness — its weird punctuation, odd pronouns (as in “Our Father, which art in heaven”), all those verbs that end in “eth”: “In the morning it flourisheth, and groweth up; in the evening it is cut downe, and withereth.” As Robert Alter has demonstrated in his startling and revealing translations of the Psalms and the Pentateuch, the Hebrew Bible is even stranger, and in ways that the King James translators may not have entirely comprehended, and yet their text performs the great trick of being at once recognizably English and also a little bit foreign. You can hear its distinctive cadences in the speeches of Lincoln, the poetry of Whitman, the novels of Cormac McCarthy.

Even in its time, the King James Bible was deliberately archaic in grammar and phraseology: an expression like “yea, verily,” for example, had gone out of fashion some 50 years before. The translators didn’t want their Bible to sound contemporary, because they knew that contemporaneity quickly goes out of fashion. In his very useful guide, “God’s Secretaries: The Making of the King James Bible,” Adam Nicolson points out that when the Victorians came to revise the King James Bible in 1885, they embraced this principle wholeheartedly, and like those people who whack and scratch old furniture to make it look even more ancient, they threw in a lot of extra Jacobeanisms, like “howbeit,” “peradventure,” “holden” and “behooved.”

This is the opposite, of course, of the procedure followed by most new translations, starting with Good News for Modern Man, a paperback Bible published by the American Bible Society in 1966, whose goal was to reflect not the language of the Bible but its ideas, rendering them into current terms, so that Ezekiel 23:20, for example (“For she doted upon their paramours, whose flesh is as the flesh of asses, and whose issue is like the issue of horses”) becomes “She was filled with lust for oversexed men who had all the lustfulness of donkeys or stallions.”

There are countless new Bibles available now, many of them specialized: a Bible for couples, for gays and lesbians, for recovering addicts, for surfers, for skaters and skateboarders, not to mention a superheroes Bible for children. They are all “accessible,” but most are a little tone-deaf, lacking in grandeur and majesty, replacing “through a glasse, darkly,” for instance, with something along the lines of “like a dim image in a mirror.” But what this modernizing ignores is that the most powerful religious language is often a little elevated and incantatory, even ambiguous or just plain hard to understand. The new Catholic missal, for instance, does not seem to fear the forbidding phrase, replacing the statement that Jesus is “one in being with the Father” with the more complicated idea that he is “consubstantial with the Father.”

Not everyone prefers a God who talks like a pal or a guidance counselor. Even some of us who are nonbelievers want a God who speaketh like — well, God. The great achievement of the King James translators is to have arrived at a language that is both ordinary and heightened, that rings in the ear and lingers in the mind. And that all 54 of them were able to agree on every phrase, every comma, without sounding as gassy and evasive as the Financial Crisis Inquiry Commission, is little short of amazing, in itself proof of something like divine inspiration.


Posted in Economic growth, Education policy, Higher Education

Hausmann: The Education Myth

In this post I reprint a piece by Ricardo Hausmann (an economist at Harvard’s Kennedy School), which was published in Project Syndicate in 2015. Here’s a link to the original.  If you can’t get past the paywall, here’s a link to a PDF.

What I like about this piece is the way Hausmann challenges a central principle that guides educational policy, both domestic and international.  This is the belief that education is the central engine of economic growth.  According to this credo, increasing education is how we can increase productivity, GDP, and standard of living.  Hausmann shows, however, that the impact of education on economic growth is a lot less than promised.  Other factors appear to be more important than education in expanding economies, so investing in those strategies may be a lot more efficient than the costly process of increasing access to tertiary education.

As the former chief economist at the Inter-American Development Bank and the head of the Harvard Growth Lab, he seems to know something about this subject.  See what you think.


The Education Myth

TIRANA – In an era characterized by political polarization and policy paralysis, we should celebrate broad agreement on economic strategy wherever we find it. One such area of agreement is the idea that the key to inclusive growth is, as then-British Prime Minister Tony Blair put it in his 2001 reelection campaign, “education, education, education.” If we broaden access to schools and improve their quality, economic growth will be both substantial and equitable.

As the Italians would say: magari fosse vero. If only it were true. Enthusiasm for education is perfectly understandable. We want the best education possible for our children, because we want them to have a full range of options in life, to be able to appreciate its many marvels and participate in its challenges. We also know that better educated people tend to earn more.

Education’s importance is incontrovertible – teaching is my day job, so I certainly hope it is of some value. But whether it constitutes a strategy for economic growth is another matter. What most people mean by better education is more schooling; and, by higher-quality education, they mean the effective acquisition of skills (as revealed, say, by the test scores in the OECD’s standardized PISA exam). But does that really drive economic growth?

In fact, the push for better education is an experiment that has already been carried out globally. And, as my Harvard colleague Lant Pritchett has pointed out, the long-term payoff has been surprisingly disappointing.

In the 50 years from 1960 to 2010, the global labor force’s average time in school essentially tripled, from 2.8 years to 8.3 years. This means that the average worker in a median country went from less than half a primary education to more than half a high school education.

How much richer should these countries have expected to become? In 1965, France had a labor force that averaged less than five years of schooling and a per capita income of $14,000 (at 2005 prices). In 2010, countries with a similar level of education had a per capita income of less than $1,000.

In 1960, countries with an education level of 8.3 years of schooling were 5.5 times richer than those with 2.8 years of schooling. By contrast, countries that had increased their education from 2.8 years of schooling in 1960 to 8.3 years of schooling in 2010 were only 167% richer. Moreover, much of this increase cannot possibly be attributed to education, as workers in 2010 had the advantage of technologies that were 50 years more advanced than those in 1960. Clearly, something other than education is needed to generate prosperity.

As is often the case, the experience of individual countries is more revealing than the averages. China started with less education than Tunisia, Mexico, Kenya, or Iran in 1960, and had made less progress than them by 2010. And yet, in terms of economic growth, China blew all of them out of the water. The same can be said of Thailand and Indonesia vis-à-vis the Philippines, Cameroon, Ghana, or Panama. Again, the fast growers must be doing something in addition to providing education.

The experience within countries is also revealing. In Mexico, the average income of men aged 25-30 with a full primary education differs by more than a factor of three between poorer municipalities and richer ones. The difference cannot possibly be related to educational quality, because those who moved from poor municipalities to richer ones also earned more.

And there is more bad news for the “education, education, education” crowd: Most of the skills that a labor force possesses were acquired on the job. What a society knows how to do is known mainly in its firms, not in its schools. At most modern firms, fewer than 15% of the positions are open to entry-level workers, meaning that employers demand something that the education system cannot – and is not expected to – provide.

When presented with these facts, education enthusiasts often argue that education is a necessary but not a sufficient condition for growth. But in that case, investment in education is unlikely to deliver much if the other conditions are missing. After all, though the typical country with ten years of schooling had a per capita income of $30,000 in 2010, per capita income in Albania, Armenia, and Sri Lanka, which have achieved that level of schooling, was less than $5,000. Whatever is preventing these countries from becoming richer, it is not lack of education.

A country’s income is the sum of the output produced by each worker. To increase income, we need to increase worker productivity. Evidently, “something in the water,” other than education, makes people much more productive in some places than in others. A successful growth strategy needs to figure out what this is.

Make no mistake: education presumably does raise productivity. But to say that education is your growth strategy means that you are giving up on everyone who has already gone through the school system – most people over 18, and almost all over 25. It is a strategy that ignores the potential that is in 100% of today’s labor force, 98% of next year’s, and a huge number of people who will be around for the next half-century. An education-only strategy is bound to make all of them regret having been born too soon.

This generation is too old for education to be its growth strategy. It needs a growth strategy that will make it more productive – and thus able to create the resources to invest more in the education of the next generation. Our generation owes it to theirs to have a growth strategy for ourselves. And that strategy will not be about us going back to school.

Ricardo Hausmann, a former minister of planning of Venezuela and former Chief Economist at the Inter-American Development Bank, is a professor at Harvard’s John F. Kennedy School of Government and Director of the Harvard Growth Lab.

Posted in Academic writing, Writing

Elmore Leonard’s Master Class on Writing a Scene

As you may have figured out by now, I’m a big fan of Elmore Leonard.  I wrote an earlier post about the deft way he leads you into a story and introduces a character on the very first page of a book.  He never gives his readers fits the way we academic writers do ours, by making them plow through half a paper before they finally discover its point.

Here I want to show you one of the best scenes Leonard ever wrote — and he wrote a lot of them.  It’s from the book Be Cool, which is the sequel to another called Get Shorty.  Both were turned into films starring John Travolta as Chili Palmer.  Chili is a loan shark from back east who heads to Hollywood to collect on a marker, but what he really wants is to make movies.  As a favor, he looks up a producer who owes someone else money, and instead of collecting he pitches a story.  The rest of the series is about the cinematic mess that ensues.


In the scene below, Chili runs into a minor thug floating in a backyard swimming pool.  In the larger story this is a nothing scene, but it’s stunning how Leonard turns it into a tour de force.  In a virtuoso display of writing, he shows Chili effortlessly take the thug apart while also mesmerizing him.  Chili the movie maker rewrites the scene as he’s acting it out and then directs the thug on the raft how to play his own part more effectively.

Watch how Chili does it:

He got out of there, went into the living room and stood looking around, seeing it now as the lobby of an expensive health club, a spa: walk through there to the pool where one of the guests was drying out. From here Chili had a clear view of Derek, the kid floating in the pool on the yellow raft, sun beating down on him, his shades reflecting the light. Chili walked outside, crossed the terrace to where a quart bottle of Absolut, almost full, stood at the tiled edge of the pool. He looked down at Derek laid out in his undershorts.

He said, “Derek Stones?”

And watched the kid raise his head from the round edge of the raft, stare this way through his shades and let his head fall back again.

“Your mother called,” Chili said. “You have to go home.”

A wrought-iron table and chairs with cushions stood in an arbor of shade close to the house. Chili walked over and sat down. He watched Derek struggle to pull himself up and begin paddling with his hands, bringing the raft to the side of the pool; watched him try to crawl out and fall in the water when the raft moved out from under him. Derek made it finally, came over to the table and stood there showing Chili his skinny white body, his titty rings, his tats, his sagging wet underwear.

“You wake me up,” Derek said, “with some shit about I’m suppose to go home? I don’t even know you, man. You from the funeral home? Put on your undertaker suit and deliver Tommy’s ashes? No, I forgot, they’re being picked up. But you’re either from the funeral home or—shit, I know what you are, you’re a lawyer. I can tell ’cause all you assholes look alike.”

Chili said to him, “Derek, are you trying to fuck with me?”

Derek said, “Shit, if I was fucking with you, man, you’d know it.”

Chili was shaking his head before the words were out of Derek’s mouth.

“You sure that’s what you want to say? ‘If I was fuckin with you, man, you’d know it?’ The ‘If I was fucking with you’ part is okay, if that’s the way you want to go. But then, ‘you’d know it’—come on, you can do better than that.”

Derek took off his shades and squinted at him.

“The fuck’re you talking about?”

“You hear a line,” Chili said, “like in a movie. The one guy says, ‘Are you trying to fuck with me?’ The other guy comes back with, ‘If I was fuckin with you, man . . .’ and you want to hear what he says next ’cause it’s the punch line. He’s not gonna say, ‘You’d know it.’ When the first guy says, ‘Are you trying to fuck with me?’ he already knows the guy’s fuckin with him, it’s a rhetorical question. So the other guy isn’t gonna say ‘you’d know it.’ You understand what I’m saying? ‘You’d know it’ doesn’t do the job. You have to think of something better than that.”

“Wait,” Derek said, in his wet underwear, weaving a little, still half in the bag. “The first guy goes, ‘You trying to fuck with me?’ Okay, and the second guy goes, ‘If I was fucking with you . . . If I was fucking with you, man . . .’ “

Chili waited. “Yeah?”

“Okay, how about, ‘You wouldn’t live to tell about it?’”

“Jesus Christ,” Chili said, “come on, Derek, does that make sense? ‘You wouldn’t live to tell about it’? What’s that mean? Fuckin with a guy’s the same as taking him out?” Chili got up from the table. “What you have to do, Derek, you want to be cool, is have punch lines on the top of your head for every occasion. Guy says, ‘Are you trying to fuck with me?’ You’re ready, you come back with your line.” Chili said, “Think about it,” walking away. He went in the house through the glass doors to the bedroom.

Don’t you wish you could be Elmore Leonard and write a scene like that, or be Chili Palmer and construct it on the fly?  I sure do, and I’m not sure which role would be the more gratifying.

You could have a lot of fun picking apart the things that make the scene work.  Chili the movie maker walking into the living room and suddenly “seeing it now as the lobby of an expensive health club, a spa.”  Derek with “his skinny white body, his titty rings, his tats, his sagging wet underwear.”  The way Derek talks: “The fuck’re you talking about?”  Derek struggling to come up with the right line to replace the lame one he thought up himself.  Chili explaining the core dilemma of the writer, that you can’t ever set up a punchline and then fail to deliver.

But instead of explaining his joke, let’s just learn from his example.  Deliver what you promise.  Reward the effort that your readers invest in engaging with your work.  Have your key insight ready, deliver it on cue, and then walk away.  Never step on the punchline.

Posted in Uncategorized

Public Schools for Private Gain: The Declining American Commitment to Serving the Public Good

This post is a piece I published in Kappan in November, 2018.  Here’s a link to the original.

Public schools for private gain:

The declining American commitment to serving the public good

When schooling comes to be viewed mainly as a source of private benefit, both schools and society suffer grave consequences.

By David F. Labaree

We Americans tend to talk about public schooling as though we know what that term means. But in the complex educational landscape of the 21st century — where charter schools, private schools, and religious schools compete with traditional public schools for resources and support — it’s becoming less and less obvious what makes a school “public” at all.


A school is public, one might argue, if it meets certain formal criteria: It is funded by the public, governed by the public, and openly accessible to the public. But in that case, what should we make of charter schools, which are broadly understood to be public schools even though many are governed by private organizations? And how should we categorize religious schools that enroll students using public vouchers or tax credits, or public schools that use exams to restrict access? For that matter, don’t private schools often serve public interests, and don’t public schools often promote students’ private interests?

In short, our efforts to distinguish between public and nonpublic schools often oversimplify the ways in which today’s schools operate and the complex roles they play in our society. And such distinctions matter because they shape our thinking about educational policy. After all, if we’re unclear which schools deserve what kinds of funding and support, then how do we justify a system of elementary, secondary, and higher education that consumes more than $800 billion in taxes every year and occupies 10 to 20 or more years of every person’s life?

To clarify what we mean by public schooling, it’s helpful to broaden the discussion by considering not just the formal features of schools (their funding, governance, and admissions criteria) but also their aims. That is, to what extent do they pursue the public good, and to what extent do they serve private interests?

A public good is one that benefits all members of the community, whether or not they contribute to its upkeep or make use of it personally. In contrast, private goods benefit individuals, accruing only to those people who are able to take advantage of them. Thus, schooling is a public good to the extent that it helps everyone (including people who don’t have children in school); it is, by nature, inclusive. And schooling is a private good to the extent that it provides individuals with knowledge, skills, and credentials they can use to distinguish themselves from other people and get ahead in life; it is a form of private property, whose benefits are exclusive to those who own them.

People, organizations, and governments that create public goods tend to face what is known as the “free-rider” problem: If you can’t prevent people from enjoying goods for free, then they’ll have little incentive to pay for them. For example, if I can hang out at my local park whenever I want, then why should I donate to the park clean-up fund that my neighbors organized? I can get a free ride on them, enjoying a clean park without chipping in any of my own money.

The standard solution to the free-rider problem is to make it mandatory for everybody to support certain public goods (for example, efforts to reduce air pollution, fight crime, and monitor food safety) by using mechanisms such as general taxation. Indeed, this is how we’ve always supported our public schools. You may pay tuition to send your children to an exclusive, ivy-covered academy — or you might not have kids at all — but even so, you are required to pay taxes that fund schools for the whole community. Your family may not benefit personally from the services provided by, say, the elementary school down the road, but you do benefit, along with your neighbors, from having a well-funded school nearby. If local kids get a decent education and grow up to become gainfully employed, law-abiding citizens, that is a public good. It makes the entire community a better, safer, and happier place to live.

For much of American history, schooling has been understood in this way, first and foremost. For example, at the founding of our educational system, in the early 19th century, schools were supposed to turn young people into virtuous and competent citizens, a public good that was strongly political in nature. By the turn of the 20th century, schooling was still regarded mainly as a public good, but the mission had begun to shift from politics (creating citizens) to economics (training capable workers who can help promote broad prosperity). Over the subsequent decades, however, growing numbers of Americans came to view schooling mainly as a private good, producing credentials that allow individuals to get ahead, or stay ahead, in the competition for money and social status.

In this article, I argue that this shift in how Americans have viewed schooling — from conceiving of it mainly as a public good to defining it mostly as a private good — has led to dramatic changes in both the quality of the education that students receive and the kind of society we expect our schools to create. The institution that for much of our history helped bring us together into a community of citizens is increasingly dispersing us into a social hierarchy defined by the level of education we’ve attained.

The social functions of U.S. schooling: A short history[1]

In the early 19th century, the United States created a system of universal public schooling for the same reason that other emerging nations have done so over the years: to turn subjects of the King into citizens of the state.

Historically, public schooling has been the midwife of the nation-state, whose viability depends on its ability to convert the occupants of a particular territory into members of an imagined community, who come to see themselves for the first time as French, say, or American. This mission was particularly important for the U.S. because it was a republic entering a world that had long demonstrated hostility toward the survival of such states. From ancient Rome to the Italian city-states of the Renaissance, republics tended either to succumb to a tyrant or be destroyed in a Hobbesian war among irreconcilable interests.

As the Founders well knew, the survival of the American republic depended on its ability to form individuals into a republican community in which citizens were imbued with a commitment to the public good. Further, when the Common School Movement emerged in the 1820s and 30s, it faced an additional challenge, because the civic virtue of the fragile new republic was undergoing a vigorous challenge from the possessive individualism of the emerging free-market economy. Horace Mann, the leader of the movement in Massachusetts, put the case this way: “It may be an easy thing to make a Republic; but it is a very laborious thing to make Republicans; and woe to the republic that rests upon no better foundations than ignorance, selfishness, and passion.”[2]

The key characteristic of the new common school was not its curriculum or pedagogy but its commonality. It brought young people together into a single building where they engaged in a shared social and cultural experience, meant to counter the differences of social class that posed a serious threat to republican identity. Ideally, students would learn, in age-graded classrooms, to belong to a community of equals.

Further, the goal wasn’t just to teach them to internalize democratic norms but also to make it possible for capitalism to coexist with republicanism. For the free market to function, the state had to relax its control over individuals, allowing them to make their own decisions as rational actors. By learning to regulate their own thoughts and behaviors within the space of the classroom, students would become prepared for both commerce and citizenship, able to pursue their self-interests in the economic marketplace while at the same time participating in the political marketplace of ideas.

However, by the end of the 19th century, the survival of the republic was no longer in question. At that point, the U.S. was emerging as a world power, with booming industrial production, large-scale immigration, and a growing military presence. And while there was some pressure to turn peasant immigrants from Southern and Eastern Europe into American citizens, policy makers were even more concerned with turning them into modern industrial workers. In the roaring economy of the Progressive Era, then, the mission of schooling evolved: The most pressing goal was to strengthen the nation’s human capital (to put it in today’s terms).

Note, though, that schooling continued to be defined as a public good. When the workforce became more skilled and productivity increased, the whole country benefited. Overall, Americans’ standard of living improved. Thus, there remained a strong rationale for everyone to contribute to the education of other people’s children. And that rationale continues to resonate somewhat today. Even now, politicians and policy makers often talk about “investing” public funds in education as a way to promote economic growth, lifting all boats.

It was only in the 20th century that schooling came to be regarded as the primary means for individuals to obtain a good job. As their enrollments skyrocketed, high schools gave up the longstanding practice of providing a common course of study for all students and, instead, differentiated the curriculum, providing separate tracks designed for different career trajectories: the industrial course for factory workers, the business course for clerical workers, and the academic course for those bound for college (and then for work in management and the professions). As one school board president in the 1920s put it, “For a long time, all boys were trained to be President . . . Now we are training them to get jobs.”[3]

The new vocationalism lacked the grandeur of the mission set for the Common School, but it did address parents’ primary concern: how to ensure their children ended up with a good income and a secure social position, ideally by landing a job in the upper ranks of the new occupational hierarchy. Such work tended to be safer, cleaner, less manual, more mental, more secure, more prestigious, and better paid. And, crucially, each step up in the hierarchy required a higher level of education.

This new function of schooling — allocating desirable jobs — was in some ways just the flip side of the idea that schools exist to produce capable workers. What a policy maker views as a process of strengthening the nation’s human capital looks, to the individual student, like a way to attain personal status. For the student, school becomes purely a contest to obtain better educational qualifications and get better jobs. And from this angle, school is a decidedly private good. The pursuit of high-status jobs is a zero-sum game. If you get hired for a position, then I don’t.

All but gone is the assumption that the purpose of schooling is to benefit the community at large. Less and less often do Americans conceive of education as a cooperative effort in nation-building or a collective investment in workforce development. Increasingly, rather, school comes to be viewed as an intense competition among individuals to get ahead in society and avoid being left behind. It has begun to look, to a great extent, like a means of creating winners and losers in the pursuit of academic merit, with the results determining who wins and loses in life.

Consequences of the rise of schooling as a private good

When schools become a mechanism for allocating social status, they provoke intense competition over invidious educational distinctions. But while schooling may serve as a very private good, that doesn’t mean it can’t also function, at the same time, as a public good.

At one level, everyone who attends a school benefits personally from the knowledge, skills, and socialization they gain there, as well as from any diplomas they receive, which certify their learning and provide a signal to the job market about their relative employability for a variety of occupational positions. Viewed from this angle, even students at the most traditional public schools accrue private goods.

And at another level, everyone in society benefits from having a well-educated and successful group of fellow citizens and co-workers. One of the core concepts of neoclassical economics is that the pursuit of private and personal gain often has public benefits. People with more education tend to commit fewer crimes, participate more fully in public life, vote more often, and contribute to civil society through engagement with a variety of nongovernmental organizations. They are more likely to assume positions of political, social, and economic leadership and to populate the professions. And they tend to be more productive workers, which helps both to spur economic growth and to increase the standard of living for the population as a whole. The fact that these benefits may be unintended consequences, resulting indirectly from people seeking personal gain and glory, doesn’t make them any less significant.

Consider the classic statement of this phenomenon by Adam Smith: “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest . . . Nobody but a beggar chuses [sic] to depend chiefly upon the benevolence of his fellow-citizens.”[4] From this perspective, the competition for educational advantage benefits not only the individuals who gain the credentials but also the public at large. When we strengthen the level of skill in the workforce, everybody’s quality of life improves. And if true, this solves the free-rider problem: Rather than compelling people to contribute to the public good, we can simply encourage them to pursue their private interests, trusting that this will, over the long haul, produce the greatest benefits for everybody.

The problem is that, whether or not this theory is correct, few of us can afford to wait for the long haul. Encouraging individuals to pursue their private interests doesn’t do much for the vast numbers of people who have serious obstacles to confront in the short term. Moreover, while a rising tide of economic growth may raise all boats, this doesn’t change the fact that most kids are born in dinghies, not yachts.

We know from decades of research that children from lower-income backgrounds tend to attend worse schools than those born into affluent families, are less likely to be in the high-level reading group or the honors track, and are much less likely to graduate from high school. If they go to college, they are less likely to attend a four-year institution and are less likely to earn a degree. And every year, it becomes less and less likely that a person who was born in a dinghy will ever end up owning a yacht, much less raise their children in one.

For those families that do enjoy greater wealth, the public benefits of schooling are easy to miss, whereas the private benefits are material, immediate, and personal. When push comes to shove, the latter are simply more compelling. It’s no surprise that affluent parents will deploy their economic, social, and cultural capital to gain as many educational advantages as they can for their children. They move to the best school district they can afford or send their kids to private school; they make sure they get into the classes with the best teachers and gain access to the gifted program in elementary school and the Advanced Placement program in high school. And they push their children toward the most selective college they can attend. To do anything less would be a disservice.

Sure, in the name of fairness and justice they could choose to send their children to the same lousy schools that less fortunate people are forced to attend. But even if they support efforts to improve the quality of educational opportunities for other people’s children, what kind of parent would put their children’s future at risk for a political principle?

In short, the pursuit of private educational goods drives most parents’ immediate decisions, while efforts to promote the public good are deferred to the indeterminate realm of political action for possible resolution in the distant future. It’s not that anybody wants to punish other people’s children; it’s just that they need to take care of their own. But when the public good is forever postponed, the effects are punishing indeed. And when schooling comes to be viewed solely as a means of private advancement, the consequences are dismal for both school and society:

  • Over time, the market rewards the accumulation of educational credentials more than it values knowledge and skills. For example, employers will pay a higher salary to a person who squeaked out a college degree than to one who excelled in all four years of college but left one credit short of a diploma.
  • As a result, students learn early on that the goal is to acquire as many grades, credits, and degrees as possible rather than the knowledge and skills that these tokens are supposed to represent. So much the better if you can find ways to game the system (by, for example, studying only what’s likely to be on the test, buttering up the teacher, or just plain cheating). Only a sucker pays the sticker price.
  • In turn, schooling becomes more and more stratified, in two related ways: First, students have incentives to pursue the highest level of schooling they can (a graduate degree is better than a four-year degree, which is better than a two-year degree, and so on). Second, they have incentives to get into the highest-status institutions they can, at every level.
  • Cooperative learning becomes a dangerous waste of time. Students have no incentive to learn from their classmates, only to maximize their own ranking relative to them.
  • Families with more economic, cultural, and social capital begin to hoard educational opportunities for their own children, elbowing others aside for access to the most desirable schools, teachers, and other resources.
  • This in turn threatens the legitimacy of the whole system, undermining the claim that people succeed according to their educational merit.
  • Moreover, people with the highest-status degrees and jobs tend to marry each other and pass their concentrated levels of advantage on to their own children, which only widens the divide across subsequent generations.
  • Enjoying greater wealth, those parents choose to send their children to private schools, or they choose to live in neighborhoods with elite public schools — in any case, the nominally “public” school hardly differs from the private academy, except that while it enjoys public subsidies, its boundaries have been drawn up in a way that denies access to other people’s children. (In effect, such a school is a public resource turned toward private ends.)

My point is that over the last several decades, as schooling has come to be viewed mainly as a source of private benefit rather than as a public good, the consequences have been dramatic for both schools and society. Increasingly prized as a resource by affluent families, traditional public schooling has become a mechanism by which to reinforce their advantages. And as a result, it has become harder and harder to distinguish what is truly public about our public schools.

At a deeper level, as we have privatized our vision of public schooling, we have shown a willingness to back away from the social commitment to the public good that motivated the formation of the American republic and the common school system. We have grown all too comfortable in allowing the fate of other people’s children to be determined by the unequal competition among consumers for social advantage through schooling. The invisible hand of the market may work for the general benefit in the economic activities of the butcher and the baker but not in the political project of creating citizens.

[1] The discussion in this section is drawn from the following: David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal, 34:1 (Spring, 1997), pp. 39-81; David F. Labaree, “Founding the American School System,” in Someone Has to Fail: The Zero-Sum Game of Public Schooling (Cambridge, MA: Harvard University Press, 2010), pp. 42-79.

[2] Horace Mann, Fifth Annual Report to the Massachusetts Board of Education (Boston: Board of Education, 1841).

[3] Robert S. Lynd and Helen Merrell Lynd, Middletown (New York: Harcourt, Brace and World, 1929), p. 194.

[4] Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, Edwin Cannan, ed. (Chicago: University of Chicago Press, 1776/1976), book 1, chapter 2, p. 18.



Posted in Empire, History, Resilience, War

Resilience in the Face of Climate Change and Epidemic: Ancient Rome and Today’s America

Tell me if you think this sounds familiar:  In its latter years (500-700 CE), the Roman Empire faced a formidable challenge from two devastating environmental forces — dramatic climate change and massive epidemic.  As Mark Twain is supposed to have said, “History doesn’t repeat itself, but it often rhymes.”

During our own bout of climate change and ravaging disease, I’ve been reading Kyle Harper’s book The Fate of Rome: Climate, Disease, and the End of Empire.  The whole time, rhymes were running through my head.  We all know that things did not turn out well for Rome, whose civilization went through the most devastating collapse in world history.  The state disintegrated, the population fell by half, and the European standard of living did not return to its level of the year 500 until a thousand years later.


So Rome ended badly, but what about us?  The American empire may be in decline, but it’s not like the end is near.  Rome was dependent on animal power and a fragile agricultural base, and its medical “system” did more harm than good.  All in all, we seem much better equipped to deal with climate change and disease than the Romans were.  As a result, I’m not suggesting that we’re headed for the same calamitous fall that befell Roman civilization, but I do think we can learn something important by observing how they handled their own situation.

What’s so interesting about the fall of Rome is that it took so long.  The empire held on for 500 years, even under circumstances where its fall was thoroughly overdetermined.  The traditional story of the fall is about fraying political institutions in an overextended empire, surrounded by surging “barbarian” states that were prodded into existence by Rome’s looming threat.

To this political account, Harper adds the environment.  The climate was originally very kind to Rome, supporting growth during a long period of warm and wet weather known as the Roman Climate Optimum (200 BCE to 150 CE).  But then conditions grew increasingly unstable, leading to the Late Antique Little Ice Age (450-700), with massive crop failures brought on by a drop in solar energy and massive volcanic eruptions.  In the midst of this arose a series of epidemics, fostered (like our own) by the opening up of trade routes, which culminated in the bubonic plague (541-749) that killed off half of the populace.

What kept Rome going all this time was a set of resilient civic institutions.  That’s what I think we can learn from the Roman case.  My fear is that our own institutions are considerably more fragile.  In this analysis, I’m picking up on a theme from an earlier blog post:  The Triumph of Efficiency over Effectiveness: A Brief for Resilience through Redundancy.

Here is how Harper describes the institutional framework of this empire:

Rome was ruled by a monarch in all but name, who administered a far-flung empire with the aid, first and foremost, of the senatorial aristocracy. It was an aristocracy of wealth, with property requirements for entry, and it was a competitive aristocracy of service. Low rates of intergenerational succession meant that most aristocrats “came from families that sent representatives into politics for only one generation.”

The emperor was the commander-in-chief, but senators jealously guarded the right to the high posts of legionary command and prestigious governorships. The imperial aristocracy was able to control the empire with a remarkably thin layer of administrators. This light skein was only successful because it was cast over a foundational layer of civic aristocracies across the empire. The cities have been called the “load-bearing” pillars of the empire, and their elites were afforded special inducements, including Roman citizenship and pathways into the imperial aristocracy. The low rates of central taxation left ample room for peculation by the civic aristocracy. The enormous success of the “grand bargain” between the military monarchy and the local elites allowed imperial society to absorb profound but gradual changes—like the provincialization of the aristocracy and bureaucracy—without jolting the social order.

The Roman frontier system epitomized the resilience of the empire; it was designed to bend but not break, to bide time for the vast logistical superiority of the empire to overwhelm Rome’s adversaries. Even the most developed rival in the orbit of Rome would melt before the advance of the legionary columns. The Roman peace, then, was not the prolonged absence of war, but its dispersion outward along the edges of empire.

The grand and decisive imperial bargain, which defined the imperial regime in the first two centuries, was the implicit accord between the empire and “the cities.” The Romans ruled through cities and their noble families. The Romans coaxed the civic aristocracies of the Mediterranean world into their imperial project. By leaving tax collection in the hands of the local gentry, and bestowing citizenship liberally, the Romans co-opted elites across three continents into the governing class and thereby managed to command a vast empire with only a few hundred high-ranking Roman officials. In retrospect, it is surprising how quickly the empire ceased to be a mechanism of naked extraction, and became a sort of commonwealth.

Note that last part:  Rome “became a sort of commonwealth.”  It conquered much of the Western world and incorporated one-quarter of the earth’s population, but the conquered territories were generally better off under Rome than they had been before — benefiting from citizenship, expanded trade, and growing standards of living.  It was a remarkably stratified society, but its benefits extended even to the lower orders.  (For more on this issue, see my earlier post about Walter Scheidel’s book on the social benefits of war.)

At the heart of the Roman system were three cultural norms that guided civic life: self-sufficiency, reciprocity, and patronage.  Let me focus on the last of these, which seems to be dangerously absent in our own society at the moment.

The expectation of paternalistic generosity lay heavily on the rich, ensuring that less exalted members of society had an emergency lien on their stores of wealth. Of course, the rich charged for this insurance, in the form of respect and loyalty, and in the Roman Empire there was a constant need to monitor the fine line between clientage and dependence.

A key part of the grand bargain engineered by Rome was the state’s responsibility to feed its citizens.

The grain dole was the political entitlement of an imperial people, under the patronage of the emperor.

Preparation for famine — a chronic threat to premodern agricultural societies — was at the center of the system’s institutional resilience.  This was particularly important in an empire as thoroughly city-centered as Rome.  Keep in mind that Rome during the empire was the first city in the world to reach 1 million residents; the second was London, 1,500 years later.

These strategies of resilience, writ large, were engrained in the practices of the ancient city. Diversification and storage were adapted to scale. Urban food storage was the first line of redundancy. Under the Roman Empire, the monumental dimensions of storage facilities attest the political priority of food security. Moreover, cities grew organically along the waters, where they were not confined to dependence on a single hinterland.

When food crisis did unfold, the Roman government stood ready to intervene, sometimes through direct provision but more often simply by the suppression of unseemly venality.

The most familiar system of resilience was the food supply of Rome. The remnants of the monumental public granaries that stored the food supply of the metropolis are still breathtaking.

Wouldn’t it be nice if we in the U.S. could face the challenges of climate change and pandemic as a commonwealth?  If so, we would be working to increase the resilience of our system: by sharing the burden and spreading the wealth; by building up redundancy to prepare for future challenges; and by freeing ourselves from the ideology of economic efficiency in the service of social effectiveness.  Wouldn’t that be nice.