Posted in Academic writing, Writing

Dumitrescu: How to Write Well

This post is a review essay by Irina Dumitrescu about five books that explore how to write well.  It appeared in the Times Literary Supplement on March 20, 2020.  Here’s a link to the original.

She’s reviewing five books about writing.  Is there any writing task more fraught with peril than trying to write about writing?  Anything less than superlative literary style would constitute an abject failure.  Fortunately, this author is up to the challenge.

Here are some of my favorite passages.

She reminds us of an enduring truth about writing.  Everyone starts by imitating others.  You need models to work from:

Shakespeare patterned his comedies on Terence’s Latin romps, and Terence stole his plots from the Greek Menander. Milton copied Virgil, who plagiarized Homer. The history of literature is a catwalk on which the same old skeletons keep coming out in new clothes.

On the other hand,

Style unsettles this pedagogy of models and moulds. As the novelist Elizabeth McCracken once told Ben Yagoda in an interview, “A writer’s voice lives in his or her bad habits … the trick is to make them charming bad habits”. Readers longing for something beyond mere information – verbal fireworks, the tremor of an authentic connection, a touch of quiet magic – will do well to find the rule-breakers on the bookshop shelf. Idiosyncrasies (even mistakes) account for the specific charm of a given author, and they slyly open the door to decisions of taste.

One author, Amitava Kumar, makes the case against turgid academic writing in a book that she admiringly calls

an inspiring mess, a book that in its haphazard organization is its own argument for playfulness and improvisation. Like Warner, Kumar cannot stand “the five-paragraph costume armor of the high school essay”. Nor does he have much patience for other formulaic aspects of academic writing: didactic topic sentences, or jargony vocabulary such as “emergence” and “post-capitalist hegemony”. In his description of a website that produces meaningless theoretical prose at the touch of a button, Kumar notes that “the academy is the original random sentence generator”.

Of all the books she discusses, my favorite (and hers, I think) is 

Joe Moran’s exquisite book First You Write a Sentence…. As befits a cultural historian, Moran compares writing sentences to crafting other artisanal objects – they are artworks and spaces of refuge, gifts with which an author shapes the world to be more beautiful and capacious and kind. Like a town square or a city park, “a well-made sentence shows … silent solicitude for others. It cares”.

Moran’s own sentences are so deliciously epigrammatic that I considered giving up chocolate in favour of re-reading his book. Because he has dedicated an entire volume to one small form, he has the leisure to attend to fine details. As he explores sentences from every angle, he describes the relative heat of different verbs, the delicately shading nuances of punctuation choices, how short words feel in the mouth, the opportunity of white space. “Learn to love the feel of sentences,” he writes with a connoisseur’s delight, “the arcs of anticipation and suspense, the balancing phrases, the wholesome little snap of the full stop.”

Enjoy.

How to write well

Rules, style and the ‘well-made sentence’

By Irina Dumitrescu

IN THIS REVIEW
WHY THEY CAN’T WRITE
Killing the five-paragraph essay and other necessities
288pp. Johns Hopkins University Press. £20.50 (US $27.95).
John Warner
WRITING TO PERSUADE
How to bring people over to your side
224pp. Norton. £18.99 (US $26.95).
Trish Hall
EVERY DAY I WRITE THE BOOK
Notes on style
256pp. Duke University Press. Paperback, £20.99 (US $24.95).
Amitava Kumar
FIRST YOU WRITE A SENTENCE
The elements of reading, writing … and life
240pp. Penguin. Paperback, £9.99.
Joe Moran
MEANDER, SPIRAL, EXPLODE
Design and pattern in narrative
272pp. Catapult. Paperback, $16.95.
Jane Alison

In high school a close friend told me about a lesson her father had received when he was learning to write in English. Any essay could be improved by the addition of one specific phrase: “in a world tormented by the spectre of thermonuclear holocaust”. We thought it would be hilarious to surprise our own teachers with this gem, but nothing came of it. Twenty years later, as I looked through the files on an old computer, I discovered my high school compositions. There, at the end of an essay on Hugo Grotius and just war theory I must have written for this purpose alone, was that irresistible rhetorical flourish.

As much as we might admire what is fresh and innovative, we all learn by imitating patterns. Babies learning to speak do not immediately acquire the full grammar of their mother tongue and a vocabulary to slot into it, but inch slowly into the language by repeating basic phrases, then varying them. Adults learning a foreign language are wise to do the same. Pianists run through exercises to train their dexterity, basketball players run through their plays, dancers rehearse combos they can later slip into longer choreographies. To be called “formulaic” is no compliment, but whenever people express themselves or take action in the world, they rely on familiar formulas.

Writing advice is caught in this paradox. Mavens of clear communication know that simple rules are memorable and easy to follow. Use a verb instead of a noun. Change passive to active. Cut unnecessary words. Avoid jargon. No aspiring author will make the language dance by following these dictates, but they will be understood, and that is something. The same holds for structure. In school, pupils are drilled in the basic shapes of arguments, such as the “rule of three”, the “five-paragraph essay” or, à l’américaine, the Hamburger Essay (the main argument being the meat). Would-be novelists weigh their Fichtean Curves against their Hero’s Journeys, and screenwriters can buy software that will ensure their movie script hits every beat prescribed by Blake Snyder in his bestselling book Save the Cat! (2005). And why not? Shakespeare patterned his comedies on Terence’s Latin romps, and Terence stole his plots from the Greek Menander. Milton copied Virgil, who plagiarized Homer. The history of literature is a catwalk on which the same old skeletons keep coming out in new clothes.

Style unsettles this pedagogy of models and moulds. As the novelist Elizabeth McCracken once told Ben Yagoda in an interview, “A writer’s voice lives in his or her bad habits … the trick is to make them charming bad habits”. Readers longing for something beyond mere information – verbal fireworks, the tremor of an authentic connection, a touch of quiet magic – will do well to find the rule-breakers on the bookshop shelf. Idiosyncrasies (even mistakes) account for the specific charm of a given author, and they slyly open the door to decisions of taste. Think of David Foster Wallace’s endless sentences, George R. R. Martin’s neologisms, the faux-naivety of Gertrude Stein. In his book on literary voice, The Sound on the Page (2004), Yagoda argues that style reveals “something essential” and impossible to conceal about an author’s character. The notion that the way a person arranges words is inextricably tied to their moral core has a long history, but its implication for teaching writing is what interests me here: convince or compel writers to cleave too closely to a set of prescribed rules, and you chip away at who they are.

This explains why John Warner’s book about writing, Why They Can’t Write: Killing the five-paragraph essay and other necessities, contains almost no advice on how to write. A long-time college instructor, Warner hints at his argument in his subtitle: his is a polemical take on American standardized testing practices, socioeconomic conditions, and institutions of learning that destroy any love or motivation young people might have for expressing themselves in writing. Against the perennial assumption that today’s students are too lazy and precious to work hard, Warner holds firm: “Students are not entitled or coddled. They are defeated”. The symbol of the US’s misguided approach to education is the argumentative structure drilled into each teenager as a shortcut for thinking and reflection. “If writing is like exercise,” he quips, “the five-paragraph essay is like one of those ab belt doohickeys that claim to electroshock your core into a six-pack.”

What is to blame for students’ bad writing? According to Warner, the entire context in which it is taught. He rails against school systems that privilege shallow “achievement” over curiosity and learning, a culture of “surveillance and compliance” (including apps that track students’ behaviour and report it to parents in real time), an obsession with standardized testing that is fundamentally inimical to thoughtful reading and writing, and a love of faddish psychological theories and worthless digital learning projects.

It is easy for a lover of good writing to share Warner’s anger at the shallow and mechanistic culture of public education in the United States, easy to smile knowingly when he notes that standardized tests prize students’ ability to produce “pseudo-academic BS”, meaningless convoluted sentences cobbled together out of sophisticated-sounding words. Warner’s argument against teaching grammar is harder to swallow. Seeing in grammar yet another case of rules and correctness being put ahead of thoughtful engagement, Warner claims, “the sentence is not the basic skill or fundamental unit of writing. The idea is”. Instead of assignments, he gives his students “writing experiences”, interlocked prompts designed to hone their ability to observe, analyse and communicate. His position on grammatical teaching is a step too far: it can be a tool as much as a shackle. Still, writers may recognize the truth of Warner’s reflection that “what looks like a problem with basic sentence construction may instead be a struggle to find an idea for the page”.

Trish Hall shares Warner’s belief that effective writing means putting thinking before craft. Hall ran the New York Times’s op-ed page for half a decade, and in Writing To Persuade she shows us how to succeed at one kind of formula, the short newspaper opinion piece. The book is slim, filled out with personal recollections in muted prose, and enlivened by the occasional celebrity anecdote. Her target audience seems to be the kind of educated professionals who regularly read the New York Times, who may even write as part of their work, but who have not thought about what it means to address those who do not share their opinions. Hall does offer useful, sometimes surprising, tips on avoiding jargon, finding a writerly voice, and telling a story, but most of the book is dedicated to cultivating the humanity beneath the writing.

“I can’t overstate the value of putting down your phone and having conversations with people”, she writes. Persuasion is not simply a matter of hammering one’s own point through with unassailable facts and arguments. It is a question of listening to other people, cultivating empathy for their experience, drawing on shared values to reach common ground. It also demands vulnerability; Hall praises writers who “reveal something almost painfully personal even as they connect to a larger issue or story that feels both universal and urgent”.

Much of her advice would not have surprised a classical rhetorician. She even quotes Cicero’s famous remark about it being a mistake to try “to compel others to believe and live as we do”, a mantra for this book. At her best, Hall outlines a rhetoric that is also a guide to living peaceably with others: understanding their desires, connecting. A simple experiment – not finishing other people’s sentences even when you think you know what they will say – exemplifies this understated wisdom. At her worst, Hall is too much the marketer, as when she notes that strong emotions play well on social media and enjoins her readers to “stay away from depressing images and crying people”. There ought to be enough space in a newspaper for frankly expressed opinions about the suffering of humanity. What she demonstrates, however, is that writing for an audience is a social act. Writing To Persuade is a stealth guide to manners for living in a world where conversations are as likely to take place in 280 characters on a screen as they are at a dinner table.

In Hall’s hands, considering other people means following a programmatic set of writing instructions. Amitava Kumar, a scholar who has written well-regarded works of memoir and journalism, thinks another way is possible. In Every Day I Write the Book: Notes on style, he breaks out of the strictures of academic prose by creating a virtual community of other writers on his pages. The book is a collection of short meditations on different topics related to writing, its form and practice, primarily in the university. Kumar’s style is poised and lyrical elsewhere, but here he takes on a familiar, relaxed persona, and he often lets his interlocutors have the best lines. Selections from his reading bump up against email conversations, chats on the Vassar campus, and Facebook comments; it is a noisy party where everyone has a bon mot at the ready. The book itself is assembled like a scrapbook, filled with reproductions of photographs, screenshots, handwritten notes and newspaper clippings Kumar has gathered over the years.

It is, in other words, an inspiring mess, a book that in its haphazard organization is its own argument for playfulness and improvisation. Like Warner, Kumar cannot stand “the five-paragraph costume armor of the high school essay”. Nor does he have much patience for other formulaic aspects of academic writing: didactic topic sentences, or jargony vocabulary such as “emergence” and “post-capitalist hegemony”. In his description of a website that produces meaningless theoretical prose at the touch of a button, Kumar notes that “the academy is the original random sentence generator”. He is not anti-intellectual; his loyalties lie with the university, even as he understands its provinciality too well. But he asks his fellow writers to hold on fiercely to the weird and whimsical elements in their own creations, to be “inventive in our use of language and in our search for form”.

This means many things in practice. Kumar includes a section of unusual writing exercises, many of them borrowed from other authors: rewriting a brilliant passage badly to see what made it work; scribbling just what will fit on a Post-it Note to begin a longer piece; writing letters to public figures. Other moments are about connection. In a chapter on voice, he quotes the poet and novelist Bhanu Kapil’s description of how she began a series of interviews with Indian and Pakistani women: “The first question I asked, to a young Muslim woman … Indian parents, thick Glaswegian accent, [was] ‘Who was responsible for the suffering of your mother?’ She burst into tears”. That one question could fill many libraries. Invention also means embracing collaboration with editors, and understanding writing as “a practice of revision and extension and opening”. Kumar calls for loyalty to one’s creative calling, wherever it may lead. The reward? Nothing less than freedom and immortality.

But surely craft still matters? We may accept that writing is rooted in the ethical relationships between teachers, students, writers, editors and those silent imagined readers. Does this mean that the skill of conveying an idea in language in a clear and aesthetically pleasing fashion is nothing but the icing on the cake? Joe Moran’s exquisite book First You Write a Sentence: The elements of reading, writing … and life suggests otherwise. As befits a cultural historian, Moran compares writing sentences to crafting other artisanal objects – they are artworks and spaces of refuge, gifts with which an author shapes the world to be more beautiful and capacious and kind. Like a town square or a city park, “a well-made sentence shows … silent solicitude for others. It cares”.

Moran’s own sentences are so deliciously epigrammatic that I considered giving up chocolate in favour of re-reading his book. Because he has dedicated an entire volume to one small form, he has the leisure to attend to fine details. As he explores sentences from every angle, he describes the relative heat of different verbs, the delicately shading nuances of punctuation choices, how short words feel in the mouth, the opportunity of white space. “Learn to love the feel of sentences,” he writes with a connoisseur’s delight, “the arcs of anticipation and suspense, the balancing phrases, the wholesome little snap of the full stop.”

The book is full of advice, but Moran’s rules are not meant to inhibit. He will happily tell you how to achieve a style clear as glass, then praise the rococo rhetorician who “wants to forge reality through words, not just gaze at it blankly through a window”. He is more mentor than instructor, slowly guiding us to notice and appreciate the intricacies of a well-forged phrase. And he does so with tender generosity towards the unloved heroism of “cussedly making sentences that no one asked for and no one will be obliged to read”. As pleasurable as it is to watch Moran unfold the possibilities of an English sentence, his finest contribution is an understanding of the psychology – fragile, labile – of the writer. He knows that a writer must fight distraction, bad verbal habits, and the cheap appeal of early drafts to find their voice. There it is! “It was lost amid your dishevelled thoughts and wordless anxieties, until you pulled it out of yourself, as a flowing line of sentences.”

Human beings take pleasure in noticing nature’s patterns, according to Moran, and these patterns help them to thrive, sometimes in unforeseen ways. A sentence is also form imposed on chaos, and his suggestion that it has an organic role in the survival of the species might seem bold. (Though how many of us owe our lives to a parent who said the right words in a pleasing order?) The novelist Jane Alison’s invigorating book Meander, Spiral, Explode: Design and pattern in narrative follows a similar impulse, seeking the elegant forms that order nature in the structures of stories and novels. Her bugbear is the dramatic arc, the shape that Aristotle noticed in the tragedies of his time but that has become a tyrant of creative writing instruction. “Something that swells and tautens until climax, then collapses? Bit masculo-sexual, no?” Alison has other ideas for excitement.

In brief, compelling meditations on contemporary fiction, she teases out figures we might expect to spy from a plane window or in the heart of a tree. Here are corkscrews and wavelets and fractals and networks of cells. Is this forced? Alison recognizes the cheekiness of her project, knows her readings of form may not convince every reader. Her aim is not to classify tales, to pin them like butterflies on a styrofoam board. She knows, for example, that any complex literary narrative will create a network of associations in the reader’s mind. Her goal is to imagine how a reader might experience a story, looking for “structures that create an inner sensation of traveling toward something and leave a sense of shape behind, so that the stories feel organized”.

Shapes appear in Alison’s mind as clusters of images, so what begins as literary analysis condenses into a small poem. For “meander”, Alison asks us to “picture a river curving and kinking, a snake in motion, a snail’s silver trail, or the path left by a goat”. She speaks of the use of colour in narrative “as a unifying wash, a secret code, or a stealthy constellation”. The point is not ornamentation, though Alison can write a sentence lush enough to drown in, but tempting fiction writers to render life more closely. Against the grand tragedy of the narrative arc, she proposes small undulations: “Dispersed patterning, a sense of ripple or oscillation, little ups and downs, might be more true to human experience than a single crashing wave”. These are the shifting moods of a single day, the temporary loss of the house keys, the sky a sunnier hue than expected.

The Roman educator Quintilian once insisted that an orator must be a good man. It was a commonplace of his time. The rigorous study of eloquence, he thought, required a mind undistracted by vice. The books discussed here inherit this ancient conviction that the attempt to write well is a bettering one. Composing a crisp sentence demands attention to fine detail and a craftsmanlike dedication to perfection. Deciding what to set to paper requires the ability to imagine where a reader might struggle or yawn. In a world tormented by spectres too reckless to name, care and empathy are welcome strangers.

Irina Dumitrescu is Professor of English Medieval Studies at the University of Bonn

Posted in Empire, History, Modernity

Mikhail — How the Ottomans Shaped the Modern World

This post is a reflection on the role that the Ottoman Empire played in shaping the modern world.  It draws on a new book by Alan Mikhail, God’s Shadow: Sultan Selim, His Ottoman Empire, and the Making of the Modern World.  

The Ottomans are the Rodney Dangerfields of empires: They don’t get no respect.  If we picture them at all, it’s either the exotic image of turbans and concubines in Topkapi Palace or the sad image of the “sick man of Europe” in the days before World War I, which finally put them out of their misery.  Neither does them justice.  For a long time, they were the most powerful empire in the world, which dramatically shaped life on three continents — Europe, Asia, and Africa. 

But what makes their story so interesting is that it is more than just an account of some faded glory in the past.  As Mikhail points out, the Ottomans left an indelible stamp on the modern world.  It was their powerful presence in the middle of Eurasia that pushed the minor but ambitious states of Western Europe to set sail for the East and West Indies.  The Dutch, Portuguese, Spanish, and English couldn’t get to the treasures of China and India by land because of the impassable presence of the Ottomans.  So they either had to sail east around Africa to get there or forge a new path to the west, which led them to the Americas.  In fact, they did both, and the result was the riches that turned them into imperial powers who came to dominate much of the known world.  

Without the Ottomans, there would not have been the massive expansion of world trade, the Spanish empire, the riches and technological innovations that spurred the industrial revolution and empowered the English and American empires.

God's Shadow

Here are some passages from the book that give you a feel for the impact the Ottomans had:

For half a century before 1492, and for centuries afterward, the Ottoman Empire stood as the most powerful state on earth: the largest empire in the Mediterranean since ancient Rome, and the most enduring in the history of Islam. In the decades around 1500, the Ottomans controlled more territory and ruled over more people than any other world power. It was the Ottoman monopoly of trade routes with the East, combined with their military prowess on land and on sea, that pushed Spain and Portugal out of the Mediterranean, forcing merchants and sailors from these fifteenth-century kingdoms to become global explorers as they risked treacherous voyages across oceans and around continents—all to avoid the Ottomans.

From China to Mexico, the Ottoman Empire shaped the known world at the turn of the sixteenth century. Given its hegemony, it became locked in military, ideological, and economic competition with the Spanish and Italian states, Russia, India, and China, as well as other Muslim powers. The Ottomans influenced in one way or another nearly every major event of those years, with reverberations down to our own time. Dozens of familiar figures, such as Columbus, Vasco da Gama, Montezuma, the reformer Luther, the warlord Tamerlane, and generations of popes—as well as millions of other greater and lesser historical personages—calibrated their actions and defined their very existence in reaction to the reach and grasp of Ottoman power.

Other facts, too, have blotted out our recognition of the Ottoman influence on our own history. Foremost, we tend to read the history of the last half-millennium as “the rise of the West.” (This anachronism rings as true in Turkey and the rest of the Middle East as it does in Europe and America.) In fact, in 1500, and even in 1600, there was no such thing as the now much-vaunted notion of “the West.” Throughout the early modern centuries, the European continent consisted of a fragile collection of disparate kingdoms and small, weak principalities locked in constant warfare. The large land-based empires of Eurasia were the dominant powers of the Old World, and, apart from a few European outposts in and around the Caribbean, the Americas remained the vast domain of its indigenous peoples. The Ottoman Empire held more territory in Europe than did most European-based states. In 1600, if asked to pick a single power that would take over the world, a betting man would have put his money on the Ottoman Empire, or perhaps China, but certainly not on any European entity.

The sheer scope of the empire at its height was extraordinary:

For close to four centuries, from 1453 until well into the exceedingly fractured 1800s, the Ottomans remained at the center of global politics, economics, and war. As European states rose and fell, the Ottomans stood strong. They battled Europe’s medieval and early modern empires, and in the twentieth century continued to fight in Europe, albeit against vastly different enemies. Everyone from Machiavelli to Jefferson to Hitler—quite an unlikely trio—was forced to confront the challenge of the Ottomans’ colossal power and influence. Counting from their first military victory, at Bursa, they ruled for nearly six centuries in territories that today comprise some thirty-three countries. Their armies would control massive swaths of Europe, Africa, and Asia; some of the world’s most crucial trade corridors; and cities along the shores of the Mediterranean, Red, Black, and Caspian seas, the Indian Ocean, and the Persian Gulf. They held Istanbul and Cairo, two of the largest cities on earth, as well as the holy cities of Mecca, Medina, and Jerusalem, and what was the world’s largest Jewish city for over four hundred years, Salonica (Thessaloniki in today’s Greece). From their lowly beginnings as sheep-herders on the long, hard road across Central Asia, the Ottomans ultimately succeeded in proving themselves the closest thing to the Roman Empire since the Roman Empire itself.

One of the interesting things about the Ottomans was how cosmopolitan and relatively tolerant they were.  The Spanish threw the Muslims and Jews out of Spain, but the Ottomans welcomed a variety of peoples, cultures, languages, and religions.  It wasn’t until relatively late that the empire came to be predominantly Muslim.

Although all religious minorities throughout the Mediterranean were subjected to much hardship, the Ottomans, despite what Innocent thought, never persecuted non-Muslims in the way that the Inquisition persecuted Muslims and Jews—and, despite the centuries of calls for Christian Crusades, Muslims never attempted a war against the whole of Christianity. While considered legally inferior to Muslims, Christians and Jews in the Ottoman Empire (as elsewhere in the lands of Islam) had more rights than other religious minorities around the world. They had their own law courts, freedom to worship in the empire’s numerous synagogues and churches, and communal autonomy. While Christian Europe was killing its religious minorities, the Ottomans protected theirs and welcomed those expelled from Europe. Although the sultans of the empire were Muslims, the majority of the population was not. Indeed, the Ottoman Empire was effectively the Mediterranean’s most populous Christian state: the Ottoman sultan ruled over more Christian subjects than the Catholic pope.

The sultan who moved the Ottoman empire into the big leagues — tripling its size — was Selim the Grim, who is the central figure of this book (look at his image on the book’s cover and you’ll see how he earned the name).  His son was Suleyman the Magnificent, whose long rule made him the lasting symbol of the empire at its peak.  Another sign of the heterogeneous nature of the Ottomans is that the sultans themselves were of mixed blood.

Because, in this period, Ottoman sultans and princes produced sons not from their wives but from their concubines, all Ottoman sultans were the sons of foreign, usually Christian-born, slaves like Gülbahar [Selim’s mother].

In the exceedingly cosmopolitan empire, the harem ensured that a non-Turkish, non-Muslim, non-elite diversity was infused into the very bloodline of the imperial family. As the son of a mother with roots in a far-off land, a distant culture, and a religion other than Islam, Selim viscerally experienced the ethnically and religiously amalgamated nature of the Ottoman Empire, and grew up in provincial Amasya with an expansive outlook on the fifteenth-century world.

Posted in History, Liberal democracy, Philosophy

Fukuyama — Liberalism and Its Discontents

This post is a brilliant essay by Francis Fukuyama, “Liberalism and Its Discontents.”  In it, he explores the problems facing liberal democracy today.  As always, it is threatened by autocratic regimes around the world.  But what’s new since the fall of the Soviet Union is the threat from illiberal democracy, both at home and abroad, in the form of populism of the right and the left.

His argument is a strong defense of the liberal democratic order, but it is also a very smart analysis of how liberal democracy has sowed the seeds of its own downfall.  He shows how much it depends on the existence of a vibrant civil society and robust social capital, both of which its own emphasis on individual liberty tends to undermine.  He also shows how its stress on free markets has fostered the rise of the neoliberal religion, which seeks to subordinate the once robust liberal state to the market.  And he notes how its tolerance of diverse viewpoints leaves it vulnerable to illiberal views that seek to wipe it out of existence.

This essay was published in the inaugural issue of the magazine American Purpose on October 5, 2020.  Here’s a link to the original.

It’s well worth your while to give this essay a close read.

Liberalism and Its Discontents

The challenges from the left and the right.

Francis Fukuyama

Today, there is a broad consensus that democracy is under attack or in retreat in many parts of the world. It is being contested not just by authoritarian states like China and Russia, but by populists who have been elected in many democracies that seemed secure.

The “democracy” under attack today is a shorthand for liberal democracy, and what is really under greatest threat is the liberal component of this pair. The democracy part refers to the accountability of those who hold political power through mechanisms like free and fair multiparty elections under universal adult franchise. The liberal part, by contrast, refers primarily to a rule of law that constrains the power of government and requires that even the most powerful actors in the system operate under the same general rules as ordinary citizens. Liberal democracies, in other words, have a constitutional system of checks and balances that limits the power of elected leaders.

Democracy itself is being challenged by authoritarian states like Russia and China that manipulate or dispense with free and fair elections. But the more insidious threat arises from populists within existing liberal democracies who are using the legitimacy they gain through their electoral mandates to challenge or undermine liberal institutions. Leaders like Hungary’s Viktor Orbán, India’s Narendra Modi, and Donald Trump in the United States have tried to undermine judicial independence by packing courts with political supporters, have openly broken laws, or have sought to delegitimize the press by labeling mainstream media as “enemies of the people.” They have tried to dismantle professional bureaucracies and to turn them into partisan instruments. It is no accident that Orbán puts himself forward as a proponent of “illiberal democracy.”

The contemporary attack on liberalism goes much deeper than the ambitions of a handful of populist politicians, however. They would not be as successful as they have been were they not riding a wave of discontent with some of the underlying characteristics of liberal societies. To understand this, we need to look at the historical origins of liberalism, its evolution over the decades, and its limitations as a governing doctrine.

What Liberalism Was

Classical liberalism can best be understood as an institutional solution to the problem of governing over diversity. Or to put it in slightly different terms, it is a system for peacefully managing diversity in pluralistic societies. It arose in Europe in the late 17th and 18th centuries in response to the wars of religion that followed the Protestant Reformation, wars that lasted for 150 years and killed major portions of the populations of continental Europe.

While Europe’s religious wars were driven by economic and social factors, they derived their ferocity from the fact that the warring parties represented different Christian sects that wanted to impose their particular interpretation of religious doctrine on their populations. This was a period in which the adherents of forbidden sects were persecuted—heretics were regularly tortured, hanged, or burned at the stake—and their clergy hunted. The founders of modern liberalism like Thomas Hobbes and John Locke sought to lower the aspirations of politics, not to promote a good life as defined by religion, but rather to preserve life itself, since diverse populations could not agree on what the good life was. This was the distant origin of the phrase “life, liberty, and the pursuit of happiness” in the Declaration of Independence. The most fundamental principle enshrined in liberalism is one of tolerance: You do not have to agree with your fellow citizens about the most important things, but only that each individual should get to decide what those things are without interference from you or from the state. The limits of tolerance are reached only when the principle of tolerance itself is challenged, or when citizens resort to violence to get their way.

Understood in this fashion, liberalism was simply a pragmatic tool for resolving conflicts in diverse societies, one that sought to lower the temperature of politics by taking questions of final ends off the table and moving them into the sphere of private life. This remains one of its most important selling points today: If diverse societies like India or the United States move away from liberal principles and try to base national identity on race, ethnicity, or religion, they are inviting a return to potentially violent conflict. The United States suffered such conflict during its Civil War, and Modi’s India is inviting communal violence by shifting its national identity to one based on Hinduism.

There is however a deeper understanding of liberalism that developed in continental Europe that has been incorporated into modern liberal doctrine. In this view, liberalism is not simply a mechanism for pragmatically avoiding violent conflict, but also a means of protecting fundamental human dignity.

The ground of human dignity has shifted over time. In aristocratic societies, it was an attribute only of warriors who risked their lives in battle. Christianity universalized the concept of dignity based on the possibility of human moral choice: Human beings had a higher moral status than the rest of created nature but lower than that of God because they could choose between right and wrong. Unlike beauty or intelligence or strength, this characteristic was universally shared and made human beings equal in the sight of God. By the time of the Enlightenment, the capacity for choice or individual autonomy was given a secular form by thinkers like Rousseau (“perfectibility”) and Kant (a “good will”), and became the ground for the modern understanding of the fundamental right to dignity written into many 20th-century constitutions. Liberalism recognizes the equal dignity of every human being by granting them rights that protect individual autonomy: rights to speech, to assembly, to belief, and ultimately to participate in self-government.

Liberalism thus protects diversity by deliberately not specifying higher goals of human life. This disqualifies religiously defined communities as liberal. Liberalism also grants equal rights to all people considered full human beings, based on their capacity for individual choice. Liberalism thus tends toward a kind of universalism: Liberals care not just about their rights, but about the rights of others outside their particular communities. Thus the French Revolution carried the Rights of Man across Europe. From the beginning the major arguments among liberals were not over this principle, but rather over who qualified as rights-bearing individuals, with various groups—racial and ethnic minorities, women, foreigners, the propertyless, children, the insane, and criminals—excluded from this magic circle.

A final characteristic of historical liberalism was its association with the right to own property. Property rights and the enforcement of contracts through legal institutions became the foundation for economic growth in Britain, the Netherlands, Germany, the United States, and other states that were not necessarily democratic but protected property rights. For that reason liberalism was strongly associated with economic growth and modernization. Rights were protected by an independent judiciary that could call on the power of the state for enforcement. Properly understood, rule of law referred both to the application of day-to-day rules that governed interactions between individuals and to the design of political institutions that formally allocated political power through constitutions. The class that was most committed to liberalism historically was the class of property owners, not just agrarian landlords but the myriads of middle-class business owners and entrepreneurs that Karl Marx would label the bourgeoisie.

Liberalism is connected to democracy, but is not the same thing as it. It is possible to have regimes that are liberal but not democratic: Germany in the 19th century and Singapore and Hong Kong in the late 20th century come to mind. It is also possible to have democracies that are not liberal, like the ones Viktor Orbán and Narendra Modi are trying to create that privilege some groups over others. Liberalism is allied to democracy through its protection of individual autonomy, which ultimately implies a right to political choice and to the franchise. But it is not the same as democracy. From the French Revolution on, there were radical proponents of democratic equality who were willing to abandon liberal rule of law altogether and vest power in a dictatorial state that would equalize outcomes. Under the banner of Marxism-Leninism, this became one of the great fault lines of the 20th century. Even in avowedly liberal states, like many in late 19th- and early 20th-century Europe and North America, there were powerful trade union movements and social democratic parties that were more interested in economic redistribution than in the strict protection of property rights.

Liberalism also saw the rise of another competitor besides communism: nationalism. Nationalists rejected liberalism’s universalism and sought to confer rights only on their favored group, defined by culture, language, or ethnicity. As the 19th century progressed, Europe reorganized itself from a dynastic to a national basis, with the unification of Italy and Germany and with growing nationalist agitation within the multiethnic Ottoman and Austro-Hungarian empires. In 1914 this exploded into the Great War, which killed millions of people and laid the kindling for a second global conflagration in 1939.

The defeat of Germany, Italy, and Japan in 1945 paved the way for a restoration of liberalism as the democratic world’s governing ideology. Europeans saw the folly of organizing politics around an exclusive and aggressive understanding of nation, and created the European Community and later the European Union to subordinate the old nation-states to a cooperative transnational structure. For its part, the United States played a powerful role in creating a new set of international institutions, including the United Nations (and affiliated Bretton Woods organizations like the World Bank and IMF), GATT and the World Trade Organization, and cooperative regional ventures like NATO and NAFTA.

The largest threat to this order came from the former Soviet Union and its allied communist parties in Eastern Europe and the developing world. But the former Soviet Union collapsed in 1991, as did the perceived legitimacy of Marxism-Leninism, and many former communist countries sought to incorporate themselves into existing international institutions like the EU and NATO. This post-Cold War world would collectively come to be known as the liberal international order.

But the period from 1950 to the 1970s was the heyday of liberal democracy in the developed world. Liberal rule of law abetted democracy by protecting ordinary people from abuse: The U.S. Supreme Court, for example, was critical in breaking down legal racial segregation through decisions like Brown v. Board of Education. And democracy protected the rule of law: When Richard Nixon engaged in illegal wiretapping and use of the CIA, it was a democratically elected Congress that helped drive him from power. Liberal rule of law laid the basis for the strong post-World War II economic growth that then enabled democratically elected legislatures to create redistributive welfare states. Inequality was tolerable in this period because most people could see their material conditions improving. In short, this period saw a largely happy coexistence of liberalism and democracy throughout the developed world.

Discontents

Liberalism has been a broadly successful ideology, and one that is responsible for much of the peace and prosperity of the modern world. But it also has a number of shortcomings, some of which were triggered by external circumstances, and others of which are intrinsic to the doctrine. The first lies in the realm of economics, the second in the realm of culture.

The economic shortcomings have to do with the tendency of economic liberalism to evolve into what has come to be called “neoliberalism.” Neoliberalism is today a pejorative term used to describe a form of economic thought, often associated with the University of Chicago or the Austrian school, and economists like Friedrich Hayek, Milton Friedman, George Stigler, and Gary Becker. They sharply denigrated the role of the state in the economy, and emphasized free markets as spurs to growth and efficient allocators of resources. Many of the analyses and policies recommended by this school were in fact helpful and overdue: Economies were overregulated, state-owned companies inefficient, and governments responsible for the simultaneous high inflation and low growth experienced during the 1970s.

But valid insights about the efficiency of markets evolved into something of a religion, in which state intervention was opposed not based on empirical observation but as a matter of principle. Deregulation produced lower airline ticket prices and shipping costs for trucks, but also laid the ground for the great financial crisis of 2008 when it was applied to the financial sector. Privatization was pushed even in cases of natural monopolies like municipal water or telecom systems, leading to travesties like the privatization of Mexico’s TelMex, where a public monopoly was transformed into a private one. Perhaps most important, the fundamental insight of trade theory, that free trade leads to higher wealth for all parties concerned, neglected the further insight that this was true only in the aggregate, and that many individuals would be hurt by trade liberalization. The period from the 1980s onward saw the negotiation of both global and regional free trade agreements that shifted jobs and investment away from rich democracies to developing countries, increasing within-country inequalities. In the meantime, many countries starved their public sectors of resources and attention, leading to deficiencies in a host of public services from education to health to security.

The result was the world that emerged by the 2010s in which aggregate incomes were higher than ever but inequality within countries had also grown enormously. Many countries around the world saw the emergence of a small class of oligarchs, multibillionaires who could convert their economic resources into political power through lobbyists and purchases of media properties. Globalization enabled them to move their money to safe jurisdictions easily, starving states of tax revenue and making regulation very difficult. Globalization also entailed liberalization of rules concerning migration. Foreign-born populations began to increase in many Western countries, abetted by crises like the Syrian civil war that sent more than a million refugees into Europe. All of this paved the way for the populist reaction that became clearly evident in 2016 with Britain’s Brexit vote and the election of Donald Trump in the United States.

The second discontent with liberalism as it evolved over the decades was rooted in its very premises. Liberalism deliberately lowered the horizon of politics: A liberal state will not tell you how to live your life, or what a good life entails; how you pursue happiness is up to you. This produces a vacuum at the core of liberal societies, one that often gets filled by consumerism or pop culture or other random activities that do not necessarily lead to human flourishing. This has been the critique of a group of (mostly) Catholic intellectuals including Patrick Deneen, Sohrab Ahmari, Adrian Vermeule, and others, who feel that liberalism offers “thin gruel” for anyone with deeper moral commitments.

This leads us to a deeper stratum of discontent. Liberal theory, both in its economic and political guises, is built around individuals and their rights, and the political system protects their ability to make these choices autonomously. Indeed, in neoclassical economic theory, social cooperation arises only as a result of rational individuals deciding that it is in their self-interest to work with other individuals. Among conservative intellectuals, Patrick Deneen has gone the furthest by arguing that this whole approach is deeply flawed precisely because it is based on this individualistic premise, and sanctifies individual autonomy above all other goods. Thus for him, the entire American project based as it was on Lockean individualistic principles was misfounded. Human beings for him are not primarily autonomous individuals, but deeply social beings who are defined by their obligations and ties to a range of social structures, from families to kin groups to nations.

This social understanding of human nature was a truism taken for granted by most thinkers prior to the Western Enlightenment. It is also one supported by a great deal of recent research in the life sciences that shows that human beings are hard-wired to be social creatures: Many of our most salient faculties are ones that lead us to cooperate with one another in groups of various sizes and types. This cooperation does not arise necessarily from rational calculation; it is supported by emotional faculties like pride, guilt, shame, and anger that reinforce social bonds. The success of human beings over the millennia that has allowed our species to completely dominate its natural habitat has to do with this aptitude for following norms that induce social cooperation.

By contrast, the kind of individualism celebrated in liberal economic and political theory is a contingent development that emerged in Western societies over the centuries. Its history is long and complicated, but it originated in the inheritance rules set down by the Catholic Church in early medieval times which undermined the extended kinship networks that had characterized Germanic tribal societies. Individualism was further validated by its functionality in promoting market capitalism: Markets worked more efficiently if individuals were not constrained by obligations to kin and other social networks. But this kind of individualism has always been at odds with the social proclivities of human beings. It also does not come naturally to people in certain other non-Western societies like India or the Arab world, where kin, caste, or ethnic ties are still facts of life.

The implication of these observations for contemporary liberal societies is straightforward. Members of such societies want opportunities to bond with one another in a host of ways: as citizens of a nation, members of an ethnic or racial group, residents of a region, or adherents to a particular set of religious beliefs. Membership in such groups gives their lives meaning and texture in a way that mere citizenship in a liberal democracy does not.

Many of the critics of liberalism on the right feel that it has undervalued the nation and traditional national identity: Thus Viktor Orbán has asserted that Hungarian national identity is based on Hungarian ethnicity and on maintenance of traditional Hungarian values and cultural practices. New nationalists like Yoram Hazony celebrate nationhood and national culture as the rallying cry for community, and they bemoan liberalism’s dissolving effect on religious commitment, yearning for a thicker sense of community and shared values, underpinned by virtues in service of that community.

There are parallel discontents on the left. Juridical equality before the law does not mean that people will be treated equally in practice. Racism, sexism, and anti-gay bias all persist in liberal societies, and those injustices have become identities around which people could mobilize. The Western world has seen the emergence of a series of social movements since the 1960s, beginning with the civil rights movement in the United States, and movements promoting the rights of women, indigenous peoples, the disabled, the LGBT community, and the like. The more progress that has been made toward eradicating social injustices, the more intolerable the remaining injustices seem, and thus the greater the moral imperative to mobilize to correct them. The complaint of the left is different in substance but similar in structure to that of the right: Liberal society does not do enough to root out deep-seated racism, sexism, and other forms of discrimination, so politics must go beyond liberalism. And, as on the right, progressives want the deeper bonding and personal satisfaction of associating—in this case, with people who have suffered from similar indignities.

This instinct for bonding and the thinness of shared moral life in liberal societies have shifted global politics on both the right and the left toward a politics of identity and away from the liberal world order of the late 20th century. Liberal values like tolerance and individual freedom are prized most intensely when they are denied: People who live in brutal dictatorships want the simple freedom to speak, associate, and worship as they choose. But over time life in a liberal society comes to be taken for granted and its sense of shared community seems thin. Thus in the United States, arguments between right and left increasingly revolve around identity, and particularly racial identity issues, rather than around economic ideology and questions about the appropriate role of the state in the economy.

There is another significant issue that liberalism fails to grapple adequately with, which concerns the boundaries of citizenship and rights. The premises of liberal doctrine tend toward universalism: Liberals worry about human rights, and not just the rights of Englishmen, or white Americans, or some other restricted class of people. But rights are protected and enforced by states which have limited territorial jurisdiction, and the question of who qualifies as a citizen with voting rights becomes a highly contested one. Some advocates of migrant rights assert a universal human right to migrate, but this is a political nonstarter in virtually every contemporary liberal democracy. At the present moment, the issue of the boundaries of political communities is settled by some combination of historical precedent and political contestation, rather than being based on any clear liberal principle.

Conclusion

Vladimir Putin told the Financial Times that liberalism has become an “obsolete” doctrine. While it may be under attack from many quarters today, it is in fact more necessary than ever.

It is more necessary because it is fundamentally a means of governing over diversity, and the world is more diverse than it ever has been. Democracy disconnected from liberalism will not protect diversity, because majorities will use their power to repress minorities. Liberalism was born in the mid-17th century as a means of resolving religious conflicts, and it was reborn again after 1945 to solve conflicts between nationalisms. Any illiberal effort to build a social order around thick ties defined by race, ethnicity, or religion will exclude important members of the community, and down the road will lead to conflict. Russia itself retains liberal characteristics: Russian citizenship and nationality is not defined by either Russian ethnicity or the Orthodox religion; the Russian Federation’s millions of Muslim inhabitants enjoy equal juridical rights. In situations of de facto diversity, attempts to impose a single way of life on an entire population are a formula for dictatorship.

The only other way to organize a diverse society is through formal power-sharing arrangements among different identity groups that give only a nod toward shared nationality. This is the way that Lebanon, Iraq, Bosnia, and other countries in the Middle East and the Balkans are governed. This type of consociationalism leads to very poor governance and long-term instability, and works poorly in societies where identity groups are not geographically based. This is not a path down which any contemporary liberal democracy should want to tread.

That being said, the kinds of economic and social policies that liberal societies should pursue is today a wide-open question. The evolution of liberalism into neoliberalism after the 1980s greatly reduced the policy space available to centrist political leaders, and permitted the growth of huge inequalities that have been fueling populisms of the right and the left. Classical liberalism is perfectly compatible with a strong state that seeks social protections for populations left behind by globalization, even as it protects basic property rights and a market economy. Liberalism is necessarily connected to democracy, and liberal economic policies need to be tempered by considerations of democratic equality and the need for political stability.

I suspect that most religious conservatives critical of liberalism today in the United States and other developed countries do not fool themselves into thinking that they can turn the clock back to a period when their social views were mainstream. Their complaint is a different one: that contemporary liberals are ready to tolerate any set of views, from radical Islam to Satanism, other than those of religious conservatives, and that they find their own freedom constrained.

This complaint is a serious one: Many progressives on the left have shown themselves willing to abandon liberal values in pursuit of social justice objectives. There has been a sustained intellectual attack on liberal principles over the past three decades coming out of academic pursuits like gender studies, critical race theory, postcolonial studies, and queer theory, that deny the universalistic premises underlying modern liberalism. The challenge is not simply one of intolerance of other views or “cancel culture” in the academy or the arts. Rather, the challenge is to basic principles that all human beings were born equal in a fundamental sense, or that a liberal society should strive to be color-blind. These different theories tend to argue that the lived experiences of specific and ever-narrower identity groups are incommensurate, and that what divides them is more powerful than what unites them as citizens. For some in the tradition of Michel Foucault, foundational approaches to cognition coming out of liberal modernity like the scientific method or evidence-based research are simply constructs meant to bolster the hidden power of racial and economic elites.

The issue here is thus not whether progressive illiberalism exists, but rather how great a long-term danger it represents. In countries from India and Hungary to the United States, nationalist conservatives have actually taken power and have sought to use the power of the state to dismantle liberal institutions and impose their own views on society as a whole. That danger is a clear and present one.

Progressive anti-liberals, by contrast, have not succeeded in seizing the commanding heights of political power in any developed country. Religious conservatives are still free to worship in any way they see fit, and indeed are organized in the United States as a powerful political bloc that can sway elections. Progressives exercise power in different and more nuanced ways, primarily through their dominance of cultural institutions like the mainstream media, the arts, and large parts of academia. The power of the state has been enlisted behind their agenda on such matters as striking down, via the courts, conservative restrictions on abortion and gay marriage, and shaping public school curricula. An open question is whether cultural dominance today will ultimately lead to political dominance in the future, and thus to a more thoroughgoing rollback of liberal rights by progressives.

Liberalism’s present-day crisis is not new; since its invention in the 17th century, liberalism has been repeatedly challenged by thick communitarians on the right and progressive egalitarians on the left. Liberalism properly understood is perfectly compatible with communitarian impulses and has been the basis for the flourishing of deep and diverse forms of civil society. It is also compatible with the social justice aims of progressives: One of its greatest achievements was the creation of modern redistributive welfare states in the late 20th century. Liberalism’s problem is that it works slowly through deliberation and compromise, and never achieves its communal or social justice goals as completely as their advocates would like. But it is hard to see how the discarding of liberal values is going to lead to anything in the long term other than increasing social conflict and ultimately a return to violence as a means of resolving differences.

Francis Fukuyama, chairman of the editorial board of American Purpose, directs the Center on Democracy, Development and the Rule of Law at Stanford University.

Posted in History, History of education, War

An Affair to Remember: America’s Brief Fling with the University as a Public Good

This post is an essay about the brief but glorious golden age of the US university during the three decades after World War II.  

American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward. Prior to World War II, the federal government showed little interest in universities and provided little support. The war spurred a large investment in defense-based scientific research in universities, and the emergence of the Cold War expanded federal investment exponentially. Unlike a hot war, the Cold War offered an extended period of federally funded research and public subsidy for expanding student enrollments. The result was the golden age of the American university. The good times continued for about 30 years and then began to go bad. The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminated with the fall of the Berlin Wall in 1989. With no money and no enemy, the Cold War university fell as quickly as it arose. Instead of seeing the Cold War university as the norm, we need to think of it as the exception. What we are experiencing now in American higher education is a regression to the mean, in which, over the long haul, Americans have understood higher education to be a distinctly private good.

I originally presented this piece in 2014 at a conference at the Catholic University of Leuven in Belgium.  It was then published in the Journal of Philosophy of Education in 2016 (here’s a link to the JOPE version) and later became a chapter in my 2017 book, A Perfect Mess.  Waste not, want not.  Hope you enjoy it.

Cold War

An Affair to Remember:

America’s Brief Fling with the University as a Public Good

David F. Labaree

            American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward.  Prior to World War II, the federal government showed little interest in universities and provided little support.  The war spurred a large investment in defense-based scientific research in universities for reasons of both efficiency and necessity:  universities had the researchers and infrastructure in place and the government needed to gear up quickly.  With the emergence of the Cold War in 1947, the relationship continued and federal investment expanded exponentially.  Unlike a hot war, the Cold War offered a long timeline for global competition between communism and democracy, which meant institutionalizing the wartime model of federally funded research and building a set of structures for continuing investment in knowledge whose military value was unquestioned. At the same time, the communist challenge provided a strong rationale for sending a large number of students to college.  These increased enrollments would educate the skilled workers needed by the Cold War economy, produce informed citizens to combat the Soviet menace, and demonstrate to the world the broad social opportunities available in a liberal democracy.  The result of this enormous public investment in higher education has become known as the golden age of the American university.

            Of course, as is so often the case with a golden age, it didn’t last.  The good times continued for about 30 years and then began to go bad.  The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminated with the fall of the Berlin Wall in 1989.  With no money and no enemy, the Cold War university fell as quickly as it arose.

            In this paper I try to make sense of this short-lived institution.  But I want to avoid the note of nostalgia that pervades many current academic accounts, in which professors and administrators grieve for the good old days of the mid-century university and spin fantasies of recapturing them.  Barring another national crisis of the same dimension, however, it just won’t happen.  Instead of seeing the Cold War university as the norm that we need to return to, I suggest that it’s the exception.  What we’re experiencing now in American higher education is, in many ways, a regression to the mean. 

            My central theme is this:  Over the long haul, Americans have understood higher education as a distinctly private good.  The period from 1940 to 1970 was the one time in our history when the university became a public good.  And now we are back to the place we have always been, where the university’s primary role is to provide individual consumers a chance to gain social access and social advantage.  Since students are the primary beneficiaries, they should also foot the bill, and state subsidies are hard to justify.

            Here is my plan.  First, I provide an overview of the long period before 1940 when American higher education functioned primarily as a private good.  During this period, the beneficiaries changed from the university’s founders to its consumers, but private benefit was the steady state.  This is the baseline against which we can understand the rapid postwar rise and fall of public investment in higher education.  Next, I look at the huge expansion of public funding for higher education starting with World War II and continuing for the next 30 years.  Along the way I sketch how the research university came to enjoy a special boost in support and rising esteem during these decades.  Then I examine the fall from grace toward the end of the century when the public-good rationale for higher ed faded as quickly as it had emerged.  And I close by exploring the implications of this story for understanding the American system of higher education as a whole. 

            During most of its history, the central concern driving the system has not been what it can do for society but what it can do for me.  In many ways, this approach has been highly beneficial.  Much of its success as a system – as measured by wealth, rankings, and citations – derives from its core structure as a market-based system producing private goods for consumers rather than a politically based system producing public goods for state and society.  But this view of higher education as private property is also a key source of the system’s pathologies.  It helps explain why public funding for higher education is declining and student debt is rising; why private colleges are so much richer and more prestigious than public colleges; why the system is so stratified, with wealthy students attending the exclusive colleges at the top where social rewards are high and with poor students attending the inclusive colleges at the bottom where such rewards are low; and why quality varies so radically, from colleges that ride atop the global rankings to colleges that drift in intellectual backwaters.

The Private Origins of the System

            One of the peculiar aspects of the history of American higher education is that private colleges preceded public.  Another, which in part follows from the first, is that private colleges are also more prestigious.  Nearly everywhere else in the world, state-supported and governed universities occupy the pinnacle of the national system while private institutions play a small and subordinate role, supplying degrees of less distinction and serving students of less ability.  But in the U.S., the top private universities produce more research, gain more academic citations, attract better faculty and students, and graduate more leaders of industry, government, and the professions.  According to the 2013 Shanghai rankings, 16 of the top 25 universities in the U.S. are private, and the concentration is even higher at the top of this list, where private institutions make up 8 of the top 10 (Institute of Higher Education, 2013). 

            This phenomenon is rooted in the conditions under which colleges first emerged in the U.S.  American higher education developed into a system in the early 19th century, when three key elements were in place:  the state was weak, the market was strong, and the church was divided.  The federal government at the time was small and poor, surviving largely on tariffs and the sale of public lands, and state governments were strapped simply trying to supply basic public services.  Colleges were a low priority for government since they served no compelling public need – unlike public schools, which states saw as essential for producing citizens for the republic.  So colleges only emerged when local promoters requested and received a corporate charter from the state.  These were private not-for-profit institutions that functioned much like any other corporation.  States provided funding only sporadically and only if an institution’s situation turned dire.  And in the Dartmouth College decision of 1819, the Supreme Court made clear that a college’s corporate charter meant that it could govern itself without state interference.  Therefore, in the absence of state funding and control, early American colleges developed a market-based system of higher education.

            If the roots of the American system were private, they were also extraordinarily local.  Unlike the European university, with its aspirations toward universality and its history of cosmopolitanism, the American college of the nineteenth century was a home-town entity.  Most often, it was founded to advance the parochial cause of promoting a particular religious denomination rather than to promote higher learning.  In a setting where no church was dominant and all had to compete for visibility, stature, and congregants, founding colleges was a valuable way to plant the flag and promote the faith.  This was particularly true when the population was rapidly expanding into new territories to the west, which meant that no denomination could afford to cede the new terrain to competitors.  Starting a college in Ohio was a way to ensure denominational growth, prepare clergy, and spread the word.

            At the same time, colleges were founded with an eye toward civic boosterism, intended to shore up a community’s claim to be a major cultural and commercial center rather than a sleepy farm town.  With a college, a town could claim that it deserved to gain lucrative recognition as a stop on the railroad line, the site for a state prison, the county seat, or even the state capital.  These consequences would elevate the value of land in the town, which would work to the benefit of major landholders.  In this sense, the nineteenth century college, like much of American history, was in part the product of a land development scheme.  In general, these two motives combined: colleges emerged as a way to advance both the interests of particular sects and also the interests of the towns where they were lodged.  Often ministers were also land speculators.  It was always better to have multiple rationales and sources of support than just one (Brown, 1995; Boorstin, 1965; Potts, 1971).  In either case, however, the benefits of founding a college accrued to individual landowners and particular religious denominations and not to the larger public.

As a result of these incentives, church officials and civic leaders around the country scrambled to get a state charter for a college, establish a board of trustees made up of local notables, and install a president.  The latter (usually a clergyman) would rent a local building, hire a small and not very accomplished faculty, and serve as the CEO of a marginal educational enterprise, one that sought to draw tuition-paying students from the area in order to make the college a going concern.  With colleges arising to meet local and sectarian needs, the result was the birth of a large number of small, parochial, and weakly funded institutions in a very short period of time in the nineteenth century, which meant that most of these colleges faced a difficult struggle to survive in the competition with peer institutions.  In the absence of reliable support from church or state, these colleges had to find a way to get by on their own.

            Into this mix of private colleges, state and local governments began to introduce public institutions.  First came a series of universities established by individual states to serve their local populations.  Here too competition was a bigger factor than demand for learning, since a state government increasingly needed to have a university of its own in order to keep up with its neighbors.  Next came a group of land-grant colleges that began to emerge by midcentury.  Funded by grants of land from the federal government, these were public institutions that focused on providing practical education for occupations in agriculture and engineering.  Finally came an array of normal schools, which aimed at preparing teachers for the expanding system of public elementary education.  Like the private colleges, these public institutions emerged to meet the economic needs of towns that eagerly sought to house them.  And although these colleges were creatures of the state, they had only limited public funding and had to rely heavily on student tuition and private donations.

            The rate of growth of this system of higher education was staggering.  At the beginning of the American republic in 1790 the country had 19 institutions calling themselves colleges or universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1880, it had 811, which doesn’t even include the normal schools.  As a comparison, this was five times as many institutions as existed that year in all of Western Europe (Rüegg, 2004).  To be sure, the American institutions were for the most part colleges in name only, with low academic standards, an average student body of 131 (Carter et al., 2006, Table Bc523) and faculty of 14 (Carter et al., 2006, Table Bc571).  But nonetheless this was a massive infrastructure for a system of higher education.

            At a density of 16 colleges per million of population, the U.S. in 1880 had the most overbuilt system of higher education in the world (Collins, 1979, Table 5.2).  Created in order to meet the private needs of land speculators and religious sects rather than the public interest of state and society, the system got way ahead of demand for its services.  That changed in the 1880s.  By adopting parts of the German research university model (in form if not in substance), the top level of the American system acquired a modicum of academic respectability.  In addition – and this is more important for our purposes here – going to college finally came to be seen as a good investment for a growing number of middle-class student-consumers.

            Three factors came together to make college attractive.  Primary among these was the jarring change in the structure of status transmission for middle-class families toward the end of the nineteenth century.  The tradition of passing on social position to your children by transferring ownership of the small family business was under dire threat, as factories were driving independent craft production out of the market and department stores were making small retail shops economically marginal.  Under these circumstances, middle-class families began to adopt what Burton Bledstein calls the “culture of professionalism” (Bledstein, 1976).  Pursuing a profession (law, medicine, clergy) had long been an option for young people in this social stratum, but now this attraction grew stronger as the definition of profession grew broader.  With the threat of sinking into the working class becoming more likely, families found reassurance in the prospect of a form of work that would buffer their children from the insecurity and degradation of wage labor.  This did not necessarily mean becoming a traditional professional, where the prospects were limited and entry costs high, but instead it meant becoming a salaried employee in a management position that was clearly separated from the shop floor.  The burgeoning white-collar work opportunities as managers in corporate and government bureaucracies provided the promise of social status, economic security, and protection from downward mobility.  And the best way to certify yourself as eligible for this kind of work was to acquire a college degree.

            Two other factors added to the attractions of college.  One was that a high school degree – once a scarce commodity that became a form of distinction for middle class youth during the nineteenth century – was in danger of becoming commonplace.  Across the middle of the century, enrollments in primary and grammar schools were growing fast, and by the 1880s they were filling up.  By 1900, the average American 20-year-old had eight years of schooling, which meant that political pressure was growing to increase access to high school (Goldin & Katz, 2008, p. 19).  This started to happen in the 1880s, and for the next 50 years high school enrollments doubled every decade.  The consequences were predictable.  If the working class was beginning to get a high school education, then middle class families felt compelled to preserve their advantage by pursuing college.

            The last piece that fell into place to increase the drawing power of college for middle class families was the effort by colleges in the 1880s and 90s to make undergraduate enrollment not just useful but enjoyable.  Ever desperate to find ways to draw and retain students, colleges responded to competitive pressure by inventing the core elements that came to define the college experience for American students in the twentieth century.  These included fraternities and sororities, pleasant residential halls, a wide variety of extracurricular entertainments, and – of course – football.  College life became a major focus of popular magazines, and college athletic events earned big coverage in newspapers.  In remarkably short order, going to college became a life stage in the acculturation of middle class youth.  It was the place where you could prepare for a respectable job, acquire sociability, learn middle class cultural norms, have a good time, and meet a suitable spouse.  And, for those who were so inclined, there was the potential fringe benefit of getting an education.

            Spurred by student desire to get ahead or stay ahead, college enrollments started growing quickly.  They were at 116,000 in 1879, 157,000 in 1889, 238,000 in 1899, 355,000 in 1909, 598,000 in 1919, 1,104,000 in 1929, and 1,494,000 in 1939 (Carter et al., 2006, Table Bc523).  This was a rate of increase of more than 50 percent a decade – not as fast as the increases that would come at midcentury, but still impressive.  During this same 60-year period, total college enrollment as a proportion of the population 18-to-24 years old rose from 1.6 percent to 9.1 percent (Carter et al., 2006, Table Bc524).  By 1930, the U.S. had three times the population of the U.K. and 20 times the number of college students (Levine, 1986, p. 135).  And the reason they were enrolling in such numbers was clear.  According to studies in the 1920s, almost two-thirds of undergraduates were there to get ready for a particular job, mostly in the lesser professions and middle management (Levine, 1986, p. 40).  Business and engineering were the most popular majors and the social sciences were on the rise.  As David Levine put it in his important book about college in the interwar years, “Institutions of higher learning were no longer content to educate; they now set out to train, accredit, and impart social status to their students” (Levine, 1986, p. 19).

            Enrollments were growing in public colleges faster than in private colleges, but only by a small amount.  In fact it wasn’t until 1931 – for the first time in the history of American higher education – that the public sector finally accounted for a majority of college students (Carter et al., 2006, Tables Bc531 and Bc534).  The increases occurred across all levels of the system, including the top public research universities; but the largest share of enrollments flowed into the newer institutions at the bottom of the system:  the state colleges that were emerging from normal schools, urban commuter colleges (mostly private), and an array of public and private junior colleges that offered two-year vocational programs. 

            For our purposes today, the key point is this:  The American system of colleges and universities that emerged in the nineteenth century and continued until World War II was a market-driven structure that construed higher education as a private good.  Until around 1880, the primary benefits of the system went to the people who founded individual institutions – the land speculators and religious sects for whom a new college brought wealth and competitive advantage.  This explains why colleges emerged in such remote places long before there was substantial student demand.  The role of the state in this process was muted.  The state was too weak and too poor to provide strong support for higher education, and there was no obvious state interest that argued for doing so.  Until the decade before the war, most student enrollments were in the private sector, and even at the war’s start the majority of institutions in the system were private (Carter et al., 2006, Tables Bc510 to Bc520).  

            After 1880, the primary benefits of the system went to the students who enrolled.  For them, it became the primary way to gain entry to the relatively secure confines of salaried work in management and the professions.  For middle class families, college in this period emerged as the main mechanism for transmitting social advantage from parents to children; and for others, it became the object of aspiration as the place to get access to the middle class.  State governments put increasing amounts of money into support for public higher education, not because of the public benefits it would produce but because voters demanded increasing access to this very attractive private good.

The Rise of the Cold War University

            And then came the Second World War.  There is no need here to recount the devastation it brought about or the nightmarish residue it left.  But it’s worth keeping in mind the peculiar fact that this conflict is remembered fondly by Americans, who often refer to it as the Good War (Terkel, 1997).  The war cost a lot of American lives and money, but it also brought a lot of benefits.  It didn’t hurt, of course, to be on the winning side and to have all the fighting take place on foreign territory.  And part of the positive feeling associated with the war comes from the way it thrust the country into a new role as the dominant world power.  But perhaps even more, the warm feeling arises from the memory of this as a time when the country came together around a common cause.  For citizens of the United States – the most liberal of liberal democracies, where private liberty is much more highly valued than public loyalty – it was a novel and exciting feeling to rally around the federal government.  Usually viewed with suspicion as a threat to the rights of individuals and a drain on private wealth, the American government in the 1940s took on the mantle of good in the fight against evil.  Its public image became the resolute face of a white-haired man dressed in red, white, and blue, who pointed at the viewer in a famous recruiting poster.  Its slogan: “Uncle Sam Wants You.”

            One consequence of the war was a sharp increase in the size of the U.S. government.  The historically small federal state had started to grow substantially in the 1930s as a result of the New Deal effort to spend the country out of a decade-long economic depression, a time when spending doubled.  But the war raised the level of federal spending by a factor of seven, from $1,000 to $7,000 per capita.  After the war, the level dropped back to $2,000; and then the onset of the Cold War sent federal spending into a sharp, and this time sustained, increase – reaching $3,000 in the 50s, $4,000 in the 60s, and regaining the previous high of $7,000 in the 80s, during the last days of the Soviet Union (Garrett & Rhine, 2006, figure 3).

            If for Americans in general World War II carries warm associations, for people in higher education it marks the beginning of the Best of Times – a short but intense period of generous public funding and rapid expansion.  Initially, of course, the war brought trouble, since it sent most prospective college students into the military.  Colleges quickly adapted by repurposing their facilities for military training and other war-related activities.  But the real long-term benefits came when the federal government decided to draw higher education more centrally into the war effort – first, as the central site for military research and development; and second, as the place to send veterans when the war was over.  Let me say a little about each.

            In the first half of the twentieth century, university researchers had to scrabble around looking for funding, forced to rely on a mix of foundations, corporations, and private donors.  The federal government saw little benefit in employing their services.  In a particularly striking case at the start of World War I, the professional association of academic chemists offered its help to the War Department, which declined “on the grounds that it already had a chemist in its employ” (Levine, 1986, p. 51).[1]  The existing model was for government to maintain its own modest research facilities instead of relying on the university.

            The scale of the next war changed all this.  At the very start, a former engineering dean from MIT, Vannevar Bush, took charge of mobilizing university scientists behind the war effort as head of the Office of Scientific Research and Development.  The model he established for managing the relationship between government and researchers set the pattern for university research that still exists in the U.S. today: Instead of setting up government centers, the idea was to farm out research to universities.  Issue a request for proposals to meet a particular research need; award the grant to the academic researchers who seemed best equipped to meet this need; and pay 50 percent or more overhead to the university for the facilities that researchers would use.  This method drew on the expertise and facilities that already existed at research universities, which both saved the government from having to maintain a costly permanent research operation and also gave it the flexibility to draw on the right people for particular projects.  For universities, it provided a large source of funds, which enhanced their research reputations, helped them expand faculty, and paid for infrastructure.  It was a win-win situation.  It also established the entrepreneurial model of the university researcher in perpetual search for grant money.  And for the first time in the history of American higher education, the university was being considered a public good, whose research capacity could serve the national interest by helping to win a war. 

            If universities could meet one national need during the war by providing military research, they could meet another national need after the war by enrolling veterans.  The GI Bill of Rights, passed by Congress in 1944, was designed to pay off a debt and resolve a manpower problem.  Its official name, the Servicemen’s Readjustment Act of 1944, reflects both aims.  By the end of the war there were 15 million men and women who had served in the military, who clearly deserved a reward for their years of service to the country.  The bill offered them the opportunity to continue their education at federal expense, which included attending the college of their choice.  This opportunity also offered another public benefit, since it responded to deep concern about the ability of the economy to absorb this flood of veterans.  The country had been sliding back into depression at the start of the war, and the fear was that massive unemployment at war’s end was a real possibility.  The strategy worked.  Under the GI Bill, about two million veterans eventually attended some form of college.  By 1948, when veteran enrollment peaked, American colleges and universities had one million more students than 10 years earlier (Geiger, 2004, pp. 40-41; Carter et al., 2006, Table Bc523).  This was another win-win situation.  The state rewarded national service, headed off mass unemployment, and produced a pile of human capital for future growth.  Higher education got a flood of students who could pay their own way.  The worry, of course, was what was going to happen when the wartime research contracts ended and the veterans graduated.

            That’s where the Cold War came in to save the day.  And the timing was perfect.  The first major action of the new conflict – the Berlin Blockade – came in 1948, the same year that veteran enrollments at American colleges reached their peak.  If World War II was good for American higher education, the Cold War was a bonanza.  The hot war meant boom and bust – providing a short surge of money and students followed by a sharp decline.  But the Cold War was a prolonged effort to contain Communism.  It was sustainable because actual combat was limited and often carried out by proxies.  For universities this was a gift that, for 30 years, kept on giving.  The military threat was massive in scale – nothing less than the threat of nuclear annihilation.  And supplementing it was an ideological challenge – the competition between two social and political systems for hearts and minds.  As a result, the government needed top universities to provide it with massive amounts of scientific research that would support the military effort.  And it also needed all levels of the higher education system to educate the large numbers of citizens required to deal with the ideological menace.  We needed to produce the scientists and engineers who would allow us to compete with Soviet technology.  We needed to provide high-level human capital in order to promote economic growth and demonstrate the economic superiority of capitalism over communism.  And we needed to provide educational opportunity for our own racial minorities and lower classes in order to show that our system is not only effective but also fair and equitable.  This would be a powerful weapon in the effort to win over the third world with the attractions of the American Way.  The Cold War American government treated the higher education system as a highly valuable public good, which would make a large contribution to the national interest; and the system was pleased to be the object of so much federal largesse (Loss, 2012).

            On the research side, the impact of the Cold War on American universities was dramatic.  The best way to measure this is by examining patterns of federal research and development spending over the years, which traces the ebb and flow of national threats across the last 60 years.  Funding rose slowly from $13 billion in 1953 (in constant 2014 dollars) until the Sputnik crisis (after the Soviets succeeded in placing the first satellite in earth orbit), when funding jumped to $40 billion in 1959 and rose rapidly to a peak of $88 billion in 1967.  Then the amount backed off to $66 billion in 1975, climbing to a new peak of $104 billion in 1990 just before the collapse of the Soviet Union and then dropping off.  It started growing again in 2002 after the attack on the Twin Towers, reaching an all-time high of $151 billion in 2010 and has been declining ever since (AAAS, 2014).[2]

            Initially, defense funding accounted for 85 percent of federal research funding, gradually falling back to about half in 1967, as nondefense funding increased, but remaining in a solid majority position up until the present.  For most of the period after 1957, however, the largest element in nondefense spending was research on space technology, which arose directly from the Soviet Sputnik threat.  If you combine defense and space appropriations, this accounts for about three-quarters of federal research funding until 1990.  Defense research closely tracked perceived threats in the international environment, dropping by 20 percent after 1989 and then making a comeback in 2001.  Overall, federal funding during the Cold War for research of all types grew in constant dollars from $13 billion in 1953 to $104 billion in 1990, an increase of 700 percent.  These were good times for university researchers (AAAS, 2014).

            At the same time that research funding was growing rapidly, so were college enrollments.  The number of students in American higher education grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969.  The number hit 11.6 million in 1979 and then began to slow down – creeping up to 13.5 million in 1989 and leveling off at around 14 million in the 1990s (Carter et al., 2006, Table Bc523; NCES, 2014, Table 303.10).  During the 30 years between 1949 and 1979, enrollments increased by more than 9 million students, a growth of almost 400 percent.  And the bulk of the enrollment increases in the last two decades were in part-time students and at two-year colleges.  Among four-year institutions, the primary growth occurred not at private or flagship public universities but at regional state universities, the former normal schools.  The Cold War was not just good for research universities; it was also great for institutions of higher education all the way down the status ladder.

            In part we can understand this radical growth in college enrollments as an extension of the long-term surge in consumer demand for American higher education as a private good.  Recall that enrollments started accelerating late in the nineteenth century, when college attendance started to provide an edge in gaining middle class jobs.  This meant that attending college gave middle-class families a way to pass on social advantage while attending high school gave working-class families a way to gain social opportunity.  But by 1940, high school enrollments had become universal.  So for working-class families, the new zone of social opportunity became higher education.  This increase in consumer demand provided a market-based explanation for at least part of the flood of postwar enrollments.

            At the same time, however, the Cold War provided a strong public rationale for broadening access to college.  In 1946, President Harry Truman appointed a commission to provide a plan for expanding access to higher education, which was the first time in American history that a president had sought advice about education at any level.  The result was a six-volume report with the title Higher Education for American Democracy.  It’s no coincidence that the report was issued in 1947, the starting point of the Cold War.  The authors framed the report around the new threat of atomic war, arguing that “It is essential today that education come decisively to grips with the world-wide crisis of mankind” (President’s Commission, 1947, vol. 1, p. 6).  What they proposed as a public response to the crisis was a dramatic increase in access to higher education.

            The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate school, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitudes and interests.
        This means that we shall aim at making higher education equally available to all young people, as we now do education in the elementary and high schools, to the extent that their capacity warrants a further social investment in their training (President’s Commission, 1947, vol. 1, p. 36).

Tellingly, the report devotes a lot of space to exploring the existing barriers to educational opportunity posed by class and race – exactly the kinds of issues that were making liberal democracies look bad in light of the egalitarian promise of communism.

Decline of the System’s Public Mission

            So in the mid twentieth century, Americans went through an intense but brief infatuation with higher education as a public good.  Somehow college was going to help save us from the communist menace and the looming threat of nuclear war.  Like World War II, the Cold War brought together a notoriously individualistic population around the common goal of national survival and the preservation of liberal democracy.  It was a time when every public building had an area designated as a bomb shelter.  In the elementary school I attended in the 1950s, I can remember regular air raid drills.  The alarm would sound and teachers would lead us downstairs to the basement, whose concrete-block walls were supposed to protect us from a nuclear blast.  Although the drills did nothing to preserve life, they did serve an important social function.  Like Sunday church services, these rituals drew individuals together into communities of faith where we enacted our allegiance to a higher power. 

            For American college professors, these were the glory years, when fear of annihilation gave us a glamorous public mission and what seemed like an endless flow of public funds and funded students.  But it did not – and could not – last.  Wars can bring great benefits to the home front, but then they end.  The Cold War lasted longer than most, but this longevity came at the expense of intensity.  By the 1970s, the U.S. had lived with the nuclear threat for 30 years without any sign that the worst case was going to materialize.  You can only stand guard for so long before attention begins to flag and ordinary concerns start to push back to the surface.  In addition, waging war is extremely expensive, draining both public purse and public sympathy.  The two Cold War conflicts that engaged American troops cost a lot, stirred strong opposition, and ended badly, providing neither the idealistic glow of the Good War nor the satisfying closure of unconditional surrender by the enemy.  Korea ended with a stalemate and the return to the status quo ante bellum.  Vietnam ended with defeat and the humiliating image in 1975 of the last Americans being plucked off a rooftop in Saigon – which the victors then promptly renamed Ho Chi Minh City.

            The Soviet menace and the nuclear threat persisted, but in a form that – after the grim experience of war in the rice paddies – seemed distant and slightly unreal.  Add to this the problem that, as a tool for defeating the enemy, the radical expansion of higher education by the 70s did not appear to be a cost-effective option.  Higher ed is a very labor-intensive enterprise, in which size brings few economies of scale, and its public benefits in the war effort were hard to pin down.  As the national danger came to seem more remote, the costs of higher ed became more visible and more problematic.  Look around any university campus, and the primary beneficiaries of public largesse seem to be private actors – the faculty and staff who work there and the students whose degrees earn them higher income.  So about 30 years into the Cold War, the question naturally arose:  Why should the public pay so much to provide cushy jobs for the first group and to subsidize the personal ambition of the second?  If graduates reap the primary benefits of a college education, shouldn’t they be paying for it rather than the beleaguered taxpayer?

            The 1970s marked the beginning of the American tax revolt, and not surprisingly this revolt emerged first in the bellwether state of California.  Fueled by booming defense plants and high immigration, California had a great run in the decades after 1945.  During this period, the state developed the most comprehensive system of higher education in the country.  In 1960 it formalized this system with a Master Plan that offered every Californian the opportunity to attend college in one of three state systems.  The University of California focused on research, graduate programs, and educating the top high school graduates.  California State University (developed mostly from former teachers colleges) focused on undergraduate programs for the second tier of high school graduates.  The community college system offered the rest of the population two-year programs for vocational training and possible transfer to one of the two university systems.  By 1975, there were 9 campuses in the University of California, 23 in California State University, and xx in the community college system, with a total enrollment across all systems of 1.5 million students – accounting for 14 percent of the college students in the U.S. (Carter et al., 2006, Table Bc523; Douglass, 2000, Table 1).  Not only was the system enormous, but the Master Plan declared it illegal to charge California students tuition.  The biggest and best public system of higher education in the country was free.

            And this was the problem.  What allowed the system to grow so fast was a state fiscal regime that was quite rare in the American context – one based on high public services supported by high taxes.  After enjoying the benefits of this combination for a few years, taxpayers suddenly woke up to the realization that this approach to paying for higher education was at core un-American.  For a country deeply grounded in liberal democracy, the system of higher ed for all at no cost to the consumer looked a lot like socialism.  So, of course, it had to go.  In the mid-1970s the country’s first taxpayer revolt emerged in California, culminating in a successful campaign in 1978 to pass a state-wide initiative that put a limit on increases in property taxes.  Other tax limitation initiatives followed (Martin, 2008).  As a result, the average state appropriation per student at the University of California dropped from about $3,400 (in 1960 dollars) in 1987 to $1,100 in 2010, a decline of 68 percent (UC Data Analysis, 2014).  This quickly led to a steady increase in fees charged to students at California’s colleges and universities.  (It turned out that tuition was illegal but demanding fees from students was not.)  In 1960 dollars, the annual fees for in-state undergraduates at the University of California rose from $317 in 1987 to $1,122 in 2010, an increase of more than 250 percent (UC Data Analysis, 2014).  This pattern of tax limitations and tuition increases spread across the country.  Nationwide during the same period of time, the average state appropriation per student at a four-year public college fell from $8,500 to $5,900 (in 2012 dollars), a decline of 31 percent, while average undergraduate tuition doubled, rising from $2,600 to $5,200 (SHEEO, 2013, Figure 3).

            The decline in the state share of higher education costs was most pronounced at the top public research universities, which had a wider range of income sources.  By 2009, the average such institution was receiving only 25 percent of its revenue from state government (National Science Board, 2012, Figure 5).  An extreme case is the University of Virginia, where in 2013 the state provided less than six percent of the university’s operating budget (University of Virginia, 2014).

            While these changes were happening at the state level, the federal government was also backing away from its Cold War generosity to students in higher education.  Legislation such as the National Defense Education Act (1958) and Higher Education Act (1965) had provided support for students through a roughly equal balance of grants and loans.  But in 1980 the election of Ronald Reagan as president meant that the push to lower taxes would become national policy.  At this point, support for students shifted from cash support to federally guaranteed loans.  The idea was that a college degree was a great investment for students, which would pay long-term economic dividends, so they should shoulder an increasing share of the cost.  The proportion of total student support in the form of loans was 54 percent in 1975, 67 percent in 1985, and 78 percent in 1995, and the ratio has remained at that level ever since (McPherson & Schapiro, 1998, Table 3.3; College Board, 2013, Table 1).  By 1995, students were borrowing $41 billion to attend college, which grew to $89 billion in 2005 (College Board, 2014, Table 1).  At present, about 60 percent of all students accumulate college debt, most of it in the form of federal loans, and the total student debt load has passed $1 trillion.

            At the same time that the federal government was cutting back on funding college students, it was also reducing funding for university research.  As I mentioned earlier, federal research grants in constant dollars peaked at about $100 billion in 1990, the year after the fall of the Berlin Wall – a good marker for the end of the Cold War.  At this point defense accounted for about two-thirds of all university research funding – three-quarters if you include space research.  Defense research declined by about 20 percent during the 90s and didn’t start rising again substantially until 2002, the year after the fall of the Twin Towers and the beginning of the new existential threat known as the War on Terror.  Defense research reached a new peak in 2009 at a level about a third above the Cold War high, and it has been declining steadily ever since.  Increases in nondefense research helped compensate for only a part of the loss of defense funds (AAAS, 2014).

Conclusion

            The American system of higher education came into existence as a distinctly private good.  It arose in the nineteenth century to serve the pursuit of sectarian advantage and land speculation, and then in the twentieth century it evolved into a system for providing individual consumers a way to get ahead or stay ahead in the social hierarchy.  Quite late in the game it took World War II to give higher education an expansive national mission and reconstitute it as a public good.  But hot wars are unsustainable for long, so in 1945 the system was sliding quickly back toward public irrelevance before it was saved by the timely arrival of the Cold War.  As I have shown, the Cold War was very, very good for the American system of higher education.  It produced a massive increase in funding by federal and state governments, both for university research and for college student subsidies, and – more critically – it sustained this support for a period of three decades.  But these golden years gradually gave way before a national wave of taxpayer fatigue and the surprise collapse of the Soviet Union.  With the nation strapped for funds and its global enemy dissolved, there was no longer an urgent need to enlist America’s colleges and universities in a grand national cause.  The result was a decade of declining research support and static student enrollments.  In 2002 the wars in Afghanistan and Iraq brought a momentary surge in both, but the increases peaked after only eight years and then went into decline again.  Increasingly, higher education is returning to its roots as a private good.

            So what are we to take away from this story of the rise and fall of the Cold War university?  One conclusion is that the golden age of the American university in the mid twentieth century was a one-off event.  Wars may be endemic but the Cold War was unique.  So American university administrators and professors need to stop pining for a return to the good old days and learn how to live in the post-Cold-War era.  The good news is that the impact of the surge in public investment in higher education has left the system in a radically stronger condition than it was in before World War II.  Enrollments have gone from 1.5 million to 21 million; federal research funding has gone from zero to $135 billion; federal grants and loans to college students have gone from zero to $170 billion (NCES, 2014, Table 303.10; AAAS, 2014; College Board, 2014, Table 1).  And the American system of colleges and universities went from an international also-ran to a powerhouse in the world economy of higher education.  Even though all of the numbers are now dropping, they are dropping from a very high level, which is the legacy of the Cold War.  So really, we should stop whining.  We should just say thanks to the bomb for all that it did for us and move on.

            The bad news, of course, is that the numbers really are going down.  Government funding for research is declining and there is no prospect for a turnaround in the foreseeable future.  This is a problem because the federal government is the primary source of funds for basic research in the U.S.; corporations are only interested in investing in research that yields immediate dividends.  During the Cold War, research universities developed a business plan that depended heavily on external research funds to support faculty, graduate students, and overhead.  That model is now broken.  The cost of pursuing a college education is increasingly being borne by the students themselves, as states are paying a declining share of the costs of higher education.  Tuition is rising and as a result student loans are rising.  Public research universities are in a particularly difficult position because their state funding is falling most rapidly.  According to one estimate, at the current rate of decline the average state fiscal support for public higher education will reach zero in 2059 (Mortenson, 2012). 

            But in the midst of all of this bad news, we need to keep in mind that the American system of higher education has a long history of surviving and even thriving under conditions of at best modest public funding.  At its heart, this is a system of higher education based not on the state but the market.  In the hardscrabble nineteenth century, the system developed mechanisms for getting by without the steady support of funds from church or state.  It learned how to attract tuition-paying students, give them the college experience they wanted, get them to identify closely with the institution, and then milk them for donations after they graduated.  Football, fraternities, logo-bearing T-shirts, and fund-raising operations all paid off handsomely.  It learned how to adapt quickly to trends in the competitive environment, whether it was the adoption of intercollegiate football, the establishment of research centers to capitalize on funding opportunities, or the provision of food courts and rock-climbing walls for students.  Public institutions have a long history of behaving much like private institutions because they were never able to count on continuing state funding.

            This system has worked well over the years.  Along with the Cold War, it has enabled American higher education to achieve an admirable global status.  By the measures of citations, wealth, drawing power, and Nobel prizes, the system has been very effective.  But it comes with enormous costs.  Private universities have serious advantages over public universities, as we can see from university rankings.  The system is the most stratified structure of higher education in the world.  Top universities in the U.S. get an unacknowledged subsidy from the colleges at the bottom of the hierarchy, which receive less public funding, charge less tuition, and receive less generous donations.  And students sort themselves into institutions whose place in the college hierarchy parallels their own place in the status hierarchy.  Students with more cultural capital and economic capital gain greater social benefit from the system than those with less, since they go to college more often, attend the best institutions, and graduate at a much higher rate.  Nearly everyone can go to college in the U.S., but the colleges that are most accessible provide the least social advantage.

            So, conceived and nurtured into maturity as a private good, the American system of higher education remains a market-based organism.  It took the threat of nuclear war to turn it – briefly – into a public good.  But these days seem as remote as the time when schoolchildren huddled together in a bomb shelter. 

References

American Association for the Advancement of Science. (2014). Historical Trends in Federal R & D: By Function, Defense and Nondefense R & D, 1953-2015.  http://www.aaas.org/page/historical-trends-federal-rd (accessed 8-21-14).

Bledstein, B. J. (1976). The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York:  W. W. Norton.

Boorstin, D. J. (1965). Culture with Many Capitals: The Booster College. In The Americans: The National Experience (pp. 152-161). New York: Knopf Doubleday.

Brown, D. K. (1995). Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism. New York: Teachers College Press.

Carter, S. B., et al. (2006). Historical Statistics of the United States, Millennial Edition Online. New York: Cambridge University Press.

College Board. (2013). Trends in Student Aid, 2013. New York: The College Board.

College Board. (2014). Trends in Higher Education: Total Federal and Nonfederal Loans over Time.  https://trends.collegeboard.org/student-aid/figures-tables/growth-federal-and-nonfederal-loans-over-time (accessed 9-4-14).

Collins, R. (1979). The Credential Society: An Historical Sociology of Education and Stratification. New York: Academic Press.

Douglass, J. A. (2000). The California Idea and American Higher Education: 1850 to the 1960 Master Plan. Stanford, CA: Stanford University Press.

Garrett, T. A., & Rhine, R. M. (2006).  On the Size and Growth of Government. Federal Reserve Bank of St. Louis Review, 88:1 (pp. 13-30).

Geiger, R. L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Goldin, C. & Katz, L. F. (2008). The Race between Education and Technology. Cambridge: Belknap Press of Harvard University Press.

Institute of Higher Education, Shanghai Jiao Tong University.  (2013).  Academic Ranking of World Universities – 2013.  http://www.shanghairanking.com/ARWU2013.html (accessed 6-11-14).

Levine, D. O. (1986). The American College and the Culture of Aspiration, 1914-1940. Ithaca: Cornell University Press.

Loss, C. P.  (2011).  Between Citizens and the State: The Politics of American Higher Education in the 20th Century. Princeton, NJ: Princeton University Press.

Martin, I. W. (2008). The Permanent Tax Revolt: How the Property Tax Transformed American Politics. Stanford, CA: Stanford University Press.

McPherson, M. S. & Schapiro, M. O.  (1999).  Reinforcing Stratification in American Higher Education:  Some Disturbing Trends.  Stanford: National Center for Postsecondary Improvement.

Mortenson, T. G. (2012).  State Funding: A Race to the Bottom.  The Presidency (winter).  http://www.acenet.edu/the-presidency/columns-and-features/Pages/state-funding-a-race-to-the-bottom.aspx (accessed 10-18-14).

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

National Science Board. (2012). Diminishing Funding Expectations: Trends and Challenges for Public Research Universities. Arlington, VA: National Science Foundation.

Potts, D. B. (1971).  American Colleges in the Nineteenth Century: From Localism to Denominationalism. History of Education Quarterly, 11: 4 (pp. 363-380).

President’s Commission on Higher Education. (1947). Higher Education for American Democracy: A Report. Washington, DC: US Government Printing Office.

Rüegg, W. (2004). European Universities and Similar Institutions in Existence between 1812 and the End of 1944: A Chronological List: Universities.  In Walter Rüegg, A History of the University in Europe, vol. 3. London: Cambridge University Press.

State Higher Education Executive Officers (SHEEO). (2013). State Higher Education Finance, FY 2012. www.sheeo.org/sites/default/files/publications/SHEF-FY12.pdf (accessed 9-8-14).

Terkel, S. (1997). The Good War: An Oral History of World War II. New York: New Press.

Tewksbury, D. G. (1932). The Founding of American Colleges and Universities before the Civil War. New York: Teachers College Press.

U of California Data Analysis. (2014). UC Funding and Fees Analysis.  http://ucpay.globl.org/funding_vs_fees.php (accessed 9-2-14).

University of Virginia (2014). Financing the University 101. http://www.virginia.edu/finance101/answers.html (accessed 9-2-14).

[1] Under pressure of the war effort, the department eventually relented and enlisted the help of chemists to study gas warfare.  But the initial response is telling.

[2] Not all of this funding went into the higher education system.  Some went to stand-alone research organizations such as the Rand Corporation and the American Institutes for Research.  But these organizations in many ways function as an adjunct to higher education, with researchers moving freely between them and the university.

Posted in Academic writing, Course Syllabus, Writing Class

Class on Academic Writing

This is the syllabus for a class on academic writing for clarity and grace, which I originally posted more than a year ago.  It is designed as a 10-week class, with weekly readings, slides, and texts for editing.  It’s aimed at doctoral students who are preparing to become researchers who seek to publish their scholarship.  Ideally you can take the class with a group of peers, where you give each other feedback on your own writing projects in progress.  But you can also take the class by yourself.

Below is the syllabus, which includes links to all readings, class slides, and texts for editing.  Here’s a link to the Word document with all of the links, which is easier to work with.

I’ve also constructed a 6-week version of the class, which is aimed at graduate and undergraduate students who want to work on their writing for whatever purpose they choose.  Here’s a link to that syllabus as a Word document.

 

“The effort the writer does not put into writing, the reader has to put into reading.”

Stephen Toulmin

Academic Writing for Clarity and Grace

A Ten-Week Class

David Labaree                            

Web: http://www.stanford.edu/~dlabaree/

Twitter: @Dlabaree

Blog: https://davidlabaree.com/                                                     

                                                Course Description

            The title sounds like a joke, since academics (especially in the social sciences) do not have a reputation for writing with either clarity or grace, much less both.  But I hope in this class to draw students into my own and every other academic’s lifelong quest to become a better writer.  The course will bring in a wide range of reference works that I have found useful over the years in working on my own writing and in helping students with theirs.  The idea is not that a 10-week class will make students good writers; many of us have been working at this for 40 years or more and we’re just getting started.  Instead, the plan is to provide students with some helpful strategies, habits, and critical faculties; increase their sense of writing as an extended process of revision; and leave them with a set of books that will support them in their own lifelong pursuit of good writing.

This online course is based on one I used to teach at Stanford for graduate students in education who wanted to work on their writing.  It was offered in the ten-week format of the university’s quarter system, and I’m keeping that format.  But you can use it in any way that works for you. 

Some may want to treat it as a weekly class, doing the readings for each week, reviewing the PowerPoint slides for that week, and working through some of the exercises.  If you’re treating it this way, it would work best if you can do it with a writing group made up of other students with similar interests.  That way you can take advantage of the workshop component of the class, in which members of the group exchange sections of a paper they’re working on, giving and receiving feedback.

Others may use it as a general source of information about writing, diving into particular readings or slide decks as needed.

Classes include some instruction on particular skills and particular aspects of the writing process:  developing an analytical angle on a subject; writing a good sentence; getting started in the writing process; working out the logic of the argument; developing the forms of validation for the argument; learning what your point is from the process of writing rather than as a precursor to writing; and revising, revising, revising.  We spend another part of the class working as a group doing exercises in spotting and fixing problems.  For these purposes we will use some helpful examples from the Williams book and elsewhere that focus on particular skills, but you can use the work produced within your own writing group. 

Work in your writing group:  Everyone needs to recognize the value of getting critical feedback from others on their work in progress, so you should exchange papers and work at editing each other’s drafts.  Student work outside of class will include reading required texts, editing other students’ work around particular areas of concern, and working on revising your own paper or papers.  Every week you will be submitting a piece of written work to your writing group, which will involve repeated efforts to edit a particular text of your own; and every week you will provide feedback to others in your group about their own texts. 

Much of class time will focus on working on particular texts around a key issue of the day – like framing, wordiness, clarity, sentence rhythm.  These texts will be examples from the readings and also papers by students, on which they would like to get feedback from the class as a whole.  Topics will include things like:

  • Framing an argument, writing the introduction to a paper
  • Elements of rhetoric
  • Sentence rhythm and music
  • Emphasis – putting the key element at the end of sentence and paragraph; delivering the punch line
  • Concision – eliminating wordiness
  • Clarity – avoiding nominalizations; opting for Anglo-Saxon words; clearing up murky syntax
  • Focusing on action and actors
  • Metaphor and imagery
  • Correct usage: punctuation, common grammatical errors, word use
  • Avoiding the most common academic tics: jargon, isms, Latinate constructions, nominalizations, abstraction, hiding from view behind passive voice and third person
  • The basics of making an argument
  • Using quotes – integrating them into your argument, and commenting on them instead of assuming they make the point on their own.
  • Using data – how to integrate data into a text and explain its meaning and significance
  • The relation of writing and thought
  • Revision – of writing and thinking
  • The relation of grammar and mechanics to rhetorical effect
  • Sentence style
  • The relation of style to audience
  • Disciplinary conventions for style, organization, modes of argument, evidence
  • Authority and voice

            Writing is a very personal process and the things we write are expressions of who we are, so it is important for everyone in the class to keep focused on being constructive in their comments and being tolerant of criticism from others.  Criticism from others is very important for writers, but no one likes it.  I have a ritual every time I get feedback on a paper or manuscript – whether blind reviews from journals or publishers or personal comments from colleagues.  I let the review sit for a while until I’m in the right mood.  Then I open it and skim it quickly to get the overall impression of how positive or negative it is.  At that point I set it aside, cursing the editors for sending the paper to such an incompetent reviewer or reconsidering my formerly high opinion of the particular colleague-critic, then finally coming back a few days later (after a vodka or two) to read the thing carefully and assess the damage.  Neurotic, I know, but most writers are neurotic about their craft.  It’s hard not to take criticism personally.  Beyond all reason, I always expect the reviewers to say, “Don’t change a word; publish it immediately!”  But somehow they never do.  So I’m asking all members of the class both to recognize the vulnerability of their fellow writers and to open themselves up to the criticism of these colleagues in the craft. 

Course Texts

Books listed with an * are ones where older editions are available; it’s ok to use one of these editions instead of the most recent version.

*Williams, Joseph M. & Bizup, Joseph.  (2016). Style: Lessons in clarity and grace (12th ed.).  New York: Longman.  

*Becker, Howard S.  (2007).  Writing for social scientists:  How to start and finish your thesis, book, or article (2nd ed.).  Chicago: University of Chicago Press.

*Graff, Gerald, & Birkenstein, Cathy. (2014). “They say, I say:” The moves that matter in academic writing (3rd ed.). New York: Norton.

Sword, Helen.  (2012).  Stylish academic writing. Cambridge: Harvard University Press.

*Garner, Bryan A.  (2016). Garner’s modern English usage (4th ed.). New York: Oxford University Press.  (Any earlier edition is fine to use.)

Other required readings are available in PDF on a Google drive. 

Course Outline

Week 1:  Introduction to Course; Writing Rituals; Writing Well, or at Least Less Badly

Zinsser, William. (2010). Writing English as a second language.  Point of Departure (Winter). Americanscholar.org.

Munger, Michael C. (2010). 10 tips for how to write less badly. Chronicle of Higher Education (Sept. 6).  Chronicle.com.

Lepore, Jill. (2009). How to write a paper for this class. History Department, Harvard University.

Lamott, Anne. (2005). Bird by bird: Some instructions on writing and life. In English 111 Reader.  Miami University Department of English.

Zuckerman, Ezra W. (2008). Tips to article writers. http://web.mit.edu/ewzucker/www/Tips%20to%20article%20writers.pdf.

Slides for week 1 class

Week 2:  Clarity

Williams, Joseph M. & Bizup, Joseph.  (2016).  Style: Lessons in clarity and grace (12th ed.).  New York: Longman. Lessons One, Two, Three, Four, Five, and Six.  It’s ok to use any earlier edition of this book.

Slides for week 2 class

Week 3:  Structuring the Argument in a Paper

Graff, Gerald, & Birkenstein, Cathy. (2014). “They say, I say:” The moves that matter in academic writing (3rd ed.). New York: Norton.  You can use any earlier edition of this book.

Wroe, Ann. (2011). In the beginning was the sound. Intelligent Life Magazine, Spring. http://moreintelligentlife.com/content/arts/ann-wroe/beginning-was-sound.

Slides for week 3 class

Week 4:  Grace

Williams, Joseph M. & Bizup, Joseph.  (2016).  Style: Lessons in clarity and grace (12th ed.).  New York: Longman. Lessons Seven, Eight, and Nine.

Orwell, George. (1946). Politics and the English Language. Horizon.

Lipton, Peter. (2007). Writing Philosophy.

Slides for week 4 class

Week 5:  Stylish Academic Writing

Sword, Helen.  (2012).  Stylish academic writing. Cambridge: Harvard University Press.

Check out Helen Sword’s website, Writer’s Diet, which allows you to paste in a text of your own and get back an analysis of how flabby or fit it is: http://www.writersdiet.com/WT.php.

Haslett, Adam. (2011). The art of good writing. Financial Times (Jan. 22).  Ft.com.

Slides for week 5 class

Week 6:  Writing in the Social Sciences

Becker, Howard S.  (2007).  Writing for social scientists:  How to start and finish your thesis, book, or article (2nd ed.).  Chicago: University of Chicago Press.  It’s fine to use any earlier edition of this book.

Slides for week 6 class

Week 7:  Usage

Garner, Bryan A.  (2016). Garner’s modern English usage (4th ed.). New York: Oxford University Press.  Selections.  Any earlier edition of this book is fine to use.

Wallace, David Foster. (2001). Tense present: Democracy, English, and the wars over usage. Harper’s (April), 39-58.

Slides for week 7 class

Week 8:  Writing with Clarity and Grace

Limerick, Patricia. (1993). Dancing with professors: The trouble with academic prose.

Brauer, Scott. (2014). Writing instructor, skeptical of automated grading, pits machine vs. machine. Chronicle of Higher Education, April 28.

Pinker, Steven. (2014). Why academics stink at writing. Chronicle of Higher Education, Sept. 26.

Labaree, David F. (2018). The Five-Paragraph Fetish. Aeon.

Slides for week 8 class

Week 9:  Clarity of Form

Williams, Joseph M. & Bizup, Joseph.  (2016).  Style: Lessons in clarity and grace (12th ed.).  New York: Longman. Lessons Ten, Eleven, and Twelve.

Yagoda, Ben. (2011). The elements of clunk. Chronicle of Higher Education (Jan. 2).  Chronicle.com.

 Slides for week 9 class

Week 10:  Writing with Clarity and Grace

March, James G. (1975). Education and the pursuit of optimism. Texas Tech Journal of Education, 2:1, 5-17.

Gladwell, Malcolm. (2000). The art of failure: Why some people choke and others panic. New Yorker (Aug. 21 and 28).  Gladwell.com

Labaree, David F. (2012). Sermon on educational research. Bildungsgeschichte: International Journal for the Historiography of Education, 2:1, 78-87.

Slides for week 10 class

Posted in Capitalism, Higher Education, Meritocracy, Politics

Sandel: The Tyranny of Merit

This post is a reflection on Michael Sandel’s new book, The Tyranny of Merit: What’s Become of the Common Good?  He’s a philosopher at Harvard and this is his analysis of the dangers posed by the American meritocracy.  The issue is one I’ve been exploring here for the last two years in a variety of posts (here, here, here, here, here, here, and here).

I find Sandel’s analysis compelling, both in the ways it resonates with other takes on the subject and also in his distinctive contributions to the discussion.  My only complaint is that the whole discussion could have been carried out more effectively in a single magazine article.  The book tends to be repetitive, and it also gets into the weeds on some philosophical issues that blur its focus and undercut its impact.  Here I present what I think are the key points.  I hope you find it useful.

Sandel Cover

Both the good news and the bad news about meritocracy lie in its promise of opportunity for all based on individual merit rather than the luck of birth.  It’s hard to hate a principle that frees us from the tyranny of inheritance. 

The meritocratic ideal places great weight on the notion of personal responsibility. Holding people responsible for what they do is a good thing, up to a point. It respects their capacity to think and act for themselves, as moral agents and as citizens. But it is one thing to hold people responsible for acting morally; it is something else to assume that we are, each of us, wholly responsible for our lot in life.

The problem is that simply calling the new model of status attainment “achievement” rather than “ascription” doesn’t mean that your ability to get ahead is truly free of circumstances beyond your control.  

But the rhetoric of rising now rings hollow. In today’s economy, it is not easy to rise. Americans born to poor parents tend to stay poor as adults. Of those born in the bottom fifth of the income scale, only about one in twenty will make it to the top fifth; most will not even rise to the middle class. It is easier to rise from poverty in Canada or Germany, Denmark, and other European countries than it is in the United States.

The meritocratic faith argues that the social structure of inequality provides a powerful incentive for individuals to work hard to get ahead in order to escape from a bad situation and move on to something better.  The more inequality, such as in the US, the more incentive to move up.  The reality, however, is quite different.

But today, the countries with the highest mobility tend to be those with the greatest equality. The ability to rise, it seems, depends less on the spur of poverty than on access to education, health care, and other resources that equip people to succeed in the world of work.

Sandel goes on to point out additional problems with meritocracy beyond the difficulties in trying to get ahead all on your own: 1) demoralizing the losers in the race; 2) denigrating those without a college degree; and 3) turning politics into the realm of the expert rather than the citizen.

The tyranny of merit arises from more than the rhetoric of rising. It consists in a cluster of attitudes and circumstances that, taken together, have made meritocracy toxic. First, under conditions of rampant inequality and stalled mobility, reiterating the message that we are responsible for our fate and deserve what we get erodes solidarity and demoralizes those left behind by globalization. Second, insisting that a college degree is the primary route to a respectable job and a decent life creates a credentialist prejudice that undermines the dignity of work and demeans those who have not been to college; and third, insisting that social and political problems are best solved by highly educated, value-neutral experts is a technocratic conceit that corrupts democracy and disempowers ordinary citizens.

Consider the first point. Meritocracy fosters triumphalism for the winners and despair for the losers.  Whether you succeed or fail, you alone get the credit or the blame.  This was not the case in the bad old days of aristocrats and peasants.

If, in a feudal society, you were born into serfdom, your life would be hard, but you would not be burdened by the thought that you were responsible for your subordinate position. Nor would you labor under the belief that the landlord for whom you toiled had achieved his position by being more capable and resourceful than you. You would know he was not more deserving than you, only luckier.

If, by contrast, you found yourself on the bottom rung of a meritocratic society, it would be difficult to resist the thought that your disadvantage was at least partly your own doing, a reflection of your failure to display sufficient talent and ambition to get ahead. A society that enables people to rise, and that celebrates rising, pronounces a harsh verdict on those who fail to do so.

This triumphalist aspect of meritocracy is a kind of providentialism without God, at least without a God who intervenes in human affairs. The successful make it on their own, but their success attests to their virtue. This way of thinking heightens the moral stakes of economic competition. It sanctifies the winners and denigrates the losers.

One key issue that makes meritocracy potentially toxic is its assumption that we deserve the talents that earn us such great rewards.

There are two reasons to question this assumption. First, my having this or that talent is not my doing but a matter of good luck, and I do not merit or deserve the benefits (or burdens) that derive from luck. Meritocrats acknowledge that I do not deserve the benefits that arise from being born into a wealthy family. So why should other forms of luck—such as having a particular talent—be any different? 

Second, that I live in a society that prizes the talents I happen to have is also not something for which I can claim credit. This too is a matter of good fortune. LeBron James makes tens of millions of dollars playing basketball, a hugely popular game. Beyond being blessed with prodigious athletic gifts, LeBron is lucky to live in a society that values and rewards them. It is not his doing that he lives today, when people love the game at which he excels, rather than in Renaissance Florence, when fresco painters, not basketball players, were in high demand.

The same can be said of those who excel in pursuits our society values less highly. The world champion arm wrestler may be as good at arm wrestling as LeBron is at basketball. It is not his fault that, except for a few pub patrons, no one is willing to pay to watch him pin an opponent’s arm to the table.

He then moves on to the second point, about the central role of college in determining who’s got merit. 

Should colleges and universities take on the role of sorting people based on talent to determine who gets ahead in life?

There are at least two reasons to doubt that they should. The first concerns the invidious judgments such sorting implies for those who get sorted out, and the damaging consequences for a shared civic life. The second concerns the injury the meritocratic struggle inflicts on those who get sorted in and the risk that the sorting mission becomes so all-consuming that it diverts colleges and universities from their educational mission. In short, turning higher education into a hyper-competitive sorting contest is unhealthy for democracy and education alike.

The difficulty of predicting which talents are most socially beneficial is particularly acute for the complex array of skills that people pick up in college.  Which ones matter most for determining a person’s ability to make an important contribution to society and which don’t?  How do we know if an elite college provides more of those skills than an open-access college?  This matters because a graduate from the former gets a much higher reward than one from the latter.  The pretense that a prestigious college degree is the best way to measure future performance is particularly difficult to sustain because success and degree are conflated.  Graduates of top colleges get the best jobs and thus seem to have the greatest impact, whereas non-grads never get the chance to show what they can do.

Another sports analogy helps to make this point.

Consider how difficult it is to assess even more narrowly defined talents and skills. Nolan Ryan, one of the greatest pitchers in the history of baseball, holds the all-time record for most strikeouts and was elected on the first ballot to baseball’s Hall of Fame. When he was eighteen years old, he was not signed until the twelfth round of the baseball draft; teams chose 294 other, seemingly more promising players before he was chosen. Tom Brady, one of the greatest quarterbacks in the history of football, was the 199th draft pick. If even so circumscribed a talent as the ability to throw a baseball or a football is hard to predict with much certainty, it is folly to think that the ability to have a broad and significant impact on society, or on some future field of endeavor, can be predicted well enough to justify fine-grained rankings of promising high school seniors.

And then there’s the third point, the damage that meritocracy does to democratic politics.  One element of this is that it turns politics into an arena for credentialed experts, consigning ordinary citizens to the back seat.  How many political leaders today are without a college degree?  Vanishingly few.  Another is that meritocracy not only bars non-grads from power but also denies them social respect.  

Grievances arising from disrespect are at the heart of the populist movement that has swept across Europe and the US.  Sandel calls this a “politics of humiliation.”

The politics of humiliation differs in this respect from the politics of injustice. Protest against injustice looks outward; it complains that the system is rigged, that the winners have cheated or manipulated their way to the top. Protest against humiliation is psychologically more freighted. It combines resentment of the winners with nagging self-doubt: perhaps the rich are rich because they are more deserving than the poor; maybe the losers are complicit in their misfortune after all.

This feature of the politics of humiliation makes it more combustible than other political sentiments. It is a potent ingredient in the volatile brew of anger and resentment that fuels populist protest.

Sandel draws on a wonderful book by Arlie Hochschild, Strangers in Their Own Land, in which she interviews Trump supporters in Louisiana.

Hochschild offered this sympathetic account of the predicament confronting her beleaguered working-class hosts:

You are a stranger in your own land. You do not recognize yourself in how others see you. It is a struggle to feel seen and honored. And to feel honored you have to feel—and feel seen as—moving forward. But through no fault of your own, and in ways that are hidden, you are slipping backward.

One consequence of this for those left behind is a rise in “deaths of despair.”

The overall death rate for white men and women in middle age (ages 45–54) has not changed much over the past two decades. But mortality varies greatly by education. Since the 1990s, death rates for college graduates declined by 40 percent. For those without a college degree, they rose by 25 percent. Here then is another advantage of the well-credentialed. If you have a bachelor’s degree, your risk of dying in middle age is only one quarter of the risk facing those without a college diploma. 

Deaths of despair account for much of this difference. People with less education have long been at greater risk than those with college degrees of dying from alcohol, drugs, or suicide. But the diploma divide in death has become increasingly stark. By 2017, men without a bachelor’s degree were three times more likely than college graduates to die deaths of despair.

Sandel offers two reforms that might help mitigate the tyranny of meritocracy.  One focuses on elite college admissions.  

Of the 40,000-plus applicants, winnow out those who are unlikely to flourish at Harvard or Stanford, those who are not qualified to perform well and to contribute to the education of their fellow students. This would leave the admissions committee with, say, 30,000 qualified contenders, or 25,000, or 20,000. Rather than engage in the exceedingly difficult and uncertain task of trying to predict who among them are the most surpassingly meritorious, choose the entering class by lottery. In other words, toss the folders of the qualified applicants down the stairs, pick up 2,000 of them, and leave it at that.

This helps get around two problems:  the difficulty in trying to predict merit; and the outsize rewards of a winner-take-all admissions system.  But good luck trying to get this put in place over the howls of outrage from upper-middle-class parents, who have learned how to game the system to their advantage.  Consider this one small example of the reaction when an elite Alexandria high school proposed random admission from a pool of the most qualified.

Another reform is more radical and even harder to imagine putting into practice.  It begins with reconsideration of what we mean by the “common good.”

The contrast between consumer and producer identities points to two different ways of understanding the common good. One approach, familiar among economic policy makers, defines the common good as the sum of everyone’s preferences and interests. According to this account, we achieve the common good by maximizing consumer welfare, typically by maximizing economic growth. If the common good is simply a matter of satisfying consumer preferences, then market wages are a good measure of who has contributed what. Those who make the most money have presumably made the most valuable contribution to the common good, by producing the goods and services that consumers want.

A second approach rejects this consumerist notion of the common good in favor of what might be called a civic conception. According to the civic ideal, the common good is not simply about adding up preferences or maximizing consumer welfare. It is about reflecting critically on our preferences—ideally, elevating and improving them—so that we can live worthwhile and flourishing lives. This cannot be achieved through economic activity alone. It requires deliberating with our fellow citizens about how to bring about a just and good society, one that cultivates civic virtue and enables us to reason together about the purposes worthy of our political community.

If we can carry out this deliberation — a big if indeed — then we can proceed to implement a system for shifting the basis for individual compensation from what the market is willing to pay to what we collectively feel is most valuable to society.  

Thinking about pay, most would agree that what people make for this or that job often overstates or understates the true social value of the work they do. Only an ardent libertarian would insist that the wealthy casino magnate’s contribution to society is a thousand times more valuable than that of a pediatrician. The pandemic of 2020 prompted many to reflect, at least fleetingly, on the importance of the work performed by grocery store clerks, delivery workers, home care providers, and other essential but modestly paid workers. In a market society, however, it is hard to resist the tendency to confuse the money we make with the value of our contribution to the common good.

To implement a system based on public benefit rather than marketability would require completely revamping our structure for determining salaries and taxes. 

The idea is that the government would provide a supplementary payment for each hour worked by a low-wage employee, based on a target hourly-wage rate. The wage subsidy is, in a way, the opposite of a payroll tax. Rather than deduct a certain amount of each worker’s earnings, the government would contribute a certain amount, in hopes of enabling low-income workers to make a decent living even if they lack the skills to command a substantial market wage.

Generally speaking, this would mean shifting the tax burden from work to consumption and speculation. A radical way of doing so would be to lower or even eliminate payroll taxes and to raise revenue instead by taxing consumption, wealth, and financial transactions. A modest step in this direction would be to reduce the payroll tax (which makes work expensive for employers and employees alike) and make up the lost revenue with a financial transactions tax on high-frequency trading, which contributes little to the real economy.

This is how Sandel ends his book:

The meritocratic conviction that people deserve whatever riches the market bestows on their talents makes solidarity an almost impossible project. For why do the successful owe anything to the less-advantaged members of society? The answer to this question depends on recognizing that, for all our striving, we are not self-made and self-sufficient; finding ourselves in a society that prizes our talents is our good fortune, not our due. A lively sense of the contingency of our lot can inspire a certain humility: “There, but for the grace of God, or the accident of birth, or the mystery of fate, go I.” Such humility is the beginning of the way back from the harsh ethic of success that drives us apart. It points beyond the tyranny of merit toward a less rancorous, more generous public life.

Posted in Democracy, Inequality, Meritocracy, Public Good

What the Old Establishment Can Teach the New Tech Elite

It is unlikely that Mark Zuckerberg, Jeff Bezos and the other lords and ladies of Silicon Valley spend any time in English churchyards. But if they were to visit these delightfully melancholic places, the first things that they would encounter would be monuments to the fallen of the Great War. Their initial emotion, like anybody else’s looking at these morbid plinths, would rightly be one of relief. It is good that the West’s young men are no longer herded into uniform and marched toward machine guns.

If they looked harder, however, today’s elite would spot something else in these cemeteries. The whole of society is commemorated in stone: The baronet’s heir was shot to pieces in Flanders alongside the gamekeeper’s son. Recall that in the controversial D.H. Lawrence novel “Lady Chatterley’s Lover,” Lady Chatterley is driven into the arms of the local gamekeeper in part because her husband, Sir Clifford, was paralyzed from the waist down in the Great War.

Such monuments to the dead, which can be found across Europe, are a reminder that a century ago the elite, whatever its other sins, believed in public service. The rich shared common experiences with the poor, rooted in a common love of their country and a common willingness to sacrifice life and limb for something bigger.

That bond survived until the 1960s. Most young men in Europe did a version of what was called “national service”: They had to serve in the armed forces for a couple of years and learned the rudiments of warfare in case total war struck again. The U.S. called on people of all classes to fight in World War II—including John F. Kennedy and George H.W. Bush, who were both nearly killed serving their country—and the Korean War.

The economic elites and the political elites were intertwined. In Britain, a “magic circle” of Old Etonians helped choose the leader of the Conservative Party, convening over lunch at the Beefsteak Club or dinner at Pratt’s to discuss the fate of the nation, as well as the quality of that year’s hunting. What became the European Union was constructed behind closed doors by the continent’s ruling class, while Charles de Gaulle set up the Ecole Nationale d’Administration for the purpose of training a new ruling elite for a new age. American presidents turned to “wise men” of the East Coast Establishment, such as Averell Harriman, the son of a railroad tycoon, or one of the Rockefellers. The “best and the brightest” were supposed to do a stint in Washington.

A memorial to soldiers who died in the two world wars, Oxfordshire, U.K.

The Establishment on both sides of the Atlantic was convinced that good government mattered more than anything else. Mess up government and you end up with the Depression and Hitler.

That sense has gone. The New Establishment of Wall Street and the City of London and the New New Establishment of Silicon Valley have precious little to do with Washington or Whitehall. The public sector is for losers. As today’s elite see it, the best thing that government can do is to get out of the way of the really talented people and let them exercise their wealth-creating magic. Pester them too much or tax them too heavily and they will pick up their sticks and take their game elsewhere.

As for common experiences, the smart young people who go from the Ivy League or Oxbridge to work at Google or Goldman Sachs are often as distant from the laboring masses as the class that H.G. Wells, in “The Time Machine,” called the Eloi—pampered, ethereal, childlike creatures that the time traveler discovers at the end of his long journey into the future. Separated from the masses by elite education and pricey lifestyles in fashionable enclaves, today’s elite often have few ties to the country they work in. One former British spy points out that his children are immensely better educated than he was and far more tolerant, but the only time they meet the working class is when their internet shopping arrives; they haven’t shared a barracks with them.

Does this matter? Again, many will point to progress. The old elite was overwhelmingly male and white (with a few exceptions, such as Lady Violet Bonham Carter and Katharine Graham, who often wielded power through dinner parties). It often made a hash of things. Britain’s “magic circle” didn’t cope well with the swinging ’60s—most catastrophically with the Profumo sex scandal, which greatly damaged the Conservative Party—while America’s whiz kids hardly excelled in Vietnam. By the 1960s, the very term “The Establishment” had become an insult.

Modern money is also far cleaner than old money. The officers who were mowed down at the Somme often came from grand homes, but they were built with the grubby proceeds of coal, slavery and slaughter. (Clifford Chatterley, in his wife’s view, treated miners “as objects rather than men.”) Say what you like against monopolistic tech barons, greedy hedge-fund managers or tax-dodging real estate tycoons, they aren’t sinners in the same league. Men like Mr. Bezos and Mr. Zuckerberg build great businesses and often give away their money to worthy causes. What more should they do?

Quite a lot, actually.

Lieutenant John F. Kennedy, right, and his PT 109 crew in the South Pacific, July 1943.

The idea that the elite has a responsibility to tend to the state was brilliantly set out by Plato more than 2,000 years ago. In “The Republic” he likened the state to a ship that can easily flounder on the rocks or head in the wrong direction. He argued that for a voyage to succeed, you need a captain who has spent his life studying “the seasons of the years, the skies, the stars and other professional subjects.” He wanted to trust his state to a group of Guardians, selected for their wisdom and character and trained, through an austere and demanding education, in the arts of government.

Covid-19 is a wake-up call for the West, especially for its elite. This year could mark a reverse in history. Five hundred years ago, Europe was a bloody backwater while China was the most advanced country in the world, with the world’s most sophisticated civil service, selected by rigorous examination from across the whole country. The West overtook the East because its leaders mastered the art of government, producing a succession of powerful innovations—the nation-state, the liberal state, the welfare state—while the Chinese state ossified, its Mandarin elite unaware that it was even in competition with anyone else. By the 1960s, America was putting a man on the moon while millions of Chinese were dying of starvation.

Since the 1960s, however, this process has been reversed. Led by Singapore, Asia has been improving its state machinery while the West has ossified. Covid-19 shows just how far this change in the balance of competence has gone. Countries like South Korea, Singapore and even China have done far better at protecting their citizens than either the U.S. or Britain, where governments have conspicuously failed to work.

The elite bears much of the responsibility for this sorry state of affairs. The 1960s was the last time that they had a marked sense of public duty. What followed might be called the great abandonment. The Vietnam War discredited “wise men” such as McGeorge Bundy, a self-styled Platonic Guardian who served as national security adviser to both JFK and LBJ. The Establishment split into warring tribes of progressives and conservatives who were so divided by the culture wars that they seldom come together to fix anything. The explosion of pay in the private sector drew talent away from government. The constant refrain from the Right that the state is a parasite on the productive economy eroded what remained of the public ethic, while the Left, drugged by its ties to public sector unions, lost its appetite for reform. Government became a zombie, preserved and indeed inflated by its staff and clients, but robbed of ideas and talent.

National Service recruits in the U.K. line up to be issued caps, 1953.

The difference with the East is marked. Singapore has put a Platonic premium on public service. It recruits the brightest young people for the government, makes sure they move frequently between the public and private sectors, and pays them well: Its top civil servants can earn more than a million dollars a year. (It stops short of forbidding its Guardians to marry and laying on orgies for them, as Plato advised, but it does force them to live in public housing.) Other Asian dragons have recruited a cadre of elite civil servants. China’s attempt to follow suit is complicated by the corruption and secrecy that surround the regime, but at its best it is learning from Singapore, creating a new class of mandarins, this time trained in technical fields and science rather than the classics.

What could the West do to rebind the elite to the state? Better pay for civil servants is one answer, especially if it comes with a keenness to shed poor performers in the public sector, as Singapore does. The idea of giving students generous university scholarships in exchange for working for the civil service for a number of years was pioneered by Thomas Jefferson. An even more ambitious idea would be to reintroduce nonmilitary national service, an idea that Emmanuel Macron has raised for France.

But the biggest change that is needed is a change of mind-set. Unlike the dead aristocrats in the churchyards, the geeks who run Google and Facebook have no sense of guilt to give them pause and few ties of blood and soil to connect them to a particular patch of land. They believe that their fortunes are the product of nothing but their own innate genius. They owe the rest of us nothing.

This needs to change. Over the past decade both the Democratic Party and the Republican Party have been shaken by the forces of populism. The shaking will only get worse if the elites don’t play a more active role in politics. Since the Covid-19 outbreak, we have been reminded that good government can make the difference between life and death. Look at the two cities where the Western elite feel most at home: New York has lost more than 20,000 people, London 6,000 (at times the mortality rate was higher than the Blitz). By contrast, in Seoul, a bigger city with subways, nightclubs and everything else, only around 30 have died.

We live in a knowledge economy. For elites, exercising social responsibility should mean more than giving away money, though that is an admirable thing. It should mean sharing your brain—serving, not just giving. Michael Bloomberg did that as mayor of New York during the difficult decade after 9/11 (disclosure: Mr. Bloomberg employs one of us), and Bill Gates is the greatest philanthropist of his time not just because of the amount of money he has spent but because he devotes so much time to designing and driving his philanthropic work.

The habit must be set from early adulthood. More bright young things need to remember John F. Kennedy’s call to duty and think not of what their country can do for them but what they can do for their country. If more of the young flowing out of the Ivy League and Oxbridge worked in the public sector, its technology wouldn’t be so shoddy and its ethos so sluggish.

There is a twist in the dystopian tale that H.G. Wells told in “The Time Machine” more than a century ago. The Eloi seem to live wonderful lives. They frolic above the ground, subsisting on a diet of fruit and living in futuristic (if deteriorating) buildings, while the Morlocks, brutish, apelike creatures, lurk underground, tending machinery and occasionally surfacing to feed and clothe the Eloi. But this is an illusion. The Morlocks are in fact farming the Eloi as a food source, just as we farm cattle, sheep and pigs.

Unless the ethic of public service is once again reignited, the American world order will ossify, just as other empires did before it. That is the message today’s Eloi should take from English churchyards.

Mr. Micklethwait is the editor in chief of Bloomberg and Mr. Wooldridge is the political editor of The Economist. This essay is adapted from their new book, “The Wake Up Call: Why the Pandemic Has Exposed the Weakness of the West, and How to Fix It,” published by Harper Via (which, like The Wall Street Journal, is owned by News Corp).

Posted in Higher Education

Kroger: In Praise of American Higher Education

This post is my effort to be upbeat for a change, looking at what’s good about US education.  It’s a recent essay by John Kroger, “In Praise of American Higher Education,” which was published in Inside Higher Ed.  Here’s a link to the original.  

Hope you enjoy it.  All is not bleak.

In Praise of American Higher Education

By

John Kroger

 September 14, 2020

These are grim times, filled with bad news. Nationally, the death toll from COVID-19 has passed 190,000. Political polarization has reached record levels, with some scholars openly fearing a fascist future for America. In my hometown of Portland, Ore., we have been buffeted by business closures, violent clashes between protesters and police, and out-of-control wildfires that have killed an unknown number of our fellow citizens, destroyed over a thousand homes and filled our streets with smoke. And in the higher education community, we are struggling. Our campuses are now COVID-19 hot spots, hundreds of institutions have implemented layoffs and furloughs impacting a reported 50,000 persons, and many commentators predict a complete financial meltdown for the sector. As I started to write this essay, a friend asked, “Is there any good news to report?”

In America today, we love to bash higher education. The negative drumbeat is incessant. Tuition, we hear, is too high. Students have to take too many loans. College does not prepare students for work. Inequality and racism are widespread. Just look at recent book titles: The Breakdown of Higher Education; Crisis in Higher Education; Intro to Failure; The Quiet Crisis, How Higher Education is Failing America; Higher Education Under Fire; The Dream Is Over; Cracks in the Ivory Tower, The Moral Mess of Higher Education; and The Coddling of the American Mind. Jeesh.

So, for good news today, I want to remind everyone that despite all the criticism, the United States possesses a remarkable higher education system. Yes, we have our problems, which we need to address. The government and our colleges and universities need to partner to expand access to college, make it more affordable and decrease loan burdens; we need to ensure that our students graduate with valuable job skills; we need to tackle inequality and systemic racism in admission, hiring and the curriculum. But let us not lose sight of the remarkable things we have achieved and the very real strengths our system possesses — the very strengths that will allow us to tackle and solve the problems we have identified. Consider the following:

The United States has, by far, the largest number of great universities in the world. In the latest Times World University Rankings, the United States is dominant, possessing 14 of the top 20 universities in the world. These universities — places like Yale, UC Berkeley and Johns Hopkins — provide remarkable undergraduate and graduate educations combined with world-leading research outcomes. That reputation for excellence has made the United States the international gold standard for higher education.

We provide remarkable value to our students. As a recent Brookings Institution report noted, “Higher education provides extensive benefits to students, including higher wages, better health, and a lower likelihood of requiring disability payments. A population that is more highly educated also confers wide-ranging benefits to the economy, such as lower rates of unemployment and higher wages even for workers without college degrees. A postsecondary degree can also serve as a buffer against unemployment during economic downturns. Those with postsecondary degrees saw more steady employment through the Great Recession, and the vast majority of net jobs created during the economic recovery went to college-educated workers.”

Our higher education capacity is massive. At last count, almost 20 million students are enrolled in college. This is one reason we are fourth (behind Canada, Japan and South Korea) out of all OECD nations in higher education degree attainment, far ahead of nations like Germany and France. If we believe that mass education is critical to the future of our economy and democracy, this high number — and the fact that most of our institutions could easily grow — should give us great hope.

The United States dominates global research (though China is gaining). As The Economist reported in 2018, “Since the first Nobel prizes were bestowed in 1901, American scientists have won a whopping 269 medals in the fields of chemistry, physics and physiology or medicine. This dwarfs the tallies of America’s nearest competitors, Britain (89), Germany (69) and France (31).” In a recent global ranking of university innovation — “a list that identifies and ranks the educational institutions doing the most to advance science, invent new technologies and power new markets and industries” — U.S. institutions grabbed eight out of the top 10 spots.

We possess an amazing network of community colleges offering very low-cost, high-quality foundational and continuing education to virtually every American. No matter where you live in the United States, a low-cost community college and a world of learning is just a few miles away. This network provides a great foundation for our effort to expand economic opportunity and reach underserved populations. As Secretary of Education Arne Duncan once remarked, “About half of all first-generation college students and minority students attend community colleges. It is a remarkable record. No other system of higher education in the world does so much to provide access and second-chance opportunities as our community colleges.”

We are nimble. Though higher education is often bashed for refusing to change, our ability to do so is remarkable. When COVID-19 broke out in spring 2020, almost every U.S. college and university pivoted successfully to online education in a matter of weeks. Faculty, staff and administrators, often criticized for failing to work together, collectively made this happen overnight. Now, no matter what the future holds, our colleges and universities have the ability to deliver education effectively through both traditional in-person and new online models.

We have a great tradition, starting with the GI Bill, of federal government support for college education. No one in Congress is calling for an end to Pell Grants, one of the few government programs to enjoy overwhelming bipartisan government support in this highly fractured political era. Instead, the only question is the degree to which those grants need to increase and whether that increase should be linked to cost containment by institutions or not. This foundation of political support is vital as we look to ways to expand college access and affordability.

Finally, we have amazing historically Black colleges and universities, with excellent academic programs, outstanding faculty and proud histories. As the nation begins to confront its history of racism and discrimination, these institutions provide a remarkable asset to help the nation come to terms with its past, provide transformational education in the present and move toward a better future.

So, as we go through tough times, and we continue to subject our institutions to necessary and valuable self-criticism, it is important to keep our failures and limitations in perspective. Yes, American higher education could be better. But it is remarkable, valuable and praiseworthy all the same.

Posted in Higher Education, History of education, Organization Theory, Sociology

College: What Is It Good For?

This post is the text of a lecture I gave in 2013 at the annual meeting of the John Dewey Society.  It was published the following year in the Society’s journal, Education and Culture.  Here’s a link to the published version.           

The story I tell here is not a philosophical account of the virtues of the American university but a sociological account of how those virtues arose as unintended consequences of a system of higher education that developed for less elevated reasons.  Drawing on the analysis in the book I was writing at the time, A Perfect Mess, I show how the system emerged in large part out of two impulses that had nothing to do with advancing knowledge.  One was in response to the competition among religious groups, seeking to plant the denominational flag on the growing western frontier and provide clergy for the newly arriving flock.  Another was in response to the competition among frontier towns to attract settlers who would buy land, using a college as a sign that this town was not just another dusty farm village but a true center of culture.

The essay then goes on to explore how the current positive social benefits of the US higher ed system are supported by the peculiar institutional form that characterizes American colleges and universities. 

My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

In short, I’m portraying the system as one that is infused with irony, from its early origins through to its current functions.  Hope you enjoy it.

A Perfect Mess Cover

College — What Is It Good For

David F. Labaree

            I want to say up front that I’m here under false pretenses.  I’m not a Dewey scholar or a philosopher; I’m a sociologist doing history in the field of education.  And the title of my lecture is a bit deceptive.   I’m not really going to talk about what college is good for.  Instead I’m going to talk about how the institution we know as the modern American university came into being.  As a sociologist I’m more interested in the structure of the institution than in its philosophical aims.  It’s not that I’m opposed to these aims.  In fact, I love working in a university where these kinds of pursuits are open to us:   Where we can enjoy the free flow of ideas; where we explore any issue in the sciences or humanities that engages us; and where we can go wherever the issue leads without worrying about utility or orthodoxy or politics.  It’s a great privilege to work in such an institution.  And this is why I want to spend some time examining how this institution developed its basic form in the improbable context of the United States in the nineteenth century. 

            My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

            I tell this story in three parts.  I start by exploring how the American system of higher education emerged in the nineteenth century, without a plan and without any apparent promise that it would turn out well.  I show how, by 1900, all the pieces of the current system had come together.  This is the historical part.  Then I show how the combination of these elements created an astonishingly strong, resilient, and powerful structure.  I look at the way this structure deftly balances competing aims – the populist, the practical, and the elite.  This is the sociological part.  Then I veer back toward the issue raised in the title, to figure out what the connection is between the form of American higher education and the things that it is good for.  This is the vaguely philosophical part.  I argue that the form serves the extraordinarily useful functions of protecting those of us in the faculty from the real world, protecting us from each other, and hiding what we’re doing behind a set of fictions and veneers that keep anyone from knowing exactly what is really going on. 

           In this light, I look at some of the things that could kill it for us.  One is transparency.  The current accountability movement directed toward higher education could ruin everything by shining a light on the multitude of conflicting aims, hidden cross-subsidies, and forbidden activities that constitute life in the university.  A second is disaggregation.  I’m talking about current proposals to pare down the complexity of the university in the name of efficiency:  Let online modules take over undergraduate teaching; eliminate costly residential colleges; closet research in separate institutes; and get rid of football.  These changes would destroy the synergy that comes from the university’s complex structure.  A third is principle.  I argue that the university is a procedural institution, which would collapse if we all acted on principle instead of form.   I end with a call for us to retreat from substance and stand shoulder-to-shoulder in defense of procedure.

Historical Roots of the System

            The origins of the American system of higher education could not have been more humble or less promising of future glory.  It was a system, but it had no overall structure of governance and it did not emerge from a plan.  It just happened, through an evolutionary process that had direction but no purpose.  We have a higher education system in the same sense that we have a solar system, each of which emerged over time according to its own rules.  These rules shaped the behavior of the system but they were not the product of Intelligent Design. 

            Yet something there was about this system that produced extraordinary institutional growth.  When George Washington assumed the presidency of the new republic in 1789, the U.S. already had 19 colleges and universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1830 the number had risen to 50, and then growth accelerated, with the total reaching 250 in 1860, 563 in 1870, and 811 in 1880.  To give some perspective, the number of universities in the United Kingdom between 1800 and 1880 rose from 6 to 10, and in all of Europe from 111 to 160 (Rüegg, 2004).  So in 1880 this upstart system had five times as many institutions of higher education as did the entire continent of Europe.  How did this happen?

            Keep in mind that the university as an institution was born in medieval Europe in the space between the dominant sources of power and wealth, the church and the state, and it drew  its support over the years from these two sources.  But higher education in the U.S. emerged in a post-feudal frontier setting where the conditions were quite different.  The key to understanding the nature of the American system of higher education is that it arose under conditions where the market was strong, the state was weak, and the church was divided.  In the absence of any overarching authority with the power and money to support a system, individual colleges had to find their own sources of support in order to get started and keep going.  They had to operate as independent enterprises in the competitive economy of higher education, and their primary reasons for being had little to do with higher learning.

            In the early- and mid-nineteenth century, the modal form of higher education in the U.S. was the liberal arts college.  This was a non-profit corporation with a state charter and a lay board, which would appoint a president as CEO of the new enterprise.  The president would then rent a building, hire a faculty, and start recruiting students.  With no guaranteed source of funding, the college had to make a go of it on its own, depending heavily on tuition from students and donations from prominent citizens, alumni, and religious sympathizers.  For college founders, location was everything.  However, whereas European universities typically emerged in major cities, these colleges in the U.S. arose in small towns far from urban population centers.  Not a good strategy if your aim was to draw a lot of students.  But the founders had other things in mind.

            One central motive for founding colleges was to promote religious denominations.  The large majority of liberal arts colleges in this period had a religious affiliation and a clergyman as president.  The U.S. was an extremely competitive market for religious groups seeking to spread the faith, and colleges were a key way to achieve this end.  With colleges, denominations could prepare their own clergy and provide higher education for their members; and these goals were particularly important on the frontier, where the population was growing and the possibilities for denominational expansion were the greatest.  Every denomination wanted to plant the flag in the new territories, which is why Ohio came to have so many colleges.  The denomination provided a college with legitimacy, students, and a built-in donor pool but with little direct funding.

            Another motive for founding colleges was closely allied with the first, and that was land speculation.  Establishing a college in town was not only a way to advance the faith, it was also a way to raise property values.  If town fathers could attract a college, they could make the case that the town was no mere agricultural village but a cultural center, the kind of place where prospective land buyers would want to build a house, set up a business, and raise a family.  Starting a college was cheap and easy.  It would bear the town’s name and serve as its cultural symbol.  With luck it would give the town leverage to become a county seat or gain a station on the rail line.  So a college was a good investment in a town’s future prosperity (Brown, 1995).

            The liberal arts college was the dominant but not the only form that higher education took in nineteenth century America.  Three other types of institutions emerged before 1880.  One was state universities, which were founded and governed by individual states but which received only modest state funding.  Like liberal arts colleges, they arose largely for competitive reasons.  They emerged in the new states as the frontier moved westward, not because of huge student demand but because of the need for legitimacy.  You couldn’t be taken seriously as a state unless you had a state university, especially if your neighbor had just established one. 

            The second form of institution was the land-grant college, which arose from federal efforts to promote land sales in the new territories by providing public land as a founding grant for new institutions of higher education.  Turning their backs on the classical curriculum that had long prevailed in colleges, these schools had a mandate to promote practical learning in fields such as agriculture, engineering, military science, and mining. 

            The third form was the normal school, which emerged in the middle of the century as state-founded high-school-level institutions for the preparation of teachers.  It wasn’t until the end of the century that these schools evolved into teachers colleges; and in the twentieth century they continued that evolution, turning first into full-service state colleges and then by midcentury into regional state universities. 

            Unlike liberal arts colleges, all three of these types of institutions were initiated and governed by states, and all received some public funding.  But this funding was not nearly enough to keep them afloat, so they faced the same challenges as the liberal arts colleges, since their survival depended heavily on their ability to bring in student tuition and draw donations.  In short, the liberal arts college established the model for survival in a setting with a strong market, weak state, and divided church; and the newer public institutions had to play by the same rules.

            By 1880, the structure of the American system of higher education was well established.  It was a system made up of lean and adaptable institutions, with a strong base in rural communities, and led by entrepreneurial presidents, who kept a sharp eye out for possible threats and opportunities in the highly competitive higher-education market.  These colleges had to attract and keep the loyalty of student consumers, whose tuition was critical for paying the bills and who had plenty of alternatives in towns nearby.  And they also had to maintain a close relationship with local notables, religious peers, and alumni, who provided a crucial base of donations.

            The system was only missing two elements to make it workable in the long term.  It lacked sufficient students, and it lacked academic legitimacy.  On the student side, this was the most overbuilt system of higher education the world has ever seen.  In 1880, 811 colleges were scattered across a thinly populated countryside, which amounted to 16 colleges per million of population (Collins, 1979, Table 5.2).  The average college had only 131 students and 14 faculty and granted 17 degrees per year (Carter et al., 2006, Table Bc523, Table Bc571; U.S. Bureau of the Census, 1975, Series H 751).  As I have shown, these colleges were not established in response to student demand, but nonetheless they depended on students for survival.  Without a sharp growth in student enrollments, the whole system would have collapsed. 
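As a quick sanity check on these per-capita figures, here is a minimal Python sketch; the 1880 U.S. population of roughly 50.2 million is an assumption drawn from standard census tables rather than from the lecture itself, and the enrollment total is simply what the cited average of 131 students per college implies.

# Rough check of the 1880 figures cited above.
colleges_1880 = 811                 # institutions, from Collins (1979)
population_1880 = 50_200_000        # assumed 1880 census population (not stated in the lecture)
avg_students_per_college = 131      # cited average enrollment

colleges_per_million = colleges_1880 / (population_1880 / 1_000_000)
implied_enrollment = colleges_1880 * avg_students_per_college

print(round(colleges_per_million, 1))   # roughly 16.2 colleges per million residents
print(implied_enrollment)               # roughly 106,000 students nationwide

The implied national enrollment of about 106,000 students underscores just how thinly the student body was spread across those 811 campuses.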

            On the academic side, these were colleges in name only.  They were parochial in both senses of the word, small-town institutions stuck in the boondocks and able to make no claim to advancing the boundaries of knowledge.  They were not established to promote higher learning, and they lacked both the intellectual and economic capital required to carry out such a mission.  Many high schools had stronger claims to academic prowess than these colleges.  European visitors in the nineteenth century had a field day ridiculing the intellectual poverty of these institutions.  The system was on death watch.  If it was going to survive, it needed a transfusion that would provide both student enrollments and academic legitimacy. 

            That transfusion arrived just in time from a new European import, the German research university.  This model offered everything that was lacking in the American system.  It reinvented university professors as the best minds of the generation, whose expertise was certified by the new entry-level degree, the Ph.D., and who were pushing back the frontiers of knowledge through scientific research.  It brought to the college campus graduate students, who would be selected for their high academic promise and trained to follow in the footsteps of their faculty mentors. 

            And at the same time that the German model offered academic credibility to the American system, the peculiarly Americanized form of this model made university enrollment attractive for undergraduates, whose focus was less on higher learning than on jobs and parties.  The remodeled American university provided credible academic preparation in the cognitive skills required for professional and managerial work; and it provided training in the social and political skills required for corporate employment, through the process of playing the academic game and taking on roles in intercollegiate athletics and on-campus social clubs.  It also promised a social life in which one could have a good time and meet a suitable spouse. 

            By 1900, with the arrival of the research university as the capstone, nearly all of the core elements of the current American system of higher education were in place.  Subsequent developments focused primarily on extending the system downward, adding layers that would make it more accessible to larger numbers of students – as normal schools evolved into regional state universities and as community colleges emerged as the open-access base of an increasingly stratified system.  Here ends the history portion of this account. Now we move on to the sociological part of the story.

Sociological Traits of the System

            When the research university model arrived to save the day in the 1880s, the American system of higher education was in desperate straits.  But at the same time this system had an enormous reservoir of potential strengths that prepared it for its future climb to world dominance.  Let’s consider some of these strengths.  First it had a huge capacity in place, the largest in the world by far:  campuses, buildings, faculty, administration, curriculum, and a strong base in the community.  All it needed was students and credibility. 

            Second, it consisted of a group of institutions that had figured out how to survive under dire Darwinian circumstances, where supply greatly exceeded demand and where there was no secure stream of funding from church or state.  In order to keep the enterprises afloat, they had learned how to hustle for market position, troll for students, and dun donors.  Imagine how well this played out when students found a reason to line up at their doors and donors suddenly saw themselves investing in a winner with a soaring intellectual and social mission. 

            Third, they had learned to be extraordinarily sensitive to consumer demand, upon which everything depended.  Fourth, as a result they became lean and highly adaptable enterprises, which were not bounded by the politics of state policy or the dogma of the church but could take advantage of any emerging possibility for a new program, a new kind of student or donor, or a new area of research.  Not only were they able to adapt but they were forced to do so quickly, since otherwise the competition would jump on the opportunity first and eat their lunch.

            By the time the research university arrived on the scene, the American system of higher education was already firmly established and governed by its own peculiar laws of motion and its own evolutionary patterns.  The university did not transform the system.  Instead it crowned the system and made it viable for a century of expansion and elevation.  Americans could not simply adopt the German university model, since this model depended heavily on strong state support, which was lacking in the U.S.  And the American system would not sustain a university as elevated as the German university, with its tight focus on graduate education and research at the expense of other functions.  American universities that tried to pursue this approach – such as Clark University and Johns Hopkins – found themselves quickly trailing the pack of institutions that adopted a hybrid model grounded in the preexisting American system.  In the U.S., the research university provided a crucial add-on rather than a transformation.  In this institutionally-complex market-based system, the research university became embedded within a convoluted but highly functional structure of cross-subsidies, interwoven income streams, widely dispersed political constituencies, and a bewildering array of goals and functions. 

            At the core of the system is a delicate balance among three starkly different models of higher education.  These three roughly correspond to Clark Kerr’s famous characterization of the American system as a mix of the British undergraduate college, the American land-grant college, and the German research university (Kerr, 2001, p. 14).  The first is the populist element, the second is the practical element, and the third is the elite element.  Let me say a little about each of these and make the case for how they work to reinforce each other and shore up the overall system.  I argue that these three elements are unevenly distributed across the whole system, with the populist and practical parts strongest in the lower tiers of the system, where access is easy and job utility is central, and the elite part strongest in the upper tier.  But I also argue that all three are present in the research university at the top of the system.  Consider how all these elements come together in a prototypical flagship state university.

            The populist element has its roots in the British residential undergraduate college, which colonists had in mind when they established the first American colleges; but the changes that emerged in the U.S. in the early nineteenth century were critical.  Key was the fact that American colleges during this period were broadly accessible in a way that colleges in the U.K. never were until the advent of the red-brick universities after the Second World War.  American colleges were not located in fashionable areas in major cities but in small towns in the hinterland.  There were far too many of them for them to be elite, and the need for students meant that tuition and academic standards both had to be kept relatively low.  The American college never exuded the odor of class privilege to the same degree as Oxbridge; its clientele was largely middle class.  For the new research university, this legacy meant that the undergraduate program provided critical economic and political support. 

            From the economic perspective, undergrads paid tuition, which – through large classes and thus the need for graduate teaching assistants – supported graduate programs and the larger research enterprise.  Undergrads, who were socialized in the rituals of football and fraternities, were also the ones who identified most closely with the university, which meant that in later years they became the most loyal donors.  As doers rather than thinkers, they were also the wealthiest group of alumni donors.  Politically, the undergraduate program gave the university a broad base of community support.  Since anyone could conceive of attending the state university, the institution was never as remote or alien as the German model.  Its athletic teams and academic accomplishments were a point of pride for state residents, whether or not they or their children ever attended.  They wore the school colors and cheered for it on game days.

            The practical element has its roots in the land-grant college.  The idea here was that the university was not just an enterprise for providing liberal education for the elite but that it could also provide useful occupational skills for ordinary people.  Since the institution needed to attract a large group of students to pay the bills, the American university left no stone unturned when it came to developing programs that students might want.  It promoted itself as a practical and reliable mechanism for getting a good job.  This not only boosted enrollment, but it also sent a message to the citizens of the state that the university was making itself useful to the larger community, producing the teachers, engineers, managers, and dental hygienists that they needed.  

            This practical bent also extended to the university’s research effort, which was not focused solely on ivory-tower pursuits.  Its researchers were working hard to design safer bridges, more productive crops, better vaccines, and more reliable student tests.  For example, when I taught at Michigan State I planted my lawn with Spartan grass seed, which was developed at the university.  These forms of applied research led to patents that brought substantial income back to the institution, but their most important function was to provide a broad base of support for the university among people who had no connection with it as an instructional or intellectual enterprise.  The idea was compelling: This is your university, working for you.

            The elite element has its roots in the German research university.  This is the component of the university formula that gives the institution academic credibility at the highest level.  Without it the university would just be a party school for the intellectually challenged and a trade school for job seekers.  From this angle, the university is the haven for the best thinkers, where professors can pursue intellectual challenges of the first order, develop cutting edge research in a wide array of domains, and train graduate students who will carry on these pursuits in the next generation.  And this academic aura envelops the entire enterprise, giving the lowliest freshman exposure to the most distinguished faculty and allowing the average graduate to sport a diploma burnished by the academic reputations of the best and the brightest.  The problem, of course, is that supporting professorial research and advanced graduate study is enormously expensive; research grants only provide a fraction of the needed funds. 

            So the populist and practical domains of the university are critically important components of the larger university package.  Without the foundation of fraternities and football, grass seed and teacher education, the superstructure of academic accomplishment would collapse of its own weight.  The academic side of the university can’t survive without both the financial subsidies and political support that come from the populist and the practical sides.  And the populist and practical sides rely on the academic legitimacy that comes from the elite side.  It’s the mixture of the three that constitutes the core strength of the American system of higher education.  This is why it is so resilient, so adaptable, so wealthy, and so powerful.  This is why its financial and political base is so broad and strong.  And this is why American institutions of higher education enjoy so much autonomy:  They respond to many sources of power in American society and they rely on many sources of support, which means they are not the captive of any single power source or revenue stream.

The Power of Form

            So my story about the American system of higher education is that it succeeded by developing a structure that allowed it to become both economically rich and politically autonomous.  It could tap multiple sources of revenue and legitimacy, which allowed it to avoid becoming the wholly owned subsidiary of the state, the church, or the market.  And by virtue of its structurally reinforced autonomy, college is good for a great many things.

            At last we come back to our topic.  What is college good for?  For those of us on the faculties of research universities, these institutions provide several core benefits that we see as especially important.  At the top of the list is that they preserve and promote free speech.  They are zones where faculty and students can feel free to pursue any idea, any line of argument, and any intellectual pursuit that they wish – free of the constraints of political pressure, cultural convention, or material interest.  Closely related to this is the fact that universities become zones where play is not only permissible but even desirable, where it’s ok to pursue an idea just because it’s intriguing, even though there is no apparent practical benefit that this pursuit would produce.

            This, of course, is a rather idealized version of the university.  In practice, as we know, politics, convention, and economics constantly intrude on the zone of autonomy in an effort to shape the process and limit these freedoms.  This is particularly true in the lower strata of the system.  My argument is not that the ideal is met but that the structure of American higher education – especially in the top tier of the system – creates a space of relative autonomy, where these constraining forces are partially held back, allowing the possibility for free intellectual pursuits that cannot be found anywhere else. 

            Free intellectual play is what we in the faculty tend to care about, but others in American society see other benefits arising from higher education that justify the enormous time and treasure that we devote to supporting the system.  Policymakers and employers put primary emphasis on higher education as an engine of human capital production, which provides the economically relevant skills that drive increases in worker productivity and growth in the GDP.  They also hail it as a place of knowledge production, where people develop valuable technologies, theories, and inventions that can feed directly into the economy.  And companies use it as a place to outsource much of their needs for workforce training and research-and-development. 

            These pragmatic benefits that people see coming from the system of higher education are real.  Universities truly are socially useful in such ways.  But it’s important to keep in mind that these social benefits can arise only if the university remains a preserve for free intellectual play.  Universities are much less useful to society if they restrict themselves to the training of individuals for particular present-day jobs, or to the production of research to solve current problems.  They are most useful if they function as storehouses for knowledge, skills, technologies, and theories – for which there is no current application but which may turn out to be enormously useful in the future.  They are the mechanism by which modern societies build capacity to deal with issues that have not yet emerged but sooner or later are likely to do so.

            But that is a discussion for another speech by another scholar.  The point I want to make today about the American system of higher education is that it is good for a lot of things but it was established in order to accomplish none of these things.  As I have shown, the system that arose in the nineteenth century was not trying to store knowledge, produce capacity, or increase productivity.  And it wasn’t trying to promote free speech or encourage play with ideas.  It wasn’t even trying to preserve institutional autonomy.  These things happened as the system developed, but they were all unintended consequences.  What was driving development of the system was a clash of competing interests, all of which saw the college as a useful medium for meeting particular ends.  Religious denominations saw colleges as a way to spread the faith.  Town fathers saw them as a way to promote local development and increase property values.  The federal government saw them as a way to spur the sale of federal lands.  State governments saw them as a way to establish credibility in competition with other states.  College presidents and faculty saw them as a way to promote their own careers.  And at the base of the whole process of system development were the consumers, the students, without whose enrollment and tuition and donations the system would not have been able to persist.  The consumers saw the college as useful in a number of ways:  as a medium for seeking social opportunity and achieving social mobility; as a medium for preserving social advantage and avoiding downward mobility; as a place to have a good time, enjoy an easy transition to adulthood, pick up some social skills, and meet a spouse; even, sometimes, as a place to learn. 

            The point is that the primary benefits of the system of higher education derive from its form, but this form did not arise in order to produce these benefits.  We need to preserve the form in order to continue enjoying these benefits, but unfortunately the organizational foundations upon which the form is built are, on the face of it, absurd.  And each of these foundational qualities is currently under attack from the perspective of alternative visions that, in contrast, have a certain face validity.  If the attackers accomplish their goals, the system’s form, which has been so enormously productive over the years, will collapse, and with this collapse will come the end of the university as we know it.  I didn’t promise this lecture would end well, did I?

            Let me spell out three challenges that would undercut the core autonomy and synergy that make the system so productive in its current form.  On the surface, each of the proposed changes seems quite sensible and desirable.  Only by examining the implications of actually pursuing these changes can we see how they threaten the foundational qualities that currently undergird the system.  The system’s foundations are so paradoxical, however, that mounting a public defense of them would be difficult indeed.  Yet it is precisely these traits of the system that we need to defend in order to preserve the current highly functional form of the university.  In what follows, I am drawing inspiration from the work of Suzanne Lohmann (2004, 2006), a political scientist at UCLA and the scholar who has addressed these issues most astutely.

            One challenge comes from prospective reformers of American higher education who want to promote transparency.  Who can be against that?  This idea derives from the accountability movement, which has already swept across K-12 education and is now pounding the shores of higher education.  It simply asks universities to show people what they’re doing.  What is the university doing with its money and its effort?  Who is paying for what?  How do the various pieces of the complex structure of the university fit together?  And are they self-supporting or drawing resources from elsewhere?  What is faculty credit-hour production?  How is tuition related to instructional costs?  And so on.   These demands make a lot of sense. 

            The problem, however, as I have shown today, is that the autonomy of the university depends on its ability to shield its inner workings from public scrutiny.  It relies on opacity.  Autonomy will end if the public can see everything that is going on and what everything costs.  Consider all of the cross subsidies that keep the institution afloat:  undergraduates support graduate education, football supports lacrosse, adjuncts subsidize professors, rich schools subsidize poor schools.  Consider all of the instructional activities that would wilt in the light of day; consider all of the research projects that could be seen as useless or politically unacceptable.  The current structure keeps the inner workings of the system obscure, which protects the university from intrusions on its autonomy.  Remember, this autonomy arose by accident not by design; its persistence depends on keeping the details of university operations out of public view.

            A second and related challenge comes from reformers who seek to promote disaggregation.  The university is an organizational nightmare, they say, with all of those institutes and centers, departments and schools, programs and administrative offices.  There are no clear lines of authority, no mechanisms to promote efficiency and eliminate duplication, no tools to achieve economies of scale.  Transparency is one step in the right direction, they say, but the real reform that is needed is to take apart the complex interdependencies and overlapping responsibilities within the university and then figure out how each of these tasks could be accomplished in the most cost-effective and outcome-effective manner.  Why not have a few star professors tape lectures and then offer Massive Open Online Courses at colleges across the country?  Why not have institutions specialize in what they’re best at – remedial education, undergraduate instruction, vocational education, research production, graduate student training?  Putting them together into a single institution is expensive and grossly inefficient. 

            But recall that it is precisely the aggregation of purposes and functions – the combination of the populist, the practical, and the elite – that has made the university so strong, so successful, and, yes, so useful.  This combination creates a strong base both financially and politically and allows for forms of synergy that cannot happen with a set of isolated educational functions.  The fact is that this institution can’t be disaggregated without losing what makes it the kind of university that students, policymakers, employers, and the general public find so compelling.  A key organizational element that makes the university so effective is its chaotic complexity.

            A third challenge comes not from reformers intruding on the university from the outside but from faculty members meddling with it from the inside.  The threat here arises from the dangerous practice of acting on academic principle.  Fortunately, this is not very common in academe.  But the danger is lurking in the background of every decision about faculty hires.  Here’s how it works.  You review a finalist for a faculty position in a field not closely connected to your own, and you find to your horror that the candidate’s intellectual domain seems absurd on the face of it (how can anyone take this type of work seriously?) and the candidate’s own scholarship doesn’t seem credible.  So you decide to speak against hiring the candidate and organize colleagues to support your position.  But then you happen to read a paper by Suzanne Lohmann, who points out something very fundamental about how universities work. 

            Universities are structured in a manner that protects the faculty from the outside world (that is, protecting them from the forces of transparency and disaggregation), but they are also organized in a manner that protects the faculty from each other.  The latter is the reason we have such an enormous array of departments and schools in universities.  If every historian had to meet the approval of geologists and every psychologist had to meet the approval of law faculty, no one would ever be hired. 

           The simple fact is that part of what keeps universities healthy and autonomous is hypocrisy.  Because of the Balkanized structure of university organization, we all have our own protected spaces to operate in and we all pass judgment only on our own peers within that space.  To do otherwise would be disastrous.  We don’t have to respect each other’s work across campus, we merely need to tolerate it – grumbling about each other in private and making nice in public.  You pick your faculty, we’ll pick ours.  Lohmann (2006) calls this core procedure of the academy “log-rolling.”  If we all operated on principle, if we all only approved scholars we respected, then the university would be a much diminished place.  Put another way, I wouldn’t want to belong to a university that consisted only of people I found worthy.  Gone would be the diversity of views, paradigms, methodologies, theories, and world views that makes the university such a rich place.  The result is incredibly messy, and it permits a lot of quirky – even ridiculous – research agendas, courses, and instructional programs.  But in aggregate, this libertarian chaos includes an extraordinary range of ideas, capacities, theories, and social possibilities.  It’s exactly the kind of mess we need to treasure and preserve and defend against all opponents.

            So here is the thought I’m leaving you with.  The American system of higher education is enormously productive and useful, and it’s a great resource for students, faculty, policymakers, employers, and society.  What makes it work is not its substance but its form.  Crucial to its success is its devotion to three formal qualities:  opacity, chaotic complexity, and hypocrisy.  Embrace these forms and they will keep us free.

Posted in Academic writing, Writing, Writing Class

Rothman: Why Is Academic Writing So Academic?

In this post, Joshua Rothman addresses the problem of academic writing by comparing it to what’s going on in journalistic writing.  As a journalist who was once a graduate student in English, he knows both worlds well.  So instead of the usual diatribe against academics for being obscure and deadly, he explores the issue structurally, showing how journalism and academia have drifted apart from each other in the last 50 years.  While academia has become more inward turning and narrow, journalism has become more populist, seeking a large audience at any cost.  In the process, both fields have lost something important.  The piece first appeared in the New Yorker in 2014.  Here’s a link to the original.

Rothman throws up his hands at the end, suggesting that writers in both fields are trapped in a situation that offers no escape for anyone who wants to remain a member in good standing in one field or the other.  But I partially disagree with this assessment.  Yes, the structural pressures in both domains to constrain your writing are strong, but they’re not irresistible.  Journalists can find venues like the New Yorker and Atlantic that allow them to avoid having to pander to the click-happy internet browser.  And academics can push against the pressures to make disinterested research uninteresting and colorless.  

There are still a lot of scholars who publish articles in top academic journals and books with major university presses that incorporate lucid prose, lively style, and a clear personal voice.  Doing so does not tarnish their academic reputation or employability; it also wins them a broader academic audience, more citations, and more intellectual impact.  I’ve posted some examples here by scholars such as Jim March, Mary Metz, Peter Rossi, E.P. Thompson, and Max Weber.  

For lots of examples of good academic prose and stellar advice about how to become a stylish scholarly writer, you should read Helen Sword’s book, Stylish Academic Writing.  I used this book to good effect in my class on academic writing.  (Here is the syllabus for this class, which includes links to all of the readings and my class slides.)  I also strongly suggest checking out her website, where, among other things, you can plug your own text into the Writer’s Diet Test, which will show how flabby or fit your prose is.

Enjoy.

Why Is Academic Writing So Academic?

by Joshua Rothman

Feb. 21, 2014

Rothman Photo

A few years ago, when I was a graduate student in English, I presented a paper at my department’s American Literature Colloquium. (A colloquium is a sort of writing workshop for graduate students.) The essay was about Thomas Kuhn, the historian of science. Kuhn had coined the term “paradigm shift,” and I described how this phrase had been used and abused, much to Kuhn’s dismay, by postmodern insurrectionists and nonsensical self-help gurus. People seemed to like the essay, but they were also uneasy about it. “I don’t think you’ll be able to publish this in an academic journal,” someone said. He thought it was more like something you’d read in a magazine.

Was that a compliment, a dismissal, or both? It’s hard to say. Academic writing is a fraught and mysterious thing. If you’re an academic in a writerly discipline, such as history, English, philosophy, or political science, the most important part of your work—practically and spiritually—is writing. Many academics think of themselves, correctly, as writers. And yet a successful piece of academic prose is rarely judged so by “ordinary” standards. Ordinary writing—the kind you read for fun—seeks to delight (and, sometimes, to delight and instruct). Academic writing has a more ambiguous mission. It’s supposed to be dry but also clever; faceless but also persuasive; clear but also completist. Its deepest ambiguity has to do with audience. Academic prose is, ideally, impersonal, written by one disinterested mind for other equally disinterested minds. But, because it’s intended for a very small audience of hyper-knowledgable, mutually acquainted specialists, it’s actually among the most personal writing there is. If journalists sound friendly, that’s because they’re writing for strangers. With academics, it’s the reverse.

Professors didn’t sit down and decide to make academic writing this way, any more than journalists sat down and decided to invent listicles. Academic writing is the way it is because it’s part of a system. Professors live inside that system and have made peace with it. But every now and then, someone from outside the system swoops in to blame professors for the writing style that they’ve inherited. This week, it was Nicholas Kristof, who set off a rancorous debate about academic writing with a column, in the Times, called “Professors, We Need You!” The academic world, Kristof argued, is in thrall to a “culture of exclusivity” that “glorifies arcane unintelligibility while disdaining impact and audience”; as a result, there are “fewer public intellectuals on American university campuses today than a generation ago.”

The response from the professoriate was swift, severe, accurate, and thoughtful. A Twitter hashtag, #engagedacademics, sprung up, as if to refute Kristof’s claim that professors don’t use enough social media. Professors pointed out that the brainiest part of the blogosphere is overflowing with contributions from academics; that, as teachers, professors already have an important audience in their students; and that the Times itself frequently benefits from professorial ingenuity, which the paper often reports as news. (A number of the stories in the Sunday Review section, in which Kristof’s article appeared, were written by professors.) To a degree, some of the responses, though convincingly argued, inadvertently bolstered Kristof’s case because of the style in which they were written: fractious, humorless, self-serious, and defensively nerdy. As writers, few of Kristof’s interlocutors had his pithy, winning ease. And yet, if they didn’t win with a knock-out blow, the professors won on points. They showed that there was something outdated, and perhaps solipsistic, in Kristof’s yearning for a new crop of sixties-style “public intellectuals.”

As a one-time academic, I spent most of the week rooting for the profs. But I have a lot of sympathy for Kristof, too. I think his heart’s in the right place. (His column ended on a wistful note: “I write this in sorrow, for I considered an academic career.”) My own theory is that he got the situation backward. The problem with academia isn’t that professors are, as Kristof wrote, “marginalizing themselves.” It’s that the system that produces and consumes academic knowledge is changing, and, in the process, making academic work more marginal.

It may be that being a journalist makes it unusually hard for Kristof to see what’s going on in academia. That’s because journalism, which is in the midst of its own transformation, is moving in a populist direction. There are more writers than ever before, writing for more outlets, including on their own blogs, Web sites, and Twitter streams. The pressure on established journalists is to generate traffic. New and clever forms of content are springing up all the time—GIFs, videos, “interactives,” and so on. Dissenters may publish op-eds encouraging journalists to abandon their “culture of populism” and write fewer listicles, but changes in the culture of journalism are, at best, only a part of the story. Just as important, if not more so, are economic and technological developments having to do with subscription models, revenue streams, apps, and devices.

In academia, by contrast, all the forces are pushing things the other way, toward insularity. As in journalism, good jobs are scarce—but, unlike in journalism, professors are their own audience. This means that, since the liberal-arts job market peaked, in the mid-seventies, the audience for academic work has been shrinking. Increasingly, to build a successful academic career you must serially impress very small groups of people (departmental colleagues, journal and book editors, tenure committees). Often, an academic writer is trying to fill a niche. Now, the niches are getting smaller. Academics may write for large audiences on their blogs or as journalists. But when it comes to their academic writing, and to the research that underpins it—to the main activities, in other words, of academic life—they have no choice but to aim for very small targets. Writing a first book, you may have in mind particular professors on a tenure committee; miss that mark and you may not have a job. Academics know which audiences—and, sometimes, which audience members—matter.

It won’t do any good, in short, to ask professors to become more populist. Academic writing and research may be knotty and strange, remote and insular, technical and specialized, forbidding and clannish—but that’s because academia has become that way, too. Today’s academic work, excellent though it may be, is the product of a shrinking system. It’s a tightly-packed, super-competitive jungle in there. The most important part of Kristof’s argument was, it seemed to me, buried in the blog post that he wrote to accompany his column. “When I was a kid,” he wrote, “the Kennedy administration had its ‘brain trust’ of Harvard faculty members, and university professors were often vital public intellectuals.” But the sixties, when the baby boom led to a huge expansion in university enrollments, was also a time when it was easier to be a professor. If academic writing is to become expansive again, academia will probably have to expand first.