Posted in Meritocracy, Populism, Welfare

Hochschild — Strangers in Their Own Land

This post is a reflection on a book by Arlie Russell Hochschild, Strangers in Their Own Land: Anger and Mourning on the American Right.  In it she provides one of the most compelling and persuasive explanations for the turn toward right-wing populism in American politics and the peculiar appeal of Donald Trump.  As she puts it in her subtitle, this is “A Journey to the Heart of Our Political Divide.”

The book, published in 2016, is based on intensive interviews that she did in Louisiana with people on the populist right, long before Trump launched his campaign for president.  At the time, the political movement was the Tea Party, but her subjects ended up providing her an advance look at the deep issues that led voters to support Trump.

There is no substitute for reading the book, which I strongly recommend.  But to whet your appetite, I provide some of the key points below and some of the most telling quotes.  You’ll find that a lot of her analysis aligns with Michael Sandel’s argument in The Tyranny of Merit, which I commented on recently.

Hochschild Cover

Here’s the heart of what people were telling her:

You are a stranger in your own land. You do not recognize yourself in how others see you. It is a struggle to feel seen and honored. And to feel honored you have to feel—and feel seen as—moving forward. But through no fault of your own, and in ways that are hidden, you are slipping backward.

As Sandel noted, the meritocracy leaves the uncredentialed with no basis for public respect.  Without SATs and fancy degrees, it’s like you don’t count or don’t even exist.  This used to be your country, and there used to be honor in simply doing your job, going to church, obeying the law, and raising a family, but none of that seems to be true anymore.  Respect now seems to go only to those who are moving ahead in the new knowledge economy, while you and the people around you seem to be barely holding your own or falling behind.

How do you handle this situation?  Not by playing the victim card; that’s for a different kind of person.  “Like nearly everyone I spoke with, Donny was not one to think of himself as a victim. That was the language of the ‘poor me’s’ asking for government handouts. The very word ‘victim’ didn’t sit right.”  Instead, you take a stoic stance, adopting one of three versions of what Hochschild calls the “endurance self.”

I was discovering three distinct expressions of this endurance self in different people around Lake Charles—the Team Loyalist, the Worshipper, and the Cowboy, as I came to see them. Each kind of person expresses the value of endurance and expresses a capacity for it. Each attaches an aspect of self to this heroism. The Team Loyalist accomplishes a team goal, supporting the Republican Party. The Worshipper sacrifices a strong wish. The Cowboy affirms a fearless self. 

Each identity involves holding on in spite of the sacrifices you have to make.  The Team Loyalist sticks by the Republican Party even though it betrays you time and again, as is so often the case in Louisiana.  It lets companies pollute your environment and skimp on their taxes, but it’s still all you’ve got.

The Worshipper keeps the faith even though it means giving up something you really care about.

But sometimes you had to do without what you wanted. You couldn’t have both the oil industry and clean lakes, she thought, and if you had to choose, you had to choose oil. “Oil’s been pretty darned good to us,” she said. “I don’t want a smaller house. I don’t want to drive a smaller car.”

So you hang in there.  The Cowboy understands character as a willingness to take risks and live with the consequences.  You can make it on your own, without having to rely on welfare and special privileges.

To Donny, the Cowboy expressed high moral virtue. Equating creativity with daring—the stuff of great explorers, inventors, generals, winners—Donny honored the capacity to take risk and face fear. He could take hard knocks like a man. He could endure. 

The people she spoke with had a deep suspicion of the state.

“The state always seems to come down on the little guy,” he notes. “Take this bayou. If your motorboat leaks a little gas into the water, the warden’ll write you up. But if companies leak thousands of gallons of it and kill all the life here? The state lets them go. If you shoot an endangered brown pelican, they’ll put you in jail. But if a company kills the brown pelican by poisoning the fish he eats? They let it go. I think they overregulate the bottom because it’s harder to regulate the top.”

For liberals, this stance is hard to fathom, because for them the institutions of the state are the key guardians of the public square, which is central to their values.  And this space is now under threat.

…In the liberal deep story, an alarming event occurs; marauders invade the public square, recklessly dismantle it, and selfishly steal away bricks and concrete chunks from the public buildings at its center. Seeing insult added to injury, those guarding the public square watch helplessly as those who’ve dismantled it construct private McMansions with the same bricks and pieces of concrete, privatizing the public realm. That’s the gist of the liberal deep story, and the right can’t understand the deep pride liberals take in their creatively designed, hard-won public sphere as a powerful integrative force in American life. Ironically, you may have more in common with the left than you imagine, for many on the left feel like strangers in their own land too.

For right-wing populists, the federal government is the biggest threat.  For those in the West, the feds are the ones who seem to own all the land and regulate what you can do with it.  In the South, the resentment runs even deeper.

After the Civil War, the North replaced Southern state governments with its own hand-picked governors. The profit-seeking carpetbaggers came, it seemed to those I interviewed, as agents of the dominating North. Exploiters from the North, an angry, traumatized black population at home, and moral condemnation from all—this was the scene some described to me. When the 1960s began sending Freedom Riders and civil rights activists, pressing for new federal laws to dismantle Jim Crow, there they came again, it seemed, the moralizing North. And again, Obamacare, global warming, gun control, abortion rights—did these issues, too, fall into the emotional grooves of history? Does it feel like another strike from the North, from Washington, that has put the brown pelican ahead of the Tea Partier waiting in line?

And then there’s the last issue:  waiting in line.  Hochschild identifies a deep story that runs through all of the accounts she heard, and at its heart is a sense of resentment about being treated unfairly in the pursuit of the American Dream.  The dream is all about the possibilities for getting ahead, and this means an orderly process of status advancement in which people wait in line until it’s their turn.  The core problem is that suddenly they find other people cutting in front of them in line, and the federal government is helping them do it.

Look! You see people cutting in line ahead of you! You’re following the rules. They aren’t. As they cut in, it feels like you are being moved back. How can they just do that? Who are they? Some are black. Through affirmative action plans, pushed by the federal government, they are being given preference for places in colleges and universities, apprenticeships, jobs, welfare payments, and free lunches…. Women, immigrants, refugees, public sector workers—where will it end? Your money is running through a liberal sympathy sieve you don’t control or agree with. These are opportunities you’d have loved to have had in your day—and either you should have had them when you were young or the young shouldn’t be getting them now. It’s not fair.

You’re a compassionate person. But now you’ve been asked to extend your sympathy to all the people who have cut in front of you. So you have your guard up against requests for sympathy. People complain: Racism. Discrimination. Sexism. You’ve heard stories of oppressed blacks, dominated women, weary immigrants, closeted gays, desperate refugees, but at some point, you say to yourself, you have to close the borders to human sympathy—especially if there are some among them who might bring you harm. You’ve suffered a good deal yourself, but you aren’t complaining about it.

Posted in Academic writing, Writing

Dumitrescu: How to Write Well

This post is a review essay by Irina Dumitrescu about five books that explore how to write well.  It appeared in the Times Literary Supplement on March 20, 2020.  Here’s a link to the original.

She’s reviewing five books about writing.  Is there any writing task more fraught with peril than trying to write about writing?  Anything less than superlative literary style would constitute an abject failure.  Fortunately this author is up to the challenge.

Here are some of my favorite passages.

She reminds us of an enduring truth about writing.  Everyone starts by imitating others; you need models to work from:

Shakespeare patterned his comedies on Terence’s Latin romps, and Terence stole his plots from the Greek Menander. Milton copied Virgil, who plagiarized Homer. The history of literature is a catwalk on which the same old skeletons keep coming out in new clothes.

On the other hand,

Style unsettles this pedagogy of models and moulds. As the novelist Elizabeth McCracken once told Ben Yagoda in an interview, “A writer’s voice lives in his or her bad habits … the trick is to make them charming bad habits”. Readers longing for something beyond mere information – verbal fireworks, the tremor of an authentic connection, a touch of quiet magic – will do well to find the rule-breakers on the bookshop shelf. Idiosyncrasies (even mistakes) account for the specific charm of a given author, and they slyly open the door to decisions of taste.

One author makes the case against turgid academic writing in a book that she admiringly calls

an inspiring mess, a book that in its haphazard organization is its own argument for playfulness and improvisation. Like Warner, Kumar cannot stand “the five-paragraph costume armor of the high school essay”. Nor does he have much patience for other formulaic aspects of academic writing: didactic topic sentences, or jargony vocabulary such as “emergence” and “post-capitalist hegemony”. In his description of a website that produces meaningless theoretical prose at the touch of a button, Kumar notes that “the academy is the original random sentence generator”.

Of all the books she discusses, my favorite (and hers, I think) is 

Joe Moran’s exquisite book First You Write a Sentence…. As befits a cultural historian, Moran compares writing sentences to crafting other artisanal objects – they are artworks and spaces of refuge, gifts with which an author shapes the world to be more beautiful and capacious and kind. Like a town square or a city park, “a well-made sentence shows … silent solicitude for others. It cares”.

Moran’s own sentences are so deliciously epigrammatic that I considered giving up chocolate in favour of re-reading his book. Because he has dedicated an entire volume to one small form, he has the leisure to attend to fine details. As he explores sentences from every angle, he describes the relative heat of different verbs, the delicately shading nuances of punctuation choices, how short words feel in the mouth, the opportunity of white space. “Learn to love the feel of sentences,” he writes with a connoisseur’s delight, “the arcs of anticipation and suspense, the balancing phrases, the wholesome little snap of the full stop.”

Enjoy.

How to write well

Rules, style and the ‘well-made sentence’

By Irina Dumitrescu

IN THIS REVIEW
WHY THEY CAN’T WRITE
Killing the five-paragraph essay and other necessities
288pp. Johns Hopkins University Press. £20.50 (US $27.95).
John Warner
WRITING TO PERSUADE
How to bring people over to your side
224pp. Norton. £18.99 (US $26.95).
Trish Hall
EVERY DAY I WRITE THE BOOK
Notes on style
256pp. Duke University Press. Paperback, £20.99 (US $24.95).
Amitava Kumar
FIRST YOU WRITE A SENTENCE
The elements of reading, writing … and life
240pp. Penguin. Paperback, £9.99.
Joe Moran
MEANDER, SPIRAL, EXPLODE
Design and pattern in narrative
272pp. Catapult. Paperback, $16.95.
Jane Alison

In high school a close friend told me about a lesson her father had received when he was learning to write in English. Any essay could be improved by the addition of one specific phrase: “in a world tormented by the spectre of thermonuclear holocaust”. We thought it would be hilarious to surprise our own teachers with this gem, but nothing came of it. Twenty years later, as I looked through the files on an old computer, I discovered my high school compositions. There, at the end of an essay on Hugo Grotius and just war theory I must have written for this purpose alone, was that irresistible rhetorical flourish.

As much as we might admire what is fresh and innovative, we all learn by imitating patterns. Babies learning to speak do not immediately acquire the full grammar of their mother tongue and a vocabulary to slot into it, but inch slowly into the language by repeating basic phrases, then varying them. Adults learning a foreign language are wise to do the same. Pianists run through exercises to train their dexterity, basketball players run through their plays, dancers rehearse combos they can later slip into longer choreographies. To be called “formulaic” is no compliment, but whenever people express themselves or take action in the world, they rely on familiar formulas.

Writing advice is caught in this paradox. Mavens of clear communication know that simple rules are memorable and easy to follow. Use a verb instead of a noun. Change passive to active. Cut unnecessary words. Avoid jargon. No aspiring author will make the language dance by following these dictates, but they will be understood, and that is something. The same holds for structure. In school, pupils are drilled in the basic shapes of arguments, such as the “rule of three”, the “five-paragraph essay” or, à l’américaine, the Hamburger Essay (the main argument being the meat). Would-be novelists weigh their Fichtean Curves against their Hero’s Journeys, and screenwriters can buy software that will ensure their movie script hits every beat prescribed by Blake Snyder in his bestselling book Save the Cat! (2005). And why not? Shakespeare patterned his comedies on Terence’s Latin romps, and Terence stole his plots from the Greek Menander. Milton copied Virgil, who plagiarized Homer. The history of literature is a catwalk on which the same old skeletons keep coming out in new clothes.

Style unsettles this pedagogy of models and moulds. As the novelist Elizabeth McCracken once told Ben Yagoda in an interview, “A writer’s voice lives in his or her bad habits … the trick is to make them charming bad habits”. Readers longing for something beyond mere information – verbal fireworks, the tremor of an authentic connection, a touch of quiet magic – will do well to find the rule-breakers on the bookshop shelf. Idiosyncrasies (even mistakes) account for the specific charm of a given author, and they slyly open the door to decisions of taste. Think of David Foster Wallace’s endless sentences, George R. R. Martin’s neologisms, the faux-naivety of Gertrude Stein. In his book on literary voice, The Sound on the Page (2004), Yagoda argues that style reveals “something essential” and impossible to conceal about an author’s character. The notion that the way a person arranges words is inextricably tied to their moral core has a long history, but its implication for teaching writing is what interests me here: convince or compel writers to cleave too closely to a set of prescribed rules, and you chip away at who they are.

This explains why John Warner’s book about writing, Why They Can’t Write: Killing the five-paragraph essay and other necessities, contains almost no advice on how to write. A long-time college instructor, Warner hints at his argument in his subtitle: his is a polemical take on American standardized testing practices, socioeconomic conditions, and institutions of learning that destroy any love or motivation young people might have for expressing themselves in writing. Against the perennial assumption that today’s students are too lazy and precious to work hard, Warner holds firm: “Students are not entitled or coddled. They are defeated”. The symbol of the US’s misguided approach to education is the argumentative structure drilled into each teenager as a shortcut for thinking and reflection. “If writing is like exercise,” he quips, “the five-paragraph essay is like one of those ab belt doohickeys that claim to electroshock your core into a six-pack.”

What is to blame for students’ bad writing? According to Warner, the entire context in which it is taught. He rails against school systems that privilege shallow “achievement” over curiosity and learning, a culture of “surveillance and compliance” (including apps that track students’ behaviour and report it to parents in real time), an obsession with standardized testing that is fundamentally inimical to thoughtful reading and writing, and a love of faddish psychological theories and worthless digital learning projects.

It is easy for a lover of good writing to share Warner’s anger at the shallow and mechanistic culture of public education in the United States, easy to smile knowingly when he notes that standardized tests prize students’ ability to produce “pseudo-academic BS”, meaningless convoluted sentences cobbled together out of sophisticated-sounding words. Warner’s argument against teaching grammar is harder to swallow. Seeing in grammar yet another case of rules and correctness being put ahead of thoughtful engagement, Warner claims, “the sentence is not the basic skill or fundamental unit of writing. The idea is”. Instead of assignments, he gives his students “writing experiences”, interlocked prompts designed to hone their ability to observe, analyse and communicate. His position on grammatical teaching is a step too far: it can be a tool as much as a shackle. Still, writers may recognize the truth of Warner’s reflection that “what looks like a problem with basic sentence construction may instead be a struggle to find an idea for the page”.

Trish Hall shares Warner’s belief that effective writing means putting thinking before craft. Hall ran the New York Times’s op-ed page for half a decade, and in Writing To Persuade she shows us how to succeed at one kind of formula, the short newspaper opinion piece. The book is slim, filled out with personal recollections in muted prose, and enlivened by the occasional celebrity anecdote. Her target audience seems to be the kind of educated professionals who regularly read the New York Times, who may even write as part of their work, but who have not thought about what it means to address those who do not share their opinions. Hall does offer useful, sometimes surprising, tips on avoiding jargon, finding a writerly voice, and telling a story, but most of the book is dedicated to cultivating the humanity beneath the writing.

“I can’t overstate the value of putting down your phone and having conversations with people”, she writes. Persuasion is not simply a matter of hammering one’s own point through with unassailable facts and arguments. It is a question of listening to other people, cultivating empathy for their experience, drawing on shared values to reach common ground. It also demands vulnerability; Hall praises writers who “reveal something almost painfully personal even as they connect to a larger issue or story that feels both universal and urgent”.

Much of her advice would not have surprised a classical rhetorician. She even quotes Cicero’s famous remark about it being a mistake to try “to compel others to believe and live as we do”, a mantra for this book. At her best, Hall outlines a rhetoric that is also a guide to living peaceably with others: understanding their desires, connecting. A simple experiment – not finishing other people’s sentences even when you think you know what they will say – exemplifies this understated wisdom. At her worst, Hall is too much the marketer, as when she notes that strong emotions play well on social media and enjoins her readers to “stay away from depressing images and crying people”. There ought to be enough space in a newspaper for frankly expressed opinions about the suffering of humanity. What she demonstrates, however, is that writing for an audience is a social act. Writing To Persuade is a stealth guide to manners for living in a world where conversations are as likely to take place in 280 characters on a screen as they are at a dinner table.

In Hall’s hands, considering other people means following a programmatic set of writing instructions. Amitava Kumar, a scholar who has written well-regarded works of memoir and journalism, thinks another way is possible. In Every Day I Write the Book: Notes on style, he breaks out of the strictures of academic prose by creating a virtual community of other writers on his pages. The book is a collection of short meditations on different topics related to writing, its form and practice, primarily in the university. Kumar’s style is poised and lyrical elsewhere, but here he takes on a familiar, relaxed persona, and he often lets his interlocutors have the best lines. Selections from his reading bump up against email conversations, chats on the Vassar campus, and Facebook comments; it is a noisy party where everyone has a bon mot at the ready. The book itself is assembled like a scrapbook, filled with reproductions of photographs, screenshots, handwritten notes and newspaper clippings Kumar has gathered over the years.

It is, in other words, an inspiring mess, a book that in its haphazard organization is its own argument for playfulness and improvisation. Like Warner, Kumar cannot stand “the five-paragraph costume armor of the high school essay”. Nor does he have much patience for other formulaic aspects of academic writing: didactic topic sentences, or jargony vocabulary such as “emergence” and “post-capitalist hegemony”. In his description of a website that produces meaningless theoretical prose at the touch of a button, Kumar notes that “the academy is the original random sentence generator”. He is not anti-intellectual; his loyalties lie with the university, even as he understands its provinciality too well. But he asks his fellow writers to hold on fiercely to the weird and whimsical elements in their own creations, to be “inventive in our use of language and in our search for form”.

This means many things in practice. Kumar includes a section of unusual writing exercises, many of them borrowed from other authors: rewriting a brilliant passage badly to see what made it work; scribbling just what will fit on a Post-it Note to begin a longer piece; writing letters to public figures. Other moments are about connection. In a chapter on voice, he quotes the poet and novelist Bhanu Kapil’s description of how she began a series of interviews with Indian and Pakistani women: “The first question I asked, to a young Muslim woman … Indian parents, thick Glaswegian accent, [was] ‘Who was responsible for the suffering of your mother?’ She burst into tears”. That one question could fill many libraries. Invention also means embracing collaboration with editors, and understanding writing as “a practice of revision and extension and opening”. Kumar calls for loyalty to one’s creative calling, wherever it may lead. The reward? Nothing less than freedom and immortality.

But surely craft still matters? We may accept that writing is rooted in the ethical relationships between teachers, students, writers, editors and those silent imagined readers. Does this mean that the skill of conveying an idea in language in a clear and aesthetically pleasing fashion is nothing but the icing on the cake? Joe Moran’s exquisite book First You Write a Sentence: The elements of reading, writing … and life suggests otherwise. As befits a cultural historian, Moran compares writing sentences to crafting other artisanal objects – they are artworks and spaces of refuge, gifts with which an author shapes the world to be more beautiful and capacious and kind. Like a town square or a city park, “a well-made sentence shows … silent solicitude for others. It cares”.

Moran’s own sentences are so deliciously epigrammatic that I considered giving up chocolate in favour of re-reading his book. Because he has dedicated an entire volume to one small form, he has the leisure to attend to fine details. As he explores sentences from every angle, he describes the relative heat of different verbs, the delicately shading nuances of punctuation choices, how short words feel in the mouth, the opportunity of white space. “Learn to love the feel of sentences,” he writes with a connoisseur’s delight, “the arcs of anticipation and suspense, the balancing phrases, the wholesome little snap of the full stop.”

The book is full of advice, but Moran’s rules are not meant to inhibit. He will happily tell you how to achieve a style clear as glass, then praise the rococo rhetorician who “wants to forge reality through words, not just gaze at it blankly through a window”. He is more mentor than instructor, slowly guiding us to notice and appreciate the intricacies of a well-forged phrase. And he does so with tender generosity towards the unloved heroism of “cussedly making sentences that no one asked for and no one will be obliged to read”. As pleasurable as it is to watch Moran unfold the possibilities of an English sentence, his finest contribution is an understanding of the psychology – fragile, labile – of the writer. He knows that a writer must fight distraction, bad verbal habits, and the cheap appeal of early drafts to find their voice. There it is! “It was lost amid your dishevelled thoughts and wordless anxieties, until you pulled it out of yourself, as a flowing line of sentences.”

Human beings take pleasure in noticing nature’s patterns, according to Moran, and these patterns help them to thrive, sometimes in unforeseen ways. A sentence is also form imposed on chaos, and his suggestion that it has an organic role in the survival of the species might seem bold. (Though how many of us owe our lives to a parent who said the right words in a pleasing order?) The novelist Jane Alison’s invigorating book Meander, Spiral, Explode: Design and pattern in narrative follows a similar impulse, seeking the elegant forms that order nature in the structures of stories and novels. Her bugbear is the dramatic arc, the shape that Aristotle noticed in the tragedies of his time but that has become a tyrant of creative writing instruction. “Something that swells and tautens until climax, then collapses? Bit masculo-sexual, no?” Alison has other ideas for excitement.

In brief, compelling meditations on contemporary fiction, she teases out figures we might expect to spy from a plane window or in the heart of a tree. Here are corkscrews and wavelets and fractals and networks of cells. Is this forced? Alison recognizes the cheekiness of her project, knows her readings of form may not convince every reader. Her aim is not to classify tales, to pin them like butterflies on a styrofoam board. She knows, for example, that any complex literary narrative will create a network of associations in the reader’s mind. Her goal is to imagine how a reader might experience a story, looking for “structures that create an inner sensation of traveling toward something and leave a sense of shape behind, so that the stories feel organized”.

Shapes appear in Alison’s mind as clusters of images, so what begins as literary analysis condenses into a small poem. For “meander”, Alison asks us to “picture a river curving and kinking, a snake in motion, a snail’s silver trail, or the path left by a goat”. She speaks of the use of colour in narrative “as a unifying wash, a secret code, or a stealthy constellation”. The point is not ornamentation, though Alison can write a sentence lush enough to drown in, but tempting fiction writers to render life more closely. Against the grand tragedy of the narrative arc, she proposes small undulations: “Dispersed patterning, a sense of ripple or oscillation, little ups and downs, might be more true to human experience than a single crashing wave”. These are the shifting moods of a single day, the temporary loss of the house keys, the sky a sunnier hue than expected.

The Roman educator Quintilian once insisted that an orator must be a good man. It was a commonplace of his time. The rigorous study of eloquence, he thought, required a mind undistracted by vice. The books discussed here inherit this ancient conviction that the attempt to write well is a bettering one. Composing a crisp sentence demands attention to fine detail and a craftsmanlike dedication to perfection. Deciding what to set to paper requires the ability to imagine where a reader might struggle or yawn. In a world tormented by spectres too reckless to name, care and empathy are welcome strangers.

Irina Dumitrescu is Professor of English Medieval Studies at the University of Bonn

Posted in Empire, History, Modernity

Mikhail — How the Ottomans Shaped the Modern World

This post is a reflection on the role that the Ottoman Empire played in shaping the modern world.  It draws on a new book by Alan Mikhail, God’s Shadow: Sultan Selim, His Ottoman Empire, and the Making of the Modern World.  

The Ottomans are the Rodney Dangerfields of empires: They don’t get no respect.  If we picture them at all, it’s either the exotic image of turbans and concubines in Topkapi Palace or the sad image of the “sick man of Europe” in the days before World War I, which finally put them out of their misery.  Neither does them justice.  For a long time, they were the most powerful empire in the world, which dramatically shaped life on three continents — Europe, Asia, and Africa. 

But what makes their story so interesting is that it is more than just an account of some faded glory in the past.  As Mikhail points out, the Ottomans left an indelible stamp on the modern world.  It was their powerful presence in the middle of Eurasia that pushed the minor but ambitious states of Western Europe to set sail for the East and West Indies.  The Dutch, Portuguese, Spanish, and English couldn’t get to the treasures of China and India by land because of the impassable presence of the Ottomans.  So they either had to sail east around Africa to get there or forge a new path to the west, which led them to the Americas.  In fact, they did both, and the result was the riches that turned them into imperial powers who came to dominate much of the known world.  

Without the Ottomans, there would not have been the massive expansion of world trade, the Spanish empire, the riches and technological innovations that spurred the industrial revolution and empowered the English and American empires.

God's Shadow

Here are some passages from the book that give you a feel for the impact the Ottomans had:

For half a century before 1492, and for centuries afterward, the Ottoman Empire stood as the most powerful state on earth: the largest empire in the Mediterranean since ancient Rome, and the most enduring in the history of Islam. In the decades around 1500, the Ottomans controlled more territory and ruled over more people than any other world power. It was the Ottoman monopoly of trade routes with the East, combined with their military prowess on land and on sea, that pushed Spain and Portugal out of the Mediterranean, forcing merchants and sailors from these fifteenth-century kingdoms to become global explorers as they risked treacherous voyages across oceans and around continents—all to avoid the Ottomans.

From China to Mexico, the Ottoman Empire shaped the known world at the turn of the sixteenth century. Given its hegemony, it became locked in military, ideological, and economic competition with the Spanish and Italian states, Russia, India, and China, as well as other Muslim powers. The Ottomans influenced in one way or another nearly every major event of those years, with reverberations down to our own time. Dozens of familiar figures, such as Columbus, Vasco da Gama, Montezuma, the reformer Luther, the warlord Tamerlane, and generations of popes—as well as millions of other greater and lesser historical personages—calibrated their actions and defined their very existence in reaction to the reach and grasp of Ottoman power.

Other facts, too, have blotted out our recognition of the Ottoman influence on our own history. Foremost, we tend to read the history of the last half-millennium as “the rise of the West.” (This anachronism rings as true in Turkey and the rest of the Middle East as it does in Europe and America.) In fact, in 1500, and even in 1600, there was no such thing as the now much-vaunted notion of “the West.” Throughout the early modern centuries, the European continent consisted of a fragile collection of disparate kingdoms and small, weak principalities locked in constant warfare. The large land-based empires of Eurasia were the dominant powers of the Old World, and, apart from a few European outposts in and around the Caribbean, the Americas remained the vast domain of its indigenous peoples. The Ottoman Empire held more territory in Europe than did most European-based states. In 1600, if asked to pick a single power that would take over the world, a betting man would have put his money on the Ottoman Empire, or perhaps China, but certainly not on any European entity.

The sheer scope of the empire at its height was extraordinary:

For close to four centuries, from 1453 until well into the exceedingly fractured 1800s, the Ottomans remained at the center of global politics, economics, and war. As European states rose and fell, the Ottomans stood strong. They battled Europe’s medieval and early modern empires, and in the twentieth century continued to fight in Europe, albeit against vastly different enemies. Everyone from Machiavelli to Jefferson to Hitler—quite an unlikely trio—was forced to confront the challenge of the Ottomans’ colossal power and influence. Counting from their first military victory, at Bursa, they ruled for nearly six centuries in territories that today comprise some thirty-three countries. Their armies would control massive swaths of Europe, Africa, and Asia; some of the world’s most crucial trade corridors; and cities along the shores of the Mediterranean, Red, Black, and Caspian seas, the Indian Ocean, and the Persian Gulf. They held Istanbul and Cairo, two of the largest cities on earth, as well as the holy cities of Mecca, Medina, and Jerusalem, and what was the world’s largest Jewish city for over four hundred years, Salonica (Thessaloniki in today’s Greece). From their lowly beginnings as sheep-herders on the long, hard road across Central Asia, the Ottomans ultimately succeeded in proving themselves the closest thing to the Roman Empire since the Roman Empire itself.

One of the interesting things about the Ottomans was how cosmopolitan and relatively tolerant they were.  The Spanish threw the Muslims and Jews out of Spain, but the Ottomans welcomed a variety of peoples, cultures, languages, and religions.  It wasn’t until relatively late that the empire came to be predominantly Muslim.

Although all religious minorities throughout the Mediterranean were subjected to much hardship, the Ottomans, despite what Innocent thought, never persecuted non-Muslims in the way that the Inquisition persecuted Muslims and Jews—and, despite the centuries of calls for Christian Crusades, Muslims never attempted a war against the whole of Christianity. While considered legally inferior to Muslims, Christians and Jews in the Ottoman Empire (as elsewhere in the lands of Islam) had more rights than other religious minorities around the world. They had their own law courts, freedom to worship in the empire’s numerous synagogues and churches, and communal autonomy. While Christian Europe was killing its religious minorities, the Ottomans protected theirs and welcomed those expelled from Europe. Although the sultans of the empire were Muslims, the majority of the population was not. Indeed, the Ottoman Empire was effectively the Mediterranean’s most populous Christian state: the Ottoman sultan ruled over more Christian subjects than the Catholic pope.

The sultan who moved the Ottoman empire into the big leagues — tripling its size — was Selim the Grim, who is the central figure of this book (look at his image on the book’s cover and you’ll see how he earned the name).  His son was Suleyman the Magnificent, whose long rule made him the lasting symbol of the empire at its peak.  Another sign of the heterogeneous nature of the Ottomans is that the sultans themselves were of mixed blood.

Because, in this period, Ottoman sultans and princes produced sons not from their wives but from their concubines, all Ottoman sultans were the sons of foreign, usually Christian-born, slaves like Gülbahar [Selim’s mother].

In the exceedingly cosmopolitan empire, the harem ensured that a non-Turkish, non-Muslim, non-elite diversity was infused into the very bloodline of the imperial family. As the son of a mother with roots in a far-off land, a distant culture, and a religion other than Islam, Selim viscerally experienced the ethnically and religiously amalgamated nature of the Ottoman Empire, and grew up in provincial Amasya with an expansive outlook on the fifteenth-century world.

Posted in History, Liberal democracy, Philosophy

Fukuyama — Liberalism and Its Discontents

This post is a brilliant essay by Francis Fukuyama, “Liberalism and Its Discontents.”  In it, he explores the problems facing liberal democracy today.  As always, it is threatened by autocratic regimes around the world.  But what’s new since the fall of the Soviet Union is the threat from illiberal democracy, both at home and abroad, in the form of populism of the right and the left.

His argument is a strong defense of the liberal democratic order, but it is also a very smart analysis of how liberal democracy has sowed the seeds of its own downfall.  He shows how much it depends on the existence of a vibrant civil society and robust social capital, both of which its own emphasis on individual liberty tends to undermine.  He also shows how its stress on free markets has fostered the rise of the neoliberal religion, which seeks to subordinate the once robust liberal state to the market.  And he notes how its tolerance of diverse viewpoints leaves it vulnerable to illiberal views that seek to wipe it out of existence.

This essay was published in the inaugural issue of the magazine American Purpose on October 5, 2020.  Here’s a link to the original.

It’s well worth your while to give this essay a close read.


Liberalism and Its Discontents

The challenges from the left and the right.

Francis Fukuyama

Today, there is a broad consensus that democracy is under attack or in retreat in many parts of the world. It is being contested not just by authoritarian states like China and Russia, but by populists who have been elected in many democracies that seemed secure.

The “democracy” under attack today is a shorthand for liberal democracy, and what is really under greatest threat is the liberal component of this pair. The democracy part refers to the accountability of those who hold political power through mechanisms like free and fair multiparty elections under universal adult franchise. The liberal part, by contrast, refers primarily to a rule of law that constrains the power of government and requires that even the most powerful actors in the system operate under the same general rules as ordinary citizens. Liberal democracies, in other words, have a constitutional system of checks and balances that limits the power of elected leaders.

Democracy itself is being challenged by authoritarian states like Russia and China that manipulate or dispense with free and fair elections. But the more insidious threat arises from populists within existing liberal democracies who are using the legitimacy they gain through their electoral mandates to challenge or undermine liberal institutions. Leaders like Hungary’s Viktor Orbán, India’s Narendra Modi, and Donald Trump in the United States have tried to undermine judicial independence by packing courts with political supporters, have openly broken laws, or have sought to delegitimize the press by labeling mainstream media as “enemies of the people.” They have tried to dismantle professional bureaucracies and to turn them into partisan instruments. It is no accident that Orbán puts himself forward as a proponent of “illiberal democracy.”

The contemporary attack on liberalism goes much deeper than the ambitions of a handful of populist politicians, however. They would not be as successful as they have been were they not riding a wave of discontent with some of the underlying characteristics of liberal societies. To understand this, we need to look at the historical origins of liberalism, its evolution over the decades, and its limitations as a governing doctrine.

What Liberalism Was

Classical liberalism can best be understood as an institutional solution to the problem of governing over diversity. Or to put it in slightly different terms, it is a system for peacefully managing diversity in pluralistic societies. It arose in Europe in the late 17th and 18th centuries in response to the wars of religion that followed the Protestant Reformation, wars that lasted for 150 years and killed major portions of the populations of continental Europe.

While Europe’s religious wars were driven by economic and social factors, they derived their ferocity from the fact that the warring parties represented different Christian sects that wanted to impose their particular interpretation of religious doctrine on their populations. This was a period in which the adherents of forbidden sects were persecuted—heretics were regularly tortured, hanged, or burned at the stake—and their clergy hunted. The founders of modern liberalism like Thomas Hobbes and John Locke sought to lower the aspirations of politics, not to promote a good life as defined by religion, but rather to preserve life itself, since diverse populations could not agree on what the good life was. This was the distant origin of the phrase “life, liberty, and the pursuit of happiness” in the Declaration of Independence. The most fundamental principle enshrined in liberalism is one of tolerance: You do not have to agree with your fellow citizens about the most important things, but only that each individual should get to decide what those things are without interference from you or from the state. The limits of tolerance are reached only when the principle of tolerance itself is challenged, or when citizens resort to violence to get their way.

Understood in this fashion, liberalism was simply a pragmatic tool for resolving conflicts in diverse societies, one that sought to lower the temperature of politics by taking questions of final ends off the table and moving them into the sphere of private life. This remains one of its most important selling points today: If diverse societies like India or the United States move away from liberal principles and try to base national identity on race, ethnicity, or religion, they are inviting a return to potentially violent conflict. The United States suffered such conflict during its Civil War, and Modi’s India is inviting communal violence by shifting its national identity to one based on Hinduism.

There is however a deeper understanding of liberalism that developed in continental Europe that has been incorporated into modern liberal doctrine. In this view, liberalism is not simply a mechanism for pragmatically avoiding violent conflict, but also a means of protecting fundamental human dignity.

The ground of human dignity has shifted over time. In aristocratic societies, it was an attribute only of warriors who risked their lives in battle. Christianity universalized the concept of dignity based on the possibility of human moral choice: Human beings had a higher moral status than the rest of created nature but lower than that of God because they could choose between right and wrong. Unlike beauty or intelligence or strength, this characteristic was universally shared and made human beings equal in the sight of God. By the time of the Enlightenment, the capacity for choice or individual autonomy was given a secular form by thinkers like Rousseau (“perfectibility”) and Kant (a “good will”), and became the ground for the modern understanding of the fundamental right to dignity written into many 20th-century constitutions. Liberalism recognizes the equal dignity of every human being by granting them rights that protect individual autonomy: rights to speech, to assembly, to belief, and ultimately to participate in self-government.

Liberalism thus protects diversity by deliberately not specifying higher goals of human life. This disqualifies religiously defined communities as liberal. Liberalism also grants equal rights to all people considered full human beings, based on their capacity for individual choice. Liberalism thus tends toward a kind of universalism: Liberals care not just about their rights, but about the rights of others outside their particular communities. Thus the French Revolution carried the Rights of Man across Europe. From the beginning the major arguments among liberals were not over this principle, but rather over who qualified as rights-bearing individuals, with various groups—racial and ethnic minorities, women, foreigners, the propertyless, children, the insane, and criminals—excluded from this magic circle.

A final characteristic of historical liberalism was its association with the right to own property. Property rights and the enforcement of contracts through legal institutions became the foundation for economic growth in Britain, the Netherlands, Germany, the United States, and other states that were not necessarily democratic but protected property rights. For that reason liberalism was strongly associated with economic growth and modernization. Rights were protected by an independent judiciary that could call on the power of the state for enforcement. Properly understood, rule of law referred both to the application of day-to-day rules that governed interactions between individuals and to the design of political institutions that formally allocated political power through constitutions. The class that was most committed to liberalism historically was the class of property owners, not just agrarian landlords but the myriads of middle-class business owners and entrepreneurs that Karl Marx would label the bourgeoisie.

Liberalism is connected to democracy, but is not the same thing as it. It is possible to have regimes that are liberal but not democratic: Germany in the 19th century and Singapore and Hong Kong in the late 20th century come to mind. It is also possible to have democracies that are not liberal, like the ones Viktor Orbán and Narendra Modi are trying to create that privilege some groups over others. Liberalism is allied to democracy through its protection of individual autonomy, which ultimately implies a right to political choice and to the franchise. But it is not the same as democracy. From the French Revolution on, there were radical proponents of democratic equality who were willing to abandon liberal rule of law altogether and vest power in a dictatorial state that would equalize outcomes. Under the banner of Marxism-Leninism, this became one of the great fault lines of the 20th century. Even in avowedly liberal states, like many in late 19th- and early 20th-century Europe and North America, there were powerful trade union movements and social democratic parties that were more interested in economic redistribution than in the strict protection of property rights.

Liberalism also saw the rise of another competitor besides communism: nationalism. Nationalists rejected liberalism’s universalism and sought to confer rights only on their favored group, defined by culture, language, or ethnicity. As the 19th century progressed, Europe reorganized itself from a dynastic to a national basis, with the unification of Italy and Germany and with growing nationalist agitation within the multiethnic Ottoman and Austro-Hungarian empires. In 1914 this exploded into the Great War, which killed millions of people and laid the kindling for a second global conflagration in 1939.

The defeat of Germany, Italy, and Japan in 1945 paved the way for a restoration of liberalism as the democratic world’s governing ideology. Europeans saw the folly of organizing politics around an exclusive and aggressive understanding of nation, and created the European Community and later the European Union to subordinate the old nation-states to a cooperative transnational structure. For its part, the United States played a powerful role in creating a new set of international institutions, including the United Nations (and affiliated Bretton Woods organizations like the World Bank and IMF), GATT and the World Trade Organization, and cooperative regional ventures like NATO and NAFTA.

The largest threat to this order came from the former Soviet Union and its allied communist parties in Eastern Europe and the developing world. But the former Soviet Union collapsed in 1991, as did the perceived legitimacy of Marxism-Leninism, and many former communist countries sought to incorporate themselves into existing international institutions like the EU and NATO. This post-Cold War world would collectively come to be known as the liberal international order.

But the period from 1950 to the 1970s was the heyday of liberal democracy in the developed world. Liberal rule of law abetted democracy by protecting ordinary people from abuse: The U.S. Supreme Court, for example, was critical in breaking down legal racial segregation through decisions like Brown v. Board of Education. And democracy protected the rule of law: When Richard Nixon engaged in illegal wiretapping and use of the CIA, it was a democratically elected Congress that helped drive him from power. Liberal rule of law laid the basis for the strong post-World War II economic growth that then enabled democratically elected legislatures to create redistributive welfare states. Inequality was tolerable in this period because most people could see their material conditions improving. In short, this period saw a largely happy coexistence of liberalism and democracy throughout the developed world.

Discontents

Liberalism has been a broadly successful ideology, and one that is responsible for much of the peace and prosperity of the modern world. But it also has a number of shortcomings, some of which were triggered by external circumstances, and others of which are intrinsic to the doctrine. The first lies in the realm of economics, the second in the realm of culture.

The economic shortcomings have to do with the tendency of economic liberalism to evolve into what has come to be called “neoliberalism.” Neoliberalism is today a pejorative term used to describe a form of economic thought, often associated with the University of Chicago or the Austrian school, and economists like Friedrich Hayek, Milton Friedman, George Stigler, and Gary Becker. They sharply denigrated the role of the state in the economy, and emphasized free markets as spurs to growth and efficient allocators of resources. Many of the analyses and policies recommended by this school were in fact helpful and overdue: Economies were overregulated, state-owned companies inefficient, and governments responsible for the simultaneous high inflation and low growth experienced during the 1970s.

But valid insights about the efficiency of markets evolved into something of a religion, in which state intervention was opposed not based on empirical observation but as a matter of principle. Deregulation produced lower airline ticket prices and shipping costs for trucks, but also laid the ground for the great financial crisis of 2008 when it was applied to the financial sector. Privatization was pushed even in cases of natural monopolies like municipal water or telecom systems, leading to travesties like the privatization of Mexico’s TelMex, where a public monopoly was transformed into a private one. Perhaps most important, the fundamental insight of trade theory, that free trade leads to higher wealth for all parties concerned, neglected the further insight that this was true only in the aggregate, and that many individuals would be hurt by trade liberalization. The period from the 1980s onward saw the negotiation of both global and regional free trade agreements that shifted jobs and investment away from rich democracies to developing countries, increasing within-country inequalities. In the meantime, many countries starved their public sectors of resources and attention, leading to deficiencies in a host of public services from education to health to security.

The result was the world that emerged by the 2010s in which aggregate incomes were higher than ever but inequality within countries had also grown enormously. Many countries around the world saw the emergence of a small class of oligarchs, multibillionaires who could convert their economic resources into political power through lobbyists and purchases of media properties. Globalization enabled them to move their money to safe jurisdictions easily, starving states of tax revenue and making regulation very difficult. Globalization also entailed liberalization of rules concerning migration. Foreign-born populations began to increase in many Western countries, abetted by crises like the Syrian civil war that sent more than a million refugees into Europe. All of this paved the way for the populist reaction that became clearly evident in 2016 with Britain’s Brexit vote and the election of Donald Trump in the United States.

The second discontent with liberalism as it evolved over the decades was rooted in its very premises. Liberalism deliberately lowered the horizon of politics: A liberal state will not tell you how to live your life, or what a good life entails; how you pursue happiness is up to you. This produces a vacuum at the core of liberal societies, one that often gets filled by consumerism or pop culture or other random activities that do not necessarily lead to human flourishing. This has been the critique of a group of (mostly) Catholic intellectuals including Patrick Deneen, Sohrab Ahmari, Adrian Vermeule, and others, who feel that liberalism offers “thin gruel” for anyone with deeper moral commitments.

This leads us to a deeper stratum of discontent. Liberal theory, both in its economic and political guises, is built around individuals and their rights, and the political system protects their ability to make these choices autonomously. Indeed, in neoclassical economic theory, social cooperation arises only as a result of rational individuals deciding that it is in their self-interest to work with other individuals. Among conservative intellectuals, Patrick Deneen has gone the furthest by arguing that this whole approach is deeply flawed precisely because it is based on this individualistic premise, and sanctifies individual autonomy above all other goods. Thus for him, the entire American project based as it was on Lockean individualistic principles was misfounded. Human beings for him are not primarily autonomous individuals, but deeply social beings who are defined by their obligations and ties to a range of social structures, from families to kin groups to nations.

This social understanding of human nature was a truism taken for granted by most thinkers prior to the Western Enlightenment. It is also one that is supported by a great deal of recent research in the life sciences that shows that human beings are hard-wired to be social creatures: Many of our most salient faculties are ones that lead us to cooperate with one another in groups of various sizes and types. This cooperation does not arise necessarily from rational calculation; it is supported by emotional faculties like pride, guilt, shame, and anger that reinforce social bonds. The success of human beings over the millennia that has allowed our species to completely dominate its natural habitat has to do with this aptitude for following norms that induce social cooperation.

By contrast, the kind of individualism celebrated in liberal economic and political theory is a contingent development that emerged in Western societies over the centuries. Its history is long and complicated, but it originated in the inheritance rules set down by the Catholic Church in early medieval times which undermined the extended kinship networks that had characterized Germanic tribal societies. Individualism was further validated by its functionality in promoting market capitalism: Markets worked more efficiently if individuals were not constrained by obligations to kin and other social networks. But this kind of individualism has always been at odds with the social proclivities of human beings. It also does not come naturally to people in certain other non-Western societies like India or the Arab world, where kin, caste, or ethnic ties are still facts of life.

The implication of these observations for contemporary liberal societies is straightforward. Members of such societies want opportunities to bond with one another in a host of ways: as citizens of a nation, members of an ethnic or racial group, residents of a region, or adherents to a particular set of religious beliefs. Membership in such groups gives their lives meaning and texture in a way that mere citizenship in a liberal democracy does not.

Many of the critics of liberalism on the right feel that it has undervalued the nation and traditional national identity: Thus Viktor Orbán has asserted that Hungarian national identity is based on Hungarian ethnicity and on maintenance of traditional Hungarian values and cultural practices. New nationalists like Yoram Hazony celebrate nationhood and national culture as the rallying cry for community, and they bemoan liberalism’s dissolving effect on religious commitment, yearning for a thicker sense of community and shared values, underpinned by virtues in service of that community.

There are parallel discontents on the left. Juridical equality before the law does not mean that people will be treated equally in practice. Racism, sexism, and anti-gay bias all persist in liberal societies, and those injustices have become identities around which people could mobilize. The Western world has seen the emergence of a series of social movements since the 1960s, beginning with the civil rights movement in the United States, and movements promoting the rights of women, indigenous peoples, the disabled, the LGBT community, and the like. The more progress that has been made toward eradicating social injustices, the more intolerable the remaining injustices seem, and the greater the moral imperative to mobilize to correct them. The complaint of the left is different in substance but similar in structure to that of the right: Liberal society does not do enough to root out deep-seated racism, sexism, and other forms of discrimination, so politics must go beyond liberalism. And, as on the right, progressives want the deeper bonding and personal satisfaction of associating—in this case, with people who have suffered from similar indignities.

This instinct for bonding and the thinness of shared moral life in liberal societies have shifted global politics on both the right and the left toward a politics of identity and away from the liberal world order of the late 20th century. Liberal values like tolerance and individual freedom are prized most intensely when they are denied: People who live in brutal dictatorships want the simple freedom to speak, associate, and worship as they choose. But over time life in a liberal society comes to be taken for granted and its sense of shared community seems thin. Thus in the United States, arguments between right and left increasingly revolve around identity, and particularly racial identity issues, rather than around economic ideology and questions about the appropriate role of the state in the economy.

There is another significant issue that liberalism fails to grapple adequately with, which concerns the boundaries of citizenship and rights. The premises of liberal doctrine tend toward universalism: Liberals worry about human rights, and not just the rights of Englishmen, or white Americans, or some other restricted class of people. But rights are protected and enforced by states which have limited territorial jurisdiction, and the question of who qualifies as a citizen with voting rights becomes a highly contested one. Some advocates of migrant rights assert a universal human right to migrate, but this is a political nonstarter in virtually every contemporary liberal democracy. At the present moment, the issue of the boundaries of political communities is settled by some combination of historical precedent and political contestation, rather than being based on any clear liberal principle.

Conclusion

Vladimir Putin told the Financial Times that liberalism has become an “obsolete” doctrine. While it may be under attack from many quarters today, it is in fact more necessary than ever.

It is more necessary because it is fundamentally a means of governing over diversity, and the world is more diverse than it ever has been. Democracy disconnected from liberalism will not protect diversity, because majorities will use their power to repress minorities. Liberalism was born in the mid-17th century as a means of resolving religious conflicts, and it was reborn after 1945 to solve conflicts between nationalisms. Any illiberal effort to build a social order around thick ties defined by race, ethnicity, or religion will exclude important members of the community, and down the road will lead to conflict. Russia itself retains liberal characteristics: Russian citizenship and nationality are not defined by either Russian ethnicity or the Orthodox religion; the Russian Federation’s millions of Muslim inhabitants enjoy equal juridical rights. In situations of de facto diversity, any attempt to impose a single way of life on an entire population is a formula for dictatorship.

The only other way to organize a diverse society is through formal power-sharing arrangements among different identity groups that give only a nod toward shared nationality. This is the way that Lebanon, Iraq, Bosnia, and other countries in the Middle East and the Balkans are governed. This type of consociationalism leads to very poor governance and long-term instability, and works poorly in societies where identity groups are not geographically based. This is not a path down which any contemporary liberal democracy should want to tread.

That being said, the kinds of economic and social policies that liberal societies should pursue is today a wide-open question. The evolution of liberalism into neoliberalism after the 1980s greatly reduced the policy space available to centrist political leaders, and permitted the growth of huge inequalities that have been fueling populisms of the right and the left. Classical liberalism is perfectly compatible with a strong state that seeks social protections for populations left behind by globalization, even as it protects basic property rights and a market economy. Liberalism is necessarily connected to democracy, and liberal economic policies need to be tempered by considerations of democratic equality and the need for political stability.

I suspect that most religious conservatives critical of liberalism today in the United States and other developed countries do not fool themselves into thinking that they can turn the clock back to a period when their social views were mainstream. Their complaint is a different one: that contemporary liberals are ready to tolerate any set of views, from radical Islam to Satanism, other than those of religious conservatives, and that they find their own freedom constrained.

This complaint is a serious one: Many progressives on the left have shown themselves willing to abandon liberal values in pursuit of social justice objectives. Over the past three decades there has been a sustained intellectual attack on liberal principles, coming out of academic fields like gender studies, critical race theory, postcolonial studies, and queer theory, that denies the universalistic premises underlying modern liberalism. The challenge is not simply one of intolerance of other views or “cancel culture” in the academy or the arts. Rather, the challenge is to the basic principles that all human beings are born equal in a fundamental sense, or that a liberal society should strive to be color-blind. These theories tend to argue that the lived experiences of specific and ever-narrower identity groups are incommensurate, and that what divides them is more powerful than what unites them as citizens. For some in the tradition of Michel Foucault, foundational approaches to cognition coming out of liberal modernity, like the scientific method or evidence-based research, are simply constructs meant to bolster the hidden power of racial and economic elites.

The issue here is thus not whether progressive illiberalism exists, but rather how great a long-term danger it represents. In countries from India and Hungary to the United States, nationalist conservatives have actually taken power and have sought to use the power of the state to dismantle liberal institutions and impose their own views on society as a whole. That danger is a clear and present one.

Progressive anti-liberals, by contrast, have not succeeded in seizing the commanding heights of political power in any developed country. Religious conservatives are still free to worship in any way they see fit, and indeed are organized in the United States as a powerful political bloc that can sway elections. Progressives exercise power in different and more nuanced ways, primarily through their dominance of cultural institutions like the mainstream media, the arts, and large parts of academia. The power of the state has been enlisted behind their agenda on such matters as striking down, through the courts, conservative restrictions on abortion and gay marriage, and in the shaping of public school curricula. An open question is whether cultural dominance today will ultimately lead to political dominance in the future, and thus a more thoroughgoing rollback of liberal rights by progressives.

Liberalism’s present-day crisis is not new; since its invention in the 17th century, liberalism has been repeatedly challenged by thick communitarians on the right and progressive egalitarians on the left. Liberalism properly understood is perfectly compatible with communitarian impulses and has been the basis for the flourishing of deep and diverse forms of civil society. It is also compatible with the social justice aims of progressives: One of its greatest achievements was the creation of modern redistributive welfare states in the late 20th century. Liberalism’s problem is that it works slowly through deliberation and compromise, and never achieves its communal or social justice goals as completely as their advocates would like. But it is hard to see how the discarding of liberal values is going to lead to anything in the long term other than increasing social conflict and ultimately a return to violence as a means of resolving differences.

Francis Fukuyama, chairman of the editorial board of American Purpose, directs the Center on Democracy, Development and the Rule of Law at Stanford University.

Posted in History, History of education, War

An Affair to Remember: America’s Brief Fling with the University as a Public Good

This post is an essay about the brief but glorious golden age of the US university during the three decades after World War II.  

American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward. Prior to World War II, the federal government showed little interest in universities and provided little support. The war spurred a large investment in defense-based scientific research in universities, and the emergence of the Cold War expanded federal investment exponentially. Unlike a hot war, the Cold War offered an extended period of federally funded research and public subsidy for expanding student enrollments. The result was the golden age of the American university. The good times continued for about 30 years and then began to go bad. The decline was triggered by the combination of a fading Soviet threat and a taxpayer revolt against high public spending, both trends culminating with the fall of the Berlin Wall in 1989. With no money and no enemy, the Cold War university fell as quickly as it arose. Instead of seeing the Cold War university as the norm, we need to think of it as the exception. What we are experiencing now in American higher education is a regression to the mean, in which, over the long haul, Americans have understood higher education to be a distinctly private good.

I originally presented this piece in 2014 at a conference at Catholic University in Leuven, Belgium.  It was then published in the Journal of Philosophy of Education in 2016 (here’s a link to the JOPE version) and then became a chapter in my 2017 book, A Perfect Mess.  Waste not, want not.  Hope you enjoy it.

Cold War

An Affair to Remember:

America’s Brief Fling with the University as a Public Good

David F. Labaree

            American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward.  Prior to World War II, the federal government showed little interest in universities and provided little support.  The war spurred a large investment in defense-based scientific research in universities for reasons of both efficiency and necessity:  universities had the researchers and infrastructure in place and the government needed to gear up quickly.  With the emergence of the Cold War in 1947, the relationship continued and federal investment expanded exponentially.  Unlike a hot war, the Cold War offered a long timeline for global competition between communism and democracy, which meant institutionalizing the wartime model of federally funded research and building a set of structures for continuing investment in knowledge whose military value was unquestioned. At the same time, the communist challenge provided a strong rationale for sending a large number of students to college.  These increased enrollments would educate the skilled workers needed by the Cold War economy, produce informed citizens to combat the Soviet menace, and demonstrate to the world the broad social opportunities available in a liberal democracy.  The result of this enormous public investment in higher education has become known as the golden age of the American university.

            Of course, as is so often the case with a golden age, it didn’t last.  The good times continued for about 30 years and then began to go bad.  The decline was triggered by the combination of a fading Soviet threat and a taxpayer revolt against high public spending, both trends culminating with the fall of the Berlin Wall in 1989.  With no money and no enemy, the Cold War university fell as quickly as it arose.

            In this paper I try to make sense of this short-lived institution.  But I want to avoid the note of nostalgia that pervades many current academic accounts, in which professors and administrators grieve for the good old days of the mid-century university and spin fantasies of recapturing them.  Barring another national crisis of the same dimension, however, it just won’t happen.  Instead of seeing the Cold War university as the norm that we need to return to, I suggest that it’s the exception.  What we’re experiencing now in American higher education is, in many ways, a regression to the mean. 

            My central theme is this:  Over the long haul, Americans have understood higher education as a distinctly private good.  The period from 1940 to 1970 was the one time in our history when the university became a public good.  And now we are back to the place we have always been, where the university’s primary role is to provide individual consumers a chance to gain social access and social advantage.  Since students are the primary beneficiaries, they should also foot the bill, and state subsidies are hard to justify.

            Here is my plan.  First, I provide an overview of the long period before 1940 when American higher education functioned primarily as a private good.  During this period, the beneficiaries changed from the university’s founders to its consumers, but private benefit was the steady state.  This is the baseline against which we can understand the rapid postwar rise and fall of public investment in higher education.  Next, I look at the huge expansion of public funding for higher education starting with World War II and continuing for the next 30 years.  Along the way I sketch how the research university came to enjoy a special boost in support and rising esteem during these decades.  Then I examine the fall from grace toward the end of the century when the public-good rationale for higher ed faded as quickly as it had emerged.  And I close by exploring the implications of this story for understanding the American system of higher education as a whole. 

            During most of its history, the central concern driving the system has not been what it can do for society but what it can do for me.  In many ways, this approach has been highly beneficial.  Much of its success as a system – as measured by wealth, rankings, and citations – derives from its core structure as a market-based system producing private goods for consumers rather than a politically based system producing public goods for state and society.  But this view of higher education as private property is also a key source of the system’s pathologies.  It helps explain why public funding for higher education is declining and student debt is rising; why private colleges are so much richer and more prestigious than public colleges; why the system is so stratified, with wealthy students attending the exclusive colleges at the top where social rewards are high and with poor students attending the inclusive colleges at the bottom where such rewards are low; and why quality varies so radically, from colleges that ride atop the global rankings to colleges that drift in intellectual backwaters.

The Private Origins of the System

            One of the peculiar aspects of the history of American higher education is that private colleges preceded public.  Another, which in part follows from the first, is that private colleges are also more prestigious.  Nearly everywhere else in the world, state-supported and governed universities occupy the pinnacle of the national system while private institutions play a small and subordinate role, supplying degrees of less distinction and serving students of less ability.  But in the U.S., the top private universities produce more research, gain more academic citations, attract better faculty and students, and graduate more leaders of industry, government, and the professions.  According to the 2013 Shanghai rankings, 16 of the top 25 universities in the U.S. are private, and the concentration is even higher at the top of this list, where private institutions make up 8 of the top 10 (Institute of Higher Education, 2013). 

            This phenomenon is rooted in the conditions under which colleges first emerged in the U.S.  American higher education developed into a system in the early 19th century, when three key elements were in place:  the state was weak, the market was strong, and the church was divided.  The federal government at the time was small and poor, surviving largely on tariffs and the sale of public lands, and state governments were strapped simply trying to supply basic public services.  Colleges were a low priority for government since they served no compelling public need – unlike public schools, which states saw as essential for producing citizens for the republic.  So colleges only emerged when local promoters requested and received a  corporate charter from the state.  These were private not-for-profit institutions that functioned much like any other corporation.  States provided funding only sporadically and only if an institution’s situation turned dire.  And after the Dartmouth College decision in 1819, the Supreme Court made clear that a college’s corporate charter meant that it could govern itself without state interference.  Therefore, in the absence of state funding and control, early American colleges developed a market-based system of higher education. 

            If the roots of the American system were private, they were also extraordinarily local.  Unlike the European university, with its aspirations toward universality and its history of cosmopolitanism, the American college of the nineteenth century was a home-town entity.  Most often, it was founded to advance the parochial cause of promoting a particular religious denomination rather than to promote higher learning.  In a setting where no church was dominant and all had to compete for visibility, stature, and congregants, founding colleges was a valuable way to plant the flag and promote the faith.  This was particularly true when the population was rapidly expanding into new territories to the west, which meant that no denomination could afford to cede the new terrain to competitors.  Starting a college in Ohio was a way to ensure denominational growth, prepare clergy, and spread the word.

            At the same time, colleges were founded with an eye toward civic boosterism, intended to shore up a community’s claim to be a major cultural and commercial center rather than a sleepy farm town.  With a college, a town could claim that it deserved to gain lucrative recognition as a stop on the railroad line, the site for a state prison, the county seat, or even the state capital.  These consequences would elevate the value of land in the town, which would work to the benefit of major landholders.  In this sense, the nineteenth century college, like much of American history, was in part the product of a land development scheme.  In general, these two motives combined: colleges emerged as a way to advance both the interests of particular sects and also the interests of the towns where they were lodged.  Often ministers were also land speculators.  It was always better to have multiple rationales and sources of support than just one (Brown, 1995; Boorstin, 1965; Potts, 1971).  In either case, however, the benefits of founding a college accrued to individual landowners and particular religious denominations and not to the larger public.

As a result of these incentives, church officials and civic leaders around the country scrambled to get a state charter for a college, establish a board of trustees made up of local notables, and install a president.  The latter (usually a clergyman) would rent a local building, hire a small and not very accomplished faculty, and serve as the CEO of a marginal educational enterprise, one that sought to draw tuition-paying students from the area in order to make the college a going concern.  With colleges arising to meet local and sectarian needs, the result was the birth of a large number of small, parochial, and weakly funded institutions in a very short period of time in the nineteenth century, which meant that most of these colleges faced a difficult struggle to survive in the competition with peer institutions.  In the absence of reliable support from church or state, these colleges had to find a way to get by on their own.

            Into this mix of private colleges, state and local governments began to introduce public institutions.  First came a series of universities established by individual states to serve their local populations.  Here too competition was a bigger factor than demand for learning, since a state government increasingly needed to have a university of its own in order to keep up with its neighbors.  Next came a group of land-grant colleges that began to emerge by midcentury.  Funded by grants of land from the federal government, these were public institutions that focused on providing practical education for occupations in agriculture and engineering.  Finally came an array of normal schools, which aimed at preparing teachers for the expanding system of public elementary education.  Like the private colleges, these public institutions emerged to meet the economic needs of towns that eagerly sought to house them.  And although these colleges were creatures of the state, they had only limited public funding and had to rely heavily on student tuition and private donations.

            The rate of growth of this system of higher education was staggering.  At the beginning of the American republic in 1790 the country had 19 institutions calling themselves colleges or universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1880, it had 811, which doesn’t even include the normal schools.  As a comparison, this was five times as many institutions as existed that year in all of Western Europe (Ruegg, 2004).  To be sure, the American institutions were for the most part colleges in name only, with low academic standards, an average student body of 131 (Carter et al., 2006, Table Bc523) and faculty of 14 (Carter et al., 2006, Table Bc571).  But nonetheless this was a massive infrastructure for a system of higher education.

            At a density of 16 colleges per million of population, the U.S. in 1880 had the most overbuilt system of higher education in the world (Collins, 1979, Table 5.2).  Created in order to meet the private needs of land speculators and religious sects rather than the public interest of state and society, the system got way ahead of demand for its services.  That changed in the 1880s.  By adopting parts of the German research university model (in form if not in substance), the top level of the American system acquired a modicum of academic respectability.  In addition – and this is more important for our purposes here – going to college finally came to be seen as a good investment for a growing number of middle-class student-consumers.

            Three factors came together to make college attractive.  Primary among these was the jarring change in the structure of status transmission for middle-class families toward the end of the nineteenth century.  The tradition of passing on social position to your children by transferring ownership of the small family business was under dire threat, as factories were driving independent craft production out of the market and department stores were making small retail shops economically marginal.  Under these circumstances, middle class families began to adopt what Burton Bledstein calls the “culture of professionalism” (Bledstein, 1976).  Pursuing a profession (law, medicine, clergy) had long been an option for young people in this social stratum, but now this attraction grew stronger as the definition of profession grew broader.  With the threat of sinking into the working class becoming more likely, families found reassurance in the prospect of a form of work that would buffer their children from the insecurity and degradation of wage labor.  This did not necessarily mean becoming a traditional professional, where the prospects were limited and entry costs high, but instead it meant becoming a salaried employee in a management position that was clearly separated from the shop floor.  The burgeoning white-collar work opportunities as managers in corporate and government bureaucracies provided the promise of social status, economic security, and protection from downward mobility.  And the best way to certify yourself as eligible for this kind of work was to acquire a college degree.

            Two other factors added to the attractions of college.  One was that a high school degree – once a scarce commodity that became a form of distinction for middle class youth during the nineteenth century – was in danger of becoming commonplace.  Across the middle of the century, enrollments in primary and grammar schools were growing fast, and by the 1880s they were filling up.  By 1900, the average American 20-year-old had eight years of schooling, which meant that political pressure was growing to increase access to high school (Goldin & Katz, 2008, p. 19).  This started to happen in the 1880s, and for the next 50 years high school enrollments doubled every decade.  The consequences were predictable.  If the working class was beginning to get a high school education, then middle class families felt compelled to preserve their advantage by pursuing college.

            The last piece that fell into place to increase the drawing power of college for middle class families was the effort by colleges in the 1880s and 90s to make undergraduate enrollment not just useful but enjoyable.  Ever desperate to find ways to draw and retain students, colleges responded to competitive pressure by inventing the core elements that came to define the college experience for American students in the twentieth century.  These included fraternities and sororities, pleasant residential halls, a wide variety of extracurricular entertainments, and – of course – football.  College life became a major focus of popular magazines, and college athletic events earned big coverage in newspapers.  In remarkably short order, going to college became a life stage in the acculturation of middle class youth.  It was the place where you could prepare for a respectable job, acquire sociability, learn middle class cultural norms, have a good time, and meet a suitable spouse.  And, for those who were so inclined, there was the potential fringe benefit of getting an education.

            Spurred by student desire to get ahead or stay ahead, college enrollments started growing quickly.  They were at 116,000 in 1879, 157,000 in 1889, 238,000 in 1899, 355,000 in 1909, 598,000 in 1919, 1,104,000 in 1929, and 1,494,000 in 1939 (Carter et al., 2006, Table Bc523).  This was a rate of increase of more than 50 percent a decade – not as fast as the increases that would come at midcentury, but still impressive.  During this same 60-year period, total college enrollment as a proportion of the population 18-to-24 years old rose from 1.6 percent to 9.1 percent (Carter et al., 2006, Table Bc524).  By 1930, the U.S. had three times the population of the U.K. and 20 times the number of college students (Levine, 1986, p. 135).  And the reason they were enrolling in such numbers was clear.  According to studies in the 1920s, almost two-thirds of undergraduates were there to get ready for a particular job, mostly in the lesser professions and middle management (Levine, 1986, p. 40).  Business and engineering were the most popular majors and the social sciences were on the rise.  As David Levine put it in his important book about college in the interwar years, "Institutions of higher learning were no longer content to educate; they now set out to train, accredit, and impart social status to their students" (Levine, 1986, p. 19).

            Enrollments were growing in public colleges faster than in private colleges, but only by a small amount.  In fact it wasn’t until 1931 – for the first time in the history of American higher education – that the public sector finally accounted for a majority of college students (Carter et al., 2006, Tables Bc531 and Bc534).  The increases occurred across all levels of the system, including the top public research universities; but the largest share of enrollments flowed into the newer institutions at the bottom of the system:  the state colleges that were emerging from normal schools, urban commuter colleges (mostly private), and an array of public and private junior colleges that offered two-year vocational programs. 

            For our purposes today, the key point is this:  The American system of colleges and universities that emerged in the nineteenth century and continued until World War II was a market-driven structure that construed higher education as a private good.  Until around 1880, the primary benefits of the system went to the people who founded individual institutions – the land speculators and religious sects for whom a new college brought wealth and competitive advantage.  This explains why colleges emerged in such remote places long before there was substantial student demand.  The role of the state in this process was muted.  The state was too weak and too poor to provide strong support for higher education, and there was no obvious state interest that argued for doing so.  Until the decade before the war, most student enrollments were in the private sector, and even at the war’s start the majority of institutions in the system were private (Carter et al., 2006, Tables Bc510 to Bc520).  

            After 1880, the primary benefits of the system went to the students who enrolled.  For them, it became the primary way to gain entry to the relatively secure confines of salaried work in management and the professions.  For middle class families, college in this period emerged as the main mechanism for transmitting social advantage from parents to children; and for others, it became the object of aspiration as the place to get access to the middle class.  State governments put increasing amounts of money into support for public higher education, not because of the public benefits it would produce but because voters demanded increasing access to this very attractive private good.

The Rise of the Cold War University

            And then came the Second World War.  There is no need here to recount the devastation it brought about or the nightmarish residue it left.  But it’s worth keeping in mind the peculiar fact that this conflict is remembered fondly by Americans, who often refer to it as the Good War (Terkel, 1997).  The war cost a lot of American lives and money, but it also brought a lot of benefits.  It didn’t hurt, of course, to be on the winning side and to have all the fighting take place on foreign territory.  And part of the positive feeling associated with the war comes from the way it thrust the country into a new role as the dominant world power.  But perhaps even more, the warm feeling arises from the memory of this as a time when the country came together around a common cause.  For citizens of the United States – the most liberal of liberal democracies, where private liberty is much more highly valued than public loyalty – it was a novel and exciting feeling to rally around the federal government.  Usually viewed with suspicion as a threat to the rights of individuals and a drain on private wealth, the American government in the 1940s took on the mantle of good in the fight against evil.  Its public image became the resolute face of a white-haired man dressed in red, white, and blue, who pointed at the viewer in a famous recruiting poster.  Its slogan: "Uncle Sam Wants You."

            One consequence of the war was a sharp increase in the size of the U.S. government.  The historically small federal state had started to grow substantially in the 1930s as a result of the New Deal effort to spend the country out of a decade-long economic depression, a time when spending doubled.  But the war raised the level of federal spending by a factor of seven, from $1,000 to $7,000 per capita.  After the war, the level dropped back to $2,000; and then the onset of the Cold War sent federal spending into a sharp, and this time sustained, increase – reaching $3,000 in the 50s, $4,000 in the 60s, and regaining the previous high of $7,000 in the 80s, during the last days of the Soviet Union (Garrett & Rhine, 2006, Figure 3).

            If for Americans in general World War II carries warm associations, for people in higher education it marks the beginning of the Best of Times – a short but intense period of generous public funding and rapid expansion.  Initially, of course, the war brought trouble, since it sent most prospective college students into the military.  Colleges quickly adapted by repurposing their facilities for military training and other war-related activities.  But the real long-term benefits came when the federal government decided to draw higher education more centrally into the war effort – first, as the central site for military research and development; and second, as the place to send veterans when the war was over.  Let me say a little about each.

            In the first half of the twentieth century, university researchers had to scrabble around looking for funding, forced to rely on a mix of foundations, corporations, and private donors.  The federal government saw little benefit in employing their services.  In a particularly striking case at the start of World War I, the professional association of academic chemists offered its help to the War Department, which declined “on the grounds that it already had a chemist in its employ” (Levine, 1986, p. 51).[1]  The existing model was for government to maintain its own modest research facilities instead of relying on the university.

            The scale of the next war changed all this.  At the very start, a former engineering dean from MIT, Vannevar Bush, took charge of mobilizing university scientists behind the war effort as head of the Office of Scientific Research and Development.  The model he established for managing the relationship between government and researchers set the pattern for university research that still exists in the U.S. today: Instead of setting up government centers, the idea was to farm out research to universities.  Issue a request for proposals to meet a particular research need; award the grant to the academic researchers who seemed best equipped to meet this need; and pay 50 percent or more overhead to the university for the facilities that researchers would use.  This method drew on the expertise and facilities that already existed at research universities, which both saved the government from having to maintain a costly permanent research operation and also gave it the flexibility to draw on the right people for particular projects.  For universities, it provided a large source of funds, which enhanced their research reputations, helped them expand faculty, and paid for infrastructure.  It was a win-win situation.  It also established the entrepreneurial model of the university researcher in perpetual search for grant money.  And for the first time in the history of American higher education, the university was being considered a public good, whose research capacity could serve the national interest by helping to win a war. 

            If universities could meet one national need during the war by providing military research, they could meet another national need after the war by enrolling veterans.  The GI Bill of Rights, passed by Congress in 1944, was designed to pay off a debt and resolve a manpower problem.  Its official name, the Servicemen’s Readjustment Act of 1944, reflects both aims.  By the end of the war there were 15 million men and women who had served in the military, who clearly deserved a reward for their years of service to the country.  The bill offered them the opportunity to continue their education at federal expense, which included attending the college of their choice.  This opportunity also offered another public benefit, since it responded to deep concern about the ability of the economy to absorb this flood of veterans.  The country had been sliding back into depression at the start of the war, and the fear was that massive unemployment at war’s end was a real possibility.  The strategy worked.  Under the GI Bill, about two million veterans eventually attended some form of college.  By 1948, when veteran enrollment peaked, American colleges and universities had one million more students than 10 years earlier (Geiger, 2004, pp. 40-41; Carter et al., 2006, Table Bc523).  This was another win-win situation.  The state rewarded national service, headed off mass unemployment, and produced a pile of human capital for future growth.  Higher education got a flood of students who could pay their own way.  The worry, of course, was what was going to happen when the wartime research contracts ended and the veterans graduated.

            That’s where the Cold War came in to save the day.  And the timing was perfect.  The first major action of the new conflict – the Berlin Blockade – came in 1948, the same year that veteran enrollments at American colleges reached their peak.  If World War II was good for American higher education, the Cold War was a bonanza.  The hot war meant boom and bust – providing a short surge of money and students followed by a sharp decline.  But the Cold War was a prolonged effort to contain Communism.  It was sustainable because actual combat was limited and often carried out by proxies.  For universities this was a gift that, for 30 years, kept on giving.  The military threat was massive in scale – nothing less than the threat of nuclear annihilation.  And supplementing it was an ideological challenge – the competition between two social and political systems for hearts and minds.  As a result, the government needed top universities to provide it with massive amounts of scientific research that would support the military effort.  And it also needed all levels of the higher education system to educate the large numbers of citizens required to deal with the ideological menace.  We needed to produce the scientists and engineers who would allow us to compete with Soviet technology.  We needed to provide high-level human capital in order to promote economic growth and demonstrate the economic superiority of capitalism over communism.  And we needed to provide educational opportunity for our own racial minorities and lower classes in order to show that our system is not only effective but also fair and equitable.  This would be a powerful weapon in the effort to win over the third world with the attractions of the American Way.  The Cold War American government treated the higher education system as a highly valuable public good, which would make a large contribution to the national interest; and the system was pleased to be the object of so much federal largesse (Loss, 2012).

            On the research side, the impact of the Cold War on American universities was dramatic.  The best way to measure this is by examining patterns of federal research and development spending, which trace the ebb and flow of national threats across the last 60 years.  Funding rose slowly from $13 billion in 1953 (in constant 2014 dollars) until the Sputnik crisis (after the Soviets succeeded in placing the first satellite in earth orbit), when funding jumped to $40 billion in 1959 and rose rapidly to a peak of $88 billion in 1967.  Then the amount backed off to $66 billion in 1975, climbing to a new peak of $104 billion in 1990 just before the collapse of the Soviet Union and then dropping off.  It started growing again in 2002 after the attack on the Twin Towers, reaching an all-time high of $151 billion in 2010, and it has been declining ever since (AAAS, 2014).[2]

            Initially, defense funding accounted for 85 percent of federal research funding, gradually falling back to about half in 1967, as nondefense funding increased, but remaining in a solid majority position up until the present.  For most of the period after 1957, however, the largest element in nondefense spending was research on space technology, which arose directly from the Soviet Sputnik threat.  If you combine defense and space appropriations, this accounts for about three-quarters of federal research funding until 1990.  Defense research closely tracked perceived threats in the international environment, dropping by 20 percent after 1989 and then making a comeback in 2001.  Overall, federal funding during the Cold War for research of all types grew in constant dollars from $13 billion in 1953 to $104 billion in 1990, an increase of 700 percent.  These were good times for university researchers (AAAS, 2014).

            At the same time that research funding was growing rapidly, so were college enrollments.  The number of students in American higher education grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969.  The number hit 11.6 million in 1979 and then began to slow down – creeping up to 13.5 million in 1989 and leveling off at around 14 million in the 1990s (Carter et al., 2006, Table Bc523; NCES, 2014, Table 303.10).  During the 30 years between 1949 and 1979, enrollments increased by more than 9 million students, a growth of almost 400 percent.  And the bulk of the enrollment increases in the last two decades were in part-time students and at two-year colleges.  Among four-year institutions, the primary growth occurred not at private or flagship public universities but at regional state universities, the former normal schools.  The Cold War was not just good for research universities; it was also great for institutions of higher education all the way down the status ladder.

            In part we can understand this radical growth in college enrollments as an extension of the long-term surge in consumer demand for American higher education as a private good.  Recall that enrollments started accelerating late in the nineteenth century, when college attendance started to provide an edge in gaining middle class jobs.  This meant that attending college gave middle-class families a way to pass on social advantage while attending high school gave working-class families a way to gain social opportunity.  But by 1940, high school enrollments had become universal.  So for working-class families, the new zone of social opportunity became higher education.  This increase in consumer demand provided a market-based explanation for at least part of the flood of postwar enrollments.

            At the same time, however, the Cold War provided a strong public rationale for broadening access to college.  In 1946, President Harry Truman appointed a commission to provide a plan for expanding access to higher education, which was the first time in American history that a president sought advice about education at any level.  The result was a six-volume report with the title Higher Education for American Democracy.  It’s no coincidence that the report was issued in 1947, the starting point of the Cold War.  The authors framed the report around the new threat of atomic war, arguing that “It is essential today that education come decisively to grips with the world-wide crisis of mankind” (President’s Commission, 1947, vol. 1, p. 6).  What they proposed as a public response to the crisis was a dramatic increase in access to higher education.

            The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate school, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitudes and interests.
        This means that we shall aim at making higher education equally available to all young people, as we now do education in the elementary and high schools, to the extent that their capacity warrants a further social investment in their training (President’s Commission, 1947, vol. 1, p. 36).

Tellingly, the report devotes a lot of space to exploring the existing barriers to educational opportunity posed by class and race – exactly the kinds of issues that were making liberal democracies look bad in light of the egalitarian promise of communism.

Decline of the System’s Public Mission

            So in the mid twentieth century, Americans went through an intense but brief infatuation with higher education as a public good.  Somehow college was going to help save us from the communist menace and the looming threat of nuclear war.  Like World War II, the Cold War brought together a notoriously individualistic population around the common goal of national survival and the preservation of liberal democracy.  It was a time when every public building had an area designated as a bomb shelter.  In the elementary school I attended in the 1950s, I can remember regular air raid drills.  The alarm would sound and teachers would lead us downstairs to the basement, whose concrete-block walls were supposed to protect us from a nuclear blast.  Although the drills did nothing to preserve life, they did serve an important social function.  Like Sunday church services, these rituals drew individuals together into communities of faith where we enacted our allegiance to a higher power. 

            For American college professors, these were the glory years, when fear of annihilation gave us a glamorous public mission and what seemed like an endless flow of public funds and funded students.  But it did not – and could not – last.  Wars can bring great benefits to the home front, but then they end.  The Cold War lasted longer than most, but this longevity came at the expense of intensity.  By the 1970s, the U.S. had lived with the nuclear threat for 30 years without any sign that the worst case was going to materialize.  You can only stand guard for so long before attention begins to flag and ordinary concerns start to push back to the surface.  In addition, waging war is extremely expensive, draining both public purse and public sympathy.  The two Cold War conflicts that engaged American troops cost a lot, stirred strong opposition, and ended badly, providing neither the idealistic glow of the Good War nor the satisfying closure of unconditional surrender by the enemy.  Korea ended with a stalemate and the return to the status quo ante bellum.  Vietnam ended with defeat and the humiliating image in 1975 of the last Americans being plucked off a rooftop in Saigon – which the victors then promptly renamed Ho Chi Minh City.

            The Soviet menace and the nuclear threat persisted, but in a form that – after the grim experience of war in the rice paddies – seemed distant and slightly unreal.  Add to this the problem that, as a tool for defeating the enemy, the radical expansion of higher education by the 70s did not appear to be a cost-effective option.  Higher ed is a very labor-intensive enterprise, in which size brings few economies of scale, and its public benefits in the war effort were hard to pin down.  As the national danger came to seem more remote, the costs of higher ed became more visible and more problematic.  Look around any university campus, and the primary beneficiaries of public largesse seem to be private actors – the faculty and staff who work there and the students whose degrees earn them higher income.  So about 30 years into the Cold War, the question naturally arose:  Why should the public pay so much to provide cushy jobs for the first group and to subsidize the personal ambition of the second?  If graduates reap the primary benefits of a college education, shouldn’t they be paying for it rather than the beleaguered taxpayer?

            The 1970s marked the beginning of the American tax revolt, and not surprisingly this revolt emerged first in the bellwether state of California.  Fueled by booming defense plants and high immigration, California had a great run in the decades after 1945.  During this period, the state developed the most comprehensive system of higher education in the country.  In 1960 it formalized this system with a Master Plan that offered every Californian the opportunity to attend college in one of three state systems.  The University of California focused on research, graduate programs, and educating the top high school graduates.  California State University (developed mostly from former teachers colleges) focused on undergraduate programs for the second tier of high school graduates.  The community college system offered the rest of the population two-year programs for vocational training and possible transfer to one of the two university systems.  By 1975, there were 9 campuses in the University of California, 23 in California State University, and xx in the community college system, with a total enrollment across all systems of 1.5 million students – accounting for 14 percent of the college students in the U.S. (Carter et al., 2006, Table Bc523; Douglass, 2000, Table 1).  Not only was the system enormous, but the Master Plan declared it illegal to charge California students tuition.  The biggest and best public system of higher education in the country was free.

            And this was the problem.  What allowed the system to grow so fast was a state fiscal regime that was quite rare in the American context – one based on high public services supported by high taxes.  After enjoying the benefits of this combination for a few years, taxpayers suddenly woke up to the realization that this approach to paying for higher education was at core un-American.  For a country deeply grounded in liberal democracy, the system of higher ed for all at no cost to the consumer looked a lot like socialism.  So, of course, it had to go.  In the mid-1970s the country’s first taxpayer revolt emerged in California, culminating in a successful campaign in 1978 to pass a state-wide initiative that put a limit on increases in property taxes.  Other tax limitation initiatives followed (Martin, 2008).  As a result, the average state appropriation per student at the University of California dropped from about $3,400 (in 1960 dollars) in 1987 to $1,100 in 2010, a decline of 68 percent (UC Data Analysis, 2014).  This quickly led to a steady increase in fees charged to students at California’s colleges and universities.  (It turned out that tuition was illegal but demanding fees from students was not.)  In 1960 dollars, the annual fees for in-state undergraduates at the University of California rose from $317 in 1987 to $1,122 in 2010, an increase of more than 250 percent (UC Data Analysis, 2014).  This pattern of tax limitations and tuition increases spread across the country.  Nationwide during the same period of time, the average state appropriation per student at a four-year public college fell from $8,500 to $5,900 (in 2012 dollars), a decline of 31 percent, while average undergraduate tuition doubled, rising from $2,600 to $5,200 (SHEEO, 2013, Figure 3).

            The decline in the state share of higher education costs was most pronounced at the top public research universities, which had a wider range of income sources.  By 2009, the average such institution was receiving only 25 percent of its revenue from state government (National Science Board, 2012, Figure 5).  An extreme case is the University of Virginia, where in 2013 the state provided less than six percent of the university’s operating budget (University of Virginia, 2014).

            While these changes were happening at the state level, the federal government was also backing away from its Cold War generosity to students in higher education.  Legislation such as the National Defense Education Act (1958) and the Higher Education Act (1965) had provided support for students through a roughly equal balance of grants and loans.  But in 1980 the election of Ronald Reagan as president meant that the push to lower taxes would become national policy.  At this point, student aid shifted from grants to federally guaranteed loans.  The idea was that a college degree was a great investment for students, which would pay long-term economic dividends, so they should shoulder an increasing share of the cost.  The proportion of total student support in the form of loans was 54 percent in 1975, 67 percent in 1985, and 78 percent in 1995, and the ratio has remained at that level ever since (McPherson & Schapiro, 1998, Table 3.3; College Board, 2013, Table 1).  By 1995, students were borrowing $41 billion to attend college, which grew to $89 billion in 2005 (College Board, 2014, Table 1).  At present, about 60 percent of all students accumulate college debt, most of it in the form of federal loans, and the total student debt load has passed $1 trillion.

            At the same time that the federal government was cutting back on funding college students, it was also reducing funding for university research.  As I mentioned earlier, federal research grants in constant dollars peaked at about $100 billion in 1990, the year after the fall of the Berlin Wall – a good marker for the end of the Cold War.  At this point defense accounted for about two-thirds of all university research funding – three-quarters if you include space research.  Defense research declined by about 20 percent during the 90s and didn’t start rising again substantially until 2002, the year after the fall of the Twin Towers and the beginning of the new existential threat known as the War on Terror.  Defense research reached a new peak in 2009 at a level about a third above the Cold War high, and it has been declining steadily ever since.  Increases in nondefense research helped compensate for only a part of the loss of defense funds (AAAS, 2014).

Conclusion

The American system of higher education came into existence as a distinctly private good.  It arose in the nineteenth century to serve the pursuit of sectarian advantage and land speculation, and then in the twentieth century it evolved into a system for providing individual consumers a way to get ahead or stay ahead in the social hierarchy.  Quite late in the game, it took World War II to give higher education an expansive national mission and reconstitute it as a public good.  But hot wars are unsustainable for long, so in 1945 the system was sliding quickly back toward public irrelevance before it was saved by the timely arrival of the Cold War.  As I have shown, the Cold War was very, very good for the American system of higher education.  It produced a massive increase in funding by federal and state governments, both for university research and for college student subsidies, and – more critically – it sustained this support for a period of three decades.  But these golden years gradually gave way to a national wave of taxpayer fatigue and the surprise collapse of the Soviet Union.  With the nation strapped for funds and its global enemy dissolved, there was no longer an urgent need to enlist America's colleges and universities in a grand national cause.  The result was a decade of declining research support and static student enrollments.  In 2002 the wars in Afghanistan and Iraq brought a momentary surge in both, but the surge peaked after only eight years and then gave way again to decline.  Increasingly, higher education is returning to its roots as a private good.

So what are we to take away from this story of the rise and fall of the Cold War university?  One conclusion is that the golden age of the American university in the mid-twentieth century was a one-off event.  Wars may be endemic, but the Cold War was unique.  So American university administrators and professors need to stop pining for a return to the good old days and learn how to live in the post-Cold-War era.  The good news is that the surge in public investment in higher education left the system in a radically stronger condition than it was in before World War II.  Enrollments have gone from 1.5 million to 21 million; federal research funding has gone from zero to $135 billion; federal grants and loans to college students have gone from zero to $170 billion (NCES, 2014, Table 303.10; AAAS, 2014; College Board, 2014, Table 1).  And the American system of colleges and universities went from an international also-ran to a powerhouse in the world economy of higher education.  Even though all of these numbers are now dropping, they are dropping from a very high level, which is the legacy of the Cold War.  So really, we should stop whining.  We should just say thanks to the bomb for all that it did for us and move on.

            The bad news, of course, is that the numbers really are going down.  Government funding for research is declining and there is no prospect for a turnaround in the foreseeable future.  This is a problem because the federal government is the primary source of funds for basic research in the U.S.; corporations are only interested in investing in research that yields immediate dividends.  During the Cold War, research universities developed a business plan that depended heavily on external research funds to support faculty, graduate students, and overhead.  That model is now broken.  The cost of pursuing a college education is increasingly being borne by the students themselves, as states are paying a declining share of the costs of higher education.  Tuition is rising and as a result student loans are rising.  Public research universities are in a particularly difficult position because their state funding is falling most rapidly.  According to one estimate, at the current rate of decline the average state fiscal support for public higher education will reach zero in 2059 (Mortenson, 2012). 

But in the midst of all of this bad news, we need to keep in mind that the American system of higher education has a long history of surviving and even thriving under conditions of at best modest public funding.  At its heart, this is a system of higher education based not on the state but on the market.  In the hardscrabble nineteenth century, the system developed mechanisms for getting by without steady support from church or state.  It learned how to attract tuition-paying students, give them the college experience they wanted, get them to identify closely with the institution, and then milk them for donations after they graduated.  Football, fraternities, logo-bearing T-shirts, and fund-raising operations all paid off handsomely.  It learned how to adapt quickly to trends in the competitive environment, whether by adopting intercollegiate football, establishing research centers to capitalize on funding opportunities, or providing students with food courts and rock-climbing walls.  Public institutions have a long history of behaving much like private institutions because they were never able to count on continuing state funding.

This system has worked well over the years.  Along with the Cold War, it has enabled American higher education to achieve an admirable global status.  By the measures of citations, wealth, drawing power, and Nobel prizes, the system has been very effective.  But it comes with enormous costs.  Private universities have serious advantages over public universities, as we can see from university rankings.  The system is the most stratified structure of higher education in the world.  Top universities in the U.S. get an unacknowledged subsidy from the colleges at the bottom of the hierarchy, which receive less public funding, charge less tuition, and receive less generous donations.  And students sort themselves into positions in the college hierarchy that parallel their positions in the status hierarchy.  Students with more cultural and economic capital gain greater social benefit from the system than those with less, since they go to college more often, attend the best institutions, and graduate at a much higher rate.  Nearly everyone can go to college in the U.S., but the colleges that are most accessible provide the least social advantage.

So, conceived and nurtured into maturity as a private good, the American system of higher education remains a market-based organism.  It took the threat of nuclear war to turn it – briefly – into a public good.  But those days now seem as remote as the time when schoolchildren huddled together in a bomb shelter.

References

American Association for the Advancement of Science. (2014). Historical Trends in Federal R & D: By Function, Defense and Nondefense R & D, 1953-2015.  http://www.aaas.org/page/historical-trends-federal-rd (accessed 8-21-14).

Bledstein, B. J. (1976). The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York:  W. W. Norton.

Boorstin, D. J. (1965). Culture with Many Capitals: The Booster College. In The Americans: The National Experience (pp. 152-161). New York: Knopf Doubleday.

Brown, D. K. (1995). Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism. New York: Teachers College Press.

Carter, S. B., et al. (2006). Historical Statistics of the United States, Millennial Edition Online. New York: Cambridge University Press.

College Board. (2013). Trends in Student Aid, 2013. New York: The College Board.

College Board. (2014). Trends in Higher Education: Total Federal and Nonfederal Loans over Time.  https://trends.collegeboard.org/student-aid/figures-tables/growth-federal-and-nonfederal-loans-over-time (accessed 9-4-14).

Collins, R. (1979). The Credential Society: An Historical Sociology of Education and Stratification. New York: Academic Press.

Douglass, J. A. (2000). The California Idea and American Higher Education: 1850 to the 1960 Master Plan. Stanford, CA: Stanford University Press.

Garrett, T. A., & Rhine, R. M. (2006).  On the Size and Growth of Government. Federal Reserve Bank of St. Louis Review, 88:1 (pp. 13-30).

Geiger, R. L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Goldin, C. & Katz, L. F. (2008). The Race between Education and Technology. Cambridge: Belknap Press of Harvard University Press.

Institute of Higher Education, Shanghai Jiao Tong University.  (2013).  Academic Ranking of World Universities – 2013.  http://www.shanghairanking.com/ARWU2013.html (accessed 6-11-14).

Levine, D. O. (1986). The American College and the Culture of Aspiration, 1914-1940. Ithaca: Cornell University Press.

Loss, C. P. (2011). Between Citizens and the State: The Politics of American Higher Education in the 20th Century. Princeton, NJ: Princeton University Press.

Martin, I. W. (2008). The Permanent Tax Revolt: How the Property Tax Transformed American Politics. Stanford, CA: Stanford University Press.

McPherson, M. S. & Schapiro, M. O.  (1999).  Reinforcing Stratification in American Higher Education:  Some Disturbing Trends.  Stanford: National Center for Postsecondary Improvement.

Mortenson, T. G. (2012).  State Funding: A Race to the Bottom.  The Presidency (winter).  http://www.acenet.edu/the-presidency/columns-and-features/Pages/state-funding-a-race-to-the-bottom.aspx (accessed 10-18-14).

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

National Science Board. (2012). Diminishing Funding Expectations: Trends and Challenges for Public Research Universities. Arlington, VA: National Science Foundation.

Potts, D. B. (1971).  American Colleges in the Nineteenth Century: From Localism to Denominationalism. History of Education Quarterly, 11: 4 (pp. 363-380).

President's Commission on Higher Education. (1947). Higher Education for American Democracy: A Report. Washington, DC: US Government Printing Office.

Rüegg, W. (2004). European Universities and Similar Institutions in Existence between 1812 and the End of 1944: A Chronological List: Universities. In W. Rüegg (Ed.), A History of the University in Europe, vol. 3. Cambridge: Cambridge University Press.

State Higher Education Executive Officers (SHEEO). (2013). State Higher Education Finance, FY 2012. www.sheeo.org/sites/default/files/publications/SHEF-FY12.pdf (accessed 9-8-14).

Terkel, S. (1997). The Good War: An Oral History of World War II. New York: New Press.

Tewksbury, D. G. (1932). The Founding of American Colleges and Universities before the Civil War. New York: Teachers College Press.

UC Data Analysis. (2014). UC Funding and Fees Analysis.  http://ucpay.globl.org/funding_vs_fees.php (accessed 9-2-14).

University of Virginia (2014). Financing the University 101. http://www.virginia.edu/finance101/answers.html (accessed 9-2-14).

[1] Under pressure of the war effort, the department eventually relented and enlisted the help of chemists to study gas warfare.  But the initial response is telling.

[2] Not all of this funding went into the higher education system.  Some went to stand-alone research organizations such as the Rand Corporation and the American Institutes for Research.  But these organizations in many ways function as an adjunct to higher education, with researchers moving freely between them and the university.

Posted in Academic writing, Course Syllabus, Writing Class

Class on Academic Writing

This is the syllabus for a class on academic writing for clarity and grace, which I originally posted more than a year ago.  It is designed as a 10-week class, with weekly readings, slides, and texts for editing.  It's aimed at doctoral students who are preparing to become researchers and to publish their scholarship.  Ideally you can take the class with a group of peers, where you give each other feedback on your writing projects in progress.  But you can also take the class by yourself.

Below is the syllabus, which includes links to all readings, class slides, and texts for editing.  Here’s a link to the Word document with all of the links, which is easier to work with.

I’ve also constructed a 6-week version of the class, which is aimed at graduate and undergraduate students who want to work on their writing for whatever purpose they choose.  Here’s a link to that syllabus as a Word document.

 

“The effort the writer does not put into writing, the reader has to put into reading.”

Stephen Toulmin

Academic Writing for Clarity and Grace

A Ten-Week Class

David Labaree                            

Web: http://www.stanford.edu/~dlabaree/

Twitter: @Dlabaree

Blog: https://davidlabaree.com/                                                     

Course Description

The title sounds like a joke, since academics (especially in the social sciences) do not have a reputation for writing with either clarity or grace, much less both.  But I hope in this class to draw students into my own and every other academic's lifelong quest to become a better writer.  The course will bring in a wide range of reference works that I have found useful over the years in working on my own writing and in helping students with theirs.  The idea is not that a 10-week class will make students good writers; many of us have been working at this for 40 years or more and we're just getting started.  Instead, the plan is to provide students with some helpful strategies, habits, and critical faculties; increase their sense of writing as an extended process of revision; and leave them with a set of books that will support them in their own lifelong pursuit of good writing.

This online course is based on one I used to teach at Stanford for graduate students in education who wanted to work on their writing.  It was offered in the ten-week format of the university’s quarter system, and I’m keeping that format.  But you can use it in any way that works for you. 

Some may want to treat it as a weekly class, doing the readings for each week, reviewing the PowerPoint slides for that week, and working through some of the exercises.  If you're treating it this way, it would work best if you can do it with a writing group made up of other students with similar interests.  That way you can take advantage of the workshop component of the class, in which members of the group exchange sections of a paper they're working on, giving and receiving feedback.

Others may use it as a general source of information about writing, diving into particular readings or slide decks as needed.

Classes include some instruction on particular skills and particular aspects of the writing process:  developing an analytical angle on a subject; writing a good sentence; getting started in the writing process; working out the logic of the argument; developing the forms of validation for the argument; learning what your point is from the process of writing rather than as a precursor to writing; and revising, revising, revising.  We spend another part of the class working as a group doing exercises in spotting and fixing problems.  For these purposes we will use some helpful examples from the Williams book and elsewhere that focus on particular skills, but you can use the work produced within your own writing group. 

Work in your writing group:  Everyone needs to recognize the value of getting critical feedback from others on their work in progress, so you should be exchanging papers and working at editing each other's drafts.  Student work outside of class will include reading required texts, editing other students' work around particular areas of concern, and working on revising your own paper or papers.  Every week you will submit a piece of written work to your writing group, which will involve repeated efforts to edit a particular text of your own; and every week you will provide feedback to others in your group about their own texts.

Much of class time will focus on working on particular texts around a key issue of the day – like framing, wordiness, clarity, sentence rhythm.  These texts will be examples from the readings and also papers by students, on which they would like to get feedback from the class as a whole.  Topics will include things like:

  • Framing an argument, writing the introduction to a paper
  • Elements of rhetoric
  • Sentence rhythm and music
  • Emphasis – putting the key element at the end of the sentence and paragraph; delivering the punch line
  • Concision – eliminating wordiness
  • Clarity – avoiding nominalizations; opting for Anglo-Saxon words; clearing up murky syntax
  • Focusing on action and actors
  • Metaphor and imagery
  • Correct usage: punctuation, common grammatical errors, word use
  • Avoiding the most common academic tics: jargon, isms, Latinate constructions, nominalizations, abstraction, hiding from view behind passive voice and third person
  • The basics of making an argument
  • Using quotes – integrating them into your argument, and commenting on them instead of assuming they make the point on their own.
  • Using data – how to integrate data into a text and explain its meaning and significance
  • The relation of writing and thought
  • Revision – of writing and thinking
  • The relation of grammar and mechanics to rhetorical effect
  • Sentence style
  • The relation of style to audience
  • Disciplinary conventions for style, organization, modes of argument, evidence
  • Authority and voice

Writing is a very personal process and the things we write are expressions of who we are, so it is important for everyone in the class to keep focused on being constructive in their comments and being tolerant of criticism from others.  Criticism from others is very important for writers, but no one likes it.  I have a ritual every time I get feedback on a paper or manuscript – whether blind reviews from journals or publishers or personal comments from colleagues.  I let the review sit for a while until I'm in the right mood.  Then I open it and skim it quickly to get an overall impression of how positive or negative it is.  At that point I set it aside, cursing the editors for sending the paper to such an incompetent reviewer or reconsidering my formerly high opinion of the particular colleague-critic, and finally come back a few days later (after a vodka or two) to read the thing carefully and assess the damage.  Neurotic, I know, but most writers are neurotic about their craft.  It's hard not to take criticism personally.  Beyond all reason, I always expect the reviewers to say, "Don't change a word; publish it immediately!"  But somehow they never do.  So I'm asking all members of the class both to recognize the vulnerability of their fellow writers and to open themselves up to the criticism of these colleagues in the craft.

Course Texts

Books listed with an * are ones where older editions are available; it’s ok to use one of these editions instead of the most recent version.

*Williams, Joseph M. & Bizup, Joseph.  (2016). Style: Lessons in clarity and grace (12th ed.).  New York: Longman.  

*Becker, Howard S.  (2007).  Writing for social scientists:  How to start and finish your thesis, book, or article (2nd ed.).  Chicago: University of Chicago Press.

*Graff, Gerald, & Birkenstein, Cathy. (2014). “They say, I say:” The moves that matter in academic writing (3rd ed.). New York: Norton.

Sword, Helen.  (2012).  Stylish academic writing. Cambridge: Harvard University Press.

*Garner, Bryan A.  (2016). Garner’s modern English usage (4th ed.). New York: Oxford University Press.  (Any earlier edition is fine to use.)

Other required readings are available in PDF on a Google drive. 

Course Outline

Week 1:  Introduction to Course; Writing Rituals; Writing Well, or at Least Less Badly

Zinsser, William. (2010). Writing English as a second language.  Point of Departure (Winter). Americanscholar.org.

Munger, Michael C. (2010). 10 tips for how to write less badly. Chronicle of Higher Education (Sept. 6).  Chronicle.com.

Lepore, Jill. (2009). How to write a paper for this class. History Department, Harvard University.

Lamott, Anne. (2005). Bird by bird: Some instructions on writing and life. In English 111 Reader.  Miami University Department of English.

Zuckerman, Ezra W. (2008). Tips to article writers. http://web.mit.edu/ewzucker/www/Tips%20to%20article%20writers.pdf.

Slides for week 1 class

Week 2:  Clarity

Williams, Joseph M. & Bizup, Joseph.  (2016).  Style: Lessons in clarity and grace (12th ed.).  New York: Longman. Lessons One, Two, Three, Four, Five, and Six.  It’s ok to use any earlier edition of this book.

Slides for week 2 class

Week 3:  Structuring the Argument in a Paper

Graff, Gerald, & Birkenstein, Cathy. (2014). “They say, I say:” The moves that matter in academic writing (3rd ed.). New York: Norton.  You can use any earlier edition of this book.

Wroe, Ann. (2011). In the beginning was the sound. Intelligent Life Magazine, Spring. http://moreintelligentlife.com/content/arts/ann-wroe/beginning-was-sound.

Slides for week 3 class

Week 4:  Grace

Williams, Joseph M. & Bizup, Joseph.  (2016).  Style: Lessons in clarity and grace (12th ed.).  New York: Longman. Lessons Seven, Eight, and Nine.

Orwell, George. (1946). Politics and the English Language. Horizon.

Lipton, Peter. (2007). Writing Philosophy.

Slides for week 4 class

Week 5:  Stylish Academic Writing

Sword, Helen.  (2012).  Stylish academic writing. Cambridge: Harvard University Press.

Check out Helen Sword’s website, Writer’s Diet, which allows you to paste in a text of your own and get back an analysis of how flabby or fit it is: http://www.writersdiet.com/WT.php.

Haslett, Adam. (2011). The art of good writing. Financial Times (Jan. 22).  Ft.com.

Slides for week 5 class

Week 6:  Writing in the Social Sciences

Becker, Howard S.  (2007).  Writing for social scientists:  How to start and finish your thesis, book, or article (2nd ed.).  Chicago: University of Chicago Press.  It’s fine to use any earlier edition of this book.

Slides for week 6 class

Week 7:  Usage

Garner, Bryan A.  (2016). Garner’s modern English usage (4th ed.). New York: Oxford University Press.  Selections.  Any earlier edition of this book is fine to use.

Wallace, David Foster. (2001). Tense present: Democracy, English, and the wars over usage. Harper's (April), 39-58.

Slides for week 7 class

Week 8:  Writing with Clarity and Grace

Limerick, Patricia. (1993). Dancing with professors: The trouble with academic prose.

Brauer, Scott. (2014). Writing instructor, skeptical of automated grading, pits machine vs. machine. Chronicle of Higher Education, April 28.

Pinker, Steven. (2014). Why academics stink at writing. Chronicle of Higher Education, Sept. 26.

Labaree, David F. (2018). The Five-Paragraph Fetish. Aeon.

Slides for week 8 class

Week 9:  Clarity of Form

Williams, Joseph M. & Bizup, Joseph.  (2016).  Style: Lessons in clarity and grace (12th ed.).  New York: Longman. Lessons Ten, Eleven, and Twelve.

Yagoda, Ben. (2011). The elements of clunk. Chronicle of Higher Education (Jan. 2).  Chronicle.com.

Slides for week 9 class

Week 10:  Writing with Clarity and Grace

March, James G. (1975). Education and the pursuit of optimism. Texas Tech Journal of Education, 2:1, 5-17.

Gladwell, Malcolm. (2000). The art of failure: Why some people choke and others panic. New Yorker (Aug. 21 and 28).  Gladwell.com.

Labaree, David F. (2012). Sermon on educational research. Bildungsgeschichte: International Journal for the Historiography of Education, 2:1, 78-87.

Slides for week 10 class

Posted in Capitalism, Higher Education, Meritocracy, Politics

Sandel: The Tyranny of Merit

This post is a reflection on Michael Sandel's new book, The Tyranny of Merit: What's Become of the Common Good?  He's a philosopher at Harvard, and this is his analysis of the dangers posed by the American meritocracy.  The issue is one I've been exploring here for the last two years in a variety of posts (here, here, here, here, here, here, and here).

I find Sandel’s analysis compelling, both in the ways it resonates with other takes on the subject and also in his distinctive contributions to the discussion.  My only complaint is that the whole discussion could have been carried out more effectively in a single magazine article.  The book tends to be repetitive, and it also gets into the weeds on some philosophical issues that blur its focus and undercut its impact.  Here I present what I think are the key points.  I hope you find it useful.

Sandel Cover

Both the good news and the bad news about meritocracy lie in its promise of opportunity for all based on individual merit rather than the luck of birth.  It's hard to hate a principle that frees us from the tyranny of inheritance.

The meritocratic ideal places great weight on the notion of personal responsibility. Holding people responsible for what they do is a good thing, up to a point. It respects their capacity to think and act for themselves, as moral agents and as citizens. But it is one thing to hold people responsible for acting morally; it is something else to assume that we are, each of us, wholly responsible for our lot in life.

The problem is that simply calling the new model of status attainment “achievement” rather than “ascription” doesn’t mean that your ability to get ahead is truly free of circumstances beyond your control.  

But the rhetoric of rising now rings hollow. In today’s economy, it is not easy to rise. Americans born to poor parents tend to stay poor as adults. Of those born in the bottom fifth of the income scale, only about one in twenty will make it to the top fifth; most will not even rise to the middle class. It is easier to rise from poverty in Canada or Germany, Denmark, and other European countries than it is in the United States.

The meritocratic faith argues that the social structure of inequality provides a powerful incentive for individuals to work hard to get ahead, escaping a bad situation and moving on to something better.  The more inequality a society has, as in the US, the greater the incentive to move up.  The reality, however, is quite different.

But today, the countries with the highest mobility tend to be those with the greatest equality. The ability to rise, it seems, depends less on the spur of poverty than on access to education, health care, and other resources that equip people to succeed in the world of work.

Sandel goes on to point out additional problems with meritocracy beyond the difficulties in trying to get ahead all on your own: 1) demoralizing the losers in the race; 2) denigrating those without a college degree; and 3) turning politics into the realm of the expert rather than the citizen.

The tyranny of merit arises from more than the rhetoric of rising. It consists in a cluster of attitudes and circumstances that, taken together, have made meritocracy toxic. First, under conditions of rampant inequality and stalled mobility, reiterating the message that we are responsible for our fate and deserve what we get erodes solidarity and demoralizes those left behind by globalization. Second, insisting that a college degree is the primary route to a respectable job and a decent life creates a credentialist prejudice that undermines the dignity of work and demeans those who have not been to college; and third, insisting that social and political problems are best solved by highly educated, value-neutral experts is a technocratic conceit that corrupts democracy and disempowers ordinary citizens.

Consider the first point. Meritocracy fosters triumphalism for the winners and despair for the losers.  Whether you succeed or fail, you alone get the credit or the blame.  This was not the case in the bad old days of aristocrats and peasants.

If, in a feudal society, you were born into serfdom, your life would be hard, but you would not be burdened by the thought that you were responsible for your subordinate position. Nor would you labor under the belief that the landlord for whom you toiled had achieved his position by being more capable and resourceful than you. You would know he was not more deserving than you, only luckier.

If, by contrast, you found yourself on the bottom rung of a meritocratic society, it would be difficult to resist the thought that your disadvantage was at least partly your own doing, a reflection of your failure to display sufficient talent and ambition to get ahead. A society that enables people to rise, and that celebrates rising, pronounces a harsh verdict on those who fail to do so.

This triumphalist aspect of meritocracy is a kind of providentialism without God, at least without a God who intervenes in human affairs. The successful make it on their own, but their success attests to their virtue. This way of thinking heightens the moral stakes of economic competition. It sanctifies the winners and denigrates the losers.

One key issue that makes meritocracy potentially toxic is its assumption that we deserve the talents that earn us such great rewards.

There are two reasons to question this assumption. First, my having this or that talent is not my doing but a matter of good luck, and I do not merit or deserve the benefits (or burdens) that derive from luck. Meritocrats acknowledge that I do not deserve the benefits that arise from being born into a wealthy family. So why should other forms of luck—such as having a particular talent—be any different? 

Second, that I live in a society that prizes the talents I happen to have is also not something for which I can claim credit. This too is a matter of good fortune. LeBron James makes tens of millions of dollars playing basketball, a hugely popular game. Beyond being blessed with prodigious athletic gifts, LeBron is lucky to live in a society that values and rewards them. It is not his doing that he lives today, when people love the game at which he excels, rather than in Renaissance Florence, when fresco painters, not basketball players, were in high demand.

The same can be said of those who excel in pursuits our society values less highly. The world champion arm wrestler may be as good at arm wrestling as LeBron is at basketball. It is not his fault that, except for a few pub patrons, no one is willing to pay to watch him pin an opponent’s arm to the table.

He then moves on to the second point, about the central role of college in determining who’s got merit. 

Should colleges and universities take on the role of sorting people based on talent to determine who gets ahead in life?

There are at least two reasons to doubt that they should. The first concerns the invidious judgments such sorting implies for those who get sorted out, and the damaging consequences for a shared civic life. The second concerns the injury the meritocratic struggle inflicts on those who get sorted in and the risk that the sorting mission becomes so all-consuming that it diverts colleges and universities from their educational mission. In short, turning higher education into a hyper-competitive sorting contest is unhealthy for democracy and education alike.

The difficulty of predicting which talents are most socially beneficial is particularly acute for the complex array of skills that people pick up in college.  Which ones matter most for determining a person's ability to make an important contribution to society and which don't?  How do we know if an elite college provides more of those skills than an open-access college?  This matters because a graduate of the former gets a much higher reward than one from the latter.  The notion that a prestigious college degree is the best way to measure future performance is particularly difficult to sustain because success and degree are conflated.  Graduates of top colleges get the best jobs and thus seem to have the greatest impact, whereas non-grads never get the chance to show what they can do.

Another sports analogy helps to make this point.

Consider how difficult it is to assess even more narrowly defined talents and skills. Nolan Ryan, one of the greatest pitchers in the history of baseball, holds the all-time record for most strikeouts and was elected on the first ballot to baseball’s Hall of Fame. When he was eighteen years old, he was not signed until the twelfth round of the baseball draft; teams chose 294 other, seemingly more promising players before he was chosen. Tom Brady, one of the greatest quarterbacks in the history of football, was the 199th draft pick. If even so circumscribed a talent as the ability to throw a baseball or a football is hard to predict with much certainty, it is folly to think that the ability to have a broad and significant impact on society, or on some future field of endeavor, can be predicted well enough to justify fine-grained rankings of promising high school seniors.

And then there's the third point, the damage that meritocracy does to democratic politics.  One element of this is that it turns politics into an arena for credentialed experts, consigning ordinary citizens to the back seat.  How many political leaders today are without a college degree?  Vanishingly few.  Another is that meritocracy not only bars non-grads from power but also denies them social respect.

Grievances arising from disrespect are at the heart of the populist movement that has swept across Europe and the US.  Sandel calls this a “politics of humiliation.”

The politics of humiliation differs in this respect from the politics of injustice. Protest against injustice looks outward; it complains that the system is rigged, that the winners have cheated or manipulated their way to the top. Protest against humiliation is psychologically more freighted. It combines resentment of the winners with nagging self-doubt: perhaps the rich are rich because they are more deserving than the poor; maybe the losers are complicit in their misfortune after all.

This feature of the politics of humiliation makes it more combustible than other political sentiments. It is a potent ingredient in the volatile brew of anger and resentment that fuels populist protest.

Sandel draws on a wonderful book by Arlie Hochschild, Strangers in Their Own Land, in which she interviews Trump supporters in Louisiana.

Hochschild offered this sympathetic account of the predicament confronting her beleaguered working-class hosts:

You are a stranger in your own land. You do not recognize yourself in how others see you. It is a struggle to feel seen and honored. And to feel honored you have to feel—and feel seen as—moving forward. But through no fault of your own, and in ways that are hidden, you are slipping backward.

One consequence of this for those left behind is a rise in "deaths of despair."

The overall death rate for white men and women in middle age (ages 45–54) has not changed much over the past two decades. But mortality varies greatly by education. Since the 1990s, death rates for college graduates declined by 40 percent. For those without a college degree, they rose by 25 percent. Here then is another advantage of the well-credentialed. If you have a bachelor’s degree, your risk of dying in middle age is only one quarter of the risk facing those without a college diploma. 

Deaths of despair account for much of this difference. People with less education have long been at greater risk than those with college degrees of dying from alcohol, drugs, or suicide. But the diploma divide in death has become increasingly stark. By 2017, men without a bachelor’s degree were three times more likely than college graduates to die deaths of despair.

Sandel offers two reforms that might help mitigate the tyranny of meritocracy.  One focuses on elite college admissions.

Of the 40,000-plus applicants, winnow out those who are unlikely to flourish at Harvard or Stanford, those who are not qualified to perform well and to contribute to the education of their fellow students. This would leave the admissions committee with, say, 30,000 qualified contenders, or 25,000, or 20,000. Rather than engage in the exceedingly difficult and uncertain task of trying to predict who among them are the most surpassingly meritorious, choose the entering class by lottery. In other words, toss the folders of the qualified applicants down the stairs, pick up 2,000 of them, and leave it at that.

This helps get around two problems: the difficulty of trying to predict merit, and the outsize rewards of a winner-take-all admissions system.  But good luck trying to get this put in place over the howls of outrage from upper-middle-class parents, who have learned how to game the system to their advantage.  Consider this one small example of the reaction when an elite Alexandria high school proposed random admission from a pool of the most qualified.

Another reform is more radical and even harder to imagine putting into practice.  It begins with reconsideration of what we mean by the “common good.”

The contrast between consumer and producer identities points to two different ways of understanding the common good. One approach, familiar among economic policy makers, defines the common good as the sum of everyone’s preferences and interests. According to this account, we achieve the common good by maximizing consumer welfare, typically by maximizing economic growth. If the common good is simply a matter of satisfying consumer preferences, then market wages are a good measure of who has contributed what. Those who make the most money have presumably made the most valuable contribution to the common good, by producing the goods and services that consumers want.

A second approach rejects this consumerist notion of the common good in favor of what might be called a civic conception. According to the civic ideal, the common good is not simply about adding up preferences or maximizing consumer welfare. It is about reflecting critically on our preferences—ideally, elevating and improving them—so that we can live worthwhile and flourishing lives. This cannot be achieved through economic activity alone. It requires deliberating with our fellow citizens about how to bring about a just and good society, one that cultivates civic virtue and enables us to reason together about the purposes worthy of our political community.

If we can carry out this deliberation — a big if indeed — then we can proceed to implement a system for shifting the basis for individual compensation from what the market is willing to pay to what we collectively feel is most valuable to society.  

Thinking about pay, most would agree that what people make for this or that job often overstates or understates the true social value of the work they do. Only an ardent libertarian would insist that the wealthy casino magnate’s contribution to society is a thousand times more valuable than that of a pediatrician. The pandemic of 2020 prompted many to reflect, at least fleetingly, on the importance of the work performed by grocery store clerks, delivery workers, home care providers, and other essential but modestly paid workers. In a market society, however, it is hard to resist the tendency to confuse the money we make with the value of our contribution to the common good.

To implement a system based on public benefit rather than marketability would require completely revamping our structure of determining salaries and taxes. 

The idea is that the government would provide a supplementary payment for each hour worked by a low-wage employee, based on a target hourly-wage rate. The wage subsidy is, in a way, the opposite of a payroll tax. Rather than deduct a certain amount of each worker’s earnings, the government would contribute a certain amount, in hopes of enabling low-income workers to make a decent living even if they lack the skills to command a substantial market wage.

Generally speaking, this would mean shifting the tax burden from work to consumption and speculation. A radical way of doing so would be to lower or even eliminate payroll taxes and to raise revenue instead by taxing consumption, wealth, and financial transactions. A modest step in this direction would be to reduce the payroll tax (which makes work expensive for employers and employees alike) and make up the lost revenue with a financial transactions tax on high-frequency trading, which contributes little to the real economy.

This is how Sandel ends his book:

The meritocratic conviction that people deserve whatever riches the market bestows on their talents makes solidarity an almost impossible project. For why do the successful owe anything to the less-advantaged members of society? The answer to this question depends on recognizing that, for all our striving, we are not self-made and self-sufficient; finding ourselves in a society that prizes our talents is our good fortune, not our due. A lively sense of the contingency of our lot can inspire a certain humility: “There, but for the grace of God, or the accident of birth, or the mystery of fate, go I.” Such humility is the beginning of the way back from the harsh ethic of success that drives us apart. It points beyond the tyranny of merit toward a less rancorous, more generous public life.