Posted in Reading, Writing

Lydia Wilson — Reading, That Strange and Uniquely Human Thing

This essay is from the online science magazine, Nautilus.  Here’s a link to the original.  

Reading is a very recent development in world history (no more than 5,000 years old) and it’s distinctive to humans.  The original impulse to write things down seemed to come from accounting, maintaining a record of transactions, and then moved toward a more fluent form of general communication.

This essay explores the way we read and how the brain processes what we read.  There are two ways of representing words in print:  through pictures, as in Chinese characters and Egyptian hieroglyphics, and through alphabetical representations.  The one works through an image that represents an entire word and the other through a phonetic array of letters that represent how the word sounds when spoken.

In the US, this difference is the basis for the reading wars that have raged in educational circles for years between advocates for whole word vs. phonetic instruction — learning to recognize whole words vs. learning to sound out words alphabetically.  The research shows that the brain processes language similarly either way.

People raised in the alphabetical tradition use both approaches in interpreting texts, grasping the meaning of the most familiar words at a glance and sounding out others.  In the first approach, it’s possible to understand words even if the order of the interior letters is jumbled — which is one reason it’s so hard to proofread your own writing.  The second allows you to figure out the meaning of a configuration of letters that is not immediately recognizable.  One is way faster, but the other allows you to keep adding new words to your vocabulary.  So a skilled reader in an alphabetic system reads mostly by the word rather than the letter, much the way a reader in an ideographic language reads by the character.  The difference is that in the former system you can sound out new words, whereas in the latter system you have to memorize new characters.

Hope you find this as enlightening as I do.

Reading, That Strange and Uniquely Human Thing

How we evolved to read is a story of one creative species.

The Chinese artist Xu Bing has long experimented to stunning effect with the limits of the written form. Last year I visited the Centre del Carme in Valencia, Spain, to see a retrospective of his work. One installation, Book from the Sky, featured scrolls of paper looping down from the ceiling and lying along the floor of a large room, printed Chinese characters emerging into view as I moved closer to the reams of paper. But this was no ordinary Chinese text: Xu Bing had taken the form, even constituent parts, of real characters, to create around 4,000 entirely false versions. The result was a text which looked readable but had no meaning at all. As Xu Bing himself has noted, his made-up characters “seem to upset intellectuals,” in a sly sendup of our respect for the written word.

There was a long way to go from recording goods to writing great works of literature.

In another room was Book from the Ground, a slim volume, displayed in a room of Xu Bing’s inspiration: symbols and emojis, gathered from around the world and from different contexts, from an airport to a keyboard. Xu Bing scoured the world to find universal images and the result stands in stark contrast to Book from the Sky: This book was designed to be read by anyone. The first page was slightly awkward to read, translating the pictures to the (in my case, English) word. But as I turned the pages, the meaning emerged more fluently, and I was drawn into its story of a day in the life of an office worker. It was as if Xu Bing was asking me to wonder what was happening in my brain as these tiny pictures on the page transformed into meaning, a narrative. How was the process of reading pictorial symbols different from reading letters based on phonetic symbols?

Xu Bing was illustrating what recent studies in neuroscience have revealed: People everywhere read words made from pictures, such as Chinese characters (known as pictographs), and words made from letters, in a remarkably similar way. It’s an insight that opens a window on how writing developed and how we read—and how we might tap deeper wells of creativity and communication.

Humans in different places and times have felt impelled to overcome the limitations of pictures in communicating. Despite the pressing need to capture spoken language in this form, some societies never felt the demand. Until colonialism, aboriginal communities in Australia lived in societies governed by extremely complex laws that passed through the generations entirely through oral means. For tens of thousands of years, rules governing hunting, finding your way, marriage, and ceremony have been embedded in song and performed, learned, and taught in everyday life. There’s beautiful sacred rock art throughout the continent, and symbols used for specific identification, but neither developed into a written system to capture a whole language.

BOOKKEEPING: This example of ancient cuneiform writing dates back to the Bronze Age. Irving Finkel of the British Museum says “a kind of administrative responsibility produced the first stumbling attempt at writing and then eventually a proper fluent script.” Fedor Selivanov / Shutterstock

Some of the earliest writing—symbols for meaning rather than pictures alone—is from Mesopotamia, dated to around 3000 B.C.; clay tablets dug up in the archaeological site of Kunara, near the Zagros mountains in modern-day Iraqi Kurdistan. These tablets record quantities of goods in a form of bookkeeping—incoming and outgoing amounts of flour and grains. “The thing about human ingenuity is that when there’s a sharp need for something, it tends to crystallize in discovery,” says Irving Finkel, an assistant keeper of ancient Mesopotamian script, languages, and cultures in the British Museum. Necessity being the mother of invention, in other words. “It’s very probable that it was [a] kind of administrative responsibility which produced the first stumbling attempt at writing and then eventually a proper fluent script,” Finkel says.

Egyptologist Gunther Dreyer came to similar conclusions during a lifetime of excavating Ancient Egypt, discovering artefacts crucial to our understanding of the development of writing. “Why is there a need to write something down? I think the reason for that is simple,” Dreyer says. “Those are the requirements of accounting.” Dreyer points out that ruling back then, as today, involved “collecting taxes and redistributing. And in a big area, you somehow needed to note down who delivered what when.” Indigenous Australians feeding themselves and their communities through a hunter-gatherer lifestyle, bartering goods with other communities, didn’t need to record such trade, either for a third party far away (such as a tax office) or for posterity.

No writing system goes back much further than 5,000 years, a blink of an eye in evolutionary terms.

But there was still a long way to go from recording goods and quantities to writing great works of literature. Humans all over the world faced the same problems in expressing themselves beyond the here-and-now that speech has covered. It turns out that every ancient writing system solved these problems in exactly the same way. “We like to call it the giant leap for mankind,” Finkel says. The leap is from using a picture as a picture (a logogram) to using it to portray a sound (or phonogram)—the Rebus Principle. Many children play a game using this principle, when they discover that a bee can be used for the sound “be,” and combined with a drawing of a leaf, these two unrelated objects can suddenly produce a meaning—belief.

But then ambiguity arises: When is a bee a bee, and when is it a sound? Cuneiform, Egyptian, and Mayan hieroglyphs and Chinese all solved the problem in the same way: They added unspoken elements now known as “classifiers” to clear up whether the writer is talking about keeping bees or simply using “be” as a sound. Chinese still uses this system, with picture, phonetic, and classifier elements all crucial to its written system. But in other places a different system took over: the alphabet, invented around 4,000 years ago in the Sinai Peninsula. Stripped of anything but sound, this handful of symbols can be learned quickly, unlike the thousands of Chinese characters that must be mastered for literacy. After a few centuries of remaining at the margins, the alphabet from the Sinai swept through Europe and much of Asia and Africa, changing into the dizzying range we have today.

No writing system goes back much further than 5,000 years, a mere blink of an eye in evolutionary terms. “Relative to speech [reading is] very young,” says Tae Twomey of University College London, who has spent her career looking into this new trick of Homo sapiens. “The part of the brain that deals with reading had to evolve somehow from the brain that we used before writing was invented.” And it wasn’t just one part that was recruited. “If you think about it, it’s a complex task. You are extracting visual information in order, ultimately, to get to a meaning.” Once I do start to think about this process—a process I can’t remember not being able to do—it starts to seem extremely alien: Thoughts, ideas, instructions, information are being transferred from one human brain into mine, via my optic nerve. But the visual element is only part of the story.

Twomey’s research uses scans to show the different areas of the brain that are active when we read. “It’s a distributed network,” she explains. Neurologist Thomas Hope, a senior research associate at University College London, offers an analogy. “Like most cognitive behavior, we think reading works like the Nile Delta.” It’s not fed by one stream, he says, “but a bunch of potentially redundant streams.”

WRITE ON: Writing arose at different places around the globe, diverging from pictographic to alphabetic symbols. But when we look deep inside the brain, it turns out people everywhere write and read in remarkably similar ways. Wikimedia

For reading, there are two large tributaries, broadly correlated with sound and vision. (The third major area working on the task is Broca’s area, in charge of executive function, which acts as the conductor, orchestrating all the inputs.) Beginning readers sound out each letter to get to the meaning. “Reading is not just to communicate meaning, but also to communicate generally,” Hope says. “And the most common way that we communicate is by speaking. So when you read a word, some part of your brain is sounding out what that word would sound like if you were saying it or if someone was saying it to you.” And that act of speech communication is the same across cultures, whatever the written form of the language, so most readers will be hearing as they read.

But sound isn’t all. “I’ve been watching my children learn to read,” Hope says. “You can’t learn to read just by learning the letters. You have to learn to understand and recognize the words, too.” Readers in an alphabetic system have to learn the equivalent of characters: Learning the shape of a word is basically the same job as extracting the meaning from a pictographic character. But once we get more fluent at reading we tend to use a different tributary more. “Another way, that most skilled readers prefer, is to recognize the whole word as a single entity and connect it directly to meaning,” Hope says.

The so-called “Cambridge letter,” a meme in 2003, gives a proficient reader a chance to test this latter mode of reading, through shape recognition rather than sounding out the letters:

Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

Most people can extract the meaning from this quotation without too much problem, which seems to prove its point: You can read using a general impression of the word rather than relying on the sound. But as Hope tells us, the history of research is a history of overturning simple explanations to find more interesting, if complex, stories below. In fact, jumbling the letters of a word does matter, for some words more than others, and in some sentences more than others.

Matt Davis, at the University of Cambridge (where this research did not take place; the first mistake of the meme), put together a handy blog post on faulty thinking about the letter. First is the fact that two or three letter words do not change at all: the second sentence leaves “the,” “can,” “be,” “a,” “and,” “you,” “can,” and “it” unchanged, giving our brains a lot of easy information to go on. Another feature of the meme is that no word has been misspelled in such a way that it spells a different word—Davis uses the example “salt” and “slat” as a problem that’s been avoided—and further, each jumbling has put letters close to where they originally were: “Cmabrigde” might be recognizable (especially when followed by “Uinervtisy”) but is far harder when written “Cgbaimrde.” Finally, the examples chosen all retain the right sounds of the original words they’re scrambling; the “th” in “without” is preserved in the way they’ve scrambled the letters. And that’s because sound, as it turns out, does matter.
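To make the first of those constraints concrete, here is a minimal Python sketch (my own illustration, not from Davis’s post or the original meme) that scrambles only the interior letters of each word, leaving the first and last letters, and any word of three letters or fewer, untouched. It deliberately ignores the subtler constraints Davis describes, such as keeping letters near their original positions, avoiding real-word collisions, and preserving key sounds.

import random

def scramble(word):
    # Keep the first and last letters fixed; shuffle only the interior letters.
    if len(word) <= 3:
        return word  # two- and three-letter words cannot change at all
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

sentence = "According to a researcher at Cambridge University"
print(" ".join(scramble(w) for w in sentence.split()))

Run a few times, the output stays mostly readable, which is the effect the meme exploits; producing text as easy to read as the original passage, though, also takes the hand-tuning Davis points out.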

In a recent experiment, Twomey scanned individuals’ brains as they read. Her experiment was based on her own roots of learning to read in Japan. Every Japanese child learns two systems of writing, the kanji system, based on Chinese characters, and the kana system, which is purely phonetic (though the units are syllables rather than the individual sounds of alphabet systems). This dual approach lasts their entire lives, with all books written in both systems (except for children’s books, for learning purposes). This means that you can test the difference in reading different scripts without worrying about controlling for reading proficiency or language differences. The working assumption of many scholars was that the brain scans of people reading pictographic scripts would show an emphasis on the visual part of the brain, the meaning extracted by recognizing the character, in contrast to those reading a phonetic script, who would be using the sound of the letters to arrive at meaning. What Twomey’s scans showed was that the same areas were activated when reading both types of script.

It was a giant leap for mankind—using a picture to portray a sound.

The experiment compared reading strategies within one individual who had learned to read in two systems. Twomey conducted other research that compared reading strategies between individuals, scanning the brains of people reading in Chinese and English. The differences between readers in this experiment weren’t straightforward to understand. “At first we thought the difference we saw in the brains was due to the difference in scripts they were reading,” Twomey says. “But when we looked at dyslexic readers, they were using both areas, regardless of what script they were reading, which suggests that it has nothing to do with script itself.”

Twomey interprets this surprising finding as evidence of a difference in reading strategies that result from how we learn to read. English readers are taught with a phonics system, using rhymes and other sound-based exercises; Chinese is taught through writing, and associating the character on the page with the meaning directly. Twomey says that dyslexic readers, in their struggle to learn to read, are calling on more of the tributaries in the brain to overcome their difficulties with whichever script they are being taught. This showed up in their brain scans: The pathways used to extract meaning were the same for dyslexic readers whether they were reading pictographic Chinese or the phonetic alphabet. There were no differences between reading picture-based and sound-based words for the brain, just differences in how we’ve been trained to do the job.

Hope, who has read and admired Twomey’s research, offers a summary. “The key point is we’re all of us using both of these pathways all the time. You and I might differ slightly in our preferences for them, but we’re still using them both.” This 5,000-year-old technology of humans, which arose at different places around the globe, first used similar systems combining phonetic, pictographic, and classifier elements; a divergence came with the invention of the alphabet, which itself proliferated into such differing forms as Cyrillic, Arabic, Armenian, Tibetan, and Hindi—to name a few. But when we look deep inside the brain, it turns out that we are all doing this strange activity in similar ways.

What this says about teaching is yet to be fully explored, but Twomey’s research suggests that our teaching systems aren’t penetrating the depths of our reading brains. Of course we learn to extract meaning from squiggles on a page, or you wouldn’t be reading this. But we could be taught to use more of the tributaries involved, as dyslexic readers seem to be doing to compensate for their difficulties. If non-dyslexic readers of phonetic scripts, which are usually taught initially through sound-based learning, were also encouraged to learn the word shapes from the start; if those learning pictographic characters chanted them out loud as well as copying them out to memorize them; who knows what new creativity would be unleashed? As we learn more about the mysterious tributaries activated in reading, perhaps there are more teaching strategies to be discovered, helping those who do not find it a natural activity, or for those around the world who miss out on early education.

As I left the Centre del Carme, I saw Xu Bing standing at the exit and asked him to sign my copy of Book from the Ground. He smiled and asked me to write the letters of my first name on a piece of paper before crafting them anew— not in a linear line, as an alphabetic system requires, but in a block, producing the effect of a Chinese character, another trick he has devised to disrupt our experience of reading, which he calls “square brush calligraphy.” He signed his own name in an emoji: round-lensed spectacles. He also included two Chinese characters, though I couldn’t tell whether they were from a Chinese dictionary or Book from the Sky, which he no doubt would have been pleased to know.

To me, the experiences of reading Xu Bing’s various scripts feel vastly different. That’s because I learned to read an alphabetic script and continue to read alphabetically all the time. Perhaps one day children brought up on emojis will learn to read a combination of pictures and letters just as fluently, returning us to the age of Egyptian, Cuneiform, or Mayan systems, where sound and pictures mixed to produce meaning together. Xu Bing reminds us that the way we read is not hard-wired into our brains but can be learned and re-learned. The way we write in the future may take on entirely new, now-unimaginable forms. The artist is now echoed by scientists, who offer one more piece of evidence to explain the success of our species: The superpower of our brain lies with its extraordinary ability to adapt to situations and challenges, bestowing advantages far more quickly than anything evolution can offer.

Lydia Wilson is a researcher at the University of Cambridge’s Computer Laboratory, and a visiting scholar at the Ralph Bunche Institute at CUNY’s Graduate Centre. She recently presented the BBC’s series A Secret History of Writing, and edits the Cambridge Literary Review.

Lead image: Rawpixel.com / Shutterstock

Posted in Empire, History, Meritocracy, Social status

Craig Brown – Ninety-Nine Glimpses of Princess Margaret

Here’s a challenge to any writer.  How do you write a book about someone famous who never did anything?  Craig Brown found an answer with his book, Ninety-Nine Glimpses of Princess Margaret.

Princess Margaret

In this book, he provides not a biography but a set of impressions of Queen Elizabeth’s younger sister as they were recounted by the people around her.  It’s as if she only existed in her reflection.  And he lays out these impressions in a series of 99 brief but poisonously pleasurable chapters.  The result is a feast for the reader and a model for writers of how to make something out of nothing.

Another thing I like about this book is that it undercuts some of my own critique of the meritocracy, which I frequently belabor in this blog.  Nothing like looking at minor royals to make meritocracy look pretty good.  At least in a meritocracy people do something to gain their renown.

Brown says he came upon the idea for this book while researching another one, when he kept finding Princess Margaret listed in a vast array of books about the UK in the late twentieth century.  

It is like playing ‘Where’s Wally?’, or staring at clouds in search of a face. Leave it long enough, and she’ll be there, rubbing shoulders with philosophers, film stars, novelists, politicians.

I spy with my little eye, something beginning with M!

Here she is, sitting above Marie Antoinette in Margaret Drabble’s biography of Angus Wilson:

Maraini, Dacia
Marchant, Bill (Sir Herbert)
Maresfield Park
Margaret, Princess
Marie Antoinette
Market Harborough

The reflections she left in these sources are anything but pretty.  As Brown puts it,

It has been said that history is written by the victors, but, on the most basic level, this is not quite true: it is written by the writers.

Princess Margaret had the misfortune to be surrounded by catty people who were eager to leave a written record of their encounters with her — for consumption by people like me who love to read gossipy accounts about the one percent.

In part these accounts serve as a welcome counterpoint to the typical syrupy stories promoted by the royal family, for example,

The Queen Mother:

Along with radiance, she emitted delight. Her authorised biographer, William Shawcross, chronicles this trail of delight. Wherever she goes, she delights everyone, and they are in turn delighted by her delight, whereupon she is delighted that they are delighted that she is delighted that … and so forth. If you shut his book too abruptly, you’ll notice delight oozing out of its sides.

But from the age of twenty-five, Princess Margaret was rarely described as ‘radiant’, other than on her wedding day, traditionally an occasion on which the adjective is obligatory, to be withheld only if the bride is actually hauled sobbing to the altar.

Most of the stories follow another arc: the Princess arrives late, delaying dinner to catch up with her punishing schedule of drinking and smoking. At the table, she grows more and more relaxed; by midnight, it dawns on the assembled company that she is in it for the long haul, which means that they will be too, since protocol dictates that no one can leave before she does. Then, just as everyone else is growing more chatty and carefree, the Princess abruptly remounts her high horse and upbraids a hapless guest for over-familiarity: ‘When you say my sister, I imagine you are referring to Her Majesty the Queen?’

At times, the reader feels sorry for the princess serving as everyone’s favorite punching bag.  As a royal, your status is purely at the mercy of birth order, establishing your position in the line for the crown.

How odd, to emerge from the womb fourth in line, to go up a notch at the age of six, up another notch that same year, and then to find yourself hurtling down, down, down to fourth place at the birth of Prince Charles in 1948, fifth at the birth of Princess Anne in 1950, then downhill all the way, overtaken by a non-stop stream of riff-raff – Prince Andrew and Prince Edward and Peter Phillips and Princess Beatrice and the rest of them, down, down, down, until by the time of your death you have plummeted to number eleven, behind Zara Phillips, later to become Zara Tindall, mother of Mia Tindall, who, if you were still alive, would herself be one ahead of you, even when she was still in nappies. Not many women have to face the fact that their careers peaked at the age of six, or to live with the prospect of losing their place in the pecking order to a succession of newborn babies, and to face demotion every few years thereafter. Small wonder, then, if Princess Margaret felt short-changed by life.

Her life was defined by deficit.

She remained conscious of her image as the one who wasn’t, and to some extent played on it: the one who wasn’t the Queen; the one who wasn’t taught constitutional history because she wasn’t the one who’d be needing it; the one who wasn’t in the first coach, and wouldn’t ever be first onto the Buckingham Palace balcony; the one who wasn’t given the important duties, but was obliged to make do with the also-rans: the naming of the more out-of-the-way council building, school, hospital or regiment, the state visit to the duller country, the patronage of the more obscure charity, the glad-handing of the smaller fry – the deputies, the vices, the second-in-commands. Her most devoted friends praised her stoicism for assuming the role of lightning rod. ‘For nearly five decades,’ said Reinaldo Herrera, ‘she bore with great dignity the criticism and envy that people dared not show the Queen.’

But sympathy for her situation is hard to sustain for very long, when she spends so much of her time putting other people down.

Her antennae for transgressions were unusually sensitive, quivering into action at the slightest opportunity. ‘I detested Queen Mary,’ she told Gore Vidal. ‘She was rude to all of us except Lilibet, who was going to be Queen. Of course, she had an inferiority complex. We were Royal, and she was not.’ Unlike her, Queen Mary had been born a Serene Highness, not a Royal Highness. The difference, invisible to most, was monumental to Princess Margaret, who treasured the definite article in Her Royal Highness the Princess Margaret. Lacking that ‘the’, her grandmother was in some sense below the salt.

Far more than her sister, she was given to pulling rank. She once reminded her children that she was royal and they were not, and their father was most certainly not. ‘I am unique,’ she would sometimes pipe up at dinner parties. ‘I am the daughter of a King and the sister of a Queen.’ It was no ice-breaker.

Margaret had been born to the King-Emperor at a time when the map of the world was still largely pink. Her sense of entitlement, never modest, grew bigger and bigger with each passing year, gathering weight and speed as the British Empire grew smaller and smaller, and her role in it smaller still.

As a result, she played her role as an awkward mix of princess and bohemian, leaving those around her on edge about whether she was going to go high or go low.

She was of royalty, yet divorced from it; royalty set at an oblique angle, royalty through the looking glass, royalty as pastiche.

She was cabaret camp, Ma’am Ca’amp: she was Noël Coward, cigarette holders, blusher, Jean Cocteau, winking, sighing, dark glasses, Bet Lynch, charades, Watteau, colourful cocktails at midday, ballet, silk, hoity-toity, dismissive overstatement, arriving late, entering with a flourish, exiting with a flounce, pausing for effect, making a scene.

It is languid, bored, world-weary, detached, bored, fidgety, demanding, entitled, disgruntled, bored. It carries the seeds of its own sadness and scatters them around like confetti. It looks in the mirror for protracted periods of time, but avoids exchanging glances with itself. It is disappointment hiding behind the shield of hauteur, keeping pity at bay. ‘I have never known an unhappier woman,’ says John Julius.

Read the book.  You’ll have a hard time putting it down.

Posted in Democracy, Higher Education, Meritocracy, Politics

Jennifer Senior: 95 Percent of Representatives Have a Degree. Look Where That’s Got Us.

This post is a piece by New York Times columnist Jennifer Senior, which was published on December 21.  Here’s a link to the original.

It builds on the argument that Michael Sandel made in The Tyranny of Merit and nicely illuminates some of the issues I’ve been raising in this blog about the problems of meritocracy, the dysfunctions of credentialism, and the political consequences of both.  Past pieces here on the subject are legion, including this, this, this, this, this, and this.  

What I like in particular about her take on the subject is the way she weaves together issues of power, fairness, respect, and community — all of which are pushed in a perilous direction by the new American meritocracy.  And she brings the analysis together by focusing on the effect of college degrees on governing. 

Consider this: “95 percent of today’s House members have a bachelor’s degree, as does every member of the Senate. Yet just a bit more than one-third of Americans do.”  Does this make us better governed?  Really?

Five years ago, Nicholas Carnes, a political scientist at Duke, tried to measure whether more formal education made political leaders better at their jobs. After conducting a sweeping review of 228 countries between the years 1875 and 2004, he and his colleague Noam Lupu concluded: No. It did not. A college education did not mean less inequality, a greater G.D.P., fewer labor strikes, lower unemployment or less military conflict.

I don’t think we needed a study to tell us this, after watching our own government’s dysfunction over the past several decades. 

Then add to this two other facts:  the Democrats have become the party for the college-educated; and most Democrats in Congress went to private colleges while most Republicans went to public colleges.  Is educational exclusivity now the brand of the Democratic Party?

I hope you find this analysis as interesting as I did.

All these credentials haven’t led to better results.

Credit: Damon Winter/The New York Times

Over the last few decades, Congress has diversified in important ways. It has gotten less white, less male, less straight — all positive developments. But as I was staring at one of the many recent Senate hearings, filled with the usual magisterial blustering and self-important yada yada, it dawned on me that there’s a way that Congress has moved in a wrong direction, and become quite brazenly unrepresentative.

No, it’s not that the place seethes with millionaires, though there’s that problem too.

It’s that members of Congress are credentialed out the wazoo. An astonishing number have a small kite of extra initials fluttering after their names.

According to the Congressional Research Service, more than one third of the House and more than half the Senate have law degrees. Roughly a fifth of senators and representatives have their master’s. Four senators and 21 House members have MDs, and an identical number in each body (four, twenty-one) have some kind of doctoral degree, whether it’s a Ph.D., a D.Phil., an Ed.D., or a D. Min.

But perhaps most fundamentally, 95 percent of today’s House members have a bachelor’s degree, as does every member of the Senate. Yet just a bit more than one-third of Americans do.

“This means that the credentialed few govern the uncredentialed many,” writes the political philosopher Michael J. Sandel in “The Tyranny of Merit,” published this fall.

There’s an argument to be made that we should want our representatives to be a highly lettered lot. Lots of people have made it, as far back as Plato.

The problem is that there doesn’t seem to be any correlation between good governance and educational attainment that Sandel can discern. In the 1960s, he noted, we got the Vietnam War thanks to “the best and the brightest” — it’s been so long since the publication of David Halberstam’s book that people forget the title was morbidly ironic. In the 1990s and 2000s, the highly credentialed gave us (and here Sandel paused for a deep breath) “stagnant wages, financial deregulation, income inequality, the financial crisis of 2008, a bank bailout that did little to help ordinary people, a decaying infrastructure, and the highest incarceration rate in the world.”

Five years ago, Nicholas Carnes, a political scientist at Duke, tried to measure whether more formal education made political leaders better at their jobs. After conducting a sweeping review of 228 countries between the years 1875 and 2004, he and his colleague Noam Lupu concluded: No. It did not. A college education did not mean less inequality, a greater G.D.P., fewer labor strikes, lower unemployment or less military conflict.

Sandel argues that the technocratic elite’s slow annexation of Congress and European parliaments — which resulted in the rather fateful decisions to outsource jobs and deregulate finance — helped enable the populist revolts now rippling through the West. “It distorted our priorities,” Sandel told me, “and made for a political class that’s too tolerant of crony capitalism and much less attentive to fundamental questions of the dignity of work.”

Both parties are to blame for this. But it was Democrats, Sandel wrote, who seemed especially bullish on the virtues of the meritocracy, arguing that college would be the road to prosperity for the struggling. And it’s a fine idea, well-intentioned, idealistic at its core. But implicit in it is also a punishing notion: If you don’t succeed, you have only yourself to blame. Which President Trump spotted in a trice.

“Unlike Barack Obama and Hillary Clinton, who spoke constantly of ‘opportunity,’” Sandel wrote, “Trump scarcely mentioned the word. Instead, he offered blunt talk of winners and losers.”

Trump was equally blunt after winning the Nevada Republican caucuses in 2016. “I love the poorly educated!” he shouted.

A pair of studies from 2019 also tell the story, in numbers, of the professionalization of the Democratic Party — or what Sandel calls “the valorization of credentialism.” One, from Politico, shows that House and Senate Democrats are much more likely to have gone to private liberal arts colleges than public universities, whereas the reverse is true of their Republican counterparts; another shows that congressional Democrats are far more likely to hire graduates of Ivy League schools.

This class bias made whites without college degrees ripe for Republican recruitment. In both 2016 and 2020, two thirds of them voted for Trump; though the G.O.P. is the minority party in the House, more Republican members than Democrats currently do not have college degrees. All 11 are male. Most of them come from the deindustrialized Midwest and South.

Oh, and in the incoming Congress? Six of the seven new members without four-year college degrees are Republicans.

Of course, far darker forces help explain the lures of the modern G.O.P. You’d have to be blind and deaf not to detect them. For decades, Republicans have appealed both cynically and in earnest — it’s hard to know which is more appalling — to racial and ethnic resentments, if not hatred. There’s a reason that the Black working class isn’t defecting to the Republican Party in droves. (Of the nine Democrats in the House without college degrees, seven, it’s worth noting, are people of color.)

For now, it seems to matter little that Republicans have offered little by way of policy to restore the dignity of work. They’ve tapped into a gusher of resentment, and they seem delighted to channel it, irrespective of where, or if, they got their diplomas. Ted Cruz, quite arguably the Senate’s most insolent snob — he wouldn’t sit in a study group at Harvard Law with anyone who hadn’t graduated from Princeton, Yale or Harvard — was ready to argue on Trump’s behalf to overturn the 2020 election results, should the disgraceful Texas attorney general’s case have reached the Supreme Court.

Which raises a provocative question. Given that Trumpism has found purchase among graduates of Harvard Law, would it make any difference if Congress better reflected the United States and had more members without college degrees? Would it meaningfully alter policy at all?

It would likely depend on where they came from. I keep thinking of what Rep. Al Green, Democrat of Texas, told me. His father was a mechanic’s assistant in the segregated South. The white men he worked for cruelly called him “The Secretary” because he could neither read nor write. “So if my father had been elected? You’d have a different Congress,” Green said. “But if it’d been the people who he served — the mechanics who gave him a pejorative moniker? We’d probably have the Congress we have now.”

It’s hard to say whether more socioeconomic diversity would guarantee differences in policy or efficiency. But it could do something more subtle: Rebuild public trust.

“There are people who look at Congress and see the political class as a closed system,” Carnes told me. “My guess is that if Congress looked more like people do as a whole, the cynical view — Oh, they’re all in their ivory tower, they don’t care about us — would get less oxygen.”

When I spoke to Representative Troy Balderson, a Republican from Ohio, he agreed, adding that if more members of Congress didn’t have four-year college degrees, it would erode some stigma associated with not having one.

“When I talk to high school kids and say, ‘I didn’t finish my degree,’ their faces light up,” he told me. Balderson tried college and loved it, but knew he wasn’t cut out for it. He eventually moved back to his hometown to run his family car dealership. Students tend to find his story emboldening. The mere mention of four-year college sets off panic in many of them; they’ve been stereotyped before they even grow up, out of the game before it even starts. “If you don’t have a college degree,” he explains, “you’re a has-been.” Then they look at him and see larger possibilities. That they can be someone’s voice. “You can become a member of Congress.”

Jennifer Senior has been an Op-Ed columnist since September 2018. She had been a daily book critic for The Times; before that, she spent many years as a staff writer for New York magazine. Her best-selling book, “All Joy and No Fun: The Paradox of Modern Parenthood,” has been translated into 12 languages. @JenSeniorNY

Posted in Educational goals, History of education, Systems of Schooling

Politics and Markets: The Enduring Dynamics of the US System of Schooling

This post is a piece I just wrote, which will end up as a chapter in a book edited by Kyle Steele, New Perspectives on the Twentieth Century American High School.  It will be published by Palgrave Macmillan as part of Bill Reese and John Rury’s series on Historical Studies in Education.  Here is a link to a pdf of the chapter.  This essay is dedicated to my old friend and former colleague, David Cohen, who died earlier this year.

Writing this chapter was an opportunity for me to explore how my thinking about American schooling emerged from the analysis of an early high school in my first book and then developed over the years into a broader understanding of the dynamics that have shaped the history of the US educational system.  Here’s an overview of the argument:

In this essay, I explore how the tension between politics and markets, which David Cohen uncovered in my first book, helps us understand the central dynamics of the American system of schooling over its 200-year history. The primary insight is that the system, as with Central High, is at odds with itself. It’s a system without a plan. No one constructed a coherent design for the system or assigned it a clear and consistent mission. Instead, the system evolved through the dynamic interplay of competing actors seeking to accomplish contradictory social goals through a single organizational machinery.

By focusing on this tension, we can begin to understand some of the more puzzling and even troubling characteristics of the American system of schooling. It’s a radically decentralized organizational structure, dispersed across 50 states and 15,000 school districts, and no one is in charge. Yet somehow schools all over the country look and act in ways that are remarkably similar. It’s a system that has a life of its own, fends off concerted efforts by political reformers to change the core grammar of schooling, and evolves at its own pace in response to the demands of the market. Its structure is complex, incoherent, and fraught with internal contradictions, but it nonetheless seems to thrive under these circumstances. And it is somehow able to accommodate the demands placed on it by a disparate array of educational consumers, who all seem to get something valuable out of it, even though these demands pull the system in conflicting directions. It has something for everyone, it seems, except for fans of organizational coherence and efficiency. In fact, one lesson that emerges from this focus on tensions within the system is that coherence and efficiency are vastly overrated. Conflict can be constructive.

This essay starts with the tension between politics and markets that I explored in my first book and then builds on it with analyses I carried out over the next thirty years in which I sought to unpack this tension. These findings were published in three later books: How to Succeed in School Without Really Learning: The Credentials Race in American Education (1997); Someone Has to Fail: The Zero-Sum Game of Public Schooling (2010); and A Perfect Mess: The Unlikely Ascendancy of American Higher Education (2017). The aim of this review is to explore the core dynamics of the US educational system as it emerges in these works. It is a story about a balancing act among competing forces, one that began with a conversation about Central High with my friend David Cohen.

The revelation that came to me as I was working on these later books was that the form and function of the American high school served as the model for the educational system.  The nineteenth-century high school established the mix of common schooling at one level and elite schooling at the next level that came to characterize the system as a whole.  And the tracked comprehensive high school that emerged in the early twentieth century provided the template for the structure of US higher education, which, like Central in 1920, is both highly stratified and broadly inclusive.  Overall, it is a system that embraces its own contradictions by providing something for everyone – at the same time providing social access and preserving social advantage. 

I hope you like it.

Politics and Markets:

The Enduring Dynamics of the US System of Schooling[1]

 David F. Labaree

Sometimes, when you’re writing a book, someone else needs to tell you what it’s truly about. That is what happened to me as I was writing my first book, published in 1988: The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939. I had just completed the manuscript when David Cohen, my colleague at the Michigan State University College of Education, generously offered to read the full draft and give me comments on it. As we sat together for two hours in my office, he explained to me the point I was trying to make in the text but had failed to make explicit. Although the pieces of the story I presented were interesting in themselves, he said, they fell short of forming a larger interpretive scheme. The elements of this larger story were already there, but they were just below the surface. Our conversation showed me that the heart of the story my book told about this high school revolved around an ongoing tension between politics and markets, a tension that shaped its evolution.

Central High was created as an expression of democratic politics. In this role, it was an effort to create informed citizens for the new republic. But once it was launched, it took on a new role, as a vehicle for conferring social status on the highly select group of students who attended. Its subsequent history was a struggle between these two visions of the school, as political pressures mounted to give future students greater access to the high school credential, while the families of current students sought to preserve the exclusivity that provided them with social advantage.

At the same time that David told me what my book was about, he also told me what it was not about. As I saw it, the empirical core of the book was a quantitative dataset I had compiled of 1,834 students who attended the school during census years between 1840 and 1920. I had coded the information from school records, linked it to family data from the census, punched it into IBM cards (remember those?), and analyzed it at length with statistical software. What the data showed was that—unlike the contemporary high school, where social origins best explain who graduates and who drops out—the determining factor at Central was grades. This was my big reveal. But that day in my office, David pointed out to me that all this data—recorded in no fewer than thirty-six tables—added up to a footnote to the statement, “Central High School was a meritocracy.” In total, this part of the study took two years of my still short life. Two years for one footnote.

Needless to say, at the time I struggled to accept either of David’s comments with the gratitude they deserved. He was right, but I was devastated. First, the book I thought was finished would now require a complete rewrite, so I could weave the book’s central theme back into the text. And second, this revision would mean confining the hard-won quantitative analysis to a single chapter, because the most interesting material turned out to be elsewhere. In the rush to display all my hard-won data, I had ended up stepping on my punchline.

In this essay, I explore how the tension between politics and markets, which David Cohen uncovered in my first book, helps us understand the central dynamics of the American system of schooling over its 200-year history. The primary insight is that the system, as with Central High, is at odds with itself. It’s a system without a plan. No one constructed a coherent design for the system or assigned it a clear and consistent mission. Instead, the system evolved through the dynamic interplay of competing actors seeking to accomplish contradictory social goals through a single organizational machinery.

By focusing on this tension, we can begin to understand some of the more puzzling and even troubling characteristics of the American system of schooling. It’s a radically decentralized organizational structure, dispersed across 50 states and 15,000 school districts, and no one is in charge. Yet somehow schools all over the country look and act in ways that are remarkably similar. It’s a system that has a life of its own, fends off concerted efforts by political reformers to change the core grammar of schooling, and evolves at its own pace in response to the demands of the market. Its structure is complex, incoherent, and fraught with internal contradictions, but it nonetheless seems to thrive under these circumstances. And it is somehow able to accommodate the demands placed on it by a disparate array of educational consumers, who all seem to get something valuable out of it, even though these demands pull the system in conflicting directions. It has something for everyone, it seems, except for fans of organizational coherence and efficiency. In fact, one lesson that emerges from this focus on tensions within the system is that coherence and efficiency are vastly overrated. Conflict can be constructive.

This essay starts with the tension between politics and markets that I explored in my first book and then builds on it with analyses I carried out over the next thirty years in which I sought to unpack this tension. These findings were published in three later books: How to Succeed in School Without Really Learning: The Credentials Race in American Education (1997); Someone Has to Fail: The Zero-Sum Game of Public Schooling (2010); and A Perfect Mess: The Unlikely Ascendancy of American Higher Education (2017). The aim of this review is to explore the core dynamics of the US educational system as it emerges in these works. It is a story about a balancing act among competing forces, one that began with a conversation about Central High with my friend David Cohen.

The revelation that came to me as I was working on these later books was that the form and function of the American high school served as the model for the educational system.  The nineteenth-century high school established the mix of common schooling at one level and elite schooling at the next level that came to characterize the system as a whole.  And the tracked comprehensive high school that emerged in the early twentieth century provided the template for the structure of US higher education, which, like Central in 1920, is both highly stratified and broadly inclusive.  Overall, it is a system that embraces its own contradictions by providing something for everyone – at the same time providing social access and preserving social advantage. 

Politics and Markets and the Founding of Central High

To understand the tension in the American educational system you first need to consider the core tension that lies at the heart of the American political system. Liberal democracy is an effort to balance two competing goals. One is political equality, which puts emphasis on the need for rule by the majority, grounded in political consensus, and aiming toward the ideal of equality for all. This is the democratic side of liberal democracy. The other goal is individual liberty, which puts emphasis on preserving the rights of the minority from the tyranny of the majority, open competition among individual actors, and a high tolerance for any resulting social inequality. This is the liberal side of the system, which frees persons, property, and markets from undue political constraint. These are the two tendencies I have labeled politics and markets. Balancing the two is both essential and difficult. It offers equal opportunity for unequal outcomes, majority rule and minority rights.

School is at the center of this because it reflects and serves both elements. It offers everyone access to school and the opportunity to show what individuals can achieve there. And it also creates hierarchies of merit, winners and losers, as it sorts people into different levels of the social structure. In short, it provides social access and also upholds social advantage.

So what happened when Central High School appeared upon the scene? It was founded for political and moral reasons, in support of the common-school ideal of preparing citizens of the new American republic by instilling in them the skills and civic virtues they would need to establish and preserve republican community. But in order to accomplish this goal, the founders needed to get past a major barrier. Prior to the founding of common schools in Philadelphia in the 1830s, a form of public schooling was already in effect, but it was limited to people who couldn’t afford to pay for their own schooling. To qualify, you had to go down to city hall and declare yourself, in person, as a pauper. Middle- and upper-class families paid for private schooling for their children. Common schools would not work in creating civic community unless they could draw everyone into the mix. But the existing public system was freighted with the label “pauper schools.” Why would a respectable middle-class family want to send their children to such a stigmatized institution?

The answer to this question was ingenious. Induce the better-off to enroll in the public schools by making such enrollment the prerequisite for gaining access to an institution that was better than anything they could find in the private education market. In Philadelphia, that institution was Central High School. The founders deliberately created it as an irresistible lure for the wealthy. It was located in the most fashionable section of town. It had a classical marble façade, a high-end German telescope mounted in an observatory on its roof, and a curriculum that was comparable to what students could find at the University of Pennsylvania. Modeled more on a college than a private academy, the school called its principal a president and its teachers professors (listed in the front of the city directory along with judges and city council members), and the state authorized the school to award college degrees to its graduates. Its students were in the same age range as those at Penn; you could go to one or the other, but there was no reason to attend both. And unlike Penn, Central was free. It also offered students a meritocratic achievement structure, with a rigorous entrance exam screening those coming in and a tough grading policy that screened those who made it all the way to the end. This meant that graduates of Central were considered more than socially elite; they were certified as smart.

The result was a cultural commodity that became extraordinarily attractive to the middle and upper classes in the city: an elite college education at public expense. But there was a catch. Only students who had attended the public grammar schools could apply for admission to Central; initially they had to spend at least one year in the grammar schools and then the requirement rose to two years. This approach was wildly successful. From day one, the competition to pass the entrance exam and gain access to Central High School was intense. This was true not just for prospective students but also for the city’s grammar school masters, who were engaged in a zero-sum game to see who could get the most students into Central and win themselves a prime post as a professor.

Note that the classic liberal democratic tension between political equality and market inequality was already present at the very birth of the common school. In order to create common schools, you needed an uncommon school. Only the selective inducement of the high school could guarantee full community participation in the lower schools. Thus, from the very start, public schooling in the US was both a public good and a private good. As a public good, its benefits accrued to everyone in the city, by creating citizens who were capable of maintaining a democratic polity. But it was also a private good, which provided social advantage to an elite population that could afford the opportunity cost to attain a scarce and valuable high school diploma.

Increased Access Leads to a Tracked and Socially Reproductive Central High

For fifty years, Central High School (and its female counterpart Girls High School) remained the only public secondary schools in Philadelphia, which at the time was the second largest city in the country. High school attendance was a scarce commodity there and in the rest of the country, where in 1880 it accounted for only 1.1 percent of public school enrollments.[2] At the same time that high school enrollments were small and stable, enrollments in grammar schools were expanding rapidly. By 1900, the average American over twenty-five had completed eight years of schooling.[3] If most students were to continue their education, the number of high schools needed to expand rapidly. As a result, the end of the nineteenth century was a dynamic period in the development of the American system of schooling.

The pressures on the high school were coming from two sources. The first was working-class families, who were eager to have their children gain access to a valuable credential that had long been restricted to a privileged few. It’s a time-tested rule of thumb that, in a liberal democracy, you can’t limit access to an attractive public institution like the high school for very long when demand is high. Sheer numbers eventually make themselves felt through the political arena.

In Philadelphia you could see this play out in the political tensions over access to the two high schools. By the 1870s, the school board started imposing quotas on students from the various grammar schools in order to spread access more evenly across the city. By the 1880s, the city began to open manual training schools in parallel with the high schools, and by the 1890s the floodgates opened. A series of new regional high schools were established, allowing a sharp increase in enrollments. At the same time, the board abolished the high school entrance examination, which meant that students now qualified for admission to high school solely by presenting a grammar-school diploma. By 1920, Central had lost its position as the exclusive citadel at the top of the system, where it drew the best students city-wide, now demoted to the status of just one among the many available regional high schools.

Everything suddenly changed in Central High’s form and function. The vision of being a college disappeared, as Central was placed securely between grammar school and college in the new educational hierarchy. By 1920, its longstanding core curriculum, which had been required of all students, became a tracked curriculum pitched toward different academic trajectories: an academic track for those going to college, a mechanical track for future engineers, a commercial track for clerical workers, and an industrial track for machine operators. And whereas the old Central had a proud tradition of school-wide meritocracy, students in the four tracks were distributed in a pattern familiar in high schools today, according to social class, with 72 percent of the academic-track students from the middle class and only 28 percent from the working class.[4] Its professors, who had won a position at Central after proving their mettle as grammar school masters, gave way to ordinary teachers, who were much younger, had no teaching experience, and held no qualification but a college diploma. (The professors hadn’t needed a college degree; a Central diploma had been sufficient.)

Political pressure for greater access explains the rapid expansion of high school enrollments during this period, but it doesn’t explain why the entire structure of the high school was transformed at the same time. While working-class families wanted their children to gain access to the high school in order to enhance their social opportunities, middle-class families wanted to preserve for their children the exclusivity that granted them social advantage. These middle-class families were the second source of pressure that shaped the school.

In part, this was a simple response to the value of high school as a private good. In political terms, equal access is a valuable public good; but in market terms, it’s a disaster. The value of schooling as a private good is measured by its scarcity. When high school became abundant, it lost its value for middle-class families. The new structure helped to preserve a degree of exclusivity, with middle-class students largely segregated in the academic track and the lower classes dispersed across the lower tracks. In addition, the middle-class students were positioned to move on to college, which had become the new zone of advantage after the high school lost its cachet. This is a pattern we see emerging again after the Second World War, when high school filled up and college enrollments sharply expanded.

For middle-class families at the turn of the twentieth century, this combination of high school tracking and college enrollment was more than just a numbers game, trying to keep one step ahead of the Joneses. Class survival was at stake. For centuries before this period, being middle class had largely meant owning your own small business. For town dwellers, either you were a master craftsman, owning a shop where you supervised journeymen and apprentices in plying the trade of cordwainer or cooper or carpenter, or you ran a retail store serving the public. The way you passed social position to your male children was by setting them up in an apprenticeship or willing them the store.

By the late nineteenth century, this model of status transmission had fallen apart. With the emergence of the factory and machine production, apprenticeship had largely disappeared, as apprentices became simple laborers who no longer had the opportunity to move up to master. And with the emergence of the department store, small retail businesses were in severe jeopardy. No longer able to simply inherit the family business, children in middle-class families faced the daunting prospect of proletarianization. The factory floor was beckoning. These families needed a new way to secure the status of their children, and that solution was education, first in high school and then in college. Through the medium of exclusive schooling, they hoped to position their children to embrace what Burton Bledstein calls “the culture of professionalism.”[5] By this, he is not referring simply to the traditional high professions (law, medicine, clergy) but to any occupational position that is buffered from market pressures.

The iron law of markets is that no one wants to function on a level playing field in open competition with everyone else. So, a business fortifies itself as a corporation, which acts as a conspiracy against the market. And middle-class workers seek an occupation that offers protection from open competition in the job market. Higher level educational credentials can do that. If a high school or college degree is needed to qualify for a position, then this sharply reduces the number of job seekers in the pool. And once on the job, you are less likely to be displaced by someone else because of shifting supply and demand. The ideal is the sinecure, and a diploma is the ticket to secure one. By the twentieth century, college became Sinecures “R” Us.

The job market accommodated this change through the increase in scale of both corporations and government agencies, which created a large array of managerial and clerical positions. These positions were safer, cleaner, and more secure than wage labor. They were protected by educational credentials, annual salaries, chances for promotion, formal dress, and civil service regulations. And, because they were awarded according to educational merit rather than social inheritance, they also granted the salary man a degree of social legitimacy that was not available to the owner’s son. Here’s how Bledstein explains it:

Far more than other types of societies, democratic ones required persuasive symbols of the credibility of authority, symbols the majority of people could reliably believe just and warranted. It became the function of the schools in America to legitimize the authority of the middle class by appealing to the universality and objectivity of “science.”[6]

Evolving in search of this symbolic credibility, the model of the high school that emerged in the early twentieth century looks very familiar to us today. It drew students from the community around the school, who were enrolled in a single comprehensive institution, and who were then distributed into curriculum tracks according to a judicious mix of individual academic merit and inherited social position, with each track aligned with a different occupational trajectory. The school as a whole was as heterogeneous as the surrounding population, but the experience students had there was relatively homogeneous by track and social origin. In one educational setting, you had both democratic equality and market-based inequality, commonality and hierarchy. An exemplary institution for a liberal democracy.

A lovely essay by David Cohen and Barbara Neufeld, “The Failure of High Schools and the Progress of Education,” captures the distinctive tension built into this institution.[7] On one hand, the comprehensive high school was one of the great educational success stories of all time. Starting as a tiny sliver of the educational system in the nineteenth century, it became a mammoth in the twentieth—with enrollment doubling every ten years between 1890 and 1940—and by the end of this period it incorporated the large majority of the teenagers in the country. The elite school for the privileged few evolved rapidly into a comprehensive school for the masses.

But on the other hand, this success turned quickly into failure. Instead of celebrating the accomplishment of the students who managed to graduate from the high school, we began to bemoan those who didn’t, thus creating a new social problem: the high school dropout. Also, as the high school shifted from being seen as a place for students of the highest academic accomplishment to one for students of all abilities, it became the object of handwringing about declining academic standards. As a public good, it was a political success, offering opportunity for all; but as a private good, it was an educational failure, characterized by a watered-down curriculum and low expectations for achievement. The result was that the high school became the object of most educational reform movements in the twentieth century. Once the answer, it was now the problem.

The Lessons of Central High Applied to the American Educational System

At this point, having followed the trajectory of the high school, we are in a position to examine more fully the core dynamic that shaped the development of the American educational system as a whole. Here’s how it works. Start with mass schooling at one level of the system and exclusive schooling at the level above. Then, in response to popular demand from working-class families for educational opportunity at the top level, the system expands access to this level, thus making it more inclusive. Next, in response to demand by middle-class families to preserve their educational advantage, the system tracks schooling in the zone of expansion, with their children occupying the upper tracks and newcomers entering in the lower tracks. Finally, the system ushers the previously advantaged educational consumers into the next higher level of the system, where schooling remains exclusive, the new zone of advantage.

In the second quarter of the nineteenth century, for example, we saw the formation of the common school system in the US, with universal enrollment at the elementary level, partial enrollment in grammar schools, and scarce enrollment in high schools. By the end of the century, grammar schools had filled up and pressure rose for greater access to high schools. As a result, high schools shifted toward a tracked structure, with middle-class students in the top tracks and the working-class students in the tracks below. Then in the middle of the twentieth century, the same pattern played out in the system’s expansion at the college level.

By 1940, high school enrollment had become the norm for all American families, which meant that the new zone of educational opportunity was now the previously exclusive domain of higher education. As was the case with high school in the late nineteenth century, political demand arose for working-class access to college, which had previously been the preserve of the middle class. Despite the much higher per-capita cost of college compared to high school, political will converged to deliver this access. The twin spurs were a hot war and a cold war. The need to acknowledge the shared sacrifice of the Second World War led to the 1944 GI Bill, which paid for veterans to go to college. And the need during the Cold War to mobilize research, enhance human capital, and demonstrate the superiority of liberal democracy over communism led to the Higher Education Act of 1965. The result was an enormous expansion of higher education in the 1950s and 1960s. Enrollments grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969 and then 11.6 million in 1979.[8]

The result was to revolutionize the structure of American higher education. Here’s how I described it in A Perfect Mess:

Until the 1940s, American colleges had admitted students with little concern for academic merit or selectivity, and this was true not only for state universities but also for the private universities now considered as the pinnacle of the system. If you met certain minimal academic requirements and could pay the tuition, you were admitted. But in the postwar years, a sharp divide emerged in the system between the established colleges and universities, which dragged their feet about expanding enrollments and instead became increasingly selective, and the new institutions, which expanded rapidly by admitting nearly everyone who applied.

What were these new institutions that welcomed the newcomers? Often existing public universities would set up branch campuses in other regions of the state, which eventually became independent institutions. Former normal schools, set up in the nineteenth century as high-school-level institutions for preparing teachers, had evolved into teachers colleges in the early twentieth century; and by the middle of the century they had grown into full-service state colleges and universities serving regional populations. A number of new urban college campuses also emerged during this period, aimed at students who would commute from home to pursue programs that would prepare them for mid-level white-collar jobs. And the biggest players in the new lower tier of American higher education were community colleges, which provided two-year programs allowing students to enter low-level white-collar jobs or transfer to the university. Community colleges quickly became the largest provider of college instruction in the country. By 1980, they accounted for nearly 40 percent of all college enrollments in the U.S.[9]

These new colleges and universities had several characteristics in common. Compared to their predecessors: they focused on undergraduate education; they prepared students for immediate entry into the workforce; they drew students from nearby; they cost little; and they admitted almost anyone. For all these reasons, especially the last one, they also occupied a position in the college hierarchy that was markedly lower. Just as secondary education expanded only by allowing the newcomers access to the lower tiers of the new comprehensive high school, so higher education expanded only by allowing newcomers access to the lower tiers of the newly stratified structure of the tertiary system.

As a result, the newly expanded and stratified system of higher education protected upper-middle-class students attending the older selective institutions from the lower-middle-class students attending regional and urban universities and the working-class students attending community colleges. At the same time, these upper-middle-class students started pouring into graduate programs in law, medicine, business, and engineering, which quickly became the new zone of educational advantage.[10]

So, at fifty-year intervals across the history of American education, the same pattern kept repeating. Every effort to increase access brought about a counter-effort to preserve advantage. Every time the floor of the educational system rose, so did the ceiling. The result is an elevator effect, in which the system gamely provides both access and advantage, pushing educational attainment steadily upward for all while preserving social differences. Plus ça change.

What’s Next in the Struggle between Politics and Markets?

So where does that leave us today? I see three problems that have emerged from the tension that has propelled the evolution of the American system of schooling: a time problem, a cost problem, and a public goods problem. Let’s consider each in turn.

The time problem arises from the relentless upward expansion of the system, which is sucking up an increasing share of the American life span. Life expectancy has been growing slowly over the years, but time in school has been growing at a much more rapid rate. In the mid-nineteenth century, the modal American spent four years in school. By 1900 it had risen to eight years. By 2000 it was thirteen years. And by 2015, among Americans over twenty-five, 59 percent had at least some college, 42 percent at least an associate’s degree, 33 percent at least a bachelor’s degree, and 12 percent an advanced degree.[11]

In my own case, I spent a grand total of 26 years in school: two years of preschool, twelve years of elementary and secondary school, five years of college, and seven years of graduate school (I’m a slow study). I didn’t finish my doctorate until the ripe old age of 36, which left only thirty years to ply my profession before the social-security retirement age for my cohort. As I used to ask my graduate students—most of whom had also deferred the start of graduate study until a few years after college—when do we finish preparing for life and start living it? When do we finally grow up?

            Not only does the rapid expansion of schooling eat up an increasing share of people’s lives, but it also costs them a lot of money. First, there’s the opportunity cost, as people keep deferring to the future their chances of earning a living. Then there’s the direct cost for students to pay tuition and to support themselves as adult learners. And finally, there’s the expense to the state of providing public education across all these years. As schooling expands upward, the direct costs of education to student and state grow geometrically. High school is much more expensive per student than elementary school, college much more than high school, and graduate school much more than college.

At some point in this progression, the costs start hitting a ceiling, when students are less willing to defer earning and pay the increasing cost of advanced schooling and when taxpayers are less willing to support advanced schooling for all. In the U.S., we started to see this happening in the 1970s, when the sharp rise in college enrollments spurred a taxpayer revolt, which emerged first in California (home to America’s largest higher education system, which charged no tuition) and then spread across the country. People began to ask whether they were willing to pay for the higher education of other people’s children on top of the direct cost for themselves. The result was a sharp increase in college tuition (which until then had been free or relatively cheap) and a shift in government support away from scholarships and toward loans.

In combination, these increases in time and money began to undermine support for higher education as a public good. If education is seen as providing broad benefits to the community as a whole, then it makes sense to support it with public funds, which had been the case for elementary school in the nineteenth century and for high school in the early twentieth century. For thirty years after 1945, higher education found itself in the same position. The huge public effort  in the Second World War justified the provision of college at public expense for returning soldiers, as established by the GI Bill. In addition, the emerging Cold War assigned higher education a major role in countering the existential threat of communism. University research played a crucial role in supplying the technologies for the arms race and space race with the Soviet Union, and broadening access to college for the working class and racial minorities helped demonstrate the moral credibility of liberal democracy in relation to communism.

But when the fiscal costs of this effort mounted in the 1970s and then the Soviet Union collapsed in 1991, the rationale for public subsidy of the extraordinarily high costs of higher education collapsed as well. Under these circumstances, college began to look a lot more like a private good than a public good, one whose primary beneficiaries appeared to be its 20 million students. A college degree had become the ticket of admission to the good middle-class life, with its high costs yielding even higher returns in lifelong earnings. If graduates were reaping the bulk of the benefits, then they should bear the costs. Why provide a public subsidy for private gain?

This takes us back to our starting point in this analysis of the American system of schooling: the ongoing tension between politics and markets. As we have seen, that tension was there from day one, with the establishment of the uncommon Central High School at the same time as the common elementary school, and it has persisted over the years. Elite schooling was stacked on top of open-access schooling, with one treating education as a private good and the other as a public good. As demand grew for access to the zone of educational advantage, the system responded by stratifying that zone and expanding enrollment at the next higher level. And the result we’re dealing with now is the triple threat of a system that has devoured our time, inflated our costs, and diminished our commitment to education as a public good.

As I write now, in the midst of a pandemic and in the waning weeks of the Trump administration, these issues are driving the debates about education policy. We hear demands for greater access to elite levels of higher education, for eliminating tuition at community colleges, and for forgiving student debt. And, countering these demands, we hear concerns about the feasibility of paying for these reforms, the public burden of subsidizing students who can afford to pay their way, and the need to preserve elite universities that are the envy of the world. Who knows how these debates will play out. But one thing is certain: the tensions between politics and markets, and between public goods and private goods, will continue.

Bibliography

Bledstein, Burton J. The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York: W. W. Norton, 1978.

Carter, Susan B., et al., eds. Historical Statistics of the United States (millennial edition online). New York: Cambridge University Press, 2006.

Cohen, David K., and Barbara Neufeld. “The Failure of High Schools and the Progress of Education.” Daedalus 110 (Summer 1981): 69-89.

Labaree, David F. A Perfect Mess: The Unlikely Ascendancy of American Higher Education. Chicago: University of Chicago Press, 2017.

Labaree, David F. The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939. New Haven: Yale University Press, 1988.

National Center for Education Statistics. 120 Years of American Education. Washington, DC: Government Printing Office, 1993.

National Center for Education Statistics. Digest of Education Statistics 2013. Washington, DC: Government Printing Office, 2014.

Ryan, Camille L., and Kurt Bauman. “Educational Attainment in the United States: 2015.” Current Population Reports, United States Census Bureau (March 2016). Accessed December 1, 2020. https://www.census.gov/content/dam/Census/library/publications/2016/demo/p20-578.pdf.

United Nations Development Programme Human Development Reports. “Mean Years of Schooling (Males, aged 25 years and above).” Accessed December 1, 2020. http://hdr.undp.org/en/content/mean-years-schooling-males-aged-25-years-and-above-years.

Footnotes

[1] This chapter is dedicated to my friend and former colleague, David Cohen, who died in 2020.

[2] National Center for Education Statistics, 120 Years of American Education (Washington, DC: Government Printing Office, 1993), Table 8.

[3] NCES, 120 Years of American Education, Table 5.

[4] Labaree, David F., The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988), Table 6.4.

[5] Burton J. Bledstein, The Culture of Professionalism: The Middle Class and the Development of Higher Education in America (New York: W. W. Norton, 1978).

[6] Bledstein, The Culture of Professionalism, 123.

[7] David K. Cohen and Barbara Neufeld, “The Failure of High Schools and the Progress of Education,” Daedalus 110 (Summer 1981): 69-89.

[8] Susan B. Carter et al., eds., Historical Statistics of the United States (millennial edition online) (New York: Cambridge University Press, 2006), Table Bc523; National Center for Education Statistics, Digest of Education Statistics 2013 (Washington, DC: Government Printing Office, 2014), Table 303.10.

[9] NCES, 120 Years of American Education, Table 24.

[10] Labaree, David F., A Perfect Mess: The Unlikely Ascendancy of American Higher Education (Chicago: University of Chicago Press, 2017), pp. 106-108.

[11] United Nations Development Programme Human Development Reports, “Mean Years of Schooling (Males, aged 25 years and above),” accessed December 1, 2020, http://hdr.undp.org/en/content/mean-years-schooling-males-aged-25-years-and-above-years. Camille L. Ryan and Kurt Bauman, “Educational Attainment in the United States: 2015,” Current Population Reports, United States Census Bureau (March 2016), Table 1, accessed December 1, 2020, https://www.census.gov/content/dam/Census/library/publications/2016/demo/p20-578.pdf.

Posted in Academic Life, Graduate school

Cartoons about the Life of a Doctoral Student

This post is a collection of some favorite cartoons about life as a doctoral student.  All of them are from the website PHD Comics, which stands for Piled Higher and Deeper.  The author is Jorge Cham, who got his PhD in mechanical engineering at Stanford and then taught at Caltech.  Earlier I posted cartoons about Academic Research, and coming up will be a post about Life as a Professor.

Enjoy!

[Cartoon gallery: Grad Student Brain; Zombies vs. Grad Students; Pandas vs. Academics; Landing on the Moon vs. Getting a PhD; Having Kids vs. Writing a Thesis; Grad Student Stipends; Grad School Is Like Kindergarten; Best Years of Your Life]

Posted in Politics, Populism, Social status, Sociology

Thomas Edsall: The Resentment that Never Sleeps

This post is a piece by Thomas Edsall published in the New York Times last week.  It explores in detail the recent literature about the role that declining social status has played in the rise of right-wing populism in the US and elsewhere.  Here’s a link to the original.

The argument is one that resonates in my own work posted here (see this, this, and this).  People are less concerned about getting ahead than they are about falling behind.  And one of the consequences of the degree-based meritocracy is the way it disparages people who lack the proper credentials, making clear to them that they are losing ground to the new educated elite.  Here is how Cecilia Ridgeway puts it:

Status is as significant as money and power. At a macro level, status stabilizes resource and power inequality by transforming it into cultural status beliefs about group differences regarding who is “better” (esteemed and competent).

Those most affected tend to be neither at the top nor the bottom of the social hierarchy but somewhere in the lower middle regions.  Peter Hall says that

The people most often drawn to the appeals of right-wing populist politicians, such as Trump, tend to be those who sit several rungs up the socioeconomic ladder in terms of their income or occupation. My conjecture is that it is people in this kind of social position who are most susceptible to what Barbara Ehrenreich called a “fear of falling” — namely, anxiety, in the face of an economic or cultural shock, that they might fall further down the social ladder, a phenomenon often described as “last place aversion.”

This is one of the most trenchant analyses of Trumpism that I have yet encountered.  See what you think.

The Resentment That Never Sleeps

Rising anxiety over declining social status tells us a lot about how we got here and where we’re going.

More and more, politics determine which groups are favored and which are denigrated.

Roughly speaking, Trump and the Republican Party have fought to enhance the status of white Christians and white people without college degrees: the white working and middle class. Biden and the Democrats have fought to elevate the standing of previously marginalized groups: women, minorities, the L.G.B.T.Q. community and others.

The ferocity of this politicized status competition can be seen in the anger of white non-college voters over their disparagement by liberal elites, the attempt to flip traditional hierarchies and the emergence of identity politics on both sides of the chasm.

Just over a decade ago, in their paper “Hypotheses on Status Competition,” William C. Wohlforth and David C. Kang, professors of government at Dartmouth and the University of Southern California, wrote that “social status is one of the most important motivators of human behavior” and yet “over the past 35 years, no more than half dozen articles have appeared in top U.S. political science journals building on the proposition that the quest for status will affect patterns of interstate behavior.”

Scholars are now rectifying that omission, with the recognition that in politics, status competition has become increasingly salient, prompting a collection of emotions including envy, jealousy and resentment that have spurred ever more intractable conflicts between left and right, Democrats and Republicans, liberals and conservatives.

Hierarchal ranking, the status classification of different groups — the well-educated and the less-well educated, white people and Black people, the straight and L.G.B.T.Q. communities — has the effect of consolidating and seeming to legitimize existing inequalities in resources and power. Diminished status has become a source of rage on both the left and right, sharpened by divisions over economic security and insecurity, geography and, ultimately, values.

The stakes of status competition are real. Cecilia L. Ridgeway, a professor at Stanford, described the costs and benefits in her 2013 presidential address at the American Sociological Association.

Understanding “the effects of status — inequality based on differences in esteem and respect” is crucial for those seeking to comprehend “the mechanisms behind obdurate, durable patterns of inequality in society,” Ridgeway argued:

Failing to understand the independent force of status processes has limited our ability to explain the persistence of such patterns of inequality in the face of remarkable socioeconomic change.

“As a basis for social inequality, status is a bit different from resources and power. It is based on cultural beliefs rather than directly on material arrangements,” Ridgeway said:

We need to appreciate that status, like resources and power, is a basic source of human motivation that powerfully shapes the struggle for precedence out of which inequality emerges.

Ridgeway elaborated on this argument in an essay, “Why Status Matters for Inequality”:

Status is as significant as money and power. At a macro level, status stabilizes resource and power inequality by transforming it into cultural status beliefs about group differences regarding who is “better” (esteemed and competent).

In an email, Ridgeway made the case that “status is definitely important in contemporary political dynamics here and in Europe,” adding that

Status has always been part of American politics, but right now a variety of social changes have threatened the status of working class and rural whites who used to feel they had a secure, middle status position in American society — not the glitzy top, but respectable, ‘Main Street’ core of America. The reduction of working-class wages and job security, growing demographic diversity, and increasing urbanization of the population have greatly undercut that sense and fueled political reaction.

The political consequences cut across classes.

Peter Hall, a professor of government at Harvard, wrote by email that he and a colleague, Noam Gidron, a professor of political science at Hebrew University in Jerusalem, have found that

across the developed democracies, the lower people feel their social status is, the more inclined they are to vote for anti-establishment parties or candidates on the radical right or radical left.

Those drawn to the left, Hall wrote in an email, come from the top and bottom of the social order:

People who start out near the bottom of the social ladder seem to gravitate toward the radical left, perhaps because its program offers them the most obvious economic redress; and people near the top of the social ladder often also embrace the radical left, perhaps because they share its values.

In contrast, Hall continued,

The people most often drawn to the appeals of right-wing populist politicians, such as Trump, tend to be those who sit several rungs up the socioeconomic ladder in terms of their income or occupation. My conjecture is that it is people in this kind of social position who are most susceptible to what Barbara Ehrenreich called a “fear of falling” — namely, anxiety, in the face of an economic or cultural shock, that they might fall further down the social ladder, a phenomenon often described as “last place aversion.”

Gidron and Hall argue in their 2019 paper “Populism as a Problem of Social Integration” that

Much of the discontent fueling support for radical parties is rooted in feelings of social marginalization — namely, in the sense some people have that they have been pushed to the fringes of their national community and deprived of the roles and respect normally accorded full members of it.

In this context, what Gidron and Hall call “the subjective social status of citizens — defined as their beliefs about where they stand relative to others in society” serves as a tool to measure both levels of anomie in a given country, and the potential of radical politicians to find receptive publics because “the more marginal people feel they are to society, the more likely they are to feel alienated from its political system — providing a reservoir of support for radical parties.”

The populist rhetoric of politicians on both the radical right and left is often aimed directly at status concerns. They frequently adopt the plain-spoken language of the common man, self-consciously repudiating the politically correct or technocratic language of the political elites. Radical politicians on the left evoke the virtues of working people, whereas those on the right emphasize themes of national greatness, which have special appeal for people who rely on claims to national membership for a social status they otherwise lack. The “take back control” and “make America great again” slogans of the Brexit and Trump campaigns were perfectly pitched for such purposes.

Robert Ford, a professor of political science at the University of Manchester in the U.K., argued in an email that three factors have heightened the salience of status concerns.

The first, he wrote, is the vacuum created by “the relative decline of class politics.” The second is the influx of immigrants, “not only because different ‘ways of life’ are perceived as threatening to ‘organically grown’ communities, but also because this threat is associated with the notion that elites are complicit in the dilution of such traditional identities.”

The third factor Ford describes as “an asymmetrical increase in the salience of status concerns due to the political repercussions of educational expansion and generational value change,” especially “because of the progressive monopolization of politics by high-status professionals,” creating a constituency of “cultural losers of modernization” who “found themselves without any mainstream political actors willing to represent and defend their ‘ways of life’ ” — a role Trump sought to fill.

In their book, “Cultural Backlash,” Pippa Norris and Ronald Inglehart, political scientists at Harvard and the University of Michigan, describe the constituencies in play here — the “oldest (interwar) generation, non-college graduates, the working class, white Europeans, the more religious, men, and residents of rural communities” that have moved to the right in part in response to threats to their status:

These groups are most likely to feel that they have become estranged from the silent revolution in social and moral values, left behind by cultural changes that they deeply reject. The interwar generation of non-college educated white men — until recently the politically and socially dominant group in Western cultures — has passed a tipping point at which their hegemonic status, power, and privilege are fading.

The emergence of what political scientists call “affective polarization,” in which partisans incorporate their values, their race, their religion — their belief system — into their identity as a Democrat or Republican, together with more traditional “ideological polarization” based on partisan differences in policy stands, has produced heightened levels of partisan animosity and hatred.

Lilliana Mason, a political scientist at the University of Maryland, describes it this way:

The alignment between partisan and other social identities has generated a rift between Democrats and Republicans that is deeper than any seen in recent American history. Without the crosscutting identities that have traditionally stabilized the American two-party system, partisans in the American electorate are now seeing each other through prejudiced and intolerant eyes.

If polarization has evolved into partisan hatred, status competition serves to calcify the animosity between Democrats and Republicans.

In their July 2020 paper, “Beyond Populism: The Psychology of Status-Seeking and Extreme Political Discontent,” Michael Bang Petersen, Mathias Osmundsen, and Alexander Bor, political scientists at Aarhus University in Denmark, contend there are two basic methods of achieving status: the “prestige” approach requiring notable achievement in a field and “dominance” capitalizing on threats and bullying. “Modern democracies,” they write,

are currently experiencing destabilizing events including the emergence of demagogic leaders, the onset of street riots, circulation of misinformation and extremely hostile political engagements on social media.

They go on:

Building on psychological research on status-seeking, we argue that at the core of extreme political discontent are motivations to achieve status via dominance, i.e., through the use of fear and intimidation. Essentially, extreme political behavior reflects discontent with one’s own personal standing and a desire to actively rectify this through aggression.

This extreme political behavior often coincides with the rise of populism, especially right-wing populism, but Petersen, Osmundsen and Bor contend that the behavior is distinct from populism:

The psychology of dominance is likely to underlie current-day forms of extreme political discontent — and associated activism — for two reasons: First, radical discontent is characterized by verbal or physical aggression, thus directly capitalizing on the competences of people pursuing dominance-based strategies. Second, current-day radical activism seems linked to desires for recognition and feelings of ‘losing out’ in a world marked by, on the one hand, traditional gender and race-based hierarchies, which limit the mobility of minority groups and, on the other hand, globalized competition, which puts a premium on human capital.

Extreme discontent, they continue,

is a phenomenon among individuals for whom prestige-based pathways to status are, at least in their own perception, unlikely to be successful. Despite their political differences, this perception may be the psychological commonality of, on the one hand, race- or gender-based grievance movements and, on the other hand, white lower-middle class right-wing voters.

The authors emphasize that the distinction between populism and status-driven dominance is based on populism’s “orientation toward group conformity and equality,” which stands “in stark contrast to dominance motivations. In contrast to conformity, dominance leads to self-promotion. In contrast to equality, dominance leads to support for steep hierarchies.”

Thomas Kurer, a political scientist at the University of Zurich, contends that status competition is a political tool deployed overwhelmingly by the right. By email, Kurer wrote:

It is almost exclusively political actors from the right and the radical right that actively campaign on the status issue. They emphasize implications of changing status hierarchies that might negatively affect the societal standing of their core constituencies and thereby aim to mobilize voters who fear, but have not yet experienced, societal regression. The observation that campaigning on potential status loss is much more widespread and, apparently, more politically worthwhile than campaigning on status gains makes a lot of sense in light of the long-established finding in social psychology that citizens care much more about a relative loss compared to same-sized gains.

Kurer argued that it is the threat of lost prestige, rather than the actual loss, that is a key factor in status-based political mobilization:

Looking at the basic socio-demographic profile of a Brexiter or a typical supporter of a right-wing populist party in many advanced democracies suggests that we need to be careful with a simplified narrative of a ‘revolt of the left behind’. A good share of these voters can be found in what we might call the lower middle class, which means they might well have decent jobs and decent salaries — but they fear, often for good reasons, that they are not on the winning side of economic modernization.

Kurer noted that in his own April 2020 study, “The Declining Middle: Occupational Change, Social Status, and the Populist Right,” he found

that it is voters who are and remain in jobs susceptible to automation and digitalization, so called routine jobs, who vote for the radical right and not those who actually lose their routine jobs. The latter are much more likely to abstain from politics altogether.

A separate study of British voters who supported the leave side of Brexit, “The malaise of the squeezed middle: Challenging the narrative of the ‘left behind’ Brexiter,” by Lorenza Antonucci of the University of Birmingham, Laszlo Horvath of the University of Exeter, Yordan Kutiyski of VU University Amsterdam and André Krouwel of the Vrije University of Amsterdam, found that this segment of the electorate

is associated more with intermediate levels of education than with low or absent education, in particular in the presence of a perceived declining economic position. Secondly, we find that Brexiters hold distinct psychosocial features of malaise due to declining economic conditions, rather than anxiety or anger. Thirdly, our exploratory model finds voting Leave associated with self-identification as middle class, rather than with working class. We also find that intermediate levels of income were not more likely to vote for remain than low-income groups.

In an intriguing analysis of the changing role of status in politics, Herbert Kitschelt, a political scientist at Duke, emailed the following argument. In the recent past, he wrote:

One unique thing about working class movements — particularly when infused with Marxism — is that they could dissociate class from social status by constructing an alternative status hierarchy and social theory: Workers may be poor and deprived of skill, but in world-historic perspective they are designated to be the victorious agents of overcoming capitalism in favor of a more humane social order.

Since then, Kitschelt continued, “the downfall of the working class over the last thirty years is not just a question of its numerical shrinkage, its political disorganization and stagnating wages. It also signifies a loss of status.” The political consequences are evident and can be seen in the aftermath of the defeat of President Trump:

Those who cannot adopt or compete in the dominant status order — closely associated with the acquisition of knowledge and the mastery of complex cultural performances — make opposition to this order a badge of pride and recognition. The proliferation of conspiracy theories is an indicator of this process. People make themselves believe in them, because it induces them into an alternative world of status and rank.

On the left, Kitschelt wrote, the high value accorded to individuality, difference and autonomy creates

a fundamental tension between the demand for egalitarian economic redistribution — and the associated hope for status leveling — and the prerogative awarded to individualist or voluntary group distinction. This is the locus, where identity politics — and the specific form of intersectionality as a mode of signaling multiple facets of distinctiveness — comes in.

In the contest of contemporary politics, status competition serves to exacerbate some of the worst aspects of polarization, Kitschelt wrote:

If polarization is understood as the progressive division of society into clusters of people with political preferences and ways of life that set them further and further apart from each other, status politics is clearly a reinforcement of polarization. This augmentation of social division becomes particularly virulent when it features no longer just a clash between high and low status groups in what is still commonly understood as a unified status order, but if each side produces its own status hierarchies with their own values.

These trends will only worsen as claims of separate “status hierarchies” are buttressed by declining economic opportunities and widespread alienation from the mainstream liberal culture.

Millions of voters, including the core group of Trump supporters — whites without college degrees — face bleak futures, pushed further down the ladder by meritocratic competition that rewards what they don’t have: higher education and high scores on standardized tests. Jockeying for place in a merciless meritocracy feeds into the status wars that are presently poisoning the country, even as exacerbated levels of competition are, theoretically, an indispensable component of contemporary geopolitical and economic reality.

Voters in the bottom half of the income distribution face a level of hypercompetition that has, in turn, served to elevate politicized status anxiety in a world where social and economic mobility has, for many, ground to a halt: 90 percent of the age cohort born in the 1940s looked forward to a better standard of living than their parents’, compared with 50 percent for those born since 1980. Even worse, those in the lower status ranks suffer the most lethal consequences of the current pandemic.

These forces in their totality suggest that Joe Biden faces the toughest challenge of his career in attempting to fulfill his pledge to the electorate: “We can restore the defining American promise, that no matter where you start in life, there’s nothing you can’t achieve. And, in doing so, we can restore the soul of our nation.”

Trump has capitalized on the failures of this American promise. Now we have to hope that Biden can deliver.

Posted in Democracy, Empire, History, Politics

“The Crown” and the Long Tradition of Petitioning the Monarch for Redress of Grievances

In episode 5 of The Crown’s season 4, a desperate out-of-work painter named Michael Fagan breaks into Buckingham Palace, enters the queen’s bedroom, sits on the foot of her bed, and asks her for a cigarette.  “Filthy habit,” she replies. “Yes, I know, I’m trying to quit,” he says.  Then he gets down to business, asking her to provide relief from his dire economic condition.  The incident is real, occurring on July 9, 1982, during the time when Margaret Thatcher was prime minister. Given the bizarre circumstances of the encounter, both parties were remarkably calm. When security officers arrived, the queen asked them to delay arrest until she could shake the intruder’s hand.

Fagan and the Queen

Watching this I was struck by the way it represented a much broader historical phenomenon:  the longstanding practice of people petitioning the sovereign for the redress of grievances.  In many ways, this practice answers the question of why monarchy was such a durable form of governance over the centuries.  Except in absolutist regimes, the king was able to position himself as head of state rather than head of government.  Government was the domain of his ministers, who set policy and passed laws.  The king appointed his ministers but could claim some distance from their policies, able to dispatch them quickly — by dismissal or decapitation — if the policies didn’t work out.  In this way, the king could represent himself as the guardian of the people while the ministers were just the functionaries of government.  And as sovereign, he granted ordinary people the right to petition him when government policies put them in jeopardy.  This gave the monarchy a remarkable durability, as it was able to evade responsibility for bad outcomes and earn affection from the populace by occasionally redressing their grievances.  It was a very effective good-cop/bad-cop arrangement that has endured right up to the present.

As constitutional monarch, Queen Elizabeth appointed Margaret Thatcher as prime minister, but she had plausible deniability for being responsible for Thatcher’s draconian policies, which had caused so much economic harm to UK workers like Fagan.  So it made sense for him to approach her directly in seeking relief, as so many had approached monarchs in the past.  In his extreme state — barred access to his children, perpetually out of work, and with no hope in sight — he had nothing to lose by his daring palace break-in.

There is a long history of petitions to the crown.  In 1774, the First Continental Congress of the American colonies sent a petition to George III requesting relief from a set of grievances they put before him.

Most Gracious Sovereign: We, your Majesty’s faithful subjects of the Colonies of New-Hampshire, Massachusetts Bay, Rhode-Island and Providence Plantations, Connecticut, New-York, New-Jersey, Pennsylvania, the Counties of New-Castle, Kent, and Sussex, on Delaware, Maryland, Virginia, North Carolina, and South Carolina, in behalf of ourselves and the inhabitants of those Colonies who have deputed us to represent them in General Congress, by this our humble Petition, beg leave to lay our Grievances before the Throne.

After laying out these grievances in detail, the Congress closed the petition in a tone of hope and respect:

We therefore most earnestly beseech your Majesty, that your Royal authority and interposition may be used for our relief, and that a gracious Answer may be given to this Petition.

That your Majesty may enjoy every felicity through a long and glorious Reign, over loyal and happy subjects, and that your descendants may inherit your prosperity and Dominions till time shall be no more, is, and always will be, our sincere and fervent prayer.

The king, of course, did not respond as they had requested, and the ultimate result was the American war of independence.  So petitioning the sovereign has been no guarantee of success, but the sheer possibility of the king’s intervention has kept hope alive.

Consider another famous petition, whose outcome was both faster and worse.  In January of 1905, a group of 135,000 workingmen presented the following plea to Tsar Nicholas II at his winter palace in St. Petersburg.

Sire,—

We working men of St. Petersburg, our wives and children, and our parents, helpless, aged men and women, have come to you, О Tsar, in quest of justice and protection. We have been beggared, oppressed, over-burdened with excessive toil, treated with contumely. We are not recognized as normal human beings, but are dealt with as slaves who have to bear their bitter lot in silence. Patiently we endured this; but now we are being thrust deeper into the slough of right-lessness and ignorance, are being suffocated by despotism and arbitrary whims, and now, О Tsar, we have no strength left. The awful moment has come when death is better than the prolongation of our unendurable tortures.

After a long list of grievances and demands, the petition closed with the following words:

Those, Sire, constitute our principal needs, which we come to lay before you. Give orders and swear that they shall be fulfilled, and you will render Russia happy and glorious, and will impress your name on our hearts and on the hearts of our children, and our children’s children for all time. But if you withhold the word, if you are not responsive to our petition, we will die here on this square before your palace, for we have nowhere else to go to and no reason to repair elsewhere. For us there are but two roads, one leading to liberty and happiness, the other to the tomb. Point, Sire, to either of them; we will take it, even though it lead to death. Should our lives serve as a holocaust of agonizing Russia, we will not grudge these sacrifices; we gladly offer them up.

The Tsar chose Door No. 2 and several hundred workers were gunned down in front of the palace, an event remembered in history as Bloody Sunday.

Michael Fagan’s petition didn’t end well either.  Thatcher’s policies remained in effect; he was declared insane by a court and committed to a mental hospital.  Released three months later, he spent years in and out of prison.  But he’s been talking about the incident ever since.  Wouldn’t you?

Constitutional monarchs like Queen Elizabeth continue to enjoy the benefit of being head of state without the responsibility and accountability that comes from being head of government.  Presidents, in governments like the United States, don’t have the luxury of enjoying prestige without power, of being the symbol of the nation while remaining above the fray.  On the other hand, maybe pinning your hopes on someone who can’t deliver for you is a fool’s game.  Maybe we’re better off with a leader who can be held to account.

Posted in Education policy, History of education, Research, Teacher education

Do No Harm: Reflections on the Impact of Educational Research

This post is a short piece I wrote in 2011 for a special issue of the journal Teacher Education and Practice on “Enhancing Teaching and Learning Through Scholarship.”  My own take is that research in education is not necessarily well positioned to enhance education; on the contrary, it often does more harm than good.  See what you think.  Here’s a link to the original.

Do No Harm

David F. Labaree

            Education is a field of dreams and so is educational research.  As educators, we dream of schools that can improve the lives of students, solve social problems, and enrich the quality of life; and as educational researchers, we dream that our studies will enhance the effectiveness of schools in achieving these worthy goals.  Both fields draw recruits who see the possibilities of education as a force for doing good, and that turns out to be a problem, because the history of both fields shows that the chances for doing real harm are substantial.  Over the years, research on teaching and teacher education – the topic of the discussion in this special issue – has caused a lot of damage to teaching and learning and learning-to-teach in schools.  So I suggest a good principle to adopt when considering the role of research in teacher education is a version of the Hippocratic Oath:  First do no harm. 

The history of educational research in the United States in the twentieth century supports a pessimistic assessment of the field’s impact on American schools and society.  There was Edward L. Thorndike, whose work emphasized the importance of differentiating the curriculum in order to provide the skills and knowledge that students would later need in playing sharply different roles in a stratified workforce.  There was David Snedden, who labored tirelessly to promote narrowly vocational training for that large group of students who would end up serving in what he called “the rank and file.”  There were the kingpins of educational testing, such as Lewis Terman, who developed instruments that allowed educators to measure student ability and student learning, which in turn helped determine which track students should occupy and what role they should play in later life.  Put together, these enormously productive educational researchers helped build a system of schooling that emphasized sorting over learning and promoted a vision of teaching that emphasized the delivery of curriculum over the engagement of students.  They laid the foundation for the current machinery of curriculum standards and high-stakes testing that has turned American teaching into an exercise in raising test scores.

            Of course, these educational researchers usually did not intend to do harm.  (Snedden is the exception here, a man who was on a mission to dumb down schooling for the lower classes.)  For the most part, they saw making curriculum more scientific and intelligence testing more accurate as ways to allow individuals with merit to escape from the clutches of their social origins.  Like most educational researchers, they were optimists about the possible impact of their work.  But their examples should serve as a cautionary tale for researchers who see their work as an unmitigated exercise in human improvement. 

One factor in particular tends to bend the work of researchers toward the dark side of the force, and that is research funding.  Very few government agencies and foundations are eager to support basic research in education.  Instead, funding aligns with the latest educational policy objectives, and to get funded, researchers need to demonstrate that their work will in some manner serve these objectives.  That is not to say that the researchers necessarily support these policy missions, but in order to win the grant they do have to harness their work, at least rhetorically, to the aims that motivate the request for proposals.  In the current global policy climate, that means the work needs to address issues around accountability and standards and improving test scores.  If you cannot spin your work in this direction, you will have trouble getting funded.

            Another factor that interferes with the educational researcher’s desire to do good for teachers and teacher educators is the need to confront an educational version of Gresham’s Law:  Bad research tends to displace good.  The best research is complex, and this puts the researcher at a competitive disadvantage, since policymakers and teacher educators prefer results that are definitive and easy to understand.  The most sophisticated work we produce tends to show an educational reality that has a complex array of elements interacting within a fiendishly complex organizational structure, which means that research findings have to be carefully qualified to the point where it is nearly impossible to say with clarity that a particular form of educational practice is effective or ineffective.  Instead, we have to report that it all depends.  In addition, in order to understand the research findings in any depth, you need to be able to sort through issues of design, methodology, and validity that are only accessible to experts in the field. 

Meanwhile, there is a vast array of research available to policymakers and practitioners that supports clear answers to educational problems and does so in a manner that is easy for the layperson to comprehend.  This kind of work comes from two types of producers: think tanks and entrepreneurial organizations for the delivery of education.  Think tanks remove a key element of complexity from the research process by deciding in advance what the politically desirable policy is and then conducting studies that provide clear support for that policy.  In the U.S. there are also a variety of non-governmental organizations that are active in promoting and delivering a particular brand of educational service, such as Teach For America (TFA, with its alternative to traditional teacher preparation) and the Knowledge Is Power Program (KIPP, with its alternative approach to running schools in low income neighborhoods).  These organizations commission research that conveniently demonstrates the effectiveness of what they do.  And both types of research producers are particularly effective at marketing their findings to the relevant actors in the policy and education communities. 

University-based educational researchers cannot compete with these other producers in clarity and understandability, but they can undercut the impact of this work a bit by doing what university researchers have always been good at.  We have an advantage in being the only group without a dog in the policy hunt, which allows us to perform credible fundamental research about how schools work, how teaching and learning happens, and how teachers learn to teach.  Work like this can help show how simplistic and politically biased these other research products really are.  And it won’t do much harm.

Posted in Liberal democracy, Philosophy, Politics, Populism

Francis Fukuyama: Liberalism and Its Discontents

This post is an essay by political scientist Francis Fukuyama about the challenges facing liberal democracy today from populisms of the left and right.  The original appeared in the online journal American Purpose, which he helped found.  

A large number of essays have emerged in recent years worrying about the future of liberal democracy, but to my taste most of these take a defensive posture that aims more at shoring up the walls protecting this institution than at analyzing why it is under attack.  I am personally committed to the values of liberal democracy, but I find these defenses more stirring than illuminating.  This is the first discussion I have seen about how liberalism’s problems arise from characteristics of liberalism itself.  

A key tension in liberal democracy is between its two components.  Liberal societies value individual liberty, the rule of law, and the protection of minority rights.  Democratic societies value social equality and majority rule.  There are some liberal societies that are not democracies, such as Hong Kong and Singapore in the late 20th century; and there are democratic societies that are illiberal, such as the ones emerging today in Hungary and India.  The key threat Fukuyama sees today is the rise of illiberalism in the democracies of Western Europe and the U.S., coming from both the right and the left.

Each political vector latches on to one of liberalism’s key weaknesses.  The right focuses on the individualism of the liberal creed, which treats inherently social human beings as independent self-interested actors.  This leaves a craving for community, religion, nationalism, and cultural identity.  Recent cases in point: Brexit, MAGA, and the rise of evangelicalism.  The left focuses on liberalism’s tolerance for social inequality, which is built into the idea of a state reluctant to interfere in private actions and choices.  This tendency has been exacerbated in the last 50 years by the rise of neoliberalism and its effort to free markets from state control and dismantle the welfare state.

Fukuyama shows that these problems with liberalism are nothing new.  He also suggests that liberal democracies have the capacity to respond to both sets of concerns within their institutional framework, as they have proven in the past.  Here’s his concluding comment:

Liberalism’s present-day crisis is not new; since its invention in the 17th century, liberalism has been repeatedly challenged by thick communitarians on the right and progressive egalitarians on the left. Liberalism properly understood is perfectly compatible with communitarian impulses and has been the basis for the flourishing of deep and diverse forms of civil society. It is also compatible with the social justice aims of progressives: One of its greatest achievements was the creation of modern redistributive welfare states in the late 20th century. Liberalism’s problem is that it works slowly through deliberation and compromise, and never achieves its communal or social justice goals as completely as their advocates would like. But it is hard to see how the discarding of liberal values is going to lead to anything in the long term other than increasing social conflict and ultimately a return to violence as a means of resolving differences.

At 4,500 words, this is a longer piece than most of my postings, but I think you’ll find it’s an easy read.  I’m no philosopher, and I tend to get tied in knots trying to follow philosophical arguments.  But I found that his discussion was something even I could understand.  Hope you enjoy it as much as I did. 


Liberalism and Its Discontents

The challenges from the left and the right.

Francis Fukuyama

05 Oct 2020, 12:15 am

Today, there is a broad consensus that democracy is under attack or in retreat in many parts of the world. It is being contested not just by authoritarian states like China and Russia, but by populists who have been elected in many democracies that seemed secure.

The “democracy” under attack today is a shorthand for liberal democracy, and what is really under greatest threat is the liberal component of this pair. The democracy part refers to the accountability of those who hold political power through mechanisms like free and fair multiparty elections under universal adult franchise. The liberal part, by contrast, refers primarily to a rule of law that constrains the power of government and requires that even the most powerful actors in the system operate under the same general rules as ordinary citizens. Liberal democracies, in other words, have a constitutional system of checks and balances that limits the power of elected leaders.

Democracy itself is being challenged by authoritarian states like Russia and China that manipulate or dispense with free and fair elections. But the more insidious threat arises from populists within existing liberal democracies who are using the legitimacy they gain through their electoral mandates to challenge or undermine liberal institutions. Leaders like Hungary’s Viktor Orbán, India’s Narendra Modi, and Donald Trump in the United States have tried to undermine judicial independence by packing courts with political supporters, have openly broken laws, or have sought to delegitimize the press by labeling mainstream media as “enemies of the people.” They have tried to dismantle professional bureaucracies and to turn them into partisan instruments. It is no accident that Orbán puts himself forward as a proponent of “illiberal democracy.”

The contemporary attack on liberalism goes much deeper than the ambitions of a handful of populist politicians, however. They would not be as successful as they have been were they not riding a wave of discontent with some of the underlying characteristics of liberal societies. To understand this, we need to look at the historical origins of liberalism, its evolution over the decades, and its limitations as a governing doctrine.

What Liberalism Was

Classical liberalism can best be understood as an institutional solution to the problem of governing over diversity. Or to put it in slightly different terms, it is a system for peacefully managing diversity in pluralistic societies. It arose in Europe in the late 17th and 18th centuries in response to the wars of religion that followed the Protestant Reformation, wars that lasted for 150 years and killed major portions of the populations of continental Europe.

While Europe’s religious wars were driven by economic and social factors, they derived their ferocity from the fact that the warring parties represented different Christian sects that wanted to impose their particular interpretation of religious doctrine on their populations. This was a period in which the adherents of forbidden sects were persecuted—heretics were regularly tortured, hanged, or burned at the stake—and their clergy hunted. The founders of modern liberalism like Thomas Hobbes and John Locke sought to lower the aspirations of politics, not to promote a good life as defined by religion, but rather to preserve life itself, since diverse populations could not agree on what the good life was. This was the distant origin of the phrase “life, liberty, and the pursuit of happiness” in the Declaration of Independence. The most fundamental principle enshrined in liberalism is one of tolerance: You do not have to agree with your fellow citizens about the most important things, but only that each individual should get to decide what those things are without interference from you or from the state. The limits of tolerance are reached only when the principle of tolerance itself is challenged, or when citizens resort to violence to get their way.

Understood in this fashion, liberalism was simply a pragmatic tool for resolving conflicts in diverse societies, one that sought to lower the temperature of politics by taking questions of final ends off the table and moving them into the sphere of private life. This remains one of its most important selling points today: If diverse societies like India or the United States move away from liberal principles and try to base national identity on race, ethnicity, or religion, they are inviting a return to potentially violent conflict. The United States suffered such conflict during its Civil War, and Modi’s India is inviting communal violence by shifting its national identity to one based on Hinduism.

There is however a deeper understanding of liberalism that developed in continental Europe that has been incorporated into modern liberal doctrine. In this view, liberalism is not simply a mechanism for pragmatically avoiding violent conflict, but also a means of protecting fundamental human dignity.

The ground of human dignity has shifted over time. In aristocratic societies, it was an attribute only of warriors who risked their lives in battle. Christianity universalized the concept of dignity based on the possibility of human moral choice: Human beings had a higher moral status than the rest of created nature but lower than that of God because they could choose between right and wrong. Unlike beauty or intelligence or strength, this characteristic was universally shared and made human beings equal in the sight of God. By the time of the Enlightenment, the capacity for choice or individual autonomy was given a secular form by thinkers like Rousseau (“perfectibility”) and Kant (a “good will”), and became the ground for the modern understanding of the fundamental right to dignity written into many 20th-century constitutions. Liberalism recognizes the equal dignity of every human being by granting them rights that protect individual autonomy: rights to speech, to assembly, to belief, and ultimately to participate in self-government.

Liberalism thus protects diversity by deliberately not specifying higher goals of human life. This disqualifies religiously defined communities as liberal. Liberalism also grants equal rights to all people considered full human beings, based on their capacity for individual choice. Liberalism thus tends toward a kind of universalism: Liberals care not just about their rights, but about the rights of others outside their particular communities. Thus the French Revolution carried the Rights of Man across Europe. From the beginning the major arguments among liberals were not over this principle, but rather over who qualified as rights-bearing individuals, with various groups—racial and ethnic minorities, women, foreigners, the propertyless, children, the insane, and criminals—excluded from this magic circle.

A final characteristic of historical liberalism was its association with the right to own property. Property rights and the enforcement of contracts through legal institutions became the foundation for economic growth in Britain, the Netherlands, Germany, the United States, and other states that were not necessarily democratic but protected property rights. For that reason liberalism is strongly associated with economic growth and modernization. Rights were protected by an independent judiciary that could call on the power of the state for enforcement. Properly understood, rule of law referred both to the application of day-to-day rules that governed interactions between individuals and to the design of political institutions that formally allocated political power through constitutions. The class that was most committed to liberalism historically was the class of property owners, not just agrarian landlords but the myriads of middle-class business owners and entrepreneurs that Karl Marx would label the bourgeoisie.

Liberalism is connected to democracy, but is not the same thing as it. It is possible to have regimes that are liberal but not democratic: Germany in the 19th century and Singapore and Hong Kong in the late 20th century come to mind. It is also possible to have democracies that are not liberal, like the ones Viktor Orbán and Narendra Modi are trying to create that privilege some groups over others. Liberalism is allied to democracy through its protection of individual autonomy, which ultimately implies a right to political choice and to the franchise. But it is not the same as democracy. From the French Revolution on, there were radical proponents of democratic equality who were willing to abandon liberal rule of law altogether and vest power in a dictatorial state that would equalize outcomes. Under the banner of Marxism-Leninism, this became one of the great fault lines of the 20th century. Even in avowedly liberal states, like many in late 19th- and early 20th-century Europe and North America, there were powerful trade union movements and social democratic parties that were more interested in economic redistribution than in the strict protection of property rights.

Liberalism also saw the rise of another competitor besides communism: nationalism. Nationalists rejected liberalism’s universalism and sought to confer rights only on their favored group, defined by culture, language, or ethnicity. As the 19th century progressed, Europe reorganized itself from a dynastic to a national basis, with the unification of Italy and Germany and with growing nationalist agitation within the multiethnic Ottoman and Austro-Hungarian empires. In 1914 this exploded into the Great War, which killed millions of people and laid the kindling for a second global conflagration in 1939.

The defeat of Germany, Italy, and Japan in 1945 paved the way for a restoration of liberalism as the democratic world’s governing ideology. Europeans saw the folly of organizing politics around an exclusive and aggressive understanding of nation, and created the European Community and later the European Union to subordinate the old nation-states to a cooperative transnational structure. For its part, the United States played a powerful role in creating a new set of international institutions, including the United Nations (and affiliated Bretton Woods organizations like the World Bank and IMF), GATT and the World Trade Organization, and cooperative regional ventures like NATO and NAFTA.

The largest threat to this order came from the former Soviet Union and its allied communist parties in Eastern Europe and the developing world. But the former Soviet Union collapsed in 1991, as did the perceived legitimacy of Marxism-Leninism, and many former communist countries sought to incorporate themselves into existing international institutions like the EU and NATO. This post-Cold War world would collectively come to be known as the liberal international order.

But the period from 1950 to the 1970s was the heyday of liberal democracy in the developed world. Liberal rule of law abetted democracy by protecting ordinary people from abuse: The U.S. Supreme Court, for example, was critical in breaking down legal racial segregation through decisions like Brown v. Board of Education. And democracy protected the rule of law: When Richard Nixon engaged in illegal wiretapping and use of the CIA, it was a democratically elected Congress that helped drive him from power. Liberal rule of law laid the basis for the strong post-World War II economic growth that then enabled democratically elected legislatures to create redistributive welfare states. Inequality was tolerable in this period because most people could see their material conditions improving. In short, this period saw a largely happy coexistence of liberalism and democracy throughout the developed world.

Discontents

Liberalism has been a broadly successful ideology, and one that is responsible for much of the peace and prosperity of the modern world. But it also has a number of shortcomings, some of which were triggered by external circumstances, and others of which are intrinsic to the doctrine. The first lies in the realm of economics, the second in the realm of culture.

The economic shortcomings have to do with the tendency of economic liberalism to evolve into what has come to be called “neoliberalism.” Neoliberalism is today a pejorative term used to describe a form of economic thought, often associated with the University of Chicago or the Austrian school, and economists like Friedrich Hayek, Milton Friedman, George Stigler, and Gary Becker. They sharply denigrated the role of the state in the economy, and emphasized free markets as spurs to growth and efficient allocators of resources. Many of the analyses and policies recommended by this school were in fact helpful and overdue: Economies were overregulated, state-owned companies inefficient, and governments responsible for the simultaneous high inflation and low growth experienced during the 1970s.

But valid insights about the efficiency of markets evolved into something of a religion, in which state intervention was opposed not based on empirical observation but as a matter of principle. Deregulation produced lower airline ticket prices and shipping costs for trucks, but also laid the ground for the great financial crisis of 2008 when it was applied to the financial sector. Privatization was pushed even in cases of natural monopolies like municipal water or telecom systems, leading to travesties like the privatization of Mexico’s TelMex, where a public monopoly was transformed into a private one. Perhaps most important, the fundamental insight of trade theory, that free trade leads to higher wealth for all parties concerned, neglected the further insight that this was true only in the aggregate, and that many individuals would be hurt by trade liberalization. The period from the 1980s onward saw the negotiation of both global and regional free trade agreements that shifted jobs and investment away from rich democracies to developing countries, increasing within-country inequalities. In the meantime, many countries starved their public sectors of resources and attention, leading to deficiencies in a host of public services from education to health to security.

The result was the world that emerged by the 2010s in which aggregate incomes were higher than ever but inequality within countries had also grown enormously. Many countries around the world saw the emergence of a small class of oligarchs, multibillionaires who could convert their economic resources into political power through lobbyists and purchases of media properties. Globalization enabled them to move their money to safe jurisdictions easily, starving states of tax revenue and making regulation very difficult. Globalization also entailed liberalization of rules concerning migration. Foreign-born populations began to increase in many Western countries, abetted by crises like the Syrian civil war that sent more than a million refugees into Europe. All of this paved the way for the populist reaction that became clearly evident in 2016 with Britain’s Brexit vote and the election of Donald Trump in the United States.

The second discontent with liberalism as it evolved over the decades was rooted in its very premises. Liberalism deliberately lowered the horizon of politics: A liberal state will not tell you how to live your life, or what a good life entails; how you pursue happiness is up to you. This produces a vacuum at the core of liberal societies, one that often gets filled by consumerism or pop culture or other random activities that do not necessarily lead to human flourishing. This has been the critique of a group of (mostly) Catholic intellectuals including Patrick Deneen, Sohrab Ahmari, Adrian Vermeule, and others, who feel that liberalism offers “thin gruel” for anyone with deeper moral commitments.

This leads us to a deeper stratum of discontent. Liberal theory, both in its economic and political guises, is built around individuals and their rights, and the political system protects their ability to make these choices autonomously. Indeed, in neoclassical economic theory, social cooperation arises only as a result of rational individuals deciding that it is in their self-interest to work with other individuals. Among conservative intellectuals, Patrick Deneen has gone the furthest by arguing that this whole approach is deeply flawed precisely because it is based on this individualistic premise, and sanctifies individual autonomy above all other goods. Thus for him, the entire American project based as it was on Lockean individualistic principles was misfounded. Human beings for him are not primarily autonomous individuals, but deeply social beings who are defined by their obligations and ties to a range of social structures, from families to kin groups to nations.

This social understanding of human nature was a truism taken for granted by most thinkers prior to the Western Enlightenment. It is also one supported by a great deal of recent research in the life sciences that shows that human beings are hard-wired to be social creatures: Many of our most salient faculties are ones that lead us to cooperate with one another in groups of various sizes and types. This cooperation does not arise necessarily from rational calculation; it is supported by emotional faculties like pride, guilt, shame, and anger that reinforce social bonds. The success of human beings over the millennia that has allowed our species to completely dominate its natural habitat has to do with this aptitude for following norms that induce social cooperation.

By contrast, the kind of individualism celebrated in liberal economic and political theory is a contingent development that emerged in Western societies over the centuries. Its history is long and complicated, but it originated in the inheritance rules set down by the Catholic Church in early medieval times which undermined the extended kinship networks that had characterized Germanic tribal societies. Individualism was further validated by its functionality in promoting market capitalism: Markets worked more efficiently if individuals were not constrained by obligations to kin and other social networks. But this kind of individualism has always been at odds with the social proclivities of human beings. It also does not come naturally to people in certain other non-Western societies like India or the Arab world, where kin, caste, or ethnic ties are still facts of life.

The implication of these observations for contemporary liberal societies is straightforward. Members of such societies want opportunities to bond with one another in a host of ways: as citizens of a nation, members of an ethnic or racial group, residents of a region, or adherents to a particular set of religious beliefs. Membership in such groups gives their lives meaning and texture in a way that mere citizenship in a liberal democracy does not.

Many of the critics of liberalism on the right feel that it has undervalued the nation and traditional national identity: Thus Viktor Orbán has asserted that Hungarian national identity is based on Hungarian ethnicity and on maintenance of traditional Hungarian values and cultural practices. New nationalists like Yoram Hazony celebrate nationhood and national culture as the rallying cry for community, and they bemoan liberalism’s dissolving effect on religious commitment, yearning for a thicker sense of community and shared values, underpinned by virtues in service of that community.

There are parallel discontents on the left. Juridical equality before the law does not mean that people will be treated equally in practice. Racism, sexism, and anti-gay bias all persist in liberal societies, and those injustices have become identities around which people could mobilize. The Western world has seen the emergence of a series of social movements since the 1960s, beginning with the civil rights movement in the United States, and movements promoting the rights of women, indigenous peoples, the disabled, the LGBT community, and the like. The more progress that has been made toward eradicating social injustices, the more intolerable the remaining injustices seem, and thus the greater the moral imperative to mobilize to correct them. The complaint of the left is different in substance but similar in structure to that of the right: Liberal society does not do enough to root out deep-seated racism, sexism, and other forms of discrimination, so politics must go beyond liberalism. And, as on the right, progressives want the deeper bonding and personal satisfaction of associating—in this case, with people who have suffered from similar indignities.

This instinct for bonding and the thinness of shared moral life in liberal societies has shifted global politics on both the right and the left toward a politics of identity and away from the liberal world order of the late 20th century. Liberal values like tolerance and individual freedom are prized most intensely when they are denied: People who live in brutal dictatorships want the simple freedom to speak, associate, and worship as they choose. But over time life in a liberal society comes to be taken for granted and its sense of shared community seems thin. Thus in the United States, arguments between right and left increasingly revolve around identity, and particularly racial identity issues, rather than around economic ideology and questions about the appropriate role of the state in the economy.

There is another significant issue that liberalism fails to grapple adequately with, which concerns the boundaries of citizenship and rights. The premises of liberal doctrine tend toward universalism: Liberals worry about human rights, and not just the rights of Englishmen, or white Americans, or some other restricted class of people. But rights are protected and enforced by states which have limited territorial jurisdiction, and the question of who qualifies as a citizen with voting rights becomes a highly contested one. Some advocates of migrant rights assert a universal human right to migrate, but this is a political nonstarter in virtually every contemporary liberal democracy. At the present moment, the issue of the boundaries of political communities is settled by some combination of historical precedent and political contestation, rather than being based on any clear liberal principle.

Conclusion

Vladimir Putin told the Financial Times that liberalism has become an “obsolete” doctrine. While it may be under attack from many quarters today, it is in fact more necessary than ever.

It is more necessary because it is fundamentally a means of governing over diversity, and the world is more diverse than it ever has been. Democracy disconnected from liberalism will not protect diversity, because majorities will use their power to repress minorities. Liberalism was born in the mid-17th century as a means of resolving religious conflicts, and it was reborn again after 1945 to solve conflicts between nationalisms. Any illiberal effort to build a social order around thick ties defined by race, ethnicity, or religion will exclude important members of the community, and down the road will lead to conflict. Russia itself retains liberal characteristics: Russian citizenship and nationality is not defined by either Russian ethnicity or the Orthodox religion; the Russian Federation’s millions of Muslim inhabitants enjoy equal juridical rights. In situations of de facto diversity, attempts to impose a single way of life on an entire population are a formula for dictatorship.

The only other way to organize a diverse society is through formal power-sharing arrangements among different identity groups that give only a nod toward shared nationality. This is the way that Lebanon, Iraq, Bosnia, and other countries in the Middle East and the Balkans are governed. This type of consociationalism leads to very poor governance and long-term instability, and works poorly in societies where identity groups are not geographically based. This is not a path down which any contemporary liberal democracy should want to tread.

That being said, what kinds of economic and social policies liberal societies should pursue is today a wide-open question. The evolution of liberalism into neoliberalism after the 1980s greatly reduced the policy space available to centrist political leaders, and permitted the growth of huge inequalities that have been fueling populisms of the right and the left. Classical liberalism is perfectly compatible with a strong state that seeks social protections for populations left behind by globalization, even as it protects basic property rights and a market economy. Liberalism is necessarily connected to democracy, and liberal economic policies need to be tempered by considerations of democratic equality and the need for political stability.

I suspect that most religious conservatives critical of liberalism today in the United States and other developed countries do not fool themselves into thinking that they can turn the clock back to a period when their social views were mainstream. Their complaint is a different one: that contemporary liberals are ready to tolerate any set of views, from radical Islam to Satanism, other than those of religious conservatives, and that they find their own freedom constrained.

This complaint is a serious one: Many progressives on the left have shown themselves willing to abandon liberal values in pursuit of social justice objectives. There has been a sustained intellectual attack on liberal principles over the past three decades coming out of academic pursuits like gender studies, critical race theory, postcolonial studies, and queer theory, that deny the universalistic premises underlying modern liberalism. The challenge is not simply one of intolerance of other views or “cancel culture” in the academy or the arts. Rather, the challenge is to basic principles that all human beings were born equal in a fundamental sense, or that a liberal society should strive to be color-blind. These different theories tend to argue that the lived experiences of specific and ever-narrower identity groups are incommensurate, and that what divides them is more powerful than what unites them as citizens. For some in the tradition of Michel Foucault, foundational approaches to cognition coming out of liberal modernity like the scientific method or evidence-based research are simply constructs meant to bolster the hidden power of racial and economic elites.

The issue here is thus not whether progressive illiberalism exists, but rather how great a long-term danger it represents. In countries from India and Hungary to the United States, nationalist conservatives have actually taken power and have sought to use the power of the state to dismantle liberal institutions and impose their own views on society as a whole. That danger is a clear and present one.

Progressive anti-liberals, by contrast, have not succeeded in seizing the commanding heights of political power in any developed country. Religious conservatives are still free to worship in any way they see fit, and indeed are organized in the United States as a powerful political bloc that can sway elections. Progressives exercise power in different and more nuanced ways, primarily through their dominance of cultural institutions like the mainstream media, the arts, and large parts of academia. The power of the state has been enlisted behind their agenda on such matters as striking down via the courts conservative restrictions on abortion and gay marriage and in the shaping of public school curricula. An open question for the future is whether cultural dominance today will ultimately lead to political dominance in the future, and thus a more thoroughgoing rollback of liberal rights by progressives.

Liberalism’s present-day crisis is not new; since its invention in the 17th century, liberalism has been repeatedly challenged by thick communitarians on the right and progressive egalitarians on the left. Liberalism properly understood is perfectly compatible with communitarian impulses and has been the basis for the flourishing of deep and diverse forms of civil society. It is also compatible with the social justice aims of progressives: One of its greatest achievements was the creation of modern redistributive welfare states in the late 20th century. Liberalism’s problem is that it works slowly through deliberation and compromise, and never achieves its communal or social justice goals as completely as their advocates would like. But it is hard to see how the discarding of liberal values is going to lead to anything in the long term other than increasing social conflict and ultimately a return to violence as a means of resolving differences.

Francis Fukuyama, chairman of the editorial board of American Purpose, directs the Center on Democracy, Development and the Rule of Law at Stanford University.