Posted in Empire, History, Meritocracy, Social status

Craig Brown – Ninety-Nine Glimpses of Princess Margaret

Here’s a challenge to any writer.  How do you write a book about someone famous who never did anything?  Craig Brown found an answer with his book, Ninety-Nine Glimpses of Princess Margaret.

Princess Margaret

In this book, he provides not a biography but a set of impressions of Queen Elizabeth’s younger sister as they were recounted by the people around her.  It’s as if she only existed in her reflection.  And he lays out these impressions in a series of 99 brief but poisonously pleasurable chapters.  The result is a feast for the reader and a model for writers of how to make something out of nothing.

Another thing I like about this book is that it undercuts some of my own critique of the meritocracy, which I frequently belabor in this blog.  Nothing like looking at minor royals to make meritocracy look pretty good.  At least in a meritocracy people do something to gain their renown.

Brown says he came upon the idea for this book while researching another one, when he kept finding Princess Margaret listed in a vast array of books about the UK in the late twentieth century.  

It is like playing ‘Where’s Wally?’, or staring at clouds in search of a face. Leave it long enough, and she’ll be there, rubbing shoulders with philosophers, film stars, novelists, politicians.

I spy with my little eye, something beginning with M!

Here she is, sitting above Marie Antoinette in Margaret Drabble’s biography of Angus Wilson:

Maraini, Dacia
Marchant, Bill (Sir Herbert)
Maresfield Park
Margaret, Princess
Marie Antoinette
Market Harborough

The reflections she left in these sources are anything but pretty.  As Brown puts it,

It has been said that history is written by the victors, but, on the most basic level, this is not quite true: it is written by the writers.

Princess Margaret had the misfortune to be surrounded by catty people who were eager to leave a written record of their encounters with her — for consumption by people like me who love to read gossipy accounts about the one percent.

In part these accounts serve as a welcome counterpoint to the typical syrupy stories promoted by the royal family.  Take, for example, Brown’s description of the Queen Mother:

Along with radiance, she emitted delight. Her authorised biographer, William Shawcross, chronicles this trail of delight. Wherever she goes, she delights everyone, and they are in turn delighted by her delight, whereupon she is delighted that they are delighted that she is delighted that … and so forth. If you shut his book too abruptly, you’ll notice delight oozing out of its sides.

But from the age of twenty-five, Princess Margaret was rarely described as ‘radiant’, other than on her wedding day, traditionally an occasion on which the adjective is obligatory, to be withheld only if the bride is actually hauled sobbing to the altar.

Most of the stories follow another arc: the Princess arrives late, delaying dinner to catch up with her punishing schedule of drinking and smoking. At the table, she grows more and more relaxed; by midnight, it dawns on the assembled company that she is in it for the long haul, which means that they will be too, since protocol dictates that no one can leave before she does. Then, just as everyone else is growing more chatty and carefree, the Princess abruptly remounts her high horse and upbraids a hapless guest for over-familiarity: ‘When you say my sister, I imagine you are referring to Her Majesty the Queen?’

At times, the reader feels sorry for the princess, who serves as everyone’s favorite punching bag.  As a royal, your status is purely at the mercy of birth order, which establishes your position in the line of succession to the crown.

How odd, to emerge from the womb fourth in line, to go up a notch at the age of six, up another notch that same year, and then to find yourself hurtling down, down, down to fourth place at the birth of Prince Charles in 1948, fifth at the birth of Princess Anne in 1950, then downhill all the way, overtaken by a non-stop stream of riff-raff – Prince Andrew and Prince Edward and Peter Phillips and Princess Beatrice and the rest of them, down, down, down, until by the time of your death you have plummeted to number eleven, behind Zara Phillips, later to become Zara Tindall, mother of Mia Tindall, who, if you were still alive, would herself be one ahead of you, even when she was still in nappies. Not many women have to face the fact that their careers peaked at the age of six, or to live with the prospect of losing their place in the pecking order to a succession of newborn babies, and to face demotion every few years thereafter. Small wonder, then, if Princess Margaret felt short-changed by life.

Her life was defined by deficit.

She remained conscious of her image as the one who wasn’t, and to some extent played on it: the one who wasn’t the Queen; the one who wasn’t taught constitutional history because she wasn’t the one who’d be needing it; the one who wasn’t in the first coach, and wouldn’t ever be first onto the Buckingham Palace balcony; the one who wasn’t given the important duties, but was obliged to make do with the also-rans: the naming of the more out-of-the-way council building, school, hospital or regiment, the state visit to the duller country, the patronage of the more obscure charity, the glad-handing of the smaller fry – the deputies, the vices, the second-in-commands. Her most devoted friends praised her stoicism for assuming the role of lightning rod. ‘For nearly five decades,’ said Reinaldo Herrera, ‘she bore with great dignity the criticism and envy that people dared not show the Queen.’

But sympathy for her situation is hard to sustain for very long when she spends so much of her time putting other people down.

Her antennae for transgressions were unusually sensitive, quivering into action at the slightest opportunity. ‘I detested Queen Mary,’ she told Gore Vidal. ‘She was rude to all of us except Lilibet, who was going to be Queen. Of course, she had an inferiority complex. We were Royal, and she was not.’ Unlike her, Queen Mary had been born a Serene Highness, not a Royal Highness. The difference, invisible to most, was monumental to Princess Margaret, who treasured the definite article in Her Royal Highness the Princess Margaret. Lacking that ‘the’, her grandmother was in some sense below the salt.

Far more than her sister, she was given to pulling rank. She once reminded her children that she was royal and they were not, and their father was most certainly not. ‘I am unique,’ she would sometimes pipe up at dinner parties. ‘I am the daughter of a King and the sister of a Queen.’ It was no ice-breaker.

Margaret had been born to the King-Emperor at a time when the map of the world was still largely pink. Her sense of entitlement, never modest, grew bigger and bigger with each passing year, gathering weight and speed as the British Empire grew smaller and smaller, and her role in it smaller still.

As a result, she played her role as an awkward mix of princess and bohemian, leaving those around her on edge about whether she was going to go high or go low.

She was of royalty, yet divorced from it; royalty set at an oblique angle, royalty through the looking glass, royalty as pastiche.

She was cabaret camp, Ma’am Ca’amp: she was Noël Coward, cigarette holders, blusher, Jean Cocteau, winking, sighing, dark glasses, Bet Lynch, charades, Watteau, colourful cocktails at midday, ballet, silk, hoity-toity, dismissive overstatement, arriving late, entering with a flourish, exiting with a flounce, pausing for effect, making a scene.

It is languid, bored, world-weary, detached, bored, fidgety, demanding, entitled, disgruntled, bored. It carries the seeds of its own sadness and scatters them around like confetti. It looks in the mirror for protracted periods of time, but avoids exchanging glances with itself. It is disappointment hiding behind the shield of hauteur, keeping pity at bay. ‘I have never known an unhappier woman,’ says John Julius.

Read the book.  You’ll have a hard time putting it down.

Posted in Democracy, Empire, History, Politics

“The Crown” and the Long Tradition of Petitioning the Monarch for Redress of Grievances

In episode 5 of The Crown‘s season 4, a desperate out-of-work painter named Michael Fagan breaks into Buckingham Palace, enters the queen’s bedroom, sits on the foot of her bed, and asks her for a cigarette.  “Filthy habit,” she replies. “Yes, I know, I’m trying to quit,” he says.  Then he gets down to business, asking her to provide relief from his dire economic condition.  The incident is real: it occurred on July 9, 1982, while Margaret Thatcher was prime minister.  Given the bizarre circumstances of the encounter, both parties were remarkably calm.  When security officers arrived, the queen asked them to delay the arrest until she could shake the intruder’s hand.

Fagan and the Queen

Watching this I was struck by the way it represented a much broader historical phenomenon:  the longstanding practice of people petitioning the sovereign for the redress of grievances.  In many ways, this practice helps answer the question of why monarchy was such a durable form of governance over the centuries.  Except in absolutist regimes, the king was able to position himself as head of state rather than head of government.  Government was the domain of his ministers, who set policy and passed laws.  The king appointed his ministers but could claim some distance from their policies, dispatching them quickly — by dismissal or decapitation — if the policies didn’t work out.  In this way, the king could represent himself as the guardian of the people while the ministers were just the functionaries of government.  And as sovereign, he granted ordinary people the right to petition him when government policies put them in jeopardy.  This gave the monarchy a remarkable durability, since it could evade responsibility for bad outcomes and earn affection from the populace by occasionally redressing their grievances.  It was a very effective good-cop/bad-cop arrangement, one that has endured right up to the present.

As constitutional monarch, Queen Elizabeth appointed Margaret Thatcher as prime minister, but she could plausibly deny responsibility for Thatcher’s draconian policies, which had caused so much economic harm to UK workers like Fagan.  So it made sense for him to approach her directly in seeking relief, as so many had approached monarchs in the past.  In his extreme state — barred from seeing his children, perpetually out of work, and with no hope in sight — he had nothing to lose by his daring palace break-in.

There is a long history of petitions to the crown.  In 1774, the First Continental Congress of the American colonies sent a petition to George III requesting relief from a set of grievances they put before him.

Most Gracious Sovereign: We, your Majesty’s faithful subjects of the Colonies of New-Hampshire, Massachusetts Bay, Rhode-Island and Providence Plantations, Connecticut, New-York, New-Jersey, Pennsylvania, the Counties of New-Castle, Kent, and Sussex, on Delaware, Maryland, Virginia, North Carolina, and South Carolina, in behalf of ourselves and the inhabitants of those Colonies who have deputed us to represent them in General Congress, by this our humble Petition, beg leave to lay our Grievances before the Throne.

After laying out these grievances in detail, the Congress closed the petition in a tone of hope and respect:

We therefore most earnestly beseech your Majesty, that your Royal authority and interposition may be used for our relief, and that a gracious Answer may be given to this Petition.

That your Majesty may enjoy every felicity through a long and glorious Reign, over loyal and happy subjects, and that your descendants may inherit your prosperity and Dominions till time shall be no more, is, and always will be, our sincere and fervent prayer.

The king, of course, did not respond as they had requested, and the ultimate result was the American war of independence.  So petitioning the sovereign has been no guarantee of success, but the sheer possibility of the king’s intervention has kept hope alive.

Consider another famous petition, whose outcome was both faster and worse.  In January of 1905, a group of 135,000 workingmen presented the following plea to Tsar Nicholas II at his winter palace in St. Petersburg.

Sire,—

We working men of St. Petersburg, our wives and children, and our parents, helpless, aged men and women, have come to you, О Tsar, in quest of justice and protection. We have been beggared, oppressed, over-burdened with excessive toil, treated with contumely. We are not recognized as normal human beings, but are dealt with as slaves who have to bear their bitter lot in silence. Patiently we endured this; but now we are being thrust deeper into the slough of right-lessness and ignorance, are being suffocated by despotism and arbitrary whims, and now, О Tsar, we have no strength left. The awful moment has come when death is better than the prolongation of our unendurable tortures.

After a long list of grievances and demands, the petition closed with the following words:

Those, Sire, constitute our principal needs, which we come to lay before you. Give orders and swear that they shall be fulfilled, and you will render Russia happy and glorious, and will impress your name on our hearts and on the hearts of our children, and our children’s children for all time. But if you withhold the word, if you are not responsive to our petition, we will die here on this square before your palace, for we have nowhere else to go to and no reason to repair elsewhere. For us there are but two roads, one leading to liberty and happiness, the other to the tomb. Point, Sire, to either of them; we will take it, even though it lead to death. Should our lives serve as a holocaust of agonizing Russia, we will not grudge these sacrifices; we gladly offer them up.

The Tsar chose Door No. 2 and several hundred workers were gunned down in front of the palace, an event remembered in history as Bloody Sunday.

Michael Fagan’s petition didn’t end well either.  Thatcher’s policies remained in effect; he was declared insane by a court and committed to a mental hospital.  Released three months later, he spent years in and out of prison.  But he’s also been talking about the incident ever since.  Wouldn’t you?

Constitutional monarchs like Queen Elizabeth continue to enjoy the benefit of being head of state without the responsibility and accountability that come from being head of government.  Presidents, in systems like that of the United States, don’t have the luxury of enjoying prestige without power, of being the symbol of the nation while remaining above the fray.  On the other hand, maybe pinning your hopes on someone who can’t deliver for you is a fool’s game.  Maybe we’re better off with a leader who can be held to account.

Posted in History, Meritocracy, Politics, Populism

Graeme Wood — The Next Decade Could Be Even Worse

This post reproduces a piece by Graeme Wood from the December issue of The Atlantic.  Here’s a link to the original.

It’s a profile of Peter Turchin, a population ecologist who decided to turn his skills in mathematical modeling toward big history — looking for patterns across long expanses of time that help explain the rise and fall of civilizations.  His prognosis for our own is not very promising.  He sees rising prospects for civil unrest.

The fundamental problems, he says, are a dark triad of social maladies: a bloated elite class, with too few elite jobs to go around; declining living standards among the general population; and a government that can’t cover its financial positions. His models, which track these factors in other societies across history, are too complicated to explain in a nontechnical publication. But they’ve succeeded in impressing writers for nontechnical publications, and have won him comparisons to other authors of “megahistories,” such as Jared Diamond and Yuval Noah Harari.

I’m a fan of megahistories by people like Diamond and Harari and by historians like Walter Scheidel and Ian Morris, whom I’ve discussed here.  I’m particularly intrigued by Turchin’s analysis of the overproduction of elites, which resonates with the critiques of meritocracy that I’ve explored in this blog (e.g., here, here, and here).

Of the three factors driving social violence, Turchin stresses most heavily “elite overproduction”—the tendency of a society’s ruling classes to grow faster than the number of positions for their members to fill. One way for a ruling class to grow is biologically—think of Saudi Arabia, where princes and princesses are born faster than royal roles can be created for them. In the United States, elites overproduce themselves through economic and educational upward mobility: More and more people get rich, and more and more get educated. Neither of these sounds bad on its own. Don’t we want everyone to be rich and educated? The problems begin when money and Harvard degrees become like royal titles in Saudi Arabia. If lots of people have them, but only some have real power, the ones who don’t have power eventually turn on the ones who do.

When you’re reading the following passage, I dare you not to think about Donald Trump and Steve Bannon.

Elite overproduction creates counter-elites, and counter-elites look for allies among the commoners. If commoners’ living standards slip—not relative to the elites, but relative to what they had before—they accept the overtures of the counter-elites and start oiling the axles of their tumbrels. Commoners’ lives grow worse, and the few who try to pull themselves onto the elite lifeboat are pushed back into the water by those already aboard. The final trigger of impending collapse, Turchin says, tends to be state insolvency. At some point rising insecurity becomes expensive. The elites have to pacify unhappy citizens with handouts and freebies—and when these run out, they have to police dissent and oppress people. Eventually the state exhausts all short-term solutions, and what was heretofore a coherent civilization disintegrates.

See what you think about his take on things.


The Next Decade Could Be Even Worse

A historian believes he has discovered iron laws that predict the rise and fall of societies. He has bad news.

Graeme Wood

Peter Turchin, one of the world’s experts on pine beetles and possibly also on human beings, met me reluctantly this summer on the campus of the University of Connecticut at Storrs, where he teaches. Like many people during the pandemic, he preferred to limit his human contact. He also doubted whether human contact would have much value anyway, when his mathematical models could already tell me everything I needed to know.

But he had to leave his office sometime. (“One way you know I am Russian is that I cannot think sitting down,” he told me. “I have to go for a walk.”) Neither of us had seen much of anyone since the pandemic had closed the country several months before. The campus was quiet. “A week ago, it was even more like a neutron bomb hit,” Turchin said. Animals were timidly reclaiming the campus, he said: squirrels, woodchucks, deer, even an occasional red-tailed hawk. During our walk, groundskeepers and a few kids on skateboards were the only other representatives of the human population in sight.

The year 2020 has been kind to Turchin, for many of the same reasons it has been hell for the rest of us. Cities on fire, elected leaders endorsing violence, homicides surging—to a normal American, these are apocalyptic signs. To Turchin, they indicate that his models, which incorporate thousands of years of data about human history, are working. (“Not all of human history,” he corrected me once. “Just the last 10,000 years.”) He has been warning for a decade that a few key social and political trends portend an “age of discord,” civil unrest and carnage worse than most Americans have experienced. In 2010, he predicted that the unrest would get serious around 2020, and that it wouldn’t let up until those social and political trends reversed. Havoc at the level of the late 1960s and early ’70s is the best-case scenario; all-out civil war is the worst.

The fundamental problems, he says, are a dark triad of social maladies: a bloated elite class, with too few elite jobs to go around; declining living standards among the general population; and a government that can’t cover its financial positions. His models, which track these factors in other societies across history, are too complicated to explain in a nontechnical publication. But they’ve succeeded in impressing writers for nontechnical publications, and have won him comparisons to other authors of “megahistories,” such as Jared Diamond and Yuval Noah Harari. The New York Times columnist Ross Douthat had once found Turchin’s historical modeling unpersuasive, but 2020 made him a believer: “At this point,” Douthat recently admitted on a podcast, “I feel like you have to pay a little more attention to him.”

Diamond and Harari aimed to describe the history of humanity. Turchin looks into a distant, science-fiction future for peers. In War and Peace and War (2006), his most accessible book, he likens himself to Hari Seldon, the “maverick mathematician” of Isaac Asimov’s Foundation series, who can foretell the rise and fall of empires. In those 10,000 years’ worth of data, Turchin believes he has found iron laws that dictate the fates of human societies.

The fate of our own society, he says, is not going to be pretty, at least in the near term. “It’s too late,” he told me as we passed Mirror Lake, which UConn’s website describes as a favorite place for students to “read, relax, or ride on the wooden swing.” The problems are deep and structural—not the type that the tedious process of democratic change can fix in time to forestall mayhem. Turchin likens America to a huge ship headed directly for an iceberg: “If you have a discussion among the crew about which way to turn, you will not turn in time, and you hit the iceberg directly.” The past 10 years or so have been discussion. That sickening crunch you now hear—steel twisting, rivets popping—is the sound of the ship hitting the iceberg.

“We are almost guaranteed” five hellish years, Turchin predicts, and likely a decade or more. The problem, he says, is that there are too many people like me. “You are ruling class,” he said, with no more rancor than if he had informed me that I had brown hair, or a slightly newer iPhone than his. Of the three factors driving social violence, Turchin stresses most heavily “elite overproduction”—the tendency of a society’s ruling classes to grow faster than the number of positions for their members to fill. One way for a ruling class to grow is biologically—think of Saudi Arabia, where princes and princesses are born faster than royal roles can be created for them. In the United States, elites overproduce themselves through economic and educational upward mobility: More and more people get rich, and more and more get educated. Neither of these sounds bad on its own. Don’t we want everyone to be rich and educated? The problems begin when money and Harvard degrees become like royal titles in Saudi Arabia. If lots of people have them, but only some have real power, the ones who don’t have power eventually turn on the ones who do.

In the United States, Turchin told me, you can see more and more aspirants fighting for a single job at, say, a prestigious law firm, or in an influential government sinecure, or (here it got personal) at a national magazine. Perhaps seeing the holes in my T-shirt, Turchin noted that a person can be part of an ideological elite rather than an economic one. (He doesn’t view himself as a member of either. A professor reaches at most a few hundred students, he told me. “You reach hundreds of thousands.”) Elite jobs do not multiply as fast as elites do. There are still only 100 Senate seats, but more people than ever have enough money or degrees to think they should be running the country. “You have a situation now where there are many more elites fighting for the same position, and some portion of them will convert to counter-elites,” Turchin said.

Donald Trump, for example, may appear elite (rich father, Wharton degree, gilded commodes), but Trumpism is a counter-elite movement. His government is packed with credentialed nobodies who were shut out of previous administrations, sometimes for good reasons and sometimes because the Groton-Yale establishment simply didn’t have any vacancies. Trump’s former adviser and chief strategist Steve Bannon, Turchin said, is a “paradigmatic example” of a counter-elite. He grew up working-class, went to Harvard Business School, and got rich as an investment banker and by owning a small stake in the syndication rights to Seinfeld. None of that translated to political power until he allied himself with the common people. “He was a counter-elite who used Trump to break through, to put the white working males back in charge,” Turchin said.

Elite overproduction creates counter-elites, and counter-elites look for allies among the commoners. If commoners’ living standards slip—not relative to the elites, but relative to what they had before—they accept the overtures of the counter-elites and start oiling the axles of their tumbrels. Commoners’ lives grow worse, and the few who try to pull themselves onto the elite lifeboat are pushed back into the water by those already aboard. The final trigger of impending collapse, Turchin says, tends to be state insolvency. At some point rising insecurity becomes expensive. The elites have to pacify unhappy citizens with handouts and freebies—and when these run out, they have to police dissent and oppress people. Eventually the state exhausts all short-term solutions, and what was heretofore a coherent civilization disintegrates.

Turchin’s prognostications would be easier to dismiss as barstool theorizing if the disintegration were not happening now, roughly as the Seer of Storrs foretold 10 years ago. If the next 10 years are as seismic as he says they will be, his insights will have to be accounted for by historians and social scientists—assuming, of course, that there are still universities left to employ such people.

Turchin was born in 1957 in Obninsk, Russia, a city built by the Soviet state as a kind of nerd heaven, where scientists could collaborate and live together. His father, Valentin, was a physicist and political dissident, and his mother, Tatiana, had trained as a geologist. They moved to Moscow when he was 7 and in 1978 fled to New York as political refugees. There they quickly found a community that spoke the household language, which was science. Valentin taught at the City University of New York, and Peter studied biology at NYU and earned a zoology doctorate from Duke.

Turchin wrote a dissertation on the Mexican bean beetle, a cute, ladybuglike pest that feasts on legumes in areas between the United States and Guatemala. When Turchin began his research, in the early 1980s, ecology was evolving in a way that some fields already had. The old way to study bugs was to collect them and describe them: count their legs, measure their bellies, and pin them to pieces of particleboard for future reference. (Go to the Natural History Museum in London, and in the old storerooms you can still see the shelves of bell jars and cases of specimens.) In the ’70s, the Australian physicist Robert May had turned his attention to ecology and helped transform it into a mathematical science whose tools included supercomputers along with butterfly nets and bottle traps. Yet in the early days of his career, Turchin told me, “the majority of ecologists were still quite math-phobic.”

Turchin did, in fact, do fieldwork, but he contributed to ecology primarily by collecting and using data to model the dynamics of populations—for example, determining why a pine-beetle population might take over a forest, or why that same population might decline. (He also worked on moths, voles, and lemmings.)

In the late ’90s, disaster struck: Turchin realized that he knew everything he ever wanted to know about beetles. He compares himself to Thomasina Coverly, the girl genius in the Tom Stoppard play Arcadia, who obsessed about the life cycles of grouse and other creatures around her Derbyshire country house. Stoppard’s character had the disadvantage of living a century and a half before the development of chaos theory. “She gave up because it was just too complicated,” Turchin said. “I gave up because I solved the problem.”

Turchin published one final monograph, Complex Population Dynamics: A Theoretical / Empirical Synthesis (2003), then broke the news to his UConn colleagues that he would be saying a permanent sayonara to the field, although he would continue to draw a salary as a tenured professor in their department. (He no longer gets raises, but he told me he was already “at a comfortable level, and, you know, you don’t need so much money.”) “Usually a midlife crisis means you divorce your old wife and marry a graduate student,” Turchin said. “I divorced an old science and married a new one.”

One of his last papers appeared in the journal Oikos. “Does population ecology have general laws?” Turchin asked. Most ecologists said no: Populations have their own dynamics, and each situation is different. Pine beetles reproduce, run amok, and ravage a forest for pine-beetle reasons, but that does not mean mosquito or tick populations will rise and fall according to the same rhythms. Turchin suggested that “there are several very general law-like propositions” that could be applied to ecology. After its long adolescence of collecting and cataloging, ecology had enough data to describe these universal laws—and to stop pretending that every species had its own idiosyncrasies. “Ecologists know these laws and should call them laws,” he said. Turchin proposed, for example, that populations of organisms grow or decline exponentially, not linearly. This is why if you buy two guinea pigs, you will soon have not just a few more guinea pigs but a home—and then a neighborhood—full of the damn things (as long as you keep feeding them). This law is simple enough to be understood by a high-school math student, and it describes the fortunes of everything from ticks to starlings to camels. The laws Turchin applied to ecology—and his insistence on calling them laws—generated respectful controversy at the time. Now they are cited in textbooks.

Having left ecology, Turchin began similar research that attempted to formulate general laws for a different animal species: human beings. He’d long had a hobbyist’s interest in history. But he also had a predator’s instinct to survey the savanna of human knowledge and pounce on the weakest prey. “All sciences go through this transition to mathematization,” Turchin told me. “When I had my midlife crisis, I was looking for a subject where I could help with this transition to a mathematized science. There was only one left, and that was history.”

Historians read books, letters, and other texts. Occasionally, if they are archaeologically inclined, they dig up potsherds and coins. But to Turchin, relying solely on these methods was the equivalent of studying bugs by pinning them to particleboard and counting their antennae. If the historians weren’t going to usher in a mathematical revolution themselves, he would storm their departments and do it for them.

“There is a longstanding debate among scientists and philosophers as to whether history has general laws,” he and a co-author wrote in Secular Cycles (2009). “A basic premise of our study is that historical societies can be studied with the same methods physicists and biologists used to study natural systems.” Turchin founded a journal, Cliodynamics, dedicated to “the search for general principles explaining the functioning and dynamics of historical societies.” (The term is his coinage; Clio is the muse of history.) He had already announced the discipline’s arrival in an article in Nature, where he likened historians reluctant to build general principles to his colleagues in biology “who care most for the private life of warblers.” “Let history continue to focus on the particular,” he wrote. Cliodynamics would be a new science. While historians dusted bell jars in the basement of the university, Turchin and his followers would be upstairs, answering the big questions.

To seed the journal’s research, Turchin masterminded a digital archive of historical and archaeological data. The coding of its records requires finesse, he told me, because (for example) the method of determining the size of the elite-aspirant class of medieval France might differ from the measure of the same class in the present-day United States. (For medieval France, a proxy is the membership in its noble class, which became glutted with second and third sons who had no castles or manors to rule over. One American proxy, Turchin says, is the number of lawyers.) But once the data are entered, after vetting by Turchin and specialists in the historical period under review, they offer quick and powerful suggestions about historical phenomena.

Historians of religion have long pondered the relationship between the rise of complex civilization and the belief in gods—especially “moralizing gods,” the kind who scold you for sinning. Last year, Turchin and a dozen co-authors mined the database (“records from 414 societies that span the past 10,000 years from 30 regions around the world, using 51 measures of social complexity and 4 measures of supernatural enforcement of morality”) to answer the question conclusively. They found that complex societies are more likely to have moralizing gods, but the gods tend to start their scolding after the societies get complex, not before. As the database expands, it will attempt to remove more questions from the realm of humanistic speculation and sock them away in a drawer marked answered.

One of Turchin’s most unwelcome conclusions is that complex societies arise through war. The effect of war is to reward communities that organize themselves to fight and survive, and it tends to wipe out ones that are simple and small-scale. “No one wants to accept that we live in the societies we do”—rich, complex ones with universities and museums and philosophy and art—“because of an ugly thing like war,” he said. But the data are clear: Darwinian processes select for complex societies because they kill off simpler ones. The notion that democracy finds its strength in its essential goodness and moral improvement over its rival systems is likewise fanciful. Instead, democratic societies flourish because they have a memory of being nearly obliterated by an external enemy. They avoided extinction only through collective action, and the memory of that collective action makes democratic politics easier to conduct in the present, Turchin said. “There is a very close correlation between adopting democratic institutions and having to fight a war for survival.”

Also unwelcome: the conclusion that civil unrest might soon be upon us, and might reach the point of shattering the country. In 2012, Turchin published an analysis of political violence in the United States, again starting with a database. He classified 1,590 incidents—riots, lynchings, any political event that killed at least one person—from 1780 to 2010. Some periods were placid and others bloody, with peaks of brutality in 1870, 1920, and 1970, a 50-year cycle. Turchin excludes the ultimate violent incident, the Civil War, as a “sui generis event.” The exclusion may seem suspicious, but to a statistician, “trimming outliers” is standard practice. Historians and journalists, by contrast, tend to focus on outliers—because they are interesting—and sometimes miss grander trends.

Certain aspects of this cyclical view require relearning portions of American history, with special attention paid to the numbers of elites. The industrialization of the North, starting in the mid-19th century, Turchin says, made huge numbers of people rich. The elite herd was culled during the Civil War, which killed off or impoverished the southern slaveholding class, and during Reconstruction, when America experienced a wave of assassinations of Republican politicians. (The most famous of these was the assassination of James A. Garfield, the 20th president of the United States, by a lawyer who had demanded but not received a political appointment.) It wasn’t until the Progressive reforms of the 1920s, and later the New Deal, that elite overproduction actually slowed, at least for a time.

This oscillation between violence and peace, with elite overproduction as the first horseman of the recurring American apocalypse, inspired Turchin’s 2020 prediction. In 2010, when Nature surveyed scientists about their predictions for the coming decade, most took the survey as an invitation to self-promote and rhapsodize, dreamily, about coming advances in their fields. Turchin retorted with his prophecy of doom and said that nothing short of fundamental change would stop another violent turn.

Turchin’s prescriptions are, as a whole, vague and unclassifiable. Some sound like ideas that might have come from Senator Elizabeth Warren—tax the elites until there are fewer of them—while others, such as a call to reduce immigration to keep wages high for American workers, resemble Trumpian protectionism. Other policies are simply heretical. He opposes credential-oriented higher education, for example, which he says is a way of mass-producing elites without also mass-producing elite jobs for them to occupy. Architects of such policies, he told me, are “creating surplus elites, and some become counter-elites.” A smarter approach would be to keep the elite numbers small, and the real wages of the general population on a constant rise.

How to do that? Turchin says he doesn’t really know, and it isn’t his job to know. “I don’t really think in terms of specific policy,” he told me. “We need to stop the runaway process of elite overproduction, but I don’t know what will work to do that, and nobody else does. Do you increase taxation? Raise the minimum wage? Universal basic income?” He conceded that each of these possibilities would have unpredictable effects. He recalled a story he’d heard back when he was still an ecologist: The Forest Service had once implemented a plan to reduce the population of bark beetles with pesticide—only to find that the pesticide killed off the beetles’ predators even more effectively than it killed the beetles. The intervention resulted in more beetles than before. The lesson, he said, was to practice “adaptive management,” changing and modulating your approach as you go.

Eventually, Turchin hopes, our understanding of historical dynamics will mature to the point that no government will make policy without reflecting on whether it is hurtling toward a mathematically preordained disaster. He says he could imagine an Asimovian agency that keeps tabs on leading indicators and advises accordingly. It would be like the Federal Reserve, but instead of monitoring inflation and controlling monetary supply, it would be tasked with averting total civilizational collapse.

Historians have not, as a whole, accepted Turchin’s terms of surrender graciously. Since at least the 19th century, the discipline has embraced the idea that history is irreducibly complex, and by now most historians believe that the diversity of human activity will foil any attempt to come up with general laws, especially predictive ones. (As Jo Guldi, a historian at Southern Methodist University, put it to me, “Some historians regard Turchin the way astronomers regard Nostradamus.”) Instead, each historical event must be lovingly described, and its idiosyncrasies understood to be limited in relevance to other events. The idea that one thing causes another, and that the causal pattern can tell you about sequences of events in another place or century, is foreign territory.

One might even say that what defines history as a humanistic enterprise is the belief that it is not governed by scientific laws—that the working parts of human societies are not like billiard balls, which, if arranged at certain angles and struck with a certain amount of force, will invariably crack just so and roll toward a corner pocket of war, or a side pocket of peace. Turchin counters that he has heard claims of irreducible complexity before, and that steady application of the scientific method has succeeded in managing that complexity. Consider, he says, the concept of temperature—something so obviously quantifiable now that we laugh at the idea that it’s too vague to measure. “Back before people knew what temperature was, the best thing you could do is to say you’re hot or cold,” Turchin told me. The concept depended on many factors: wind, humidity, ordinary human differences in perception. Now we have thermometers. Turchin wants to invent a thermometer for human societies that will measure when they are likely to boil over into war.

One social scientist who can speak to Turchin in his own mathematical argot is Dingxin Zhao, a sociology professor at the University of Chicago who is—incredibly—also a former mathematical ecologist. (He earned a doctorate modeling carrot-weevil population dynamics before earning a second doctorate in Chinese political sociology.) “I came from a natural-science background,” Zhao told me, “and in a way I am sympathetic to Turchin. If you come to social science from natural sciences, you have a powerful way of looking at the world. But you may also make big mistakes.”

Zhao said that human beings are just much more complicated than bugs. “Biological species don’t strategize in a very flexible way,” he told me. After millennia of evolutionary R&D, a woodpecker will come up with ingenious ways to stick its beak into a tree in search of food. It might even have social characteristics—an alpha woodpecker might strong-wing beta woodpeckers into giving it first dibs on the tastiest termites. But humans are much wilier social creatures, Zhao said. A woodpecker will eat a termite, but it “will not explain that he is doing so because it is his divine right.” Humans pull ideological power moves like this all the time, Zhao said, and to understand “the decisions of a Donald Trump, or a Xi Jinping,” a natural scientist has to incorporate the myriad complexities of human strategy, emotion, and belief. “I made that change,” Zhao told me, “and Peter Turchin has not.”

Turchin is nonetheless filling a historiographical niche left empty by academic historians with allergies not just to science but to a wide-angle view of the past. He places himself in a Russian tradition prone to thinking sweeping, Tolstoyan thoughts about the path of history. By comparison, American historians mostly look like micro-historians. Few would dare to write a history of the United States, let alone one of human civilization. Turchin’s approach is also Russian, or post-Soviet, in its rejection of the Marxist theory of historical progress that had been the official ideology of the Soviet state. When the U.S.S.R. collapsed, so too did the requirement that historical writing acknowledge international communism as the condition toward which the arc of history was bending. Turchin dropped ideology altogether, he says: Rather than bending toward progress, the arc in his view bends all the way back on itself, in a never-­ending loop of boom and bust. This puts him at odds with American historians, many of whom harbor an unspoken faith that liberal democracy is the end state of all history.

Writing history in this sweeping, cyclical way is easier if you are trained outside the field. “If you look at who is doing these megahistories, more often than not, it’s not actual historians,” Walter Scheidel, an actual historian at Stanford, told me. (Scheidel, whose books span millennia, takes Turchin’s work seriously and has even co-written a paper with him.) Instead they come from scientific fields where these taboos do not dominate. The genre’s most famous book, Guns, Germs, and Steel (1997), beheld 13,000 years of human history in a single volume. Its author, Jared Diamond, spent the first half of his career as one of the world’s foremost experts on the physiology of the gallbladder. Steven Pinker, a cognitive psychologist who studies how children acquire parts of speech, has written a megahistory about the decline of violence across thousands of years, and about human flourishing since the Enlightenment. Most historians I asked about these men—and for some reason megahistory is nearly always a male pursuit—used terms like laughingstock and patently tendentious to describe them.

Pinker retorts that historians are resentful of the attention “disciplinary carpetbaggers” like himself have received for applying scientific methods to the humanities and coming up with conclusions that had eluded the old methods. He is skeptical of Turchin’s claims about historical cycles, but he believes in data-driven historical inquiry. “Given the noisiness of human behavior and the prevalence of cognitive biases, it’s easy to delude oneself about a historical period or trend by picking whichever event suits one’s narrative,” he says. The only answer is to use large data sets. Pinker thanks traditional historians for their work collating these data sets; he told me in an email that they “deserve extraordinary admiration for their original research (‘brushing the mouse shit off moldy court records in the basement of town halls,’ as one historian put it to me).” He calls not for surrender but for a truce. “There’s no reason that traditional history and data science can’t merge into a cooperative enterprise,” Pinker wrote. “Knowing stuff is hard; we need to use every available tool.”

Guldi, the Southern Methodist University professor, is one scholar who has embraced tools previously scorned by historians. She is a pioneer of data-driven history that considers timescales beyond a human lifetime. Her primary technique is the mining of texts—for example, sifting through the millions and millions of words captured in parliamentary debate in order to understand the history of land use in the final century of the British empire. Guldi may seem a potential recruit to cliodynamics, but her approach to data sets is grounded in the traditional methods of the humanities. She counts the frequency of words, rather than trying to find ways to compare big, fuzzy categories among civilizations. Turchin’s conclusions are only as good as his databases, she told me, and any database that tries to code something as complex as who constitutes a society’s elites—then tries to make like-to-like comparisons across millennia and oceans—will meet with skepticism from traditional historians, who deny that the subject to which they have devoted their lives can be expressed in Excel format. Turchin’s data are also limited to big-picture characteristics observed over 10,000 years, or about 200 lifetimes. By scientific standards, a sample size of 200 is small, even if it is all humanity has.

Yet 200 lifetimes is at least more ambitious than the average historical purview of only one. And the reward for that ambition—in addition to the bragging rights for having potentially explained everything that has ever happened to human beings—includes something every writer wants: an audience. Thinking small rarely gets you quoted in The New York Times. Turchin has not yet attracted the mass audiences of a Diamond, Pinker, or Harari. But he has lured connoisseurs of political catastrophe, journalists and pundits looking for big answers to pressing questions, and true believers in the power of science to conquer uncertainty and improve the world. He has certainly outsold most beetle experts.

If he is right, it is hard to see how history will avoid assimilating his insights—if it can avoid being abolished by them. Privately, some historians have told me they consider the tools he uses powerful, if a little crude. Cliodynamics is now on a long list of methods that arrived on the scene promising to revolutionize history. Many were fads, but some survived that stage to take their rightful place in an expanding historiographical tool kit. Turchin’s methods have already shown their power. Cliodynamics offers scientific hypotheses, and human history will give us more and more opportunities to check its predictions—revealing whether Peter Turchin is a Hari Seldon or a mere Nostradamus. For my own sake, there are few thinkers whom I am more eager to see proved wrong.

This article appears in the December 2020 print edition with the headline “The Historian Who Sees the Future.” It was first published online on November 12, 2020.

GRAEME WOOD is a staff writer at The Atlantic and the author of The Way of the Strangers: Encounters With the Islamic State.

Posted in Course Syllabus, History, History of School Reform Class, School reform

Class on History of School Reform in the U.S.

This post contains all of the material for the class on the History of School Reform in the US that I taught at the Stanford Graduate School of Education for 15 years.  In retirement I wanted to make the course available on the internet to anyone who is interested.  If you are a college teacher, feel free to use any of it in whole or part.  If you are a student or a group of students, you can work your way through the class on your own at your own pace.  Any benefits that accrue are purely intrinsic, since no one will get college credits.  But that also means you’re free to pursue the parts of the class that you want and you don’t have any requirements or papers.  How great is that.

I’m posting the full syllabus below.  But it would be more useful to download it as a Word document through this link.  Feel free to share this with anyone you like.

All of the course materials are embedded in the syllabus through hyperlinks to a Google drive.  For each week, this includes a link to tips for approaching the readings, links to the PDFs of the readings, and a link to the slides for that week’s class.  Slides also include links to additional sources.  So the syllabus is all that is needed to gain access to the full class.

What are the central themes of this class? 

One is that reform of schooling is all about trying to improve it.  But whether a particular reform constitutes an improvement or a detriment to school and society rests in the eye of the beholder.  It all depends on what goals you want schools to accomplish, but the fact is that we don’t fully agree on what those goals are.  So reform is not a linear process leading inevitably toward a better future but a cyclical process resulting from efforts to accomplish alternative goals that are in tension with each other.  This produces a series of long-term pendulum swings between alternative visions of education.

A second theme is that examining the history of reform efforts can provide us with rich insight into the nature of the educational system.  Think of reforms as a series of experiments in school improvement.  How the system reacts to each of these experimental interventions tells us something important about how the system operates. 

One conclusion I have drawn from examining this process is that reformers have been overconfident about their understanding of what the problems with schooling are and therefore what solutions are required.  The fact is that a lot of reform efforts make schools worse.  By failing to understand the complexity of the system and the way it has evolved over time, reformers have frequently been throwing a monkey wrench into the works.  They try to solve one problem and in the process create another.  (This is one reason why Dick Elmore and Milbrey McLaughlin call school reform “steady work.”)

So one message the course sends to students is to show a little humility in your efforts to improve schools.  Learn more about how the system works before tinkering with it.  And consider the possibility that you might make things worse.

I hope you find this useful.

History of School Reform in the US

A Ten-Week Class

David Labaree

Web: http://www.stanford.edu/~dlabaree/

Twitter: @Dlabaree

Blog: https://davidlabaree.com/

Course Description

In this course, we will explore the history of school reform in the United States.  In only 10 weeks we will not be able to pursue a systematic study of this history from beginning to end, so instead we will explore a few of the major issues in this history and examine some pertinent cases of school reform to consider their consequences.  School reform is the intended change of schooling toward accomplishment of a valued goal.  One problem with reform, therefore, is intent.  Education is an extraordinarily complex social institution – involving a vast array of people, structures, and organizations – which means that reforming education in ways that make it produce the intended results is quite difficult.  Frequently reforms unintentionally generate new problems, which then require a new wave of reform to deal with them.  (This is why Elmore and McLaughlin call school reform “steady work.”)  A second problem with reform is that reasonable people can disagree over the goals of schooling, which means that what is a positive reform for some people may be a negative change for others.  The result is that your reaction to the success or failure of a reform effort depends on where you stand on its value, since the failure of a bad reform is a good thing.

Major Issues in the History of School Reform:  Framing our look at the history of reform will be two core books:  Tinkering Toward Utopia, which David Tyack and Larry Cuban wrote in response to what they learned from teaching this class at Stanford for a number of years; and Someone Has to Fail, the book I wrote after teaching the same course for a decade.  We’ll read their book at the start of the class and read mine in pieces across the quarter.  A key theme in Tyack and Cuban is the paradox of school reform, in which it seems that schools are constantly being bounced around by a stream of reform efforts while at the same time they never seem to change.  They unravel this paradox by separating the history of reform into two interacting elements:  the noisy and often contradictory rounds of reform rhetoric that intrude upon schools at irregular intervals, and the slower and steadier process of evolutionary change in the structure of schooling that takes place largely outside of public view.  We will look at both aspects of reform, with special attention to assessing the outcomes of reform in the realm of the structure and practice of schooling itself.  My own book takes a more jaundiced view of reform, examining why the common school movement was such a success and later reforms were such failures.  In the early part of the book, the focus is on how the loosely coupled organization of schooling and the peculiar characteristics of teaching as a practice have put severe limits on the possibilities of reform.  In the latter part, I explore why the failure of reform is largely good news, protecting the system from damaging experiments based on misguided visions of what schools can do to solve social problems.  I argue that schools are a terrible way to solve most of the social problems that they are asked to address.  I also suggest that schools are doing what educational consumers want from them – providing us with social access and social advantage – even if they don’t do what reformers ask of them.

The class starts with the work of David Cohen, Richard Elmore, and Milbrey McLaughlin, who consider the organizational and pedagogical reasons it has been so difficult to change the basic grammar of schooling through deliberate reform efforts.  Next we read Tyack and Cuban to get an overview of the subject.  Then we look at my account of the two most important reform movements in the history of American schools, one promoting the common school and the other pushing for progressive education.  Next we look at the rhetorics of school reform, examining their nature and variety through a close study of a few key reform texts from the last 200 years, including documents from pedagogical progressivism, administrative progressivism, desegregation, the standards movement, and school choice.  In succeeding weeks, we explore the core factors that make the school system so resistant to reform and consider some of the kinds of reform practices that are more likely to bring about results.  Then we examine the system’s core social role, showing how the system continually adapts to pressure for greater social access by stratifying instruction in a way that preserves social advantage.  In week 8 we look at issues surrounding race and American schooling.  In week 9, we put the issue of school reform in the larger context of state-driven social change efforts, by focusing on James Scott’s framework, which examines why it has been so hard over the years for governments to impose order on complex social institutions such as schooling.  For the last class, we read the final chapters in my book and talk about what schools can do and what they can’t do.

What This Class Is and Is Not About:  This class is intended to encourage you to think hard about the things that make educational reform so complex, contradictory, difficult, and often dysfunctional.  Its focus is on analyzing what happens to reform efforts between initial proposals and eventual outcomes.  This means that its aim is not to provide you with a how-to manual that will enable you to be a successful reformer.  I don’t think such a manual exists, and the dream of finding the one right way to fix things has done a lot of damage to schools over the years.  Instead, think of this class as an exercise in realism, a set of cautionary tales that I hope will help you locate your own efforts to improve schools within a useful historical framework.  The idea is to encourage students to develop a rich understanding of the American system of schooling – even a grudging respect for it – before trying to institute reforms, and to instill a little humility into people’s plans for saving the world with better schools.

Audience

This class was originally designed for master’s and doctoral students in education, but it also works for graduate or undergraduate students in any field who are interested in learning about the nature of the American system of education.

Readings

Books:  The following books are used in the course; both are in print.  Also, pirated digital versions of both books can be found online.

Tyack, David & Cuban, Larry. (1995). Tinkering toward utopia: Reflections on a century of public school reform. Cambridge: Harvard University Press.

Labaree, David F. (2010). Someone has to fail: The zero-sum game of public schooling. Cambridge: Harvard University Press.

Assigned Articles and Other Readings:  All other readings are available in PDF on the course Google Drive.

 Course Outline

Below are the topics we will cover, week by week, along with the readings for each week.  For each week, I provide a link to tips on how to approach the readings, links to the PDFs of those readings, and a link to the class slides.

Week 1

Introduction to course

Tips for week 1 readings

Elmore, Richard F., & McLaughlin, Milbrey W. (1988).  Steady work.  Santa Monica, CA: Rand.

Cohen, David K. (1988). Teaching practice: Plus que ça change.  In Philip W. Jackson (ed.), Contributing to educational change (pp. 27-84).  Berkeley: McCutchan.

Labaree, David F. (2010).  Someone has to fail: The zero-sum game of public schooling. Cambridge: Harvard University Press.  Introduction.

Class slides for week 1:  slides 1a, slides 1b, slides 1c

 Week 2

The History of Educational Reform:  An Overview

Tips for week 2 readings

Tyack, David & Cuban, Larry. (1995). Tinkering toward utopia: Reflections on a century of public school reform. Cambridge: Harvard University Press.

Metz, Mary H. (1990). Real school: A universal drama amid disparate experience. In Douglas E. Mitchell & Margaret E. Goertz (Eds.), Education Politics for the New Century (pp. 75-91). New York: Falmer.

Class slides for week 2

Week 3

The Two Major Reform Movements – Common School and Progressivism; Schooling and the Meritocracy

Tips for week 3 readings

Labaree.  Someone has to fail.  Chapters 1, 2, and 3.

McClay, William M. (2016). A distant elite: How meritocracy went wrong. The Hedgehog Review 18:2 (Summer).

Class slides for week 3

Week 4

Factors That Make Reform Difficult

Tips for week 4 readings

Labaree.  Someone has to fail.  Chapters 4 and 5

Meyer, John W. & Rowan, Brian. (1983). The structure of educational organizations. In Organizational environments: Ritual and rationality (pp. 71-97), edited by John W. Meyer and William R. Scott. Beverly Hills, CA: Sage.

Cuban, Larry. (2013). Why so many structural changes in schools and so little reform in teaching practice? In Inside the black box of classroom practice: Change without reform in American education (pp. 155-187). Cambridge: Harvard Education Press.

Check out Larry Cuban’s blog on school reform and classroom practice, always a good read: http://larrycuban.wordpress.com/.

Class slides for week 4

 Week 5

The Rhetorics of Reform:  Cases in Point

Read any four of these closely; lightly skim the rest.

Tips for week 5 readings

Common School Movement

Mann, Horace. (1848). Twelfth Annual Report to the State Board of Education of Massachusetts.  Selections.

Committee of 10

Committee of 10. (1893). Report to the National Council of Education.  Selections.

Pedagogical Progressivism

Dewey, John. (1902/1990). The child and the curriculum. In Philip W. Jackson (ed.), The school and society and the child and the curriculum (pp. 181-209). Chicago: University of Chicago Press.

Administrative Progressivism

Commission on the Reorganization of Secondary Education. (1918). Cardinal principles of secondary education. Washington, DC: National Education Association.

Desegregation

Brown v. Board of Education of Topeka, 347 U.S. 483 (1954).

Standards Movement 1.0

National Commission on Excellence in Education. (1983). A nation at risk: The imperative for educational reform. Washington, DC: U.S. Department of Education.

School Choice

Walberg, Herbert J. & Bast, Joseph L. (2003). Failure of the public school monopoly. In Education and capitalism: How overcoming our fear of markets and economics can improve America’s schools (pp. 3-32). Stanford: Hoover Institution Press.

Standards Movement 2.0

No Child Left Behind Act.  (2002).  Public Law 107-110.  Title I.

School Choice 2.0

https://drive.google.com/open?id=1-d5LuTZ5YYkLbsL0S268vtaRLFoEqJ_b

Class slides for week 5

Week 6

Making Educational Change

Tips for week 6 readings

Fullan, Michael G. (2001). The new meaning of educational change (3rd ed.). New York: Teachers College Press. Chapter 2.

Fullan, Michael G. (2001). The new meaning of educational change (3rd ed.). New York: Teachers College Press. Chapter 5.

Wolf, Shelby A., Borko, Hilda, Elliott, Rebekah L., & McIver, Monette C. (2000). “That dog won’t hunt!”: Exemplary school change efforts within the Kentucky reform. American Educational Research Journal, 37:2, 349-393.

Delpit, Lisa. (1995).  The silenced dialogue.  In Other people’s children (pp. 21-47).  New York: New Press.

Class slides for week 6

Week 7

Balancing Social Access and Social Advantage

Tips for week 7 readings

Labaree, David F.  (2013).  Balancing access and advantage in the history of American schooling. In Rolf Becker, Patrick Bühler, & Thomas Bühler (Eds.), Bildungsungleichheit und Gerechtigkeit: Wissenschaftliche und Gesellschaftliche Herausforderungen (pp. 101-114).  Bern: Haupt Verlag.

Cohen, David K., & Neufeld, Barbara. (1981). The failure of high schools and the progress of education. Daedalus, 110 (Summer), 69-89.

Labaree, David F.  (1997).  The middle class and the high school.  In How to Succeed in School Without Really Learning: The Credentials Race in American Education (pp. 92-109).  New Haven, CT: Yale University Press.

Ladson-Billings, Gloria.  (1995).  But that’s just good teaching! The case for culturally relevant pedagogy.  Theory into Practice, 34:3, pp. 159-165.

Class slides for week 7

Week 8

Race and American Schooling

Tips for week 8 readings

Nieto, Sonia (1994).  Affirmation, solidarity, and critique: Moving beyond tolerance in multicultural education.  Multicultural Education, 1:4, pp. 9-12, 35-38.

Fine, Michelle. (1986). Why urban adolescents drop into and out of public high school. The Teachers College Record, 87(3), 393-409.

McWhorter, John. (2018). There’s nothing wrong with Black English. The Atlantic. https://www.theatlantic.com/politics/archive/2018/08/who-gets-to-use-black-english/566867/?utm_source=twb.

Recommended:  The Problem We All Live With.  (2015). This American Life podcast (July 31).  Available in audio and in transcript: http://www.thisamericanlife.org/radio-archives/episode/562/the-problem-we-all-live-with.

Class slides for week 8

Week 9

Problems in Making Systematic Reform of Education

Tips for week 9 readings

Scott, James. (1999).  Seeing like a state.  New Haven: Yale University Press.  Pay close attention to Introduction, chapters 1-2 and 9-10.  Skim through the rest looking for examples.

Class slides for week 9

Week 10

Conclusions

Tips for week 10 readings

Labaree.  Someone has to fail.  Chapters 6, 7, and 8.

Cohen, David K. (1990). A revolution in one classroom: The case of Mrs. Oublier. Educational Evaluation and Policy Analysis, 12:3, pp. 311-329.

March, James G. (1975). Education and the pursuit of optimism. Texas Tech Journal of Education, 2:1, 5-17.

Class slides for week 10

Guidelines for Critical Reading

As a critical reader of a particular text (a book, article, speech, proposal), you need to use the following questions as a framework to guide you as you read:

  1. What’s the point? This is the analysis issue: what is the author’s angle?
  2. Who says? This is the validity issue: On what (data, literature) are the claims based?
  3. What’s new? This is the value-added issue: What does the author contribute that we don’t already know?
  4. Who cares? This is the significance issue, the most important issue of all, the one that subsumes all the others: Is this work worth doing?  Is the text worth reading?  Does it contribute something important?

If this is the way critical readers are going to approach a text, then as an analytical writer you need to guide readers toward the desired answers to each of these questions.

 Guidelines for Analytical Writing

 In writing papers for any course, keep in mind the following points.

  1. Pick an important issue: Make sure that your analysis meets the “so what” test. Why should anyone care about this topic, anyway?  Pick an issue or issues that matter and that you really care about.
  2. Keep focused: Don’t lose track of the point you are trying to make and make sure the reader knows where you are heading and why.
  3. Aim for clarity: Don’t assume that the reader knows what you’re talking about; it’s your job to make your points clearly.  In part this means keeping focused and avoiding distracting clutter.  But in part it means that you need to make more than elliptical references to concepts and sources or to professional experience.  When referring to readings (from the course or elsewhere), explain who said what and why this point is pertinent to the issue at hand.  When drawing on your own experiences or observations, set the context so the reader can understand what you mean.  Proceed as though you were writing for an educated person who is neither a member of this class nor a professional colleague, someone who has not read the material you are referring to.
  4. Provide analysis: A good paper is more than a catalogue of facts, concepts, experiences, or references; it is more than a description of the content of a set of readings; it is more than an expression of your educational values or an announcement of your prescription for what ails education.  A good paper is a logical and coherent analysis of the issues raised within your chosen area of focus.  This means that your paper should aim to explain rather than describe.  If you give examples, be sure to tell the reader what they mean in the context of your analysis.  Make sure the reader understands the connection between the various points in your paper.
  5. Provide depth, insight, and connections: The best papers are ones that go beyond making obvious points, superficial comparisons, and simplistic assertions.  They dig below the surface of the issue at hand, demonstrating a deeper level of understanding and an ability to make interesting connections.
  6. Support your analysis with evidence: You need to do more than simply state your ideas, however informed and useful these may be.  You also need to provide evidence that reassures the reader that you know what you are talking about, thus providing a foundation for your argument.  Evidence comes in part from the academic literature, whether encountered in this course or elsewhere.  Evidence can also come from your own experience.  Remember that you are trying to accomplish two things with the use of evidence.  First, you are saying that it is not just you making this assertion but that authoritative sources and solid evidence back you up.  Second, you are supplying a degree of specificity and detail, which helps to flesh out an otherwise skeletal argument.
  7. Recognize complexity and acknowledge multiple viewpoints. The issues in the history of American education are not simple, and your paper should not propose simple solutions to complex problems.  It should not reduce issues to either/or, black/white, good/bad.  Your paper should give evidence that you understand and appreciate more than one perspective on an issue.  This does not mean you should be wishy-washy.  Instead, you should aim to make a clear point by showing that you have considered alternate views.
  8. Challenge assumptions. The paper should show that you have learned something by doing this paper. There should be evidence that you have been open to changing your mind.
  9. Do not overuse quotation: In a short paper, long quotations (more than a sentence or two in length) are generally not appropriate.  Even in longer papers, quotations should be used sparingly unless they constitute a primary form of data for your analysis.  In general, your paper is more effective if written primarily in your own words, using ideas from the literature but framing them in your own way in order to serve your own analytical purposes.  However, selective use of quotations can be very useful as a way of capturing the author’s tone or conveying a particularly aptly phrased point.
  10. Cite your sources: You need to identify for the reader where particular ideas or examples come from.  Note that citing a source is not sufficient to fulfill the requirement to provide evidence for your argument.  As spelled out in #6 above, you need to transmit to the reader some of the substance of what appears in the source cited, so the reader can understand the connection with the point you are making and can have some meat to chew on.  The best analytical writing provides a real feel for the material and not just a list of assertions and citations.  Depth, insight, and connections count for more than a superficial collection of glancing references.  In other words, don’t just mention an array of sources without drawing substantive points and examples from these sources; and don’t draw on ideas from such sources without identifying the ones you used.
  11. Take care in the quality of your prose: A paper that is written in a clear and effective style makes a more convincing argument than one written in a murky manner, even when both writers start with the same basic understanding of the issues.  However, writing that is confusing usually signals confusion in a person’s thinking.  After all, one key purpose of writing is to put down your ideas in a way that permits you and others to reflect on them critically, to see if they stand up to analysis.  So you should take the time to reflect on your own ideas on paper and revise them as needed.
Posted in Higher Education, History, Race

Du Bois — Of the Coming of John

This post is a classic piece by W. E. B. Du Bois called “Of the Coming of John.”  It’s a chapter from his book, The Souls of Black Folk, published in 1903.  Here’s a link to the online version.

It’s a heartbreaking work of fiction filled with a lot of hard truths.  It’s the story of two boys named John, one Black and one white, who grew up together in a small town in Jim Crow Georgia at the turn of the 20th century.  Both went away to college up North, and both came home to visit their families at the same time.

The story is about race and about education.  It tells of a racial divide that education can’t cure, a tragedy just waiting to unfold.  It also tells how education divides people of all races.  Education, it tells us, can be both liberating and alienating.  For the Black John, education showed him a whole new world, different from anything he had ever experienced, and a way of living that was less confining for people like him.  But it also left him an alien in his own hometown, where he no longer felt comfortable and could no longer communicate with family and friends in the old way.  College also left the other John alienated from his home environment, but it did nothing to change his thinking about the racial divide there.  When the two Johns collided on their home ground, the end was ugly and somehow unavoidable.

Du Bois himself had a different story.  He was born right after the Civil War in an integrated town in Massachusetts and went on to study at the University of Berlin and at Harvard, where he became the first Black person to earn a Harvard Ph.D.  You can hear the echoes of his own education running through the story, from the poem by Elizabeth Barrett Browning at the beginning to the phrase from a German song at the end.

It’s grim to read this story, but it’s also a pleasure to spend some time inside the mind of one of America’s great scholars and civil rights leaders.

Du Bois Cover

Of the Coming of John

W. E. B Du Bois

What bring they ‘neath the midnight,

Beside the River–sea?

They bring the human heart wherein

No nightly calm can be;

That droppeth never with the wind,

Nor drieth with the dew;

O calm it, God; thy calm is broad

To cover spirits too.

The river floweth on.

MRS. BROWNING.

Carlisle Street runs westward from the centre of Johnstown, across a great black bridge, down a hill and up again, by little shops and meat–markets, past single–storied homes, until suddenly it stops against a wide green lawn. It is a broad, restful place, with two large buildings outlined against the west. When at evening the winds come swelling from the east, and the great pall of the city’s smoke hangs wearily above the valley, then the red west glows like a dreamland down Carlisle Street, and, at the tolling of the supper–bell, throws the passing forms of students in dark silhouette against the sky. Tall and black, they move slowly by, and seem in the sinister light to flit before the city like dim warning ghosts. Perhaps they are; for this is Wells Institute, and these black students have few dealings with the white city below.

And if you will notice, night after night, there is one dark form that ever hurries last and late toward the twinkling lights of Swain Hall,—for Jones is never on time. A long, straggling fellow he is, brown and hard–haired, who seems to be growing straight out of his clothes, and walks with a half–apologetic roll. He used perpetually to set the quiet dining–room into waves of merriment, as he stole to his place after the bell had tapped for prayers; he seemed so perfectly awkward. And yet one glance at his face made one forgive him much,—that broad, good–natured smile in which lay no bit of art or artifice, but seemed just bubbling good–nature and genuine satisfaction with the world.

He came to us from Altamaha, away down there beneath the gnarled oaks of Southeastern Georgia, where the sea croons to the sands and the sands listen till they sink half drowned beneath the waters, rising only here and there in long, low islands. The white folk of Altamaha voted John a good boy,—fine plough–hand, good in the rice–fields, handy everywhere, and always good–natured and respectful. But they shook their heads when his mother wanted to send him off to school. “It’ll spoil him,—ruin him,” they said; and they talked as though they knew. But full half the black folk followed him proudly to the station, and carried his queer little trunk and many bundles. And there they shook and shook hands, and the girls kissed him shyly and the boys clapped him on the back. So the train came, and he pinched his little sister lovingly, and put his great arms about his mother’s neck, and then was away with a puff and a roar into the great yellow world that flamed and flared about the doubtful pilgrim. Up the coast they hurried, past the squares and palmettos of Savannah, through the cotton–fields and through the weary night, to Millville, and came with the morning to the noise and bustle of Johnstown.

And they that stood behind, that morning in Altamaha, and watched the train as it noisily bore playmate and brother and son away to the world, had thereafter one ever–recurring word,—”When John comes.” Then what parties were to be, and what speakings in the churches; what new furniture in the front room,—perhaps even a new front room; and there would be a new schoolhouse, with John as teacher; and then perhaps a big wedding; all this and more—when John comes. But the white people shook their heads.

At first he was coming at Christmas–time,—but the vacation proved too short; and then, the next summer,—but times were hard and schooling costly, and so, instead, he worked in Johnstown. And so it drifted to the next summer, and the next,—till playmates scattered, and mother grew gray, and sister went up to the Judge’s kitchen to work. And still the legend lingered,—”When John comes.”

Up at the Judge’s they rather liked this refrain; for they too had a John—a fair–haired, smooth–faced boy, who had played many a long summer’s day to its close with his darker namesake. “Yes, sir! John is at Princeton, sir,” said the broad–shouldered gray–haired Judge every morning as he marched down to the post–office. “Showing the Yankees what a Southern gentleman can do,” he added; and strode home again with his letters and papers. Up at the great pillared house they lingered long over the Princeton letter,—the Judge and his frail wife, his sister and growing daughters. “It’ll make a man of him,” said the Judge, “college is the place.” And then he asked the shy little waitress, “Well, Jennie, how’s your John?” and added reflectively, “Too bad, too bad your mother sent him off—it will spoil him.” And the waitress wondered.

Thus in the far–away Southern village the world lay waiting, half consciously, the coming of two young men, and dreamed in an inarticulate way of new things that would be done and new thoughts that all would think. And yet it was singular that few thought of two Johns,—for the black folk thought of one John, and he was black; and the white folk thought of another John, and he was white. And neither world thought the other world’s thought, save with a vague unrest.

Up in Johnstown, at the Institute, we were long puzzled at the case of John Jones. For a long time the clay seemed unfit for any sort of moulding. He was loud and boisterous, always laughing and singing, and never able to work consecutively at anything. He did not know how to study; he had no idea of thoroughness; and with his tardiness, carelessness, and appalling good–humor, we were sore perplexed. One night we sat in faculty–meeting, worried and serious; for Jones was in trouble again. This last escapade was too much, and so we solemnly voted “that Jones, on account of repeated disorder and inattention to work, be suspended for the rest of the term.”

It seemed to us that the first time life ever struck Jones as a really serious thing was when the Dean told him he must leave school. He stared at the gray–haired man blankly, with great eyes. “Why,—why,” he faltered, “but—I haven’t graduated!” Then the Dean slowly and clearly explained, reminding him of the tardiness and the carelessness, of the poor lessons and neglected work, of the noise and disorder, until the fellow hung his head in confusion. Then he said quickly, “But you won’t tell mammy and sister,—you won’t write mammy, now will you? For if you won’t I’ll go out into the city and work, and come back next term and show you something.” So the Dean promised faithfully, and John shouldered his little trunk, giving neither word nor look to the giggling boys, and walked down Carlisle Street to the great city, with sober eyes and a set and serious face.

Perhaps we imagined it, but someway it seemed to us that the serious look that crept over his boyish face that afternoon never left it again. When he came back to us he went to work with all his rugged strength. It was a hard struggle, for things did not come easily to him,—few crowding memories of early life and teaching came to help him on his new way; but all the world toward which he strove was of his own building, and he builded slow and hard. As the light dawned lingeringly on his new creations, he sat rapt and silent before the vision, or wandered alone over the green campus peering through and beyond the world of men into a world of thought. And the thoughts at times puzzled him sorely; he could not see just why the circle was not square, and carried it out fifty–six decimal places one midnight,—would have gone further, indeed, had not the matron rapped for lights out. He caught terrible colds lying on his back in the meadows of nights, trying to think out the solar system; he had grave doubts as to the ethics of the Fall of Rome, and strongly suspected the Germans of being thieves and rascals, despite his textbooks; he pondered long over every new Greek word, and wondered why this meant that and why it couldn’t mean something else, and how it must have felt to think all things in Greek. So he thought and puzzled along for himself,—pausing perplexed where others skipped merrily, and walking steadily through the difficulties where the rest stopped and surrendered.

Thus he grew in body and soul, and with him his clothes seemed to grow and arrange themselves; coat sleeves got longer, cuffs appeared, and collars got less soiled. Now and then his boots shone, and a new dignity crept into his walk. And we who saw daily a new thoughtfulness growing in his eyes began to expect something of this plodding boy. Thus he passed out of the preparatory school into college, and we who watched him felt four more years of change, which almost transformed the tall, grave man who bowed to us commencement morning. He had left his queer thought–world and come back to a world of motion and of men. He looked now for the first time sharply about him, and wondered he had seen so little before. He grew slowly to feel almost for the first time the Veil that lay between him and the white world; he first noticed now the oppression that had not seemed oppression before, differences that erstwhile seemed natural, restraints and slights that in his boyhood days had gone unnoticed or been greeted with a laugh. He felt angry now when men did not call him “Mister,” he clenched his hands at the “Jim Crow” cars, and chafed at the color–line that hemmed in him and his. A tinge of sarcasm crept into his speech, and a vague bitterness into his life; and he sat long hours wondering and planning a way around these crooked things. Daily he found himself shrinking from the choked and narrow life of his native town. And yet he always planned to go back to Altamaha,—always planned to work there. Still, more and more as the day approached he hesitated with a nameless dread; and even the day after graduation he seized with eagerness the offer of the Dean to send him North with the quartette during the summer vacation, to sing for the Institute. A breath of air before the plunge, he said to himself in half apology.

It was a bright September afternoon, and the streets of New York were brilliant with moving men. They reminded John of the sea, as he sat in the square and watched them, so changelessly changing, so bright and dark, so grave and gay. He scanned their rich and faultless clothes, the way they carried their hands, the shape of their hats; he peered into the hurrying carriages. Then, leaning back with a sigh, he said, “This is the World.” The notion suddenly seized him to see where the world was going; since many of the richer and brighter seemed hurrying all one way. So when a tall, light–haired young man and a little talkative lady came by, he rose half hesitatingly and followed them. Up the street they went, past stores and gay shops, across a broad square, until with a hundred others they entered the high portal of a great building.

He was pushed toward the ticket–office with the others, and felt in his pocket for the new five–dollar bill he had hoarded. There seemed really no time for hesitation, so he drew it bravely out, passed it to the busy clerk, and received simply a ticket but no change. When at last he realized that he had paid five dollars to enter he knew not what, he stood stockstill amazed. “Be careful,” said a low voice behind him; “you must not lynch the colored gentleman simply because he’s in your way,” and a girl looked up roguishly into the eyes of her fair–haired escort. A shade of annoyance passed over the escort’s face. “You WILL not understand us at the South,” he said half impatiently, as if continuing an argument. “With all your professions, one never sees in the North so cordial and intimate relations between white and black as are everyday occurrences with us. Why, I remember my closest playfellow in boyhood was a little Negro named after me, and surely no two,—WELL!” The man stopped short and flushed to the roots of his hair, for there directly beside his reserved orchestra chairs sat the Negro he had stumbled over in the hallway. He hesitated and grew pale with anger, called the usher and gave him his card, with a few peremptory words, and slowly sat down. The lady deftly changed the subject.

All this John did not see, for he sat in a half–daze minding the scene about him; the delicate beauty of the hall, the faint perfume, the moving myriad of men, the rich clothing and low hum of talking seemed all a part of a world so different from his, so strangely more beautiful than anything he had known, that he sat in dreamland, and started when, after a hush, rose high and clear the music of Lohengrin’s swan. The infinite beauty of the wail lingered and swept through every muscle of his frame, and put it all a–tune. He closed his eyes and grasped the elbows of the chair, touching unwittingly the lady’s arm. And the lady drew away. A deep longing swelled in all his heart to rise with that clear music out of the dirt and dust of that low life that held him prisoned and befouled. If he could only live up in the free air where birds sang and setting suns had no touch of blood! Who had called him to be the slave and butt of all? And if he had called, what right had he to call when a world like this lay open before men?

Then the movement changed, and fuller, mightier harmony swelled away. He looked thoughtfully across the hall, and wondered why the beautiful gray–haired woman looked so listless, and what the little man could be whispering about. He would not like to be listless and idle, he thought, for he felt with the music the movement of power within him. If he but had some master–work, some life–service, hard,—aye, bitter hard, but without the cringing and sickening servility, without the cruel hurt that hardened his heart and soul. When at last a soft sorrow crept across the violins, there came to him the vision of a far–off home, the great eyes of his sister, and the dark drawn face of his mother. And his heart sank below the waters, even as the sea–sand sinks by the shores of Altamaha, only to be lifted aloft again with that last ethereal wail of the swan that quivered and faded away into the sky.

It left John sitting so silent and rapt that he did not for some time notice the usher tapping him lightly on the shoulder and saying politely, “Will you step this way, please, sir?” A little surprised, he arose quickly at the last tap, and, turning to leave his seat, looked full into the face of the fair–haired young man. For the first time the young man recognized his dark boyhood playmate, and John knew that it was the Judge’s son. The White John started, lifted his hand, and then froze into his chair; the black John smiled lightly, then grimly, and followed the usher down the aisle. The manager was sorry, very, very sorry,—but he explained that some mistake had been made in selling the gentleman a seat already disposed of; he would refund the money, of course,—and indeed felt the matter keenly, and so forth, and—before he had finished John was gone, walking hurriedly across the square and down the broad streets, and as he passed the park he buttoned his coat and said, “John Jones, you’re a natural–born fool.” Then he went to his lodgings and wrote a letter, and tore it up; he wrote another, and threw it in the fire. Then he seized a scrap of paper and wrote: “Dear Mother and Sister—I am coming—John.”

“Perhaps,” said John, as he settled himself on the train, “perhaps I am to blame myself in struggling against my manifest destiny simply because it looks hard and unpleasant. Here is my duty to Altamaha plain before me; perhaps they’ll let me help settle the Negro problems there,—perhaps they won’t. ‘I will go in to the King, which is not according to the law; and if I perish, I perish.'” And then he mused and dreamed, and planned a life–work; and the train flew south.

Down in Altamaha, after seven long years, all the world knew John was coming. The homes were scrubbed and scoured,—above all, one; the gardens and yards had an unwonted trimness, and Jennie bought a new gingham. With some finesse and negotiation, all the dark Methodists and Presbyterians were induced to join in a monster welcome at the Baptist Church; and as the day drew near, warm discussions arose on every corner as to the exact extent and nature of John’s accomplishments. It was noontide on a gray and cloudy day when he came. The black town flocked to the depot, with a little of the white at the edges,—a happy throng, with “Good–mawnings” and “Howdys” and laughing and joking and jostling. Mother sat yonder in the window watching; but sister Jennie stood on the platform, nervously fingering her dress, tall and lithe, with soft brown skin and loving eyes peering from out a tangled wilderness of hair. John rose gloomily as the train stopped, for he was thinking of the “Jim Crow” car; he stepped to the platform, and paused: a little dingy station, a black crowd gaudy and dirty, a half–mile of dilapidated shanties along a straggling ditch of mud. An overwhelming sense of the sordidness and narrowness of it all seized him; he looked in vain for his mother, kissed coldly the tall, strange girl who called him brother, spoke a short, dry word here and there; then, lingering neither for handshaking nor gossip, started silently up the street, raising his hat merely to the last eager old aunty, to her open–mouthed astonishment. The people were distinctly bewildered. This silent, cold man,—was this John? Where was his smile and hearty hand–grasp? “‘Peared kind o’ down in the mouf,” said the Methodist preacher thoughtfully. “Seemed monstus stuck up,” complained a Baptist sister. But the white postmaster from the edge of the crowd expressed the opinion of his folks plainly. “That damn Nigger,” said he, as he shouldered the mail and arranged his tobacco, “has gone North and got plum full o’ fool notions; but they won’t work in Altamaha.” And the crowd melted away.

The meeting of welcome at the Baptist Church was a failure. Rain spoiled the barbecue, and thunder turned the milk in the ice–cream. When the speaking came at night, the house was crowded to overflowing. The three preachers had especially prepared themselves, but somehow John’s manner seemed to throw a blanket over everything,—he seemed so cold and preoccupied, and had so strange an air of restraint that the Methodist brother could not warm up to his theme and elicited not a single “Amen”; the Presbyterian prayer was but feebly responded to, and even the Baptist preacher, though he wakened faint enthusiasm, got so mixed up in his favorite sentence that he had to close it by stopping fully fifteen minutes sooner than he meant. The people moved uneasily in their seats as John rose to reply. He spoke slowly and methodically. The age, he said, demanded new ideas; we were far different from those men of the seventeenth and eighteenth centuries,—with broader ideas of human brotherhood and destiny. Then he spoke of the rise of charity and popular education, and particularly of the spread of wealth and work. The question was, then, he added reflectively, looking at the low discolored ceiling, what part the Negroes of this land would take in the striving of the new century. He sketched in vague outline the new Industrial School that might rise among these pines, he spoke in detail of the charitable and philanthropic work that might be organized, of money that might be saved for banks and business. Finally he urged unity, and deprecated especially religious and denominational bickering. “To–day,” he said, with a smile, “the world cares little whether a man be Baptist or Methodist, or indeed a churchman at all, so long as he is good and true. What difference does it make whether a man be baptized in river or washbowl, or not at all? Let’s leave all that littleness, and look higher.” Then, thinking of nothing else, he slowly sat down. A painful hush seized that crowded mass. Little had they understood of what he said, for he spoke an unknown tongue, save the last word about baptism; that they knew, and they sat very still while the clock ticked. Then at last a low suppressed snarl came from the Amen corner, and an old bent man arose, walked over the seats, and climbed straight up into the pulpit. He was wrinkled and black, with scant gray and tufted hair; his voice and hands shook as with palsy; but on his face lay the intense rapt look of the religious fanatic. He seized the Bible with his rough, huge hands; twice he raised it inarticulate, and then fairly burst into words, with rude and awful eloquence. He quivered, swayed, and bent; then rose aloft in perfect majesty, till the people moaned and wept, wailed and shouted, and a wild shrieking arose from the corners where all the pent–up feeling of the hour gathered itself and rushed into the air. John never knew clearly what the old man said; he only felt himself held up to scorn and scathing denunciation for trampling on the true Religion, and he realized with amazement that all unknowingly he had put rough, rude hands on something this little world held sacred. He arose silently, and passed out into the night. Down toward the sea he went, in the fitful starlight, half conscious of the girl who followed timidly after him. When at last he stood upon the bluff, he turned to his little sister and looked upon her sorrowfully, remembering with sudden pain how little thought he had given her. 
He put his arm about her and let her passion of tears spend itself on his shoulder.

Long they stood together, peering over the gray unresting water.

“John,” she said, “does it make every one—unhappy when they study and learn lots of things?”

He paused and smiled. “I am afraid it does,” he said.

“And, John, are you glad you studied?”

“Yes,” came the answer, slowly but positively.

She watched the flickering lights upon the sea, and said thoughtfully, “I wish I was unhappy,—and—and,” putting both arms about his neck, “I think I am, a little, John.”

It was several days later that John walked up to the Judge’s house to ask for the privilege of teaching the Negro school. The Judge himself met him at the front door, stared a little hard at him, and said brusquely, “Go ’round to the kitchen door, John, and wait.” Sitting on the kitchen steps, John stared at the corn, thoroughly perplexed. What on earth had come over him? Every step he made offended some one. He had come to save his people, and before he left the depot he had hurt them. He sought to teach them at the church, and had outraged their deepest feelings. He had schooled himself to be respectful to the Judge, and then blundered into his front door. And all the time he had meant right,—and yet, and yet, somehow he found it so hard and strange to fit his old surroundings again, to find his place in the world about him. He could not remember that he used to have any difficulty in the past, when life was glad and gay. The world seemed smooth and easy then. Perhaps,—but his sister came to the kitchen door just then and said the Judge awaited him.

The Judge sat in the dining–room amid his morning’s mail, and he did not ask John to sit down. He plunged squarely into the business. “You’ve come for the school, I suppose. Well John, I want to speak to you plainly. You know I’m a friend to your people. I’ve helped you and your family, and would have done more if you hadn’t got the notion of going off. Now I like the colored people, and sympathize with all their reasonable aspirations; but you and I both know, John, that in this country the Negro must remain subordinate, and can never expect to be the equal of white men. In their place, your people can be honest and respectful; and God knows, I’ll do what I can to help them. But when they want to reverse nature, and rule white men, and marry white women, and sit in my parlor, then, by God! we’ll hold them under if we have to lynch every Nigger in the land. Now, John, the question is, are you, with your education and Northern notions, going to accept the situation and teach the darkies to be faithful servants and laborers as your fathers were,—I knew your father, John, he belonged to my brother, and he was a good Nigger. Well—well, are you going to be like him, or are you going to try to put fool ideas of rising and equality into these folks’ heads, and make them discontented and unhappy?”

“I am going to accept the situation, Judge Henderson,” answered John, with a brevity that did not escape the keen old man. He hesitated a moment, and then said shortly, “Very well,—we’ll try you awhile. Good–morning.”

It was a full month after the opening of the Negro school that the other John came home, tall, gay, and headstrong. The mother wept, the sisters sang. The whole white town was glad. A proud man was the Judge, and it was a goodly sight to see the two swinging down Main Street together. And yet all did not go smoothly between them, for the younger man could not and did not veil his contempt for the little town, and plainly had his heart set on New York. Now the one cherished ambition of the Judge was to see his son mayor of Altamaha, representative to the legislature, and—who could say?—governor of Georgia. So the argument often waxed hot between them. “Good heavens, father,” the younger man would say after dinner, as he lighted a cigar and stood by the fireplace, “you surely don’t expect a young fellow like me to settle down permanently in this—this God–forgotten town with nothing but mud and Negroes?” “I did,” the Judge would answer laconically; and on this particular day it seemed from the gathering scowl that he was about to add something more emphatic, but neighbors had already begun to drop in to admire his son, and the conversation drifted.

“Heah that John is livenin’ things up at the darky school,” volunteered the postmaster, after a pause.

“What now?” asked the Judge, sharply.

“Oh, nothin’ in particulah,—just his almighty air and uppish ways. B’lieve I did heah somethin’ about his givin’ talks on the French Revolution, equality, and such like. He’s what I call a dangerous Nigger.”

“Have you heard him say anything out of the way?”

“Why, no,—but Sally, our girl, told my wife a lot of rot. Then, too, I don’t need to heah: a Nigger what won’t say ‘sir’ to a white man, or—”

“Who is this John?” interrupted the son.

“Why, it’s little black John, Peggy’s son,—your old playfellow.”

The young man’s face flushed angrily, and then he laughed.

“Oh,” said he, “it’s the darky that tried to force himself into a seat beside the lady I was escorting—”

But Judge Henderson waited to hear no more. He had been nettled all day, and now at this he rose with a half–smothered oath, took his hat and cane, and walked straight to the schoolhouse.

For John, it had been a long, hard pull to get things started in the rickety old shanty that sheltered his school. The Negroes were rent into factions for and against him, the parents were careless, the children irregular and dirty, and books, pencils, and slates largely missing. Nevertheless, he struggled hopefully on, and seemed to see at last some glimmering of dawn. The attendance was larger and the children were a shade cleaner this week. Even the booby class in reading showed a little comforting progress. So John settled himself with renewed patience this afternoon.

“Now, Mandy,” he said cheerfully, “that’s better; but you mustn’t chop your words up so: ‘If—the–man—goes.’ Why, your little brother even wouldn’t tell a story that way, now would he?”

“Naw, suh, he cain’t talk.”

“All right; now let’s try again: ‘If the man—’

“John!”

The whole school started in surprise, and the teacher half arose, as the red, angry face of the Judge appeared in the open doorway.

“John, this school is closed. You children can go home and get to work. The white people of Altamaha are not spending their money on black folks to have their heads crammed with impudence and lies. Clear out! I’ll lock the door myself.”

Up at the great pillared house the tall young son wandered aimlessly about after his father’s abrupt departure. In the house there was little to interest him; the books were old and stale, the local newspaper flat, and the women had retired with headaches and sewing. He tried a nap, but it was too warm. So he sauntered out into the fields, complaining disconsolately, “Good Lord! how long will this imprisonment last!” He was not a bad fellow,—just a little spoiled and self–indulgent, and as headstrong as his proud father. He seemed a young man pleasant to look upon, as he sat on the great black stump at the edge of the pines idly swinging his legs and smoking. “Why, there isn’t even a girl worth getting up a respectable flirtation with,” he growled. Just then his eye caught a tall, willowy figure hurrying toward him on the narrow path. He looked with interest at first, and then burst into a laugh as he said, “Well, I declare, if it isn’t Jennie, the little brown kitchen–maid! Why, I never noticed before what a trim little body she is. Hello, Jennie! Why, you haven’t kissed me since I came home,” he said gaily. The young girl stared at him in surprise and confusion,—faltered something inarticulate, and attempted to pass. But a wilful mood had seized the young idler, and he caught at her arm. Frightened, she slipped by; and half mischievously he turned and ran after her through the tall pines.

Yonder, toward the sea, at the end of the path, came John slowly, with his head down. He had turned wearily homeward from the schoolhouse; then, thinking to shield his mother from the blow, started to meet his sister as she came from work and break the news of his dismissal to her. “I’ll go away,” he said slowly; “I’ll go away and find work, and send for them. I cannot live here longer.” And then the fierce, buried anger surged up into his throat. He waved his arms and hurried wildly up the path.

The great brown sea lay silent. The air scarce breathed. The dying day bathed the twisted oaks and mighty pines in black and gold. There came from the wind no warning, not a whisper from the cloudless sky. There was only a black man hurrying on with an ache in his heart, seeing neither sun nor sea, but starting as from a dream at the frightened cry that woke the pines, to see his dark sister struggling in the arms of a tall and fair–haired man.

He said not a word, but, seizing a fallen limb, struck him with all the pent–up hatred of his great black arm, and the body lay white and still beneath the pines, all bathed in sunshine and in blood. John looked at it dreamily, then walked back to the house briskly, and said in a soft voice, “Mammy, I’m going away—I’m going to be free.”

She gazed at him dimly and faltered, “No’th, honey, is yo’ gwine No’th agin?”

He looked out where the North Star glistened pale above the waters, and said, “Yes, mammy, I’m going—North.”

Then, without another word, he went out into the narrow lane, up by the straight pines, to the same winding path, and seated himself on the great black stump, looking at the blood where the body had lain. Yonder in the gray past he had played with that dead boy, romping together under the solemn trees. The night deepened; he thought of the boys at Johnstown. He wondered how Brown had turned out, and Carey? And Jones,—Jones? Why, he was Jones, and he wondered what they would all say when they knew, when they knew, in that great long dining–room with its hundreds of merry eyes. Then as the sheen of the starlight stole over him, he thought of the gilded ceiling of that vast concert hall, heard stealing toward him the faint sweet music of the swan. Hark! was it music, or the hurry and shouting of men? Yes, surely! Clear and high the faint sweet melody rose and fluttered like a living thing, so that the very earth trembled as with the tramp of horses and murmur of angry men.

He leaned back and smiled toward the sea, whence rose the strange melody, away from the dark shadows where lay the noise of horses galloping, galloping on. With an effort he roused himself, bent forward, and looked steadily down the pathway, softly humming the “Song of the Bride,”—

“Freudig geführt, ziehet dahin.”

Amid the trees in the dim morning twilight he watched their shadows dancing and heard their horses thundering toward him, until at last they came sweeping like a storm, and he saw in front that haggard white–haired man, whose eyes flashed red with fury. Oh, how he pitied him,—pitied him,—and wondered if he had the coiling twisted rope. Then, as the storm burst round him, he rose slowly to his feet and turned his closed eyes toward the Sea.

And the world whistled in his ears.

Posted in Empire, History, Modernity

Mikhail — How the Ottomans Shaped the Modern World

This post is a reflection on the role that the Ottoman Empire played in shaping the modern world.  It draws on a new book by Alan Mikhail, God’s Shadow: Sultan Selim, His Ottoman Empire, and the Making of the Modern World.  

The Ottomans are the Rodney Dangerfields of empires: They don’t get no respect.  If we picture them at all, it’s either the exotic image of turbans and concubines in Topkapi Palace or the sad image of the “sick man of Europe” in the days before World War I, which finally put them out of their misery.  Neither does them justice.  For a long time, they were the most powerful empire in the world, one that dramatically shaped life on three continents — Europe, Asia, and Africa.

But what makes their story so interesting is that it is more than just an account of some faded glory in the past.  As Mikhail points out, the Ottomans left an indelible stamp on the modern world.  It was their powerful presence in the middle of Eurasia that pushed the minor but ambitious states of Western Europe to set sail for the East and West Indies.  The Dutch, Portuguese, Spanish, and English couldn’t get to the treasures of China and India by land because of the impassable presence of the Ottomans.  So they either had to sail east around Africa to get there or forge a new path to the west, which led them to the Americas.  In fact, they did both, and the result was the riches that turned them into imperial powers who came to dominate much of the known world.  

Without the Ottomans, there would not have been the massive expansion of world trade, the Spanish empire, the riches and technological innovations that spurred the industrial revolution and empowered the English and American empires.

God's Shadow

Here are some passages from the book that give you a feel for the impact the Ottomans had:

For half a century before 1492, and for centuries afterward, the Ottoman Empire stood as the most powerful state on earth: the largest empire in the Mediterranean since ancient Rome, and the most enduring in the history of Islam. In the decades around 1500, the Ottomans controlled more territory and ruled over more people than any other world power. It was the Ottoman monopoly of trade routes with the East, combined with their military prowess on land and on sea, that pushed Spain and Portugal out of the Mediterranean, forcing merchants and sailors from these fifteenth-century kingdoms to become global explorers as they risked treacherous voyages across oceans and around continents—all to avoid the Ottomans.

From China to Mexico, the Ottoman Empire shaped the known world at the turn of the sixteenth century. Given its hegemony, it became locked in military, ideological, and economic competition with the Spanish and Italian states, Russia, India, and China, as well as other Muslim powers. The Ottomans influenced in one way or another nearly every major event of those years, with reverberations down to our own time. Dozens of familiar figures, such as Columbus, Vasco da Gama, Montezuma, the reformer Luther, the warlord Tamerlane, and generations of popes—as well as millions of other greater and lesser historical personages—calibrated their actions and defined their very existence in reaction to the reach and grasp of Ottoman power.

Other facts, too, have blotted out our recognition of the Ottoman influence on our own history. Foremost, we tend to read the history of the last half-millennium as “the rise of the West.” (This anachronism rings as true in Turkey and the rest of the Middle East as it does in Europe and America.) In fact, in 1500, and even in 1600, there was no such thing as the now much-vaunted notion of “the West.” Throughout the early modern centuries, the European continent consisted of a fragile collection of disparate kingdoms and small, weak principalities locked in constant warfare. The large land-based empires of Eurasia were the dominant powers of the Old World, and, apart from a few European outposts in and around the Caribbean, the Americas remained the vast domain of its indigenous peoples. The Ottoman Empire held more territory in Europe than did most European-based states. In 1600, if asked to pick a single power that would take over the world, a betting man would have put his money on the Ottoman Empire, or perhaps China, but certainly not on any European entity.

The sheer scope of the empire at its height was extraordinary:

For close to four centuries, from 1453 until well into the exceedingly fractured 1800s, the Ottomans remained at the center of global politics, economics, and war. As European states rose and fell, the Ottomans stood strong. They battled Europe’s medieval and early modern empires, and in the twentieth century continued to fight in Europe, albeit against vastly different enemies. Everyone from Machiavelli to Jefferson to Hitler—quite an unlikely trio—was forced to confront the challenge of the Ottomans’ colossal power and influence. Counting from their first military victory, at Bursa, they ruled for nearly six centuries in territories that today comprise some thirty-three countries. Their armies would control massive swaths of Europe, Africa, and Asia; some of the world’s most crucial trade corridors; and cities along the shores of the Mediterranean, Red, Black, and Caspian seas, the Indian Ocean, and the Persian Gulf. They held Istanbul and Cairo, two of the largest cities on earth, as well as the holy cities of Mecca, Medina, and Jerusalem, and what was the world’s largest Jewish city for over four hundred years, Salonica (Thessaloniki in today’s Greece). From their lowly beginnings as sheep-herders on the long, hard road across Central Asia, the Ottomans ultimately succeeded in proving themselves the closest thing to the Roman Empire since the Roman Empire itself.

One of the interesting things about the Ottomans was how cosmopolitan and relatively tolerant they were.  The Spanish threw the Muslims and Jews out of Spain, but the Ottomans welcomed a variety of peoples, cultures, languages, and religions.  It wasn’t until relatively late that the empire became predominantly Muslim.

Although all religious minorities throughout the Mediterranean were subjected to much hardship, the Ottomans, despite what Innocent thought, never persecuted non-Muslims in the way that the Inquisition persecuted Muslims and Jews—and, despite the centuries of calls for Christian Crusades, Muslims never attempted a war against the whole of Christianity. While considered legally inferior to Muslims, Christians and Jews in the Ottoman Empire (as elsewhere in the lands of Islam) had more rights than other religious minorities around the world. They had their own law courts, freedom to worship in the empire’s numerous synagogues and churches, and communal autonomy. While Christian Europe was killing its religious minorities, the Ottomans protected theirs and welcomed those expelled from Europe. Although the sultans of the empire were Muslims, the majority of the population was not. Indeed, the Ottoman Empire was effectively the Mediterranean’s most populous Christian state: the Ottoman sultan ruled over more Christian subjects than the Catholic pope.

The sultan who moved the Ottoman Empire into the big leagues — tripling its size — was Selim the Grim, who is the central figure of this book (look at his image on the book’s cover and you’ll see how he earned the name).  His son was Suleyman the Magnificent, whose long rule made him the lasting symbol of the empire at its peak.  Another sign of the heterogeneous nature of the Ottomans is that the sultans themselves were of mixed blood.

Because, in this period, Ottoman sultans and princes produced sons not from their wives but from their concubines, all Ottoman sultans were the sons of foreign, usually Christian-born, slaves like Gülbahar [Selim’s mother].

In the exceedingly cosmopolitan empire, the harem ensured that a non-Turkish, non-Muslim, non-elite diversity was infused into the very bloodline of the imperial family. As the son of a mother with roots in a far-off land, a distant culture, and a religion other than Islam, Selim viscerally experienced the ethnically and religiously amalgamated nature of the Ottoman Empire, and grew up in provincial Amasya with an expansive outlook on the fifteenth-century world.

Posted in History, Liberal democracy, Philosophy

Fukuyama — Liberalism and Its Discontents

This post is a brilliant essay by Francis Fukuyama, “Liberalism and Its Discontents.”  In it, he explores the problems facing liberal democracy today.  As always, it is threatened by autocratic regimes around the world.  But what’s new since the fall of the Soviet Union is the threat from illiberal democracy, both at home and abroad, in the form of populism of the right and the left.  
His argument is a strong defense of the liberal democratic order, but it is also a very smart analysis of how liberal democracy has sowed the seeds of its own downfall.  He shows how much it depends on the existence of a vibrant civil society and robust social capital, both of which its own emphasis on individual liberty tends to undermine.  He also shows how its stress on free markets has fostered the rise of the neoliberal religion, which seeks to subordinate the once robust liberal state to the market.  And he notes how its tolerance of diverse viewpoints leaves it vulnerable to illiberal views that seek to wipe it out of existence.
This essay was published in the inaugural issue of the magazine American Purpose on October 5, 2020.  Here’s a link to the original.
It’s well worth your while to give this essay a close read.

Illustration from American Purpose

Liberalism and Its Discontents

The challenges from the left and the right.

Francis Fukuyama

Today, there is a broad consensus that democracy is under attack or in retreat in many parts of the world. It is being contested not just by authoritarian states like China and Russia, but by populists who have been elected in many democracies that seemed secure.

The “democracy” under attack today is a shorthand for liberal democracy, and what is really under greatest threat is the liberal component of this pair. The democracy part refers to the accountability of those who hold political power through mechanisms like free and fair multiparty elections under universal adult franchise. The liberal part, by contrast, refers primarily to a rule of law that constrains the power of government and requires that even the most powerful actors in the system operate under the same general rules as ordinary citizens. Liberal democracies, in other words, have a constitutional system of checks and balances that limits the power of elected leaders.

Democracy itself is being challenged by authoritarian states like Russia and China that manipulate or dispense with free and fair elections. But the more insidious threat arises from populists within existing liberal democracies who are using the legitimacy they gain through their electoral mandates to challenge or undermine liberal institutions. Leaders like Hungary’s Viktor Orbán, India’s Narendra Modi, and Donald Trump in the United States have tried to undermine judicial independence by packing courts with political supporters, have openly broken laws, or have sought to delegitimize the press by labeling mainstream media as “enemies of the people.” They have tried to dismantle professional bureaucracies and to turn them into partisan instruments. It is no accident that Orbán puts himself forward as a proponent of “illiberal democracy.”

The contemporary attack on liberalism goes much deeper than the ambitions of a handful of populist politicians, however. They would not be as successful as they have been were they not riding a wave of discontent with some of the underlying characteristics of liberal societies. To understand this, we need to look at the historical origins of liberalism, its evolution over the decades, and its limitations as a governing doctrine.

What Liberalism Was

Classical liberalism can best be understood as an institutional solution to the problem of governing over diversity. Or to put it in slightly different terms, it is a system for peacefully managing diversity in pluralistic societies. It arose in Europe in the late 17th and 18th centuries in response to the wars of religion that followed the Protestant Reformation, wars that lasted for 150 years and killed major portions of the populations of continental Europe.

While Europe’s religious wars were driven by economic and social factors, they derived their ferocity from the fact that the warring parties represented different Christian sects that wanted to impose their particular interpretation of religious doctrine on their populations. This was a period in which the adherents of forbidden sects were persecuted—heretics were regularly tortured, hanged, or burned at the stake—and their clergy hunted. The founders of modern liberalism like Thomas Hobbes and John Locke sought to lower the aspirations of politics, not to promote a good life as defined by religion, but rather to preserve life itself, since diverse populations could not agree on what the good life was. This was the distant origin of the phrase “life, liberty, and the pursuit of happiness” in the Declaration of Independence. The most fundamental principle enshrined in liberalism is one of tolerance: You do not have to agree with your fellow citizens about the most important things, but only that each individual should get to decide what those things are without interference from you or from the state. The limits of tolerance are reached only when the principle of tolerance itself is challenged, or when citizens resort to violence to get their way.

Understood in this fashion, liberalism was simply a pragmatic tool for resolving conflicts in diverse societies, one that sought to lower the temperature of politics by taking questions of final ends off the table and moving them into the sphere of private life. This remains one of its most important selling points today: If diverse societies like India or the United States move away from liberal principles and try to base national identity on race, ethnicity, or religion, they are inviting a return to potentially violent conflict. The United States suffered such conflict during its Civil War, and Modi’s India is inviting communal violence by shifting its national identity to one based on Hinduism.

There is however a deeper understanding of liberalism that developed in continental Europe that has been incorporated into modern liberal doctrine. In this view, liberalism is not simply a mechanism for pragmatically avoiding violent conflict, but also a means of protecting fundamental human dignity.

The ground of human dignity has shifted over time. In aristocratic societies, it was an attribute only of warriors who risked their lives in battle. Christianity universalized the concept of dignity based on the possibility of human moral choice: Human beings had a higher moral status than the rest of created nature but lower than that of God because they could choose between right and wrong. Unlike beauty or intelligence or strength, this characteristic was universally shared and made human beings equal in the sight of God. By the time of the Enlightenment, the capacity for choice or individual autonomy was given a secular form by thinkers like Rousseau (“perfectibility”) and Kant (a “good will”), and became the ground for the modern understanding of the fundamental right to dignity written into many 20th-century constitutions. Liberalism recognizes the equal dignity of every human being by granting them rights that protect individual autonomy: rights to speech, to assembly, to belief, and ultimately to participate in self-government.

Liberalism thus protects diversity by deliberately not specifying higher goals of human life. This disqualifies religiously defined communities as liberal. Liberalism also grants equal rights to all people considered full human beings, based on their capacity for individual choice. Liberalism thus tends toward a kind of universalism: Liberals care not just about their rights, but about the rights of others outside their particular communities. Thus the French Revolution carried the Rights of Man across Europe. From the beginning the major arguments among liberals were not over this principle, but rather over who qualified as rights-bearing individuals, with various groups—racial and ethnic minorities, women, foreigners, the propertyless, children, the insane, and criminals—excluded from this magic circle.

A final characteristic of historical liberalism was its association with the right to own property. Property rights and the enforcement of contracts through legal institutions became the foundation for economic growth in Britain, the Netherlands, Germany, the United States, and other states that were not necessarily democratic but protected property rights. For that reason liberalism was strongly associated with economic growth and modernization. Rights were protected by an independent judiciary that could call on the power of the state for enforcement. Properly understood, rule of law referred both to the application of day-to-day rules that governed interactions between individuals and to the design of political institutions that formally allocated political power through constitutions. The class that was most committed to liberalism historically was the class of property owners, not just agrarian landlords but the myriads of middle-class business owners and entrepreneurs that Karl Marx would label the bourgeoisie.

Liberalism is connected to democracy, but is not the same thing as it. It is possible to have regimes that are liberal but not democratic: Germany in the 19th century and Singapore and Hong Kong in the late 20th century come to mind. It is also possible to have democracies that are not liberal, like the ones Viktor Orbán and Narendra Modi are trying to create that privilege some groups over others. Liberalism is allied to democracy through its protection of individual autonomy, which ultimately implies a right to political choice and to the franchise. But it is not the same as democracy. From the French Revolution on, there were radical proponents of democratic equality who were willing to abandon liberal rule of law altogether and vest power in a dictatorial state that would equalize outcomes. Under the banner of Marxism-Leninism, this became one of the great fault lines of the 20th century. Even in avowedly liberal states, like many in late 19th- and early 20th-century Europe and North America, there were powerful trade union movements and social democratic parties that were more interested in economic redistribution than in the strict protection of property rights.

Liberalism also saw the rise of another competitor besides communism: nationalism. Nationalists rejected liberalism’s universalism and sought to confer rights only on their favored group, defined by culture, language, or ethnicity. As the 19th century progressed, Europe reorganized itself from a dynastic to a national basis, with the unification of Italy and Germany and with growing nationalist agitation within the multiethnic Ottoman and Austro-Hungarian empires. In 1914 this exploded into the Great War, which killed millions of people and laid the kindling for a second global conflagration in 1939.

The defeat of Germany, Italy, and Japan in 1945 paved the way for a restoration of liberalism as the democratic world’s governing ideology. Europeans saw the folly of organizing politics around an exclusive and aggressive understanding of nation, and created the European Community and later the European Union to subordinate the old nation-states to a cooperative transnational structure. For its part, the United States played a powerful role in creating a new set of international institutions, including the United Nations (and affiliated Bretton Woods organizations like the World Bank and IMF), GATT and the World Trade Organization, and cooperative regional ventures like NATO and NAFTA.

The largest threat to this order came from the former Soviet Union and its allied communist parties in Eastern Europe and the developing world. But the former Soviet Union collapsed in 1991, as did the perceived legitimacy of Marxism-Leninism, and many former communist countries sought to incorporate themselves into existing international institutions like the EU and NATO. This post-Cold War world would collectively come to be known as the liberal international order.

But the period from 1950 to the 1970s was the heyday of liberal democracy in the developed world. Liberal rule of law abetted democracy by protecting ordinary people from abuse: The U.S. Supreme Court, for example, was critical in breaking down legal racial segregation through decisions like Brown v. Board of Education. And democracy protected the rule of law: When Richard Nixon engaged in illegal wiretapping and use of the CIA, it was a democratically elected Congress that helped drive him from power. Liberal rule of law laid the basis for the strong post-World War II economic growth that then enabled democratically elected legislatures to create redistributive welfare states. Inequality was tolerable in this period because most people could see their material conditions improving. In short, this period saw a largely happy coexistence of liberalism and democracy throughout the developed world.

Discontents

Liberalism has been a broadly successful ideology, and one that is responsible for much of the peace and prosperity of the modern world. But it also has a number of shortcomings, some of which were triggered by external circumstances, and others of which are intrinsic to the doctrine. The first lies in the realm of economics, the second in the realm of culture.

The economic shortcomings have to do with the tendency of economic liberalism to evolve into what has come to be called “neoliberalism.” Neoliberalism is today a pejorative term used to describe a form of economic thought, often associated with the University of Chicago or the Austrian school, and economists like Friedrich Hayek, Milton Friedman, George Stigler, and Gary Becker. They sharply denigrated the role of the state in the economy, and emphasized free markets as spurs to growth and efficient allocators of resources. Many of the analyses and policies recommended by this school were in fact helpful and overdue: Economies were overregulated, state-owned companies inefficient, and governments responsible for the simultaneous high inflation and low growth experienced during the 1970s.

But valid insights about the efficiency of markets evolved into something of a religion, in which state intervention was opposed not based on empirical observation but as a matter of principle. Deregulation produced lower airline ticket prices and shipping costs for trucks, but also laid the ground for the great financial crisis of 2008 when it was applied to the financial sector. Privatization was pushed even in cases of natural monopolies like municipal water or telecom systems, leading to travesties like the privatization of Mexico’s TelMex, where a public monopoly was transformed into a private one. Perhaps most important, the fundamental insight of trade theory, that free trade leads to higher wealth for all parties concerned, neglected the further insight that this was true only in the aggregate, and that many individuals would be hurt by trade liberalization. The period from the 1980s onward saw the negotiation of both global and regional free trade agreements that shifted jobs and investment away from rich democracies to developing countries, increasing within-country inequalities. In the meantime, many countries starved their public sectors of resources and attention, leading to deficiencies in a host of public services from education to health to security.

The result was the world that emerged by the 2010s in which aggregate incomes were higher than ever but inequality within countries had also grown enormously. Many countries around the world saw the emergence of a small class of oligarchs, multibillionaires who could convert their economic resources into political power through lobbyists and purchases of media properties. Globalization enabled them to move their money to safe jurisdictions easily, starving states of tax revenue and making regulation very difficult. Globalization also entailed liberalization of rules concerning migration. Foreign-born populations began to increase in many Western countries, abetted by crises like the Syrian civil war that sent more than a million refugees into Europe. All of this paved the way for the populist reaction that became clearly evident in 2016 with Britain’s Brexit vote and the election of Donald Trump in the United States.

The second discontent with liberalism as it evolved over the decades was rooted in its very premises. Liberalism deliberately lowered the horizon of politics: A liberal state will not tell you how to live your life, or what a good life entails; how you pursue happiness is up to you. This produces a vacuum at the core of liberal societies, one that often gets filled by consumerism or pop culture or other random activities that do not necessarily lead to human flourishing. This has been the critique of a group of (mostly) Catholic intellectuals including Patrick Deneen, Sohrab Ahmari, Adrian Vermeule, and others, who feel that liberalism offers “thin gruel” for anyone with deeper moral commitments.

This leads us to a deeper stratum of discontent. Liberal theory, both in its economic and political guises, is built around individuals and their rights, and the political system protects their ability to make these choices autonomously. Indeed, in neoclassical economic theory, social cooperation arises only as a result of rational individuals deciding that it is in their self-interest to work with other individuals. Among conservative intellectuals, Patrick Deneen has gone the furthest by arguing that this whole approach is deeply flawed precisely because it is based on this individualistic premise, and sanctifies individual autonomy above all other goods. Thus for him, the entire American project based as it was on Lockean individualistic principles was misfounded. Human beings for him are not primarily autonomous individuals, but deeply social beings who are defined by their obligations and ties to a range of social structures, from families to kin groups to nations.

This social understanding of human nature was a truism taken for granted by most thinkers prior to the Western Enlightenment. It is also one supported by a great deal of recent research in the life sciences that shows that human beings are hard-wired to be social creatures: Many of our most salient faculties are ones that lead us to cooperate with one another in groups of various sizes and types. This cooperation does not arise necessarily from rational calculation; it is supported by emotional faculties like pride, guilt, shame, and anger that reinforce social bonds. The success of human beings over the millennia that has allowed our species to completely dominate its natural habitat has to do with this aptitude for following norms that induce social cooperation.

By contrast, the kind of individualism celebrated in liberal economic and political theory is a contingent development that emerged in Western societies over the centuries. Its history is long and complicated, but it originated in the inheritance rules set down by the Catholic Church in early medieval times which undermined the extended kinship networks that had characterized Germanic tribal societies. Individualism was further validated by its functionality in promoting market capitalism: Markets worked more efficiently if individuals were not constrained by obligations to kin and other social networks. But this kind of individualism has always been at odds with the social proclivities of human beings. It also does not come naturally to people in certain other non-Western societies like India or the Arab world, where kin, caste, or ethnic ties are still facts of life.

The implication of these observations for contemporary liberal societies is straightforward. Members of such societies want opportunities to bond with one another in a host of ways: as citizens of a nation, members of an ethnic or racial group, residents of a region, or adherents to a particular set of religious beliefs. Membership in such groups gives their lives meaning and texture in a way that mere citizenship in a liberal democracy does not.

Many of the critics of liberalism on the right feel that it has undervalued the nation and traditional national identity: Thus Viktor Orbán has asserted that Hungarian national identity is based on Hungarian ethnicity and on maintenance of traditional Hungarian values and cultural practices. New nationalists like Yoram Hazony celebrate nationhood and national culture as the rallying cry for community, and they bemoan liberalism’s dissolving effect on religious commitment, yearning for a thicker sense of community and shared values, underpinned by virtues in service of that community.

There are parallel discontents on the left. Juridical equality before the law does not mean that people will be treated equally in practice. Racism, sexism, and anti-gay bias all persist in liberal societies, and those injustices have become identities around which people could mobilize. The Western world has seen the emergence of a series of social movements since the 1960s, beginning with the civil rights movement in the United States, and movements promoting the rights of women, indigenous peoples, the disabled, the LGBT community, and the like. The more progress that has been made toward eradicating social injustices, the more intolerable the remaining injustices seem, and thus the greater the moral imperative to mobilize to correct them. The complaint of the left is different in substance but similar in structure to that of the right: Liberal society does not do enough to root out deep-seated racism, sexism, and other forms of discrimination, so politics must go beyond liberalism. And, as on the right, progressives want the deeper bonding and personal satisfaction of associating—in this case, with people who have suffered from similar indignities.

This instinct for bonding and the thinness of shared moral life in liberal societies have shifted global politics on both the right and the left toward a politics of identity and away from the liberal world order of the late 20th century. Liberal values like tolerance and individual freedom are prized most intensely when they are denied: People who live in brutal dictatorships want the simple freedom to speak, associate, and worship as they choose. But over time life in a liberal society comes to be taken for granted and its sense of shared community seems thin. Thus in the United States, arguments between right and left increasingly revolve around identity, and particularly racial identity issues, rather than around economic ideology and questions about the appropriate role of the state in the economy.

There is another significant issue that liberalism fails to grapple adequately with, which concerns the boundaries of citizenship and rights. The premises of liberal doctrine tend toward universalism: Liberals worry about human rights, and not just the rights of Englishmen, or white Americans, or some other restricted class of people. But rights are protected and enforced by states which have limited territorial jurisdiction, and the question of who qualifies as a citizen with voting rights becomes a highly contested one. Some advocates of migrant rights assert a universal human right to migrate, but this is a political nonstarter in virtually every contemporary liberal democracy. At the present moment, the issue of the boundaries of political communities is settled by some combination of historical precedent and political contestation, rather than being based on any clear liberal principle.

Conclusion

Vladimir Putin told the Financial Times that liberalism has become an “obsolete” doctrine. While it may be under attack from many quarters today, it is in fact more necessary than ever.

It is more necessary because it is fundamentally a means of governing over diversity, and the world is more diverse than it ever has been. Democracy disconnected from liberalism will not protect diversity, because majorities will use their power to repress minorities. Liberalism was born in the mid-17th century as a means of resolving religious conflicts, and it was reborn again after 1945 to solve conflicts between nationalisms. Any illiberal effort to build a social order around thick ties defined by race, ethnicity, or religion will exclude important members of the community, and down the road will lead to conflict. Russia itself retains liberal characteristics: Russian citizenship and nationality are not defined by either Russian ethnicity or the Orthodox religion; the Russian Federation’s millions of Muslim inhabitants enjoy equal juridical rights. In situations of de facto diversity, attempts to impose a single way of life on an entire population are a formula for dictatorship.

The only other way to organize a diverse society is through formal power-sharing arrangements among different identity groups that give only a nod toward shared nationality. This is the way that Lebanon, Iraq, Bosnia, and other countries in the Middle East and the Balkans are governed. This type of consociationalism leads to very poor governance and long-term instability, and works poorly in societies where identity groups are not geographically based. This is not a path down which any contemporary liberal democracy should want to tread.

That being said, what kinds of economic and social policies liberal societies should pursue is today a wide-open question. The evolution of liberalism into neoliberalism after the 1980s greatly reduced the policy space available to centrist political leaders, and permitted the growth of huge inequalities that have been fueling populisms of the right and the left. Classical liberalism is perfectly compatible with a strong state that seeks social protections for populations left behind by globalization, even as it protects basic property rights and a market economy. Liberalism is necessarily connected to democracy, and liberal economic policies need to be tempered by considerations of democratic equality and the need for political stability.

I suspect that most religious conservatives critical of liberalism today in the United States and other developed countries do not fool themselves into thinking that they can turn the clock back to a period when their social views were mainstream. Their complaint is a different one: that contemporary liberals are ready to tolerate any set of views, from radical Islam to Satanism, other than those of religious conservatives, and that they find their own freedom constrained.

This complaint is a serious one: Many progressives on the left have shown themselves willing to abandon liberal values in pursuit of social justice objectives. There has been a sustained intellectual attack on liberal principles over the past three decades coming out of academic pursuits like gender studies, critical race theory, postcolonial studies, and queer theory that deny the universalistic premises underlying modern liberalism. The challenge is not simply one of intolerance of other views or “cancel culture” in the academy or the arts. Rather, the challenge is to basic principles: that all human beings are born equal in a fundamental sense, or that a liberal society should strive to be color-blind. These different theories tend to argue that the lived experiences of specific and ever-narrower identity groups are incommensurate, and that what divides them is more powerful than what unites them as citizens. For some in the tradition of Michel Foucault, foundational approaches to cognition coming out of liberal modernity like the scientific method or evidence-based research are simply constructs meant to bolster the hidden power of racial and economic elites.

The issue here is thus not whether progressive illiberalism exists, but rather how great a long-term danger it represents. In countries from India and Hungary to the United States, nationalist conservatives have actually taken power and have sought to use the power of the state to dismantle liberal institutions and impose their own views on society as a whole. That danger is a clear and present one.

Progressive anti-liberals, by contrast, have not succeeded in seizing the commanding heights of political power in any developed country. Religious conservatives are still free to worship in any way they see fit, and indeed are organized in the United States as a powerful political bloc that can sway elections. Progressives exercise power in different and more nuanced ways, primarily through their dominance of cultural institutions like the mainstream media, the arts, and large parts of academia. The power of the state has been enlisted behind their agenda on such matters as striking down via the courts conservative restrictions on abortion and gay marriage and in the shaping of public school curricula. An open question for the future is whether cultural dominance today will ultimately lead to political dominance in the future, and thus a more thoroughgoing rollback of liberal rights by progressives.

Liberalism’s present-day crisis is not new; since its invention in the 17th century, liberalism has been repeatedly challenged by thick communitarians on the right and progressive egalitarians on the left. Liberalism properly understood is perfectly compatible with communitarian impulses and has been the basis for the flourishing of deep and diverse forms of civil society. It is also compatible with the social justice aims of progressives: One of its greatest achievements was the creation of modern redistributive welfare states in the late 20th century. Liberalism’s problem is that it works slowly through deliberation and compromise, and never achieves its communal or social justice goals as completely as their advocates would like. But it is hard to see how the discarding of liberal values is going to lead to anything in the long term other than increasing social conflict and ultimately a return to violence as a means of resolving differences.

Francis Fukuyama, chairman of the editorial board of American Purpose, directs the Center on Democracy, Development and the Rule of Law at Stanford University.

Posted in History, History of education, War

An Affair to Remember: America’s Brief Fling with the University as a Public Good

This post is an essay about the brief but glorious golden age of the US university during the three decades after World War II.  

American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward. Prior to World War II, the federal government showed little interest in universities and provided little support. The war spurred a large investment in defense-based scientific research in universities, and the emergence of the Cold War expanded federal investment exponentially. Unlike a hot war, the Cold War offered an extended period of federally funded research and public subsidy for expanding student enrollments. The result was the golden age of the American university. The good times continued for about 30 years and then began to go bad. The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminating with the fall of the Berlin Wall in 1989. With no money and no enemy, the Cold War university fell as quickly as it arose. Instead of seeing the Cold War university as the norm, we need to think of it as the exception. What we are experiencing now in American higher education is a regression to the mean, in which, over the long haul, Americans have understood higher education to be a distinctly private good.

I originally presented this piece in 2014 at a conference at Catholic University in Leuven, Belgium.  It was then published in the Journal of Philosophy of Education in 2016 (here’s a link to the JOPE version) and then became a chapter in my 2017 book, A Perfect Mess.  Waste not, want not.  Hope you enjoy it.

Cold War

An Affair to Remember:

America’s Brief Fling with the University as a Public Good

David F. Labaree

            American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward.  Prior to World War II, the federal government showed little interest in universities and provided little support.  The war spurred a large investment in defense-based scientific research in universities for reasons of both efficiency and necessity:  universities had the researchers and infrastructure in place and the government needed to gear up quickly.  With the emergence of the Cold War in 1947, the relationship continued and federal investment expanded exponentially.  Unlike a hot war, the Cold War offered a long timeline for global competition between communism and democracy, which meant institutionalizing the wartime model of federally funded research and building a set of structures for continuing investment in knowledge whose military value was unquestioned. At the same time, the communist challenge provided a strong rationale for sending a large number of students to college.  These increased enrollments would educate the skilled workers needed by the Cold War economy, produce informed citizens to combat the Soviet menace, and demonstrate to the world the broad social opportunities available in a liberal democracy.  The result of this enormous public investment in higher education has become known as the golden age of the American university.

            Of course, as is so often the case with a golden age, it didn’t last.  The good times continued for about 30 years and then began to go bad.  The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminating with the fall of the Berlin Wall in 1989.  With no money and no enemy, the Cold War university fell as quickly as it arose.

            In this paper I try to make sense of this short-lived institution.  But I want to avoid the note of nostalgia that pervades many current academic accounts, in which professors and administrators grieve for the good old days of the mid-century university and spin fantasies of recapturing them.  Barring another national crisis of the same dimension, however, it just won’t happen.  Instead of seeing the Cold War university as the norm that we need to return to, I suggest that it’s the exception.  What we’re experiencing now in American higher education is, in many ways, a regression to the mean. 

            My central theme is this:  Over the long haul, Americans have understood higher education as a distinctly private good.  The period from 1940 to 1970 was the one time in our history when the university became a public good.  And now we are back to the place we have always been, where the university’s primary role is to provide individual consumers a chance to gain social access and social advantage.  Since students are the primary beneficiaries, they should also foot the bill, so state subsidies are hard to justify.

            Here is my plan.  First, I provide an overview of the long period before 1940 when American higher education functioned primarily as a private good.  During this period, the beneficiaries changed from the university’s founders to its consumers, but private benefit was the steady state.  This is the baseline against which we can understand the rapid postwar rise and fall of public investment in higher education.  Next, I look at the huge expansion of public funding for higher education starting with World War II and continuing for the next 30 years.  Along the way I sketch how the research university came to enjoy a special boost in support and rising esteem during these decades.  Then I examine the fall from grace toward the end of the century when the public-good rationale for higher ed faded as quickly as it had emerged.  And I close by exploring the implications of this story for understanding the American system of higher education as a whole. 

            During most of its history, the central concern driving the system has not been what it can do for society but what it can do for me.  In many ways, this approach has been highly beneficial.  Much of its success as a system – as measured by wealth, rankings, and citations – derives from its core structure as a market-based system producing private goods for consumers rather than a politically-based system producing public goods for state and society.  But this view of higher education as private property is also a key source of the system’s pathologies.  It helps explain why public funding for higher education is declining and student debt is rising; why private colleges are so much richer and more prestigious than public colleges; why the system is so stratified, with wealthy students attending the exclusive colleges at the top where social rewards are high and with poor students attending the inclusive colleges at the bottom where such rewards are low; and why quality varies so radically, from colleges that ride atop the global rankings to colleges that drift in intellectual backwaters.

The Private Origins of the System

            One of the peculiar aspects of the history of American higher education is that private colleges preceded public.  Another, which in part follows from the first, is that private colleges are also more prestigious.  Nearly everywhere else in the world, state-supported and governed universities occupy the pinnacle of the national system while private institutions play a small and subordinate role, supplying degrees of less distinction and serving students of less ability.  But in the U.S., the top private universities produce more research, gain more academic citations, attract better faculty and students, and graduate more leaders of industry, government, and the professions.  According to the 2013 Shanghai rankings, 16 of the top 25 universities in the U.S. are private, and the concentration is even higher at the top of this list, where private institutions make up 8 of the top 10 (Institute of Higher Education, 2013). 

            This phenomenon is rooted in the conditions under which colleges first emerged in the U.S.  American higher education developed into a system in the early 19th century, when three key elements were in place:  the state was weak, the market was strong, and the church was divided.  The federal government at the time was small and poor, surviving largely on tariffs and the sale of public lands, and state governments were strapped simply trying to supply basic public services.  Colleges were a low priority for government since they served no compelling public need – unlike public schools, which states saw as essential for producing citizens for the republic.  So colleges only emerged when local promoters requested and received a  corporate charter from the state.  These were private not-for-profit institutions that functioned much like any other corporation.  States provided funding only sporadically and only if an institution’s situation turned dire.  And after the Dartmouth College decision in 1819, the Supreme Court made clear that a college’s corporate charter meant that it could govern itself without state interference.  Therefore, in the absence of state funding and control, early American colleges developed a market-based system of higher education. 

            If the roots of the American system were private, they were also extraordinarily local.  Unlike the European university, with its aspirations toward universality and its history of cosmopolitanism, the American college of the nineteenth century was a home-town entity.  Most often, it was founded to advance the parochial cause of promoting a particular religious denomination rather than to promote higher learning.  In a setting where no church was dominant and all had to compete for visibility, stature, and congregants, founding colleges was a valuable way to plant the flag and promote the faith.  This was particularly true when the population was rapidly expanding into new territories to the west, which meant that no denomination could afford to cede the new terrain to competitors.  Starting a college in Ohio was a way to ensure denominational growth, prepare clergy, and spread the word.

            At the same time, colleges were founded with an eye toward civic boosterism, intended to shore up a community’s claim to be a major cultural and commercial center rather than a sleepy farm town.  With a college, a town could claim that it deserved to gain lucrative recognition as a stop on the railroad line, the site for a state prison, the county seat, or even the state capital.  These consequences would elevate the value of land in the town, which would work to the benefit of major landholders.  In this sense, the nineteenth century college, like much of American history, was in part the product of a land development scheme.  In general, these two motives combined: colleges emerged as a way to advance both the interests of particular sects and also the interests of the towns where they were lodged.  Often ministers were also land speculators.  It was always better to have multiple rationales and sources of support than just one (Brown, 1995; Boorstin, 1965; Potts, 1971).  In either case, however, the benefits of founding a college accrued to individual landowners and particular religious denominations and not to the larger public.

As a result of these incentives, church officials and civic leaders around the country scrambled to get a state charter for a college, establish a board of trustees made up of local notables, and install a president.  The latter (usually a clergyman) would rent a local building, hire a small and not very accomplished faculty, and serve as the CEO of a marginal educational enterprise, one that sought to draw tuition-paying students from the area in order to make the college a going concern.  With colleges arising to meet local and sectarian needs, the result was the birth of a large number of small, parochial, and weakly funded institutions in a very short period of time in the nineteenth century, which meant that most of these colleges faced a difficult struggle to survive in the competition with peer institutions.  In the absence of reliable support from church or state, these colleges had to find a way to get by on their own.

            Into this mix of private colleges, state and local governments began to introduce public institutions.  First came a series of universities established by individual states to serve their local populations.  Here too competition was a bigger factor than demand for learning, since a state government increasingly needed to have a university of its own in order to keep up with its neighbors.  Next came a group of land-grant colleges that began to emerge by midcentury.  Funded by grants of land from the federal government, these were public institutions that focused on providing practical education for occupations in agriculture and engineering.  Finally came an array of normal schools, which aimed at preparing teachers for the expanding system of public elementary education.  Like the private colleges, these public institutions emerged to meet the economic needs of towns that eagerly sought to house them.  And although these colleges were creatures of the state, they had only limited public funding and had to rely heavily on student tuition and private donations.

            The rate of growth of this system of higher education was staggering.  At the beginning of the American republic in 1790 the country had 19 institutions calling themselves colleges or universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1880, it had 811, which doesn’t even include the normal schools.  As a comparison, this was five times as many institutions as existed that year in all of Western Europe (Ruegg, 2004).  To be sure, the American institutions were for the most part colleges in name only, with low academic standards, an average student body of 131 (Carter et al., 2006, Table Bc523) and faculty of 14 (Carter et al., 2006, Table Bc571).  But nonetheless this was a massive infrastructure for a system of higher education.

            At a density of 16 colleges per million of population, the U.S. in 1880 had the most overbuilt system of higher education in the world (Collins, 1979, Table 5.2).  Created in order to meet the private needs of land speculators and religious sects rather than the public interest of state and society, the system got way ahead of demand for its services.  That changed in the 1880s.  By adopting parts of the German research university model (in form if not in substance), the top level of the American system acquired a modicum of academic respectability.  In addition – and this is more important for our purposes here – going to college finally came to be seen as a good investment for a growing number of middle-class student-consumers.

            Three factors came together to make college attractive.  Primary among these was the jarring change in the structure of status transmission for middle-class families toward the end of the nineteenth century.  The tradition of passing on social position to your children by transferring ownership of the small family business was under dire threat, as factories were driving independent craft production out of the market and department stores were making small retail shops economically marginal.  Under these circumstances, middle class families began to adopt what Burton Bledstein calls the “culture of professionalism” (Bledstein, 1976).  Pursuing a profession (law, medicine, clergy) had long been an option for young people in this social stratum, but now this attraction grew stronger as the definition of profession grew broader.  With the threat of sinking into the working class becoming more likely, families found reassurance in the prospect of a form of work that would buffer their children from the insecurity and degradation of wage labor.  This did not necessarily mean becoming a traditional professional, where the prospects were limited and entry costs high, but instead it meant becoming a salaried employee in a management position that was clearly separated from the shop floor.  The burgeoning white-collar work opportunities as managers in corporate and government bureaucracies provided the promise of social status, economic security, and protection from downward mobility.  And the best way to certify yourself as eligible for this kind of work was to acquire a college degree.

            Two other factors added to the attractions of college.  One was that a high school degree – once a scarce commodity that became a form of distinction for middle class youth during the nineteenth century – was in danger of becoming commonplace.  Across the middle of the century, enrollments in primary and grammar schools were growing fast, and by the 1880s they were filling up.  By 1900, the average American 20-year-old had eight years of schooling, which meant that political pressure was growing to increase access to high school (Goldin & Katz, 2008, p. 19).  This started to happen in the 1880s, and for the next 50 years high school enrollments doubled every decade.  The consequences were predictable.  If the working class was beginning to get a high school education, then middle class families felt compelled to preserve their advantage by pursuing college.

            The last piece that fell into place to increase the drawing power of college for middle class families was the effort by colleges in the 1880s and 90s to make undergraduate enrollment not just useful but enjoyable.  Ever desperate to find ways to draw and retain students, colleges responded to competitive pressure by inventing the core elements that came to define the college experience for American students in the twentieth century.  These included fraternities and sororities, pleasant residential halls, a wide variety of extracurricular entertainments, and – of course – football.  College life became a major focus of popular magazines, and college athletic events earned big coverage in newspapers.  In remarkably short order, going to college became a life stage in the acculturation of middle class youth.  It was the place where you could prepare for a respectable job, acquire sociability, learn middle class cultural norms, have a good time, and meet a suitable spouse.  And, for those who were so inclined, there was the potential fringe benefit of getting an education.

            Spurred by student desire to get ahead or stay ahead, college enrollments started growing quickly.  They were at 116,000 in 1879, 157,000 in 1889, 238,000 in 1899, 355,000 in 1909, 598,000 in 1919, 1,104,000 in 1929, and 1,494,000 in 1939 (Carter et al., 2006, Table Bc523).  This was a rate of increase of more than 50 percent a decade – not as fast as the increases that would come at midcentury, but still impressive.  During this same 60-year period, total college enrollment as a proportion of the population 18-to-24 years old rose from 1.6 percent to 9.1 percent (Carter et al., 2006, Table Bc524).  By 1930, the U.S. had three times the population of the U.K. and 20 times the number of college students (Levine, 1986, p. 135).  And the reason they were enrolling in such numbers was clear.  According to studies in the 1920s, almost two-thirds of undergraduates were there to get ready for a particular job, mostly in the lesser professions and middle management (Levine, 1986, p. 40).  Business and engineering were the most popular majors and the social sciences were on the rise.  As David Levine put it in his important book about college in the interwar years, “Institutions of higher learning were no longer content to educate; they now set out to train, accredit, and impart social status to their students” (Levine, 1986, p. 19).

            Enrollments were growing in public colleges faster than in private colleges, but only by a small amount.  In fact it wasn’t until 1931 – for the first time in the history of American higher education – that the public sector finally accounted for a majority of college students (Carter et al., 2006, Tables Bc531 and Bc534).  The increases occurred across all levels of the system, including the top public research universities; but the largest share of enrollments flowed into the newer institutions at the bottom of the system:  the state colleges that were emerging from normal schools, urban commuter colleges (mostly private), and an array of public and private junior colleges that offered two-year vocational programs. 

            For our purposes today, the key point is this:  The American system of colleges and universities that emerged in the nineteenth century and continued until World War II was a market-driven structure that construed higher education as a private good.  Until around 1880, the primary benefits of the system went to the people who founded individual institutions – the land speculators and religious sects for whom a new college brought wealth and competitive advantage.  This explains why colleges emerged in such remote places long before there was substantial student demand.  The role of the state in this process was muted.  The state was too weak and too poor to provide strong support for higher education, and there was no obvious state interest that argued for doing so.  Until the decade before the war, most student enrollments were in the private sector, and even at the war’s start the majority of institutions in the system were private (Carter et al., 2006, Tables Bc510 to Bc520).  

            After 1880, the primary benefits of the system went to the students who enrolled.  For them, it became the primary way to gain entry to the relatively secure confines of salaried work in management and the professions.  For middle class families, college in this period emerged as the main mechanism for transmitting social advantage from parents to children; and for others, it became the object of aspiration as the place to get access to the middle class.  State governments put increasing amounts of money into support for public higher education, not because of the public benefits it would produce but because voters demanded increasing access to this very attractive private good.

The Rise of the Cold War University

            And then came the Second World War.  There is no need here to recount the devastation it brought about or the nightmarish residue it left.  But it’s worth keeping in mind the peculiar fact that this conflict is remembered fondly by Americans, who often refer to it as the Good War (Terkel, 1997).  The war cost a lot of American lives and money, but it also brought a lot of benefits.  It didn’t hurt, of course, to be on the winning side and to have all the fighting take place on foreign territory.  And part of the positive feeling associated with the war comes from the way it thrust the country into a new role as the dominant world power.  But perhaps even more, the warm feeling arises from the memory of this as a time when the country came together around a common cause.  For citizens of the United States – the most liberal of liberal democracies, where private liberty is much more highly valued than public loyalty – it was a novel and exciting feeling to rally around the federal government.  Usually viewed with suspicion as a threat to the rights of individuals and a drain on private wealth, the American government in the 1940s took on the mantle of good in the fight against evil.  Its public image became the resolute face of a white-haired man dressed in red, white, and blue, who pointed at the viewer in a famous recruiting poster.  Its slogan: “Uncle Sam Wants You.”

            One consequence of the war was a sharp increase in the size of the U.S. government.  The historically small federal state had started to grow substantially in the 1930s as a result of the New Deal effort to spend the country out of a decade-long economic depression, a time when spending doubled.  But the war raised the level of federal spending by a factor of seven, from $1,000 to $7,000 per capita.  After the war, the level dropped back to $2,000; and then the onset of the Cold War sent federal spending into a sharp, and this time sustained, increase – reaching $3,000 in the 50s, $4,000 in the 60s, and regaining the previous high of $7,000 in the 80s, during the last days of the Soviet Union (Garrett & Rhine, 2006, figure 3).

            If for Americans in general World War II carries warm associations, for people in higher education it marks the beginning of the Best of Times – a short but intense period of generous public funding and rapid expansion.  Initially, of course, the war brought trouble, since it sent most prospective college students into the military.  Colleges quickly adapted by repurposing their facilities for military training and other war-related activities.  But the real long-term benefits came when the federal government decided to draw higher education more centrally into the war effort – first, as the central site for military research and development; and second, as the place to send veterans when the war was over.  Let me say a little about each.

            In the first half of the twentieth century, university researchers had to scrabble around looking for funding, forced to rely on a mix of foundations, corporations, and private donors.  The federal government saw little benefit in employing their services.  In a particularly striking case at the start of World War I, the professional association of academic chemists offered its help to the War Department, which declined "on the grounds that it already had a chemist in its employ" (Levine, 1986, p. 51).[1]  The existing model was for government to maintain its own modest research facilities instead of relying on the university. 

            The scale of the next war changed all this.  At the very start, a former engineering dean from MIT, Vannevar Bush, took charge of mobilizing university scientists behind the war effort as head of the Office of Scientific Research and Development.  The model he established for managing the relationship between government and researchers set the pattern for university research that still exists in the U.S. today: Instead of setting up government centers, the idea was to farm out research to universities.  Issue a request for proposals to meet a particular research need; award the grant to the academic researchers who seemed best equipped to meet this need; and pay 50 percent or more overhead to the university for the facilities that researchers would use.  This method drew on the expertise and facilities that already existed at research universities, which both saved the government from having to maintain a costly permanent research operation and also gave it the flexibility to draw on the right people for particular projects.  For universities, it provided a large source of funds, which enhanced their research reputations, helped them expand faculty, and paid for infrastructure.  It was a win-win situation.  It also established the entrepreneurial model of the university researcher in perpetual search for grant money.  And for the first time in the history of American higher education, the university was being considered a public good, whose research capacity could serve the national interest by helping to win a war. 

            If universities could meet one national need during the war by providing military research, they could meet another national need after the war by enrolling veterans.  The GI Bill of Rights, passed by Congress in 1944, was designed to pay off a debt and resolve a manpower problem.  Its official name, the Servicemen's Readjustment Act of 1944, reflects both aims.  By the end of the war, 15 million men and women had served in the military, and they clearly deserved a reward for their years of service to the country.  The bill offered them the opportunity to continue their education at federal expense, which included attending the college of their choice.  This opportunity also offered another public benefit, since it responded to deep concern about the ability of the economy to absorb this flood of veterans.  The country had been sliding back into depression at the start of the war, and there was real fear of massive unemployment at war's end.  The strategy worked.  Under the GI Bill, about two million veterans eventually attended some form of college.  By 1948, when veteran enrollment peaked, American colleges and universities had one million more students than 10 years earlier (Geiger, 2004, pp. 40-41; Carter et al., 2006, Table Bc523).  This was another win-win situation.  The state rewarded national service, headed off mass unemployment, and produced a pile of human capital for future growth.  Higher education got a flood of students who could pay their own way.  The worry, of course, was what was going to happen when the wartime research contracts ended and the veterans graduated. 

            That's where the Cold War came in to save the day.  And the timing was perfect.  The first major action of the new conflict – the Berlin Blockade – came in 1948, the same year that veteran enrollments at American colleges reached their peak.  If World War II was good for American higher education, the Cold War was a bonanza.  The hot war meant boom and bust – providing a short surge of money and students followed by a sharp decline.  But the Cold War was a prolonged effort to contain Communism.  It was sustainable because actual combat was limited and often carried out by proxies.  For universities this was a gift that, for 30 years, kept on giving.  The military threat was massive in scale – nothing less than the threat of nuclear annihilation.  And supplementing it was an ideological challenge – the competition between two social and political systems for hearts and minds.  As a result, the government needed top universities to provide it with massive amounts of scientific research that would support the military effort.  And it also needed all levels of the higher education system to educate the large numbers of citizens required to deal with the ideological menace.  We needed to produce the scientists and engineers who would allow us to compete with Soviet technology.  We needed to provide high-level human capital in order to promote economic growth and demonstrate the economic superiority of capitalism over communism.  And we needed to provide educational opportunity for our own racial minorities and lower classes in order to show that our system was not only effective but also fair and equitable.  This would be a powerful weapon in the effort to win over the third world with the attractions of the American Way.  The Cold War American government treated the higher education system as a highly valuable public good, one that would make a large contribution to the national interest; and the system was pleased to be the object of so much federal largesse (Loss, 2012).

            On the research side, the impact of the Cold War on American universities was dramatic.  The best way to measure this is by examining patterns of federal research and development spending, which trace the ebb and flow of national threats across the last 60 years.  Funding rose slowly from $13 billion in 1953 (in constant 2014 dollars) until the Sputnik crisis (after the Soviets succeeded in placing the first satellite in earth orbit), when funding jumped to $40 billion in 1959 and rose rapidly to a peak of $88 billion in 1967.  Then the amount backed off to $66 billion in 1975, climbing to a new peak of $104 billion in 1990 just before the collapse of the Soviet Union and then dropping off.  It started growing again in 2002 after the attack on the Twin Towers, reaching an all-time high of $151 billion in 2010; it has been declining ever since (AAAS, 2014).[2] 

            Initially, defense funding accounted for 85 percent of federal research funding, gradually falling back to about half in 1967, as nondefense funding increased, but remaining in a solid majority position up until the present.  For most of the period after 1957, however, the largest element in nondefense spending was research on space technology, which arose directly from the Soviet Sputnik threat.  If you combine defense and space appropriations, this accounts for about three-quarters of federal research funding until 1990.  Defense research closely tracked perceived threats in the international environment, dropping by 20 percent after 1989 and then making a comeback in 2001.  Overall, federal funding during the Cold War for research of all types grew in constant dollars from $13 billion in 1953 to $104 billion in 1990, an increase of 700 percent.  These were good times for university researchers (AAAS, 2014).

            At the same time that research funding was growing rapidly, so were college enrollments.  The number of students in American higher education grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969.  The number hit 11.6 million in 1979 and then began to slow down – creeping up to 13.5 million in 1989 and leveling off at around 14 million in the 1990s (Carter et al., 2006, Table Bc523; NCES, 2014, Table 303.10).  During the 30 years between 1949 and 1979, enrollments increased by more than 9 million students, a growth of almost 400 percent.  And the bulk of the enrollment increases in the last two decades were in part-time students and at two-year colleges.  Among four-year institutions, the primary growth occurred not at private or flagship public universities but at regional state universities, the former normal schools.  The Cold War was not just good for research universities; it was also great for institutions of higher education all the way down the status ladder.

            In part we can understand this radical growth in college enrollments as an extension of the long-term surge in consumer demand for American higher education as a private good.  Recall that enrollments started accelerating late in the nineteenth century, when college attendance began to provide an edge in gaining middle-class jobs.  This meant that attending college gave middle-class families a way to pass on social advantage, while attending high school gave working-class families a way to gain social opportunity.  But by 1940, high school enrollments had become universal.  So for working-class families, the new zone of social opportunity became higher education.  This increase in consumer demand provided a market-based explanation for at least part of the flood of postwar enrollments.

            At the same time, however, the Cold War provided a strong public rationale for broadening access to college.  In 1946, President Harry Truman appointed a commission to provide a plan for expanding access to higher education, which was the first time in American history that a president had sought advice about education at any level.  The result was a six-volume report with the title Higher Education for American Democracy.  It's no coincidence that the report was issued in 1947, the starting point of the Cold War.  The authors framed the report around the new threat of atomic war, arguing that "It is essential today that education come decisively to grips with the world-wide crisis of mankind" (President's Commission, 1947, vol. 1, p. 6).  What they proposed as a public response to the crisis was a dramatic increase in access to higher education.

            The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate school, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitudes and interests.
        This means that we shall aim at making higher education equally available to all young people, as we now do education in the elementary and high schools, to the extent that their capacity warrants a further social investment in their training (President’s Commission, 1947, vol. 1, p. 36).

Tellingly, the report devotes a lot of space to exploring the existing barriers to educational opportunity posed by class and race – exactly the kinds of issues that were making liberal democracies look bad in light of the egalitarian promise of communism.

Decline of the System’s Public Mission

            So in the mid twentieth century, Americans went through an intense but brief infatuation with higher education as a public good.  Somehow college was going to help save us from the communist menace and the looming threat of nuclear war.  Like World War II, the Cold War brought together a notoriously individualistic population around the common goal of national survival and the preservation of liberal democracy.  It was a time when every public building had an area designated as a bomb shelter.  In the elementary school I attended in the 1950s, I can remember regular air raid drills.  The alarm would sound and teachers would lead us downstairs to the basement, whose concrete-block walls were supposed to protect us from a nuclear blast.  Although the drills did nothing to preserve life, they did serve an important social function.  Like Sunday church services, these rituals drew individuals together into communities of faith where we enacted our allegiance to a higher power. 

            For American college professors, these were the glory years, when fear of annihilation gave us a glamorous public mission and what seemed like an endless flow of public funds and funded students.  But it did not – and could not – last.  Wars can bring great benefits to the home front, but then they end.  The Cold War lasted longer than most, but this longevity came at the expense of intensity.  By the 1970s, the U.S. had lived with the nuclear threat for 30 years without any sign that the worst case was going to materialize.  You can only stand guard for so long before attention begins to flag and ordinary concerns start to push back to the surface.  In addition, waging war is extremely expensive, draining both public purse and public sympathy.  The two Cold War conflicts that engaged American troops cost a lot, stirred strong opposition, and ended badly, providing neither the idealistic glow of the Good War nor the satisfying closure of unconditional surrender by the enemy.  Korea ended with a stalemate and the return to the status quo ante bellum.  Vietnam ended with defeat and the humiliating image in 1975 of the last Americans being plucked off a rooftop in Saigon – which the victors then promptly renamed Ho Chi Minh City.

            The Soviet menace and the nuclear threat persisted, but in a form that – after the grim experience of war in the rice paddies – seemed distant and slightly unreal.  Add to this the problem that, as a tool for defeating the enemy, the radical expansion of higher education by the 70s did not appear to be a cost-effective option.  Higher ed is a very labor-intensive enterprise, in which size brings few economies of scale, and its public benefits in the war effort were hard to pin down.  As the national danger came to seem more remote, the costs of higher ed became more visible and more problematic.  Look around any university campus, and the primary beneficiaries of public largesse seem to be private actors – the faculty and staff who work there and the students whose degrees earn them higher income.  So about 30 years into the Cold War, the question naturally arose:  Why should the public pay so much to provide cushy jobs for the first group and to subsidize the personal ambition of the second?  If graduates reap the primary benefits of a college education, shouldn’t they be paying for it rather than the beleaguered taxpayer?

            The 1970s marked the beginning of the American tax revolt, and not surprisingly this revolt emerged first in the bellwether state of California.  Fueled by booming defense plants and high immigration, California had a great run in the decades after 1945.  During this period, the state developed the most comprehensive system of higher education in the country.  In 1960 it formalized this system with a Master Plan that offered every Californian the opportunity to attend college in one of three state systems.  The University of California focused on research, graduate programs, and educating the top high school graduates.  California State University (developed mostly from former teachers colleges) focused on undergraduate programs for the second tier of high school graduates.  The community college system offered the rest of the population two-year programs for vocational training and possible transfer to one of the two university systems.  By 1975, there were 9 campuses in the University of California, 23 in California State University, and xx in the community college system, with a total enrollment across all systems of 1.5 million students – accounting for 14 percent of the college students in the U.S. (Carter et al., 2006, Table Bc523; Douglass, 2000, Table 1).  Not only was the system enormous, but the Master Plan declared it illegal to charge California students tuition.  The biggest and best public system of higher education in the country was free.

            And this was the problem.  What allowed the system to grow so fast was a state fiscal regime that was quite rare in the American context – one based on high public services supported by high taxes.  After enjoying the benefits of this combination for a few years, taxpayers suddenly woke up to the realization that this approach to paying for higher education was at its core un-American.  For a country deeply grounded in liberal democracy, the system of higher ed for all at no cost to the consumer looked a lot like socialism.  So, of course, it had to go.  In the mid-1970s the country's first taxpayer revolt emerged in California, culminating in a successful campaign in 1978 to pass a statewide initiative that put a limit on increases in property taxes.  Other tax limitation initiatives followed (Martin, 2008).  As a result, the average state appropriation per student at the University of California dropped from about $3,400 (in 1960 dollars) in 1987 to $1,100 in 2010, a decline of 68 percent (UC Data Analysis, 2014).  This quickly led to a steady increase in fees charged to students at California's colleges and universities.  (It turned out that tuition was illegal but demanding fees from students was not.)  In 1960 dollars, the annual fees for in-state undergraduates at the University of California rose from $317 in 1987 to $1,122 in 2010, an increase of more than 250 percent (UC Data Analysis, 2014).  This pattern of tax limitations and tuition increases spread across the country.  Nationwide over the same period, the average state appropriation per student at a four-year public college fell from $8,500 to $5,900 (in 2012 dollars), a decline of 31 percent, while average undergraduate tuition doubled, rising from $2,600 to $5,200 (SHEEO, 2013, Figure 3).

            The decline in the state share of higher education costs was most pronounced at the top public research universities, which had a wider range of income sources.  By 2009, the average such institution was receiving only 25 percent of its revenue from state government (National Science Board, 2012, Figure 5).  An extreme case is the University of Virginia, where in 2013 the state provided less than six percent of the university's operating budget (University of Virginia, 2014). 

            While these changes were happening at the state level, the federal government was also backing away from its Cold War generosity to students in higher education.  Legislation such as the National Defense Education Act (1958) and the Higher Education Act (1965) had provided support for students through a roughly equal balance of grants and loans.  But in 1980 the election of Ronald Reagan as president meant that the push to lower taxes would become national policy.  At this point, support for students shifted away from grants and toward federally guaranteed loans.  The idea was that a college degree was a great investment for students, which would pay long-term economic dividends, so they should shoulder an increasing share of the cost.  The proportion of total student support in the form of loans was 54 percent in 1975, 67 percent in 1985, and 78 percent in 1995, and the ratio has remained at that level ever since (McPherson & Schapiro, 1998, Table 3.3; College Board, 2013, Table 1).  By 1995, students were borrowing $41 billion to attend college, a figure that grew to $89 billion in 2005 (College Board, 2014, Table 1).  At present, about 60 percent of all students accumulate college debt, most of it in the form of federal loans, and the total student debt load has passed $1 trillion.

            At the same time that the federal government was cutting back on funding college students, it was also reducing funding for university research.  As I mentioned earlier, federal research grants in constant dollars peaked at about $100 billion in 1990, the year after the fall of the Berlin Wall – a good marker for the end of the Cold War.  At this point defense accounted for about two-thirds of all university research funding – three-quarters if you include space research.  Defense research declined by about 20 percent during the 90s and didn't start rising again substantially until 2002, the year after the fall of the Twin Towers and the beginning of the new existential threat known as the War on Terror.  Defense research reached a new peak in 2009 at a level about a third above the Cold War high, and it has been declining steadily ever since.  Increases in nondefense research helped compensate for only a part of the loss of defense funds (AAAS, 2014).

Conclusion

            The American system of higher education came into existence as a distinctly private good.  It arose in the nineteenth century to serve the pursuit of sectarian advantage and land speculation, and then in the twentieth century it evolved into a system for providing individual consumers a way to get ahead or stay ahead in the social hierarchy.  Quite late in the game, it took World War II to give higher education an expansive national mission and reconstitute it as a public good.  But hot wars are unsustainable for long, so in 1945 the system was sliding quickly back toward public irrelevance before it was saved by the timely arrival of the Cold War.  As I have shown, the Cold War was very, very good for the American system of higher education.  It produced a massive increase in funding by federal and state governments, both for university research and for college student subsidies, and – more critically – it sustained this support for a period of three decades.  But these golden years gradually gave way to a national wave of taxpayer fatigue and the surprise collapse of the Soviet Union.  With the nation strapped for funds and its global enemy dissolved, there was no longer an urgent need to enlist America's colleges and universities in a grand national cause.  The result was a decade of declining research support and static student enrollments.  In 2002 the wars in Afghanistan and Iraq brought a momentary surge in both, but the surge peaked after only eight years and then gave way to renewed decline.  Increasingly, higher education is returning to its roots as a private good.

            So what are we to take away from this story of the rise and fall of the Cold War university?  One conclusion is that the golden age of the American university in the mid twentieth century was a one-off event.  Wars may be endemic, but the Cold War was unique.  So American university administrators and professors need to stop pining for a return to the good old days and learn how to live in the post-Cold-War era.  The good news is that the surge in public investment in higher education left the system in a radically stronger condition than it was in before World War II.  Enrollments have gone from 1.5 million to 21 million; federal research funding has gone from zero to $135 billion; federal grants and loans to college students have gone from zero to $170 billion (NCES, 2014, Table 303.10; AAAS, 2014; College Board, 2014, Table 1).  And the American system of colleges and universities went from an international also-ran to a powerhouse in the world economy of higher education.  Even though all of the numbers are now dropping, they are dropping from a very high level, which is the legacy of the Cold War.  So really, we should stop whining.  We should just say thanks to the bomb for all that it did for us and move on.

            The bad news, of course, is that the numbers really are going down.  Government funding for research is declining and there is no prospect for a turnaround in the foreseeable future.  This is a problem because the federal government is the primary source of funds for basic research in the U.S.; corporations are only interested in investing in research that yields immediate dividends.  During the Cold War, research universities developed a business plan that depended heavily on external research funds to support faculty, graduate students, and overhead.  That model is now broken.  The cost of pursuing a college education is increasingly being borne by the students themselves, as states are paying a declining share of the costs of higher education.  Tuition is rising and as a result student loans are rising.  Public research universities are in a particularly difficult position because their state funding is falling most rapidly.  According to one estimate, at the current rate of decline the average state fiscal support for public higher education will reach zero in 2059 (Mortenson, 2012). 

            But in the midst of all of this bad news, we need to keep in mind that the American system of higher education has a long history of surviving and even thriving under conditions of at best modest public funding.  At its heart, this is a system of higher education based not on the state but on the market.  In the hardscrabble nineteenth century, the system developed mechanisms for getting by without the steady support of funds from church or state.  It learned how to attract tuition-paying students, give them the college experience they wanted, get them to identify closely with the institution, and then milk them for donations after they graduate.  Football, fraternities, logo-bearing T-shirts, and fund-raising operations all paid off handsomely.  It learned how to adapt quickly to trends in the competitive environment, whether through the adoption of intercollegiate football, the establishment of research centers to capitalize on funding opportunities, or the provision of food courts and rock-climbing walls for students.  Public institutions have a long history of behaving much like private institutions because they were never able to count on continuing state funding. 

            This system has worked well over the years.  Along with the Cold War, it has enabled American higher education to achieve an admirable global status.  By the measures of citations, wealth, drawing power, and Nobel Prizes, the system has been very effective.  But it comes with enormous costs.  Private universities have serious advantages over public universities, as we can see from university rankings.  The system is the most stratified structure of higher education in the world.  Top universities in the U.S. get an unacknowledged subsidy from the colleges at the bottom of the hierarchy, which receive less public funding, charge less tuition, and receive less generous donations.  And students sort themselves into institutions whose place in the college hierarchy parallels their own place in the status hierarchy.  Students with more cultural and economic capital gain greater social benefit from the system than those with less, since they go to college more often, attend the best institutions, and graduate at a much higher rate.  Nearly everyone can go to college in the U.S., but the colleges that are most accessible provide the least social advantage. 

            So, conceived and nurtured into maturity as a private good, the American system of higher education remains a market-based organism.  It took the threat of nuclear war to turn it – briefly – into a public good.  But these days seem as remote as the time when schoolchildren huddled together in a bomb shelter. 

References

American Association for the Advancement of Science. (2014). Historical Trends in Federal R & D: By Function, Defense and Nondefense R & D, 1953-2015.  http://www.aaas.org/page/historical-trends-federal-rd (accessed 8-21-14).

Bledstein, B. J. (1976). The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York:  W. W. Norton.

Boorstin, D. J. (1965). Culture with Many Capitals: The  Booster College. In The Americans: The National Experience (pp. 152-161). New York: Knopf Doubleday.

Brown, D. K. (1995). Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism. New York: Teachers College Press.

Carter, S. B., et al. (2006). Historical Statistics of the United States, Millennial Edition Online. New York: Cambridge University Press.

College Board. (2013). Trends in student aid, 2013. New York: The College Board.

College Board. (2014). Trends in Higher Education: Total Federal and Nonfederal Loans over Time.  https://trends.collegeboard.org/student-aid/figures-tables/growth-federal-and-nonfederal-loans-over-time (accessed 9-4-14).

Collins, R. (1979). The Credential Society: An Historical Sociology of Education and Stratification. New York: Academic Press.

Douglass, J. A. (2000). The California Idea and American Higher Education: 1850 to the 1960 Master Plan. Stanford, CA: Stanford University Press.

Garrett, T. A., & Rhine, R. M. (2006).  On the Size and Growth of Government. Federal Reserve Bank of St. Louis Review, 88:1 (pp. 13-30).

Geiger, R. L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Goldin, C. & Katz, L. F. (2008). The Race between Education and Technology. Cambridge: Belknap Press of Harvard University Press.

Institute of Higher Education, Shanghai Jiao Tong University.  (2013).  Academic Ranking of World Universities – 2013.  http://www.shanghairanking.com/ARWU2013.html (accessed 6-11-14).

Levine, D. O. (1986). The American College and the Culture of Aspiration, 1914-1940. Ithaca: Cornell University Press.

Loss, C. P. (2012). Between Citizens and the State: The Politics of American Higher Education in the 20th Century. Princeton, NJ: Princeton University Press.

Martin, I. W. (2008). The Permanent Tax Revolt: How the Property Tax Transformed American Politics. Stanford, CA: Stanford University Press.

McPherson, M. S. & Schapiro, M. O.  (1999).  Reinforcing Stratification in American Higher Education:  Some Disturbing Trends.  Stanford: National Center for Postsecondary Improvement.

Mortenson, T. G. (2012).  State Funding: A Race to the Bottom.  The Presidency (winter).  http://www.acenet.edu/the-presidency/columns-and-features/Pages/state-funding-a-race-to-the-bottom.aspx (accessed 10-18-14).

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

National Science Board. (2012). Diminishing Funding Expectations: Trends and Challenges for Public Research Universities. Arlington, VA: National Science Foundation.

Potts, D. B. (1971).  American Colleges in the Nineteenth Century: From Localism to Denominationalism. History of Education Quarterly, 11: 4 (pp. 363-380).

President’s Commission on Higher Education. (1947). Higher Education for American Democracy: A Report. Washington, DC: US Government Printing Office.

Rüegg, W. (2004). European Universities and Similar Institutions in Existence between 1812 and the End of 1944: A Chronological List: Universities.  In Walter Rüegg, A History of the University in Europe, vol. 3. London: Cambridge University Press.

State Higher Education Executive Officers (SHEEO). (2013). State Higher Education Finance, FY 2012. www.sheeo.org/sites/default/files/publications/SHEF-FY12.pdf (accessed 9-8-14).

Terkel, S. (1997). The Good War: An Oral History of World War II. New York: New Press.

Tewksbury, D. G. (1932). The Founding of American Colleges and Universities before the Civil War. New York: Teachers College Press.

U of California Data Analysis. (2014). UC Funding and Fees Analysis.  http://ucpay.globl.org/funding_vs_fees.php (accessed 9-2-14).

University of Virginia (2014). Financing the University 101. http://www.virginia.edu/finance101/answers.html (accessed 9-2-14).

[1] Under pressure of the war effort, the department eventually relented and enlisted the help of chemists to study gas warfare.  But the initial response is telling.

[2] Not all of this funding went into the higher education system.  Some went to stand-alone research organizations such as the Rand Corporation and the American Institutes for Research.  But these organizations in many ways function as an adjunct to higher education, with researchers moving freely between them and the university.

Posted in Ed schools, Higher Education, History

Too Easy a Target: The Trouble with Ed Schools and the Implications for the University

This post is a piece I published in Academe (the journal of the AAUP) in 1999.  It provides an overview of the argument in my 2004 book, The Trouble with Ed Schools.  I reproduce it here as a public service:  if you read this, you won't need to read my book, much less buy it.  You're welcome.  Also, looking through it 20 years later, I was pleasantly surprised to find that it was kind of a fun read.  Here's a link to the original.

The book and the article tell the story of the poor beleaguered ed school, maligned by one and all.  It's a story of irony, in which an institution does what everyone asked of it and is thoroughly punished for the effort.  And it's also a reverse Horatio Alger story, in which the beggar boy never makes it.  Here's a glimpse of the argument, which starts with the ed school's terrible reputation:

So how did things get this bad? No occupational group or subculture acquires a label as negative as this one without a long history of status deprivation. Critics complain about the weakness and irrelevance of teacher ed, but they rarely look at the reasons for its chronic status problems. If they did, they might find an interesting story, one that presents a more sympathetic, if not more flattering, portrait of the education school. They would also find, however, a story that portrays the rest of academe in a manner that is less self-serving than in the standard account. The historical part of this story focuses on the way that American policy makers, taxpayers, students, and universities collectively produced exactly the kind of education school they wanted. The structural part focuses on the nature of teaching as a form of social practice and the problems involved in trying to prepare people to pursue this practice.

Enjoy.

Ed Schools Cover

Too Easy a Target:

The Trouble with Ed Schools and the Implications for the University

By David F. Labaree

This is supposed to be the era of political correctness on American university campuses, a time when speaking ill of oppressed minorities is taboo. But while academics have to tiptoe around most topics, there is still one subordinate group that can be shelled with impunity — the sad sacks who inhabit the university’s education school. There is no need to take aim at this target because it is too big to miss, and there is no need to worry about hitting innocent bystanders because everyone associated with the ed school is understood to be guilty as charged.

Of course, education in general is a source of chronic concern and an object of continuous criticism for most Americans. Yet, as the annual Gallup Poll of attitudes toward education shows, citizens give good grades to their local schools at the same time that they express strong fears about the quality of public education elsewhere in the country. The vision is one of general threats to education that have not yet reached the neighborhood school but may do so in the near future. These threats include everything from the multicultural curriculum to the decline in the family, the influence of television, and the consequences of chronic poverty.

One such threat is the hapless education school, whose alleged incompetence and supposedly misguided ideas are seen as producing poorly prepared teachers and inadequate curricula. For the public, this institution is remote enough to be suspect (unlike the local school) and accessible enough to be scorned (unlike the more arcane university). For the university faculty, it is the ideal scapegoat, allowing blame for problems with schools to fall upon teacher education in particular rather than higher education in general.

For years, writers from right to left have been making the same basic complaints about the inferior quality of education faculties, the inadequacy of education students, and, to quote James Koerner’s 1963 classic, The Miseducation of American Teachers, their “puerile, repetitious, dull, and ambiguous” curriculum. This kind of complaining about ed schools is as commonplace as griping about the cold in the middle of winter. But something new has arisen in the defamatory discourse about these beleaguered institutions: the attacks are now coming from their own leaders. The victims are joining the victimizers.

So how did things get this bad? No occupational group or subculture acquires a label as negative as this one without a long history of status deprivation. Critics complain about the weakness and irrelevance of teacher ed, but they rarely look at the reasons for its chronic status problems. If they did, they might find an interesting story, one that presents a more sympathetic, if not more flattering, portrait of the education school. They would also find, however, a story that portrays the rest of academe in a manner that is less self-serving than in the standard account. The historical part of this story focuses on the way that American policy makers, taxpayers, students, and universities collectively produced exactly the kind of education school they wanted. The structural part focuses on the nature of teaching as a form of social practice and the problems involved in trying to prepare people to pursue this practice.

Decline of Normal Schools

Most education schools grew out of the normal schools that emerged in the second half of the nineteenth century. Their founders initially had heady dreams that these schools could become model institutions that would establish high-quality professional preparation for teachers along with a strong professional identity. For a time, some of the normal schools came close to realizing these dreams.

Soon, however, burgeoning enrollments in the expanding common schools produced an intense demand for new teachers to fill a growing number of classrooms, and the normal schools turned into teacher factories. They had to produce many teachers quickly and cheaply, or else school districts around the country would hire teachers without this training — or perhaps any form of professional preparation. So normal schools adapted by stressing quantity over quality, establishing a disturbing but durable pattern of weak professional preparation and low academic standards.

At the same time, normal schools had to confront a strong consumer demand from their own students, many of whom saw the schools as an accessible form of higher education rather than as a site for teacher preparation. Located close to home, unlike the more centrally located state universities and land grant colleges, the normal schools were also easier to get into and less costly. As a result, many students enrolled who had little or no interest in teaching; instead, they wanted an advanced educational credential that would gain them admission to attractive white-collar positions. They resisted being trapped within a single vocational track — the teacher preparation program — and demanded a wide array of college-level liberal arts classes and programs. Since normal schools depended heavily on tuition for their survival, they had little choice but to comply with the demands of their “customers.”

This compliance reinforced the already-established tendency toward minimizing the extent and rigor of teacher education. It also led the normal schools to transform themselves into the model of higher education that their customers wanted, first by changing into teachers’ colleges (with baccalaureate programs for nonteachers), then into state liberal-arts colleges, and finally into the general-purpose regional state universities they are today.

As the evolving colleges moved away from being normal schools, teacher education programs became increasingly marginal within their own institutions, which were coming to imitate the multipurpose university by giving pride of place to academic departments, graduate study, and preparation for the more prestigious professions. Teacher education came to be perceived as every student’s second choice, and the ed school professors came to be seen as second-class citizens in the academy.

Market Pressures in the Present

Market pressures on education schools have changed over the years, but they have not declined. Teaching is a very large occupation in the United States, with about 3 million practitioners in total. To fill all the available vacancies, approximately one in every five college graduates must enter teaching each year. If education schools do not prepare enough candidates, state legislators will authorize alternative routes into the profession (requiring little or no professional education), and school boards will hire such prospects to place warm bodies in empty classrooms.

Education schools that try to increase the duration and rigor of teacher preparation by focusing more intensively on smaller cohorts of students risk leaving the bulk of teaching in the hands of practitioners who are prepared at less demanding institutions or who have not been prepared at all. In addition, such efforts run into strong opposition from within the university, which needs ed students to provide the numbers that bring legislative appropriations and tuition payments. Subsidies from the traditionally cost-effective teacher-education factories support the university’s more prestigious, but less lucrative, endeavors. As a result, universities do not want their ed schools to turn into boutique programs for the preparation of a few highly professionalized teachers.

Another related source of institutional resistance arises whenever education schools try to promote quality over quantity. This resistance comes from academic departments, which have traditionally relied on the ability of their universities to provide teaching credentials as a way to induce students to major in “impractical” subjects. Departments such as English, history, and music have sold themselves to undergraduates for years with the argument that “you can always teach” these subjects. As a result, these same departments become upset when the education school starts to talk about upgrading, downsizing, or limiting access.

Stigmatized Populations and Soft Knowledge

The fact that education schools serve stigmatized populations aggravates the market pressures that have seriously undercut the status and the role of these schools. One such population is women, who currently account for about 70 percent of American teachers. Another is the working class, whose members have sought out the respectable knowledge-based white-collar work of teaching as a way to attain middle-class standing. Children make up a third stigmatized population. In a society that rewards contact with adults more than contact with children, and in a university setting that is more concerned with serious adult matters than with kid stuff, education schools lose out, because they are indelibly associated with children.

Teachers also suffer from an American bias in favor of doing over thinking. Teachers are the largest and most visible single group of intellectual workers in the United States — that is, people who make their living through the production and transmission of ideas. More accessible than the others in this category, teachers constitute the street-level intellectuals of our society. As the only intellectuals with whom most people will ever have close contact, teachers take the brunt of the national prejudice against book learning and those pursuits that are scornfully labeled as “academic.”

Another problem facing education schools is the low status of the knowledge they deal with: it is soft rather than hard, applied rather than pure. Hard disciplines (which claim to produce findings that are verifiable, definitive, and cumulative) outrank soft disciplines (whose central problem is interpretation and whose findings are always subject to debate and reinterpretation by others). Likewise, pure intellectual pursuits (which are oriented toward theory and abstracted from particular contexts) outrank those that are applied (which concentrate on practical work and concrete needs).

Knowledge about education is necessarily soft. Education is an extraordinarily complex social activity carried out by quirky and willful actors, and it steadfastly resists any efforts to reduce it to causal laws or predictive theories. Researchers cannot even count on being able to build on the foundation of other people’s work, since the validity of this work is always only partially established. Instead, they must make the best of a difficult situation. They try to interpret what is going on in education, but the claims they make based on these interpretations are highly contingent. Education professors can rarely speak with unclouded authority about their area of expertise or respond definitively when others challenge their authority. Outsiders find it child’s play to demonstrate the weaknesses of educational research and hold it up for ridicule for being inexact, contradictory, and impotent.

Knowledge about education is also necessarily applied. Education is not a discipline, defined by a theoretical apparatus and a research methodology, but an institutional area. As a result, education schools must focus their energies on the issues that arise from this area and respond to the practical concerns confronting educational practitioners in the field — even if doing so leads them into areas in which their constructs are less effective and their chances for success less promising. This situation unavoidably undermines the effectiveness and the intellectual coherence of educational research and thus also calls into question the academic stature of the faculty members who produce that research.

No Prestige for Practical Knowledge

Another related knowledge-based problem faces the education school. A good case can be made for the proposition that American education — particularly higher education — has long placed a greater emphasis on the exchange value of the educational experience (providing usable credentials that can be cashed in for a good job) than on its use value (providing usable knowledge). That is, what consumers have sought and universities have sold in the educational marketplace is not the content of the education received at the university (what the student actually learns there) but the form of this education (what the student can buy with a university degree).

One result of this commodification process is that universities have a strong incentive to promote research over teaching, for publications raise the visibility and prestige of the institution much more effectively than does instruction (which is less visible and more difficult to measure). And a prestigious faculty raises the exchange value of the university’s diploma, independently of whatever is learned in the process of acquiring this diploma. By relying heavily on its faculty’s high-status work in fields of hard knowledge, the university’s marketing effort does not leave an honored role for an education school that produces soft knowledge about practical problems.

A Losing Status, but a Winning Role?

What all of this suggests is that education schools are poorly positioned to play the university status game. They serve the wrong clientele and produce the wrong knowledge; they bear the mark of their modest origins and their traditionally weak programs. And yet they are pressured by everyone from their graduates’ employers to their university colleagues to stay the way they are, since they fulfill so many needs for so many constituencies.

But consider for a moment what would happen if we abandoned the status perspective in establishing the value of higher education. What if we focus instead on the social role of the education school rather than its social position in the academic firmament? What if we consider the possibility that education schools — toiling away in the dark basement of academic ignominy — in an odd way have actually been liberated by this condition from the constraints of academic status attainment? Is it possible that ed schools may have stumbled on a form of academic practice that could serve as a useful model for the rest of the university? What if the university followed this model and stopped selling its degrees on the basis of institutional prestige grounded in the production of abstract research and turned its focus on instruction in usable knowledge?

Though the university status game, with its reliance on raw credentialism — the pursuit of university degrees as a form of cultural currency that can be exchanged for social position — is not likely to go away soon, it is now under attack. Legislators, governors, business executives, and educational reformers are beginning to declare that indeed the emperor is wearing no clothes: that there is no necessary connection between university degrees and student knowledge or between professorial production and public benefit; that students need to learn something when they are in the university; that the content of what they learn should have some intrinsic value; that professors need to develop ideas that have a degree of practical significance; and that the whole university enterprise will have to justify the huge public and private investment it currently requires.

The market-based pattern of academic life has always had an element of the confidence game, since the whole structure depends on a network of interlocking beliefs that are tenuous at best: the belief that graduates of prestigious universities know more and can do more than other graduates; the belief that prestigious faculty make for a good university; and the belief that prestigious research makes for a good faculty. The problem is, of course, that when confidence in any of these beliefs is shaken, the whole structure can come tumbling down. And when it does, the only recourse is to rebuild on the basis of substance rather than reputation, demonstrations of competence rather than symbols of merit.

This dreaded moment is at hand. The fiscal crisis of the state, the growing political demand for accountability and utility, and the intensification of competition in higher education are all undermining the credibility of the current pattern of university life. Today’s relentless demand for lower taxes and reduced public services makes it hard for the university to justify a high level of public funding on the grounds of prestige alone. State governments are demanding that universities produce measurable beneficial outcomes for students, businesses, and other taxpaying sectors of the community. And, by withholding higher subsidies, states are throwing universities into a highly competitive situation in which they vie with one another to see who can attract the most tuition dollars and the most outside research grants, and who can keep the tightest control over internal costs.

In this kind of environment, education schools have a certain advantage over many other colleges and departments in the university. Unlike their competitors across campus, they offer traditionally low-cost programs designed explicitly to be useful, both to students and to the community. They give students practical preparation for and access to a large sector of employment opportunities. Their research focuses on an area about which Americans worry a great deal, and they offer consulting services and policy advice. In short, their teaching, research, and service activities are all potentially useful to students and community alike. How many colleges of arts and letters can say the same?

But before we get carried away with the counterintuitive notion that ed schools might serve as a model for a university under fire, we need to understand that these brow-beaten institutions will continue to gain little credit for their efforts to serve useful social purposes, in spite of the current political saliency of such efforts. One reason for that is the peculiar nature of the occupation — teaching — for which ed schools are obliged to prepare candidates. Another is the difficulty that faces any academic unit that tries to walk the border between theory and practice.

A Peculiar Kind of Professional

Teaching is an extraordinarily complex job. Researchers have estimated that the average teacher makes upward of 150 conscious instructional decisions during the course of the day, each of which has potentially significant consequences for the students involved. From the standpoint of public relations, however, the key difficulty is that, for the outsider, teaching looks all too easy. Its work is so visible, the skills required to do it seem so ordinary, and the knowledge it seeks to transmit is so generic. Students spend a long time observing teachers at work. If you figure that the average student spends 6 hours a day in school for 180 days a year over the course of 12 years, that means that a high school graduate will have logged about 13,000 hours watching teachers do their thing. No other social role (with the possible exception of parent) is so well known to the general public. And certainly no other form of paid employment is so well understood by prospective practitioners before they take their first day of formal professional education.

By comparison, consider other occupations that require professional preparation in the university. Before entering medical, law, or business school, students are lucky if they have spent a dozen hours in close observation of a doctor, lawyer, or businessperson at work. For these students, professional school provides an introduction to the mysteries of an arcane and remote field. But for prospective teachers, the education school seems to offer at best a gloss on a familiar topic and at worst an unnecessary hurdle for twelve-year apprentices who already know their stuff.

Not only have teacher candidates put in what one scholar calls a long “apprenticeship of observation,” but they have also noted during this apprenticeship that the skills a teacher requires are no big deal. For one thing, ordinary adult citizens already know the subject matter that elementary and secondary school teachers seek to pass along to their students — reading, writing, and math; basic information about history, science, and literature; and so on. Because there is nothing obscure about these materials, teaching seems to have nothing about it that can match the mystery and opaqueness of legal contracts, medical diagnoses, or business accounting.

Of course, this perception by the prospective teacher and the public about the skills involved in teaching leaves out the crucial problem of how a teacher goes about teaching ordinary subjects to particular students. Reading is one thing, but knowing how to teach reading is another matter altogether. Ed schools seek to fill this gap in knowledge by focusing on the pedagogy of teaching particular subjects to particular students, but they do so over the resistance of teacher candidates who believe they already know how to teach and a public that fails to see pedagogy as a meaningful skill.

Compounding this resistance to the notion that teachers have special pedagogical skills is the student's general experience (at least in retrospect) that learning is not that hard — and, therefore, by extension, that teaching is not hard either. Unlike doctors and lawyers, who use their arcane expertise for the benefit of the client without passing along the expertise itself, teachers are in the business of giving away their expertise. Their goal is to empower the student to the point at which the teacher is no longer needed and the student can function effectively without outside help. The best teachers make learning seem easy and make their own role in the learning process seem marginal. As a result, it is easy to underestimate the difficulty of being a good teacher — and of preparing people to become good teachers.

Finally, the education school does not have exclusive rights to the subject matter that teachers teach. The only part of the teacher’s knowledge over which the ed school has some control is the knowledge about how to teach. Teachers learn about English, history, math, biology, music, and other subjects from the academic departments at the university in charge of these areas of knowledge. Yet, despite the university’s shared responsibility for preparing teachers, ed schools are held accountable for the quality of the teachers and other educators they produce, often taking the blame for the deficiencies of an inadequate university education.

The Border Between Theory and Practice

The intellectual problem facing American education schools is as daunting as the instructional problem, for the territory in which ed schools do research is the mine-strewn border between theory and practice. Traditionally, the university’s peculiar area of expertise has been theory, while the public school is a realm of practice.  In reality, the situation is more complicated, since neither institution can function without relying on both forms of knowledge. Education schools exist, in part, to provide a border crossing between these two countries, each with its own distinctive language and culture and its own peculiar social structure. When an ed school is working well, it presents a model of fluid interaction between university and school and encourages others on both sides of the divide to follow suit. The ideal is to encourage the development of teachers and other educators who can draw on theory to inform their instructional practice, while encouraging university professors to become practice-oriented theoreticians, able to draw on issues from practice in their theory building and to produce theories with potential use value.

In reality, no education school (or any other institution, for that matter) can come close to meeting this ideal. The tendency is to fall on one side of the border or the other — where life is more comfortable and the responsibilities more clear cut — rather than to hold the middle ground and retain the ability to work well in both domains.

But because of their location in the university and their identification with elementary and secondary schools, ed schools have had to keep working along the border. In the process, they draw unrelenting fire from both sides. The university views colleges of education as nothing but trade schools, which supply vocational training but no academic curriculum. Students, complaining that ed-school courses are too abstract and academic, demand more field experience and fewer course requirements. From one perspective, ed-school research is too soft, too applied, and totally lacking in academic rigor, while from another, it is impractical and irrelevant, serving a university agenda while being largely useless to the schools.

Of course, both sides may be right. After years of making and attending presentations at the annual meeting of the American Educational Research Association, I am willing to concede that much of the work produced by educational researchers is lacking in both intellectual merit and practical application. But I would also argue that there is something noble and necessary about the way that the denizens of ed schools continue their quest for a workable balance between theory and practice. If only others in the academy would try to accomplish a marriage of academic elegance and social impact.

A Model for Academe

So where does this leave us in thinking about the poor beleaguered ed school? And what lessons, if any, can be learned from its checkered history?

The genuine instructional and intellectual weakness of ed schools results from the way the schools did what was demanded of them, which, though understandable, was not exactly honorable. Even so, much of the scorn that has come down on the ed school stems from its lowly status rather than from any demonstrable deficiencies in the educational role it has played. But then institutional status has a circular quality about it, which means that predictions of high or low institutional quality become self-fulfilling.

In some ways, ed schools have been doing things right. They have wrestled vigorously (if not always to good effect) with the problems of public education, an area that is of deep concern to most citizens. This has meant tackling social problems of great complexity and practical importance, even though the university does not place much value on the production of this kind of messy, indeterminate, and applied knowledge.

Oddly enough, the rest of the university could learn a lot from the example of the ed school. The question, however, is whether others in the university will see the example of the ed school as positive or negative. If academics consider this story in light of the current political and fiscal climate, then the ed school could serve as a model for a way to meet growing public expectations for universities to teach things that students need to know and to generate knowledge that benefits the community.

But it seems more likely that academics will consider this story a cautionary tale about how risky and unrewarding such a strategy can be. After all, education schools have demonstrated that they are neither very successful at accomplishing the marriage of theory and practice nor well rewarded for trying. In fact, the odor of failure and disrespect continues to linger in the air around these institutions. In light of such considerations, academics are likely to feel more comfortable placing their chips in the university’s traditional confidence game, continuing to pursue academic status and to market educational credentials. And from this perspective, the example of the ed school is one they should avoid like the plague. 

Posted in Democracy, History, Liberty, Race

The Central Link between Liberty and Slavery in American History

In this post, I explore insights from two important books about the peculiar way in which liberty and slavery jointly emerged from the context of colonial America. One is a new book by David Stasavage, The Decline and Rise of Democracy. The other is a 1992 book by Toni Morrison, Playing in the Dark: Whiteness and the Literary Imagination. The core point I draw from Stasavage is that the same factors that nurtured the development of political liberty in the American context also led to the development of slavery. The related point I draw from Morrison is that the existence of slavery was fundamental in energizing the colonists’ push for self-rule.

Stasavage Cover

The Stasavage book explores the history of democracy in the world, starting with early forms that emerged in premodern North America, Europe, and Africa and then fell into decline, followed by the rise of modern parliamentary democracy.  He contrasts this with an alternative form of governance, autocracy, which arose at many times and in many places but appeared earliest and most enduringly in China.

He argues that three conditions were necessary for the emergence of early democracy. One is small scale, which allows people to confer as a group instead of relying on a distant leader.  Another is that rulers lack the kind of knowledge about what their people are producing that an administrative bureaucracy would provide, which means they need to share power in order to levy taxes effectively.  But I want to focus on the third factor — the existence of an exit option — which is most salient to the colonial American case.  Here’s how he describes it:

The third factor that led to early democracy involved the balance between how much rulers needed their people and how much people could do without their rulers. When rulers had a greater need for revenue, they were more likely to accept governing in a collaborative fashion, and this was even more likely if they needed people to fight wars. With inadequate means of simply compelling people to fight, rulers offered them political rights. The flip side of all this was that whenever the populace found it easier to do without a particular ruler—say by moving to a new location—then rulers felt compelled to govern more consensually. The idea that exit options influence hierarchy is, in fact, so general it also applies to species other than humans. Among species as diverse as ants, birds, and wasps, social organization tends to be less hierarchical when the costs of what biologists call “dispersal” are low.

The central factor that supported the development of democracy in the British colonies was the scarcity of labor:

A broad manhood suffrage took hold in the British part of colonial North America not because of distinctive ideas but for the simple reason that in an environment where land was abundant and labor was scarce, ordinary people had good exit options. This was the same fundamental factor that had favored democracy in other societies.

And this was also the factor that promoted slavery:  “Political rights for whites and slavery for Africans derived from the same underlying environmental condition of labor scarcity.”  Because of this scarcity, agricultural enterprises in the North American colonies needed a way to ensure a flow of laborers across the Atlantic and a way to keep them on the job once they arrived.  The central mechanisms for doing that were indentured servitude and slavery.  Some indentured servants were recruited in Britain with the promise of free passage to the New World in return for a contract to work for a certain number of years.  Others were simply kidnapped, shipped, and then forced to work off their passage.  At the same time, Africans initially came to the colonies in a variety of statuses, but their condition increasingly shifted toward full slavery.  Here’s how he describes the situation in the Tidewater colonies.

The early days of forced shipment of English to Virginia sounds like it would have been an environment ripe for servitude once they got there. In fact, it did not always work that way. Once they finished their period of indenture, many English migrants established farms of their own. This exit option must have been facilitated by the fact that they looked like Virginia’s existing British colonists, and they also sounded like them. They would have also shared a host of other cultural commonalities. In other words, they had a good outside option.

Now consider the case of Africans in Virginia, Maryland, and the other British colonies in North America who began arriving in 1619. The earliest African arrivals to Virginia and Maryland came in a variety of situations. Some were free and remained so, some were indentured under term contracts analogous to those of many white migrants, and some came entirely unfree. Outside options also mattered for Africans, and for several obvious reasons they were much worse than those for white migrants. Africans looked different than English people, they most often would not have arrived speaking English, or being aware of English cultural practices, and there is plenty of evidence that people in Elizabethan and Jacobean England associated dark skin with inferiority or other negative qualities. Outside options for Africans were remote to nonexistent. The sustainability of slavery in colonies like Virginia and Maryland depended on Africans not being able to escape and find labor elsewhere. For slave owners it of course helped that they had the law on their side. This law evolved quickly to define exactly what a “slave” was, there having been no prior juridical definition of the term. Africans were now to be slaves whereas kidnapped British boys were bound by “the custom of the country,” meaning that eventual release could be expected.

So labor scarcity and the existence of an attractive exit option provided the formative conditions for developing both white self-rule and Black enslavement.

Morrison Book Cover

Toni Morrison’s book is a reflection on the enduring impact of whiteness and blackness in shaping American literature.  In the passage below, from the chapter titled “Romancing the Shadow,” she is talking about the romantic literary tradition in the U.S.

There is no romance free of what Herman Melville called “the power of blackness,” especially not in a country in which there was a resident population, already black, upon which the imagination could play; through which historical, moral, metaphysical, and social fears, problems, and dichotomies could be articulated. The slave population, it could be and was assumed, offered itself up as surrogate selves for meditation on problems of human freedom, its lure and its elusiveness. This black population was available for meditations on terror — the terror of European outcasts, their dread of failure, powerlessness, Nature without limits, natal loneliness, internal aggression, evil, sin, greed. In other words, this slave population was understood to have offered itself up for reflections on human freedom in terms other than the abstractions of human potential and the rights of man.

The ways in which artists — and the society that bred them — transferred internal conflicts to a “blank darkness,” to conveniently bound and violently silenced black bodies, is a major theme in American literature. The rights of man, for example, an organizing principle upon which the nation was founded, was inevitably yoked to Africanism. Its history, its origin is permanently allied with another seductive concept: the hierarchy of race…. The concept of freedom did not emerge in a vacuum. Nothing highlighted freedom — if it did not in fact create it — like slavery.

Black slavery enriched the country’s creative possibilities. For in that construction of blackness and enslavement could be found not only the not-free but also, with the dramatic polarity created by skin color, the projection of the not-me. The result was a playground for the imagination. What rose up out of collective needs to allay internal fears and to rationalize external exploitation was an American Africanism — a fabricated brew of darkness, otherness, alarm, and desire that is uniquely American.

Such a lovely passage describing such an ugly distinction.  She’s saying that for Caucasian plantation owners in the Tidewater colonies, the presence of Black slaves was a vivid and visceral reminder of what it means to be not-free and thus decidedly not-me.  For people like Jefferson and Washington and Madison, the most terrifying form of unfreedom was in their faces every day.  More than their pale brethren in the Northern colonies, they had a compelling desire never to be treated by the king even remotely the way they treated their own slaves.

“The concept of freedom did not emerge in a vacuum. Nothing highlighted freedom — if it did not in fact create it — like slavery.”