Posted in Technology, Writing

Angus Fletcher — Why Computers Will Never Write Good Novels

This post is a piece by Angus Fletcher about the limitations of artificial intelligence. In it he argues that computers can do a lot of amazing things that would stump mere humans, but they can’t write a good novel.
Here’s a link to the original in Nautilus.

The core of his argument is this: computers think in equations, A equals Z, which means they can determine correlation but not causation. Brains, meanwhile, think in causation, A causes Z, which is built into their neurology. Here’s how he explains it:

[N]eurons can control the direction of our ideas. This control is made possible by the fact that our neurons, as modern neuroscientists and electrophysiologists have demonstrated, fire in a single direction: from dendrite to synapse. So when a synapse of neuron A opens a connection to a dendrite of neuron Z, the ending of A becomes the beginning of Z, producing the one-way circuit A → Z.

This one-way circuit is our brain thinking: A causes Z. Or to put it technically, it’s our brain performing causal reasoning.

Causal reasoning is the neural root of tomorrow-dreaming teased at this article’s beginning. It’s our brain’s ability to think: this-leads-to-that. It can be based on some data or no data—or even go against all data. And it’s such an automatic outcome of our neuronal anatomy that from the moment we’re born, we instinctively think in its story sequences, cataloguing the world into mother-leads-to-pleasure and cloud-leads-to-rain and violence-leads-to-pain. Allowing us, as we grow, to invent afternoon plans, personal biographies, scientific hypotheses, business proposals, military tactics, technological blueprints, assembly lines, political campaigns, and other original chains of cause-and-effect.

But as natural as causal reasoning feels to us, computers can’t do it. That’s because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term “equation” implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.

This feature of A equals Z means that computers can’t think in A causes Z. The closest they can get is “if-then” statements such as: “If Bob bought this toothpaste, then he will buy that toothbrush.” This can look like causation but it’s only correlation. Bob buying toothpaste doesn’t cause him to buy a toothbrush. What causes Bob to buy a toothbrush is a third factor: wanting clean teeth.

To tell a story you need to link events into a causal chain; that chain is the narrative flow we see in literature. This means that AI may be able to imitate the patterns it finds in literature, but it can’t create a novel.

See what you think.

Why Computers Will Never Write Good Novels

The power of narrative flows only from the human brain.

You’ve been hoaxed.

The hoax seems harmless enough. A few thousand AI researchers have claimed that computers can read and write literature. They’ve alleged that algorithms can unearth the secret formulas of fiction and film. That Bayesian software can map the plots of memoirs and comic books. That digital brains can pen primitive lyrics1 and short stories—wooden and weird, to be sure, yet evidence that computers are capable of more.

But the hoax is not harmless. If it were possible to build a digital novelist or poetry analyst, then computers would be far more powerful than they are now. They would in fact be the most powerful beings in the history of Earth. Their power would be the power of literature, which although it seems now, in today’s glittering silicon age, to be a rather unimpressive old thing, springs from the same neural root that enables human brains to create, to imagine, to dream up tomorrows. It was the literary fictions of H.G. Wells that sparked Robert Goddard to devise the liquid-fueled rocket, launching the space epoch; and it was poets and playwrights—Homer in The Iliad, Karel Čapek in Rossumovi Univerzální Roboti—who first hatched the notion of a self-propelled metal robot, ushering in the wonder-horror of our modern world of automata.

At the bottom of literature’s strange and branching multiplicity is an engine of causal reasoning.

If computers could do literature, they could invent like Wells and Homer, taking over from sci-fi authors to engineer the next utopia-dystopia. And right now, you probably suspect that computers are on the verge of doing just that: Not too far in the future, maybe in my lifetime even, we’ll have a computer that creates, that imagines, that dreams. You think that because you’ve been duped by the hoax. The hoax, after all, is everywhere: college classrooms, public libraries, quiz games, IBM, Stanford, Oxford, Hollywood. It’s become such a pop-culture truism that Wired enlisted an algorithm, SciFiQ, to craft “the perfect piece of science fiction.”2

Yet despite all this gaudy credentialing, the hoax is a complete cheat, a total scam, a fiction of the grossest kind. Computers can’t grasp the most lucid haiku. Nor can they pen the clumsiest fairytale. Computers cannot read or write literature at all. And they never, never will.

I can prove it to you.

Computers possess brains of unquestionable brilliance, a brilliance that dates to an early spring day in 1937 when a 21-year-old master’s student found himself puzzling over an ungainly contraption that looked like three foosball tables pressed side-to-side in an electrical lab at the Massachusetts Institute of Technology.

The student was Claude Shannon. He’d earned his undergraduate diploma a year earlier from the University of Michigan, where he’d become fascinated with a system of logic devised during the 1850s by George Boole, a self-taught Irish mathematician who’d managed to vault himself, without a university degree, into an algebra professorship at Queen’s College, Cork. And eight decades after Boole pulled off that improbable leap, Shannon pulled off another. The ungainly foosball contraption that sprawled before him was a “differential analyzer,” a wheel-and-disc analogue computer that solved physics equations with the help of electronic switchboards. Those switchboards were a convoluted mess of ad hoc cables and relays that seemed to defy reason when suddenly Shannon had a world-changing epiphany: Those switchboards and Boole’s logic spoke the same language. Boole’s logic could simplify the switchboards, condensing them into circuits of elegant precision. And the switchboards could then solve all of Boole’s logic puzzles, ushering in history’s first automated logician.

The hoax is everywhere: college classrooms, IBM, Stanford, Oxford, Hollywood.

With this jump of insight, the architecture of the modern computer was born. And as the ensuing years have proved, the architecture is one of enormous potency. It can search a trillion webpages, dominate strategy games, and pick lone faces out of a crowd—and every day, it stretches still further, automating more of our vehicles, dating lives, and daily meals. Yet as dazzling as all these tomorrow-works are, the best way to understand the true power of computer thought isn’t to peer forward into the future fast-approaching. It’s to look backward in time, returning our gaze to the original source of Shannon’s epiphany. Just as that epiphany rested on the earlier insights of Boole, so too did Boole’s insights3 rest on a work more ancient still: a scroll authored by the Athenian polymath Aristotle in the fourth century B.C.

The scroll’s title is arcane: Prior Analytics. But its purpose is simple: to lay down a method for finding the truth. That method is the syllogism. The syllogism distills all logic down to three basic functions: AND, OR, NOT. And with those functions, the syllogism unerringly distinguishes what’s TRUE from what’s FALSE.
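To make the AND-OR-NOT machinery concrete, here is a small illustrative sketch (mine, not the article’s): a syllogistic schema is valid exactly when it comes out TRUE under every assignment of its terms, and even implication reduces to OR and NOT, so the whole check runs on Aristotle’s three functions.

```python
from itertools import product

def implies(p, q):
    # "p implies q" rewritten using only OR and NOT
    return (not p) or q

# Modus ponens as a schema: ((p implies q) AND p) implies q.
# A valid syllogistic form is TRUE under every truth assignment.
valid = all(
    implies(implies(p, q) and p, q)
    for p, q in product([False, True], repeat=2)
)
print(valid)  # True: the schema holds for all truth values
```

The names `implies` and `valid` are just for this sketch; the point is that validity is decided by exhaustive TRUE/FALSE computation, with no notion of one term causing another.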

So powerful is Aristotle’s syllogism that it became the uncontested foundation of formal logic throughout Byzantine antiquity, the Arabic middle ages, and the European Enlightenment. When Boole laid the mathematical groundwork for modern computing, he could begin by observing:

The subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece … it has continued to the present day.

This great triumph prompted Boole to declare that Aristotle had identified “the fundamental laws of those operations of the mind by which reasoning is performed.” Inspired by the Greek’s achievement, Boole decided to carry it one step further. He would translate Aristotle’s syllogisms into “the symbolical language of a Calculus,” creating a mathematics that thought like the world’s most rational human.

In 1854, Boole published his mathematics as The Laws of Thought. The Laws converted Aristotle’s FALSE and TRUE into two digits—0 and 1—that could be crunched by AND-OR-NOT algebraic equations. And 83 years later, those equations were given life by Claude Shannon. Shannon discerned that the differential analyzer’s electrical off/on switches could be used to animate Boole’s 0/1 bits. And Shannon also experienced a second, even more remarkable, realization: The same switches could automate Boole’s mathematical syllogisms. One arrangement of off/on switches could calculate AND, and a second could calculate OR, and a third could calculate NOT, Frankensteining an electron-powered thinker into existence.

Shannon’s mad-scientist achievement established the blueprint for the computer brain. That brain, in homage to Boole’s arithmetic and Aristotle’s logic, is known now as the Arithmetic Logic Unit or ALU. Since Shannon’s breakthrough in 1937, the ALU has undergone a legion of upgrades: Its clunky off/on switch-arrangements have shrunk to minuscule transistors, been renamed logic gates, multiplied into parallel processors, and been used to perform increasingly sophisticated styles of mathematics. But through all these improvements, the ALU’s core design has not changed. It remains as Shannon drew it up, an automated version of the syllogism, so syllogistic reasoning is the only kind of thinking that computers can do. Aristotle’s AND-OR-NOT is hardwired in.

This hardwiring has hardly seemed a limitation. In the late 19th century, the American philosopher C.S. Peirce deduced that AND-OR-NOT could be used to compute the essential truth of anything: “mathematics, ethics, metaphysics, psychology, phonetics, optics, chemistry, comparative anatomy, astronomy, gravitation, thermodynamics, economics, the history of science, whist, men and women, wine, meteorology.” And in our own time, Peirce’s deduction has been bolstered by the advent of machine learning. Machine learning marshals the ALU’s logic gates to perform the most astonishing feats of artificial intelligence, enabling Google’s DeepMind, IBM’s Watson, Apple’s Siri, Baidu’s PaddlePaddle, and Amazon’s Web Services to reckon a person’s odds of getting sick, alert companies to possible frauds, winnow out spam, become a whiz at multiplayer video games, and estimate the likelihood that you’d like to purchase something you don’t even know exists.

Although these remarkable displays of computer cleverness all originate in the Aristotelian syllogisms that Boole equated with the human mind, it turns out that the logic of their thought is different from the logic that you and I typically use to think.

Very, very different indeed.

The difference was detected back in the 16th century.

It was then that Peter Ramus, a half-blind, 20-something professor at the University of Paris, pointed out an awkward fact that no reputable academic had previously dared to admit: Aristotle’s syllogisms were extremely hard to understand.4 When students first encountered a syllogism, they were inevitably confused by its truth-generating instructions:

If no β is α, then no α is β, for if some α (let us say δ) were β, then β would be α, for δ is β. But if all β is α, then some α is β, for if no α were β, then no β could be α …

And even after students battled through their initial perplexity, valiantly wrapping their minds around Aristotle’s abstruse mathematical procedures, it still took years to acquire anything like proficiency in Logic.

This, Ramus thundered, was oxymoronic. Logic was, by definition, logical. So, it should be immediately obvious, flashing through our mind like a beam of clearest light. It shouldn’t slow down our thoughts, requiring us to labor, groan, and painstakingly calculate. All that head-strain was proof that Logic was malfunctioning—and needed a fix.

Ramus’ denunciation of Aristotle stunned his fellow professors. And Ramus then startled them further. He announced that the way to make Logic more intuitive was to turn away from the syllogism. And to turn toward literature.

Do we make ourselves more logical by using computers? Or by reading poetry?

Literature exchanged Aristotle’s AND-OR-NOT for a different logic: the logic of nature. That logic explained why rocks dropped, why heavens rotated, why flowers bloomed, why hearts kindled with courage. And by doing so, it equipped us with a handbook of physical power. Teaching us how to master the things of our world, it upgraded our brains into scientists.

Literature’s facility at this practical logic was why, Ramus declared, God Himself had used myths and parables to convey the workings of the cosmos. And it was why literature remained the fastest way to penetrate the nuts and bolts of life’s operation. What better way to grasp the intricacies of reason than by reading Plato’s Socratic dialogues? What better way to understand the follies of emotion than by reading Aesop’s fable of the sour grapes? What better way to fathom war’s empire than by reading Virgil’s Aeneid? What better way to pierce that mystery of mysteries—love—than by reading the lyrics of Joachim du Bellay?

Inspired by literature’s achievement, Ramus tore up Logic’s traditional textbooks. And to communicate life’s logic in all its rich variety, he crafted a new textbook filled with sonnets and stories. These literary creations explained the previously incomprehensible reasons of lovers, philosophers, fools, and gods—and did so with such graceful intelligence that learning felt easy. Where the syllogisms of Aristotle had ached our brains, literature knew just how to talk so that we’d comprehend, quickening our thoughts to keep pace with its own.

Ramus’ new textbook premiered in the 1540s, and it struck thousands of students as a revelation. For the first time in their lives, those students opened a Logic primer—and felt the flow of their innate method of reasoning, only executed faster and more precisely. Carried by a wave of student enthusiasm, Ramus’ textbooks became bestsellers across Western Europe, inspiring educators from Berlin to London to celebrate literature’s intuitive logic: “Read Homer’s Iliad and that most worthy ornament of our English tongue, the Arcadia of Sir Philip Sidney—and see the true effects of Natural Logic, far different from the Logic dreamed up by some curious heads in obscure schools.”5

Four hundred years before Shannon, here was his dream of a logic-enhancer—and yet the blueprint was radically different. Where Shannon tried to engineer a go-faster human mind with electronics, Ramus did it with literature.

So who was right? Do we make ourselves more logical by using computers? Or by reading poetry? Does our next-gen brain lie in the CPU’s Arithmetic Logic Unit? Or in the fables of our bookshelf?

To our 21st-century eyes, the answer seems obvious: The AND-OR-NOT logic of Aristotle, Boole, and Shannon is the undisputed champion. Computers—and their syllogisms—rule our schools, our offices, our cars, our homes, our everything. Meanwhile, nobody today reads Ramus’ textbook. Nor does anyone see literature as the logic of tomorrow. In fact, quite the opposite: Enrollments in literature classes at universities worldwide are contracting dramatically. Clearly, there is no “natural logic” inside our heads that’s accelerated by the writings of Homer and Maya Angelou.

Except, there is. In a recent plot twist, neuroscience has shown that Ramus got it right.

Our neurons can fire—or not.

This basic on/off function, observed pioneering computer scientist John von Neumann, makes our neurons appear similar—even identical—to computer transistors. Yet transistors and neurons are different in two respects. The first difference was once thought to be very important, but is now viewed as basically irrelevant. The second has been almost entirely overlooked, but is very important indeed.

The first—basically irrelevant—difference is that transistors speak in digital while neurons speak in analogue. Transistors, that is, talk the TRUE/FALSE absolutes of 1 and 0, while neurons can be dialed up to “a tad more than 0” or “exactly ¾.” In computing’s early days, this difference seemed to doom artificial intelligences to cogitate in black-and-white while humans mused in endless shades of gray. But over the past 50 years, the development of Bayesian statistics, fuzzy sets, and other mathematical techniques has allowed computers to mimic the human mental palette, effectively nullifying this first difference between their brains and ours.

The second—and significant—difference is that neurons can control the direction of our ideas. This control is made possible by the fact that our neurons, as modern neuroscientists and electrophysiologists have demonstrated, fire in a single direction: from dendrite to synapse. So when a synapse of neuron A opens a connection to a dendrite of neuron Z, the ending of A becomes the beginning of Z, producing the one-way circuit A → Z.

This one-way circuit is our brain thinking: A causes Z. Or to put it technically, it’s our brain performing causal reasoning.

The best that computers can do is spit out word soups. They leave our neurons unmoved.

Causal reasoning is the neural root of tomorrow-dreaming teased at this article’s beginning. It’s our brain’s ability to think: this-leads-to-that. It can be based on some data or no data—or even go against all data. And it’s such an automatic outcome of our neuronal anatomy that from the moment we’re born, we instinctively think in its story sequences, cataloguing the world into mother-leads-to-pleasure and cloud-leads-to-rain and violence-leads-to-pain. Allowing us, as we grow, to invent afternoon plans, personal biographies, scientific hypotheses, business proposals, military tactics, technological blueprints, assembly lines, political campaigns, and other original chains of cause-and-effect.

But as natural as causal reasoning feels to us, computers can’t do it. That’s because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term “equation” implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.

This feature of A equals Z means that computers can’t think in A causes Z. The closest they can get is “if-then” statements such as: “If Bob bought this toothpaste, then he will buy that toothbrush.” This can look like causation but it’s only correlation. Bob buying toothpaste doesn’t cause him to buy a toothbrush. What causes Bob to buy a toothbrush is a third factor: wanting clean teeth.

Computers, for all their intelligence, cannot grasp this. Judea Pearl, the computer scientist whose groundbreaking work in AI led to the development of Bayesian networks, has chronicled that the if-then brains of computers see no meaningful difference between Bob buying a toothbrush because he bought toothpaste and Bob buying a toothbrush because he wants clean teeth. In the language of the ALU’s transistors, the two equate to the very same thing.
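The symmetry Fletcher and Pearl are pointing at can be sketched in a few lines of Python (the shopping data here is invented purely for illustration): a correlation coefficient is unchanged when its arguments are swapped, just like “A equals Z,” so the direction of “A causes Z” has to be represented as extra structure that the statistics alone don’t supply.

```python
# Hypothetical 0/1 purchase records, invented for this sketch.
toothpaste = [1, 1, 0, 1, 0, 1]   # bought toothpaste?
toothbrush = [1, 1, 0, 1, 0, 0]   # bought a toothbrush?

def phi(x, y):
    """Pearson correlation for 0/1 data; note phi(x, y) == phi(y, x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

# Association is symmetric: swapping the variables changes nothing.
print(phi(toothpaste, toothbrush) == phi(toothbrush, toothpaste))  # True

# Direction must be imposed separately, e.g. as edges in a directed graph:
causal_edges = {("wants clean teeth", "buys toothpaste"),
                ("wants clean teeth", "buys toothbrush")}
print(("buys toothpaste", "buys toothbrush") in causal_edges)  # False: no arrow
```

The directed-edge set is the kind of extra structure a causal model (in Pearl’s sense) adds on top of correlations; nothing in `phi` itself can recover it.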

This inability to perform causal reasoning means that computers cannot do all sorts of stuff that our human brain can. They cannot escape the mathematical present-tense of 2 + 2 is 4 to cogitate in was or will be. They cannot think historically or hatch future schemes to do anything, including take over the world.

And they cannot write literature.

Literature is a wonderwork of imaginative weird and dynamic variety. But at the bottom of its strange and branching multiplicity is an engine of causal reasoning. The engine we call narrative.

Narrative cranks out chains of this-leads-to-that. Those chains form literature’s story plots and character motives, bringing into being the events of The Iliad and the soliloquies of Hamlet. And those chains also comprise the literary device known as the narrator, which (as narrative theorists from the Chicago School6 onward have shown) generates novelistic style and poetic voice, creating the postmodern flair of “Rashōmon” and the fierce lyricism of I Know Why the Caged Bird Sings.

No matter how nonlogical, irrational, or even madly surreal literature may feel, it hums with narrative logics of cause-and-effect. When Gabriel García Márquez begins One Hundred Years of Solitude with a mind-bending scene of discovering ice, he’s using story to explore the causes of Colombia’s circular history. When William S. Burroughs dishes out delirious syntax in his opioid-memoir Naked Lunch—“his face torn like a broken film of lust and hungers of larval organs stirring”—he’s using style to explore the effects of processing reality through the pistons of a junk-addled mind.

Narrative’s technologies of plot, character, style, and voice are why, as Ramus discerned all those centuries ago, literature can plug into our neurons to accelerate our causal reasonings, empowering Angels in America to propel us into empathy, The Left Hand of Darkness to speed us into imagining alternate worlds, and a single scrap of Nas, “I never sleep, because sleep is the cousin of death,” to catapult us into grasping the anxious mindset of the street.

None of this narrative think-work can be done by computers, because their AND-OR-NOT logic cannot run sequences of cause-and-effect. And that inability is why no computer will ever pen a short story, no matter how many pages of Annie Proulx or O. Henry are fed into its data banks. Nor will a computer ever author an Emmy-winning television series, no matter how many Fleabag scripts its silicon circuits digest.

The best that computers can do is spit out word soups. Those word soups are syllogistically equivalent to literature. But they’re narratively different. As our brains can instantly discern, the verbal emissions of computers have no literary style or poetic voice. They lack coherent plots or psychologically comprehensible characters. They leave our neurons unmoved.

This isn’t to say that AI is dumb; AI’s rigorous circuitry and prodigious data capacity make it far smarter than us at Aristotelian logic. Nor is it to say that we humans possess some metaphysical creative essence—like free will—that computers lack. Our brains are also machines, just ones with a different base mechanism.

But it is to say that there’s a dimension—the narrative dimension of time—that exists beyond the ALU’s mathematical present. And our brains, because of the directional arrow of neuronal transmission, can think in that dimension.

Our thoughts in time aren’t necessarily right, good, or true—in fact, strictly speaking, since time lies outside the syllogism’s timeless purview, none of our this-leads-to-that musings qualify as candidates for rightness, goodness, or truth. They exist forever in the realm of the speculative, the counterfactual, and the fictional. But even so, their temporality allows our mortal brain to do things that the superpowered NOR/NAND gates of computers never will. Things like plan, experiment, and dream.

Things like write the world’s worst novels—and the greatest ones, too.

Angus Fletcher is Professor of Story Science at Ohio State’s Project Narrative and the author of Wonderworks: The 25 Most Powerful Inventions in the History of Literature. His peer-reviewed proof that computers cannot read literature was published in January 2021 in the literary journal Narrative.


1. Hopkins, J. & Kiela, D. Automatically generating rhythmic verse with neural networks. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics 168-178 (2017).

2. Marche, S. I enlisted an algorithm to help me write the perfect piece of science fiction. This is our story. Wired (2017).

3. Corcoran, J. Aristotle’s Prior Analytics and Boole’s Laws of Thought. History and Philosophy of Logic 24, 261-288 (2003).

4. Sharratt, P. Nicolaus Nancelius, “Petri Rami Vita.” Humanistica Lovaniensia 24, 161-277 (1975).

5. Fraunce, A. The Lawiers Logike. William How, London, U.K. (1588).

6. Phelan, J. The Chicago School. In Grishakova, M. & Salupere, S. (Eds.) Theoretical Schools and Circles in the Twentieth Century Humanities. Routledge, New York, NY (2015).


Posted in Accountability, Curriculum, School reform

Let’s Measure What No One Teaches: PISA, NCLB, and the Shrinking Aims of Education

This post is a piece I published in Teachers College Record in 2014.  Here’s a link to the original.  

It’s an analysis of two major players in the world movement for educational accountability:  OECD’s Program for International Student Assessment (PISA), and the US No Child Left Behind law.  The core argument is this:

Both PISA and NCLB, I argue, are cases of how we are shrinking the aims of education. One approach focuses on mastery of skills that are relevant but not taught and the other on mastery of content that is taught but not relevant. Neither seems a sensible basis for understanding the quality of schooling or for making educational policy.

See what you think.

Let’s Measure What No One Teaches: PISA, NCLB, and the Shrinking Aims of Education

by David Labaree – 2014

Accountability is all the rage in the world of educational policy. Increasingly, we want to hold students accountable for learning; and we want to hold teachers, schools, school districts, states, and nations accountable for ensuring that learning in each of their domains of responsibility occurs at the highest possible level. Not to do so, we hear repeatedly, is to consign ourselves to a chronic condition of economic stagnation, national decline, and social inequality (Friedman, 2012; McKinsey & Company, 2009). It seems that everything depends on ramping up the quantity and quality of learning in our schools.

Of course, you can’t hold schools accountable without a valid and reliable method of educational accounting, and this means you need to develop robust measures of how well or how badly schools are doing their job. The result has been a mad dash to produce a better test, one that will allow us to measure the educational performance of actors at all levels—from individual students to whole countries—and to rank these actors in relation to each other. We need such a ranking both to honor the high performers and to shame the laggards, motivating the latter to rise up in the rankings by emulating their betters.

As usual in matters of measurement, the devil is in the details. The key problems are how to define a positive educational outcome, how to measure this outcome, and in particular how to measure it in a way that will allow comparison across all of the actors at a particular level: students in a class, teachers in a school, schools in a district, districts in a state, states in a country, and countries in the world. All three problems make things difficult for test designers, and the difficulties escalate the farther away you get from the classroom because of the need to maintain comparability for performance measures across educational settings that are increasingly heterogeneous. Once you reach the level where you are trying to compare educational quality across states or across countries, you need measures that are highly abstract in relation to particular classroom practices, reducing the complexity of education to the elements that are found anywhere.

In this paper, I examine two current and prominent systems of comparative measurement that are at a far remove from individual classrooms. One is the Program for International Student Assessment (PISA), which compares performance across more than 60 countries. The other is the array of state-level systems of educational accountability in the United States that are assembled under the umbrella of the federal law No Child Left Behind (NCLB). My aim is to explore how each of these ambitious initiatives deals with the problems of measuring educational quality at the broadest level. The PISA approach is to measure a set of cognitive skills that graduates will need in order to be productive workers anywhere in the advanced world. The NCLB approach is to measure how well students perform in meeting the school curriculum standards of individual states. So one examines mastery of particular skills; the other examines mastery of a particular curriculum. The second strategy was not feasible for the PISA designers, because the school curriculum varies so much between countries.

Despite the striking differences between the two assessment systems, I argue that they have a lot in common. They both reduce the salient outcomes of schooling to learning, and they both reduce learning to the acquisition of economically useful skills (human capital). Both claim that their measures provide useful data for policy makers who want to increase the effectiveness of schools in producing human capital. But neither can make this case persuasively. PISA measures what it considers economically relevant skills but can’t show that schools are actually teaching these skills. NCLB measures mastery of the school curriculum but can’t show that learning these school subjects is economically useful. One measures what is relevant but not taught; the other measures what is taught but not relevant. So even if we assume that the core of education is learning and the core of learning is human capital—which I do not—it makes little sense to use either system of assessment to assess the effectiveness of schools. I close by examining how much the accountability effort has narrowed the meaning of education.


PISA emerged from a series of other international efforts to assess school achievement that arose after the Second World War. The main precursor organization was the International Association for the Evaluation of Educational Achievement (IEA), an offshoot of UNESCO, which conducted a 12-country pilot study in 1959, followed by the First International Mathematics Study in 1964, the First International Science Study in 1971, the second version of each of these studies in 1982 and 1984, and the Third International Mathematics and Science Study (TIMSS) in 1995 (IEA, n.d.). The latter test has been given at four-year intervals ever since under the generic name Trends in International Mathematics and Science Study. TIMSS was the first rock star of international educational assessment. It became headline news in countries around the world, as policy makers, pundits, and press expressed triumph or (more commonly) chagrin about the state of their school system compared with others in the world community. The original TIMSS was such a hit that IEA felt compelled to keep the initials to preserve a successful brand; there would be no FIMSS. The organizers of TIMSS invented the framework for the international measurement of school quality that we are stuck with today: the focus on cross-national comparison; the periodic repetition of the assessment to provide a slow-motion picture of educational change; and the all-important league tables that show where each country ranks in comparison with the others. Building on the huge success of TIMSS, the IEA broadened its scope by launching the Progress in International Reading Literacy Study (PIRLS) in 2001 and repeating it in 2006 and 2011.

But by the time PIRLS came out, IEA’s testing initiatives had already been eclipsed by PISA, which had the powerful backing of the Organization for Economic Cooperation and Development (OECD). It is hard to compete as a standalone educational testing outfit when confronted with OECD, which—as the economic policy arm of the 34 wealthiest countries in the world—was emerging as the central player in the domain of international educational policy. In 1997 the Directorate for Education of OECD pulled together the plan for an international assessment of education achievement across literacy, math, and science (in effect incorporating both TIMSS and PIRLS) for its member countries and any others that wanted to take part. The idea was to establish a cycle of international tests that would measure one of these subject areas every three years and then, after a full nine-year cycle, start over again. PISA tested literacy in 2000, math in 2003, and science in 2006, and then started the second cycle with literacy in 2009 and math in 2012. In the first year 43 countries took part, and by 2012 this had increased to 65 (OECD, n.d.). OECD proudly pronounces that the participating countries account for “90% of the world’s economy” (PISA, n.d., p. 4).

So how did the test makers construct PISA? As with any international comparison, of course, their core problem was the incompatibility of curricula across countries. So one thing was clear from the outset: They could not measure how well students around the world were learning the subjects that these students were actually being taught in school. Instead they had to come up with another approach. They also had to deal with the incompatibility of school structures across countries, where the length of compulsory schooling varies and so does the meaning of being in a particular grade level in the system. They came up with ingenious responses to these two comparison problems. Instead of looking at, say, high school seniors—which would mean different things in different systems—they chose “to measure how well young adults, at age 15 and therefore approaching the end of compulsory schooling, are prepared to meet the challenges of today’s knowledge societies” (PISA, 2001a, p. 14).

Identifying the target population was the easy part; the hard part was figuring out what skills these students are supposed to have acquired by the time they are 15. Because they could not examine mastery of the school curriculum, they decided to approach the issue from another direction. Instead of asking what skills schools require students to learn in each country, they decided to ask what skills “today’s knowledge societies” (that is, the advanced economies of the world that constitute the OECD membership) require of young people who enter the 21st-century workforce. Here is the way they put it in the first PISA report:

The assessment is forward-looking, focusing on young people’s ability to use their knowledge and skills to meet real-life challenges, rather than on the extent to which they have mastered a specific school curriculum. . . .

PISA is based on a dynamic model of lifelong learning in which new knowledge and skills necessary for successful adaptation to a changing world are continuously acquired throughout life. PISA focuses on things that 15-year-olds will need in their future lives and seeks to assess what they can do with what they have learned. The assessment is informed – but not constrained – by the common denominator of national curricula. PISA does assess students’ knowledge, but it also examines their ability to reflect on the knowledge and experience, and to apply that knowledge and experience to real world issues. (PISA, 2001a, p. 14)

There are two obvious problems with PISA’s approach to measuring student skills. First, in order to maintain comparability across school systems with incompatible curricula, they have chosen to measure skills that are not necessarily found in the curriculum of any school system. Comparative measures require a common denominator; and when no such common element exists across the subjects taught in various national school systems—when there are no subjects that everyone teaches—then you need to measure skills that no one teaches. This is leveling the playing field with a bulldozer. For PISA, comparability trumps content.

Of course, PISA claims that it is measuring something more important than what schools teach, namely what they should be teaching if they are doing their job. Any school system worth its salt, it implies, should be providing students with the skills required for work in the knowledge economy of the new world order. By measuring how well students have mastered these skills, PISA is demonstrating how effective schools are in meeting their aims as human capital producers. But even if we are willing to accept PISA’s position that schools should be held accountable for how effectively students learn what schools don’t necessarily teach them—a very big “if” indeed—that still leaves a second problem: The skills PISA measures have an uncertain provenance. As the 2000 report puts it, these measures are “informed – but not constrained – by the common denominator of national curricula.” The executive summary of the report explains the approach this way: “PISA assessed young people’s capacities to use their knowledge and skills to meet real-life challenges, rather than merely looking at how well they had mastered a specific school curriculum” (PISA, 2001b, p. 2).

The psychometricians posit that the skill set they test for is essential to the modern workplace in advanced economies, but they provide little basis for the claim that this toolkit is indeed what the economy needs—other than repeated statements about the relevance of what they are testing to “real world issues” and “real-life challenges.” Stefan Hopmann (2008) summarizes the findings of five other researchers with this devastating assessment of

[t]he assumption that what PISA measures is somehow important knowledge for the future. There is no research available that proves this assertion beyond the point that knowing something is always good and knowing more is better. There is not even research showing that PISA covers enough to be representative of the school subjects involved or the general knowledge-base. PISA items are based on the practical reasoning of its researchers and on pre-tests of what works in most or all settings – and not on systematic research on current or future knowledge structures and needs. (p. 438)

In other words, they just made it up. “We assert that these are the skills people need to have,” they seem to be saying, “and we assert that schools should be held accountable for how well students learn these skills.” Because they can’t compare school systems based on what they teach, they invent a skill set that no one teaches and then use mastery of it as the measure of effective schools.


The roots of the No Child Left Behind law in the United States can be found in the educational standards movement that began to emerge in the 1960s at the state level across the country. The core aim of the movement was to tighten the focus of the school curriculum to core academic subjects and to increase student achievement in these subjects. In part the movement was a curricular reaction to the emphasis on inclusive access and social equality (rather than academic learning) that emerged during midcentury efforts to desegregate schools; and in part it was a response to the diffusion of the school curriculum that arose from the administrative progressive movement early in the century, which tended to emphasize breadth of preparation for life more than academic learning. These two concerns merged into a comprehensive effort to narrow the curriculum to the four traditional academic subjects (math, science, English, and social studies); to set standards for performance in each of these areas for students at different levels of the system; and to hold students and schools accountable for attaining these standards through high-stakes tests.

In 1983, the standards movement gained national focus when the federal Department of Education issued an enormously influential report—A Nation at Risk—asserting that setting and attaining high academic standards was now an essential goal of national educational policy. The report presented a frightening vision of what was at stake in this reform effort: “We report to the American people that . . . the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people” (NCEE, 1983, p. 7). Long before TIMSS and PISA, the standards movement was framing education as a force that would determine the competitive position of nations in the world economy.

We live among determined, well-educated, and strongly motivated competitors. We compete with them for international standing and markets, not only with products but also with the ideas of our laboratories and neighborhood workshops. America’s position in the world may once have been reasonably secure with only a few exceptionally well-trained men and women. It is no longer. (p. 8)

Back then the threat was seen as Japan; now it is China. But the story is still the same: There is a league table of nations, and we are plummeting down the rankings. National power is at stake; this power depends on the economy; and the economy depends on educational excellence. The rationale for PISA was already in place.

The standards movement continued in the United States over the next two decades, making progress at the state level but never quite being able to bring about a strong national mandate. Presidents George H. W. Bush and Bill Clinton tried to establish national educational goals, but these died in the face of resistance from states that feared loss of control over education, traditionally lodged at the state level. The problem was that the movement’s goal was unexciting: to make schools more efficient at promoting academic learning in service of economic growth. An admirable goal perhaps, but not one that could draw passionate support from the average citizen. This changed when the movement shifted strategy. In the late 1990s leaders augmented the movement’s social efficiency appeal (increased educational quality as the source of economic growth) with a social equality appeal (increased educational quality as a way to reduce the differences between rich and poor). The result was a broad constituency for reform that, at the start of the presidency of George W. Bush, brought about No Child Left Behind, which became law in 2002. The law spelled out its broad equity appeal in the bill’s title and its opening sentence: “The purpose of this title is to ensure that all children have a fair, equal, and significant opportunity to obtain a high-quality education and reach, at a minimum, proficiency on challenging State academic achievement standards and state academic assessments” (No Child Left Behind Act, 2002, Title 1, Section 1001).

The core mechanism for raising educational standards through NCLB was the addition of federal rewards and punishments to reinforce educational standards that already existed at the state level. Every state needed to set appropriate curriculum standards and to establish achievement tests that would measure how well students, schools, and school systems were meeting these standards. The law set guidelines for standards and tests and set criteria for defining success and failure, but it left the details up to each state. There was no national curriculum against which to measure student achievement, so the federal government really had no choice. Over the history of American schooling, curriculum has been largely determined by each local school district. States set broad guidelines, but it was not until the emergence of the standards movement in the 1960s that states began to define specific standards for what subjects schools should teach and what level of achievement students should attain in these subjects.

The contrast with PISA is striking. PISA was faced with enormous variation in curriculum across the countries studied and had no authority to impose a standard curriculum, so it opted to define a set of what it considered necessary skills for modern economic life and then test to see how well students around the world acquired these skills. But under NCLB, individual states were able to establish curriculum standards for all the schools within their boundaries and then create tests that were closely aligned with these standards. So PISA was testing how well students acquired skills that may or may not have been part of the program of study in school, whereas NCLB was testing how well students learned the core academic subjects required in a particular state. The alignment between state standards and state tests allowed NCLB to dodge the problem facing the PISA psychometricians about how to measure school learning in the absence of a common curriculum. This made it possible for each state to rate and rank individual school districts according to a common achievement criterion. But the NCLB approach did not allow for comparison across states, each of which had its own curriculum standards and its own tests. In PISA the emphasis is on comparison over content; in NCLB it is on content over comparison.

The inability to compare how each state is performing against the others is a problem in the view of the current global accountability regime, which places league tables at the center of its mission. Pitting districts against each other within a state is not sufficient; we also need to pit the states against each other. There are ways the latter can be accomplished in the United States, but not through NCLB. TIMSS and PISA provide data on the performance of individual states, which allow interested parties to rank the states in order of performance from one to 50. But these tests are not aligned with state curricula, so they are little help in gauging how well students are learning what they are being taught.

There is another American test, administered by the federal government, that seeks to measure achievement across states: the National Assessment of Educational Progress (NAEP). It likes to call itself “the nation’s report card.” These assessments are carried out periodically and provide a satisfying set of tables showing where every state ranks in each of the subject areas tested, with Massachusetts usually at the top and Mississippi at the bottom. The problem with NAEP is that it rates subject-matter learning across states that have different curricula; the advantage is that at least it is operating in a single country, where the degree of curricular convergence is greater than can be found across countries. In this way it operates in a liminal space between the U.S. state tests, which are closely aligned with subjects taught, and PISA, which has no such alignment at all. Given the pressure to develop comparative measures and construct rankings, this is seen as an unacceptable state of affairs.

As a result, there is now a movement in the United States to develop a shared set of curriculum standards for the country as a whole; it is known as the Common Core. As of this writing, 45 of the 50 states have adopted some version of these standards, and the next step is to construct tests that will align with them. So the effort in the United States to rationalize schooling into a form that is easily measured and easily compared is moving ahead rapidly. The standards movement in the individual states paved the way for a national mandate for curriculum standards, the NCLB law, which then—by demonstrating the impossibility of establishing national educational standards without a national curriculum—paved the way for the Common Core. Note how the American trajectory from standards to NCLB to Common Core paralleled the European trajectory from IEA to TIMSS-PIRLS to PISA. In both cases, we see how the rise of local measures of educational achievement quickly created pressure for a common standard that would allow the broadest level of comparison.

Through the Common Core initiative, NCLB is trying to solve its comparability problem, but that still leaves it with a relevance problem. The question is: How socially relevant is the knowledge that it is testing? PISA dealt with this issue in a bold manner. Barred from measuring how well students learn the curriculum, it chose to measure a set of skills that it asserted were socially relevant. Of course, it failed to establish that these tested skills were in fact what students would need in order to be productive members of the new knowledge economy, and it never even tried to show that these skills were taught in school. But at least it tried to show that the test scores matter because they show how well students are prepared for work.

NCLB, however, in both its current and its future (Common Core) phases, is content to focus entirely on measuring how well students learn key elements of the content that schools teach in English, math, science, and social studies. It assumes that this academic subject matter constitutes human capital—providing the knowledge and skills that students will need in order to increase economic productivity, the gross domestic product, and national power. This is a common assumption that runs through the discourse in educational policy around the world right now, but constant repetition does not make it true. There is no room here to provide a detailed critique of human capital theory, but the elements of such a critique are evident in the literature: The relationship between schooling and economic development may be more the result of credentialing and signaling than academic learning (Collins, 1979; Spence, 1973; Thurow, 1972); the useful learning that occurs in school may have more to do with the informal curriculum (school process) than with the formal curriculum (school content) (Dreeben, 1968; Jackson, 1968); and the correlation between expanding enrollments and expanding economies may be better explained by the fact that wealthy economies can afford more schooling than that more schooling creates wealthier economies (Labaree, 2010; Ramirez & Boli, 1987). As a result, although NCLB, unlike PISA, measures at least some of what students learn in school, it cannot demonstrate that this learning has the social utility it claims.


The differences between PISA and NCLB are dramatic. One ignores the school curriculum and measures skills that it claims are economically useful. The other hews closely to the school curriculum, measuring how well students learn it but without establishing that such learning has economic value. So in these ways, they seem to be adopting exactly opposite approaches to educational assessment, differences forced on them by the constraints of the systems they are testing.

But in most other ways they are two peas in a pod. Both PISA and NCLB represent radically reductionist visions of education. They both reduce education to learning; and they both reduce learning to the small subset of knowledge and skill that is seen to be economically relevant. In the end, they both conceive of education simply as the efficient production of economically useful skills.

First let us consider the reduction of education to learning. Gert Biesta (2009, p. 37) argues that

the past two decades have witnessed a remarkable rise of the concept of “learning” with a subsequent decline in the concept of “education” . . . This rise of what I have called “the new language of learning” is manifest, for example, in the redefinition of teaching as the facilitation of learning and of education as the provision of learning opportunities or learning experiences; it can be seen in the use of the term “learner” instead of “student” or “pupil.”

He goes on to define this change as a central point in the evolution of educational policy discourse:

The rise of the new language of learning can be seen as the expression of a more general trend to which I now wish to refer – with a deliberately ugly term – as the “learnification” of education: the translation of everything there is to say about education in terms of learning and learners. (p. 38)

The idea of education as learning is new. It was certainly absent at the birth of national systems of universal education. The strong consensus view in the history of education is that national systems of schooling arose in the long 19th century as part of an effort to establish the nation state (Ramirez & Boli, 1987; Tröhler, Popkewitz, & Labaree, 2011; Tyack, 1966). The idea was to use schools to turn subjects into citizens, creating a political community by drawing all citizens into a school to provide them with a shared experience and create a level playing field. Learning the formal curriculum was secondary.

But if education is not primarily about academic learning, then what is it about? In addition to the goal of building a nation, consider some of the other outcomes of education that people over the years have found valuable. Most important is that education is the mechanism by which modern societies allocate social positions, making a person’s future status in large part the result of that person’s performance in school. As a result, educational systems have become the primary means for both pursuing social opportunity and preserving social advantage. In addition, education can promote a wide array of other ends: personal enlightenment, aesthetic pleasure, religious belief, critical thinking, open-mindedness, tolerance for others, immersion in a culture, understanding nature, understanding society, figuring out how things work, cultural play, social engagement, personal fulfillment, spiritual growth, and so on. It can open up the world to young people, show them the possibilities for their future lives, and prepare them to construct social roles for themselves. Some of these goals involve learning; some even involve learning the formal curriculum. But most of them arise from the experience of education; from social and cultural exchange with students, teachers, and authors; from a variety of educationally structured activities in and around schools.

Even within the conceptualization of education as learning, the PISA–NCLB approach is remarkably narrow. First, it eliminates from consideration all forms of school-based learning except what is contained in the formal curriculum. Gone is all the learning that students gain from the process of schooling: learning from classroom interactions about power and position, leading and following (Jackson, 1968); learning central norms of modern life (such as achievement, individualism, universalism, specificity) from the process of negotiating school routines and age-graded instruction (Dreeben, 1968); and learning from social interaction in groups such as athletic teams, school plays, and debate societies (Brooks, 2011). Even the literature of human capital theory is increasingly attentive to the economic value of such extracurricular learning (Heckman, 2000; Heckman & Rubinstein, 2001), but that element is missing from the accountability vision of education.

Second, within the narrow bounds of the formal school curriculum, PISA and NCLB insist on tightening the focus further to a subset of this curriculum that they consider of most importance. Look at the three areas that PISA tests: literacy, science, and math. These three school subjects are also the focus of NCLB, with the addition of a small piece of social studies in some states. Largely missing are history and social studies. Completely missing are art, music, health, and physical education; interdisciplinary programs, project-based learning, and student-initiated studies. Missing too is the rich diversity that is found in the big three. English is reduced to literacy skills; forget about literature. Science becomes a generic composite entity, denied its various disciplinary dimensions and conceptual richness. Math becomes a narrow skill set rather than an immersion in a wide array of quantitative cognitive pursuits.

Third, within the few core subjects that they examine, PISA and NCLB insist on approaching this kind of learning from the most narrowly utilitarian perspective. The only learning that is seen as worthwhile is the kind that is immediately useful, and what is considered useful learning is defined by what carries economic value. Recall that NCLB is rooted in the fear of economic decline and national weakness put forward by the authors of A Nation at Risk. And recall as well that PISA emerged from the Directorate for Education of the Organization for Economic Cooperation and Development. The driving force in both assessment regimes is the vision of education as an engine of human capital production. Everything else in the domains of school literacy, science, and math is secondary.

Consider some of what is lost by this vision of education by looking at it against broader frameworks for understanding educational goals. In an earlier historical analysis of the politics of American education, I identified three broad goals for schooling that have contended for primacy over the years in debates about policy directions (Labaree, 1997). Democratic equality, which is about preparing competent citizens, guided the founding of universal schooling in the United States and remained prominent in later reforms. Social efficiency, which is about preparing productive workers, emerged in the progressive movement at the start of the 20th century and has grown steadily stronger. Social mobility, which is about attaining or preserving social position, was guiding behavior of educational consumers from the very beginning and emerged as a major goal in the mid-20th century. PISA and NCLB put all their money on the last two goals. Schooling, they say, should be focused on developing the economically useful skills that will expand GDP and thus benefit society as a whole (social efficiency); and when individual educational consumers acquire these skills, they will gain greater opportunity (social mobility).

Gert Biesta (2009) takes another approach, identifying three broad functions that education serves when viewed from a philosophical perspective. One function is qualification, in which education provides students with skills, knowledge, and dispositions that allow them to do things in a modern society, particularly related to economic roles. Another function is socialization, in which education provides students with the orientations and capacities required to “become members of particular social, cultural, and political ‘orders’” (p. 40). A third is subjectification, in which education provides students with the capacity to resist simple incorporation into society by developing independence from the larger orders. He argues that schools in general tend to focus explicitly on the first, implicitly (through process rather than curriculum) on the second, and more problematically on the third. Unlike my typology, which is drawn from the history of policy debates about education, his is focused on the array of possibilities inherent within education that may or may not be realized in particular schools or school systems. This is useful in thinking about what is emphasized and what is missing in a particular conception of what constitutes a good school. From this perspective, the PISA–NCLB vision of education focuses heavily on qualification and largely ignores the other two functions. The skew is extreme.

This brings us back to where we started. If we are going to understand the meaning of the new accountability regime in education, as represented by PISA and NCLB, we need to understand the particulars of how it defines and measures educational quality. Everything depends on how you define a good school and on what evidence would be needed to persuade you that a particular school or school system is good or bad. As we have seen, for all their differences, PISA and NCLB employ an extraordinarily narrow definition of education, and they deploy an extraordinarily impoverished metric for assessing educational quality. To hold schools accountable in these terms is to do them great harm. Both programs talk about setting a high standard so schools will race to the top; but in both cases the mechanics of the assessment regime set a diminished standard for schools, which drives them to race to the educational cellar.


Biesta, G. (2009). Good education in an age of measurement: On the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability, 21(1), 33–46.

Brooks, D. (2011, January 17). Amy Chua is a wimp. New York Times.

Collins, R. (1979). The credential society: An historical sociology of education and stratification. New York: Academic Press.

Dreeben, R. (1968). On what is learned in school. Reading, MA: Addison-Wesley.

Friedman, T. (2012, August 7). Average is over, part II. New York Times.

Heckman, J. J. (2000). Policies to foster human capital. Research in Economics, 54, 3–56.

Heckman, J. J., & Rubinstein, Y. (2001). The importance of noncognitive skills: Lessons from the GED testing program. American Economic Review, 91(2), 145–149.

Hopmann, S. T. (2008). No child, no school, no state left behind: Schooling in the age of accountability. Journal of Curriculum Studies, 40(4), 417–456.

International Association for the Evaluation of Educational Achievement (IEA). (n.d.). Brief history of IEA.

Jackson, P. (1968). Life in classrooms. New York: Holt, Rinehart and Winston.

Labaree, D. F. (1997). Public goods, private goods: The American struggle over educational goals. American Educational Research Journal, 34(1), 39–81.

Labaree, D. F. (2010). Someone has to fail: The zero-sum game of public schooling. Cambridge, MA: Harvard University Press.

McKinsey & Company. (2009). The economic impact of the achievement gap in American schools.

National Commission on Excellence in Education (NCEE). (1983). A nation at risk: The imperative for educational reform. Washington, DC: U.S. Department of Education.

No Child Left Behind Act. (2002). Public Law 107-110.

Organization for Economic Cooperation and Development. (n.d.). PISA FAQ. Retrieved February 12, 2013.

Program for International Student Assessment (PISA). (n.d.). PISA brochure.

Program for International Student Assessment (PISA). (2001a). Knowledge and skills for life: First results from PISA 2000. Paris: OECD Publishing.

Program for International Student Assessment (PISA). (2001b). Knowledge and skills for life: First results from PISA 2000: Executive summary. Paris: OECD Publishing.

Ramirez, F. O., & Boli, J. (1987). The political construction of mass schooling: European origins and worldwide institutionalization. Sociology of Education, 60(1), 2–18.

Spence, M. A. (1973). Job market signaling. Quarterly Journal of Economics, 87, 355–374.

Thurow, L. (1972). Education and economic equality. Public Interest, 28, 66–81.

Tröhler, D., Popkewitz, T., & Labaree, D. F. (2011). Schooling and the making of citizens in the long nineteenth century: Comparative visions. New York: Routledge.

Tyack, D. (1966). Forming the national character. Harvard Education Review, 36(1), 29–41.


Cite this article as: Teachers College Record, Volume 116, Number 9, 2014, pp. 1–14. ID Number: 17533. Date accessed: 2/23/2021.
Posted in Higher Education, Meritocracy, Writing

Caitlin Flanagan — The Fury of the Prep-School Parents

This post is a scorching essay by Caitlin Flanagan, “The Fury of the Prep-School Parents,” which appeared two years ago in The Atlantic.  Here’s a link to the original.

I’m posting it here for two reasons.  One is that it’s a great case in point about the pathologies that arise from the new American meritocracy based on exclusive college credentials.

The other is that it’s a great example of good writing.  For example:

At this point, we’ve reached peak private school. The shortage of spaces at elite colleges has driven these people mad, and there is nothing at all left to contain their behavior; their true motivation for sending their kids to these schools has been laid bare. Yes, it was nice to have such a lovely campus, and yes, the Emily Dickinson/Walt Whitman unit was a delight to hear about over dinner. But they would have sent their kids to barracks to watch The Flintstones for four years if it came with guaranteed admission to Harvard.

Love the last two sentences.  And here’s another:

They are like people who arrive for a week at a five-star hotel only to find out there aren’t enough lounge chairs by the pool and the main dining room is fully booked. At $750 a night? That’s not going to stand. There’s a stern call to the manager, followed by a complimentary upgrade to club level, a bottle of champagne on ice, and a suddenly available two-top at Terrazzo.

I think you’ll enjoy going along for the ride.

The Fury of the Prep-School Parents

Sidwell Friends School in Northwest Washington (Manuel Balce Ceneta / AP)
Burner phones, FBI stakeouts, search warrants—Season 6 of The Wire? No, just our social betters street fighting their children’s way into elite colleges. In March, we got Operation Varsity Blues, which charged a group of wealthy parents and an alleged conman with conspiring to get lackluster students into posh colleges in a scheme so improbably complex that it triggered the use of the RICO statute. Earlier this month, Sidwell Friends School, bastion of the Washington, D.C., elite, was the site of a fantastical, Real Housewives of the Independent Schools cavalcade of hideous parental behavior, which apparently included a “verbal assault” on college counselors, secretly taping conversations with them, calling them from blocked phone numbers to run down other kids in the applicant pool, and trying to obtain copies of other students’ records.

At this point, we’ve reached peak private school. The shortage of spaces at elite colleges has driven these people mad, and there is nothing at all left to contain their behavior; their true motivation for sending their kids to these schools has been laid bare. Yes, it was nice to have such a lovely campus, and yes, the Emily Dickinson/Walt Whitman unit was a delight to hear about over dinner. But they would have sent their kids to barracks to watch The Flintstones for four years if it came with guaranteed admission to Harvard.

The proximate cause of this wretched behavior is one of the oldest fomenters of gang violence: incursion on territory. Private-school kids used to have an expectation of fairly stress-free placement at top colleges. That’s what prep schools were preparing you for—from Milton to Harvard, end of story. But now, as the top colleges seek increasing diversity of race and socioeconomic background in their student bodies, they hold fewer and fewer spaces for modestly rich white kids with strong but not dazzling records. And their parents aren’t taking it.

They are like people who arrive for a week at a five-star hotel only to find out there aren’t enough lounge chairs by the pool and the main dining room is fully booked. At $750 a night? That’s not going to stand. There’s a stern call to the manager, followed by a complimentary upgrade to club level, a bottle of champagne on ice, and a suddenly available two-top at Terrazzo. Maybe there was a time when a certain kind of restrained behavior was expected from the American upper class; you certainly encounter it in novels. But today’s rich people are a different breed, and they are especially unsuited to the fact that an elite-college education is one of the few expensive things that is for sale, but that not everyone is allowed to buy.

But the problem isn’t simply one of supply and demand. It’s also the result of parents who seem to have a great deal in common—the Volvo XC40, Costa Rican vacations, Hillbilly Elegy—but whose only truly shared value is the desire for their children to attend elite colleges. This wasn’t always the case. Most of the famous private schools began with a specific religious affiliation, and while they gradually began to extend admission to people of other faiths, they maintained certain expectations for how those students would conform to the institutional creed.

Former Senator Al Franken tells the story of transferring from a Minneapolis public school to one of the city’s storied private institutions, the Blake School. One day, his math teacher asked him to stay after class. Franken assumed the man wanted to praise him for his good work, but that was not the case.

“I notice in chapel you don’t sing the hymns,” the teacher said. Franken explained that he didn’t sing them because he was Jewish.

After a pause, the man asked him a question. “You want to get into a good college, don’t you?” he said. Yes, Franken said. “And to get into college you need good math grades?” Yes, Franken said.

“I’d sing the hymns,” the teacher said.

Ridding themselves of religious and racial biases has been the great task these institutions have faced for the past four decades. While most of the top schools have retained a nominal connection to their original faiths, these creeds need not trouble any families who do not share them. Today, it is mostly the second-rate institutions (the Catholic schools, of course; Jewish day schools) that still expect religious observance.

While the chaplain of an Episcopal school was once a formidable figure on campus, a man with a great deal of moral authority over the parents, that has changed. Now, if the school still has an Episcopal priest on staff, the poor gal is stashed in a windowless office with a coexist bumper sticker on her laptop case, and a list of phone numbers of local imams willing to speak at chapel services. Indeed, the extent to which an independent school still hews to its religious roots is often the extent to which it is hampered in its ability to deal with monstrous parents. Sidwell Friends, like most Quaker schools, has been able to retain many of its faith traditions despite welcoming a diverse student body; it’s the least oppressive religion on Earth. But the silent search for the inner spark of God is not much help when rabid parents are underfoot. Quakers are pacifists, for God’s sake. Conscientious objectors. If they weren’t going to take on Adolf Hitler, they sure as hell aren’t going to take on a Kalorama mom with blood in her eyes and Duke on her mind.

The elite schools have exchanged religion for a shared code of social justice, multiculturalism, and global citizenship. These are high-minded ideals, and certainly preferable to a strictly Christian code inflexibly forced upon nonbelievers in search of only a fine education and not religious intolerance. But they are ideals that hamper senior administrators from putting dreadful parents in their place.  The real god of these schools is the god of money: cash on the barrelhead, or—if need be—a healthy pledge from Grandpa’s eventual estate. (Ever notice who sponsors Grandparents’ Day at these places? The development office. They lured Grandpa Joe into their lair using your 10-year-old as bait. It’s not shameless; it’s vile.)

In the old days, a misbehaving parent would face a private audience with the headmaster—no cups of tea this time; certainly no glass of sherry—and a merciless dressing-down. But today, that horrible parent could be a major donor, and guess how headmasters are largely evaluated?  By the amount of money they raise. Here’s one thing you need to know about private schools: They have two honor codes, two community-standards contracts, and two disciplinary codes. One is for everyone, and the other is for big donors.

And all this is why two employees of Sidwell Friends, who are leaving the school in June, are the new heroes of everyone who has ever worked at an independent school. Like Norma Rae Wilson shutting down the cotton mill, like Mario Savio announcing that he could not take part, he could not even passively take part in the functioning of a corrupt machine, these two counselors apparently had enough. There should be statues erected in their honor; they should be the main speakers at next year’s National Association for College Admission Counseling Conference.

I don’t know the exact circumstances under which they decided to resign, but I do know this: Independent-school employees rarely quit just because parents behave badly. They usually quit when they become demoralized by an administration that does not unequivocally take their side against raging parents.

America’s top private schools create the leadership class, students who soar like rockets through the best colleges, just like they soared through the top prep schools. Soon these kids will have some actual power. What have they learned along the way? That money, brutish behavior, and selfish demands will always get you what you want.

CAITLIN FLANAGAN is a staff writer at The Atlantic. She is the author of Girl Land and To Hell With All That.
Posted in Writing

Lauren Oyler — The Case for Semicolons

Ok, today let’s talk about something really important:  punctuation.  I know, it’s exciting but it really does matter.  If you want to be understood, you need to punctuate your writing in a way that fosters understanding.

Today’s post is a lovely essay by Lauren Oyler from the Times last week.  Here’s a link to the original.  She’s talking about the most marginal of punctuation marks, the semicolon.  I mean, who can get passionate about a punctuation mark that is never required?  You can always just use a period, so who cares?  For Oyler, this is precisely its charm.

That semicolons, unlike most other punctuation marks, are fully optional and relatively unusual lends them power; when you use one, you are doing something purposefully, by choice, at a time when motivations are vague and intentions often denied. And there are very few opportunities in life to have it both ways; semicolons are the rare instance in which you can; there is absolutely no downside.

And how do you use one?

For those who don’t know the rules — and I don’t blame you! — a semicolon does what it says on the box. Isn’t that nice? It’s a period on top of a comma, and it works like both a period and a comma.

Nuff said.  Enjoy.

The Case for Semicolons

There are very few opportunities in life to have it both ways; semicolons are the rare instance in which you can; there is absolutely no downside.

Credit: Illustration by Simoul Alva

For several years I have returned, frequently, to a vague memory of reading an article in which a woman recommended eating dessert after every meal. This article was probably in a women’s magazine; the woman was possibly from France. The darkly pragmatic angle was that regular indulgences head sugar cravings off at the pass. The spiritual angle, and the better one, was that harmless indulgences are good, and you shouldn’t overthink them — even after breakfast.

This philosophy can be applied to all sorts of low-stakes situations, particularly those burdened by longstanding beliefs about their secretly grave consequences. While I will happily treat myself to a little morning cookie — this strikes me as more Italian than French — my real indulgence is punctuation, which, despite its unflagging service to the essential project of communication, is often subject to pointless regimes of austerity. The saddest, most unfairly represented victim is the semicolon. I try to eat one after every meal.

I don’t remember when I first learned about semicolons, nor do I have a mental list of remarkable semicolons in literature. I don’t want to have to treasure them, though the typical advice for writers of all levels is to use them sparingly, as if there’s a limited supply. This only breeds fear, which in turn breeds stigma: Semicolons are ugly, pretentious and unnecessary; they immaturely try to have it both ways. There are so many things to fear in life, but punctuation is not one of them. That semicolons, unlike most other punctuation marks, are fully optional and relatively unusual lends them power; when you use one, you are doing something purposefully, by choice, at a time when motivations are vague and intentions often denied. And there are very few opportunities in life to have it both ways; semicolons are the rare instance in which you can; there is absolutely no downside.

A semicolon does what it says on the box. Isn’t that nice?

For those who don’t know the rules — and I don’t blame you! — a semicolon does what it says on the box. Isn’t that nice? It’s a period on top of a comma, and it works like both a period and a comma. You can use it to separate two independent clauses — two sentences that work on their own — or to separate items in a series that would be particularly unwieldy with only commas, often because the items contain commas. (“Today I ate three desserts: a tiny cookie, which was free with my espresso; a bigger cookie, which was unfortunately a little dry; and a milkshake, which maybe took things too far.”)

You can also break these rules. I don’t, but you can. That’s another thing that’s good about them. As Cecelia Watson writes in her excellent 2019 book, “Semicolon: The Past, Present, and Future of a Misunderstood Mark,” the idea of a bygone era of seriousness and difficulty, in which everyone knew and followed grammar rules, is fallacious; grammar rules emerged only in the 1800s, and they have been hotly debated ever since. Read the work of any great author, and you will find idiosyncratic, often technically incorrect, punctuation; read the email of any interesting person, and you will find the same thing. Some people will use a semicolon just before a conjunction: “I ate three desserts yesterday; but my life didn’t change.” I think that’s terrible. But here’s another good thing about semicolons: You can use their ambiguity, or flexibility, to achieve whatever tone you like.

So they’re not scary. Are they ugly? That’s an opinion. Theodor Adorno said they looked like “a drooping mustache,” but in his view, that’s good — all punctuation marks, and the downtrodden semicolon especially, are “friendly spirits whose bodiless presence nourishes the body of language”; they ought to be defended. What’s more: Why does your text message, email, tweet, article or book need to be pretty? Is that not also a little pretentious? According to Kurt Vonnegut’s often-taught (and, if you read the full quote, both a little ironic and offensive) advice, “all they do is show you’ve been to college,” but these days anyone can look up how to use a semicolon, and such ideas about pretensions are circumscribed, unimaginative; they imply that the full range and joys of English expression are available only to people with bachelor’s degrees. Besides, all punctuation can be confusing, subject to interpretation. The seemingly harmless period becomes a knife when it appears at the end of a one-line text message, worsening intergenerational conflict as older people tend not to realize they sound curt to their younger interlocutors; the comma often shows up whenever someone wants a pause, even though the pauses most useful in reading are not the same as those you want in speech. The dash is considered a guilty pleasure, sort of chaotic; exclamation points will, for years, be associated with Donald Trump.

That semicolons aren’t popular on social media — where oversimplification and directness reign and the presence of too much grammatical flair is likely to limit “engagement” — is perhaps the only argument some readers will need to be convinced of their value. For the rest of the skeptical, the semicolon conveys a very specific kind of connection between ideas that is particularly useful now — it asserts a link where the reader might not necessarily see one while establishing the fragility of that link at the same time. The world is not accurately described through sets of declarations and mere pauses, without qualification or adjustment; occasionally we are lucky enough to see it many ways, at once.

Lauren Oyler is a writer whose debut novel, “Fake Accounts,” is now available from Catapult.

Posted in Culture, Gaming, Politics, Social media

Online Gaming Is Teaching Us that We Live in a World Without Constraints

This post is a piece by Francis Fukuyama published February 8 in American Purpose.  Here’s a link to the original.

What I particularly like about this brief analysis is the way he uses his own experience learning an engineering design and manufacturing software package to throw light on the recent invasion of the US Capitol.  The problem, he says, is that people who are living in the social media world and who play online games have become accustomed to operating in a world with no constraints.  Without actually having to face the people you’re talking to on social media, you feel free of the usual norms of civility.  And in game world, you can get killed in action and get reborn to play again.  Storming the citadel of American democracy is just another harmless game event, which you can live-stream to your followers.  Thus the surprise when participants suddenly found themselves subject to arrest.  What — is this real?


Francis Fukuyama
Over the past few weeks I’ve been teaching myself how to use Autodesk’s Fusion 360. Fusion 360 is an amazing software package written primarily for mechanical engineers. It incorporates a CAD (computer-aided design) program for making drawings of anything from a small screw to a giant imaginary spaceship. In this, it performs like Autodesk’s earlier program AutoCAD, or the popular Sketchup. But it also incorporates CAM (computer-aided manufacturing) software, which allows you to use your design to control machines that will actually produce the object you imagined, whether through a CNC (computer numerical control) milling machine, or an additive 3D printer.

Fusion 360 is an unbelievably complex program, and there are legions of engineers whose life work is spent inside of it. If you are new to the program, one of the things you don’t at first understand is the concept of constraints. In a simple drawing program like PowerPoint, or even a more sophisticated one like Adobe Illustrator, you can draw figures anywhere you want, move them, modify them, or add to them at will. This doesn’t work in Fusion 360 because the figures have hidden constraints. Once you draw a rectangle you can’t change the corner angles, or set the parallel sides at different angles. The purpose of these constraints is to mimic the behavior of physical objects in the real world: just as you can’t move your hand through a metal plate, neither can you move the representation of your hand through a plate in Fusion. This is what allows you to create software versions of, for example, a piston being moved up and down by a crankshaft, or to test whether parts with the dimensions you specified will actually mate with one another. (If you want to get a sense of constraints in Fusion 360, see this YouTube tutorial by Paul McWhorter.)

Fusion 360 is unusual in the way it imposes constraints. Much of the online world is characterized by lack of constraints. There are many software programs where you can, in effect, move your hand through a steel plate. Online, you can talk to anyone you want, free of the tyranny of distance. Because so much online interaction is anonymous, you are also free of the normal constraints of civility that characterize face-to-face interactions. This is all the more true of the gaming world. If you play a first-person shooter game like Call of Duty, you are given a physically endowed body, limitless energy, and the ability to move effortlessly. Life’s most important constraint is the fear of death, or death itself. But in online gaming you don’t have to worry about being killed; if that happens, you just get a new life. Nor does taking lives have any consequences: you aren’t arrested and put in jail. You get to try out new activities like spraying a room full of people with machine-gun fire and committing mass murder. Online gaming blends seamlessly with the plague of superhero movies that Hollywood has churned out, where all men have bulging muscles and all women have gigantic breasts.

In watching the pro-Trump rioters who attacked Congress on January 6, it occurred to me that we have produced an entire generation that has grown up not just on Facebook, but in this gaming/superhero world where normal constraints do not exist. It is notable how gaming memes have proliferated among right-wing groups, like Pepe the Frog or the Red Pill.  

Creating new memes has itself become a way of advancing in that hierarchy, as Tara Isabella Burton has documented in her book Strange Rites. From the many background reports and interviews with participants in the riot, it would appear that very few of them were unemployed factory workers angry that their jobs had been exported overseas. Many seemed to be middle-class men and women with steady but perhaps boring jobs and marriages. They were living in right-wing filter bubbles, yes, but many of the younger ones were also living in an online gaming world in which normal constraints did not apply. It seemed that many of them had a hard time distinguishing between online fantasy and the real world, driven as they were by QAnon conspiracy theories and hidden signals from their President, Donald Trump. Most had no real plan for what they would do when they got into the Capitol, because you don’t really need a plan when you play an online shooter game. Many of them were busy taking selfies and boasting about their experiences, oblivious of the fact that they were providing evidence that would be used to charge them with crimes.

This is part of a much broader phenomenon that Adam Garfinkle has recently called the “Collapse of Reality” in these pages. We are used to being fed a continuous stream of engaging spectacles. We increasingly don’t have direct experience of reality that is not mediated by a screen, and not influenced by algorithms that are intended to alter our behavior. This narrowing realm of autonomy is masked by the apparent endless expansion of consumer choice that fools us into thinking that we are in fact masters of our environment.

Right-wing politics in the United States used to revolve around policy issues like free trade, immigration, taxes, regulation, and the like. In recent years it has become increasingly detached from specific policy positions. I’m old enough to remember the days when Republicans attacked Obama’s “reset” with Russia as the hopelessly naïve coddling of a dictator. After January 6th, by contrast, a female rioter who had taken a laptop from Nancy Pelosi’s office in the Capitol tried to sell it to the Russians, because they were being helpful to Trump. What we have now on the right instead of commitments to policy and ideology is a cult of personality, and beyond that, devotion to outlandish fantasy stories about a global cabal of Satanists intent on drinking children’s blood. It appears that a substantial part of the Republican Party thinks it is living inside a video game, where they can put their hands through steel plates and overturn national elections. The “metaverse” online world first imagined by the Cyberpunk science fiction author Neal Stephenson in his novel Snow Crash has in fact become reality, or rather has melded with reality in ways much darker than Stephenson—always a hugely imaginative writer—ever himself imagined.

Americans need to return to a world full of constraints. And I need to learn how to use Fusion 360 more effectively.

Posted in Educational goals, Teaching

Work with What You’ve Got: Advice for Teachers from Ken Teitelbaum

This post is a review I wrote of a new book by Ken Teitelbaum, which will eventually appear in the Journal of Curriculum and Pedagogy.

At its heart, this is a book of advice for teachers, and its messages really resonate with me.  You can’t change the world, but you can do something important where you are.  Small victories are a big deal.

Teitelbaum Cover

Critical Issues in Democratic Schooling: Curriculum, Teaching, and Socio-Political Realities, by Kenneth Teitelbaum. Routledge, 2020. 310 pages.

On the face of it, this book shouldn’t work.  The publisher didn’t do it any favors by giving it a graceless title and a glumly generic cover, which fairly shout the news that this is going to be a tedious academic monograph: read it at your own risk.  Equally off-putting is the price, $120 in hardcover and $48.50 in paper. 

The reader’s stomach sinks on hearing the author say in the introduction that “Some of what I have included has its genesis in comments I was invited to give as a dean or department chair, or shared in classes, and so is not available to be read anywhere else. Other essays were originally published as journal articles, book chapters, and newspaper op-eds….”  Lord, you think.  This isn’t a book; it’s a scrapbook, filled with fragments and academic orphans – 21 short chapters published here solely on the principle of waste not, want not. 

In the unlikely event that you ignore all the warning signs and keep reading, however, you will discover that this book really works.  You’ll find yourself in the hands of a thoughtful, gracious, and extraordinarily experienced senior educator who knows how to write and has important insights to pass along.  You’ll come to see the book as an illuminating set of reflections about what it means to be a teacher, teacher educator, administrator, and education scholar and what gives meaning to all of these practices.  Throughout, Teitelbaum’s voice is engagingly conversational, avoiding the usual academic jargon and pontification in favor of just talking to you as a fellow educator about why we do what we do, how we do it, and why it matters.  He draws a lot from his own experiences – as teacher, teacher educator, scholar, chair, and dean – which makes his insights both more personal and more compelling, as he somehow manages to be inspiring without being preachy or unrealistic.  And he welcomes a continuing engagement with the reader, even giving us his email address.

The chapters are gathered together into three sections: Teaching and Teacher Education, Curriculum Studies, and Multiculturalism and Social Justice.  My favorite is the first.  Here he walks us through some of the things that make teaching so demanding and at the same time so rewarding.  People think teaching is easy, just a matter of being good with kids, but it’s actually extraordinarily difficult.  You can’t make students learn but instead must find ways to motivate learning, which means your professional success or failure is in their hands as much as yours.  And it doesn’t help that you’re in an educational system that is increasingly driven by accountability metrics that undercut the ongoing pedagogical relationship that makes learning possible.

But he always balances the bad news with the good.  Teaching can bring rewards that are hard to find in other lines of work.  The self-contained classroom may lead to professional isolation but it also allows you a remarkable degree of autonomy, so you can establish your own approach to engaging your students.  It’s a practice that draws upon the whole person of the teacher – your intellect, affect, values, experience, and personality.  You get the satisfaction of watching students develop under your tutelage and of knowing that you’re making a small but meaningful contribution to forming a better society.  He also walks the reader through the challenges and satisfactions of being a teacher educator, trying to shape the teachers who will try to shape the students.   

In the second section, Teitelbaum takes us into his scholarly field of curriculum studies, showing the importance of intellectual exploration for being an effective educator.  You need to understand the system in order to function within it, and you need to explore the nature of curriculum in order to recognize what we are actually teaching students and why.  Theory can help us see that what students are learning in school comes only in part from the formal curriculum of school subjects.  A lot of learning comes not from the content of schooling but the process.  We think of process as the means to facilitate learning while failing to recognize that this process may be teaching them lessons we don’t want them to learn: do what you’re told, play the game, suppress yourself. 

In the third section, he explores issues of justice and multiculturalism.  As is his pattern throughout the book, he approaches these issues both as ethical questions that teachers need to wrestle with in trying to forge a more equitable society and as practical techniques for enabling them to do their work effectively.  You can’t engage students if you can’t establish some grounds for mutual understanding.  It’s nice to see an academic talk about these issues judiciously, without feeling the need to shout.

If I were still teaching prospective teachers, I would have them read chapters from this book in order to get a richly informed and gently instructive introduction to the profession.  They need to hear that being an educator is a balancing act.  You are an independent actor in a complex organizational setting, which both facilitates and limits the possibilities of action for you.  Teitelbaum consistently establishes a reasonable middle ground between trying to change the world and just doing your job.  His message is simple and compelling:  “Work with what you’ve got.”  You do what you can within the constraints of your current situation, pushing the limits without threatening your own viability.  And you do what you can within the constraints of your own personal capacities without worrying about what you’re not able to do.  “Keep trying to understand and to create positive change. Don’t get down or lose hope. Keep trying to help carve out islands of decency wherever and whenever possible, even if in limited ways.”  Small victories are a great accomplishment.

David Labaree

Posted in Accountability, Higher Education, History of education

Accountability Could Kill US Higher Ed

This is a new piece I wrote as the foreword to a book by J. M. Beach — The Myths of Measurement and Meritocracy: Why Accountability Metrics in Higher Education are Unfair and Increase Inequality — which will be published this summer by Rowman and Littlefield.  Two weeks ago, I posted the foreword I wrote for the first volume in this series, which looked at the impact of accountability on elementary and secondary education.

For me, this was a chance to provide a brief summary of my thoughts about the dire problems that accountability metrics pose for US higher ed.  See what you think.


In this book, J. M. Beach argues that the accountability movement, which has already done so much damage to American public schools, is now coming after higher education as well, and he shows that this effort is not only based on faulty measures but also promises to lay waste to a system that is the envy of the world.

The idea that a system of education should be held accountable to the public is in itself hardly problematic.  Of course it should.  We need to know that higher education is fulfilling the goals we set for it and having a beneficial impact on both the individuals who enroll in it and the society in which they live and work.  The problem is in the metrics used to measure institutional effectiveness, including such things as student evaluations of their teachers, tests of student knowledge acquisition, graduation rates, and income levels after graduation.

A key problem with these metrics is that they narrow the aims for higher education to a few outcomes that are readily measurable but not very important.  We want this system to produce competent citizens for our democracy, productive workers for our economy, and social opportunity for people from all walks of life.  None of the current metrics captures these broader outcomes that we want to see from our system of colleges and universities. 

In the first volume of this series, Beach explores the problem of accountability for American public schools, and many of the dysfunctions that arise from holding these institutions accountable also apply to higher education.  One is that teaching and learning in colleges and universities, as in elementary and secondary schools, require the mutual cooperation of teacher and student.  Teachers can’t make students learn; they need to encourage them to learn, which is much more difficult to carry out and impossible to standardize.  Another is that the system of higher education is astonishingly complex and radically decentralized, with some 4,000 degree-granting institutions operating independently and largely under campus-level control.  These campuses vary considerably in governance, program focus, selectivity, funding, and degrees offered, which undercuts the likelihood that any standards of performance would apply to all.

But the American system of higher education has some core characteristics that mark it off from elementary and secondary schooling and further confound efforts to apply accountability measures across the board. 

First, whereas attendance at the lower levels of schooling is mandated by law, college attendance is purely voluntary.  A college can’t count on having students enroll; it has to induce them to do so.  And in a crowded higher education market, it has to compete effectively with other colleges that are also marketing their wares.  Exacerbating the stakes in this competition is the fact that public and private colleges alike are heavily dependent on student tuition and fees to maintain financial viability.  For most of US history, state funding has fallen far short of covering college costs, and this has become increasingly true in the last 50 years, as the share of college budgets covered by state appropriations has been steadily declining.

A major consequence of this situation is that, much more than the school system, the higher education system has to be highly sensitive to consumer preferences.  Colleges are compelled to be adept at figuring out what students want and giving it to them.  If they don’t, the college down the road will, so they need to stay attuned to current trends and keep ahead of the curve.  Think of how the college dorm room with bunk beds and a bathroom down the hall has turned into a suite of rooms with hotel amenities.  Look at the growth of food courts to replace the cafeteria and elaborate athletic facilities to replace the lowly gym.

As early as the 1880s, colleges realized they needed more than academics to keep the consumer happy, so they encouraged fraternities, sororities, and intercollegiate athletics in order to rev up campus social life.  These elements not only attracted and retained students but also built a loyalty to the institution that would pay off in alumni who would send their children to the alma mater and make generous donations to the annual fundraising campaign.  Think of all the ways that American students and alumni display the college logo, on sweatshirts and caps and bumper stickers and yard signs on game days.  Students at European universities don’t wear the brand the way that American students do.  The reason is that here college is not where you go; it’s who you are.

With the student-consumer as king of campus, colleges also need to be worried about demanding too much studiousness from their clientele.  Party schools build more loyalty than academic rigor and stringent grading.  With student course evaluations playing such an important role in college accountability, it’s no wonder that grades on campus have inflated so much, with the average grade moving from C to something more like A-.  Low grades don’t make for happy campers and future donors.

In addition to consumerism, the US higher education system has another trait that distinguishes it from other systems around the world.  It’s a system without a plan.  No one could have designed a system as fiendishly complex as this:  public, private, religious, and for-profit; associate, bachelor’s, master’s, professional, and doctoral degrees; residential and commuter; urban and rural; teaching-focused and research-focused; selective and open access. 

The present system emerged organically in the early nineteenth century as a collection of private colleges with corporate charters and only occasional state funding.  Each operated as an independent enterprise with a private board of governors and a president as CEO, whose job was to negotiate a path toward viability through the crowded college market.  When states began opening public institutions, they followed the model of independence and self-sufficiency laid out by their private predecessors, since they too had to function without reliable state funding.

The result was the evolution of a system whose vulnerabilities in the nineteenth century turned into strengths in the twentieth.  The forced autonomy that came from underfunding made these institutions into a formidable array of competitors who figured out how to survive and thrive in a demanding environment.  Relatively free of control from the state, they were able to adapt quickly to changing market conditions, pursuing opportunities as they emerged – for new programs of study, lines of research, funding prospects, donor bases, and avenues for excellence, which would draw in the best students and faculty and enhance the brand.

What makes the system so effective today is that it has never had to be held accountable to a monolithic regime of metrics set by the state.  Emergent rather than planned, arising from the bottom up rather than imposed from the top down, the system has assumed a dominant position in the global ecology of higher education.  Accountability could kill it.

David Labaree

Palo Alto, CA

Posted in Credentialing, Educational goals, History of education, School reform

Consuming the Public School

This essay is a piece I published in Educational Theory in 2011.  Here’s a link to a PDF of the original.

In this essay I examine the tension between two competing visions of the purposes of education that have shaped American public schools. From one perspective, we have seen schooling as a way to preserve and promote public aims, such as keeping the faith, shoring up the republic, or promoting economic growth. From the other perspective, we have seen schooling as a way to advance the interests of individual educational consumers in the pursuit of social access and social advantage. In the first half of the essay I show the evolution of the public vision over time, from an emphasis on religious aims to political ones to economic ones and, finally, to an embrace of individual opportunity. In the second half, I show how the consumerist vision of schooling has not only come to dominate in the rhetoric of school reform but also in shaping the structure of the school system.

Consuming the Public School

David F. Labaree

In the course of the last 400 years, the tension between two competing visions of the purposes of education has shaped American public schools.1 From one perspective, we have seen schooling as a way to preserve and promote public aims, such as keeping the faith, shoring up the republic, or promoting economic growth. From the other perspective, we have seen schooling as a way to advance the interests of individual educational consumers in the pursuit of social access and social advantage. This essay explores the evolution of that tension across the history of American schools.

I begin this account by tracing the emergence of this tension in the colonial period. Then in the first half of the essay, I explore the evolution of the public vision of education during the course of American history through the rhetoric of the country’s most significant school reform movements. Here I argue that over time the public mission of American schools shifted from keeping the faith, to preserving the republic, to stimulating the economy, and finally to promoting social opportunity.

In the second half of the essay, I examine the impact that the private vision—expressed through consumer demand—had in reshaping the structure of the school system across the same period of time. In this context, I argue that educational consumers have long expressed a consistent preference (through their enrollment choices and their votes) for a school system that was less focused on producing benefits for the community as a whole than on providing selective benefits to the students who earned its diplomas. Families have been willing to acknowledge that the system should provide educational access for other people’s children, but only as long as it also has provided educational advantage for their own. This consumerism has been a factor in shaping schools from the beginning, but during the second half of the twentieth century, it also came to reshape the public vision of education around the consumerist principle of equal individual opportunity.

The Core Tension Between Public Aims and Private Interests in Schooling

At the very beginning of schooling in the American colonies, a tension arose between two different visions of the purposes of education, and this tension has been with us ever since. First, let us examine the origins of this tension in the colonial period.

Colonial Schooling: Preserving the Faith

At the heart of the push for schooling in colonial America was a profoundly conservative vision of education’s public mission: to preserve the religious community and maintain the faith. The language of the 1647 Massachusetts law that mandated schooling vividly makes this argument for education:

It being one chief project of that old deluder, Satan, to keep men from the knowledge of the Scriptures, as in former times by keeping them in an unknown tongue, so in these latter times by persuading from the use of tongues, that so at least the true sense and meaning of the original might be clouded by false glosses of saint seeming deceivers, that learning may not be buried in the grave of our fathers in the church and commonwealth, the Lord assisting our endeavors,—

It is therefore ordered, that every township in this jurisdiction, after the Lord hath increased them to the number of fifty householders, shall then forthwith appoint one within their town to teach all such children as shall resort to him to write and read.2

Thus only through education could congregants acquire “the true sense and meaning” of the Bible and thereby save themselves from the “false glosses of saint seeming deceivers.” Such a mission was too important to be left to chance or to the option of individual parents. Instead it required action by public authority to make schooling happen.

At the same time that the official rationale for communities to provide education in colonial America was proclaimed to be the pursuit of a religious ideal, another more pragmatic reason quietly emerged that pushed individuals to seek education on their own. In order to engage in commerce, people needed to be reasonably good at reading, writing, and arithmetic. Without these skills, storekeepers and merchants and tradesmen and clerks would be unable to make contracts, correspond with customers, or keep accounts. From this angle, schooling was a practical necessity for anyone who hoped to make a living by means of commercial activity in a country where, from the beginning, trade was a central fact of life.

An Emerging Pattern

So before there was an American system of education—before there was even an American nation—schooling in America was an important and growing component of ordinary life, and it educated a larger share of the populace than did schooling in the rest of the world.3 The two factors that propelled this growth of schooling, however, were quite different in character. From the view of religion, schooling was the pursuit of a high ideal, a way to keep the faith and promote piety. Religion gave schooling a public rationale that was explicit, openly expounded by preachers, political leaders, journalists, and parents.

From the view of commerce, schooling was the pursuit of a mundane interest, a way to make a living in an increasingly trade‐oriented economy. This rationale for schooling was well understood but only rarely made explicit. The prevailing religious rhetoric about education, backed by the full authority of scripture, made it difficult for anyone to argue the case in public for schooling as a way to get ahead financially, since to do so would seem at best unworthy and at worst irreligious. And the two factors differed not only in the goal they set for schooling but also in the agents who would carry out this goal. Whereas the religious view stimulated top‐down efforts by government and the church to promote and provide education for the populace, the commercial view stimulated bottom‐up efforts by individual consumers to pursue education for their own ends.

From the colonial period to the present, the economic rationale for schooling in the United States has gradually grown in intensity, and in the twentieth century it became increasingly explicit as a primary goal for education. Meanwhile the religious rationale for schooling has gradually faded into the background, giving way to more secular educational goals. During this entire time, however, the pressures that have sought to shape educational change in the United States have continually taken the form of these two early impulses to provide and pursue schooling.

The history of American education is in many ways an expression of this ongoing tension between schooling as the pursuit of gradually evolving cultural ideals and schooling as the pursuit of increasingly compelling economic practicalities. The first of these rationales has propelled most educational reform movements, which have demanded that schools adapt themselves to new ideals and help society realize these ideals—whether this ideal be religious faith, civic virtue, economic efficiency, racial equality, or individual liberty. These ideals have formed the core of the rhetoric of the major school reform movements.

The second rationale is what has propelled individuals to demand educational opportunity and to avail themselves of it when it is made available. Prior to the mid‐twentieth century, however, this second form of pressure for educational change flew under the radar for the most part; this is evidenced by the fact that it is largely missing from the language of reform documents and educational politics. Still, while the first approach has set waves of reform episodically rolling across the surface of education, the second has been the source of a steady current of incremental change flowing beneath the surface. As David Tyack and Larry Cuban point out in their influential essay on American schooling, the history of school reform in this country has been an odd mix of turbulent reform rhetoric, which has only modestly affected the underlying structure of schooling, and a slow and silent evolutionary process, which has exerted substantial change in this structure over a sufficiently long period of time that this change is barely visible.4

The Evolution of the Public Vision of Schooling: From Faith to Citizenship to Economic Growth to Equal Opportunity

Reform visions of schooling have long promoted education as a public good, but the definition of the public vision has shifted over time. What started as a purely religious argument turned into a secular political argument, then a pragmatic economic argument, and finally an individual access argument. The consumer interest in schooling as a private good was there from the beginning, but by the end of this period it had worked its way into the heart of the public vision of education. Let us examine briefly how this change played out in the rhetoric of the major movements for American school reform. What follows is not intended as a history of school reform; it is far too selective and cryptic for that. Instead it is a sketch of major trends in the evolution of the public vision of education as viewed through the lens of school reform documents.

Common School Movement: Citizenship

As secretary of the Massachusetts Board of Education in the 1840s, Horace Mann became the most effective champion of the American common school movement, which established the American public school system in the years before the Civil War. Mann’s Twelfth Annual Report, published in 1848, provided a comprehensive summary of the argument for the common schools. In it he made clear that the primary rationale for this institution was political: to create citizens with the knowledge, skills, and public spirit required to maintain a republic and to protect it from the sources of faction, class, and self‐interest that pose the primary threat to its existence. After exploring the dangers that the rapidly expanding market economy posed to the fabric of republican community by introducing class conflict, he proclaimed:

Education, then, beyond all other devices of human origin, is the great equalizer of the conditions of men—the balance‐wheel of the social machinery…. The spread of education, by enlarging the cultivated class or caste, will open a wider area over which the social feelings will expand; and, if this education should be universal and complete, it would do more than all things else to obliterate factitious distinctions in society.5

A few pages later, Mann summed up his argument with the famous statement, “It may be an easy thing to make a Republic; but it is a very laborious thing to make Republicans; and woe to the republic that rests upon no better foundations than ignorance, selfishness, and passion.”6 In his view, then, the common school system was given the centrally important political task of making citizens for a republic. And toward this end, its greatest contribution was its commonness, drawing together all members of society into a single institution and providing them with the shared educational experience and civic grounding they needed to function as members of a republican community. For the common school movement, all other goals were subordinate to this one.

Progressive Movement: Social Efficiency

The progressive education movement arrived on the scene in the United States at the start of the twentieth century. Pedagogical progressives such as John Dewey and William Kilpatrick argued for child‐centered pedagogy, discovery learning, and student engagement, while the dominant strand of progressive reformers (that is, administrative progressives) such as Edward Thorndike and Ellwood Cubberley argued for social efficiency and preparing students for their future social roles.7 

In 1918, the Commission on the Reorganization of Secondary Education issued a report to the National Education Association titled Cardinal Principles of Secondary Education, which spelled out the administrative progressive position on education more clearly and consequentially than any other single document. The report announced at the very beginning that secondary schools need to change in response to changes in society, which “call for a degree of intelligence and efficiency on the part of every citizen that can not be secured through elementary education alone, or even through secondary education unless the scope of that education is broadened.”8 According to the authors, schools exist to help individuals adapt to the needs of society; as society becomes more complex, schools must transform themselves accordingly; and in this way they will help citizens develop the socially necessary qualities of “intelligence and efficiency.”

This focus on social efficiency, however, did not deter the authors from drawing on political rhetoric to support their position. In a 12,000‐word report, they used the terms “democracy” or “democratic” no fewer than forty times. But what did they mean by democratic education? At one point in boldfaced type they state that “education in a democracy, both within and without the school, should develop in each individual the knowledge, interests, ideals, habits, and powers whereby he will find his place and use that place to shape both himself and society toward ever nobler ends.”9 

So, whereas Mann’s reports used political arguments to support a primarily political purpose for schooling (preparing citizens with civic virtue), the commission’s report used political arguments about the requirements of democracy to support a vision of schooling that was primarily economic (preparing efficient workers). In addition, the report preserved the concern of common school proponents about school as a public good, but only by redefining the public good in economic terms. Yes, education serves the interests of society as a whole, said these progressives; but it does so not by producing civic virtue but by producing what we would later come to call human capital.

Desegregation Movement: Equal Opportunity

If the administrative progressive movement marginalized the political argument for education, using it as window dressing for a vision of education as a way to create productive workers, the civil rights movement brought politics back to the center of the debate about schools—but now in a form that drew largely from consumerism. In the 1954 decision of the U.S. Supreme Court, Brown v. Board of Education of Topeka, Kansas, Chief Justice Earl Warren, speaking for a unanimous court, made a forceful political argument for the need to desegregate American schools.10 The key question the opinion asked and answered was this: “Does segregation of children in public schools solely on the basis of race, even though the physical facilities and other ‘tangible’ factors may be equal, deprive the children of the minority group of equal educational opportunities? We believe that it does.”

The Court’s reasoning moved through two main steps in reaching this conclusion. First, Warren argued that the social meaning of education had changed dramatically in the ninety years since the passage of the Fourteenth Amendment. In the years after the Civil War, “the curriculum was usually rudimentary; ungraded schools were common in rural areas; the school term was but three months a year in many states; and compulsory school attendance was virtually unknown.” As a result, education was not seen as an essential right of any citizen; but that had now changed:

Today, education is perhaps the most important function of state and local governments…. [I]t is a principal instrument in awakening the child to cultural values, in preparing him for later professional training, and in helping him to adjust normally to his environment. In these days, it is doubtful that any child may reasonably be expected to succeed in life if he is denied the opportunity of an education. Such an opportunity, where the state has undertaken to provide it, is a right which must be made available to all on equal terms.

This led to the second part of the argument: “Segregation with the sanction of law … has a tendency to [retard] the educational and mental development of Negro children and to deprive them of some of the benefits they would receive in a racial[ly] integrated school system.” In combination, these two arguments—education is an essential right and segregated education is inherently harmful—led Warren to his conclusion:

We conclude that, in the field of public education, the doctrine of ‘separate but equal’ has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs … are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment.

The argument in this decision was at heart political, asserting that education is a constitutional right of every citizen that must be granted to everyone on equal terms. But note that the political vision in Brown is quite different from the political vision put forward by Mann. For the common school movement, schools were critically important in the effort to build a republic; their purpose was political. But for the desegregation movement, schools were critically important as a mechanism of social opportunity. Their purpose was to promote social mobility. Politics was just the means by which one could demand access to this attractive educational commodity.

In this sense, then, Brown depicted education as a private good, whose benefits go to the degree holder and not to society as a whole. The Court’s argument was not that granting African Americans access to equal education would enhance society, both black and white; instead, it argued that African Americans were suffering from segregation and would benefit from desegregation.11 Quality education was an important form of property that they had been denied, and the remedy was to give them access to it. In this decision, republican equality for citizens had turned into equal opportunity for consumers.

Standards Movement: Social Efficiency

In 1983, the National Commission on Excellence in Education produced a report titled A Nation at Risk, which helped turn the emerging standards effort into a national reform movement. The report got off to a fast start, issuing a dire warning about how bad things were and how important it was to reform the educational system:

Our Nation is at risk. Our once unchallenged preeminence in commerce, industry, science, and technological innovation is being overtaken by competitors throughout the world…. [T]he educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people.12

This passage set the tone for the rest of the report. It asserted a vision of education as an intensely public good: All Americans benefit from its successes, and all are threatened by its failures. The nation is at risk.

But the report represented education as a particular type of public good, which benefited American society by giving it the human capital it needed in order to be economically competitive with other nations:

We live among determined, well‐educated, and strongly motivated competitors. We compete with them for international standing and markets, not only with products but also with the ideas of our laboratories and neighborhood workshops. America’s position in the world may once have been reasonably secure with only a few exceptionally well‐trained men and women. It is no longer.13

The risk to the nation posed here was primarily economic, and the main role that education could play in alleviating this risk was to develop a more efficient mechanism for turning students into productive workers. In parallel with the argument in Cardinal Principles, A Nation at Risk asserted that the issue of wealth production was the most important motive in seeking higher educational standards.

School Choice Movement: Consumerism and Social Efficiency

The school choice movement had its roots in the work of Milton Friedman, who devoted a chapter to the subject in his 1962 book, Capitalism and Freedom.14 But the movement really took off as a significant reform effort in the 1990s, and a major text that shaped the policy discourse of this movement was a 1990 book by John Chubb and Terry Moe—Politics, Markets, and America’s Schools. The argument they raised in favor of school choice consisted of two key elements.

First, they used the scholarly literature on school effectiveness to argue that schools are most effective at promoting student learning when they have the greatest degree of autonomy in administration, teaching, and curriculum. Second, they argued that democratic governance of school systems necessarily leads to bureaucratic control of schools, which radically limits autonomy; whereas market‐based governance, based on empowering educational consumers instead of empowering the state, leads to more school autonomy. As a result, they concluded, we need to shift from democratic to market control of schooling in order to make schools more educationally effective.

Chubb and Moe welcomed the fact that, by shifting control from a democratic polity to the educational consumer, the proposed school choice system would change education from a public good to a private good:

Under a system of democratic control, the public schools are governed by an enormous, far‐flung constituency in which the interests of parents and students carry no special status or weight. When markets prevail, parents and students are thrust onto center stage, along with the owners and staff of schools; most of the rest of society plays a distinctly secondary role, limited for the most part to setting the framework within which educational choices get made.15

In this way, then, the rhetoric of the school choice movement at the close of the twentieth century represented the opposite end of the scale from the rhetoric of the common school movement that set in motion the American public school system in the middle of the nineteenth century. In educational reform rhetoric, we have moved all the way from a political rationale for education to a market rationale, and from seeing education as a public good to seeing it as a private good. Instead of extolling the benefits of having a common school system promote a single virtuous republican community, reformers were extolling the benefits of having an atomized school system serve the differential needs of a vast array of disparate consumer subcultures.

Incorporating the Politics of Equal Opportunity

The start of the twenty‐first century saw an interesting shift in the rhetoric of the standards movement and the choice movement, as both incorporated the language of equal opportunity from the civil rights movement. In their original form, both movements ran into significant limitations in their ability to draw support, and both turned to a very effective political argument from the civil rights movement to add passion and breadth to their mode of appeal.

The New Standards Movement. In January 2002, President George W. Bush signed into law a wide‐reaching piece of standards legislation passed with broad bipartisan support. The title of this law explains the rhetorical shift involved in gaining approval for it: the No Child Left Behind Act.16 Listen to the language in the opening section of this act, which constitutes the most powerful accomplishment of the school standards movement: “The purpose of this title is to ensure that all children have a fair, equal, and significant opportunity to obtain a high‐quality education and reach, at a minimum, proficiency on challenging State academic achievement standards and state academic assessments.”

This end would be accomplished by aligning education “with challenging State academic standards,” “meeting the educational needs of low‐achieving children in our Nation’s highest‐poverty schools,” “closing the achievement gap between high‐ and low‐performing children,” “holding schools accountable for improving the academic achievement of all students,” “targeting … schools where needs are greatest,” and “using State assessment systems designed to ensure that students are meeting challenging State academic achievement and content standards.”

What we find here is a marriage of the standards movement and the civil rights movement. From the former comes the focus on rigorous academic subjects, core curriculum for all students, and testing and accountability; from the latter comes the urgent call to reduce social inequality by increasing educational opportunity. The opening sentence captures both elements succinctly.

The New Choice Movement. In the late 1990s, the politics of school choice became more robust with the introduction of a new approach to the choice movement’s rhetorical repertoire. A 2005 book by Julian Betts and Tom Loveless, Getting Choice Right: Ensuring Equity and Efficiency in Educational Policy, reflected the change. Choice could now be presented as a way to spread social opportunity to the disadvantaged:

Indeed, the question of school choice is not an “if” or a “when.” We have always had school choice in the United States, through the right of parents to send their child to a private school and through the ability of parents to pick a public school for their child by choosing where to live. Clearly, affluent parents have typically been the main beneficiaries of these forms of school choice.

In recent decades new forms of school choice have arisen that have fundamentally changed the education landscape. In many cases these new mechanisms have provided less affluent families with their first taste of school choice.17

In both contemporary school reform movements, the ideal of schooling for equal opportunity had moved to the center of the reform rationale.

The Evolving Structure of the Public School: Consumer Demand for Access and Advantage

Educational reform has been only one part of the story of change in the publicness of the American public school. I argue that the education market was more effective than reform movements in shifting the focus of the public school system from creating republican community to promoting individual opportunity. In the context of this essay, I define the education market as the sum of the actions of all educational consumers as they pursue their individual interests through schooling. I define consumers, in turn, as individuals who are acting toward education in their roles as educational consumers, as opposed to their numerous other roles, such as citizens and taxpayers and friends and caring parents and spiritual beings. Their consumer role focuses on the acquisition of education as a private good, for themselves and their children, an acquisition that can enhance their social opportunities in competition with others.

From early in the history of American education, American families and individuals have looked on education as an important way to get ahead and stay ahead in a market society. Even before formal schooling was commonplace, families sought to provide their children with the kinds of literacy and numeracy skills that were essential for anyone who wanted to function effectively in the commercial life of the colonies. At stake was not just success but survival.

The introduction of universal public education in the common school era made such basic skills available to everyone in the white population at public expense. This meant that a common school education became established as the baseline level of formal skill for the American populace in the nineteenth century. For the small number of students who gained a more advanced education at an academy, high school, or college, this educational advantage gave them an edge in the competition for the equally small number of clerical, managerial, and professional roles. Late in the nineteenth century, the number of office jobs increased, which raised the value of a high school education, and by the start of the twentieth century, employers increasingly came to use educational qualifications to decide who was qualified for particular jobs, including both white collar and blue collar positions. At this point, the economic returns on the consumer’s investment in education became quite substantial all across the occupational spectrum.18

For our purposes, in trying to understand the factors affecting the public character of American public schools, the consumer effect on the school system is quite different in both form and function from the reformer effect. One distinction is that reformers over the years have tended to treat education as a public good. They have seen their reform efforts as the solution of a social problem, and the benefits of this reform would be shared by everyone, whether or not they or their children were in school. In contrast, consumers approach education as a private good, which is the personal property of the individual who acquires it. By the late twentieth century, reformers began to graft the private good approach onto the root of their traditional public good approach. But even then they were using the equal opportunity argument to provide political support for a program for schooling that was still primarily focused on producing human capital for the public good.

Another distinction is that reformers are intentionally trying to change the school system and improve society through their reform efforts. In contrast, consumers are simply pursuing their own interests through the medium of education. They are not trying to change schools or reform society; they are just trying to get ahead or at least not fall behind. But, in combination, their individual decisions about pursuing education do exert a significant impact on the school system. These choices shift enrollments from some programs to others and from one level of the system to another. They pressure political leaders to shift public resources into the educational system and to move resources within the system to the locations that are in greatest demand. At the same time, these educational actions by consumers end up exerting a powerful influence not only on schools but also on society. When consumers use education to address their own social problems, the social consequences are no less substantial for being unintended.

The American school system was a deliberate creation of the common school movement, but once the system was set in motion, consumers rather than reformers became its driving force. Consumers drove the extraordinary expansion of American school enrollments to a level higher than anywhere else in the world, starting with the surge from primary school into grammar school in the late nineteenth century, into high school in the first half of the twentieth century, and into college in the second half. Reformers did not make school expansion happen; they just tried to put this consumer‐generated school capacity to use in service of their own social goals, particularly the goal of social efficiency.

Not only did consumers flood the system with students, but they also transformed the system’s structure. They turned the common school, where everyone underwent the same educational experience, into the uncommon school, where everyone entered the same institution but then pursued different programs. Their most consequential creation in this regard was the tracked comprehensive high school, which established the model for the reconstructed (not reformed) educational system that emerged at the start of the twentieth century and is still very much with us.

At the heart of this reconstructed system is the peculiarly American balance between access and advantage. This balance was not the brainchild of school reformers—that is, it was not proposed as the educational solution to a social problem. Instead, it was the unintended outcome of the actions of individual consumers competing for valuable credentials in the education market. Like any other market, the education market consists of a diverse array of actors competing for advantage by acquiring and exchanging commodities; the difference is that the commodities here are educational credentials.

As a result, the education market does not speak with a single voice but with competing voices, and it exerts its impact not by pushing in a single direction but by pushing in multiple directions. When the common school system was introduced into a society with an unequal distribution of social advantages, families naturally started to use it in their efforts to improve or preserve their social situation. The men overseeing the common school inadvertently set off the competition for educational advantage when they created the public high school as a way to lure middle‐class families into the public school system. So from the very start, the American school system simultaneously provided broad access to schooling at one level, and exclusive access to schooling at a higher level. The race was on.

For most of the nineteenth century, the high school remained largely a middle‐class preserve within the school system. During the same period, working‐class enrollments gradually expanded from the lower grades into the grammar school grades. By the 1870s and 1880s, grammar school enrollments were nearing universality in the United States, which led naturally to an increase in consumer demand for access to the high school. Before the end of the century, the system yielded to this demand and began opening a series of new high schools, which led to a rapid expansion of high school enrollments. Increased access for working‐class families, however, undercut the advantage that high school attendance had long brought middle‐class families. How was education supposed to meet both of these consumer demands within the same school system?

It turns out that the education market was much more adept at constructing such educational solutions to complex social problems than was the school reform process. With a little help from the progressives (who thought tracking was socially efficient), consumer demand created the tracked comprehensive high school. It provided broad access to high school for the entire population while at the same time preserving educational advantage for middle‐class students in the upper academic tracks, which started channeling graduates into college. This reconstructed school system really could have it both ways. But how did the education market bring about this remarkable institutional response to a pressing set of social problems?

In a functioning liberal democracy, consumer demand quickly translates into political demand. Working‐class families did not have the social position or wealth of their middle‐class counterparts, but they did have the numbers. It was very difficult for a democratic government—then or now—to resist strong demand from a majority of voters for broad access to an attractive publicly provided commodity such as schooling, at least for any length of time. At the same time, middle‐class citizens—then and now—retained substantial influence in spite of their smaller numbers, so government also had difficulty ignoring their demands to preserve a special place for them within the public school system. If democracy is the art of compromise, the comprehensive high school is the ultimate example of such a compromise frozen in institutional form.

One additional factor makes the education market so effective at shaping the school system. Markets are dynamic; they operate interactively. Individual educational consumers are playing in a game where everyone knows the rules and all actors are able to adjust their behavior in reaction to the behavior of other actors. By the start of the twentieth century, a new rule was emerging in American society: the educational level of prospective employees sets their qualifications for a particular occupational level. To get more pay, get more schooling.

The problem was that some people already had an educational edge, and they had the means to maintain that edge. Their children were in high school and yours were not. So you demanded and gained access to high school, only to find that the ground had shifted. First, it was no longer the same high school but a new one with its own internal hierarchy that placed your children at the bottom. Second, high school was no longer the top of the educational line; college was. The middle‐class students in the upper tracks were now heading to college, leaving your children in the same relative position they occupied before—one step behind in the race for educational advantage. The only real difference was that now everyone had more education than before. In the nineteenth century, the credential of advantage was the high school diploma. In the early twentieth century, it was the college degree. By the late twentieth century, it was the graduate degree. The race continues.

Over the years, therefore, educational consumers have been more effective than school reformers in shaping the American school system. Consumers were the ones who developed the institutional core of the system: its delicate balance between access and advantage, and the corresponding organizational structure of the system, combining equality and hierarchy. Consumers have also been more effective than reformers in exerting influence on American society through the medium of schooling. These social effects were not the intention that was guiding consumer behavior in the education market; instead, consumers were trying to use education for their own personal ends, and the societal consequences of their actions were a side effect. For individuals, the school system often has served their purposes: some have found that gaining more education enabled them to get ahead, and others have found that it helped them hold onto their competitive edge. But collectively the social impact of market pressure on schools has cost consumers dearly.

As I and others have argued elsewhere, the system of schooling that consumers created has not been able to increase social equality, nor has it been able to increase upward mobility.19 The population as a whole has seen its standard of living and quality of life rise as the economy has grown, but schooling has had no effect on the relative position of social groups in the social hierarchy. The rise in the education level of Americans in the last 200 years has been extraordinarily rapid, but this change has not succeeded in shuffling the social deck. People who had an educational edge on the competition were, by and large, able to maintain this edge by increasing their schooling at the same rate as those below them in the status order. The overall effect of this process over time was to increase the average education level of everyone in the labor queue, which artificially inflated educational requirements for jobs. As a result, people were spending more time and money on schooling just in order to keep from falling behind. They were forced to run in order to stay in place.

The education market, therefore, had the cumulative effect of undercutting the economistic version of the public goal for public education that most twentieth‐century school reformers aspired to attain—by sharply reducing schooling’s social efficiency. At the same time, it turned the pursuit of social opportunity through schooling into an educational treadmill.

The core of the problem is Americans’ insistence on having things both ways through the magical medium of education. We want schools to express our highest ideals as a society and our greatest aspirations as individuals, but only as long as they remain ineffective in actually enabling us to achieve these goals, since we really do not want to acknowledge that these two aims are at odds with each other. We ask schools to promote equality while preserving privilege, so we perpetuate a system that is too busy balancing opposites to promote student learning. We focus on making the system inclusive at one level and exclusive at the next, in order to make sure that it meets demands for both access and advantage.

As a result, the system continues to lure us to pursue the dream of fixing society by reforming schools while continually frustrating our ability to meet these goals. It locks us in a spiral of educational expansion and credential inflation that has come to deplete our resources and exhaust our vitality. And we cannot find a simple cure for this syndrome because we will not accept any remedy that would mean giving up one of our aims for education in favor of another. We want it both ways.


1 The argument in this essay is drawn from David Labaree, Someone Has to Fail: The Zero‐Sum Game of Public Schooling (Cambridge, Massachusetts: Harvard University Press, 2010). Adapted by permission.

2 Lawrence A. Cremin, American Education: The Colonial Experience, 1607–1783 (New York: Harper and Row, 1970), 181.

3 Kenneth A. Lockridge, Literacy in Colonial New England: An Enquiry in the Social Context of Literacy in the Early Modern West (New York: Norton, 1974).

4 David Tyack and Larry Cuban, Tinkering Toward Utopia: Reflections on a Century of School Reform (Cambridge, Massachusetts: Harvard University Press, 1995).

5 Quoted in Lawrence A. Cremin, ed., The Republic and the School: Horace Mann on the Education of Free Men (New York: Teachers College Press, 1957), 87.

6 Ibid., 92.

7 The terms administrative and pedagogical progressives come from David Tyack, The One Best System (Cambridge, Massachusetts: Harvard University Press, 1974).

8 Commission on the Reorganization of Secondary Education, Cardinal Principles of Secondary Education, Bulletin no. 35, U.S. Department of Interior, Bureau of Education (Washington, D.C.: U.S. Government Printing Office, 1918), 1.

9 Ibid., 3.

10 Brown v. Board of Education of Topeka, Kansas, 347 U.S. 483 (1954).

11 Large numbers of middle‐class white families interpreted the principle the same way. When the courts later moved to enforce desegregation through mandatory busing, these families moved out of the jurisdiction in order to avoid having black opportunity come at the expense of white advantage.

12 National Commission on Excellence in Education, A Nation at Risk: The Imperative for Educational Reform (Washington, D.C.: U.S. Department of Education, 1983), 7.

13 Ibid., 8.

14 Milton Friedman, Capitalism and Freedom (Chicago: University of Chicago Press, 1962).

15 John E. Chubb and Terry M. Moe, Politics, Markets, and America’s Schools (Washington, D.C.: Brookings Institution Press, 1990), 35.

16 No Child Left Behind Act of 2001 (HR 1, 107th Cong., P.L. 107‐110, 115 Stat. 1425, January 8, 2002).

17 Julian Betts and Tom Loveless, Getting Choice Right: Ensuring Equity and Efficiency in Educational Policy (Washington, D.C.: Brookings Institution Press, 2005), 1–2.

18 For a rich account of the history of the rising relevance of high school on job prospects during this period, see Claudia Goldin and Lawrence F. Katz, The Race Between Education and Technology (Cambridge, Massachusetts: Belknap Press of Harvard University Press, 2008).

19 See, for example, Labaree, Someone Has to Fail; Raymond Boudon, Education, Opportunity, and Social Inequality: Changing Prospects in Western Society (New York: Wiley, 1974); and Lester Thurow, “Education and Economic Equality,” Public Interest 28 (Summer 1972): 66–81.

Posted in Credentialing, Higher Education, Inequality, Meritocracy

Harold Wechsler — An Academic Gresham’s Law

This post is a favorite piece by an old friend and terrific scholar, Harold Wechsler, who sadly died several years ago.  Here’s a link to the original, which appeared in Teachers College Record in 1981.

In this paper, Wechsler explores a longstanding issue in American higher education.  How do students and colleges respond when the initial core group of college students — wealthy white males — faces newcomers who don’t look like them?  First it was poor students, then women, then Jews, and finally nonwhites. 

Each time the colleges feared that the newcomers would drive away the core constituency.  But in fact, as Wechsler shows, this typically didn’t happen.  Instead, the core group simply segregated itself from the newcomers in order to avoid social “contamination.”  Colleges often helped facilitate this process of segregation.  

It’s a lovely argument.  The only thing I would add is this.  Throughout the history of US higher ed, colleges have been trying to balance two concerns.  They need to keep their core constituents happy, so they will remain loyal to the institution, send their kids there, and donate lots of money.  In addition, educating children of the leading families was the best way to graduate future leaders, which provided a halo of prestige for the institution.  A key thing that makes students want to enroll in a particular college is the status it will provide them.  So keeping this core group is very much in the interest of the college.

At the same time, however, a college needs to maintain its credibility as an institution of high academic quality.  It needs smart kids as well as rich kids in its student body.  Having some students who are really capable and want to study keeps the faculty happy — a relief from the run-of-the-mill party animals that have always populated American colleges.  And the college needs a modest admixture of smart students in order to keep its reputation as an elevated academic institution rather than simply an exclusive social club. 

In fact, it’s also to the advantage of rich male WASP students to attend a college that has smart poor, female, Jewish, and Black students.  Part of the payoff for graduating from college is that it certifies you as a person of academic merit.  This allows you to assume upper-level social roles on the basis of achievement rather than birth.  It gives you credibility.  College works best for you by admitting you well-born and then graduating you well-educated.  It performs a kind of alchemy, transforming social privilege into individual merit.  

As a result, both colleges and their core constituents benefit by having a student body that is a judicious mix of dumb rich kids and smart poor kids.  And over time, you need to gradually increase the rate of merit admissions.  The tricky part is getting the balance just right.  Go too slow, and you lose academic credibility.  Go too fast, and you lose core donors.  The cautionary tale of the latter is Yale in the 1970s under President Kingman Brewster, who jacked up the merit factor in admissions and suffered a donor strike by wealthy alumni.  

The task of balancing privilege and merit falls on the admissions department, the marketing arm of the college.  In my view, the best account of how this works is Jerry Karabel’s book, The Chosen.

Hope you enjoy Wechsler’s essay.

An Academic Gresham’s Law: Group Repulsion as a Theme in American Higher Education

by Harold S. Wechsler – 1981

Throughout the educational history of American society, the entrance of a new group was extremely threatening to the established traditionalists already in power at educational institutions. An analysis is done of the advent of various groups onto the American educational scene. Implications are made for modern comparisons. 

The arrival of a new constituency on a college campus has rarely been an occasion for unmitigated joy. Perhaps such students brought with them much-needed tuition dollars. In that case, their presence was accepted and tolerated. Yet higher-education officials, and often students from traditional constituencies, usually perceived the arrival of new groups not as a time for rejoicing, but as a problem: a threat to an institution’s stated and unstated missions (official fear) or to its social life (student fear). Most recently, America has witnessed dramas played out between black students and white students and officials, as the former attempted to obtain access to higher education, first in the South and then in the North. Brown v. Board of Education and its subsequent application to higher education have resulted in only a gradual effort at integration in the South, and then only after almost a decade of outright resistance. In the North, the existence of selective colleges and universities in or near urban ghettos produced persistent demands for the “opening up” of such institutions to a local constituency. In both cases, acquiescence to black demands was feared as inimical to the interests of the college’s traditional constituencies and to its missions. The possibility that a new group might “repel” a more traditional constituency has for more than two centuries proved a persistent theme in American higher education and has not been aimed at any one new constituency in particular. Institutional officials (administrators and occasionally trustees; faculty usually played a peripheral role in these issues)1 often feared the physical exodus of traditional students resulting in a perhaps undesirable change in the institution’s status and mission. However, traditional students only infrequently pulled up stakes; more often they simply adopted a policy of segregating themselves from the insurgent group. Depending on whether the traditional group was a positive or a negative reference group, insurgent students would counter-segregate by forming structures either emulating or rejecting majority group arrangements.

In this article we will discuss four instances of this inverse Gresham’s law of academic relations—real or imagined—and analyze official and student responses. In each case the entrance of a new group brought about less-than-apocalyptic changes. In the case of relatively wealthy students in nineteenth-century New England colleges, the arrival of poorer students led to a decline in activities conducted by the student body as a whole and to a rise of stratified eating and living arrangements. Ultimately, the wealthier students watched as the number of poorer brethren declined. Late in the nineteenth century the arrival of women on previously all-male campuses led to other forms of social segregation, which apprehensive administrators thought of abetting by segregating academic exercises by sex. Some years later, the arrival of a considerable number of Jewish students on east coast campuses caused concern lest gentile students seek out less “cosmopolitan” surroundings. Most recently, the arrival of significant numbers of black students at previously all-white (or almost so) institutions occasioned fears of “white flight” similar to what was perceived as happening in integrated elementary and secondary schools. In all of these cases, students adopted modest recourses—various informally segregated arrangements for living, eating, and socializing supplemented or took the place of officially sanctioned arrangements. Usually, college authorities acquiesced in or even abetted these arrangements, believing them preferable to a student exodus.


Perhaps American college officials acquired their fear of student exodus from its perceived frequency in the medieval universities. Migrations sometimes led to the founding of rival universities. Even temporary student absences brought about local economic hardship. But in the case of the early Italian universities such migration resulted from disputes not between groups of students, but between local authorities and representatives of these student-run institutions. Early universities were quite heterogeneous, attracting students from much of Europe. By the mid-thirteenth century, the major universities had recognized the existence of “nations” that had fraternal, legal, and educational functions. Each nation contained a diversified membership, but offered cohesion and sanctuary for foreign students in a strange locality by guaranteeing the legitimacy of their members’ presence.

In American higher education, which lacked formal groupings such as nations, the questions of incorporation or rejection of an aspirant group onto a campus with a traditional constituency have had to be handled on an ad hoc basis.

During the first two centuries of higher education in America, students from increasingly diverse class backgrounds found such instruction relevant to their interests; for those institutions for which reliable information exists, it appears that such heterogeneity could be incorporated within the formal collegiate structure by relaxation of rules calling for continual interaction of the entire student body. Early versions of the laws of Harvard provided that no student could live or eat away from the college without the permission of the president,2 but seventeenth-century Harvard was not a gentleman’s institution. In preparing young men for the clergy and magistracy it often found that the most pious students were also the poorest.3 Tuition charges were relatively low and meal charges varied according to quantity and quality consumed.4 “A few resident students had board bills of less than a pound a quarter, a fourth of what their richer friends ate.”5 By the early eighteenth century, the Harvard student body’s composition had significantly changed. The increase in enrollment, Samuel Eliot Morison wrote, consisted “of young men [who] came to be made gentlemen, not to study.”6 When the increase forced Harvard to permit some students to live away from the college, a definite bifurcation in the student body ensued. Not only did the pious students domicile and board together, they formed the first student societies—early manifestations of an extracurriculum that wary college officials found themselves forced to tolerate. Thus, the first manifestation of group repulsion consisted of a self-imposed segregation of pious students in response to “the onslaughts and influence of their more licentious classmates’ thievery and tormenting.”7

At other colonial colleges, authorities permitted internal segregation from the outset. William and Mary provided in its 1729 statutes for tuition-paying and scholarship students. For the former, “we leave their parents and guardians at liberty whether they shall lodge and eat within the college or elsewhere in the town, or any country village near the town.” Such students simply observed the public hours of study. Poor students aspiring to the ministry would receive scholarship aid according to “their poverty, their ingeniousness, learning, piety, and good behavior, as to their morals.” In this case, a college provided for a bifurcated student body in its statutes.8 Yale and Kings College made provision for domicile outside the college grounds; the latter institution in fact had no dormitory during its initial years, and after its completion the enrollment rapidly exceeded the building’s capacity.9 Yale formally “ranked” its matriculants and followed these rankings when it was time to “declame.” No formal ranking system existed at Kings College, but President Samuel Johnson did enter each matriculant on the college’s rolls in roughly the order of his social status. The children of New York’s elite families readily identified each other and sought each other’s company. A “few select ones” gathered regularly for conversation in John Jay’s room, and those of high social standing met at a weekly “Social Club.”10 At pre-Revolutionary Princeton as at Harvard and Yale, the poorer students largely aspired to the ministry; their more wealthy classmates, however, were in one way or another also touched by evangelical religion. This, and the lack of housing alternatives to Nassau Hall, may have mitigated some social cleavages existing between rich and poor students.11

Thus, in the colonial college, we have evidence of mutual repulsion. The pious students who were initially attracted to each college were joined within a generation or two by students from wealthier backgrounds who attended college more as a means of elite socialization than as a means of curriculum mastery. In most cases, college officials desired attendance of both groups, although they emphatically did not desire the increase in disciplinary problems almost always attendant on the arrival of the children of the more wealthy.

During the years between the Revolution and the Civil War, the trend toward segregation by social class appears to have continued, and as the proportion of students from more modest backgrounds again increased, the fissures became more formal. Perhaps the distinctive feature in this period consisted of official acceptance of many segregated arrangements. In fact, according to one recent study, postcolonial New England colleges systematically courted poor male students. “Provincial colleges devised calendars congenial to seasons to work in nearby fields and schools, and adopted inexpensive living arrangements. Most important, they made tuition cheap, almost a charity.”12 Driven off the land by economic necessity, propelled toward the ministry by the revivals of the Second Great Awakening and attracted by recruiting efforts and special accommodations offered by the provincial colleges, students from modest backgrounds formed significant constituencies at a number of these institutions.13

Their absorption by the colleges required further abandonment of the ideal of community often enshrined in the college statutes. Keeping the bill of fare within range of the poorest students meant that students with fuller purses might find the menu unpalatable. Maintenance of spartan-like dormitories often led to demands by wealthier students for the privilege of domicile in more comfortable quarters. Actually, both wealthy and poorer students had motives for living and boarding off campus. The former could often locate more comfortable accommodations and food of better quality. They might move into a boarding house with students of like background, thus cementing the social contacts for which many apparently came.14 Sometimes such arrangements formed the bases for fraternities, which received their initial impetus in the mid-nineteenth century. College officials tried at first to suppress such illiberal social organizations. “They create class and factions, and put men socially in regard to each other into an artificial and false position,”15 said Mark Hopkins, Williams’s president. But the fraternities’ rapid proliferation and participation by a sizable number of tuition-paying students argued against actions more drastic than an increasingly perfunctory chapel exhortation against such undemocratic institutions.16

The poorer students likewise found other options more attractive than commons. Many boarded in their rooms, while others founded student-run boarding clubs that often provided better and cheaper fare. Not only rich students lived in town; poorer students often found the lodgings offered by a charitable family or in a poor section more satisfactory than rooms in the college halls.

The intensity of this mutual segregation may be discerned from an account in a contemporary novel, in which two financially well-off Harvard students visit a poor classmate who resides in Divinity Hall, the traditional campus abode for poor but earnest students. When a student replied “Oh, down in Divinity,” to the question “Where do you room?” the rejoinder was inevitably, “Down in Divinity? What in the name of all that is wonderful, makes you go down there among all those scrubs?” A pious atmosphere and economy provided two answers. A resident found the theological library, located in the Hall, “a delightful place to go into and mouse around when you are tired of study, and have nothing in particular to do.”17 Evening services proved genuinely inspirational. “The music of the choir and organ rolls up through the silent halls, and sounds very beautiful,” the resident commented. As for meals, the students “mostly keep themselves entirely. . . .We leave our basket and pail just outside our door over night, and in the morning take in our milk and our fresh loaf; and some of the men down here live on bread and milk for the most part, or make it answer for breakfast and tea.”18 By contrast, one visitor commented that he spent eight dollars a week for boarding out while the Divinity resident replied that he could eat satisfying meals for about a fifth of that sum.19 Perhaps the greatest fissure between the two groups lay in their respective attitudes toward their studies. The Divinity resident professed a love for mathematics above all else. “I think it a beautiful science: it is all explained and proved so fully and exactly as you go on, and the way is made so smooth and serene, one need never make any mistakes.” One visitor, who by his very willingness to visit a student in Divinity demonstrated that his attitudes were far from the most extreme, replied that he enjoyed Greek the best. 
“But I am afraid I should not do much work of any sort unless I were obliged to,” he continued. “We come here for the most part because we do, and without even asking the reason why. . . . I think study is the last thing we come for. Of course, the work is all an imposition, and the instructors are our natural enemies. That is the way most of the fellows feel, I know.”20 The gulf between a student maintaining such an attitude and another who loved his study (“I take the purest and deepest pleasure in it, and I thought everyone else did too; and I still think you must be wrong”)21 was unfathomable, and it is highly unlikely that dialogues such as this constituted normal fare.

In the case of the relations between rich and poor students, officials demonstrated increased tolerance toward student-imposed practices of segregation. Segregation permitted a high level of enrollment, the veneer of adherence to the official goals of inculcating discipline and piety, and the acceptance of tuition from those students more prone to pranks than piety and more often in attendance for social than academic reasons. Perhaps the only time the poor but pious students attained any prestige at the institutions ostensibly founded for them came during the religious revivals, which occurred with less frequency as the century progressed. In this example, college authorities accepted the social arrangements devised by the students. When other distinctive groups arrived, their reaction would be less sanguine.


Given the popularity of coeducational living arrangements on the modern campus, and all that such arrangements imply, one reads almost with astonishment a University of Wisconsin alumnus’s 1877 statement that “the feeling of hostility [of the men students] was exceedingly intense and bitter. As I now recollect, the entire body of students were without exception opposed to the admission of the young ladies, and the anathemas heaped upon the regents were loud and deep.”22 Perhaps male resistance to coeducation can be traced to a fear that women students might outperform them in the classroom, or to a more generalized desire to retain a specific image of the American woman. The stated objections included women’s purported mental incapacity and frail health, and the possibility of increased disciplinary problems. Although time dispelled these fears at the University of Wisconsin, similar concerns at Columbia led to rejection of a coeducation plan. John Burgess, dean of the political science faculty, successfully argued that women students were subject to monthly incapacities, that they would prove too distracting, and that an influx of women would repel Columbia’s traditional male constituency, thus reducing the institution to a female seminary.23 Burgess related that this argument won the day and Columbia College was thus spared coeducation.24

Concern persisted that women students might arrive on campus in such proportions as to pose a threat to the male students and, ultimately, to drive them out. Anxiety increased as the proportion of women among the national undergraduate population rose from 21.0 percent in 1870 to 47.3 percent in 1920.25 Thus, early in the twentieth century authorities at a number of colleges began to reevaluate their commitment to coeducation and to suggest that some restrictive measures might be in order. A few institutions contemplated a limitation on enrollment of women students; however, the fear of tuition loss and of competitive advantage accruing to nearby colleges led most institutions to opt for less drastic measures. Several major institutions proposed, though few actually adopted, a system of academic segregation whereby course registration might be restricted to members of one sex. President Charles Van Hise of Wisconsin justified such measures as a necessary counteraction to a tendency toward “natural segregation.” “With the increase in the number of women in the colleges of liberal arts of coeducational institutions, certain courses have become popular with the women, so that they greatly outnumber the men,” he observed. “As soon as this situation obtains there is a tendency for the men not to elect these courses, even if otherwise they are attractive to them.”26 Similarly, he cited instances where the presence of large numbers of male students proved a disincentive for women’s registration. Listing language and literature as areas of male reluctance and political economy as unattractive to women in coeducational settings, Van Hise argued that equality of result might best be obtained by segregation.

University of Chicago authorities actually established, albeit briefly, separate junior (freshman and sophomore year) colleges for men and women. As women’s enrollment increased, so did the debate over the merits of coinstruction. Under President Harper’s proposals, social association and equal academic opportunity would continue, as would the administration of the junior colleges by a single dean. However, they did provide that, when economically feasible, admission to elective and required junior college courses offered in multiple sections would be restricted to members of one sex.27 Harper expected that the financial viability proviso would result in continued coinstruction in about one-third of all courses28 and that other university divisions would retain joint instruction.

Neosegregationists such as Harper or Julius Sachs also defended such arrangements on the grounds that coeducation diminished intellectual standards, or on the basis of current psychological theory. In instructional situations, wrote Harper, “the terms and tone of association are fixed too little by the essential character of the thing to be done and too much by the fact that both men and women are doing it.”29 In his widely quoted chapter “Adolescent Girls and their Education,” G. Stanley Hall remarked that it was comparatively easy to educate boys since they “are less peculiarly responsive in mental tone to the physical and psychic environment, tend more strongly and early to special interests, and react more vigorously against the obnoxious elements of their surroundings.” In contrast, woman, “in every fiber of her soul and body is a more generic creature than man, nearer to the race, and demands more and more with advancing age an education that is essentially liberal and humanistic.” He concluded that “nature decrees that with advancing civilization the sexes shall not approximate, but differentiate, and we shall probably be obliged to carry sex distinctions, at least of method, into many if not most of the topics of the higher education.”30

But the fear of higher education’s feminization never lurked too deep beneath the surface. “Whenever the elective system permits,” wrote Julius Sachs, “the young men are withdrawing from courses which are the favorite choice of the girls, the literary courses; the male students discard them as feminized, they turn by preference to subjects in which esthetic discrimination plays no part.”31

The coeducationists were fully aware of the psychological and the “diminution of intellectual standards” arguments. “I have never chanced again upon a book that seemed to me so to degrade me in my womanhood as the seventh and seventeenth chapters on women and women’s education, of President Stanley Hall’s Adolescence,” wrote President M. Carey Thomas of Bryn Mawr.32 But the battle would be won or lost on the repulsion argument. Did peculiarly female traits lead women to favor the liberal over the practical to such an extent as to dissuade male students from following a liberal sequence?

Not so, replied University of Chicago Dean of Women Students Marion Talbot. With access to other professions closed off, many college women opted for secondary-school teaching careers. With a next-to-nil chance for a woman to obtain a secondary-school position in chemistry or zoology, Talbot noted, a woman’s choice of history or English in college proved to be a shrewd and practical decision—although one that might go against her personal interests. Talbot argued that despite a general belief to the contrary, “considerations of sex are rarely taken into account by women any more than by men on making a choice of studies.”33 President M. Carey Thomas, citing statistics revealing similar registration patterns for electives by male and female students at single-sex colleges, argued that the disproportionate figures reported in western coeducational institutions resulted from external circumstances, not a priori causes. “I am told,” she wrote, “that economics in many western colleges is simply applied economics and deals almost exclusively with banking, railroad rates, etc., and is therefore, of course, not elected by women who are at present unable to use it practically, whereas in the eastern colleges for women theoretical economics is perhaps their favorite study.”34 Men and women alike, Thomas said, make rational choices among available subjects, and were unlikely to avoid a subject solely because of a preponderance of registration by members of the opposite sex.

The coeducationists experienced considerable success in avoiding re-segregation, but whether statistical, psychological, or economic arguments proved most persuasive is a moot question. In Wisconsin, the university regents addressed the issue in 1908 and reaffirmed their traditional pro-coeducation policy. The following year the state legislature strengthened the laws concerning university admission by adding a specific provision that “all schools and colleges of the university shall, in their respective departments and class exercises, be open without distinction to students of both sexes.”35

But if women continued to obtain access to undergraduate education, they found few postbaccalaureate options; in fact, their ability to enter certain professions actually diminished during the early twentieth century. Mary Roth Walsh in her important book on women in the medical profession reported that institutions such as Tufts and Western Reserve, which had heretofore admitted significant numbers of women to the freshman medical class, ceased to do so. Northwestern decided in 1902 without warning to close the women’s division of its medical school. At Johns Hopkins and the University of Michigan, which remained coeducational, the percentage of females in the student body declined respectively from 33 percent in 1896 to 10 percent in 1916 and from 25 percent in 1890 to 3 percent in 1910.36

Just as “cheap money drives dear money out of circulation,” editorialized the Boston Transcript at the time of Tufts’ coeducation controversy, “the weaker sex drives out the stronger.”37 Although most authorities cite other factors as prompting a retreat from an elective system, the emergence of distribution requirements at most colleges assured that male students would attend courses in “feminized” disciplines. On the other hand, certain disciplines rapidly evolved into male preserves entered only by women students willing to pay major social and psychological costs. Outside the classroom, many colleges established administrative positions (dean of women students, etc.) that attempted to regulate students’ social lives. Officials increasingly tolerated fraternities and sororities on condition that they adopt elaborate sets of rules, many specifically dealing with male-female interactions. Many colleges constructed dormitories and student centers segregated by sex.

Women administrators often supported such policies not simply as a defense—or as making a virtue of necessity—but because they believed that some social segregation would allow women undergraduates to assume leadership roles in activities that, if coeducational, would have inevitably been reserved for men. Thus, although academic practices have been emphasized here, coeducational institutions evolved elaborate social practices as well, permitting in many cases absorption of female students in significant numbers without serious redefinition of institutional missions.


Stephen Duggan’s enthusiasm for his Jewish students at the College of the City of New York had few bounds. He admired their motivation, ambitiousness, sincerity, and intelligence; most of all he esteemed their ability to overcome the numerous hardships of life on the Lower East Side and to succeed at an institution with unfamiliar academic and social norms. “No teacher could have had a finer student body to work with,” he wrote. “They were studious, keen and forthright. They did not hesitate to analyze any subject to its fundamentals regardless of tradition or age. . . . I do not hesitate to say that I learned a great deal as a result of the keen questioning of these young men. It was fatal to evade; one had always to be on the qui vive. I found these students like students everywhere, very grateful for an evident interest in their personal welfare. . . . Some of their views were quite different from those held by students in a college situated in a less cosmopolitan atmosphere. . . . They formed the most socially minded group of young people that I know.”38

However, many college and university officials proved far less sanguine concerning the rapid influx of Jewish students into many of America’s colleges and universities. “Where Jews become numerous they drive off other people and then leave themselves,” wrote Harvard President Abbott Lawrence Lowell in 1922. Denying that moral character or individual qualities created the problem, Lowell attributed its cause to “the fact of segregation by groups which repel their group.”39 He refused to speculate over whether to blame Jewish “clannishness” or Gentile anti-Semitism for the Jewish tendency to “form a distinct body, and cling, or are driven, together, apart from the great mass of undergraduates.”40 Lowell had observed that summer resorts, preparatory schools, and colleges, such as City College, New York University, and Columbia College, had all experienced the same phenomenon.

Lowell’s solution, in the words of his biographer, “was a quota, usually called by Jewish writers a numerus clausus.” Quotas, Lowell reasoned, had been employed in other social sectors with little or no objection. “Why anyone should regard himself as injured or offended by a limitation of the proportion of Jews in the student body, provided that the limitation were generous, Lowell could not understand.”41

Others attributed the repulsion between Jew and Gentile to individual, rather than group characteristics. Frederick Paul Keppel, dean of Columbia College from 1910 to 1918, distinguished between desirable and undesirable Jewish students. His Barnard counterpart, Virginia Gildersleeve, wrote that “many of our Jewish students have been charming and cultivated human beings. On the other hand. . . the intense ambition of the Jews for education has brought to college girls from a lower social level than that of most of the non-Jewish students. Such girls have compared unfavorably in many instances with the bulk of the undergraduates.”42

During the height of nativist sentiment immediately after World War I, many college authorities concluded that a Jewish influx threatened the character of their institutions. They proposed a variety of remedies, all aimed at limiting Jewish enrollment. Williams College, reported Harvard Philosophy Professor William Ernest Hocking, enlisted the aid of Jewish alumni in screening Jewish candidates. Other institutions employed psychological or character tests. Most devices appear to have resulted in a diminution in the number of Jewish students. At Columbia College, the percentage varied from 40 percent just after World War I to less than 20 percent by the mid-1930s. At Barnard the figure hovered around 20 percent while Radcliffe had 12 or 13 percent, Vassar had 6 percent, Bryn Mawr had 8 or 9 percent, and Wellesley had about 10 or 11 percent.43 Most institutions that restricted Jewish access did not remove the barriers until after a change in national sentiment brought about by the events of World War II.

Other administrators proved more tolerant of the Jewish influx. Some did not believe these students posed a threat to institutional missions. Others, shrewdly, cynically, or both, believed that the students could settle any difficulties among themselves and that few direct measures need be taken. When an irate constituent charged the University of Chicago with anti-Semitism because of Jewish exclusion from campus fraternities, President Harry Pratt Judson replied that no official discrimination existed and that such exclusionary practices by students were a social, not a religious problem, best left for the students to settle among themselves.

And “settle” they did, with a vengeance. “The University of Chicago,” wrote the noted journalist Vincent Sheean, “one of the largest and richest institutions of learning in the world, was partly inhabited by a couple of thousand young nincompoops whose ambition was to get into the right fraternity or club, go to the right parties, and get elected to something or other.”44 Although the administration had segregated many extracurricular activities by sex, Chicago’s undergraduate women demonstrated their ability to construct a social system no less rigid or more intellectually oriented than that of their male counterparts. Again Sheean, “The women undergraduates had a number of clubs to which all the ‘nice’ girls were supposed to belong. Four or five of these clubs were ‘good’ and the rest ‘bad.’ Their goodness or badness were absolute, past, present and future, and could not be called into question.” Although no sorority houses existed, the women “maintained a rigid solidarity and succeeded in imposing upon the undergraduate society a tone of intricate, overweening snobbery.”45

Into such a social system, Jews had no access. Sheean related his own encounter with the Chicago students. Just after World War I, he inadvertently pledged a “Jewish fraternity,” although not Jewish himself. Lucy, a student with whom Sheean had conducted a flirtatious relationship, warned him to break his pledge. As she explained, “The Jews . . . could not possibly go to the ‘nice’ parties in the college. They could not be elected to any class office, or to office in any club, or to any fraternity except the two that they themselves had organized; they could not dance with whom they pleased or go out with the girls they wanted to go out with; they could not even walk across the quadrangles with a ‘nice’ girl if she could possibly escape.”46 Thus, contrary to many administrators’ fears, most Gentile students had no intention of abandoning established colleges and universities in the face of a Jewish influx. Perhaps it would prove necessary to tolerate their presence in class, but the student culture successfully limited all other interaction.

Jews responded predictably to such restrictions. Jacob Schiff, the financier-turned-philanthropist, anonymously endowed the Barnard Hall student center as a countermove to the self-selecting student culture. Centrally located, its facilities would be open to all. At the same time Schiff and others repeatedly urged that administrators take steps to abolish fraternities and sororities that discriminated against Jews. To such requests most administrators responded that changes in interpersonal relationships could come about only through education; that administrative coercion would probably result in greater anti-Semitism. Even after World War II, many colleges and universities only haltingly pressured local fraternities to abolish discriminatory charter provisions or to disaffiliate from national orders mandating discriminatory policies.

Jewish students responded to social exclusion either by increased emphasis on their academic work (thereby earning the reputation of “grind”) or by establishment of predominantly Jewish academic and social organizations. At Harvard College in 1906, a group of Jewish undergraduates organized the first Menorah Society, which had as its purpose “the promotion in American colleges and universities of the study of Jewish history, culture and problems, and the advancement of Jewish ideals.”47 Far more resembling typical nineteenth-century collegiate literary societies than fraternities, Menorah societies flourished at a number of colleges before and after World War I.48

At a typical meeting a Jewish academic from the campus or from a Menorah speakers’ bureau might lecture or lead a discussion, or the society’s membership might choose to discuss a book or topic of Jewish interest. The scope included the history and culture of the Jewish people “so conceived that nothing Jewish, of whatever age or clime, shall be alien to it.” Its broader purpose was to secure campus recognition of the seriousness or worthiness of its subject matter. “It must demonstrate to the whole student body that the study of Jewish history and culture is a serious and liberal pursuit; it must really afford its members a larger knowledge of the content and meaning of the Jewish tradition.”49 Deliberately eschewing a primarily social purpose (“. . . a Menorah Society is not a social organization. Its activities may, indeed, partake of a sociable nature, but only so far as its real objects can thereby be the more fully carried out”), Menorah quickly found itself caught between Jewish student organizations with objectives less lofty than Menorah’s academic goals, such as the Student Zionist Organization, and a quickening demand for Jewish fraternities and sororities.

The first Jewish fraternity in America was founded in New York City in 1898. Established under the watchful eye of Columbia University Semitics Professor Richard Gottheil, the Zeta Beta Tau (ZBT) fraternity originally professed ideals more ambitious than friendship and brotherhood. It aimed, wrote an early member, “to inspire the students with a sense of Jewish national pride and patriotism.”50 Although the movement’s early Zionist orientation gradually diminished, it attempted to retain the intellectual and service ideals on which it was founded. Richard Gottheil repeatedly expressed concern that the fraternity’s uniqueness might be lost. “For the Jew carries with him wherever he goes,” Gottheil said, “the great heritage of thought and of impulse which has been handed down from father to son during the last twenty-five centuries.” The organization assumed the form of a Greek letter fraternity “in order to fall in with the University habits of the community in which we live.” But, he concluded, “we can have no use for those men who are Zeta Beta Tau men simply for the sake of belonging to a Greek Letter Fraternity. We wish to set an example, not to proclaim ourselves a holy people, but to live as such.”51

Some Jews feared that creation of such groups as ZBT and Menorah might serve to enhance the Jewish stereotype. When some undergraduate Radcliffe women approached their fellow student Ruth Mack, daughter of Harvard alumnus and future Overseer Judge Julian Mack, about Menorah membership, she responded cautiously. She wondered whether the group “would tend to segregate the Jewish girls from the non-Jewish,” adding that at Radcliffe “there seems to be so little of this grouping, that I think it a pity to introduce anything which encourages it.” Although recognizing the organization’s intellectual bent, Ruth Mack feared that banding together “would produce snobbery on both sides.” A general student turnout for meetings on Jewish institutions and ideals would be unobjectionable, she explained, but “while we can say that the Menorah is not limited to Jews, we can do little to make the non-Jews come out.”52 The pressures on a woman like Ruth Mack were considerable. On the one hand her strong Jewish identity inclined her to join; on the other she feared alienation from Radcliffe’s Gentiles especially because she found the Jewish upperclassmen “less attractive, intellectually, etc., than the parallel class among the non-Jew.” Although many at the early twentieth-century college paid lip-service to “democracy”53 on campus (by which was meant equal opportunity to succeed in the campus student culture), Jews often found themselves arbitrarily disqualified, thus producing dilemmas such as that of Ruth Mack.

By the 1920s Jewish fraternities had become virtually indistinguishable from their Gentile counterparts and Menorah Societies began to atrophy. “Between 1920 and 1930,” wrote Horace Kallen, an original member of Harvard Menorah,

the tradition of a love of learning which they [Jewish students] brought to college has been dissipated. The adult responsibility which they felt for the problems of their own people and of the community at large, and which was signalyzed [sic] by their membership in such organizations as the Menorah Societies, the Zionist, the Liberal, or the Social Question Clubs, has been destroyed. As their numbers grew, their fields of interest and modes of behavior conformed more and more to the prevailing conditions of undergraduate life. Although excluded by expanding anti-Semitism from participation in that life, they reproduce it, heightened, in an academic ghetto of fraternities, sororities and the like. And they emulate the invidious distinctions they suffer from by projecting them upon the Jews too proud, too poor, or too Jewish to be eligible for “collegiate” secret societies of Jews.54

Thus, although on many private college campuses officials limited access by Jewish students, those Jewish undergraduates who obtained admission gradually arrived at an acceptable modus vivendi with their fellow students and with the authorities.


During the brief tenure of Harvard President Edward Everett (1846-1848), it became known that a black student would present himself for the college’s admissions examination. Although the student had tutored one of Everett’s sons, and was the best scholar in his class, rumors spread that Harvard would not permit his matriculation, no matter how well he performed on the exams. The student never entered Harvard (due to “illness” according to contemporary accounts), but Everett took the occasion to announce Harvard’s policy. “The admission to Harvard College depends upon examinations,” he said, “and if this boy passes the examination, he will be admitted; and if the white students choose to withdraw, all the income of the College will be devoted to his education.”55 The student threat to withdraw from the college to which Everett alluded left him undaunted; however, one of his successors at Harvard, Abbott Lawrence Lowell, took a similar threat quite seriously. In 1914 Lowell closed the freshman dormitories, which supposedly had been built to reduce student social segregation, to black students, claiming, when the practice became public knowledge several years later, that he did not wish to offend the sensibilities of white students. “To maintain that compulsory residence in the Freshman Dormitories—which has proved a great benefit in breaking up the social cliques, that did much injury to the College—should not be established for 99 1/2 percent of the students because the remaining one half of one percent could not properly be included seems to me an untenable position,”56 wrote Lowell to Roscoe Conkling Bruce, a black alumnus of Harvard seeking dormitory accommodations for his son. 
After a public controversy arose when Lowell denied dormitory access to the younger Bruce, the Harvard Corporation published an ambiguous rule continuing compulsory dormitory residence (exemptions permitted) and providing that “men of the white and colored races shall not be compelled to live and eat together, nor shall any man be excluded by reason of his color.”57 Whether integrated dormitories would have led to a mass exodus of white students is debatable. In practice, Harvard had few black applicants. But it did wish to attract students from the South, and did not wish to acquire a reputation for forced “race mingling” or for “social equality.” As a result, Harvard’s dormitories remained segregated de facto until the early 1950s.58

By no means was Harvard alone in confronting the housing problem. Various administrations offered different solutions ranging from outright prohibition to integration. The University of Chicago permitted the majority of residents in each dormitory to decide who should join them.59 Apparently, at Smith, when two Southern students protested the admission of a black student to their dormitory, President William A. Neilson expressed his willingness for the protesting students to move to another dormitory, although it might prove difficult to find accommodations for them. The students replied that they had wanted the black student removed, but Neilson remained adamant. The students thereupon decided they would remain in the same dormitory.60

Thus in a manner reminiscent of officially sanctioned schisms among rich and poor students, a number of colleges created and/or tolerated Jim Crow dormitories, in the process sometimes undercutting claims to formal neutrality in social areas. Colleges could not at the same time argue that education provided the most effective means for overcoming intolerance while in practice facilitating social segregation. Usually hovering between 1/2 and 2 percent, the proportion of black students on northern campuses rarely if ever reached the point where officials feared a massive white exodus. Most probably they wanted to avoid a reputation for liberalism in an area surrounded with many social taboos.

As usual, the dominant student group managed social relations so as not to be inconvenienced by the presence of a distinctive minority. A black freshman enrolling at a predominantly white institution during these years arrived already knowing that he or she would lack much social life. “First of all,” wrote one, “being a Negro, I was exempt from all the sororities on the campus. I knew that I would never dress for a sorority ‘rush’ party, or become a pledge. I knew, also, that I would never dance at the Sigma Chi or the Delta Tau houses.”61 A study published in 1942 indicated that, without underestimating the difficulties of economic or academic adjustments, almost all black students were most dissatisfied with their social lives.62 Some suppressed their aspiration for a full social life and concentrated on their work. “My reaction was to show these people that I was a good student. . . . I cannot help feeling . . . that if I am down scholastically, and a Negro also, I might as well leave this place.” But even the students with the strongest defenses could not completely escape the results of social ostracism. “There was the time when I was one among three hundred girls at a social dance, and the instructor and one other girl ventured to drag me over the floor, when all of the other girls had run frantically clutching at each other to dance with everyone else but me, simply because I was a Negro, a brown conspicuous person. That was the time I went home and fell across the bed and cried, cried until I was exhausted. That was the time I hated a white college.”63

Conditions changed only gradually after World War II. In 1955 the Supreme Court affirmed a lower court ruling that Brown v. Board of Education applied to segregated institutions of higher education;64 however, it took another decade for the first black students to gain admittance to several major southern institutions. Only with the successful prosecution of Adams v. Richardson in the 1970s have a number of southern states been forced to draw up comprehensive programs for the integration of their higher education systems. In the North the large in-migrations of blacks during the 1950s and 1960s produced major changes in the racial composition of elementary and secondary schools, but did not yield in due course similar changes in colleges and universities.

At first, black students experienced considerable overt hostility on newly integrated campuses. A constant barrage of insults and threats against black students at Louisiana State University (L.S.U.) was supplemented by a series of “pranks” including cross burnings and by several violent occurrences.65 Although crude manifestations of prejudice decreased over time,66 black students continued to report incidents, slights, and alienation. At the University of Illinois, an impersonal environment in which white students displayed few initiatives toward blacks (no blacks belonged to any white fraternity) led to disaffection and isolation.67 As racial tension generally increased in the United States of the late 1960s, black students became less willing to overlook or accept such conditions.

During the 1950s and 1960s, many colleges had undertaken a series of reforms in an attempt to remove any vestige of discrimination against minority students. Some integrated their dormitories; others required fraternity and sorority chapters to drop restrictions against minority group access or to withdraw from national organizations that mandated retention of such restrictions. Sometimes administrators undertook such reforms vigorously; all too often changes resulted only from outside pressure. There is thus a special irony in the rise of separatist demands by black and other racial minority students, which came just when authorities had concluded that integration could not be left to “education,” and that significant minority representation would not necessarily result in a majority exodus. White students, separatist minority students argued, would never fully accept nonwhites as social equals. Instead, they called for a series of exclusively nonwhite extracurricular activities and residential accommodations to supplement their demand for a separate academic program. “The black women [are thinking about] pushing for a Black Women’s Living Center,” said a junior black woman on a predominantly white campus in the early 1970s.

We want to get these pockets of black students out of these all-white dormitories and get them into a house of their own. The sororities and fraternities do it; why can’t black people live together? Let’s face it. Black people are just more comfortable with black people. I don’t particularly like being questioned about my hair or style of life by white people. There are certain foods I like to eat which this school ignores or can’t cook. Secondly, it would be a unifying device to get everyone together in a living situation. To me it’s only natural. Before coming [to this school], I came from an all-black community and it’s natural for me to live in one. . . . Of course, those who raise arguments against it don’t say or may not realize that. . . unification is a threat.68

Black students quickly came to realize the necessity of significant enrollment increases as prerequisite to all such demands. Otherwise, separation would inevitably lead to increased social isolation and would restrict their ability to create an institutionalized social life. A campus with fewer than fifty black students, commented a black undergraduate, “has a vacuum of social activities for blacks.”69 Since most courtship on American campuses is intraracial, a small number of minority students implies an almost nonexistent pool of available dates, even when there are roughly equal numbers of men and women minority students. In addition, small numbers usually mean that attempts at formal social organization will rarely outlast the founders; recruitment often proves difficult even with sizable availability pools.

Unlike other groups, which confronted administrators with an “excess” number admitted by normal entrance processes, black students demanded modifications of admissions policies so as to insure inclusion of an adequate contingent. Many colleges made such commitments. Events at the City University of New York proved most spectacular. After a lengthy sit-in by black and Puerto Rican students at the university’s City College, the university adopted an open admissions system in which students would be admitted either by high school grade point average (traditional method) or by high school rank in class (new method).70

In more “selective” institutions, that is, in colleges where subjective considerations entered into a competitive admissions process, admissions officers agreed either to take race explicitly into account or at least to make special efforts to recruit minority students who met traditional criteria. Although the absolute number of black students has increased significantly in the last decade, there have been recent signs of “slippage,” and on many of the more selective predominantly white campuses the number of minority students remains at 3 to 10 percent, lower than the number considered desirable by minority group members.


In many ways, the black separatists of the 1960s wanted precisely what other groups that had been victims of “repulsion” had traditionally attained: the ability to establish a set of social relationships paralleling those of the socially dominant group. However, they put forward their claims at a time when college administrators had finally overcome their fear that group repulsion would lead to an unacceptable change in institutional mission. Whether by compulsion or by volition, authorities in the 1950s and 1960s began to argue that in regulating their internal affairs, they could keep up with changes in “generally socially acceptable boundaries,” and, on a number of occasions, go beyond them. Practice often fell short of ideals, and black students who directly or vicariously experienced discrimination proved less hesitant than previous groups to protest. Particularly striking on campuses with sufficient numbers of blacks was a tendency similar to that of other groups herein discussed to emulate certain aspects of the majority extracurriculum. Thus black fraternities appeared on a number of campuses that, although espousing social consciousness, retained the paraphernalia of fraternities including distinctive insignias and symbols, and various rites and “customs.”

In general we may say that the initial representatives of a new campus group needed the strength to prove themselves academically while surviving socially. Although one might speculate that such pioneers were highly self-selected, we know little about dropout rates for such students. L.S.U. did show rather high attrition among its first classes of black students, but these students were subjected to crass physical abuse as well as the lesser forms of insults often experienced by other groups.71 We might guess that pioneer students needed rather extraordinary motivation to come to quick terms with an institution whose traditional occupants exhibited attitudes varying from indifference to hostility.

This last points to the potential significance of family in explaining motivation. Lacking numerical peer support, insurgents may have relied quite heavily on their families for needed backing. Marion Talbot’s parents strongly encouraged her educational aspirations—so much so that her mother’s considerable educational reform activities (she was pivotal in gaining establishment of Girls’ Latin School in Boston) partly derived from obstacles faced by Marion.72 Similarly majoritarian attitudes may well have been initially acquired off campus, and then subjected to strong peer reinforcement. Sheean at Chicago was surprised to learn that his roommate, who had also intended to pledge the “Jewish” fraternity, had learned, “from his father probably,” about anti-Semitism and about “the ridicule, the complicated varieties of discrimination and prejudice, to which any Gentile who belonged to a Jewish fraternity would have to submit throughout four years of college.”73 Although many studies of campus peer groups emphasize discontinuities with the student’s previous home life, it may very well be that in some areas peer groups serve to reinforce previously acquired attitudes.

There yet remains to be explored a fundamental set of questions. First, why would members of an insurgent group invade what must have almost appeared to be enemy territory? And, second, why did traditional constituencies not abandon their campuses for “safer” environs, as so many administrators feared they would? Of course, one answer to the latter question is that administrators often retained or obtained the ability to control access by “distinctive” groups. But abandonment rarely occurred even when such measures were not employed.

The fear of group repulsion bears a remarkable resemblance to the contemporary fear of “white flight” often discussed with respect to elementary and secondary education. Should the percentage of minority students in a given school exceed some subjectively sensed percentage, according to the fear, white parents will begin to move into more homogeneous neighborhoods. In due course the school will become populated almost exclusively by minorities. Current literature contains considerable speculation as to the existence and extent of white flight; a resolution of that debate goes far beyond the scope of this article. But it is very much to the point of this article to say a word about what white elementary and secondary school students are supposedly fleeing from. White parents when interviewed often claim that they withdrew their children not because of the increased presence of minority students per se, but because the quality of education and resources offered appears to deteriorate concomitant with their appearance. In contrast, the quality of most colleges’ and universities’ educational product remained relatively unchanged despite minority group influxes. If anything, most institutions experienced sizable expansions in endowment, faculty, and facilities. And, by the third decade of the twentieth century, the prestige order among American institutions of higher education had become relatively entrenched; most institutions could “survive” even a sizable onslaught by a significant number of minority students. The more prestigious institutions could provide social and economic mobility to minority students without detracting from the status they accorded members of traditional constituencies. A Harvard or a Swarthmore, for example, could remain attractive to a student from a traditional constituency in a way that an urban high school could not. Apparently the reasoning behind magnet high schools recognizes this, at least implicitly. 
Such schools aim to provide sufficient educational quality and services to overcome white hesitancy over sending children to schools with a sizable minority constituency.74 For colleges and universities, majority students usually remained on the rolls despite minority student presence so long as they and/or college officials could devise ways of avoiding undesired social intercourse. Minorities desired to attend such institutions despite expected exhibitions of prejudice not only because of the expected quality of education and the tangible rewards obtainable for acquiring such education, but also for social reasons. The ability to replicate the majority extracurriculum meant that minority students could learn the same social lessons that extracurriculum taught majority students: how to identify desirable and undesirable acquaintances, how to exercise leadership, how to function in various group settings, cooperatively and competitively, and so forth. Even if in the larger society one found discrimination similar to that existing on the campus, minority students could employ such lessons profitably within their own groups, especially since such college-educated youth usually constituted the recognized future leaders of their groups.

In short, few minority students in the periods discussed in this article found their college careers to be completely clear sailing, but most were convinced that whatever abuse they endured would in the long run be well worth the price.

1 See Laurence Veysey, The Emergence of the American University (Chicago: University of Chicago Press, 1965), pp. 294-302.

2 David F. Allmendinger, Jr., Paupers and Scholars: The Transformation of Student Life in New England 1760-1860 (New York: St. Martin’s Press, 1975), p. 82.

3 Kathryn M. Moore, “Freedom and Constraint in Eighteenth Century Harvard,” Journal of Higher Education 47 (November/December 1976): 650-51.

4 Margery Somers Foster, “Out of Smalle Beginnings . . .”: An Economic History of Harvard College in the Puritan Period (1636 to 1712) (Cambridge: Belknap Press of Harvard University Press, 1962), pp. 65-68.

5 Ibid., p. 68.

6 Samuel Eliot Morison, Three Centuries of Harvard 1636-1936 (Cambridge: Harvard University Press, 1936), p. 60.

7 Moore, “Freedom and Constraint in Eighteenth Century Harvard,” p. 653.

8 “Statutes of William and Mary, 1727” in American Higher Education: A Documentary History, ed. Richard Hofstadter and Wilson Smith (Chicago and London: University of Chicago Press, 1961), vol. 1, pp. 47-48.

9 David C. Humphrey, From Kings College to Columbia 1746-1800 (New York: Columbia University Press, 1976), p. 204.

10 Ibid., p. 196.

11 Howard Miller, “Evangelical Religion and Colonial Princeton,” in Schooling and Society, ed. Lawrence Stone (Baltimore: Johns Hopkins University Press, 1976), pp. 135-39.

12 Allmendinger, Paupers and Scholars, pp. 9-11.

13 Ibid. Allmendinger also emphasizes the charitable support offered by the American Education Society and local groups toward meeting educational expenses—see pp. 54-78.

14 Ibid., pp. 85-86.

15 Frederick Rudolph, The American College and University: A History (New York: Vintage, 1962), p. 148.

16 Ibid., pp. 149-50.

17 George Henry Tripp, Student Life at Harvard (Boston: Lockwood, Brooks and Co., 1876), p. 317.

18 Ibid., p. 318.

19 Ibid., pp. 318-19.

20 Ibid., p. 323.

21 Ibid., p. 324.

22 Statement of James L. High, an 1864 University of Wisconsin alumnus as quoted in Helen R. Olin, The Women of a State University, An Illustration of the Working of Coeducation in the Middle West (New York and London: G. P. Putnam’s Sons, 1909), pp. 101-02.

23 “And a Hebrew female seminary, in the character of the student body, at that,” Burgess commented. John W. Burgess, Reminiscences of an American Scholar: The Beginning of Columbia University (New York: Columbia University Press, 1934), p. 242.

24 Ibid., pp. 241-42.

25 Mabel Newcomer, A Century of Higher Education for American Women (New York: Harper and Brothers, 1959), p. 46.

26 Olin, The Women of a State University, pp. 112-13.

27 The University of Chicago, The President’s Report: Administration, The Decennial Publications, First Series, vol. I (Chicago: University of Chicago Press, 1903), p. cxi.

28 Ibid., p. cvi.

29 Ibid., p. cxi.

30 G. Stanley Hall, Adolescence: Its Psychology and its Relations to Physiology, Anthropology, Sociology, Sex, Crime, Religion, and Education, vol. 2 (New York: D. Appleton, 1908), pp. 616-17.

31 Julius Sachs, “The Intellectual Reactions of Co-education,” Educational Review 35 (May 1908): 470.

32 M. Carey Thomas, “Present Tendencies in Women’s College and University Education,” Educational Review 35 (January 1908): 65.

33 Marion Talbot, “Report of the Dean of Women,” in The University of Chicago, The President’s Report, pp. 140, 141.

34 Thomas, “Present Tendencies in Women’s College and University Education,” p. 73.

35 Olin, The Women of a State University, pp. 139-40.

36 Mary Roth Walsh, Doctors Wanted: No Women Need Apply (New Haven: Yale University Press, 1977), pp. 200-06.

37 Women’s Journal, January 1, 1910, as quoted in ibid., p. 201.

38 Stephen Duggan, A Professor at Large (New York: Macmillan, 1943), pp. 10-11.

39 Abbott Lawrence Lowell to Rufus S. Tucker, May 20, 1922, A. L. Lowell Papers, 1919-1922, Harvard University Archives, file 1056: “Jews.”

40 Abbott Lawrence Lowell to William Ernest Hocking, May 19, 1922, in ibid.

41 Henry Aaron Yeomans, Abbott Lawrence Lowell 1856-1943 (Cambridge: Harvard University Press, 1948), p. 212.

42 Virginia Gildersleeve to Annie Nathan Meyer, March 31, 1933, Annie Nathan Meyer Papers, American Jewish Archives, “Virginia Gildersleeve” file.

43 Virginia Gildersleeve to Annie Nathan Meyer, May 6, 1929, Barnard College Archives, DO 28-29, box 1, file 1.

44 Vincent Sheean, Personal History (Garden City, N.Y.: Doubleday Doran, 1936), p. 9.

45 Ibid., p. 10.

46 Ibid., p. 14.

47 Henry Hurwitz and I. Leo Sharfman, eds., The Menorah Movement for the Study and Advancement of Jewish Culture and Ideals: History, Purposes, Activities (Ann Arbor, Mich.: Intercollegiate Menorah Association, 1914).

48 On literary societies see Rudolph, The American College and University, pp. 137-46; and James McLachlan, “The Choice of Hercules: American Student Societies in the Early 19th Century,” in The University in Society, ed. Lawrence Stone (Princeton: Princeton University Press, 1974), pp. 449-94.

49 Hurwitz and Sharfman, The Menorah Movement for the Study and Advancement of Jewish Culture and Ideals, pp. 10-11.

50 Zeta Beta Tau, The First Twenty Years (New York: Zeta Beta Tau, 1924), p. 15.

51 Ibid., p. 59.

52 Ruth Mack to Julian Mack, November 26, 1914, “Letters and notes used by Harry Barnard in Researching Mack’s biography,” American Jewish Archives, box 1068, “Letters and notes concerning time period 1900-1929” file.

53 The most famous fictional elaboration of this theme is contained in Owen Johnson, Stover at Yale (New York: Collier Books, 1968 [1912]).

54 Horace Kallen, College Prolongs Infancy (New York: John Day, 1932), p. 24.

55 Paul Revere Frothingham, Edward Everett: Orator and Statesman (Boston and New York: Houghton Mifflin, 1925). My thanks to Richard Yanikoski, who is writing a thesis on Everett, for calling this incident to my attention. It is also recounted in Gordon W. Allport, The Nature of Prejudice (Garden City, N.Y.: Doubleday Anchor, 1958), p. 471.

56 Abbott Lawrence Lowell to Roscoe Conkling Bruce, January 6, 1923, as quoted in Nell Painter, “Jim Crow at Harvard,” New England Quarterly 44 (1971): 629.

57 Painter, “Jim Crow at Harvard,” p. 634.

58 Ibid., n. 26. See also Marcia Synnott, “A Social History of Admission Policies at Harvard, Yale and Princeton 1900-1930” (Ph.D. diss., University of Massachusetts, 1974), pp. 368-80, 396-98.

59 William Henderson to E. D. Burton, April 5, 1923, University Presidents’ Papers, 1889-1925, The University of Chicago Library, “Racial Issues” file. In 1907 five white students moved out of a dormitory at the University of Chicago when university officials assigned a black student to it. Apparently this was an inadvertent breach of the university’s segregationist policy. See S. Breckinridge to H. P. Judson, June 20, 1907, and S. Breckinridge to R. S. Goodspeed, June 20, 1907, University Presidents’ Papers, 1889-1925, The University of Chicago Library, “Racial Issues” file.

60 B. S. Hurlbut to E. D. Burton, April 2, 1923, University Presidents’ Papers, 1889-1925, The University of Chicago Library, “Racial Issues” file.

61 Edythe Hargrove, “How I Feel as a Negro at a White College,” Journal of Negro Education 11 (October 1942): 484.

62 William H. Boone, “Problems of Adjustment of Negro Students at a White School,” Journal of Negro Education 11 (October 1942): 481.

63 Hargrove, “How I Feel as a Negro at a White College,” p. 485. The school in question was the University of Michigan.

64 Frasier v. Board of Trustees of University of North Carolina, 134 F. Supp. 589 (1955) (M.D. North Carolina); affirmed 350 U.S. 979 (1956).

65 Hansjorg Elshorst, “Two Years after Integration: Race Relations at a Deep South University,” Phylon 28 (Spring 1967): 41: “A student was threatened with a knife while in his room, another was hit by acid, one was attacked with fists and a girl was hit while in the library.”

66 For exceptions see Meyer Weinberg, Minority Students: A Research Appraisal (Washington, D.C.: U.S. Department of Health, Education, and Welfare-National Institute of Education, 1977), p. 199.

67 Aaron Bindman, “Participation of Negro Students in an Integrated University” (Ph.D. diss., University of Illinois, 1965), passim.

68 Charles V. Willie and Arline Sakuma McCord, Black Students at White Colleges (New York: Praeger, 1972), p. 6.

69 Ibid., p. 25.

70 See David E. Lavin, Richard D. Alba, and Richard Silberstein, “Open Admissions and Equal Access: A Study of Ethnic Groups in the City University of New York,” Harvard Educational Review 49 (February 1979): 53-93; and Harold S. Wechsler, The Qualified Student: A History of Selective College Admission in America 1870-1970 (New York: Wiley-Interscience, 1977), chap. 11.

71 Elshorst, “Two Years after Integration,” p. 51.

72 Richard J. Storr, “Marion Talbot,” in Notable American Women 1607-1958: A Biographical Dictionary, vol. 3 (Cambridge: Belknap Press of Harvard University Press, 1971), p. 423.

73 Sheean, Personal History, p. 16.

74 For a critical study of magnet schools, see James E. Rosenbaum and Stefan Presser, “Voluntary Racial Integration in a Magnet School,” School Review 86 (February 1978): 156-86.

I wish to thank Ann Breslin, Deborah Gardner, Lynn Gordon, Walter Metzger, and Paul Ritterband for their comments on this paper. I completed this work during my tenure as a Spencer Fellow of the National Academy of Education. I wish to thank the Spencer Foundation and the members of the Academy for their support.

Cite This Article as: Teachers College Record Volume 82 Number 4, 1981, p. 567-588.
Posted in History, Leadership, Modernity, Welfare

Beneficent Buffoon — The Case of Napoleon III

History is full of ironies.  One is that sometimes buffoons can be more beneficent national leaders than great men.  A case in point is Napoleon III.  My source for this analysis is the new book by Alan Strauss-Schom, The Shadow Emperor: A Biography of Napoleon III.

Louis Napoleon Bonaparte was the undistinguished nephew and heir to one of the towering figures in world history, Napoleon Bonaparte, who became emperor of France and ran roughshod over Europe for nearly 15 years.  For comparison, just look at the images of the two men below.  The first says: twit alert.  The second says: danger ahead.

[Images: Napoleon III and Napoleon]

Perhaps the classic and certainly most devastating comparison between the two men came from Karl Marx on the opening page of his book about the coup that made the nephew emperor, The Eighteenth Brumaire of Louis Bonaparte.  (Eighteenth Brumaire was the date in the revolutionary calendar — November 9, 1799 — when the uncle seized power and later crowned himself emperor.)  Here are the opening sentences to the book:  “Hegel says somewhere that great historic facts and personages recur twice.  He forgot to say, ‘Once as tragedy, and again as farce.’”

It’s hard to question the accuracy of Marx’s judgment.  Though Louis always dreamed of succeeding his uncle as emperor of France, it took him three tries to get it right. 

In 1836, he and 11 others walked into the French garrison in Strasbourg, where he asked the men to support him in restoring the empire. The puzzled troops declined, and King Louis Philippe banished him to England. 

Then in 1840 he sailed from London to Boulogne, this time with 55 men in uniform — few with military training and many his personal servants dressed up as soldiers — and marched up to the doors of the chateau, demanding to be let in.  Local troops routed this motley group and captured Louis.  After trial, he was sentenced to prison, where he kept busy for six years writing pamphlets and fathering two children with a maidservant. 

Then “Early on May 25, 1846, after shaving off his distinctive mustache and donning a long black wig, a workman’s blue blouse and trousers, wooden sabots, with a clay pipe in his mouth, and carrying a plank over his shoulder, Louis Napoléon sauntered lazily past some of the carpenters carrying out repairs in the fort.”  He headed back to London to make his next move.  Really, you can’t make this up.

His time finally came with the revolution of 1848, which displaced Louis Philippe and set up the Second Republic.  The popularity of Napoleon I had been growing, as shown by the massive public ceremony in 1840 when his remains were brought back from Saint Helena and installed with honors in a tomb at the Invalides.  So Louis returned to Paris, was elected to the assembly, and then won election as president. 

When the assembly voted to restrict the ballot and thus deny him the support he would need for reelection, he planned another coup.  This time he had sense enough to let his half brother Auguste de Morny handle the arrangements for an attempt to take place on an auspicious date for the empire — December 2, 1851 — the anniversary of Napoleon’s great victory at Austerlitz and also of his coronation.  For once everything went off with military precision.  It was also a very popular move with the public, who on December 20 voted to give him the presidency for a ten-year term.  The tally was 7 million to 600,000.  The next year, he was made emperor, affirmed by an even larger majority of the voters.  Not bad for the leader of the gang that couldn’t shoot straight.  

But he hadn’t lost his knack for bumbling.  He stumbled into several unnecessary wars — the Crimean War and the Italian wars — where more soldiers died of incompetent leadership and disease than from battle.  He invaded Mexico to try to install an Austrian archduke as emperor, which ended in humiliating defeat.  He doubled down on the French occupation of Algeria,  a conflict that would continue  for the next century.  And he sent troops into Indo-China, which also led to a century of colonial conflict.  That’s a lot of damage for his 19 years as emperor.

But the worst came at the end, when Louis allowed Bismarck to lure him into a war with Prussia, which the latter had been planning for years as a way to avenge Prussia’s defeat by Napoleon Bonaparte and a way to bring the German states together into a German Empire.  Louis’s troops were badly equipped and had no war plans to follow.  Worse, they had the emperor in their midst to muddle lines of authority.  The battle of Sedan in 1870 was a devastating defeat for the French, and Napoleon himself surrendered to the Prussian king.

The result was devastating for France and for Europe as a whole (helping set up conditions for the First World War).  With the army at Sedan out of the way, the Prussians marched on Paris, besieged the now leaderless capital, and eventually paraded triumphantly through the city.  France set up the Third Republic and then immediately had a bloody battle with the Paris Commune, where workers had risen in revolt.  Napoleon III died two years later in English exile.

Oh yes, one more thing:  “he made a laughingstock of himself by having to bed every silk skirt in Paris.”

The list of his conquests was not only long, it was public, including the companions and ladies-in-waiting of the empress. For all his charms and admitted interest in major social causes, including new hospitals, schools, and housing, Napoléon III was at the same time thick-skinned to the point of deeply wounding and publicly humiliating Eugénie. Harriet Howard and her children and his own prison-born bastards were long out of sight. But then there had been Augustine Brohan, Alice Ozy, Countess Parada, Countess de La Bédoyère, the Countess Walewska, Madame Rimsky-Korsakov, and now La Castiglione; and later, Marguerite Bellanger, Valtesse de la Bigne, and Countess Mercy-Argenteau, without counting “the actresses” and the ladies of the court.  Louis Napoléon was indeed a womanizer on an imperial scale.

OK, you’re asking: where’s the beneficent part?  When did the doofus actually do good?

First of all — competence aside, Napoleon III was not a bad guy.  For one thing, he was remarkably likeable, something that no one ever said about his uncle.  Queen Victoria and Prince Albert adored him.  So did his jailers.

Regardless of his Germanic accent in French and English, the prince’s winning Old World courtesy and mild temperament won over everyone, including the guards. Despite his notorious reputation, despite his two failed attempts to overthrow the government of France, despite his having shot an unarmed French soldier, it was simply very difficult to dislike the man.

More substantively, Louis’s heart was not in making war but in liberal reform, the modernization of France, and support for the working class.  Strauss-Schom puts it this way:

The longest lasting and certainly the most complex conflict of the Second Empire was the battle between Louis Napoléon Bonaparte, the follower of Saint-Simon, and the Emperor Napoléon III, the leader of his country…. 

As a liberal reformer, Louis Napoleon did some amazing things.  His most visible gift was the complete remaking of the city of Paris, which at the time he took power was a collection of medieval villages with narrow, filthy streets, no sanitation or running water, and an appalling death rate.  He turned it into the magnificent modern city that we all love, with broad boulevards, expansive squares, and stunning buildings.  He made Baron Haussmann prefect of Paris, and the rest is history.

By the time Haussmann stepped down in January 1870, he had overseen the demolition of 19,722 buildings, which had been replaced by some 43,777 new structures, all with running water and sanitary facilities. He had designed and overseen the construction of ninety-five kilometers of broad new gas-lit streets, including most of the great thoroughfares of the capital. 

And the improvements were not just to the physical environment; Louis also had a big impact on social welfare.

The last vestiges of the eighteenth century were carried away with the rubble from the demolished medieval buildings. A fresh breeze wafted across the French capital, transforming not only the avenues and architecture but the entire attitude and outlook of the people liberated from the restraining values and ideas of the past. Thanks to Louis Napoléon’s emphasis on public education, the working classes were finally taught to read and write, and new book publishers, new newspapers, reviews, and magazines multiplied, bringing literary creation as well as news from across the world and the ever expanding empire.

The living and working conditions of the working class — totally ignored by Napoléon — became a lifelong preoccupation with Napoléon III. Crowded medieval tenements, contaminated drinking water, and the lack of sanitary facilities and sewage disposal resulted in tens of thousands of deaths in Paris alone every year.  At the same time, the vast rebuilding of the capital put many tens of thousands of the unemployed to work. Louis Napoléon also introduced farsighted job-creation and old-age pension schemes for the working class, not to mention mandatory education at the primary school level.

The rebuilding of Paris did come with a major cost, however, one with long-term consequences:  “no provision was made for rehousing most of those displaced, who were destined ‘to disappear’ into the outlying suburbs.”  Sounds like Robert Moses in NYC, doesn’t it?  These suburbs (banlieues) became a major breeding ground for hostility among their working-class and immigrant inhabitants.  But even here Louis tried in his own way to do something about the problem.  “He was personally to buy out of his own pocket some of the condemned properties near the Gare de Lyon and build a new Saint-Simon-style housing project for a few thousand people.”

Louis also did a lot to advance science, technology, finance, business, agriculture, and the environment.  He built miles of railroads and created a fleet of steamships.

Louis Napoléon introduced recent agricultural methods, creating numerous model farms, and supported Louis Pasteur and other scientists in their research. He also supported the creation of public and private sector banking credit facilities for the advancement and expansion of agriculture. Millions of acres of wasteland and swamp were reclaimed on his orders and turned into agricultural production, this at a time when there was neither public environmental interest in, nor demand for, such government programs. As a result of his determination, corn and wheat production soared, as did the nation’s wine output. The physical and chemical sciences were also greatly encouraged with fresh funding and prizes, and new specialist hospitals were built throughout the country.

Here is how Strauss-Schom sums up the comparison between the two Napoleons:

A very dynamic and demanding Napoléon I had dominated the empire he created, while a far less vigorous and egotistical Louis Napoléon happily remained in the shadows of the very empire he in turn created. And yet despite some errors of judgment, obvious lack of organizational skills, and sometimes maddening indecision, beneath that quiet, good-humored exterior, Louis Napoléon achieved vastly much more in the long run for France than did the brilliant, swaggering soldier Napoléon I. His Second Empire now left France one of the most prosperous, modern, and progressive states in Europe. Rarely in the history of any country does it fall to one head of state to so completely alter the face and future of his civilization, to bring one’s country and its infrastructure out of an older traditional century into a thriving modern age. This most unlikely Louis Napoléon Bonaparte was just such a man.

Sometimes it’s the doofus who does good.