Posted in History, Schooling, Welfare

Michael Katz — Public Education as Welfare

In this post, I reproduce a seminal essay by Michael Katz called “Public Education as Welfare.” It was originally published in Dissent in 2010 (link to the original) and it draws on his book, The Price of Citizenship: Redefining the American Welfare State.  

I encountered this essay when I was working on a piece of my own about the role that US public schools play as social welfare agencies.  My interest emerged from an op-ed about what is lost when schools close that I published a couple weeks ago and then posted here.  Michael was my dissertation advisor back at Penn, and I remembered he had written about the connection between schooling and welfare.  As you’ll see when I publish my essay here in a week or so, my focus is on the welfare function of schooling in company with its other functions: building political community, promoting economic growth, and providing advantage in the competition for social position.

Katz takes a much broader approach, seeking to locate schools as a central component of the peculiar form of the American welfare state.  He does a brilliant job of locating schooling in relation to the complex array of other public and private programs that constitute this rickety and fiendishly complex structure.  Enjoy.

Katz Cover

Public Education as Welfare

Michael B. Katz

Welfare is the most despised public institution in America. Public education is the most iconic. To associate them with each other will strike most Americans as bizarre, even offensive. The link would be less surprising to nineteenth-century reformers, for whom crime, poverty, and ignorance formed an unholy trinity against which they struggled. Nor would it raise British eyebrows. Ignorance was one of the “five giants” to be slain by the new welfare state proposed in the famous Beveridge Report. National health insurance, the cornerstone of the British welfare state, and the 1944 Education Act, which introduced the first national system of secondary education to Britain, were passed by Parliament only two years apart. Yet, in the United States, only a few students of welfare and education have even suggested that the two might stand together.

Why this mutual neglect? And how does public education fit into the architecture of the welfare state? It is important to answer these questions. Both the welfare state and the public school system are enormous and in one way or another touch every single American. Insight into the links between the two will illuminate the mechanisms through which American governments try to accomplish their goals; and it will show how institutions whose public purpose is egalitarian in fact reproduce inequality.

The definition and boundaries of the welfare state remain contentious topics. I believe that the term “welfare state” refers to a collection of programs designed to assure economic security to all citizens by guaranteeing the fundamental necessities of life: food, shelter, medical care, protection in childhood, and support in old age. In the United States, the term generally excludes private efforts to provide these goods. But the best way to understand a nation’s welfare state is not to apply a theoretically driven definition but, rather, to examine the mechanisms through which legislators, service providers, and employers, whether public, private, or a mix of the two, try to prevent or respond to poverty, illness, dependency, economic insecurity, and old age.

Where does public education fit within this account? First, most concretely, for more than a century schools have been used as agents of the welfare state to deliver social services, such as nutrition and health. Today, in poor neighborhoods, they often provide hot breakfasts among other services. More to the point, public school systems administer one of the nation’s largest programs of economic redistribution. Most accounts of the financing of public education stress the opposite point by highlighting inequities, “savage inequalities,” to borrow Jonathan Kozol’s phrase, that shortchange city youngsters and racial minorities. These result mostly from the much higher per-pupil spending in affluent suburbs than in poor inner cities, where yields from property taxes are much lower. All this is undeniable as well as unacceptable.

But tilt the angle and look at the question from another perspective. Consider how much the average family with children pays in property taxes, the principal support for schools. Then focus on per-pupil expenditure, even in poor districts. You will find that families, including poor city families, receive benefits worth much more than they have contributed. Wealthier families, childless and empty-nest couples, and businesses subsidize families with children in school.

There is nothing new about this. The mid-nineteenth-century founders of public school systems, like Horace Mann, and their opponents understood the redistributive character of public education. To build school systems, early school promoters needed to persuade the wealthy and childless that universal, free education would serve their interests by reducing the incidence of crime, lowering the cost of poor relief, improving the skills and attitudes of workers, assimilating immigrants—and therefore saving them money in the long run. So successful were early school promoters that taxation for public education lost its controversial quality. With just a few exceptions, debates focused on the amount of taxes, not on their legitimacy. The exceptions occurred primarily around the founding of high schools that working-class and other voters correctly observed would serve only a small fraction of families at a time when most youngsters in their early teens were sent out to work or kept at home to help their families. For the most part, however, the redistributive quality of public education sank further from public consciousness. This is what early school promoters wanted and had worked to make happen. When they began their work in the early nineteenth century, “public” usually referred to schools widely available and either free or cheap—in short, schools for the poor. School promoters worked tirelessly to break this link between public and pauper that inhibited the development of universal public education systems. So successful were they that today the linkage seems outrageous—though in cities where most of the remaining affluent families send their children to private schools, the association of public with pauper has reemerged with renewed ferocity.

As a concrete example, here is a back-of-the-envelope illustration. In 2003–2004, public elementary and secondary education in the United States cost $403 billion or, on average, $8,310 per student (or, taking the median, $7,860). Most families paid nothing like the full cost of this education in taxes. Property taxes, which account for a huge share of spending on public schools, average $935 per person or, for a family of four, something under $4,000, less than half the average per-pupil cost. As rough as these figures are, they do suggest that most families with school-age children receive much more from spending on public education than they contribute in taxes. (A similar point could be made about public higher education.)

Taxpayers provide this subsidy because they view public education as a crucial public good. It prevents poverty, lowers the crime rate, prepares young people for the work force, and fosters social mobility—or so the story goes. The reality, as historians of education have shown, is a good deal more complex. Public education is the mechanism through which the United States solves problems and attempts to reach goals achieved more directly or through different mechanisms in other countries. International comparisons usually brand the United States a welfare laggard because it spends less of its national income on welfare-related benefits than do other advanced industrial democracies. But the comparisons leave out spending on public education, private social services, employer-provided health care and pensions, and benefits delivered through the tax code, a definitional weakness whose importance will become clearer when I describe the architecture of the welfare state.

***

Almost thirty-five years ago, in Social Control of the Welfare State, Morris Janowitz pointed out that “the most significant difference between the institutional bases of the welfare state in Great Britain and the United States was the emphasis placed on public education—especially for lower income groups—in the United States. Massive support for the expansion of public education . . . in the United States must be seen as a central component of the American notion of welfare . . .” In the late nineteenth and early twentieth centuries, while other nations were introducing unemployment, old age, and health insurance, the United States was building high schools for a huge surge in enrollment. “One would have to return to the 1910s to find levels of secondary school enrollment in the United States that match those in 1950s Western Europe,” point out economists Claudia Goldin and Lawrence F. Katz in The Race Between Education and Technology. European nations were about a generation behind the United States in expanding secondary education; the United States was about a generation behind Europe in instituting its welfare state.

If we think of education as a component of the welfare state, we can see that the U.S. welfare state focuses on enhancing equality of opportunity in contrast to European welfare states, which have been more sympathetic to equality of condition. In the United States, equality has always been primarily about a level playing field where individuals can compete unhindered by obstacles that crimp the full expression of their native talents; education has served as the main mechanism for leveling the field. European concepts of equality more often focus on group inequality and the collective mitigation of handicaps and risks that, in the United States, have been left for individuals to deal with on their own.

***

Public education is part of the American welfare state. But which part? Each part is rooted in a different place in American history. Think of the welfare state as a loosely constructed, largely unplanned structure erected by many different people over centuries. This rickety structure, which no sane person would have designed, consists of two main divisions, the public and private welfare states, with subdivisions within each. The divisions of the public welfare state are public assistance, social insurance, and taxation. Public assistance (called outdoor relief through most of its history) originated with the Elizabethan poor laws brought over by the colonists. It consists of means-tested benefits. Before 1996, the primary example was Aid to Families with Dependent Children (AFDC), and since 1996, it has been Temporary Assistance for Needy Families (TANF)—the programs current-day Americans usually have in mind when they speak of “welfare.”

Social insurance originated in Europe in the late nineteenth century and made its way slowly to the United States. The first form of U.S. social insurance was workers’ compensation, instituted by several state governments in the early twentieth century. Social insurance benefits accrue to individuals on account of fixed criteria such as age. They are called insurance because they are allegedly based on prior contributions. The major programs—Social Security for the elderly and unemployment insurance—emerged in 1935 when Congress passed the Social Security Act. Social insurance benefits are much higher than benefits provided through public assistance, and they carry no stigma.

The third track in the public welfare state is taxation. U.S. governments, both federal and state, administer important benefits through the tax code rather than through direct grants. This is the most modern feature of the welfare state. The major example of a benefit aimed at poor people is the Earned Income Tax Credit, which expanded greatly during the Clinton presidency.

Within the private welfare state are two divisions: charities and social services, and employee benefits. Charities and social services have a long and diverse history. In the 1960s, governments started to fund an increasing number of services through private agencies. (In America, governments primarily write checks; they do not usually operate programs.) More and more dependent on public funding, private agencies increasingly became, in effect, government providers, a transformation with profound implications for their work. Employee benefits constitute the other division in the private welfare state. These date primarily from the period after the Second World War. They expanded as a result of the growth of unions, legitimated by the 1935 Wagner Act and 1949 decisions of the National Labor Relations Board, which held that employers were required to bargain over, though not required to provide, employee benefits.

Some economists object to including these benefits within the welfare state, but they are mistaken. Employee benefits represent the mechanism through which the United States has chosen to meet the health care needs of the majority of its population. About 60 percent of Americans receive their health insurance through their employer, and many receive pensions as well. If unions had bargained hard for a public rather than a private welfare state, the larger American welfare state would look very different. Moreover, the federal government encourages the delivery of healthcare and pensions through private employers by allowing them to deduct the cost from taxes, and it supervises them with massive regulations, notably the Employee Retirement Income Security Act of 1974.

The first thing to stress about this welfare state is that its divisions are not distinct. They overlap and blend in complicated ways, giving the American welfare state a mixed economy not usefully described as either public or private. At the same time, federalism constrains its options, with some benefits provided by the federal government and others offered through state and local governments. Throughout the twentieth century, one great problem facing would-be welfare state builders was designing benefits to pass constitutional muster.

How does public education fit into this odd, bifurcated structure? It shares characteristics with social insurance, public assistance, and social services. At first, it appears closest to social insurance. Its benefits are universal and not means tested, which makes them similar to Social Security (although Social Security benefits received by high-income individuals are taxed). But education benefits are largely in kind, as are food stamps, housing, and Medicare. (In-kind benefits are “government provision of goods and services to those in need of them” rather than of “income sufficient to meet their needs via the market.”) Nor are the benefits earned by recipients through prior payroll contributions or employment. This separates them from Social Security, unemployment insurance, and workers’ compensation. Public education is also an enormous source of employment, second only to health care in the public welfare state.

Even more important, public education is primarily local. Great variation exists among states and, within states, among municipalities. In this regard, it differs completely from Social Security and Medicare, whose nationally-set benefits are uniform across the nation. It is more like unemployment insurance, workers’ compensation, and TANF (and earlier AFDC), which vary by state, but not by municipality within states. The adequacy of educational benefits, by contrast, varies with municipal wealth. Education, in fact, is the only public benefit financed largely by property taxes. This confusing mix of administrative and financial patterns provides another example of how history shapes institutions and policy.

Because of its differences from both social insurance and public assistance, public education composes a separate division within the public welfare state. But it moves in the same directions as the rest. The forces redefining the American welfare state have buffeted public schools as well as public assistance, social insurance, and private welfare.

***

Since the 1980s, the pursuit of three objectives has driven change in the giant welfare state edifice. These objectives are, first, a war on dependence in all its forms—not only the dependence of young unmarried mothers on welfare but all forms of dependence on public and private support, including the dependence of workers on paternalistic employers for secure, long-term jobs and benefits. Second is the devolution of authority—the transfer of power from the federal government to the states, from states to localities, and from the public to the private sector. Last is the application of free market models to social policy. Everywhere the market triumphed as the template for a reengineered welfare state. This is not a partisan story. Broad consensus on these objectives crossed party lines. Within the reconfigured welfare state, work in the regular labor market emerged as the gold standard, the mark of first-class citizenship, carrying with it entitlement to the most generous benefits. The corollary, of course, was that failure or inability to join the regular labor force meant relegation to second-class citizenship, where benefits were mean, punitive, or just unavailable.

The war on dependence, the devolution of authority, and the application of market models also run through the history of public education in these decades. The attack on “social promotion,” emphasis on high-stakes tests, implementation of tougher high school graduation requirements, and transmutation of “accountability” into the engine of school reform: all these developments are of a piece with the war on dependence. They call for students to stand on their own with rewards distributed strictly according to personal (testable) merit. Other developments point to the practice of devolution in public education. A prime example is the turn toward site-based management—that is, the decentralization of significant administrative authority from central offices to individual schools. The most extreme example is Chicago’s 1989 school reform, which put local school councils in charge of each school, even giving them authority to hire and fire principals.

At the same time, a countervailing trend, represented by the 2002 federal No Child Left Behind legislation and the imposition of standards, limited the autonomy of teachers and schools and imposed new forms of centralization. At least, that was the intent. In fact, left to develop their own standards, many states avoided penalties mandated in No Child Left Behind by lowering the bar and making it easier for students to pass the required tests. In 2010, the nation’s governors and state school superintendents convened a panel of experts to reverse this race to the bottom. The panel recommended combining a set of national standards—initially for English and math—with local autonomy in curriculum design and teaching methods. The Obama administration endorsed the recommendations and included them in its educational reform proposals.

In this slightly schizoid blend of local autonomy and central control, trends in public education paralleled developments in the administration of public assistance: the 1996 federal “welfare reform” legislation mandated a set of outcomes but left states autonomy in reaching them. In both education and public assistance, the mechanism of reform became the centralization of acceptable outcomes and the decentralization of the means for achieving them.

***

As for the market as a template for reform, it was everywhere in education as well as the rest of the welfare state. Markets invaded schools with compulsory viewing of the advertising on Chris Whittle’s Channel One “free” television news for schools, and with the kickbacks to schools from Coke, Pepsi, and other products sold in vending machines—money schools desperately needed as their budgets for sports, arts, and culture were cut. Some school districts turned over individual schools to for-profit corporations such as Edison Schools, while advocacy of vouchers and private charter schools reflected the belief that blending competition among providers with parental choice would expose poorly performing schools and teachers and motivate others to improve.

Unlike the situation in the rest of the welfare state, educational benefits cannot be tied to employment. But they are stratified nonetheless by location, wealth, and race. The forces eroding the fiscal capacities of cities and old suburbs—withdrawal of federal aid and shrinking tax base—have had a devastating impact on public education and on children and adolescents, relegating a great many youngsters living in poor or near-poor families to second-class citizenship. In the educational division of the public welfare state, test results play the role taken on elsewhere by employment. They are gatekeepers to the benefits of first-class citizenship. The danger is that high-stakes tests and stiffer graduation requirements will further stratify citizenship among the young, with kids failing tests joining stay-at-home mothers and out-of-work black men as the “undeserving poor.” In this way, public education complements the rest of the welfare state as a mechanism for reproducing, as well as mitigating, inequality in America.

***

Michael B. Katz is Walter H. Annenberg Professor of History at the University of Pennsylvania. His conception of the architecture of the American welfare state and the forces driving change within it are elaborated in his book The Price of Citizenship: Redefining the American Welfare State, updated edition (University of Pennsylvania Press).

Posted in Capitalism, History, Modernity, Religion, Theory

Blaustein: Searching for Consolation in Max Weber’s Work Ethic

 

Last summer I posted a classic lecture by the great German sociologist, Max Weber, “Science as a Vocation.” Recently I ran across a terrific essay by George Blaustein about Weber’s vision of the modern world, drawing on this lecture and two other seminal works: the lecture “Politics as a Vocation” (delivered a year after the science lecture) and the book The Protestant Ethic and the Spirit of Capitalism. Here’s a link to the original Blaustein essay on the New Republic website.

Like so many great theorists (Marx, Durkheim, Foucault, etc.), Weber was intensely interested in understanding the formation of modernity.  How did the shift from premodern to modern come about?  What prompted it?  What are the central characteristics of modernity?  What are the main forces that drive it?  As Blaustein shows so adeptly, Weber’s take is a remarkably gloomy one.  He sees the change as one of disenchantment, in which we lost the certitudes of faith and tradition and are left with a regime of soulless rationalism and relentless industry.  Here’s how he put it in his science lecture:

The fate of our times is characterized by rationalization and intellectualization and, above all, by the ‘disenchantment of the world.’ Precisely the ultimate and most sublime values have retreated from public life either into the transcendental realm of mystic life or into the brotherliness of direct and personal human relations….

In his view, there is no turning back, no matter how much you feel you have lost, unless you are willing to surrender reason to faith.  This he is not willing to do, but he understands why others might choose differently.

To the person who cannot bear the fate of the times like a man, one must say: may he rather return silently, without the usual publicity build-up of renegades, but simply and plainly. The arms of the old churches are opened widely and compassionately for him. After all, they do not make it hard for him. One way or another he has to bring his ‘intellectual sacrifice‘ — that is inevitable. If he can really do it, we shall not rebuke him.

In The Protestant Ethic, he explores the Calvinist roots of the capitalist work ethic, in which the living saints worked hard in this world to demonstrate (especially to themselves) that they had been elected to eternal life in the next world.  Instead of earning to spend on themselves, they reinvested their earnings in economic capital on earth and spiritual capital in heaven.  But the ironic legacy of this noble quest is our own situation, in which we work in order to work, without purpose or hope.  Here’s how he puts it in the famous words that close his book.

The Puritan wanted to work in a calling; we are forced to do so. For when asceticism was carried out of monastic cells into everyday life, and began to dominate worldly morality, it did its part in building the tremendous cosmos of the modern economic order. This order is now bound to the technical and economic conditions of machine production which to-day determine the lives of all the individuals who are born into this mechanism, not only those directly concerned with economic acquisition, with irresistible force.  Perhaps it will so determine them until the last ton of fossilized coal is burnt.  In Baxter’s view the care for external goods should only lie on the shoulders of the “saint like a light cloak, which can be thrown aside at any moment.” But fate decreed that the cloak should become an iron cage.

I hope you gain as much insight from this essay as I did.

Protestant Ethic

Searching for Consolation in Max Weber’s Work Ethic

People worked hard long before there was a thing called the “work ethic,” much less a “Protestant work ethic.” The phrase itself emerged early in the twentieth century and has since congealed into a cliché. It is less a real thing than a story that people, and nations, tell themselves about themselves. I am from the United States but now live in Amsterdam; the Dutch often claim the mantle of an industrious, Apollonian Northern Europe, as distinct from a dissolute, Dionysian, imaginary South. Or the Dutch invoke the Protestant ethic with self-deprecating smugness: Alas, we are so productive. Both invocations are absurd. The modern Dutch, bless them, are at least as lazy as everyone else, and their enjoyments are vulgar and plentiful.

In the U.S., meanwhile, celebrations of the “work ethic” add insult to the injury of overwhelming precarity. As the pandemic loomed, it should have been obvious that the U.S. would particularly suffer. People go to work because they have no choice. Those who did not face immediate economic peril could experience quarantine as a kind of relief and then immediately feel a peculiar guilt for that very feeling of relief. Others, hooray, could sustain and perform their work ethic from home.

The German sociologist Max Weber was the first great theorist of the Protestant ethic. If all scholarship is autobiography, it brings an odd comfort to learn that he had himself suffered a nervous breakdown. Travel was his main strategy of recuperation, and it brought him to the Netherlands and to the U.S., among other places. The Hague was “bright and shiny,” he wrote in 1903. “Everyone is well-to-do, exceedingly ungraceful, and rather untastefully dressed.” He had dinner in a vegetarian restaurant. (“No drinks, no tips.”) Dutch architecture made him feel “like Gulliver when he returned from Brobdingnag.” America, by contrast, was Brobdingnag. Weber visited the U.S. for three months in 1904 and faced the lurid enormity of capitalism. Chicago, with its strikes, slaughterhouses, and multi-ethnic working class, seemed to him “like a man whose skin has been peeled off and whose intestines are seen at work.”

Weber theorized the rise of capitalism, the state and its relationship to violence, the role of “charisma” in politics. Again and again he returned, as we still do, to the vocation—the calling—as both a crushing predicament and a noble aspiration. He died 100 years ago, in a later wave of the Spanish flu. It is poignant to read him now, in our own era of pandemic and cataclysm. It might offer consolation. Or it might fail to console.

The Protestant Ethic and the Spirit of Capitalism emerged, in part, from that American journey. It first appeared in two parts, in 1904 and 1905, in a journal, the Archiv für Sozialwissenschaft und Sozialpolitik. A revised version appeared in 1920, shortly before his death. Race did not figure into his account of capitalism’s rise, though the American color line had confronted him vividly. In 1906 he would publish W.E.B. Du Bois’s “The Negro Question in the United States” in the same journal, which he edited.

Modern invocations of the work ethic are usually misreadings: The Protestant Ethic was more lament than celebration. Weber sought to narrate the arrival of what had become a no-longer-questioned assumption: that our duty was to labor in a calling, even to labor for labor’s sake. He sought the origins of this attitude toward work and the meaning of life, of an ethic that saved money but somehow never enjoyed it, of a joyless and irrational rationality. He found the origins in Calvinism, specifically in what he called Calvinism’s “this-worldly asceticism.”

Weber’s argument was not that Calvinism caused capitalism; rather, The Protestant Ethic was a speculative psycho-historical excavation of capitalism’s emergence. The interpretation, like most of his interpretations, had twists that are not easy to summarize. It was, after all, really the failure of Calvinism—in the sense of the unmeetableness of Calvinism’s demands on an individual psyche and soul—that generated a proto-capitalist orientation to the world. The centerpiece of Calvin’s theology—the absolute, opaque sovereignty of God and our utter noncontrol over our own salvation—was, in Weber’s account, impossibly severe, unsustainable for the average person. The strictures of that dogma ended up creating a new kind of individual and a new kind of community: a community bound paradoxically together by their desperate anxiety about their individual salvation. Together and alone.

The germ of the capitalist “spirit” lay in the way Calvinists dealt with that predicament. They labored in their calling, for what else was there to do? To work for work’s sake was Calvinism’s solution to the problem of itself. Having foreclosed all other Christian comforts—a rosary, an indulgence, a ritual, a communion—Weber’s original Calvinists needed always to perform their own salvation, to themselves and to others, precisely because they could never be sure of it. No wonder they would come to see their material blessings as a sign that they were in fact blessed. And no wonder their unlucky descendants would internalize our economic miseries as somehow just.

Calvinism, in other words, was less capitalism’s cause than its ironic precondition. The things people did for desperate religious reasons gave way to a secular psychology. That secular psychology was no “freer” than the religious one; we had been emancipated into jobs. “The Puritans wanted to be men of the calling,” Weber wrote; “we, on the other hand, must be.” As a historical process—i.e., something happening over time—this process was gradual enough that the people participating in it did not really apprehend it as it happened. In Milton’s Paradise Lost, when Adam and Eve are expelled from Eden and into the world, the archangel Michael offers faith as a consolation within the worldliness that is humanity’s lot: The faithful, Michael promises Adam, “shal[l] possess / A Paradise within thee, happier by far.” Those lines appeared in 1674, more than a century after John Calvin’s death; for Weber, they were an inadvertent expression of the capitalist spirit’s historical unfolding. Only later still could the gloomy sociologist see, mirrored in that Puritan epic, our own dismal tendency to approach life itself as a task.

For historians of capitalism, the book is inspiring but soon turns frustrating. Weberian interpretations tend to stand back from history’s contingencies and exploitations in order to find some churning and ultimately unstoppable process: “rationalization,” for instance, by which tradition gives way ironically but inexorably to modernity. Humans wanted things like wholeness, community, or salvation; but our efforts, systematized in ways our feeble consciousness can’t ever fully grasp, end up ushering in anomie, bureaucracy, or profit. The Weberian analysis then offers no relief from that process, only a fatalism without a teleology. The moral of the story, if there is a moral, is to reconcile yourself to the modernity that has been narrated and to find in the narrative itself something like an intellectual consolation, which is the only consolation that matters.

Still, the book’s melancholy resonates, if only aesthetically. At moments, it even stabs with a sharpness that Weber could not have foreseen: The “monstrous cosmos” of capitalism now “determines, with overwhelming coercion, the style of life not only of those directly involved in business but of every individual who is born into this mechanism,” he wrote in the book’s final pages, “and may well continue to do so until the day that the last ton of fossil fuel has been consumed.” Gothic images—ghosts and shadowy monsters—abound in what is, at times, a remarkably literary portrait. “The idea of the ‘duty in a calling’ haunts our lives like the ghost of once-held religious beliefs.”

The book’s most famous image is the “iron cage.” For Puritans, material concerns were supposed to lie lightly on one’s shoulders, “like a thin cloak which can be thrown off at any time” (Weber was quoting the English poet Richard Baxter), but for us moderns, “fate decreed that the cloak should become an iron cage.” That morsel of sociological poetry was not in fact Weber’s but that of the American sociologist Talcott Parsons, whose English translation in 1930 became the definitive version outside of Germany. Weber’s phrase was “stahlhartes Gehäuse”—a shell hard as steel. It describes not a room we can’t leave but a suit we can’t take off.

One wonders what Weber would make of our era’s quarantines. What is a Zoom meeting but another communal experience of intense loneliness? Weber’s portrait of Calvinist isolation might ring a bell. Working from home traps us ever more firmly in the ideology or mystique of a calling. We might then take refuge in a secondary ethic, what we might call the iron cage of “fulfillment.” It is built on the ruins of the work ethic or, just as plausibly, it is the work ethic’s ironic apotheosis: secular salvation through sourdough.

It brings a sardonic pleasure to puncture the mental and emotional habits of a service economy in Weberian terms. But it doesn’t last. The so-called work ethic is no longer a spiritual contagion but a medical one, especially in America. Weber’s interpretation now offers little illumination and even less consolation. It is not some inner ethic that brings, say, Amazon’s workers to the hideously named “fulfillment centers”; it is a balder cruelty.

The breakdown happened in 1898, when Weber was 34. “When he was overloaded with work,” his wife, Marianne, wrote in her biography of him, after his death, “an evil thing from the unconscious underground of life stretched out its claws toward him.” His father, a politician in the National Liberal Party, had died half a year earlier, mere weeks after a family standoff that remained unresolved. In the dispute, Max had defended his devoutly religious mother against his autocratic father. The guilt was severe. (The Protestant Ethic would lend itself too easily to a Freudian reading.) A psychiatrist diagnosed him with neurasthenia, then the modern medical label for depression, anxiety, panic, fatigue. The neurasthenic brain, befitting an industrial age, was figured as an exhausted steam engine. Marianne, elsewhere in her biography, described the condition as an uprising to be squashed: “Weber’s mind laboriously maintained its dominion over its rebellious vassals.”

As an undergraduate at the University of Heidelberg, Weber had studied law. His doctoral dissertation was on medieval trading companies. By his early thirties he was a full professor in economics and finance, in Freiburg and then back in Heidelberg. After his breakdown, he was released from teaching and eventually given a long leave of absence. He resigned his professorship in 1903, keeping only an honorary title for more than a decade. Weberian neurasthenia meant a life of travel; medical sojourns in Alpine clinics; and convalescent trips to France, Italy, and Austria-Hungary—extravagant settings for insomnia and a genuine inner turmoil. Money was not the problem. Marianne, a prolific scholar and a key but complex figure in the history of German feminism, would inherit money from the family’s linen factory.

Though only an honorary professor, with periods of profound study alternating with periods of depression, Weber loomed large in German academic life. In 1917, students in Munich invited the “myth of Heidelberg,” as he was known, to lecture about “the vocation of scholarship.” He did not mention his peculiar psychological and institutional trajectory in that lecture, now a classic, though one can glimpse it between the lines. “Wissenschaft als Beruf” (“Science as a Vocation”) and another lecture from a year and a half later, “Politik als Beruf” (“Politics as a Vocation”) are Weber’s best-known texts outside The Protestant Ethic. A new English translation by Damion Searls rescues them from the formal German (as translations sometimes must) and from the viscous English into which they’re usually rendered. It restores their vividness and eloquence as lectures.

Of course, now they would be Zoom lectures, which would entirely break the spell. Picture him: bearded and severe, a facial scar still visible from his own college days in a dueling fraternity. He would see not a room full of students but rather his own face in a screen, looking back at him yet unable to make true eye contact. Neurasthenia would claw at him again.

Some lines from “Wissenschaft als Beruf,” even today, would have worked well in the graduation speeches that have been canceled because of the pandemic. Notably: “Nothing is humanly worth doing except what someone can do with passion.” Sounds nice! “Wissenschaft als Beruf” approached the confines of the calling in a more affirmative mode. Other parts of the speech, though—and even that inspirational line, in context—boast a bleak and bracing existentialism. My favorite moment is when Weber channeled Tolstoy on the meaninglessness of death (and life!) in a rationalized, disenchanted modernity. Since modern scholarship is now predicated on the nonfinality of truth, Weber said, and since any would-be scholar will absorb “merely a tiny fraction of all the new ideas that intellectual life continually produces,” and since “even those ideas are merely provisional, never definitive,” death can no longer mark a life’s harmonious conclusion. Death is now “simply pointless.” And the kicker: “And so too is life as such in our culture, which in its meaningless ‘progression’ stamps death with its own meaninglessness.” If only I had heard that from a graduation speaker.

Weber’s subject was the meaning of scholarship in a “disenchanted” world. “Disenchantment” is another one of Weber’s processes—twisted, born of unintended consequences, but nevertheless unstoppable. It meant a scholar could no longer find “proof of God’s providence in the anatomy of a louse.” Worse, the modern scholar was doomed to work in so dismal an institution as a university. “There are a lot of mediocrities in leading university positions,” said Weber about the bureaucratized university of his day, “safe mediocrities or partisan careerists” serving the powers that funded them. Still true.

So why do it? To be a scholar meant caring, as if it mattered, about a thing that objectively does not matter and caring as if “the fate of his very soul depends on whether he gets this specific conjecture exactly right about this particular point in this particular manuscript.” Scholarship was the good kind of calling, insofar as one could make one’s way to some kind of meaning, however provisional that meaning was, and however fleeting and inscrutable the spark of “inspiration.”

That part of the sermon is no longer quite so moving. Weber styled himself a tough-minded realist when it came to institutions, but our era’s exploitation of adjunct academic labor punctures the romance that Weber could nevertheless still inflate. Universities in an age of austerity do not support or reward scholarly inquiry as a self-justifying vocation. Scholars must act more and more like entrepreneurs, manufacturing and marketing our own “relevance.” For some university managers (as for many corporate CEOs), the coronavirus is as much an opportunity as a crisis, to further strip and “streamline” the university—to conjoin, cheaply, the incompatible ethics of efficiency and intellect. And we teachers are stuck in the gears: The digital technologies by which we persist in our Beruf will only further erode our professional stability. “Who but a blessed, tenured few,” the translation’s editors, Paul Reitter and Chad Wellmon, ask, “could continue to believe that scholarship is a vocation?”

And yet as a sermon on teaching, Weber’s lecture still stirs me. Having given up on absolute claims about truth or beauty, and having given up on academic inquiry revealing the workings of God, he arrived at a religious truth about pedagogy that you can still hang a hat on:

If we understand our own subject (I am necessarily assuming we do), we can force, or at least help, an individual to reckon with the ultimate meaning of his own actions. This strikes me as no small matter, in terms of a student’s inner life too.

I want this to be true. On good days, teaching delivers what Weber called that “strange intoxication,” even on Zoom.

An enormous historical gulf divides the two vocation lectures, though they were delivered only 14 months apart. In November 1917, Weber didn’t even mention the war. When it broke out in 1914, he served for a year as a medical director in the reserve forces; he did not see combat but supported German aspiration to the status of Machtstaat and its claim to empire. The war dragged miserably on, but in late 1917 it was far from clear that Germany would lose. Tsarist Russia had collapsed, and the American entry into the war had not proved decisive. The defeat that Germany would experience in the coming months was then unimaginable.

Weber was a progressive nationalist, moving between social democracy and the political center. During the war, besides his essays on the sociology of religion, he wrote about German political futures and criticized military management, all while angling for some role in the affairs of state himself. As the tide turned, he argued for military retrenchment as the honorable course. A month after Germany’s surrender on November 11, 1918, he stood unsuccessfully for election to parliament with the new German Democratic Party, of which he was a founder.

In January 1919 he returned to a Munich gripped by socialist revolution. It was now the capital of the People’s State of Bavaria, which would be short-lived. Weber, for years, had dismissed both pacifism and revolution as naïve. Many in the room where he spoke supported the revolution that he so disdained, and many of them had seen industrial slaughter in the state’s trenches. Part of the lecture’s mystique is its timing: He stood at a podium in the eye of the storm.

“Politik als Beruf” would seem to speak to our times, from one era of calamity and revolution to another. It is about the modern state and its vast machineries. It is about statesmen and epigones, bureaucracy and its discontents, “leadership” and political breakdown. To that moment’s overwhelming historical flux, Weber brought, or tried to bring, the intellectual sturdiness of sociological categories, “essential” vocabularies that could in theory apply at any time.

He offered a now-famous definition of the state in general: “the state is the only human community that (successfully) claims a monopoly on legitimate physical violence for itself, within a certain geographical territory.… All other groups and individuals are granted the right to use physical violence only insofar as the state allows it.” This definition, powerfully tautological, was the sociological floor on which stood all of the battles over what we might want the state to be. Philosophically, it operated beneath all ideological or moral debates over rights, democracy, welfare. It countered liberalism’s fantasy of a social contract, because Weber’s state, both foundationally and when push came to shove, was not contractual but coercive.

It was a bracing demystification. Legitimacy had nothing to do with justice; it meant only that the people acquiesced to the state’s authority. Some regimes “legitimately” protected “rights,” while others “legitimately” trampled them. Why did we acquiesce? Weber identified three “pure” categories of acquiescence: We’re conditioned to it, by custom or tradition; or we’re devoted to a leader’s charisma; or we’ve been convinced that the state’s legitimacy is in fact just, that its laws are valid and its ministers competent. Real political life, Weber wryly said, was always a cocktail of these three categories of acquiescence, never mind what stories we might tell ourselves about why we go along with anything.

With that floor of a definition laid, varieties of statehood could now emerge. Every state was a configuration of power and bureaucratic machinery, and the many massive apparatuses that made it up had their own deep sociological genealogies, each with their own Weberian twists. So did the apparatuses that produced those people who felt called to politics. Weber’s sweep encompassed parliaments, monarchs, political parties, corporations, newspapers, universities (law schools especially), a professional civil service, militaries.

Any reader now will be tempted to decode our politicians in Weber’s terms. Trump: ostensibly from the world of business, which, in Weber’s scheme, would usually keep such a figure out of electoral politics (although Weber did note that “plutocratic leaders certainly can try to live ‘from’ politics,” to “exploit their political dominance for private economic gain”). Maybe we’d say that Trump hijacked the apparatus of the administrative state, already in a state of erosion, and that he grifts from that apparatus while wrecking it further. Or maybe Trump is returning American politics to the pre-professional, “amateur” spoils system of the nineteenth century. Or he is himself a grotesque amateur, brought to the fore by an already odious political party that somehow collapsed to victory. Or maybe Trump is an ersatz aristocrat, from inherited wealth, who only played a businessman on television. (Weber’s writings do not anticipate our hideous celebrity politics.) Or Trump is a would-be warlord, postmodern or atavistically neo-feudal, committed to stamping a personal brand on the formerly “professional” military. Or, or, or. All are true, in their way. Maybe Weber would see in Trump a moron on the order of Kaiser Wilhelm—an equally cogent analysis.

Do these decodings clarify the matter or complicate it? Do they help us at all? They deliver a rhetorical satisfaction, certainly, and maybe an intellectual consolation. Then what? “Politik als Beruf” leaves sociology behind and becomes a secular sermon about “leadership,” and here the spell begins to break. Weber sought political salvation, of a kind, in charisma. The word is now a cliché, but for him it had a specific charge. Politics, he told his listeners in so many words, was a postlapsarian business. It cannot save any souls, because violence and coercion are conceptually essential to politics. A disenchanted universe is still a fallen universe. What had emerged from the fall was the monstrous apparatus of the modern nation-state. It was there, with its attendant armies of professionals and hangers-on; it fed you or it starved you. It was a mountain that no one really built but that we all had to live on.

Politics for Weber was brutally Darwinian in the end: Some states succeeded, and others failed. His Germany did not deserve defeat any more than the Allies deserved victory. That same moral arbitrariness made him look with a kind of grudging respect at Britain and the U.S.—made him even congratulate America for graduating from political amateurism into professional power. Meanwhile, he belittled revolutionaries. Anyone who imagined they could escape power’s realities or usher in some fundamentally new arrangement of power, he mocked. “Let’s be honest with ourselves here,” he said to the revolutionists in Munich. A belief in a revolutionary cause, “as subjectively sincere as it may be, is almost always merely a moral ‘legitimation’ for the desire for power, revenge, booty, and benefits.” (He was recycling a straw-man argument he had made for several years.)

To be enchanted by this argument is to end up thinking in a particular way about History with a capital H and Politics with a capital P. History was always a kind of test of the state: wars, economic calamities, pandemics. Such things arrived, like natural disasters. For all the twists and complexities of Weber’s sociology, this conception of History is superficial, and its prescription for Politics thin. He demystified the state only to remystify the statesman. It is an insider’s sermon, because politics was an insider’s game, and it is the state’s insiders who, nowadays, will thrill to it. Very well.

“The relationship between violence and the state is particularly close at present,” Weber said, early in his lecture. At present could mean this week, this decade, this century, this modernity. The lecture retains, no doubt, a curious power in times of calamity. I am inclined to call it a literary power. Weber held two things in profound narrative tension: We feel both the state’s glacial inevitability and the terror of its collapse. Without a bureaucrat’s “discipline and self-restraint, which is in the deepest sense ethical,” Weber said in passing, “the whole system would fall apart.” So too would it fall apart without a leader’s charisma. If this horror vacui was powerful, for Weber and his listeners, it was because in 1919 things would fall apart, or were falling apart, or had already fallen apart. The lecture contemplates that layered historical collapse with both dread and wonder.

A century on, Weber’s definition of the state is still, sometimes, a good tool to think with. The coronavirus lockdowns, for instance, laid bare the state’s essentially coercive function. In Europe, on balance, lockdowns have been accepted—acquiesced to—as a benevolent coercion, an expression of a trusted bureaucracy and a responsible leadership. In some American states, too. The lockdowns even generated their own (in Weberian terms) legitimating civic rituals. Fifteen months after “Politik als Beruf,” Weber himself would die of the flu that his lecture did not mention.

In the Netherlands, where I live and teach, the drama of that lecture, even in a pandemic, might fall on deaf ears. The peril and fragility that Weber channeled can be hard to imagine in the Low Countries, which boasted an “intelligent lockdown” that needed no spectacular show of coercion. History, here, tends not to feel like an onrushing avalanche, or a panorama of sin and suffering, or a test we might fail, but rather a march of manageable problems, all of which seem—seem—solvable. This conception is a luxury.

As for the study of the U.S., which I suppose is my own meaningful or meaningless calling, Weber said, in 1917, that “it is often possible to see things in their purest form there.” In the century since his death, the transatlantic tables have turned, and American Studies often becomes the study of political breakdown. The vocabulary of failed statehood abounds in commentaries on America, from within and without, while American liberals look often to Germany’s Angela Merkel as the paragon of Weberian statesmanship. Step back from such commentaries, though, and American history will overwhelm even Weber’s bleak definition. America sits atop other kinds of violence, it accommodates a privatized violence, it outsources violence, it brings its wars back home.

I started this essay before the murder of George Floyd, and I am finishing it during the uprising that has followed in its wake. Weber’s definition of the state, ironically, can now fit with a political temperament far more radical than Weber’s own. The uprising has as its premise that the social contract, if it ever held, has long since been broken: The state’s veil is thus drawn back. The uprising then looks Weber’s definition in the eye: The monstrous state’s violence is unjust, therefore we do not accept it as legitimate.

I was looking in Weber for illumination, or consolation, or something. I haven’t found a rudder for the present, and I don’t know how to end. But the desire for consolation brought to my mind, of all things, the unconsoling diary of Franz Kafka. I read it years ago, and every once in a while its last lines will suddenly haunt me, like the opposite of a mantra, for reasons I don’t entirely understand. Kafka died in 1924, more an outsider than an insider; his diary’s last entry reflects, in an elliptical or inscrutable way, on another disease—tuberculosis—and on another calling. “More and more fearful as I write. Every word,” he felt, was “twisted in the hands of the spirit” and became “a spear turned against the speaker.” He also looked for consolation. “The only consolation would be: it happens whether you like or no. And what you like is of infinitesimally little help.” He then looked beyond it. “More than consolation is: You too have weapons.”

 

Posted in Empire, History, Modernity, War

What If Napoleon Had Won at Waterloo?

Today I want to explore an interesting case of counterfactual history.  What would have happened if Napoleon Bonaparte had won in 1815 at the Battle of Waterloo?  What consequences might have followed for Europe in the next two centuries?  That he might have succeeded is not mere fantasy.  According to the victor, Lord Wellington, the battle was “the nearest-run thing you ever saw in your life.”

The standard account, written by the winners, is that the allies arrayed against Napoleon (primarily Britain, Prussia, Austria, and Russia) had joined together to stop him from continuing to rampage across the continent, conquering one territory after another.  From this angle, they were the saviors of freedom, who finally succeeded in vanquishing and deposing the evil dictator. 

I want to explore an alternative interpretation, which draws on two sources.  One is an article in Smithsonian Magazine by Andrew Roberts, “Why We’d Be Better Off if Napoleon Never Lost at Waterloo.”  The other is a book by the same author, Napoleon: A Life.

Napoleon

The story revolves around two different Napoleons:  the general and the ruler.  As a general, he was one of the greatest in history.  Depending on how you count, he fought 60 or 70 battles and lost only seven of them, mostly at the end.  In the process, he conquered (or controlled through alliance) most of Western Europe.  So the allies had reason to fear him and to eliminate the threat he posed.  

As a ruler, however, Napoleon looks quite different.  In this role, he was the agent of the French Revolution and its Enlightenment principles, which he succeeded in institutionalizing within France and spreading across the continent.  Roberts notes in his article that Napoleon

said he would be remembered not for his military victories, but for his domestic reforms, especially the Code Napoleon, that brilliant distillation of 42 competing and often contradictory legal codes into a single, easily comprehensible body of French law. In fact, Napoleon’s years as first consul, from 1799 to 1804, were extraordinarily peaceful and productive. He also created the educational system based on lycées and grandes écoles and the Sorbonne, which put France at the forefront of European educational achievement. He consolidated the administrative system based on departments and prefects. He initiated the Council of State, which still vets the laws of France, and the Court of Audit, which oversees its public accounts. He organized the Banque de France and the Légion d’Honneur, which thrive today. He also built or renovated much of the Parisian architecture that we still enjoy, both the useful—the quays along the Seine and four bridges over it, the sewers and reservoirs—and the beautiful, such as the Arc de Triomphe, the Rue de Rivoli and the Vendôme column.

He stood as the antithesis of the monarchical state system of the time, which was grounded in preserving the feudal privileges of the nobility and the church and the subordination of peasants and workers.  As a result, he ended up creating a lot of enemies, who initiated most of the battles he fought.  In addition, however, he drew a lot of support from key actors within the territories he conquered, to whom he looked less like an invader than a liberator.  Roberts points out in his book that:

Napoleon’s political support from inside the annexed territories came from many constituencies: urban elites who didn’t want to return to the rule of their local Legitimists, administrative reformers who valued efficiency, religious minorities such as Protestants and Jews whose rights were protected by law, liberals who believed in concepts such as secular education and the liberating power of divorce, Poles and other nationalities who hoped for national self-determination, businessmen (at least until the Continental System started to bite), admirers of the simplicity of the Code Napoléon, opponents of the way the guilds had worked to restrain trade, middle-class reformers, in France those who wanted legal protection for their purchases of hitherto ecclesiastical or princely confiscated property, and – especially in Germany – peasants who no longer had to pay feudal dues.

When the allies defeated Napoleon the first time, they exiled him to Elba and installed Louis XVIII as king, seeking to sweep away all of the gains from the revolution and the empire.  Louis failed spectacularly in gaining local support for the reversion to the Ancien Regime.  Sensing this, Napoleon escaped to the mainland after only nine months and headed for Paris.  The royalist troops sent to stop him instead rallied to his cause, and in 18 days he was eating Louis’s dinner in the Tuileries, restored as emperor without anyone firing a single shot in defense of the Bourbons.  Quite a statement about how the French, as opposed to the allies, viewed his return.  

Once back in charge, Napoleon sent a note to the allies, reassuring them that he was content to rule at home and leave conquest to the past: “After presenting the spectacle of great campaigns to the world, from now on it will be more pleasant to know no other rivalry than that of the benefits of peace, of no other struggle than the holy conflict of the happiness of peoples.” 

They weren’t buying it.  They had reason to be suspicious, but instead of waiting and seeing they launched an all-out assault on France in an effort to get him out of the way.  Roberts argues, and I agree, that their aim was not defensive but actively reactionary.  His liberalized and modernized France posed a threat to the preservation of the traditional powers of monarchy, nobility, and church.  They sought to stamp out the fires of reform and revolution before they flared up in their own domains.  In this sense, then, Roberts says Waterloo was a battle that didn’t need to happen.  It was an unprovoked, preemptive strike.

Roberts concludes his Smithsonian article with this assessment of what might have been if Waterloo had turned out differently:

If Napoleon had remained emperor of France for the six years remaining in his natural life, European civilization would have benefited inestimably. The reactionary Holy Alliance of Russia, Prussia and Austria would not have been able to crush liberal constitutionalist movements in Spain, Greece, Eastern Europe and elsewhere; pressure to join France in abolishing slavery in Asia, Africa and the Caribbean would have grown; the benefits of meritocracy over feudalism would have had time to become more widely appreciated; Jews would not have been forced back into their ghettos in the Papal States and made to wear the yellow star again; encouragement of the arts and sciences would have been better understood and copied; and the plans to rebuild Paris would have been implemented, making it the most gorgeous city in the world.

Napoleon deserved to lose Waterloo, and Wellington to win it, but the essential point in this bicentenary year is that the epic battle did not need to be fought—and the world would have been better off if it hadn’t been.

What followed his loss was a century of reaction across the continent of Europe. The Bourbons were restored and the liberal gains in Germany, Spain, Austria and Italy were rolled back.  Royalist statesmen such as Metternich and Bismarck aggressively defended their regimes against reform efforts by liberals and Marxists alike.  These regimes persisted until the First World War, which they precipitated and which eventually brought them all down — Hohenzollerns, Habsburgs, Romanovs, and Ottomans.  The reactions to the fall of these monarchies in turn set the stage for the Second World War.

You can only play out historical counterfactuals so far before the chain of contingencies becomes too long and the analysis turns wholly speculative.  But it seems quite reasonable to me to think that, if Napoleon had won at Waterloo, this history would have played out quite differently.  The existence proof of a modern liberal state in the middle of Europe would have shored up reform efforts in the surrounding monarchies and headed off the reactionary order that finally erupted into the Great War, which extinguished them all.

Posted in Academic writing, History, Writing

On Writing: How the King James Bible Shaped the English Language and Still Teaches Us How to Write

When you’re interested in improving your writing, it’s a good idea to have some models to work from.  I’ve presented some of my favorite models in this blog.  These have included a number of examples of good writing by both academics (Max Weber, E.P. Thompson, Jim March, and Mary Metz) and nonacademics (Frederick Douglass, Elmore Leonard).

Today I want to explore one of the two most influential forces in shaping the English language over the years:  The King James Bible.  (The other, of course, is Shakespeare.)  Earlier I presented one analysis by Ann Wroe, which focused on the thundering sound of the prose in this extraordinary text.  Today I want to draw on two other pieces of writing that explore the powerful model that this bible provides us all for how to write in English with power and grace.  One is by Adam Nicolson, who wrote a book on the subject (God’s Secretaries: The Making of the King James Bible).  The other, which I reprint in full at the end of this post, is by Charles McGrath.

The impulse to produce a bible in English arose with the English reformation, as a Protestant vernacular alternative to the Latin version that was canonical in the Catholic church.  The text was commissioned in 1604 by King James, who succeeded Elizabeth I after her long reign, and it was constructed by a committee of 54 scholars.  They went back to the original texts in Hebrew and Greek, but they drew heavily on earlier English translations. 

The foundational translation was written by William Tyndale, who was arrested in Antwerp and executed for heresy in 1536, and this was reworked into what became known as the Geneva Bible by Calvinists who were living in Switzerland.  One aim of the committee was to produce a version that was more compatible with the beliefs of the English and Scottish versions of the faith, but for James the primary impetus was to remove the anti-royalist tone that was embedded within the earlier text.  Recent scholars have concluded that 84% of the words in the King James New Testament and 76% in the Old Testament are Tyndale’s.

As Nicolson puts it, the language of the King James Bible is an amazing mix — “majestic but intimate, the voice of the universe somehow heard in the innermost part of the ear.”

You don’t have to be a Christian to hear the power of those words—simple in vocabulary, cosmic in scale, stately in their rhythms, deeply emotional in their impact. Most of us might think we have forgotten its words, but the King James Bible has sewn itself into the fabric of the language. If a child is ever the apple of her parents’ eye or an idea seems as old as the hills, if we are at death’s door or at our wits’ end, if we have gone through a baptism of fire or are about to bite the dust, if it seems at times that the blind are leading the blind or we are casting pearls before swine, if you are either buttering someone up or casting the first stone, the King James Bible, whether we know it or not, is speaking through us. The haves and have-nots, heads on plates, thieves in the night, scum of the earth, best until last, sackcloth and ashes, streets paved in gold, and the skin of one’s teeth: All of them have been transmitted to us by the translators who did their magnificent work 400 years ago.

Wouldn’t it be lovely if we academics could write in a way that sticks in people’s minds for 400 years?  Well, maybe that’s a bit too much to hope for.  But even if we can’t aspire to be epochally epigrammatic, there are still lessons we can learn from Tyndale and the Group of 54.

One such lesson is the power of simplicity.  Too often scholars feel the compulsion to gussy up their language with jargon and Latinate constructions in the name of professionalism.  If any idiot can understand what you’re saying, then you’re not being a serious scholar.  But the magic of the King James Bible is that it uses simple Anglo-Saxon words to make the most profound statements.  Listen to this passage from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favor to men of skill, but time and chance happeneth to them all.

Or this sentence from Paul’s letter to the Philippians:

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Or the stunning opening line of the Gospel of John:

In the beginning was the Word, and the Word was with God, and the Word was God.

This is a text that can speak clearly to the untutored while at the same time elevating them to a higher plane.  For us it’s a model for how to match simplicity with profundity.

KJB

Why the King James Bible Endures

By CHARLES McGRATH

The King James Bible, which was first published 400 years ago next month, may be the single best thing ever accomplished by a committee. The Bible was the work of 54 scholars and clergymen who met over seven years in six nine-man subcommittees, called “companies.” In a preface to the new Bible, Miles Smith, one of the translators and a man so impatient that he once walked out of a boring sermon and went to the pub, wrote that anything new inevitably “endured many a storm of gainsaying, or opposition.” So there must have been disputes — shouting; table pounding; high-ruffed, black-gowned clergymen folding their arms and stomping out of the room — but there is no record of them. And the finished text shows none of the PowerPoint insipidness we associate with committee-speak or with later group translations like the 1961 New English Bible, which T.S. Eliot said did not even rise to “dignified mediocrity.” Far from bland, the King James Bible is one of the great masterpieces of English prose.

The issue of how, or even whether, to translate sacred texts was a fraught one in those days, often with political as well as religious overtones, and it still is. The Roman Catholic Church, for instance, recently decided to retranslate the missal used at Mass to make it more formal and less conversational. Critics have complained that the new text is awkward and archaic, while its defenders (some of whom probably still prefer the Mass in Latin) insist that’s just the point — that language a little out of the ordinary is more devotional and inspiring. No one would ever say that the King James Bible is an easy read. And yet its very oddness is part of its power.

From the start, the King James Bible was intended to be not a literary creation but rather a political and theological compromise between the established church and the growing Puritan movement. What the king cared about was clarity, simplicity, doctrinal orthodoxy. The translators worked hard on that, going back to the original Hebrew, Greek and Aramaic, and yet they also spent a lot of time tweaking the English text in the interest of euphony and musicality. Time and again the language seems to slip almost unconsciously into iambic pentameter — this was the age of Shakespeare, commentators are always reminding us — and right from the beginning the translators embraced the principles of repetition and the dramatic pause: “In the beginning God created the Heaven, and the Earth. And the earth was without forme, and void, and darkenesse was upon the face of the deepe: and the Spirit of God mooved upon the face of the waters.”

The influence of the King James Bible is so great that the list of idioms from it that have slipped into everyday speech, taking such deep root that we use them all the time without any awareness of their biblical origin, is practically endless: sour grapes; fatted calf; salt of the earth; drop in a bucket; skin of one’s teeth; apple of one’s eye; girded loins; feet of clay; whited sepulchers; filthy lucre; pearls before swine; fly in the ointment; fight the good fight; eat, drink and be merry.

But what we also love about this Bible is its strangeness — its weird punctuation, odd pronouns (as in “Our Father, which art in heaven”), all those verbs that end in “eth”: “In the morning it flourisheth, and groweth up; in the evening it is cut downe, and withereth.” As Robert Alter has demonstrated in his startling and revealing translations of the Psalms and the Pentateuch, the Hebrew Bible is even stranger, and in ways that the King James translators may not have entirely comprehended, and yet their text performs the great trick of being at once recognizably English and also a little bit foreign. You can hear its distinctive cadences in the speeches of Lincoln, the poetry of Whitman, the novels of Cormac McCarthy.

Even in its time, the King James Bible was deliberately archaic in grammar and phraseology: an expression like “yea, verily,” for example, had gone out of fashion some 50 years before. The translators didn’t want their Bible to sound contemporary, because they knew that contemporaneity quickly goes out of fashion. In his very useful guide, “God’s Secretaries: The Making of the King James Bible,” Adam Nicolson points out that when the Victorians came to revise the King James Bible in 1885, they embraced this principle wholeheartedly, and like those people who whack and scratch old furniture to make it look even more ancient, they threw in a lot of extra Jacobeanisms, like “howbeit,” “peradventure,” “holden” and “behooved.”

This is the opposite, of course, of the procedure followed by most new translations, starting with Good News for Modern Man, a paperback Bible published by the American Bible Society in 1966, whose goal was to reflect not the language of the Bible but its ideas, rendering them into current terms, so that Ezekiel 23:20, for example (“For she doted upon their paramours, whose flesh is as the flesh of asses, and whose issue is like the issue of horses”) becomes “She was filled with lust for oversexed men who had all the lustfulness of donkeys or stallions.”

There are countless new Bibles available now, many of them specialized: a Bible for couples, for gays and lesbians, for recovering addicts, for surfers, for skaters and skateboarders, not to mention a superheroes Bible for children. They are all “accessible,” but most are a little tone-deaf, lacking in grandeur and majesty, replacing “through a glasse, darkly,” for instance, with something along the lines of “like a dim image in a mirror.” But what this modernizing ignores is that the most powerful religious language is often a little elevated and incantatory, even ambiguous or just plain hard to understand. The new Catholic missal, for instance, does not seem to fear the forbidding phrase, replacing the statement that Jesus is “one in being with the Father” with the more complicated idea that he is “consubstantial with the Father.”

Not everyone prefers a God who talks like a pal or a guidance counselor. Even some of us who are nonbelievers want a God who speaketh like — well, God. The great achievement of the King James translators is to have arrived at a language that is both ordinary and heightened, that rings in the ear and lingers in the mind. And that all 54 of them were able to agree on every phrase, every comma, without sounding as gassy and evasive as the Financial Crisis Inquiry Commission, is little short of amazing, in itself proof of something like divine inspiration.

 

Posted in Empire, History, Resilience, War

Resilience in the Face of Climate Change and Epidemic: Ancient Rome and Today’s America

Tell me if you think this sounds familiar:  In its latter years (500-700 CE), the Roman Empire faced a formidable challenge from two devastating environmental forces — dramatic climate change and massive epidemic.  As Mark Twain is supposed to have said, “History doesn’t repeat itself, but it often rhymes.”

During our own bout of climate change and ravaging disease, I’ve been reading Kyle Harper’s book The Fate of Rome: Climate, Disease, and the End of Empire.  The whole time, rhymes were running through my head.  We all know that things did not turn out well for Rome, whose civilization went through the most devastating collapse in world history.  The state disintegrated, the population fell by half, and the European standard of living did not regain the level it had in 500 until a thousand years later.

Fate of Rome Cover

So Rome ended badly, but what about us?  The American empire may be in eclipse, but it’s not like the end is near.  Rome was dependent on animal power and a fragile agricultural base, and its medical “system” did more harm than good.  All in all, we seem much better equipped to deal with climate change and disease than they were.  As a result, I’m not suggesting that we’re headed for the same calamitous fall that faced Roman civilization, but I do think we can learn something important by observing how they handled their own situation.

What’s so interesting about the fall of Rome is that it took so long.  The empire held on for 500 years, even under circumstances where its fall was thoroughly overdetermined.  The traditional story of the fall is about fraying political institutions in an overextended empire, surrounded by surging “barbarian” states that were prodded into existence by Rome’s looming threat.

To this political account, Harper adds the environment.  The climate was originally very kind to Rome, supporting growth during a long period of warm and wet weather known as the Roman Climate Optimum (200 BCE to 150 CE).  But then conditions grew increasingly unstable, leading to the Late Antique Little Ice Age (450-700), with massive crop failures brought on by a drop in solar energy and massive volcanic eruptions.  In the midst of this arose a series of epidemics, fostered (like our own) by the opening up of trade routes, which culminated in the bubonic plague (541-749) that killed off half of the populace.

What kept Rome going all this time was a set of resilient civic institutions.  That’s what I think we can learn from the Roman case.  My fear is that our own institutions are considerably more fragile.  In this analysis, I’m picking up on a theme from an earlier blog post:  The Triumph of Efficiency over Effectiveness: A Brief for Resilience through Redundancy.

Here is how Harper describes the institutional framework of this empire:

Rome was ruled by a monarch in all but name, who administered a far-flung empire with the aid, first and foremost, of the senatorial aristocracy. It was an aristocracy of wealth, with property requirements for entry, and it was a competitive aristocracy of service. Low rates of intergenerational succession meant that most aristocrats “came from families that sent representatives into politics for only one generation.”

The emperor was the commander-in-chief, but senators jealously guarded the right to the high posts of legionary command and prestigious governorships. The imperial aristocracy was able to control the empire with a remarkably thin layer of administrators. This light skein was only successful because it was cast over a foundational layer of civic aristocracies across the empire. The cities have been called the “load-bearing” pillars of the empire, and their elites were afforded special inducements, including Roman citizenship and pathways into the imperial aristocracy. The low rates of central taxation left ample room for peculation by the civic aristocracy. The enormous success of the “grand bargain” between the military monarchy and the local elites allowed imperial society to absorb profound but gradual changes—like the provincialization of the aristocracy and bureaucracy—without jolting the social order.

The Roman frontier system epitomized the resilience of the empire; it was designed to bend but not break, to bide time for the vast logistical superiority of the empire to overwhelm Rome’s adversaries. Even the most developed rival in the orbit of Rome would melt before the advance of the legionary columns. The Roman peace, then, was not the prolonged absence of war, but its dispersion outward along the edges of empire.

The grand and decisive imperial bargain, which defined the imperial regime in the first two centuries, was the implicit accord between the empire and “the cities.” The Romans ruled through cities and their noble families. The Romans coaxed the civic aristocracies of the Mediterranean world into their imperial project. By leaving tax collection in the hands of the local gentry, and bestowing citizenship liberally, the Romans co-opted elites across three continents into the governing class and thereby managed to command a vast empire with only a few hundred high-ranking Roman officials. In retrospect, it is surprising how quickly the empire ceased to be a mechanism of naked extraction, and became a sort of commonwealth.

Note that last part:  Rome “became a sort of commonwealth.”  It conquered much of the Western world and incorporated one-quarter of the earth’s population, but the conquered territories were generally better off under Rome than they had been before — benefiting from citizenship, expanded trade, and growing standards of living.  It was a remarkably stratified society, but its benefits extended even to the lower orders.  (For more on this issue, see my earlier post about Walter Scheidel’s book on the social benefits of war.)

At the heart of the Roman system were three cultural norms that guided civic life: self-sufficiency, reciprocity, and patronage.  Let me focus on the last of these, which seems to be dangerously absent in our own society at the moment.

The expectation of paternalistic generosity lay heavily on the rich, ensuring that less exalted members of society had an emergency lien on their stores of wealth. Of course, the rich charged for this insurance, in the form of respect and loyalty, and in the Roman Empire there was a constant need to monitor the fine line between clientage and dependence.

A key part of the grand bargain engineered by Rome was the state’s responsibility to feed its citizens.

The grain dole was the political entitlement of an imperial people, under the patronage of the emperor.

Preparation for famine — a chronic threat to premodern agricultural societies — was at the center of the system’s institutional resilience.  This was particularly important in an empire as thoroughly city-centered as Rome.  Keep in mind that Rome during the empire was the first city in the world to have 1 million residents; the second was London, 1,500 years later.

These strategies of resilience, writ large, were engrained in the practices of the ancient city. Diversification and storage were adapted to scale. Urban food storage was the first line of redundancy. Under the Roman Empire, the monumental dimensions of storage facilities attest the political priority of food security. Moreover, cities grew organically along the waters, where they were not confined to dependence on a single hinterland.

When food crisis did unfold, the Roman government stood ready to intervene, sometimes through direct provision but more often simply by the suppression of unseemly venality.

The most familiar system of resilience was the food supply of Rome. The remnants of the monumental public granaries that stored the food supply of the metropolis are still breathtaking.

Wouldn’t it be nice if we in the U.S. could face the challenges of climate change and pandemic as a commonwealth?  If so, we would be working to increase the resilience of our system:  by sharing the burden and spreading the wealth; by building up redundancy to store up for future challenges; by freeing ourselves from the ideology of economic efficiency in the service of social effectiveness.  Wouldn’t that be nice.

Posted in Culture, History, Politics, Populism, Sociology

Colin Woodard: Maps that Show the Historical Roots of Current US Political Faultlines

This post is a commentary on Colin Woodard’s book American Nations: A History of the Eleven Rival Regional Cultures of North America.  

Woodard argues that the United States is not a single national culture but  a collection of national cultures, each with its own geographic base.  The core insight for this analytical approach comes from “Wilbur Zelinsky of Pennsylvania State University [who] formulated [a] theory in 1973, which he called the Doctrine of First Effective Settlement. ‘Whenever an empty territory undergoes settlement, or an earlier population is dislodged by invaders, the specific characteristics of the first group able to effect a viable, self-perpetuating society are of crucial significance for the later social and cultural geography of the area, no matter how tiny the initial band of settlers may have been,’ Zelinsky wrote. ‘Thus, in terms of lasting impact, the activities of a few hundred, or even a few score, initial colonizers can mean much more for the cultural geography of a place than the contributions of tens of thousands of new immigrants a few generations later.’”

I’m suspicious of theories that smack of cultural immutability and cultural determinism, but Woodard’s account is more sophisticated than that.  His is a story of the power of founders in a new institutional setting, who lay out the foundational norms for a society that lacks any cultural history of its own or which expelled the preexisting cultural group (in the U.S. case, Native Americans).  So part of the story is about the acculturation of newcomers into an existing worldview.  But another part is the highly selective nature of immigration, since new arrivals often seek out places to settle that are culturally compatible.  They may target a particular destination because of its cultural characteristics, creating a pipeline of like-minded immigrants; or they may move on to another territory if the first port of entry is not to their taste.  Once established, these cultures often expanded westward as the country developed, extending the size and geographical scope of each nation.

Why does he insist on calling them nations?  At first this bothered me a bit, but then I realized he was using the term “nation” in Benedict Anderson’s sense as “imagined communities.”  Tidewater and Yankeedom are not nation states; they are cultural components of the American state.  But they do act as nations for their citizens.  Each of these nations is a community of shared values and worldviews that binds people together who have never met and often live far away.  The magic of the nation is that it creates a community of common sense and purpose that extends well beyond the reach of normal social interaction.  If you’re Yankee to the core, you can land in a strange town in Yankeedom and feel at home.  These are my people.  I belong here.

He argues that these national groupings continue to have a significant impact on the cultural geography of the US, shaping people’s values, styles of social organization, views of religion and government, and ultimately how they vote.  The kicker is the alignment between the spatial distribution of these cultures and current voting patterns.  He lays out this argument succinctly in a 2018 op-ed he wrote for the New York Times.  I recommend reading it.

The whole analysis is neatly summarized in the two maps he deployed in that op-ed, which I have reproduced below.

The Map of America’s 11 Nations

11 Nations Map

This first map shows the geographic boundaries of the various cultural groupings in the U.S.  It all started on the east coast with the founding cultural binary that shaped the formation of the country in the late 18th century — New England Yankees and Tidewater planters.  He argues that they are direct descendants of the two factions in the English civil war of the mid-17th century: the Yankees descend from the Calvinist Roundheads, who (especially after being routed by the Restoration in England) sought to establish a new theocratic society in the northeast founded on strong government, and the Tidewater planters from the Anglican Cavaliers, who sought to reproduce the decentralized English aristocratic ideal on Virginia plantations.  In between were the Dutch entrepôt of New York, focused on commerce and multiculturalism (think “Hamilton”), and the Quaker colony in Pennsylvania, founded on equality and suspicion of government.  The US constitution was an effort to balance all of these cultural priorities within a single federal system.

Then came two other groups that didn’t fit well into any of these four cultural enclaves.  The immigrants to the Deep South originated in the slave societies of the British West Indies, bringing with them a rigid caste structure and a particularly harsh version of chattel slavery.  Immigrants to Greater Appalachia came from the Scots-Irish clan cultures in Northern Ireland and the Scottish borderlands, with a strong commitment to individual liberty, resentment of government, and a taste for violence.

Tidewater and Yankeedom dominated the presidency and federal government for the country’s first 40 years.  But in 1828 the US elected its first president from rapidly expanding Appalachia, Andrew Jackson.  And by then the massive westward expansion of the Deep South, along with the extraordinary wealth and power that accrued from its cotton-producing slave economy, created the dynamics leading to the Civil War.  This pitted the four nations of the northeast against Tidewater and Deep South, with Appalachia split between the two, resentful of both Yankee piety and Southern condescension.  The multiracial and multicultural nations of French New Orleans and the Mexican southwest (El Norte) were hostile to the Deep South and resented its efforts to expand its dominion westward.

The other two major cultural groupings emerged in the mid 19th century.  The thin strip along the west coast consisted of Yankees in the cities and Appalachians in the back country, combining the utopianism of the former with the radical individualism of the latter.  The Far West is the one grouping that is based not on cultural geography but physical geography.  A vast arid area unsuited to farming, it became the domain of the only two entities powerful enough to control it — large corporations (railroad and mining), which exploited it, and the federal government, which owned most of the land and provided armed protection from Indians.

So let’s jump ahead and look at the consequences of this cultural landscape for our current political divisions.  Examine the electoral map for the 2016 presidential race, which shows the vote in Woodard’s 11 nations.

The 2016 Electoral Map

2016 Vote Map

Usually you see voting maps with results by state.  Here instead we see voting results by county, which allows for a more fine-grained analysis.  Woodard assigns each county to one of the 11 “nations” and then shows the red or blue vote margin for each cultural grouping.

It’s striking to see how well the nations match the vote.  The strongest vote for Clinton came from the Left Coast, El Norte, and New Netherland, with substantial support from Yankeedom, Tidewater, and Spanish Caribbean.  Midlands was only marginally supportive of the Democrat.  Meanwhile the Deep South and Far West were modestly pro-Trump (about as much as Yankeedom was pro-Clinton), but the true kicker was Appalachia, which voted overwhelmingly for Trump (along with New France in southern Louisiana).

Appalachia forms the heart of Trump’s electoral base of support.  It’s an area that resents intellectual, cultural, and political elites; that turns away from mainstream religious denominations in favor of evangelical sects; and that lags behind in the 21st-century information economy.  As a result, this is the heartland of populism.  It’s no wonder that the portrait on the wall of Trump’s Oval Office is of Andrew Jackson.

Now one more map, this time showing where in the country people have been social distancing and where they haven’t, as measured by how much they were traveling away from home (using cell phone data).  It comes from a piece Woodard recently published in Washington Monthly.

Social Distancing Map

Once again, the patterns correspond nicely to the 11 nations.  Here’s how Woodard summarizes the data:

Yankeedom, the Midlands, New Netherland, and the Left Coast show dramatic decreases in movement – 70 to 100 percent in most counties, whether urban or rural, rich, or poor.

Across much of Greater Appalachia, the Deep South and the Far West, by contrast, travel fell by only 15 to 50 percent. This was true even in much of Kentucky, the interior counties of Washington and Oregon, where Democratic governors had imposed a statewide shelter-in-place order.

Not surprisingly, most of the states where governors imposed stay-at-home orders by March 27 are located in or dominated by one or a combination of the communitarian nations. This includes states whose governors are Republicans: Ohio, New Hampshire, Vermont, and Massachusetts.

Most of the laggard governors lead states dominated by individualistic nations. In the Deep South and Greater Appalachia you find Florida’s Ron DeSantis, who allowed spring breakers to party on the beaches. There’s Brian Kemp of Georgia who left matters in the hands of local officials for much of the month and then, on April 2, claimed to have just learned the virus can be transmitted by asymptomatic individuals. You have Asa Hutchinson of Arkansas, who on April 7 denied mayors the power to impose local lockdowns. And then there’s Mississippi’s Tate Reeves, who resisted action because “I don’t like government telling private business what they can and cannot do.”

Nothing like a pandemic to show what your civic values are.  Is it all about us or all about me?

Posted in Higher Education, History

The Exceptionalism of American Higher Education

This post is an op-ed I published on my birthday (May 17) in 2018 on the online international opinion site, Project Syndicate.  The original is hidden behind a paywall; here are PDFs in English, Spanish, and Arabic.

It’s a brief essay about what is distinctive about the American system of higher education, drawn from my book, A Perfect Mess: The Unlikely Ascendancy of American Higher Education.

Web Image

The Exceptionalism of American Higher Education

 By David F. Labaree

STANFORD – In the second half of the twentieth century, American universities and colleges emerged as dominant players in the global ecology of higher education, a dominance that continues to this day. In terms of the number of Nobel laureates produced, eight of the world’s top ten universities are in the United States. Forty-two of the world’s 50 largest university endowments are in America. And, when ranked by research output, 15 of the top 20 institutions are based in the US.

Given these metrics, few can dispute that the American model of higher education is the world’s most successful. The question is why, and whether the US approach can be exported.

While America’s oldest universities date to the seventeenth and eighteenth centuries, the American system of higher education took shape in the early nineteenth century, under conditions in which the market was strong, the state was weak, and the church was divided. The “university” concept first arose in medieval Europe, with the strong support of monarchs and the Catholic Church. But in the US, with the exception of American military academies, the federal government never succeeded in establishing a system of higher education, and states were too poor to provide much support for colleges within their borders.

In these circumstances, early US colleges were nonprofit corporations that had state charters but little government money. Instead, they relied on student tuition, as well as donations from local elites, most of whom were more interested in how a college would increase the value of their adjoining property than they were in supporting education.

As a result, most US colleges were built on the frontier rather than in cities; the institutions were used to attract settlers to buy land. In this way, the first college towns were the equivalent of today’s golf-course developments – verdant enclaves that promised a better quality of life. At the same time, religious denominations competed to sponsor colleges in order to plant their own flags in new territories.

What this competition produced was a series of small, rural, and underfunded colleges led by administrators who had to learn to survive in a highly competitive environment, and where supply long preceded demand. As a result, schools were positioned to capitalize on the modest advantages they did have. Most were highly accessible (there was one in nearly every town), inexpensive (competition kept a lid on tuition), and geographically specific (colleges often became avatars for towns whose names they took). By 1880, there were five times as many colleges and universities in the US as in all of Europe.

The unintended consequence of this early saturation was a radically decentralized system of higher education that fostered a high degree of autonomy. The college president, though usually a clergyman, was in effect the CEO of a struggling enterprise that needed to attract and retain students and donors. Although university presidents often begged for, and occasionally received, state money, government funding was neither sizeable nor reliable.

In the absence of financial security, these educational CEOs had to hustle. They were good at building long-term relationships with local notables and tuition-paying students. Once states began opening public colleges in the mid-nineteenth century, the new institutions adapted to the existing system. State funding was still insufficient, so leaders of public colleges needed to attract tuition from students and donations from graduates.

By the start of the twentieth century, when enrollments began to climb in response to a growing demand for white-collar workers, the mixed public-private system was set to expand. Local autonomy gave institutions the freedom to establish a brand in the marketplace, and in the absence of strong state control, university leaders positioned their institutions to pursue opportunities and adapt to changing conditions. As funding for research grew after World War II, college administrators started competing vigorously for these new sources of support.

By the middle of the twentieth century, the US system of higher education reached maturity, as colleges capitalized on decentralized and autonomous governance structures to take advantage of the lush opportunities for growth that arose during the Cold War. Colleges were able to leverage the public support they had developed during the long lean years, when a university degree was highly accessible and cheap. With the exception of the oldest New England colleges – the “Ivies” – American universities never developed the elitist aura of Old World institutions like Oxford and Cambridge. Instead, they retained a populist ethos – embodied in football and fraternities and flexible academic standards – that continues to serve them well politically.

So, can other systems of higher learning adapt the US model of educational excellence to local conditions? The answer is straightforward: no.  You had to be there.

In the twenty-first century, it is not possible for colleges to emerge with the same degree of autonomy that American colleges enjoyed some 200 years ago before the development of a strong nation state. Today, most non-American institutions are wholly-owned subsidiaries of the state; governments set priorities, and administrators pursue them in a top-down manner. By contrast, American universities have retained the spirit of independence, and faculty are often given latitude to channel entrepreneurial ideas into new programs, institutes, schools, and research. This bottom-up structure makes the US system of higher education costly, consumer-driven, and deeply stratified. But this is also what gives the system its global edge.

 

Posted in History, Sociology, War

War! What Is It Good For?

This post is an overview of the 2014 book by Stanford classicist Ian Morris, War! What Is It Good For?  In it he makes the counter-intuitive argument that over time some forms of war have been socially productive.  In contrast with the message of the 1970s song of the same name, war may in fact be good for something.

The central story is this.  Some wars lead to the incorporation of large numbers of people under a single imperial state.  In the short run, this is devastatingly destructive; but in the long run it can be quite beneficial.  Under such regimes (e.g., the early Roman and Chinese empires and the more recent British and American empires), the state imposes a new order that sharply reduces rates of violent death and fosters economic development.  The result is an environment that allows the population to live longer and grow wealthier, not just in the imperial heartland but also in the newly colonized territories.

Morris War Cover

So how does this work?  He starts with a key distinction made by Mancur Olson.  All states are a form of banditry, Olson says, since they extract revenue by force.  Some are roving bandits, who sweep into town, sack the place, and then move on.  But others are stationary bandits, who are stuck in place.  In this situation, the state needs to develop a way to gain the greatest revenue from its territory over the long haul, which means establishing order and promoting economic development.  It has an incentive to foster the safety and productivity of its population.

Rulers steal from their people too, Olson recognized, but the big difference between Leviathan and the rape-and-pillage kind of bandit is that rulers are stationary bandits. Instead of stealing everything and hightailing it, they stick around. Not only is it in their interest to avoid the mistake of squeezing every last drop from the community; it is also in their interest to do whatever they can to promote their subjects’ prosperity so there will be more for the rulers to take later.

This argument is an extension of the one that Thomas Hobbes made in Leviathan:

Whatsoever therefore is consequent to a time of war where every man is enemy to every man, the same is consequent to the time wherein men live without other security than what their own strength and their own invention shall furnish them withal. In such condition there is no place for industry, because the fruit thereof is uncertain, and consequently no culture of the earth, no navigation nor use of the commodities that may be imported by sea, no commodious building, no instruments of moving and removing such things as require much force, no knowledge of the face of the earth; no account of time, no arts, no letters, no society, and, which is worst of all, continual fear and danger of violent death, and the life of man solitary, poor, nasty, brutish, and short.

(Wow, that boy could write.)

Morris says that stationary bandit states first arose with the emergence of agriculture, when tribes found that staying in place and tending their crops could support a larger population than roving across the landscape hunting and gathering.  This leads to what he calls caging.  People can’t easily move and the state has an incentive to protect them from marauders so it can harvest the surplus from this population for its own benefit.

Over time, these states have reduced violence to an extraordinary extent, reining in “the continual fear and danger of violent death.”

Averaged across the planet, violence killed about 1 person in every 4,375 in 2012, implying that just 0.7 percent of the people alive today will die violently, as against 1–2 percent of the people who lived in the twentieth century, 2–5 percent in the ancient empires, 5–10 percent in Eurasia in the age of steppe migrations, and a terrifying 10–20 percent in the Stone Age.

In the process, states found that they prospered most when they relaxed direct control of the economy and allowed markets to develop according to their own dynamic.  This created a paradoxical relationship between state and economy.

Markets could not work well unless governments got out of them, but markets could not work at all unless governments got into them, using force to pacify the world and keep the Beast at bay. Violence and commerce were two sides of the same coin, because the invisible hand needed an invisible fist to smooth the way before it could work its magic.

Empires, of course, don’t last forever.  At a certain point, hegemony yields to outside threats.  One chronic source of threat in Eurasian history was the roving bandit states of the steppes, which did in Rome and constantly harried China.  Another threat is the rise of a new hegemon.  The British global empire of the 18th and 19th centuries fostered the emergence of the United States, which became the empire of the late 20th and early 21st centuries, and this in turn fostered the development of China.

And there can be long periods of time between empires, when wars are largely unproductive.  After the fall of Rome, Europe experienced nearly a millennium of unproductive wars, as small states competed for dominance without anyone ever actually attaining it, a condition he calls “feudal anarchy.”  The result was a sharp increase in violence and a sharp decline in the standard of living.  It wasn’t until the 16th century that Europe regained the per capita income enjoyed by the Romans.

It seems to me, in fact, that “feudal anarchy” is an excellent description not just of western Europe between about 900 and 1400 but also of most of Eurasia’s lucky latitudes in the same period. From England to Japan, societies staggered toward feudal anarchy as their Leviathans dismembered themselves.

But 1400 saw the beginning of the 500-year war in which Europe strove mightily to dominate the world, finally producing the imperium of the British and then the Americans.

Morris’s conclusion from this extensive analysis is disturbing but also compelling:

The answer to the question in this book’s title is both paradoxical and horrible. War has been good for making humanity safer and richer, but it has done so through mass murder. But because war has been good for something, we must recognize that all this misery and death was not in vain. Given a choice of how to get from the poor, violent Stone Age to … peace and prosperity…, few of us, I am sure, would want war to be the way, but evolution—which is what human history is—is not driven by what we want. In the end, the only thing that matters is the grim logic of the game of death.

…while war is the worst imaginable way to create larger, more peaceful societies, it is pretty much the only way humans have found.

One way to test the validity of Morris’s argument in this book is to compare it to the analysis by his Stanford colleague, Walter Scheidel, in his latest book, Escape from Rome, which I reviewed here two weeks ago.  Scheidel argues that the fall of Rome, and the failure of any new empire to replace it for most of the next millennium, is the reason that Europe made the turn toward modernity before any other region of the world.  In Scheidel’s view, what Morris calls feudal anarchy, which shortened lifespans and fostered poverty for so long and for so many people, was the key spur to economic, social, technological, political, and military innovation — as competing states desperately sought to survive in the war of all against all.

Empires may keep the peace and promote commerce, but they also emphasize the preservation of power over the development of science and the invention of new technologies.  This is why the key engines of modernization in early modern Europe were not the large countries in the center — France and Spain — but the small countries on the margins, England and the Netherlands.

For most people, enjoying relative peace and prosperity within an empire is a lot better than the alternative.  But for the future global population as a whole, the greatest benefit may come from a sustained competition among warring states, which spurs the breakthrough innovations that have produced history’s most dramatic advances in peace and prosperity.  In this sense, even the unproductive wars of the feudal period may have been productive in the long run.  Once again, war was the answer.

Posted in Capitalism, Global History, Higher Education, History, State Formation, Theory

Escape from Rome: How the Loss of Empire Spurred the Rise of Modernity — and What this Suggests about US Higher Ed

This post is a brief commentary on historian Walter Scheidel’s latest book, Escape from Rome.  It’s a stunningly original analysis of a topic that has long fascinated scholars like me:  How did Europe come to create the modern world?  His answer is this:  Europe became the cauldron of modernity and the dominant power in the world because of the collapse of the Roman empire — coupled with the fact that no other power was able to replace it for the next millennium.  The secret of European success was the absence of central control.  This is what led to the extraordinary inventions that characterized modernity — in technology, energy, war, finance, governance, science, and economy.

Below I lay out central elements of his argument, providing a series of salient quotes from the text to flesh out the story.  In the last few years I’ve come to read books exclusively on Kindle and EPUB, which allows me to copy passages that catch my interest into Evernote for future reference. So that’s where these quotes come from and why they don’t include page numbers.

At the end, I connect Scheidel’s analysis with my own take on the peculiar history of US higher education, as spelled out in my book A Perfect Mess.  My argument parallels his, showing how the US system arose in the absence of a strong state and dominant church, which fostered creative competition among colleges for students and money.  Out of this unpromising mess of institutions emerged a system of higher ed that came to dominate the academic world.

Escape from Rome

Here’s how Scheidel describes the consequences for Europe that arose from the fall of Rome and the long-time failure of efforts to impose a new empire there.

I argue that a single condition was essential in making the initial breakthroughs possible: competitive fragmentation of power. The nursery of modernity was riven by numerous fractures, not only by those between the warring states of medieval and early modern Europe but also by others within society: between state and church, rulers and lords, cities and magnates, knights and merchants, and, most recently, Catholics and Protestants. This often violent history of conflict and compromise was long but had a clear beginning: the fall of the Roman empire that had lorded it over most of Europe, much as successive Chinese dynasties lorded it over most of East Asia. Yet in contrast to China, nothing like the Roman empire ever returned to Europe.

Recurrent empire on European soil would have interfered with the creation and flourishing of a stable state system that sustained productive competition and diversity in design and outcome. This made the fall and lasting disappearance of hegemonic empire an indispensable precondition for later European exceptionalism and thus, ultimately, for the making of the modern world we now inhabit.

From this developmental perspective, the death of the Roman empire had a much greater impact than its prior existence and the legacy it bequeathed to later European civilization.

Contrast this with China, where dynasties rose and fell but where empire was a constant until the start of the 20th century.  It’s an extension of an argument that others, such as David Landes, have developed about the creative possibilities unleashed by a competitive state system in comparison to the stability and stasis of an imperial power.  Think about the relative stagnation of the Ottoman, Austro-Hungarian, and Russian empires in the 17th, 18th, and 19th centuries compared with the dynamic emerging nation states of Western Europe.  Think also of the paradox within Western Europe, in which the drivers of modernization came not from the richest and strongest imperial powers — Spain, France, and Austria — but from the marginal kingdom of England and the tiny Dutch republic.

The comparison between Europe and China during the second half of the first millennium is telling:

Two things matter most. One is the unidirectional character of European developments compared to the back and forth in China. The other is the level of state capacity and scale from and to which these shifts occurred. If we look at the notional endpoints of around 500 and 1000 CE, the dominant trends moved toward imperial restoration in China and toward inter- and intrastate fragmentation in Europe.

Scheidel shows how social power fragmented after the fall of Rome in such a way that made it impossible for a new hegemonic power to emerge.

After Rome’s collapse, the four principal sources of social power became increasingly unbundled. Political power was claimed by monarchs who gradually lost their grip on material resources and thence on their subordinates. Military power devolved upon lords and knights. Ideological power resided in the Catholic Church, which fiercely guarded its long-standing autonomy even as its leadership was deeply immersed in secular governance and the management of capital and labor. Economic power was contested between feudal lords and urban merchants and entrepreneurs, with the latter slowly gaining the upper hand. In the heyday of these fractures, in the High Middle Ages, weak kings, powerful lords, belligerent knights, the pope and his bishops and abbots, and autonomous capitalists all controlled different levers of social power. Locked in unceasing struggle, they were compelled to cooperate and compromise to make collective action possible.

He points out that “The Christian church was the most powerful and enduring legacy of the Roman empire,” becoming “Europe’s only functioning international organization.”  But in the realms of politics, war, and economy the local element was critical, which produced a situation where local innovation could emerge without interference from higher authority.

The rise of estates and the communal movement shared one crucial characteristic: they produced bodies such as citizen communes, scholarly establishments, merchant guilds, and councils of nobles and commoners that were, by necessity, relatively democratic in the sense that they involved formalized deliberative and consensus-building interactions. Over the long run, these bodies gave Latin Europe an edge in the development of institutions for impersonal exchange that operated under the rule of law and could be scaled up in response to technological change.

Under these circumstances, the states that started to emerge in Europe in the middle ages built on the base of distributed power and local initiative that developed in the vacuum left by the Roman Empire.

As state power recoalesced in Latin Europe, it did so restrained by the peculiar institutional evolution and attendant entitlements and liberties that this acutely fractured environment had engendered and that—not for want of rulers’ trying—could not be fully undone. These powerful medieval legacies nurtured the growth of a more “organic” version of the state—as opposed to the traditional imperial “capstone” state—in close engagement with organized representatives of civil society.

Two features were thus critical: strong local government and its routinized integration into polity-wide institutions, which constrained both despotic power and aristocratic autonomy, and sustained interstate conflict. Both were direct consequences of the fading of late Roman institutions and the competitive polycentrism born of the failure of hegemonic empire. And both were particularly prominent in medieval England: the least Roman of Western Europe’s former Roman provinces, it experienced what with the benefit of hindsight turned out to be the most propitious initial conditions for future transformative development.

The Pax Romana was replaced by a nearly constant state of war, with the proliferation of castle building and the dispersion of military capacity at the local level.  These wars were devastating for the participants but became a primary spur for technological, political, and economic innovation.  Everyone needed to develop an edge to help with the inevitable coming conflict.

After the Reformation, the small, marginal Protestant states on the North Sea enjoyed a paradoxical advantage in the early modern period, when Catholic Spain, France, and Austria were developing increasingly strong centralized states.  Their marginality allowed them to build most effectively on the inherited medieval model.

…it made a difference that the North Sea region was alone in preserving medieval decentralized political structures and communitarian legacies and building on them during the Reformation while more authoritarian monarchies rose across much of the continent—what Jan Luiten van Zanden deems “an unbroken democratic tradition” from the communal movement of the High Middle Ages to the Dutch Revolt and England’s Glorious Revolution.

England in particular benefited from the differential process of development in Europe.

Yet even as a comprehensive balance sheet remains beyond our reach, there is a case to be made that the British economy expanded and modernized in part because of rather than in spite of the tremendous burdens of war, taxation, and protectionism. By focusing on trade and manufacture as a means of strengthening the state, Britain’s elites came to pursue developmental policies geared toward the production of “goods with high(er) added value, that were (more) knowledge and capital intensive and that were better than those of foreign competitors so they could be sold abroad for a good price.”

Thanks to a combination of historical legacies and geography, England and then Britain happened to make the most of their pricey membership in the European state system. Economic growth had set in early; medieval integrative institutions and bargaining mechanisms were preserved and adapted to govern a more cohesive state; elite commitments facilitated high levels of taxation and public debt; and the wars that mattered most were won.

Reduced to its essentials, the story of institutional development followed a clear arc. In the Middle Ages, the dispersion of power within polities constrained the intensity of interstate competition by depriving rulers of the means to engage in sustained conflict. In the early modern period, these conditions were reversed. Interstate conflict escalated as diversity within states diminished and state capacity increased. Enduring differences between rival polities shaped and were in turn shaped by the ways in which elements of earlier domestic heterogeneity, bargaining and balancing survived and influenced centralization to varying degrees. The key to success was to capitalize on these medieval legacies in maximizing internal cohesion and state capacity later. This alone made it possible to prevail in interstate conflict without adopting authoritarian governance that stifled innovation. The closest approximations of this “Goldilocks scenario” could be found in the North Sea region, first in the Netherlands and then in England.

As maritime European states (England, Spain, Portugal, and the Dutch Republic) spread out across the globe, the competition increased exponentially — which then provided even stronger incentives for innovation at all levels of state and society.

Polycentrism was key. Interstate conflict did not merely foster technological innovation in areas such as ship design and weaponry that proved vital for global expansion, it also raised the stakes by amplifying both the benefits of overseas conquest and its inverse, the costs of opportunities forgone: successful ventures deprived rivals from rewards they might otherwise have reaped, and vice versa. States played a zero-sum game: their involvements overseas have been aptly described as “a competitive process driven as much by anxiety over loss as by hope of gain.”

In conclusion, Scheidel argues that bloody and costly conflict among competing states was the source of rapid modernization and the rise of European domination of the globe.

I am advocating a perspective that steers clear of old-fashioned triumphalist narratives of “Western” exceptionalism and opposing denunciations of colonialist victimization. The question is not who did what to whom: precisely because competitive fragmentation proved so persistent, Europeans inflicted horrors on each other just as liberally as they meted them out to others around the globe. Humanity paid a staggering price for modernity. In the end, although this may seem perverse to those of us who would prefer to think that progress can be attained in peace and harmony, it was ceaseless struggle that ushered in the most dramatic and exhilaratingly open-ended transformation in the history of our species: the “Great Escape.” Long may it last.

I strongly recommend that you read this book.  There’s insight and provocation on every page.

The Parallel with the History of US Higher Education

As I mentioned at the beginning, my own analysis of the emergence of American higher ed tracks nicely with Scheidel’s analysis of Europe after the fall of Rome.  US colleges arose in the early 19th century under conditions where the state was weak, the church divided, and the market strong.  In the absence of a strong central power and a reliable source of financial support, these colleges came into existence as corporations with state charters but not state funding.  (State colleges came later but followed the model of their private predecessors.)  Their creation had less to do with advancing knowledge than with serving more immediately practical aims.

One was to advance the faith in a highly competitive religious environment.  This provided a strong incentive to plant the denominational flag across the countryside, especially on the steadily moving western frontier.  A college was a way for Lutherans and Methodists and Presbyterians and others to announce their presence, educate congregants, attract newcomers, and train clergy.  Thus the huge number of colleges in Ohio, the old Northwest Territory.

Another spur for college formation was the crass pursuit of money.  The early US was a huge, underpopulated territory which had too much land and not enough buyers.  This turned nearly everyone on the frontier into a land speculator (ministers included), feverishly coming up with schemes to make the land in their town more valuable for future residents than the land in other towns in the area.  One way to do this was to set up a school, telegraphing to prospects that this was a place to settle down and raise a family.  When other towns followed suit, you could up the ante by establishing a college, usually bearing the town name, which told the world that yours was not some dusty agricultural village but a vibrant center of culture.

The result was a vast number of tiny and unimpressive colleges scattered across the less populated parts of a growing country.  Without strong funding from church or state, they struggled to survive in a highly competitive setting.  This they managed by creating lean institutions that were adept at attracting and retaining student consumers and eliciting donations from alumni and from the wealthier people in town.  The outcome was the most overbuilt system of higher education the world has ever seen, with five times as many colleges in 1880 as in the entire continent of Europe.

All the system lacked was academic credibility and a strong incentive for students to enroll.  These conditions were met at the end of the century, with the arrival of the German research university to crown the system and give it legitimacy, and with the rise of the corporation and its need for white-collar workers.

At this point, the chaotic fragmentation and chronic competition that characterized the American system of higher education turned out to be enormously functional.  Free from the constraints that European nation states and national churches imposed on universities, American institutions could develop programs, lure students, hustle for dollars, and promote innovations in knowledge production and technology.  They knew how to make themselves useful to their communities and their states, developing a broad base of political and financial support and demonstrating their social and economic value.

Competing colleges, like competing states, promoted a bottom-up vitality in the American higher ed system that was generally lacking in the older institutions of Europe, which were under the control of a strong state or church.  Early institutional chaos led to later institutional strength, a system that was not created by design but emerged from an organic process of evolutionary competition.  In the absence of Rome (read: a hegemonic national university), the US higher education system became Rome.

Posted in Academic writing, Capitalism, History

E.P. Thompson: Time, Work-Discipline, and Industrial Capitalism

This post is a tribute to a wonderful essay by the great British historian of the working class, E. P. Thompson.  His classic work is The Making of the English Working Class, first published in 1963.  The paper I’m touting here provides a lovely window into the heart of his craft, which is an unlikely combination of Oxbridge erudition and Marxist analysis.

It’s the story of the new sense of time that emerged with the arrival of capitalism, when time suddenly became money.  If you’re making shoes to order in a precapitalist workshop, you work until the order is completed and then you take it easy.  But if your labor is being hired by the hour, then your employer has an enormous incentive to squeeze as much productivity as possible out of every minute you are on the clock.  The old model is more natural for humans: work until you’ve accomplished what you need and then stop.  Binge and break.  Think about the way college students spend their time when they’re not being supervised — a mix of all-nighters and partying.

Thompson captures the essence of the shift from natural time to the time clock with this beautiful epigraph from Thomas Hardy’s Tess of the D’Urbervilles.

Tess … started on her way up the dark and crooked lane or street not made for hasty progress; a street laid out before inches of land had value, and when one-handed clocks sufficiently subdivided the day.

This quote and his analysis have had a huge impact on the way I came to see the world as a scholar of history.

Here’s a link to the paper, which was published in the journal Past and Present in 1967.  Enjoy.

Front page of “Time, Work-Discipline, and Industrial Capitalism,” Past and Present, 1967