Posted in Empire, History, Modernity

Mikhail — How the Ottomans Shaped the Modern World

This post is a reflection on the role that the Ottoman Empire played in shaping the modern world.  It draws on a new book by Alan Mikhail, God’s Shadow: Sultan Selim, His Ottoman Empire, and the Making of the Modern World.  

The Ottomans are the Rodney Dangerfields of empires: They don’t get no respect.  If we picture them at all, it’s either the exotic image of turbans and concubines in Topkapi Palace or the sad image of the “sick man of Europe” in the days before World War I, which finally put them out of their misery.  Neither does them justice.  For a long time, they were the most powerful empire in the world, which dramatically shaped life on three continents — Europe, Asia, and Africa. 

But what makes their story so interesting is that it is more than just an account of some faded glory in the past.  As Mikhail points out, the Ottomans left an indelible stamp on the modern world.  It was their powerful presence in the middle of Eurasia that pushed the minor but ambitious states of Western Europe to set sail for the East and West Indies.  The Dutch, Portuguese, Spanish, and English couldn’t get to the treasures of China and India by land because of the impassable presence of the Ottomans.  So they either had to sail east around Africa to get there or forge a new path to the west, which led them to the Americas.  In fact, they did both, and the result was the riches that turned them into imperial powers who came to dominate much of the known world.  

Without the Ottomans, there would have been no massive expansion of world trade, no Spanish empire, and none of the riches and technological innovations that spurred the industrial revolution and empowered the English and American empires.

God's Shadow

Here are some passages from the book that give you a feel for the impact the Ottomans had:

For half a century before 1492, and for centuries afterward, the Ottoman Empire stood as the most powerful state on earth: the largest empire in the Mediterranean since ancient Rome, and the most enduring in the history of Islam. In the decades around 1500, the Ottomans controlled more territory and ruled over more people than any other world power. It was the Ottoman monopoly of trade routes with the East, combined with their military prowess on land and on sea, that pushed Spain and Portugal out of the Mediterranean, forcing merchants and sailors from these fifteenth-century kingdoms to become global explorers as they risked treacherous voyages across oceans and around continents—all to avoid the Ottomans.

From China to Mexico, the Ottoman Empire shaped the known world at the turn of the sixteenth century. Given its hegemony, it became locked in military, ideological, and economic competition with the Spanish and Italian states, Russia, India, and China, as well as other Muslim powers. The Ottomans influenced in one way or another nearly every major event of those years, with reverberations down to our own time. Dozens of familiar figures, such as Columbus, Vasco da Gama, Montezuma, the reformer Luther, the warlord Tamerlane, and generations of popes—as well as millions of other greater and lesser historical personages—calibrated their actions and defined their very existence in reaction to the reach and grasp of Ottoman power.

Other facts, too, have blotted out our recognition of the Ottoman influence on our own history. Foremost, we tend to read the history of the last half-millennium as “the rise of the West.” (This anachronism rings as true in Turkey and the rest of the Middle East as it does in Europe and America.) In fact, in 1500, and even in 1600, there was no such thing as the now much-vaunted notion of “the West.” Throughout the early modern centuries, the European continent consisted of a fragile collection of disparate kingdoms and small, weak principalities locked in constant warfare. The large land-based empires of Eurasia were the dominant powers of the Old World, and, apart from a few European outposts in and around the Caribbean, the Americas remained the vast domain of its indigenous peoples. The Ottoman Empire held more territory in Europe than did most European-based states. In 1600, if asked to pick a single power that would take over the world, a betting man would have put his money on the Ottoman Empire, or perhaps China, but certainly not on any European entity.

The sheer scope of the empire at its height was extraordinary:

For close to four centuries, from 1453 until well into the exceedingly fractured 1800s, the Ottomans remained at the center of global politics, economics, and war. As European states rose and fell, the Ottomans stood strong. They battled Europe’s medieval and early modern empires, and in the twentieth century continued to fight in Europe, albeit against vastly different enemies. Everyone from Machiavelli to Jefferson to Hitler—quite an unlikely trio—was forced to confront the challenge of the Ottomans’ colossal power and influence. Counting from their first military victory, at Bursa, they ruled for nearly six centuries in territories that today comprise some thirty-three countries. Their armies would control massive swaths of Europe, Africa, and Asia; some of the world’s most crucial trade corridors; and cities along the shores of the Mediterranean, Red, Black, and Caspian seas, the Indian Ocean, and the Persian Gulf. They held Istanbul and Cairo, two of the largest cities on earth, as well as the holy cities of Mecca, Medina, and Jerusalem, and what was the world’s largest Jewish city for over four hundred years, Salonica (Thessaloniki in today’s Greece). From their lowly beginnings as sheep-herders on the long, hard road across Central Asia, the Ottomans ultimately succeeded in proving themselves the closest thing to the Roman Empire since the Roman Empire itself.

One of the interesting things about the Ottomans was how cosmopolitan and relatively tolerant they were.  The Spanish threw the Muslims and Jews out of Spain, but the Ottomans welcomed a variety of peoples, cultures, languages, and religions.  It wasn’t until relatively late that the empire came to be predominantly Muslim.

Although all religious minorities throughout the Mediterranean were subjected to much hardship, the Ottomans, despite what Innocent thought, never persecuted non-Muslims in the way that the Inquisition persecuted Muslims and Jews—and, despite the centuries of calls for Christian Crusades, Muslims never attempted a war against the whole of Christianity. While considered legally inferior to Muslims, Christians and Jews in the Ottoman Empire (as elsewhere in the lands of Islam) had more rights than other religious minorities around the world. They had their own law courts, freedom to worship in the empire’s numerous synagogues and churches, and communal autonomy. While Christian Europe was killing its religious minorities, the Ottomans protected theirs and welcomed those expelled from Europe. Although the sultans of the empire were Muslims, the majority of the population was not. Indeed, the Ottoman Empire was effectively the Mediterranean’s most populous Christian state: the Ottoman sultan ruled over more Christian subjects than the Catholic pope.

The sultan who moved the Ottoman empire into the big leagues — tripling its size — was Selim the Grim, who is the central figure of this book (look at his image on the book’s cover and you’ll see how he earned the name).  His son was Suleyman the Magnificent, whose long rule made him the lasting symbol of the empire at its peak.  Another sign of the heterogeneous nature of the Ottomans is that the sultans themselves were of mixed blood.

Because, in this period, Ottoman sultans and princes produced sons not from their wives but from their concubines, all Ottoman sultans were the sons of foreign, usually Christian-born, slaves like Gülbahar [Selim’s mother].

In the exceedingly cosmopolitan empire, the harem ensured that a non-Turkish, non-Muslim, non-elite diversity was infused into the very bloodline of the imperial family. As the son of a mother with roots in a far-off land, a distant culture, and a religion other than Islam, Selim viscerally experienced the ethnically and religiously amalgamated nature of the Ottoman Empire, and grew up in provincial Amasya with an expansive outlook on the fifteenth-century world.

Posted in History, Liberal democracy, Philosophy

Fukuyama — Liberalism and Its Discontents

This post is a brilliant essay by Francis Fukuyama, “Liberalism and Its Discontents.”  In it, he explores the problems facing liberal democracy today.  As always, it is threatened by autocratic regimes around the world.  But what’s new since the fall of the Soviet Union is the threat from illiberal democracy, both at home and abroad, in the form of populism of the right and the left.  
His argument is a strong defense of the liberal democratic order, but it is also a very smart analysis of how liberal democracy has sown the seeds of its own downfall.  He shows how much it depends on the existence of a vibrant civil society and robust social capital, both of which its own emphasis on individual liberty tends to undermine.  He also shows how its stress on free markets has fostered the rise of the neoliberal religion, which seeks to subordinate the once robust liberal state to the market.  And he notes how its tolerance of diverse viewpoints leaves it vulnerable to illiberal views that seek to wipe it out of existence.
This essay was published in the inaugural issue of the magazine American Purpose on October 5, 2020.  Here’s a link to the original.
It’s well worth your while to give this essay a close read.


Liberalism and Its Discontents

The challenges from the left and the right.

Francis Fukuyama

Today, there is a broad consensus that democracy is under attack or in retreat in many parts of the world. It is being contested not just by authoritarian states like China and Russia, but by populists who have been elected in many democracies that seemed secure.

The “democracy” under attack today is a shorthand for liberal democracy, and what is really under greatest threat is the liberal component of this pair. The democracy part refers to the accountability of those who hold political power through mechanisms like free and fair multiparty elections under universal adult franchise. The liberal part, by contrast, refers primarily to a rule of law that constrains the power of government and requires that even the most powerful actors in the system operate under the same general rules as ordinary citizens. Liberal democracies, in other words, have a constitutional system of checks and balances that limits the power of elected leaders.

Democracy itself is being challenged by authoritarian states like Russia and China that manipulate or dispense with free and fair elections. But the more insidious threat arises from populists within existing liberal democracies who are using the legitimacy they gain through their electoral mandates to challenge or undermine liberal institutions. Leaders like Hungary’s Viktor Orbán, India’s Narendra Modi, and Donald Trump in the United States have tried to undermine judicial independence by packing courts with political supporters, have openly broken laws, or have sought to delegitimize the press by labeling mainstream media as “enemies of the people.” They have tried to dismantle professional bureaucracies and to turn them into partisan instruments. It is no accident that Orbán puts himself forward as a proponent of “illiberal democracy.”

The contemporary attack on liberalism goes much deeper than the ambitions of a handful of populist politicians, however. They would not be as successful as they have been were they not riding a wave of discontent with some of the underlying characteristics of liberal societies. To understand this, we need to look at the historical origins of liberalism, its evolution over the decades, and its limitations as a governing doctrine.

What Liberalism Was

Classical liberalism can best be understood as an institutional solution to the problem of governing over diversity. Or to put it in slightly different terms, it is a system for peacefully managing diversity in pluralistic societies. It arose in Europe in the late 17th and 18th centuries in response to the wars of religion that followed the Protestant Reformation, wars that lasted for 150 years and killed major portions of the populations of continental Europe.

While Europe’s religious wars were driven by economic and social factors, they derived their ferocity from the fact that the warring parties represented different Christian sects that wanted to impose their particular interpretation of religious doctrine on their populations. This was a period in which the adherents of forbidden sects were persecuted—heretics were regularly tortured, hanged, or burned at the stake—and their clergy hunted. The founders of modern liberalism like Thomas Hobbes and John Locke sought to lower the aspirations of politics, not to promote a good life as defined by religion, but rather to preserve life itself, since diverse populations could not agree on what the good life was. This was the distant origin of the phrase “life, liberty, and the pursuit of happiness” in the Declaration of Independence. The most fundamental principle enshrined in liberalism is one of tolerance: You do not have to agree with your fellow citizens about the most important things, but only that each individual should get to decide what those things are without interference from you or from the state. The limits of tolerance are reached only when the principle of tolerance itself is challenged, or when citizens resort to violence to get their way.

Understood in this fashion, liberalism was simply a pragmatic tool for resolving conflicts in diverse societies, one that sought to lower the temperature of politics by taking questions of final ends off the table and moving them into the sphere of private life. This remains one of its most important selling points today: If diverse societies like India or the United States move away from liberal principles and try to base national identity on race, ethnicity, or religion, they are inviting a return to potentially violent conflict. The United States suffered such conflict during its Civil War, and Modi’s India is inviting communal violence by shifting its national identity to one based on Hinduism.

There is however a deeper understanding of liberalism that developed in continental Europe that has been incorporated into modern liberal doctrine. In this view, liberalism is not simply a mechanism for pragmatically avoiding violent conflict, but also a means of protecting fundamental human dignity.

The ground of human dignity has shifted over time. In aristocratic societies, it was an attribute only of warriors who risked their lives in battle. Christianity universalized the concept of dignity based on the possibility of human moral choice: Human beings had a higher moral status than the rest of created nature but lower than that of God because they could choose between right and wrong. Unlike beauty or intelligence or strength, this characteristic was universally shared and made human beings equal in the sight of God. By the time of the Enlightenment, the capacity for choice or individual autonomy was given a secular form by thinkers like Rousseau (“perfectibility”) and Kant (a “good will”), and became the ground for the modern understanding of the fundamental right to dignity written into many 20th-century constitutions. Liberalism recognizes the equal dignity of every human being by granting them rights that protect individual autonomy: rights to speech, to assembly, to belief, and ultimately to participate in self-government.

Liberalism thus protects diversity by deliberately not specifying higher goals of human life. This disqualifies religiously defined communities as liberal. Liberalism also grants equal rights to all people considered full human beings, based on their capacity for individual choice. Liberalism thus tends toward a kind of universalism: Liberals care not just about their rights, but about the rights of others outside their particular communities. Thus the French Revolution carried the Rights of Man across Europe. From the beginning the major arguments among liberals were not over this principle, but rather over who qualified as rights-bearing individuals, with various groups—racial and ethnic minorities, women, foreigners, the propertyless, children, the insane, and criminals—excluded from this magic circle.

A final characteristic of historical liberalism was its association with the right to own property. Property rights and the enforcement of contracts through legal institutions became the foundation for economic growth in Britain, the Netherlands, Germany, the United States, and other states that were not necessarily democratic but protected property rights. For that reason liberalism was strongly associated with economic growth and modernization. Rights were protected by an independent judiciary that could call on the power of the state for enforcement. Properly understood, rule of law referred both to the application of day-to-day rules that governed interactions between individuals and to the design of political institutions that formally allocated political power through constitutions. The class that was most committed to liberalism historically was the class of property owners, not just agrarian landlords but the myriads of middle-class business owners and entrepreneurs that Karl Marx would label the bourgeoisie.

Liberalism is connected to democracy, but is not the same thing as it. It is possible to have regimes that are liberal but not democratic: Germany in the 19th century and Singapore and Hong Kong in the late 20th century come to mind. It is also possible to have democracies that are not liberal, like the ones Viktor Orbán and Narendra Modi are trying to create that privilege some groups over others. Liberalism is allied to democracy through its protection of individual autonomy, which ultimately implies a right to political choice and to the franchise. But it is not the same as democracy. From the French Revolution on, there were radical proponents of democratic equality who were willing to abandon liberal rule of law altogether and vest power in a dictatorial state that would equalize outcomes. Under the banner of Marxism-Leninism, this became one of the great fault lines of the 20th century. Even in avowedly liberal states, like many in late 19th- and early 20th-century Europe and North America, there were powerful trade union movements and social democratic parties that were more interested in economic redistribution than in the strict protection of property rights.

Liberalism also saw the rise of another competitor besides communism: nationalism. Nationalists rejected liberalism’s universalism and sought to confer rights only on their favored group, defined by culture, language, or ethnicity. As the 19th century progressed, Europe reorganized itself from a dynastic to a national basis, with the unification of Italy and Germany and with growing nationalist agitation within the multiethnic Ottoman and Austro-Hungarian empires. In 1914 this exploded into the Great War, which killed millions of people and laid the kindling for a second global conflagration in 1939.

The defeat of Germany, Italy, and Japan in 1945 paved the way for a restoration of liberalism as the democratic world’s governing ideology. Europeans saw the folly of organizing politics around an exclusive and aggressive understanding of nation, and created the European Community and later the European Union to subordinate the old nation-states to a cooperative transnational structure. For its part, the United States played a powerful role in creating a new set of international institutions, including the United Nations (and affiliated Bretton Woods organizations like the World Bank and IMF), GATT and the World Trade Organization, and cooperative regional ventures like NATO and NAFTA.

The largest threat to this order came from the former Soviet Union and its allied communist parties in Eastern Europe and the developing world. But the former Soviet Union collapsed in 1991, as did the perceived legitimacy of Marxism-Leninism, and many former communist countries sought to incorporate themselves into existing international institutions like the EU and NATO. This post-Cold War world would collectively come to be known as the liberal international order.

But the period from 1950 to the 1970s was the heyday of liberal democracy in the developed world. Liberal rule of law abetted democracy by protecting ordinary people from abuse: The U.S. Supreme Court, for example, was critical in breaking down legal racial segregation through decisions like Brown v. Board of Education. And democracy protected the rule of law: When Richard Nixon engaged in illegal wiretapping and use of the CIA, it was a democratically elected Congress that helped drive him from power. Liberal rule of law laid the basis for the strong post-World War II economic growth that then enabled democratically elected legislatures to create redistributive welfare states. Inequality was tolerable in this period because most people could see their material conditions improving. In short, this period saw a largely happy coexistence of liberalism and democracy throughout the developed world.

Discontents

Liberalism has been a broadly successful ideology, and one that is responsible for much of the peace and prosperity of the modern world. But it also has a number of shortcomings, some of which were triggered by external circumstances, and others of which are intrinsic to the doctrine. The first lies in the realm of economics, the second in the realm of culture.

The economic shortcomings have to do with the tendency of economic liberalism to evolve into what has come to be called “neoliberalism.” Neoliberalism is today a pejorative term used to describe a form of economic thought, often associated with the University of Chicago or the Austrian school, and economists like Friedrich Hayek, Milton Friedman, George Stigler, and Gary Becker. They sharply denigrated the role of the state in the economy, and emphasized free markets as spurs to growth and efficient allocators of resources. Many of the analyses and policies recommended by this school were in fact helpful and overdue: Economies were overregulated, state-owned companies inefficient, and governments responsible for the simultaneous high inflation and low growth experienced during the 1970s.

But valid insights about the efficiency of markets evolved into something of a religion, in which state intervention was opposed not based on empirical observation but as a matter of principle. Deregulation produced lower airline ticket prices and shipping costs for trucks, but also laid the ground for the great financial crisis of 2008 when it was applied to the financial sector. Privatization was pushed even in cases of natural monopolies like municipal water or telecom systems, leading to travesties like the privatization of Mexico’s TelMex, where a public monopoly was transformed into a private one. Perhaps most important, the fundamental insight of trade theory, that free trade leads to higher wealth for all parties concerned, neglected the further insight that this was true only in the aggregate, and that many individuals would be hurt by trade liberalization. The period from the 1980s onward saw the negotiation of both global and regional free trade agreements that shifted jobs and investment away from rich democracies to developing countries, increasing within-country inequalities. In the meantime, many countries starved their public sectors of resources and attention, leading to deficiencies in a host of public services from education to health to security.

The result was the world that emerged by the 2010s in which aggregate incomes were higher than ever but inequality within countries had also grown enormously. Many countries around the world saw the emergence of a small class of oligarchs, multibillionaires who could convert their economic resources into political power through lobbyists and purchases of media properties. Globalization enabled them to move their money to safe jurisdictions easily, starving states of tax revenue and making regulation very difficult. Globalization also entailed liberalization of rules concerning migration. Foreign-born populations began to increase in many Western countries, abetted by crises like the Syrian civil war that sent more than a million refugees into Europe. All of this paved the way for the populist reaction that became clearly evident in 2016 with Britain’s Brexit vote and the election of Donald Trump in the United States.

The second discontent with liberalism as it evolved over the decades was rooted in its very premises. Liberalism deliberately lowered the horizon of politics: A liberal state will not tell you how to live your life, or what a good life entails; how you pursue happiness is up to you. This produces a vacuum at the core of liberal societies, one that often gets filled by consumerism or pop culture or other random activities that do not necessarily lead to human flourishing. This has been the critique of a group of (mostly) Catholic intellectuals including Patrick Deneen, Sohrab Ahmari, Adrian Vermeule, and others, who feel that liberalism offers “thin gruel” for anyone with deeper moral commitments.

This leads us to a deeper stratum of discontent. Liberal theory, both in its economic and political guises, is built around individuals and their rights, and the political system protects their ability to make these choices autonomously. Indeed, in neoclassical economic theory, social cooperation arises only as a result of rational individuals deciding that it is in their self-interest to work with other individuals. Among conservative intellectuals, Patrick Deneen has gone the furthest by arguing that this whole approach is deeply flawed precisely because it is based on this individualistic premise, and sanctifies individual autonomy above all other goods. Thus for him, the entire American project based as it was on Lockean individualistic principles was misfounded. Human beings for him are not primarily autonomous individuals, but deeply social beings who are defined by their obligations and ties to a range of social structures, from families to kin groups to nations.

This social understanding of human nature was a truism taken for granted by most thinkers prior to the Western Enlightenment. It is also one supported by a great deal of recent research in the life sciences that shows that human beings are hard-wired to be social creatures: Many of our most salient faculties are ones that lead us to cooperate with one another in groups of various sizes and types. This cooperation does not arise necessarily from rational calculation; it is supported by emotional faculties like pride, guilt, shame, and anger that reinforce social bonds. The success of human beings over the millennia that has allowed our species to completely dominate its natural habitat has to do with this aptitude for following norms that induce social cooperation.

By contrast, the kind of individualism celebrated in liberal economic and political theory is a contingent development that emerged in Western societies over the centuries. Its history is long and complicated, but it originated in the inheritance rules set down by the Catholic Church in early medieval times which undermined the extended kinship networks that had characterized Germanic tribal societies. Individualism was further validated by its functionality in promoting market capitalism: Markets worked more efficiently if individuals were not constrained by obligations to kin and other social networks. But this kind of individualism has always been at odds with the social proclivities of human beings. It also does not come naturally to people in certain other non-Western societies like India or the Arab world, where kin, caste, or ethnic ties are still facts of life.

The implication of these observations for contemporary liberal societies is straightforward. Members of such societies want opportunities to bond with one another in a host of ways: as citizens of a nation, members of an ethnic or racial group, residents of a region, or adherents to a particular set of religious beliefs. Membership in such groups gives their lives meaning and texture in a way that mere citizenship in a liberal democracy does not.

Many of the critics of liberalism on the right feel that it has undervalued the nation and traditional national identity: Thus Viktor Orbán has asserted that Hungarian national identity is based on Hungarian ethnicity and on maintenance of traditional Hungarian values and cultural practices. New nationalists like Yoram Hazony celebrate nationhood and national culture as the rallying cry for community, and they bemoan liberalism’s dissolving effect on religious commitment, yearning for a thicker sense of community and shared values, underpinned by virtues in service of that community.

There are parallel discontents on the left. Juridical equality before the law does not mean that people will be treated equally in practice. Racism, sexism, and anti-gay bias all persist in liberal societies, and those injustices have become identities around which people could mobilize. The Western world has seen the emergence of a series of social movements since the 1960s, beginning with the civil rights movement in the United States, and movements promoting the rights of women, indigenous peoples, the disabled, the LGBT community, and the like. The more progress that has been made toward eradicating social injustices, the more intolerable the remaining injustices seem, and thus the greater the moral imperative to mobilize to correct them. The complaint of the left is different in substance but similar in structure to that of the right: Liberal society does not do enough to root out deep-seated racism, sexism, and other forms of discrimination, so politics must go beyond liberalism. And, as on the right, progressives want the deeper bonding and personal satisfaction of associating—in this case, with people who have suffered from similar indignities.

This instinct for bonding and the thinness of shared moral life in liberal societies have shifted global politics on both the right and the left toward a politics of identity and away from the liberal world order of the late 20th century. Liberal values like tolerance and individual freedom are prized most intensely when they are denied: People who live in brutal dictatorships want the simple freedom to speak, associate, and worship as they choose. But over time life in a liberal society comes to be taken for granted and its sense of shared community seems thin. Thus in the United States, arguments between right and left increasingly revolve around identity, and particularly racial identity issues, rather than around economic ideology and questions about the appropriate role of the state in the economy.

There is another significant issue that liberalism fails to grapple adequately with, which concerns the boundaries of citizenship and rights. The premises of liberal doctrine tend toward universalism: Liberals worry about human rights, and not just the rights of Englishmen, or white Americans, or some other restricted class of people. But rights are protected and enforced by states which have limited territorial jurisdiction, and the question of who qualifies as a citizen with voting rights becomes a highly contested one. Some advocates of migrant rights assert a universal human right to migrate, but this is a political nonstarter in virtually every contemporary liberal democracy. At the present moment, the issue of the boundaries of political communities is settled by some combination of historical precedent and political contestation, rather than being based on any clear liberal principle.

Conclusion

Vladimir Putin told the Financial Times that liberalism has become an “obsolete” doctrine. While it may be under attack from many quarters today, it is in fact more necessary than ever.

It is more necessary because it is fundamentally a means of governing over diversity, and the world is more diverse than it ever has been. Democracy disconnected from liberalism will not protect diversity, because majorities will use their power to repress minorities. Liberalism was born in the mid-17th century as a means of resolving religious conflicts, and it was reborn again after 1945 to solve conflicts between nationalisms. Any illiberal effort to build a social order around thick ties defined by race, ethnicity, or religion will exclude important members of the community, and down the road will lead to conflict. Russia itself retains liberal characteristics: Russian citizenship and nationality is not defined by either Russian ethnicity or the Orthodox religion; the Russian Federation’s millions of Muslim inhabitants enjoy equal juridical rights. In situations of de facto diversity, attempts to impose a single way of life on an entire population are a formula for dictatorship.

The only other way to organize a diverse society is through formal power-sharing arrangements among different identity groups that give only a nod toward shared nationality. This is the way that Lebanon, Iraq, Bosnia, and other countries in the Middle East and the Balkans are governed. This type of consociationalism leads to very poor governance and long-term instability, and works poorly in societies where identity groups are not geographically based. This is not a path down which any contemporary liberal democracy should want to tread.

That being said, what kinds of economic and social policies liberal societies should pursue is today a wide-open question. The evolution of liberalism into neoliberalism after the 1980s greatly reduced the policy space available to centrist political leaders, and permitted the growth of huge inequalities that have been fueling populisms of the right and the left. Classical liberalism is perfectly compatible with a strong state that seeks social protections for populations left behind by globalization, even as it protects basic property rights and a market economy. Liberalism is necessarily connected to democracy, and liberal economic policies need to be tempered by considerations of democratic equality and the need for political stability.

I suspect that most religious conservatives critical of liberalism today in the United States and other developed countries do not fool themselves into thinking that they can turn the clock back to a period when their social views were mainstream. Their complaint is a different one: that contemporary liberals are ready to tolerate any set of views, from radical Islam to Satanism, other than those of religious conservatives, and that they find their own freedom constrained.

This complaint is a serious one: Many progressives on the left have shown themselves willing to abandon liberal values in pursuit of social justice objectives. There has been a sustained intellectual attack on liberal principles over the past three decades coming out of academic pursuits like gender studies, critical race theory, postcolonial studies, and queer theory, that deny the universalistic premises underlying modern liberalism. The challenge is not simply one of intolerance of other views or “cancel culture” in the academy or the arts. Rather, the challenge is to basic principles that all human beings were born equal in a fundamental sense, or that a liberal society should strive to be color-blind. These different theories tend to argue that the lived experiences of specific and ever-narrower identity groups are incommensurate, and that what divides them is more powerful than what unites them as citizens. For some in the tradition of Michel Foucault, foundational approaches to cognition coming out of liberal modernity like the scientific method or evidence-based research are simply constructs meant to bolster the hidden power of racial and economic elites.

The issue here is thus not whether progressive illiberalism exists, but rather how great a long-term danger it represents. In countries from India and Hungary to the United States, nationalist conservatives have actually taken power and have sought to use the power of the state to dismantle liberal institutions and impose their own views on society as a whole. That danger is a clear and present one.

Progressive anti-liberals, by contrast, have not succeeded in seizing the commanding heights of political power in any developed country. Religious conservatives are still free to worship in any way they see fit, and indeed are organized in the United States as a powerful political bloc that can sway elections. Progressives exercise power in different and more nuanced ways, primarily through their dominance of cultural institutions like the mainstream media, the arts, and large parts of academia. The power of the state has been enlisted behind their agenda on such matters as striking down via the courts conservative restrictions on abortion and gay marriage and in the shaping of public school curricula. An open question for the future is whether cultural dominance today will ultimately lead to political dominance in the future, and thus a more thoroughgoing rollback of liberal rights by progressives.

Liberalism’s present-day crisis is not new; since its invention in the 17th century, liberalism has been repeatedly challenged by thick communitarians on the right and progressive egalitarians on the left. Liberalism properly understood is perfectly compatible with communitarian impulses and has been the basis for the flourishing of deep and diverse forms of civil society. It is also compatible with the social justice aims of progressives: One of its greatest achievements was the creation of modern redistributive welfare states in the late 20th century. Liberalism’s problem is that it works slowly through deliberation and compromise, and never achieves its communal or social justice goals as completely as their advocates would like. But it is hard to see how the discarding of liberal values is going to lead to anything in the long term other than increasing social conflict and ultimately a return to violence as a means of resolving differences.

Francis Fukuyama, chairman of the editorial board of American Purpose, directs the Center on Democracy, Development and the Rule of Law at Stanford University.

Posted in History, History of education, War

An Affair to Remember: America’s Brief Fling with the University as a Public Good

This post is an essay about the brief but glorious golden age of the US university during the three decades after World War II.  

American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward. Prior to World War II, the federal government showed little interest in universities and provided little support. The war spurred a large investment in defense-based scientific research in universities, and the emergence of the Cold War expanded federal investment exponentially. Unlike a hot war, the Cold War offered an extended period of federally funded research and public subsidy for expanding student enrollments. The result was the golden age of the American university. The good times continued for about 30 years and then began to go bad. The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminating with the fall of the Berlin Wall in 1989. With no money and no enemy, the Cold War university fell as quickly as it arose. Instead of seeing the Cold War university as the norm, we need to think of it as the exception. What we are experiencing now in American higher education is a regression to the mean, in which, over the long haul, Americans have understood higher education to be a distinctly private good.

I originally presented this piece in 2014 at a conference at the Catholic University of Leuven in Belgium.  It was then published in the Journal of Philosophy of Education in 2016 (here’s a link to the JOPE version) and then became a chapter in my 2017 book, A Perfect Mess.  Waste not, want not.  Hope you enjoy it.

Cold War

An Affair to Remember:

America’s Brief Fling with the University as a Public Good

David F. Labaree

            American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward.  Prior to World War II, the federal government showed little interest in universities and provided little support.  The war spurred a large investment in defense-based scientific research in universities for reasons of both efficiency and necessity:  universities had the researchers and infrastructure in place and the government needed to gear up quickly.  With the emergence of the Cold War in 1947, the relationship continued and federal investment expanded exponentially.  Unlike a hot war, the Cold War offered a long timeline for global competition between communism and democracy, which meant institutionalizing the wartime model of federally funded research and building a set of structures for continuing investment in knowledge whose military value was unquestioned. At the same time, the communist challenge provided a strong rationale for sending a large number of students to college.  These increased enrollments would educate the skilled workers needed by the Cold War economy, produce informed citizens to combat the Soviet menace, and demonstrate to the world the broad social opportunities available in a liberal democracy.  The result of this enormous public investment in higher education has become known as the golden age of the American university.

            Of course, as is so often the case with a golden age, it didn’t last.  The good times continued for about 30 years and then began to go bad.  The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminating with the fall of the Berlin Wall in 1989.  With no money and no enemy, the Cold War university fell as quickly as it arose. 

            In this paper I try to make sense of this short-lived institution.  But I want to avoid the note of nostalgia that pervades many current academic accounts, in which professors and administrators grieve for the good old days of the mid-century university and spin fantasies of recapturing them.  Barring another national crisis of the same dimension, however, it just won’t happen.  Instead of seeing the Cold War university as the norm that we need to return to, I suggest that it’s the exception.  What we’re experiencing now in American higher education is, in many ways, a regression to the mean. 

            My central theme is this:  Over the long haul, Americans have understood higher education as a distinctly private good.  The period from 1940 to 1970 was the one time in our history when the university became a public good.  And now we are back to the place we have always been, where the university’s primary role is to provide individual consumers a chance to gain social access and social advantage.  Since students are the primary beneficiaries, they should also foot the bill, so state subsidies are hard to justify.

            Here is my plan.  First, I provide an overview of the long period before 1940 when American higher education functioned primarily as a private good.  During this period, the beneficiaries changed from the university’s founders to its consumers, but private benefit was the steady state.  This is the baseline against which we can understand the rapid postwar rise and fall of public investment in higher education.  Next, I look at the huge expansion of public funding for higher education starting with World War II and continuing for the next 30 years.  Along the way I sketch how the research university came to enjoy a special boost in support and rising esteem during these decades.  Then I examine the fall from grace toward the end of the century when the public-good rationale for higher ed faded as quickly as it had emerged.  And I close by exploring the implications of this story for understanding the American system of higher education as a whole. 

            During most of its history, the central concern driving the system has not been what it can do for society but what it can do for me.  In many ways, this approach has been highly beneficial.  Much of its success as a system – as measured by wealth, rankings, and citations – derives from its core structure as a market-based system producing private goods for consumers rather than a politically-based system producing public goods for state and society.  But this view of higher education as private property is also a key source of the system’s pathologies.  It helps explain why public funding for higher education is declining and student debt is rising; why private colleges are so much richer and more prestigious than public colleges; why the system is so stratified, with wealthy students attending the exclusive colleges at the top where social rewards are high and with poor students attending the inclusive colleges at the bottom where such rewards are low; and why quality varies so radically, from colleges that ride atop the global rankings to colleges that drift in intellectual backwaters.

The Private Origins of the System

            One of the peculiar aspects of the history of American higher education is that private colleges preceded public.  Another, which in part follows from the first, is that private colleges are also more prestigious.  Nearly everywhere else in the world, state-supported and governed universities occupy the pinnacle of the national system while private institutions play a small and subordinate role, supplying degrees of less distinction and serving students of less ability.  But in the U.S., the top private universities produce more research, gain more academic citations, attract better faculty and students, and graduate more leaders of industry, government, and the professions.  According to the 2013 Shanghai rankings, 16 of the top 25 universities in the U.S. are private, and the concentration is even higher at the top of this list, where private institutions make up 8 of the top 10 (Institute of Higher Education, 2013). 

            This phenomenon is rooted in the conditions under which colleges first emerged in the U.S.  American higher education developed into a system in the early 19th century, when three key elements were in place:  the state was weak, the market was strong, and the church was divided.  The federal government at the time was small and poor, surviving largely on tariffs and the sale of public lands, and state governments were strapped simply trying to supply basic public services.  Colleges were a low priority for government since they served no compelling public need – unlike public schools, which states saw as essential for producing citizens for the republic.  So colleges only emerged when local promoters requested and received a  corporate charter from the state.  These were private not-for-profit institutions that functioned much like any other corporation.  States provided funding only sporadically and only if an institution’s situation turned dire.  And after the Dartmouth College decision in 1819, the Supreme Court made clear that a college’s corporate charter meant that it could govern itself without state interference.  Therefore, in the absence of state funding and control, early American colleges developed a market-based system of higher education. 

            If the roots of the American system were private, they were also extraordinarily local.  Unlike the European university, with its aspirations toward universality and its history of cosmopolitanism, the American college of the nineteenth century was a home-town entity.  Most often, it was founded to advance the parochial cause of promoting a particular religious denomination rather than to promote higher learning.  In a setting where no church was dominant and all had to compete for visibility, stature, and congregants, founding colleges was a valuable way to plant the flag and promote the faith.  This was particularly true when the population was rapidly expanding into new territories to the west, which meant that no denomination could afford to cede the new terrain to competitors.  Starting a college in Ohio was a way to ensure denominational growth, prepare clergy, and spread the word.

            At the same time, colleges were founded with an eye toward civic boosterism, intended to shore up a community’s claim to be a major cultural and commercial center rather than a sleepy farm town.  With a college, a town could claim that it deserved to gain lucrative recognition as a stop on the railroad line, the site for a state prison, the county seat, or even the state capital.  These consequences would elevate the value of land in the town, which would work to the benefit of major landholders.  In this sense, the nineteenth century college, like much of American history, was in part the product of a land development scheme.  In general, these two motives combined: colleges emerged as a way to advance both the interests of particular sects and also the interests of the towns where they were lodged.  Often ministers were also land speculators.  It was always better to have multiple rationales and sources of support than just one (Brown, 1995; Boorstin, 1965; Potts, 1971).  In either case, however, the benefits of founding a college accrued to individual landowners and particular religious denominations and not to the larger public.

As a result of these incentives, church officials and civic leaders around the country scrambled to get a state charter for a college, establish a board of trustees made up of local notables, and install a president.  The latter (usually a clergyman) would rent a local building, hire a small and not very accomplished faculty, and serve as the CEO of a marginal educational enterprise, one that sought to draw tuition-paying students from the area in order to make the college a going concern.  With colleges arising to meet local and sectarian needs, the result was the birth of a large number of small, parochial, and weakly funded institutions in a very short period of time in the nineteenth century, which meant that most of these colleges faced a difficult struggle to survive in the competition with peer institutions.  In the absence of reliable support from church or state, these colleges had to find a way to get by on their own. 

            Into this mix of private colleges, state and local governments began to introduce public institutions.  First came a series of universities established by individual states to serve their local populations.  Here too competition was a bigger factor than demand for learning, since a state government increasingly needed to have a university of its own in order to keep up with its neighbors.  Next came a group of land-grant colleges that began to emerge by midcentury.  Funded by grants of land from the federal government, these were public institutions that focused on providing practical education for occupations in agriculture and engineering.  Finally came an array of normal schools, which aimed at preparing teachers for the expanding system of public elementary education.  Like the private colleges, these public institutions emerged to meet the economic needs of towns that eagerly sought to house them.  And although these colleges were creatures of the state, they had only limited public funding and had to rely heavily on student tuition and private donations.

            The rate of growth of this system of higher education was staggering.  At the beginning of the American republic in 1790 the country had 19 institutions calling themselves colleges or universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1880, it had 811, which doesn’t even include the normal schools.  As a comparison, this was five times as many institutions as existed that year in all of Western Europe (Ruegg, 2004).  To be sure, the American institutions were for the most part colleges in name only, with low academic standards, an average student body of 131 (Carter et al., 2006, Table Bc523) and faculty of 14 (Carter et al., 2006, Table Bc571).  But nonetheless this was a massive infrastructure for a system of higher education. 

            At a density of 16 colleges per million of population, the U.S. in 1880 had the most overbuilt system of higher education in the world (Collins, 1979, Table 5.2).  Created in order to meet the private needs of land speculators and religious sects rather than the public interest of state and society, the system got way ahead of demand for its services.  That changed in the 1880s.  By adopting parts of the German research university model (in form if not in substance), the top level of the American system acquired a modicum of academic respectability.  In addition – and this is more important for our purposes here – going to college finally came to be seen as a good investment for a growing number of middle-class student-consumers. 

            Three factors came together to make college attractive.  Primary among these was the jarring change in the structure of status transmission for middle-class families toward the end of the nineteenth century.  The tradition of passing on social position to your children by transferring ownership of the small family business was under dire threat, as factories were driving independent craft production out of the market and department stores were making small retail shops economically marginal.  Under these circumstances, middle class families began to adopt what Burton Bledstein calls the “culture of professionalism” (Bledstein, 1976).  Pursuing a profession (law, medicine, clergy) had long been an option for young people in this social stratum, but now this attraction grew stronger as the definition of profession grew broader.  With the threat of sinking into the working class becoming more likely, families found reassurance in the prospect of a form of work that would buffer their children from the insecurity and degradation of wage labor.  This did not necessarily mean becoming a traditional professional, where the prospects were limited and entry costs high, but instead it meant becoming a salaried employee in a management position that was clearly separated from the shop floor.  The burgeoning white-collar work opportunities as managers in corporate and government bureaucracies provided the promise of social status, economic security, and protection from downward mobility.  And the best way to certify yourself as eligible for this kind of work was to acquire a college degree. 

            Two other factors added to the attractions of college.  One was that a high school degree – once a scarce commodity that became a form of distinction for middle class youth during the nineteenth century – was in danger of becoming commonplace.  Across the middle of the century, enrollments in primary and grammar schools were growing fast, and by the 1880s they were filling up.  By 1900, the average American 20-year-old had eight years of schooling, which meant that political pressure was growing to increase access to high school (Goldin & Katz, 2008, p. 19).  This started to happen in the 1880s, and for the next 50 years high school enrollments doubled every decade.  The consequences were predictable.  If the working class was beginning to get a high school education, then middle class families felt compelled to preserve their advantage by pursuing college.

            The last piece that fell into place to increase the drawing power of college for middle class families was the effort by colleges in the 1880s and 90s to make undergraduate enrollment not just useful but enjoyable.  Ever desperate to find ways to draw and retain students, colleges responded to competitive pressure by inventing the core elements that came to define the college experience for American students in the twentieth century.  These included fraternities and sororities, pleasant residential halls, a wide variety of extracurricular entertainments, and – of course – football.  College life became a major focus of popular magazines, and college athletic events earned big coverage in newspapers.  In remarkably short order, going to college became a life stage in the acculturation of middle class youth.  It was the place where you could prepare for a respectable job, acquire sociability, learn middle class cultural norms, have a good time, and meet a suitable spouse.  And, for those who were so inclined, there was the potential fringe benefit of getting an education. 

            Spurred by student desire to get ahead or stay ahead, college enrollments started growing quickly.  They stood at 116,000 in 1879, 157,000 in 1889, 238,000 in 1899, 355,000 in 1909, 598,000 in 1919, 1,104,000 in 1929, and 1,494,000 in 1939 (Carter et al., 2006, Table Bc523).  This was a rate of increase of more than 50 percent a decade – not as fast as the increases that would come at midcentury, but still impressive.  During this same 60-year period, total college enrollment as a proportion of the population 18-to-24 years old rose from 1.6 percent to 9.1 percent (Carter et al., 2006, Table Bc524).  By 1930, the U.S. had three times the population of the U.K. and 20 times the number of college students (Levine, 1986, p. 135).  And the reason students were enrolling in such numbers was clear.  According to studies in the 1920s, almost two-thirds of undergraduates were there to get ready for a particular job, mostly in the lesser professions and middle management (Levine, 1986, p. 40).  Business and engineering were the most popular majors, and the social sciences were on the rise.  As David Levine put it in his important book about college in the interwar years, “Institutions of higher learning were no longer content to educate; they now set out to train, accredit, and impart social status to their students” (Levine, 1986, p. 19).

            Enrollments were growing faster in public colleges than in private colleges, but only by a small margin.  In fact, it was not until 1931 that the public sector – for the first time in the history of American higher education – accounted for a majority of college students (Carter et al., 2006, Tables Bc531 and Bc534).  The increases occurred across all levels of the system, including the top public research universities; but the largest share of enrollments flowed into the newer institutions at the bottom of the system:  the state colleges that were emerging from normal schools, urban commuter colleges (mostly private), and an array of public and private junior colleges that offered two-year vocational programs.

            For our purposes today, the key point is this:  The American system of colleges and universities that emerged in the nineteenth century and continued until World War II was a market-driven structure that construed higher education as a private good.  Until around 1880, the primary benefits of the system went to the people who founded individual institutions – the land speculators and religious sects for whom a new college brought wealth and competitive advantage.  This explains why colleges emerged in such remote places long before there was substantial student demand.  The role of the state in this process was muted.  The state was too weak and too poor to provide strong support for higher education, and there was no obvious state interest that argued for doing so.  Until the decade before the war, most student enrollments were in the private sector, and even at the war’s start the majority of institutions in the system were private (Carter et al., 2006, Tables Bc510 to Bc520).  

            After 1880, the primary benefits of the system went to the students who enrolled.  For them, it became the primary way to gain entry to the relatively secure confines of salaried work in management and the professions.  For middle class families, college in this period emerged as the main mechanism for transmitting social advantage from parents to children; and for others, it became the object of aspiration as the place to get access to the middle class.  State governments put increasing amounts of money into support for public higher education, not because of the public benefits it would produce but because voters demanded increasing access to this very attractive private good.

The Rise of the Cold War University

            And then came the Second World War.  There is no need here to recount the devastation it brought about or the nightmarish residue it left.  But it’s worth keeping in mind the peculiar fact that this conflict is remembered fondly by Americans, who often refer to it as the Good War (Terkel, 1997).  The war cost a lot of American lives and money, but it also brought a lot of benefits.  It didn’t hurt, of course, to be on the winning side and to have all the fighting take place on foreign territory.  And part of the positive feeling associated with the war comes from the way it thrust the country into a new role as the dominant world power.  But perhaps even more, the warm feeling arises from the memory of this as a time when the country came together around a common cause.  For citizens of the United States – the most liberal of liberal democracies, where private liberty is much more highly valued than public loyalty – it was a novel and exciting feeling to rally around the federal government.  Usually viewed with suspicion as a threat to the rights of individuals and a drain on private wealth, the American government in the 1940s took on the mantle of good in the fight against evil.  Its public image became the resolute face of a white-haired man dressed in red, white, and blue, who pointed at the viewer in a famous recruiting poster.  Its slogan: “Uncle Sam Wants You.”

            One consequence of the war was a sharp increase in the size of the U.S. government.  The historically small federal state had started to grow substantially in the 1930s as a result of the New Deal effort to spend the country out of a decade-long economic depression, a time when spending doubled.  But the war raised the level of federal spending by a factor of seven, from $1,000 to $7,000 per capita.  After the war, the level dropped back to $2,000; and then the onset of the Cold War sent federal spending into a sharp, and this time sustained, increase – reaching $3,000 in the 50s, $4,000 in the 60s, and regaining the previous high of $7,000 in the 80s, during the last days of the Soviet Union (Garrett & Rhine, 2006, Figure 3).

            If for Americans in general World War II carries warm associations, for people in higher education it marks the beginning of the Best of Times – a short but intense period of generous public funding and rapid expansion.  Initially, of course, the war brought trouble, since it sent most prospective college students into the military.  Colleges quickly adapted by repurposing their facilities for military training and other war-related activities.  But the real long-term benefits came when the federal government decided to draw higher education more centrally into the war effort – first, as the central site for military research and development; and second, as the place to send veterans when the war was over.  Let me say a little about each.

            In the first half of the twentieth century, university researchers had to scrabble around looking for funding, forced to rely on a mix of foundations, corporations, and private donors.  The federal government saw little benefit in employing their services.  In a particularly striking case at the start of World War I, the professional association of academic chemists offered its help to the War Department, which declined “on the grounds that it already had a chemist in its employ” (Levine, 1986, p. 51).[1]  The existing model was for government to maintain its own modest research facilities instead of relying on the university.

            The scale of the next war changed all this.  At the very start, a former engineering dean from MIT, Vannevar Bush, took charge of mobilizing university scientists behind the war effort as head of the Office of Scientific Research and Development.  The model he established for managing the relationship between government and researchers set the pattern for university research that still exists in the U.S. today: Instead of setting up government centers, the idea was to farm out research to universities.  Issue a request for proposals to meet a particular research need; award the grant to the academic researchers who seemed best equipped to meet this need; and pay 50 percent or more overhead to the university for the facilities that researchers would use.  This method drew on the expertise and facilities that already existed at research universities, which both saved the government from having to maintain a costly permanent research operation and also gave it the flexibility to draw on the right people for particular projects.  For universities, it provided a large source of funds, which enhanced their research reputations, helped them expand faculty, and paid for infrastructure.  It was a win-win situation.  It also established the entrepreneurial model of the university researcher in perpetual search for grant money.  And for the first time in the history of American higher education, the university was being considered a public good, whose research capacity could serve the national interest by helping to win a war. 

            If universities could meet one national need during the war by providing military research, they could meet another national need after the war by enrolling veterans.  The GI Bill of Rights, passed by Congress in 1944, was designed to pay off a debt and resolve a manpower problem.  Its official name, the Servicemen’s Readjustment Act of 1944, reflects both aims.  By the end of the war, 15 million men and women had served in the military, and they clearly deserved a reward for their years of service to the country.  The bill offered them the opportunity to continue their education at federal expense, which included attending the college of their choice.  This opportunity also offered another public benefit, since it responded to deep concern about the ability of the economy to absorb this flood of veterans.  The country had been sliding back into depression at the start of the war, and the fear was that massive unemployment at war’s end was a real possibility.  The strategy worked.  Under the GI Bill, about two million veterans eventually attended some form of college.  By 1948, when veteran enrollment peaked, American colleges and universities had one million more students than 10 years earlier (Geiger, 2004, pp. 40-41; Carter et al., 2006, Table Bc523).  This was another win-win situation.  The state rewarded national service, headed off mass unemployment, and produced a pile of human capital for future growth.  Higher education got a flood of students who could pay their own way.  The worry, of course, was what was going to happen when the wartime research contracts ended and the veterans graduated.

            That’s where the Cold War came in to save the day.  And the timing was perfect.  The first major action of the new conflict – the Berlin Blockade – came in 1948, the same year that veteran enrollments at American colleges reached their peak.  If World War II was good for American higher education, the Cold War was a bonanza.  The hot war meant boom and bust – providing a short surge of money and students followed by a sharp decline.  But the Cold War was a prolonged effort to contain Communism.  It was sustainable because actual combat was limited and often carried out by proxies.  For universities this was a gift that, for 30 years, kept on giving.  The military threat was massive in scale – nothing less than the threat of nuclear annihilation.  And supplementing it was an ideological challenge – the competition between two social and political systems for hearts and minds.  As a result, the government needed top universities to provide it with massive amounts of scientific research that would support the military effort.  And it also needed all levels of the higher education system to educate the large numbers of citizens required to deal with the ideological menace.  We needed to produce the scientists and engineers who would allow us to compete with Soviet technology.  We needed to provide high-level human capital in order to promote economic growth and demonstrate the economic superiority of capitalism over communism.  And we needed to provide educational opportunity for our own racial minorities and lower classes in order to show that our system was not only effective but also fair and equitable.  This would be a powerful weapon in the effort to win over the third world with the attractions of the American Way.  The Cold War American government treated the higher education system as a highly valuable public good, one that would make a large contribution to the national interest; and the system was pleased to be the object of so much federal largesse (Loss, 2012).

            On the research side, the impact of the Cold War on American universities was dramatic.  The best way to measure this is by examining patterns of federal research and development spending, which trace the ebb and flow of national threats across the last 60 years.  Funding rose slowly from $13 billion in 1953 (in constant 2014 dollars) until the Sputnik crisis (after the Soviets succeeded in placing the first satellite in earth orbit), when funding jumped to $40 billion in 1959 and rose rapidly to a peak of $88 billion in 1967.  Then the amount backed off to $66 billion in 1975 before climbing to a new peak of $104 billion in 1990, just before the collapse of the Soviet Union, and then dropping off.  It started growing again in 2002, after the attack on the Twin Towers, reaching an all-time high of $151 billion in 2010; it has been declining ever since (AAAS, 2014).[2]

            Initially, defense funding accounted for 85 percent of federal research funding, gradually falling back to about half in 1967 as nondefense funding increased, but remaining in a solid majority position up until the present.  For most of the period after 1957, however, the largest element in nondefense spending was research on space technology, which arose directly from the Soviet Sputnik threat.  If you combine defense and space appropriations, they account for about three-quarters of federal research funding until 1990.  Defense research closely tracked perceived threats in the international environment, dropping by 20 percent after 1989 and then making a comeback in 2001.  Overall, federal funding during the Cold War for research of all types grew in constant dollars from $13 billion in 1953 to $104 billion in 1990, an increase of 700 percent.  These were good times for university researchers (AAAS, 2014).

            At the same time that research funding was growing rapidly, so were college enrollments.  The number of students in American higher education grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969.  The number hit 11.6 million in 1979 and then growth began to slow – enrollments crept up to 13.5 million in 1989 and leveled off at around 14 million in the 1990s (Carter et al., 2006, Table Bc523; NCES, 2014, Table 303.10).  During the 30 years between 1949 and 1979, enrollments increased by more than 9 million students, a growth of almost 400 percent.  And the bulk of the enrollment increases in the last two decades of this period came from part-time students and two-year colleges.  Among four-year institutions, the primary growth occurred not at private or flagship public universities but at regional state universities, the former normal schools.  The Cold War was not just good for research universities; it was also great for institutions of higher education all the way down the status ladder.

            In part we can understand this radical growth in college enrollments as an extension of the long-term surge in consumer demand for American higher education as a private good.  Recall that enrollments started accelerating late in the nineteenth century, when college attendance began to provide an edge in gaining middle-class jobs.  Attending college gave middle-class families a way to pass on social advantage, while attending high school gave working-class families a way to gain social opportunity.  But by 1940, high school enrollment had become universal, so for working-class families the new zone of social opportunity became higher education.  This increase in consumer demand provides a market-based explanation for at least part of the flood of postwar enrollments.

            At the same time, however, the Cold War provided a strong public rationale for broadening access to college.  In 1946, President Harry Truman appointed a commission to provide a plan for expanding access to higher education – the first time in American history that a president had sought advice about education at any level.  The result was a six-volume report with the title Higher Education for American Democracy.  It’s no coincidence that the report was issued in 1947, the starting point of the Cold War.  The authors framed the report around the new threat of atomic war, arguing that “It is essential today that education come decisively to grips with the world-wide crisis of mankind” (President’s Commission, 1947, vol. 1, p. 6).  What they proposed as a public response to the crisis was a dramatic increase in access to higher education.

            The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate school, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitudes and interests.
        This means that we shall aim at making higher education equally available to all young people, as we now do education in the elementary and high schools, to the extent that their capacity warrants a further social investment in their training (President’s Commission, 1947, vol. 1, p. 36).

Tellingly, the report devotes a lot of space to exploring the existing barriers to educational opportunity posed by class and race – exactly the kinds of issues that were making liberal democracies look bad in light of the egalitarian promise of communism.

Decline of the System’s Public Mission

            So in the mid twentieth century, Americans went through an intense but brief infatuation with higher education as a public good.  Somehow college was going to help save us from the communist menace and the looming threat of nuclear war.  Like World War II, the Cold War brought together a notoriously individualistic population around the common goal of national survival and the preservation of liberal democracy.  It was a time when every public building had an area designated as a bomb shelter.  In the elementary school I attended in the 1950s, I can remember regular air raid drills.  The alarm would sound and teachers would lead us downstairs to the basement, whose concrete-block walls were supposed to protect us from a nuclear blast.  Although the drills did nothing to preserve life, they did serve an important social function.  Like Sunday church services, these rituals drew individuals together into communities of faith where we enacted our allegiance to a higher power. 

            For American college professors, these were the glory years, when fear of annihilation gave us a glamorous public mission and what seemed like an endless flow of public funds and funded students.  But it did not – and could not – last.  Wars can bring great benefits to the home front, but then they end.  The Cold War lasted longer than most, but this longevity came at the expense of intensity.  By the 1970s, the U.S. had lived with the nuclear threat for 30 years without any sign that the worst case was going to materialize.  You can only stand guard for so long before attention begins to flag and ordinary concerns start to push back to the surface.  In addition, waging war is extremely expensive, draining both public purse and public sympathy.  The two Cold War conflicts that engaged American troops cost a lot, stirred strong opposition, and ended badly, providing neither the idealistic glow of the Good War nor the satisfying closure of unconditional surrender by the enemy.  Korea ended with a stalemate and the return to the status quo ante bellum.  Vietnam ended with defeat and the humiliating image in 1975 of the last Americans being plucked off a rooftop in Saigon – which the victors then promptly renamed Ho Chi Minh City.

            The Soviet menace and the nuclear threat persisted, but in a form that – after the grim experience of war in the rice paddies – seemed distant and slightly unreal.  Add to this the problem that, as a tool for defeating the enemy, the radical expansion of higher education by the 70s did not appear to be a cost-effective option.  Higher ed is a very labor-intensive enterprise, in which size brings few economies of scale, and its public benefits in the war effort were hard to pin down.  As the national danger came to seem more remote, the costs of higher ed became more visible and more problematic.  Look around any university campus, and the primary beneficiaries of public largesse seem to be private actors – the faculty and staff who work there and the students whose degrees earn them higher income.  So about 30 years into the Cold War, the question naturally arose:  Why should the public pay so much to provide cushy jobs for the first group and to subsidize the personal ambition of the second?  If graduates reap the primary benefits of a college education, shouldn’t they be paying for it rather than the beleaguered taxpayer?

            The 1970s marked the beginning of the American tax revolt, and not surprisingly this revolt emerged first in the bellwether state of California.  Fueled by booming defense plants and high immigration, California had a great run in the decades after 1945.  During this period, the state developed the most comprehensive system of higher education in the country.  In 1960 it formalized this system with a Master Plan that offered every Californian the opportunity to attend college in one of three state systems.  The University of California focused on research, graduate programs, and educating the top high school graduates.  California State University (developed mostly from former teachers colleges) focused on undergraduate programs for the second tier of high school graduates.  The community college system offered the rest of the population two-year programs for vocational training and possible transfer to one of the two university systems.  By 1975, there were 9 campuses in the University of California, 23 in California State University, and xx in the community college system, with a total enrollment across all systems of 1.5 million students – accounting for 14 percent of the college students in the U.S. (Carter et al., 2006, Table Bc523; Douglass, 2000, Table 1).  Not only was the system enormous, but the Master Plan declared it illegal to charge California students tuition.  The biggest and best public system of higher education in the country was free.

            And this was the problem.  What allowed the system to grow so fast was a state fiscal regime that was quite rare in the American context – one based on high public services supported by high taxes.  After enjoying the benefits of this combination for a few years, taxpayers suddenly woke up to the realization that this approach to paying for higher education was at core un-American.  For a country deeply grounded in liberal democracy, a system of higher ed for all at no cost to the consumer looked a lot like socialism.  So, of course, it had to go.  In the mid-1970s the country’s first taxpayer revolt emerged in California, culminating in a successful campaign in 1978 to pass a statewide initiative that put a limit on increases in property taxes.  Other tax limitation initiatives followed (Martin, 2008).  As a result, the average state appropriation per student at the University of California dropped from about $3,400 (in 1960 dollars) in 1987 to $1,100 in 2010, a decline of 68 percent (UC Data Analysis, 2014).  This quickly led to a steady increase in fees charged to students at California’s colleges and universities.  (It turned out that tuition was illegal but demanding fees from students was not.)  In 1960 dollars, the annual fees for in-state undergraduates at the University of California rose from $317 in 1987 to $1,122 in 2010, an increase of more than 250 percent (UC Data Analysis, 2014).  This pattern of tax limitations and tuition increases spread across the country.  Nationwide during the same period, the average state appropriation per student at a four-year public college fell from $8,500 to $5,900 (in 2012 dollars), a decline of 31 percent, while average undergraduate tuition doubled, rising from $2,600 to $5,200 (SHEEO, 2013, Figure 3).

            The decline in the state share of higher education costs was most pronounced at the top public research universities, which had a wider range of income sources.  By 2009, the average such institution was receiving only 25 percent of its revenue from state government (National Science Board, 2012, Figure 5).  An extreme case is the University of Virginia, where in 2013 the state provided less than six percent of the university’s operating budget (University of Virginia, 2014).

            While these changes were happening at the state level, the federal government was also backing away from its Cold War generosity to students in higher education.  Legislation such as the National Defense Education Act (1958) and the Higher Education Act (1965) had provided support for students through a roughly equal balance of grants and loans.  But in 1980 the election of Ronald Reagan as president meant that the push to lower taxes would become national policy.  At this point, student aid shifted from grants toward federally guaranteed loans.  The idea was that a college degree was a great investment for students, one that would pay long-term economic dividends, so they should shoulder an increasing share of the cost.  The proportion of total student support in the form of loans was 54 percent in 1975, 67 percent in 1985, and 78 percent in 1995, and the ratio has remained at that level ever since (McPherson & Schapiro, 1998, Table 3.3; College Board, 2013, Table 1).  By 1995, students were borrowing $41 billion to attend college, a figure that grew to $89 billion by 2005 (College Board, 2014, Table 1).  At present, about 60 percent of all students accumulate college debt, most of it in the form of federal loans, and the total student debt load has passed $1 trillion.

            At the same time that the federal government was cutting back on funding college students, it was also reducing funding for university research.  As I mentioned earlier, federal research grants in constant dollars peaked at about $100 billion in 1990, the year after the fall of the Berlin Wall – a good marker for the end of the Cold War.  At this point defense accounted for about two-thirds of all university research funding – three-quarters if you include space research.  Defense research declined by about 20 percent during the 90s and didn’t start rising again substantially until 2002, the year after the fall of the Twin Towers and the beginning of the new existential threat known as the War on Terror.  Defense research reached a new peak in 2009 at a level about a third above the Cold War high, and it has been declining steadily ever since.  Increases in nondefense research helped compensate for only a part of the loss of defense funds (AAAS, 2014).

Conclusion

            The American system of higher education came into existence as a distinctly private good.  It arose in the nineteenth century to serve the pursuit of sectarian advantage and land speculation, and then in the twentieth century it evolved into a system for providing individual consumers a way to get ahead or stay ahead in the social hierarchy.  Quite late in the game, it took World War II to give higher education an expansive national mission and reconstitute it as a public good.  But hot wars are unsustainable for long, so after 1945 the system was sliding quickly back toward public irrelevance before it was saved by the timely arrival of the Cold War.  As I have shown, the Cold War was very, very good for the American system of higher education.  It produced a massive increase in funding by federal and state governments, both for university research and for college student subsidies, and – more critically – it sustained this support for a period of three decades.  But these golden years gradually gave way to a national wave of taxpayer fatigue and the surprise collapse of the Soviet Union.  With the nation strapped for funds and its global enemy dissolved, there was no longer an urgent need to enlist America’s colleges and universities in a grand national cause.  The result was a decade of declining research support and static student enrollments.  In 2002 the wars in Afghanistan and Iraq brought a momentary surge in both, but that surge peaked after only eight years and then went into decline.  Increasingly, higher education is returning to its roots as a private good.

            So what are we to take away from this story of the rise and fall of the Cold War university?  One conclusion is that the golden age of the American university in the mid twentieth century was a one-off event.  Wars may be endemic, but the Cold War was unique.  So American university administrators and professors need to stop pining for a return to the good old days and learn how to live in the post-Cold-War era.  The good news is that the surge in public investment in higher education left the system in a radically stronger condition than it was in before World War II.  Enrollments have gone from 1.5 million to 21 million; federal research funding has gone from zero to $135 billion; federal grants and loans to college students have gone from zero to $170 billion (NCES, 2014, Table 303.10; AAAS, 2014; College Board, 2014, Table 1).  And the American system of colleges and universities went from an international also-ran to a powerhouse in the world economy of higher education.  Even though all of these numbers are now dropping, they are dropping from a very high level, which is the legacy of the Cold War.  So really, we should stop whining.  We should just say thanks to the bomb for all that it did for us and move on.

            The bad news, of course, is that the numbers really are going down.  Government funding for research is declining and there is no prospect for a turnaround in the foreseeable future.  This is a problem because the federal government is the primary source of funds for basic research in the U.S.; corporations are only interested in investing in research that yields immediate dividends.  During the Cold War, research universities developed a business plan that depended heavily on external research funds to support faculty, graduate students, and overhead.  That model is now broken.  The cost of pursuing a college education is increasingly being borne by the students themselves, as states are paying a declining share of the costs of higher education.  Tuition is rising and as a result student loans are rising.  Public research universities are in a particularly difficult position because their state funding is falling most rapidly.  According to one estimate, at the current rate of decline the average state fiscal support for public higher education will reach zero in 2059 (Mortenson, 2012). 

            But in the midst of all of this bad news, we need to keep in mind that the American system of higher education has a long history of surviving and even thriving under conditions of at best modest public funding.  At its heart, this is a system of higher education based not on the state but on the market.  In the hardscrabble nineteenth century, the system developed mechanisms for getting by without steady support from church or state.  It learned how to attract tuition-paying students, give them the college experience they wanted, get them to identify closely with the institution, and then milk them for donations after they graduated.  Football, fraternities, logo-bearing T-shirts, and fund-raising operations all paid off handsomely.  It learned how to adapt quickly to trends in the competitive environment, whether through the adoption of intercollegiate football, the establishment of research centers to capitalize on funding opportunities, or the provision of food courts and rock-climbing walls.  Public institutions have a long history of behaving much like private institutions because they were never able to count on continuing state funding.

            This system has worked well over the years.  Along with the Cold War, it has enabled American higher education to achieve an admirable global status.  By the measures of citations, wealth, drawing power, and Nobel prizes, the system has been very effective.  But it comes with enormous costs.  Private universities have serious advantages over public universities, as we can see from university rankings.  The system is the most stratified structure of higher education in the world.  Top universities in the U.S. get an unacknowledged subsidy from the colleges at the bottom of the hierarchy, which receive less public funding, charge less tuition, and receive less generous donations.  And students sort themselves into institutions whose position in the college hierarchy parallels their own position in the status hierarchy.  Students with more cultural and economic capital gain greater social benefit from the system than those with less, since they go to college more often, attend the best institutions, and graduate at a much higher rate.  Nearly everyone can go to college in the U.S., but the colleges that are most accessible provide the least social advantage.

            So, conceived and nurtured into maturity as a private good, the American system of higher education remains a market-based organism.  It took the threat of nuclear war to turn it – briefly – into a public good.  But these days seem as remote as the time when schoolchildren huddled together in a bomb shelter. 

References

American Association for the Advancement of Science. (2014). Historical Trends in Federal R & D: By Function, Defense and Nondefense R & D, 1953-2015.  http://www.aaas.org/page/historical-trends-federal-rd (accessed 8-21-14).

Bledstein, B. J. (1976). The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York:  W. W. Norton.

Boorstin, D. J. (1965). Culture with Many Capitals: The  Booster College. In The Americans: The National Experience (pp. 152-161). New York: Knopf Doubleday.

Brown, D. K. (1995). Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism. New York: Teachers College Press.

Carter, S. B., et al. (2006). Historical Statistics of the United States, Millennial Edition Online. New York: Cambridge University Press.

College Board. (2013). Trends in student aid, 2013. New York: The College Board.

College Board. (2014). Trends in Higher Education: Total Federal and Nonfederal Loans over Time.  https://trends.collegeboard.org/student-aid/figures-tables/growth-federal-and-nonfederal-loans-over-time (accessed 9-4-14).

Collins, R. (1979). The Credential Society: An Historical Sociology of Education and Stratification. New York: Academic Press.

Douglass, J. A. (2000). The California Idea and American Higher Education: 1850 to the 1960 Master Plan. Stanford, CA: Stanford University Press.

Garrett, T. A., & Rhine, R. M. (2006).  On the Size and Growth of Government. Federal Reserve Bank of St. Louis Review, 88:1 (pp. 13-30).

Geiger, R. L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Goldin, C. & Katz, L. F. (2008). The Race between Education and Technology. Cambridge: Belknap Press of Harvard University Press.

Institute of Higher Education, Shanghai Jiao Tong University.  (2013).  Academic Ranking of World Universities – 2013.  http://www.shanghairanking.com/ARWU2013.html (accessed 6-11-14).

Levine, D. O. (1986). The American College and the Culture of Aspiration, 1914-1940. Ithaca: Cornell University Press.

Loss, C. P. (2012). Between Citizens and the State: The Politics of American Higher Education in the 20th Century. Princeton, NJ: Princeton University Press.

Martin, I. W. (2008). The Permanent Tax Revolt: How the Property Tax Transformed American Politics. Stanford, CA: Stanford University Press.

McPherson, M. S. & Schapiro, M. O.  (1999).  Reinforcing Stratification in American Higher Education:  Some Disturbing Trends.  Stanford: National Center for Postsecondary Improvement.

Mortenson, T. G. (2012).  State Funding: A Race to the Bottom.  The Presidency (winter).  http://www.acenet.edu/the-presidency/columns-and-features/Pages/state-funding-a-race-to-the-bottom.aspx (accessed 10-18-14).

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

National Science Board. (2012). Diminishing Funding Expectations: Trends and Challenges for Public Research Universities. Arlington, VA: National Science Foundation.

Potts, D. B. (1971).  American Colleges in the Nineteenth Century: From Localism to Denominationalism. History of Education Quarterly, 11: 4 (pp. 363-380).

President’s Commission on Higher Education. (1947). Higher Education for American Democracy: A Report. Washington, DC: US Government Printing Office.

Rüegg, W. (2004). European Universities and Similar Institutions in Existence between 1812 and the End of 1944: A Chronological List: Universities.  In Walter Rüegg, A History of the University in Europe, vol. 3. London: Cambridge University Press.

State Higher Education Executive Officers (SHEEO). (2013). State Higher Education Finance, FY 2012. www.sheeo.org/sites/default/files/publications/SHEF-FY12.pdf (accessed 9-8-14).

Terkel, S. (1997). The Good War: An Oral History of World War II. New York: New Press.

Tewksbury, D. G. (1932). The Founding of American Colleges and Universities before the Civil War. New York: Teachers College Press.

U of California Data Analysis. (2014). UC Funding and Fees Analysis.  http://ucpay.globl.org/funding_vs_fees.php (accessed 9-2-14).

University of Virginia (2014). Financing the University 101. http://www.virginia.edu/finance101/answers.html (accessed 9-2-14).

[1] Under pressure of the war effort, the department eventually relented and enlisted the help of chemists to study gas warfare.  But the initial response is telling.

[2] Not all of this funding went into the higher education system.  Some went to stand-alone research organizations such as the RAND Corporation and the American Institutes for Research.  But these organizations in many ways function as an adjunct to higher education, with researchers moving freely between them and the university.

Posted in Ed schools, Higher Education, History

Too Easy a Target: The Trouble with Ed Schools and the Implications for the University

This post is a piece I published in Academe (the journal of the AAUP) in 1999.  It provides an overview of the argument in my 2004 book, The Trouble with Ed Schools.  I reproduce it here as a public service:  if you read this, you won’t need to read my book, much less buy it.  You’re welcome.  Also, looking through it 20 years later, I was pleasantly surprised to find that it was kind of a fun read.  Here’s a link to the original.

The book and the article tell the story of the poor beleaguered ed school, maligned by one and all.  It’s a story of irony, in which an institution does what everyone asked of it and is thoroughly punished for the effort.  And it’s also a reverse Horatio Alger story, in which the beggar boy never makes it.  Here’s a glimpse of the argument, which starts with the ed school’s terrible reputation:

So how did things get this bad? No occupational group or subculture acquires a label as negative as this one without a long history of status deprivation. Critics complain about the weakness and irrelevance of teacher ed, but they rarely look at the reasons for its chronic status problems. If they did, they might find an interesting story, one that presents a more sympathetic, if not more flattering, portrait of the education school. They would also find, however, a story that portrays the rest of academe in a manner that is less self-serving than in the standard account. The historical part of this story focuses on the way that American policy makers, taxpayers, students, and universities collectively produced exactly the kind of education school they wanted. The structural part focuses on the nature of teaching as a form of social practice and the problems involved in trying to prepare people to pursue this practice.

Enjoy.

Ed Schools Cover

Too Easy a Target:

The Trouble with Ed Schools and the Implications for the University

By David F. Labaree

This is supposed to be the era of political correctness on American university campuses, a time when speaking ill of oppressed minorities is taboo. But while academics have to tiptoe around most topics, there is still one subordinate group that can be shelled with impunity — the sad sacks who inhabit the university’s education school. There is no need to take aim at this target because it is too big to miss, and there is no need to worry about hitting innocent bystanders because everyone associated with the ed school is understood to be guilty as charged.

Of course, education in general is a source of chronic concern and an object of continuous criticism for most Americans. Yet, as the annual Gallup Poll of attitudes toward education shows, citizens give good grades to their local schools at the same time that they express strong fears about the quality of public education elsewhere in the country. The vision is one of general threats to education that have not yet reached the neighborhood school but may do so in the near future. These threats include everything from the multicultural curriculum to the decline in the family, the influence of television, and the consequences of chronic poverty.

One such threat is the hapless education school, whose alleged incompetence and supposedly misguided ideas are seen as producing poorly prepared teachers and inadequate curricula. For the public, this institution is remote enough to be suspect (unlike the local school) and accessible enough to be scorned (unlike the more arcane university). For the university faculty, it is the ideal scapegoat, allowing blame for problems with schools to fall upon teacher education in particular rather than higher education in general.

For years, writers from right to left have been making the same basic complaints about the inferior quality of education faculties, the inadequacy of education students, and, to quote James Koerner’s 1963 classic, The Miseducation of American Teachers, their “puerile, repetitious, dull, and ambiguous” curriculum. This kind of complaining about ed schools is as commonplace as griping about the cold in the middle of winter. But something new has arisen in the defamatory discourse about these beleaguered institutions: the attacks are now coming from their own leaders. The victims are joining the victimizers.

So how did things get this bad? No occupational group or subculture acquires a label as negative as this one without a long history of status deprivation. Critics complain about the weakness and irrelevance of teacher ed, but they rarely look at the reasons for its chronic status problems. If they did, they might find an interesting story, one that presents a more sympathetic, if not more flattering, portrait of the education school. They would also find, however, a story that portrays the rest of academe in a manner that is less self-serving than in the standard account. The historical part of this story focuses on the way that American policy makers, taxpayers, students, and universities collectively produced exactly the kind of education school they wanted. The structural part focuses on the nature of teaching as a form of social practice and the problems involved in trying to prepare people to pursue this practice.

Decline of Normal Schools

Most education schools grew out of the normal schools that emerged in the second half of the nineteenth century. Their founders initially had heady dreams that these schools could become model institutions that would establish high-quality professional preparation for teachers along with a strong professional identity. For a time, some of the normal schools came close to realizing these dreams.

Soon, however, burgeoning enrollments in the expanding common schools produced an intense demand for new teachers to fill a growing number of classrooms, and the normal schools turned into teacher factories. They had to produce many teachers quickly and cheaply, or else school districts around the country would hire teachers without this training — or perhaps any form of professional preparation. So normal schools adapted by stressing quantity over quality, establishing a disturbing but durable pattern of weak professional preparation and low academic standards.

At the same time, normal schools had to confront a strong consumer demand from their own students, many of whom saw the schools as an accessible form of higher education rather than as a site for teacher preparation. Located close to home, unlike the more centrally located state universities and land grant colleges, the normal schools were also easier to get into and less costly. As a result, many students enrolled who had little or no interest in teaching; instead, they wanted an advanced educational credential that would gain them admission to attractive white-collar positions. They resisted being trapped within a single vocational track — the teacher preparation program — and demanded a wide array of college-level liberal arts classes and programs. Since normal schools depended heavily on tuition for their survival, they had little choice but to comply with the demands of their “customers.”

This compliance reinforced the already-established tendency toward minimizing the extent and rigor of teacher education. It also led the normal schools to transform themselves into the model of higher education that their customers wanted, first by changing into teachers’ colleges (with baccalaureate programs for nonteachers), then into state liberal-arts colleges, and finally into the general-purpose regional state universities they are today.

As the evolving colleges moved away from being normal schools, teacher education programs became increasingly marginal within their own institutions, which were coming to imitate the multipurpose university by giving pride of place to academic departments, graduate study, and preparation for the more prestigious professions. Teacher education came to be perceived as every student’s second choice, and the ed school professors came to be seen as second-class citizens in the academy.

Market Pressures in the Present

Market pressures on education schools have changed over the years, but they have not declined. Teaching is a very large occupation in the United States, with about 3 million practitioners in total. To fill all the available vacancies, approximately one in every five college graduates must enter teaching each year. If education schools do not prepare enough candidates, state legislators will authorize alternative routes into the profession (requiring little or no professional education), and school boards will hire such prospects to place warm bodies in empty classrooms.

Education schools that try to increase the duration and rigor of teacher preparation by focusing more intensively on smaller cohorts of students risk leaving the bulk of teaching in the hands of practitioners who are prepared at less demanding institutions or who have not been prepared at all. In addition, such efforts run into strong opposition from within the university, which needs ed students to provide the numbers that bring legislative appropriations and tuition payments. Subsidies from the traditionally cost-effective teacher-education factories support the university’s more prestigious, but less lucrative, endeavors. As a result, universities do not want their ed schools to turn into boutique programs for the preparation of a few highly professionalized teachers.

Another related source of institutional resistance arises whenever education schools try to promote quality over quantity. This resistance comes from academic departments, which have traditionally relied on the ability of their universities to provide teaching credentials as a way to induce students to major in “impractical” subjects. Departments such as English, history, and music have sold themselves to undergraduates for years with the argument that “you can always teach” these subjects. As a result, these same departments become upset when the education school starts to talk about upgrading, downsizing, or limiting access.

Stigmatized Populations and Soft Knowledge

The fact that education schools serve stigmatized populations aggravates the market pressures that have seriously undercut the status and the role of these schools. One such population is women, who currently account for about 70 percent of American teachers. Another is the working class, whose members have sought out the respectable knowledge-based white-collar work of teaching as a way to attain middle-class standing. Children make up a third stigmatized population. In a society that rewards contact with adults more than contact with children, and in a university setting that is more concerned with serious adult matters than with kid stuff, education schools lose out, because they are indelibly associated with children.

Teachers also suffer from an American bias in favor of doing over thinking. Teachers are the largest and most visible single group of intellectual workers in the United States — that is, people who make their living through the production and transmission of ideas. More accessible than the others in this category, teachers constitute the street-level intellectuals of our society. As the only intellectuals with whom most people will ever have close contact, teachers take the brunt of the national prejudice against book learning and those pursuits that are scornfully labeled as “academic.”

Another problem facing education schools is the low status of the knowledge they deal with: it is soft rather than hard, applied rather than pure. Hard disciplines (which claim to produce findings that are verifiable, definitive, and cumulative) outrank soft disciplines (whose central problem is interpretation and whose findings are always subject to debate and reinterpretation by others). Likewise, pure intellectual pursuits (which are oriented toward theory and abstracted from particular contexts) outrank those that are applied (which concentrate on practical work and concrete needs).

Knowledge about education is necessarily soft. Education is an extraordinarily complex social activity carried out by quirky and willful actors, and it steadfastly resists any efforts to reduce it to causal laws or predictive theories. Researchers cannot even count on being able to build on the foundation of other people’s work, since the validity of this work is always only partially established. Instead, they must make the best of a difficult situation. They try to interpret what is going on in education, but the claims they make based on these interpretations are highly contingent. Education professors can rarely speak with unclouded authority about their area of expertise or respond definitively when others challenge their authority. Outsiders find it child’s play to demonstrate the weaknesses of educational research and hold it up for ridicule for being inexact, contradictory, and impotent.

Knowledge about education is also necessarily applied. Education is not a discipline, defined by a theoretical apparatus and a research methodology, but an institutional area. As a result, education schools must focus their energies on the issues that arise from this area and respond to the practical concerns confronting educational practitioners in the field — even if doing so leads them into areas in which their constructs are less effective and their chances for success less promising. This situation unavoidably undermines the effectiveness and the intellectual coherence of educational research and thus also calls into question the academic stature of the faculty members who produce that research.

No Prestige for Practical Knowledge

Another related knowledge-based problem faces the education school. A good case can be made for the proposition that American education — particularly higher education — has long placed a greater emphasis on the exchange value of the educational experience (providing usable credentials that can be cashed in for a good job) than on its use value (providing usable knowledge). That is, what consumers have sought and universities have sold in the educational marketplace is not the content of the education received at the university (what the student actually learns there) but the form of this education (what the student can buy with a university degree).

One result of this commodification process is that universities have a strong incentive to promote research over teaching, for publications raise the visibility and prestige of the institution much more effectively than does instruction (which is less visible and more difficult to measure). And a prestigious faculty raises the exchange value of the university’s diploma, independently of whatever is learned in the process of acquiring this diploma. By relying heavily on its faculty’s high-status work in fields of hard knowledge, the university’s marketing effort does not leave an honored role for an education school that produces soft knowledge about practical problems.

A Losing Status, but a Winning Role?

What all of this suggests is that education schools are poorly positioned to play the university status game. They serve the wrong clientele and produce the wrong knowledge; they bear the mark of their modest origins and their traditionally weak programs. And yet they are pressured by everyone from their graduates’ employers to their university colleagues to stay the way they are, since they fulfill so many needs for so many constituencies.

But consider for a moment what would happen if we abandoned the status perspective in establishing the value of higher education. What if we focus instead on the social role of the education school rather than its social position in the academic firmament? What if we consider the possibility that education schools — toiling away in the dark basement of academic ignominy — in an odd way have actually been liberated by this condition from the constraints of academic status attainment? Is it possible that ed schools may have stumbled on a form of academic practice that could serve as a useful model for the rest of the university? What if the university followed this model and stopped selling its degrees on the basis of institutional prestige grounded in the production of abstract research and turned its focus on instruction in usable knowledge?

Though the university status game, with its reliance on raw credentialism — the pursuit of university degrees as a form of cultural currency that can be exchanged for social position — is not likely to go away soon, it is now under attack. Legislators, governors, business executives, and educational reformers are beginning to declare that indeed the emperor is wearing no clothes: that there is no necessary connection between university degrees and student knowledge or between professorial production and public benefit; that students need to learn something when they are in the university; that the content of what they learn should have some intrinsic value; that professors need to develop ideas that have a degree of practical significance; and that the whole university enterprise will have to justify the huge public and private investment it currently requires.

The market-based pattern of academic life has always had an element of the confidence game, since the whole structure depends on a network of interlocking beliefs that are tenuous at best: the belief that graduates of prestigious universities know more and can do more than other graduates; the belief that prestigious faculty make for a good university; and the belief that prestigious research makes for a good faculty. The problem is, of course, that when confidence in any of these beliefs is shaken, the whole structure can come tumbling down. And when it does, the only recourse is to rebuild on the basis of substance rather than reputation, demonstrations of competence rather than symbols of merit.

This dreaded moment is at hand. The fiscal crisis of the state, the growing political demand for accountability and utility, and the intensification of competition in higher education are all undermining the credibility of the current pattern of university life. Today’s relentless demand for lower taxes and reduced public services makes it hard for the university to justify a high level of public funding on the grounds of prestige alone. State governments are demanding that universities produce measurable beneficial outcomes for students, businesses, and other taxpaying sectors of the community. And, by withholding higher subsidies, states are throwing universities into a highly competitive situation in which they vie with one another to see who can attract the most tuition dollars and the most outside research grants, and who can keep the tightest control over internal costs.

In this kind of environment, education schools have a certain advantage over many other colleges and departments in the university. Unlike their competitors across campus, they offer traditionally low-cost programs designed explicitly to be useful, both to students and to the community. They give students practical preparation for and access to a large sector of employment opportunities. Their research focuses on an area about which Americans worry a great deal, and they offer consulting services and policy advice. In short, their teaching, research, and service activities are all potentially useful to students and community alike. How many colleges of arts and letters can say the same?

But before we get carried away with the counterintuitive notion that ed schools might serve as a model for a university under fire, we need to understand that these browbeaten institutions will continue to gain little credit for their efforts to serve useful social purposes, in spite of the current political saliency of such efforts. One reason for that is the peculiar nature of teaching, the occupation for which ed schools are obliged to prepare candidates. Another is the difficulty that faces any academic unit that tries to walk the border between theory and practice.

A Peculiar Kind of Professional

Teaching is an extraordinarily complex job. Researchers have estimated that the average teacher makes upward of 150 conscious instructional decisions during the course of the day, each of which has potentially significant consequences for the students involved. From the standpoint of public relations, however, the key difficulty is that, for the outsider, teaching looks all too easy. Its work is so visible, the skills required to do it seem so ordinary, and the knowledge it seeks to transmit is so generic. Students spend a long time observing teachers at work. If you figure that the average student spends 6 hours a day in school for 180 days a year over the course of 12 years, that means that a high school graduate will have logged about 13,000 hours watching teachers do their thing. No other social role (with the possible exception of parent) is so well known to the general public. And certainly no other form of paid employment is so well understood by prospective practitioners before they take their first day of formal professional education.

By comparison, consider other occupations that require professional preparation in the university. Before entering medical, law, or business school, students are lucky if they have spent a dozen hours in close observation of a doctor, lawyer, or businessperson at work. For these students, professional school provides an introduction to the mysteries of an arcane and remote field. But for prospective teachers, the education school seems to offer at best a gloss on a familiar topic and at worst an unnecessary hurdle for twelve-year apprentices who already know their stuff.

Not only have teacher candidates put in what one scholar calls a long “apprenticeship of observation,” but they have also noted during this apprenticeship that the skills a teacher requires are no big deal. For one thing, ordinary adult citizens already know the subject matter that elementary and secondary school teachers seek to pass along to their students — reading, writing, and math; basic information about history, science, and literature; and so on. Because there is nothing obscure about these materials, teaching seems to have nothing about it that can match the mystery and opaqueness of legal contracts, medical diagnoses, or business accounting.

Of course, this perception by the prospective teacher and the public about the skills involved in teaching leaves out the crucial problem of how a teacher goes about teaching ordinary subjects to particular students. Reading is one thing, but knowing how to teach reading is another matter altogether. Ed schools seek to fill this gap in knowledge by focusing on the pedagogy of teaching particular subjects to particular students, but they do so over the resistance of teacher candidates who believe they already know how to teach and a public that fails to see pedagogy as a meaningful skill.

Compounding this resistance to the notion that teachers have special pedagogical skills is the student’s general experience (at least in retrospect) that learning is not that hard — and, therefore, by extension, that teaching is not hard either. Unlike doctors and lawyers, who use their arcane expertise for the benefit of the client without passing along the expertise itself, teachers are in the business of giving away their expertise. Their goal is to empower the student to the point at which the teacher is no longer needed and the student can function effectively without outside help. The best teachers make learning seem easy and make their own role in the learning process seem marginal. As a result, it is easy to underestimate the difficulty of being a good teacher — and of preparing people to become good teachers.

Finally, the education school does not have exclusive rights to the subject matter that teachers teach. The only part of the teacher’s knowledge over which the ed school has some control is the knowledge about how to teach. Teachers learn about English, history, math, biology, music, and other subjects from the academic departments at the university in charge of these areas of knowledge. Yet, despite the university’s shared responsibility for preparing teachers, ed schools are held accountable for the quality of the teachers and other educators they produce, often taking the blame for the deficiencies of an inadequate university education.

The Border Between Theory and Practice

The intellectual problem facing American education schools is as daunting as the instructional problem, for the territory in which ed schools do research is the mine-strewn border between theory and practice. Traditionally, the university’s peculiar area of expertise has been theory, while the public school is a realm of practice.  In reality, the situation is more complicated, since neither institution can function without relying on both forms of knowledge. Education schools exist, in part, to provide a border crossing between these two countries, each with its own distinctive language and culture and its own peculiar social structure. When an ed school is working well, it presents a model of fluid interaction between university and school and encourages others on both sides of the divide to follow suit. The ideal is to encourage the development of teachers and other educators who can draw on theory to inform their instructional practice, while encouraging university professors to become practice-oriented theoreticians, able to draw on issues from practice in their theory building and to produce theories with potential use value.

In reality, no education school (or any other institution, for that matter) can come close to meeting this ideal. The tendency is to fall on one side of the border or the other — where life is more comfortable and the responsibilities more clear cut — rather than to hold the middle ground and retain the ability to work well in both domains.

But because of their location in the university and their identification with elementary and secondary schools, ed schools have had to keep working along the border. In the process, they draw unrelenting fire from both sides. The university views colleges of education as nothing but trade schools, which supply vocational training but no academic curriculum. Students, complaining that ed-school courses are too abstract and academic, demand more field experience and fewer course requirements. From one perspective, ed-school research is too soft, too applied, and totally lacking in academic rigor, while from another, it is impractical and irrelevant, serving a university agenda while being largely useless to the schools.

Of course, both sides may be right. After years of making and attending presentations at the annual meeting of the American Educational Research Association, I am willing to concede that much of the work produced by educational researchers is lacking in both intellectual merit and practical application. But I would also argue that there is something noble and necessary about the way that the denizens of ed schools continue their quest for a workable balance between theory and practice. If only others in the academy would try to accomplish a marriage of academic elegance and social impact.

A Model for Academe

So where does this leave us in thinking about the poor beleaguered ed school? And what lessons, if any, can be learned from its checkered history?

The genuine instructional and intellectual weakness of ed schools results from the way the schools did what was demanded of them, which, though understandable, was not exactly honorable. Even so, much of the scorn that has come down on the ed school stems from its lowly status rather than from any demonstrable deficiencies in the educational role it has played. But then institutional status has a circular quality about it, which means that predictions of high or low institutional quality become self-fulfilling.

In some ways, ed schools have been doing things right. They have wrestled vigorously (if not always to good effect) with the problems of public education, an area that is of deep concern to most citizens. This has meant tackling social problems of great complexity and practical importance, even though the university does not place much value on the production of this kind of messy, indeterminate, and applied knowledge.

Oddly enough, the rest of the university could learn a lot from the example of the ed school. The question, however, is whether others in the university will see the example of the ed school as positive or negative. If academics consider this story in light of the current political and fiscal climate, then the ed school could serve as a model for a way to meet growing public expectations for universities to teach things that students need to know and to generate knowledge that benefits the community.

But it seems more likely that academics will consider this story a cautionary tale about how risky and unrewarding such a strategy can be. After all, education schools have demonstrated that they are neither very successful at accomplishing the marriage of theory and practice nor well rewarded for trying. In fact, the odor of failure and disrespect continues to linger in the air around these institutions. In light of such considerations, academics are likely to feel more comfortable placing their chips in the university’s traditional confidence game, continuing to pursue academic status and to market educational credentials. And from this perspective, the example of the ed school is one they should avoid like the plague. 

Posted in Democracy, History, Liberty, Race

The Central Link between Liberty and Slavery in American History

In this post, I explore insights from two important books about the peculiar way in which liberty and slavery jointly emerged from the context of colonial America. One is a new book by David Stasavage, The Decline and Rise of Democracy. The other is a 1992 book by Toni Morrison, Playing in the Dark: Whiteness and the Literary Imagination. The core point I draw from Stasavage is that the same factors that nurtured the development of political liberty in the American context also led to the development of slavery. The related point I draw from Morrison is that the existence of slavery was fundamental in energizing the colonists’ push for self-rule.

Stasavage Cover

The Stasavage book explores the history of democracy in the world, starting with early forms that emerged in premodern North America, Europe, and Africa and then fell into decline, followed by the rise of modern parliamentary democracy. He contrasts this with an alternative form of governance, autocracy, which arose in many different times and places but appeared earliest and most enduringly in China.

He argues that three conditions were necessary for the emergence of early democracy. One is small scale, which allows people to confer as a group instead of relying on a distant leader. Another is that rulers lack knowledge about what their people are producing, of the kind an administrative bureaucracy could provide, which means they need to share power in order to levy taxes effectively. But I want to focus on the third factor — the existence of an exit option — which is most salient to the colonial American case. Here’s how he describes it:

The third factor that led to early democracy involved the balance between how much rulers needed their people and how much people could do without their rulers. When rulers had a greater need for revenue, they were more likely to accept governing in a collaborative fashion, and this was even more likely if they needed people to fight wars. With inadequate means of simply compelling people to fight, rulers offered them political rights. The flip side of all this was that whenever the populace found it easier to do without a particular ruler—say by moving to a new location—then rulers felt compelled to govern more consensually. The idea that exit options influence hierarchy is, in fact, so general it also applies to species other than humans. Among species as diverse as ants, birds, and wasps, social organization tends to be less hierarchical when the costs of what biologists call “dispersal” are low.

The central factor that supported the development of democracy in the British colonies was the scarcity of labor:

A broad manhood suffrage took hold in the British part of colonial North America not because of distinctive ideas but for the simple reason that in an environment where land was abundant and labor was scarce, ordinary people had good exit options. This was the same fundamental factor that had favored democracy in other societies.

And this was also the factor that promoted slavery: “Political rights for whites and slavery for Africans derived from the same underlying environmental condition of labor scarcity.” Because of this scarcity, agricultural enterprises in the colonies needed a way to ensure a steady flow of laborers across the Atlantic and a way to keep them on the job once they arrived. The central mechanisms for doing that were indentured servitude and slavery. Some indentured servants were recruited in Britain with the promise of free passage to the new world in return for a contract to work for a certain number of years. Others were simply kidnapped, shipped, and then forced to work off their passage. Africans, meanwhile, initially came to the colonies in a variety of statuses, but their position increasingly shifted toward full slavery. Here’s how he describes the situation in the Tidewater colonies.

The early days of forced shipment of English to Virginia sounds like it would have been an environment ripe for servitude once they got there. In fact, it did not always work that way. Once they finished their period of indenture, many English migrants established farms of their own. This exit option must have been facilitated by the fact that they looked like Virginia’s existing British colonists, and they also sounded like them. They would have also shared a host of other cultural commonalities. In other words, they had a good outside option.

Now consider the case of Africans in Virginia, Maryland, and the other British colonies in North America who began arriving in 1619. The earliest African arrivals to Virginia and Maryland came in a variety of situations. Some were free and remained so, some were indentured under term contracts analogous to those of many white migrants, and some came entirely unfree. Outside options also mattered for Africans, and for several obvious reasons they were much worse than those for white migrants. Africans looked different than English people, they most often would not have arrived speaking English, or being aware of English cultural practices, and there is plenty of evidence that people in Elizabethan and Jacobean England associated dark skin with inferiority or other negative qualities. Outside options for Africans were remote to nonexistent. The sustainability of slavery in colonies like Virginia and Maryland depended on Africans not being able to escape and find labor elsewhere. For slave owners it of course helped that they had the law on their side. This law evolved quickly to define exactly what a “slave” was, there having been no prior juridical definition of the term. Africans were now to be slaves whereas kidnapped British boys were bound by “the custom of the country,” meaning that eventual release could be expected.

So labor scarcity and the existence of an attractive exit option provided the formative conditions for developing both white self-rule and Black enslavement.

Morrison Book Cover

Toni Morrison’s book is a reflection on the enduring impact of whiteness and blackness in shaping American literature. In the passage below, from the chapter titled “Romancing the Shadow,” she is talking about the romantic literary tradition in the U.S.

There is no romance free of what Herman Melville called “the power of blackness,” especially not in a country in which there was a resident population, already black, upon which the imagination could play; through which historical, moral, metaphysical, and social fears, problems, and dichotomies could be articulated. The slave population, it could be and was assumed, offered itself up as surrogate selves for meditation on problems of human freedom, its lure and its elusiveness. This black population was available for meditations on terror — the terror of European outcasts, their dread of failure, powerlessness, Nature without limits, natal loneliness, internal aggression, evil, sin, greed. In other words, this slave population was understood to have offered itself up for reflections on human freedom in terms other than the abstractions of human potential and the rights of man.

The ways in which artists — and the society that bred them — transferred internal conflicts to a “blank darkness,” to conveniently bound and violently silenced black bodies, is a major theme in American literature. The rights of man, for example, an organizing principle upon which the nation was founded, was inevitably yoked to Africanism. Its history, its origin is permanently allied with another seductive concept: the hierarchy of race…. The concept of freedom did not emerge in a vacuum. Nothing highlighted freedom — if it did not in fact create it — like slavery.

Black slavery enriched the country’s creative possibilities. For in that construction of blackness and enslavement could be found not only the not-free but also, with the dramatic polarity created by skin color, the projection of the not-me. The result was a playground for the imagination. What rose up out of collective needs to allay internal fears and to rationalize external exploitation was an American Africanism — a fabricated brew of darkness, otherness, alarm, and desire that is uniquely American.

Such a lovely passage describing such an ugly distinction.  She’s saying that for Caucasian plantation owners in the Tidewater colonies, the presence of Black slaves was a vivid and visceral reminder of what it means to be not-free and thus decidedly not-me.  For people like Jefferson and Washington and Madison, the most terrifying form of unfreedom was in their faces every day.  More than their pale brethren in the Northern colonies, they had a compelling desire to never be treated by the king even remotely like the way they treated their own slaves.  

“The concept of freedom did not emerge in a vacuum. Nothing highlighted freedom — if it did not in fact create it — like slavery.”

Posted in History, Schooling, Welfare

Michael Katz — Public Education as Welfare

In this post, I reproduce a seminal essay by Michael Katz called “Public Education as Welfare.” It was originally published in Dissent in 2010 (link to the original) and it draws on his book, The Price of Citizenship: Redefining the American Welfare State.  

I encountered this essay when I was working on a piece of my own about the role that US public schools play as social welfare agencies.  My interest emerged from an op-ed about what is lost when schools close that I published a couple of weeks ago and then posted here.  Michael was my dissertation advisor back at Penn, and I remembered he had written about the connection between schooling and welfare.  As you’ll see when I publish my essay here in a week or so, my focus is on the welfare function of schooling alongside its other functions: building political community, promoting economic growth, and providing advantage in the competition for social position.  

Katz takes a much broader approach, seeking to locate schools as a central component of the peculiar form of the American welfare state.  He does a brilliant job of locating schooling in relation to the complex array of other public and private programs that constitute this rickety and fiendishly complex structure.  Enjoy.

Katz Cover

Public Education as Welfare

Michael B. Katz

Welfare is the most despised public institution in America. Public education is the most iconic. To associate them with each other will strike most Americans as bizarre, even offensive. The link would be less surprising to nineteenth-century reformers for whom crime, poverty, and ignorance formed an unholy trinity against which they struggled. Nor would it raise British eyebrows. Ignorance was one of the “five giants” to be slain by the new welfare state proposed in the famous Beveridge Report. National health insurance, the cornerstone of the British welfare state, and the 1944 Education Act, which introduced the first national system of secondary education to Britain, were passed by Parliament only two years apart. Yet, in the United States, only a few students of welfare and education have even suggested that the two might stand together.

Why this mutual neglect? And how does public education fit into the architecture of the welfare state? It is important to answer these questions. Both the welfare state and the public school system are enormous and in one way or another touch every single American. Insight into the links between the two will illuminate the mechanisms through which American governments try to accomplish their goals; and it will show how institutions whose public purpose is egalitarian in fact reproduce inequality.

The definition and boundaries of the welfare state remain contentious topics. I believe that the term “welfare state” refers to a collection of programs designed to assure economic security to all citizens by guaranteeing the fundamental necessities of life: food, shelter, medical care, protection in childhood, and support in old age. In the United States, the term generally excludes private efforts to provide these goods. But the best way to understand a nation’s welfare state is not to apply a theoretically driven definition but, rather, to examine the mechanisms through which legislators, service providers, and employers, whether public, private, or a mix of the two, try to prevent or respond to poverty, illness, dependency, economic insecurity, and old age.

Where does public education fit within this account? First, most concretely, for more than a century schools have been used as agents of the welfare state to deliver social services, such as nutrition and health. Today, in poor neighborhoods, they often provide hot breakfasts among other services. More to the point, public school systems administer one of the nation’s largest programs of economic redistribution. Most accounts of the financing of public education stress the opposite point by highlighting inequities, “savage inequalities,” to borrow Jonathan Kozol’s phrase, that shortchange city youngsters and racial minorities. These result mostly from the much higher per-pupil spending in affluent suburbs than in poor inner cities, where yields from property taxes are much lower. All this is undeniable as well as unacceptable.

But tilt the angle and look at the question from another perspective. Consider how much the average family with children pays in property taxes, the principal support for schools. Then focus on per-pupil expenditure, even in poor districts. You will find that families, including poor city families, receive benefits worth much more than they have contributed. Wealthier families, childless and empty-nest couples, and businesses subsidize families with children in school.

There is nothing new about this. The mid-nineteenth-century founders of public school systems, like Horace Mann, and their opponents understood the redistributive character of public education. To build school systems, early school promoters needed to persuade the wealthy and childless that universal, free education would serve their interests by reducing the incidence of crime, lowering the cost of poor relief, improving the skills and attitudes of workers, assimilating immigrants—and therefore saving them money in the long run. So successful were early school promoters that taxation for public education lost its controversial quality. With just a few exceptions, debates focused on the amount of taxes, not on their legitimacy. The exceptions occurred primarily around the founding of high schools that working-class and other voters correctly observed would serve only a small fraction of families at a time when most youngsters in their early teens were sent out to work or kept at home to help their families. For the most part, however, the redistributive quality of public education sank further from public consciousness. This is what early school promoters wanted and had worked to make happen. When they began their work in the early nineteenth century, “public” usually referred to schools widely available and either free or cheap—in short, schools for the poor. School promoters worked tirelessly to break this link between public and pauper that inhibited the development of universal public education systems. So successful were they that today the linkage seems outrageous—though in cities where most of the remaining affluent families send their children to private schools, the association of public with pauper has reemerged with renewed ferocity.

As a concrete example, here is a back-of-the-envelope illustration. In 2003–2004, public elementary and secondary education in the United States cost $403 billion or, on average, $8,310 per student (or, taking the median, $7,860). Most families paid nothing like the full cost of this education in taxes. Property taxes, which account for a huge share of spending on public schools, average $935 per person or, for a family of four, something under $4,000, less than half the average per-pupil cost. As rough as these figures are, they do suggest that most families with school-age children receive much more from spending on public education than they contribute in taxes. (A similar point could be made about public higher education.)

Taxpayers provide this subsidy because they view public education as a crucial public good. It prevents poverty, lowers the crime rate, prepares young people for the work force, and fosters social mobility—or so the story goes. The reality, as historians of education have shown, is a good deal more complex. Public education is the mechanism through which the United States solves problems and attempts to reach goals achieved more directly or through different mechanisms in other countries. International comparisons usually brand the United States a welfare laggard because it spends less of its national income on welfare-related benefits than do other advanced industrial democracies. But the comparisons leave out spending on public education, private social services, employer-provided health care and pensions, and benefits delivered through the tax code, a definitional weakness whose importance will become clearer when I describe the architecture of the welfare state.

***

Almost thirty-five years ago, in Social Control of the Welfare State, Morris Janowitz pointed out that “the most significant difference between the institutional bases of the welfare state in Great Britain and the United States was the emphasis placed on public education—especially for lower income groups—in the United States. Massive support for the expansion of public education . . . in the United States must be seen as a central component of the American notion of welfare . . .” In the late nineteenth and early twentieth centuries, while other nations were introducing unemployment, old age, and health insurance, the United States was building high schools for a huge surge in enrollment. “One would have to return to the 1910s to find levels of secondary school enrollment in the United States that match those in 1950s Western Europe,” point out economists Claudia Goldin and Lawrence F. Katz in The Race Between Education and Technology. European nations were about a generation behind the United States in expanding secondary education; the United States was about a generation behind Europe in instituting its welfare state.

If we think of education as a component of the welfare state, we can see that the U.S. welfare state focuses on enhancing equality of opportunity in contrast to European welfare states, which have been more sympathetic to equality of condition. In the United States, equality has always been primarily about a level playing field where individuals can compete unhindered by obstacles that crimp the full expression of their native talents; education has served as the main mechanism for leveling the field. European concepts of equality more often focus on group inequality and the collective mitigation of handicaps and risks that, in the United States, have been left for individuals to deal with on their own.

***

Public education is part of the American welfare state. But which one? Each part is rooted in a different place in American history. Think of the welfare state as a loosely constructed, largely unplanned structure erected by many different people over centuries. This rickety structure, which no sane person would have designed, consists of two main divisions, the public and private welfare states, with subdivisions within each. The divisions of the public welfare state are public assistance, social insurance, and taxation. Public assistance (called outdoor relief through most of its history) originated with the Elizabethan poor laws brought over by the colonists. It consists of means-tested benefits. Before 1996, the primary example was Aid to Families with Dependent Children (AFDC), and since 1996, it has been Temporary Assistance to Needy Families (TANF)—the programs current-day Americans usually have in mind when they speak of “welfare.”

Social insurance originated in Europe in the late nineteenth century and made its way slowly to the United States. The first form of U.S. social insurance was workers’ compensation, instituted by several state governments in the early twentieth century. Social insurance benefits accrue to individuals on account of fixed criteria such as age. They are called insurance because they are allegedly based on prior contributions. The major programs—Social Security for the elderly and unemployment insurance—emerged in 1935 when Congress passed the Social Security Act. Social insurance benefits are much higher than benefits provided through public assistance, and they carry no stigma.

The third track in the public welfare state is taxation. U.S. governments, both federal and state, administer important benefits through the tax code rather than through direct grants. This is the most modern feature of the welfare state. The major example of a benefit aimed at poor people is the Earned Income Tax Credit, which expanded greatly during the Clinton presidency.

Within the private welfare state are two divisions: charities and social services and employee benefits. Charities and social services have a long and diverse history. In the 1960s, governments started to fund an increasing number of services through private agencies. (In America, governments primarily write checks; they do not usually operate programs.) More and more dependent on public funding, private agencies increasingly became, in effect, government providers, a transformation with profound implications for their work. Employee benefits constitute the other division in the private welfare state. These date primarily from the period after the Second World War. They expanded as a result of the growth of unions, legitimated by the 1935 Wagner Act and 1949 decisions of the National Labor Relations Board, which held that employers were required to bargain over, though not required to provide, employee benefits.

Some economists object to including these benefits within the welfare state, but they are mistaken. Employee benefits represent the mechanism through which the United States has chosen to meet the health care needs of the majority of its population. About 60 percent of Americans receive their health insurance through their employer, and many receive pensions as well. If unions had bargained hard for a public rather than a private welfare state, the larger American welfare state would look very different. Moreover, the federal government encourages the delivery of health care and pensions through private employers by allowing them to deduct the cost from taxes, and it supervises them with massive regulations, notably the Employee Retirement Income Security Act of 1974.

The first thing to stress about this welfare state is that its divisions are not distinct. They overlap and blend in complicated ways, giving the American welfare state a mixed economy not usefully described as either public or private. At the same time, federalism constrains its options, with some benefits provided by the federal government and others offered through state and local governments. Throughout the twentieth century, one great problem facing would-be welfare state builders was designing benefits to pass constitutional muster.

How does public education fit into this odd, bifurcated structure? It shares characteristics with social insurance, public assistance, and social services. At first, it appears closest to social insurance. Its benefits are universal and not means tested, which makes them similar to Social Security (although Social Security benefits received by high income individuals are taxed). But education benefits are largely in kind, as are food stamps, housing, and Medicare. (In-kind benefits are “government provision of goods and services to those in need of them” rather than of “income sufficient to meet their needs via the market.”) Nor are the benefits earned by recipients through prior payroll contributions or employment. This separates them from Social Security, unemployment insurance, and workers’ compensation. Public education is also an enormous source of employment, second only to health care in the public welfare state.

Even more important, public education is primarily local. Great variation exists among states and, within states, among municipalities. In this regard, it differs completely from Social Security and Medicare, whose nationally-set benefits are uniform across the nation. It is more like unemployment insurance, workers’ compensation, and TANF (and earlier AFDC), which vary by state, but not by municipality within states. The adequacy of educational benefits, by contrast, varies with municipal wealth. Education, in fact, is the only public benefit financed largely by property taxes. This confusing mix of administrative and financial patterns provides another example of how history shapes institutions and policy.

Because of its differences from both social insurance and public assistance, public education composes a separate division within the public welfare state. But it moves in the same directions as the rest. The forces redefining the American welfare state have buffeted public schools as well as public assistance, social insurance, and private welfare.

***

Since the 1980s, the pursuit of three objectives has driven change in the giant welfare state edifice. These objectives are, first, a war on dependence in all its forms—not only the dependence of young unmarried mothers on welfare but all forms of dependence on public and private support, including the dependence of workers on paternalistic employers for secure, long-term jobs and benefits. Second is the devolution of authority—the transfer of power from the federal government to the states, from states to localities, and from the public to the private sector. Last is the application of free market models to social policy. Everywhere the market triumphed as the template for a reengineered welfare state. This is not a partisan story. Broad consensus on these objectives crossed party lines. Within the reconfigured welfare state, work in the regular labor market emerged as the gold standard, the mark of first-class citizenship, carrying with it entitlement to the most generous benefits. The corollary, of course, was that failure or inability to join the regular labor force meant relegation to second-class citizenship, where benefits were mean, punitive, or just unavailable.

The war on dependence, the devolution of authority, and the application of market models also run through the history of public education in these decades. The attack on “social promotion,” emphasis on high-stakes tests, implementation of tougher high school graduation requirements, and transmutation of “accountability” into the engine of school reform: all these developments are of a piece with the war on dependence. They call for students to stand on their own with rewards distributed strictly according to personal (testable) merit. Other developments point to the practice of devolution in public education. A prime example is the turn toward site-based management—that is, the decentralization of significant administrative authority from central offices to individual schools. The most extreme example is Chicago’s 1989 school reform, which put local school councils in charge of each school, even giving them authority to hire and fire principals.

At the same time, a countervailing trend, represented by the 2002 federal No Child Left Behind legislation and the imposition of standards, limited the autonomy of teachers and schools and imposed new forms of centralization. At least, that was the intent. In fact, left to develop their own standards, many states avoided penalties mandated in No Child Left Behind by lowering the bar and making it easier for students to pass the required tests. In 2010, the nation’s governors and state school superintendents convened a panel of experts to reverse this race to the bottom. The panel recommended combining a set of national standards—initially for English and math—with local autonomy in curriculum design and teaching methods. The Obama administration endorsed the recommendations and included them in its educational reform proposals.

In this slightly schizoid blend of local autonomy and central control, trends in public education paralleled developments in the administration of public assistance: the 1996 federal “welfare reform” legislation mandated a set of outcomes but left states autonomy in reaching them. In both education and public assistance, the mechanism of reform became the centralization of acceptable outcomes and the decentralization of the means for achieving them.

***

As for the market as a template for reform, it was everywhere in education as well as the rest of the welfare state. Markets invaded schools with compulsory viewing of the advertising on Chris Whittle’s Channel One “free” television news for schools, and with the kickbacks to schools from Coke, Pepsi, and other products sold in vending machines—money schools desperately needed as their budgets for sports, arts, and culture were cut. Some school districts turned over individual schools to for-profit corporations such as Edison Schools, while advocacy of vouchers and private charter schools reflected the belief that blending competition among providers with parental choice would expose poorly performing schools and teachers and motivate others to improve.

Unlike the situation in the rest of the welfare state, educational benefits cannot be tied to employment. But they are stratified nonetheless by location, wealth, and race. The forces eroding the fiscal capacities of cities and old suburbs—withdrawal of federal aid and a shrinking tax base—have had a devastating impact on public education and on children and adolescents, relegating a great many youngsters living in poor or near-poor families to second-class citizenship. In the educational division of the public welfare state, test results play the role taken on elsewhere by employment. They are gatekeepers to the benefits of first-class citizenship. The danger is that high-stakes tests and stiffer graduation requirements will further stratify citizenship among the young, with kids failing tests joining stay-at-home mothers and out-of-work black men as the “undeserving poor.” In this way, public education complements the rest of the welfare state as a mechanism for reproducing, as well as mitigating, inequality in America.

***

Michael B. Katz is Walter H. Annenberg Professor of History at the University of Pennsylvania. His conception of the architecture of the American welfare state and the forces driving change within it are elaborated in his book The Price of Citizenship: Redefining the American Welfare State, updated edition (University of Pennsylvania Press).

Posted in Capitalism, History, Modernity, Religion, Theory

Blaustein: Searching for Consolation in Max Weber’s Work Ethic

 

Last summer I posted a classic lecture by the great German sociologist Max Weber, “Science as a Vocation.” Recently I ran across a terrific essay by George Blaustein about Weber’s vision of the modern world, drawing on this lecture and two other seminal works: the lecture “Politics as a Vocation” (delivered a year after the science lecture) and the book The Protestant Ethic and the Spirit of Capitalism. Here’s a link to the original Blaustein essay on the New Republic website.

Like so many great theorists (Marx, Durkheim, Foucault, etc.), Weber was intensely interested in understanding the formation of modernity.  How did the shift from premodern to modern come about?  What prompted it?  What are the central characteristics of modernity?  What are the main forces that drive it?  As Blaustein shows so adeptly, Weber’s take is a remarkably gloomy one.  He sees the change as one of disenchantment, in which we lost the certitudes of faith and tradition and are left with a regime of soulless rationalism and relentless industry.  Here’s how he put it in his science lecture:

The fate of our times is characterized by rationalization and intellectualization and, above all, by the ‘disenchantment of the world.’ Precisely the ultimate and most sublime values have retreated from public life either into the transcendental realm of mystic life or into the brotherliness of direct and personal human relations….

In his view, there is no turning back, no matter how much you feel you have lost, unless you are willing to surrender reason to faith.  This he is not willing to do, but he understands why others might choose differently.

To the person who cannot bear the fate of the times like a man, one must say: may he rather return silently, without the usual publicity build-up of renegades, but simply and plainly. The arms of the old churches are opened widely and compassionately for him. After all, they do not make it hard for him. One way or another he has to bring his ‘intellectual sacrifice‘ — that is inevitable. If he can really do it, we shall not rebuke him.

In The Protestant Ethic, he explores the Calvinist roots of the capitalist work ethic, in which the living saints worked hard in this world to demonstrate (especially to themselves) that they had been elected to eternal life in the next world.  Instead of earning to spend on themselves, they reinvested their earnings in economic capital on earth and spiritual capital in heaven.  But the ironic legacy of this noble quest is our own situation, in which we work in order to work, without purpose or hope.  Here’s how he puts it in the famous words that close his book.

The Puritan wanted to work in a calling; we are forced to do so. For when asceticism was carried out of monastic cells into everyday life, and began to dominate worldly morality, it did its part in building the tremendous cosmos of the modern economic order. This order is now bound to the technical and economic conditions of machine production which to-day determine the lives of all the individuals who are born into this mechanism, not only those directly concerned with economic acquisition, with irresistible force.  Perhaps it will so determine them until the last ton of fossilized coal is burnt.  In Baxter’s view the care for external goods should only lie on the shoulders of the “saint like a light cloak, which can be thrown aside at any moment.” But fate decreed that the cloak should become an iron cage.

I hope you gain as much insight from this essay as I did.

Protestant Ethic

Searching for Consolation in Max Weber’s Work Ethic

People worked hard long before there was a thing called the “work ethic,” much less a “Protestant work ethic.” The phrase itself emerged early in the twentieth century and has since congealed into a cliché. It is less a real thing than a story that people, and nations, tell themselves about themselves. I am from the United States but now live in Amsterdam; the Dutch often claim the mantle of an industrious, Apollonian Northern Europe, as distinct from a dissolute, Dionysian, imaginary South. Or the Dutch invoke the Protestant ethic with self-deprecating smugness: Alas, we are so productive. Both invocations are absurd. The modern Dutch, bless them, are at least as lazy as everyone else, and their enjoyments are vulgar and plentiful.

In the U.S., meanwhile, celebrations of the “work ethic” add insult to the injury of overwhelming precarity. As the pandemic loomed, it should have been obvious that the U.S. would particularly suffer. People go to work because they have no choice. Those who did not face immediate economic peril could experience quarantine as a kind of relief and then immediately feel a peculiar guilt for that very feeling of relief. Others, hooray, could sustain and perform their work ethic from home.

The German sociologist Max Weber was the first great theorist of the Protestant ethic. If all scholarship is autobiography, it brings an odd comfort to learn that he had himself suffered a nervous breakdown. Travel was his main strategy of recuperation, and it brought him to the Netherlands and to the U.S., among other places. The Hague was “bright and shiny,” he wrote in 1903. “Everyone is well-to-do, exceedingly ungraceful, and rather untastefully dressed.” He had dinner in a vegetarian restaurant. (“No drinks, no tips.”) Dutch architecture made him feel “like Gulliver when he returned from Brobdingnag.” America, by contrast, was Brobdingnag. Weber visited the U.S. for three months in 1904 and faced the lurid enormity of capitalism. Chicago, with its strikes, slaughterhouses, and multi-ethnic working class, seemed to him “like a man whose skin has been peeled off and whose intestines are seen at work.”

Weber theorized the rise of capitalism, the state and its relationship to violence, the role of “charisma” in politics. Again and again he returned, as we still do, to the vocation—the calling—as both a crushing predicament and a noble aspiration. He died 100 years ago, in a later wave of the Spanish flu. It is poignant to read him now, in our own era of pandemic and cataclysm. It might offer consolation. Or it might fail to console.

The Protestant Ethic and the Spirit of Capitalism emerged, in part, from that American journey. It first appeared in two parts, in 1904 and 1905, in a journal, the Archiv für Sozialwissenschaft und Sozialpolitik. A revised version appeared in 1920, shortly before his death. Race did not figure into his account of capitalism’s rise, though the American color line had confronted him vividly. In 1906 he would publish W.E.B. Du Bois’s “The Negro Question in the United States” in the same journal, which he edited.

Modern invocations of the work ethic are usually misreadings: The Protestant Ethic was more lament than celebration. Weber sought to narrate the arrival of what had become a no-longer-questioned assumption: that our duty was to labor in a calling, even to labor for labor’s sake. He sought the origins of this attitude toward work and the meaning of life, of an ethic that saved money but somehow never enjoyed it, of a joyless and irrational rationality. He found the origins in Calvinism, specifically in what he called Calvinism’s “this-worldly asceticism.”

Weber’s argument was not that Calvinism caused capitalism; rather, The Protestant Ethic was a speculative psycho-historical excavation of capitalism’s emergence. The interpretation, like most of his interpretations, had twists that are not easy to summarize. It was, after all, really the failure of Calvinism—in the sense of the unmeetableness of Calvinism’s demands on an individual psyche and soul—that generated a proto-capitalist orientation to the world. The centerpiece of Calvin’s theology—the absolute, opaque sovereignty of God and our utter noncontrol over our own salvation—was, in Weber’s account, impossibly severe, unsustainable for the average person. The strictures of that dogma ended up creating a new kind of individual and a new kind of community: a community bound paradoxically together by their desperate anxiety about their individual salvation. Together and alone.

The germ of the capitalist “spirit” lay in the way Calvinists dealt with that predicament. They labored in their calling, for what else was there to do? To work for work’s sake was Calvinism’s solution to the problem of itself. Having foreclosed all other Christian comforts—a rosary, an indulgence, a ritual, a communion—Weber’s original Calvinists needed always to perform their own salvation, to themselves and to others, precisely because they could never be sure of it. No wonder they would come to see their material blessings as a sign that they were in fact blessed. And no wonder their unlucky descendants would internalize our economic miseries as somehow just.

Calvinism, in other words, was less capitalism’s cause than its ironic precondition. The things people did for desperate religious reasons gave way to a secular psychology. That secular psychology was no “freer” than the religious one; we had been emancipated into jobs. “The Puritans wanted to be men of the calling,” Weber wrote; “we, on the other hand, must be.” As a historical process—i.e., something happening over time—this process was gradual enough that the people participating in it did not really apprehend it as it happened. In Milton’s Paradise Lost, when Adam and Eve are expelled from Eden and into the world, the archangel Michael offers faith as a consolation within the worldliness that is humanity’s lot: The faithful, Michael promises Adam, “shal[l] possess / A Paradise within thee, happier by far.” Those lines appeared in 1674, more than a century after John Calvin’s death; for Weber, they were an inadvertent expression of the capitalist spirit’s historical unfolding. Only later still could the gloomy sociologist see, mirrored in that Puritan epic, our own dismal tendency to approach life itself as a task.

For historians of capitalism, the book is inspiring but soon turns frustrating. Weberian interpretations tend to stand back from history’s contingencies and exploitations in order to find some churning and ultimately unstoppable process: “rationalization,” for instance, by which tradition gives way ironically but inexorably to modernity. Humans wanted things like wholeness, community, or salvation; but our efforts, systematized in ways our feeble consciousness can’t ever fully grasp, end up ushering in anomie, bureaucracy, or profit. The Weberian analysis then offers no relief from that process, only a fatalism without a teleology. The moral of the story, if there is a moral, is to reconcile yourself to the modernity that has been narrated and to find in the narrative itself something like an intellectual consolation, which is the only consolation that matters.

Still, the book’s melancholy resonates, if only aesthetically. At moments, it even stabs with a sharpness that Weber could not have foreseen: The “monstrous cosmos” of capitalism now “determines, with overwhelming coercion, the style of life not only of those directly involved in business but of every individual who is born into this mechanism,” he wrote in the book’s final pages, “and may well continue to do so until the day that the last ton of fossil fuel has been consumed.” Gothic images—ghosts and shadowy monsters—abound in what is, at times, a remarkably literary portrait. “The idea of the ‘duty in a calling’ haunts our lives like the ghost of once-held religious beliefs.”

The book’s most famous image is the “iron cage.” For Puritans, material concerns were supposed to lie lightly on one’s shoulders, “like a thin cloak which can be thrown off at any time” (Weber was quoting the English poet Richard Baxter), but for us moderns, “fate decreed that the cloak should become an iron cage.” That morsel of sociological poetry was not in fact Weber’s but that of the American sociologist Talcott Parsons, whose English translation in 1930 became the definitive version outside of Germany. Weber’s phrase was “stahlhartes Gehäuse”—a shell hard as steel. It describes not a room we can’t leave but a suit we can’t take off.

One wonders what Weber would make of our era’s quarantines. What is a Zoom meeting but another communal experience of intense loneliness? Weber’s portrait of Calvinist isolation might ring a bell. Working from home traps us ever more firmly in the ideology or mystique of a calling. We might then take refuge in a secondary ethic, what we might call the iron cage of “fulfillment.” It is built on the ruins of the work ethic or, just as plausibly, it is the work ethic’s ironic apotheosis: secular salvation through sourdough.

It brings a sardonic pleasure to puncture the mental and emotional habits of a service economy in Weberian terms. But it doesn’t last. The so-called work ethic is no longer a spiritual contagion but a medical one, especially in America. Weber’s interpretation now offers little illumination and even less consolation. It is not some inner ethic that brings, say, Amazon’s workers to the hideously named “fulfillment centers”; it is a balder cruelty.

The breakdown happened in 1898, when Weber was 34. “When he was overloaded with work,” his wife, Marianne, wrote in her biography of him, after his death, “an evil thing from the unconscious underground of life stretched out its claws toward him.” His father, a politician in the National Liberal Party, had died half a year earlier, mere weeks after a family standoff that remained unresolved. In the dispute, Max had defended his devoutly religious mother against his autocratic father. The guilt was severe. (The Protestant Ethic would lend itself too easily to a Freudian reading.) A psychiatrist diagnosed him with neurasthenia, then the modern medical label for depression, anxiety, panic, fatigue. The neurasthenic brain, befitting an industrial age, was figured as an exhausted steam engine. Marianne, elsewhere in her biography, described the condition as an uprising to be squashed: “Weber’s mind laboriously maintained its dominion over its rebellious vassals.”

As an undergraduate at the University of Heidelberg, Weber had studied law. His doctoral dissertation was on medieval trading companies. By his early thirties he was a full professor in economics and finance, in Freiburg and then back in Heidelberg. After his breakdown, he was released from teaching and eventually given a long leave of absence. He resigned his professorship in 1903, keeping only an honorary title for more than a decade. Weberian neurasthenia meant a life of travel; medical sojourns in Alpine clinics; and convalescent trips to France, Italy, and Austria-Hungary—extravagant settings for insomnia and a genuine inner turmoil. Money was not the problem. Marianne, a prolific scholar and a key but complex figure in the history of German feminism, would inherit money from the family’s linen factory.

Though only an honorary professor, with periods of profound study alternating with periods of depression, Weber loomed large in German academic life. In 1917, students in Munich invited the “myth of Heidelberg,” as he was known, to lecture about “the vocation of scholarship.” He did not mention his peculiar psychological and institutional trajectory in that lecture, now a classic, though one can glimpse it between the lines. “Wissenschaft als Beruf” (“Science as a Vocation”) and another lecture from a year and a half later, “Politik als Beruf” (“Politics as a Vocation”) are Weber’s best-known texts outside The Protestant Ethic. A new English translation by Damion Searls rescues them from the formal German (as translations sometimes must) and from the viscous English into which they’re usually rendered. It restores their vividness and eloquence as lectures.

Of course, now they would be Zoom lectures, which would entirely break the spell. Picture him: bearded and severe, a facial scar still visible from his own college days in a dueling fraternity. He would see not a room full of students but rather his own face on a screen, looking back at him yet unable to make true eye contact. Neurasthenia would claw at him again.

Some lines from “Wissenschaft als Beruf,” even today, would have worked well in the graduation speeches that have been canceled because of the pandemic. Notably: “Nothing is humanly worth doing except what someone can do with passion.” Sounds nice! “Wissenschaft als Beruf” approached the confines of the calling in a more affirmative mode. Other parts of the speech, though—and even that inspirational line, in context—boast a bleak and bracing existentialism. My favorite moment is when Weber channeled Tolstoy on the meaninglessness of death (and life!) in a rationalized, disenchanted modernity. Since modern scholarship is now predicated on the nonfinality of truth, Weber said, and since any would-be scholar will absorb “merely a tiny fraction of all the new ideas that intellectual life continually produces,” and since “even those ideas are merely provisional, never definitive,” death can no longer mark a life’s harmonious conclusion. Death is now “simply pointless.” And the kicker: “And so too is life as such in our culture, which in its meaningless ‘progression’ stamps death with its own meaninglessness.” If only I had heard that from a graduation speaker.

Weber’s subject was the meaning of scholarship in a “disenchanted” world. “Disenchantment” is another one of Weber’s processes—twisted, born of unintended consequences, but nevertheless unstoppable. It meant a scholar could no longer find “proof of God’s providence in the anatomy of a louse.” Worse, the modern scholar was doomed to work in so dismal an institution as a university. “There are a lot of mediocrities in leading university positions,” said Weber about the bureaucratized university of his day, “safe mediocrities or partisan careerists” serving the powers that funded them. Still true.

So why do it? To be a scholar meant caring, as if it mattered, about a thing that objectively does not matter and caring as if “the fate of his very soul depends on whether he gets this specific conjecture exactly right about this particular point in this particular manuscript.” Scholarship was the good kind of calling, insofar as one could make one’s way to some kind of meaning, however provisional that meaning was, and however fleeting and inscrutable the spark of “inspiration.”

That part of the sermon is no longer quite so moving. Weber styled himself a tough-minded realist when it came to institutions, but our era’s exploitation of adjunct academic labor punctures the romance that Weber could nevertheless still inflate. Universities in an age of austerity do not support or reward scholarly inquiry as a self-justifying vocation. Scholars must act more and more like entrepreneurs, manufacturing and marketing our own “relevance.” For some university managers (as for many corporate CEOs), the coronavirus is as much an opportunity as a crisis, to further strip and “streamline” the university—to conjoin, cheaply, the incompatible ethics of efficiency and intellect. And we teachers are stuck in the gears: The digital technologies by which we persist in our Beruf will only further erode our professional stability. “Who but a blessed, tenured few,” the translation’s editors, Paul Reitter and Chad Wellmon, ask, “could continue to believe that scholarship is a vocation?”

And yet as a sermon on teaching, Weber’s lecture still stirs me. Having given up on absolute claims about truth or beauty, and having given up on academic inquiry revealing the workings of God, he arrived at a religious truth about pedagogy that you can still hang a hat on:

If we understand our own subject (I am necessarily assuming we do), we can force, or at least help, an individual to reckon with the ultimate meaning of his own actions. This strikes me as no small matter, in terms of a student’s inner life too.

I want this to be true. On good days, teaching delivers what Weber called that “strange intoxication,” even on Zoom.

An enormous historical gulf divides the two vocation lectures, though they were delivered only 14 months apart. In November 1917, Weber didn’t even mention the war. When it broke out in 1914, he served for a year as a medical director in the reserve forces; he did not see combat, but he supported Germany’s aspiration to the status of a Machtstaat and its claim to empire. The war dragged miserably on, but in late 1917 it was far from clear that Germany would lose. Tsarist Russia had collapsed, and the American entry into the war had not proved decisive. The defeat that Germany would experience in the coming months was then unimaginable.

Weber was a progressive nationalist, moving between social democracy and the political center. During the war, besides his essays on the sociology of religion, he wrote about German political futures and criticized military management, all while angling for some role in the affairs of state himself. As the tide turned, he argued for military retrenchment as the honorable course. A month after Germany’s surrender on November 11, 1918, he stood unsuccessfully for election to parliament with the new German Democratic Party, of which he was a founder.

In January 1919 he returned to a Munich gripped by socialist revolution. It was now the capital of the People’s State of Bavaria, which would be short-lived. Weber, for years, had dismissed both pacifism and revolution as naïve. Many in the room where he spoke supported the revolution that he so disdained, and many of them had seen industrial slaughter in the state’s trenches. Part of the lecture’s mystique is its timing: He stood at a podium in the eye of the storm.

“Politik als Beruf” would seem to speak to our times, from one era of calamity and revolution to another. It is about the modern state and its vast machineries. It is about statesmen and epigones, bureaucracy and its discontents, “leadership” and political breakdown. To that moment’s overwhelming historical flux, Weber brought, or tried to bring, the intellectual sturdiness of sociological categories, “essential” vocabularies that could in theory apply at any time.

He offered a now-famous definition of the state in general: “the state is the only human community that (successfully) claims a monopoly on legitimate physical violence for itself, within a certain geographical territory.… All other groups and individuals are granted the right to use physical violence only insofar as the state allows it.” This definition, powerfully tautological, was the sociological floor on which stood all of the battles over what we might want the state to be. Philosophically, it operated beneath all ideological or moral debates over rights, democracy, welfare. It countered liberalism’s fantasy of a social contract, because Weber’s state, both foundationally and when push came to shove, was not contractual but coercive.

It was a bracing demystification. Legitimacy had nothing to do with justice; it meant only that the people acquiesced to the state’s authority. Some regimes “legitimately” protected “rights,” while others “legitimately” trampled them. Why did we acquiesce? Weber identified three “pure” categories of acquiescence: We’re conditioned to it, by custom or tradition; or we’re devoted to a leader’s charisma; or we’ve been convinced that the state’s legitimacy is in fact just, that its laws are valid and its ministers competent. Real political life, Weber wryly said, was always a cocktail of these three categories of acquiescence, never mind what stories we might tell ourselves about why we go along with anything.

With that floor of a definition laid, varieties of statehood could now emerge. Every state was a configuration of power and bureaucratic machinery, and the many massive apparatuses that made it up had their own deep sociological genealogies, each with its own Weberian twists. So did the apparatuses that produced those people who felt called to politics. Weber’s sweep encompassed parliaments, monarchs, political parties, corporations, newspapers, universities (law schools especially), a professional civil service, militaries.

Any reader now will be tempted to decode our politicians in Weber’s terms. Trump: ostensibly from the world of business, which, in Weber’s scheme, would usually keep such a figure out of electoral politics (although Weber did note that “plutocratic leaders certainly can try to live ‘from’ politics,” to “exploit their political dominance for private economic gain”). Maybe we’d say that Trump hijacked the apparatus of the administrative state, already in a state of erosion, and that he grifts from that apparatus while wrecking it further. Or maybe Trump is returning American politics to the pre-professional, “amateur” spoils system of the nineteenth century. Or he is himself a grotesque amateur, brought to the fore by an already odious political party that somehow collapsed to victory. Or maybe Trump is an ersatz aristocrat, from inherited wealth, who only played a businessman on television. (Weber’s writings do not anticipate our hideous celebrity politics.) Or Trump is a would-be warlord, postmodern or atavistically neo-feudal, committed to stamping a personal brand on the formerly “professional” military. Or, or, or. All are true, in their way. Maybe Weber would see in Trump a moron on the order of Kaiser Wilhelm—an equally cogent analysis.

Do these decodings clarify the matter or complicate it? Do they help us at all? They deliver a rhetorical satisfaction, certainly, and maybe an intellectual consolation. Then what? “Politik als Beruf” leaves sociology behind and becomes a secular sermon about “leadership,” and here the spell begins to break. Weber sought political salvation, of a kind, in charisma. The word is now a cliché, but for him it had a specific charge. Politics, he told his listeners in so many words, was a postlapsarian business. It cannot save any souls, because violence and coercion are conceptually essential to politics. A disenchanted universe is still a fallen universe. What had emerged from the fall was the monstrous apparatus of the modern nation-state. It was there, with its attendant armies of professionals and hangers-on; it fed you or it starved you. It was a mountain that no one really built but that we all had to live on.

Politics for Weber was brutally Darwinian in the end: Some states succeeded, and others failed. His Germany did not deserve defeat any more than the Allies deserved victory. That same moral arbitrariness made him look with a kind of grudging respect at Britain and the U.S.—made him even congratulate America for graduating from political amateurism into professional power. Meanwhile, he belittled revolutionaries. Anyone who imagined they could escape power’s realities or usher in some fundamentally new arrangement of power, he mocked. “Let’s be honest with ourselves here,” he said to the revolutionists in Munich. A belief in a revolutionary cause, “as subjectively sincere as it may be, is almost always merely a moral ‘legitimation’ for the desire for power, revenge, booty, and benefits.” (He was recycling a straw-man argument he had made for several years.)

To be enchanted by this argument is to end up thinking in a particular way about history with a capital h and politics with a capital p. History was always a kind of test of the state: wars, economic calamities, pandemics. Such things arrived, like natural disasters. For all the twists and complexities of Weber’s sociology, this conception of History is superficial, and its prescription for Politics thin. He demystified the state only to remystify the statesman. It is an insider’s sermon, because politics was an insider’s game, and it is the state’s insiders who, nowadays, will thrill to it. Very well.

“The relationship between violence and the state is particularly close at present,” Weber said, early in his lecture. At present could mean this week, this decade, this century, this modernity. The lecture retains, no doubt, a curious power in times of calamity. I am inclined to call it a literary power. Weber held two things in profound narrative tension: We feel both the state’s glacial inevitability and the terror of its collapse. Without a bureaucrat’s “discipline and self-restraint, which is in the deepest sense ethical,” Weber said in passing, “the whole system would fall apart.” So too would it fall apart without a leader’s charisma. If this horror vacui was powerful, for Weber and his listeners, it was because in 1919 things would fall apart, or were falling apart, or had already fallen apart. The lecture contemplates that layered historical collapse with both dread and wonder.

A century on, Weber’s definition of the state is still, sometimes, a good tool to think with. The coronavirus lockdowns, for instance, laid bare the state’s essentially coercive function. In Europe, on balance, lockdowns have been accepted—acquiesced to—as a benevolent coercion, an expression of a trusted bureaucracy and a responsible leadership. In some American states, too. The lockdowns even generated their own (in Weberian terms) legitimating civic rituals. Fifteen months after “Politik als Beruf,” Weber himself would die of the flu that his lecture did not mention.

In the Netherlands, where I live and teach, the drama of that lecture, even in a pandemic, might fall on deaf ears. The peril and fragility that Weber channeled can be hard to imagine in the Low Countries, which boasted an “intelligent lockdown” that needed no spectacular show of coercion. History, here, tends not to feel like an onrushing avalanche, or a panorama of sin and suffering, or a test we might fail, but rather a march of manageable problems, all of which seem—seem—solvable. This conception is a luxury.

As for the study of the U.S., which I suppose is my own meaningful or meaningless calling, Weber said, in 1917, that “it is often possible to see things in their purest form there.” In the century since his death, the transatlantic tables have turned, and American Studies often becomes the study of political breakdown. The vocabulary of failed statehood abounds in commentaries on America, from within and without, while American liberals look often to Germany’s Angela Merkel as the paragon of Weberian statesmanship. Step back from such commentaries, though, and American history will overwhelm even Weber’s bleak definition. America sits atop other kinds of violence, it accommodates a privatized violence, it outsources violence, it brings its wars back home.

I started this essay before the murder of George Floyd, and I am finishing it during the uprising that has followed in its wake. Weber’s definition of the state, ironically, can now fit with a political temperament far more radical than Weber’s own. The uprising has as its premise that the social contract, if it ever held, has long since been broken: The state’s veil is thus drawn back. The uprising then looks Weber’s definition in the eye: The monstrous state’s violence is unjust, therefore we do not accept it as legitimate.

I was looking in Weber for illumination, or consolation, or something. I haven’t found a rudder for the present, and I don’t know how to end. But the desire for consolation brought to my mind, of all things, the unconsoling diary of Franz Kafka. I read it years ago, and every once in a while its last lines will suddenly haunt me, like the opposite of a mantra, for reasons I don’t entirely understand. Kafka died in 1924, more an outsider than an insider; his diary’s last entry reflects, in an elliptical or inscrutable way, on another disease—tuberculosis—and on another calling. “More and more fearful as I write. Every word,” he felt, was “twisted in the hands of the spirit” and became “a spear turned against the speaker.” He also looked for consolation. “The only consolation would be: it happens whether you like or no. And what you like is of infinitesimally little help.” He then looked beyond it. “More than consolation is: You too have weapons.”

 

Posted in Empire, History, Modernity, War

What If Napoleon Had Won at Waterloo?

Today I want to explore an interesting case of counterfactual history.  What would have happened if Napoleon Bonaparte had won in 1815 at the Battle of Waterloo?  What consequences might have followed for Europe in the next two centuries?  That he might have succeeded is not mere fantasy.  According to the victor, Lord Wellington, the battle was “the nearest-run thing you ever saw in your life.”

The standard account, written by the winners, is that the allies arrayed against Napoleon (primarily Britain, Prussia, Austria, and Russia) had joined together to stop him from continuing to rampage across the continent, conquering one territory after another.  From this angle, they were the saviors of freedom, who finally succeeded in vanquishing and deposing the evil dictator. 

I want to explore an alternative interpretation, which draws on two sources.  One is an article in Smithsonian Magazine by Andrew Roberts, “Why We’d Be Better Off if Napoleon Never Lost at Waterloo.”  The other is a book by the same author, Napoleon: A Life.

Napoleon

The story revolves around two different Napoleons:  the general and the ruler.  As a general, he was one of the greatest in history.  Depending on how you count, he fought 60 or 70 battles and lost only seven of them, mostly at the end.  In the process, he conquered (or controlled through alliance) most of Western Europe.  So the allies had reason to fear him and to eliminate the threat he posed.  

As a ruler, however, Napoleon looks quite different.  In this role, he was the agent of the French Revolution and its Enlightenment principles, which he succeeded in institutionalizing within France and spreading across the continent.  Roberts notes in his article that Napoleon 

said he would be remembered not for his military victories, but for his domestic reforms, especially the Code Napoleon, that brilliant distillation of 42 competing and often contradictory legal codes into a single, easily comprehensible body of French law. In fact, Napoleon’s years as first consul, from 1799 to 1804, were extraordinarily peaceful and productive. He also created the educational system based on lycées and grandes écoles and the Sorbonne, which put France at the forefront of European educational achievement. He consolidated the administrative system based on departments and prefects. He initiated the Council of State, which still vets the laws of France, and the Court of Audit, which oversees its public accounts. He organized the Banque de France and the Légion d’Honneur, which thrive today. He also built or renovated much of the Parisian architecture that we still enjoy, both the useful—the quays along the Seine and four bridges over it, the sewers and reservoirs—and the beautiful, such as the Arc de Triomphe, the Rue de Rivoli and the Vendôme column.

He stood as the antithesis of the monarchical state system of the time, which was grounded in preserving the feudal privileges of the nobility and the church and the subordination of peasants and workers.  As a result, he ended up creating a lot of enemies, who initiated most of the battles he fought.  At the same time, however, he drew a lot of support from key actors within the territories he conquered, to whom he looked less like an invader than a liberator.  Roberts points out in his book that:

Napoleon’s political support from inside the annexed territories came from many constituencies: urban elites who didn’t want to return to the rule of their local Legitimists, administrative reformers who valued efficiency, religious minorities such as Protestants and Jews whose rights were protected by law, liberals who believed in concepts such as secular education and the liberating power of divorce, Poles and other nationalities who hoped for national self-determination, businessmen (at least until the Continental System started to bite), admirers of the simplicity of the Code Napoléon, opponents of the way the guilds had worked to restrain trade, middle-class reformers, in France those who wanted legal protection for their purchases of hitherto ecclesiastical or princely confiscated property, and – especially in Germany – peasants who no longer had to pay feudal dues.

When the allies defeated Napoleon the first time, they exiled him to Elba and installed Louis XVIII as king, seeking to sweep away all of the gains from the revolution and the empire.  Louis failed spectacularly in gaining local support for the reversion to the Ancien Regime.  Sensing this, Napoleon escaped to the mainland after only nine months and headed for Paris.  The royalist troops sent to stop him instead rallied to his cause, and in 18 days he was eating Louis’s dinner in the Tuileries, restored as emperor without anyone firing a single shot in defense of the Bourbons.  Quite a statement about how the French, as opposed to the allies, viewed his return.  

Once back in charge, Napoleon sent a note to the allies, reassuring them that he was content to rule at home and leave conquest to the past: “After presenting the spectacle of great campaigns to the world, from now on it will be more pleasant to know no other rivalry than that of the benefits of peace, of no other struggle than the holy conflict of the happiness of peoples.” 

They weren’t buying it.  They had reason to be suspicious, but instead of waiting to see, they launched an all-out assault on France in an effort to get him out of the way.  Roberts argues, and I agree, that their aim was not defensive but actively reactionary.  His liberalized and modernized France posed a threat to the preservation of the traditional powers of monarchy, nobility, and church.  They sought to stamp out the fires of reform and revolution before they flared up in their own domains.  In this sense, Roberts says, Waterloo was a battle that didn’t need to happen.  It was an unprovoked, preemptive strike.

Roberts concludes his Smithsonian article with this assessment of what might have been if Waterloo had turned out differently:

If Napoleon had remained emperor of France for the six years remaining in his natural life, European civilization would have benefited inestimably. The reactionary Holy Alliance of Russia, Prussia and Austria would not have been able to crush liberal constitutionalist movements in Spain, Greece, Eastern Europe and elsewhere; pressure to join France in abolishing slavery in Asia, Africa and the Caribbean would have grown; the benefits of meritocracy over feudalism would have had time to become more widely appreciated; Jews would not have been forced back into their ghettos in the Papal States and made to wear the yellow star again; encouragement of the arts and sciences would have been better understood and copied; and the plans to rebuild Paris would have been implemented, making it the most gorgeous city in the world.

Napoleon deserved to lose Waterloo, and Wellington to win it, but the essential point in this bicentenary year is that the epic battle did not need to be fought—and the world would have been better off if it hadn’t been.

What followed his loss was a century of reaction across the continent of Europe. The Bourbons were restored and the liberal gains in Germany, Spain, Austria and Italy were rolled back.  Royalist statesmen such as Metternich and Bismarck aggressively defended their regimes against reform efforts by liberals and Marxists alike.  These regimes persisted until the First World War, which they precipitated and which eventually brought them all down — Hohenzollerns, Habsburgs, Romanovs, and Ottomans.  The reactions to the fall of these monarchies in turn set the stage for the Second World War.

You can only play out historical counterfactuals so far, before the chain of contingencies becomes too long and the analysis turns wholly speculative.  But it seems quite reasonable to me to think that, if Napoleon had won at Waterloo, this history would have played out quite differently.  The existence proof of a modern liberal state in the middle of Europe would have shored up reform efforts in the surrounding monarchies and headed off the reactionary status quo that finally erupted in the Great War that extinguished them all.

Posted in Academic writing, History, Writing

On Writing: How the King James Bible Shaped the English Language and Still Teaches Us How to Write

When you’re interested in improving your writing, it’s a good idea to have some models to work from.  I’ve presented some of my favorite models in this blog.  These have included a number of examples of good writing by both academics (Max Weber, E.P. Thompson, Jim March, and Mary Metz) and nonacademics (Frederick Douglass, Elmore Leonard).

Today I want to explore one of the two most influential forces in shaping the English language over the years:  The King James Bible.  (The other, of course, is Shakespeare.)  Earlier I presented one analysis by Ann Wroe, which focused on the thundering sound of the prose in this extraordinary text.  Today I want to draw on two other pieces of writing that explore the powerful model that this bible provides us all for how to write in English with power and grace.  One is by Adam Nicolson, who wrote a book on the subject (God’s Secretaries: The Making of the King James Bible).  The other, which I reprint in full at the end of this post, is by Charles McGrath.  

The impulse to produce a bible in English arose with the English reformation, as a Protestant vernacular alternative to the Latin version that was canonical in the Catholic church.  The text was commissioned in 1604 by King James, who succeeded Elizabeth I after her long reign, and it was constructed by a committee of 54 scholars.  They went back to the original texts in Hebrew and Greek, but they drew heavily on earlier English translations. 

The foundational translation was written by William Tyndale, who was executed for heresy in Antwerp in 1536, and this was reworked into what became known as the Geneva bible by Calvinists who were living in Switzerland.  One aim of the committee was to produce a version that was more compatible with the beliefs of English and Scottish versions of the faith, but for James the primary impetus was to remove the anti-royalist tone that was embedded within the earlier text.  Recent scholars have concluded that 84% of the words in the King James New Testament and 76% in the Old Testament are Tyndale’s.

As Nicolson puts it, the language of the King James Bible is an amazing mix — “majestic but intimate, the voice of the universe somehow heard in the innermost part of the ear.”

You don’t have to be a Christian to hear the power of those words—simple in vocabulary, cosmic in scale, stately in their rhythms, deeply emotional in their impact. Most of us might think we have forgotten its words, but the King James Bible has sewn itself into the fabric of the language. If a child is ever the apple of her parents’ eye or an idea seems as old as the hills, if we are at death’s door or at our wits’ end, if we have gone through a baptism of fire or are about to bite the dust, if it seems at times that the blind are leading the blind or we are casting pearls before swine, if you are either buttering someone up or casting the first stone, the King James Bible, whether we know it or not, is speaking through us. The haves and have-nots, heads on plates, thieves in the night, scum of the earth, best until last, sackcloth and ashes, streets paved in gold, and the skin of one’s teeth: All of them have been transmitted to us by the translators who did their magnificent work 400 years ago.

Wouldn’t it be lovely if we academics could write in a way that sticks in people’s minds for 400 years?  Well, maybe that’s a bit too much to hope for.  But even if we can’t aspire to be epochally epigrammatic, there are still lessons we can learn from Tyndale and the Group of 54.  

One such lesson is the power of simplicity.  Too often scholars feel the compulsion to gussy up their language with jargon and Latinate constructions in the name of professionalism.  If any idiot can understand what you’re saying, then you’re not being a serious scholar.  But the magic of the King James Bible is that it uses simple Anglo-Saxon words to make the most profound statements.  Listen to this passage from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favor to men of skill, but time and chance happeneth to them all.

Or this sentence from Paul’s letter to the Philippians:

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Or the stunning opening line of the Gospel of John:

In the beginning was the Word, and the Word was with God, and the Word was God.

This is a text that can speak clearly to the untutored while at the same time elevating them to a higher plane.  For us it’s a model for how to match simplicity with profundity.

KJB

Why the King James Bible Endures

By CHARLES McGRATH

The King James Bible, which was first published 400 years ago next month, may be the single best thing ever accomplished by a committee. The Bible was the work of 54 scholars and clergymen who met over seven years in six nine-man subcommittees, called “companies.” In a preface to the new Bible, Miles Smith, one of the translators and a man so impatient that he once walked out of a boring sermon and went to the pub, wrote that anything new inevitably “endured many a storm of gainsaying, or opposition.” So there must have been disputes — shouting; table pounding; high-ruffed, black-gowned clergymen folding their arms and stomping out of the room — but there is no record of them. And the finished text shows none of the PowerPoint insipidness we associate with committee-speak or with later group translations like the 1961 New English Bible, which T.S. Eliot said did not even rise to “dignified mediocrity.” Far from bland, the King James Bible is one of the great masterpieces of English prose.

The issue of how, or even whether, to translate sacred texts was a fraught one in those days, often with political as well as religious overtones, and it still is. The Roman Catholic Church, for instance, recently decided to retranslate the missal used at Mass to make it more formal and less conversational. Critics have complained that the new text is awkward and archaic, while its defenders (some of whom probably still prefer the Mass in Latin) insist that’s just the point — that language a little out of the ordinary is more devotional and inspiring. No one would ever say that the King James Bible is an easy read. And yet its very oddness is part of its power.

From the start, the King James Bible was intended to be not a literary creation but rather a political and theological compromise between the established church and the growing Puritan movement. What the king cared about was clarity, simplicity, doctrinal orthodoxy. The translators worked hard on that, going back to the original Hebrew, Greek and Aramaic, and yet they also spent a lot of time tweaking the English text in the interest of euphony and musicality. Time and again the language seems to slip almost unconsciously into iambic pentameter — this was the age of Shakespeare, commentators are always reminding us — and right from the beginning the translators embraced the principles of repetition and the dramatic pause: “In the beginning God created the Heaven, and the Earth. And the earth was without forme, and void, and darkenesse was upon the face of the deepe: and the Spirit of God mooved upon the face of the waters.”

The influence of the King James Bible is so great that the list of idioms from it that have slipped into everyday speech, taking such deep root that we use them all the time without any awareness of their biblical origin, is practically endless: sour grapes; fatted calf; salt of the earth; drop in a bucket; skin of one’s teeth; apple of one’s eye; girded loins; feet of clay; whited sepulchers; filthy lucre; pearls before swine; fly in the ointment; fight the good fight; eat, drink and be merry.

But what we also love about this Bible is its strangeness — its weird punctuation, odd pronouns (as in “Our Father, which art in heaven”), all those verbs that end in “eth”: “In the morning it flourisheth, and groweth up; in the evening it is cut downe, and withereth.” As Robert Alter has demonstrated in his startling and revealing translations of the Psalms and the Pentateuch, the Hebrew Bible is even stranger, and in ways that the King James translators may not have entirely comprehended, and yet their text performs the great trick of being at once recognizably English and also a little bit foreign. You can hear its distinctive cadences in the speeches of Lincoln, the poetry of Whitman, the novels of Cormac McCarthy.

Even in its time, the King James Bible was deliberately archaic in grammar and phraseology: an expression like “yea, verily,” for example, had gone out of fashion some 50 years before. The translators didn’t want their Bible to sound contemporary, because they knew that contemporaneity quickly goes out of fashion. In his very useful guide, “God’s Secretaries: The Making of the King James Bible,” Adam Nicolson points out that when the Victorians came to revise the King James Bible in 1885, they embraced this principle wholeheartedly, and like those people who whack and scratch old furniture to make it look even more ancient, they threw in a lot of extra Jacobeanisms, like “howbeit,” “peradventure,” “holden” and “behooved.”

This is the opposite, of course, of the procedure followed by most new translations, starting with Good News for Modern Man, a paperback Bible published by the American Bible Society in 1966, whose goal was to reflect not the language of the Bible but its ideas, rendering them into current terms, so that Ezekiel 23:20, for example (“For she doted upon their paramours, whose flesh is as the flesh of asses, and whose issue is like the issue of horses”) becomes “She was filled with lust for oversexed men who had all the lustfulness of donkeys or stallions.”

There are countless new Bibles available now, many of them specialized: a Bible for couples, for gays and lesbians, for recovering addicts, for surfers, for skaters and skateboarders, not to mention a superheroes Bible for children. They are all “accessible,” but most are a little tone-deaf, lacking in grandeur and majesty, replacing “through a glasse, darkly,” for instance, with something along the lines of “like a dim image in a mirror.” But what this modernizing ignores is that the most powerful religious language is often a little elevated and incantatory, even ambiguous or just plain hard to understand. The new Catholic missal, for instance, does not seem to fear the forbidding phrase, replacing the statement that Jesus is “one in being with the Father” with the more complicated idea that he is “consubstantial with the Father.”

Not everyone prefers a God who talks like a pal or a guidance counselor. Even some of us who are nonbelievers want a God who speaketh like — well, God. The great achievement of the King James translators is to have arrived at a language that is both ordinary and heightened, that rings in the ear and lingers in the mind. And that all 54 of them were able to agree on every phrase, every comma, without sounding as gassy and evasive as the Financial Crisis Inquiry Commission, is little short of amazing, in itself proof of something like divine inspiration.

 

Posted in Empire, History, Resilience, War

Resilience in the Face of Climate Change and Epidemic: Ancient Rome and Today’s America

Tell me if you think this sounds familiar:  In its latter years (500-700 CE), the Roman Empire faced a formidable challenge from two devastating environmental forces — dramatic climate change and massive epidemic.  As Mark Twain is supposed to have said, “History doesn’t repeat itself, but it often rhymes.”

During our own bout of climate change and ravaging disease, I’ve been reading Kyle Harper’s book The Fate of Rome: Climate, Disease, and the End of Empire.  The whole time, rhymes were running through my head.  We all know that things did not turn out well for Rome, whose civilization went through the most devastating collapse in world history.  The state disintegrated, the population fell by half, and the European standard of living did not recover its level of 500 until a thousand years later.

Fate of Rome Cover

So Rome ended badly, but what about us?  The American empire may be in eclipse, but it’s not as though the end is near.  Rome was dependent on animal power and a fragile agricultural base, and its medical “system” did more harm than good.  All in all, we seem much better equipped to deal with climate change and disease than the Romans were.  As a result, I’m not suggesting that we’re headed for the same calamitous fall that befell Roman civilization, but I do think we can learn something important by observing how they handled their own situation.

What’s so interesting about the fall of Rome is that it took so long.  The empire held on for 500 years, even under circumstances where its fall was thoroughly overdetermined.  The traditional story of the fall is about fraying political institutions in an overextended empire, surrounded by surging “barbarian” states that were prodded into existence by Rome’s looming threat.

To this political account, Harper adds the environment.  The climate was originally very kind to Rome, supporting growth during a long period of warm and wet weather known as the Roman Climate Optimum (200 BCE to 150 CE).  But then conditions grew increasingly unstable, leading to the Late Antique Little Ice Age (450-700), with massive crop failures brought on by a drop in solar energy and huge volcanic eruptions.  In the midst of this arose a series of epidemics, fostered (like our own) by the opening up of trade routes, which culminated in the bubonic plague (541-749) that killed off half of the populace.

What kept Rome going all this time was a set of resilient civic institutions.  That’s what I think we can learn from the Roman case.  My fear is that our own institutions are considerably more fragile.  In this analysis, I’m picking up on a theme from an earlier blog post:  The Triumph of Efficiency over Effectiveness: A Brief for Resilience through Redundancy.

Here is how Harper describes the institutional framework of this empire:

Rome was ruled by a monarch in all but name, who administered a far-flung empire with the aid, first and foremost, of the senatorial aristocracy. It was an aristocracy of wealth, with property requirements for entry, and it was a competitive aristocracy of service. Low rates of intergenerational succession meant that most aristocrats “came from families that sent representatives into politics for only one generation.”

The emperor was the commander-in-chief, but senators jealously guarded the right to the high posts of legionary command and prestigious governorships. The imperial aristocracy was able to control the empire with a remarkably thin layer of administrators. This light skein was only successful because it was cast over a foundational layer of civic aristocracies across the empire. The cities have been called the “load-bearing” pillars of the empire, and their elites were afforded special inducements, including Roman citizenship and pathways into the imperial aristocracy. The low rates of central taxation left ample room for peculation by the civic aristocracy. The enormous success of the “grand bargain” between the military monarchy and the local elites allowed imperial society to absorb profound but gradual changes—like the provincialization of the aristocracy and bureaucracy—without jolting the social order.

The Roman frontier system epitomized the resilience of the empire; it was designed to bend but not break, to bide time for the vast logistical superiority of the empire to overwhelm Rome’s adversaries. Even the most developed rival in the orbit of Rome would melt before the advance of the legionary columns. The Roman peace, then, was not the prolonged absence of war, but its dispersion outward along the edges of empire.

The grand and decisive imperial bargain, which defined the imperial regime in the first two centuries, was the implicit accord between the empire and “the cities.” The Romans ruled through cities and their noble families. The Romans coaxed the civic aristocracies of the Mediterranean world into their imperial project. By leaving tax collection in the hands of the local gentry, and bestowing citizenship liberally, the Romans co-opted elites across three continents into the governing class and thereby managed to command a vast empire with only a few hundred high-ranking Roman officials. In retrospect, it is surprising how quickly the empire ceased to be a mechanism of naked extraction, and became a sort of commonwealth.

Note that last part:  Rome “became a sort of commonwealth.”  It conquered much of the Western world and incorporated one-quarter of the earth’s population, but the conquered territories were generally better off under Rome than they had been before — benefiting from citizenship, expanded trade, and growing standards of living.  It was a remarkably stratified society, but its benefits extended even to the lower orders.  (For more on this issue, see my earlier post about Walter Scheidel’s book on the social benefits of war.)

At the heart of the Roman system were three cultural norms that guided civic life: self-sufficiency, reciprocity, and patronage.  Let me focus on the last of these, which seems to be dangerously absent in our own society at the moment.

The expectation of paternalistic generosity lay heavily on the rich, ensuring that less exalted members of society had an emergency lien on their stores of wealth. Of course, the rich charged for this insurance, in the form of respect and loyalty, and in the Roman Empire there was a constant need to monitor the fine line between clientage and dependence.

A key part of the grand bargain engineered by Rome was the state’s responsibility to feed its citizens.

The grain dole was the political entitlement of an imperial people, under the patronage of the emperor.

Preparation for famine — a chronic threat to premodern agricultural societies — was at the center of the system’s institutional resilience.  This was particularly important in an empire as thoroughly city-centered as Rome.  Keep in mind that Rome during the empire was the first city in the world to have 1 million residents; the second was London, 1,500 years later.

These strategies of resilience, writ large, were engrained in the practices of the ancient city. Diversification and storage were adapted to scale. Urban food storage was the first line of redundancy. Under the Roman Empire, the monumental dimensions of storage facilities attest the political priority of food security. Moreover, cities grew organically along the waters, where they were not confined to dependence on a single hinterland.

When food crisis did unfold, the Roman government stood ready to intervene, sometimes through direct provision but more often simply by the suppression of unseemly venality.

The most familiar system of resilience was the food supply of Rome. The remnants of the monumental public granaries that stored the food supply of the metropolis are still breathtaking.

Wouldn’t it be nice if we in the U.S. could face the challenges of climate change and pandemic as a commonwealth?  If so, we would be working to increase the resilience of our system: by sharing the burden and spreading the wealth; by building up redundancy to store up for future challenges; by freeing ourselves from the ideology of economic efficiency in the service of social effectiveness.  Wouldn’t that be nice.