
Thomas Edsall: The Resentment that Never Sleeps

This post is a piece by Thomas Edsall published in the New York Times last week.  It explores in detail the recent literature about the role that declining social status has played in the rise of right-wing populism in the US and elsewhere.  Here’s a link to the original.

The argument is one that resonates in my own work posted here (see this, this, and this).  People are less concerned about getting ahead than they are about falling behind.  And one of the consequences of the degree-based meritocracy is the way it disparages people who lack the proper credentials, making clear to them that they are losing ground to the new educated elite.  Here is how Cecilia Ridgeway puts it:

Status is as significant as money and power. At a macro level, status stabilizes resource and power inequality by transforming it into cultural status beliefs about group differences regarding who is “better” (esteemed and competent).

Those most affected tend to be neither at the top nor the bottom of the social hierarchy but somewhere in the lower middle regions.  Peter Hall says that

The people most often drawn to the appeals of right-wing populist politicians, such as Trump, tend to be those who sit several rungs up the socioeconomic ladder in terms of their income or occupation. My conjecture is that it is people in this kind of social position who are most susceptible to what Barbara Ehrenreich called a “fear of falling” — namely, anxiety, in the face of an economic or cultural shock, that they might fall further down the social ladder, a phenomenon often described as “last place aversion.”

This is one of the most trenchant analyses of Trumpism that I have yet encountered.  See what you think.

The Resentment That Never Sleeps

Rising anxiety over declining social status tells us a lot about how we got here and where we’re going.

More and more, politics determine which groups are favored and which are denigrated.

Roughly speaking, Trump and the Republican Party have fought to enhance the status of white Christians and white people without college degrees: the white working and middle class. Biden and the Democrats have fought to elevate the standing of previously marginalized groups: women, minorities, the L.G.B.T.Q. community and others.

The ferocity of this politicized status competition can be seen in the anger of white non-college voters over their disparagement by liberal elites, the attempt to flip traditional hierarchies and the emergence of identity politics on both sides of the chasm.

Just over a decade ago, in their paper “Hypotheses on Status Competition,” William C. Wohlforth and David C. Kang, professors of government at Dartmouth and the University of Southern California, wrote that “social status is one of the most important motivators of human behavior” and yet “over the past 35 years, no more than half dozen articles have appeared in top U.S. political science journals building on the proposition that the quest for status will affect patterns of interstate behavior.”

Scholars are now rectifying that omission, with the recognition that in politics, status competition has become increasingly salient, prompting a collection of emotions including envy, jealousy and resentment that have spurred ever more intractable conflicts between left and right, Democrats and Republicans, liberals and conservatives.

Hierarchical ranking, the status classification of different groups — the well-educated and the less-well-educated, white people and Black people, the straight and L.G.B.T.Q. communities — has the effect of consolidating and seeming to legitimize existing inequalities in resources and power. Diminished status has become a source of rage on both the left and right, sharpened by divisions over economic security and insecurity, geography and, ultimately, values.

The stakes of status competition are real. Cecilia L. Ridgeway, a professor at Stanford, described the costs and benefits in her 2013 presidential address at the American Sociological Association.

Understanding “the effects of status — inequality based on differences in esteem and respect” is crucial for those seeking to comprehend “the mechanisms behind obdurate, durable patterns of inequality in society,” Ridgeway argued:

Failing to understand the independent force of status processes has limited our ability to explain the persistence of such patterns of inequality in the face of remarkable socioeconomic change.

“As a basis for social inequality, status is a bit different from resources and power. It is based on cultural beliefs rather than directly on material arrangements,” Ridgeway said:

We need to appreciate that status, like resources and power, is a basic source of human motivation that powerfully shapes the struggle for precedence out of which inequality emerges.

Ridgeway elaborated on this argument in an essay, “Why Status Matters for Inequality”:

Status is as significant as money and power. At a macro level, status stabilizes resource and power inequality by transforming it into cultural status beliefs about group differences regarding who is “better” (esteemed and competent).

In an email, Ridgeway made the case that “status is definitely important in contemporary political dynamics here and in Europe,” adding that

Status has always been part of American politics, but right now a variety of social changes have threatened the status of working class and rural whites who used to feel they had a secure, middle status position in American society — not the glitzy top, but respectable, ‘Main Street’ core of America. The reduction of working-class wages and job security, growing demographic diversity, and increasing urbanization of the population have greatly undercut that sense and fueled political reaction.

The political consequences cut across classes.

Peter Hall, a professor of government at Harvard, wrote by email that he and a colleague, Noam Gidron, a professor of political science at Hebrew University in Jerusalem, have found that

across the developed democracies, the lower people feel their social status is, the more inclined they are to vote for anti-establishment parties or candidates on the radical right or radical left.

Those drawn to the left, Hall wrote in an email, come from the top and bottom of the social order:

People who start out near the bottom of the social ladder seem to gravitate toward the radical left, perhaps because its program offers them the most obvious economic redress; and people near the top of the social ladder often also embrace the radical left, perhaps because they share its values.

In contrast, Hall continued,

The people most often drawn to the appeals of right-wing populist politicians, such as Trump, tend to be those who sit several rungs up the socioeconomic ladder in terms of their income or occupation. My conjecture is that it is people in this kind of social position who are most susceptible to what Barbara Ehrenreich called a “fear of falling” — namely, anxiety, in the face of an economic or cultural shock, that they might fall further down the social ladder, a phenomenon often described as “last place aversion.”

Gidron and Hall argue in their 2019 paper “Populism as a Problem of Social Integration” that

Much of the discontent fueling support for radical parties is rooted in feelings of social marginalization — namely, in the sense some people have that they have been pushed to the fringes of their national community and deprived of the roles and respect normally accorded full members of it.

In this context, what Gidron and Hall call “the subjective social status of citizens — defined as their beliefs about where they stand relative to others in society” serves as a tool to measure both levels of anomie in a given country, and the potential of radical politicians to find receptive publics because “the more marginal people feel they are to society, the more likely they are to feel alienated from its political system — providing a reservoir of support for radical parties.”

The populist rhetoric of politicians on both the radical right and left is often aimed directly at status concerns. They frequently adopt the plain-spoken language of the common man, self-consciously repudiating the politically correct or technocratic language of the political elites. Radical politicians on the left evoke the virtues of working people, whereas those on the right emphasize themes of national greatness, which have special appeal for people who rely on claims to national membership for a social status they otherwise lack. The “take back control” and “make America great again” slogans of the Brexit and Trump campaigns were perfectly pitched for such purposes.

Robert Ford, a professor of political science at the University of Manchester in the U.K., argued in an email that three factors have heightened the salience of status concerns.

The first, he wrote, is the vacuum created by “the relative decline of class politics.” The second is the influx of immigrants, “not only because different ‘ways of life’ are perceived as threatening to ‘organically grown’ communities, but also because this threat is associated with the notion that elites are complicit in the dilution of such traditional identities.”

The third factor Ford describes as “an asymmetrical increase in the salience of status concerns due to the political repercussions of educational expansion and generational value change,” especially “because of the progressive monopolization of politics by high-status professionals,” creating a constituency of “cultural losers of modernization” who “found themselves without any mainstream political actors willing to represent and defend their ‘ways of life’ ” — a role Trump sought to fill.

In their book, “Cultural Backlash,” Pippa Norris and Ronald Inglehart, political scientists at Harvard and the University of Michigan, describe the constituencies in play here — the “oldest (interwar) generation, non-college graduates, the working class, white Europeans, the more religious, men, and residents of rural communities” that have moved to the right in part in response to threats to their status:

These groups are most likely to feel that they have become estranged from the silent revolution in social and moral values, left behind by cultural changes that they deeply reject. The interwar generation of non-college educated white men — until recently the politically and socially dominant group in Western cultures — has passed a tipping point at which their hegemonic status, power, and privilege are fading.

The emergence of what political scientists call “affective polarization,” in which partisans incorporate their values, their race, their religion — their belief system — into their identity as a Democrat or Republican, together with more traditional “ideological polarization” based on partisan differences in policy stands, has produced heightened levels of partisan animosity and hatred.

Lilliana Mason, a political scientist at the University of Maryland, describes it this way:

The alignment between partisan and other social identities has generated a rift between Democrats and Republicans that is deeper than any seen in recent American history. Without the crosscutting identities that have traditionally stabilized the American two-party system, partisans in the American electorate are now seeing each other through prejudiced and intolerant eyes.

If polarization has evolved into partisan hatred, status competition serves to calcify the animosity between Democrats and Republicans.

In their July 2020 paper, “Beyond Populism: The Psychology of Status-Seeking and Extreme Political Discontent,” Michael Bang Petersen, Mathias Osmundsen and Alexander Bor, political scientists at Aarhus University in Denmark, contend there are two basic methods of achieving status: the “prestige” approach, which requires notable achievement in a field, and the “dominance” approach, which capitalizes on threats and bullying. “Modern democracies,” they write,

are currently experiencing destabilizing events including the emergence of demagogic leaders, the onset of street riots, circulation of misinformation and extremely hostile political engagements on social media.

They go on:

Building on psychological research on status-seeking, we argue that at the core of extreme political discontent are motivations to achieve status via dominance, i.e., through the use of fear and intimidation. Essentially, extreme political behavior reflects discontent with one’s own personal standing and a desire to actively rectify this through aggression.

This extreme political behavior often coincides with the rise of populism, especially right-wing populism, but Petersen, Osmundsen and Bor contend that the behavior is distinct from populism:

The psychology of dominance is likely to underlie current-day forms of extreme political discontent — and associated activism — for two reasons: First, radical discontent is characterized by verbal or physical aggression, thus directly capitalizing on the competences of people pursuing dominance-based strategies. Second, current-day radical activism seems linked to desires for recognition and feelings of ‘losing out’ in a world marked by, on the one hand, traditional gender and race-based hierarchies, which limit the mobility of minority groups and, on the other hand, globalized competition, which puts a premium on human capital.

Extreme discontent, they continue,

is a phenomenon among individuals for whom prestige-based pathways to status are, at least in their own perception, unlikely to be successful. Despite their political differences, this perception may be the psychological commonality of, on the one hand, race- or gender-based grievance movements and, on the other hand, white lower-middle class right-wing voters.

The authors emphasize that the distinction between populism and status-driven dominance is based on populism’s “orientation toward group conformity and equality,” which stands “in stark contrast to dominance motivations. In contrast to conformity, dominance leads to self-promotion. In contrast to equality, dominance leads to support for steep hierarchies.”

Thomas Kurer, a political scientist at the University of Zurich, contends that status competition is a political tool deployed overwhelmingly by the right. By email, Kurer wrote:

It is almost exclusively political actors from the right and the radical right that actively campaign on the status issue. They emphasize implications of changing status hierarchies that might negatively affect the societal standing of their core constituencies and thereby aim to mobilize voters who fear, but have not yet experienced, societal regression. The observation that campaigning on potential status loss is much more widespread and, apparently, more politically worthwhile than campaigning on status gains makes a lot of sense in light of the long-established finding in social psychology that citizens care much more about a relative loss than about same-sized gains.

Kurer argued that it is the threat of lost prestige, rather than the actual loss, that is a key factor in status-based political mobilization:

Looking at the basic socio-demographic profile of a Brexiter or a typical supporter of a right-wing populist party in many advanced democracies suggests that we need to be careful with a simplified narrative of a ‘revolt of the left behind’. A good share of these voters can be found in what we might call the lower middle class, which means they might well have decent jobs and decent salaries — but they fear, often for good reasons, that they are not on the winning side of economic modernization.

Kurer noted that in his own April 2020 study, “The Declining Middle: Occupational Change, Social Status, and the Populist Right,” he found

that it is voters who are and remain in jobs susceptible to automation and digitalization, so-called routine jobs, who vote for the radical right and not those who actually lose their routine jobs. The latter are much more likely to abstain from politics altogether.

In a separate study of British voters who supported the leave side of Brexit, “The malaise of the squeezed middle: Challenging the narrative of the ‘left behind’ Brexiter,” Lorenza Antonucci of the University of Birmingham, Laszlo Horvath of the University of Exeter, Yordan Kutiyski of VU University Amsterdam and André Krouwel of the Vrije University of Amsterdam found that this segment of the electorate

is associated more with intermediate levels of education than with low or absent education, in particular in the presence of a perceived declining economic position. Secondly, we find that Brexiters hold distinct psychosocial features of malaise due to declining economic conditions, rather than anxiety or anger. Thirdly, our exploratory model finds voting Leave associated with self-identification as middle class, rather than with working class. We also find that intermediate levels of income were not more likely to vote for remain than low-income groups.

In an intriguing analysis of the changing role of status in politics, Herbert Kitschelt, a political scientist at Duke, emailed the following argument. In the recent past, he wrote:

One unique thing about working class movements — particularly when infused with Marxism — is that they could dissociate class from social status by constructing an alternative status hierarchy and social theory: Workers may be poor and deprived of skill, but in world-historic perspective they are designated to be the victorious agents of overcoming capitalism in favor of a more humane social order.

Since then, Kitschelt continued, “the downfall of the working class over the last thirty years is not just a question of its numerical shrinkage, its political disorganization and stagnating wages. It also signifies a loss of status.” The political consequences are evident and can be seen in the aftermath of the defeat of President Trump:

Those who cannot adopt or compete in the dominant status order — closely associated with the acquisition of knowledge and the mastery of complex cultural performances — make opposition to this order a badge of pride and recognition. The proliferation of conspiracy theories is an indicator of this process. People make themselves believe in them, because it induces them into an alternative world of status and rank.

On the left, Kitschelt wrote, the high value accorded to individuality, difference and autonomy creates

a fundamental tension between the demand for egalitarian economic redistribution — and the associated hope for status leveling — and the prerogative awarded to individualist or voluntary group distinction. This is the locus where identity politics — and the specific form of intersectionality as a mode of signaling multiple facets of distinctiveness — comes in.

In the context of contemporary politics, status competition serves to exacerbate some of the worst aspects of polarization, Kitschelt wrote:

If polarization is understood as the progressive division of society into clusters of people with political preferences and ways of life that set them further and further apart from each other, status politics is clearly a reinforcement of polarization. This augmentation of social division becomes particularly virulent when it features no longer just a clash between high and low status groups in what is still commonly understood as a unified status order, but if each side produces its own status hierarchies with their own values.

These trends will only worsen as claims of separate “status hierarchies” are buttressed by declining economic opportunities and widespread alienation from the mainstream liberal culture.

Millions of voters, including the core group of Trump supporters — whites without college degrees — face bleak futures, pushed further down the ladder by meritocratic competition that rewards what they don’t have: higher education and high scores on standardized tests. Jockeying for place in a merciless meritocracy feeds into the status wars that are presently poisoning the country, even as exacerbated levels of competition are, theoretically, an indispensable component of contemporary geopolitical and economic reality.

Voters in the bottom half of the income distribution face a level of hypercompetition that has, in turn, served to elevate politicized status anxiety in a world where social and economic mobility has, for many, ground to a halt: 90 percent of the age cohort born in the 1940s looked forward to a better standard of living than their parents’, compared with 50 percent for those born since 1980. Even worse, those in the lower status ranks suffer the most lethal consequences of the current pandemic.

These forces in their totality suggest that Joe Biden faces the toughest challenge of his career in attempting to fulfill his pledge to the electorate: “We can restore the defining American promise, that no matter where you start in life, there’s nothing you can’t achieve. And, in doing so, we can restore the soul of our nation.”

Trump has capitalized on the failures of this American promise. Now we have to hope that Biden can deliver.


College: What Is It Good For?

This post is the text of a lecture I gave in 2013 at the annual meeting of the John Dewey Society.  It was published the following year in the Society’s journal, Education and Culture.  Here’s a link to the published version.           

The story I tell here is not a philosophical account of the virtues of the American university but a sociological account of how those virtues arose as unintended consequences of a system of higher education that emerged for less elevated reasons.  Drawing on the analysis in the book I was writing at the time, A Perfect Mess, I show how the system emerged in large part out of two impulses that had nothing to do with advancing knowledge.  One was the competition among religious groups, seeking to plant the denominational flag on the growing western frontier and provide clergy for the newly arriving flock.  The other was the competition among frontier towns to attract settlers who would buy land, using a college as a sign that this town was not just another dusty farm village but a true center of culture.

The essay then goes on to explore how the current positive social benefits of the US higher ed system are supported by the peculiar institutional form that characterizes American colleges and universities. 

My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

In short, I’m portraying the system as one that is infused with irony, from its early origins through to its current functions.  Hope you enjoy it.


College — What Is It Good For?

David F. Labaree

            I want to say up front that I’m here under false pretenses.  I’m not a Dewey scholar or a philosopher; I’m a sociologist doing history in the field of education.  And the title of my lecture is a bit deceptive.   I’m not really going to talk about what college is good for.  Instead I’m going to talk about how the institution we know as the modern American university came into being.  As a sociologist I’m more interested in the structure of the institution than in its philosophical aims.  It’s not that I’m opposed to these aims.  In fact, I love working in a university where these kinds of pursuits are open to us:   Where we can enjoy the free flow of ideas; where we explore any issue in the sciences or humanities that engages us; and where we can go wherever the issue leads without worrying about utility or orthodoxy or politics.  It’s a great privilege to work in such an institution.  And this is why I want to spend some time examining how this institution developed its basic form in the improbable context of the United States in the nineteenth century. 

            My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

            I tell this story in three parts.  I start by exploring how the American system of higher education emerged in the nineteenth century, without a plan and without any apparent promise that it would turn out well.  I show how, by 1900, all the pieces of the current system had come together.  This is the historical part.  Then I show how the combination of these elements created an astonishingly strong, resilient, and powerful structure.  I look at the way this structure deftly balances competing aims – the populist, the practical, and the elite.  This is the sociological part.  Then I veer back toward the issue raised in the title, to figure out what the connection is between the form of American higher education and the things that it is good for.  This is the vaguely philosophical part.  I argue that the form serves the extraordinarily useful functions of protecting those of us in the faculty from the real world, protecting us from each other, and hiding what we’re doing behind a set of fictions and veneers that keep anyone from knowing exactly what is really going on. 

           In this light, I look at some of the things that could kill it for us.  One is transparency.  The current accountability movement directed toward higher education could ruin everything by shining a light on the multitude of conflicting aims, hidden cross-subsidies, and forbidden activities that constitute life in the university.  A second is disaggregation.  I’m talking about current proposals to pare down the complexity of the university in the name of efficiency:  Let online modules take over undergraduate teaching; eliminate costly residential colleges; closet research in separate institutes; and get rid of football.  These changes would destroy the synergy that comes from the university’s complex structure.  A third is principle.  I argue that the university is a procedural institution, which would collapse if we all acted on principle instead of form.   I end with a call for us to retreat from substance and stand shoulder-to-shoulder in defense of procedure.

Historical Roots of the System

            The origins of the American system of higher education could not have been more humble or less promising of future glory.  It was a system, but it had no overall structure of governance and it did not emerge from a plan.  It just happened, through an evolutionary process that had direction but no purpose.  We have a higher education system in the same sense that we have a solar system, each of which emerged over time according to its own rules.  These rules shaped the behavior of the system but they were not the product of Intelligent Design. 

            Yet something there was about this system that produced extraordinary institutional growth.  When George Washington assumed the presidency of the new republic in 1789, the U.S. already had 19 colleges and universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1830 the number had risen to 50, and then growth accelerated, with the total reaching 250 in 1860, 563 in 1870, and 811 in 1880.  To give some perspective, the number of universities in the United Kingdom between 1800 and 1880 rose from 6 to 10 and in all of Europe from 111 to 160 (Rüegg, 2004).  So in 1880 this upstart system had 5 times as many institutions of higher education as did the entire continent of Europe.  How did this happen?

            Keep in mind that the university as an institution was born in medieval Europe in the space between the dominant sources of power and wealth, the church and the state, and it drew  its support over the years from these two sources.  But higher education in the U.S. emerged in a post-feudal frontier setting where the conditions were quite different.  The key to understanding the nature of the American system of higher education is that it arose under conditions where the market was strong, the state was weak, and the church was divided.  In the absence of any overarching authority with the power and money to support a system, individual colleges had to find their own sources of support in order to get started and keep going.  They had to operate as independent enterprises in the competitive economy of higher education, and their primary reasons for being had little to do with higher learning.

            In the early- and mid-nineteenth century, the modal form of higher education in the U.S. was the liberal arts college.  This was a non-profit corporation with a state charter and a lay board, which would appoint a president as CEO of the new enterprise.  The president would then rent a building, hire a faculty, and start recruiting students.  With no guaranteed source of funding, the college had to make a go of it on its own, depending heavily on tuition from students and donations from prominent citizens, alumni, and religious sympathizers.  For college founders, location was everything.  However, whereas European universities typically emerged in major cities, these colleges in the U.S. arose in small towns far from urban population centers.  Not a good strategy if your aim was to draw a lot of students.  But the founders had other things in mind.

            One central motive for founding colleges was to promote religious denominations.  The large majority of liberal arts colleges in this period had a religious affiliation and a clergyman as president.  The U.S. was an extremely competitive market for religious groups seeking to spread the faith, and colleges were a key way to achieve this end.  With colleges, a denomination could prepare its own clergy and provide higher education for its members; and these goals were particularly important on the frontier, where the population was growing and the possibilities for denominational expansion were the greatest.  Every denomination wanted to plant the flag in the new territories, which is why Ohio came to have so many colleges.  The denomination provided a college with legitimacy, students, and a built-in donor pool but with little direct funding.

            Another motive for founding colleges was closely allied with the first, and that was land speculation.  Establishing a college in town was not only a way to advance the faith, it was also a way to raise property values.  If town fathers could attract a college, they could make the case that the town was no mere agricultural village but a cultural center, the kind of place where prospective land buyers would want to build a house, set up a business, and raise a family.  Starting a college was cheap and easy.  It would bear the town’s name and serve as its cultural symbol.  With luck it would give the town leverage to become a county seat or gain a station on the rail line.  So a college was a good investment in a town’s future prosperity (Brown, 1995).

            The liberal arts college was the dominant but not the only form that higher education took in nineteenth century America.  Three other types of institutions emerged before 1880.  One was state universities, which were founded and governed by individual states but which received only modest state funding.  Like liberal arts colleges, they arose largely for competitive reasons.  They emerged in the new states as the frontier moved westward, not because of huge student demand but because of the need for legitimacy.  You couldn’t be taken seriously as a state unless you had a state university, especially if your neighbor had just established one. 

            The second form of institution was the land-grant college, which arose from federal efforts to promote land sales in the new territories by providing public land as a founding grant for new institutions of higher education.  Turning their backs on the classical curriculum that had long prevailed in colleges, these schools had a mandate to promote practical learning in fields such as agriculture, engineering, military science, and mining. 

            The third form was the normal school, which emerged in the middle of the century as state-founded high-school-level institutions for the preparation of teachers.  It wasn’t until the end of the century that these schools evolved into teachers colleges; and in the twentieth century they continued that evolution, turning first into full-service state colleges and then by midcentury into regional state universities. 

            Unlike liberal arts colleges, all three of these types of institutions were initiated by and governed by states, and all received some public funding.  But this funding was not nearly enough to keep them afloat, so they faced the same challenges as the liberal arts colleges: their survival depended heavily on their ability to bring in student tuition and draw donations.  In short, the liberal arts college established the model for survival in a setting with a strong market, weak state, and divided church; and the newer public institutions had to play by the same rules.

            By 1880, the structure of the American system of higher education was well established.  It was a system made up of lean and adaptable institutions, with a strong base in rural communities, and led by entrepreneurial presidents, who kept a sharp eye out for possible threats and opportunities in the highly competitive higher-education market.  These colleges had to attract and keep the loyalty of student consumers, whose tuition was critical for paying the bills and who had plenty of alternatives in towns nearby.  And they also had to maintain a close relationship with local notables, religious peers, and alumni, who provided a crucial base of donations.

            The system was only missing two elements to make it workable in the long term.  It lacked sufficient students, and it lacked academic legitimacy.  On the student side, this was the most overbuilt system of higher education the world has ever seen.  In 1880, 811 colleges were scattered across a thinly populated countryside, which amounted to 16 colleges per million of population (Collins, 1979, Table 5.2).  The average college had only 131 students and 14 faculty and granted 17 degrees per year (Carter et al., 2006, Table Bc523, Table Bc571; U.S. Bureau of the Census, 1975, Series H 751).  As I have shown, these colleges were not established in response to student demand, but nonetheless they depended on students for survival.  Without a sharp growth in student enrollments, the whole system would have collapsed. 

            On the academic side, these were colleges in name only.  They were parochial in both senses of the word, small-town institutions stuck in the boondocks and able to make no claim to advancing the boundaries of knowledge.  They were not established to promote higher learning, and they lacked both the intellectual and economic capital required to carry out such a mission.  Many high schools had stronger claims to academic prowess than these colleges.  European visitors in the nineteenth century had a field day ridiculing the intellectual poverty of these institutions.  The system was on death watch.  If it was going to survive, it needed a transfusion that would provide both student enrollments and academic legitimacy. 

            That transfusion arrived just in time from a new European import, the German research university.  This model offered everything that was lacking in the American system.  It reinvented university professors as the best minds of the generation, whose expertise was certified by the new entry-level degree, the Ph.D., and who were pushing back the frontiers of knowledge through scientific research.  It introduced graduate students to the college campus, students who would be selected for their high academic promise and trained to follow in the footsteps of their faculty mentors. 

            And at the same time that the German model offered academic credibility to the American system, the peculiarly Americanized form of this model made university enrollment attractive for undergraduates, whose focus was less on higher learning than on jobs and parties.  The remodeled American university provided credible academic preparation in the cognitive skills required for professional and managerial work; and it provided training in the social and political skills required for corporate employment, through the process of playing the academic game and taking on roles in intercollegiate athletics and on-campus social clubs.  It also promised a social life in which one could have a good time and meet a suitable spouse. 

            By 1900, with the arrival of the research university as the capstone, nearly all of the core elements of the current American system of higher education were in place.  Subsequent developments focused primarily on extending the system downward, adding layers that would make it more accessible to larger numbers of students – as normal schools evolved into regional state universities and as community colleges emerged as the open-access base of an increasingly stratified system.  Here ends the history portion of this account. Now we move on to the sociological part of the story.

Sociological Traits of the System

            When the research university model arrived to save the day in the 1880s, the American system of higher education was in desperate straits.  But at the same time this system had an enormous reservoir of potential strengths that prepared it for its future climb to world dominance.  Let’s consider some of these strengths.  First it had a huge capacity in place, the largest in the world by far:  campuses, buildings, faculty, administration, curriculum, and a strong base in the community.  All it needed was students and credibility. 

            Second, it consisted of a group of institutions that had figured out how to survive under dire Darwinian circumstances, where supply greatly exceeded demand and where there was no secure stream of funding from church or state.  In order to keep the enterprises afloat, they had learned how to hustle for market position, troll for students, and dun donors.  Imagine how well this played out when students found a reason to line up at their doors and donors suddenly saw themselves investing in a winner with a soaring intellectual and social mission. 

            Third, they had learned to be extraordinarily sensitive to consumer demand, upon which everything depended.  Fourth, as a result they became lean and highly adaptable enterprises, which were not bounded by the politics of state policy or the dogma of the church but could take advantage of any emerging possibility for a new program, a new kind of student or donor, or a new area of research.  Not only were they able to adapt but they were forced to do so quickly, since otherwise the competition would jump on the opportunity first and eat their lunch.

            By the time the research university arrived on the scene, the American system of higher education was already firmly established and governed by its own peculiar laws of motion and its own evolutionary patterns.  The university did not transform the system.  Instead it crowned the system and made it viable for a century of expansion and elevation.  Americans could not simply adopt the German university model, since this model depended heavily on strong state support, which was lacking in the U.S.  And the American system would not sustain a university as elevated as the German university, with its tight focus on graduate education and research at the expense of other functions.  American universities that tried to pursue this approach – such as Clark University and Johns Hopkins – found themselves quickly trailing the pack of institutions that adopted a hybrid model grounded in the preexisting American system.  In the U.S., the research university provided a crucial add-on rather than a transformation.  In this institutionally-complex market-based system, the research university became embedded within a convoluted but highly functional structure of cross-subsidies, interwoven income streams, widely dispersed political constituencies, and a bewildering array of goals and functions. 

            At the core of the system is a delicate balance among three starkly different models of higher education.  These three roughly correspond to Clark Kerr’s famous characterization of the American system as a mix of the British undergraduate college, the American land-grant college, and the German research university (Kerr, 2001, p. 14).  The first is the populist element, the second is the practical element, and the third is the elite element.  Let me say a little about each of these and make the case for how they work to reinforce each other and shore up the overall system.  I argue that these three elements are unevenly distributed across the whole system, with the populist and practical parts strongest in the lower tiers of the system, where access is easy and job utility is central, and the elite part strongest in the upper tier.  But I also argue that all three are present in the research university at the top of the system.  Consider how all these elements come together in a prototypical flagship state university.

            The populist element has its roots in the British residential undergraduate college, which colonists had in mind when they established the first American colleges; but the changes that emerged in the U.S. in the early nineteenth century were critical.  Key was the fact that American colleges during this period were broadly accessible in a way that colleges in the U.K. never were until the advent of the red-brick universities after the Second World War.  American colleges were not located in fashionable areas in major cities but in small towns in the hinterland.  There were far too many of them for them to be elite, and the need for students meant that tuition and academic standards both had to be kept relatively low.  The American college never exuded the odor of class privilege to the same degree as Oxbridge; its clientele was largely middle class.  For the new research university, this legacy meant that the undergraduate program provided critical economic and political support. 

            From the economic perspective, undergrads paid tuition, which – through large classes and thus the need for graduate teaching assistants – supported graduate programs and the larger research enterprise.  Undergrads, who were socialized in the rituals of football and fraternities, were also the ones who identified most closely with the university, which meant that in later years they became the most loyal donors.  As doers rather than thinkers, they were also the wealthiest group of alumni donors.  Politically, the undergraduate program gave the university a broad base of community support.  Since anyone could conceive of attending the state university, the institution was never as remote or alien as the German model.  Its athletic teams and academic accomplishments were a point of pride for state residents, whether or not they or their children ever attended.  They wore the school colors and cheered for it on game days.

            The practical element has its roots in the land-grant college.  The idea here was that the university was not just an enterprise for providing liberal education for the elite but that it could also provide useful occupational skills for ordinary people.  Since the institution needed to attract a large group of students to pay the bills, the American university left no stone unturned when it came to developing programs that students might want.  It promoted itself as a practical and reliable mechanism for getting a good job.  This not only boosted enrollment, but it also sent a message to the citizens of the state that the university was making itself useful to the larger community, producing the teachers, engineers, managers, and dental hygienists that they needed.  

            This practical bent also extended to the university’s research effort, which was not focused solely on ivory tower pursuits.  Its researchers were working hard to design safer bridges, more productive crops, better vaccines, and more reliable student tests.  For example, when I taught at Michigan State I planted my lawn with Spartan grass seed, which was developed at the university.  These forms of applied research led to patents that brought substantial income back to the institution, but their most important function was to provide a broad base of support for the university among people who had no connection with it as an instructional or intellectual enterprise.  The idea was compelling: This is your university, working for you.

            The elite element has its roots in the German research university.  This is the component of the university formula that gives the institution academic credibility at the highest level.  Without it the university would just be a party school for the intellectually challenged and a trade school for job seekers.  From this angle, the university is the haven for the best thinkers, where professors can pursue intellectual challenges of the first order, develop cutting edge research in a wide array of domains, and train graduate students who will carry on these pursuits in the next generation.  And this academic aura envelops the entire enterprise, giving the lowliest freshman exposure to the most distinguished faculty and allowing the average graduate to sport a diploma burnished by the academic reputations of the best and the brightest.  The problem, of course, is that supporting professorial research and advanced graduate study is enormously expensive; research grants only provide a fraction of the needed funds. 

            So the populist and practical domains of the university are critically important components of the larger university package.  Without the foundation of fraternities and football, grass seed and teacher education, the superstructure of academic accomplishment would collapse of its own weight.  The academic side of the university can’t survive without both the financial subsidies and political support that come from the populist and the practical sides.  And the populist and practical sides rely on the academic legitimacy that comes from the elite side.  It’s the mixture of the three that constitutes the core strength of the American system of higher education.  This is why it is so resilient, so adaptable, so wealthy, and so powerful.  This is why its financial and political base is so broad and strong.  And this is why American institutions of higher education enjoy so much autonomy:  They respond to many sources of power in American society and they rely on many sources of support, which means they are not the captive of any single power source or revenue stream.

The Power of Form

            So my story about the American system of higher education is that it succeeded by developing a structure that allowed it to become both economically rich and politically autonomous.  It could tap multiple sources of revenue and legitimacy, which allowed it to avoid becoming the wholly owned subsidiary of the state, the church, or the market.  And by virtue of its structurally reinforced autonomy, college is good for a great many things.

            At last we come back to our topic.  What is college good for?  For those of us on the faculties of research universities, these institutions provide several core benefits that we see as especially important.  At the top of the list is that they preserve and promote free speech.  They are zones where faculty and students can feel free to pursue any idea, any line of argument, and any intellectual pursuit that they wish – free of the constraints of political pressure, cultural convention, or material interest.  Closely related to this is the fact that universities become zones where play is not only permissible but even desirable, where it’s ok to pursue an idea just because it’s intriguing, even though there is no apparent practical benefit that this pursuit would produce.

            This, of course, is a rather idealized version of the university.  In practice, as we know, politics, convention, and economics constantly intrude on the zone of autonomy in an effort to shape the process and limit these freedoms.  This is particularly true in the lower strata of the system.  My argument is not that the ideal is met but that the structure of American higher education – especially in the top tier of the system – creates a space of relative autonomy, where these constraining forces are partially held back, allowing the possibility for free intellectual pursuits that cannot be found anywhere else. 

            Free intellectual play is what we in the faculty tend to care about, but others in American society see other benefits arising from higher education that justify the enormous time and treasure that we devote to supporting the system.  Policymakers and employers put primary emphasis on higher education as an engine of human capital production, which provides the economically relevant skills that drive increases in worker productivity and growth in the GDP.  They also hail it as a place of knowledge production, where people develop valuable technologies, theories, and inventions that can feed directly into the economy.  And companies use it as a place to outsource much of their need for workforce training and research and development. 

            These pragmatic benefits that people see coming from the system of higher education are real.  Universities truly are socially useful in such ways.  But it’s important to keep in mind that these social benefits can arise only if the university remains a preserve for free intellectual play.  Universities are much less useful to society if they restrict themselves to the training of individuals for particular present-day jobs, or to the production of research to solve current problems.  They are most useful if they function as storehouses for knowledge, skills, technologies, and theories – for which there is no current application but which may turn out to be enormously useful in the future.  They are the mechanism by which modern societies build capacity to deal with issues that have not yet emerged but sooner or later are likely to do so.

            But that is a discussion for another speech by another scholar.  The point I want to make today about the American system of higher education is that it is good for a lot of things but it was established in order to accomplish none of these things.  As I have shown, the system that arose in the nineteenth century was not trying to store knowledge, produce capacity, or increase productivity.  And it wasn’t trying to promote free speech or encourage play with ideas.  It wasn’t even trying to preserve institutional autonomy.  These things happened as the system developed, but they were all unintended consequences.  What was driving development of the system was a clash of competing interests, all of which saw colleges as a useful medium for meeting particular ends.  Religious denominations saw them as a way to spread the faith.  Town fathers saw them as a way to promote local development and increase property values.  The federal government saw them as a way to spur the sale of federal lands.  State governments saw them as a way to establish credibility in competition with other states.  College presidents and faculty saw them as a way to promote their own careers.  And at the base of the whole process of system development were the consumers, the students, without whose enrollment and tuition and donations the system would not have been able to persist.  The consumers saw the college as useful in a number of ways:  as a medium for seeking social opportunity and achieving social mobility; as a medium for preserving social advantage and avoiding downward mobility; as a place to have a good time, enjoy an easy transition to adulthood, pick up some social skills, and meet a spouse; even, sometimes, as a place to learn. 

            The point is that the primary benefits of the system of higher education derive from its form, but this form did not arise in order to produce these benefits.  We need to preserve the form in order to continue enjoying these benefits, but unfortunately the organizational foundations upon which the form is built are, on the face of it, absurd.  And each of these foundational qualities is currently under attack from the perspective of alternative visions that, in contrast, have a certain face validity.  If the attackers accomplish their goals, the system’s form, which has been so enormously productive over the years, will collapse, and with this collapse will come the end of the university as we know it.  I didn’t promise this lecture would end well, did I?

            Let me spell out three challenges that would undercut the core autonomy and synergy that make the system so productive in its current form.  On the surface, each of the proposed changes seems quite sensible and desirable.  Only by examining the implications of actually pursuing these changes can we see how they threaten the foundational qualities that currently undergird the system.  The system’s foundations are so paradoxical, however, that mounting a public defense of them would be difficult indeed.  Yet it is precisely these traits of the system that we need to defend in order to preserve the current highly functional form of the university.  In what follows, I am drawing inspiration from the work of Suzanne Lohmann (2004, 2006), a political scientist at UCLA who has addressed these issues most astutely.

            One challenge comes from prospective reformers of American higher education who want to promote transparency.  Who can be against that?  This idea derives from the accountability movement, which has already swept across K-12 education and is now pounding the shores of higher education.  It simply asks universities to show people what they’re doing.  What is the university doing with its money and its effort?  Who is paying for what?  How do the various pieces of the complex structure of the university fit together?  And are they self-supporting or drawing resources from elsewhere?  What is faculty credit-hour production?  How is tuition related to instructional costs?  And so on.   These demands make a lot of sense. 

            The problem, however, as I have shown today, is that the autonomy of the university depends on its ability to shield its inner workings from public scrutiny.  It relies on opacity.  Autonomy will end if the public can see everything that is going on and what everything costs.  Consider all of the cross subsidies that keep the institution afloat:  undergraduates support graduate education, football supports lacrosse, adjuncts subsidize professors, rich schools subsidize poor schools.  Consider all of the instructional activities that would wilt in the light of day; consider all of the research projects that could be seen as useless or politically unacceptable.  The current structure keeps the inner workings of the system obscure, which protects the university from intrusions on its autonomy.  Remember, this autonomy arose by accident not by design; its persistence depends on keeping the details of university operations out of public view.

            A second and related challenge comes from reformers who seek to promote disaggregation.  The university is an organizational nightmare, they say, with all of those institutes and centers, departments and schools, programs and administrative offices.  There are no clear lines of authority, no mechanisms to promote efficiency and eliminate duplication, no tools to achieve economies of scale.  Transparency is one step in the right direction, they say, but the real reform that is needed is to take apart the complex interdependencies and overlapping responsibilities within the university and then figure out how each of these tasks could be accomplished in the most cost-effective and outcome-effective manner.  Why not have a few star professors tape lectures and then offer Massive Open Online Courses at colleges across the country?  Why not have institutions specialize in what they’re best at – remedial education, undergraduate instruction, vocational education, research production, graduate student training?  Putting them together into a single institution is expensive and grossly inefficient. 

But recall that it is precisely the aggregation of purposes and functions – the combination of the populist, the practical, and the elite – that has made the university so strong, so successful, and, yes, so useful.  This combination creates a strong base both financially and politically and allows for forms of synergy that cannot happen with a set of isolated educational functions.  The fact is that this institution can’t be disaggregated without losing what makes it the kind of university that students, policymakers, employers, and the general public find so compelling.  A key organizational element that makes the university so effective is its chaotic complexity.

            A third challenge comes not from reformers intruding on the university from the outside but from faculty members meddling with it from the inside.  The threat here arises from the dangerous practice of acting on academic principle.  Fortunately, this is not very common in academe.  But the danger is lurking in the background of every decision about faculty hires.  Here’s how it works.  You review a finalist for a faculty position in a field not closely connected to your own, and you find to your horror that the candidate’s intellectual domain seems absurd on the face of it (how can anyone take this type of work seriously?) and the candidate’s own scholarship doesn’t seem credible.  So you decide to speak against hiring the candidate and organize colleagues to support your position.  But then you happen to read a paper by Suzanne Lohmann, who points out something very fundamental about how universities work. 

Universities are structured in a manner that protects the faculty from the outside world (that is, protecting them from the forces of transparency and disaggregation), but they are also organized in a manner that protects the faculty from each other.  The latter is the reason we have such an enormous array of departments and schools in universities.  If every historian had to meet the approval of geologists and every psychologist had to meet the approval of law faculty, no one would ever be hired.

           The simple fact is that part of what keeps universities healthy and autonomous is hypocrisy.  Because of the Balkanized structure of university organization, we all have our own protected spaces to operate in and we all pass judgment only on our own peers within that space.  To do otherwise would be disastrous.  We don’t have to respect each other’s work across campus, we merely need to tolerate it – grumbling about each other in private and making nice in public.  You pick your faculty, we’ll pick ours.  Lohmann (2006) calls this core procedure of the academy “log-rolling.”  If we all operated on principle, if we all only approved scholars we respected, then the university would be a much diminished place.  Put another way, I wouldn’t want to belong to a university that consisted only of people I found worthy.  Gone would be the diversity of views, paradigms, methodologies, theories, and world views that makes the university such a rich place.  The result is incredibly messy, and it permits a lot of quirky – even ridiculous – research agendas, courses, and instructional programs.  But in aggregate, this libertarian chaos includes an extraordinary range of ideas, capacities, theories, and social possibilities.  It’s exactly the kind of mess we need to treasure and preserve and defend against all opponents.

            So here is the thought I’m leaving you with.  The American system of higher education is enormously productive and useful, and it’s a great resource for students, faculty, policymakers, employers, and society.  What makes it work is not its substance but its form.  Crucial to its success is its devotion to three formal qualities:  opacity, chaotic complexity, and hypocrisy.  Embrace these forms and they will keep us free.

Posted in Credentialing, Higher Education, History of education, Sociology, Uncategorized

How Credentialing Theory Explains the Extraordinary Growth in US Higher Ed in the 19th Century

Today I am posting a piece I wrote in 1995. It was the foreword to a book by David K. Brown, Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism.  

I have long been interested in credentialing theory, but this is the only place where I ever tried to spell out in detail how the theory works.  For this purpose, I draw on the case of the rapid expansion of the US system of higher education in the 19th century and its transformation at the end of the century, which is the focus of Brown’s book.  Here’s a link to a pdf of the original. 

The case is particularly fruitful for demonstrating the value of credentialing theory, because the most prominent theory of education development simply can’t make sense of it.  Functionalist theory sees the emergence of educational systems as part of the process of modernization.  As societies become more complex, with a greater division of labor and a shift from manual to mental work, the economy requires workers with higher degrees of verbal and cognitive skills.  Elementary, secondary, and higher education arise over time in response to this need.

The history of education in the U.S., however, poses a real problem for this explanation.  American higher education exploded in the 19th century, to the point that there were some 800 colleges in existence by 1880, which was more than the total number on the continent of Europe.  It was the highest rate of colleges per 100,000 population that the world had ever seen.  The problem is that this increase was not in response to increasing demand from employers for college-educated workers.  While the rate of higher schooling was increasing across the century, the skill demands in the workforce were declining.  The growth of factory production was subdividing forms of skilled work, such as shoemaking, into a series of low-skilled tasks on the assembly line.

This being the case, then, how can we understand the explosion of college founding in the 19th century?  Brown provides a compelling explanation, and I lay out his core arguments in my foreword.  I hope you find it illuminating.

 

Brown Cover

Preface

In this book, David Brown tackles an important question that has long puzzled scholars who wanted to understand the central role that education plays in American society: When compared with other Western countries, why did the United States experience such extraordinary growth in higher education? Whereas in most societies higher education has long been seen as a privilege that is granted to a relatively small proportion of the population, in the United States it has increasingly come to be seen as a right of the ordinary citizen. Nor was this rapid increase in accessibility a very recent phenomenon. As Brown notes, between 1870 and 1930, the proportion of college-age persons (18 to 21 years old) who attended institutions of higher education rose from 1.7% to 13.0%. And this was long before the proliferation of regional state universities and community colleges made college attendance a majority experience for American youth.

The range of possible answers to this question is considerable, with each carrying its own distinctive image of the nature of American political and social life. For example, perhaps the rapid growth in the opportunity for higher education was an expression of egalitarian politics and a confirmation of the American Dream; or perhaps it was a political diversion, providing ideological cover for persistent inequality; or perhaps it was merely an accident — an unintended consequence of a struggle for something altogether different. In politically charged terrain such as this, one would prefer to seek guidance from an author who doesn’t ask the reader to march behind an ideological banner toward a preordained conclusion, but who instead rigorously examines the historical data and allows for the possibility of encountering surprises. What the reader wants, I think, is an analysis that is both informed by theory and sensitive to historical nuance.

In this book, Brown provides such an analysis. He approaches the subject from the perspective of historical sociology, and in doing so he manages to maintain an unusually effective balance between historical explanation and sociological theory-building. Unlike many sociologists dealing with history, he never oversimplifies the complexity of historical events in the rush toward premature theoretical closure; and unlike many historians dealing with sociology, he doesn’t merely import existing theories into his historical analysis but rather conceives of the analysis itself as a contribution to theory. His aim is therefore quite ambitious – to spell out a theoretical explanation for the spectacular growth and peculiar structure of American higher education, and to ground this explanation in an analysis of the role of college credentials in American life.

Traditional explanations do not hold up very well when examined closely. Structural-functionalist theory argues that an expanding economy created a powerful demand for advanced technical skills (human capital), which only a rapid expansion of higher education could fill. But Brown notes that during this expansion most students pursued programs not in vocational-technical areas but in liberal arts, meaning that the forms of knowledge they were acquiring were rather remote from the economically productive skills supposedly demanded by employers. Social reproduction theory sees the university as a mechanism that emerged to protect the privilege of the upper-middle class behind a wall of cultural capital, during a time (with the decline of proprietorship) when it became increasingly difficult for economic capital alone to provide such protection. But, while this theory points to a central outcome of college expansion, it fails to explain the historical contingencies and agencies that actually produced this outcome. In fact, both of these theories are essentially functionalist in approach, portraying higher education as arising automatically to fill a social need — within the economy, in the first case, and within the class system, in the second.

However, credentialing theory, as developed most extensively by Randall Collins (1979), helps explain the socially reproductive effect of expanding higher education without denying agency. It conceives of higher education diplomas as a kind of cultural currency that becomes attractive to status groups seeking an advantage in the competition for social positions, and therefore it sees the expansion of higher education as a response to consumer demand rather than functional necessity. Upper classes tend to benefit disproportionately from this educational development, not because of an institutional correspondence principle that preordains such an outcome, but because they are socially and culturally better equipped to gain access to and succeed within the educational market.

This credentialist theory of educational growth is the one that Brown finds most compelling as the basis for his own interpretation. However, when he plunges into a close examination of American higher education, he finds that Collins’ formulation of this theory often does not coincide very well with the historical evidence. One key problem is that Collins does not examine the nature of labor market recruitment, which is critical for credentialist theory, since the pursuit of college credentials only makes sense if employers are rewarding degree holders with desirable jobs. Brown shows that between 1800 and 1880 the number of colleges in the United States grew dramatically (as Collins also asserts), but that enrollments at individual colleges were quite modest. He argues that this binge of institution-creation was driven by a combination of religious and market forces but not (contrary to Collins) by the pursuit of credentials. There simply is no good evidence that a college degree was much in demand by employers during this period. Instead, a great deal of the growth in the number of colleges was the result of the desire by religious and ethnic groups to create their own settings for producing clergy and transmitting culture. In a particularly intriguing analysis, Brown argues that an additional spur to this growth came from markedly less elevated sources — local boosterism and land speculation — as development-oriented towns sought to establish colleges as a mechanism for attracting land buyers and new residents.

Brown’s version of credentialing theory identifies a few central factors that are required in order to facilitate a credential-driven expansion of higher education, and by 1880 several of these were already in place. One such factor is substantial wealth. Higher education is expensive, and expanding it for reasons of individual status attainment rather than for societal necessity is a wasteful use of a nation’s resources; it is only feasible for a very wealthy country. The United States was such a country in the late nineteenth century. A second factor is a broad institutional base. At this point, the United States had the largest number of colleges per million residents that the country has ever seen, before or since. When combined with the small enrollments at each college, this meant that there was a great potential for growth within an already existing institutional framework. This potential was reinforced by a third factor, decentralized control. Colleges were governed by local boards rather than central state authorities, thus encouraging entrepreneurial behavior by college leaders, especially in the intensely competitive market environment they faced.

However, three other essential factors for rapid credential-based growth in higher education were still missing in 1880. For one thing, colleges were not going to be able to attract large numbers of new students, who were after all unlikely to be motivated solely by the love of learning, unless they could offer these students both a pleasant social experience and a practical educational experience — neither of which was the norm at colleges for most of the nineteenth century. Another problem was that colleges could not function as credentialing institutions until they had a monopoly over a particular form of credentials, but in 1880 they were still competing directly with high schools for the same students. Finally, their credentials were not going to have any value on the market unless employers began to demonstrate a distinct preference for hiring college graduates, and such a preference was still not obvious at this stage.

According to Brown, the 1880s saw a major shift in all three of these factors. The trigger for this change was a significant oversupply of institutions relative to existing demand. In this life or death situation, colleges desperately sought to increase the pool of potential students. It is no coincidence that this period marked the rapid diffusion of efforts to improve the quality of social life on campuses (from the promotion of athletics to the proliferation of fraternities), and also the shift toward a curriculum with a stronger claim of practicality (emphasizing modern languages and science over Latin and Greek). At the same time, colleges sought to guarantee a flow of students from feeder institutions, which required them to establish a hierarchical relationship with high schools. The end of the century was the period in which colleges began requiring completion of a high school course as a prerequisite for college admission instead of the traditional entrance examination. This system provided high schools with a stable outlet for their graduates and colleges with a predictable flow of reasonably well-prepared students. However, none of this would have been possible if the college degree had not acquired significant exchange value in the labor market. Without this, there would have been only social reasons for attending college, and high schools would have had little incentive to submit to college mandates.

Perhaps Brown’s strongest contribution to credential theory is his subtle and persuasive analysis of the reasoning that led employers to assert a preference for college graduates in the hiring process. Until now, this issue has posed a significant, perhaps fatal, problem for credentialing theory, which has asked the reader to accept two apparently contradictory assertions about credentials. First, the theory claims that a college degree has exchange value but not necessarily use value; that is, it is attractive to the consumer because it can be cashed in on a good job more or less independently of any learning that was acquired along the way. Second, this exchange value depends on the willingness of employers to hire applicants based on credentials alone, without direct knowledge of what these applicants know or what they can do. However, this raises a serious question about the rationality of the employer in this process. After all, why would an employer, who presumably cares about the productivity of future employees, hire people based solely on a college’s certification of competence in the absence of any evidence for that competence?

Brown tackles this issue with a nice measure of historical and sociological insight. He notes that the late nineteenth century saw the growing rationalization of work, which led to the development of large-scale bureaucracies to administer this work within both private corporations and public agencies. One result was the creation of a rapidly growing occupational sector for managerial employees who could function effectively within such a rationalized organizational structure. College graduates seemed to fit the bill for this kind of work. They emerged from the top level of the newly developed hierarchy of educational institutions and therefore seemed like natural candidates for management work in the upper levels of the new administrative hierarchy, which was based not on proprietorship or political office but on apparent skill. And what kinds of skills were called for in this line of work? What the new managerial employees needed was not so much the technical skills posited by human capital theory, he argues, but a general capacity to work effectively in a verbally and cognitively structured organizational environment, and also a capacity to feel comfortable about assuming positions of authority over other people.

These were things that the emerging American college could and indeed did provide. The increasingly corporate social structure of student life on college campuses provided good socialization for bureaucratic work, and the process of gaining access to and graduation from college provided students with an institutionalized confirmation of their social superiority and qualifications for leadership. Note that these capacities were substantive consequences of having attended college, but they were not learned as part of the college’s formal curriculum. That is, the characteristics that qualified college graduates for future bureaucratic employment were a side effect of their pursuit of a college education. In this sense, then, the college credential had a substantive meaning for employers that justified them in using it as a criterion for employment — less for the human capital that college provided than for the social capital that college conferred on graduates. Therefore, this credential, Brown argues, served an important role in the labor market by reducing the uncertainty that plagued the process of bureaucratic hiring. After all, how else was an employer to gain some assurance that a candidate could do this kind of work? A college degree offered a claim to competence, which had enough substance behind it to be credible even if this substance was largely unrelated to the content of the college curriculum.

By the 1890s all the pieces were in place for a rapid expansion of college enrollments, strongly driven by credentialist pressures. Employers had reason to give preference to college graduates when hiring for management positions. As a result, middle-class families had an increasing incentive to provide their children with privileged access to an advantaged social position by sending them to college. For the students themselves, this extrinsic reward for attending college was reinforced by the intrinsic benefits accruing from an attractive social life on campus. All of this created a very strong demand for expanding college enrollments, and the pre-existing institutional conditions in higher education made it possible for colleges to respond to this demand in an aggressive fashion. There were a thousand independent institutions of higher education, accustomed to playing entrepreneurial roles in a competitive educational market, that were eager to capitalize on the surge of interest in attending college and to adapt themselves to the preferences of these new tuition-paying consumers. The result was a powerful and unrelenting surge of expansion in college enrollments that continued for the next century.

 

Brown provides a persuasive answer to the initial question about why American higher education expanded at such a rapid rate. But at this point the reader may well respond by asking the generic question that one should ask of any analyst, and that is, “So what?” More specifically, in light of the particular claims of this analysis, the question becomes: “What difference does it make that this expansion was spurred primarily by the pursuit of educational credentials?” In my view, at least, the answer to that question is clear. The impact of credentialism on both American society and the American educational system has been profound — profoundly negative. Consider some of the problems it has caused.

One major problem is that credentialism is astonishingly inefficient. Education is the largest single public investment made by most modern societies, and this is justified on the grounds that it provides a critically important contribution to the collective welfare. The public value of education is usually calculated as some combination of two types of benefits, the preparation of capable citizens (the political benefit) and the training of productive workers (the economic benefit). However, the credentialist argument advanced by Brown suggests that these public benefits are not necessarily being met and that the primary beneficiaries are in fact private individuals. From this perspective, higher education (and the educational system more generally) exists largely as a mechanism for providing individuals with a cultural commodity that will give them a substantial competitive advantage in the pursuit of social position. In short, education becomes little but a vast public subsidy for private ambition.

The practical effect of this subsidy is the production of a glut of graduates. The difficulty posed by this outcome is not that the population becomes overeducated (since such a state is difficult to imagine) but that it becomes overcredentialed, since people are pursuing diplomas less for the knowledge they are thereby acquiring than for the access that the diplomas themselves will provide. The result is a spiral of credential inflation; for as each level of education in turn gradually floods with a crowd of ambitious consumers, individuals have to keep seeking ever higher levels of credentials in order to move a step ahead of the pack. In such a system nobody wins. Consumers have to spend increasing amounts of time and money to gain additional credentials, since the swelling number of credential holders keeps lowering the value of credentials at any given level. Taxpayers find an increasing share of scarce fiscal resources going to support an educational chase with little public benefit. Employers keep raising the entry-level education requirements for particular jobs, but they still find that they have to provide extensive training before employees can carry out their work productively. At all levels, this is an enormously wasteful system, one that rich countries like the United States can increasingly ill afford and that less developed countries, which imitate the U.S. educational model, find positively impoverishing.

A second major problem is that credentialism undercuts learning. In both college and high school, students are all too well aware that their mission is to do whatever it takes to acquire a diploma, which they can then cash in on what really matters — a good job. This has the effect of reifying the formal markers of academic progress — grades, credits, and degrees — and encouraging students to focus their attention on accumulating these badges of merit for the exchange value they offer. But at the same time this means directing attention away from the substance of education, reducing student motivation to learn the knowledge and skills that constitute the core of the educational curriculum. Under such conditions, it is quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning. This perspective is almost perfectly captured by a common student question, one that sends chills down the back of the learning-centered teacher but that makes perfect sense for the credential-oriented student: “Is this going to be on the test?” (Sedlak et al., 1986, p. 182). We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system.

A third problem posed by credentialism is social and political more than educational. According to credentialing theory, the connection between social class and education is neither direct nor automatic, as suggested by social reproduction theory. Instead, the argument goes, market forces mediate between the class position of students and their access to and success within the educational system. That is, there is general competition for admission to institutions of higher education and for levels of achievement within these institutions. Class advantage is no guarantee of success in this competition, since such factors as individual ability, motivation, and luck all play a part in determining the result. Market forces also mediate between educational attainment (the acquisition of credentials) and social attainment (the acquisition of a social position). Some college degrees are worth more in the credentials market than others, and they provide privileged access to higher level positions independent of the class origins of the credential holder.

However, in both of these market competitions, one for acquiring the credential and the other for cashing it in, a higher class position provides a significant competitive edge. The economic, cultural, and social capital that comes with higher class standing gives the bearer an advantage in getting into college, in doing well at college, and in translating college credentials into desirable social outcomes. The market-based competition that characterizes the acquisition and disposition of educational credentials gives the process a meritocratic set of possibilities, but the influence of class on this competition gives it a socially reproductive set of probabilities as well. The danger is that, as a result, a credential-driven system of education can provide meritocratic cover for socially reproductive outcomes. In the single-minded pursuit of educational credentials, both student consumers and the society that supports them can lose sight of an all-too-predictable pattern of outcomes that is masked by the headlong rush for the academic gold.

Posted in Culture, History, Politics, Populism, Sociology

Colin Woodard: Maps that Show the Historical Roots of Current US Political Faultlines

This post is a commentary on Colin Woodard’s book American Nations: A History of the Eleven Rival Regional Cultures of North America.  

Woodard argues that the United States is not a single national culture but  a collection of national cultures, each with its own geographic base.  The core insight for this analytical approach comes from “Wilbur Zelinsky of Pennsylvania State University [who] formulated [a] theory in 1973, which he called the Doctrine of First Effective Settlement. ‘Whenever an empty territory undergoes settlement, or an earlier population is dislodged by invaders, the specific characteristics of the first group able to effect a viable, self-perpetuating society are of crucial significance for the later social and cultural geography of the area, no matter how tiny the initial band of settlers may have been,’ Zelinsky wrote. ‘Thus, in terms of lasting impact, the activities of a few hundred, or even a few score, initial colonizers can mean much more for the cultural geography of a place than the contributions of tens of thousands of new immigrants a few generations later.’”

I’m suspicious of theories that smack of cultural immutability and cultural determinism, but Woodard’s account is more sophisticated than that.  His is a story of the power of founders in a new institutional setting, who lay out the foundational norms for a society that lacks any cultural history of its own or which expelled the preexisting cultural group (in the U.S. case, Native Americans).  So part of the story is about the acculturation of newcomers into an existing worldview.  But another part is the highly selective nature of immigration, since new arrivals often seek out places to settle that are culturally compatible.  They may target a particular destination because of its cultural characteristics, creating a pipeline of like-minded immigrants; or they may choose to move on to another territory if the first port of entry is not to their taste.  Once established, these cultures often expanded westward as the country developed, extending the size and geographical scope of each nation.

Why does he insist on calling them nations?  At first this bothered me a bit, but then I realized he was using the term “nation” in Benedict Anderson’s sense as “imagined communities.”  Tidewater and Yankeedom are not nation states; they are cultural components of the American state.  But they do act as nations for their citizens.  Each of these nations is a community of shared values and worldviews that binds people together who have never met and often live far away.  The magic of the nation is that it creates a community of common sense and purpose that extends well beyond the reach of normal social interaction.  If you’re Yankee to the core, you can land in a strange town in Yankeedom and feel at home.  These are my people.  I belong here.

He argues that these national groupings continue to have a significant impact on the cultural geography of the US, shaping people’s values, styles of social organization, views of religion and government, and ultimately how they vote.  The kicker is the alignment between the spatial distribution of these cultures and the current voting patterns.  He lays out this argument succinctly in a 2018 op-ed he wrote for the New York Times.  I recommend reading it.

The whole analysis is neatly summarized in the two maps he deployed in that op-ed, which I have reproduced below.

The Map of America’s 11 Nations

11 Nations Map

This first map shows the geographic boundaries of the various cultural groupings in the U.S.  It all started on the east coast with the founding cultural binary that shaped the formation of the country in the late 18th century — New England Yankees and Tidewater planters.  He argues that they are direct descendants of the two factions in the English civil war of the mid 17th century, with the Yankees as the Calvinist Roundheads, who (especially after being routed by the Restoration in England) sought to establish a new theocratic society in the northeast founded on strong government, and the Anglican Cavaliers, who sought to reproduce the decentralized English aristocratic ideal on Virginia plantations.  In between were the Dutch entrepot of New York, focused on commerce and multiculturalism (think “Hamilton”), and the Quaker colony in Pennsylvania, founded on equality and suspicion of government.  The US constitution was an effort to balance all of these cultural priorities within a single federal system.

Then came two other groups that didn’t fit well into any of these four cultural enclaves.  The immigrants to the Deep South originated in the slave societies of the British West Indies, bringing with them a rigid caste structure and a particularly harsh version of chattel slavery.  Immigrants to Greater Appalachia came from the Scots-Irish clan cultures in Northern Ireland and the Scottish borderlands, with a strong commitment to individual liberty, resentment of government, and a taste for violence.

Tidewater and Yankeedom dominated the presidency and federal government for the country’s first 40 years.  But in 1828 the US elected its first president from rapidly expanding Appalachia, Andrew Jackson.  And by then the massive westward expansion of the Deep South, along with the extraordinary wealth and power that accrued from its cotton-producing slave economy, created the dynamics leading to the Civil War.  This pitted the four nations of the northeast against Tidewater and Deep South, with Appalachia split between the two, resentful of both Yankee piety and Southern condescension.  The multiracial and multicultural nations of French New Orleans and the Mexican southwest (El Norte) were hostile to the Deep South and resented its efforts to expand its dominion westward.

The other two major cultural groupings emerged in the mid 19th century.  The thin strip along the west coast consisted of Yankees in the cities and Appalachians in the back country, combining the utopianism of the former with the radical individualism of the latter.  The Far West is the one grouping that is based not on cultural geography but physical geography.  A vast arid area unsuited to farming, it became the domain of the only two entities powerful enough to control it — large corporations (railroad and mining), which exploited it, and the federal government, which owned most of the land and provided armed protection from Indians.

So let’s jump ahead and look at the consequences of this cultural landscape for our current political divisions.  Examine the electoral map for the 2016 presidential race, which shows the vote in Woodard’s 11 nations.

The 2016 Electoral Map

2016 Vote Map

Usually you see voting maps with results by state.  Here instead we see voting results by county, which allows for a more fine-tuned analysis.  Woodard assigns each county to one of the 11 “nations” and then shows the red or blue vote margin for each cultural grouping.

It’s striking to see how well the nations match the vote.  The strongest vote for Clinton came from the Left Coast, El Norte, and New Netherland, with substantial support from Yankeedom, Tidewater, and Spanish Caribbean.  Midlands was only marginally supportive of the Democrat.  Meanwhile the Deep South and Far West were modestly pro-Trump (about as much as Yankeedom was pro-Clinton), but the true kicker was Appalachia, which voted overwhelmingly for Trump (along with New France in southern Louisiana).

Appalachia forms the heart of Trump’s electoral base of support.  It’s an area that resents intellectual, cultural, and political elites; that turns away from mainstream religious denominations in favor of evangelical sects; and that lags behind in the 21st century information economy.  As a result, this is the heartland of populism.  It’s no wonder that the portrait on the wall in Trump’s Oval Office portrays Andrew Jackson.

Now one more map, this time showing where in the country people have been social distancing and where they haven’t, as measured by how much they were traveling away from home (using cell phone data).  It comes from a piece Woodard recently published in Washington Monthly.

Social Distancing Map

Once again, the patterns correspond nicely to the 11 nations.  Here’s how Woodard summarizes the data:

Yankeedom, the Midlands, New Netherland, and the Left Coast show dramatic decreases in movement – 70 to 100 percent in most counties, whether urban or rural, rich or poor.

Across much of Greater Appalachia, the Deep South and the Far West, by contrast, travel fell by only 15 to 50 percent. This was true even in much of Kentucky and the interior counties of Washington and Oregon, where Democratic governors had imposed statewide shelter-in-place orders.

Not surprisingly, most of the states where governors imposed stay-at-home orders by March 27 are located in or dominated by one or a combination of the communitarian nations. This includes states whose governors are Republicans: Ohio, New Hampshire, Vermont, and Massachusetts.

Most of the laggard governors lead states dominated by individualistic nations. In the Deep South and Greater Appalachia you find Florida’s Ron DeSantis, who allowed spring breakers to party on the beaches. There’s Brian Kemp of Georgia who left matters in the hands of local officials for much of the month and then, on April 2, claimed to have just learned the virus can be transmitted by asymptomatic individuals. You have Asa Hutchinson of Arkansas, who on April 7 denied mayors the power to impose local lockdowns. And then there’s Mississippi’s Tate Reeves, who resisted action because “I don’t like government telling private business what they can and cannot do.”

Nothing like a pandemic to show what your civic values are.  Is it all about us or all about me?

Posted in Power, Sociology, Students, Teaching

Willard Waller on the Power Struggle between Teachers and Students

In 1932, Willard Waller published his classic book, The Sociology of Teaching.  For years I used a chapter from it (“The Teacher-Pupil Relationship“) as a way to get students to think about the problem that most frightens rookie teachers and that continues to haunt even the most experienced practitioners:  how to gain and maintain control of the classroom.

The core problems facing you as a teacher in the classroom are these:  students radically outnumber you; they don’t want to be there; and your power to get them to do what you want is sharply limited.  Otherwise, teaching is a piece of cake.

They outnumber you:  Teaching is one of the few professions that are practiced in isolation from other professionals.  Most classrooms are self-contained structures with one teacher and 25 or 30 students, so teachers have to ply their craft behind closed doors without the support of their peers.  You can commiserate with colleagues about your class in the bar after work, but during the school day you are on your own, left to figure out a way to maintain control that works for you.

They’re conscripts:   Most professionals have voluntary clients, who come to them seeking help with a problem: write my will, fix my knee, do my taxes.  Students are not like that.  They’re in the classroom under compulsion.  The law mandates school attendance and so does the job market, since the only way to get a good job is to acquire the right educational credentials.  As a result, as a teacher you have to figure out how to motivate this group of conscripts to follow your lead and learn what you teach.  This poses a huge challenge, to face a room full of students who may be thinking, “Teach me, I dare you.”

Your powers are limited:   You have some implied authority as an adult and some institutional authority as the agent of the school, but the consequences students face for resisting you are relatively weak:  a low grade, a timeout in the back of the room, a referral to the principal, or a call to the parent.  In the long run, resisting school can ruin your future by consigning you to a bad job, low pay, and a shorter life.  And teachers try to use this angle:  Listen up, you’re going to need this some day.  But the long run is not very meaningful to kids, for whom adulthood is a distant fantasy but the reality of life in the classroom is here and now.  As a result, teachers rely on a kind of confidence game, pretending they have more power than they do and trying to keep students from realizing the truth.  You can only issue a few threats before students begin to realize how hollow they are.

One example of the limits of teacher power is something I remember teachers saying when I was in elementary school:  “Don’t let me see you do that again!”  At the time this just meant “Don’t do it,” but now I’ve come to interpret the admonition more literally:  “Don’t let me see you do that again!”  If I see you, I’ll have to call you on it in order to put down your challenge to my authority; but if you do it behind my back, I don’t have to respond and can save my ammunition for a direct threat.

Here’s how Waller sees the problem:

The weightiest social relationship of the teacher is his relationship to his students; it is this relationship which is teaching.  It is around this relationship that the teacher’s personality tends to be organized, and it is in adaptation to the needs of this relationship that the qualities of character which mark the teacher are produced.  The teacher-pupil relationship is a special form of dominance and subordination, a very unstable relationship and in quivering equilibrium, not much supported by sanction and the strong arm of authority, but depending largely upon purely personal ascendancy.  Every teacher is a taskmaster and every taskmaster is a hard man….

Ouch.  He goes on to describe the root of the conflict between teachers and students in the classroom:

The teacher-pupil relationship is a form of institutionalized dominance and subordination. Teacher and pupil confront each other in the school with an original conflict of desires, and however much that conflict may be reduced in amount, or however much it may be hidden, it still remains. The teacher represents the adult group, ever the enemy of the spontaneous life of groups of children. The teacher represents the formal curriculum, and his interest is in imposing that curriculum upon the children in the form of tasks; pupils are much more interested in life in their own world than in the desiccated bits of adult life which teachers have to offer. The teacher represents the established social order in the school, and his interest is in maintaining that order, whereas pupils have only a negative interest in that feudal superstructure.

I’ve always resonated with this depiction of the school curriculum:  “desiccated bits of adult life.”  Why indeed would students develop an appetite for the processed meat that emerges from textbooks?  Why would they be eager to learn the dry as toast knowledge that constitutes the formal curriculum, disconnected from context and bereft of meaning?

Waller Book Cover

An additional insight I gain from Waller is this:  that teaching has a greater impact on teachers than on students.

Conflict is in the role, for the wishes of the teacher and the student are necessarily divergent, and more conflict because the teacher must protect himself from the possible destruction of his authority that might arise from this divergence of motives. Subordination is possible only because the subordinated one is a subordinate with a mere fragment of his personality, while the dominant one participates completely. The subject is a subject only part of the time and with a part of himself, but the king is all king.

What a great insight.  Students can phone it in.  They can pretend to be listening while lost in their own fantasies.  But teachers don’t enjoy this luxury.  They need to be totally immersed in the teacher role, making it a component of self and not a cloak lightly worn.  “The subject is a subject only part of the time and with a part of himself, but the king is all king.”

Here he talks about the resources that teachers and students bring to the struggle for power in the classroom:

Whatever the rules that the teacher lays down, the tendency of the pupils is to empty them of meaning. By mechanization of conformity, by “laughing off” the teacher or hating him out of all existence as a person, by taking refuge in self-initiated activities that are always just beyond the teacher’s reach, students attempt to neutralize teacher control. The teacher, however, is striving to read meaning into the rules and regulations, to make standards really standards, to force students really to conform. This is a battle which is not unequal. The power of the teacher to pass rules is not limited, but his power to enforce rules is, and so is his power to control attitudes toward rules.

He goes on to wrap up this point, repeating it in different forms in order to bring it home.

Teaching makes the teacher. Teaching is a boomerang that never fails to come back to the hand that threw it. Of teaching, too, it is true, perhaps, that it is more blessed to give than to receive, and it also has more effect. Between good teaching and bad there is a great difference where students are concerned, but none in this, that its most pronounced effect is upon the teacher. Teaching does something to those who teach.

I love this stuff, and students who have been teachers often appreciate the way he gives visibility to the visceral struggle for control that they experienced in the classroom.  But for a lot of students, teachers or not, he’s a hard sell.  One complaint is that he’s sexist.  Of course he is.  The teacher is always “he” and the milieu he’s describing has a masculine feel, focused more on power over students than on engagement with them.  But so what?  The power issue in the classroom is as real for female as male teachers.

A related complaint is that the situation he describes is dated; things are different in classrooms now than they were in the 1930s.  The teacher-student relationship today is warmer, more informal, more focused on drawing students into the process of learning than on driving them toward it.  In this context, teachers who exercise power in the classroom can just be seen as bad teachers.  Good teachers take a progressive approach, creating an atmosphere of positive feeling in which students and teachers like each other and interact through exchange rather than dictation.

Much of this is true, I think.  Classrooms are indeed warmer and more informal places than they used to be, as Larry Cuban has pointed out in his work.  But that doesn’t mean that the power struggle has disappeared.  Progressive teachers are engaged in the eternal pedagogical practice of getting students to do what teachers want.  This is an exercise in power, but contemporary teachers are just sneakier about it.  They find ways of motivating student compliance with their wishes through inducement, personal engagement, humor, and fostering affectionate connections with their students.

The most effective use of power is the one that is least visible.  Better to have students feel that what they’re doing in the classroom is the result of their own choice rather than the dictate of the teacher.  But this is still a case of a teacher imposing her will on students, and it’s still true that without imposing her will she won’t be able to teach effectively.  Waller just scrapes off the rose-tinted film of progressive posturing from the window into teaching, so you can see for yourself what’s really at stake in the pedagogical exchange.

It helps to realize that The Sociology of Teaching was used as a textbook for students who were preparing to become teachers.  In it, his voice is that of a grizzled homicide detective lecturing bright-eyed students at the police academy, revealing the true nature of the job they’re embarking on.  David Cohen caught Waller’s vision perfectly in a lovely essay, “Willard Waller, On Hating School and Loving Education,” which I highly recommend.  From his perspective, Waller was a jaded progressive, who pined for schools that were true to the progressive spirit but wanted to warn future teachers about the grim reality that was actually awaiting them.

Waller’s book has been out of print for years, but you can find a scanned version here.  Enjoy.

Posted in Inequality, School organization, Schooling, Sociology

Two Cheers for School Bureaucracy

This post is a piece I wrote for Kappan, published in the March 2020 edition.  Here’s a link to the PDF.

Bureaucracies are often perceived as inflexible, impersonal, hierarchical, and too devoted to rules and red tape. But here I make a case for these characteristics being a positive in the world of public education. U.S. schools are built within a liberal democratic system, where the liberal pursuit of self-interest is often in tension with the democratic pursuit of egalitarianism. In recent years, I argue, schools have tilted toward the liberal side, enabling privileged families to game the system to help their children get ahead. In such a system, an impersonal bureaucracy stands as a check that ensures that the democratic side of schooling, in which all children are treated equally, remains in effect.

 

Cover page from the Kappan magazine version

 

Two Cheers for School Bureaucracy

By David F. Labaree

To call an organization “bureaucratic” has long been taken to mean that it is inflexible, impersonal, hierarchical, and strongly favors a literal rather than substantive interpretation of rules. In the popular imagination, bureaucracies make it difficult to accomplish whatever you want to do, forcing you to wade through a relentless proliferation of red tape.

School bureaucracy is no exception to this rule. Teachers, students, administrators, parents, citizens, reformers, and policymakers have long railed against it as a barrier that stands between them and the kind of schools they want and need. My aim here is to provide a little pushback against this received wisdom by proposing a modest defense of school bureaucracy. My core assertion is this: Bureaucracy may make it hard to change schools for the better, but at the same time it helps keep schools from turning for the worse.

Critiques of bureaucracy

Criticisms of school bureaucracy have taken different forms over the years. When I was in graduate school in the 1970s, the critique came from the left. From that perspective, the bureaucracy was a top-down system in which those at the top (policy makers, administrators) impose their will on the actors at the bottom (teachers, students, parents, and communities). Because the bureaucracy was built within a system that perpetuated inequalities of class, race, and gender, it tended to operate in a way that made sure that White males from the upper classes maintained their position, and that stifled grassroots efforts to bring about change from below. Central critical texts at the time were Class, Bureaucracy, and Schools, published in 1971 by Michael Katz (who was my doctoral advisor at the University of Pennsylvania) and Schooling in Capitalist America, published in 1976 by Samuel Bowles and Herbert Gintis.

By the 1990s, however, attacks on school bureaucracy started to come from the right. Building on the Reagan-era view of government as the problem rather than the solution, critics in the emergent school choice movement began to develop a critique of bureaucracy as a barrier to school effectiveness. The central text then was Politics, Markets, and America’s Schools by John Chubb and Terry Moe (1990), who argued that organizational autonomy was the key factor that made private and religious schools more effective than public schools. Because they didn’t have to follow the rigid rules laid down by the school-district bureaucracy, they were free to adapt to families’ demands for the kind of school that met their children’s needs. To Chubb and Moe, state control of schools inevitably stifles the imagination and will of local educators. According to their analysis, democratic control of schools fosters a bureaucratic structure to make sure all schools adhere to political admonitions from above. They proposed abandoning state control, releasing schools from the tyranny of bureaucracy and politics so they could respond to market pressures from educational consumers.

So the only thing the left and the right agree on is that school bureaucracy is a problem, one that arises from the very nature of bureaucracy itself — an organizational system defined as rule by offices (bureaus) rather than by people. The central function of any bureaucracy is to be a neutral structure that carries the aims of its designers at the top down to the ground level where the action takes place. Each actor in the system plays a role that is defined by their particular job description and aligned with the organization’s overall purpose, and the nature of this role is independent of the individual who fills it. Actors are interchangeable, but the roles remain. The problem arises if you want something from the bureaucracy that it is not programmed to provide. In that case, the organization does indeed come to seem inflexible, impersonal, hierarchical, and rigidly committed to following the rules.

The bureaucracy of schools

Embedded within the structure of the school bureaucracy are the contradictory values of liberal democracy. Liberalism brings a strong commitment to individual liberty, preservation of private property, and a tolerance of the kinds of social inequalities that arise if you leave people to pursue their own interests without state interference. It sees education as a private good (Labaree, 2018). These are the characteristics of school bureaucracy — private interests promoting outcomes that may be unequal — that upset the left. Democracy, on the other hand, brings a strong commitment to political and social equality, in which the citizenry establishes schooling for its collective betterment, and the structure of schooling seeks to provide equal benefits to all students. It sees education as a public good. These are the characteristics — collectivist and egalitarian — that upset the right.

Over the years, I have argued — in books such as How to Succeed in School without Really Learning (1997) and Someone Has to Fail (2012) — that the balance between the liberal and democratic in U.S. schools has tilted sharply toward the liberal. Increasingly, we treat schooling as a private good, whose benefits accrue primarily to the educational consumer who receives the degree. It has become the primary way for people to get ahead in society and a primary way for people who are already ahead to stay that way. It both promotes access and preserves advantage. Families that enjoy a social advantage have become increasingly effective at manipulating the educational system to ensure that their children will enjoy this same advantage. In a liberal democracy, where we are reluctant to constrain individual liberty, privileged parents have been able to game the structure of schooling to provide advantages for their children at the expense of other people’s children. They threaten to turn education into a zero-sum game whose winners get the best jobs.

Gaming the system

So how do upper-middle-class families boost their children’s chances for success in this competition? The first and most obvious step is to buy a house in a district with good schools. Real estate agents know that they’re selling a school system along with a house — I recall an agent once telling me not to consider a house on the other side of the street because it was in the wrong district — and the demand in areas with the best schools drives up housing prices. If you can’t move to such a district, you enter the lottery to gain access to the best schools of choice in town. Failing that, you send your children to private schools. Then, once you’ve placed them in a good school, you work to give your children an edge within that school. You already have a big advantage if you are highly educated and thus able to pass on to your children the cultural capital that constitutes the core of what schools teach and value. If students come to school already adept at the verbal and cognitive and behavioral skills that schools seek to instill, then they have a leg up over students who must rely on the school alone to teach them these skills.

In addition, privileged parents have a wealth of experience at doing school at the highest levels, and they use this social capital to game the system in favor of their kids: You work to get your children into the class of the best available teacher, then push to get them into the top reading group and the gifted and talented program. When they get to high school, you steer them into the top academic track and the most advanced placement classes, while also rounding out their college admissions portfolios with an impressive array of extracurricular activities and volunteer work. Then comes the race to get into the best college (meaning the one with the most selective admissions), using an array of techniques including the college tour, private admissions counselors, test prep tutoring, legacies, social networks, and strategic donations. Ideally, you save hundreds of thousands of dollars by securing this elite education within the public system. But whether you send your kids to public or private school, you seek out every conceivable way to mark them as smarter and more accomplished and more college-admissible than their classmates.

At first glance, these frantic efforts by upper-middle-class parents to work the system for the benefit of their children can seem comically overwrought. Children from economically successful and highly educated families do better in school and in life than other children precisely because of the economic, cultural, and social advantages they have from birth. So why all the fuss about getting kids into the best college instead of one of the best colleges? The fix is in, and it’s in their favor, so relax.

The anxiety about college admissions among these families is not irrational (see, for example, Doepke & Zilibotti, 2019). It arises from two characteristics of the system. First, in modern societies social position is largely determined by educational attainment rather than birth. Your parents may be doctors, but they can’t pass the family business on to you. Instead, you must trace the same kind of stellar path through the educational system that your parents did. This leads to the second problem. If you’re born at the top of the system, the only mobility available to you is downward. And because jobs are allocated according to educational attainment, there are always smart and motivated poor kids who may win the academic contest instead of you, since you may not be as smart or as motivated as they are. There’s a real chance that you will end up at a lower social position than your parents, so your parents feel pressure to leave no stone unturned in the effort to give you an educational edge.

The bureaucracy barrier

Here is where bureaucracy enters the scene, as it can create barriers to the most affluent parents’ efforts to guarantee the success of their children. The school system, as a bureaucracy established in part with the egalitarian values of its democratic control structure, just doesn’t think your children are all that special. This is precisely the problem Chubb and Moe and other choice supporters have identified.

When we’re talking about a bureaucracy, roles are roles and rules are rules. The role of the teacher is to serve all students in the class and not just yours. School rules apply to everyone, so you can’t always be the exception. Get over it. At one level, your children are just part of the crowd of students in their school, subject to the same policies and procedures and educational experiences as all of the others. By and large, privileged parents don’t want to hear that.

So school bureaucracies sometimes succeed in rolling back a few of the structures that privilege upper-middle-class students.  They seek to eliminate ability grouping in favor of cooperative learning, to abandon gifted programs for the few in favor of extending the pedagogies of these programs to the many, and to reduce high school tracking by creating heterogeneous classrooms.

Of course, this doesn’t mean that the bureaucracy always or even usually wins out in the competition with parents seeking special treatment for their children.  Parents often succeed in fighting off efforts to eliminate ability groups, tracks, gifted programs, and other threats.  Private interests are relentless in trying to obtain private schooling at public expense, and every impediment to getting their way infuriates parents lobbying for privilege.

For these parents, the school bureaucracy becomes the enemy, which you need to bypass, suborn, or overrule in your effort to turn school to the benefit of your children. At the same time, this same bureaucracy becomes the friend and protector of the democratic side of liberal democratic schooling. Without it, empowered families would proceed unimpeded in their quest to make schooling a purely private good. So two cheers for bureaucracy.

References

Bowles, S. & Gintis, H. (1976). Schooling in capitalist America. New York, NY: Basic Books.

Chubb, J. & Moe, T. (1990). Politics, markets, and America’s schools. Washington, DC: Brookings.

Doepke, M. & Zilibotti, F. (2019). The economic roots of helicopter parenting. Phi Delta Kappan, 100 (7), 22-27.

Katz, M. (1971). Class, bureaucracy, and schools. New York, NY: Praeger.

Labaree, D.L. (1997). How to succeed in school without really learning. New Haven, CT: Yale University Press.

Labaree, D.L. (2010). Someone has to fail. Cambridge, MA: Harvard University Press.

Labaree, D.L. (2018). Public schools for private gain: The declining American commitment to serving the public good. Phi Delta Kappan, 100 (3), 8-13.

AUTHOR

DAVID F. LABAREE (dlabaree@stanford.edu; @DLabaree) is Lee L. Jacks Professor of Education, emeritus, at the Stanford University Graduate School of Education in Palo Alto, CA. He is the author, most recently, of A Perfect Mess: The Unlikely Ascendancy of American Higher Education (University of Chicago Press, 2017).

 

ABSTRACT

Bureaucracies are often perceived as inflexible, impersonal, hierarchical, and too devoted to rules and red tape. But David Labaree makes a case for these characteristics being a positive in the world of public education. U.S. schools are built within a liberal democratic system, where the liberal pursuit of self-interest is often in tension with the democratic pursuit of egalitarianism. In recent years, Labaree argues, schools have tilted toward the liberal side, enabling privileged families to game the system to help their children get ahead. In such a system, an impersonal bureaucracy stands as a check that ensures that the democratic side of schooling, in which all children are treated equally, remains in effect.


Posted in Family, Meritocracy, Modernity, Schooling, Sociology, Teaching

What Schools Can Do that Families Can’t: Robert Dreeben’s Analysis

In this post, I explore a key issue in understanding the social role that schools play:  Why do we need schools anyway?  For thousands of years, children grew up learning the skills, knowledge, and values they would need in order to be fully functioning adults.  They didn’t need schools to accomplish this.  The family, the tribe, the apprenticeship, and the church were sufficient to provide them with this kind of acculturation.  Keep in mind that education is ancient but universal public schooling is a quite recent invention, which arose about 200 years ago as part of the creation of modernity.

Here I focus on a comparison between family and school as institutions for social learning.  In particular, I examine what social ends schools can accomplish that families can’t.  I’m drawing on a classic analysis by Robert Dreeben in his 1968 book, On What Is Learned in School.  Dreeben is a sociologist in the structural functionalist tradition who was a student of Talcott Parsons.  His book demonstrates the strengths of functionalism in helping us understand schooling as a critically important mechanism for societies to survive in competition with other societies in the modern era.  The section I’m focusing on here is chapter six, “The Contribution of Schooling to the Learning of Norms: Independence, Achievement, Universalism, and Specificity.”   I strongly recommend that you read the original, using the preceding link.  My discussion is merely a commentary on his text.


I’m drawing on a set of slides I used when I taught this chapter in class.

This is structural functionalism at its best:

      • The structure of schooling teaches students values that modern societies require; the structure functions even if that outcome is unintended

He examines the social functions of the school compared with the family

      • Not the explicit learning that goes on in school – the subject matter, the curriculum (English, math, science, social studies)

      • Instead he looks at the social norms you learn in school

He’s not focusing on the explicit teaching that goes on in school – the formal curriculum

      • Instead he focuses on what the structure of the school setting teaches students – vs. what the structure of the family teaches children

      • The emphasis, therefore, is on the differences in social structure of the two settings

      • What can and can’t be learned in each setting?

Families and schools are parallel in several important ways

      • Socialization: they teach the young

        • Both provide the young with skills, knowledge, values, and norms

        • Both use explicit and implicit teaching

      • Selection: they set the young on a particular social trajectory in the social hierarchy

        • Both provide them with social means to attain a particular social position

        • School: via grades, credits and degrees

        • Families: via economic, social, and cultural capital

The difference between family and school boils down to preparing the young for two very different kinds of social relationships

      • Primary relationships, which families model as the relations between parent and child and between siblings

      • Secondary relationships, which schools model as the relations between teacher and student and between students

Each setting prepares children to take on a distinctive kind of relationship

Dreeben argues that schools teach students four norms that are central to the effective functioning of modern societies:  independence, achievement, universalism, and specificity.  These are central to the kinds of roles we play in public life, which sociologists call secondary roles, roles that are institutionally structured in relation to other secondary roles, such as employee-employer, customer-clerk, bus rider-bus driver, teacher-student.  The norms that define proper behavior in secondary roles differ strikingly from the norms for another set of relationships defined as primary roles.  These are the intimate relationships we have with our closest friends and family members.  One difference is that we play a large number of secondary roles in order to function in complex modern societies but only a small number of primary roles.  Another is that secondary roles are strictly utilitarian, means to practical ends, whereas primary roles are ends in themselves.  A third is that secondary role relationships are narrowly defined; you don’t need or want to know much about the salesperson in the store in order to make your purchase.  Primary relationships are quite diffuse, requiring deeper involvement — friends vs. acquaintances.

As a result, each of the four norms that schools teach, which are essential for maintaining secondary role relationships, corresponds to an equal and opposite norm that is essential for maintaining primary role relationships.  Modern social life requires expertise at moving back and forth effortlessly between these different kinds of roles and the contrasting norms they require of us.  We have to be good at maintaining both our work relations and our personal relations and at knowing which norms apply to which setting.

Secondary roles (work, public, school) vs. primary roles (family, friends):

      • Independence vs. group orientation

      • Achievement vs. ascription

      • Universalism vs. particularism

      • Specificity vs. diffuseness

Here is what’s involved in each of these contrasting norms:

Independence vs. group orientation:

      • Self-reliance vs. dependence on the group

      • Individualism vs. group membership

      • Individual effort vs. collective effort

      • Acting on your own vs. needing and owing group support

Achievement vs. ascription:

      • Status based on what you do vs. status based on who you are

      • Active vs. passive

      • Earned vs. inherited

      • Meritocracy vs. aristocracy

Universalism vs. particularism:

      • Equality within a category (a 5th grade student) vs. personal uniqueness (my child)

      • General rules that apply to all vs. different rules for us and them

      • Central to fairness and justice vs. central to being special

Specificity vs. diffuseness:

      • Narrow relations vs. broad relations

      • Extrinsic relations vs. intrinsic relations

      • A means to an end vs. an end in itself

Think about how the structure of the school differs from the structure of the family and what the consequences of these differences are.

Family vs. School:

Structure of the school (vs. structure of the family)

      • Teacher and student are both achieved roles (ascribed roles)

      • Large number of kids per adult (few)

      • No particularistic ties between teacher and students (blood ties)

      • Teachers deal with the class as a group (families as individuals based on sex and birth order)

      • Teacher and student are universalistic roles, with individuals being interchangeable in these roles (family roles are unique to that family and not interchangeable)

      • Relationship is short term, especially as you move up the grades (relations are lifelong)

      • Teachers and students are subject to objective evaluation (families use subjective, emotional criteria)

      • Teachers and students both see their roles as means to an end (family relations are supposed to be selfless, ends in themselves)

      • Students are all the same age (in family birth order is central)

  Consider the modes of differentiation and stratification in families vs. schools.

Children in families:

Race, class, ethnicity, and religion are all the same

Age and gender are different

Children in schools:

Age is the same

Race, class, ethnicity, religion, and gender are different

This allows for meritocratic evaluation, fostering the learning of achievement and independence

Questions

Do you agree that the characteristics of the school as a social structure make it effective at transmitting secondary social norms, preparing students for secondary roles?

Do you agree that the characteristics of the family as a social structure make it ineffective at transmitting secondary norms, preparing children for secondary roles?

But consider this complication to the story

Are schools, workplaces, public interactions fully in tune with the secondary model?

Are families, friends fully in tune with the primary model?

How do these two intermingle?  Why?

      • Having friends at work and school makes life nicer – and also makes you work more efficiently

      • Getting students to like you makes you a more effective teacher

      • But the norm for a professional or occupational relationship is secondary – that’s how you define a good teacher, lawyer, worker

      • The norm for primary relations is that they are ends in themselves not means to an end

      • Family members may use each other for personal gain, but that is not considered the right way to behave

Posted in History, Sociology, War

War! What Is It Good For?

This post is an overview of the 2014 book by Stanford classicist Ian Morris, War! What Is It Good For?  In it he makes the counter-intuitive argument that over time some forms of war have been socially productive.  In contrast with the message of the 1970s song of the same name, war may in fact be good for something.

The central story is this.  Some wars lead to the incorporation of large numbers of people under a single imperial state.  In the short run, this is devastatingly destructive; but in the long run it can be quite beneficial.  Under such regimes (e.g., the early Roman and Chinese empires and the more recent British and American empires), the state imposes a new order that sharply reduces rates of violent death and fosters economic development.  The result is an environment that allows the population to live longer and grow wealthier, not just in the imperial heartland but also in the newly colonized territories.

So how does this work?  He starts with a key distinction made by Mancur Olson.  All states are a form of banditry, Olson says, since they extract revenue by force.  Some are roving bandits, who sweep into town, sack the place, and then move on.  But others are stationary bandits, who are stuck in place.  In this situation, the state needs to develop a way to gain the greatest revenue from its territory over the long haul, which means establishing order and promoting economic development.  It has an incentive to foster the safety and productivity of its population.

Rulers steal from their people too, Olson recognized, but the big difference between Leviathan and the rape-and-pillage kind of bandit is that rulers are stationary bandits. Instead of stealing everything and hightailing it, they stick around. Not only is it in their interest to avoid the mistake of squeezing every last drop from the community; it is also in their interest to do whatever they can to promote their subjects’ prosperity so there will be more for the rulers to take later.

This argument is an extension of the one that Thomas Hobbes made in Leviathan:

Whatsoever therefore is consequent to a time of war where every man is enemy to every man, the same is consequent to the time wherein men live without other security than what their own strength and their own invention shall furnish them withal. In such condition there is no place for industry, because the fruit thereof is uncertain, and consequently no culture of the earth, no navigation nor use of the commodities that may be imported by sea, no commodious building, no instruments of moving and removing such things as require much force, no knowledge of the face of the earth; no account of time, no arts, no letters, no society, and, which is worst of all, continual fear and danger of violent death, and the life of man solitary, poor, nasty, brutish, and short.

(Wow, that boy could write.)

Morris says that stationary bandit states first arose with the emergence of agriculture, when tribes found that staying in place and tending their crops could support a larger population than roving across the landscape hunting and gathering.  This leads to what he calls caging.  People can’t easily move and the state has an incentive to protect them from marauders so it can harvest the surplus from this population for its own benefit.

Over time, these states have reduced violence to an extraordinary extent, reining in “the continual fear and danger of violent death.”

Averaged across the planet, violence killed about 1 person in every 4,375 in 2012, implying that just 0.7 percent of the people alive today will die violently, as against 1–2 percent of the people who lived in the twentieth century, 2–5 percent in the ancient empires, 5–10 percent in Eurasia in the age of steppe migrations, and a terrifying 10–20 percent in the Stone Age.

In the process, states found that they prospered most when they relaxed direct control of the economy and allowed markets to develop according to their own dynamic.  This created a paradoxical relationship between state and economy.

Markets could not work well unless governments got out of them, but markets could not work at all unless governments got into them, using force to pacify the world and keep the Beast at bay. Violence and commerce were two sides of the same coin, because the invisible hand needed an invisible fist to smooth the way before it could work its magic.

Empires, of course, don’t last forever.  At a certain point, hegemony yields to outside threats.  One chronic source of threat in Eurasian history was the roving bandit states of the Steppes that did in Rome and constantly harried China.  Another threat is the rise of a new hegemon.  The British global empire of the 18th and 19th centuries fostered the emergence of the United States, which became the empire of the late 20th and early 21st centuries, and this in turn fostered the development of China.

And there can be long periods of time between empires, when wars are largely unproductive.  After the fall of Rome, Europe experienced nearly a millennium of unproductive wars, as small states competed for dominance without anyone ever actually attaining it, a condition he calls “feudal anarchy.”  The result was a sharp increase in violence and a sharp decline in the standard of living.  It wasn’t until the 16th century that Europe regained the per capita income enjoyed by Romans.

It seems to me, in fact, that “feudal anarchy” is an excellent description not just of western Europe between about 900 and 1400 but also of most of Eurasia’s lucky latitudes in the same period. From England to Japan, societies staggered toward feudal anarchy as their Leviathans dismembered themselves.

But 1400 saw the beginning of the 500-year war in which Europe strove mightily to dominate the world, finally producing the imperium of the British and then the Americans.

Morris’s conclusion from this extensive analysis is disturbing but also compelling:

The answer to the question in this book’s title is both paradoxical and horrible. War has been good for making humanity safer and richer, but it has done so through mass murder. But because war has been good for something, we must recognize that all this misery and death was not in vain. Given a choice of how to get from the poor, violent Stone Age to … peace and prosperity…, few of us, I am sure, would want war to be the way, but evolution—which is what human history is—is not driven by what we want. In the end, the only thing that matters is the grim logic of the game of death.

…while war is the worst imaginable way to create larger, more peaceful societies, it is pretty much the only way humans have found.

One way to test the validity of Morris’s argument in this book is to compare it to the analysis by his Stanford colleague, Walter Scheidel, in his latest book, Escape from Rome, which I reviewed here two weeks ago.  Scheidel argues that the fall of Rome, and the failure of any new empire to replace it for most of the next millennium, is the reason that Europe made the turn toward modernity before any other region of the world.  In Scheidel’s view, what Morris calls feudal anarchy, which shortened lifespans and fostered poverty for so long and for so many people, was the key spur to economic, social, technological, political, and military innovation — as competing states desperately sought to survive in the war of all against all.

Empires may keep the peace and promote commerce, but they also emphasize the preservation of power over the development of science and the invention of new technologies.  This is why the key engines of modernization in early modern Europe were not the large countries in the center — France and Spain — but the small countries on the margins, England and the Netherlands.

For most people, enjoying relative peace and prosperity within an empire is a lot better than the alternative.  But for the future global population as a whole, the greatest benefit may come from a sustained competition among warring states, which spurs the breakthrough innovations that have produced history’s most dramatic advances in peace and prosperity.  In this sense, even the unproductive wars of the feudal period may have been productive in the long run.  Once again, war was the answer.

Posted in Credentialing, Curriculum, Meritocracy, Sociology, Systems of Schooling

Mary Metz: Real School

This blog post is a tribute to the classic paper by Mary Metz, “Real School.”  In it she shows how schools follow a cultural script that demonstrates all of the characteristics we want to see in a school.  The argument, in line with neo-institutional theory (see this example by Meyer and Rowan), is that schools are organized around meeting our cultural expectations for the form that schools should take more than around producing particular outcomes.  Following the script keeps us reassured that the school we are associated with — as a parent, student, teacher, administrator, taxpayer, political leader, etc. — is indeed a real school.  It follows that the less effective a school is at producing desirable social outcomes — high scores, graduation rates, college attendance, future social position — the more closely we want it to follow the script.  It’s a lousy high school but it still has an advanced placement program, a football team, a debate team, and a senior prom.  So it’s a real high school.

Here’s the citation and a link to a PDF of the original article:

Metz, Mary H. (1990). Real school: A universal drama amid disparate experience. In Douglas E. Mitchell & Margaret E. Goertz (Eds.), Education Politics for the New Century (pp. 75-91). New York: Falmer.

And here’s a summary of some of its key points.

Roots of real school: the need for reassurance

  • We’re willing to settle for formal over substantive equity in schooling

  • The system provides formal equivalence across school settings, to reassure everyone that all kids get the same educational opportunity

  • Even though this is obviously not the case — as evidenced by the way parents are so careful where they send their kids, where they buy a house

  • What’s at stake is institutional legitimacy

  • Teachers, administrators, parents, citizens all want reassurance that their school is a real school

  • If not, then I’m not a real teacher, a real student, so what are we doing here?

This arises from the need for schools to balance conflicting outcomes within the same institution — schools need to provide both access and advantage, both equality and inequality

  • We want it both ways with our schools: we’re all equal, but I’m better than you

  • Both qualities are important for the social functions and public legitimacy of the social system

  • This means that school, on the face of it, needs to give everyone a fair shot

  • But it also means that school, in practice, needs to sort the winners from the losers

  • And winning only has meaning if it appears to be the result of individual merit

  • But who wants to leave this up to chance for their own children?

  • So parents use every tool they’ve got to game the system and get their children a leg up in the competition

  • And upper-middle-class parents have a lot of such tools — cultural capital, social capital, and economic capital

  • Yet they still need the formal equality of schooling as cover for this quest for advantage

So why is it, as Metz shows, that schools that are least effective in producing student learning are the most diligent in doing real school?

  • Teachers and parents in these schools rarely demand the abandonment of real school — a failed model — in favor of something radically different

  • To the contrary, they demand even closer alignment with the real school model

  • They do so because they need to maintain confidence in the system

  • More successful schools can stay a little farther from the script, because parents are more confident they will produce the right outcomes for their kids

  • Education is a confidence game – in both senses of the word: an effort to maintain confidence and an effort to con the consumer

The magic of school formalism

  • Formalism is central to the system and its effectiveness as a place to provide access and advantage at the same time

  • So you focus on structure and form and process more than on substantive learning

  • Meyer and Rowan’s formalistic definition of a school:

    • “A school is an accredited institution where a certified teacher teaches a sanctioned curriculum to a matriculated student who then receives an authorized diploma.”

  • Students can make progress and graduate even if they’re not learning much

  • It helps that the quality of schooling is less visible than the quantity

Enjoy.


Posted in History of education, Meritocracy, Sociology, Systems of Schooling, Teaching

Pluck vs. Luck

This post is a piece I recently published in Aeon.  Here’s the link to the original.  I wrote this after years of futile efforts to get Stanford students to think critically about how they got to their current location at the top of the meritocracy.  It was nearly impossible to get students to consider that their path to Palo Alto might have been the result of anything but smarts and hard work.  Luck of birth never seemed to be a major factor in the stories they told about how they got here.  I can understand this, since I’ve spent a lifetime patting myself on the back for my own academic accomplishments, feeling sorry for the poor bastards who didn’t have what it took to climb the academic ladder.

But in recent years, I have come to spend a lot of time thinking critically about the nature of the American meritocracy.  I’ve published a few pieces here on the subject, in which I explore the way in which this process of allocating status through academic achievement constitutes a nearly perfect system for reproducing social inequality — protected by a solid cover of legitimacy.  The story it tells to everyone in society, winners and losers alike, is that you got what you deserved.

So I started telling students my own story about how I got to Stanford — in two contrasting versions.  One is a traditional account of climbing the ladder through skill and grit, a story of merit rewarded.  The other is a more realistic account of getting ahead by leveraging family advantage, a story of having the right parents.

See what you think.

Pluck vs. Luck

David F. Labaree

Occupants of the American meritocracy are accustomed to telling stirring stories about their lives. The standard one is a comforting tale about grit in the face of adversity – overcoming obstacles, honing skills, working hard – which then inevitably affords entry to the Promised Land. Once you have established yourself in the upper reaches of the occupational pyramid, this story of virtue rewarded rolls easily off the tongue. It makes you feel good (I got what I deserved) and it reassures others (the system really works).

But you can also tell a different story, which is more about luck than pluck, and whose driving forces are less your own skill and motivation, and more the happy circumstances you emerged from and the accommodating structure you traversed.

As an example, here I’ll tell my own story about my career negotiating the hierarchy in the highly stratified system of higher education in the United States. I ended up in a cushy job as a professor at Stanford University. How did I get there? I tell the story both ways: one about pluck, the other about luck. One has the advantage of making me more comfortable. The other has the advantage of being more true.

I was born to a middle-class family and grew up in Philadelphia in the 1950s. As a skinny, shy kid who wasn’t good at sports, my early life revolved about being a good student. In upper elementary school, I became president of the student council and captain of the safety patrol (an office that conferred a cool red badge that I wore with pride). In high school, I continued to be the model student, eventually getting elected president of the student council (see a pattern here?) and graduating in 1965 near the top of my class. I was accepted at Harvard University with enough advanced-placement credits to skip freshman year (which, fortunately, I didn’t). There I majored in antiwar politics. Those were the days when an activist organisation such as Students for a Democratic Society was a big factor on campuses. I went to two of their annual conventions and wrote inflammatory screeds about Harvard’s elitism (who knew).

In 1970, I graduated with a degree in sociology and no job prospects. What do you do with a sociology degree, anyway? It didn’t help that the job market was in the doldrums. I eventually ended up back in Philadelphia with a job at the Federal Reserve Bank – first in public relations (leading school groups on tours) and then in bank relations (visiting banks around the Third Federal Reserve District). From student radical with a penchant for Marxist sociology, I suddenly became a banker wearing a suit every day and reading The Wall Street Journal. It got me out of the house and into my own apartment but it was not for me. Labarees don’t do finance.

After four years, I quit in disgust, briefly became a reporter at a suburban newspaper, hated that too, and then stumbled by accident into academic work. Looking for any old kind of work in the want ads in my old paper, I spotted an opening at Bucks County Community College, where I applied for three different positions – admissions officer, writing instructor, and sociology instructor. I got hired in the last of these roles, and the rest is history. I liked the work but realised that I needed a master’s degree to get a full-time job, so I entered the University of Pennsylvania sociology department. Once in the programme, I decided to continue on to get a PhD, supporting myself by teaching at the community college, Trenton State, and at Penn.

In 1981, as I was nearing the end of my dissertation, I started applying for faculty positions. Little did I know that the job market was lousy and that I would be continually applying for positions for the next four years.

As someone who started at the bottom, I can tell you that everything is better at the top

The first year yielded one job offer, at a place so depressing that I decided to stay in Philadelphia and continue teaching as an adjunct. That spring I got a one-year position in sociology at Georgetown University in Washington, DC. In the fall, with the clock ticking, I applied to 60 jobs around the country. This time, my search yielded four interviews, all tenure-track positions – at Yale University, at Georgetown, at the University of Cincinnati and at Widener University.

The only offer I got was the one I didn’t want, Widener – a small, non-selective private school in the Philadelphia suburbs that until the 1960s had been a military college. Three years past degree, I felt I had hit bottom in the meritocracy. The moment I got there, I started applying for jobs while desperately trying to write my way into a better one. I published a couple of journal articles and submitted a book proposal to Yale University Press. They hadn’t hired me but maybe they’d publish me.

Finally, a lifeline came my way. A colleague at the College of Education at Michigan State University encouraged me to apply for a position in history of education and I got the job. In the fall of 1985, I started as an assistant professor in the Department of Teacher Education at MSU. Fifteen years after college and four years after starting to look for faculty positions, my career in higher education finally took a big jump upward.

MSU was a wonderful place to work and to advance an academic career. I taught there for 18 years, moving through the ranks to full professor, and publishing three books and 20 articles and book chapters. Early on, I won two national awards for my first book and a university teaching award, and was later elected president of the History of Education Society and vice-president of the American Educational Research Association.

Then in 2002 came an opportunity to apply for a position in education at one of the world’s great universities, Stanford. It worked out, and I started there as a professor in 2003 in the School of Education, and stayed until retirement in 2018. I served in several administrative roles including associate dean, and was given an endowed chair. How cool.

As someone who started at the bottom of the hierarchy of US higher education, I can tell you that everything is better at the top. Everything: pay, teaching loads, intellectual culture, quality of faculty and students, physical surroundings, staff support, travel funds, perks. Even the weather is better. Making it in the meritocracy is as good as it gets. No matter how hard things go at first, talent will win out. Virtue earns its reward. Life is fair.

Of course, there’s also another story, one that’s less heartening but more realistic. A story that’s more about luck than pluck, and that features structural circumstances more than heroic personal struggle. So let me now tell that version.

Professor Robert M Labaree of Lincoln University in southeast Pennsylvania, the author’s grandfather. Photo courtesy of the author

The short story is that I’m in the family business. In the 1920s, my parents grew up as next-door neighbours on a university campus where their fathers were both professors. It was Lincoln University, a historically black institution in southeast Pennsylvania near the Mason-Dixon line. The students were black, the faculty white – most of the latter, like my grandfathers, were clergymen. The students were well-off financially, coming from the black bourgeoisie, whereas the highly educated faculty lived in the genteel poverty of university housing. It was a kind of cultural missionary setting, but more comfortable than the foreign missions. One grandfather had served as a missionary in Iran, where my father was born; that was hardship duty. But here was a place where upper-middle-class whites could do good and do well at the same time.

Both grandfathers were Presbyterian ministers, each descended from long lines of Presbyterian ministers. The Presbyterian clergy developed a well-earned reputation over the years of having modest middle-class economic capital and large stores of social and cultural capital. Relatively poor in money, they were rich in social authority and higher learning. In this tradition, education is everything. In part because of that, some ended up in US higher education, where in the 19th century most of the faculty were clergy (because they were well-educated men and worked for peanuts). My grandfather’s grandfather, Benjamin Labaree, was president of Middlebury College in the 1840s and ’50s. Two of my father’s cousins were professors; my brother is a professor. It’s the family business.

Rev Benjamin Labaree, who was president of Middlebury College, 1840-1866, and the author’s great-great-grandfather. Photo courtesy of the author

Like many retirees, I recently started to dabble in genealogy. Using Ancestry.com, I’ve traced back 10 or 12 generations on both sides of the family, some back to the 1400s, finding ancestors in the US, Scotland, England and France. They are all relentlessly upper-middle-class – mostly ministers, but also some physicians and other professionals. Not a peasant in the bunch, and no one in business. I’m to the manor born (well, really the manse). The most distant Labaree I’ve found is Jacques Laborie, born in 1668 in the village of Cardaillac in France. He served as a surgeon in the army of Louis XIV and then became ordained as a Calvinist minister in Zurich before Louis in 1685 expelled the reformed Protestants (Huguenots) from France. He moved to England, where he married another Huguenot, and then immigrated to Connecticut. Among his descendants were at least four generations of Presbyterian ministers, including two college professors. This is a good start for someone like me, seeking to climb the hierarchy of higher education – like being born on third base. But how did it work out in practice for my career?

I was the model Harvard student – a white, upper-middle-class male from an elite school

My parents both attended elite colleges, Princeton University and Wilson College (on ministerial scholarships), and they invested heavily in their children’s education. They sent us to a private high school and private colleges. It was a sacrifice to do this, but they thought it was worth it. Compared with our next-door neighbours, we lived modestly – driving an old station wagon instead of a new Cadillac – but we took pride in our cultural superiority. Labarees didn’t work in trade. Having blown their money on schooling and lived too long, my parents died broke. They were neither the first nor the last victims of the meritocracy, who gave their all so that their children could succeed.

This background gave me a huge edge in cultural and social capital. In my high school’s small and high-quality classrooms, I got a great education and learned how to write. The school traditionally sent its top five students every year to Princeton but I decided on Harvard instead. At the time, I was the model Harvard student – a white, upper-middle-class male from an elite school. No females and almost no minorities.

At Harvard, I distinguished myself in political activity rather than scholarship. I avoided seminars and honours programmes, where it was harder to hide and standards were higher. After the first year, I almost never attended discussion sections, and skipped the majority of the lectures as well, muddling through by doing the reading, and writing a good-enough paper or exam. I phoned it in. When I graduated, I had an underwhelming transcript, with a 2.5 grade-point average (B-/C+). Not exactly an ideal candidate for graduate study, one would think.

And then there was that job at the bank, which got me out of the house and kept me fed and clothed until I finally recognised my family calling by going to grad school. After beating the bushes looking for work up and down the west coast, how did I get this job? Turned out that my father used to play in a string quartet with a guy who later became the vice-president for personnel at the Federal Reserve Bank. My father called, the friend said come down for an interview. I did and I got the job.

When I finally decided to pursue grad school, I took the Graduate Record Examinations and scored high. Great. The trouble is that an applicant with high scores and low grades is problematic, since this combination suggests high ability and bad attitude. But somehow I got into an elite graduate programme (though Princeton turned me down). Why? Because I went to Harvard, so who cares about the grades? It’s a brand that opens doors. Take my application to teach at the community college. Why hire someone with no graduate degree and a mediocre undergraduate transcript to teach college students? It turns out that the department chair who hired me also went to Harvard. Members of the club take care of each other.

If you have the right academic credentials, you get the benefit of the doubt. The meritocracy is quite forgiving toward its own. You get plenty of second and third chances where others would not. Picture if I had applied to Penn with the same grades and scores but with a degree from West Chester (state) University instead of Harvard. Would I really have had a chance? You can blow off your studies without consequence if you do it at the right school. Would I have been hired to teach at the community college with an off-brand BA? I think not.

And let’s reconsider my experience at Widener. For me – an upper-middle-class professor with two Ivy League degrees and generations of cultural capital – these students were a world apart. Of course, so were the community-college students I taught earlier, but they were taking courses on weekends while holding a job. That felt more like teaching night school than teaching college. At Widener, however, they were full-time students at a place that called itself a university, but to me this wasn’t a real university where I could be a real professor. Looking around the campus with the eye of a born-and-bred snob, I decided quickly that these were not my people. Most were the first in their families to be going to college and did not have the benefit of a strong high-school education.

In order to make it in academe, you need friends in high places. I had them

A student complained to me one day after she got back her exam that she’d received a worse grade than her friend who didn’t study nearly as hard. That’s not fair, she said. I shrugged it off at the time. Her answer to the essay exam question was simply not as good. But looking back, I realised that I was grading my students on skills I wasn’t teaching them. I assigned multiple readings and then gave take-home exams, which required students to weave together a synthesis of these readings in an essay that responded to a broad analytical question. That’s the kind of exam I was used to, but it required a set of analytical and writing skills that I assumed rather than provided. You can do well on a multiple-choice exam if you study the appropriate textbook chapters; the more time you invest, the higher the grade. That might not be a great way to learn, but it’s a system that rewards effort. My exams, however, rewarded discursive fluency and verbal glibness over diligent study. Instead of trying to figure out how to give these students the cultural capital they needed, I chose to move on to a place where students already had these skills. Much more comfortable.

Oh yes, and what about that first book, the one that won awards, gained me tenure, and launched my career? Well, my advisor at Penn, Michael Katz, had published a book with an editor at Praeger, Gladys Topkis, who then ended up at Yale University Press. With his endorsement, I sent her a proposal for a book based on my dissertation. She gave me a contract. When I submitted the manuscript, a reviewer recommended against publication, but she convinced the editorial board to approve it anyway. Without my advisor, no editor. And without the editor, no book, no awards, no tenure, and no career. It’s as simple as that. In order to make it in academe, you need friends in high places. I had them.

All of this, plus two more books at Yale, helped me make the move up to Stanford. Never would have happened otherwise. By then, on paper I began to look like a golden boy, checking all the right boxes for an elite institution. And when I announced that I was making the move to Stanford in the spring of 2003, before I even assumed the role, things started changing in my life. Suddenly, it seemed, I got a lot smarter. People wanted me to come give a lecture, join an editorial board, contribute to a book, chair a committee. An old friend, a professor in Sweden, invited me to become a visiting professor in his university. Slightly embarrassed, he admitted that this was because of my new label as a Stanford professor. Swedes know only a few universities in the US, he said, and Stanford is one of them. Like others who find a spot near the top of the meritocracy, I was quite willing to accept this honour, without worrying too much about whether it was justified. Like the pay and perks, it just seemed exactly what I deserved. Special people get special benefits; it only makes sense.

And speaking of special benefits, it certainly didn’t hurt that I am a white male – a category that dominates the professoriate, especially at the upper levels. Among full-time faculty members in US degree-granting institutions, 72 per cent of assistant professors and 81 per cent of full professors are white; meanwhile, 47 per cent of assistants and 66 per cent of professors are male. At the elite level, the numbers are even more skewed. At Stanford, whites make up 54 per cent of tenure-line assistant professors but 82 per cent of professors; under-represented minorities account for only 8 per cent of assistants and 5 per cent of professors. Meanwhile, males constitute 60 per cent of assistants and 78 per cent of professors. In US higher education, white males still rule.

Oh, and what about my endowed chair? Well, it turns out that when the holder of the chair retires, the honour moves on to someone else. I inherited the title in 2017 and held it for a year and a half before I retired and it passed on to the next person. What came with the title? Nothing substantial, no additional salary or research funds. Except I did get one material benefit from this experience, which I was allowed to keep when I gave up the title. It’s an uncomfortable, black, wooden armchair bearing the school seal. Mine came with a brass plaque on the back proclaiming: ‘Professor David Labaree, The Lee L Jacks Professor in Education’.

Now, as I fade into retirement, still enjoying the glow from my emeritus status at a brand-name university, it all feels right. I’ve got money to live on, a great support community, and status galore. I get to display my badges of merit for all to see – the Stanford logo on my jacket, and the Jacks emeritus title in my email signature. What’s not to like? The question about whether I deserve it or not fades into the background, crowded out by all the benefits. Enjoy. The sun’s always shining at the summit of the meritocracy.

Is there a moral to be drawn from these two stories of life in the meritocracy? The most obvious one is that this life is not fair. The fix is in. Children of parents who have already succeeded in the meritocracy have a big advantage over other children whose parents have not. They know how the game is played, and they have the cultural capital, the connections and the money to increase their children’s chances for success in this game. They know that the key is doing well at school, since it’s the acquisition of degrees that determines what jobs you get and the life you live. They also know that it’s not just a matter of being a good student but of attending the right school – one that fosters academic achievement and, even more important, occupies an elevated position in the status hierarchy of educational institutions. Brand names open doors. This allows highly educated, upper-middle-class families to game the meritocratic system and to hoard a disproportionate share of the advantages it offers.

In fact, the only thing that’s less fair than the meritocracy is the system it displaced, in which people’s futures were determined strictly by the lottery of birth. Lords begat lords, and peasants begat peasants. In contrast, the meritocracy is sufficiently open that some children of the lower classes can prove themselves in school and win a place higher up the scale. The probability of doing so is markedly lower than the chances of success enjoyed by the offspring of the credentialed elite, but the possibility of upward mobility is nonetheless real. And this possibility is part of what motivates privileged parents to work so frantically to pull every string and milk every opportunity for their children. Through the jousting grounds of schooling, smart poor kids can, at times, displace dumb rich kids. The result is a system of status attainment that provides advantages for some while at the same time spreading fear for their children’s future across families of all social classes. In the end, the only thing that the meritocracy equalises is anxiety.