
An Affair to Remember: America’s Brief Fling with the University as a Public Good

This post is an essay about the brief but glorious golden age of the US university during the three decades after World War II.  

American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward. Prior to World War II, the federal government showed little interest in universities and provided little support. The war spurred a large investment in defense-based scientific research in universities, and the emergence of the Cold War expanded federal investment exponentially. Unlike a hot war, the Cold War offered an extended period of federally funded research and public subsidy for expanding student enrollments. The result was the golden age of the American university. The good times continued for about 30 years and then began to go bad. The decline was triggered by the combination of a waning perception of the Soviet threat and a taxpayer revolt against high public spending, both trends culminating with the fall of the Berlin Wall in 1989. With no money and no enemy, the Cold War university fell as quickly as it arose. Instead of seeing the Cold War university as the norm, we need to think of it as the exception. What we are experiencing now in American higher education is a regression to the mean, in which, over the long haul, Americans have understood higher education to be a distinctly private good.

I originally presented this piece in 2014 at a conference at Catholic University in Leuven, Belgium.  It was then published in the Journal of Philosophy of Education in 2016 (here’s a link to the JOPE version) and then became a chapter in my 2017 book, A Perfect Mess.  Waste not, want not.  Hope you enjoy it.


An Affair to Remember:

America’s Brief Fling with the University as a Public Good

David F. Labaree

            American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward.  Prior to World War II, the federal government showed little interest in universities and provided little support.  The war spurred a large investment in defense-based scientific research in universities for reasons of both efficiency and necessity:  universities had the researchers and infrastructure in place and the government needed to gear up quickly.  With the emergence of the Cold War in 1947, the relationship continued and federal investment expanded exponentially.  Unlike a hot war, the Cold War offered a long timeline for global competition between communism and democracy, which meant institutionalizing the wartime model of federally funded research and building a set of structures for continuing investment in knowledge whose military value was unquestioned. At the same time, the communist challenge provided a strong rationale for sending a large number of students to college.  These increased enrollments would educate the skilled workers needed by the Cold War economy, produce informed citizens to combat the Soviet menace, and demonstrate to the world the broad social opportunities available in a liberal democracy.  The result of this enormous public investment in higher education has become known as the golden age of the American university.

            Of course, as is so often the case with a golden age, it didn’t last.  The good times continued for about 30 years and then began to go bad.  The decline was triggered by the combination of a waning perception of the Soviet threat and a taxpayer revolt against high public spending, both trends culminating with the fall of the Berlin Wall in 1989.  With no money and no enemy, the Cold War university fell as quickly as it arose.

            In this paper I try to make sense of this short-lived institution.  But I want to avoid the note of nostalgia that pervades many current academic accounts, in which professors and administrators grieve for the good old days of the mid-century university and spin fantasies of recapturing them.  Barring another national crisis of the same dimension, however, it just won’t happen.  Instead of seeing the Cold War university as the norm that we need to return to, I suggest that it’s the exception.  What we’re experiencing now in American higher education is, in many ways, a regression to the mean. 

            My central theme is this:  Over the long haul, Americans have understood higher education as a distinctly private good.  The period from 1940 to 1970 was the one time in our history when the university became a public good.  And now we are back to the place we have always been, where the university’s primary role is to provide individual consumers a chance to gain social access and social advantage.  Since students are the primary beneficiaries, they should also foot the bill; so state subsidies are hard to justify.

            Here is my plan.  First, I provide an overview of the long period before 1940 when American higher education functioned primarily as a private good.  During this period, the beneficiaries changed from the university’s founders to its consumers, but private benefit was the steady state.  This is the baseline against which we can understand the rapid postwar rise and fall of public investment in higher education.  Next, I look at the huge expansion of public funding for higher education starting with World War II and continuing for the next 30 years.  Along the way I sketch how the research university came to enjoy a special boost in support and rising esteem during these decades.  Then I examine the fall from grace toward the end of the century when the public-good rationale for higher ed faded as quickly as it had emerged.  And I close by exploring the implications of this story for understanding the American system of higher education as a whole. 

            During most of its history, the central concern driving the system has not been what it can do for society but what it can do for me.  In many ways, this approach has been highly beneficial.  Much of its success as a system – as measured by wealth, rankings, and citations – derives from its core structure as a market-based system producing private goods for consumers rather than a politically based system producing public goods for state and society.  But this view of higher education as private property is also a key source of the system’s pathologies.  It helps explain why public funding for higher education is declining and student debt is rising; why private colleges are so much richer and more prestigious than public colleges; why the system is so stratified, with wealthy students attending the exclusive colleges at the top where social rewards are high and with poor students attending the inclusive colleges at the bottom where such rewards are low; and why quality varies so radically, from colleges that ride atop the global rankings to colleges that drift in intellectual backwaters.

The Private Origins of the System

            One of the peculiar aspects of the history of American higher education is that private colleges preceded public.  Another, which in part follows from the first, is that private colleges are also more prestigious.  Nearly everywhere else in the world, state-supported and governed universities occupy the pinnacle of the national system while private institutions play a small and subordinate role, supplying degrees of less distinction and serving students of less ability.  But in the U.S., the top private universities produce more research, gain more academic citations, attract better faculty and students, and graduate more leaders of industry, government, and the professions.  According to the 2013 Shanghai rankings, 16 of the top 25 universities in the U.S. are private, and the concentration is even higher at the top of this list, where private institutions make up 8 of the top 10 (Institute of Higher Education, 2013). 

            This phenomenon is rooted in the conditions under which colleges first emerged in the U.S.  American higher education developed into a system in the early 19th century, when three key elements were in place:  the state was weak, the market was strong, and the church was divided.  The federal government at the time was small and poor, surviving largely on tariffs and the sale of public lands, and state governments were strapped simply trying to supply basic public services.  Colleges were a low priority for government since they served no compelling public need – unlike public schools, which states saw as essential for producing citizens for the republic.  So colleges only emerged when local promoters requested and received a  corporate charter from the state.  These were private not-for-profit institutions that functioned much like any other corporation.  States provided funding only sporadically and only if an institution’s situation turned dire.  And after the Dartmouth College decision in 1819, the Supreme Court made clear that a college’s corporate charter meant that it could govern itself without state interference.  Therefore, in the absence of state funding and control, early American colleges developed a market-based system of higher education. 

            If the roots of the American system were private, they were also extraordinarily local.  Unlike the European university, with its aspirations toward universality and its history of cosmopolitanism, the American college of the nineteenth century was a home-town entity.  Most often, it was founded to advance the parochial cause of promoting a particular religious denomination rather than to promote higher learning.  In a setting where no church was dominant and all had to compete for visibility, stature, and congregants, founding colleges was a valuable way to plant the flag and promote the faith.  This was particularly true when the population was rapidly expanding into new territories to the west, which meant that no denomination could afford to cede the new terrain to competitors.  Starting a college in Ohio was a way to ensure denominational growth, prepare clergy, and spread the word.

            At the same time, colleges were founded with an eye toward civic boosterism, intended to shore up a community’s claim to be a major cultural and commercial center rather than a sleepy farm town.  With a college, a town could claim that it deserved to gain lucrative recognition as a stop on the railroad line, the site for a state prison, the county seat, or even the state capital.  These consequences would elevate the value of land in the town, which would work to the benefit of major landholders.  In this sense, the nineteenth century college, like much of American history, was in part the product of a land development scheme.  In general, these two motives combined: colleges emerged as a way to advance both the interests of particular sects and also the interests of the towns where they were lodged.  Often ministers were also land speculators.  It was always better to have multiple rationales and sources of support than just one (Brown, 1995; Boorstin, 1965; Potts, 1971).  In either case, however, the benefits of founding a college accrued to individual landowners and particular religious denominations and not to the larger public.

As a result of these incentives, church officials and civic leaders around the country scrambled to get a state charter for a college, establish a board of trustees made up of local notables, and install a president.  The latter (usually a clergyman) would rent a local building, hire a small and not very accomplished faculty, and serve as the CEO of a marginal educational enterprise, one that sought to draw tuition-paying students from the area in order to make the college a going concern.  With colleges arising to meet local and sectarian needs, the result was the birth of a large number of small, parochial, and weakly funded institutions in a very short period of time in the nineteenth century, which meant that most of these colleges faced a difficult struggle to survive in the competition with peer institutions.  In the absence of reliable support from church or state, these colleges had to find a way to get by on their own.

            Into this mix of private colleges, state and local governments began to introduce public institutions.  First came a series of universities established by individual states to serve their local populations.  Here too competition was a bigger factor than demand for learning, since a state government increasingly needed to have a university of its own in order to keep up with its neighbors.  Next came a group of land-grant colleges that began to emerge by midcentury.  Funded by grants of land from the federal government, these were public institutions that focused on providing practical education for occupations in agriculture and engineering.  Finally came an array of normal schools, which aimed at preparing teachers for the expanding system of public elementary education.  Like the private colleges, these public institutions emerged to meet the economic needs of towns that eagerly sought to house them.  And although these colleges were creatures of the state, they had only limited public funding and had to rely heavily on student tuition and private donations.

            The rate of growth of this system of higher education was staggering.  At the beginning of the American republic in 1790 the country had 19 institutions calling themselves colleges or universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1880, it had 811, which doesn’t even include the normal schools.  As a comparison, this was five times as many institutions as existed that year in all of Western Europe (Rüegg, 2004).  To be sure, the American institutions were for the most part colleges in name only, with low academic standards, an average student body of 131 (Carter et al., 2006, Table Bc523) and faculty of 14 (Carter et al., 2006, Table Bc571).  But nonetheless this was a massive infrastructure for a system of higher education.

            At a density of 16 colleges per million of population, the U.S. in 1880 had the most overbuilt system of higher education in the world (Collins, 1979, Table 5.2).  Created in order to meet the private needs of land speculators and religious sects rather than the public interest of state and society, the system got way ahead of demand for its services.  That changed in the 1880s.  By adopting parts of the German research university model (in form if not in substance), the top level of the American system acquired a modicum of academic respectability.  In addition – and this is more important for our purposes here – going to college finally came to be seen as a good investment for a growing number of middle-class student-consumers.

            Three factors came together to make college attractive.  Primary among these was the jarring change in the structure of status transmission for middle-class families toward the end of the nineteenth century.  The tradition of passing on social position to your children by transferring ownership of the small family business was under dire threat, as factories were driving independent craft production out of the market and department stores were making small retail shops economically marginal.  Under these circumstances, middle class families began to adopt what Burton Bledstein calls the “culture of professionalism” (Bledstein, 1976).  Pursuing a profession (law, medicine, clergy) had long been an option for young people in this social stratum, but now this attraction grew stronger as the definition of profession grew broader.  With the threat of sinking into the working class becoming more likely, families found reassurance in the prospect of a form of work that would buffer their children from the insecurity and degradation of wage labor.  This did not necessarily mean becoming a traditional professional, where the prospects were limited and entry costs high, but instead it meant becoming a salaried employee in a management position that was clearly separated from the shop floor.  The burgeoning white-collar work opportunities as managers in corporate and government bureaucracies provided the promise of social status, economic security, and protection from downward mobility.  And the best way to certify yourself as eligible for this kind of work was to acquire a college degree.

            Two other factors added to the attractions of college.  One was that a high school degree – once a scarce commodity that became a form of distinction for middle class youth during the nineteenth century – was in danger of becoming commonplace.  Across the middle of the century, enrollments in primary and grammar schools were growing fast, and by the 1880s they were filling up.  By 1900, the average American 20-year-old had eight years of schooling, which meant that political pressure was growing to increase access to high school (Goldin & Katz, 2008, p. 19).  This started to happen in the 1880s, and for the next 50 years high school enrollments doubled every decade.  The consequences were predictable.  If the working class was beginning to get a high school education, then middle class families felt compelled to preserve their advantage by pursuing college.

            The last piece that fell into place to increase the drawing power of college for middle class families was the effort by colleges in the 1880s and 90s to make undergraduate enrollment not just useful but enjoyable.  Ever desperate to find ways to draw and retain students, colleges responded to competitive pressure by inventing the core elements that came to define the college experience for American students in the twentieth century.  These included fraternities and sororities, pleasant residential halls, a wide variety of extracurricular entertainments, and – of course – football.  College life became a major focus of popular magazines, and college athletic events earned big coverage in newspapers.  In remarkably short order, going to college became a life stage in the acculturation of middle class youth.  It was the place where you could prepare for a respectable job, acquire sociability, learn middle class cultural norms, have a good time, and meet a suitable spouse.  And, for those who were so inclined, there was the potential fringe benefit of getting an education.

            Spurred by student desire to get ahead or stay ahead, college enrollments started growing quickly.  They were at 116,000 in 1879, 157,000 in 1889, 238,000 in 1899, 355,000 in 1909, 598,000 in 1919, 1,104,000 in 1929, and 1,494,000 in 1939 (Carter et al., 2006, Table Bc523).  This was a rate of increase of more than 50 percent a decade – not as fast as the increases that would come at midcentury, but still impressive.  During this same 60-year period, total college enrollment as a proportion of the population 18-to-24 years old rose from 1.6 percent to 9.1 percent (Carter et al., 2006, Table Bc524).  By 1930, the U.S. had three times the population of the U.K. and 20 times the number of college students (Levine, 1986, p. 135).  And the reason they were enrolling in such numbers was clear.  According to studies in the 1920s, almost two-thirds of undergraduates were there to get ready for a particular job, mostly in the lesser professions and middle management (Levine, 1986, p. 40).  Business and engineering were the most popular majors and the social sciences were on the rise.  As David Levine put it in his important book about college in the interwar years, “Institutions of higher learning were no longer content to educate; they now set out to train, accredit, and impart social status to their students” (Levine, 1986, p. 19).

            Enrollments were growing in public colleges faster than in private colleges, but only by a small amount.  In fact it wasn’t until 1931 – for the first time in the history of American higher education – that the public sector finally accounted for a majority of college students (Carter et al., 2006, Tables Bc531 and Bc534).  The increases occurred across all levels of the system, including the top public research universities; but the largest share of enrollments flowed into the newer institutions at the bottom of the system:  the state colleges that were emerging from normal schools, urban commuter colleges (mostly private), and an array of public and private junior colleges that offered two-year vocational programs. 

            For our purposes today, the key point is this:  The American system of colleges and universities that emerged in the nineteenth century and continued until World War II was a market-driven structure that construed higher education as a private good.  Until around 1880, the primary benefits of the system went to the people who founded individual institutions – the land speculators and religious sects for whom a new college brought wealth and competitive advantage.  This explains why colleges emerged in such remote places long before there was substantial student demand.  The role of the state in this process was muted.  The state was too weak and too poor to provide strong support for higher education, and there was no obvious state interest that argued for doing so.  Until the decade before the war, most student enrollments were in the private sector, and even at the war’s start the majority of institutions in the system were private (Carter et al., 2006, Tables Bc510 to Bc520).  

            After 1880, the primary benefits of the system went to the students who enrolled.  For them, it became the primary way to gain entry to the relatively secure confines of salaried work in management and the professions.  For middle class families, college in this period emerged as the main mechanism for transmitting social advantage from parents to children; and for others, it became the object of aspiration as the place to get access to the middle class.  State governments put increasing amounts of money into support for public higher education, not because of the public benefits it would produce but because voters demanded increasing access to this very attractive private good.

The Rise of the Cold War University

            And then came the Second World War.  There is no need here to recount the devastation it brought about or the nightmarish residue it left.  But it’s worth keeping in mind the peculiar fact that this conflict is remembered fondly by Americans, who often refer to it as the Good War (Terkel, 1997).  The war cost a lot of American lives and money, but it also brought a lot of benefits.  It didn’t hurt, of course, to be on the winning side and to have all the fighting take place on foreign territory.  And part of the positive feeling associated with the war comes from the way it thrust the country into a new role as the dominant world power.  But perhaps even more, the warm feeling arises from the memory of this as a time when the country came together around a common cause.  For citizens of the United States – the most liberal of liberal democracies, where private liberty is much more highly valued than public loyalty – it was a novel and exciting feeling to rally around the federal government.  Usually viewed with suspicion as a threat to the rights of individuals and a drain on private wealth, the American government in the 1940s took on the mantle of good in the fight against evil.  Its public image became the resolute face of a white-haired man dressed in red, white, and blue, who pointed at the viewer in a famous recruiting poster.  Its slogan: “Uncle Sam Wants You.”

            One consequence of the war was a sharp increase in the size of the U.S. government.  The historically small federal state had started to grow substantially in the 1930s as a result of the New Deal effort to spend the country out of a decade-long economic depression, a time when spending doubled.  But the war raised the level of federal spending by a factor of seven, from $1,000 to $7,000 per capita.  After the war, the level dropped back to $2,000; and then the onset of the Cold War sent federal spending into a sharp, and this time sustained, increase – reaching $3,000 in the 50s, $4,000 in the 60s, and regaining the previous high of $7,000 in the 80s, during the last days of the Soviet Union (Garrett & Rhine, 2006, Figure 3).

            If for Americans in general World War II carries warm associations, for people in higher education it marks the beginning of the Best of Times – a short but intense period of generous public funding and rapid expansion.  Initially, of course, the war brought trouble, since it sent most prospective college students into the military.  Colleges quickly adapted by repurposing their facilities for military training and other war-related activities.  But the real long-term benefits came when the federal government decided to draw higher education more centrally into the war effort – first, as the central site for military research and development; and second, as the place to send veterans when the war was over.  Let me say a little about each.

            In the first half of the twentieth century, university researchers had to scrabble around looking for funding, forced to rely on a mix of foundations, corporations, and private donors.  The federal government saw little benefit in employing their services.  In a particularly striking case at the start of World War I, the professional association of academic chemists offered its help to the War Department, which declined “on the grounds that it already had a chemist in its employ” (Levine, 1986, p. 51).[1]  The existing model was for government to maintain its own modest research facilities instead of relying on the university.

            The scale of the next war changed all this.  At the very start, a former engineering dean from MIT, Vannevar Bush, took charge of mobilizing university scientists behind the war effort as head of the Office of Scientific Research and Development.  The model he established for managing the relationship between government and researchers set the pattern for university research that still exists in the U.S. today: Instead of setting up government centers, the idea was to farm out research to universities.  Issue a request for proposals to meet a particular research need; award the grant to the academic researchers who seemed best equipped to meet this need; and pay 50 percent or more overhead to the university for the facilities that researchers would use.  This method drew on the expertise and facilities that already existed at research universities, which both saved the government from having to maintain a costly permanent research operation and also gave it the flexibility to draw on the right people for particular projects.  For universities, it provided a large source of funds, which enhanced their research reputations, helped them expand faculty, and paid for infrastructure.  It was a win-win situation.  It also established the entrepreneurial model of the university researcher in perpetual search for grant money.  And for the first time in the history of American higher education, the university was being considered a public good, whose research capacity could serve the national interest by helping to win a war. 

            If universities could meet one national need during the war by providing military research, they could meet another national need after the war by enrolling veterans.  The GI Bill of Rights, passed by Congress in 1944, was designed to pay off a debt and resolve a manpower problem.  Its official name, the Servicemen’s Readjustment Act of 1944, reflects both aims.  By the end of the war there were 15 million men and women who had served in the military, who clearly deserved a reward for their years of service to the country.  The bill offered them the opportunity to continue their education at federal expense, which included attending the college of their choice.  This opportunity also offered another public benefit, since it responded to deep concern about the ability of the economy to absorb this flood of veterans.  The country had been sliding back into depression at the start of the war, and the fear was that massive unemployment at war’s end was a real possibility.  The strategy worked.  Under the GI Bill, about two million veterans eventually attended some form of college.  By 1948, when veteran enrollment peaked, American colleges and universities had one million more students than 10 years earlier (Geiger, 2004, pp. 40-41; Carter et al., 2006, Table Bc523).  This was another win-win situation.  The state rewarded national service, headed off mass unemployment, and produced a pile of human capital for future growth.  Higher education got a flood of students who could pay their own way.  The worry, of course, was what was going to happen when the wartime research contracts ended and the veterans graduated.

            That’s where the Cold War came in to save the day.  And the timing was perfect.  The first major action of the new conflict – the Berlin Blockade – came in 1948, the same year that veteran enrollments at American colleges reached their peak.  If World War II was good for American higher education, the Cold War was a bonanza.  The hot war meant boom and bust – providing a short surge of money and students followed by a sharp decline.  But the Cold War was a prolonged effort to contain Communism.  It was sustainable because actual combat was limited and often carried out by proxies.  For universities this was a gift that, for 30 years, kept on giving.  The military threat was massive in scale – nothing less than the threat of nuclear annihilation.  And supplementing it was an ideological challenge – the competition between two social and political systems for hearts and minds.  As a result, the government needed top universities to provide it with massive amounts of scientific research that would support the military effort.  And it also needed all levels of the higher education system to educate the large numbers of citizens required to deal with the ideological menace.  We needed to produce the scientists and engineers who would allow us to compete with Soviet technology.  We needed to provide high-level human capital in order to promote economic growth and demonstrate the economic superiority of capitalism over communism.  And we needed to provide educational opportunity for our own racial minorities and lower classes in order to show that our system is not only effective but also fair and equitable.  This would be a powerful weapon in the effort to win over the third world with the attractions of the American Way.  The Cold War American government treated the higher education system as a highly valuable public good, which would make a large contribution to the national interest; and the system was pleased to be the object of so much federal largesse (Loss, 2012).

            On the research side, the impact of the Cold War on American universities was dramatic.  The best way to measure this is by examining patterns of federal research and development spending over the years, which traces the ebb and flow of national threats across the last 60 years.  Funding rose slowly  from $13 billion in 1953 (in constant 2014 dollars) until the Sputnik crisis (after the Soviets succeeded in placing the first satellite in earth orbit), when funding jumped to $40 billion in 1959 and rose rapidly to a peak of $88 billion in 1967.  Then the amount backed off to $66 billion in 1975, climbing to a new peak of $104 billion in 1990 just before the collapse of the Soviet Union and then dropping off.  It started growing again in 2002 after the attack on the twin towers, reaching an all-time high of $151 billion in 2010 and has been declining ever since (AAAS, 2014).[2] 

            Initially, defense funding accounted for 85 percent of federal research funding, gradually falling back to about half in 1967, as nondefense funding increased, but remaining in a solid majority position up until the present.  For most of the period after 1957, however, the largest element in nondefense spending was research on space technology, which arose directly from the Soviet Sputnik threat.  If you combine defense and space appropriations, this accounts for about three-quarters of federal research funding until 1990.  Defense research closely tracked perceived threats in the international environment, dropping by 20 percent after 1989 and then making a comeback in 2001.  Overall, federal funding during the Cold War for research of all types grew in constant dollars from $13 billion in 1953 to $104 billion in 1990, an increase of 700 percent.  These were good times for university researchers (AAAS, 2014).

            At the same time that research funding was growing rapidly, so were college enrollments.  The number of students in American higher education grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969.  The number hit 11.6 million in 1979 and then began to slow down – creeping up to 13.5 million in 1989 and leveling off at around 14 million in the 1990s (Carter et al., 2006, Table Bc523; NCES, 2014, Table 303.10).  During the 30 years between 1949 and 1979, enrollments increased by more than 9 million students, a growth of almost 400 percent.  And the bulk of the enrollment increases in the last two decades were in part-time students and at two-year colleges.  Among four-year institutions, the primary growth occurred not at private or flagship public universities but at regional state universities, the former normal schools.  The Cold War was not just good for research universities; it was also great for institutions of higher education all the way down the status ladder.

            In part we can understand this radical growth in college enrollments as an extension of the long-term surge in consumer demand for American higher education as a private good.  Recall that enrollments started accelerating late in the nineteenth century, when college attendance started to provide an edge in gaining middle class jobs.  This meant that attending college gave middle-class families a way to pass on social advantage while attending high school gave working-class families a way to gain social opportunity.  But by 1940, high school enrollments had become universal.  So for working-class families, the new zone of social opportunity became higher education.  This increase in consumer demand provided a market-based explanation for at least part of the flood of postwar enrollments.

            At the same time, however, the Cold War provided a strong public rationale for broadening access to college.  In 1946, President Harry Truman appointed a commission to provide a plan for expanding access to higher education, which was the first time in American history that a president sought advice about education at any level.  The result was a six-volume report with the title Higher Education for American Democracy.  It’s no coincidence that the report was issued in 1947, the starting point of the Cold War.  The authors framed the report around the new threat of atomic war, arguing that “It is essential today that education come decisively to grips with the world-wide crisis of mankind” (President’s Commission, 1947, vol. 1, p. 6).  What they proposed as a public response to the crisis was a dramatic increase in access to higher education.

            The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate school, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitudes and interests.
        This means that we shall aim at making higher education equally available to all young people, as we now do education in the elementary and high schools, to the extent that their capacity warrants a further social investment in their training (President’s Commission, 1947, vol. 1, p. 36).

Tellingly, the report devotes a lot of space to exploring the existing barriers to educational opportunity posed by class and race – exactly the kinds of issues that were making liberal democracies look bad in light of the egalitarian promise of communism.

Decline of the System’s Public Mission

            So in the mid twentieth century, Americans went through an intense but brief infatuation with higher education as a public good.  Somehow college was going to help save us from the communist menace and the looming threat of nuclear war.  Like World War II, the Cold War brought together a notoriously individualistic population around the common goal of national survival and the preservation of liberal democracy.  It was a time when every public building had an area designated as a bomb shelter.  In the elementary school I attended in the 1950s, I can remember regular air raid drills.  The alarm would sound and teachers would lead us downstairs to the basement, whose concrete-block walls were supposed to protect us from a nuclear blast.  Although the drills did nothing to preserve life, they did serve an important social function.  Like Sunday church services, these rituals drew individuals together into communities of faith where we enacted our allegiance to a higher power. 

            For American college professors, these were the glory years, when fear of annihilation gave us a glamorous public mission and what seemed like an endless flow of public funds and funded students.  But it did not – and could not – last.  Wars can bring great benefits to the home front, but then they end.  The Cold War lasted longer than most, but this longevity came at the expense of intensity.  By the 1970s, the U.S. had lived with the nuclear threat for 30 years without any sign that the worst case was going to materialize.  You can only stand guard for so long before attention begins to flag and ordinary concerns start to push back to the surface.  In addition, waging war is extremely expensive, draining both public purse and public sympathy.  The two Cold War conflicts that engaged American troops cost a lot, stirred strong opposition, and ended badly, providing neither the idealistic glow of the Good War nor the satisfying closure of unconditional surrender by the enemy.  Korea ended with a stalemate and the return to the status quo ante bellum.  Vietnam ended with defeat and the humiliating image in 1975 of the last Americans being plucked off a rooftop in Saigon – which the victors then promptly renamed Ho Chi Minh City.

            The Soviet menace and the nuclear threat persisted, but in a form that – after the grim experience of war in the rice paddies – seemed distant and slightly unreal.  Add to this the problem that, as a tool for defeating the enemy, the radical expansion of higher education by the 70s did not appear to be a cost-effective option.  Higher ed is a very labor-intensive enterprise, in which size brings few economies of scale, and its public benefits in the war effort were hard to pin down.  As the national danger came to seem more remote, the costs of higher ed became more visible and more problematic.  Look around any university campus, and the primary beneficiaries of public largesse seem to be private actors – the faculty and staff who work there and the students whose degrees earn them higher income.  So about 30 years into the Cold War, the question naturally arose:  Why should the public pay so much to provide cushy jobs for the first group and to subsidize the personal ambition of the second?  If graduates reap the primary benefits of a college education, shouldn’t they be paying for it rather than the beleaguered taxpayer?

            The 1970s marked the beginning of the American tax revolt, and not surprisingly this revolt emerged first in the bellwether state of California.  Fueled by booming defense plants and high immigration, California had a great run in the decades after 1945.  During this period, the state developed the most comprehensive system of higher education in the country.  In 1960 it formalized this system with a Master Plan that offered every Californian the opportunity to attend college in one of three state systems.  The University of California focused on research, graduate programs, and educating the top high school graduates.  California State University (developed mostly from former teachers colleges) focused on undergraduate programs for the second tier of high school graduates.  The community college system offered the rest of the population two-year programs for vocational training and possible transfer to one of the two university systems.  By 1975, there were 9 campuses in the University of California, 23 in California State University, and xx in the community college system, with a total enrollment across all systems of 1.5 million students – accounting for 14 percent of the college students in the U.S. (Carter et al., 2006, Table Bc523; Douglass, 2000, Table 1).  Not only was the system enormous, but the Master Plan declared it illegal to charge California students tuition.  The biggest and best public system of higher education in the country was free.

            And this was the problem.  What allowed the system to grow so fast was a state fiscal regime that was quite rare in the American context – one based on high public services supported by high taxes.  After enjoying the benefits of this combination for a few years, taxpayers suddenly woke up to the realization that this approach to paying for higher education was at core un-American.  For a country deeply grounded in liberal democracy, the system of higher ed for all at no cost to the consumer looked a lot like socialism.  So, of course, it had to go.  In the mid-1970s the country’s first taxpayer revolt emerged in California, culminating in a successful campaign in 1978 to pass a state-wide initiative that put a limit on increases in property taxes.  Other tax limitation initiatives followed (Martin, 2008).  As a result, the average state appropriation per student at the University of California dropped from about $3,400 (in 1960 dollars) in 1987 to $1,100 in 2010, a decline of 68 percent (UC Data Analysis, 2014).  This quickly led to a steady increase in fees charged to students at California’s colleges and universities.  (It turned out that tuition was illegal but demanding fees from students was not.)  In 1960 dollars, the annual fees for in-state undergraduates at the University of California rose from $317 in 1987 to $1,122 in 2010, an increase of more than 250 percent (UC Data Analysis, 2014).  This pattern of tax limitations and tuition increases spread across the country.  Nationwide during the same period of time, the average state appropriation per student at a four-year public college fell from $8,500 to $5,900 (in 2012 dollars), a decline of 31 percent, while average undergraduate tuition doubled, rising from $2,600 to $5,200 (SHEEO, 2013, Figure 3).

            The decline in the state share of higher education costs was most pronounced at the top public research universities, which had a wider range of income sources.  By 2009, the average such institution was receiving only 25 percent of its revenue from state government (National Science Board, 2012, Figure 5).  An extreme case is the University of Virginia, where in 2013 the state provided less than six percent of the university’s operating budget (University of Virginia, 2014).

            While these changes were happening at the state level, the federal government was also backing away from its Cold War generosity to students in higher education.  Legislation such as the National Defense Education Act (1958) and Higher Education Act (1965) had provided support for students through a roughly equal balance of grants and loans.  But in 1980 the election of Ronald Reagan as president meant that the push to lower taxes would become national policy.  At this point, support for students shifted from cash support to federally guaranteed loans.  The idea was that a college degree was a great investment for students, which would pay long-term economic dividends, so they should shoulder an increasing share of the cost.  The proportion of total student support in the form of loans was 54 percent in 1975, 67 percent in 1985, and 78 percent in 1995, and the ratio has remained at that level ever since (McPherson & Schapiro, 1998, Table 3.3; College Board, 2013, Table 1).  By 1995, students were borrowing $41 billion to attend college, which grew to $89 billion in 2005 (College Board, 2014, Table 1).  At present, about 60 percent of all students accumulate college debt, most of it in the form of federal loans, and the total student debt load has passed $1 trillion.

            At the same time that the federal government was cutting back on funding college students, it was also reducing funding for university research.  As I mentioned earlier, federal research grants in constant dollars peaked at about $100 billion in 1990, the year after the fall of the Berlin Wall – a good marker for the end of the Cold War.  At this point defense accounted for about two-thirds of all university research funding – three-quarters if you include space research.  Defense research declined by about 20 percent during the 90s and didn’t start rising again substantially until 2002, the year after the fall of the Twin Towers and the beginning of the new existential threat known as the War on Terror.  Defense research reached a new peak in 2009 at a level about a third above the Cold War high, and it has been declining steadily ever since.  Increases in nondefense research helped compensate for only a part of the loss of defense funds (AAAS, 2014).

Conclusion

            The American system of higher education came into existence as a distinctly private good.  It arose in the nineteenth century to serve the pursuit of sectarian advantage and land speculation, and then in the twentieth century it evolved into a system for providing individual consumers a way to get ahead or stay ahead in the social hierarchy.  Quite late in the game it took World War II to give higher education an expansive national mission and reconstitute it as a public good.  But hot wars are unsustainable for long, so in 1945 the system was sliding quickly back toward public irrelevance before it was saved by the timely arrival of the Cold War.  As I have shown, the Cold War was very, very good for the American system of higher education.  It produced a massive increase in funding by federal and state governments, both for university research and for college student subsidies, and – more critically – it sustained this support for a period of three decades.  But these golden years gradually gave way before a national wave of taxpayer fatigue and the surprise collapse of the Soviet Union.  With the nation strapped for funds and with its global enemy dissolved, it no longer had the urgent need to enlist America’s colleges and universities in a grand national cause.  The result was a decade of declining research support and static student enrollments.  In 2002 the wars in Afghanistan and Iraq brought a momentary surge in both, but both measures peaked after only eight years and then went into decline again.  Increasingly, higher education is returning to its roots as a private good.

            So what are we to take away from this story of the rise and fall of the Cold War university?  One conclusion is that the golden age of the American university in the mid twentieth century was a one-off event.  Wars may be endemic but the Cold War was unique.  So American university administrators and professors need to stop pining for a return to the good old days and learn how to live in the post-Cold-War era.  The good news is that the impact of the surge in public investment in higher education has left the system in a radically stronger condition than it was in before World War II.  Enrollments have gone from 1.5 million to 21 million; federal research funding has gone from zero to $135 billion; federal grants and loans to college students have gone from zero to $170 billion (NCES, 2014, Table 303.10; AAAS, 2014; College Board, 2014, Table 1).  And the American system of colleges and universities went from an international also-ran to a powerhouse in the world economy of higher education.  Even though all of the numbers are now dropping, they are dropping from a very high level, which is the legacy of the Cold War.  So really, we should stop whining.  We should just say thanks to the bomb for all that it did for us and move on.

            The bad news, of course, is that the numbers really are going down.  Government funding for research is declining and there is no prospect for a turnaround in the foreseeable future.  This is a problem because the federal government is the primary source of funds for basic research in the U.S.; corporations are only interested in investing in research that yields immediate dividends.  During the Cold War, research universities developed a business plan that depended heavily on external research funds to support faculty, graduate students, and overhead.  That model is now broken.  The cost of pursuing a college education is increasingly being borne by the students themselves, as states are paying a declining share of the costs of higher education.  Tuition is rising and as a result student loans are rising.  Public research universities are in a particularly difficult position because their state funding is falling most rapidly.  According to one estimate, at the current rate of decline the average state fiscal support for public higher education will reach zero in 2059 (Mortenson, 2012). 

            But in the midst of all of this bad news, we need to keep in mind that the American system of higher education has a long history of surviving and even thriving under conditions of at best modest public funding.  At its heart, this is a system of higher education based not on the state but the market.  In the hardscrabble nineteenth century, the system developed mechanisms for getting by without the steady support of funds from church or state.  It learned how to attract tuition-paying students, give them the college experience they wanted, get them to identify closely with the institution, and then milk them for donations after they graduate.  Football, fraternities, logo-bearing T shirts, and fund-raising operations all paid off handsomely.  It learned how to adapt quickly to trends in the competitive environment, whether it’s the adoption of intercollegiate football, the establishment of research centers to capitalize on funding opportunities, or the provision of food courts and rock-climbing walls for students.  Public institutions have a long history of behaving much like private institutions because they were never able to count on continuing state funding.

            This system has worked well over the years.  Along with the Cold War, it has enabled American higher education to achieve an admirable global status.  By the measures of citations, wealth, drawing power, and Nobel prizes, the system has been very effective.  But it comes with enormous costs.  Private universities have serious advantages over public universities, as we can see from university rankings.  The system is the most stratified structure of higher education in the world.  Top universities in the U.S. get an unacknowledged subsidy from the colleges at the bottom of the hierarchy, which receive less public funding, charge less tuition, and receive less generous donations.  And students sort themselves into positions in the college hierarchy that parallel their positions in the status hierarchy.  Students with more cultural capital and economic capital gain greater social benefit from the system than those with less, since they go to college more often, attend the best institutions, and graduate at a much higher rate.  Nearly everyone can go to college in the U.S., but the colleges that are most accessible provide the least social advantage.

            So, conceived and nurtured into maturity as a private good, the American system of higher education remains a market-based organism.  It took the threat of nuclear war to turn it – briefly – into a public good.  But these days seem as remote as the time when schoolchildren huddled together in a bomb shelter. 

References

American Association for the Advancement of Science. (2014). Historical Trends in Federal R & D: By Function, Defense and Nondefense R & D, 1953-2015.  http://www.aaas.org/page/historical-trends-federal-rd (accessed 8-21-14).

Bledstein, B. J. (1976). The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York:  W. W. Norton.

Boorstin, D. J. (1965). Culture with Many Capitals: The Booster College. In The Americans: The National Experience (pp. 152-161). New York: Knopf Doubleday.

Brown, D. K. (1995). Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism. New York: Teachers College Press.

Carter, S. B., et al. (2006). Historical Statistics of the United States, Millennial Edition Online. New York: Cambridge University Press.

College Board. (2013). Trends in Student Aid, 2013. New York: The College Board.

College Board. (2014). Trends in Higher Education: Total Federal and Nonfederal Loans over Time.  https://trends.collegeboard.org/student-aid/figures-tables/growth-federal-and-nonfederal-loans-over-time (accessed 9-4-14).

Collins, R. (1979). The Credential Society: An Historical Sociology of Education and Stratification. New York: Academic Press.

Douglass, J. A. (2000). The California Idea and American Higher Education: 1850 to the 1960 Master Plan. Stanford, CA: Stanford University Press.

Garrett, T. A., & Rhine, R. M. (2006).  On the Size and Growth of Government. Federal Reserve Bank of St. Louis Review, 88:1 (pp. 13-30).

Geiger, R. L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Goldin, C. & Katz, L. F. (2008). The Race between Education and Technology. Cambridge: Belknap Press of Harvard University Press.

Institute of Higher Education, Shanghai Jiao Tong University.  (2013).  Academic Ranking of World Universities – 2013.  http://www.shanghairanking.com/ARWU2013.html (accessed 6-11-14).

Levine, D. O. (1986). The American College and the Culture of Aspiration, 1914-1940. Ithaca: Cornell University Press.

Loss, C. P. (2012). Between Citizens and the State: The Politics of American Higher Education in the 20th Century. Princeton, NJ: Princeton University Press.

Martin, I. W. (2008). The Permanent Tax Revolt: How the Property Tax Transformed American Politics. Stanford, CA: Stanford University Press.

McPherson, M. S. & Schapiro, M. O.  (1999).  Reinforcing Stratification in American Higher Education:  Some Disturbing Trends.  Stanford: National Center for Postsecondary Improvement.

Mortenson, T. G. (2012).  State Funding: A Race to the Bottom.  The Presidency (winter).  http://www.acenet.edu/the-presidency/columns-and-features/Pages/state-funding-a-race-to-the-bottom.aspx (accessed 10-18-14).

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

National Science Board. (2012). Diminishing Funding Expectations: Trends and Challenges for Public Research Universities. Arlington, VA: National Science Foundation.

Potts, D. B. (1971).  American Colleges in the Nineteenth Century: From Localism to Denominationalism. History of Education Quarterly, 11: 4 (pp. 363-380).

President’s Commission on Higher Education. (1947). Higher Education for American Democracy: A Report. Washington, DC: US Government Printing Office.

Rüegg, W. (2004). European Universities and Similar Institutions in Existence between 1812 and the End of 1944: A Chronological List: Universities. In W. Rüegg (Ed.), A History of the University in Europe, Vol. 3. Cambridge: Cambridge University Press.

State Higher Education Executive Officers (SHEEO). (2013). State Higher Education Finance, FY 2012. www.sheeo.org/sites/default/files/publications/SHEF-FY12.pdf (accessed 9-8-14).

Terkel, S. (1997). The Good War: An Oral History of World War II. New York: New Press.

Tewksbury, D. G. (1932). The Founding of American Colleges and Universities before the Civil War. New York: Teachers College Press.

U of California Data Analysis. (2014). UC Funding and Fees Analysis.  http://ucpay.globl.org/funding_vs_fees.php (accessed 9-2-14).

University of Virginia (2014). Financing the University 101. http://www.virginia.edu/finance101/answers.html (accessed 9-2-14).

[1] Under pressure of the war effort, the department eventually relented and enlisted the help of chemists to study gas warfare.  But the initial response is telling.

[2] Not all of this funding went into the higher education system.  Some went to stand-alone research organizations such as the RAND Corporation and the American Institutes for Research.  But these organizations in many ways function as an adjunct to higher education, with researchers moving freely between them and the university.

Posted in Higher Education, History of education, Organization Theory, Sociology

College: What Is It Good For?

This post is the text of a lecture I gave in 2013 at the annual meeting of the John Dewey Society.  It was published the following year in the Society’s journal, Education and Culture.  Here’s a link to the published version.           

The story I tell here is not a philosophical account of the virtues of the American university but a sociological account of how those virtues arose as unintended consequences of a system of higher education that emerged for less elevated reasons.  Drawing on the analysis in the book I was writing at the time, A Perfect Mess, I show how the system emerged in large part out of two impulses that had nothing to do with advancing knowledge.  One was the competition among religious groups, each seeking to plant the denominational flag on the growing western frontier and provide clergy for the newly arriving flock.  The other was the competition among frontier towns to attract settlers who would buy land, using a college as a sign that this town was not just another dusty farm village but a true center of culture.

The essay then goes on to explore how the current positive social benefits of the US higher ed system are supported by the peculiar institutional form that characterizes American colleges and universities. 

My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

In short, I’m portraying the system as one that is infused with irony, from its early origins through to its current functions.  Hope you enjoy it.

A Perfect Mess Cover

College — What Is It Good For

David F. Labaree

            I want to say up front that I’m here under false pretenses.  I’m not a Dewey scholar or a philosopher; I’m a sociologist doing history in the field of education.  And the title of my lecture is a bit deceptive.  I’m not really going to talk about what college is good for.  Instead I’m going to talk about how the institution we know as the modern American university came into being.  As a sociologist I’m more interested in the structure of the institution than in its philosophical aims.  It’s not that I’m opposed to these aims.  In fact, I love working in a university where these kinds of pursuits are open to us: where we can enjoy the free flow of ideas; where we can explore any issue in the sciences or humanities that engages us; and where we can go wherever the issue leads without worrying about utility or orthodoxy or politics.  It’s a great privilege to work in such an institution.  And this is why I want to spend some time examining how this institution developed its basic form in the improbable context of the United States in the nineteenth century.

            My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

            I tell this story in three parts.  I start by exploring how the American system of higher education emerged in the nineteenth century, without a plan and without any apparent promise that it would turn out well.  I show how, by 1900, all the pieces of the current system had come together.  This is the historical part.  Then I show how the combination of these elements created an astonishingly strong, resilient, and powerful structure.  I look at the way this structure deftly balances competing aims – the populist, the practical, and the elite.  This is the sociological part.  Then I veer back toward the issue raised in the title, to figure out what the connection is between the form of American higher education and the things that it is good for.  This is the vaguely philosophical part.  I argue that the form serves the extraordinarily useful functions of protecting those of us in the faculty from the real world, protecting us from each other, and hiding what we’re doing behind a set of fictions and veneers that keep anyone from knowing exactly what is really going on.

           In this light, I look at some of the things that could kill it for us.  One is transparency.  The current accountability movement directed toward higher education could ruin everything by shining a light on the multitude of conflicting aims, hidden cross-subsidies, and forbidden activities that constitute life in the university.  A second is disaggregation.  I’m talking about current proposals to pare down the complexity of the university in the name of efficiency:  Let online modules take over undergraduate teaching; eliminate costly residential colleges; closet research in separate institutes; and get rid of football.  These changes would destroy the synergy that comes from the university’s complex structure.  A third is principle.  I argue that the university is a procedural institution, which would collapse if we all acted on principle instead of form.   I end with a call for us to retreat from substance and stand shoulder-to-shoulder in defense of procedure.

Historical Roots of the System

            The origins of the American system of higher education could not have been more humble or less promising of future glory.  It was a system, but it had no overall structure of governance and it did not emerge from a plan.  It just happened, through an evolutionary process that had direction but no purpose.  We have a higher education system in the same sense that we have a solar system, each of which emerged over time according to its own rules.  These rules shaped the behavior of the system but they were not the product of Intelligent Design. 

            Yet something there was about this system that produced extraordinary institutional growth.  When George Washington assumed the presidency of the new republic in 1789, the U.S. already had 19 colleges and universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1830 the numbers rose to 50 and then growth accelerated, with the total reaching 250 in 1860, 563 in 1870, and 811 in 1880.  To give some perspective, the number of universities in the United Kingdom between 1800 and 1880 rose from 6 to 10 and in all of Europe from 111 to 160 (Rüegg, 2004).  So in 1880 this upstart system had 5 times as many institutions of higher education as did the entire continent of Europe.  How did this happen?

            Keep in mind that the university as an institution was born in medieval Europe in the space between the dominant sources of power and wealth, the church and the state, and it drew  its support over the years from these two sources.  But higher education in the U.S. emerged in a post-feudal frontier setting where the conditions were quite different.  The key to understanding the nature of the American system of higher education is that it arose under conditions where the market was strong, the state was weak, and the church was divided.  In the absence of any overarching authority with the power and money to support a system, individual colleges had to find their own sources of support in order to get started and keep going.  They had to operate as independent enterprises in the competitive economy of higher education, and their primary reasons for being had little to do with higher learning.

            In the early- and mid-nineteenth century, the modal form of higher education in the U.S. was the liberal arts college.  This was a non-profit corporation with a state charter and a lay board, which would appoint a president as CEO of the new enterprise.  The president would then rent a building, hire a faculty, and start recruiting students.  With no guaranteed source of funding, the college had to make a go of it on its own, depending heavily on tuition from students and donations from prominent citizens, alumni, and religious sympathizers.  For college founders, location was everything.  However, whereas European universities typically emerged in major cities, these colleges in the U.S. arose in small towns far from urban population centers.  Not a good strategy if your aim was to draw a lot of students.  But the founders had other things in mind.

            One central motive for founding colleges was to promote religious denominations.  The large majority of liberal arts colleges in this period had a religious affiliation and a clergyman as president.  The U.S. was an extremely competitive market for religious groups seeking to spread the faith, and colleges were a key way to achieve this end.  With colleges, they could prepare their own clergy and provide higher education for their members; and these goals were particularly important on the frontier, where the population was growing and the possibilities for denominational expansion were the greatest.  Every denomination wanted to plant the flag in the new territories, which is why Ohio came to have so many colleges.  The denomination provided a college with legitimacy, students, and a built-in donor pool but with little direct funding.

            Another motive for founding colleges was closely allied with the first, and that was land speculation.  Establishing a college in town was not only a way to advance the faith, it was also a way to raise property values.  If town fathers could attract a college, they could make the case that the town was no mere agricultural village but a cultural center, the kind of place where prospective land buyers would want to build a house, set up a business, and raise a family.  Starting a college was cheap and easy.  It would bear the town’s name and serve as its cultural symbol.  With luck it would give the town leverage to become a county seat or gain a station on the rail line.  So a college was a good investment in a town’s future prosperity (Brown, 1995).

            The liberal arts college was the dominant but not the only form that higher education took in nineteenth century America.  Three other types of institutions emerged before 1880.  One was state universities, which were founded and governed by individual states but which received only modest state funding.  Like liberal arts colleges, they arose largely for competitive reasons.  They emerged in the new states as the frontier moved westward, not because of huge student demand but because of the need for legitimacy.  You couldn’t be taken seriously as a state unless you had a state university, especially if your neighbor had just established one. 

            The second form of institution was the land-grant college, which arose from federal efforts to promote land sales in the new territories by providing public land as a founding grant for new institutions of higher education.  Turning their backs on the classical curriculum that had long prevailed in colleges, these schools had a mandate to promote practical learning in fields such as agriculture, engineering, military science, and mining. 

            The third form was the normal school, which emerged in the middle of the century as state-founded high-school-level institutions for the preparation of teachers.  It wasn’t until the end of the century that these schools evolved into teachers colleges; and in the twentieth century they continued that evolution, turning first into full-service state colleges and then by midcentury into regional state universities. 

            Unlike liberal arts colleges, all three of these types of institutions were initiated by and governed by states, and all received some public funding.  But this funding was not nearly enough to keep them afloat, so they faced challenges similar to those of the liberal arts colleges, since their survival depended heavily on their ability to bring in student tuition and draw donations.  In short, the liberal arts college established the model for survival in a setting with a strong market, weak state, and divided church; and the newer public institutions had to play by the same rules.

            By 1880, the structure of the American system of higher education was well established.  It was a system made up of lean and adaptable institutions, with a strong base in rural communities, and led by entrepreneurial presidents, who kept a sharp eye out for possible threats and opportunities in the highly competitive higher-education market.  These colleges had to attract and keep the loyalty of student consumers, whose tuition was critical for paying the bills and who had plenty of alternatives in towns nearby.  And they also had to maintain a close relationship with local notables, religious peers, and alumni, who provided a crucial base of donations.

            The system was only missing two elements to make it workable in the long term.  It lacked sufficient students, and it lacked academic legitimacy.  On the student side, this was the most overbuilt system of higher education the world has ever seen.  In 1880, 811 colleges were scattered across a thinly populated countryside, which amounted to 16 colleges per million of population (Collins, 1979, Table 5.2).  The average college had only 131 students and 14 faculty and granted 17 degrees per year (Carter et al., 2006, Table Bc523, Table Bc571; U.S. Bureau of the Census, 1975, Series H 751).  As I have shown, these colleges were not established in response to student demand, but nonetheless they depended on students for survival.  Without a sharp growth in student enrollments, the whole system would have collapsed. 

            On the academic side, these were colleges in name only.  They were parochial in both senses of the word, small town institutions stuck in the boondocks and able to make no claim to advancing the boundaries of knowledge.  They were not established to promote higher learning, and they lacked both the intellectual and economic capital required to carry out such a mission.  Many high schools had stronger claims to academic prowess than these colleges.  European visitors in the nineteenth century had a field day ridiculing the intellectual poverty of these institutions.  The system was on death watch.  If it was going to be able to survive, it needed a transfusion that would provide both student enrollments and academic legitimacy. 

            That transfusion arrived just in time from a new European import, the German research university.  This model offered everything that was lacking in the American system.  It reinvented university professors as the best minds of the generation, whose expertise was certified by the new entry-level degree, the Ph.D., and who were pushing back the frontiers of knowledge through scientific research.  It brought to the college campus graduate students, who would be selected for their high academic promise and trained to follow in the footsteps of their faculty mentors.

            And at the same time that the German model offered academic credibility to the American system, the peculiarly Americanized form of this model made university enrollment attractive for undergraduates, whose focus was less on higher learning than on jobs and parties.  The remodeled American university provided credible academic preparation in the cognitive skills required for professional and managerial work; and it provided training in the social and political skills required for corporate employment, through the process of playing the academic game and taking on roles in intercollegiate athletics and on-campus social clubs.  It also promised a social life in which one could have a good time and meet a suitable spouse. 

            By 1900, with the arrival of the research university as the capstone, nearly all of the core elements of the current American system of higher education were in place.  Subsequent developments focused primarily on extending the system downward, adding layers that would make it more accessible to larger numbers of students – as normal schools evolved into regional state universities and as community colleges emerged as the open-access base of an increasingly stratified system.  Here ends the history portion of this account. Now we move on to the sociological part of the story.

Sociological Traits of the System

            When the research university model arrived to save the day in the 1880s, the American system of higher education was in desperate straits.  But at the same time this system had an enormous reservoir of potential strengths that prepared it for its future climb to world dominance.  Let’s consider some of these strengths.  First it had a huge capacity in place, the largest in the world by far:  campuses, buildings, faculty, administration, curriculum, and a strong base in the community.  All it needed was students and credibility. 

            Second, it consisted of a group of institutions that had figured out how to survive under dire Darwinian circumstances, where supply greatly exceeded demand and where there was no secure stream of funding from church or state.  In order to keep the enterprises afloat, they had learned how to hustle for market position, troll for students, and dun donors.  Imagine how well this played out when students found a reason to line up at their doors and donors suddenly saw themselves investing in a winner with a soaring intellectual and social mission. 

            Third, they had learned to be extraordinarily sensitive to consumer demand, upon which everything depended.  Fourth, as a result they became lean and highly adaptable enterprises, which were not bounded by the politics of state policy or the dogma of the church but could take advantage of any emerging possibility for a new program, a new kind of student or donor, or a new area of research.  Not only were they able to adapt but they were forced to do so quickly, since otherwise the competition would jump on the opportunity first and eat their lunch.

            By the time the research university arrived on the scene, the American system of higher education was already firmly established and governed by its own peculiar laws of motion and its own evolutionary patterns.  The university did not transform the system.  Instead it crowned the system and made it viable for a century of expansion and elevation.  Americans could not simply adopt the German university model, since this model depended heavily on strong state support, which was lacking in the U.S.  And the American system would not sustain a university as elevated as the German university, with its tight focus on graduate education and research at the expense of other functions.  American universities that tried to pursue this approach – such as Clark University and Johns Hopkins – found themselves quickly trailing the pack of institutions that adopted a hybrid model grounded in the preexisting American system.  In the U.S., the research university provided a crucial add-on rather than a transformation.  In this institutionally-complex market-based system, the research university became embedded within a convoluted but highly functional structure of cross-subsidies, interwoven income streams, widely dispersed political constituencies, and a bewildering array of goals and functions. 

            At the core of the system is a delicate balance among three starkly different models of higher education.  These three roughly correspond to Clark Kerr’s famous characterization of the American system as a mix of the British undergraduate college, the American land-grant college, and the German research university (Kerr, 2001, p. 14).  The first is the populist element, the second is the practical element, and the third is the elite element.  Let me say a little about each of these and make the case for how they work to reinforce each other and shore up the overall system.  I argue that these three elements are unevenly distributed across the whole system, with the populist and practical parts strongest in the lower tiers of the system, where access is easy and job utility is central, and the elite element strongest in the upper tier.  But I also argue that all three are present in the research university at the top of the system.  Consider how all these elements come together in a prototypical flagship state university.

            The populist element has its roots in the British residential undergraduate college, which colonists had in mind when they established the first American colleges; but the changes that emerged in the U.S. in the early nineteenth century were critical.  Key was the fact that American colleges during this period were broadly accessible in a way that colleges in the U.K. never were until the advent of the red-brick universities after the Second World War.  American colleges were not located in fashionable areas in major cities but in small towns in the hinterland.  There were far too many of them for them to be elite, and the need for students meant that tuition and academic standards both had to be kept relatively low.  The American college never exuded the odor of class privilege to the same degree as Oxbridge; its clientele was largely middle class.  For the new research university, this legacy meant that the undergraduate program provided critical economic and political support. 

            From the economic perspective, undergrads paid tuition, which – through large classes and thus the need for graduate teaching assistants – supported graduate programs and the larger research enterprise.  Undergrads, who were socialized in the rituals of football and fraternities, were also the ones who identified most closely with the university, which meant that in later years they became the most loyal donors.  As doers rather than thinkers, they were also the wealthiest group of alumni donors.  Politically, the undergraduate program gave the university a broad base of community support.  Since anyone could conceive of attending the state university, the institution was never as remote or alien as the German model.  Its athletic teams and academic accomplishments were a point of pride for state residents, whether or not they or their children ever attended.  They wore the school colors and cheered for it on game days.

            The practical element has its roots in the land-grant college.  The idea here was that the university was not just an enterprise for providing liberal education for the elite but that it could also provide useful occupational skills for ordinary people.  Since the institution needed to attract a large group of students to pay the bills, the American university left no stone unturned when it came to developing programs that students might want.  It promoted itself as a practical and reliable mechanism for getting a good job.  This not only boosted enrollment, but it also sent a message to the citizens of the state that the university was making itself useful to the larger community, producing the teachers, engineers, managers, and dental hygienists that they needed.

            This practical bent also extended to the university’s research effort, which did not focus solely on ivory-tower pursuits.  Its researchers were working hard to design safer bridges, more productive crops, better vaccines, and more reliable student tests.  For example, when I taught at Michigan State I planted my lawn with Spartan grass seed, which was developed at the university.  These forms of applied research led to patents that brought substantial income back to the institution, but their most important function was to provide a broad base of support for the university among people who had no connection with it as an instructional or intellectual enterprise.  The idea was compelling: This is your university, working for you.

            The elite element has its roots in the German research university.  This is the component of the university formula that gives the institution academic credibility at the highest level.  Without it the university would just be a party school for the intellectually challenged and a trade school for job seekers.  From this angle, the university is the haven for the best thinkers, where professors can pursue intellectual challenges of the first order, develop cutting edge research in a wide array of domains, and train graduate students who will carry on these pursuits in the next generation.  And this academic aura envelops the entire enterprise, giving the lowliest freshman exposure to the most distinguished faculty and allowing the average graduate to sport a diploma burnished by the academic reputations of the best and the brightest.  The problem, of course, is that supporting professorial research and advanced graduate study is enormously expensive; research grants only provide a fraction of the needed funds. 

            So the populist and practical domains of the university are critically important components of the larger university package.  Without the foundation of fraternities and football, grass seed and teacher education, the superstructure of academic accomplishment would collapse of its own weight.  The academic side of the university can’t survive without both the financial subsidies and political support that come from the populist and the practical sides.  And the populist and practical sides rely on the academic legitimacy that comes from the elite side.  It’s the mixture of the three that constitutes the core strength of the American system of higher education.  This is why it is so resilient, so adaptable, so wealthy, and so powerful.  This is why its financial and political base is so broad and strong.  And this is why American institutions of higher education enjoy so much autonomy:  They respond to many sources of power in American society and they rely on many sources of support, which means they are not the captive of any single power source or revenue stream.

The Power of Form

            So my story about the American system of higher education is that it succeeded by developing a structure that allowed it to become both economically rich and politically autonomous.  It could tap multiple sources of revenue and legitimacy, which allowed it to avoid becoming the wholly owned subsidiary of the state, the church, or the market.  And by virtue of its structurally reinforced autonomy, college is good for a great many things.

            At last we come back to our topic.  What is college good for?  For those of us on the faculties of research universities, these institutions provide several core benefits that we see as especially important.  At the top of the list is that they preserve and promote free speech.  They are zones where faculty and students can feel free to pursue any idea, any line of argument, and any intellectual pursuit that they wish – free of the constraints of political pressure, cultural convention, or material interest.  Closely related to this is the fact that universities become zones where play is not only permissible but even desirable, where it’s ok to pursue an idea just because it’s intriguing, even though there is no apparent practical benefit that this pursuit would produce.

            This, of course, is a rather idealized version of the university.  In practice, as we know, politics, convention, and economics constantly intrude on the zone of autonomy in an effort to shape the process and limit these freedoms.  This is particularly true in the lower strata of the system.  My argument is not that the ideal is met but that the structure of American higher education – especially in the top tier of the system – creates a space of relative autonomy, where these constraining forces are partially held back, allowing the possibility for free intellectual pursuits that cannot be found anywhere else. 

            Free intellectual play is what we in the faculty tend to care about, but others in American society see other benefits arising from higher education that justify the enormous time and treasure that we devote to supporting the system.  Policymakers and employers put primary emphasis on higher education as an engine of human capital production, which provides the economically relevant skills that drive increases in worker productivity and growth in the GDP.  They also hail it as a place of knowledge production, where people develop valuable technologies, theories, and inventions that can feed directly into the economy.  And companies use it as a place to outsource much of their needs for workforce training and research-and-development. 

            These pragmatic benefits that people see coming from the system of higher education are real.  Universities truly are socially useful in such ways.  But it’s important to keep in mind that these social benefits can arise only if the university remains a preserve for free intellectual play.  Universities are much less useful to society if they restrict themselves to the training of individuals for particular present-day jobs, or to the production of research to solve current problems.  They are most useful if they function as storehouses for knowledge, skills, technologies, and theories – for which there is no current application but which may turn out to be enormously useful in the future.  They are the mechanism by which modern societies build capacity to deal with issues that have not yet emerged but sooner or later are likely to do so.

            But that is a discussion for another speech by another scholar.  The point I want to make today about the American system of higher education is that it is good for a lot of things but it was established in order to accomplish none of these things.  As I have shown, the system that arose in the nineteenth century was not trying to store knowledge, produce capacity, or increase productivity.  And it wasn’t trying to promote free speech or encourage play with ideas.  It wasn’t even trying to preserve institutional autonomy.  These things happened as the system developed, but they were all unintended consequences.  What was driving development of the system was a clash of competing interests, all of which saw the college as a useful medium for meeting particular ends.  Religious denominations saw colleges as a way to spread the faith.  Town fathers saw them as a way to promote local development and increase property values.  The federal government saw them as a way to spur the sale of federal lands.  State governments saw them as a way to establish credibility in competition with other states.  College presidents and faculty saw them as a way to promote their own careers.  And at the base of the whole process of system development were the consumers, the students, without whose enrollment and tuition and donations the system would not have been able to persist.  The consumers saw the college as useful in a number of ways:  as a medium for seeking social opportunity and achieving social mobility; as a medium for preserving social advantage and avoiding downward mobility; as a place to have a good time, enjoy an easy transition to adulthood, pick up some social skills, and meet a spouse; even, sometimes, as a place to learn.

            The point is that the primary benefits of the system of higher education derive from its form, but this form did not arise in order to produce these benefits.  We need to preserve the form in order to continue enjoying these benefits, but unfortunately the organizational foundations upon which the form is built are, on the face of it, absurd.  And each of these foundational qualities is currently under attack from the perspective of alternative visions that, in contrast, have a certain face validity.  If the attackers accomplish their goals, the system’s form, which has been so enormously productive over the years, will collapse, and with this collapse will come the end of the university as we know it.  I didn’t promise this lecture would end well, did I?

            Let me spell out three challenges that would undercut the core autonomy and synergy that makes the system so productive in its current form.  On the surface, each of the proposed changes seems quite sensible and desirable.  Only by examining the implications of actually pursuing these changes can we see how they threaten the foundational qualities that currently undergird the system.  The system’s foundations are so paradoxical, however, that mounting a public defense of them would be difficult indeed.  Yet it is precisely these traits of the system that we need to defend in order to preserve the current highly functional form of the university.  In what follows, I am drawing inspiration from the work of Suzanne Lohmann (2004, 2006), a political scientist at UCLA who has addressed these issues most astutely.

            One challenge comes from prospective reformers of American higher education who want to promote transparency.  Who can be against that?  This idea derives from the accountability movement, which has already swept across K-12 education and is now pounding the shores of higher education.  It simply asks universities to show people what they’re doing.  What is the university doing with its money and its effort?  Who is paying for what?  How do the various pieces of the complex structure of the university fit together?  And are they self-supporting or drawing resources from elsewhere?  What is faculty credit-hour production?  How is tuition related to instructional costs?  And so on.   These demands make a lot of sense. 

            The problem, however, as I have shown today, is that the autonomy of the university depends on its ability to shield its inner workings from public scrutiny.  It relies on opacity.  Autonomy will end if the public can see everything that is going on and what everything costs.  Consider all of the cross subsidies that keep the institution afloat:  undergraduates support graduate education, football supports lacrosse, adjuncts subsidize professors, rich schools subsidize poor schools.  Consider all of the instructional activities that would wilt in the light of day; consider all of the research projects that could be seen as useless or politically unacceptable.  The current structure keeps the inner workings of the system obscure, which protects the university from intrusions on its autonomy.  Remember, this autonomy arose by accident not by design; its persistence depends on keeping the details of university operations out of public view.

            A second and related challenge comes from reformers who seek to promote disaggregation.  The university is an organizational nightmare, they say, with all of those institutes and centers, departments and schools, programs and administrative offices.  There are no clear lines of authority, no mechanisms to promote efficiency and eliminate duplication, no tools to achieve economies of scale.  Transparency is one step in the right direction, they say, but the real reform that is needed is to take apart the complex interdependencies and overlapping responsibilities within the university and then figure out how each of these tasks could be accomplished in the most cost-effective and outcome-effective manner.  Why not have a few star professors tape lectures and then offer Massive Open Online Courses at colleges across the country?  Why not have institutions specialize in what they’re best at – remedial education, undergraduate instruction, vocational education, research production, or graduate student training?  Putting them together into a single institution is expensive and grossly inefficient.

            But recall that it is precisely the aggregation of purposes and functions – the combination of the populist, the practical, and the elite – that has made the university so strong, so successful, and, yes, so useful.  This combination creates a strong base both financially and politically and allows for forms of synergy that cannot happen with a set of isolated educational functions.  The fact is that this institution can’t be disaggregated without losing what makes it the kind of university that students, policymakers, employers, and the general public find so compelling.  A key organizational element that makes the university so effective is its chaotic complexity.

            A third challenge comes not from reformers intruding on the university from the outside but from faculty members meddling with it from the inside.  The threat here arises from the dangerous practice of acting on academic principle.  Fortunately, this is not very common in academe.  But the danger is lurking in the background of every decision about faculty hires.  Here’s how it works.  You review a finalist for a faculty position in a field not closely connected to your own, and you find to your horror that the candidate’s intellectual domain seems absurd on the face of it (how can anyone take this type of work seriously?) and the candidate’s own scholarship doesn’t seem credible.  So you decide to speak against hiring the candidate and organize colleagues to support your position.  But then you happen to read a paper by Suzanne Lohmann, who points out something very fundamental about how universities work. 

            Universities are structured in a manner that protects the faculty from the outside world (that is, protecting them from the forces of transparency and disaggregation), but they are also organized in a manner that protects the faculty from each other.  The latter is the reason we have such an enormous array of departments and schools in universities.  If every historian had to meet the approval of geologists and every psychologist had to meet the approval of law faculty, no one would ever be hired.

           The simple fact is that part of what keeps universities healthy and autonomous is hypocrisy.  Because of the Balkanized structure of university organization, we all have our own protected spaces to operate in and we all pass judgment only on our own peers within that space.  To do otherwise would be disastrous.  We don’t have to respect each other’s work across campus, we merely need to tolerate it – grumbling about each other in private and making nice in public.  You pick your faculty, we’ll pick ours.  Lohmann (2006) calls this core procedure of the academy “log-rolling.”  If we all operated on principle, if we all only approved scholars we respected, then the university would be a much diminished place.  Put another way, I wouldn’t want to belong to a university that consisted only of people I found worthy.  Gone would be the diversity of views, paradigms, methodologies, theories, and world views that makes the university such a rich place.  The result is incredibly messy, and it permits a lot of quirky – even ridiculous – research agendas, courses, and instructional programs.  But in aggregate, this libertarian chaos includes an extraordinary range of ideas, capacities, theories, and social possibilities.  It’s exactly the kind of mess we need to treasure and preserve and defend against all opponents.

            So here is the thought I’m leaving you with.  The American system of higher education is enormously productive and useful, and it’s a great resource for students, faculty, policymakers, employers, and society.  What makes it work is not its substance but its form.  Crucial to its success is its devotion to three formal qualities:  opacity, chaotic complexity, and hypocrisy.  Embrace these forms and they will keep us free.

Posted in History of education, Public Good, Schooling, Welfare

Public Schooling as Social Welfare

This post is a follow-up to a piece I posted three weeks ago, which was Michael Katz’s 2010 essay, Public Education as Welfare.  Below is my own take on this subject, which I wrote for a book that will be published in recognition of the hundredth anniversary of the Horace Mann League.  The tentative title of the book is Public Education: The Cornerstone of American Democracy and the editors are David Berliner and Carl Hermanns.  All of the contributions focus on the role that public schools play in American life.  Here’s a link to a pdf of my piece.

Public Schooling as Social Welfare

David F. Labaree

            In the mid-nineteenth century, Horace Mann made a forceful case for a distinctly political vision of public schooling, as a mechanism for creating citizens for the American republic. In the twentieth century, policymakers put forth an alternative economic vision for this institution, as a mechanism for turning out productive workers to promote growth of the American economy. In this essay, I explore a third view of public schooling, which is less readily recognizable than the other two but no less important.  This is a social vision, in which public schooling serves as a mechanism for promoting social welfare, by working to ameliorate the inequalities of American society.

All three of these visions construe public schooling as a public good.  As a public good, its benefits flow to the entire community, including those who never attended school, by enriching the broad spectrum of political, economic, and social life.  But public schooling is also a private good.  As such, its benefits accrue only to its graduates, who use their diplomas to gain selective access to jobs at the expense of those who lack these credentials. 

Consider the relative costs and benefits of these two types of goods.  Investing in public goods is highly inclusive, in that every dollar invested goes to support the common weal.  But at the same time this investment is also highly contingent, since individuals will gain the benefits even if they don’t contribute, getting a free ride on the contributions of others.  The usual way around the free rider problem is to make such investment mandatory for everyone through the mechanism of taxation.  By contrast, investment in private goods is self-sustaining, with no state action needed.  Individuals have a strong incentive to invest because only they gain the benefit.  In addition, as a private good its effects are highly exclusive, benefiting some people at the expense of others and thus tending to increase social inequality. 

Like the political and economic visions of schooling, the welfare vision carries the traits of its condition as a public good.  Its scope is inclusive, its impact is egalitarian, and its sustainability depends heavily on state mandate.  But it lacks a key advantage shared by the other two, whose benefits clearly flow to the population as a whole.  Everyone benefits by being part of a polity in which citizens are capable, law abiding, and informed.  Everyone benefits by being part of an economy in which workers contribute productively to the general prosperity. 

In contrast, however, it’s less obvious that everyone benefits from transferring public resources to disadvantaged citizens in order to improve their quality of life.  The word welfare carries a foul odor in American politics, redolent of laziness, bad behavior, and criminality.  It’s so bad that in 1980 the federal government changed the name of the Department of Health, Education, and Welfare to Health and Human Services just to get rid of the stigmatized term.

So one reason that the welfare function doesn’t jump to mind when you think of schools is that we really don’t want to associate the two.  Don’t besmirch schooling by calling it welfare.  Michael Katz caught this feeling in the opening sentences of his 2010 essay, “Public Education as Welfare,” which serves as a reference point for my own essay:  “Welfare is the most despised public institution in America. Public education is the most iconic. To associate them with each other will strike most Americans as bizarre, even offensive.”  But let’s give it a try anyway.

My own essay arises from the time when I’m writing it – the summer of 2020, during the early phases of the Covid-19 pandemic.  Like everyone else in the US, I watched in amazement this spring when schools suddenly shut down across the country and students started a new regime of online learning from home.  It started me thinking about what schools mean to us, what they do for us.

Often it’s only when an institution goes missing that we come to recognize its value.  After the Covid shutdown, parents, children, officials, and citizens discovered just what they lost when the kids came home to stay.  You could hear voices around the country and around the globe pleading, “When are schools going to open again?”

I didn’t hear people talking much about the other two public goods views of schooling.  There wasn’t a groundswell of opinion complaining about the absence of citizenship formation or the falloff of human capital production.  Instead, there was a growing awareness of the various social welfare functions of schooling that were now suddenly gone.  Here are a few, in no particular order.

Schools are the main source of child care for working parents.  When schools close, someone needs to stay home to take care of the younger children.  For parents with the kind of white collar jobs that allow them to work from home, this causes a major inconvenience as they try to juggle work and child care and online schooling.  But for parents who can’t phone in their work, having to stay home with the kids is a huge financial sacrifice, and it’s even bigger for single parents in this category.

Schools are a key place for children to get healthy meals.  In the U.S., about 30 million students receive free or discounted lunch (and often breakfast) at school every day.  It’s so common that researchers use the proportion of “students on free or reduced lunch” as a measure of the poverty rate in individual schools.  When schools close, these children go hungry.  In response to this problem, a number of closed school systems have continued to prepare these meals for parents to pick up and take home with them.

Schools are crucial for the health of children.  In the absence of universal health care in the U.S., schools have served as a frail substitute.  They require all students to have vaccinations.  They provide health education.  And they have school nurses who can check for student ailments and make referrals.

Schools are especially important for dealing with the mental health of young people.  Teachers and school psychologists can identify mental illness and serve as prompts for getting students treatment.  Special education programs identify developmental disabilities in students and devise individualized plans for treating them.

Schools serve as oases for children who are abused at home.  Educators are required by law to look out for signs of mental or physical abuse and to report these cases to authorities.  When schools close, these children are trapped in abusive settings at home, which gives the lie to the idea of sheltering in place.  For many students, the true shelter is the school itself.  In the absence of teacher referrals, agencies reported a sharp drop-off in the reports of child abuse.

Schools are domains of relative safety for students who live in dangerous neighborhoods.  For many kids who live in settings with gangs and drugs and crime, getting to and from school is the most treacherous part of the day.  Once inside the walls of the school, they are relatively free of physical threats.  Closing school doors to students puts them at risk.

Schools are environments that are often healthier than their own homes.  Students in wealthy neighborhoods may look on schools in poor neighborhoods as relatively shabby and depressing, but for many children the buildings have a degree of heat, light, cleanliness, and safety that they can’t find at home.  These schools may not have swimming pools and tennis courts, but they also don’t have rats and refuse.

Schools may be the only institutional setting for many kids in which the professional norm is to serve the best interests of the child.  We know that students can be harmed by schools.  All it takes is a bully or a disparaging judgment.  But the core of the educator’s job is to foster growth, spur interest, increase knowledge, enhance skill, and promote development.  Being cut off from such an environment for a long period of time is a major loss for any student, rich or poor.

Schools are one of the few places in American life where young people undergo a shared experience.  This is especially true at the elementary level, where most children in a neighborhood attend the same school and undergo a relatively homogeneous curriculum.  It’s less true in high school, where the tracked curriculum provides more divergent experiences.  A key component of the shared experience is that it places you face-to-face with students who may be different from you.  As we have found, when you turn schooling into online learning, you tend to exacerbate social differences, because students are isolated in disparate family contexts where there is a sharp divide in internet access. 

Schools are where children socialize with each other.  A key reason kids want to go to school is that their friends are there.  It’s where they make friends they otherwise would never have met, learn to maintain these friendships, and learn how to manage conflicts.  Humans are thoroughly social animals, who need interaction with others in order to grow and thrive.  So being cooped up at home leaves everyone, but especially children, without a central component of human existence.

Schools are the primary public institution for overseeing the development of young children into healthy and capable adults.  Families are the core private institution engaged in this process, but schools serve as the critical intermediary between family and the larger society.  They’re the way our children learn how to live and engage with other people’s children, and they’re a key way that society seeks to ameliorate social differences that might impede children’s development, serving as what Mann called “the great equalizer of the conditions of men – the balance wheel of the social machinery.”

These are some aspects of schooling that we take for granted but don’t think about very much.  For policymakers, these may be considered side effects of the school’s academic mission, but for many (maybe most) families they are a main effect.  And the various social support roles that schools play are particularly critical in a country like the United States, where the absence of a robust social welfare system means that schools stand as the primary alternative.  The schools’ absence made the heart grow fonder; we all became aware of just how much schools do for us.

Systems of universal public schooling did not arise in order to promote social welfare.  During the last 200 years, in countries around the world, the impetus came from the kind of political rationale that Horace Mann so eloquently put forward.  Public schools emerged as part of the process of creating nation states.  Their function was to turn subjects of the crown into citizens of the nation, or, as Eugen Weber put it in the title of his wonderful book, to turn Peasants into Frenchmen.  Schools took localized populations with regional dialects and traditional authority relations and helped affiliate these populations with an imagined community called France or the United States.  They created a common language (in the case of France, Parisian French), a shared sense of national membership, and a shared educational experience.

This is the origin story of public schooling.  But once schools became institutionalized and the state’s existence grew relatively secure, they began to accumulate other functions, both private (gaining an edge in the competition for social position) and public (promoting economic growth and supporting social welfare).  In different countries these functions took different forms, and the load the state placed on schooling varied considerably.  The American case, as is so often true, was extreme.

The U.S. bet the farm on the public school.  It was relatively early in establishing a system of publicly funded and governed schools across the country in the second quarter of the nineteenth century.  But it was way ahead of European countries in its rapid upward expansion of the system.  Universal enrollment moved quickly from primary school to grammar school to high school.  By 1900, the average American teenager had completed eight years of schooling.  This led to a massive surge in high school enrollments, which doubled every decade between 1890 and 1940.  By 1951, 75 percent of 16-year-olds were enrolled in high school, compared to only 14 percent in the United Kingdom.  In the three decades after the Second World War, the surge spilled over into colleges, with the rate of enrollment between 1950 and 1980 rising from 9 to 40 percent of the eligible population.

The US system had an indirect connection to welfare even before it started acting as a kind of social service agency.  The short version of the story is this.  In the second part of the nineteenth century, European countries like Disraeli’s United Kingdom and Bismarck’s Germany set up the framework for a welfare state, with pensions and other elements of a safety net for the working class.  The U.S. chose not to take this route, which it largely deferred until the 1930s.  Instead it put its money on schooling.  The vision was to provide individuals with educational opportunities to get ahead on their own rather than to give them direct aid to improve their current quality of life.  The idea was to focus on developing a promising future rather than on meeting current needs.  People were supposed to educate their way out of poverty, climbing up the ladder with the help of state schooling.  The fear was that providing direct relief for food, clothing, and shelter – the dreaded dole – would only stifle their incentive to get ahead.  Better to stimulate the pursuit of future betterment than to run the risk that people might get used to subsisting comfortably in the present.

By nature, schooling is a forward-looking enterprise.  Its focus is on preparing students for their future roles as citizens, workers, and members of society rather than on helping them deal with their current living conditions.  By setting up an educational state rather than a welfare state, the U.S. in effect chose to write off the parents, seen as a lost cause, and concentrate instead on providing opportunities to the children, seen as still salvageable. 

In the twentieth century, spurred by the New Deal’s response to the Great Depression, the U.S. developed the rudiments of a welfare state, with pensions and then health care for the elderly, temporary cash support and health care for the poor, and unemployment insurance for the worker.  At the same time, schools began to deal with the problems arising from poverty that students brought with them to the classroom.  This was propelled by a growing understanding that hungry, sick, and abused children are not going to be able to take advantage of educational opportunities in order to attain a better life in the future.  Schooling alone couldn’t provide the chance for schooling to succeed.  Thus the introduction of free meals, the school nurse, de facto day care, and other social-work activities in the school.

The tale of the rise of the social welfare function of the American public school, therefore, is anything but a success story.  Rather, it’s a story of one failure on top of another.  First is the failure to deal directly with social inequality in American life, when instead we chose to defer the intervention to the future by focusing on educating children while ignoring their parents.  Second, when poverty kept interfering with the schooling process, we introduced rudimentary welfare programs into the school in order to give students a better chance, while still leaving poor parents to their own devices.

As with the American welfare system in general, school welfare is not much but it’s better than nothing.  Carrying on the pattern set in the nineteenth century, we are still shirking responsibility for dealing directly with poverty through the political system by opposing universal health care and a strong safety net.  Instead, we continue to put our money on schooling as the answer when the real solution lies elsewhere.  Until we decide to implement that solution, however, schooling is all we’ve got. 

In the meantime, schools serve as the wobbly but indispensable balance wheel of American social life.  Too bad it took a global pandemic to get us to realize what we lose when schools close down.

Posted in Educational goals, History of education

Are Students Consumers?

This post is a piece I published in Education Week way back in 1997.  It’s a much shorter and more accessible version of the most cited paper I ever published, “Public Goods, Private Goods: The American Struggle over Educational Goals.”  Drawing on the latter, it lays out a case for three competing educational goals that have shaped the history of American schooling: democratic equality, social efficiency, and social mobility. 

In reading it over, I find it holds up rather well, except for a tendency to demonize social mobility.  Since then I’ve come to think that, while the latter does a lot of harm, it’s also an essential component of schooling.  We can’t help but be concerned about the selective benefit that schooling provides us and our children, even as we also care about supporting the broader benefits that schooling provides the public as a whole.

See what you think.  Here’s a link to the original and also to a PDF in case you can’t get past the paywall.  


Are Students “Consumers”?

David F. Labaree

Observers of American education have frequently noted that the general direction of educational reform over the years has not been forward but back and forth. Reform, it seems, is less an engine of progress than a pendulum, swinging monotonously between familiar policy alternatives. Progress is hard to come by.

However, a closer reading of the history of educational change in this country reveals a pattern that is both more complex and in a way more troubling than this. Yes, the back-and-forth movement is real, but it turns out that this pattern is for the most part good news. It simply represents a periodic shift in emphasis between two goals for education — democratic equality and social efficiency — that represent competing but equally indispensable visions of education.

The bad news is that in the 20th century, and especially in the past several decades, the pendulum swings increasingly have given way to a steady movement in the direction of a third goal, social mobility. This shift from fluctuation to forward motion may look like progress, but it’s not. The problem is that it represents a fundamental change in the way we think about education, by threatening to transform this most public of institutions from a public good into a private good. The consequences for both school and society, I suggest, are potentially devastating.

Let me explain why. First we’ll consider the role that these three goals have played in American education, and then we can explore the implications of the movement from equality and efficiency to mobility.

The first goal is democratic equality, which is the oldest of the three. From this point of view, the purpose of schooling is to produce competent citizens. This goal provided the primary impetus for the common school movement, which established the foundation for universal public education in this country during the middle of the 19th century. The idea was and is that all citizens need to be able to think, understand the world around them, behave sociably, and act according to shared political values — and that public schools are the best places to accomplish these ends. The corollary of this goal is that all these capabilities need to be equally distributed, and that public schools can serve as what Horace Mann called the great “balance wheel,” by providing a common educational competence that helps reduce differences.

Some of the most enduring and familiar characteristics of our current system of education were formed historically in response to this goal. There are the neighborhood elementary school and the comprehensive high school, which draw together students from the whole community under one roof. There is the distinctively American emphasis on general education at all levels of the educational system. There is the long-standing practice of socially promoting students from grade to grade. And there is the strong emphasis on inclusion, which over the years has led to such innovations as racial integration and the mainstreaming of special education students.

The second goal is social efficiency, which first became prominent in the Progressive era at the turn of the century. From this perspective, the purpose of education is not to produce citizens but to train productive workers. The idea is that our society’s health depends on a growing economy, and the economy needs workers with skills that will allow them to carry out their occupational roles effectively. Schools, therefore, should place less emphasis on general education and more on the skills needed for particular jobs. And because skill requirements differ greatly from job to job, schools need to tailor curricula to the job and then sort students into the different curricula.

Consider some of the enduring effects that this goal has had on education over the years. There is the presence of explicitly vocational programs of study within the high school and college curriculum. There is the persistent practice of tracking and ability grouping. And there is the prominence of social efficiency arguments in the public rhetoric about education, echoing through every millage election and every race for public office in the past half-century. We are all familiar with the argument that pops up on these occasions — that education is the keystone of the community’s economic future, that spending money on education is really an investment in human capital that will pay big dividends.

Notice that the first two goals are in some ways quite different in the effects they have had on schools. One emphasizes a political role for schools while the other stresses an economic role. One pushes for general education, the other for specialized education. One homogenizes, the other differentiates.

But from another angle, the two take a similar approach, because they both treat education as a public good. A public good is one that benefits all members of a community, which means that you cannot avoid being affected by it. For example, police protection and road maintenance have an impact directly or indirectly on the life of everyone. Likewise, everyone stands to gain from a public school system that produces competent citizens and productive workers, even those members of the community who don’t have children in public schools.

This leads us to something that is quite distinctive about the third educational goal, the one I call social mobility. From the perspective of this goal, education is not a public good but a private good. If the first goal for education takes the viewpoint of the citizen and the second takes that of the taxpayer, the third takes the viewpoint of the individual educational consumer.

The purpose of education from this angle is not what it can do for democracy or the economy but what it can do for me. Historically, education has paid off handsomely for individuals who stayed in school and came away with diplomas. Educational credentials have made it possible for people to distinguish themselves from their competitors, giving them a big advantage in the race for good jobs and a comfortable life. As a result, education has served as a springboard to upward mobility for the working class and a buttress against downward mobility for the middle class.

Note that if education is going to serve the social-mobility goal effectively, it has to provide some people with benefits that others don’t get. Education in this sense is a private good that only benefits the owner, an investment in my future, not yours, in my children, not other people’s children. For such an educational system to work effectively, it needs to focus a lot of attention on grading, sorting, and selecting students. It needs to provide a variety of ways for individuals to distinguish themselves from others — such as by placing themselves in a more prestigious college, a higher curriculum track, the top reading group, or the gifted program. In this sense the social-mobility goal reinforces the same sorting and selecting tendency in education that is promoted by the social-efficiency goal, but without the same concern for providing socially useful skills.

Now that I’ve spelled out some of the main characteristics of these three goals for education, let me show how they can help us understand the major swings of the pendulum in educational reform over the last 200 years.

During the common school era in the mid-19th century, the dominant goal for American education was democratic equality. The connection between school and work at this point was weak. People earned job skills on the job rather than in school, and educational credentials offered social distinction but not necessarily preference in hiring.

By the end of the 19th century, however, both social efficiency and social mobility emerged as major factors in shaping education, while the influence of democratic equality declined. High school enrollments began to take off in the 1890s, which posed two big problems for education — a social-efficiency problem (how to provide education for the new wave of students), and a social-mobility problem (how to protect the value of high school credentials for middle-class consumers). The result was a series of reforms that defined the Progressive era in American education during the first half of the 20th century. These included such innovations as tracking, ability testing, ability grouping, vocationalism, special education, social promotion, and life adjustment.

Then in the 1960s and 1970s we saw a swing back from social efficiency to democratic equality (reinforced by the social-mobility goal). The national movement for racial equality brought pressure to integrate schools, and these arguments for political equality and individual opportunity led to a variety of related reforms aimed at reducing educational discrimination based on class, gender, and handicapping condition.

But in the 1980s and 1990s, the momentum shifted back from democratic equality to social efficiency — again reinforced by social mobility. The emerging movement for educational standards responded both to concerns about declining economic competitiveness (seen as a deficiency of human capital) and to concerns about a glut of high school and college credentials (seen as a threat to social mobility).

However, another way to think about these historical trends in educational reform is to turn attention away from the pendulum swings between the first two goals and to focus instead on the steady growth in the influence of the third goal throughout the last 100 years. Since its emergence as a factor in the late 19th century, social mobility has gradually grown to become the dominant goal in American education. Increasingly, neither of the other two goals can make strong headway except in alliance with the third. Only social mobility, it seems, can afford to go it alone any longer. A prime example is the recent push for educational choice, charters, and vouchers. This is the strongest educational reform movement of the 1990s, and it is grounded entirely within the consumer-is-king perspective of the social-mobility goal.

So, you may ask, what are the implications of all this? I want to mention two problems that arise from the history of conflicting goals in American education — one deriving from the conflict itself and the other from the emerging dominance of social mobility. The second problem is more serious than the first.

On the issue of conflict: Contradictory goals have shaped the basic structure of American schools, and the result is a system that is unable to accomplish any one of these goals very effectively — which has been a common complaint about schools. Also, much of what passes for educational reform may be little more than ritual swings back and forth between alternative goals — another common complaint. But I don’t think this problem is really resolvable in any simple way. Americans seem to want and need an education system that serves political equality and economic productivity and personal opportunity, so we might as well learn how to live with it.

The bigger problem is not conflict over goals but the possible victory of social mobility over the other two. The long-term trend is in the direction of this goal, and the educational reform initiatives in the last decade suggest that this trend is accelerating. At the center of the current talk about education is a series of reforms designed to empower the educational consumer, and if they win out, this would resolve the tension between public and private conceptions of education decisively in the favor of the private view. Such a resolution to the conflict over goals would hurt education in at least two ways.

First, in an educational system where the consumer is king, who will look after the public’s interest in education? As supporters of the two public goals have long pointed out, we all have a stake in the outcomes of public education, since this is the institution that shapes our fellow citizens and fellow workers. In this sense, the true consumers of education are all of the members of the community — and not just the parents of school children. But these parents are the only ones whose interests matter for the school choice movement, and their consumer preferences will dictate the shape of the system.

A second problem is this: In an educational system where the opportunity for individual advancement is the primary focus, it becomes more important to get ahead than to get an education. When the whole point of education is not to ensure that I learn valuable skills but instead to give me a competitive social advantage, then it is only natural for me to focus my ingenuity as a student toward acquiring the most desirable grades, credits, and degrees rather than toward learning the curriculum.

We have already seen this taking place in American education in the past few decades. Increasingly, students have been acting more like smart consumers than eager learners. Their most pointed question to the teacher is “Will this be on the test?” They see no point in studying anything that doesn’t really count. If the student is the consumer and the goal is to get ahead rather than to get an education, then it is only rational for students to look for the best deal. And that means getting the highest grades and the most valuable credentials for the lowest investment of effort. As cagey consumers, children in school have come to be like the rest of us when we’re in the shopping mall: They hate to pay full price when they can get the same product on sale.

That’s the bad news from this little excursion into educational history, but don’t forget the good news as well. For 200 years, Americans have seen education as a central pillar of public life. The contradictory structure of American education today has embedded within it an array of social expectations and instructional practices that clearly express these public purposes. There is reason to think that Americans will not be willing to let educational consumerism drive this public-ness out of the public schools.

Posted in Higher Education, History of education, Inequality, Meritocracy, Public Good, Uncategorized

How NOT to Defend the Private Research University

This post is a piece I published today in the Chronicle Review.  It’s about an issue that has been gnawing at me for years.  How can you justify the existence of institutions of the sort I taught at for the last two decades — rich private research universities?  These institutions obviously benefit their students and faculty, but what about the public as a whole?  Is there a public good they serve, and if so, what is it? 

Here’s the answer I came up with.  These are elite institutions to the core.  Exclusivity is baked in.  By admitting only a small number of elite students, they serve to promote social inequality by providing grads with an exclusive private good, a credential with high exchange value. But, in part because of this, they also produce valuable public goods — through the high-quality research and the advanced graduate training that only they can provide. 

Open access institutions can promote the social mobility that private research universities don’t, but they can’t provide the same degree of research and advanced training.  The paradox is this:  It’s in the public’s interest to preserve the elitism of these institutions.  See what you think.

Hoover Tower

How Not to Defend the Private Research University

David F. Labaree

In this populist era, private research universities are easy targets that reek of privilege and entitlement. It was no surprise, then, when the White House pressured Harvard to decline $8.6 million in Covid-19-relief funds, while Stanford, Yale, and Princeton all judiciously decided not to seek such aid. With tens of billions of endowment dollars each, they hardly seemed to deserve the money.

And yet these institutions have long received outsized public subsidies. The economist Richard Vedder estimated that in 2010, Princeton got the equivalent of $50,000 per student in federal and state benefits, while its similar-size public neighbor, the College of New Jersey, got just $2,000 per student. Federal subsidies to private colleges include research grants, which go disproportionately to elite institutions, as well as student loan and scholarship funds. As recipients of such largess, how can presidents of private research universities justify their institutions to the public?

Here’s an example of how not to do so. Not long after he assumed the presidency of Stanford in 2016, Marc Tessier-Lavigne made the rounds of faculty meetings on campus in order to introduce himself and talk about future plans for the university. When he came to a Graduate School of Education meeting that I attended, he told us his top priority was to increase access. Asked how he might accomplish this, he said that one proposal he was considering was to increase the size of the entering undergraduate class by 100 to 200 students.

The problem is this: Stanford admits about 4.3 percent of the candidates who apply to join its class of 1,700. Admitting a couple hundred additional students might raise the admit rate to 5 percent. Now that’s access. The issue is that, for a private research university like Stanford, the essence of its institutional brand is its elitism. The inaccessibility is baked in.

Raj Chetty’s social mobility data for Stanford show that 66 percent of its undergrads come from the top 20 percent by income, 52 percent from the top 10 percent, 17 percent from the top 1 percent, and just 4 percent from the bottom 20 percent. Only 12 percent of Stanford grads move up by two quintiles or more — it’s hard for a university to promote social mobility when the large majority of its students starts at the top.

Compare that with the data for California State University at Los Angeles, where 12 percent of students are from the top quintile and 22 percent from the bottom quintile. Forty-seven percent of its graduates rise two or more income quintiles. Ten percent make it all the way from the bottom to the top quintile.

My point is that private research universities are elite institutions, and they shouldn’t pretend otherwise. Instead of preaching access and making a mountain out of the molehill of benefits they provide for the few poor students they enroll, they need to demonstrate how they benefit the public in other ways. This is a hard sell in our populist-minded democracy, and it requires acknowledging that the very exclusivity of these institutions serves the public good.

For starters, in making this case, we should embrace the emphasis on research production and graduate education and accept that providing instruction for undergraduates is only a small part of the overall mission. Typically these institutions have a much higher proportion of graduate students than large public universities oriented toward teaching (graduate students are 57 percent of the total at Stanford and just 8.5 percent in the California State University system).

Undergraduates may be able to get a high-quality education at private research universities, but there are plenty of other places where they could get the same or better, especially at liberal-arts colleges. Undergraduate education is not what makes these institutions distinctive. What does make them stand out are their professional schools and doctoral programs.


Private research universities are souped-up versions of their public counterparts, and in combination they exert an enormous impact on American life.

As of 2017, the Association of American Universities, a club consisting of the top 65 research universities, represented just 2 percent of all four-year colleges and 12 percent of all undergrads. And yet the group accounted for over 20 percent of all U.S. graduate students; 43 percent of all research doctorates; 68 percent of all postdocs; and 38 percent of all Nobel Prize winners. In addition, its graduates occupy the centers of power, including, by 2019, 64 of the Fortune 100 CEOs; 24 governors; and 268 members of Congress.

From 2014 to 2018, AAU institutions collectively produced 2.4 million publications, and their collective scholarship received 21.4 million citations. That research has an economic impact — these same institutions have established 22 research parks and, in 2018 alone, they produced over 4,800 patents, over 5,000 technology license agreements, and over 600 start-up companies.

Put all this together and it’s clear that research universities provide society with a stunning array of benefits. Some of these benefits accrue to individual entrepreneurs and investors, but the benefits for society as a whole are extraordinary. These universities drive widespread employment, technological advances that benefit consumers worldwide, and the improvement of public health (think of all the university researchers and medical schools advancing Covid-19-research efforts right now).

Besides their higher proportion of graduate students and lower student-faculty ratio, private research universities have other major advantages over publics. One is greater institutional autonomy. Private research universities are governed by a board of laypersons who own the university, control its finances, and appoint its officers. Government can dictate how they use the public subsidies they receive (except tax subsidies), but otherwise they are free to operate as independent actors in the academic market. This allows these colleges to pivot quickly to take advantage of opportunities for new programs of study, research areas, and sources of funding, largely independent of political influence, though they do face a fierce academic market full of other private colleges.

A 2010 study of universities in Europe and the U.S. by Caroline Hoxby and associates shows that this mix of institutional autonomy and competition is strongly associated with higher rankings in the world hierarchy of higher education. They find that every 1-percent increase in the share of the university budget that comes from government appropriations corresponds with a decrease in international ranking of 3.2 ranks. At the same time, each 1-percent increase in the university budget from competitive grants corresponds with an increase of 6.5 ranks. They also find that universities high in autonomy and competition produce more patents.

Another advantage the private research universities enjoy over their public counterparts, of course, is wealth. Stanford’s endowment is around $28 billion, and Berkeley’s is just under $5 billion, but because Stanford is so much smaller (16,000 versus 42,000 total students) this multiplies the advantage. Stanford’s endowment per student dwarfs Berkeley’s. The result is that private universities have more research resources: better labs, libraries, and physical plant; higher faculty pay (e.g., $254,000 for full professors at Stanford, compared to $200,000 at Berkeley); more funding for grad students; and more staff support.
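
To make the endowment-per-student comparison concrete, here is a minimal back-of-the-envelope sketch in Python that uses only the rounded figures cited above (endowments of roughly $28 billion and $5 billion; roughly 16,000 and 42,000 total students). Treat it as an illustration of the arithmetic, not as precise institutional data.

```python
# Back-of-the-envelope endowment-per-student comparison,
# using the rounded figures cited in the text above.
endowment = {"Stanford": 28e9, "Berkeley": 5e9}       # dollars, approximate
students = {"Stanford": 16_000, "Berkeley": 42_000}   # total enrollment, approximate

per_student = {u: endowment[u] / students[u] for u in endowment}
for u, amount in per_student.items():
    print(f"{u}: about ${amount:,.0f} in endowment per student")

# On these rounded inputs, Stanford comes out around $1.75 million per student
# versus roughly $120,000 at Berkeley, a gap on the order of 15-fold.
print(f"Ratio: roughly {per_student['Stanford'] / per_student['Berkeley']:.0f}x")
```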

A central asset of private research universities is their small group of academically and socially elite undergraduate students. The academic skill of these students is an important draw for faculty, but their current and future wealth is particularly important for the institution. From a democratic perspective, this wealth is a negative. The student body’s heavy skew toward the top of the income scale is a sign of how these universities are not only failing to provide much social mobility but are in fact actively engaged in preserving social advantage. We need to be honest about this issue.

But there is a major upside. Undergraduates pay their own way (as do students in professional schools); but the advanced graduate students don’t — they get free tuition plus a stipend to pay living expenses, which is subsidized, both directly and indirectly, by undergrads. The direct subsidy comes from the high sticker price undergrads pay for tuition. Part of this goes to help out upper-middle-class families who still can’t afford the tuition, but the rest goes to subsidize grad students.

The key financial benefits from undergrads come after they graduate, when the donations start rolling in. The university generously admits these students (at the expense of many of their peers), provides them with an education and a credential that jump-starts their careers and papers over their privilege, and then harvests their gratitude over a lifetime. Look around any college campus — particularly at a private research university — and you will find that almost every building, bench, and professor bears the name of a grateful donor. And nearly all of the money comes from former undergrads or professional school students, since it is they, not the doctoral students, who go on to earn the big bucks.

There is, of course, a paradox. Perhaps the gross preservation of privilege these schools traffic in serves a broader public purpose. Perhaps providing a valuable private good for the few enables the institution to provide an even more valuable public good for the many. And yet students who are denied admission to elite institutions are not being denied a college education and a chance to get ahead; they’re just being redirected. Instead of going to a private research university like Stanford or a public research university like Berkeley, many will attend a comprehensive university like San José State. Only the narrow metric of value employed at the pinnacle of the American academic meritocracy could construe this as a tragedy. San José State is a great institution, which accepts the majority of the students who apply and which sends a huge number of graduates to work in the nearby tech sector.

The economist Miguel Urquiola elaborates on this paradox in his book, Markets, Minds, and Money: Why America Leads the World in University Research (Harvard University Press, 2020), which describes how American universities came to dominate the academic world in the 20th century. The 2019 Shanghai Academic Ranking of World Universities shows that eight of the top 10 universities in the world are American, and seven of these are private.

Urquiola argues that the roots of American academe’s success can be found in its competitive marketplace. In most countries, universities are subsidiaries of the state, which controls their funding, defines their scope, and sets their policy. By contrast, American higher education has three defining characteristics: self-rule (institutions have autonomy to govern themselves); free entry (institutions can be started up by federal, state, or local governments or by individuals who acquire a corporate charter); and free scope (institutions can develop programs of research and study on their own initiative without undue governmental constraint).

The result is a radically unequal system of higher education, with extraordinary resources and capabilities concentrated in a few research universities at the top. Caroline Hoxby estimates that the most selective American research universities spend an average of $150,000 per student, 15 times as much as some poorer institutions.

As Urquiola explains, the competitive market structure puts a priority on identifying top research talent, concentrating this talent and the resources needed to support it in a small number of institutions, and motivating these researchers to ramp up their productivity. This concentration then makes it easy for major research-funding agencies, such as the National Institutes of Health, to identify the institutions that are best able to manage the research projects they want to support. And the nature of the research enterprise is such that, when markets concentrate minds and money, the social payoff is much greater than if they were dispersed more evenly.

Radical inequality in the higher-education system therefore produces outsized benefits for the public good. This, paradoxical as it may seem, is how we can truly justify the public investment in private research universities.

David Labaree is a professor emeritus at the Stanford Graduate School of Education.


Posted in Books, Higher Education, History of education, Professionalism

Nothing Succeeds Like Failure: The Sad History of American Business Schools

This post is a review I wrote of Steven Conn’s book, Nothing Succeeds Like Failure: The Sad History of American Business Schools; the review will be coming out this summer in History of Education Quarterly.  Here’s a link to the proofs.

Conn Book Cover

Steven Conn. Nothing Succeeds Like Failure: The Sad History of American Business Schools. Ithaca, NY: Cornell University Press, 2019. 288pp.

            In this book, historian Steven Conn has produced a gleeful roast of the American business school.  The whole story is in the title.  It goes something like this:  In the nineteenth century, proprietary business schools provided training for people (men) who wanted to go into business.  Then 1881 saw the founding of the Wharton School of Finance and Economy at the University of Pennsylvania, which was the first business school located in a university; others quickly followed.  Two forces converged to create this new type of educational enterprise.  Progressive reformers wanted to educate future business leaders who would manage corporations in the public interest instead of looting the public the way robber barons had done.  And corporate executives wanted to enhance their status and distinguish themselves from mere businessmen by requiring a college degree in business for the top level positions.  This was both a class distinction (commercial schools would be just fine for the regular Joe) and an effort to redefine business as a profession.  As Conn aptly puts it, the driving force for both business employers and their prospective employees was “profession envy” (p. 37).  After all, why should doctors and lawyers enjoy professional standing and not businessmen?

            For reformers, the key contribution of B schools was to be a rigorous curriculum that would transform the business world.  For the students who attended these schools, however, the courses they took were beside the point.  They were looking for a pure credentialing effect, by acquiring a professional degree that would launch their careers in the top tiers of the business world.  As Conn shows, the latter perspective won.  He delights in recounting the repeated efforts by business associations and major foundations (especially Ford and Carnegie) to construct a serious curriculum for business schools.  All of these reforms, he says, failed miserably.  The business course of study retained a reputation for uncertain focus and academic mediocrity.  The continuing judgment by outsiders was that “U.S. business education was terrible” (p. 76).

            This is the “failure” in the book’s title.  B schools never succeeded in doing what they promised as educational institutions.  But, as he argues, this curricular failure did nothing to impede business schools’ organizational success.  Students flocked to them in the search for the key to the executive suite and corporations used them to screen access to the top jobs.  This became especially true in the 1960s, when business schools moved upscale by introducing graduate programs, of which the most spectacular success was the MBA.  Nothing says professional like a graduate degree.  And nothing gives academic credibility to a professional program like establishing a mandate for business professors to carry out academic research just like their peers in the more prestigious professional schools. 

            Conn says that instead of working effectively to improve the business world, B schools simply adopted the values of this world and dressed them up in professional garb.  By the end of the twentieth century, corporations had shed any pretense of working in the public interest and instead asserted shareholder value as their primary goal.  Business schools also jumped on this bandwagon.  One result of this, the author notes, was to reinforce the rapacity of the new business ethos, sending an increasing share of business graduates into the realms of finance and consulting, where business is less a process of producing valuable goods and services than a game of monopoly played with other people’s money.  His conclusion: “No other profession produces felons in quite such abundance” (p. 206).

            Another result of this evolution in B schools was that they came to infect the universities that gave them a home.  Business needed universities for status and credibility, and it thanked them by dragging them down to its own level.  He charges that universities are increasingly governed like private enterprises, with market-based incentives for colleges, departments, and individual faculty to produce income from tuition and research grants or else find themselves discarded like any other failed business or luckless worker.  It’s as if business schools have succeeded in redoing the university in their own image: “All the window dressing of academia without any of its substance” (p. 222).

            That’s quite an indictment, but is it sufficient for conviction?  I think not.  One problem is that, from the very beginning, the reader gets the distinct feeling that the fix is in.  The book opens with a scene in which the author’s colleagues in the history department come together for their monthly faculty meeting in a room filled with a random collection of threadbare couches and chairs.  All of it came from the business school across campus when it bought new furniture.  This scene is familiar to a lot of faculty in the less privileged departments on any campus, where the distinction between the top tier and the rest is all too apparent.  In my school of education, we’re accustomed to our old, dingy barn of a building, all too aware of the elegant surroundings  the business school enjoys in a brand new campus paid for by Phil Knight of swoosh fame. 

But this entry point to the book signals a tone that tends to undermine the author’s argument throughout the book.  It’s hard not to read this book as a polemic of resentment toward the nouveau riche — a humanist railing against the money-grubbers across campus who are despoiling the sanctity of academic life.  This reading is not fair, since a lot of the author’s critique of business schools is correct; but it made me squirm a bit as I worked through the text.  It also made the argument feel a little too inevitable.  Read the title and the opening page and you already have a good idea of what is going to follow.  Then you see that the first chapter is about the Wharton School and you just know where you’re headed.  And sure enough, in the last chapter the author introduces Wharton’s most notorious graduate, Donald Trump.  That second shoe took two hundred pages to drop, but the drop was inevitable. 

In addition, by focusing relentlessly on the “sad history of American business schools,” Conn is unable to put this account within the larger context of the history of US higher education.  For one thing, business didn’t introduce the market to higher ed; it was there from day one.  Our current system emerged in the early nineteenth century, when a proliferation of undistinguished colleges popped up across the US, especially in small towns on the expanding frontier.  These were private enterprises with corporate charters and no reliable source of funding from either church or state.  They often emerged more as efforts to sell land (this is a college town so buy here) or plant the flag of a religious denomination than to advance higher learning.  And they had to hustle to survive in a glutted market, so they became adept at mining student tuition and cultivating donors.  Business schools are building on a long tradition of playing to the market.  They just aren’t as concerned about covering their tracks as the rest of us.

David F. Labaree

Stanford University

Posted in Credentialing, Higher Education, History of education, Sociology, Uncategorized

How Credentialing Theory Explains the Extraordinary Growth in US Higher Ed in the 19th Century

Today I am posting a piece I wrote in 1995. It was the foreword to a book by David K. Brown, Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism.  

I have long been interested in credentialing theory, but this is the only place where I ever tried to spell out in detail how the theory works.  For this purpose, I draw on the case of the rapid expansion of the US system of higher education in the 19th century and its transformation at the end of the century, which is the focus of Brown’s book.  Here’s a link to a pdf of the original. 

The case is particularly fruitful for demonstrating the value of credentialing theory, because the most prominent theory of educational development simply can’t make sense of it.  Functionalist theory sees the emergence of educational systems as part of the process of modernization.  As societies become more complex, with a greater division of labor and a shift from manual to mental work, the economy requires workers with higher degrees of verbal and cognitive skills.  Elementary, secondary, and higher education arise over time in response to this need. 

The history of education in the U.S., however, poses a real problem for this explanation.  American higher education exploded in the 19th century, to the point that there were some 800 colleges in existence by 1880, which was more than the total number on the continent of Europe.  It was the highest rate of colleges per 100,000 population that the world had ever seen.  The problem is that this increase was not in response to increasing demand from employers for college-educated workers.  While the rate of higher schooling was increasing across the century, the skill demands in the workforce were declining.  The growth of factory production was subdividing forms of skilled work, such as shoemaking, into a series of low-skilled tasks on the assembly line.  

This being the case, then, how can we understand the explosion of college founding in the 19th century?  Brown provides a compelling explanation, and I lay out his core arguments in my foreword.  I hope you find it illuminating.


Brown Cover

Preface

In this book, David Brown tackles an important question that has long puzzled scholars who wanted to understand the central role that education plays in American society: When compared with other Western countries, why did the United States experience such extraordinary growth in higher education? Whereas in most societies higher education has long been seen as a privilege that is granted to a relatively small proportion of the population, in the United States it has increasingly come to be seen as a right of the ordinary citizen. Nor was this rapid increase in accessibility a very recent phenomenon. As Brown notes, between 1870 and 1930, the proportion of college-age persons (18 to 21 years old) who attended institutions of higher education rose from 1.7% to 13.0%. And this was long before the proliferation of regional state universities and community colleges made college attendance a majority experience for American youth.

The range of possible answers to this question is considerable, with each carrying its own distinctive image of the nature of American political and social life. For example, perhaps the rapid growth in the opportunity for higher education was an expression of egalitarian politics and a confirmation of the American Dream; or perhaps it was a political diversion, providing ideological cover for persistent inequality; or perhaps it was merely an accident — an unintended consequence of a struggle for something altogether different. In politically charged terrain such as this, one would prefer to seek guidance from an author who doesn’t ask the reader to march behind an ideological banner toward a preordained conclusion, but who instead rigorously examines the historical data and allows for the possibility of encountering surprises. What the reader wants, I think, is an analysis that is both informed by theory and sensitive to historical nuance.

In this book, Brown provides such an analysis. He approaches the subject from the perspective of historical sociology, and in doing so he manages to maintain an unusually effective balance between historical explanation and sociological theory-building. Unlike many sociologists dealing with history, he never oversimplifies the complexity of historical events in the rush toward premature theoretical closure; and unlike many historians dealing with sociology, he doesn’t merely import existing theories into his historical analysis but rather conceives of the analysis itself as a contribution to theory. His aim is therefore quite ambitious – to spell out a theoretical explanation for the spectacular growth and peculiar structure of American higher education, and to ground this explanation in an analysis of the role of college credentials in American life.

Traditional explanations do not hold up very well when examined closely. Structural-functionalist theory argues that an expanding economy created a powerful demand for advanced technical skills (human capital), which only a rapid expansion of higher education could fill. But Brown notes that during this expansion most students pursued programs not in vocational-technical areas but in liberal arts, meaning that the forms of knowledge they were acquiring were rather remote from the economically productive skills supposedly demanded by employers. Social reproduction theory sees the university as a mechanism that emerged to protect the privilege of the upper-middle class behind a wall of cultural capital, during a time (with the decline of proprietorship) when it became increasingly difficult for economic capital alone to provide such protection. But, while this theory points to a central outcome of college expansion, it fails to explain the historical contingencies and agencies that actually produced this outcome. In fact, both of these theories are essentially functionalist in approach, portraying higher education as arising automatically to fill a social need — within the economy, in the first case, and within the class system, in the second.

However, credentialing theory, as developed most extensively by Randall Collins (1979), helps explain the socially reproductive effect of expanding higher education without denying agency. It conceives of higher education diplomas as a kind of cultural currency that becomes attractive to status groups seeking an advantage in the competition for social positions, and therefore it sees the expansion of higher education as a response to consumer demand rather than functional necessity. Upper classes tend to benefit disproportionately from this educational development, not because of an institutional correspondence principle that preordains such an outcome, but because they are socially and culturally better equipped to gain access to and succeed within the educational market.

This credentialist theory of educational growth is the one that Brown finds most compelling as the basis for his own interpretation. However, when he plunges into a close examination of American higher education, he finds that Collins’ formulation of this theory often does not coincide very well with the historical evidence. One key problem is that Collins does not examine the nature of labor market recruitment, which is critical for credentialist theory, since the pursuit of college credentials only makes sense if employers are rewarding degree holders with desirable jobs. Brown shows that between 1800 and 1880 the number of colleges in the United States grew dramatically (as Collins also asserts), but that enrollments at individual colleges were quite modest. He argues that this binge of institution-creation was driven by a combination of religious and market forces but not (contrary to Collins) by the pursuit of credentials. There simply is no good evidence that a college degree was much in demand by employers during this period. Instead, a great deal of the growth in the number of colleges was the result of the desire by religious and ethnic groups to create their own settings for producing clergy and transmitting culture. In a particularly intriguing analysis, Brown argues that an additional spur to this growth came from markedly less elevated sources — local boosterism and land speculation — as development-oriented towns sought to establish colleges as a mechanism for attracting land buyers and new residents.

Brown’s version of credentialing theory identifies a few central factors that are required in order to facilitate a credential-driven expansion of higher education, and by 1880 several of these were already in place. One such factor is substantial wealth. Higher education is expensive, and expanding it for reasons of individual status attainment rather than for societal necessity is a wasteful use of a nation’s resources; it is only feasible for a very wealthy country. The United States was such a country in the late nineteenth century. A second factor is a broad institutional base. At this point, the United States had the largest number of colleges per million residents that the country has ever seen, before or since. When combined with the small enrollments at each college, this meant that there was a great potential for growth within an already existing institutional framework. This potential was reinforced by a third factor, decentralized control. Colleges were governed by local boards rather than central state authorities, thus encouraging entrepreneurial behavior by college leaders, especially in the intensely competitive market environment they faced.

However, three other essential factors for rapid credential-based growth in higher education were still missing in 1880. For one thing, colleges were not going to be able to attract large numbers of new students, who were after all unlikely to be motivated solely by the love of learning, unless they could offer these students both a pleasant social experience and a practical educational experience — neither of which was the norm at colleges for most of the nineteenth century. Another problem was that colleges could not function as credentialing institutions until they had a monopoly over a particular form of credentials, but in 1880 they were still competing directly with high schools for the same students. Finally, their credentials were not going to have any value on the market unless employers began to demonstrate a distinct preference for hiring college graduates, and such a preference was still not obvious at this stage.

According to Brown, the 1880s saw a major shift in all three of these factors. The trigger for this change was a significant oversupply of institutions relative to existing demand. In this life or death situation, colleges desperately sought to increase the pool of potential students. It is no coincidence that this period marked the rapid diffusion of efforts to improve the quality of social life on campuses (from the promotion of athletics to the proliferation of fraternities), and also the shift toward a curriculum with a stronger claim of practicality (emphasizing modern languages and science over Latin and Greek). At the same time, colleges sought to guarantee a flow of students from feeder institutions, which required them to establish a hierarchical relationship with high schools. The end of the century was the period in which colleges began requiring completion of a high school course as a prerequisite for college admission instead of the traditional entrance examination. This system provided high schools with a stable outlet for their graduates and colleges with a predictable flow of reasonably well-prepared students. However, none of this would have been possible if the college degree had not acquired significant exchange value in the labor market. Without this, there would have been only social reasons for attending college, and high schools would have had little incentive to submit to college mandates.

Perhaps Brown’s strongest contribution to credential theory is his subtle and persuasive analysis of the reasoning that led employers to assert a preference for college graduates in the hiring process. Until now, this issue has posed a significant, perhaps fatal, problem for credentialing theory, which has asked the reader to accept two apparently contradictory assertions about credentials. First, the theory claims that a college degree has exchange value but not necessarily use value; that is, it is attractive to the consumer because it can be cashed in on a good job more or less independently of any learning that was acquired along the way. Second, this exchange value depends on the willingness of employers to hire applicants based on credentials alone, without direct knowledge of what these applicants know or what they can do. However, this raises a serious question about the rationality of the employer in this process. After all, why would an employer, who presumably cares about the productivity of future employees, hire people based solely on a college’s certification of competence in the absence of any evidence for that competence?

Brown tackles this issue with a nice measure of historical and sociological insight. He notes that the late nineteenth century saw the growing rationalization of work, which led to the development of large-scale bureaucracies to administer this work within both private corporations and public agencies. One result was the creation of a rapidly growing occupational sector for managerial employees who could function effectively within such a rationalized organizational structure. College graduates seemed to fit the bill for this kind of work. They emerged from the top level of the newly developed hierarchy of educational institutions and therefore seemed like natural candidates for management work in the upper levels of the new administrative hierarchy, which was based not on proprietorship or political office but on apparent skill. And what kinds of skills were called for in this line of work? What the new managerial employees needed was not so much the technical skills posited by human capital theory, he argues, but a general capacity to work effectively in a verbally and cognitively structured organizational environment, and also a capacity to feel comfortable about assuming positions of authority over other people.

These were things that the emerging American college could and indeed did provide. The increasingly corporate social structure of student life on college campuses provided good socialization for bureaucratic work, and the process of gaining access to and graduating from college provided students with an institutionalized confirmation of their social superiority and qualifications for leadership. Note that these capacities were substantive consequences of having attended college, but they were not learned as part of the college’s formal curriculum. That is, the characteristics that qualified college graduates for future bureaucratic employment were a side effect of their pursuit of a college education. In this sense, then, the college credential had a substantive meaning for employers that justified them in using it as a criterion for employment — less for the human capital that college provided than for the social capital that college conferred on graduates. Therefore, this credential, Brown argues, served an important role in the labor market by reducing the uncertainty that plagued the process of bureaucratic hiring. After all, how else was an employer to gain some assurance that a candidate could do this kind of work? A college degree offered a claim to competence, which had enough substance behind it to be credible even if this substance was largely unrelated to the content of the college curriculum.

By the 1890s all the pieces were in place for a rapid expansion of college enrollments, strongly driven by credentialist pressures. Employers had reason to give preference to college graduates when hiring for management positions. As a result, middle-class families had an increasing incentive to provide their children with privileged access to an advantaged social position by sending them to college. For the students themselves, this extrinsic reward for attending college was reinforced by the intrinsic benefits accruing from an attractive social life on campus. All of this created a very strong demand for expanding college enrollments, and the pre-existing institutional conditions in higher education made it possible for colleges to respond to this demand in an aggressive fashion. There were a thousand independent institutions of higher education, accustomed to playing entrepreneurial roles in a competitive educational market, that were eager to capitalize on the surge of interest in attending college and to adapt themselves to the preferences of these new tuition-paying consumers. The result was a powerful and unrelenting surge of expansion in college enrollments that continued for the next century.


Brown provides a persuasive answer to the initial question about why American higher education expanded at such a rapid rate. But at this point the reader may well respond by asking the generic question that one should ask of any analyst, and that is, “So what?” More specifically, in light of the particular claims of this analysis, the question becomes: “What difference does it make that this expansion was spurred primarily by the pursuit of educational credentials?” In my view, at least, the answer to that question is clear. The impact of credentialism on both American society and the American educational system has been profound — profoundly negative. Consider some of the problems it has caused.

One major problem is that credentialism is astonishingly inefficient. Education is the largest single public investment made by most modern societies, and this is justified on the grounds that it provides a critically important contribution to the collective welfare. The public value of education is usually calculated as some combination of two types of benefits, the preparation of capable citizens (the political benefit) and the training of productive workers (the economic benefit). However, the credentialist argument advanced by Brown suggests that these public benefits are not necessarily being met and that the primary beneficiaries are in fact private individuals. From this perspective, higher education (and the educational system more generally) exists largely as a mechanism for providing individuals with a cultural commodity that will give them a substantial competitive advantage in the pursuit of social position. In short, education becomes little but a vast public subsidy for private ambition.

The practical effect of this subsidy is the production of a glut of graduates. The difficulty posed by this outcome is not that the population becomes overeducated (since such a state is difficult to imagine) but that it becomes overcredentialed, since people are pursuing diplomas less for the knowledge they are thereby acquiring than for the access that the diplomas themselves will provide. The result is a spiral of credential inflation; for as each level of education in turn gradually floods with a crowd of ambitious consumers, individuals have to keep seeking ever higher levels of credentials in order to move a step ahead of the pack. In such a system nobody wins. Consumers have to spend increasing amounts of time and money to gain additional credentials, since the swelling number of credential holders keeps lowering the value of credentials at any given level. Taxpayers find an increasing share of scarce fiscal resources going to support an educational chase with little public benefit. Employers keep raising the entry-level education requirements for particular jobs, but they still find that they have to provide extensive training before employees can carry out their work productively. At all levels, this is an enormously wasteful system, one that rich countries like the United States can increasingly ill afford and that less developed countries, which imitate the U.S. educational model, find positively impoverishing.

A second major problem is that credentialism undercuts learning. In both college and high school, students are all too well aware that their mission is to do whatever it takes to acquire a diploma, which they can then cash in for what really matters — a good job. This has the effect of reifying the formal markers of academic progress — grades, credits, and degrees — and encouraging students to focus their attention on accumulating these badges of merit for the exchange value they offer. But at the same time this means directing attention away from the substance of education, reducing student motivation to learn the knowledge and skills that constitute the core of the educational curriculum. Under such conditions, it is quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning. This perspective is almost perfectly captured by a common student question, one that sends chills down the back of the learning-centered teacher but that makes perfect sense for the credential-oriented student: “Is this going to be on the test?” (Sedlak et al., 1986, p. 182). We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system.

A third problem posed by credentialism is social and political more than educational. According to credentialing theory, the connection between social class and education is neither direct nor automatic, as suggested by social reproduction theory. Instead, the argument goes, market forces mediate between the class position of students and their access to and success within the educational system. That is, there is general competition for admission to institutions of higher education and for levels of achievement within these institutions. Class advantage is no guarantee of success in this competition, since such factors as individual ability, motivation, and luck all play a part in determining the result. Market forces also mediate between educational attainment (the acquisition of credentials) and social attainment (the acquisition of a social position). Some college degrees are worth more in the credentials market than others, and they provide privileged access to higher level positions independent of the class origins of the credential holder.

However, in both of these market competitions, one for acquiring the credential and the other for cashing it in, a higher class position provides a significant competitive edge. The economic, cultural, and social capital that comes with higher class standing gives the bearer an advantage in getting into college, in doing well at college, and in translating college credentials into desirable social outcomes. The market-based competition that characterizes the acquisition and disposition of educational credentials gives the process a meritocratic set of possibilities, but the influence of class on this competition gives it a socially reproductive set of probabilities as well. The danger is that, as a result, a credential-driven system of education can provide meritocratic cover for socially reproductive outcomes. In the single-minded pursuit of educational credentials, both student consumers and the society that supports them can lose sight of an all-too-predictable pattern of outcomes that is masked by the headlong rush for the academic gold.

Posted in Academic writing, History of education, Rhetoric

A Brutal Review of My First Book

In the last two weeks, I’ve presented some of my favorite brutal book reviews.  It’s a lot of fun to watch a skilled writer skewer someone else’s work with surgical precision (see here and my last post).  In the interest of balance, I thought it would be right and proper to present a review that eviscerates one of my own books.  So here’s a link to a review essay by Sol Cohen that was published in Historical Studies in Education in 1991.  It’s called, “The Linguistic Turn: The Absent Text of Educational Historiography.”

Fortunately, I never saw the review when it first came out, three years after publication of my book, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939.  Those were the days when I was a recently tenured associate professor at Michigan State, still young and professionally vulnerable.  It wasn’t until 2005 that a snarky student in a class where I assigned my book pointedly sent me a copy of the review (as a way of saying, why are we reading this thoroughly discounted text?).   By then, thankfully, I was a full professor at Stanford, who was sufficiently old and arrogant to have nothing at stake, so I could enjoy the rollercoaster ride of reading Cohen’s thorough trashing of my work.

The book is a study of the first century of the first public high school in Philadelphia, the city where I grew up.  It emerged from my doctoral dissertation in sociology at the University of Pennsylvania, submitted in 1983.  The genre is historical sociology, and the data are both qualitative (public records, documents, and annual reports) and quantitative (digitized records of students in every census year from 1840 to 1920).  The book won me tenure at MSU and outstanding book awards in 1989 from both the History of Education Society and the American Educational Research Association.  In short, it was a big fat target, fairly begging for a take-down.  And boy, did Sol Cohen ever rise to the challenge.

CHS Cover

Cohen frames his discussion of my book as an exercise in rhetorical analysis.  Building on the “linguistic turn” that emerged in social theory toward the end of the 20th century, he draws in particular on the work of Hayden White, who argued for viewing history as a literary endeavor.  White saw four primary story lines that historians employ:  Romance (a tale of growth and progress), Comedy (a fall from grace followed by a rise toward a happy ending), Tragedy (the fall of the hero), and Satire (“a decline and fall from grand beginnings”).  

The Making of an American High School is emplotted in the mode of Satire, an unrelenting critique, the reverse or a parody of the idealization of American public education which, for example, characterizes the Romantic/Comedic tradition in American educational historiography….

The narrative trajectory of Labaree’s book is a downward spiral. Its predominant mood is one of anger and disillusionment with the deterioration or subversion and fall from grace of American public secondary education. The story line of The Making of an American High School, though the reverse of Romance, is equally formulaic: from democratic origins, conflict and decline and fall. The conflict is between egalitarianism and “market values,” between the early democratic aspirations of Central High School to produce a virtuous and informed citizenry for the new republic and its latter-day function as an elitist “credentials market” controlled by a middle class whose goal is to ensure that their sons receive the “credentials” which would entitle them to become the functionaries of capitalist society….

The metaphor of the “credentials market,” by which Labaree means to signify a vulgar or profane and malignant essence of American secondary education, is one of the main rhetorical devices deployed in The Making of an American High School.  Labaree stresses the baneful effect of “market forces” and “market values” on every aspect of CHS and American secondary education: governance, pedagogy, the students, the curriculum. As befits his Satiric mode of emplotment, Labaree attacks the “market” conception of secondary education from a “high moral line,” that of democracy and egalitarianism.

The lugubrious downward narrative trajectory of The Making of an American High School unexpectedly takes a Romantic or Comedic upward turn at the very end of the book, when Labaree mysteriously foresees the coming transformation of the high school. We have to quote Labaree’s last paragraph. “As a market institution,” he writes, “the contemporary high school is an utter failure.” Yet “when rechartered as a common school, it has great potential.” The common public high school “would be able to focus on equality rather than stratification and on learning rather than the futile pursuit of educational credentials.” Stripped of its debilitating market concerns, “the common high school,” Labaree contends in his final sentence, “could seek to provide what had always eluded the early selective high school; a quality education for the whole community.” The End.

Ok, this is really not going well for me, is it?  Not only am I employing a hackneyed plot line of decline and fall and a cartoonish opposition between saintly democracy and evil markets, but I also flinch at the end from being true to my satiric ethos by hastily fabricating a last-minute happy ending.  I spin a book-length tale of fall from grace and then lose my nerve at the finish line.  In short, I’m a gutless fabulist.  

Oh, and that’s not all.

There is something more significant going on in Labaree’s book, however, than his emplotment of the history of American secondary education in the mode of Satire and the formulation of his argument in terms of the metaphor of the market. Thus, the most prominent rhetorical device Labaree utilizes in The Making of An American High School is actually not that of the market metaphor, but that of the terminology and apparatus of Quantitative Research Methodology. Labaree confronts the reader with no less than fifteen statistical tables in what is a very brief work (only about 180 pages of text), as well as four statistical Appendices….

One can applaud Labaree’s diligence in finding and mining a trove of empirical data (“based on a sample of two thousand students drawn from the first hundred years” of CHS). But there is a kind of rhetorical overkill here. For all his figures and statistics, we are not much wiser than before; they are actually redundant. They give us no new information. What is their function in the text then? Labaree’s utilization of the nomenclature and technical apparatus of quantitative research methodology is to be understood as no more (or less) than a rhetorical strategy in the service of “realism.”

Ok, now here’s my favorite paragraph in the whole review.  I think you’ll find this one worth waiting for.  To make sure you don’t miss the best parts, I’ll underline them for you.

Within the conventions of its genre, The Making of an American High School, though lacking in grace as a piece of writing, possesses some complexity and depth, if not breadth: it is an acceptable story. But as if Labaree were dissatisfied with the credibility and persuasiveness of a mere story, or with that story’s formal rhetorical properties, its Satiric mode of emplotment, its metaphoric mode of explanation, its fairy-tale ending, or were aware of its writerly deficiencies, he puts on scientistic or Positivist airs. Labaree’s piling on of inessential detail and his deployment of the arcane vocabulary and symbols of quantitative research function as a rhetorical device to counteract or efface the discursivity, the textuality, the obvious literary-ness of The Making of an American High School and to reinforce or enhance the authority of his book and the ideological thrust of his argument.  As if the language of “mean,” “standard deviation,” “regression analysis,” “beta factors,” “dummy variables,” and “homoscedasticity,” vis-a-vis ordinary language, were a transcendent, epistemologically superior or privileged language: rigorously scientific, impartial, objective. From this perspective, the Tables and Appendices in The Making of an American High School are not actually there to be read; they are, in fact, unreadable. They are simply there to be seen; their sheer presence in the text is what “counts.”

Wow, I’m impressed.  But wait for the closing flourish.

The Making of an American High School, within the conventions of its genre, is a modest and minor work, so thin the last chapter has to be fleshed out by a review of the past decade’s literature on the American high school. But the point is not to reprove or criticize Labaree. The Making of an American High School is a first book. It is or was a competent doctoral dissertation, with all the flaws of even a competent dissertation. That it was awarded the Outstanding Book Award for 1989 by the History of Education Society simply shows which way the historiographical winds are currently blowing in the United States.

Nuff said.  Or, to use the discourse of quantitative research, QED.  

So how do I react to this review, nearly three decades after it appeared?  Although it’s a bit unkind, I can’t say it’s unfair.  Let me hit on a few specifics in the analysis that resonated with me.

The tale of a fall from grace.  True.  It’s about a school established to shore up a shaky republic and promote civic virtue, which then became a selective institution for reproducing social advantage through the provision of elite credentials.  It’s all downhill from the 1840s to the present.

Markets as the bad guy.  Also true.  I framed the book around a tension between democratic politics and capitalist markets, with markets getting and keeping the upper hand over the years.  That’s a theme that has continued in my work, though it has become somewhat more complex.  As Cohen pointed out, my definition of markets was hazy at best.  It’s not even clear that school diplomas played a major role in the job market for most of the 19th century, when skill levels in the workforce were actually declining while levels of schooling were rising.  The link between schooling and a good job did not become a major factor until the turn of the 20th century. 

In my second book, How to Succeed in School without Really Learning, I was forced to reconsider the politics-markets dichotomy, which I outlined in the first chapter, drawing on an essay that remains my most cited publication.  Here I split the idea of credentials markets into two major components.  From one perspective, education is a public good, which provides society with the skills it needs, skills that benefit everyone including those who didn’t get a diploma.  From another, education is a private good, whose benefits accrue only to the degree holder.  I argued that the former constitutes a vision of schooling for social efficiency whereas the latter offers a vision of schooling for social mobility.  The old good guy from the first book, democratic politics, represented a vision of schooling for democratic equality, also a public good.  For many years, I ran with the continuing tension among these three largely incompatible goals as the defining force in shaping the politics of education. 

However, by the time I got to my last book, A Perfect Mess, I stumbled onto the idea that markets were in fact the good guy in at least one major way.  They were the shaping force in the evolution of the American system of higher education, which emerged from below in a competition among private colleges rather than being created and directly controlled from above by the state.  Turns out this gave the system a degree of autonomy that was highly functional in promoting innovation in teaching and research and that helped make it a dominant force in global higher ed.  State-dominated systems of higher education tend to be less effective in these ways.

The happy ending that doesn’t follow from the argument in the book.  Embarrassing but also true.  I have long argued that, before a book on education is published, the editor should delete the final chapter.  This is typically where the author pulls back from the weight of the preceding analysis, which demonstrates huge problems in education, and comes up with a totally incredible five-point plan for fixing them.  That’s sort of what I did here.  In my defense, it’s only one paragraph; and it doesn’t suggest that a happy ending will happen, only that it would be nice if it did.  But I do shudder reading it today, now that I’ve become more comfortable being a doomsayer about the prospects for fixing education.  To wit, my fourth book on the improbability of reform, Someone Has to Fail.

Deceptive rhetoric.  Also true.  The rhetorical move that strikes me as most telling now is not the way I waved the flag of markets or statistics, as Cohen argued, but another move he alluded to but didn’t pursue.  On the face of it, the book is the history of a single high school.  But that is not something that interested me or my readers.  I frame the book as an analysis of the American high school in general, its evolution from a small exclusive institution for preparing citizens to a large inclusive institution for credentialing workers.  But there’s really no way to make a credible argument that the Central case is representative of the whole.  In fact, it was quite unusual. 

Most high schools in the 19th century were small additions to a town’s common schools, usually located in a room on the top floor of the grammar school, taught by the grammar school master, and organized coeducationally.  But for 50 years Central High School was the only high school in the second largest city in the country, and it remained very exclusive because of its rigorous entrance exam, its location in the most elegant educational structure in town (see the picture of its second building on the book’s cover), its authorization to grant college degrees, its teachers who were called professors, and its students who were all male.  In the first chapter I try to wave away that problem by arguing that the school is not representative but exemplary, serving as a model for where other high schools were headed.  Throughout the text I was able to maintain this fiction because of a quirk of the English language.  I kept referring to “the high school,” which left it ambiguous whether I was referring to Central or to the high school in general.  I was always directing the analysis toward the latter.  On reflection, I’m ok with this deception.  If you’re not pushing your data to the limits of credibility, you’re probably not telling a very interesting story.  I think the evolutionary arc for the high school system that I describe in the book still holds up, in general.

Using statistics as window dressing.  I wish.  This is a good news, bad news story.  The good news is that quantitative student data were critically important in establishing a counterintuitive point.  In high school today, the best predictor of who will graduate is social class.  The effect is large and stable over time.  For Central in its first 80 years, however, class had no effect on chances for graduation.  The only factor that determined successful completion of the degree was a student’s grades in school.  It’s not that class was completely irrelevant.  The students who entered the school were heavily skewed toward the upper classes, since only these families could afford the opportunity cost of keeping their sons out of the workforce.  But once they were admitted, rich kids flunked out as much as poor kids if they didn’t keep up their grades.  Counter to anything I was expecting (or even desiring — I was looking to tell a Marxist story), Central was a meritocracy.  Kind of cool.
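(For the technically inclined, the kind of test involved here can be illustrated with a short sketch. This is not the book’s actual analysis: the data below are synthetic, and the column names high_class, grades, and graduated are hypothetical stand-ins for the variables just described, assuming Python with pandas and statsmodels available.)

```python
# A sketch of the kind of analysis described above, run on synthetic data.
# It is NOT the book's actual dataset; column names and coding are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # roughly the size of the student sample the book drew on

df = pd.DataFrame({
    "high_class": rng.integers(0, 2, n),  # 1 = son of a proprietor/professional family (hypothetical coding)
    "grades": rng.normal(70, 10, n),      # grade average on a 0-100 scale (hypothetical)
})

# Simulate the finding: graduation depends on grades, with no independent class effect.
logit_p = -14 + 0.2 * df["grades"] + 0.0 * df["high_class"]
df["graduated"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("graduated ~ grades + high_class", data=df).fit(disp=False)
print(model.summary())  # expect: grades significant, high_class not
```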

The bad news is that the quantitative data were not useful for making any other important points in the book.  The most interesting stuff, at least for me, came from the qualitative side.  But the amount of quantitative data I generated was huge, and it ate up at least two of the four years I spent working on the dissertation.  Sol Cohen complained that I had 15 tables in the book, but the dissertation had more like 45.  I wanted to include them all, on the grounds that I did the work so I wanted it to show in the end result; but the press said no.  The disjuncture between data and its significance finally and brutally came home to me when my friend David Cohen read my whole manuscript and reported this:  “It seems that all of your tables serve as a footnote for a single assertion: Central was meritocratic.”  Two years of my life for a single footnote.  Lord save me from ever making that mistake again.  Since then I have avoided gathering and analyzing quantitative data and made it a religion to look for shortcuts as I’m doing research.  Diligence in gathering data doesn’t necessarily pay off in significance of the results.

Ok, so I’ll leave it at that.  I hope you enjoyed watching me get flayed by a professional.  And I also hope there are some useful lessons buried in there somewhere.  

Posted in Academic writing, Higher Education, History of education

The Lust for Academic Fame: America’s Engine for Scholarly Production

This post is an analysis of the engine for scholarly production in American higher education.  The issue is that the university is a unique work setting in which the usual organizational incentives don’t apply.  Administrators can’t offer much in the way of power and money as rewards for productive faculty, and they also can’t do much to punish unproductive faculty who have tenure.  Yet in spite of this, scholars keep cranking out publications at a furious rate.  My argument is that the primary motive for publication is the lust for academic fame.

The piece was originally published in Aeon in December, 2018.

pile of books
Photo by Pixabay on Pexels.com

Gold among the dross

Academic research in the US is unplanned, exploitative and driven by a lust for glory. The result is the envy of the world

David F. Labaree

The higher education system is a unique type of organisation with its own way of motivating productivity in its scholarly workforce. It doesn’t need to compel professors to produce scholarship because they choose to do it on their own. This is in contrast to the standard structure for motivating employees in bureaucratic organisations, which relies on manipulating two incentives: fear and greed. Fear works by holding the threat of firing over the heads of workers in order to ensure that they stay in line: Do it my way or you’re out of here. Greed works by holding the prospect of pay increases and promotions in front of workers in order to encourage them to exhibit the work behaviours that will bring these rewards: Do it my way and you’ll get what’s yours.

Yes, in the United States contingent faculty can be fired at any time, and permanent faculty can be fired at the point of tenure. But, once tenured, there’s little other than criminal conduct or gross negligence that can threaten your job. And yes, most colleges do have merit pay systems that reward more productive faculty with higher salaries. But the differences are small – between the standard 3 per cent raise and a 4 per cent merit increase. Even though gaining consistent above-average raises can compound annually into substantial differences over time, the immediate rewards are pretty underwhelming. Not the kind of incentive that would motivate a major expenditure of effort in a given year – such as the kind that operates on Wall Street, where earning a million-dollar bonus is a real possibility. Academic administrators – chairs, deans, presidents – just don’t have this kind of power over faculty. It’s why we refer to academic leadership as an exercise in herding cats. Deans can ask you to do something, but they really can’t make you do it.
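To make the compounding arithmetic concrete, here is a minimal sketch with assumed numbers (a $100,000 starting salary and a 30-year career, neither of which comes from the essay itself): the career-end gap is real, but the extra reward in any single year is tiny.

```python
# Hypothetical illustration of the merit-raise arithmetic above:
# a one-point difference in annual raises, compounded over a career.

def salary_after(start: float, annual_raise: float, years: int) -> float:
    """Compound a starting salary by a fixed annual raise for a number of years."""
    return start * (1 + annual_raise) ** years

start, years = 100_000, 30                    # assumed starting salary and career length
standard = salary_after(start, 0.03, years)   # standard 3 per cent raise
merit = salary_after(start, 0.04, years)      # consistent 4 per cent merit raise

print(f"After {years} years: ${standard:,.0f} vs ${merit:,.0f}")
print(f"Career-end gap: ${merit - standard:,.0f}")
# The long-run gap is substantial, but the extra raise in any single year is
# worth only about one per cent of salary, which is the point: the immediate
# incentive is weak.
```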

This situation is the norm for systems of higher education in most liberal democracies around the world. In more authoritarian settings, the incentives for faculty are skewed by particular political priorities, and in part for these reasons the institutions in those settings tend to be consigned to the lower tiers of international rankings. Scholarly autonomy is a defining characteristic of universities higher on the list.

If the usual extrinsic incentives of fear and greed don’t apply to academics, then what does motivate them to be productive scholars? One factor, of course, is that this population is highly self-selected. People don’t become professors in order to gain power and money. They enter the role primarily because of a deep passion for a particular field of study. They find that scholarship is a mode of work that is intrinsically satisfying. It’s more a vocation than a job. And these elements tend to be pervasive in most of the world’s universities.

But I want to focus on an additional powerful motivation that drives academics, one that we don’t talk about very much. Once launched into an academic career, faculty members find their scholarly efforts spurred on by more than a love of the work. We in academia are motivated by a lust for glory.

We want to be recognised for our academic accomplishments by earning our own little pieces of fame. So we work assiduously to accumulate a set of merit badges over the course of our careers, which we then proudly display on our CVs. This situation is particularly pervasive in the US system of higher education, which is organised more by the market than by the state. Market systems are especially prone to the accumulation of distinctions that define your position in the hierarchy. But European and other scholars are also engaged in a race to pick up honours and add lines to their CVs. It’s the universal obsession of the scholarly profession.

Take one prominent case in point: the endowed chair. A named professorship is a very big deal in the academic status order, a (relatively) scarce honour that supposedly demonstrates to peers that you’re a scholar of high accomplishment. It does involve money, but the chair-holder often sees little of it. A donor provides an endowment for the chair, which pays your salary and benefits, thus taking these expenses out of the operating budget – a big plus for the department, which saves a lot of money in the deal. And some chairs bring with them extra money that goes to the faculty member to pay for research expenses and travel.

But more often than not, the chair brings the occupant nothing at all but an honorific title, which you can add to your signature: the Joe Doakes Professor of Whatever. Once these chairs are in existence as permanent endowments, they never go away; instead they circulate among senior faculty. You hold the chair until you retire, and then it goes to someone else. In my own school, Stanford University, when the title passes to a new faculty member, that person receives an actual chair – one of those uncomfortable black wooden university armchairs bearing the school logo. On the back is a brass plaque announcing that ‘[Your Name] is the Joe Doakes Professor’. When you retire, they take away the title and leave you the physical chair. That’s it. It sounds like a joke – all you get to keep is this unusable piece of furniture – but it’s not. And faculty will kill to get this kind of honour.

This being the case, the academic profession requires a wide array of other forms of recognition that are more easily attainable and that you can accumulate the way you can collect Fabergé eggs. And they’re about as useful. Let us count the kinds of merit badges that are within the reach of faculty:

  • publication in high-impact journals and prestigious university presses;
  • named fellowships;
  • membership on review committees for awards and fellowships;
  • membership on editorial boards of journals;
  • journal editorships;
  • officer positions in professional organisations, which conveniently rotate on an annual basis and thus increase accessibility (in small societies, nearly everyone gets a chance to be president);
  • administrative positions in your home institution;
  • committee chairs;
  • a large number of awards of all kinds – for teaching, advising, public service, professional service, and so on: the possibilities are endless;
  • awards that particularly proliferate in the zone of scholarly accomplishment – best article/book of the year in a particular subfield by a senior/junior scholar; early career/lifetime-career achievement; and so on.

Each of these honours tells the academic world that you are the member of a purportedly exclusive club. At annual meetings of professional organisations, you can attach brightly coloured ribbons to your name tag that tell everyone you’re an officer or fellow of that organisation, like the badges that adorn military dress uniforms. As in the military, you can never accumulate too many of these academic honours. In fact, success breeds more success, as your past tokens of recognition demonstrate your fitness for future tokens of recognition.

Academics are unlike the employees of most organisations in that they fight over symbolic rather than material objects of aspiration, but they are like other workers in that they too are motivated by fear and greed. Instead of competing over power and money, they compete over respect. So far I’ve been focusing on professors’ greedy pursuit of various kinds of honours. But, if anything, fear of dishonour is an even more powerful motive for professorial behaviour. I aspire to gain the esteem of my peers but I’m terrified of earning their scorn.

Lurking in the halls of every academic department are a few furtive figures of scholarly disrepute. They’re the professors who are no longer publishing in academic journals, who have stopped attending academic conferences, and who teach classes that draw on the literature of yesteryear. Colleagues quietly warn students to avoid these academic ghosts, and administrators try to assign them courses where they will do the least harm. As an academic, I might be eager to pursue tokens of merit, but I am also desperate to avoid being lumped together with the department’s walking dead. Better to be an academic mediocrity, publishing occasionally in second-rate journals, than to be your colleagues’ archetype of academic failure.

The result of all this pursuit of honour and retreat from dishonour is a self-generating machine for scholarly production. No administrator needs to tell us to do it, and no one needs to dangle incentives in front of our noses as motivation. The pressure to publish and demonstrate academic accomplishment comes from within. College faculties become self-sustaining engines of academic production, in which we drive ourselves to demonstrate scholarly achievement without the administration needing to lift a finger or spend a dollar. What could possibly go wrong with such a system?

 

One problem is that faculty research productivity varies significantly according to what tier of the highly stratified structure of higher education professors find themselves in. Compared with systems of higher education in other countries, the US system is organised into a hierarchy of institutions that are strikingly different from each other. The top tier is occupied by the 115 universities that the Carnegie Classification labels as having the highest research activity, which represents only 2.5 per cent of the 4,700 institutions that grant college degrees. The next tier is doctoral universities with less of a research orientation, which account for 4.7 per cent of institutions. The third is an array of master’s level institutions often referred to as comprehensive universities, which account for 16 per cent. The fourth is baccalaureate institutions (liberal arts colleges), which account for 21 per cent. The fifth is two-year colleges, which account for 24 per cent. (The remaining 32 per cent are small specialised institutions that enrol only 5 per cent of all students.)

The number of publications by faculty members declines sharply as you move down the tiers of the system. One study shows how this works for professors in economics. The total number of refereed journal articles published per faculty member over the course of a career was 18.4 at research universities; 8.1 at comprehensive universities; 4.9 at liberal arts colleges; and 3.1 at all others. The decline in productivity is also sharply defined within the category of research universities. Another study looked at the top 94 institutions ranked by per-capita publications per year between 1991 and 1993. At the number-one university, average production was 12.7 per person per year; at number 20, it dropped off sharply to 4.6; at number 60, it was 2.4; and at number 94, it was 0.5.

Only 20 per cent of faculty serve at the most research-intensive universities (the top tier) where scholarly productivity is the highest. As we can see, the lowest end of this top sliver of US universities has faculty who are publishing no more than an article every two years. The other 80 per cent are presumably publishing even more rarely than this, if indeed they are publishing at all. As a result, it seems that the incentive system for spurring faculty research productivity operates primarily at the very top levels of the institutional hierarchy. So why am I making such a big deal about US professors as self-motivated scholars?

The most illuminating way to understand the faculty incentive to publish is to look at the system from the point of view of the newly graduating PhD who is seeking to find a faculty position. These prospective scholars face some daunting mathematics. As we have seen, the 115 high-research universities produce the majority of research doctorates, but 80 per cent of the jobs are at lower-level institutions. The most likely jobs are not at research universities but at comprehensive universities and four-year institutions. So most doctoral graduates entering the professoriate experience dramatic downward mobility.

It’s actually even worse than that. One study of sociology graduates shows that departments ranked in the top five select the majority of their faculty from top-five departments, but most top-five graduates ended up in institutions below the rank of 20. And a lot of prospective faculty never find a position at all. A 1999 study showed that, among recent grads who sought to become professors, only two-thirds had such a position after 10 years, and only half of these had earned tenure. And many of those who do find teaching positions are working part-time, a category that in 2005 accounted for 48 per cent of all college faculty.

The prospect of a dramatic drop in academic status and the possibility of failing to find any academic job do a lot to concentrate the mind of the recent doctoral graduate. Fear of falling compounded by fear of total failure works wonders in motivating novice scholars to become flywheels of productivity. From their experience in grad school, they know that life at the highest level of the system is very good for faculty, but the good times fade fast as you move to lower levels. At every step down the academic ladder, the pay is less, the teaching loads are higher, graduate students are fewer, research support is less, and student skills are lower.

In a faculty system where academic status matters more than material benefits, the strongest signal of the status you have as a professor is the institution where you work. Your academic identity is strongly tied to your letterhead. And in light of the kind of institution where most new professors find themselves, they start hearing a loud, clear voice saying: ‘I deserve better.’

So the mandate is clear. As a grad student, you need to write your way to an academic job. And when you get a job at an institution far down the hierarchy, you need to write your way to a better job. You experience a powerful incentive to claw your way back up the academic ladder to an institution as close as possible to the one that recently graduated you. The incentive to publish is baked in from the very beginning.

One result of this Darwinian struggle to regain one’s rightful place at the top of the hierarchy is that a large number of faculty fall by the wayside without attaining their goal. Dashed dreams are the norm for large numbers of actors. This can leave a lot of bitter people occupying the middle and lower tiers of the system, and it can saddle students with professors who would really rather be somewhere else. That’s a high cost for the process that supports the productivity of scholars at the system’s pinnacle.

 

Another potential problem with my argument about the self-generating incentive for professors to publish is that the work produced by scholars is often distinguished more by its quantity than by its quality. Put another way, a lot of the work that appears in print doesn’t seem worth the effort required to read it, much less to produce it. Under these circumstances, the value of the incentive structure seems lacking.

Consider some of the ways in which contemporary academic production promotes quantity over quality. One familiar technique is known as ‘salami slicing’. The idea here is simple. Take one study and divide it up into pieces that can each be published separately, so it leads to multiple entries in your CV. The result is an accumulation of trivial bits of a study instead of a solid contribution to the literature.

Another approach is to inflate co-authorship. Multiple authors make sense in some ways. Large projects often involve a large number of scholars and, in the sciences in particular, a long list of authors is de rigueur. Fine, as long as everyone in the list made a significant contribution to research. But often co-authorship comes for reasons of power rather than scholarly contribution. It has become normal for anyone who compiled a dataset to demand co-authorship for any papers that draw on the data, even if the data-owner added nothing to the analysis in the paper. Likewise, the principal investigator of a project might insist on being included in the author list for any publications that come from this project. More lines on the CV.

Yet another way to increase the number of publications is to increase the number of journals. By one count, as of 2014 there were 28,100 scholarly peer-reviewed journals. Consider the mathematics. There are about 1 million faculty members at US colleges and universities at the BA level and higher, so that means there are about 36 prospective authors for each of these journals. A lot of these enterprises act as club journals. The members of a particular sub-area of a sub-field set up a journal where members of the club engage in a practice that political scientists call log-rolling. I review your paper and you review mine, so everyone gets published. Edited volumes work much the same way. I publish your paper in my book, and you publish mine in yours.
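The arithmetic behind that estimate is easy to check; a quick sketch, using only the essay’s own rough figures rather than any fresh data:

```python
# Back-of-the-envelope check of the figures above; both inputs are the
# essay's own estimates, not new data.
journals = 28_100        # peer-reviewed scholarly journals, circa 2014
faculty = 1_000_000      # US faculty at BA-granting institutions and above

print(round(faculty / journals))  # ~36 prospective authors per journal
```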

A lot of journal articles are also written in a highly formulaic fashion, which makes it easy to produce lots of papers without breaking an intellectual sweat. The standard model for this kind of writing is known as IMRaD. This mnemonic represents the four canonical sections for every paper: introduction (what’s it about and what’s the literature behind it?); methods (how did I do it?); results (what did I find?); and discussion (what does it mean?). All you have to do as a writer is to write the same paper over and over, introducing bits of new content into the tried and true formula.

The result of all this is that the number of scholarly publications is enormous and growing daily. One estimate shows that, since the first science papers were published in the 1600s, the total number of papers in science alone passed the 50 million mark in 2009; 2.5 million new science papers are published each year. How many of them do you think are worth reading? How many make a substantive contribution to the field?

 

OK, so I agree. A lot of scholarly publications – maybe most such publications – are less than stellar. Does this matter? In one sense, yes. It’s sad to see academic scholarship fall into a state where the accumulation of lines on a CV matters more than producing quality work. And think of all the time wasted reviewing papers that should never have been written, and think of how this clutters and trivialises the literature with contributions that don’t contribute.

But – hesitantly – I suggest that the incentive system for faculty publication still provides net benefits for both academy and society. I base this hope on my own analysis of the nature of the US academic system itself. Keep in mind that US higher education is a system without a plan. No one designed it and no one oversees its operation. It’s an emergent structure that arose in the 19th century under unique conditions in the US – when the market was strong, the state was weak, and the church was divided.

Under these circumstances, colleges emerged as private not-for-profit enterprises that had a state charter but little or no state funding. And, for the most part, they arose for reasons that had less to do with higher learning than with the extrinsic benefits a college could bring. As a result, the system grew from the bottom up. By the time state governments started putting up their own institutions, and the federal government started funding land-grant colleges, this market-based system was already firmly in place. Colleges were relatively autonomous enterprises that had found a way to survive without steady support from either church or state. They had to attract and retain students in order to bring in tuition dollars, and they had to make themselves useful both to these students and to elites in the local community, both of whom would then make donations to continue the colleges in operation. This autonomy was an accident, not a plan, but by the 20th century it became a major source of strength. It promoted a system that was entrepreneurial and adaptive, able to take advantage of possibilities in the environment. More responsive to consumers and community than to the state, institutions managed to mitigate the kind of top-down governance that might have stifled the system’s creativity.

The point is this: compared with planned organisational structures, emergent structures are inefficient at producing socially useful results. They’re messy by nature, and they pursue their own interests rather than following directions from above according to a plan. But as we have seen with market-based economies compared with state-planned economies, the messy approach can be quite beneficial. Entrepreneurs in the economy pursue their own profit rather than trying to serve the public good, but the side-effect of their activities is often to provide such benefits inadvertently, by increasing productivity and improving the general standard of living. A similar argument can be made about the market-based system of US higher education. Maybe it’s worth tolerating the gross inefficiency of a university system that is charging off in all directions, with each institution trying to advance itself in competition with the others. The result is a system that is the envy of the world, a world where higher education is normally framed as a pure state function under the direct control of the state education ministry.

This analysis applies as well to the professoriate. The incentive structure for US faculty encourages individual professors to be entrepreneurial in pursuing their academic careers. They need to publish in order to win honours for themselves and to avoid dishonour. As a result, they end up publishing a lot of work that is more useful to their own advancement (lines on a CV) than to the larger society. Also, following from the analysis of the first problem I introduced, an additional cost of this system is the large number of faculty who fall by the wayside in the effort to write their way into a better job. The success of the system of scholarly production at the top is based on the failed dreams of most of the participants.

But maybe it’s worth tolerating a high level of dross in the effort to produce scholarly gold – even if this is at the expense of many of the scholars themselves. Planned research production, operating according to mandates and incentives descending from above, is no more effective at producing the best scholarship than are five-year plans in producing the best economic results. At its best, the university is a place that gives maximum freedom for faculty to pursue their interests and passions in the justified hope that they will frequently come up with something interesting and possibly useful, even if this value is not immediately apparent. They’re institutions that provide answers to problems that haven’t yet developed, storing up both the dross and the gold until such time as we can determine which is which.

 

Posted in Education policy, History of education, School organization, School reform

Michael Katz — Alternative Forms of School Governance

This post is my reflection on a classic piece by my former advisor, Michael Katz.  It’s a chapter in Class, Bureaucracy, and Schools called “Alternative Proposals for American Education: The Nineteenth Century.”  Here’s a link to a PDF of the chapter.

Katz CBS

The core argument is this.  In American politics of education in the 19th century, there were four competing models of how schools could be organized and governed.  Katz calls them paternalistic voluntarism, democratic localism, corporate voluntarism, and incipient bureaucracy.  By the end of the century, the bureaucratic model won out and ever since it has constituted the way public school systems operate.  But at the time, this outcome was by no means obvious to the participants in the debate.

At one level, this analysis provides an important lesson in the role of contingency in the process of institutional development.  Historical outcomes are never predetermined.  Instead they’re the result of complex social interactions in which contingency plays a major role.  That is, a particular outcome depends on the interplay of multiple contingent factors.

At another level, his analysis unpacks the particular social values and educational visions that are embedded within each of these organizational forms for schooling.

In addition, the models Katz describes never really went away.  Bureaucracy became the norm for school organization, but the other three forms persisted: in public school systems, in other sectors of modern schooling, and in educational policy.

This is a piece I often used in class, and I’m drawing on class slides in the discussion that follows.

First, consider the characteristics of each model:

  • Paternalistic voluntarism
    • Pauper school associations as the model (which preceded public schools)
    • This is a top-down organization: we educate you
    • Elite amateurs ran the organization
  • Democratic localism
    • The small-town district school as the model
    • Purely public in control and funding, governed by an elected board of lay people: we educate ourselves
    • Anti-professional, anti-intellectual, reflecting local values
  • Corporate voluntarism
    • Private colleges and academies as the model
    • Independent, local, adaptable; on the border between public and private
    • Funded by student tuition and donations
    • Flexible, anti-democratic
    • Owned and operated by an elite board of directors
  • Incipient bureaucracy
    • An interesting composite of the others
    • As with paternalistic voluntarism, it’s a top-down model with elite administration
    • As with corporate voluntarism, it provides some autonomy from democratic control
    • But as with democratic localism, it answers to an elected board, which exerts formal control

Questions to consider:

    • How have paternalistic voluntarism, corporate voluntarism, and democratic localism persisted in modern schooling?
    • What difference does this make in how we think about schools?

Consider how all of these forms have persisted in the present day:

  • Paternalistic voluntarism
    • Means tests for educational benefits (Chapter I)
    • School reformers’ emphasis on the education of Other People’s Children (e.g., no-excuses charter schools)
    • Teach For America
  • Corporate voluntarism
    • Private colleges and private schools continue this model
  • Democratic localism
    • Decentralized control of schools at the district level — 15,000 school districts that hire teachers, build schools, and operate the system
    • Elected local school boards
  • Incipient bureaucracy
    • Still the most visible element of the current structure of schooling

Katz sees democratic localism as the good guy and bureaucracy as the bad guy.

  • Is this true?  Consider the downsides of democratic localism
    • Parochialism, racism, restricted opportunity, weak academics
    • Desegregation of schools relied on federal power to override the preferences of local school boards
  • Katz is critical of ed bureaucracy from the left, a reflection of when he published the book (1971)
  • But the right has more recently developed a critique of ed bureaucracy, which is behind the choice movement: free schools from the government school monopoly.

Question: What is your take on the role that the educational bureaucracy plays in schooling?

My own take is this:

  • Bureaucracy is how we promote fairness in education
    • Setting universal procedures for everyone
  • Bureaucracy is how we promote democratic control over a complex educational institution
    • Setting a common set of standards transmitted by the elected board
  • Bureaucracy is how we protect schools from pure market pressures (see Philip Cusick’s book, The Educational System)
    • As consumers try to manipulate the system for private ends
    • It’s how we protect teachers from the unreasonable demands of parents
  • Thus bureaucracy serves as a bastardized bastion of the public good
  • Consider the modern school system – elected board and bureaucratic administration – as an expression of liberal democracy, with the democratic and liberal elements constraining each other
    • Bureaucracy expresses democratic will and enforces it, restricting individual choice
    • Bureaucracy provides due process for adjudicating individual choice, protecting it from the tyranny of the majority

Next week I will be publishing a new piece of my own, which picks up this last part of the analysis.  It’s called, “Two Cheers for School Bureaucracy.”  Stay tuned.