Posted in Credentialing, Curriculum, Educational goals, History of education

Resisting Educational Standards

This post is a piece I published in Kappan in 2000.  Here’s a link to the PDF.

It’s an analysis of why Americans have long resisted setting educational standards.  Of course my timing wasn’t great.  Just one year later, the federal government passed the landmark No Child Left Behind law, which established just such a system of standard mandates.  Oops.

This small faux pas aside, however, I think the essay stands up pretty well (though I was struck by my inordinate fondness for dashes).  NCLB caused a big stir and eventually generated a counter-movement that resulted in its repeal and replacement by the more lenient and state-centered Every Student Succeeds Act (ESSA).

Here’s the core argument of a rather long piece (6,000 words):

The history of such resistance suggests that there are three factors in particular that have made standards such a hard sell: a commitment to local control of schools, a commitment to expansion of educational opportunity, and a commitment to form over substance in the way we think about educational accomplishment. All three of these factors, which I treat below, can be traced in large part to our preference for one particular purpose of education: we have increasingly held the view that education is a private good, which should serve the individual interests of educational consumers, rather than a public good, which should serve the broader public interest in producing competent citizens and productive workers.

If you know my work, this is going to sound familiar.  But for one thing, it’s a useful summary of issues from my first two books (The Making of an American High School and How to Succeed in School without Really Trying).  You’ll find some familiar themes here:  credentialism, conflicting goals of schooling, education as a public and private good, and the growing dominance of the latter.

In addition, however, it delves into some interesting issues in some detail.  One such issue is the longstanding tension between schooling as the vehicle for social opportunity and schooling as the bastion of academic rigor.  The votes are in and it’s clear that over the long history of US schools, opportunity has consistently trumped [sic] rigor.  Social advancement over academic learning is a consistent priority.  Which is part of what makes the standards movement such a radical reform effort.  It’s upending the whole purpose of schooling.

Another related issue is the longstanding pattern of emphasizing form over substance, doing school over learning the formal curriculum.  And a central player in this drama is the Carnegie unit, which doesn’t get as much credit [sic] as it deserves for shaping the form and function of US schooling.  We move through the system by accumulating credit hours, which measure not how much we learned but how many hours we spent in class.  You get credit for seat time.  It has become the standard currency of the education enterprise.

What is so wonderful and so terrible about the credit-hour system is the way it eases access to education at the expense of competence in subject matter. For people concerned about establishing and enforcing curriculum standards in education, this system is a disaster. But its attractions are clear. It effectively makes all courses functionally equivalent to all others because they are all measured in the same currency of credit hours. It also effectively makes all institutions at a certain level functionally equivalent to all others because they all offer the same diplomas or degrees. All you need to do is accumulate enough grades and credits and degrees — here, there, or anywhere — and you can present yourself as possessing the functional equivalent of an education.

Those who want to establish academic standards are in many ways trying to roll back the tide of credentialism that has swept American education along for many, many years.

Hope you find this moldy oldie useful.

Resisting Educational Standards

David F. Labaree

The matter of setting standards for American education is certainly quite visible these days, but much of what we hear about it is not very enlightening. The talk is frequently filled with ideological heat rather than with critical light, and the tone of the discussion is more often nostalgic than realistic. In addition, the pitch in favor of standards is currently so strong that it may well leave a number of listeners wondering why such an obviously needed and beneficial reform wasn’t undertaken a long time ago. But the fact is that the effort to establish educational standards has always been an uphill fight in this country.

In light of these circumstances, it is useful to examine why Americans have so vigorously resisted educational standards over the years. The history of such resistance suggests that there are three factors in particular that have made standards such a hard sell: a commitment to local control of schools, a commitment to expansion of educational opportunity, and a commitment to form over substance in the way we think about educational accomplishment. All three of these factors, which I treat below, can be traced in large part to our preference for one particular purpose of education: we have increasingly held the view that education is a private good, which should serve the individual interests of educational consumers, rather than a public good, which should serve the broader public interest in producing competent citizens and productive workers.

Preserving Local Control

First, consider our traditional commitment to preserving local control. The core issue here is the wide and deep strain of libertarian sentiment that lies at the heart of the American psyche. The urge to preserve individual liberty is a key to understanding American society, and it is what defines our distinctive approach to politics, economics, and education. “Don’t tell me what to do” has long been our national slogan. By it we have meant in particular that government should keep off our backs — especially government that is far removed from our local community. All you need to do is remember that this nation was born of an uprising against a colonial government that tried to impose modest taxes on it from afar.

In education, this sentiment came to be expressed as a staunch defense of local control of our schools. During most of the 19th century, the local school was the primary unit of educational governance for most Americans. An individual community built a school, hired a teacher, raised money through local taxes and fees, and implemented education on its own terms. Outside help was neither offered nor welcomed. This was the ultimate in local control. Even in large cities, control of education tended to rest at the ward level.

Consider some numbers that suggest the radical degree of decentralization that has long characterized American education. It was not until 1937 that we started recording information about the number of individual school systems in the country. In that year, which was some 40 years after the start of a massive effort by reformers to consolidate districts into larger administrative units, there were about 120,000 individual school districts in the US. This meant that on average there were only two schools per district. Now, that is really local control. Even now, after consolidation has continued for another 60 years, we still have about 15,000 separate school districts — each with primary control over financing, staffing, and setting curriculum standards for our schools. 1

Certainly state governments have taken steps over the years to assert greater control over these matters in K-12 schooling, and even the federal government has made tiny and tentative moves in this direction. But all these efforts have been undertaken in the face of enormous resistance by local communities, which have vigorously fought to preserve the autonomy of their schools. A modest proposal by President Clinton for vague and voluntary national standards provoked strong opposition in Congress and elsewhere. A variety of efforts on the part of states to introduce some forms of curriculum guidelines and to reinforce them with statewide testing have stirred up strong reactions at the local level. Reinforcing this local response to setting standards has been the hostility toward government that has characterized the politics of the last two decades. Increasingly, elected officials have won office on a platform of being relentlessly anti-government. They see their primary job as an effort to protect local communities and individual citizens from the intrusion of government control.

In light of this long history of opposition to government interference in local affairs, it is not surprising that efforts to set educational standards at the national or state levels have not proceeded very far. Standards are seen as an infringement of individual liberty, and efforts to impose them run into a classic American response: “Don’t tell me what to do.”

Expanding Educational Opportunity

Consider a second factor that has shaped American resistance to educational standards: our long-standing commitment to expanding educational opportunity. The American track record in this respect is quite clear. In the last 200 years, school enrollments in the U.S. have expanded faster than in any other country. Demand for educational opportunity has simply been insatiable. As each level of education has started to fill up, the demand has grown for access to the next higher level. In the early 19th century, primary education was the subject of expansion. Pressure for access to education shifted to the grammar school later in the century, to the high school around 1890, and finally to the college and university today. For elected officials it has been political suicide to attempt to block or even to slow this process — even though they can point to the huge fiscal burden imposed by this expansion.

Consider some numbers that capture the sheer size and speed of this expansion of educational opportunity. High school enrollments doubled every decade between 1890 and 1940, when high school attendance had become universal for American teenagers. Meanwhile, over the whole course of the 20th century, enrollments in higher education have grown at a relatively steady rate of about 50% every decade, from about one-quarter million students in 1900 to about 15 million today. The result is that college attendance, like high school attendance half a century ago, has become the normal expectation for American families. And as college enrollments have started to level off in the 1990s, enrollments in graduate schools have been booming, so the pattern of expanding educational opportunity shows no signs of letting up. 2

This trend has had one rather obvious consequence for educational standards. The push has clearly been to expand the quantity of access to schooling rather than to improve the quality of learning that goes on there. It is very hard to enhance quantity and quality of education simultaneously, but Americans have never really tried to do so. We have always been more intent on making sure that our children receive more years of education and higher level diplomas than we ourselves received. After all, credits and degrees are what have been so important in providing an entree to good jobs. Under these circumstances, who cares about what students learn in school as long as school credentials continue to pay economic and social rewards to those who have acquired them?

Note that any effort to establish and enforce standards for teaching and learning in American education is likely to have the consequence of restricting access to the things that historically Americans have most wanted from their education system. Raising standards means making it harder for some students — maybe many students — to get good grades, get promoted, acquire a diploma, and gain entrance to college or graduate school. It has been firmly established over the years that to restrict access to education at any level is just plain un-American. And this is particularly true when the restriction falls on my children rather than on other people’s children. In this sense, the standards movement is standing in the face of a long history of easy access and modest requirements for academic performance, a history that threatens to run right over any reformer who blocks its path. If the American commitment to local control sends the standards movement the message “Don’t tell me what to do,” the commitment to expanding educational opportunity sends the parallel message, “Don’t get in my way.”

Consider another problem that the tradition of expanding opportunity poses for the standards movement. In this case, the problem is not a form of resistance but a kind of temptation — a temptation to approach the standards issue from a dangerously misleading historical perspective. Much of the rhetoric of the standards movement has a distinctively nostalgic air to it. It often sounds as if we are pining for a return to a golden age, a time when schools were tough and students had to struggle to meet their academic standards.

The big reason for not returning to standards from the good old days is that these standards will not do us much good at the beginning of the 21st century. For one thing, the standards from the old days are largely useless to us because the conditions that allowed schools to impose these standards no longer exist. We have the rapid expansion of educational opportunity to thank for this, and — all in all — thanks are probably in order here. For example, take the case of a leading 19th-century high school that I have studied in some depth. 3 Central High School in Philadelphia was the only high school for boys in the nation’s second-largest city. It enrolled 500 young men out of a city population of one million. To gain admission, students had to pass a grueling entrance examination, and three quarters of those admitted ended up flunking out before graduation. This was one tough school. In fact, Central could be the poster child for the standards movement — except that it is not clear that we can learn anything from this case that would actually help us today.

Central was an extremely attractive place to go, and everyone wanted to get in — in large part because it was the only one of its kind. Nowadays, however, everybody is required by law to attend high school, so students see enrollment as a burden — not a privilege. Eager volunteers have turned into reluctant draftees. At the same time, Central could pick its students from the top 2% of the school population, choosing those who were both better able and more willing to succeed in its demanding academic environment. And it could throw out anyone who could not or would not meet the school’s standard. Today, high schools have to accept all students within a particular geographical area, whatever their ability or attitude toward study. And these schools are not permitted to get rid of students simply because they don’t earn top grades. Why? Because we have decided that we want everyone to have a high school education and not just the privileged few. 4

Another aspect of the golden age that makes it of little use to us is this: the standards of yesteryear rewarded forms of learning that we don’t care as much about these days. In the 19th and early 20th centuries, learning in American schools, for the most part, meant memorizing the text, and academic achievement meant successfully reciting the text back to the teacher, either orally or in writing. Recall that one of the stunning pedagogical innovations of the late 19th century was the introduction of the lecture to American classrooms, when a few daring teachers actually sought to explain the text in their own words. So students may have had to work hard, and academic success may have been difficult to attain. But it is not clear that students learned more. At least it is not clear that they learned more of the kinds of things that we tend to value today. For example, in the good old days students memorized the names of all the major rivers in the world; today we try to teach them something about how ecological systems work. Do we want to go back? I don’t think so.

Standards under the old system were easy to establish, in part because they applied to so few and in part because they were based on a narrow and mechanical notion of learning. The truly hard task is to establish standards that apply to the many rather than the few — without destroying the benefits that broad educational access has brought to this country — and to do so in a way that rewards forms of learning that are broadly useful for the kind of society that our graduates will enter.

Form over Substance

This discussion of local control and expanding educational opportunity leads us to a third major factor that has caused trouble for educational standards: Americans’ longstanding commitment to educational form over substance. By this I mean our system’s emphasis on measuring educational achievement through seat time and credentials rather than through academic performance. That is, we measure success by the amount of time we spend sitting in classrooms — placing ourselves at risk of getting an education — rather than by the amount of knowledge and skill that we actually acquire.

Now it may sound strange to talk about commitment to educational formalism, rather than perhaps treating this as an unintended consequence or a simple blind spot in our national vision of education. But I think that commitment is the right word because we are talking about a component of our system of education that is so basic and so visible that it helps define what is distinctive about that system in our own eyes and in the eyes of the world. Also, this commitment is so fervently defended by educators, students, and citizens alike that we cannot realistically think of it as a simple accident of history. We have consciously created an education system based on attaining formal markers of success — grades, credits, and degrees — rather than one based on acquiring substantive knowledge. And we proudly proclaim to the world the advantages of this system.

But we did not always value form over substance in American education. In the 19th century, we measured educational success through students’ performance on tests of their knowledge of subject matter. Consider again the case of Philadelphia’s Central High School. The only way to get into this institution was to pass an examination that was so difficult it eliminated the large majority of the students who took it.

However, this performance-based model of achievement ran into a powerful political force — the emerging demand by educational consumers for broader access to the high school. And when standards came into conflict with educational opportunity, standards lost in a rout. Under intense political pressure, the board of education in Philadelphia first began to set high school admission quotas for the various grammar schools in the city rather than adhering to a single cut score on the entrance examination. Then the board began sharply increasing the enrollment at Central. When this was still not enough to meet demand, the board started opening a series of new high schools. By 1912, Central High School was just one of many regional high schools in the city and, like the others, had to admit anyone who had succeeded in graduating from grammar school and lived in its attendance area. In short, the examination was discarded, and in its place came a system of admission by diploma. 5

A similar process was also playing itself out in colleges at the turn of the century. Like high schools, colleges had previously tended to admit students by an examination administered by the college itself. But this practice had become unworkable as the number of students seeking admission grew larger.

There were two obvious alternatives, both of which were pursued. One was to invent a general test across colleges that all prospective students could take, and, to this end, the College Entrance Examination Board was created and began administering exams. The other was to start accepting a high school diploma as proof of qualification for admission. Both methods have survived to the present day, but admission by diploma has become the dominant form. The College Board has continued offering entrance examinations for prospective college students, but these tests quickly devolved into tests of “aptitude” rather than subject matter. Today’s SAT helps sort students by something similar to I.Q., but it does nothing to measure how much students have learned in their high school courses. So form has also taken precedence over substance in college admissions.

Another event that helped make this change possible was the invention of the infamous Carnegie unit at the turn of the century — thanks to the collaboration of the Carnegie Foundation, the National Education Association, and the College Board. A Carnegie unit was defined as a quarter of the total high school instructional time for a student in a given year. The collaborators established a standard of 14 Carnegie units across specified subjects (that is, 3½ years of high school instruction in these subjects) as a prerequisite for college admissions. As a result of this invention, the official measure of curriculum mastery became the amount of time students spend in class. It was no longer what they learned but how long they were subjected to the possibility of learning. This was a momentous step for American education.

The implications of this change are clear. The Carnegie unit set the standard for much of what became distinctive about the American education system. This is a system that stresses attendance over performance, that encourages students to pursue the tokens of academic success rather than to demonstrate mastery of academic content. The Carnegie unit quickly evolved into the credit-hour system that is so fundamental to our form of education today. Students who accumulate appropriate grades in a course earn credit for that course equal to the number of hours per week that it meets. Students who accumulate a fixed number of credit hours across appropriate curriculum categories earn a diploma. And this diploma then qualifies those students for entry into the next level of education or into a particular level of job.

What is so wonderful and so terrible about the credit-hour system is the way it eases access to education at the expense of competence in subject matter. For people concerned about establishing and enforcing curriculum standards in education, this system is a disaster. But its attractions are clear. It effectively makes all courses functionally equivalent to all others because they are all measured in the same currency of credit hours. It also effectively makes all institutions at a certain level functionally equivalent to all others because they all offer the same diplomas or degrees. All you need to do is accumulate enough grades and credits and degrees — here, there, or anywhere — and you can present yourself as possessing the functional equivalent of an education.

Those who want to establish academic standards are in many ways trying to roll back the tide of credentialism that has swept American education along for many, many years. In launching into this effort, standards reformers need to realize that they are attacking Americans’ God-given right to the credits and diplomas of their choice. Seat time is an essential corollary to educational opportunity because it is precisely what makes educational accomplishment so easy for us. That, in turn, is what makes the system so hard to roll back; it is also what makes it so attractive to others around the world.

Consider this example from the Persian Gulf. In the last few years, Kuwait has operated two parallel systems of secondary education. Under the old system, students are promoted from grade to grade by passing examinations that test their understanding of the subject matter they were taught that year, and they are admitted to the university only after passing a comprehensive examination on the entire high school curriculum. The second system, introduced just a few years ago, allows students to be promoted on the basis of course grades, to graduate from high school on the basis of accumulating the proper number of credits, and to be admitted to the university on the basis of a high school diploma and grade-point average. Guess which system is suddenly the most popular with students. The American-style system, of course, because it makes it much easier for students to graduate from high school and gain admission to the university. It is also the same system whose graduates now find themselves struggling in vain to keep up with the intellectual demands of university study. 6

All this evidence suggests another slogan that helps define the reasons for resistance to educational standards in the U.S. Indeed, it follows naturally from the first two I have suggested: “Don’t make me learn, I’m trying to graduate.”

Conflicting Goals for Education

I have pointed to three factors that have helped create an American system of education that is highly resistant to educational standards, and now I would like to suggest an overall framework that helps make sense of this situation. Historically, Americans have been of mixed mind about the purposes of public education. Consider three such purposes — democratic equality, social efficiency, and social mobility. These goals have been in conflict over the years, and priorities have shifted over time from one to another and back again. Let me briefly point out the nature of each of these goals and their impact on education. Then I want to suggest how, from the perspective of these three goals, we can understand the reasons for chronic resistance to educational standards in this country and also why some ways of pursuing standards are more desirable than others.

One goal is democratic equality. From this perspective, the purpose of schooling is to produce competent citizens. The idea is that all of us as citizens need to be able to think critically, understand the way our society works, and have sufficient general knowledge to be able to make valid judgments about the essential issues in democratic political life (as voters, jurors, and so on). At the same time, democracies also require citizens whose social differences are modest enough that they can reach agreement about the policies shaping political and social life. Schools, from this angle, are the prime mechanism for providing a shared level of competence and a common set of social experiences and cultural understandings essential for an effective democracy. Much of what is most familiar and enduring in the American system of education can be traced to this goal: the neighborhood elementary school and the regional comprehensive high school, populated by students from the entire community; whole-class instruction and social promotion; the stress on general over specialized education; and the emphasis on inclusion over separation of students.

Another goal is social efficiency. From this point of view, the purpose of education is less to educate citizens than to train productive workers. The idea is that economic growth requires workers with skills that are matched to particular occupational roles. As a result, schools need to provide specialized kinds of learning for alternative career paths, sort students according to predicted future careers, and then provide them with the specialized learning they need. Signs of the impact of this goal are all around us: in the stress on vocational programs in high schools and colleges, in the persistent practice of tracking and ability grouping, and in the prominent political rhetoric about education as investment in human capital.

The contrast between these two conceptions of education is striking. Should the schools prepare people for political or economic life? Provide general or specialized instruction? Promote similarity or difference? But despite these differences, the two goals are also strikingly similar in that they both see education as a public good. The nature of a public good is that it affects everyone in the community: you can’t escape it, even if you want to. In this case, everyone gains if a public school system produces competent citizens and productive workers, and everyone loses if it fails to do so. That includes people who do not have children in public schools.

What is most distinctive about the third educational goal, social mobility, is that it construes education as a private good. 7 If the first goal sees education from the viewpoint of the citizen and the second from that of the taxpayer or employer, the third takes the perspective of the individual consumer of education. From this angle, education exists because of what it can do for me or my children, not because of its benefits for democracy or the economy. And the historical track record on this point is clear: people who acquire more diplomas get better jobs. Educational credentials give individuals an advantage over competitors, and that advantage pays off handsomely, helping some to get ahead and others to stay ahead.

The key point is that if education is going to serve the goal of social mobility effectively, it has to provide some people with benefits that others do not get. As a private good, education benefits only the owner, serving as an investment in my future, not yours; in my children, not other people’s children. This calls for an education system that focuses heavily on grading, sorting, and selecting students. Such a system needs to provide individuals with forms of social distinction that mark them off from the pack by such means as placing them in the top reading group, the gifted program, a higher curriculum track, or a more prestigious college.

The Roots of the Problem: Chronic Consumerism

This analysis of conflicting goals for American education can help us in our thinking about the problem of educational standards. It can help explain the longstanding and powerful resistance to standards, and it can also help explain why some approaches to establishing standards are quite different from others (if we consider which educational goals they are designed to advance).

The dominant influence of the goal of social mobility stands behind the three forms of resistance to educational standards that I have identified above. In part, the impulse toward local control comes from a strong American political tradition that focuses on the defense of individual liberties. But the hostility toward standard setting at the state or national level goes beyond a political defense of the local school board and town council. It also has a consumer dimension. As cautious consumers of education, we want to protect the value of the diplomas that our children acquire and to preserve the social advantages that education currently brings to them. We don’t want anyone to tell us what kind of education our children can get — not state governments or Congress, not state tests or national tests, and certainly not some organization of historians or math teachers. In particular, we don’t want any system of standards that might restrict access to the educational goods our children need in order to get ahead or stay ahead. Instead, we want a system like the current one, which allows our children to gain a competitive advantage over other people’s children.

The last thing we think we need is a standards effort that equalizes educational achievement and therefore puts my child and yours on an equal footing. As a result, we are deeply concerned that standards might force learning back into education. We don’t want anything that will intrude on the current system of rewarding students with diplomas if they serve their time and sit long enough in the right classrooms. As consumers, we feel that schools have a sacred duty to offer our children the grades, credits, and degrees they want, without imposing performance tests or learning requirements that might interfere with this process of individual advancement.

I should also make one final point: consider for a moment the implications of this historical sketch for people who see standards as the reform we need in American education right now — in spite of all the factors working against their adoption. Efforts to establish standards will have vastly different consequences for education depending on which approach we take. A useful way to think about this issue is to examine what standards might look like if they emerged from one of the three goals for American education rather than another.

From the perspective of the goal of democratic equality, the point of standards is to raise the average cultural competence of American citizens and to reduce the radical cultural differences that now exist between the advantaged and the disadvantaged. This is the kind of argument we often hear from people like E. D. Hirsch, Jr., and from the various subject-matter groups that are now promoting standards. The idea is to provide all citizens with the capacities they need in order to carry out their political roles as voters and jurors. The idea is also to give everyone access to the same cultural resources, which will allow them to function as members of the same community, rather than to see themselves — as so many in our current society now do — as members of subgroups that are sharply divided by cultural, racial, and physical barriers.

From the perspective of the goal of social efficiency, the point of standards is to raise the level of human capital in American society. This means the standards should help prepare workers for the full array of jobs that make up the American economy by giving them the skills they need in order to carry out these jobs productively. It is the kind of argument we often hear from presidents, governors, and corporate leaders, who are worried about the economic consequences of inadequate education. The difference between this perspective and the pursuit of democratic equality is striking. Standards for democratic equality focus on higher levels of shared knowledge and skill, but standards for social efficiency focus on specialized training for particular jobs. This means radically different standards, for example, for the workers who assemble cars, for the engineers who design them, and for the executives who manage the process.

Despite these differences, however, these two approaches to standards both treat education as a public good, and so they both see educational standards as a way to provide benefits to the public as a whole. The aim is to enhance the competence of citizens and the productivity of workers in order to enrich the political and economic life of the larger community.

In this way, the social mobility approach to educational standards is strikingly different. The aim from this perspective is to preserve the advantages and increase the distinctions that arise from the way individual consumers currently work the education system. Schooling is already organized in a manner that enhances consumer rights at the expense of public benefits. We have always been better at sorting students than at teaching them. A consumer-based approach to educational standards is one that stresses this sorting function, and all too many of the proposals floating around the standards movement bear this mark. You can tell this kind of approach from the others because it tends to put special emphasis not on improving skills but on distinguishing winners from losers. The focus is on labeling rather than learning — giving gold stars to those who pass through the promotional gates, who get into the gifted program or the advanced placement class, and who win a special endorsement on their high school diploma. And giving lumps of coal to those who fail to make the grade in any of these ways.

This kind of consumerism is also what leads us to misread history and try to establish standards by returning to the good old days. As I pointed out earlier, the standards of yesteryear — to the extent that they really were higher, which is doubtful — were grounded in an education system that was nothing like ours. At the high school and college levels, this system could afford to be highly selective and brutally competitive because it served such a tiny proportion of the population. Those pushing the consumer perspective within the current standards movement would like to move several steps back in that direction because more selectivity and greater attrition would improve the competitive position of their children — assuming, of course, that the bodies falling along the wayside would be other people’s children.

Finally, a standards effort guided by consumerism would not only elevate private over public educational benefits but would also reinforce an already prominent and devastatingly harmful tendency in American education: the tendency to value form over substance. From the perspective of democratic equality or social efficiency, the aim of the standards movement is to improve the quality of learning in schools. But from the perspective of social mobility, the aim of standards is not to improve learning but to make it a little harder for everyone else to obtain the grades, credits, and degrees that are the symbols of academic success. The effect is to further debase education by turning it into an ever more intense game of “how to succeed in school without really learning.” However, I hope that this is not the primary sentiment of the people who are closest to American education and know it best. As citizens and educators, I trust that we will not pursue this consumerist vision of educational standards, which is so harmful both to the quality of education and to the quality of life in American society.

  1. National Center for Education Statistics, Digest of Education Statistics (Washington, D.C.: U.S. Department of Education, 1995), Table 88.
  2. Ibid., Table 3.
  3. David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven, Conn.: Yale University Press, 1988).
  4. David F. Labaree, “Raising Standards in the American High School: Why the Good Old Days Are Not Much Help,” in idem, How to Succeed in School Without Really Learning: The Credentials Race in American Education (New Haven, Conn.: Yale University Press, 1997), pp. 75-91.
  5. Labaree, The Making of an American High School.
  6. Hend Almoian, “A Comparison of Alternative Systems of Secondary Education in Kuwait,” unpublished paper, College of Education, Michigan State University, East Lansing, 1998.
  7. The discussion in this section is based on the argument in my recent book, How to Succeed in School Without Really Learning.
Posted in Educational goals, History of education, Systems of Schooling

Politics and Markets: The Enduring Dynamics of the US System of Schooling

This post is a piece I just wrote, which will end up as a chapter in a book edited by Kyle Steele, New Perspectives on the Twentieth Century American High School.  It will be published by Palgrave Macmillan as part of Bill Reese and John Rury’s series, Historical Studies in Education.  Here is a link to a PDF of the chapter.  This essay is dedicated to my old friend and former colleague, David Cohen, who died earlier this year.

Writing this chapter was an opportunity for me to explore how my thinking about American schooling emerged from the analysis of an early high school in my first book and then developed over the years into a broader understanding of the dynamics that have shaped the history of the US educational system.  Here’s an overview of the argument:

In this essay, I explore how the tension between politics and markets, which David Cohen uncovered in my first book, helps us understand the central dynamics of the American system of schooling over its 200-year history. The primary insight is that the system, as with Central High, is at odds with itself. It’s a system without a plan. No one constructed a coherent design for the system or assigned it a clear and consistent mission. Instead, the system evolved through the dynamic interplay of competing actors seeking to accomplish contradictory social goals through a single organizational machinery.

By focusing on this tension, we can begin to understand some of the more puzzling and even troubling characteristics of the American system of schooling. It’s a radically decentralized organizational structure, dispersed across 50 states and 15,000 school districts, and no one is in charge. Yet somehow schools all over the country look and act in ways that are remarkably similar. It’s a system that has a life of its own, fends off concerted efforts by political reformers to change the core grammar of schooling, and evolves at its own pace in response to the demands of the market. Its structure is complex, incoherent, and fraught with internal contradictions, but it nonetheless seems to thrive under these circumstances. And it is somehow able to accommodate the demands placed on it by a disparate array of educational consumers, who all seem to get something valuable out of it, even though these demands pull the system in conflicting directions. It has something for everyone, it seems, except for fans of organizational coherence and efficiency. In fact, one lesson that emerges from this focus on tensions within the system is that coherence and efficiency are vastly overrated. Conflict can be constructive.

This essay starts with the tension between politics and markets that I explored in my first book and then builds on it with analyses I carried out over the next thirty years in which I sought to unpack this tension. These findings were published in three later books: How to Succeed in School Without Really Learning: The Credentials Race in American Education (1997); Someone Has to Fail: The Zero-Sum Game of Public Schooling (2010); and A Perfect Mess: The Unlikely Ascendancy of American Higher Education (2017). The aim of this review is to explore the core dynamics of the US educational system as it emerges in these works. It is a story about a balancing act among competing forces, one that began with a conversation about Central High with my friend David Cohen.

The revelation that came to me as I was working on these later books was that the form and function of the American high school served as the model for the educational system.  The nineteenth-century high school established the mix of common schooling at one level and elite schooling at the next level that came to characterize the system as a whole.  And the tracked comprehensive high school that emerged in the early twentieth century provided the template for the structure of US higher education, which, like Central in 1920, is both highly stratified and broadly inclusive.  Overall, it is a system that embraces its own contradictions by providing something for everyone – at the same time providing social access and preserving social advantage. 

I hope you like it.

Politics and Markets:

The Enduring Dynamics of the US System of Schooling[1]

 David F. Labaree

Sometimes, when you’re writing a book, someone else needs to tell you what it’s truly about. That is what happened to me as I was writing my first book, published in 1988: The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939. I had just completed the manuscript when David Cohen, my colleague at the Michigan State University College of Education, generously offered to read the full draft and give me comments on it. As we sat together for two hours in my office, he explained to me the point I was trying to make in the text but had failed to make explicit. Although the pieces of the story I presented were interesting in themselves, he said, they fell short of forming a larger interpretive scheme. The elements of this larger story were already there, but they were just below the surface. Our conversation showed me that the heart of the story my book told about this high school revolved around an ongoing tension between politics and markets, a tension that shaped its evolution.

Central High was created as an expression of democratic politics. In this role, it was an effort to create informed citizens for the new republic. But once it was launched, it took on a new role, as a vehicle for conferring social status on the highly select group of students who attended. Its subsequent history was a struggle between these two visions of the school, as political pressures mounted to give future students greater access to the high school credential, while the families of current students sought to preserve the exclusivity that provided them with social advantage.

At the same time that David told me what my book was about, he also told me what it was not about. As I saw it, the empirical core of the book was a quantitative dataset I had compiled of 1,834 students who attended the school during census years between 1840 and 1920. I had coded the information from school records, linked it to family data from the census, punched it into IBM cards (remember those?), and analyzed it at length with statistical software. What the data showed was that—unlike the contemporary high school, where social origins best explain who graduates and who drops out—the determining factor at Central was grades. This was my big reveal. But that day in my office, David pointed out to me that all this data—recorded in no fewer than thirty-six tables—added up to a footnote to the statement, “Central High School was a meritocracy.” In total, this part of the study took two years of my still short life. Two years for one footnote.

Needless to say, at the time I struggled to accept either of David’s comments with the gratitude they deserved. He was right, but I was devastated. First, the book I thought was finished would now require a complete rewrite, so I could weave the book’s central theme back into the text. And second, this revision would mean confining the hard-won quantitative analysis to a single chapter, because the most interesting material turned out to be elsewhere. In the rush to display all my data, I had ended up stepping on my punchline.

In this essay, I explore how the tension between politics and markets, which David Cohen uncovered in my first book, helps us understand the central dynamics of the American system of schooling over its 200-year history. The primary insight is that the system, as with Central High, is at odds with itself. It’s a system without a plan. No one constructed a coherent design for the system or assigned it a clear and consistent mission. Instead, the system evolved through the dynamic interplay of competing actors seeking to accomplish contradictory social goals through a single organizational machinery.

By focusing on this tension, we can begin to understand some of the more puzzling and even troubling characteristics of the American system of schooling. It’s a radically decentralized organizational structure, dispersed across 50 states and 15,000 school districts, and no one is in charge. Yet somehow schools all over the country look and act in ways that are remarkably similar. It’s a system that has a life of its own, fends off concerted efforts by political reformers to change the core grammar of schooling, and evolves at its own pace in response to the demands of the market. Its structure is complex, incoherent, and fraught with internal contradictions, but it nonetheless seems to thrive under these circumstances. And it is somehow able to accommodate the demands placed on it by a disparate array of educational consumers, who all seem to get something valuable out of it, even though these demands pull the system in conflicting directions. It has something for everyone, it seems, except for fans of organizational coherence and efficiency. In fact, one lesson that emerges from this focus on tensions within the system is that coherence and efficiency are vastly overrated. Conflict can be constructive.

This essay starts with the tension between politics and markets that I explored in my first book and then builds on it with analyses I carried out over the next thirty years in which I sought to unpack this tension. These findings were published in three later books: How to Succeed in School Without Really Learning: The Credentials Race in American Education (1997); Someone Has to Fail: The Zero-Sum Game of Public Schooling (2010); and A Perfect Mess: The Unlikely Ascendancy of American Higher Education (2017). The aim of this review is to explore the core dynamics of the US educational system as it emerges in these works. It is a story about a balancing act among competing forces, one that began with a conversation about Central High with my friend David Cohen.

The revelation that came to me as I was working on these later books was that the form and function of the American high school served as the model for the educational system.  The nineteenth-century high school established the mix of common schooling at one level and elite schooling at the next level that came to characterize the system as a whole.  And the tracked comprehensive high school that emerged in the early twentieth century provided the template for the structure of US higher education, which, like Central in 1920, is both highly stratified and broadly inclusive.  Overall, it is a system that embraces its own contradictions by providing something for everyone – at the same time providing social access and preserving social advantage. 

Politics and Markets and the Founding of Central High

To understand the tension in the American educational system you first need to consider the core tension that lies at the heart of the American political system. Liberal democracy is an effort to balance two competing goals. One is political equality, which puts emphasis on the need for rule by the majority, grounded in political consensus, and aiming toward the ideal of equality for all. This is the democratic side of liberal democracy. The other goal is individual liberty, which puts emphasis on protecting the rights of the minority against the tyranny of the majority, open competition among individual actors, and a high tolerance for any resulting social inequality. This is the liberal side of the system, which frees persons, property, and markets from undue political constraint. These are the two tendencies I have labeled politics and markets. Balancing the two is both essential and difficult. It offers equal opportunity for unequal outcomes, majority rule and minority rights.

School is at the center of this because it reflects and serves both elements. It offers everyone access to school and the opportunity to show what individuals can achieve there. And it also creates hierarchies of merit, winners and losers, as it sorts people into different levels of the social structure. In short, it provides social access and also upholds social advantage.

So what happened when Central High School appeared upon the scene? It was founded for political and moral reasons, in support of the common-school ideal of preparing citizens of the new American republic by instilling in them the skills and civic virtues they would need to establish and preserve a republican community. But in order to accomplish this goal, the founders needed to get past a major barrier. Prior to the founding of common schools in Philadelphia in the 1830s, a form of public schooling was already in effect, but it was limited to people who couldn’t afford to pay for their own schooling. To qualify, you had to go down to city hall and declare yourself, in person, as a pauper. Middle- and upper-class families paid for private schooling for their children. Common schools would not work in creating civic community unless they could draw everyone into the mix. But the existing public system was freighted with the label “pauper schools.” Why would a respectable middle-class family want to send their children to such a stigmatized institution?

The answer to this question was ingenious. Induce the better-off to enroll in the public schools by making such enrollment the prerequisite for gaining access to an institution that was better than anything they could find in the private education market. In Philadelphia, that institution was Central High School. The founders deliberately created it as an irresistible lure for the wealthy. It was located in the most fashionable section of town. It had a classical marble façade, a high-end German telescope mounted in an observatory on its roof, and a curriculum that was comparable to what students could find at the University of Pennsylvania. Modeled more on a college than on a private academy, the school called its principal president and its teachers professors (listed in the front of the city directory along with judges and city council members), and the state authorized it to award college degrees to its graduates. Its students were in the same age range as those at Penn; you could go to one or the other, but there was no reason to attend both. And unlike Penn, Central was free. It also offered students a meritocratic achievement structure, with a rigorous entrance exam screening those coming in and a tough grading policy screening those who made it all the way to the end. This meant that graduates of Central were considered more than socially elite; they were certified as smart.

The result was a cultural commodity that became extraordinarily attractive to the middle and upper classes in the city: an elite college education at public expense. But there was a catch. Only students who had attended the public grammar schools could apply for admission to Central; initially they had to spend at least one year in the grammar schools, and then the requirement rose to two years. This approach was wildly successful. From day one, the competition to pass the entrance exam and gain access to Central High School was intense. This was true not just for prospective students but also for the city’s grammar school masters, who were engaged in a zero-sum game to see who could get the most students into Central and win themselves a prime post as professor.

Note that the classic liberal democratic tension between political equality and market inequality was already present at the very birth of the common school. In order to create common schools, you needed an uncommon school. Only the selective inducement of the high school could guarantee full community participation in the lower schools. Thus, from the very start, public schooling in the US was both a public good and a private good. As a public good, its benefits accrued to everyone in the city, by creating citizens who were capable of maintaining a democratic polity. But it was also a private good, which provided social advantage to an elite population that could afford the opportunity cost to attain a scarce and valuable high school diploma.

Increased Access Leads to a Tracked and Socially Reproductive Central High

For fifty years, Central High School (and its female counterpart, Girls High School) remained the only public secondary schools in Philadelphia, which at the time was the second-largest city in the country. High school attendance was a scarce commodity there and in the rest of the country, where in 1880 it accounted for only 1.1 percent of public school enrollments.[2] At the same time that high school enrollments were small and stable, enrollments in grammar schools were expanding rapidly. By 1900, the average American over twenty-five had completed eight years of schooling.[3] If most students were to continue their education, the number of high schools needed to expand rapidly. As a result, the end of the nineteenth century was a dynamic period in the development of the American system of schooling.

The pressures on the high school were coming from two sources. The first was working-class families, who were eager to have their children gain access to a valuable credential that had long been restricted to a privileged few. It’s a time-tested rule of thumb that, in a liberal democracy, you can’t limit access to an attractive public institution like the high school for very long when demand is high. Sheer numbers eventually make themselves felt through the political arena.

In Philadelphia you could see this play out in the political tensions over access to the two high schools. By the 1870s, the school board had started imposing quotas on students from the various grammar schools in order to spread access more evenly across the city. By the 1880s, the city began to open manual training schools in parallel with the high schools, and by the 1890s the floodgates opened. A series of new regional high schools was established, allowing a sharp increase in enrollments. At the same time, the board abolished the high school entrance examination, which meant that students now qualified for admission solely by presenting a grammar-school diploma. By 1920, Central had lost its position as the exclusive citadel at the top of the system, drawing the best students citywide; it had been demoted to the status of just one among the many available regional high schools.

Everything suddenly changed in Central High’s form and function. The vision of being a college disappeared, as Central was placed securely between grammar school and college in the new educational hierarchy. Its longstanding core curriculum, once required for all students, had by 1920 become a tracked curriculum pitched toward different academic trajectories: an academic track for those going to college, a mechanical track for future engineers, a commercial track for clerical workers, and an industrial track for machine operators. And whereas the old Central had a proud tradition of school-wide meritocracy, students in the four tracks were distributed in a pattern familiar in high schools today, according to social class, with 72 percent of the academic-track students from the middle class and only 28 percent from the working class.[4] Its professors, who had won their positions at Central after proving their mettle as grammar school masters, gave way to ordinary teachers who were much younger, had no teaching experience, and held no qualification but a college diploma. (The professors hadn’t needed a college degree; a Central diploma had been sufficient.)

Political pressure for greater access explains the rapid expansion of high school enrollments during this period, but it doesn’t explain why the entire structure of the high school was transformed at the same time. While working-class families wanted to have their children gain access to the high school, in order to enhance their social opportunities, middle-class families wanted to preserve for their children the exclusivity that granted them social advantage. They were the second factor that shaped the school.

In part, this was a simple response to the value of high school as a private good. In political terms, equal access is a valuable public good; but in market terms, it’s a disaster. The value of schooling as a private good is measured by its scarcity. When high school became abundant, it lost its value for middle-class families. The new structure helped to preserve a degree of exclusivity, with middle-class students largely segregated in the academic track and the lower classes dispersed across the lower tracks. In addition, the middle-class students were positioned to move on to college, which had become the new zone of advantage after the high school lost its cachet. This is a pattern we see emerging again after the Second World War, when high school filled up and college enrollments sharply expanded.

For middle-class families at the turn of the twentieth century, this combination of high school tracking and college enrollment was more than just a numbers game, trying to keep one step ahead of the Joneses. Class survival was at stake. For centuries before this period, being middle class had largely meant owning your own small business. For town dwellers, either you were a master craftsman, owning a shop where you supervised journeymen and apprentices in plying the trade of cordwainer or cooper or carpenter, or you ran a retail store serving the public. The way you passed social position to your male children was by setting them up in an apprenticeship or willing them the store.

By the late nineteenth century, this model of status transmission had fallen apart. With the emergence of the factory and machine production, apprenticeship had largely disappeared, as apprentices became simple laborers who no longer had the opportunity to move up to master. And with the emergence of the department store, small retail businesses were in severe jeopardy. No longer able to simply inherit the family business, children in middle-class families faced the daunting prospect of proletarianization. The factory floor was beckoning. These families needed a new way to secure the status of their children, and that solution was education, first in high school and then in college. Through the medium of exclusive schooling, they hoped to position their children to embrace what Burton Bledstein calls “the culture of professionalism.”[5] By this, he is not referring simply to the traditional high professions (law, medicine, clergy) but to any occupational position that is buffered from market pressures.

The iron law of markets is that no one wants to function on a level playing field in open competition with everyone else. So, a business fortifies itself as a corporation, which acts as a conspiracy against the market. And middle-class workers seek an occupation that offers protection from open competition in the job market. Higher level educational credentials can do that. If a high school or college degree is needed to qualify for a position, then this sharply reduces the number of job seekers in the pool. And once on the job, you are less likely to be displaced by someone else because of shifting supply and demand. The ideal is the sinecure, and a diploma is the ticket to secure one. By the twentieth century, college became Sinecures “R” Us.

The job market accommodated this change through the increase in scale of both corporations and government agencies, which created a large array of managerial and clerical positions. These positions were safer, cleaner, and more secure than wage labor. They were protected by educational credentials, annual salaries, chances for promotion, formal dress, and civil service regulations. And, because they were awarded according to educational merit rather than social inheritance, they also granted the salary man a degree of social legitimacy that was not available to the owner’s son. Here’s how Bledstein explains it:

Far more than other types of societies, democratic ones required persuasive symbols of the credibility of authority, symbols the majority of people could reliably believe just and warranted. It became the function of the schools in America to legitimize the authority of the middle class by appealing to the universality and objectivity of “science.”[6]

Evolving in search of this symbolic credibility, the model of the high school that emerged in the early twentieth century looks very familiar to us today. It drew students from the community around the school, who were enrolled in a single comprehensive institution, and who were then distributed into curriculum tracks according to a judicious mix of individual academic merit and inherited social position, with each track aligned with a different occupational trajectory. The school as a whole was as heterogeneous as the surrounding population, but the experience students had there was relatively homogeneous by track and social origin. In one educational setting, you had both democratic equality and market-based inequality, commonality and hierarchy. An exemplary institution for a liberal democracy.

A lovely essay by David Cohen and Barbara Neufeld, “The Failure of High Schools and the Progress of Education,” captures the distinctive tension built into this institution.[7] On one hand, the comprehensive high school was one of the great educational success stories of all time. Starting as a tiny sliver of the educational system in the nineteenth century, it became a mammoth in the twentieth—with enrollments doubling every decade between 1890 and 1940—and by the end of this period it incorporated the large majority of the teenagers in the country. The elite school for the privileged few evolved rapidly into a comprehensive school for the masses.

But on the other hand, this success turned quickly into failure. Instead of celebrating the accomplishment of the students who managed to graduate from the high school, we began to bemoan those who didn’t, thus creating a new social problem: the high school dropout. Also, as the high school shifted from being seen as a place for students of the highest academic accomplishment to one for students of all abilities, it became the object of handwringing about declining academic standards. As a public good, it was a political success, offering opportunity for all; but as a private good, it was an educational failure, characterized by a watered-down curriculum and low expectations for achievement. The result was that the high school became the object of most educational reform movements in the twentieth century. Once the answer, it was now the problem.

The Lessons of Central High Applied to the American Educational System

At this point, having followed the trajectory of the high school, we are in a position to examine more fully the core dynamic that shaped the development of the American educational system as a whole. Here’s how it works. Start with mass schooling at one level of the system and exclusive schooling at the level above. Then, in response to popular demand from working-class families for educational opportunity at the top level, the system expands access to this level, thus making it more inclusive. Next, in response to demand by middle-class families to preserve their educational advantage, the system tracks schooling in the zone of expansion, with their children occupying the upper tracks and newcomers entering in the lower tracks. Finally, the system ushers the previously advantaged educational consumers into the next higher level of the system, where schooling remains exclusive, the new zone of advantage.

In the second quarter of the nineteenth century, for example, we saw the formation of the common school system in the US, with universal enrollment at the elementary level, partial enrollment in grammar schools, and scarce enrollment in high schools. By the end of the century, grammar schools had filled up and pressure rose for greater access to high schools. As a result, high schools shifted toward a tracked structure, with middle-class students in the top tracks and working-class students in the tracks below. Then in the middle of the twentieth century, the same pattern played out in the system’s expansion at the college level.

By 1940, high school enrollment had become the norm for all American families, which meant that the new zone of educational opportunity was now the previously exclusive domain of higher education. As was the case with high school in the late nineteenth century, political demand arose for working-class access to college, which had previously been the preserve of the middle class. Despite the much higher per-capita cost of college compared to high school, political will converged to deliver this access. The twin spurs were a hot war and a cold war. The need to acknowledge the shared sacrifice of the Second World War led to the 1944 GI Bill, which paid for veterans to go to college. And the need during the Cold War to mobilize research, enhance human capital, and demonstrate the superiority of liberal democracy over communism led to the Higher Education Act of 1965. The result was an enormous expansion of higher education in the 1950s and 1960s. Enrollments grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969 and then 11.6 million in 1979.[8]

The result was to revolutionize the structure of American higher education. Here’s how I described it in A Perfect Mess:

Until the 1940s, American colleges had admitted students with little concern for academic merit or selectivity, and this was true not only for state universities but also for the private universities now considered the pinnacle of the system. If you met certain minimal academic requirements and could pay the tuition, you were admitted. But in the postwar years, a sharp divide emerged in the system between the established colleges and universities, which dragged their feet about expanding enrollments and instead became increasingly selective, and the new institutions, which expanded rapidly by admitting nearly everyone who applied.

What were these new institutions that welcomed the newcomers? Often existing public universities would set up branch campuses in other regions of the state, which eventually became independent institutions. Former normal schools, set up in the nineteenth century as high-school level institutions for preparing teachers, had evolved into teachers colleges in the early twentieth century; and by the middle of the century they had evolved into full-service state colleges and universities serving regional populations. A number of new urban college campuses also emerged during this period, aimed at students who would commute from home to pursue programs that would prepare them for mid-level white-collar jobs. And the biggest players in the new lower tier of American higher education were community colleges, which provided two-year programs allowing students to enter low-level white-collar jobs or transfer to the university. Community colleges quickly became the largest provider of college instruction in the country. By 1980, they accounted for nearly 40 percent of all college enrollments in the U.S.[9]

These new colleges and universities had several characteristics in common. Compared to their predecessors: they focused on undergraduate education; they prepared students for immediate entry into the workforce; they drew students from nearby; they cost little; and they admitted almost anyone. For all these reasons, especially the last one, they also occupied a position in the college hierarchy that was markedly lower. Just as secondary education expanded only by allowing the newcomers access to the lower tiers of the new comprehensive high school, so higher education expanded only by allowing newcomers access to the lower tiers of the newly stratified structure of the tertiary system.

As a result, the newly expanded and stratified system of higher education protected upper-middle-class students attending the older selective institutions from the lower-middle-class students attending regional and urban universities and the working-class students attending community colleges. At the same time, these upper-middle-class students started pouring into graduate programs in law, medicine, business, and engineering, which quickly became the new zone of educational advantage.[10]

            So, at 50-year intervals across the history of American education, the same pattern kept repeating. Every effort to increase access brought about a counter effort to preserve advantage. Every time the floor of the educational system rose, so did the ceiling. The result is an elevator effect, in which the system gamely provides both access and advantage, thus increasing the upward expansion of educational attainment for all while at the same time preserving social differences. Plus ça change.

What’s Next in the Struggle between Politics and Markets?

So where does that leave us today? I see three problems that have emerged from the tension that has propelled the evolution of the American system of schooling: a time problem, a cost problem, and a public goods problem. Let’s consider each in turn.

The time problem arises from the relentless upward expansion of the system, which is sucking up an increasing share of the American life span. Life expectancy has been growing slowly over the years, but time in school has been growing at a much more rapid rate. In the mid-nineteenth century, the modal American spent four years in school. By 1900 it had risen to eight years. By 2000 it was thirteen years. And by 2015, for Americans over twenty-five, 59 percent had at least some college, 42 percent at least an associate’s degree, 33 percent at least a bachelor’s degree, and 12 percent an advanced degree.[11]

In my own case, I spent a grand total of 26 years in school: two years of preschool, twelve years of elementary and secondary school, five years of college, and seven years of graduate school (I’m a slow study). I didn’t finish my doctorate until the ripe old age of 36, which left only thirty years to ply my profession before the social-security retirement age for my cohort. As I used to ask my graduate students—most of whom had also deferred the start of graduate study until a few years after college—when do we finish preparing for life and start living it? When do we finally grow up?

            Not only does the rapid expansion of schooling eat up an increasing share of people’s lives, but it also costs them a lot of money. First, there’s the opportunity cost, as people keep deferring to the future their chances of earning a living. Then there’s the direct cost for students to pay tuition and to support themselves as adult learners. And finally, there’s the expense to the state of providing public education across all these years. As schooling expands upward, the direct costs of education to student and state grow geometrically. High school is much more expensive per student than elementary school, college much more than high school, and graduate school much more than college.

At some point in this progression, the costs start hitting a ceiling, when students are less willing to defer earning and pay the increasing cost of advanced schooling and when taxpayers are less willing to support advanced schooling for all. In the U.S., we started to see this happening in the 1970s, when the sharp rise in college enrollments spurred a taxpayer revolt, which began in California (home to America’s largest higher education system, which charged no tuition) and spread across the country. People began to ask whether they were willing to pay for the higher education of other people’s children on top of the direct costs for their own. The result was a sharp increase in college tuition (which until then had been free or relatively cheap) and a shift in government support away from scholarships and toward loans.

In combination, these increases in time and money began to undermine support for higher education as a public good. If education is seen as providing broad benefits to the community as a whole, then it makes sense to support it with public funds, which had been the case for elementary school in the nineteenth century and for high school in the early twentieth century. For thirty years after 1945, higher education found itself in the same position. The huge public effort in the Second World War justified the provision of college at public expense for returning soldiers, as established by the GI Bill. In addition, the emerging Cold War assigned higher education a major role in countering the existential threat of communism. University research played a crucial role in supplying the technologies for the arms race and space race with the Soviet Union, and broadening access to college for the working class and racial minorities helped demonstrate the moral credibility of liberal democracy in relation to communism.

But when the fiscal costs of this effort mounted in the 1970s and then the Soviet Union collapsed in 1991, the rationale for public subsidy of the extraordinarily high costs of higher education collapsed as well. Under these circumstances, college began to look less like a public good and more like a private good, one whose primary beneficiaries appeared to be its 20 million students. A college degree had become the ticket of admission to the good middle-class life, with its high costs yielding even higher returns in lifelong earnings. If graduates were reaping the bulk of the benefits, then they should bear the costs. Why provide a public subsidy for private gain?

This takes us back to our starting point in this analysis of the American system of schooling: the ongoing tension between politics and markets. As we have seen, that tension was there from day one—with the establishment of the uncommon Central High School at the same time as the common elementary school—and it has persisted over the years. Elite schooling was stacked on top of open-access schooling, with one treating education as a private good and the other as a public good. As demand grew for access to the zone of educational advantage, the system responded by stratifying that zone and expanding enrollment at the next higher level. And the result we’re dealing with now is the triple threat of a system that has devoured our time, overloaded our costs, and diminished our commitment to education as a public good.

As I write now, in the midst of a pandemic and in the waning weeks of the Trump administration, these issues are driving the debates about education policy. We hear demands for greater access to elite levels of higher education, for eliminating tuition at community colleges, and for forgiving student debt. And, countering these demands, we hear concerns about the feasibility of paying for these reforms, the public burden of subsidizing students who can afford to pay their own way, and the need to preserve elite universities that are the envy of the world. Who knows how these debates will play out. But one thing is for sure: the tensions between politics and markets, between public goods and private goods, will continue.


Bledstein, Burton J. The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York: W. W. Norton, 1978.

Cohen, David K., and Barbara Neufeld. “The Failure of High Schools and the Progress of Education.” Daedalus 110 (Summer 1981): 69-89.

Carter, Susan B., et al., eds. Historical Statistics of the United States, millennial edition online. New York: Cambridge University Press, 2006.

Labaree, David F. A Perfect Mess: The Unlikely Ascendancy of American Higher Education. Chicago: University of Chicago Press, 2017.

Labaree, David F. The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939. New Haven: Yale University Press, 1988.

National Center for Educational Statistics. 120 Years of American Education. Washington, DC: Government Printing Office, 1993.

National Center for Educational Statistics. Digest of Education Statistics 2013. Washington, DC: Government Printing Office, 2014.

Ryan, Camille L., and Kurt Bauman. “Educational Attainment in the United States: 2015.” Current Population Reports, United States Census Bureau, March 2016, Table 1. Accessed December 1, 2020.

United Nations Development Programme Human Development Reports. “Mean Years of Schooling (Males, aged 25 years and above).” Accessed December 1, 2020.


[1] This chapter is dedicated to my friend and former colleague, David Cohen, who died in 2020.

[2] National Center for Educational Statistics, 120 Years of American Education (Washington, DC: Government Printing Office, 1993), Table 8.

[3] NCES, 120 Years of American Education, Table 5.

[4] David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988), Table 6.4.

[5] Burton J. Bledstein, The Culture of Professionalism: The Middle Class and the Development of Higher Education in America (New York: W. W. Norton, 1978).

[6] Bledstein, The Culture of Professionalism, 123.

[7] David K. Cohen and Barbara Neufeld, “The Failure of High Schools and the Progress of Education,” Daedalus 110 (Summer 1981): 69-89.

[8] Susan B. Carter et al., eds., Historical Statistics of the United States (millennial edition online) (New York: Cambridge University Press, 2006), Table Bc523; National Center for Educational Statistics, Digest of Education Statistics 2013 (Washington, DC: Government Printing Office, 2014), Table 303.10.

[9] NCES, 120 Years of American Education, Table 24.

[10] David F. Labaree, A Perfect Mess: The Unlikely Ascendancy of American Higher Education (Chicago: University of Chicago Press, 2017), 106-108.

[11] United Nations Development Programme Human Development Reports, “Mean Years of Schooling (Males, aged 25 years and above),” accessed December 1, 2020; Camille L. Ryan and Kurt Bauman, “Educational Attainment in the United States: 2015,” Current Population Reports, United States Census Bureau (March 2016), Table 1, accessed December 1, 2020.

Posted in Education policy, History of education, Research, Teacher education

Do No Harm: Reflections on the Impact of Educational Research

This post is a short piece I wrote in 2011 for a special issue of the journal Teacher Education and Practice on “Enhancing Teaching and Learning Through Scholarship.” My main takeaway is that research in education is not necessarily well positioned to enhance education; on the contrary, it often does more harm than good.  See what you think.  Here’s a link to the original.

Do No Harm

David F. Labaree

            Education is a field of dreams and so is educational research.  As educators, we dream of schools that can improve the lives of students, solve social problems, and enrich the quality of life; and as educational researchers, we dream that our studies will enhance the effectiveness of schools in achieving these worthy goals.  Both fields draw recruits who see the possibilities of education as a force for doing good, and that turns out to be a problem, because the history of both fields shows that the chances for doing real harm are substantial.  Over the years, research on teaching and teacher education – the topic of the discussion in this special issue – has caused a lot of damage to teaching and learning and learning-to-teach in schools.  So I suggest a good principle to adopt when considering the role of research in teacher education is a version of the Hippocratic Oath:  First do no harm. 

The history of educational research in the United States in the twentieth century supports a pessimistic assessment of the field’s impact on American schools and society.  There was Edward L. Thorndike, whose work emphasized the importance of differentiating the curriculum in order to provide the skills and knowledge that students would later need in playing sharply different roles in a stratified workforce.  There was David Snedden, who labored tirelessly to promote narrowly vocational training for that large group of students who would end up serving in what he called “the rank and file.”  There were the kingpins of educational testing, such as Lewis Terman, who developed instruments that allowed educators to measure student ability and student learning, which in turn helped determine which track students should occupy and what role they should play in later life.  Taken together, these enormously productive educational researchers helped build a system of schooling that emphasized sorting over learning and promoted a vision of teaching that emphasized the delivery of curriculum over the engagement of students.  They laid the foundation for the current machinery of curriculum standards and high-stakes testing that has turned American teaching into an exercise in raising test scores.

            Of course, these educational researchers usually did not intend to do harm.  (Snedden is the exception here, a man who was on a mission to dumb down schooling for the lower classes.)  For the most part, they saw making curriculum more scientific and intelligence testing more accurate as ways to allow individuals with merit to escape from the clutches of their social origins.  Like most educational researchers, they were optimists about the possible impact of their work.  But their examples should serve as a cautionary tale for researchers who see their work as an unmitigated exercise in human improvement. 

One factor in particular tends to bend the work of researchers toward the dark side of the force, and that is research funding.  Very few government agencies and foundations are eager to support basic research in education.  Instead, funding aligns with the latest educational policy objectives, and to get funded researchers need to demonstrate that their work will in some manner serve these objectives.  That is not to say that the researchers necessarily support these policy missions, but in order to win the grant they do have to harness their work, at least rhetorically, to the aims that motivate the request for proposals.  In the current global policy climate, that means the work needs to address issues around accountability and standards and improving test scores.  If you cannot spin your work in this direction, you will have trouble getting funded.

            Another factor that interferes with the educational researcher’s desire to do good for teachers and teacher educators is the need to confront an educational version of Gresham’s Law:  Bad research tends to displace good.  The best research is complex, and this puts the researcher at a competitive disadvantage, since policymakers and teacher educators prefer results that are definitive and easy to understand.  The most sophisticated work we produce tends to show an educational reality that has a complex array of elements interacting within a fiendishly complex organizational structure, which means that research findings have to be carefully qualified to the point where it is nearly impossible to say with clarity that a particular form of educational practice is effective or ineffective.  Instead, we have to report that it all depends.  In addition, in order to understand the research findings in any depth, you need to be able to sort through issues of design, methodology, and validity that are only accessible to experts in the field. 

Meanwhile, there is a vast array of research available to policymakers and practitioners that supports clear answers to educational problems and does so in a manner that is easy for the layperson to comprehend.  This work comes from two kinds of groups: think tanks and entrepreneurial organizations for the delivery of education.  Think tanks remove a key element of complexity from the research process by deciding in advance what the politically desirable policy is and then conducting studies that provide clear support for that policy.  In the U.S. there are also a variety of non-governmental organizations that are active in promoting and delivering a particular brand of educational service, such as Teach For America (TFA, with its alternative to traditional teacher preparation) and the Knowledge Is Power Program (KIPP, with its alternative approach to running schools in low-income neighborhoods).  These organizations commission research that conveniently demonstrates the effectiveness of what they do.  And both types of research producers are particularly effective at marketing their findings to the relevant actors in the policy and education communities.

University-based educational research cannot compete with these other producers in clarity and understandability, but university researchers can undercut the impact of this work a bit by doing what we have always been good at.  We have an advantage in being the only group without a dog in the policy hunt, which allows us to perform credible fundamental research about how schools work, how teaching and learning happen, and how teachers learn to teach.  Work like this can help show how simplistic and politically biased these other research products really are.  And it won’t do much harm.

Posted in Course Syllabus, Higher Education, History of education, History of Higher Education Class

Class on History of Higher Education in the US

This post contains all of the material for the class on the History of Higher Education in the US that I taught at the Stanford Graduate School of Education for the last 15 years.  In retirement I wanted to make the course available on the internet to anyone who is interested.  If you are a college teacher, feel free to use any of it in whole or part.  If you are a student or a group of students, you can work your way through the class on your own at your own pace.  Any benefits that accrue are purely intrinsic, since no one will get college credits.  But that also means you’re free to pursue the parts of the class that you want and you don’t have any requirements or papers.  How great is that.

I’m posting the full syllabus below.  But it would be more useful to get it as a Word document through this link.  Feel free to share it with anyone you like.

All of the course materials except three required books are embedded in the syllabus through hyperlinks to a Google drive.  For each week, the syllabus includes a link to tips for approaching the readings, links to the PDFs of the readings, and a link to the slides for that week’s class.  Slides also include links to additional sources.  So the syllabus is all that is needed to gain access to the full class.

I hope you find this useful.

History of Higher Education in the U.S.

David Labaree

Twitter: @Dlabaree


Course Description

This course provides an introductory overview of the history of higher education in the United States.  We will start with Perkin’s account of the world history of the university, and two chapters from my book about the role of the market in shaping the history of American higher education and the pressure from consumers to have college provide both social access and social advantage.  In week two, we examine an overview of the history of the American college and university in the 18th and 19th centuries from John Thelin, and my chapter on the emerging nature of the college system.  In week three, we focus on the rise of the university in the latter part of the 19th century using two more chapters from Thelin, and my own chapter on the subject.  In week four, we read a series of papers around the issue of access to higher education, showing how colleges for many years sought to repel or redirect the college aspirations of women, blacks, and Jews.  In week five, we examine the history of professional education, with special attention to schools of business, education, and medicine.  In week six, we read several chapters from Donald Levine’s book about the rise of mass higher education after World War I, my piece about the rise of community colleges, and more from Thelin.  In week seven, we look at the surge of higher ed enrollments after World War II, drawing on pieces by Rebecca Lowen, Roger Geiger, Thelin, and Labaree.  In week eight, we look at the broadly accessible full-service regional state university, drawing on Alden Dunham, Thelin, Lohmann, and my chapter on the relationship between the public and private sector.  In week nine, we read a selection of chapters from Jerome Karabel’s book about the struggle by elite universities to stay on top of a dynamic and expanding system of higher education.  And in week ten, we step back and try to get a fix on the evolved nature of the American system of higher education, drawing on work by Mitchell Stevens and the concluding chapters of my book.

Like every course, this one is not a neutral survey of all possible perspectives on the domain identified by the course title; like every course, this one has a point of view.  This point of view comes through in my book manuscript that we’ll be reading in the course.  Let me give you an idea of the kind of approach I will be taking.

The American system of higher education is an anomaly.  In the twentieth century it surged past its European forebears to become the dominant system in the world – with more money, talent, scholarly esteem, and institutional influence than any of the systems that served as its models.  By all rights, this never should have happened.  Its origins were remarkably humble: a loose assortment of parochial nineteenth-century liberal-arts colleges, which emerged in the pursuit of sectarian expansion and civic boosterism more than scholarly distinction.  These colleges had no academic credibility, no reliable source of students, and no steady funding.  Yet these weaknesses of the American system in the nineteenth century turned out to be strengths in the twentieth.  In the absence of strong funding and central control, individual colleges had to learn how to survive and thrive in a highly competitive market, in which they needed to rely on student tuition and alumni donations and had to develop a mode of governance that would position them to pursue any opportunity and cultivate any source of patronage.  As a result, American colleges developed into an emergent system of higher education that was lean, adaptable, autonomous, consumer-sensitive, self-supporting, and radically decentralized.  This put the system in a strong position to expand and prosper when, before the turn of the twentieth century, it finally got what it was most grievously lacking:  a surge of academic credibility (when it assumed the mantle of scientific research) and a surge of student enrollments (when it became the pipeline to the middle class).  This course is an effort to understand how a system that started out so badly turned out so well – and how its apparently unworkable structure is precisely what makes the system work.

That’s an overview of the kind of argument I will be making about the history of higher education.  But you should feel free to construct your own, rejecting mine in part or in whole.  The point of this class, like any class, is to encourage you to try on a variety of perspectives as part of the process of developing your own working conceptual framework for understanding the world.  I hope you will enjoy the ride.


Books:  We will be reading the following books:

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press.

Labaree, David F. (2017). A perfect mess: The unlikely ascendancy of American higher education.  Chicago: University of Chicago Press.

Karabel, Jerome. (2005). The chosen: The hidden history of admission and exclusion at Harvard, Yale, and Princeton. New York: Houghton Mifflin Harcourt.

Supplementary Resources:  There is a terrific online archive of primary and secondary readings on higher education, which is a supplement to The History of Higher Education, 3rd ed., published by the Association for the Study of Higher Education (ASHE):

Course Outline

Below are the topics we will cover, week by week, with the readings for each week.

Week 1

Introduction to course

Tips for week 1 readings

Labaree, David F. (2015). A system without a plan: Elements of the American model of higher education.  Chapter 1 in A perfect mess: The unlikely ascendancy of American higher education.

Labaree, David F. (2015). Balancing access and advantage.  Chapter 5 in A perfect mess: The unlikely ascendancy of American higher education.

Perkin, Harold. (1997). History of universities. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 3-32). Boston: Pearson Custom Publishing.

Class slides for week 1

Week 2

Overview of the Early History of Higher Education in the U.S.

Tips for week 2 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (introductory essay and chapters 1-3).

Labaree, David F. (2015). Unpromising roots:  The ragtag college system in the nineteenth century.  Chapter 2 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 2

Week 3

Roots of the Growth of the University in the Late 19th and Early 20th Century

Tips for week 3 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapters 4-5).

Labaree, David F. (2015). Adding the pinnacle and keeping the base: The graduate school crowns the system, 1880-1910.  Chapter 3 in A perfect mess: The unlikely ascendancy of American higher education.

Labaree, David F. (1995).  Foreword (to book by Brown, David K. (1995). Degrees of control: A sociology of educational expansion and occupational credentialism. New York: Teachers College Press).

Class slides for week 3

Week 4

Educating and Not Educating the Other:  Blacks, Women, and Jews

Tips for week 4 readings

Wechsler, Harold S. (1997).  An academic Gresham’s law: Group repulsion as a theme in American higher education. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 416-431). Boston: Pearson Custom Publishing.

Anderson, James D. (1997).  Training the apostles of liberal culture: Black higher education, 1900-1935. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 432-458). Boston: Pearson Custom Publishing.

Gordon, Lynn D. (1997).  From seminary to university: An overview of women’s higher education, 1870-1920. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 473-498). Boston: Pearson Custom Publishing.

Class slides for week 4

Week 5

History of Professional Education

Tips for week 5 readings

Brubacher, John S. and Rudy, Willis. (1997). Professional education. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 379-393). Boston: Pearson Custom Publishing.

Bledstein, Burton J. (1976). The culture of professionalism. In The culture of professionalism: The middle class and the development of higher education in America (pp. 80-128). New York:  W. W. Norton.

Labaree, David F. (2015). Mutual subversion: The liberal and the professional. Chapter 4 in A perfect mess: The unlikely ascendancy of American higher education,

Starr, Paul. (1984). Transformation of the medical school. In Social transformation of American medicine (pp. 112-127). New York: Basic.

Class slides for week 5

Week 6

Emergence of Mass Higher Education

Tips for week 6 readings

Levine, Donald O. (1986).  The American college and the culture of aspiration, 1915-1940. Ithaca: Cornell University Press.  Read introduction and chapters 3, 4, and 8.

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapter 6).

Labaree, David F. (1997). The rise of the community college: Markets and the limits of educational opportunity.  In How to succeed in school without really learning:  The credentials race in American education (chapter 8, pp. 190-222). New Haven: Yale University Press.

Class slides for week 6

Week 7

The Huge Surge of Higher Education Expansion after World War II

Tips for week 7 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapter 7).

Geiger, Roger. (2004). University advancement from the postwar era to the 1960s. In Research and relevant knowledge: American research universities since World War II (chapter 5, pp. 117-156).  Read the first half of the chapter, which focuses on the rise of Stanford.

Lowen, Rebecca S. (1997). Creating the cold war university: The transformation of Stanford. Berkeley: University of California Press.  Introduction and Chapters 5 and 6.

Labaree, David F. (2015). Learning to love the bomb: America’s brief cold-war fling with the university as a public good. Chapter 7 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 7

Week 8

Populist, Practical, and Elite:  The Diversity and Evolved Institutional Character of the Full-Service American University

Tips for week 8 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapter 8).

Dunham, Edgar Alden. (1969). Colleges of the forgotten Americans: A profile of state colleges and universities. New York: McGraw Hill (introduction, chapters 1-2).

Lohmann, Suzanne. (2006). The public research university as a complex adaptive system. Unpublished paper, University of California, Los Angeles.

Labaree, David F. (2015). Private advantage, public impact. Chapter 6 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 8

Week 9

The Struggle by Elite Universities to Stay on Top

Tips for week 9 readings

Karabel, Jerome. (2005). The chosen: The hidden history of admission and exclusion at Harvard, Yale, and Princeton. New York: Houghton Mifflin Harcourt.  Read introduction and chapters 2, 4, 9, 12, 13, 17, and 18.

Class slides for week 9

Week 10

Conclusions about the American System of Higher Education

Tips for week 10 readings

Stevens, Mitchell L., Armstrong, Elizabeth A., & Arum, Richard. (2008). Sieve, incubator, temple, hub: Empirical and theoretical advances in the sociology of higher education. Annual Review of Sociology, 34, 127-151.

Labaree, David F. (2015). Upstairs, downstairs: Relations between the tiers of the system. Chapter 8 in A perfect mess: The unlikely ascendancy of American higher education.

Labaree, David F. (2015). A perfect mess. Chapter 9 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 10

Guidelines for Critical Reading

Whenever you set out to do a critical reading of a particular text (a book, article, speech, proposal, conference paper), you need to use the following questions as a framework to guide you as you read:

  1. What’s the point? This is the analysis/interpretation issue: what is the author’s angle?
  2. What’s new? This is the value-added issue: What does the author contribute that we don’t already know?
  3. Who says? This is the validity issue: On what (data, literature) are the claims based?
  4. Who cares? This is the significance issue, the most important issue of all, the one that subsumes all the others: Is this work worth doing?  Is the text worth reading?  Does it contribute something important?

Guidelines for Analytical Writing

In writing papers for this (or any) course, keep in mind the following points.  They apply in particular to the longer papers, but most of the same concerns apply to critical reaction papers as well.

  1. Pick an important issue: Make sure that your analysis meets the “so what” test. Why should anyone care about this topic, anyway?  Pick an issue or issues that matter and that you really care about.
  2. Keep focused: Don’t lose track of the point you are trying to make, and make sure the reader knows where you are heading and why.
  3. Aim for clarity: Don’t assume that the reader knows what you’re talking about; it’s your job to make your points clearly.  In part this means keeping focused and avoiding distracting clutter.  But in part it means that you need to make more than elliptical references to concepts and sources or to professional experience.  When referring to readings (from the course or elsewhere), explain who said what and why this point is pertinent to the issue at hand.  When drawing on your own experiences or observations, set the context so the reader can understand what you mean.  Proceed as though you were writing for an educated person who is neither a member of this class nor a professional colleague, someone who has not read the material you are referring to.
  4. Provide analysis: A good paper is more than a catalogue of facts, concepts, experiences, or references; it is more than a description of the content of a set of readings; it is more than an expression of your educational values or an announcement of your prescription for what ails education.  A good paper is a logical and coherent analysis of the issues raised within your chosen area of focus.  This means that your paper should aim to explain rather than describe.  If you give examples, be sure to tell the reader what they mean in the context of your analysis.  Make sure the reader understands the connection between the various points in your paper.
  5. Provide depth, insight, and connections: The best papers are ones that go beyond making obvious points, superficial comparisons, and simplistic assertions.  They dig below the surface of the issue at hand, demonstrating a deeper level of understanding and an ability to make interesting connections.
  6. Support your analysis with evidence: You need to do more than simply state your ideas, however informed and useful these may be.  You also need to provide evidence that reassures the reader that you know what you are talking about, thus providing a foundation for your argument.  Evidence comes in part from the academic literature, whether encountered in this course or elsewhere.  Evidence can also come from your own experience.  Remember that you are trying to accomplish two things with the use of evidence.  First, you are saying that it is not just you making this assertion but that authoritative sources and solid evidence back you up.  Second, you are supplying a degree of specificity and detail, which helps to flesh out an otherwise skeletal argument.
  7. Draw on course materials (this applies primarily to reaction papers, not the final paper). Your paper should give evidence that you are taking this course.  You do not need to agree with any of the readings or presentations, but your paper should show you have considered the course materials thoughtfully.
  8. Recognize complexity and acknowledge multiple viewpoints. The issues in the history of American education are not simple, and your paper should not propose simple solutions to complex problems. It should not reduce issues to either/or, black/white, good/bad.  Your paper should give evidence that you understand and appreciate more than one perspective on an issue.  This does not mean you should be wishy-washy.  Instead, you should aim to make a clear point by showing that you have considered alternate views.
  9. Challenge assumptions. The paper should show that you have learned something by writing it. There should be evidence that you have been open to changing your mind.
  10. Do not overuse quotation: In a short paper, long quotations (more than a sentence or two in length) are generally not appropriate.  Even in longer papers, quotations should be used sparingly unless they constitute a primary form of data for your analysis.  In general, your paper is more effective if written primarily in your own words, using ideas from the literature but framing them in your own way in order to serve your own analytical purposes.  However, selective use of quotations can be very useful as a way of capturing the author’s tone or conveying a particularly aptly phrased point.
  11. Cite your sources: You need to identify for the reader where particular ideas or examples come from.  This can be done through in-text citation:  Give the author’s last name, publication year, and (in the case of quotations) page number in parentheses at the end of the sentence or paragraph where the idea is presented — e.g., (Kliebard, 1986, p. 22); provide the full citations in a list of references at the end of the paper.  You can also identify sources with footnotes or endnotes:  Give the full citation for the first reference to a text and a short citation for subsequent citations to the same text.  (For critical reaction papers, you only need to give the short cite for items from the course reading; other sources require full citations.)  Note that citing a source is not sufficient to fulfill the requirement to provide evidence for your argument.  As spelled out in #6 above, you need to transmit to the reader some of the substance of what appears in the source cited, so the reader can understand the connection with the point you are making and can have some meat to chew on.  The best analytical writing provides a real feel for the material and not just a list of assertions and citations.  Depth, insight, and connections count for more than a superficial collection of glancing references.  In other words, don’t just mention an array of sources without drawing substantive points and examples from these sources; and don’t draw on ideas from such sources without identifying the ones you used.
  12. Take care in the quality of your prose: A paper that is written in a clear and effective style makes a more convincing argument than one written in a murky manner, even when both writers start with the same basic understanding of the issues.  However, writing that is confusing usually signals confusion in a person’s thinking.  After all, one key purpose of writing is to put down your ideas in a way that permits you and others to reflect on them critically, to see if they stand up to analysis.  So you should take the time to reflect on your own ideas on paper and revise them as needed.  You may want to take advantage of the opportunity in this course to submit a draft of the final paper, revise it in light of comments, and then resubmit the revised version.  This, after all, is the way writers normally proceed.  Outside of the artificial world of the classroom, writers never turn in their first draft as their final statement on a subject.
Posted in Ed schools, Educational goals, History of education

An Unlovely Legacy: The Disabling Impact of the Market on American Teacher Education

What with huge problems hanging in the balance right now, like the future of American democracy and the world order, this might be a good time to focus on a little problem, one mostly of academic interest.  The issue for today is — wait for it — the trouble with American ed schools.  Sounds a bit less consequential and a bit more manageable than the election, doesn’t it?

So I’m posting a piece I published in Kappan way back in 1994.  Here’s a link to the original.  It’s the story about how market forces shaped the development of education schools in the US in the 19th and early 20th centuries, leaving them in a relatively weakened state, characterized by both low status and mediocre performance.  

As I re-read this paper, I realized it’s also a period piece.  Published in the early years of the Clinton administration, it depicts a time when ed schools were trying to pull themselves out of the doldrums through a reform movement called the Holmes Group.  This was a collection of elite ed school deans who proposed to upgrade the stature and scholarship of the field and in the process professionalize the American teaching force.  In one stroke it would elevate both teachers and teacher educators.  One proposal was for ed schools to ally with K-12 colleagues to develop what they called “professional development schools,” which were places where student teachers could learn their profession at a high level, in a setting akin to the teaching hospitals that train physicians.

For me, this was a particularly interesting time because the president of the Holmes Group was my dean at Michigan State, Judy Lanier, and the Group’s reports were being written by my colleagues in Erickson Hall.  It was an exciting moment — even if it eventually went nowhere.

In the paper, I tell a story of two kinds of market forces that undermined the mission of the emergent ed schools.  One was a push for social efficiency, which put a premium on producing quantity over quality in the preparation of teachers.  Another was social mobility, as a lot of students used the normal schools and teachers colleges less as a way to become teachers than as a way to open up access to a wide array of white collar jobs.  Here’s the money quote.

In addition, both approaches to teacher preparation tended to undercut the creation of a strong educational content in teacher education programs. Social efficiency undercut content in the rush by policymakers to mass-produce teachers of minimum competence. Social mobility undercut content in the rush by ambitious individuals to use teachers’ colleges as a means of climbing the social ladder. There is nothing in either goal that would press teacher education to provide an intensive and extensive educational experience for prospective teachers, nothing in either to promote academic rigor or prolonged application. In fact, everything urges toward superficiality (providing thin coverage of both subject matter and pedagogy), brevity (keeping the program short and unintrusive), accessibility (allowing entry to nearly anyone), low level of difficulty (making the process easy and graduation certain), and parsimony (doing all of this on the cheap). This, I submit, is the market-based legacy of limited vision and ineffectual process that afflicted teacher education in the past and continues to do so today.

I later developed pieces of this argument in my 2004 book, The Trouble With Ed Schools.  Hope you enjoy reading this moldy oldie.  

An Unlovely Legacy: The Disabling Impact of the Market on American Teacher Education

David F. Labaree

American teacher education is back in the news, but unfortunately the news is not good. This, however, is far from being a novel situation. From my reading of the history of American education, it seems that it has always been open season on teacher education. Now, as in the past, everyone seems to have something bad to say about the way we prepare our teachers. If you believe what you read and what you hear, a lot of what is wrong with American education these days can be traced to the failings of teachers and to shortcomings in the processes by which we train them for their tasks. We are told that students are not learning, that productivity is not growing, that economic competitiveness is declining – all to some extent because teachers don’t know how to teach.

As a result, politicians and policy makers at all levels have been talking about a number of possible remedies: testing students as they enter and leave teacher education programs, extending and upgrading the content of these programs, and even bypassing the programs altogether through alternative certification. The latter option means pushing people with subject-matter expertise or practical occupational experience directly into the classroom, thus protecting them from the corrupting influence of schools of education. Meanwhile, academics in the more prestigious colleges within American universities ridicule the curriculum of the school of education for what they consider its mindlessness and uselessness. Ordinary citizens also get into the act. For example, there is a recent book written by a journalist, Rita Kramer, who spent some time sitting in teacher education classrooms and interviewing education professors. Her title quite nicely captures the general lack of restraint with which critics have tended to approach teacher education: Ed School Follies: The Miseducation of America’s Teachers.

As I said, none of this criticism of teacher education is particularly new. The training of teachers has never been revered by the academy or terribly popular with the public. If one could sum up the usual complaints about teacher education in one sentence, it would be something like this: “Schools of education have failed to provide an education for teachers that is either academically elevated or pedagogically effective.” Instead of rallying to the defense of the teacher education establishment, of which I am a part, I would like to explore why this enterprise has earned such a bad reputation.

Yes, teacher education in the U.S. has been and in many ways continues to be an intellectually undemanding and frequently ineffectual form of professional training. Where I disagree with the current pattern of criticism, however, is in the diagnosis of the roots of the problem. The most popular current diagnosis of what ails American teacher education follows directly from the reigning view of what the problem is with schooling in general. In the conservative climate of the past decade, that understanding is simple to state. The problem with schools, we are told, is that they have been ruined by too much politics; the solution, we hear, is to inject a little discipline from the marketplace. This interpretation has become part of the fabric of contemporary thought about schools, but the most prominent ideological weavers currently working in this tradition are John Chubb and Terry Moe, authors of Politics, Markets, and America’s Schools.2

My own interpretation is precisely the opposite of theirs. I argue that both K-12 education and teacher education have been ruined by too much market influence and not enough democratic politics. A generous democratic rhetoric has surrounded teacher education from the days of the first normal schools, but the fact of the matter is that the dominant influence on the form and content of teacher education has come not from politics but from the market.

This market influence has resulted in the widespread belief that education has two purposes: one I call “social efficiency”; the other, “social mobility.” These two objectives have had some contradictory effects on teacher education. But they have a great deal in common, since both represent ways that teacher education has been required to respond to demands from the market – the job market in the case of social efficiency and the credentials market in the case of social mobility. The net result has been to undermine efforts to enrich the quality, duration, rigor, and political aims of teacher education. The history of teacher education has not been very elevated, either academically or politically – thanks directly, I suggest, to market influence.

In pursuing this theme, I will explore the following issues. First, I will say a little about the nature of these market-oriented purposes and their impact on American education in general. Then I will examine the historical role that each has played in shaping teacher education. This in turn will lead to a discussion of the kinds of problems that these objectives have brought about for the form and content of teacher education. And finally, I will explore one current reform initiative, known as the teacher professionalization movement, which represents an effort to buffer teacher education from the influence of the market. Will this effort move teacher education in a desirable direction or just replace one undesirable influence with another?


Both social efficiency and social mobility are purposes that have shaped American schooling in significant ways over the last 150 years. Let me say a little about the nature of each purpose and the character of its impact on schools.3

From the perspective of social efficiency, the purpose of schooling is to train students as future workers. This means providing them with the particular skills and attitudes required to fill the full range of positions in a stratified occupational structure. In short, according to this view, schools should give the job market what it wants. Social efficiency is an expression of the educational visions of employers, government officials, and taxpayers. These constituencies share a concern about filling job slots with skilled workers so that society will function efficiently, and they want schools to provide this service in a cost-effective manner.

From the perspective of social mobility, the purpose of schooling is to provide individuals with an equal opportunity to attain the more desirable social positions. This goal expresses the educational visions of the parent of a school-age child. Such a parent is concerned less with meeting society’s needs and keeping down costs than with using schools to help his or her child to get ahead. From this angle, the essence of schooling is to provide not vocational skills but educational credentials, which can be used as currency in the zero-sum competition for social status.

Note that both social efficiency and social mobility are purposes that link education directly to the job market. The key difference is that a person promoting the first goal views this link from the top down, taking the perspective of the educational provider, while a person promoting the second goal views the link from the bottom up, taking the perspective of the educational consumer.

In addition to these two market goals, however, there is also a third type of goal – arising from democratic politics – that has offered a more generous vision for American education. This is the goal that primarily motivated the founders of the common schools. The leaders of the common school movement saw universal public education as a mechanism for protecting the democratic polity from the growing class divisions and possessive individualism of an emerging market society. The common schools, they felt, could help establish a republican community on the basis of a shared educational experience cutting across class and ethnic differences. These schools could also help prepare people to function independently as citizens in a democratic society. This vision is at heart an inclusive one, grounded in political rather than economic concerns.

In spite of the power of the market, this democratic goal has found expression in American education in a number of ways over the years. There was the common school itself – which drew students from the whole community, presented them with a common curriculum, and generally chose to ignore the problem of articulating schooling with the structure of the job market. Then at the turn of the century came the comprehensive high school, which brought a heterogeneous array of students and programs together under one roof, even though students experienced quite different forms of education under that roof. More recently we have seen expressions of this goal in efforts at inclusive education, as reformers have sought to reduce inequalities associated with the race, class, gender, and handicapping conditions of students.

These three goals have frequently collided in the history of American education, resulting in an institution driven by contradictory impulses coexisting in a state of uneasy balance. However, the history of American teacher education has demonstrated a narrower range of purposes than this. There has been very little sign within teacher education of the effects of the democratic purposes that helped to shape schooling more generally – except, perhaps, a thin strand of democratic rhetoric running through the teacher education literature. In practice, teacher education has shown primarily the politically and socially narrowing effects of the market. Let’s consider what effect each of these market purposes has had on American teacher education over the years.4


While social efficiency goals for the teaching of students arose around 1900 (with the emergence of the high school and the advent of vocationalism), this emphasis came much earlier for the teaching of teachers. From the perspective of social efficiency, the central problem for teacher education was the chronic under-supply of teachers that developed in the mid-19th century and continued on into the early 20th century. The initial source of this problem was the development of universal public education, which produced a powerful demand for a large number of certified elementary teachers. In answer to this demand, the larger urban school systems opened their own normal schools, parallel to or incorporated within city high schools, for the purpose of staffing their elementary classrooms. At the same time, state governments around the country created state normal schools to meet the needs of those districts that could not support normal schools of their own.

Then, after elementary education had filled up, there came the rapid expansion of high school enrollments at the turn of the century. (High school attendance doubled every decade from 1890 to 1940.) This in turn created a strong demand for high school teachers, and the answer to that demand was found in the creation of state teachers’ colleges.

The essence of the social efficiency impulse was to create a form of teacher education that was organized around three basic principles – quantity, quality, and efficiency. The issue of quantity was the most obvious. The large number of slots to be filled created a need for a form of teacher education that could effectively mass-produce teachers. The issue of quality was a bit more complicated. The problem here was the need for a publicly credible system for certifying that the new teachers met some minimum standard of quality – a form of assurance that was necessary in order to maintain public support for the investment in schooling. This meant that teacher education needed to be established under public administration and around state certification requirements. The concern for quality, however, was undermined substantially by the concerns for quantity and efficiency.

By efficiency I mean simply that teacher education was under great pressure to prepare teachers at both low cost and high speed. The fiscal burden of expanding enrollments at the elementary level was enormous, and it only increased with the expansion of the high school. One answer to the efficiency problem was to feminize teaching, which school systems did in great haste starting in the mid-19th century. By paying women one-half of what they paid men, school systems found an effective way of getting two teachers for the price of one. The side effect, however, was to create a profession characterized by very high turnover, since, as a general rule, women tended to teach only during the half dozen or so years between the completion of their own education and marriage. As a result, teacher education found itself forced to turn out teachers even faster and more cheaply in order to compensate for the brief duration of teachers’ service.

The consequence of the goal of social efficiency was to put the emphasis on a form of teacher education that could produce the most teachers, in the shortest time, at the lowest cost, and at the minimum level of ability that the public would allow. All in all, this hardly constituted an elevating influence.


Much to the chagrin of the founders and funders of the various teacher education enterprises, these institutions quickly became subverted by another powerful market force: the demand by individuals for access to high school and college degrees and, through them, to social mobility. Teacher education was designed to be accessible and easy in the name of social efficiency. But ironically, it found itself the most accessible and easiest route to middle-class status for a large number of ambitious students and their parents. Jurgen Herbst has described this problem quite nicely in his book on the history of teacher education.5 There quickly emerged a strong form of consumer pressure on teacher education institutions to provide general liberal arts education for students who, in fact, had little or no intention of teaching.

The result was that normal schools underwent a gradual transition into general purpose high schools. A case in point is the history of Philadelphia’s Girls High School. Created in 1848, this school went through a series of name changes over the rest of the century – from Girls High School to Girls Normal School to Girls High and Normal School and finally back to Girls High School again. The problem in Philadelphia, as elsewhere, was that the purpose of the institution, though initially to train elementary teachers, was in fact up for grabs. Policy makers and fiscal authorities wanted these schools to retain their social efficiency aims and train teachers, but the parents of the school-age girls wanted them to provide a broad secondary education for their daughters.

We discover the same sorts of tensions playing out in the history of state teachers’ colleges after the turn of the century. These institutions were under considerable pressure from students to transform themselves into liberal arts colleges. And, given the extreme sensitivity of American higher education to consumer pressures, they eventually did just that in the 1920s and 1930s. By the 1960s and 1970s they moved one more step in that direction by becoming general-purpose universities. What was once the Michigan State Normal School in Ypsilanti is now Eastern Michigan University.

Consider the implications for teacher education of this pressure to provide social mobility. The fact that many teacher education students did not want to become teachers put the emphasis on a form of teacher education that was unobtrusive in character and minimal in scope for the convenience of students seeking a general education. These students were focused more on credentials and status than on learning and content, which meant that teacher education was expected to make only the most modest of demands so as not to block a student’s access to the desired degree.

Now let’s examine some problems with teacher education that can be traced to this pressure from the job market and the credentials market.


Some of the problems that markets created for teacher education derived from the conflict between the goals of social efficiency and social mobility. One such difficulty was simple inefficiency. The consumer pressure for mobility through teacher education promoted considerable inefficiency, since it led to the expansion of a system of teacher education that was producing a large number of nonteaching graduates. In effect, this amounted to a collective subsidy of individual ambition. As a result of this situation, teacher education grew accustomed to functioning as a system of mass production with a low net yield. It was under constant pressure to produce ever more graduates and to keep ever more rigid control of the unit costs of this production, simply because the ultimate number of teachers produced was so small relative to the number of students processed.

In addition, teacher education developed a serious identity crisis because of the confusion over which market it was supposed to serve. Trying to run a teacher education program is quite difficult when you can’t agree on its purpose. Is the primary focus on general or vocational education? Should the program concentrate on liberal arts or on teaching methods? Is the aim to provide an individual benefit for the consumer of higher education or a collective benefit for citizens needing qualified teachers? This uncertainty about purpose has afflicted teacher education from the very beginning and has continued right up to the recent past.

Some of the problems that teacher education has experienced derive from market-based commonalities between the goals of social efficiency and social mobility. After all, both of these tendencies arose from the perceived need to adapt teacher education to market demand. In the case of social efficiency, this was expressed as a need for more bodies in the classroom; in the case of social mobility, it was expressed as a need for credentials to equip students to compete for social position. Neither of these, I suggest, was a terribly noble goal for an educational institution. Neither provided any political vision for teacher education – no vision of exactly what education and teacher education should be, what kind of teachers we needed, what kind of learning we wanted them to foster, or what political/moral/social outcomes we wanted to produce.

In addition, both approaches to teacher preparation tended to undercut the creation of a strong educational content in teacher education programs. Social efficiency undercut content in the rush by policymakers to mass-produce teachers of minimum competence. Social mobility undercut content in the rush by ambitious individuals to use teachers’ colleges as a means of climbing the social ladder. There is nothing in either goal that would press teacher education to provide an intensive and extensive educational experience for prospective teachers, nothing in either to promote academic rigor or prolonged application. In fact, everything urges toward superficiality (providing thin coverage of both subject matter and pedagogy), brevity (keeping the program short and unintrusive), accessibility (allowing entry to nearly anyone), low level of difficulty (making the process easy and graduation certain), and parsimony (doing all of this on the cheap). This, I submit, is the market-based legacy of limited vision and ineffectual process that afflicted teacher education in the past and continues to do so today.


One recent effort to remedy some of these historical problems that are embedded in teacher education has come from within the community of teacher educators via the Holmes Group. This group is made up of approximately 100 deans from colleges of education at research-oriented universities. Their answer is a reform proposal that focuses on the goal of teacher professionalization.6

The Holmes Group argues that teachers need to receive an extensive and intensive professional education much like that accorded doctors and lawyers. Such an education, they assert, would help to free teachers from subordination within schools and, more important, would enable them to provide students with the kind of empowered learning that would allow them full participation in a democratic society. This approach tries to buffer teacher education from the corrupting influence of the marketplace by wrapping it in the armor of professionalism (and the rhetoric of democracy). However, as I have argued elsewhere, this movement is likely in practice to submit teachers and students to another kind of power – the intellectual and social power of the university within which teacher education has become imprisoned.7

The problem, I suggest, is that the movement to professionalize teaching has arisen from the status needs of teacher educators within the university. When it comes to academic prestige, teacher educators have always been at the bottom of the ladder. Arriving in the university relatively late and bearing the stigma of the normal school, they found themselves ill-equipped to compete for professional standing within the university. Yet the rules of academic status are well-defined. To gain prestige within the university, professors need to pursue a vigorous agenda of research activities, especially those framed in the methodology of science. Starting in the 1960s, teacher educators drew on the behavioral scientific model pioneered by educational psychologists and set off a landslide of research publications. The quantity of output since then has been so great that it has taken three large handbooks just to summarize the recent research on teaching and another to summarize the research on teacher education.8

The result for teacher education has been to push it to adopt a curriculum for training teachers that is based on its own scientific research. While this move may represent a partial reduction in the extent to which teacher education is a simple expression of the market, it serves to transform teacher education, at least in part, into an expression of the power and knowledge of the university – particularly reflecting the status concerns and scientific world view of the education professoriate. Like its market-based predecessors, driven by the goals of social efficiency and social mobility, this approach to teacher preparation undermines the kind of emphases that would support democratic schooling. What it promises to do is to add the rationalized authority of the university researcher to social efficiency and social mobility as driving forces behind teacher training.

Sadly, a truly democratic politics remains one goal that has never been implemented within the mainstream practice of teacher education. This more generous vision, which has intermittently influenced thinking about schools, also needs to become a factor in the way we think about the teachers within those schools and in the way they are prepared. Instead of structuring teacher education around the base concerns of efficient production and personal ambition, I suggest that we need to think about organizing it in a way that reflects what I hope are our more elevated concerns about the quality of education our teachers and students will receive and the political and social consequences that will emerge from that education.

  1. Rita Kramer, Ed School Follies: The Miseducation of America’s Teachers (New York: Free Press, 1991).
  2. John Chubb and Terry Moe, Politics, Markets, and America’s Schools (Washington, D.C.: Brookings Institution, 1990).
  3. I have developed this analysis of the impact of the market on American schools at greater length in the following works: The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven, Conn.: Yale University Press, 1988); and “From Comprehensive High School to Community College: Politics, Markets, and the Evolution of Educational Opportunity,” in Ronald G. Corwin, ed., Research in Sociology of Education and Socialization, vol. 9 (Greenwich, Conn.: JAI Press, 1990), pp. 203-40.
  4. The best general history of American teacher education is Jurgen Herbst, And Sadly Teach: Teacher Education and Professionalization in American Culture (Madison: University of Wisconsin Press, 1989). See also John I. Goodlad, Roger Soder, and Kenneth A. Sirotnik, eds., Places Where Teachers Are Taught (San Francisco: Jossey-Bass, 1990).
  5. Herbst, op. cit.
  6. Tomorrow’s Teachers (East Lansing, Mich.: Holmes Group, 1986); and Tomorrow’s Schools: Principles for the Design of Professional Development Schools (East Lansing, Mich.: Holmes Group, 1990).
  7. David F. Labaree, “Power, Knowledge, and the Rationalization of Teaching: A Genealogy of the Movement to Professionalize Teachers,” Harvard Educational Review, Summer 1992, pp. 123-54; and idem, “Doing Good, Doing Science: The Holmes Group Reports and the Rhetorics of Educational Reform,” Teachers College Record, Summer 1992, pp. 628-40.
  8. Nathaniel L. Gage, ed., Handbook of Research on Teaching (Chicago: Rand McNally, 1963); R. M. W. Travers, ed., Handbook of Research on Teaching, 2nd ed. (Chicago: Rand McNally, 1973); Merlin C. Wittrock, ed., Handbook of Research on Teaching, 3rd ed. (New York: Macmillan, 1986); and W. Robert Houston, ed., Handbook of Research on Teacher Education (New York: Macmillan, 1990).
Posted in History, History of education, War

An Affair to Remember: America’s Brief Fling with the University as a Public Good

This post is an essay about the brief but glorious golden age of the US university during the three decades after World War II.  

American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward. Prior to World War II, the federal government showed little interest in universities and provided little support. The war spurred a large investment in defense-based scientific research in universities, and the emergence of the Cold War expanded federal investment exponentially. Unlike a hot war, the Cold War offered an extended period of federally funded research and public subsidy for expanding student enrollments. The result was the golden age of the American university. The good times continued for about 30 years and then began to go bad. The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminating in the fall of the Berlin Wall in 1989. With no money and no enemy, the Cold War university fell as quickly as it arose. Instead of seeing the Cold War university as the norm, we need to think of it as the exception. What we are experiencing now in American higher education is a regression to the mean, in which, over the long haul, Americans have understood higher education to be a distinctly private good.

I originally presented this piece in 2014 at a conference at Catholic University in Leuven, Belgium.  It was then published in the Journal of Philosophy of Education in 2016 (here’s a link to the JOPE version) and then became a chapter in my 2017 book, A Perfect Mess.  Waste not, want not.  Hope you enjoy it.


An Affair to Remember:

America’s Brief Fling with the University as a Public Good

David F. Labaree

            American higher education rose to fame and fortune during the Cold War, when both student enrollments and funded research shot upward.  Prior to World War II, the federal government showed little interest in universities and provided little support.  The war spurred a large investment in defense-based scientific research in universities for reasons of both efficiency and necessity:  universities had the researchers and infrastructure in place and the government needed to gear up quickly.  With the emergence of the Cold War in 1947, the relationship continued and federal investment expanded exponentially.  Unlike a hot war, the Cold War offered a long timeline for global competition between communism and democracy, which meant institutionalizing the wartime model of federally funded research and building a set of structures for continuing investment in knowledge whose military value was unquestioned. At the same time, the communist challenge provided a strong rationale for sending a large number of students to college.  These increased enrollments would educate the skilled workers needed by the Cold War economy, produce informed citizens to combat the Soviet menace, and demonstrate to the world the broad social opportunities available in a liberal democracy.  The result of this enormous public investment in higher education has become known as the golden age of the American university.

            Of course, as is so often the case with a golden age, it didn’t last.  The good times continued for about 30 years and then began to go bad.  The decline was triggered by the combination of a decline in the perceived Soviet threat and a taxpayer revolt against high public spending; both trends culminating in the fall of the Berlin Wall in 1989.  With no money and no enemy, the Cold War university fell as quickly as it arose. 

            In this paper I try to make sense of this short-lived institution.  But I want to avoid the note of nostalgia that pervades many current academic accounts, in which professors and administrators grieve for the good old days of the mid-century university and spin fantasies of recapturing them.  Barring another national crisis of the same dimension, however, it just won’t happen.  Instead of seeing the Cold War university as the norm that we need to return to, I suggest that it’s the exception.  What we’re experiencing now in American higher education is, in many ways, a regression to the mean. 

            My central theme is this:  Over the long haul, Americans have understood higher education as a distinctly private good.  The period from 1940 to 1970 was the one time in our history when the university became a public good.  And now we are back to the place we have always been, where the university’s primary role is to provide individual consumers a chance to gain social access and social advantage.  Since students are the primary beneficiaries, they should also foot the bill; so state subsidies are hard to justify.

            Here is my plan.  First, I provide an overview of the long period before 1940 when American higher education functioned primarily as a private good.  During this period, the beneficiaries changed from the university’s founders to its consumers, but private benefit was the steady state.  This is the baseline against which we can understand the rapid postwar rise and fall of public investment in higher education.  Next, I look at the huge expansion of public funding for higher education starting with World War II and continuing for the next 30 years.  Along the way I sketch how the research university came to enjoy a special boost in support and rising esteem during these decades.  Then I examine the fall from grace toward the end of the century when the public-good rationale for higher ed faded as quickly as it had emerged.  And I close by exploring the implications of this story for understanding the American system of higher education as a whole. 

            During most of its history, the central concern driving the system has not been what it can do for society but what it can do for me.  In many ways, this approach has been highly beneficial.  Much of its success as a system – as measured by wealth, rankings, and citations – derives from its core structure as a market-based system producing private goods for consumers rather than a politically-based system producing public goods for state and society.  But this view of higher education as private property is also a key source of the system’s pathologies.  It helps explain why public funding for higher education is declining and student debt is rising; why private colleges are so much richer and more prestigious than public colleges; why the system is so stratified, with wealthy students attending the exclusive colleges at the top where social rewards are high and with poor students attending the inclusive colleges at the bottom where such rewards are low; and why quality varies so radically, from colleges that ride atop the global rankings to colleges that drift in intellectual backwaters.

The Private Origins of the System

            One of the peculiar aspects of the history of American higher education is that private colleges preceded public.  Another, which in part follows from the first, is that private colleges are also more prestigious.  Nearly everywhere else in the world, state-supported and governed universities occupy the pinnacle of the national system while private institutions play a small and subordinate role, supplying degrees of less distinction and serving students of less ability.  But in the U.S., the top private universities produce more research, gain more academic citations, attract better faculty and students, and graduate more leaders of industry, government, and the professions.  According to the 2013 Shanghai rankings, 16 of the top 25 universities in the U.S. are private, and the concentration is even higher at the top of this list, where private institutions make up 8 of the top 10 (Institute of Higher Education, 2013). 

            This phenomenon is rooted in the conditions under which colleges first emerged in the U.S.  American higher education developed into a system in the early 19th century, when three key elements were in place:  the state was weak, the market was strong, and the church was divided.  The federal government at the time was small and poor, surviving largely on tariffs and the sale of public lands, and state governments were strapped simply trying to supply basic public services.  Colleges were a low priority for government since they served no compelling public need – unlike public schools, which states saw as essential for producing citizens for the republic.  So colleges only emerged when local promoters requested and received a corporate charter from the state.  These were private not-for-profit institutions that functioned much like any other corporation.  States provided funding only sporadically and only if an institution’s situation turned dire.  And after the Dartmouth College decision in 1819, the Supreme Court made clear that a college’s corporate charter meant that it could govern itself without state interference.  Therefore, in the absence of state funding and control, early American colleges developed a market-based system of higher education. 

            If the roots of the American system were private, they were also extraordinarily local.  Unlike the European university, with its aspirations toward universality and its history of cosmopolitanism, the American college of the nineteenth century was a home-town entity.  Most often, it was founded to advance the parochial cause of promoting a particular religious denomination rather than to promote higher learning.  In a setting where no church was dominant and all had to compete for visibility, stature, and congregants, founding colleges was a valuable way to plant the flag and promote the faith.  This was particularly true when the population was rapidly expanding into new territories to the west, which meant that no denomination could afford to cede the new terrain to competitors.  Starting a college in Ohio was a way to ensure denominational growth, prepare clergy, and spread the word.

            At the same time, colleges were founded with an eye toward civic boosterism, intended to shore up a community’s claim to be a major cultural and commercial center rather than a sleepy farm town.  With a college, a town could claim that it deserved to gain lucrative recognition as a stop on the railroad line, the site for a state prison, the county seat, or even the state capital.  These consequences would elevate the value of land in the town, which would work to the benefit of major landholders.  In this sense, the nineteenth century college, like much of American history, was in part the product of a land development scheme.  In general, these two motives combined: colleges emerged as a way to advance both the interests of particular sects and also the interests of the towns where they were lodged.  Often ministers were also land speculators.  It was always better to have multiple rationales and sources of support than just one (Brown, 1995; Boorstin, 1965; Potts, 1971).  In either case, however, the benefits of founding a college accrued to individual landowners and particular religious denominations and not to the larger public.

As a result of these incentives, church officials and civic leaders around the country scrambled to get a state charter for a college, establish a board of trustees made up of local notables, and install a president.  The latter (usually a clergyman) would rent a local building, hire a small and not very accomplished faculty, and serve as the CEO of a marginal educational enterprise, one that sought to draw tuition-paying students from the area in order to make the college a going concern.  With colleges arising to meet local and sectarian needs, the result was the birth of a large number of small, parochial, and weakly funded institutions in a very short period of time in the nineteenth century, which meant that most of these colleges faced a difficult struggle to survive in the competition with peer institutions.  In the absence of reliable support from church or state, these colleges had to find a way to get by on their own. 

            Into this mix of private colleges, state and local governments began to introduce public institutions.  First came a series of universities established by individual states to serve their local populations.  Here too competition was a bigger factor than demand for learning, since a state government increasingly needed to have a university of its own in order to keep up with its neighbors.  Next came a group of land-grant colleges that began to emerge by midcentury.  Funded by grants of land from the federal government, these were public institutions that focused on providing practical education for occupations in agriculture and engineering.  Finally came an array of normal schools, which aimed at preparing teachers for the expanding system of public elementary education.  Like the private colleges, these public institutions emerged to meet the economic needs of towns that eagerly sought to house them.  And although these colleges were creatures of the state, they had only limited public funding and had to rely heavily on student tuition and private donations.

            The rate of growth of this system of higher education was staggering.  At the beginning of the American republic in 1790 the country had 19 institutions calling themselves colleges or universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1880, it had 811, which doesn’t even include the normal schools.  As a comparison, this was five times as many institutions as existed that year in all of Western Europe (Ruegg, 2004).  To be sure, the American institutions were for the most part colleges in name only, with low academic standards, an average student body of 131 (Carter et al., 2006, Table Bc523) and faculty of 14 (Carter et al., 2006, Table Bc571).  But nonetheless this was a massive infrastructure for a system of higher education. 

            At a density of 16 colleges per million of population, the U.S. in 1880 had the most overbuilt system of higher education in the world (Collins, 1979, Table 5.2).  Created in order to meet the private needs of land speculators and religious sects rather than the public interest of state and society, the system got way ahead of demand for its services.  That changed in the 1880s.  By adopting parts of the German research university model (in form if not in substance), the top level of the American system acquired a modicum of academic respectability.  In addition – and this is more important for our purposes here – going to college finally came to be seen as a good investment for a growing number of middle-class student-consumers. 

            Three factors came together to make college attractive.  Primary among these was the jarring change in the structure of status transmission for middle-class families toward the end of the nineteenth century.  The tradition of passing on social position to your children by transferring ownership of the small family business was under dire threat, as factories were driving independent craft production out of the market and department stores were making small retail shops economically marginal.  Under these circumstances, middle class families began to adopt what Burton Bledstein calls the “culture of professionalism” (Bledstein, 1976).  Pursuing a profession (law, medicine, clergy) had long been an option for young people in this social stratum, but now this attraction grew stronger as the definition of profession grew broader.  With the threat of sinking into the working class becoming more likely, families found reassurance in the prospect of a form of work that would buffer their children from the insecurity and degradation of wage labor.  This did not necessarily mean becoming a traditional professional, where the prospects were limited and entry costs high, but instead it meant becoming a salaried employee in a management position that was clearly separated from the shop floor.  The burgeoning white-collar work opportunities as managers in corporate and government bureaucracies provided the promise of social status, economic security, and protection from downward mobility.  And the best way to certify yourself as eligible for this kind of work was to acquire a college degree. 

            Two other factors added to the attractions of college.  One was that a high school degree – once a scarce commodity that became a form of distinction for middle class youth during the nineteenth century – was in danger of becoming commonplace.  Across the middle of the century, enrollments in primary and grammar schools were growing fast, and by the 1880s they were filling up.  By 1900, the average American 20-year-old had eight years of schooling, which meant that political pressure was growing to increase access to high school (Goldin & Katz, 2008, p. 19).  This started to happen in the 1880s, and for the next 50 years high school enrollments doubled every decade.  The consequences were predictable.  If the working class was beginning to get a high school education, then middle class families felt compelled to preserve their advantage by pursuing college.

            The last piece that fell into place to increase the drawing power of college for middle class families was the effort by colleges in the 1880s and 90s to make undergraduate enrollment not just useful but enjoyable.  Ever desperate to find ways to draw and retain students, colleges responded to competitive pressure by inventing the core elements that came to define the college experience for American students in the twentieth century.  These included fraternities and sororities, pleasant residential halls, a wide variety of extracurricular entertainments, and – of course – football.  College life became a major focus of popular magazines, and college athletic events earned big coverage in newspapers.  In remarkably short order, going to college became a life stage in the acculturation of middle class youth.  It was the place where you could prepare for a respectable job, acquire sociability, learn middle class cultural norms, have a good time, and meet a suitable spouse.  And, for those who were so inclined, there was the potential fringe benefit of getting an education.

            Spurred by student desire to get ahead or stay ahead, college enrollments started growing quickly.  They were at 116,000 in 1879, 157,000 in 1889, 238,000 in 1899, 355,000 in 1909, 598,000 in 1919, 1,104,000 in 1929, and 1,494,000 in 1939 (Carter et al., 2006, Table Bc523).  This was a rate of increase of more than 50 percent a decade – not as fast as the increases that would come at midcentury, but still impressive.  During this same 60-year period, total college enrollment as a proportion of the population 18-to-24 years old rose from 1.6 percent to 9.1 percent (Carter et al., 2006, Table Bc524).  By 1930, the U.S. had three times the population of the U.K. and 20 times the number of college students (Levine, 1986, p. 135).  And the reason they were enrolling in such numbers was clear.  According to studies in the 1920s, almost two-thirds of undergraduates were there to get ready for a particular job, mostly in the lesser professions and middle management (Levine, 1986, p. 40).  Business and engineering were the most popular majors and the social sciences were on the rise.  As David Levine put it in his important book about college in the interwar years, “Institutions of higher learning were no longer content to educate; they now set out to train, accredit, and impart social status to their students” (Levine, 1986, p. 19).

            Enrollments were growing in public colleges faster than in private colleges, but only by a small amount.  In fact it wasn’t until 1931 – for the first time in the history of American higher education – that the public sector finally accounted for a majority of college students (Carter et al., 2006, Tables Bc531 and Bc534).  The increases occurred across all levels of the system, including the top public research universities; but the largest share of enrollments flowed into the newer institutions at the bottom of the system:  the state colleges that were emerging from normal schools, urban commuter colleges (mostly private), and an array of public and private junior colleges that offered two-year vocational programs. 

            For our purposes today, the key point is this:  The American system of colleges and universities that emerged in the nineteenth century and continued until World War II was a market-driven structure that construed higher education as a private good.  Until around 1880, the primary benefits of the system went to the people who founded individual institutions – the land speculators and religious sects for whom a new college brought wealth and competitive advantage.  This explains why colleges emerged in such remote places long before there was substantial student demand.  The role of the state in this process was muted.  The state was too weak and too poor to provide strong support for higher education, and there was no obvious state interest that argued for doing so.  Until the decade before the war, most student enrollments were in the private sector, and even at the war’s start the majority of institutions in the system were private (Carter et al., 2006, Tables Bc510 to Bc520).  

            After 1880, the primary benefits of the system went to the students who enrolled.  For them, it became the primary way to gain entry to the relatively secure confines of salaried work in management and the professions.  For middle class families, college in this period emerged as the main mechanism for transmitting social advantage from parents to children; and for others, it became the object of aspiration as the place to get access to the middle class.  State governments put increasing amounts of money into support for public higher education, not because of the public benefits it would produce but because voters demanded increasing access to this very attractive private good.

The Rise of the Cold War University

            And then came the Second World War.  There is no need here to recount the devastation it brought about or the nightmarish residue it left.  But it’s worth keeping in mind the peculiar fact that this conflict is remembered fondly by Americans, who often refer to it as the Good War (Terkel, 1997).  The war cost a lot of American lives and money, but it also brought a lot of benefits.  It didn’t hurt, of course, to be on the winning side and to have all the fighting take place on foreign territory.  And part of the positive feeling associated with the war comes from the way it thrust the country into a new role as the dominant world power.  But perhaps even more, the warm feeling arises from the memory of this as a time when the country came together around a common cause.  For citizens of the United States – the most liberal of liberal democracies, where private liberty is much more highly valued than public loyalty – it was a novel and exciting feeling to rally around the federal government.  Usually viewed with suspicion as a threat to the rights of individuals and a drain on private wealth, the American government in the 1940s took on the mantle of good in the fight against evil.  Its public image became the resolute face of a white-haired man dressed in red, white, and blue, who pointed at the viewer in a famous recruiting poster.  Its slogan: “Uncle Sam Wants You.”

            One consequence of the war was a sharp increase in the size of the U.S. government.  The historically small federal state had started to grow substantially in the 1930s as a result of the New Deal effort to spend the country out of a decade-long economic depression, a time when spending doubled.  But the war raised the level of federal spending by a factor of seven, from $1,000 to $7,000 per capita.  After the war, the level dropped back to $2,000; and then the onset of the Cold War sent federal spending into a sharp, and this time sustained, increase – reaching $3,000 in the 50s, $4,000 in the 60s, and regaining the previous high of $7,000 in the 80s, during the last days of the Soviet Union (Garrett & Rhine, 2006, Figure 3).

            If for Americans in general World War II carries warm associations, for people in higher education it marks the beginning of the Best of Times – a short but intense period of generous public funding and rapid expansion.  Initially, of course, the war brought trouble, since it sent most prospective college students into the military.  Colleges quickly adapted by repurposing their facilities for military training and other war-related activities.  But the real long-term benefits came when the federal government decided to draw higher education more centrally into the war effort – first, as the central site for military research and development; and second, as the place to send veterans when the war was over.  Let me say a little about each.

            In the first half of the twentieth century, university researchers had to scrabble around looking for funding, forced to rely on a mix of foundations, corporations, and private donors.  The federal government saw little benefit in employing their services.  In a particularly striking case at the start of World War I, the professional association of academic chemists offered its help to the War Department, which declined “on the grounds that it already had a chemist in its employ” (Levine, 1986, p. 51).[1]  The existing model was for government to maintain its own modest research facilities instead of relying on the university. 

            The scale of the next war changed all this.  At the very start, a former engineering dean from MIT, Vannevar Bush, took charge of mobilizing university scientists behind the war effort as head of the Office of Scientific Research and Development.  The model he established for managing the relationship between government and researchers set the pattern for university research that still exists in the U.S. today: Instead of setting up government centers, the idea was to farm out research to universities.  Issue a request for proposals to meet a particular research need; award the grant to the academic researchers who seemed best equipped to meet this need; and pay 50 percent or more overhead to the university for the facilities that researchers would use.  This method drew on the expertise and facilities that already existed at research universities, which both saved the government from having to maintain a costly permanent research operation and also gave it the flexibility to draw on the right people for particular projects.  For universities, it provided a large source of funds, which enhanced their research reputations, helped them expand faculty, and paid for infrastructure.  It was a win-win situation.  It also established the entrepreneurial model of the university researcher in perpetual search for grant money.  And for the first time in the history of American higher education, the university was being considered a public good, whose research capacity could serve the national interest by helping to win a war. 

            If universities could meet one national need during the war by providing military research, they could meet another national need after the war by enrolling veterans.  The GI Bill of Rights, passed by Congress in 1944, was designed to pay off a debt and resolve a manpower problem.  Its official name, the Servicemen’s Readjustment Act of 1944, reflects both aims.  By the end of the war, 15 million men and women had served in the military, and they clearly deserved a reward for their years of service to the country.  The bill offered them the opportunity to continue their education at federal expense, which included attending the college of their choice.  This opportunity also offered another public benefit, since it responded to deep concern about the ability of the economy to absorb this flood of veterans.  The country had been sliding back into depression at the start of the war, and the fear was that massive unemployment at war’s end was a real possibility.  The strategy worked.  Under the GI Bill, about two million veterans eventually attended some form of college.  By 1948, when veteran enrollment peaked, American colleges and universities had one million more students than 10 years earlier (Geiger, 2004, pp. 40-41; Carter et al., 2006, Table Bc523).  This was another win-win situation.  The state rewarded national service, headed off mass unemployment, and produced a pile of human capital for future growth.  Higher education got a flood of students who could pay their own way.  The worry, of course, was what was going to happen when the wartime research contracts ended and the veterans graduated. 

            That’s where the Cold War came in to save the day.  And the timing was perfect.  The first major action of the new conflict – the Berlin Blockade – came in 1948, the same year that veteran enrollments at American colleges reached their peak.  If World War II was good for American higher education, the Cold War was a bonanza.  The hot war meant boom and bust – providing a short surge of money and students followed by a sharp decline.  But the Cold War was a prolonged effort to contain Communism.  It was sustainable because actual combat was limited and often carried out by proxies.  For universities this was a gift that, for 30 years, kept on giving.  The military threat was massive in scale – nothing less than the threat of nuclear annihilation.  And supplementing it was an ideological challenge – the competition between two social and political systems for hearts and minds.  As a result, the government needed top universities to provide it with massive amounts of scientific research that would support the military effort.  And it also needed all levels of the higher education system to educate the large numbers of citizens required to deal with the ideological menace.  We needed to produce the scientists and engineers who would allow us to compete with Soviet technology.  We needed to provide high-level human capital in order to promote economic growth and demonstrate the economic superiority of capitalism over communism.  And we needed to provide educational opportunity for our own racial minorities and lower classes in order to show that our system is not only effective but also fair and equitable.  This would be a powerful weapon in the effort to win over the third world with the attractions of the American Way.  
The Cold War American government treated the higher education system as a highly valuable public good, one that would make a large contribution to the national interest; and the system was pleased to be the object of so much federal largesse (Loss, 2012).

            On the research side, the impact of the Cold War on American universities was dramatic.  The best way to measure this is by examining patterns of federal research and development spending over the years, which traces the ebb and flow of national threats across the last 60 years.  Funding rose slowly from $13 billion in 1953 (in constant 2014 dollars) until the Sputnik crisis (after the Soviets succeeded in placing the first satellite in earth orbit), when funding jumped to $40 billion in 1959 and rose rapidly to a peak of $88 billion in 1967.  Then the amount backed off to $66 billion in 1975, climbing to a new peak of $104 billion in 1990 just before the collapse of the Soviet Union and then dropping off.  It started growing again in 2002 after the attack on the Twin Towers, reaching an all-time high of $151 billion in 2010, and it has been declining ever since (AAAS, 2014).[2] 

            Initially, defense funding accounted for 85 percent of federal research funding, gradually falling back to about half in 1967, as nondefense funding increased, but remaining in a solid majority position up until the present.  For most of the period after 1957, however, the largest element in nondefense spending was research on space technology, which arose directly from the Soviet Sputnik threat.  If you combine defense and space appropriations, this accounts for about three-quarters of federal research funding until 1990.  Defense research closely tracked perceived threats in the international environment, dropping by 20 percent after 1989 and then making a comeback in 2001.  Overall, federal funding during the Cold War for research of all types grew in constant dollars from $13 billion in 1953 to $104 billion in 1990, an increase of 700 percent.  These were good times for university researchers (AAAS, 2014).

            At the same time that research funding was growing rapidly, so were college enrollments.  The number of students in American higher education grew from 2.4 million in 1949 to 3.6 million in 1959; but then came the 1960s, when enrollments more than doubled, reaching 8 million in 1969.  The number hit 11.6 million in 1979 and then began to slow down – creeping up to 13.5 million in 1989 and leveling off at around 14 million in the 1990s (Carter et al., 2006, Table Bc523; NCES, 2014, Table 303.10).  During the 30 years between 1949 and 1979, enrollments increased by more than 9 million students, a growth of almost 400 percent.  And the bulk of the enrollment increases in the last two decades were in part-time students and at two-year colleges.  Among four-year institutions, the primary growth occurred not at private or flagship public universities but at regional state universities, the former normal schools.  The Cold War was not just good for research universities; it was also great for institutions of higher education all the way down the status ladder.

            In part we can understand this radical growth in college enrollments as an extension of the long-term surge in consumer demand for American higher education as a private good.  Recall that enrollments started accelerating late in the nineteenth century, when college attendance started to provide an edge in gaining middle class jobs.  This meant that attending college gave middle-class families a way to pass on social advantage while attending high school gave working-class families a way to gain social opportunity.  But by 1940, high school enrollments had become universal.  So for working-class families, the new zone of social opportunity became higher education.  This increase in consumer demand provided a market-based explanation for at least part of the flood of postwar enrollments.

            At the same time, however, the Cold War provided a strong public rationale for broadening access to college.  In 1946, President Harry Truman appointed a commission to provide a plan for expanding access to higher education, which was the first time in American history that a president sought advice about education at any level.  The result was a six-volume report with the title Higher Education for American Democracy.  It’s no coincidence that the report was issued in 1947, the starting point of the Cold War.  The authors framed the report around the new threat of atomic war, arguing that “It is essential today that education come decisively to grips with the world-wide crisis of mankind” (President’s Commission, 1947, vol. 1, p. 6).  What they proposed as a public response to the crisis was a dramatic increase in access to higher education.

            The American people should set as their ultimate goal an educational system in which at no level – high school, college, graduate school, or professional school – will a qualified individual in any part of the country encounter an insuperable economic barrier to the attainment of the kind of education suited to his aptitudes and interests.
        This means that we shall aim at making higher education equally available to all young people, as we now do education in the elementary and high schools, to the extent that their capacity warrants a further social investment in their training (President’s Commission, 1947, vol. 1, p. 36).

Tellingly, the report devotes a lot of space to exploring the existing barriers to educational opportunity posed by class and race – exactly the kinds of issues that were making liberal democracies look bad in light of the egalitarian promise of communism.

Decline of the System’s Public Mission

            So in the mid twentieth century, Americans went through an intense but brief infatuation with higher education as a public good.  Somehow college was going to help save us from the communist menace and the looming threat of nuclear war.  Like World War II, the Cold War brought together a notoriously individualistic population around the common goal of national survival and the preservation of liberal democracy.  It was a time when every public building had an area designated as a bomb shelter.  In the elementary school I attended in the 1950s, I can remember regular air raid drills.  The alarm would sound and teachers would lead us downstairs to the basement, whose concrete-block walls were supposed to protect us from a nuclear blast.  Although the drills did nothing to preserve life, they did serve an important social function.  Like Sunday church services, these rituals drew individuals together into communities of faith where we enacted our allegiance to a higher power. 

            For American college professors, these were the glory years, when fear of annihilation gave us a glamorous public mission and what seemed like an endless flow of public funds and funded students.  But it did not – and could not – last.  Wars can bring great benefits to the home front, but then they end.  The Cold War lasted longer than most, but this longevity came at the expense of intensity.  By the 1970s, the U.S. had lived with the nuclear threat for 30 years without any sign that the worst case was going to materialize.  You can only stand guard for so long before attention begins to flag and ordinary concerns start to push back to the surface.  In addition, waging war is extremely expensive, draining both public purse and public sympathy.  The two Cold War conflicts that engaged American troops cost a lot, stirred strong opposition, and ended badly, providing neither the idealistic glow of the Good War nor the satisfying closure of unconditional surrender by the enemy.  Korea ended with a stalemate and the return to the status quo ante bellum.  Vietnam ended with defeat and the humiliating image in 1975 of the last Americans being plucked off a rooftop in Saigon – which the victors then promptly renamed Ho Chi Minh City.

            The Soviet menace and the nuclear threat persisted, but in a form that – after the grim experience of war in the rice paddies – seemed distant and slightly unreal.  Add to this the problem that, as a tool for defeating the enemy, the radical expansion of higher education by the 70s did not appear to be a cost-effective option.  Higher ed is a very labor-intensive enterprise, in which size brings few economies of scale, and its public benefits in the war effort were hard to pin down.  As the national danger came to seem more remote, the costs of higher ed became more visible and more problematic.  Look around any university campus, and the primary beneficiaries of public largesse seem to be private actors – the faculty and staff who work there and the students whose degrees earn them higher income.  So about 30 years into the Cold War, the question naturally arose:  Why should the public pay so much to provide cushy jobs for the first group and to subsidize the personal ambition of the second?  If graduates reap the primary benefits of a college education, shouldn’t they be paying for it rather than the beleaguered taxpayer?

            The 1970s marked the beginning of the American tax revolt, and not surprisingly this revolt emerged first in the bellwether state of California.  Fueled by booming defense plants and high immigration, California had a great run in the decades after 1945.  During this period, the state developed the most comprehensive system of higher education in the country.  In 1960 it formalized this system with a Master Plan that offered every Californian the opportunity to attend college in one of three state systems.  The University of California focused on research, graduate programs, and educating the top high school graduates.  California State University (developed mostly from former teachers colleges) focused on undergraduate programs for the second tier of high school graduates.  The community college system offered the rest of the population two-year programs for vocational training and possible transfer to one of the two university systems.  By 1975, there were 9 campuses in the University of California, 23 in California State University, and xx in the community college system, with a total enrollment across all systems of 1.5 million students – accounting for 14 percent of the college students in the U.S. (Carter et al., 2006, Table Bc523; Douglass, 2000, Table 1).  Not only was the system enormous, but the Master Plan declared it illegal to charge California students tuition.  The biggest and best public system of higher education in the country was free.

            And this was the problem.  What allowed the system to grow so fast was a state fiscal regime that was quite rare in the American context – one based on high public services supported by high taxes.  After enjoying the benefits of this combination for a few years, taxpayers suddenly woke up to the realization that this approach to paying for higher education was at core un-American.  For a country deeply grounded in liberal democracy, the system of higher ed for all at no cost to the consumer looked a lot like socialism.  So, of course, it had to go.  In the mid-1970s the country’s first taxpayer revolt emerged in California, culminating in a successful campaign in 1978 to pass a statewide initiative that put a limit on increases in property taxes.  Other tax limitation initiatives followed (Martin, 2008).  As a result, the average state appropriation per student at the University of California dropped from about $3,400 (in 1960 dollars) in 1987 to $1,100 in 2010, a decline of 68 percent (UC Data Analysis, 2014).  This quickly led to a steady increase in fees charged to students at California’s colleges and universities.  (It turned out that tuition was illegal but demanding fees from students was not.)  In 1960 dollars, the annual fees for in-state undergraduates at the University of California rose from $317 in 1987 to $1,122 in 2010, an increase of more than 250 percent (UC Data Analysis, 2014).  This pattern of tax limitations and tuition increases spread across the country.  Nationwide during the same period of time, the average state appropriation per student at a four-year public college fell from $8,500 to $5,900 (in 2012 dollars), a decline of 31 percent, while average undergraduate tuition doubled, rising from $2,600 to $5,200 (SHEEO, 2013, Figure 3).

            The decline in the state share of higher education costs was most pronounced at the top public research universities, which had a wider range of income sources.  By 2009, the average such institution was receiving only 25 percent of its revenue from state government (National Science Board, 2012, Figure 5).  An extreme case is the University of Virginia, where in 2013 the state provided less than six percent of the university’s operating budget (University of Virginia, 2014). 

            While these changes were happening at the state level, the federal government was also backing away from its Cold War generosity to students in higher education.  Legislation such as the National Defense Education Act (1958) and Higher Education Act (1965) had provided support for students through a roughly equal balance of grants and loans.  But in 1980 the election of Ronald Reagan as president meant that the push to lower taxes would become national policy.  At this point, support for students shifted from cash support to federally guaranteed loans.  The idea was that a college degree was a great investment for students, which would pay long-term economic dividends, so they should shoulder an increasing share of the cost.  The proportion of total student support in the form of loans was 54 percent in 1975, 67 percent in 1985, and 78 percent in 1995, and the ratio has remained at that level ever since (McPherson & Schapiro, 1998, Table 3.3; College Board, 2013, Table 1).  By 1995, students were borrowing $41 billion to attend college, which grew to $89 billion in 2005 (College Board, 2014, Table 1).  At present, about 60 percent of all students accumulate college debt, most of it in the form of federal loans, and the total student debt load has passed $1 trillion.

            At the same time that the federal government was cutting back on funding college students, it was also reducing funding for university research.  As I mentioned earlier, federal research grants in constant dollars peaked at about $100 billion in 1990, the year after the fall of the Berlin wall – a good marker for the end of the Cold War.  At this point defense accounted for about two-thirds of all university research funding – three-quarters if you include space research.  Defense research declined by about 20 percent during the 90s and didn’t start rising again substantially until 2002, the year after the fall of the Twin Towers and the beginning of the new existential threat known as the War on Terror.  Defense research reached a new peak in 2009 at a level about a third above the Cold War high, and it has been declining steadily ever since.  Increases in nondefense research helped compensate for only a part of the loss of defense funds (AAAS, 2014).


            The American system of higher education came into existence as a distinctly private good.  It arose in the nineteenth century to serve the pursuit of sectarian advantage and land speculation, and then in the twentieth century it evolved into a system for providing individual consumers a way to get ahead or stay ahead in the social hierarchy.  Quite late in the game it took World War II to give higher education an expansive national mission and reconstitute it as a public good.  But hot wars are unsustainable for long, so in 1945 the system was sliding quickly back toward public irrelevance before it was saved by the timely arrival of the Cold War.  As I have shown, the Cold War was very, very good for the American system of higher education.  It produced a massive increase in funding by federal and state governments, both for university research and for college student subsidies, and – more critically – it sustained this support for a period of three decades.  But these golden years gradually gave way before a national wave of taxpayer fatigue and the surprise collapse of the Soviet Union.  With the nation strapped for funds and its global enemy dissolved, the government no longer had an urgent need to enlist America’s colleges and universities in a grand national cause.  The result was a decade of declining research support and static student enrollments.  In 2002 the wars in Afghanistan and Iraq brought a momentary surge in both, but these measures peaked after only eight years and then went again into decline.  Increasingly, higher education is returning to its roots as a private good.

            So what are we to take away from this story of the rise and fall of the Cold War university?  One conclusion is that the golden age of the American university in the mid twentieth century was a one-off event.  Wars may be endemic but the Cold War was unique.  So American university administrators and professors need to stop pining for a return to the good old days and learn how to live in the post-Cold-War era.  The good news is that the surge in public investment in higher education left the system in a radically stronger condition than it was in before World War II.  Enrollments have gone from 1.5 million to 21 million; federal research funding has gone from zero to $135 billion; federal grants and loans to college students have gone from zero to $170 billion (NCES, 2014, Table 303.10; AAAS, 2014; College Board, 2014, Table 1).  And the American system of colleges and universities went from an international also-ran to a powerhouse in the world economy of higher education.  Even though all of the numbers are now dropping, they are dropping from a very high level, which is the legacy of the Cold War.  So really, we should stop whining.  We should just say thanks to the bomb for all that it did for us and move on.

            The bad news, of course, is that the numbers really are going down.  Government funding for research is declining and there is no prospect for a turnaround in the foreseeable future.  This is a problem because the federal government is the primary source of funds for basic research in the U.S.; corporations are only interested in investing in research that yields immediate dividends.  During the Cold War, research universities developed a business plan that depended heavily on external research funds to support faculty, graduate students, and overhead.  That model is now broken.  The cost of pursuing a college education is increasingly being borne by the students themselves, as states are paying a declining share of the costs of higher education.  Tuition is rising and as a result student loans are rising.  Public research universities are in a particularly difficult position because their state funding is falling most rapidly.  According to one estimate, at the current rate of decline the average state fiscal support for public higher education will reach zero in 2059 (Mortenson, 2012). 

            But in the midst of all of this bad news, we need to keep in mind that the American system of higher education has a long history of surviving and even thriving under conditions of at best modest public funding.  At its heart, this is a system of higher education based not on the state but on the market.  In the hardscrabble nineteenth century, the system developed mechanisms for getting by without the steady support of funds from church or state.  It learned how to attract tuition-paying students, give them the college experience they wanted, get them to identify closely with the institution, and then milk them for donations after they graduate.  Football, fraternities, logo-bearing T shirts, and fund-raising operations all paid off handsomely.  It learned how to adapt quickly to trends in the competitive environment, whether the adoption of intercollegiate football, the establishment of research centers to capitalize on funding opportunities, or the provision of food courts and rock-climbing walls for students.  Public institutions have a long history of behaving much like private institutions because they were never able to count on continuing state funding. 

            This system has worked well over the years.  Along with the Cold War, it has enabled American higher education to achieve an admirable global status.  By the measures of citations, wealth, drawing power, and Nobel prizes, the system has been very effective.  But it comes with enormous costs.  Private universities have serious advantages over public universities, as we can see from university rankings.  The system is the most stratified structure of higher education in the world.  Top universities in the U.S. get an unacknowledged subsidy from the colleges at the bottom of the hierarchy, which receive less public funding, charge less tuition, and receive less generous donations.  And students sort themselves into positions in the college hierarchy that parallel their positions in the status hierarchy.  Students with more cultural capital and economic capital gain greater social benefit from the system than those with less, since they go to college more often, attend the best institutions, and graduate at a much higher rate.  Nearly everyone can go to college in the U.S., but the colleges that are most accessible provide the least social advantage. 

            So, conceived and nurtured into maturity as a private good, the American system of higher education remains a market-based organism.  It took the threat of nuclear war to turn it – briefly – into a public good.  But these days seem as remote as the time when schoolchildren huddled together in a bomb shelter. 


American Association for the Advancement of Science. (2014). Historical Trends in Federal R & D: By Function, Defense and Nondefense R & D, 1953-2015. (accessed 8-21-14).

Bledstein, B. J. (1976). The Culture of Professionalism: The Middle Class and the Development of Higher Education in America. New York:  W. W. Norton.

Boorstin, D. J. (1965). Culture with Many Capitals: The  Booster College. In The Americans: The National Experience (pp. 152-161). New York: Knopf Doubleday.

Brown, D. K. (1995). Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism. New York: Teachers College Press.

Carter, S. B., et al. (2006). Historical Statistics of the United States, Millennial Edition Online. New York: Cambridge University Press.

College Board. (2013). Trends in Student Aid, 2013. New York: The College Board.

College Board. (2014). Trends in Higher Education: Total Federal and Nonfederal Loans over Time. (accessed 9-4-14).

Collins, R. (1979). The Credential Society: An Historical Sociology of Education and Stratification. New York: Academic Press.

Douglass, J. A. (2000). The California Idea and American Higher Education: 1850 to the 1960 Master Plan. Stanford, CA: Stanford University Press.

Garrett, T. A., & Rhine, R. M. (2006).  On the Size and Growth of Government. Federal Reserve Bank of St. Louis Review, 88:1 (pp. 13-30).

Geiger, R. L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Goldin, C. & Katz, L. F. (2008). The Race between Education and Technology. Cambridge: Belknap Press of Harvard University Press.

Institute of Higher Education, Shanghai Jiao Tong University.  (2013).  Academic Ranking of World Universities – 2013. (accessed 6-11-14).

Kerr, C. (2001). The Uses of the University (5th ed.). Cambridge, MA: Harvard University Press.

Levine, D. O. (1986). The American College and the Culture of Aspiration, 1914-1940. Ithaca: Cornell University Press.

Loss, C. P. (2011). Between Citizens and the State: The Politics of American Higher Education in the 20th Century. Princeton, NJ: Princeton University Press.

Martin, I. W. (2008). The Permanent Tax Revolt: How the Property Tax Transformed American Politics. Stanford, CA: Stanford University Press.

McPherson, M. S. & Schapiro, M. O.  (1999).  Reinforcing Stratification in American Higher Education:  Some Disturbing Trends.  Stanford: National Center for Postsecondary Improvement.

Mortenson, T. G. (2012).  State Funding: A Race to the Bottom.  The Presidency (winter). (accessed 10-18-14).

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

National Science Board. (2012). Diminishing Funding Expectations: Trends and Challenges for Public Research Universities. Arlington, VA: National Science Foundation.

Potts, D. B. (1971).  American Colleges in the Nineteenth Century: From Localism to Denominationalism. History of Education Quarterly, 11: 4 (pp. 363-380).

President’s Commission on Higher Education. (1947). Higher Education for American Democracy: A Report. Washington, DC: US Government Printing Office.

Rüegg, W. (2004). European Universities and Similar Institutions in Existence between 1812 and the End of 1944: A Chronological List: Universities. In W. Rüegg (Ed.), A History of the University in Europe, vol. 3. London: Cambridge University Press.

State Higher Education Executive Officers (SHEEO). (2013). State Higher Education Finance, FY 2012. (accessed 9-8-14).

Terkel, S. (1997). The Good War: An Oral History of World War II. New York: New Press.

Tewksbury, D. G. (1932). The Founding of American Colleges and Universities before the Civil War. New York: Teachers College Press.

University of California Data Analysis. (2014). UC Funding and Fees Analysis. (accessed 9-2-14).

U.S. Bureau of the Census. (1975). Historical Statistics of the United States, Colonial Times to 1970. Washington, DC: US Government Printing Office.

University of Virginia. (2014). Financing the University 101. (accessed 9-2-14).

[1] Under pressure of the war effort, the department eventually relented and enlisted the help of chemists to study gas warfare.  But the initial response is telling.

[2] Not all of this funding went into the higher education system.  Some went to stand-alone research organizations such as the Rand Corporation and the American Institutes for Research.  But these organizations in many ways function as an adjunct to higher education, with researchers moving freely between them and the university.

Posted in Higher Education, History of education, Organization Theory, Sociology

College: What Is It Good For?

This post is the text of a lecture I gave in 2013 at the annual meeting of the John Dewey Society.  It was published the following year in the Society’s journal, Education and Culture.  Here’s a link to the published version.           

The story I tell here is not a philosophical account of the virtues of the American university but a sociological account of how those virtues arose as unintended consequences of a system of higher education that emerged for less elevated reasons.  Drawing on the analysis in the book I was writing at the time, A Perfect Mess, I show how the system emerged in large part out of two impulses that had nothing to do with advancing knowledge.  One was the competition among religious groups, seeking to plant the denominational flag on the growing western frontier and provide clergy for the newly arriving flock.  Another was the competition among frontier towns to attract settlers who would buy land, using a college as a sign that this town was not just another dusty farm village but a true center of culture.

The essay then goes on to explore how the current positive social benefits of the US higher ed system are supported by the peculiar institutional form that characterizes American colleges and universities. 

My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

In short, I’m portraying the system as one that is infused with irony, from its early origins through to its current functions.  Hope you enjoy it.


College: What Is It Good For?

David F. Labaree

            I want to say up front that I’m here under false pretenses.  I’m not a Dewey scholar or a philosopher; I’m a sociologist doing history in the field of education.  And the title of my lecture is a bit deceptive.  I’m not really going to talk about what college is good for.  Instead I’m going to talk about how the institution we know as the modern American university came into being.  As a sociologist I’m more interested in the structure of the institution than in its philosophical aims.  It’s not that I’m opposed to these aims.  In fact, I love working in a university where these kinds of pursuits are open to us: where we can enjoy the free flow of ideas; where we can explore any issue in the sciences or humanities that engages us; and where we can go wherever the issue leads without worrying about utility or orthodoxy or politics.  It’s a great privilege to work in such an institution.  And this is why I want to spend some time examining how this institution developed its basic form in the improbable context of the United States in the nineteenth century.

            My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

            I tell this story in three parts.  I start by exploring how the American system of higher education emerged in the nineteenth century, without a plan and without any apparent promise that it would turn out well.  I show how, by 1900, all the pieces of the current system had come together.  This is the historical part.  Then I show how the combination of these elements created an astonishingly strong, resilient, and powerful structure.  I look at the way this structure deftly balances competing aims – the populist, the practical, and the elite.  This is the sociological part.  Then I veer back toward the issue raised in the title, to figure out what the connection is between the form of American higher education and the things that it is good for. This is the vaguely philosophical part.  I argue that the form serves the extraordinarily useful functions of protecting those of us in the faculty from the real world, protecting us from each other, and hiding what we’re doing behind a set of fictions and veneers that keep anyone from knowing exactly what is really going on. 

           In this light, I look at some of the things that could kill it for us.  One is transparency.  The current accountability movement directed toward higher education could ruin everything by shining a light on the multitude of conflicting aims, hidden cross-subsidies, and forbidden activities that constitute life in the university.  A second is disaggregation.  I’m talking about current proposals to pare down the complexity of the university in the name of efficiency:  Let online modules take over undergraduate teaching; eliminate costly residential colleges; closet research in separate institutes; and get rid of football.  These changes would destroy the synergy that comes from the university’s complex structure.  A third is principle.  I argue that the university is a procedural institution, which would collapse if we all acted on principle instead of form.   I end with a call for us to retreat from substance and stand shoulder-to-shoulder in defense of procedure.

Historical Roots of the System

            The origins of the American system of higher education could not have been more humble or less promising of future glory.  It was a system, but it had no overall structure of governance and it did not emerge from a plan.  It just happened, through an evolutionary process that had direction but no purpose.  We have a higher education system in the same sense that we have a solar system, each of which emerged over time according to its own rules.  These rules shaped the behavior of the system but they were not the product of Intelligent Design. 

            Yet something there was about this system that produced extraordinary institutional growth.  When George Washington assumed the presidency of the new republic in 1789, the U.S. already had 19 colleges and universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1830 the number had risen to 50, and then growth accelerated, with the total reaching 250 in 1860, 563 in 1870, and 811 in 1880.  To give some perspective, the number of universities in the United Kingdom between 1800 and 1880 rose from 6 to 10, and in all of Europe from 111 to 160 (Rüegg, 2004).  So in 1880 this upstart system had five times as many institutions of higher education as the entire continent of Europe.  How did this happen?

            Keep in mind that the university as an institution was born in medieval Europe in the space between the dominant sources of power and wealth, the church and the state, and it drew  its support over the years from these two sources.  But higher education in the U.S. emerged in a post-feudal frontier setting where the conditions were quite different.  The key to understanding the nature of the American system of higher education is that it arose under conditions where the market was strong, the state was weak, and the church was divided.  In the absence of any overarching authority with the power and money to support a system, individual colleges had to find their own sources of support in order to get started and keep going.  They had to operate as independent enterprises in the competitive economy of higher education, and their primary reasons for being had little to do with higher learning.

            In the early- and mid-nineteenth century, the modal form of higher education in the U.S. was the liberal arts college.  This was a non-profit corporation with a state charter and a lay board, which would appoint a president as CEO of the new enterprise.  The president would then rent a building, hire a faculty, and start recruiting students.  With no guaranteed source of funding, the college had to make a go of it on its own, depending heavily on tuition from students and donations from prominent citizens, alumni, and religious sympathizers.  For college founders, location was everything.  However, whereas European universities typically emerged in major cities, these colleges in the U.S. arose in small towns far from urban population centers.  Not a good strategy if your aim was to draw a lot of students.  But the founders had other things in mind.

            One central motive for founding colleges was to promote religious denominations.  The large majority of liberal arts colleges in this period had a religious affiliation and a clergyman as president.  The U.S. was an extremely competitive market for religious groups seeking to spread the faith, and colleges were a key way to achieve this end.  With colleges, denominations could prepare their own clergy and provide higher education for their members; and these goals were particularly important on the frontier, where the population was growing and the possibilities for denominational expansion were the greatest.  Every denomination wanted to plant the flag in the new territories, which is why Ohio came to have so many colleges.  The denomination provided a college with legitimacy, students, and a built-in donor pool but with little direct funding.

            Another motive for founding colleges was closely allied with the first, and that was land speculation.  Establishing a college in town was not only a way to advance the faith, it was also a way to raise property values.  If town fathers could attract a college, they could make the case that the town was no mere agricultural village but a cultural center, the kind of place where prospective land buyers would want to build a house, set up a business, and raise a family.  Starting a college was cheap and easy.  It would bear the town’s name and serve as its cultural symbol.  With luck it would give the town leverage to become a county seat or gain a station on the rail line.  So a college was a good investment in a town’s future prosperity (Brown, 1995).

            The liberal arts college was the dominant but not the only form that higher education took in nineteenth century America.  Three other types of institutions emerged before 1880.  One was state universities, which were founded and governed by individual states but which received only modest state funding.  Like liberal arts colleges, they arose largely for competitive reasons.  They emerged in the new states as the frontier moved westward, not because of huge student demand but because of the need for legitimacy.  You couldn’t be taken seriously as a state unless you had a state university, especially if your neighbor had just established one. 

            The second form of institution was the land-grant college, which arose from federal efforts to promote land sales in the new territories by providing public land as a founding grant for new institutions of higher education.  Turning their backs on the classical curriculum that had long prevailed in colleges, these schools had a mandate to promote practical learning in fields such as agriculture, engineering, military science, and mining. 

            The third form was the normal school, which emerged in the middle of the century as state-founded high-school-level institutions for the preparation of teachers.  It wasn’t until the end of the century that these schools evolved into teachers colleges; and in the twentieth century they continued that evolution, turning first into full-service state colleges and then by midcentury into regional state universities. 

            Unlike liberal arts colleges, all three of these types of institutions were initiated and governed by states, and all received some public funding.  But this funding was not nearly enough to keep them afloat, so they faced challenges similar to those of the liberal arts colleges, since their survival depended heavily on their ability to bring in student tuition and draw donations.  In short, the liberal arts college established the model for survival in a setting with a strong market, weak state, and divided church; and the newer public institutions had to play by the same rules.

            By 1880, the structure of the American system of higher education was well established.  It was a system made up of lean and adaptable institutions, with a strong base in rural communities, and led by entrepreneurial presidents, who kept a sharp eye out for possible threats and opportunities in the highly competitive higher-education market.  These colleges had to attract and keep the loyalty of student consumers, whose tuition was critical for paying the bills and who had plenty of alternatives in towns nearby.  And they also had to maintain a close relationship with local notables, religious peers, and alumni, who provided a crucial base of donations.

            The system was only missing two elements to make it workable in the long term.  It lacked sufficient students, and it lacked academic legitimacy.  On the student side, this was the most overbuilt system of higher education the world has ever seen.  In 1880, 811 colleges were scattered across a thinly populated countryside, which amounted to 16 colleges per million of population (Collins, 1979, Table 5.2).  The average college had only 131 students and 14 faculty and granted 17 degrees per year (Carter et al., 2006, Table Bc523, Table Bc571; U.S. Bureau of the Census, 1975, Series H 751).  As I have shown, these colleges were not established in response to student demand, but nonetheless they depended on students for survival.  Without a sharp growth in student enrollments, the whole system would have collapsed. 

            On the academic side, these were colleges in name only.  They were parochial in both senses of the word, small-town institutions stuck in the boondocks and able to make no claim to advancing the boundaries of knowledge.  They were not established to promote higher learning, and they lacked both the intellectual and economic capital required to carry out such a mission.  Many high schools had stronger claims to academic prowess than these colleges.  European visitors in the nineteenth century had a field day ridiculing the intellectual poverty of these institutions.  The system was on death watch.  If it was going to survive, it needed a transfusion that would provide both student enrollments and academic legitimacy. 

            That transfusion arrived just in time from a new European import, the German research university.  This model offered everything that was lacking in the American system.  It reinvented university professors as the best minds of the generation, whose expertise was certified by the new entry-level degree, the Ph.D., and who were pushing back the frontiers of knowledge through scientific research.  It brought to the college campus graduate students, who were selected for their high academic promise and trained to follow in the footsteps of their faculty mentors. 

            And at the same time that the German model offered academic credibility to the American system, the peculiarly Americanized form of this model made university enrollment attractive for undergraduates, whose focus was less on higher learning than on jobs and parties.  The remodeled American university provided credible academic preparation in the cognitive skills required for professional and managerial work; and it provided training in the social and political skills required for corporate employment, through the process of playing the academic game and taking on roles in intercollegiate athletics and on-campus social clubs.  It also promised a social life in which one could have a good time and meet a suitable spouse. 

            By 1900, with the arrival of the research university as the capstone, nearly all of the core elements of the current American system of higher education were in place.  Subsequent developments focused primarily on extending the system downward, adding layers that would make it more accessible to larger numbers of students – as normal schools evolved into regional state universities and as community colleges emerged as the open-access base of an increasingly stratified system.  Here ends the history portion of this account. Now we move on to the sociological part of the story.

Sociological Traits of the System

            When the research university model arrived to save the day in the 1880s, the American system of higher education was in desperate straits.  But at the same time this system had an enormous reservoir of potential strengths that prepared it for its future climb to world dominance.  Let’s consider some of these strengths.  First it had a huge capacity in place, the largest in the world by far:  campuses, buildings, faculty, administration, curriculum, and a strong base in the community.  All it needed was students and credibility. 

            Second, it consisted of a group of institutions that had figured out how to survive under dire Darwinian circumstances, where supply greatly exceeded demand and where there was no secure stream of funding from church or state.  In order to keep the enterprises afloat, they had learned how to hustle for market position, troll for students, and dun donors.  Imagine how well this played out when students found a reason to line up at their doors and donors suddenly saw themselves investing in a winner with a soaring intellectual and social mission. 

            Third, they had learned to be extraordinarily sensitive to consumer demand, upon which everything depended.  Fourth, as a result they became lean and highly adaptable enterprises, which were not bounded by the politics of state policy or the dogma of the church but could take advantage of any emerging possibility for a new program, a new kind of student or donor, or a new area of research.  Not only were they able to adapt but they were forced to do so quickly, since otherwise the competition would jump on the opportunity first and eat their lunch.

            By the time the research university arrived on the scene, the American system of higher education was already firmly established and governed by its own peculiar laws of motion and its own evolutionary patterns.  The university did not transform the system.  Instead it crowned the system and made it viable for a century of expansion and elevation.  Americans could not simply adopt the German university model, since this model depended heavily on strong state support, which was lacking in the U.S.  And the American system would not sustain a university as elevated as the German university, with its tight focus on graduate education and research at the expense of other functions.  American universities that tried to pursue this approach – such as Clark University and Johns Hopkins – found themselves quickly trailing the pack of institutions that adopted a hybrid model grounded in the preexisting American system.  In the U.S., the research university provided a crucial add-on rather than a transformation.  In this institutionally-complex market-based system, the research university became embedded within a convoluted but highly functional structure of cross-subsidies, interwoven income streams, widely dispersed political constituencies, and a bewildering array of goals and functions. 

            At the core of the system is a delicate balance among three starkly different models of higher education.  These three roughly correspond to Clark Kerr’s famous characterization of the American system as a mix of the British undergraduate college, the American land-grant college, and the German research university (Kerr, 2001, p. 14).  The first is the populist element, the second is the practical element, and the third is the elite element.  Let me say a little about each of these and make the case for how they work to reinforce each other and shore up the overall system.  I argue that these three elements are unevenly distributed across the whole system, with the populist and practical parts strongest in the lower tiers of the system, where access is easy and job utility is central, and the elite part strongest in the upper tier.  But I also argue that all three are present in the research university at the top of the system.  Consider how all these elements come together in a prototypical flagship state university.

            The populist element has its roots in the British residential undergraduate college, which colonists had in mind when they established the first American colleges; but the changes that emerged in the U.S. in the early nineteenth century were critical.  Key was the fact that American colleges during this period were broadly accessible in a way that colleges in the U.K. never were until the advent of the red-brick universities after the Second World War.  American colleges were not located in fashionable areas in major cities but in small towns in the hinterland.  There were far too many of them for them to be elite, and the need for students meant that tuition and academic standards both had to be kept relatively low.  The American college never exuded the odor of class privilege to the same degree as Oxbridge; its clientele was largely middle class.  For the new research university, this legacy meant that the undergraduate program provided critical economic and political support. 

            From the economic perspective, undergrads paid tuition, which – through large classes and thus the need for graduate teaching assistants – supported graduate programs and the larger research enterprise.  Undergrads, who were socialized in the rituals of football and fraternities, were also the ones who identified most closely with the university, which meant that in later years they became the most loyal donors.  As doers rather than thinkers, they were also the wealthiest group of alumni donors.  Politically, the undergraduate program gave the university a broad base of community support.  Since anyone could conceive of attending the state university, the institution was never as remote or alien as the German model.  Its athletic teams and academic accomplishments were a point of pride for state residents, whether or not they or their children ever attended.  They wore the school colors and cheered for it on game days.

            The practical element has its roots in the land-grant college.  The idea here was that the university was not just an enterprise for providing liberal education for the elite but that it could also provide useful occupational skills for ordinary people.  Since the institution needed to attract a large group of students to pay the bills, the American university left no stone unturned when it came to developing programs that students might want.  It promoted itself as a practical and reliable mechanism for getting a good job.  This not only boosted enrollment, but it also sent a message to the citizens of the state that the university was making itself useful to the larger community, producing the teachers, engineers, managers, and dental hygienists that they needed.  

            This practical bent also extended to the university’s research effort, which did not focus solely on ivory-tower pursuits.  Its researchers were working hard to design safer bridges, more productive crops, better vaccines, and more reliable student tests.  For example, when I taught at Michigan State I planted my lawn with Spartan grass seed, which was developed at the university.  These forms of applied research led to patents that brought substantial income back to the institution, but their most important function was to provide a broad base of support for the university among people who had no connection with it as an instructional or intellectual enterprise.  The idea was compelling: This is your university, working for you.

            The elite element has its roots in the German research university.  This is the component of the university formula that gives the institution academic credibility at the highest level.  Without it the university would just be a party school for the intellectually challenged and a trade school for job seekers.  From this angle, the university is the haven for the best thinkers, where professors can pursue intellectual challenges of the first order, develop cutting edge research in a wide array of domains, and train graduate students who will carry on these pursuits in the next generation.  And this academic aura envelops the entire enterprise, giving the lowliest freshman exposure to the most distinguished faculty and allowing the average graduate to sport a diploma burnished by the academic reputations of the best and the brightest.  The problem, of course, is that supporting professorial research and advanced graduate study is enormously expensive; research grants only provide a fraction of the needed funds. 

            So the populist and practical domains of the university are critically important components of the larger university package.  Without the foundation of fraternities and football, grass seed and teacher education, the superstructure of academic accomplishment would collapse of its own weight.  The academic side of the university can’t survive without both the financial subsidies and political support that come from the populist and the practical sides.  And the populist and practical sides rely on the academic legitimacy that comes from the elite side.  It’s the mixture of the three that constitutes the core strength of the American system of higher education.  This is why it is so resilient, so adaptable, so wealthy, and so powerful.  This is why its financial and political base is so broad and strong.  And this is why American institutions of higher education enjoy so much autonomy:  They respond to many sources of power in American society and they rely on many sources of support, which means they are not the captive of any single power source or revenue stream.

The Power of Form

            So my story about the American system of higher education is that it succeeded by developing a structure that allowed it to become both economically rich and politically autonomous.  It could tap multiple sources of revenue and legitimacy, which allowed it to avoid becoming the wholly owned subsidiary of the state, the church, or the market.  And by virtue of its structurally reinforced autonomy, college is good for a great many things.

            At last we come back to our topic.  What is college good for?  For those of us on the faculties of research universities, universities provide several core benefits that we see as especially important.  At the top of the list is that they preserve and promote free speech.  They are zones where faculty and students can feel free to pursue any idea, any line of argument, and any intellectual pursuit that they wish – free of the constraints of political pressure, cultural convention, or material interest.  Closely related to this is the fact that universities become zones where play is not only permissible but even desirable, where it’s ok to pursue an idea just because it’s intriguing, even though it promises no apparent practical benefit.

            This, of course, is a rather idealized version of the university.  In practice, as we know, politics, convention, and economics constantly intrude on the zone of autonomy in an effort to shape the process and limit these freedoms.  This is particularly true in the lower strata of the system.  My argument is not that the ideal is met but that the structure of American higher education – especially in the top tier of the system – creates a space of relative autonomy, where these constraining forces are partially held back, allowing the possibility for free intellectual pursuits that cannot be found anywhere else. 

            Free intellectual play is what we in the faculty tend to care about, but others in American society see other benefits arising from higher education that justify the enormous time and treasure that we devote to supporting the system.  Policymakers and employers put primary emphasis on higher education as an engine of human capital production, which provides the economically relevant skills that drive increases in worker productivity and growth in the GDP.  They also hail it as a place of knowledge production, where people develop valuable technologies, theories, and inventions that can feed directly into the economy.  And companies use it as a place to outsource much of their needs for workforce training and research-and-development. 

            These pragmatic benefits that people see coming from the system of higher education are real.  Universities truly are socially useful in such ways.  But it’s important to keep in mind that these social benefits only can arise if the university remains a preserve for free intellectual play.  Universities are much less useful to society if they restrict themselves to the training of individuals for particular present-day jobs, or to the production of research to solve current problems.  They are most useful if they function as storehouses for knowledges, skills, technologies, and theories – for which there is no current application but which may turn out to be enormously useful in the future.  They are the mechanism by which modern societies build capacity to deal with issues that have not yet emerged but sooner or later are likely to do so.

            But that is a discussion for another speech by another scholar.  The point I want to make today about the American system of higher education is that it is good for a lot of things but it was established in order to accomplish none of these things.  As I have shown, the system that arose in the nineteenth century was not trying to store knowledge, produce capacity, or increase productivity.  And it wasn’t trying to promote free speech or encourage play with ideas.  It wasn’t even trying to preserve institutional autonomy.  These things happened as the system developed, but they were all unintended consequences.  What was driving development of the system was a clash of competing interests, all of which saw the college as a useful medium for meeting particular ends.  Religious denominations saw colleges as a way to spread the faith.  Town fathers saw them as a way to promote local development and increase property values.  The federal government saw them as a way to spur the sale of federal lands.  State governments saw them as a way to establish credibility in competition with other states.  College presidents and faculty saw them as a way to promote their own careers.  And at the base of the whole process of system development were the consumers, the students, without whose enrollment and tuition and donations the system would not have been able to persist.  The consumers saw the college as useful in a number of ways:  as a medium for seeking social opportunity and achieving social mobility; as a medium for preserving social advantage and avoiding downward mobility; as a place to have a good time, enjoy an easy transition to adulthood, pick up some social skills, and meet a spouse; even, sometimes, as a place to learn. 

            The point is that the primary benefits of the system of higher education derive from its form, but this form did not arise in order to produce these benefits.  We need to preserve the form in order to continue enjoying these benefits, but unfortunately the organizational foundations upon which the form is built are, on the face of it, absurd.  And each of these foundational qualities is currently under attack from the perspective of alternative visions that, in contrast, have a certain face validity.  If the attackers accomplish their goals, the system’s form, which has been so enormously productive over the years, will collapse, and with this collapse will come the end of the university as we know it.  I didn’t promise this lecture would end well, did I?

            Let me spell out three challenges that would undercut the core autonomy and synergy that make the system so productive in its current form.  On the surface, each of the proposed changes seems quite sensible and desirable.  Only by examining the implications of actually pursuing these changes can we see how they threaten the foundational qualities that currently undergird the system.  The system’s foundations are so paradoxical, however, that mounting a public defense of them would be difficult indeed.  Yet it is precisely these traits of the system that we need to defend in order to preserve the current highly functional form of the university.  In what follows, I am drawing inspiration from the work of Suzanne Lohmann (2004, 2006), a political scientist at UCLA who has addressed these issues most astutely.

            One challenge comes from prospective reformers of American higher education who want to promote transparency.  Who can be against that?  This idea derives from the accountability movement, which has already swept across K-12 education and is now pounding the shores of higher education.  It simply asks universities to show people what they’re doing.  What is the university doing with its money and its effort?  Who is paying for what?  How do the various pieces of the complex structure of the university fit together?  And are they self-supporting or drawing resources from elsewhere?  What is faculty credit-hour production?  How is tuition related to instructional costs?  And so on.   These demands make a lot of sense. 

            The problem, however, as I have shown today, is that the autonomy of the university depends on its ability to shield its inner workings from public scrutiny.  It relies on opacity.  Autonomy will end if the public can see everything that is going on and what everything costs.  Consider all of the cross subsidies that keep the institution afloat:  undergraduates support graduate education, football supports lacrosse, adjuncts subsidize professors, rich schools subsidize poor schools.  Consider all of the instructional activities that would wilt in the light of day; consider all of the research projects that could be seen as useless or politically unacceptable.  The current structure keeps the inner workings of the system obscure, which protects the university from intrusions on its autonomy.  Remember, this autonomy arose by accident not by design; its persistence depends on keeping the details of university operations out of public view.

            A second and related challenge comes from reformers who seek to promote disaggregation.  The university is an organizational nightmare, they say, with all of those institutes and centers, departments and schools, programs and administrative offices.  There are no clear lines of authority, no mechanisms to promote efficiency and eliminate duplication, no tools to achieve economies of scale.  Transparency is one step in the right direction, they say, but the real reform that is needed is to take apart the complex interdependencies and overlapping responsibilities within the university and then figure out how each of these tasks could be accomplished in the most cost-effective and outcome-effective manner.  Why not have a few star professors tape lectures and then offer Massive Open Online Courses at colleges across the country?  Why not have institutions specialize in what they’re best at – remedial education, undergraduate instruction, vocational education, research production, graduate student training?  Putting them together into a single institution is expensive and grossly inefficient. 

            But recall that it is precisely the aggregation of purposes and functions – the combination of the populist, the practical, and the elite – that has made the university so strong, so successful, and, yes, so useful.  This combination creates a strong base both financially and politically and allows for forms of synergy that cannot happen with a set of isolated educational functions.  The fact is that this institution can’t be disaggregated without losing what makes it the kind of university that students, policymakers, employers, and the general public find so compelling.  A key organizational element that makes the university so effective is its chaotic complexity.

            A third challenge comes not from reformers intruding on the university from the outside but from faculty members meddling with it from the inside.  The threat here arises from the dangerous practice of acting on academic principle.  Fortunately, this is not very common in academe.  But the danger is lurking in the background of every decision about faculty hires.  Here’s how it works.  You review a finalist for a faculty position in a field not closely connected to your own, and you find to your horror that the candidate’s intellectual domain seems absurd on the face of it (how can anyone take this type of work seriously?) and the candidate’s own scholarship doesn’t seem credible.  So you decide to speak against hiring the candidate and organize colleagues to support your position.  But then you happen to read a paper by Suzanne Lohmann, who points out something very fundamental about how universities work. 

            Universities are structured in a manner that protects the faculty from the outside world (that is, protecting them from the forces of transparency and disaggregation), but they are also organized in a manner that protects the faculty from each other.  The latter is the reason we have such an enormous array of departments and schools in universities.  If every historian had to meet the approval of geologists and every psychologist had to meet the approval of law faculty, no one would ever be hired. 

           The simple fact is that part of what keeps universities healthy and autonomous is hypocrisy.  Because of the Balkanized structure of university organization, we all have our own protected spaces to operate in and we all pass judgment only on our own peers within that space.  To do otherwise would be disastrous.  We don’t have to respect each other’s work across campus, we merely need to tolerate it – grumbling about each other in private and making nice in public.  You pick your faculty, we’ll pick ours.  Lohmann (2006) calls this core procedure of the academy “log-rolling.”  If we all operated on principle, if we all only approved scholars we respected, then the university would be a much diminished place.  Put another way, I wouldn’t want to belong to a university that consisted only of people I found worthy.  Gone would be the diversity of views, paradigms, methodologies, theories, and world views that makes the university such a rich place.  The result is incredibly messy, and it permits a lot of quirky – even ridiculous – research agendas, courses, and instructional programs.  But in aggregate, this libertarian chaos includes an extraordinary range of ideas, capacities, theories, and social possibilities.  It’s exactly the kind of mess we need to treasure and preserve and defend against all opponents.

            So here is the thought I’m leaving you with.  The American system of higher education is enormously productive and useful, and it’s a great resource for students, faculty, policymakers, employers, and society.  What makes it work is not its substance but its form.  Crucial to its success is its devotion to three formal qualities:  opacity, chaotic complexity, and hypocrisy.  Embrace these forms and they will keep us free.

Posted in History of education, Public Good, Schooling, Welfare

Public Schooling as Social Welfare

This post is a follow-up to a piece I posted three weeks ago, which was Michael Katz’s 2010 essay, Public Education as Welfare.  Below is my own take on this subject, which I wrote for a book that will be published in recognition of the hundredth anniversary of the Horace Mann League.  The tentative title of the book is Public Education: The Cornerstone of American Democracy and the editors are David Berliner and Carl Hermanns.  All of the contributions focus on the role that public schools play in American life.  Here’s a link to a pdf of my piece.

Public Schooling as Social Welfare

David F. Labaree

            In the mid-nineteenth century, Horace Mann made a forceful case for a distinctly political vision of public schooling, as a mechanism for creating citizens for the American republic. In the twentieth century, policymakers put forth an alternative economic vision for this institution, as a mechanism for turning out productive workers to promote growth of the American economy. In this essay, I explore a third view of public schooling, which is less readily recognizable than the other two but no less important.  This is a social vision, in which public schooling serves as a mechanism for promoting social welfare, by working to ameliorate the inequalities of American society.  

All three of these visions construe public schooling as a public good.  As a public good, its benefits flow to the entire community, including those who never attended school, by enriching the broad spectrum of political, economic, and social life.  But public schooling is also a private good.  As such, its benefits accrue only to its graduates, who use their diplomas to gain selective access to jobs at the expense of those who lack these credentials. 

Consider the relative costs and benefits of these two types of goods.  Investing in public goods is highly inclusive, in that every dollar invested goes to support the common weal.  But at the same time this investment is also highly contingent, since individuals will gain the benefits even if they don’t contribute, getting a free ride on the contributions of others.  The usual way around the free rider problem is to make such investment mandatory for everyone through the mechanism of taxation.  By contrast, investment in private goods is self-sustaining, with no state action needed.  Individuals have a strong incentive to invest because only they gain the benefit.  In addition, as a private good its effects are highly exclusive, benefiting some people at the expense of others and thus tending to increase social inequality. 

Like the political and economic visions of schooling, the welfare vision carries the traits of its condition as a public good.  Its scope is inclusive, its impact is egalitarian, and its sustainability depends heavily on state mandate.  But it lacks a key advantage shared by the other two, whose benefits clearly flow to the population as a whole.  Everyone benefits by being part of a polity in which citizens are capable, law abiding, and informed.  Everyone benefits by being part of an economy in which workers contribute productively to the general prosperity. 

In contrast, however, it’s less obvious that everyone benefits from transferring public resources to disadvantaged citizens in order to improve their quality of life.  The word welfare carries a foul odor in American politics, redolent of laziness, bad behavior, and criminality.  It’s so bad that in 1980 the federal government changed the name of the Department of Health, Education, and Welfare to Health and Human Services just to get rid of the stigmatized term.

So one reason that the welfare function doesn’t jump to mind when you think of schools is that we really don’t want to associate the two.  Don’t besmirch schooling by calling it welfare.  Michael Katz caught this feeling in the opening sentences of his 2010 essay, “Public Education as Welfare,” which serves as a reference point for my own essay:  “Welfare is the most despised public institution in America. Public education is the most iconic. To associate them with each other will strike most Americans as bizarre, even offensive.”  But let’s give it a try anyway.

My own essay arises from the time when I’m writing it – the summer of 2020, during the early phases of the Covid-19 pandemic.  Like everyone else in the US, I watched in amazement this spring when schools suddenly shut down across the country and students started a new regime of online learning from home.  It started me thinking about what schools mean to us, what they do for us. 

Often it’s only when an institution goes missing that we come to recognize its value.  After the Covid shutdown, parents, children, officials, and citizens discovered just what they lost when the kids came home to stay.  You could hear voices around the country and around the globe pleading, “When are schools going to open again?”

I didn’t hear people talking much about the other two public goods views of schooling.  There wasn’t a groundswell of opinion complaining about the absence of citizenship formation or the falloff of human capital production.  Instead, there was a growing awareness of the various social welfare functions of schooling that were now suddenly gone.  Here are a few, in no particular order.

Schools are the main source of child care for working parents.  When schools close, someone needs to stay home to take care of the younger children.  For parents with the kind of white collar jobs that allow them to work from home, this causes a major inconvenience as they try to juggle work and child care and online schooling.  But for parents who can’t phone in their work, having to stay home with the kids is a huge financial sacrifice, and it’s even bigger for single parents in this category.

Schools are a key place for children to get healthy meals.  In the U.S., about 30 million students receive free or discounted lunch (and often breakfast) at school every day.  It’s so common that researchers use the proportion of “students on free or reduced lunch” as a measure of the poverty rate in individual schools.  When schools close, these children go hungry.  In response to this problem, a number of closed school systems have continued to prepare these meals for parents to pick up and take home with them.

Schools are crucial for the health of children.  In the absence of universal health care in the U.S., schools have served as a frail substitute.  They require all students to have vaccinations.  They provide health education.  And they have school nurses who can check for student ailments and make referrals.

Schools are especially important for dealing with the mental health of young people.  Teachers and school psychologists can identify mental illness and serve as prompts for getting students treatment.  Special education programs identify developmental disabilities in students and devise individualized plans for treating them.

Schools serve as oases for children who are abused at home.  Educators are required by law to look out for signs of mental or physical abuse and to report these cases to authorities.  When schools close, these children are trapped in abusive settings at home, which gives the lie to the idea of sheltering in place.  For many students, the true shelter is the school itself.  In the absence of teacher referrals, agencies reported a sharp drop-off in the reports of child abuse.

Schools are domains of relative safety for students who live in dangerous neighborhoods.  For many kids, who live in settings with gangs and drugs and crime, getting to and from school is the most treacherous part of the day.  Once inside the walls of the school, they are relatively free of physical threats.  Closing school doors to students puts them at risk.

Schools are environments that are often healthier than their own homes.  Students in wealthy neighborhoods may look on schools in poor neighborhoods as relatively shabby and depressing, but for many children the buildings have a degree of heat, light, cleanliness, and safety that they can’t find at home.  These schools may not have swimming pools and tennis courts, but they also don’t have rats and refuse.

Schools may be the only institutional setting for many kids in which the professional norm is to serve the best interests of the child.  We know that students can be harmed by schools.  All it takes is a bully or a disparaging judgment.  The core of the educator’s job is to foster growth, spur interest, increase knowledge, enhance skill, and promote development.  Being cut off from such an environment for a long period of time is a major loss for any student, rich or poor.

Schools are one of the few places in American life where young people undergo a shared experience.  This is especially true at the elementary level, where most children in a neighborhood attend the same school and undergo a relatively homogeneous curriculum.  It’s less true in high school, where the tracked curriculum provides more divergent experiences.  A key component of the shared experience is that it places you face-to-face with students who may be different from you.  As we have found, when you turn schooling into online learning, you tend to exacerbate social differences, because students are isolated in disparate family contexts where there is a sharp divide in internet access. 

Schools are where children socialize with each other.  A key reason kids want to go to school is because that’s where their friends are.  It’s where they make friends they otherwise would never have met, learn to maintain these friendships, and learn how to manage conflicts.  Humans are thoroughly social animals, who need interaction with others in order to grow and thrive.  So being cooped up at home leaves everyone, but especially children, without a central component of human existence.

Schools are the primary public institution for overseeing the development of young children into healthy and capable adults.  Families are the core private institution engaged in this process, but schools serve as the critical intermediary between family and the larger society.  They’re the way our children learn how to live and engage with other people’s children, and they’re a key way that society seeks to ameliorate social differences that might impede children’s development, serving as what Mann called “the great equalizer of the conditions of men – the balance wheel of the social machinery.”

These are some aspects of schooling that we take for granted but don’t think about very much.  For policymakers, these may be considered side effects of the school’s academic mission, but for many (maybe most) families they are a main effect.  And the various social support roles that schools play are particularly critical in a country like the United States, where the absence of a robust social welfare system means that schools stand as the primary alternative.  The school’s absence made the heart grow fonder.  We all became aware of just how much schools do for us.

Systems of universal public schooling did not arise in order to promote social welfare.  During the last 200 years, in countries around the world, the impetus came from the kind of political rationale that Horace Mann so eloquently put forward.  Public schools emerged as part of the process of creating nation states.  Their function was to turn subjects of the crown into citizens of the nation, or, as Eugen Weber put it in the title of his wonderful book, to turn Peasants into Frenchmen.  Schools took localized populations with regional dialects and traditional authority relations and helped affiliate these populations with an imagined community called France or the United States.  They created a common language (in the case of France, it was Parisian French), a shared sense of national membership, and a shared educational experience. 

This is the origin story of public schooling.  But once schools became institutionalized and the state’s existence grew relatively secure, they began to accumulate other functions, both private (gaining an edge in the competition for social position) and public (promoting economic growth and supporting social welfare).  In different countries these functions took different forms, and the load the state placed on schooling varied considerably.  The American case, as is so often true, was extreme.

The U.S. bet the farm on the public school.  It was relatively early in establishing a system of publicly funded and governed schools across the country in the second quarter of the nineteenth century.  But it was way ahead of European countries in its rapid upward expansion of the system.  Universal enrollment moved quickly from primary school to grammar school to high school.  By 1900, the average American teenager had completed eight years of schooling.  This led to a massive surge in high school enrollments, which doubled every decade between 1890 and 1940.  By 1951, 75 percent of 16-year-olds were enrolled in high school compared to only 14 percent in the United Kingdom.   In the three decades after the Second World War, the surge spilled over into colleges, with the rate of enrollment between 1950 and 1980 rising from 9 to 40 percent of the eligible population.

The US system had an indirect connection to welfare even before it started acting as a kind of social service agency.  The short version of the story is this.  In the second part of the nineteenth century, European countries like Disraeli’s United Kingdom and Bismarck’s Germany set up the framework for a welfare state, with pensions and other elements of a safety net for the working class.  The U.S. chose not to take this route, which it largely deferred until the 1930s.  Instead it put its money on schooling.  The vision was to provide individuals with educational opportunities to get ahead on their own rather than to give them direct aid to improve their current quality of life.  The idea was to focus on developing a promising future rather than on meeting current needs.  People were supposed to educate their way out of poverty, climbing up the ladder with the help of state schooling.  The fear was that providing direct relief for food, clothing, and shelter – the dreaded dole – would only stifle their incentive to get ahead.  Better to stimulate the pursuit of future betterment than to run the risk that people might get used to subsisting comfortably in the present. 

By nature, schooling is a forward-looking enterprise.  Its focus is on preparing students for their future roles as citizens, workers, and members of society rather than on helping them deal with their current living conditions.  By setting up an educational state rather than a welfare state, the U.S. in effect chose to write off the parents, seen as a lost cause, and concentrate instead on providing opportunities to the children, seen as still salvageable. 

In the twentieth century, spurred by the New Deal’s response to the Great Depression, the U.S. developed the rudiments of a welfare state, with pensions and then health care for the elderly, temporary cash support and health care for the poor, and unemployment insurance for the worker.  At the same time, schools began to deal with the problems arising from poverty that students brought with them to the classroom.  This was propelled by a growing understanding that hungry, sick, and abused children are not going to be able to take advantage of educational opportunities in order to attain a better life in the future.  Schooling alone couldn’t provide the chance for schooling to succeed.  Thus the introduction of free meals, the school nurse, de facto day care, and other social-work activities in the school. 

The tale of the rise of the social welfare function of the American public school, therefore, is anything but a success story.  Rather, it’s a story of one failure on top of another.  First is the failure to deal directly with social inequality in American life, when instead we chose to defer the intervention to the future by focusing on educating children while ignoring their parents.  Second, when poverty kept interfering with the schooling process, we introduced rudimentary welfare programs into the school in order to give students a better chance, while still leaving poor parents to their own devices. 

As with the American welfare system in general, school welfare is not much but it’s better than nothing.  Carrying on the pattern set in the nineteenth century, we are still shirking responsibility for dealing directly with poverty through the political system by opposing universal health care and a strong safety net.  Instead, we continue to put our money on schooling as the answer when the real solution lies elsewhere.  Until we decide to implement that solution, however, schooling is all we’ve got. 

In the meantime, schools serve as the wobbly but indispensable balance wheel of American social life.  Too bad it took a global pandemic to get us to realize what we lose when schools close down.

Posted in Educational goals, History of education

Are Students Consumers?

This post is a piece I published in Education Week way back in 1997.  It’s a much shorter and more accessible version of the most cited paper I ever published, “Public Goods, Private Goods: The American Struggle over Educational Goals.”  Drawing on the latter, it lays out three competing educational goals that have shaped the history of American schooling: democratic equality, social efficiency, and social mobility. 

In reading it over, I find it holds up rather well, except for a tendency to demonize social mobility.  Since then I’ve come to think that, while the latter does a lot of harm, it’s also an essential component of schooling.  We can’t help but be concerned about the selective benefit that schooling provides us and our children even as we at the same time are concerned about supporting the broader benefits that schooling provides the public as a whole.

See what you think.  Here’s a link to the original and also to a PDF in case you can’t get past the paywall.  


Are Students “Consumers”?

David F. Labaree

Observers of American education have frequently noted that the general direction of educational reform over the years has not been forward but back and forth. Reform, it seems, is less an engine of progress than a pendulum, swinging monotonously between familiar policy alternatives. Progress is hard to come by.

However, a closer reading of the history of educational change in this country reveals a pattern that is both more complex and in a way more troubling than this. Yes, the back-and-forth movement is real, but it turns out that this pattern is for the most part good news. It simply represents a periodic shift in emphasis between two goals for education — democratic equality and social efficiency — that represent competing but equally indispensable visions of education.

The bad news is that in the 20th century, and especially in the past several decades, the pendulum swings increasingly have given way to a steady movement in the direction of a third goal, social mobility. This shift from fluctuation to forward motion may look like progress, but it’s not. The problem is that it represents a fundamental change in the way we think about education, by threatening to transform this most public of institutions from a public good into a private good. The consequences for both school and society, I suggest, are potentially devastating.

Let me explain why. First we’ll consider the role that these three goals have played in American education, and then we can explore the implications of the movement from equality and efficiency to mobility.

The first goal is democratic equality, which is the oldest of the three. From this point of view, the purpose of schooling is to produce competent citizens. This goal provided the primary impetus for the common school movement, which established the foundation for universal public education in this country during the middle of the 19th century. The idea was and is that all citizens need to be able to think, understand the world around them, behave sociably, and act according to shared political values — and that public schools are the best places to accomplish these ends. The corollary of this goal is that all these capabilities need to be equally distributed, and that public schools can serve as what Horace Mann called the great “balance wheel,” by providing a common educational competence that helps reduce differences.

Some of the most enduring and familiar characteristics of our current system of education were formed historically in response to this goal. There are the neighborhood elementary school and the comprehensive high school, which draw together students from the whole community under one roof. There is the distinctively American emphasis on general education at all levels of the educational system. There is the long-standing practice of socially promoting students from grade to grade. And there is the strong emphasis on inclusion, which over the years has led to such innovations as racial integration and the mainstreaming of special education students.

The second goal is social efficiency, which first became prominent in the Progressive era at the turn of the century. From this perspective, the purpose of education is not to produce citizens but to train productive workers. The idea is that our society’s health depends on a growing economy, and a growing economy needs workers with skills that will allow them to carry out their occupational roles effectively. Schools, therefore, should place less emphasis on general education and more on the skills needed for particular jobs. And because skill requirements differ greatly from job to job, schools need to tailor curricula to the job and then sort students into the different curricula.

Consider some of the enduring effects that this goal has had on education over the years. There is the presence of explicitly vocational programs of study within the high school and college curriculum. There is the persistent practice of tracking and ability grouping. And there is the prominence of social efficiency arguments in the public rhetoric about education, echoing through every millage election and every race for public office in the past half-century. We are all familiar with the argument that pops up on these occasions — that education is the keystone of the community’s economic future, that spending money on education is really an investment in human capital that will pay big dividends.

Notice that the first two goals are in some ways quite different in the effects they have had on schools. One emphasizes a political role for schools while the other stresses an economic role. One pushes for general education, the other for specialized education. One homogenizes, the other differentiates.

But from another angle, the two take a similar approach, because they both treat education as a public good. A public good is one that benefits all members of a community, which means that you cannot avoid being affected by it. For example, police protection and road maintenance have an impact directly or indirectly on the life of everyone. Likewise, everyone stands to gain from a public school system that produces competent citizens and productive workers, even those members of the community who don’t have children in public schools.

This leads us to something that is quite distinctive about the third educational goal, the one I call social mobility. From the perspective of this goal, education is not a public good but a private good. If the first goal for education takes the viewpoint of the citizen and the second takes that of the taxpayer, the third takes the viewpoint of the individual educational consumer.

The purpose of education from this angle is not what it can do for democracy or the economy but what it can do for me. Historically, education has paid off handsomely for individuals who stayed in school and came away with diplomas. Educational credentials have made it possible for people to distinguish themselves from their competitors, giving them a big advantage in the race for good jobs and a comfortable life. As a result, education has served as a springboard to upward mobility for the working class and a buttress against downward mobility for the middle class.

Note that if education is going to serve the social-mobility goal effectively, it has to provide some people with benefits that others don’t get. Education in this sense is a private good that only benefits the owner, an investment in my future, not yours, in my children, not other people’s children. For such an educational system to work effectively, it needs to focus a lot of attention on grading, sorting, and selecting students. It needs to provide a variety of ways for individuals to distinguish themselves from others — such as by placing themselves in a more prestigious college, a higher curriculum track, the top reading group, or the gifted program. In this sense the social-mobility goal reinforces the same sorting and selecting tendency in education that is promoted by the social-efficiency goal, but without the same concern for providing socially useful skills.

Now that I’ve spelled out some of the main characteristics of these three goals for education, let me show how they can help us understand the major swings of the pendulum in educational reform over the last 200 years.

During the common school era in the mid-19th century, the dominant goal for American education was democratic equality. The connection between school and work at this point was weak. People earned job skills on the job rather than in school, and educational credentials offered social distinction but not necessarily preference in hiring.

By the end of the 19th century, however, both social efficiency and social mobility emerged as major factors in shaping education, while the influence of democratic equality declined. High school enrollments began to take off in the 1890s, which posed two big problems for education — a social-efficiency problem (how to provide education for the new wave of students), and a social-mobility problem (how to protect the value of high school credentials for middle-class consumers). The result was a series of reforms that defined the Progressive era in American education during the first half of the 20th century. These included such innovations as tracking, ability testing, ability grouping, vocationalism, special education, social promotion, and life adjustment.

Then in the 1960s and 1970s we saw a swing back from social efficiency to democratic equality (reinforced by the social-mobility goal). The national movement for racial equality brought pressure to integrate schools, and these arguments for political equality and individual opportunity led to a variety of related reforms aimed at reducing educational discrimination based on class, gender, and handicapping condition.

But in the 1980s and 1990s, the momentum shifted back from democratic equality to social efficiency — again reinforced by social mobility. The emerging movement for educational standards responded both to concerns about declining economic competitiveness (seen as a deficiency of human capital) and to concerns about a glut of high school and college credentials (seen as a threat to social mobility).

However, another way to think about these historical trends in educational reform is to turn attention away from the pendulum swings between the first two goals and to focus instead on the steady growth in the influence of the third goal throughout the last 100 years. Since its emergence as a factor in the late 19th century, social mobility has gradually grown to become the dominant goal in American education. Increasingly, neither of the other two goals can make strong headway except in alliance with the third. Only social mobility, it seems, can afford to go it alone any longer. A prime example is the recent push for educational choice, charters, and vouchers. This is the strongest educational reform movement of the 1990s, and it is grounded entirely within the consumer-is-king perspective of the social-mobility goal.

So, you may ask, what are the implications of all this? I want to mention two problems that arise from the history of conflicting goals in American education — one deriving from the conflict itself and the other from the emerging dominance of social mobility. The second problem is more serious than the first.

On the issue of conflict: Contradictory goals have shaped the basic structure of American schools, and the result is a system that is unable to accomplish any one of these goals very effectively — which has been a common complaint about schools. Also, much of what passes for educational reform may be little more than ritual swings back and forth between alternative goals — another common complaint. But I don’t think this problem is really resolvable in any simple way. Americans seem to want and need an education system that serves political equality and economic productivity and personal opportunity, so we might as well learn how to live with it.

The bigger problem is not conflict over goals but the possible victory of social mobility over the other two. The long-term trend is in the direction of this goal, and the educational reform initiatives of the last decade suggest that this trend is accelerating. At the center of the current talk about education is a series of reforms designed to empower the educational consumer, and if they win out, this would resolve the tension between public and private conceptions of education decisively in favor of the private view. Such a resolution to the conflict over goals would hurt education in at least two ways.

First, in an educational system where the consumer is king, who will look after the public’s interest in education? As supporters of the two public goals have long pointed out, we all have a stake in the outcomes of public education, since this is the institution that shapes our fellow citizens and fellow workers. In this sense, the true consumers of education are all of the members of the community — and not just the parents of school children. But these parents are the only ones whose interests matter for the school choice movement, and their consumer preferences will dictate the shape of the system.

A second problem is this: In an educational system where the opportunity for individual advancement is the primary focus, it becomes more important to get ahead than to get an education. When the whole point of education is not to ensure that I learn valuable skills but instead to give me a competitive social advantage, then it is only natural for me to focus my ingenuity as a student toward acquiring the most desirable grades, credits, and degrees rather than toward learning the curriculum.

We have already seen this taking place in American education in the past few decades. Increasingly, students have been acting more like smart consumers than eager learners. Their most pointed question to the teacher is “Will this be on the test?” They see no point in studying anything that doesn’t really count. If the student is the consumer and the goal is to get ahead rather than to get an education, then it is only rational for students to look for the best deal. And that means getting the highest grades and the most valuable credentials for the lowest investment of effort. As cagey consumers, children in school have come to be like the rest of us when we’re in the shopping mall: They hate to pay full price when they can get the same product on sale.

That’s the bad news from this little excursion into educational history, but don’t forget the good news as well. For 200 years, Americans have seen education as a central pillar of public life. The contradictory structure of American education today has embedded within it an array of social expectations and instructional practices that clearly express these public purposes. There is reason to think that Americans will not be willing to let educational consumerism drive this public-ness out of the public schools.

Posted in Higher Education, History of education, Inequality, Meritocracy, Public Good, Uncategorized

How NOT to Defend the Private Research University

This post is a piece I published today in the Chronicle Review.  It’s about an issue that has been gnawing at me for years.  How can you justify the existence of institutions of the sort I taught at for the last two decades — rich private research universities?  These institutions obviously benefit their students and faculty, but what about the public as a whole?  Is there a public good they serve, and if so, what is it?

Here’s the answer I came up with.  These are elite institutions to the core.  Exclusivity is baked in.  By admitting only a small number of elite students, they serve to promote social inequality by providing grads with an exclusive private good, a credential with high exchange value. But, in part because of this, they also produce valuable public goods — through the high-quality research and the advanced graduate training that only they can provide.

Open access institutions can promote the social mobility that private research universities don’t, but they can’t provide the same degree of research and advanced training.  The paradox is this:  It’s in the public’s interest to preserve the elitism of these institutions.  See what you think.

Hoover Tower

How Not to Defend the Private Research University

David F. Labaree

In this populist era, private research universities are easy targets that reek of privilege and entitlement. It was no surprise, then, when the White House pressured Harvard to decline $8.6 million in Covid-19-relief funds, while Stanford, Yale, and Princeton all judiciously decided not to seek such aid. With tens of billions of endowment dollars each, they hardly seemed to deserve the money.

And yet these institutions have long received outsized public subsidies. The economist Richard Vedder estimated that in 2010, Princeton got the equivalent of $50,000 per student in federal and state benefits, while its similar-size public neighbor, the College of New Jersey, got just $2,000 per student. Federal subsidies to private colleges include research grants, which go disproportionately to elite institutions, as well as student loan and scholarship funds. As recipients of such largess, how can presidents of private research universities justify their institutions to the public?

Here’s an example of how not to do so. Not long after he assumed the presidency of Stanford in 2016, Marc Tessier-Lavigne made the rounds of faculty meetings on campus in order to introduce himself and talk about future plans for the university. When he came to a Graduate School of Education meeting that I attended, he told us his top priority was to increase access. Asked how he might accomplish this, he said that one proposal he was considering was to increase the size of the entering undergraduate class by 100 to 200 students.

The problem is this: Stanford admits about 4.3 percent of the candidates who apply to join its class of 1,700. Admitting a couple hundred additional students might raise the admit rate to 5 percent. Now that’s access. The issue is that, for a private research university like Stanford, the essence of its institutional brand is its elitism. The inaccessibility is baked in.

Raj Chetty’s social mobility data for Stanford show that 66 percent of its undergrads come from the top 20 percent by income, 52 percent from the top 10 percent, 17 percent from the top 1 percent, and just 4 percent from the bottom 20 percent. Only 12 percent of Stanford grads move up by two quintiles or more — it’s hard for a university to promote social mobility when the large majority of its students starts at the top.

Compare that with the data for California State University at Los Angeles, where 12 percent of students are from the top quintile and 22 percent from the bottom quintile. Forty-seven percent of its graduates rise two or more income quintiles. Ten percent make it all the way from the bottom to the top quintile.

My point is that private research universities are elite institutions, and they shouldn’t pretend otherwise. Instead of preaching access and making a mountain out of the molehill of benefits they provide for the few poor students they enroll, they need to demonstrate how they benefit the public in other ways. This is a hard sell in our populist-minded democracy, and it requires acknowledging that the very exclusivity of these institutions serves the public good.

For starters, in making this case, we should embrace the emphasis on research production and graduate education and accept that providing instruction for undergraduates is only a small part of the overall mission. Typically these institutions have a much higher proportion of graduate students than large public universities oriented toward teaching (graduate students are 57 percent of the total at Stanford and just 8.5 percent in the California State University system).

Undergraduates may be able to get a high-quality education at private research universities, but there are plenty of other places where they could get the same or better, especially at liberal-arts colleges. Undergraduate education is not what makes these institutions distinctive. What does make them stand out are their professional schools and doctoral programs.

Private research universities are souped-up versions of their public counterparts, and in combination they exert an enormous impact on American life.

As of 2017, the Association of American Universities, a club consisting of the top 65 research universities, represented just 2 percent of all four-year colleges and 12 percent of all undergrads. And yet the group accounted for over 20 percent of all U.S. graduate students; 43 percent of all research doctorates; 68 percent of all postdocs; and 38 percent of all Nobel Prize winners. In addition, its graduates occupy the centers of power, including, by 2019, 64 of the Fortune 100 CEOs; 24 governors; and 268 members of Congress.

From 2014 to 2018, AAU institutions collectively produced 2.4 million publications, and their collective scholarship received 21.4 million citations. That research has an economic impact — these same institutions have established 22 research parks and, in 2018 alone, they produced over 4,800 patents, over 5,000 technology license agreements, and over 600 start-up companies.

Put all this together and it’s clear that research universities provide society with a stunning array of benefits. Some of these benefits accrue to individual entrepreneurs and investors, but the benefits for society as a whole are extraordinary. These universities drive widespread employment, technological advances that benefit consumers worldwide, and the improvement of public health (think of all the university researchers and medical schools advancing Covid-19-research efforts right now).

Besides their higher proportion of graduate students and lower student-faculty ratio, private research universities have other major advantages over publics. One is greater institutional autonomy. A private research university is governed by a board of laypersons who own the university, control its finances, and appoint its officers. Government can dictate how it uses the public subsidies it receives (tax subsidies excepted), but otherwise the institution is free to operate as an independent actor in the academic market. This allows these colleges to pivot quickly to take advantage of opportunities for new programs of study, research areas, and sources of funding, largely independent of political influence, though they do face a fierce academic market full of other private colleges.

A 2010 study of universities in Europe and the U.S. by Caroline Hoxby and associates shows that this mix of institutional autonomy and competition is strongly associated with higher rankings in the world hierarchy of higher education. They find that every 1-percent increase in the share of the university budget that comes from government appropriations corresponds with a decrease in international ranking of 3.2 ranks. At the same time, each 1-percent increase in the university budget from competitive grants corresponds with an increase of 6.5 ranks. They also find that universities high in autonomy and competition produce more patents.

Another advantage the private research universities enjoy over their public counterparts, of course, is wealth. Stanford’s endowment is around $28 billion, and Berkeley’s is just under $5 billion, but because Stanford is so much smaller (16,000 versus 42,000 total students) this multiplies the advantage. Stanford’s endowment per student dwarfs Berkeley’s. The result is that private universities have more research resources: better labs, libraries, and physical plant; higher faculty pay (e.g., $254,000 for full professors at Stanford, compared to $200,000 at Berkeley); more funding for grad students; and more staff support.

A central asset of private research universities is their small group of academically and socially elite undergraduate students. The academic skill of these students is an important draw for faculty, but their current and future wealth is particularly important for the institution. From a democratic perspective, this wealth is a negative. The student body’s heavy skew toward the top of the income scale is a sign of how these universities are not only failing to provide much social mobility but are in fact actively engaged in preserving social advantage. We need to be honest about this issue.

But there is a major upside. Undergraduates pay their own way (as do students in professional schools), but the advanced graduate students don’t — they get free tuition plus a stipend to pay living expenses, which is subsidized, both directly and indirectly, by undergrads. The direct subsidy comes from the high sticker price undergrads pay for tuition. Part of this goes to help out upper-middle-class families who still can’t afford the tuition, but the rest goes to subsidize grad students.

The key financial benefits from undergrads come after they graduate, when the donations start rolling in. The university generously admits these students (at the expense of many of their peers), provides them with an education and a credential that jump-starts their careers and papers over their privilege, and then harvests their gratitude over a lifetime. Look around any college campus — particularly at a private research university — and you will find that almost every building, bench, and professor bears the name of a grateful donor. And nearly all of the money comes from former undergrads or professional school students, since it is they, not the doctoral students, who go on to earn the big bucks.

There is, of course, a paradox. Perhaps the gross preservation of privilege these schools traffic in serves a broader public purpose. Perhaps providing a valuable private good for the few enables the institution to provide an even more valuable public good for the many. And yet students who are denied admission to elite institutions are not being denied a college education and a chance to get ahead; they’re just being redirected. Instead of going to a private research university like Stanford or a public research university like Berkeley, many will attend a comprehensive university like San José State. Only the narrow metric of value employed at the pinnacle of the American academic meritocracy could construe this as a tragedy. San José State is a great institution, which accepts the majority of the students who apply and which sends a huge number of graduates to work in the nearby tech sector.

The economist Miguel Urquiola elaborates on this paradox in his book, Markets, Minds, and Money: Why America Leads the World in University Research (Harvard University Press, 2020), which describes how American universities came to dominate the academic world in the 20th century. The 2019 Shanghai Academic Ranking of World Universities shows that eight of the top 10 universities in the world are American, and seven of these are private.

Urquiola argues that the roots of American academe’s success can be found in its competitive marketplace. In most countries, universities are subsidiaries of the state, which controls their funding, defines their scope, and sets their policy. By contrast, American higher education has three defining characteristics: self-rule (institutions have autonomy to govern themselves); free entry (institutions can be started up by federal, state, or local governments or by individuals who acquire a corporate charter); and free scope (institutions can develop programs of research and study on their own initiative without undue governmental constraint).

The result is a radically unequal system of higher education, with extraordinary resources and capabilities concentrated in a few research universities at the top. Caroline Hoxby estimates that the most selective American research universities spend an average of $150,000 per student, 15 times as much as some poorer institutions.

As Urquiola explains, the competitive market structure puts a priority on identifying top research talent, concentrating this talent and the resources needed to support it in a small number of institutions, and motivating these researchers to ramp up their productivity. This concentration then makes it easy for major research-funding agencies, such as the National Institutes of Health, to identify the institutions that are best able to manage the research projects they want to support. And the nature of the research enterprise is such that, when markets concentrate minds and money, the social payoff is much greater than if they were dispersed more evenly.

Radical inequality in the higher-education system therefore produces outsized benefits for the public good. This, paradoxical as it may seem, is how we can truly justify the public investment in private research universities.

David Labaree is a professor emeritus at the Stanford Graduate School of Education.