Posted in Capitalism, Higher Education, Meritocracy, Politics

Sandel: The Tyranny of Merit

This post is a reflection on Michael Sandel’s new book, The Tyranny of Merit: What’s Become of the Common Good?  He’s a philosopher at Harvard, and this is his analysis of the dangers posed by the American meritocracy.  The issue is one I’ve been exploring here for the last two years in a variety of posts (here, here, here, here, here, here, and here).

I find Sandel’s analysis compelling, both in the ways it resonates with other takes on the subject and also in his distinctive contributions to the discussion.  My only complaint is that the whole discussion could have been carried out more effectively in a single magazine article.  The book tends to be repetitive, and it also gets into the weeds on some philosophical issues that blur its focus and undercut its impact.  Here I present what I think are the key points.  I hope you find it useful.


Both the good news and the bad news about meritocracy lie in its promise of opportunity for all based on individual merit rather than the luck of birth.  It’s hard to hate a principle that frees us from the tyranny of inheritance.

The meritocratic ideal places great weight on the notion of personal responsibility. Holding people responsible for what they do is a good thing, up to a point. It respects their capacity to think and act for themselves, as moral agents and as citizens. But it is one thing to hold people responsible for acting morally; it is something else to assume that we are, each of us, wholly responsible for our lot in life.

The problem is that simply calling the new model of status attainment “achievement” rather than “ascription” doesn’t mean that your ability to get ahead is truly free of circumstances beyond your control.  

But the rhetoric of rising now rings hollow. In today’s economy, it is not easy to rise. Americans born to poor parents tend to stay poor as adults. Of those born in the bottom fifth of the income scale, only about one in twenty will make it to the top fifth; most will not even rise to the middle class. It is easier to rise from poverty in Canada or Germany, Denmark, and other European countries than it is in the United States.

The meritocratic faith argues that the social structure of inequality provides a powerful incentive for individuals to work hard to get ahead, escaping a bad situation and moving on to something better.  The more inequality a society has, as in the US, the greater the incentive to move up.  The reality, however, is quite different.

But today, the countries with the highest mobility tend to be those with the greatest equality. The ability to rise, it seems, depends less on the spur of poverty than on access to education, health care, and other resources that equip people to succeed in the world of work.

Sandel goes on to point out additional problems with meritocracy beyond the difficulties in trying to get ahead all on your own: 1) demoralizing the losers in the race; 2) denigrating those without a college degree; and 3) turning politics into the realm of the expert rather than the citizen.

The tyranny of merit arises from more than the rhetoric of rising. It consists in a cluster of attitudes and circumstances that, taken together, have made meritocracy toxic. First, under conditions of rampant inequality and stalled mobility, reiterating the message that we are responsible for our fate and deserve what we get erodes solidarity and demoralizes those left behind by globalization. Second, insisting that a college degree is the primary route to a respectable job and a decent life creates a credentialist prejudice that undermines the dignity of work and demeans those who have not been to college; and third, insisting that social and political problems are best solved by highly educated, value-neutral experts is a technocratic conceit that corrupts democracy and disempowers ordinary citizens.

Consider the first point. Meritocracy fosters triumphalism for the winners and despair for the losers.  Whether you succeed or fail, you alone get the credit or the blame.  This was not the case in the bad old days of aristocrats and peasants.

If, in a feudal society, you were born into serfdom, your life would be hard, but you would not be burdened by the thought that you were responsible for your subordinate position. Nor would you labor under the belief that the landlord for whom you toiled had achieved his position by being more capable and resourceful than you. You would know he was not more deserving than you, only luckier.

If, by contrast, you found yourself on the bottom rung of a meritocratic society, it would be difficult to resist the thought that your disadvantage was at least partly your own doing, a reflection of your failure to display sufficient talent and ambition to get ahead. A society that enables people to rise, and that celebrates rising, pronounces a harsh verdict on those who fail to do so.

This triumphalist aspect of meritocracy is a kind of providentialism without God, at least without a God who intervenes in human affairs. The successful make it on their own, but their success attests to their virtue. This way of thinking heightens the moral stakes of economic competition. It sanctifies the winners and denigrates the losers.

One key issue that makes meritocracy potentially toxic is its assumption that we deserve the talents that earn us such great rewards.

There are two reasons to question this assumption. First, my having this or that talent is not my doing but a matter of good luck, and I do not merit or deserve the benefits (or burdens) that derive from luck. Meritocrats acknowledge that I do not deserve the benefits that arise from being born into a wealthy family. So why should other forms of luck—such as having a particular talent—be any different? 

Second, that I live in a society that prizes the talents I happen to have is also not something for which I can claim credit. This too is a matter of good fortune. LeBron James makes tens of millions of dollars playing basketball, a hugely popular game. Beyond being blessed with prodigious athletic gifts, LeBron is lucky to live in a society that values and rewards them. It is not his doing that he lives today, when people love the game at which he excels, rather than in Renaissance Florence, when fresco painters, not basketball players, were in high demand.

The same can be said of those who excel in pursuits our society values less highly. The world champion arm wrestler may be as good at arm wrestling as LeBron is at basketball. It is not his fault that, except for a few pub patrons, no one is willing to pay to watch him pin an opponent’s arm to the table.

He then moves on to the second point, about the central role of college in determining who’s got merit. 

Should colleges and universities take on the role of sorting people based on talent to determine who gets ahead in life?

There are at least two reasons to doubt that they should. The first concerns the invidious judgments such sorting implies for those who get sorted out, and the damaging consequences for a shared civic life. The second concerns the injury the meritocratic struggle inflicts on those who get sorted in and the risk that the sorting mission becomes so all-consuming that it diverts colleges and universities from their educational mission. In short, turning higher education into a hyper-competitive sorting contest is unhealthy for democracy and education alike.

The difficulty of predicting which talents are most socially beneficial is especially acute for the complex array of skills that people pick up in college.  Which ones matter most for determining a person’s ability to make an important contribution to society, and which don’t?  How do we know whether an elite college provides more of those skills than an open-access college?  This matters because a graduate of the former reaps a much higher reward than a graduate of the latter.  The claim that a prestigious college degree is the best predictor of future performance is particularly hard to test, because success and degree are conflated.  Graduates of top colleges get the best jobs and thus seem to have the greatest impact, whereas non-grads never get the chance to show what they can do.

Another sports analogy helps to make this point.

Consider how difficult it is to assess even more narrowly defined talents and skills. Nolan Ryan, one of the greatest pitchers in the history of baseball, holds the all-time record for most strikeouts and was elected on the first ballot to baseball’s Hall of Fame. When he was eighteen years old, he was not signed until the twelfth round of the baseball draft; teams chose 294 other, seemingly more promising players before he was chosen. Tom Brady, one of the greatest quarterbacks in the history of football, was the 199th draft pick. If even so circumscribed a talent as the ability to throw a baseball or a football is hard to predict with much certainty, it is folly to think that the ability to have a broad and significant impact on society, or on some future field of endeavor, can be predicted well enough to justify fine-grained rankings of promising high school seniors.

And then there’s the third point, the damage that meritocracy does to democratic politics.  One element of this is that it turns politics into an arena for credentialed experts, consigning ordinary citizens to the back seat.  How many political leaders today are without a college degree?  Vanishingly few.  Another is that meritocracy not only bars non-grads from power but also bars them from social respect.

Grievances arising from disrespect are at the heart of the populist movement that has swept across Europe and the US.  Sandel calls this a “politics of humiliation.”

The politics of humiliation differs in this respect from the politics of injustice. Protest against injustice looks outward; it complains that the system is rigged, that the winners have cheated or manipulated their way to the top. Protest against humiliation is psychologically more freighted. It combines resentment of the winners with nagging self-doubt: perhaps the rich are rich because they are more deserving than the poor; maybe the losers are complicit in their misfortune after all.

This feature of the politics of humiliation makes it more combustible than other political sentiments. It is a potent ingredient in the volatile brew of anger and resentment that fuels populist protest.

Sandel draws on a wonderful book by Arlie Hochschild, Strangers in Their Own Land, in which she interviews Trump supporters in Louisiana.

Hochschild offered this sympathetic account of the predicament confronting her beleaguered working-class hosts:

You are a stranger in your own land. You do not recognize yourself in how others see you. It is a struggle to feel seen and honored. And to feel honored you have to feel—and feel seen as—moving forward. But through no fault of your own, and in ways that are hidden, you are slipping backward.

One consequence of this for those left behind is a rise in “deaths of despair.”

The overall death rate for white men and women in middle age (ages 45–54) has not changed much over the past two decades. But mortality varies greatly by education. Since the 1990s, death rates for college graduates declined by 40 percent. For those without a college degree, they rose by 25 percent. Here then is another advantage of the well-credentialed. If you have a bachelor’s degree, your risk of dying in middle age is only one quarter of the risk facing those without a college diploma. 

Deaths of despair account for much of this difference. People with less education have long been at greater risk than those with college degrees of dying from alcohol, drugs, or suicide. But the diploma divide in death has become increasingly stark. By 2017, men without a bachelor’s degree were three times more likely than college graduates to die deaths of despair.

Sandel offers two relatively modest reforms that might help mitigate the tyranny of meritocracy.  One focuses on elite college admissions.

Of the 40,000-plus applicants, winnow out those who are unlikely to flourish at Harvard or Stanford, those who are not qualified to perform well and to contribute to the education of their fellow students. This would leave the admissions committee with, say, 30,000 qualified contenders, or 25,000, or 20,000. Rather than engage in the exceedingly difficult and uncertain task of trying to predict who among them are the most surpassingly meritorious, choose the entering class by lottery. In other words, toss the folders of the qualified applicants down the stairs, pick up 2,000 of them, and leave it at that.

This helps get around two problems:  the difficulty in trying to predict merit; and the outsize rewards of a winner-take-all admissions system.  But good luck trying to get this put in place over the howls of outrage from upper-middle-class parents, who have learned how to game the system to their advantage.  Consider this one small example of the reaction when an elite Alexandria high school proposed random admission from a pool of the most qualified.
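To make the lottery mechanics concrete, here is a minimal sketch in Python of the two-stage procedure Sandel describes above. The qualification screen, the pool sizes, and the function names are placeholders of my own; the book proposes the idea without specifying any implementation.

```python
import random

def lottery_admissions(applicants, is_qualified, class_size=2000, seed=None):
    """Two-stage admissions sketch: screen for qualification, then
    draw the entering class at random from the qualified pool."""
    rng = random.Random(seed)  # seedable, so a drawing can be reproduced
    qualified = [a for a in applicants if is_qualified(a)]
    if len(qualified) <= class_size:
        return qualified  # pool smaller than the class: admit everyone
    # Uniform draw without replacement -- no ranking among the qualified.
    return rng.sample(qualified, class_size)
```

The point of the design is what the code leaves out: once an applicant clears the qualification bar, no further comparison is made, which is exactly the fine-grained ranking Sandel argues cannot be done reliably.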

Another reform is more radical and even harder to imagine putting into practice.  It begins with reconsideration of what we mean by the “common good.”

The contrast between consumer and producer identities points to two different ways of understanding the common good. One approach, familiar among economic policy makers, defines the common good as the sum of everyone’s preferences and interests. According to this account, we achieve the common good by maximizing consumer welfare, typically by maximizing economic growth. If the common good is simply a matter of satisfying consumer preferences, then market wages are a good measure of who has contributed what. Those who make the most money have presumably made the most valuable contribution to the common good, by producing the goods and services that consumers want.

A second approach rejects this consumerist notion of the common good in favor of what might be called a civic conception. According to the civic ideal, the common good is not simply about adding up preferences or maximizing consumer welfare. It is about reflecting critically on our preferences—ideally, elevating and improving them—so that we can live worthwhile and flourishing lives. This cannot be achieved through economic activity alone. It requires deliberating with our fellow citizens about how to bring about a just and good society, one that cultivates civic virtue and enables us to reason together about the purposes worthy of our political community.

If we can carry out this deliberation — a big if indeed — then we can proceed to implement a system for shifting the basis for individual compensation from what the market is willing to pay to what we collectively feel is most valuable to society.  

Thinking about pay, most would agree that what people make for this or that job often overstates or understates the true social value of the work they do. Only an ardent libertarian would insist that the wealthy casino magnate’s contribution to society is a thousand times more valuable than that of a pediatrician. The pandemic of 2020 prompted many to reflect, at least fleetingly, on the importance of the work performed by grocery store clerks, delivery workers, home care providers, and other essential but modestly paid workers. In a market society, however, it is hard to resist the tendency to confuse the money we make with the value of our contribution to the common good.

To implement a system based on public benefit rather than marketability would require completely revamping our structure for determining salaries and taxes.

The idea is that the government would provide a supplementary payment for each hour worked by a low-wage employee, based on a target hourly-wage rate. The wage subsidy is, in a way, the opposite of a payroll tax. Rather than deduct a certain amount of each worker’s earnings, the government would contribute a certain amount, in hopes of enabling low-income workers to make a decent living even if they lack the skills to command a substantial market wage.
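As a worked illustration of that mechanism, here is one way the per-hour top-up could be computed. The full-gap formula and the dollar figures are assumptions for the sake of the example; the passage describes the subsidy’s direction (a government contribution per hour worked, toward a target rate) without specifying a schedule.

```python
def hourly_wage_subsidy(market_wage, target_wage, hours_worked):
    """Top up each hour worked toward a target hourly rate.
    Assumes the subsidy covers the whole gap below the target."""
    top_up = max(0.0, target_wage - market_wage)  # no subsidy above the target
    return top_up * hours_worked

# Example: a $9/hour worker with a $15/hour target, working 160 hours
# in a month, would receive 160 * (15 - 9) = $960 in subsidy on top of
# 160 * 9 = $1,440 in market wages.
print(hourly_wage_subsidy(9.0, 15.0, 160))  # 960.0
```

Note how this inverts a payroll tax, as the passage says: instead of deducting a share of each hour’s earnings, the government adds to them.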

Generally speaking, this would mean shifting the tax burden from work to consumption and speculation. A radical way of doing so would be to lower or even eliminate payroll taxes and to raise revenue instead by taxing consumption, wealth, and financial transactions. A modest step in this direction would be to reduce the payroll tax (which makes work expensive for employers and employees alike) and make up the lost revenue with a financial transactions tax on high-frequency trading, which contributes little to the real economy.

This is how Sandel ends his book:

The meritocratic conviction that people deserve whatever riches the market bestows on their talents makes solidarity an almost impossible project. For why do the successful owe anything to the less-advantaged members of society? The answer to this question depends on recognizing that, for all our striving, we are not self-made and self-sufficient; finding ourselves in a society that prizes our talents is our good fortune, not our due. A lively sense of the contingency of our lot can inspire a certain humility: “There, but for the grace of God, or the accident of birth, or the mystery of fate, go I.” Such humility is the beginning of the way back from the harsh ethic of success that drives us apart. It points beyond the tyranny of merit toward a less rancorous, more generous public life.

Posted in Higher Education

Kroger: In Praise of American Higher Education

This post is my effort to be upbeat for a change, looking at what’s good about US education.  It’s a recent essay by John Kroger, “In Praise of American Higher Education,” which was published in Inside Higher Ed.  Here’s a link to the original.  

Hope you enjoy it.  All is not bleak.

In Praise of American Higher Education

By John Kroger, September 14, 2020

These are grim times, filled with bad news. Nationally, the death toll from COVID-19 has passed 190,000. Political polarization has reached record levels, with some scholars openly fearing a fascist future for America. In my hometown of Portland, Ore., we have been buffeted by business closures, violent clashes between protesters and police, and out-of-control wildfires that have killed an unknown number of our fellow citizens, destroyed over a thousand homes and filled our streets with smoke. And in the higher education community, we are struggling. Our campuses are now COVID-19 hot spots, hundreds of institutions have implemented layoffs and furloughs impacting a reported 50,000 persons, and many commentators predict a complete financial meltdown for the sector. As I started to write this essay, a friend asked, “Is there any good news to report?”

In America today, we love to bash higher education. The negative drumbeat is incessant. Tuition, we hear, is too high. Students have to take too many loans. College does not prepare students for work. Inequality and racism are widespread. Just look at recent book titles: The Breakdown of Higher Education; Crisis in Higher Education; Intro to Failure; The Quiet Crisis: How Higher Education Is Failing America; Higher Education Under Fire; The Dream Is Over; Cracks in the Ivory Tower; The Moral Mess of Higher Education; and The Coddling of the American Mind. Jeesh.

So, for good news today, I want to remind everyone that despite all the criticism, the United States possesses a remarkable higher education system. Yes, we have our problems, which we need to address. The government and our colleges and universities need to partner to expand access to college, make it more affordable and decrease loan burdens; we need to ensure that our students graduate with valuable job skills; we need to tackle inequality and systemic racism in admission, hiring and the curriculum. But let us not lose sight of the remarkable things we have achieved and the very real strengths our system possesses — the very strengths that will allow us to tackle and solve the problems we have identified. Consider the following:

The United States has, by far, the largest number of great universities in the world. In the latest Times World University Rankings, the United States is dominant, possessing 14 of the top 20 universities in the world. These universities — places like Yale, UC Berkeley and Johns Hopkins — provide remarkable undergraduate and graduate educations combined with world-leading research outcomes. That reputation for excellence has made the United States the international gold standard for higher education.

We provide remarkable value to our students. As a recent Brookings Institution report noted, “Higher education provides extensive benefits to students, including higher wages, better health, and a lower likelihood of requiring disability payments. A population that is more highly educated also confers wide-ranging benefits to the economy, such as lower rates of unemployment and higher wages even for workers without college degrees. A postsecondary degree can also serve as a buffer against unemployment during economic downturns. Those with postsecondary degrees saw more steady employment through the Great Recession, and the vast majority of net jobs created during the economic recovery went to college-educated workers.”

Our higher education capacity is massive. At last count, almost 20 million students are enrolled in college. This is one reason we are fourth (behind Canada, Japan and South Korea) out of all OECD nations in higher education degree attainment, far ahead of nations like Germany and France. If we believe that mass education is critical to the future of our economy and democracy, this high number — and the fact that most of our institutions could easily grow — should give us great hope.

The United States dominates global research (though China is gaining). As The Economist reported in 2018, “Since the first Nobel prizes were bestowed in 1901, American scientists have won a whopping 269 medals in the fields of chemistry, physics and physiology or medicine. This dwarfs the tallies of America’s nearest competitors, Britain (89), Germany (69) and France (31).” In a recent global ranking of university innovation — “a list that identifies and ranks the educational institutions doing the most to advance science, invent new technologies and power new markets and industries” — U.S. institutions grabbed eight out of the top 10 spots.

We possess an amazing network of community colleges offering very low-cost, high-quality foundational and continuing education to virtually every American. No matter where you live in the United States, a low-cost community college and a world of learning is just a few miles away. This network provides a great foundation for our effort to expand economic opportunity and reach underserved populations. As Secretary of Education Arne Duncan once remarked, “About half of all first-generation college students and minority students attend community colleges. It is a remarkable record. No other system of higher education in the world does so much to provide access and second-chance opportunities as our community colleges.”

We are nimble. Though higher education is often bashed for refusing to change, our ability to do so is remarkable. When COVID-19 broke out in spring 2020, almost every U.S. college and university pivoted successfully to online education in a matter of weeks. Faculty, staff and administrators, often criticized for failing to work together, collectively made this happen overnight. Now, no matter what the future holds, our colleges and universities have the ability to deliver education effectively through both traditional in-person and new online models.

We have a great tradition, starting with the GI Bill, of federal government support for college education. No one in Congress is calling for an end to Pell Grants, one of the few government programs to enjoy overwhelming bipartisan government support in this highly fractured political era. Instead, the only question is the degree to which those grants need to increase and whether that increase should be linked to cost containment by institutions or not. This foundation of political support is vital as we look to ways to expand college access and affordability.

Finally, we have amazing historically Black colleges and universities, with excellent academic programs, outstanding faculty and proud histories. As the nation begins to confront its history of racism and discrimination, these institutions provide a remarkable asset to help the nation come to terms with its past, provide transformational education in the present and move toward a better future.

So, as we go through tough times, and we continue to subject our institutions to necessary and valuable self-criticism, it is important to keep our failures and limitations in perspective. Yes, American higher education could be better. But it is remarkable, valuable and praiseworthy all the same.

Posted in Higher Education, History of education, Organization Theory, Sociology

College: What Is It Good For?

This post is the text of a lecture I gave in 2013 at the annual meeting of the John Dewey Society.  It was published the following year in the Society’s journal, Education and Culture.  Here’s a link to the published version.           

The story I tell here is not a philosophical account of the virtues of the American university but a sociological account of how those virtues arose as unintended consequences of a system of higher education that emerged for less elevated reasons.  Drawing on the analysis in the book I was writing at the time, A Perfect Mess, I show how the system emerged in large part out of two impulses that had nothing to do with advancing knowledge.  One was the competition among religious groups, each seeking to plant the denominational flag on the growing western frontier and provide clergy for the newly arriving flock.  The other was the competition among frontier towns to attract settlers who would buy land, using a college as a sign that this town was not just another dusty farm village but a true center of culture.

The essay then goes on to explore how the current positive social benefits of the US higher ed system are supported by the peculiar institutional form that characterizes American colleges and universities. 

My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

In short, I’m portraying the system as one that is infused with irony, from its early origins through to its current functions.  Hope you enjoy it.


College — What Is It Good For

David F. Labaree

            I want to say up front that I’m here under false pretenses.  I’m not a Dewey scholar or a philosopher; I’m a sociologist doing history in the field of education.  And the title of my lecture is a bit deceptive.   I’m not really going to talk about what college is good for.  Instead I’m going to talk about how the institution we know as the modern American university came into being.  As a sociologist I’m more interested in the structure of the institution than in its philosophical aims.  It’s not that I’m opposed to these aims.  In fact, I love working in a university where these kinds of pursuits are open to us:   Where we can enjoy the free flow of ideas; where we explore any issue in the sciences or humanities that engages us; and where we can go wherever the issue leads without worrying about utility or orthodoxy or politics.  It’s a great privilege to work in such an institution.  And this is why I want to spend some time examining how this institution developed its basic form in the improbable context of the United States in the nineteenth century. 

            My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

            I tell this story in three parts.  I start by exploring how the American system of higher education emerged in the nineteenth century, without a plan and without any apparent promise that it would turn out well.  I show how, by 1900, all the pieces of the current system had come together.  This is the historical part.  Then I show how the combination of these elements created an astonishingly strong, resilient, and powerful structure.  I look at the way this structure deftly balances competing aims – the populist, the practical, and the elite.  This is the sociological part.  Then I veer back toward the issue raised in the title, to figure out what the connection is between the form of American higher education and the things that it is good for. This is the vaguely philosophical part.  I argue that the form serves the extraordinarily useful functions of protecting those of us in the faculty from the real world, protecting us from each other, and hiding what we’re doing behind a set of fictions and veneers that keep anyone from knowing exactly what is really going on.

           In this light, I look at some of the things that could kill it for us.  One is transparency.  The current accountability movement directed toward higher education could ruin everything by shining a light on the multitude of conflicting aims, hidden cross-subsidies, and forbidden activities that constitute life in the university.  A second is disaggregation.  I’m talking about current proposals to pare down the complexity of the university in the name of efficiency:  Let online modules take over undergraduate teaching; eliminate costly residential colleges; closet research in separate institutes; and get rid of football.  These changes would destroy the synergy that comes from the university’s complex structure.  A third is principle.  I argue that the university is a procedural institution, which would collapse if we all acted on principle instead of form.   I end with a call for us to retreat from substance and stand shoulder-to-shoulder in defense of procedure.

Historical Roots of the System

            The origins of the American system of higher education could not have been more humble or less promising of future glory.  It was a system, but it had no overall structure of governance and it did not emerge from a plan.  It just happened, through an evolutionary process that had direction but no purpose.  We have a higher education system in the same sense that we have a solar system, each of which emerged over time according to its own rules.  These rules shaped the behavior of the system but they were not the product of Intelligent Design. 

            Yet something there was about this system that produced extraordinary institutional growth.  When George Washington assumed the presidency of the new republic in 1789, the U.S. already had 19 colleges and universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1830 the numbers rose to 50 and then growth accelerated, with the total reaching 250 in 1860, 563 in 1870, and 811 in 1880.  To give some perspective, the number of universities in the United Kingdom between 1800 and 1880 rose from 6 to 10 and in all of Europe from 111 to 160 (Rüegg, 2004).  So in 1880 this upstart system had 5 times as many institutions of higher education as did the entire continent of Europe.  How did this happen?

            Keep in mind that the university as an institution was born in medieval Europe in the space between the dominant sources of power and wealth, the church and the state, and it drew  its support over the years from these two sources.  But higher education in the U.S. emerged in a post-feudal frontier setting where the conditions were quite different.  The key to understanding the nature of the American system of higher education is that it arose under conditions where the market was strong, the state was weak, and the church was divided.  In the absence of any overarching authority with the power and money to support a system, individual colleges had to find their own sources of support in order to get started and keep going.  They had to operate as independent enterprises in the competitive economy of higher education, and their primary reasons for being had little to do with higher learning.

            In the early- and mid-nineteenth century, the modal form of higher education in the U.S. was the liberal arts college.  This was a non-profit corporation with a state charter and a lay board, which would appoint a president as CEO of the new enterprise.  The president would then rent a building, hire a faculty, and start recruiting students.  With no guaranteed source of funding, the college had to make a go of it on its own, depending heavily on tuition from students and donations from prominent citizens, alumni, and religious sympathizers.  For college founders, location was everything.  However, whereas European universities typically emerged in major cities, these colleges in the U.S. arose in small towns far from urban population centers.  Not a good strategy if your aim was to draw a lot of students.  But the founders had other things in mind.

            One central motive for founding colleges was to promote religious denominations.  The large majority of liberal arts colleges in this period had a religious affiliation and a clergyman as president.  The U.S. was an extremely competitive market for religious groups seeking to spread the faith, and colleges were a key way to achieve this end.  With colleges, a denomination could prepare its own clergy and provide higher education for its members; and these goals were particularly important on the frontier, where the population was growing and the possibilities for denominational expansion were the greatest.  Every denomination wanted to plant the flag in the new territories, which is why Ohio came to have so many colleges.  The denomination provided a college with legitimacy, students, and a built-in donor pool but with little direct funding.

            Another motive for founding colleges was closely allied with the first, and that was land speculation.  Establishing a college in town was not only a way to advance the faith, it was also a way to raise property values.  If town fathers could attract a college, they could make the case that the town was no mere agricultural village but a cultural center, the kind of place where prospective land buyers would want to build a house, set up a business, and raise a family.  Starting a college was cheap and easy.  It would bear the town’s name and serve as its cultural symbol.  With luck it would give the town leverage to become a county seat or gain a station on the rail line.  So a college was a good investment in a town’s future prosperity (Brown, 1995).

            The liberal arts college was the dominant but not the only form that higher education took in nineteenth century America.  Three other types of institutions emerged before 1880.  One was state universities, which were founded and governed by individual states but which received only modest state funding.  Like liberal arts colleges, they arose largely for competitive reasons.  They emerged in the new states as the frontier moved westward, not because of huge student demand but because of the need for legitimacy.  You couldn’t be taken seriously as a state unless you had a state university, especially if your neighbor had just established one. 

            The second form of institution was the land-grant college, which arose from federal efforts to promote land sales in the new territories by providing public land as a founding grant for new institutions of higher education.  Turning their backs on the classical curriculum that had long prevailed in colleges, these schools had a mandate to promote practical learning in fields such as agriculture, engineering, military science, and mining. 

            The third form was the normal school, which emerged in the middle of the century as state-founded high-school-level institutions for the preparation of teachers.  It wasn’t until the end of the century that these schools evolved into teachers colleges; and in the twentieth century they continued that evolution, turning first into full-service state colleges and then by midcentury into regional state universities. 

            Unlike liberal arts colleges, all three of these types of institutions were initiated and governed by states, and all received some public funding.  But this funding was not nearly enough to keep them afloat, so they faced the same challenges as the liberal arts colleges: their survival depended heavily on their ability to bring in student tuition and draw donations.  In short, the liberal arts college established the model for survival in a setting with a strong market, weak state, and divided church; and the newer public institutions had to play by the same rules.

            By 1880, the structure of the American system of higher education was well established.  It was a system made up of lean and adaptable institutions, with a strong base in rural communities, and led by entrepreneurial presidents, who kept a sharp eye out for possible threats and opportunities in the highly competitive higher-education market.  These colleges had to attract and keep the loyalty of student consumers, whose tuition was critical for paying the bills and who had plenty of alternatives in towns nearby.  And they also had to maintain a close relationship with local notables, religious peers, and alumni, who provided a crucial base of donations.

            The system was only missing two elements to make it workable in the long term.  It lacked sufficient students, and it lacked academic legitimacy.  On the student side, this was the most overbuilt system of higher education the world has ever seen.  In 1880, 811 colleges were scattered across a thinly populated countryside, which amounted to 16 colleges per million of population (Collins, 1979, Table 5.2).  The average college had only 131 students and 14 faculty and granted 17 degrees per year (Carter et al., 2006, Table Bc523, Table Bc571; U.S. Bureau of the Census, 1975, Series H 751).  As I have shown, these colleges were not established in response to student demand, but nonetheless they depended on students for survival.  Without a sharp growth in student enrollments, the whole system would have collapsed. 

            On the academic side, these were colleges in name only.  They were parochial in both senses of the word, small town institutions stuck in the boondocks and able to make no claim to advancing the boundaries of knowledge.  They were not established to promote higher learning, and they lacked both the intellectual and economic capital required to carry out such a mission.  Many high schools had stronger claims to academic prowess than these colleges.  European visitors in the nineteenth century had a field day ridiculing the intellectual poverty of these institutions.  The system was on death watch.  If it was going to be able to survive, it needed a transfusion that would provide both student enrollments and academic legitimacy. 

            That transfusion arrived just in time from a new European import, the German research university.  This model offered everything that was lacking in the American system.  It reinvented university professors as the best minds of the generation, whose expertise was certified by the new entry-level degree, the Ph.D., and who were pushing back the frontiers of knowledge through scientific research.  It introduced to the college campus graduate students, who would be selected for their high academic promise and trained to follow in the footsteps of their faculty mentors.

            And at the same time that the German model offered academic credibility to the American system, the peculiarly Americanized form of this model made university enrollment attractive for undergraduates, whose focus was less on higher learning than on jobs and parties.  The remodeled American university provided credible academic preparation in the cognitive skills required for professional and managerial work; and it provided training in the social and political skills required for corporate employment, through the process of playing the academic game and taking on roles in intercollegiate athletics and on-campus social clubs.  It also promised a social life in which one could have a good time and meet a suitable spouse. 

            By 1900, with the arrival of the research university as the capstone, nearly all of the core elements of the current American system of higher education were in place.  Subsequent developments focused primarily on extending the system downward, adding layers that would make it more accessible to larger numbers of students – as normal schools evolved into regional state universities and as community colleges emerged as the open-access base of an increasingly stratified system.  Here ends the history portion of this account. Now we move on to the sociological part of the story.

Sociological Traits of the System

            When the research university model arrived to save the day in the 1880s, the American system of higher education was in desperate straits.  But at the same time this system had an enormous reservoir of potential strengths that prepared it for its future climb to world dominance.  Let’s consider some of these strengths.  First it had a huge capacity in place, the largest in the world by far:  campuses, buildings, faculty, administration, curriculum, and a strong base in the community.  All it needed was students and credibility. 

            Second, it consisted of a group of institutions that had figured out how to survive under dire Darwinian circumstances, where supply greatly exceeded demand and where there was no secure stream of funding from church or state.  In order to keep the enterprises afloat, they had learned how to hustle for market position, troll for students, and dun donors.  Imagine how well this played out when students found a reason to line up at their doors and donors suddenly saw themselves investing in a winner with a soaring intellectual and social mission. 

            Third, they had learned to be extraordinarily sensitive to consumer demand, upon which everything depended.  Fourth, as a result they became lean and highly adaptable enterprises, which were not bounded by the politics of state policy or the dogma of the church but could take advantage of any emerging possibility for a new program, a new kind of student or donor, or a new area of research.  Not only were they able to adapt but they were forced to do so quickly, since otherwise the competition would jump on the opportunity first and eat their lunch.

            By the time the research university arrived on the scene, the American system of higher education was already firmly established and governed by its own peculiar laws of motion and its own evolutionary patterns.  The university did not transform the system.  Instead it crowned the system and made it viable for a century of expansion and elevation.  Americans could not simply adopt the German university model, since this model depended heavily on strong state support, which was lacking in the U.S.  And the American system would not sustain a university as elevated as the German university, with its tight focus on graduate education and research at the expense of other functions.  American universities that tried to pursue this approach – such as Clark University and Johns Hopkins – found themselves quickly trailing the pack of institutions that adopted a hybrid model grounded in the preexisting American system.  In the U.S., the research university provided a crucial add-on rather than a transformation.  In this institutionally-complex market-based system, the research university became embedded within a convoluted but highly functional structure of cross-subsidies, interwoven income streams, widely dispersed political constituencies, and a bewildering array of goals and functions. 

            At the core of the system is a delicate balance among three starkly different models of higher education.  These three roughly correspond to Clark Kerr’s famous characterization of the American system as a mix of the British undergraduate college, the American land-grant college, and the German research university (Kerr, 2001, p. 14).  The first is the populist element, the second is the practical element, and the third is the elite element.  Let me say a little about each of these and make the case for how they work to reinforce each other and shore up the overall system.  I argue that these three elements are unevenly distributed across the whole system, with the populist and practical parts strongest in the lower tiers of the system, where access is easy and job utility is central, and the elite element strongest in the upper tier.  But I also argue that all three are present in the research university at the top of the system.  Consider how all these elements come together in a prototypical flagship state university.

            The populist element has its roots in the British residential undergraduate college, which colonists had in mind when they established the first American colleges; but the changes that emerged in the U.S. in the early nineteenth century were critical.  Key was the fact that American colleges during this period were broadly accessible in a way that colleges in the U.K. never were until the advent of the red-brick universities after the Second World War.  American colleges were not located in fashionable areas in major cities but in small towns in the hinterland.  There were far too many of them for them to be elite, and the need for students meant that tuition and academic standards both had to be kept relatively low.  The American college never exuded the odor of class privilege to the same degree as Oxbridge; its clientele was largely middle class.  For the new research university, this legacy meant that the undergraduate program provided critical economic and political support. 

            From the economic perspective, undergrads paid tuition, which – through large classes and thus the need for graduate teaching assistants – supported graduate programs and the larger research enterprise.  Undergrads, who were socialized in the rituals of football and fraternities, were also the ones who identified most closely with the university, which meant that in later years they became the most loyal donors.  As doers rather than thinkers, they were also the wealthiest group of alumni donors.  Politically, the undergraduate program gave the university a broad base of community support.  Since anyone could conceive of attending the state university, the institution was never as remote or alien as the German model.  Its athletic teams and academic accomplishments were a point of pride for state residents, whether or not they or their children ever attended.  They wore the school colors and cheered for it on game days.

            The practical element has its roots in the land-grant college.  The idea here was that the university was not just an enterprise for providing liberal education for the elite but that it could also provide useful occupational skills for ordinary people.  Since the institution needed to attract a large group of students to pay the bills, the American university left no stone unturned when it came to developing programs that students might want.  It promoted itself as a practical and reliable mechanism for getting a good job.  This not only boosted enrollment, but it also sent a message to the citizens of the state that the university was making itself useful to the larger community, producing the teachers, engineers, managers, and dental hygienists that they needed.

            This practical bent also extended to the university’s research effort, which did not focus solely on ivory-tower pursuits.  Its researchers were working hard to design safer bridges, more productive crops, better vaccines, and more reliable student tests.  For example, when I taught at Michigan State I planted my lawn with Spartan grass seed, which was developed at the university.  These forms of applied research led to patents that brought substantial income back to the institution, but their most important function was to provide a broad base of support for the university among people who had no connection with it as an instructional or intellectual enterprise.  The idea was compelling: This is your university, working for you.

            The elite element has its roots in the German research university.  This is the component of the university formula that gives the institution academic credibility at the highest level.  Without it the university would just be a party school for the intellectually challenged and a trade school for job seekers.  From this angle, the university is the haven for the best thinkers, where professors can pursue intellectual challenges of the first order, develop cutting edge research in a wide array of domains, and train graduate students who will carry on these pursuits in the next generation.  And this academic aura envelops the entire enterprise, giving the lowliest freshman exposure to the most distinguished faculty and allowing the average graduate to sport a diploma burnished by the academic reputations of the best and the brightest.  The problem, of course, is that supporting professorial research and advanced graduate study is enormously expensive; research grants only provide a fraction of the needed funds. 

            So the populist and practical domains of the university are critically important components of the larger university package.  Without the foundation of fraternities and football, grass seed and teacher education, the superstructure of academic accomplishment would collapse of its own weight.  The academic side of the university can’t survive without both the financial subsidies and political support that come from the populist and the practical sides.  And the populist and practical sides rely on the academic legitimacy that comes from the elite side.  It’s the mixture of the three that constitutes the core strength of the American system of higher education.  This is why it is so resilient, so adaptable, so wealthy, and so powerful.  This is why its financial and political base is so broad and strong.  And this is why American institutions of higher education enjoy so much autonomy:  They respond to many sources of power in American society and they rely on many sources of support, which means they are not the captive of any single power source or revenue stream.

The Power of Form

            So my story about the American system of higher education is that it succeeded by developing a structure that allowed it to become both economically rich and politically autonomous.  It could tap multiple sources of revenue and legitimacy, which allowed it to avoid becoming the wholly owned subsidiary of the state, the church, or the market.  And by virtue of its structurally reinforced autonomy, college is good for a great many things.

            At last we come back to our topic.  What is college good for?  For those of us on the faculties of research universities, these institutions provide several core benefits that we see as especially important.  At the top of the list is that they preserve and promote free speech.  They are zones where faculty and students can feel free to pursue any idea, any line of argument, and any intellectual pursuit that they wish – free of the constraints of political pressure, cultural convention, or material interest.  Closely related to this is the fact that universities become zones where play is not only permissible but even desirable, where it’s ok to pursue an idea just because it’s intriguing, even though there is no apparent practical benefit that this pursuit would produce.

            This, of course, is a rather idealized version of the university.  In practice, as we know, politics, convention, and economics constantly intrude on the zone of autonomy in an effort to shape the process and limit these freedoms.  This is particularly true in the lower strata of the system.  My argument is not that the ideal is met but that the structure of American higher education – especially in the top tier of the system – creates a space of relative autonomy, where these constraining forces are partially held back, allowing the possibility for free intellectual pursuits that cannot be found anywhere else. 

            Free intellectual play is what we in the faculty tend to care about, but others in American society see other benefits arising from higher education that justify the enormous time and treasure that we devote to supporting the system.  Policymakers and employers put primary emphasis on higher education as an engine of human capital production, which provides the economically relevant skills that drive increases in worker productivity and growth in the GDP.  They also hail it as a place of knowledge production, where people develop valuable technologies, theories, and inventions that can feed directly into the economy.  And companies use it as a place to outsource much of their needs for workforce training and research-and-development. 

            These pragmatic benefits that people see coming from the system of higher education are real.  Universities truly are socially useful in such ways.  But it’s important to keep in mind that these social benefits can arise only if the university remains a preserve for free intellectual play.  Universities are much less useful to society if they restrict themselves to the training of individuals for particular present-day jobs, or to the production of research to solve current problems.  They are most useful if they function as storehouses for knowledge, skills, technologies, and theories – for which there is no current application but which may turn out to be enormously useful in the future.  They are the mechanism by which modern societies build capacity to deal with issues that have not yet emerged but sooner or later are likely to do so.

            But that is a discussion for another speech by another scholar.  The point I want to make today about the American system of higher education is that it is good for a lot of things but it was established in order to accomplish none of these things.  As I have shown, the system that arose in the nineteenth century was not trying to store knowledge, produce capacity, or increase productivity.  And it wasn’t trying to promote free speech or encourage play with ideas.  It wasn’t even trying to preserve institutional autonomy.  These things happened as the system developed, but they were all unintended consequences.  What was driving development of the system was a clash of competing interests, all of which saw the college as a useful medium for meeting particular ends.  Religious denominations saw colleges as a way to spread the faith.  Town fathers saw them as a way to promote local development and increase property values.  The federal government saw them as a way to spur the sale of federal lands.  State governments saw them as a way to establish credibility in competition with other states.  College presidents and faculty saw them as a way to promote their own careers.  And at the base of the whole process of system development were the consumers, the students, without whose enrollment and tuition and donations the system would not have been able to persist.  The consumers saw the college as useful in a number of ways:  as a medium for seeking social opportunity and achieving social mobility; as a medium for preserving social advantage and avoiding downward mobility; as a place to have a good time, enjoy an easy transition to adulthood, pick up some social skills, and meet a spouse; even, sometimes, as a place to learn.

            The point is that the primary benefits of the system of higher education derive from its form, but this form did not arise in order to produce these benefits.  We need to preserve the form in order to continue enjoying these benefits, but unfortunately the organizational foundations upon which the form is built are, on the face of it, absurd.  And each of these foundational qualities is currently under attack from alternative visions that, in contrast, have a certain face validity.  If the attackers accomplish their goals, the system’s form, which has been so enormously productive over the years, will collapse, and with this collapse will come the end of the university as we know it.  I didn’t promise this lecture would end well, did I?

            Let me spell out three challenges that would undercut the core autonomy and synergy that make the system so productive in its current form.  On the surface, each of the proposed changes seems quite sensible and desirable.  Only by examining the implications of actually pursuing these changes can we see how they threaten the foundational qualities that currently undergird the system.  The system’s foundations are so paradoxical, however, that mounting a public defense of them would be difficult indeed.  Yet it is precisely these traits of the system that we need to defend in order to preserve the current highly functional form of the university.  In what follows, I am drawing inspiration from the work of Suzanne Lohmann (2004, 2006), a political scientist at UCLA and the scholar who has addressed these issues most astutely.

            One challenge comes from prospective reformers of American higher education who want to promote transparency.  Who can be against that?  This idea derives from the accountability movement, which has already swept across K-12 education and is now pounding the shores of higher education.  It simply asks universities to show people what they’re doing.  What is the university doing with its money and its effort?  Who is paying for what?  How do the various pieces of the complex structure of the university fit together?  And are they self-supporting or drawing resources from elsewhere?  What is faculty credit-hour production?  How is tuition related to instructional costs?  And so on.   These demands make a lot of sense. 

            The problem, however, as I have shown today, is that the autonomy of the university depends on its ability to shield its inner workings from public scrutiny.  It relies on opacity.  Autonomy will end if the public can see everything that is going on and what everything costs.  Consider all of the cross subsidies that keep the institution afloat:  undergraduates support graduate education, football supports lacrosse, adjuncts subsidize professors, rich schools subsidize poor schools.  Consider all of the instructional activities that would wilt in the light of day; consider all of the research projects that could be seen as useless or politically unacceptable.  The current structure keeps the inner workings of the system obscure, which protects the university from intrusions on its autonomy.  Remember, this autonomy arose by accident not by design; its persistence depends on keeping the details of university operations out of public view.

            A second and related challenge comes from reformers who seek to promote disaggregation.  The university is an organizational nightmare, they say, with all of those institutes and centers, departments and schools, programs and administrative offices.  There are no clear lines of authority, no mechanisms to promote efficiency and eliminate duplication, no tools to achieve economies of scale.  Transparency is one step in the right direction, they say, but the real reform that is needed is to take apart the complex interdependencies and overlapping responsibilities within the university and then figure out how each of these tasks could be accomplished in the most cost-effective and outcome-effective manner.  Why not have a few star professors tape lectures and then offer Massive Open Online Courses at colleges across the country?  Why not have institutions specialize in what they’re best at – remedial education, undergraduate instruction, vocational education, research production, graduate training?  Putting them together into a single institution is expensive and grossly inefficient.

            But recall that it is precisely the aggregation of purposes and functions – the combination of the populist, the practical, and the elite – that has made the university so strong, so successful, and, yes, so useful.  This combination creates a strong base both financially and politically and allows for forms of synergy that cannot happen with a set of isolated educational functions.  The fact is that this institution can’t be disaggregated without losing what makes it the kind of university that students, policymakers, employers, and the general public find so compelling.  A key organizational element that makes the university so effective is its chaotic complexity.

            A third challenge comes not from reformers intruding on the university from the outside but from faculty members meddling with it from the inside.  The threat here arises from the dangerous practice of acting on academic principle.  Fortunately, this is not very common in academe.  But the danger is lurking in the background of every decision about faculty hires.  Here’s how it works.  You review a finalist for a faculty position in a field not closely connected to your own, and you find to your horror that the candidate’s intellectual domain seems absurd on the face of it (how can anyone take this type of work seriously?) and the candidate’s own scholarship doesn’t seem credible.  So you decide to speak against hiring the candidate and organize colleagues to support your position.  But then you happen to read a paper by Suzanne Lohmann, who points out something very fundamental about how universities work. 

            Universities are structured in a manner that protects the faculty from the outside world (that is, from the forces of transparency and disaggregation), but they are also organized in a manner that protects the faculty from each other.  The latter is the reason we have such an enormous array of departments and schools in universities.  If every historian had to meet the approval of geologists and every psychologist had to meet the approval of law faculty, no one would ever be hired.

           The simple fact is that part of what keeps universities healthy and autonomous is hypocrisy.  Because of the Balkanized structure of university organization, we all have our own protected spaces to operate in and we all pass judgment only on our own peers within that space.  To do otherwise would be disastrous.  We don’t have to respect each other’s work across campus, we merely need to tolerate it – grumbling about each other in private and making nice in public.  You pick your faculty, we’ll pick ours.  Lohmann (2006) calls this core procedure of the academy “log-rolling.”  If we all operated on principle, if we all only approved scholars we respected, then the university would be a much diminished place.  Put another way, I wouldn’t want to belong to a university that consisted only of people I found worthy.  Gone would be the diversity of views, paradigms, methodologies, theories, and world views that makes the university such a rich place.  The result is incredibly messy, and it permits a lot of quirky – even ridiculous – research agendas, courses, and instructional programs.  But in aggregate, this libertarian chaos includes an extraordinary range of ideas, capacities, theories, and social possibilities.  It’s exactly the kind of mess we need to treasure and preserve and defend against all opponents.

            So here is the thought I’m leaving you with.  The American system of higher education is enormously productive and useful, and it’s a great resource for students, faculty, policymakers, employers, and society.  What makes it work is not its substance but its form.  Crucial to its success is its devotion to three formal qualities:  opacity, chaotic complexity, and hypocrisy.  Embrace these forms and they will keep us free.

Posted in Higher Education, Meritocracy, Philosophy

Alain de Botton: On Asking People What They ‘Do’?

This lovely essay explores the most common question that modernity prompts strangers to ask each other:  What do you do?  The author is the philosopher Alain de Botton, who explains that this question is freighted with moral judgment.  In a meritocracy, what you do for a living is not only who you are; it’s also where you stand in the hierarchy of public esteem.  Are you somebody or nobody, a winner or a loser?  Should I suck up to you or should I scorn you?

The argument here resonates with a number of recent pieces I’ve posted here about the downside of the academic meritocracy.  At the core is this problem:  when we say the social system is responsive to merit rather than birth, we place personal responsibility on individuals for their social outcomes.  It’s no longer legitimate to blame fate or luck or the gods for your lowly status, because the fault is all yours.

This essay is from his website The School of Life.  Here’s a link to the original.

On Asking People What They ‘Do’?

Alain de Botton

The world became modern when people who met for the first time shifted from asking each other (as they had always done) where they came from – to asking each other what they did.

To try to position someone by their area of origin is to assume that personal identity is formed first and foremost by membership of a geographical community; we are where we are from. We’re the person from the town by the lake, we’re from the village between the forest and the estuary. But to want to know our job is to imagine that it’s through our choice of occupation, through our distinctive way of earning money, that we become most fully ourselves; we are what we do.

The difference may seem minor but it has significant implications for the way we stand to be judged and therefore how pained the question may make us feel. We tend not to be responsible for where we are from. The universe landed us there and we probably stayed. Furthermore, entire communities are seldom viewed as either wholly good or bad; it’s assumed they will contain all sorts of people, about whom blanket judgements would be hard. One is unlikely to be condemned simply on the basis of the region or city one hails from. But we have generally had far more to do with the occupation we are engaged in. We’ll have studied a certain way, gained particular qualifications and made specific choices in order to end up, perhaps, a dentist or a cleaner, a film producer or a hospital porter. And to such choices, targeted praise or blame can be attached. 

It turns out that in being asked what we do, we are not being asked what we do, we’re being asked what we are worth – and more precisely, whether or not we are worth knowing. In modernity, there are right and wrong answers, and the wrong ones will swiftly strip us of the psychological ingredient we crave as much as we do heat, food or rest: respect. We long to be treated with dignity and kindness, for our existence to matter to others and for our particularity to be noticed and honoured. We may do almost as much damage to a person by ignoring them as by punching them in the stomach.

But respect will not be available to those who cannot give a sufficiently elevated answer to the question of what they do. The modern world is snobbish. The term is associated with a quaint aristocratic value system that emphasises bloodlines and castles. But stripped to its essence snobbery merely indicates any way of judging another human whereby one takes a relatively small section of their identity and uses it to come to a total and fixed judgement on their entire worth. For the music snob, we are what we listen to, for the clothes snob, we are our trousers. And according to the predominant kind of snobbery at large in the modern world, which is job snobbery, we are nothing but what is on our business card.

The opposite of a snob might be a parent or lover; someone who cares about who one is, not what one does. But for the majority, our existence will be weighed up according to far narrower criteria. We will exist in so far as we have performed adequately in the market place. Our longing for respect will only be satisfied through the right sort of rank. It is easy to accuse modern humans of being materialistic. This seems wrong. We may have high levels of interest in possessions and salaries, but we are not on that basis ‘materialistic’. We are simply living in a world where the possession of certain material goods has become the only conduit to the emotional rewards that are what, deep down, we crave. It isn’t the objects and titles we are after; it is, more poignantly, the feeling of being ‘seen’ and liked which will only be available to us via material means.

Not only does the modern world want to know what we do, it also has to hand some punitive explanations of why we have not done well. It promotes the idea of ‘meritocracy’, that is, a belief in a system which should allow each person to rise through classes in order to take up the place they deserve. No longer should tradition or family background limit what one can achieve. But the idea of meritocracy carries with it a nasty sting, for if we truly believe in a world in which those who deserve to get to the top get to the top, then by implication, we must also believe in a world in which those who get to the bottom deserve to get to the bottom. In other words, a world which takes itself to be meritocratic will suppose that failure and success in the professional game are not mere accidents, but always and invariably indications of genuine value.

It had not always felt quite as definitive. Premodern societies believed in the intervention of divine forces in human affairs. A successful Roman trader or soldier would have looked up and thanked Mercury or Mars for their good fortune. They knew themselves to be only ever partially responsible for what happened to them, for good or ill, and would remember as much when evaluating others. The poor weren’t necessarily indolent or sinful; the gods might just have never looked favourably on them. But we have done away with the idea of divine intervention – or of its less directly superstitious cousin, luck. We don’t accept that someone might fail for reasons of mere bad luck. We have little patience for nuanced stories or attenuating facts; narratives that could set the bare bones of a biography in a richer context, that could explain that though someone ended up in a lowly place, they had to deal with an illness, an ailing relative, a stock market crash or a very difficult childhood. Winners make their own luck. And losers their own defeat.

No wonder that the consequences of underachievement feel especially punishing. There are fewer explanations and fewer ways of tolerating oneself. A society that assumes that what happens to an individual is the responsibility of the individual is a society that doesn’t want to hear any so-called excuses that would less closely identify a person with elements of their CV. It is a society that may leave some of the losers feeling – in extremis – that they have no right to exist. Suicide rates rise.

In the past, in the era of group identity, we might value ourselves in part for things which we had not done entirely ourselves. We might feel proud that we came from a society that had built a particularly fine cathedral or temple. Our sense of self could be bolstered by belonging to a city or nation that placed great store on athletic prowess or literary talent. Modernity has sharply weakened our ability to lean on such supports. It has tied us punitively closely to what we have personally done – or not.

At the same time, it has pointed out that the opportunities for individual achievement have never been greater. We – at last – are able to do anything. We might found a fortune, rise to the top of politics, write a hit song. There should be no limits on ambition. And therefore, any failure starts to feel even more of a damning verdict on who we are. It’s one thing to have failed in an era when failure seemed like the norm, quite another to have failed when success has been made to feel like an ongoing and universal possibility.

Even as it raised living standards across the board, the modern world has managed to make the psychological consequences of failure harder to bear. It has eroded our sense that our identity could rest on broader criteria than our professional performance. It has also made it imperative for psychological survival that we try to find a way of escaping the claustrophobia of individualism, that we recall that workplace success and failure are always relative markers, not conclusive judgements, that in reality, no one is in fact ever either a loser or a winner, that we are all bewildering mixtures of the beautiful and the ugly, the impressive and the mediocre, the idiotic and the sharp. Going forward, in a fight against the spirit of the age, we might do well to ask all new acquaintances not so much what they do but – more richly – what they happen to have been thinking about recently.

Posted in Ed schools, Higher Education, History

Too Easy a Target: The Trouble with Ed Schools and the Implications for the University

This post is a piece I published in Academe (the journal of the AAUP) in 1999.  It provides an overview of the argument in my 2004 book, The Trouble with Ed Schools.  I reproduce it here as a public service:  if you read this, you won’t need to read my book, much less buy it.  You’re welcome.  Also, looking through it 20 years later, I was pleasantly surprised to find that it was kind of a fun read.  Here’s a link to the original.

The book and the article tell the story of the poor beleaguered ed school, maligned by one and all.  It’s a story of irony, in which an institution does what everyone asked of it and is thoroughly punished for the effort.  And it’s also a reverse Horatio Alger story, in which the beggar boy never makes it.  Here’s a glimpse of the argument, which starts with the ed school’s terrible reputation:

So how did things get this bad? No occupational group or subculture acquires a label as negative as this one without a long history of status deprivation. Critics complain about the weakness and irrelevance of teacher ed, but they rarely look at the reasons for its chronic status problems. If they did, they might find an interesting story, one that presents a more sympathetic, if not more flattering, portrait of the education school. They would also find, however, a story that portrays the rest of academe in a manner that is less self-serving than in the standard account. The historical part of this story focuses on the way that American policy makers, taxpayers, students, and universities collectively produced exactly the kind of education school they wanted. The structural part focuses on the nature of teaching as a form of social practice and the problems involved in trying to prepare people to pursue this practice.

Enjoy.

Ed Schools Cover

Too Easy a Target:

The Trouble with Ed Schools and the Implications for the University

By David F. Labaree

This is supposed to be the era of political correctness on American university campuses, a time when speaking ill of oppressed minorities is taboo. But while academics have to tiptoe around most topics, there is still one subordinate group that can be shelled with impunity — the sad sacks who inhabit the university’s education school. There is no need to take aim at this target because it is too big to miss, and there is no need to worry about hitting innocent bystanders because everyone associated with the ed school is understood to be guilty as charged.

Of course, education in general is a source of chronic concern and an object of continuous criticism for most Americans. Yet, as the annual Gallup Poll of attitudes toward education shows, citizens give good grades to their local schools at the same time that they express strong fears about the quality of public education elsewhere in the country. The vision is one of general threats to education that have not yet reached the neighborhood school but may do so in the near future. These threats include everything from the multicultural curriculum to the decline in the family, the influence of television, and the consequences of chronic poverty.

One such threat is the hapless education school, whose alleged incompetence and supposedly misguided ideas are seen as producing poorly prepared teachers and inadequate curricula. For the public, this institution is remote enough to be suspect (unlike the local school) and accessible enough to be scorned (unlike the more arcane university). For the university faculty, it is the ideal scapegoat, allowing blame for problems with schools to fall upon teacher education in particular rather than higher education in general.

For years, writers from right to left have been making the same basic complaints about the inferior quality of education faculties, the inadequacy of education students, and, to quote James Koerner’s 1963 classic, The Miseducation of American Teachers, their “puerile, repetitious, dull, and ambiguous” curriculum. This kind of complaining about ed schools is as commonplace as griping about the cold in the middle of winter. But something new has arisen in the defamatory discourse about these beleaguered institutions: the attacks are now coming from their own leaders. The victims are joining the victimizers.

So how did things get this bad? No occupational group or subculture acquires a label as negative as this one without a long history of status deprivation. Critics complain about the weakness and irrelevance of teacher ed, but they rarely look at the reasons for its chronic status problems. If they did, they might find an interesting story, one that presents a more sympathetic, if not more flattering, portrait of the education school. They would also find, however, a story that portrays the rest of academe in a manner that is less self-serving than in the standard account. The historical part of this story focuses on the way that American policy makers, taxpayers, students, and universities collectively produced exactly the kind of education school they wanted. The structural part focuses on the nature of teaching as a form of social practice and the problems involved in trying to prepare people to pursue this practice.

Decline of Normal Schools

Most education schools grew out of the normal schools that emerged in the second half of the nineteenth century. Their founders initially had heady dreams that these schools could become model institutions that would establish high-quality professional preparation for teachers along with a strong professional identity. For a time, some of the normal schools came close to realizing these dreams.

Soon, however, burgeoning enrollments in the expanding common schools produced an intense demand for new teachers to fill a growing number of classrooms, and the normal schools turned into teacher factories. They had to produce many teachers quickly and cheaply, or else school districts around the country would hire teachers without this training — or perhaps any form of professional preparation. So normal schools adapted by stressing quantity over quality, establishing a disturbing but durable pattern of weak professional preparation and low academic standards.

At the same time, normal schools had to confront a strong consumer demand from their own students, many of whom saw the schools as an accessible form of higher education rather than as a site for teacher preparation. Located close to home, unlike the more centrally located state universities and land grant colleges, the normal schools were also easier to get into and less costly. As a result, many students enrolled who had little or no interest in teaching; instead, they wanted an advanced educational credential that would gain them admission to attractive white-collar positions. They resisted being trapped within a single vocational track — the teacher preparation program — and demanded a wide array of college-level liberal arts classes and programs. Since normal schools depended heavily on tuition for their survival, they had little choice but to comply with the demands of their “customers.”

This compliance reinforced the already-established tendency toward minimizing the extent and rigor of teacher education. It also led the normal schools to transform themselves into the model of higher education that their customers wanted, first by changing into teachers’ colleges (with baccalaureate programs for nonteachers), then into state liberal-arts colleges, and finally into the general-purpose regional state universities they are today.

As the evolving colleges moved away from being normal schools, teacher education programs became increasingly marginal within their own institutions, which were coming to imitate the multipurpose university by giving pride of place to academic departments, graduate study, and preparation for the more prestigious professions. Teacher education came to be perceived as every student’s second choice, and the ed school professors came to be seen as second-class citizens in the academy.

Market Pressures in the Present

Market pressures on education schools have changed over the years, but they have not declined. Teaching is a very large occupation in the United States, with about 3 million practitioners in total. To fill all the available vacancies, approximately one in every five college graduates must enter teaching each year. If education schools do not prepare enough candidates, state legislators will authorize alternative routes into the profession (requiring little or no professional education), and school boards will hire such prospects to place warm bodies in empty classrooms.

Education schools that try to increase the duration and rigor of teacher preparation by focusing more intensively on smaller cohorts of students risk leaving the bulk of teaching in the hands of practitioners who are prepared at less demanding institutions or who have not been prepared at all. In addition, such efforts run into strong opposition from within the university, which needs ed students to provide the numbers that bring legislative appropriations and tuition payments. Subsidies from the traditionally cost-effective teacher-education factories support the university’s more prestigious, but less lucrative, endeavors. As a result, universities do not want their ed schools to turn into boutique programs for the preparation of a few highly professionalized teachers.

Another related source of institutional resistance arises whenever education schools try to promote quality over quantity. This resistance comes from academic departments, which have traditionally relied on the ability of their universities to provide teaching credentials as a way to induce students to major in “impractical” subjects. Departments such as English, history, and music have sold themselves to undergraduates for years with the argument that “you can always teach” these subjects. As a result, these same departments become upset when the education school starts to talk about upgrading, downsizing, or limiting access.

Stigmatized Populations and Soft Knowledge

The fact that education schools serve stigmatized populations aggravates the market pressures that have seriously undercut the status and the role of these schools. One such population is women, who currently account for about 70 percent of American teachers. Another is the working class, whose members have sought out the respectable knowledge-based white-collar work of teaching as a way to attain middle-class standing. Children make up a third stigmatized population. In a society that rewards contact with adults more than contact with children, and in a university setting that is more concerned with serious adult matters than with kid stuff, education schools lose out, because they are indelibly associated with children.

Teachers also suffer from an American bias in favor of doing over thinking. Teachers are the largest and most visible single group of intellectual workers in the United States — that is, people who make their living through the production and transmission of ideas. More accessible than the others in this category, teachers constitute the street-level intellectuals of our society. As the only intellectuals with whom most people will ever have close contact, teachers take the brunt of the national prejudice against book learning and those pursuits that are scornfully labeled as “academic.”

Another problem facing education schools is the low status of the knowledge they deal with: it is soft rather than hard, applied rather than pure. Hard disciplines (which claim to produce findings that are verifiable, definitive, and cumulative) outrank soft disciplines (whose central problem is interpretation and whose findings are always subject to debate and reinterpretation by others). Likewise, pure intellectual pursuits (which are oriented toward theory and abstracted from particular contexts) outrank those that are applied (which concentrate on practical work and concrete needs).

Knowledge about education is necessarily soft. Education is an extraordinarily complex social activity carried out by quirky and willful actors, and it steadfastly resists any efforts to reduce it to causal laws or predictive theories. Researchers cannot even count on being able to build on the foundation of other people’s work, since the validity of this work is always only partially established. Instead, they must make the best of a difficult situation. They try to interpret what is going on in education, but the claims they make based on these interpretations are highly contingent. Education professors can rarely speak with unclouded authority about their area of expertise or respond definitively when others challenge their authority. Outsiders find it child’s play to demonstrate the weaknesses of educational research and hold it up for ridicule for being inexact, contradictory, and impotent.

Knowledge about education is also necessarily applied. Education is not a discipline, defined by a theoretical apparatus and a research methodology, but an institutional area. As a result, education schools must focus their energies on the issues that arise from this area and respond to the practical concerns confronting educational practitioners in the field — even if doing so leads them into areas in which their constructs are less effective and their chances for success less promising. This situation unavoidably undermines the effectiveness and the intellectual coherence of educational research and thus also calls into question the academic stature of the faculty members who produce that research.

No Prestige for Practical Knowledge

Another related knowledge-based problem faces the education school. A good case can be made for the proposition that American education — particularly higher education — has long placed a greater emphasis on the exchange value of the educational experience (providing usable credentials that can be cashed in for a good job) than on its use value (providing usable knowledge). That is, what consumers have sought and universities have sold in the educational marketplace is not the content of the education received at the university (what the student actually learns there) but the form of this education (what the student can buy with a university degree).

One result of this commodification process is that universities have a strong incentive to promote research over teaching, for publications raise the visibility and prestige of the institution much more effectively than does instruction (which is less visible and more difficult to measure). And a prestigious faculty raises the exchange value of the university’s diploma, independently of whatever is learned in the process of acquiring this diploma. By relying heavily on its faculty’s high-status work in fields of hard knowledge, the university’s marketing effort does not leave an honored role for an education school that produces soft knowledge about practical problems.

A Losing Status, but a Winning Role?

What all of this suggests is that education schools are poorly positioned to play the university status game. They serve the wrong clientele and produce the wrong knowledge; they bear the mark of their modest origins and their traditionally weak programs. And yet they are pressured by everyone from their graduates’ employers to their university colleagues to stay the way they are, since they fulfill so many needs for so many constituencies.

But consider for a moment what would happen if we abandoned the status perspective in establishing the value of higher education. What if we focus instead on the social role of the education school rather than its social position in the academic firmament? What if we consider the possibility that education schools — toiling away in the dark basement of academic ignominy — in an odd way have actually been liberated by this condition from the constraints of academic status attainment? Is it possible that ed schools may have stumbled on a form of academic practice that could serve as a useful model for the rest of the university? What if the university followed this model and stopped selling its degrees on the basis of institutional prestige grounded in the production of abstract research and turned its focus on instruction in usable knowledge?

Though the university status game, with its reliance on raw credentialism — the pursuit of university degrees as a form of cultural currency that can be exchanged for social position — is not likely to go away soon, it is now under attack. Legislators, governors, business executives, and educational reformers are beginning to declare that indeed the emperor is wearing no clothes: that there is no necessary connection between university degrees and student knowledge or between professorial production and public benefit; that students need to learn something when they are in the university; that the content of what they learn should have some intrinsic value; that professors need to develop ideas that have a degree of practical significance; and that the whole university enterprise will have to justify the huge public and private investment it currently requires.

The market-based pattern of academic life has always had an element of the confidence game, since the whole structure depends on a network of interlocking beliefs that are tenuous at best: the belief that graduates of prestigious universities know more and can do more than other graduates; the belief that prestigious faculty make for a good university; and the belief that prestigious research makes for a good faculty. The problem is, of course, that when confidence in any of these beliefs is shaken, the whole structure can come tumbling down. And when it does, the only recourse is to rebuild on the basis of substance rather than reputation, demonstrations of competence rather than symbols of merit.

This dreaded moment is at hand. The fiscal crisis of the state, the growing political demand for accountability and utility, and the intensification of competition in higher education are all undermining the credibility of the current pattern of university life. Today’s relentless demand for lower taxes and reduced public services makes it hard for the university to justify a high level of public funding on the grounds of prestige alone. State governments are demanding that universities produce measurable beneficial outcomes for students, businesses, and other taxpaying sectors of the community. And, by withholding higher subsidies, states are throwing universities into a highly competitive situation in which they vie with one another to see who can attract the most tuition dollars and the most outside research grants, and who can keep the tightest control over internal costs.

In this kind of environment, education schools have a certain advantage over many other colleges and departments in the university. Unlike their competitors across campus, they offer traditionally low-cost programs designed explicitly to be useful, both to students and to the community. They give students practical preparation for and access to a large sector of employment opportunities. Their research focuses on an area about which Americans worry a great deal, and they offer consulting services and policy advice. In short, their teaching, research, and service activities are all potentially useful to students and community alike. How many colleges of arts and letters can say the same?

But before we get carried away with the counterintuitive notion that ed schools might serve as a model for a university under fire, we need to understand that these brow-beaten institutions will continue to gain little credit for their efforts to serve useful social purposes, in spite of the current political saliency of such efforts. One reason for that is the peculiar nature of the occupation — teaching — for which ed schools are obliged to prepare candidates. Another is the difficulty that faces any academic unit that tries to walk the border between theory and practice.

A Peculiar Kind of Professional

Teaching is an extraordinarily complex job. Researchers have estimated that the average teacher makes upward of 150 conscious instructional decisions during the course of the day, each of which has potentially significant consequences for the students involved. From the standpoint of public relations, however, the key difficulty is that, for the outsider, teaching looks all too easy. Its work is so visible, the skills required to do it seem so ordinary, and the knowledge it seeks to transmit is so generic. Students spend a long time observing teachers at work. If you figure that the average student spends 6 hours a day in school for 180 days a year over the course of 12 years, that means that a high school graduate will have logged about 13,000 hours watching teachers do their thing. No other social role (with the possible exception of parent) is so well known to the general public. And certainly no other form of paid employment is so well understood by prospective practitioners before they take their first day of formal professional education.

By comparison, consider other occupations that require professional preparation in the university. Before entering medical, law, or business school, students are lucky if they have spent a dozen hours in close observation of a doctor, lawyer, or businessperson at work. For these students, professional school provides an introduction to the mysteries of an arcane and remote field. But for prospective teachers, the education school seems to offer at best a gloss on a familiar topic and at worst an unnecessary hurdle for twelve-year apprentices who already know their stuff.

Not only have teacher candidates put in what one scholar calls a long “apprenticeship of observation,” but they have also noted during this apprenticeship that the skills a teacher requires are no big deal. For one thing, ordinary adult citizens already know the subject matter that elementary and secondary school teachers seek to pass along to their students — reading, writing, and math; basic information about history, science, and literature; and so on. Because there is nothing obscure about these materials, teaching seems to have nothing about it that can match the mystery and opaqueness of legal contracts, medical diagnoses, or business accounting.

Of course, this perception by the prospective teacher and the public about the skills involved in teaching leaves out the crucial problem of how a teacher goes about teaching ordinary subjects to particular students. Reading is one thing, but knowing how to teach reading is another matter altogether. Ed schools seek to fill this gap in knowledge by focusing on the pedagogy of teaching particular subjects to particular students, but they do so over the resistance of teacher candidates who believe they already know how to teach and a public that fails to see pedagogy as a meaningful skill.

Compounding this resistance to the notion that teachers have special pedagogical skills is the student’s general experience (at least in retrospect) that learning is not that hard — and, therefore, by extension, that teaching is not hard either. Unlike doctors and lawyers, who use their arcane expertise for the benefit of the client without passing along the expertise itself, teachers are in the business of giving away their expertise. Their goal is to empower the student to the point at which the teacher is no longer needed and the student can function effectively without outside help. The best teachers make learning seem easy and make their own role in the learning process seem marginal. As a result, it is easy to underestimate the difficulty of being a good teacher — and of preparing people to become good teachers.

Finally, the education school does not have exclusive rights to the subject matter that teachers teach. The only part of the teacher’s knowledge over which the ed school has some control is the knowledge about how to teach. Teachers learn about English, history, math, biology, music, and other subjects from the academic departments at the university in charge of these areas of knowledge. Yet, despite the university’s shared responsibility for preparing teachers, ed schools are held accountable for the quality of the teachers and other educators they produce, often taking the blame for the deficiencies of an inadequate university education.

The Border Between Theory and Practice

The intellectual problem facing American education schools is as daunting as the instructional problem, for the territory in which ed schools do research is the mine-strewn border between theory and practice. Traditionally, the university’s peculiar area of expertise has been theory, while the public school is a realm of practice.  In reality, the situation is more complicated, since neither institution can function without relying on both forms of knowledge. Education schools exist, in part, to provide a border crossing between these two countries, each with its own distinctive language and culture and its own peculiar social structure. When an ed school is working well, it presents a model of fluid interaction between university and school and encourages others on both sides of the divide to follow suit. The ideal is to encourage the development of teachers and other educators who can draw on theory to inform their instructional practice, while encouraging university professors to become practice-oriented theoreticians, able to draw on issues from practice in their theory building and to produce theories with potential use value.

In reality, no education school (or any other institution, for that matter) can come close to meeting this ideal. The tendency is to fall on one side of the border or the other — where life is more comfortable and the responsibilities more clear cut — rather than to hold the middle ground and retain the ability to work well in both domains.

But because of their location in the university and their identification with elementary and secondary schools, ed schools have had to keep working along the border. In the process, they draw unrelenting fire from both sides. The university views colleges of education as nothing but trade schools, which supply vocational training but no academic curriculum. Students, complaining that ed-school courses are too abstract and academic, demand more field experience and fewer course requirements. From one perspective, ed-school research is too soft, too applied, and totally lacking in academic rigor, while from another, it is impractical and irrelevant, serving a university agenda while being largely useless to the schools.

Of course, both sides may be right. After years of making and attending presentations at the annual meeting of the American Educational Research Association, I am willing to concede that much of the work produced by educational researchers is lacking in both intellectual merit and practical application. But I would also argue that there is something noble and necessary about the way that the denizens of ed schools continue their quest for a workable balance between theory and practice. If only others in the academy would try to accomplish a marriage of academic elegance and social impact.

A Model for Academe

So where does this leave us in thinking about the poor beleaguered ed school? And what lessons, if any, can be learned from its checkered history?

The genuine instructional and intellectual weakness of ed schools results from the way the schools did what was demanded of them, which, though understandable, was not exactly honorable. Even so, much of the scorn that has come down on the ed school stems from its lowly status rather than from any demonstrable deficiencies in the educational role it has played. But then institutional status has a circular quality about it, which means that predictions of high or low institutional quality become self-fulfilling.

In some ways, ed schools have been doing things right. They have wrestled vigorously (if not always to good effect) with the problems of public education, an area that is of deep concern to most citizens. This has meant tackling social problems of great complexity and practical importance, even though the university does not place much value on the production of this kind of messy, indeterminate, and applied knowledge.

Oddly enough, the rest of the university could learn a lot from the example of the ed school. The question, however, is whether others in the university will see the example of the ed school as positive or negative. If academics consider this story in light of the current political and fiscal climate, then the ed school could serve as a model for a way to meet growing public expectations for universities to teach things that students need to know and to generate knowledge that benefits the community.

But it seems more likely that academics will consider this story a cautionary tale about how risky and unrewarding such a strategy can be. After all, education schools have demonstrated that they are neither very successful at accomplishing the marriage of theory and practice nor well rewarded for trying. In fact, the odor of failure and disrespect continues to linger in the air around these institutions. In light of such considerations, academics are likely to feel more comfortable placing their chips in the university’s traditional confidence game, continuing to pursue academic status and to market educational credentials. And from this perspective, the example of the ed school is one they should avoid like the plague. 

Posted in Credentialing, Higher Education, Meritocracy

Rampell — It Takes a B.A. to Find a Job as a File Clerk

This blog post is a still salient 2013 article from the New York Times about credential inflation in the American job market. Turns out that if you want to be a file clerk or runner at a law firm these days, you’re going to need a four-year college degree. Here’s a link to the original.

It Takes a BA Photo

February 19, 2013

It Takes a B.A. to Find a Job as a File Clerk

By CATHERINE RAMPELL

ATLANTA — The college degree is becoming the new high school diploma: the new minimum requirement, albeit an expensive one, for getting even the lowest-level job.

Consider the 45-person law firm of Busch, Slipakoff & Schuh here in Atlanta, a place that has seen tremendous growth in the college-educated population. Like other employers across the country, the firm hires only people with a bachelor’s degree, even for jobs that do not require college-level skills.

This prerequisite applies to everyone, including the receptionist, paralegals, administrative assistants and file clerks. Even the office “runner” — the in-house courier who, for $10 an hour, ferries documents back and forth between the courthouse and the office — went to a four-year school.

“College graduates are just more career-oriented,” said Adam Slipakoff, the firm’s managing partner. “Going to college means they are making a real commitment to their futures. They’re not just looking for a paycheck.”

Economists have referred to this phenomenon as “degree inflation,” and it has been steadily infiltrating America’s job market. Across industries and geographic areas, many other jobs that didn’t use to require a diploma — positions like dental hygienist, cargo agent, clerk and claims adjuster — are increasingly requiring one, according to Burning Glass, a company that analyzes job ads from more than 20,000 online sources, including major job boards and small- to midsize-employer sites.

This up-credentialing is pushing the less educated even further down the food chain, and it helps explain why the unemployment rate for workers with no more than a high school diploma is more than twice that for workers with a bachelor’s degree: 8.1 percent versus 3.7 percent.

Some jobs, like those in supply chain management and logistics, have become more technical, and so require more advanced skills today than they did in the past. But more broadly, because so many people are going to college now, those who do not graduate are often assumed to be unambitious or less capable.

Plus, it’s a buyer’s market for employers.

“When you get 800 résumés for every job ad, you need to weed them out somehow,” said Suzanne Manzagol, executive recruiter at Cardinal Recruiting Group, which does headhunting for administrative positions at Busch, Slipakoff & Schuh and other firms in the Atlanta area.

Of all the metropolitan areas in the United States, Atlanta has had one of the largest inflows of college graduates in the last five years, according to an analysis of census data by William Frey, a demographer at the Brookings Institution. In 2012, 39 percent of job postings for secretaries and administrative assistants in the Atlanta metro area requested a bachelor’s degree, up from 28 percent in 2007, according to Burning Glass.

“When I started recruiting in ’06, you didn’t need a college degree, but there weren’t that many candidates,” Ms. Manzagol said.

Even if they are not exactly applying the knowledge they gained in their political science, finance and fashion marketing classes, the young graduates employed by Busch, Slipakoff & Schuh say they are grateful for even the rotest of rote office work they have been given.

“It sure beats washing cars,” said Landon Crider, 24, the firm’s soft-spoken runner.

He would know: he spent several years, while at Georgia State and in the months after graduation, scrubbing sedans at Enterprise Rent-a-Car. Before joining the law firm, he was turned down for a promotion to rental agent at Enterprise — a position that also required a bachelor’s degree — because the company said he didn’t have enough sales experience.

His college-educated colleagues had similarly limited opportunities, working at Ruby Tuesday or behind a retail counter while waiting for a better job to open up.

“I am over $100,000 in student loan debt right now,” said Megan Parker, who earns $37,000 as the firm’s receptionist. She graduated from the Art Institute of Atlanta in 2011 with a degree in fashion and retail management, and spent months waiting on “bridezillas” at a couture boutique, among other stores, while churning out office-job applications.

“I will probably never see the end of that bill, but I’m not really thinking about it right now,” she said. “You know, this is a really great place to work.”

The risk with hiring college graduates for jobs they are supremely overqualified for is, of course, that they will leave as soon as they find something better, particularly as the economy improves.

Mr. Slipakoff said his firm had little turnover, though, largely because of its rapid expansion. The company has grown to more than 30 lawyers from five in 2008, plus a support staff of about 15, and promotions have abounded.

“They expect you to grow, and they want you to grow,” said Ashley Atkinson, who graduated from Georgia Southern University in 2009 with a general studies degree. “You’re not stuck here under some glass ceiling.”

Within a year of being hired as a file clerk, around Halloween 2011, Ms. Atkinson was promoted twice to positions in marketing and office management. Mr. Crider, the runner, was given additional work last month, helping with copying and billing claims. He said he was taking the opportunity to learn more about the legal industry, since he plans to apply to law school next year.

The firm’s greatest success story is Laura Burnett, who in less than a year went from being a file clerk to being the firm’s paralegal for the litigation group. The partners were so impressed with her filing wizardry that they figured she could handle it.

“They gave me a raise, too,” said Ms. Burnett, a 2011 graduate of the University of West Georgia.

The typical paralegal position, which has traditionally offered a path to a well-paying job for less educated workers, requires no more than an associate degree, according to the Labor Department’s occupational handbook, but the job is still a step up from filing. Of the three daughters in her family, Ms. Burnett reckons that she has the best job. One sister, a fellow West Georgia graduate, is processing insurance claims; another, who dropped out of college, is one of the many degree-less young people who still cannot find work.

Besides the promotional pipelines it creates, setting a floor of college attainment also creates more office camaraderie, said Mr. Slipakoff, who handles most of the firm’s hiring and is especially partial to his fellow University of Florida graduates. There is a lot of trash-talking of each other’s college football teams, for example. And this year the office’s Christmas tree ornaments were a colorful menagerie of college mascots — Gators, Blue Devils, Yellow Jackets, Wolves, Eagles, Tigers, Panthers — in which just about every staffer’s school was represented.

“You know, if we had someone here with just a G.E.D. or something, I can see how they might feel slighted by the social atmosphere here,” he says. “There really is something sort of cohesive or binding about the fact that all of us went to college.”

Posted in Higher Education, History of education, Inequality, Meritocracy, Public Good, Uncategorized

How NOT to Defend the Private Research University

This post is a piece I published today in the Chronicle Review.  It’s about an issue that has been gnawing at me for years.  How can you justify the existence of institutions of the sort I taught at for the last two decades — rich private research universities?  These institutions obviously benefit their students and faculty, but what about the public as a whole?  Is there a public good they serve, and if so, what is it?

Here’s the answer I came up with.  These are elite institutions to the core.  Exclusivity is baked in.  By admitting only a small number of elite students, they serve to promote social inequality by providing grads with an exclusive private good, a credential with high exchange value.  But, in part because of this, they also produce valuable public goods — through the high-quality research and the advanced graduate training that only they can provide.

Open-access institutions can promote the social mobility that private research universities don’t, but they can’t provide the same degree of research and advanced training.  The paradox is this:  It’s in the public’s interest to preserve the elitism of these institutions.  See what you think.

Hoover Tower

How Not to Defend the Private Research University

David F. Labaree

In this populist era, private research universities are easy targets that reek of privilege and entitlement. It was no surprise, then, when the White House pressured Harvard to decline $8.6 million in Covid-19-relief funds, while Stanford, Yale, and Princeton all judiciously decided not to seek such aid. With tens of billions of endowment dollars each, they hardly seemed to deserve the money.

And yet these institutions have long received outsized public subsidies. The economist Richard Vedder estimated that in 2010, Princeton got the equivalent of $50,000 per student in federal and state benefits, while its similar-size public neighbor, the College of New Jersey, got just $2,000 per student. Federal subsidies to private colleges include research grants, which go disproportionately to elite institutions, as well as student loan and scholarship funds. As recipients of such largess, how can presidents of private research universities justify their institutions to the public?

Here’s an example of how not to do so. Not long after he assumed the presidency of Stanford in 2016, Marc Tessier-Lavigne made the rounds of faculty meetings on campus in order to introduce himself and talk about future plans for the university. When he came to a Graduate School of Education meeting that I attended, he told us his top priority was to increase access. Asked how he might accomplish this, he said that one proposal he was considering was to increase the size of the entering undergraduate class by 100 to 200 students.

The problem is this: Stanford admits about 4.3 percent of the candidates who apply to join its class of 1,700. Admitting a couple hundred additional students might raise the admit rate to 5 percent. Now that’s access. The deeper issue is that, for a private research university like Stanford, the essence of its institutional brand is its elitism. The inaccessibility is baked in.
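
A back-of-the-envelope check, using only the figures above (the size of the applicant pool is inferred from the published admit rate, so treat the numbers as rough):

$$
\text{applicants} \approx \frac{1{,}700}{0.043} \approx 39{,}500, \qquad \text{new admit rate} \approx \frac{1{,}700 + 200}{39{,}500} \approx 4.8\%.
$$

Even the most generous version of the proposal still turns away more than 95 percent of applicants.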

Raj Chetty’s social mobility data for Stanford show that 66 percent of its undergrads come from the top 20 percent by income, 52 percent from the top 10 percent, 17 percent from the top 1 percent, and just 4 percent from the bottom 20 percent. Only 12 percent of Stanford grads move up by two quintiles or more — it’s hard for a university to promote social mobility when the large majority of its students start at the top.

Compare that with the data for California State University at Los Angeles, where 12 percent of students are from the top quintile and 22 percent from the bottom quintile. Forty-seven percent of its graduates rise two or more income quintiles. Ten percent make it all the way from the bottom to the top quintile.

My point is that private research universities are elite institutions, and they shouldn’t pretend otherwise. Instead of preaching access and making a mountain out of the molehill of benefits they provide for the few poor students they enroll, they need to demonstrate how they benefit the public in other ways. This is a hard sell in our populist-minded democracy, and it requires acknowledging that the very exclusivity of these institutions serves the public good.

For starters, in making this case, we should embrace the emphasis on research production and graduate education and accept that providing instruction for undergraduates is only a small part of the overall mission. Typically these institutions have a much higher proportion of graduate students than large public universities oriented toward teaching (graduate students are 57 percent of the total at Stanford and just 8.5 percent in the California State University system).

Undergraduates may be able to get a high-quality education at private research universities, but there are plenty of other places where they could get the same or better, especially at liberal-arts colleges. Undergraduate education is not what makes these institutions distinctive. What does make them stand out are their professional schools and doctoral programs.

Private research universities are souped-up versions of their public counterparts, and in combination they exert an enormous impact on American life.

As of 2017, the Association of American Universities, a club consisting of the top 65 research universities, represented just 2 percent of all four-year colleges and 12 percent of all undergrads. And yet the group accounted for over 20 percent of all U.S. graduate students; 43 percent of all research doctorates; 68 percent of all postdocs; and 38 percent of all Nobel Prize winners. In addition, its graduates occupy the centers of power, including, by 2019, 64 of the Fortune 100 CEOs; 24 governors; and 268 members of Congress.

From 2014 to 2018, AAU institutions collectively produced 2.4 million publications, and their collective scholarship received 21.4 million citations. That research has an economic impact — these same institutions have established 22 research parks and, in 2018 alone, they produced over 4,800 patents, over 5,000 technology license agreements, and over 600 start-up companies.

Put all this together and it’s clear that research universities provide society with a stunning array of benefits. Some of these benefits accrue to individual entrepreneurs and investors, but the benefits for society as a whole are extraordinary. These universities drive widespread employment, technological advances that benefit consumers worldwide, and the improvement of public health (think of all the university researchers and medical schools advancing Covid-19 research efforts right now).

Besides their higher proportion of graduate students and lower student-faculty ratio, private research universities have other major advantages over publics. One is greater institutional autonomy. A private research university is governed by a board of laypersons who own the university, control its finances, and appoint its officers. Government can dictate how the university uses the public subsidies it gets (except tax subsidies), but otherwise it is free to operate as an independent actor in the academic market. This allows these institutions to pivot quickly to take advantage of opportunities for new programs of study, research areas, and sources of funding, largely independent of political influence, though they do face a fierce academic market full of other private colleges.

A 2010 study of universities in Europe and the U.S. by Caroline Hoxby and associates shows that this mix of institutional autonomy and competition is strongly associated with higher rankings in the world hierarchy of higher education. They find that every 1-percent increase in the share of the university budget that comes from government appropriations corresponds with a decrease in international ranking of 3.2 ranks, while each 1-percent increase in the share coming from competitive grants corresponds with an increase of 6.5 ranks. They also find that universities high in autonomy and competition produce more patents.
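
To make those coefficients concrete, here is a purely illustrative extrapolation (the notation is mine, and it assumes the two effects are linear and additive, which a regression estimate supports only locally):

$$
\Delta\text{rank} \approx 6.5\,\Delta g \;-\; 3.2\,\Delta a,
$$

where $\Delta a$ and $\Delta g$ are percentage-point changes in the budget shares coming from government appropriations and competitive grants, respectively, and a positive $\Delta\text{rank}$ means moving up. On this reading, a university that shifted 10 points of its budget from appropriations to competitive grants ($\Delta a = -10$, $\Delta g = +10$) would climb roughly $10 \times (3.2 + 6.5) \approx 97$ places.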

Another advantage the private research universities enjoy over their public counterparts, of course, is wealth. Stanford’s endowment is around $28 billion, and Berkeley’s is just under $5 billion; and because Stanford is so much smaller (16,000 versus 42,000 total students), the per-student gap is far wider than the headline numbers suggest. Stanford’s endowment per student dwarfs Berkeley’s. The result is that private universities have more research resources: better labs, libraries, and physical plant; higher faculty pay (e.g., $254,000 for full professors at Stanford, compared to $200,000 at Berkeley); more funding for grad students; and more staff support.
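
The per-student arithmetic, rounded from the figures above, makes the point:

$$
\frac{\$28\ \text{billion}}{16{,}000\ \text{students}} \approx \$1.75\ \text{million}, \qquad \frac{\$5\ \text{billion}}{42{,}000\ \text{students}} \approx \$0.12\ \text{million}.
$$

That is roughly a fifteen-to-one advantage per student before a single tuition dollar or research grant is counted.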

A central asset of private research universities is their small group of academically and socially elite undergraduate students. The academic skill of these students is an important draw for faculty, but their current and future wealth is particularly important for the institution. From a democratic perspective, this wealth is a negative. The student body’s heavy skew toward the top of the income scale is a sign of how these universities are not only failing to provide much social mobility but are in fact actively engaged in preserving social advantage. We need to be honest about this issue.

But there is a major upside. Undergraduates pay their own way (as do students in professional schools), but the advanced graduate students don’t — they get free tuition plus a stipend to cover living expenses, subsidized, both directly and indirectly, by undergrads. The direct subsidy comes from the high sticker price undergrads pay for tuition. Part of that revenue goes to financial aid for upper-middle-class families who still can’t afford full tuition, but the rest goes to subsidize grad students.

The key financial benefits from undergrads come after they graduate, when the donations start rolling in. The university generously admits these students (at the expense of many of their peers), provides them with an education and a credential that jump-starts their careers and papers over their privilege, and then harvests their gratitude over a lifetime. Look around any college campus — particularly at a private research university — and you will find that almost every building, bench, and professor bears the name of a grateful donor. And nearly all of the money comes from former undergrads or professional school students, since it is they, not the doctoral students, who go on to earn the big bucks.

There is, of course, a paradox. Perhaps the gross preservation of privilege these schools traffic in serves a broader public purpose. Perhaps providing a valuable private good for the few enables the institution to provide an even more valuable public good for the many. Besides, students who are denied admission to elite institutions are not being denied a college education and a chance to get ahead; they’re just being redirected. Instead of going to a private research university like Stanford or a public research university like Berkeley, many will attend a comprehensive university like San José State. Only the narrow metric of value employed at the pinnacle of the American academic meritocracy could construe this as a tragedy. San José State is a great institution, which accepts the majority of the students who apply and which sends a huge number of graduates to work in the nearby tech sector.

The economist Miguel Urquiola elaborates on this paradox in his book, Markets, Minds, and Money: Why America Leads the World in University Research (Harvard University Press, 2020), which describes how American universities came to dominate the academic world in the 20th century. The 2019 Shanghai Academic Ranking of World Universities shows that eight of the top 10 universities in the world are American, and seven of these are private.

Urquiola argues that the roots of American academe’s success can be found in its competitive marketplace. In most countries, universities are subsidiaries of the state, which controls their funding, defines their scope, and sets their policy. By contrast, American higher education has three defining characteristics: self-rule (institutions have autonomy to govern themselves); free entry (institutions can be started up by federal, state, or local governments or by individuals who acquire a corporate charter); and free scope (institutions can develop programs of research and study on their own initiative without undue governmental constraint).

The result is a radically unequal system of higher education, with extraordinary resources and capabilities concentrated in a few research universities at the top. Caroline Hoxby estimates that the most selective American research universities spend an average of $150,000 per student, 15 times as much as some poorer institutions.

As Urquiola explains, the competitive market structure puts a priority on identifying top research talent, concentrating this talent and the resources needed to support it in a small number of institutions, and motivating these researchers to ramp up their productivity. This concentration then makes it easy for major research-funding agencies, such as the National Institutes of Health, to identify the institutions that are best able to manage the research projects they want to support. And the nature of the research enterprise is such that, when markets concentrate minds and money, the social payoff is much greater than if they were dispersed more evenly.

Radical inequality in the higher-education system therefore produces outsized benefits for the public good. This, paradoxical as it may seem, is how we can truly justify the public investment in private research universities.

David Labaree is a professor emeritus at the Stanford Graduate School of Education.

Posted in Books, Higher Education, History of education, Professionalism

Nothing Succeeds Like Failure: The Sad History of American Business Schools

This post is a review I wrote of Steven Conn’s book, Nothing Succeeds Like Failure: The Sad History of American Business Schools, which will be coming out this summer in History of Education Quarterly.  Here’s a link to the proofs.

Conn Book Cover

Steven Conn. Nothing Succeeds Like Failure: The Sad History of American Business Schools. Ithaca, NY: Cornell University Press, 2019. 288 pp.

            In this book, historian Steven Conn has produced a gleeful roast of the American business school.  The whole story is in the title.  It goes something like this:  In the nineteenth century, proprietary business schools provided training for people (men) who wanted to go into business.  Then 1881 saw the founding of the Wharton School of Finance and Economy at the University of Pennsylvania, which was the first business school located in a university; others quickly followed.  Two forces converged to create this new type of educational enterprise.  Progressive reformers wanted to educate future business leaders who would manage corporations in the public interest instead of looting the public the way robber barons had done.  And corporate executives wanted to enhance their status and distinguish themselves from mere businessmen by requiring a college degree in business for the top-level positions.  This was both a class distinction (commercial schools would be just fine for the regular Joe) and an effort to redefine business as a profession.  As Conn aptly puts it, the driving force for both business employers and their prospective employees was “profession envy” (p. 37).  After all, why should doctors and lawyers enjoy professional standing and not businessmen?

            For reformers, the key contribution of B schools was to be a rigorous curriculum that would transform the business world.  For the students who attended these schools, however, the courses they took were beside the point.  They were looking for a pure credentialing effect, by acquiring a professional degree that would launch their careers in the top tiers of the business world.  As Conn shows, the latter perspective won.  He delights in recounting the repeated efforts by business associations and major foundations (especially Ford and Carnegie) to construct a serious curriculum for business schools.  All of these reforms, he says, failed miserably.  The business course of study retained a reputation for uncertain focus and academic mediocrity.  The continuing judgment by outsiders was that “U.S. business education was terrible” (p. 76).

            This is the “failure” in the book’s title.  B schools never succeeded in doing what they promised as educational institutions.  But, as he argues, this curricular failure did nothing to impede business schools’ organizational success.  Students flocked to them in the search for the key to the executive suite, and corporations used them to screen access to the top jobs.  This became especially true in the 1960s, when business schools moved upscale by introducing graduate programs, of which the most spectacular success was the MBA.  Nothing says professional like a graduate degree.  And nothing gives academic credibility to a professional program like establishing a mandate for business professors to carry out academic research just like their peers in the more prestigious professional schools.

            Conn says that instead of working effectively to improve the business world, B schools simply adopted the values of this world and dressed them up in professional garb.  By the end of the twentieth century, corporations had shed any pretense of working in the public interest and instead asserted shareholder value as their primary goal.  Business schools also jumped on this bandwagon.  One result of this, the author notes, was to reinforce the rapacity of the new business ethos, sending an increasing share of business graduates into the realms of finance and consulting, where business is less a process of producing valuable goods and services than a game of monopoly played with other people’s money.  His conclusion: “No other profession produces felons in quite such abundance” (p. 206).

            Another result of this evolution in B schools was that they came to infect the universities that gave them a home.  Business needed universities for status and credibility, and it thanked them by dragging them down to its own level.  He charges that universities are increasingly governed like private enterprises, with market-based incentives for colleges, departments, and individual faculty to produce income from tuition and research grants or else find themselves discarded like any other failed business or luckless worker.  It’s as if business schools have succeeded in redoing the university in their own image: “All the window dressing of academia without any of its substance” (p. 222).

            That’s quite an indictment, but is it sufficient for conviction?  I think not.  One problem is that, from the very beginning, the reader gets the distinct feeling that the fix is in.  The book opens with a scene in which the author’s colleagues in the history department come together for their monthly faculty meeting in a room filled with a random collection of threadbare couches and chairs.  All of it came from the business school across campus when it bought new furniture.  This scene is familiar to a lot of faculty in the less privileged departments on any campus, where the distinction between the top tier and the rest is all too apparent.  In my school of education, we’re accustomed to our old, dingy barn of a building, all too aware of the elegant surroundings the business school enjoys in a brand new campus paid for by Phil Knight of swoosh fame.

But this entry point to the book signals a tone that tends to undermine the author’s argument throughout the book.  It’s hard not to read this book as a polemic of resentment toward the nouveau riche — a humanist railing against the money-grubbers across campus who are despoiling the sanctity of academic life.  This reading is not fair, since a lot of the author’s critique of business schools is correct; but it made me squirm a bit as I worked through the text.  It also made the argument feel a little too inevitable.  Read the title and the opening page and you already have a good idea of what is going to follow.  Then you see that the first chapter is about the Wharton School and you just know where you’re headed.  And sure enough, in the last chapter the author introduces Wharton’s most notorious graduate, Donald Trump.  That second shoe took two hundred pages to drop, but the drop was inevitable. 

In addition, by focusing relentlessly on the “sad history of American business schools,” Conn is unable to put this account within the larger context of the history of US higher education.  For one thing, business didn’t introduce the market to higher ed; it was there from day one.  Our current system emerged in the early nineteenth century, when a proliferation of undistinguished colleges popped up across the US, especially in small towns on the expanding frontier.  These were private enterprises with corporate charters and no reliable source of funding from either church or state.  They often emerged more as efforts to sell land (this is a college town, so buy here) or plant the flag of a religious denomination than to advance higher learning.  And they had to hustle to survive in a glutted market, so they became adept at mining student tuition and cultivating donors.  Business schools are building on a long tradition of playing to the market.  They just aren’t as concerned about covering their tracks as the rest of us.

David F. Labaree

Stanford University

Posted in Higher Education, Inequality, Meritocracy

Markovits: Schooling in the Age of Human Capital

Today I’m posting a wonderful new essay by Daniel Markovits, just published in the latest issue of Hedgehog Review, about the social consequences of the new meritocracy.  Here’s a link to the original.  As you may recall, last fall I posted a piece about his book, The Meritocracy Trap.

In this essay, Markovits extends his analysis of the role that universities play in fostering a new and particularly dangerous kind of wealth inequality — one based on the returns on human capital instead of the returns on economic capital.  For all of history until the late 20th century, wealth meant ownership of land, stocks, bonds, businesses, or piles of gold.  The income it produced came to you simply for being the owner, whether or not you accumulated the wealth yourself.  One of the pleasures of being rich was the luxury of remaining idle. 

But the meritocracy has established a new path to wealth — based on the university-credentialed skills you accumulate early in your life and then cash in for a high-paying job as an executive or professional.  Like the average wage earner, you work for a living and only retain the income if you keep working.  Unlike the average worker, however, you earn an extraordinary amount of money.  Markovits estimates that in the 1960s the top one percent derived only about a third of its income from its own labor; now the share is above two-thirds.  The meritocrats are the new rich.  And universities are the route to attaining these riches.

At one level, this is a fairer system by far than the old one based on simple inheritance and coupon clipping.  These people work for a living, and they work hard — longer hours than most people in the work force.  They can only attain their lucrative positions by proving their worth in the educational system, crowned by college and professional degrees.  These are the people who get the best grades and the best test scores and who qualify for entrance into and graduation from the best universities.  This provides the new form of inequality with a thick veneer of meritocratic legitimacy.  

As Markovits points out below, however, the problem is that the entire meritocratic enterprise is not directed toward identifying and certifying excellence but instead toward creating degrees of superiority.  

Excellence is a threshold concept, not a rank concept. It applies as soon as a certain level of ability or accomplishment is reached, and while it can make sense to say that one person is, in some respect, more excellent than another, this does not eliminate (or even undermine) the other’s excellence. Moreover, excellence is a substantive rather than purely formal ideal. Excellence requires not just capacity or achievement, but rather capacity and achievement realized at something worthwhile. 

The degrees universities produce do not certify excellence but instead define the degree-holder’s position in line for the very best jobs.  They are positional goods, whose value is in qualifying you for a spot as close to the front of the queue as possible.  Thus all of the familiar metrics for showing where you are in line: SAT, LSAT, US News college rank, college admission rate.  Since everyone knows this is how the game is played, everyone wants and needs to get the diploma that grants the highest degree of superiority in the race for position.  Being really qualified for the job is meaningless if your degree doesn’t get you access to it.  As a result, Markovits notes, you can never get enough education to ensure your success in the meritocratic rat race.

“The value to me of my education,” the economist Fred Hirsch once observed, “depends not only on how much I have but also on how much the man ahead of me in the job line has.”32 This remains so, moreover, regardless of how much education the person ahead of me and I both possess. Every meritocratic success therefore necessarily breeds a flip side of failure—the investments made by the rich exclude the rest, and also those among the rich who don’t quite keep up. This means that while the rich get sated on most goods (there is only so much caviar a person can eat), they cannot get sated on schooling.

Parents with lots of human capital have a huge advantage in guiding their children through the educational system, but this only breeds insecurity.  They know that they’re competing with other families with the same advantages and that only a few will gain a place at the front of the line where the most lucrative positions are allocated.  Excellence is attainable, but superiority is endlessly elusive.

I hope you find this article as illuminating as I do.

Businessman running on hamster wheel

Schooling in the Age of Human Capital

Metrics do not and, in fact, cannot measure any intelligible conception of excellence at all.

Daniel Markovits

The recent “Varsity Blues” scandal brought corruption at American universities into the public eye. Rich people bought fraudulent test scores and bribed school officials in order to get their children into top colleges. Public outrage spread beyond the scandal’s criminal face, to the legacy preferences by which universities legally favor the privileged children of their own graduates. After all, the actions in the Varsity Blues case became criminal only because the universities themselves failed to capture the proceeds of their own corruption. The outrage was natural and warranted. There is literally nothing to say in favor of a system that allows the rich to circumvent the meritocratic competition that governs college admissions for everyone else. But the outrage also distracts from and even disguises a broader and deeper corruption in American education, which arises not from betraying meritocratic ideals but, rather, from pursuing them. Meritocracy itself casts a dark shadow over education, biasing decisions about who gets it, distorting the institutions that deliver it, and corrupting the very idea of educational excellence.

The methods of meritocratic schooling drive the corruption forward. Scores on the SAT (formally called the Scholastic Assessment Test), grade point averages (GPAs), and college rankings—the metrics that organize and even tyrannize meritocratic education in the United States today—are manifestly absurd. It’s not just that SAT scores, GPAs, and rankings are culturally biased or that they lack predictive validity. These familiar complaints have a point, but they all proceed from the fanciful belief that merit may be measured and that meritocracy, if properly administered, supports opportunity for all and thereby makes unequal outcomes okay. The familiar objections argue only that the metrics are poorly designed and so miss their meritocratic marks. In some instances, as when SAT scores are criticized for poorly predicting college GPAs, the criticisms simply prefer one measure over another. But the real root of the trouble with SATs, GPAs, and rankings is deeper and different: These metrics do not and, in fact, cannot measure any intelligible conception of excellence at all. And really appreciating this objection requires stepping outside meritocracy’s conventional imaginative frame.

A Transparent Absurdity

Colleges and universities quantify applicants’ merits using SAT scores and GPAs. But as a measure of anything that is itself worthwhile—of any meaningful achievement or genuine human excellence—an SAT score or a GPA is not so much imprecise and incomplete, or biased and unfair, as simply nonsensical. Even if individual questions on the test identify real skills, and even if grades on individual assignments or courses reflect real accomplishments, the sums and averages that compose overall SAT scores and GPAs fail to track any credible concept of ability or accomplishment. What sense does it make to treat a person who uses language exceptionally vividly and creatively but cannot identify the core facts in a descriptive passage as possessing, overall, average linguistic aptitude or accomplishment? It is more absurd still to treat someone who reads and writes fantastically well but is terrible at mathematics as, in any way, an ordinary or middling student. But SAT scores and GPAs push inexorably toward both conclusions. Again, even if one sets aside doubts about whether individual skills can be measured by multiple-choice questions or whether particular course work can be accurately graded, these metrics create literally mindless averages—totally without grounding in any conception of how to aggregate skills or accomplishments into an all-things-considered sum, or even any argument that these things are commensurable or that aggregating them is intelligible.

Applicants, for their parts, measure colleges and universities by rankings, including most prominently those published by US News & World Report. These rankings are, if anything, even less intelligible than the metrics used to evaluate applicants. For colleges, for example, the rankings aggregate many factors: graduation and retention rates (both in fact and as compared to US News’s expectations), an idiosyncratic measure of “social mobility,” class size, faculty salaries, faculty education, student-faculty ratio, share of faculty who are full-time, expert opinion, academic spending per student, student standardized test scores, student rank in high school class, and alumni giving.1 Once again, even supposing that these factors reflect particular educational excellences and that the data US News gathers measure the factors, the aggregate that it builds by combining them, using weights specified to within one-tenth of one percent, remains incoherent. Berea College, for example, enrolls students who skew more toward first-generation college graduates than Princeton University, and in this way adds more to the education of each student (especially compared to her likely alternatives), but it has a less renowned, scholarly, and highly paid faculty. What possible conception of “excellence” can underwrite an all-things-considered judgment of which is “better”? US News boasts that “our methodology is the product of years of research.”2 But the basic question of what this research is studying—of what excellence this method of deciding which colleges and universities are “best” could conceivably measure, or whether any such excellence is even intelligible—remains entirely unaddressed.

In spite of their patent absurdities, the metrics deployed by both sides of the college admissions complex dominate how students and colleges are matched: Schools use test scores and grades to decide whom to admit, and applicants use rankings to decide where to enroll. The five top-ranked law schools, for example, enroll roughly two-thirds of applicants with Law School Admission Test (LSAT) scores in the ninety-ninth percentile.3 And although law schools hold precise recruitment data close, one can reasonably estimate that of the roughly 2,000 people admitted to the top five law schools each year, no more than five (which is to say effectively none) attend a law school outside the top ten.4 Law school is likely an extreme case. But instead of being outlandish, it lies at the end of a continuum and emphasizes patterns that repeat themselves (less acutely) across American higher education. Metrics that are literally nonsense drive an incredibly efficient two-way matching system.

When a transparent absurdity dominates a prominent social field, something profound lies beneath. And the metrics that tyrannize university life rise out of deep waters indeed. Elites increasingly owe their income and status not to inherited physical or financial capital but to their own skill, or human capital, acquired through intensive and even extravagant training. Colleges and universities provide the training that builds human capital, and going to college (and to the right college) therefore substantially determines who gets ahead. The practices that match students and colleges must answer the need to legitimate the inequalities this human capitalism produces, by justifying advantage on meritocratic grounds. Even when they are nonsense, numbers provide legitimacy in a scientific age. The numbers that tyrannize university life in America today, and the deformations that education suffers as a result, are therefore the inevitable pathologies of schooling in an age of human capitalism.

The Superordinate Working Class

In 2018, the average CEO of an S&P 500 company took home about $14.5 million in total compensation,5 and in a recent year, the five highest-paid employees of the S&P 1500 firms (7,500 workers overall) captured total pay equal to nearly 10 percent of the S&P 1500’s collective profits.6 In finance, twenty-five hedge fund managers took home more than $100 million in 2016,7 for example, while the average portfolio manager at a mid-sized hedge fund was reported to have made more than $2 million in 2014.8 The Office of the New York State Comptroller reported in 2018 that the average securities industry worker in New York City made more than $400,000.9 Meanwhile, the most profitable law firm in America yields profits per partner in excess of $5 million per year, and more than seventy firms generate more than $1 million per partner annually.10 Anecdotes accumulate to become data. Taken together, employees at the vice-presidential level or higher at S&P 1500 companies, professional finance workers, top management consultants, top lawyers, and specialist medical doctors account for more than half of the richest 1 percent of households in the United States.11

These and other similar jobs enable a substantial subset of the most elaborately educated people to capture enormous incomes by mixing their accumulated human capital with their contemporaneous labor. This group now composes a superordinate working class. A cautious accounting attributes over half of the 1 percent’s income to these and other kinds of labor,12 while my own more complete estimate puts the share above two-thirds.13 Moreover—and notwithstanding capital’s rising domination over ordinary workers—roughly three-quarters of the increase in the top 1 percent’s share of national income overall stems from the rise of this superordinate working class, in particular a shift of income away from middle-class workers and in favor of elite ones. The result is a society in which the greatest source of wealth, income, and status (including for the mass affluent) is the skill and training—the human capital—of free workers.

The rise of human capitalism has transformed the colleges and universities that create human capital. Two facets of the transformation matter especially. First, education has acquired an importance it never had before. Until only a few generations ago, education and the skills it produces had little economic value. Even generously calculated, the top 0.1 and the top 1 percent of the income distribution in 1960 derived only about one-sixth and one-third of their incomes, respectively, from labor, which is to say by working their own human capital.14 Moreover, schools and universities did not dominate production of such human capital as there was; both blue- and white-collar workers received substantial workplace training, throughout their careers. In Detroit, for example, young men might quit childhood jobs on their eighteenth birthdays and present themselves to a Big Three automaker, to take up unionized, lifetime jobs that would (if they were capable and hard working) eventually make them into tool-and-die-makers, earning the equivalent of nearly $100,000 per year—all with no more than a high school education.15 And in New York, a college graduate joining junior management at IBM could expect to spend four years (or 10 percent of his career) in full-time, fully paid workplace training as he ascended the corporate ladder.16 Small wonder, then, that the college wage premium was modest at midcentury, and that the graduate-school wage premium (captured above what was earned by workers with just a bachelor’s degree) was more modest still.17 Elite schools and colleges, in this system, were sites of social prestige rather than economic production. Education had little direct economic payoff; rather, it followed, and merely marked, hierarchies that were established and sustained on other grounds. The critics of the old order were clear eyed about this. Kingman Brewster—the president who did more than anyone to modernize Yale University—called the college he inherited “a finishing school on Long Island Sound.”18

But today, education has become itself a source of income, status, and power for a meritocratic elite whose wealth consists, principally, in its own human capital. The college wage premium has risen dramatically, so that the present discounted value of a bachelor’s degree (net of tuition) is nearly three times greater today than in 1965.19 The postgraduate wage premium has risen more steeply still, and the median worker with a postgraduate degree now makes well over twice the wage of the median worker with a high school diploma only, and about 1.5 times the wage of the median worker with a four-year degree only. College and postcollege degrees also protect against unemployment, so that the effects of education on lifetime earnings are more dramatic still. Just one in seventy-five workers who have never finished high school, just one in forty workers with a high school education only, and just one in six workers with a bachelor’s degree enjoy lifetime earnings equal only to those of the median professional school graduate.20

Graduates of the top colleges and universities capture yet higher incomes, enjoying more than double the income boost of an average four-year degree, with even greater gains at the very top. The highest-paid 10 percent of Harvard College graduates make an average salary of $250,000 just six years out,21 while a recent study of Harvard Law School graduates ten years out reported a median annual income (among male graduates) of nearly $400,000.22 Overall, graduates of top-ten law schools make on average a quarter more than graduates of schools ranked eleventh to twentieth, and a half more than graduates of schools ranked twenty-first to one-hundredth;23 and 96 percent of the partners at the $5 million-a-year law firm graduated from a top-ten law school.24 More broadly, a recent survey reports—incredibly—that nearly 50 percent of America’s corporate leaders, 60 percent of its financial leaders, and 50 percent of its highest government officials attended only twelve universities.25 This makes elite education one of the best investments money can buy. Purely economic rates of return have been estimated at 13 to 14 percent for college and as high as 30 percent for law school, or more than double the rate of return provided by the stock market.26 Meanwhile, the educational alternatives to college have all but disappeared. According to a recent study, the average US firm invests less than 2 percent of its payroll budget on training.27

A second transformation follows from the first. Education, especially at top-tier colleges and universities, is now distributed in very different ways from before. Colleges, especially elite ones, have never welcomed poor or even middle-class people in large numbers. But those schools once chose students based on effectively immutable criteria—breeding, race, gender—so that while college was exclusive, it was nevertheless (at least among those who qualified) effectively nonrivalrous and not competitive. Even the very top schools routinely accepted perhaps a third of their applicants, and some took much greater shares still.28 As recently as 1995, the University of Chicago admitted 71 percent of those who applied. These rates naturally produced an application process that appears almost preposterously casual today. A midcentury graduate of Yale Law School, for example, recollects that when he met the dean of admissions at a college fair, he was told, based only on their conversation, “You’ll get in if you apply.” An easy confidence suffused the very language of going to college, as the sons of wealthy families did not apply widely but rather “put themselves down for” whatever colleges their fathers had attended. The game was rigged, and the stakes were small.

But today, education is parceled out through an enormous competition that becomes most intense at the very top. Even as poor and even middle-class children have virtually no chance at succeeding, rich children (no matter how privileged) have no guarantee of success. Colleges today—especially the top ones—are therefore both extremely exclusive and ruthlessly competitive. In a recent year, for example, children who had at least one parent with a graduate degree had, statistically, a 150 times greater chance of achieving the Ivy League median on their verbal SAT than children neither of whose parents had graduated high school.29 Small wonder, then, that the Ivy Plus colleges now enroll more students from households in the top 1 percent of the income distribution than from the entire bottom half.30 This makes these schools more economically exclusive than even notorious bastions of the old aristocracy such as Oxford and Cambridge. At the same time, while being born to privilege is nearly a necessary condition for admission to a really elite American university, it is far from sufficient. Last year, the University of Chicago admitted just six percent of applicants, and Stanford fewer than five percent.

These admissions rates mean that any significant failure—any visible blot on a record—effectively excludes an applicant. Rich families respond to this fact by investing almost unimaginable resources in getting their children perfect records. Prestigious private preschools in New York City now charge $30,000 per year to educate four-and-five-year-olds, and they still get ten or twenty applications for every space. These schools feed into elite elementary schools, which feed into elite high schools that charge $50,000 per year (and, on account of their endowments, spend even more). Rich families supplement all this schooling with private tutors who can charge over $1,000 per hour. If a typical household from the richest 1 percent took the difference between the money devoted to educating its children and what is spent on a typical middle-class education, and invested these sums in the S&P 500 to give to the rich children as bequests on the deaths of their parents, this would amount to a traditional inheritance of more than $10 million per child.31 This meritocratic inheritance effectively excludes working- and middle-class children from elite education, income, and status.

These expenditures are almost as inevitable as they are exorbitant. When one set of institutions dominates the production of wealth and status in a society, the privileged few set out to monopolize places, and the pressure to gain admission becomes enormous. Human capitalism, moreover, makes schooling a positional good. “The value to me of my education,” the economist Fred Hirsch once observed, “depends not only on how much I have but also on how much the man ahead of me in the job line has.”32 This remains so, moreover, regardless of how much education the person ahead of me and I both possess. Every meritocratic success therefore necessarily breeds a flip side of failure—the investments made by the rich exclude the rest, and also those among the rich who don’t quite keep up. This means that while the rich get sated on most goods (there is only so much caviar a person can eat), they cannot get sated on schooling. Finally, rather than pick schools based on family tradition, applicants make deliberate choices about where to apply, and almost always attend the highest-ranked school that admits them, as when effectively nobody admitted to a top-five law school attends a school outside the top ten.

In these ways, human capitalism creates an educational competition in which the stakes are immense and everyone competes for the same few top prizes. Whereas aristocracies perpetuated elites by birthright, meritocratic inequality establishes school and especially college admissions committees as de facto social planners, choosing the next generation of meritocrats. Education becomes a powerful mechanism for structural exclusion—the dominant dynastic technology of our enormously unequal age. This places extreme pressure on the schools, and especially admissions committees, which must decide which people to privilege, using what criteria, and to what ends.

Bohr’s Lucky Horseshoe

What happens to schools when the degrees they grant grow so valuable that the demand for them outstrips their supply, and when admissions decisions make or break applicants’ life plans and determine who gets ahead in society? How have schools and colleges responded to their admissions decisions’ raised stakes? And what has the rise of human capital, its dominant role in wealth (even among the rich), done to the nature of education itself—to education’s aims, and to the standards by which it determines success? Measurement, and the tyranny of numbers, turns out to play a central part in the answer to all these questions—and for reasons not just shallow but deep. The manifestly absurd metrics that dominate university life are direct consequences of the role that schooling plays in our present economic and social order.

That which is measured becomes important. But at the same time, that which is important must be measured—and on a scale that allows for the sort of confident and exact judgments and comparisons that numbers yield. In a technocratic age—suspicious (for good reasons as well as bad) of humanist, interpretive, and therefore discretionary judgments about value—the demand for certainty and precision becomes irresistible when the stakes get high enough. The rise of human capitalism therefore makes it essential to construct metrics that schools and colleges might use to assess human capital and to compare the people who possess it, in order to determine whose human capital should receive additional investments.

The problem becomes more pressing still because education is lumped into standardized units called degrees, so that schools (especially the most exclusive ones, which have no part-time students or “honors colleges”) cannot hedge their bets by offering applicants varying quantities or qualities of training, but must instead make a binary choice to accept or to reject, full stop. The metrics that admissions offices use must therefore be able to aggregate across dimensions of skill and ability, in order to construct a single, all-things-considered measure of ability and accomplishment capable of supporting a “yes” or a “no.” This task becomes especially demanding in a world that has rejected the unity of the virtues and insists instead that people and institutions may excel in some ways even as they fail in others. GPAs and standardized test scores, especially on the SAT, as well as university rankings as provided by US News & World Report, provide the required metrics—comprehensive and complete orderings that can make fine distinctions that all who accept the metrics must agree on. Averages, scores, and rankings operate as prices do in economic markets, corralling judgments made unruly by normative pluralism and fragmentation into a single, public, shared measure of value.

These metrics—especially the SAT—are of course themselves disputed, sometimes vigorously. Certainly, they rest on arbitrary assumptions, and precision comes only at the cost of simply ignoring anything intractable, no matter how important. Nevertheless, even challenges to particular measures of human capital often accept the general approach that lies behind them all, and therefore give away the evaluative game—as (once again) when the SAT is criticized for lacking much power to predict GPAs. And even when they are contested, metrics like the GPA and SAT suppress ambiguities that they cannot eliminate, by pushing contestation into the background, far away from the individual cases and the evaluation of particular applicants. We may disagree about the validity of the SAT, and indeed harbor doubts about the test’s value, but we will nevertheless all agree on who has the highest score. In this sense, GPAs and SATs are like Niels Bohr’s lucky horseshoe—they work even if you don’t believe in them. In a world in which people cannot possibly agree on any underlying account of virtue or success, but literally everything turns on how success is measured, numerical scores allow admissions committees to legitimate their choices of whom to admit.

The early meritocrats understood this. At Harvard, James Bryant Conant, president from 1933 to 1953, introduced the SAT into college admissions with the specific purpose of identifying deserving applicants from outside the aristocratic elite. (James Tobin, who would serve on President John F. Kennedy’s Council of Economic Advisers and win a Nobel Prize, was an early success story.33) Yale came to meritocracy later, but (perhaps for this very reason) embraced the logic of numbers-based meritocratic evaluation more openly and explicitly. Kingman Brewster, president from 1963 to 1977, called himself an “intellectual investment banker” and encouraged his admissions office to compose Yale’s class with the aim of admitting the students who would maximize the human capital that his investments would build. R. Inslee “Inky” Clark, Brewster’s dean of undergraduate admissions from 1963 to 1969, called his selection process “talent searching” and equated talent with “who will benefit most from studying at Yale.” The new administration, moreover, deployed test scores and GPAs not just affirmatively, to find overlooked talent, but also negatively, to break the old aristocratic elite’s monopoly over places at top colleges. Clark called the old, breeding-based elite “ingrown,” and aggressively turned Yale against aristocratic prep schools. In 1968, for example, when Harvard still accepted 46 percent of applicants from Choate and Princeton took 57 percent, Yale accepted only 18 percent.34

The meritocrats aimed by these means to build a new leadership class. The old guard recognized the threat and resisted, both privately and even publicly. Brewster’s predecessor had scorned Harvard’s meritocratic admissions, which he said would favor the “beetle-browed, highly specialized intellectual.” When Brewster’s revolution was presented to the Yale Corporation, one member objected, “You’re talking about Jews and public-school graduates as leaders. Look around you at this table. These are America’s leaders. There are no Jews here. There are no public-school graduates here.” And William F. Buckley lamented that Brewster’s Yale would prefer “a Mexican-American from El Paso High…[over]…Jonathan Edwards the Sixteenth from Saint Paul’s School.” Just so, the meritocrats replied.35

They added that their meritocratic approach to building an elite—because numbers measure ability and, just as important, block overt and direct appeals to breeding—would launder the hierarchy that it produced. Prior inequalities—especially aristocratic ones—were prejudicial, malign, and offensive. But meritocracy purports to be wholesome: backed by objective numbers, open to all comers, and resolutely focused on earned advantage. Indeed, meritocracy aspires to redeem the very idea of inequality—to make unequal outcomes compatible with equal opportunities, and to render hierarchy acceptable to a democratic age. In this way, the early meritocrats combined stark criticism of the present with a profound optimism about the future.

The Soldier, the Artist, and the Financier

The meritocrats’ optimism fell, if not at once, then soon. And it fell at hurdles erected by their own reliance on numbers. The metrics that the meritocrats constructed, and that now dominate education, turned out to be not just absurd but destructive.

To begin with, numerical metrics of accomplishment naturally inflame ruthlessly single-minded competition. There is no general way to rank learning, or creativity, or achievement—merit—directly. There is no way to say, all things considered, who has better skills, wider knowledge, or deeper understanding, much less who has accomplished more overall. People value different things for different reasons. We disagree with one another about what is most valuable: the entrepreneur’s resourcefulness, the doctor’s caring, the writer’s insight, or the statesperson’s wisdom. Moreover, each of us is unsure, in our own judgments, about how best to balance these values when they conflict—unsure, to pick a famous example, about whether to pursue a life of politics or of reflection, to pursue the executory or the deliberative virtues. The agreement and repose needed to sustain a stable direct ranking simply can’t be had. This is not all bad: Ineliminable uncertainties about value diffuse and therefore dampen our competition to achieve. The soldier and the artist simply do not compete with each other, and neither competes with the financier.

By contrast, numerical metrics—again including especially GPAs and SAT scores—aggregate across incommensurables to produce a single, complete ranking of merit. Indeed, producing this ranking is part of such metrics’ point—the thing that makes them useful to admissions offices. But now, competition whose natural state is disorganized and diffuse becomes highly organized and narrowly focused. Aspiring businesspeople, doctors, writers, and statespeople all will benefit, in reaching their professional goals, from high SAT scores and GPAs, and, accordingly, they all compete to join the top ranks. The numbers on which admissions offices rely to validate their selections therefore create competition and hierarchy where the incommensurability of value once made rank unintelligible. SATs and GPAs do to human capital what prices earlier did to physical or financial capital—they make it possible to say, all things considered, who has more, who is richest. Unreasoning accumulation and open inequality follow inexorably.

The competition that the numerical metrics create, moreover, aims at foolish and indeed fruitless ambitions. SAT scores and GPAs, once again, do not measure any intelligible excellences, and high scores and averages therefore have no value in themselves. At best, pursuing them wastes effort and attention and almost surely deforms schooling, by diverting effort and attention from the many genuine excellences that education can produce. This is even more vividly true on the side of colleges and universities, with respect to the wasteful and even destructive contortions they put themselves through in pursuit of higher US News rankings.

The numbers-based distortions induced by students’ pursuit of higher test scores and institutions’ pursuit of higher rankings both may be given a natural framing in terms of the distinction between excellence and superiority. Excellence is a threshold concept, not a rank concept. It applies as soon as a certain level of ability or accomplishment is reached, and while it can make sense to say that one person is, in some respect, more excellent than another, this does not eliminate (or even undermine) the other’s excellence. Moreover, excellence is a substantive rather than purely formal ideal. Excellence requires not just capacity or achievement, but rather capacity and achievement realized at something worthwhile. It is a moral error to speak of excellence in corruption, wickedness, or depravity. Superiority, on the other hand, is opposite in both respects. It is a rank—rather than a threshold—concept, and one person’s superior accomplishment undoes rather than just exceeds the superiority of those whom she surpasses. In addition, superiority is purely formal rather than substantive. It makes perfect sense to speak in terms of superiority at activities that are worthless or even harmful.

When the numbers that rule over the processes that match students and schools under human capitalism subject education to domination by a single and profoundly mistaken conception of merit, they depose excellence, installing in its place a merciless quest for superiority. Human capitalism distorts schooling in much the same way that financialization distorts for-profit sectors of the real economy. Once, firms committed to particular products (General Motors to cars, IBM to computers) might view profits as a happy side-effect of running their businesses well. But in finance, whose only product is profit, the distinction between success and profitability becomes literally unintelligible, and financialization therefore subjects the broader economy to a tyranny of profit. Similarly, flourishing schools and universities will view their reputations and status as salutary side-effects of one or another form of academic excellence. But human capitalism shuts schools off from these conceptions of excellence and enslaves them to the pursuit of superiority. Schooling in an age of human capitalism thus becomes subjected to a tyranny of SATs, GPAs, and college rankings.

All these consequences, moreover, are neither accidents nor the result of individual vices: the shallowness of applicants or the vanity of universities. Rather, a social and economic hierarchy based on human capital creates a pitiless competition for access to the meritocratic education that builds human capital. Working- and middle-class children lack the resources to compete in the educational race and so are excluded not just from income and status but from meaningful opportunity. Rich children, meanwhile, are run ragged in a competition to achieve an intrinsically meaningless superiority that devours even those whom it appears to favor. And the colleges and universities that provide training, and administer the competition, are deformed in ways that betray any plausible conception of academic excellence. The Varsity Blues scandal exposed this corruption alongside the frauds that conventional responses emphasized. Why would intelligent and otherwise prudent people—one of the culprits was cochair of a major global law firm—pursue such a ham-fisted scheme other than from a desperate fear of losing meritocratic caste? No one escapes the meritocracy trap.

The only way out—for schools as well as for students—involves structural reforms that extend well beyond education, to reach economic and social inequalities writ large. But although reforms cannot end with schools, colleges, and universities, they might begin there. In particular, the familiar hope that perfecting meritocracy—making standardized tests less biased and more accurate, and making rankings more comprehensive—might fix the problem is simply a fantasy; such refinements would only launder social and economic inequalities more effectively without diminishing them. Colleges and universities cannot redeem their educational souls while retaining their exclusivity. Instead, elite schools must become, simply, less elite.

If it mattered less where people got educated, applicants could pursue different paths for different reasons. And schools and colleges, freed from the burden of allocating life chances, could abandon their craving for superiority and instead pursue scholarly insight, practical innovation, community engagement, and a thousand other incommensurable virtues. Along the way, by freeing themselves from superiority’s jealous grasp, universities might redeem the very idea of excellence.

 

Posted in Credentialing, Higher Education, History of education, Sociology, Uncategorized

How Credentialing Theory Explains the Extraordinary Growth in US Higher Ed in the 19th Century

Today I am posting a piece I wrote in 1995. It was the foreword to a book by David K. Brown, Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism.  

I have long been interested in credentialing theory, but this is the only place where I ever tried to spell out in detail how the theory works.  For this purpose, I draw on the case of the rapid expansion of the US system of higher education in the 19th century and its transformation at the end of the century, which is the focus of Brown’s book.  Here’s a link to a pdf of the original. 

The case is particularly fruitful for demonstrating the value of credentialing theory, because the most prominent theory of educational development simply can't make sense of it. Functionalist theory sees the emergence of educational systems as part of the process of modernization. As societies become more complex, with a greater division of labor and a shift from manual to mental work, the economy requires workers with higher degrees of verbal and cognitive skills. Elementary, secondary, and higher education arise over time in response to this need.

The history of education in the U.S., however, poses a real problem for this explanation. American higher education exploded in the 19th century, to the point that there were some 800 colleges in existence by 1880, more than the total number in all of Europe. It was the highest rate of colleges per 100,000 population that the world had ever seen. The problem is that this increase was not in response to increasing demand from employers for college-educated workers. While the rate of higher schooling was increasing across the century, the skill demands of the workforce were declining. The growth of factory production was subdividing forms of skilled work, such as shoemaking, into a series of low-skilled tasks on the assembly line.

This being the case, then, how can we understand the explosion of college founding in the 19th century?  Brown provides a compelling explanation, and I lay out his core arguments in my foreword.  I hope you find it illuminating.

 

Brown Cover

Preface

In this book, David Brown tackles an important question that has long puzzled scholars who wanted to understand the central role that education plays in American society: When compared with other Western countries, why did the United States experience such extraordinary growth in higher education? Whereas in most societies higher education has long been seen as a privilege that is granted to a relatively small proportion of the population, in the United States it has increasingly come to be seen as a right of the ordinary citizen. Nor was this rapid increase in accessibility a very recent phenomenon. As Brown notes, between 1870 and 1930, the proportion of college-age persons (18 to 21 years old) who attended institutions of higher education rose from 1.7% to 13.0%. And this was long before the proliferation of regional state universities and community colleges made college attendance a majority experience for American youth.

The range of possible answers to this question is considerable, with each carrying its own distinctive image of the nature of American political and social life. For example, perhaps the rapid growth in the opportunity for higher education was an expression of egalitarian politics and a confirmation of the American Dream; or perhaps it was a political diversion, providing ideological cover for persistent inequality; or perhaps it was merely an accident — an unintended consequence of a struggle for something altogether different. In politically charged terrain such as this, one would prefer to seek guidance from an author who doesn’t ask the reader to march behind an ideological banner toward a preordained conclusion, but who instead rigorously examines the historical data and allows for the possibility of encountering surprises. What the reader wants, I think, is an analysis that is both informed by theory and sensitive to historical nuance.

In this book, Brown provides such an analysis. He approaches the subject from the perspective of historical sociology, and in doing so he manages to maintain an unusually effective balance between historical explanation and sociological theory-building. Unlike many sociologists dealing with history, he never oversimplifies the complexity of historical events in the rush toward premature theoretical closure; and unlike many historians dealing with sociology, he doesn’t merely import existing theories into his historical analysis but rather conceives of the analysis itself as a contribution to theory. His aim is therefore quite ambitious – to spell out a theoretical explanation for the spectacular growth and peculiar structure of American higher education, and to ground this explanation in an analysis of the role of college credentials in American life.

Traditional explanations do not hold up very well when examined closely. Structural-functionalist theory argues that an expanding economy created a powerful demand for advanced technical skills (human capital), which only a rapid expansion of higher education could fill. But Brown notes that during this expansion most students pursued programs not in vocational-technical areas but in liberal arts, meaning that the forms of knowledge they were acquiring were rather remote from the economically productive skills supposedly demanded by employers. Social reproduction theory sees the university as a mechanism that emerged to protect the privilege of the upper-middle class behind a wall of cultural capital, during a time (with the decline of proprietorship) when it became increasingly difficult for economic capital alone to provide such protection. But, while this theory points to a central outcome of college expansion, it fails to explain the historical contingencies and agencies that actually produced this outcome. In fact, both of these theories are essentially functionalist in approach, portraying higher education as arising automatically to fill a social need — within the economy, in the first case, and within the class system, in the second.

However, credentialing theory, as developed most extensively by Randall Collins (1979), helps explain the socially reproductive effect of expanding higher education without denying agency. It conceives of higher education diplomas as a kind of cultural currency that becomes attractive to status groups seeking an advantage in the competition for social positions, and therefore it sees the expansion of higher education as a response to consumer demand rather than functional necessity. Upper classes tend to benefit disproportionately from this educational development, not because of an institutional correspondence principle that preordains such an outcome, but because they are socially and culturally better equipped to gain access to and succeed within the educational market.

This credentialist theory of educational growth is the one that Brown finds most compelling as the basis for his own interpretation. However, when he plunges into a close examination of American higher education, he finds that Collins’ formulation of this theory often does not coincide very well with the historical evidence. One key problem is that Collins does not examine the nature of labor market recruitment, which is critical for credentialist theory, since the pursuit of college credentials only makes sense if employers are rewarding degree holders with desirable jobs. Brown shows that between 1800 and 1880 the number of colleges in the United States grew dramatically (as Collins also asserts), but that enrollments at individual colleges were quite modest. He argues that this binge of institution-creation was driven by a combination of religious and market forces but not (contrary to Collins) by the pursuit of credentials. There simply is no good evidence that a college degree was much in demand by employers during this period. Instead, a great deal of the growth in the number of colleges was the result of the desire by religious and ethnic groups to create their own settings for producing clergy and transmitting culture. In a particularly intriguing analysis, Brown argues that an additional spur to this growth came from markedly less elevated sources — local boosterism and land speculation — as development-oriented towns sought to establish colleges as a mechanism for attracting land buyers and new residents.

Brown’s version of credentialing theory identifies a few central factors that are required in order to facilitate a credential-driven expansion of higher education, and by 1880 several of these were already in place. One such factor is substantial wealth. Higher education is expensive, and expanding it for reasons of individual status attainment rather than for societal necessity is a wasteful use of a nation’s resources; it is only feasible for a very wealthy country. The United States was such a country in the late nineteenth century. A second factor is a broad institutional base. At this point, the United States had the largest number of colleges per million residents that the country has ever seen, before or since. When combined with the small enrollments at each college, this meant that there was a great potential for growth within an already existing institutional framework. This potential was reinforced by a third factor, decentralized control. Colleges were governed by local boards rather than central state authorities, thus encouraging entrepreneurial behavior by college leaders, especially in the intensely competitive market environment they faced.

However, three other essential factors for rapid credential-based growth in higher education were still missing in 1880. For one thing, colleges were not going to be able to attract large numbers of new students, who were after all unlikely to be motivated solely by the love of learning, unless they could offer these students both a pleasant social experience and a practical educational experience — neither of which was the norm at colleges for most of the nineteenth century. Another problem was that colleges could not function as credentialing institutions until they had a monopoly over a particular form of credentials, but in 1880 they were still competing directly with high schools for the same students. Finally, their credentials were not going to have any value on the market unless employers began to demonstrate a distinct preference for hiring college graduates, and such a preference was still not obvious at this stage.

According to Brown, the 1880s saw a major shift in all three of these factors. The trigger for this change was a significant oversupply of institutions relative to existing demand. In this life-or-death situation, colleges desperately sought to increase the pool of potential students. It is no coincidence that this period marked the rapid diffusion of efforts to improve the quality of social life on campuses (from the promotion of athletics to the proliferation of fraternities), and also the shift toward a curriculum with a stronger claim to practicality (emphasizing modern languages and science over Latin and Greek). At the same time, colleges sought to guarantee a flow of students from feeder institutions, which required them to establish a hierarchical relationship with high schools. The end of the century was the period in which colleges began requiring completion of a high school course as a prerequisite for college admission instead of the traditional entrance examination. This system provided high schools with a stable outlet for their graduates and colleges with a predictable flow of reasonably well-prepared students. However, none of this would have been possible if the college degree had not acquired significant exchange value in the labor market. Without this, there would have been only social reasons for attending college, and high schools would have had little incentive to submit to college mandates.

Perhaps Brown’s strongest contribution to credential theory is his subtle and persuasive analysis of the reasoning that led employers to assert a preference for college graduates in the hiring process. Until now, this issue has posed a significant, perhaps fatal, problem for credentialing theory, which has asked the reader to accept two apparently contradictory assertions about credentials. First, the theory claims that a college degree has exchange value but not necessarily use value; that is, it is attractive to the consumer because it can be cashed in on a good job more or less independently of any learning that was acquired along the way. Second, this exchange value depends on the willingness of employers to hire applicants based on credentials alone, without direct knowledge of what these applicants know or what they can do. However, this raises a serious question about the rationality of the employer in this process. After all, why would an employer, who presumably cares about the productivity of future employees, hire people based solely on a college’s certification of competence, in the absence of any evidence for that competence?

Brown tackles this issue with a nice measure of historical and sociological insight. He notes that the late nineteenth century saw the growing rationalization of work, which led to the development of large-scale bureaucracies to administer this work within both private corporations and public agencies. One result was the creation of a rapidly growing occupational sector for managerial employees who could function effectively within such a rationalized organizational structure. College graduates seemed to fit the bill for this kind of work. They emerged from the top level of the newly developed hierarchy of educational institutions and therefore seemed like natural candidates for management work in the upper levels of the new administrative hierarchy, which was based not on proprietorship or political office but on apparent skill. And what kinds of skills were called for in this line of work? What the new managerial employees needed was not so much the technical skills posited by human capital theory, he argues, but a general capacity to work effectively in a verbally and cognitively structured organizational environment, and also a capacity to feel comfortable about assuming positions of authority over other people.

These were things that the emerging American college could and indeed did provide. The increasingly corporate social structure of student life on college campuses provided good socialization for bureaucratic work, and the process of gaining admission to and graduating from college provided students with an institutionalized confirmation of their social superiority and qualifications for leadership. Note that these capacities were substantive consequences of having attended college, but they were not learned as part of the college’s formal curriculum. That is, the characteristics that qualified college graduates for future bureaucratic employment were a side effect of their pursuit of a college education. In this sense, then, the college credential had a substantive meaning for employers that justified them in using it as a criterion for employment — less for the human capital that college provided than for the social capital that college conferred on graduates. Therefore, this credential, Brown argues, served an important role in the labor market by reducing the uncertainty that plagued the process of bureaucratic hiring. After all, how else was an employer to gain some assurance that a candidate could do this kind of work? A college degree offered a claim to competence, which had enough substance behind it to be credible even if this substance was largely unrelated to the content of the college curriculum.

By the 1890s all the pieces were in place for a rapid expansion of college enrollments, strongly driven by credentialist pressures. Employers had reason to give preference to college graduates when hiring for management positions. As a result, middle-class families had an increasing incentive to provide their children with privileged access to an advantaged social position by sending them to college. For the students themselves, this extrinsic reward for attending college was reinforced by the intrinsic benefits accruing from an attractive social life on campus. All of this created a very strong demand for expanding college enrollments, and the pre-existing institutional conditions in higher education made it possible for colleges to respond to this demand in an aggressive fashion. There were a thousand independent institutions of higher education, accustomed to playing entrepreneurial roles in a competitive educational market, that were eager to capitalize on the surge of interest in attending college and to adapt themselves to the preferences of these new tuition-paying consumers. The result was a powerful and unrelenting surge of expansion in college enrollments that continued for the next century.

 

Brown provides a persuasive answer to the initial question about why American higher education expanded at such a rapid rate. But at this point the reader may well respond by asking the generic question that one should ask of any analyst, and that is, “So what?” More specifically, in light of the particular claims of this analysis, the question becomes: “What difference does it make that this expansion was spurred primarily by the pursuit of educational credentials?” In my view, at least, the answer to that question is clear. The impact of credentialism on both American society and the American educational system has been profound — profoundly negative. Consider some of the problems it has caused.

One major problem is that credentialism is astonishingly inefficient. Education is the largest single public investment made by most modern societies, and this is justified on the grounds that it provides a critically important contribution to the collective welfare. The public value of education is usually calculated as some combination of two types of benefits: the preparation of capable citizens (the political benefit) and the training of productive workers (the economic benefit). However, the credentialist argument advanced by Brown suggests that these public benefits are not necessarily being met and that the primary beneficiaries are in fact private individuals. From this perspective, higher education (and the educational system more generally) exists largely as a mechanism for providing individuals with a cultural commodity that will give them a substantial competitive advantage in the pursuit of social position. In short, education becomes little more than a vast public subsidy for private ambition.

The practical effect of this subsidy is the production of a glut of graduates. The difficulty posed by this outcome is not that the population becomes overeducated (since such a state is difficult to imagine) but that it becomes overcredentialed, since people are pursuing diplomas less for the knowledge they are thereby acquiring than for the access that the diplomas themselves will provide. The result is a spiral of credential inflation; for as each level of education in turn gradually floods with a crowd of ambitious consumers, individuals have to keep seeking ever higher levels of credentials in order to move a step ahead of the pack. In such a system nobody wins. Consumers have to spend increasing amounts of time and money to gain additional credentials, since the swelling number of credential holders keeps lowering the value of credentials at any given level. Taxpayers find an increasing share of scarce fiscal resources going to support an educational chase with little public benefit. Employers keep raising the entry-level education requirements for particular jobs, but they still find that they have to provide extensive training before employees can carry out their work productively. At all levels, this is an enormously wasteful system, one that rich countries like the United States can increasingly ill afford and that less developed countries, which imitate the U.S. educational model, find positively impoverishing.

A second major problem is that credentialism undercuts learning. In both college and high school, students are all too well aware that their mission is to do whatever it takes to acquire a diploma, which they can then cash in on what really matters — a good job. This has the effect of reifying the formal markers of academic progress — grades, credits, and degrees — and encouraging students to focus their attention on accumulating these badges of merit for the exchange value they offer. But at the same time this means directing attention away from the substance of education, reducing student motivation to learn the knowledge and skills that constitute the core of the educational curriculum. Under such conditions, it is quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning. This perspective is almost perfectly captured by a common student question, one that sends chills down the back of the learning-centered teacher but that makes perfect sense for the credential-oriented student: “Is this going to be on the test?” (Sedlak et al., 1986, p. 182). We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system.

A third problem posed by credentialism is social and political more than educational. According to credentialing theory, the connection between social class and education is neither direct nor automatic, as suggested by social reproduction theory. Instead, the argument goes, market forces mediate between the class position of students and their access to and success within the educational system. That is, there is general competition for admission to institutions of higher education and for levels of achievement within these institutions. Class advantage is no guarantee of success in this competition, since such factors as individual ability, motivation, and luck all play a part in determining the result. Market forces also mediate between educational attainment (the acquisition of credentials) and social attainment (the acquisition of a social position). Some college degrees are worth more in the credentials market than others, and they provide privileged access to higher level positions independent of the class origins of the credential holder.

However, in both of these market competitions, one for acquiring the credential and the other for cashing it in, a higher class position provides a significant competitive edge. The economic, cultural, and social capital that comes with higher class standing gives the bearer an advantage in getting into college, in doing well at college, and in translating college credentials into desirable social outcomes. The market-based competition that characterizes the acquisition and disposition of educational credentials gives the process a meritocratic set of possibilities, but the influence of class on this competition gives it a socially reproductive set of probabilities as well. The danger is that, as a result, a credential-driven system of education can provide meritocratic cover for socially reproductive outcomes. In the single-minded pursuit of educational credentials, both student consumers and the society that supports them can lose sight of an all-too-predictable pattern of outcomes that is masked by the headlong rush for the academic gold.