Posted in Books, Higher Education, History of education, Professionalism

Nothing Succeeds Like Failure: The Sad History of American Business Schools

This post is a review I wrote of Steven Conn’s book, Nothing Succeeds Like Failure: The Sad History of American Business Schools, which will be coming out this summer in History of Education Quarterly.  Here’s a link to the proofs.

Steven Conn. Nothing Succeeds Like Failure: The Sad History of American Business Schools. Ithaca, NY: Cornell University Press, 2019. 288pp.

            In this book, historian Steven Conn has produced a gleeful roast of the American business school.  The whole story is in the title.  It goes something like this:  In the nineteenth century, proprietary business schools provided training for people (men) who wanted to go into business.  Then 1881 saw the founding of the Wharton School of Finance and Economy at the University of Pennsylvania, the first business school located in a university; others quickly followed.  Two forces converged to create this new type of educational enterprise.  Progressive reformers wanted to educate future business leaders who would manage corporations in the public interest instead of looting the public the way robber barons had done.  And corporate executives wanted to enhance their status and distinguish themselves from mere businessmen by requiring a college degree in business for the top-level positions.  This was both a class distinction (commercial schools would be just fine for the regular Joe) and an effort to redefine business as a profession.  As Conn aptly puts it, the driving force for both business employers and their prospective employees was “profession envy” (p. 37).  After all, why should doctors and lawyers enjoy professional standing and not businessmen?

            For reformers, the key contribution of B schools was to be a rigorous curriculum that would transform the business world.  For the students who attended these schools, however, the courses they took were beside the point.  They were looking for a pure credentialing effect, by acquiring a professional degree that would launch their careers in the top tiers of the business world.  As Conn shows, the latter perspective won.  He delights in recounting the repeated efforts by business associations and major foundations (especially Ford and Carnegie) to construct a serious curriculum for business schools.  All of these reforms, he says, failed miserably.  The business course of study retained a reputation for uncertain focus and academic mediocrity.  The continuing judgment by outsiders was that “U.S. business education was terrible” (p. 76).

            This is the “failure” in the book’s title.  B schools never succeeded in doing what they promised as educational institutions.  But, as he argues, this curricular failure did nothing to impede business schools’ organizational success.  Students flocked to them in the search for the key to the executive suite and corporations used them to screen access to the top jobs.  This became especially true in the 1960s, when business schools moved upscale by introducing graduate programs, of which the most spectacular success was the MBA.  Nothing says professional like a graduate degree.  And nothing gives academic credibility to a professional program like establishing a mandate for business professors to carry out academic research just like their peers in the more prestigious professional schools. 

            Conn says that instead of working effectively to improve the business world, B schools simply adopted the values of this world and dressed them up in professional garb.  By the end of the twentieth century, corporations had shed any pretense of working in the public interest and instead asserted shareholder value as their primary goal.  Business schools also jumped on this bandwagon.  One result of this, the author notes, was to reinforce the rapacity of the new business ethos, sending an increasing share of business graduates into the realms of finance and consulting, where business is less a process of producing valuable goods and services than a game of monopoly played with other people’s money.  His conclusion: “No other profession produces felons in quite such abundance” (p. 206).

            Another result of this evolution in B schools was that they came to infect the universities that gave them a home.  Business needed universities for status and credibility, and it thanked them by dragging them down to its own level.  He charges that universities are increasingly governed like private enterprises, with market-based incentives for colleges, departments, and individual faculty to produce income from tuition and research grants or else find themselves discarded like any other failed business or luckless worker.  It’s as if business schools have succeeded in remaking the university in their own image: “All the window dressing of academia without any of its substance” (p. 222).

            That’s quite an indictment, but is it sufficient for conviction?  I think not.  One problem is that, from the very beginning, the reader gets the distinct feeling that the fix is in.  The book opens with a scene in which the author’s colleagues in the history department come together for their monthly faculty meeting in a room filled with a random collection of threadbare couches and chairs.  All of it came from the business school across campus when it bought new furniture.  This scene is familiar to a lot of faculty in the less privileged departments on any campus, where the distinction between the top tier and the rest is all too apparent.  In my school of education, we’re accustomed to our old, dingy barn of a building, all too aware of the elegant surroundings the business school enjoys in a brand-new campus paid for by Phil Knight of swoosh fame.

But this entry point signals a tone that tends to undermine the author’s argument throughout the book.  It’s hard not to read this book as a polemic of resentment toward the nouveau riche — a humanist railing against the money-grubbers across campus who are despoiling the sanctity of academic life.  This reading is not fair, since a lot of the author’s critique of business schools is correct; but it made me squirm a bit as I worked through the text.  It also made the argument feel a little too inevitable.  Read the title and the opening page and you already have a good idea of what is going to follow.  Then you see that the first chapter is about the Wharton School and you just know where you’re headed.  And sure enough, in the last chapter the author introduces Wharton’s most notorious graduate, Donald Trump.  That second shoe took two hundred pages to drop, but the drop was inevitable.

In addition, by focusing relentlessly on the “sad history of American business schools,” Conn is unable to put this account within the larger context of the history of US higher education.  For one thing, business didn’t introduce the market to higher ed; it was there from day one.  Our current system emerged in the early nineteenth century, when a proliferation of undistinguished colleges popped up across the US, especially in small towns on the expanding frontier.  These were private enterprises with corporate charters and no reliable source of funding from either church or state.  They often emerged more as efforts to sell land (this is a college town so buy here) or plant the flag of a religious denomination than to advance higher learning.  And they had to hustle to survive in a glutted market, so they became adept at mining student tuition and cultivating donors.  Business schools are building on a long tradition of playing to the market.  They just aren’t as concerned about covering their tracks as the rest of us.

David F. Labaree

Stanford University

Posted in Higher Education, Inequality, Meritocracy

Markovits: Schooling in the Age of Human Capital

Today I’m posting a wonderful new essay by Daniel Markovits about the social consequences of the new meritocracy, which was just published in the latest issue of Hedgehog Review.  Here’s a link to the original.  As you may recall, last fall I posted a piece about his book, The Meritocracy Trap.  

In this essay, Markovits extends his analysis of the role that universities play in fostering a new and particularly dangerous kind of wealth inequality — one based on the returns on human capital instead of the returns on economic capital.  For all of history until the late 20th century, wealth meant ownership of land, stocks, bonds, businesses, or piles of gold.  The income it produced came to you simply for being the owner, whether or not you accumulated the wealth yourself.  One of the pleasures of being rich was the luxury of remaining idle. 

But the meritocracy has established a new path to wealth — based on the university-credentialed skills you accumulate early in your life and then cash in for a high-paying job as an executive or professional.  Like the average wage earner, you work for a living and only retain the income if you keep working.  Unlike the average worker, however, you earn an extraordinary amount of money.  Markovits estimates that in 1960 the people at the top of the income distribution derived only between a sixth and a third of their income from their own labor; now the share is above two-thirds.  The meritocrats are the new rich.  And universities are the route to attaining these riches.

At one level, this is a fairer system by far than the old one based on simple inheritance and coupon clipping.  These people work for a living, and they work hard — longer hours than most people in the work force.  They can only attain their lucrative positions by proving their worth in the educational system, crowned by college and professional degrees.  These are the people who get the best grades and the best test scores and who qualify for entrance into and graduation from the best universities.  This provides the new form of inequality with a thick veneer of meritocratic legitimacy.  

As Markovits points out below, however, the problem is that the entire meritocratic enterprise is not directed toward identifying and certifying excellence but instead toward creating degrees of superiority.  

Excellence is a threshold concept, not a rank concept. It applies as soon as a certain level of ability or accomplishment is reached, and while it can make sense to say that one person is, in some respect, more excellent than another, this does not eliminate (or even undermine) the other’s excellence. Moreover, excellence is a substantive rather than purely formal ideal. Excellence requires not just capacity or achievement, but rather capacity and achievement realized at something worthwhile. 

The university-produced degrees do not certify excellence but instead define the degree-holder’s position in line for the very best jobs.  They are positional goods, whose value lies in qualifying you for a spot as close to the front of the queue as possible.  Thus all of the familiar metrics for showing where you are in line: SAT, LSAT, US News college rank, college admission rate.  Since everyone knows this is how the game is played, everyone wants and needs to get the diploma that grants the highest degree of superiority in the race for position.  Being really qualified for the job is meaningless if your degree doesn’t get you access to it.  As a result, Markovits notes, you can never get enough education to ensure your success in the meritocratic rat race.

“The value to me of my education,” the economist Fred Hirsch once observed, “depends not only on how much I have but also on how much the man ahead of me in the job line has.”32 This remains so, moreover, regardless of how much education the person ahead of me and I both possess. Every meritocratic success therefore necessarily breeds a flip side of failure—the investments made by the rich exclude the rest, and also those among the rich who don’t quite keep up. This means that while the rich get sated on most goods (there is only so much caviar a person can eat), they cannot get sated on schooling.

Parents with lots of human capital have a huge advantage in guiding their children through the educational system, but this only breeds insecurity.  They know that they’re competing with other families with the same advantages and that only a few will gain a place at the front of the line, where the most lucrative positions are allocated.  Excellence is attainable, but superiority is endlessly elusive.

I hope you find this article as illuminating as I do.


Schooling in the Age of Human Capital

Metrics do not and, in fact, cannot measure any intelligible conception of excellence at all.

Daniel Markovits

The recent “Varsity Blues” scandal brought corruption at American universities into the public eye. Rich people bought fraudulent test scores and bribed school officials in order to get their children into top colleges. Public outrage spread beyond the scandal’s criminal face, to the legacy preferences by which universities legally favor the privileged children of their own graduates. After all, the actions in the Varsity Blues case became criminal only because the universities themselves failed to capture the proceeds of their own corruption. The outrage was natural and warranted. There is literally nothing to say in favor of a system that allows the rich to circumvent the meritocratic competition that governs college admissions for everyone else. But the outrage also distracts from and even disguises a broader and deeper corruption in American education, which arises not from betraying meritocratic ideals but, rather, from pursuing them. Meritocracy itself casts a dark shadow over education, biasing decisions about who gets it, distorting the institutions that deliver it, and corrupting the very idea of educational excellence.

The methods of meritocratic schooling drive the corruption forward. Scores on the SAT (formerly called the Scholastic Assessment Test), grade point averages (GPAs), and college rankings—the metrics that organize and even tyrannize meritocratic education in the United States today—are manifestly absurd. It’s not just that SAT scores, GPAs, and rankings are culturally biased or that they lack predictive validity. These familiar complaints have a point, but they all proceed from the fanciful belief that merit may be measured and that meritocracy, if properly administered, supports opportunity for all and thereby makes unequal outcomes okay. The familiar objections argue only that the metrics are poorly designed and so miss their meritocratic marks. In some instances, as when SAT scores are criticized for poorly predicting college GPAs, the criticisms simply prefer one measure over another. But the real root of the trouble with SATs, GPAs, and rankings is deeper and different: These metrics do not and, in fact, cannot measure any intelligible conception of excellence at all. And really appreciating this objection requires stepping outside meritocracy’s conventional imaginative frame.

A Transparent Absurdity

Colleges and universities quantify applicants’ merits using SAT scores and GPAs. But as a measure of anything that is itself worthwhile—of any meaningful achievement or genuine human excellence—an SAT score or a GPA is not so much imprecise and incomplete, or biased and unfair, as simply nonsensical. Even if individual questions on the test identify real skills, and even if grades on individual assignments or courses reflect real accomplishments, the sums and averages that compose overall SAT scores and GPAs fail to track any credible concept of ability or accomplishment. What sense does it make to treat a person who uses language exceptionally vividly and creatively but cannot identify the core facts in a descriptive passage as possessing, overall, average linguistic aptitude or accomplishment? It is more absurd still to treat someone who reads and writes fantastically well but is terrible at mathematics as, in any way, an ordinary or middling student. But SAT scores and GPAs push inexorably toward both conclusions. Again, even if one sets aside doubts about whether individual skills can be measured by multiple-choice questions or whether particular course work can be accurately graded, these metrics create literally mindless averages—totally without grounding in any conception of how to aggregate skills or accomplishments into an all-things-considered sum, or even any argument that these things are commensurable or that aggregating them is intelligible.

Applicants, for their parts, measure colleges and universities by rankings, including most prominently those published by US News & World Report. These rankings are, if anything, even less intelligible than the metrics used to evaluate applicants. For colleges, for example, the rankings aggregate many factors: graduation and retention rates (both in fact and as compared to US News’s expectations), an idiosyncratic measure of “social mobility,” class size, faculty salaries, faculty education, student-faculty ratio, share of faculty who are full-time, expert opinion, academic spending per student, student standardized test scores, student rank in high school class, and alumni giving.1 Once again, even supposing that these factors reflect particular educational excellences and that the data US News gathers measure the factors, the aggregate that it builds by combining them, using weights specified to within one-tenth of one percent, remains incoherent. Berea College, for example, enrolls students who skew more toward first-generation college graduates than Princeton University, and in this way adds more to the education of each student (especially compared to her likely alternatives), but it has a less renowned, scholarly, and highly paid faculty. What possible conception of “excellence” can underwrite an all-things-considered judgment of which is “better”? US News boasts that “our methodology is the product of years of research.”2 But the basic question of what this research is studying—of what excellence this method of deciding which colleges and universities are “best” could conceivably measure, or whether any such excellence is even intelligible—remains entirely unaddressed.

In spite of their patent absurdities, the metrics deployed by both sides of the college admissions complex dominate how students and colleges are matched: Schools use test scores and grades to decide whom to admit, and applicants use rankings to decide where to enroll. The five top-ranked law schools, for example, enroll roughly two-thirds of applicants with Law School Admission Test (LSAT) scores in the ninety-ninth percentile.3 And although law schools hold precise recruitment data close, one can reasonably estimate that of the roughly 2,000 people admitted to the top five law schools each year, no more than five (which is to say effectively none) attend a law school outside the top ten.4 Law school is likely an extreme case. But instead of being outlandish, it lies at the end of a continuum and emphasizes patterns that repeat themselves (less acutely) across American higher education. Metrics that are literally nonsense drive an incredibly efficient two-way matching system.

When a transparent absurdity dominates a prominent social field, something profound lies beneath. And the metrics that tyrannize university life rise out of deep waters indeed. Elites increasingly owe their income and status not to inherited physical or financial capital but to their own skill, or human capital, acquired through intensive and even extravagant training. Colleges and universities provide the training that builds human capital, and going to college (and to the right college) therefore substantially determines who gets ahead. The practices that match students and colleges must answer the need to legitimate the inequalities this human capitalism produces, by justifying advantage on meritocratic grounds. Even when they are nonsense, numbers provide legitimacy in a scientific age. The numbers that tyrannize university life in America today, and the deformations that education suffers as a result, are therefore the inevitable pathologies of schooling in an age of human capitalism.

The Superordinate Working Class

In 2018, the average CEO of an S&P 500 company took home about $14.5 million in total compensation,5 and in a recent year, the five highest-paid employees of the S&P 1500 firms (7,500 workers overall) captured total pay equal to nearly 10 percent of the S&P 1500’s collective profits.6 In finance, twenty-five hedge fund managers took home more than $100 million in 2016,7 for example, while the average portfolio manager at a mid-sized hedge fund was reported to have made more than $2 million in 2014.8 The Office of the New York State Comptroller reported in 2018 that the average securities industry worker in New York City made more than $400,000.9 Meanwhile, the most profitable law firm in America yields profits per partner in excess of $5 million per year, and more than seventy firms generate more than $1 million per partner annually.10 Anecdotes accumulate to become data. Taken together, employees at the vice-presidential level or higher at S&P 1500 companies, professional finance workers, top management consultants, top lawyers, and specialist medical doctors account for more than half of the richest 1 percent of households in the United States.11

These and other similar jobs enable a substantial subset of the most elaborately educated people to capture enormous incomes by mixing their accumulated human capital with their contemporaneous labor. This group now composes a superordinate working class. A cautious accounting attributes over half of the 1 percent’s income to these and other kinds of labor,12 while my own more complete estimate puts the share above two-thirds.13 Moreover—and notwithstanding capital’s rising domination over ordinary workers—roughly three-quarters of the increase in the top 1 percent’s share of national income overall stems from the rise of this superordinate working class, in particular a shift of income away from middle-class workers and in favor of elite ones. The result is a society in which the greatest source of wealth, income, and status (including for the mass affluent) is the skill and training—the human capital—of free workers.

The rise of human capitalism has transformed the colleges and universities that create human capital. Two facets of the transformation matter especially. First, education has acquired an importance it never had before. Until only a few generations ago, education and the skills it produces had little economic value. Even generously calculated, the top 0.1 and the top 1 percent of the income distribution in 1960 derived only about one-sixth and one-third of their incomes, respectively, from labor, which is to say by working their own human capital.14 Moreover, schools and universities did not dominate production of such human capital as there was; both blue- and white-collar workers received substantial workplace training, throughout their careers. In Detroit, for example, young men might quit childhood jobs on their eighteenth birthdays and present themselves to a Big Three automaker, to take up unionized, lifetime jobs that would (if they were capable and hard working) eventually make them into tool-and-die-makers, earning the equivalent of nearly $100,000 per year—all with no more than a high school education.15 And in New York, a college graduate joining junior management at IBM could expect to spend four years (or 10 percent of his career) in full-time, fully paid workplace training as he ascended the corporate ladder.16 Small wonder, then, that the college wage premium was modest at midcentury, and that the graduate-school wage premium (captured above what was earned by workers with just a bachelor’s degree) was more modest still.17 Elite schools and colleges, in this system, were sites of social prestige rather than economic production. Education had little direct economic payoff; rather, it followed, and merely marked, hierarchies that were established and sustained on other grounds. The critics of the old order were clear eyed about this. Kingman Brewster—the president who did more than anyone to modernize Yale University—called the college he inherited “a finishing school on Long Island Sound.”18

But today, education has become itself a source of income, status, and power for a meritocratic elite whose wealth consists, principally, in its own human capital. The college wage premium has risen dramatically, so that the present discounted value of a bachelor’s degree (net of tuition) is nearly three times greater today than in 1965.19 The postgraduate wage premium has risen more steeply still, and the median worker with a postgraduate degree now makes well over twice the wage of the median worker with a high school diploma only, and about 1.5 times the wage of the median worker with a four-year degree only. College and postcollege degrees also protect against unemployment, so that the effects of education on lifetime earnings are more dramatic still. Just one in seventy-five workers who have never finished high school, just one in forty workers with a high school education only, and just one in six workers with a bachelor’s degree enjoy lifetime earnings equal only to those of the median professional school graduate.20

Graduates of the top colleges and universities capture yet higher incomes, enjoying more than double the income boost of an average four-year degree, with even greater gains at the very top. The highest-paid 10 percent of Harvard College graduates make an average salary of $250,000 just six years out,21 while a recent study of Harvard Law School graduates ten years out reported a median annual income (among male graduates) of nearly $400,000.22 Overall, graduates of top-ten law schools make on average a quarter more than graduates of schools ranked eleventh to twentieth, and a half more than graduates of schools ranked twenty-first to one-hundredth;23 and 96 percent of the partners at the $5 million-a-year law firm graduated from a top-ten law school.24 More broadly, a recent survey reports—incredibly—that nearly 50 percent of America’s corporate leaders, 60 percent of its financial leaders, and 50 percent of its highest government officials attended only twelve universities.25 This makes elite education one of the best investments money can buy. Purely economic rates of return have been estimated at 13 to 14 percent for college and as high as 30 percent for law school, or more than double the rate of return provided by the stock market.26 Meanwhile, the educational alternatives to college have all but disappeared. According to a recent study, the average US firm invests less than 2 percent of its payroll budget on training.27

A second transformation follows from the first. Education, especially at top-tier colleges and universities, is now distributed in very different ways from before. Colleges, especially elite ones, have never welcomed poor or even middle-class people in large numbers. But once, those schools chose students based on effectively immutable criteria—breeding, race, gender—so that while college was exclusive, it was nevertheless (at least among those who qualified) effectively nonrivalrous and not competitive. Even the very top schools routinely accepted perhaps a third of their applicants, and some took much greater shares still.28 As recently as 1995, the University of Chicago admitted 71 percent of those who applied. These rates naturally produced an application process that appears almost preposterously casual today. A midcentury graduate of Yale Law School, for example, recollects that when he met the dean of admissions at a college fair, he was told, based only on their conversation, “You’ll get in if you apply.” An easy confidence suffused the very language of going to college, as the sons of wealthy families did not apply widely but rather “put themselves down for” whatever colleges their fathers had attended. The game was rigged, and the stakes were small.

But today, education is parceled out through an enormous competition that becomes most intense at the very top. Even as poor and even middle-class children have virtually no chance at succeeding, rich children (no matter how privileged) have no guarantee of success. Colleges today—especially the top ones—are therefore both extremely exclusive and ruthlessly competitive. In a recent year, for example, children who had at least one parent with a graduate degree had, statistically, a 150 times greater chance of achieving the Ivy League median on their verbal SAT than children neither of whose parents had graduated high school.29 Small wonder, then, that the Ivy Plus colleges now enroll more students from households in the top 1 percent of the income distribution than from the entire bottom half.30 This makes these schools more economically exclusive than even notorious bastions of the old aristocracy such as Oxford and Cambridge. At the same time, while being born to privilege is nearly a necessary condition for admission to a really elite American university, it is far from sufficient. Last year, the University of Chicago admitted just six percent of applicants, and Stanford fewer than five percent.

These admissions rates mean that any significant failure—any visible blot on a record—effectively excludes an applicant. Rich families respond to this fact by investing almost unimaginable resources in getting their children perfect records. Prestigious private preschools in New York City now charge $30,000 per year to educate four-and-five-year-olds, and they still get ten or twenty applications for every space. These schools feed into elite elementary schools, which feed into elite high schools that charge $50,000 per year (and, on account of their endowments, spend even more). Rich families supplement all this schooling with private tutors who can charge over $1,000 per hour. If a typical household from the richest 1 percent took the difference between the money devoted to educating its children and what is spent on a typical middle-class education, and invested these sums in the S&P 500 to give to the rich children as bequests on the deaths of their parents, this would amount to a traditional inheritance of more than $10 million per child.31 This meritocratic inheritance effectively excludes working- and middle-class children from elite education, income, and status.

These expenditures are almost as inevitable as they are exorbitant. When one set of institutions dominates the production of wealth and status in a society, the privileged few set out to monopolize places, and the pressure to gain admission becomes enormous. Human capitalism, moreover, makes schooling a positional good. “The value to me of my education,” the economist Fred Hirsch once observed, “depends not only on how much I have but also on how much the man ahead of me in the job line has.”32 This remains so, moreover, regardless of how much education the person ahead of me and I both possess. Every meritocratic success therefore necessarily breeds a flip side of failure—the investments made by the rich exclude the rest, and also those among the rich who don’t quite keep up. This means that while the rich get sated on most goods (there is only so much caviar a person can eat), they cannot get sated on schooling. Finally, rather than pick schools based on family tradition, applicants make deliberate choices about where to apply, and almost always attend the highest-ranked school that admits them, as when effectively nobody admitted to a top-five law school attends a school outside the top ten.

In these ways, human capitalism creates an educational competition in which the stakes are immense and everyone competes for the same few top prizes. Whereas aristocracies perpetuated elites by birthright, meritocratic inequality establishes school and especially college admissions committees as de facto social planners, choosing the next generation of meritocrats. Education becomes a powerful mechanism for structural exclusion—the dominant dynastic technology of our enormously unequal age. This places extreme pressure on the schools, and especially admissions committees, which must decide which people to privilege, using what criteria, and to what ends.

Bohr’s Lucky Horseshoe

What happens to schools when the degrees they grant grow so valuable that the demand for them outstrips their supply, and when admissions decisions make or break applicants’ life plans and determine who gets ahead in society? How have schools and colleges responded to their admissions decisions’ raised stakes? And what has the rise of human capital, its dominant role in wealth (even among the rich), done to the nature of education itself—to education’s aims, and to the standards by which it determines success? Measurement, and the tyranny of numbers, turns out to play a central part in the answer to all these questions—and for reasons not just shallow but deep. The manifestly absurd metrics that dominate university life are direct consequences of the role that schooling plays in our present economic and social order.

That which is measured becomes important. But at the same time, that which is important must be measured—and on a scale that allows for the sort of confident and exact judgments and comparisons that numbers yield. In a technocratic age—suspicious (for good reasons as well as bad) of humanist, interpretive, and therefore discretionary judgments about value—the demand for certainty and precision becomes irresistible when the stakes get high enough. The rise of human capitalism therefore makes it essential to construct metrics that schools and colleges might use to assess human capital and to compare the people who possess it, in order to determine whose human capital should receive additional investments.

The problem becomes more pressing still because education is lumped into standardized units called degrees, so that schools (especially the most exclusive ones, which have no part-time students or “honors colleges”) cannot hedge their bets by offering applicants varying quantities or qualities of training, but must instead make a binary choice to accept or to reject, full stop. The metrics that admissions offices use must therefore be able to aggregate across dimensions of skill and ability, in order to construct a single, all-things-considered measure of ability and accomplishment capable of supporting a “yes” or a “no.” This task becomes especially demanding in a world that has rejected the unity of the virtues and insists instead that people and institutions may excel in some ways even as they fail in others. GPAs and standardized test scores, especially on the SAT, as well as university rankings as provided by US News & World Report, provide the required metrics—comprehensive and complete orderings that can make fine distinctions that all who accept the metrics must agree on. Averages, scores, and rankings operate as prices do in economic markets, corralling judgments made unruly by normative pluralism and fragmentation into a single, public, shared measure of value.

These metrics—especially the SAT—are of course themselves disputed, sometimes vigorously. Certainly, they rest on arbitrary assumptions, and precision comes only at the cost of simply ignoring anything intractable, no matter how important. Nevertheless, even challenges to particular measures of human capital often accept the general approach that lies behind them all, and therefore give away the evaluative game—as (once again) when the SAT is criticized for lacking much power to predict GPAs. And even when they are contested, metrics like the GPA and SAT suppress ambiguities that they cannot eliminate, by pushing contestation into the background, far away from the individual cases and the evaluation of particular applicants. We may disagree about the validity of the SAT, and indeed harbor doubts about the test’s value, but we will nevertheless all agree on who has the highest score. In this sense, GPAs and SATs are like Niels Bohr’s lucky horseshoe—they work even if you don’t believe in them. In a world in which people cannot possibly agree on any underlying account of virtue or success, but literally everything turns on how success is measured, numerical scores allow admissions committees to legitimate their choices of whom to admit.

The early meritocrats understood this. At Harvard, James Bryant Conant, president from 1933 to 1953, introduced the SAT into college admissions with the specific purpose of identifying deserving applicants from outside the aristocratic elite. (James Tobin, who would serve on President John F. Kennedy’s Council of Economic Advisers and win a Nobel Prize, was an early success story.33) Yale came to meritocracy later, but (perhaps for this very reason) embraced the logic of numbers-based meritocratic evaluation more openly and explicitly. Kingman Brewster, president from 1963 to 1977, called himself an “intellectual investment banker” and encouraged his admissions office to compose Yale’s class with the aim of admitting the students who would maximize the human capital that his investments would build. R. Inslee “Inky” Clark, Brewster’s dean of undergraduate admissions from 1963 to 1969, called his selection process “talent searching” and equated talent with “who will benefit most from studying at Yale.” The new administration, moreover, deployed test scores and GPAs not just affirmatively, to find overlooked talent, but also negatively, to break the old aristocratic elite’s monopoly over places at top colleges. Clark called the old, breeding-based elite “ingrown,” and aggressively turned Yale against aristocratic prep schools. In 1968, for example, when Harvard still accepted 46 percent of applicants from Choate and Princeton took 57 percent, Yale accepted only 18 percent.34

The meritocrats aimed by these means to build a new leadership class. The old guard recognized the threat and resisted, both privately and even publicly. Brewster’s predecessor had scorned Harvard’s meritocratic admissions, which he said would favor the “beetle-browed, highly specialized intellectual.” When Brewster’s revolution was presented to the Yale Corporation, one member objected, “You’re talking about Jews and public-school graduates as leaders. Look around you at this table. These are America’s leaders. There are no Jews here. There are no public-school graduates here.” And William F. Buckley lamented that Brewster’s Yale would prefer “a Mexican-American from El Paso High…[over]…Jonathan Edwards the Sixteenth from Saint Paul’s School.” Just so, the meritocrats replied.35

They added that their meritocratic approach to building an elite—because numbers measure ability and, just as important, block overt and direct appeals to breeding—would launder the hierarchy that it produced. Prior inequalities—especially aristocratic ones—were prejudicial, malign, and offensive. But meritocracy purports to be wholesome: backed by objective numbers, open to all comers, and resolutely focused on earned advantage. Indeed, meritocracy aspires to redeem the very idea of inequality—to make unequal outcomes compatible with equal opportunities, and to render hierarchy acceptable to a democratic age. In this way, the early meritocrats combined stark criticism of the present with a profound optimism about the future.

The Soldier, the Artist, and the Financier

The meritocrats’ optimism fell, if not at once, then soon. And it fell at hurdles erected by their own reliance on numbers. The metrics that the meritocrats constructed, and that now dominate education, turned out to be not just absurd but destructive.

To begin with, numerical metrics of accomplishment naturally inflame ruthlessly single-minded competition. There is no general way to rank learning, or creativity, or achievement—merit—directly. There is no way to say, all things considered, who has better skills, wider knowledge, or deeper understanding, much less who has accomplished more overall. People value different things for different reasons. We disagree with one another about what is most valuable: the entrepreneur’s resourcefulness, the doctor’s caring, the writer’s insight, or the statesperson’s wisdom. Moreover, each of us is unsure, in our own judgments, about how best to balance these values when they conflict—unsure, to pick a famous example, about whether to pursue a life of politics or of reflection, to pursue the executory or the deliberative virtues. The agreement and repose needed to sustain a stable direct ranking simply can’t be had. This is not all bad: Ineliminable uncertainties about value diffuse and therefore dampen our competition to achieve. The soldier and the artist simply do not compete with each other, and neither competes with the financier.

By contrast, numerical metrics—again including especially GPAs and SAT scores—aggregate across incommensurables to produce a single, complete ranking of merit. Indeed, producing this ranking is part of such metrics’ point—the thing that makes them useful to admissions offices. But now, competition whose natural state is disorganized and diffuse becomes highly organized and narrowly focused. Aspiring businesspeople, doctors, writers, and statespeople all will benefit, in reaching their professional goals, from high SAT scores and GPAs, and, accordingly, they all compete to join the top ranks. The numbers on which admissions offices rely to validate their selections therefore create competition and hierarchy where the incommensurability of value once made rank unintelligible. SATs and GPAs do to human capital what prices earlier did to physical or financial capital—they make it possible to say, all things considered, who has more, who is richest. Unreasoning accumulation and open inequality follow inexorably.

The competition that the numerical metrics create, moreover, aims at foolish and indeed fruitless ambitions. SAT scores and GPAs, once again, do not measure any intelligible excellences, and high scores and averages therefore have no value in themselves. At best, pursuing them wastes effort and attention and almost surely deforms schooling, by diverting effort and attention from the many genuine excellences that education can produce. This is even more vividly true on the side of colleges and universities, with respect to the wasteful and even destructive contortions they put themselves through in pursuit of higher US News rankings.

The numbers-based distortions induced by students’ pursuit of higher test scores and institutions’ pursuit of higher rankings both may be given a natural framing in terms of the distinction between excellence and superiority. Excellence is a threshold concept, not a rank concept. It applies as soon as a certain level of ability or accomplishment is reached, and while it can make sense to say that one person is, in some respect, more excellent than another, this does not eliminate (or even undermine) the other’s excellence. Moreover, excellence is a substantive rather than purely formal ideal. Excellence requires not just capacity or achievement, but rather capacity and achievement realized at something worthwhile. It is a moral error to speak of excellence in corruption, wickedness, or depravity. Superiority, on the other hand, is opposite in both respects. It is a rank—rather than a threshold—concept, and one person’s superior accomplishment undoes rather than just exceeds the superiority of those whom she surpasses. In addition, superiority is purely formal rather than substantive. It makes perfect sense to speak in terms of superiority at activities that are worthless or even harmful.

When the numbers that rule over the processes that match students and schools under human capitalism subject education to domination by a single and profoundly mistaken conception of merit, they depose excellence, installing in its place a merciless quest for superiority. Human capitalism distorts schooling in much the same way that financialization distorts for-profit sectors of the real economy. Once, firms committed to particular products (General Motors to cars, IBM to computers) might view profits as a happy side-effect of running their businesses well. But in finance, whose only product is profit, the distinction between success and profitability becomes literally unintelligible, and financialization therefore subjects the broader economy to a tyranny of profit. Similarly, flourishing schools and universities will view their reputations and status as salutary side-effects of one or another form of academic excellence. But human capitalism shuts schools off from these conceptions of excellence and enslaves them to the pursuit of superiority. Schooling in an age of human capitalism thus becomes subjected to a tyranny of SATs, GPAs, and college rankings.

All these consequences, moreover, are neither accidents nor the result of individual vices: the shallowness of applicants or the vanity of universities. Rather, a social and economic hierarchy based on human capital creates a pitiless competition for access to the meritocratic education that builds human capital. Working- and middle-class children lack the resources to compete in the educational race and so are excluded not just from income and status but from meaningful opportunity. Rich children, meanwhile, are run ragged in a competition to achieve an intrinsically meaningless superiority that devours even those whom it appears to favor. And the colleges and universities that provide training, and administer the competition, are deformed in ways that betray any plausible conceptions of academic excellence. The Varsity Blues scandal exposed this corruption alongside the frauds that conventional responses emphasized. Why would intelligent and otherwise prudent people—one of the culprits was cochair of a major global law firm—pursue such a ham-fisted scheme other than from a desperate fear of losing meritocratic caste? No one escapes the meritocracy trap.

The only way out—for schools as well as for students—involves structural reforms that extend well beyond education, to reach economic and social inequalities writ large. But although reforms cannot end with schools, colleges, and universities, they might begin there. In particular, the familiar hope that making standardized tests less biased and more accurate and making rankings more comprehensive—that is, perfecting meritocracy—might more effectively launder social and economic inequalities without diminishing them is simply a fantasy. Colleges and universities, in particular, cannot redeem their educational souls while retaining their exclusivity. Instead, elite schools must become, simply, less elite.

If it mattered less where people got educated, applicants could pursue different paths for different reasons. And schools and colleges, freed from the burden of allocating life chances, could abandon their craving for superiority and instead pursue scholarly insight, practical innovation, community engagement, and a thousand other incommensurable virtues. Along the way, by freeing themselves from superiority’s jealous grasp, universities might redeem the very idea of excellence.

 

Posted in Credentialing, Higher Education, History of education, Sociology, Uncategorized

How Credentialing Theory Explains the Extraordinary Growth in US Higher Ed in the 19th Century

Today I am posting a piece I wrote in 1995. It was the foreword to a book by David K. Brown, Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism.  

I have long been interested in credentialing theory, but this is the only place where I ever tried to spell out in detail how the theory works.  For this purpose, I draw on the case of the rapid expansion of the US system of higher education in the 19th century and its transformation at the end of the century, which is the focus of Brown’s book.  Here’s a link to a pdf of the original. 

The case is particularly fruitful for demonstrating the value of credentialing theory, because the most prominent theory of educational development simply can’t make sense of it.  Functionalist theory sees the emergence of educational systems as part of the process of modernization.  As societies become more complex, with a greater division of labor and a shift from manual to mental work, the economy requires workers with higher degrees of verbal and cognitive skills.  Elementary, secondary, and higher education arise over time in response to this need.

The history of education in the U.S., however, poses a real problem for this explanation.  American higher education exploded in the 19th century, to the point that there were some 800 colleges in existence by 1880, which was more than the total number on the continent of Europe.  It was the highest rate of colleges per 100,000 population that the world had ever seen.  The problem is that this increase was not in response to increasing demand from employers for college-educated workers.  While the rate of higher schooling was increasing across the century, the skill demands in the workforce were declining.  The growth of factory production was subdividing forms of skilled work, such as shoemaking, into a series of low-skilled tasks on the assembly line.

This being the case, then, how can we understand the explosion of college founding in the 19th century?  Brown provides a compelling explanation, and I lay out his core arguments in my foreword.  I hope you find it illuminating.

 


Preface

In this book, David Brown tackles an important question that has long puzzled scholars who wanted to understand the central role that education plays in American society: When compared with other Western countries, why did the United States experience such extraordinary growth in higher education? Whereas in most societies higher education has long been seen as a privilege that is granted to a relatively small proportion of the population, in the United States it has increasingly come to be seen as a right of the ordinary citizen. Nor was this rapid increase in accessibility a very recent phenomenon. As Brown notes, between 1870 and 1930, the proportion of college-age persons (18 to 21 years old) who attended institutions of higher education rose from 1.7% to 13.0%. And this was long before the proliferation of regional state universities and community colleges made college attendance a majority experience for American youth.

The range of possible answers to this question is considerable, with each carrying its own distinctive image of the nature of American political and social life. For example, perhaps the rapid growth in the opportunity for higher education was an expression of egalitarian politics and a confirmation of the American Dream; or perhaps it was a political diversion, providing ideological cover for persistent inequality; or perhaps it was merely an accident — an unintended consequence of a struggle for something altogether different. In politically charged terrain such as this, one would prefer to seek guidance from an author who doesn’t ask the reader to march behind an ideological banner toward a preordained conclusion, but who instead rigorously examines the historical data and allows for the possibility of encountering surprises. What the reader wants, I think, is an analysis that is both informed by theory and sensitive to historical nuance.

In this book, Brown provides such an analysis. He approaches the subject from the perspective of historical sociology, and in doing so he manages to maintain an unusually effective balance between historical explanation and sociological theory-building. Unlike many sociologists dealing with history, he never oversimplifies the complexity of historical events in the rush toward premature theoretical closure; and unlike many historians dealing with sociology, he doesn’t merely import existing theories into his historical analysis but rather conceives of the analysis itself as a contribution to theory. His aim is therefore quite ambitious – to spell out a theoretical explanation for the spectacular growth and peculiar structure of American higher education, and to ground this explanation in an analysis of the role of college credentials in American life.

Traditional explanations do not hold up very well when examined closely. Structural-functionalist theory argues that an expanding economy created a powerful demand for advanced technical skills (human capital), which only a rapid expansion of higher education could fill. But Brown notes that during this expansion most students pursued programs not in vocational-technical areas but in liberal arts, meaning that the forms of knowledge they were acquiring were rather remote from the economically productive skills supposedly demanded by employers. Social reproduction theory sees the university as a mechanism that emerged to protect the privilege of the upper-middle class behind a wall of cultural capital, during a time (with the decline of proprietorship) when it became increasingly difficult for economic capital alone to provide such protection. But, while this theory points to a central outcome of college expansion, it fails to explain the historical contingencies and agencies that actually produced this outcome. In fact, both of these theories are essentially functionalist in approach, portraying higher education as arising automatically to fill a social need — within the economy, in the first case, and within the class system, in the second.

However, credentialing theory, as developed most extensively by Randall Collins (1979), helps explain the socially reproductive effect of expanding higher education without denying agency. It conceives of higher education diplomas as a kind of cultural currency that becomes attractive to status groups seeking an advantage in the competition for social positions, and therefore it sees the expansion of higher education as a response to consumer demand rather than functional necessity. Upper classes tend to benefit disproportionately from this educational development, not because of an institutional correspondence principle that preordains such an outcome, but because they are socially and culturally better equipped to gain access to and succeed within the educational market.

This credentialist theory of educational growth is the one that Brown finds most compelling as the basis for his own interpretation. However, when he plunges into a close examination of American higher education, he finds that Collins’ formulation of this theory often does not coincide very well with the historical evidence. One key problem is that Collins does not examine the nature of labor market recruitment, which is critical for credentialist theory, since the pursuit of college credentials only makes sense if employers are rewarding degree holders with desirable jobs. Brown shows that between 1800 and 1880 the number of colleges in the United States grew dramatically (as Collins also asserts), but that enrollments at individual colleges were quite modest. He argues that this binge of institution-creation was driven by a combination of religious and market forces but not (contrary to Collins) by the pursuit of credentials. There simply is no good evidence that a college degree was much in demand by employers during this period. Instead, a great deal of the growth in the number of colleges was the result of the desire by religious and ethnic groups to create their own settings for producing clergy and transmitting culture. In a particularly intriguing analysis, Brown argues that an additional spur to this growth came from markedly less elevated sources — local boosterism and land speculation — as development-oriented towns sought to establish colleges as a mechanism for attracting land buyers and new residents.

Brown’s version of credentialing theory identifies a few central factors that are required in order to facilitate a credential-driven expansion of higher education, and by 1880 several of these were already in place. One such factor is substantial wealth. Higher education is expensive, and expanding it for reasons of individual status attainment rather than for societal necessity is a wasteful use of a nation’s resources; it is only feasible for a very wealthy country. The United States was such a country in the late nineteenth century. A second factor is a broad institutional base. At this point, the United States had the largest number of colleges per million residents that the country has ever seen, before or since. When combined with the small enrollments at each college, this meant that there was a great potential for growth within an already existing institutional framework. This potential was reinforced by a third factor, decentralized control. Colleges were governed by local boards rather than central state authorities, thus encouraging entrepreneurial behavior by college leaders, especially in the intensely competitive market environment they faced.

However, three other essential factors for rapid credential-based growth in higher education were still missing in 1880. For one thing, colleges were not going to be able to attract large numbers of new students, who were after all unlikely to be motivated solely by the love of learning, unless they could offer these students both a pleasant social experience and a practical educational experience — neither of which was the norm at colleges for most of the nineteenth century. Another problem was that colleges could not function as credentialing institutions until they had a monopoly over a particular form of credentials, but in 1880 they were still competing directly with high schools for the same students. Finally, their credentials were not going to have any value on the market unless employers began to demonstrate a distinct preference for hiring college graduates, and such a preference was still not obvious at this stage.

According to Brown, the 1880s saw a major shift in all three of these factors. The trigger for this change was a significant oversupply of institutions relative to existing demand. In this life-or-death situation, colleges desperately sought to increase the pool of potential students. It is no coincidence that this period marked the rapid diffusion of efforts to improve the quality of social life on campuses (from the promotion of athletics to the proliferation of fraternities), and also the shift toward a curriculum with a stronger claim of practicality (emphasizing modern languages and science over Latin and Greek). At the same time, colleges sought to guarantee a flow of students from feeder institutions, which required them to establish a hierarchical relationship with high schools. The end of the century was the period in which colleges began requiring completion of a high school course as a prerequisite for college admission instead of the traditional entrance examination. This system provided high schools with a stable outlet for their graduates and colleges with a predictable flow of reasonably well-prepared students. However, none of this would have been possible if the college degree had not acquired significant exchange value in the labor market. Without this, there would have been only social reasons for attending college, and high schools would have had little incentive to submit to college mandates.

Perhaps Brown’s strongest contribution to credential theory is his subtle and persuasive analysis of the reasoning that led employers to assert a preference for college graduates in the hiring process. Until now, this issue has posed a significant, perhaps fatal, problem for credentialing theory, which has asked the reader to accept two apparently contradictory assertions about credentials. First, the theory claims that a college degree has exchange value but not necessarily use value; that is, it is attractive to the consumer because it can be cashed in on a good job more or less independently of any learning that was acquired along the way. Second, this exchange value depends on the willingness of employers to hire applicants based on credentials alone, without direct knowledge of what these applicants know or what they can do. However, this raises a serious question about the rationality of the employer in this process. After all, why would an employer, who presumably cares about the productivity of future employees, hire people based solely on a college’s certification of competence in the absence of any evidence for that competence?

Brown tackles this issue with a nice measure of historical and sociological insight. He notes that the late nineteenth century saw the growing rationalization of work, which led to the development of large-scale bureaucracies to administer this work within both private corporations and public agencies. One result was the creation of a rapidly growing occupational sector for managerial employees who could function effectively within such a rationalized organizational structure. College graduates seemed to fit the bill for this kind of work. They emerged from the top level of the newly developed hierarchy of educational institutions and therefore seemed like natural candidates for management work in the upper levels of the new administrative hierarchy, which was based not on proprietorship or political office but on apparent skill. And what kinds of skills were called for in this line of work? What the new managerial employees needed was not so much the technical skills posited by human capital theory, he argues, but a general capacity to work effectively in a verbally and cognitively structured organizational environment, and also a capacity to feel comfortable about assuming positions of authority over other people.

These were things that the emerging American college could and indeed did provide. The increasingly corporate social structure of student life on college campuses provided good socialization for bureaucratic work, and the process of gaining access to and graduation from college provided students with an institutionalized confirmation of their social superiority and qualifications for leadership. Note that these capacities were substantive consequences of having attended college, but they were not learned as part of the college’s formal curriculum. That is, the characteristics that qualified college graduates for future bureaucratic employment were a side effect of their pursuit of a college education. In this sense, then, the college credential had a substantive meaning for employers that justified them in using it as a criterion for employment — less for the human capital that college provided than for the social capital that college conferred on graduates. Therefore, this credential, Brown argues, served an important role in the labor market by reducing the uncertainty that plagued the process of bureaucratic hiring. After all, how else was an employer to gain some assurance that a candidate could do this kind of work? A college degree offered a claim to competence, which had enough substance behind it to be credible even if this substance was largely unrelated to the content of the college curriculum.

By the 1890s all the pieces were in place for a rapid expansion of college enrollments, strongly driven by credentialist pressures. Employers had reason to give preference to college graduates when hiring for management positions. As a result, middle-class families had an increasing incentive to provide their children with privileged access to an advantaged social position by sending them to college. For the students themselves, this extrinsic reward for attending college was reinforced by the intrinsic benefits accruing from an attractive social life on campus. All of this created a very strong demand for expanding college enrollments, and the pre-existing institutional conditions in higher education made it possible for colleges to respond to this demand in an aggressive fashion. There were a thousand independent institutions of higher education, accustomed to playing entrepreneurial roles in a competitive educational market, that were eager to capitalize on the surge of interest in attending college and to adapt themselves to the preferences of these new tuition-paying consumers. The result was a powerful and unrelenting surge of expansion in college enrollments that continued for the next century.

 

Brown provides a persuasive answer to the initial question about why American higher education expanded at such a rapid rate. But at this point the reader may well respond by asking the generic question that one should ask of any analyst, and that is, “So what?” More specifically, in light of the particular claims of this analysis, the question becomes: “What difference does it make that this expansion was spurred primarily by the pursuit of educational credentials?” In my view, at least, the answer to that question is clear. The impact of credentialism on both American society and the American educational system has been profound — profoundly negative. Consider some of the problems it has caused.

One major problem is that credentialism is astonishingly inefficient. Education is the largest single public investment made by most modern societies, and this investment is justified on the grounds that it makes a critically important contribution to the collective welfare. The public value of education is usually calculated as some combination of two types of benefits: the preparation of capable citizens (the political benefit) and the training of productive workers (the economic benefit). However, the credentialist argument advanced by Brown suggests that these public benefits are not necessarily being delivered and that the primary beneficiaries are in fact private individuals. From this perspective, higher education (and the educational system more generally) exists largely as a mechanism for providing individuals with a cultural commodity that will give them a substantial competitive advantage in the pursuit of social position. In short, education becomes little more than a vast public subsidy for private ambition.

The practical effect of this subsidy is the production of a glut of graduates. The difficulty posed by this outcome is not that the population becomes overeducated (since such a state is difficult to imagine) but that it becomes overcredentialed, since people are pursuing diplomas less for the knowledge they are thereby acquiring than for the access that the diplomas themselves will provide. The result is a spiral of credential inflation: as each level of education in turn gradually floods with a crowd of ambitious consumers, individuals have to keep seeking ever higher levels of credentials in order to move a step ahead of the pack. In such a system nobody wins. Consumers have to spend increasing amounts of time and money to gain additional credentials, since the swelling number of credential holders keeps lowering the value of credentials at any given level. Taxpayers find an increasing share of scarce fiscal resources going to support an educational chase with little public benefit. Employers keep raising the entry-level education requirements for particular jobs, but they still find that they have to provide extensive training before employees can carry out their work productively. At all levels, this is an enormously wasteful system, one that rich countries like the United States can increasingly ill afford and that less developed countries, which imitate the U.S. educational model, find positively impoverishing.

A second major problem is that credentialism undercuts learning. In both college and high school, students are all too well aware that their mission is to do whatever it takes to acquire a diploma, which they can then cash in on what really matters — a good job. This has the effect of reifying the formal markers of academic progress (grades, credits, and degrees) and encouraging students to focus their attention on accumulating these badges of merit for the exchange value they offer. But at the same time this means directing attention away from the substance of education, reducing student motivation to learn the knowledge and skills that constitute the core of the educational curriculum. Under such conditions, it is quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning. This perspective is almost perfectly captured by a common student question, one that sends chills down the back of the learning-centered teacher but that makes perfect sense for the credential-oriented student: “Is this going to be on the test?” (Sedlak et al., 1986, p. 182). We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system.

A third problem posed by credentialism is social and political more than educational. According to credentialing theory, the connection between social class and education is neither direct nor automatic, as suggested by social reproduction theory. Instead, the argument goes, market forces mediate between the class position of students and their access to and success within the educational system. That is, there is general competition for admission to institutions of higher education and for levels of achievement within these institutions. Class advantage is no guarantee of success in this competition, since such factors as individual ability, motivation, and luck all play a part in determining the result. Market forces also mediate between educational attainment (the acquisition of credentials) and social attainment (the acquisition of a social position). Some college degrees are worth more in the credentials market than others, and they provide privileged access to higher level positions independent of the class origins of the credential holder.

However, in both of these market competitions, one for acquiring the credential and the other for cashing it in, a higher class position provides a significant competitive edge. The economic, cultural, and social capital that comes with higher class standing gives the bearer an advantage in getting into college, in doing well at college, and in translating college credentials into desirable social outcomes. The market-based competition that characterizes the acquisition and disposition of educational credentials gives the process a meritocratic set of possibilities, but the influence of class on this competition gives it a socially reproductive set of probabilities as well. The danger is that, as a result, a credential-driven system of education can provide meritocratic cover for socially reproductive outcomes. In the single-minded pursuit of educational credentials, both student consumers and the society that supports them can lose sight of an all-too-predictable pattern of outcomes that is masked by the headlong rush for the academic gold.

Posted in Academic writing, Higher Education, History of education

The Lust for Academic Fame: America’s Engine for Scholarly Production

This post is an analysis of the engine for scholarly production in American higher education.  The issue is that the university is a unique work setting in which the usual organizational incentives don’t apply.  Administrators can’t offer much in the way of power and money as rewards for productive faculty and they also can’t do much to punish unproductive faculty who have tenure.  Yet in spite of this scholars keep cranking out the publications at a furious rate.  My argument is that the primary motive for publication is the lust for academic fame.

The piece was originally published in Aeon in December, 2018.


Gold among the dross

Academic research in the US is unplanned, exploitative and driven by a lust for glory. The result is the envy of the world

David F. Labaree

The higher education system is a unique type of organisation with its own way of motivating productivity in its scholarly workforce. It doesn’t need to compel professors to produce scholarship because they choose to do it on their own. This is in contrast to the standard structure for motivating employees in bureaucratic organisations, which relies on manipulating two incentives: fear and greed. Fear works by holding the threat of firing over the heads of workers in order to ensure that they stay in line: Do it my way or you’re out of here. Greed works by holding the prospect of pay increases and promotions in front of workers in order to encourage them to exhibit the work behaviours that will bring these rewards: Do it my way and you’ll get what’s yours.

Yes, in the United States contingent faculty can be fired at any time, and permanent faculty can be fired at the point of tenure. But, once tenured, there’s little other than criminal conduct or gross negligence that can threaten your job. And yes, most colleges do have merit pay systems that reward more productive faculty with higher salaries. But the differences are small – between the standard 3 per cent raise and a 4 per cent merit increase. Even though gaining consistent above-average raises can compound annually into substantial differences over time, the immediate rewards are pretty underwhelming. Not the kind of incentive that would motivate a major expenditure of effort in a given year – such as the kind that operates on Wall Street, where earning a million-dollar bonus is a real possibility. Academic administrators – chairs, deans, presidents – just don’t have this kind of power over faculty. It’s why we refer to academic leadership as an exercise in herding cats. Deans can ask you to do something, but they really can’t make you do it.

This situation is the norm for systems of higher education in most liberal democracies around the world. In more authoritarian settings, the incentives for faculty are skewed by particular political priorities, and in part for these reasons the institutions in those settings tend to be consigned to the lower tiers of international rankings. Scholarly autonomy is a defining characteristic of universities higher on the list.

If the usual extrinsic incentives of fear and greed don’t apply to academics, then what does motivate them to be productive scholars? One factor, of course, is that this population is highly self-selected. People don’t become professors in order to gain power and money. They enter the role primarily because of a deep passion for a particular field of study. They find that scholarship is a mode of work that is intrinsically satisfying. It’s more a vocation than a job. And these elements tend to be pervasive in most of the world’s universities.

But I want to focus on an additional powerful motivation that drives academics, one that we don’t talk about very much. Once launched into an academic career, faculty members find their scholarly efforts spurred on by more than a love of the work. We in academia are motivated by a lust for glory.

We want to be recognised for our academic accomplishments by earning our own little pieces of fame. So we work assiduously to accumulate a set of merit badges over the course of our careers, which we then proudly display on our CVs. This situation is particularly pervasive in the US system of higher education, which is organised more by the market than by the state. Market systems are especially prone to the accumulation of distinctions that define your position in the hierarchy. But European and other scholars are also engaged in a race to pick up honours and add lines to their CVs. It’s the universal obsession of the scholarly profession.

Take one prominent case in point: the endowed chair. A named professorship is a very big deal in the academic status order, a (relatively) scarce honour that supposedly demonstrates to peers that you’re a scholar of high accomplishment. It does involve money, but the chair-holder often sees little of it. A donor provides an endowment for the chair, which pays your salary and benefits, thus taking these expenses out of the operating budget – a big plus for the department, which saves a lot of money in the deal. And some chairs bring with them extra money that goes to the faculty member to pay for research expenses and travel.

But more often than not, the chair brings the occupant nothing at all but an honorific title, which you can add to your signature: the Joe Doakes Professor of Whatever. Once these chairs are in existence as permanent endowments, they never go away; instead they circulate among senior faculty. You hold the chair until you retire, and then it goes to someone else. In my own school, Stanford University, when the title passes to a new faculty member, that person receives an actual chair – one of those uncomfortable black wooden university armchairs bearing the school logo. On the back is a brass plaque announcing that ‘[Your Name] is the Joe Doakes Professor’. When you retire, they take away the title and leave you the physical chair. That’s it. It sounds like a joke – all you get to keep is this unusable piece of furniture – but it’s not. And faculty will kill to get this kind of honour.

This being the case, the academic profession requires a wide array of other forms of recognition that are more easily attainable and that you can accumulate the way you can collect Fabergé eggs. And they’re about as useful. Let us count the kinds of merit badges that are within the reach of faculty:

  • publication in high-impact journals and prestigious university presses;
  • named fellowships;
  • membership on review committees for awards and fellowships;
  • membership on editorial boards of journals;
  • journal editorships;
  • officer positions in professional organisations, which conveniently rotate on an annual basis and thus increase accessibility (in small societies, nearly everyone gets a chance to be president);
  • administrative positions in your home institution;
  • committee chairs;
  • a large number of awards of all kinds – for teaching, advising, public service, professional service, and so on: the possibilities are endless;
  • awards that particularly proliferate in the zone of scholarly accomplishment – best article/book of the year in a particular subfield by a senior/junior scholar; early career/lifetime-career achievement; and so on.

Each of these honours tells the academic world that you are the member of a purportedly exclusive club. At annual meetings of professional organisations, you can attach brightly coloured ribbons to your name tag that tell everyone you’re an officer or fellow of that organisation, like the badges that adorn military dress uniforms. As in the military, you can never accumulate too many of these academic honours. In fact, success breeds more success, as your past tokens of recognition demonstrate your fitness for future tokens of recognition.

Academics are unlike the employees of most organisations in that they fight over symbolic rather than material objects of aspiration, but they are like other workers in that they too are motivated by fear and greed. Instead of competing over power and money, they compete over respect. So far I’ve been focusing on professors’ greedy pursuit of various kinds of honours. But, if anything, fear of dishonour is an even more powerful motive for professorial behaviour. I aspire to gain the esteem of my peers but I’m terrified of earning their scorn.

Lurking in the halls of every academic department are a few furtive figures of scholarly disrepute. They’re the professors who are no longer publishing in academic journals, who have stopped attending academic conferences, and who teach classes that draw on the literature of yesteryear. Colleagues quietly warn students to avoid these academic ghosts, and administrators try to assign them courses where they will do the least harm. As an academic, I might be eager to pursue tokens of merit, but I am also desperate to avoid being lumped together with the department’s walking dead. Better to be an academic mediocrity, publishing occasionally in second-rate journals, than to be your colleagues’ archetype of academic failure.

The result of all this pursuit of honour and retreat from dishonour is a self-generating machine for scholarly production. No administrator needs to tell us to do it, and no one needs to dangle incentives in front of our noses as motivation. The pressure to publish and demonstrate academic accomplishment comes from within. College faculties become self-sustaining engines of academic production, in which we drive ourselves to demonstrate scholarly achievement without the administration needing to lift a finger or spend a dollar. What could possibly go wrong with such a system?

 

One problem is that faculty research productivity varies significantly according to what tier of the highly stratified structure of higher education professors find themselves in. Compared with systems of higher education in other countries, the US system is organised into a hierarchy of institutions that are strikingly different from each other. The top tier is occupied by the 115 universities that the Carnegie Classification labels as having the highest research activity, which represents only 2.5 per cent of the 4,700 institutions that grant college degrees. The next tier is doctoral universities with less of a research orientation, which account for 4.7 per cent of institutions. The third is an array of master’s level institutions often referred to as comprehensive universities, which account for 16 per cent. The fourth is baccalaureate institutions (liberal arts colleges), which account for 21 per cent. The fifth is two-year colleges, which account for 24 per cent. (The remaining 32 per cent are small specialised institutions that enrol only 5 per cent of all students.)

The number of publications by faculty members declines sharply as you move down the tiers of the system. One study shows how this works for professors in economics. The total number of refereed journal articles published per faculty member over the course of a career was 18.4 at research universities; 8.1 at comprehensive universities; 4.9 at liberal arts colleges; and 3.1 at all others. The decline in productivity is also sharply defined within the category of research universities. Another study looked at the top 94 institutions ranked by per-capita publications per year between 1991 and 1993. At the number-one university, average production was 12.7 per person per year; at number 20, it dropped off sharply to 4.6; at number 60, it was 2.4; and at number 94, it was 0.5.

Only 20 per cent of faculty serve at the most research-intensive universities (the top tier), where scholarly productivity is the highest. And as we can see, faculty at the lowest end of this top sliver of US universities are averaging only about one article every two years. The other 80 per cent are presumably publishing even more rarely than this, if indeed they are publishing at all. As a result, it seems that the incentive system for spurring faculty research productivity operates primarily at the very top levels of the institutional hierarchy. So why am I making such a big deal about US professors as self-motivated scholars?

The most illuminating way to understand the faculty incentive to publish is to look at the system from the point of view of the newly graduated PhD who is seeking a faculty position. These prospective scholars face some daunting mathematics. The 115 high-research universities produce the majority of research doctorates, but, as we have seen, 80 per cent of the jobs are at lower-level institutions. The most likely jobs are not at research universities but at comprehensive universities and four-year institutions. So most doctoral graduates entering the professoriate experience dramatic downward mobility.

It’s actually even worse than that. One study of sociology graduates shows that departments ranked in the top five select the majority of their faculty from top-five departments, but most top-five graduates end up in institutions ranked below the top 20. And a lot of prospective faculty never find a position at all. A 1999 study showed that, among recent grads who sought to become professors, only two-thirds had such a position after 10 years, and only half of these had earned tenure. And many of those who do find teaching positions are working part-time, a category that in 2005 accounted for 48 per cent of all college faculty.

The prospect of a dramatic drop in academic status and the possibility of failing to find any academic job do a lot to concentrate the mind of the recent doctoral graduate. Fear of falling compounded by fear of total failure works wonders in motivating novice scholars to become flywheels of productivity. From their experience in grad school, they know that life at the highest level of the system is very good for faculty, but the good times fade fast as you move to lower levels. At every step down the academic ladder, the pay is less, the teaching loads are higher, graduate students are fewer, research support is less, and student skills are lower.

In a faculty system where academic status matters more than material benefits, the strongest signal of the status you have as a professor is the institution where you work. Your academic identity is strongly tied to your letterhead. And in light of the kind of institution where most new professors find themselves, they start hearing a loud, clear voice saying: ‘I deserve better.’

So the mandate is clear. As a grad student, you need to write your way to an academic job. And when you get a job at an institution far down the hierarchy, you need to write your way to a better job. You experience a powerful incentive to claw your way back up the academic ladder to an institution as close as possible to the one that recently graduated you. The incentive to publish is baked in from the very beginning.

One result of this Darwinian struggle to regain one’s rightful place at the top of the hierarchy is that a large number of faculty fall by the wayside without attaining their goal. Dashed dreams are the norm for large numbers of actors. This can leave a lot of bitter people occupying the middle and lower tiers of the system, and it can saddle students with professors who would really rather be somewhere else. That’s a high cost for the process that supports the productivity of scholars at the system’s pinnacle.

 

Another potential problem with my argument about the self-generating incentive for professors to publish is that the work produced by scholars is often distinguished more by its quantity than by its quality. Put another way, a lot of the work that appears in print doesn’t seem worth the effort required to read it, much less to produce it. Under these circumstances, the value of the incentive structure seems doubtful.

Consider some of the ways in which contemporary academic production promotes quantity over quality. One familiar technique is known as ‘salami slicing’. The idea here is simple. Take one study and divide it up into pieces that can each be published separately, so it leads to multiple entries in your CV. The result is an accumulation of trivial bits of a study instead of a solid contribution to the literature.

Another approach is to inflate co-authorship. Multiple authors make sense in some ways. Large projects often involve a large number of scholars and, in the sciences in particular, a long list of authors is de rigueur. Fine, as long as everyone on the list made a significant contribution to the research. But co-authorship often comes for reasons of power rather than scholarly contribution. It has become normal for anyone who compiled a dataset to demand co-authorship for any papers that draw on the data, even if the data-owner added nothing to the analysis in the paper. Likewise, the principal investigator of a project might insist on being included in the author list for any publications that come from this project. More lines on the CV.

Yet another way to increase the number of publications is to increase the number of journals. By one count, as of 2014 there were 28,100 scholarly peer-reviewed journals. Consider the mathematics. There are about 1 million faculty members at US colleges and universities at the BA level and higher, so that means there are about 36 prospective authors for each of these journals. A lot of these enterprises act as club journals. The members of a particular sub-area of a sub-field set up a journal where members of the club engage in a practice that political scientists call log-rolling. I review your paper and you review mine, so everyone gets published. Edited volumes work much the same way. I publish your paper in my book, and you publish mine in yours.

A lot of journal articles are also written in a highly formulaic fashion, which makes it easy to produce lots of papers without breaking an intellectual sweat. The standard model for this kind of writing is known as IMRaD. This acronym stands for the four canonical sections of every paper: introduction (what’s it about and what’s the literature behind it?); methods (how did I do it?); results (what are my findings?); and discussion (what does it mean?). All you have to do as a writer is to write the same paper over and over, introducing bits of new content into the tried and true formula.

The result of all this is that the number of scholarly publications is enormous and growing daily. One estimate shows that, since the first science papers were published in the 1600s, the total number of papers in science alone passed the 50 million mark in 2009; 2.5 million new science papers are published each year. How many of them do you think are worth reading? How many make a substantive contribution to the field?

 

OK, so I agree. A lot of scholarly publications – maybe most such publications – are less than stellar. Does this matter? In one sense, yes. It’s sad to see academic scholarship fall into a state where the accumulation of lines on a CV matters more than producing quality work. And think of all the time wasted reviewing papers that should never have been written, and think of how this clutters and trivialises the literature with contributions that don’t contribute.

But – hesitantly – I suggest that the incentive system for faculty publication still provides net benefits for both academy and society. I base this hope on my own analysis of the nature of the US academic system itself. Keep in mind that US higher education is a system without a plan. No one designed it and no one oversees its operation. It’s an emergent structure that arose in the 19th century under unique conditions in the US – when the market was strong, the state was weak, and the church was divided.

Under these circumstances, colleges emerged as private not-for-profit enterprises that had a state charter but little or no state funding. And, for the most part, they arose for reasons that had less to do with higher learning than with the extrinsic benefits a college could bring. As a result, the system grew from the bottom up. By the time state governments started putting up their own institutions, and the federal government started funding land-grant colleges, this market-based system was already firmly in place. Colleges were relatively autonomous enterprises that had found a way to survive without steady support from either church or state. They had to attract and retain students in order to bring in tuition dollars, and they had to make themselves useful both to these students and to elites in the local community, both of whom would then make the donations that kept the colleges in operation. This autonomy was an accident, not a plan, but by the 20th century it became a major source of strength. It promoted a system that was entrepreneurial and adaptive, able to take advantage of possibilities in the environment. More responsive to consumers and community than to the state, institutions managed to avoid the kind of top-down governance that might have stifled the system’s creativity.

The point is this: compared with planned organisational structures, emergent structures are inefficient at producing socially useful results. They’re messy by nature, and they pursue their own interests rather than following directions from above according to a plan. But as we have seen with market-based economies compared with state-planned economies, the messy approach can be quite beneficial. Entrepreneurs in the economy pursue their own profit rather than trying to serve the public good, but the side-effect of their activities is often to provide such benefits inadvertently, by increasing productivity and improving the general standard of living. A similar argument can be made about the market-based system of US higher education. Maybe it’s worth tolerating the gross inefficiency of a university system that is charging off in all directions, with each institution trying to advance itself in competition with the others. The result is a system that is the envy of the world, a world where higher education is normally framed as a pure state function under the direct control of the state education ministry.

This analysis applies as well to the professoriate. The incentive structure for US faculty encourages individual professors to be entrepreneurial in pursuing their academic careers. They need to publish in order to win honours for themselves and to avoid dishonour. As a result, they end up publishing a lot of work that is more useful to their own advancement (lines on a CV) than to the larger society. Also, following from the analysis of the first problem I introduced, an additional cost of this system is the large number of faculty who fall by the wayside in the effort to write their way into a better job. The success of the system of scholarly production at the top is based on the failed dreams of most of the participants.

But maybe it’s worth tolerating a high level of dross in the effort to produce scholarly gold – even if this is at the expense of many of the scholars themselves. Planned research production, operating according to mandates and incentives descending from above, is no more effective at producing the best scholarship than are five-year plans in producing the best economic results. At its best, the university is a place that gives maximum freedom for faculty to pursue their interests and passions in the justified hope that they will frequently come up with something interesting and possibly useful, even if this value is not immediately apparent. They’re institutions that provide answers to problems that haven’t yet developed, storing up both the dross and the gold until such time as we can determine which is which.

 

Posted in Higher Education, Populism, Sports

Nobel prizes are great, but college football is why American universities dominate the globe

This post is a reprint of a piece I published in Quartz in 2017.  Here’s a link to the original.  It’s an effort to explore the distinctively populist character of American higher education. 

The idea is that a key to understanding the strong public support that US colleges and universities have managed to generate is their ability to reach beyond the narrow constituency for their elevated intellectual accomplishments.  The magic is that they are elite institutions that can also appeal to the populace.  And the peculiar world of college football provides a window into how the magic works.

If you drove around the state of Michigan, where I used to live, you would see farmers on tractors wearing caps that typically bore the logo of either the University of Michigan (maize and blue) or Michigan State (green and white).  Maybe they or their kids attended the school, or maybe they were using its patented seed; but more often than not, it was because they were rooting for the football team.  It’s hard to overestimate the value for the higher ed system of drawing a broad base of popular support.


Nobel prizes are great, but college football is why American universities dominate the globe

David F. Labaree

 

College football costs too much. It exploits players and even damages their brains. It glorifies violence and promotes a thuggish brand of masculinity. And it undermines the college’s academic mission.

We hear this a lot, and much of it is true. But consider, for the moment, that football may help explain how the American system of higher education has become so successful. According to rankings computed by Jiao Tong University in Shanghai, American institutions account for 32 of the top 50 and 16 of the top 20 universities in the world. Also, between 2000 and 2014, 49% of all Nobel recipients were scholars at US universities.

In doing research for a book about the American system of higher education, I discovered that the key to its strength has been its ability to combine elite scholarship with populist appeal. And football played a key role in creating this alchemy.

American colleges developed their skills at attracting consumers and local supporters in the early nineteenth century, when the basic elements of the higher education system came together.

These colleges emerged in a very difficult environment, when the state was weak, the market strong, and the church divided. Unlike their European forebears, which could depend on funding from the state or the established church, American colleges arose as not-for-profit corporations that received only sporadic funding from church denominations and state governments and instead had to rely on students and local donors. Often adopting the name of the town where they were located, these colleges could survive, much less thrive, only if they were able to attract and retain students from nearby towns and draw donations from alumni and local citizens.

In this quest, American colleges and universities have been uniquely and spectacularly successful. Go to any American campus and you will see that nearly everyone seems to be wearing the brand—the school colors, the logo, the image of the mascot, team jerseys. Unlike their European counterparts, American students don’t just attend an institution of higher education; they identify with it. It’s not just where they enroll; it’s who they are. In the US, the first question that twenty-year-old strangers ask each other is “Where do you go to college?” And half the time the question is moot because the speakers are wearing their college colors.

Football, along with other intercollegiate sports, has been enormously helpful in building the college brand. It helps draw together all of the members of the college community (students, faculty, staff, alumni, and local fans) in opposition to the hated rival in the big game. It promotes a loyalty that lasts for a lifetime, which translates into a broad political base for gaining state funding and a steady flow of private donations.

Thus one advantage that football brings to the American university is financial. It’s not that intercollegiate sports turn a large profit; in fact, the large majority lose money. Instead, it’s that they help mobilize a stream of public and private funding. And now that state appropriations for public higher education are in steady decline, public universities, like their private counterparts, are increasingly dependent on private funding.

Another advantage that football brings is that it softens the college’s elitism. Even the most elite American public research universities (Michigan, Wisconsin, Berkeley, UCLA) have a strong populist appeal. The same is true of a number of elite privates (think Stanford, Vanderbilt, Duke, USC). In large part this comes from their role as a venue for popular entertainment supported by their accessibility to a large number of undergraduate students. As a result, the US university has managed to avoid much of the social elitism of British institutions such as Oxford and Cambridge and the academic elitism of the German university dominated by the graduate school. Go to a college town on game day, and nearly every car, house, and person is sporting the college colors.

This broad support is particularly important these days, now that the red-blue political divide has begun to affect colleges as well. A recent study showed that, while most Americans still believe that colleges have a positive influence on the country, 58% of Republicans do not. History strongly suggests that football is going to be more effective than Nobel prizes in winning back their loyalty.

So let’s hear it for college football. It’s worth two cheers at least.

Posted in Economic growth, Education policy, Higher Education

Hausmann: The Education Myth

In this post I reprint a piece by Ricardo Hausmann (an economist at Harvard’s Kennedy School), which was published in Project Syndicate in 2015. Here’s a link to the original.  If you can’t get past the paywall, here’s a link to a PDF.

What I like about this piece is the way Hausmann challenges a central principle that guides educational policy, both domestic and international.  This is the belief that education is the central engine of economic growth.  According to this credo, increasing education is how we can increase productivity, GDP, and standard of living.  Hausmann shows, however, that the impact of education on economic growth is a lot less than promised.  Other factors appear to be more important than education at expanding economies, so investing in these strategies may be a lot more efficient than the costly process of increasing access to tertiary education.

As the former chief economist at the Inter-American Development Bank and the head of the Harvard Growth Lab, he seems to know something about this subject.  See what you think.


The Education Myth

TIRANA – In an era characterized by political polarization and policy paralysis, we should celebrate broad agreement on economic strategy wherever we find it. One such area of agreement is the idea that the key to inclusive growth is, as then-British Prime Minister Tony Blair put it in his 2001 reelection campaign, “education, education, education.” If we broaden access to schools and improve their quality, economic growth will be both substantial and equitable.

As the Italians would say: magari fosse vero. If only it were true. Enthusiasm for education is perfectly understandable. We want the best education possible for our children, because we want them to have a full range of options in life, to be able to appreciate its many marvels and participate in its challenges. We also know that better educated people tend to earn more.

Education’s importance is incontrovertible – teaching is my day job, so I certainly hope it is of some value. But whether it constitutes a strategy for economic growth is another matter. What most people mean by better education is more schooling; and, by higher-quality education, they mean the effective acquisition of skills (as revealed, say, by the test scores in the OECD’s standardized PISA exam). But does that really drive economic growth?

In fact, the push for better education is an experiment that has already been carried out globally. And, as my Harvard colleague Lant Pritchett has pointed out, the long-term payoff has been surprisingly disappointing.

In the 50 years from 1960 to 2010, the global labor force’s average time in school essentially tripled, from 2.8 years to 8.3 years. This means that the average worker in a median country went from less than half a primary education to more than half a high school education.

How much richer should these countries have expected to become? In 1965, France had a labor force that averaged less than five years of schooling and a per capita income of $14,000 (at 2005 prices). In 2010, countries with a similar level of education had a per capita income of less than $1,000.

In 1960, countries with an education level of 8.3 years of schooling were 5.5 times richer than those with 2.8 years of schooling. By contrast, countries that had increased their education from 2.8 years of schooling in 1960 to 8.3 years of schooling in 2010 were only 167% richer. Moreover, much of this increase cannot possibly be attributed to education, as workers in 2010 had the advantage of technologies that were 50 years more advanced than those in 1960. Clearly, something other than education is needed to generate prosperity.

As is often the case, the experience of individual countries is more revealing than the averages. China started with less education than Tunisia, Mexico, Kenya, or Iran in 1960, and had made less progress than them by 2010. And yet, in terms of economic growth, China blew all of them out of the water. The same can be said of Thailand and Indonesia vis-à-vis the Philippines, Cameroon, Ghana, or Panama. Again, the fast growers must be doing something in addition to providing education.

The experience within countries is also revealing. In Mexico, the average income of men aged 25-30 with a full primary education differs by more than a factor of three between poorer municipalities and richer ones. The difference cannot possibly be related to educational quality, because those who moved from poor municipalities to richer ones also earned more.

And there is more bad news for the “education, education, education” crowd: Most of the skills that a labor force possesses were acquired on the job. What a society knows how to do is known mainly in its firms, not in its schools. At most modern firms, fewer than 15% of the positions are open for entry-level workers, meaning that employers demand something that the education system cannot – and is not expected to – provide.

When presented with these facts, education enthusiasts often argue that education is a necessary but not a sufficient condition for growth. But in that case, investment in education is unlikely to deliver much if the other conditions are missing. After all, though the typical country with ten years of schooling had a per capita income of $30,000 in 2010, per capita income in Albania, Armenia, and Sri Lanka, which have achieved that level of schooling, was less than $5,000. Whatever is preventing these countries from becoming richer, it is not lack of education.

A country’s income is the sum of the output produced by each worker. To increase income, we need to increase worker productivity. Evidently, “something in the water,” other than education, makes people much more productive in some places than in others. A successful growth strategy needs to figure out what this is.

Make no mistake: education presumably does raise productivity. But to say that education is your growth strategy means that you are giving up on everyone who has already gone through the school system – most people over 18, and almost all over 25. It is a strategy that ignores the potential that is in 100% of today’s labor force, 98% of next year’s, and a huge number of people who will be around for the next half-century. An education-only strategy is bound to make all of them regret having been born too soon.

This generation is too old for education to be its growth strategy. It needs a growth strategy that will make it more productive – and thus able to create the resources to invest more in the education of the next generation. Our generation owes it to theirs to have a growth strategy for ourselves. And that strategy will not be about us going back to school.

Ricardo Hausmann, a former minister of planning of Venezuela and former Chief Economist at the Inter-American Development Bank, is a professor at Harvard’s John F. Kennedy School of Government and Director of the Harvard Growth Lab.

Posted in Higher Education, History

The Exceptionalism of American Higher Education

This post is an op-ed I published on my birthday (May 17) in 2018 on the online international opinion site, Project Syndicate.  The original is hidden behind a paywall; here are PDFs in English, Spanish, and Arabic.

It’s a brief essay about what is distinctive about the American system of higher education, drawn from my book, A Perfect Mess: The Unlikely Ascendancy of American Higher Education.


The Exceptionalism of American Higher Education

 By David F. Labaree

STANFORD – In the second half of the twentieth century, American universities and colleges emerged as dominant players in the global ecology of higher education, a dominance that continues to this day. In terms of the number of Nobel laureates produced, eight of the world’s top ten universities are in the United States. Forty-two of the world’s 50 largest university endowments are in America. And, when ranked by research output, 15 of the top 20 institutions are based in the US.

Given these metrics, few can dispute that the American model of higher education is the world’s most successful. The question is why, and whether the US approach can be exported.

While America’s oldest universities date to the seventeenth and eighteenth centuries, the American system of higher education took shape in the early nineteenth century, under conditions in which the market was strong, the state was weak, and the church was divided. The “university” concept first arose in medieval Europe, with the strong support of monarchs and the Catholic Church. But in the US, with the exception of American military academies, the federal government never succeeded in establishing a system of higher education, and states were too poor to provide much support for colleges within their borders.

In these circumstances, early US colleges were nonprofit corporations that had state charters but little government money. Instead, they relied on student tuition, as well as donations from local elites, most of whom were more interested in how a college would increase the value of their adjoining property than they were in supporting education.

As a result, most US colleges were built on the frontier rather than in cities; the institutions were used to attract settlers to buy land. In this way, the first college towns were the equivalent of today’s golf-course developments – verdant enclaves that promised a better quality of life. At the same time, religious denominations competed to sponsor colleges in order to plant their own flags in new territories.

What this competition produced was a series of small, rural, and underfunded colleges led by administrators who had to learn to survive in a highly competitive environment where supply long preceded demand. As a result, schools were positioned to capitalize on the modest advantages they did have. Most were highly accessible (there was one in nearly every town), inexpensive (competition kept a lid on tuition), and geographically specific (colleges often became avatars for towns whose names they took). By 1880, there were five times as many colleges and universities in the US as in all of Europe.

The unintended consequence of this early saturation was a radically decentralized system of higher education that fostered a high degree of autonomy. The college president, though usually a clergyman, was in effect the CEO of a struggling enterprise that needed to attract and retain students and donors. Although university presidents often begged for, and occasionally received, state money, government funding was neither sizeable nor reliable.

In the absence of financial security, these educational CEOs had to hustle. They were good at building long-term relationships with local notables and tuition-paying students. Once states began opening public colleges in the mid-nineteenth century, the new institutions adapted to the existing system. State funding was still insufficient, so leaders of public colleges needed to attract tuition from students and donations from graduates.

By the start of the twentieth century, when enrollments began to climb in response to a growing demand for white-collar workers, the mixed public-private system was set to expand. Local autonomy gave institutions the freedom to establish a brand in the marketplace, and in the absence of strong state control, university leaders positioned their institutions to pursue opportunities and adapt to changing conditions. As funding for research grew after World War II, college administrators started competing vigorously for these new sources of support.

By the middle of the twentieth century, the US system of higher education reached maturity, as colleges capitalized on decentralized and autonomous governance structures to take advantage of the lush opportunities for growth that arose during the Cold War. Colleges were able to leverage the public support they had developed during the long lean years, when a university degree was highly accessible and cheap. With the exception of the oldest New England colleges – the “Ivies” – American universities never developed the elitist aura of Old World institutions like Oxford and Cambridge. Instead, they retained a populist ethos – embodied in football and fraternities and flexible academic standards – that continues to serve them well politically.

So, can other systems of higher learning adapt the US model of educational excellence to local conditions? The answer is straightforward: no.  You had to be there.

In the twenty-first century, it is not possible for colleges to emerge with the same degree of autonomy that American colleges enjoyed some 200 years ago before the development of a strong nation state. Today, most non-American institutions are wholly-owned subsidiaries of the state; governments set priorities, and administrators pursue them in a top-down manner. By contrast, American universities have retained the spirit of independence, and faculty are often given latitude to channel entrepreneurial ideas into new programs, institutes, schools, and research. This bottom-up structure makes the US system of higher education costly, consumer-driven, and deeply stratified. But this is also what gives the system its global edge.

 

Posted in Academic writing, Higher Education, Teaching, Writing

I Would Rather Do Anything Else than Grade Your Final Papers — Robin Lee Mozer

If the greatest joy that comes from retirement is that I no longer have to attend faculty meetings, the second greatest joy is that I no longer have to grade student papers.  I know, I know: commenting on student writing is a key component of being a good teacher, and there’s a real satisfaction that comes from helping someone become a better thinker and better writer.

But most students are not producing papers to improve their minds or hone their writing skills.  They’re just trying to fulfill a course requirement and get a decent grade.  And this creates a strong incentive not for excellence but for adequacy.  It encourages people to devote most of their energy toward gaming the system.

The key skill is to produce something that looks and feels like a good answer to the exam question or a good analysis of an intellectual problem.  Students have a powerful incentive to achieve the highest grade for the lowest investment of time and intellectual effort.  This means aiming for quantity over quality (puff up the prose to hit the word count) and form over substance (dutifully refer to the required readings without actually drawing meaningful content from them).  Glibness provides useful cover for the absence of content.  It’s depressing to observe how the system fosters discursive means that undermine the purported aims of education.

Back in the days when students turned in physical papers and then received them back with handwritten comments from the instructor, I used to get a twinge in my stomach when I saw that most students didn’t bother to pick up their final papers from the box outside my office.  I felt like a sucker for providing careful comments that no one would ever see.  At one point I even asked students to tell me in advance if they wanted their papers back, so I only commented on the ones that might get read.  But this was even more depressing, since it meant that a lot of students didn’t even mind letting me know that they really only cared about the grade.  The fiction of doing something useful was what helped keep me going.

So, like many other faculty, I responded with joy to a 2016 piece that Robin Lee Mozer wrote in McSweeney’s called “I Would Rather Do Anything Else than Grade Your Final Papers.”  As a public service to teachers everywhere, I’m republishing her essay here.  Enjoy.

 

I WOULD RATHER DO ANYTHING ELSE THAN GRADE YOUR FINAL PAPERS

Dear Students Who Have Just Completed My Class,

I would rather do anything else than grade your Final Papers.

I would rather base jump off of the parking garage next to the student activity center or eat that entire sketchy tray of taco meat leftover from last week’s student achievement luncheon that’s sitting in the department refrigerator or walk all the way from my house to the airport on my hands than grade your Final Papers.

I would rather have a sustained conversation with my grandfather about politics and government-supported healthcare and what’s wrong with the system today and why he doesn’t believe in homeowner’s insurance because it’s all a scam than grade your Final Papers. Rather than grade your Final Papers, I would stand in the aisle at Lowe’s and listen patiently to All the Men mansplain the process of buying lumber and how essential it is to sight down the board before you buy it to ensure that it’s not bowed or cupped or crook because if you buy lumber with defects like that you’re just wasting your money even as I am standing there, sighting down a 2×4 the way my father taught me 15 years ago.

I would rather go to Costco on the Friday afternoon before a three-day weekend. With my preschooler. After preschool.

I would rather go through natural childbirth with twins. With triplets. I would rather take your chemistry final for you. I would rather eat beef stroganoff. I would rather go back to the beginning of the semester like Sisyphus and recreate my syllabus from scratch while simultaneously building an elaborate class website via our university’s shitty web-based course content manager and then teach the entire semester over again than grade your goddamn Final Papers.

I do not want to read your 3AM-energy-drink-fueled excuse for a thesis statement. I do not want to sift through your mixed metaphors, your abundantly employed logical fallacies, your incessant editorializing of your writing process wherein you tell me As I was reading through articles for this paper I noticed that — or In the article that I have chosen to analyze, I believe the author is trying to or worse yet, I sat down to write this paper and ideas kept flowing into my mind as I considered what I should write about because honestly, we both know that the only thing flowing into your mind were thoughts of late night pizza or late night sex or late night pizza and sex, or maybe thoughts of that chemistry final you’re probably going to fail later this week and anyway, you should know by now that any sentence about anything flowing into or out of or around your blessed mind won’t stand in this college writing classroom or Honors seminar or lit survey because we are Professors and dear god, we have Standards.

I do not want to read the one good point you make using the one source that isn’t Wikipedia. I do not want to take the time to notice that it is cited properly. I do not want to read around your 1.25-inch margins or your gauche use of size 13 sans serif fonts when everyone knows that 12-point Times New Roman is just. Fucking. Standard. I do not want to note your missing page numbers. Again. For the sixth time this semester. I do not want to attempt to read your essay printed in lighter ink to save toner, as you say, with the river of faded text from a failing printer cartridge splitting your paper like Charlton Heston in The Ten Commandments, only there, it was a sea and an entire people and here it is your vague stand-in for an argument.

I do not want to be disappointed.

I do not want to think less of you as a human being because I know that you have other classes and that you really should study for that chemistry final because it is organic chemistry and everyone who has ever had a pre-med major for a roommate knows that organic chemistry is the weed out course and even though you do not know this yet because you have never even had any sort of roommate until now, you are going to be weeded out. You are going to be weeded out and then you will be disappointed and I do not want that for you. I do not want that for you because you will have enough disappointments in your life, like when you don’t become a doctor and instead become a philosophy major and realize that you will never make as much money as your brother who went into some soul-sucking STEM field and landed some cushy government contract and made Mom and Dad so proud and who now gives you expensive home appliances like espresso machines and Dyson vacuums for birthday gifts and all you ever send him are socks and that subscription to that shave club for the $6 middle-grade blades.

I do not want you to be disappointed. I would rather do anything else than disappoint you and crush all your hopes and dreams —

Except grade your Final Papers.

The offer to take your chemistry final instead still stands.

Posted in Capitalism, Global History, Higher Education, History, State Formation, Theory

Escape from Rome: How the Loss of Empire Spurred the Rise of Modernity — and What this Suggests about US Higher Ed

This post is a brief commentary on historian Walter Scheidel’s latest book, Escape from Rome.  It’s a stunningly original analysis of a topic that has long fascinated scholars like me:  How did Europe come to create the modern world?  His answer is this:  Europe became the cauldron of modernity and the dominant power in the world because of the collapse of the Roman empire — coupled with the fact that no other power was able to replace it for the next millennium.  The secret of European success was the absence of central control.  This is what led to the extraordinary inventions that characterized modernity — in technology, energy, war, finance, governance, science, and economy.

Below I lay out central elements of his argument, providing a series of salient quotes from the text to flesh out the story.  In the last few years I’ve come to read books exclusively on Kindle and EPUB, which allows me to copy passages that catch my interest into Evernote for future reference. So that’s where these quotes come from and why they don’t include page numbers.

At the end, I connect Scheidel’s analysis with my own take on the peculiar history of US higher education, as spelled out in my book A Perfect Mess.  My argument parallels his, showing how the US system arose in the absence of a strong state and dominant church, which fostered creative competition among colleges for students and money.  Out of this unpromising mess of institutions emerged a system of higher ed that came to dominate the academic world.

Escape from Rome

Here’s how Scheidel describes the consequences for Europe that arose from the fall of Rome and the long-time failure of efforts to impose a new empire there.

I argue that a single condition was essential in making the initial breakthroughs possible: competitive fragmentation of power. The nursery of modernity was riven by numerous fractures, not only by those between the warring states of medieval and early modern Europe but also by others within society: between state and church, rulers and lords, cities and magnates, knights and merchants, and, most recently, Catholics and Protestants. This often violent history of conflict and compromise was long but had a clear beginning: the fall of the Roman empire that had lorded it over most of Europe, much as successive Chinese dynasties lorded it over most of East Asia. Yet in contrast to China, nothing like the Roman empire ever returned to Europe.

Recurrent empire on European soil would have interfered with the creation and flourishing of a stable state system that sustained productive competition and diversity in design and outcome. This made the fall and lasting disappearance of hegemonic empire an indispensable precondition for later European exceptionalism and thus, ultimately, for the making of the modern world we now inhabit.

From this developmental perspective, the death of the Roman empire had a much greater impact than its prior existence and the legacy it bequeathed to later European civilization.

Contrast this with China, where dynasties rose and fell but where empire was a constant until the start of the 20th century.  It’s an extension of an argument that others, such as David Landes, have developed about the creative possibilities unleashed by a competitive state system in comparison to the stability and stasis of an imperial power.  Think about the relative stagnation of the Ottoman, Austro-Hungarian, and Russian empires in the 17th, 18th, and 19th centuries compared with the dynamic emerging nation states of Western Europe.  Think also of the paradox within Western Europe, in which the drivers of modernization came not from the richest and strongest imperial powers — Spain, France, and Austria — but from the marginal kingdom of England and the tiny Dutch republic.

The comparison between Europe and China during the second half of the first millennium is telling:

Two things matter most. One is the unidirectional character of European developments compared to the back and forth in China. The other is the level of state capacity and scale from and to which these shifts occurred. If we look at the notional endpoints of around 500 and 1000 CE, the dominant trends moved toward imperial restoration in China and toward inter-and intrastate fragmentation in Europe.

Scheidel shows how social power fragmented after the fall of Rome in a way that made it impossible for a new hegemonic power to emerge.

After Rome’s collapse, the four principal sources of social power became increasingly unbundled. Political power was claimed by monarchs who gradually lost their grip on material resources and thence on their subordinates. Military power devolved upon lords and knights. Ideological power resided in the Catholic Church, which fiercely guarded its long-standing autonomy even as its leadership was deeply immersed in secular governance and the management of capital and labor. Economic power was contested between feudal lords and urban merchants and entrepreneurs, with the latter slowly gaining the upper hand. In the heyday of these fractures, in the High Middle Ages, weak kings, powerful lords, belligerent knights, the pope and his bishops and abbots, and autonomous capitalists all controlled different levers of social power. Locked in unceasing struggle, they were compelled to cooperate and compromise to make collective action possible.

He points out that “The Christian church was the most powerful and enduring legacy of the Roman empire,” becoming “Europe’s only functioning international organization.”  But in the realms of politics, war, and economy the local element was critical, which produced a situation where local innovation could emerge without interference from higher authority.

The rise of estates and the communal movement shared one crucial characteristic: they produced bodies such as citizen communes, scholarly establishments, merchant guilds, and councils of nobles and commoners that were, by necessity, relatively democratic in the sense that they involved formalized deliberative and consensus-building interactions. Over the long run, these bodies gave Latin Europe an edge in the development of institutions for impersonal exchange that operated under the rule of law and could be scaled up in response to technological change.

Under these circumstances, the states that started to emerge in Europe in the middle ages built on the base of distributed power and local initiative that developed in the vacuum left by the Roman Empire.

As state power recoalesced in Latin Europe, it did so restrained by the peculiar institutional evolution and attendant entitlements and liberties that this acutely fractured environment had engendered and that—not for want of rulers’ trying—could not be fully undone. These powerful medieval legacies nurtured the growth of a more “organic” version of the state—as opposed to the traditional imperial “capstone” state—in close engagement with organized representatives of civil society.

Two features were thus critical: strong local government and its routinized integration into polity-wide institutions, which constrained both despotic power and aristocratic autonomy, and sustained interstate conflict. Both were direct consequences of the fading of late Roman institutions and the competitive polycentrism born of the failure of hegemonic empire. And both were particularly prominent in medieval England: the least Roman of Western Europe’s former Roman provinces, it experienced what with the benefit of hindsight turned out to be the most propitious initial conditions for future transformative development.

The Pax Romana was replaced by a nearly constant state of war, with the proliferation of castle building and the dispersion of military capacity at the local level.  These wars were devastating for the participants but became a primary spur for technological, political, and economic innovation.  Everyone needed to develop an edge to help with the inevitable coming conflict.

After the Reformation, the small, marginal Protestant states on the North Sea enjoyed a paradoxical advantage in the early modern period, when Catholic Spain, France, and Austria were developing increasingly strong centralized states.  Their marginality allowed them to build most effectively on the inherited medieval model.

…it made a difference that the North Sea region was alone in preserving medieval decentralized political structures and communitarian legacies and building on them during the Reformation while more authoritarian monarchies rose across much of the continent—what Jan Luiten van Zanden deems “an unbroken democratic tradition” from the communal movement of the High Middle Ages to the Dutch Revolt and England’s Glorious Revolution.

England in particular benefited from the differential process of development in Europe.

Yet even as a comprehensive balance sheet remains beyond our reach, there is a case to be made that the British economy expanded and modernized in part because of rather than in spite of the tremendous burdens of war, taxation, and protectionism. By focusing on trade and manufacture as a means of strengthening the state, Britain’s elites came to pursue developmental policies geared toward the production of “goods with high(er) added value, that were (more) knowledge and capital intensive and that were better than those of foreign competitors so they could be sold abroad for a good price.”

Thanks to a combination of historical legacies and geography, England and then Britain happened to make the most of their pricey membership in the European state system. Economic growth had set in early; medieval integrative institutions and bargaining mechanisms were preserved and adapted to govern a more cohesive state; elite commitments facilitated high levels of taxation and public debt; and the wars that mattered most were won.

Reduced to its essentials, the story of institutional development followed a clear arc. In the Middle Ages, the dispersion of power within polities constrained the intensity of interstate competition by depriving rulers of the means to engage in sustained conflict. In the early modern period, these conditions were reversed. Interstate conflict escalated as diversity within states diminished and state capacity increased. Enduring differences between rival polities shaped and were in turn shaped by the ways in which elements of earlier domestic heterogeneity, bargaining and balancing survived and influenced centralization to varying degrees. The key to success was to capitalize on these medieval legacies in maximizing internal cohesion and state capacity later. This alone made it possible to prevail in interstate conflict without adopting authoritarian governance that stifled innovation. The closest approximations of this “Goldilocks scenario” could be found in the North Sea region, first in the Netherlands and then in England.

As maritime European states (England, Spain, Portugal, and the Dutch Republic) spread out across the globe, the competition increased exponentially — which then provided even stronger incentives for innovation at all levels of state and society.

Polycentrism was key. Interstate conflict did not merely foster technological innovation in areas such as ship design and weaponry that proved vital for global expansion, it also raised the stakes by amplifying both the benefits of overseas conquest and its inverse, the costs of opportunities forgone: successful ventures deprived rivals from rewards they might otherwise have reaped, and vice versa. States played a zero-sum game: their involvements overseas have been aptly described as “a competitive process driven as much by anxiety over loss as by hope of gain.”

In conclusion, Scheidel argues that bloody and costly conflict among competing states was the source of rapid modernization and the rise of European domination of the globe.

I am advocating a perspective that steers clear of old-fashioned triumphalist narratives of “Western” exceptionalism and opposing denunciations of colonialist victimization. The question is not who did what to whom: precisely because competitive fragmentation proved so persistent, Europeans inflicted horrors on each other just as liberally as they meted them out to others around the globe. Humanity paid a staggering price for modernity. In the end, although this may seem perverse to those of us who would prefer to think that progress can be attained in peace and harmony, it was ceaseless struggle that ushered in the most dramatic and exhilaratingly open-ended transformation in the history of our species: the “Great Escape.” Long may it last.

I strongly recommend that you read this book.  There’s insight and provocation on every page.

The Parallel with the History of US Higher Education

As I mentioned at the beginning, my own analysis of the emergence of American higher ed tracks nicely with Scheidel’s analysis of Europe after the fall of Rome.  US colleges arose in the early 19th century under conditions where the state was weak, the church divided, and the market strong.  In the absence of a strong central power and a reliable source of financial support, these colleges came into existence as corporations with state charters but not state funding.  (State colleges came later but followed the model of their private predecessors.)  Their creation had less to do with advancing knowledge than with serving more immediately practical aims.

One was to advance the faith in a highly competitive religious environment.  This provided a strong incentive to plant the denominational flag across the countryside, especially on the steadily moving western frontier.  A college was a way for Lutherans and Methodists and Presbyterians and others to announce their presence, educate congregants, attract newcomers, and train clergy.  Thus the huge number of colleges in Ohio, the old Northwest Territory.

Another spur for college formation was the crass pursuit of money.  The early US was a huge, underpopulated territory which had too much land and not enough buyers.  This turned nearly everyone on the frontier into a land speculator (ministers included), feverishly coming up with schemes to make the land in their town more valuable for future residents than the land in other towns in the area.  One way to do this was to set up a school, telegraphing to prospects that this was a place to settle down and raise a family.  When other towns followed suit, you could up the ante by establishing a college, usually bearing the town name, which told the world that yours was not some dusty agricultural village but a vibrant center of culture.

The result was a vast number of tiny and unimpressive colleges scattered across the less populated parts of a growing country.  Without strong funding from church or state, they struggled to survive in a highly competitive setting.  This they managed by creating lean institutions that were adept at attracting and retaining student consumers and eliciting donations from alumni and from the wealthier people in town.  What emerged was the most overbuilt system of higher education the world has ever seen, with five times as many colleges in 1880 as the entire continent of Europe.

All the system lacked was academic credibility and a strong incentive for student enrollment.  Both conditions were met at the end of the century, with the arrival of the German research university to crown the system and give it legitimacy, and with the rise of the corporation and its need for white-collar workers.

At this point, the chaotic fragmentation and chronic competition that characterized the American system of higher education turned out to be enormously functional.  Free from the constraints that European nation states and national churches imposed on universities, American institutions could develop programs, lure students, hustle for dollars, and promote innovations in knowledge production and technology.  They knew how to make themselves useful to their communities and their states, developing a broad base of political and financial support and demonstrating their social and economic value.

Competing colleges, like competing states, promoted a bottom-up vitality in the American higher ed system that was generally lacking in the older institutions of Europe that were under the control of a strong state or church.  Early institutional chaos led to later institutional strength, a system that was not created by design but emerged from an organic process of evolutionary competition.  In the absence of Rome (read: a hegemonic national university), the US higher education system became Rome.

Posted in Academic writing, Educational Research, Higher Education, Writing

Getting It Wrong — Rethinking a Life in Scholarship

This post is an overview of my life as a scholar.  I presented an oral version in my job talk at Stanford in 2002.  The idea was to make sense of the path I’d taken in my scholarly writing up to that point.  What were the issues I was looking at and why?  How did these ideas develop over time?  And what lessons can we learn from this process that might be of use to scholars who are just starting out?

This piece first appeared in print as the introduction to a 2005 book called Education, Markets, and the Public Good: The Selected Works of David F. Labaree.  As a friend told me after hearing about the book, “Isn’t this kind of compilation something that’s published after you’re dead?”  So why was I doing this as a mere youth of 58?  The answer: Routledge offered me the opportunity.  Was there ever an academic who turned down the chance to publish something when it arose?  The book was part of a series called — listen for the drum roll — The World Library of Educationalists, which must have a place near the top of the list of bad ideas floated by publishers.  After the first year, when a few libraries rose to the bait, annual sales of this volume never exceeded single digits.  Its rank on the Amazon bestseller list is normally in the two millions.

Needless to say, no one ever read this piece in its originally published form.  So I tried again, this time slightly adapting it for a 2011 volume edited by Wayne Urban called Leaders in the Historical Study of American Education, which consisted of autobiographical sketches by scholars in the field.  It now ranks in the five millions on Amazon, so the essay still never found a reader.  As a result, I decided to give the piece one more chance at life in my blog.  I enjoyed reading it again and thought it offered some value to young scholars just starting out in a daunting profession.  I hope you enjoy it too.

The core insight is that research trajectories are not things you can  carefully map out in advance.  They just happen.  You learn as you go.  And the most effective means of learning from your own work — at least from my experience — arises from getting it wrong, time and time again.  If you’re not getting things wrong, you may not be learning much at all, since you may just be continually finding what you’re looking for.  It may well be that what you need to find are the things you’re not looking for and that you really don’t want to confront.  The things that challenge your own world view, that take you in a direction you’d rather not go, forcing you to give up ideas you really want to keep.

Another insight I got from this process of reflection is that it’s good to know the central weaknesses in the way you do research.  Everyone has them.  Best to acknowledge where you’re coming from and learn to live with that.  These weaknesses don’t discount the value of your work; they just put limits on it.  Your way of doing scholarship is probably better at producing some kinds of insights than others.  That’s OK.  Build on your strengths and let others point out your weaknesses.  You have no obligation and no ability to give the final answer on any important question.  Instead, your job is to make a provocative contribution to the ongoing scholarly conversation and let other scholars take it from there, countering your errors and filling in the gaps.  There is no last word.

Here’s a link to a PDF of the 2011 version.  Hope you find it useful.

 

Adventures in Scholarship

Instead of writing an autobiographical sketch for this volume, I thought it would be more useful to write about the process of scholarship, using my own case as a cautionary tale.  The idea is to help emerging scholars in the field to think about how scholars develop a line of research across a career, both with the hope of disabusing them of misconceptions and showing them how scholarship can unfold as a scary but exhilarating adventure in intellectual development.  The brief story I tell here has three interlocking themes:  You need to study things that resonate with your own experience; you need to take risks and plan to make a lot of mistakes; and you need to rely on friends and colleagues to tell you when you’re going wrong.  Let me explore each of these points.

Study What Resonates with Experience

First, a little about the nature of the issues I explore in my scholarship and then some thoughts about the source of my interest in these issues. My work focuses on the historical sociology of the American system of education and on the thick vein of irony that runs through it.  This system has long presented itself as a model of equal opportunity and open accessibility, and there is a lot of evidence to support these claims.  In comparison with Europe, this upward expansion of access to education came earlier, moved faster, and extended to more people.  Today, virtually anyone can go to some form of postsecondary education in the U.S., and more than two-thirds do.  But what students find when they enter the educational system at any level is that they are gaining equal access to a sharply unequal array of educational experiences.  Why?  Because the system balances open access with radical stratification.  Everyone can go to high school, but quality of education varies radically across schools.  Almost everyone can go to college, but the institutions that are most accessible (community colleges) provide the smallest boost to a student’s life chances, whereas the ones that offer the surest entrée into the best jobs (major research universities) are highly selective.  This extreme mixture of equality and inequality, of accessibility and stratification, is a striking and fascinating characteristic of American education, which I have explored in some form or another in all my work.

Another prominent irony in the story of American education is that this system, which was set up to instill learning, actually undercuts learning because of a strong tendency toward formalism.  Educational consumers (students and their parents) quickly learn that the greatest rewards of the system go to those who attain its highest levels (measured by years of schooling, academic track, and institutional prestige), where credentials are highly scarce and thus the most valuable.  This vertically-skewed incentive structure strongly encourages consumers to game the system by seeking to accumulate the largest number of tokens of attainment – grades, credits, and degrees – in the most prestigious programs at the most selective schools.  However, nothing in this reward structure encourages learning, since the payoff comes from the scarcity of the tokens and not the volume of knowledge accumulated in the process of acquiring these tokens.  At best, learning is a side effect of this kind of credential-driven system.  At worst, it is a casualty of the system, since the structure fosters consumerism among students, who naturally seek to gain the most credentials for the least investment in time and effort.  Thus the logic of the used-car lot takes hold in the halls of learning.

In exploring these two issues of stratification and formalism, I tend to focus on one particular mechanism that helps explain both kinds of educational consequences, and that is the market.  Education in the U.S., I argue, has increasingly become a commodity, which is offered and purchased through market processes in much the same way as other consumer goods.  Educational institutions have to be sensitive to consumers, by providing the mix of educational products that the various sectors of the market demand.  This promotes stratification in education, because consumers want educational credentials that will distinguish them from the pack in their pursuit of social advantage.  It also promotes formalism, because markets operate based on the exchange value of a commodity (what it can be exchanged for) rather than its use value (what it can be used for).  Educational consumerism preserves and increases social inequality, undermines knowledge acquisition, and promotes the dysfunctional overinvestment of public and private resources in an endless race for degrees of advantage.  The result is that education has increasingly come to be seen primarily as a private good, whose benefits accrue only to the owner of the educational credential, rather than a public good, whose benefits are shared by all members of the community even if they don’t have a degree or a child in school.  In many ways, the aim of my work has been to figure out why the American vision of education over the years made this shift from public to private.

This is what my work has focused on in the last 30 years, but why focus on these issues?  Why this obsessive interest in formalism, markets, stratification, and education as arbiter of status competition?  Simple. These were the concerns I grew up with.

George Orwell once described his family’s social location as the lower upper middle class, and this captures the situation of my own family.  In The Road to Wigan Pier, his meditation on class relations in England, he talks about his family as being both culture rich and money poor.[1]  Likewise for mine.  Both of my grandfathers were ministers.  On my father’s side the string of clergy went back four generations in the U.S.  On my mother’s side, not only was her father a minister but so was her mother’s father, who was in turn the heir to a long clerical lineage in Scotland.  All of these ministers were Presbyterians, whose clergy has long had a distinctive history of being highly educated cultural leaders who were poor as church mice.  The last is a bit of an exaggeration, but the point is that their prestige and authority came from learning and not from wealth.  So they tended to value education and disdain grubbing for money.  My father was an engineer who managed to support his family in a modest but comfortable middle-class lifestyle.  He and my mother plowed all of their resources into the education of their three sons, sending all of them to a private high school in Philadelphia (Germantown Academy) and to private colleges (Lehigh, Drexel, Wooster, and Harvard).  Both of my parents were educated at elite schools (Princeton and Wilson) – on ministerial scholarships – and they wanted to do the same for their own children.

What this meant is that we grew up taking great pride in our cultural heritage and educational accomplishments and adopting a condescending attitude to those who simply engaged in trade for a living.  Coupled with this condescension was a distinct tinge of envy for the nice clothes, well decorated houses, new cars, and fancy trips that the families of our friends experienced.  I thought of my family as a kind of frayed nobility, raising the flag of culture in a materialistic society while wearing hand-me-down clothes.  From this background, it was only natural for me to study education as the central social institution, and to focus in particular on the way education had been corrupted by the consumerism and status-competition of a market society.  In doing so I was merely entering the family business.  Someone out there needed to stand up for substantive over formalistic learning and for the public good over the private good, while at the same time calling attention to the dangers of a social hierarchy based on material status.  So I launched my scholarship from a platform of snobbish populism – a hankering for a lost world where position was grounded on the cultural authority of true learning and where mere credentialism could not hold sway.

Expect to Get Things Wrong

Becoming a scholar is not easy under the best of circumstances, and we may make it even harder by trying to imbue emerging scholars with a dedication to getting things right.[2]  In doctoral programs and tenure reviews, we stress the importance of rigorous research methods and study design, scrupulous attribution of ideas, methodical accumulation of data, and cautious validation of claims.  Being careful to stand on firm ground methodologically is not in itself a bad thing for scholars, but trying to be right all the time can easily make us overly cautious, encouraging us to keep so close to our data and so far from controversy that we end up saying nothing that’s really interesting.  A close look at how scholars actually carry out their craft reveals that they generally thrive on frustration.  Or at least that has been my experience.  When I look back at my own work over the years, I find that the most consistent element is a tendency for getting it wrong.  Time after time I have had to admit failure in the pursuit of my intended goal, abandon an idea that I had once warmly embraced, or backtrack to correct a major error.  In the short run these missteps were disturbing, but in the long run they have proven fruitful.

Maybe I’m just rationalizing, but it seems that getting it wrong is an integral part of scholarship.  For one thing, it’s central to the process of writing.  Ideas often sound good in our heads and resonate nicely in the classroom, but the real test is whether they work on paper.[3]  Only there can we figure out the details of the argument, assess the quality of the logic, and weigh the salience of the evidence.  And whenever we try to translate a promising idea into a written text, we inevitably encounter problems that weren’t apparent when we were happily playing with the idea over lunch.  This is part of what makes writing so scary and so exciting:  It’s a high wire act, in which failure threatens us with every step forward.  Can we get past each of these apparently insuperable problems?  We don’t really know until we get to the end.

This means that if there’s little risk in writing a paper there’s also little potential reward.  If all we’re doing is putting a fully developed idea down on paper, then this isn’t writing; it’s transcribing.  Scholarly writing is most productive when authors are learning from the process, and this happens only if the writing helps us figure out something we didn’t really know (or only sensed), helps us solve an intellectual problem we weren’t sure was solvable, or makes us turn a corner we didn’t know was there.  Learning is one of the main things that makes the actual process of writing (as opposed to the final published product) worthwhile for the writer.  And if we aren’t learning something from our own writing, then there’s little reason to think that future readers will learn from it either.  But these kinds of learning can only occur if a successful outcome for a paper is not obvious at the outset, which means that the possibility of failure is critically important to the pursuit of scholarship.

Getting it wrong is also functional for scholarship because it can force us to give up a cherished idea in the face of the kinds of arguments and evidence that accumulate during the course of research.  Like everyone else, scholars are prone to confirmation bias.  We look for evidence to support the analysis we prefer and overlook evidence that supports other interpretations.  So when we collide with something in our research or writing that deflects us from the path toward our preferred destination, we tend to experience this deflection as failure.  However, although these experiences are not pleasant, they can be quite productive.  Not only do they prompt us to learn things we don’t want to know, they can also introduce arguments into the literature that people don’t want to hear.  A colleague at the University of Michigan, David Angus, had both of these benefits in mind when he used to pose the following challenge to every candidate for a faculty position in the School of Education:  “Tell me about some point when your research forced you to give up an idea you really cared about.”

I have experienced all of these forms of getting it wrong.  Books never worked out the way they were supposed to, because of changes forced on me by the need to come up with remedies for ailing arguments.  The analysis often turned in a direction that meant giving up something I wanted to keep and embracing something I preferred to avoid.  And nothing ever stayed finished.  Just when I thought I had a good analytical hammer and started using it to pound everything in sight, it would shatter into pieces and I would be forced to start over.  This story of misdirection and misplaced intentions starts, as does every academic story, with a dissertation.

Marx Gives Way to Weber

My dissertation topic fell into my lap one day during the final course in my doctoral program in sociology at the University of Pennsylvania, when I mentioned to Michael Katz that I had done a brief study of Philadelphia’s Central High School for an earlier class.  He had a new grant for studying the history of education in Philadelphia, and Central was the lead school.  He needed someone to study the school, and I needed a topic, advisor, and funding; by happy accident, it all came together in 15 minutes.  I had first become interested in education as an object of study as an undergraduate at Harvard in the late 1960s, where I majored in Students for a Democratic Society and minored in sociology.  In my last year or two there, I worked on a Marxist analysis of Harvard as an institution of social privilege (is there a better case?), which whetted my appetite for educational research.

For the dissertation, I wanted to apply the same kind of Marxist approach to Central High School, which seemed to beg for it.  Founded in 1838, it was the first high school in the city and one of the first in the country, and it later developed into the elite academic high school for boys in the city.  It looked like the Harvard of public high schools.  I had a model for this kind of analysis, Katz’s study of Beverly High School, in which he explained how this high school, shortly after its founding, came to be seen by many citizens as an institution that primarily served the upper classes, thus prompting the town meeting to abolish the school in 1861.[4]  I was planning to do this kind of study about Central, and there seemed to be plenty of evidence to support such an interpretation, including its heavily upper-middle-class student body, its aristocratic reputation in the press, and its later history as the city’s elite high school.

That was the intent, but my plan quickly ran into two big problems in the data I was gathering.  First, a statistical analysis of student attainment and achievement at the school over its first 80 years showed a consistent pattern:  only one-quarter of the students managed to graduate, which meant it was highly selective; but grades and not class determined who made it and who didn’t, which meant it was – surprise – highly meritocratic.  Attrition in modern high schools is strongly correlated with class, but this was not true in the early years at Central.  Middle class students were more likely to enroll in the first place, but they were no more likely to succeed than working class students.  The second problem was that the high school’s role in the Philadelphia school system didn’t fit the Marxist story of top-down control that I was trying to tell.  In the first 50 years of the high school, there was a total absence of bureaucratic authority over the Philadelphia school system.  The high school was an attractive good in the local educational market, offering elevated education in a grand building at a collegiate level (it granted bachelor degrees) and at no cost.  Grammar school students competed for access to this commodity by passing an entrance exam, and grammar school masters competed to get the most students into Central by teaching to the test.  The power that the high school exerted over the system was considerable but informal, arising from consumer demand from below rather than bureaucratic dictate from above.

Thus my plans to tell a story of class privilege and social control fell apart at the very outset of my dissertation; in its place, I found a story about markets and stratification:  Marx gives way to Weber.  The establishment of Central High School in the nation’s second largest city created a desirable commodity with instant scarcity, and this consumer-based market power not only gave the high school control over the school system but also gave it enough autonomy to establish a working meritocracy.  The high school promoted inequality: it served a largely middle class constituency and established an extreme form of educational stratification.  But it imposed a tough meritocratic regime equally on the children of the middle class and working class, with both groups failing most of the time.

Call on Your Friends for Help

In the story I’m telling here, the bad news is that scholarship is a terrain that naturally lures you into repeatedly getting it wrong.  The good news is that help is available if you look for it, which can turn scholarly wrong-headedness into a fruitful learning experience.  Just ask your friends and colleagues.  The things you most don’t want to hear may be just the things that will save you from intellectual confusion and professional oblivion.  Let me continue with the story, showing how colleagues repeatedly saved my bacon.

Markets Give Ground to Politics

Once I completed the dissertation, I gradually settled into being a Weberian, a process that took a while because of the disdain that Marxists hold for Weber.[5]  I finally decided I had a good story to tell about markets and schools, even if it wasn’t the one I had wanted to tell, so I used this story in rewriting the dissertation as a book.  When I had what I thought was a final draft ready to send to the publisher, I showed it to my colleague at Michigan State, David Cohen, who had generously offered to give it a reading.  His comments were extraordinarily helpful and quite devastating.  In the book, he said, I was interpreting the evolution of the high school and the school system as a result of the impact of the market, but the story I was really telling was about an ongoing tension for control of schools between markets and politics.[6]  The latter element was there in the text, but I had failed to recognize it and make it explicit in the analysis.  In short, he explained to me the point of my own book; so I had to rewrite the entire manuscript in order to bring out this implicit argument.

Framing this case in the history of American education as a tension between politics and markets allowed me to tap into the larger pattern of tensions that always exist in a liberal democracy:  the democratic urge to promote equality of power and access and outcomes, and the liberal urge to preserve individual liberty, promote free markets, and tolerate inequality.  The story of Central High School spoke to both these elements.  It showed a system that provided equal opportunity and unequal outcomes.  Democratic politics pressed for expanding access to high school for all citizens, whereas markets pressed for restricting access to high school credentials through attrition and tracking.  Central see-sawed back and forth between these poles, finally settling on the grand compromise that has come to characterize American education ever since:  open access to a stratified school system.  Using both politics and markets in the analysis also introduced me to the problem of formalism, since political goals for education (preparing competent citizens) value learning, whereas market goals (education for social advantage) value credentialing.

Disaggregating Markets

The book came out in 1988 with the title, The Making of an American High School.[7]  With politics and markets as my new hammer, everything looked like a nail.  So I wrote a series of papers in which I applied the idea to a wide variety of educational institutions and reform efforts, including the evolution of high school teaching as work, the history of social promotion, the history of the community college, the rhetorics of educational reform, and the emergence of the education school.

Midway through this flurry of papers, however, I ran into another big problem.  I sent a draft of my community college paper to David Hogan, a friend and former member of my dissertation committee at Penn, and his critique stopped me cold.  He pointed out that I was using the idea of educational markets to refer to two things that were quite different, both in concept and in practice.  One was the actions of educational consumers, the students who want education to provide the credentials they need in order to get ahead; the other was the actions of educational providers, the taxpayers and employers who want education to produce the human capital that society needs in order to function.  The consumer sought education’s exchange value, providing selective benefits for the individual who owns the credential; the producer sought education’s use value, providing collective benefits to everyone in society, even those not in school.

This forced me to reconstruct the argument from the ground up, abandoning the politics and markets angle and constructing in its place a tension among three goals that competed for primacy in shaping the history of American education.  “Democratic equality” referred to the goal of using education to prepare capable citizens; “social efficiency” referred to the goal of using education to prepare productive workers; and “social mobility” referred to the goal of using education to enable individuals to get ahead in society.  The first was a stand-in for educational politics, the second and third were a disaggregation of educational markets.

Abandoning the Good, the Bad, and the Ugly

Once formulated, the idea of the three goals became a mainstay in my teaching, and for a while it framed everything I wrote.  I finished the string of papers I mentioned earlier, energized by the analytical possibilities inherent in the new tool.  But by the mid-1990s, I began to be afraid that its magic power would start to fade on me soon, as had happened with earlier enthusiasms like Marxism and politics-and-markets.  Most ideas have a relatively short shelf life, as metaphors quickly reach their limits and big ideas start to shrink upon close examination.  That doesn’t mean these images and concepts are worthless, only that they are bounded, both conceptually and temporally.  So scholars need to strike while the iron is hot.  Michael Katz once made this point to me with the Delphic advice, “Write your first book first.”  In other words, if you have an idea worth injecting into the conversation, you should do so now, since it will eventually evolve into something else, leaving the first idea unexpressed.  Since the evolution of an idea is never finished, holding off publication until the idea is done is a formula for never publishing.

So it seemed like the right time to put together a collection of my three-goals papers into a book, and I had to act quickly before they started to turn sour.  With a contract for the book and a sabbatical providing time to put it together, I now had to face the problem of framing the opening chapter.  In early 1996 I completed a draft and submitted it to American Educational Research Journal.  The reviews knocked me back on my heels.  They were supportive but highly critical.  One in particular, which I later found out was written by Norton Grubb, forced me to rethink the entire scheme of competing goals.  He pointed out something I had completely missed in my enthusiasm for the tool-of-the-moment.  In practice my analytical scheme with three goals turned into a normative scheme with two:  a Manichean vision of light and darkness, with Democratic Equality as the Good, and with Social Mobility and Social Efficiency as the Bad and the Ugly.  This ideologically colored representation didn’t hold up under close scrutiny.  Grubb pointed out that social efficiency is not as ugly as I was suggesting.  Like democratic equality and unlike social mobility, it promotes learning, since it has a stake in the skills of the workforce.  Also, like democratic equality, it views education as a public good, whose benefits accrue to everyone and not just (as with social mobility) to the credential holder.

This trenchant critique forced me to start over, putting a different spin on the whole idea of competing goals, abandoning the binary vision of good and evil, reluctantly embracing the idea of balance, and removing the last vestige of my original bumper-sticker Marxism.  As I reconstructed the argument, I put forward the idea that all three of these goals emerge naturally from the nature of a liberal democracy, and that all three are necessary.[8]  There is no resolution to the tension among educational goals, just as there is no resolution to the problem of being both liberal and democratic.  We need an educational system that makes capable citizens and productive workers while also enabling individuals to pursue their own aspirations.  And we all act out our support for each of these goals according to which social role is most salient to us at the moment.  As citizens, we want graduates who can vote intelligently; as taxpayers and employers, we want graduates who will increase economic productivity; and as parents, we want an educational system that offers our children social opportunity.  The problem is the imbalance in the current mix of goals, as the growing primacy of social mobility over the other two goals privileges private over public interests, stratification over equality, and credentials over learning.

Examining Life at the Bottom of the System

With this reconstruction of the story, I was able to finish my second book, published in 1997, and get it out the door before any other major problems could threaten its viability.[9]  One such problem was already coming into view.  In comments on my AERJ goals paper, John Rury (the editor) pointed out that my argument relied on a status competition model of social organization – students fighting for scarce credentials in order to move up or stay up – that did not really apply to the lower levels of the system.  Students in the lower tracks in high school and in the open-access realms of higher education (community colleges and regional state universities) lived in a different world from the one I was talking about.  They were affected by the credentials race, but they weren’t really in the race themselves.  For them, the incentives to compete were minimal, the rewards remote, and the primary imperative was not success but survival.

Fortunately, however, there was one place at the bottom of the educational hierarchy I did know pretty well, and that was the poor beleaguered education school.  From 1985 to 2003, while I was teaching in the College of Education at Michigan State University, I received a rich education in the subject.  I had already started a book about ed schools, but it wasn’t until the book was half completed that I realized it was forcing me to rethink my whole thesis about the educational status game.  Here was an educational institution that was the antithesis of the Harvards and Central High Schools that I had been writing about thus far.  Residing at the very bottom of the educational hierarchy, the ed school was disdained by academics, avoided by the best students, ignored by policymakers, and discounted by its own graduates.  It was the perfect case to use in answering a question I had been avoiding:  What happens to education when credentials carry no exchange value and the status game is already lost?

What I found is that life at the bottom has some advantages, but they are outweighed by disadvantages.  On the positive side, the education school’s low status frees it to focus efforts on learning rather than on credentials, on the use value rather than exchange value of education; in this sense, it is liberated from the race for credentials that consumes the more prestigious realms of higher education.  On the negative side, however, the ed school’s low status means that it has none of the autonomy that prestigious institutions (like Central High School) generate for themselves, which leaves it vulnerable to kibitzing from the outside.  This institutional weakness also has made the ed school meekly responsive to its environment, so that over the years it obediently produced large numbers of teachers at low cost and with modest professional preparation, as requested.

When I had completed a draft of the book, I asked for comments from two colleagues at Michigan State, Lynn Fendler and Tom Bird, who promptly pointed out several big problems with the text.  One had to do with the argument in the last few chapters, where I was trying to make two contradictory points:  ed schools were weak in shaping schools but effective in promoting progressive ideology.  The other problem had to do with the book’s tone:  as an insider taking a critical position about ed schools, I sounded like I was trying to enhance my own status at the expense of colleagues.  Fortunately, they were able to show me a way out of both predicaments.  On the first issue, they helped me see that ed schools were more committed to progressivism as a rhetorical stance than as a mode of educational practice.  In our work as teacher educators, we have to prepare teachers to function within an educational system that is hostile to progressive practices.  On the second issue, they suggested that I shift from the third person to the first person.  By announcing clearly both my membership in the community under examination and my participation in the problems I was critiquing, I could change the tone from accusatory to confessional.  With these important changes in place, The Trouble with Ed Schools was published in 2004.[10]

Enabling Limitations

In this essay I have been telling a story about grounding research in an unlovely but fertile mindset, getting it wrong repeatedly, and then trying to fix it with the help of friends.  However, I don’t want to leave the impression that I think any of these fixes really resolved the problems.  The story is more about filling potholes than about re-engineering the road.  It’s also about some fundamental limitations in my approach to the historical sociology of American education, which I have been unwilling and unable to fix since they lie at the core of my way of seeing things.  Intellectual frameworks define, shape, and enable the work of scholars.  Such frameworks can be helpful by allowing us to cut a slice through the data and reveal interesting patterns that are not apparent from other angles, but they can only do so if they maintain a sharp leading edge.  As an analytical instrument, a razor works better than a baseball bat, and a beach ball doesn’t work at all.  The sharp edge, however, comes at a cost, since it necessarily narrows the analytical scope and commits a scholar to one slice through a problem at the expense of others.  I’m all too aware of the limitations that arise from my own cut at things.

One problem is that I tend to write a history without actors.  Taking a macro-sociological approach to history, I am drawn to explore general patterns and central tendencies in the school-society relationship rather than the peculiarities of individual cases.  In the stories I tell, people don’t act.  Instead, social forces contend, social institutions evolve in response to social pressures, and collective outcomes ensue.  My focus is on general processes and structures rather than on the variations within categories.  What is largely missing from my account of American education is the radical diversity of traits and behaviors that characterizes educational actors and organizations.  I plead guilty to these charges.  However, my aim has been not to write a tightly textured history of the particular but to explore some of the broad, socially structured patterns that shape the main outlines of American educational life.  My sense is that this kind of work serves a useful purpose—especially in a field such as education, whose dominant perspectives have been psychological and presentist rather than sociological and historical; and in a sub-field like history of education, which can be prone to the narrow monograph with little attention to the big picture; and in a country like the United States, which is highly individualistic in orientation and tends to discount the significance of the collective and the categorical.

Another characteristic of my work is that I tend to stretch arguments well beyond the supporting evidence.  As anyone can see in reading my books, I am not in the business of building an edifice of data and planting a cautious empirical generalization on the roof.  My first book masqueraded as a social history of an early high school, but it was actually an essay on the political and market forces shaping the evolution of American education in general—a big leap to make from historical data about a single, atypical school.  Likewise my second book is a series of speculations about credentialing and consumerism that rests on a modest and eclectic empirical foundation.  My third book involves minimal data on education in education schools and maximal rumination about the nature of “the education school.”  In short, validating claims has not been my strong suit.  I think the field of educational research is sufficiently broad and rich that it can afford to have some scholars who focus on constructing credible empirical arguments about education and others who focus on exploring ways of thinking about the subject.

The moral of this story, therefore, may be that scholarship is less a monologue than a conversation.  In education, as in other areas, our field is so expansive that we can’t cover more than a small portion, and it’s so complex that we can’t even gain mastery over our own tiny piece of the terrain.  But that’s ok.  As participants in the scholarly conversation, our responsibility is not to get things right but to keep things interesting, while we rely on discomfiting interactions with our data and with our colleagues to provide the correctives we need to make our scholarship more durable.

[1]  George Orwell,  The Road to Wigan Pier (New York: Harcourt, Brace, 1958).

[2]  I am grateful to Lynn Fendler and Tom Bird for comments on an earlier draft of this portion of the essay.  As they have done before, they saved me from some embarrassing mistakes.  I presented an earlier version of this analysis in a colloquium at the Stanford School of Education in 2002 and in the Division F Mentoring Seminar at the American Educational Research Association annual meeting in New Orleans later the same year.  A later version was published as the introduction to Education, Markets, and the Public Good: The Selected Works of David F. Labaree (London: Routledge Falmer, 2007).  Reprinted with the kind permission of Taylor and Francis.

[3]  That doesn’t mean it’s necessarily the best way to start developing an idea.  For me, teaching has always served better as a medium for stimulating creative thought.  It’s a chance for me to engage with ideas from texts about a particular topic, develop a story about these ideas, and see how it sounds when I tell it in class and listen to student responses.  The classroom has a wonderful mix of traits for these purposes: it forces discipline and structure on the creative process while allowing space for improvisation and offering the chance to reconstruct everything the next time around.  After my first book, most of my writing had its origins in this pedagogical process.  But at a certain point I find that I have to test these ideas in print.

[4]  Michael B. Katz, The Irony of Early School Reform: Educational Innovation in Mid-Nineteenth Century Massachusetts (Cambridge, MA: Harvard University Press, 1968).

[5]  Marx’s message is rousing and it can fit on a bumper sticker:  Workers of the world, unite!  But Weber’s message is more complicated, pessimistic, and off-putting:  The iron cage of rationalization has come to dominate the structure of thought and social action, but we can’t stop it or even escape from it.

[6]  He also pointed out, in passing, that my chapter on the attainment system at the high school – which incorporated 17 tables in the book (30 in the dissertation), and which took me two years to develop by collecting, coding, keying, and statistically analyzing data from 2,000 student records – was essentially one big footnote in support of the statement, “Central High School was meritocratic.”  Depressing but true.

[7]  David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988).

[8]  David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal 34:1 (Spring, 1997): 39-81.

[9]  David F. Labaree,  How to Succeed in School Without Really Learning: The Credentials Race in American Education (New Haven: Yale University Press, 1997).

[10] David F. Labaree,  The Trouble with Ed Schools (New Haven: Yale University Press, 2004).