
Agnes Callard — Publish and Perish

This post is a recent essay by philosopher Agnes Callard about the problems with academic writing.  It was published in The Point.  

In this essay, she explores the way that professionalization has ruined scholarly writing.  The need to sound professional and publish in the kinds of serious journals that are the route to tenure and academic recognition has diverted us from using writing to communicate ideas to a broad audience in favor of writing to signal scholarly cred. 

Early on she makes a confession that many of us could make: “Although I love to read, and read a lot, little of my reading comes from recent philosophy journals.”  Sound familiar?  It sure resonates with me.  We tend to read journals in our field out of duty — reviewing for journals, checking on a possible citation for our own work — rather than out of any hope that this will provide us with enlightenment.

Much of her discussion is familiar territory, part of the literature on the failures of academic writing that I have highlighted in this blog (e.g., here and here).  But she adds a qualifier that I find intriguing: 

The sad thing about being stuck reading narrow, boring, abstruse papers is not how bad they are, but how good they are. When I am enough of an insider to be in a position to engage the writer in back-and-forth questioning, either in speech or in writing, that process of objection and pushback tends to expose a real and powerful line of thought driving the piece. Philosophers haven’t stopped loving knowledge, despite the increasingly narrow confines within which we must, if we are to survive, pursue it.

It’s not that academics are failing to come up with interesting ideas.  It’s that they feel compelled to hide these ideas under heaps of jargon, turgid prose, and professional posturing.  In conversation, these scholars can explain what’s cool about their work.  But it’s obviously counterproductive to make the reader do all the work of unpacking your argument into a discernible and compelling form. 

Recall Stephen Toulmin’s point, which I use as the epigraph for my syllabus on academic writing: “The effort the writer does not put into writing, the reader has to put into reading.”

Enjoy.

 

Publish and Perish

Agnes Callard

These words exist for you to read them. I wrote them to try to convey some ideas to you. These are not the first words I wrote for you—those were worse. I wrote and rewrote, with a view to clarifying my meaning. I want to make sure that what you take away is exactly what I have in mind, and I want to be concise and engaging, because I am mindful of competing demands on your time and attention.

You might think that everything I am saying is trivial and obvious, because of course all writing is like this. Writing is a form of communication; it exists to be read. But that is, in fact, not how all writing works. In particular, it is not how academic writing works. Academic writing does not exist in order to communicate with a reader. In academia, or at least the part of it that I inhabit, we write, most of the time, not so much for the sake of being read as for the sake of publication.

Let me illustrate by way of a confession regarding my own academic reading habits. Although I love to read, and read a lot, little of my reading comes from recent philosophy journals. The main occasions on which I read new articles in my areas of specialization are when I am asked to referee or otherwise assess them, when I am helping someone prepare them for publication and when I will need to cite them in my own paper.

“Counts” being the operative word. What can be counted is what will get done. In the humanities, no one counts whether anyone reads our papers. Only whether they are published, and where. I have observed these pressures escalate over time: nowadays it is unsurprising when those merely applying to graduate schools have already published a paper or two.

Writing for the sake of publication—instead of for the sake of being read—is academia’s version of “teaching to the test.” The result is papers few actually want to read. First, the writing is hypercomplex. Yes, the thinking is also complex, but the writing in professional journals regularly contains a layer of complexity beyond what is needed to make the point. It is not edited for style and readability. Most significantly of all, academic writing is obsessed with other academic writing—with finding a “gap in the literature” as opposed to answering a straightforwardly interesting or important question.

Of course publication is a necessary step along the way to readership, but the academic who sets their sights on it is like the golfer or baseball player who stops their swing when they make contact with the ball. Without follow-through, what you get are short, jerky movements; we academics have become purveyors of small, awkwardly phrased ideas.

In making these claims about academic writing, I am thinking in the first instance of my own corner of academia—philosophy—though I suspect that my points generalize, at least over the academic humanities. To offer up one anecdote: in spring 2019 I was teaching Joyce’s Portrait of the Artist as a Young Man; since I don’t usually teach literature, I thought I should check out recent secondary literature on Joyce. What I found was abstruse and hypercomplex, laden with terminology and indirect. I didn’t feel I was learning anything I could use to make the meaning of the novel more accessible to myself or to my students. I am willing to take some of the blame here: I am sure I could have gotten something out of those pieces if I had been willing to put more effort into reading them. Still, I do not lack the intellectual competence required to understand analyses of Joyce; I feel all of those writers could have done more to write for me.

But whether my points generalize across the humanities or not, I will confess that I feel the urgency of the problem for philosophy much more than for some abstract entity called “the humanities.” I love Joyce, I love Homer, but I am not invested in the quality of current scholarship on either. It’s philosophy that I worry about.

When I am asked for sources of “big ideas” in philosophy—the kind that would get the extra-philosophical world to stand up and take notice—I struggle to list anyone born after 1950. It is sobering to consider that the previous decade produced: Daniel Dennett, Saul Kripke, David Lewis, Derek Parfit, John McDowell, Peter Singer, G. A. Cohen and Martha Nussbaum. In my view, each of these people towers over everyone who comes after them in at least one of the categories by which we might judge a philosopher: breadth, depth, originality or degree of public influence. Or consider this group, born in roughly the two decades prior (1919-1938), remarkable in its intellectual fertility: Elizabeth Anscombe, Philippa Foot, Stanley Cavell, Harry Frankfurt, Bernard Williams, Thomas Nagel, Robert Nozick, Richard Rorty, Hilary Putnam, John Rawls. These are the philosophers about whom one routinely asks, “Why don’t people write philosophy like this anymore?” And this isn’t only a point about writing style. Their work is inviting—it asks new questions, it sells the reader on why those questions matter and it presents itself as a point of entry into philosophy. This is why all of us keep assigning their work over and over again, a striking fact given how much the number of philosophers has ballooned since their time.

And it’s not just a matter of a few exceptional figures. A few years ago, I happened to browse through back issues of a top journal (Ethics) from 1940-1950—not an easy decade for the world, or academia. I went in assuming those papers would be of much lower quality than what is being put out now. Keep in mind, this is a time when not only was publication not required for getting a job, even a Ph.D. was not required; there were far fewer philosophers, and getting a paper accepted at a journal was a vastly less competitive process.

In general, I would describe the papers from that decade as lacking something in terms of precision, clarity and “scholarliness,” but also as being more engaging and ambitious, more heterogeneous in tone and writing style, and better written. Perhaps some amount of academic competition is salutary, but the all-consuming competition of recent years, it appears, has been less productive of excellence than of homogeneity and stagnation. Because the most reliable mark of “quality” is familiarity, the machine incentivizes keeping innovation to a minimum—only at the margin, just enough to get published. It constricts the space of thought. Over time, we end up with less and less to show for all the effort, talent and philosophical training we are throwing into philosophical research. If I wanted to make progress on one of my own papers, I’d certainly be better served with a paper from Ethics in 2020—I’m much more likely to want to cite it. But if I were just curiously browsing for some philosophical reading, I’d go for one of those back issues. We might be hitting more balls today, but none of them is going far.

Some see a way out: they call it “public philosophy.” But it is a mistake to think that this represents an escape from the problem I am describing. We do not have two systems for doing philosophy, “academic philosophy” and “public philosophy.” “Public philosophy,” including the piece of it you are currently reading, is written mostly by academic philosophers—which is to say, people who studied, received Ph.D.s at and in the vast majority of cases make a living by working within the academic philosophy system.

I have no objection to applying the title “philosopher” broadly, including to those public intellectuals who have had so much more success in speaking to a general audience than I or any of my colleagues who operate more strictly within the confines of academic philosophy: from Judith Butler and Bruno Latour to Slavoj Žižek, Camille Paglia and Steven Pinker. But it is one thing to be a “philosopher” in the sense of being a source of intellectual inspiration to the public, or a subset thereof, and another to be a member of a philosophical community. The latter designation not only requires a person to be beholden to such a community argumentatively, but also calls for participation in the maintenance and self-reproduction of that community through education, training and management. Academic philosophy is the system we have. You can’t jump ship, because there’s nowhere to jump.

The sad thing about being stuck reading narrow, boring, abstruse papers is not how bad they are, but how good they are. When I am enough of an insider to be in a position to engage the writer in back-and-forth questioning, either in speech or in writing, that process of objection and pushback tends to expose a real and powerful line of thought driving the piece. Philosophers haven’t stopped loving knowledge, despite the increasingly narrow confines within which we must, if we are to survive, pursue it.

Some in the philosophical community will defend this “narrowing” as a sign of the increasingly scientific character of philosophy. But no matter how scientific some parts of philosophy become, the following difference will always remain: unlike science, philosophy cannot benefit those who don’t engage in it. Philosophical technology—ideas, arguments, distinctions, questions—cannot live outside the human mind.

One doesn’t need to idolize Socrates, as I happen to, to think that philosophy is an especially dialogical discipline. All academic work invites response in the weak sense of “there is always more to be said,” or “corrections welcome,” but philosophical talks, papers and books specifically aim to provoke, to incite, to court pushback and counterexample. Our task is not to take some questions off humanity’s plate, but to infect others with our need to find answers.

The philosopher is an especially needy kind of truth-seeker. Like vampires, zombies and werewolves, we are creatures who need company, and who will do whatever it takes to create it.

No one thinks that Plato, Descartes, Kant and the rest were right about everything; nonetheless, centuries and millennia later, we cannot stop talking not just about them, but to them, with them. They made us into one of them, and we need to keep paying that forward.


How NOT to Defend the Private Research University

This post is a piece I published today in the Chronicle Review.  It’s about an issue that has been gnawing at me for years.  How can you justify the existence of institutions of the sort I taught at for the last two decades — rich private research universities?  These institutions obviously benefit their students and faculty, but what about the public as a whole?  Is there a public good they serve, and if so, what is it? 

Here’s the answer I came up with.  These are elite institutions to the core.  Exclusivity is baked in.  By admitting only a small number of elite students, they serve to promote social inequality by providing grads with an exclusive private good, a credential with high exchange value. But, in part because of this, they also produce valuable public goods — through the high quality research and the advanced graduate training that only they can provide. 

Open access institutions can promote the social mobility that private research universities don’t, but they can’t provide the same degree of research and advanced training.  The paradox is this:  It’s in the public’s interest to preserve the elitism of these institutions.  See what you think.

Hoover Tower

How Not to Defend the Private Research University

David F. Labaree

In this populist era, private research universities are easy targets that reek of privilege and entitlement. It was no surprise, then, when the White House pressured Harvard to decline $8.6 million in Covid-19-relief funds, while Stanford, Yale, and Princeton all judiciously decided not to seek such aid. With tens of billions of endowment dollars each, they hardly seemed to deserve the money.

And yet these institutions have long received outsized public subsidies. The economist Richard Vedder estimated that in 2010, Princeton got the equivalent of $50,000 per student in federal and state benefits, while its similar-size public neighbor, the College of New Jersey, got just $2,000 per student. Federal subsidies to private colleges include research grants, which go disproportionately to elite institutions, as well as student loan and scholarship funds. As recipients of such largess, how can presidents of private research universities justify their institutions to the public?

Here’s an example of how not to do so. Not long after he assumed the presidency of Stanford in 2016, Marc Tessier-Lavigne made the rounds of faculty meetings on campus in order to introduce himself and talk about future plans for the university. When he came to a Graduate School of Education meeting that I attended, he told us his top priority was to increase access. Asked how he might accomplish this, he said that one proposal he was considering was to increase the size of the entering undergraduate class by 100 to 200 students.

The problem is this: Stanford admits about 4.3 percent of the candidates who apply to join its class of 1,700. Admitting a couple hundred additional students might raise the admit rate to 5 percent. Now that’s access. The issue is that, for a private research university like Stanford, the essence of its institutional brand is its elitism. The inaccessibility is baked in.

Raj Chetty’s social mobility data for Stanford show that 66 percent of its undergrads come from the top 20 percent by income, 52 percent from the top 10 percent, 17 percent from the top 1 percent, and just 4 percent from the bottom 20 percent. Only 12 percent of Stanford grads move up by two quintiles or more — it’s hard for a university to promote social mobility when the large majority of its students starts at the top.

Compare that with the data for California State University at Los Angeles, where 12 percent of students are from the top quintile and 22 percent from the bottom quintile. Forty-seven percent of its graduates rise two or more income quintiles. Ten percent make it all the way from the bottom to the top quintile.

My point is that private research universities are elite institutions, and they shouldn’t pretend otherwise. Instead of preaching access and making a mountain out of the molehill of benefits they provide for the few poor students they enroll, they need to demonstrate how they benefit the public in other ways. This is a hard sell in our populist-minded democracy, and it requires acknowledging that the very exclusivity of these institutions serves the public good.

For starters, in making this case, we should embrace the emphasis on research production and graduate education and accept that providing instruction for undergraduates is only a small part of the overall mission. Typically these institutions have a much higher proportion of graduate students than large public universities oriented toward teaching (graduate students are 57 percent of the total at Stanford and just 8.5 percent in the California State University system).

Undergraduates may be able to get a high-quality education at private research universities, but there are plenty of other places where they could get the same or better, especially at liberal-arts colleges. Undergraduate education is not what makes these institutions distinctive. What does make them stand out are their professional schools and doctoral programs.


Private research universities are souped-up versions of their public counterparts, and in combination they exert an enormous impact on American life.

As of 2017, the Association of American Universities, a club consisting of the top 65 research universities, represented just 2 percent of all four-year colleges and 12 percent of all undergrads. And yet the group accounted for over 20 percent of all U.S. graduate students; 43 percent of all research doctorates; 68 percent of all postdocs; and 38 percent of all Nobel Prize winners. In addition, its graduates occupy the centers of power, including, by 2019, 64 of the Fortune 100 CEOs; 24 governors; and 268 members of Congress.

From 2014 to 2018, AAU institutions collectively produced 2.4 million publications, and their collective scholarship received 21.4 million citations. That research has an economic impact — these same institutions have established 22 research parks and, in 2018 alone, they produced over 4,800 patents, over 5,000 technology license agreements, and over 600 start-up companies.

Put all this together and it’s clear that research universities provide society with a stunning array of benefits. Some of these benefits accrue to individual entrepreneurs and investors, but the benefits for society as a whole are extraordinary. These universities drive widespread employment, technological advances that benefit consumers worldwide, and the improvement of public health (think of all the university researchers and medical schools advancing Covid-19-research efforts right now).

Besides their higher proportion of graduate students and lower student-faculty ratio, private research universities have other major advantages over publics. One is greater institutional autonomy. A private research university is governed by a board of laypersons who own the university, control its finances, and appoint its officers. Government can dictate how the university uses the public subsidies it receives (except tax subsidies), but otherwise the institution is free to operate as an independent actor in the academic market. This allows these colleges to pivot quickly to take advantage of opportunities for new programs of study, research areas, and sources of funding, largely independent of political influence, though they do face a fierce academic market full of other private colleges.

A 2010 study of universities in Europe and the U.S. by Caroline Hoxby and associates shows that this mix of institutional autonomy and competition is strongly associated with higher rankings in the world hierarchy of higher education. They find that every 1-percent increase in the share of the university budget that comes from government appropriations corresponds with a decrease in international ranking of 3.2 ranks. At the same time, each 1-percent increase in the share of the budget that comes from competitive grants corresponds with an increase of 6.5 ranks. They also find that universities high in autonomy and competition produce more patents.

Another advantage the private research universities enjoy over their public counterparts, of course, is wealth. Stanford’s endowment is around $28 billion, and Berkeley’s is just under $5 billion, but because Stanford is so much smaller (16,000 versus 42,000 total students), the advantage is multiplied. Stanford’s endowment per student dwarfs Berkeley’s. The result is that private universities have more research resources: better labs, libraries, and physical plant; higher faculty pay (e.g., $254,000 for full professors at Stanford, compared to $200,000 at Berkeley); more funding for grad students; and more staff support.

A central asset of private research universities is their small group of academically and socially elite undergraduate students. The academic skill of these students is an important draw for faculty, but their current and future wealth is particularly important for the institution. From a democratic perspective, this wealth is a negative. The student body’s heavy skew toward the top of the income scale is a sign of how these universities are not only failing to provide much social mobility but are in fact actively engaged in preserving social advantage. We need to be honest about this issue.

But there is a major upside. Undergraduates pay their own way (as do students in professional schools); but the advanced graduate students don’t — they get free tuition plus a stipend to pay living expenses, which is subsidized, both directly and indirectly, by undergrads. The direct subsidy comes from the high sticker price undergrads pay for tuition. Part of this goes to help out upper-middle-class families who still can’t afford the tuition, but the rest goes to subsidize grad students.

The key financial benefits from undergrads come after they graduate, when the donations start rolling in. The university generously admits these students (at the expense of many of their peers), provides them with an education and a credential that jump-starts their careers and papers over their privilege, and then harvests their gratitude over a lifetime. Look around any college campus — particularly at a private research university — and you will find that almost every building, bench, and professor bears the name of a grateful donor. And nearly all of the money comes from former undergrads or professional school students, since it is they, not the doctoral students, who go on to earn the big bucks.

There is, of course, a paradox. Perhaps the gross preservation of privilege these schools traffic in serves a broader public purpose. Perhaps providing a valuable private good for the few enables the institution to provide an even more valuable public good for the many. And yet students who are denied admission to elite institutions are not being denied a college education and a chance to get ahead; they’re just being redirected. Instead of going to a private research university like Stanford or a public research university like Berkeley, many will attend a comprehensive university like San José State. Only the narrow metric of value employed at the pinnacle of the American academic meritocracy could construe this as a tragedy. San José State is a great institution, which accepts the majority of the students who apply and which sends a huge number of graduates to work in the nearby tech sector.

The economist Miguel Urquiola elaborates on this paradox in his book, Markets, Minds, and Money: Why America Leads the World in University Research (Harvard University Press, 2020), which describes how American universities came to dominate the academic world in the 20th century. The 2019 Shanghai Academic Ranking of World Universities shows that eight of the top 10 universities in the world are American, and seven of these are private.

Urquiola argues that the roots of American academe’s success can be found in its competitive marketplace. In most countries, universities are subsidiaries of the state, which controls their funding, defines their scope, and sets their policy. By contrast, American higher education has three defining characteristics: self-rule (institutions have autonomy to govern themselves); free entry (institutions can be started up by federal, state, or local governments or by individuals who acquire a corporate charter); and free scope (institutions can develop programs of research and study on their own initiative without undue governmental constraint).

The result is a radically unequal system of higher education, with extraordinary resources and capabilities concentrated in a few research universities at the top. Caroline Hoxby estimates that the most selective American research universities spend an average of $150,000 per student, 15 times as much as some poorer institutions.

As Urquiola explains, the competitive market structure puts a priority on identifying top research talent, concentrating this talent and the resources needed to support it in a small number of institutions, and motivating these researchers to ramp up their productivity. This concentration then makes it easy for major research-funding agencies, such as the National Institutes of Health, to identify the institutions that are best able to manage the research projects they want to support. And the nature of the research enterprise is such that, when markets concentrate minds and money, the social payoff is much greater than if they were dispersed more evenly.

Radical inequality in the higher-education system therefore produces outsized benefits for the public good. This, paradoxical as it may seem, is how we can truly justify the public investment in private research universities.

David Labaree is a professor emeritus at the Stanford Graduate School of Education.

 

 


Clare Coffey — Closing Time: We’re All Counting Bodies

This is a lovely essay by Clare Coffey from the summer issue of the Hedgehog Review.  In it she explores the extremes of contemporary American life: those who have been shunted aside in the knowledge economy and destined for deaths of despair, and those who occupy the flashiest reaches of the new uber class.  She does this through an adept analysis of two recent books:  Deaths of Despair and the Future of Capitalism, by Anne Case and Angus Deaton; and Very Important People: Status and Beauty in the Global Party Circuit, by Ashley Mears.  In combination, the books tell a powerful story.

Closing Time

We’re All Counting Bodies

Clare Coffey

Lenin’s maxim that “there are decades when nothing happens, and there are weeks when decades happen” can be tough on writers. You spend years carefully marshaling an argument, anticipating objections, tightening your focus, sacrificing claims that might interfere with the suasion of your central point, and then—bam, the gun goes off. Something happens that makes the point toward which you were gently cajoling the reader not only obvious but insufficient. Your thoroughbred stands ready, but the rest of the field has already left the gate.

So it is with Deaths of Despair and the Future of Capitalism. In 2014, Princeton economists Anne Case and Angus Deaton, the latter a Nobel Prize winner, noted that for the first time, the mortality rate among white Americans without a college degree was climbing rather than dropping; further, while members of this group remained relatively advantaged compared to their black peers, the two cohorts’ mortality rates were moving in opposite directions. Case and Deaton found that a significant portion of this hike in mortality was due to deaths from alcoholism, drug use, and suicide—phenomena which, bundled together, they labeled “deaths of despair.”

Deaths of Despair Cover

Six years later, in this new book, the two economists attempt to turn these observations into a thesis: What can this horrifying data tell us about American society at large? Instead of linking the deaths to any single deprivation, the authors place them in a context of wholesale loss of social status and coherent identity for those without purchase in the knowledge professions—a loss that encompasses wage stagnation, the decline of union power, and the transition from a manufacturing to a service economy.

For Case and Deaton, the closing of a factory involves all three, and cannot be understood strictly in terms of lost earnings or job numbers. Even in a “success” story, in which workers get new jobs at a staffing agency or an Amazon fulfillment center, a qualitative catastrophe occurs: to the prestige of difficult, directly productive work; to a measure of democratic control over the conditions of work; to the sense of valued belonging to socially important organizations; to the norms governing work, marriage, and sociality that developed in a particular material context, and which cannot simply transfer over or remake themselves overnight. At least some of these losses are downstream of sectoral transition only insofar as firm structure and historic labor organization are concerned. There is no purely sectoral reason for companies to outsource all non-knowledge jobs to staffing companies, or for Amazon to fire whistleblowers. The differences between NYC taxis and Uber lie in the fact that one has a union and the other classifies its workers as independent contractors, not in NAICS codes. But however carefully you parse the causes, deaths of despair are the final result of a long, slow social death.

Who are the culprits? Case and Deaton are careful not to absolve capitalism, but they insist that the problem is not really capitalism itself but its abuses: “We are not against capitalism. We believe in the power of competition and free markets. Capitalism has brought an end to misery and death for millions in now rich countries over the past 250 years and, much more rapidly, in countries like India and China, over the past 50 years.” This qualification is not unique to them; it takes different forms, from the regulatory reformism of political liberals such as Elizabeth Warren to the attacks on “crony capitalism” of doctrinaire libertarians, for whom the true free market has not yet been tried. For Case and Deaton, the big-picture problem is unchecked economic trends that encourage “upward redistribution”; their more specific and more representative target is a rent-seeking health-care industry.

Their complaint is not only that companies like Purdue Pharma arguably jump-started the opioid epidemic by hard-selling their pain medications and concealing these drugs’ addictive potential. Case and Deaton also argue that the health-care sector has eaten up American wage gains with insurance costs, funneling more and more money to health-care spending while delivering less and less in terms of health outcomes. The numbers the authors have assembled are convincing. But who at this juncture needs to be convinced? A teenager recently died of COVID-19 after being turned away from an urgent care clinic for lack of insurance. Hospital personnel are getting laid off in the midst of a pandemic to stanch balance sheet losses resulting from delayed elective care. Hospitals that have been operated on the basis of years of business school orthodoxy lack the extra capacity to deal with anything more momentous than a worse-than-usual flu season. Who is in any serious doubt that the American health-care system is cobbled together out of rusty tin cans and profit margins? The more pertinent question is what in America isn’t.

The release of Case and Deaton’s book just as an often fatal communicable disease was going pandemic was not, of course, the fault of the authors. But it makes for oddly frustrating reading. Positing a link between deindustrialization and health-care rent seeking, on the one hand, and deaths of despair, on the other, is an abductive argument about historical and present actors rather than a purely statistical inference. As Case and Deaton freely admit, you cannot prove by means of regression analysis that any of their targets are the unmistakable causes of these deaths. For that matter, there’s too much bundling among both the phenomena (alcoholic diseases, overdoses, suicides) and the proposed causes (deindustrialization, the decline of organized labor, wage stagnation, corporate restructuring) to conduct even a controlled test.

While it may not be possible to demonstrate airtight causality, Deaths of Despair nonetheless provides valuable documentation of the humiliations, losses, and unmoorings of those on the wrong end of a widening economic divide. The book is less a technocratic prescription than a grim body count.

In Very Important People: Status and Beauty in the Global Party Circuit, Ashley Mears is counting bodies too, albeit very different ones. From New York to Miami, from Ibiza to Saint-Tropez, all over the elite global party scene in which Mears, a sociologist and former fashion model, did eighteen months of research, everyone is counting bodies. The bodies are those of models, ruthlessly quantified and highly valuable to the owners of elite nightclubs. Very Important People hinges on one insight: The image of a rooftop party filled with glamorous models drinking champagne isn’t just a pop-culture cliché. It is a lucrative business model.

VIP Cover

According to Mears, up through the nineties the business model for nightclubs was simple. There was a bar and a dance floor. You paid to get in and you paid to drink. Ideally, you’d want a certain ratio of women to men, but the pleasures on offer were fairly straightforward. But in the early 2000s, a new model emerged, ironically enough, in the repurposed industrial buildings of New York’s Meatpacking District. Rather than rely on the dance floor and bar, clubs encouraged (usually male) customers to put down serious cash for immediately available and strategically placed tables and VIP sections, where bottles of liquor at marked-up prices could be brought to them. Clubs that could successfully brand themselves as elite might make enormous sums off out-of-town dentists on a spree, young financiers looking to woo or compete with business associates by demonstrating access to the city’s most exclusive pleasures, and the mega-rich “whales” proclaiming their status by over-the-top performances of generosity and waste.

The table is crucial for this strategy to succeed. It allows maximum visibility for both the whale’s endless parade of bottles of Dom Perignon (much of it left undrunk by virtue of sheer volume) and the groups of models that signal that this is the kind of club where a whale might be found. The good that is being advertised is indistinguishable from the advertising process.

A whole secondary ecosystem has grown up around this glitzy “potlatch,” as Mears calls it—this elaborately choreographed wasting of wealth. There are the elite club promoters, who might make thousands a night if they show up with enough models, and whose transactional relationships with the models are defined in useful, fragile terms of mutual care. There are the models, young and broke in expensive cities, who get free meals, free champagne, and sometimes free housing as long as they show up and play nice. There are the bouncers, who police the height and looks of entrants, and the whales, who both command the scene and function as an advertisement for its desirability. Being adjacent to real wealth is a powerful incentive, especially for promoters, who dream of rubbing shoulders and making deals of their own through connections forged in the club.

The owners make money, and everyone else gets a little something and a little scammed. Perhaps among those who are scammed the least are the models, the majority of whom seem to be in it for a good party rather than upward mobility. When you are very young and very beautiful, the world tends to see those traits as the most important things about you. One way to register dissent is to trade them only for things equally ephemeral, inconsequential, delightful: a glass of champagne, moonlight over the Riviera, a night spent dancing till dawn. Reaping the benefits of belonging to an intrinsically exclusive club is not heroic. But it seems no worse than the trade made by the wives of the superwealthy, who in one scene appear, disapproving and hostile, at a table adjacent to their husbands’ at an Upper East Side restaurant. They have made a more thoroughgoing negotiation of their value to wealthy men—one resting on the ability to reproduce the upper class as well as attest to its presence.

Demarcating status is the limit of the model’s power. It is what she is at the club to do. The model is not there primarily to be sexually alluring—that is the role of the lower-class-coded bottle waitress. One of Mears’s subjects even confesses that models aren’t his type: They are too tall and skinny, too stereotyped, and after all, desire is so highly personal—less an estimation that a face has been arranged in the single best way than delight that it has been arranged in such a way. But models are necessary precisely because their bodies and faces have transcended the whims of any personally desiring subject, to the objectivity of market value. Their beauty can be quantified in inches, and dollars.

To contemplate and cultivate beauty is perhaps noble. To desire and consume it is at least human. To desire not any object in itself, but an image of desirability, is ghastly. There are many scenes in Very Important People, from the physical dissipation to the moments bordering on human trafficking, that are morally horrifying. What lingers, though, is this spectral quality: huge amounts of money, time, and flesh in service to a recursive and finally imaginary value. If anyone has gained from the losses of Case and Deaton’s subjects, it is the patrons of the global party circuit. But their gains seem less hoarded than unmade, in a kind of reverse alchemy—transmuted into the allurements of a phantom world, elusive, seductive, and all too soluble in the light of day.


How Credentialing Theory Explains the Extraordinary Growth in US Higher Ed in the 19th Century

Today I am posting a piece I wrote in 1995. It was the foreword to a book by David K. Brown, Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism.  

I have long been interested in credentialing theory, but this is the only place where I ever tried to spell out in detail how the theory works.  For this purpose, I draw on the case of the rapid expansion of the US system of higher education in the 19th century and its transformation at the end of the century, which is the focus of Brown’s book.  Here’s a link to a pdf of the original. 

The case is particularly fruitful for demonstrating the value of credentialing theory, because the most prominent theory of educational development simply can’t make sense of it.  Functionalist theory sees the emergence of educational systems as part of the process of modernization.  As societies become more complex, with a greater division of labor and a shift from manual to mental work, the economy requires workers with higher degrees of verbal and cognitive skills.  Elementary, secondary, and higher education arise over time in response to this need. 

The history of education in the U.S., however, poses a real problem for this explanation.  American higher education exploded in the 19th century, to the point that there were some 800 colleges in existence by 1880, which was more than the total number in the continent of Europe.  It was the highest rate of colleges per 100,000 population that the world had ever seen.  The problem is that this increase was not in response to increasing demand from employers for college-educated workers.  While the rate of higher schooling was increasing across the century, the skill demands in the workforce were declining.  The growth of factory production was subdividing forms of skilled work, such as shoemaking, into a series of low-skilled tasks on the assembly line.  

This being the case, then, how can we understand the explosion of college founding in the 19th century?  Brown provides a compelling explanation, and I lay out his core arguments in my foreword.  I hope you find it illuminating.

 

Brown Cover

Preface

In this book, David Brown tackles an important question that has long puzzled scholars who wanted to understand the central role that education plays in American society: When compared with other Western countries, why did the United States experience such extraordinary growth in higher education? Whereas in most societies higher education has long been seen as a privilege that is granted to a relatively small proportion of the population, in the United States it has increasingly come to be seen as a right of the ordinary citizen. Nor was this rapid increase in accessibility a very recent phenomenon. As Brown notes, between 1870 and 1930, the proportion of college-age persons (18 to 21 years old) who attended institutions of higher education rose from 1.7% to 13.0%. And this was long before the proliferation of regional state universities and community colleges made college attendance a majority experience for American youth.

The range of possible answers to this question is considerable, with each carrying its own distinctive image of the nature of American political and social life. For example, perhaps the rapid growth in the opportunity for higher education was an expression of egalitarian politics and a confirmation of the American Dream; or perhaps it was a political diversion, providing ideological cover for persistent inequality; or perhaps it was merely an accident — an unintended consequence of a struggle for something altogether different. In politically charged terrain such as this, one would prefer to seek guidance from an author who doesn’t ask the reader to march behind an ideological banner toward a preordained conclusion, but who instead rigorously examines the historical data and allows for the possibility of encountering surprises. What the reader wants, I think, is an analysis that is both informed by theory and sensitive to historical nuance.

In this book, Brown provides such an analysis. He approaches the subject from the perspective of historical sociology, and in doing so he manages to maintain an unusually effective balance between historical explanation and sociological theory-building. Unlike many sociologists dealing with history, he never oversimplifies the complexity of historical events in the rush toward premature theoretical closure; and unlike many historians dealing with sociology, he doesn’t merely import existing theories into his historical analysis but rather conceives of the analysis itself as a contribution to theory. His aim is therefore quite ambitious – to spell out a theoretical explanation for the spectacular growth and peculiar structure of American higher education, and to ground this explanation in an analysis of the role of college credentials in American life.

Traditional explanations do not hold up very well when examined closely. Structural-functionalist theory argues that an expanding economy created a powerful demand for advanced technical skills (human capital), which only a rapid expansion of higher education could fill. But Brown notes that during this expansion most students pursued programs not in vocational-technical areas but in liberal arts, meaning that the forms of knowledge they were acquiring were rather remote from the economically productive skills supposedly demanded by employers. Social reproduction theory sees the university as a mechanism that emerged to protect the privilege of the upper-middle class behind a wall of cultural capital, during a time (with the decline of proprietorship) when it became increasingly difficult for economic capital alone to provide such protection. But, while this theory points to a central outcome of college expansion, it fails to explain the historical contingencies and agencies that actually produced this outcome. In fact, both of these theories are essentially functionalist in approach, portraying higher education as arising automatically to fill a social need — within the economy, in the first case, and within the class system, in the second.

However, credentialing theory, as developed most extensively by Randall Collins (1979), helps explain the socially reproductive effect of expanding higher education without denying agency. It conceives of higher education diplomas as a kind of cultural currency that becomes attractive to status groups seeking an advantage in the competition for social positions, and therefore it sees the expansion of higher education as a response to consumer demand rather than functional necessity. Upper classes tend to benefit disproportionately from this educational development, not because of an institutional correspondence principle that preordains such an outcome, but because they are socially and culturally better equipped to gain access to and succeed within the educational market.

This credentialist theory of educational growth is the one that Brown finds most compelling as the basis for his own interpretation. However, when he plunges into a close examination of American higher education, he finds that Collins’ formulation of this theory often does not coincide very well with the historical evidence. One key problem is that Collins does not examine the nature of labor market recruitment, which is critical for credentialist theory, since the pursuit of college credentials only makes sense if employers are rewarding degree holders with desirable jobs. Brown shows that between 1800 and 1880 the number of colleges in the United States grew dramatically (as Collins also asserts), but that enrollments at individual colleges were quite modest. He argues that this binge of institution-creation was driven by a combination of religious and market forces but not (contrary to Collins) by the pursuit of credentials. There simply is no good evidence that a college degree was much in demand by employers during this period. Instead, a great deal of the growth in the number of colleges was the result of the desire by religious and ethnic groups to create their own settings for producing clergy and transmitting culture. In a particularly intriguing analysis, Brown argues that an additional spur to this growth came from markedly less elevated sources — local boosterism and land speculation — as development-oriented towns sought to establish colleges as a mechanism for attracting land buyers and new residents.

Brown’s version of credentialing theory identifies a few central factors that are required in order to facilitate a credential-driven expansion of higher education, and by 1880 several of these were already in place. One such factor is substantial wealth. Higher education is expensive, and expanding it for reasons of individual status attainment rather than for societal necessity is a wasteful use of a nation’s resources; it is only feasible for a very wealthy country. The United States was such a country in the late nineteenth century. A second factor is a broad institutional base. At this point, the United States had the largest number of colleges per million residents that the country has ever seen, before or since. When combined with the small enrollments at each college, this meant that there was a great potential for growth within an already existing institutional framework. This potential was reinforced by a third factor, decentralized control. Colleges were governed by local boards rather than central state authorities, thus encouraging entrepreneurial behavior by college leaders, especially in the intensely competitive market environment they faced.

However, three other essential factors for rapid credential-based growth in higher education were still missing in 1880. For one thing, colleges were not going to be able to attract large numbers of new students, who were after all unlikely to be motivated solely by the love of learning, unless they could offer these students both a pleasant social experience and a practical educational experience — neither of which was the norm at colleges for most of the nineteenth century. Another problem was that colleges could not function as credentialing institutions until they had a monopoly over a particular form of credentials, but in 1880 they were still competing directly with high schools for the same students. Finally, their credentials were not going to have any value on the market unless employers began to demonstrate a distinct preference for hiring college graduates, and such a preference was still not obvious at this stage.

According to Brown, the 1880s saw a major shift in all three of these factors. The trigger for this change was a significant oversupply of institutions relative to existing demand. In this life or death situation, colleges desperately sought to increase the pool of potential students. It is no coincidence that this period marked the rapid diffusion of efforts to improve the quality of social life on campuses (from the promotion of athletics to the proliferation of fraternities), and also the shift toward a curriculum with a stronger claim of practicality (emphasizing modern languages and science over Latin and Greek). At the same time, colleges sought to guarantee a flow of students from feeder institutions, which required them to establish a hierarchical relationship with high schools. The end of the century was the period in which colleges began requiring completion of a high school course as a prerequisite for college admission instead of the traditional entrance examination. This system provided high schools with a stable outlet for their graduates and colleges with a predictable flow of reasonably well-prepared students. However, none of this would have been possible if the college degree had not acquired significant exchange value in the labor market. Without this, there would have been only social reasons for attending college, and high schools would have had little incentive to submit to college mandates.

Perhaps Brown’s strongest contribution to credential theory is his subtle and persuasive analysis of the reasoning that led employers to assert a preference for college graduates in the hiring process. Until now, this issue has posed a significant, perhaps fatal, problem for credentialing theory, which has asked the reader to accept two apparently contradictory assertions about credentials. First, the theory claims that a college degree has exchange value but not necessarily use value; that is, it is attractive to the consumer because it can be cashed in on a good job more or less independently of any learning that was acquired along the way. Second, this exchange value depends on the willingness of employers to hire applicants based on credentials alone, without direct knowledge of what these applicants know or what they can do. However, this raises a serious question about the rationality of the employer in this process. After all, why would an employer, who presumably cares about the productivity of future employees, hire people based solely on a college’s certification of competence in the absence of any evidence for that competence?

Brown tackles this issue with a nice measure of historical and sociological insight. He notes that the late nineteenth century saw the growing rationalization of work, which led to the development of large-scale bureaucracies to administer this work within both private corporations and public agencies. One result was the creation of a rapidly growing occupational sector for managerial employees who could function effectively within such a rationalized organizational structure. College graduates seemed to fit the bill for this kind of work. They emerged from the top level of the newly developed hierarchy of educational institutions and therefore seemed like natural candidates for management work in the upper levels of the new administrative hierarchy, which was based not on proprietorship or political office but on apparent skill. And what kinds of skills were called for in this line of work? What the new managerial employees needed was not so much the technical skills posited by human capital theory, he argues, but a general capacity to work effectively in a verbally and cognitively structured organizational environment, and also a capacity to feel comfortable about assuming positions of authority over other people.

These were things that the emerging American college could and indeed did provide. The increasingly corporate social structure of student life on college campuses provided good socialization for bureaucratic work, and the process of gaining access to and graduating from college provided students with an institutionalized confirmation of their social superiority and qualifications for leadership. Note that these capacities were substantive consequences of having attended college, but they were not learned as part of the college’s formal curriculum. That is, the characteristics that qualified college graduates for future bureaucratic employment were a side effect of their pursuit of a college education. In this sense, then, the college credential had a substantive meaning for employers that justified them in using it as a criterion for employment — less for the human capital that college provided than for the social capital that college conferred on graduates. Therefore, this credential, Brown argues, served an important role in the labor market by reducing the uncertainty that plagued the process of bureaucratic hiring. After all, how else was an employer to gain some assurance that a candidate could do this kind of work? A college degree offered a claim to competence, which had enough substance behind it to be credible even if this substance was largely unrelated to the content of the college curriculum.

By the 1890s all the pieces were in place for a rapid expansion of college enrollments, strongly driven by credentialist pressures. Employers had reason to give preference to college graduates when hiring for management positions. As a result, middle-class families had an increasing incentive to provide their children with privileged access to an advantaged social position by sending them to college. For the students themselves, this extrinsic reward for attending college was reinforced by the intrinsic benefits accruing from an attractive social life on campus. All of this created a very strong demand for expanding college enrollments, and the pre-existing institutional conditions in higher education made it possible for colleges to respond to this demand in an aggressive fashion. There were a thousand independent institutions of higher education, accustomed to playing entrepreneurial roles in a competitive educational market, that were eager to capitalize on the surge of interest in attending college and to adapt themselves to the preferences of these new tuition-paying consumers. The result was a powerful and unrelenting surge of expansion in college enrollments that continued for the next century.

 

Brown provides a persuasive answer to the initial question about why American higher education expanded at such a rapid rate. But at this point the reader may well respond by asking the generic question that one should ask of any analyst, and that is, “So what?” More specifically, in light of the particular claims of this analysis, the question becomes: “What difference does it make that this expansion was spurred primarily by the pursuit of educational credentials?” In my view, at least, the answer to that question is clear. The impact of credentialism on both American society and the American educational system has been profound — profoundly negative. Consider some of the problems it has caused.

One major problem is that credentialism is astonishingly inefficient. Education is the largest single public investment made by most modern societies, and this is justified on the grounds that it provides a critically important contribution to the collective welfare. The public value of education is usually calculated as some combination of two types of benefits: the preparation of capable citizens (the political benefit) and the training of productive workers (the economic benefit). However, the credentialist argument advanced by Brown suggests that these public benefits are not necessarily being met and that the primary beneficiaries are in fact private individuals. From this perspective, higher education (and the educational system more generally) exists largely as a mechanism for providing individuals with a cultural commodity that will give them a substantial competitive advantage in the pursuit of social position. In short, education becomes little more than a vast public subsidy for private ambition.

The practical effect of this subsidy is the production of a glut of graduates. The difficulty posed by this outcome is not that the population becomes overeducated (since such a state is difficult to imagine) but that it becomes overcredentialed, since people are pursuing diplomas less for the knowledge they are thereby acquiring than for the access that the diplomas themselves will provide. The result is a spiral of credential inflation; for as each level of education in turn gradually floods with a crowd of ambitious consumers, individuals have to keep seeking ever higher levels of credentials in order to move a step ahead of the pack. In such a system nobody wins. Consumers have to spend increasing amounts of time and money to gain additional credentials, since the swelling number of credential holders keeps lowering the value of credentials at any given level. Taxpayers find an increasing share of scarce fiscal resources going to support an educational chase with little public benefit. Employers keep raising the entry-level education requirements for particular jobs, but they still find that they have to provide extensive training before employees can carry out their work productively. At all levels, this is an enormously wasteful system, one that rich countries like the United States can increasingly ill afford and that less developed countries, which imitate the U.S. educational model, find positively impoverishing.

A second major problem is that credentialism undercuts learning. In both college and high school, students are all too well aware that their mission is to do whatever it takes to acquire a diploma, which they can then cash in for what really matters — a good job. This has the effect of reifying the formal markers of academic progress — grades, credits, and degrees — and encouraging students to focus their attention on accumulating these badges of merit for the exchange value they offer. But at the same time this means directing attention away from the substance of education, reducing student motivation to learn the knowledge and skills that constitute the core of the educational curriculum. Under such conditions, it is quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning. This perspective is almost perfectly captured by a common student question, one that sends chills down the back of the learning-centered teacher but that makes perfect sense for the credential-oriented student: “Is this going to be on the test?” (Sedlak et al., 1986, p. 182). We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system.

A third problem posed by credentialism is social and political more than educational. According to credentialing theory, the connection between social class and education is neither direct nor automatic, as suggested by social reproduction theory. Instead, the argument goes, market forces mediate between the class position of students and their access to and success within the educational system. That is, there is general competition for admission to institutions of higher education and for levels of achievement within these institutions. Class advantage is no guarantee of success in this competition, since such factors as individual ability, motivation, and luck all play a part in determining the result. Market forces also mediate between educational attainment (the acquisition of credentials) and social attainment (the acquisition of a social position). Some college degrees are worth more in the credentials market than others, and they provide privileged access to higher level positions independent of the class origins of the credential holder.

However, in both of these market competitions, one for acquiring the credential and the other for cashing it in, a higher class position provides a significant competitive edge. The economic, cultural, and social capital that comes with higher class standing gives the bearer an advantage in getting into college, in doing well at college, and in translating college credentials into desirable social outcomes. The market-based competition that characterizes the acquisition and disposition of educational credentials gives the process a meritocratic set of possibilities, but the influence of class on this competition gives it a socially reproductive set of probabilities as well. The danger is that, as a result, a credential-driven system of education can provide meritocratic cover for socially reproductive outcomes. In the single-minded pursuit of educational credentials, both student consumers and the society that supports them can lose sight of an all-too-predictable pattern of outcomes that is masked by the headlong rush for the academic gold.

Posted in Uncategorized

Public Schools for Private Gain: The Declining American Commitment to Serving the Public Good

This post is a piece I published in Kappan in November, 2018.  Here’s a link to the original.

Public schools for private gain:

The declining American commitment to serving the public good

When schooling comes to be viewed mainly as a source of private benefit, both schools and society suffer grave consequences.

By David F. Labaree

We Americans tend to talk about public schooling as though we know what that term means. But in the complex educational landscape of the 21st century — where charter schools, private schools, and religious schools compete with traditional public schools for resources and support — it’s becoming less and less obvious what makes a school “public” at all.


A school is public, one might argue, if it meets certain formal criteria: It is funded by the public, governed by the public, and openly accessible to the public. But in that case, what should we make of charter schools, which are broadly understood to be public schools even though many are governed by private organizations? And how should we categorize religious schools that enroll students using public vouchers or tax credits, or public schools that use exams to restrict access? For that matter, don’t private schools often serve public interests, and don’t public schools often promote students’ private interests?

In short, our efforts to distinguish between public and nonpublic schools often oversimplify the ways in which today’s schools operate and the complex roles they play in our society. And such distinctions matter because they shape our thinking about educational policy. After all, if we’re unclear which schools deserve what kinds of funding and support, then how do we justify a system of elementary, secondary, and higher education that consumes more than $800 billion in taxes every year and occupies 10 to 20 or more years of every person’s life?

To clarify what we mean by public schooling, it’s helpful to broaden the discussion by considering not just the formal features of schools (their funding, governance, and admissions criteria) but also their aims. That is, to what extent do they pursue the public good, and to what extent do they serve private interests?

A public good is one that benefits all members of the community, whether or not they contribute to its upkeep or make use of it personally. In contrast, private goods benefit individuals, accruing only to those people who are able to take advantage of them. Thus, schooling is a public good to the extent that it helps everyone (including people who don’t have children in school); it is, by nature, inclusive. And schooling is a private good to the extent that it provides individuals with knowledge, skills, and credentials they can use to distinguish themselves from other people and get ahead in life; it is a form of private property, whose benefits are exclusive to those who own them.

People, organizations, and governments that create public goods tend to face what is known as the “free-rider” problem: If you can’t prevent people from enjoying goods for free, then they’ll have little incentive to pay for them. For example, if I can hang out at my local park whenever I want, then why should I donate to the park clean-up fund that my neighbors organized? I can get a free ride on them, enjoying a clean park without chipping in any of my own money.

The standard solution to the free-rider problem is to make it mandatory for everybody to support certain public goods (for example, efforts to reduce air pollution, fight crime, and monitor food safety) by using mechanisms such as general taxation. Indeed, this is how we’ve always supported our public schools. You may pay tuition to send your children to an exclusive, ivy-covered academy — or you might not have kids at all — but even so, you are required to pay taxes that fund schools for the whole community. Your family may not benefit personally from the services provided by, say, the elementary school down the road, but you do benefit, along with your neighbors, from having a well-funded school nearby. If local kids get a decent education and grow up to become gainfully employed, law-abiding citizens, that is a public good. It makes the entire community a better, safer, and happier place to live.

For much of American history, schooling has been understood in this way, first and foremost. For example, at the founding of our educational system, in the early 19th century, schools were supposed to turn young people into virtuous and competent citizens, a public good that was strongly political in nature. By the turn of the 20th century, schooling was still regarded mainly as a public good, but the mission had begun to shift from politics (creating citizens) to economics (training capable workers who can help promote broad prosperity). Over the subsequent decades, however, growing numbers of Americans came to view schooling mainly as a private good, producing credentials that allow individuals to get ahead, or stay ahead, in the competition for money and social status.

In this article, I argue that this shift in how Americans have viewed schooling — from conceiving of it mainly as a public good to defining it mostly as a private good — has led to dramatic changes in both the quality of the education that students receive and the kind of society we expect our schools to create. The institution that for much of our history helped bring us together into a community of citizens is increasingly dispersing us into a social hierarchy defined by the level of education we’ve attained.

The social functions of U.S. schooling: A short history[1]

In the early 19th century, the United States created a system of universal public schooling for the same reason that other emerging nations have done so over the years: to turn subjects of the King into citizens of the state.

Historically, public schooling has been the midwife of the nation state, whose viability depends on its ability to convert the occupants of a particular territory into members of an imagined community, who come to see themselves for the first time as French, say, or American. This mission was particularly important for the U.S. because it was a republic entering a world that had long demonstrated hostility toward the survival of such states. From ancient Rome to the Italian city states of the Renaissance, republics tended either to succumb to a tyrant or be destroyed in a Hobbesian war among irreconcilable interests.

As the Founders well knew, the survival of the American republic depended on its ability to form individuals into a republican community in which citizens were imbued with a commitment to the public good. Further, when the Common School Movement emerged in the 1820s and 30s, it faced an additional challenge, because the civic virtue of the fragile new republic was under vigorous assault from the possessive individualism of the emerging free-market economy. Horace Mann, the leader of the movement in Massachusetts, put the case this way: “It may be an easy thing to make a Republic; but it is a very laborious thing to make Republicans; and woe to the republic that rests upon no better foundations than ignorance, selfishness, and passion.”[2]

The key characteristic of the new common school was not its curriculum or pedagogy but its commonality. It brought young people together into a single building where they engaged in a shared social and cultural experience, meant to counter the differences of social class that posed a serious threat to republican identity. Ideally, students would learn, in age-graded classrooms, to belong to a community of equals.

Further, the goal wasn’t just to teach them to internalize democratic norms but also to make it possible for capitalism to coexist with republicanism. For the free market to function, the state had to relax its control over individuals, allowing them to make their own decisions as rational actors. By learning to regulate their own thoughts and behaviors within the space of the classroom, students would become prepared for both commerce and citizenship, able to pursue their self-interests in the economic marketplace while at the same time participating in the political marketplace of ideas.

However, by the end of the 19th century, the survival of the republic was no longer in question. At that point, the U.S. was emerging as a world power, with booming industrial production, large-scale immigration, and a growing military presence. And while there was some pressure to turn peasant immigrants from Southern and Eastern Europe into American citizens, policy makers were even more concerned with turning them into modern industrial workers. In the roaring economy of the Progressive Era, then, the mission of schooling evolved: The most pressing goal was to strengthen the nation’s human capital (to put it in today’s terms).

Note, though, that schooling continued to be defined as a public good. When the workforce became more skilled and productivity increased, the whole country benefited. Overall, Americans’ standard of living improved. Thus, there remained a strong rationale for everyone to contribute to the education of other people’s children. And that rationale continues to resonate somewhat today. Even now, politicians and policy makers often talk about “investing” public funds in education as a way to promote economic growth, lifting all boats.

It was only in the 20th century that schooling came to be regarded as the primary means for individuals to obtain a good job. As their enrollments skyrocketed, high schools gave up the longstanding practice of providing a common course of study for all students and, instead, differentiated the curriculum, providing separate tracks designed for different career trajectories: the industrial course for factory workers, the business course for clerical workers, and the academic course for those bound for college (and then for work in management and the professions). As one school board president in the 1920s put it, “For a long time, all boys were trained to be President . . . Now we are training them to get jobs.”[3]

The new vocationalism lacked the grandeur of the mission set for the Common School, but it did address parents’ primary concern: how to ensure their children ended up with a good income and a secure social position, ideally by landing a job in the upper ranks of the new occupational hierarchy. Such work tended to be safer, cleaner, less manual, more mental, more secure, more prestigious, and better paid. And, crucially, each step up in the hierarchy required a higher level of education.

This new function of schooling — allocating desirable jobs — was in some ways just the flip side of the idea that schools exist to produce capable workers. What a policy maker views as a process of strengthening the nation’s human capital looks, to the individual student, like a way to attain personal status. For the student, school becomes purely a contest to obtain better educational qualifications and get better jobs. And from this angle, school is a decidedly private good. The pursuit of high-status jobs is a zero-sum game. If you get hired for a position, then I don’t.

All but gone is the assumption that the purpose of schooling is to benefit the community at large. Less and less often do Americans conceive of education as a cooperative effort in nation-building or a collective investment in workforce development. Increasingly, rather, school comes to be viewed as an intense competition among individuals to get ahead in society and avoid being left behind. It has begun to look, to a great extent, like a means of creating winners and losers in the pursuit of academic merit, with the results determining who will be the winners and losers in life.

Consequences of the rise of schooling as a private good

When schools become a mechanism for allocating social status, they provoke intense competition over invidious educational distinctions. But while schooling may serve as a very private good, that doesn’t mean it can’t also function, at the same time, as a public good.

At one level, everyone who attends a school benefits personally from the knowledge, skills, and socialization they gain there, as well as from any diplomas they receive, which certify their learning and provide a signal to the job market about their relative employability for a variety of occupational positions. Viewed from this angle, even students at the most traditional public schools accrue private goods.

And at another level, everyone in society benefits from having a well-educated and successful group of fellow citizens and co-workers. One of the core concepts of neoclassical economics is that the pursuit of private and personal gain often has public benefits. People with more education tend to commit fewer crimes, participate more fully in public life, vote more often, and contribute to civil society through engagement with a variety of nongovernmental organizations. They are more likely to assume positions of political, social, and economic leadership and to populate the professions. And they tend to be more productive workers, which helps both to spur economic growth and to increase the standard of living for the population as a whole. The fact that these benefits may be unintended consequences, resulting indirectly from people seeking personal gain and glory, doesn’t make them any less significant.

Consider the classic statement of this phenomenon by Adam Smith: “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest . . . Nobody but a beggar chuses [sic] to depend chiefly upon the benevolence of his fellow-citizens.”[4] From this perspective, the competition for educational advantage benefits not only the individuals who gain the credentials but also the public at large. When we strengthen the level of skill in the workforce, everybody’s quality of life improves. And if true, this solves the free-rider problem: Rather than compelling people to contribute to the public good, we can simply encourage them to pursue their private interests, trusting that this will, over the long haul, produce the greatest benefits for everybody.

The problem is that, whether or not this theory is correct, few of us can afford to wait for the long haul. Encouraging individuals to pursue their private interests doesn’t do much for the vast numbers of people who have serious obstacles to confront in the short term. Moreover, while a rising tide of economic growth may raise all boats, this doesn’t change the fact that most kids are born in dinghies, not yachts.

We know from decades of research that children from lower-income backgrounds tend to attend worse schools than those born into affluent families, are less likely to be in the high-level reading group or the honors track, and are much less likely to graduate from high school. If they go to college, they are less likely to attend a four-year institution and are less likely to earn a degree. And every year, it becomes less and less likely that a person who was born in a dinghy will ever end up owning a yacht, much less raise their children in one.

For those families that do enjoy greater wealth, the public benefits of schooling are easy to miss, whereas the private benefits are material, immediate, and personal. When push comes to shove, the latter are simply more compelling. It’s no surprise that affluent parents will deploy their economic, social, and cultural capital to gain as many educational advantages as they can for their children. They move to the best school district they can afford or send their kids to private school; they make sure they get into the classes with the best teachers, gain access to the gifted program in elementary school and the advanced placement program in high school. And they push their children toward the most selective college they can attend. To do anything less would be a disservice.

Sure, in the name of fairness and justice they could choose to send their children to the same lousy schools that less fortunate people are forced to attend. But even if they support efforts to improve the quality of educational opportunities for other people’s children, what kind of parent would put their children’s future at risk for a political principle?

In short, the pursuit of private educational goods drives most parents’ immediate decisions, while efforts to promote the public good are deferred to the indeterminate realm of political action for possible resolution in the distant future. It’s not that anybody wants to punish other people’s children; it’s just that they need to take care of their own. But when the public good is forever postponed, the effects are punishing indeed. And when schooling comes to be viewed solely as a means of private advancement, the consequences are dismal for both school and society:

  • Over time, the market rewards the accumulation of educational credentials more than it values knowledge and skills. For example, employers will pay a higher salary to a person who squeaked out a college degree than to one who excelled in all four years of college but left one credit short of a diploma.
  • As a result, students learn early on that the goal is to acquire as many grades, credits, and degrees as possible rather than the knowledge and skills that these tokens are supposed to represent. So much the better if you can find ways to game the system (by, for example, studying only what’s likely to be on the test, buttering up the teacher, or just plain cheating). Only a sucker pays the sticker price.
  • In turn, schooling becomes more and more stratified, in two related ways. First, students have incentives to pursue the highest level of schooling they can (a graduate degree is better than a 4-year degree, which is better than a 2-year degree, and so on). Second, they have incentives to get into the highest-status institutions they can, at every level.
  • Cooperative learning becomes a dangerous waste of time. Students have no incentive to learn from their classmates but only to maximize their own ranking relative to them.
  • Families with more economic, cultural, and social capital begin to hoard educational opportunities for their own children, elbowing others aside for access to the most desirable schools, teachers, and other resources.
  • This in turn threatens the legitimacy of the whole system, undermining the claim that people succeed according to their educational merit.
  • Moreover, people with the highest-status degrees and jobs tend to marry each other and pass their concentrated levels of advantage on to their own children, which only widens the divide across subsequent generations.
  • Enjoying greater wealth, those parents choose to send their children to private schools, or they choose to live in neighborhoods with elite public schools — in any case, the nominally “public” school hardly differs from the private academy, except that while it enjoys public subsidies, its boundaries have been drawn up in a way that denies access to other people’s children. (In effect, such a school is a public resource turned toward private ends.)

My point is that over the last several decades, as schooling has come to be viewed mainly as a source of private benefit rather than as a public good, the consequences have been dramatic for both schools and society. Increasingly prized as a resource by affluent families, traditional public schooling has become a mechanism by which to reinforce their advantages. And as a result, it has become harder and harder to distinguish what is truly public about our public schools.

At a deeper level, as we have privatized our vision of public schooling, we have shown a willingness to back away from the social commitment to the public good that motivated the formation of the American republic and the common school system. We have grown all too comfortable in allowing the fate of other people’s children to be determined by the unequal competition among consumers for social advantage through schooling. The invisible hand of the market may work for the general benefit in the economic activities of the butcher and the baker but not in the political project of creating citizens.

[1] The discussion in this section is drawn from the following: David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal, 34:1 (Spring, 1997), pp. 39-81; David F. Labaree, “Founding the American School System,” in Someone Has to Fail: The Zero-Sum Game of Public Schooling (Cambridge, MA: Harvard University Press, 2010), pp. 42-79.

[2] Horace Mann, Fifth annual report to the Massachusetts Board of Education (Boston: Board of Education, 1841).

[3] Robert S. Lynd and Helen Merrell Lynd, Middletown (New York: Harcourt, Brace and World, 1929), p. 194.

[4] Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, Edwin Cannan, ed. (Chicago: University of Chicago Press, 1776/1976), book 1, chapter 2, p. 18.

 

 

Posted in Academic writing, Uncategorized

Academic Writing Issues #8 — Getting Off to a Fast Start

The introduction to a paper is critically important.  This is where you try to draw in readers, tell them what you’re going to address, and show why this issue is important.  It’s also a place to show a little style, demonstrating that you’re going to take readers on a fun ride.  Below are two exemplary cases of opening strong: one from a detective novel, the other from an academic book.

If you want to see how to draw in the reader quickly, a good place to look is the work of a genre writer.  Authors who make a living from their writing need to make their case up front — to catch readers in the first paragraph and make them want to keep going.  Check out writers of mystery, detective, spy, or science fiction novels.  They’ve got to be good on the first page or the reader is just going to put the book down and pick up another.

One of my favorite genre writers is Elmore Leonard, who’s a master of the opening page.  Here’s the opening page of his novel Glitz:

THE NIGHT VINCENT WAS SHOT he saw it coming. The guy approached out of the streetlight on the corner of Meridian and Sixteenth, South Beach, and reached Vincent as he was walking from his car to his apartment building. It was early, a few minutes past nine.

Vincent turned his head to look at the guy and there was a moment when he could have taken him and did consider it, hit the guy as hard as he could. But Vincent was carrying a sack of groceries. He wasn’t going to drop a half gallon of Gallo Hearty Burgundy, a bottle of prune juice and a jar of Ragú spaghetti sauce on the sidewalk. Not even when the guy showed his gun, called him a motherfucker through his teeth and said he wanted Vincent’s wallet and all the money he had on him. The guy was not big, he was scruffy, wore a tank top and biker boots and smelled. Vincent believed he had seen him before, in the detective bureau holding cell. It wouldn’t surprise him. Muggers were repeaters in their strungout state, often dumb, always desperate. They came on with adrenaline pumping, hoping to hit and get out. Vincent’s hope was to give the guy pause.

He said, “You see that car? Standard Plymouth, nothing on it, not even wheel covers?” It was a pale gray. “You think I’d go out and buy a car like that?” The guy was wired or not paying attention. Vincent had to tell him, “It’s a police car, asshole. Now gimme the gun and go lean against it.”

What he should have done, either put the groceries down and given the guy his wallet or screamed in the guy’s face to hit the deck, now, or he was fucking dead. Instead of trying to be clever and getting shot for it.

Quite a grabber, isn’t it — right from the opening sentence.  For me the key is the deft and concise way he manages to introduce his main character — Vincent, the scruffy, street-wise detective.  Instead of an extensive physical description or character analysis, he provides a list of what’s in his bag of groceries.  Specific details like Gallo Hearty Burgundy and Ragu spaghetti sauce tell you clearly what kind of guy he is:  not a man of refinement on the world stage but a single guy in a seedy part of town with proletarian tastes.  And the next paragraph shows him as the wise-guy cop who can’t resist sticking it to a guy even though it might well not be the smartest move under the circumstances.  One page and you already know Vincent and want to stick with him for a while.

The second example comes from the opening of the first chapter of a 1968 book by the educational sociologist Philip Jackson called Life in Classrooms.

On a typical weekday morning between September and June some 35 million Americans kiss their loved ones goodby, pick up their lunch pails and books, and leave to spend their day in that collection of enclosures (totaling about one million) known as elementary school classrooms. This massive exodus from home to school is accomplished with a minimum of fuss and bother. Few tears are shed (except perhaps by the very youngest) and few cheers are raised. The school attendance of children is such a common experience in our society that those of us who watch them go hardly pause to consider what happens to them when they get there. Of course our indifference disappears occasionally. When something goes wrong or when we have been notified of his remarkable achievement, we might ponder, for a moment at least, the meaning of the experience for the child in question, but most of the time we simply note that our Johnny is on his way to school, and now, it is time for our second cup of coffee.

Parents are interested, to be sure, in how well Johnny does while there, and when he comes trudging home they may ask him questions about what happened today or, more generally, how things went. But both their questions and his answers typically focus on the highlights of the school experience—its unusual aspects—rather than on the mundane and seemingly trivial events that filled the bulk of his school hours. Parents are interested, in other words, in the spice of school life rather than its substance.

Teachers, too, are chiefly concerned with only a very narrow aspect of a youngster’s school experience. They, too, are likely to focus on specific acts of misbehavior or accomplishment as representing what a particular student did in school today, even though the acts in question occupied but a small fraction of the student’s time. Teachers, like parents, seldom ponder the significance of the thousands of fleeting events that combine to form the routine of the classroom.

And the student himself is no less selective. Even if someone bothered to question him about the minutiae of his school day, he would probably be unable to give a complete account of what he had done. For him, too, the day has been reduced in memory into a small number of signal events—“I got 100 on my spelling test,” “A new boy came and he sat next to me,”—or recurring activities—“We went to gym,” “We had music.” His spontaneous recall of detail is not much greater than that required to answer our conventional questions.

This concentration on the highlights of school life is understandable from the standpoint of human interest. A similar selection process operates when we inquire into or recount other types of daily activity. When we are asked about our trip downtown or our day at the office we rarely bother describing the ride on the bus or the time spent in front of the watercooler. Indeed, we are more likely to report that nothing happened than to catalogue the pedestrian actions that took place between home and return. Unless something interesting occurred there is little purpose in talking about our experience.

Yet from the standpoint of giving shape and meaning to our lives these events about which we rarely speak may be as important as those that hold our listener’s attention. Certainly they represent a much larger portion of our experience than do those about which we talk. The daily routine, the “rat race,” and the infamous “old grind” may be brightened from time to time by happenings that add color to an otherwise drab existence, but the grayness of our daily lives has an abrasive potency of its own. Anthropologists understand this fact better than do most other social scientists, and their field studies have taught us to appreciate the cultural significance of the humdrum elements of human existence. This is the lesson we must heed as we seek to understand life in elementary classrooms.

Notice how he draws you into observing the daily life of school from the perspective of its main participants — parents, teachers, and students.  He’s showing you how the routine of schooling is so familiar to everyone that it becomes invisible.  Ask students what happened in school today and they’re likely to say, “Nothing.”  Of course, a lot actually happened but none of it is noteworthy.  You only hear about something that broke the routine:  there was a concert in assembly; Jimmy threw up in the lunchroom.

This is his point.  Students are learning things from the regular process of schooling.  They stand in line, wait for the bell, get evaluated, respond to commands.  This is not the formal curriculum, made up of school subjects, but the hidden curriculum of doing school.  The process of schooling, he suggests, may in fact have a bigger impact on the student than its formal content.  He draws you into this idea and leaves you wanting to know more.  That’s good writing.

Posted in Education policy, History of education, School reform, Uncategorized

From Citizens to Consumers — Abbreviated Version with New Conclusion about Why We Keep Trying to Reform Schools

This is an updated and abbreviated version of the lecture I posted on December 2.  It makes for an easier read, plus I’ve added a piece at the end trying to answer the question: Why do we keep trying to reform schools?

Here’s the new conclusion about the endless efforts to reform schools:

 

This still leaves open the question of why reforming American schools has proven to be such steady work over the years.  The answer is that we reform schools in an effort to solve pressing social problems.  And we have to keep coming up with new reform movements because schools keep failing to fix the problems we ask them to fix.  The issue is that we keep asking schools to do things they are incapable of doing.

For example, schools can’t eliminate or even reduce social inequality, racial divisions, or poor health.  These are social problems that require major political interventions to transform the social structure, which we are unwilling to undertake because they will provoke too much political opposition.  If we wanted, we could redistribute wealth and income and establish a universal public health system, but we don’t.  So we dump the problems on schools and then blame them for failing to solve these problems.

In addition, schooling is such a large and complex social institution that efforts to change it are more likely to introduce new problems than to solve old ones.  In U.S. history, the common school movement was the only truly successful reform, which created the social and cultural basis for the American republic.  The others caused problems.  Progressivism created a differentiated and vocationalized form of schooling that required the standards and choice movements to reintroduce commonality and choice.  Desegregation spurred whites to abandon urban schools, which are now as segregated as they ever were.  So we continue to tinker with schools in order to fix problems that we lack the political will to fix ourselves.  And the work of school reformers is unlikely to ever reach an end.

Posted in Higher Education, History of education, Uncategorized

Research Universities and the Public Good

This post is a review essay of a new book called Research Universities and the Public Good.  It appeared in the current issue of American Journal of Sociology.  Here’s a link to a PDF of the original.

Research Universities and the Public Good: Discovery for an Uncertain Future

By Jason Owen-Smith. Stanford, Calif.: Stanford University Press, 2018. Pp. xii + 213. $35.00.

David F. Labaree
Stanford University

American higher education has long been immune to the kind of criticism levied against elementary and secondary education because it has been seen as a great success story, in contrast to the popular narrative of failure that has been applied to the lower levels of the system. And the rest of the world seems to agree with this distinction. Families outside the United States have not been eager to send their children to our schools, but they have been clamoring for admission to the undergraduate and graduate programs at our colleges and universities. In the last few years, however, this reputational immunity has been quickly fading. The relentlessly rationalizing reformers who have done so much harm to U.S. schools in the name of accountability have now started to direct their attention to higher education. Watch out, they’re coming for us.

One tiny sector of the huge and remarkably diverse structure of U.S. higher education has been particularly vulnerable to this contagion, namely, the research university. This group represents only 3% of the more than 5,000 degree-granting institutions in the country, and it educates only a small percentage of college students while sucking up a massive amount of public and private resources. Its highly paid faculty don’t teach very much, focusing their time instead on producing research on obscure topics published in journals for the perusal of their colleagues rather than the public. No wonder state governments have been reducing their funding for public research universities and the federal government has been cutting its support for research. No wonder there are strong calls for disaggregating the multiplicity of functions that make these institutions so complex, so that the various services of the university can be delivered more cost-effectively to consumers.

In his new book, Jason Owen-Smith, a sociology professor at the University of Michigan, mounts a valiant and highly effective defense of the apparently indefensible American research university. While acknowledging the complexity of functions that run through these institutions, he focuses his attention primarily on the public benefits that derive from their research production. As he notes, although they represent less than 3% of the institutions of higher education, they produce nearly 90% of the system’s research and development. In an era when education is increasingly portrayed as primarily a private good—providing degrees whose benefits only accrue to the degree holders—he deliberately zeroes in on the way that university research constitutes a public good whose benefits accrue to the community as a whole.

He argues that the core public functions of the research university are to serve as “sources of knowledge and skilled people, anchors for communities, industries, and regions, and hubs connecting all of the far-flung parts of society” (p. 1; original emphasis). In chapter 1 he spells out the overall argument, in chapter 2 he explores the usefulness of the peculiarly complex organization of the research university, in chapters 3–5 he examines in more detail each of the core functions, and at the end he suggests ways that university administrators can help position their institutions to demonstrate the value they provide the public.

The core function is to produce knowledge and skill. The most telling point the author makes about this function is that it works best if allowed to emerge organically from the complex incentive structure of the university itself instead of being directed by government or industry toward solving the most current problems. Trying to make research relevant may well make it dysfunctional. Mie Augier and James March (“The Pursuit of Relevance in Management Education,” California Management Review 49 [2007]: 129–46) argue that the pursuit of relevance is afflicted by both ambiguity (we don’t know what’s going to be relevant until we encounter the next problem) and myopia (by focusing too tightly on the current case we miss what it is a case of). In short, as Owen-Smith notes, investing in research universities is a kind of social insurance by which we develop answers to problems that haven’t yet emerged. While the private sector focuses on applied research that is likely to have immediate utility, public funds are most needed to support the basic research whose timeline for utility is unknown but whose breadth of benefit is much greater.

The second function of the research university is to serve as a regional anchor. A creative tension that energizes this institution is that it’s both cosmopolitan and local. It aspires to universal knowledge, but it’s deeply grounded in place. Companies can move, but universities can’t. This isn’t just because of physical plant, a constraint that also affects companies; it’s because universities develop a complex web of relationships with the industries and governments and citizens in their neighborhood. Think Stanford and Silicon Valley. Owen-Smith makes the analogy to the anchor store in a shopping mall.

The third function of the research university is to serve as a hub, which is the cosmopolitan side of its relationship with the world. It’s located in place but connected to the intellectual and economic world through a complex web of networks. Like the university itself, these webs emerge organically out of the actions of a vast array of actors pursuing their own research enterprises and connecting with colleagues and funding sources and clients and sites of application around the country and the globe. Research universities are uniquely capable of convening people from all sectors around issues of mutual interest. Such synergies benefit everyone.

The current discourse on universities, which narrowly conceives of them as mechanisms for delivering degrees to students, desperately needs the message that Owen-Smith delivers here. Students may be able to get a degree through a cheap online program, but only the complex and costly system of research universities can deliver the kinds of knowledge production, community development, and network building that provide such invaluable benefits for the public as a whole. One thing I would add to the author’s analysis is that American research universities have been able to develop such strong public support in the past in large part because they combine top-flight scholarship with large programs of undergraduate education that are relatively accessible to the public and rather undemanding intellectually. Elite graduate programs and research projects rest on a firm populist base that may help the university survive the current assaults, a base grounded as much in football and fraternities as in the erudition of its faculty. This, however, is but a footnote to this powerfully framed contribution to the literature on U.S. higher education.

American Journal of Sociology, 125:2 (September, 2019), pp. 610-12

Posted in Academic writing, Uncategorized

Academic Writing Issues #6 — Mangling Metaphors

Metaphor is an indispensable tool for the writer.  It carries out an essential function by connecting what you’re talking about with other related issues that the reader already recognizes.  This provides a comparative perspective, which gives a richer context for the issue at hand.  Metaphor also introduces a playful characterization of the issue by making figurative comparisons that are counter-intuitive, finding similarities in things that are apparently opposite.  The result is to provoke the reader’s thinking in ways that straightforward exposition cannot.

Metaphors can — and often do — go disastrously wrong.  Here’s Bryan Garner on the subject:  “Skillful use of metaphor is one of the highest attainments of writing; graceless and even aesthetically offensive use of metaphors is one of the commonest scourges of writing.”  A particular problem, especially for academics, is using a shopworn metaphor that has become a cliché, which has lost all value through overuse.  Cases in point: lens; interrogate; path; bottom line; no stone unturned; weighing the evidence.

In this post, I provide two pieces that speak to the issue of metaphors.  One is a section on the subject from Bryan Garner’s Modern American Usage, which provides some great examples of metaphor gone bad.  A second is an extended excerpt from Matt Taibbi’s delightfully vicious takedown of Thomas Friedman’s best-seller, The World Is Flat.

 

Bryan Garner on Metaphor

METAPHORS. A. Generally. A metaphor is a figure of speech in which one thing is called by the name of something else, or is said to be that other thing. Unlike similes, which use like or as, metaphorical comparisons are implicit—not explicit. Skillful use of metaphor is one of the highest attainments of writing; graceless and even aesthetically offensive use of metaphors is one of the commonest scourges of writing.

Although a graphic phrase often lends both force and compactness to writing, it must seem contextually agreeable. That is, speaking technically, the vehicle of the metaphor (i.e., the literal sense of the metaphorical language) must accord with the tenor of the metaphor (i.e., the ultimate, metaphorical sense), which is to say that the means must fit the end. To illustrate the distinction between the vehicle and the tenor of a metaphor, in the statement that essay is a patchwork quilt without discernible design, the makeup of the essay is the tenor, and the quilt is the vehicle. It is the comparison of the tenor with the vehicle that makes or breaks a metaphor.

A writer would be ill advised, for example, to use rustic metaphors in a discussion of the problems of air pollution, which is essentially a problem of the bigger cities and outlying areas. Doing that mismatches the vehicle with the tenor.

  B. Mixed Metaphors. The most embarrassing problem with metaphors occurs when one metaphor crowds another. It can happen with CLICHÉS—e.g.:
  • “It’s on a day like this that the cream really rises to the crop.” (This mingles the cream rises to the top with the cream of the crop.)
  • “He’s really got his hands cut out for him.” (This mingles he’s got his hands full with he’s got his work cut out for him.)
  • “This will separate the men from the chaff.” (This mingles separate the men from the boys with separate the wheat from the chaff.)
  • “It will take someone willing to pick up the gauntlet and run with it.” (This mingles pick up the gauntlet with pick up the ball and run with it.)
  • “From now on, I am watching everything you do with a fine-toothed comb.” (Watching everything you do isn’t something that can occur with a fine-toothed comb.)

The purpose of an image is to fix the idea in the reader’s or hearer’s mind. If jarringly disparate images appear together, the audience is left confused or sometimes laughing, at the writer’s expense.

The following classic example comes from a speech by Boyle Roche in the Irish Parliament, delivered in about 1790: “Mr. Speaker, I smell a rat. I see him floating in the air. But mark me, sir, I will nip him in the bud.” Perhaps the supreme example of the comic misuse of metaphor occurred in the speech of a scientist who referred to “a virgin field pregnant with possibilities.”

  C. Dormant Metaphors. Dormant metaphors sometimes come alive in contexts in which the user had no intention of reviving them. In the following examples, progeny, outpouring, and behind their backs are dormant metaphors that, in most contexts, don’t suggest their literal meanings. But when they’re used with certain concrete terms, the results can be jarring—e.g.:
  • “This Note examines the doctrine set forth in Roe v. Wade and its progeny.” “Potential Fathers and Abortion,” 55 Brooklyn L. Rev. 1359, 1363 (1990). (Roe v. Wade, of course, legalized abortion.)
  • “The slayings also have generated an outpouring of hand wringing from Canada’s commentators.” Anne Swardson, “In Canada, It Takes Only Two Deaths,” Wash. Post (Nat’l Weekly ed.), 18–24 Apr. 1994, at 17. (Hand-wringing can’t be poured.)
  • “But managers at Hyland Hills have found that, for whatever reasons, more and more young skiers are smoking behind their backs. And they are worried that others are setting a bad example.” Barbara Lloyd, “Ski Area Cracks Down on Smoking,” N.Y. Times, 25 Jan. 1996, at B13. (It’s a fire hazard to smoke behind your back.)

Yet another pitfall for the unwary is the CLICHÉ-metaphor that the writer renders incorrectly, as by writing taxed to the breaking point instead of stretched to the breaking point.

Garner, Bryan. Garner’s Modern American Usage (pp. 534-535). Oxford University Press. Kindle Edition.

 

Matt Taibbi on The World Is Flat

Start with the title.

The book’s genesis is a conversation Friedman has with Nandan Nilekani, the CEO of Infosys. Nilekani casually mutters to Friedman: “Tom, the playing field is being leveled.” To you and me, an innocent throwaway phrase–the level playing field being, after all, one of the most oft-repeated stock ideas in the history of human interaction. Not to Friedman. Ten minutes after his talk with Nilekani, he is pitching a tent in his company van on the road back from the Infosys campus in Bangalore:

As I left the Infosys campus that evening along the road back to Bangalore, I kept chewing on that phrase: “The playing field is being leveled.”  What Nandan is saying, I thought, is that the playing field is being flattened… Flattened? Flattened? My God, he’s telling me the world is flat!

This is like three pages into the book, and already the premise is totally fucked. Nilekani said level, not flat. The two concepts are completely different. Level is a qualitative idea that implies equality and competitive balance; flat is a physical, geographic concept that Friedman, remember, is openly contrasting–ironically, as it were–with Columbus’s discovery that the world is round.

Except for one thing. The significance of Columbus’s discovery was that on a round earth, humanity is more interconnected than on a flat one. On a round earth, the two most distant points are closer together than they are on a flat earth. But Friedman is going to spend the next 470 pages turning the “flat world” into a metaphor for global interconnectedness. Furthermore, he is specifically going to use the word round to describe the old, geographically isolated, unconnected world.

“Let me… share with you some of the encounters that led me to conclude that the world is no longer round,” he says. He will literally travel backward in time, against the current of human knowledge.

To recap: Friedman, imagining himself Columbus, journeys toward India. Columbus, he notes, traveled in three ships; Friedman “had Lufthansa business class.” When he reaches India–Bangalore to be specific–he immediately plays golf. His caddy, he notes with interest, wears a cap with the 3M logo. Surrounding the golf course are billboards for Texas Instruments and Pizza Hut. The Pizza Hut billboard reads: “Gigabites of Taste.” Because he sees a Pizza Hut ad on the way to a golf course, something that could never happen in America, Friedman concludes: “No, this definitely wasn’t Kansas.”

After golf, he meets Nilekani, who casually mentions that the playing field is level. A nothing phrase, but Friedman has traveled all the way around the world to hear it. Man travels to India, plays golf, sees Pizza Hut billboard, listens to Indian CEO mutter small talk, writes 470-page book reversing the course of 2000 years of human thought. That he misattributes his thesis to Nilekani is perfect: Friedman is a person who not only speaks in malapropisms, he also hears malapropisms. Told level; heard flat. This is the intellectual version of Far Out Space Nuts, when NASA repairman Bob Denver sets a whole sitcom in motion by pressing “launch” instead of “lunch” in a space capsule. And once he hits that button, the rocket takes off.

And boy, does it take off. Predictably, Friedman spends the rest of his huge book piling one insane image on top of the other, so that by the end–and I’m not joking here–we are meant to understand that the flat world is a giant ice-cream sundae that is more beef than sizzle, in which everyone can fit his hose into his fire hydrant, and in which most but not all of us are covered with a mostly good special sauce. Moreover, Friedman’s book is the first I have encountered, anywhere, in which the reader needs a calculator to figure the value of the author’s metaphors.

God strike me dead if I’m joking about this. Judge for yourself. After the initial passages of the book, after Nilekani has forgotten Friedman and gone back to interacting with the sane, Friedman begins constructing a monstrous mathematical model of flatness. The baseline argument begins with a lengthy description of the “ten great flatteners,” which is basically a highlight reel of globalization tomahawk dunks from the past two decades: the collapse of the Berlin Wall, the Netscape IPO, the pre-Y2K outsourcing craze, and so on. Everything that would give an IBM human resources director a boner, that’s a flattener. The catch here is that Flattener #10 is new communications technology: “Digital, Mobile, Personal, and Virtual.” These technologies Friedman calls “steroids,” because they are “amplifying and turbocharging all the other flatteners.”

According to the mathematics of the book, if you add an IPac to your offshoring, you go from running to sprinting with gazelles and from eating with lions to devouring with them. Although these 10 flatteners existed already by the time Friedman wrote “The Lexus and the Olive Tree”–a period of time referred to in the book as Globalization 2.0, with Globalization 1.0 beginning with Columbus–they did not come together to bring about Globalization 3.0, the flat world, until the 10 flatteners had, with the help of the steroids, gone through their “Triple Convergence.” The first convergence is the merging of software and hardware to the degree that makes, say, the Konica Minolta Bizhub (the product featured in Friedman’s favorite television commercial) possible. The second convergence came when new technologies combined with new ways of doing business. The third convergence came when the people of certain low-wage industrial countries–India, Russia, China, among others–walked onto the playing field. Thanks to steroids, incidentally, they occasionally are “not just walking” but “jogging and even sprinting” onto the playing field.
Now let’s say that the steroids speed things up by a factor of two. It could be any number, but let’s be conservative and say two. The whole point of the book is to describe the journey from Globalization 2.0 (Friedman’s first bestselling book) to Globalization 3.0 (his current bestselling book). To get from 2.0 to 3.0, you take 10 flatteners, and you have them converge–let’s say this means squaring them, because that seems to be the idea–three times. By now, the flattening factor is about a thousand. Add a few steroids in there, and we’re dealing with a flattening factor somewhere in the several thousands at any given page of the book. We’re talking about a metaphor that mathematically adds up to a four-digit number. If you’re like me, you’re already lost by the time Friedman starts adding to this numerical jumble his very special qualitative descriptive imagery. For instance:

And now the icing on the cake, the ubersteroid that makes it all mobile: wireless. Wireless is what allows you to take everything that has been digitized, made virtual and personal, and do it from anywhere.
Ladies and gentlemen, I bring you a Thomas Friedman metaphor, a set of upside-down antlers with four thousand points: the icing on your uber-steroid-flattener-cake!

Let’s speak Friedmanese for a moment and examine just a few of the notches on these antlers (Friedman, incidentally, measures the flattening of the world in notches, i.e. “The flattening process had to go another notch”; I’m not sure where the notches go in the flat plane, but there they are.) Flattener #1 is actually two flatteners, the collapse of the Berlin Wall and the spread of the Windows operating system. In a Friedman book, the reader naturally seizes up in dread the instant a suggestive word like “Windows” is introduced; you wince, knowing what’s coming, the same way you do when Leslie Nielsen orders a Black Russian. And Friedman doesn’t disappoint. His description of the early 90s:

The walls had fallen down and the Windows had opened, making the world much flatter than it had ever been–but the age of seamless global communication had not yet dawned.
How the fuck do you open a window in a fallen wall? More to the point, why would you open a window in a fallen wall? Or did the walls somehow fall in such a way that they left the windows floating in place to be opened?

Four hundred and seventy-three pages of this, folks. Is there no God?

© 2012 New York Press. All rights reserved.
View this story online at: http://www.alternet.org/story/21856/

Posted in Higher Education, History of education, Meritocracy, Uncategorized

US Higher Education and Inequality: How the Solution Became the Problem

This post is a paper I wrote last summer and presented at the University of Oslo in August.  It’s a patchwork quilt of three previously published pieces around a topic I’ve been focused on a lot lately:  the role of US higher education — for better and for worse — in creating the new American aristocracy of merit.

In it I explore the way that systems of formal schooling both opened up opportunity for people to get ahead by individual merit and created the most effective structure ever devised for reproducing social inequality.  By defining merit as the accumulation of academic credentials and by constructing a radically stratified and extraordinarily opaque hierarchy of educational institutions for granting these credentials, the system grants an enormous advantage to the children of those who have already negotiated the system most effectively.

The previous generation of academic winners learned its secrets and decoded its inner logic.  They found out that it’s the merit badges that matter, not the amount of useful learning you acquire along the way.  So they coach their children in the art of gaming the system.  The result is that these children not only gain a huge advantage at winning the rewards of the meritocracy but also acquire a degree of legitimacy for these rewards that no previous system of inherited privilege ever attained.  They triumphed in a meritocratic competition, so they fully earned the power, money, and position that they derived from it.  Gotta love a system that can pull that off.

Here’s a PDF of the paper.

 

U.S. Higher Education and Inequality:

How the Solution Became the Problem

by

David F. Labaree

Lee L. Jacks Professor of Education, Emeritus

Stanford University

Email: dlabaree@stanford.edu

Web: https://dlabaree.people.stanford.edu

Twitter: @Dlabaree

Blog: https://davidlabaree.com/


Lecture delivered at University of Oslo

August 14, 2019

 

One of the glories of the emergence of modernity is that it offered the possibility and even the ideal that social position could be earned rather than inherited.  Instead of being destined to become a king or a peasant by dictate of paternity, for the first time in history individuals had the opportunity to attain their roles in society on the basis of merit.  And in this new world, public education became both the avenue for opportunity and the arbiter of merit.  But one of the anomalies of modernity is that school-based meritocracy, while increasing the fluidity of status attainment, has had little effect on the degree of inequality in modern societies.

In this paper, I explore how the structure of schooling helped bring about this outcome in the United States, with special focus on the evolution of higher education in the twentieth century.  The core issue driving the evolution of this structure is that the possibility for social mobility works at both the top and the bottom of the social hierarchy, with one group seeing the chance of rising up and the other facing the threat of falling down.  As a result, the former sees school as the way for their children to gain access to higher position while the latter sees it as the way for their children to preserve the social position they were born with.  Under pressure from both sides, the structure of schooling needs to find a way to accommodate these two contradictory aims.  In practice the system can accomplish this by allowing children from families at the bottom and the top to both increase their educational attainment beyond the level of their parents.  In theory this means that both groups can gain academic credentials that allow them to qualify for higher level occupational roles than the previous generation.  They can therefore both move up in parallel, gaining upward mobility without reducing the social distance between them.  Thus you end up with more opportunity without more equality.

Theoretically, it would be possible for the system to reduce or eliminate the degree to which elites manage to preserve their advantage through education simply by imposing a ceiling on the educational attainment allowed for their children.  That way, when the bottom group rises they get closer to the top group.  As a matter of practice, that option is not available in the U.S.  As the most liberal of liberal democracies, the U.S. sees any such limits on the choices of the upper group as a gross violation of individual liberty.  The result is a peculiar dynamic that has governed the evolution of the structure of American education over the years.  The pattern is this.  The out-group exerts political pressure in order to gain greater educational credentials for their children while the in-group responds by increasing the credentials of their own children.  The result is that both groups move up in educational qualifications at the same time.  Schooling goes up but social gaps remain the same.  It’s an elevator effect.  Every time the floor rises, so does the ceiling.

In the last 200 years of the history of schooling in the United States, the dynamic has played out like this.  At the starting point, one group has access to a level of education that is denied to another group.  The outsiders exert pressure to gain access to this level, which democratic leaders eventually feel compelled to grant.  But the insiders feel threatened by the loss of social advantage that greater access would bring, so they press to preserve that advantage.  How does the system accomplish this?  Through two simple mechanisms.  First, at the level where access is expanding, it stratifies schooling into curricular tracks or streams.  This means that the newcomers fill the lower tracks while the old-timers occupy the upper tracks.  Second, for the previously advantaged group it expands access to schooling at the next higher level.  So the system expands access to one level of schooling while simultaneously stratifying that level and opening up the next level.

This process has gone through three cycles in the history of U.S. schooling.  When the common school movement created a system of universal elementary schooling in the second quarter of the nineteenth century, it also created a selective public high school at the top of the system.  The purpose of the latter was to draw upper-class children from private schools into the public system by offering access to the high school only to graduates of the public grammar schools.  Without the elite high school as inducement, public schooling would have been left as the domain of paupers.  Then at the end of the nineteenth century, elementary grades filled up and demand increased for wider access to high school, so the system opened the doors to this institution.  But at the same time it introduced curriculum tracks and set off a surge of college enrollments by the former high school students.  And when high schools themselves filled up by the middle of the twentieth century, the system opened access to higher education by creating a range of new nonselective colleges and universities to absorb the influx.  This preserved the exclusivity of the older institutions, whose graduates in large numbers then started pursuing postgraduate degrees.

Result: A Very Stratified System of Higher Education

By the middle of the twentieth century, higher education was the zone of advantage for any American trying to get ahead or stay ahead.  And as a result of the process by which the tertiary system managed to incorporate both functions, it became extraordinarily stratified.  This was a system that emerged without a plan, based not on government fiat but on the competing interests of educational consumers seeking to use it to their own advantage.  A market-oriented system of higher education such as this one has a special dynamic that leads to a high degree of stratification.  Each educational enterprise competes with the others to establish a position in the market that will allow it to draw students, generate a comfortable surplus, and maintain this situation over time.  The problem is that, given the lack of effective state limits on the establishment and expansion of colleges, these schools find themselves in a buyer’s market.  Individual buyers may want one kind of program over another, which gives colleges an incentive to differentiate the market horizontally to accommodate these various demands.  At the same time, however, buyers want a college diploma that will help them get ahead socially.  This means that consumers don’t just want a college education that is different; they want one that is better – better at providing access to good jobs.  In response to this consumer demand, the U.S. has developed a multi-tiered hierarchy of higher education, ranging from open-access institutions at the bottom to highly exclusive institutions at the top, with each of the upper tier institutions offering graduates a degree that provides invidious distinction over graduates from schools in the lower tiers.

This stratified structure of higher education arose in the nineteenth century in a dynamic market system, where the institutional actors had to operate according to four basic rules.  Rule One:  Age trumps youth.  It’s no accident that the oldest American colleges are overrepresented in the top tier.  Of the top 20 U.S. universities,[1] 19 were founded before 1900 and 7 before 1776, even though more than half of all American universities were founded in the twentieth century.  Before competitors had entered the field, the oldest schools had already established a pattern of training the country’s leaders, locked up access to the wealthiest families, accumulated substantial endowments, and hired the most capable faculty.

Rule Two:  The strongest rewards go to those at the top of the system.  This means that every college below the top has a strong incentive to move up the ladder, and that top colleges have a strong incentive to preserve their advantage.  Even though it is very difficult for lower-level schools to move up, this doesn’t keep them from trying.  Despite long odds, the possible payoff is big enough that everyone stays focused on the tier above.  A few major success stories allow institutions to keep their hopes alive.  University presidents lie awake at night dreaming of replicating the route to the top followed by social climbers like Berkeley, Hopkins, Chicago, and Stanford.

Rule Three:  It pays to imitate your betters.  As the research university emerged as the model for the top tier in American higher education in the twentieth century, it became the ideal toward which all other schools sought to move.  To get ahead you needed to offer a full array of undergraduate, graduate, and professional programs, selective admissions and professors who publish, a football stadium and Gothic architecture.  (David Riesman called this structure of imitation “the academic procession.”)[2]  Of course, given the advantages enjoyed by the top tier, imitation has rarely produced the desired results.  But it’s the only game in town.  Even if you don’t move up in the rankings, you at least help reassure your school’s various constituencies that they are associated with something that looks like and feels like a real university.

Rule Four:  It’s best to expand the system by creating new colleges rather than increasing enrollments at existing colleges.  Periodically new waves of educational consumers push for access to higher education.  Initially, existing schools expanded to meet the demand, which meant that as late as 1900 Harvard was the largest U.S. university, public or private.[3]  But beyond this point in the growth process, it was not in the interest of existing institutions to provide wider access.  Concerned about protecting their institutional advantage, they had no desire to sully their hard-won distinction by admitting the unwashed.  Better to have this kind of thing done by additional colleges created for that purpose.  The new colleges emerged, then, as a clearly designated lower tier in the system, defined as such by both their newness and their accessibility.

Think about how these rules have shaped the historical process that produced the present stratified structure of higher education.  This structure has four tiers.  In line with Rule One, these tiers from top to bottom emerged in roughly chronological order.  The Ivy League colleges emerged in the colonial period, followed by a series of flagship state colleges in the early and mid-nineteenth century.  These institutions, along with a few social climbers that emerged later, grew to form the core of the elite research universities that make up the top tier of the system.  Schools in this tier are the most influential, prestigious, well-funded, exclusive, research-productive, and graduate-oriented – in the U.S. and in the world.

The second tier emerged from the land grant colleges that began appearing in the mid to late nineteenth century.  They were created to fill a need not met by existing institutions, expanding access for a broader array of students and offering programs with practical application in areas like agriculture and engineering.  They were often distinguished from the flagship research university by the word “state” in their title (as with University of Michigan vs. Michigan State University) or the label “A & M” (for Agricultural and Mechanical, as with University of Texas vs. Texas A & M).  But, in line with Rules Two and Three, they responded to consumer demand by quickly evolving into full service colleges and universities; and in the twentieth century they adopted the form and function of the research university, albeit in a more modest manner.

The third tier arose from the normal schools, established in the late nineteenth century to prepare teachers.  Like the land grant schools that preceded them, these narrowly vocational institutions evolved quickly under pressure from consumers, who wanted them to model themselves after the schools in the top tiers by offering a more valuable set of credentials that would provide access to a wider array of social opportunities.  Under these market pressures, normal schools evolved into teachers colleges, general-purpose state colleges, and finally, by the 1960s, comprehensive regional state universities.

The fourth tier emerged in part from the junior colleges that first arose in the early twentieth century and eventually evolved into an extensive system of community colleges.  Like the land grant college and normal school, these institutions offered access to a new set of students at a lower level of the system.  Unlike their predecessors, for the most part they have not been allowed by state governments to imitate the university model, remaining primarily as two-year schools.  But through the transfer option, many students use them as a more accessible route into institutions in the upper tiers.

What This Means for Educational Consumers

This highly stratified system is very difficult for consumers to navigate.  Instead of allocating access to the top level of the system using the mechanism employed by most of the rest of the world – a state-administered university matriculation exam – the highly decentralized American system allocates access by means of informal mechanisms that in comparison seem anarchic.  In the absence of one access route, there are many; and in the absence of clear rules for prospective students, there are multiple and conflicting rules of thumb.  Also, the rules of thumb vary radically according to which tier of the system you are seeking to enter.

First, let’s look at the admissions process for families (primarily the upper-middle class) who are trying to get their children entrée to the elite category of highly selective liberal arts colleges and research universities.  They have to take into account the wide array of factors that enter into the complex and opaque process that American colleges use to select students at this level:  quality of high school; quality of a student’s program of study; high school grades; test scores in the SAT or ACT college aptitude tests; interests and passions expressed in an application essay; parents’ alumni status; whether the student needs financial aid; athletic skills; service activities; diversity factors such as race, ethnicity, class, national origin, sex, and sexual orientation; and extracurricular contributions a student might make to the college community.  There is no centralized review process; instead every college carries out its own admissions review and employs its own criteria.

This open and indeterminate process provides a huge advantage for upper-middle-class families.  If you are a parent who is a college graduate and who works at a professional or managerial job, where the payoff of going to a good college is readily apparent, you have the cultural and social capital to negotiate this system effectively and read its coded messages.  For you, going to college is not the issue; it’s a matter of which college your children can get into that would provide them with the greatest competitive advantage in the workplace.  You want for them the college that might turn them down rather than the one that would welcome them with open arms.  So you enroll your children in test prep; hire a college advisor; map out a strategic plan for high school course-taking and extracurriculars; craft a service resume that makes them look appropriately public-spirited; take them on the obligatory college tour; and come up with just the right mix of applications to the stretch schools, the safety schools, and those in between.  And all this pays off handsomely: 77 percent of children from families in the top quintile by income gain a bachelor’s degree.[4]

If you are a parent farther down the class scale, who didn’t attend college and whose own work environment is not well stocked with college graduates, you have a lot more trouble negotiating the system.  The odds are not good:  for students from the fourth income quintile, only 17 percent earn a BA, and for the lowest quintile the rate is only 9 percent.[5]  Under these circumstances, having your child go to a college, any college, is a big deal; and one college is hard to distinguish from another.  But you are faced by a system that offers an extraordinary diversity of choices for prospective students:  public, not-for-profit, or for-profit; secular or religious; two-year or four-year; college or university; teaching or research oriented; massive or tiny student body; vocational or liberal; division 1, 2, or 3 intercollegiate athletics, or no sports at all; party school or nerd haven; high rank or low rank; full-time or part-time enrollment; urban or pastoral; gritty or serene; residential, commuter, or “suitcase college” (where students go home on weekends).  In this complex setting both consumers and providers somehow have to make choices that are in their own best interest.  Families from the upper-middle class are experts at negotiating this system, trimming the complexity down to a few essentials:  a four-year institution that is highly selective and preferably private (not-for-profit).  Everything else is optional.

If you’re a working-class family, however – lacking deep knowledge of the system and without access to the wide array of support systems that money can buy – you are more likely to take the system at face value.  Having your children go to a community college is the most obvious and attractive option.  It’s close to home, inexpensive, and easy to get into.  It’s where your children’s friends will be going, it allows them to work and go to school part time, and it doesn’t seem as forbiddingly alien as the state university (much less the Ivies).  You don’t need anything to gain admission except a high school diploma or GED.  No tests, counselors, tours, or resume-burnishing are required.  Or you could try the next step up, the local comprehensive state university.  To apply for admission, all you need is a high school transcript.  You might get turned down, but the odds are in your favor.  The cost is higher but can usually be paid with federal grants and loans.  An alternative is a for-profit institution, which is extremely accessible, flexible, and often online.  It’s not cheap, but federal grants and loans can pay the cost.  What you don’t have any way of knowing is that the most accessible colleges at the bottom of the system are also the ones where students are least likely to graduate.  (Only 29 percent of students entering two-year colleges earn an associate degree in three years;[6] only 39 percent earn a degree from a two-year or four-year institution in six years.[7])  You also may not be aware that the economic payoff for these colleges is lower, or that the colleges higher up the system may not only provide stronger support toward graduation but might even be less expensive because of greater scholarship funding.

In this way, the complexity and opacity of this market-based and informally-structured system helps reinforce the social advantages of those at the top of the social ladder and limit the opportunities for those at the bottom.  It’s a system that rewards the insider knowledge of old hands and punishes newcomers.  To work it effectively, you need to reject the fiction that a college is a college is a college and learn how to seek advantage in the system’s upper tiers.

On the other hand, the system’s fluidity is real.  The absence of state-sanctioned and formally structured tracks means that the barriers between the system’s tiers are permeable.  Your children’s future is not predetermined by their high school curriculum or their score on the matriculation exam.  They can apply to any college they want and see what happens.  Of course, if their grades and scores are not great, their chances of admission to upper level institutions are poor.  But their chances of getting into a teaching-oriented state university are pretty good, and their chances of getting into a community college are virtually assured.  And if they take the latter option, as is most often the case for children from socially disadvantaged families, there is a real (if modest) possibility that they might be able to prove their academic chops, earn an AA degree, and transfer to a university, even a research university.  The probabilities of moving up in the system are low:  most community college students never earn an AA degree; and transfers have a harder time succeeding in the university than students who enroll there as freshmen.  But the possibilities are nonetheless genuine.

American higher education offers something for everyone.  It helps those at the bottom to get ahead and those at the top to stay ahead.  It provides socially useful educational services for every ability level and every consumer preference.  This gives it an astonishingly broad base of political support across the entire population, since everyone needs it and everyone can potentially benefit from it.  And this kind of legitimacy is not possible if the opportunity the system offers to the lower classes is a simple fraud.  First generation college students, even if they struggled in high school, can attend community college, transfer to San Jose State, and end up working at Apple.  It’s not very likely, but it assuredly is possible.  True, the more advantages you bring to the system – cultural capital, connections, family wealth – the higher the probability that you will succeed in it.  But even if you are lacking in these attributes, there is still an outside chance that you just might make it through the system and emerge with a good middle class job.

This helps explain how the system gets away with preserving social advantage for those at the top without stirring a revolt from those at the bottom.  Students from working-class and lower-class families are much less likely to be admitted to the upper reaches of the higher education system that provides the greatest social rewards; but the opportunity to attend some form of college is high, and attending a college at the lower levels of the system may provide access to a good job.  The combination of high access to the lower levels of the system and high attrition on the way to attaining a bachelor’s degree creates a situation where the system gets credit for openness and the student bears the burden for failing to capitalize on it.  The system gave you a chance but you just couldn’t make the grade.  The ready-made explanations for personal failure accumulate quickly as students try to move through the system.  You didn’t study hard enough, you didn’t get good grades in high school, you didn’t get good test scores, so you couldn’t get into a selective college.  Instead you went to a community college, where you got distracted from your studies by work, family, and friends, and you didn’t have the necessary academic ability; so you failed to complete your AA degree.  Or maybe you did complete the degree and transferred to a university, but you had trouble competing with students who were more able and better prepared than you.  Along with the majority of students who don’t make it all the way to a BA, you bear the burden for your failure – a conclusion that is reinforced by the occasional but highly visible successes of a few of your peers.  The system is well defended against charges of unfairness.

So we can understand why people at the bottom don’t cry foul.  It gave you a chance.  And there is one more reason for keeping up your hope that education will pay off for you.  A degree from an institution in a lower tier may pay lower benefits, but for some purposes one degree really is as good as another.  Often the question in getting a job or a promotion is not whether you have a classy credential but whether you have whatever credential is listed as the minimum requirement in the job description.  Bureaucracies operate on a level where form often matters more than substance.  As long as you can check off the box confirming that you have a bachelor’s degree, the BA from University of Phoenix and the BA from University of Pennsylvania can serve the same function, by allowing you to be considered for the job.  And if, say, you’re a public school teacher, an MA from Capella University, under the district contract, is as effective as one from Stanford University, because either will qualify you for a $5,000 bump in pay.

At the same time, however, we can see why the system generates so much anxiety among students who are trying to use the system to move up the social ladder toward the good life.  It’s really the only game in town for getting a good job in twenty-first century America.  Without higher education, you are closed off from the white collar jobs that provide the most security and pay.  Yes, you could try to start a business, or you could try to work your way up the ladder in an organization without a college degree; but the first approach is highly risky and the second is highly unlikely, since most jobs come with minimum education requirements regardless of experience.  So you have to put all of your hopes in the higher-ed basket while knowing – because of your own difficult experiences in high school and because of what you see happening with family and friends – that your chances for success are not good.  Either you choose to pursue higher ed against the odds or you simply give up.  It’s a situation fraught with anxiety.

What is less obvious, however, is why the American system of higher education – which is so clearly skewed in favor of people at the top of the social order – fosters so much anxiety in them.  Upper-middle-class families in the U.S. are obsessed with education and especially with getting their children into the right college.  Why?  They live in the communities that have the best public schools; their children have cultural and social skills that schools value and reward; and they can afford the direct cost and opportunity cost of sending their high school grads to a residential college, even one of the pricey privates.  So why are there only a few colleges that seem to matter to this group?  Why does it matter so much to have your child not only get into the University of California but into Berkeley or UCLA?  What’s wrong with having them attend Santa Cruz or even one of the Cal State campuses?  And why the overwhelming passion for pursuing admission to Harvard or Yale?

The urgency behind all such frantic concern about admission to the most elite level of the system is this:  As parents of privilege, you can pass on your wealth to your children, but you can’t give them a profession.  Education is built into the core of modern societies, where occupations are no longer inherited but more or less earned.  If you’re a successful doctor or lawyer, you can provide a lot of advantages for your children; but in order for them to gain a position such as yours, they must succeed in school, get into a good college, and then into a good graduate school.  Unless they own the company, even business executives can’t pass on position to their children, and even then it’s increasingly rare that they would actually do so.  (Like most shareholders, they would profit more by having the company led by a competent executive than by the boss’s son.)  Under these circumstances of modern life, providing social advantage to your children means providing them with educational advantage.  Parents who have been through the process of climbing the educational hierarchy in order to gain prominent position in the occupational hierarchy know full well what it takes to make the grade.

They also know something else:  When you’re at the top of the social system, there is little opportunity to rise higher but plenty of opportunity to fall farther down.  Consider data on intergenerational mobility in the U.S.  For children of parents in the top quintile by household income, 60 percent end up at least one quintile lower than their parents and 37 percent fall at least two quintiles.[8]  That’s a substantial decline in social position.  So there’s good reason for these parents to fear downward mobility for their children and to use all their powers to marshal educational resources to head it off.  The problem is this:  Even though your own children have a wealth of advantages in negotiating the educational system, there are still enough bright and ambitious students from the lower classes who manage to make it through the educational gauntlet to pose a serious threat to them.  So you need to make sure that your children attend the best schools, get into the high reading group and the program for the gifted, take plenty of advanced placement classes, and then get into a highly selective college and graduate school.  Leave nothing to chance, since some of your heirs are likely to be less talented and ambitious than those children who prove themselves against all odds by climbing the educational ladder.  When the higher education system opened up access after World War II, it sharply increased competition for the top tier of the system, and the degree of competitiveness continued to increase as the proportion of students going to college grew to a sizeable majority.  As Jerome Karabel has noted in his study of elite college admissions, the American system of higher education does not equalize opportunity but it does equalize anxiety.[9]  It makes families at all levels of American society nervous about their ability to negotiate the system effectively, because it provides the only highway to the good life.

The American Meritocracy

The American system of education is formally meritocratic, but one of its social effects is to naturalize privilege.  This starts when a student’s academic merit is so central and so pervasive in schooling that it embeds itself within the individual person.  You start saying things like:  I’m smart.  I’m dumb.  I’m a good student.  I’m a bad student.  I’m good at reading but bad at math.  I’m lousy at sports.  The construction of merit is coextensive with the entire experience of growing up, and therefore it comes to constitute the emergent you.  It no longer seems to be something imposed by a teacher or a school but instead comes to be an essential part of your identity.  It’s now less what you do and increasingly who you are.  In this way, the systemic construction of merit begins to disappear and what’s left is a permanent trait of the individual.  You are your grade and your grade is your destiny.

The problem, however – as an enormous amount of research shows – is that the formal measures of merit that schools use are subject to powerful influence from a student’s social origins.  No matter how you measure merit, social origin affects your score.  It shapes your educational attainment.  It also shows up in measures that rank educational institutions by quality and selectivity.  Across the board, your parents’ social class has an enormous impact on the level of merit you are likely to acquire in school.  Students with higher social position end up accumulating a disproportionately large number of academic merit badges.

The correlations between socioeconomic status and school measures of merit are strong and consistent, and the causation is easy to determine.  Being born well has an enormously positive impact on the educational merit you acquire across your life.  Let us count the ways.  Economic capital is one obvious factor.  Wealthy communities can support better schools.  Social capital is another factor.  Families from the upper middle classes have a much broader network of relationships with the larger society than those from the working class, which provides a big advantage for their schooling prospects.  For them, the educational system is not foreign territory but feels like home.

Cultural capital is a third factor, and the most important of all.  School is a place that teaches students the cognitive skills, cultural norms, and forms of knowledge that are required for competent performance in positions of power.  Schools demonstrate a strong disposition toward these capacities over others:  mental over manual skills, theoretical over practical knowledge, decontextualized over contextualized perspectives, mind over body, Gesellschaft over Gemeinschaft.  Parents in the upper middle class are already highly skilled in these cultural capacities, which they deploy in their professional and managerial work on a daily basis.  Their children have grown up in the world of cultural capital.  It’s a language they learn to speak at home.  For working-class children, school is an introduction to a foreign culture and a new language, which unaccountably other students seem to already know.  They’re playing catchup from day one.  Also, it turns out that schools are better at rewarding cultural capital than they are at teaching it.  So kids from the upper middle class can glide through school with little effort while others continually struggle to keep up.  The longer they remain in school, the larger the achievement gap between the two groups.

In the wonderful world of academic merit, therefore, the fix is in.  Upper income students have a built-in advantage in acquiring the grades, credits, and degrees that constitute the primary prizes of the school meritocracy.  But – and this is the true magic of the educational process – the merits that these students accumulate at school come in a purified academic form that is independent of their social origins.  They may have entered schooling as people of privilege, but they leave it as people of merit.  They’re good students.  They’re smart.  They’re well educated.  As a result, they’re totally deserving of special access to the best jobs.  They arrived with inherited privilege but they leave with earned privilege.  So now they fully deserve what they get with their new educational credentials.

In this way, the merit structure of schooling performs a kind of alchemy.  It turns class position into academic merit.  It turns ascribed status into achieved status.  You may have gotten into Harvard by growing up in a rich neighborhood with great schools and by being a legacy.  But when you graduate, you bear the label of a person of merit, whose future accomplishments arise solely from superior abilities.  You’ve been given a second nature.

Consequences of Naturalized Privilege: The New Aristocracy

The process by which schools naturalize academic merit brings major consequences to the larger society.  The most important of these is that it legitimizes social inequality.  People who were born on third base get credit for hitting a triple, and people who have to start in the batter’s box face the real possibility of striking out.  According to the educational system, divergent social outcomes are the result of differences in individual merit, so, one way or the other, people get what they deserve.  The fact that a fraction of students from the lower classes manage against the odds to prove themselves in school and move up the social scale only adds further credibility to the existence of a real meritocracy.

In the United States in the last 40 years, we have come to see the broader implications of this system of status attainment through institutional merit.  It has created a new kind of aristocracy.  This is not Jefferson’s natural aristocracy, grounded in public accomplishments, but a caste of meritocratic privilege, grounded in the formalized and naturalized merit signaled by educational credentials.  As with aristocracies of old, the new meritocracy is a system of rule by your betters – no longer defined as those who are better born or more accomplished but now as those who are better educated.  Michael Young saw this coming back in 1958 in his fable, The Rise of the Meritocracy.[10]  But now we can see that it has truly taken hold.

The core expertise of this new aristocracy is skill in working the system.  You have to know how to play the game of educational merit-getting and pass this on to your children.  The secret is in knowing that the achievements that get awarded merit points through the process of schooling are not substantive but formal.  Schooling is not about learning the subject matter; it’s about getting good grades, accumulating course credits, and collecting the diploma on the way out the door.  Degrees pay off, not what you learned in school or even the number of years of schooling you have acquired.  What you need to know is what’s going to be on the test and nothing else.  So you need to study strategically and spend a lot of effort working the refs.  Give teacher what she wants and be sure to get on her good side.  Give the college admissions officers the things they are looking for in your application.  Pump up your test scores with coaching and by learning how to game the questions.

Members of the new aristocracy are particularly aggressive about carrying out a strategy known as opportunity hoarding.  There is no academic advantage too trivial to pursue, and the number of advantages you accumulate can never be enough.  In order to get your children into the right selective college you need to send them to the right school, get them into the gifted program in elementary school and the right track in high school, hire a tutor, carry out test prep, do the college tour, pursue prizes, develop a well-rounded resume for the student (sport, student leadership, musical instrument, service), pull strings as a legacy and a donor, and on and on and on.

As we saw earlier, such behavior by upper-middle-class parents is not as crazy as it seems.  The problem with being at the top is that there’s nowhere to go but down.  The system is just meritocratic enough to keep the most privileged families on edge, worried about having their child bested by a smart poor kid.  Again, as Karabel put it, the only thing U.S. education equalizes is anxiety.

As with earlier aristocracies, the new aristocrats of merit cluster together in the same communities, where the schools are like no other.  Their children attend the same elite colleges, where they meet their future mates (a process sociologists call assortative mating) and then transmit their combined cultural, social, and economic capital in concentrated form to their children.  And one consequence of this increased concentration of educational resources is that the achievement gap between low and high income students has been rising; Sean Reardon’s study shows the gap growing 40 percent in the last quarter of the twentieth century.  This is how educational and social inequality grows larger over time.

By assuming the form of meritocracy, schools have come to play a central role in defining the character of modern society.  In the process they have served to increase social opportunity while also increasing social inequality.  At the same time, they have established a solid educational basis for the legitimacy of this new inequality, and they have fostered the development of a new aristocracy of educational merit whose economic power, social privilege, and cultural cohesion would be the envy of the high nobility in early modern England or France.  Now, as then, the aristocracy assumes its outsized social role as a matter of natural right.

 

References

Community College Research Center. (2015). Community College FAQs. Teachers College, Columbia University. http://ccrc.tc.columbia.edu/Community-College-FAQs.html (accessed 8-3-15).

Geiger, Roger L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Karabel, Jerome. (2005). The Chosen: The Hidden History of Admission and Exclusion at Harvard, Yale, and Princeton. New York: Mariner Books.

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

Pell Institute and PennAHEAD. (2015). Indicators of Higher Education Equity in the United States (2015 revised edition). Philadelphia: The Pell Institute for the Study of Opportunity in Higher Education and the University of Pennsylvania Alliance for Higher Education and Democracy (PennAHEAD). http://www.pellinstitute.org/publications-Indicators_of_Higher_Education_Equity_in_the_United_States_45_Year_Report.shtml (accessed 8-10-15).

Pew Charitable Trusts Economic Mobility Project. (2012). Pursuing the American Dream: Economic Mobility Across Generations. Washington, DC: Pew Charitable Trusts. http://www.pewtrusts.org/en/research-and-analysis/reports/0001/01/01/pursuing-the-american-dream (accessed 8-10-15).

Riesman, David. (1958). The Academic Procession. In Constraint and Variety in American Education. Garden City, NY: Doubleday.

U.S. News and World Report. (2015). National Universities Rankings.  http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities (accessed 4-28-15).

Young, Michael D. (1958). The Rise of the Meritocracy, 1870-2023.  New York:  Random House.

 

[1] U.S. News (2015).

[2] Riesman (1958).

[3] Geiger (2004), 270.

[4] Pell (2015), p. 31.

[5] Pell (2015), p. 31.

[6] NCES (2014), table 326.20.

[7] CCRC (2015).

[8] Pew (2012), figure 3.

[9] Karabel (2005), p. 547.

[10] Young (1958).