Posted in Academic writing, Higher Education, Teaching, Writing

I Would Rather Do Anything Else than Grade Your Final Papers — Robin Lee Mozer

If the greatest joy that comes from retirement is that I no longer have to attend faculty meetings, the second greatest joy is that I no longer have to grade student papers.  I know, I know: commenting on student writing is a key component of being a good teacher, and there’s a real satisfaction that comes from helping someone become a better thinker and better writer.

But most students are not producing papers to improve their minds or hone their writing skills.  They’re just trying to fulfill a course requirement and get a decent grade.  And this creates a strong incentive not for excellence but for adequacy.  It encourages people to devote most of their energy toward gaming the system.

The key skill is to produce something that looks and feels like a good answer to the exam question or a good analysis of an intellectual problem.  Students have a powerful incentive to achieve the highest grade for the lowest investment of time and intellectual effort.  This means aiming for quantity over quality (puff up the prose to hit the word count) and form over substance (dutifully refer to the required readings without actually drawing meaningful content from them).  Glibness provides useful cover for the absence of content.  It’s depressing to observe how the system fosters discursive means that undermine the purported aims of education.

Back in the days when students turned in physical papers and then received them back with handwritten comments from the instructor, I used to get a twinge in my stomach when I saw that most students didn’t bother to pick up their final papers from the box outside my office.  I felt like a sucker for providing careful comments that no one would ever see.  At one point I even asked students to tell me in advance if they wanted their papers back, so I only commented on the ones that might get read.  But this was even more depressing, since it meant that a lot of students didn’t even mind letting me know that they really only cared about the grade.  The fiction of doing something useful was what helped keep me going.

So, like many other faculty, I responded with joy to a 2016 piece that Robin Lee Mozer wrote in McSweeney’s called “I Would Rather Do Anything Else than Grade Your Final Papers.”  As a public service to teachers everywhere, I’m republishing her essay here.  Enjoy.

 

I WOULD RATHER DO ANYTHING ELSE THAN GRADE YOUR FINAL PAPERS

Dear Students Who Have Just Completed My Class,

I would rather do anything else than grade your Final Papers.

I would rather base jump off of the parking garage next to the student activity center or eat that entire sketchy tray of taco meat leftover from last week’s student achievement luncheon that’s sitting in the department refrigerator or walk all the way from my house to the airport on my hands than grade your Final Papers.

I would rather have a sustained conversation with my grandfather about politics and government-supported healthcare and what’s wrong with the system today and why he doesn’t believe in homeowner’s insurance because it’s all a scam than grade your Final Papers. Rather than grade your Final Papers, I would stand in the aisle at Lowe’s and listen patiently to All the Men mansplain the process of buying lumber and how essential it is to sight down the board before you buy it to ensure that it’s not bowed or cupped or crook because if you buy lumber with defects like that you’re just wasting your money even as I am standing there, sighting down a 2×4 the way my father taught me 15 years ago.

I would rather go to Costco on the Friday afternoon before a three-day weekend. With my preschooler. After preschool.

I would rather go through natural childbirth with twins. With triplets. I would rather take your chemistry final for you. I would rather eat beef stroganoff. I would rather go back to the beginning of the semester like Sisyphus and recreate my syllabus from scratch while simultaneously building an elaborate class website via our university’s shitty web-based course content manager and then teach the entire semester over again than grade your goddamn Final Papers.

I do not want to read your 3AM-energy-drink-fueled excuse for a thesis statement. I do not want to sift through your mixed metaphors, your abundantly employed logical fallacies, your incessant editorializing of your writing process wherein you tell me As I was reading through articles for this paper I noticed that — or In the article that I have chosen to analyze, I believe the author is trying to or worse yet, I sat down to write this paper and ideas kept flowing into my mind as I considered what I should write about because honestly, we both know that the only thing flowing into your mind were thoughts of late night pizza or late night sex or late night pizza and sex, or maybe thoughts of that chemistry final you’re probably going to fail later this week and anyway, you should know by now that any sentence about anything flowing into or out of or around your blessed mind won’t stand in this college writing classroom or Honors seminar or lit survey because we are Professors and dear god, we have Standards.

I do not want to read the one good point you make using the one source that isn’t Wikipedia. I do not want to take the time to notice that it is cited properly. I do not want to read around your 1.25-inch margins or your gauche use of size 13 sans serif fonts when everyone knows that 12-point Times New Roman is just. Fucking. Standard. I do not want to note your missing page numbers. Again. For the sixth time this semester. I do not want to attempt to read your essay printed in lighter ink to save toner, as you say, with the river of faded text from a failing printer cartridge splitting your paper like Charlton Heston in The Ten Commandments, only there, it was a sea and an entire people and here it is your vague stand-in for an argument.

I do not want to be disappointed.

I do not want to think less of you as a human being because I know that you have other classes and that you really should study for that chemistry final because it is organic chemistry and everyone who has ever had a pre-med major for a roommate knows that organic chemistry is the weed out course and even though you do not know this yet because you have never even had any sort of roommate until now, you are going to be weeded out. You are going to be weeded out and then you will be disappointed and I do not want that for you. I do not want that for you because you will have enough disappointments in your life, like when you don’t become a doctor and instead become a philosophy major and realize that you will never make as much money as your brother who went into some soul-sucking STEM field and landed some cushy government contract and made Mom and Dad so proud and who now gives you expensive home appliances like espresso machines and Dyson vacuums for birthday gifts and all you ever send him are socks and that subscription to that shave club for the $6 middle-grade blades.

I do not want you to be disappointed. I would rather do anything else than disappoint you and crush all your hopes and dreams —

Except grade your Final Papers.

The offer to take your chemistry final instead still stands.

Posted in Capitalism, Global History, Higher Education, History, State Formation, Theory

Escape from Rome: How the Loss of Empire Spurred the Rise of Modernity — and What this Suggests about US Higher Ed

This post is a brief commentary on historian Walter Scheidel’s latest book, Escape from Rome.  It’s a stunningly original analysis of a topic that has long fascinated scholars like me:  How did Europe come to create the modern world?  His answer is this:  Europe became the cauldron of modernity and the dominant power in the world because of the collapse of the Roman empire — coupled with the fact that no other power was able to replace it for the next millennium.  The secret of European success was the absence of central control.  This is what led to the extraordinary inventions that characterized modernity — in technology, energy, war, finance, governance, science, and economy.

Below I lay out central elements of his argument, providing a series of salient quotes from the text to flesh out the story.  In the last few years I’ve come to read books exclusively on Kindle and Epub, which allows me to copy passages that catch my interest into Evernote for future reference.  So that’s where these quotes come from and why they don’t include page numbers.

At the end, I connect Scheidel’s analysis with my own take on the peculiar history of US higher education, as spelled out in my book A Perfect Mess.  My argument parallels his, showing how the US system arose in the absence of a strong state and dominant church, which fostered creative competition among colleges for students and money.  Out of this unpromising mess of institutions emerged a system of higher ed that came to dominate the academic world.

Escape from Rome

Here’s how Scheidel describes the consequences for Europe that arose from the fall of Rome and the long-time failure of efforts to impose a new empire there.

I argue that a single condition was essential in making the initial breakthroughs possible: competitive fragmentation of power. The nursery of modernity was riven by numerous fractures, not only by those between the warring states of medieval and early modern Europe but also by others within society: between state and church, rulers and lords, cities and magnates, knights and merchants, and, most recently, Catholics and Protestants. This often violent history of conflict and compromise was long but had a clear beginning: the fall of the Roman empire that had lorded it over most of Europe, much as successive Chinese dynasties lorded it over most of East Asia. Yet in contrast to China, nothing like the Roman empire ever returned to Europe.

Recurrent empire on European soil would have interfered with the creation and flourishing of a stable state system that sustained productive competition and diversity in design and outcome. This made the fall and lasting disappearance of hegemonic empire an indispensable precondition for later European exceptionalism and thus, ultimately, for the making of the modern world we now inhabit.

From this developmental perspective, the death of the Roman empire had a much greater impact than its prior existence and the legacy it bequeathed to later European civilization.

Contrast this with China, where dynasties rose and fell but where empire was a constant until the start of the 20th century.  It’s an extension of an argument that others, such as David Landes, have developed about the creative possibilities unleashed by a competitive state system in comparison to the stability and stasis of an imperial power.  Think about the relative stagnation of the Ottoman, Austro-Hungarian, and Russian empires in the 17th, 18th, and 19th centuries compared with the dynamic emerging nation states of Western Europe.  Think also of the paradox within Western Europe, in which the drivers of modernization came not from the richest and strongest imperial powers — Spain, France, and Austria — but from the marginal kingdom of England and the tiny Dutch republic.

The comparison between Europe and China during the second half of the first millennium is telling:

Two things matter most. One is the unidirectional character of European developments compared to the back and forth in China. The other is the level of state capacity and scale from and to which these shifts occurred. If we look at the notional endpoints of around 500 and 1000 CE, the dominant trends moved toward imperial restoration in China and toward inter- and intrastate fragmentation in Europe.

Scheidel shows how social power fragmented after the fall of Rome in such a way that made it impossible for a new hegemonic power to emerge.

After Rome’s collapse, the four principal sources of social power became increasingly unbundled. Political power was claimed by monarchs who gradually lost their grip on material resources and thence on their subordinates. Military power devolved upon lords and knights. Ideological power resided in the Catholic Church, which fiercely guarded its long-standing autonomy even as its leadership was deeply immersed in secular governance and the management of capital and labor. Economic power was contested between feudal lords and urban merchants and entrepreneurs, with the latter slowly gaining the upper hand. In the heyday of these fractures, in the High Middle Ages, weak kings, powerful lords, belligerent knights, the pope and his bishops and abbots, and autonomous capitalists all controlled different levers of social power. Locked in unceasing struggle, they were compelled to cooperate and compromise to make collective action possible.

He points out that “The Christian church was the most powerful and enduring legacy of the Roman empire,” becoming “Europe’s only functioning international organization.”  But in the realms of politics, war, and economy the local element was critical, which produced a situation where local innovation could emerge without interference from higher authority.

The rise of estates and the communal movement shared one crucial characteristic: they produced bodies such as citizen communes, scholarly establishments, merchant guilds, and councils of nobles and commoners that were, by necessity, relatively democratic in the sense that they involved formalized deliberative and consensus-building interactions. Over the long run, these bodies gave Latin Europe an edge in the development of institutions for impersonal exchange that operated under the rule of law and could be scaled up in response to technological change.

Under these circumstances, the states that started to emerge in Europe in the middle ages built on the base of distributed power and local initiative that developed in the vacuum left by the Roman Empire.

As state power recoalesced in Latin Europe, it did so restrained by the peculiar institutional evolution and attendant entitlements and liberties that this acutely fractured environment had engendered and that—not for want of rulers’ trying—could not be fully undone. These powerful medieval legacies nurtured the growth of a more “organic” version of the state—as opposed to the traditional imperial “capstone” state—in close engagement with organized representatives of civil society.

Two features were thus critical: strong local government and its routinized integration into polity-wide institutions, which constrained both despotic power and aristocratic autonomy, and sustained interstate conflict. Both were direct consequences of the fading of late Roman institutions and the competitive polycentrism born of the failure of hegemonic empire. And both were particularly prominent in medieval England: the least Roman of Western Europe’s former Roman provinces, it experienced what with the benefit of hindsight turned out to be the most propitious initial conditions for future transformative development.

The Pax Romana was replaced by a nearly constant state of war, with the proliferation of castle building and the dispersion of military capacity at the local level.  These wars were devastating for the participants but became a primary spur for technological, political, and economic innovation.  Everyone needed to develop an edge to help with the inevitable coming conflict.

After the Reformation, the small, marginal Protestant states on the North Sea enjoyed a paradoxical advantage in the early modern period, when Catholic Spain, France, and Austria were developing increasingly strong centralized states.  Their marginality allowed them to build most effectively on the inherited medieval model.

…it made a difference that the North Sea region was alone in preserving medieval decentralized political structures and communitarian legacies and building on them during the Reformation while more authoritarian monarchies rose across much of the continent—what Jan Luiten van Zanden deems “an unbroken democratic tradition” from the communal movement of the High Middle Ages to the Dutch Revolt and England’s Glorious Revolution.

England in particular benefited from the differential process of development in Europe.

Yet even as a comprehensive balance sheet remains beyond our reach, there is a case to be made that the British economy expanded and modernized in part because of rather than in spite of the tremendous burdens of war, taxation, and protectionism. By focusing on trade and manufacture as a means of strengthening the state, Britain’s elites came to pursue developmental policies geared toward the production of “goods with high(er) added value, that were (more) knowledge and capital intensive and that were better than those of foreign competitors so they could be sold abroad for a good price.”

Thanks to a combination of historical legacies and geography, England and then Britain happened to make the most of their pricey membership in the European state system. Economic growth had set in early; medieval integrative institutions and bargaining mechanisms were preserved and adapted to govern a more cohesive state; elite commitments facilitated high levels of taxation and public debt; and the wars that mattered most were won.

Reduced to its essentials, the story of institutional development followed a clear arc. In the Middle Ages, the dispersion of power within polities constrained the intensity of interstate competition by depriving rulers of the means to engage in sustained conflict. In the early modern period, these conditions were reversed. Interstate conflict escalated as diversity within states diminished and state capacity increased. Enduring differences between rival polities shaped and were in turn shaped by the ways in which elements of earlier domestic heterogeneity, bargaining and balancing survived and influenced centralization to varying degrees. The key to success was to capitalize on these medieval legacies in maximizing internal cohesion and state capacity later. This alone made it possible to prevail in interstate conflict without adopting authoritarian governance that stifled innovation. The closest approximations of this “Goldilocks scenario” could be found in the North Sea region, first in the Netherlands and then in England.

As maritime European states (England, Spain, Portugal, and the Dutch Republic) spread out across the globe, the competition increased exponentially — which then provided even stronger incentives for innovation at all levels of state and society.

Polycentrism was key. Interstate conflict did not merely foster technological innovation in areas such as ship design and weaponry that proved vital for global expansion, it also raised the stakes by amplifying both the benefits of overseas conquest and its inverse, the costs of opportunities forgone: successful ventures deprived rivals from rewards they might otherwise have reaped, and vice versa. States played a zero-sum game: their involvements overseas have been aptly described as “a competitive process driven as much by anxiety over loss as by hope of gain.”

In conclusion, Scheidel argues that bloody and costly conflict among competing states was the source of rapid modernization and the rise of European domination of the globe.

I am advocating a perspective that steers clear of old-fashioned triumphalist narratives of “Western” exceptionalism and opposing denunciations of colonialist victimization. The question is not who did what to whom: precisely because competitive fragmentation proved so persistent, Europeans inflicted horrors on each other just as liberally as they meted them out to others around the globe. Humanity paid a staggering price for modernity. In the end, although this may seem perverse to those of us who would prefer to think that progress can be attained in peace and harmony, it was ceaseless struggle that ushered in the most dramatic and exhilaratingly open-ended transformation in the history of our species: the “Great Escape.” Long may it last.

I strongly recommend that you read this book.  There’s insight and provocation on every page.

The Parallel with the History of US Higher Education

As I mentioned at the beginning, my own analysis of the emergence of American higher ed tracks nicely with Scheidel’s analysis of Europe after the fall of Rome.  US colleges arose in the early 19th century under conditions where the state was weak, the church divided, and the market strong.  In the absence of a strong central power and a reliable source of financial support, these colleges came into existence as corporations with state charters but not state funding.  (State colleges came later but followed the model of their private predecessors.)  Their creation had less to do with advancing knowledge than with serving more immediately practical aims.

One was to advance the faith in a highly competitive religious environment.  This provided a strong incentive to plant the denominational flag across the countryside, especially on the steadily moving western frontier.  A college was a way for Lutherans and Methodists and Presbyterians and others to announce their presence, educate congregants, attract newcomers, and train clergy.  Thus the huge number of colleges in Ohio, the old Northwest Territory.

Another spur for college formation was the crass pursuit of money.  The early US was a huge, underpopulated territory which had too much land and not enough buyers.  This turned nearly everyone on the frontier into a land speculator (ministers included), feverishly coming up with schemes to make the land in their town more valuable for future residents than the land in other towns in the area.  One way to do this was to set up a school, telegraphing to prospects that this was a place to settle down and raise a family.  When other towns followed suit, you could up the ante by establishing a college, usually bearing the town name, which told the world that yours was not some dusty agricultural village but a vibrant center of culture.

The result was a vast number of tiny and unimpressive colleges scattered across the less populated parts of a growing country.  Without strong funding from church or state, they struggled to survive in a highly competitive setting.  This they managed by creating lean institutions that were adept at attracting and retaining student consumers and eliciting donations from alumni and from the wealthier people in town.  The result was the most overbuilt system of higher education the world has ever seen, with five times as many colleges in 1880 as the entire continent of Europe.

All the system was lacking was academic credibility and a strong incentive for student enrollment.  These conditions were met at the end of the century, with the arrival of the German research university to crown the system and give it legitimacy and with the rise of the corporation and its need for white collar workers.

At this point, the chaotic fragmentation and chronic competition that characterized the American system of higher education turned out to be enormously functional.  Free from the constraints that European nation states and national churches imposed on universities, American institutions could develop programs, lure students, hustle for dollars, and promote innovations in knowledge production and technology.  They knew how to make themselves useful to their communities and their states, developing a broad base of political and financial support and demonstrating their social and economic value.

Competing colleges, like competing states, promoted a bottom-up vitality in the American higher ed system that was generally lacking in the older institutions of Europe that were under the control of a strong state or church.  Early institutional chaos led to later institutional strength, a system that was not created by design but emerged from an organic process of evolutionary competition.  In the absence of Rome (read: a hegemonic national university), the US higher education system became Rome.

Posted in Academic writing, Educational Research, Higher Education, Writing

Getting It Wrong — Rethinking a Life in Scholarship

This post is an overview of my life as a scholar.  I presented an oral version in my job talk at Stanford in 2002.  The idea was to make sense of the path I’d taken in my scholarly writing up to that point.  What were the issues I was looking at and why?  How did these ideas develop over time?  And what lessons can we learn from this process that might be of use to scholars who are just starting out?

This piece first appeared in print as the introduction to a 2005 book called Education, Markets, and the Public Good: The Selected Works of David F. Labaree.  As a friend told me after hearing about the book, “Isn’t this kind of compilation something that’s published after you’re dead?”  So why was I doing this as a mere youth of 58?  The answer: Routledge offered me the opportunity.  Was there ever an academic who turned down the chance to publish something when the chance arose?  The book was part of a series called — listen for the drum roll — The World Library of Educationalists, which must have a place near the top of the list of bad ideas floated by publishers.  After the first year, when a few libraries rose to the bait, annual sales of this volume never exceeded single digits.  Its rank in the Amazon bestseller list is normally in the two millions.

Needless to say, no one ever read this piece in its originally published form.  So I tried again, this time slightly adapting it for a 2011 volume edited by Wayne Urban called Leaders in the Historical Study of American Education, which consisted of autobiographical sketches by scholars in the field.  It now ranks in the five millions on Amazon, so the essay still never found a reader.  As a result, I decided to give the piece one more chance at life in my blog.  I enjoyed reading it again and thought it offered some value to young scholars just starting out in a daunting profession.  I hope you enjoy it too.

The core insight is that research trajectories are not things you can  carefully map out in advance.  They just happen.  You learn as you go.  And the most effective means of learning from your own work — at least from my experience — arises from getting it wrong, time and time again.  If you’re not getting things wrong, you may not be learning much at all, since you may just be continually finding what you’re looking for.  It may well be that what you need to find are the things you’re not looking for and that you really don’t want to confront.  The things that challenge your own world view, that take you in a direction you’d rather not go, forcing you to give up ideas you really want to keep.

Another insight I got from this process of reflection is that it’s good to know the central weaknesses in the way you do research.  Everyone has them.  Best to acknowledge where you’re coming from and learn to live with that.  These weaknesses don’t discount the value of your work; they just put limits on it.  Your way of doing scholarship is probably better at producing some kinds of insights than others.  That’s OK.  Build on your strengths and let others point out your weaknesses.  You have no obligation and no ability to give the final answer on any important question.  Instead, your job is to make a provocative contribution to the ongoing scholarly conversation and let other scholars take it from there, countering your errors and filling in the gaps.  There is no last word.

Here’s a link to a PDF of the 2011 version.  Hope you find it useful.

 

Adventures in Scholarship

Instead of writing an autobiographical sketch for this volume, I thought it would be more useful to write about the process of scholarship, using my own case as a cautionary tale.  The idea is to help emerging scholars in the field to think about how scholars develop a line of research across a career, both with the hope of disabusing them of misconceptions and showing them how scholarship can unfold as a scary but exhilarating adventure in intellectual development.  The brief story I tell here has three interlocking themes:  You need to study things that resonate with your own experience; you need to take risks and plan to make a lot of mistakes; and you need to rely on friends and colleagues to tell you when you’re going wrong.  Let me explore each of these points.

Study What Resonates with Experience

First, a little about the nature of the issues I explore in my scholarship and then some thoughts about the source of my interest in these issues. My work focuses on the historical sociology of the American system of education and on the thick vein of irony that runs through it.  This system has long presented itself as a model of equal opportunity and open accessibility, and there is a lot of evidence to support these claims.  In comparison with Europe, this upward expansion of access to education came earlier, moved faster, and extended to more people.  Today, virtually anyone can go to some form of postsecondary education in the U.S., and more than two-thirds do.  But what students find when they enter the educational system at any level is that they are gaining equal access to a sharply unequal array of educational experiences.  Why?  Because the system balances open access with radical stratification.  Everyone can go to high school, but quality of education varies radically across schools.  Almost everyone can go to college, but the institutions that are most accessible (community colleges) provide the smallest boost to a student’s life chances, whereas the ones that offer the surest entrée into the best jobs (major research universities) are highly selective.  This extreme mixture of equality and inequality, of accessibility and stratification, is a striking and fascinating characteristic of American education, which I have explored in some form or another in all my work.

Another prominent irony in the story of American education is that this system, which was set up to instill learning, actually undercuts learning because of a strong tendency toward formalism.  Educational consumers (students and their parents) quickly learn that the greatest rewards of the system go to those who attain its highest levels (measured by years of schooling, academic track, and institutional prestige), where credentials are highly scarce and thus the most valuable.  This vertically-skewed incentive structure strongly encourages consumers to game the system by seeking to accumulate the largest number of tokens of attainment – grades, credits, and degrees – in the most prestigious programs at the most selective schools.  However, nothing in this reward structure encourages learning, since the payoff comes from the scarcity of the tokens and not the volume of knowledge accumulated in the process of acquiring these tokens.  At best, learning is a side effect of this kind of credential-driven system.  At worst, it is a casualty of the system, since the structure fosters consumerism among students, who naturally seek to gain the most credentials for the least investment in time and effort.  Thus the logic of the used-car lot takes hold in the halls of learning.

In exploring these two issues of stratification and formalism, I tend to focus on one particular mechanism that helps explain both kinds of educational consequences, and that is the market.  Education in the U.S., I argue, has increasingly become a commodity, which is offered and purchased through market processes in much the same way as other consumer goods.  Educational institutions have to be sensitive to consumers, by providing the mix of educational products that the various sectors of the market demand.  This promotes stratification in education, because consumers want educational credentials that will distinguish them from the pack in their pursuit of social advantage.  It also promotes formalism, because markets operate based on the exchange value of a commodity (what it can be exchanged for) rather than its use value (what it can be used for).  Educational consumerism preserves and increases social inequality, undermines knowledge acquisition, and promotes the dysfunctional overinvestment of public and private resources in an endless race for degrees of advantage.  The result is that education has increasingly come to be seen primarily as a private good, whose benefits accrue only to the owner of the educational credential, rather than a public good, whose benefits are shared by all members of the community even if they don’t have a degree or a child in school.  In many ways, the aim of my work has been to figure out why the American vision of education over the years made this shift from public to private.

This is what my work has focused on in the last 30 years, but why focus on these issues?  Why this obsessive interest in formalism, markets, stratification, and education as arbiter of status competition?  Simple. These were the concerns I grew up with.

George Orwell once described his family’s social location as the lower upper middle class, and this captures the situation of my own family.  In The Road to Wigan Pier, his meditation on class relations in England, he talks about his family as being both culture rich and money poor.[1]  Likewise for mine.  Both of my grandfathers were ministers.  On my father’s side the string of clergy went back four generations in the U.S.  On my mother’s side, not only was her father a minister but so was her mother’s father, who was in turn the heir to a long clerical lineage in Scotland.  All of these ministers were Presbyterians, whose clergy has long had a distinctive history of being highly educated cultural leaders who were poor as church mice.  The last is a bit of an exaggeration, but the point is that their prestige and authority came from learning and not from wealth.  So they tended to value education and disdain grubbing for money.  My father was an engineer who managed to support his family in a modest but comfortable middle-class lifestyle.  He and my mother plowed all of their resources into the education of their three sons, sending all of them to a private high school in Philadelphia (Germantown Academy) and to private colleges (Lehigh, Drexel, Wooster, and Harvard).  Both of my parents were educated at elite schools (Princeton and Wilson) – on ministerial scholarships – and they wanted to do the same for their own children.

What this meant is that we grew up taking great pride in our cultural heritage and educational accomplishments and adopting a condescending attitude to those who simply engaged in trade for a living.  Coupled with this condescension was a distinct tinge of envy for the nice clothes, well decorated houses, new cars, and fancy trips that the families of our friends experienced.  I thought of my family as a kind of frayed nobility, raising the flag of culture in a materialistic society while wearing hand-me-down clothes.  From this background, it was only natural for me to study education as the central social institution, and to focus in particular on the way education had been corrupted by the consumerism and status-competition of a market society.  In doing so I was merely entering the family business.  Someone out there needed to stand up for substantive over formalistic learning and for the public good over the private good, while at the same time calling attention to the dangers of a social hierarchy based on material status.  So I launched my scholarship from a platform of snobbish populism – a hankering for a lost world where position was grounded on the cultural authority of true learning and where mere credentialism could not hold sway.

Expect to Get Things Wrong

Becoming a scholar is not easy under the best of circumstances, and we may make it even harder by trying to imbue emerging scholars with a dedication to getting things right.[2]  In doctoral programs and tenure reviews, we stress the importance of rigorous research methods and study design, scrupulous attribution of ideas, methodical accumulation of data, and cautious validation of claims.  Being careful to stand on firm ground methodologically is not in itself a bad thing for scholars, but trying to be right all the time can easily make us overly cautious, encouraging us to keep so close to our data and so far from controversy that we end up saying nothing that’s really interesting.  A close look at how scholars actually carry out their craft reveals that they generally thrive on frustration.  Or at least that has been my experience.  When I look back at my own work over the years, I find that the most consistent element is a tendency for getting it wrong.  Time after time I have had to admit failure in the pursuit of my intended goal, abandon an idea that I had once warmly embraced, or backtrack to correct a major error.  In the short run these missteps were disturbing, but in the long run they have proven fruitful.

Maybe I’m just rationalizing, but it seems that getting it wrong is an integral part of scholarship.  For one thing, it’s central to the process of writing.  Ideas often sound good in our heads and resonate nicely in the classroom, but the real test is whether they work on paper.[3]  Only there can we figure out the details of the argument, assess the quality of the logic, and weigh the salience of the evidence.  And whenever we try to translate a promising idea into a written text, we inevitably encounter problems that weren’t apparent when we were happily playing with the idea over lunch.  This is part of what makes writing so scary and so exciting:  It’s a high wire act, in which failure threatens us with every step forward.  Can we get past each of these apparently insuperable problems?  We don’t really know until we get to the end.

This means that if there’s little risk in writing a paper there’s also little potential reward.  If all we’re doing is putting a fully developed idea down on paper, then this isn’t writing; it’s transcribing.  Scholarly writing is most productive when authors are learning from the process, and this happens only if the writing helps us figure out something we didn’t really know (or only sensed), helps us solve an intellectual problem we weren’t sure was solvable, or makes us turn a corner we didn’t know was there.  Learning is one of the main things that makes the actual process of writing (as opposed to the final published product) worthwhile for the writer.  And if we aren’t learning something from our own writing, then there’s little reason to think that future readers will learn from it either.  But these kinds of learning can only occur if a successful outcome for a paper is not obvious at the outset, which means that the possibility of failure is critically important to the pursuit of scholarship.

Getting it wrong is also functional for scholarship because it can force us to give up a cherished idea in the face of the kinds of arguments and evidence that accumulate during the course of research.  Like everyone else, scholars are prone to confirmation bias.  We look for evidence to support the analysis we prefer and overlook evidence that supports other interpretations.  So when we collide with something in our research or writing that deflects us from the path toward our preferred destination, we tend to experience this deflection as failure.  However, although these experiences are not pleasant, they can be quite productive.  Not only do they prompt us to learn things we don’t want to know, they can also introduce arguments into the literature that people don’t want to hear.  A colleague at the University of Michigan, David Angus, had both of these benefits in mind when he used to pose the following challenge to every candidate for a faculty position in the School of Education:  “Tell me about some point when your research forced you to give up an idea you really cared about.”

I have experienced all of these forms of getting it wrong.  Books never worked out the way they were supposed to, because of changes forced on me by the need to come up with remedies for ailing arguments.  The analysis often turned in a direction that meant giving up something I wanted to keep and embracing something I preferred to avoid.  And nothing ever stayed finished.  Just when I thought I had a good analytical hammer and started using it to pound everything in sight, it would shatter into pieces and I would be forced to start over.  This story of misdirection and misplaced intentions starts, as does every academic story, with a dissertation.

Marx Gives Way to Weber

My dissertation topic fell into my lap one day during the final course in my doctoral program in sociology at the University of Pennsylvania, when I mentioned to Michael Katz that I had done a brief study of Philadelphia’s Central High School for an earlier class.  He had a new grant for studying the history of education in Philadelphia and Central was the lead school.  He needed someone to study the school, and I needed a topic, advisor, and funding; by happy accident, it all came together in 15 minutes.  I had first become interested in education as an object of study as an undergraduate at Harvard in the late 1960s, where I majored in Students for a Democratic Society and minored in sociology.  In my last year or two there, I worked on a Marxist analysis of Harvard as an institution of social privilege (is there a better case?), which whetted my appetite for educational research.

For the dissertation, I wanted to apply the same kind of Marxist approach to Central High School, which seemed to beg for it.  Founded in 1838, it was the first high school in the city and one of the first in the country, and it later developed into the elite academic high school for boys in the city.  It looked like the Harvard of public high schools.  I had a model for this kind of analysis, Katz’s study of Beverly High School, in which he explained how this high school, shortly after its founding, came to be seen by many citizens as an institution that primarily served the upper classes, thus prompting the town meeting to abolish the school in 1861.[4]  I was planning to do this kind of study about Central, and there seemed to be plenty of evidence to support such an interpretation, including its heavily upper-middle-class student body, its aristocratic reputation in the press, and its later history as the city’s elite high school.

That was the intent, but my plan quickly ran into two big problems in the data I was gathering.  First, a statistical analysis of student attainment and achievement at the school over its first 80 years showed a consistent pattern:  only one-quarter of the students managed to graduate, which meant it was highly selective; but grades and not class determined who made it and who didn’t, which meant it was – surprise – highly meritocratic.  Attrition in modern high schools is strongly correlated with class, but this was not true in the early years at Central.  Middle class students were more likely to enroll in the first place, but they were no more likely to succeed than working class students.  The second problem was that the high school’s role in the Philadelphia school system didn’t fit the Marxist story of top-down control that I was trying to tell.  In the first 50 years of the high school, there was a total absence of bureaucratic authority over the Philadelphia school system.  The high school was an attractive good in the local educational market, offering elevated education in a grand building at a collegiate level (it granted bachelor degrees) and at no cost.  Grammar school students competed for access to this commodity by passing an entrance exam, and grammar school masters competed to get the most students into Central by teaching to the test.  The power that the high school exerted over the system was considerable but informal, arising from consumer demand from below rather than bureaucratic dictate from above.

Thus my plans to tell a story of class privilege and social control fell apart at the very outset of my dissertation; in its place, I found a story about markets and stratification:  Marx gives way to Weber.  The establishment of Central High school in the nation’s second largest city created a desirable commodity with instant scarcity, and this consumer-based market power not only gave the high school control over the school system but also gave it enough autonomy to establish a working meritocracy.  The high school promoted inequality: it served a largely middle class constituency and established an extreme form of educational stratification.  But it imposed a tough meritocratic regime equally on the children of the middle class and working class, with both groups failing most of the time.

Call on Your Friends for Help

In the story I’m telling here, the bad news is that scholarship is a terrain that naturally lures you into repeatedly getting it wrong.  The good news is that help is available if you look for it, which can turn scholarly wrong-headedness into a fruitful learning experience.  Just ask your friends and colleagues.  The things you most don’t want to hear may be just the things that will save you from intellectual confusion and professional oblivion.  Let me continue with the story, showing how colleagues repeatedly saved my bacon.

Markets Give Ground to Politics

Once I completed the dissertation, I gradually settled into being a Weberian, a process that took a while because of the disdain that Marxists hold for Weber.[5]  I finally decided I had a good story to tell about markets and schools, even if it wasn’t the one I had wanted to tell, so I used this story in rewriting the dissertation as a book.  When I had what I thought was a final draft ready to send to the publisher, I showed it to my colleague at Michigan State, David Cohen, who had generously offered to give it a reading.  His comments were extraordinarily helpful and quite devastating.  In the book, he said, I was interpreting the evolution of the high school and the school system as a result of the impact of the market, but the story I was really telling was about an ongoing tension for control of schools between markets and politics.[6]  The latter element was there in the text, but I had failed to recognize it and make it explicit in the analysis.  In short, he explained to me the point of my own book; so I had to rewrite the entire manuscript in order to bring out this implicit argument.

Framing this case in the history of American education as a tension between politics and markets allowed me to tap into the larger pattern of tensions that always exist in a liberal democracy:  the democratic urge to promote equality of power and access and outcomes, and the liberal urge to preserve individual liberty, promote free markets, and tolerate inequality.  The story of Central High School spoke to both these elements.  It showed a system that provided equal opportunity and unequal outcomes.  Democratic politics pressed for expanding access to high school for all citizens, whereas markets pressed for restricting access to high school credentials through attrition and tracking.  Central see-sawed back and forth between these poles, finally settling on the grand compromise that has come to characterize American education ever since:  open access to a stratified school system.  Using both politics and markets in the analysis also introduced me to the problem of formalism, since political goals for education (preparing competent citizens) value learning, whereas market goals (education for social advantage) value credentialing.

Disaggregating Markets

The book came out in 1988 with the title, The Making of an American High School.[7]  With politics and markets as my new hammer, everything looked like a nail.  So I wrote a series of papers in which I applied the idea to a wide variety of educational institutions and reform efforts, including the evolution of high school teaching as work, the history of social promotion, the history of the community college, the rhetorics of educational reform, and the emergence of the education school.

Midway through this flurry of papers, however, I ran into another big problem.  I sent a draft of my community college paper to David Hogan, a friend and former member of my dissertation committee at Penn, and his critique stopped me cold.  He pointed out that I was using the idea of educational markets to refer to two things that were quite different, both in concept and in practice.  One was the actions of educational consumers, the students who want education to provide the credentials they needed in order to get ahead; the other was the actions of educational providers, the taxpayers and employers who want education to produce the human capital that society needs in order to function.  The consumer sought education’s exchange value, providing selective benefits for the individual who owns the credential; the producer sought education’s use value, providing collective benefits to everyone in society, even those not in school.

This forced me to reconstruct the argument from the ground up, abandoning the politics and markets angle and constructing in its place a tension among three goals that competed for primacy in shaping the history of American education.  “Democratic equality” referred to the goal of using education to prepare capable citizens; “social efficiency” referred to the goal of using education to prepare productive workers; and “social mobility” referred to the goal of using education to enable individuals to get ahead in society.  The first was a stand-in for educational politics, the second and third were a disaggregation of educational markets.

Abandoning the Good, the Bad, and the Ugly

Once formulated, the idea of the three goals became a mainstay in my teaching, and for a while it framed everything I wrote.  I finished the string of papers I mentioned earlier, energized by the analytical possibilities inherent in the new tool.  But by the mid-1990s, I began to be afraid that its magic power would start to fade on me soon, as had happened with earlier enthusiasms like Marxism and politics-and-markets.  Most ideas have a relatively short shelf life, as metaphors quickly reach their limits and big ideas start to shrink upon close examination.  That doesn’t mean these images and concepts are worthless, only that they are bounded, both conceptually and temporally.  So scholars need to strike while the iron is hot.  Michael Katz once made this point to me with the Delphic advice, “Write your first book first.”  In other words, if you have an idea worth injecting into the conversation, you should do so now, since it will eventually evolve into something else, leaving the first idea unexpressed.  Since the evolution of an idea is never finished, holding off publication until the idea is done is a formula for never publishing.

So it seemed like the right time to put together a collection of my three-goals papers into a book, and I had to act quickly before they started to turn sour.  With a contract for the book and a sabbatical providing time to put it together, I now had to face the problem of framing the opening chapter.  In early 1996 I completed a draft and submitted it to American Educational Research Journal.  The reviews knocked me back on my heels.  They were supportive but highly critical.  One in particular, which I later found out was written by Norton Grubb, forced me to rethink the entire scheme of competing goals.  He pointed out something I had completely missed in my enthusiasm for the tool-of-the-moment.  In practice my analytical scheme with three goals turned into a normative scheme with two:  a Manichean vision of light and darkness, with Democratic Equality as the Good, and with Social Mobility and Social Efficiency as the Bad and the Ugly.  This ideologically colored representation didn’t hold up under close scrutiny.  Grubb pointed out that social efficiency is not as ugly as I was suggesting.  Like democratic equality and unlike social mobility, it promotes learning, since it has a stake in the skills of the workforce.  Also, like democratic equality, it views education as a public good, whose benefits accrue to everyone and not just (as with social mobility) to the credential holder.

This trenchant critique forced me to start over, putting a different spin on the whole idea of competing goals, abandoning the binary vision of good and evil, reluctantly embracing the idea of balance, and removing the last vestige of my original bumper-sticker Marxism.  As I reconstructed the argument, I put forward the idea that all three of these goals emerge naturally from the nature of a liberal democracy, and that all three are necessary.[8]  There is no resolution to the tension among educational goals, just as there is no resolution to the problem of being both liberal and democratic.  We need an educational system that makes capable citizens and productive workers while also enabling individuals to pursue their own aspirations.  And we all act out our support for each of these goals according to which social role is most salient to us at the moment.  As citizens, we want graduates who can vote intelligently; as taxpayers and employers, we want graduates who will increase economic productivity; and as parents, we want an educational system that offers our children social opportunity.  The problem is the imbalance in the current mix of goals, as the growing primacy of social mobility over the other two goals privileges private over public interests, stratification over equality, and credentials over learning.

Examining Life at the Bottom of the System

With this reconstruction of the story, I was able to finish my second book, published in 1997, and get it out the door before any other major problems could threaten its viability.[9]  One such problem was already coming into view.  In comments on my AERJ goals paper, John Rury (the editor) pointed out that my argument relied on a status competition model of social organization – students fighting for scarce credentials in order to move up or stay up – that did not really apply to the lower levels of the system.  Students in the lower tracks in high school and in the open-access realms of higher education (community colleges and regional state universities) lived in a different world from the one I was talking about.  They were affected by the credentials race, but they weren’t really in the race themselves.  For them, the incentives to compete were minimal, the rewards remote, and the primary imperative was not success but survival.

Fortunately, however, there was one place at the bottom of the educational hierarchy I did know pretty well, and that was the poor beleaguered education school.  From 1985 to 2003, while I was teaching in the College of Education at Michigan State University, I received a rich education in the subject.  I had already started a book about ed schools, but it wasn’t until the book was half completed that I realized it was forcing me to rethink my whole thesis about the educational status game.  Here was an educational institution that was the antithesis of the Harvards and Central High Schools that I had been writing about thus far.  Residing at the very bottom of the educational hierarchy, the ed school was disdained by academics, avoided by the best students, ignored by policymakers, and discounted by its own graduates.  It was the perfect case to use in answering a question I had been avoiding:  What happens to education when credentials carry no exchange value and the status game is already lost?

What I found is that life at the bottom has some advantages, but they are outweighed by disadvantages.  On the positive side, the education school’s low status frees it to focus efforts on learning rather than on credentials, on the use value rather than exchange value of education; in this sense, it is liberated from the race for credentials that consumes the more prestigious realms of higher education.  On the negative side, however, the ed school’s low status means that it has none of the autonomy that prestigious institutions (like Central High School) generate for themselves, which leaves it vulnerable to kibitzing from the outside.  This institutional weakness also has made the ed school meekly responsive to its environment, so that over the years it obediently produced large numbers of teachers at low cost and with modest professional preparation, as requested.

When I had completed a draft of the book, I asked for comments from two colleagues at Michigan State, Lynn Fendler and Tom Bird, who promptly pointed out several big problems with the text.  One had to do with the argument in the last few chapters, where I was trying to make two contradictory points:  ed schools were weak in shaping schools but effective in promoting progressive ideology.  The other problem had to do with the book’s tone:  as an insider taking a critical position about ed schools, I sounded like I was trying to enhance my own status at the expense of colleagues.  Fortunately, they were able to show me a way out of both predicaments.  On the first issue, they helped me see that ed schools were more committed to progressivism as a rhetorical stance than as a mode of educational practice.  In our work as teacher educators, we have to prepare teachers to function within an educational system that is hostile to progressive practices.  On the second issue, they suggested that I shift from the third person to the first person.  By announcing clearly both my membership in the community under examination and my participation in the problems I was critiquing, I could change the tone from accusatory to confessional.  With these important changes in place, The Trouble with Ed Schools was published in 2004.[10]

Enabling Limitations

In this essay I have been telling a story about grounding research in an unlovely but fertile mindset, getting it wrong repeatedly, and then trying to fix it with the help of friends.  However, I don’t want to leave the impression that I think any of these fixes really resolved the problems.  The story is more about filling potholes than about re-engineering the road.  It’s also about some fundamental limitations in my approach to the historical sociology of American education, which I have been unwilling and unable to fix since they lie at the core of my way of seeing things.  Intellectual frameworks define, shape, and enable the work of scholars.  Such frameworks can be helpful by allowing us to cut a slice through the data and reveal interesting patterns that are not apparent from other angles, but they can only do so if they maintain a sharp leading edge.  As an analytical instrument, a razor works better than a baseball bat, and a beach ball doesn’t work at all.  The sharp edge, however, comes at a cost, since it necessarily narrows the analytical scope and commits a scholar to one slice through a problem at the expense of others.  I’m all too aware of the limitations that arise from my own cut at things.

One problem is that I tend to write a history without actors.  Taking a macro-sociological approach to history, I am drawn to explore general patterns and central tendencies in the school-society relationship rather than the peculiarities of individual cases.  In the stories I tell, people don't act.  Instead, social forces contend, social institutions evolve in response to social pressures, and collective outcomes ensue.  My focus is on general processes and structures rather than on the variations within categories.  What is largely missing from my account of American education is the radical diversity of traits and behaviors that characterizes educational actors and organizations.  I plead guilty to these charges.  However, my aim has been not to write a tightly textured history of the particular but to explore some of the broad socially structured patterns that shape the main outlines of American educational life.  My sense is that this kind of work serves a useful purpose—especially in a field such as education, whose dominant perspectives have been psychological and presentist rather than sociological and historical; and in a sub-field like history of education, which can be prone to the narrow monograph with little attention to the big picture; and in a country like the United States, which is highly individualistic in orientation and tends to discount the significance of the collective and the categorical.

Another characteristic of my work is that I tend to stretch arguments well beyond the supporting evidence.  As anyone can see in reading my books, I am not in the business of building an edifice of data and planting a cautious empirical generalization on the roof.  My first book masqueraded as a social history of an early high school, but it was actually an essay on the political and market forces shaping the evolution of American education in general—a big leap to make from historical data about a single, atypical school.  Likewise my second book is a series of speculations about credentialing and consumerism that rests on a modest and eclectic empirical foundation.  My third book involves minimal data on education in education schools and maximal rumination about the nature of “the education school.”  In short, validating claims has not been my strong suit.  I think the field of educational research is sufficiently broad and rich that it can afford to have some scholars who focus on constructing credible empirical arguments about education and others who focus on exploring ways of thinking about the subject.

The moral of this story, therefore, may be that scholarship is less a monologue than a conversation.  In education, as in other areas, our field is so expansive that we can’t cover more than a small portion, and it’s so complex that we can’t even gain mastery over our own tiny piece of the terrain.  But that’s ok.  As participants in the scholarly conversation, our responsibility is not to get things right but to keep things interesting, while we rely on discomfiting interactions with our data and with our colleagues to provide the correctives we need to make our scholarship more durable.

[1]  George Orwell,  The Road to Wigan Pier (New York: Harcourt, Brace, 1958).

[2]  I am grateful to Lynn Fendler and Tom Bird for comments on an earlier draft of this portion of the essay.  As they have done before, they saved me from some embarrassing mistakes.  I presented an earlier version of this analysis in a colloquium at the Stanford School of Education in 2002 and in the Division F Mentoring Seminar at the American Educational Research Association annual meeting in New Orleans later the same year.  A later version was published as the introduction to Education, Markets, and the Public Good: The Selected Works of David F. Labaree (London: Routledge Falmer, 2007).  Reprinted with the kind permission of Taylor and Francis.

[3]  That doesn’t mean it’s necessarily the best way to start developing an idea.  For me, teaching has always served better as a medium for stimulating creative thought.  It’s a chance for me to engage with ideas from texts about a particular topic, develop a story about these ideas, and see how it sounds when I tell it in class and listen to student responses.  The classroom has a wonderful mix of traits for these purposes: by forcing discipline and structure on the creative process while allowing space for improvisation and offering the chance to reconstruct everything the next time around.  After my first book, most of my writing had its origins in this pedagogical process.  But at a certain point I find that I have to test these ideas in print.

[4]  Michael B. Katz, The Irony of Early School Reform: Educational Innovation in Mid-Nineteenth Century Massachusetts (Boston: Harvard University Press, 1968).

[5]  Marx’s message is rousing and it can fit on a bumper sticker:  Workers of the world, unite!  But Weber’s message is more complicated, pessimistic, and off-putting:  The iron cage of rationalization has come to dominate the structure of thought and social action, but we can’t stop it or even escape from it.

[6]  He also pointed out, in passing, that my chapter on the attainment system at the high school – which incorporated 17 tables in the book (30 in the dissertation), and which took me two years to develop by collecting, coding, keying, and statistically analyzing data from 2,000 student records – was essentially one big footnote in support of the statement, “Central High School was meritocratic.”  Depressing but true.

[7]  David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988).

[8]  David F. Labaree, "Public Goods, Private Goods: The American Struggle over Educational Goals," American Educational Research Journal 34:1 (Spring, 1997): 39-81.

[9]  David F. Labaree, How to Succeed in School Without Really Learning: The Credentials Race in American Education (New Haven: Yale University Press, 1997).

[10] David F. Labaree,  The Trouble with Ed Schools (New Haven: Yale University Press, 2004).

Posted in Higher Education, History of education, Uncategorized

Research Universities and the Public Good

This post is a review essay of a new book called Research Universities and the Public Good.  It appeared in the current issue of American Journal of Sociology.  Here’s a link to a PDF of the original.

Research Universities and the Public Good: Discovery for an Uncertain Future

By Jason Owen-Smith. Stanford, Calif.: Stanford University Press, 2018. Pp. xii + 213. $35.00.

David F. Labaree
Stanford University

American higher education has long been immune to the kind of criticism levied against elementary and secondary education because it has been seen as a great success story, in contrast to the popular narrative of failure that has been applied to the lower levels of the system. And the rest of the world seems to agree with this distinction. Families outside the United States have not been eager to send their children to our schools, but they have been clamoring for admission to the undergraduate and graduate programs at our colleges and universities. In the last few years, however, this reputational immunity has been quickly fading. The relentlessly rationalizing reformers who have done so much harm to U.S. schools in the name of accountability have now started to direct their attention to higher education. Watch out, they're coming for us.

One tiny sector of the huge and remarkably diverse structure of U.S. higher education has been particularly vulnerable to this contagion, namely, the research university. This group represents only 3% of the more than 5,000 degree-granting institutions in the country, and it educates only a small percentage of college students while sucking up a massive amount of public and private resources. Its highly paid faculty don't teach very much, focusing their time instead on producing research on obscure topics published in journals for the perusal of their colleagues rather than the public. No wonder state governments have been reducing their funding for public research universities and the federal government has been cutting its support for research. No wonder there are strong calls for disaggregating the multiplicity of functions that make these institutions so complex, so that the various services of the university can be delivered more cost-effectively to consumers.

In his new book, Jason Owen-Smith, a sociology professor at the University of Michigan, mounts a valiant and highly effective defense of the apparently indefensible American research university. While acknowledging the complexity of functions that run through these institutions, he focuses his attention primarily on the public benefits that derive from their research production. As he notes, although they represent less than 3% of the institutions of higher education, they produce nearly 90% of the system's research and development. In an era when education is increasingly portrayed as primarily a private good—providing degrees whose benefits only accrue to the degree holders—he deliberately zeroes in on the way that university research constitutes a public good whose benefits accrue to the community as a whole.

He argues that the core public functions of the research university are to serve as "sources of knowledge and skilled people, anchors for communities, industries, and regions, and hubs connecting all of the far-flung parts of society" (p. 1; original emphasis). In chapter 1 he spells out the overall argument, in chapter 2 he explores the usefulness of the peculiarly complex organization of the research university, in chapters 3–5 he examines in more detail each of the core functions, and at the end he suggests ways that university administrators can help position their institutions to demonstrate the value they provide the public.

The core function is to produce knowledge and skill. The most telling point the author makes about this function is that it works best if allowed to emerge organically from the complex incentive structure of the university itself instead of being directed by government or industry toward solving the most current problems. Trying to make research relevant may well make it dysfunctional. Mie Augier and James March ("The Pursuit of Relevance in Management Education," California Management Review 49 [2007]: 129–46) argue that the pursuit of relevance is afflicted by both ambiguity (we don't know what's going to be relevant until we encounter the next problem) and myopia (by focusing too tightly on the current case we miss what it is a case of). In short, as Owen-Smith notes, investing in research universities is a kind of social insurance by which we develop answers to problems that haven't yet emerged. While the private sector focuses on applied research that is likely to have immediate utility, public funds are most needed to support the basic research whose timeline for utility is unknown but whose breadth of benefit is much greater.

The second function of the research university is to serve as a regional anchor. A creative tension that energizes this institution is that it's both cosmopolitan and local. It aspires to universal knowledge, but it's deeply grounded in place. Companies can move, but universities can't. This isn't just because of physical plant, a constraint that also affects companies; it's because universities develop a complex web of relationships with the industries and governments and citizens in their neighborhood. Think Stanford and Silicon Valley. Owen-Smith makes the analogy to the anchor store in a shopping mall.

The third function of the research university is to serve as a hub, which is the cosmopolitan side of its relationship with the world. It's located in place but connected to the intellectual and economic world through a complex web of networks. Like the university itself, these webs emerge organically out of the actions of a vast array of actors pursuing their own research enterprises and connecting with colleagues and funding sources and clients and sites of application around the country and the globe. Research universities are uniquely capable of convening people from all sectors around issues of mutual interest. Such synergies benefit everyone.

The current discourse on universities, which narrowly conceives of them as mechanisms for delivering degrees to students, desperately needs the message that Owen-Smith delivers here. Students may be able to get a degree through a cheap online program, but only the complex and costly system of research universities can deliver the kinds of knowledge production, community development, and network building that provide such invaluable benefits for the public as a whole. One thing I would add to the author's analysis is that American research universities have been able to develop such strong public support in the past in large part because they combine top-flight scholarship with large programs of undergraduate education that are relatively accessible to the public and rather undemanding intellectually. Elite graduate programs and research projects rest on a firm populist base that may help the university survive the current assaults, a base grounded as much in football and fraternities as in the erudition of its faculty. This, however, is but a footnote to this powerfully framed contribution to the literature on U.S. higher education.

American Journal of Sociology, 125:2 (September, 2019), pp. 610-12

Posted in Higher Education, History of education, Meritocracy, Uncategorized

US Higher Education and Inequality: How the Solution Became the Problem

This post is a paper I wrote last summer and presented at the University of Oslo in August.  It’s a patchwork quilt of three previously published pieces around a topic I’ve been focused on a lot lately:  the role of US higher education — for better and for worse — in creating the new American aristocracy of merit.

In it I explore the way that systems of formal schooling both opened up opportunity for people to get ahead by individual merit and created the most effective structure ever devised for reproducing social inequality.  By defining merit as the accumulation of academic credentials and by constructing a radically stratified and extraordinarily opaque hierarchy of educational institutions for granting these credentials, the system grants an enormous advantage to the children of those who have already negotiated the system most effectively.

The previous generation of academic winners learned its secrets and decoded its inner logic.  They found out that it’s the merit badges that matter, not the amount of useful learning you acquire along the way.  So they coach their children in the art of gaming the system.  The result is that these children not only gain a huge advantage at winning the rewards of the meritocracy but also acquire a degree of legitimacy for these rewards that no previous system of inherited privilege ever attained.  They triumphed in a meritocratic competition, so they fully earned the power, money, and position that they derived from it.  Gotta love a system that can pull that off.

Here’s a PDF of the paper.

 

U.S. Higher Education and Inequality:

How the Solution Became the Problem

by

David F. Labaree

Lee L. Jacks Professor of Education, Emeritus

Stanford University

Email: dlabaree@stanford.edu

Web: https://dlabaree.people.stanford.edu

Twitter: @Dlabaree

Blog: https://davidlabaree.com/


Lecture delivered at University of Oslo

August 14, 2019

 

One of the glories of the emergence of modernity is that it offered the possibility and even the ideal that social position could be earned rather than inherited.  Instead of being destined to become a king or a peasant by dictate of paternity, for the first time in history individuals had the opportunity to attain their roles in society on the basis of merit.  And in this new world, public education became both the avenue for opportunity and the arbiter of merit.  But one of the anomalies of modernity is that school-based meritocracy, while increasing the fluidity of status attainment, has had little effect on the degree of inequality in modern societies.

In this paper, I explore how the structure of schooling helped bring about this outcome in the United States, with special focus on the evolution of higher education in the twentieth century.  The core issue driving the evolution of this structure is that the possibility for social mobility works at both the top and the bottom of the social hierarchy, with one group seeing the chance of rising up and the other facing the threat of falling down.  As a result, the former sees school as the way for their children to gain access to higher position while the latter sees it as the way for their children to preserve the social position they were born with.  Under pressure from both sides, the structure of schooling needs to find a way to accommodate these two contradictory aims.  In practice the system can accomplish this by allowing children from families at the bottom and the top to both increase their educational attainment beyond the level of their parents.  In theory this means that both groups can gain academic credentials that allow them to qualify for higher level occupational roles than the previous generation.  They can therefore both move up in parallel, gaining upward mobility without reducing the social distance between them.  Thus you end up with more opportunity without more equality.

Theoretically, it would be possible for the system to reduce or eliminate the degree to which elites manage to preserve their advantage through education simply by imposing a ceiling on the educational attainment allowed for their children.  That way, when the bottom group rises they get closer to the top group.  As a matter of practice, that option is not available in the U.S.  As the most liberal of liberal democracies, the U.S. sees any such limits on the choices of the upper group as a gross violation of individual liberty.  The result is a peculiar dynamic that has governed the evolution of the structure of American education over the years.  The pattern is this.  The out-group exerts political pressure in order to gain greater educational credentials for their children while the in-group responds by increasing the credentials of their own children.  The result is that both groups move up in educational qualifications at the same time.  Schooling goes up but social gaps remain the same.  It’s an elevator effect.  Every time the floor rises, so does the ceiling.

In the last 200 years of the history of schooling in the United States, the dynamic has played out like this.  At the starting point, one group has access to a level of education that is denied to another group.  The outsiders exert pressure to gain access to this level, which democratic leaders eventually feel compelled to grant.  But the insiders feel threatened by the loss of social advantage that greater access would bring, so they press to preserve that advantage.  How does the system accomplish this?  Through two simple mechanisms.  First, at the level where access is expanding, it stratifies schooling into curricular tracks or streams.  This means that the newcomers fill the lower tracks while the old-timers occupy the upper tracks.  Second, for the previously advantaged group it expands access to schooling at the next higher level.  So the system expands access to one level of schooling while simultaneously stratifying that level and opening up the next level.

This process has gone through three cycles in the history of U.S. schooling.  When the common school movement created a system of universal elementary schooling in the second quarter of the nineteenth century, it also created a selective public high school at the top of the system.  The purpose of the latter was to draw upper-class children from private schools into the public system by offering access to the high school only to graduates of the public grammar schools.  Without the elite high school as inducement, public schooling would have been left as the domain of paupers.  Then at the end of the nineteenth century, elementary grades filled up and demand increased for wider access to high school, so the system opened the doors to this institution.  But at the same time it introduced curriculum tracks and set off a surge of college enrollments by the former high school students.  And when high schools themselves filled by the middle of the twentieth century, the system opened access to higher education by creating a range of new nonselective colleges and universities to absorb the influx.  This preserved the exclusivity of the older institutions, whose graduates in large numbers then started pursuing postgraduate degrees.

Result: A Very Stratified System of Higher Education

By the middle of the twentieth century, higher education was the zone of advantage for any American trying to get ahead or stay ahead.  And as a result of the process by which the tertiary system managed to incorporate both functions, it became extraordinarily stratified.  This was a system that emerged without a plan, based not on government fiat but on the competing interests of educational consumers seeking to use it to their own advantage.  A market-oriented system of higher education such as this one has a special dynamic that leads to a high degree of stratification.  Each educational enterprise competes with the others to establish a position in the market that will allow it to draw students, generate a comfortable surplus, and maintain this situation over time.  The problem is that, given the lack of effective state limits on the establishment and expansion of colleges, these schools find themselves in a buyer's market.  Individual buyers may want one kind of program over another, which gives colleges an incentive to differentiate the market horizontally to accommodate these various demands.  At the same time, however, buyers want a college diploma that will help them get ahead socially.  This means that consumers don't just want a college education that is different; they want one that is better – better at providing access to good jobs.  In response to this consumer demand, the U.S. has developed a multi-tiered hierarchy of higher education, ranging from open-access institutions at the bottom to highly exclusive institutions at the top, with each of the upper tier institutions offering graduates a degree that provides invidious distinction over graduates from schools in the lower tiers.

This stratified structure of higher education arose in the nineteenth century in a dynamic market system, where the institutional actors had to operate according to four basic rules.  Rule One:  Age trumps youth.  It’s no accident that the oldest American colleges are overrepresented in the top tier.  Of the top 20 U.S. universities,[1] 19 were founded before 1900 and 7 before 1776, even though more than half of all American universities were founded in the twentieth century.  Before competitors had entered the field, the oldest schools had already established a pattern of training the country’s leaders, locked up access to the wealthiest families, accumulated substantial endowments, and hired the most capable faculty.

Rule Two:  The strongest rewards go to those at the top of the system.  This means that every college below the top has a strong incentive to move up the ladder, and that top colleges have a strong incentive to preserve their advantage.  Even though it is very difficult for lower-level schools to move up, this doesn’t keep them from trying.  Despite long odds, the possible payoff is big enough that everyone stays focused on the tier above.  A few major success stories allow institutions to keep their hopes alive.  University presidents lie awake at night dreaming of replicating the route to the top followed by social climbers like Berkeley, Hopkins, Chicago, and Stanford.

Rule Three:  It pays to imitate your betters.  As the research university emerged as the model for the top tier in American higher education in the twentieth century, it became the ideal toward which all other schools sought to move.  To get ahead you needed to offer a full array of undergraduate, graduate, and professional programs, selective admissions and professors who publish, a football stadium and Gothic architecture.  (David Riesman called this structure of imitation “the academic procession.”)[2]  Of course, given the advantages enjoyed by the top tier, imitation has rarely produced the desired results.  But it’s the only game in town.  Even if you don’t move up in the rankings, you at least help reassure your school’s various constituencies that they are associated with something that looks like and feels like a real university.

Rule Four:  It’s best to expand the system by creating new colleges rather than increasing enrollments at existing colleges.  Periodically new waves of educational consumers push for access to higher education.  Initially, existing schools expanded to meet the demand, which meant that as late as 1900 Harvard was the largest U.S. university, public or private.[3]  But beyond this point in the growth process, it was not in the interest of existing institutions to provide wider access.  Concerned about protecting their institutional advantage, they had no desire to sully their hard-won distinction by admitting the unwashed.  Better to have this kind of thing done by additional colleges created for that purpose.  The new colleges emerged, then, as a clearly designated lower tier in the system, defined as such by both their newness and their accessibility.

Think about how these rules have shaped the historical process that produced the present stratified structure of higher education.  This structure has four tiers.  In line with Rule One, these tiers from top to bottom emerged in roughly chronological order.  The Ivy League colleges emerged in the colonial period, followed by a series of flagship state colleges in the early and mid-nineteenth century.  These institutions, along with a few social climbers that emerged later, grew to form the core of the elite research universities that make up the top tier of the system.  Schools in this tier are the most influential, prestigious, well-funded, exclusive, research-productive, and graduate-oriented – in the U.S. and in the world.

The second tier emerged from the land grant colleges that began appearing in the mid to late nineteenth century.  They were created to fill a need not met by existing institutions, expanding access for a broader array of students and offering programs with practical application in areas like agriculture and engineering.  They were often distinguished from the flagship research university by the word “state” in their title (as with University of Michigan vs. Michigan State University) or the label “A & M” (for Agricultural and Mechanical, as with University of Texas vs. Texas A & M).  But, in line with Rules Two and Three, they responded to consumer demand by quickly evolving into full service colleges and universities; and in the twentieth century they adopted the form and function of the research university, albeit in a more modest manner.

The third tier arose from the normal schools, established in the late nineteenth century to prepare teachers.  Like the land grant schools that preceded them, these narrowly vocational institutions evolved quickly under pressure from consumers, who wanted them to model themselves after the schools in the top tiers by offering a more valuable set of credentials that would provide access to a wider array of social opportunities.  Under these market pressures, normal schools evolved into teachers colleges, general-purpose state colleges, and finally, by the 1960s, comprehensive regional state universities.

The fourth tier emerged in part from the junior colleges that first arose in the early twentieth century and eventually evolved into an extensive system of community colleges.  Like the land grant college and normal school, these institutions offered access to a new set of students at a lower level of the system.  Unlike their predecessors, for the most part they have not been allowed by state governments to imitate the university model, remaining primarily as two-year schools.  But through the transfer option, many students use them as a more accessible route into institutions in the upper tiers.

What This Means for Educational Consumers

This highly stratified system is very difficult for consumers to navigate.  Instead of allocating access to the top level of the system using the mechanism employed by most of the rest of the world – a state-administered university matriculation exam – the highly decentralized American system allocates access by means of informal mechanisms that in comparison seem anarchic.  In the absence of one access route, there are many; and in the absence of clear rules for prospective students, there are multiple and conflicting rules of thumb.  Also, the rules of thumb vary radically according to which tier of the system you are seeking to enter.

First, let’s look at the admissions process for families (primarily the upper-middle class) who are trying to get their children entrée to the elite category of highly selective liberal arts colleges and research universities.  They have to take into account the wide array of factors that enter into the complex and opaque process that American colleges use to select students at this level:  quality of high school; quality of a student’s program of study; high school grades; test scores in the SAT or ACT college aptitude tests; interests and passions expressed in an application essay; parents’ alumni status; whether the student needs financial aid; athletic skills; service activities; diversity factors such as race, ethnicity, class, national origin, sex, and sexual orientation; and extracurricular contributions a student might make to the college community.  There is no centralized review process; instead every college carries out its own admissions review and employs its own criteria.

This open and indeterminate process provides a huge advantage for upper-middle-class families.  If you are a parent who is a college graduate and who works at a professional or managerial job, where the payoff of going to a good college is readily apparent, you have the cultural and social capital to negotiate this system effectively and read its coded messages.  For you, going to college is not the issue; it's a matter of which college your children can get into that would provide them with the greatest competitive advantage in the workplace.  You want for them the college that might turn them down rather than the one that would welcome them with open arms.  So you enroll your children in test prep; hire a college advisor; map out a strategic plan for high school course-taking and extracurriculars; craft a service resume that makes them look appropriately public-spirited; take them on the obligatory college tour; and come up with just the right mix of applications to the stretch schools, the safety schools, and those in between.  And all this pays off handsomely: 77 percent of children from families in the top quintile by income gain a bachelor's degree.[4]

If you are a parent farther down the class scale, who didn’t attend college and whose own work environment is not well stocked with college graduates, you have a lot more trouble negotiating the system.  The odds are not good:  for students from the fourth income quintile, only 17 percent earn a BA, and for the lowest quintile the rate is only 9 percent.[5]  Under these circumstances, having your child go to a college, any college, is a big deal; and one college is hard to distinguish from another.  But you are faced by a system that offers an extraordinary diversity of choices for prospective students:  public, not-for-profit, or for-profit; secular or religious; two-year or four-year; college or university; teaching or research oriented; massive or tiny student body; vocational or liberal; division 1, 2, or 3 intercollegiate athletics, or no sports at all; party school or nerd haven; high rank or low rank; full-time or part-time enrollment; urban or pastoral; gritty or serene; residential, commuter, or “suitcase college” (where students go home on weekends).  In this complex setting both consumers and providers somehow have to make choices that are in their own best interest.  Families from the upper-middle class are experts at negotiating this system, trimming the complexity down to a few essentials:  a four-year institution that is highly selective and preferably private (not-for-profit).  Everything else is optional.

If you’re a working-class family, however – lacking deep knowledge of the system and without access to the wide array of support systems that money can buy – you are more likely to take the system at face value.  Having your children go to a community college is the most obvious and attractive option.  It’s close to home, inexpensive, and easy to get into.  It’s where your children’s friends will be going, it allows them to work and go to school part time, and it doesn’t seem as forbiddingly alien as the state university (much less the Ivies).  You don’t need anything to gain admission except a high school diploma or GED.  No tests, counselors, tours, or resume-burnishing is required.  Of you could try the next step up, the local comprehensive state university.  To apply for admission, all you need is a high school transcript.  You might get turned down, but the odds are in your favor.  The cost is higher but can usually be paid with federal grants and loans.  An alternative is a for-profit institution, which is extremely accessible, flexible, and often online.  It’s not cheap, but federal grants and loans can pay the cost.  What you don’t have any way of knowing is that the most accessible colleges at the bottom of the system are also the ones where students are least likely to graduate.  (Only 29 percent of students entering two-year colleges earn an associate degree in three years;[6] only 39 percent earn a degree from a two-year or four-year institution in six years.[7])  You also may not be aware that the economic payoff for these colleges is lower; or that the colleges higher up the system may not only provide stronger support toward graduation and but might even be less expensive because of greater scholarship funding.

In this way, the complexity and opacity of this market-based and informally-structured system helps reinforce the social advantages of those at the top of the social ladder and limit the opportunities for those at the bottom.  It's a system that rewards the insider knowledge of old hands and punishes newcomers.  To work it effectively, you need to reject the fiction that a college is a college is a college and learn how to seek advantage in the system's upper tiers.

On the other hand, the system’s fluidity is real.  The absence of state-sanctioned and formally structured tracks means that the barriers between the system’s tiers are permeable.  Your children’s future is not predetermined by their high school curriculum or their score on the matriculation exam.  They can apply to any college they want and see what happens.  Of course, if their grades and scores are not great, their chances of admission to upper level institutions are poor.  But their chances of getting into a teaching-oriented state university are pretty good, and their chances of getting into a community college are virtually assured.  And if they take the latter option, as is most often the case for children from socially disadvantaged families, there is a real (if modest) possibility that they might be able to prove their academic chops, earn an AA degree, and transfer to a university, even a research university.  The probabilities of moving up in the system are low:  most community college students never earn an AA degree; and transfers have a harder time succeeding in the university than students who enroll there as freshmen.  But the possibilities are nonetheless genuine.

American higher education offers something for everyone.  It helps those at the bottom to get ahead and those at the top to stay ahead.  It provides socially useful educational services for every ability level and every consumer preference.  This gives it an astonishingly broad base of political support across the entire population, since everyone needs it and everyone can potentially benefit from it.  And this kind of legitimacy is not possible if the opportunity the system offers to the lower classes is a simple fraud.  First generation college students, even if they struggled in high school, can attend community college, transfer to San Jose State, and end up working at Apple.  It’s not very likely, but it assuredly is possible.  True, the more advantages you bring to the system – cultural capital, connections, family wealth – the higher the probability that you will succeed in it.  But even if you are lacking in these attributes, there is still an outside chance that you just might make it through the system and emerge with a good middle class job.

This helps explain how the system gets away with preserving social advantage for those at the top without stirring a revolt from those at the bottom.  Students from working-class and lower-class families are much less likely to be admitted to the upper reaches of the higher education system that provides the greatest social rewards; but the opportunity to attend some form of college is high, and attending a college at the lower levels of the system may provide access to a good job.  The combination of high access to the lower levels of the system and high attrition on the way to attaining a bachelor’s degree creates a situation where the system gets credit for openness and the student bears the burden for failing to capitalize on it.  The system gave you a chance but you just couldn’t make the grade.  The ready-made explanations for personal failure accumulate quickly as students try to move through the system.  You didn’t study hard enough, you didn’t get good grades in high school, you didn’t get good test scores, so you couldn’t get into a selective college.  Instead you went to a community college, where you got distracted from your studies by work, family, and friends, and you didn’t have the necessary academic ability; so you failed to complete your AA degree.  Or maybe you did complete the degree and transferred to a university, but you had trouble competing with students who were more able and better prepared than you.  Along with the majority of students who don’t make it all the way to a BA, you bear the burden for your failure – a conclusion that is reinforced by the occasional but highly visible successes of a few of your peers.  The system is well defended against charges of unfairness.

So we can understand why people at the bottom don’t cry foul.  It gave you a chance.  And there is one more reason for keeping up your hope that education will pay off for you.  A degree from an institution in a lower tier may pay lower benefits, but for some purposes one degree really is as good as another.  Often the question in getting a job or a promotion is not whether you have a classy credential but whether you have whatever credential is listed as the minimum requirement in the job description.  Bureaucracies operate on a level where form often matters more than substance.  As long as you can check off the box confirming that you have a bachelor’s degree, the BA from University of Phoenix and the BA from University of Pennsylvania can serve the same function, by allowing you to be considered for the job.  And if, say, you’re a public school teacher, an MA from Capella University, under the district contract, is as effective as one from Stanford University, because either will qualify you for a $5,000 bump in pay.

At the same time, however, we can see why the system generates so much anxiety among students who are trying to use the system to move up the social ladder for the good life.  It's really the only game in town for getting a good job in twenty-first century America.  Without higher education, you are closed off from the white collar jobs that provide the most security and pay.  Yes, you could try to start a business, or you could try to work your way up the ladder in an organization without a college degree; but the first approach is highly risky and the second is highly unlikely, since most jobs come with minimum education requirements regardless of experience.  So you have to put all of your hopes in the higher-ed basket while knowing – because of your own difficult experiences in high school and because of what you see happening with family and friends – that your chances for success are not good.  Either you choose to pursue higher ed against the odds or you simply give up.  It's a situation fraught with anxiety.

What is less obvious, however, is why the American system of higher education – which is so clearly skewed in favor of people at the top of the social order – fosters so much anxiety in them.  Upper-middle-class families in the U.S. are obsessed with education and especially with getting their children into the right college.  Why?  They live in the communities that have the best public schools; their children have cultural and social skills that schools value and reward; and they can afford the direct cost and opportunity cost of sending their high school grads to a residential college, even one of the pricey privates.  So why are there only a few colleges that seem to matter to this group?  Why does it matter so much to have your child not only get into the University of California but into Berkeley or UCLA?  What’s wrong with having them attend Santa Cruz or even one of the Cal State campuses?  And why the overwhelming passion for pursuing admission to Harvard or Yale?

The urgency behind all such frantic concern about admission to the most elite level of the system is this:  As parents of privilege, you can pass on your wealth to your children, but you can’t give them a profession.  Education is built into the core of modern societies, where occupations are no longer inherited but more or less earned.  If you’re a successful doctor or lawyer, you can provide a lot of advantages for your children; but in order for them to gain a position such as yours, they must succeed in school, get into a good college, and then into a good graduate school.  Unless they own the company, even business executives can’t pass on position to their children, and even then it’s increasingly rare that they would actually do so.  (Like most shareholders, they would profit more by having the company led by a competent executive than by the boss’s son.)  Under these circumstances of modern life, providing social advantage to your children means providing them with educational advantage.  Parents who have been through the process of climbing the educational hierarchy in order to gain prominent position in the occupational hierarchy know full well what it takes to make the grade.

They also know something else:  When you're at the top of the social system, there is little opportunity to rise higher but plenty of opportunity to fall farther down.  Consider data on intergenerational mobility in the U.S.  For children of parents in the top quintile by household income, 60 percent end up at least one quintile lower than their parents and 37 percent fall at least two quintiles.[8]  That's a substantial decline in social position.  So there's good reason for these parents to fear downward mobility for their children and to use all their powers to marshal educational resources to head it off.  The problem is this:  Even though your own children have a wealth of advantages in negotiating the educational system, there are still enough bright and ambitious students from the lower classes who manage to make it through the educational gauntlet to pose them a serious threat.  So you need to make sure that your children attend the best schools, get into the high reading group and the program for the gifted, take plenty of advanced placement classes, and then get into a highly selective college and graduate school.  Leave nothing to chance, since some of your heirs are likely to be less talented and ambitious than those children who prove themselves against all odds by climbing the educational ladder.  When the higher education system opened up access after World War II, it made competition for the top tier of the system sharply higher, and the degree of competitiveness continued to increase as the proportion of students going to college grew to a sizeable majority.  As Jerome Karabel has noted in his study of elite college admissions, the American system of higher education does not equalize opportunity but it does equalize anxiety.[9]  It makes families at all levels of American society nervous about their ability to negotiate the system effectively, because it provides the only highway to the good life.

The American Meritocracy

The American system of education is formally meritocratic, but one of its social effects is to naturalize privilege.  This starts when a student’s academic merit is so central and so pervasive in schooling that it embeds itself within the individual person.  You start saying things like:  I’m smart.  I’m dumb.  I’m a good student.  I’m a bad student.  I’m good at reading but bad at math.  I’m lousy at sports.  The construction of merit is coextensive with the entire experience of growing up, and therefore it comes to constitute the emergent you.  It no longer seems to be something imposed by a teacher or a school but instead comes to be an essential part of your identity.  It’s now less what you do and increasingly who you are.  In this way, the systemic construction of merit begins to disappear and what’s left is a permanent trait of the individual.  You are your grade and your grade is your destiny.

The problem, however – as an enormous amount of research shows – is that the formal measures of merit that schools use are subject to powerful influence from a student’s social origins.  No matter how you measure merit, it affects your score.  It shapes your educational attainment.  It also shows up in measures that rank educational institutions by quality and selectivity.  Across the board, your parents’ social class has an enormous impact on the level of merit you are likely to acquire in school.  Students with higher social position end up accumulating a disproportionately large number of academic merit badges.

The correlations between socioeconomic status and school measures of merit are strong and consistent, and the causation is easy to determine.  Being born well has an enormously positive impact on the educational merit you acquire across your life.  Let us count the ways.  Economic capital is one obvious factor.  Wealthy communities can support better schools.  Social capital is another factor.  Families from the upper middle classes have a much broader network of relationships with the larger society than those from the working class, which provides a big advantage for their schooling prospects.  For them, the educational system is not foreign territory but feels like home.

Cultural capital is a third factor, and the most important of all.  School is a place that teaches students the cognitive skills, cultural norms, and forms of knowledge that are required for competent performance in positions of power.  Schools demonstrate a strong disposition toward these capacities over others:  mental over manual skills, theoretical over practical knowledge, decontextualized over contextualized perspectives, mind over body, Gesellschaft over Gemeinschaft.  Parents in the upper middle class are already highly skilled in these cultural capacities, which they deploy in their professional and managerial work on a daily basis.  Their children have grown up in the world of cultural capital.  It’s a language they learn to speak at home.  For working-class children, school is an introduction to a foreign culture and a new language, which unaccountably other students seem to already know.  They’re playing catchup from day one.  Also, it turns out that schools are better at rewarding cultural capital than they are at teaching it.  So kids from the upper middle class can glide through school with little effort while others continually struggle to keep up.  The longer they remain in school, the larger the achievement gap between the two groups.

In the wonderful world of academic merit, therefore, the fix is in.  Upper income students have a built-in advantage in acquiring the grades, credits, and degrees that constitute the primary prizes of the school meritocracy.  But – and this is the true magic of the educational process – the merits that these students accumulate at school come in a purified academic form that is independent of their social origins.  They may have entered schooling as people of privilege, but they leave it as people of merit.  They’re good students.  They’re smart.  They’re well educated.  As a result, they’re totally deserving of special access to the best jobs.  They arrived with inherited privilege but they leave with earned privilege.  So now they fully deserve what they get with their new educational credentials.

In this way, the merit structure of schooling performs a kind of alchemy.  It turns class position into academic merit.  It turns ascribed status into achieved status. You may have gotten into Harvard by growing up in a rich neighborhood with great schools and by being a legacy.  But when you graduate, you bear the label of a person of merit, whose future accomplishments arise alone from your superior abilities.  You’ve been given a second nature.

Consequences of Naturalized Privilege: The New Aristocracy

The process by which schools naturalize academic merit brings major consequences to the larger society.  The most important of these is that it legitimizes social inequality.  People who were born on third base get credit for hitting a triple, and people who have to start in the batter’s box face the real possibility of striking out.  According to the educational system, divergent social outcomes are the result of differences in individual merit, so, one way or the other, people get what they deserve.  The fact that a fraction of students from the lower classes manage against the odds to prove themselves in school and move up the social scale only adds further credibility to the existence of a real meritocracy.

In the United States in the last 40 years, we have come to see the broader implications of this system of status attainment through institutional merit.  It has created a new kind of aristocracy.  This is not Jefferson’s natural aristocracy, grounded in public accomplishments, but a caste of meritocratic privilege, grounded in the formalized and naturalized merit signaled by educational credentials.  As with aristocracies of old, the new meritocracy is a system of rule by your betters – no longer defined as those who are better born or more accomplished but now as those who are better educated.  Michael Young saw this coming back in 1958, as he predicted in his fable, The Rise of the Meritocracy.[10]  But now we can see that it has truly taken hold.

The core expertise of this new aristocracy is skill in working the system.  You have to know how to play the game of educational merit-getting and pass this on to your children.  The secret is in knowing that the achievements that get awarded merit points through the process of schooling are not substantive but formal.  Schooling is not about learning the subject matter; it's about getting good grades, accumulating course credits, and collecting the diploma on the way out the door.  Degrees pay off, not what you learned in school or even the number of years of schooling you have acquired.  What you need to know is what's going to be on the test and nothing else.  So you need to study strategically and spend a lot of effort working the refs.  Give teacher what she wants and be sure to get on her good side.  Give the college admissions officers the things they are looking for in your application.  Pump up your test scores with coaching and by learning how to game the questions.

Members of the new aristocracy are particularly aggressive about carrying out a strategy known as opportunity hoarding.  There is no academic advantage too trivial to pursue, and the number of advantages you accumulate can never be enough.  In order to get your children into the right selective college you need to send them to the right school, get them into the gifted program in elementary school and the right track in high school, hire a tutor, carry out test prep, do the college tour, pursue prizes, develop a well-rounded resume for the student (sport, student leadership, musical instrument, service), pull strings as a legacy and a donor, and on and on and on.

As we saw earlier, such behavior by upper-middle-class parents is not as crazy as it seems.  The problem with being at the top is that there's nowhere to go but down.  The system is just meritocratic enough to keep the most privileged families on edge, worried about having their child bested by a smart poor kid.  Again, as Karabel put it, the only thing U.S. education equalizes is anxiety.

As with earlier aristocracies, the new aristocrats of merit cluster together in the same communities, where the schools are like no other.  Their children attend the same elite colleges, where they meet their future mates (a process sociologists call assortative mating) and then transmit their combined cultural, social, and economic capital in concentrated form to their children.  And one consequence of this increased concentration of educational resources is that the achievement gap between low and high income students has been rising; Sean Reardon's study shows the gap growing 40 percent in the last quarter of the twentieth century.  This is how educational and social inequality grows larger over time.

By assuming the form of meritocracy, schools have come to play a central role in defining the character of modern society.  In the process they have served to increase social opportunity while also increasing social inequality.  At the same time, they have established a solid educational basis for the legitimacy of this new inequality, and they have fostered the development of a new aristocracy of educational merit whose economic power, social privilege, and cultural cohesion would be the envy of the high nobility in early modern England or France.  Now, as then, the aristocracy assumes its outsized social role as a matter of natural right.

 

References

Community College Research Center. (2015). Community College FAQs. Teachers College, Columbia University. http://ccrc.tc.columbia.edu/Community-College-FAQs.html (accessed 8-3-15).

Geiger, Roger L. (2004). To Advance Knowledge: The Growth of American Research Universities, 1900-1940. New Brunswick: Transaction.

Karabel, Jerome. (2005). The Chosen: The Hidden History of Admission and Exclusion at Harvard, Yale, and Princeton. New York: Mariner Books.

National Center for Education Statistics. (2014). Digest of Education Statistics, 2013. Washington, DC: US Government Printing Office.

Pell Institute and PennAHEAD. (2015). Indicators of Higher Education Equity in the United States (2015 revised edition). Philadelphia: The Pell Institute for the Study of Opportunity in Higher Education and the University of Pennsylvania Alliance for Higher Education and Democracy (PennAHEAD). http://www.pellinstitute.org/publications-Indicators_of_Higher_Education_Equity_in_the_United_States_45_Year_Report.shtml (accessed 8-10-15).

Pew Charitable Trusts Economic Mobility Project. (2012). Pursuing the American Dream: Economic Mobility Across Generations. Washington, DC: Pew Charitable Trusts. http://www.pewtrusts.org/en/research-and-analysis/reports/0001/01/01/pursuing-the-american-dream (accessed 8-10-15).

Riesman, David.  (1958).  The Academic Procession.  In Constraint and variety in American education.  Garden City, NY:  Doubleday.

U.S. News and World Report. (2015). National Universities Rankings.  http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/national-universities (accessed 4-28-15).

Young, Michael D. (1958). The Rise of the Meritocracy, 1870-2033.  New York:  Random House.

 

[1] U.S. News (2015).

[2] Riesman (1958).

[3] Geiger (2004), 270.

[4] Pell (2015), p. 31.

[5] Pell (2015), p. 31.

[6] NCES (2014), table 326.20.

[7] CCRC (2015).

[8] Pew (2012), figure 3.

[9] Karabel (2005), p. 547.

[10] Young (1958).

Posted in Course Syllabus, Higher Education, History of education, History of Higher Education Class

Course on the History of Higher Education in the U.S.

This post contains all of the material for the class on the History of Higher Education in the US that I taught at the Stanford Graduate School of Education for the last 15 years.  In retirement I wanted to make the course available on the internet to anyone who is interested.  If you are a college teacher, feel free to use any of it in whole or in part.  If you are a student or a group of students, you can work your way through the class on your own at your own pace.  Any benefits that accrue are purely intrinsic, since no one will get college credits.  But that also means you're free to pursue the parts of the class that you want, and you don't have any requirements or papers.  How great is that?

I’m posting the full syllabus below.  But it would be more useful to get it as a Word document through this link.  Feel free to share it with anyone you like.

All of the course materials except three required books are embedded in the syllabus through hyperlinks to a Google drive.  For each week, the syllabus includes a link to tips for approaching the readings, links to the PDFs of the readings, and a link to the slides for that week’s class.  Slides also include links to additional sources.  So the syllabus is all that is needed to gain access to the full class.

I hope you find this useful.

 

History of Higher Education in the U.S.

A 10-Week Class

David Labaree

Web: http://www.stanford.edu/~dlabaree/

Twitter: @Dlabaree

Blog: https://davidlabaree.com/

Course Description

This course provides an introductory overview of the history of higher education in the United States.  We will start with Perkin’s account of the world history of the university, and two chapters from my book about the role of the market in shaping the history of American higher education and the pressure from consumers to have college provide both social access and social advantage.  In week two, we examine an overview of the history of American college and university in the 18th and 19th centuries from John Thelin, and my chapter on the emerging nature of the college system.  In week three, we focus on the rise of the university in the latter part of the 19th century using two more chapters from Thelin, and my own chapter on the subject.  In week four, we read a series of papers around the issue of access to higher education, showing how colleges for many years sought to repel or redirect the college aspirations of women, blacks, and Jews.  In week five, we examine the history of professional education, with special attention to schools of business, education, and medicine.  In week six, we read several chapters from Donald Levine’s book about the rise of mass higher education after World War I, my piece about the rise of community colleges, and more from Thelin.  In week seven, we look at the surge of higher ed enrollments after World War II, drawing on pieces by Rebecca Lowen, Roger Geiger, Thelin, and Labaree.  In week eight, we look at the broadly accessible full-service regional state university, drawing on Alden Dunham, Thelin, Lohmann, and my chapter on the relationship between the public and private sector.  In week nine, we read a selection of chapters from Jerome Karabel’s book about the struggle by elite universities to stay on top of a dynamic and expanding system of higher education.  And in week 10, we step back and try to get a fix on the evolved nature of the American system of higher education, drawing on work by Mitchell Stevens and the concluding chapters of my book.

Like every course, this one is not a neutral survey of all possible perspectives on the domain identified by the course title; like every course, this one has a point of view.  This point of view comes through in my book manuscript that we’ll be reading in the course.  Let me give you an idea of the kind of approach I will be taking.

The American system of higher education is an anomaly.  In the twentieth century it surged past its European forebears to become the dominant system in the world – with more money, talent, scholarly esteem, and institutional influence than any of the systems that served as its models.  By all rights, this never should have happened.  Its origins were remarkably humble: a loose assortment of parochial nineteenth-century liberal-arts colleges, which emerged in the pursuit of sectarian expansion and civic boosterism more than scholarly distinction.  These colleges had no academic credibility, no reliable source of students, and no steady funding.  Yet these weaknesses of the American system in the nineteenth century turned out to be strengths in the twentieth.  In the absence of strong funding and central control, individual colleges had to learn how to survive and thrive in a highly competitive market, in which they needed to rely on student tuition and alumni donations and had to develop a mode of governance that would position them to pursue any opportunity and cultivate any source of patronage.  As a result, American colleges developed into an emergent system of higher education that was lean, adaptable, autonomous, consumer-sensitive, self-supporting, and radically decentralized.  This put the system in a strong position to expand and prosper when, before the turn of the twentieth century, it finally got what it was most grievously lacking:  a surge of academic credibility (when it assumed the mantle of scientific research) and a surge of student enrollments (when it became the pipeline to the middle class).  This course is an effort to understand how a system that started out so badly turned out so well – and how its apparently unworkable structure is precisely what makes the system work.

That’s an overview of the kind of argument I will be making about the history of higher education.  But you should feel free to construct your own, rejecting mine in part or in whole.  The point of this class, like any class, is to encourage you to try on a variety of perspectives as part of the process of developing your own working conceptual framework for understanding the world.  I hope you will enjoy the ride.

Readings

Books:  We will be reading the following books:

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press.

Labaree, David F. (2017). A perfect mess: The unlikely ascendancy of American higher education.  Chicago: University of Chicago Press.

Karabel, Jerome. (2005). The chosen: The hidden history of admission and exclusion at Harvard, Yale, and Princeton. New York: Houghton Mifflin Harcourt.

             Supplementary Resources:  There is a terrific online archive of primary and secondary readings on higher education, which is a supplement to The History of Higher Education, 3rd ed., published by the Association for the Study of Higher Education (ASHE): http://www.pearsoncustom.com/mi/msu_ashe/.

Course Outline

Below are the topics we will cover, week by week, with the readings for each week.

Week 1

Introduction to course

Tips for week 1 readings

Labaree, David F. (2015). A system without a plan: Elements of the American model of higher education.  Chapter 1 in A perfect mess: The unlikely ascendancy of American higher education.

Labaree, David F. (2015). Balancing access and advantage.  Chapter 5 in A perfect mess: The unlikely ascendancy of American higher education.

Perkin, Harold. (1997). History of universities. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 3-32). Boston: Pearson Custom Publishing.

Class slides for week 1

Week 2

Overview of the Early History of Higher Education in the U.S.

Tips for week 2 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (introductory essay and chapters 1-3).

Labaree, David F. (2015). Unpromising roots:  The ragtag college system in the nineteenth century.  Chapter 2 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 2

Week 3

Roots of the Growth of the University in the Late 19th and Early 20th Century

Thursday 4/19

Tips for week 3 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapters 4-5).

Labaree, David F. (2015). Adding the pinnacle and keeping the base: The graduate school crowns the system, 1880-1910.  Chapter 3 in A perfect mess: The unlikely ascendancy of American higher education.

Labaree, David F. (1995).  Foreword (to book by Brown, David K. (1995). Degrees of control: A sociology of educational expansion and occupational credentialism. New York: Teachers College Press).

Class slides for week 3

 Week 4

Educating and Not Educating the Other:  Blacks, Women, and Jews

Tips for week 4 readings

Wechsler, Harold S. (1997).  An academic Gresham’s law: Group repulsion as a theme in American higher education. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 416-431). Boston: Pearson Custom Publishing.

Anderson, James D. (1997).  Training the apostles of liberal culture: Black higher education, 1900-1935. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 432-458). Boston: Pearson Custom Publishing.

Gordon, Lynn D. (1997).  From seminary to university: An overview of women’s higher education, 1870-1920. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 473-498). Boston: Pearson Custom Publishing.

Class slides for week 4

Week 5

History of Professional Education

Tips for week 5 readings

Brubacher, John S. and Rudy, Willis. (1997). Professional education. In Lester F. Goodchild and Harold S. Wechsler (Eds.), ASHE reader on the history of higher education, 2nd ed. (pp. 379-393). Boston: Pearson Custom Publishing.

Bledstein, Burton J. (1976). The culture of professionalism. In The culture of professionalism: The middle class and the development of higher education in America (pp. 80-128). New York:  W. W. Norton.

Labaree, David F. (2015). Mutual subversion: The liberal and the professional. Chapter 4 in A perfect mess: The unlikely ascendancy of American higher education.

Starr, Paul. (1984). Transformation of the medical school. In Social transformation of American medicine (pp. 112-127). New York: Basic.

Class slides for week 5

Week 6

Emergence of Mass Higher Education

Tips for week 6 readings

Levine, Donald O. (1986).  The American college and the culture of aspiration, 1915-1940. Ithaca: Cornell University Press.  Read introduction and chapters 3, 4, and 8.

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapter 6).

Labaree, David F. (1997). The rise of the community college: Markets and the limits of educational opportunity.  In How to succeed in school without really learning:  The credentials race in American education (chapter 8, pp. 190-222). New Haven: Yale University Press.

Class slides for week 6

Week 7

The Huge Surge of Higher Education Expansion after World War II

Tips for week 7 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapter 7).

Geiger, Roger. (2004). University advancement from the postwar era to the 1960s. In Research and relevant knowledge: American research universities since World War II (chapter 5, pp. 117-156).  Read the first half of the chapter, which focuses on the rise of Stanford.

Lowen, Rebecca S. (1997). Creating the cold war university: The transformation of Stanford. Berkeley: University of California Press.  Introduction and Chapters 5 and 6.

Labaree, David F. (2015). Learning to love the bomb: America’s brief cold-war fling with the university as a public good. Chapter 7 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 7

Week 8

Populist, Practical, and Elite:  The Diversity and Evolved Institutional Character of the Full-Service American University

Tips for week 8 readings

Thelin, John R. (2011). A history of American higher education, 2nd ed. Baltimore: Johns Hopkins University Press (chapter 8).

Dunham, Edgar Alden. (1969). Colleges of the forgotten Americans: A profile of state colleges and universities. New York: McGraw Hill (introduction, chapters 1-2).

Lohmann, Suzanne. (2006). The public research university as a complex adaptive system. Unpublished paper, University of California, Los Angeles.

Labaree, David F. (2015). Private advantage, public impact. Chapter 6 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 8

Week 9

The Struggle by Elite Universities to Stay on Top

Tips for week 9 readings

Karabel, Jerome. (2005). The chosen: The hidden history of admission and exclusion at Harvard, Yale, and Princeton. New York: Houghton Mifflin Harcourt.  Read introduction and chapters 2, 4, 9, 12, 13, 17, and 18.

Class slides for week 9

Week 10

Conclusions about the American System of Higher Education

Tips for week 10 readings

Stevens, Mitchell L., Armstrong, Elizabeth A., & Arum, Richard. (2008). Sieve, incubator, temple, hub: Empirical and theoretical advances in the sociology of higher education. Annual Review of Sociology, 34, 127-151.

Labaree, David F. (2015). Upstairs, downstairs: Relations between the tiers of the system. Chapter 8 in A perfect mess: The unlikely ascendancy of American higher education.

Labaree, David F. (2015). A perfect mess. Chapter 9 in A perfect mess: The unlikely ascendancy of American higher education.

Class slides for week 10

 

Guidelines for Critical Reading

Whenever you set out to do a critical reading of a particular text (a book, article, speech, proposal, conference paper), you need to use the following questions as a framework to guide you as you read:

  1. What’s the point? This is the analysis/interpretation issue: what is the author’s angle?
  2. What’s new? This is the value-added issue: What does the author contribute that we don’t already know?
  3. Who says? This is the validity issue: On what (data, literature) are the claims based?
  4. Who cares? This is the significance issue, the most important issue of all, the one that subsumes all the others: Is this work worth doing?  Is the text worth reading?  Does it contribute something important?

Guidelines for Analytical Writing

             In writing papers for this (or any) course, keep in mind the following points.  They apply in particular to the longer papers, but most of the same concerns apply to critical reaction papers as well.

  1. Pick an important issue: Make sure that your analysis meets the “so what” test. Why should anyone care about this topic, anyway?  Pick an issue or issues that matter and that you really care about.

 

  2. Keep focused: Don’t lose track of the point you are trying to make and make sure the reader knows where you are heading and why.

 

  3. Aim for clarity: Don’t assume that the reader knows what you’re talking about; it’s your job to make your points clearly.  In part this means keeping focused and avoiding distracting clutter.  But in part it means that you need to make more than elliptical references to concepts and sources or to professional experience.  When referring to readings (from the course or elsewhere), explain who said what and why this point is pertinent to the issue at hand.  When drawing on your own experiences or observations, set the context so the reader can understand what you mean.  Proceed as though you were writing for an educated person who is neither a member of this class nor a professional colleague, someone who has not read the material you are referring to.

 

  4. Provide analysis: A good paper is more than a catalogue of facts, concepts, experiences, or references; it is more than a description of the content of a set of readings; it is more than an expression of your educational values or an announcement of your prescription for what ails education.  A good paper is a logical and coherent analysis of the issues raised within your chosen area of focus.  This means that your paper should aim to explain rather than describe.  If you give examples, be sure to tell the reader what they mean in the context of your analysis.  Make sure the reader understands the connection between the various points in your paper.

 

  5. Provide depth, insight, and connections: The best papers are ones that go beyond making obvious points, superficial comparisons, and simplistic assertions.  They dig below the surface of the issue at hand, demonstrating a deeper level of understanding and an ability to make interesting connections.

 

  6. Support your analysis with evidence: You need to do more than simply state your ideas, however informed and useful these may be.  You also need to provide evidence that reassures the reader that you know what you are talking about, thus providing a foundation for your argument.  Evidence comes in part from the academic literature, whether encountered in this course or elsewhere.  Evidence can also come from your own experience.  Remember that you are trying to accomplish two things with the use of evidence.  First, you are saying that it is not just you making this assertion but that authoritative sources and solid evidence back you up.  Second, you are supplying a degree of specificity and detail, which helps to flesh out an otherwise skeletal argument.

 

  7. Draw on course materials (this applies primarily to reaction papers, not the final paper). Your paper should give evidence that you are taking this course.  You do not need to agree with any of the readings or presentations, but your paper should show you have considered the course materials thoughtfully.

 

  8. Recognize complexity and acknowledge multiple viewpoints. The issues in the history of American education are not simple, and your paper should not propose simple solutions to complex problems. It should not reduce issues to either/or, black/white, good/bad.  Your paper should give evidence that you understand and appreciate more than one perspective on an issue.  This does not mean you should be wishy-washy.  Instead, you should aim to make a clear point by showing that you have considered alternate views.

 

  9. Challenge assumptions. The paper should show that you have learned something by doing this paper. There should be evidence that you have been open to changing your mind.

 

  10. Do not overuse quotation: In a short paper, long quotations (more than a sentence or two in length) are generally not appropriate.  Even in longer papers, quotations should be used sparingly unless they constitute a primary form of data for your analysis.  In general, your paper is more effective if written primarily in your own words, using ideas from the literature but framing them in your own way in order to serve your own analytical purposes.  However, selective use of quotations can be very useful as a way of capturing the author’s tone or conveying a particularly aptly phrased point.

 

  11. Cite your sources: You need to identify for the reader where particular ideas or examples come from.  This can be done through in-text citation:  Give the author’s last name, publication year, and (in the case of quotations) page number in parentheses at the end of the sentence or paragraph where the idea is presented, e.g., (Kliebard, 1986, p. 22); provide the full citations in a list of references at the end of the paper.  You can also identify sources with footnotes or endnotes:  Give the full citation for the first reference to a text and a short citation for subsequent citations to the same text.  (For critical reaction papers, you only need to give the short cite for items from the course reading; other sources require full citations.)  Note that citing a source is not sufficient to fulfill the requirement to provide evidence for your argument.  As spelled out in #6 above, you need to transmit to the reader some of the substance of what appears in the source cited, so the reader can understand the connection with the point you are making and can have some meat to chew on.  The best analytical writing provides a real feel for the material and not just a list of assertions and citations.  Depth, insight, and connections count for more than a superficial collection of glancing references.  In other words, don’t just mention an array of sources without drawing substantive points and examples from these sources; and don’t draw on ideas from such sources without identifying the ones you used.

 

  12. Take care in the quality of your prose: A paper that is written in a clear and effective style makes a more convincing argument than one written in a murky manner, even when both writers start with the same basic understanding of the issues.  However, writing that is confusing usually signals confusion in a person’s thinking.  After all, one key purpose of writing is to put down your ideas in a way that permits you and others to reflect on them critically, to see if they stand up to analysis.  So you should take the time to reflect on your own ideas on paper and revise them as needed.  You may want to take advantage of the opportunity in this course to submit a draft of the final paper, revise it in light of comments, and then resubmit the revised version.  This, after all, is the way writers normally proceed.  Outside of the artificial world of the classroom, writers never turn in their first draft as their final statement on a subject.

  

Posted in Higher Education, Meritocracy, Uncategorized

Daniel Markovits on “The Meritocracy Trap”

In this post, which I just wrote, I look at the arguments in the new book by Daniel Markovits.  It crystallizes a lot of the issues in the current debate about meritocracy and advances the argument in ways I hadn’t considered before.  This is not a review of the book but a teaser to get you to read it for yourself.  In it I single out some of his key points and give you some of my favorite quotes from the book.  Enjoy.


Daniel Markovits on The Meritocracy Trap

In the last year or two, the media have been filled with critiques of the American meritocracy (e.g., here, here, and here).  It’s about time this issue got the critical attention it deserves, since the standard account has long been that the only problem with the meritocracy is that it’s not meritocratic enough.  Thus the Varsity Blues college admissions soap opera that has been playing in the press for months now, another case of rich people buying privileged access to credentials they haven’t earned the hard way.  That’s an old story of jumping the line and cutting in front of the truly worthy.

But in his new book, The Meritocracy Trap, Daniel Markovits makes a more complex, more interesting, and ultimately more damning critique.  The problem with meritocracy, he says, lies at its very core and not just in its slipshod implementation.  It’s a destructive force in modern society, which puts people in the lower 99 percent at a severe disadvantage in the pursuit of social mobility and a good life.  But – and this is the less familiar part – it is also damaging to the people in the top group who gain the most financial and social benefits from it.  It’s a trap for both groups, and both would be better off without it.

In this post, I want to walk through key parts of the book’s argument and present some of my favorite quotes.  Markovits, a professor at Yale Law School, is a very effective writer, and the story he tells is not only largely compelling but also compulsively quotable.  I hope this teaser will convince you to read the book, which is available in the usual places and also in pirated editions online.

Here’s how he sets up the argument:

Common usage often conflates meritocracy with equality of opportunity. But although meritocracy was embraced as the handmaiden of equality of opportunity, and did open up the elite in its early years, it now more nearly stifles than fosters social mobility. The avenues that once carried people from modest circumstances into the American elite are narrowing dramatically. Middle-class families cannot afford the elaborate schooling that rich families buy, and ordinary schools lag farther and farther behind elite ones, commanding fewer resources and delivering inferior educations. Even as top universities emphasize achievement rather than breeding, they run admissions competitions that students from middle-class backgrounds cannot win, and their student bodies skew dramatically toward wealth. Meritocratic education now predominantly serves an elite caste rather than the general public.

Meritocracy similarly transforms jobs to favor the super-educated graduates that elite universities produce, so that work extends and compounds inequalities produced in school. Competence and an honest work ethic no longer assure a good job. Middle-class workers, without elite degrees, face discrimination all across a labor market that increasingly privileges elaborate education and extravagant training.

The meritocracy thus works at two levels: a hyper-intense winner-take-all competition to get the very best education in an extremely stratified system of schooling, coupled with a similarly intense competition in the elite sector of the workforce for the positions at the very top with the most extraordinary financial and social rewards.

This system obviously gives a huge advantage to students who bring the cultural, social, and economic capital that comes from being the children of those who are already in the elite sector.  That, as I said, is an old story; no surprise there.  But he also shows the price paid by the group at the top.  As he puts it,

the rich and the rest are entangled in a single, shared, and mutually destructive economic and social logic. Their seemingly opposite burdens are in fact two symptoms of a shared meritocratic disease. Meritocratic elites acquire their caste through processes that ruthlessly exclude most Americans and, at the same time, mercilessly assault those who do go through them. The powerfully felt but unexplained frustrations that mar both classes—unprecedented resentment among the middle class and inscrutable anxiety among the elite—are eddies in a shared stream, drawing their energies from a single current.

            Markovits notes that “For virtually all of human history, income and industry have charted opposite courses.”  The rich were idle, living off the land and off the labor of others.  The poor were the workhorses of the economy.  But today,

High society has reversed course. Now it valorizes industry and despises leisure. As every rich person knows, when an acquaintance asks “How are you?” the correct answer is “So busy.” The old leisure class would have thought this a humiliating admission. The working rich boast that they are in demand.

The result is that, in a dramatic historical reversal, meritocrats at the top of the workforce now work longer hours than the middle or working classes.

In 1940, a typical worker in the bottom 60 percent worked nearly four (or 10 percent) more weekly hours than a typical worker in the top 1 percent. By 2010, the low-income worker devoted roughly twelve (or 30 percent) fewer hours to work than the high-income worker. Taken together, these trends shift the balance of ordinary to elite labor by nearly sixteen hours—or two regulation workdays—per week.

What’s going on here is that in the new meritocracy, top positions go to people who prove their worth not only by accumulating the most highly credentialed skills in school but by demonstrating the greatest dedication to the job.  The days of bankers’ hours and white-shoe law firms, with genteel professionals working at a relaxed aristocratic rate, are gone.  Take the case of lawyers, which Markovits knows best:

In 1962 (when elite lawyers earned a third of what they do today), the American Bar Association could confidently declare that “there are . . . approximately 1300 fee-earning hours per year” available to the normal lawyer. Today, by contrast, a major law firm pronounces with equal confidence that a quota of 2,400 billable hours “if properly managed” is “not unreasonable,” which is a euphemism for “necessary for having a hope of making partner.” Billing 2,400 hours requires working from 8 a.m. until 8 p.m., six days a week, without vacation or sick days, every week of the year. Graduates of elite law schools join law firms that commonly require associates and even partners to work sixty-, eighty-, and even hundred-hour weeks.

            The issue is that the meritocrats are claiming the top rewards not as owners of property but as workers using their own human capital.  “Unlike land or factories, human capital can produce income—at least using current technologies—only by being mixed with its owners’ own contemporaneous labor.”  In order to win the competition, they need to exploit their own labor.

People who are required to measure up from preschool through retirement become submerged in the effort. They become constituted by their achievements, so that eliteness goes from being something that a person enjoys to being everything that he is. In a mature meritocracy, schools and jobs dominate elite life so immersively that they leave no self over apart from status. An investment banker, enrolled as a two-year-old in the Episcopal School and then passed on to Dalton, Princeton, Morgan Stanley, Harvard Business School, and finally to Goldman Sachs (where he spends his income on sending his children to the schools that he once attended), becomes this résumé, in the minds of others and even in his own imagination.

As a result,

Meritocratic inequality might free the rich in consumption, but it enslaves them in production….  A person who lives like this places himself, quite literally, at the disposal of others—he uses himself up….  The elite, acting now as rentiers of their own human capital, exploit themselves, becoming not just victims but also agents of their own alienation.

            Of course, it’s hard to feel sorry for the people who win this competition, since their rewards are so over the top.

David Rockefeller received a salary of about $1.6 million (in 2015 dollars) when he became chairman of Chase Manhattan Bank in 1969, which amounted to roughly fifty times a typical bank teller’s income. Last year Jamie Dimon, who runs JPMorgan Chase today, received a total compensation of $29.5 million, which is over a thousand times as much as today’s banks pay typical tellers.

So no one says, “Poor Jamie Dimon.”  But one fundamental consequence of the long work hours of the new elite is that it helps justify their high rewards.  Not only are they better educated than you are, they also work harder than you do.  So how are you supposed to cry foul about where you ended up in life?  By not simply cashing in on their credentials but also by exploiting their own human capital, they provide the meritocracy with iron-clad legitimacy.

To make matters worse, meritocracy—precisely because it justifies economic inequalities and disguises class—denies ordinary Americans any high-minded language through which to explain and articulate the harms and wrongs of their increasing…. They become “victims without a language of victimhood.”

Markovits also connects the rise of meritocracy and the anxieties it foments to the politics of the Trump era.

Meritocracy is therefore far from innocent in the recent rise of nativism and populism. Instead, nativism and populism represent a backlash against meritocratic inequality brought on by advanced meritocracy. Nativism and populism express the same ideological and psychological forces behind the epidemic of addiction, overdose, and suicide that has lowered life expectancy in the white working and middle class.

The contrast with Obama is instructive: “Obama—a superordinate product of elite education—embodied meritocracy’s triumph. Trump—‘a blue-collar billionaire’ who announces ‘I love the poorly educated’ and openly opposes the meritocratic elite—exploits meritocracy’s enduring discontents.”  As he observes, “False prophets gain a foothold…because deeply discontented people care—often most and always first—about being heard and not just being helped. They will cling to the only ship that acknowledges the storm.”

 

Posted in Higher Education, History, History of education

Q and A about A Perfect Mess

This is a Q and A I did with Scott Jaschik about my book, A Perfect Mess, shortly after it came out.  It was published in Inside Higher Ed in 2017.

‘A Perfect Mess’

Author discusses his new book about American higher education, which suggests it may be better off today than people realize … because it has always faced so many problems and has always been a “hustler’s paradise.”

By Scott Jaschik
May 3, 2017

 

David F. Labaree’s new book makes a somewhat unusual argument to reassure those worried about the future of American higher education. Yes, it has many serious problems, he writes. But it always has and always will. And that is in fact a strength of American higher education, he argues.

Labaree, a professor of education at Stanford University, answered questions via email about his new book, A Perfect Mess: The Unlikely Ascendancy of American Higher Education (University of Chicago Press).

Q: What do you consider the uniquely American qualities in the development of higher education in the U.S.?

A: The American system of higher education emerged in a unique historical setting in the early 19th century, when the state was weak, the market strong and the church divided. Whereas the European university was the creature of the medieval Roman Catholic church and then grew strong under the rising nation-state in the early modern period, the American system lacked the steady support of church or state and had to rely on the market in order to survive. This posed a terrible problem in the 19th century, as colleges had to scrabble around looking for consumers who would pay tuition and for private sponsors who would provide donations. But at the same time, it planted the seeds of institutional autonomy that came to serve the system so well in the next two centuries. Free from the control of church and state, individual colleges learned to survive on their own resources by meeting the needs of their students and their immediate communities.

By the 20th century, this left the system with the proven ability to adapt to circumstances, take advantage of opportunities, build its own sources of political and economic support, and expand to meet demand. Today the highest-rated universities in global rankings of higher education institutions are the ones with the greatest autonomy, in particular as measured by being less dependent on state funds. And American institutions dominate these rankings; according to the Shanghai rankings, they account for 16 of the top 20 universities in the world.

Q: How significant is the decentralization of American higher education, with public and private systems, and publics reflecting very different traditions in different states?

A: Decentralization has been a critically important element in the American system of higher education. The federal government never established a national university, and state governments were slow in setting up their own colleges because of lack of funds. As a result, unlike anywhere else in the world, private colleges in the U.S. emerged before the publics. They were born as not-for-profit corporations with state charters but with little public funding and no public control.

The impulse for founding these colleges had little to do with advancing higher learning. Instead founders established these institutions primarily to pursue two other goals — to promote the interests of one religious denomination over others and to make land in one town more attractive to buy than land in a neighboring town. Typically, these two aims came together. Developers would donate land and set up a college, seek affiliation with a church, and then use this college as a way to promote their town as a cultural center rather than a dusty agricultural village. Remember that early America had too much land and not enough buyers. The federal government was giving it away. This helps explain why American colleges arose in the largest numbers in the sparsely populated frontier rather than in the established cities in the East — why Ohio had so many more colleges than Massachusetts or Virginia.

Q: What eras strike you as those in which American higher education was most threatened?

A: American higher education was in the greatest jeopardy in the period after the Civil War. The system was drastically overbuilt. In 1880 the U.S. had more than 800 colleges, five times the number in the entire continent of Europe. Overall, however, they were poor excuses for institutions of higher education. On average they had only 130 students and 10 faculty, which made them barely able to survive from year to year, forced to defer faculty salaries and beg for donations. European visitors loved to write home about how intellectually and socially undistinguished they were.

The system as a whole had only one great asset — a huge amount of capacity. What it lacked, however, was sufficient students and academic credibility. Fortunately, both of these elements arose in the last quarter of the century to save the day. The rise of white-collar employment in the new corporations and government agencies created demand for people with strong cognitive, verbal and social skills, the kinds of things that students learn in school. And with the public high schools filling up with working-class students, the college became the primary way for middle-class families to provide their children with advantaged access to managerial and professional work. At the same time, the import of the German model of the university, with a faculty of specialized researchers sporting the new badge of merit, the Ph.D., offered American colleges and universities the possibility for academic stature that had so long eluded them. This steady flow of students and newfound academic distinction allowed the system to realize the potential embedded in its expansive capacity and autonomous structure.

Q: You seem to be suggesting not to worry too much about today’s problems, because higher education has always been a “perfect mess.” But are there issues that are notably worse today than in the past?

A: First, let me say a little about the advantages of the system’s messiness. In the next section, I’ll respond about the problem facing the system today. The relative autonomy and decentralization of American higher education allows individual colleges and universities to find their own ways of meeting needs, finding supporters and making themselves useful. They can choose to specialize, focusing on particular parts of the market — by level of degree, primary consumer base, religious orientation or vocational function. Or, like the big public and private universities, they can choose to provide something for everyone. This makes for individual institutions that don’t have a clean organization chart, looking instead like what some researchers have called “organized anarchies.”

The typical university is in constant tension between autonomous academic departments, which control curriculum and faculty hiring and promotion, and a strong president, who controls funding and is responsible only to the lay board of directors who own the place. Also thrown into the mix are a jumble of independent institutes, research centers and academic programs that have emerged in response to a variety of funding opportunities and faculty initiatives. The resulting institution is a hustler’s paradise, driven by a wide array of entrepreneurial actors: faculty trying to pursue intellectual interests and forge a career; administrators trying to protect and enrich the larger enterprise; and donors and students who want to draw on the university’s rich resources and capitalize on association with its stellar brand. These actors are feverishly pursuing their own interests within the framework of the university, which lures them with incentives, draws strength from their complex interactions and then passes these benefits on to society.

Q: What do you see as the major challenges facing academic leaders today?

A: The biggest problem facing the American system of higher education today is how to deal with its own success. In the 19th century, very few people attended college, so the system was not much in the public spotlight. Burgeoning enrollments in the 20th century put the system center stage, especially when it became the expectation that most people should graduate from some sort of college. As higher education moved from being an option to becoming a necessity, it increasingly found itself under the kind of intense scrutiny that has long been directed at American schools.

Accountability pressure in the last three decades has reshaped elementary and secondary schooling, and now the accountability police are headed to the college campus. As with earlier iterations, this reform effort demands that colleges demonstrate the value that students and the public are getting for their investment in higher education. This is particularly the case because higher education is so much more expensive per student than schooling at lower levels. So how much of this cost should the public pay from tax revenues and how much debt should individual students take on?

The danger posed by this accountability pressure is that colleges, like the K-12 schools before them, will come under pressure to narrow their mission to a small number of easily measurable outcomes. Most often the purpose boils down to the efficient delivery of instructional services to students, which will provide them with good jobs and provide society with an expanding economy. This ignores the wide array of social functions that the university serves. It’s a laboratory for working on pressing social problems; a playpen for intellectuals to pursue whatever questions seem interesting; a repository for the knowledge needed to address problems that haven’t yet emerged; a zone of creativity and exploration partially buffered from the realm of necessity; and, yes, a classroom for training future workers. The system’s organizational messiness is central to its social value.

Posted in Credentialing, Higher Education, History, Meritocracy

Schooling the Meritocracy: How Schools Came to Democratize Merit, Formalize Achievement, and Naturalize Privilege

 

This is a new piece I recently wrote, based on a paper I presented last fall at the ISCHE conference in Berlin.  It’s part of a larger project that focuses on the construction of the American meritocracy, which is to say the new American aristocracy of credentials.

Schooling the Meritocracy:

How Schools Came to Democratize Merit, Formalize Achievement, and Naturalize Privilege

David F. Labaree

 

Merit is much in the news these days.  Controversy swirls around the central role that education plays in establishing who has the most merit and thus who gets the best job.  Parents are suing Harvard for purportedly admitting students based on ethnicity rather than academic achievement.  Federal prosecutors are indicting wealthy parents for trying to bribe their children’s way into the most selective colleges.  At the core of these debates is a concern about fairness.  To what extent does the social structure allow people to get what they deserve, based on individual merit rather than social power and privilege?  There’s nothing new about our obsession with establishing merit.  The ancient Greeks and Romans were as concerned with this issue as we are.  What is new, however, is that all the attention now is focused on schools as the great merit dispensers.

Modern systems of public schooling have transformed the concept of merit.  The premodern form of this quality was what Joseph Kett calls essential merit.  This represented a person’s public accomplishments, which were seen as a measure of character.  Such merit was hard won through good works and had to be defended vigorously, even if that meant engaging in duels.  The new kind of merit, which arose in the mid nineteenth century after the emergence of universal public schooling in the U.S., was what Kett calls institutional merit.  This you earned by attending school and advancing through the levels of academic attainment.  It became your personal property, which could not be challenged by others and which granted you privileges in accordance with the level of merit you acquired.

Here I examine three consequences of this shift from essential to institutional merit in the American setting.  First, this change democratized merit by making it, at least theoretically, accessible to anyone and not just the gentry, who in the premodern period had prime access to this reputational good.  Second, it formalized the idea of merit by turning it from a series of publicly visible and substantive accomplishments into the accumulation of the forms that schooling had to offer – grades, credits, and degrees.  Third, following from the first two, it served the social function of naturalizing the privileges of birth by transposing them into academic accomplishments.  The well born, through the medium of schooling, acquired a second nature that transformed ascribed status into achieved status.

 

Essential Merit

 

From the very start, the country’s Founding Fathers were obsessed with essential merit.  To twenty-first-century ears, the way they used the term sounds like what we might call character or honor or reputation.  Individuals enacted this kind of merit through public performances, and it referred not just to achievements in general but especially to those that were considered most admirable for public figures.  This put a premium on taking on roles of public service more than private accomplishment and on contributing to the public good.  Such merit might come from demonstrating courage on the battlefield, sacrificing for the community in a position of public leadership, or achieving scientific or literary eminence.  Think Washington, Jefferson, Madison, Hamilton, and Franklin.  It extended well beyond simple self-aggrandizement, although it often spurred that among its most ardent suitors.  It was grounded in depth of achievement, but it also relied heavily on symbolism to underscore its virtue.

Merit was both an enactment and a display.  The most accomplished practitioner of essential merit in the revolutionary period was George Washington.  From his earliest days he sought to craft the iconic persona that has persisted to the present day.  His copybook in school was filled with 110 rules of civility meant to govern public behavior.  He constructed a resume of public service that led inevitably from officer in the colonial militia, to representative to the Continental Congress, to commander in chief of the revolutionary army, and then to president.  A tall man in an era of short men, he would tower over a room of ordinary people, and he liked to demonstrate his physical strength and his prowess as an accomplished horseman.  This was a man with a strong sense of his reputation and of how to maintain it.  And he scored the ultimate triumph of essential merit in his last performance in public life, when he chose to step down from the presidency after two terms and return to Mount Vernon – Cincinnatus laying down his awesome powers and going back to the farm.

This kind of merit is what Jefferson meant when he referred to a “natural aristocracy,” arising in the fertile fields of the new world that were uncorrupted by the inheritance of office.  It represents the kinds of traits that made aristocracy a viable form of governance for so many years:  educating men of privilege to take on positions of public leadership, imbued with noblesse oblige, and armed with the skills to be effective in the role.  Merit was a powerful motivator for the Founding Fathers, a spur to emulation for the benefit of the community, a self-generating dynamic for hyper-accomplishment.  And it was a key source of their broad legitimacy as public leaders.

But essential merit also had its problems.  Although it left room for self-made men to demonstrate their merit – like Franklin and Hamilton – it was largely open to men of leisure, born into the gentry, supported by a plantation full of slaves, and free to serve the public without having to worry about making a living; think Washington, Jefferson, and Madison.  When politics began the transition in the 1820s from the Federalist to the Jacksonian era, the air of aristocracy fit uncomfortably into the emerging democratic ethos that Tocqueville so expertly captured in Democracy in America.

Another problem was that essential merit brought with it unruly competition.   How much essential merit can crowd into a room before a fight breaks out?  How can everyone be a leader?  What happens if you don’t get the respect you think you earned?  One response, quite common at the time, was to engage in a duel.  If your reputation was maligned by someone and that person refused to retract the slur, then your honor compelled you to defend your reputation with your life.  Alexander Hamilton was but one casualty of this lethal side effect of essential merit.  Benedict Arnold is another case in point.  An accomplished military officer and Washington protégé, Arnold was doing everything right on the battlefield to demonstrate his merit.  But when he sought appointment as a major general, politics blocked his path.  This was a slight too much for him to bear.  Instead of a duel (who would he challenge, his mentor Washington?), he opted for treason, plotting to pass along to the British his command of the fort at West Point.  So the dynamic behind essential merit was a powerful driver for behavior that was both socially functional and socially destructive.

 

The Rise of Institutional Merit

 

By the second quarter of the nineteenth century, a new form of merit was arising in the new republic.  In contrast to the high-flown notion of essential merit, grounded in high accomplishment in public life and defended with your life, the new merit was singularly pedestrian.  It meant grades on a report card at school.  Hardly the stuff of stirring biographies.  These grades were originally labeled as measures of merit and demerit in academic work, recording what you did well and what you did badly.  Ironically, ground zero for this new system was Benedict Arnold’s old fort at West Point, which was now the location of the U.S. Military Academy.  The sum of your merits and demerits constituted your academic worth.  Soon the emerging common school system adopted the same mode of evaluation.

The sheer banality of the new merit offered real advantages.  Unlike its predecessor, it did not signal membership in an exclusive club accessible primarily to the well-born but instead arose from a system that governed an entire population within a school.  As a result, it was well suited to a more democratic political culture.  Also, it provided a stimulus sufficiently strong to promote productive competition among students for academic standing, but these marks on a report card were not really worth fighting over.

So institutional merit emerged as a highly functional construct for meeting the organizational needs of the new systems of public schooling that arose in the middle of nineteenth century America.  What started out as a mechanism for motivating students in a classroom grew into a model for structuring an entire system of education.  Once the principle of ranking by individual achievement was established, it developed into a way of ranking groups of students within schools and then groups of schools within school systems.  The first innovation, as schools became larger and more heterogeneous in both age and ability, was to organize groups of students into homogeneous classrooms with others of the same age and ability.  If you performed with sufficient merit in one grade, you would be promoted with your peers at the end of the year into the next grade up the ladder.  If your merit was not up to standard, you would be held back to repeat the grade.  This allowed teachers to pitch instruction toward a group of students who were at roughly the same level of achievement and development.  It also created a more level playing field that allowed teachers to compare and rank the relative performance of students within the class, which they couldn’t do in a one-room schoolhouse with a wide array of ages and abilities.  So the invention of the grade also led to the invention of the metric that defines some students as above grade-level and others as below.  Graded schooling was thus the foundation of the modern meritocracy.

The next step in the development of institutional merit was the erection of a graded system of schooling.  Students would start out in an elementary school for the lower grades, then gain promotion to a grammar school, and (by the end of the nineteenth century) move up to a high school for the final grades.  Entry at one level was dependent on successful completion of the level below.  A clear hierarchy of schooling emerged based on the new merit metric.  And it didn’t stop there.  High school graduation became the criterion for entry into college, and the completion of college became the requirement for entry into graduate school.  A single graded structure guided student progress through each individual school and across the entire hierarchy of schooling, serving as a rationalized and incremental ladder of meritocratic attainment leading from first grade through the most elevated levels of the system.

Consider some of the consequences of the emergence of this finely tuned machinery for arranging students by institutional merit.  When you have a measure of what average progress should look like – annual promotion to the next grade, and periodic promotion to the school at the next level – then you also have a clear measure of failure.  There were three ways for students to demonstrate failure within the system:  to be held back from promotion to the next grade; to be denied the diploma that demonstrated completion of a particular school level; to leave the system altogether at a particular point in the graded hierarchy.  Thus emerged the chronic problems of the new system – retardation and elimination.

A parallel challenge to the legitimacy of the merit structure occurred at the level of the school.  By the early twentieth century, the level of school you completed became increasingly important in determining access to the best jobs.  When a particular level of schooling began to fill up, as happened to the high school in the first half of the twentieth century, that level of diploma became less able to provide invidious distinction.  For a high school graduate, this meant that the perceived quality of the school became an important factor in determining the relative merit of your degree compared with those of other high school graduates.  When college enrollments took off in the mid twentieth century and this level of the system emerged as the new zone of universal education, the value of a college degree likewise became dependent on the imputed merit of the institution granting it.  The result was a two-layered hierarchy of merit in the American educational system.  One layer was the formal graded system from first grade to graduate school.  The other was the informal ranking of institutions at the same formal level.  Both became critical in determining graduates’ level of institutional merit and their position in the queue for the best jobs.  Consider some of the consequences of the dominance of this new form of merit.

Democratizing Merit

As we saw, essential merit had a bias toward privilege.  The founding fathers who displayed the most merit were to the manor born.  They were free to devote themselves to public service because of birth and wealth.  Yes, it was possible as well for an outsider to demonstrate essential merit, but it wasn’t easy.  Benjamin Franklin was sui generis, and even he acted less as a leader and more as a sage and diplomat.  Alexander Hamilton fought his way to the top, but he never lost his outsider status and ended up dying to defend his honor, which was hard-won but never fully secure.

What gives essential merit face validity is that it is based on what you have actually accomplished.  Your merit is your accomplishments.  That’s hard to beat as a basis for respect, but it’s also hard to attain.  Washington could prove himself as a military officer because his gentry status automatically qualified him to become an officer in the first place.  Jefferson became a political figure because that’s what men of his status did with themselves, and his election was all but assured.  As a result, what made this kind of merit so compelling is what also made it so difficult for anyone but the gentry to demonstrate.

So the move toward institutional merit radically opened up the possibility of attaining it.  It was a system that applied to everyone – not just the people with special access but everyone in the common school classroom.  All students in the class could demonstrate their worth and earn the appropriate merits that measured that worth.  And everyone was measured on the same scale.  If essential merit was the measure of the member of the natural aristocracy, institutional merit was the measure of the citizen in a democracy.  You’ve got to love that part about it.

Another characteristic of institutional merit also made it distinctly democratic.  What it measured was neither intrinsically important nor deeply admirable.  It didn’t measure your valor in battle or your willingness to sacrifice for the public good; instead it reflected how many right answers you got on a weekly spelling test.  No big deal.

But what made this measure of merit so powerful for the average person was its implication.  It measured a trivial accomplishment in the confined academic world of the classroom, but it implied a bright future.  If essential merit measured your real accomplishment in the world, institutional merit offered a prediction of your future accomplishment.  It said, look out for this guy – he’s going to be somebody.  This is a major benefit of the new measure.  Measuring how well you did a job is relatively easy, but predicting in advance how well you will do that job is a very big deal.

Does institutional merit really predict future accomplishment?  Do academic grades, credits, and degrees tell us how people will perform on the job?  Human capital theorists say yes: the skills acquired in school translate into greater productivity in the workforce.  Credentialing theorists say no:  the workforce rewards degrees by demanding them as prerequisites for getting a job, but this doesn’t demonstrate that what is learned in school helps a person in doing the job.  I lean toward the latter group, but for our purposes this debate doesn’t really matter.  As long as the job market treats academic merit as a predictor of job performance, this form of merit serves as exactly that.  Whether academic learning is useful on the job is irrelevant as long as the measures of academic merit are used to allocate people to jobs.  And a system that offers everyone in a community access to schools that will award them tokens of institutional merit gives everyone a chance to gain any social position.  That’s a very democratic measure indeed.

Formalizing Merit

Part of what makes institutional merit so democratic is that the measure itself is so abstract.  What it’s measuring is not concrete accomplishment – winning a battle or passing a law – but generic accomplishment on a standardized and decontextualized scale.  It’s a score from A to F or 1 to 100 or 0 to 4.  All of these scales are in use in American schools, but which you use doesn’t matter.  They’re all interchangeable.  All they tell us is how high or low an individual was rated on some academic task.  Then these individual scores are averaged together across a heterogeneous array of such tasks to compute a composite score that tells us – what?  The score says that overall, at the end of the class, you met academic expectations (for that class in that grade) at a high, medium, or low level, or that you failed to meet the minimum expectation at all.  And, if compared to the grades that fellow students received in the same class, it shows where your performance ranked with that of your peers.

It’s the sheer abstraction of this measure of merit that gives it so much power.  A verbal description of a student’s performance in the class would be a much richer way of understanding what she learned there:  In her biology class, Joanie demonstrated a strong understanding of heredity and photosynthesis but she had some trouble with the vascular system.  The problem is that this doesn’t tell you how she compares with her classmates or whether she will qualify to become a banker.  What helps with the latter is that she received a grade of B+ (3.3 on a 4.0 scale) and the class average was B.  The grade tells you much less but it means a lot more for her and her future.  Especially when it is combined with all of her other grades in classes across her whole career in high school, culminating in her final grade point average and a diploma.  It says, she’ll get into college, but it won’t be a very selective one.  She’ll end up in a middle-class job, but she won’t be a top manager.  In terms of her future, this is what really matters, not her mastery of photosynthesis.
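To make the collapse concrete, here is a minimal sketch in Python (the courses and grades beyond Joanie’s B+ in biology are hypothetical, invented purely for illustration) of how a rich record of learning gets reduced to one interchangeable number:

```python
# Minimal illustration with hypothetical data: collapsing qualitative learning
# into a single grade point average on a 4.0 scale.

letter_to_points = {
    "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "D": 1.0, "F": 0.0,
}

# What the student actually learned is rich and contextual...
transcript = {
    "Biology": "B+",   # strong on heredity and photosynthesis, shaky on the vascular system
    "English": "B",    # hypothetical
    "Algebra": "B-",   # hypothetical
}

# ...but the system records only the composite number.
points = [letter_to_points[grade] for grade in transcript.values()]
gpa = sum(points) / len(points)

print(f"GPA: {gpa:.2f}")  # prints 3.00: the decontextualized score that travels with her
```

Everything about heredity, photosynthesis, and the vascular system disappears; what remains is a point on a scale that can be compared with anyone else’s.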

In this way, institutional merit is part of the broad process of rationalization that arose with modernity.  It filters out all of the noise that comes from context and content and qualitative judgments and comes up with a quantitative measure that locates the individual as a point on a normal curve representing everyone in the cohort.  It shows where you rank and predicts where you’re headed.  It becomes a central part of the machinery of disciplinary power.

Naturalizing Privilege

Once merit became democratized and formalized, it also became naturalized.  The process of naturalization works like this.  Your merit is so central and so pervasive in a system of universal schooling that it embeds itself within the individual person.  You start saying things like:  I’m smart.  I’m dumb.  I’m a good student.  I’m a bad student.  I’m good at reading but bad at math.  I’m lousy at sports.  The construction of merit is coextensive with the entire experience of growing up, and therefore it comes to constitute the emergent you.  It no longer seems to be something imposed by a teacher or a school but instead comes to be an essential part of your identity.  It’s now less what you do and increasingly who you are.  In this way, the systemic construction of merit begins to disappear and what’s left is a permanent trait of the individual.  You are your grade and your grade is your destiny.

The problem, however – as an enormous amount of research shows – is that the formal measures of merit that schools use are subject to powerful influence from a student’s social origins.  No matter how you measure merit, your social origin affects your score.  It shapes your educational attainment.  It also shows up in measures that rank educational institutions by quality and selectivity.  Across the board, your parents’ social class has an enormous impact on the level of merit you are likely to acquire in school.  Students with higher social position end up accumulating a disproportionately large number of academic merit badges.

The correlations between socioeconomic status and school measures of merit are strong and consistent, and the causation is easy to determine.  Being born well has an enormously positive impact on the educational merit you acquire across your life.  Let us count the ways.  Economic capital is one obvious factor.  Wealthy communities can support better schools.  Social capital is another factor.  Families from the upper middle classes have a much broader network of relationships with the larger society than those from the working class, which provides a big advantage for their schooling prospects.  For them, the educational system is not foreign territory but feels like home.

Cultural capital is a third factor, and the most important of all.  School is a place that teaches students the cognitive skills, cultural norms, and forms of knowledge that are required for competent performance in positions of power.  Schools demonstrate a strong disposition toward these capacities over others:  mental over manual skills, theoretical over practical knowledge, decontextualized over contextualized perspectives, mind over body, Gesellschaft over Gemeinschaft.  Parents in the upper middle class are already highly skilled in these cultural capacities, which they deploy in their professional and managerial work on a daily basis.  Their children have grown up in the world of cultural capital.  It’s a language they learn to speak at home.  For working-class children, school is an introduction to a foreign culture and a new language, which, unaccountably, the other students already seem to know.  They’re playing catch-up from day one.  Also, it turns out that schools are better at rewarding cultural capital than they are at teaching it.  So kids from the upper middle class can glide through school with little effort while others continually struggle to keep up.  The longer they remain in school, the larger the achievement gap between the two groups grows.

So, in the wonderful world of academic merit, the fix is in.  Upper-income students have a built-in advantage in acquiring the grades, credits, and degrees that constitute the primary prizes of the school meritocracy.  But – and this is the true magic of the educational process – the merits that these students accumulate at school come in a purified academic form that is independent of their social origins.  They may have entered schooling as people of privilege, but they leave it as people of merit.  They’re good students.  They’re smart.  They’re well educated.  As a result, they’re totally deserving of special access to the best jobs.  They arrived with inherited privilege but they leave with earned privilege.  So now they fully deserve what they get with their new educational credentials.

In this way, the merit structure of schooling performs a kind of alchemy.  It turns class position into academic merit.  It turns ascribed status into achieved status.  You may have gotten into Harvard by growing up in a rich neighborhood with great schools and by being a legacy.  But when you graduate, you bear the label of a person of merit, whose future accomplishments arise from your superior abilities alone.  You’ve been given a second nature.

Consequences of Naturalized Privilege: The New Aristocracy

The process by which schools naturalize academic merit brings major consequences to the larger society.  The most important of these is that it legitimizes social inequality.  People who were born on third base get credit for hitting a triple, and people who have to start in the batter’s box face the real possibility of striking out.  According to the educational system, divergent social outcomes are the result of differences in individual merit, so, one way or the other, people get what they deserve.  The fact that a fraction of students from the lower classes manage against the odds to prove themselves in school and move up the social scale only adds further credibility to the existence of a real meritocracy.

In the United States in the last 40 years, we have come to see the broader implications of this system of status attainment through institutional merit.  It has created a new kind of aristocracy.  This is not Jefferson’s natural aristocracy, grounded in public accomplishments, but a caste of meritocratic privilege, grounded in the formalized and naturalized merit signaled by educational credentials.  As with aristocracies of old, the new meritocracy is a system of rule by your betters – no longer defined as those who are better born or more accomplished but now as those who are better educated.  Michael Young saw all this coming back in 1958 in his fable The Rise of the Meritocracy.  But now we can see that it has truly taken hold.

The core expertise of this new aristocracy is skill in working the system.  You have to know how to play the game of educational merit-getting and pass this on to your children.  The secret is in knowing that the achievements that get awarded merit points through the process of schooling are not substantive but formal.  Schooling is not about learning the subject matter; it’s about getting good grades, accumulating course credits, and collecting the diploma on the way out the door.  Degrees pay off, not what you learned in school or even the number of years of schooling you have acquired.  What you need to know is what’s going to be on the test and nothing else.  So you need to study strategically and spend a lot of effort working the refs.  Give teacher what she wants and be sure to get on her good side.  Give the college admissions officers the things they are looking for in your application.  Pump up your test scores with coaching and by learning how to game the questions.

Members of the new aristocracy are particularly aggressive about carrying out a strategy known as opportunity hoarding.  There is no academic advantage too trivial to pursue, and the number of advantages you accumulate can never be enough.  In order to get your children into the right selective college you need to send them to the right school, get them into the gifted program in elementary school and the right track in high school, hire a tutor, carry out test prep, do the college tour, pursue prizes, develop a well-rounded resume for the student (sports, student leadership, a musical instrument, service), pull strings as a legacy and a donor, and on and on and on.

Such behavior by upper-middle-class parents is not as crazy as it seems.  The problem with being at the top is that there’s nowhere to go but down.  If you look at studies of intergenerational mobility in the US, families in the top quintile have a big advantage, with more than 40 percent of children ending up in the same quintile as their parents, twice the rate that would occur by chance (20 percent).  But that still means that close to 60 percent are going to be downwardly mobile.  The system is just meritocratic enough to keep the most privileged families on edge, worried about having their child bested by a smart poor kid.  As Jerry Karabel puts it in The Chosen, the only thing U.S. education equalizes is anxiety.

As with earlier aristocracies, the new aristocrats of merit cluster together in the same communities, where the schools are like no other.  Their children attend the same elite colleges, where they meet their future mates (a pattern sociologists call assortative mating) and then transmit their combined cultural, social, and economic capital in concentrated form to their own children.  And one consequence of this increased concentration of educational resources is that the achievement gap between low- and high-income students has been rising; Sean Reardon’s study shows the gap growing 40 percent in the last quarter of the twentieth century.  This is how educational and social inequality grows larger over time.

 

By democratizing, formalizing, and naturalizing merit, schools have played a central role in defining the character of modern society.  In the process they have served to increase social opportunity while also increasing social inequality.  At the same time, they have established a solid educational basis for the legitimacy of this new inequality, and they have fostered the development of a new aristocracy of educational merit whose economic power, social privilege, and cultural cohesion would be the envy of the high nobility in early modern England or France.  Now, as then, the aristocracy assumes its outsized social role as a matter of natural right.

 

Posted in Educational Research, Higher Education, Scholarship

We’re Producing Academic Technicians and Justice Warriors: Sermon on Educational Research, Pt. 2

This is a followup to the “Sermon on Educational Research” that I posted last week.  It’s a reflection on two dysfunctional orientations toward scholarship that students often pick up in the course of doctoral study.

We’re Producing Academic Technicians and Justice Warriors:

A Sermon on Educational Research, Part 2

David F. Labaree

Published in International Journal of the Historiography of Education, 1-2019

Download here

            In 2012, I wrote a paper for this journal titled “A Sermon on Educational Research.”  It offered advice to doctoral students in education about how to approach their work as emergent scholars in the field.  The key bits of advice were:  be wrong; be lazy; be irrelevant.  The idea was to immunize scholars against some of the chronic syndromes in educational scholarship – trying to be right instead of interesting, trying to be diligent instead of strategic, and trying to focus on issues arising from professional utility instead of from intellectual interest.  Needless to say, the advice failed to take hold.  The engine of educational research production has continued to plow ahead in pursuit of validity, diligence, and relevance.

So here I am, giving it another try.  This time I take aim at two kinds of practices among doctoral students in education that are particularly prominent right now and also particularly problematic for the future health of the field.  Most students don’t fit in either category, but the very existence of these practices threatens to pollute the pool.  One practice is the effort to become a hardcore academic technician; the other is the effort to become a hardcore justice warrior.  Though at one level they represent opposite orientations toward research, at another level they have in common the urge to serve as social engineers intent on fixing social problems.  The antidotes to these two tendencies, I suggest, are to take a dose of humility every day and to approach educational research as a form of play.  Let’s play with ideas instead of being hell-bent on tinkering with the social machinery.

The Academic Technician

One role that education doctoral students adopt is academic technician.  In practice, this means concentrating on learning the craft of a particular domain of educational research.  Not that there’s anything wrong with craft.  Without it, we wouldn’t be a profession at all but just a bunch of amateurs.  The problem comes from learning the craft too well.  That means apprenticing yourself to an expert in your subfield and adopting all the practices and perspectives that this expert represents.

One flaw with this approach is that it treats educational research as a field whose primary problems are technical.  It’s all about immersing yourself in cutting-edge research methodologies and diligently applying these to whatever data meets their assumptions.  Often the result is scholarship that is technically expert and substantively deficient.  Your aim is to be able to defend the validity of your findings more than their significance, since at colloquia where this kind of work is presented the arguments are mostly focused on whether the methodology warrants the modest claims made by the author.

The focus on acquiring technical skills diverts students from engaging with the big issues in the field of education, which are primarily normative.  Education is an effort to form children into the kinds of adults we want them to be.  So the central issues in education revolve around the ends we want to accomplish and the values we hold dear.  The key conflicts are about purpose rather than practice.  Technical skills are not sufficient to explore these issues, and by concentrating too much on acquiring these purportedly hard skills we turn our attention away from the normative concerns that by comparison seem awfully squishy.

Another problem that arises from the effort to become academic technicians is that it turns students into terrible writers.  You populate your text with jargon and other forms of academic shorthand because you are speaking to an audience so small it could fit in a single seminar room.  You’re trying to do science, so you model your writing after the lifeless language of the journeyman scientific journal article.  This means using passive voice, abandoning the first person (“data were gathered” – who did that?), avoiding action verbs, loading the text with nominalizations (never use a verb when you can turn it into a noun), and at all costs refusing to tell an engaging story.  If you make an effort to draw the reader’s interest, it’s considered unprofessional.  In this style of writing, papers are built by the numbers.  Using the IMRaD formula, papers need to consist of Introduction, Methods, Results, and Discussion.  You can read them and write them in any order.  Every paper is just an exercise in filling each of these categories with new content.  It’s plug and play all the way.

The Justice Warrior

Another scholarly role that doctoral students adopt is justice warrior.  If the first role ranks means over ends, this one canonizes ends and dispenses with means altogether.  All that matters is your frequently expressed commitment to particular values of social justice.  You can’t express these values too often or too vehemently, since the mission is all important and the enemies resisting the mission are legion.  As a result, your position is perpetually atop the high horse of righteous indignation.  The primary targets of your scholarship are sexism, racism, and colonialism, with social class coming in a distant fourth.

If people seek to question your position because of a putative failure to construct a compelling argument or to validate your claims with clear evidence and rigorous methods, they are only demonstrating that they are on the wrong side.  It’s OK to dismiss any text or argument whose author might be accused of betraying a tinge of sexism, racism, or colonialism.  Everything that follows is fruit of the poisonous tree.  You can say something like, “Once I saw he was using the male pronoun, I couldn’t continue reading.”  Nothing worthwhile comes from someone you deem a bad person.  This simplifies your life as an emergent scholar, since you can ignore most of the literature.  It also means you seek out like-minded souls to serve on your committee and like-minded journals in which to place your papers.

This approach incorporates a distinctive stance toward intellectual life.  The academic technician restricts intellectual interest to the methods within a small subfield at the expense of engaging with interesting ideas.  But the justice warrior on principle adopts a position that is whole-heartedly anti-intellectual.  You need to shun most ideas because they bear the taint of their sinful origins.  Maintaining ideological purity is the key focus of your academic life.  The world is black and white and only sinners see shades of gray.

For justice warriors, every class, colloquium, meeting, and paper is an opportunity to signal your virtue.  This has the effect of stifling the conversation, since it’s hard for anyone to come back with a critical comment without looking sexist, racist, or colonialist.  Once you establish the high ideological ground, it’s easy to defend your position without having to draw on data, methods, or logic.  Being right brings reliable rewards.

Some Common Ground: Becoming Social Engineers

These two tendencies within the educational research community appear to be opposites, but in one way they are quite similar.  Both call for scholars in the field to become engineers whose job is to fix social problems.  For the academic technicians, this means a focus on creating data-driven policies for school reform, where the aim is to bring school and society in line with the findings of rigorous research.  Research says do this, so let it be so.  For justice warriors, this means a focus on bringing school and society in line with your own personal sense of what is righteous.  Both say:  I know best, so get out of my way.

The problem with this, of course, is that exercises in social engineering so often go very badly, no matter how much they are validated by science or confirmed by belief.  Think communism, fascism, the inquisition, eugenics, the penitentiary, and the school.  Rationalized scientific knowledge can be a destructive tool for tinkering with the emergent, organic ecology of social life.  And moral absolutism can easily poison the soil.

A Couple Antidotes

One antidote to the dual diseases of academic technicalism and justice fundamentalism is a dose of humility.  What most social engineers have in common is a failure to consider the possibility that they might be wrong.  Maybe I don’t know enough.  Maybe my methods don’t apply in this setting.  Maybe my theories are flawed.  Maybe my values are not universal.  Maybe my beliefs are mistaken.  Maybe my morals are themselves tainted by inconsistency.  Maybe it’s not just a case of technique; maybe ends matter.  Maybe it’s not just a case of values; maybe means matter.

Adopting humility doesn’t mean that you need to be tentative in your assertions or diffident in your willingness to enter the conversation.  Often you need to overstate your point in order to get attention and push people to engage with you.  But it does mean that you need to be willing to reconsider your argument based on evidence and arguments that you encounter through engagement with other scholars – or with your own data.  This reminds me of a colleague who used to ask faculty candidates the same question:  “Tell me about a time when your research forced you to give up an idea that you really wanted to hold on to.”  If your own research isn’t capable of changing your mind, then it seems you’re not really examining data but simply confirming belief.

Another way to counter these two baleful tendencies in the field is to approach research as a form of intellectual play.  This means playing with ideas instead of engineering improvement, pursuing methodological perfection, or enforcing ideological purity.  Play lets you try things out without fear of being technically or ideologically wrong.  It keeps you from taking yourself too seriously, always a risk for academics at all levels.  Play will keep you from adopting the social engineering stance that assumes you know better than everyone else.  This doesn’t mean abandoning your commitments to rigor and values.  Your values will continue to shape what you play with, serving to make the stories you tell with your research meaningful and worthwhile.  Your technique will continue to be needed to make the stories you tell credible.

Playing with ideas is fundamental to the ways that universities work.  Ideally, universities provide a zone of autonomy for faculty that allows them to explore the intellectual terrain, unfettered by concerns about what’s politically correct, socially useful, or potentially ridiculous.  This freedom is more than a license to be frivolous, though it’s tolerant of such behavior.  Its value comes from the way it opens up possibilities that more planful programs of research might miss.  It allows you to think the unthinkable and pursue the longshot.  Maybe most such efforts come to naught, but that’s an acceptable cost if a few fall on fertile soil and grow into insights of great significance.  Play is messy but it’s highly functional.

In closing, let me note for the record that most doctoral students in education don’t fall into either of the two categories of scholarly malpractice that I identify here.  Most are neither academic technicians nor justice warriors.  Most manage to negotiate a position that avoids either of these polar tendencies.  That’s the good news, which bodes well for the future of our field.  The bad news, however, is that this often leaves them feeling as though they have fallen between two stools.  Compared to the academic technicians they seem unprofessional, and compared to the justice warriors they seem immoral.  That’s a position that threatens their ability to function as the kind of educational scholars we need.