
How Credentialing Theory Explains the Extraordinary Growth in US Higher Ed in the 19th Century

Today I am posting a piece I wrote in 1995. It was the foreword to a book by David K. Brown, Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism.  

I have long been interested in credentialing theory, but this is the only place where I ever tried to spell out in detail how the theory works.  For this purpose, I draw on the case of the rapid expansion of the US system of higher education in the 19th century and its transformation at the end of the century, which is the focus of Brown’s book.  Here’s a link to a pdf of the original. 

The case is particularly fruitful for demonstrating the value of credentialing theory, because the most prominent theory of educational development simply can’t make sense of it.  Functionalist theory sees the emergence of educational systems as part of the process of modernization.  As societies become more complex, with a greater division of labor and a shift from manual to mental work, the economy requires workers with higher degrees of verbal and cognitive skills.  Elementary, secondary, and higher education arise over time in response to this need.

The history of education in the U.S., however, poses a real problem for this explanation.  American higher education exploded in the 19th century, to the point that there were some 800 colleges in existence by 1880, which was more than the total number on the continent of Europe.  It was the highest rate of colleges per 100,000 population that the world had ever seen.  The problem is that this increase was not in response to increasing demand from employers for college-educated workers.  While the rate of higher schooling was increasing across the century, the skill demands in the workforce were declining.  The growth of factory production was subdividing forms of skilled work, such as shoemaking, into a series of low-skilled tasks on the assembly line.

This being the case, then, how can we understand the explosion of college founding in the 19th century?  Brown provides a compelling explanation, and I lay out his core arguments in my foreword.  I hope you find it illuminating.

 

[Image: cover of Degrees of Control]

Preface

In this book, David Brown tackles an important question that has long puzzled scholars who wanted to understand the central role that education plays in American society: When compared with other Western countries, why did the United States experience such extraordinary growth in higher education? Whereas in most societies higher education has long been seen as a privilege that is granted to a relatively small proportion of the population, in the United States it has increasingly come to be seen as a right of the ordinary citizen. Nor was this rapid increase in accessibility a very recent phenomenon. As Brown notes, between 1870 and 1930, the proportion of college-age persons (18 to 21 years old) who attended institutions of higher education rose from 1.7% to 13.0%. And this was long before the proliferation of regional state universities and community colleges made college attendance a majority experience for American youth.

The range of possible answers to this question is considerable, with each carrying its own distinctive image of the nature of American political and social life. For example, perhaps the rapid growth in the opportunity for higher education was an expression of egalitarian politics and a confirmation of the American Dream; or perhaps it was a political diversion, providing ideological cover for persistent inequality; or perhaps it was merely an accident — an unintended consequence of a struggle for something altogether different. In politically charged terrain such as this, one would prefer to seek guidance from an author who doesn’t ask the reader to march behind an ideological banner toward a preordained conclusion, but who instead rigorously examines the historical data and allows for the possibility of encountering surprises. What the reader wants, I think, is an analysis that is both informed by theory and sensitive to historical nuance.

In this book, Brown provides such an analysis. He approaches the subject from the perspective of historical sociology, and in doing so he manages to maintain an unusually effective balance between historical explanation and sociological theory-building. Unlike many sociologists dealing with history, he never oversimplifies the complexity of historical events in the rush toward premature theoretical closure; and unlike many historians dealing with sociology, he doesn’t merely import existing theories into his historical analysis but rather conceives of the analysis itself as a contribution to theory. His aim is therefore quite ambitious – to spell out a theoretical explanation for the spectacular growth and peculiar structure of American higher education, and to ground this explanation in an analysis of the role of college credentials in American life.

Traditional explanations do not hold up very well when examined closely. Structural-functionalist theory argues that an expanding economy created a powerful demand for advanced technical skills (human capital), which only a rapid expansion of higher education could fill. But Brown notes that during this expansion most students pursued programs not in vocational-technical areas but in liberal arts, meaning that the forms of knowledge they were acquiring were rather remote from the economically productive skills supposedly demanded by employers. Social reproduction theory sees the university as a mechanism that emerged to protect the privilege of the upper-middle class behind a wall of cultural capital, during a time (with the decline of proprietorship) when it became increasingly difficult for economic capital alone to provide such protection. But, while this theory points to a central outcome of college expansion, it fails to explain the historical contingencies and agencies that actually produced this outcome. In fact, both of these theories are essentially functionalist in approach, portraying higher education as arising automatically to fill a social need — within the economy, in the first case, and within the class system, in the second.

However, credentialing theory, as developed most extensively by Randall Collins (1979), helps explain the socially reproductive effect of expanding higher education without denying agency. It conceives of higher education diplomas as a kind of cultural currency that becomes attractive to status groups seeking an advantage in the competition for social positions, and therefore it sees the expansion of higher education as a response to consumer demand rather than functional necessity. Upper classes tend to benefit disproportionately from this educational development, not because of an institutional correspondence principle that preordains such an outcome, but because they are socially and culturally better equipped to gain access to and succeed within the educational market.

This credentialist theory of educational growth is the one that Brown finds most compelling as the basis for his own interpretation. However, when he plunges into a close examination of American higher education, he finds that Collins’ formulation of this theory often does not coincide very well with the historical evidence. One key problem is that Collins does not examine the nature of labor market recruitment, which is critical for credentialist theory, since the pursuit of college credentials only makes sense if employers are rewarding degree holders with desirable jobs. Brown shows that between 1800 and 1880 the number of colleges in the United States grew dramatically (as Collins also asserts), but that enrollments at individual colleges were quite modest. He argues that this binge of institution-creation was driven by a combination of religious and market forces but not (contrary to Collins) by the pursuit of credentials. There simply is no good evidence that a college degree was much in demand by employers during this period. Instead, a great deal of the growth in the number of colleges was the result of the desire by religious and ethnic groups to create their own settings for producing clergy and transmitting culture. In a particularly intriguing analysis, Brown argues that an additional spur to this growth came from markedly less elevated sources — local boosterism and land speculation — as development-oriented towns sought to establish colleges as a mechanism for attracting land buyers and new residents.

Brown’s version of credentialing theory identifies a few central factors that are required in order to facilitate a credential-driven expansion of higher education, and by 1880 several of these were already in place. One such factor is substantial wealth. Higher education is expensive, and expanding it for reasons of individual status attainment rather than for societal necessity is a wasteful use of a nation’s resources; it is only feasible for a very wealthy country. The United States was such a country in the late nineteenth century. A second factor is a broad institutional base. At this point, the United States had the largest number of colleges per million residents that the country has ever seen, before or since. When combined with the small enrollments at each college, this meant that there was a great potential for growth within an already existing institutional framework. This potential was reinforced by a third factor, decentralized control. Colleges were governed by local boards rather than central state authorities, thus encouraging entrepreneurial behavior by college leaders, especially in the intensely competitive market environment they faced.

However, three other essential factors for rapid credential-based growth in higher education were still missing in 1880. For one thing, colleges were not going to be able to attract large numbers of new students, who were after all unlikely to be motivated solely by the love of learning, unless they could offer these students both a pleasant social experience and a practical educational experience — neither of which was the norm at colleges for most of the nineteenth century. Another problem was that colleges could not function as credentialing institutions until they had a monopoly over a particular form of credentials, but in 1880 they were still competing directly with high schools for the same students. Finally, their credentials were not going to have any value on the market unless employers began to demonstrate a distinct preference for hiring college graduates, and such a preference was still not obvious at this stage.

According to Brown, the 1880s saw a major shift in all three of these factors. The trigger for this change was a significant oversupply of institutions relative to existing demand. In this life or death situation, colleges desperately sought to increase the pool of potential students. It is no coincidence that this period marked the rapid diffusion of efforts to improve the quality of social life on campuses (from the promotion of athletics to the proliferation of fraternities), and also the shift toward a curriculum with a stronger claim to practicality (emphasizing modern languages and science over Latin and Greek). At the same time, colleges sought to guarantee a flow of students from feeder institutions, which required them to establish a hierarchical relationship with high schools. The end of the century was the period in which colleges began requiring completion of a high school course as a prerequisite for college admission instead of the traditional entrance examination. This system provided high schools with a stable outlet for their graduates and colleges with a predictable flow of reasonably well-prepared students. However, none of this would have been possible if the college degree had not acquired significant exchange value in the labor market. Without this, there would have been only social reasons for attending college, and high schools would have had little incentive to submit to college mandates.

Perhaps Brown’s strongest contribution to credential theory is his subtle and persuasive analysis of the reasoning that led employers to assert a preference for college graduates in the hiring process. Until now, this issue has posed a significant, perhaps fatal, problem for credentialing theory, which has asked the reader to accept two apparently contradictory assertions about credentials. First, the theory claims that a college degree has exchange value but not necessarily use value; that is, it is attractive to the consumer because it can be cashed in on a good job more or less independently of any learning that was acquired along the way. Second, this exchange value depends on the willingness of employers to hire applicants based on credentials alone, without direct knowledge of what these applicants know or what they can do. However, this raises a serious question about the rationality of the employer in this process. After all, why would an employer, who presumably cares about the productivity of future employees, hire people based solely on a college’s certification of competence in the absence of any evidence for that competence?

Brown tackles this issue with a nice measure of historical and sociological insight. He notes that the late nineteenth century saw the growing rationalization of work, which led to the development of large-scale bureaucracies to administer this work within both private corporations and public agencies. One result was the creation of a rapidly growing occupational sector for managerial employees who could function effectively within such a rationalized organizational structure. College graduates seemed to fit the bill for this kind of work. They emerged from the top level of the newly developed hierarchy of educational institutions and therefore seemed like natural candidates for management work in the upper levels of the new administrative hierarchy, which was based not on proprietorship or political office but on apparent skill. And what kinds of skills were called for in this line of work? What the new managerial employees needed was not so much the technical skills posited by human capital theory, he argues, but a general capacity to work effectively in a verbally and cognitively structured organizational environment, and also a capacity to feel comfortable about assuming positions of authority over other people.

These were things that the emerging American college could and indeed did provide. The increasingly corporate social structure of student life on college campuses provided good socialization for bureaucratic work, and the process of gaining access to and graduation from college provided students with an institutionalized confirmation of their social superiority and qualifications for leadership. Note that these capacities were substantive consequences of having attended college, but they were not learned as part of the college’s formal curriculum. That is, the characteristics that qualified college graduates for future bureaucratic employment were a side effect of their pursuit of a college education. In this sense, then, the college credential had a substantive meaning for employers that justified them in using it as a criterion for employment — less for the human capital that college provided than for the social capital that college conferred on graduates. Therefore, this credential, Brown argues, served an important role in the labor market by reducing the uncertainty that plagued the process of bureaucratic hiring. After all, how else was an employer to gain some assurance that a candidate could do this kind of work? A college degree offered a claim to competence, which had enough substance behind it to be credible even if this substance was largely unrelated to the content of the college curriculum.

By the 1890s all the pieces were in place for a rapid expansion of college enrollments, strongly driven by credentialist pressures. Employers had reason to give preference to college graduates when hiring for management positions. As a result, middle-class families had an increasing incentive to provide their children with privileged access to an advantaged social position by sending them to college. For the students themselves, this extrinsic reward for attending college was reinforced by the intrinsic benefits accruing from an attractive social life on campus. All of this created a very strong demand for expanding college enrollments, and the pre-existing institutional conditions in higher education made it possible for colleges to respond to this demand in an aggressive fashion. There were a thousand independent institutions of higher education, accustomed to playing entrepreneurial roles in a competitive educational market, that were eager to capitalize on the surge of interest in attending college and to adapt themselves to the preferences of these new tuition-paying consumers. The result was a powerful and unrelenting surge of expansion in college enrollments that continued for the next century.

 

Brown provides a persuasive answer to the initial question about why American higher education expanded at such a rapid rate. But at this point the reader may well respond by asking the generic question that one should ask of any analyst, and that is, “So what?” More specifically, in light of the particular claims of this analysis, the question becomes: “What difference does it make that this expansion was spurred primarily by the pursuit of educational credentials?” In my view, at least, the answer to that question is clear. The impact of credentialism on both American society and the American educational system has been profound — profoundly negative. Consider some of the problems it has caused.

One major problem is that credentialism is astonishingly inefficient. Education is the largest single public investment made by most modern societies, and this is justified on the grounds that it provides a critically important contribution to the collective welfare. The public value of education is usually calculated as some combination of two types of benefits, the preparation of capable citizens (the political benefit) and the training of productive workers (the economic benefit). However, the credentialist argument advanced by Brown suggests that these public benefits are not necessarily being met and that the primary beneficiaries are in fact private individuals. From this perspective, higher education (and the educational system more generally) exists largely as a mechanism for providing individuals with a cultural commodity that will give them a substantial competitive advantage in the pursuit of social position. In short, education becomes little but a vast public subsidy for private ambition.

The practical effect of this subsidy is the production of a glut of graduates. The difficulty posed by this outcome is not that the population becomes overeducated (since such a state is difficult to imagine) but that it becomes overcredentialed, since people are pursuing diplomas less for the knowledge they are thereby acquiring than for the access that the diplomas themselves will provide. The result is a spiral of credential inflation; for as each level of education in turn gradually floods with a crowd of ambitious consumers, individuals have to keep seeking ever higher levels of credentials in order to move a step ahead of the pack. In such a system nobody wins. Consumers have to spend increasing amounts of time and money to gain additional credentials, since the swelling number of credential holders keeps lowering the value of credentials at any given level. Taxpayers find an increasing share of scarce fiscal resources going to support an educational chase with little public benefit. Employers keep raising the entry-level education requirements for particular jobs, but they still find that they have to provide extensive training before employees can carry out their work productively. At all levels, this is an enormously wasteful system, one that rich countries like the United States can increasingly ill afford and that less developed countries, which imitate the U.S. educational model, find positively impoverishing.

A second major problem is that credentialism undercuts learning. In both college and high school, students are all too well aware that their mission is to do whatever it takes to acquire a diploma, which they can then cash in on what really matters — a good job. This has the effect of reifying the formal markers of academic progress — grades, credits, and degrees — and encouraging students to focus their attention on accumulating these badges of merit for the exchange value they offer. But at the same time this means directing attention away from the substance of education, reducing student motivation to learn the knowledge and skills that constitute the core of the educational curriculum. Under such conditions, it is quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning. This perspective is almost perfectly captured by a common student question, one that sends chills down the back of the learning-centered teacher but that makes perfect sense for the credential-oriented student: “Is this going to be on the test?” (Sedlak et al., 1986, p. 182). We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system.

A third problem posed by credentialism is social and political more than educational. According to credentialing theory, the connection between social class and education is neither direct nor automatic, as suggested by social reproduction theory. Instead, the argument goes, market forces mediate between the class position of students and their access to and success within the educational system. That is, there is general competition for admission to institutions of higher education and for levels of achievement within these institutions. Class advantage is no guarantee of success in this competition, since such factors as individual ability, motivation, and luck all play a part in determining the result. Market forces also mediate between educational attainment (the acquisition of credentials) and social attainment (the acquisition of a social position). Some college degrees are worth more in the credentials market than others, and they provide privileged access to higher level positions independent of the class origins of the credential holder.

However, in both of these market competitions, one for acquiring the credential and the other for cashing it in, a higher class position provides a significant competitive edge. The economic, cultural, and social capital that comes with higher class standing gives the bearer an advantage in getting into college, in doing well at college, and in translating college credentials into desirable social outcomes. The market-based competition that characterizes the acquisition and disposition of educational credentials gives the process a meritocratic set of possibilities, but the influence of class on this competition gives it a socially reproductive set of probabilities as well. The danger is that, as a result, a credential-driven system of education can provide meritocratic cover for socially reproductive outcomes. In the single-minded pursuit of educational credentials, both student consumers and the society that supports them can lose sight of an all-too-predictable pattern of outcomes that is masked by the headlong rush for the academic gold.


A Brutal Review of My First Book

In the last two weeks, I’ve presented some of my favorite brutal book reviews.  It’s a lot of fun to watch a skilled writer skewer someone else’s work with surgical precision (see here and my last post).  In the interest of balance, I thought it would be right and proper to present a review that eviscerates one of my own books.  So here’s a link to a review essay by Sol Cohen that was published in Historical Studies in Education in 1991.  It’s called, “The Linguistic Turn: The Absent Text of Educational Historiography.”

Fortunately, I never saw the review when it first came out, three years after publication of my book, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939.  Those were the days when I was a recently tenured associate professor at Michigan State, still young and professionally vulnerable.  It wasn’t until 2005 that a snarky student in a class where I assigned my book pointedly sent me a copy of the review (as a way of saying, why are we reading this thoroughly discounted text?).  By then, thankfully, I was a full professor at Stanford, sufficiently old and arrogant to have nothing at stake, so I could enjoy the rollercoaster ride of reading Cohen’s thorough trashing of my work.

The book is a study of the first century of the first public high school in Philadelphia, the city where I grew up.  It emerged from my doctoral dissertation in sociology at the University of Pennsylvania, submitted in 1983.  The genre is historical sociology, and the data are both qualitative (public records, documents, and annual reports) and quantitative (digitized records of students in every census year from 1840 to 1920).  The book won me tenure at MSU and outstanding book awards in 1989 from both the History of Education Society and the American Educational Research Association.  In short, it was a big fat target, fairly begging for a take-down.  And boy, did Sol Cohen ever rise to the challenge.

[Image: cover of The Making of an American High School]

Cohen frames his discussion of my book as an exercise in rhetorical analysis.  Building on the “linguistic turn” that emerged in social theory toward the end of the 20th century, he draws in particular on the work of Hayden White, who argued for viewing history as a literary endeavor.  White saw four primary story lines that historians employ:  Romance (a tale of growth and progress), Comedy (a fall from grace followed by a rise toward a happy ending), Tragedy (the fall of the hero), and Satire (“a decline and fall from grand beginnings”).  

The Making of an American High School is emplotted in the mode of Satire, an unrelenting critique, the reverse or a parody of the idealization of American public education which, for example, characterizes the Romantic/Comedic tradition in American educational historiography….

The narrative trajectory of Labaree’s book is a downward spiral. Its predominant mood is one of anger and disillusionment with the deterioration or subversion and fall from grace of American public secondary education. The story line of The Making of an American High School, though the reverse of Romance, is equally formulaic: from democratic origins, conflict and decline and fall. The conflict is between egalitarianism and “market values,” between the early democratic aspirations of Central High School to produce a virtuous and informed citizenry for the new republic and its latter-day function as an elitist “credentials market” controlled by a middle class whose goal is to ensure that their sons receive the “credentials” which would entitle them to become the functionaries of capitalist society….

The metaphor of the “credentials market,” by which Labaree means to signify a vulgar or profane and malignant essence of American secondary education, is one of the main rhetorical devices deployed in The Making of an American High School.  Labaree stresses the baneful effect of “market forces” and “market values” on every aspect of CHS and American secondary education: governance, pedagogy, the students, the curriculum. As befits his Satiric mode of emplotment, Labaree attacks the “market” conception of secondary education from a “high moral line,” that of democracy and egalitarianism.

The lugubrious downward narrative trajectory of The Making of an American High School unexpectedly takes a Romantic or Comedic upward turn at the very end of the book, when Labaree mysteriously foresees the coming transformation of the high school. We have to quote Labaree’s last paragraph. “As a market institution,” he writes, “the contemporary high school is an utter failure.” Yet “when rechartered as a common school, it has great potential.” The common public high school “would be able to focus on equality rather than stratification and on learning rather than the futile pursuit of educational credentials.” Stripped of its debilitating market concerns, “the common high school,” Labaree contends in his final sentence, “could seek to provide what had always eluded the early selective high school; a quality education for the whole community.” The End.

Ok, this is really not going well for me, is it?  Not only am I employing a hackneyed plot line of decline and fall and a cartoonish opposition between saintly democracy and evil markets, but I also flinch at the end from being true to my satiric ethos by hastily fabricating a last-minute happy ending.  I spin a book-length tale of fall from grace and then lose my nerve at the finish line.  In short, I’m a gutless fabulist.  

Oh, and that’s not all.

There is something more significant going on in Labaree’s book, however, than his emplotment of the history of American secondary education in the mode of Satire and the formulation of his argument in terms of the metaphor of the market. Thus, the most prominent rhetorical device Labaree utilizes in The Making of An American High School is actually not that of the market metaphor, but that of the terminology and apparatus of Quantitative Research Methodology. Labaree confronts the reader with no less than fifteen statistical tables in what is a very brief work (only about 180 pages of text), as well as four statistical Appendices….

One can applaud Labaree’s diligence in finding and mining a trove of empirical data (“based on a sample of two thousand students drawn from the first hundred years” of CHS). But there is a kind of rhetorical overkill here. For all his figures and statistics, we are not much wiser than before; they are actually redundant. They give us no new information. What is their function in the text then? Labaree’s utilization of the nomenclature and technical apparatus of quantitative research methodology is to be understood as no more (or less) than a rhetorical strategy in the service of “realism.”

Ok, now here’s my favorite paragraph in the whole review.  I think you’ll find this one worth waiting for.  To make sure you don’t miss the best parts, I’ll underline them for you.

Within the conventions of its genre, The Making of an American High School, though lacking in grace as a piece of writing, possesses some complexity and depth, if not breadth: it is an acceptable story. But as if Labaree were dissatisfied with the credibility and persuasiveness of a mere story, or with that story’s formal rhetorical properties, its Satiric mode of emplotment, its metaphoric mode of explanation, its fairy-tale ending, or were aware of its writerly deficiencies, he puts on scientistic or Positivist airs. Labaree’s piling on of inessential detail and his deployment of the arcane vocabulary and symbols of quantitative research function as a rhetorical device to counteract or efface the discursivity, the textuality, the obvious literary-ness of The Making of an American High School and to reinforce or enhance the authority of his book and the ideological thrust of his argument.  As if the language of “mean,” “standard deviation,” “regression analysis,” “beta factors,” “dummy variables,” and “homoscedasticity,” vis-a-vis ordinary language, were a transcendent, epistemologically superior or privileged language: rigorously scientific, impartial, objective. From this perspective, the Tables and Appendices in The Making of an American High School are not actually there to be read; they are, in fact, unreadable. They are simply there to be seen; their sheer presence in the text is what “counts.”

Wow, I’m impressed.  But wait for the closing flourish.

The Making of an American High School, within the conventions of its genre, is a modest and minor work, so thin the last chapter has to be fleshed out by a review of the past decade’s literature on the American high school. But the point is not to reprove or criticize Labaree. The Making of an American High School is a first book. It is or was a competent doctoral dissertation, with all the flaws of even a competent dissertation. That it was awarded the Outstanding Book Award for 1989 by the History of Education Society simply shows which way the historiographical winds are currently blowing in the United States.

Nuff said.  Or, to use the discourse of quantitative research, QED.  

So how do I react to this review, nearly three decades after it appeared?  Although it’s a bit unkind, I can’t say it’s unfair.  Let me hit on a few specifics in the analysis that resonated with me.

The tale of a fall from grace.  True.  It’s about a school established to shore up a shaky republic and promote civic virtue, which then became a selective institution for reproducing social advantage through the provision of elite credentials.  It’s all downhill from the 1840s to the present.

Markets as the bad guy.  Also true.  I framed the book around a tension between democratic politics and capitalist markets, with markets getting and keeping the upper hand over the years.  That’s a theme that has continued in my work ever since, though it has become somewhat more complex.  As Cohen pointed out, my definition of markets was hazy at best.  It’s not even clear that school diplomas played a major role in the job market for most of the 19th century, when skill levels in the workforce were actually declining while levels of schooling were rising.  The link between schooling and a good job did not become a major factor until the turn of the 20th century.

In my second book, How to Succeed in School without Really Learning, I was forced to reconsider the politics-markets dichotomy, which I outlined in the first chapter, drawing on an essay that remains my most cited publication.  Here I split the idea of credentials markets into two major components.  From one perspective, education is a public good, which provides society with the skills it needs, skills that benefit everyone including those who didn’t get a diploma.  From another, education is a private good, whose benefits accrue only to the degree holder.  I argued that the former constitutes a vision of schooling for social efficiency whereas the latter offers a vision of schooling for social mobility.  The old good guy from the first book, democratic politics, represented a vision of schooling for democratic equality, also a public good.  For many years, I ran with the continuing tension among these three largely incompatible goals as the defining force in shaping the politics of education. 

However, by the time I got to my last book, A Perfect Mess, I stumbled onto the idea that markets were in fact the good guy in at least one major way.  They were the shaping force in the evolution of the American system of higher education, which emerged from below in a competition among private colleges rather than being created and directly controlled from above by the state.  Turns out this gave the system a degree of autonomy that was highly functional in promoting innovation in teaching and research and that helped make it a dominant force in global higher ed.  State-dominated systems of higher education tend to be less effective in these ways.

The happy ending that doesn’t follow from the argument in the book.  Embarrassing but also true.  I have long argued that, before a book on education is published, the editor should delete the final chapter.  This is typically where the author pulls back from the weight of the preceding analysis, which usually demonstrates huge problems in education, and comes up with a totally incredible five-point plan for fixing the problem.  That’s sort of what I did here.  In my defense, it’s only one paragraph; and it doesn’t suggest that a happy ending will happen, only that it would be nice if it did.  But I do shudder reading it today, now that I’ve become more comfortable being a doomsayer about the prospects for fixing education.  To wit, my fourth book on the improbability of reform, Someone Has to Fail.

Deceptive rhetoric.  Also true.  The rhetorical move that strikes me as most telling now is not the way I waved the flag of markets or statistics, as Cohen argued, but another move he alluded to but didn’t pursue.  On the face of it, the book is the history of a single high school.  But that is not something that interested me or my readers.  I frame the book as an analysis of the American high school in general, its evolution from a small exclusive institution for preparing citizens to a large inclusive institution for credentialing workers.  But there’s really no way to make a credible argument that the Central case is representative of the whole.  In fact, it was quite unusual.

Most high schools in the 19th century were small additions to a town’s common schools, usually located in a room on the top floor of the grammar school, taught by the grammar school master, and organized coeducationally.  But for 50 years Central High School was the only high school in the second largest city in the country, and it remained very exclusive because of its rigorous entrance exam, its location in the most elegant educational structure in town (see the picture of its second building on the book’s cover), its authorization to grant college degrees, its teachers who were called professors, and its students who were all male.  In the first chapter I try to wave away that problem by arguing that the school is not representative but exemplary, serving as a model for where other high schools were headed.  Throughout the text I was able to maintain this fiction because of a quirk of the English language.  I kept referring to “the high school,” which left it ambiguous whether I was referring to Central or to the high school in general.  I was always directing the analysis toward the latter.  On reflection, I’m ok with this deception.  If you’re not pushing your data to the limits of credibility, you’re probably not telling a very interesting story.  I think the evolutionary arc of the high school system that I describe in the book still holds up, in general.

Using statistics as window dressing.  I wish.  This is a good news, bad news story.  The good news is that quantitative student data were critically important in establishing an important counterintuitive point.  In high school today, the best predictor of who will graduate is social class.  The effect is large and stable over time.  For Central in its first 80 years, however, class had no effect on chances for graduation.  The only factor that determined successful completion of a degree was a student’s grades in school.  It’s not that class was completely irrelevant.  The students who entered the school were heavily skewed toward the upper classes, since only these families could afford the opportunity cost of keeping their sons out of the workforce.  But once they were admitted, rich kids flunked out as much as poor kids if they didn’t keep up their grades.  Counter to anything I was expecting (or even desiring — I was looking to tell a Marxist story), Central was a meritocracy.  Kind of cool.

The bad news is that the quantitative data were not useful for making any other important points in the book.  The most interesting stuff, at least for me, came from the qualitative side.  But the amount of quantitative data I generated was huge, and it ate up at least two of the four years I spent working on the dissertation.  Sol Cohen complained that I had 15 tables in the book, but the dissertation had more like 45.  I wanted to include them all, on the grounds that I did the work so I wanted it to show in the end result; but the press said no.  The disjuncture between data and its significance finally and brutally came home to me when my friend David Cohen read my whole manuscript and reported this:  “It seems that all of your tables serve as a footnote for a single assertion: Central was meritocratic.”  Two years of my life for a single footnote.  Lord save me from ever making that mistake again.  Since then I have avoided gathering and analyzing quantitative data and made it a religion to look for shortcuts as I’m doing research.  Diligence in gathering data doesn’t necessarily pay off in significance of the results.
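
For readers curious about what a class-versus-grades comparison of this kind looks like in practice, here is a minimal sketch. It is not the analysis from the book or the dissertation; the file name, column names, and coding scheme are all hypothetical, offered only to illustrate the sort of test described above.

```python
# Hypothetical sketch: does social class or academic performance better
# predict graduation? The file and column names below are assumptions,
# not the book's actual data or variables.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per student, with graduated (0/1), class_rank
# (father's occupational class, coded numerically), and avg_grade
# (the student's mean grade while enrolled).
students = pd.read_csv("central_hs_students.csv")

# Logistic regression of graduation on social class alone,
# then on grades alone.
class_model = smf.logit("graduated ~ class_rank", data=students).fit()
grade_model = smf.logit("graduated ~ avg_grade", data=students).fit()

# If the pattern described above holds, the class coefficient will be
# small and statistically insignificant while the grade coefficient
# carries the predictive weight.
print(class_model.summary())
print(grade_model.summary())
```

A fuller analysis would of course put both predictors in a single model and control for entry cohort, but the basic contrast between the two is the point here.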

Ok, so I’ll leave it at that.  I hope you enjoyed watching me get flayed by a professional.  And I also hope there are some useful lessons buried in there somewhere.  


Panicking vs. Choking: The Different Ways that Amateurs and Professionals Fail

Professionals, by definition, are more skilled than amateurs in any given field, but they both experience failure.  And to an average observer, they appear to fail in similar ways.  The practitioner is moving along nicely in carrying out his or her craft — and then suddenly it all falls apart.  The golf ball flies off into the rough, the scalpel starts to shake, and the entire enterprise irretrievably collapses.  The result is utter disaster.  We tend to attribute the failure to something we call panicking or choking.  Recovery is nearly impossible.  It’s embarrassing to watch, but it’s also gratifying that it’s not happening to us.

In 2000, Malcolm Gladwell published a remarkably insightful piece in the New Yorker that makes a critical distinction between the two kinds of failure.  His thesis is this:  Amateurs panic but professionals choke.   

I have found his analysis very helpful in trying to understand the nature of professionalism, in everything from golfing to teaching.   It shows that the process of learning a profession is radically different from the process of practicing a profession.  What works for the learner is a disaster for the accomplished practitioner.  

Think about learning any complex new skill.  When you start to learn golf, for example, your instructor focuses your attention on key components of the craft.  You need expert help because none of these components comes naturally.  There’s the grip, the way you form your fingers around the handle of the club.  There’s the stance, the spread of your legs, and your position in relation to the ball.  And then there are the complex mechanics of the swing: the arc of your hands, the position and motion of the shoulders, the angle of your elbows, the turn of the hips, and the full-body follow-through to the end of the swing.  You can’t swing a golf club the way you swing a baseball bat or a tennis racket.  The craft is entirely different.  Falling back on the familiar is the opposite of what you need to succeed in the new sport.  You need to focus obsessively on the elements of the new craft.  Where are my hands, my elbows, my feet, my shoulders, my hips, my eyes?  Don’t do what feels natural; do what’s right.

That’s how you learn the skills.  But, at a certain point, to become an accomplished professional in the field requires you to abandon the elements of the craft and focus instead on the overall flow that results from these elements.  You don’t pay attention to the pieces of the swing; you pay attention to the overall rhythm of a righteous swing, which years of practice have given you a feel for.  The awkward elements of the craft have now come to feel natural, and you nurture that sense of naturalness in order to keep your practice flowing easily.  

If amateurs fail to keep focused on the particular pieces of the craft, they are likely to slip back into what feels more natural, like swinging a bat instead of a club.  At that point, everything falls apart.  The swing descends into chaos, the ball flies off at the wrong angle, if you hit it at all.  Your pulse races, your head buzzes, your ears ring.  You screwed up badly and you’re so overwhelmed by the feeling of failure that you can’t begin to figure out how to fix the problem.  In short, you panic.

For professionals, the road to failure is the opposite of that traversed by the amateur.  The problem starts when a shot goes awry.  This breaks the easy flow of the good swing.  The key to recovery is to find the rhythm again in the next shot, to draw on muscle memory to feel the naturalness of your highly practiced swing.  But if the natural feel doesn’t return quickly, if a second bad shot follows, you may lose confidence in your body’s ability to get back in the swing of your swing.  And you fall back on the elements that you learned back at the beginning.  You start thinking about your grip, the angle of your shoulders, placement of your elbows, rotation of your hips.  These were helpful when you were starting out in the profession, but you put them aside once you acquired the feel of the good swing.  Turning to them now is exactly what you don’t need, since it takes the smooth flow of the entire body and breaks it up into its component parts.  Your swing falls to pieces.  In short, you choke.

[Image: Rory McIlroy]

One of the things I like about the distinction between panicking and choking is that it helps us understand key characteristics of both the problems people have learning a profession and the problems they have in practicing a profession.  And in particular, it shows how difficult it is for accomplished professionals to teach newcomers. 

The beginner needs help in learning the basic elements of the trade.  But for professionals, focusing on the elements threatens their ability to maintain the flow of their own practice.  With good reason they worry that breaking the flow is an invitation to disaster.  It’s a pipeline to choking.

Consider the relationship between a student teacher and master teacher.  The novice needs coaching in the elements of teacher craft.  How should I use my voice, eyes, body, and facial expression to maintain control of the classroom?  How do I balance the competing demands of teaching in the full flow of a lesson?   What should I focus on now?  Controlling student behavior or encouraging participation, working on particular cognitive skills or social skills, taking advantage of opportunities posed by student questions or keeping momentum in the flow of the lesson?  

[Image: teacher struggling]

For an accomplished teacher, these are pieces of a holistic practice, which are difficult to address individually.  Also, the experience of learning the elements is now rather remote, and reliving that experience threatens to take you back to your first year in the classroom, when you were frequently prone to panic.  So often the safest course is for the master to tell the novice, “Watch how I do it.”  But this is not much help if you’re the novice, since what you’re watching is a display of teacherly expertise whose individual components disappear in the blur of action.  Both master and novice struggle to bridge the gap between learning the parts and practicing the whole.

Check out the way Gladwell spells out his analysis.  I think it will change the way you think about the process of learning a profession.

 

August 21 & 28, 2000

PERFORMANCE STUDIES

The Art of Failure:

Why some people choke and others panic 

Malcolm Gladwell

There was a moment, in the third and deciding set of the 1993 Wimbledon final, when Jana Novotna seemed invincible. She was leading 4-1 and serving at 40-30, meaning that she was one point from winning the game, and just five points from the most coveted championship in tennis. She had just hit a backhand to her opponent, Steffi Graf, that skimmed the net and landed so abruptly on the far side of the court that Graf could only watch, in flat-footed frustration. The stands at Center Court were packed. The Duke and Duchess of Kent were in their customary place in the royal box. Novotna was in white, poised and confident, her blond hair held back with a headband–and then something happened. She served the ball straight into the net. She stopped and steadied herself for the second serve–the toss, the arch of the back–but this time it was worse. Her swing seemed halfhearted, all arm and no legs and torso. Double fault. On the next point, she was slow to react to a high shot by Graf, and badly missed on a forehand volley. At game point, she hit an overhead straight into the net. Instead of 5-1, it was now 4-2. Graf to serve: an easy victory, 4-3. Novotna to serve. She wasn’t tossing the ball high enough. Her head was down. Her movements had slowed markedly. She double-faulted once, twice, three times. Pulled wide by a Graf forehand, Novotna inexplicably hit a low, flat shot directly at Graf, instead of a high crosscourt forehand that would have given her time to get back into position: 4-4. Did she suddenly realize how terrifyingly close she was to victory? Did she remember that she had never won a major tournament before? Did she look across the net and see Steffi Graf–Steffi Graf!–the greatest player of her generation?

On the baseline, awaiting Graf’s serve, Novotna was now visibly agitated, rocking back and forth, jumping up and down. She talked to herself under her breath. Her eyes darted around the court. Graf took the game at love; Novotna, moving as if in slow motion, did not win a single point: 5-4, Graf. On the sidelines, Novotna wiped her racquet and her face with a towel, and then each finger individually. It was her turn to serve. She missed a routine volley wide, shook her head, talked to herself. She missed her first serve, made the second, then, in the resulting rally, mis-hit a backhand so badly that it sailed off her racquet as if launched into flight. Novotna was unrecognizable, not an élite tennis player but a beginner again. She was crumbling under pressure, but exactly why was as baffling to her as it was to all those looking on. Isn’t pressure supposed to bring out the best in us? We try harder. We concentrate harder. We get a boost of adrenaline. We care more about how well we perform. So what was happening to her?      

At championship point, Novotna hit a low, cautious, and shallow lob to Graf. Graf answered with an unreturnable overhead smash, and, mercifully, it was over. Stunned, Novotna moved to the net. Graf kissed her twice. At the awards ceremony, the Duchess of Kent handed Novotna the runner-up’s trophy, a small silver plate, and whispered something in her ear, and what Novotna had done finally caught up with her. There she was, sweaty and exhausted, looming over the delicate white-haired Duchess in her pearl necklace. The Duchess reached up and pulled her head down onto her shoulder, and Novotna started to sob. 

Human beings sometimes falter under pressure. Pilots crash and divers drown. Under the glare of competition, basketball players cannot find the basket and golfers cannot find the pin. When that happens, we say variously that people have “panicked” or, to use the sports colloquialism, “choked.” But what do those words mean? Both are pejoratives. To choke or panic is considered to be as bad as to quit. But are all forms of failure equal? And what do the forms in which we fail say about who we are and how we think? We live in an age obsessed with success, with documenting the myriad ways by which talented people overcome challenges and obstacles. There is as much to be learned, though, from documenting the myriad ways in which talented people sometimes fail.      

“Choking” sounds like a vague and all-encompassing term, yet it describes a very specific kind of failure. For example, psychologists often use a primitive video game to test motor skills. They’ll sit you in front of a computer with a screen that shows four boxes in a row, and a keyboard that has four corresponding buttons in a row. One at a time, x’s start to appear in the boxes on the screen, and you are told that every time this happens you are to push the key corresponding to the box. According to Daniel Willingham, a psychologist at the University of Virginia, if you’re told ahead of time about the pattern in which those x’s will appear,  your reaction time in hitting the right key will improve dramatically. You’ll play the game very carefully for a few rounds, until you’ve learned the sequence, and then you’ll get faster and faster. Willingham calls this “explicit learning.” But suppose you’re not told that the x’s appear in a regular sequence, and even after playing the game for a while you’re not aware that there is a pattern. You’ll still get faster: you’ll learn the sequence unconsciously. Willingham calls that “implicit learning”–learning that takes place outside of awareness. These two learning systems are quite separate, based in different parts of the brain. Willingham says that when you are first taught something–say, how to hit a backhand or an overhead forehand–you think it through in a very deliberate, mechanical manner. But as you get better the implicit system takes over: you start to hit a backhand fluidly, without thinking. The basal ganglia, where implicit learning partially resides, are concerned with force and timing, and when that system kicks in you begin to develop touch and accuracy, the ability to hit a drop shot or place a serve at a hundred miles per hour. “This is something that is going to happen gradually,” Willingham says. “You hit several thousand forehands, after a while you may still be attending to it. But not very much. In the end, you don’t really notice what your hand is doing at all.”      

Under conditions of stress, however, the explicit system sometimes takes over. That’s what it means to choke. When Jana Novotna faltered at Wimbledon, it was because she began thinking about her shots again. She lost her fluidity, her touch. She double-faulted on her serves and mis-hit her overheads, the shots that demand the greatest sensitivity in force and timing. She seemed like a different person–playing with the slow, cautious deliberation of a beginner–because, in a sense, she was a beginner again: she was relying on a learning system that she hadn’t used to hit serves and overhead forehands and volleys since she was first taught tennis, as a child. The same thing has happened to Chuck Knoblauch, the New York Yankees’ second baseman, who inexplicably has had trouble throwing the ball to first base. Under the stress of playing in front of forty thousand fans at Yankee Stadium, Knoblauch finds himself reverting to explicit mode, throwing like a Little Leaguer again.      

Panic is something else altogether. Consider the following account of a scuba-diving accident, recounted to me by Ephimia Morphew, a human-factors specialist at NASA: “It was an open-water certification dive, Monterey Bay, California, about ten years ago. I was nineteen. I’d been diving for two weeks. This was my first time in the open ocean without the instructor. Just my buddy and I. We had to go about forty feet down, to the bottom of the ocean, and do an exercise where we took our regulators out of our mouth, picked up a spare one that we had on our vest, and practiced breathing out of the spare. My buddy did hers. Then it was my turn. I removed my regulator. I lifted up my secondary regulator. I put it in my mouth, exhaled, to clear the lines, and then I inhaled, and, to my surprise, it was water. I inhaled water. Then the hose that connected that mouthpiece to my tank, my air source, came unlatched and air from the hose came exploding into my face.      

“Right away, my hand reached out for my partner’s air supply, as if I was going to rip it out. It was without thought. It was a physiological response. My eyes are seeing my hand do something irresponsible. I’m fighting with myself. Don’t do it. Then I searched my mind for what I could do. And nothing came to mind. All I could remember was one thing: If you can’t take care of yourself, let your buddy take care of you. I let my hand fall back to my side, and I just stood there.”      

This is a textbook example of panic. In that moment, Morphew stopped thinking. She forgot that she had another source of air, one that worked perfectly well and that, moments before, she had taken out of her mouth. She forgot that her partner had a working air supply as well, which could easily be shared, and she forgot that grabbing her partner’s regulator would imperil both of them. All she had was her most basic instinct: get air. Stress wipes out short-term memory. People with lots of experience tend not to panic, because when the stress suppresses their short-term memory they still have some residue of experience to draw on. But what did a novice like Morphew have? I searched my mind for what I could do. And nothing came to mind.

Panic also causes what psychologists call perceptual narrowing. In one study, from the early seventies, a group of subjects were asked to perform a visual acuity task while undergoing what they thought was a sixty-foot dive in a pressure chamber. At the same time, they were asked to push a button whenever they saw a small light flash on and off in their peripheral vision. The subjects in the pressure chamber had much higher heart rates than the control group, indicating that they were under stress. That stress didn’t affect their accuracy at the visual-acuity task, but they were only half as good as the control group at picking up the peripheral light. “You tend to focus or obsess on one thing,” Morphew says. “There’s a famous airplane example, where the landing light went off, and the pilots had no way of knowing if the landing gear was down. The pilots were so focussed on that light that no one noticed the autopilot had been disengaged, and they crashed the plane.” Morphew reached for her buddy’s air supply because it was the only air supply she could see.      

Panic, in this sense, is the opposite of choking. Choking is about thinking too much. Panic is about thinking too little. Choking is about loss of instinct. Panic is reversion to instinct. They may look the same, but they are worlds apart. 

Why does this distinction matter? In some instances, it doesn’t much. If you lose a close tennis match, it’s of little moment whether you choked or panicked; either way, you lost. But there are clearly cases when how failure happens is central to understanding why failure happens.      

Take the plane crash in which John F. Kennedy, Jr., was killed last summer. The details of the flight are well known. On a Friday evening last July, Kennedy took off with his wife and sister-in-law for Martha’s Vineyard. The night was hazy, and Kennedy flew along the Connecticut coastline, using the trail of lights below him as a guide. At Westerly, Rhode Island, he left the shoreline, heading straight out over Rhode Island Sound, and at that point, apparently disoriented by the darkness and haze, he began a series of curious maneuvers: He banked his plane to the right, farther out into the ocean, and then to the left. He climbed and descended. He sped up and slowed down. Just a few miles from his destination, Kennedy lost control of the plane, and it crashed into the ocean.      

Kennedy’s mistake, in technical terms, was that he failed to keep his wings level. That was critical, because when a plane banks to one side it begins to turn and its wings lose some of their vertical lift. Left unchecked, this process accelerates. The angle of the bank increases, the turn gets sharper and sharper, and the plane starts to dive toward the ground in an ever-narrowing corkscrew. Pilots call this the graveyard spiral. And why didn’t Kennedy stop the dive? Because, in times of low visibility and high stress, keeping your wings level–indeed, even knowing whether you are in a graveyard spiral–turns out to be surprisingly difficult. Kennedy failed under pressure.      

Had Kennedy been flying during the day or with a clear moon, he would have been fine. If you are the pilot, looking straight ahead from the cockpit, the angle of your wings will be obvious from the straight line of the horizon in front of you. But when it’s dark outside the horizon disappears. There is no external measure of the plane’s bank. On the ground, we know whether we are level even when it’s dark, because of the motion-sensing mechanisms in the inner ear. In a spiral dive, though, the effect of the plane’s G-force on the inner ear means that the pilot feels perfectly level even if his plane is not. Similarly, when you are in a jetliner that is banking at thirty degrees after takeoff, the book on your neighbor’s lap does not slide into your lap, nor will a pen on the floor roll toward the “down” side of the plane. The physics of flying is such that an airplane in the midst of a turn always feels perfectly level to someone inside the cabin.      
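One simplified way to see why (a rough sketch, assuming a smooth, coordinated turn rather than an accelerating spiral): the force a passenger feels is the combination of gravity and the turn's centripetal acceleration, and in a coordinated bank that combined force points straight down through the cabin floor, with no sideways component at all. Only its strength changes, as the load factor

\[
n = \frac{1}{\cos\varphi}, \qquad n(30^\circ) \approx 1.15, \qquad n(60^\circ) = 2,
\]

where \(\varphi\) is the bank angle. At the thirty-degree bank of a jetliner after takeoff, you are pressed into your seat about fifteen per cent harder than usual, but nothing pushes you sideways, which is why the book stays on your neighbor's lap and the cabin still feels level.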

This is a difficult notion, and to understand it I went flying with William Langewiesche, the author of a superb book on flying, “Inside the Sky.” We met at San Jose Airport, in the jet center where the Silicon Valley billionaires keep their private planes. Langewiesche is a rugged man in his forties, deeply tanned, and handsome in the way that pilots (at least since the movie “The Right Stuff”) are supposed to be. We took off at dusk, heading out toward Monterey Bay, until we had left the lights of the coast behind and night had erased the horizon. Langewiesche let the plane bank gently to the left. He took his hands off the stick. The sky told me nothing now, so I concentrated on the instruments. The nose of the plane was dropping. The gyroscope told me that we were banking, first fifteen, then thirty, then forty-five degrees. “We’re in a spiral dive,” Langewiesche said calmly. Our airspeed was steadily accelerating, from a hundred and eighty to a hundred and ninety to two hundred knots. The needle on the altimeter was moving down. The plane was dropping like a stone, at three thousand feet per minute. I could hear, faintly, a slight increase in the hum of the engine, and the wind noise as we picked up speed. But if Langewiesche and I had been talking I would have caught none of that. Had the cabin been unpressurized, my ears might have popped, particularly as we went into the steep part of the dive. But beyond that? Nothing at all. In a spiral dive, the G-load–the force of inertia–is normal. As Langewiesche puts it, the plane likes to spiral-dive. The total time elapsed since we started diving was no more than six or seven seconds. Suddenly, Langewiesche straightened the wings and pulled back on the stick to get the nose of the plane up, breaking out of the dive. Only now did I feel the full force of the G-load, pushing me back in my seat. “You feel no G-load in a bank,” Langewiesche said. “There’s nothing more confusing for the uninitiated.”       

I asked Langewiesche how much longer we could have fallen. “Within five seconds, we would have exceeded the limits of the airplane,” he replied, by which he meant that the force of trying to pull out of the dive would have broken the plane into pieces. I looked away from the instruments and asked Langewiesche to spiral-dive again, this time without telling me. I sat and waited. I was about to tell Langewiesche that he could start diving anytime, when, suddenly, I was thrown back in my chair. “We just lost a thousand feet,” he said.      

This inability to sense, experientially, what your plane is doing is what makes night flying so stressful. And this was the stress that Kennedy must have felt when he turned out across the water at Westerly, leaving the guiding lights of the Connecticut coastline behind him. A pilot who flew into Nantucket that night told the National Transportation Safety Board that when he descended over Martha’s Vineyard he looked down and there was “nothing to see. There was no horizon and no light…. I thought the island might [have] suffered a power failure.” Kennedy was now blind, in every sense, and he must have known the danger he was in. He had very little experience in flying strictly by instruments. Most of the time when he had flown up to the Vineyard the horizon or lights had still been visible. That strange, final sequence of maneuvers was Kennedy’s frantic search for a clearing in the haze. He was trying to pick up the lights of Martha’s Vineyard, to restore the lost horizon. Between the lines of the National Transportation Safety Board’s report on the crash, you can almost feel his desperation:

About 2138 the target began a right turn in a southerly direction. About 30 seconds later, the target stopped its descent at 2200 feet and began a climb that lasted another 30 seconds. During this period of time, the target stopped the turn, and the airspeed decreased to about 153 KIAS. About 2139, the target leveled off at 2500 feet and flew in a southeasterly direction. About 50 seconds later, the target entered a left turn and climbed to 2600 feet. As the target continued in the left turn, it began a descent that reached a rate of about 900 fpm.

But was he choking or panicking? Here the distinction between those two states is critical. Had he choked, he would have reverted to the mode of explicit learning. His movements in the cockpit would have become markedly slower and less fluid. He would have gone back to the mechanical, self-conscious application of the lessons he had first received as a pilot–and that might have been a good thing. Kennedy needed to think, to concentrate on his instruments, to break away from the instinctive flying that served him when he had a visible horizon.      

But instead, from all appearances, he panicked. At the moment when he needed to remember the lessons he had been taught about instrument flying, his mind–like Morphew’s when she was underwater–must have gone blank. Instead of reviewing the instruments, he seems to have been focussed on one question: Where are the lights of Martha’s Vineyard? His gyroscope and his other instruments may well have become as invisible as the peripheral lights in the underwater-panic experiments. He had fallen back on his instincts–on the way the plane felt–and in the dark, of course, instinct can tell you nothing. The N.T.S.B. report says that the last time the Piper’s wings were level was seven seconds past 9:40, and the plane hit the water at about 9:41, so the critical period here was less than sixty seconds. At twenty-five seconds past the minute, the plane was tilted at an angle greater than forty-five degrees. Inside the cockpit it would have felt normal. At some point, Kennedy must have heard the rising wind outside, or the roar of the engine as it picked up speed. Again, relying on instinct, he might have pulled back on the stick, trying to raise the nose of the plane. But pulling back on the stick without first levelling the wings only makes the spiral tighter and the problem worse. It’s also possible that Kennedy did nothing at all, and that he was frozen at the controls, still frantically searching for the lights of the Vineyard, when his plane hit the water. Sometimes pilots don’t even try to make it out of a spiral dive. Langewiesche calls that “one G all the way down.” 

What happened to Kennedy that night illustrates a second major difference between panicking and choking. Panicking is conventional failure, of the sort we tacitly understand. Kennedy panicked because he didn’t know enough about instrument flying. If he’d had another year in the air, he might not have panicked, and that fits with what we believe–that performance ought to improve with experience, and that pressure is an obstacle that the diligent can overcome. But choking makes little intuitive sense. Novotna’s problem wasn’t lack of diligence; she was as superbly conditioned and schooled as anyone on the tennis tour. And what did experience do for her? In 1995, in the third round of the French Open, Novotna choked even more spectacularly than she had against Graf, losing to Chanda Rubin after surrendering a 5-0 lead in the third set. There seems little doubt that part of the reason for her collapse against Rubin was her collapse against Graf–that the second failure built on the first, making it possible for her to be up 5-0 in the third set and yet entertain the thought I can still lose. If panicking is conventional failure, choking is paradoxical failure.      

Claude Steele, a psychologist at Stanford University, and his colleagues have done a number of experiments in recent years looking at how certain groups perform under pressure, and their findings go to the heart of what is so strange about choking. Steele and Joshua Aronson found that when they gave a group of Stanford undergraduates a standardized test and told them that it was a measure of their intellectual ability, the white students did much better than their black counterparts. But when the same test was presented simply as an abstract laboratory tool, with no relevance to ability, the scores of blacks and whites were virtually identical. Steele and Aronson attribute this disparity to what they call “stereotype threat”: when black students are put into a situation where they are directly confronted with a stereotype about their group–in this case, one having to do with intelligence–the resulting pressure causes their performance to suffer.      

Steele and others have found stereotype threat at work in any situation where groups are depicted in negative ways. Give a group of qualified women a math test and tell them it will measure their quantitative ability and they’ll do much worse than equally skilled men will; present the same test simply as a research tool and they’ll do just as well as the men. Or consider a handful of experiments conducted by one of Steele’s former graduate students, Julio Garcia, a professor at Tufts University. Garcia gathered together a group of white, athletic students and had a white instructor lead them through a series of physical tests: to jump as high as they could, to do a standing broad jump, and to see how many pushups they could do in twenty seconds. The instructor then asked them to do the tests a second time, and, as you’d expect, Garcia found that the students did a little better on each of the tasks the second time around. Then Garcia ran a second group of students through the tests, this time replacing the instructor between the first and second trials with an African-American. Now the white students ceased to improve on their vertical leaps. He did the experiment again, only this time he replaced the white instructor with a black instructor who was much taller and heavier than the previous black instructor. In this trial, the white students actually jumped less high than they had the first time around. Their performance on the pushups, though, was unchanged in each of the conditions. There is no stereotype, after all, that suggests that whites can’t do as many pushups as blacks. The task that was affected was the vertical leap, because of what our culture says: white men can’t jump.      

It doesn’t come as news, of course, that black students aren’t as good at test-taking as white students, or that white students aren’t as good at jumping as black students. The problem is that we’ve always assumed that this kind of failure under pressure is panic. What is it we tell underperforming athletes and students? The same thing we tell novice pilots or scuba divers: to work harder, to buckle down, to take the tests of their ability more seriously. But Steele says that when you look at the way black or female students perform under stereotype threat you don’t see the wild guessing of a panicked test taker. “What you tend to see is carefulness and second-guessing,” he explains. “When you go and interview them, you have the sense that when they are in the stereotype-threat condition they say to themselves, ‘Look, I’m going to be careful here. I’m not going to mess things up.’ Then, after having decided to take that strategy, they calm down and go through the test. But that’s not the way to succeed on a standardized test. The more you do that, the more you will get away from the intuitions that help you, the quick processing. They think they did well, and they are trying to do well. But they are not.” This is choking, not panicking. Garcia’s athletes and Steele’s students are like Novotna, not Kennedy. They failed because they were good at what they did: only those who care about how well they perform ever feel the pressure of stereotype threat. The usual prescription for failure–to work harder and take the test more seriously–would only make their problems worse.      

That is a hard lesson to grasp, but harder still is the fact that choking requires us to concern ourselves less with the performer and more with the situation in which the performance occurs. Novotna herself could do nothing to prevent her collapse against Graf. The only thing that could have saved her is if–at that critical moment in the third set–the television cameras had been turned off, the Duke and Duchess had gone home, and the spectators had been told to wait outside. In sports, of course, you can’t do that. Choking is a central part of the drama of athletic competition, because the spectators have to be there–and the ability to overcome the pressure of the spectators is part of what it means to be a champion. But the same ruthless inflexibility need not govern the rest of our lives. We have to learn that sometimes a poor performance reflects not the innate ability of the performer but the complexion of the audience; and that sometimes a poor test score is the sign not of a poor student but of a good one. 

Through the first three rounds of the 1996 Masters golf tournament, Greg Norman held a seemingly insurmountable lead over his nearest rival, the Englishman Nick Faldo. He was the best player in the world. His nickname was the Shark. He didn’t saunter down the fairways; he stalked the course, blond and broad-shouldered, his caddy behind him, struggling to keep up. But then came the ninth hole on the tournament’s final day. Norman was paired with Faldo, and the two hit their first shots well. They were now facing the green. In front of the pin, there was a steep slope, so that any ball hit short would come rolling back down the hill into oblivion. Faldo shot first, and the ball landed safely long, well past the cup.       

Norman was next. He stood over the ball. “The one thing you guard against here is short,” the announcer said, stating the obvious. Norman swung and then froze, his club in midair, following the ball in flight. It was short. Norman watched, stone-faced, as the ball rolled thirty yards back down the hill, and with that error something inside of him broke.      

At the tenth hole, he hooked the ball to the left, hit his third shot well past the cup, and missed a makeable putt. At eleven, Norman had a three-and-a-half-foot putt for par–the kind he had been making all week. He shook out his hands and legs before grasping the club, trying to relax. He missed: his third straight bogey. At twelve, Norman hit the ball straight into the water. At thirteen, he hit it into a patch of pine needles. At sixteen, his movements were so mechanical and out of synch that, when he swung, his hips spun out ahead of his body and the ball sailed into another pond. At that, he took his club and made a frustrated scythelike motion through the grass, because what had been obvious for twenty minutes was now official: he had fumbled away the chance of a lifetime.      

Faldo had begun the day six strokes behind Norman. By the time the two started their slow walk to the eighteenth hole, through the throng of spectators, Faldo had a four-stroke lead. But he took those final steps quietly, giving only the smallest of nods, keeping his head low. He understood what had happened on the greens and fairways that day. And he was bound by the particular etiquette of choking, the understanding that what he had earned was something less than a victory and what Norman had suffered was something less than a defeat.

When it was all over, Faldo wrapped his arms around Norman. “I don’t know what to say — I just want to give you a hug,” he whispered, and then he said the only thing you can say to a choker: “I feel horrible about what happened. I’m so sorry.” With that, the two men began to cry.

Posted in Academic writing, Higher Education, History of education

The Lust for Academic Fame: America’s Engine for Scholarly Production

This post is an analysis of the engine for scholarly production in American higher education.  The issue is that the university is a unique work setting in which the usual organizational incentives don't apply.  Administrators can't offer much in the way of power and money as rewards for productive faculty, and they also can't do much to punish unproductive faculty who have tenure.  Yet in spite of this, scholars keep cranking out publications at a furious rate.  My argument is that the primary motive for publication is the lust for academic fame.

The piece was originally published in Aeon in December, 2018.

pile of books
Photo by Pixabay on Pexels.com

Gold among the dross

Academic research in the US is unplanned, exploitative and driven by a lust for glory. The result is the envy of the world

David F. Labaree

The higher education system is a unique type of organisation with its own way of motivating productivity in its scholarly workforce. It doesn’t need to compel professors to produce scholarship because they choose to do it on their own. This is in contrast to the standard structure for motivating employees in bureaucratic organisations, which relies on manipulating two incentives: fear and greed. Fear works by holding the threat of firing over the heads of workers in order to ensure that they stay in line: Do it my way or you’re out of here. Greed works by holding the prospect of pay increases and promotions in front of workers in order to encourage them to exhibit the work behaviours that will bring these rewards: Do it my way and you’ll get what’s yours.

Yes, in the United States contingent faculty can be fired at any time, and permanent faculty can be fired at the point of tenure. But, once tenured, there’s little other than criminal conduct or gross negligence that can threaten your job. And yes, most colleges do have merit pay systems that reward more productive faculty with higher salaries. But the differences are small – between the standard 3 per cent raise and a 4 per cent merit increase. Even though gaining consistent above-average raises can compound annually into substantial differences over time, the immediate rewards are pretty underwhelming. Not the kind of incentive that would motivate a major expenditure of effort in a given year – such as the kind that operates on Wall Street, where earning a million-dollar bonus is a real possibility. Academic administrators – chairs, deans, presidents – just don’t have this kind of power over faculty. It’s why we refer to academic leadership as an exercise in herding cats. Deans can ask you to do something, but they really can’t make you do it.

This situation is the norm for systems of higher education in most liberal democracies around the world. In more authoritarian settings, the incentives for faculty are skewed by particular political priorities, and in part for these reasons the institutions in those settings tend to be consigned to the lower tiers of international rankings. Scholarly autonomy is a defining characteristic of universities higher on the list.

If the usual extrinsic incentives of fear and greed don’t apply to academics, then what does motivate them to be productive scholars? One factor, of course, is that this population is highly self-selected. People don’t become professors in order to gain power and money. They enter the role primarily because of a deep passion for a particular field of study. They find that scholarship is a mode of work that is intrinsically satisfying. It’s more a vocation than a job. And these elements tend to be pervasive in most of the world’s universities.

But I want to focus on an additional powerful motivation that drives academics, one that we don’t talk about very much. Once launched into an academic career, faculty members find their scholarly efforts spurred on by more than a love of the work. We in academia are motivated by a lust for glory.

We want to be recognised for our academic accomplishments by earning our own little pieces of fame. So we work assiduously to accumulate a set of merit badges over the course of our careers, which we then proudly display on our CVs. This situation is particularly pervasive in the US system of higher education, which is organised more by the market than by the state. Market systems are especially prone to the accumulation of distinctions that define your position in the hierarchy. But European and other scholars are also engaged in a race to pick up honours and add lines to their CVs. It’s the universal obsession of the scholarly profession.

Take one prominent case in point: the endowed chair. A named professorship is a very big deal in the academic status order, a (relatively) scarce honour that supposedly demonstrates to peers that you’re a scholar of high accomplishment. It does involve money, but the chair-holder often sees little of it. A donor provides an endowment for the chair, which pays your salary and benefits, thus taking these expenses out of the operating budget – a big plus for the department, which saves a lot of money in the deal. And some chairs bring with them extra money that goes to the faculty member to pay for research expenses and travel.

But more often than not, the chair brings the occupant nothing at all but an honorific title, which you can add to your signature: the Joe Doakes Professor of Whatever. Once these chairs are in existence as permanent endowments, they never go away; instead they circulate among senior faculty. You hold the chair until you retire, and then it goes to someone else. In my own school, Stanford University, when the title passes to a new faculty member, that person receives an actual chair – one of those uncomfortable black wooden university armchairs bearing the school logo. On the back is a brass plaque announcing that ‘[Your Name] is the Joe Doakes Professor’. When you retire, they take away the title and leave you the physical chair. That’s it. It sounds like a joke – all you get to keep is this unusable piece of furniture – but it’s not. And faculty will kill to get this kind of honour.

This being the case, the academic profession requires a wide array of other forms of recognition that are more easily attainable and that you can accumulate the way you can collect Fabergé eggs. And they’re about as useful. Let us count the kinds of merit badges that are within the reach of faculty:

  • publication in high-impact journals and prestigious university presses;
  • named fellowships;
  • membership on review committees for awards and fellowships;
  • membership on editorial boards of journals;
  • journal editorships;
  • offices in professional organisations, which conveniently rotate on an annual basis and thus increase accessibility (in small societies, nearly everyone gets a chance to be president);
  • administrative positions in your home institution;
  • committee chairs;
  • a large number of awards of all kinds – for teaching, advising, public service, professional service, and so on: the possibilities are endless;
  • awards that particularly proliferate in the zone of scholarly accomplishment – best article/book of the year in a particular subfield by a senior/junior scholar; early career/lifetime-career achievement; and so on.

Each of these honours tells the academic world that you are the member of a purportedly exclusive club. At annual meetings of professional organisations, you can attach brightly coloured ribbons to your name tag that tell everyone you’re an officer or fellow of that organisation, like the badges that adorn military dress uniforms. As in the military, you can never accumulate too many of these academic honours. In fact, success breeds more success, as your past tokens of recognition demonstrate your fitness for future tokens of recognition.

Academics are unlike the employees of most organisations in that they fight over symbolic rather than material objects of aspiration, but they are like other workers in that they too are motivated by fear and greed. Instead of competing over power and money, they compete over respect. So far I’ve been focusing on professors’ greedy pursuit of various kinds of honours. But, if anything, fear of dishonour is an even more powerful motive for professorial behaviour. I aspire to gain the esteem of my peers but I’m terrified of earning their scorn.

Lurking in the halls of every academic department are a few furtive figures of scholarly disrepute. They’re the professors who are no longer publishing in academic journals, who have stopped attending academic conferences, and who teach classes that draw on the literature of yesteryear. Colleagues quietly warn students to avoid these academic ghosts, and administrators try to assign them courses where they will do the least harm. As an academic, I might be eager to pursue tokens of merit, but I am also desperate to avoid being lumped together with the department’s walking dead. Better to be an academic mediocrity, publishing occasionally in second-rate journals, than to be your colleagues’ archetype of academic failure.

The result of all this pursuit of honour and retreat from dishonour is a self-generating machine for scholarly production. No administrator needs to tell us to do it, and no one needs to dangle incentives in front of our noses as motivation. The pressure to publish and demonstrate academic accomplishment comes from within. College faculties become self-sustaining engines of academic production, in which we drive ourselves to demonstrate scholarly achievement without the administration needing to lift a finger or spend a dollar. What could possibly go wrong with such a system?

 

One problem is that faculty research productivity varies significantly according to what tier of the highly stratified structure of higher education professors find themselves in. Compared with systems of higher education in other countries, the US system is organised into a hierarchy of institutions that are strikingly different from each other. The top tier is occupied by the 115 universities that the Carnegie Classification labels as having the highest research activity, which represents only 2.5 per cent of the 4,700 institutions that grant college degrees. The next tier is doctoral universities with less of a research orientation, which account for 4.7 per cent of institutions. The third is an array of master’s level institutions often referred to as comprehensive universities, which account for 16 per cent. The fourth is baccalaureate institutions (liberal arts colleges), which account for 21 per cent. The fifth is two-year colleges, which account for 24 per cent. (The remaining 32 per cent are small specialised institutions that enrol only 5 per cent of all students.)

The number of publications by faculty members declines sharply as you move down the tiers of the system. One study shows how this works for professors in economics. The total number of refereed journal articles published per faculty member over the course of a career was 18.4 at research universities; 8.1 at comprehensive universities; 4.9 at liberal arts colleges; and 3.1 at all others. The decline in productivity is also sharply defined within the category of research universities. Another study looked at the top 94 institutions ranked by per-capita publications per year between 1991 and 1993. At the number-one university, average production was 12.7 per person per year; at number 20, it dropped off sharply to 4.6; at number 60, it was 2.4; and at number 94, it was 0.5.

Only 20 per cent of faculty serve at the most research-intensive universities (the top tier), where scholarly productivity is the highest. As we can see, the lowest end of even this top sliver of US universities has faculty who are publishing an average of only one article every two years. The other 80 per cent are presumably publishing even more rarely than this, if indeed they are publishing at all. As a result, it seems that the incentive system for spurring faculty research productivity operates primarily at the very top levels of the institutional hierarchy. So why am I making such a big deal about US professors as self-motivated scholars?

The most illuminating way to understand the faculty incentive to publish is to look at the system from the point of view of the newly graduating PhD who is seeking to find a faculty position. These prospective scholars face some daunting mathematics. As we have seen, the 115 high-research universities produce the majority of research doctorates, but 80 per cent of the jobs are at lower-level institutions. The most likely jobs are not at research universities but at comprehensive universities and four-year institutions. So most doctoral graduates entering the professoriate experience dramatic downward mobility.

It’s actually even worse than that. One study of sociology graduates shows that departments ranked in the top five select the majority of their faculty from top-five departments, but most top-five graduates ended up in institutions below the rank of 20. And a lot of prospective faculty never find a position at all. A 1999 study showed that, among recent grads who sought to become professors, only two-thirds had such a position after 10 years, and only half of these had earned tenure. And many of those who do find teaching positions are working part-time, a category that in 2005 accounted for 48 per cent of all college faculty.

The prospect of a dramatic drop in academic status and the possibility of failing to find any academic job do a lot to concentrate the mind of the recent doctoral graduate. Fear of falling compounded by fear of total failure works wonders in motivating novice scholars to become flywheels of productivity. From their experience in grad school, they know that life at the highest level of the system is very good for faculty, but the good times fade fast as you move to lower levels. At every step down the academic ladder, the pay is less, the teaching loads are higher, graduate students are fewer, research support is less, and student skills are lower.

In a faculty system where academic status matters more than material benefits, the strongest signal of the status you have as a professor is the institution where you work. Your academic identity is strongly tied to your letterhead. And in light of the kind of institution where most new professors find themselves, they start hearing a loud, clear voice saying: ‘I deserve better.’

So the mandate is clear. As a grad student, you need to write your way to an academic job. And when you get a job at an institution far down the hierarchy, you need to write your way to a better job. You experience a powerful incentive to claw your way back up the academic ladder to an institution as close as possible to the one that recently graduated you. The incentive to publish is baked in from the very beginning.

One result of this Darwinian struggle to regain one’s rightful place at the top of the hierarchy is that a large number of faculty fall by the wayside without attaining their goal. Dashed dreams are the norm for large numbers of actors. This can leave a lot of bitter people occupying the middle and lower tiers of the system, and it can saddle students with professors who would really rather be somewhere else. That’s a high cost for the process that supports the productivity of scholars at the system’s pinnacle.

 

Another potential problem with my argument about the self-generating incentive for professors to publish is that the work produced by scholars is often distinguished more by its quantity than by its quality. Put another way, a lot of the work that appears in print doesn't seem worth the effort required to read it, much less to produce it. Under these circumstances, the value of the incentive structure seems lacking.

Consider some of the ways in which contemporary academic production promotes quantity over quality. One familiar technique is known as ‘salami slicing’. The idea here is simple. Take one study and divide it up into pieces that can each be published separately, so it leads to multiple entries in your CV. The result is an accumulation of trivial bits of a study instead of a solid contribution to the literature.

Another approach is to inflate co-authorship. Multiple authors make sense in some ways. Large projects often involve a large number of scholars and, in the sciences in particular, a long list of authors is de rigueur. Fine, as long as everyone in the list made a significant contribution to research. But often co-authorship comes for reasons of power rather than scholarly contribution. It has become normal for anyone who compiled a dataset to demand co-authorship for any papers that draw on the data, even if the data-owner added nothing to the analysis in the paper. Likewise, the principal investigator of a project might insist on being included in the author list for any publications that come from this project. More lines on the CV.

Yet another way to increase the number of publications is to increase the number of journals. By one count, as of 2014 there were 28,100 scholarly peer-reviewed journals. Consider the mathematics. There are about 1 million faculty members at US colleges and universities at the BA level and higher, so that means there are about 36 prospective authors for each of these journals. A lot of these enterprises act as club journals. The members of a particular sub-area of a sub-field set up a journal where members of the club engage in a practice that political scientists call log-rolling. I review your paper and you review mine, so everyone gets published. Edited volumes work much the same way. I publish your paper in my book, and you publish mine in yours.

A lot of journal articles are also written in a highly formulaic fashion, which makes it easy to produce lots of papers without breaking an intellectual sweat. The standard model for this kind of writing is known as IMRaD. This mnemonic represents the four canonical sections for every paper: introduction (what's it about and what's the literature behind it?); methods (how did I do it?); results (what did I find?); and discussion (what does it mean?). All you have to do as a writer is to write the same paper over and over, introducing bits of new content into the tried and true formula.

The result of all this is that the number of scholarly publications is enormous and growing daily. One estimate shows that, since the first science papers were published in the 1600s, the total number of papers in science alone passed the 50 million mark in 2009; 2.5 million new science papers are published each year. How many of them do you think are worth reading? How many make a substantive contribution to the field?

 

OK, so I agree. A lot of scholarly publications – maybe most such publications – are less than stellar. Does this matter? In one sense, yes. It’s sad to see academic scholarship fall into a state where the accumulation of lines on a CV matters more than producing quality work. And think of all the time wasted reviewing papers that should never have been written, and think of how this clutters and trivialises the literature with contributions that don’t contribute.

But – hesitantly – I suggest that the incentive system for faculty publication still provides net benefits for both academy and society. I base this hope on my own analysis of the nature of the US academic system itself. Keep in mind that US higher education is a system without a plan. No one designed it and no one oversees its operation. It’s an emergent structure that arose in the 19th century under unique conditions in the US – when the market was strong, the state was weak, and the church was divided.

Under these circumstances, colleges emerged as private not-for-profit enterprises that had a state charter but little or no state funding. And, for the most part, they arose for reasons that had less to do with higher learning than with the extrinsic benefits a college could bring. As a result, the system grew from the bottom up. By the time state governments started putting up their own institutions, and the federal government started funding land-grant colleges, this market-based system was already firmly in place. Colleges were relatively autonomous enterprises that had found a way to survive without steady support from either church or state. They had to attract and retain students in order to bring in tuition dollars, and they had to make themselves useful both to these students and to elites in the local community, both of whom would then make donations to continue the colleges in operation. This autonomy was an accident, not a plan, but by the 20th century it became a major source of strength. It promoted a system that was entrepreneurial and adaptive, able to take advantage of possibilities in the environment. More responsive to consumers and community than to the state, institutions managed to mitigate the kind of top-down governance that might have stifled the system’s creativity.

The point is this: compared with planned organisational structures, emergent structures are inefficient at producing socially useful results. They’re messy by nature, and they pursue their own interests rather than following directions from above according to a plan. But as we have seen with market-based economies compared with state-planned economies, the messy approach can be quite beneficial. Entrepreneurs in the economy pursue their own profit rather than trying to serve the public good, but the side-effect of their activities is often to provide such benefits inadvertently, by increasing productivity and improving the general standard of living. A similar argument can be made about the market-based system of US higher education. Maybe it’s worth tolerating the gross inefficiency of a university system that is charging off in all directions, with each institution trying to advance itself in competition with the others. The result is a system that is the envy of the world, a world where higher education is normally framed as a pure state function under the direct control of the state education ministry.

This analysis applies as well to the professoriate. The incentive structure for US faculty encourages individual professors to be entrepreneurial in pursuing their academic careers. They need to publish in order to win honours for themselves and to avoid dishonour. As a result, they end up publishing a lot of work that is more useful to their own advancement (lines on a CV) than to the larger society. Also, following from the analysis of the first problem I introduced, an additional cost of this system is the large number of faculty who fall by the wayside in the effort to write their way into a better job. The success of the system of scholarly production at the top is based on the failed dreams of most of the participants.

But maybe it’s worth tolerating a high level of dross in the effort to produce scholarly gold – even if this is at the expense of many of the scholars themselves. Planned research production, operating according to mandates and incentives descending from above, is no more effective at producing the best scholarship than are five-year plans in producing the best economic results. At its best, the university is a place that gives maximum freedom for faculty to pursue their interests and passions in the justified hope that they will frequently come up with something interesting and possibly useful, even if this value is not immediately apparent. They’re institutions that provide answers to problems that haven’t yet developed, storing up both the dross and the gold until such time as we can determine which is which.

 

Posted in Writing

The Art of the Take-Down: Hostile Book Reviews, Pt. 2

Recently I did a post on the art of the take-down — when a skillful writer demolishes a book with wit and literary precision.  Sometimes the target is the subject of the review.  More often, the target is the book's author, in which case the review becomes a lesson for the reader on what the book in question could have been if the author had been as adept as the reviewer.  Here are two more favorite examples of the genre.

World Is Flat

First up is a particularly vicious attack by Matt Taibbi in 2005 on Thomas Friedman's bestselling book, The World Is Flat.  The assault begins with a loud bang:

Start with the title.

The book’s genesis is a conversation Friedman has with Nandan Nilekani, the CEO of Infosys. Nilekani casually mutters to Friedman: “Tom, the playing field is being leveled.” To you and me, an innocent throwaway phrase–the level playing field being, after all, one of the most oft-repeated stock ideas in the history of human interaction. Not to Friedman. Ten minutes after his talk with Nilekani, he is pitching a tent in his company van on the road back from the Infosys campus in Bangalore:

As I left the Infosys campus that evening along the road back to Bangalore, I kept chewing on that phrase: “The playing field is being leveled.” What Nandan is saying, I thought, is that the playing field is being flattened… Flattened? Flattened? My God, he’s telling me the world is flat!

This is like three pages into the book, and already the premise is totally fucked. Nilekani said level, not flat. The two concepts are completely different. Level is a qualitative idea that implies equality and competitive balance; flat is a physical, geographic concept that Friedman, remember, is openly contrasting–ironically, as it were–with Columbus’s discovery that the world is round.

Except for one thing. The significance of Columbus’s discovery was that on a round earth, humanity is more interconnected than on a flat one. On a round earth, the two most distant points are closer together than they are on a flat earth. But Friedman is going to spend the next 470 pages turning the “flat world” into a metaphor for global interconnectedness. Furthermore, he is specifically going to use the word round to describe the old, geographically isolated, unconnected world.

You’ve gotta love this one.  When a writer like Taibbi can effectively trash the premise of an entire book in a couple paragraphs, it’s a real tour-de-force.  He shows vividly that Freedman’s entire analysis is based on a botched metaphor.  What Freedman wants to say is that the world is smaller and more interconnected than ever before, which is not exactly a stunning observation.  But instead of calling it The World Is Small, which wouldn’t get any notice, he decides to go for an a title that is so counterintuitive as to be totally attention grabbing:  The World Is Flat.  Unfortunately this runs directly counter to his own anodyne point, since it describes the bad old world before the internet.  

Virility

The second case in point is the 2016 review by Clive James of an academic book edited by three French scholars (Alain Corbin, Jean-Jacques Courtine, and Georges Vigarello) called A History of Virility, which weighs in at 752 pages.  James uses his review as a way to ridicule the pretentiousness of academic writing, especially in the French social science tradition.  Only scholars could make sex boring.  His opening sentence is a killer:

This book is a lead mine of information. There could have been gold in it, but perhaps yellow lustre was thought to be less impressive than grey heft. Only one of a series of volumes published under the general title of “European Perspectives“, the book bulks large as a collection of specially commissioned articles with virility for a subject. Virile itself in its heaps of strenuously acquired science-sounding vocabulary, it shows what can be done when three sufficiently influential European editors marshal the expertise of a phalanx of sufficiently dedicated European sociologists in order to invest a sufficiently important theme with an extra gravitas it doesn’t really need. The result is like the European Union: one searches for the benefits while keeping an eye on the exits.

Most of the Europeans involved are French, and one of the reasons for the book’s ponderous collective tone could be that the already glutinous academic version of the French language has not been very excitingly translated into English: a term such as “structured, normative alterity” might have sounded more sprightly in the original. The exclusive blame can scarcely be placed on the subject itself, which is, by the nature of things, quite sexy. By the time the European experts have worked out their perspectives, however, the kind of urge that once got Peter Abelard into immortal trouble is drained of poetic nuance, not to say truth.

Later in the review, he digs into the substance of the book’s take on virility:

All too early in the book, on the point of whether masculinity is acquired or intrinsic, Simone de Beauvoir is quoted. The quotation is familiar, but stands out among the circumambient solemnity with a startling freshness, which is a bad sign, because in any context where Beauvoir sets the standard for vivid utterance, it is being set low. “A man,” says Beauvoir, “is not born a man, he becomes a man.” She sounds more scientific than the social scientist who quotes her, although Abelard, could he speak, might point out to both of them that the idea that masculinity is not an a priori condition attached to physiology starts looking shaky when the knives come out. (“The pursuit of truth hides castration,” said Lacan, to which Abelard might have replied “if only.”) But the book, could it speak in a single voice — most of the time, alas, it does, if only in the sense that so many modern academics in the soft sciences sound the same — might reply that sexuality is not merely a matter of gender, or that gender is not merely a matter of anatomy, and that these things are modalities, with virility yet another modality. As always in such a book of any size, if you hear the word “modality” you can count on hearing it again soon.

There’s so much to like here.  First the language: “strenuously acquired science-sounding vocabulary”; the repetition of the word “sufficiently”; pointing out how the authors use terms like “structured, normative alterity” and repeatedly refer to “modality.”  And then there are the gratuitous digs at Beauvoir and the EU.  I’d kill to have crafted one of his sentences.

Posted in Empire, History, Modernity, War

What If Napoleon Had Won at Waterloo?

Today I want to explore an interesting case of counterfactual history.  What would have happened if Napoleon Bonaparte had won in 1815 at the Battle of Waterloo?  What consequences might have followed for Europe in the next two centuries?  That he might have succeeded is not mere fantasy.  According to the victor, Lord Wellington, the battle was “the nearest-run thing you ever saw in your life.”

The standard account, written by the winners, is that the allies arrayed against Napoleon (primarily Britain, Prussia, Austria, and Russia) had joined together to stop him from continuing to rampage across the continent, conquering one territory after another.  From this angle, they were the saviors of freedom, who finally succeeded in vanquishing and deposing the evil dictator. 

I want to explore an alternative interpretation, which draws on two sources.  One is an article in Smithsonian Magazine by Andrew Roberts, “Why We’d Be Better Off if Napoleon Never Lost at Waterloo.”  The other is a book by the same author, Napoleon: A Life.

Napoleon

The story revolves around two different Napoleons:  the general and the ruler.  As a general, he was one of the greatest in history.  Depending on how you count, he fought 60 or 70 battles and lost only seven of them, mostly at the end.  In the process, he conquered (or controlled through alliance) most of Western Europe.  So the allies had reason to fear him and to eliminate the threat he posed.  

As a ruler, however, Napoleon looks quite different.  In this role, he was the agent of the French Revolution and its Enlightenment principles, which he succeeded in institutionalizing within France and spreading across the continent.  Roberts notes in his article that Napoleon 

said he would be remembered not for his military victories, but for his domestic reforms, especially the Code Napoleon, that brilliant distillation of 42 competing and often contradictory legal codes into a single, easily comprehensible body of French law. In fact, Napoleon’s years as first consul, from 1799 to 1804, were extraordinarily peaceful and productive. He also created the educational system based on lycées and grandes écoles and the Sorbonne, which put France at the forefront of European educational achievement. He consolidated the administrative system based on departments and prefects. He initiated the Council of State, which still vets the laws of France, and the Court of Audit, which oversees its public accounts. He organized the Banque de France and the Légion d’Honneur, which thrive today. He also built or renovated much of the Parisian architecture that we still enjoy, both the useful—the quays along the Seine and four bridges over it, the sewers and reservoirs—and the beautiful, such as the Arc de Triomphe, the Rue de Rivoli and the Vendôme column.

He stood as the antithesis of the monarchical state system at the time, grounded in preserving the feudal privileges of the nobility and the church and the subordination of peasants and workers.  As a result, he ended up creating a lot of enemies, who initiated most of the battles he fought.  In addition, however, he drew a lot of support from key actors within the territories he conquered, to whom he looked less like an invader than a liberator.  Roberts points out in his book that:

Napoleon’s political support from inside the annexed territories came from many constituencies: urban elites who didn’t want to return to the rule of their local Legitimists, administrative reformers who valued efficiency, religious minorities such as Protestants and Jews whose rights were protected by law, liberals who believed in concepts such as secular education and the liberating power of divorce, Poles and other nationalities who hoped for national self-determination, businessmen (at least until the Continental System started to bite), admirers of the simplicity of the Code Napoléon, opponents of the way the guilds had worked to restrain trade, middle-class reformers, in France those who wanted legal protection for their purchases of hitherto ecclesiastical or princely confiscated property, and – especially in Germany – peasants who no longer had to pay feudal dues.

When the allies defeated Napoleon the first time, they exiled him to Elba and installed Louis XVIII as king, seeking to sweep away all of the gains from the revolution and the empire.  Louis failed spectacularly in gaining local support for the reversion to the Ancien Regime.  Sensing this, Napoleon escaped to the mainland after only nine months and headed for Paris.  The royalist troops sent to stop him instead rallied to his cause, and in 18 days he was eating Louis’s dinner in the Tuileries, restored as emperor without anyone firing a single shot in defense of the Bourbons.  Quite a statement about how the French, as opposed to the allies, viewed his return.  

Once back in charge, Napoleon sent a note to the allies, reassuring them that he was content to rule at home and leave conquest to the past: “After presenting the spectacle of great campaigns to the world, from now on it will be more pleasant to know no other rivalry than that of the benefits of peace, of no other struggle than the holy conflict of the happiness of peoples.” 

They weren’t buying it.  They had reason to be suspicious, but instead of waiting and seeing they launched an all-out assault on France in an effort to get him out of the way.  Roberts argues, and I agree, that their aim was not defensive but actively reactionary.  Napoleon’s liberalized and modernized France posed a threat to the preservation of the traditional powers of monarchy, nobility, and church.  The allies sought to stamp out the fires of reform and revolution before they flared up in their own domains.  In this sense, then, Roberts says Waterloo was a battle that didn’t need to happen.  It was an unprovoked, preemptive strike.

Roberts concludes his Smithsonian article with this assessment of what might have been if Waterloo had turned out differently:

If Napoleon had remained emperor of France for the six years remaining in his natural life, European civilization would have benefited inestimably. The reactionary Holy Alliance of Russia, Prussia and Austria would not have been able to crush liberal constitutionalist movements in Spain, Greece, Eastern Europe and elsewhere; pressure to join France in abolishing slavery in Asia, Africa and the Caribbean would have grown; the benefits of meritocracy over feudalism would have had time to become more widely appreciated; Jews would not have been forced back into their ghettos in the Papal States and made to wear the yellow star again; encouragement of the arts and sciences would have been better understood and copied; and the plans to rebuild Paris would have been implemented, making it the most gorgeous city in the world.

Napoleon deserved to lose Waterloo, and Wellington to win it, but the essential point in this bicentenary year is that the epic battle did not need to be fought—and the world would have been better off if it hadn’t been.

What followed his loss was a century of reaction across the continent of Europe. The Bourbons were restored and the liberal gains in Germany, Spain, Austria and Italy were rolled back.  Royalist statesmen such as Metternich and Bismarck aggressively defended their regimes against reform efforts by liberals and Marxists alike.  These regimes persisted until the First World War, which they precipitated and which eventually brought them all down — Hohenzollerns, Habsburgs, Romanovs, and Ottomans.  The reactions to the fall of these monarchies in turn set the stage for the Second World War.

You can only play out historical counterfactuals so far before the chain of contingencies becomes too long and the analysis turns wholly speculative.  But it seems quite reasonable to me to think that, if Napoleon had won at Waterloo, this history would have played out quite differently.  The existence proof of a modern liberal state in the middle of Europe would have shored up reform efforts in the surrounding monarchies and headed off the reactionary entrenchment that finally erupted in the Great War and extinguished them all.

Posted in Writing

The Art of the Take-Down: A Sampling of Hostile Book Reviews

Book reviews are a terrific resource, allowing you to keep up on what’s happening in a wide variety of fields without actually having to read the books.  And occasionally they point out a book you really should read.  Writing these reviews used to be an art that employed a large number of talented writers, but that’s been changing.  Formal book reviews in newspapers and magazines, which used to be the staple of the business, are declining in number.  Now we’re increasingly dependent on online sources and the unpolished and unhelpful amateur reviews on Amazon and elsewhere.

But you can still find good book reviews, ones that are a pleasure to read even if you’re not terribly interested in the book’s topic.  My favorites are the take-downs — when a skillful writer demolishes a book with wit and literary precision.  Sometimes the target is the subject of the review.  More often, the target is the book’s author, in which case the review becomes a lesson for the reader on what the book in question could have been if the author had been as adept as the reviewer.  Here are two of my favorite examples.

TR

In 2011, historian Jackson Lears wrote a review of Colonel Roosevelt by Edmund Morris that is a classic of the genre.  He turns it into a viciously effective essay about Teddy Roosevelt’s failings as a person and a president.  As with a lot of the best take-downs, he lets you know in the opening sentence what kind of story this is going to be: “The reputation of Theodore Roosevelt has become as bloated as the man himself.”  In particular he focuses on 

the cult-like status that Roosevelt enjoys outside the academy, especially in Washington. In political discourse, his name evokes bipartisan affection, bordering on reverence; few presidents are safer for politicians of either party to cite as an inspiration….  Not bad for a man who, despite his undeniable bravery and public spirit, spent much of his life behaving like a bully, drunk on his own self-regard. How does one account for the contemporary adulation of this man?

Here’s how Lears sums up his assessment of TR at the end of the essay:

In The Nation, Stuart Sherman characterized Roosevelt’s life as the drama of a man done in by his own obsessions, descending from defense of the public good into mere “fighting for fighting’s sake.” Sherman ended on a wistful note: “‘How much more glorious [Roosevelt’s life would have been] if in his great personality there had been planted a spark of magnanimity.’”

The problem was that magnanimity required a kind of manhood that Roosevelt the boy did not possess. A part of his character remained attached to older traditions of masculine honor, to a paternalist sense of noblesse oblige: this was the part that lay behind his challenges to irresponsible capital, his elevation of public good over private gain. But there was another, deeper part that relished physical struggle—and above all violence—for its own sake. This was what kept him from being more than “a great big boy,” and often little more than a schoolyard bully. It was also what kept him, in the end, from becoming a truly tragic figure. Tragedy is for grown-ups.

Ouch.  Now that’s an epitaph no one wants.

Hersey

The second example is a 1963 review by Gore Vidal of John Hersey’s book, Here to Stay.  His critique focuses on the way Hersey writes.  Here’s the opening paragraph, which sets up the question the reviewer wants to answer:

John Hersey has brought together a number of his journalistic pieces in a volume called Here to Stay and a baffling collection it is. To give Mr. Hersey his due — and who is so hard as not to give it him? — he is good-hearted, right-minded and, as they used to say of newspaper reporters, “tireless.” He is also, as Mr. Orville Prescott would say, “dull, dull, dull.” Mr. Hersey’s dullness is not easily accounted for. His pieces deal with interesting subjects: Connecticut floods, concentration camp survivors, returning veterans, battle fatigue cases, and his famous Hiroshima study. He is fascinated by death, holocaust and man’s monotonous inhumanity to man. He can describe a disaster chastely and attentively. He has an eye for minutiae (here begins his failure for he has not much gift for selection). He is willing to take on great themes (Hiroshima), but despite his efforts, something always goes wrong. Why?

Vidal sees the problem in Hersey’s insistence on piling up mountains of meticulously documented details about a disaster without giving us any sense of what they mean and why they matter.  He puts it this way:

To what end does Mr. Hersey in his level, fact-choked style insist that we attend these various disasters human and natural? So deliberately is he a camera that it is often hard to determine what he means us to feel by what is shown. The simple declarative sentences are excellent at conveying action; they are less good at suggesting atmosphere; they are hopeless at expressing a moral point of view, even by indirection.

Here is his conclusion:

Of course Mr. Hersey is to be praised for avoiding emotional journalism and overt editorializing (though a week of reading Emil Zola might do him good); yet despite his properly nervous preface, he does not seem to realize that the only point to writing serious journalism is to awaken in the reader not only the sense of how something was, but the apprehension of why it was, and to what moral end the recorder is leading us, protesting or not. Mr. Hersey is content to give us mere facts. A good man, he finds war hell and human suffering terrible, but that is nowhere near enough. At no point in the deadpan monotonous chronicle of Hiroshima is there any sense of what the Bomb meant and means. He does not even touch on the public debate as to whether or not there was any need to use such weapon when Japan was already making overtures of surrender. To Mr. Hersey it just fell, that’s all, and it was terrible, and he would like to tell us about it. If he has any attitude about the moral position of the United States before and after this extraordinary human happening, he keeps it safely hidden beneath the little sentences and the small facts.

To use Mr. Hersey’s own unhappy image, in reading him one does not drink the bitter elixir of adrenalin, one merely sips a familiar cup of something anodyne, something not stimulant but barbiturate, and the moral sense sleeps on.

Posted in Higher Education, Populism, Sports

Nobel prizes are great, but college football is why American universities dominate the globe

This post is a reprint of a piece I published in Quartz in 2017.  Here’s a link to the original.  It’s an effort to explore the distinctively populist character of American higher education. 

The idea is that a key to understanding the strong public support that US colleges and universities have managed to generate is their ability to reach beyond the narrow constituency for their elevated intellectual accomplishments.  The magic is that they are elite institutions that can also appeal to the populace.  And the peculiar world of college football provides a window into how the magic works.  

If you drove around the state of Michigan, where I used to live, you would see farmers on tractors wearing caps that typically bore the logo of either the University of Michigan (maize and blue) or Michigan State (green and white).  Maybe they or their kids attended the school, or maybe they were using its patented seed; but more often than not, it was because they were rooting for the football team.  It’s hard to overestimate the value for the higher ed system of drawing such a broad base of popular support.

Football

Nobel prizes are great, but college football is why American universities dominate the globe

David F. Labaree

 

College football costs too much. It exploits players and even damages their brains. It glorifies violence and promotes a thuggish brand of masculinity. And it undermines the college’s academic mission.

We hear this a lot, and much of it is true. But consider, for the moment, that football may help explain how the American system of higher education has become so successful. According to rankings computed by Jiao Tong University in Shanghai, American institutions account for 32 of the top 50 and 16 of the top 20 universities in the world. Also, between 2000 and 2014, 49% of all Nobel recipients were scholars at US universities.

In doing research for a book about the American system of higher education, I discovered that the key to its strength has been its ability to combine elite scholarship with populist appeal. And football played a key role in creating this alchemy.

American colleges developed their skills at attracting consumers and local supporters in the early nineteenth century, when the basic elements of the higher education system came together.

These colleges emerged in a very difficult environment, when the state was weak, the market strong, and the church divided. Unlike their European forebears, which could depend on funding from the state or the established church, American colleges arose as not-for-profit corporations that received only sporadic funding from church denominations and state governments and instead had to rely on students and local donors. Often adopting the name of the town where they were located, these colleges could only survive, much less thrive, if they were able to attract and retain students from nearby towns and draw donations from alumni and local citizens.

In this quest, American colleges and universities have been uniquely and spectacularly successful. Go to any American campus and you will see that nearly everyone seems to be wearing the brand—the school colors, the logo, the image of the mascot, team jerseys. Unlike their European counterparts, American students don’t just attend an institution of higher education; they identify with it. It’s not just where they enroll; it’s who they are. In the US, the first question that twenty-year-old strangers ask each other is “Where do you go to college?” And half the time the question is moot because the speakers are wearing their college colors.

Football, along with other intercollegiate sports, has been enormously helpful in building the college brand. It helps draw together all of the members of the college community (students, faculty, staff, alumni, and local fans) in opposition to the hated rival in the big game. It promotes a loyalty that lasts for a lifetime, which translates into a broad political base for gaining state funding and a steady flow of private donations.

Thus one advantage that football brings to the American university is financial. It’s not that intercollegiate sports turn a large profit; in fact, the large majority lose money. Instead, it’s that they help mobilize a stream of public and private funding. And now that state appropriations for public higher education are in steady decline, public universities, like their private counterparts, are increasingly dependent on private funding.

Another advantage that football brings is that it softens the college’s elitism. Even the most elite American public research universities (Michigan, Wisconsin, Berkeley, UCLA) have a strong populist appeal. The same is true of a number of elite privates (think Stanford, Vanderbilt, Duke, USC). In large part this comes from their role as a venue for popular entertainment supported by their accessibility to a large number of undergraduate students. As a result, the US university has managed to avoid much of the social elitism of British institutions such as Oxford and Cambridge and the academic elitism of the German university dominated by the graduate school. Go to a college town on game day, and nearly every car, house, and person is sporting the college colors.

This broad support is particularly important these days, now that the red-blue political divide has begun to affect colleges as well. A recent study showed that, while most Americans still believe that colleges have a positive influence on the country, 58% of Republicans do not. History strongly suggests that football is going to be more effective than Nobel prizes in winning back their loyalty.

So let’s hear it for college football. It’s worth two cheers at least.

Posted in Academic writing, History, Writing

On Writing: How the King James Bible Shaped the English Language and Still Teaches Us How to Write

When you’re interested in improving your writing, it’s a good idea to have some models to work from.  I’ve presented some of my favorite models in this blog.  These have included a number of examples of good writing by both academics (Max Weber, E.P. Thompson, Jim March, and Mary Metz) and nonacademics (Frederick Douglass, Elmore Leonard).

Today I want to explore one of the two most influential forces in shaping the English language over the years:  The King James Bible.  (The other, of course, is Shakespeare.)  Earlier I presented one analysis by Ann Wroe, which focused on the thundering sound of the prose in this extraordinary text.  Today I want to draw on two other pieces of writing that explore the powerful model that this Bible provides us all for how to write in English with power and grace.  One is by Adam Nicolson, who wrote a book on the subject (God’s Secretaries: The Making of the King James Bible).  The other, which I reprint in full at the end of this post, is by Charles McGrath.  

The impulse to produce a Bible in English arose with the English Reformation, as a Protestant vernacular alternative to the Latin version that was canonical in the Catholic Church.  The text was commissioned in 1604 by King James, who succeeded Elizabeth I after her long reign, and it was constructed by a committee of 54 scholars.  They went back to the original texts in Hebrew and Greek, but they drew heavily on earlier English translations. 

The foundational translation was written by William Tyndale, who was executed for heresy near Brussels in 1536, and this was reworked into what became known as the Geneva Bible by Calvinists living in Switzerland.  One aim of the committee was to produce a version that was more compatible with the beliefs of the English and Scottish versions of the faith, but for James the primary impetus was to remove the anti-royalist tone that was embedded within the earlier text.  Recent scholars have concluded that 84% of the words in the King James New Testament and 76% in the Old Testament are Tyndale’s.

As Nicolson puts it, the language of the King James Bible is an amazing mix — “majestic but intimate, the voice of the universe somehow heard in the innermost part of the ear.”

You don’t have to be a Christian to hear the power of those words—simple in vocabulary, cosmic in scale, stately in their rhythms, deeply emotional in their impact. Most of us might think we have forgotten its words, but the King James Bible has sewn itself into the fabric of the language. If a child is ever the apple of her parents’ eye or an idea seems as old as the hills, if we are at death’s door or at our wits’ end, if we have gone through a baptism of fire or are about to bite the dust, if it seems at times that the blind are leading the blind or we are casting pearls before swine, if you are either buttering someone up or casting the first stone, the King James Bible, whether we know it or not, is speaking through us. The haves and have-nots, heads on plates, thieves in the night, scum of the earth, best until last, sackcloth and ashes, streets paved in gold, and the skin of one’s teeth: All of them have been transmitted to us by the translators who did their magnificent work 400 years ago.

Wouldn’t it be lovely if we academics could write in a way that sticks in people’s minds for 400 years?  Well, maybe that’s a bit too much to hope for.  But even if we can’t aspire to be epochally epigrammatic, there are still lessons we can learn from Tyndale and the Group of 54.  

One such lesson is the power of simplicity.  Too often scholars feel the compulsion to gussy up their language with jargon and Latinate constructions in the name of professionalism.  If any idiot can understand what you’re saying, then you’re not being a serious scholar.  But the magic of the King James Bible is that it uses simple Anglo-Saxon words to make the most profound statements.  Listen to this passage from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favor to men of skill, but time and chance happeneth to them all.

Or this sentence from Paul’s letter to the Philippians:

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Or the stunning opening line of the Gospel of John:

In the beginning was the Word, and the Word was with God, and the Word was God.

This is a text that can speak clearly to the untutored while at the same time elevating them to a higher plane.  For us it’s a model for how to match simplicity with profundity.

KJB

Why the King James Bible Endures

By CHARLES McGRATH

The King James Bible, which was first published 400 years ago next month, may be the single best thing ever accomplished by a committee. The Bible was the work of 54 scholars and clergymen who met over seven years in six nine-man subcommittees, called “companies.” In a preface to the new Bible, Miles Smith, one of the translators and a man so impatient that he once walked out of a boring sermon and went to the pub, wrote that anything new inevitably “endured many a storm of gainsaying, or opposition.” So there must have been disputes — shouting; table pounding; high-ruffed, black-gowned clergymen folding their arms and stomping out of the room — but there is no record of them. And the finished text shows none of the PowerPoint insipidness we associate with committee-speak or with later group translations like the 1961 New English Bible, which T.S. Eliot said did not even rise to “dignified mediocrity.” Far from bland, the King James Bible is one of the great masterpieces of English prose.

The issue of how, or even whether, to translate sacred texts was a fraught one in those days, often with political as well as religious overtones, and it still is. The Roman Catholic Church, for instance, recently decided to retranslate the missal used at Mass to make it more formal and less conversational. Critics have complained that the new text is awkward and archaic, while its defenders (some of whom probably still prefer the Mass in Latin) insist that’s just the point — that language a little out of the ordinary is more devotional and inspiring. No one would ever say that the King James Bible is an easy read. And yet its very oddness is part of its power.

From the start, the King James Bible was intended to be not a literary creation but rather a political and theological compromise between the established church and the growing Puritan movement. What the king cared about was clarity, simplicity, doctrinal orthodoxy. The translators worked hard on that, going back to the original Hebrew, Greek and Aramaic, and yet they also spent a lot of time tweaking the English text in the interest of euphony and musicality. Time and again the language seems to slip almost unconsciously into iambic pentameter — this was the age of Shakespeare, commentators are always reminding us — and right from the beginning the translators embraced the principles of repetition and the dramatic pause: “In the beginning God created the Heaven, and the Earth. And the earth was without forme, and void, and darkenesse was upon the face of the deepe: and the Spirit of God mooved upon the face of the waters.”

The influence of the King James Bible is so great that the list of idioms from it that have slipped into everyday speech, taking such deep root that we use them all the time without any awareness of their biblical origin, is practically endless: sour grapes; fatted calf; salt of the earth; drop in a bucket; skin of one’s teeth; apple of one’s eye; girded loins; feet of clay; whited sepulchers; filthy lucre; pearls before swine; fly in the ointment; fight the good fight; eat, drink and be merry.

But what we also love about this Bible is its strangeness — its weird punctuation, odd pronouns (as in “Our Father, which art in heaven”), all those verbs that end in “eth”: “In the morning it flourisheth, and groweth up; in the evening it is cut downe, and withereth.” As Robert Alter has demonstrated in his startling and revealing translations of the Psalms and the Pentateuch, the Hebrew Bible is even stranger, and in ways that the King James translators may not have entirely comprehended, and yet their text performs the great trick of being at once recognizably English and also a little bit foreign. You can hear its distinctive cadences in the speeches of Lincoln, the poetry of Whitman, the novels of Cormac McCarthy.

Even in its time, the King James Bible was deliberately archaic in grammar and phraseology: an expression like “yea, verily,” for example, had gone out of fashion some 50 years before. The translators didn’t want their Bible to sound contemporary, because they knew that contemporaneity quickly goes out of fashion. In his very useful guide, “God’s Secretaries: The Making of the King James Bible,” Adam Nicolson points out that when the Victorians came to revise the King James Bible in 1885, they embraced this principle wholeheartedly, and like those people who whack and scratch old furniture to make it look even more ancient, they threw in a lot of extra Jacobeanisms, like “howbeit,” “peradventure,” “holden” and “behooved.”

This is the opposite, of course, of the procedure followed by most new translations, starting with Good News for Modern Man, a paperback Bible published by the American Bible Society in 1966, whose goal was to reflect not the language of the Bible but its ideas, rendering them into current terms, so that Ezekiel 23:20, for example (“For she doted upon their paramours, whose flesh is as the flesh of asses, and whose issue is like the issue of horses”) becomes “She was filled with lust for oversexed men who had all the lustfulness of donkeys or stallions.”

There are countless new Bibles available now, many of them specialized: a Bible for couples, for gays and lesbians, for recovering addicts, for surfers, for skaters and skateboarders, not to mention a superheroes Bible for children. They are all “accessible,” but most are a little tone-deaf, lacking in grandeur and majesty, replacing “through a glasse, darkly,” for instance, with something along the lines of “like a dim image in a mirror.” But what this modernizing ignores is that the most powerful religious language is often a little elevated and incantatory, even ambiguous or just plain hard to understand. The new Catholic missal, for instance, does not seem to fear the forbidding phrase, replacing the statement that Jesus is “one in being with the Father” with the more complicated idea that he is “consubstantial with the Father.”

Not everyone prefers a God who talks like a pal or a guidance counselor. Even some of us who are nonbelievers want a God who speaketh like — well, God. The great achievement of the King James translators is to have arrived at a language that is both ordinary and heightened, that rings in the ear and lingers in the mind. And that all 54 of them were able to agree on every phrase, every comma, without sounding as gassy and evasive as the Financial Crisis Inquiry Commission, is little short of amazing, in itself proof of something like divine inspiration.

 

Posted in Economic growth, Education policy, Higher Education

Hausmann: The Education Myth

In this post I reprint a piece by Ricardo Hausmann (an economist at Harvard’s Kennedy School), which was published in Project Syndicate in 2015. Here’s a link to the original.  If you can’t get past the paywall, here’s a link to a PDF.

What I like about this piece is the way Hausmann challenges a central principle that guides educational policy, both domestic and international.  This is the belief that education is the central engine of economic growth.  According to this credo, increasing education is how we can increase productivity, GDP, and standard of living.  Hausmann shows, however, that the impact of education on economic growth is a lot less than promised.  Other factors appear to matter more for expanding economies, so investing in those factors may be a lot more efficient than the costly process of increasing access to tertiary education.

As the former chief economist at the Inter-American Development Bank and the head of the Harvard Growth Lab, he seems to know something about this subject.  See what you think.

Human Capital and Ec Development

The Education Myth

TIRANA – In an era characterized by political polarization and policy paralysis, we should celebrate broad agreement on economic strategy wherever we find it. One such area of agreement is the idea that the key to inclusive growth is, as then-British Prime Minister Tony Blair put it in his 2001 reelection campaign, “education, education, education.” If we broaden access to schools and improve their quality, economic growth will be both substantial and equitable.

As the Italians would say: magari fosse vero. If only it were true. Enthusiasm for education is perfectly understandable. We want the best education possible for our children, because we want them to have a full range of options in life, to be able to appreciate its many marvels and participate in its challenges. We also know that better educated people tend to earn more.

Education’s importance is incontrovertible – teaching is my day job, so I certainly hope it is of some value. But whether it constitutes a strategy for economic growth is another matter. What most people mean by better education is more schooling; and, by higher-quality education, they mean the effective acquisition of skills (as revealed, say, by the test scores in the OECD’s standardized PISA exam). But does that really drive economic growth?

In fact, the push for better education is an experiment that has already been carried out globally. And, as my Harvard colleague Lant Pritchett has pointed out, the long-term payoff has been surprisingly disappointing.

In the 50 years from 1960 to 2010, the global labor force’s average time in school essentially tripled, from 2.8 years to 8.3 years. This means that the average worker in a median country went from less than half a primary education to more than half a high school education.

How much richer should these countries have expected to become? In 1965, France had a labor force that averaged less than five years of schooling and a per capita income of $14,000 (at 2005 prices). In 2010, countries with a similar level of education had a per capita income of less than $1,000.

In 1960, countries with an education level of 8.3 years of schooling were 5.5 times richer than those with 2.8 years of schooling. By contrast, countries that had increased their education from 2.8 years of schooling in 1960 to 8.3 years of schooling in 2010 were only 167% richer. Moreover, much of this increase cannot possibly be attributed to education, as workers in 2010 had the advantage of technologies that were 50 years more advanced than those in 1960. Clearly, something other than education is needed to generate prosperity.

As is often the case, the experience of individual countries is more revealing than the averages. China started with less education than Tunisia, Mexico, Kenya, or Iran in 1960, and had made less progress than them by 2010. And yet, in terms of economic growth, China blew all of them out of the water. The same can be said of Thailand and Indonesia vis-à-vis the Philippines, Cameroon, Ghana, or Panama. Again, the fast growers must be doing something in addition to providing education.

The experience within countries is also revealing. In Mexico, the average income of men aged 25-30 with a full primary education differs by more than a factor of three between poorer municipalities and richer ones. The difference cannot possibly be related to educational quality, because those who moved from poor municipalities to richer ones also earned more.

And there is more bad news for the “education, education, education” crowd: Most of the skills that a labor force possesses were acquired on the job. What a society knows how to do is known mainly in its firms, not in its schools. At most modern firms, fewer than 15% of the positions are open for entry-level workers, meaning that employers demand something that the education system cannot – and is not expected – to provide.

When presented with these facts, education enthusiasts often argue that education is a necessary but not a sufficient condition for growth. But in that case, investment in education is unlikely to deliver much if the other conditions are missing. After all, though the typical country with ten years of schooling had a per capita income of $30,000 in 2010, per capita income in Albania, Armenia, and Sri Lanka, which have achieved that level of schooling, was less than $5,000. Whatever is preventing these countries from becoming richer, it is not lack of education.

A country’s income is the sum of the output produced by each worker. To increase income, we need to increase worker productivity. Evidently, “something in the water,” other than education, makes people much more productive in some places than in others. A successful growth strategy needs to figure out what this is.

Make no mistake: education presumably does raise productivity. But to say that education is your growth strategy means that you are giving up on everyone who has already gone through the school system – most people over 18, and almost all over 25. It is a strategy that ignores the potential that is in 100% of today’s labor force, 98% of next year’s, and a huge number of people who will be around for the next half-century. An education-only strategy is bound to make all of them regret having been born too soon.

This generation is too old for education to be its growth strategy. It needs a growth strategy that will make it more productive – and thus able to create the resources to invest more in the education of the next generation. Our generation owes it to theirs to have a growth strategy for ourselves. And that strategy will not be about us going back to school.

Ricardo Hausmann, a former minister of planning of Venezuela and former Chief Economist at the Inter-American Development Bank, is a professor at Harvard’s John F. Kennedy School of Government and Director of the Harvard Growth Lab.