Posted in Higher Education, History of education, Organization Theory, Sociology

College: What Is It Good For?

This post is the text of a lecture I gave in 2013 at the annual meeting of the John Dewey Society.  It was published the following year in the Society’s journal, Education and Culture.  Here’s a link to the published version.           

The story I tell here is not a philosophical account of the virtues of the American university but a sociological account of how those virtues arose as unintended consequences of a system of higher education that emerged for less elevated reasons.  Drawing on the analysis in the book I was writing at the time, A Perfect Mess, I show how the system emerged in large part out of two impulses that had nothing to do with advancing knowledge.  One was the competition among religious groups seeking to plant the denominational flag on the growing western frontier and provide clergy for the newly arriving flock.  The other was the competition among frontier towns to attract settlers who would buy land, using a college as a sign that this town was not just another dusty farm village but a true center of culture.

The essay then goes on to explore how the current positive social benefits of the US higher ed system are supported by the peculiar institutional form that characterizes American colleges and universities. 

My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

In short, I’m portraying the system as one that is infused with irony, from its early origins through to its current functions.  Hope you enjoy it.


College — What Is It Good For?

David F. Labaree

            I want to say up front that I’m here under false pretenses.  I’m not a Dewey scholar or a philosopher; I’m a sociologist doing history in the field of education.  And the title of my lecture is a bit deceptive.   I’m not really going to talk about what college is good for.  Instead I’m going to talk about how the institution we know as the modern American university came into being.  As a sociologist I’m more interested in the structure of the institution than in its philosophical aims.  It’s not that I’m opposed to these aims.  In fact, I love working in a university where these kinds of pursuits are open to us:   Where we can enjoy the free flow of ideas; where we explore any issue in the sciences or humanities that engages us; and where we can go wherever the issue leads without worrying about utility or orthodoxy or politics.  It’s a great privilege to work in such an institution.  And this is why I want to spend some time examining how this institution developed its basic form in the improbable context of the United States in the nineteenth century. 

            My argument is that the true hero of the story is the evolved form of the American university, and that all the good things like free speech are the side effects of a structure that arose for other purposes.  Indeed, I argue that the institution – an intellectual haven in a heartless utilitarian world – depends on attributes that we would publicly deplore:  opacity, chaotic complexity, and hypocrisy.

            I tell this story in three parts.  I start by exploring how the American system of higher education emerged in the nineteenth century, without a plan and without any apparent promise that it would turn out well.  I show how, by 1900, all the pieces of the current system had come together.  This is the historical part.  Then I show how the combination of these elements created an astonishingly strong, resilient, and powerful structure.  I look at the way this structure deftly balances competing aims – the populist, the practical, and the elite.  This is the sociological part.  Then I veer back toward the issue raised in the title, to figure out what the connection is between the form of American higher education and the things that it is good for. This is the vaguely philosophical part.  I argue that the form serves the extraordinarily useful functions of protecting those of us in the faculty from the real world, protecting us from each other, and hiding what we’re doing behind a set of fictions and veneers that keep anyone from knowing exactly what is really going on. 

           In this light, I look at some of the things that could kill it for us.  One is transparency.  The current accountability movement directed toward higher education could ruin everything by shining a light on the multitude of conflicting aims, hidden cross-subsidies, and forbidden activities that constitute life in the university.  A second is disaggregation.  I’m talking about current proposals to pare down the complexity of the university in the name of efficiency:  Let online modules take over undergraduate teaching; eliminate costly residential colleges; closet research in separate institutes; and get rid of football.  These changes would destroy the synergy that comes from the university’s complex structure.  A third is principle.  I argue that the university is a procedural institution, which would collapse if we all acted on principle instead of form.   I end with a call for us to retreat from substance and stand shoulder-to-shoulder in defense of procedure.

Historical Roots of the System

            The origins of the American system of higher education could not have been more humble or less promising of future glory.  It was a system, but it had no overall structure of governance and it did not emerge from a plan.  It just happened, through an evolutionary process that had direction but no purpose.  We have a higher education system in the same sense that we have a solar system, each of which emerged over time according to its own rules.  These rules shaped the behavior of the system but they were not the product of Intelligent Design. 

            Yet something there was about this system that produced extraordinary institutional growth.  When George Washington assumed the presidency of the new republic in 1789, the U.S. already had 19 colleges and universities (Tewksbury, 1932, Table 1; Collins, 1979, Table 5.2).  By 1830 the numbers rose to 50 and then growth accelerated, with the total reaching 250 in 1860, 563 in 1870, and 811 in 1880.  To give some perspective, the number of universities in the United Kingdom between 1800 and 1880 rose from 6 to 10 and in all of Europe from 111 to 160 (Rüegg, 2004).  So in 1880 this upstart system had 5 times as many institutions of higher education as did the entire continent of Europe.  How did this happen?

            Keep in mind that the university as an institution was born in medieval Europe in the space between the dominant sources of power and wealth, the church and the state, and it drew  its support over the years from these two sources.  But higher education in the U.S. emerged in a post-feudal frontier setting where the conditions were quite different.  The key to understanding the nature of the American system of higher education is that it arose under conditions where the market was strong, the state was weak, and the church was divided.  In the absence of any overarching authority with the power and money to support a system, individual colleges had to find their own sources of support in order to get started and keep going.  They had to operate as independent enterprises in the competitive economy of higher education, and their primary reasons for being had little to do with higher learning.

            In the early- and mid-nineteenth century, the modal form of higher education in the U.S. was the liberal arts college.  This was a non-profit corporation with a state charter and a lay board, which would appoint a president as CEO of the new enterprise.  The president would then rent a building, hire a faculty, and start recruiting students.  With no guaranteed source of funding, the college had to make a go of it on its own, depending heavily on tuition from students and donations from prominent citizens, alumni, and religious sympathizers.  For college founders, location was everything.  However, whereas European universities typically emerged in major cities, these colleges in the U.S. arose in small towns far from urban population centers.  Not a good strategy if your aim was to draw a lot of students.  But the founders had other things in mind.

            One central motive for founding colleges was to promote religious denominations.  The large majority of liberal arts colleges in this period had a religious affiliation and a clergyman as president.  The U.S. was an extremely competitive market for religious groups seeking to spread the faith, and colleges were a key way to achieve this end.  With colleges, a denomination could prepare its own clergy and provide higher education for its members; and these goals were particularly important on the frontier, where the population was growing and the possibilities for denominational expansion were the greatest.  Every denomination wanted to plant the flag in the new territories, which is why Ohio came to have so many colleges.  The denomination provided a college with legitimacy, students, and a built-in donor pool but with little direct funding.

            Another motive for founding colleges was closely allied with the first, and that was land speculation.  Establishing a college in town was not only a way to advance the faith, it was also a way to raise property values.  If town fathers could attract a college, they could make the case that the town was no mere agricultural village but a cultural center, the kind of place where prospective land buyers would want to build a house, set up a business, and raise a family.  Starting a college was cheap and easy.  It would bear the town’s name and serve as its cultural symbol.  With luck it would give the town leverage to become a county seat or gain a station on the rail line.  So a college was a good investment in a town’s future prosperity (Brown, 1995).

            The liberal arts college was the dominant but not the only form that higher education took in nineteenth century America.  Three other types of institutions emerged before 1880.  One was state universities, which were founded and governed by individual states but which received only modest state funding.  Like liberal arts colleges, they arose largely for competitive reasons.  They emerged in the new states as the frontier moved westward, not because of huge student demand but because of the need for legitimacy.  You couldn’t be taken seriously as a state unless you had a state university, especially if your neighbor had just established one. 

            The second form of institution was the land-grant college, which arose from federal efforts to promote land sales in the new territories by providing public land as a founding grant for new institutions of higher education.  Turning their backs on the classical curriculum that had long prevailed in colleges, these schools had a mandate to promote practical learning in fields such as agriculture, engineering, military science, and mining. 

            The third form was the normal school, which emerged in the middle of the century as state-founded high-school-level institutions for the preparation of teachers.  It wasn’t until the end of the century that these schools evolved into teachers colleges; and in the twentieth century they continued that evolution, turning first into full-service state colleges and then by midcentury into regional state universities. 

            Unlike liberal arts colleges, all three of these types of institutions were initiated by and governed by states, and all received some public funding.  But this funding was not nearly enough to keep them afloat, so they faced challenges similar to those of the liberal arts colleges, since their survival depended heavily on their ability to bring in student tuition and draw donations.  In short, the liberal arts college established the model for survival in a setting with a strong market, weak state, and divided church; and the newer public institutions had to play by the same rules.

            By 1880, the structure of the American system of higher education was well established.  It was a system made up of lean and adaptable institutions, with a strong base in rural communities, and led by entrepreneurial presidents, who kept a sharp eye out for possible threats and opportunities in the highly competitive higher-education market.  These colleges had to attract and keep the loyalty of student consumers, whose tuition was critical for paying the bills and who had plenty of alternatives in towns nearby.  And they also had to maintain a close relationship with local notables, religious peers, and alumni, who provided a crucial base of donations.

            The system was only missing two elements to make it workable in the long term.  It lacked sufficient students, and it lacked academic legitimacy.  On the student side, this was the most overbuilt system of higher education the world has ever seen.  In 1880, 811 colleges were scattered across a thinly populated countryside, which amounted to 16 colleges per million of population (Collins, 1979, Table 5.2).  The average college had only 131 students and 14 faculty and granted 17 degrees per year (Carter et al., 2006, Table Bc523, Table Bc571; U.S. Bureau of the Census, 1975, Series H 751).  As I have shown, these colleges were not established in response to student demand, but nonetheless they depended on students for survival.  Without a sharp growth in student enrollments, the whole system would have collapsed. 

            On the academic side, these were colleges in name only.  They were parochial in both senses of the word, small town institutions stuck in the boondocks and able to make no claim to advancing the boundaries of knowledge.  They were not established to promote higher learning, and they lacked both the intellectual and economic capital required to carry out such a mission.  Many high schools had stronger claims to academic prowess than these colleges.  European visitors in the nineteenth century had a field day ridiculing the intellectual poverty of these institutions.  The system was on death watch.  If it was going to be able to survive, it needed a transfusion that would provide both student enrollments and academic legitimacy. 

            That transfusion arrived just in time from a new European import, the German research university.  This model offered everything that was lacking in the American system.  It reinvented university professors as the best minds of the generation, whose expertise was certified by the new entry-level degree, the Ph.D., and who were pushing back the frontiers of knowledge through scientific research.  It introduced graduate students to the college campus, who would be selected for their high academic promise and trained to follow in the footsteps of their faculty mentors. 

            And at the same time that the German model offered academic credibility to the American system, the peculiarly Americanized form of this model made university enrollment attractive for undergraduates, whose focus was less on higher learning than on jobs and parties.  The remodeled American university provided credible academic preparation in the cognitive skills required for professional and managerial work; and it provided training in the social and political skills required for corporate employment, through the process of playing the academic game and taking on roles in intercollegiate athletics and on-campus social clubs.  It also promised a social life in which one could have a good time and meet a suitable spouse. 

            By 1900, with the arrival of the research university as the capstone, nearly all of the core elements of the current American system of higher education were in place.  Subsequent developments focused primarily on extending the system downward, adding layers that would make it more accessible to larger numbers of students – as normal schools evolved into regional state universities and as community colleges emerged as the open-access base of an increasingly stratified system.  Here ends the history portion of this account. Now we move on to the sociological part of the story.

Sociological Traits of the System

            When the research university model arrived to save the day in the 1880s, the American system of higher education was in desperate straits.  But at the same time this system had an enormous reservoir of potential strengths that prepared it for its future climb to world dominance.  Let’s consider some of these strengths.  First it had a huge capacity in place, the largest in the world by far:  campuses, buildings, faculty, administration, curriculum, and a strong base in the community.  All it needed was students and credibility. 

            Second, it consisted of a group of institutions that had figured out how to survive under dire Darwinian circumstances, where supply greatly exceeded demand and where there was no secure stream of funding from church or state.  In order to keep the enterprises afloat, they had learned how to hustle for market position, troll for students, and dun donors.  Imagine how well this played out when students found a reason to line up at their doors and donors suddenly saw themselves investing in a winner with a soaring intellectual and social mission. 

            Third, they had learned to be extraordinarily sensitive to consumer demand, upon which everything depended.  Fourth, as a result they became lean and highly adaptable enterprises, which were not bounded by the politics of state policy or the dogma of the church but could take advantage of any emerging possibility for a new program, a new kind of student or donor, or a new area of research.  Not only were they able to adapt but they were forced to do so quickly, since otherwise the competition would jump on the opportunity first and eat their lunch.

            By the time the research university arrived on the scene, the American system of higher education was already firmly established and governed by its own peculiar laws of motion and its own evolutionary patterns.  The university did not transform the system.  Instead it crowned the system and made it viable for a century of expansion and elevation.  Americans could not simply adopt the German university model, since this model depended heavily on strong state support, which was lacking in the U.S.  And the American system would not sustain a university as elevated as the German university, with its tight focus on graduate education and research at the expense of other functions.  American universities that tried to pursue this approach – such as Clark University and Johns Hopkins – found themselves quickly trailing the pack of institutions that adopted a hybrid model grounded in the preexisting American system.  In the U.S., the research university provided a crucial add-on rather than a transformation.  In this institutionally complex, market-based system, the research university became embedded within a convoluted but highly functional structure of cross-subsidies, interwoven income streams, widely dispersed political constituencies, and a bewildering array of goals and functions. 

            At the core of the system is a delicate balance among three starkly different models of higher education.  These three roughly correspond to Clark Kerr’s famous characterization of the American system as a mix of the British undergraduate college, the American land-grant college, and the German research university (Kerr, 2001, p. 14).  The first is the populist element, the second is the practical element, and the third is the elite element.  Let me say a little about each of these and make the case for how they work to reinforce each other and shore up the overall system.  I argue that these three elements are unevenly distributed across the whole system, with the populist and practical parts strongest in the lower tiers of the system, where access is easy and job utility is central, and the elite part strongest in the upper tier.  But I also argue that all three are present in the research university at the top of the system.  Consider how all these elements come together in a prototypical flagship state university.

            The populist element has its roots in the British residential undergraduate college, which colonists had in mind when they established the first American colleges; but the changes that emerged in the U.S. in the early nineteenth century were critical.  Key was the fact that American colleges during this period were broadly accessible in a way that colleges in the U.K. never were until the advent of the red-brick universities after the Second World War.  American colleges were not located in fashionable areas in major cities but in small towns in the hinterland.  There were far too many of them for them to be elite, and the need for students meant that tuition and academic standards both had to be kept relatively low.  The American college never exuded the odor of class privilege to the same degree as Oxbridge; its clientele was largely middle class.  For the new research university, this legacy meant that the undergraduate program provided critical economic and political support. 

            From the economic perspective, undergrads paid tuition, which – through large classes and thus the need for graduate teaching assistants – supported graduate programs and the larger research enterprise.  Undergrads, who were socialized in the rituals of football and fraternities, were also the ones who identified most closely with the university, which meant that in later years they became the most loyal donors.  As doers rather than thinkers, they were also the wealthiest group of alumni donors.  Politically, the undergraduate program gave the university a broad base of community support.  Since anyone could conceive of attending the state university, the institution was never as remote or alien as the German model.  Its athletic teams and academic accomplishments were a point of pride for state residents, whether or not they or their children ever attended.  They wore the school colors and cheered for it on game days.

            The practical element has its roots in the land-grant college.  The idea here was that the university was not just an enterprise for providing liberal education for the elite but that it could also provide useful occupational skills for ordinary people.  Since the institution needed to attract a large group of students to pay the bills, the American university left no stone unturned when it came to developing programs that students might want.  It promoted itself as a practical and reliable mechanism for getting a good job.  This not only boosted enrollment, but it also sent a message to the citizens of the state that the university was making itself useful to the larger community, producing the teachers, engineers, managers, and dental hygienists that they needed.  

            This practical bent also extended to the university’s research effort, which did not focus only on ivory-tower pursuits.  Its researchers were working hard to design safer bridges, more productive crops, better vaccines, and more reliable student tests.  For example, when I taught at Michigan State I planted my lawn with Spartan grass seed, which was developed at the university.  These forms of applied research led to patents that brought substantial income back to the institution, but their most important function was to provide a broad base of support for the university among people who had no connection with it as an instructional or intellectual enterprise.  The idea was compelling: This is your university, working for you.

            The elite element has its roots in the German research university.  This is the component of the university formula that gives the institution academic credibility at the highest level.  Without it the university would just be a party school for the intellectually challenged and a trade school for job seekers.  From this angle, the university is the haven for the best thinkers, where professors can pursue intellectual challenges of the first order, develop cutting edge research in a wide array of domains, and train graduate students who will carry on these pursuits in the next generation.  And this academic aura envelops the entire enterprise, giving the lowliest freshman exposure to the most distinguished faculty and allowing the average graduate to sport a diploma burnished by the academic reputations of the best and the brightest.  The problem, of course, is that supporting professorial research and advanced graduate study is enormously expensive; research grants only provide a fraction of the needed funds. 

            So the populist and practical domains of the university are critically important components of the larger university package.  Without the foundation of fraternities and football, grass seed and teacher education, the superstructure of academic accomplishment would collapse of its own weight.  The academic side of the university can’t survive without both the financial subsidies and political support that come from the populist and the practical sides.  And the populist and practical sides rely on the academic legitimacy that comes from the elite side.  It’s the mixture of the three that constitutes the core strength of the American system of higher education.  This is why it is so resilient, so adaptable, so wealthy, and so powerful.  This is why its financial and political base is so broad and strong.  And this is why American institutions of higher education enjoy so much autonomy:  They respond to many sources of power in American society and they rely on many sources of support, which means they are not the captive of any single power source or revenue stream.

The Power of Form

            So my story about the American system of higher education is that it succeeded by developing a structure that allowed it to become both economically rich and politically autonomous.  It could tap multiple sources of revenue and legitimacy, which allowed it to avoid becoming the wholly owned subsidiary of the state, the church, or the market.  And by virtue of its structurally reinforced autonomy, college is good for a great many things.

            At last we come back to our topic.  What is college good for?  For those of us on the faculties of research universities, these institutions provide several core benefits that we see as especially important.  At the top of the list is that they preserve and promote free speech.  They are zones where faculty and students can feel free to pursue any idea, any line of argument, and any intellectual pursuit that they wish – free of the constraints of political pressure, cultural convention, or material interest.  Closely related to this is the fact that universities become zones where play is not only permissible but even desirable, where it’s ok to pursue an idea just because it’s intriguing, even though there is no apparent practical benefit that this pursuit would produce.

            This, of course, is a rather idealized version of the university.  In practice, as we know, politics, convention, and economics constantly intrude on the zone of autonomy in an effort to shape the process and limit these freedoms.  This is particularly true in the lower strata of the system.  My argument is not that the ideal is met but that the structure of American higher education – especially in the top tier of the system – creates a space of relative autonomy, where these constraining forces are partially held back, allowing the possibility for free intellectual pursuits that cannot be found anywhere else. 

            Free intellectual play is what we in the faculty tend to care about, but others in American society see other benefits arising from higher education that justify the enormous time and treasure that we devote to supporting the system.  Policymakers and employers put primary emphasis on higher education as an engine of human capital production, which provides the economically relevant skills that drive increases in worker productivity and growth in the GDP.  They also hail it as a place of knowledge production, where people develop valuable technologies, theories, and inventions that can feed directly into the economy.  And companies use it as a place to outsource much of their needs for workforce training and research-and-development. 

            These pragmatic benefits that people see coming from the system of higher education are real.  Universities truly are socially useful in such ways.  But it’s important to keep in mind that these social benefits can arise only if the university remains a preserve for free intellectual play.  Universities are much less useful to society if they restrict themselves to the training of individuals for particular present-day jobs, or to the production of research to solve current problems.  They are most useful if they function as storehouses for knowledge, skills, technologies, and theories – for which there is no current application but which may turn out to be enormously useful in the future.  They are the mechanism by which modern societies build capacity to deal with issues that have not yet emerged but sooner or later are likely to do so.

            But that is a discussion for another speech by another scholar.  The point I want to make today about the American system of higher education is that it is good for a lot of things but it was established in order to accomplish none of these things.  As I have shown, the system that arose in the nineteenth century was not trying to store knowledge, produce capacity, or increase productivity.  And it wasn’t trying to promote free speech or encourage play with ideas.  It wasn’t even trying to preserve institutional autonomy.  These things happened as the system developed, but they were all unintended consequences.  What was driving development of the system was a clash of competing interests, all of which saw colleges as a useful medium for meeting particular ends.  Religious denominations saw them as a way to spread the faith.  Town fathers saw them as a way to promote local development and increase property values.  The federal government saw them as a way to spur the sale of federal lands.  State governments saw them as a way to establish credibility in competition with other states.  College presidents and faculty saw them as a way to promote their own careers.  And at the base of the whole process of system development were the consumers, the students, without whose enrollment and tuition and donations the system would not have been able to persist.  The consumers saw the college as useful in a number of ways:  as a medium for seeking social opportunity and achieving social mobility; as a medium for preserving social advantage and avoiding downward mobility; as a place to have a good time, enjoy an easy transition to adulthood, pick up some social skills, and meet a spouse; even, sometimes, as a place to learn. 

            The point is that the primary benefits of the system of higher education derive from its form, but this form did not arise in order to produce these benefits.  We need to preserve the form in order to continue enjoying these benefits, but unfortunately the organizational foundations upon which the form is built are, on the face of it, absurd.  And each of these foundational qualities is currently under attack from the perspective of alternative visions that, in contrast, have a certain face validity.  If the attackers accomplish their goals, the system’s form, which has been so enormously productive over the years, will collapse, and with this collapse will come the end of the university as we know it.  I didn’t promise this lecture would end well, did I?

            Let me spell out three challenges that would undercut the core autonomy and synergy that makes the system so productive in its current form.  On the surface, each of the proposed changes seems quite sensible and desirable.  Only by examining the implications of actually pursuing these changes can we see how they threaten the foundational qualities that currently undergird the system.  The system’s foundations are so paradoxical, however, that mounting a public defense of them would be difficult indeed.  Yet it is precisely these traits of the system that we need to defend in order to preserve the current highly functional form of the university.  In what follows, I am drawing inspiration from the work of Suzanne Lohmann (2004, 2006), a political scientist at UCLA and the scholar who has addressed these issues most astutely.

            One challenge comes from prospective reformers of American higher education who want to promote transparency.  Who can be against that?  This idea derives from the accountability movement, which has already swept across K-12 education and is now pounding the shores of higher education.  It simply asks universities to show people what they’re doing.  What is the university doing with its money and its effort?  Who is paying for what?  How do the various pieces of the complex structure of the university fit together?  And are they self-supporting or drawing resources from elsewhere?  What is faculty credit-hour production?  How is tuition related to instructional costs?  And so on.   These demands make a lot of sense. 

            The problem, however, as I have shown today, is that the autonomy of the university depends on its ability to shield its inner workings from public scrutiny.  It relies on opacity.  Autonomy will end if the public can see everything that is going on and what everything costs.  Consider all of the cross subsidies that keep the institution afloat:  undergraduates support graduate education, football supports lacrosse, adjuncts subsidize professors, rich schools subsidize poor schools.  Consider all of the instructional activities that would wilt in the light of day; consider all of the research projects that could be seen as useless or politically unacceptable.  The current structure keeps the inner workings of the system obscure, which protects the university from intrusions on its autonomy.  Remember, this autonomy arose by accident not by design; its persistence depends on keeping the details of university operations out of public view.

            A second and related challenge comes from reformers who seek to promote disaggregation.  The university is an organizational nightmare, they say, with all of those institutes and centers, departments and schools, programs and administrative offices.  There are no clear lines of authority, no mechanisms to promote efficiency and eliminate duplication, no tools to achieve economies of scale.  Transparency is one step in the right direction, they say, but the real reform that is needed is to take apart the complex interdependencies and overlapping responsibilities within the university and then figure out how each of these tasks could be accomplished in the most cost-effective and outcome-effective manner.  Why not have a few star professors tape lectures and then offer Massive Open Online Courses at colleges across the country?  Why not have institutions specialize in what they’re best at – remedial education, undergraduate instruction, vocational education, research production, graduate student training?  Putting them together into a single institution is expensive and grossly inefficient. 

            But recall that it is precisely the aggregation of purposes and functions – the combination of the populist, the practical, and the elite – that has made the university so strong, so successful, and, yes, so useful.  This combination creates a strong base both financially and politically and allows for forms of synergy that cannot happen with a set of isolated educational functions.  The fact is that this institution can’t be disaggregated without losing what makes it the kind of university that students, policymakers, employers, and the general public find so compelling.  A key organizational element that makes the university so effective is its chaotic complexity.

            A third challenge comes not from reformers intruding on the university from the outside but from faculty members meddling with it from the inside.  The threat here arises from the dangerous practice of acting on academic principle.  Fortunately, this is not very common in academe.  But the danger is lurking in the background of every decision about faculty hires.  Here’s how it works.  You review a finalist for a faculty position in a field not closely connected to your own, and you find to your horror that the candidate’s intellectual domain seems absurd on the face of it (how can anyone take this type of work seriously?) and the candidate’s own scholarship doesn’t seem credible.  So you decide to speak against hiring the candidate and organize colleagues to support your position.  But then you happen to read a paper by Suzanne Lohmann, who points out something very fundamental about how universities work. 

            Universities are structured in a manner that protects the faculty from the outside world (that is, protecting them from the forces of transparency and disaggregation), but they are also organized in a manner that protects the faculty from each other.  The latter is the reason we have such an enormous array of departments and schools in universities.  If every historian had to meet the approval of geologists and every psychologist had to meet the approval of law faculty, no one would ever be hired. 

           The simple fact is that part of what keeps universities healthy and autonomous is hypocrisy.  Because of the Balkanized structure of university organization, we all have our own protected spaces to operate in and we all pass judgment only on our own peers within that space.  To do otherwise would be disastrous.  We don’t have to respect each other’s work across campus; we merely need to tolerate it – grumbling about each other in private and making nice in public.  You pick your faculty, we’ll pick ours.  Lohmann (2006) calls this core procedure of the academy “log-rolling.”  If we all operated on principle, if we all only approved scholars we respected, then the university would be a much diminished place.  Put another way, I wouldn’t want to belong to a university that consisted only of people I found worthy.  Gone would be the diversity of views, paradigms, methodologies, theories, and world views that makes the university such a rich place.  The result is incredibly messy, and it permits a lot of quirky – even ridiculous – research agendas, courses, and instructional programs.  But in aggregate, this libertarian chaos includes an extraordinary range of ideas, capacities, theories, and social possibilities.  It’s exactly the kind of mess we need to treasure and preserve and defend against all opponents.

            So here is the thought I’m leaving you with.  The American system of higher education is enormously productive and useful, and it’s a great resource for students, faculty, policymakers, employers, and society.  What makes it work is not its substance but its form.  Crucial to its success is its devotion to three formal qualities:  opacity, chaotic complexity, and hypocrisy.  Embrace these forms and they will keep us free.

Posted in Credentialing, Higher Education, History of education, Sociology, Uncategorized

How Credentialing Theory Explains the Extraordinary Growth in US Higher Ed in the 19th Century

Today I am posting a piece I wrote in 1995. It was the foreword to a book by David K. Brown, Degrees of Control: A Sociology of Educational Expansion and Occupational Credentialism.  

I have long been interested in credentialing theory, but this is the only place where I ever tried to spell out in detail how the theory works.  For this purpose, I draw on the case of the rapid expansion of the US system of higher education in the 19th century and its transformation at the end of the century, which is the focus of Brown’s book.  Here’s a link to a pdf of the original. 

The case is particularly fruitful for demonstrating the value of credentialing theory, because the most prominent theory of educational development simply can’t make sense of it.  Functionalist theory sees the emergence of educational systems as part of the process of modernization.  As societies become more complex, with a greater division of labor and a shift from manual to mental work, the economy requires workers with higher degrees of verbal and cognitive skills.  Elementary, secondary, and higher education arise over time in response to this need. 

The history of education in the U.S., however, poses a real problem for this explanation.  American higher education exploded in the 19th century, to the point that some 800 colleges were in existence by 1880, more than the total number in all of Europe.  It was the highest rate of colleges per 100,000 population that the world had ever seen.  The problem is that this increase was not in response to increasing demand from employers for college-educated workers.  While the rate of higher schooling was increasing across the century, the skill demands of the workforce were declining.  The growth of factory production was subdividing forms of skilled work, such as shoemaking, into a series of low-skilled tasks on the assembly line.  

This being the case, then, how can we understand the explosion of college founding in the 19th century?  Brown provides a compelling explanation, and I lay out his core arguments in my foreword.  I hope you find it illuminating.

 

Brown Cover

Preface

In this book, David Brown tackles an important question that has long puzzled scholars who wanted to understand the central role that education plays in American society: When compared with other Western countries, why did the United States experience such extraordinary growth in higher education? Whereas in most societies higher education has long been seen as a privilege that is granted to a relatively small proportion of the population, in the United States it has increasingly come to be seen as a right of the ordinary citizen. Nor was this rapid increase in accessibility a very recent phenomenon. As Brown notes, between 1870 and 1930, the proportion of college-age persons (18 to 21 years old) who attended institutions of higher education rose from 1.7% to 13.0%. And this was long before the proliferation of regional state universities and community colleges made college attendance a majority experience for American youth.

The range of possible answers to this question is considerable, with each carrying its own distinctive image of the nature of American political and social life. For example, perhaps the rapid growth in the opportunity for higher education was an expression of egalitarian politics and a confirmation of the American Dream; or perhaps it was a political diversion, providing ideological cover for persistent inequality; or perhaps it was merely an accident — an unintended consequence of a struggle for something altogether different. In politically charged terrain such as this, one would prefer to seek guidance from an author who doesn’t ask the reader to march behind an ideological banner toward a preordained conclusion, but who instead rigorously examines the historical data and allows for the possibility of encountering surprises. What the reader wants, I think, is an analysis that is both informed by theory and sensitive to historical nuance.

In this book, Brown provides such an analysis. He approaches the subject from the perspective of historical sociology, and in doing so he manages to maintain an unusually effective balance between historical explanation and sociological theory-building. Unlike many sociologists dealing with history, he never oversimplifies the complexity of historical events in the rush toward premature theoretical closure; and unlike many historians dealing with sociology, he doesn’t merely import existing theories into his historical analysis but rather conceives of the analysis itself as a contribution to theory. His aim is therefore quite ambitious – to spell out a theoretical explanation for the spectacular growth and peculiar structure of American higher education, and to ground this explanation in an analysis of the role of college credentials in American life.

Traditional explanations do not hold up very well when examined closely. Structural-functionalist theory argues that an expanding economy created a powerful demand for advanced technical skills (human capital), which only a rapid expansion of higher education could fill. But Brown notes that during this expansion most students pursued programs not in vocational-technical areas but in liberal arts, meaning that the forms of knowledge they were acquiring were rather remote from the economically productive skills supposedly demanded by employers. Social reproduction theory sees the university as a mechanism that emerged to protect the privilege of the upper-middle class behind a wall of cultural capital, during a time (with the decline of proprietorship) when it became increasingly difficult for economic capital alone to provide such protection. But, while this theory points to a central outcome of college expansion, it fails to explain the historical contingencies and agencies that actually produced this outcome. In fact, both of these theories are essentially functionalist in approach, portraying higher education as arising automatically to fill a social need — within the economy, in the first case, and within the class system, in the second.

However, credentialing theory, as developed most extensively by Randall Collins (1979), helps explain the socially reproductive effect of expanding higher education without denying agency. It conceives of higher education diplomas as a kind of cultural currency that becomes attractive to status groups seeking an advantage in the competition for social positions, and therefore it sees the expansion of higher education as a response to consumer demand rather than functional necessity. Upper classes tend to benefit disproportionately from this educational development, not because of an institutional correspondence principle that preordains such an outcome, but because they are socially and culturally better equipped to gain access to and succeed within the educational market.

This credentialist theory of educational growth is the one that Brown finds most compelling as the basis for his own interpretation. However, when he plunges into a close examination of American higher education, he finds that Collins’ formulation of this theory often does not coincide very well with the historical evidence. One key problem is that Collins does not examine the nature of labor market recruitment, which is critical for credentialist theory, since the pursuit of college credentials only makes sense if employers are rewarding degree holders with desirable jobs. Brown shows that between 1800 and 1880 the number of colleges in the United States grew dramatically (as Collins also asserts), but that enrollments at individual colleges were quite modest. He argues that this binge of institution-creation was driven by a combination of religious and market forces but not (contrary to Collins) by the pursuit of credentials. There simply is no good evidence that a college degree was much in demand by employers during this period. Instead, a great deal of the growth in the number of colleges was the result of the desire by religious and ethnic groups to create their own settings for producing clergy and transmitting culture. In a particularly intriguing analysis, Brown argues that an additional spur to this growth came from markedly less elevated sources — local boosterism and land speculation — as development-oriented towns sought to establish colleges as a mechanism for attracting land buyers and new residents.

Brown’s version of credentialing theory identifies a few central factors that are required in order to facilitate a credential-driven expansion of higher education, and by 1880 several of these were already in place. One such factor is substantial wealth. Higher education is expensive, and expanding it for reasons of individual status attainment rather than for societal necessity is a wasteful use of a nation’s resources; it is only feasible for a very wealthy country. The United States was such a country in the late nineteenth century. A second factor is a broad institutional base. At this point, the United States had the largest number of colleges per million residents that the country has ever seen, before or since. When combined with the small enrollments at each college, this meant that there was a great potential for growth within an already existing institutional framework. This potential was reinforced by a third factor, decentralized control. Colleges were governed by local boards rather than central state authorities, thus encouraging entrepreneurial behavior by college leaders, especially in the intensely competitive market environment they faced.

However, three other essential factors for rapid credential-based growth in higher education were still missing in 1880. For one thing, colleges were not going to be able to attract large numbers of new students, who were after all unlikely to be motivated solely by the love of learning, unless they could offer these students both a pleasant social experience and a practical educational experience — neither of which was the norm at colleges for most of the nineteenth century. Another problem was that colleges could not function as credentialing institutions until they had a monopoly over a particular form of credentials, but in 1880 they were still competing directly with high schools for the same students. Finally, their credentials were not going to have any value on the market unless employers began to demonstrate a distinct preference for hiring college graduates, and such a preference was still not obvious at this stage.

According to Brown, the 1880s saw a major shift in all three of these factors. The trigger for this change was a significant oversupply of institutions relative to existing demand. In this life-or-death situation, colleges desperately sought to increase the pool of potential students. It is no coincidence that this period marked the rapid diffusion of efforts to improve the quality of social life on campuses (from the promotion of athletics to the proliferation of fraternities), and also the shift toward a curriculum with a stronger claim to practicality (emphasizing modern languages and science over Latin and Greek). At the same time, colleges sought to guarantee a flow of students from feeder institutions, which required them to establish a hierarchical relationship with high schools. The end of the century was the period in which colleges began requiring completion of a high school course as a prerequisite for college admission instead of the traditional entrance examination. This system provided high schools with a stable outlet for their graduates and colleges with a predictable flow of reasonably well-prepared students. However, none of this would have been possible if the college degree had not acquired significant exchange value in the labor market. Without this, there would have been only social reasons for attending college, and high schools would have had little incentive to submit to college mandates.

Perhaps Brown’s strongest contribution to credential theory is his subtle and persuasive analysis of the reasoning that led employers to assert a preference for college graduates in the hiring process. Until now, this issue has posed a significant, perhaps fatal, problem for credentialing theory, which has asked the reader to accept two apparently contradictory assertions about credentials. First, the theory claims that a college degree has exchange value but not necessarily use value; that is, it is attractive to the consumer because it can be cashed in on a good job more or less independently of any learning that was acquired along the way. Second, this exchange value depends on the willingness of employers to hire applicants based on credentials alone, without direct knowledge of what these applicants know or what they can do. However, this raises a serious question about the rationality of the employer in this process. After all, why would an employer, who presumably cares about the productivity of future employees, hire people based solely on a college’s certification of competence in the absence of any evidence for that competence?

Brown tackles this issue with a nice measure of historical and sociological insight. He notes that the late nineteenth century saw the growing rationalization of work, which led to the development of large-scale bureaucracies to administer this work within both private corporations and public agencies. One result was the creation of a rapidly growing occupational sector for managerial employees who could function effectively within such a rationalized organizational structure. College graduates seemed to fit the bill for this kind of work. They emerged from the top level of the newly developed hierarchy of educational institutions and therefore seemed like natural candidates for management work in the upper levels of the new administrative hierarchy, which was based not on proprietorship or political office but on apparent skill. And what kinds of skills were called for in this line of work? What the new managerial employees needed was not so much the technical skills posited by human capital theory, he argues, but a general capacity to work effectively in a verbally and cognitively structured organizational environment, and also a capacity to feel comfortable about assuming positions of authority over other people.

These were things that the emerging American college could and indeed did provide. The increasingly corporate social structure of student life on college campuses provided good socialization for bureaucratic work, and the process of gaining access to and graduating from college provided students with an institutionalized confirmation of their social superiority and qualifications for leadership. Note that these capacities were substantive consequences of having attended college, but they were not learned as part of the college’s formal curriculum. That is, the characteristics that qualified college graduates for future bureaucratic employment were a side effect of their pursuit of a college education. In this sense, then, the college credential had a substantive meaning for employers that justified them in using it as a criterion for employment — less for the human capital that college provided than for the social capital that college conferred on graduates. Therefore, this credential, Brown argues, served an important role in the labor market by reducing the uncertainty that plagued the process of bureaucratic hiring. After all, how else was an employer to gain some assurance that a candidate could do this kind of work? A college degree offered a claim to competence, which had enough substance behind it to be credible even if this substance was largely unrelated to the content of the college curriculum.

By the 1890s all the pieces were in place for a rapid expansion of college enrollments, strongly driven by credentialist pressures. Employers had reason to give preference to college graduates when hiring for management positions. As a result, middle-class families had an increasing incentive to provide their children with privileged access to an advantaged social position by sending them to college. For the students themselves, this extrinsic reward for attending college was reinforced by the intrinsic benefits accruing from an attractive social life on campus. All of this created a very strong demand for expanding college enrollments, and the pre-existing institutional conditions in higher education made it possible for colleges to respond to this demand in an aggressive fashion. There were a thousand independent institutions of higher education, accustomed to playing entrepreneurial roles in a competitive educational market, that were eager to capitalize on the surge of interest in attending college and to adapt themselves to the preferences of these new tuition-paying consumers. The result was a powerful and unrelenting surge of expansion in college enrollments that continued for the next century.

 

Brown provides a persuasive answer to the initial question about why American higher education expanded at such a rapid rate. But at this point the reader may well respond by asking the generic question that one should ask of any analyst, and that is, “So what?” More specifically, in light of the particular claims of this analysis, the question becomes: “What difference does it make that this expansion was spurred primarily by the pursuit of educational credentials?” In my view, at least, the answer to that question is clear. The impact of credentialism on both American society and the American educational system has been profound — profoundly negative. Consider some of the problems it has caused.

One major problem is that credentialism is astonishingly inefficient. Education is the largest single public investment made by most modern societies, and this is justified on the grounds that it provides a critically important contribution to the collective welfare. The public value of education is usually calculated as some combination of two types of benefits, the preparation of capable citizens (the political benefit) and the training of productive workers (the economic benefit). However, the credentialist argument advanced by Brown suggests that these public benefits are not necessarily being met and that the primary beneficiaries are in fact private individuals. From this perspective, higher education (and the educational system more generally) exists largely as a mechanism for providing individuals with a cultural commodity that will give them a substantial competitive advantage in the pursuit of social position. In short, education becomes little more than a vast public subsidy for private ambition.

The practical effect of this subsidy is the production of a glut of graduates. The difficulty posed by this outcome is not that the population becomes overeducated (since such a state is difficult to imagine) but that it becomes overcredentialed, since people are pursuing diplomas less for the knowledge they are thereby acquiring than for the access that the diplomas themselves will provide. The result is a spiral of credential inflation; for as each level of education in turn gradually floods with a crowd of ambitious consumers, individuals have to keep seeking ever higher levels of credentials in order to move a step ahead of the pack. In such a system nobody wins. Consumers have to spend increasing amounts of time and money to gain additional credentials, since the swelling number of credential holders keeps lowering the value of credentials at any given level. Taxpayers find an increasing share of scarce fiscal resources going to support an educational chase with little public benefit. Employers keep raising the entry-level education requirements for particular jobs, but they still find that they have to provide extensive training before employees can carry out their work productively. At all levels, this is an enormously wasteful system, one that rich countries like the United States can increasingly ill afford and that less developed countries, which imitate the U.S. educational model, find positively impoverishing.

A second major problem is that credentialism undercuts learning. In both college and high school, students are all too well aware that their mission is to do whatever it takes to acquire a diploma, which they can then cash in on what really matters — a good job. This has the effect of reifying the formal markers of academic progress — grades, credits, and degrees — and encouraging students to focus their attention on accumulating these badges of merit for the exchange value they offer. But at the same time this means directing attention away from the substance of education, reducing student motivation to learn the knowledge and skills that constitute the core of the educational curriculum. Under such conditions, it is quite rational, even if educationally destructive, for students to seek to acquire their badges of merit at a minimum academic cost, to gain the highest grade with the minimum amount of learning. This perspective is almost perfectly captured by a common student question, one that sends chills down the back of the learning-centered teacher but that makes perfect sense for the credential-oriented student: “Is this going to be on the test?” (Sedlak et al., 1986, p. 182). We have credentialism to thank for the aversion to learning that, to a great extent, lies at the heart of our educational system.

A third problem posed by credentialism is social and political more than educational. According to credentialing theory, the connection between social class and education is neither direct nor automatic, as suggested by social reproduction theory. Instead, the argument goes, market forces mediate between the class position of students and their access to and success within the educational system. That is, there is general competition for admission to institutions of higher education and for levels of achievement within these institutions. Class advantage is no guarantee of success in this competition, since such factors as individual ability, motivation, and luck all play a part in determining the result. Market forces also mediate between educational attainment (the acquisition of credentials) and social attainment (the acquisition of a social position). Some college degrees are worth more in the credentials market than others, and they provide privileged access to higher level positions independent of the class origins of the credential holder.

However, in both of these market competitions, one for acquiring the credential and the other for cashing it in, a higher class position provides a significant competitive edge. The economic, cultural, and social capital that come with higher class standing gives the bearer an advantage in getting into college, in doing well at college, and in translating college credentials into desirable social outcomes. The market-based competition that characterizes the acquisition and disposition of educational credentials gives the process a meritocratic set of possibilities, but the influence of class on this competition gives it a socially reproductive set of probabilities as well. The danger is that, as a result, a credential-driven system of education can provide meritocratic cover for socially reproductive outcomes. In the single-minded pursuit of educational credentials, both student consumers and the society that supports them can lose sight of an all-too-predictable pattern of outcomes that is masked by the headlong rush for the academic gold.

Posted in Culture, History, Politics, Populism, Sociology

Colin Woodard: Maps that Show the Historical Roots of Current US Political Faultlines

This post is a commentary on Colin Woodard’s book American Nations: A History of the Eleven Rival Regional Cultures of North America.  

Woodard argues that the United States is not a single national culture but  a collection of national cultures, each with its own geographic base.  The core insight for this analytical approach comes from “Wilbur Zelinsky of Pennsylvania State University [who] formulated [a] theory in 1973, which he called the Doctrine of First Effective Settlement. ‘Whenever an empty territory undergoes settlement, or an earlier population is dislodged by invaders, the specific characteristics of the first group able to effect a viable, self-perpetuating society are of crucial significance for the later social and cultural geography of the area, no matter how tiny the initial band of settlers may have been,’ Zelinsky wrote. ‘Thus, in terms of lasting impact, the activities of a few hundred, or even a few score, initial colonizers can mean much more for the cultural geography of a place than the contributions of tens of thousands of new immigrants a few generations later.’”

I’m suspicious of theories that smack of cultural immutability and cultural determinism, but Woodard’s account is more sophisticated than that.  His is a story of the power of founders in a new institutional setting, who lay out the foundational norms for a society that lacks any cultural history of its own or which expelled the preexisting cultural group (in the U.S. case, Native Americans).  So part of the story is about the acculturation of newcomers into an existing worldview.  But another part is the highly selective nature of immigration, since new arrivals often seek out places to settle that are culturally compatible.  They may target a particular destination because of its cultural characteristics, creating a pipeline of like-minded immigrants; or they may choose to move on to another territory if the first port of entry is not to their taste.  Once established, these cultures often expanded westward as the country developed, extending the size and geographical scope of each nation.

Why does he insist on calling them nations?  At first this bothered me a bit, but then I realized he was using the term “nation” in Benedict Anderson’s sense as “imagined communities.”  Tidewater and Yankeedom are not nation states; they are cultural components of the American state.  But they do act as nations for their citizens.  Each of these nations is a community of shared values and worldviews that binds people together who have never met and often live far away.  The magic of the nation is that it creates a community of common sense and purpose that extends well beyond the reach of normal social interaction.  If you’re Yankee to the core, you can land in a strange town in Yankeedom and feel at home.  These are my people.  I belong here.

He argues that these national groupings continue to have a significant impact on the cultural geography of the US, shaping people’s values, styles of social organization, views of religion and government, and ultimately how they vote.  The kicker is the alignment between the spatial distribution of these cultures and the current voting patterns.  He lays out this argument succinctly in a 2018 op-ed he wrote for the New York Times.  I recommend reading it.

The whole analysis is neatly summarized in the two maps he deployed in that op-ed, which I have reproduced below.

The Map of America’s 11 Nations

11 Nations Map

This first map shows the geographic boundaries of the various cultural groupings in the U.S.  It all started on the east coast with the founding cultural binary that shaped the formation of the country in the late 18th century — New England Yankees and Tidewater planters.  He argues that they are direct descendants of the two factions in the English civil war of the mid 17th century:  the Yankees as the Calvinist Roundheads, who (especially after being routed by the Restoration in England) sought to establish a new theocratic society in the northeast founded on strong government, and the Tidewater planters as the Anglican Cavaliers, who sought to reproduce the decentralized English aristocratic ideal on Virginia plantations.  In between was the Dutch entrepot of New York, focused on commerce and multiculturalism (think “Hamilton”), and the Quaker colony in Pennsylvania, founded on equality and suspicion of government.  The US constitution was an effort to balance all of these cultural priorities within a single federal system.

Then came two other groups that didn’t fit well into any of these four cultural enclaves.  The immigrants to the Deep South originated in the slave societies of the British West Indies, bringing with them a rigid caste structure and a particularly harsh version of chattel slavery.  Immigrants to Greater Appalachia came from the Scots-Irish clan cultures in Northern Ireland and the Scottish borderlands, with a strong commitment to individual liberty, resentment of government, and a taste for violence.

Tidewater and Yankeedom dominated the presidency and federal government for the country’s first 40 years.  But in 1828 the US elected its first president from rapidly expanding Appalachia, Andrew Jackson.  And by then the massive westward expansion of the Deep South, along with the extraordinary wealth and power that accrued from its cotton-producing slave economy, created the dynamics leading to the Civil War.  This pitted the four nations of the northeast against Tidewater and Deep South, with Appalachia split between the two, resentful of both Yankee piety and Southern condescension.  The multiracial and multicultural nations of French New Orleans and the Mexican southwest (El Norte) were hostile to the Deep South and resented its efforts to expand its dominion westward.

The other two major cultural groupings emerged in the mid 19th century.  The thin strip along the west coast consisted of Yankees in the cities and Appalachians in the back country, combining the utopianism of the former with the radical individualism of the latter.  The Far West is the one grouping that is based not on cultural geography but physical geography.  A vast arid area unsuited to farming, it became the domain of the only two entities powerful enough to control it — large corporations (railroad and mining), which exploited it, and the federal government, which owned most of the land and provided armed protection from Indians.

So let’s jump ahead and look at the consequences of this cultural landscape for our current political divisions.  Examine the electoral map for the 2016 presidential race, which shows the vote in Woodard’s 11 nations.

The 2016 Electoral Map

2016 Vote Map

Usually you see voting maps with results by state.  Here instead we see voting results by county, which allows for a more fine-tuned analysis.  Woodard assigns each county to one of the 11 “nations” and then shows the red or blue vote margin for each cultural grouping.

It’s striking to see how well the nations match the vote.  The strongest vote for Clinton came from the Left Coast, El Norte, and New Netherland, with substantial support from Yankeedom, Tidewater, and Spanish Caribbean.  Midlands was only marginally supportive of the Democrat.  Meanwhile the Deep South and Far West were modestly pro-Trump (about as much as Yankeedom was pro-Clinton), but the true kicker was Appalachia, which voted overwhelmingly for Trump (along with New France in southern Louisiana).

Appalachia forms the heart of Trump’s electoral base of support.  It’s an area that resents intellectual, cultural, and political elites; that turns away from mainstream religious denominations in favor of evangelical sects; and that lags behind in the 21st century information economy.  As a result, this is the heartland of populism.  It’s no wonder that the portrait on the wall in Trump’s Oval Office portrays Andrew Jackson.

Now one more map, this time showing where in the country people have been social distancing and where they haven’t, as measured by how much they were traveling away from home (using cell phone data).  It comes from a piece Woodard recently published in Washington Monthly.

Social Distancing Map

Once again, the patterns correspond nicely to the 11 nations.  Here’s how Woodard summarizes the data:

Yankeedom, the Midlands, New Netherland, and the Left Coast show dramatic decreases in movement – 70 to 100 percent in most counties, whether urban or rural, rich or poor.

Across much of Greater Appalachia, the Deep South and the Far West, by contrast, travel fell by only 15 to 50 percent. This was true even in much of Kentucky, the interior counties of Washington and Oregon, where Democratic governors had imposed a statewide shelter-in-place order.

Not surprisingly, most of the states where governors imposed stay-at-home orders by March 27 are located in or dominated by one or a combination of the communitarian nations. This includes states whose governors are Republicans: Ohio, New Hampshire, Vermont, and Massachusetts.

Most of the laggard governors lead states dominated by individualistic nations. In the Deep South and Greater Appalachia you find Florida’s Ron DeSantis, who allowed spring breakers to party on the beaches. There’s Brian Kemp of Georgia who left matters in the hands of local officials for much of the month and then, on April 2, claimed to have just learned the virus can be transmitted by asymptomatic individuals. You have Asa Hutchinson of Arkansas, who on April 7 denied mayors the power to impose local lockdowns. And then there’s Mississippi’s Tate Reeves, who resisted action because “I don’t like government telling private business what they can and cannot do.”

Nothing like a pandemic to show what your civic values are.  Is it all about us or all about me?

Posted in Power, Sociology, Students, Teaching

Willard Waller on the Power Struggle between Teachers and Students

In 1932, Willard Waller published his classic book, The Sociology of Teaching.  For years I used a chapter from it (“The Teacher-Pupil Relationship“) as a way to get students to think about the problem that most frightens rookie teachers and that continues to haunt even the most experienced practitioners:  how to gain and maintain control of the classroom.

The core problems facing you as a teacher in the classroom are these:  students radically outnumber you; they don’t want to be there; and your power to get them to do what you want is sharply limited.  Otherwise, teaching is a piece of cake.

They outnumber you:  Teaching is one of the few professions that are practiced in isolation from other professionals.  Most classrooms are self-contained structures with one teacher and 25 or 30 students, so teachers have to ply their craft behind closed doors without the support of their peers.  You can commiserate with colleagues about your class in the bar after work, but during the school day you are on your own, left to figure out a way to maintain control that works for you.

They’re conscripts:   Most professionals have voluntary clients, who come to them seeking help with a problem: write my will, fix my knee, do my taxes.  Students are not like that.  They’re in the classroom under compulsion.  The law mandates school attendance and so does the job market, since the only way to get a good job is to acquire the right educational credentials.  As a result, as a teacher you have to figure out how to motivate this group of conscripts to follow your lead and learn what you teach.  This poses a huge challenge, to face a room full of students who may be thinking, “Teach me, I dare you.”

Your powers are limited:   You have some implied authority as an adult and some institutional authority as the agent of the school, but the consequences students face for resisting you are relatively weak:  a low grade, a timeout in the back of the room, a referral to the principal, or a call to the parent.  In the long run, resisting school can ruin your future by consigning you to a bad job, low pay, and a shorter life.  And teachers try to use this angle:  Listen up, you’re going to need this some day.  But the long run is not very meaningful to kids, for whom adulthood is a distant fantasy but the reality of life in the classroom is here and now.  As a result, teachers rely on a kind of confidence game, pretending they have more power than they do and trying to keep students from realizing the truth.  You can only issue a few threats before students begin to realize how hollow they are.

One example of the limits of teacher power is something I remember teachers saying when I was in elementary school:  “Don’t let me see you do that again!”  At the time this just meant “Don’t do it,” but now I’ve come to interpret the admonition more literally:  “Don’t let me see you do that again!”  If I see you, I’ll have to call you on it in order to put down your challenge to my authority; but if you do it behind my back, I don’t have to respond and can save my ammunition for a direct threat.

Here’s how Waller sees the problem:

The weightiest social relationship of the teacher is his relationship to his students; it is this relationship which is teaching.  It is around this relationship that the teacher’s personality tends to be organized, and it is in adaptation to the needs of this relationship that the qualities of character which mark the teacher are produced. The teacher-pupil relationship is a special form of dominance and subordination, a very unstable relationship and in quivering equilibrium, not much supported by sanction and the strong arm of authority, but depending largely upon purely personal ascendancy.  Every teacher is a taskmaster and every taskmaster is a hard man….

Ouch.  He goes on to describe the root of the conflict between teachers and students in the classroom:

The teacher-pupil relationship is a form of institutionalized dominance and subordination. Teacher and pupil confront each other in the school with an original conflict of desires, and however much that conflict may be reduced in amount, or however much it may be hidden, it still remains. The teacher represents the adult group, ever the enemy of the spontaneous life of groups of children. The teacher represents the formal curriculum, and his interest is in imposing that curriculum upon the children in the form of tasks; pupils are much more interested in life in their own world than in the desiccated bits of adult life which teachers have to offer. The teacher represents the established social order in the school, and his interest is in maintaining that order, whereas pupils have only a negative interest in that feudal superstructure.

I’ve always resonated with this depiction of the school curriculum:  “desiccated bits of adult life.”  Why indeed would students develop an appetite for the processed meat that emerges from textbooks?  Why would they be eager to learn the dry as toast knowledge that constitutes the formal curriculum, disconnected from context and bereft of meaning?

Waller Book Cover

An additional insight I gain from Waller is this:  that teaching has a greater impact on teachers than on students.

Conflict is in the role, for the wishes of the teacher and the student are necessarily divergent, and more conflict because the teacher must protect himself from the possible destruction of his authority that might arise from this divergence of motives. Subordination is possible only because the subordinated one is a subordinate with a mere fragment of his personality, while the dominant one participates completely. The subject is a subject only part of the time and with a part of himself, but the king is all king.

What a great insight.  Students can phone it in.  They can pretend to be listening while lost in their own fantasies.  But teachers don’t enjoy this luxury.  They need to be totally immersed in the teacher role, making it a component of self and not a cloak lightly worn.  “The subject is a subject only part of the time and with a part of himself, but the king is all king.”

Here he talks about the resources that teachers and students bring to the struggle for power in the classroom:

Whatever the rules that the teacher lays down, the tendency of the pupils is to empty them of meaning. By mechanization of conformity, by “laughing off” the teacher or hating him out of all existence as a person, by taking refuge in self-initiated activities that are always just beyond the teacher’s reach, students attempt to neutralize teacher control. The teacher, however, is striving to read meaning into the rules and regulations, to make standards really standards, to force students really to conform. This is a battle which is not unequal. The power of the teacher to pass rules is not limited, but his power to enforce rules is, and so is his power to control attitudes toward rules.

He goes on to wrap up this point, repeating it in different forms in order to bring it home.

Teaching makes the teacher. Teaching is a boomerang that never fails to come back to the hand that threw it. Of teaching, too, it is true, perhaps, that it is more blessed to give than to receive, and it also has more effect. Between good teaching and bad there is a great difference where students are concerned, but none in this, that its most pronounced effect is upon the teacher. Teaching does something to those who teach.

I love this stuff, and students who have been teachers often appreciate the way he gives visibility to the visceral struggle for control that they experienced in the classroom.  But for a lot of students, teachers or not, he’s a hard sell.  One complaint is that he’s sexist.  Of course he is.  The teacher is always “he” and the milieu he’s describing has a masculine feel, focused more on power over students than on engagement with them.  But so what?  The power issue in the classroom is as real for female as male teachers.

A related complaint is that the situation he describes is dated; things are different in classrooms now than they were in the 1930s.  The teacher-student relationship today is warmer, more informal, more focused on drawing students into the process of learning than on driving them toward it.  In this context, teachers who exercise power in the classroom can just be seen as bad teachers.  Good teachers take a progressive approach, creating an atmosphere of positive feeling in which students and teachers like each other and interact through exchange rather than dictation.

Much of this is true, I think.  Classrooms are indeed warmer and more informal places than they used to be, as Larry Cuban has pointed out in his work.  But that doesn’t mean that the power struggle has disappeared.  Progressive teachers are engaged in the eternal pedagogical practice of getting students to do what teachers want.  This is an exercise in power, but contemporary teachers are just sneakier about it.  They find ways of motivating student compliance with their wishes through inducement, personal engagement, humor, and fostering affectionate connections with their students.

The most effective use of power is the one that is least visible.  Better to have students feel that what they’re doing in the classroom is the result of their own choice rather than the dictate of the teacher.  But this is still a case of a teacher imposing her will on students, and it’s still true that without imposing her will she won’t be able to teach effectively.  Waller just scrapes off the rose-tinted film of progressive posturing from the window into teaching, so you can see for yourself what’s really at stake in the pedagogical exchange.

It helps to realize that The Sociology of Teaching was used as a textbook for students who were preparing to become teachers.  In it, his voice is that of a grizzled homicide detective lecturing bright-eyed students at the police academy, revealing the true nature of the job they’re embarking on.  David Cohen caught Waller’s vision perfectly in a lovely essay, “Willard Waller, On Hating School and Loving Education,” which I highly recommend.  From his perspective, Waller was a jaded progressive, who pined for schools that were true to the progressive spirit but wanted to warn future teachers about the grim reality that was actually awaiting them.

Waller’s book has been out of print for years, but you can find a scanned version here.  Enjoy.

Posted in Inequality, School organization, Schooling, Sociology

Two Cheers for School Bureaucracy

This post is a piece I wrote for Kappan, published in the March 2020 edition.  Here’s a link to the PDF.

Bureaucracies are often perceived as inflexible, impersonal, hierarchical, and too devoted to rules and red tape. But here I make a case for these characteristics being a positive in the world of public education. U.S. schools are built within a liberal democratic system, where the liberal pursuit of self-interest is often in tension with the democratic pursuit of egalitarianism. In recent years, I argue, schools have tilted toward the liberal side, enabling privileged families to game the system to help their children get ahead. In such a system, an impersonal bureaucracy stands as a check that ensures that the democratic side of schooling, in which all children are treated equally, remains in effect.

 

Cover page from the Kappan magazine version of Two Cheers

 

Two Cheers for School Bureaucracy

By David F. Labaree

To call an organization “bureaucratic” has long been taken to mean that it is inflexible, impersonal, hierarchical, and strongly favors a literal rather than substantive interpretation of rules. In the popular imagination, bureaucracies make it difficult to accomplish whatever you want to do, forcing you to wade through a relentless proliferation of red tape.

School bureaucracy is no exception to this rule. Teachers, students, administrators, parents, citizens, reformers, and policymakers have long railed against it as a barrier that stands between them and the kind of schools they want and need. My aim here is to provide a little pushback against this received wisdom by proposing a modest defense of school bureaucracy. My core assertion is this: Bureaucracy may make it hard to change schools for the better, but at the same time it helps keep schools from turning for the worse.

Critiques of bureaucracy

Criticisms of school bureaucracy have taken different forms over the years. When I was in graduate school in the 1970s, the critique came from the left. From that perspective, the bureaucracy was a top-down system in which those at the top (policy makers, administrators) impose their will on the actors at the bottom (teachers, students, parents, and communities). Because the bureaucracy was built within a system that perpetuated inequalities of class, race, and gender, it tended to operate in a way that made sure that White males from the upper classes maintained their position, and that stifled grassroots efforts to bring about change from below. Central critical texts at the time were Class, Bureaucracy, and Schools, published in 1971 by Michael Katz (who was my doctoral advisor at the University of Pennsylvania) and Schooling in Capitalist America, published in 1976 by Samuel Bowles and Herbert Gintis.

By the 1990s, however, attacks on school bureaucracy started to come from the right. Building on the Reagan-era view of government as the problem rather than the solution, critics in the emergent school choice movement began to develop a critique of bureaucracy as a barrier to school effectiveness. The central text then was Politics, Markets, and America’s Schools by John Chubb and Terry Moe (1990), who argued that organizational autonomy was the key factor that made private and religious schools more effective than public schools. Because they didn’t have to follow the rigid rules laid down by the school-district bureaucracy, they were free to adapt to families’ demands for the kind of school that met their children’s needs. To Chubb and Moe, state control of schools inevitably stifles the imagination and will of local educators. According to their analysis, democratic control of schools fosters a bureaucratic structure to make sure all schools adhere to political admonitions from above. They proposed abandoning state control, releasing schools from the tyranny of bureaucracy and politics so they could respond to market pressures from educational consumers.

So the only thing the left and the right agree on is that school bureaucracy is a problem, one that arises from the very nature of bureaucracy itself — an organizational system defined as rule by offices (bureaus) rather than by people. The central function of any bureaucracy is to be a neutral structure that carries the aims of its designers at the top down to the ground level where the action takes place. Each actor in the system plays a role that is defined by their particular job description and aligned with the organization’s overall purpose, and the nature of this role is independent of the individual who fills it. Actors are interchangeable, but the roles remain. The problem arises if you want something from the bureaucracy that it is not programmed to provide. In that case, the organization does indeed come to seem inflexible, impersonal, hierarchical, and rigidly committed to following the rules.

The bureaucracy of schools

Embedded within the structure of the school bureaucracy are the contradictory values of liberal democracy. Liberalism brings a strong commitment to individual liberty, preservation of private property, and a tolerance of the kinds of social inequalities that arise if you leave people to pursue their own interests without state interference. It sees education as a private good (Labaree, 2018). These are the characteristics of school bureaucracy — private interests promoting outcomes that may be unequal — that upset the left. Democracy, on the other hand, brings a strong commitment to political and social equality, in which the citizenry establishes schooling for its collective betterment, and the structure of schooling seeks to provide equal benefits to all students. It sees education as a public good. These are the characteristics — collectivist and egalitarian — that upset the right.

Over the years, I have argued — in books such as How to Succeed in School without Really Learning (1997) and Someone Has to Fail (2010) — that the balance between the liberal and democratic in U.S. schools has tilted sharply toward the liberal. Increasingly, we treat schooling as a private good, whose benefits accrue primarily to the educational consumer who receives the degree. It has become the primary way for people to get ahead in society and a primary way for people who are already ahead to stay that way. It both promotes access and preserves advantage. Families that enjoy a social advantage have become increasingly effective at manipulating the educational system to ensure that their children will enjoy this same advantage. In a liberal democracy, where we are reluctant to constrain individual liberty, privileged parents have been able to game the structure of schooling to provide advantages for their children at the expense of other people’s children. They threaten to turn education into a zero-sum game whose winners get the best jobs.

Gaming the system

So how do upper-middle-class families boost their children’s chances for success in this competition? The first and most obvious step is to buy a house in a district with good schools. Real estate agents know that they’re selling a school system along with a house — I recall an agent once telling me not to consider a house on the other side of the street because it was in the wrong district — and the demand in areas with the best schools drives up housing prices. If you can’t move to such a district, you enter the lottery to gain access to the best schools of choice in town. Failing that, you send your children to private schools. Then, once you’ve placed them in a good school, you work to give your children an edge within that school. You already have a big advantage if you are highly educated and thus able to pass on to your children the cultural capital that constitutes the core of what schools teach and value. If students come to school already adept at the verbal and cognitive and behavioral skills that schools seek to instill, then they have a leg up over students who must rely on the school alone to teach them these skills.

In addition, privileged parents have a wealth of experience at doing school at the highest levels, and they use this social capital to game the system in favor of their kids: You work to get your children into the class of the best available teacher, then push to get them into the top reading group and the gifted and talented program. When they get to high school, you steer them into the top academic track and the most advanced placement classes, while also rounding out their college admissions portfolios with an impressive array of extracurricular activities and volunteer work. Then comes the race to get into the best college (meaning the one with the most selective admissions), using an array of techniques including the college tour, private admissions counselors, test prep tutoring, legacies, social networks, and strategic donations. Ideally, you save hundreds of thousands of dollars by securing this elite education within the public system. But whether you send your kids to public or private school, you seek out every conceivable way to mark them as smarter and more accomplished and more college-admissible than their classmates.

At first glance, these frantic efforts by upper-middle class parents to work the system for the benefit of their children can seem comically overwrought. Children from economically successful and highly educated families do better in school and in life than other children precisely because of the economic, cultural, and social advantages they have from birth. So why all the fuss about getting kids into the best college instead of one of the best colleges? The fix is in, and it’s in their favor, so relax.

The anxiety about college admissions among these families is not irrational (see, for example, Doepke & Zilibotti, 2019). It arises from two characteristics of the system. First, in modern societies social position is largely determined by educational attainment rather than birth. Your parents may be doctors, but they can’t pass the family business on to their children. Instead, you must trace the same kind of stellar path through the educational system that your parents did. This leads to the second problem. If you’re born at the top of the system, the only mobility available to you is downward. And because jobs are allocated according to educational attainment, there are always a number of smart and motivated poor kids who may win the academic contest instead of you, who may not be as smart or motivated. There’s a real chance that you will end up at a lower social position than your parents, so your parents feel pressure to leave no stone unturned in the effort to give you an educational edge.

The bureaucracy barrier

Here is where bureaucracy enters the scene, as it can create barriers to the most affluent parents’ efforts to guarantee the success of their children. The school system, as a bureaucracy established in part with the egalitarian values of its democratic control structure, just doesn’t think your children are all that special. This is precisely the problem Chubb and Moe and other choice supporters have identified.

When we’re talking about a bureaucracy, roles are roles and rules are rules. The role of the teacher is to serve all students in the class and not just yours. School rules apply to everyone, so you can’t always be the exception. Get over it. At one level, your children are just part of the crowd of students in their school, subject to the same policies and procedures and educational experiences as all of the others. By and large, privileged parents don’t want to hear that.

So school bureaucracies sometimes succeed in rolling back a few of the structures that privilege upper-middle-class students.  They seek to eliminate ability grouping in favor of cooperative learning, abandon gifted programs for the few in favor of using the pedagogies of these programs for the many, and reduce high school tracking by creating heterogeneous classrooms.

Of course, this doesn’t mean that the bureaucracy always or even usually wins out in the competition with parents seeking special treatment for their children.  Parents often succeed in fighting off efforts to eliminate ability groups, tracks, gifted programs, and other threats.  Private interests are relentless in trying to obtain private schooling at public expense, but every impediment to getting their way is infuriating to parents lobbying for privilege.

For these parents, the school bureaucracy becomes the enemy, which you need to bypass, suborn, or overrule in your effort to turn school to the benefit of your children. At the same time, this same bureaucracy becomes the friend and protector of the democratic side of liberal democratic schooling. Without it, empowered families would proceed unimpeded in their quest to make schooling a purely private good. So two cheers for bureaucracy.

References

Bowles, S., & Gintis, H. (1976). Schooling in capitalist America. New York, NY: Basic Books.

Chubb, J., & Moe, T. (1990). Politics, markets, and America’s schools. Washington, DC: Brookings.

Doepke, M., & Zilibotti, F. (2019). The economic roots of helicopter parenting. Phi Delta Kappan, 100(7), 22-27.

Katz, M. (1971). Class, bureaucracy, and schools. New York, NY: Praeger.

Labaree, D. L. (1997). How to succeed in school without really learning. New Haven, CT: Yale University Press.

Labaree, D. L. (2010). Someone has to fail. Cambridge, MA: Harvard University Press.

Labaree, D. L. (2018). Public schools for private gain: The declining American commitment to serving the public good. Phi Delta Kappan, 100(3), 8-13.

AUTHOR

DAVID F. LABAREE (dlabaree@stanford.edu; @DLabaree) is Lee L. Jacks Professor of Education, emeritus, at the Stanford University Graduate School of Education in Palo Alto, CA. He is the author, most recently, of A Perfect Mess: The Unlikely Ascendancy of American Higher Education (University of Chicago Press, 2017).


ABSTRACT

Bureaucracies are often perceived as inflexible, impersonal, hierarchical, and too devoted to rules and red tape. But David Labaree makes a case for these characteristics being a positive in the world of public education. U.S. schools are built within a liberal democratic system, where the liberal pursuit of self-interest is often in tension with the democratic pursuit of egalitarianism. In recent years, Labaree argues, schools have tilted toward the liberal side, enabling privileged families to game the system to help their children get ahead. In such a system, an impersonal bureaucracy stands as a check that ensures that the democratic side of schooling, in which all children are treated equally, remains in effect.


Posted in Family, Meritocracy, Modernity, Schooling, Sociology, Teaching

What Schools Can Do that Families Can’t: Robert Dreeben’s Analysis

In this post, I explore a key issue in understanding the social role that schools play:  Why do we need schools anyway?  For thousands of years, children grew up learning the skills, knowledge, and values they would need in order to be fully functioning adults.  They didn’t need schools to accomplish this.  The family, the tribe, the apprenticeship, and the church were sufficient to provide them with this kind of acculturation.  Keep in mind that education is ancient but universal public schooling is a quite recent invention, which arose about 200 years ago as part of the creation of modernity.

Here I focus on a comparison between family and school as institutions for social learning.  In particular, I examine what social ends schools can accomplish that families can’t.  I’m drawing on a classic analysis by Robert Dreeben in his 1968 book, On What Is Learned in School.  Dreeben is a sociologist in the structural functionalist tradition who was a student of Talcott Parsons.  His book demonstrates the strengths of functionalism in helping us understand schooling as a critically important mechanism for societies to survive in competition with other societies in the modern era.  The section I’m focusing on here is chapter six, “The Contribution of Schooling to the Learning of Norms: Independence, Achievement, Universalism, and Specificity.”   I strongly recommend that you read the original, using the preceding link.  My discussion is merely a commentary on his text.

Dreeben Cover

I’m drawing on a set of slides I used when I taught this chapter in class.

This is structural functionalism at its best:

      • The structure of schooling teaches students values that modern societies require; the structure functions even if that outcome is unintended

He examines the social functions of the school compared with the family

      • Not the explicit learning that goes on in school – the subject matter, the curriculum (English, math, science, social studies)

      • Instead he looks at the social norms you learn in school

He’s not focusing on the explicit teaching that goes on in school – the formal curriculum

      • Instead he focuses on what the structure of the school setting teaches students – vs. what the structure of the family teaches children

      • The emphasis, therefore, is on the differences in social structure of the two settings

      • What can and can’t be learned in each setting?

Families and schools are parallel in several important ways

      • Socialization: they teach the young

        • Both provide the young with skills, knowledge, values, and norms

        • Both use explicit and implicit teaching

      • Selection: they set the young on a particular social trajectory in the social hierarchy

        • Both provide them with social means to attain a particular social position

        • School: via grades, credits and degrees

        • Families: via economic, social, and cultural capital

The difference between family and school boils down to preparing the young for two very different kinds of social relationships

      • Primary relationships, which families model as the relations between parent and child and between siblings

      • Secondary relationships, which schools model as the relations between teacher and student and between students

Each setting prepares children to take on a distinctive kind of relationship

Dreeben argues that schools teach students four norms that are central to the effective functioning of modern societies:  independence, achievement, universalism, and specificity.  These are central to the kinds of roles we play in public life, which sociologists call secondary roles, roles that are institutionally structured in relation to other secondary roles, such as employee-employer, customer-clerk, bus rider-bus driver, teacher-student.  The norms that define proper behavior in secondary roles differ strikingly from the norms for another set of relationships defined as primary roles.  These are the intimate relationships we have with our closest friends and family members.  One difference is that we play a large number of secondary roles in order to function in complex modern societies but only a small number of primary roles.  Another is that secondary roles are strictly utilitarian, means to practical ends, whereas primary roles are ends in themselves.  A third is that secondary role relationships are narrowly defined; you don’t need or want to know much about the salesperson in the store in order to make your purchase.  Primary relationships are quite diffuse, requiring deeper involvement — friends vs. acquaintances.

As a result, each of the four norms that schools teach, which are essential for maintaining secondary role relationships, corresponds to an equal and opposite norm that is essential for maintaining primary role relationships.  Modern social life requires expertise at moving back and forth effortlessly between these different kinds of roles and the contrasting norms they require of us.  We have to be good at maintaining our work relations and our personal relations and knowing which norms apply to which setting.

Secondary Roles                      Primary Roles

(Work, public, school)           (Family, friends)

Independence                          Group orientation

Achievement                            Ascription

Universalism                            Particularism

Specificity                                  Diffuseness

Here is what’s involved in each of these contrasting norms:

Independence                            Group orientation

      Self reliance                                Dependence on group

      Individualism                             Group membership

      Individual effort                        Collective effort

      Act on your own                         Need/owe group support

Achievement                               Ascription

      Status based on what you do  Status based on who you are

      Active                                             Passive

      Earned                                           Inherited

      Meritocracy                              Aristocracy

Universalism                              Particularism

      Equality within category —       Personal uniqueness — my child

           a 5th grade student

      General rules apply to all        Different rules for us vs. them

      Central to fairness, justice      Central to being special

Specificity                                   Diffuseness

       Narrow relations                       Broad relations

       Extrinsic relations                    Intrinsic relations

       Means to an end                        An end in itself

Think about how the structure of the school differs from the structure of the family and what the consequences of these differences are.

Family vs. School:

Structure of the school (vs. structure of the family)

      • Teacher and student are both achieved roles (ascribed roles)

      • Large number of kids per adult (few)

      • No particularistic ties between teacher and students (blood ties)

      • Teachers deal with the class as a group (families as individuals based on sex and birth order)

      • Teacher and student are universalistic roles, with individuals being interchangeable in these roles (family roles are unique to that family and not interchangeable)

      • Relationship is short term, especially as you move up the grades (relations are lifelong)

      • Teachers and students are subject to objective evaluation (families use subjective, emotional criteria)

      • Teachers and students both see their roles as means to an end (family relations are supposed to be selfless, ends in themselves)

      • Students are all the same age (in family birth order is central)

Consider the modes of differentiation and stratification in families vs. schools.

Children in families:

Race, class, ethnicity, and religion are all the same

Age and gender are different

Children in schools:

Age is the same

Race, class, ethnicity, religion, and gender are different

This allows for meritocratic evaluation, fostering the learning of achievement and independence

Questions

Do you agree that the characteristics of school as a social structure make it effective at transmitting secondary social norms and preparing students for secondary roles?

Do you agree that the characteristics of family as a social structure make it ineffective at transmitting secondary norms and preparing children for secondary roles?

But consider this complication to the story

Are schools, workplaces, public interactions fully in tune with the secondary model?

Are families, friends fully in tune with the primary model?

How do these two intermingle?  Why?

      • Having friends at work and school makes life nicer – and also makes you work more efficiently

      • Getting students to like you makes you a more effective teacher

      • But the norm for a professional or occupational relationship is secondary – that’s how you define a good teacher, lawyer, worker

      • The norm for primary relations is that they are ends in themselves not means to an end

      • Family members may use each other for personal gain, but that is not considered the right way to behave

Posted in History, Sociology, War

War! What Is It Good For?

This post is an overview of the 2014 book by Stanford classicist Ian Morris, War! What Is It Good For?  In it he makes the counterintuitive argument that over time some forms of war have been socially productive.  In contrast with the message of the 1970s song of the same name, war may in fact be good for something.

The central story is this.  Some wars lead to the incorporation of large numbers of people under a single imperial state.  In the short run, this is devastatingly destructive; but in the long run it can be quite beneficial.  Under such regimes (e.g., the early Roman and Chinese empires and the more recent British and American empires), the state imposes a new order that sharply reduces rates of violent death and fosters economic development.  The result is an environment that allows the population to live longer and grow wealthier, not just in the imperial heartland but also in the newly colonized territories.

Morris War Cover

So how does this work?  He starts with a key distinction made by Mancur Olson.  All states are a form of banditry, Olson says, since they extract revenue by force.  Some are roving bandits, who sweep into town, sack the place, and then move on.  But others are stationary bandits, who are stuck in place.  In this situation, the state needs to develop a way to gain the greatest revenue from its territory over the long haul, which means establishing order and promoting economic development.  It has an incentive to foster the safety and productivity of its population.

Rulers steal from their people too, Olson recognized, but the big difference between Leviathan and the rape-and-pillage kind of bandit is that rulers are stationary bandits. Instead of stealing everything and hightailing it, they stick around. Not only is it in their interest to avoid the mistake of squeezing every last drop from the community; it is also in their interest to do whatever they can to promote their subjects’ prosperity so there will be more for the rulers to take later.

This argument is an extension of the one that Thomas Hobbes made in Leviathan:

Whatsoever therefore is consequent to a time of war, where every man is enemy to every man, the same is consequent to the time wherein men live without other security than what their own strength and their own invention shall furnish them withal. In such condition there is no place for industry, because the fruit thereof is uncertain, and consequently no culture of the earth, no navigation nor use of the commodities that may be imported by sea, no commodious building, no instruments of moving and removing such things as require much force, no knowledge of the face of the earth; no account of time, no arts, no letters, no society, and, which is worst of all, continual fear and danger of violent death, and the life of man solitary, poor, nasty, brutish, and short.

(Wow, that boy could write.)

Morris says that stationary bandit states first arose with the emergence of agriculture, when tribes found that staying in place and tending their crops could support a larger population than roving across the landscape hunting and gathering.  This leads to what he calls caging.  People can’t easily move and the state has an incentive to protect them from marauders so it can harvest the surplus from this population for its own benefit.

Over time, these states have reduced violence to an extraordinary extent, reining in “the continual fear and danger of violent death.”

Averaged across the planet, violence killed about 1 person in every 4,375 in 2012, implying that just 0.7 percent of the people alive today will die violently, as against 1–2 percent of the people who lived in the twentieth century, 2–5 percent in the ancient empires, 5–10 percent in Eurasia in the age of steppe migrations, and a terrifying 10–20 percent in the Stone Age.

In the process, states found that they prospered most when they relaxed direct control of the economy and allowed markets to develop according to their own dynamic.  This created a paradoxical relationship between state and economy.

Markets could not work well unless governments got out of them, but markets could not work at all unless governments got into them, using force to pacify the world and keep the Beast at bay. Violence and commerce were two sides of the same coin, because the invisible hand needed an invisible fist to smooth the way before it could work its magic.

Empires, of course, don’t last forever.  At a certain point, hegemony yields to outside threats.  One chronic source of threat in Eurasian history was the roving bandit states of the Steppes that did in Rome and constantly harried China.  Another threat is the rise of a new hegemon.  The British global empire of the 18th and 19th century fostered the emergence of the United States, which became the empire of the late 20th and early 21st century, and this in turn fostered the development of China.

And there can be long periods of time between empires, when wars are largely unproductive.  After the fall of Rome, Europe experienced nearly a millennium of unproductive wars, as small states competed for dominance without anyone ever actually attaining it, a condition he calls “feudal anarchy.”  The result was a sharp increase in violence and a sharp decline in the standard of living.  It wasn’t until the 16th century that Europe regained the per capita income enjoyed by Romans.

It seems to me, in fact, that “feudal anarchy” is an excellent description not just of western Europe between about 900 and 1400 but also of most of Eurasia’s lucky latitudes in the same period. From England to Japan, societies staggered toward feudal anarchy as their Leviathans dismembered themselves.

But 1400 saw the beginning of the 500-year war in which Europe strove mightily to dominate the world, finally producing the imperium of the British and then the Americans.

Morris’s conclusion from this extensive analysis is disturbing but also compelling:

The answer to the question in this book’s title is both paradoxical and horrible. War has been good for making humanity safer and richer, but it has done so through mass murder. But because war has been good for something, we must recognize that all this misery and death was not in vain. Given a choice of how to get from the poor, violent Stone Age to … peace and prosperity…, few of us, I am sure, would want war to be the way, but evolution—which is what human history is—is not driven by what we want. In the end, the only thing that matters is the grim logic of the game of death.

…while war is the worst imaginable way to create larger, more peaceful societies, it is pretty much the only way humans have found.

One way to test the validity of Morris’s argument in this book is to compare it to the analysis by his Stanford colleague, Walter Scheidel, in his latest book, Escape from Rome, which I reviewed here two weeks ago.  Scheidel argues that the fall of Rome, and the failure of any new empire to replace it for most of the next millennium, is the reason that Europe made the turn toward modernity before any other region of the world.  In Scheidel’s view, what Morris calls feudal anarchy, which shortened lifespans and fostered poverty for so long and for so many people, was the key spur to economic, social, technological, political, and military innovation — as competing states desperately sought to survive in the war of all against all.

Empires may keep the peace and promote commerce, but they also emphasize the preservation of power over the development of science and the invention of new technologies.  This is why the key engines of modernization in early modern Europe were not the large countries in the center — France and Spain — but the small countries on the margins, England and the Netherlands.

For most people, enjoying relative peace and prosperity within an empire is a lot better than the alternative.  But for the future global population as a whole, the greatest benefit may come from a sustained competition among warring states, which spurs the breakthrough innovations that have produced history’s most dramatic advances in peace and prosperity.  In this sense, even the unproductive wars of the feudal period may have been productive in the long run.  Once again, war was the answer.

Posted in Credentialing, Curriculum, Meritocracy, Sociology, Systems of Schooling

Mary Metz: Real School

This blog post is a tribute to the classic paper by Mary Metz, “Real School.”  In it she shows how schools follow a cultural script that demonstrates all of the characteristics we want to see in a school.  The argument, in line with neo-institutional theory (see this example by Meyer and Rowan), is that schools are organized around meeting our cultural expectations for the form that schools should take more than around producing particular outcomes.  Following the script keeps us reassured that the school we are associated with — as a parent, student, teacher, administrator, taxpayer, political leader, etc. — is indeed a real school.  It follows that the less effective a school is at producing desirable social outcomes — high scores, graduation rates, college attendance, future social position — the more closely we want it to follow the script.  It’s a lousy high school but it still has an advanced placement program, a football team, a debate team, and a senior prom.  So it’s a real high school.

Here’s the citation and a link to a PDF of the original article:

Metz, Mary H. (1990). Real school: A universal drama amid disparate experience. In Douglas E. Mitchell & Margaret E. Goertz (Eds.), Education Politics for the New Century (pp. 75-91). New York: Falmer.

And here’s a summary of some of its key points.

Roots of real school: the need for reassurance

  • We’re willing to settle for formal over substantive equity in schooling

  • The system provides formal equivalence across school settings, to reassure everyone that all kids get the same educational opportunity

  • Even though this is obviously not the case — as evidenced by the way parents are so careful where they send their kids, where they buy a house

  • What’s at stake is institutional legitimacy

  • Teachers, administrators, parents, citizens all want reassurance that their school is a real school

  • If not, then I’m not a real teacher, a real student, so what are we doing here?

This arises from the need for schools to balance conflicting outcomes within the same institution — schools need to provide both access and advantage, both equality and inequality

  • We want it both ways with our schools: we’re all equal, but I’m better than you

  • Both qualities are important for the social functions and public legitimacy of the social system

  • This means that school, on the face of it, needs to give everyone a fair shot

  • But it also means that school, in practice, needs to sort the winners from the losers

  • And winning only has meaning if it appears to be the result of individual merit

  • But who wants to leave this up to chance for their own children?

  • So parents use every tool they’ve got to game the system and get their children a leg up in the competition

  • And upper-middle-class parents have a lot of such tools — cultural capital, social capital, and economic capital

  • Yet they still need the formal equality of schooling as cover for this quest for advantage

So why is it, as Metz shows, that schools that are least effective in producing student learning are the most diligent in doing real school?

  • Teachers and parents in these schools rarely demand the abandonment of real school — a failed model — in favor of something radically different

  • To the contrary, they demand even closer alignment with the real school model

  • They do so because they need to maintain confidence in the system

  • More successful schools can stay a little farther from the script, because parents are more confident they will produce the right outcomes for their kids

  • Education is a confidence game – in both senses of the word: an effort to maintain confidence and an effort to con the consumer

The magic of school formalism

  • Formalism is central to the system and its effectiveness as a place to provide access and advantage at the same time

  • So you focus on structure and form and process more than on substantive learning

  • Meyer and Rowan‘s formalistic definition of a school:

    • “A school is an accredited institution where a certified teacher teaches a sanctioned curriculum to a matriculated student who then receives an authorized diploma.”

  • Students can make progress and graduate even if they’re not learning much

  • It helps that the quality of schooling is less visible than the quantity

Enjoy.

Real School Front Page

Posted in History of education, Meritocracy, Sociology, Systems of Schooling, Teaching

Pluck vs. Luck

This post is a piece I recently published in Aeon.  Here’s the link to the original.  I wrote this after years of futile efforts to get Stanford students to think critically about how they got to their current location at the top of the meritocracy.  It was nearly impossible to get students to consider that their path to Palo Alto might have been the result of anything but smarts and hard work.  Luck of birth never seemed to be a major factor in the stories they told about how they got here.  I can understand this, since I’ve spent a lifetime patting myself on the back for my own academic accomplishments, feeling sorry for the poor bastards who didn’t have what it took to climb the academic ladder.

But in recent years, I have come to spend a lot of time thinking critically about the nature of the American meritocracy.  I’ve published a few pieces here on the subject, in which I explore the way in which this process of allocating status through academic achievement constitutes a nearly perfect system for reproducing social inequality — protected by a solid cover of legitimacy.  The story it tells to everyone in society, winners and losers alike, is that you got what you deserved.

So I started telling students my own story about how I got to Stanford — in two contrasting versions.  One is a traditional account of climbing the ladder through skill and grit, a story of merit rewarded.  The other is a more realistic account of getting ahead by leveraging family advantage, a story of having the right parents.

See what you think.

Pluck vs. Luck

David F. Labaree

Occupants of the American meritocracy are accustomed to telling stirring stories about their lives. The standard one is a comforting tale about grit in the face of adversity – overcoming obstacles, honing skills, working hard – which then inevitably affords entry to the Promised Land. Once you have established yourself in the upper reaches of the occupational pyramid, this story of virtue rewarded rolls easily off the tongue. It makes you feel good (I got what I deserved) and it reassures others (the system really works).

But you can also tell a different story, which is more about luck than pluck, and whose driving forces are less your own skill and motivation, and more the happy circumstances you emerged from and the accommodating structure you traversed.

As an example, here I’ll tell my own story about my career negotiating the hierarchy in the highly stratified system of higher education in the United States. I ended up in a cushy job as a professor at Stanford University. How did I get there? I tell the story both ways: one about pluck, the other about luck. One has the advantage of making me more comfortable. The other has the advantage of being more true.

I was born to a middle-class family and grew up in Philadelphia in the 1950s. As a skinny, shy kid who wasn’t good at sports, my early life revolved about being a good student. In upper elementary school, I became president of the student council and captain of the safety patrol (an office that conferred a cool red badge that I wore with pride). In high school, I continued to be the model student, eventually getting elected president of the student council (see a pattern here?) and graduating in 1965 near the top of my class. I was accepted at Harvard University with enough advanced-placement credits to skip freshman year (which, fortunately, I didn’t). There I majored in antiwar politics. Those were the days when an activist organisation such as Students for a Democratic Society was a big factor on campuses. I went to two of their annual conventions and wrote inflammatory screeds about Harvard’s elitism (who knew).

In 1970, I graduated with a degree in sociology and no job prospects. What do you do with a sociology degree, anyway? It didn’t help that the job market was in the doldrums. I eventually ended up back in Philadelphia with a job at the Federal Reserve Bank – first in public relations (leading school groups on tours) and then in bank relations (visiting banks around the Third Federal Reserve District). From student radical with a penchant for Marxist sociology, I suddenly became a banker wearing a suit every day and reading The Wall Street Journal. It got me out of the house and into my own apartment but it was not for me. Labarees don’t do finance.

After four years, I quit in disgust, briefly became a reporter at a suburban newspaper, hated that too, and then stumbled by accident into academic work. Looking for any old kind of work in the want ads in my old paper, I spotted an opening at Bucks County Community College, where I applied for three different positions – admissions officer, writing instructor, and sociology instructor. I got hired in the latter role, and the rest is history. I liked the work but realised that I needed a master’s degree to get a full-time job, so I entered the University of Pennsylvania sociology department. Once in the programme, I decided to continue on to get a PhD, supporting myself by teaching at the community college, Trenton State, and at Penn.

In 1981, as I was nearing the end of my dissertation, I started applying for faculty positions. Little did I know that the job market was lousy and that I would be continually applying for positions for the next four years.

As someone who started at the bottom, I can tell you that everything is better at the top

The first year yielded one job offer, at a place so depressing that I decided to stay in Philadelphia and continue teaching as an adjunct. That spring I got a one-year position in sociology at Georgetown University in Washington, DC. In the fall, with the clock ticking, I applied to 60 jobs around the country. This time, my search yielded four interviews, all tenure-track positions – at Yale University, at Georgetown, at the University of Cincinnati and at Widener University.

The only offer I got was the one I didn’t want, Widener – a small, non-selective private school in the Philadelphia suburbs that until the 1960s had been a military college. Three years past degree, I felt I had hit bottom in the meritocracy. The moment I got there, I started applying for jobs while desperately trying to write my way into a better one. I published a couple of journal articles and submitted a book proposal to Yale University Press. They hadn’t hired me but maybe they’d publish me.

Finally, a lifeline came my way. A colleague at the College of Education at Michigan State University encouraged me to apply for a position in history of education and I got the job. In the fall of 1985, I started as an assistant professor in the Department of Teacher Education at MSU. Fifteen years after college and four years after starting to look for faculty positions, my career in higher education finally took a big jump upward.

MSU was a wonderful place to work and to advance an academic career. I taught there for 18 years, moving through the ranks to full professor, and publishing three books and 20 articles and book chapters. Early on, I won two national awards for my first book and a university teaching award, and was later elected president of the History of Education Society and vice-president of the American Educational Research Association.

Then in 2002 came an opportunity to apply for a position in education at one of the world’s great universities, Stanford. It worked out, and I started there as a professor in 2003 in the School of Education, and stayed until retirement in 2018. I served in several administrative roles including associate dean, and was given an endowed chair. How cool.

As someone who started at the bottom of the hierarchy of US higher education, I can tell you that everything is better at the top. Everything: pay, teaching loads, intellectual culture, quality of faculty and students, physical surroundings, staff support, travel funds, perks. Even the weather is better. Making it in the meritocracy is as good as it gets. No matter how hard things go at first, talent will win out. Virtue earns its reward. Life is fair.

Of course, there’s also another story, one that’s less heartening but more realistic. A story that’s more about luck than pluck, and that features structural circumstances more than heroic personal struggle. So let me now tell that version.

Professor Robert M Labaree of Lincoln University in southeast Pennsylvania, the author’s grandfather. Photo courtesy of the author

The short story is that I’m in the family business. In the 1920s, my parents grew up as next-door neighbours on a university campus where their fathers were both professors. It was Lincoln University, a historically black institution in southeast Pennsylvania near the Mason-Dixon line. The students were black, the faculty white – most of the latter, like my grandfathers, were clergymen. The students were well-off financially, coming from the black bourgeoisie, whereas the highly educated faculty lived in the genteel poverty of university housing. It was a kind of cultural missionary setting, but more comfortable than the foreign missions. One grandfather had served as a missionary in Iran, where my father was born; that was hardship duty. But here was a place where upper-middle-class whites could do good and do well at the same time.

Both grandfathers were Presbyterian ministers, each descended from long lines of Presbyterian ministers. The Presbyterian clergy developed a well-earned reputation over the years of having modest middle-class economic capital and large stores of social and cultural capital. Relatively poor in money, they were rich in social authority and higher learning. In this tradition, education is everything. In part because of that, some ended up in US higher education, where in the 19th century most of the faculty were clergy (because they were well-educated men and worked for peanuts). My grandfather’s grandfather, Benjamin Labaree, was president of Middlebury College in the 1840s and ’50s. Two of my father’s cousins were professors; my brother is a professor. It’s the family business.

Rev Benjamin Labaree, who was president of Middlebury College, 1840-1866, and the author’s great-great-grandfather. Photo courtesy of the author

Like many retirees, I recently started to dabble in genealogy. Using Ancestry.com, I’ve traced back 10 or 12 generations on both sides of the family, some back to the 1400s, finding ancestors in the US, Scotland, England and France. They are all relentlessly upper-middle-class – mostly ministers, but also some physicians and other professionals. Not a peasant in the bunch, and no one in business. I’m to the manor born (well, really the manse). The most distant Labaree I’ve found is Jacques Laborie, born in 1668 in the village of Cardaillac in France. He served as a surgeon in the army of Louis XIV and then became ordained as a Calvinist minister in Zurich before Louis in 1685 expelled the reformed Protestants (Huguenots) from France. He moved to England, where he married another Huguenot, and then immigrated to Connecticut. Among his descendants were at least four generations of Presbyterian ministers, including two college professors. This is a good start for someone like me, seeking to climb the hierarchy of higher education – like being born on third base. But how did it work out in practice for my career?

I was the model Harvard student – a white, upper-middle-class male from an elite school

My parents both attended elite colleges, Princeton University and Wilson College (on ministerial scholarships), and they invested heavily in their children’s education. They sent us to a private high school and private colleges. It was a sacrifice to do this, but they thought it was worth it. Compared with our next-door neighbours, we lived modestly – driving an old station wagon instead of a new Cadillac – but we took pride in our cultural superiority. Labarees didn’t work in trade. Having blown their money on schooling and lived too long, my parents died broke. They were neither the first nor the last victims of the meritocracy, who gave their all so that their children could succeed.

This background gave me a huge edge in cultural and social capital. In my high school’s small and high-quality classrooms, I got a great education and learned how to write. The school traditionally sent its top five students every year to Princeton but I decided on Harvard instead. At the time, I was the model Harvard student – a white, upper-middle-class male from an elite school. No females and almost no minorities.

At Harvard, I distinguished myself in political activity rather than scholarship. I avoided seminars and honours programmes, where it was harder to hide and standards were higher. After the first year, I almost never attended discussion sections, and skipped the majority of the lectures as well, muddling through by doing the reading, and writing a good-enough paper or exam. I phoned it in. When I graduated, I had an underwhelming transcript, with a 2.5 grade-point average (B-/C+). Not exactly an ideal candidate for graduate study, one would think.

And then there was that job at the bank, which got me out of the house and kept me fed and clothed until I finally recognised my family calling by going to grad school. After beating the bushes looking for work up and down the west coast, how did I get this job? Turned out that my father used to play in a string quartet with a guy who later became the vice-president for personnel at the Federal Reserve Bank. My father called, the friend said come down for an interview. I did and I got the job.

When I finally decided to pursue grad school, I took the Graduate Record Examinations and scored high. Great. The trouble is that an applicant with high scores and low grades is problematic, since this combination suggests high ability and bad attitude. But somehow I got into an elite graduate programme (though Princeton turned me down). Why? Because I went to Harvard, so who cares about the grades? It’s a brand that opens doors. Take my application to teach at the community college. Why hire someone with no graduate degree and a mediocre undergraduate transcript to teach college students? It turns out that the department chair who hired me also went to Harvard. Members of the club take care of each other.

If you have the right academic credentials, you get the benefit of the doubt. The meritocracy is quite forgiving toward its own. You get plenty of second and third chances where others would not. Picture if I had applied to Penn with the same grades and scores but with a degree from West Chester (state) University instead of Harvard. Would I really have had a chance? You can blow off your studies without consequence if you do it at the right school. Would I have been hired to teach at the community college with an off-brand BA? I think not.

And let’s reconsider my experience at Widener. For me – an upper-middle-class professor with two Ivy League degrees and generations of cultural capital – these students were a world apart. Of course, so were the community-college students I taught earlier, but they were taking courses on weekends while holding a job. That felt more like teaching night school than teaching college. At Widener, however, they were full-time students at a place that called itself a university, but to me this wasn’t a real university where I could be a real professor. Looking around the campus with the eye of a born-and-bred snob, I decided quickly that these were not my people. Most were the first in their families to be going to college and did not have the benefit of a strong high-school education.

In order to make it in academe, you need friends in high places. I had them

A student complained to me one day after she got back her exam that she’d received a worse grade than her friend who didn’t study nearly as hard. That’s not fair, she said. I shrugged it off at the time. Her answer to the essay exam question was simply not as good. But looking back, I realised that I was grading my students on skills I wasn’t teaching them. I assigned multiple readings and then gave take-home exams, which required students to weave together a synthesis of these readings in an essay that responded to a broad analytical question. That’s the kind of exam I was used to, but it required a set of analytical and writing skills that I assumed rather than provided. You can do well on a multiple-choice exam if you study the appropriate textbook chapters; the more time you invest, the higher the grade. That might not be a great way to learn, but it’s a system that rewards effort. My exams, however, rewarded discursive fluency and verbal glibness over diligent study. Instead of trying to figure out how to give these students the cultural capital they needed, I chose to move on to a place where students already had these skills. Much more comfortable.

Oh yes, and what about that first book, the one that won awards, gained me tenure, and launched my career? Well, my advisor at Penn, Michael Katz, had published a book with an editor at Praeger, Gladys Topkis, who then ended up at Yale University Press. With his endorsement, I sent her a proposal for a book based on my dissertation. She gave me a contract. When I submitted the manuscript, a reviewer recommended against publication, but she convinced the editorial board to approve it anyway. Without my advisor, no editor. And without the editor, no book, no awards, no tenure, and no career. It’s as simple as that. In order to make it in academe, you need friends in high places. I had them.

All of this, plus two more books at Yale, helped me make the move up to Stanford. Never would have happened otherwise. By then, on paper I began to look like a golden boy, checking all the right boxes for an elite institution. And when I announced that I was making the move to Stanford in the spring of 2003, before I even assumed the role, things started changing in my life. Suddenly, it seemed, I got a lot smarter. People wanted me to come give a lecture, join an editorial board, contribute to a book, chair a committee. An old friend, a professor in Sweden, invited me to become a visiting professor in his university. Slightly embarrassed, he admitted that this was because of my new label as a Stanford professor. Swedes know only a few universities in the US, he said, and Stanford is one of them. Like others who find a spot near the top of the meritocracy, I was quite willing to accept this honour, without worrying too much about whether it was justified. Like the pay and perks, it just seemed exactly what I deserved. Special people get special benefits; it only makes sense.

And speaking of special benefits, it certainly didn’t hurt that I am a white male – a category that dominates the professoriate, especially at the upper levels. Among full-time faculty members in US degree-granting institutions, 72 per cent of assistant professors and 81 per cent of full professors are white; meanwhile, 47 per cent of assistants and 66 per cent of professors are male. At the elite level, the numbers are even more skewed. At Stanford, whites make up 54 per cent of tenure-line assistant professors but 82 per cent of professors; under-represented minorities account for only 8 per cent of assistants and 5 per cent of professors. Meanwhile, males constitute 60 per cent of assistants and 78 per cent of professors. In US higher education, white males still rule.

Oh, and what about my endowed chair? Well, it turns out that when the holder of the chair retires, the honour moves on to someone else. I inherited the title in 2017 and held it for a year and a half before I retired and it passed on to the next person. What came with the title? Nothing substantial, no additional salary or research funds. Except I did get one material benefit from this experience, which I was allowed to keep when I gave up the title. It’s an uncomfortable, black, wooden armchair bearing the school seal. Mine came with a brass plaque on the back proclaiming: ‘Professor David Labaree, The Lee L Jacks Professor in Education’.

Now, as I fade into retirement, still enjoying the glow from my emeritus status at a brand-name university, it all feels right. I’ve got money to live on, a great support community, and status galore. I get to display my badges of merit for all to see – the Stanford logo on my jacket, and the Jacks emeritus title in my email signature. What’s not to like? The question about whether I deserve it or not fades into the background, crowded out by all the benefits. Enjoy. The sun’s always shining at the summit of the meritocracy.

Is there a moral to be drawn from these two stories of life in the meritocracy? The most obvious one is that this life is not fair. The fix is in. Children of parents who have already succeeded in the meritocracy have a big advantage over other children whose parents have not. They know how the game is played, and they have the cultural capital, the connections and the money to increase their children’s chances for success in this game. They know that the key is doing well at school, since it’s the acquisition of degrees that determines what jobs you get and the life you live. They also know that it’s not just a matter of being a good student but of attending the right school – one that fosters academic achievement and, even more important, occupies an elevated position in the status hierarchy of educational institutions. Brand names open doors. This allows highly educated, upper-middle-class families to game the meritocratic system and to hoard a disproportionate share of the advantages it offers.

In fact, the only thing that’s less fair than the meritocracy is the system it displaced, in which people’s futures were determined strictly by the lottery of birth. Lords begat lords, and peasants begat peasants. In contrast, the meritocracy is sufficiently open that some children of the lower classes can prove themselves in school and win a place higher up the scale. The probability of doing so is markedly lower than the chances of success enjoyed by the offspring of the credentialed elite, but the possibility of upward mobility is nonetheless real. And this possibility is part of what motivates privileged parents to work so frantically to pull every string and milk every opportunity for their children. Through the jousting grounds of schooling, smart poor kids can, at times, displace dumb rich kids. The result is a system of status attainment that provides advantages for some while at the same time spreading fear for their children’s future across families of all social classes. In the end, the only thing that the meritocracy equalises is anxiety.

Posted in Education policy, Scholarship, School reform, Social Programs, Sociology, Systems of Schooling, Theory

Peter Rossi: The Iron Law of Evaluation and Other Metallic Rules

This post reproduces a classic paper by Peter Rossi from 1987 (Research in Social Problems and Public Policy, Volume 4, pages 3-20), which addresses a chronic problem in all policy efforts to change complex social systems.  The social organizations of modern life are so large, so complex, and so dependent on the cooperation of so many actors and agencies that making the measurable changes policymakers intend is fiendishly difficult.  These problems become particularly visible through the process of program evaluation.  As a result, Rossi comes up with a set of “laws” that govern the evaluation process.

The Iron Law of Evaluation: The expected value of any net impact
assessment of any large scale social program is zero.

The Stainless Steel Law of Evaluation: The better designed the
impact assessment of a social program, the more likely is the resulting estimate of net impact to be zero.

The Brass Law of Evaluation: The more social programs are designed to change individuals, the more likely the net impact of the program will be zero.

The Zinc Law of Evaluation: Only those programs that are likely to
fail are evaluated.

Read this lovely piece and you will get a rich sense of how hard it is to design policies that will effect the kind of change they aim to accomplish.  Social organizations have a life of their own whose momentum is difficult to deflect.

Here’s a link to the original paper.

 

THE IRON LAW OF EVALUATION
AND OTHER METALLIC RULES
Peter H. Rossi

INTRODUCTION

Evaluations of social programs have a long history, as history goes in the
social sciences, but it has been only in the last two decades that evaluation
has come close to becoming a routine activity that is a functioning part of
the policy formation process. Evaluation research has become an activity
that no agency administering social programs can do without and still
retain a reputation as modern and up to date. In academia, evaluation
research has infiltrated into most social science departments as an integral
constituent of curricula. In short, evaluation has become institutionalized.
There are many benefits to social programs and to the social sciences
from the institutionalization of evaluation research. Among the more
important benefits has been a considerable increase in knowledge concerning
social problems and about how social programs work (and do not
work). Along with these benefits, however, there have also been attached
some losses. For those concerned with the improvement of the lot of
disadvantaged persons, families and social groups, the resulting knowledge
has provided the bases for both pessimism and optimism. On the
pessimistic side, we have learned that designing successful programs is a
difficult task that is not easily or often accomplished. On the optimistic
side, we have learned more and more about the kinds of programs that can
be successfully designed and implemented. Knowledge derived from evaluations
is beginning to guide our judgments concerning what is feasible
and how to reach those feasible goals.

To draw some important implications from this knowledge about the
workings of social programs is the objective of this paper. The first step is
to formulate a set of “laws” that summarize the major trends in evaluation
findings. Next, a set of explanations are provided for those overall findings.
Finally, we explore the consequences for applied social science activities
that flow from our new knowledge of social programs.

SOME “LAWS” OF EVALUATION

A dramatic but slightly overdrawn view of two decades of evaluation
efforts can be stated as a set of “laws,” each summarizing some strong
tendency that can be discerned in that body of materials. Following a 19th
century practice that has fallen into disuse in social science, these laws
are named after substances of varying durability, roughly indexing each
law’s robustness.

The Iron Law of Evaluation: The expected value of any net impact
assessment of any large scale social program is zero.

The Iron Law arises from the experience that few impact assessments
of large scale social programs have found that the programs in question
had any net impact. The law also means that, based on the evaluation
efforts of the last twenty years, the best a priori estimate of the net impact
assessment of any program is zero, i.e., that the program will have no
effect.

The Stainless Steel Law of Evaluation: The better designed the
impact assessment of a social program, the more likely is the resulting
estimate of net impact to be zero.

This law means that the more technically rigorous the net impact
assessment, the more likely are its results to be zero, or no effect.
Specifically, this law implies that estimating net impacts through randomized
controlled experiments, the avowedly best approach to estimating
net impacts, is more likely to show zero effects than other less
rigorous approaches.

The Brass Law of Evaluation: The more social programs are designed
to change individuals, the more likely the net impact of the program will
be zero.

This law means that social programs designed to rehabilitate individuals
by changing them in some way or another are more likely to fail. The
Brass Law may appear to be redundant since all programs, including those
designed to deal with individuals, are covered by the Iron Law. This
redundancy is intended to emphasize the especially difficult task faced in
designing and implementing effective programs that are designed to rehabilitate
individuals.

The Zinc Law of Evaluation: Only those programs that are likely to
fail are evaluated.

Of the several metallic laws of evaluation, the zinc law has the most
optimistic slant since it implies that there are effective programs but that
such effective programs are never evaluated. It also implies that if a social
program is effective, that characteristic is obvious enough and hence
policy makers and others who sponsor and fund evaluations decide
against evaluation.

It is possible to formulate a number of additional laws of evaluation,
each attached to one or another of a variety of substances varying in
strength ranging from strong, robust metals to flimsy materials. The substances
involved are only limited by one’s imagination. But, if such laws
are to mirror the major findings of the last two decades of evaluation
research, they would all carry the same message: The laws would claim
that a review of the history of the last two decades of efforts to evaluate
major social programs in the United States sustains the proposition that
over this period the American establishment of policy makers, agency
officials, professionals and social scientists did not know how to design
and implement social programs that were minimally effective, let alone
spectacularly so.

HOW FIRM ARE THE METALLIC LAWS OF EVALUATION?

How seriously should we take the metallic laws? Are they simply the
social science analogue of poetic license, intended to provide dramatic
emphasis? Or, do the laws accurately summarize the last two decades’
evaluation experiences?

First of all, viewed against the evidence, the iron law is not entirely
rigid. True, most impact assessments conform to the iron law’s dictates in
showing at best marginal effects and all too often no effects at all. There
are even a few evaluations that have shown effects in the wrong directions,
opposite to the desired effects. Some of the failures of large scale programs
have been particularly disappointing because of the large investments
of time and resources involved: Manpower retraining programs
have not been shown to improve earnings or employment prospects of
participants (Westat, 1976-1980). Most of the attempts to rehabilitate
prisoners have failed to reduce recidivism (Lipton, Martinson, and Wilks, 1975).
Most educational innovations have not been shown to improve student
learning appreciably over traditional methods (Raizen and Rossi, 1981).

But, there are also many exceptions to the iron rule! The “iron” in the
Iron Law has shown itself to be somewhat spongy and therefore easily,
although not frequently, broken. Some social programs have shown
positive effects in the desired directions, and there are even some quite
spectacular successes: the American old age pension system plus Medicare
has dramatically improved the lives of our older citizens. Medicaid
has managed to deliver medical services to the poor to the extent that the
negative correlation between income and consumption of medical services
has declined dramatically since enactment. The family planning
clinics subsidized by the federal government were effective in reducing the
number of births in areas where they were implemented (Cutright and
Jaffe, 1977). There are also human services programs that have been shown
to be effective, although mainly on small scale, pilot runs: for example, the
Minneapolis Police Foundation experiment on the police handling of
family violence showed that if the police placed the offending abuser in
custody overnight, the offender was less likely to show up as an
accused offender over the succeeding six months (Sherman and Berk, 1984).
A meta-evaluation of psychotherapy showed that on the average, persons
in psychotherapy-no matter what brand-were a third of a standard
deviation improved over control groups that did not have any therapy
(Smith, Glass, and Miller, 1980). In most of the evaluations of manpower
training programs, women returning to the labor force benefitted
positively compared to women who did not take the courses, even though
in general such programs have not been successful. Even Head Start is
now beginning to show some positive benefits after many years of equivocal
findings. And so it goes on, through a relatively long list of successful
programs.

But even in the case of successful social programs, the sizes of the net
effects have not been spectacular. In the social program field, nothing has
yet been invented which is as effective in its way as the smallpox vaccine
was for the field of public health. In short, as is well known (and widely
deplored), we are not on the verge of wiping out the social scourges of our
time: ignorance, poverty, crime, dependency, or mental illness show great
promise to be with us for some time to come.

The Stainless Steel Law appears to be more likely to hold up over a
large series of cases than the more general Iron Law. This is because the
fiercest competition as an explanation for the seeming success of any
program, especially human services programs, ordinarily is either self- or
administrator-selection of clients. In other words, if one finds that a
program appears to be effective, the most likely alternative explanation to
judging the program as the cause of that success is that the persons
attracted to that program were likely to get better on their own or that the
administrators of that program chose those who were already on the road
to recovery as clients. As the better research designs, particularly randomized
experiments, eliminate that competition, the less likely is a
program to show any positive net effect. So the better the research design,
the more likely the net impact assessment is to be zero.
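Rossi's point about self-selection can be illustrated with a small simulation (my own sketch, not from the paper; every number in it is invented for illustration): a program whose true effect is zero looks strongly effective when motivated clients select themselves into it, but a randomized experiment recovers the true zero.

```python
import random
from statistics import fmean

random.seed(42)
N = 10_000

# Hypothetical setup: each person has a latent propensity to improve
# on their own. The program's true effect is ZERO by construction.
propensity = [random.gauss(0, 1) for _ in range(N)]

def outcome(p):
    # Outcome depends only on the person's own propensity plus noise;
    # being in the program adds nothing.
    return p + random.gauss(0, 1)

# Self-selected program: people already likely to improve enroll more often.
treated, control = [], []
for p in propensity:
    enrolled = random.random() < (0.8 if p > 0 else 0.2)
    (treated if enrolled else control).append(outcome(p))

naive_effect = fmean(treated) - fmean(control)  # looks strongly positive

# Randomized experiment: a coin flip breaks the link between who enrolls
# and who was going to improve anyway.
r_treated, r_control = [], []
for p in propensity:
    (r_treated if random.random() < 0.5 else r_control).append(outcome(p))

experimental_effect = fmean(r_treated) - fmean(r_control)  # near zero

print(f"naive estimate:        {naive_effect:+.2f}")
print(f"experimental estimate: {experimental_effect:+.2f}")
```

Under these invented numbers, the naive comparison reports a large positive "effect" driven entirely by who chose to enroll, while the randomized estimate hovers near zero, which is exactly why, on Rossi's account, better designs push impact estimates toward zero.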

How about the Zinc Law of Evaluation? First, it should be pointed out
that this law is impossible to verify in any literal sense. The only way that
one can be relatively certain that a program is effective is to evaluate it,
and hence the proposition that only ineffective programs are evaluated can
never be proven.

However, there is a sense in which the Zinc Law is correct. If the a
priori, beyond-any-doubt expectation of decision makers and agency
heads is that a program will be effective, there is little chance that the
program will be evaluated at all. Our most successful social program,
social security payments to the aged, has never been evaluated in a rigorous
sense. It is “well known” that the program manages to raise the incomes
of retired persons and their families, and “it stands to reason” that this
increase in income is greater than what would have happened, absent the
social security system.

Evaluation research is the legitimate child of skepticism, and where
there is faith, research is not called upon to make a judgment. Indeed, the
history of the income maintenance experiments bears this point out.
Those experiments were not undertaken to find out whether the main
purpose of the proposed program could be achieved: that is, no one
doubted that payments would provide income to poor people-indeed,
payments by definition are income, and even social scientists are not
inclined to waste resources investigating tautologies. Furthermore, no one
doubted that payments could be calculated and checks could be delivered
to households. The main purpose of the experiment was to estimate the
sizes of certain anticipated side effects of the payments, about which
economists and policy makers were uncertain-how much of a work
disincentive effect would be generated by the payments and whether the
payments would affect other aspects of the households in undesirable
ways-for instance, increasing the divorce rate among participants.

In short, when we look at the evidence for the metallic laws, the
evidence appears not to sustain their seemingly rigid character, but the
evidence does sustain the “laws” as statistical regularities. Why this
should be the case is the topic to be explored in the remainder of this
paper.

IS THERE SOMETHING WRONG WITH EVALUATION RESEARCH?

A possibility that deserves very serious consideration is that there is
something radically wrong with the ways in which we go about conducting
evaluations. Indeed, this argument is the foundation of a revisionist school
of evaluation, composed of evaluators who are intent on calling into
question the main body of methodological procedures used in evaluation
research, especially those that emphasize quantitative and particularly
experimental approaches to the estimation of net impacts. The revisionists
include such persons as Michael Patton (1980) and Egon Guba (1981).
Some of the revisionists are reformed number crunchers who have seen
the errors of their ways and have been reborn as qualitative researchers.
Others have come from social science disciplines in which qualitative
ethnographic field methods have been dominant.

Although the issue of the appropriateness of social science methodology
is an important one, so far the revisionist arguments fall far short
of being fully convincing. At the root of the revisionist argument appears
to be that the revisionists find it difficult to accept the findings that most
social programs, when evaluated for impact assessment by rigorous quantitative
evaluation procedures, fail to register main effects: hence the
defects must be in the method of making the estimates. This argument per
se is an interesting one, and deserves attention: all procedures need to be
continually re-evaluated. There are some obvious deficiencies in most
evaluations, some of which are inherent in the procedures employed. For
example, a program that is constantly changing and evolving cannot
ordinarily be rigorously evaluated since the treatment to be evaluated
cannot be clearly defined. Such programs either require new evaluation
procedures or should not be evaluated at all.

The weakness of the revisionist approaches lies in their proposed
solutions to these deficiencies. Criticizing quantitative approaches for
their woodenness and inflexibility, they propose to replace current methods
with procedures that have even greater and more obvious deficiencies.
The qualitative approaches they propose are not exempt from issues of
internal and external validity and ordinarily do not attempt to address
these thorny problems. Indeed, the procedures which they advance as
substitutes for the mainstream methodology are usually vaguely
described, constituting an almost mystical advocacy of the virtues of qualitative
approaches, without clear discussion of the specific ways in which
such procedures meet validity criteria. In addition, many appear to adopt
program operator perspectives on effectiveness, reasoning that any effort
to improve social conditions must have some effect, with the burden of
proof placed on the evaluation researcher to find out what those effects
might be.

Although many of their arguments concerning the woodenness of many
quantitative researches are cogent and well taken, the main revisionist
arguments for an alternative methodology are unconvincing: hence one
must look elsewhere than to evaluation methodology for the reasons for
the failure of social programs to pass muster before the bar of impact
assessments.

SOURCES OF PROGRAM FAILURES

Starting with the conviction that the many findings of zero impact are real,
we are led inexorably to the conclusion that the faults must lie in the
programs. Three kinds of failure can be identified, each a major source of
the observed lack of impact.
The first two types of faults that lead a program to fail stem from
problems in social science theory, and the third is a problem in the
organization of social programs:

1. Faults in Problem Theory: The program is built upon a faulty understanding
of the social processes that give rise to the problem to
which the social program is ostensibly addressed.

2. Faults in Program Theory: The program is built upon a faulty
understanding of how to translate problem theory into specific
programs.

3. Faults in Program Implementation: There are faults in the organizations,
resource levels, and/or activities that are used to deliver
the program to its intended beneficiaries.

Note that the term theory is used above in a fairly loose way to cover all
sorts of empirically grounded generalized knowledge about a topic, and is
not limited to formal propositions.

Every social program, implicitly or explicitly, is based on some understanding
of the social problem involved and some understanding of the
program. If one fails to arrive at an appropriate understanding of either,
the program in question will undoubtedly fail. In addition, every program
is given to some organization to implement. Failures to provide enough
resources, or to ensure that the program is delivered with sufficient fidelity,
can also lead to findings of ineffectiveness.

Problem Theory

Problem theory consists of the body of empirically tested understanding
of the social problem that underlies the design of the program in
question. For example, the problem theory that was the underpinning for
the many attempts at prisoner rehabilitation tried in the last two decades
was that criminality was a personality disorder. Even though there was a
lot of evidence for this viewpoint, it also turned out that the theory is not
relevant either to understanding crime rates or to the design of crime
policy. The changes in crime rates do not reflect massive shifts in personality
characteristics of the American population, nor does the personality
disorder theory of crime lead to clear implications for crime reduction
policies. Indeed, it is likely that large scale personality changes are beyond
the reach of social policy institutions in a democratic society.
The adoption of this theory is quite understandable. For example, how
else do we account for the fact that persons seemingly exposed to the
same influences do not show the same criminal (or noncriminal) tendencies?
But the theory is not useful for understanding the social distribution
of crime rates by gender, socio-economic level, or age.

Program Theory

Program theory links together the activities that constitute a social
program and desired program outcomes. Obviously, program theory is
also linked to problem theory, but is partially independent. For example,
given the problem theory that criminality is a personality disorder,
a matching program theory would have as its aim some form of
personality-change therapy. But therapy can be defined in many specific
ways and applied at many different points in the life history of
individuals. At one extreme of the lifeline, one might attempt preventive
mental health work directed toward young children; at the other
extreme, one might provide psychiatric treatment for prisoners or set up
therapeutic groups in prison for convicted offenders.

Implementation

The third major source of failure is organizational in character and has
to do with the failure to implement programs properly. Human services
programs are notoriously difficult to deliver appropriately to the appropriate
clients. A well designed program that is based on correct problem and
program theories may simply be implemented improperly, up to and including not
implementing any program at all. Indeed, in the early days of the War on
Poverty, many examples were found of non-programs: the failure to
implement anything at all.

Note that these three sources of failure are nested to some degree:

1. An incorrect understanding of the social problem being addressed
is clearly a major failure, one that invalidates even a correct program
theory and an excellent implementation.

2. No matter how good the problem theory may be, an inappropriate
program theory will lead to failure.

3. And, no matter how good the problem and program theories, a
poor implementation will also lead to failure.
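The nesting of these three failure types amounts to a simple conjunction: a program can show impact only if every stage is sound, and a fault at an earlier stage makes soundness at later stages moot. A minimal sketch of that logic (the function and its labels are illustrative, not part of the paper's argument):

```python
def program_effect(problem_theory_ok, program_theory_ok, implementation_ok):
    """A program can show impact only if all three stages are sound.
    Checking the stages in order mirrors the nesting: a fault at an
    earlier stage makes the later stages irrelevant."""
    if not problem_theory_ok:
        return "fails: problem theory"
    if not program_theory_ok:
        return "fails: program theory"
    if not implementation_ok:
        return "fails: implementation"
    return "may succeed"

# Even a correct program theory and an excellent implementation
# cannot rescue a faulty problem theory:
print(program_effect(False, True, True))   # fails: problem theory
```
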

Sources of Theory Failure

A major reason for failures produced through incorrect problem and
program theories lies in the serious under-development of policy-relevant
social science theory in many of the basic disciplines. The major problem
with much basic social science is that social scientists have tended to
ignore policy-related variables in building theories, because policy-related
variables account for so little of the variance in the behavior in question. It
does little to help the construction of social policy to know that a major
determinant of criminality is age, because there is little, if anything, that
policy can do about the age distribution of a population, given a commitment
to our current democratic, liberal values. There are notable exceptions
to this generalization about social science: economics and political
science have always been closely attentive to policy considerations; this
indictment concerns mainly such fields as sociology, anthropology, and
psychology.

Incidentally, this generalization about social science and social scientists
should warn us not to expect too much from changes in social policy.
This implication is quite important and will be taken up later on in this
paper.

But the major reason why programs fail through failures in problem and
program theories is that the designers of programs are ordinarily amateurs
who know even less than the social scientists! There are numerous examples
of social programs that were concocted by well meaning amateurs
(but amateurs nevertheless). A prime example is the Community Mental
Health Centers program, an invention of the Kennedy administration, apparently
undertaken without any input from the National Institute of Mental
Health, the agency that was given the mandate to administer the program.
The same was true of the Comprehensive Employment and Training Act (CETA) and
its successor, the current Job Training Partnership Act (JTPA) program,
both of which were designed by rank amateurs and then given over to the
Department of Labor to run and administer. Of course, some of the
amateurs were advised by social scientists about the programs in question,
so the social scientists are not completely blameless.

The amateurs in question are the legislators, judicial officials, and other
policy makers who initiate policy and program changes. The main problem
with amateurs lies not so much in their amateur status as in the fact
that they may know little or nothing about the problem in question or
about the programs they design. Social science may not be an extraordinarily
well developed set of disciplines, but social scientists do know
something about our society and how it works, knowledge that can prove
useful in the design of policies and programs that have a chance of being
effective.

Our social programs seemingly are designed by procedures that lie
somewhere in between setting monkeys to typing mindlessly on typewriters
in the hope that additional Shakespearean plays will eventually be
produced, and Edisonian trial-and-error procedures in which one tactic
after another is tried in the hope of finding out some method that works.
Although the Edisonian paradigm is not highly regarded as a scientific
strategy by the philosophers of science, there is much to recommend it in
a historical period in which good theory is yet to develop. It is also a
strategy that allows one to learn from errors. Indeed, evaluation is very
much a part of an Edisonian strategy of starting new programs, and
attempting to learn from each trial.

PROBLEM THEORY FAILURES

One of the more persistent failures in problem theory is to under-estimate
the complexity of the social world. Most of the social problems with which
we deal are generated by very complex causal processes involving interactions
among societal-level, community-level, and individual-level processes.
In all likelihood there are biological-level processes
involved as well, however much our liberal ideology is repelled by
the idea. The consequence of under-estimating the complexity of the
problem is often to over-estimate our abilities to affect the amount and
course of the problem. This means that we are overly optimistic about how
much of an effect even the best of social programs can expect to achieve. It
also means that we under-design our evaluations, running the risk of
committing Type II errors: that is, not having enough statistical power in
our evaluation research designs to be able to detect reliably those small
effects that we are likely to encounter.
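The power point can be made concrete with a standard normal-approximation calculation for a two-group comparison. This is a minimal sketch; the effect sizes and sample sizes are illustrative, not drawn from any particular evaluation:

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test to detect
    a standardized effect of size d with n_per_group cases per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)              # two-sided critical value
    noncentrality = abs(d) * (n_per_group / 2) ** 0.5
    return z.cdf(noncentrality - z_crit)

# A small effect (d = 0.2) with sample sizes typical of field
# evaluations is easy to miss entirely (a Type II error):
print(round(power_two_sample(0.2, 100), 2))   # well under the usual 0.80 target
print(round(power_two_sample(0.2, 500), 2))   # larger samples restore power
```

The design lesson is the one in the text: when realistic effects are small, evaluations must be sized for them in advance, or null findings are nearly guaranteed.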

It is instructive to consider the example of the problem of crime in our
society. In the last two decades, we have learned a great deal about the
crime problem through our attempts, by initiating one social program after
another, to halt the rising crime rate. This series of trials has largely
failed to have significant impacts on crime rates. The research effort has
nevertheless yielded a great deal of empirical knowledge
about crime and criminals. For example, we now know a great deal about
the demographic characteristics of criminals and their victims. But, we
still have only the vaguest ideas about why the crime rates rose so steeply
in the period between 1970 and 1980 and, in the last few years, have started
what appears to be a gradual decline. We have also learned that the
criminal justice system has been given an impossible task to perform and,
indeed, practices a wholesale form of deception in which everyone acquiesces.

It has been found that most perpetrators of most criminal acts go
undetected, when detected go unprosecuted, and when prosecuted go
unpunished. Furthermore, most prosecuted and sentenced criminals are
dealt with by plea bargaining procedures that only in the last decade have
received formal recognition as occurring at all. After decades of sub-rosa
existence, plea bargaining is beginning to get official recognition in the
criminal code and in judicial interpretations of that code.

But most of what we have learned in the past two decades amounts to a
better description of the crime problem and the criminal justice system as
it presently functions. There is simply no doubt about the importance of
this detailed information: it is going to be the foundation of our understanding
of crime; but, it is not yet the basis upon which to build policies
and programs that can lessen the burden of crime in our society.
Perhaps the most important lesson learned from the descriptive and
evaluative researches of the past two decades is that crime and criminals
appear to be relatively insensitive to the range of policy and program
changes that have been evaluated in this period. This means that the
prospects for substantial improvements in the crime problem appear to be
slight, unless we gain better theoretical understanding of crime and criminals.
That is why the Iron Law of Evaluation appears to be an excellent
generalization for the field of social programs aimed at reducing crime and
leading criminals to the straight and narrow way of life. The knowledge
base for developing effective crime policies and programs simply does not
exist; hence in this field, we are condemned, hopefully temporarily, to
Edisonian trial and error.

PROGRAM THEORY AND IMPLEMENTATION FAILURES

As defined earlier, program theory failures are translations of a proper
understanding of a problem into inappropriate programs, and program
implementation failures arise out of defects in the delivery system used.
Although in principle it is possible to distinguish program theory failures
from program implementation failures, in practice it is difficult to do so.
For example, a correct program may be incorrectly delivered, and hence
would constitute a “pure” example of implementation failure, but it would
be difficult to identify this case as such, unless there were some instances
of correct delivery. Hence both program theory and program implementation
failures will be discussed together in this section.

These kinds of failure are likely the most common causes of ineffective
programs in many fields. There are many ways in which program theory
and program implementation failures can occur. Some of the more common
ways are listed below.

Wrong Treatment

This occurs when the treatment is simply a seriously flawed translation
of the problem theory into a program. One of the best examples is the
housing allowance experiment in which the experimenters attempted to
motivate poor households to move into higher quality housing by offering
them a rent subsidy, contingent on their moving into housing that met
certain quality standards (Struyk and Bendick, 1981). The experimenters
found that only a small portion of the poor households to whom this offer
was made actually moved to better housing and thereby qualified for and
received housing subsidy payments. After much econometric calculation,
this unexpected outcome apparently arose because the experimenters had
not taken into account that the costs of moving were far from zero. When
the costs of moving were subtracted from the anticipated dollar benefits
of the subsidy, the net benefits were in a very large
proportion of the cases uncomfortably close to zero and in some instances
negative. Furthermore, the housing standards applied almost totally
missed the point. They were technical standards that often characterized
as sub-standard housing that was quite acceptable to the households
involved. In other words, these were standards that the clients regarded as
irrelevant. It was unreasonable to assume that households
would undertake to move when there was no push of dissatisfaction from
the housing occupied and no substantial net positive benefit in dollar
terms for doing so. Incidentally, the fact that poor families with little
formal education were able to make decisions that were consistent with
the outcomes of highly technical econometric calculations improves one’s
appreciation of the innate intellectual abilities of that population.
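The households' decision logic can be sketched in a few lines. The dollar figures below are hypothetical, chosen only to show how a subsidy that looks generous gross can net out near zero, or negative, once moving costs are counted:

```python
def net_benefit(monthly_subsidy, months_in_residence, moving_cost):
    """Net dollar gain from accepting the offer: the subsidy stream the
    household expects to collect, minus the one-time cost of moving."""
    return monthly_subsidy * months_in_residence - moving_cost

# Hypothetical household: a $50/month subsidy over an expected
# 12-month horizon, weighed against moving costs.
print(net_benefit(50, 12, 550))   # 50: barely positive
print(net_benefit(50, 12, 700))   # -100: staying put is the rational choice
```
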

Right Treatment But Insufficient Dosage

A very recent set of trial policing programs in Houston, Texas, and
Newark, New Jersey, exemplifies how programs may fail not so much
because they administered the wrong treatment but because the
treatment was frail and puny (Police Foundation, 1985). One of the goals of
the programs was to produce a more positive evaluation of local police
departments in the views of local residents. Several different treatments
were attempted. In Houston, the police attempted to meet the presumed
needs of victims of crime by having a police officer call them up a week or
so after a crime complaint was received to ask “how they were doing” and
to offer help in “any way.” Over a period of a year, the police managed to
contact about 230 victims, but the help they could offer consisted mainly
of referrals to other agencies. Furthermore, the crimes in question were
mainly property thefts without personal contact between victims and
offenders, and the main request for aid was to speed up the
return of the stolen property. Anyone who knows even a little bit about
property crime in the United States would know that the police do little or
nothing to recover stolen property mainly because there is no way they can
do so. Since the callers from the police department could not offer any
substantial aid to remedy the problems caused by the crimes in question,
the treatment delivered by the program was essentially zero. It goes
without saying that those contacted by the police officers did not differ
from randomly selected controls (who had also been victimized but who
had not been called by the police) in their evaluation of the Houston
Police Department.

It seems likely that the treatment, namely expressions of
concern for the victims of crime delivered in a personal, face-to-face
way, would have been effective if the police could have offered substantial
help to the victims.

Counter-acting Delivery System

It is obvious that any program consists not only of the treatment
intended to be delivered but also of the delivery system and
whatever is done to clients in the delivery of services. Thus the income
maintenance experiments’ treatments consisted not only of the payments
but of the entire system of monthly income reports required of the clients,
the quarterly interviews, and the annual income reviews, as well as the
payment system and its rules. In that particular case, it is likely that the
payments dominated the delivery system, but in other cases that might
not be so, with the delivery system profoundly altering the impact of the
treatment.

Perhaps the most egregious example was the group counseling program
run in California prisons during the 1960s (Kassebaum, Ward, and
Wilner, 1971). Guards and other prison employees were used as counseling
group leaders, in sessions in which all participants, prisoners and
guards alike, were asked to be frank and candid with each other! There are
many reasons for the abysmal failure3 of this program to affect either
criminals’ behavior within prison or during their subsequent period of
parole, but among the leading contenders for the role of villain was the
prison system’s use of guards as therapists.

Another example is the failure of transitional aid payments to released
prisoners when the payment system was run by the state employment
security agency, in contrast to the strong positive effect found when run by
researchers (Rossi, Berk, and Lenihan, 1980). In a randomized experiment
run by social researchers in Baltimore, the provision of 3 months of
minimal support payments lowered the re-arrest rate by 8 percent, a small
decrement, but a significant one that was calculated to yield very favorable
benefit-to-cost ratios. When the Department of Labor wisely decided that
another randomized experiment should be run to see whether YOAA
(“Your Ordinary American Agency”) could achieve the same results,
large scale experiments in Texas and Georgia showed that putting the
treatment in the hands of the employment security agencies in those two
states cancelled the positive effects of the treatment. The procedure that
produced the failure was a simple one: the payments were made contingent
on being unemployed, as the employment security agencies usually
administer unemployment benefits. This created a strong work disincentive,
with the unfortunate consequence of a longer period of unemployment
for experimentals as compared to their randomized controls, and
hence a higher than expected re-arrest rate.

Pilot and Production Runs

The last example can be subsumed under a more general point: the fact
that a treatment is effective in a pilot test does not mean that its
effectiveness can be maintained when it is turned over to YOAA. This is the
lesson to be derived from the transitional aid experiments in Texas and
Georgia and from programs such as the Planned Variation teaching demonstration.
In the latter program, leading teaching specialists were asked to
develop versions of their teaching methods to be implemented in actual
school systems. Despite generous support and willing cooperation from
their schools, the researchers were unable to get workable versions of
their teaching strategies into place until at least a year into the running of
the program. There is a big difference between running a program on a
small scale with highly skilled and very devoted personnel and running a
program with the lesser skilled and less devoted personnel that YOAA
ordinarily has at its disposal. Programs that appear to be very promising
when run by the persons who developed them, often turn out to be
disappointments when turned over to line agencies.

Inadequate Reward System

The internally defined reward system of an organization has a strong
effect on which activities are assiduously pursued and which are
characterized by “benign neglect.” The fact that an agency is directed to
engage in some activity does not mean that it will do so unless the reward
system within that organization actively fosters compliance. Indeed, there
are numerous examples of reward systems that do not foster compliance.
Perhaps one of the best examples was the experience of several police
departments with the decriminalization of public intoxication. Both the
District of Columbia and Minneapolis, among other jurisdictions, rescinded
the ordinances that defined public drunkenness as a misdemeanor,
setting up detoxification centers to which police were asked to
bring persons found drunk on the streets. Under the old
system, police patrols would arrest drunks and bring them into the local
jail for an overnight stay. The arrests so made would “count” toward the
department’s measures of policing activity. Patrolmen were thereby motivated
to pick up drunks and book them into the local jail, especially in
periods when other arrest opportunities were slight. In contrast, under the
new system, the handling of drunks did not count toward an officer’s
arrest record. The consequence: police did not bring drunks into the new
detoxification centers, and the municipalities eventually had to set up
separate service systems to rustle up clients for the detoxification
centers.

The illustrations given above should be sufficient to make the general
point that the appropriate implementation of social programs is a problematic
matter. This is especially the case for programs that rely on persons to
deliver the service in question. There is no doubt that federal, state, and
local agencies can calculate and deliver checks with precision and efficiency.
There also can be little doubt that such agencies can maintain a
physical infrastructure that delivers public services efficiently, even
though there are a few examples of the failure of water and sewer systems
on scales that threaten public health. But there is considerable doubt that
human services tailored to differences among individual clients can be
delivered well on a large scale.
We know that public education is not doing equally well in facilitating
the learning of all children. We know that our mental health system does
not often succeed in treating the chronically mentally ill in a consistent
and effective fashion. This does not mean that some children cannot be
educated or that the chronically mentally ill cannot be treated; it does
mean that our ability to carry out these activities on a mass scale is somewhat in
doubt.

CONCLUSIONS

This paper started out with a recital of the several metallic laws stating
that evaluations of social programs have rarely found them to be effective
in achieving their desired goals. The discussion modified the metallic laws
to express them as statistical tendencies rather than rigid and inflexible
laws to which all evaluations must strictly adhere. In this latter sense, the
laws simply do not hold. However, when stripped of their rigidity, the laws
can be seen to be valid as statistical generalizations, fairly accurately
representing what have been the end results of evaluations “on-the-average.”
In short, few large-scale social programs have been found to be even
minimally effective, and even fewer have been found to be spectacularly
effective. There are no social science equivalents of the Salk vaccine.

Were this conclusion the only message of this paper, then it would tell a
dismal tale indeed. But there is a more important message in the examination
of the reasons why social programs fail so often. In this connection,
the paper pointed out two deficiencies:

First, policy relevant social science theory that should be the intellectual
underpinning of our social policies and programs is either deficient or
simply missing. Effective social policies and programs cannot be designed
consistently until it is thoroughly understood how changes in policies and
programs can affect the social problems in question. The social policies
and programs that we have tested have been designed, at best, on the basis
of common sense and perhaps intelligent guesses, a weak foundation for
the construction of effective policies and programs.

In order to make progress, we need to deepen our understanding of the
long range and proximate causation of our social problems and our understanding
about how active interventions might alleviate the burdens of
those problems. This is not simply a call for more funds for social science
research but also a call for a redirection of social science research toward
understanding how public policy can affect those problems.

Second, in pointing to the frequent failures in the implementation of
social programs, especially those that involve labor intensive delivery of
services, we may also note an important missing professional activity in
those fields. The physical sciences have their engineering counterparts;
the biological sciences have their health care professionals; but social
science has neither an engineering nor a strong clinical component. To be
sure, we have clinical psychology, education, social work, public administration,
and law as our counterparts to engineering, but these are only
weakly connected with basic social science. What is apparently needed is
a new profession of social and organizational engineering devoted to the
design of human services delivery systems that can deliver treatments
with fidelity and effectiveness.

In short, the double message of this paper is an argument for
further development of policy relevant basic social science and the establishment
of the new profession of social engineer.

NOTES

1. Note that the law emphasizes that it applies primarily to “large scale” social
programs, primarily those that are implemented by an established governmental agency
covering a region or the nation as a whole. It does not apply to small scale demonstrations or to programs run by their designers.
2. Unfortunately, it has proven difficult to stop large scale programs even when evaluations prove them to be ineffective. The federal job training programs seem remarkably resistant to the almost consistent verdicts of ineffectiveness. This limitation on the Edisonian paradigm arises out of the tendency for large scale programs to accumulate staff and clients that have extensive stakes in the program’s continuation.
3. This is a complex example in which there are many competing explanations for the
failure of the program. In the first place, the program may be a good example of the failure of problem theory since the program was ultimately based on a theory of criminal behavior as psychopathology. In the second place, the program theory may have been at fault for employing counselling as a treatment. This example illustrates how difficult it is to separate out the three sources of program failures in specific instances.

REFERENCES

Cutright, P. and F. S. Jaffe
1977 Impact of Family Planning Programs on Fertility: The U.S. Experience. New
York: Praeger.
Guba, E. G. and Y. S. Lincoln
1981 Effective Evaluation: Improving the Usefulness of Evaluation Results Through
Responsive and Naturalistic Approaches. San Francisco: Jossey-Bass.
Kassebaum, G., D. Ward, and D. Wilner
1971 Prison Treatment and Parole Survival. New York: John Wiley.
Lipton, D., R. Martinson, and L. Wilks
1975 The Effectiveness of Correctional Treatment. New York: Praeger.
Patton, M.
1980 Qualitative Evaluation Methods. Beverly Hills, CA: Sage Publications.
Police Foundation
1985 Evaluation of Newark and Houston Policing Experiments. Washington, DC.
Raizen, S. A. and P. H. Rossi (eds.)
1980 Program Evaluation in Education: When? How? To What Ends? Washington,
DC: National Academy Press.
Rossi, P. H., R. A. Berk and K. J. Lenihan
1980 Money, Work and Crime. New York: Academic.
Sherman, L. W. and R. A. Berk
1984 “Deterrent Effects of Arrest for Domestic Assault.” American Sociological Review
49: 261-271.
Smith, M. L., G. V. Glass, and T. I. Miller
1980 The Benefits of Psychotherapy: An Evaluation. Baltimore: The Johns Hopkins
University Press.
Struyk, R. J. and M. Bendick
1981 Housing Vouchers for the Poor. Washington, DC: The Urban Institute.
Westat, Inc.
1976-1980 Continuous Longitudinal Manpower Survey, Reports 1-10. Rockville, MD:
Westat, Inc.