
I Would Rather Do Anything Else than Grade Your Final Papers — Robin Lee Mozer

If the greatest joy that comes from retirement is that I no longer have to attend faculty meetings, the second greatest joy is that I no longer have to grade student papers.  I know, I know: commenting on student writing is a key component of being a good teacher, and there’s a real satisfaction that comes from helping someone become a better thinker and better writer.

But most students are not producing papers to improve their minds or hone their writing skills.  They’re just trying to fulfill a course requirement and get a decent grade.  And this creates a strong incentive not for excellence but for adequacy.  It encourages people to devote most of their energy toward gaming the system.

The key skill is to produce something that looks and feels like a good answer to the exam question or a good analysis of an intellectual problem.  Students have a powerful incentive to achieve the highest grade for the lowest investment of time and intellectual effort.  This means aiming for quantity over quality (puff up the prose to hit the word count) and form over substance (dutifully refer to the required readings without actually drawing meaningful content from them).  Glibness provides useful cover for the absence of content.  It’s depressing to observe how the system fosters discursive means that undermine the purported aims of education.

Back in the days when students turned in physical papers and then received them back with handwritten comments from the instructor, I used to get a twinge in my stomach when I saw that most students didn’t bother to pick up their final papers from the box outside my office.  I felt like a sucker for providing careful comments that no one would ever see.  At one point I even asked students to tell me in advance if they wanted their papers back, so I only commented on the ones that might get read.  But this was even more depressing, since it meant that a lot of students didn’t even mind letting me know that they really only cared about the grade.  The fiction of doing something useful was what helped keep me going.

So, like many other faculty, I responded with joy to a 2016 piece that Robin Lee Mozer wrote in McSweeney’s called “I Would Rather Do Anything Else than Grade Your Final Papers.”  As a public service to teachers everywhere, I’m republishing her essay here.  Enjoy.

 

I WOULD RATHER DO ANYTHING ELSE THAN GRADE YOUR FINAL PAPERS

Dear Students Who Have Just Completed My Class,

I would rather do anything else than grade your Final Papers.

I would rather base jump off of the parking garage next to the student activity center or eat that entire sketchy tray of taco meat leftover from last week’s student achievement luncheon that’s sitting in the department refrigerator or walk all the way from my house to the airport on my hands than grade your Final Papers.

I would rather have a sustained conversation with my grandfather about politics and government-supported healthcare and what’s wrong with the system today and why he doesn’t believe in homeowner’s insurance because it’s all a scam than grade your Final Papers. Rather than grade your Final Papers, I would stand in the aisle at Lowe’s and listen patiently to All the Men mansplain the process of buying lumber and how essential it is to sight down the board before you buy it to ensure that it’s not bowed or cupped or crook because if you buy lumber with defects like that you’re just wasting your money even as I am standing there, sighting down a 2×4 the way my father taught me 15 years ago.

I would rather go to Costco on the Friday afternoon before a three-day weekend. With my preschooler. After preschool.

I would rather go through natural childbirth with twins. With triplets. I would rather take your chemistry final for you. I would rather eat beef stroganoff. I would rather go back to the beginning of the semester like Sisyphus and recreate my syllabus from scratch while simultaneously building an elaborate class website via our university’s shitty web-based course content manager and then teach the entire semester over again than grade your goddamn Final Papers.

I do not want to read your 3AM-energy-drink-fueled excuse for a thesis statement. I do not want to sift through your mixed metaphors, your abundantly employed logical fallacies, your incessant editorializing of your writing process wherein you tell me As I was reading through articles for this paper I noticed that — or In the article that I have chosen to analyze, I believe the author is trying to or worse yet, I sat down to write this paper and ideas kept flowing into my mind as I considered what I should write about because honestly, we both know that the only thing flowing into your mind were thoughts of late night pizza or late night sex or late night pizza and sex, or maybe thoughts of that chemistry final you’re probably going to fail later this week and anyway, you should know by now that any sentence about anything flowing into or out of or around your blessed mind won’t stand in this college writing classroom or Honors seminar or lit survey because we are Professors and dear god, we have Standards.

I do not want to read the one good point you make using the one source that isn’t Wikipedia. I do not want to take the time to notice that it is cited properly. I do not want to read around your 1.25-inch margins or your gauche use of size 13 sans serif fonts when everyone knows that 12-point Times New Roman is just. Fucking. Standard. I do not want to note your missing page numbers. Again. For the sixth time this semester. I do not want to attempt to read your essay printed in lighter ink to save toner, as you say, with the river of faded text from a failing printer cartridge splitting your paper like Charlton Heston in The Ten Commandments, only there, it was a sea and an entire people and here it is your vague stand-in for an argument.

I do not want to be disappointed.

I do not want to think less of you as a human being because I know that you have other classes and that you really should study for that chemistry final because it is organic chemistry and everyone who has ever had a pre-med major for a roommate knows that organic chemistry is the weed out course and even though you do not know this yet because you have never even had any sort of roommate until now, you are going to be weeded out. You are going to be weeded out and then you will be disappointed and I do not want that for you. I do not want that for you because you will have enough disappointments in your life, like when you don’t become a doctor and instead become a philosophy major and realize that you will never make as much money as your brother who went into some soul-sucking STEM field and landed some cushy government contract and made Mom and Dad so proud and who now gives you expensive home appliances like espresso machines and Dyson vacuums for birthday gifts and all you ever send him are socks and that subscription to that shave club for the $6 middle-grade blades.

I do not want you to be disappointed. I would rather do anything else than disappoint you and crush all your hopes and dreams —

Except grade your Final Papers.

The offer to take your chemistry final instead still stands.


Getting It Wrong — Rethinking a Life in Scholarship

This post is an overview of my life as a scholar.  I presented an oral version in my job talk at Stanford in 2002.  The idea was to make sense of the path I’d taken in my scholarly writing up to that point.  What were the issues I was looking at and why?  How did these ideas develop over time?  And what lessons can we learn from this process that might be of use to scholars who are just starting out?

This piece first appeared in print as the introduction to a 2005 book called Education, Markets, and the Public Good: The Selected Works of David F. Labaree.  As a friend told me after hearing about the book, “Isn’t this kind of compilation something that’s published after you’re dead?”  So why was I doing this as a mere youth of 58?  The answer: Routledge offered me the opportunity.  Was there ever an academic who turned down the chance to publish something when the chance arose?  The book was part of a series called — listen for the drum roll — The World Library of Educationalists, which must have a place near the top of the list of bad ideas floated by publishers.  After the first year, when a few libraries rose to the bait, annual sales of this volume never exceeded single digits.  Its rank on the Amazon bestseller list is normally in the two millions.

Needless to say, no one ever read this piece in its originally published form.  So I tried again, this time slightly adapting it for a 2011 volume edited by Wayne Urban called Leaders in the Historical Study of American Education, which consisted of autobiographical sketches by scholars in the field.  It now ranks in the five millions on Amazon, so the essay still never found a reader.  As a result, I decided to give the piece one more chance at life in my blog.  I enjoyed reading it again and thought it offered some value to young scholars just starting out in a daunting profession.  I hope you enjoy it too.

The core insight is that research trajectories are not things you can  carefully map out in advance.  They just happen.  You learn as you go.  And the most effective means of learning from your own work — at least from my experience — arises from getting it wrong, time and time again.  If you’re not getting things wrong, you may not be learning much at all, since you may just be continually finding what you’re looking for.  It may well be that what you need to find are the things you’re not looking for and that you really don’t want to confront.  The things that challenge your own world view, that take you in a direction you’d rather not go, forcing you to give up ideas you really want to keep.

Another insight I got from this process of reflection is that it’s good to know the central weaknesses in the way you do research.  Everyone has them.  Best to acknowledge where you’re coming from and learn to live with that.  These weaknesses don’t discount the value of your work; they just put limits on it.  Your way of doing scholarship is probably better at producing some kinds of insights than others.  That’s OK.  Build on your strengths and let others point out your weaknesses.  You have no obligation and no ability to give the final answer on any important question.  Instead, your job is to make a provocative contribution to the ongoing scholarly conversation and let other scholars take it from there, countering your errors and filling in the gaps.  There is no last word.

Here’s a link to a PDF of the 2011 version.  Hope you find it useful.

 

Adventures in Scholarship

Instead of writing an autobiographical sketch for this volume, I thought it would be more useful to write about the process of scholarship, using my own case as a cautionary tale.  The idea is to help emerging scholars in the field to think about how scholars develop a line of research across a career, both with the hope of disabusing them of misconceptions and showing them how scholarship can unfold as a scary but exhilarating adventure in intellectual development.  The brief story I tell here has three interlocking themes:  You need to study things that resonate with your own experience; you need to take risks and plan to make a lot of mistakes; and you need to rely on friends and colleagues to tell you when you’re going wrong.  Let me explore each of these points.

Study What Resonates with Experience

First, a little about the nature of the issues I explore in my scholarship and then some thoughts about the source of my interest in these issues. My work focuses on the historical sociology of the American system of education and on the thick vein of irony that runs through it.  This system has long presented itself as a model of equal opportunity and open accessibility, and there is a lot of evidence to support these claims.  In comparison with Europe, this upward expansion of access to education came earlier, moved faster, and extended to more people.  Today, virtually anyone can go to some form of postsecondary education in the U.S., and more than two-thirds do.  But what students find when they enter the educational system at any level is that they are gaining equal access to a sharply unequal array of educational experiences.  Why?  Because the system balances open access with radical stratification.  Everyone can go to high school, but quality of education varies radically across schools.  Almost everyone can go to college, but the institutions that are most accessible (community colleges) provide the smallest boost to a student’s life chances, whereas the ones that offer the surest entrée into the best jobs (major research universities) are highly selective.  This extreme mixture of equality and inequality, of accessibility and stratification, is a striking and fascinating characteristic of American education, which I have explored in some form or another in all my work.

Another prominent irony in the story of American education is that this system, which was set up to instill learning, actually undercuts learning because of a strong tendency toward formalism.  Educational consumers (students and their parents) quickly learn that the greatest rewards of the system go to those who attain its highest levels (measured by years of schooling, academic track, and institutional prestige), where credentials are highly scarce and thus the most valuable.  This vertically-skewed incentive structure strongly encourages consumers to game the system by seeking to accumulate the largest number of tokens of attainment – grades, credits, and degrees – in the most prestigious programs at the most selective schools.  However, nothing in this reward structure encourages learning, since the payoff comes from the scarcity of the tokens and not the volume of knowledge accumulated in the process of acquiring these tokens.  At best, learning is a side effect of this kind of credential-driven system.  At worst, it is a casualty of the system, since the structure fosters consumerism among students, who naturally seek to gain the most credentials for the least investment in time and effort.  Thus the logic of the used-car lot takes hold in the halls of learning.

In exploring these two issues of stratification and formalism, I tend to focus on one particular mechanism that helps explain both kinds of educational consequences, and that is the market.  Education in the U.S., I argue, has increasingly become a commodity, which is offered and purchased through market processes in much the same way as other consumer goods.  Educational institutions have to be sensitive to consumers, by providing the mix of educational products that the various sectors of the market demand.  This promotes stratification in education, because consumers want educational credentials that will distinguish them from the pack in their pursuit of social advantage.  It also promotes formalism, because markets operate based on the exchange value of a commodity (what it can be exchanged for) rather than its use value (what it can be used for).  Educational consumerism preserves and increases social inequality, undermines knowledge acquisition, and promotes the dysfunctional overinvestment of public and private resources in an endless race for degrees of advantage.  The result is that education has increasingly come to be seen primarily as a private good, whose benefits accrue only to the owner of the educational credential, rather than a public good, whose benefits are shared by all members of the community even if they don’t have a degree or a child in school.  In many ways, the aim of my work has been to figure out why the American vision of education over the years made this shift from public to private.

This is what my work has focused on in the last 30 years, but why focus on these issues?  Why this obsessive interest in formalism, markets, stratification, and education as arbiter of status competition?  Simple. These were the concerns I grew up with.

George Orwell once described his family’s social location as the lower upper middle class, and this captures the situation of my own family.  In The Road to Wigan Pier, his meditation on class relations in England, he talks about his family as being both culture rich and money poor.[1]  Likewise for mine.  Both of my grandfathers were ministers.  On my father’s side the string of clergy went back four generations in the U.S.  On my mother’s side, not only was her father a minister but so was her mother’s father, who was in turn the heir to a long clerical lineage in Scotland.  All of these ministers were Presbyterians, whose clergy has long had a distinctive history of being highly educated cultural leaders who were poor as church mice.  The last is a bit of an exaggeration, but the point is that their prestige and authority came from learning and not from wealth.  So they tended to value education and disdain grubbing for money.  My father was an engineer who managed to support his family in a modest but comfortable middle-class lifestyle.  He and my mother plowed all of their resources into the education of their three sons, sending all of them to a private high school in Philadelphia (Germantown Academy) and to private colleges (Lehigh, Drexel, Wooster, and Harvard).  Both of my parents were educated at elite schools (Princeton and Wilson) – on ministerial scholarships – and they wanted to do the same for their own children.

What this meant is that we grew up taking great pride in our cultural heritage and educational accomplishments and adopting a condescending attitude to those who simply engaged in trade for a living.  Coupled with this condescension was a distinct tinge of envy for the nice clothes, well decorated houses, new cars, and fancy trips that the families of our friends experienced.  I thought of my family as a kind of frayed nobility, raising the flag of culture in a materialistic society while wearing hand-me-down clothes.  From this background, it was only natural for me to study education as the central social institution, and to focus in particular on the way education had been corrupted by the consumerism and status-competition of a market society.  In doing so I was merely entering the family business.  Someone out there needed to stand up for substantive over formalistic learning and for the public good over the private good, while at the same time calling attention to the dangers of a social hierarchy based on material status.  So I launched my scholarship from a platform of snobbish populism – a hankering for a lost world where position was grounded on the cultural authority of true learning and where mere credentialism could not hold sway.

Expect to Get Things Wrong

Becoming a scholar is not easy under the best of circumstances, and we may make it even harder by trying to imbue emerging scholars with a dedication to getting things right.[2]  In doctoral programs and tenure reviews, we stress the importance of rigorous research methods and study design, scrupulous attribution of ideas, methodical accumulation of data, and cautious validation of claims.  Being careful to stand on firm ground methodologically in itself is not a bad thing for scholars, but trying to be right all the time can easily make us overly cautious, encouraging us to keep so close to our data and so far from controversy that we end up saying nothing that’s really interesting.  A close look at how scholars actually carry out their craft reveals that they generally thrive on frustration.  Or at least that has been my experience.  When I look back at my own work over the years, I find that the most consistent element is a tendency for getting it wrong.  Time after time I have had to admit failure in the pursuit of my intended goal, abandon an idea that I had once warmly embraced, or backtrack to correct a major error.  In the short run these missteps were disturbing, but in the long run they have proven fruitful.

Maybe I’m just rationalizing, but it seems that getting it wrong is an integral part of scholarship.  For one thing, it’s central to the process of writing.  Ideas often sound good in our heads and resonate nicely in the classroom, but the real test is whether they work on paper.[3]  Only there can we figure out the details of the argument, assess the quality of the logic, and weigh the salience of the evidence.  And whenever we try to translate a promising idea into a written text, we inevitably encounter problems that weren’t apparent when we were happily playing with the idea over lunch.  This is part of what makes writing so scary and so exciting:  It’s a high wire act, in which failure threatens us with every step forward.  Can we get past each of these apparently insuperable problems?  We don’t really know until we get to the end.

This means that if there’s little risk in writing a paper there’s also little potential reward.  If all we’re doing is putting a fully developed idea down on paper, then this isn’t writing; it’s transcribing.  Scholarly writing is most productive when authors are learning from the process, and this happens only if the writing helps us figure out something we didn’t really know (or only sensed), helps us solve an intellectual problem we weren’t sure was solvable, or makes us turn a corner we didn’t know was there.  Learning is one of the main things that makes the actual process of writing (as opposed to the final published product) worthwhile for the writer.  And if we aren’t learning something from our own writing, then there’s little reason to think that future readers will learn from it either.  But these kinds of learning can only occur if a successful outcome for a paper is not obvious at the outset, which means that the possibility of failure is critically important to the pursuit of scholarship.

Getting it wrong is also functional for scholarship because it can force us to give up a cherished idea in the face of the kinds of arguments and evidence that accumulate during the course of research.  Like everyone else, scholars are prone to confirmation bias.  We look for evidence to support the analysis we prefer and overlook evidence that supports other interpretations.  So when we collide with something in our research or writing that deflects us from the path toward our preferred destination, we tend to experience this deflection as failure.  However, although these experiences are not pleasant, they can be quite productive.  Not only do they prompt us to learn things we don’t want to know, they can also introduce arguments into the literature that people don’t want to hear.  A colleague at the University of Michigan, David Angus, had both of these benefits in mind when he used to pose the following challenge to every candidate for a faculty position in the School of Education:  “Tell me about some point when your research forced you to give up an idea you really cared about.”

I have experienced all of these forms of getting it wrong.  Books never worked out the way they were supposed to, because of changes forced on me by the need to come up with remedies for ailing arguments.  The analysis often turned in a direction that meant giving up something I wanted to keep and embracing something I preferred to avoid.  And nothing ever stayed finished.  Just when I thought I had a good analytical hammer and started using it to pound everything in sight, it would shatter into pieces and I would be forced to start over.  This story of misdirection and misplaced intentions starts, as does every academic story, with a dissertation.

Marx Gives Way to Weber

My dissertation topic fell into my lap one day during the final course in my doctoral program in sociology at the University of Pennsylvania, when I mentioned to Michael Katz that I had done a brief study of Philadelphia’s Central High School for an earlier class.  He had a new grant for studying the history of education in Philadelphia and Central was the lead school.  He needed someone to study the school, and I needed a topic, advisor, and funding; by happy accident, it all came together in 15 minutes.  I had first become interested in education as an object of study as an undergraduate at Harvard in the late 1960s, where I majored in Students for a Democratic Society and minored in sociology.  In my last year or two there, I worked on a Marxist analysis of Harvard as an institution of social privilege (is there a better case?), which whetted my appetite for educational research.

For the dissertation, I wanted to apply the same kind of Marxist approach to Central High School, which seemed to beg for it.  Founded in 1838, it was the first high school in the city and one of the first in the country, and it later developed into the elite academic high school for boys in the city.  It looked like the Harvard of public high schools.  I had a model for this kind of analysis, Katz’s study of Beverly High School, in which he explained how this high school, shortly after its founding, came to be seen by many citizens as an institution that primarily served the upper classes, thus prompting the town meeting to abolish the school in 1861.[4]  I was planning to do this kind of study about Central, and there seemed to be plenty of evidence to support such an interpretation, including its heavily upper-middle-class student body, its aristocratic reputation in the press, and its later history as the city’s elite high school.

That was the intent, but my plan quickly ran into two big problems in the data I was gathering.  First, a statistical analysis of student attainment and achievement at the school over its first 80 years showed a consistent pattern:  only one-quarter of the students managed to graduate, which meant it was highly selective; but grades and not class determined who made it and who didn’t, which meant it was – surprise – highly meritocratic.  Attrition in modern high schools is strongly correlated with class, but this was not true in the early years at Central.  Middle class students were more likely to enroll in the first place, but they were no more likely to succeed than working class students.  The second problem was that the high school’s role in the Philadelphia school system didn’t fit the Marxist story of top-down control that I was trying to tell.  In the first 50 years of the high school, there was a total absence of bureaucratic authority over the Philadelphia school system.  The high school was an attractive good in the local educational market, offering elevated education in a grand building at a collegiate level (it granted bachelor degrees) and at no cost.  Grammar school students competed for access to this commodity by passing an entrance exam, and grammar school masters competed to get the most students into Central by teaching to the test.  The power that the high school exerted over the system was considerable but informal, arising from consumer demand from below rather than bureaucratic dictate from above.

Thus my plans to tell a story of class privilege and social control fell apart at the very outset of my dissertation; in its place, I found a story about markets and stratification:  Marx gives way to Weber.  The establishment of Central High School in the nation’s second largest city created a desirable commodity with instant scarcity, and this consumer-based market power not only gave the high school control over the school system but also gave it enough autonomy to establish a working meritocracy.  The high school promoted inequality: it served a largely middle class constituency and established an extreme form of educational stratification.  But it imposed a tough meritocratic regime equally on the children of the middle class and working class, with both groups failing most of the time.

Call on Your Friends for Help

In the story I’m telling here, the bad news is that scholarship is a terrain that naturally lures you into repeatedly getting it wrong.  The good news is that help is available if you look for it, which can turn scholarly wrong-headedness into a fruitful learning experience.  Just ask your friends and colleagues.  The things you most don’t want to hear may be just the things that will save you from intellectual confusion and professional oblivion.  Let me continue with the story, showing how colleagues repeatedly saved my bacon.

Markets Give Ground to Politics

Once I completed the dissertation, I gradually settled into being a Weberian, a process that took a while because of the disdain that Marxists hold for Weber.[5]  I finally decided I had a good story to tell about markets and schools, even if it wasn’t the one I had wanted to tell, so I used this story in rewriting the dissertation as a book.  When I had what I thought was a final draft ready to send to the publisher, I showed it to my colleague at Michigan State, David Cohen, who had generously offered to give it a reading.  His comments were extraordinarily helpful and quite devastating.  In the book, he said, I was interpreting the evolution of the high school and the school system as a result of the impact of the market, but the story I was really telling was about an ongoing tension for control of schools between markets and politics.[6]  The latter element was there in the text, but I had failed to recognize it and make it explicit in the analysis.  In short, he explained to me the point of my own book; so I had to rewrite the entire manuscript in order to bring out this implicit argument.

Framing this case in the history of American education as a tension between politics and markets allowed me to tap into the larger pattern of tensions that always exist in a liberal democracy:  the democratic urge to promote equality of power and access and outcomes, and the liberal urge to preserve individual liberty, promote free markets, and tolerate inequality.  The story of Central High School spoke to both these elements.  It showed a system that provided equal opportunity and unequal outcomes.  Democratic politics pressed for expanding access to high school for all citizens, whereas markets pressed for restricting access to high school credentials through attrition and tracking.  Central see-sawed back and forth between these poles, finally settling on the grand compromise that has come to characterize American education ever since:  open access to a stratified school system.  Using both politics and markets in the analysis also introduced me to the problem of formalism, since political goals for education (preparing competent citizens) value learning, whereas market goals (education for social advantage) value credentialing.

Disaggregating Markets

The book came out in 1988 with the title, The Making of an American High School.[7]  With politics and markets as my new hammer, everything looked like a nail.  So I wrote a series of papers in which I applied the idea to a wide variety of educational institutions and reform efforts, including the evolution of high school teaching as work, the history of social promotion, the history of the community college, the rhetorics of educational reform, and the emergence of the education school.

Midway through this flurry of papers, however, I ran into another big problem.  I sent a draft of my community college paper to David Hogan, a friend and former member of my dissertation committee at Penn, and his critique stopped me cold.  He pointed out that I was using the idea of educational markets to refer to two things that were quite different, both in concept and in practice.  One was the actions of educational consumers, the students who want education to provide the credentials they need in order to get ahead; the other was the actions of educational providers, the taxpayers and employers who want education to produce the human capital that society needs in order to function.  The consumer sought education’s exchange value, providing selective benefits for the individual who owns the credential; the producer sought education’s use value, providing collective benefits to everyone in society, even those not in school.

This forced me to reconstruct the argument from the ground up, abandoning the politics and markets angle and constructing in its place a tension among three goals that competed for primacy in shaping the history of American education.  “Democratic equality” referred to the goal of using education to prepare capable citizens; “social efficiency” referred to the goal of using education to prepare productive workers; and “social mobility” referred to the goal of using education to enable individuals to get ahead in society.  The first was a stand-in for educational politics; the second and third were a disaggregation of educational markets.

Abandoning the Good, the Bad, and the Ugly

Once formulated, the idea of the three goals became a mainstay in my teaching, and for a while it framed everything I wrote.  I finished the string of papers I mentioned earlier, energized by the analytical possibilities inherent in the new tool.  But by the mid-1990s, I began to be afraid that its magic power would start to fade on me soon, as had happened with earlier enthusiasms like Marxism and politics-and-markets.  Most ideas have a relatively short shelf life, as metaphors quickly reach their limits and big ideas start to shrink upon close examination.  That doesn’t mean these images and concepts are worthless, only that they are bounded, both conceptually and temporally.  So scholars need to strike while the iron is hot.  Michael Katz once made this point to me with the Delphic advice, “Write your first book first.”  In other words, if you have an idea worth injecting into the conversation, you should do so now, since it will eventually evolve into something else, leaving the first idea unexpressed.  Since the evolution of an idea is never finished, holding off publication until the idea is done is a formula for never publishing.

So it seemed like the right time to put together a collection of my three-goals papers into a book, and I had to act quickly before they started to turn sour.  With a contract for the book and a sabbatical providing time to put it together, I now had to face the problem of framing the opening chapter.  In early 1996 I completed a draft and submitted it to American Educational Research Journal.  The reviews knocked me back on my heels.  They were supportive but highly critical.  One in particular, which I later found out was written by Norton Grubb, forced me to rethink the entire scheme of competing goals.  He pointed out something I had completely missed in my enthusiasm for the tool-of-the-moment.  In practice my analytical scheme with three goals turned into a normative scheme with two:  a Manichean vision of light and darkness, with Democratic Equality as the Good, and with Social Mobility and Social Efficiency as the Bad and the Ugly.  This ideologically colored representation didn’t hold up under close scrutiny.  Grubb pointed out that social efficiency is not as ugly as I was suggesting.  Like democratic equality and unlike social mobility, it promotes learning, since it has a stake in the skills of the workforce.  Also, like democratic equality, it views education as a public good, whose benefits accrue to everyone and not just (as with social mobility) to the credential holder.

This trenchant critique forced me to start over, putting a different spin on the whole idea of competing goals, abandoning the binary vision of good and evil, reluctantly embracing the idea of balance, and removing the last vestige of my original bumper-sticker Marxism.  As I reconstructed the argument, I put forward the idea that all three of these goals emerge naturally from the nature of a liberal democracy, and that all three are necessary.[8]  There is no resolution to the tension among educational goals, just as there is no resolution to the problem of being both liberal and democratic.  We need an educational system that makes capable citizens and productive workers while also enabling individuals to pursue their own aspirations.  And we all act out our support for each of these goals according to which social role is most salient to us at the moment.  As citizens, we want graduates who can vote intelligently; as taxpayers and employers, we want graduates who will increase economic productivity; and as parents, we want an educational system that offers our children social opportunity.  The problem is the imbalance in the current mix of goals, as the growing primacy of social mobility over the other two goals privileges private over public interests, stratification over equality, and credentials over learning.

Examining Life at the Bottom of the System

With this reconstruction of the story, I was able to finish my second book, published in 1997, and get it out the door before any other major problems could threaten its viability.[9]  One such problem was already coming into view.  In comments on my AERJ goals paper, John Rury (the editor) pointed out that my argument relied on a status competition model of social organization – students fighting for scarce credentials in order to move up or stay up – that did not really apply to the lower levels of the system.  Students in the lower tracks in high school and in the open-access realms of higher education (community colleges and regional state universities) lived in a different world from the one I was talking about.  They were affected by the credentials race, but they weren’t really in the race themselves.  For them, the incentives to compete were minimal, the rewards remote, and the primary imperative was not success but survival.

Fortunately, however, there was one place at the bottom of the educational hierarchy I did know pretty well, and that was the poor beleaguered education school.  From 1985 to 2003, while I was teaching in the College of Education at Michigan State University, I received a rich education in the subject.  I had already started a book about ed schools, but it wasn’t until the book was half completed that I realized it was forcing me to rethink my whole thesis about the educational status game.  Here was an educational institution that was the antithesis of the Harvards and Central High Schools that I had been writing about thus far.  Residing at the very bottom of the educational hierarchy, the ed school was disdained by academics, avoided by the best students, ignored by policymakers, and discounted by its own graduates.  It was the perfect case to use in answering a question I had been avoiding:  What happens to education when credentials carry no exchange value and the status game is already lost?

What I found is that life at the bottom has some advantages, but they are outweighed by disadvantages.  On the positive side, the education school’s low status frees it to focus efforts on learning rather than on credentials, on the use value rather than exchange value of education; in this sense, it is liberated from the race for credentials that consumes the more prestigious realms of higher education.  On the negative side, however, the ed school’s low status means that it has none of the autonomy that prestigious institutions (like Central High School) generate for themselves, which leaves it vulnerable to kibitzing from the outside.  This institutional weakness also has made the ed school meekly responsive to its environment, so that over the years it obediently produced large numbers of teachers at low cost and with modest professional preparation, as requested.

When I had completed a draft of the book, I asked for comments from two colleagues at Michigan State, Lynn Fendler and Tom Bird, who promptly pointed out several big problems with the text.  One had to do with the argument in the last few chapters, where I was trying to make two contradictory points:  ed schools were weak in shaping schools but effective in promoting progressive ideology.  The other problem had to do with the book’s tone:  as an insider taking a critical position about ed schools, I sounded like I was trying to enhance my own status at the expense of colleagues.  Fortunately, they were able to show me a way out of both predicaments.  On the first issue, they helped me see that ed schools were more committed to progressivism as a rhetorical stance than as a mode of educational practice.  In our work as teacher educators, we have to prepare teachers to function within an educational system that is hostile to progressive practices.  On the second issue, they suggested that I shift from the third person to the first person.  By announcing clearly both my membership in the community under examination and my participation in the problems I was critiquing, I could change the tone from accusatory to confessional.  With these important changes in place, The Trouble with Ed Schools was published in 2004.[10]

Enabling Limitations

In this essay I have been telling a story about grounding research in an unlovely but fertile mindset, getting it wrong repeatedly, and then trying to fix it with the help of friends.  However, I don’t want to leave the impression that I think any of these fixes really resolved the problems.  The story is more about filling potholes than about re-engineering the road.  It’s also about some fundamental limitations in my approach to the historical sociology of American education, which I have been unwilling and unable to fix since they lie at the core of my way of seeing things.  Intellectual frameworks define, shape, and enable the work of scholars.  Such frameworks can be helpful by allowing us to cut a slice through the data and reveal interesting patterns that are not apparent from other angles, but they can only do so if they maintain a sharp leading edge.  As an analytical instrument, a razor works better than a baseball bat, and a beach ball doesn’t work at all.  The sharp edge, however, comes at a cost, since it necessarily narrows the analytical scope and commits a scholar to one slice through a problem at the expense of others.  I’m all too aware of the limitations that arise from my own cut at things.

One problem is that I tend to write a history without actors.  Taking a macro-sociological approach to history, I am drawn to explore general patterns and central tendencies in the school-society relationship rather than the peculiarities of individual cases.  In the stories I tell, people don’t act.  Instead, social forces contend, social institutions evolve in response to social pressures, and collective outcomes ensue.  My focus is on general processes and structures rather than on the variations within categories.  What is largely missing from my account of American education is the radical diversity of traits and behaviors that characterizes educational actors and organizations.  I plead guilty to these charges.  However, my aim has been not to write a tightly textured history of the particular but to explore some of the broad socially structured patterns that shape the main outlines of American educational life.  My sense is that this kind of work serves a useful purpose—especially in a field such as education, whose dominant perspectives have been psychological and presentist rather than sociological and historical; and in a sub-field like history of education, which can be prone to the narrow monograph with little attention to the big picture; and in a country like the United States, which is highly individualistic in orientation and tends to discount the significance of the collective and the categorical.

Another characteristic of my work is that I tend to stretch arguments well beyond the supporting evidence.  As anyone can see in reading my books, I am not in the business of building an edifice of data and planting a cautious empirical generalization on the roof.  My first book masqueraded as a social history of an early high school, but it was actually an essay on the political and market forces shaping the evolution of American education in general—a big leap to make from historical data about a single, atypical school.  Likewise my second book is a series of speculations about credentialing and consumerism that rests on a modest and eclectic empirical foundation.  My third book involves minimal data on education in education schools and maximal rumination about the nature of “the education school.”  In short, validating claims has not been my strong suit.  I think the field of educational research is sufficiently broad and rich that it can afford to have some scholars who focus on constructing credible empirical arguments about education and others who focus on exploring ways of thinking about the subject.

The moral of this story, therefore, may be that scholarship is less a monologue than a conversation.  In education, as in other areas, our field is so expansive that we can’t cover more than a small portion, and it’s so complex that we can’t even gain mastery over our own tiny piece of the terrain.  But that’s ok.  As participants in the scholarly conversation, our responsibility is not to get things right but to keep things interesting, while we rely on discomfiting interactions with our data and with our colleagues to provide the correctives we need to make our scholarship more durable.

[1]  George Orwell,  The Road to Wigan Pier (New York: Harcourt, Brace, 1958).

[2]  I am grateful to Lynn Fendler and Tom Bird for comments on an earlier draft of this portion of the essay.  As they have done before, they saved me from some embarrassing mistakes.  I presented an earlier version of this analysis in a colloquium at the Stanford School of Education in 2002 and in the Division F Mentoring Seminar at the American Educational Research Association annual meeting in New Orleans later the same year.  A later version was published as the introduction to Education, Markets, and the Public Good: The Selected Works of David F. Labaree (London: Routledge Falmer, 2007).  Reprinted with the kind permission of Taylor and Francis.

[3]  That doesn’t mean it’s necessarily the best way to start developing an idea.  For me, teaching has always served better as a medium for stimulating creative thought.  It’s a chance for me to engage with ideas from texts about a particular topic, develop a story about these ideas, and see how it sounds when I tell it in class and listen to student responses.  The classroom has a wonderful mix of traits for these purposes: it forces discipline and structure on the creative process while allowing space for improvisation and offering the chance to reconstruct everything the next time around.  After my first book, most of my writing had its origins in this pedagogical process.  But at a certain point I find that I have to test these ideas in print.

[4]  Michael B. Katz, The Irony of Early School Reform: Educational Innovation in Mid-Nineteenth Century Massachusetts (Cambridge, MA: Harvard University Press, 1968).

[5]  Marx’s message is rousing and it can fit on a bumper sticker:  Workers of the world, unite!  But Weber’s message is more complicated, pessimistic, and off-putting:  The iron cage of rationalization has come to dominate the structure of thought and social action, but we can’t stop it or even escape from it.

[6]  He also pointed out, in passing, that my chapter on the attainment system at the high school – which incorporated 17 tables in the book (30 in the dissertation), and which took me two years to develop by collecting, coding, keying, and statistically analyzing data from 2,000 student records – was essentially one big footnote in support of the statement, “Central High School was meritocratic.”  Depressing but true.

[7]  David F. Labaree, The Making of an American High School: The Credentials Market and the Central High School of Philadelphia, 1838-1939 (New Haven: Yale University Press, 1988).

[8]  David F. Labaree, “Public Goods, Private Goods: The American Struggle over Educational Goals,” American Educational Research Journal 34:1 (Spring, 1997): 39-81.

[9]  David F. Labaree,  How to Succeed in School Without Really Learning: The Credentials Race in American Education (New Haven: Yale University Press, 1997).

[10] David F. Labaree,  The Trouble with Ed Schools (New Haven: Yale University Press, 2004).

Posted in Academic writing, Writing

Academic Writing Issues #9: Metaphors — The Poetry of Everyday Life

Earlier I posted a piece about mangled metaphors (Academic Writing Issues #6), which focused on the trouble that writers get into when they use a metaphor without taking into account the root comparison that is embedded within it.  Example:  talking about “the doctrine set forth in Roe v. Wade and its progeny” — a still-born metaphor if there ever was one.  So writers need to be wary of metaphors, especially those that have become clichés, thus making the original reference dormant.

But don’t let these problems put you off from using metaphors altogether.  Actually, it’s nearly impossible to write without any metaphors, since they are so central to communication.  Literal meanings are useful, and in scientific writing precision is important to maintain clarity.  But literal language is boring, pedestrian.  It just plods along, telling a story without conveying what the story means.  Metaphor is how we create a richness of meaning, which comes from not just telling what something is but showing what it’s related to.  Metaphors create depth and resonance, and they stick in your mind.

Think about the power of a great book title, which captures the essence of the text in a vivid image:  Bowling Alone; The Bell Curve; The Unbearable Lightness of Being; The Botany of Desire.

In the piece below, David Brooks talks about metaphors as the poetry of everyday life in a 2011 column from the New York Times.  I think you’ll like it.


April 11, 2011

Poetry for Everyday Life

By DAVID BROOKS

Here’s a clunky but unremarkable sentence that appeared in the British press before the last national election: “Britain’s recovery from the worst recession in decades is gaining traction, but confused economic data and the high risk of a hung Parliament could yet snuff out its momentum.”

The sentence is only worth quoting because in 28 words it contains four metaphors. Economies don’t really gain traction, like a tractor. Momentum doesn’t literally get snuffed out, like a cigarette. We just use those metaphors, without even thinking about it, as a way to capture what is going on.

In his fine new book, “I Is an Other,” James Geary reports on linguistic research suggesting that people use a metaphor every 10 to 25 words. Metaphors are not rhetorical frills at the edge of how we think, Geary writes. They are at the very heart of it.

George Lakoff and Mark Johnson, two of the leading researchers in this field, have pointed out that we often use food metaphors to describe the world of ideas. We devour a book, try to digest raw facts and attempt to regurgitate other people’s ideas, even though they might be half-baked.

When talking about relationships, we often use health metaphors. A friend might be involved in a sick relationship. Another might have a healthy marriage.

When talking about argument, we use war metaphors. When talking about time, we often use money metaphors. But when talking about money, we rely on liquid metaphors. We dip into savings, sponge off friends or skim funds off the top. Even the job title stockbroker derives from the French word brocheur, the tavern worker who tapped the kegs of beer to get the liquidity flowing.

The psychologist Michael Morris points out that when the stock market is going up, we tend to use agent metaphors, implying the market is a living thing with clear intentions. We say the market climbs or soars or fights its way upward. When the market goes down, on the other hand, we use object metaphors, implying it is inanimate. The market falls, plummets or slides.

Most of us, when asked to stop and think about it, are by now aware of the pervasiveness of metaphorical thinking. But in the normal rush of events, we often see straight through metaphors, unaware of how they refract perceptions. So it’s probably important to pause once a month or so to pierce the illusion that we see the world directly. It’s good to pause to appreciate how flexible and tenuous our grip on reality actually is.

Metaphors help compensate for our natural weaknesses. Most of us are not very good at thinking about abstractions or spiritual states, so we rely on concrete or spatial metaphors to (imperfectly) do the job. A lifetime is pictured as a journey across a landscape. A person who is sad is down in the dumps, while a happy fellow is riding high.

Most of us are not good at understanding new things, so we grasp them imperfectly by relating them metaphorically to things that already exist. That’s a “desktop” on your computer screen.

Metaphors are things we pass down from generation to generation, which transmit a culture’s distinct way of seeing and being in the world. In his superb book “Judaism: A Way of Being,” David Gelernter notes that Jewish thought uses the image of a veil to describe how Jews perceive God — as a presence to be sensed but not seen, which is intimate and yet apart.

Judaism also emphasizes the metaphor of separateness as a path to sanctification. The Israelites had to separate themselves from Egypt. The Sabbath is separate from the week. Kosher food is separate from the nonkosher. The metaphor describes a life in which one moves from nature and conventional society to the sacred realm.

To be aware of the central role metaphors play is to be aware of how imprecise our most important thinking is. It’s to be aware of the constant need to question metaphors with data — to separate the living from the dead ones, and the authentic metaphors that seek to illuminate the world from the tinny advertising and political metaphors that seek to manipulate it.

Most important, being aware of metaphors reminds you of the central role that poetic skills play in our thought. If much of our thinking is shaped and driven by metaphor, then the skilled thinker will be able to recognize patterns, blend patterns, apprehend the relationships and pursue unexpected likenesses.

Even the hardest of the sciences depend on a foundation of metaphors. To be aware of metaphors is to be humbled by the complexity of the world, to realize that deep in the undercurrents of thought there are thousands of lenses popping up between us and the world, and that we’re surrounded at all times by what Steven Pinker of Harvard once called “pedestrian poetry.”

Posted in Education policy, Educational Research, History of education, Teaching, Writing

Education and the Pursuit of Optimism

This post is about a 1975 paper by James G. March, which was published in, of all places, the Texas Tech Journal of Education.  Given that provenance, it’s something you likely have never encountered before unless someone actually handed it to you.  I used it in a number of my classes and wanted to share it with you.

March was a fascinating scholar who had a long and distinguished career as an organizational theorist, teaching at Carnegie-Mellon and later at the Stanford business and education schools. He died last year.  I had the privilege of getting to know him in retirement after I moved to Stanford.  He was the rare combination of cutting edge social scientist and ardent humanist, who among his other accomplishments published a half dozen volumes of poetry.

This paper shows both sides of his approach to issues.  In it he explores the role that education has played in the U.S., in particular its complex relationship with all-American optimism.  Characteristically, in developing his analysis, he relies not on social science data but on literature — among others, Tolstoy, Cervantes, Solzhenitsyn, and Borges.

I love how he frames the nature of teaching and learning in a way that is vastly distant from the usual language of social efficiency and human capital production — and also distant from the chipper American faith that education can fix everything.  A tragic worldview pervades his discussion, reflecting the perspective of the great works of literature upon which he draws.

I find his argument particularly salient for teachers, who have been the majority of my own students over the years.  It’s common for teachers to ask the impossible of themselves by trying to fulfill the promise that education will save all their students.  Too often the result is the feeling of failure and/or the fate of burnout.

Below I distill some of the core insights from this paper, but there is no substitute for reading and reveling in the original, which you can find here.

He starts out by asserting that “The modern history of American education is a history of optimism.”  The problem with this is that it blinds us to the limited ability of social engineering in general and education in particular to realize our greatest hopes.

By insisting that great action be justified by great hopes, we encourage a belief in the possibility of magic. For examples, read the litany of magic in the literature on free schools, Montessori, Head Start, Sesame Street, team teaching, open schools, structured schools, computer-assisted instruction, community control, and hot lunches. Inasmuch as there appears to be rather little magic in the world, great hopes are normally difficult to realize. Having been seduced into great expectations, we are abandoned to a choice between failure and delusion.

The temptations of delusion are accentuated both by our investment in hope and by the potential for ambiguity in educational outcomes. To a substantial extent we are able to believe whatever we want to believe, and we want to believe in the possibility of progress. We are unsure about what we want to accomplish, or how we would know when we had accomplished it, or how to allocate credit or blame for accomplishment or lack of it. So we fool ourselves.

The conversion of great hopes into magic, and magic into delusion describes much of modern educational history. It continues to be a dominant theme of educational reform in the United States. But there comes a time when the conversion does not work for everyone. As we come to recognize the political, sociological, and psychological dynamics of repeated waves of optimism based on heroic hopes, our willingness to participate in the process is compromised.

As an antidote to the problem, he proposes three paradoxical principles for action:  pessimism without despair; irrelevance without loss of faith; and optimism without hope.

Pessimism without despair:  This means embracing the essential connection between education and life, without expecting the most desirable outcome.  It is what it is.  The example is Solzhenitsyn’s character Shukov, learning to live in a prison camp.  The message is this:  Don’t set unreasonable expectations for what’s possible, defining anything else as failure.  Small victories in the classroom are a big deal.

Irrelevance without loss of faith:  This means recognizing that you can’t control events, so instead you do what you can wherever you are.  His example is General Kutuzov in War and Peace.  He won the war against Napoleon by continually retreating and by restraining his officers from attacking the enemy.  Making things happen is overrated.  There’s a lot the teacher simply can’t accomplish, and you need to recognize that.

Optimism without hope:  The aim here is to do what is needed rather than what seems to be effective.  His example is Don Quixote, a man who cuts a ridiculous figure by tilting at windmills, but who has a beneficial impact on everyone he encounters.  The message for teachers is that you set out to do what you think is best for your students, because it’s the right thing to do rather than because it is necessarily effective.  This is moral-political logic for schooling instead of the usual utilitarian logic.

So where does this leave you as a teacher, administrator, policymaker?

  • Don’t let anyone convince you that schooling is all about producing human capital, improving test scores, or pursuing any other technical and instrumentalist goal.

  • Its origins are political and moral: to form a nation state, build character, and provide social opportunity.

  • Teaching is not a form of social engineering, making society run more efficiently.

  • It’s not about fixing social problems, for which it is often ill suited.

  • Instead, it’s a normative practice organized around shaping the kind of people we want to be — about doing what’s right instead of what’s useful.

Posted in Academic writing, Writing

Academic Writing Issues #7 — Writing the Perfect Sentence

The art of writing ultimately comes down to the art of writing sentences.  In his lovely book, How to Write a Sentence, Stanley Fish explains that the heart of any sentence is not its content but its form.  The form is what defines the logical relationship between the various elements within the sentence.  The same formal set of relationships within a sentence structure can be filled with an infinite array of possible bits of content.  If you master the forms, he says, you will be able to harness them to your own aims in producing content.  His core counter-intuitive admonition is this:  “You shall tie yourself to forms and the forms shall set you free.”  Note the perfect form in Lewis Carroll’s nonsense poem Jabberwocky:

Twas brillig, and the slithy toves

Did gyre and gimble in the wabe;

All mimsy were the borogoves,

And the mome raths outgrabe.

I strongly recommend reading the book, which I used for years in my class on academic writing.  You’ll learn a lot about writing and you’ll also accumulate a lovely collection of stunning quotes.

Below is a piece Fish published in the New Statesman in 2011, which deftly summarizes the core argument in the book.  Enjoy.  Here’s a link to the original.


How to write the perfect sentence

Stanley Fish

Published 17 February 2011

In learning how to master the art of putting words together, the trick is to concentrate on technique and not content. Substance comes second.

Look around the room you’re sitting in. Pick out four items at random. I’m doing it now and my items are a desk, a television, a door and a pencil. Now, make the words you have chosen into a sentence using as few additional words as possible. For example: “I was sitting at my desk, looking at the television, when a pencil fell off and rolled to the door.” Or: “The television close to the door obscured my view of the desk and the pencil I needed.” Or: “The pencil on my desk was pointed towards the door and away from the television.” You will find that you can always do this exercise – and you could do it for ever.

That’s the easy part. The hard part is to answer this question: what did you just do? How were you able to turn a random list into a sentence? It might take you a little while but, in time, you will figure it out and say something like this: “I put the relationships in.” That is to say, you arranged the words so that they were linked up to the others by relationships of cause, effect, contiguity, similarity, subordination, place, manner and so on (but not too far on; the relationships are finite). Once you have managed this – and you do it all the time in speech, effortlessly and unselfconsciously – hitherto discrete items participate in the making of a little world in which actors, actions and the objects of actions interact in ways that are precisely represented.

This little miracle you have performed is called a sentence and we are now in a position to define it: a sentence is a structure of logical relationships. Notice how different this is from the usual definitions such as, “A sentence is built out of the eight parts of speech,” or, “A sentence is an independent clause containing a subject and a predicate,” or, “A sentence is a complete thought.” These definitions seem like declarations out of a fog that they deepen. The words are offered as if they explained everything, but each demands an explanation.

When you know that a sentence is a structure of logical relationships, you know two things: what a sentence is – what must be achieved for there to be focused thought and communication – and when a sentence that you are trying to write goes wrong. This happens when the relationships that allow sense to be sharpened are missing or when there are too many of them for comfort (a goal in writing poetry but a problem in writing sentences). In such cases, the components of what you aspired to make into a sentence stand alone, isolated; they hang out there in space and turn back into items on a list.

Armed with this knowledge, you can begin to look at your own sentences and those of others with a view to discerning what is successful and unsuccessful about them. As you do this, you will be deepening your understanding of what a sentence is and introducing yourself to the myriad ways in which logical structures of verbal thought can be built, unbuilt, elaborated upon and admired.

My new book, How to Write a Sentence, is a light-hearted manual of instruction designed to teach you how to do these things – how to write a sentence and how to appreciate in analytical detail the sentences produced by authors who knock your socks off. These two aspects – lessons in sentence craft and lessons in sentence appreciation – reinforce each other; the better able you are to appreciate great sentences, the closer you are to being able to write one. An intimate knowledge of what makes sentences work is one prerequisite for writing them.

Consider the first of those aspects – sentence craft. The chief lesson here is: “It’s not the thought that counts.” By that, I mean that skill in writing sentences is a matter of understanding and mastering form not content. The usual commonplace wisdom is that you have to write about something, but actually you don’t. The exercise I introduced above would work even if your list was made up of nonsense words, as long as each word came tagged with its formal identification – actor, action, object of action, modifier, conjunction, and so on. You could still tie those nonsense words together in ligatures of relationships and come up with perfectly formed sentences like Noam Chomsky’s “Colourless green ideas sleep furiously,” or the stanzas of Lewis Carroll’s “Jabberwocky”.

If what you want to do is become facile (in a good sense) in producing sentences, the sentences with which you practise should be as banal and substantively inconsequential as possible; for then you will not be tempted to be interested in them. The moment that interest comes to the fore, the focus on craft will be lost. (I know that this sounds counter-intuitive, but stick with me.)

I call this the Karate Kid method of learning to write. In that 1984 cult movie (recently remade), the title figure learns how to fight not by participating in a match but by repeating (endlessly and pointlessly, it seems to him) the purely formal motions of waxing cars and painting fences. The idea is that when you are ready either to compete or to say something that is meaningful and means something to you, the forms you have mastered and internalised will generate the content that would have remained inchoate (at best) without them.

These points can be illustrated with sentences that are too good to be tossed aside. In the book, I use them to make points about form, but I can’t resist their power or the desire to explain it. When that happens, content returns to my exposition and I shift into full appreciation mode, caressing these extraordinary verbal productions even as I analyse them. I become like a sports commentator, crying, “Did you see that?” or “How could he have pulled that off?” or “How could she keep it going so long and still not lose us?” In the end, the apostle of form surrenders to substance, or rather, to the pleasure of seeing substance emerge through the brilliant deployment of forms.

As a counterpoint to that brilliance, let me hazard an imitation of two of the marvels I discuss. Take Swift’s sublimely malign sentence, “Last week I saw a woman flayed and you will hardly believe how much it altered her person for the worse.” And then consider this decidedly lame imitation: “Last night I ate six whole pizzas and you would hardly believe how sick I was.”

Or compare John Updike’s description in the New Yorker of the home run that the baseball player Ted Williams hit on his last at-bat in 1960 – “It was in the books while it was still in the sky” – to “He had won the match before the first serve.” My efforts in this vein are lessons both in form and humility.

The two strands of my argument can be brought together by considering sentences that are about their own form and unfolding; sentences that meditate on or burst their own limitations, and thus remind us of why we have to write sentences in the first place – we are mortal and finite – and of what rewards may await us in a realm where sentences need no longer be fashioned. Here is such a sentence by the metaphysical poet John Donne:

If we consider eternity, into that time never entered; eternity is not an everlasting flux of time, but time is a short parenthesis in a long period; and eternity had been the same as it is, though time had never been.

The content of the sentence is the unreality of time in the context of eternity, but because a sentence is necessarily a temporal thing, it undermines that insight by being. (Asserting in time the unreality of time won’t do the trick.) Donne does his best to undermine the undermining by making the sentence a reflection on its fatal finitude. No matter how long it is, no matter what its pretension to a finality of statement, it will be a short parenthesis in an enunciation without beginning, middle or end. That enunciation alone is in possession of the present – “is” – and what the sentence comes to rest on is the declaration of its already having passed into the state of non-existence: “had never been”.

Donne’s sentence is in my book; my analysis of it is not. I am grateful to the New Statesman for the opportunity to produce it and to demonstrate once again the critic’s inadequacy to his object.

Stanley Fish is Davidson-Kahn Distinguished University Professor of Humanities and Law at Florida International University. His latest book is “How to Write a Sentence: and How to Read One” (HarperCollins, £12.99)

https://www.newstatesman.com/books/2011/02/write-sentence-comes

 

 

Posted in Academic writing, Writing

Academic Writing Issues #5 — Failing to Use Dynamic Verbs

Many people have complained that academic writers are addicted to the passive voice, doing anything to avoid using the first person:  “Data were gathered.”  I wonder who did that?  But in some ways a bigger problem is that we refuse to use the kind of dynamic verbs that can energize our stories and drive the argument forward.  Below is a lovely piece by Constance Hale, originally published in 2012 as part of Draft, the New York Times series on writing.  In it she explains the difference between static verbs and power verbs.  Yes, she says, static verbs have their uses; but when we rely too heavily on them, we drain all energy, urgency, and personality from our authorial voices.  We can also end up lulling our readers to sleep.

She gives us some excellent examples about how we can use the full array of verbs at our disposal to tell compelling, nuanced, and engaging stories.  Enjoy.

Here’s a link to the original version.

 

New York Times

APRIL 16, 2012, 9:00 PM

Make-or-Break Verbs

By CONSTANCE HALE

Draft is a series about the art and craft of writing.

This is the third in a series of writing lessons by the author.

A sentence can offer a moment of quiet, it can crackle with energy or it can just lie there, listless and uninteresting.

What makes the difference? The verb.

Verbs kick-start sentences: Without them, words would simply cluster together in suspended animation. We often call them action words, but verbs also can carry sentiments (love, fear, lust, disgust), hint at cognition (realize, know, recognize), bend ideas together (falsify, prove, hypothesize), assert possession (own, have) and conjure existence itself (is, are).

Fundamentally, verbs fall into two classes: static (to be, to seem, to become) and dynamic (to whistle, to waffle, to wonder). (These two classes are sometimes called “passive” and “active,” and the former are also known as “linking” or “copulative” verbs.) Static verbs stand back, politely allowing nouns and adjectives to take center stage. Dynamic verbs thunder in from the wings, announcing an event, producing a spark, adding drama to an assembled group.

Static Verbs
Static verbs themselves fall into several subgroups, starting with what I call existential verbs: all the forms of to be, whether the present (am, are, is), the past (was, were) or the other more vexing tenses (is being, had been, might have been). In Shakespeare’s “Hamlet,” the Prince of Denmark asks, “To be, or not to be?” when pondering life-and-death questions. An aging King Lear uses both is and am when he wonders about his very identity:

“Who is it that can tell me who I am?”

Jumping ahead a few hundred years, Henry Miller echoes Lear when, in his autobiographical novel “Tropic of Cancer,” he wanders in Dijon, France, reflecting upon his fate:

“Yet I am up and about, a walking ghost, a white man terrorized by the cold sanity of this slaughter-house geometry. Who am I? What am I doing here?”

Drawing inspiration from Miller, we might think of these verbs as ghostly verbs, almost invisible. They exist to call attention not to themselves, but to other words in the sentence.

Another subgroup is what I call wimp verbs (appear, seem, become). Most often, they allow a writer to hedge (on an observation, description or opinion) rather than commit to an idea: Lear appears confused. Miller seems lost.

Finally, there are the sensing verbs (feel, look, taste, smell and sound), which have dual identities: They are dynamic in some sentences and static in others. If Miller said I feel the wind through my coat, that’s dynamic. But if he said I feel blue, that’s static.

Static verbs establish a relationship of equals between the subject of a sentence and its complement. Think of those verbs as quiet equals signs, holding the subject and the predicate in delicate equilibrium. For example, I, in the subject, equals feel blue in the predicate.

Power Verbs
Dynamic verbs are the classic action words. They turn the subject of a sentence into a doer in some sort of drama. But there are dynamic verbs — and then there are dynamos. Verbs like has, does, goes, gets and puts are all dynamic, but they don’t let us envision the action. The dynamos, by contrast, give us an instant picture of a specific movement. Why have a character go when he could gambol, shamble, lumber, lurch, sway, swagger or sashay?

Picking pointed verbs also allows us to forgo adverbs. Many of these modifiers merely prop up a limp verb anyway. Strike speaks softly and insert whispers. Erase eats hungrily in favor of devours. And whatever you do, avoid adverbs that mindlessly repeat the sense of the verb, as in circle around, merge together or mentally recall.

This sentence from “Tinkers,” by Paul Harding, shows how taking time to find the right verb pays off:

“The forest had nearly wicked from me that tiny germ of heat allotted to each person….”

Wick is an evocative word that nicely gets across the essence of a more commonplace verb like sucked or drained.

Sportswriters and announcers must be masters of dynamic verbs, because they endlessly describe the same thing while trying to keep their readers and listeners riveted. We’re not just talking about a player who singles, doubles or homers. We’re talking about, as announcers described during the 2010 World Series, a batter who “spoils the pitch” (hits a foul ball), a first baseman who “digs it out of the dirt” (catches a bad throw) and a pitcher who “scatters three singles through six innings” (keeps the hits to a minimum).

Imagine the challenge of writers who cover races. How can you write about, say, all those horses hustling around a track in a way that makes a single one of them come alive? Here’s how Laura Hillenbrand, in “Seabiscuit,” described that horse’s winning sprint:

“Carrying 130 pounds, 22 more than Wedding Call and 16 more than Whichcee, Seabiscuit delivered a tremendous surge. He slashed into the hole, disappeared between his two larger opponents, then burst into the lead… Seabiscuit shook free and hurtled into the homestretch alone as the field fell away behind him.”

Even scenes that at first blush seem quiet can bristle with life. The best descriptive writers find a way to balance nouns and verbs, inertia and action, tranquillity and turbulence. Take Jo Ann Beard, who opens the short story “Cousins” with static verbs as quiet as a lake at dawn:

“Here is a scene. Two sisters are fishing together in a flat-bottomed boat on an olive green lake….”

When the world of the lake starts to awaken, the verbs signal not just the stirring of life but crisp tension:

“A duck stands up, shakes out its feathers and peers above the still grass at the edge of the water. The skin of the lake twitches suddenly and a fish springs loose into the air, drops back down with a flat splash. Ripples move across the surface like radio waves. The sun hoists itself up and gets busy, laying a sparkling rug across the water, burning the beads of dew off the reeds, baking the tops of our mothers’ heads.”

Want to practice finding dynamic verbs? Go to a horse race, a baseball game or even a walk-a-thon. Find someone to watch intently. Describe what you see. Or, if you’re in a quiet mood, sit on a park bench, in a pew or in a boat on a lake, and then open your senses. Write what you see, hear and feel. Consider whether to let your verbs jump into the scene or stand by patiently.

Verbs can make or break your writing, so consider them carefully in every sentence you write. Do you want to sit your subject down and hold a mirror to it? Go ahead, use is. Do you want to plunge your subject into a little drama? Go dynamic. Whichever you select, give your readers language that makes them eager for the next sentence.

Constance Hale, a journalist based in San Francisco, is the author of “Sin and Syntax” and the forthcoming “Vex, Hex, Smash, Smooch.” She covers writing and the writing life at sinandsyntax.com.

Posted in Academic writing, Writing

Academic Writing Issues #4 — Failing to Listen for the Music

All too often, academic writing is tone deaf to the music of language.  Just as we tend to consider unprofessional any writing that is playful, engaging, funny, or moving, so too with writing that is musical.  A professional monotone is the scholar’s voice of choice.  This stance leads to two big problems.  One is that it puts off the reader, exactly the person we should be trying to draw into our story.  Why so easily abandon one of the great tools of effective rhetoric?  Another is that it alienates academic writers from their own words, forcing them to adopt the generic voice of the pedant rather than the particular voice of the person who is the author.

For better or for worse — usually for worse — we as scholars are contributing to the literary legacy of our culture, so why not do so in a way that sometimes sings, or at least doesn’t end on a false note?  Speaking of which, consider a quote from one of the masters of English prose, Abraham Lincoln, from the last paragraph of his first inaugural address.  Picture him talking at the brink of the nation’s most terrible war, and then listen to his melodic phrasing:

I am loath to close. We are not enemies, but friends. We must not be enemies. Though passion may have strained, it must not break our bonds of affection. The mystic chords of memory, stretching from every battlefield, and patriot grave, to every living heart and hearthstone, all over this broad land, will yet swell the chorus of the Union, when again touched, as surely they will be, by the better angels of our nature.

In the English language, there are two rhetorical storehouses that for centuries have grounded writers like Lincoln — Shakespeare and the King James Bible.  Both are compulsively quotable, and both provide models for how to combine meaning and music in the way we write.

Take a look at this lovely piece by Ann Wroe, an appreciation of the music of the King James Bible, which makes all the other translations sound tone deaf.

Published in the Economist

March 30, 2011

IN THE BEGINNING WAS THE SOUND

By Ann Wroe

Bible

The King James Bible is 400 years old this year, and the music of its sentences is still ringing out. But what exactly made it so good? Ann Wroe gives chapter and verse…

Like many Catholics, I came late to the King James Bible. I was schooled in the flat Knox version, and knew the beautiful, musical Latin Vulgate well before I was introduced to biblical beauty in my own tongue. I was around 20, sitting in St John’s College Chapel in Oxford in the glow of late winter candlelight, though that fond memory may be embellished a little. A reading from the King James was given at Evensong. The effect was extraordinary: as if I had suddenly found, in the house of language I had loved and explored all my life, a hidden central chamber whose pillars and vaulting, rhythm and strength had given shape to everything around them.

The King James now breathes venerability. Even online it calls up crammed, black, indented fonts, thick rag paper and rubbed leather bindings—with, inside the heavy cover, spidery lists of family ancestors begotten long ago. To read it is to enter a sort of communion with everyone who has read or listened to it before, a crowd of ghosts: Puritan women in wide white collars, stern Victorian fathers clasping their canes, soldiers muddy from killing fields, serving girls in Sunday best, and every schoolboy whose inky fingers have burrowed to 2 Kings 18: 27, where Rabshakeh says, “Hath my master not sent me to the men which sit on the wall, that they may eat their own dung, and drink their own piss with you?”

When it appeared, moreover, it was already familiar, in the sense that it borrowed freely from William Tyndale’s great translation of a century before. Deliberately, and with commendable modesty, the members of King James’s translation committees said they did not seek “to make a new translation, nor yet to make of a bad one a good one, but to make a good one better”. What exactly they borrowed and where they improved is a detective job for scholars, not for this piece. So where it mentions “translators” Tyndale is included among them, the original and probably the best; for this book still breathes him, as much as them.

In both his time and theirs this was a modern translation, the living language of streets, docks, workshops, fields. Ancient Israel and Jacobean England went easily together. The original writers of the books of the Old Testament knew about pruning trees, putting on armour, drawing water, the readying of horses for battle and the laying of stones for a wall; and in the King James all these activities are still evidently familiar, the jargon easy, and the language light. “Yet man is born unto trouble, as the sparks fly upward”, runs the wonderful phrase in Job 5: 7, and we are at a blacksmith’s door in an English village, watching hammer strike anvil, or kicking a rolling log on our own cottage hearth. “Hard as a piece of the nether millstone” brings the creak of a 17th-century mill, as well as the sweat of more ancient hands. In both worlds, “seedtime and harvest” are real seasons. This age-old continuity comforts us, even though we no longer know or share it.

By the same token, the reader of the King James lives vicariously in a world of solid certainties. There is nothing quaint here about a candle or a flagon, or money in a tied leather purse; nothing arcane about threads woven on a handloom, mire in the streets or the snuffle of swine outside the town gates. This is life. Everything is closely observed, tactile, and has weight. When Adam and Eve sew fig-leaves together to cover their shame they make “aprons” (Genesis 3: 7), leather-thick and workmanlike, the sort a cobbler might wear. Even the colours invoked in the King James—crimson, scarlet, purple—are nouns rather than adjectives (“though your sins be as scarlet”, Isaiah 1: 18), sold by the block as solid powder or heaped glossy on a brush. And God’s intervention in this world, whether as artist, builder, woodsman or demolition man, is as physical and real as the materials he works with.

English, of course, was richer in those days, full of neesings and axletrees, habergeons and gazingstocks, if indeed a gazingstock has a plural. Modern skin has spots: the King James gives us botches, collops and blains, horridly and lumpily different. It gives us curious clutter, too, a whole storehouse of tools and knick-knacks whose use is now half-forgotten—snuff-dishes, besoms, latchets and gins, and fashions seemingly more suited to a souped-up motor than to the daughters of Jerusalem:

The chains, and the bracelets, and the mufflers,
The bonnets, and the ornaments of the legs, and the
headbands, and the tablets, and the earrings,
The rings, and nose jewels,
The changeable suits of apparel, and the mantles, and the
wimples, and the crisping pins…  (Isaiah 3: 19-22)

“Crisping pins” have now been swallowed up (in the Good News version) in “fine robes, gowns, cloaks and purses”. And so we have lost that sharp, momentary image of varnished nails pushing pins into unruly frizzes of hair, and lipsticked mouths pursed in concentration, as the daughters of Zion prepare to take on the town. These women are “froward”, a word that has been lost now, but which haunts the King James like a strutting shadow with a shrill, hectoring voice. Few lines are longer drawn out, freighted with sighs, than these from Proverbs 27: 15: “A continual dropping in a very rainy day and a contentious woman are alike.”

Other characters cause trouble, too. In the King James, people are aggressively physical. They shoot out their lips, stretch forth their necks and wink with their eyes; they open their mouths wide and say “Aha, aha”, wagging their heads, in ways that would get them arrested in Wal-Mart. They do not simply refuse to listen, but pull away their shoulders and stop their ears; they do not merely trip, but dash their feet against stones. Sex is peremptory: men “know” women, lie with them, “go in unto” them, as brisk as the women are available. “Begat” is perhaps the word the King James is best known for, list after list of begetting. The curt efficiency of the word (did no one suggest “fathered”?) makes the erotic languor of the Song of Solomon, with its lilies and heaps of wheat, shine out like a jewel.

The world in which these things happen has a particular look and feel that comes not just from the original authors, but often from the translators and the words they favoured. Mystery colours much of it. They like “lurking places of the villages” (Psalms 10: 8), “secret places of the stairs” (Song of Solomon 2:14), and things done “privily”, or “close”. God hides in “pavilions” that seem as mysterious as the shifting dunes of the desert, or the white flapping tents of the clouds. The word “creeping” is used everywhere to suggest that something lives; very little moves fast here, and heads and bellies are bent close to the earth. Even flying is slow, through the thick darkness. People go forth abroad, and waters come down from above, with considerable effort, as though through slowly opening layers. Elements are divided into their constituent parts: the waters of the sea, a flame of fire. A rainbow curves brightly away from the astonished, struggling observer, “in sight like unto an emerald” (Revelation 4: 3). But the grandeur of the language gives momentousness even to the corner of a room, a drain running beside a field, a patch of abandoned ground:

I went by the field of the slothful, and by the vineyard of the
man void of understanding;
And lo, it was all grown over with thorns, and nettles had
covered the face thereof, and the stone wall thereof was
broken down.
Then I saw, and considered it well; I looked upon it, and
received instruction.
Yet a little sleep, a little slumber, a little folding of the hands
to sleep…  (Proverbs 24: 30-33)

In such places shepherds “abide” with their sheep, motionless as figures made of stone. This landscape is carved broad and deep, like a woodcut, with sharply folded mountains, thick woven water, stylised trees and cities piled and blocked as with children’s bricks (all the better to be scattered by God later, no stone upon another). A sense of desolation haunts these streets and gates, echoing and shelterless places in which even Wisdom runs wild and cries. Yet within them sometimes we find a scene paced as tensely as in any modern novel, as when a young man in Proverbs steps out,

Passing through the street near her corner; and he went the
way to her house,
In the twilight, in the evening, in the black and dark night:
And, behold, there met him a woman with the attire of an
harlot, and subtil of heart.  (Proverbs 7: 8-10)

Just as stained glass shines more brightly for being set in stone, so the King James gains in splendour by comparison with the Revised Standard, Good News, New International and Heaven-knows-what versions that have come later. Thus John’s magnificent “The Word was with God, and the Word was God” (John 1: 1), has become “The Word was already existing”, scholarship usurping splendour. That lilting line in Genesis (1: 8), “And the evening and the morning were the second day” (note that second “the”, so apparently expendable, yet so necessary to the music) becomes “There was morning, and there was evening”, a broken-backed crawl. The fig-leaf aprons are now reduced to “coverings for themselves”. And the garden planted “eastward in Eden” (Genesis 2: 8), another of the King James’s myriad and scarcely conscious touches of grace, has become “to the east, in Eden”, a place from which the magic has drained away.

Everywhere modern translations are more specific, doubtless more accurate, but always less melodious. The King James, deeply scholarly as it is, displaying the best learning of the day, never forgets that the word of God must be heard, understood and retained by the simple. For them—children repeating after the teacher, workers fidgeting in their best clothes, Tyndale’s own whistling ploughboy—rhythm and music are the best aids to remembering. This is language not for silent study but for reading and declaiming aloud. It needs to work like poetry, and poetry it is.

The King James is famous for its monosyllables, great drumbeats of description or admonition: “And God said, Let there be light: and there was light” (Genesis 1: 3); “The fool hath said in his heart, There is no God” (Psalms 14: 1); “In the sweat of thy face shalt thou eat bread” (Genesis 3: 19). These are fundaments, bases, bricks to build with. Yet its rhythms are also far cleverer than that, endlessly and subtly adjusted. Typically, a King James sentence has two parts broken by a pause around the mid-point, with the first part slightly more declaratory and the second slightly more explanatory: the stronger syllables massed towards the beginning, the weaker crowding softly towards their end. “Surely there is a vein for the silver, and a place for gold where they fine it” (Job 28: 1); “He buildeth his house as a moth, and as a booth that the keeper maketh” (Job 27: 18). But sometimes the order is inverted, and the words too: “As the bird by wandering, as the swallow by flying, so the curse causeless shall not come” (Proverbs 26: 2); “Out of the south cometh the whirlwind: and cold out of the north” (Job 37: 9). Perhaps the whirlwind itself has disordered things. This contrapuntal system even allows for a bit of bathos and fun: “Divers weights are an abomination unto the lord; and a false balance is not good” (Proverbs 20: 23).

Certain devices were available then which modern writers may well envy. The old English language allowed rhythms and syncopations that cannot be employed any more. Consider the use of “even”, dropped in with an almost casual flourish: “And the stars of heaven fell unto the earth, even as a fig tree casteth her untimely figs, when she is shaken of a mighty wind” (Revelation 6: 13). Or “neither”, used in the same way: “Many waters cannot quench love, neither can the floods drown it” (Song of Solomon 8: 7). Modern translations separate those two thoughts, but the beauty lies in their conjunction with a word as light as air.

Undoubtedly the King James has been enhanced for us by the music that now curls round it. “For unto us a child is born” (Isaiah 9: 6) can’t now be read without Handel’s tripping chorus, or “Man that is born of a woman” without Purcell’s yearning melancholy (“He cometh forth like a flower, and is cut down” Job 14: 2). Even “To every thing there is a season”, from Ecclesiastes (3: 1), is now overlaid with the nasal, gently stoned tones of Simon & Garfunkel. Yet the King James also lured these musicians in the beginning, snaring them with stray lines that were already singing. “Stay me with flagons, comfort me with apples, for I am sick of love” (Song of Solomon 2: 5). “Thou hast heard me from the horns of the unicorns” (Psalms 22: 21). “The heavens declare the glory of God; and the firmament sheweth his handywork” (Psalms 19: 1). “I am a brother to dragons, and a companion to owls” (Job 30: 29). Or this, also from the Book of Job, possibly the most beautiful of all the Bible’s books—a passage that flows from one astonishingly random and sudden question, “Hast thou entered into the treasures of the snow?” (Job 38:22):

Hath the rain a father? Or who hath begotten the drops of
dew?
Out of whose womb came the ice? And the hoary frost of
heaven, who hath gendered it?
The waters are hid as with a stone, and the face of the deep
is frozen.
Canst thou bind the sweet influences of Pleiades, or loose
the bands of Orion?  (Job 38:28-31)

The beauty of this is inherent, deep in the original mind and eye that formed it. But again, the translators have made choices here: “hid” rather than “hidden”, “gendered” rather than “engendered”, all for the very best rhythmic reasons.  We can trust them; we know that they would certainly have employed “hidden” and “engendered” if the music called for it. Unfailingly, their ear is sure. And if we suspect that rhythm sometimes matters more than meaning, that is fine too: it leaves space for the sacred and numinous, that which cannot be grasped, that which lies beyond all words, to move within the lines.

That subtle notion of divinity, however, is seldom uppermost in the Old Testament. This God smites a lot. Three close-printed columns of Young’s Concordance are filled with his smiting, lightly interspersed with other people’s. Mere men use hand weapons, bows and arrows, or, with Jacobean niftiness, “the edge of the sword”; but the God of the King James simply smites, whether Moabites or Jebusites, vines or rocks or first-born, like a broad, bright thunderbolt. No other word could be so satisfactory, the opening consonants clenched like a fist that propels God’s anger down, and in, and on. We know that these are tough workman’s hands: this is the God who “stretcheth out the north over the empty place, and hangeth the earth upon nothing” (Job 26: 7). Smiting must have survived after the King James; but perhaps it was now so soft with over-use, so bruised, that it faded out of the language.

This God surprises, too. He “hisses unto” people, perhaps a cross between a whistle and a whoop, as if marshalling a yard of hens. God goes before, “preventing” us; he whips off our disguises, our clothes or our leaves, “discovering” us, and the shock of the original meanings of those words alerts us to the origins of power itself. “Who can stay the bottles of heaven?” cries a voice in Job 38: 37; and we suspect God again, like a teenage yob this time, lurking in his pavilion of cloud.

At moments like this it also seems that the translators themselves might be mystified, fingers scratching neat beards while they survey the incomprehensible words. Did they really understand, for example, that odd medical diagnosis in Proverbs: “The blueness of a wound cleanseth away evil; so do stripes the inward parts of the belly” (20: 30)? Or these lines from the last chapter of Ecclesiastes, the mystifying staple of so many funerals?

…they shall be afraid of that which is high, and fears shall
be in the way, and the almond tree shall flourish, and
the grasshopper shall be a burden, and desire shall fail;
because man goeth to his long home, and the mourners
go about the streets:
Or ever the silver cord be loosed, or the golden bowl be
broken, or the pitcher be broken at the fountain, or the
wheel broken at the cistern.
Then shall the dust return to the earth as it was… (Ecclesiastes 12: 5-7)

These are surreal images, unlikely litter in the fields and streets; but they are made all the more potent by the heavy phrasing, the inevitability of the building lines, and the conscious repetition, broken, broken, broken. We know our translators have plenty of synonyms up their sleeves. They choose not to use them. When these lines are read, though we barely know what they mean, they spell despair. And they are meant to, as the man in the pulpit in a moment reminds us:

Vanity of vanities, saith the preacher; all is vanity (Ecclesiastes 12: 8)

Yet often, too, a spirit of playfulness seems to be at work. Consider, lastly, the rain. This is ordinary rain most of the time, malqosh in the Hebrew, and all modern translations make it so. But in the King James we also have “the latter rain”, and “small rain”, and we are alerted to their delicacy and difference. Small rain (“as the small rain upon the tender herb” Deuteronomy 32: 2) is presumably the sort that blows in the air, that makes no imprint on a puddle; the Irish would call it a soft day. And latter rain, perhaps, is the sort that skulks at the end of an afternoon, or suddenly cascades down in an autumn gust, or patters for a desultory few minutes after a day of approaching thunder—and then we open our mouths wide to it, laughing, grateful, as for the word of God.
Ann Wroe is obituaries and briefings editor of The Economist and author of “Being Shelley”

 

Posted in Academic writing, Writing

Academic Writing Issues #3 — Failing to Tell a Story

Good writers tell stories.  This is just as true for academic writers as for novelists and journalists.  The story needs actors and actions, and it needs to flow.  A sentence is a mini-story.  Each sentence needs to flow into the next, and so does each paragraph.  When readers finish your paper, they need to be able to tell themselves and others what your story is.  If they can’t, you haven’t succeeded in drawing them into that story.

Watch how Constance Hale explains how to tell a story in every sentence you write.  It’s a piece from Draft, the New York Times series on writing, from 2012.

New York Times

MARCH 19, 2012, 9:30 PM

The Sentence as a Miniature Narrative

By Constance Hale

I like to imagine a sentence as a boat. Each sentence, after all, has a distinct shape, and it comes with something that makes it move forward or stay still — whether a sail, a motor or a pair of oars. There are as many kinds of sentences as there are seaworthy vessels: canoes and sloops, barges and battleships, Mississippi riverboats and dinghies all-too-prone to leaks. And then there are the impostors, flotsam and jetsam — a log heading downstream, say, or a coconut bobbing in the waves without a particular destination.

My analogy seems simple, but it’s not always easy to craft a sentence that makes heads turn with its sleekness and grace. And yet the art of sentences is not really a mystery.

Over the course of several articles, I will give you the tools to become a sentence connoisseur as well as a sentence artisan. Each of my lessons will give you the insight to appreciate fine sentences and the vocabulary to talk about them.

*

At some point in our lives, early on, maybe in grade school, teachers give us a pat definition for a sentence — “It begins with a capital letter, ends with a period and expresses a complete thought.” We eventually learn that that period might be replaced by another strong stop, like a question mark or an exclamation point.

But that definition misses the essence of sentencehood. We are taught about the sentence from the outside in, about the punctuation first, rather than the essential components. The outline of our boat, the meaning of our every utterance, is given form by nouns and verbs. Nouns give us sentence subjects — our boat hulls. Verbs give us predicates — the forward momentum, the twists and turns, the abrupt stops.

For a sentence to be a sentence we need a What (the subject) and a So What (the predicate). The subject is the person, place, thing or idea we want to express something about; the predicate expresses the action, condition or effect of that subject. Think of the predicate as a predicament — the situation the subject is in.

I like to think of the whole sentence as a mini-narrative. It features a protagonist (the subject) and some sort of drama (the predicate): The searchlight sweeps. Harvey keeps on keeping on. The drama makes us pay attention.

Let’s look at some opening lines of great novels to see how the sentence drama plays out. Notice the subject in each of the following sentences. It might be a simple noun or pronoun, a noun modified by an adjective or two, or something even more complicated:

“They shoot the white girl first.” — Toni Morrison, “Paradise”

“Stately, plump Buck Mulligan came from the stairhead, bearing a bowl of lather on which a mirror and a razor lay crossed.” — James Joyce, “Ulysses”

“The Miss Lonelyhearts of the New York Post-Dispatch (Are-you-in-trouble? — Do-you-need-advice? — Write-to-Miss-Lonelyhearts-and-she-will-help-you) sat at his desk and stared at a piece of white cardboard.” — Nathanael West, “Miss Lonelyhearts”

Switching to the predicate, remember that it is everything that is not the subject. In addition to the verb, it can contain direct objects, indirect objects, adverbs and various kinds of phrases. More important, the predicate names the predicament of the subject.

“Elmer Gantry was drunk.” — Sinclair Lewis, “Elmer Gantry”

“Every summer Lin Kong returned to Goose Village to divorce his wife, Shuyu.” — Ha Jin, “Waiting”

There are variations, of course. Sometimes the subject is implied rather than stated, especially when the writer uses the imperative mood:

“Call me Ishmael.” — Herman Melville, “Moby Dick”

And sometimes there is more than one subject-predicate pairing within a sentence:

“We started dying before the snow, and like the snow, we continued to fall.” — Louise Erdrich, “Tracks”

One way to get the hang of such mini-narratives is to gently imitate great one-liners. Try taking each one of the sentences above and plugging in your own subjects and predicates, just to sense the way that nouns and verbs form little stories.

Another way to experiment with subjects and predicates is to write your epitaph — either seriously or in jest. The editors of SmithMagazine challenged their readers to put their lives into six words and have published the best results. Here are some examples of Six-Word Memoirs that do the subject-predicate tango:

“Told to Marry Rich, married Richard.” (JMorris)

“My parents should’ve kept their receipt.” (SarahBeth)

When a sentence lacks one of its two essential parts, it is called a sentence fragment. Like the flotsam I mentioned earlier, fragments are adrift, without clear direction or purpose.

Playing with sentence fragments can be fun — the best copywriters use them for memorable advertising slogans (Alka-Seltzer’s “Plop plop, fizz fizz”). But there are plenty of competing Madison Avenue slogans to convince you that a full sentence registers equally well — from Esso’s “Put a tiger in your tank” to The Heublein Company’s “Pardon me, would you have any Grey Poupon?” While sentence fragments can be witty, they are still shards of thoughts, better suited to hawking antacids than to penning the Great American Novel or earnestly attempting to put inchoate thoughts into indelible words.

If sentence fragments are like flotsam, a profusion of subjects is like jetsam. Too many subjects thrown in can cause a passage to become muddy. We are especially prone to losing control of our subjects when we speak. Take these off-the-cuff remarks by President George Bush at a 1988 Milwaukee campaign stop around Halloween:

“We had last night, last night we had a couple of our grandchildren with us in Kansas City — 6-year-old twins, one of them went as a package of Juicy Fruit, arms sticking out of the pack, the other was Dracula. A big rally there. And Dracula’s wig fell off in the middle of my speech and I got to thinking, watching those kids, and I said if I could look back and I had been president for four years: What would you like to do? Those young kids here. And I’d love to be able to say that working with our allies, working with the Soviets, I’d found a way to ban chemical and biological weapons from the face of the earth.”

As the subjects in those sentences keep shifting — from we to twins, one of them, the other, we (implied), wig, I, I, I, you, kids, I, and I — his message keeps shifting, too. Mr. Bush’s speechwriter, Peggy Noonan, has written that the president was “allergic to I.” He seemed to feel uncomfortable calling attention to himself, so he performed what Noonan called “I-ectomies” in his speeches.

Vice President Joseph R. Biden Jr. may not share Mr. Bush’s aversion to I, but a sentence from his 2008 vice-presidential debate shows how he, too, could lose track of his subjects:

“If you need any more proof positive of how bad the economic theories have been, this excessive deregulation, the failure to oversee what was going on, letting Wall Street run wild, I don’t think you needed any more evidence than what you see now.”

Biden not only shifts from you to I and back to you again, he throws three sentence fragments into the middle of his sentence, each featuring a different subject.

Syntax gets a lot more complicated than subjects and predicates, but understanding the relationship between the hull and the sail, the What and the So What, is the first step in mastering the dynamics of a sentence. In future weeks we’ll delve into more ways you can play with subjects and predicates, but first, in the next few lessons I will write, we’ll explore the raw materials of sentence-building: nouns, adjectives and verbs.

*

Just as there is no one perfect boat, there is no one perfect sentence structure. Mark Twain wrote sentences that were as humble, sturdy and American as a canoe; William Faulkner wrote sentences as gaudy as a Mississippi riverboat. But no matter the atmospherics, the best sentences bolt a clear subject to a dramatic predicate, making a mini-narrative. Tell us your favorite sentences from literature in the comments section below, and identify the subject and the predicate. We’ll publish some of the best ones in Draft later this week.

Constance Hale, a journalist based in San Francisco, is the author of “Sin and Syntax” and the forthcoming “Vex, Hex, Smash, Smooch.” She covers writing and the writing life at sinandsyntax.com.

 

Posted in Academic writing, Educational Research, Writing

Academic Writing Issues #2: Zombie Nouns

One of the most prominent and dysfunctional traits of academic writing is its heavy reliance on what Helen Sword, in the piece below, calls “zombie nouns.”  These are cases when the writer takes an agile verb or adjective or noun and transforms it into a more imposing noun with lead feet.  Just add the proper suffix to a simple word and you too can produce a term that looks thoroughly academic.  Visualize becomes visualization; collective becomes collectivity; institution becomes institutionalization.  The technical term for this, which is itself a case in point, is nominalization.  In limited numbers, these words can be useful in capturing an idea, but when they proliferate they can suck the life out of a text and drive the reader to, well, distraction.

For academic writers, the lure of these terms is that they allow you to display your mastery of professional jargon.  But the cost — in loss of verve, clarity, and grace — is very high.

Watch how she makes her case in this piece from Draft, the New York Times series on writing from a few years back.

After you’ve read it, try analyzing one of your own texts (or a random journal article) using her Writer’s Diet test. It will tell you how flabby or fit the writing is.  This is a bit humbling.  But you’ll have the pleasure of seeing how badly the work of your esteemed senior colleagues fares in the same analysis.

 

Zombie Nouns

By Helen Sword

July 23, 2012

Take an adjective (implacable) or a verb (calibrate) or even another noun (crony) and add a suffix like ity, tion or ism. You’ve created a new noun: implacability, calibration, cronyism. Sounds impressive, right?

Nouns formed from other parts of speech are called nominalizations. Academics love them; so do lawyers, bureaucrats and business writers. I call them “zombie nouns” because they cannibalize active verbs, suck the lifeblood from adjectives and substitute abstract entities for human beings:

The proliferation of nominalizations in a discursive formation may be an indication of a tendency toward pomposity and abstraction.

The sentence above contains no fewer than seven nominalizations, each formed from a verb or an adjective. Yet it fails to tell us who is doing what. When we eliminate or reanimate most of the zombie nouns (tendency becomes tend, abstraction becomes abstract) and add a human subject and some active verbs, the sentence springs back to life:

Writers who overload their sentences with nominalizations tend to sound pompous and abstract.

Only one zombie noun – the key word nominalizations – has been allowed to remain standing.

At their best, nominalizations help us express complex ideas: perception, intelligence, epistemology. At their worst, they impede clear communication. I have seen academic colleagues become so enchanted by zombie nouns like heteronormativity and interpellation that they forget how ordinary people speak. Their students, in turn, absorb the dangerous message that people who use big words are smarter – or at least appear to be – than those who don’t.

In fact, the more abstract your subject matter, the more your readers will appreciate stories, anecdotes, examples and other handholds to help them stay on track. In her book “Darwin’s Plots,” the literary historian Gillian Beer supplements abstract nouns like evidence, relationships and beliefs with vivid verbs (rebuff, overturn, exhilarate) and concrete nouns that appeal to sensory experience (earth, sun, eyes):

Most major scientific theories rebuff common sense. They call on evidence beyond the reach of our senses and overturn the observable world. They disturb assumed relationships and shift what has been substantial into metaphor. The earth now only seems immovable. Such major theories tax, affront, and exhilarate those who first encounter them, although in fifty years or so they will be taken for granted, part of the apparently common-sense set of beliefs which instructs us that the earth revolves around the sun whatever our eyes may suggest.

Her subject matter – scientific theories – could hardly be more cerebral, yet her language remains firmly anchored in the physical world.

Contrast Beer’s vigorous prose with the following passage from a social sciences book:

The partial participation of newcomers is by no means “disconnected” from the practice of interest. Furthermore, it is also a dynamic concept. In this sense, peripherality, when it is enabled, suggests an opening, a way of gaining access to sources for understanding through growing involvement. The ambiguity inherent in peripheral participation must then be connected to issues of legitimacy, of the social organization of and control over resources, if it is to gain its full analytical potential.

Why does reading this paragraph feel like trudging through deep mud? The secret lies at its grammatical core: Participation is. . . . It is. . . . Peripherality suggests. . . . Ambiguity must be connected. Every single sentence has a zombie noun or a pronoun as its subject, coupled with an uninspiring verb. Who are the people? Where is the action? What story is being told?

To get a feeling for how zombie nouns work, release a few of them into a sentence and watch them sap all of its life. George Orwell played this game in his essay “Politics and the English Language,” contrasting a well-known verse from Ecclesiastes with his own satirical translation:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

Here it is in modern English:

Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.

The Bible passage speaks to our senses and emotions with concrete nouns (sun, bread), descriptions of people (the swift, the wise, men of understanding, men of skill) and punchy abstract nouns (race, battle, riches, time, chance). Orwell’s “modern English” version, by contrast, is teeming with nominalizations (considerations, conclusion, activities, tendency, capacity, unpredictable) and other vague abstractions (phenomena, success, failure, element). The zombies have taken over, and the humans have fled the village.

Zombie nouns do their worst damage when they gather in jargon-generating packs and infect every noun, verb and adjective in sight: globe becomes global becomes globalize becomes globalization. The grandfather of all nominalizations, antidisestablishmentarianism, potentially contains at least two verbs, three adjectives and six other nouns.

A paragraph heavily populated by nominalizations will send your readers straight to sleep. Wake them up with vigorous, verb-driven sentences that are concrete, clearly structured and blissfully zombie-free.

*****

For an operationalized assessment of your own propensity for nominalization dependence (translation: to diagnose your own zombie habits), try pasting a few samples of your prose into the Writer’s Diet test. A score of “flabby” or “heart attack” in the noun category indicates that 5 percent or more of your words are nominalizations.

Helen Sword teaches at the University of Auckland and has published widely on academic writing, higher education pedagogy, modernist literature and digital poetics. Her latest book is “Stylish Academic Writing” (Harvard University Press 2012).

 

Posted in Academic writing, Scholarship, Writing

Academic Writing Issues #1: Excessive Signposting

One of the most characteristic and annoying tendencies in academic writing is the excessive use of signposting: here’s what I’m going to do, here I am doing it, and here’s what I just did.  You can trim a lot of text from your next paper (and earn the gratitude of your readers) by just telling your story instead of continually announcing what the story will be.

Here is a lovely take-down of an academic author who made the mistake of getting on Geoff Dyer’s nerves.  Enjoy.  The original is from the New York Times.

New York Times

July 22, 2011

An Academic Author’s Unintentional Masterpiece

By GEOFF DYER

In this column I want to look at a not uncommon way of writing and structuring books. This approach, I will argue, involves the writer announcing at the outset what he or she will be doing in the pages that follow. The default format of academic research papers and textbooks, it serves the dual purpose of enabling the reader to skip to the bits that are of particular interest and — in keeping with the prerogatives of scholarship — preventing an authorial personality from intruding on the material being presented. But what happens when this basically plodding method seeps so deeply into a writer’s makeup as to constitute a stylistic signature, even a kind of ongoing flourish or extravagance?

Before continuing I will say something here about how I was drawn to this area of research. In the course of writing an article about the photographer Thomas Struth, I remembered that the highly regarded art historian Michael Fried had a chapter on Struth in his book “Why Photography Matters as Art as Never Before” (2008), henceforth WP. I’d read only a little of Fried before, but I knew that his earlier “Absorption and Theatricality: Painting and Beholder in the Age of Diderot” (1980) was regularly referred to and quoted by art historians. I will show later that one of those art historians is Fried himself, but as soon as I started to consult WP I realized I was reading something quite extraordinary: a masterpiece of its kind in that it takes the style of perpetual announcement of what is about to happen to extremes of deferment that have never been seen before. Imminence here becomes immanent.

I’ll come to the rest of the book later. Here I will simply remark that the first page of Fried’s introduction summarizes what he intends to do and ends with a summary of this summary: “This is what I have tried to do in ‘Why Photography Matters as Art as Never Before.’ ” The second page begins with another look ahead: “The basic idea behind what follows. . . . ” Fair enough, that’s what introductions are for, and it’s no bad thing to be reassured that the way in which the overall argument will manifest itself “in individual cases will become clear in the course of this book.” Page 3 begins: “The organization of ‘Why Photography Matters as Art as Never Before’ is as follows. . . . ” Well, O.K. again, even if it is a bit like watching a rolling news program: Coming up on CNN . . . A look ahead to what’s coming up on CNN. . . . More striking is the way that even though we have only just got going — even though, strictly speaking, we have not got going — Fried is already looking back (Previously on “NYPD Blue” . . . ) on what he did in such earlier books as “Art and Objecthood” and “Absorption and Theatricality.” The present book will not be like those earlier ones, however, “as the reader of ‘Why Photography Matters as Art as Never Before’ is about to discover.”

What the reader discovers, however, is that Fried will continue to announce what he’s about to do right to the end: “Later on in this book I shall examine . . . ”; “I shall discuss both of these after considering . . . ”; “I shall also be relating. . . . ” Fried’s brilliance, however, is that in spite of all the time spent looking ahead and harking back he also — and it’s this that I want to emphasize here — finds the time to tell you what he’s doing now, as he’s doing it: “But again I ask . . . ” ; “Let me try to clarify matters by noting . . . ”; “What I want to call attention to. . . . ” But that’s not all: the touch of genius is that on top of everything else he somehow manages to tell you what he is not doing (“I am not claiming that . . . ”), what he has not done (“What I have not said . . . ”) and what he is not going to do (“This is not the place for . . . ”). On occasions he combines several of these tropes in dazzling permutations like the negative-implied-forward and the double-backward — “So far I have said nothing in this conclusion about Barthes’s ‘Camera Lucida,’ which in Chapter 4 I interpreted as a consistently antitheatrical text even as I also suggested . . . ” — before reverting, a paragraph later, to the tense endeavor of the present (i.e., telling us what he’s still got to do): “One further aspect of Barthes’s text remains to be dealt with.” There is, I would observe here, a kind of zero-sum perfection about the way the theatricality of the flamboyant, future-oriented sign-posting is matched by all the retrospection. The depths of self-absorption that make this possible are hard to fathom.

It could be argued that this is essentially an academic habit, and that Fried is faithfully observing the expected conventions — so faithfully that he has become an unconscious apostate. If academia elevates scholarly and impersonal inquiry above the kind of nutty, fictional, navel-gazing monologues of Nicholson Baker, then Fried is at once its high camp apotheosis and its disintegration into mere manner.

Lest you think I have been quoting unfairly, take a break here and run your eyes over a couple of pages of WP in a library or bookstore. You’ll be amazed. You’ll see that this is some of the most self-worshiping — or, more accurately, self-serving — prose ever written. I kept wondering why an editor had not scribbled “get on with it!” in huge red letters on every page of the manuscript — and then I realized that the cumulative flimflam was the it! And at that moment, as I hope to show, everything changed.

Suppose that you meet someone who is a compulsive name-dropper. At first it’s irritating, then it’s boring. Once you have identified it as a defining characteristic, however, you long for the individual concerned to manifest this trait at every opportunity — whereupon it becomes a source of hilarity and delight. And so, having experienced a crescendo of frustration, I now look forward to a new book in which Fried advances his habit of recessive deferral to the extent that he doesn’t get round to what he wants to say until after the book is finished, until it’s time to start the next one (which will be spent entirely on looking back on what was said in the previous volume). At that point he will cross the border from criticism to the creation of a real work of art (fiction if you will) called “Kiss Marks on the Mirror: Why Michael Fried Matters as a Writer Even More Than He Did Before.”

Geoff Dyer is the author, most recently, of “Otherwise Known as the Human Condition: Selected Essays and Reviews, 1989-2010.” His “Reading Life” column will appear regularly in the Book Review.