The Blameless Only

Americans generally believe that the government should not take money from one person to give to another, and generally believe that only recently – perhaps just since the 1960s or since the New Deal – has government done so. Consistent with these views, American “welfare” policy is distinctively limited, constrained, and grudging. Yet history shows that American government, notably the federal government, has for centuries used taxpayers’ money to help other people – for example, to assist businessmen with subsidies of various kinds and to provide large pensions for widows of Union Army veterans. Indeed, even a couple of centuries ago, Congress sent large sums of what we would today call “foreign aid” abroad. Two recent books clarify this seeming contradiction between American ideology and practice by showing that whether government helps or not depends not so much on principles of taxation and representation as on whether those who are helped are seen as blameless or not.

Stanford law professor Michele Landis Dauber, in her 2013 book, The Sympathetic State, recounts the legislative history of federal relief programs, and Northwestern historian Susan J. Pearson, in her 2011 book, The Rights of the Defenseless, describes the evolution of anti-cruelty legislation. Both accounts revolve crucially around principles of self-reliance and responsibility.

It is the moral logic of blame, Dauber writes, that allowed Massachusetts Governor William Weld in 1995 to sign a bill sharply curtailing assistance to poor single mothers while simultaneously asking the federal government for millions in direct payments to the state’s fishermen. The arguments over both moves were arguments about blame and blamelessness.


As I write this post, it has been about three weeks since Thomas Duncan was diagnosed with Ebola in Texas. The media and political hysteria that has ensued in this country is amazing, statistically and historically. Unlike, say, tuberculosis or the flu, it is extremely hard to get infected with Ebola unless one is caring, without adequate protection, for an actively ill patient. Consider that none of the people who were living with Duncan has shown symptoms.

One person, Duncan himself, has died from Ebola in the United States in these three weeks. In contrast, during an average three-week period in the United States: 35 people die from tuberculosis; 3,200 from influenza and pneumonia – 500 of those people under 65 years of age; 1,100 from suicide by gun; 650 from homicide by gun; 1,000 from alcoholic cirrhosis; and 1,900 from motor vehicle accidents.* Not only are these deaths vastly more numerous, their causes are much more contagious, whether in a medical sense or in a sociological sense. Where are the screaming headlines for those risks?

So much for the statistics. From an historical view, there was a time when alarm, even a run-to-the-hills psychology, made sense in reaction to a disease appearing on our shores. We do not live in such times now.**


In 2002, then-Berkeley (now-NYU) sociologist Michael Hout and I published a paper pointing out a new trend in Americans’ religious identity: A rapidly increasing proportion of survey respondents answered “no religion” when asked questions such as “What is your religious preference? Is it Protestant, Catholic, Jewish, some other religion, or no religion?” In the 1991 General Social Survey, about 7 percent answered no religion and in the 2000 GSS, 14 percent did.* We explained the trend this way:

the increase was not connected to a loss of religious piety, [but] it was connected to politics. In the 1990s many people who had weak attachments to religion and either moderate or liberal political views found themselves at odds with the conservative political agenda of the Christian Right and reacted by renouncing their weak attachment to organized religion.

If that is what religion is, most of the “Nones” seemed to be saying, count me out.

In the years since, the trend has continued, with Nones reaching 20 percent in the 2012 GSS. And a good deal of research has also accumulated on the topic (some of it reported in an earlier post). Notably, Robert Putnam and David Campbell refined our argument in their 2010 book, American Grace, pointing more sharply to lifestyle issues as the triggers for Americans declaring no religious identity.

Mike and I have just published a paper in Sociological Science updating the trend over an additional dozen years, applying new methods to the trend, and retesting explanations for the rise in Nones. We – actually it’s 90 percent Mike’s work – find that our earlier account stands up even more strongly.



As is now well-known, scores on “intelligence” tests rose strongly over the last few generations, world-wide – this is the “Flynn Effect.” One striking anomaly, however, appears in American data: students’ slumping scores on academic achievement tests like the SAT. Notice of the decline, starting in the 1960s, sparked a lot of concern and hand-wringing. A similar decline is evident among adult respondents to the General Social Survey. The GSS gives interviewees a 10-item, multiple-choice vocabulary test. (Practically speaking, vocabulary tests yield pretty much the same results as intelligence tests.) Over 40 years of the survey, a pattern emerged: scores rose from the generations born around 1900 to the generations born around 1950 and then dropped afterwards. Are recently-born cohorts dumber – or, at least, less literate – than their parents and grandparents?

A new study presented to the American Sociological Association in August by Shawn Dorius (Iowa State), Duane Alwin (Penn State), and Juliana Pacheco (U. of Iowa) tested a hunch several researchers have had about the generational pattern in the GSS vocabulary test – that words have histories.


(This awkward title is one solution to complaints about “American Exceptionalism.” As discussed in a 2011 post, the phrase has recently come to mean, to some political partisans, “American Superiority.” For generations, however, students of American society have used the first dictionary definition of exceptionalism: “the condition of being different from the norm” [Merriam-Webster], more specifically meaning that the U.S. is an outlier among — way different from — other western nations.)

My take on the how and why of American Way-Differentism appears in the book, Made in America. Parts of the argument are summarized in a new essay for a joint project of the Smithsonian Institution and Zócalo Public Square on “What It Means to Be American.” The essay is here (and here as well).

Do Ideas Matter?

If you go to the Boston Review Web site, you’ll find the slogan “Ideas Matter” gracing the top of the homepage. Since I write a column for the magazine—and even wear a BR T-shirt announcing the slogan—I am not unsympathetic to the spirit of the claim. But in the social sciences, the idea that ideas matter has always been controversial. How much do ideas really matter? Do they affect individuals and societies more or less than do material circumstances such as economic incentives, physical constraints, and military force?

Arguments one way or the other often address broad historical issues, such as the economic rise of the West. Does the credit go to the Protestant ethic (Max Weber) or the West’s geographical advantages (Jared Diamond)? Do differences between Asian and European societies result from Confucianism versus Greek thought, collectivism versus individualism, late versus early industrialization—or something else? Disputes over individual differences in behavior are similarly polarized…. (See the rest of this post at the Boston Review site here.)

Alternative to Empathy

Paul Bloom, the noted Yale psychologist, wrote, in a 2013 New Yorker article and again in a 2014 Boston Review forum, “against empathy.” We are urged to feel empathy in order to do good for others, but empathy is a poor guide to altruism. Empathy is “parochial, narrow-minded, and innumerate,” Bloom writes. We empathize much more with people who resemble us in background, looks, or character (that is, people who seem moral and deserving just like we assume we are) than with people who are different, odd, or potentially at fault. Thus, the baby who fell down the well in the next town deserves moving heaven and earth to save her, while tens of thousands of starving, deformed refugees thousands of miles away – not so much. How can empathy’s discrimination be morally justified, Bloom asks. Isn’t there a better guide?

My small addition to the conversation is simply to note this oddity: Bloom and the Boston Review commentators did not refer to the obvious guide, at least for Americans: organized religion. (In the New Yorker, Bloom refers only to “religious ideologies that promote cruelty” and the BR essays make passing nods to a vague Buddhism.) No one acknowledges that Americans’ historically most important guide to moral decisions, the Bible, might avoid the empathy paradox – or at least it might if Americans had not watered down its guidance with empathy.


