The Gallup organization recently announced that it will not poll on the presidential primary races, and perhaps not on the 2016 general election either—in order, its editor-in-chief said, to focus on “understanding the issues.” Observers might suspect that Gallup is passing on elections because its forecasts were embarrassingly wrong in 2012: off by about 5 points, it called Romney the winner. It is hard to sell your data to businesses and news organizations when you know it is questionable. Whatever the motive, Gallup’s withdrawal spotlights a deep worry in all survey research: declining accuracy because of plummeting response rates. … For the rest of this post, see the Boston Review here.
The “wave of veteran suicides,” in the words of The New York Times editors last year, seems to cap the traumas that the vets have borne in service to the nation. It turns out, however, that actually establishing a connection between military service and suicide is difficult. It may take years more research to fully understand the personal toll of service. The “Forever War” of an earlier generation, Vietnam, produced a particularly strong debate about serving and suicide. While the tragic consequences seemed clear to some, the data have been much more opaque. The veteran-suicide connection was, as a recent article describes, also opaque a century ago, when the veterans in question had served in the Civil War.
Residents of small Barnstable, Massachusetts, on Cape Cod, were not sure what to make of “odd” Joseph Gorham, who lived–and wandered–among them in the first half of the 1700s. He would walk unannounced into their homes, “Searching and Rumiging for Victuals in a Ravenous manner without Leave,” gorge himself on what he could find, sometimes to the point of throwing up, and often spend the night by their fireplaces or in their barns. Gorham had no wife, did no work, and neither bought nor sold, being unable to bargain on his own behalf. At the same time, he had a “Very Extraordinary Genius” in playing checkers, visually estimating the weights of goods, and recalling calendar information in exact detail. Also remarkable from our vantage point, his oddness was tolerated for decades.
University of Connecticut historian Cornelia H. Dayton tells Gorham’s story in the Fall 2015 issue of the Journal of Social History, a story made visible only by a court case over Gorham’s will when he died at 73, and one with several implications for how we understand mental illness, community, gender, and class.
An end-of-the-year crystal-ball statement by New York Times technology columnist Farhad Manjoo stimulated me to muse some more about how Americans think about technological change. Manjoo wrote:
In 2016, let’s begin to appreciate the dominant role technology now plays in shaping the world, and let’s strive to get smarter about how we think about its effects. “The pace of technological change has never been faster, so it’s more important for people to understand things that are harder to keep on top of,” said Julius Genachowski, the former chairman of the Federal Communications Commission. . . .
As I discussed in an earlier post, such claims are perennial. More examples: About 60 years before Manjoo, on April 22, 1957, noted Times journalist C. L. Sulzberger warned:
The dizzy speed with which mechanical techniques are now developing leads many serious thinkers to wonder if they may not soon exceed human capacity to absorb them.
And about 25 years before that, a member of my own tribe, sociology founding father William F. Ogburn, told a panel–as reported in the Times on January 2, 1931–that:
An increasing number of inventions . . . will mean an increasing pace of change and less peace. It will become increasingly difficult for the growing person to adapt himself to an ever more complicated environment; and so in the future, . . . . the problem will be met, perhaps, by prolonging infancy to say, thirty or forty years of age or even longer.
(Parents of 20-somethings may want to comment on the prediction of prolonged infancy.)
The Genachowski-like claims that Americans today are buffeted by unprecedented social change driven by an unprecedented technological pace are, I have argued (here and here), wrong. We may yet face, but do not yet experience, the sort of machine-assisted disruptions that were common a century ago.
Still, all this is about the views of talking (or writing) heads. How have average Americans thought about the pace and dangers of technological change?
The sharpest contrast in American communities is that between black and white neighborhoods. There is no greater spatial distinction in our cities. Everyone is aware of it. Would-be homebuyers shop accordingly; parents pick schools accordingly; employers hire accordingly; drivers plan routes accordingly–that is, when homebuyers, parents, employers, and drivers have some choice in the matter.
This great segregation of black and white, scholars had thought, was produced in the twentieth century. New research reveals a more complex story, as described in my latest column for the Boston Review — here.
The increasing delay of death for Americans over the last century or so has been extensive and consequential, probably in many profound ways that we do not fully appreciate. In the late 19th century, a newborn white boy could expect to live, on average, to about 40; now, such a newborn can expect to live into his late 70s. A ten-year-old then could expect to reach his late 50s; a ten-year-old now can expect to reach his mid-70s. Girls lived longer and nonwhites shorter lives, but the trends have been dramatically upward for them as well. Moreover, Americans’ health, while they lived, also improved markedly.
How did this happen? Historians have excavated old health records and applied new techniques to parse out why better health and lower mortality occurred. In a just-published review of the topic, appearing in the Journal of Economic Literature, UCLA economic historian Dora Costa describes how the United States extended its people’s lives in different ways in different eras, depending in part on the nature of the health threat and on public will.
“Diversity” became the announced goal of schools, employers, and liberal activists once American voters and courts turned against “affirmative action” for black Americans (never mind the idea of reparations). Earlier, in the 1960s and ’70s, the Johnson and Nixon administrations had pushed racial “goals” (not quotas, they stressed) in a not-so-transparent effort to redress some of the economic disadvantages accumulated over centuries of slavery and Jim Crow. However, with votes such as California’s 1996 passage of a constitutional amendment barring state institutions from considering race in employment, contracts, or education, and with Supreme Court cases reaching almost as far, liberals retreated from affirmative action to promoting “diversity.” Ethnic diversity, they argue, is good for everyone, not just minorities; it makes learning, working, neighboring, and deliberating better (e.g., here). Thus was born a defense for legally considering race just a bit, as well as a set of careers in diversity promotion, management, training, and law.
Opinion leaders from school teachers to corporate CEOs now promote, with some support from research, the virtues of diversity. Yet, largely out of view of public discussion of the topic, a line of scholarly research emerged that implies the opposite. It suggests that the more diverse neighborhoods, cities, or countries are, the less people cooperate to common ends and the more they socially disengage; they “hunker down,” in one colorful rendition. A new paper by Maria Abascal and Delia Baldassarri in the latest issue of the American Journal of Sociology revisits this academic line of research and forces us to think back to why diversity was important in the first place.