This blog is one small example of a media- and internet-wide phenomenon: the torrent of reports on social science research. There was a time, back in the ’80s, when some of us bemoaned the dearth of social science reporting in the media. That dearth motivated my experiment in the early 2000s with Contexts, a magazine of sociology for general readers, and then this blog a decade later. Now, I’m here to bemoan too much social science reporting.
The voracious appetite of the media, particularly the online venues, for “content” has combined with trends in the social sciences to produce an efflorescence of reports on social science findings. Unfortunately, there are many weeds as well as blossoms in this dense garden. Maybe there is too much social science reporting, too much tabloid social science journalism.
Many have observed that, even as print outlets have diminished, online media and cable television have more than compensated by generating a far greater volume of coverage than before. (Although I don’t know of any hard numbers on that claim, there is suggestive evidence – h/t Jen Schradie.) Major media organizations seem increasingly eager to report on pieces of psychological or sociological research. There’s, for example, the New York Times in its Science Times, Sunday Review, and Business sections, Harper’s “Findings,” and NPR’s behavioral science beat. Media outlets are so hungry for such content that they have established their own “data journalism” research arms, such as Nate Silver’s 538, the Times’s Upshot, and Vox.
Simultaneously, developments in the academy feed more studies to the media: The competition to publish accelerates; the number of for-profit journals soliciting papers is booming; “open access” online publications provide yet further outlets but with often thin peer review; high-tech sources like Google and Twitter deliver fire-hose volumes of “big data”; lightning-speed computer programs yield results in seconds that once took days to calculate; and conducting online experiments and surveys is an order of magnitude cheaper and faster than pre-internet research was. (For what seems like pennies, online “workers” will take your survey or participate in your experiment; over a weekend you can have a publishable study completed.) All this provides a mounting supply of social science results to meet the growing demand for social science findings.
What’s not to like?
What’s not to like is that the media space is often filled with studies that command attention because they are startling, counter-intuitive, chilling, or charming. These attributes make for good news copy. They usually do not make for good social science. Such studies are frequently one-offs – results never seen before and never again – often attention-getting precisely because they run against the grain.
Even as the number of social science studies mounts rapidly, researchers are increasingly realizing that they have a problem with failures to replicate: Too many studies get dramatic attention but cannot be repeated and are unsupported by comparable research. Once such a study is out in the media, retractions, corrections, or qualifications rarely catch up; the initially reported result becomes a “fact.”
Such one-offs appear, usually, not because researchers are cooking the data (although that does happen occasionally), but because researchers are human and often succumb to wishful thinking. The results are so enticing, one just wants them to be true. Yet, if a researcher does enough studies running enough subjects – an increasingly easy task – something enticingly novel will pop up just by chance. (Statistician and political scientist Andrew Gelman devotes much of his blog to identifying such errors.) Researchers convince themselves that the novelty is a real finding – how disappointing intellectually and professionally if it were not – and the media are eager to run with it, to shout the results from the rooftops.
Good science is rarely if ever based on a single study. It requires building up a corpus of studies with similar findings before going up to the rooftops. Good science reporting should make sure that any new study is put into the context of comparable research. That practice is possible even on the heated internet. (For example, the political science blog Monkey Cage generally does this well.)
For the general reader, the media’s indiscriminate social science reporting calls for considerable skepticism, particularly about reports of what seem to be brand-new, revolutionary, startling, one-off studies. Indeed, that would be a good rule for reading all science reporting. And that would apply to reading reports in this blog, too.