The cliché is that modern commerce, media, and travel have washed out local cultures. A mall anchored by a Bloomingdale’s in Massachusetts is just like one in Arizona. Starbucks here is like Starbucks there. Everyone coast to coast listens to the same songs and watches the same cable and streaming channels. Except for the weather, the American experience is just about the same.

John Steuart Curry, Baptism in Kansas, 1928
And yet: We repeatedly run into evidence that at some level, perhaps a quite deep level, America’s local cultures are persistently different, perhaps increasingly different. One example is politics. There is some evidence that Americans increasingly cluster by political leanings–gerrymandering aside–as they seek to live among folks who share their tastes. Another example is the way interracial and same-sex couples gravitate toward large central cities. Localities thus become more culturally distinct because “birds of a feather” more easily flock together, sorting themselves into homogeneous enclaves. Beyond that sorting process, however, there is evidence that localities themselves still–even in this globalized era–shape cultural tastes, actually “coloring” the birds’ “feathers.”
Two recent papers from different corners of the social sciences make bold claims about the very long-lasting psychological effects of centuries-old local cultures. One study, published in a psychology journal, finds that residents of what were long ago the coal-fueled industrial regions of Great Britain have especially “adverse” psychological profiles today because of that history. The second study, this one by economists, finds that residents of what were long ago frontier counties of the United States are especially likely today to be “individualistic” and hostile to government because of that history.
Claiming that historical cultures reproduce themselves for generations is not new. Historian David Hackett Fischer (no relation), for example, argued in great detail in his 1989 book, Albion’s Seed, that regions of the U.S. vary culturally today in ways rooted in who settled them centuries ago. That the Puritans landed in New England and the Quakers in Pennsylvania still matters. Similarly, psychologists Richard Nisbett and Dov Cohen argued in their 1996 book, Culture of Honor, that variations in local rates of lethal violence can be explained by where groups from particular places in the British Isles came ashore–specifically, that immigrants from highland herding regions brought with them a predilection to respond heatedly to personal challenges, a predilection that lasts to this day. Generations beget generations, and yet the cultural distinctiveness lasts.
The two new studies expand the claim.
Coal Dust Lingers
Around 2010, about 380,000 residents of England and Wales filled out an online psychological survey. An international team of scholars used their answers to measure the extent to which 111 counties were home to people with personality profiles indicating “psychological adversity”–that is, regions with residents who scored high on “neuroticism” (mainly describing oneself as anxious and depressed), scored low on “conscientiousness” (mainly describing oneself as careless and disorganized), and scored low on expressions of “life satisfaction.” The researchers linked the industrial history of each county circa 1820 to the average psychological profiles of respondents in 2010. Counties where many men had worked in the coal mines or in coal-fueled industries in 1820 were the counties where two centuries later people tended to report greater “psychological adversity.” (So, by the way, were the heavily agricultural counties of 1820.)
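To make the construction concrete, here is a minimal sketch of how such a county-level “psychological adversity” score could be built from individual survey responses. This is my illustration, not the authors’ code: the column names, the z-scoring, and the equal weighting are all assumptions.

```python
# A minimal sketch (assumptions noted above) of a county-level
# "psychological adversity" index: high neuroticism, low
# conscientiousness, low life satisfaction.
import pandas as pd

TRAITS = ["neuroticism", "conscientiousness", "life_satisfaction"]

def adversity_by_county(df: pd.DataFrame) -> pd.Series:
    """df: one row per respondent, with a 'county' column plus the
    three trait scores in TRAITS. Returns mean adversity per county."""
    z = (df[TRAITS] - df[TRAITS].mean()) / df[TRAITS].std()  # standardize
    # Adversity rises with neuroticism, falls with conscientiousness
    # and life satisfaction.
    score = z["neuroticism"] - z["conscientiousness"] - z["life_satisfaction"]
    return score.groupby(df["county"]).mean().sort_values(ascending=False)
```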
The authors repeated their study with American data, although the historical data for the U.S. were more limited and more recent. States with coal-based industries circa 1900 tended to have residents today who score high in neuroticism and low in feelings of well-being.
The authors argue that, despite centuries of social change and migration, the dark psychological cloud of working in the “satanic mills” of the industrial revolution lingers still over those communities today. How can that be? I’ll turn to that after discussing the second study.
Frontier Ways
Economists Samuel Bazzi, Martin Fiszbein, and Mesay Gebresilasse recently presented a working paper (short version here) entitled “Frontier Culture: The Roots and Persistence of ‘Rugged Individualism’ in the United States.” Resurrecting a classic historical argument, they contend that the frontier experience fostered an individualistic outlook on life and that such an outlook still characterizes today’s inhabitants of what were frontier communities centuries ago.
How did the authors measure the individualism of residents centuries ago? Mainly by counting the proportion of white children in a community who had been given unusual first names by their parents. To assess recent decades, they looked as well at survey respondents’ attitudes toward public spending and at voting patterns. (Measuring individualism by first names may seem odd, but previous scholars have both used that indicator and connected high rates of unusual naming to frontier life–e.g., here.)
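As an illustration of how such a naming measure can be computed–my sketch, not the authors’ procedure–one can count the share of children whose first names fall outside the most common names in a sample. The top-10 cutoff below is an assumption; the paper’s exact definition surely differs in its details.

```python
# A minimal sketch (assumption, not the authors' code) of a
# name-based individualism measure: the share of children whose
# first names fall outside the top_n most common names.
from collections import Counter

def unusual_name_share(names: list[str], top_n: int = 10) -> float:
    """Fraction of children whose first name is NOT among the
    top_n most frequent names in the sample."""
    common = {name for name, _ in Counter(names).most_common(top_n)}
    return sum(name not in common for name in names) / len(names)

# Example: 3 of 6 children here carry names outside the top 2.
print(unusual_name_share(["John", "John", "Mary", "Zebulon", "Asa", "Orilla"], top_n=2))
```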
How do Bazzi and colleagues measure “frontier-ness”? They code each American county, for each decade from 1790 to 1890, as a frontier county if it was near areas where population density dropped to a very low level (two people per square mile) and was itself of low density (fewer than six per square mile). In 1790, that frontier line ran roughly down the west side of the Appalachians; in 1860, roughly down the Mississippi River; and in 1890, roughly down the east side of the Rockies. The more decades a county fit those criteria for being on the frontier, the higher its score for “total frontier experience,” or TFE.
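A minimal sketch of that coding, assuming a simple adjacent-county check for “near” the frontier line (the paper’s spatial definition is more involved):

```python
# A rough sketch of the TFE coding described above. The thresholds
# come from the text (own density < 6, frontier line at < 2 people
# per square mile); operationalizing "near" as an adjacent-county
# check is my assumption.
DECADES = range(1790, 1900, 10)  # census decades 1790-1890

def total_frontier_experience(density, neighbors):
    """density: {(county, decade): people per square mile}
    neighbors: {county: set of adjacent counties}
    Returns {county: number of decades spent on the frontier}."""
    counties = {county for county, _ in density}
    tfe = {}
    for county in counties:
        frontier_decades = 0
        for decade in DECADES:
            # Missing data is treated as unsettled (density 0.0).
            own_low = density.get((county, decade), 0.0) < 6.0
            near_line = any(density.get((n, decade), 0.0) < 2.0
                            for n in neighbors.get(county, ()))
            if own_low and near_line:
                frontier_decades += 1
        tfe[county] = frontier_decades
    return tfe
```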
The authors then show that the greater a county’s TFE, the (slightly) higher the percentage of uniquely named white children in the 19th century and, in the 20th century, the more opposed residents were to government spending and federal regulation, the lower the local property taxes, and the higher the Republican vote–in 2016 more so than in any other recent year. Place–or, more precisely, the past culture of a place–matters.
Explaining Cultural Continuity
Let us assume, then, that the cultures of communities become distinct and that the distinctions persist generations later. (Each study has a number of complex data issues, but I will leave those to others.) How can that be? It can’t be the water…. Or maybe it can be–at least in the case of the connection between coal and psychological adversity. The authors don’t consider this possibility, but coal mining and burning may leave toxic residues for decades that might adversely affect the brain.*
Setting that aside, two findings need to be explained: Why did these kinds of communities develop notable local cultures–personality styles in British coal regions, individualism and anti-statism in American frontier regions–to start with? And then, perhaps more puzzling, how can they remain distinctive generations later, when those industries are gone and the communities are long since settled, with malls and McDonald’s?
Each paper has an explanation for the initiation of difference. In the case of coal, the authors argue, first, that there was self-selection: more neurotic, less disciplined people took jobs in the “satanic” mines and mills for lack of better economic alternatives. Second, they argue, the work experience itself damaged individual psyches. In the frontier case, the authors also argue, first, for self-selection: independent-minded, perhaps even anti-social, people were more likely to move to and stay on the frontier. They argue, second, that the frontier environment rewarded self-reliance, teaching residents that success entails making it on your own.**
Once each pattern is set, how does it persist 100 or 200 years later, when the conditions have become so different? Self-selection may still be part of the story: Decade after decade, people who fit the local culture tend to drift in and people who don’t fit tend to drift out. And then there is the transmission of culture: Each generation of children born into the community–and, presumably, each generation of newcomers moving into the community–is taught, directly and indirectly, the way things are done and the way things are thought: we are an undisciplined lot here in what was long ago coal country; we hate government here in what was long ago the frontier.
There’s another process, less noted or not noted at all in these papers, that sociologists would stress: continuity through social structure and institutions. As communities develop, institutions emerge that embody local cultural assumptions and, in turn, make it reasonable for people to keep acting accordingly. Having or not having, say, effective systems of economic investment, law and order, schooling, religious practice, support for the needy, labor solidarity, and so on–or the specific versions of each that emerge–makes self-discipline (in the coal case) or self-reliance (in the frontier case) more or less reasonable. If, for example, an unreliable justice system makes it sensible for individuals to enforce what they see as justice on their own, then the chances of establishing effective justice institutions diminish, which then reinforces the DIY strategy, and so it continues. If easy-to-get jobs in a manual-labor industry like coal mining encourage local boys to drop out of school, that, in turn, can reduce the chances that industries needing educated workers will move into the region.
Whatever the explanations for local cultures’ continuity–self-selection, cultural learning, institutional divergence, or even toxins–these studies, along with ones like those I mentioned earlier, remind us that, even in the globalized, McDonaldized, Facebooked world, you can tell when you’re “not in Kansas anymore.”
Update, July 23, 2018
A recent study adds another case of how “history’s heavy hand” operates, albeit at a smaller scale. The authors claim that neighborhoods that experienced the crack cocaine “epidemic” and its accompanying gun violence in the 1980s and ’90s still had elevated rates of gun deaths 17 years later. The mechanism here seems straightforward: many guns were still available. “We attribute,” they wrote, “nearly eight percent of the murders in 2000 to the long-run effects of the emergence of crack markets.”
———-
Notes
* The authors do consider the possibility of epigenetic effects–that something in the early coal-industry era affected the germ-line DNA of residents in ways that altered the genetic makeup of their descendants. A toxins-in-the-water explanation would be simpler.
** There is a totally contrary argument, which one reads particularly in the literature about early America: that frontier conditions put a premium on working with neighbors (in defense, coping with illness, barn-raising and harvesting, seeking social life, and so on), which rewarded community bonding more than individual autonomy. To make things even more complex, any effect may depend on specific settlement patterns. Early American farmers settled in or near villages and along trails. Homesteaders of the post-Civil War era had to establish their farms on separate plots of land, far from one another. And, meanwhile, agricultural communities settled by Scandinavians in the upper Midwest were notable for the cooperative systems they brought from home (and were disdained by “Yankees” from the Northeast).