The phrase “sharing economy,” when applied to services like Airbnb or Uber, is, of course, camouflage language. “Sharing” is what we urge our children to do with their toys at playtime. If, however, our kids rent their toys out, it is the “getting paid” economy, in the words of San Francisco’s super-pol, Willie Brown. Lyft’s slogan is “Your Friend with a Car,” but my friends don’t charge for a lift. (Using “sharing” to describe, say, free recycling or a housing co-op’s common kitchen is another matter.)

In some ways, these new “peer-to-peer” purchases are a step back to a more “informal” economy: the economy of guys repairing cars in their front yards, women doing hair-dos in their kitchens, laborers waiting on street corners for construction jobs, workers selling their home-made lunches to fellow employees, and the like. This is work that is unrecorded, untaxed, and unregulated. In developing countries, many, if not most, workers are in the informal economy. The 21st-century, high-tech American versions are certainly recorded in multiple online receipt systems; whether and how much those transactions are taxed is a matter of struggle now in many communities, as is the issue of whether and how much they are regulated. We have been there before.

Continue Reading »

Of Places Past

We have become more aware that Americans’ chances of upward economic mobility have for decades been a lot lower than Americans imagined, that being poor or rich can last generations. Efforts to explain that lock-in have pointed to several patterns, from the intergenerational inheritance of assets (or debt, as the case may be) to intergenerational continuity in child-rearing styles (say, how much parents read to their children). In such ways, the past is not really past.

Increasingly, researchers have also identified the places – the communities, neighborhoods, blocks – where people live as a factor in slowing economic mobility. In a post earlier this year, I noted a couple of 2008 studies showing that growing up in poor neighborhoods impaired children’s cognitive skills and reduced their chances to advance beyond their parents. In this post, I report on further research by NYU sociologist Patrick Sharkey (see links below) suggesting that a bad environment can worsen the life chances not only of a child but also of that child’s child, an unfortunate residential patrimony.

Continue Reading »

A recent article in Wired reported on the estimated 100,000 workers around the globe who risk their sanity culling the perverse, grotesque, horrific stuff that some people post on social media – child sexual abuse, close-ups of accident victims, self-mutilation, and the like. That there are circles of people who post such content and yet larger circles who presumably enjoy looking at and trading such content reminds us of the down-slide on the Internet’s “long tail.”

The “long tail” notion, argued in the early 2000s by Chris Anderson, then the editor of Wired, is that the Internet allows businesses to make money even on products valued by an extremely tiny proportion of consumers. Sellers aggregate enough of those rare customers to make marketing to them profitable. Netflix, Anderson wrote, is a good example: “It doesn’t matter if the several thousand people who rent Doctor Who episodes each month are in one city or spread, one per town, across the country … What matters is not where customers are, or even how many of them are seeking a particular title, but only that some number of them exist, anywhere.” The same logic applies to producers and audiences of perverse content.

At the same time, the Internet sustains niches for what most would consider positive activities, such as hobbyists trading tips, seekers of relatively rare sorts of mates finding one another (as in Jdate and FarmersOnly dating), fans of rarely-recorded world music discovering tracks, and sufferers from “orphan” diseases finding support and advice.

This “long tail” phenomenon – the good, the bad, and the very ugly – seems to be creating a new society. Except that we have been there before.

Continue Reading »

The Blameless Only

Americans generally believe that the government should not take money from one person to give to another, and they generally believe that only recently – perhaps just since the 1960s or since the New Deal – has government done so. Consistent with these views, American “welfare” policy is distinctively limited, constrained, and grudging. Yet history shows that American government, notably the federal government, has for centuries used taxpayers’ money to help other people – for example, to assist businessmen with subsidies of various kinds and to provide large pensions for widows of Union Army veterans. Indeed, even a couple of centuries ago, Congress sent large sums of what we would today call “foreign aid” abroad. Two recent books clarify this seeming contradiction between American ideology and practice by showing that whether government helps or not depends not so much on principles of taxation and representation as on whether those who are helped are seen as blameless.

Stanford law professor Michele Landis Dauber, in her 2013 book, The Sympathetic State, recounts the legislative history of federal relief programs, and Northwestern historian Susan J. Pearson, in her 2011 book, The Rights of the Defenseless, describes the evolution of anti-cruelty legislation. Both accounts revolve crucially around principles of self-reliance and responsibility.

It is the moral logic of blame, Dauber writes, that allowed Massachusetts Governor William Weld in 1995 to sign a bill sharply curtailing assistance to poor single mothers while simultaneously asking the federal government for millions in direct payments to the state’s fishermen. The arguments over both moves were arguments about blame and blamelessness.

Continue Reading »

As I write this post, it has been about three weeks since Thomas Duncan was diagnosed with Ebola in Texas. The media and political hysteria that has ensued in this country is amazing, statistically and historically. Unlike, say, tuberculosis or the flu, it is extremely hard to get infected with Ebola unless one is caring, without adequate protection, for an actively ill patient. Consider that none of the people who were living with Duncan has shown symptoms.

One person, Duncan himself, has died from Ebola in the United States in these three weeks. In contrast, during an average three-week period in the United States: 35 people die from tuberculosis; 3,200 from influenza and pneumonia – 500 of those people under 65 years of age; 1,100 from suicide by gun; 650 from homicide by gun; 1,000 from alcoholic cirrhosis; and 1,900 from motor vehicle accidents.* Not only are these deaths vastly more numerous, but their causes are much more contagious, either in a medical sense or in a sociological sense. Where are the screaming headlines for those risks?
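
One way to see the scale of these comparisons is to turn the three-week counts above back into implied annual totals. The short Python sketch below (an illustration added here, not part of the original post) does that arithmetic; the counts are simply the ones cited above, and the even-spread-across-the-year assumption is mine.

```python
# Back-of-the-envelope check: scale the three-week death counts cited above
# to implied annual totals, assuming deaths are spread evenly over 52 weeks.
# The three-week counts come from the post; the even-spread assumption is illustrative.

THREE_WEEK_DEATHS = {
    "tuberculosis": 35,
    "influenza and pneumonia": 3200,
    "suicide by gun": 1100,
    "homicide by gun": 650,
    "alcoholic cirrhosis": 1000,
    "motor vehicle accidents": 1900,
}

WEEKS_PER_YEAR = 52
PERIOD_WEEKS = 3

for cause, count in THREE_WEEK_DEATHS.items():
    implied_annual = count * WEEKS_PER_YEAR / PERIOD_WEEKS
    print(f"{cause:>25}: {count:>5} per 3 weeks ~ {round(implied_annual):>6} per year")
```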

So much for the statistics. From an historical view, there was a time when alarm, even a run-to-the-hills psychology, made sense in reaction to a disease appearing on our shores. We do not live in such times now.**

Continue Reading »

In 2002, then-Berkeley (now-NYU) sociologist Michael Hout and I published a paper pointing out a new trend in Americans’ religious identity: A rapidly increasing proportion of survey respondents answered “no religion” when asked questions such as “What is your religious preference? Is it Protestant, Catholic, Jewish, some other religion, or no religion?” In the 1991 General Social Survey, about 7 percent answered no religion and in the 2000 GSS, 14 percent did.* We explained the trend this way:

the increase was not connected to a loss of religious piety, [but] it was connected to politics. In the 1990s many people who had weak attachments to religion and either moderate or liberal political views found themselves at odds with the conservative political agenda of the Christian Right and reacted by renouncing their weak attachment to organized religion.

If that is what religion is, most of the “Nones” seemed to be saying, count me out.

In the years since, the trend has continued, Nones reaching 20 percent in the 2012 GSS. And a good deal of research has also accumulated on the topic (some of it reported in an earlier post). Notably, Robert Putnam and David Campbell refined our argument in their 2010 book, American Grace, pointing more sharply to lifestyle issues as the triggers for Americans declaring no religious identity.

Mike and I have just published a paper in Sociological Science updating the trend over an additional dozen years, applying new methods to the trend, and retesting explanations for the rise in Nones. We – actually it’s 90 percent Mike’s work – find that our earlier account stands up even more strongly.

Continue Reading »

As is now well-known, scores on “intelligence” tests rose strongly over the last few generations, world-wide – this is the “Flynn Effect.” One striking anomaly, however, appears in American data: slumping student scores on academic achievement tests like the SAT. Reports of the decline, which began in the 1960s, sparked a lot of concern and hand-wringing. A similar decline is evident among adult respondents to the General Social Survey. The GSS gives interviewees a 10-item, multiple-choice vocabulary test. (Practically speaking, vocabulary tests yield pretty much the same results as intelligence tests.) Over more than 40 years of the survey, a pattern emerged: correct scores rose from the generations born around 1900 to the generations born around 1950 and then dropped. Are recently-born cohorts dumber – or, at least, less literate – than their parents and grandparents?

A new study presented to the American Sociological Association in August by Shawn Dorius (Iowa State), Duane Alwin (Penn. State), and Juliana Pacheco (U. of Iowa) tested a hunch several researchers have had about the generational pattern in the GSS vocabulary test – that words have histories.

Continue Reading »
