This morning, I read an interesting reflection on dealing with online cruelty in the New York Times by Stephanie Rosenbloom:
In the virtual world, anonymity and invisibility help us feel uninhibited. Some people are inspired to behave with greater kindness; others unleash their dark side. Trolls, who some researchers think could be mentally unbalanced, say the kinds of things that do not warrant deep introspection; their singular goal is to elicit pain. But then there are those people whose comments, while nasty, present an opportunity to learn something about ourselves.
Easier said than done. Social scientists say we tend to fixate on the negative. However, there are ways to game psychological realities. Doing so requires understanding that you are ultimately in charge. “Nobody makes you feel anything,” said Professor Suler, adding that you are responsible for how you interpret and react to negative comments. The key is managing what psychologists refer to as involuntary attention.
When I checked her reference, I found that Rosenbloom miscited the research (and failed to link to it): the 2011 report on teens, kindness and cruelty on social network sites by the Pew Research Center’s Internet & American Life Project found that a vast majority of young people (88%) had “seen someone be mean or cruel to another person on a social network site,” not 69%. That percentage belongs to a happier statistic: “69% of social media-using teens think that peers are mostly kind to each other on social network sites.”
On that count, I’m glad the author chose to end with a reflection on kindness and the psychology of focusing on positive comments and compliments rather than negative ones. Anyone who wants to see how a positive feedback loop works should look at how Justin Levy’s friends & networks are supporting him, or how dozens and dozens of friends, family and strangers supported me when I lost my beloved greyhound this week.
I’m not sure about the New York Times editor’s summary — that the “Web encourages bad behavior,” through anonymity and lack of consequences.
I think that what we see online mirrors what humans are, and that what we see on social media (which is really what’s under discussion here, not the World Wide Web) is
1) a function of what the platforms allow, with respect to their architecture of participation, and
2) the community norms established there.
Compare newspapers’ online comments, YouTube comments and Twitter to what you find in the comments at Ars Technica, BoingBoing or even, dare I say it, in the blogs or public profiles I moderate. As Anil Dash has observed, the people who create and maintain online forums and platforms bear responsibility for what happens there:
It’s a surprisingly delicate balance to allow robust debate and disagreement on politics, current events, technology choices, or even sports (hello, tribalism) while guiding conversations away from cruelty, anger, or even hatred, whether we lead a classroom or an online discussion. The comments we allow to stand offline or online largely determine the culture of the class, town hall or thread they’re made within:
While people always bear responsibility for their own cruel actions or words, it’s incumbent upon those of us who host conversations or share our thoughts publicly online to try to respond with empathy, kindness and understanding where we can, and with polite but resolute moderation when others do not respond to those tactics or attack our friends and communities.
[IMAGE SOURCE: Amanda Lenhart, Pew Research Center]