Image by Jason A. Howie
Earlier this month, Twitter announced it was finally getting a “report abuse” button to help combat the rape and death threats some users receive. The news was, of course, controversial, in no small part because the button simply masks the real problem: why human beings think this kind of behaviour is appropriate in an online environment in the first place. It doesn’t help that Twitter isn’t very good at determining who the abuser actually is. Even though the internet turned 20 this year, we still haven’t worked out social contracts for the technology, and reports of abuse seem to get worse with each passing year.
Luckily, online harassment is finally starting to have “real” world consequences, as in the case of Toronto feminist Steph Guthrie, whose Twitter harasser is facing charges of criminal harassment after sending her threats for six months. And while it’s a good thing that law enforcement is starting to take this seriously, the problem persists, and more harassers escape accountability than face it. Most victims of online harassment, be it on social media, in online games, or on message boards, are still more likely to be told to “Chill out, it’s just the internet,” as if the internet weren’t a real place where words and actions (should) have consequences. This view is what makes people think it’s OK to harass others online, or to lie about where they met their partners when it was actually on OK Cupid.
To try to figure out why we separate the ‘digital’ world from the ‘real’ one, I turned to York University professor and social psychologist David Toews. “It has now come to a point where people not only can, they are often expected to, communicate with each other in multiple media formats, frequently as many as possible and even many different ones simultaneously,” he explains. “This means that the consensus that people turn to in order to define what is real and what is not real is much more unstable. People frequently tend to pick a side and emphasize that this is either good or bad, for society, for culture, for morals, for education, etc. So if people sense there is little consequence to an action, they tend to feel it is a bit unreal or lacks reality.”
This, of course, isn’t helped by the absence of traditional social cues, like body language, or even knowing what the other person looks like. It’s been known for quite a while that anonymity enables antisocial behaviour, and that’s partially what’s going on here. Logically, this would suggest that social media and online communities shouldn’t be anonymous at all, and that we’re on the right track by making online harassers as legally culpable as so-called “real life” harassers, but, unfortunately, it’s a little more complicated than that.
The internet also allows individuals to indulge their own confirmation bias: the human desire to interact with people (and media) who share our beliefs. When an individual turns out not to share them, people often feel threatened, which is why they are so quick to attack a person who doesn’t conform. The internet makes this incredibly easy, since there’s a website for every niche interest, and Google makes it easier than ever to find websites and people who share your views. It also makes it easier to band together and gang up on people who don’t. A good, non-triggering example is when Anne Rice (probably) accidentally sicced her fan base on a blogger for writing a bad review of one of her books. This is, of course, a much lower-stakes example than I could have used, because hate groups are known for engaging in this behaviour online all the time. Studies of online communities (yes, they exist!) largely support this. Then there is the question of online identity.
One study published earlier this year by Ayoung Suh of Ewha Womans University in Seoul offers some particularly pertinent observations about identity in online settings. The study looked specifically at identity in online communities, how they give people a forum to experiment with their own identities, and what this means for the individual and the community at large. Suh found that the greater the difference between a person’s virtual persona and their real self, the more that person participated in an online community, and the greater their desire for privacy. She also found that the more these individuals were able to vent their negative feelings in an online community, the more likely they were to do so by attacking other members, ignoring the common good of the community at large in favour of their own personal feelings. Suh refers to this venting as “catharsis,” and while catharsis is usually good for the individual and for society as a whole, when combined with anonymity it often has negative consequences.

This actually explains a lot about why attacks like this happen on Twitter. Twitter is pretty much a catharsis machine, a place where individuals can go to vent their frustrations about various things under a handle of their choosing. But since Twitter also lets individuals follow only the people they like, there’s a large element of confirmation bias as well. Most Twitter fights start with just one retweet by the wrong person, which allows all of their followers, and their followers’ followers, to gang up on a person they don’t agree with.
What Suh ultimately suggests as a solution is to actually build sites so that they create more positive and desirable personas, and reward members for, essentially, not being an asshole. Developers looking to make social media or online communities should take this into consideration when designing their site or program: not just, “How cool will this feature be!?” but also, “How will this service encourage people to behave?” The social aspect needs to become just as important as the media and technology aspects. As the virtual world and the real world become more and more intertwined with daily life and social interaction, this will only become more important than it already is.