Stopping video game harassment

Many women say popular video games such as World of Warcraft and Halo are rife with harassment, stalking and sexism that game companies don't police effectively.

But there are some steps the game industry, social media and law enforcement could take to combat harassment.

Video game companies each set their own rules for acceptable behaviour, and some players complain that they never learn what comes of their reports of bad behaviour.

"One of the things that would be really cool to see is if all companies adopted similar standards for what constitutes harassment and behaviour so they work more in tandem,'' says Kate Edwards, executive director of the International Game Developers Association.

Players can mute, block or report users who bother them. But some say that's not enough: Avoiding a problem doesn't solve it or change how people behave toward others.

Several experts credit Riot Games, the company behind the popular online game League of Legends, for building a system that aims to improve the culture of the game and change player behaviour. If a player is punished, the company's system tells them why they were reported and shows them the specific comments that other players didn't like.

Video game developers remain overwhelmingly male, and companies could make a conscious effort to diversify their workforces, Edwards says. Doing so might result in games with broader appeal.

The industry is paying attention to criticism. At the Electronic Entertainment Expo, or E3, video game conference in June 2015, companies featured more games with prominent women characters. There were also more women attending and representing the industry than in previous years.

Harassment isn't limited to game platforms. It spills out onto Twitter and other social media. After criticism, some sites have taken steps to try to curb abuse.

Twitter has tried to make it easier to report threats and abuse, adding staff and trying to streamline reporting and blocking. In February, the company formed a 'Trust & Safety Council' with outside groups to help develop tools and policies to fight abuse while still allowing people to speak freely.

Last year, the free-wheeling online discussion board Reddit adopted new guidelines that prohibit publishing people's private information (such as stolen pictures or home addresses), harassment, abuse and comments that incite violence.

The Electronic Frontier Foundation, a digital rights group, says that if companies aren't willing to build in filtering functions for harassing messages, they should let outside developers do so.

Federal law and laws in many states prohibit stalking and threatening someone online. Law enforcement can pursue people who make threats by collecting user information from Twitter and other services. But such cases can be challenging.

It has to be a "true threat," says Wesley Hsu, an assistant US attorney in Los Angeles. "There is a difference, under the law, between 'I hope someone comes and kills you' and 'I'm coming to kill you.'"

It can also be difficult to track people who make online threats, Hsu says, given the prevalence of tools for staying anonymous on the Internet.


Author: AP