Twitter wakes up to harassment but the law is still sleeping

Twitter’s new system for reporting harassment and threats to law enforcement comes after the platform has received serious criticism for its poor handling of harassment.

Twitter’s chief executive, Dick Costolo, acknowledged the company’s failings in a leaked memo:

We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day.

We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.

Complex problems in search of solutions

It’s encouraging to see Twitter’s executive recognising that it has a problem. It’s even more encouraging to see tangible efforts made to fix this problem. A host of changes have been made recently, including:

  • amending the network’s rules to explicitly ban revenge porn,
  • a system that requires users who regularly create new accounts to supply and verify their mobile phone number,
  • a new opt-in filter that prevents tweets that contain “threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts” from appearing in a user’s notifications.

None of these are perfect solutions. How will platforms adjudicate consent for revenge porn? Will attempts to verify user identity put anonymous users at risk?

The writer Jonathan Rauch once wrote that “the vocabulary of hate is potentially as rich as your dictionary” – and with this in mind, Twitter’s proposal to filter offensive language seems a little Sisyphean – a laboured task that never ends.
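To see why, consider the simplest form such a filter could take. The sketch below is purely illustrative – Twitter has not published its filtering rules, and the blocklist terms and matching logic here are hypothetical – but it shows how exact-match filtering works, and how trivially it can be evaded:

```python
# Illustrative only: a naive blocklist filter. Twitter's actual rules are
# unpublished; these placeholder terms and this matching logic are hypothetical.
BLOCKLIST = {"threatword", "insultword"}  # placeholder terms

def allow_notification(tweet_text: str) -> bool:
    """Return False if the tweet contains a blocklisted term."""
    words = tweet_text.lower().split()
    return not any(word.strip(".,!?") in BLOCKLIST for word in words)

# The Sisyphean problem: trivial obfuscation defeats exact matching.
print(allow_notification("you are a threatword"))   # False - caught
print(allow_notification("you are a thr3atword"))   # True - slips through
```

Every misspelling, substitution and fresh coinage demands a new rule, which is what makes a dictionary-sized vocabulary of abuse so hard to chase.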

Twitter’s new reporting system offers to email users a formal copy of their reported tweet, which they can then pass on as evidence to law enforcement agencies.

It took less than a day for the technology website Gizmodo to dismiss the new reporting tool as a “useless punt”, noting that Twitter merely emails information that could have been captured with a screenshot, and leaves the onus on the victim to approach local law enforcement agencies.

The weakest link

Conventional law enforcement agencies have a pretty mediocre track record for tackling online abuse and harassment.

United States Congresswoman Katherine Clark recently called on the Department of Justice to focus specifically on online harassment, after a “disappointing” meeting with the Federal Bureau of Investigation.

Technology journalist Claire Porter recently spoke of her frustrations with Australian police, who were seemingly unclear about their jurisdiction.

American journalist Amanda Hess described a similarly frustrating experience in explaining her harassment to an officer:

The cop anchored his hands on his belt, looked me in the eye, and said: ‘What is Twitter?’

If officers are confused about dealing with online harassment, then their ability to help the victims of threats and abuse is severely hindered. So what are the legal frameworks for dealing with this kind of abusive behaviour? Are they being adequately used?

The ‘lawless internet’ is a myth

The kind of abuse and harassment that people face on the internet is illegal under a variety of laws, and with varying penalties. Under Title 18 of the United States Code:

  • § 875 outlaws any interstate or foreign communications that threaten injury.

  • § 2261A outlaws any interstate or foreign electronic communications with the intent to kill, injure, harass or intimidate another person – especially conduct that creates reasonable fear of death or serious bodily injury, or that attempts to cause substantial emotional distress to another person.

Under the United Kingdom’s Communications Act 2003:

  • § 127 makes it an offence to send an electronic message that is grossly offensive, indecent, obscene or of menacing character.

Similarly, under Australian state and Commonwealth law:

  • §§ 474.15, 474.16 and 474.17 of the Criminal Code Act 1995 (Commonwealth) make it an offence to use electronic communications to threaten to kill or harm, to send hoaxes about explosives or dangerous substances, or to menace, harass and cause offence.

  • The Crimes Act 1900 (NSW) § 31, Criminal Code 1899 (Qld) § 308, Criminal Code Act 1924 (Tas) § 162, Crimes Act 1900 (ACT) §§ 30–31, Criminal Code 1983 (NT) § 166, Criminal Law Consolidation Act 1935 (SA) § 19(1) and (3), Crimes Act 1958 (Vic) § 20, and Criminal Code 1913 (WA) §§ 338A–338B each make it an offence to make threats that cause a person to fear death or violence.

In addition to this, the Australian Government has recently announced a new Children’s e-Safety Commissioner, along with legislation that will allow the commissioner to impose fines on social networking platforms.

The commissioner’s office is being established as part of a A$10 million policy initiative from the Department of Communications to enhance online safety for children.

This is a well-intentioned initiative with a laudable goal, but by narrowly focusing on the harassment of children and ignoring the wealth of existing laws, it might just miss the forest for the trees when it comes to addressing online harassment.

Given the extensive number of laws that could already be used to address online harassment, we must ask where the weaknesses lie in enforcing them.

If police officers are not yet adequately trained to engage with crimes committed online by local, interstate or international aggressors, or familiar with procedures to request data from social networks, then legislators must look at providing these agencies with the training and resources required to engage with their responsibilities in online spaces.

This article was originally published on The Conversation.

The dark art of Facebook fiddling with your news feed

Facebook’s news feed is probably the most-used feature of the social network. It organises posts, photos, links and advertisements from your friends and the pages you follow into a single stream of news. But lately the news feed has been making headlines of its own.

In August, users and journalists began to question Facebook’s news feed after noticing a scarcity of links and posts about the death of Michael Brown and the subsequent protests in Ferguson, Missouri.

Facebook also announced changes to the news feed to decrease the visibility of clickbait-style headlines – headlines that attempt to lure visitors to a webpage with intriguing but uninformative previews. Facebook even mocked up a typical example:

[Image: Facebook’s example of typical clickbait. Source: Facebook]

Facebook says it will be tracking the amount of time that users spend on a website after clicking such a link, and penalising the publishers of links that don’t keep reader attention.
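Facebook hasn’t published the mechanics of this, but the underlying idea – demote links whose visitors bounce back quickly – can be sketched in a few lines. The function name, threshold and penalty values below are invented for illustration, not Facebook’s actual parameters:

```python
# A minimal sketch of dwell-time scoring; the ten-second threshold and
# 0.5 penalty are hypothetical values, not Facebook's real parameters.
def clickbait_penalty(dwell_times_seconds: list) -> float:
    """Return a ranking multiplier for a link, based on how long
    readers stayed on the page after clicking through."""
    if not dwell_times_seconds:
        return 1.0
    average_dwell = sum(dwell_times_seconds) / len(dwell_times_seconds)
    # Quick bounces suggest the headline oversold the page.
    return 0.5 if average_dwell < 10 else 1.0

print(clickbait_penalty([3, 5, 4]))    # 0.5 - readers bounced, link demoted
print(clickbait_penalty([120, 90]))    # 1.0 - readers stayed, no penalty
```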

In June, Facebook faced criticism after the publication of research findings from an “emotional contagion” experiment that manipulated the news feeds of almost 700,000 users. The study raised ethical concerns among both Facebook users and observers.

Given how little we understand of Facebook’s internal affairs and the machinations of the news feed’s filter algorithms, the growing public concern around Facebook’s operations is understandable.

Why do the algorithms matter?

As users, our readiness to trust Facebook as a hub for social, professional and familial interactions, as well as a source for following and discussing news, has afforded the company a privileged position as an intermediary in our social and political lives.

Twitter CEO Dick Costolo’s announcement that Twitter would censor user-uploaded images of American journalist James Foley’s execution is a timely reminder of the many roles social networking platforms play.

These platforms and their operators do not simply present data and human interaction in a neutral way — they also make editorial judgements about the kinds of data and interaction they want to facilitate.

This should lead us to question the ways in which Facebook’s role as an intermediary for our information and social connections allows its operators to potentially influence its users.

Why does Facebook need algorithms to sort the news?

One of the most common responses to criticism of the news feed is the suggestion that Facebook do away with sorting entirely and simply show everything chronologically – just like Twitter.

Showing everything can make the news feed seem a bit more like a news firehose. Facebook engineers estimate that the average user’s news feed would show around 1,500 new posts each day.

The “firehose model” is not without its own issues. By showing all posts as they happen, Twitter’s approach tends to favour the users who post most often, which can let the noisiest users drown out other worthy voices.
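A toy simulation makes the problem concrete. Neither platform publishes its ranking code, so the sketch below is only illustrative: a strictly reverse-chronological feed is dominated by its most prolific poster, while a simple hypothetical per-author penalty lets a quieter voice surface:

```python
from collections import defaultdict

# Toy feed: one prolific poster (100 posts in a day) and one occasional poster.
posts = [("noisy_user", t) for t in range(100)] + [("quiet_user", 50)]

# Firehose model: strictly reverse-chronological.
firehose = sorted(posts, key=lambda p: p[1], reverse=True)
print([author for author, _ in firehose[:5]])  # all 'noisy_user'

# Hypothetical remedy: penalise each successive post from the same author.
def dampened_order(posts, penalty=20):
    per_author_count = defaultdict(int)
    scored = []
    for author, timestamp in sorted(posts, key=lambda p: p[1], reverse=True):
        scored.append((timestamp - penalty * per_author_count[author], author))
        per_author_count[author] += 1
    return [author for _, author in sorted(scored, reverse=True)]

print(dampened_order(posts)[:5])  # 'quiet_user' now appears near the top
```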

This concern may have influenced Twitter’s recent change to show tweets favourited by other followers in users’ timelines, and its apparent readiness to experiment with algorithmic changes to its users’ timelines.

Algorithmic filtering may well be helpful given the amount of information we deal with on a day-to-day basis, but the unexplained “black box” nature of most algorithmic systems can be a headache too.

Changes to Facebook’s algorithms can dramatically affect the traffic some websites receive, much to the chagrin of their publishers. Publishers who have registered with Facebook receive some basic metrics as to the number of users who have seen their post. Individual users receive even less feedback as to how widely (if at all) their posts have been seen.

These algorithms are ostensibly created by the developers of Facebook and Twitter in service of a better experience for their users (both individual and corporate).

But social platforms have a vested interest in keeping users engaged with their service. We must recognise that these interests can shape the development of the platform and its functions.

A social network’s filtering may be biased against showing content that engineers have deemed controversial or potentially upsetting, to help users enjoy the network. These filters could stop you from seeing a post that would have upset you, but they might also limit the visibility of a cry for help from someone in need.

Are there antidotes to algorithms?

If users are concerned by the choices that a social media platform seems to be making, they can demand a greater degree of transparency. That being said, these systems can be complex. According to Facebook, more than 100,000 different variables are factored into the news feed algorithms.
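Whatever the variables, the basic shape of such a system is a weighted combination of signals per post. The sketch below is a deliberately tiny stand-in – three invented features and hand-picked weights in place of Facebook’s reported 100,000-plus variables – but it shows the kind of computation involved:

```python
# A toy feed ranker: each post gets a weighted sum of its features.
# The features and weights here are invented for illustration.
WEIGHTS = {"friend_affinity": 3.0, "post_likes": 0.5, "hours_old": -1.2}

def score(post: dict) -> float:
    """Weighted sum of a post's features."""
    return sum(WEIGHTS[feature] * post[feature] for feature in WEIGHTS)

posts = [
    {"id": "older post from a close friend",
     "friend_affinity": 9, "post_likes": 4, "hours_old": 6},
    {"id": "fresh viral post from an acquaintance",
     "friend_affinity": 2, "post_likes": 50, "hours_old": 1},
]
for post in sorted(posts, key=score, reverse=True):
    print(post["id"], round(score(post), 1))
```

This is part of why transparency is hard: no single weight explains an outcome, and changing one variable among thousands can reshuffle the whole ordering.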

Another option might be to regulate: subject sufficiently large technology companies and their social algorithms to regular independent auditing, similar to the regulations for algorithmic financial trading.

Alternatively, users could use the platform in unintended ways or learn to subvert and scam the system to their own advantage.

Users could also lessen their usage of Facebook and seek a less-filtered stream of news and information from a variety of other sources to suit their needs.

For better or worse, algorithmic filtering will likely become a staple of our data-fuelled, internet-mediated lives, but in time we may also see services that give users more direct control over the algorithms that govern what they get to see.

This article was originally published on The Conversation.

Any name will do from now on says Google – why the change?

Google has announced a surprising end to its controversial “Real Name” policy with a contrite post on Google+, telling users that there are “no more restrictions” on the names people can use.

This is a dramatic change in policy for the company which suspended users en masse in 2011 for using pseudonyms – an event that users have since described as The Nymwars.

The policy had since been criticised for capricious enforcement: celebrities such as American musician Soulja Boy (real name DeAndre Cortez Way) were allowed to use pseudonyms on the network, while ordinary users who wanted to do the same were not.

Some users who used their real name on the social network even ran afoul of Google because their names did not fit the assumptions that Google employees made about what counts as a real name.

Technology writer Stilgherrian and reporter Violet Blue have both documented their problems with Google’s name policing wrongly affecting them, even though they used their real names.

The policy became even more vexed in recent months, as Google integrated Google+ with Android, Gmail and YouTube, where users expected support for pseudonyms.

Although some users hoped that Google+’s real names would fix YouTube’s nasty comment ecosystem, the change proved controversial among many YouTube users.

Why does this change matter?

The change to Google’s policy is important because it signals a change in attitude towards the rights of users online.

Vint Cerf, a senior executive at Google, had argued that “anonymity and pseudonymity are perfectly reasonable under some situations”, especially where using a real name could endanger a user.

The new policy should bring Google into line with the Australian Privacy Principles for Anonymity and Pseudonymity announced by the Office of the Australian Information Commissioner (OAIC) this year.

While we might normally consider names and pseudonyms purely as markers of our identity, the OAIC argues that anonymity and pseudonymity are important privacy concepts that allow people to have greater control over their personal information.

Why are pseudonyms so contentious?

Letting people adopt a pseudonym or participate anonymously gives users the freedom to participate without fear of retribution. Academics call this effect disinhibition.

The freedom from restraint that anonymity brings isn’t a particularly new concern. In the 1970s, Johnny Carson told The New Yorker that he couldn’t bear citizens band (CB) radio:

[…] all those sick anonymous maniacs shooting off their mouths.

Similarly, writers have told stories about morality and anonymity since Plato’s Republic, in which the Ring of Gyges grants its wearer the power to become invisible – much like the One Ring in Tolkien’s The Lord of the Rings.

This freedom can be valuable for people at risk of harm, as it can allow them to seek support or to participate in online communities without fear of being stalked or persecuted.

Similarly, lesbian, gay and transgender users at risk of discrimination can participate online without being publicly outed. It can also allow people the freedom to express themselves without endangering their relationships with friends and colleagues.

Employees even risk retribution when their employers perceive that their online behaviour reflects on their workplace. US Supreme Court Justice John Paul Stevens argued that anonymity is protected as part of the right to free speech, as it can “protect unpopular individuals from retaliation — and their ideas from suppression”.

The problem with anonymity

The catch is that this freedom also empowers people who wish to hurt and harass others. “Trolls” can operate anonymously because anonymity frees them from responsibility for their actions.

This becomes particularly problematic when anonymous or pseudonymous users threaten people with harm. A number of women have written about the bullying and violent threats they regularly experience at the hands of anonymous trolls.

In some moderated online environments, users are protected from these kinds of speech by the thankless work of comment moderators who help to manage online communities.

Ultimately, Google+’s new policy will empower people by letting them participate on the network with greater control over the identity they use. This will help trolls and new participants alike. It falls to Google and its team of moderators to make sure that the network remains a safe place for users.

Google’s policy change shows that the company has become responsive to user concerns. We should remember that, for many websites, creating an environment where users are both free to participate and free from harm is a difficult balance.

This article was originally published on The Conversation.
