The Conversation – QUT Social Media Research Group
https://socialmedia.qut.edu.au

One Day in the Life of a National Twittersphere
https://socialmedia.qut.edu.au/2019/07/26/one-day-in-the-life-of-a-national-twittersphere/
Fri, 26 Jul 2019 01:28:33 +0000

Taking a break from all the politics, Brenda Moon and I have examined everything that goes on in the Australian Twittersphere on a given day. We found that older, more sociable uses of Twitter persist in spite of everything. Our article is out now in The Conversation and Nordicom Review. The research was made possible by the TrISMA LIEF project, funded by the Australian Research Council and led by the QUT Digital Media Research Centre.

The Nordicom Review article was published under an open access licence – here’s the full abstract:

Previous research into social media platforms has often focused on the exceptional: key moments in politics, sports or crisis communication. For Twitter, it has usually centred on hashtags or keywords. Routine and everyday social media practices remain underexamined as a result; the literature has overrepresented the loudest voices: those users who contribute actively to popular hashtags. This article addresses this imbalance by exploring in depth the day-to-day patterns of activity within the Australian Twittersphere for a 24-hour period in March 2017. We focus especially on the previously less visible everyday social media practices that this shift in perspective reveals. This provides critical new insights into where, and how, to look for evidence of onlife traces in a systematic way.

The dark art of Facebook fiddling with your news feed
https://socialmedia.qut.edu.au/2014/09/04/the-dark-art-of-facebook-fiddling-with-your-news-feed/
Wed, 03 Sep 2014 21:47:41 +0000

Facebook’s news feed is probably the most-used feature of the social network. It organises posts, photos, links and advertisements from your friends and the pages you follow into a single stream of news. But lately we’ve seen the news feed making headlines of its own.

In August, users and journalists began to question Facebook’s news feed after noticing a scarcity of links and posts about the death of Michael Brown and the subsequent protests in Ferguson, Missouri.

Facebook also announced changes to the news feed to decrease the visibility of clickbait-style headlines: headlines that attempt to lure visitors to a webpage with intriguing but uninformative previews. Facebook even mocked up a typical example.

Facebook’s example of typical clickbait. (Image: Facebook)

Facebook says it will be tracking the amount of time that users spend on a website after clicking such a link, and penalising the publishers of links that don’t keep reader attention.

In June, Facebook faced criticism after the publication of research findings from an “emotional contagion” experiment that manipulated the news feeds of almost 700,000 users. The study raised ethical concerns among both Facebook users and observers.

Given how little we understand of Facebook’s internal affairs and the machinations of the news feed’s filter algorithms, the growing public concern around Facebook’s operations is understandable.

Why do the algorithms matter?

As users, our readiness to trust Facebook as a hub for social, professional and familial interactions, as well as a source for following and discussing news, has afforded the company a privileged position as an intermediary in our social and political lives.

Twitter CEO Dick Costolo’s announcement that Twitter would censor user-uploaded images of American journalist James Foley’s execution is a timely reminder of the many roles of social networking platforms.

These platforms and their operators do not simply present data and human interaction in a neutral way — they also make editorial judgements about the kinds of data and interaction they want to facilitate.

This should lead us to question the ways in which Facebook’s role as an intermediary for our information and social connections allows its operators to potentially influence their users.

Why does Facebook need algorithms to sort the news?

One of the most common responses to criticism of the news feed is the suggestion that Facebook do away with sorting entirely and simply show everything chronologically – just like Twitter.

Showing everything can make the news feed seem a bit more like a news firehose. Facebook engineers estimate that the average user’s news feed would show around 1,500 new posts each day.

The “firehose model” is not without its own issues. By showing all posts as they happen, Twitter’s approach can tend to favour the users who post most often, and that can let the noisiest users drown out other worthy voices.

This concern may have influenced Twitter’s recent changes to show tweets favourited by other followers in a user’s timeline, and its apparent readiness to experiment with algorithmic changes to its users’ timelines.

Algorithmic filtering may well be helpful given the amount of information we deal with on a day-to-day basis, but the unexplained “black box” nature of most algorithmic systems can be a headache too.

Changes to Facebook’s algorithms can dramatically affect the traffic some websites receive, much to the chagrin of their publishers. Publishers who have registered with Facebook receive some basic metrics on how many users have seen their posts; individual users receive even less feedback on how widely (if at all) their posts have been seen.

These algorithms are ostensibly created by the developers of Facebook and Twitter in the service of a better experience for their users (both individual and corporate).

But social platforms have a vested interest in keeping users engaged with their service. We must recognise that these interests can shape the development of the platform and its functions.

A social network’s filtering may be biased against showing content that engineers have deemed controversial or potentially upsetting, to help users enjoy the network. These filters could stop you from seeing a post that would have upset you, but they might also limit the visibility of a cry for help from someone in need.

Are there antidotes to algorithms?

If users are concerned by the choices that a social media platform seems to be making, they can demand a greater degree of transparency. That being said, these systems can be complex. According to Facebook, more than 100,000 different variables are factored into the news feed algorithms.

Another option might be to regulate: subject sufficiently large technology companies and their social algorithms to regular independent auditing, similar to the regulations for algorithmic financial trading.

Alternatively, users could use the platform in unintended ways or learn to subvert and scam the system to their own advantage.

Users could also lessen their usage of Facebook and seek a less-filtered stream of news and information from a variety of other sources to suit their needs.

For better or worse, algorithmic filtering will likely become a staple of our data-fuelled, internet-mediated lives, but in time we may also see services that give users more direct control over the algorithms that govern what they get to see.

This article was originally published on The Conversation.

Paid editors on Wikipedia – should you be worried?
https://socialmedia.qut.edu.au/2014/08/22/paid-editors-on-wikipedia-should-you-be-worried/
Fri, 22 Aug 2014 03:44:15 +0000

Whether you trust it or ignore it, Wikipedia is one of the most popular websites in the world and accessed by millions of people every day. So would you trust it any more (or even less) if you knew people were being paid to contribute content to the encyclopedia?

The Wikimedia Foundation, the charitable organisation that supports Wikipedia, has changed its Terms of Use. Paid contributors can now make changes to Wikipedia articles so long as they clearly disclose their affiliations and potential conflicts of interest.

The website previously had no official policy on paid editing, despite a history of community opposition to editors who contribute for pay.

The change in policy comes amid concerns from the Foundation about the potential damage to Wikipedia’s reputation as a free and objective source of knowledge when editors act on behalf of a paying client or employer.

The concerns arose after the user community broke the story of its year-long investigation into large-scale editing by the consulting business Wiki-PR.

Working out of Austin, Texas, Wiki-PR employees used 250 fake accounts to create and contribute to pages about its clients. This resulted in several hundred promotional articles on Wikipedia, which the volunteer community subsequently had to remove for not meeting the encyclopedia’s quality standards.

What is paid editing?

Paid editing refers broadly to anyone who receives or expects to receive compensation for their contributions to the encyclopedia.

These editors are not paid by Wikipedia or the Wikimedia Foundation. They are understood to be contributing on behalf of a third party such as an employer or client.

At its heart, paid editing seems at odds with the open user-led model of volunteer collaboration that Wikipedia employs and is famous for. Therefore, the acknowledgement by the Wikimedia Foundation of such activity in the encyclopedia is a big deal.

Critics in the community say contributions from paid editors will never be compatible with the site’s core editing policy of neutrality, or that requiring disclosure is an invasion of privacy and of the freedom to edit anonymously. Supporters argue that acknowledging the presence of paid editors is important for fulfilling the site’s mission of being the encyclopedia that anyone can edit.

A short history of paid editing in Wikipedia

Paid editing has a tumultuous history in Wikipedia. In the last few years, there have been some high-profile instances of professionals whitewashing Wikipedia entries – a practice known as “wikiwashing” – in which a particular entry is edited to further a client’s interests, in violation of the site’s neutrality policy (among others).

Last year BP employee Arturo Silva was accused of providing nearly half the text for the British Petroleum article, including sections discussing the corporation’s environmental record.

The Gibraltarpedia controversy in 2012 resulted in a high-profile editor stepping down from trustee duties with Wikimedia UK after it was revealed his consultancy received fees from the Gibraltar Tourist Board.

In late 2011, UK newspaper The Independent filmed senior members of PR firm Bell Pottinger boasting of using “dark arts” to “sort” Wikipedia on behalf of governments with less-than-perfect human rights records.

It is also notable that in all but the Bell Pottinger case, it was the Wikipedia community itself that uncovered the activity.

What does the change mean for Wikipedia?

The change in the Terms of Use to acknowledge paid editing highlights Wikipedia’s importance in the management of corporate reputations.

But it also highlights the importance of managing Wikipedia’s own brand as a neutral and non-profit site of encyclopedic information.

The presence of paid editors on the site raises questions about the ability of the platform to meet this goal of neutrality. Can an article written about a company by an employee of that company ever be truly objective?

The fear is that opening up the platform to any form of commercial involvement changes its nature and threatens its sustainability as a site of free and neutral knowledge.

Is any editor a good editor?

On the other hand, can the site ever claim to really represent the sum of all knowledge without input from professionals? Paid editors have the time and inclination to spend on articles that otherwise may go unimproved, or may not exist at all.

Another argument for including paid editors in the community relates to the sustainability of the platform itself. The number of active volunteer editors has been declining since a peak in 2007, although the number of new articles created each day continues to grow.

It is still possible for Wikipedia to remain the “encyclopedia that anyone can edit” – so long as paid editors play by Wikipedia’s rules.

What does the change mean for users?

For readers, the change will remain largely unseen. It serves as an extra level of control for volunteer editors, and is flexible enough that site policies can be amended to reflect local legal requirements about fraud and conflicts of interest.

It means readers should continue to approach Wikipedia for what it is – a user-led encyclopedia. If the veracity of the information you seek is important, then you may need to click past the article and head to the talk page or the edit history to get an idea of how the article was constructed. You can then judge for yourself how you view any contribution from paid editors.

For contributors, the changed terms are meant to allow easier identification of edits that may present a conflict of interest and require extra scrutiny from uninvolved parties. It is hoped this will ultimately improve the quality of the encyclopedia.

Whether amending the Terms of Use invites a new wave of commercialism is yet to be seen. Either way the amendment signals that the platform is still open – to change at the very least.


This article was originally published on The Conversation.

For an in-depth look at one community response to paid editing, see The Free Encyclopaedia that Anyone can Edit: The Shifting Values of Wikipedia Editors.

Any name will do from now on says Google – why the change?
https://socialmedia.qut.edu.au/2014/07/24/any-name-will-do-from-now-on-says-google-why-the-change/
Thu, 24 Jul 2014 03:35:08 +0000

Google has announced a surprising end to its controversial “Real Name” policy with a contrite post on Google+, telling users that there are “no more restrictions” on the names people can use.

This is a dramatic change in policy for the company which suspended users en masse in 2011 for using pseudonyms – an event that users have since described as The Nymwars.

The policy had since been criticised for being capriciously enforced: allowing celebrities such as American musician Soulja Boy (real name DeAndre Cortez Way) to use a pseudonym on the network, while ignoring users who wanted to do the same.

Some users who used their real name on the social network even ran afoul of Google because their names did not fit the assumptions that Google employees made about what counts as a real name.

Technology writer Stilgherrian and reporter Violet Blue have both documented their problems with Google’s name policing wrongly affecting them, even though they used their real names.

The policy became even more vexed in recent months, as Google integrated Google+ with Android, Gmail and YouTube, where users expected support for pseudonyms.

Although some users hoped that Google+’s real names would fix YouTube’s nasty comment ecosystem, it became a controversial change for many YouTube users.

Why does this change matter?

The change to Google’s policy is important because it signals a shift in attitude towards the rights of users online.

Vint Cerf, a senior executive at Google, had argued that “anonymity and pseudonymity are perfectly reasonable under some situations”, especially where using a real name could endanger a user.

The new policy should bring Google into line with the Australian Privacy Principles for Anonymity and Pseudonymity announced by the Office of the Australian Information Commissioner (OAIC) this year.

While we might normally consider names and pseudonyms purely as markers of our identity, the OAIC argues that anonymity and pseudonymity are important privacy concepts that allow people to have greater control over their personal information.

Why are pseudonyms so contentious?

Letting people adopt a pseudonym or participate anonymously gives users the freedom to participate without fear of retribution – an effect academics call disinhibition.

The freedom from restraint that anonymity brings isn’t a particularly new concern. In the 1970s Johnny Carson told The New Yorker that he couldn’t bear citizen’s band (CB) radio:

[…] all those sick anonymous maniacs shooting off their mouths.

Similarly, writers have told stories of morality and anonymity since Plato’s Republic and the Ring of Gyges, which grants its wearer the power to become invisible, much like the ring in Tolkien’s The Lord of the Rings.

This freedom can be valuable for people at risk of harm, as it can allow them to seek support or to participate in online communities without fear of being stalked or persecuted.

Similarly, lesbian, gay and transgender users at risk of discrimination can participate online without being publicly outed. It can also allow people the freedom to express themselves without endangering their relationships with friends and colleagues.

Employees even risk retribution when their employers perceive that their online behaviour reflects on their workplace. US Supreme Court Justice John Paul Stevens argued that anonymity is protected as part of the right to free speech, as it can “protect unpopular individuals from retaliation — and their ideas from suppression”.

The problem with anonymity

The catch is that this freedom also empowers people who wish to hurt and harass others. “Trolls” can operate anonymously because anonymity frees them from responsibility for their actions.

This becomes particularly problematic when anonymous or pseudonymous users threaten people with harm. A number of women have written about the bullying and violent threats they regularly experience at the hands of anonymous trolls.

In some moderated online environments, users are protected from this kind of speech by the thankless work of comment moderators who help to manage online communities.

Ultimately, Google+’s new policy will empower people by letting them participate on the network with greater control over the identity they use. This will help trolls and new participants alike. It falls to Google and its team of moderators to make sure that the network remains a safe place for users.

Google’s policy change shows that the company has become responsive to user concerns. We should remember that for many websites, creating an environment where users are both free to participate and free from harm is a difficult affair.

This article was originally published on The Conversation.
