Media – QUT Social Media Research Group
https://socialmedia.qut.edu.au

More ‘Fake News’ Research, and a PhD Opportunity!
Mon, 03 Aug 2020
https://socialmedia.qut.edu.au/2020/08/03/more-fake-news-research-and-a-phd-opportunity/

For those of you who have access to Australian television, this is an advance warning that the research on coronavirus-related mis- and disinformation that my colleagues and I at the QUT Digital Media Research Centre have conducted during the first half of this year will be featured prominently in tonight’s episode of the ABC’s investigative journalism programme Four Corners, which focusses on 5G conspiracy theories. A preview is below, and I hope that the full programme may also become available without geoblocking on ABC iView or the Four Corners Facebook page. The accompanying ABC News article has further information, too.

Related to this work, and the ARC Discovery research project that supports it, we are now also calling for expressions of interest in a three-year PhD scholarship on mis- and disinformation in social media, which will commence in early 2021. Please get in touch with me if you’re interested in the scholarship:

PhD Scholarship: ARC Discovery project on Mis- and Disinformation in Social Media (PhD commencing 2021)

The QUT Digital Media Research Centre is offering a three-year PhD scholarship associated with a major ARC Discovery research project on mis- and disinformation in social media. Working with DMRC research leaders Axel Bruns, Stephen Harrington, and Dan Angus, and collaborating with Scott Wright (Monash University, Melbourne), Jenny Stromer-Galley (Syracuse University, USA), and Karin Wahl-Jorgensen (Cardiff University, UK), the PhD researcher will use qualitative and quantitative analytics methods to investigate the dissemination patterns and processes for mis- and disinformation.

Ideally, the PhD researcher should be equally familiar with qualitative close-reading and quantitative computational research methods. They will draw on state-of-the-art social media analytics approaches to examine the role of specific individual, institutional, and automated actors in promoting or preventing the distribution of suspected ‘fake news’ content across Australian social media networks. Building on this work, they will develop a number of case studies of the trajectories of specific stories across the media ecosystem, drawing crucially on issue mapping methods to produce a forensic analysis of how particular stories are disseminated by a combination of fringe outlets, social media platforms and their users, and potentially also by mainstream media publications.

Interested candidates should first contact Prof. Axel Bruns (a.bruns@qut.edu.au). You will then be asked to complete the DMRC EOI form (https://research.qut.edu.au/dmrc/dmrc-eois-2020-annual-scholarship-round/), by 31 August. We will assess your eligibility for PhD study, and work with you to develop a formal PhD application to QUT’s scholarship applications system, by 30 October. The PhD itself will commence in early 2021. International applicants are welcome.

The DMRC is a global leader in digital humanities and social science research with a focus on communication, media, and the law. It is one of Australia’s top organisations for media and communication research, areas in which QUT has achieved the highest possible rankings in ERA, the national research quality assessment exercise. Our research programs investigate the digital transformation of media industries, the challenges of digital inclusion and governance, the growing role of AI and automation in the information environment, and the role of social media in public communication. The DMRC has access to cutting-edge research infrastructure and capabilities in computational methods for the study of communication and society. We actively engage with industry and academic partners in Australia, Europe, Asia, the US, and South America; and we are especially proud of the dynamic and supportive research training environment we provide to our many local and international graduate students.

‘Like a Virus’ – Disinformation in the Age of COVID-19
Mon, 18 May 2020
https://socialmedia.qut.edu.au/2020/05/19/like-a-virus-disinformation-in-the-age-of-covid-19/

QUT DMRC social media researchers Dr Tim Graham and Prof. Axel Bruns participated in Essential Media’s Australia at Home online seminar series on 23 April, presenting early results from collaborative research in partnership with the Australia Institute’s Centre for Responsible Technology to a Zoom audience of more than 200 participants.

Also involving Assoc. Prof. Dan Angus and Dr Tobias Keller, the team is currently investigating the origins and spread of major conspiracy theories associated with the COVID-19 crisis across various social media platforms. Such conspiracy theories include false stories about coronavirus as a bioweapon created either in a Wuhan lab or by researchers associated with the Gates Foundation, and about connections between coronavirus and the roll-out of 5G mobile telephony technology.

Early results from this research point to a small but sustained coordinated effort by a network of Twitter accounts that pushed the bioweapon conspiracy story; such accounts were often associated with fringe political perspectives, especially in the United States. Further, the research indicates that these conspiracy theories typically spread beyond the fringes of public discussion only once they are picked up and amplified by tabloid media exploiting them as clickbait, or by celebrities from the fields of music, movies, and sports who share them with their substantial social media audiences.
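Coordination of the kind described above is often detected by looking for accounts that amplify the same content at nearly the same time. The sketch below is a purely hypothetical illustration of that general idea, not the team’s actual method; the account names, tweet IDs, and thresholds are invented.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical retweet log: (account, tweet_id, timestamp in seconds)
retweets = [
    ("acct_a", "t1", 0), ("acct_b", "t1", 20),
    ("acct_a", "t2", 100), ("acct_b", "t2", 115),
    ("acct_c", "t1", 4000),  # too slow to look coordinated
]

def coordinated_pairs(retweets, window=60, min_cotweets=2):
    """Return account pairs that retweeted the same tweet within
    `window` seconds of each other, at least `min_cotweets` times."""
    by_tweet = defaultdict(list)
    for account, tweet_id, ts in retweets:
        by_tweet[tweet_id].append((account, ts))
    counts = defaultdict(int)
    for actions in by_tweet.values():
        for (a1, t1), (a2, t2) in combinations(actions, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                counts[tuple(sorted((a1, a2)))] += 1
    return {pair for pair, n in counts.items() if n >= min_cotweets}

print(coordinated_pairs(retweets))  # {('acct_a', 'acct_b')}
```

Real analyses of this kind work on millions of retweets and tune the time window and threshold empirically, but the underlying co-activity signal is the same.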

The research, which will be presented in extended form in a report for the Centre for Responsible Technology and subsequent scholarly publications, points to important inflection points in the trajectory of conspiracy theories from the fringes to the mainstream, and highlights a need both for further platform intervention against coordinated inauthentic behaviour and for the development of greater digital literacies, not least amongst influential social media users.

Some Questions about Filter Bubbles, Polarisation, and the APIcalypse
Mon, 26 Aug 2019
https://socialmedia.qut.edu.au/2019/08/26/some-questions-about-filter-bubbles-polarisation-and-the-apicalypse/

Rafael Grohmann from the Brazilian blog DigiLabour has asked me to answer some questions about my recent work – especially my new book Are Filter Bubbles Real?, which is out now from Polity – and the Portuguese version of that interview has just been published. I thought I’d post the English-language answers here, too:

1. Why are the ‘filter bubble’ and ‘echo chamber’ metaphors so dumb?

The first problem is that they are only metaphors: the people who introduced them never bothered to properly define them. This means that these concepts might sound sensible, but that they mean everything and nothing. For example, what does it mean to be inside a filter bubble or echo chamber? Do you need to be completely cut off from the world around you, which seems to be what those metaphors suggest? Only in such extreme cases – which are perhaps similar to being in a cult that has completely disconnected from the rest of society – can the severe negative effects that the supporters of the echo chamber or filter bubble theories imagine actually become reality, because they assume that people in echo chambers or filter bubbles no longer see any content that disagrees with their political worldviews.

Now, such complete disconnection is not entirely impossible, but very difficult to achieve and maintain. And most of the empirical evidence we have points in the opposite direction. In particular, the immense success of extremist political propaganda (including ‘fake news’, another very problematic and poorly defined term) in the US, the UK, parts of Europe, and even in Brazil itself in recent years provides a very strong argument against echo chambers and filter bubbles: if we were all locked away in our own bubbles, disconnected from each other, then such content could not have travelled as far, and could not have affected as many people, as quickly as it appears to have done. Illiberal governments wouldn’t invest significant resources in outfits like the Russian ‘Internet Research Agency’ troll farm if their influence operations were confined to existing ideological bubbles; propaganda depends crucially on the absence of echo chambers and filter bubbles if it seeks to influence more people than those who are already part of a narrow group of hyperpartisans.

Alternatively, if we define echo chambers and filter bubbles much more loosely, in a way that doesn’t require the people inside those bubbles to be disconnected from the world of information around them, then the terms become almost useless. With such a weak definition, any community of interest would qualify as an echo chamber or filter bubble: any political party, religious group, football club, or other civic association suddenly is an echo chamber or filter bubble because it enables people with similar interests and perspectives to connect and communicate with each other. But in that case, what’s new? Such groups have always existed in society, and society evolves through the interaction and contest between them – there’s no need to create new and poorly defined metaphors like ‘echo chambers’ and ‘filter bubbles’ to describe this.

Some proponents of these metaphors claim that our new digital and social media have made things worse, though: that they have made it easier for people to create the first, strong type of echo chamber or filter bubble, by disconnecting from the rest of the world. But although this might sound sensible, there is practically no empirical evidence for this: for example, we now know that people who receive news from social media encounter a greater variety of news sources than those who don’t, and that those people who have the strongest and most partisan political views are also among the most active consumers of mainstream media. Even suggestions that platform algorithms are actively pushing people into echo chambers or filter bubbles have been disproven: Google search results, for instance, show very little evidence of personalisation at an individual level.

Part of the reason for this is that – unlike the people who support the echo chamber and filter bubble metaphors – most ordinary people actually don’t care much at all about politics. If there is any personalisation through the algorithms of Google, Facebook, Twitter, or other platforms, it will be based on many personal attributes other than our political interests. As multi-purpose platforms, these digital spaces are predominantly engines of context collapse, where our personal, professional, and political lives intersect and crash into each other and where we encounter a broad and unpredictable mixture of content from a variety of viewpoints. Overall, these platforms enable all of us to find more diverse perspectives, not fewer.

And this is where these metaphors don’t just become dumb, but downright dangerous: they create the impression, first, that there is a problem, and second, that the problem is caused to a significant extent by the technologies we use. This is an explicitly technologically determinist perspective, ignoring the human element and assuming that we are unable to shape these technologies to our needs. And such views then necessarily also invite technological solutions: if we assume that digital and social media have caused the current problems in society, then we must change the technologies (through technological, regulatory, and legal adjustments) to fix those problems. It’s as if a simple change to the Facebook algorithm would make fascism disappear.

In my view, by contrast, our current problems are social and societal, economic and political, and technology plays only a minor role in them. That’s not to say that the platforms are free of blame – Facebook, Twitter, WhatsApp, and others could certainly do much more to combat hate speech and abuse on their platforms, for example. But if social media and even the Internet itself suddenly disappeared tomorrow, we would still have those same problems in society, and we would be no closer to solving them. The current overly technological focus of our public debates – our tendency to blame social media for all our problems – obscures this fact, and prevents us from addressing the real issues.

2. Polarisation is a political fact, not a technological one. How do you understand political and societal polarisation today?

To me, this is the real question, and one which has not yet been researched enough. The fundamental problem is not echo chambers and filter bubbles: it is perfectly evident that the various polarised groups in society are very well aware of each other, and of each other’s ideological positions – which would be impossible if they were each locked away in their own bubbles. In fact, they monitor each other very closely: research in the US has shown that far-right fringe groups are also highly active followers of ‘liberal’ news sites like the New York Times, for example. But they no longer follow the other side in order to engage in any meaningful political dialogue, aimed at finding a consensus that both sides can live with: rather, they monitor their opponents in order to find new ways to twist their words, create believable ‘fake news’ propaganda, and attack them with such falsehoods. And yes, they use digital and social media to do so, but again this is not an inherently technological problem: if they didn’t have social media, they’d use the broadcast or print media instead, just as the fascists did in the 1920s and 1930s and as their modern-day counterparts still do today.

So, for me the key question is how we have come to this point: put simply, why do hyperpartisans do what they do? How do they become so polarised – so sure of their own worldview that they will dismiss any opposing views immediately, and will see any attempts to argue with them or to correct their views merely as confirmation that ‘the establishment’ is out to get them? What are the (social and societal, rather than simply technological) processes by which people get drawn to these extreme political fringes, and how might they be pulled back from there? This question also has strong psychological elements, of course: how do hyperpartisans form their worldview? How do they incorporate new evidence into it? How do they interpret, and in doing so defuse, any evidence that goes against their own perspectives? We see this across so many fields today: from political argument itself to the communities of people who believe vaccinations are some kind of global mind control experiment, or to those who still deny the overwhelming scientific evidence for anthropogenic climate change. How do these people maintain their views even when – and this again is evidence that echo chambers and filter bubbles are mere myths – they are bombarded on a daily basis with evidence that vaccinations save lives and that the global climate is changing with catastrophic consequences?

And since you include the word ‘today’ in your question, the other critical area of investigation in all this is whether any of this is new, and whether it is different today from the way it was ten, twenty, fifty, or one hundred years ago. On the one hand, it seems self-evident that we do see much more evidence of polarisation today than we have in recent decades: Brexit, Trump, Bolsonaro, and many others have clearly sensitised us to these deep divisions in many societies around the world. But most capitalist societies have always had deep divisions between the rich and the poor; the UK has always had staunch pro- and anti-Europeans; the US has always been racist. I think we need more research, and better ways of assessing whether any of this has actually gotten worse in recent years, or whether it has simply become more visible.

For example, Trump and others have arguably made it socially acceptable in the US to be politically incorrect: to be deliberately misogynist; to be openly racist; to challenge the very constitutional foundations that the US political system was built on. But perhaps the people who now publicly support all this had always already been there, and had simply lacked the courage to voice their views in public – perhaps what has happened here is that Trump and others have smashed the spiral of silence that subdued such voices by credibly promising social and societal sanctions, and have instead created a spiral of reinforcement that actively rewards the expression of extremist views and leads hyperpartisans to try and outdo each other with more and more extreme statements. Perhaps the spiral of silence now works the other way, and the people who oppose such extremism now remain silent because they fear communicative and even physical violence.

Importantly, these are also key questions for media and communication research, but this research cannot take the simplistic perspective that ‘digital and social media are to blame’ for all of this. Rather, the question is to what extent the conditions and practices in our overall, hybrid media system – encompassing print and broadcast as well as digital and social media – have enabled such changes. Yes, digital and social platforms have enabled voices on the political fringes to publish their views, without editorial oversight or censorship from anyone else. But such voices find their audience often only once they have been amplified by more established outlets: for instance, once they have been covered – even if only negatively – by mainstream media journalists, or shared via social media by more influential accounts (including even the US president himself). It is true that in the current media landscape, the flows of information are different from what they were in the past – not simply because of the technological features of the media, but because of the way that all of us (from politicians and journalists through to ordinary users) have chosen to incorporate these features into our daily lives. The question then is whether and how this affects the dynamics of polarisation, and what levers are available to us if we want to change those dynamics.

3. How can we continue critical research in social media after the APIcalypse?

With great tenacity and ingenuity even in the face of significant adversity – because we have a societal obligation to do so. I’ve said throughout my answers here that we cannot simplistically blame social media for the problems our societies are now facing: the social media technologies have not caused any of this. But the ways in which we, all of us, use social media – alongside other, older media forms – clearly play a role in how information travels and how polarisation takes place, and so it remains critically important to investigate the social media practices of ordinary citizens, of hyperpartisan activists, of fringe and mainstream politicians, of emerging and established journalists, of social bots and disinformation campaigns. And of course even beyond politics and polarisation, there are also many other important reasons to study social media.

The problem now is that over the past few years, many of the leading social media platforms have made it considerably more difficult for researchers even to access public and aggregate data about social media activities – a move I have described, in deliberately hyperbolic language, as the ‘APIcalypse’. Ostensibly, such changes were introduced to protect user data from unauthorised exploitation, but a convenient consequence of these access restrictions has been that independent, critical, public-interest research into social media practices has become a great deal more difficult even while the commercial partnerships between platforms and major corporations have remained largely unaffected. This limits our ability to provide an impartial assessment of social media practices and to hold the providers themselves to account for the effects of any changes they might make to their platforms, and increasingly forces scholars who seek to work with platform data into direct partnership arrangements that operate under conditions favouring the platform providers.

This requires several parallel responses from the scholarly community. Of course we must explore the new partnership models offered by the platforms, but we should treat these with a considerable degree of scepticism and cannot solely rely on such limited data philanthropy; in particular, the platforms are especially unlikely to provide data access in contexts where scholarly research might be highly critical of their actions. We must therefore also investigate other avenues for data gathering: this includes data donations from users of these platforms (modelled for instance on ProPublica’s browser plugin that captures the political ads encountered by Facebook users) or data scraping from the Websites of the platforms as an alternative to API-based data access, for example.
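Scraping-based data gathering of the kind mentioned above can be illustrated in miniature. The sketch below is purely hypothetical in its page structure – real platform markup is far more complex, and scraping it may be restricted by a platform’s Terms of Service (a tension discussed below) – but it shows the basic principle of extracting structured records from public HTML using only Python’s standard library.

```python
from html.parser import HTMLParser

# Invented snippet standing in for a public profile page.
SAMPLE_HTML = """
<div class="post"><span class="author">@alice</span>
  <p class="text">First public post</p></div>
<div class="post"><span class="author">@bob</span>
  <p class="text">Second public post</p></div>
"""

class PostScraper(HTMLParser):
    """Collect (author, text) pairs from class-tagged elements."""
    def __init__(self):
        super().__init__()
        self.posts = []
        self._field = None  # which field the next text node belongs to

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls == "author":
            self._field = "author"
        elif cls == "text":
            self._field = "text"

    def handle_data(self, data):
        if self._field == "author":
            self.posts.append({"author": data.strip()})
            self._field = None
        elif self._field == "text":
            self.posts[-1]["text"] = data.strip()
            self._field = None

scraper = PostScraper()
scraper.feed(SAMPLE_HTML)
print(scraper.posts)
```

In practice such a scraper would fetch live pages, handle pagination and rate limits, and store results responsibly – which is exactly where the ethical and legal questions raised below come into play.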

Platforms may seek to shut down such alternative modes of data gathering (as Facebook sought to do with the ProPublica browser plugin), or change their Terms of Service to explicitly forbid such practices – and this should lead scholars to consider whether the benefits of their research outweigh the platform’s interests. Terms of Service are often written to the maximum benefit of the platform, and may not be legally sound under applicable national legislation; the same legislation may also provide ‘fair use’ or ‘academic freedom’ exceptions that justify the deliberate breach of Terms of Service restrictions in specific contexts. As scholars, we must remember that we have a responsibility to the users of the platform, and to society as such, as well as to the platform providers. We must balance these responsibilities, by taking care that the user data we gather remain appropriately protected as we pursue questions of societal importance, and we should minimise the impact of our research on the legitimate commercial interests of the platform unless there is a pressing need to reveal malpractice in order to safeguard society. To do so can be a very difficult balancing act, of course.

Finally, we must also maintain our pressure on the platforms to provide scholarly researchers with better interfaces for data access, well beyond limited data philanthropy schemes that exclude key areas of investigation. Indeed, we must enlist others – funding bodies, policymakers, civil society institutions, and the general public itself – in bringing that pressure to bear: it is only in the face of such collective action, coordinated around the world, that these large and powerful corporations are likely to adjust their data access policies for scholarly research. And it will be important to confirm that they act on any promises of change they might make: too often have the end results they delivered not lived up to the grand rhetoric with which they were announced.

In spite of all of this, however, I want to end on a note of optimism: there still remains a crucial role for research that investigates social media practices, in themselves and especially also in the context of the wider, hybrid media system of older and newer media, and we must not and will not give up on this work. In the face of widespread hyperpartisanship and polarisation, this research is now more important than ever – and the adversities we are now confronted with are also a significant source of innovation in research methods and frameworks.

Presenting Gatewatching and News Curation at Media@Sydney
Mon, 24 Sep 2018
https://socialmedia.qut.edu.au/2018/09/24/presenting-gatewatching-and-news-curation-at-mediasydney/

A month ago I was able to present the themes of my latest book Gatewatching and News Curation at the University of Sydney, as part of its Media@Sydney series of talks – my sincere thanks to Francesco Bailo, Gerard Goggin, and everyone else who made this possible. The M@S team also posted video and audio recordings of the talk, which I’m sharing below; in case the presentation is difficult to make out in the video, I’ve also included the slides themselves.

Speaking on the day of Australia’s latest partyroom spill for the Prime Ministership, this was a timely opportunity to reflect on the intersections between journalism, social media, and the public sphere, and I thoroughly enjoyed the discussions after the presentation – many thanks to everyone who came along.

More information about the new book is here: Gatewatching and News Curation: Journalism, Social Media, and the Public Sphere.

Twitter wakes up to harassment but the law is still sleeping
Thu, 02 Apr 2015
https://socialmedia.qut.edu.au/2015/04/02/twitter-wakes-up-to-harassment-but-the-law-is-still-sleeping/

Twitter’s new system for reporting harassment and threats to law enforcement comes after the platform has received serious criticism for its poor handling of harassment.

The company’s chief executive, Dick Costolo, acknowledged the company’s failings in a leaked memo:

We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day.

We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.

Complex problems in search of solutions

It’s encouraging to see Twitter’s executive recognising that it has a problem. It’s even more encouraging to see tangible efforts made to fix this problem. A host of changes have been made recently, including:

  • amending the network’s rules to explicitly ban revenge porn,
  • a system that requires users who regularly create new accounts to supply and verify their mobile phone number,
  • a new opt-in filter that prevents tweets that contain “threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts” from appearing in a user’s notifications.

None of these are perfect solutions. How will platforms adjudicate consent for revenge porn? Will attempts to verify user identity put anonymous users at risk?

The writer Jonathan Rauch once wrote that “the vocabulary of hate is potentially as rich as your dictionary” – and with this in mind, Twitter’s proposal to filter offensive language seems a little Sisyphean: a laboured task that never ends.
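Rauch’s point can be made concrete with a toy example (the blocklist and messages here are invented for illustration): a naive word-list filter is trivially defeated by character substitution or even ordinary punctuation, which is why list-based filtering is a task that never ends.

```python
BLOCKLIST = {"threat", "abuse"}  # hypothetical banned terms

def naive_filter(message):
    """Return True if the message contains a blocklisted word."""
    return any(word in BLOCKLIST for word in message.lower().split())

print(naive_filter("this is a threat"))  # True: caught
print(naive_filter("this is a thr3at"))  # False: trivially evaded
print(naive_filter("threat!"))           # False: punctuation defeats the match
```

Production filters use normalisation, stemming, and machine-learned classifiers rather than bare word lists, but evasion remains an arms race.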

Twitter’s new reporting system offers to email users a formal copy of their reported tweet, so that they can pass it on as evidence to law enforcement agencies.

It took less than a day for the technology website Gizmodo to brand the new reporting tool a “useless punt”, noting that Twitter emails information that could have been captured with a screenshot, and leaves the onus on the victim to report to local law enforcement agencies.

The weakest link

Conventional law enforcement agencies have a pretty mediocre track record for tackling online abuse and harassment.

United States Congresswoman Katherine Clark recently called on the Department of Justice to specifically focus on online harassment, after a “disappointing” meeting with the Federal Bureau of Investigation.

Technology journalist Claire Porter recently spoke of her frustrations with Australian police, who were seemingly unclear about their jurisdiction.

American journalist Amanda Hess described a similarly frustrating experience in explaining her harassment to an officer:

The cop anchored his hands on his belt, looked me in the eye, and said: ‘What is Twitter?’

If officers are confused about dealing with online harassment, then their ability to help the victims of threats and abuse is severely hindered. So what are the legal frameworks for dealing with this kind of abusive behaviour? Are they being adequately used?

The ‘lawless internet’ is a myth

The kind of abuse and harassment that people face on the internet is illegal under a variety of laws, and with varying penalties. Under Title 18 of the United States Code:

  • § 875 outlaws any interstate or foreign communications that threaten injury.

  • § 2261A outlaws any interstate or foreign electronic communications with the intent to kill, injure, harass or intimidate another person – especially conduct that creates reasonable fear of death or serious bodily injury, or that attempts to cause substantial emotional distress to another person.

Under the United Kingdom’s Communications Act 2003:

  • § 127 makes it an offence to send an electronic message that is grossly offensive, indecent, obscene or of menacing character.

Similarly, under Australian state and commonwealth law:

  • § 474.15, 474.16 and 474.17 of the Criminal Code Act 1995 (Commonwealth) make it an offence to use electronic communications to threaten to kill or harm, to send hoaxes about explosives or dangerous substances, or to menace, harass and cause offence.

  • The Crimes Act 1900 (NSW) § 31, Criminal Code 1899 (Qld) § 308, Criminal Code Act 1924 (Tas) § 162, Crimes Act 1900 (ACT) § 3031, Criminal Code 1983 (NT) § 166, Criminal Law Consolidation Act (SA) § 19(1, 3), Crimes Act 1958 (Vic) § 20, and Criminal Code 1913 (WA) § 338 (A, B) each make it an offence to make threats that cause a person to fear death or violence.

In addition to this, the Australian Government has recently announced the new Children’s e-Safety Commissioner, and new legislation that will allow the commissioner to impose fines on social networking platforms.

The commissioner’s office is being established as part of an A$10 million policy initiative from the Department of Communications to enhance online safety for children.

This is a well-intentioned initiative with a laudable goal, but by narrowly focusing on the harassment of children and ignoring the wealth of existing laws, it might just miss the forest for the trees when it comes to addressing online harassment.

Given the extensive number of laws that could already be used to address online harassment, we must ask where the weaknesses fall in enforcing these laws.

If police officers are not yet adequately trained to engage with crimes committed online by local, interstate or international aggressors, or familiar with procedures to request data from social networks, then legislators must look at providing these agencies with the training and resources required to engage with their responsibilities in online spaces.

This article was originally published on The Conversation.
Read the original article.

Social Media at the Asian Cup: The Inside View
https://socialmedia.qut.edu.au/2015/01/20/social-media-at-the-asian-cup-the-inside-view/
Tue, 20 Jan 2015 05:10:40 +0000

The AFC Asian Cup kicked off last Friday with a big crowd in Melbourne and the Socceroos setting a great vibe for the tournament, beating Kuwait 4-1. I arrived in Sydney on Sunday afternoon after the opening weekend and (after a false start when I left my laptop on the plane disembarking in Sydney – luckily I got it back the following day) I was ready to get involved and meet the Asian Cup digital communications and social media teams. My research on the Asian Cup is part of my PhD project exploring the use of social media in sport, in an effort to understand how a social media team operates at a large-scale, international event.

On Monday I arrived at the head office of the AFC Local Organising Committee and met the digital communications team, who were responsible for social media content in the lead-up to the competition and, chiefly, for running the website and dealing with any communications issues throughout it. I learnt that the social media team was based out of the stadium and that I would be able to meet them at the Socceroos v Oman game the following night.

I started off my first day with a chat to Alison Hill, the General Manager of Government Relations and Communications, who gave me the run-down on what they had been working on in the lead-up to the tournament and explained how it all works within the organisation. We had a really good conversation and I learnt a lot about the efforts they had made to promote the tournament, which was relatively unknown to Australians, on their digital platforms. I spent the first two days of the tournament with her team, learning how they manage to keep the website updated throughout each match day, and attending the Socceroos v Oman game with them at Stadium Australia to see them work in real time.

At the game I was introduced to Kapil Chettri, the leader of the social media team, and he invited me to spend the next few days working alongside the socials team to find out what they do and how they do it. The team is based out of the Stadium Australia media centre: live blogging, live tweeting, posting on Facebook and Instagram, and spreading Asian Cup news across platforms in six different languages (English, Arabic, Chinese, Indonesian, Japanese and Korean). There are eight team members based in Sydney, as well as a number based off-site, and interns who go about “news gathering” on the ground at each stadium (I got to do a bit of this as well).

Although I am only four days into my observation period, there have been some interesting issues that have emerged. Firstly, there is the balancing act between being fast and being accurate, satisfying the needs of both the domestic and international audience, as well as keeping all stakeholders happy. In the Asian Cup, like many large scale sporting events, anything can happen or can change at a moment’s notice and the team function in a very reactive way to accommodate this fast-paced environment.

Another interesting observation concerns the importance of having a well-drilled team that can act effectively on its own initiative. Though it seems chaotic in the build-up to a match and throughout, each member of the social media team knows what they are doing and is able to carry out their tasks with little explicit direction from the team leader. Though the content posted on social media changes day-to-day, the team follow an implied structure when posting content – certain posts are required in the build-up to a game (e.g. the match schedule for the day, posts about each host city, match highlights from the previous round and players to look out for), during the live tweets (ensuring that images are added to as many posts as possible, and providing stats visualisations and half-time and full-time updates with graphics) and post-game (highlights videos and man of the match). Kapil states that their only explicit strategy is to “tell a story,” and that it is then up to each individual team member to do this with the content that they post.

I will continue to explore these issues and more as the competition progresses. Below are some photos of me in action!

[Photos: in action with the social media team]


Big Brother’s Radar, Social Media and Public Votes
https://socialmedia.qut.edu.au/2014/09/29/big-brothers-radar-social-media-and-public-votes/
Sun, 28 Sep 2014 23:53:04 +0000

Big Brother is undoubtedly one of the most popular Australian shows on social media. Outside of the ABC’s weekly hit Q&A, our 2013 study of Australian TV found that Big Brother was consistently the show with the highest levels of conversation on Twitter. Precise Facebook data is harder to quantify, but the official Big Brother page boasts 790,000 likes and over 38,000 comments since the start of the series, so it has established a firm presence on that platform too.

Given this popularity, and the significant overlap between Big Brother’s target audience and the user base of social media platforms, it will be interesting to observe the extent to which social media activity (and perhaps, eventually, sentiment) acts as a predictor of votes on the show. In this blog, following the first round of nominations, the first eviction and the first round of single nominations, we look at the data from the last two and a half weeks to test whether social media activity acts as a predictor of public votes.

So far, at least, it has been a mixed bag, but let’s start with the positive: the public vote for the ‘Perfect Pair’ dance competition, in which the winners were awarded $30,000, was held between the final two pairs, Lawson & Aisha and Dion & Jason. The public then voted through JumpIn for the pair with the best dance, but did they actually just vote for their favourite pair? If we use social media activity as a barometer, it seems that could be the case. Our data showed a tight race, which Lawson & Aisha just pipped, and indeed the public vote came back 51.8% in favour of Lawson & Aisha. Perhaps, if they had been up against, say, Travis and Cat – who were hardly mentioned this week – they would have won by even more.

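The intuition behind using mention volumes as a barometer can be sketched in a few lines of code. This is purely illustrative: the counts below are invented, not our actual data, and the naive “predictor” simply treats each pair’s share of Twitter mentions as a forecast of their share of the public vote.

```python
# Naive predictor: a pair's share of Twitter mentions is taken as a
# forecast of their share of the public vote. Counts are invented.
mentions = {"Lawson & Aisha": 5200, "Dion & Jason": 4850}

total = sum(mentions.values())
predicted_vote_share = {pair: n / total for pair, n in mentions.items()}

# Print the pairs from most- to least-mentioned with their predicted share.
for pair, share in sorted(predicted_vote_share.items(), key=lambda kv: -kv[1]):
    print(f"{pair}: {share:.1%}")
```

Even a model this crude “calls” the dance-off correctly for these invented numbers; the interesting question for the rest of the season is how often raw volume gets it wrong, and whether sentiment closes the gap.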
Lawson also tells an interesting story in the overall polling, as seen in the chart below, which highlights the running total for all housemates: he was largely anonymous until the dance-off, and his decision to give Aisha the lion’s share of the prize money ($20,000) was rewarded in the social media volume.

Below is a running total of Twitter mentions for the pairs since launch night; however, for the time being we will focus on the past week’s long-winded and highly debated eviction process. Nominees made up five of the six most talked-about housemates on the night before the eviction process began, and those not being talked about were being carried by their partner, based on the pairs table:

[Chart: running Twitter mentions by pair]

We can of course ask some other interesting questions of these charts: where were Skye and Lisa when they were ‘saved’? Were Jake and Gemma losers in the public vote because of anonymity, or hatred? What caused David and Sandra to be saved, when they were virtually anonymous through the first week and only talked about subsequently in regard to David’s chauvinistic comments? Was it better for David to be hated than not talked about at all? Related to this is the question of screen time and popularity inside the house: what went wrong for Gemma this week, given that she achieved her stated intent of securing airtime?

Up for eviction this week were Skye & Lisa, Jake & Gemma, Travis & Cat and David & Sandra. Ever since the first-week Katie & Priya fiasco, Skye & Lisa have been by far the most talked-about pair of the season, and consequently they were saved on Monday night, as per our prediction based on the previous graph, with Skye & Lisa the most popular pair on 22 September. Interestingly, however, Gemma & Jake were the pair with the second-highest social media activity, and the most popular during the nomination period, indicating that sentiment will also be a significant factor in further predictions.

[Chart: Twitter mentions for the nominated pairs]

While we have our own tool monitoring Big Brother discussion (http://bigbrother.thehypometer.com), Channel 9 (Mi9/JumpIn) have also launched a counter, the “Big Brother Radar”, which captures tweets and Facebook statuses from those who deliberately seek to be noticed by the radar using official C9 hashtags (e.g. #BBAUGemma). Our tool, by contrast, attempts to measure the underlying volume of discussion about (and, by possible inference, interest in) the competitors as a whole across social media.
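The difference between the two counting approaches can be sketched as follows. The hashtags, names and matching rules here are simplified illustrations, not the actual implementation of either the Radar or the Hypometer:

```python
# Two ways of counting Big Brother chatter, simplified.
# "Radar"-style: count only posts carrying an official voting hashtag,
# i.e. people deliberately trying to be noticed.
# "Hypometer"-style: count any post that mentions a housemate at all.
OFFICIAL_TAGS = {"#bbaugemma", "#bbauskye"}   # illustrative subset
HOUSEMATES = {"gemma", "skye", "lawson"}      # illustrative subset

def radar_count(posts):
    """Posts that carry an official hashtag (deliberate signalling)."""
    return sum(1 for p in posts
               if any(tag in p.lower() for tag in OFFICIAL_TAGS))

def volume_count(posts):
    """Posts that mention any housemate (underlying discussion volume)."""
    return sum(1 for p in posts
               if any(name in p.lower() for name in HOUSEMATES))

posts = [
    "Voting to save! #BBAUGemma",
    "Cannot believe what Skye just said",
    "Lawson deserves the money tbh",
]
print(radar_count(posts), volume_count(posts))
```

For this toy sample only the first post registers on the “radar”, while all three count towards underlying volume, which is exactly why the two measures can diverge as predictors.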


Going forward, we hypothesise that the housemates in whom the public have no interest will be the ones who struggle in a ‘vote to save’ format. That said, it’s probably not advisable to bet based on this information. It may be that the Radar format serves as a better predictor of those likely to be evicted (i.e. the effort to post with the correct hashtag correlates with the effort to vote), it may be that sentiment proves highly significant, or indeed it may be that social media is simply not a good barometer of the BB voting public. Whichever of these proves to be the case, however, the data is sure to be interesting.

Finally, it is worth noting that one of the problems with the lack of a live feed (which we have ranted about previously), and indeed this year the lack of any live updates at all, is that it allows producers to largely control the message. Social media reaction therefore largely follows the amount of airtime given to contestants and the plot lines developed, much like a soap. By contrast, in the USA, with four live camera views running 24 hours a day, users are able to create and share their own storylines about the housemates, generating ‘hype’ for the show in a way we do not see here. In Australian Big Brother we are told what to think, and we’ll leave it as an exercise for the reader to decide how that reflects on wider society. We’ll leave you with a running total of housemate mentions to date, in which Skye continues to lead the way:

[Chart: Housemate Twitter Mentions]

The dark art of Facebook fiddling with your news feed
https://socialmedia.qut.edu.au/2014/09/04/the-dark-art-of-facebook-fiddling-with-your-news-feed/
Wed, 03 Sep 2014 21:47:41 +0000

Facebook’s news feed is probably the most-used feature of the social network. It organises posts, photos, links and advertisements from your friends and the pages you follow into a single stream of news. But lately we’ve seen the news feed making headlines of its own.

In August, users and journalists began to question Facebook’s news feed after noticing a scarcity of links and posts about the death of Michael Brown and the subsequent protests in Ferguson, Missouri.

Facebook also announced changes to the news feed to decrease the visibility of clickbait-style headlines. These are headlines that attempt to lure visitors to a webpage with intriguing but uninformative previews, and Facebook made up a typical example.

[Image: Facebook’s example of typical clickbait. Source: Facebook]

Facebook says it will be tracking the amount of time that users spend on a website after clicking such a link, and penalising the publishers of links that don’t keep reader attention.
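Facebook has not published the mechanics of this penalty, so the following is only a toy sketch of the general idea: demote links whose readers bounce back quickly. The threshold and penalty values are invented for illustration.

```python
# Toy clickbait demotion: links whose median post-click dwell time is short
# get their distribution score scaled down. All numbers are invented.
import statistics

DWELL_FLOOR_SECONDS = 15   # below this, the click likely didn't keep attention
PENALTY = 0.25             # multiplier applied to suspected clickbait

def adjusted_score(base_score, dwell_times):
    """Scale down a link's ranking score if readers bounce quickly."""
    if statistics.median(dwell_times) < DWELL_FLOOR_SECONDS:
        return base_score * PENALTY
    return base_score

print(adjusted_score(100, [3, 5, 4, 6]))    # bounce-y link: demoted to 25.0
print(adjusted_score(100, [40, 90, 120]))   # link that held attention: 100
```

Using the median rather than the mean is one plausible design choice here: a handful of readers who leave a tab open for hours would otherwise mask a link on which most people bounce immediately.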

In June, Facebook faced criticism after the publication of research findings based on an “emotional contagion” experiment that manipulated the news feeds of almost 700,000 users. The study raised ethical concerns among both Facebook users and observers.

Given how little we understand of Facebook’s internal affairs and the machinations of the news feed’s filter algorithms, the growing public concern around Facebook’s operations is understandable.

Why do the algorithms matter?

As users, our readiness to trust Facebook as a hub for social, professional and familial interactions, as well as a source for following and discussing news, has afforded the company a privileged position as an intermediary in our social and political lives.

Twitter CEO Dick Costolo’s announcement that Twitter decided to censor user-uploaded images of American journalist James Foley’s execution is a timely reminder of the many roles of social networking platforms.

These platforms and their operators do not simply present data and human interaction in a neutral way — they also make editorial judgements about the kinds of data and interaction they want to facilitate.

This should lead us to question the ways in which Facebook’s role as an intermediary for our information and social connections allows its operators to potentially influence its users.

Why does Facebook need algorithms to sort the news?

One of the most common responses to criticism of the news feed is the suggestion that Facebook should do away with sorting entirely and simply show everything chronologically — just like Twitter.

Showing everything can make the news feed seem a bit more like a news firehose. Facebook engineers estimate that the average user’s news feed would show around 1,500 new posts each day.

The “firehose model” is not without its own issues. By showing all posts as they happen, Twitter’s approach can tend to favour the users who post most often, and that can let the noisiest users drown out other worthy voices.
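To make the contrast concrete, here is a heavily simplified sketch of the two orderings. The scoring side is in the spirit of Facebook’s early, publicly described “EdgeRank” factors (affinity, edge weight and time decay); the real news feed reportedly weighs vastly more variables, and every value below is invented.

```python
# Chronological ("firehose") ordering vs. a toy relevance score in the
# spirit of EdgeRank: affinity (closeness to the poster) x edge weight
# (post type) x time decay. All values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    kind: str          # "photo", "status", "link"
    age_hours: float

AFFINITY = {"close_friend": 1.0, "acquaintance": 0.3, "page": 0.2}
EDGE_WEIGHT = {"photo": 1.5, "status": 1.0, "link": 0.8}

def score(post, relationship):
    decay = 1.0 / (1.0 + post.age_hours)   # newer posts decay less
    return AFFINITY[relationship] * EDGE_WEIGHT[post.kind] * decay

feed = [
    (Post("noisy page", "link", 0.5), "page"),
    (Post("old friend", "photo", 6.0), "close_friend"),
]
# Firehose: newest first, so the frequent poster's fresh link wins.
chronological = sorted(feed, key=lambda pr: pr[0].age_hours)
# Scored: the close friend's older photo outranks the page's fresh link.
ranked = sorted(feed, key=lambda pr: score(*pr), reverse=True)
print([p.author for p, _ in chronological])
print([p.author for p, _ in ranked])
```

The point of the sketch is the inversion: under chronological ordering the noisiest account surfaces first, while even a crude relevance score can pull a closer connection’s older post back to the top.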

This concern may be an influence on Twitter’s recent changes to show tweets favourited by other followers in a user’s timeline, and its apparent readiness to experiment with algorithmic changes to their users’ timelines.

Algorithmic filtering may well be helpful given the amount of information we deal with on a day-to-day basis, but the unexplained “black box” nature of most algorithmic systems can be a headache too.

Changes to Facebook’s algorithms can dramatically affect the traffic some websites receive, much to the chagrin of their publishers. Publishers who have registered with Facebook receive some basic metrics as to the number of users who have seen their post. Individual users receive even less feedback as to how widely (if at all) their posts have been seen.

These algorithms are ostensibly created by the developers of Facebook and Twitter in service of creating a better experience for their users (both individuals and corporate).

But social platforms have a vested interest in keeping users engaged with their service. We must recognise that these interests can shape the development of the platform and its functions.

A social network’s filtering may be biased against showing content that engineers have deemed controversial or potentially upsetting, to help users enjoy the network. These filters could stop you from seeing a post that would have upset you, but they might also limit the visibility of a cry for help from someone in need.

Are there antidotes to algorithms?

If users are concerned by the choices that a social media platform seems to be making, they can demand a greater degree of transparency. That being said, these systems can be complex. According to Facebook, more than 100,000 different variables are factored into the news feed algorithms.

Another option might be to regulate: subject sufficiently large technology companies and their social algorithms to regular independent auditing, similar to the regulations for algorithmic financial trading.

Alternatively, users could use the platform in unintended ways or learn to subvert and scam the system to their own advantage.

Users could also lessen their usage of Facebook and seek a less-filtered stream of news and information from a variety of other sources to suit their needs.

For better or worse, algorithmic filtering will likely become a staple of our data-fuelled, internet-mediated lives, but in time we may also see services that give users more direct control over the algorithms that govern what they get to see.

The Conversation

This article was originally published on The Conversation.
Read the original article.

Paid editors on Wikipedia – should you be worried?
https://socialmedia.qut.edu.au/2014/08/22/paid-editors-on-wikipedia-should-you-be-worried/
Fri, 22 Aug 2014 03:44:15 +0000

Whether you trust it or ignore it, Wikipedia is one of the most popular websites in the world and accessed by millions of people every day. So would you trust it any more (or even less) if you knew people were being paid to contribute content to the encyclopedia?

The Wikimedia Foundation, the charitable organisation that supports Wikipedia, has changed its Terms of Use. Paid contributors can now make changes to Wikipedia articles so long as they clearly disclose their affiliations and potential conflicts of interest.

The website has previously not had an official policy on paid editing, despite a history of community opposition to editors who contribute for pay.

So the change in policy comes amid concerns from the Foundation about the potential damage to Wikipedia’s reputation as a free and objective source of knowledge from editors acting on behalf of a paying client or employer.

The concerns arose after the user community broke the story of its year-long investigation into large-scale editing by the consulting business Wiki-PR.

Working out of Austin, Texas, Wiki-PR employees used 250 fake accounts to create and contribute to pages about its clients. This resulted in several hundred promotional articles on Wikipedia, which the volunteer community subsequently had to remove for not meeting the encyclopedia’s quality standards.

What is paid editing?

Paid editing refers broadly to anyone who receives or expects to receive compensation for their contributions to the encyclopedia.


These editors are not paid by Wikipedia or the Wikimedia Foundation. They are understood to be contributing on behalf of a third party such as an employer or client.

At its heart, paid editing seems at odds with the open user-led model of volunteer collaboration that Wikipedia employs and is famous for. Therefore, the acknowledgement by the Wikimedia Foundation of such activity in the encyclopedia is a big deal.

Critics in the community say contributions from paid editors will never be compatible with the site’s core editing policy of neutrality, or that requiring disclosure is an invasion of privacy and of the freedom to edit anonymously. Supporters of the change argue that acknowledging the presence of these paid editors is important for fulfilling the site’s mission of being the encyclopedia that anyone can edit.

A short history of paid editing in Wikipedia

Paid editing has a tumultuous history in Wikipedia. In the last few years, there have been some high-profile instances of professionals whitewashing Wikipedia, known as “wikiwashing”: using a particular Wikipedia entry to further a client’s interests, in violation of the site’s neutrality policy (among others).

Last year BP employee Arturo Silva was accused of providing nearly half the text for the British Petroleum article, including sections discussing the corporation’s environmental record.

The Gibraltarpedia controversy in 2012 resulted in a high-profile editor stepping down from trustee duties with Wikimedia UK after it was revealed his consultancy received fees from the Gibraltar Tourist Board.

In late 2011, UK newspaper The Independent filmed senior members of PR firm Bell Pottinger boasting of using “dark arts” to “sort” Wikipedia on behalf of governments with less-than-perfect human rights records.

It is also interesting that in all but the Bell Pottinger case, the Wikipedia community uncovered the activity.

What does the change mean for Wikipedia?

The change in the Terms of Use to acknowledge paid editing highlights Wikipedia’s importance in the management of corporate reputations.

But it also highlights the importance of managing Wikipedia’s own brand as a neutral and non-profit site of encyclopedic information.

The presence of paid editors on the site raises questions about the ability of the platform to meet this goal of neutrality. Can an article written about a company by an employee of that company ever be truly objective?

The fear is that opening up the platform to any form of commercial involvement changes its nature and threatens its sustainability as a site of free and neutral knowledge.

Is any editor a good editor?

On the other hand, can the site ever claim to really represent the sum of all knowledge without input from professionals? Paid editors have the time and inclination to spend on articles that otherwise may go unimproved, or may not exist at all.

Another argument for including paid editors in the community relates to the sustainability of the platform itself. The number of active volunteer editors has been declining since a peak in 2007, although the number of new articles created each day continues to grow.

It is still important to make sure that Wikipedia remains the “encyclopedia that anyone can edit” so long as paid editors play by Wikipedia’s rules.

What does the change mean for users?

For readers, the change will remain largely unseen. It serves as an extra level of control for volunteer editors, and is flexible enough that site policies can be amended to reflect local legal requirements about fraud and conflicts of interest.

It means readers should continue to approach Wikipedia for what it is – a user-led encyclopedia. If the veracity of the information you seek is important, then you may need to click past the article and head to the talk page or the edit history to get an idea of how the article was constructed. You can then judge for yourself how you view any contribution from paid editors.

For contributors, the changed terms are meant to allow easier identification of edits that may present a conflict of interest and require extra scrutiny from uninvolved parties. It is hoped this will ultimately improve the quality of the encyclopedia.

Whether amending the Terms of Use invites a new wave of commercialism is yet to be seen. Either way the amendment signals that the platform is still open – to change at the very least.


This article was originally published on The Conversation.
Read the original article.

For an in-depth look at one community response to paid editing, see The Free Encyclopaedia that Anyone can Edit: The Shifting Values of Wikipedia Editors.

Any name will do from now on says Google – why the change?
https://socialmedia.qut.edu.au/2014/07/24/any-name-will-do-from-now-on-says-google-why-the-change/
Thu, 24 Jul 2014 03:35:08 +0000

Google has announced a surprising end to its controversial “Real Name” policy with a contrite post on Google+, telling users that there are “no more restrictions” on the names people can use.

This is a dramatic change in policy for the company which suspended users en masse in 2011 for using pseudonyms – an event that users have since described as The Nymwars.

The policy had been criticised since for being capriciously enforced, allowing celebrities such as American musician Soulja Boy (real name DeAndre Cortez Way) to use a pseudonym on the network, but ignoring users who wanted to do the same.

Some users who used their real names on the social network even ran afoul of Google, because their names did not fit the assumptions that Google employees made about what counts as a real name.

Technology writer Stilgherrian and reporter Violet Blue have both documented their problems with Google’s name policing wrongly affecting them, even though they used their real names.

The policy became even more vexed in recent months, as Google integrated Google+ with Android, Gmail and YouTube, where users expected support for pseudonyms.

Although some users hoped that Google+’s real names would fix YouTube’s nasty comment ecosystem, it became a controversial change for many YouTube users.

Why does this change matter?

The change to Google’s policy is important because it shows a change in attitude towards rights of users online.

Vint Cerf, a senior executive at Google, had argued that “anonymity and pseudonymity are perfectly reasonable under some situations”, especially where using a real name could endanger a user.

The new policy should bring Google into line with the Australian Privacy Principles for Anonymity and Pseudonymity announced by the Office of the Australian Information Commissioner (OAIC) this year.

While we might normally consider names and pseudonyms purely as markers of our identity, the OAIC argues that anonymity and pseudonymity are important privacy concepts that allow people to have greater control over their personal information.

Why are pseudonyms so contentious?

Letting people adopt a pseudonym or participate anonymously gives users a freedom to participate without fear of retribution. Academics call this disinhibition.

The freedom from restraint that anonymity brings isn’t a particularly new concern. In the 1970s Johnny Carson told The New Yorker that he couldn’t bear citizen’s band (CB) radio:

[…] all those sick anonymous maniacs shooting off their mouths.

Similarly, writers have told stories about morality and anonymity since Plato’s Republic and the Ring of Gyges, which grants its wearer the power to become invisible, much like the ring in Tolkien’s The Lord of the Rings.

This freedom can be valuable for people at risk of harm, as it can allow them to seek support or to participate in online communities without fear of being stalked or persecuted.

Similarly, lesbian, gay and transgender users at risk of discrimination can participate online without being publicly outed. It can also allow people the freedom to express themselves without endangering their relationships with friends and colleagues.

Employees even risk retribution when their employers perceive that their online behaviour reflects on their workplace. US Supreme Court Justice John Paul Stevens argued that anonymity is protected as part of the right to free speech, as it can “protect unpopular individuals from retaliation — and their ideas from suppression”.

The problem with anonymity

The catch is that this freedom also empowers people who wish to hurt and harass others. “Trolls” can operate anonymously because it can free them from responsibility for their actions.

This becomes particularly problematic when anonymous or pseudonymous users threaten people with harm. A number of women have written about the bullying and violent threats they regularly experience at the hands of anonymous trolls.

In some moderated online environments, users are protected from these kinds of speech by the thankless work of comment moderators who help to manage online communities.

Ultimately, Google+’s new policy will empower people by letting them participate on the network with greater control over the identity they use. This will help trolls and new participants alike. It falls to Google and its team of moderators to make sure that the network remains a safe place for users.

Google’s policy change shows that the company has become responsive to user concerns. We should consider that, for many websites, creating an environment where users are both free to participate and free from harm is a difficult affair.

The Conversation

This article was originally published on The Conversation.
Read the original article.
