fake news – QUT Social Media Research Group (https://socialmedia.qut.edu.au)

More ‘Fake News’ Research, and a PhD Opportunity!
https://socialmedia.qut.edu.au/2020/08/03/more-fake-news-research-and-a-phd-opportunity/ – 3 August 2020

For those of you who have access to Australian television, this is an advance warning that the research on coronavirus-related mis- and disinformation that my colleagues and I at the QUT Digital Media Research Centre have conducted during the first half of this year will be featured prominently in tonight’s episode of the ABC’s investigative journalism programme Four Corners, which focusses on 5G conspiracy theories. A preview is below, and I hope that the full programme may also become available without geoblocking on ABC iView or the Four Corners Facebook page. The accompanying ABC News article has further information, too.

Related to this work, and the ARC Discovery research project that supports it, we are now also calling for expressions of interest in a three-year PhD scholarship on mis- and disinformation in social media, which will commence in early 2021. Please get in touch with me if you’re interested in the scholarship:

PhD Scholarship: ARC Discovery project on Mis- and Disinformation in Social Media (PhD commencing 2021)

The QUT Digital Media Research Centre is offering a three-year PhD scholarship associated with a major ARC Discovery research project on mis- and disinformation in social media. Working with DMRC research leaders Axel Bruns, Stephen Harrington, and Dan Angus, and collaborating with Scott Wright (Monash University, Melbourne), Jenny Stromer-Galley (Syracuse University, USA), and Karin Wahl-Jorgensen (Cardiff University, UK), the PhD researcher will use qualitative and quantitative analytics methods to investigate the dissemination patterns and processes for mis- and disinformation.

Ideally, the PhD researcher should be equally familiar with qualitative close-reading and quantitative, computational research methods. They will draw on state-of-the-art social media analytics approaches to examine the role of specific individual, institutional, and automated actors in promoting or preventing the distribution of suspected ‘fake news’ content across Australian social media networks. Building on this work, they will develop a number of case studies of the trajectories of specific stories across the media ecosystem, drawing crucially on issue mapping methods to produce a forensic analysis of how particular stories are disseminated by a combination of fringe outlets, social media platforms and their users, and potentially also by mainstream media publications.
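Purely as an illustration of the quantitative side of such a dissemination analysis, and not as a description of the project’s actual pipeline: one common starting point is a retweet network built from collected posts. The sketch below assumes a hypothetical CSV export of tweets with placeholder columns ‘author’, ‘retweeted_author’, and ‘created_at’, and uses pandas and networkx to surface the most amplified accounts and the hourly volume of a story over time.

```python
import pandas as pd
import networkx as nx

# Hypothetical CSV of collected tweets; file name and column names are placeholders.
tweets = pd.read_csv("story_tweets.csv", parse_dates=["created_at"])

# Build a directed, weighted retweet network: an edge A -> B means A retweeted B.
g = nx.DiGraph()
for row in tweets.dropna(subset=["retweeted_author"]).itertuples():
    if g.has_edge(row.author, row.retweeted_author):
        g[row.author][row.retweeted_author]["weight"] += 1
    else:
        g.add_edge(row.author, row.retweeted_author, weight=1)

# Accounts whose posts were most widely amplified (highest weighted in-degree).
amplified = sorted(g.in_degree(weight="weight"), key=lambda x: x[1], reverse=True)[:10]
print("Most amplified accounts:", amplified)

# Hourly tweet volume, to locate inflection points in the story's trajectory.
volume = tweets.set_index("created_at").resample("1H").size()
print(volume.head())
```

In practice, such quantitative mapping would be combined with the qualitative close reading of individual posts and outlets that the scholarship description calls for.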

Interested candidates should first contact Prof. Axel Bruns (a.bruns@qut.edu.au). You will then be asked to complete the DMRC EOI form (https://research.qut.edu.au/dmrc/dmrc-eois-2020-annual-scholarship-round/), by 31 August. We will assess your eligibility for PhD study, and work with you to develop a formal PhD application to QUT’s scholarship applications system, by 30 October. The PhD itself will commence in early 2021. International applicants are welcome.

The DMRC is a global leader in digital humanities and social science research with a focus on communication, media, and the law. It is one of Australia’s top organisations for media and communication research, areas in which QUT has achieved the highest possible rankings in ERA, the national research quality assessment exercise. Our research programs investigate the digital transformation of media industries, the challenges of digital inclusion and governance, the growing role of AI and automation in the information environment, and the role of social media in public communication. The DMRC has access to cutting-edge research infrastructure and capabilities in computational methods for the study of communication and society. We actively engage with industry and academic partners in Australia, Europe, Asia, the US, and South America; and we are especially proud of the dynamic and supportive research training environment we provide to our many local and international graduate students.

‘Like a Virus’ – Disinformation in the Age of COVID-19
https://socialmedia.qut.edu.au/2020/05/19/like-a-virus-disinformation-in-the-age-of-covid-19/ – 19 May 2020

QUT DMRC social media researchers Dr Tim Graham and Prof. Axel Bruns participated in Essential Media’s Australia at Home online seminar series on 23 April, presenting early results from collaborative research in partnership with the Australia Institute’s Centre for Responsible Technology to a Zoom audience of more than 200 participants.

Also involving Assoc. Prof. Dan Angus and Dr Tobias Keller, the team is currently investigating the origins and spread of major conspiracy theories associated with the COVID-19 crisis across various social media platforms. Such conspiracy theories include false stories about coronavirus as a bioweapon created either in a Wuhan lab or by researchers associated with the Gates Foundation, and about connections between coronavirus and the roll-out of 5G mobile telephony technology.

Early results from this research point to the presence of a small but sustained coordinated effort by a network of Twitter accounts that pushed the bioweapon conspiracy story; such accounts were often associated with fringe political perspectives, especially in the United States. Further, the research indicates that these conspiracy theories typically spread beyond the fringes of public discussion only once they are picked up and amplified by tabloid media exploiting them as clickbait, or by celebrities from the fields of music, movies, and sports who share them with their substantial social media audiences.
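The full methodology is set out in the report and publications mentioned below; purely as an illustration of how coordinated amplification can be surfaced (not necessarily the method used in this study), one widely used approach is co-retweet analysis: flagging pairs of accounts that repeatedly retweet the same tweets within a very short time window. The sketch below assumes a hypothetical CSV of retweets with placeholder columns ‘retweeter’, ‘source_tweet_id’, and ‘created_at’; the 60-second window and the threshold of five co-occurrences are arbitrary placeholder values.

```python
from collections import Counter
from itertools import combinations
import pandas as pd

# Hypothetical CSV of retweets; file name, column names, window, and threshold are placeholders.
retweets = pd.read_csv("bioweapon_retweets.csv", parse_dates=["created_at"])
WINDOW = pd.Timedelta(seconds=60)   # co-retweets of the same tweet within 60 seconds
THRESHOLD = 5                       # minimum number of co-occurrences to flag a pair

pair_counts = Counter()
for _, group in retweets.groupby("source_tweet_id"):
    rows = list(group.sort_values("created_at").itertuples())
    for a, b in combinations(rows, 2):
        if a.retweeter != b.retweeter and b.created_at - a.created_at <= WINDOW:
            pair = tuple(sorted((a.retweeter, b.retweeter)))
            pair_counts[pair] += 1

# Account pairs that repeatedly retweet the same tweets in near-lockstep.
suspicious = [(pair, n) for pair, n in pair_counts.most_common() if n >= THRESHOLD]
print(suspicious[:20])
```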

The research, which will be presented in extended form in a report for the Centre for Responsible Technology and subsequent scholarly publications, points to important inflection points in the trajectory of conspiracy theories from the fringes to the mainstream, and highlights a need both for further platform intervention against coordinated inauthentic behaviour and for the development of greater digital literacies, not least amongst influential social media users.

Some Questions about Filter Bubbles, Polarisation, and the APIcalypse
https://socialmedia.qut.edu.au/2019/08/26/some-questions-about-filter-bubbles-polarisation-and-the-apicalypse/ – 26 August 2019

Rafael Grohmann from the Brazilian blog DigiLabour has asked me to answer some questions about my recent work – and especially my new book Are Filter Bubbles Real?, which is out now from Polity – and the Portuguese version of that interview has just been published. I thought I’d post the English-language answers here, too:

1. Why are the ‘filter bubble’ and ‘echo chamber’ metaphors so dumb?

The first problem is that they are only metaphors: the people who introduced them never bothered to properly define them. This means that these concepts might sound sensible, but they mean everything and nothing. For example, what does it mean to be inside a filter bubble or echo chamber? Do you need to be completely cut off from the world around you, which seems to be what those metaphors suggest? Only in such extreme cases – which are perhaps similar to being in a cult that has completely disconnected from the rest of society – can the severe negative effects that the supporters of the echo chamber or filter bubble theories imagine actually become reality, because those theories assume that people in echo chambers or filter bubbles no longer see any content that disagrees with their political worldviews.

Now, such complete disconnection is not entirely impossible, but very difficult to achieve and maintain. And most of the empirical evidence we have points in the opposite direction. In particular, the immense success of extremist political propaganda (including ‘fake news’, another very problematic and poorly defined term) in the US, the UK, parts of Europe, and even in Brazil itself in recent years provides a very strong argument against echo chambers and filter bubbles: if we were all locked away in our own bubbles, disconnected from each other, then such content could not have travelled as far, and could not have affected as many people, as quickly as it appears to have done. Illiberal governments wouldn’t invest significant resources in outfits like the Russian ‘Internet Research Agency’ troll farm if their influence operations were confined to existing ideological bubbles; propaganda depends crucially on the absence of echo chambers and filter bubbles if it seeks to influence more people than those who are already part of a narrow group of hyperpartisans.

Alternatively, if we define echo chambers and filter bubbles much more loosely, in a way that doesn’t require the people inside those bubbles to be disconnected from the world of information around them, then the terms become almost useless. With such a weak definition, any community of interest would qualify as an echo chamber or filter bubble: any political party, religious group, football club, or other civic association suddenly is an echo chamber or filter bubble because it enables people with similar interests and perspectives to connect and communicate with each other. But in that case, what’s new? Such groups have always existed in society, and society evolves through the interaction and contest between them – there’s no need to create new and poorly defined metaphors like ‘echo chambers’ and ‘filter bubbles’ to describe this.

Some proponents of these metaphors claim that our new digital and social media have made things worse, though: that they have made it easier for people to create the first, strong type of echo chamber or filter bubble, by disconnecting from the rest of the world. But although this might sound sensible, there is practically no empirical evidence for this: for example, we now know that people who receive news from social media encounter a greater variety of news sources than those who don’t, and that those people who have the strongest and most partisan political views are also among the most active consumers of mainstream media. Even suggestions that platform algorithms are actively pushing people into echo chambers or filter bubbles have been disproven: Google search results, for instance, show very little evidence of personalisation at an individual level.

Part of the reason for this is that – unlike the people who support the echo chamber and filter bubble metaphors – most ordinary people actually don’t care much at all about politics. If there is any personalisation through the algorithms of Google, Facebook, Twitter, or other platforms, it will be based on many personal attributes other than our political interests. As multi-purpose platforms, these digital spaces are predominantly engines of context collapse, where our personal, professional, and political lives intersect and crash into each other and where we encounter a broad and unpredictable mixture of content from a variety of viewpoints. Overall, these platforms enable all of us to find more diverse perspectives, not less.

And this is where these metaphors don’t just become dumb, but downright dangerous: they create the impression, first, that there is a problem, and second, that the problem is caused to a significant extent by the technologies we use. This is an explicitly technologically determinist perspective, ignoring the human element and assuming that we are unable to shape these technologies to our needs. And such views then necessarily also invite technological solutions: if we assume that digital and social media have caused the current problems in society, then we must change the technologies (through technological, regulatory, and legal adjustments) to fix those problems. It’s as if a simple change to the Facebook algorithm would make fascism disappear.

In my view, by contrast, our current problems are social and societal, economic and political, and technology plays only a minor role in them. That’s not to say that the platforms are free of blame – Facebook, Twitter, WhatsApp, and others could certainly do much more to combat hate speech and abuse on their platforms, for example. But if social media and even the Internet itself suddenly disappeared tomorrow, we would still have those same problems in society, and we would be no closer to solving them. The current overly technological focus of our public debates – our tendency to blame social media for all our problems – obscures this fact, and prevents us from addressing the real issues.

2. Polarisation is a political fact, not a technological one. How do you understand political and societal polarisation today?

To me, this is the real question, and one which has not yet been researched enough. The fundamental problem is not echo chambers and filter bubbles: it is perfectly evident that the various polarised groups in society are very well aware of each other, and of each other’s ideological positions – which would be impossible if they were each locked away in their own bubbles. In fact, they monitor each other very closely: research in the US has shown that far-right fringe groups are also highly active followers of ‘liberal’ news sites like the New York Times, for example. But they no longer follow the other side in order to engage in any meaningful political dialogue, aimed at finding a consensus that both sides can live with: rather, they monitor their opponents in order to find new ways to twist their words, create believable ‘fake news’ propaganda, and attack them with such falsehoods. And yes, they use digital and social media to do so, but again this is not an inherently technological problem: if they didn’t have social media, they’d use the broadcast or print media instead, just as the fascists did in the 1920s and 1930s and as their modern-day counterparts still do today.

So, for me the key question is how we have come to this point: put simply, why do hyperpartisans do what they do? How do they become so polarised – so sure of their own worldview that they will dismiss any opposing views immediately, and will see any attempts to argue with them or to correct their views merely as a confirmation that ‘the establishment’ is out to get them? What are the (social and societal, rather than simply technological) processes by which people get drawn to these extreme political fringes, and how might they be pulled back from there? This question also has strong psychological elements, of course: how do hyperpartisans form their worldview? How do they incorporate new evidence into it? How do they interpret, and in doing so defuse, any evidence that goes against their own perspectives? We see this across so many fields today: from political argument itself to the communities of people who believe vaccinations are some kind of global mind control experiment, or to those who still deny the overwhelming scientific evidence for anthropogenic climate change. How do these people maintain their views even when – and this again is evidence for the fact that echo chambers and filter bubbles are mere myths – they are bombarded on a daily basis with evidence of the fact that vaccinations save lives and that the global climate is changing with catastrophic consequences?

And since you include the word ‘today’ in your question, the other critical area of investigation in all this is whether any of this is new, and whether it is different today from the way it was ten, twenty, fifty, or one hundred years ago. On the one hand, it seems self-evident that we do see much more evidence of polarisation today than we have in recent decades: Brexit, Trump, Bolsonaro, and many others have clearly sensitised us to these deep divisions in many societies around the world. But most capitalist societies have always had deep divisions between the rich and the poor; the UK has always had staunch pro- and anti-Europeans; the US has always been racist. I think we need more research into, and better ways of assessing, whether any of this has actually gotten worse in recent years, or whether it has simply become more visible.

For example, Trump and others have arguably made it socially acceptable in the US to be politically incorrect: to be deliberately misogynist; to be openly racist; to challenge the very constitutional foundations that the US political system was built on. But perhaps the people who now publicly support all this had always already been there, and had simply lacked the courage to voice their views in public – perhaps what has happened here is that Trump and others have smashed the spiral of silence that subdued such voices by credibly threatening social and societal sanctions, and have instead created a spiral of reinforcement that actively rewards the expression of extremist views and leads hyperpartisans to try and outdo each other with more and more extreme statements. Perhaps the spiral of silence now works the other way, and the people who oppose such extremism now remain silent because they fear communicative and even physical violence.

Importantly, these are also key questions for media and communication research, but this research cannot take the simplistic perspective that ‘digital and social media are to blame’ for all of this. Rather, the question is to what extent the conditions and practices in our overall, hybrid media system – encompassing print and broadcast as well as digital and social media – have enabled such changes. Yes, digital and social platforms have enabled voices on the political fringes to publish their views, without editorial oversight or censorship from anyone else. But such voices find their audience often only once they have been amplified by more established outlets: for instance, once they have been covered – even if only negatively – by mainstream media journalists, or shared on social media by more influential accounts (including even the US president himself). It is true that in the current media landscape, the flows of information are different from what they were in the past – not simply because of the technological features of the media, but because of the way that all of us (from politicians and journalists through to ordinary users) have chosen to incorporate these features into our daily lives. The question then is whether and how this affects the dynamics of polarisation, and what levers are available to us if we want to change those dynamics.

3. How can we continue critical research in social media after the APIcalypse?

With great tenacity and ingenuity even in the face of significant adversity – because we have a societal obligation to do so. I’ve said throughout my answers here that we cannot simplistically blame social media for the problems our societies are now facing: the social media technologies have not caused any of this. But the ways in which we, all of us, use social media – alongside other, older media forms – clearly play a role in how information travels and how polarisation takes place, and so it remains critically important to investigate the social media practices of ordinary citizens, of hyperpartisan activists, of fringe and mainstream politicians, of emerging and established journalists, of social bots and disinformation campaigns. And of course even beyond politics and polarisation, there are also many other important reasons to study social media.

The problem now is that over the past few years, many of the leading social media platforms have made it considerably more difficult for researchers even to access public and aggregate data about social media activities – a move I have described, in deliberately hyperbolic language, as the ‘APIcalypse’. Ostensibly, such changes were introduced to protect user data from unauthorised exploitation, but a convenient consequence of these access restrictions has been that independent, critical, public-interest research into social media practices has become a great deal more difficult even while the commercial partnerships between platforms and major corporations have remained largely unaffected. This limits our ability to provide an impartial assessment of social media practices and to hold the providers themselves to account for the effects of any changes they might make to their platforms, and increasingly forces scholars who seek to work with platform data into direct partnership arrangements that operate under conditions favouring the platform providers.

This requires several parallel responses from the scholarly community. Of course we must explore the new partnership models offered by the platforms, but we should treat these with a considerable degree of scepticism and cannot solely rely on such limited data philanthropy; in particular, the platforms are especially unlikely to provide data access in contexts where scholarly research might be highly critical of their actions. We must therefore also investigate other avenues for data gathering: this includes data donations from users of these platforms (modelled for instance on ProPublica’s browser plugin that captures the political ads encountered by Facebook users) or data scraping from the Websites of the platforms as an alternative to API-based data access, for example.
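Purely as an illustration of that second avenue, the sketch below fetches a public web page and extracts post text without going through a platform API. The URL and CSS selector are placeholders rather than a real platform endpoint, and the Terms of Service and ethical considerations discussed in the next paragraph apply before any such collection is attempted.

```python
import requests
from bs4 import BeautifulSoup

URL = "https://example.org/public-posts"   # placeholder, not a real platform endpoint
SELECTOR = "div.post-text"                 # placeholder CSS selector for post text

# Fetch the public page, identifying the collector in the User-Agent string.
resp = requests.get(URL, headers={"User-Agent": "academic-research-collector"}, timeout=30)
resp.raise_for_status()

# Parse the HTML and extract the text of each matching post element.
soup = BeautifulSoup(resp.text, "html.parser")
posts = [node.get_text(strip=True) for node in soup.select(SELECTOR)]
print(f"Collected {len(posts)} posts from {URL}")
```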

Platforms may seek to shut down such alternative modes of data gathering (as Facebook sought to do with the ProPublica browser plugin), or change their Terms of Service to explicitly forbid such practices – and this should lead scholars to consider whether the benefits of their research outweigh the platform’s interests. Terms of Service are often written to the maximum benefit of the platform, and may not be legally sound under applicable national legislation; the same legislation may also provide ‘fair use’ or ‘academic freedom’ exceptions that justify the deliberate breach of Terms of Service restrictions in specific contexts. As scholars, we must remember that we have a responsibility to the users of the platform, and to society as such, as well as to the platform providers. We must balance these responsibilities, by taking care that the user data we gather remain appropriately protected as we pursue questions of societal importance, and we should minimise the impact of our research on the legitimate commercial interests of the platform unless there is a pressing need to reveal malpractice in order to safeguard society. To do so can be a very difficult balancing act, of course.

Finally, we must also maintain our pressure on the platforms to provide scholarly researchers with better interfaces for data access, well beyond limited data philanthropy schemes that exclude key areas of investigation. Indeed, we must enlist others – funding bodies, policymakers, civil society institutions, and the general public itself – in bringing that pressure to bear: it is only in the face of such collective action, coordinated around the world, that these large and powerful corporations are likely to adjust their data access policies for scholarly research. And it will be important to confirm that they act on any promises of change they might make: too often have the end results they delivered not lived up to the grand rhetoric with which they were announced.

In spite of all of this, however, I want to end on a note of optimism: there still remains a crucial role for research that investigates social media practices, in themselves and especially also in the context of the wider, hybrid media system of older and newer media, and we must not and will not give up on this work. In the face of widespread hyperpartisanship and polarisation, this research is now more important than ever – and the adversities we are now confronted with are also a significant source of innovation in research methods and frameworks.
