Showing posts with label misinformation.

Sunday, September 4, 2022

Crowd control of misinformation, by limiting forwarding, by Jackson, Malladi and McAdams in PNAS

 Here's a paper in PNAS about limiting the spread of misinformation by changing the network of information flows:

Learning through the grapevine and the impact of the breadth and depth of social networks by Matthew O. Jackson, Suraj Malladi, and David McAdams

"Abstract: We study how communication platforms can improve social learning without censoring or fact-checking messages, when they have members who deliberately and/or inadvertently distort information. Message fidelity depends on social network depth (how many times information can be relayed) and breadth (the number of others with whom a typical user shares information). We characterize how the expected number of true minus false messages depends on breadth and depth of the network and the noise structure. Message fidelity can be improved by capping depth or, if that is not possible, limiting breadth, e.g., by capping the number of people to whom someone can forward a given message. Although caps reduce total communication, they increase the fraction of received messages that have traveled shorter distances and have had less opportunity to be altered, thereby increasing the signal-to-noise ratio."

Thursday, August 19, 2021

Radical content on YouTube, in PNAS

Here's a paper in PNAS that finds that YouTube viewing of politically radical content reflects viewers' other web behavior, rather than being driven by the YouTube recommender system.

Examining the consumption of radical content on YouTube by Homa Hosseinmardi, Amir Ghasemian, Aaron Clauset, Markus Mobius, David M. Rothschild, and Duncan J. Watts.

PNAS August 10, 2021 118 (32) e2101967118; https://doi.org/10.1073/pnas.2101967118

Abstract: Although it is under-studied relative to other social media platforms, YouTube is arguably the largest and most engaging online media consumption platform in the world. Recently, YouTube’s scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical “anti-woke” channels, both of which have been claimed to direct attention to radical political content. Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior, on and off YouTube, from January 2016 through December 2019. Using a labeled set of political news channels, we find that news consumption on YouTube is dominated by mainstream and largely centrist sources. Consumers of far-right content, while more engaged than average, represent a small and stable percentage of news consumers. However, consumption of “anti-woke” content, defined in terms of its opposition to progressive intellectual and political agendas, grew steadily in popularity and is correlated with consumption of far-right content off-platform. We find no evidence that engagement with far-right content is caused by YouTube recommendations systematically, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right. Rather, consumption of political content on YouTube appears to reflect individual preferences that extend across the web as a whole.


"Our data are drawn from Nielsen’s nationally representative desktop web panel, spanning January 2016 through December 2019 (SI Appendix, section B), which records individuals’ visits to specific URLs. We use the subset of N = 309,813 panelists who have at least one recorded YouTube pageview. Parsing the recorded URLs, we found a total of 21,385,962 watched-video pageviews (Table 1). We quantify the user’s attention by the duration of in-focus visit to each video in total minutes (32)."


Tuesday, November 10, 2020

Lies, damn lies, and internet conspiracy theories

 One reason lies spread faster than true stories on social media is that lies can be designed to be clickbait, while the truth is constrained by the facts.  This has been notably true with stories about election fraud, but it is by no means confined to lies originating at the top.

The Washington Post has the story:

Big Tech still hasn’t figured out how to make truth spread faster than lies--Warnings from Twitter and Facebook were the equivalent of slapping the “PARENTAL ADVISORY” labels from album covers on the president of the United States.    By Geoffrey A. Fowler

"President Trump tweeted that America’s election was being stolen, and Twitter put labels over his lies over a dozen times and counting. “This tweet is disputed and might be misleading,” it warned.

...

"But as tech products, the labels were too little, too late. There’s scant evidence that labels make a lick of difference to viewers. Moreover, they didn’t stop the flow of toxic election content on social media. That’s because social media’s business model is toxic content.

...

"when we look back on the 2020 election, we’ll remember it for the domestic disinformation campaigns and alternate-reality bubbles that grew, in part, because of technology designed to amplify them. This was the year where some 70 candidates for office embraced at least parts of the wacky QAnon online conspiracy theory, and one of them — Marjorie Taylor Greene of Georgia — got elected to Congress.

...

"There is one way labels could definitely be effective, disinformation experts agree: by making it physically harder to share misinformation — adding speed bumps to the information superhighway.

"Facebook said Friday it had added a mini speed bump: forcing people to look at an additional message before they could share a flagged post.

"Twitter was the only one that made a significant speed bump effort on election night. Trump’s tweets covered by warning labels had to be clicked on to be seen, and didn’t show retweet and like counts. And they couldn’t be shared without adding your own context on top.