
Wednesday, March 4, 2020

Can YouTube's recommender engine tone down conspiracy theories?

The NY Times has the story:

Can YouTube Quiet Its Conspiracy Theorists?
A new study examines YouTube’s efforts to limit the spread of conspiracy theories on its site, from videos claiming the end times are near to those questioning climate change. By Jack Nicas

"In January 2019, YouTube said it would limit the spread of videos “that could misinform users in harmful ways.”

"One year later, YouTube recommends conspiracy theories far less than before. But its progress has been uneven and it continues to advance certain types of fabrications, according to a new study from researchers at University of California, Berkeley.

"YouTube’s efforts to curb conspiracy theories pose a major test of Silicon Valley’s ability to combat misinformation, particularly ahead of this year’s elections. The study, which examined eight million recommendations over 15 months, provides one of the clearest pictures yet of that fight, and the mixed findings show how challenging the issue remains for tech companies like Google, Facebook and Twitter."
***********

The paper referred to seems to be this one on Hany Farid's website at Berkeley. It's very recent ("compiled on March 2, 2020," the day of the NY Times story); for example, it finds that coronavirus conspiracy videos are not being recommended even though they are readily available on the site.

A longitudinal analysis of YouTube’s promotion of conspiracy videos
Marc Faddoul, Guillaume Chaslot, and Hany Farid


Abstract: Conspiracy theories have flourished on social media, raising concerns that such content is fueling the spread of disinformation, supporting extremist ideologies, and in some cases, leading to violence. Under increased scrutiny and pressure from legislators and the public, YouTube announced efforts to change their recommendation algorithms so that the most egregious conspiracy videos are demoted and demonetized. To verify this claim, we have developed a classifier for automatically determining if a video is conspiratorial (e.g., the moon landing was faked, the pyramids of Giza were built by aliens, end of the world prophecies, etc.). We coupled this classifier with an emulation of YouTube's watch-next algorithm on more than a thousand popular informational channels to obtain a year-long picture of the videos actively promoted by YouTube. We also obtained trends of the so-called filter-bubble effect for conspiracy theories.
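To make the setup a little more concrete, here is a minimal, purely illustrative sketch of the kind of text classifier the abstract describes: score a video as conspiratorial or not from its text (title, description, transcript). This is not the authors' model; their classifier, features, and training data are described in the paper. The scikit-learn pipeline, the example texts, and the labels below are placeholders I made up for illustration.

# Hypothetical sketch: a text-based "conspiratorial video" scorer.
# NOT the paper's actual classifier; training texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: concatenated title + description + transcript
# snippets, labeled 1 (conspiratorial) or 0 (not).
texts = [
    "the moon landing was staged in a hollywood studio",
    "nasa releases new images from the apollo 11 mission archive",
    "ancient aliens built the pyramids of giza and experts are being silenced",
    "egyptologists explain how the pyramids were actually constructed",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression yield a score in [0, 1].
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def conspiracy_score(video_text):
    # Probability-like score that the video's text looks conspiratorial.
    return float(model.predict_proba([video_text])[0, 1])

print(conspiracy_score("proof the earth is flat and they are hiding it"))

In the paper, a classifier along these general lines is applied to the videos surfaced by an emulation of YouTube's watch-next recommendations on over a thousand informational channels, tracked over 15 months.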



And here are the concluding paragraphs:


"Summary. The overall reduction of conspiratorial recommendations is an encouraging trend. Nonetheless, this reduction does not make the problem of radicalization on YouTube obsolete nor fictional, as some have claimed (41). Aggregatedata hide very different realities for individuals, and although radicalization is a serious issue, it is only relevant for a fraction of the users. Those with a history of watching conspiratorial content can certainly still experience YouTube as filter-bubble, reinforced by personalized recommendations and channel subscriptions. In general, radicalization is a more complex problem than what an analysis of default recommendations cans cope, for it involves the unique mindset and viewing patternsof a user interacting over time with an opaque multi-layer neural network tasked to pick personalized suggestions from a dynamic and virtually infinite pool of ideas.


"With two billion monthly active users on YouTube, the design of the recommendation algorithm has more impact on the flow of information than the editorial boards of traditional media. The role of this engine is made even more crucial in the light of (1) The increasing use of YouTube as a primary source of information, particularly among the youth (42); (2)The nearly monopolistic position of YouTube on its market;and (3) The ever-growing weaponization of YouTube to spread disinformation and partisan content around the world (43). And yet, the decisions made by the recommendation engine are largely unsupervised and opaque to the public.


"This research is an effort to make the behavior of the algorithm more transparent, in an effort to increase the awarenessof the public and YouTube’s accountability for their statements. We hope it will fuel a public discussion, not about whether YouTube should allow for conspiratorial content on the platform, but about whether such content is appropriate to be part of the baseline recommendations on the informational YouTube.