Tuesday, September 22, 2020

Filtering 'inappropriate' content is still a challenge for AI--YouTube returns to human monitoring

The Financial Times has the story:

YouTube reverts to human moderators in fight against misinformation--Video platform relied on machines after lockdown but found they over-censored borderline content

by Alex Barker and Hannah Murphy 

"Google’s YouTube has reverted to using more human moderators to vet harmful content after the machines it relied on during lockdown proved to be overzealous censors of its video platform.

"When some of YouTube’s 10,000-strong team filtering content were “put offline” by the pandemic, YouTube gave its machine systems greater autonomy to stop users seeing hate speech, violence or other forms of harmful content or misinformation.

"But Neal Mohan, YouTube’s chief product officer, told the Financial Times that one of the results of reducing human oversight was a jump in the number of videos removed, including a significant proportion that broke no rules." 

*****************

Wired Magazine has a good backgrounder on YouTube's attempt to use AI to alter its recommender engine:

YouTube’s Plot to Silence Conspiracy Theories--From flat-earthers to QAnon to Covid quackery, the video giant is awash in misinformation. Can AI keep the lunatic fringe from going viral?

by Clive Thompson
