The Financial Times has the story:
by Alex Barker and Hannah Murphy
"Google’s YouTube has reverted to using more human moderators to vet harmful content after the machines it relied on during lockdown proved to be overzealous censors of its video platform.
"When some of YouTube’s 10,000-strong team filtering content were “put offline” by the pandemic, YouTube gave its machine systems greater autonomy to stop users seeing hate speech, violence or other forms of harmful content or misinformation.
"But Neal Mohan, YouTube’s chief product officer, told the Financial Times that one of the results of reducing human oversight was a jump in the number of videos removed, including a significant proportion that broke no rules."
*****************
Wired Magazine has a good backgrounder on the attempt to use AI to alter the recommender engine:
by Clive Thompson