In the wake of widespread criticism over the volume of hateful and extremist content still present on their platform, Alphabet-owned YouTube are taking additional steps to counteract the spread of such material, announcing that they will take on more staff to address policy-violating content. Not only will these reviewers allow for more thorough human analysis of content, but the data they provide should also help YouTube improve their machine-learning software, perhaps lessening the need for human intervention in the future.
“The goal is to bring the total number of people across Google working to address content that might violate its policies to over 10,000 in 2018,” YouTube chief executive Susan Wojcicki said in one of a pair of blog posts on Monday.
She also addressed criticism from the platform’s content creators regarding videos that were mistakenly flagged as hateful or otherwise inappropriate. These videos often had their advertising options removed, costing innocent creators money in the process.
“We need an approach that does a better job determining which channels and videos should be eligible for advertising,” said Wojcicki. “We’ve heard loud and clear from creators that we have to be more accurate when it comes to reviewing content, so we don’t demonetise videos by mistake.”
In addition, Ms Wojcicki said the company would take “aggressive action on comments, launching new comment moderation tools and in some cases shutting down comments altogether.”