TikTok starts removing certain categories of videos in the US without human review

TikTok, the short-video-sharing app, has decided to automatically remove videos on its platform that violate the company's Community Standards policy. The app will use its AI technology to remove videos that fall under the following categories:

  • Nudity
  • Sex
  • Violence
  • Graphic content
  • Illegal activities
  • Violations of its Community Standards

The initial test of the update will first be available to users in the US and Canada. Rather than relying on human review, the new feature will automatically detect and remove videos that fall under the above-mentioned categories, after which the creator of the video will be notified. TikTok announced that the automated review system will launch in the US in two to three weeks and will then gradually roll out worldwide.

Currently, uploaded videos are first run through technology tools that check whether they go against the Community Standards. According to TikTok, once a video is flagged by these tools, a member of the safety team reviews the content again; only after this full process, from the technology tools to the safety team, is the video removed from the platform and the creator notified. The ByteDance-owned company added that the upcoming feature will work alongside the safety team within the app.

TikTok announced that for a first violation the content creator will only be notified in the app, whereas after repeated violations the violator's account will be permanently banned.

These changes come after the video-sharing service, alongside Facebook, came under fire for promoting hate speech and spreading misinformation on its platform.

According to TikTok, the new feature will let the safety teams focus on more nuanced areas such as bullying, harassment, misinformation, and hateful behavior. Until now, uploaded videos have been scrutinized with the help of human moderators. A TikTok spokesperson told The Verge that human moderators will still review community reports, appeals, and other videos flagged by its automated systems.

Analysts and experts caution that the company may run into problems with an automated review system, since such systems are never 100 percent accurate. TikTok has already tested the AI tools in large markets such as Brazil and Pakistan, where it reported that about 5 percent of the videos removed automatically did not actually need to be removed. Experts therefore argue that automated review systems, or AI tools, cannot fully take the place of human moderators and safety teams.
