YouTube cracks down on AI-generated crime videos


YouTube is taking strong action to stop the spread of AI-generated content that exploits true crime stories. As of January 16, 2024, YouTube’s updated rules target videos that “realistically imitate” minors or victims describing the violent events they experienced. The change is aimed at a disturbing trend in which creators use AI to re-create victims, often giving them childlike voices to narrate the violence done to them.

Stopping harmful videos

YouTube’s updated harassment and cyberbullying policy now also targets AI-generated content depicting crime victims. Families of victims have called these videos “disgusting.” If a video breaks the rules, the uploader receives a strike: the first strike removes the video from the platform and restricts the user’s channel activity for one week.

In a blog post, YouTube said, “If your content violates this policy, we will remove the content and send you an email to let you know. If we can’t verify that a link you post is safe, we may remove the link.” The company added that a channel will be terminated if the user receives three strikes within 90 days.

A challenge for numerous platforms

Unlabeled AI-generated content is a challenge across the industry, and YouTube’s stance mirrors that of other major platforms. Like TikTok and Instagram, YouTube requires creators to disclose when their content is AI-generated. These measures show how seriously major platforms are treating the creation and distribution of synthetic content as they work to safeguard users.

Clearer rules for AI-made content

In November 2023, YouTube introduced new guidelines for using AI in videos responsibly. Creators must notify their viewers when they publish realistic-looking AI-generated content. If they don’t, their videos may be removed, or they could be suspended from the YouTube Partner Program. These rules are intended to keep creators honest about their use of AI and to keep content safe for everyone.

 


