YouTube Introduces Self-Labeling for AI-Generated Content
YouTube has launched a new feature allowing creators to indicate whether their videos include AI-generated or synthetic material. The initiative is intended to maintain transparency as AI-generated content becomes increasingly realistic and accessible. When uploading videos, creators will now see a checkbox to tick if their content includes any 'altered or synthetic' elements that could be mistaken for reality.
What Content Requires Disclosure?
The labeling requirement applies to alterations that make a person appear to say or do something they haven't, modified recordings of real-life events, or fabricated scenarios that look authentic. Examples given by YouTube include a computer-generated tornado approaching a real town and deepfake technology used to simulate a person's voice. However, creators aren't required to disclose the use of beauty filters, background blurs, or content that is obviously fictional, such as cartoons.
Details of the AI Content Policy
In a policy update last November, YouTube introduced two tiers of rules: a stricter set protecting music artists and labels, and a more lenient set covering other content. For instance, deepfake renditions of music can be taken down at an artist's request. The policy required disclosure of AI-generated content, but the specifics of implementation were unclear until now. In most cases outside the music industry, individuals affected by deepfakes will need to follow privacy reporting procedures, which YouTube says it plans to refine further.
While the platform depends on creators to label their videos truthfully, YouTube has said it may apply disclosures automatically to certain videos, particularly where there is a risk of misinformation. Videos on sensitive subjects such as health, elections, and financial advice may receive more prominent labels to caution viewers.
YouTube, AI, Disclosure