Deepfakes have been a frequent topic of discussion and concern lately. The debate intensified in India after a fake video of Rashmika Mandanna went viral, followed by one of Katrina Kaif. The issue is not limited to India or to movie stars; it is also a worry for politics and social issues, especially as some of the biggest democracies in the world, including India, the US and the UK, hold their general elections in 2024. Keeping that in mind, Meta has announced a policy that will require advertisers on Instagram and Facebook to disclose whenever an ad related to social issues, elections, or politics includes a convincingly realistic image or video, or authentic-sounding audio, that has been digitally generated or modified.
For the uninitiated, deepfake content refers to media, such as videos or images, created or manipulated using artificial intelligence to replace one person’s likeness with another’s, often convincingly. A deepfake is not the same as an edited or “photoshopped” video or image. The technology allows for the fabrication of realistic-looking content, making it seem as if individuals said or did things they never did. Deepfakes can be used for entertainment, but they also pose risks, as they can be misused to spread false information or stage fake scenarios. It is essential to be aware of the technology’s potential impact on the authenticity of visual and audio content online.
To control the spread of misinformation, Meta’s new policy requires advertisers to notify users whenever an ad has been digitally altered or modified; the disclosure will appear on the ad itself. Should Meta find that an advertiser has failed to disclose as mandated, it will reject the ad, and persistent non-compliance with the disclosure requirements may lead to penalties against the advertiser.
Meta says advertisers don’t have to disclose minor alterations, such as resizing, image cropping, colour correction, image sharpening, or any other “inconsequential” change. The policy will take effect in January 2024 and will apply to advertisers worldwide.
Beyond advertisers, Meta already has policies in place for all users regarding deepfake videos on Instagram and Facebook. Under those rules, any user who posts content that makes someone appear to say something they never said, or that presents misinformation as fact, faces action from the platform. The new policy essentially extends the existing one, now explicitly holding advertisers accountable for violating guidelines on deepfake content.
In September, Google announced a similar policy requiring advertisers to let users know when an image or audio clip in an ad has been created using artificial intelligence. The change came into effect in November 2023.