Social media platforms and tech companies will be required to stamp out harmful material created using artificial intelligence, such as deepfake intimate images and hate speech, under new online safety rules proposed by the federal government.
The communications minister, Michelle Rowland, has signalled that a suite of changes is needed to respond to “new and emerging harms”, with the Labor government expressing significant concern at widely available generative AI services being used to create offensive or dangerous images, videos and text.
Updates to the Basic Online Safety Expectations (Bose) will also ask tech companies to ensure their algorithms, which dictate what users see online, do not amplify harmful or “extreme” content, including racism, sexism and homophobia. In a speech to the National Press Club on Wednesday, Rowland will raise concerns about rising antisemitic and Islamophobic content on X, formerly known as Twitter.
“It’s clear there is more industry can do on harms that are not explicitly set out in the current expectations,” Rowland will say.
“Over the past two years, it has become harder and harder to distinguish between AI generated images and genuine ones. While this technology has incredible potential as a positive tool, we have also seen it used to create images designed to humiliate, embarrass, offend – and even abuse – others.”
Rowland will outline two key announcements in the speech: a statutory review of the Online Safety Act, led by the former ACCC deputy chair Delia Rickard, and an intention to amend the Bose system, which applies to tech and social media companies.
The Bose allows the eSafety commissioner, who oversees the Online Safety Act, to require companies to report on their progress on certain indicators. The government will open consultation on a suite of changes relating to generative AI and reducing the amplification of hate speech.
Rowland’s speech will flag “deep concern” in the community about hate speech online.
“Recent reporting about the rise in antisemitic and Islamophobic rhetoric on X makes this clear,” she will say.
A consultation paper to be shared by Rowland’s office proposes new expectations that companies take reasonable steps to “proactively minimise the extent to which generative AI capabilities produce material or facilitate activity that is unlawful or harmful.”
“This would cover, for example, the production of ‘deepfake’ intimate images or videos, class 1 material such as child sexual exploitation or abuse material, or the generation of images, video, audio or text to facilitate cyber abuse or hate speech,” the paper said.
Another expectation will oblige services to “ensure the best interests of the child is a primary consideration for all services used by children”.
A swathe of publicly available AI services – including the well-known platforms ChatGPT, Midjourney and DALL-E – can quickly and freely create text, photo or video material from user prompts. AI tools are also increasingly being incorporated into government, business and scientific work.
The eSafety commissioner in September announced a new industry code covering search engines, requiring them to eliminate child abuse material from their results and ensure generative AI products can’t be used to generate such material.
The new consultation paper says the government wants to ensure users can “enjoy the benefits of generative AI capabilities” while stamping out harms – which include “potential for the production of harmful material and activity on a scale and speed not previously possible”.
The eSafety commissioner, Julie Inman Grant, has raised concerns about child abusers using AI to create chatbots to contact victims.
The new expectations would require companies to prevent user requests that could generate harmful material, with the paper suggesting prompts or nudges urging users to reconsider their requests.
“The types and volume of harm that may be produced or facilitated through the use of generative AI will continue to change,” the consultation paper said, noting that harm reduction tools must “continue to evolve”.
Rowland said the government must “continually reassess the tools” available to it and the eSafety commissioner.
The Bose will also be updated to require companies to consider how their “recommender systems” serve content to users, with the consultation paper particularly mentioning social media platforms including Facebook, Instagram, TikTok, X and YouTube.
The paper notes those systems could “amplify harmful and extreme content” including terrorism or violent extremism, with the new expectations obliging platforms to minimise amplification of such content.
The Bose can be updated by government regulation and will not require legislation, with consultation running until February 2024. The Online Safety Act review will commence in early 2024.