Australia Sets Global Precedent with Strict Standards on Deepfake Content and Online Safety
In an unprecedented move, Australia is set to impose stringent standards on major tech companies such as Apple, Google, and Meta over the policing of online child sexual abuse and extremist content. The initiative is being spearheaded by the country's eSafety Commissioner, Julie Inman Grant, who has released draft regulations requiring tech platforms to intensify their efforts to eradicate illegal content from their services, including cloud storage and messaging apps.
World-First Industry Standards
After a comprehensive two-year process, Commissioner Inman Grant is poised to establish draft standards that will hold tech giants accountable for content shared through their platforms. The standards are intended to address serious harms such as child exploitation and terrorism-related material, including AI-generated 'deepfake' pornography. The goal is not to compel tech firms to break their encryption but to encourage proactive measures against illegal content within the bounds of what is technically feasible.
AI and the Fight Against Online Harm
That the new standards address issues such as 'deepfake' child sexual abuse material underscores how significant artificial intelligence has become in the online safety landscape. Deepfakes are AI-generated images and videos that manipulate existing media to appear realistic. The technology has fuelled a worrying trend in which fabricated images and videos are used to create and spread sexual abuse material online. By including AI-generated content in its standards, Australia is recognising and acting against the evolving nature of online threats.
Designing Safer Technology
Inman Grant emphasizes the importance of integrating safety features during the technology design phase, preventing harmful content from proliferating in the first place. She points out that encrypted messaging services such as WhatsApp remain secure yet have developed methods to detect and report abuse without breaking end-to-end encryption. By focusing on the design and testing of safety measures upfront, Australia hopes to lead by example and influence global regulatory practice.
Public Consultation and Standoff with Tech Giants
The eSafety Commissioner has launched a 31-day public consultation on the draft standards, which are set to be tabled in federal parliament and implemented within six months of registration. Once adopted, the standards will require tech companies to invest in operational trust and safety measures. Not all companies have been receptive, however: social media company X (formerly known as Twitter) has refused to pay a significant fine imposed by the Commissioner for failing to address child exploitation content and is seeking a judicial review.