After banning social media for children under 16, Australia may be preparing to tighten rules on artificial intelligence platforms as well, Reuters reports. The country's internet regulator has warned that search engines and app stores could be asked to block AI services that fail to verify users' ages. The move comes after a Reuters review found that more than half of popular AI platforms had not publicly shared steps to meet new age-restriction rules ahead of a deadline next week.

According to the report, Australia's eSafety Commissioner said that from March 9, internet services including AI chat tools such as OpenAI's ChatGPT must prevent users under 18 from accessing content related to pornography, extreme violence, self-harm and eating disorders. Companies that fail to comply could face fines of up to A$49.5 million ($35 million). The regulator said it could take action not only against AI platforms but also against "gatekeeper services" such as search engines and app stores that provide access to these tools.
Compliance still limited
A Reuters review of the 50 most popular text-based AI products found that only nine had introduced or announced age-verification systems. Another 11 platforms either applied blanket content filters or planned to block Australian users entirely, while the remaining 30 showed no clear signs of taking steps to meet the new rules.

Large AI services such as OpenAI's ChatGPT, Replika and Anthropic's Claude have begun rolling out age controls or stronger filters. Some companion chatbot providers said they would comply, while others had not published clear policies.
Growing global scrutiny
The crackdown follows rising concerns that AI chatbots may expose young users to harmful content or encourage risky behaviour. OpenAI and other AI firms have faced lawsuits abroad over claims linked to harmful interactions with minors.

Australia has not reported any chatbot-linked harm of that kind, but officials said children as young as 10 have been spending hours daily on AI platforms. The regulator expressed concern that some AI tools may use emotional engagement techniques that encourage excessive use.

With this move, Australia appears set to expand its youth online safety push from social media to artificial intelligence platforms.