Australia will consider requiring app stores to block AI services without age verification
Engadget
by Anna Washenko, March 2, 2026
AI-Generated Deep Dive Summary
Australia's government is considering strict measures to prevent younger users from accessing AI chatbots through app stores without proper age verification. According to Reuters, regulators may require app storefronts to block AI services that have not implemented age checks for mature content by a March 9 deadline. eSafety, Australia's cyber safety watchdog, has emphasized its commitment to enforcing compliance, and it may target gatekeeper services such as app stores and search engines if they fail to act.
A recent review by Reuters found that of 50 leading text-based AI chat services in the region, only nine have introduced or planned age assurance measures. Eleven services have instead applied blanket content filters or blocked all Australian users outright. With the March 9 deadline approaching, many services remain non-compliant and could face fines of up to A$49.5 million for failing to meet the regulations.
The issue of responsibility in safeguarding children from harmful online content is a global debate. In the U.S., tech giants like Apple and Google have advocated for platforms, rather than app stores, to manage age restrictions. While Australia's stance is not yet definitive, past strict measures, such as banning social media use for under-16s, suggest a likely aggressive approach.
This proposed regulation highlights Australia’s proactive efforts in digital safety, setting a precedent for global tech governance. For readers interested in tech, this underscores the growing importance of regulatory frameworks in balancing innovation with user protection, particularly for minors. The outcome could significantly influence how AI services and app stores operate globally.
Verticals: tech, consumer-tech