Australia’s push to protect children from online harms has taken a significant step forward with proposed codes that could see social media platforms fined up to $50 million for failing to prevent minors from accessing adult content. The move builds on a recent report that highlighted widespread social media use among children aged 8 to 12, below the minimum age most platforms require.
Last month, Australia’s eSafety regulator revealed that young Australians are routinely accessing platforms like YouTube, TikTok, and Snapchat, even though these services are officially restricted to teens and adults. The research found that many children use accounts set up by parents or, in some cases, manage their own profiles with little oversight. With so many kids already online, the stage was set for a major regulatory shake-up.
Now, fresh draft codes set a clear deadline: within six months, social media and technology companies must put in place a suite of new measures to ensure that minors are blocked from viewing adult content. The proposed regulations cover everything from social media and gaming services to search engines and even device manufacturers, so it isn’t just the platforms themselves under the microscope.
The draft codes require platforms that host pornography to implement age assurance measures, effectively barring children from stumbling upon explicit material. But the new rules don’t stop there — equipment makers will need to enable child-specific accounts and set up default safety restrictions, and search engines are expected to switch on “safe search” at the highest setting for users likely to be underage.
The proposals have sparked lively responses from industry giants. TikTok and Meta have both voiced concerns, with TikTok pointing to a controversial carve-out that appears to exempt YouTube from the separate under-16 ban. Whatever their objections to the details, industry players agree that protecting children online is a serious responsibility.
Jennifer Duxbury of Digi summed it up nicely: while the digital realm offers endless opportunities for learning and exploration, it’s high time to ensure that children’s online adventures don’t expose them to harmful material. “Online spaces should be safe and supportive,” Duxbury remarked, echoing the growing sentiment that robust digital safeguards can coexist with a fun, engaging internet experience.
A global signal
Australia’s tough stance is not just about local safety — it’s setting a global benchmark. With other nations, including the UK, watching closely, this regulatory push could trigger similar moves worldwide. The proposed codes, once approved by eSafety Commissioner Julie Inman Grant, will not only redefine how platforms manage age verification but also signal that child safety is paramount in the digital age.
If the codes are green-lighted, companies will have just six months to implement these new safety measures under the federal Online Safety Act. Failure to do so could mean fines that, at up to $50 million, might just be enough to wake up even the biggest tech giants.
In a digital playground where children have long been the unexpected VIPs, Australia is now drawing clear boundaries. It’s a firm reminder that while the internet is a place for creativity and connection, protecting its youngest users is a responsibility that can’t be put on hold.