EXPLAINER

Why is Australia banning social media for teenagers?

Critics say new law is ‘normalising surveillance’ for young people and risks cutting them off from vital support networks.

A girl in Sydney holds her phone in the run-up to Australia’s nationwide social media ban for children aged below 16 [File: Hollie Adams/Reuters]

By Farah Najjar

Published On 9 Dec 2025


Australia will introduce the world’s first outright ban on social media for under-16s on Wednesday.

The move marks the first time a country has imposed a blanket, age-based ban of this scale on social media platforms, raising questions over how it will work and whether it will protect young people.


What ban is Australia introducing?

From December 10, children below 16 years of age will be barred from opening or using accounts on popular social media platforms under new federal rules announced by Prime Minister Anthony Albanese’s government.

The policy requires platforms to block new underage accounts and to remove existing ones belonging to users younger than 16. Companies must also introduce systems to detect minors who attempt to access their services. The ban does not cover video gaming platforms, however.

The measure comes after amendments to the Online Safety Act, overseen by Australia’s eSafety Commissioner, were announced last year. The government says the policy is its response to rising concerns about cyberbullying, sexual exploitation, self-harm content and mental health risks.

Why is Australia doing this?

Research commissioned by the Australian government in 2023 found that four out of five children aged eight to 16 use social media, often beginning between the ages of 10 and 12. That report was led by former National Australia Bank CEO Andrew Thorburn, who recommended age restrictions.

The government says the ban is necessary to “keep children safe” amid rising concerns about online harms.


The eSafety Commissioner has also reported a sharp rise in the number of complaints related to child exploitation, cyberbullying and exposure to self-harm content in recent years.

The government has framed the policy as part of efforts to “lead globally” on online safety.

How will this ban be implemented and enforced?

The eSafety Commissioner said it will issue specific standards that platforms wishing to operate in Australia must follow. These include age-verification systems such as ID checks, which could require users to upload an image of their face, as well as regular audits and compulsory reporting on how platforms identify underage users.

Platforms that do not comply face penalties including fines of up to 49.5 million Australian dollars ($31.95m). These will apply to the companies, not to children or parents.

But experts have warned that enforcing the ban will be extremely difficult.

Joanne Orlando, a researcher in digital wellbeing and the author of Generation Connected: How to Parent in a Digital World, told Al Jazeera: “Tech-savvy teens simply use VPNs, fake birthdates or photos for face scans, or migrate to less regulated platforms like Lemon8, or to platforms not part of the ban like video games. Enforcement is proving to be difficult in the days leading up to the ban.”

Louise La Sala, senior research fellow in suicide prevention at Orygen – Australia’s National Centre of Excellence in Youth Mental Health – said, “Ultimately, reducing online harm would be a great outcome; however, we know from evidence that ‘banning’ anything from young people won’t work on its own.”

Aaron Mackey, the Electronic Frontier Foundation’s free speech and transparency litigation director, agreed that the ban is “quite impractical to enforce at scale”, noting that some verification methods “are often inaccurate” while others “are easily circumvented”. So, while some children below 16 will get around the restrictions, some adults may find themselves mistakenly barred.

He added that inaccuracies in biometric systems can also “discriminate against people of colour and people with disabilities”.

What implications does this have for users’ privacy?

Mackey said all forms of age-gating are “a privacy nightmare that burdens the civil liberties of people both young and old”.

He explained that age verification, whether via ID uploads or biometrics, requires people “to share sensitive information about themselves that could then be abused or hacked”.

“Kids are popular targets of identity theft,” he warned.

Orlando added: “Age verification requires collecting sensitive data, including government IDs, biometrics, creating risk in terms of hackers. It is also normalising surveillance for young people.”


Leading platforms, including TikTok, Snapchat, YouTube, X and Meta, have all indicated that they intend to comply with the new law. Meta owns Facebook, Instagram and Threads, as well as WhatsApp and Messenger, though the latter two fall outside the purview of the new regulation.

Meta said it supported efforts “to create safer online spaces for young people”, while TikTok said it was still reviewing the requirements issued by the government.

Meta, which is blocking new accounts for under-16s, says it has already begun removing underage users from its Facebook, Instagram and Threads platforms.

What do critics of the ban say?

Supporters, including some youth mental health organisations, say social media platforms have so far failed to enforce their own age limits and that early exposure to social media can heighten the risk of bullying and of encountering harmful content.

Australian groups such as the mental health foundation Headspace and Orygen have therefore largely welcomed stronger protections for young people, some of whom are likely to benefit from delaying social media use during vulnerable developmental stages in childhood and adolescence.

They also warn, however, that a blanket ban could pose a risk for children. “Many young people are impacted in various ways. For example, those using platforms for legitimate support networks, education, or creative expression lose access,” said Orlando.

La Sala, of Orygen, said that because the ban covers only social media platforms and not video gaming, it may not work. “It’s important to recognise that harms occur on platforms not currently included in this policy. This age delay does not prevent young people from accessing content available on these platforms without an account, and we need to support young people who use these platforms to seek help or connection.

“We also cannot forget those who are 16 and older. The platforms that they use also need to be safe. Reducing exposure to harmful content and other online harms needs to be a core element of social media use for everyone.”

Critics also say the ban could end up harming the very children it aims to protect, as social media access can be life-saving for many young people.

“We’ve surveyed young people and found that access to social media can be not only beneficial but life-saving to some. Censoring such access can shut off their ability to find community and engage in self-discovery, to pursue artistic education and opportunities, and to express themselves freely and receive valuable information,” Mackey told Al Jazeera.

Young people, he added, could become “cut off from communities and information that help them grow and develop, or even that help them preserve their own domestic safety”.

“A number of young people use social media platforms to chat with their friends, stay connected, meet others, and seek support for their mental health,” said La Sala. “This is particularly true for young people from marginalised communities. It’s important that alternative places for support and connection are shared with these young people so that we do not risk further isolating them or cutting off important supports.”

Research undertaken in Australia and other countries has shown that the impact of social media on young people is complex and varies widely. Some studies link heavy use to distress and mental health problems, while others show that online platforms can provide connection and support, especially for teenagers.


Orlando said there is no research which shows social media use directly causes mental health problems. “There is no research showing that removing social media will directly improve mental health. Instead, the research shows that social media may exacerbate existing mental health issues or exploit teenagers’ vulnerabilities.

“Removing social media will likely help, but it will not cure mental health issues. Many factors influence these, such as cost-of-living issues, family breakdown and stress; as we know, mental health is impacted by many factors.”

However, in Australia, studies by the eSafety Commissioner have found that children who use social media frequently are more likely to be exposed to harmful content.

The research found that almost three in four children aged 10 to 15 have viewed content associated with harm, including hateful material, violent videos and body-image pressure.

The regulator has also found high levels of cyberbullying, with both boys and girls reporting that they were targeted online in the past year.

Internationally, organisations such as UNICEF have highlighted the benefits social media can offer. UNICEF’s research shows that online platforms can help young people stay connected, explore their identities and access support, especially those who live far from their peers.

Are other countries likely to follow suit?

In the United States, several states, including Utah and Arkansas, have passed laws in recent years to restrict minors’ access to social media, though many have been blocked by courts on constitutional grounds.

Malaysia has indicated it is planning to introduce a ban similar to Australia’s next year.

In the United Kingdom, the 2023 Online Safety Act imposes strict obligations on platforms to protect users below the age of 18, but does not bar them from holding accounts. Users are required to upload proof of age before they can view certain material deemed harmful to children.

In October, Denmark announced that it plans to ban children under the age of 15 from holding social media accounts. Those aged 13 and 14 would be allowed access with the permission of their parents. There is no timetable for this to take effect as yet.

Denmark has been jointly testing an age-verification app from the European Commission alongside France, Spain, Italy and Greece since July this year.

In Germany, children aged 13 to 16 are only permitted to access social media with consent from their parents. However, critics say this rule is not well enforced.

In France, a 2023 law requires parental consent before children under the age of 15 can open social media accounts; however, technical challenges mean it has not yet been enforced.

The European Commission, Greece, Romania and New Zealand have also indicated an interest in setting a minimum age for social media use.