There’s a more powerful way to protect Indonesian children online, and Indonesia has the numbers to force it.
Every generation decides that something new is destroying children. Comic books. Rock music. Television. Video games. The diagnosis changes. The panic is the same.
Indonesia has now joined the queue. On Friday, Communications Minister Meutya Hafid announced the implementation of a regulation restricting children under 16 from platforms the government classifies as higher-risk — YouTube, TikTok, Instagram, Facebook, X, Threads, Bigo Live, Roblox. Children under 13 are blocked entirely. Thirteen- to fifteen-year-olds get access to platforms the ministry considers lower-risk. Full access comes at 16. Sanctions fall on platforms that fail to comply, not on children or parents. Phased implementation begins March 28, one year after the President signed the PP Tunas regulation.
The intent is not invented. UNICEF data cited by the government found around half of Indonesian children have encountered sexual content on social media. Forty-two percent said it made them feel frightened or uncomfortable. That’s not a moral panic. That’s a real problem that deserves a real response.
This isn’t one.
The mechanism doesn’t work. Age-gating requires age verification. Age verification requires either self-declaration — which every child on the internet already knows how to defeat — or something more invasive: NIK-linked identity checks, biometric data, government databases. Indonesia’s government databases have leaked repeatedly. SIM card registrations. Voter records. Millions of NIK-linked personal records circulating because someone decided security was someone else’s problem. There is no basis for trusting a new verification layer built on that same infrastructure.
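The weakness of self-declaration is easy to see in code. A minimal sketch of a hypothetical signup check (the function name and cutoff are invented for illustration): the platform can only validate the date it is given, never the person typing it.

```python
from datetime import date

MIN_AGE = 16  # the regulation's cutoff for higher-risk platforms

def self_declared_age_ok(dob: date, today: date) -> bool:
    """Self-declared age gate: computes age from whatever birthdate
    the user chooses to enter. Nothing ties the date to the person."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= MIN_AGE

# A 14-year-old passes by typing any birth year far enough back.
print(self_declared_age_ok(date(2000, 1, 1), date(2026, 2, 1)))  # True, whatever the user's real age
```

The check is correct arithmetic applied to an unverifiable input, which is why every stricter alternative reaches for identity documents or biometrics.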
Then there’s the VPN problem. Indonesia has always been among the top nations for VPN usage — penetration reached 61% in previous years — though by 2025 that figure had come down to around 31% according to Meltwater, still placing it fourth globally. The infrastructure for circumvention is normalised, widely installed, and modelled by adults in the household. When the government blocked PayPal and gaming platforms in July 2022, VPN demand jumped 196% in a single day. A 14-year-old who wants TikTok will have it within hours.
And even if enforcement worked perfectly, the harms would still be there because they are the result of design, not of access. Recommendation algorithms amplify harmful content because engagement is revenue and they are not calibrated to real-world emotional impact. Infinite scroll removes natural stopping points. Notifications are engineered to pull users back. A 16-year-old encounters the same architecture as a 14-year-old, and there is no reliable technical mechanism to change that at the platform level without structural redesign. Delaying access doesn’t change what they walk into.
Europe worked this out. Its approach to tech — the EU’s GDPR, Digital Services Act, and Digital Markets Act, and the UK’s Age Appropriate Design Code — places the compliance burden on the platforms. Not on citizens. Europeans are not banned from TikTok. TikTok is required to change how it works.
The consequences have been real. Apple withheld Apple Intelligence from EU users at launch, citing DMA regulatory uncertainty, before eventually releasing it in April 2025. Meta declined to launch its latest Llama model in Europe, citing the unpredictable regulatory environment. Google withheld AI Overviews. Platforms have had to build separate compliance infrastructure for the European market rather than simply exit it.
On children specifically: the UK Children’s Code requires that profiling-driven recommendation and targeted advertising be switched off by default for minors. In direct response, TikTok stopped sending push notifications to children in the evenings. YouTube disabled autoplay, personalisation, and targeted advertising on content made for kids. The platforms changed their products because the cost of not doing so was higher than the cost of compliance.
That’s the playbook. Restrict what platforms can do to citizens. Not restrict citizens from platforms.
The question is leverage. Europe’s came from market size. TikTok’s own DSA report put its average monthly active users across EU member states at 169 million between January and June 2025.
Indonesia has around 108 million TikTok users, the second-largest national user base in the world after the US. Those users spend nearly 45 hours per month on the app, among the highest engagement rates globally. Add the broader Southeast Asian picture: 298 million TikTok users across the region, more than Europe’s 258 million combined.
Indonesian users aren’t just a revenue line; they’re a core driver of the content loops that keep global audiences on the app. The creator economy runs on engagement. Losing Indonesian audiences doesn’t just affect ad revenue. It affects what TikTok is. The company already lost India; it doesn’t want to lose Indonesia too.
That’s a different kind of leverage, but it’s still leverage the government could have used.
Indonesia isn’t using it. Instead of demanding that platforms restructure their products, it’s restricting its own people’s access to them. That’s the wrong target. It’s also the weaker negotiating position, because it costs the platforms nothing.
Malaysia is already moving on the same issue. The Philippines, Vietnam, and Thailand are watching. If Indonesia led a coordinated regional push, not for access bans but for platform-level obligations, the numbers would become harder to ignore. Southeast Asia already exceeds Europe’s TikTok footprint. A unified regulatory framework from five of those markets is not a marginal threat.
ASEAN has never coordinated at this level. Non-interference is a founding principle and a genuine obstacle. But children’s safety is politically uncontroversial across every member state. That’s rare common ground. It’s worth using.
What Indonesia should actually demand is structurally different products for minor users, not age-gated versions of the same thing. No algorithmic amplification for minors by default. No compulsive notification design. No interaction with strangers unless a parent has unlocked it. These are not radical asks — the UK Children’s Code already mandates several of them, TikTok and YouTube have already complied in Europe, and the framework for enforcement exists. What’s missing is a government willing to use its market position to demand compliance rather than restrict its own citizens’ access.
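The demands above can be stated precisely. A minimal sketch, with invented setting names, of what “protective by default, parental unlock” means as an account configuration — not any platform’s actual settings model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountSettings:
    algorithmic_feed: bool
    push_notifications: bool
    stranger_contact: bool
    targeted_ads: bool

def settings_for(is_minor: bool, parental_unlocks: frozenset = frozenset()) -> AccountSettings:
    """Minor accounts start with every protection on; a parent can unlock
    specific features. Adult accounts keep today's defaults."""
    if not is_minor:
        return AccountSettings(True, True, True, True)
    return AccountSettings(
        algorithmic_feed="algorithmic_feed" in parental_unlocks,
        push_notifications="push_notifications" in parental_unlocks,
        stranger_contact="stranger_contact" in parental_unlocks,
        targeted_ads=False,  # never unlockable for minors
    )
```

The design choice is that restriction is the default state and access is the exception, which is the inverse of how these products ship today.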
Platforms already accept that accessibility is a non-negotiable compliance requirement. Screen readers, keyboard navigation, colour contrast standards — these add development time, they complicate QA, and no platform gets to opt out on the grounds that inclusion is expensive. The legal and reputational cost of being inaccessible is now high enough that the work gets done. Child safety is a universal value in a way that very few things are, and there is no principled argument for why it should be treated as optional where accessibility is not.
Theme parks don’t let you on a ride if you’re under the height requirement. They don’t let you board if you’re pregnant, or if the ride poses a risk to people with certain health conditions. Nobody describes this as censorship or an infringement on personal freedom. It’s a safety standard applied at the point of access, and the burden of enforcing it falls on the operator, not on the person being turned away. The same logic applies here. If a platform cannot ensure a reasonably safe experience for a 12-year-old, it should not be permitted to let a 12-year-old on. The design choices that make it unsafe are in the platform’s control.
The problem isn’t identification. It’s intent. Platforms already know a great deal about their users without ever asking for a government ID. Behavioral signals — what content a user engages with, how long they stay on certain types of posts, what they search for, what time of day they’re active, how their interaction patterns compare to known demographic clusters — are processed continuously and used to target advertising with remarkable precision. A platform that can infer that a user is likely a woman in her late twenties with an interest in fitness and a recent interest in pregnancy is not a platform that lacks the tools to infer that a user is probably fourteen.
The distinction between age verification and age estimation matters here. Verification ties a user to an identity document. Estimation uses behavioral and interaction signals to assign a probability range. Instagram and TikTok have both deployed versions of age estimation in limited contexts already. Instagram in particular uses it to reclassify accounts that show signals of being underage even when no age was declared. The UK Children’s Code endorses a version of this logic directly: if a significant proportion of a platform’s users are likely to be children, apply protective standards to all users unless there is a positive reason not to. That’s not surveillance. It’s designing conservatively.
Conservative defaults are the cleanest answer to the identification problem. If a platform cannot determine with confidence whether a user is fourteen or twenty-four, the appropriate response is to treat them as fourteen until there is a positive reason to do otherwise — not to treat them as twenty-four and make protection an optional extra that requires parental setup. The current architecture is inverted: full algorithmic engagement is the default, and restrictions are the opt-in. Flipping that default requires no new technology. It requires intent.
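The shape of that decision can be sketched in a few lines. The signal names and weights below are invented for illustration; real systems use learned models over far more inputs. The point is the flipped default, not the model:

```python
def estimate_p_adult(signals: dict) -> float:
    """Toy age estimation: map a few behavioural signals (each normalised
    to 0..1) to a rough probability that the user is an adult.
    Signal names and weights are illustrative, not any platform's."""
    score = (
        0.4 * signals.get("account_age_norm", 0.0)
        + 0.3 * signals.get("adult_interest_ratio", 0.0)
        + 0.3 * signals.get("daytime_activity_ratio", 0.0)
    )
    return max(0.0, min(1.0, score))

def treat_as_minor(p_adult: float, threshold: float = 0.95) -> bool:
    """Conservative default: protections stay on unless the platform is
    highly confident the user is an adult."""
    return p_adult < threshold
```

A user with no signals gets `p_adult = 0.0` and lands in the protected tier; only a strong positive reason moves them out. That is the inversion the current architecture lacks.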
Platforms also already collect birthdates. Most use them for nothing protective. The UK Children’s Code requires that if a platform collects a birthdate showing the user is a child, it must actually apply child-appropriate settings. That is not a technically demanding ask. It is a demand that platforms act on information they already have.
None of this is difficult to build. Kaskus has moderated a large, pseudonymous community since 1999 using account signals, behavioural patterns, and community escalation without ever requiring identity documents. Reddit has done the same at far larger scale for two decades — automated filters calibrated to account age and karma, community moderators with real removal powers, shadowbanning for persistent bad actors. Neither platform knows who its users are. Both have functional accountability structures. The missing ingredient on TikTok, Instagram, and YouTube is not technical capability. It is the cost structure that makes investment in moderation and protective defaults worthwhile, and that only changes when non-compliance becomes more expensive than compliance.
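The kind of automated filter this moderation model relies on is not exotic. A toy sketch in the style of account-signal gating, with illustrative thresholds — not Reddit’s or Kaskus’s actual rules:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int   # how long the account has existed
    karma: int      # accumulated community reputation
    reports: int    # outstanding reports against the account

def post_allowed(acct: Account, min_age_days: int = 7, min_karma: int = 10) -> bool:
    """Hold posts from new, low-reputation, or heavily reported accounts
    for review instead of publishing them immediately. No identity
    document is involved anywhere in the check."""
    if acct.reports >= 3:
        return False  # persistent bad actor: hold everything
    return acct.age_days >= min_age_days and acct.karma >= min_karma
```

None of these signals require knowing who the user is, which is the point: accountability here is built from behaviour, not identity.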
China’s solution doesn’t work. The identity question comes up because it sounds like the obvious enforcement mechanism: if platforms know who someone is, they can verify their age and apply the right rules. China went furthest with this logic. Real-name registration tied to national identity, facial recognition deployed by Douyin and other major platforms, internet curfews for all under-18s from 10pm to 6am, screen time caps that limit children under 8 to 40 minutes a day.
The results were real in places — gaming hours among under-18s fell sharply, Tencent reported a 96% drop, sedentary behaviour among children decreased and physical activity went up. But more than three-quarters of heavy young gamers found workarounds within months — relatives’ accounts, rented adult credentials, photographs held up to facial recognition cameras. A black market in adult accounts emerged. Some children were scammed trying to buy access to their own internet.
Age and identity verification affects everyone. Real-name registration doesn’t just verify age. It makes every online action traceable to a person’s identity, which has effects that go well beyond screen time. It’s a chilling mechanism for political speech, for religious expression, for any view that might attract attention from authorities. Anonymity is not a design flaw. It’s a protection — for people whose safety depends on not being findable, and for the ordinary possibility of holding an opinion without a record. Once the surveillance infrastructure is built and normalised, who controls it and what it gets used for is not a question that can be answered in advance.
Kaskus and Reddit built those accountability structures. TikTok, Instagram, and YouTube have not, because moderation is expensive and impunity is good for engagement. That is a deliberate design choice, and a reversible one. Reversing it requires these companies to spend money they would rather not spend, and the only way to make them spend it is to make non-compliance more costly than compliance. That requires leverage. Indonesia has leverage, but it’s not using it.
The regulation signed on Friday restricts Indonesian citizens from platforms rather than restricting platforms from harming Indonesian citizens. It relies on verification infrastructure built on databases that have already failed the people it’s supposed to protect. It will be circumvented by a population that has been circumventing government internet restrictions for years, with the tools already installed. And it does nothing — nothing — to change the architecture that produces the harm.
Indonesia has 108 million reasons to ask for more than the appearance of having done something, but the government won’t do it. The President already signed a trade agreement with the US that makes it much harder to regulate US companies. Indonesia traded away leverage over US platforms in exchange for tariff relief that the US Supreme Court then invalidated anyway.