Komdigi’s media circus at Meta’s Jakarta office

Communications and Digital Minister Meutya Hafid showed up unannounced at Meta’s Jakarta office on Wednesday with a full media entourage in tow — cameras rolling, reporters trailing, executives summoned, questions fired, headlines secured.

The official justification was that Meta had complied with just 28.47 percent of the government’s requests to act on online gambling and DFK content — disinformation, defamation, and hate speech — making it one of the lowest-performing platforms operating in Indonesia. That is a legitimate regulatory concern. With Facebook and WhatsApp each reaching around 112 million Indonesian users, the scale of exposure to harmful content is not hypothetical.

During the meeting, the Minister also raised Meta’s content moderation practices more broadly, including the removal of a photo she had posted from a visit to Palestine. Posts supporting Iran or Palestine are routinely flagged or taken down while comparable content directed elsewhere passes without review. These are real problems that deserve real scrutiny.

But serious regulatory scrutiny does not typically arrive with a press pack — or with the Director General of Digital Space Supervision, the State Intelligence Agency, the National Cyber and Encryption Agency, TNI Cyber Command, and Bareskrim Polri all in tow. That is not a regulatory inspection. That is a show of state force conducted in front of television cameras, and the distinction matters.

What happened on Wednesday looked far less like oversight and far more like a carefully staged demonstration of authority, timed to produce the evening news clip that shows the Minister doing something, at the precise moment when public anxiety about children and social media makes that image most politically useful. Look tough. Look decisive. Look morally righteous. The optics write themselves.

The trouble is that optics are not policy, and performative enforcement doesn’t fix anything. The content moderation problems she raised on Wednesday are real, but they won’t be resolved by a televised office visit any more than Indonesia’s children will be protected by an access ban the platforms can’t meaningfully enforce and teenagers will circumvent within hours. Both moves share the same logic: do the thing that looks like action, and let the appearance of having acted carry the political weight.

That’s a reasonable strategy if the goal is the news cycle. It’s a poor substitute for the harder work of making platforms structurally accountable, which requires leverage, sustained regulatory pressure, and the willingness to demand compliance rather than just demonstrate displeasure in front of cameras.

Indonesia to limit social media for under-16s, but it’s solving the wrong problem

There’s a more powerful way to protect Indonesian children online, and Indonesia has the numbers to force it.

Every generation decides that something new is destroying children. Comic books. Rock music. Television. Video games. The diagnosis changes. The panic is the same.

Indonesia has now joined the queue. On Friday, Communications Minister Meutya Hafid announced the implementation of a regulation restricting children under 16 from platforms the government classifies as higher-risk: YouTube, TikTok, Instagram, Facebook, X, Threads, Bigo Live, Roblox. Children under 13 are blocked entirely. Thirteen- to fifteen-year-olds get access to platforms the ministry considers lower-risk. Full access at 16. Sanctions fall on platforms that fail to comply, not on children or parents. Phased implementation begins March 28, one year after the President signed the PP Tunas regulation.

The concern is not invented. UNICEF data cited by the government found around half of Indonesian children have encountered sexual content on social media. Forty-two percent said it made them feel frightened or uncomfortable. That’s not a moral panic. That’s a real problem that deserves a real response.

This isn’t one.

The mechanism doesn’t work. Age-gating requires age verification. Age verification requires either self-declaration, which every child on the internet already knows how to defeat, or something more invasive: NIK-linked identity checks, biometric data, government databases. Indonesia’s government databases have leaked repeatedly. SIM card registrations. Voter records. Millions of NIK-linked personal records are circulating because someone decided security was someone else’s problem. There is no basis for confidence in a new verification layer built on the same infrastructure.

Then there’s the VPN problem. Indonesia has always been among the top nations for VPN usage — penetration reached 61% in previous years — though by 2025 that figure had come down to around 31% according to Meltwater, still placing it fourth globally. The infrastructure for circumvention is normalised, widely installed, and modelled by adults in the household. When the government blocked PayPal and gaming platforms in July 2022, VPN demand jumped 196% in a single day. A 14-year-old who wants TikTok will have it within hours.

And even if enforcement worked perfectly, the harms would still be there because they are the result of design, not of access. Recommendation algorithms amplify harmful content because engagement is revenue and they are not calibrated to real-world emotional impact. Infinite scroll removes natural stopping points. Notifications are engineered to pull users back. A 16-year-old encounters the same architecture as a 14-year-old, and there is no reliable technical mechanism to change that at the platform level without structural redesign. Delaying access doesn’t change what they walk into.
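To see why this is a design problem, consider the ranking objective itself. A minimal sketch (hypothetical names and numbers, not any platform’s actual code): the feed is sorted by predicted engagement, and nothing in the objective measures harm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model's estimate of clicks, replies, watch time

def engagement_ranked(feed: list[Post]) -> list[Post]:
    # The objective is revenue-aligned: sort purely by predicted engagement.
    # No term here measures emotional impact, so content that provokes
    # interaction outranks calmer content by construction.
    return sorted(feed, key=lambda p: p.predicted_engagement, reverse=True)

feed = [Post("calm explainer", 0.2), Post("outrage bait", 0.9)]
print([p.text for p in engagement_ranked(feed)])  # outrage bait comes first
```

Changing the outcome means changing that objective, which is exactly the structural redesign an age gate never touches.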

Europe worked this out. The European approach to tech, from the EU’s GDPR, Digital Services Act, and Digital Markets Act to the UK’s Age Appropriate Design Code, places the compliance burden on the platforms. Not on citizens. Europeans are not banned from TikTok. TikTok is required to change how it works.

The consequences have been real. Apple withheld Apple Intelligence from EU users at launch, citing DMA regulatory uncertainty, before eventually releasing it in April 2025. Meta declined to launch its latest Llama model in Europe, citing the unpredictable regulatory environment. Google withheld AI Overviews. Platforms have had to build separate compliance infrastructure for the European market rather than simply exit it.

On children specifically: the UK Children’s Code requires that profiling-driven curation and targeted advertising be switched off by default for minors. In direct response, TikTok stopped sending push notifications to children in the evenings. YouTube disabled autoplay, personalisation, and targeted advertising on content made for kids. The platforms changed their products because the cost of not doing so was higher than the cost of compliance.

That’s the playbook. Restrict what platforms can do to citizens. Not restrict citizens from platforms.

The question is leverage. Europe’s came from market size. TikTok’s own DSA report put its average monthly active users — or recipients as they call them — across EU member states at 178 million by the end of February 2026, with the broader European figure reaching 200 million by September 2025.

TikTok’s own published figures put Indonesia at 160 million users as of November 2025, second only to the US, if not the largest market in the world. A separate measure of TikTok’s adult advertising audience by DataReportal, which excludes under-18s and uses a different methodology, puts the figure at 180 million, the largest adult TikTok audience of any country. Those users spend nearly 45 hours per month on the app, among the highest engagement rates globally. Add the broader Southeast Asian picture: 460 million TikTok users across the region, more than double Europe’s 200 million.

Indonesian users aren’t just a revenue line; they’re a core driver of the content loops that keep global audiences on the app. The creator economy runs on engagement. Losing Indonesian audiences doesn’t just affect ad revenue. It affects what TikTok is. The company already lost India; it doesn’t want to lose Indonesia, too.

That’s a different kind of leverage than Europe’s, but it is leverage all the same, and it’s leverage the government could have used.

Indonesia isn’t using it, however. Instead of demanding that platforms restructure their products, it’s restricting its own people’s access to them. That’s the wrong target. It’s also the weaker negotiating position, because it costs the platforms nothing.

Malaysia is already moving on the same issue. The Philippines, Vietnam, and Thailand are watching. If Indonesia led a coordinated regional push, not for access bans but for platform-level obligations, the numbers would become harder to ignore. Southeast Asia already exceeds Europe’s TikTok footprint. A unified regulatory framework from five of those markets is not a marginal threat.

ASEAN has never coordinated at this level. Non-interference is a founding principle and a genuine obstacle. But children’s safety is politically uncontroversial across every member state. That’s rare common ground. It’s worth using.

What Indonesia should actually demand is not restricted versions of these platforms for children. It’s less predatory versions for everyone — with children as the primary beneficiaries. Disable algorithmic amplification by default, for all users: opt in to recommendation, not out of it. No compulsive notification design, no engineered pull-back for any user, minor or adult. Restrict stranger interaction by default, not as a parental unlock. Tie feature access to account history: new accounts get conservative defaults, and trust expands as behavior is established, not when an identity document is produced.
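A minimal sketch of what those defaults could look like, with every field name and threshold an assumption rather than any platform’s real policy:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int  # how long the account has existed
    strikes: int   # moderation actions recorded against it

def default_settings(acct: Account) -> dict:
    """Every account starts locked down; trust is earned, not declared."""
    settings = {
        "algorithmic_feed": False,    # recommendations are opt-in, not opt-out
        "push_notifications": False,  # no engineered pull-back by default
        "stranger_contact": False,    # stranger interaction restricted by default
    }
    # Feature access expands with account history, not with an ID document.
    if acct.age_days >= 90 and acct.strikes == 0:
        settings["stranger_contact"] = True  # the user may now opt in
    return settings
```

The direction of the defaults is the whole point: protection is the starting state for everyone, and the permissive configuration is what has to be earned.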

Reddit has run on a version of this model for two decades. The platform doesn’t need to know you’re a child. It just needs to stop assuming you’re a safe adult the moment you arrive. Automated filters calibrated to account age and karma, community moderators with real removal powers, conservative defaults for new accounts regardless of declared age. None of this requires a national ID. It requires intent.
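The automated-filter half of that model is similarly small. A hedged reconstruction of the pattern, not Reddit’s actual AutoModerator rules, with invented thresholds:

```python
def triage(account_age_days: int, karma: int, reported: bool) -> str:
    """Route a new post using account signals rather than identity."""
    if account_age_days < 7 or karma < 10:
        return "hold_for_review"   # untrusted accounts get human eyes first
    if reported:
        return "moderation_queue"  # community escalation, real removal powers
    return "publish"               # established accounts post normally
```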

These are not radical asks: the UK Children’s Code already mandates several of them, TikTok and YouTube have already complied in Europe, and the framework for enforcement exists. What’s missing is a government willing to use its market position to demand compliance rather than restrict its own citizens’ access.

Platforms already accept that accessibility is a non-negotiable compliance requirement. Screen readers, keyboard navigation, color contrast standards: these add development time, they complicate QA, and no platform gets to opt out on the grounds that inclusion is expensive. The legal and reputational cost of being inaccessible is now high enough that the work gets done. Child safety is a universal value in a way that very few things are, and there is no principled argument for why it should be treated as optional where accessibility is not.

Theme parks don’t let you on a ride if you’re under the height requirement. They don’t let you board if you’re pregnant, or if the ride poses a risk to people with certain health conditions. Nobody describes this as censorship or an infringement on personal freedom. It’s a safety standard applied at the point of access, and the burden of enforcing it falls on the operator, not on the person being turned away. The same logic applies here. If a platform cannot ensure a reasonably safe experience for a 12-year-old, it should not be permitted to let a 12-year-old on. The design choices that make it unsafe are in the platform’s control.

The problem isn’t identification. It’s intent. Platforms already know a great deal about their users without ever asking for a government ID. Behavioral signals (what content a user engages with, how long they stay on certain types of posts, what they search for, what time of day they’re active, how their interaction patterns compare to known demographic clusters) are processed continuously and used to target advertising with remarkable precision. A platform that can infer that a user is likely a woman in her late twenties with an interest in fitness and a recent interest in pregnancy is not a platform that lacks the tools to infer that a user is probably fourteen.

The distinction between age verification and age estimation matters here. Verification ties a user to an identity document. Estimation uses behavioral and interaction signals to assign a probability range. Instagram and TikTok have both deployed versions of age estimation in limited contexts already. Instagram in particular uses it to reclassify accounts that show signals of being underage even when no age was declared. The UK Children’s Code endorses a version of this logic directly: if a significant proportion of a platform’s users are likely to be children, apply protective standards to all users unless there is a positive reason not to. That’s not surveillance. It’s designing conservatively.
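In code, the difference is a single policy decision. A sketch that assumes a hypothetical behavioral model already emitting a probability that the user is a minor, the same kind of inference the ad stack performs routinely:

```python
def effective_age_band(p_minor: float) -> str:
    """Age *estimation*: assign a band from a probability, not an ID check."""
    if p_minor < 0.05:  # strong positive signal of adulthood (invented threshold)
        return "adult"
    return "minor"      # plausibly a child: protective standards apply
```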

Conservative defaults are the cleanest answer to the identification problem. If a platform cannot determine with confidence whether a user is fourteen or twenty-four, the appropriate response is to treat them as fourteen until there is a positive reason to do otherwise, not to treat them as twenty-four and make protection an optional extra that requires parental setup. The current architecture is inverted: full algorithmic engagement is the default, and restrictions are the opt-in. Flipping that default requires no new technology. It requires intent.

Platforms also already collect birthdates, but most use them for nothing protective. The UK Children’s Code requires that if a platform collects a birthdate showing the user is a child, it must actually apply child-appropriate settings. That is not a technically demanding ask. It is a demand that platforms act on information they already have.
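That demand is almost trivially implementable. A sketch assuming nothing beyond the birthdate already on file:

```python
from datetime import date

def is_declared_child(birthdate: date, today: date) -> bool:
    """Act on the birthdate the platform already collected."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age < 18

# If this returns True, the protective defaults sketched above should
# simply apply; no new data collection is involved.
```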

None of this is difficult to build. Kaskus has moderated a large, pseudonymous community since 1999 using account signals, behavioural patterns, and community escalation without ever requiring identity documents. Reddit has done the same at far larger scale for two decades: automated filters calibrated to account age and karma, community moderators with real removal powers, shadowbanning for persistent bad actors. Neither platform knows who its users are. Both have functional accountability structures. The missing ingredient on TikTok, Instagram, and YouTube is not technical capability. It is a cost structure that makes investment in moderation and protective defaults worthwhile, and that only changes when non-compliance becomes more expensive than compliance.

China’s solution doesn’t work. The identity question comes up because it sounds like the obvious enforcement mechanism: if platforms know who someone is, they can verify their age and apply the right rules. China went furthest with this logic. Real-name registration tied to national identity, facial recognition deployed by Douyin and other major platforms, internet curfews for all under-18s from 10pm to 6am, screen time caps that limit children under 8 to 40 minutes a day.

The results were real in places: gaming hours among under-18s fell sharply (Tencent reported a 96% drop), sedentary behaviour among children decreased, and physical activity went up. But more than three-quarters of heavy young gamers found workarounds within months: relatives’ accounts, rented adult credentials, photographs held up to facial recognition cameras. A black market in adult accounts emerged. Some children were scammed trying to buy access to their own internet.

Age and identity verification affects everyone. Real-name registration doesn’t just verify age. It makes every online action traceable to a person’s identity, which has effects that go well beyond screen time. It’s a chilling mechanism for political speech, for religious expression, for any view that might attract attention from authorities. Anonymity is not a design flaw. It’s a protection for people whose safety depends on not being findable, and for the ordinary possibility of holding an opinion without a record. Once the surveillance infrastructure is built and normalised, who controls it and what it gets used for is not a question that can be answered in advance.

Functional accountability doesn’t require knowing who your users are; it requires investing in the infrastructure that enforces it. TikTok, Instagram, and YouTube, however, have not, because moderation is expensive and impunity is good for engagement. That is a deliberate design choice, and a reversible one if a powerful enough force demands it. Reversing it requires these companies to spend money they would rather not spend, and the only way to make them spend it is to make non-compliance more costly than compliance. That requires leverage. Indonesia has leverage, but it isn’t using it.

The regulation that rolls out on the 28th restricts Indonesian citizens from platforms rather than restricting platforms from harming Indonesian citizens. It relies on verification infrastructure built on databases that have already failed the people it’s supposed to protect. It will be circumvented by a population that has been circumventing government internet restrictions for years, with the tools already installed. And it does nothing — nothing — to change the architecture that produces the harm.

Indonesia has 270 million reasons to ask for more than the appearance of having done something, but the government won’t do it. Or maybe it can’t. The President already signed a trade agreement with the US that makes it much more difficult to regulate US companies. Indonesia traded away leverage over US platforms in exchange for tariff relief that the US Supreme Court then invalidated anyway.

Twitter Backflips on Block Policy

Twitter today changed how the block function works in a way that seems counterintuitive, perhaps even the exact opposite of how it behaved previously. The change essentially turns the block into an earplug rather than a barrier separating people from accounts they don’t wish to interact with. A few hours later, though, the company backed down.

The new policy, which came into effect immediately and without public notification, allowed blocked accounts to see, follow, and interact with the accounts that blocked them, except that the blocked account would never be able to tell it had been blocked. It effectively performs a mute rather than a proper block. Why Twitter didn’t just rename it to “mute” is unclear, since the action does exactly what a mute function is expected to do.

Under the new behavior, stalkers and people with malicious intent could far more easily monitor their targets, keep track of them, and store, distribute, or otherwise use their tweets as they wished.

Preventing Retaliation

Twitter told TechCrunch that the new behavior was designed to prevent retaliatory actions. Under the old behavior, a blocked person would find out they were blocked when they tried to visit their target’s profile, because Twitter would tell them they were not authorized to view it. Apparently there have been instances in which this led to elevated and extreme responses, although the company did not provide more specific examples. Twitter also reiterated that tweets are public and can therefore be seen by everyone.

On one hand, Twitter has a point. Blocked people have always been able to see tweets from people who block them by going directly to the target account without logging in, which can easily be done from any web browser. They can also create other accounts, with inconspicuous names, to follow their targets again.

However, tracking tweets without logging in severely limits a person to merely viewing, and perhaps taking screenshots of, the tweets. They won’t be able to interact with their targets on Twitter in any way at all.

When people use different accounts to follow their targets, their activities will sooner or later be noticed, and they will be blocked again.

Derek Powazek perhaps said it best in a tweet explaining how the new block actually works.

This has led people to conclude that Twitter is siding with the stalkers and abusers, letting them do as they wish while keeping them invisible to their targets.

Twitter’s position seems to be that ignorance is bliss. What they don’t know won’t hurt them.

Reverting the block

In less than five hours, though, Twitter reversed its decision and reverted nearly all of the changes it had made to the blocking function.

Twitter’s VP of product, Michael Sippey, posted on Twitter’s main blog emphasizing that the changes were made to prevent post-block reactions, which can be far more severe than pre-block abuse, but the company decided to walk back its decision because the #restoretheblock backlash had been so overwhelming that there was even a change.org petition.

In all fairness, neither solution is ideal. One has the potential to spark severe reactions, even offline; the other lets abusers roam freely around their targets. The Twitter crowd certainly prefers the prior block behavior because it allowed more immediate control over who can interact with them, accepting the risk of retaliation in the expectation that such a risk is relatively low.