The MacBook Neo: A Wuling Air EV in a World of R34 Delusions

The “spec goblins” are at it again. You can hear them now, huddled in their dark corners of Reddit, screeching about the “indignity” of a USB 2.0 port and the lack of MagSafe. They’re busy counting cores and measuring nits while completely missing the point. Apple hasn’t released a “bad” laptop; they’ve finally figured out how to sell us their leftovers—and make us love them for it.

At $599, the MacBook Neo is the “Fisher-Price Macintosh.” It’s a heat-seeking missile aimed directly at the bloated, plastic-laden mid-range Windows market. And the secret sauce? It’s powered by the A18 Pro—or more accurately, the A18 Pro chips that weren’t “Pro” enough for the iPhone. Apple has taken the “binned” silicon and given it a second life in an aluminum shell. It’s genius logistics disguised as a “breakthrough” price.

But let’s talk about the automotive delusion. The spec bros are trashing the Neo because it isn’t an over-spec’d BMW M3 or some twin-turbo Nissan Skyline GT-R R34. They want a machine that can handle 150 mph on a track they’ll never visit. People buying the MacBook Neo won’t be doing 4K video productions (though they probably could) or heavy workloads that need the latest and greatest. They’ll be using apps like Google Docs, Canva, Notion, or Office, maybe a little Claude Code, not multitrack FCP or sophisticated Photoshop and motion graphics work.

Then there’s the other side: the “Appliance” crowd. They love the Wuling Air EV or the BYD Atto 1. These cars are the darlings of the streets right now because they’re compact, affordable, and honest. They don’t pretend to be racers; they’re just stylish city runabouts.

Now, full disclosure: I wouldn’t buy a Neo for myself. And I certainly wouldn’t trade my 1997 BMW 323i for a BYD Atto 1. I’ll take an aging inline-six with actual road feel over a silent electric pod any day (though I wouldn’t say no to a more proper EV like the Hyundai Ioniq or a BYD Seal).

But we have to stop projecting our “enthusiast” needs onto the general public. Most people don’t need 0-60 in three seconds or a liquid-cooled GPU. They need to get to the grocery store or finish a term paper without the “transmission” falling out.

The Neo is the Air EV of computing. The benchmarks don’t lie: even a “binned” A18 Pro wipes the floor with the legendary M1 in single-core tasks. It’s 50% faster than comparable Windows PCs where it actually matters for daily use. It’s a 3nm monster in a toy’s clothing.

Yes, the speakers sound like tin cans, there’s no Thunderbolt, and only one of the two USB ports is high speed. But you don’t buy a budget EV and then complain that it lacks Italian leather. You buy it because it’s $599, it looks great in different colors, and it’s a portal into an ecosystem that actually works. Right now you can budget $2,000 and get an Apple MacBook Neo, an Apple Watch SE, an iPhone 17e, a pair of AirPods 4, and a year of Apple One, and still have some change. Before this month that wasn’t possible!

The MacBook Neo is the spiritual successor to the base-model iPad. It’s for the switcher who is tired of Windows breaking their life, and the parent who isn’t buying a $1,200 Facebook machine. I’ll keep my “classic” power-user gear, and you probably will too. But while the spec-heads are busy complaining about port speeds, these low-end MacBooks will fly off the shelves and into the hands of people who need them.

Komdigi’s media circus at Meta’s Jakarta office

Communications and Digital Minister Meutya Hafid showed up unannounced at Meta’s Jakarta office on Wednesday with a full media entourage in tow — cameras rolling, reporters trailing, executives summoned, questions fired, headlines secured.

The official justification was that Meta had complied with just 28.47 percent of the government’s requests to act on online gambling and DFK content — disinformation, defamation, and hate speech — making it one of the lowest-performing platforms operating in Indonesia. That is a legitimate regulatory concern. With Facebook and WhatsApp each reaching around 112 million Indonesian users, the scale of exposure to harmful content is not hypothetical.

During the meeting, the Minister also raised Meta’s content moderation practices more broadly, including the removal of a photo she had posted from a visit to Palestine. Posts supporting Iran or Palestine are routinely flagged or taken down while comparable content directed elsewhere passes without review. These are real problems that deserve real scrutiny.

But serious regulatory scrutiny does not typically arrive with a press pack — or with the Director General of Digital Space Supervision, the State Intelligence Agency, the National Cyber and Encryption Agency, TNI Cyber Command, and Bareskrim Polri all in tow. That is not a regulatory inspection. That is a show of state force conducted in front of television cameras, and the distinction matters.

What happened on Wednesday looked far less like oversight and far more like a carefully staged demonstration of authority, timed to produce the evening news clip that shows the Minister doing something, at the precise moment when public anxiety about children and social media makes that image most politically useful. Look tough. Look decisive. Look morally righteous. The optics write themselves.

The trouble is that optics are not policy, and performative enforcement doesn’t fix anything. The content moderation problems she raised on Wednesday are real but they won’t be resolved by a televised office visit any more than Indonesia’s children will be protected by an access ban the platforms can’t meaningfully enforce and teenagers will circumvent within hours. Both moves share the same logic: do the thing that looks like action, and let the appearance of having acted carry the political weight.

That’s a reasonable strategy if the goal is the news cycle. It’s a poor substitute for the harder work of making platforms structurally accountable, which requires leverage, sustained regulatory pressure, and the willingness to demand compliance rather than just demonstrate displeasure in front of cameras.

Indonesia to limit social media for under-16s but it’s solving the wrong problem

There’s a more powerful way to protect Indonesian children online, and Indonesia has the numbers to force it.

Every generation decides that something new is destroying children. Comic books. Rock music. Television. Video games. The diagnosis changes. The panic is the same.

Indonesia has now joined the queue. On Friday, Communications Minister Meutya Hafid announced the implementation of a regulation restricting children under 16 from platforms the government classifies as higher-risk — YouTube, TikTok, Instagram, Facebook, X, Threads, Bigo Live, Roblox. Children under 13 are blocked entirely. Thirteen to fifteen-year-olds get access to platforms the ministry considers lower-risk. Full access at 16. Sanctions fall on platforms that fail to comply, not on children or parents. Phased implementation begins March 28, one year after the President signed the PP Tunas regulation.

The concern is not invented. UNICEF data cited by the government found that around half of Indonesian children have encountered sexual content on social media. Forty-two percent said it made them feel frightened or uncomfortable. That’s not a moral panic. That’s a real problem that deserves a real response.

This isn’t one.

The mechanism doesn’t work. Age-gating requires age verification. Age verification requires either self-declaration — which every child on the internet already knows how to defeat — or something more invasive: NIK-linked identity checks, biometric data, government databases. Indonesia’s government databases have leaked repeatedly. SIM card registrations. Voter records. Millions of NIK-linked personal records circulating because someone decided security was someone else’s problem. The confidence required to trust a new verification layer built on the same infrastructure has no basis.

Then there’s the VPN problem. Indonesia has always been among the top nations for VPN usage — penetration reached 61% in previous years — though by 2025 that figure had come down to around 31% according to Meltwater, still placing it fourth globally. The infrastructure for circumvention is normalised, widely installed, and modelled by adults in the household. When the government blocked PayPal and gaming platforms in July 2022, VPN demand jumped 196% in a single day. A 14-year-old who wants TikTok will have it within hours.

And even if enforcement worked perfectly, the harms would still be there because they are the result of design, not of access. Recommendation algorithms amplify harmful content because engagement is revenue and they are not calibrated to real-world emotional impact. Infinite scroll removes natural stopping points. Notifications are engineered to pull users back. A 16-year-old encounters the same architecture as a 14-year-old, and there is no reliable technical mechanism to change that at the platform level without structural redesign. Delaying access doesn’t change what they walk into.

Europe worked this out. The EU’s approach to tech — GDPR, the Digital Services Act, the Digital Markets Act, the UK’s Age Appropriate Design Code — places the compliance burden on the platforms. Not on citizens. Europeans are not banned from TikTok. TikTok is required to change how it works.

The consequences have been real. Apple withheld Apple Intelligence from EU users at launch, citing DMA regulatory uncertainty, before eventually releasing it in April 2025. Meta declined to launch its latest Llama model in Europe, citing the unpredictable regulatory environment. Google withheld AI Overviews. Platforms have had to build separate compliance infrastructure for the European market rather than simply exit it.

On children specifically: the UK Children’s Code prohibits algorithmic curation and targeted advertising for minors. In direct response, TikTok stopped sending push notifications to children in the evenings. YouTube disabled autoplay, personalisation, and targeted advertising on content made for kids. The platforms changed their products because the cost of not doing so was higher than the cost of compliance.

That’s the playbook. Restrict what platforms can do to citizens. Not restrict citizens from platforms.

The question is leverage. Europe’s came from market size. TikTok’s own DSA report put its average monthly active users — or recipients as they call them — across EU member states at 178 million by the end of February 2026, with the broader European figure reaching 200 million by September 2025.

TikTok’s own published figures put Indonesia at 160 million users as of November 2025 — second only to the US, if not the largest market in the world. A separate measure of TikTok’s adult advertising audience by Data Reportal, which excludes under-18 and uses a different methodology, puts that figure at 180 million, the largest adult TikTok audience of any country globally. Those users spend nearly 45 hours per month on the app, among the highest engagement rates globally. Add the broader Southeast Asian picture: 460 million TikTok users across the region — more than double Europe’s 200 million.

Indonesian users aren’t just a revenue line; they’re a core driver of the content loops that keep global audiences on the app. The creator economy runs on engagement. Losing Indonesian audiences doesn’t just affect ad revenue. It affects what TikTok is. The company already lost India; it doesn’t want to lose Indonesia too.

That’s a different kind of leverage but at least it’s still leverage that the government could have used.

Indonesia isn’t using it, however. Instead of demanding that platforms restructure their products, it’s restricting its own people’s access to them. That’s the wrong target. It’s also the weaker negotiating position, because it costs the platforms nothing.

Malaysia is already moving on the same issue. The Philippines, Vietnam, and Thailand are watching. If Indonesia led a coordinated regional push, not for access bans but for platform-level obligations, the numbers would become harder to ignore. Southeast Asia already exceeds Europe’s TikTok footprint. A unified regulatory framework from five of those markets is not a marginal threat.

ASEAN has never coordinated at this level. Non-interference is a founding principle and a genuine obstacle. But children’s safety is politically uncontroversial across every member state. That’s rare common ground. It’s worth using.

What Indonesia should actually demand is not restricted versions of these platforms for children. It’s less predatory versions for everyone — with children as the primary beneficiaries. Disable algorithmic amplification by default, for all users: opt in to recommendation, not out of it. No compulsive notification design, no engineered pull-back for any user, minor or adult. Restrict stranger interaction by default, not as a parental unlock. Tie feature access to account history: new accounts get conservative defaults, and trust expands as behavior is established, not when an identity document is produced.

Reddit has run on a version of this model for two decades. The platform doesn’t need to know you’re a child. It just needs to stop assuming you’re a safe adult the moment you arrive. Automated filters calibrated to account age and karma, community moderators with real removal powers, conservative defaults for new accounts regardless of declared age. None of this requires a national ID. It requires intent.
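The trust-gated model described above can be sketched in a few lines. This is an illustrative sketch only, not Reddit’s actual implementation; the thresholds, field names, and feature flags are all invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int  # how long the account has existed
    karma: int     # accumulated community trust signal

def feature_flags(acct: Account) -> dict:
    """Conservative defaults for new accounts; trust expands with history.

    Thresholds are hypothetical, not taken from any real platform.
    No identity document is required: the gate is behavioral history.
    """
    trusted = acct.age_days >= 90 and acct.karma >= 500
    return {
        "algorithmic_feed": trusted,        # recommendation only after trust is earned
        "push_notifications": trusted,      # no engineered pull-back for new accounts
        "messages_from_strangers": trusted, # stranger contact is locked by default
        "posting_rate_limit": None if trusted else 5,  # posts/day cap for new accounts
    }
```

A brand-new account gets the restricted profile automatically; an account with months of history and positive community signals graduates to full features. The point is that the gate requires no national ID, only intent.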

These are not radical asks. The UK Children’s Code already mandates several of them, TikTok and YouTube have already complied in Europe, and the framework for enforcement exists. What’s missing is a government willing to use its market position to demand compliance rather than restrict its own citizens’ access.

Platforms already accept that accessibility is a non-negotiable compliance requirement. Screen readers, keyboard navigation, color contrast standards: these add development time, they complicate QA, and no platform gets to opt out on the grounds that inclusion is expensive. The legal and reputational cost of being inaccessible is now high enough that the work gets done. Child safety is a universal value in a way that very few things are, and there is no principled argument for why it should be treated as optional where accessibility is not.

Theme parks don’t let you on a ride if you’re under the height requirement. They don’t let you board if you’re pregnant, or if the ride poses a risk to people with certain health conditions. Nobody describes this as censorship or an infringement on personal freedom. It’s a safety standard applied at the point of access, and the burden of enforcing it falls on the operator, not on the person being turned away. The same logic applies here. If a platform cannot ensure a reasonably safe experience for a 12-year-old, it should not be permitted to let a 12-year-old on. The design choices that make it unsafe are in the platform’s control.

The problem isn’t identification. It’s intent. Platforms already know a great deal about their users without ever asking for a government ID. Behavioral signals (what content a user engages with, how long they stay on certain types of posts, what they search for, what time of day they’re active, how their interaction patterns compare to known demographic clusters) are processed continuously and used to target advertising with remarkable precision. A platform that can infer that a user is likely a woman in her late twenties with an interest in fitness and a recent interest in pregnancy is not a platform that lacks the tools to infer that a user is probably fourteen.

The distinction between age verification and age estimation matters here. Verification ties a user to an identity document. Estimation uses behavioral and interaction signals to assign a probability range. Instagram and TikTok have both deployed versions of age estimation in limited contexts already. Instagram in particular uses it to reclassify accounts that show signals of being underage even when no age was declared. The UK Children’s Code endorses a version of this logic directly: if a significant proportion of a platform’s users are likely to be children, apply protective standards to all users unless there is a positive reason not to. That’s not surveillance. It’s designing conservatively.

Conservative defaults are the cleanest answer to the identification problem. If a platform cannot determine with confidence whether a user is fourteen or twenty-four, the appropriate response is to treat them as fourteen until there is a positive reason to do otherwise, not to treat them as twenty-four and make protection an optional extra that requires parental setup. The current architecture is inverted: full algorithmic engagement is the default, and restrictions are the opt-in. Flipping that default requires no new technology. It requires intent.
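The inverted-default logic above can be expressed directly: protection is the base case, and full engagement is what requires positive evidence. A minimal sketch, assuming a hypothetical interface where the platform’s own behavioral age-estimation model supplies an age estimate and a confidence score (the names and the 0.9 threshold are invented for illustration):

```python
def effective_profile(estimated_age: float, confidence: float) -> str:
    """Treat uncertain users as minors by default.

    `estimated_age` and `confidence` would come from the platform's
    own age-estimation model (hypothetical interface). Only a confident
    adult estimate unlocks the full-engagement profile; everything
    else falls back to protective defaults.
    """
    if estimated_age >= 18 and confidence >= 0.9:
        return "adult"      # full features require positive evidence
    return "protected"      # conservative defaults for everyone else
```

Under this rule a confidently-adult account gets full features, while both a likely-fourteen-year-old and an adult the model cannot confidently classify get the protected profile. No new technology is involved; only the default is flipped.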

Platforms also already collect birthdates but most use them for nothing protective. The UK Children’s Code requires that if a platform collects a birthdate showing the user is a child, it must actually apply child-appropriate settings. That is not a technically demanding ask. It is a demand that platforms act on information they already have.

None of this is difficult to build. Kaskus has moderated a large, pseudonymous community since 1999 using account signals, behavioural patterns, and community escalation, without ever requiring identity documents. Reddit has done the same at far larger scale for two decades: automated filters calibrated to account age and karma, community moderators with real removal powers, shadowbanning for persistent bad actors. Neither platform knows who its users are. Both have functional accountability structures. The missing ingredient on TikTok, Instagram, and YouTube is not technical capability. It is the cost structure that makes investment in moderation and protective defaults worthwhile, and that only changes when non-compliance becomes more expensive than compliance.

China’s solution doesn’t work. The identity question comes up because it sounds like the obvious enforcement mechanism: if platforms know who someone is, they can verify their age and apply the right rules. China went furthest with this logic. Real-name registration tied to national identity, facial recognition deployed by Douyin and other major platforms, internet curfews for all under-18s from 10pm to 6am, screen time caps that limit children under 8 to 40 minutes a day.

The results were real in places: gaming hours among under-18s fell sharply (Tencent reported a 96% drop), sedentary behaviour among children decreased, and physical activity went up. But more than three-quarters of heavy young gamers found workarounds within months: relatives’ accounts, rented adult credentials, photographs held up to facial recognition cameras. A black market in adult accounts emerged. Some children were scammed trying to buy access to their own internet.

Age and identity verification affects everyone. Real-name registration doesn’t just verify age. It makes every online action traceable to a person’s identity, which has effects that go well beyond screen time. It’s a chilling mechanism for political speech, for religious expression, for any view that might attract attention from authorities. Anonymity is not a design flaw. It’s a protection for people whose safety depends on not being findable, and for the ordinary possibility of holding an opinion without a record. Once the surveillance infrastructure is built and normalised, who controls it and what it gets used for is not a question that can be answered in advance.

Functional accountability doesn’t require knowing who your users are; it requires investing in the infrastructure to enforce it. TikTok, Instagram, and YouTube, however, have not, because moderation is expensive and impunity is good for engagement. That is a deliberate design choice, and one that is reversible if a powerful enough force necessitates it. Reversing it requires these companies to spend money they would rather not spend, and the only way to make them spend it is to make non-compliance more costly than compliance. That requires leverage. Indonesia has leverage, but it’s not using it.

The regulation that rolls out on the 28th restricts Indonesian citizens from platforms rather than restricting platforms from harming Indonesian citizens. It relies on verification infrastructure built on databases that have already failed the people it’s supposed to protect. It will be circumvented by a population that has been circumventing government internet restrictions for years, with the tools already installed. And it does nothing — nothing — to change the architecture that produces the harm.

Indonesia has 270 million reasons to ask for more than the appearance of having done something, but the government won’t do it. Or maybe it can’t. The President already signed a trade agreement with the US which makes it much harder to regulate US companies. Indonesia traded away leverage over US platforms in exchange for tariff relief that the US Supreme Court then invalidated anyway.

I Wanted to Be Wrong About eFishery. I Really Did.

I remember the pitch. I remember the guy. I remember sitting in the same room as him around ten years ago, listening to people praise his tenacity, watching well-regarded startup figures laud him as a visionary, and walking away with that gnawing sense I’ve come to trust over the years: the one that kicks in when a story feels too clean, too heartwarming, too startup-perfect.

But I didn’t say anything publicly, partly to avoid the backlash that comes with holding a markedly opposing view and being openly skeptical, and partly because of the predictable judgment that would have followed: accusations of envy from someone nowhere near as successful. It was, after all, a gut feeling with little to back it up, and I wasn’t about to go on a mission to take down the nation’s latest tech darling, the pride and joy of the Indonesian startup community, with no support. The company was an international sensation; people in my circle knew of my doubts, but I don’t recall posting publicly about them.

When everyone else was throwing praise and cash at a fish-feeder startup as if it were the second coming of Grameen Bank, it’s easy to start wondering if maybe you’re just being cynical. Maybe you’re jaded. Maybe the founder really was a scrappy visionary from East Jakarta who had cracked aquaculture and was about to scale empathy and catfish across Southeast Asia. I mean, look at all those articles about the company and how this guy appeared out of nowhere and became something of a tech-startup prophet.

Except now, here we are: $300 million gone, farmers screwed, machines abandoned, and the poster child of “tech for good” exposed as a meticulously constructed con.

And you know what? I’m not surprised. I’m pissed.

Because I wanted to be wrong. I wanted this story to be true. I wanted this to be the one that proved that impact and innovation and bottom-of-the-pyramid hustle could build something real. But from the beginning, eFishery had all the wrong kinds of charm: the underdog myth polished to perfection, the handcrafted pitch deck trauma-bonding with VCs who wanted to save the world without leaving the hotel lounge.

He said all the right things. He did all the right gestures, looking all pious and revered. And when the numbers didn’t line up? When the tech was too expensive for the people it was supposed to help? When the revenue made zero sense for a company claiming to transform Indonesia’s rural fish farms? Everyone just nodded harder.

I watched as global investors (SoftBank, Temasek, Sequoia’s Peak XV, Social Capital) lined up to outbid each other for a slice of this sweet, scalable fiction. And the media? Oh, we played along too. We love a redemption arc. We love a startup that feeds fish and our desire to feel like capitalism might still be capable of doing something decent. Again, with all these big-name international funds coming in to feed the fish-feeding startup, who am I to contradict their supposed intellect and superior judgment?

But deep down, I kept thinking: this doesn’t smell like fish. It smells like a fishy performance.

Now that it’s unraveled, this wasn’t just a few optimistic numbers or an overzealous forecast. This was systemic. Two sets of books. Ghost transactions. Fake shell companies. A finance operation so convoluted it’d make a crypto bro blush. All of it propped up by a moral calculus so warped it might as well have been cribbed from a freshman philosophy seminar: “Yes, I lied, but I helped some farmers, so doesn’t that count for something?”

No, it doesn’t. You don’t get to run over everyone with the trolley and call it “net positive.”

The real damage here isn’t just financial. It’s reputational. It’s trust. It’s yet another blow to the already fragile belief that startups in emerging markets can build something real without burning down the ecosystem around them. This kind of fraud doesn’t just hurt investors. It makes it harder for every honest founder grinding away on a real solution with real traction and real limitations.

And don’t get me started on due diligence. Multiple rounds of funding, multiple term sheets, global funds with armies of analysts, and no one noticed the company stopped filing basic financials in Singapore? That feeder machines were supposedly deployed at scale with zero supply chain footprint? That fish feed producers weren’t even aware of this supposed revolution happening in their own backyard?

The worst part? Some people will still excuse it. They’ll frame it as a tragedy. As a good person corrupted by pressure. A “lesson” for the ecosystem. I get it. That’s cleaner. Easier. But I can’t do that. Not after watching people celebrate this company like it was changing the world, when some of us knew it wasn’t adding up.

There were moments when I wondered if I was just being too harsh, too skeptical. I thought, maybe I’m just tired of the hype machine. Maybe I’m projecting.

Turns out I wasn’t projecting. I was just paying attention and my gut was screaming against my rationale.

And now, here’s the wreckage: laid-off staff, bankrupt farmers, investors licking wounds, and a founder who thinks starting a frozen seafood business is part of his redemption arc.

No. You don’t get to fail upward on the backs of people you lied to.

This wasn’t inevitable. This wasn’t an honest mistake. This was a choice, repeated, amplified, and dressed up as progress. And he did it because everyone he asked told him it was okay, because they all did it too. They all failed him, and everyone paid the price. Fake it til you make it, they said. Well, in this story, nobody made it.

And I hate that my gut feeling was right.

On the other hand, he managed to hoodwink Chamath Palihapitiya, who deserves everything coming his way.

Apple’s billion dollar Indonesian drama

The Apple investment saga in Indonesia highlights the tension between government ambitions, expectations, and the realities of global business strategies.


Tirto published an article about what’s happening with the Apple investment story in Indonesia with quotes and statements from government officials and analysts. It wouldn’t be the Indonesian government if it didn’t generate drama out of foreign relations or commercial arrangements worthy of a telenovela.

A few things about this drama. Apple has yet to deposit the last $14 million of the $100 million investment commitment it made in 2016. It’s chump change for the company, but necessary to unlock the permit for the latest iPhones and end the sales ban the government enacted last year over the shortfall. Only Apple knows definitively why it hasn’t delivered. Meanwhile, there’s been no update on the status of the Bali Apple Academy, announced by Tim Cook during his April visit to the country. This fourth Academy in the country is likely part of the unrealized investment.

Indonesia has also been on Apple’s sales performance radar for a few years now, having posted consecutive quarterly sales increases and been mentioned specifically during multiple earnings calls, so it’s in Apple’s best interest to keep the momentum going. The country makes roughly 50 million Android phones a year, mainly for the domestic market, while 85% of phone imports in 2023 (2.3 million units, worth around $2 billion) were iPhones. The government is keen to reduce this foreign spending by getting Apple to make phones locally.

Armed with this information, the Indonesian government decided to increase pressure on the company to make good on its promise, weaponising the sales ban to eventually extract an offer of a billion-dollar investment late last year.

Political ego meets business reality

Expecting companies to invest in Indonesia just because they’re doing well in sales ignores the realities of running a sustainable business. Sure, it’s fair to want businesses to contribute to the markets they profit from, but investments can’t be driven by sales numbers alone. They need to make sense, whether it’s about supply chains, regulations, or long-term viability. Pressuring companies to invest without considering these factors often leads to rushed, unsustainable decisions that end up costing everyone in the long run.

That said, there’s room for a balanced approach. Instead of tying investments directly to sales, Indonesia could focus on creating conditions that make investing worthwhile, like improving infrastructure, offering clear incentives, and ensuring regulatory stability. This way, companies can contribute meaningfully without being forced into decisions that don’t align with their business goals. Fair contributions are important, but they should come from partnerships built on mutual benefit, not pressure. Otherwise, it’s just a short-term fix with a long-term price tag.

Apple’s Vietnamese success

Indonesian officials and analysts love to compare Apple’s meager investment in the country with the $16 billion Apple has spent in Vietnam since 2019. The company had 26 suppliers and 28 factories there as of 2022, and it announced in April that it will spend much more.

Apple didn’t invest in Vietnam because the market loves the iPhone so much. It has been investing for years, increasing its commitment each time, because the government offers attractive investment opportunities and incentives, provides a stable and consistent environment for businesses, delivers the necessary labor force, and ensures long-term investment and production sustainability and security despite political upheavals. Not to mention the factories are mostly located near China, which allows Apple to maintain a streamlined supply chain. Indonesia doesn’t have that advantage.

Vietnamese mobile developers also took up Apple’s platforms because they saw opportunities, not because they were pushed or coaxed into them. They didn’t need an Apple Academy to get going. Most Indonesian developers and companies only see opportunities based on local sales numbers and market size; they don’t see beyond the domestic market. That’s why it was a struggle to find quality Mac and iOS apps and developers from Indonesia before the Academies opened.

By the way

The article also mentioned the Ministry of Industry spokesperson saying that Apple submitted its investment proposal over WhatsApp. It sounds like the government wants to shame Apple for sending such an important document over a chat app, but the country runs almost entirely on WhatsApp. Communications within and across government ministries and agencies are done almost exclusively on the platform, with letterhead documents following for official records.

What are the chances that they sent it that way because they were told to submit the document ASAP and the paper doc would follow after, and that they haven’t managed to schedule the meeting with the Ministries because November and December are holiday months for the company? I mean, if it’s that important, Tim Cook could get a few execs to drop their holiday plans and make the meeting, but it seems the urgency of this deal has yet to reach that critical level.

Indonesia’s big tech dream among broken systems

Bloomberg has a piece criticizing the way the Indonesian government has forced Apple to invest a billion dollars and make a commitment to build a factory or two in the country.

Using a protectionist playbook to get companies to build factories could end up sidelining Southeast Asia’s largest economy when neighbors are rolling out the red carpet for investors who are relocating from China ahead of Donald Trump’s potential tariffs, analysts said.

What Indonesian policymakers, officials, and ultranationalists refuse to acknowledge isn’t just the shortsightedness of protectionist policies, but the recklessness of enforcing them without the infrastructure to support a modern tech manufacturing ecosystem.

They cling to the illusion that forcing tech giants to build products locally is enough, ignoring the fact that manufacturing doesn’t happen in isolation, it’s an interconnected ecosystem dependent on robust infrastructure, not just financial sticks and carrots.

The Indonesian government isn’t just using the wrong policy, they’re operating with the wrong mindset entirely. They also haven’t squared the collapsing textile industry and falling demand in the auto industry with their tech ambitions. Apple manufactures devices for the global market regardless of where they’re made, while Indonesia’s manufacturing industries tend to be dominated by domestic sales.

Local content requirements cover a range of industries, from cars to medical devices. Together with decades-old problems such as red tape, high taxes and a less productive workforce, Indonesia’s manufacturing growth has slowed to a crawl.

In contrast, neighbors like Vietnam and India are offering tax incentives, swift approvals and the freedom to source their components from across their global supply chains, Gupta of the Center for Indonesian Policy Studies said.

That makes them attractive for companies looking to produce for export and explains why Apple can invest a much larger $15 billion in Vietnam despite the nation having a smaller domestic market than Indonesia, he said.

How platforms like TikTok and Twitter are like life itself

Social platforms reflect people’s behaviors but, unlike life, you can uninstall and stop visiting them.

TikTok and Twitter are often described as mirrors of life; chaotic, messy, sometimes brilliant, sometimes horrifying. But here’s the thing: life didn’t come with an “uninstall” button. These platforms do, sort of (you can remove the apps or stop visiting them altogether). And that makes it a lot harder to accept their messiness as something we just have to live with.

The harm they cause is undeniable. The misinformation, the rabbit holes, the amplification of violence and hate, it’s all right there, front and center. And because these aren’t immutable forces of nature but products of human design, it feels logical to think: Why not just turn them off? If a bridge kept collapsing under people’s feet, we’d stop letting people walk on it. If a factory was spewing toxins into the air, we wouldn’t celebrate the occasional mural painted on its walls, we’d shut the thing down.

But TikTok and Twitter aren’t just digital bridges or toxic factories, they’re also marketplaces, stages, classrooms, protest grounds, and cultural archives. They’ve been instrumental in amplifying marginalized voices, organizing grassroots movements, and spreading ideas that would’ve otherwise been silenced. Shutting them down wouldn’t just erase the harm, it would also erase the joy, the connection, the organizing power, and the little moments of humanity they enable.

That’s the tension we’re stuck with: the pull between “this is causing so much damage” and “this is doing so much good.” And it’s not a tension we can resolve cleanly, because both are true. These platforms are not neutral, they’re shaped by design choices, incentives, and algorithms that reward outrage, escalate conflict, and keep users scrolling no matter the emotional cost. But they’re also spaces where real, meaningful things happen, sometimes in spite of those same algorithms.

It’s easier to point fingers at the platforms themselves than to reckon with the fact that their messiness isn’t an anomaly, it’s a reflection. They thrive on the same things we do: conflict, validation, novelty, and the occasional hit of collective catharsis. The darkness they expose isn’t artificially generated, it’s drawn out from people who were always capable of it. TikTok and Twitter didn’t invent bad faith arguments, moral panic cycles, or performative empathy, they just turned them into highly optimized content formats.

That’s why it’s so tempting to reach for the “off” switch. Because these platforms don’t just show us other people’s mess, they show us our own. They force us to confront the uncomfortable reality that the world doesn’t just have ugliness, it produces it. And no matter how advanced our moderation tools get, or how many advisory panels are assembled, there’s no elegant way to algorithm our way out of human nature.

But accepting that doesn’t mean we stop holding these platforms accountable. They’re still products of human design, and every design choice, from the algorithm’s preferences to the placement of a “like” button, shapes behavior and incentives. The companies behind them can and should do better. But even if they do, the fundamental tension remains: these spaces are built on human behavior, and human behavior will always be messy.

Maybe the real discomfort isn’t just about what TikTok and Twitter are. It’s about what they reveal about us. The chaos, the harm, the brilliance, the joy, it’s all a reflection. And if we can’t figure out how to look at that reflection without flinching, no amount of platform reform is going to save us from ourselves.

P.S.: Let me just add that I’m talking about the old Twitter, not the cesspool of unhinged, miseducated, misinformed mass of misguided white supremacists that it has increasingly become, a.k.a. discount 4chan. On top of that, outside the English-speaking sphere of the platform, the old Twitter still exists, unbothered and unaffected by what’s happening beyond their spheres, partly due to cultural differences, partly due to lack of relevance, partly due to language, and perhaps a handful of other reasons.

Study shows AI overwhelmingly favors white male candidates in hiring

Just read an article at Ars Technica that highlights something we should all be paying more attention to: AI-driven hiring tools still have a long way to go in terms of fairness. Tests show these systems tend to favor white and male candidates, confirming that even with all our tech advances, biases persist in ways we can’t ignore. And this isn’t the only article discussing this, it’s only the latest, which means it’s a long known problem that hasn’t been rectified.

For all the hype around AI’s potential to revolutionize hiring, if it’s just reinforcing biases, what’s the point? How are these algorithms trained, and why do they show such a strong bias toward white male candidates?

If you’re a recruiter or decision-maker, it might be time to rethink the role of AI in hiring. We all understand the basic tenet of data processing: garbage in, garbage out. Until there’s a proper process in the middle that removes such biases, people shouldn’t be fully reliant on these tools for hiring, because they’ll only reinforce the biases.
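One concrete shape that “process in the middle” can take is auditing the screener’s output before trusting it. Here’s a minimal sketch of the “four-fifths rule” check often cited in US hiring guidance: flag any group whose selection rate falls below 80% of the best-performing group’s rate. All groups and numbers below are hypothetical, made up purely for illustration.

```python
# A minimal adverse-impact audit sketch (four-fifths rule).
# Hypothetical data only; not a substitute for a real fairness review.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, passed) tuples -> {group: pass rate}."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        passed[group] += 1 if ok else 0
    return {g: passed[g] / total[g] for g in total}

def adverse_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the best group's rate (the classic four-fifths / 80% test)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screener output: group A passes 60%, group B passes 30%.
sample = [("A", True)] * 6 + [("A", False)] * 4 + \
         [("B", True)] * 3 + [("B", False)] * 7

print(adverse_impact(sample))  # {'A': False, 'B': True} — B is flagged
```

A check like this won’t fix a biased model, but it can tell you the tool shouldn’t be making the call unsupervised.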

These high-end filters make “decisions” based on their training data and will reflect whatever biases are already incorporated into it. I’m sure you’ve heard about facial recognition systems or hand sensors that don’t work properly, or have high error rates, on darker skin.

I’m not saying human-led processes aren’t prone to bias; these tech “solutions” were, after all, built to minimize the impact of bias in human judgment. But when the outcome is no different, or maybe even worse, that’s no solution at all.

The Collapse of Chinese EV Startups is a Wake-Up Call for the Industry

Rest of World published a compelling piece on the collapse of several Chinese EV startups, and there’s a glaring lesson for the entire industry: Car startups shouldn’t be developing their own software. They should rely on established software companies to build, license, and deploy that software.

Think about it: You shell out tens of thousands, maybe more, for a car—an investment you expect to last for a decade or longer. And for EVs, the software isn’t just a dashboard convenience, it’s central to the entire driving experience. From battery management to over-the-air updates and self-driving features, software makes or breaks the car.

Now, let’s imagine you buy a shiny new EV from a flashy startup. Three years later, that startup folds. What happens to your software updates? What happens to the core functionality of your vehicle if the startup disappears? Spoiler: you’re screwed. 

When an EV company collapses, it’s not just a question of no more updates or no more customer support. It’s much worse. Your vehicle could be left with outdated software that becomes incompatible with new systems, or worse, it could stop functioning altogether. This is especially true when startups decide to build their own custom software ecosystems from the ground up. It sounds like a smart idea to stand out in a crowded market, but it’s more like building a sandcastle next to the ocean—it’s not going to last, and it’s the customers who end up with sand on their shoes.

Take Byton, for example. They were around for four years, from 2017 to 2021, with big dreams of luxury electric SUVs featuring fancy tech. It was a Tencent–Foxconn joint venture, but all their resources couldn’t save them from the software pit they dug themselves into. They poured talent, effort, and money into creating a massive dashboard screen with a custom UI, promising AI-driven features. And where did it get them? Bankruptcy.

Or consider Bordrin Motors, another four-year wonder from 2016 to 2020. They developed their own vehicle operating system and digital cockpit platform. Sounds cool, right? Well, it would be if they hadn’t run out of money trying to maintain it all.

Established software companies like Apple, Google, and Microsoft have been making software for decades. They know what it takes to keep an ecosystem alive, stable, and, more importantly, secure. The idea of rolling your own software is not new—remember when every gadget maker wanted to make their own OS? It was a disaster then, and it’s a disaster now. Why should carmakers fare any better?

Instead, EV startups should focus on what they’re supposed to be good at: building great electric cars. Let the software experts handle the software. Tesla, for all its faults, is still in the game partly because of its strong software focus. They’ve managed to build a robust platform that, so far, has stood the test of time. But here’s the catch: not every startup can be Tesla, nor should they try to be. 

Some Chinese EV companies are getting it right. Look at Xpeng Motors and Li Auto. These guys are smart. They’re doing a mix of in-house development and licensing from established tech providers. Xpeng partnered with NVIDIA for AI computing and works with Desay SV Automotive for some software components. Li Auto isn’t too proud to license components for specific functionalities. And guess what? They’re still in business!

Licensing software from a more established player means that even if your car startup fails, your customers aren’t stuck with an expensive, bricked paperweight in their driveway. Their car can still receive updates, still work, and they aren’t left holding the bag. It’s akin to the division of labor in the rest of the tech world: Apple designs its processors but doesn’t fabricate them; TSMC does. Apple doesn’t make its own screens; Samsung does. Division of labor works for a reason.

Apple is an example of a tech company that wants to do the whole widget, and these days they mostly do, but still not everything. They design their hardware but don’t build it; manufacturing and assembly go to partners like Foxconn and Pegatron. They didn’t design their own processors until they had the resources to put together the teams for it. And in their early days they didn’t even design their own products: Apple hired frogdesign (now just Frog) to design its computers and come up with a design language for the company’s product lines so they would all share the same style.

The problem is that too many startups have founders who think they can do it all. They want to control every aspect of the experience, which is admirable until reality sets in. Building a car is hard enough. Making great software is equally hard. Trying to do both? It’s a fool’s errand. And who pays the price for that arrogance? The consumers.

It’s one thing to purchase a phone or computer and stop receiving updates or support after three or four years, but when it’s a car that costs tens of thousands of dollars, you damn sure want to be able to use it for more than just a few years, or at least sell it at a decent price when you need to.

No startup founder builds their company expecting to fail, so of course they will spend resources trying to do everything. But when a company is just starting up, its leaders need to determine what their areas of strength are, what resources they can pull together, and which functions can or should be outsourced to leverage outside expertise and increase internal efficiency. Once a company is strong enough to maintain a solid core and grow a business from there, it can begin to consider building non-core elements internally. Of course, it also needs to correctly identify its actual core strengths, lest it focus on the wrong things and accelerate its own collapse.

The key takeaway here is simple: EV startups need to know their limits. No matter how much venture capital you have or how many big names are on your board, you’re not a software company just because you hire a few software engineers. You’re a car company, so act like one. Leave the software to those who know it best. Because in the end, if you go under, it’s the customers who will feel the real crash.

Late night talk shows could be in danger, Jimmy Kimmel says

The entertainment landscape has changed so much over the past 20 years that there’s no guarantee we’ll even have regularly scheduled programming on TV in 10-20 years, let alone late night talk shows.

Shifting behavior means people watch clips or recordings of talk shows online instead and are unlikely to watch the original broadcasts, which takes away the value of advertising on live shows. If these shows are to stick around, there has to be a new business model to justify their production.

Streaming companies have been experimenting with hosting traditional TV content such as reality TV, talk shows, and current affairs, but successes have been few and far between. The context that made these formats popular no longer exists in non-linear entertainment.

The era of conventional TV programming is coming to an end and it’s going to be very challenging for many to deal with.

Variety:

“It used to be Johnny Carson was the only thing on at 11:30 p.m. and so everybody watched and then David Letterman was on after Johnny so people watched those two shows, but now they’re so many options.”

Not only are there so many options, but Kimmel argued that the way streaming platforms like YouTube and social media channels break up late-night episodes into clips to watch after the episode airs has also limited the urgency of tuning in live.

“Maybe more significantly, the fact that people are easily able to watch your monologue online the next day, it really cancels out the need to watch it when it’s on the air,” he said. “Once people stop watching it when it’s on the air, networks are going to stop paying for it to be made.”