The MacBook Neo: A Wuling Air EV in a World of R34 Delusions

The “spec goblins” are at it again. You can hear them now, huddled in their dark corners of Reddit, screeching about the “indignity” of a USB 2.0 port and the lack of MagSafe. They’re busy counting cores and measuring nits while completely missing the point. Apple hasn’t released a “bad” laptop; they’ve finally figured out how to sell us their leftovers—and make us love them for it.

At $599, the MacBook Neo is the “Fisher-Price Macintosh.” It’s a heat-seeking missile aimed directly at the bloated, plastic-laden mid-range Windows market. And the secret sauce? It’s powered by the A18 Pro—or more accurately, the A18 Pro chips that weren’t “Pro” enough for the iPhone. Apple has taken the “binned” silicon and given it a second life in an aluminum shell. It’s genius logistics disguised as a “breakthrough” price.

But let’s talk about the automotive delusion. The spec bros are trashing the Neo because it isn’t an over-spec’d BMW M3 or some twin-turbo Nissan Skyline GT-R R34. They want a machine that can handle 150 mph on a track they’ll never visit. People buying the MacBook Neo won’t be doing 4K video productions (though they probably can) or heavy workloads that need the latest and greatest. They’ll be using apps like Google Docs, Canva, Notion, or Office, maybe a little Claude Code, not multitrack FCP or sophisticated Photoshop and motion graphics work.

Then there’s the other side: the “Appliance” crowd. They love the Wuling Air EV or the BYD Atto 1. These cars are the darlings of the streets right now because they’re compact, affordable, and honest. They don’t pretend to be racers; they’re just stylish city-slicers.

Now, full disclosure: I wouldn’t buy a Neo for myself. And I certainly wouldn’t trade my 1997 BMW 323i for a BYD Atto 1. I’ll take an aging inline-six with actual road feel over a silent electric pod any day (though I wouldn’t say no to a proper EV like the Hyundai Ioniq or a BYD Seal).

But we have to stop projecting our “enthusiast” needs onto the general public. Most people don’t need 0-60 in three seconds or a liquid-cooled GPU. They need to get to the grocery store or finish a term paper without the “transmission” falling out.

The Neo is the Air EV of computing. The benchmarks don’t lie: even a “binned” A18 Pro wipes the floor with the legendary M1 in single-core tasks. It’s 50% faster than Windows PCs where it actually matters for daily use. It’s a 3nm monster in a toy’s clothing.

Yes, the speakers sound like tin cans, there’s no Thunderbolt, and only one of the two USB ports is high-speed, but you don’t buy a budget EV and then complain that it lacks Italian leather. You buy it because it’s $599, it looks great in different colors, and it’s a portal into an ecosystem that actually works. Right now you can budget $2,000 and get an Apple MacBook Neo, an Apple Watch SE, an iPhone 17e, a pair of AirPods 4, and a year of Apple One, and still have some change. Before this month that wasn’t possible!

The MacBook Neo is the spiritual successor to the base-model iPad. It’s for the switcher who is tired of Windows breaking their life, and the parent who isn’t buying a $1,200 Facebook machine. I’ll keep my “classic” power-user gear, and you probably will too. But while the spec-heads are busy complaining about port speeds, these low-end MacBooks will fly off the shelves and into the hands of people who need them.

Komdigi’s media circus at Meta’s Jakarta office

Communications and Digital Minister Meutya Hafid showed up unannounced at Meta’s Jakarta office on Wednesday with a full media entourage in tow — cameras rolling, reporters trailing, executives summoned, questions fired, headlines secured.

The official justification was that Meta had complied with just 28.47 percent of the government’s requests to act on online gambling and DFK content — disinformation, defamation, and hate speech — making it one of the lowest-performing platforms operating in Indonesia. That is a legitimate regulatory concern. With Facebook and WhatsApp each reaching around 112 million Indonesian users, the scale of exposure to harmful content is not hypothetical.

During the meeting, the Minister also raised Meta’s content moderation practices more broadly, including the removal of a photo she had posted from a visit to Palestine. Posts supporting Iran or Palestine are routinely flagged or taken down while comparable content directed elsewhere passes without review. These are real problems that deserve real scrutiny.

But serious regulatory scrutiny does not typically arrive with a press pack — or with the Director General of Digital Space Supervision, the State Intelligence Agency, the National Cyber and Encryption Agency, TNI Cyber Command, and Bareskrim Polri all in tow. That is not a regulatory inspection. That is a show of state force conducted in front of television cameras, and the distinction matters.

What happened on Wednesday looked far less like oversight and far more like a carefully staged demonstration of authority, timed to produce the evening news clip that shows the Minister doing something, at the precise moment when public anxiety about children and social media makes that image most politically useful. Look tough. Look decisive. Look morally righteous. The optics write themselves.

The trouble is that optics are not policy, and performative enforcement doesn’t fix anything. The content moderation problems she raised on Wednesday are real but they won’t be resolved by a televised office visit any more than Indonesia’s children will be protected by an access ban the platforms can’t meaningfully enforce and teenagers will circumvent within hours. Both moves share the same logic: do the thing that looks like action, and let the appearance of having acted carry the political weight.

That’s a reasonable strategy if the goal is the news cycle. It’s a poor substitute for the harder work of making platforms structurally accountable, which requires leverage, sustained regulatory pressure, and the willingness to demand compliance rather than just demonstrate displeasure in front of cameras.

Indonesia to limit social media for under-16s but it’s solving the wrong problem

There’s a more powerful way to protect Indonesian children online, and Indonesia has the numbers to force it.

Every generation decides that something new is destroying children. Comic books. Rock music. Television. Video games. The diagnosis changes. The panic is the same.

Indonesia has now joined the queue. On Friday, Communications Minister Meutya Hafid announced the implementation of a regulation restricting children under 16 from platforms the government classifies as higher-risk — YouTube, TikTok, Instagram, Facebook, X, Threads, Bigo Live, Roblox. Children under 13 are blocked entirely. Thirteen to fifteen-year-olds get access to platforms the ministry considers lower-risk. Full access at 16. Sanctions fall on platforms that fail to comply, not on children or parents. Phased implementation begins March 28, one year after the President signed the PP Tunas regulation.

The concern is not invented. UNICEF data cited by the government found around half of Indonesian children have encountered sexual content on social media. Forty-two percent said it made them feel frightened or uncomfortable. That’s not a moral panic. That’s a real problem that deserves a real response.

This isn’t one.

The mechanism doesn’t work. Age-gating requires age verification. Age verification requires either self-declaration — which every child on the internet already knows how to defeat — or something more invasive: NIK-linked identity checks, biometric data, government databases. Indonesia’s government databases have leaked repeatedly. SIM card registrations. Voter records. Millions of NIK-linked personal records circulating because someone decided security was someone else’s problem. The confidence required to trust a new verification layer built on the same infrastructure has no basis.

Then there’s the VPN problem. Indonesia has always been among the top nations for VPN usage — penetration reached 61% in previous years — though by 2025 that figure had come down to around 31% according to Meltwater, still placing it fourth globally. The infrastructure for circumvention is normalised, widely installed, and modelled by adults in the household. When the government blocked PayPal and gaming platforms in July 2022, VPN demand jumped 196% in a single day. A 14-year-old who wants TikTok will have it within hours.

And even if enforcement worked perfectly, the harms would still be there because they are the result of design, not of access. Recommendation algorithms amplify harmful content because engagement is revenue and they are not calibrated to real-world emotional impact. Infinite scroll removes natural stopping points. Notifications are engineered to pull users back. A 16-year-old encounters the same architecture as a 14-year-old, and there is no reliable technical mechanism to change that at the platform level without structural redesign. Delaying access doesn’t change what they walk into.

Europe worked this out. The European approach to tech — the EU’s GDPR, Digital Services Act, and Digital Markets Act, and the UK’s Age Appropriate Design Code — places the compliance burden on the platforms. Not on citizens. Europeans are not banned from TikTok. TikTok is required to change how it works.

The consequences have been real. Apple withheld Apple Intelligence from EU users at launch, citing DMA regulatory uncertainty, before eventually releasing it in April 2025. Meta declined to launch its latest Llama model in Europe, citing the unpredictable regulatory environment. Google withheld AI Overviews. Platforms have had to build separate compliance infrastructure for the European market rather than simply exit it.

On children specifically: the UK Children’s Code prohibits algorithmic curation and targeted advertising for minors. In direct response, TikTok stopped sending push notifications to children in the evenings. YouTube disabled autoplay, personalisation, and targeted advertising on content made for kids. The platforms changed their products because the cost of not doing so was higher than the cost of compliance.

That’s the playbook. Restrict what platforms can do to citizens. Not restrict citizens from platforms.

The question is leverage. Europe’s came from market size. TikTok’s own DSA report put its average monthly active users — or recipients as they call them — across EU member states at 178 million by the end of February 2026, with the broader European figure reaching 200 million by September 2025.

TikTok’s own published figures put Indonesia at 160 million users as of November 2025 — second only to the US, if not the largest market in the world. A separate measure of TikTok’s adult advertising audience by Data Reportal, which excludes under-18 and uses a different methodology, puts that figure at 180 million, the largest adult TikTok audience of any country globally. Those users spend nearly 45 hours per month on the app, among the highest engagement rates globally. Add the broader Southeast Asian picture: 460 million TikTok users across the region — more than double Europe’s 200 million.

Indonesian users aren’t just a revenue line, they’re a core driver of the content loops that keep global audiences on the app. The creator economy runs on engagement. Losing Indonesian audiences doesn’t just affect ad revenue. It affects what TikTok is. The company already lost India; it doesn’t want to lose Indonesia, too.

That’s a different kind of leverage, but it’s leverage all the same, and it’s leverage the government could have used.

Indonesia isn’t using it, however. Instead of demanding that platforms restructure their products, it’s restricting its own people’s access to them. That’s the wrong target. It’s also the weaker negotiating position, because it costs the platforms nothing.

Malaysia is already moving on the same issue. The Philippines, Vietnam, and Thailand are watching. If Indonesia led a coordinated regional push, not for access bans but for platform-level obligations, the numbers would become harder to ignore. Southeast Asia already exceeds Europe’s TikTok footprint. A unified regulatory framework from five of those markets is not a marginal threat.

ASEAN has never coordinated at this level. Non-interference is a founding principle and a genuine obstacle. But children’s safety is politically uncontroversial across every member state. That’s rare common ground. It’s worth using.

What Indonesia should actually demand is not restricted versions of these platforms for children. It’s less predatory versions for everyone — with children as the primary beneficiaries. Disable algorithmic amplification by default, for all users: opt in to recommendation, not out of it. No compulsive notification design, no engineered pull-back for any user, minor or adult. Restrict stranger interaction by default, not as a parental unlock. Tie feature access to account history: new accounts get conservative defaults, and trust expands as behavior is established, not when an identity document is produced.

Reddit has run on a version of this model for two decades. The platform doesn’t need to know you’re a child. It just needs to stop assuming you’re a safe adult the moment you arrive. Automated filters calibrated to account age and karma, community moderators with real removal powers, conservative defaults for new accounts regardless of declared age. None of this requires a national ID. It requires intent.
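To make that mechanism concrete, here’s a minimal sketch of feature access gated on account history rather than identity. Every threshold, tier, and feature name below is illustrative, not any platform’s actual policy:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int  # how long the account has existed
    karma: int     # accumulated community standing

def trust_tier(account: Account) -> str:
    # Conservative defaults for new accounts; trust expands as behavior
    # is established, not when an identity document is produced.
    if account.age_days < 30 or account.karma < 50:
        return "restricted"
    if account.age_days < 180 or account.karma < 500:
        return "standard"
    return "established"

# Hypothetical defaults per tier: the newest accounts get the most
# protective configuration regardless of declared age.
TIER_DEFAULTS = {
    "restricted":  {"stranger_dms": False, "algorithmic_feed": False, "posts_per_day": 3},
    "standard":    {"stranger_dms": False, "algorithmic_feed": True,  "posts_per_day": 25},
    "established": {"stranger_dms": True,  "algorithmic_feed": True,  "posts_per_day": 100},
}

settings = TIER_DEFAULTS[trust_tier(Account(age_days=3, karma=0))]  # -> restricted
```

Nothing in that sketch needs to know who the user is. It only needs to know how the account has behaved.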

These are not radical asks: the UK Children’s Code already mandates several of them, TikTok and YouTube have already complied in Europe, and the framework for enforcement exists. What’s missing is a government willing to use its market position to demand compliance rather than restrict its own citizens’ access.

Platforms already accept that accessibility is a non-negotiable compliance requirement. Screen readers, keyboard navigation, color contrast standards: these add development time, they complicate QA, and no platform gets to opt out on the grounds that inclusion is expensive. The legal and reputational cost of being inaccessible is now high enough that the work gets done. Child safety is a universal value in a way that very few things are, and there is no principled argument for why it should be treated as optional where accessibility is not.

Theme parks don’t let you on a ride if you’re under the height requirement. They don’t let you board if you’re pregnant, or if the ride poses a risk to people with certain health conditions. Nobody describes this as censorship or an infringement on personal freedom. It’s a safety standard applied at the point of access, and the burden of enforcing it falls on the operator, not on the person being turned away. The same logic applies here. If a platform cannot ensure a reasonably safe experience for a 12-year-old, it should not be permitted to let a 12-year-old on. The design choices that make it unsafe are in the platform’s control.

The problem isn’t identification. It’s intent. Platforms already know a great deal about their users without ever asking for a government ID. Behavioral signals (what content a user engages with, how long they stay on certain types of posts, what they search for, what time of day they’re active, how their interaction patterns compare to known demographic clusters) are processed continuously and used to target advertising with remarkable precision. A platform that can infer that a user is likely a woman in her late twenties with an interest in fitness and a recent interest in pregnancy is not a platform that lacks the tools to infer that a user is probably fourteen.

The distinction between age verification and age estimation matters here. Verification ties a user to an identity document. Estimation uses behavioral and interaction signals to assign a probability range. Instagram and TikTok have both deployed versions of age estimation in limited contexts already. Instagram in particular uses it to reclassify accounts that show signals of being underage even when no age was declared. The UK Children’s Code endorses a version of this logic directly: if a significant proportion of a platform’s users are likely to be children, apply protective standards to all users unless there is a positive reason not to. That’s not surveillance. It’s designing conservatively.

Conservative defaults are the cleanest answer to the identification problem. If a platform cannot determine with confidence whether a user is fourteen or twenty-four, the appropriate response is to treat them as fourteen until there is a positive reason to do otherwise, not to treat them as twenty-four and make protection an optional extra that requires parental setup. The current architecture is inverted: full algorithmic engagement is the default, and restrictions are the opt-in. Flipping that default requires no new technology. It requires intent.
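Here is what flipping that default could look like in code, a minimal sketch in which the probability input and setting names are hypothetical; the point is only that protection is the starting state and relaxation requires positive evidence:

```python
PROTECTIVE = {"algorithmic_feed": False, "stranger_contact": False,
              "night_notifications": False, "targeted_ads": False}

RELAXED = {"algorithmic_feed": True, "stranger_contact": True,
           "night_notifications": True, "targeted_ads": True}

def default_settings(p_adult: float, threshold: float = 0.95) -> dict:
    # Treat the user as fourteen unless age estimation (or some other
    # signal) provides strong positive evidence of adulthood.
    return RELAXED if p_adult >= threshold else PROTECTIVE
```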

Platforms also already collect birthdates but most use them for nothing protective. The UK Children’s Code requires that if a platform collects a birthdate showing the user is a child, it must actually apply child-appropriate settings. That is not a technically demanding ask. It is a demand that platforms act on information they already have.

None of this is difficult to build. Kaskus has moderated a large, pseudonymous community since 1999 using account signals, behavioural patterns, and community escalation without ever requiring identity documents. Reddit has done the same at far larger scale for two decades: automated filters calibrated to account age and karma, community moderators with real removal powers, shadowbanning for persistent bad actors. Neither platform knows who its users are. Both have functional accountability structures. The missing ingredient on TikTok, Instagram, and YouTube is not technical capability. It is the cost structure that makes investment in moderation and protective defaults worthwhile, and that only changes when non-compliance becomes more expensive than compliance.

China’s solution doesn’t work. The identity question comes up because it sounds like the obvious enforcement mechanism: if platforms know who someone is, they can verify their age and apply the right rules. China went furthest with this logic. Real-name registration tied to national identity, facial recognition deployed by Douyin and other major platforms, internet curfews for all under-18s from 10pm to 6am, screen time caps that limit children under 8 to 40 minutes a day.

The results were real in places: gaming hours among under-18s fell sharply, Tencent reported a 96% drop, sedentary behaviour among children decreased and physical activity went up. But more than three-quarters of heavy young gamers found workarounds within months through relatives’ accounts, rented adult credentials, photographs held up to facial recognition cameras. A black market in adult accounts emerged. Some children were scammed trying to buy access to their own internet.

Age and identity verification affects everyone. Real-name registration doesn’t just verify age. It makes every online action traceable to a person’s identity, which has effects that go well beyond screen time. It’s a chilling mechanism for political speech, for religious expression, for any view that might attract attention from authorities. Anonymity is not a design flaw. It’s a protection for people whose safety depends on not being findable, and for the ordinary possibility of holding an opinion without a record. Once the surveillance infrastructure is built and normalised, who controls it and what it gets used for is not a question that can be answered in advance.

Functional accountability doesn’t require knowing who your users are; it requires investing in the infrastructure that enforces it. TikTok, Instagram, and YouTube have not, because moderation is expensive and impunity is good for engagement. That is a deliberate design choice, and it is reversible if a powerful enough force demands it. It requires these companies to spend money they would rather not spend, and the only way to make them spend it is to make non-compliance more costly than compliance. That requires leverage. Indonesia has leverage, but it isn’t using it.

The regulation that rolls out on the 28th restricts Indonesian citizens from platforms rather than restricting platforms from harming Indonesian citizens. It relies on verification infrastructure built on databases that have already failed the people it’s supposed to protect. It will be circumvented by a population that has been circumventing government internet restrictions for years, with the tools already installed. And it does nothing — nothing — to change the architecture that produces the harm.

Indonesia has 270 million reasons to ask for more than the appearance of having done something, but the government won’t do it. Or maybe it can’t. The President already signed a trade agreement with the US that makes it much more difficult to regulate US companies. Indonesia traded away leverage over US platforms in exchange for tariff relief that the US Supreme Court then invalidated anyway.

I Wanted to Be Wrong About eFishery. I Really Did.

I remember the pitch. I remember the guy. I remember sitting in the same room as him around ten years ago, listening to people praise his tenacity, watching well-regarded people and startup figures laud him as a visionary, and walking away with that gnawing sense I’ve come to trust over the years: the one that kicks in when the story feels too clean, too heartwarming, too startup-perfect.

But I didn’t say anything publicly, partly to avoid being called out for holding a markedly opposing view and being so openly skeptical, and partly because of the predictable judgment that would have followed: accusations of envy from someone nowhere near as successful. It was, after all, a gut feeling with little to back it up, and I wasn’t about to go on a mission to take down the nation’s latest tech darling, the pride and joy of the Indonesian startup community, with no support. The company was an international sensation; people in my circle knew of my doubts, but I don’t recall posting publicly about them.

When everyone else was throwing praise and cash at a fish-feeder startup like it was the second coming of Grameen Bank, it was easy to start wondering if maybe you’re just being cynical. Maybe you’re jaded. Maybe the founder really was a scrappy visionary from East Jakarta who had cracked aquaculture and was about to scale empathy and catfish across Southeast Asia. I mean, look at all those articles about the company and about how this guy appeared out of nowhere to become something of a tech-startup prophet.

Except now, here we are: $300 million gone, farmers screwed, machines abandoned, and the poster child of “tech for good” exposed as a meticulously constructed con.

And you know what? I’m not surprised. I’m pissed.

Because I wanted to be wrong. I wanted this story to be true. I wanted this to be the one that proved that impact and innovation and bottom-of-the-pyramid hustle could build something real. But from the beginning, eFishery had all the wrong kinds of charm: the underdog myth polished to perfection, the handcrafted pitch deck trauma-bonding with VCs who wanted to save the world without leaving the hotel lounge.

He said all the right things. He did all the right gestures, looking all pious and revered. And when the numbers didn’t line up? When the tech was too expensive for the people it was supposed to help? When the revenue made zero sense for a company claiming to transform Indonesia’s rural fish farms? Everyone just nodded harder.

I watched as global investors lined up to outbid each other for a slice of this sweet, scalable fiction: SoftBank, Temasek, Sequoia (Peak XV), Social Capital. And the media? Oh, we played along too. We love a redemption arc. We love a startup that feeds fish and our desire to feel like capitalism might still be capable of doing something decent. Again, with all these big-name international funds coming in to feed the fish-feeding startup, who was I to contradict their supposed intellect and superior judgment?

But deep down, I kept thinking: this doesn’t smell like fish. It smells like a fishy performance.

Now that it’s unraveled, this wasn’t just a few optimistic numbers or an overzealous forecast. This was systemic. Two sets of books. Ghost transactions. Fake shell companies. A finance operation so convoluted it’d make a crypto bro blush. All of it propped up by a moral calculus so warped it might as well have been cribbed from a freshman philosophy seminar: “Yes, I lied, but I helped some farmers, so doesn’t that count for something?”

No, it doesn’t. You don’t get to run over everyone with the trolley and call it “net positive.”

The real damage here isn’t just financial. It’s reputational. It’s trust. It’s yet another blow to the already fragile belief that startups in emerging markets can build something real without burning down the ecosystem around them. This kind of fraud doesn’t just hurt investors. It makes it harder for every honest founder grinding away on a real solution with real traction and real limitations.

And don’t get me started on due diligence. Multiple rounds of funding, multiple term sheets, global funds with armies of analysts, and no one noticed the company stopped filing basic financials in Singapore? That feeder machines were supposedly deployed at scale with zero supply chain footprint? That fish feed producers weren’t even aware of this supposed revolution happening in their own backyard?

The worst part? Some people will still excuse it. They’ll frame it as a tragedy. As a good person corrupted by pressure. A “lesson” for the ecosystem. I get it. That’s cleaner. Easier. But I can’t do that. Not after watching people celebrate this company like it was changing the world, when some of us knew it wasn’t adding up.

There were moments when I wondered if I was just being too harsh, too skeptical. I thought, maybe I’m just tired of the hype machine. Maybe I’m projecting.

Turns out I wasn’t projecting. I was just paying attention, and my gut was screaming against my reasoning.

And now, here’s the wreckage: laid-off staff, bankrupt farmers, investors licking wounds, and a founder who thinks starting a frozen seafood business is part of his redemption arc.

No. You don’t get to fail upward on the backs of people you lied to.

This wasn’t inevitable. This wasn’t an honest mistake. This was a choice, repeated, amplified, and dressed up as progress. And he did it because everyone he asked told him it was okay, because they all did it too. They all failed him, and everyone paid the price. Fake it till you make it, they said. Well, in this story, nobody made it.

And I hate that my gut feeling was right.

On the other hand, he managed to hoodwink Chamath Palihapitiya, who deserves everything coming his way.

How platforms like TikTok and Twitter are like life itself

Social platforms reflect people’s behaviors but unlike life, you can uninstall and stop visiting them.

TikTok and Twitter are often described as mirrors of life: chaotic, messy, sometimes brilliant, sometimes horrifying. But here’s the thing: life didn’t come with an “uninstall” button. These platforms do, sort of (you can remove the apps or stop visiting them altogether). And that makes it a lot harder to accept their messiness as something we just have to live with.

The harm they cause is undeniable. The misinformation, the rabbit holes, the amplification of violence and hate, it’s all right there, front and center. And because these aren’t immutable forces of nature but products of human design, it feels logical to think: Why not just turn them off? If a bridge kept collapsing under people’s feet, we’d stop letting people walk on it. If a factory was spewing toxins into the air, we wouldn’t celebrate the occasional mural painted on its walls, we’d shut the thing down.

But TikTok and Twitter aren’t just digital bridges or toxic factories, they’re also marketplaces, stages, classrooms, protest grounds, and cultural archives. They’ve been instrumental in amplifying marginalized voices, organizing grassroots movements, and spreading ideas that would’ve otherwise been silenced. Shutting them down wouldn’t just erase the harm, it would also erase the joy, the connection, the organizing power, and the little moments of humanity they enable.

That’s the tension we’re stuck with: the pull between “this is causing so much damage” and “this is doing so much good.” And it’s not a tension we can resolve cleanly, because both are true. These platforms are not neutral, they’re shaped by design choices, incentives, and algorithms that reward outrage, escalate conflict, and keep users scrolling no matter the emotional cost. But they’re also spaces where real, meaningful things happen, sometimes in spite of those same algorithms.

It’s easier to point fingers at the platforms themselves than to reckon with the fact that their messiness isn’t an anomaly, it’s a reflection. They thrive on the same things we do: conflict, validation, novelty, and the occasional hit of collective catharsis. The darkness they expose isn’t artificially generated, it’s drawn out from people who were always capable of it. TikTok and Twitter didn’t invent bad faith arguments, moral panic cycles, or performative empathy, they just turned them into highly optimized content formats.

That’s why it’s so tempting to reach for the “off” switch. Because these platforms don’t just show us other people’s mess, they show us our own. They force us to confront the uncomfortable reality that the world doesn’t just have ugliness, it produces it. And no matter how advanced our moderation tools get, or how many advisory panels are assembled, there’s no elegant way to algorithm our way out of human nature.

But accepting that doesn’t mean we stop holding these platforms accountable. They’re still products of human design, and every design choice, from the algorithm’s preferences to the placement of a “like” button, shapes behavior and incentives. The companies behind them can and should do better. But even if they do, the fundamental tension remains: these spaces are built on human behavior, and human behavior will always be messy.

Maybe the real discomfort isn’t just about what TikTok and Twitter are. It’s about what they reveal about us. The chaos, the harm, the brilliance, the joy, it’s all a reflection. And if we can’t figure out how to look at that reflection without flinching, no amount of platform reform is going to save us from ourselves.

P.S.: Let me just add that I’m talking about the old Twitter, not the cesspool of unhinged, miseducated, misinformed, misguided white supremacists it has increasingly become, a.k.a. discount 4chan. On top of that, outside the English-speaking sphere of the platform, the old Twitter still exists, unbothered and unaffected by what’s happening beyond its own circles, partly due to cultural differences, partly due to lack of relevance, partly due to language, and perhaps a handful of other reasons.

Study shows AI overwhelmingly favors white male candidates when screening job seekers

Just read an article at Ars Technica that highlights something we should all be paying more attention to: AI-driven hiring tools still have a long way to go in terms of fairness. Tests show these systems tend to favor white and male candidates, confirming that even with all our tech advances, biases persist in ways we can’t ignore. And this isn’t the only article discussing the problem, just the latest, which means it’s a long-known issue that hasn’t been rectified.

For all the hype around AI’s potential to revolutionize hiring, if it’s just reinforcing biases, what’s the point? How are these algorithms trained, and why are they showing such a strong bias towards white male candidates?

If you’re a recruiter or decision-maker, it might be time to rethink the role of AI in hiring. We all understand the basic tenet of data processing: garbage in, garbage out. Until there’s a proper process in the middle that strips out such biases, people shouldn’t rely fully on technology for these decisions, because it will only reinforce them.
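One concrete shape that “proper process in the middle” could take is a routine audit of the model’s outputs before anyone acts on them. Below is a minimal sketch using the widely cited four-fifths (80%) disparate-impact heuristic; the data shape and names are illustrative:

```python
from collections import Counter

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    # decisions: (demographic_group, was_selected) pairs from the screening model
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    # Each group's selection rate relative to the highest-selected group.
    # The four-fifths rule flags any ratio below 0.8 as adverse impact.
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

rates = selection_rates([("A", True), ("A", True), ("B", True), ("B", False)])
flagged = {g: r < 0.8 for g, r in impact_ratios(rates).items()}  # B is flagged
```

An audit like this doesn’t fix the training data, but it at least tells you when the output shouldn’t be trusted.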

These high-end filters make “decisions” based on their training data and will reflect whatever biases are already baked in. I’m sure you’ve heard about facial recognition systems or hand sensors that don’t work properly or show high error rates on darker skin.

I’m not saying human-led processes aren’t prone to bias (these tech “solutions” were, after all, built to minimize the impact of biased human judgment), but when the outcome is no different, or maybe even worse, that’s no solution at all.

Angry Indonesian Internet Users Create Virtual Roadblocks on Google Maps in Response to Mob Murder

Indonesian internet users have flooded Google Maps with virtual roadblocks on nearly every road and street in the Sukolilo district, Pati, Central Java.

This digital protest comes in the wake of a tragic incident in which a mob of local residents set fire to a rental car owner and his vehicle, resulting in his death. The victim was reportedly attempting to retrieve the car from suspected car thieves when the mob attacked. Three other men who accompanied him were also assaulted and are in comas in a hospital.

Several rental car business owners have come forward, revealing that they have long blacklisted rentals to individuals carrying Pati-issued identification cards due to concerns about vehicle theft. They claim that the regency is widely known within the industry as a hub for stolen motor vehicles, with many vehicles in the area lacking license plates.

Sukolilo subdistrict head Andrik Sulaksono rejected the allegations, saying the area is not a hub for fencing stolen vehicles and that the claims came from angry netizens reacting to news of the murder.

Until recently, law enforcement authorities had reportedly taken little action in response to suspicions and public reports of vehicle theft in the region. That apparent inaction has prompted some angry Indonesians to resort to vigilante justice.

The incident has sparked outrage among Indonesian internet users, leading to the virtual roadblock campaign on Google Maps as a form of protest and a call for increased attention to the issue of vehicle theft and the need for improved law enforcement in the area.

Police have apprehended ten suspects, finding evidence belonging to the victims at their homes, and have seized 27 motorcycles and six cars with fake registration papers from a single property.

Composite image of one neighborhood in Sukolilo showing virtual roadblocks on Google Maps on nearly every road.

Adam Mosseri further clarifies position about news on Threads

Instagram and Threads Chief Adam Mosseri posted on Threads to clarify what people perceive to be suppression of news on the platform.

I don’t believe the IG team, and especially its leadership, are sneaky or malicious in any way, but it’s difficult to see this statement and take it at face value.

Just to clarify, and this is on me for not being specific enough in my language historically, we’re not trying to avoid being a place for any news. News about sports, music, fashion, culture is something we’re actively pursuing. Political news is the topic where we’re looking to be more careful. Politics is already very much on Threads, and that’s okay, we’re just not looking to amplify it.

He said that the kind of news (and presumably other types of discussion) they want on the platform is around sports, music, fashion, and culture. They prefer those to drive the conversation instead of hard news or politics, which aren’t actually banned but which they want to be “more careful” about, presumably (and this is my guess) because of how sensitive and delicate those topics can be, not to mention Meta’s history with the news media in general.

Everything in life is about politics. Sports is a battleground for political ideologies (Colin Kaepernick, anyone?), the fashion scene is a statement of political allegiances (Cate Blanchett, we see you), and music is a hotbed of political discourse (where do I even start?). As for culture, oh boy, if it’s not a political minefield then what is?

These are hot button arenas rife with debates over subjects such as race, social justice, equality, opportunity, and exploitation—topics that Meta appears to prefer to avoid. It seems Meta’s ideal platform is one of superficial harmony and feel-good aesthetics, shunning the gritty realities of societal discourse in favor of saccharine content and elaborate platitudes.

The more fundamental issue

Choosing what topics to focus on isn’t even their main problem. The Threads platform’s algorithmic approach to content curation is fundamentally flawed, prioritizing stale content and undermining the user experience.

The default ‘For You’ feed is plagued by a glaring disconnect between user expectations and delivery, as it frequently surfaces posts that are either already two days old—a virtual eon in the text-based social media space—or irrelevant and unwanted. This not only diminishes the freshness of the feed but also calls into question the platform’s understanding of ‘relevancy,’ which is intrinsically tied to the timeliness of content.

Additionally, the apparently elusive ‘Following’ feed, which offers a chronological timeline, is marred by its clunky activation and its baffling tendency to revert to the ‘For You’ feed at random. This erratic behavior disrupts the user’s control over their own social media experience, forcing them back into a loop of outdated content.

Threads says it wants to be a conversation platform, but its default feed still struggles to surface timely and relevant posts. It is certainly a challenge to algorithmically deliver content that matches everyone’s unique sets of interests, and it has to be algorithmically driven if they want to ensure people don’t miss posts they may be interested in.

Clearly it’s not impossible to run a purely chronologically driven feed because Twitter did it before and Mastodon, along with its ActivityPub gang, still do, but unless you’re chronically online, the likelihood of seeing posts that are published while you’re away is small.

Without an algorithm that can be tuned to identify your interests and serve you posts that match them, you have to rely on other people surfacing them, either by replying to those posts or by reposting them. For some people that works just fine, but when you run a platform intent on capturing as much of people’s time and attention as possible, an algorithm is necessary.

In essence, Threads still has some ways to go to address the critical issue of recency, leaving users drowning in a sea of irrelevance. The platform’s inability to provide a consistently up-to-date and relevant feed not only frustrates users but also undermines the very purpose of social media—to connect people with what matters to them, here and now.

A text-based social platform is inherently different from one based on images or videos. Usage on TikTok and Instagram is driven more by entertainment value, while text platforms are about what’s happening. If Meta wants Threads to be a place for conversations, let people follow their interests, not just accounts, and tune the algorithm to lean heavier on recency.
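For what “leaning heavier on recency” might look like mechanically, here’s a minimal sketch: interest affinity multiplied by an exponential time decay, with a short half-life suited to a conversation-first feed. All names and numbers are illustrative, not how Threads actually works:

```python
from dataclasses import dataclass

@dataclass
class Post:
    topics: set[str]
    age_hours: float

def feed_score(post: Post, interests: dict[str, float],
               half_life_hours: float = 6.0) -> float:
    # Interest match: how strongly the post's topics overlap what the
    # user follows (interests, not just accounts).
    affinity = sum(interests.get(t, 0.0) for t in post.topics)
    # Recency decay: with a 6-hour half-life, a two-day-old post keeps
    # under 1% of its score, which a text-first feed arguably wants.
    decay = 0.5 ** (post.age_hours / half_life_hours)
    return affinity * decay

def rank(posts: list[Post], interests: dict[str, float]) -> list[Post]:
    return sorted(posts, key=lambda p: feed_score(p, interests), reverse=True)
```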

Why Apple debuted the M4 on the iPad Pro instead of the Mac

I’m working on a longer piece about iPads but I just want to put this out first. Fast Company’s Harry McCracken sat down with Apple Senior Vice Presidents Greg Joswiak and John Ternus to talk about the latest iPad models that just came out this week.

I’ve been wondering why Apple decided to launch the M4 with the iPad, breaking “tradition” with previous M series chip releases. Apple did mention previously that this generation of iPad Pro wouldn’t have been possible without the M4, and there’s been plenty of discussion about the M4’s capabilities and significance, but for some, the M series had unofficially stood for “Mac”. It’s a high-performance-class chip designed to deliver the most power but also incredibly long battery life. While it does make sense for it to eventually make it to the iPad, I didn’t expect a brand new version to debut there. It had debuted on a Mac, and new versions had been showcased first on Macs, until now.

According to Joz, Apple’s engineers were able to build into the M4 the capabilities needed to support the technologies they wanted to include in the latest iPad Pro, which is why they went with it.

That Apple is in a position to incorporate the technologies it needs into the chips it designs doesn’t just explain how it was able to build the thin, powerful iPad Pro. It’s also why the M4 is showing up first in the iPad Pro rather than a Mac: Rather than being a Mac processor repurposed for an iPad, it was conceived from the start to drive the iPad Pro’s new OLED screen.

“Our chip team was able to build that controller into the road map,” explains Joswiak. “And the place they could put it was the M4.”

This to me is a sign that Apple remains faithful to the iPad line despite years of seeming neglect in terms of the direction of the product. At some point the iPad was going to be the future of Apple’s computing, potentially replacing the Mac, at least for the masses, but with the release of Vision Pro and the resurgence of the Mac thanks to the M series chips, that plan isn’t so clear anymore. Maybe now the plan is to offer different devices for different types of consumers. I’ll get into that and more in the upcoming piece.

Artifact News Reader is Being Shut Down

I’ve enjoyed using Artifact, and it’s upsetting that it’s being shut down because it really seemed like it was on its way to becoming a really good news reader. It’s often the first or second app I open to kickstart the day. I like that Artifact lets you load an AI-generated article summary if you don’t have time to read the full story.

Artifact at some point added social elements, but people just didn’t see it that way because it’s a news reader first and foremost. It also let you publish your own takes on the news, linking back to the stories, which made it something of a blogging platform. This part I enjoyed a lot. I didn’t post too many times, but enough to keep me writing my thoughts on things that bugged me.

They said Artifact will remain up until the end of February. I’ll be spending some time republishing those posts here and backdating them accordingly.

Ultimately, for a blogger, it all comes back to running your own space if you want to keep your published thoughts available to read on the web. Maybe one day I’ll finally set up my own self-hosted blog and social web instance, the way it was always meant to be, and move everything there, because platforms like these, including Medium and Tumblr, may one day shut down if their owners can’t justify keeping them around, whether through lack of revenue or something else.

For my daily news reading there’s always Flipboard, which I also still use regularly, but I’m going to miss Artifact.