The Apple – Perplexity rumor that won’t go away

This is a longer think piece expanded from the quick post I had on Mastodon the other day.

Every time someone floats the idea that Apple should acquire Perplexity to “supercharge” its AI efforts, I get whiplash, not just from the sheer strategic laziness of the suggestion, but from the deeper cultural misalignment it completely ignores. The very idea is perplexing.

Perplexity isn’t some misunderstood innovator quietly building the future. It’s a company fundamentally unsure of what it is, what it stands for, or how to exist without parasitizing the open web. It’s been posing as a search engine, an AI-powered Q&A tool, a research assistant, and lately, some vague hybrid of all three, depending on who’s asking and what narrative sounds hottest that week. The only throughline is this: a constant need to justify its own existence, retrofitting its product pitch to whatever the industry is currently foaming at the mouth about.

And then there’s the CEO.

Perplexity CEO Aravind Srinivas has made a habit of saying the quiet parts out loud, and not in a refreshing, brutally honest way, but in a way that suggests he hasn’t thought them through. Case in point: TechCrunch Disrupt 2024, where he was asked point-blank to define plagiarism and couldn’t answer. Not didn’t answer. Couldn’t. That wasn’t just a missed PR opportunity. That was a red flag, flapping violently in the face of a company that scrapes content from other publishers, slaps a “summarized by AI” badge on it, and tries to call that innovation.

When you can’t define plagiarism as the CEO of a company built on other people’s work, that’s not strategic ambiguity, that’s an ethical void. And it’s telling. Perplexity has made a business of riding the razor-thin line between fair use and flat-out theft, and they want the benefit of the doubt without the burden of responsibility.

Which is where the Apple comparisons get absurd.

Yes, Apple stumbled. For more than a decade, Siri was a rudderless ship, a clunky commuter train in an age where everyone else was racing to build maglevs. The company completely missed the LLM Shinkansen as it rocketed past, leaving Siri coughing in the dust. What followed was a scramble, an engine swap mid-ride, and the painful attempt to retrofit a creaky voice assistant into something worthy of generative AI expectations.

That failure — public, prolonged, and still unresolved — gave the impression that Apple had no idea what was coming. That they were too slow, too self-contained, and too arrogant to evolve. And to some extent, that criticism landed. The year-long silence after ChatGPT’s breakout moment painted Apple as unprepared, reactive, even out of touch.

But here’s the thing: while Apple still hasn’t shown much of anything tangible since the Apple Intelligence announcement at WWDC 2024 (Genmoji? Really? Botched email and notification summaries?), the signals are clear. The company has changed course. They’ve acknowledged they’re behind and now they’re moving, quietly but with force. Once Apple has its engineering machine locked onto a target, the company doesn’t need to acquire noisy, erratic startups to plug the gaps. What it needs is time. And direction. And both are now in motion.

Which brings us back to Perplexity. Apple doesn’t need it. Not for the tech — which is just a UX layer on top of open models and scraped data. Not for the team — which seems more interested in testing the boundaries of IP law than building products people trust. And definitely not for the culture — which is allergic to accountability and powered by vibes over values.

Apple’s entire value proposition is control: of the user experience, of the ecosystem, and of the narrative. Perplexity brings chaos. Unapologetically so. It doesn’t have a sustainable moat, a mature product, or a north star. It has hype. It has press. And it has the moral compass of a company that thinks citation is a permission slip to republish everyone else’s work for free.

If Apple wants a better search experience, it can build one, with privacy built in, on-device processing, and full-stack integration. If it wants a smarter assistant, it can leverage its silicon and software in ways that Perplexity simply can’t touch. What it doesn’t need is a cultural virus from a startup that treats copyright like a rounding error and ethics like an optional plugin.

So no, Apple shouldn’t buy Perplexity. Not because it can’t. But because it finally knows what it needs to build, and it’s building it the Apple way. At least that’s what I think they’re doing.

Study shows AI hiring tools overwhelmingly favor white male job seekers

Just read an article at Ars Technica that highlights something we should all be paying more attention to: AI-driven hiring tools still have a long way to go in terms of fairness. Tests show these systems tend to favor white and male candidates, confirming that even with all our tech advances, biases persist in ways we can’t ignore. And this isn’t the only article discussing this; it’s only the latest, which means it’s a long-known problem that hasn’t been rectified.

For all the hype around AI’s potential to revolutionize hiring, if it’s just reinforcing biases, what’s the point? How are these algorithms trained, and why are they showing such a strong bias toward white male candidates?

If you’re a recruiter or decision-maker, it might be time to rethink the role of AI in hiring. We all understand the basic tenet of data processing: garbage in, garbage out. Until there’s a proper process in the middle that mitigates such biases, people shouldn’t rely fully on technology for these purposes, because it will only reinforce them.

These high-end filters make “decisions” based on their training data and will reflect the biases already incorporated in it. I’m sure you’ve heard about facial recognition systems or hand sensors that don’t work properly, or have high error rates, when skin color is darker.

I’m not saying human-led processes aren’t prone to bias; these tech “solutions” were, after all, built to minimize the impact of bias in human judgment. But when the outcome is no different, or maybe even worse, that’s no solution at all.

Google Photos’ New AI Tools for Pixel 8 Raise Messy Questions

The Verge raises serious journalistic questions about the legitimacy of images taken with Google’s latest phones, because the AI tools on the Pixel 8 make it much easier to manipulate them at every stage, from the moment they’re taken to the moment they’re saved and published.

While the AI-adjusted images may have certain markers embedded, those markers may not be easy to detect without specific tools, unless an obvious visual label is permanently affixed to the image.

What is a photo? Is it a snapshot of a single millisecond in time? An imprecise memory of a moment? An ideal depiction of an otherwise imperfect brief period? How much is too much manipulation?

Smartphone captures from any brand are, after all, heavily processed, with software adjustments converging to create the best version of a snapshot. Until now, though, they have still been generally accepted as accurate photographs of a specific moment, because they don’t meaningfully deviate from the truth.

When it comes to casual personal collections of photos and videos, these adjustments don’t, or won’t, amount to anything too serious, but for journalistic purposes, these technical advances pose questions and challenges.

Journalism outlets have guidelines for determining what photo or footage qualifies as a true capture, and a typical smartphone snapshot usually doesn’t change anything meaningful about the actual scene. But when the definitive capture no longer represents the truth, will media authorities need to restrict the use of certain devices?

While manipulated images have made their way into major publications undetected until it was too late, they are still relatively rare.

Of course, photographers have always been able to manipulate situations by changing or adjusting the scene before capturing it, and sometimes only the presence of a witness, or the existence of another image depicting the actual truth, can serve as evidence of manipulation.

When the tools people use can significantly alter what was actually taken by the lens before a definitive record is made or saved into the camera’s memory, instead of after, journalism authorities and watchdogs will need to be even more vigilant.

I have two posts on Netflix Indonesia’s price drop, one written by myself, the other by ChatGPT. It was a fun exercise in seeing how different the pieces would turn out. ChatGPT took a very general analytical view of the subject matter, while I dug deeper into the reasons and gave readers more business and competitive context. Let me know what you think of both.

So Couple had this for April Fools’

They posted this and the corresponding blog post on March 31, but the email announcement didn’t arrive in my inbox until this morning. Almost a week late. In the meantime, on April 2 they put up another blog post saying that Alice, and her male version Alex, had discovered each other and run away, leaving only a chat record of what happened.

You can go to http://couple.me/alice to see the intro. The whole thing does look straight out of the movie Her, except you don’t get Scarlett Johansson’s voice.

That’s cute.