ChatGPT will always give you an answer, even if it’s wrong

Between fast and correct, ChatGPT will always choose fast, because it isn’t programmed or trained to say it doesn’t have the answer. ChatGPT is trained to respond confidently, and as a result it will always provide you with an answer regardless of whether that answer is false or factual.

In this example, ChatGPT was asked in which episode of The Simpsons Homer Simpson asks how much a free gift is. ChatGPT confidently answered episode 7 of season 7, King-Size Homer.

That episode was indeed titled King-Size Homer, but it’s not the one where Homer asks the question. That was The Joy of Sect, season 9, episode 13, as confirmed by IMDb.

Given that a chatbot built on a Large Language Model (LLM) would be beneficial for IMDb to have, maybe it’s just a matter of time before they build one themselves. After all, search seems to be making way for chatbots and voice queries lately. There is, however, a caveat, which I will get to in a second.

If you enter “Homer how much free gift” into YouTube’s search bar, it returns this video as the first result. Granted, the episode title, number, and season only appear in the video description, which could easily be incorrect or even blank, but cross-referencing it with IMDb confirmed it’s the correct one.

If you search on Google, it may suggest a more accurate quote to search for, which should surface the same YouTube video at or near the top of the results. Google 1, ChatGPT 0.

Google search suggestions for “Homer how much”

ChatGPT may well be the most advanced LLM chatbot today, but it’s a language model, designed and trained to deliver the most grammatically and syntactically plausible response according to the material it has been trained on. It is not trained to provide the correct answer to queries, because it isn’t meant to be a search engine or a database that you submit queries to.

While ChatGPT does not verify or fact-check its responses, other models may be trained to retrieve information online and infer the correct answer from what they find.
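To illustrate the difference, here is a minimal, purely illustrative sketch of that retrieve-then-answer pattern. Every name here is hypothetical, and the keyword lookup over a two-entry corpus stands in for what a real system would do with a search API or vector database; the point is only that the answer is grounded in retrieved evidence, and the system can refuse when it finds none.

```python
# Illustrative sketch of retrieval-augmented answering: retrieve supporting
# documents first, then answer only from that evidence. All names and data
# here are made up for the example; a real system would query a search
# engine or vector store instead of this tiny in-memory corpus.

CORPUS = {
    "The Joy of Sect": "Season 9, episode 13. Homer asks how much the free gift is.",
    "King-Size Homer": "Season 7, episode 7. Homer gains weight to work from home.",
}

def retrieve(query: str) -> list[tuple[str, str]]:
    """Naive keyword retrieval: return documents sharing words with the query."""
    words = set(query.lower().split())
    return [
        (title, text)
        for title, text in CORPUS.items()
        if words & set(text.lower().split())
    ]

def answer(query: str) -> str:
    """Answer only from retrieved evidence; admit ignorance otherwise."""
    hits = retrieve(query)
    if not hits:
        # Unlike a bare LLM, refuse instead of confidently guessing.
        return "I don't know."
    title, text = hits[0]
    return f"{title}: {text}"

print(answer("how much is the free gift"))
```

The design choice worth noting is the refusal branch: a plain LLM has no such branch, which is exactly why it answers confidently even when it has nothing to back the answer up.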

If ChatGPT happens to provide the correct answer, it’s only because the answer is contained in its training material with the relevant context, which it manages to pull out. In other words, correct answers are delivered by chance, without verification. How many times have you realized it was lying to you, and had to ask why it gave you a wrong answer, or to check whether the information it gave you was correct?

Remember last year, when everyone was reporting that ChatGPT had passed the bar exam (the lawyers’ licensing exam) in the top 10%? It actually didn’t. A re-analysis put it closer to the 48th to 69th percentile, depending on the comparison cohort; the grading method was flawed.

Again, ChatGPT is not an information database, but it does know how to form correct sentences, write in certain styles, translate texts, and help you with your coding. Maybe one day it will become an information database, but today, with GPT-3.5, GPT-4, and GPT-4o, is not that day.

Use ChatGPT as a writing or coding assistant and you’re probably golden, but you still have to play the role of editor to make sure it gives you the correct output for your needs.

AI voice detection and recognition are becoming more crucial

This Twitter thread shows how far artificial voices have come. For those familiar with Steve Jobs’ voice, the voice in these recordings is almost indistinguishable from the original. Listening to them, you could be forgiven for thinking it’s actually Steve Jobs saying these words, never mind that he’s been gone for more than a decade.

The only catch is that because the training set must have been drawn from the many recordings of his Apple keynote speeches and product announcements, they all sound like he’s reading from a script or making an announcement. None of the sentences sound natural, the way someone would speak in a regular conversation or when answering questions, but that’s not too difficult to overcome. Tools for adjusting AI-generated voices to sound more natural already exist.

Here’s another example. The YouTube channel Star Wars Comics has started experimenting with generated voices to narrate some storylines from the Star Wars comic books, to keep their audience up to date with what’s happening in the comics. In one video, they used James Earl Jones’ Darth Vader voice to read the lines on the pages of the comic book. Their latest video voiced a conversation between Emperor Palpatine and Darth Vader from another issue in the recent Darth Vader comic book series, both using generated versions of the real actors’ voices.

As many in the comments noted, while the voices sound indistinguishable from the originals, the speech patterns make it obvious that they were generated. That’s because the voices weren’t adjusted to the way a person would actually speak in conversation in that situation. Again, these are relatively trivial changes one could make with an AI voice generator.

While these may be little more than fun projects for curious minds, the day when someone can create entirely fabricated recordings to manipulate the public is already here. You can already create fake videos of a person saying things they never actually said; now the voices sound even closer to the original.

When deepfake videos started popping up a few years ago, people knew this was going to be a significant problem. People are already easily fooled by fabricated articles and stories, and this is only going to make it far more challenging to fact-check and verify the validity of recordings.

All I can say for that is, brace for impact.

Screen grabs of the Siri vs TellMe video (the important parts)

Microsoft’s TellMe vs Apple’s Siri