THIS IS A SERIOUS FLAW: Chatbots Sometimes Make Things Up. Is AI’s Hallucination Problem Fixable?


via MSN:

Spend some time with ChatGPT and other artificial intelligence chatbots and it doesn't take long for them to spout falsehoods.

Described as hallucination, confabulation or just plain making things up, it’s now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs.


“I don’t think that there’s any model today that doesn’t suffer from some hallucination,” said Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2.

“They’re really just sort of designed to predict the next word,” Amodei said. “And so there will be some rate at which the model does that inaccurately.”
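Amodei's point is easy to see in code. The sketch below is a minimal illustration, not the internals of Claude 2, ChatGPT, or any other product named in this article: it uses the open-source Hugging Face `transformers` library and the small `gpt2` model (both purely illustrative choices) to show that a language model's raw output is just a probability distribution over possible next words. Nothing in that distribution distinguishes a true continuation from a plausible-sounding false one.

```python
# Minimal sketch of "predict the next word," using the Hugging Face
# `transformers` library. "gpt2" is an illustrative small open model,
# not the model behind any chatbot discussed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits at the last position score every possible next token.
    logits = model(**inputs).logits[0, -1]

# Convert scores to a probability distribution and show the top candidates.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```

Because the model always samples from a distribution like this, some fraction of its continuations will be fluent but wrong, which is the "rate at which the model does that inaccurately" Amodei describes.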


Anthropic, ChatGPT-maker OpenAI and other major developers of AI systems known as large language models say they’re working to make them more truthful.

How long that will take — and whether they will ever be good enough to, say, safely dole out medical advice — remains to be seen.
