Understanding AI Chatbot 'Hallucinations': How They Happen and How to Spot Them
'Hallucination' typically evokes images of unreal perceptions in the human mind, but in artificial intelligence it has a distinct meaning: AI chatbots generating fictitious information that sounds accurate. Imagine asking a chatbot about the Statue of Liberty and being told it stands in California, or being given invented designers and the wrong year of construction. These are instances of 'hallucination' in AI.
The reason behind these falsehoods lies in how AI chatbots are trained. They ingest vast amounts of text to learn which words and topics tend to appear together. When prompted, they use these learned patterns to compose text, predicting the most likely next word at each step. Because nothing in this process checks facts, they can weave convincing yet untrue statements, as the sketch below illustrates.
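The toy model below makes this concrete. It is a minimal sketch, not a real language model: the probability table is invented for illustration, and a production model learns billions of parameters rather than a handful of word pairs. What it shows is that generation only ever asks "what word usually comes next?", never "is this true?".

```python
import random

# A toy "language model": for each context word, the probabilities of the
# next word, learned purely from word-pattern statistics. All values here
# are invented for illustration.
next_word_probs = {
    "Statue": {"of": 0.95, "in": 0.05},
    "of": {"Liberty": 0.9, "California": 0.1},
    "Liberty": {"is": 0.8, "was": 0.2},
    "is": {"in": 0.7, "located": 0.3},
    "in": {"New": 0.6, "California": 0.4},  # plausible, sometimes wrong
}

def generate(prompt_word: str, steps: int) -> list[str]:
    """Sample a continuation word by word from learned statistics.

    Nothing here verifies facts: the model only follows probability,
    which is why fluent falsehoods can emerge.
    """
    words = [prompt_word]
    for _ in range(steps):
        probs = next_word_probs.get(words[-1])
        if not probs:
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate("Statue", 5)))
# One possible output: "Statue of Liberty is in California" -- fluent, yet false.
```

Run it a few times and the output varies: sometimes correct, sometimes a confident-sounding error, which is exactly the behavior users see in chatbots at much larger scale.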
Hallucinations occur even in high-stakes contexts such as legal work. In one widely reported case, a legal brief drafted with an AI chatbot and containing quotes and citations to court cases that did not exist was submitted in court, highlighting the risk in practical applications of AI.
Understanding that AI responses can contain such falsehoods is increasingly vital. The term has even entered the mainstream: Dictionary.com chose 'hallucinate' as its word of the year for capturing AI's curious implications for language and life.
Combating AI Hallucinations
Leading tech firms such as OpenAI and Google acknowledge these AI-generated inaccuracies and encourage users to verify chatbot responses. To combat hallucinations, both companies are investing in corrective measures.
Google, for example, has built a feedback loop into its Bard chatbot. Users can flag an inaccurate answer with a thumbs down and explain the mistake, and that signal is used to improve the model, following the general pattern sketched below.
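As a rough illustration of such a feedback loop, here is a minimal sketch. Everything in it is hypothetical: the class and field names are invented, and Google has not published Bard's internal pipeline; the point is only the general pattern of collecting thumbs-down reports as a training signal.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    """One user report of an inaccurate chatbot answer.

    Field names are hypothetical; this illustrates the general shape of
    a thumbs-down feedback loop, not Google's actual schema.
    """
    prompt: str
    response: str
    thumbs_down: bool
    user_note: str = ""

@dataclass
class FeedbackStore:
    records: list[FeedbackRecord] = field(default_factory=list)

    def report(self, prompt: str, response: str, note: str) -> None:
        self.records.append(FeedbackRecord(prompt, response, True, note))

    def training_examples(self) -> list[tuple[str, str]]:
        # Flagged answers become negative examples for later fine-tuning
        # or for training a reward model that scores future responses.
        return [(r.prompt, r.user_note) for r in self.records if r.thumbs_down]

store = FeedbackStore()
store.report(
    "Where is the Statue of Liberty?",
    "It is in California.",
    "Wrong: it stands on Liberty Island in New York Harbor.",
)
print(store.training_examples())
```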
OpenAI, meanwhile, has explored a method called 'process supervision.' Instead of rewarding the model only for a correct final answer, each step of its reasoning is scored, encouraging valid chains of thought rather than lucky guesses. According to Karl Cobbe, a researcher at OpenAI, detecting and mitigating these logical missteps is a critical step toward building aligned AI.
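The difference between outcome and process supervision can be sketched in a few lines. This is an illustrative toy, not OpenAI's implementation: in the published work a trained reward model scores steps labeled by humans, whereas here the step verdicts and reward values are supplied by hand.

```python
def outcome_reward(final_answer: str, correct: str) -> float:
    # Outcome supervision: reward depends only on the conclusion.
    return 1.0 if final_answer == correct else 0.0

def process_reward(step_labels: list[bool]) -> float:
    # Process supervision: each reasoning step judged sound earns partial
    # credit, pushing the model toward valid reasoning, not lucky guesses.
    if not step_labels:
        return 0.0
    return sum(step_labels) / len(step_labels)

# A worked arithmetic chain with one flawed step.
steps = ["2 + 2 = 4", "4 * 3 = 12", "12 - 5 = 8"]  # last step is wrong
step_labels = [True, True, False]                   # human step-level verdicts

print(outcome_reward(final_answer="8", correct="7"))  # 0.0
print(process_reward(step_labels))                    # ~0.67
```

Under outcome supervision the model above gets nothing for two correct steps; under process supervision it is credited for them, which is the incentive shift the technique aims for.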
AI chatbots like ChatGPT and Google's Bard are convenient, but they are not infallible: their output should always be critically examined for accuracy.
Tags: AI, hallucination, chatbots