Apple Suspends AI News Notifications Due to Inaccuracies
Apple has decided to temporarily suspend its new artificial intelligence feature that summarizes news notifications after it consistently produced inaccurate and misleading headlines. This decision came in response to complaints from various news organizations and advocates for press freedom.
The heavily promoted Apple Intelligence feature began generating erroneous summaries that appeared to users in the same form as standard push notifications, making the errors easy to mistake for real headlines. In response, Apple released a beta update for developers on Thursday that disables the AI feature for all news and entertainment notifications. The company says it is working on improvements and intends to reintroduce the feature in a future update.
In the same update, Apple said that the AI-generated summaries, which users can choose to enable, will now be clearly labeled as generated by artificial intelligence. The change is intended to signal to users that the summaries may not always be accurate.
The technology has faced significant scrutiny in recent weeks. The BBC, for example, raised concerns with Apple after a false headline claimed that a murder suspect had shot himself. Another flawed summary incorrectly suggested that Israeli Prime Minister Benjamin Netanyahu had been arrested, merging details from multiple articles into a single erroneous notification.
A BBC spokesperson emphasized the importance of accuracy in news, stating, "These AI summaries by Apple do not reflect - and in some cases completely contradict - the original BBC content." Other organizations have echoed this sentiment.
On a separate occasion, the AI feature incorrectly summarized a notification from the Washington Post with claims that were entirely false, frustrating journalists and media outlets.
Press freedom organizations have warned that misinformation arising from these AI summaries threatens the public's right to reliable news. Both Reporters Without Borders and the National Union of Journalists said users should not be left guessing at the accuracy of the news they receive.
The incident highlights a broader problem with AI technologies: inaccurate output is not unique to Apple's tools. Other developers working with large language models, such as those underlying ChatGPT, face similar challenges, as these models are known to produce confidently incorrect content, commonly called "hallucinations." The models are designed to generate plausible-sounding responses to prompts without any guarantee of accuracy.
Research continues to support this picture, indicating that even top AI models remain unreliable and often generate fabricated information.