The BBC has raised concerns with Apple over a new iPhone feature that generated a misleading notification tied to a high-profile murder in the United States.
Apple Intelligence, a feature introduced in the UK earlier this week, uses artificial intelligence to condense and group notifications.
However, the tool misrepresented BBC News by falsely suggesting it had published a headline claiming that Luigi Mangione, the man arrested in connection with the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself.
The BBC confirmed it had reached out to Apple “to raise this concern and fix the problem.” A spokesperson for the corporation stressed, “BBC News is the most trusted news media in the world.
It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.”
A phone screenshot of the erroneous notification showed it reading, “BBC News, Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol’s office.”
While other parts of the AI-generated summary appeared to accurately reflect updates about Syria and South Korea, the portion about Mangione was entirely false.
The BBC isn’t the only news organization affected by issues with Apple’s AI tool. On November 21, the New York Times faced a similar incident when three unrelated articles were grouped together in a notification, one of which inaccurately summarized a report about the International Criminal Court issuing an arrest warrant for Israeli Prime Minister Benjamin Netanyahu as simply, “Netanyahu arrested.”
The screenshot was highlighted on Bluesky by a journalist from ProPublica; the New York Times did not comment.
Apple touts its AI notification summaries as a way to reduce disruptions and help users focus on important alerts. Currently, the feature is available only on select devices, including iPhone 16 models running iOS 18.1 or later, as well as certain iPads and Macs.
Prof. Petros Iosifidis, a media policy expert at City, University of London, described the mishap as a significant misstep for Apple.
He noted, “I can see the pressure getting to the market first, but I am surprised that Apple put their name on such [a] demonstrably half-baked product. Yes, potential advantages are there – but the technology is not there yet and there is a real danger of spreading disinformation.”
Apple provides an option for users to flag inaccurate notification summaries, though the company has not disclosed the number of complaints received.
The issue of AI-generated inaccuracies isn’t unique to Apple. In May, Google’s AI Overviews feature drew criticism after it advised users trying to make cheese stick to pizza to consider adding “non-toxic glue,” and claimed that geologists recommend humans eat one rock per day—examples the company described as isolated incidents.
While AI tools aim to simplify information, cases like these highlight the challenges in ensuring accuracy, particularly when summarizing content on behalf of reputable publishers.