Thursday, December 4, 2025
American News Network

The 3 biggest AI fails of 2025 — Friend, imaginary summer reading lists, and so many hallucinations

By Eric December 4, 2025

In 2025, the world witnessed a surge in AI-related mishaps, underscoring the growing pains of this transformative technology. Hallucination, the phenomenon in which AI generates misleading or entirely fabricated information, became a significant concern, particularly in academia, government, and the legal sector. A study from Deakin University found that ChatGPT, a widely used generative AI tool, fabricated citations in about 20 percent of cases, while roughly half of the citations it did produce contained other errors. This troubling trend extended beyond academia: Robert F. Kennedy Jr.’s Health and Human Services Department relied on AI to cite non-existent studies, and the Chicago Sun-Times faced backlash for publishing a summer reading list that paired real authors with fictional book titles. In the legal arena, lawyers and litigants in 635 cases reportedly relied on AI-generated hallucinations, raising serious questions about the reliability of AI in critical decision-making.

The year also saw the launch of the Friend, a controversial wearable device designed to record ambient audio and engage users through a connected app. Despite a staggering marketing investment exceeding $1 million, including one of the largest advertising campaigns in New York City’s subway history, the Friend faced immediate backlash. Commuters vandalized the ads, and the device was mocked to the point of becoming a Halloween costume. Reviews reflected the public’s disdain, with many expressing concerns that such technology could exacerbate feelings of isolation and loneliness rather than foster genuine connections.

On the corporate front, a report from MIT’s Media Lab painted a grim picture of AI implementation in business. Despite significant investments ranging from $30 billion to $40 billion, an astonishing 95% of corporate AI initiatives failed. While tools like ChatGPT and Copilot gained traction, enhancing individual productivity, they fell short of impacting overall business performance. The report highlighted that many organizations struggled to align AI tools with their daily operations, leading to brittle workflows and a lack of contextual learning. As businesses grapple with these challenges, the hope is that 2026 will bring fewer AI pitfalls and a more refined approach to integrating AI technologies into everyday practices.

https://www.youtube.com/watch?v=Q6Lreosjz0g

Generative AI could have written this introduction, but there’s a good chance it would have started hallucinating. Hallucination, which Google failed to mention in its AI-filled 2025 keynote, led to many, many AI fails in 2025. But it wasn’t the only factor. Below, please enjoy our picks for the biggest AI fails from this past year.
Hallucinations hit academia, government, and the law
AI has been making stuff up for some time; hallucinate was the word of the year in 2023 for good reason. But in 2025, the problem got a lot worse. Google AI Overviews may no longer be telling you to put glue on pizza, but they can still claim the latest Call of Duty doesn’t exist.

SEE ALSO: Google AI overviews: Confident when wrong, yet more visible than ever

And it’s not like academics are immune. A study from Deakin University found that ChatGPT fabricated about one in five of its academic citations, and roughly half of the citations it did produce contained other errors.
Such proof of hallucination hasn’t stopped politicians, publications, or lawyers. Robert F. Kennedy Jr.’s Health and Human Services Department used AI to cite studies that don’t exist. The Chicago Sun-Times published a summer reading list in May full of real authors along with hallucinated book titles. Meanwhile, lawyers and litigants in 635 cases have used AI hallucinations in their arguments.
The Friend wearable failed fast
The Friend is a wearable device that looks like a large necklace pendant. It records all of the audio around the wearer, sends it to a connected phone app, and uses that data to chat with the user through real-time texts.
How incredibly odd, you might think. Could such a device deepen our epidemic of isolation and loneliness, which tech companies are already exploiting?
That didn’t stop Friend from spending more than $1 million on advertisements in the New York City subway system. Ads hit over 11,000 rail cars, 1,000 platform posters, and 130 urban panels, in one of the largest marketing campaigns in NYC subway history.


The result? Commuters immediately vandalized the ads. Criticism was so widespread that the subway ads themselves became Halloween costumes. No wonder reviews of the Friend came with headlines noting “everybody hates it.”
Most corporate AI pilots crashed
Across the business world, companies are being told they simply have to start using AI. The problem: they’re just not very good at it.
According to a report from MIT’s Media Lab, “The State of AI in Business 2025,” 95 percent of corporate AI initiatives fail, despite enterprise investments totaling somewhere between $30 billion and $40 billion.
“Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment,” the report explains.
“But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.”
Here’s hoping 2026 will hold fewer AI fails.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
