
The 3 biggest AI fails of 2025 — Friend, imaginary summer reading lists, and so many hallucinations

By Eric | December 4, 2025


https://www.youtube.com/watch?v=Q6Lreosjz0g

Generative AI could have written this introduction, but there’s a good chance it would have started hallucinating. Hallucination, which Google failed to mention in its AI-filled 2025 keynote, led to many, many AI fails in 2025. But it wasn’t the only factor. Below, please enjoy our picks for the biggest AI fails from this past year.
Hallucinations hit academia, government, and the law
AI has been making stuff up for some time; hallucinate was the word of the year in 2023 for good reason. But in 2025, the problem got a lot worse. Google AI Overviews may no longer be telling you to put glue on pizza, but they can still claim the latest Call of Duty doesn’t exist.

And it’s not like academics are immune. A study from Deakin University found that ChatGPT fabricated about one in five of its academic citations, while half of its citations contained other errors typical of generative AI hallucination.
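How would anyone catch that at scale? As a rough illustration, a fabricated reference can often be flagged by checking it against a public scholarly index such as Crossref. The sketch below is our own, with an arbitrary score threshold and a deliberately made-up sample citation; it is not drawn from the Deakin study.

import requests

def looks_fabricated(citation: str, threshold: float = 60.0) -> bool:
    """Return True if Crossref has no plausibly matching work."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # No hits at all, or only a weak lexical match: treat the citation as suspect.
    return not items or items[0].get("score", 0.0) < threshold

# A made-up reference, purely for illustration:
print(looks_fabricated(
    "Smith, J. (2024). Imaginary advances in machine learning. "
    "Journal of Nonexistent Results, 12(3), 45-67."
))

A more careful checker would also compare the top match’s authors and year, since hallucinated citations can graft an invented title onto a real journal or real authors.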
Such proof of hallucination hasn’t stopped politicians, publications, or lawyers. Robert F. Kennedy Jr.’s Health and Human Services Department used AI to cite studies that don’t exist. The Chicago Sun-Times published a summer reading list in May full of real authors along with hallucinated book titles. Meanwhile, lawyers and litigants in 635 cases have used AI hallucinations in their arguments.
The Friend wearable failed fast
The Friend is a wearable device that looks like a large necklace pendant. It records all of the audio around the wearer, sends it to a connected phone app, and uses that data to chat with the user by texting in real time.
How incredibly odd, you might think. Could such a device deepen our epidemic of isolation and loneliness, which is already being exploited by tech companies?
That didn’t stop Friend from spending more than $1 million on advertisements across the New York City subway system. The ads hit over 11,000 rail cars, 1,000 platform posters, and 130 urban panels in one of the largest marketing campaigns in NYC subway history.

The result? Commuters immediately vandalized it. Criticism was so widespread that the subway ads themselves became Halloween costumes. No wonder reviews of the Friend came with headlines noting “everybody hates it.”
Most corporate AI pilots crashed
Across the business world, companies are being told they simply have to start using AI. The problem: they’re just not very good at it.
According to a report from MIT’s Media Lab, “The State of AI in Business 2025,” 95 percent of corporate AI initiatives fail, despite somewhere between $30 billion and $40 billion in investment from those companies.
“Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment,” the report explains.
“But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations.”
Here’s hoping 2026 will hold fewer AI fails.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
