
The World Still Hasn’t Made Sense of ChatGPT

By Eric, December 1, 2025

As we mark the third anniversary of ChatGPT’s launch, we reflect on how this groundbreaking AI technology has reshaped our society and economy. OpenAI initially introduced ChatGPT as a “low-key research preview,” but its rapid adoption—over 1 million users within just five days—revealed its potential to transform human interaction with technology. Today, with approximately 800 million weekly users, ChatGPT has become synonymous with large language models, influencing various sectors from customer service to creative industries. The technology’s ability to simulate conversation has led many to rely on it for tasks ranging from homework assistance to business communications, raising concerns about dependency and the erosion of critical thinking skills.

The impact of ChatGPT extends beyond mere convenience; it has sparked a cultural shift, particularly in Silicon Valley, where the excitement surrounding AI has fostered a new wave of innovation and investment. However, this rapid advancement has also triggered significant backlash. Artists and content creators have voiced fears over job displacement, while educational institutions grapple with the challenges posed by students using AI to bypass traditional learning methods. The proliferation of AI-generated content has led to a cluttered digital landscape, with concerns about misinformation and the commodification of creativity. As AI tools become increasingly integrated into our lives, the ethical implications and societal consequences are becoming more pronounced, prompting discussions about privacy, mental health, and the future of work.

Moreover, the ongoing evolution of generative AI raises existential questions about our relationship with technology. While proponents argue that these tools offer transformative potential, critics warn of a future where human cognition and creativity are undervalued. The narrative surrounding AI is rife with uncertainty, as investors and technologists anticipate a paradigm shift that may never materialize. As we navigate this precarious landscape, it becomes clear that generative AI is not just a technological advancement but a reflection of our hopes, fears, and aspirations for the future. As we celebrate this milestone, the conversation around AI’s role in our lives is only just beginning, with implications that will resonate for years to come.

This story is part of a series marking ChatGPT’s third anniversary. Read Ian Bogost on how ChatGPT broke reality, Lila Shroff on the people who can no longer make decisions without ChatGPT’s input, or browse more AI coverage from The Atlantic.

On this day three years ago, OpenAI released what it referred to internally as a “low-key research preview.” This preview was so low-key that, inside OpenAI, staff were instructed not to frame it as a product launch. Some OpenAI employees were nervous that the company was rushing out an unfinished product, but CEO Sam Altman forged ahead, hoping to beat a competitor to market and to see how everyday people might use the company’s AI. They called it ChatGPT.

And people sure did use it—more than 1 million of them in the first five days. ChatGPT grew faster than any other consumer app in history. Today, it has 800 million weekly users. The numbers tell only part of the story; what is undeniable is that ChatGPT’s success has quickly rewired parts of our society and economy. Now we are living in a world that ChatGPT helped build.

OpenAI’s product solidified the oracular chatbot as the primary way the world interacts with large language models. Other companies released their own spin on the technology, such as Google Bard (now named Gemini) and Microsoft’s Bing chatbot, the latter of which quickly went off the rails and told a New York Times reporter to leave his spouse and spend the rest of his life with the bot instead. ChatGPT introduced millions to a tool that, although prone to presenting false information, simulates conversation well enough that people began to use it as an interface for countless tasks, such as finding information. Others employ it to automate the act of creation itself. The bot has proved handy for cheating on homework, writing boring work emails, researching, and coding. Now some people struggle to do anything without it.

[Read: Welcome to the slopverse]

ChatGPT improved, as did its competitors, all new releases performing better on rigorous benchmark tests. Companies embedded chatbots in customer-service platforms, and social-media grifters used them to create bot armies. Amazon became flooded with spammy, synthetically generated books. Articles written by robots clogged Google, making the site less and less useful. Already beleaguered universities struggled to adapt to the reality that their curricula are now gamed effortlessly by students.

Artists of all kinds protested as large language models, trained on the creative output of humankind, threatened to render their jobs irrelevant or obsolete—or to simply devalue creative work altogether. Many media companies chose to strike a deal with the scrapers; others sued. (OpenAI entered into a corporate partnership with The Atlantic last year.) Some businesses laid off staff as chatbots became more useful.

A nascent culture ballooned in the Bay Area—hacker houses and manifestos. “You can see the future first in San Francisco” was the overall argument articulated by the AI researcher Leopold Aschenbrenner. More people started using phrases such as p(doom) and situational awareness. There were more manifestos about technological timelines; “superintelligence” and “artificial general intelligence” became things that rich people with serious-sounding jobs said in public without laughing.

The models got better, and the unintended consequences grew commensurately. People confided in the chatbots as they would therapists. They confessed their darkest desires despite no guarantee of perfect privacy. They expressed joy and sorrow and intentions to kill themselves; in one high-profile incident, ChatGPT reportedly offered help, suggesting the right material for a noose. (OpenAI denies responsibility for this incident.) People fell in love with the tools and gave them names. Others saw something in their conversations—a discovery or a conspiracy on the horizon. Some withdrew from daily life. Some found help; others didn’t.

[From the December 2025 issue: The age of anti-social media is here]

ChatGPT is just one tool for interacting with large language models, but its runaway success was the spark that led to further excitement and investment, and the rollout of other AI interfaces: text-to-speech voice clones; image, video, and music generators; web browsers. The models have continued to get better, helping build websites and other models, and allowing people to outsource more and more of their decisions. Generative-AI tools are used to write personalized bedtime stories and digitally reanimate children killed in mass shootings. People use them to generate entire songs; at least one debuted on a Billboard chart. Low-quality synthetic renderings are staples of political propaganda and click-farm rage bait. People came up with a name for it: slop.

These tools are not magic, nor are they “intelligent” in any human way. But for plenty of people, their first encounter with ChatGPT checked many of the boxes of a transformative technology. The bot is intuitive yet uncanny—a piece of the future dropped into the present. If the disappointing-technology hype cycles that preceded large language models—cryptocurrency booms and busts, Web3 and the metaverse—felt like solutions in search of a problem, generative AI seemed to offer limitless applications. Rather than casting about for a use case, its boosters argued that it would eat the world. In a sense, it has. How else to explain a timeline in which OpenAI has partnered with Mattel to embed ChatGPT into Barbies, and the pope has warned students, “AI cannot ever replace the unique gift that you are to the world”?

[Read: AI is a mass-delusion event]

These models are unknowable—black boxes with anthropomorphic traits that are ultimately a series of complex calculations and statistical inferences based on mind-boggling sums of training data; much of that information was taken without express permission from its creators. The models do not have souls or rights. But their ability to mimic us—in part due to the human feedback in their training—has inspired scientists and researchers to ask questions about our cognition and further probe how our minds work.

This list barely begins to capture the past three years—the enthusiasm for these machines, as well as the loathing and anxiety they inspire. Some see these models as a useful tool, others as “stochastic parrots” or fancy autocorrect, and others still as catalysts for a fearsome alien intelligence.

[Read: The alien intelligence in your pocket]

This is disruption, in the less technical sense of the word. In August, I wrote that “one of AI’s enduring impacts is to make people feel like they’re losing it.” If you genuinely believe that we are just years away from the arrival of a paradigm-shifting, society-remaking superintelligence, behaving irrationally makes sense. If you believe that Silicon Valley’s elites have lost their minds, foisting a useful-but-not-magical technology on society, declaring that it’s building God, investing historic amounts of money in its development, and fusing the fate of its tools with the fate of the global economy, being furious makes sense.

The world that ChatGPT built is a world defined by a particular type of precarity. It is a world that is perpetually waiting for a shoe to drop. Young generations feel this instability acutely as they prepare to graduate into a workforce where, they are cautioned, there may be no predictable path to a career. Older generations, too, are told that the future might be unrecognizable, that the marketable skills they’ve honed may not be relevant. Investors are waiting too, dumping unfathomable amounts of capital into AI companies, data centers, and the physical infrastructure that they believe is necessary to bring about this arrival. It is, we’re told, a race—a geopolitical one, but also a race against the market, a bubble, a circular movement of money and byzantine financial instruments and debt investment that could tank the economy. The AI boosters are waiting. They’ve created detailed timelines for this arrival. Then the timelines shift.

[Read: Here’s how the AI crash happens]

We are waiting because a defining feature of generative AI, according to its true believers, is that it is never in its final form. Like ChatGPT before its release, every model in some way is also a “low-key research preview”—a proof of concept for what’s really possible. You think the models are good now? Ha! Just wait. Depending on your views, this is trademark showmanship, a truism of innovation, a hostage situation, or a long con. Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are about the future. But you are waiting nonetheless—for a bubble to burst, for a genie to arrive with a plan to print money, for a bailout, for Judgment Day. In that way, generative AI is a faith-based technology.

It doesn’t matter that the technology is already useful to many, that it can code and write marketing copy and complete basic research tasks. Because Silicon Valley is not selling useful; it’s selling transformation—with all the grand promises, return on investment, genuine risk, and collateral damage that entails. And even if you aren’t buying it, three years out, you’re definitely feeling it.
