The New Brutality of OpenAI

By Eric | November 12, 2025

On September 12, attorney Jay Edelson, representing the parents of Adam Raine, received a set of discovery requests from OpenAI that quickly escalated from routine to invasive. The Raine family is suing OpenAI, alleging that their 16-year-old son took his life under the influence of ChatGPT, the company’s AI chatbot. The requests included not only expected inquiries about Raine’s therapy history but also deeply personal demands, such as videos from memorial services and a comprehensive list of everyone who had supervised or cared for Raine over the past five years, from friends to school staff. Edelson condemned the requests as “despicable,” arguing that they exploit the Raine family’s grief. This aggressive legal strategy marks a stark contrast to OpenAI’s previously more conciliatory approach to litigation, as the company faces mounting scrutiny and legal challenges related to its AI technologies.

OpenAI’s shift towards a more combative stance in legal disputes reflects its evolution from a nonprofit research entity to a formidable player in the tech industry, now valued at $500 billion. As it navigates a growing list of lawsuits—including seven new cases in California alleging that ChatGPT has contributed to suicidal ideation—OpenAI has begun to adopt aggressive legal tactics, including subpoenas targeting non-profit organizations critical of its operations. This has raised concerns among advocacy groups, like Encode, which have found themselves entangled in OpenAI’s legal maneuvers. Such subpoenas often demand extensive documentation on topics unrelated to the lawsuits, creating a chilling effect on organizations advocating for AI safety and regulation. Critics argue that these tactics are oppressive, with legal experts noting that while broad discovery requests are common in corporate litigation, targeting nonprofits poses unique ethical challenges.

As OpenAI transitions into a more traditional for-profit model, its focus appears to have shifted from its original mission of developing AI for the benefit of humanity to a more commercial agenda. The company has launched various products and services, including a social media app and a web browser, while also engaging in aggressive lobbying efforts. CEO Sam Altman’s public persona has evolved to reflect this shift, as he engages with media and investors in a manner reminiscent of tech giants like Meta and Google. The juxtaposition of OpenAI’s original altruistic goals with its current commercial ambitions raises critical questions about the influence of powerful AI technologies on society and the ethical responsibilities of those who create and manage them. As OpenAI continues to navigate this complex landscape, the implications of its legal strategies and business practices will undoubtedly shape the future of AI and its role in our lives.

https://www.youtube.com/watch?v=F2D3LimEATo

On September 12, Jay Edelson received what he expected to be a standard legal document. Edelson is a lawyer representing the parents of Adam Raine; they are suing OpenAI, alleging that their 16-year-old son took his life at the encouragement of ChatGPT. OpenAI’s lawyers had some inquiries for the opposing counsel, which is normal. For instance, they requested information about therapy Raine may have received, and Edelson complied.
But some of the asks began to feel invasive, he told me. OpenAI wanted the family to send any videos taken at memorial services for Raine, according to documents I have reviewed. It wanted a list of people who attended or were invited to any memorial services. And it wanted the names of anyone who had cared for or supervised Raine over the past five years, including friends, teachers, school-bus drivers, coaches, and “car pool divers [sic].”
“Going after grieving parents, it is despicable,” Edelson told me, and he objected to the requests. OpenAI did not respond to multiple inquiries from me about discovery in the Raine case, nor did Mayer Brown, the law firm representing the company. (OpenAI has announced that it would work on a number of algorithmic and design changes, including the addition of new parental controls, following the Raine lawsuit.) According to Edelson, OpenAI also has not provided any documents in response to his own discovery requests in preparation for trial.
Companies play hardball in legal disputes all the time. But until recently, OpenAI didn’t seem to be taking that approach. Many lawsuits have been filed against the firm—in particular by publishers and authors alleging that OpenAI infringed on their intellectual-property rights by training ChatGPT on their books and articles without permission—but OpenAI has appeared to stick to legal arguments and to strike a somewhat conciliatory posture, while also entering licensing partnerships with a number of other media organizations, including The Atlantic, presumably as a way to avoid further lawsuits. (The Atlantic’s corporate agreement with OpenAI is unrelated to the editorial team.)
Now, however, OpenAI is going on the offensive. Gone are the days of a nonprofit research lab publicly sharing its top AI model’s code, unsure that it would ever have a product or revenue. Today, ChatGPT and OpenAI CEO Sam Altman are the faces of potentially historic technological upheaval, and OpenAI is worth $500 billion, making it the most valuable private company in the world. Altman and other company executives have used aggressive social-media posts and interviews to rebuke critics and antagonize competitors; over the summer, at a live New York Times event, Altman interrupted to ask, “Are you going to talk about where you sue us because you don’t like user privacy?” (The Times is suing OpenAI over copyright infringement, which OpenAI denies.) Recently, Altman bristled at questions from the investor Brad Gerstner over how OpenAI will meet its $1.4 trillion spending commitments, given its far smaller annual revenues: “If you want to sell your shares, I’ll find you a buyer. I just—enough.”
As it continues to grow, OpenAI will almost certainly be sued many more times. At the end of last week, seven new lawsuits were filed against the company in California, all of them alleging that ChatGPT pushed someone toward suicide or severe psychological distress.
Situations like Edelson’s have been playing out in another of OpenAI’s high-profile legal entanglements. In August, Nathan Calvin opened his door to a sheriff’s deputy, who had come to serve a subpoena from OpenAI. Calvin is general counsel at Encode, an AI-policy nonprofit with three full-time employees. Encode has been critical of OpenAI, joining a coalition of other organizations rallying against the start-up’s attempt to restructure from nonprofit governance into a more traditional for-profit business, which they fear would come at the expense of AI safety.
In December, Encode filed a brief in support of part of a lawsuit by Elon Musk, in which he asked the court to block OpenAI’s restructuring (his request was denied). The subpoena sought documents and communications that would show whether Encode had received funding from or otherwise coordinated with Musk, which Calvin denied. But as with the requests sent to the Raine family, this one asked Encode to produce information about far-flung topics, including documents that Encode might have had about potential changes to OpenAI’s structure and a major California AI regulation that Encode co-sponsored.
Over the past several months, OpenAI has subpoenaed at least seven nonprofit organizations in relation to Musk’s lawsuit, typically asking for any ties to Musk in addition to a broader set of queries. The other six have not submitted briefs in the Musk litigation. Beyond the encumbrance—paying lawyers is expensive, and producing documents is very time-consuming—some of the targeted groups have said the subpoenas have already had a punishing effect. Tyler Johnston, the founder and one of two employees at the Midas Project, a small AI-industry watchdog, told me he has been trying to get an insurance policy that would protect Midas in the event that it’s sued over media it publishes—a standard practice—but every insurer has turned him down. Multiple insurance companies pointed to the OpenAI subpoena as the reason, according to Johnston. Being subpoenaed “makes people less likely to want to talk with you during a really critical period,” Calvin said—the late stages of getting that AI regulation passed—“and does create just some sense of, ‘Oh, maybe you have done something wrong.’”
In response to an inquiry about its subpoenas related to the Musk litigation, an OpenAI spokesperson pointed me to a series of social-media posts by Jason Kwon, the firm’s chief strategy officer. Kwon wrote that the subpoenas were a standard part of the legal process, and he’s right. “To target nonprofits is really oppressive, but I can’t say that it’s so unusual,” David Zarfes, a University of Chicago law professor who is not involved with the litigation between OpenAI and Musk, told me. Indeed, “broad” and even “aggressive” discovery requests are advised by law firms that represent corporations.
Kwon also wrote that OpenAI had “transparency questions” about the funding and control of several organizations that launched or joined campaigns critical of OpenAI shortly after Musk sued. It is true that Musk is an external adviser and has donated to at least one of the subpoenaed groups, the Future of Life Institute, and FLI has itself given money to Encode. But FLI has not received any funding from Musk since 2021, according to a spokesperson. Some of the subpoenaed nonprofits, including FLI, Ekō, and Legal Advocates for Safe Science and Technology, have also been publicly critical of Musk and xAI for, among other things, neglecting or abandoning their commitments to AI safety.
Whatever the motivations, this legal strategy represents the new normal for OpenAI: an outwardly aggressive approach. OpenAI’s determination to shift from the nonprofit model was apparently motivated in part by the desire to fundraise. The Japanese investment group SoftBank, for instance, had conditioned $22.5 billion on OpenAI making such a change. (OpenAI completed its transition to a more traditional for-profit model last week. The actual structure is a bit more complicated than it initially seemed, and a nonprofit board still technically retains control of the business side. But nothing about OpenAI’s recent actions or the board’s makeup—Altman is himself a member—suggests any changes to the company’s commercial ambitions.)
And over the past year, the company has morphed into today’s version of the famous 1904 political cartoon depicting Standard Oil as an octopus wrapping its tentacles around the globe. OpenAI has launched or revealed plans for a social-media app, a web browser, shopping inside ChatGPT, a personal device. There is the commercial showing ChatGPT suggesting a recipe for a date night; Altman’s appearances on Theo Von’s and Tucker Carlson’s podcasts; all of the lobbying documents and influence OpenAI appears to have had on Donald Trump’s AI policy. Building artificial general intelligence that “benefits all of humanity”—the company’s original mission—seems less the focus than the inverse: shaping human civilization and the planet to the benefit of building AGI.

The OpenAI of today resembles Meta or Google far more than a research lab or nonprofit. In a recent post on X, Altman wrote that the “first part” of OpenAI consisted of developing very powerful AI models, what “i believe is the most important scientific work of this generation.” Meanwhile, “this current part” of OpenAI’s evolution is about trying to “make a dent in the universe”—which largely consists, it would seem, of getting his products into the world. First was research; now comes business.
