No, ChatGPT hasn’t added a ban on giving legal and health advice
OpenAI says ChatGPT’s behavior “remains unchanged” after reports across social media falsely claimed that new updates to its usage policy prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI’s head of health AI, writes on X that the claims are “not true.”
“ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information,” Singhal says, replying to a now-deleted post from the betting platform Kalshi that had claimed “JUST IN: ChatGPT will no longer provide health or legal advice.”
According to Singhal, the inclusion of policies surrounding legal and medical advice “is not a new change to our terms.” The new policy update on October 29th has a list of things you can’t use ChatGPT for, and one of them is “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
That remains similar to OpenAI’s previous ChatGPT usage policy, which said users shouldn’t perform activities that “may significantly impair the safety, wellbeing, or rights of others,” including “providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations.”
OpenAI previously had three separate policies, including a “universal” one, as well as ones for ChatGPT and API usage. With the new update, the company has one unified list of rules that its changelog says “reflect a universal set of policies across OpenAI products and services,” but the rules are still the same.