Microsoft AI CEO calls artificial superintelligence an ‘anti-goal’
In a recent episode of the “Silicon Valley Girl Podcast,” Mustafa Suleyman, Microsoft’s AI chief and a cofounder of DeepMind, offered a contrarian take on the race toward artificial superintelligence (ASI) that has captivated much of Silicon Valley. Suleyman characterized the pursuit of ASI as an “anti-goal,” arguing that an AI capable of reasoning far beyond human capabilities poses significant risks. He said the vision of superintelligence is not a positive one, pointing to the difficulty of aligning such powerful systems with human values and of containing them. Instead, Suleyman advocates building a “humanist superintelligence” that prioritizes human interests and ethical considerations in its design.
Suleyman’s remarks come at a time when industry leaders, including OpenAI CEO Sam Altman and Google DeepMind cofounder Demis Hassabis, are optimistic about the imminent arrival of artificial general intelligence (AGI) and beyond. Altman has previously said that OpenAI’s mission centers on achieving AGI, with aspirations for superintelligence to follow soon after; he envisions such advances dramatically accelerating scientific discovery and innovation, potentially leading to unprecedented abundance and prosperity. Not all experts share that optimism: Meta’s chief AI scientist, Yann LeCun, has suggested AGI could still be decades away, citing the difficulty of scaling AI capabilities effectively.
Suleyman’s caution reflects a growing discourse within the tech community over the ethical implications of advanced AI. He believes attributing consciousness or moral status to AI is misguided, asserting that these systems do not experience suffering or emotions; they merely simulate high-quality interactions. As the debate intensifies, Suleyman’s push for a more human-centered approach to AI development serves as a reminder to weigh the societal impacts and ethical ramifications of technologies that could fundamentally alter our future.
https://www.youtube.com/watch?v=qbrcZB-1hck
Microsoft’s AI chief, Mustafa Suleyman, is pushing back on Silicon Valley’s AGI race, calling superintelligence an “anti-goal.”
Bonnie Biess/Getty Images for Lesbians Who Tech & Allies
Microsoft’s AI CEO said artificial superintelligence should be an “anti-goal.”
Mustafa Suleyman said his team is trying to build a “humanist superintelligence” instead.
His comments come as industry leaders focus on building superintelligence fast.
While much of Silicon Valley races to build godlike AI, Microsoft’s AI chief is trying to pump the brakes.
Mustafa Suleyman said on an episode of the “Silicon Valley Girl Podcast” published Saturday that the idea of artificial superintelligence shouldn’t just be avoided. It should be considered an “anti-goal.”
Artificial superintelligence — AI that can reason far beyond human capability — “doesn’t feel like a positive vision of the future,” said Suleyman.
“It would be very hard to contain something like that or align it to our values,” he added.
Suleyman, who cofounded DeepMind before moving to Microsoft, said his team is “trying to build a humanist superintelligence” — one that supports human interests.
Suleyman also said that granting AI anything resembling consciousness or moral status is a mistake.
“These things don’t suffer. They don’t feel pain,” Suleyman said. “They’re just simulating high-quality conversation.”
The debate on superintelligence
Suleyman’s comments come as other industry leaders talk up plans to build artificial superintelligence, with some predicting it could arrive this decade.
OpenAI CEO Sam Altman has repeatedly described artificial general intelligence — AI that can reason like a human — as the company’s core mission. Altman said earlier this year that OpenAI is already looking beyond AGI to superintelligence.
“Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” Altman said in January.
Altman also said in an interview in September that he’d be very surprised if superintelligence doesn’t emerge by 2030.
Google DeepMind’s cofounder, Demis Hassabis, offered a similar timeline. He said in April that AGI could be achieved “in the next five to 10 years.”
“We’ll have a system that really understands everything around you in very nuanced and deep ways and kind of embedded in your everyday life,” he said.
Other leaders have urged skepticism. Meta’s chief AI scientist, Yann LeCun, said we may still be “decades” away from achieving AGI.
“Most interesting problems scale extremely badly,” LeCun said at the National University of Singapore in April. “You cannot just assume that more data and more compute means smarter AI.”
Read the original article on Business Insider.
Eric is a seasoned journalist covering business news.