Silicon Valley is going all in on ‘superintelligent’ AI, and there’s plenty of hype
Forget artificial intelligence. Heck, forget artificial general intelligence. Those are passé.
Now Silicon Valley is looking toward superintelligence. OpenAI (OPAI.PVT), Microsoft (MSFT), Anthropic (ANTH.PVT), Meta (META), and a slew of other AI companies are betting on a future where wildly intelligent AI can perform tasks better than most people.
Anthropic CEO Dario Amodei has written about AI that’s smarter than a Nobel Prize winner across a number of fields and can “prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.”
Meta CEO Mark Zuckerberg has spoken about developing personal superintelligence that “has the potential to begin a new era of personal empowerment where people will have greater agency to improve the world in the directions they choose.”
And just last week, Microsoft AI CEO Mustafa Suleyman announced the formation of the group’s MAI Superintelligence team.
“This year it feels like everyone in AI is talking about the dawn of superintelligence,” Suleyman wrote. “Such a system will have an open-ended ability of ‘learning to learn,’ the ultimate meta skill.”
But superintelligence is a rather vague concept, and it’s not exactly clear what it will mean for humanity or if and when it’ll become a reality.
“I think all these terms are a little bit ill-defined, and people mean different things when they talk about them,” explained John Thickstun, assistant professor of computer science at Cornell University.
“I think this technology has evolved so fast and in such surprising ways that it’s a bit hubristic to be really confident about these things,” he added.
And there are still doubts that the technology is as close at hand as companies suggest.
The hype around AI superintelligence kicked off thanks to the explosion in interest around ChatGPT, coupled with the development of reasoning AI models that can respond to complex requests by breaking a user's question down into smaller pieces.
From there, AI boosters say, the technology will progress to artificial general intelligence (AGI) and then superintelligence.
Think of AGI as your one buddy that seems to know something about everything. They’re smart, but not necessarily smarter than an expert in a field.
AI superintelligence, though, would go beyond that, promising to be smarter than even specialists in their respective fields. It's the kind of sci-fi technology you've seen in countless comics, movies, and TV shows: an AI that can create its own separate AIs and discover new technologies.
Meta’s Zuckerberg claims that his company has already seen glimpses of its AI systems improving themselves and that developing superintelligence is now in sight.
How long that will take, however, is up for debate.
OpenAI CEO Sam Altman said that by 2026, we’ll see AI that can figure out novel insights, and by the following year, we’ll have robots capable of performing tasks in the real world. Anthropic’s Amodei has said AGI could be here by 2026, though he admits he could be wrong.
But not everyone agrees that we're on our way to AI superintelligence, or even agrees on what it is.
James Landay, co-founder and co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), said the hype around superintelligence, and even AGI, is just that. What’s more, AI boosters, he said, keep moving the goalpost as to what defines AGI.
“Originally, [AGI] was [defined as] being able to do any kind of economically viable task at a kind of human-level quality,” Landay explained.
“Then eventually they changed it to be only information tasks, because they knew robotics wasn’t going to work. And then later they kind of weaken it to something like most. And then, you know, it becomes a question of, well, what does most really mean?” he said.
OpenAI CEO Sam Altman attends an event to pitch AI for businesses in Tokyo, Japan, on Feb. 3, 2025. (Reuters/Kim Kyung-Hoon/File Photo)
According to Thickstun, AI systems that can beat world champions at games, such as Google DeepMind's AlphaGo, which defeated Go champion Lee Sedol, could be considered superintelligent, though only at very specific tasks.
“So we have AI for games like chess or Go that are far better than any human at playing these games. But maybe what we don’t have yet is a superintelligent general intelligence,” he explained.
Like Thickstun, Landay said the vagueness of AI definitions makes it difficult to suss out exactly what AGI and superintelligence are. Computers have been better than humans at performing calculations for decades, and we don't consider them superintelligent.
“I think this industry really pumps up their evaluations, which is required for them to get these huge amounts of investment, to build these data centers, to get these VC investment, private equity investments, government investment, and they have to keep selling something,” Landay said.
“I don’t think it’s actually supported at all by what AI can do right now,” he added, “or what scientific research would lead you to believe AI can do in two years, five years, or 10 years.”
Email Daniel Howley at dhowley@yahoofinance.com. Follow him on Twitter at @DanielHowley.