OpenAI’s strategy chief says compute is not the real edge for AI companies — 3 other things are the key
- Compute isn't the most important edge for a lab, said OpenAI's chief strategy officer.
- Scarcity, bet selection, and organizational structure matter more, said Jason Kwon.
- "It's how you make use of that resource and apply it to various bets," he added.
Graphics processing units aren't what separate winning labs from the rest, said OpenAI's chief strategy officer.
Jason Kwon said on an episode of the "Auren Hoffman" podcast published Tuesday that while compute is crucial for the AI industry as a whole, it isn't necessarily the most important factor at the organizational level.
“If you just reduce compute to capital, you know, it’s just money, and then you buy the physical infra,” Kwon said. “We don’t necessarily assume in lots of other industries or even in technology industries that if you’re just the most capitalized company or organization, that you’re automatically going to win.”
“It’s how you make use of that resource and apply it to various bets,” he added.
Kwon said that three things matter more: scarcity, bet selection, and organizational structure.
Scarcity can sometimes create innovation, forcing sharper decisions about how to use limited resources, Kwon said.
Bet selection — what kinds of research directions to pursue, when to double down, and when to pivot — is also what gives a lab its edge, he added.
Kwon said organizations need to have the right capacity or structure to make and sustain those bets well, as well as the “right taste and selection criteria.”
Kwon and OpenAI did not respond to a request for comment from Business Insider.
At the country or industry level, the most important thing is “probably compute,” Kwon said.
Compute is needed for “the breadth and diversity of experiments that you can run on research,” he added.
OpenAI has been vocal about its insatiable demand for computing power. Its CEO Sam Altman said in a post on X on Monday that the company is testing new features by throwing “a lot of compute” at them.
“We also want to learn what’s possible when we throw a lot of compute, at today’s model costs, at interesting new ideas,” he wrote.
OpenAI’s chief product officer, Kevin Weil, said on an episode of the “Moonshot” podcast published last month that “every time we get more GPUs, they immediately get used.”
Altman said in July that the company will bring on more than 1 million GPUs by the end of the year. For comparison, Elon Musk's xAI disclosed that it used a supercluster of over 200,000 GPUs called Colossus to help train Grok 4.