OpenAI Co-Founder: Even $110 Billion Can't Meet Compute Demand, as Pre-Training Shifts to Joint Optimization with Inference Cost

According to monitoring by 1M AI News, OpenAI co-founder Greg Brockman reflected in an interview on the leap in AI programming capability seen by December 2025. He has long used a personal benchmark prompt to measure progress: asking the AI to build a website that took him months to complete when he was first learning to program. For most of 2025, the task required multiple prompts and about four hours; by December, a single prompt produced a high-quality result. The new model, he said, took AI from "able to complete about 20% of tasks" to "about 80%," a shift that forces everyone to "reorganize workflows around AI."

On the allocation of the $110 billion in funding, Brockman likened buying computing power to hiring salespeople: as long as the product has a scalable sales channel, each additional salesperson generates more revenue. Compute, in other words, is a revenue center, not a cost center. He recalled a conversation with his team on the eve of ChatGPT's release: "They asked, 'How much computing power should we buy?' I said, 'All of it.' They replied, 'No, no, no, seriously, how much should we buy?' I said, 'No matter how we build, we won't keep up with demand.'" That judgment still holds today, he said, and compute procurement now needs to be locked in 18 to 24 months in advance.
On how this computing power will be used, Brockman revealed that OpenAI is no longer purely pursuing the largest possible pre-training runs, but is jointly optimizing pre-trained capability and inference cost: "You don't necessarily have to make it as large as possible, because you also need to consider the numerous downstream inference use cases; what you really want is the optimal trade-off of intelligence against cost." He firmly rejected the notion that pre-training no longer matters, however: the smarter the foundation model, the more efficient the subsequent reinforcement learning and inference stages become, and there remains an "absolute need" for Nvidia GPUs to support large-scale centralized training.
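The intelligence-versus-cost trade-off Brockman describes can be made concrete with a toy model-selection sketch. This is purely illustrative: the candidate names, capability scores, and dollar figures below are made-up assumptions, not OpenAI data or methodology. The point it demonstrates is only the qualitative one from the quote: once downstream inference volume is counted in the budget, the biggest model is not automatically the best choice.

```python
# Toy sketch of jointly weighing pre-training capability against inference
# cost. All names and numbers are hypothetical, for illustration only.

candidates = [
    # (name, capability score, pre-training cost in $M, inference $ per 1M tokens)
    ("small",  0.60,   50, 0.2),
    ("medium", 0.80,  400, 1.0),
    ("large",  0.90, 3000, 5.0),
]

def total_cost_m(pretrain_m, infer_per_mtok, expected_mtok):
    """Total cost in $M: one-off pre-training plus lifetime inference spend."""
    return pretrain_m + infer_per_mtok * expected_mtok / 1e6

def best_under_budget(candidates, expected_mtok, budget_m):
    """Most capable model whose combined training + inference cost fits the budget."""
    feasible = [
        c for c in candidates
        if total_cost_m(c[2], c[3], expected_mtok) <= budget_m
    ]
    return max(feasible, key=lambda c: c[1]) if feasible else None

# With modest downstream usage, the budget covers the largest model...
print(best_under_budget(candidates, expected_mtok=1e6, budget_m=5000)[0])  # large
# ...but at 1000x the inference volume, the same budget favors a smaller one.
print(best_under_budget(candidates, expected_mtok=1e9, budget_m=5000)[0])  # medium
```

The design point is that the objective is not "maximum capability" but "maximum capability subject to total cost," so the expected inference load shifts which pre-training scale is optimal.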
