Reports of an AI progress slowdown raised concerns about model scaling on Nvidia's earnings call. An analyst questioned whether models are plateauing and whether Nvidia's Blackwell chips could help. Huang said there are three elements to scaling and that each continues to advance.

If the foundation models driving the panicked rush toward generative AI stop improving, Nvidia will have a problem. Its whole value proposition rests on Silicon Valley's continued demand for more and more computing power. Concerns about scaling laws surfaced recently with reports that OpenAI's progress in improving its models was slowing.

But Jensen Huang isn't worried. The Nvidia CEO got the question Wednesday, on the company's third-quarter earnings call. Has progress stalled? And could the power of Nvidia's Blackwell chips start it up again? "Foundation model pre-training scaling is intact and it's continuing," Huang said.

He added that scaling isn't as narrow as many think. In the past, it may have been true that models only improved with more data and more pre-training. Now, AI can generate synthetic data and check its own answers, in a way training itself.
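To make that idea concrete, here is a minimal, hypothetical Python sketch of a "generate data, check your own answers" loop. The function names (generate_answer, check_answer, self_training_round) are illustrative stand-ins, not any lab's actual pipeline; a real system would plug in a model and a genuine verifier such as a unit test or math checker.

import random


def generate_answer(prompt: str) -> str:
    """Stand-in for a model producing a candidate answer to a prompt."""
    return f"answer to '{prompt}' (draft {random.randint(1, 100)})"


def check_answer(prompt: str, answer: str) -> bool:
    """Stand-in for a verifier (unit test, math checker, reward model)."""
    return random.random() > 0.5  # placeholder: keep roughly half the drafts


def self_training_round(prompts: list[str]) -> list[tuple[str, str]]:
    """Keep only the (prompt, answer) pairs the checker accepts,
    producing synthetic examples for the next training pass."""
    synthetic = []
    for prompt in prompts:
        answer = generate_answer(prompt)
        if check_answer(prompt, answer):
            synthetic.append((prompt, answer))
    return synthetic


if __name__ == "__main__":
    data = self_training_round(["2 + 2", "capital of France"])
    print(f"kept {len(data)} verified examples as synthetic training data")

The point of the sketch is only the shape of the loop: the model's own filtered outputs become new training data, which is what lets improvement continue beyond the supply of fresh human-written text.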

But we're running out of data that hasn't already been ingested by these models, and the impact of synthetic data on pre-training is debatable. As the AI ecosystem matures, tools for improving models are gaining importance.