Nvidia CEO readies investors for renewed Intel, AMD fight

Nvidia’s Bold Play: Embracing the CPU in the AI Revolution

Nvidia, a titan built on the specialised power of graphics processing units (GPUs) that fuel the artificial intelligence (AI) boom, is increasingly vocal about its growing affection for the more traditional central processing unit (CPU). While GPUs have dominated headlines for their prowess in AI model training, CEO Jensen Huang is signalling a significant shift, suggesting the CPU is poised for a major resurgence, particularly as AI applications move from development to deployment.

For decades, the CPU stood as the undisputed brain of any computer, a domain largely owned by giants like Intel and Advanced Micro Devices (AMD). Huang himself has often noted a dramatic reversal in computing workloads; where once 90% of processing power resided in CPUs and a mere 10% in specialised chips like his own, that ratio has flipped in recent years. However, the narrative is evolving. The CPU is now re-emerging as a compelling, and in some cases superior, option for AI companies as they transition their focus from training complex models to efficiently deploying them in real-world applications. Nvidia, naturally, intends to be at the forefront of this evolving landscape.

“We love CPUs as well as GPUs,” Huang stated during a recent earnings call, assuring analysts that Nvidia is not only prepared for the CPU’s comeback but also confident that its own data centre CPU offerings, launched in 2023, will outperform competitors. His enthusiasm was palpable at the Consumer Electronics Show in January, where he predicted an “explosion” in the use of high-performance Nvidia CPUs in data centres, even going so far as to suggest Nvidia could become one of the world’s largest CPU manufacturers.

Understanding the CPU vs. GPU Dynamic

The fundamental differences between CPUs and GPUs have dictated their respective roles in computing for years.

  • CPUs (Central Processing Units): These are the generalists of the chip world. Designed to handle a vast array of mathematical tasks with versatility and reasonable speed, they are adept at managing the diverse demands of general software applications. Their strength lies in their ability to execute complex, sequential instructions.

  • GPUs (Graphics Processing Units): In contrast, GPUs are specialists. They excel at performing a narrower range of mathematical calculations, but do so with incredible efficiency through massive parallelisation. This means they can execute thousands of simple calculations simultaneously. This parallel processing power was initially key for rendering complex graphics in video games, where millions of pixels need to be calculated many times per second. In AI, this translates to the ability to perform the large-scale matrix multiplications and additions that are fundamental to processing data like text and images.
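The contrast above can be made concrete with a rough, illustrative sketch (not tied to any particular Nvidia hardware): a plain Python loop models the sequential, one-element-at-a-time style of work a single CPU core performs, while a vectorised NumPy call stands in for the kind of bulk matrix operation a GPU would fan out across thousands of cores. Both compute the same result; the difference is how the work is scheduled.

```python
import numpy as np

def matmul_sequential(a, b):
    """Naive triple loop: one multiply-accumulate at a time,
    the way a single general-purpose core would step through it."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i][j] += a[i][p] * b[p][j]
    return np.array(out)

a = np.arange(6, dtype=float).reshape(2, 3)
b = np.arange(6, dtype=float).reshape(3, 2)

# a @ b is the vectorised matrix multiply -- the workload class that
# GPUs accelerate by running many such multiply-adds in parallel.
assert np.allclose(matmul_sequential(a, b), a @ b)
```

The point of the sketch is only the structure of the computation: every cell of the output is an independent sum of products, which is why the work parallelises so naturally onto GPU hardware, and why CPUs remain the better fit for the branching, sequential logic that surrounds it.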

The Rise of AI Agents and the CPU’s Role

The current trajectory of AI development is seeing the rise of “agents” – sophisticated programs capable of independently performing tasks such as writing code, analysing vast quantities of documents, and generating research reports. According to Ben Bajarin, an analyst at Creative Strategies, this type of computing “is happening more and more, and sometimes primarily, on the CPU.”

This shift has significant implications for hardware configurations. Nvidia’s current flagship AI server, the NVL72, is equipped with 36 of its own CPUs and 72 GPUs. Bajarin speculates that for “agentic” AI workloads, this ratio could potentially shift to one-to-one, or in some scenarios, the GPU might even be bypassed entirely in favour of CPU-centric solutions.

Nvidia’s Strategic Push into the CPU Market

Underscoring its serious commitment to the CPU space, Nvidia recently announced a significant partnership with Meta Platforms. Under this agreement, the social media giant will extensively utilise Nvidia’s Grace and Vera CPU chips on a standalone basis. This marks a notable departure from Nvidia’s current AI server architecture, where CPUs are typically paired with multiple GPUs. While Meta is not switching CPU vendors entirely, this deal highlights its strategy of diversifying its hardware suppliers. This announcement was quickly followed by AMD revealing its own substantial CPU deal with Meta, underscoring the competitive landscape.

During the earnings call, Huang elaborated on Nvidia’s distinct approach to CPU design. He explained that Nvidia has largely avoided the strategy of breaking down chips into smaller, more numerous components, a common practice among Intel and AMD. Instead, Nvidia’s CPU is engineered for high-throughput data processing and efficient sequential execution of many simple tasks, coupled with robust access to ample computer memory.

“It is designed to be focused on very high data processing capabilities,” Huang explained. “And the reason for that is because most of the computing problems that we’re interested in are data driven – artificial intelligence being one.”

Dave Altavilla, principal analyst at HotTech Vision and Analysis, interprets Nvidia’s strategy as an effort to challenge the long-held assumption that CPUs from traditional suppliers like Intel form the indispensable foundation of modern compute infrastructure. He suggests that Nvidia aims to position its CPUs as a compelling architectural option among several available choices.

Further details regarding Nvidia’s CPU roadmap and advancements are expected to be unveiled at the company’s annual developer conference, scheduled to take place in Silicon Valley next month. This event is anticipated to shed more light on Nvidia’s vision for its role in the evolving CPU landscape.
