Technology News

Intel and Nvidia sitting in a tree, NVLink-I-N-G

But still no hero customer for Chipzilla's Foundry biz

Nvidia is set to become one of Intel’s largest shareholders after the GPU giant announced on Thursday it would invest $5 billion in the struggling chipmaker under a co-development agreement targeting PCs and datacenter infrastructure.

For Nvidia, the arrangement presents an opportunity to extend its GPU empire to the integrated graphics arena, a space historically dominated by Intel and AMD. Under the partnership, Intel will design PC processors with Nvidia’s GPU chiplets inside.

At the same time, Nvidia is clearing the way for Intel’s Xeon platform to play a bigger role in its AI infrastructure and datacenter product lines. While Xeons are found in products like Nvidia’s Blackwell-based B200 and B300 systems, the company’s biggest and most powerful rack-scale machines, like the GB300 NVL72, all use its Arm-based Grace CPUs.

With Nvidia’s NVLink interconnect tech integrated into Intel’s CPU designs, Nvidia will be able to offer NVL72-style systems based on both its in-house CPUs and Intel’s Xeons, while also addressing a larger share of the PC graphics arena.

NVLink is the key
If you’re not familiar, NVLink is a high-speed interconnect Nvidia has used for years to efficiently distribute workloads across multiple GPUs and, more recently, to tie those GPUs to its Arm-based Grace and Vera CPUs.

These interconnects are incredibly fast, achieving 1.8TB/s of bandwidth (900GB/s in each direction) per GPU. That’s about 14x the bandwidth of a PCIe 5.0 x16 slot.
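As a quick sanity check on that ratio, here’s a minimal back-of-the-envelope sketch in Python. The PCIe figure of roughly 64GB/s per direction for a Gen 5 x16 link is our own raw-link approximation (before protocol overhead), not a number from Nvidia:

    # Rough bandwidth comparison using the figures above.
    nvlink_total_gb_s = 2 * 900       # 900GB/s in each direction per GPU
    pcie5_x16_total_gb_s = 2 * 64     # ~64GB/s each direction, raw link rate

    ratio = nvlink_total_gb_s / pcie5_x16_total_gb_s
    print(f"NVLink is ~{ratio:.1f}x a PCIe 5.0 x16 slot")  # ~14.1x

Which lines up with the "about 14x" figure quoted above.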

By integrating NVLink directly into its Grace CPUs, Nvidia was able to scale beyond eight GPUs per node to a rack-scale architecture with 72, called the GB200 NVL72.

Unfortunately for Intel, the lack of NVLink connectivity meant its Xeon processors were limited to Nvidia’s smaller and less desirable air-cooled systems like the DGX B200 and B300.

The introduction of NVLink Fusion this spring saw Nvidia open the tech to third-party CPU or XPU vendors. The only caveat was that chip vendors could only use the tech for communication with Nvidia’s own GPUs or CPUs.

Qualcomm and Fujitsu were named as Nvidia’s initial CPU partners. But with Intel entering the fray, Nvidia’s customers will soon have the option of purchasing NVL72-style rack systems using Intel’s x86-based Xeon processors.

“We can now, with Intel x86 CPU, integrate it directly into the NVLink ecosystem and create these rack scale AI supercomputers,” Nvidia CEO Jensen Huang said during a press conference on Thursday.

And because Intel’s latest generation of Xeon processors utilizes modular I/O dies, integrating NVLink should be relatively straightforward compared to just a few years ago.

For Intel, the partnership is effectively a shortcut into the rack-scale computing fight, after several failed attempts to bring a competitive AI accelerator or datacenter GPU to market. Last we heard, Intel was working on a rack-scale GPU architecture of its own, but it remains to be seen whether it’ll ever make it to market.

 

What does Nvidia get out of the deal?
In addition to a roughly 4 percent share of Chipzilla, Nvidia’s partnership with Intel gives it tighter cooperation on co-designing hardware, and greater access to Intel’s customer base.

“The return on that investment is going to be fantastic, both, of course, in our own business, but also in our equity share of Intel,” Huang said, estimating the addressable market for datacenter CPUs at about $25 billion, and the laptop market at 150 million units a year.

The latter appears to be what Nvidia is really paying for. By integrating its GPUs directly into Intel’s processors at the chiplet level, Nvidia gains entry to a market that was previously closed to it. Nvidia is certainly no stranger to the PC arena, but as Huang pointed out, its offerings have largely been constrained to high-end gaming systems.

“There’s an entire segment of the market where the CPU and the GPU are integrated. It’s integrated for form factor reasons, maybe it’s for cost reasons, maybe it’s for battery life reasons, all kinds of different reasons. And that segment has been largely unaddressed by Nvidia today,” Huang said. “That segment of the market is really quite rich, and it’s really quite large, and it’s underserved today.”

Nvidia has been moving in this direction for a little while now. Some versions of the tiny GB10 Superchip, built in collaboration with MediaTek, are also expected to see wider availability at some point. So it’s not just Intel CPUs getting the Nvidia graphics treatment.

But the deal could mean bad news for Intel’s in-house graphics division. Intel has spent several years trying to expand its influence over the PC graphics market with its Arc GPUs, with limited success.

 

Still loving Arm
While Nvidia may be cozying up to Intel, that doesn’t mean its relationship with Arm is over.

“Our Arm road map is going to continue. We’re fully committed to the Arm road map. We have lots and lots of customers for Arm. We’re building the next generation of Vera,” Huang said.

Nvidia unveiled the next-gen CPU, which is set to launch alongside its Rubin family of GPUs next year, at GTC this spring. The part will feature 88 custom Arm cores with simultaneous multithreading and 1.8TB/s of NVLink-C2C connectivity.

Nvidia has a long-standing relationship with both Arm and Arm-based SoC designers. Prior to its 72-core Grace CPUs, Nvidia worked with Arm on its Tegra family of chips, which power consoles like the Nintendo Switch. And as we previously reported, Nvidia has extended similar support for its NVLink tech to Qualcomm and Fujitsu.

 

This isn’t the Intel Foundry win you’re looking for

Despite ongoing efforts to expand its domestic manufacturing capacity, Huang emphasized that Nvidia’s partnership with Intel is primarily focused on products, not manufacturing.

“We’ve always evaluated Intel’s foundry technology, and we’re going to continue to do that. But today, this announcement is squarely focused on these custom CPUs,” Huang said.

Intel, of course, still needs to find a hero customer for Foundry, but Nvidia isn’t ready to make that commitment just yet.

Nvidia will continue to manufacture its chips at TSMC fabs, whether that be in Taiwan or Arizona.

However, Intel Foundry may still benefit indirectly. While TSMC will make Nvidia’s GPU chiplets, Foundry will likely end up handling the packaging and final assembly. Huang was quick to praise these technologies, which he said are the reason the two companies will be able to bring joint products to market so quickly.
