Nvidia’s absence from an Intel-led industry effort to develop next-gen chips built from a more varied mix of cores is raising questions about the GPU maker’s chiplet strategy, particularly how graphics will be integrated into future PC processors.
Essentially, Intel and its friends this week launched a collaborative effort to foster and promote a common-language interconnect between dies of CPU and GPU cores, AI engines, hardware accelerators, and other blocks.
This technology has been dubbed Universal Chiplet Interconnect Express, or UCIe, and yes, it has parallels with PCIe.
It’s hoped this interconnect will allow these dies to seamlessly communicate with each other inside a single package. The idea being that you can – for instance – design your own custom die with some special acceleration on it, and then drop it into a package alongside compatible dies of someone else’s CPU cores and acceleration units, and so on, and have the whole thing manufactured as a single processor. With a common die-to-die interconnect, the difficulty of engineering this is reduced.
These dies are also known as chiplets or compute tiles, and can be laid out flat (2D) or stacked (3D) as needed. Arranging a processor as a set of tiles in this way may bring benefits over integrating it all in one die, as a traditional system-on-chip, in terms of power, bandwidth, space, and more.
This approach suits Intel, for one, because it is keen to fab these kinds of chips for customers. AMD has, by the way, used chiplets for years in its Zen families of processors: it has TSMC manufacture AMD-designed dies and place them in single packages.
The UCIe working group is a who’s who of chip and hardware companies as backers, including AMD, Arm, Google, Meta, Microsoft, Qualcomm, Samsung, and TSMC. Apple and Nvidia were the notable names missing.
Intel is spending billions of dollars to establish factories that use 3D packaging to make chips, and it is aligning its CPU, GPU, and accelerator releases with an aggressive manufacturing roadmap that calls for four new process nodes by 2025.
To keep its factories busy, Intel is adopting a “multi-ISA” strategy, opening up its assembly lines to make components powered by Arm and RISC-V core architectures. It is also licensing x86 cores, which can be packaged alongside dies of Arm and RISC-V compute tiles in a custom chip. UCIe will help those cores work in concert.
“The chiplet ecosystem created by UCIe is a critical step in the creation of unified standards for interoperable chiplets, which will ultimately allow for the next generation of technological innovations,” Intel said in a statement.
Nvidia isn’t backing the UCIe interconnect at its launch, and that raises questions on how it’ll develop GPU compute tiles to co-exist with x86 CPUs in future chips, especially for PCs.
Iffy about chiplets?
“Nvidia has a preference for really large monolithic die,” Kevin Krewell, principal analyst at Tirias Research, told The Register. “I think it’s because they have experience building big dies and it gives them a differentiation. Nvidia has had research projects with chiplets, but have not committed to a production strategy – to date.”
The idea of GPU compute tiles was floated by Intel last month as a way to blur the lines between integrated and discrete GPUs. Its upcoming Alchemist GPU will be integrated as a tile alongside other chiplets containing CPU cores and support circuitry in a chip code-named Meteor Lake.
“Nvidia likes to build larger GPUs that will not fit in a normal package with an x86 CPU. There’s still an issue with what is the right size CPU and the right size GPU in one package. If anything, Nvidia will integrate an Arm CPU,” Krewell said.
Intel, which recently released its first discrete GPU, is well positioned for this chiplet approach as it designs and fabs its processors. Nvidia didn’t respond to requests for comment from The Register about not being involved in UCIe, and has previously declined to comment on its compute tile strategy.
Plowing the Bluefield
Nvidia’s divergent approach to chip design is highlighted by its Bluefield chip, which links up Arm CPU cores, its homegrown GPUs, and Mellanox networking tech in one package.
The biz failed to close its mega-deal to buy Arm, though CEO Jensen Huang addressed the company’s three-chip strategy of CPUs, GPUs, and DPUs like Bluefield on a recent earnings call.
Nvidia has a 20-year architectural license from Arm, which grants Nvidia “the full breadth and flexibility of options across technologies and markets” to deliver on its three-chip strategy, Huang said.
The GPU giant is on track to launch its Arm-based Grace processor, targeting giant AI and HPC workloads, in the first half of next year, Huang said, later adding that we should expect “a lot of CPU development around the Arm architecture.”
At the same time, Huang said “whether x86 or Arm, we will use the best CPU for the job, and together with partners in the computer industry, offer the world’s best computing platform to tackle the impactful challenges of our time.”
Intel hopes collaborative efforts such as UCIe will dent Nvidia’s strong position in graphics, supercomputing, and artificial intelligence. Intel has specifically argued that Nvidia’s closed approach with platforms such as Omniverse won’t scale to emerging opportunities like the metaverse.
“They aim to eat into the ecosystem. While their closed proprietary approach may have some short-term benefits, we don’t believe a closed approach is scalable in the long run for this large an opportunity,” said Raja Koduri, vice president and general manager of the Accelerated Computing Systems and Graphics Group at Intel, during the company’s investor conference last month. ®