Nvidia has overtaken Intel as the world’s most valuable chip maker, at least briefly. A jump in the GPU manufacturer’s stock price to $404 on Wednesday gave it a market capitalization of $248B, just above Intel’s $246B.
Nvidia has enjoyed very strong growth over the last few years, buoyed by the tremendous expansion of markets like AI and HPC. The company has faced virtually no competition in these spaces. AMD’s alternative relies on translating CUDA code and on adoption of its ROCm platform, but practical support for AMD hardware in real deployments remains more theoretical than actual. A recent paper dove into the challenges of deploying ROCm and found that the project’s rapid update schedule, along with AMD’s decision to retire its HCC compiler in favor of the GPU support already baked into the LLVM framework, makes long-term support more challenging than it would otherwise be. A lack of documentation is also highlighted as a major challenge with ROCm: as of the paper’s publication, there was no centralized, official source for ROCm documentation.
The performance data from the paper suggests the situation continues to favor Nvidia, with AMD’s GPUs generally slower than their Team Green counterparts. Given that AMD is effectively performing code translation, that’s not too surprising.
Intel, meanwhile, is still fighting to establish itself in these new markets. The company’s server business has performed excellently in recent years, though Wall Street hasn’t showered it with the same degree of loving attention, but its specific AI efforts have borne smaller fruit. The company bought Habana Labs last year and effectively relaunched some of its AI efforts from scratch. We’re still waiting to see what Xe brings to the table after the cancellation of Xeon Phi a few years ago.
Intel’s CPU-centric efforts have focused on integrating capabilities like AVX-512 and bfloat16 support into its CPUs; the latter debuted in top-end server CPUs this year with the launch of Cooper Lake. Cooper Lake was originally going to launch across the entire Xeon stack, which would have brought bfloat16 to Intel’s entire server family. Instead, Ice Lake will handle the lower-end server launches (sans bfloat16, for now), and Intel will introduce the capability to its 10nm CPUs with a later server launch. This implies that, at least for now, Intel sees the target market for bfloat16 as the upper end of the server space, with limited expected impact on lower-end parts.
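For readers unfamiliar with the format, bfloat16 is essentially a float32 with the low 16 mantissa bits dropped: it keeps the sign bit and the full 8-bit exponent, but only 7 mantissa bits. A minimal Python sketch illustrates the idea (the helper names here are our own, and real hardware typically rounds to nearest rather than simply truncating):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Convert a float to bfloat16 bits by truncation:
    keep sign, the 8 exponent bits, and the top 7 mantissa bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16  # drop the low 16 mantissa bits

def bf16_bits_to_f32(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# bfloat16 keeps float32's full dynamic range but only ~3 decimal digits of precision
print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159265)))  # prints 3.140625
```

Because the exponent field is unchanged, bfloat16 covers the same dynamic range as float32 (out to roughly 3.4e38) while sacrificing precision, a trade-off that suits neural-network training workloads, where range matters more than exact decimals.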
Early tests of Xeon’s boosted AI capabilities against Nvidia GPUs have suggested that while Intel CPUs are far more capable in these workloads than they used to be, Nvidia still holds the absolute performance-per-watt advantage. AMD has focused most of its efforts on the CPU side of the equation for now, while Nvidia has had the scientific side of the business largely to itself. That may change in future years, with AMD talking up its CDNA compute architecture, but it’s not surprising to see Nvidia in this position. The company’s stock has surged 68 percent since the pandemic began, as investors bet that shutdown and work-from-home orders will be good for its data center business. Intel’s stock, in contrast, is down 3 percent in 2020.
- One Standard to (Maybe) Rule Them All: Intel Debuts Thunderbolt 4
- Intel Forced to Suspend Sales to Inspur, China’s Largest AI and Server Vendor
- AMD mATX, Mini-ITX Motherboards Are Significantly More Expensive Than Intel Equivalents