
Chiplets Do Not ‘Reinstate’ Moore’s Law


Ever since chiplets became a topic of discussion in the semiconductor industry, there's been something of a fight over how to talk about them. It's not unusual to see articles claiming that chiplets represent some kind of new advance that will allow us to return to an era of idealized scaling and higher performance with each generation.

There are two problems with this framing. First, while it’s not exactly wrong, it’s too simplistic and obscures some important details in the relationship between chiplets and Moore’s Law. Second, casting chiplets strictly in terms of Moore’s Law ignores some of the most exciting ideas for how we should use them in the future.

Chiplets Reverse a Long-Standing Trend Out of Necessity

The history of computing is the history of function integration. The very name integrated circuit recalls the long history of improving computer performance by building circuit components closer together. FPUs, CPU caches, memory controllers, GPUs, PCIe lanes, and I/O controllers are just some of the once-separate components that are now commonly integrated on-die.

Chiplets fundamentally reverse this trend by breaking once-monolithic chips into separate functional blocks based on how amenable those blocks are to further scaling. In AMD's case, I/O functions and the chip's DRAM channels are built on a 14nm die from GlobalFoundries (using 12nm design rules), while the chiplets containing the CPU cores and the L3 cache were scaled down to TSMC's 7nm node.

Prior to 7nm, we didn’t need chiplets because it was still more valuable to keep the entire chip unified than to break it into pieces and deal with the higher latency and power costs.


Epyc’s I/O die, as shown at AMD’s New Horizon event.

Do chiplets improve scaling by virtue of focusing that effort where it’s needed most? Yes.

Is it an extra step that we didn’t previously need to take? Yes.

Chiplets are both a demonstration of how good engineers are at finding new ways to improve performance and a demonstration of how continuing to improve performance requires compromises that weren't previously necessary. Even if they allow companies to accelerate density improvements, they're still only applying those improvements to part of what has typically been considered a CPU.

Also, keep in mind that endlessly increasing transistor density is of limited effectiveness without corresponding decreases in power consumption. Higher transistor densities also inevitably mean a greater chance of a performance-limiting hot spot on the die.

Chiplets: Beyond Moore’s Law

The most interesting feature of chiplets, in my own opinion, has nothing to do with their ability to drive future density scaling. I'm very curious to see whether firms will deploy chiplets made from different types of semiconductors within the same CPU. The integration of different materials, like III-V semiconductors, could allow chiplet-to-chiplet communication to be handled via optical interconnects in future designs, or allow a conventional chiplet with a set of standard CPU cores to be paired with, say, a spintronics-based chip built on gallium nitride.

We don't use silicon because it's the highest-performing transistor material. We use silicon because it's affordable, easy to work with, and free of any enormous flaws that would limit its usefulness across a wide range of applications. Probably the best feature of chiplets is the way they could allow a company like Intel or AMD to take a smaller risk on adopting a new semiconductor material without betting the entire farm in the process.

Imagine a scenario where Intel or AMD wanted to introduce a chiplet-based CPU with four ultra-high-performance cores built with something like InGaAs (indium gallium arsenide) and 16 cores based on improved-but-conventional silicon. If the InGaAs project fails, the work done on the rest of the chip isn't wasted, and the company isn't stuck starting from scratch on an entire CPU design.

The idea of optimizing chiplet design for different types of materials and use-cases within the same SoC is a logical extension of the trend towards specialization that created chiplets themselves. Intel has even discussed using III-V semiconductors like InGaAs before, though not since ~2015, as far as I know.

The most exciting thing about chiplets, in my opinion, isn't that they offer a way to keep packing transistors. It's that they may give companies more latitude to experiment with new materials and engineering processes that could accelerate performance or improve power efficiency, without requiring them to deploy those technologies across an entire SoC simultaneously. Chiplets are just one example of how companies are rethinking the traditional method of building products with an eye towards improving performance through something other than smaller manufacturing nodes. The idea of getting rid of PC motherboards, or of using wafer-scale processing to build super-high-performance processors, represents a different application of the same concept: radically changing our preconceived notions of what a system looks like in ways that aren't directly tied to Moore's Law.

Now Read:

  • Chiplets Are Both Solution to and Symptom of a Larger Problem
  • Chiplets Are the Future, but They Won’t Replace Moore’s Law
  • TSMC Starts Development on 2nm Process Node, but What Technologies Will It Use?
