AMD’s New Chiplet Network to Artificially Continue Moore’s Law

“Chiplets” to Utilize Interconnected Integrated Circuits on Larger Silicon Wafers.

Chips are normally separate, individual components attached to printed circuit boards, but researchers at AMD and elsewhere are trying to ward off the slowing, or possible end, of Moore’s Law by experimenting with a concept called “chiplets”.

This illustration shows a potential future system with GPUs and a CPU chiplet attached to the same network-enabled silicon, helping improve data transfer rates and energy efficiency. (Image courtesy of AMD.)

The idea originated with Dr. Puneet Gupta and colleagues at UCLA, who propose replacing the printed circuit board altogether in favor of a portion of silicon wafer called “silicon interconnect fabric.” The structure of this fabric allows engineers to use interconnects, like those used inside integrated circuits (ICs), to connect bare silicon chips placed within 100 micrometers of one another. Gupta and his colleagues argue that splitting a system-on-a-chip (SoC) into small chiplets, each performing the function of one of its cores, would reduce latency and increase efficiency.
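
To make the spacing constraint concrete, here is a minimal Python sketch, with made-up die names, sizes and positions (nothing from UCLA’s actual tooling), of bare dies laid out along one axis of such a fabric, checking which pairs sit within the roughly 100-micrometer gap quoted above.

```python
# Toy sketch (not UCLA's actual tooling): model bare dies placed on a
# silicon interconnect fabric and find which pairs sit close enough --
# within the ~100-micrometer spacing mentioned above -- to be joined by
# the same kind of fine interconnects used inside integrated circuits.
from dataclasses import dataclass
from itertools import combinations

MAX_GAP_UM = 100.0  # spacing limit cited for chip-to-chip links on the fabric

@dataclass
class Chiplet:
    name: str
    x_um: float      # position of the die's left edge on the fabric, in micrometers
    width_um: float

    @property
    def right_edge(self) -> float:
        return self.x_um + self.width_um

def edge_gap(a: Chiplet, b: Chiplet) -> float:
    """Gap between two dies laid out along one axis (toy 1-D placement)."""
    left, right = sorted((a, b), key=lambda c: c.x_um)
    return max(0.0, right.x_um - left.right_edge)

# Hypothetical placement: a CPU chiplet flanked by two GPU chiplets.
dies = [
    Chiplet("CPU", x_um=0, width_um=8000),
    Chiplet("GPU0", x_um=8080, width_um=6000),   # 80 um from CPU -> linkable
    Chiplet("GPU1", x_um=14500, width_um=6000),  # 420 um from GPU0 -> too far
]

for a, b in combinations(dies, 2):
    gap = edge_gap(a, b)
    ok = gap <= MAX_GAP_UM
    print(f"{a.name} <-> {b.name}: gap {gap:.0f} um, direct link {'OK' if ok else 'not possible'}")
```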

Making SoCs with extremely tight integration is also more expensive than producing smaller chips. Silicon has a further advantage over a printed circuit board in that it conducts heat much better. Chip packages and printed circuit boards are known in the industry to be inefficient primarily because they are poor conductors of heat, which puts a ceiling on how much power can be sent through them and, in turn, increases the energy needed to move bits between chips.
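
As a rough illustration of the thermal point only, the back-of-envelope sketch below plugs commonly cited textbook conductivity values, which are assumptions rather than figures from AMD or UCLA, into the one-dimensional conduction formula q = k·A·ΔT/t to compare a silicon slab with an FR-4 circuit-board slab of the same geometry.

```python
# Back-of-envelope sketch using typical textbook values (not figures from
# AMD or UCLA): compare how much heat a slab of silicon conducts versus an
# FR-4 printed-circuit-board slab of the same geometry, via q = k * A * dT / t.
K_SILICON = 150.0   # W/(m*K), approximate room-temperature value for silicon
K_FR4     = 0.3     # W/(m*K), approximate through-plane value for FR-4

area_m2     = 1e-4   # 1 cm^2 slab, chosen only for illustration
thickness_m = 1e-3   # 1 mm thick
delta_t_k   = 10.0   # 10 K temperature drop across the slab

def conducted_watts(k: float) -> float:
    """Steady-state heat conducted through the slab for conductivity k."""
    return k * area_m2 * delta_t_k / thickness_m

print(f"Silicon slab: {conducted_watts(K_SILICON):.1f} W")
print(f"FR-4 slab:    {conducted_watts(K_FR4):.2f} W")
print(f"Ratio: ~{K_SILICON / K_FR4:.0f}x more heat through silicon")
```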

AMD’s researchers hope that chiplets will allow data to move faster and more smoothly between system components. By chopping an SoC into chiplets, each performing the function of one of the SoC’s cores, and porting them onto an active silicon interposer, engineers can integrate components more efficiently.

There’s a major flaw that still needs to be worked out. Every chiplet carries its own on-chip routing system, and each chiplet is connected to the interposer’s network. A deadlock can occur when the interposer’s network routes data in such a way that different messages compete for the same resources. The problem could be avoided by designing the chiplets themselves around the interposer’s network, but that fix is problematic in its own right. Though it could be done, the effort would diminish the advantages of using chiplets in the first place: the chiplets would become troublesome for different teams to design and wouldn’t be as interchangeable, which is what makes them useful for forming new systems efficiently.
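
The toy Python sketch below, with hypothetical message and buffer names and no relation to AMD’s real interconnect, shows the shape of the problem: each in-flight message holds a router buffer while waiting for a buffer another message occupies, and a cycle in that wait-for relation is exactly a deadlock.

```python
# Minimal toy illustration (not AMD's actual interconnect): deadlock arises
# when in-flight messages hold router buffers while waiting for buffers held
# by other messages, forming a cycle of waits. The "wait-for" relation here
# is a tiny hypothetical graph, and we simply look for that cycle.

# msg_a -> msg_b means "msg_a is waiting for the buffer msg_b occupies".
wait_for = {
    "msg_a": "msg_b",
    "msg_b": "msg_c",
    "msg_c": "msg_a",   # ...which closes the loop: circular wait = deadlock
    "msg_d": None,      # msg_d is not waiting on anyone
}

def deadlocked(start: str) -> bool:
    """Follow the wait-for chain; revisiting a message means a deadlock cycle."""
    seen = set()
    current = start
    while current is not None:
        if current in seen:
            return True
        seen.add(current)
        current = wait_for.get(current)
    return False

for msg in wait_for:
    print(msg, "is deadlocked" if deadlocked(msg) else "can make progress")
```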

To address this, the research team at AMD created a set of rules to keep active interposers from falling into deadlock. These rules act like administrators who oversee data and direct where it enters and exits the chip. In effect, they present everything on the interposer (the memory, all the other logic chiplets, and even the interposer’s network itself) to each chiplet as just one node of its on-chip network. This allows separate teams of engineers to design interchangeable chiplets without concerning themselves with how the networks on other chiplets function.
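
A minimal sketch of that abstraction, using invented class and method names rather than anything from AMD’s design, might look like the following: every chiplet hands its off-chip traffic to a single boundary object that stands in for the whole interposer, so the chiplet’s own network only ever sees one extra node.

```python
# Sketch of the "interposer as one node" idea with hypothetical names: each
# chiplet routes any off-chip traffic to a single boundary object, and that
# object hides how the interposer, the memory, and the other chiplets route
# traffic internally.

class Interposer:
    """Stands in for everything outside a chiplet: memory, other chiplets,
    and the interposer's own network. From a chiplet's point of view it is
    a single destination node."""
    def __init__(self):
        self.chiplets = {}

    def attach(self, chiplet: "Chiplet") -> None:
        self.chiplets[chiplet.name] = chiplet

    def deliver(self, dest_chiplet: str, dest_core: str, payload: str) -> None:
        # Internal routing details are hidden from the senders.
        self.chiplets[dest_chiplet].receive(dest_core, payload)

class Chiplet:
    def __init__(self, name: str, cores: list[str], interposer: Interposer):
        self.name = name
        self.cores = set(cores)
        self.boundary = interposer   # the "one node" for all off-chip traffic
        interposer.attach(self)

    def send(self, dest_chiplet: str, dest_core: str, payload: str) -> None:
        if dest_chiplet == self.name:
            self.receive(dest_core, payload)  # traffic that stays on-chip
        else:
            # Everything else exits through the single boundary node.
            self.boundary.deliver(dest_chiplet, dest_core, payload)

    def receive(self, dest_core: str, payload: str) -> None:
        print(f"{self.name}/{dest_core} received: {payload}")

# Hypothetical system: a CPU chiplet and a GPU chiplet designed by separate
# teams, each aware only of its own cores plus the single interposer node.
fabric = Interposer()
cpu = Chiplet("cpu", ["core0", "core1"], fabric)
gpu = Chiplet("gpu", ["sm0", "sm1"], fabric)
cpu.send("gpu", "sm0", "kernel arguments")
gpu.send("cpu", "core1", "results")
```

Because the CPU and GPU chiplets in this sketch only ever talk to the interposer object, neither needs to know how the other routes traffic internally, which is the kind of interchangeability the AMD team is after.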