Engineers Design New Computer Memory Technique to Boost App Speeds

Dense Footprint Cache is a novel method for storing frequently accessed data.

A generic pair of 32 MB dynamic random access memory (DRAM) modules.

Computer and electrical engineers have developed a technique that allows computer processors to retrieve data more efficiently. Their approach was shown to improve application speeds by 9.5 percent, while reducing energy consumption by an average of 4.3 percent.

Dense Footprint Cache

The technique developed by the engineers is a new memory design called Dense Footprint Cache (DFC). It improves upon current designs used in dynamic random access memory (DRAM) caches, which are used to store frequently accessed data in close proximity to the computer processor. This means the processor doesn’t have to waste time retrieving the data from the off-chip main memory.

The cache data is typically organized into large blocks, called Mblocks (major blocks), to help the processor find the data it needs. However, Mblocks often contain many bytes of data the processor doesn't actually need, so retrieving them in full wastes time and energy. Current techniques that address this problem leave "holes" in the Mblocks: wasted space that could have held useful data.

DFC addresses both of these concerns. It relies on useful block prediction to learn which data the processor actually needs from each Mblock, which allows the cache to do two things: first, compress the Mblock so that only the relevant data is retrieved; and second, free up space in the cache for other data the processor is likely to need.
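To get a feel for the idea, here is a minimal Python sketch of footprint-style prediction. It assumes an Mblock made of 32 smaller cache blocks and tracks which of those blocks were actually touched; the class name, sizes, and access pattern are illustrative, not taken from the paper.

```python
# Illustrative sketch: learn which blocks inside an Mblock are actually
# used, and predict that same "footprint" the next time it's fetched.
BLOCKS_PER_MBLOCK = 32  # hypothetical: 2 KB Mblock of 64 B cache blocks

class FootprintPredictor:
    """Records which blocks of each Mblock the processor touched, and
    predicts the same footprint on the Mblock's next fetch."""

    def __init__(self):
        self.history = {}  # Mblock tag -> bitmask of previously used blocks

    def record_access(self, tag, block_index):
        # Mark one block inside the Mblock as used.
        self.history[tag] = self.history.get(tag, 0) | (1 << block_index)

    def predicted_footprint(self, tag):
        # On a later fetch, bring in only the blocks seen before;
        # with no history, fall back to fetching the whole Mblock.
        return self.history.get(tag, (1 << BLOCKS_PER_MBLOCK) - 1)

predictor = FootprintPredictor()
for block in (0, 1, 7):            # the processor touched 3 of 32 blocks
    predictor.record_access("mblock_A", block)

mask = predictor.predicted_footprint("mblock_A")
useful = bin(mask).count("1")
print(f"fetch {useful} of {BLOCKS_PER_MBLOCK} blocks")  # fetch 3 of 32 blocks
```

Fetching only the predicted-useful blocks is what lets the cache skip the irrelevant bytes and reuse the freed space for other data.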

An illustration of the Dense Footprint Cache. See paper for full details. (Image courtesy of Shin et al.)

Another advantage of DFC is that it reduces the rate of last-level cache (LLC) misses, which occur when the processor looks for data in the cache that isn't actually there. These misses force the processor to fetch the data from main memory instead, reducing efficiency. DFC cuts the LLC miss rate by approximately 43 percent.
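A back-of-the-envelope average memory access time (AMAT) calculation shows why fewer misses matter. The latencies and baseline miss rate below are typical illustrative values, not figures from the paper; only the 43 percent miss reduction comes from the article.

```python
# Rough illustration of how cutting LLC misses lowers average access time.
# AMAT = hit_time + miss_rate * miss_penalty. The 20 ns cache hit, 100 ns
# main-memory penalty, and 10% baseline miss rate are assumed values.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

baseline = amat(hit_time_ns=20, miss_rate=0.10, miss_penalty_ns=100)
with_dfc = amat(hit_time_ns=20, miss_rate=0.10 * (1 - 0.43), miss_penalty_ns=100)

print(f"baseline AMAT: {baseline:.1f} ns")   # baseline AMAT: 30.0 ns
print(f"with DFC AMAT: {with_dfc:.1f} ns")   # with DFC AMAT: 25.7 ns
```

Even with these made-up latencies, trimming misses by 43 percent shaves a meaningful slice off every average memory access, which is consistent with the single-digit application speedups the engineers report.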

The engineers demonstrated DFC's results through simulations of Big Data applications. They also proposed solutions to the design challenges that come with the new approach, such as Mblock placement and replacement policies. However, since it's hard to say when or if the technique will find its way into your next PC, you'll just have to rely on your own speed to win that next game of Rocket League.

You can read the full paper to learn more about the team’s work with DFC.

For more advancements in computer memory, read about a new material that can switch between two crystal states.

Written by

Michael Alba

Michael is a senior editor at engineering.com. He covers computer hardware, design software, electronics, and more. Michael holds a degree in Engineering Physics from the University of Alberta.