A New Supercomputer on the Rise?
Kyle Maxey posted on November 19, 2014 |

Early this week Top500, the arbiter of supercomputing power rankings, released its list of the world’s top systems. Unsurprisingly, China’s Tianhe-2 (Milky Way 2) retained its place as the planet’s most powerful computer for the fourth consecutive time, setting a mark of 33.86 petaflops. What is surprising, however, is how static the upper echelon of supercomputing has become. In Top500’s most recent top 10 list every system remained in the same place, save for a single new entry, a Cray CS-Storm system that staked its claim at 10th place. Yet, while the last two years of supercomputing history have seemed calm, engineers behind the scenes are hard at work building a new generation of machines that could make today’s systems look like the slide rules of old.

Heading up the pack of this new generation of supercomputers are two government-funded systems being built by IBM. The first, called Summit, has been designated as the Oak Ridge National Lab’s new machine for simulations in nuclear reactor physics, climate modeling, astrophysics, medical imaging and much more. The second, named Sierra, will be housed at Lawrence Livermore National Laboratory with the principal duty of facilitating virtual testing on the US’s mature nuclear weapons stockpile. Given its primary focus, details concerning Sierra are a bit harder to come by.

While the two new machines, which will cost the government around $325M, are destined for different ends of the continental US, the technology behind each supercomputer will be similar.

According to IBM, the key to breaking through the deadlock atop the high-performance computing peak is a “data-centric” approach to architecture. Rather than shuttling data back and forth between storage and processor, these new systems will make extensive use of GPUs to help them crunch information much faster and in parallel with onboard CPUs. Despite the speed increases associated with this new architecture, IBM’s engineers have projected that Summit will consume only roughly 10% more energy than its predecessor Titan, a dramatic dip in what was once an unsustainable supercomputing power consumption curve.
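The core idea behind that architecture, splitting a large dataset into blocks and running the same numeric kernel over each block concurrently rather than streaming everything through a single processor, can be sketched in miniature. The Python snippet below is an illustrative stand-in only (the real systems pair POWER CPUs with GPU accelerators); the `sum_of_squares` kernel and chunking scheme are hypothetical examples, not IBM's design:

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    # Stand-in for a numeric kernel that an accelerator would run on one data block.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # "Data-centric" split: partition the data and apply the kernel to each
    # block in parallel, then combine the partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

print(parallel_sum_of_squares(list(range(10))))  # 285
```

The payoff in a real GPU-accelerated machine comes from the same pattern at vastly larger scale, with thousands of lightweight cores each handling a block of data.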

In the end, IBM expects that both systems will sit atop Top500’s supercomputing list by 2017. In their estimation, Summit will be capable of 150-300 petaflops, with Sierra clocking in at 100-150 petaflops.
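A quick back-of-the-envelope calculation using the figures above shows how large a leap that projected range would be over Tianhe-2's current 33.86-petaflop mark:

```python
TIANHE_2 = 33.86                      # petaflops, current Top500 leader
SUMMIT_LOW, SUMMIT_HIGH = 150, 300    # projected petaflops range for Summit

# Projected speedup of Summit over today's fastest machine
print(round(SUMMIT_LOW / TIANHE_2, 1))   # 4.4
print(round(SUMMIT_HIGH / TIANHE_2, 1))  # 8.9
```

In other words, even the low end of IBM's projection would be more than four times the performance of today's fastest system.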

Sierra and Summit will represent a marked improvement over current systems. But will the future of supercomputing look more like the last few years, with long lulls occasionally interrupted by revolutionary advances? Or is there a steadier curve ahead?
