Consumer GPU–driven supercomputer makes HPC affordable.
Minnesota’s Nor-Tech has announced the release of a new business-class version of its high-performance computing (HPC)/graphics processing unit (GPU) server at a price that’s going to be hard to refuse.
For some time now, people in the graphics and simulation communities have known that consumer-grade GPUs perform just as well as commercial-grade GPUs in many applications, and do so for a fraction of the cost. Consumer GPUs aren’t that different from commercial GPUs, except for their lack of error-correcting code (ECC) memory, which can be pricey to put on a chip. There’s no doubt that for some applications ECC memory is critical, but for most, especially those built around trending and averaging algorithms, its absence won’t be missed.
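To see why averaging workloads shrug off the occasional memory error, consider a back-of-the-envelope simulation. This is a hypothetical sketch of our own (not a Nor-Tech benchmark): we average a million samples, flip one bit in a single sample, and measure how much the mean moves.

```python
import random

# Hypothetical illustration: why averaging workloads tolerate rare,
# uncorrected memory errors. The sample count and bit position are
# arbitrary choices for this sketch.
random.seed(42)
N = 1_000_000
samples = [random.uniform(0.0, 1.0) for _ in range(N)]

clean_mean = sum(samples) / N

# Simulate a single-bit upset in one sample, stored as a 16-bit
# fixed-point value (as a sensor pipeline might hold it in memory).
i = random.randrange(N)
fixed = int(samples[i] * 65535)
corrupted = fixed ^ (1 << 7)      # flip bit 7 of the stored value
samples[i] = corrupted / 65535

dirty_mean = sum(samples) / N

# The flip perturbs one sample by roughly 2**7 / 65535 ≈ 0.002, and the
# mean by that amount divided by N.
print(abs(dirty_mean - clean_mean))   # ≈ 2e-9
```

A single upset moves the average by a couple of parts per billion, well below the noise floor of most measured data; by contrast, a workload where one flipped bit corrupts an entire result (say, a financial ledger) is exactly where ECC earns its price.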
But wait, you might ask: if everyone knows this, why aren’t engineers and researchers the world over building HPCs and GPU servers that use consumer-grade chips? Unfortunately, the answer is straightforward. Until now, there have been few, if any, off-the- (or, rather, on-the-) rack solutions for networking an array of consumer-grade hardware. That type of infrastructure was built solely for commercial-grade HPCs and their bulkier form factors.
Knowing this, Nor-Tech’s engineers have developed a new rackmount chassis that will accommodate consumer-level GPUs—and slash the price for entry into the HPC world.
Specs for Nor-Tech’s powerful business-class rackmount server include:
· Support for up to 3x double-slot computing cards
· NVIDIA GeForce GTX series GPUs
· Intel Xeon processor E5-2600 v3 product family
· 24x RDIMM/LRDIMM ECC DIMM slots
· 24x 2.5″ hot-swappable HDD/SSD bays
· Support for SATA III 6 Gb/s connections
· 2x GbE LAN ports (Intel I350-BT2)
· 2x 1600 W 80 PLUS Platinum 100~220 V AC redundant PSU
So, who should use an HPC based on consumer-grade GPUs? Nor-Tech believes that there’s a big market of technical and scientific companies looking to leverage computing power to further their product development and research projects. One high-profile example is Google, which uses consumer GPU–driven HPCs to dive into deep learning and machine learning problems.
Still, whenever you’re talking HPCs, there’s always a cloud on the horizon. These days, its name is Amazon Web Services, and one has to wonder when a storm of decentralized computing power will wash away most in-house computing operations for good.