Amazon Integrates NVIDIA Tesla V100 GPUs to Supercharge Cloud for Parallel Processing
Andrew Wheeler posted on November 03, 2017

Recently, Amazon Web Services announced P3 instances, the next generation of Amazon Elastic Compute Cloud (Amazon EC2) GPU instances, designed to improve the performance of applications that require massive amounts of parallel floating-point processing.

As NVIDIA predicted, GPUs now power many of today's most computationally intensive workloads: machine learning, autonomous vehicle systems, computational fluid dynamics, genomics and seismic analysis.

P3 instances are the first to include NVIDIA Tesla V100 GPUs, and Amazon claims that customers will be able to write and deploy advanced applications much faster than they could with previous-generation Amazon EC2 GPU compute instances. 

Machine learning applications will train faster with up to eight NVIDIA Tesla V100 GPUs available on P3 instances. This gives Amazon's customers access to 1 petaflop of mixed-precision, 125 teraflops of single-precision and 62 teraflops of double-precision floating-point performance.
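Those aggregate figures follow directly from NVIDIA's published per-GPU peaks for the Tesla V100. A quick sanity check (the per-GPU numbers below come from NVIDIA's spec sheet, not from the article, so treat them as an assumption):

```python
# Per-GPU peak throughput for a Tesla V100, in TFLOPS. These per-GPU
# figures are assumed from NVIDIA's published specifications; the
# article only quotes the eight-GPU totals.
TENSOR_MIXED = 125.0  # Tensor Core mixed-precision
SINGLE = 15.7         # FP32
DOUBLE = 7.8          # FP64

GPUS = 8  # largest P3 instance size

print(TENSOR_MIXED * GPUS)  # 1000.0 TFLOPS = the 1 petaflop of mixed precision
print(SINGLE * GPUS)        # 125.6 TFLOPS, which the article rounds to 125
print(DOUBLE * GPUS)        # 62.4 TFLOPS, which the article rounds to 62
```

The quoted totals are simply eight times the single-GPU peaks, rounded down.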

P3 instances also include up to 64 vCPUs on custom Intel Xeon E5 (Broadwell) processors and 488 GB of DRAM.

Existing AWS customers will find AWS Deep Learning AMIs (Amazon Machine Images) in the AWS Marketplace, with preinstalled versions of TensorFlow, Apache MXNet and Caffe2 that support the Tesla V100 GPUs.

The NVIDIA Volta Deep Learning AMI gives customers access to deep learning framework containers from the NVIDIA GPU Cloud; alternatively, AMIs are available for Windows Server 2012 R2, Windows Server 2016, Ubuntu 16.04 and Amazon Linux, giving customers flexibility in choosing an optimal image.

The one thing that may prevent you from using Amazon EC2 P3 instances is your location: for now they are available only in the US East, US West, EU West and Asia Pacific regions. The instances come in three sizes, with one, four or eight GPUs, and can be purchased as On-Demand, Reserved or Spot Instances.
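Concretely, the three sizes correspond to three instance types. A minimal lookup sketch, assuming the instance-type names from AWS's P3 documentation (the article itself only states the GPU counts):

```python
# P3 instance sizes and their Tesla V100 GPU counts. The instance-type
# names are assumed from AWS's P3 launch documentation; the article only
# describes the sizes as one, four and eight GPUs.
P3_GPU_COUNTS = {
    "p3.2xlarge": 1,
    "p3.8xlarge": 4,
    "p3.16xlarge": 8,
}

def v100_count(instance_type: str) -> int:
    """Return the number of Tesla V100 GPUs in a given P3 instance type."""
    return P3_GPU_COUNTS[instance_type]

print(v100_count("p3.16xlarge"))  # 8
```

Only the largest size delivers the full eight-GPU, 1-petaflop configuration described above.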

The emergence of GPUs as the cornerstone of machine learning and deep learning, combined with the increasing computing power of the cloud, will no doubt yield some noteworthy applications in the near future.
