This report was originally published in September 2019.
The NVIDIA DGX-1 is a state-of-the-art integrated system for deep learning and AI development. Leveraging eight interconnected NVIDIA Tesla V100 GPUs, the DGX-1 offers dramatic acceleration of deep learning workloads over CPU-based hardware. In this paper, we highlight a few best practices that enable DGX-1 end users to fully capitalize on its industry-leading performance. Benchmark testing was conducted with a common GPU workload, convolutional neural network (CNN) training, using the Keras deep learning API. We first examined how training efficiency depends on three factors: batch size, input image size and model complexity. Next, we assessed the scalability of training speed using a multi-tower, data-parallel approach. Finally, we demonstrated the importance of learning-rate scaling when employing multiple GPU workers.
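The multi-tower, data-parallel scheme and the accompanying learning-rate scaling can be illustrated independently of any framework. In data parallelism, each GPU ("tower") holds a replica of the model and processes its own shard of the global batch; the per-tower gradients are then averaged before a single synchronous weight update. Because the global batch grows with the number of towers when the per-tower batch size is held fixed, the learning rate is commonly scaled up linearly to match. A minimal NumPy sketch, with a linear least-squares model standing in for the CNN (all function names and values here are illustrative, not taken from the benchmark):

```python
import numpy as np

def tower_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient computed on one tower's data shard."""
    residual = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ residual / len(y_shard)

def data_parallel_step(w, X, y, num_towers, base_lr):
    """One synchronous data-parallel update: shard the global batch across
    towers, average the per-tower gradients, and apply a learning rate
    scaled linearly with the number of towers (the linear scaling rule)."""
    X_shards = np.array_split(X, num_towers)
    y_shards = np.array_split(y, num_towers)
    grads = [tower_gradient(w, Xs, ys)
             for Xs, ys in zip(X_shards, y_shards)]
    avg_grad = np.mean(grads, axis=0)
    lr = base_lr * num_towers  # linear learning-rate scaling
    return w - lr * avg_grad
```

With equal shard sizes, the averaged tower gradient equals the full-batch gradient, so adding towers changes only the effective batch size and the scaled learning rate, which is why the scaling rule matters as GPU workers are added.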
"WWT Research reports provide in-depth analysis of the latest technology and industry trends, solution comparisons and expert guidance for maturing your organization's capabilities. By logging in or creating a free account you’ll gain access to other reports as well as labs, events and other valuable content."
Thanks for reading. Want to continue?
Log in or create a free account to continue viewing Deep Learning With NVIDIA DGX-1 and access other valuable content.