Hewlett Packard Enterprise, United States of America
Deep learning is rapidly and fundamentally transforming the way science and industry use data to solve problems. Deep neural network models have proven to be powerful tools for extracting insights from data across many domains. As these models grow in complexity to solve increasingly challenging problems with ever larger datasets, the need for scalable methods and software to train them grows accordingly.
The Deep Learning at Scale tutorial aims to provide attendees with a working knowledge of deep learning on HPC-class systems, including performance optimization, scaling techniques, and scientific applications. We will not cover deep learning basics in detail but will provide introductory resources. We will give demos and provide code examples with datasets to show attendees how to use HPC systems effectively for optimized, scalable distributed training and hyperparameter optimization.
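The core idea behind the distributed training covered here can be illustrated in a few lines. The following is a minimal, self-contained sketch (a hypothetical toy model, not code from the tutorial) of synchronous data-parallel SGD: each worker computes a gradient on its own data shard, the gradients are averaged across workers (the role an allreduce collective plays in real frameworks), and every replica applies the same update, keeping the model copies in sync.

```python
def local_gradient(w, shard):
    # Gradient of mean squared error for a toy 1-D model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(grads):
    # Stand-in for an allreduce collective: average gradients across workers.
    return sum(grads) / len(grads)

def train(shards, w=0.0, lr=0.01, steps=200):
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # parallel in practice
        w -= lr * allreduce_mean(grads)                 # synchronized update
    return w

# Data generated from y = 3x, sharded round-robin across 4 "workers".
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
print(round(train(shards), 3))  # converges toward w = 3.0
```

In production, frameworks such as Horovod or PyTorch's DistributedDataParallel replace `allreduce_mean` with an efficient allreduce over the interconnect, but the algorithmic structure is the same.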