2019 GTC San Jose

S9830 - Training AI Models Faster With Distributed Training in PyTorch

Session Speakers
Session Description

In addition to the new production-deployment capabilities included in the 1.0 release of PyTorch, the deep learning framework also added improved distributed training, allowing researchers and developers to easily parallelize computation across processes and clusters of machines. The PyTorch development team at Facebook has continued to improve its performance, and will walk through new benchmarks and show how developers can readily take advantage of distributed training in PyTorch and NVIDIA GPUs to train their models faster.
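As a rough illustration of the API the session covers, here is a minimal sketch of multi-GPU data-parallel training with torch.nn.parallel.DistributedDataParallel. The model, data, and hyperparameters are placeholders, and the torchrun launcher shown postdates the 1.0 release; the distributed calls themselves (init_process_group, DistributedDataParallel, the NCCL backend) are standard PyTorch. This is not the speakers' example, just a sketch of the pattern.

```python
# Launch with one process per GPU, e.g.:
#   torchrun --nproc_per_node=4 train.py
# torchrun sets RANK, WORLD_SIZE, MASTER_ADDR/PORT, and LOCAL_RANK for us.

import os
import torch
import torch.distributed as dist
import torch.nn as nn


def main():
    # NCCL is the recommended backend for NVIDIA GPUs.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; replace with your own network.
    model = nn.Linear(10, 1).cuda(local_rank)

    # DDP replicates the model per process and overlaps gradient
    # all-reduce with the backward pass.
    ddp_model = nn.parallel.DistributedDataParallel(
        model, device_ids=[local_rank]
    )

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for step in range(100):
        # Random stand-in data; a real job would use a DataLoader
        # with a DistributedSampler so each rank sees a distinct shard.
        inputs = torch.randn(32, 10, device="cuda")
        targets = torch.randn(32, 1, device="cuda")

        optimizer.zero_grad()
        loss = loss_fn(ddp_model(inputs), targets)
        loss.backward()  # gradients are synchronized across ranks here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Because gradient synchronization happens inside backward(), the training loop itself is nearly identical to single-GPU code; scaling out to a cluster is mostly a matter of launching more processes.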


Additional Information
Topic: Deep Learning/AI Frameworks
Industry: Software
Technical Level: Advanced technical
Session Type: Talk
Length: 50 minutes
Session Schedule