No
Yes
View More
View Less
Working...
Close
OK
Cancel
Confirm
System Message
Delete
Schedule
An unknown error has occurred and your request could not be completed. Please contact support.
Scheduled
Wait Listed
Personal Calendar
Speaking
Conference Event
Meeting
Interest
There aren't any available sessions at this time.
Conflict Found
This session is already scheduled at another time. Would you like to...
Loading...
Please enter a maximum of {0} characters.
{0} remaining of {1} character maximum.
Please enter a maximum of {0} words.
{0} remaining of {1} word maximum.
must be 50 characters or less.
must be 40 characters or less.
Session Summary
We were unable to load the map image.
This has not yet been assigned to a map.
Search Catalog
Reply
Replies ()
Search
New Post
Microblog
Microblog Thread
Post Reply
Post
Your session timed out.
This web page is not optimized for viewing on a mobile device. Visit this site in a desktop browser to access the full set of features.
2019 GTC San Jose
Add to My Interests
Remove from My Interests

WORKSHOPS MARCH 17, 2019 | CONFERENCE MARCH 18-21, 2019


Log in to your account and click "Add to My Interests" to build your personalized interest list.


Add-on package required to attend workshops. Conference & Training pass required to attend trainings.


New sessions are added each week; the scheduler will be available in early February.



S9266 - 3D Object Tracking and Localization for AI City We will discuss 3D vehicle and pedestrian tracking and localization by monocular surveillance cameras for an AI city. We'll explain how to use 2D detections to localize vehicles in 3D world coordinates and how to estimate the GPS speed with a tracking approach. We will also examine how appearance features and temporal consistency combine to define clustering loss between two tracklets and how we used five clustering operations for loss minimization. Talk Gaoang Wang - Research Assistant, University of Washington
DLIT928 - 3D Segmentation for Medical Imaging with V-Net

Image segmentation is a key application of deep learning in the healthcare field. You will:

  • Learn to implement skip connections between layers to improve segmentation performance
  • Apply the V-Net network architecture to segment 3D imaging data
  • Train a network to segment 3D prostate scans

Upon completion, you'll be able to use the V-Net architecture to segment 3D images. All attendees must bring their own laptop and charger. We recommend using a current version of Chrome, Firefox, or Safari for an optimal experience. Create an account at http://courses.nvidia.com/join before you arrive.

Instructor-Led Training
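As a rough illustration of the skip connections named in the bullets above, here is a minimal PyTorch sketch — not the course's actual notebook; the block structure and the name VNetBlock are illustrative assumptions:

```python
# A minimal sketch (assumed, not course material) of a V-Net-style building
# block: a 3D convolutional stage whose input is added back to its output,
# i.e. an element-wise skip connection. Requires PyTorch.
import torch
import torch.nn as nn

class VNetBlock(nn.Module):
    def __init__(self, channels, n_convs=2):
        super().__init__()
        layers = []
        for _ in range(n_convs):
            layers += [nn.Conv3d(channels, channels, kernel_size=5, padding=2),
                       nn.PReLU(channels)]
        self.convs = nn.Sequential(*layers)

    def forward(self, x):
        # The skip connection: add the block input to the convolved output.
        return self.convs(x) + x

block = VNetBlock(channels=16)
volume = torch.randn(1, 16, 32, 32, 32)   # (batch, channels, D, H, W)
print(block(volume).shape)                 # torch.Size([1, 16, 32, 32, 32])
```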
S9518 - Accelerated Deep Learning Applied to Algorithmic Trading: Some Lessons Learned We'll discuss the challenges AI/ML technologies pose for large financial services organizations. These include the vast number of potential AI application areas, which may include NLP, human-resource management, time-series analysis, distinct classes of anomaly-detection problems, operational readiness and optimization, and the assessment, design, and deployment of new information infrastructures. Add in a complex regulatory environment and the effort required to build effective applied data science teams, and it's easy to see why no large financial services organization gets this transformation 100 percent right the first time. We'll describe some of the hurdles we encountered, and discuss lessons learned in the process of adopting accelerated deep learning. Talk Ahmed Jedda - Technology Associate, Morgan Stanley
Richard Huddleston - Executive Director, Morgan Stanley
S9756 - Accelerated Hyperscale Compute for AI at the Edge We'll examine what the evolution of the 5G network means for telecommunications providers and how supporting the jump to 5G will require accelerated computing deployed in new patterns. We'll cover how telecommunications companies can tackle the data tsunami that will emerge with 5G, explore the new intelligent edge, and share solutions to the challenges of 5G. We will also provide examples of application deployments to the edge and their use cases. Talk Brandon Jones - Power Technical Specialist, IBM
S9213 - Accelerate Time Series Databases with GPUs and Machine Learning Learn how to more efficiently use time series databases, which have applications in fields like finance, Internet services, and application performance management. Data storage and processing can affect their performance and cost. We'll present a GPU-based architecture to accelerate time series database query performance and compare it with a CPU-based version. In addition, we'll introduce an algorithm that reduces storage size by compressing these databases. We'll also show how to apply reinforcement learning to make mode decisions in data compression. Talk Xinyang Yu - Engineer, Alibaba
S9751 - Accelerate Your CUDA Development with Latest Debugging and Code Analysis Developer Tools We will provide an overview of the debugging and dynamic analysis tools for CUDA software that are required during any development process. We'll talk about specific areas of the development process that NVIDIA addresses to ensure a comprehensive set of development tools is available for completing software application development. Talk Steve Ulrich - Manager: Compute Debugger, NVIDIA
Aurelien Chartier - Senior Systems Software Engineer, NVIDIA
S9672 - Accelerate Your Speech Recognition Pipeline on the GPU Automatic speech recognition (ASR) algorithms allow us to interact with devices, appliances, and services using spoken language. Used in cloud services like Siri, Google Voice, and Amazon Echo, speech recognition is growing in popularity, which substantially increases the computational demand on the data center. We'll discuss the latest work by NVIDIA to accelerate the ASR pipeline, which includes a lattice-generating language model decoder, and explain how we're enabling online speech decoding across a range of NVIDIA GPUs. Talk Justin Luitjens - Senior Developer Technologies Engineer, NVIDIA
Hugo Braun - AI Developer Technology Engineer, NVIDIA
DLIT901 - Accelerating Applications with CUDA C/C++

The CUDA computing platform enables acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. Learn how to accelerate C/C++ applications by:

  • Exposing the parallelization of CPU-only applications, and refactoring them to run in parallel on GPUs
  • Successfully managing memory
  • Utilizing CUDA parallel thread hierarchy to further increase performance

Upon completion, you'll be able to utilize CUDA to accelerate your CPU-only C/C++ applications for massive performance gains. All attendees must bring their own laptop and charger. We recommend using a current version of Chrome, Firefox, or Safari for an optimal experience. Create an account at http://courses.nvidia.com/join before you arrive.

Instructor-Led Training
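The course material itself is CUDA C/C++; to keep the code examples in this document in a single language, here is a hedged Python sketch of the same concepts — kernel launch, thread hierarchy, and host/device memory movement — using Numba's CUDA support. It assumes a CUDA-capable GPU and the numba package, and is not the course's exercise code:

```python
# Illustrative sketch of CUDA's thread hierarchy via Numba (assumption: not
# the DLI course code, which is CUDA C/C++). Each thread computes one element.
from numba import cuda
import numpy as np

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)          # global index from the block/thread hierarchy
    if i < x.size:            # guard against out-of-range threads
        out[i] = x[i] + y[i]

n = 1 << 20
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.zeros_like(x)

threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks_per_grid, threads_per_block](x, y, out)  # implicit H2D/D2H copies
print(out[:4])  # [0. 3. 6. 9.]
```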
S9168 - Accelerating Data Access for Deep Learning on Large-Scale and High-Bandwidth Distributed GPU Systems I/O bandwidth is often the bottleneck for distributed deep learning with GPUs. We'll discuss a distributed caching and data location-aware scheduling system that solves the I/O bandwidth problem without the need for expensive storage systems. Our system uses local disk attached to the GPU nodes for transparent distributed cache and maintains the data in the cache even after the job has finished. The scheduler can also take cached data into consideration in scheduling jobs. Together, these systems optimize placement of jobs on the nodes with data, while allowing the cluster to be shared among multiple users. This results in increased cluster utilization and allows GPUs to process the data as fast as necessary without any I/O bottlenecks. We will describe the design considerations for the cache and experimental results with several applications. Talk Steven Eliuk - Vice President, Deep Learning, IBM Global Chief Data Office, IBM
Seetharami Seelam - Research Staff Member, IBM T. J. Watson Research Center
DLIT937 - Accelerating Data Science Workflows with RAPIDS

The open source RAPIDS project allows data scientists to GPU-accelerate their data science and data analytics applications from beginning to end, creating possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows. Learn how to GPU-accelerate your data science applications by:

  • Utilizing key RAPIDS libraries like cuDF (GPU-enabled Pandas-like dataframes) and cuML (GPU-accelerated machine learning algorithms)
  • Learning techniques and approaches to end-to-end data science, made possible by rapid iteration cycles created by GPU acceleration
  • Understanding key differences between CPU-driven and GPU-driven data science, including API specifics and best practices for refactoring

Upon completion, you'll be able to refactor existing CPU-only data science workloads to run much faster on GPUs and write accelerated data science workflows from scratch.

All attendees must bring their own laptop and charger. We recommend using a current version of Chrome, Firefox, or Safari for an optimal experience. Create an account at http://courses.nvidia.com/join before you arrive.

Instructor-Led Training
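To make the cuDF/cuML workflow described above concrete, here is a minimal sketch assuming a RAPIDS installation; the data and column names are illustrative, not course material:

```python
# A minimal RAPIDS sketch (illustrative data): a Pandas-like dataframe on the
# GPU feeding a GPU-accelerated, scikit-learn-style estimator.
import cudf
from cuml.cluster import KMeans

# cuDF mirrors much of the Pandas API, but operations run on the GPU.
gdf = cudf.DataFrame({
    "x": [0.0, 0.1, 5.0, 5.1, 0.2, 4.9],
    "y": [0.0, 0.2, 5.0, 4.9, 0.1, 5.2],
})
print(gdf.describe())      # familiar Pandas-style summary, computed on GPU

# cuML estimators follow the scikit-learn fit/predict pattern.
km = KMeans(n_clusters=2)
km.fit(gdf)
print(km.predict(gdf))     # cluster label per row
```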
S9783 - Accelerating Graph Algorithms with RAPIDS Graphs are a ubiquitous part of technology we use daily in systems like GPS — graphs help find the shortest path between two points — and in social networks, which use them to help users find friends. We'll explain why analyzing these vast networks with possibly billions of entries requires the computing power of GPUs. We'll then discuss the performance of graph algorithms on the GPU and show benchmarking results from several graph frameworks. We'll also cover the RAPIDS roadmap that will help unify these frameworks and make them easy to use and simple to deploy. Talk Joe Eaton - Technical Lead for Data and Graph Analytics, NVIDIA
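As a hedged illustration of the "shortest path" example above in the RAPIDS style the talk covers, here is a small cuGraph sketch; the toy edge list is an assumption, not the speakers' benchmark data:

```python
# Single-source shortest path (the "GPS" example) with cuGraph on a toy
# weighted edge list. Assumes a RAPIDS install; data is illustrative.
import cudf
import cugraph

edges = cudf.DataFrame({
    "src":    [0, 0, 1, 2, 2, 3],
    "dst":    [1, 2, 3, 3, 4, 4],
    "weight": [1.0, 4.0, 2.0, 1.0, 5.0, 1.0],
})

G = cugraph.Graph()
G.from_cudf_edgelist(edges, source="src", destination="dst", edge_attr="weight")

# Distances from vertex 0 to every reachable vertex, computed on the GPU.
dist = cugraph.sssp(G, source=0)
print(dist.sort_values("vertex"))
```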
S9207 - Accelerating Magnetic Resonance Imaging (MRI) Using GPUs We'll discuss how NVIDIA GPUs can accelerate MR image reconstruction by exploiting parallelism in advanced MR reconstruction algorithms, which reduces MRI scan time and makes MRIs clinically more useful. Learn how to address the computational complexities of MR algorithms to reduce scan time by exploiting their inherent parallelism using GPUs. We'll explain how to implement look-up tables based on CUDA kernels to avoid dependencies in the conventional MRI gridding. We'll also present reconstruction results of the simulated phantom and in-vivo datasets to demonstrate the computation time and reconstruction accuracy of GPU-based implementations of the SENSE, GRAPPA, and Magnetic Resonance Fingerprinting algorithms for MR reconstruction. Talk Hammad Omer - Assistant Professor, COMSATS University Islamabad, Pakistan
S9556 - Accelerating Model Development by Reducing Operational Barriers Advanced hardware like NVIDIA technology lowers technical barriers to model size and scope, but issues remain in areas like model performance and training infrastructure management. We'll discuss operational challenges to training models at scale with a particular focus on how training management and hyperparameter tuning can inform each other to accomplish specific goals. We'll also explore techniques like parallelism and scheduling, discuss their impact on model optimization, and compare various techniques. We'll also evaluate results of this approach. In particular, we'll focus on how new tools that automate training orchestration accelerate model development and increase the volume and quality of models in production. Talk Patrick Hayes - CTO, SigOpt
S9923 - Accelerating Product Visualization in KeyShot using RTX KeyShot is widely used in industrial design to visualize products for design evaluations, marketing images, technical illustrations, and more. We'll discuss our experience integrating RTX support into KeyShot. We'll explain why we decided to use RTX and outline the technical design decisions we made to provide the best experience for our users. Talk Henrik Jensen - Chief Scientist, Luxion
S9479 - Accelerating the Next Generation of Seismic Interpretation We will discuss how deep learning can automate complex seismic interpretation tasks that are crucial to exploration and production in the energy industry. Seismic interpretation often involves tasks of extracting structural features — horizons, faults, and salt bodies, for example — from 3D seismic images. Manually interpreting such seismic structural features can be time-consuming and labor-intensive. We'll explain how we're improving automatic seismic geobody interpretation by using a convolutional neural network for image classification and segmentation. Talk Yunzhi Shi - Graduate Research Assistant, The University of Texas at Austin
S9665 - Acceleration of an Adaptive Cartesian Mesh CFD Solver in the Current Generation Processor Architectures We'll explore the challenges of accelerating an adaptive Cartesian mesh CFD Solver, PARAS-3D, in existing CPUs and GPUs. The memory-bound nature of CFD codes is an obstacle to higher performance, and the oct-tree structure of adaptive Cartesian meshes adds the challenge of data parallelism. Cartesian mesh solvers have higher memory bandwidth requirements due to their larger and varying stencil. We'll detail how redesigning and implementing a legacy Cartesian mesh CFD solver and improving algorithms and data structures helped us achieve higher performance in CPUs. We'll also explain how we used a structure of array-based data layout and GPU features like Unified Memory and Multi-Process Service to improve GPU performance over a CPU-only version. Talk Bharatkumar Sharma - Senior Solution Architect, NVIDIA
Harichand M V - Scientist, Vikram Sarabhai Space Centre
S9652 - Achieving Deterministic Execution Times in CUDA Applications CUDA has been an industry standard for high-performance computing applications to use GPU parallelism for general-purpose computing. Achieving high compute throughput has always been an important goal. But CUDA is increasingly used in autonomous vehicles and robotics, where deterministic execution time is important. We'll discuss some application and system design considerations to help CUDA developers achieve deterministic execution times. We will also talk about tricks to avoid bubbles in the GPU pipeline and improve GPU utilization, general programming practices, and application design for deterministic execution times. Talk Yogesh Kini - Engineering Manager, NVIDIA
Aayush Rajoria - System Software Engineer, NVIDIA
S9272 - A Deep Learning-Based Method for Automated Volumetric Assessment of Liver Lesions Radiological assessment and quantification of liver lesions currently relies on measurements of longest linear diameter to quantify size, which is a misleading way to measure irregularly shaped lesions. Volumetric assessment, on the other hand, gives a much better impression of overall lesion size. One of the greatest roadblocks to calculating lesion volume is the amount of time it takes to demarcate the boundaries of an individual lesion. Arterys is working on empowering the radiologist with an automated method for volumetric assessment of liver lesions. This automated method is built using a convolutional network. When integrated into Arterys web platform, it enables volumetric assessment with a single mouse click. Talk Daniel Golden - Director of Machine Learning, Arterys
S9692 - Advanced In-Situ Visualization of Galactic Wind Simulations using NVIDIA IndeX We will discuss how we're using NVIDIA IndeX to visualize a large-scale galactic wind simulation that has been simulated on a GPU cluster. We'll demonstrate an in-situ integration of NVIDIA IndeX running tightly coupled with the CUDA-based scientific simulation code, Cholla, to make optimal use of available computing resources. In addition, we will showcase the novel compute functionality of NVIDIA IndeX, which allows us to actively pre-process the simulation data for more complex visualization operations at run-time. Talk Marc Nienhaus - Sr. Manager, Product Technology Lead, NVIDIA
Alexander Kuhn - Senior Graphics Engineer, NVIDIA
Dragos Tatulea - Software Engineer, NVIDIA
S9378 - Advanced Technologies and Techniques for Debugging CUDA GPU HPC Applications Debugging and analyzing NVIDIA GPU-based HPC applications requires a tool that supports the demands of today's complex CUDA applications. Debuggers must deal with the extensive use of C++ templates, STL, many shared libraries, and debugging optimized code. They need to seamlessly support debugging both host and GPU code, Python, and C/C++ mixed-language applications. They must also scale to the complexities of today's multi-GPU cluster supercomputers such as Summit and Sierra. We'll discuss the advanced technologies provided by the TotalView for HPC debugger and explain how they're used to analyze and debug complex CUDA applications to make code easily understood and to quickly solve difficult problems. We'll also show TotalView's new user interface. Learn how to easily debug multi-GPU environments and OpenACC, and see a unified debugging view for Python applications that leverage C++ Python extensions such as TensorFlow. Talk Nikolay Piskun - Director of Continuing Engineering, Rogue Wave Software
S9164 - Advanced Weather Information Recall with DGX-2 We'll talk about how we're applying deep learning to weather forecasting at Weather News, one of the world's largest forecasting companies. We're now able to provide Japanese TV news shows with AI-generated weather information, and we plan to expand elsewhere in Asia. We'll explain how we used TensorFlow on an NVIDIA DGX-2 machine and an innovative learning model to add measurement results and increase the accuracy of our forecasts. We'll also talk about how we're creating new learning models with TensorRT on the DGX-2. We'll touch on other potential uses for our weather technology in settings such as autonomous cars and solar power plants. Talk Tomohiro Ishibashi - Director, Weather News, Inc.
Shigehisa Omatsu - CEO, dAIgnosis, Inc.
S9436 - Advances in Computational Particle Mechanics Using GPUs Our talk will examine advances in the simulation of particulate systems in computer-aided engineering applications. We'll focus on the discrete element method (DEM) and the strides made in the number of particles and particle shape using the GPU-based code, Blaze-DEM. We'll cover a number of industrial applications including mining, agriculture, civil engineering, and pharmaceuticals. We will look at fluid and heat couplings made possible by the increased computational power of the latest NVIDIA GPUs. We'll also discuss work by various groups to create a multi-physics GPU-based platform using Blaze-DEM. Talk Nicolin Govender - Research Fellow, University of Surrey
S9209 - Advances in Real-Time Automotive Visualisation We'll provide an overview of new techniques developed by ZeroLight using NVIDIA's Volta and Turing GPUs to enhance real-time 3D visualization in the automotive industry for compelling retail experiences. We'll cover the challenges involved in integrating real-time ray-traced reflections at 60fps in 4k and how future developments using DXR and NVIDIA RTX will enable improvements to both graphics and performance. We'll also discuss the challenges to achieving state-of-the-art graphical quality in virtual reality. Specifically, we'll explain how the team created a compelling commercial VR encounter using StarVR One and its eye-tracking capabilities for foveated rendering. Talk Chris O'Connor - Technical Director, ZeroLight
S9728 - Advancing Astrophysics with the GPU-Native, Massively Parallel Code, Cholla Learn about Cholla, a GPU-native, massively parallel hydrodynamics code that runs on the world's largest supercomputers and is pushing the forefront of astrophysical research. We'll describe Cholla's design, including our innovations in transferring classic computational fluid dynamics algorithms to GPUs. We'll cover our ongoing research in astrophysics, highlighting results from our 2017-2018 INCITE program to understand the role galactic winds play in the ongoing evolution of galaxies. In addition, we'll describe our current efforts to couple Cholla with NVIDIA's IndeX visualization software to provide high-fidelity in-situ renderings of our simulations. Talk Evan Schneider - Postdoctoral Research Fellow, Princeton University
S9202 - Advancing Fusion Science With CGYRO Using GPU-Based Leadership Systems Learn about the science of magnetically confined plasmas to develop the predictive capability needed for a sustainable fusion energy source. Gyrokinetic simulations are one of the most useful tools for understanding fusion science. We'll explain the CGYRO code, built by researchers at General Atomics to effectively and efficiently simulate plasma evolution over multiple scales that range from electrons to heavy ions. Fusion plasma simulations are compute- and memory-intensive and usually run on leadership-class, GPU-accelerated HPC systems like Oak Ridge National Laboratory's Titan and Summit. We'll explain how we designed and implemented CGYRO to make good use of the tens of thousands of GPUs on such systems, which provide simulations that bring us closer to fusion as an abundant clean energy source. We'll also share benchmarking results of both CPU- and GPU-based systems. Talk Jeff Candy - Manager, Turbulence and Transport Group, General Atomics
Igor Sfiligoi - HPC Software Developer, Energy Group, General Atomics
S9750 - Advancing U.S. Weather Prediction Capabilities with Exascale HPC We'll discuss the revolution in computing, modeling, data handling and software development that's needed to advance U.S. weather-prediction capabilities in the exascale computing era. Advancing prediction models to cloud-resolving 1-km scales will require an estimated 1,000 to 10,000 times more computing power, but existing models can't exploit exascale systems with millions of processors. We'll examine how weather-prediction models must be rewritten to incorporate new scientific algorithms, improved software design, and use new technologies such as deep learning to speed model execution, data processing, and information processing. We'll also offer a critical and visionary assessment of key technologies and developments needed to advance U.S. operational weather prediction in the next decade. Talk Mark Govett - Chief, High Performance Computing Section, NOAA Earth System Research Laboratory
S9737 - Adversarial Attacks and Defenses to Deep Neural Networks Learn about a key weakness of current deep learning technology. We'll describe an intriguing property of neural networks, the adversarial example, that shows the vulnerability of state-of-the-art neural networks. An adversarial example is crafted when some small but significant noise is added to a normal example, giving it the ability to fool or even attack neural networks. We'll discuss how we attack and defend neural networks, methods that won all three tracks in the NIPS 2017 competition on adversarial attacks and defenses organized by Google Brain. Talk Xiaolin Hu - Associate Professor, Tsinghua University
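For readers unfamiliar with the idea, here is a minimal sketch of one standard way to craft an adversarial example, the fast gradient sign method (FGSM); this is an illustrative baseline, not necessarily the speakers' competition-winning method:

```python
# FGSM sketch (assumed baseline, not the talk's method): perturb the input a
# small amount in the direction that increases the classification loss.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # "Small but significant noise": eps times the sign of the input gradient.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # assumes inputs scaled to [0, 1]

# Usage with any image classifier `model` and image batch `x` (hypothetical):
#   x_adv = fgsm(model, x, label)
#   model(x_adv).argmax(1)   # often differs from `label` on undefended nets
```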
S9872 - A Fast Forward through "Ray Tracing Gems" Ray tracing is a 50-year-old algorithm that is suddenly new again. While the film industry has embraced this rendering technique, until last year the idea of using it in an interactive setting was limited to high-end systems. With the introduction of Microsoft's DirectX for ray tracing and NVIDIA's RTX GPUs, ray tracing is now viable and affordable for workstations and consumer systems. To help developers, NVIDIA collected 32 technical R&D articles into the new book, "Ray Tracing Gems." As the book's editors, we'll quickly explain each article to help you achieve your rendering goals. The book will be available as a free e-book as well as in hardback. Talk Tomas Akenine-Möller - Distinguished Research Scientist, NVIDIA
Eric Haines - Distinguished Engineer, NVIDIA
S9577 - A GPU-Accelerated Streaming AI Data Platform Leveraging RAPIDS We will discuss the evolution of GPU-accelerated data platforms for big data streaming and ETL analytics and provide an overview of the RAPIDS data science platform. Structured telemetry events and unstructured logs are growing by 1,000 percent a year, so it's essential to handle scale with the same strict SLAs and high reliability. We'll talk about our next-generation GPU-accelerated big data platform for streaming workloads, which leverages GPU memory and GPU cores to process data at scale while maintaining extremely low latency for real-time analytics and monitoring. We'll showcase how we leveraged RAPIDS for NVIDIA's internal big data platform, which processes more than 10 billion events a day in support of NVIDIA products such as GPU Cloud, GeForce cloud gaming, and NGC cloud deep learning. We will also cover benchmarks and best practices for running end-to-end big data workloads on GPUs. Talk Joshua Patterson - Director, Applied Solution Engineering, NVIDIA
S9784 - AI and Machine Learning in Radiology: A Reality Check Learning objectives: 1) Gain a realistic perspective on how machine learning and artificial intelligence can add value to radiology. 2) Review the significant challenges that existing radiology workflow and IT infrastructure pose for the practical implementation of machine learning/artificial intelligence offerings. 3) Discuss strategies for preparing the radiology department and IT for machine learning/artificial intelligence. Talk Paul Chang - Professor & Vice Chairman, Department of Radiology, University of Chicago School of Medicine
S9385 - AI-Based Anomaly Detections and Threat Forecasting for Unified Communications Networks Learn how NVIDIA's GPUs are used to accelerate unified communications (UC) analytics processing by mathematically classifying UC call flows. We'll discuss how we're leveraging NVIDIA GPU parallelization technology to support classification and baselining of UC call flows, protect UC against fraudulent attacks, and establish predictive UC forecasting models. We'll explain how this allows us to more accurately identify and forecast deviations that may represent malicious use of UC against a baseline of normal traffic. Talk Kevin Riley - CTO, Ribbon Communications
S9399 - AI/Deep Learning: Transformational Health Care Use Cases We'll talk about healthcare deep learning initiatives we're pursuing to better serve our patients. These include improving the process of prior authorization, which is not only costly, but takes time that can affect patients' conditions and customer satisfaction. We'll discuss how we're applying deep learning to enable real-time processing of prior authorizations. We'll cover how we're using deep learning to more effectively detect medical claims fraud. Instead of traditional unsupervised outlier detection, deep learning can predict the provider or member's unique features, and use those to detect abnormal medical claims and reduce false positives. And we'll also explain how we're using deep learning for multiple disease imputation and prediction. Based on a patient's historical EHR, we can accurately impute multiple medical conditions as well as predict future conditions with an eye toward intervention. Talk Dima Rekesh - Senior Distinguished Engineer, Optum - UHG
Julie Zhu - Distinguished Engineer/Chief Data Scientist, Optum Technology, United Health Group
S9749 - AI Deployment in Manufacturing: Deep Learning Visual Inspection to Improve Productivity Manufacturers are increasingly adopting AI to improve productivity. We'll discuss our work to automate the inspection process, which represents 20 percent of the manufacturing pipeline. We're developing deep learning for automated visual inspection, aiming for human-level accuracy using NVIDIA GPUs and TensorRT to deploy the neural network on Jetson AGX Xavier. We'll also introduce our other new deep learning products. Talk Keisuke Fujita - AI Project Co-Founder, Musashi Seimitsu Industries Co., Ltd
S9508 - AI in Astrophysics: Applying Artificial Intelligence and Deep Learning to Astronomical Research AI and related technologies are beginning to revolutionize astronomy and astrophysics. As facilities like the Large Synoptic Survey Telescope and the Wide Field InfraRed Telescope come online, data volumes in astronomy will increase. We will describe a deep learning framework that allows astronomers to identify and categorize astronomical objects in enormous datasets with more fidelity than ever. We'll also review new applications of AI in astrophysics, including data analysis and numerical simulation. Talk Brant Robertson - Associate Professor, UC Santa Cruz
S9924 - AI in Diagnostic Imaging: An Opportunity to Reinvent the Clinical Workflow Understanding the current workflow and challenges in radiology is important when developing potential AI solutions, but developers must be aware of the pitfall of simply replacing existing workflow steps with AI. Instead, we should consider how AI can actually reinvent the workflow, making it possible for radiologists and physicians in other diagnostic imaging specialties to deliver better (i.e., more effective, personalized, cost-effective, and accessible) care to patients. In this talk, I will discuss some challenges specific to patient care in radiology, brainstorm some solutions, and describe some AI initiatives we are piloting. Talk Tessa Cook - Assistant Professor of Radiology, Penn Medicine
S9334 - AI Infrastructure: Lessons Learned from NVIDIA DGX POD NVIDIA DGX POD is a new way of thinking about AI infrastructure, combining DGX servers with networking and storage to accelerate AI workflow deployment and time to insight. We'll discuss lessons learned about building, deploying, and managing AI infrastructure at scale — from design to configuration to management to deployment — and offer insights useful for any infrastructure deployment of AI. We'll talk about how NVIDIA storage partners used the DGX POD concept to create their own versions of our design approach. In addition, we'll discuss NVIDIA's derivatives focused on vertical markets including medical imaging, autonomous vehicles, and other industries. Tutorial Darrin Johnson - Global Technical Marketing for Enterprise, NVIDIA
S9671 - AI Innovation Success Stories in Retail and Consumer Products Industries AI is driving success in many areas of the retail and consumer industries. Learn more about use cases, customer references, and compelling value propositions from GPU-enabled technology. Topics include AI, computer vision, and machine learning. We'll discuss success stories with production-grade business impact that cultivate an innovative fast-fail approach to driving consumer engagement, brand awareness, and operational efficiency. Talk Paul Hendricks - Solution Architect - Retail, NVIDIA
Eric Thorsen - IBD Retail, NVIDIA
S9720 - AI Manufacturing Innovation AI and deep learning are about to change the manufacturing industry by boosting capacity, increasing efficiency, and reducing costs and inventory. We'll share how Foxconn Interconnect Technology, a professional parts and components company, developed an industrial inspection application in-house and show how to leverage NVIDIA SDKs to accelerate the overall development and deployment process. Talk Joseph Wang - CTO, Foxconn Interconnect Technology (FIT)
S9902 - AI+VR: The Future of Data Analytics We'll discuss how virtual reality and machine learning tools can help extract insights from large, complex datasets and help build effective storytelling. We'll explain why standard data analytics tools and techniques are no longer sufficient, and discuss how AI-powered visual analytics with immersive environments provide a novel and robust framework for collaborative data exploration and understanding. Talk Ciro Donalek - CTO, Co-Founder, Virtualitics Inc
S9241 - All You Need to Know about Programming NVIDIA's DGX-2 NVIDIA's DGX-2 system offers a unique architecture that connects 16 GPUs together via the high-speed NVLink interface, along with NVSwitch, which enables unprecedented bandwidth between processors. This talk will take an in-depth look at the properties of this system along with programming techniques to take maximum advantage of the system architecture. Talk Lars Nyland - GPU Computing Architect, NVIDIA
Stephen Jones - Principal Software Engineer, NVIDIA
S9843 - A Machine Learning Method in Computational Materials Science We'll discuss our work using neural networks to fit the interatomic potential function and describe how we tested the network's potential function in atomic simulation software. This method has lower computational cost than traditional density functional theory methods. We'll show how our work is applicable to different atom types and architectures and how it avoids relying on the physical model. Instead, it uses a purely mathematical representation, which reduces the need for human intervention. Talk Boyao Zhang - Engineer, Computer Network Information Center, Chinese Academy of Sciences
Yangang Wang - Professor, Computer Network Information Center, Chinese Academy of Sciences
S9825 - A Massively Scalable Architecture for Learning Representations from Heterogeneous Graphs Working with sparse, high-dimensional graphs can be challenging in machine learning. Traditional ML architectures can only learn weights for a limited number of features, and even highly flexible neural networks can become overburdened if there isn't enough data. We'll discuss neural graph embeddings, used extensively in unsupervised dimensionality reduction of large graph networks. They can also be used as a transfer learning layer in other ML tasks. We'll explain our proposed architecture, which is highly scalable while keeping node types in their own distinct embedding space. Our approach involves a massively scalable architecture for distributed training in which the graph is distributed across a number of computational nodes that can fetch and update parameters of the embedding space. The parameters are served from several servers that scale with the number of parameters. We'll share our preliminary results showing training acceleration and reduced time to convergence. Talk C. Bayan Bruss - Machine Learning Engineer, Capital One
Athanassios Kintsakis - Machine Learning Engineer, Capital One
S9658 - Amber18: An Enhanced Molecular Simulations Program for Studying Biopolymers and Dissecting Ligand Binding Energies We'll discuss advanced implementations of molecular dynamics for studying the motions of biochemical systems and dissecting free energies in drug binding and molecular recognition. Attendees should have basic knowledge of CUDA. We'll focus on strategies for parallelism based on the computer science of how graphics cards operate. We'll also talk about the results for applications in pharmaceutical and academic computational biology. Talk Taisung Lee - Research Professor, Rutgers, the State University
David Cerutti - Research Professor, Rutgers, the State University
DLIT923 - Analogous Image Generation using CycleGAN

AI can automatically transform every horse in an image into a zebra, while the same process can generate satellite imagery from any map. The same AI can take a sprite sheet and generate a sheet with a different theme for automatic digital asset creation. You'll learn how to:

  • Use image analogies to translate image to image
  • Create an autoencoder architecture using an encoder, transformer, and decoder
  • Employ PatchGAN discriminator to complete the generative adversarial network (GAN)

Upon completion, you'll be able to automatically create analogous images using CycleGAN. All attendees must bring their own laptop and charger. We recommend using a current version of Chrome, Firefox, or Safari for an optimal experience. Create an account at http://courses.nvidia.com/join before you arrive.

Instructor-Led Training
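As a sketch of the PatchGAN discriminator named in the bullets above, here is an illustrative PyTorch fragment — not the course notebook; layer sizes are assumptions:

```python
# PatchGAN sketch (assumed architecture details): instead of one real/fake
# score per image, the discriminator emits a grid of scores, one per
# overlapping image patch. Requires PyTorch.
import torch
import torch.nn as nn

def patchgan(in_ch=3, base=64):
    def block(cin, cout, norm=True):
        layers = [nn.Conv2d(cin, cout, 4, stride=2, padding=1)]
        if norm:
            layers.append(nn.InstanceNorm2d(cout))
        layers.append(nn.LeakyReLU(0.2))
        return layers
    return nn.Sequential(
        *block(in_ch, base, norm=False),
        *block(base, base * 2),
        *block(base * 2, base * 4),
        nn.Conv2d(base * 4, 1, 4, padding=1),   # one logit per patch
    )

D = patchgan()
fake = torch.randn(1, 3, 128, 128)
print(D(fake).shape)   # torch.Size([1, 1, 15, 15]): a 15x15 patch score map
```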
S9422 - An Automatic Batching API for High Performance RNN Inference We will describe a new API that more effectively utilizes the GPU hardware for multiple single inference instances of the same RNN model. Many NLP applications have real-time runtime requirements for multiple independent inference instances. Our proposed API accepts independent inference requests from an application and seamlessly combines them to a large batch execution. Time steps from independent inference tasks are combined together so that we achieve high performance while staying within the latency budgets of an application for a time step. We also discuss functionality that allows the user to wait on completion of a certain time step, a task that's possible because our implementation is mainly composed of non-blocking function calls. Finally, we'll present performance data from the Turing architecture for an example RNN model with LSTM cells and projections. Talk Murat Guney - Developer Technology Engineer, NVIDIA
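The API described in this talk is NVIDIA's own and is not public; as a hedged sketch of the underlying idea — combining time steps from independent inference requests into one batched step — consider this PyTorch fragment, where all names and sizes are illustrative:

```python
# Illustrative sketch (not the talk's API): three independent RNN inference
# requests share one LSTM cell, and each time step is executed as one batched
# call instead of three separate ones.
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=32, hidden_size=64)

# Each request carries its own hidden state between time steps.
states = [(torch.zeros(1, 64), torch.zeros(1, 64)) for _ in range(3)]
inputs = [torch.randn(1, 32) for _ in range(3)]

# Batched step: stack the pending requests, run the cell once, scatter back.
x = torch.cat(inputs, dim=0)                     # (3, 32)
h = torch.cat([s[0] for s in states], dim=0)     # (3, 64)
c = torch.cat([s[1] for s in states], dim=0)
h, c = cell(x, (h, c))                           # one launch serves all three
states = [(h[i:i+1], c[i:i+1]) for i in range(3)]
```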
S9111 - A New Direct Connected Component Labeling and Analysis Algorithm for GPUs We'll talk about the evolution of the Connected Component Labeling (CCL) algorithm for GPUs and introduce a new algorithm for both CCL and connected-component analysis. CCL is a central algorithm for low- and high-level image processing used in computer vision applications such as OCR, motion detection, and tracking. After an era of single-core processors, where many sequential algorithms were developed and few codes were released, new parallel algorithms were developed for multi-core processors, SIMD processors, and GPUs. We'll discuss how a benchmark on an NVIDIA Jetson TX2 shows that the new algorithm is up to 2.7 times faster than the state of the art and can reach a processing rate of 200 fps for a resolution of 2048x2048. Talk Arthur Hennequin - Ph.D., CERN
Lionel Lacassagne - Professor in Computer Architecture, LIP6 Sorbonne University
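For readers new to the terminology, here is a hedged CPU reference using SciPy showing what labeling (CCL) and analysis (CCA) compute; the talk's GPU algorithm itself is not sketched here:

```python
# CCL assigns each foreground pixel an integer component id; CCA then derives
# per-component features such as size. SciPy's CPU implementation serves as a
# reference; the image is illustrative.
import numpy as np
from scipy import ndimage

image = np.array([[1, 1, 0, 0],
                  [0, 1, 0, 1],
                  [0, 0, 0, 1],
                  [1, 0, 1, 1]], dtype=np.uint8)

labels, n = ndimage.label(image)       # CCL: one label per connected component
print(n)                               # 3 components (4-connectivity)
print(labels)

# CCA: component sizes, skipping background label 0.
sizes = np.bincount(labels.ravel())[1:]
print(sizes)                           # pixels per component
```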
S9875 - Anomaly Detection for Smart Home We'll discuss how we're using computer vision and machine learning to identify abnormal events for notification. Talk Hongcheng Wang - Director of Research, Comcast Applied AI
DLIT936 - Anomaly Detection with Variational Autoencoders

Anomaly detection is critical in many industries, especially cybersecurity, finance, healthcare, retail and telecom. Variational autoencoders can outperform traditional techniques for anomaly detection. You'll learn how to:

  • Understand the Bayesian inference roots of variational autoencoders and their implementation
  • Define anomalies as points whose probability of being generated by a given model falls below a certain threshold, and set thresholds specific to an industry or use case
  • Use variational autoencoders to detect anomalies from all data points

Upon completion, you'll know how to train a variational autoencoder to detect anomalies within the data. All attendees must bring their own laptop and charger. We recommend using a current version of Chrome, Firefox, or Safari for an optimal experience. Create an account at http://courses.nvidia.com/join before you arrive.

Instructor-Led Training
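As a sketch of the thresholding idea in the bullets above, here is a minimal PyTorch fragment that scores points by their evidence lower bound (ELBO) under a trained variational autoencoder; `encoder` and `decoder` are assumed, illustrative modules, not course code:

```python
# VAE anomaly-scoring sketch (assumptions: `encoder` returns (mu, logvar) and
# `decoder` reconstructs x; Gaussian likelihood up to additive constants).
import torch
import torch.nn.functional as F

def elbo_score(encoder, decoder, x):
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    recon = decoder(z)
    # Reconstruction log-likelihood (unit-variance Gaussian, up to constants).
    recon_ll = -F.mse_loss(recon, x, reduction="none").flatten(1).sum(1)
    # KL divergence between the approximate posterior and the standard normal.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1)
    return recon_ll - kl        # higher = more "normal" under the model

# Anomalies: points whose score falls below a threshold chosen per use case,
# e.g. a low percentile of scores on held-out normal data:
#   scores = elbo_score(encoder, decoder, batch)
#   is_anomaly = scores < threshold
```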
S9686 - Applied Deep Learning Research at NVIDIA TBD Talk Bryan Catanzaro - VP of Applied Deep Learning Research, NVIDIA
S9805 - Applying Deep Learning NLP Techniques to the Cybersecurity Challenge We will discuss a compelling application of NLP techniques with multi-layer RNNs for cyber event log anomaly detection. We'll compare the effectiveness of standard and bidirectional RNN language models for detecting malicious activity within network log data. We'll also describe how using our technique on the Los Alamos National Laboratory cybersecurity database provides performance that is superior to two popular algorithms, Isolation Forest and Principal Components Analysis. Talk Robert Jasper - Manager, Pacific Northwest National Lab
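As a hedged sketch of the general technique (not the speakers' implementation): score each log line by its negative log-likelihood under a trained RNN language model and flag improbable lines. Here `lm` is an assumed model mapping token ids to next-token logits:

```python
# Language-model anomaly scoring sketch (illustrative; `lm` is hypothetical).
import torch
import torch.nn.functional as F

def line_nll(lm, tokens):
    """tokens: LongTensor (1, T) of event-token ids for one log line."""
    logits = lm(tokens[:, :-1])                    # predict each next token
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )                                              # mean NLL over the line

# Flag lines whose NLL exceeds a threshold fit on benign traffic:
#   is_suspicious = line_nll(lm, tokens) > threshold
```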
Remove From Schedule
Add To Schedule
Are you sure you would like to delete this personal time?
Edit My Schedule
Edit Personal Time
This session is full. Would you like to be added to the waiting list?
Would you like to remove "{0}" from your schedule?
Would you like to add "{0}" to your schedule?
Sorry, this session is full.
Waitlist Available
Sorry, this session and its waiting list are completely full.
Sessions Available
Adding this multi-day session automatically enrolls you for all times shown below.
Removing this multi-day session automatically removes you from all times shown below.
Adding this multi-day session automatically enrolls you for all session times for this session.
Removing this multi-day session automatically removes you from all session times for this session.
Click to view details
Interests
Hide Interests
Search Sessions
Export Schedule
There is a scheduling conflict. You cannot add this session to your schedule because you are participating in another session at this time.
Schedule Conflict
An error occurred while processing this request.
Adding this item creates a conflict with another session on your schedule.
Remove from Waiting List
Add to Waiting List
Removing this will remove you from the waiting list for all session times for this session.
Adding this will add you to the waiting list for all session times for this session.
You have nothing scheduled
Tap below to see a list of sessions and activities that are available to add to your schedule this week
Choose from the list of sessions to the left to add to your schedule for the day
Add a Session

Registration Complete!

So we can prepare the best experience for you, what can we do to help you?
Click here to skip
All Tab
The All tab
Attendee Tab
The Attendee Tab
Tailored Experiences
The Tailored Experiences Tab
Session Tab
The Session Tab
Speaker Tab
The Speaker Tab
Exhibitor Tab
The Exhibitor Tab
Files Tab
The Files Tab
Search Box
The search box
Filters
Filters
Dashboard
Dashboard Link
My Schedule
My Schedule Link
Recommendations
Recommendations Link
Interests
Interests Link
Meetings
Meetings Link
Agenda
Agenda Link
My Account
My Account Link
Catalog tips
Get More Results