Title: Architectures for Deep Neural Networks
Speaker:
Stephen Keckler, NVIDIA Research
Date/Time:
Monday, December 5th @ 4:15pm
Location:
233 Phillips Hall (Phillips Lounge)
Host:
Christopher Batten

Abstract: Deep Neural Networks (DNNs) have emerged as key algorithms for a wide range of difficult applications, including image recognition, speech processing, and autonomous vehicles. Today’s DNNs are often trained on farms of GPUs and then deployed in systems ranging from mobile devices to servers. Current trends in DNN architectures are toward deeper and more complex networks, placing more stress on both training and inference. This talk will discuss the challenges associated with emerging DNNs and describe recent work that (1) enables larger and more complex networks to be trained on a single GPU with limited memory capacity; and (2) reduces the memory and computation footprints of DNNs at inference time, enabling them to run with vastly improved energy efficiency.

Bio: Dr. Stephen W. Keckler is the Vice President of Architecture Research at NVIDIA and an Adjunct Professor of Computer Science at the University of Texas at Austin, where he served on the faculty from 1998 to 2012. His research interests include parallel computer architectures, high-performance computing, energy-efficient architectures, and embedded computing. Dr. Keckler is a Fellow of the ACM, a Fellow of the IEEE, an Alfred P. Sloan Research Fellow, and a recipient of the NSF CAREER Award, the ACM Grace Murray Hopper Award, the President’s Associates Teaching Excellence Award at UT-Austin, and the Edith and Peter O’Donnell Award for Engineering. He earned a B.S. in Electrical Engineering from Stanford University and M.S. and Ph.D. degrees in Computer Science from the Massachusetts Institute of Technology.