Sergey Levine – Data-Driven Reinforcement Learning: Deriving Common Sense from Past Experience



Speaker: Sergey Levine from UC Berkeley
Abstract:
Reinforcement learning affords autonomous agents, such as robots, the ability to acquire behavioral skills through their own experience. However, a central challenge for machine learning systems deployed in real-world settings is generalization, and generalization has received comparatively little attention in recent reinforcement learning research, with many methods focusing on optimization performance and relying on hand-designed simulators or closed-world domains such as games. In domains where generalization has been studied successfully — computer vision, natural language processing, speech recognition, etc. — good generalization invariably stems from access to large, diverse, and representative datasets. Put another way, data drives generalization. Can we transplant this lesson into the world of reinforcement learning? What does a data-driven reinforcement learning system look like, and what types of algorithmic and conceptual challenges must be overcome to devise such a system? In this talk, I will discuss how data-driven methods that utilize past experience can enable wider generalization for reinforcement learning agents, particularly as applied to challenging problems in robotic manipulation and navigation in open-world environments. I will show how robotic systems trained on large and diverse datasets can attain state-of-the-art results for robotic grasping, acquire a kind of “common sense” that allows them to generalize to new situations, learn flexible skills that allow users to set new goals at test time, and even enable a ground robot to navigate sidewalks in the city of Berkeley with an entirely end-to-end learned model.

Bio:
Sergey Levine received BS and MS degrees in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more. His work has been featured in many popular press outlets, including the New York Times, the BBC, MIT Technology Review, and Bloomberg Business.
