Learning and leveraging disentangled representations for RL

Image credit: Unsplash


Deep Reinforcement Learning has shown great success in tackling increasingly complex tasks, but it still lacks the kind of general, modular reasoning that humans and animals readily deploy when solving new tasks. A key challenge in overcoming this limitation is learning better state representations for our RL algorithms, to make them more general, useful, interpretable, and able to reason about the statistics of the world. I will cover advances in unsupervised representation learning that our team has published over the years, including Beta-VAE, SCAN and more recent works. I will then show how one can leverage such representations for RL, and discuss the challenges that arise in doing so.
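For readers unfamiliar with Beta-VAE: its objective is the standard VAE evidence lower bound with the KL term scaled by a factor β > 1, which pressures the latent posterior toward the isotropic prior and encourages disentangled latent dimensions. A minimal NumPy sketch of that loss (function names are illustrative, not from any published codebase):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL divergence between a diagonal Gaussian N(mu, diag(exp(logvar)))
    and the standard normal prior N(0, I), summed over latent dimensions."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    """Beta-VAE objective: reconstruction term plus a beta-weighted KL
    regularizer. beta = 1 recovers the ordinary VAE; beta > 1 trades
    reconstruction quality for more disentangled representations."""
    return recon_loss + beta * kl_to_standard_normal(mu, logvar)
```

With `mu = 0` and `logvar = 0` the posterior equals the prior, the KL term vanishes, and the loss reduces to the reconstruction term alone.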

Oct 5, 2018 8:00 PM
Loic Matthey
Staff Research Scientist in Machine Learning

Ex-neuroscientist working on Artificial General Intelligence at Google DeepMind. I work on unsupervised learning, structured generative models, concepts, and how to make AI actually generalize.