Deep Reinforcement Learning has shown great success in tackling increasingly complex tasks, but it still lacks the kind of general and modular reasoning that humans and animals readily deploy when solving new tasks. A key step toward overcoming this limitation is learning better state representations for our RL algorithms: representations that are more general, useful, and interpretable, and that allow agents to reason about the statistics of the world. I will cover advances in unsupervised representation learning that our team has published over the years, including Beta-VAE, SCAN and more recent work. I will then show how such representations can be leveraged for RL, and discuss the challenges that arise in doing so.