Loic Matthey

Staff Research Scientist in Machine Learning

Google DeepMind


I am an ex-neuroscientist now working on Artificial General Intelligence.

I work as a Staff Research Scientist at Google DeepMind, focusing on concept understanding, structured representation learning, and RL.

I spent a long while working on unsupervised structure learning and generative models, leveraging them to make predictions and trying to make model-based deep reinforcement learning work like it should.

But like everybody, more recently I am mostly assessing massive vision-language models for their common-sense abilities, like an ex-neuroscientist would 😃

  • Machine Learning
  • Unsupervised Structure Learning
  • Deep Reinforcement Learning
  • Computational Neuroscience
  • PhD in Computational Neuroscience and Machine Learning

    UCL Gatsby Computational Neuroscience Unit

  • MSc in Computer Science / Biocomputing

École Polytechnique Fédérale de Lausanne (EPFL)


Python / Lua / C++ / Java / Scala / Matlab
(Py)Torch / Tensorflow / Jax
Scientific analysis
Dataset collection and post-processing (Apache Beam)
Large scale model training and evaluation
Model-based Reinforcement Learning (the cherry and the cake)
Technical infrastructure
Post-metal / hardcore / midwest emo / hyper-pop
Weird Coffee Person


Google DeepMind
Staff Research Scientist
October 2020 – Present London

Lead a variety of research efforts tackling several core AI problems.


  1. Model-based RL leveraging structured generative models and Transformer-based world models.
  2. Episodic learning of object abstractions.
  3. Video structured generative models and diffusion models.


  • Research & tech lead and management (~10 scientists/engineers)
  • Model building, training and optimization for large-scale distributed systems
  • Analysis and presentation to core stakeholders
  • Integration, testing and debugging
Google DeepMind
Senior Research Scientist
May 2018 – October 2020 London

Field-defining research on object-based/structured generative models, and how to leverage them to learn better autonomous agents (e.g. via graph neural networks).

  • Research lead and core contributor (~3 Scientists/Engineers)
  • Disentanglement research, environment development, benchmarks and advanced data collection.
  • Deep Reinforcement Learning research (model-free, model-based, planning)
Google DeepMind
Research Scientist
June 2014 – May 2018 London

Core research on concepts and generative models; co-author on the papers that started the disentangled representation learning sub-field.

  • Main co-author on β-VAE (4400 citations), Understanding Disentangling, SCAN, DARLA, MONet, IODINE, and many others.
  • Designed and released the dSprites dataset, among other core datasets used by the community.
R&D Engineer
July 2008 – July 2009 Switzerland
Research on swarm robotics with low-cost components and low-quality sensors. Robustness through redundancy and biologically inspired algorithms.

Recent Publications

(2023). Evaluating VLMs for Score-Based, Multi-Probe Annotation of 3D Objects. arXiv.


(2023). SODA: Bottleneck Diffusion Models for Representation Learning. arXiv.


(2021). SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition. NeurIPS 2021.


(2019). Spatial Broadcast Decoder: A simple architecture for learning disentangled representations in VAEs. ICLR 2019 Workshop on Learning from Limited Labeled Data.
