Loic Matthey

Publications
Evaluating VLMs for Score-Based, Multi-Probe Annotation of 3D Objects
Unlabeled 3D objects present an opportunity to leverage pretrained vision language models (VLMs) on a range of annotation tasks – …
Rishabh Kabra, Loic Matthey, Alexander Lerchner, Niloy J Mitra
PDF
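The score-based, multi-probe idea above is easy to sketch. A minimal Python illustration, assuming a hypothetical vlm_score(image, text) wrapper around a pretrained VLM; the renders, probe templates, and averaging below are placeholders, not the paper's exact setup:

def annotate(renders, candidate_labels, probes, vlm_score):
    """Score each candidate label by averaging a VLM's scores over
    several rendered views and several question phrasings (probes),
    then return the best-scoring label."""
    results = {}
    for label in candidate_labels:
        scores = [vlm_score(image, probe.format(label))
                  for image in renders
                  for probe in probes]
        results[label] = sum(scores) / len(scores)
    return max(results, key=results.get), results

# Hypothetical usage: probes are question templates filled per label.
# best, scores = annotate(renders, ["mug", "vase"],
#                         ["Is this a {}?", "A photo of a {}."], vlm_score)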
SODA: Bottleneck Diffusion Models for Representation Learning
We introduce SODA, a self-supervised diffusion model, designed for representation learning. The model incorporates an image encoder, …
Drew A. Hudson, Daniel Zoran, Mateusz Malinowski, Andrew K Lampinen, Andrew Jaegle, James L McClelland, Loic Matthey, Felix Hill, Alexander Lerchner
PDF
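The training pattern, reduced to its skeleton: encode one view into a compact latent, then condition a denoiser on that latent while it predicts the noise added to a related view. A toy PyTorch sketch with placeholder networks and a simplified noise schedule, not the paper's architecture:

import torch
import torch.nn as nn

# Toy stand-ins: the paper uses an image encoder and a denoising UNet;
# dimensions and layers here are placeholders.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(),
                        nn.Linear(128, 16))            # bottleneck latent z
denoiser = nn.Sequential(nn.Linear(32 * 32 + 16 + 1, 256), nn.ReLU(),
                         nn.Linear(256, 32 * 32))

def soda_step(view_a, view_b):
    """One self-supervised step: encode view A into a compact latent z,
    then train the denoiser to predict the noise added to a related
    view B, conditioned on z. Gradients through z shape the encoder."""
    z = encoder(view_a)
    t = torch.rand(view_b.shape[0], 1)                 # noise level
    noise = torch.randn_like(view_b.flatten(1))
    noisy = (1 - t) * view_b.flatten(1) + t * noise    # toy schedule
    pred = denoiser(torch.cat([noisy, z, t], dim=-1))  # conditioned on z
    return ((pred - noise) ** 2).mean()

loss = soda_step(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32))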
Spatial Broadcast Decoder: A simple architecture for learning disentangled representations in VAEs
We present a simple neural rendering architecture that helps variational autoencoders (VAEs) learn disentangled representations. …
Nicholas Watters, Loic Matthey, Christopher P Burgess, Alexander Lerchner
PDF · Cite
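The architecture is simple enough to state in full: tile the latent vector across a spatial grid, append fixed coordinate channels, and decode with stride-1 convolutions, so spatial position comes for free instead of from upsampling. A compact PyTorch sketch (layer widths here are illustrative, not the paper's):

import torch
import torch.nn as nn

class SpatialBroadcastDecoder(nn.Module):
    """Broadcast z over an H x W grid, concatenate x/y coordinate
    channels, then apply convolutions without any upsampling."""

    def __init__(self, latent_dim=10, out_channels=3, size=64):
        super().__init__()
        self.size = size
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                                torch.linspace(-1, 1, size), indexing="ij")
        self.register_buffer("coords", torch.stack([xs, ys]))  # (2, H, W)
        self.net = nn.Sequential(
            nn.Conv2d(latent_dim + 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, 3, padding=1))

    def forward(self, z):                              # z: (B, latent_dim)
        b = z.shape[0]
        tiled = z[:, :, None, None].expand(-1, -1, self.size, self.size)
        coords = self.coords[None].expand(b, -1, -1, -1)
        return self.net(torch.cat([tiled, coords], dim=1))

img = SpatialBroadcastDecoder()(torch.randn(4, 10))    # (4, 3, 64, 64)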
COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration
Data efficiency and robustness to task-irrelevant perturbations are long-standing challenges for deep reinforcement learning …
Loic Matthey, Nicholas Watters, Matko Bosnjak, Christopher P Burgess, Alexander Lerchner
PDF · Cite · Code · Twitter thread explainer
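The curiosity signal driving the exploration phase is the familiar one: reward the agent where its learned transition model is wrong. A schematic sketch with placeholder dimensions and networks (COBRA applies this on top of learned object-level scene representations, not raw states):

import torch
import torch.nn as nn

# Placeholder transition model over a 16-d state and 4-d action.
transition = nn.Sequential(nn.Linear(16 + 4, 64), nn.ReLU(),
                           nn.Linear(64, 16))

def curiosity_reward(state, action, next_state):
    """Intrinsic reward = squared prediction error of the transition
    model, so poorly modelled transitions attract the agent."""
    predicted = transition(torch.cat([state, action], dim=-1))
    return ((predicted - next_state) ** 2).mean(dim=-1)

r = curiosity_reward(torch.randn(8, 16), torch.randn(8, 4),
                     torch.randn(8, 16))               # (8,) rewards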
Multi-Object Representation Learning with Iterative Variational Inference
Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic …
Klaus Greff, Raphael Lopez Kaufmann, Rishabh Kabra, Nicholas Watters, Christopher P Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, Alexander Lerchner
PDF · Cite
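The inference pattern, stripped of the model's details: keep posterior parameters for K latent slots and repeatedly update them with a shared refinement network that sees how well the current slots explain the image. The toy sketch below drops the paper's masked mixture decoder and gradient-based refinement inputs, keeping only the iterative loop:

import torch
import torch.nn as nn

class ToyIterativeInference(nn.Module):
    """Schematic iterative amortized inference with K latent slots."""

    def __init__(self, x_dim=32, z_dim=8, K=4, steps=3):
        super().__init__()
        self.K, self.steps, self.z_dim = K, steps, z_dim
        self.decoder = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                     nn.Linear(64, x_dim))
        # Refinement net sees the residual plus current posterior params.
        self.refine = nn.GRUCell(x_dim + 2 * z_dim, 2 * z_dim)

    def forward(self, x):
        b = x.shape[0]
        params = torch.zeros(b * self.K, 2 * self.z_dim)  # (mu, logvar)
        recon = torch.zeros_like(x)
        for _ in range(self.steps):
            mu, logvar = params.chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            # Toy combination of slots; the real model uses softmax masks.
            recon = self.decoder(z).view(b, self.K, -1).mean(dim=1)
            residual = (x - recon).repeat_interleave(self.K, dim=0)
            params = self.refine(torch.cat([residual, params], dim=-1),
                                 params)
        return recon, params

recon, params = ToyIterativeInference()(torch.randn(2, 32))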
MONet: Unsupervised Scene Decomposition and Representation
The ability to decompose scenes in terms of abstract building blocks is crucial for general intelligence. Where those basic building …
Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matthew Botvinick, Alexander Lerchner
PDF · Cite
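The decomposition step can be sketched precisely: attention proceeds by stick-breaking, with each step claiming part of the still-unexplained "scope" of the image and the final slot absorbing the remainder, so masks sum to one per pixel. In the full model an attention network produces the logits and a component VAE encodes each masked region; the sketch below shows only the scope recursion:

import torch
import torch.nn.functional as F

def stick_breaking_masks(attention_logits):
    """attention_logits: (K-1, H, W), one map per attention step.
    Returns (K, H, W) masks that sum to 1 over the slot dimension."""
    log_scope = torch.zeros_like(attention_logits[0])
    masks = []
    for logits in attention_logits:
        masks.append((log_scope + F.logsigmoid(logits)).exp())
        log_scope = log_scope + F.logsigmoid(-logits)  # shrink the scope
    masks.append(log_scope.exp())   # last slot takes whatever is left
    return torch.stack(masks)

masks = stick_breaking_masks(torch.randn(4, 8, 8))
print(masks.sum(dim=0))             # all ones, up to float error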
Understanding disentangling in β-VAE
We present new intuitions and theoretical assessments of the emergence of disentangled representation in variational autoencoders. …
Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nicholas Watters, Guillaume Desjardins, Alexander Lerchner
PDF · Cite
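For reference, the objective under discussion, in the capacity-controlled form the paper proposes; the standard β-VAE objective is recovered by replacing the γ|KL − C| penalty with β times the KL term, while here the target capacity C is increased gradually during training:

\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right]
  \;-\; \gamma \left| D_{\mathrm{KL}}\!\left(q_\phi(z|x) \,\|\, p(z)\right) - C \right|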
SCAN: Learning Abstract Hierarchical Compositional Visual Concepts
This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning recombinable concepts in the visual domain. We first use the previously published beta-VAE (Higgins et al., 2017a) architecture to learn a disentangled representation of the latent structure of the visual world, before training SCAN to extract abstract concepts grounded in such disentangled visual primitives through fast symbol association.
Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P Burgess, Matthew Botvinick, Demis Hassabis, Alexander Lerchner
PDF · Cite · DeepMind Blog
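As a sketch of the association objective's shape (weights and exact formulation are the paper's; the notable choice is the "reversed" KL direction in the last term, which keeps a concept's posterior q(z_y|y) broad over visual factors the concept leaves unspecified):

\mathcal{L}_{\mathrm{SCAN}} =
  \mathbb{E}_{q_\phi(z_y|y)}\!\left[\log p_\theta(y|z_y)\right]
  - \beta\, D_{\mathrm{KL}}\!\left(q_\phi(z_y|y) \,\|\, p(z)\right)
  - \lambda\, D_{\mathrm{KL}}\!\left(q(z_x|x) \,\|\, q_\phi(z_y|y)\right)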