Scalable Instructable Multiworld Agent (SIMA) - A generalist AI agent for 3D virtual environments

Game showcase - courtesy of Google DeepMind


Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI. Accomplishing this goal requires learning to ground language in perception and embodied actions in order to accomplish complex tasks. The Scalable, Instructable, Multiworld Agent (SIMA) project tackles this by training agents to follow free-form instructions across a diverse range of virtual 3D environments, including curated research environments as well as open-ended, commercial video games. Our goal is to develop an instructable agent that can accomplish anything a human can do in any simulated 3D environment. Our approach focuses on language-driven generality while imposing minimal assumptions. Our agents interact with environments in real time using a generic, human-like interface: the inputs are image observations and language instructions, and the outputs are keyboard-and-mouse actions. This general approach is challenging, but it allows agents to ground language across many visually complex and semantically rich environments while also allowing us to readily run agents in new environments. In this paper we describe our motivation and goal, the initial progress we have made, and promising preliminary results on several diverse research environments and a variety of commercial video games.
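The human-like interface described above — pixels and a language instruction in, keyboard-and-mouse actions out, one step per frame — can be sketched as a minimal agent API. This is an illustrative sketch only; the type and class names (`Observation`, `Action`, `KeyboardMouseAgent`) are hypothetical and do not reflect SIMA's actual implementation, which is not public.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical types for illustration; not SIMA's real interface.

@dataclass
class Observation:
    image: List[List[int]]        # raw pixel frame (H x W), stand-in for a screenshot
    instruction: str              # free-form language instruction, e.g. "open the door"

@dataclass
class Action:
    keys: List[str] = field(default_factory=list)   # keyboard keys pressed this step
    mouse_delta: Tuple[int, int] = (0, 0)           # relative mouse movement (dx, dy)

class KeyboardMouseAgent:
    """Generic human-like agent loop: at each frame, map the current image
    observation plus the standing instruction to a keyboard-and-mouse action.
    Because the interface assumes nothing environment-specific, the same
    agent can be run in new environments without modification."""

    def act(self, obs: Observation) -> Action:
        # Placeholder policy: a real agent would ground the instruction in
        # the image (e.g. via a learned vision-language policy); here we
        # simply emit a no-op action to show the contract.
        return Action(keys=[], mouse_delta=(0, 0))

agent = KeyboardMouseAgent()
action = agent.act(Observation(image=[[0]], instruction="open the door"))
print(action.keys, action.mouse_delta)  # → [] (0, 0)
```

The design point this interface makes concrete is that generality comes from the contract, not the environment: any game or research environment that renders frames and accepts keyboard-and-mouse input can host the same agent.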

Google DeepMind Blog
Loic Matthey
Staff Research Scientist in Machine Learning

Ex-neuroscientist working on Artificial General Intelligence at Google DeepMind. Unsupervised learning, structured generative models, concepts, and how to make AI actually generalize are what I do.