Selected Publications

Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively through time, enabling the incremental development of ever more complex knowledge and skills. The lack of consensus in evaluating continual learning algorithms, and the almost exclusive focus on forgetting, motivate us to propose a more comprehensive set of implementation-independent metrics accounting for several factors we believe have practical implications worth considering in the deployment of real AI systems that learn continually: accuracy or performance over time, backward and forward knowledge transfer, memory overhead, and computational efficiency. Drawing inspiration from standard Multi-Attribute Value Theory (MAVT), we further propose to fuse these metrics into a single score for ranking purposes, and we evaluate our proposal with five continual learning strategies on the iCIFAR-100 continual learning benchmark.
In arXiv, 2015
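The MAVT-style fusion described in the abstract above can be sketched as a weighted sum of normalized metrics. The metric names, weights, and values below are illustrative assumptions for a toy ranking, not the paper's exact formulation:

```python
# Hedged sketch: MAVT-style fusion of continual-learning metrics into one score.
# Metric names, weights, and values are illustrative; the paper's exact criteria differ.

def cl_score(metrics, weights):
    """Weighted sum of metrics, each assumed normalized to [0, 1] with 1 best
    (costs such as memory overhead should be inverted before scoring)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * metrics[k] for k in weights)

# Example: rank two hypothetical strategies on iCIFAR-100-style metrics.
weights = {"accuracy": 0.4, "backward_transfer": 0.2,
           "forward_transfer": 0.2, "memory": 0.1, "compute": 0.1}
ewc   = {"accuracy": 0.62, "backward_transfer": 0.55,
         "forward_transfer": 0.50, "memory": 0.80, "compute": 0.70}
naive = {"accuracy": 0.40, "backward_transfer": 0.30,
         "forward_transfer": 0.50, "memory": 1.00, "compute": 0.95}
print(cl_score(ewc, weights))    # higher score ranks first
print(cl_score(naive, weights))
```

A single score makes strategies directly comparable, at the cost of fixing a weighting that encodes deployment priorities.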

State representation learning aims at learning compact representations from raw observations in robotics and control applications. Approaches used for this objective include auto-encoders, learning forward models, inverse dynamics, or learning using generic priors on state characteristics. However, the diversity in applications and methods makes the field lack standard evaluation datasets, metrics and tasks. This paper provides a set of environments, data generators, robotic control tasks, metrics and tools to facilitate iterative state representation learning and evaluation in reinforcement learning settings.
In arXiv, 2013

Representation learning algorithms are designed to learn abstract features that characterize data. State representation learning (SRL) focuses on a particular kind of representation learning where learned features are low-dimensional, evolve through time, and are influenced by the actions of an agent. The representation is learned to capture the variation in the environment generated by the agent’s actions; this kind of representation is particularly suitable for robotics and control scenarios. In particular, the low dimensionality of the representation helps to overcome the curse of dimensionality, provides easier interpretation and utilization by humans, and can help improve performance and speed in policy learning algorithms such as reinforcement learning. This survey aims at covering the state of the art in state representation learning over the most recent years. It reviews different SRL methods that involve interaction with the environment, their implementations and their applications in robotics control tasks (simulated or real). In particular, it highlights how generic learning objectives are differently exploited in the reviewed algorithms. Finally, it discusses evaluation methods to assess the representation learned and summarizes current and future lines of research.
In Neural Networks, 2013
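One family of generic priors discussed in the SRL literature is temporal coherence (or "slowness"): consecutive observations should map to nearby points in state space. A minimal sketch of such a prior loss, using a toy linear encoder that is an assumption of this example rather than any method from the survey:

```python
import numpy as np

# Hedged sketch of a temporal-coherence ("slowness") prior on learned states:
# consecutive observations should map to nearby points in state space.
# The encoder here is a toy fixed linear projection; real SRL methods learn it.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))  # toy encoder: 8-dim observation -> 3-dim state

def encode(obs):
    return W @ obs

def temporal_coherence_loss(obs_t, obs_t1):
    """Penalize large jumps between the states of consecutive time steps."""
    diff = encode(obs_t1) - encode(obs_t)
    return float(np.sum(diff ** 2))

obs_t = rng.normal(size=8)
obs_t1 = obs_t + 0.01 * rng.normal(size=8)  # nearly identical next observation
print(temporal_coherence_loss(obs_t, obs_t1))  # small for similar observations
```

In practice such a loss is minimized jointly with others (e.g. reconstruction or forward-model losses) while training the encoder, so that slowness alone cannot collapse the representation.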

Recent & Upcoming Talks

State Representation Learning: An Overview. Talk at the INRIA Flowers Deep RL workshop, April 4, 2018.

2-minute spotlight on the DREAM Project.

Recent Posts

What I learned at the PRAIRIE PAISS Summer School, Grenoble, July 2018.

Intrinsic Motivation and Open-Ended Learning (IMOL): learnings from the 3rd IMOL Workshop, Rome, 4-6 October 2017.

S-RL Toolbox

S-RL Toolbox: Reinforcement Learning (RL) and State Representation Learning (SRL) for Robotics

Continual AI

Continual AI is an open community of researchers and enthusiasts working on continual/lifelong learning and AI.

DREAM EU H2020 Project

DREAM will enable robots to cope with the complexity of being an information-processing entity in domains that are open-ended in both space and time. It paves the way for a new generation of robots whose existence and purpose go far beyond the mere execution of dull tasks.


  • IN104: Computer Science Project (Projet Informatique)

  • IA301 (Telecom ParisTech): Logics and Symbolic Artificial Intelligence (Logique et IA symbolique)

Teaching assistant at ENSTA ParisTech for:

  • ROB313: Computer vision for autonomous systems (Perception pour les Systèmes Autonomes)