DIC Seminar: "Active exploration in reinforcement learning: From neuroscience to robotics and vice versa" by Mehdi Khamassi
Seminar held as part of the Doctorat en informatique cognitive (PhD program in cognitive computer science), in partnership with the CRIA research centre and the ISC
Thursday, September 22, 2022
Videoconference (Zoom): https://uqam.zoom.us/j/88481835073
One of the key ingredients of learning for autonomous agents in volatile environments is the exploration-exploitation trade-off: finding the right balance between exploiting previously acquired knowledge and exploring alternatives, and adapting this balance on the fly when the environment changes. Throughout the presentation, I will use an illustrative example from the human–robot interaction (HRI) domain: among relevant signals, non-verbal cues such as the human's gaze can provide the robot with important information about the human's current engagement in the task, and about whether the robot should continue its current behavior. Various solutions to this trade-off have been proposed in the reinforcement learning literature, often inspired by developmental psychology (the study of how human infants explore their surrounding world). Some of these mechanisms have neurobiological counterparts in the human brain: dynamic regulation of the exploration rate as a function of volatility; information- (uncertainty-)based solutions; and progress-based solutions. I will also illustrate existing bridges with Karl Friston's active inference framework, which he will present later in this seminar series.
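One of the mechanisms mentioned in the abstract, dynamically regulating the exploration rate as a function of volatility, can be sketched in a few lines. The following is a minimal illustrative sketch, not Khamassi's actual model: it assumes a tabular bandit learner with softmax (Boltzmann) action selection, where a running average of unsigned reward prediction errors serves as a crude volatility signal that flattens the softmax (more exploration) when the environment seems to have changed. The class and parameter names are hypothetical.

```python
import math
import random

def softmax_action(q_values, beta):
    """Softmax (Boltzmann) action selection; higher beta means more exploitation."""
    exps = [math.exp(beta * q) for q in q_values]
    total = sum(exps)
    r = random.random()
    cum = 0.0
    for action, e in enumerate(exps):
        cum += e / total
        if r <= cum:
            return action
    return len(exps) - 1  # guard against floating-point rounding

class VolatilityAdaptiveAgent:
    """Illustrative bandit learner whose inverse temperature shrinks
    (i.e., exploration increases) when recent reward prediction errors
    are large, signalling a possible change in the environment.
    The specific update rules below are assumptions for the sketch."""

    def __init__(self, n_actions, alpha=0.1, beta_max=10.0):
        self.q = [0.0] * n_actions   # action-value estimates
        self.alpha = alpha           # learning rate
        self.beta_max = beta_max     # exploitation ceiling
        self.volatility = 0.0        # running average of |prediction error|

    def act(self):
        # High volatility -> low beta -> flatter softmax -> more exploration.
        beta = self.beta_max / (1.0 + 5.0 * self.volatility)
        return softmax_action(self.q, beta)

    def learn(self, action, reward):
        delta = reward - self.q[action]       # reward prediction error
        self.q[action] += self.alpha * delta
        # Track volatility as a slow average of unsigned prediction errors.
        self.volatility += 0.1 * (abs(delta) - self.volatility)
```

In a stable environment, prediction errors shrink, volatility decays, and the agent converges toward near-greedy exploitation; after an abrupt change, errors spike, volatility rises, and the softmax flattens, restoring exploration. Uncertainty- and progress-based solutions mentioned in the abstract would replace the volatility signal with per-action uncertainty or learning-progress estimates.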
Mehdi Khamassi is a CNRS research director at the Institute of Intelligent Systems and Robotics (ISIR), Sorbonne Université, Paris. His background is in computer science, cognitive science, and cognitive neuroscience. He is co-director of studies of the CogMaster program at Ecole Normale Supérieure (PSL) / EHESS / University of Paris and an editor of several scientific journals, including Intellectica, Frontiers in Neurorobotics, Frontiers in Decision Neuroscience, ReScience X, and Neurons, Behavior, Data analysis and Theory. His main research topics include decision-making and reinforcement learning in robots and humans, and the role of social and non-social rewards in learning.