Robot Cognition Laboratory
Peter Ford Dominey PhD, CNRS Research Director
DR1
INSERM - U1093 Cognition, Action, and
Sensorimotor Plasticity
Université de Bourgogne
Faculté des Sciences du Sport (UFR Staps)
BP 27877
21078 Dijon, France
see also:
http://u1093.u-bourgogne.fr/en/
https://sites.google.com/site/pfdominey/
https://www.facebook.com/RobotCognitionLaboratory/
https://scholar.google.fr/citations?user=plk1bUYAAAAJ&hl=en
History:
The RCL is a joint CNRS/INSERM endeavor whose objective is to understand human cognition, and to implement and demonstrate this understanding in humanoid robots. The RCL was initially created in 2003 while I was a CNRS researcher at the Institut des Sciences Cognitives, CNRS UPR 9075.
Today the RCL is proud to be part of INSERM U1093 CAPS, in the prestigious Marey Building.
Researchers associated with the RCL:
Jocelyne Ventre-Dominey, INSERM CR1; Carol Madden-Lombard, CNRS CR1; Clement Delgrange, PhD student; David Farizon, PhD student; Nicolas Lair, PhD student
First Robot Activity in Marey Center in Dijon in collaboration with France Mourey and Celia Ruffino
Current Funding and Projects:
CIFRE – Jarvis – an intelligent digital assistant – with Cloud Temple
Robot Memory Assistant – The introduction of an autobiographical memory (from our work on the EU project What You Say Is What You Did) makes a big difference, as sketched below.
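As a rough illustration only (not the WYSIWYD or Jarvis architecture), an autobiographical memory can be seen as a time-stamped episode log that the robot can later query and recount. All names in this Python sketch are hypothetical:

from datetime import datetime

class AutobiographicalMemory:
    """Toy episodic store: the robot logs what it does and can recount it."""

    def __init__(self):
        self.episodes = []  # (timestamp, action, object) triples

    def record(self, action, obj):
        self.episodes.append((datetime.now(), action, obj))

    def recount(self):
        """Answer 'what did you do?' from the stored episodes, in order."""
        return [f"I {action} the {obj}" for _, action, obj in self.episodes]

memory = AutobiographicalMemory()
memory.record("grasped", "toy box")
memory.record("gave", "toy box")
print("; ".join(memory.recount()))  # -> "I grasped the toy box; I gave the toy box"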
Past Funding:
We gratefully acknowledge funding from EU FP7 projects CHRIS, ORGANIC, EFAA and WYSIWYD; ANR projects Amorces and Comprendre; and the ANR-NSF (Cooperative Research in Computational Neuroscience) project Spaquence.
EFAA - Experimental Functional Android Assistant
CHRIS - Cooperative Human Robot Interaction Systems
In cooperation with the Bristol Robotics Laboratory (Chris Melhuish, Director and FP7 Project Leader), CNRS LAAS Toulouse, the Max Planck Institute for Evolutionary Anthropology, the Italian Institute of Technology and our group, we investigate the developmental foundations of human cooperation, and implement these in humanoid robots for safe human-robot cooperation.
ORGANIC
Current speech recognition technology is based on mathematical-statistical models of language. Although these models have become extremely refined over the last decades, progress in automated speech recognition has become very slow, and human-level speech recognition seems unreachable. The ORGANIC project ventures on an altogether different route toward automated speech recognition: not starting from statistical models of language, but from models of biological neural information processing – from neurodynamical models. The overall approach is guided by the paradigm of Reservoir Computing, a biologically inspired perspective on how arbitrary computations can be learnt and performed in complex artificial neural networks. We (INSERM) contribute our significant background in recurrent neural network computation, including our founding work in this area. Two important papers on reservoir computing came from this work:
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004967
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0052946
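For readers unfamiliar with the paradigm, the following minimal Python/NumPy sketch shows the reservoir computing (echo state network) idea: a fixed random recurrent network provides a rich nonlinear state, and only a linear readout is trained. The sizes, scalings and toy task are illustrative, not those of the ORGANIC models:

import numpy as np

rng = np.random.default_rng(0)

# Reservoir: a fixed random recurrent network; only the readout is trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with a scalar input sequence; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave from the current one.
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
# The linear readout is trained by ridge regression -- the only learning step.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))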
AMORCES
Algorithmes et Modèles pour un Robot Collaboratif Eloquent et Social
In cooperation with CNRS LAAS Toulouse (Rachid Alami, ANR Project Leader), the GIPSA-LAB Grenoble, and GREYC.
Comprendre
Comprendre will develop a neurophysiologically motivated hybrid model of comprehension (WP1), test this model using behavioral, fMRI and ERP methodologies (WP2), and finally develop neural network and robotic implementations of the resulting model (WP3).
Our Approach:
One of the long-term goals in the domain of human-robot interaction is
that robots will approach these interactions equipped with some of the same
fundamental cognitive capabilities that humans use. This will include the
ability to perceive and understand human action in terms of an ultimate goal,
and more generally to represent shared intentional plans in which the goal-directed actions of the robot and the human are interlaced into a shared representation of how to achieve a common goal in a cooperative manner. Our
approach to robot cognition makes a serious commitment to cognitive
neuroscience and child development, as sources of knowledge about how cognition
is and can be implemented.
One of the "canonical" scenarios that we use in human-robot
cooperation is the table building scenario. This requires cooperation,
and manipulating the four legs, one after another, give's the robot a chance to
learn during the course of the task. We have
implemented successively sophisticated cooperation capabilities
with the Lynx 6, and the HRP-2 and iCub Humanoids.
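Purely as an illustration (not our actual architecture), such a shared intentional plan can be represented as an ordered sequence of steps in which the two agents' goal-directed actions are interlaced, with role reversal letting the robot take either part. All names in this Python sketch are hypothetical:

from dataclasses import dataclass

@dataclass
class Step:
    agent: str   # "human" or "robot"
    action: str  # hypothetical primitive, e.g. "hold" or "attach"
    target: str  # object the action applies to

# The table-building plan: for each of the four legs, the human holds
# the table while the robot attaches the leg (actions are interlaced).
table_plan = []
for i in range(1, 5):
    table_plan.append(Step("human", "hold", f"leg{i}"))
    table_plan.append(Step("robot", "attach", f"leg{i}"))

def swap_roles(plan):
    """Role reversal: the robot can take either role in a learned shared plan."""
    flip = {"human": "robot", "robot": "human"}
    return [Step(flip[s.agent], s.action, s.target) for s in plan]

print(table_plan[0])              # Step(agent='human', action='hold', target='leg1')
print(swap_roles(table_plan)[0])  # Step(agent='robot', action='hold', target='leg1')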
Work with the iCub
We have a significant collaboration with the Italian Institute of Technology, creators of the iCub.
HRP2 Humanoid Robot – JRL Project
As part of the JRL, we were pioneers on spoken language programming
of the HRP2:
Spoken Language programming of the HRP2 in a cooperative construction
task
Peter DOMINEY, Anthony MALLET, Eiichi YOSHIDA
The paper, accepted at the 2007 International Conference on Robotics and Automation (ICRA), is available below.
Here we
demonstrate how spoken language can be used to pilot the HRP2 Humanoid during
human-robot interaction, and more importantly, how language can be used to
program the robot, i.e. to teach it new composite behaviours that can be used
in the future.
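The following toy interpreter sketches the principle: known primitive commands execute directly, while a "learn ... done" bracket stores a named composite behaviour that can later be invoked by name. The command words and primitives are hypothetical and stand in for the real HRP2 interface:

# Hypothetical command words and primitives -- not the actual HRP2 interface.
class SpokenLanguageProgrammer:
    def __init__(self, primitives):
        self.primitives = primitives  # name -> callable motor primitive
        self.macros = {}              # learned composite behaviours
        self.recording = None         # (name, steps) while teaching

    def hear(self, utterance):
        words = utterance.lower().split()
        if words[0] == "learn":            # "learn grasp" starts teaching
            self.recording = (words[1], [])
        elif words[0] == "done":           # "done" stores the new behaviour
            name, steps = self.recording
            self.macros[name] = steps
            self.recording = None
        elif words[0] in self.macros:      # replay a learned composite
            for step in self.macros[words[0]]:
                self.primitives[step]()
        else:                              # a primitive: execute (and record it)
            if self.recording is not None:
                self.recording[1].append(words[0])
            self.primitives[words[0]]()

robot = SpokenLanguageProgrammer({
    "reach": lambda: print("reaching to the leg"),
    "close": lambda: print("closing the gripper"),
})
for u in ["learn grasp", "reach", "close", "done", "grasp"]:
    robot.hear(u)   # teaching executes each step; "grasp" then replays both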
We presented the first spoken language programming of HRP2 at ICRA 2007. First we see how the robot is programmed, and then we see the learned program running on the HRP2. Before doing this with the HRP-2, we prototyped it with a small educational arm: the Lynx robot arm in a cooperative construction task.
We next introduced more elaborate perception and learning (Humanoids 2007). In spring 2007 we worked in the JRL with Anthony Mallet and Eiichi Yoshida to introduce vision and inverse kinematics so that the HRP2 can perform visually guided grasping of the table legs, thus increasing its behavioral autonomy and cooperation capability. Some videos, including stereo vision localization and inverse kinematics planning, can be seen here.
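The HRP2 itself requires full stereo localization and whole-body inverse kinematics; as a much simpler illustration of the inverse kinematics step, here is the closed-form solution for a planar two-link arm, with arbitrary link lengths:

import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics for a planar two-link arm (elbow-down):
    return the shoulder and elbow angles placing the end effector at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

# Stereo vision would first triangulate the leg's position; here we just
# plug in a reachable target in the arm's plane.
q1, q2 = two_link_ik(0.35, 0.20)
print(f"shoulder {math.degrees(q1):.1f} deg, elbow {math.degrees(q2):.1f} deg")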
Video Demonstrations:
Cooperative Activity and Helping in Human Robot Interaction (October 12, 2006)
The following video demonstrates results from 6 experiments on spoken language and vision processing for robot command, imitation, learning a simple game, helping the human when he is stuck, learning a complex game, and helping the human again.
Details of how it works are given in Dominey PF (2007).
Here are some relatively cool videos from the pre-humanoid days:
Lynx robot arm sequence learning
Lynx robot arm sentence-based commanding
Khepera robot sequence learning: Humanoids2005/KepSequence.wmv
Robot event describer: Humanoids2005/EventDescShort.wmv
Aibo sequence learning: Humanoids2005/AiboSequence4.wmv
Aibo spoken telecommanding
Selected Publications:
Dominey PF (2002) Conceptual Grounding in Simulation Studies of Language Acquisition, Evolution of Communication, 4(1): 57-85.
This is a prophetic paper that was out of print until recently!
Lallee S, Yoshida E, Mallet A, Nori F, Natale L, Metta G, Warneken F, Dominey PF (2010) Human-Robot Cooperation Based on Learning & Spoken Language Interaction, in From Motor Learning to Interaction Learning in Robots, Studies in Computational Intelligence, vol. 264, Springer-Verlag.
Lallee S, Warneken F, Dominey PF (2009) Learning to Collaborate by Observation, International Conference on Epigenetic Robotics.
Lallee S, Warneken F, Dominey PF (2009) Learning to Collaborate by Observation, IEEE Humanoids Workshop on developmental psychology contributions to cooperative human robot interaction.
Dominey PF, Metta G, Natale L, Nori F (2008) Anticipation and Initiative in Dialog and Behavior During Cooperative Human-Humanoid Interaction, IEEE-RAS International Conference on Humanoid Robots, December 1-3, Daejeon, Korea.
Dominey PF, Mallet A, Yoshida E (2007) Real-Time Cooperative Behavior Acquisition by a Humanoid Apprentice, Proceedings of IEEE/RAS 2007 International Conference on Humanoid Robotics, Pittsburgh, Pennsylvania.
Yoshida E, Mallet A, Lamiraux F, Kanoun O, Stasse O, Poirier M, Dominey PF, Laumond J-P, Yokoi K (2008) "Give me the Purple Ball" – he said to HRP-2 N.14, Proceedings of IEEE/RAS 2007 International Conference on Humanoid Robotics, Pittsburgh, Pennsylvania.
Dominey PF (2007) Sharing Intentional Plans for Imitation and Cooperation: Integrating Clues from Child Development and Neurophysiology into Robotics, Proceedings of the AISB 2007 Workshop on Imitation.
Dominey PF, Mallet A, Yoshida E (2007) Progress in Programming the HRP-2 Humanoid Using Spoken Language, Proceedings of ICRA 2007, Rome.
Dominey PF, Hoen M, Inui T (2006) A Neurolinguistic Model of Grammatical Construction Processing, In Press, Journal of Cognitive Neuroscience.
Dominey PF, Hoen M (2006) Structure Mapping and Semantic Integration in a Construction-Based Neurolinguistic Model of Sentence Processing, Cortex, 42(4):476-9
Hoen M, Pachot-Clouard M, Segebarth C, Dominey PF (2006) When Broca experiences the Janus syndrome: an er-fMRI study comparing sentence comprehension and cognitive sequence processing. Cortex, 42(4):605-23.
Boucher J-D, Dominey PF (2006) Perceptual-Motor Sequence Learning Via Human-Robot Interaction, in S. Nolfi et al. (Eds.): SAB 2006, LNAI 4095, pp. 224-235, Springer-Verlag Berlin Heidelberg.
Brunelliere A, Hoen M, Dominey PF. (2005) ERP correlates of lexical analysis: N280 reflects processing complexity rather than category or frequency effects. Neuroreport. Sep 8;16(13):1435-8.
Voegtlin T, Dominey PF. (2005) Linear recursive distributed representations. Neural Netw. Sep;18(7):878-95.
Dominey PF (2005a) From sensorimotor sequence to grammatical construction: Evidence from Simulation and Neurophysiology, Adaptive Behavior, 13(4): 347-362.
Dominey PF (2005b) Towards a Construction-Based Account of Shared Intentions in Social Cognition, comment on Tomasello et al., Understanding and sharing intentions: The origins of cultural cognition, Behavioral and Brain Sciences, in press.
Dominey PF (2005c) Emergence of Grammatical Constructions: Evidence from Simulation and Grounded Agent Experiments. Connection Science, 17(3-4) 289-306
Dominey PF, Boucher JD (2005) Learning To Talk About Events From Narrated Video in the Construction Grammar Framework, Artificial Intelligence, 167 (2005) 31–61
Dominey PF, Boucher JD (2005) Developmental stages of perception and language acquisition in a perceptually grounded robot, Cognitive Systems Research, 6(3), September 2005, 243-259.
Dominey PF (2005) Aspects of Descriptive, Referential and Information Structure in Phrasal Semantics: A Construction Based Model, Interaction Studies: Social Behavior and Communication in Biological and Artificial Systems, 6(2): 287-310.
Dominey PF (1995) Complex Sensory-Motor Sequence Learning Based on Recurrent State-Representation and Reinforcement Learning, Biological Cybernetics, 73: 265-274.
Complete List (click Here)
The results of this research are exploited through the publication of research articles (see above), and also through the development of potential "real world" applications. An exciting new venue for these real-world applications is the At Home league of RoboCup.
RoboCup At Home emphasises the aspects of robotics that will progressively allow robots to take their natural place at home, with people like you and me. Recently we collaborated with Alfredo Weitzenfeld in a French-Mexican project supported by the LAFMI that involved Human-Robot Interaction in the RoboCup At Home competition in Bremen, Germany, June 2006. The details, including some action-packed video, can be found at
For the 2007 competition in Atlanta, we introduced some more articulated Human-Robot Cooperation into the Open Competition, and qualified for the Finals. At the 2010 competition in Singapore we made it to phase 2, the first team to use the Nao.
Examples of some Current Research
Over the past several years we have made technical progress in providing spoken language, motor control and vision capabilities to robotic systems. This begins to provide the basis for progressively more elaborate human-robot interaction.
Some of our current research takes specific experimental protocols from studies of cognitive development to define behavior milestones for a perceptual-motor robotic system. Based on a set of previously established principles for defining the "innate" functions available to such a system, a cognitive architecture is developed that allows the robot to perform cooperative tasks at the level of an 18-month-old human child. At the interface of cognitive development and robotics, the results are interesting in that they (1) provide a concrete demonstration of how cognitive science can contribute to human-robot interaction fidelity, and (2) demonstrate how robots can be used to experiment with theories on the implementation of cognition in the developing human (see Dominey 2007 above).
In addition to looking at how relatively explicit communication can be used between humans and robots, we are also investigating how the robot can more autonomously discover the structure of its environment. This work is being carried out by Jean-David Boucher, a PhD student who is financed by the Rhone-Alpes Region, under the Presence Project of the Isle Cluster. Some results from the first year can be seen in Boucher J-D, Dominey PF (2006).
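A heavily simplified Python sketch of the underlying idea: the robot accumulates transition statistics over observed perceptual-motor events and uses them to anticipate what comes next. This toy first-order learner only stands in for the model actually described in Boucher & Dominey (2006):

from collections import Counter, defaultdict

class SequenceLearner:
    """Accumulate first-order transition statistics over observed events
    and anticipate the most likely next event."""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, event):
        if self.prev is not None:
            self.transitions[self.prev][event] += 1
        self.prev = event

    def predict(self, event):
        """Return the most frequently observed successor of `event`, if any."""
        successors = self.transitions[event]
        return successors.most_common(1)[0][0] if successors else None

learner = SequenceLearner()
for e in ["look", "reach", "grasp", "look", "reach", "grasp"]:
    learner.observe(e)
print(learner.predict("reach"))  # -> "grasp"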
Background on Research Projects and Collaborations:
The Robot Cognition Laboratory in Lyon benefits from a number of fruitful interactions in Europe and abroad, which are partially defined by the Robot Cognition Working Group. Here is some history of the Robot Cognition Working Group, including the Initial Project, which was supported by the French ACI Computational and Integrative Neuroscience Project.
Cooperation with the Ecole Centrale de Lyon
Over the last four years we have had several engineers from the Ecole Centrale de Lyon work over the summer, helping with some of the technical nuts and bolts of system integration, including Nicolas Dermine (2003), Marc Jeambrun, Bin Gao, Manuel Alvarez (2004), Julien Lestavel and Joseph Pairraud (2006). The 2007 ECL team was made up of Benoit Miniere, Oussama Abdoun and Frédéric Grandet. The project involved development of a spoken language based posture and behavior editor for our Lynx two-arm system, with autonomous sequence learning. This included development of a Webots simulator (Grandet), vision-based inverse kinematics for object grasping (Abdoun) and the spoken language based editor for postures and behavioral sequences (Miniere). The 2009 team worked on the iCub robot.