Robots for perceptual Interactions Dedicated to Daily Life Environment
Main issues and objectives
In order to best meet the user's needs, a robotic platform designed to assist a person in a daily-living context must be endowed with multiple functionalities enabling it to perceive both the person and their environment (living place, everyday objects, and activities). The main focuses of the RIDDLE project are robust navigation in indoor environments, dexterous object manipulation, and intuitive communication (speech, gestures, body language) with the user.
Many issues remain to be solved, such as perception, data fusion, and system integration. At the core of our project are multiple, uncertain perceptual analyses (of video and audio data) related to objects (keys, glasses, mobile phone…), space (the user's home), and multimodal communication with respect to contextual information. This collaborative work involves five partners: LAAS-CNRS (RAP and MINC teams), IRIT (SAMoVA team), CHU Toulouse (Gerontopôle), and two companies, Magellium and Aldebaran Robotics; it is summed up by the following figure. The services targeted by our application concern mild memory assistance and search/carry services using human-robot interaction, based on concepts learnt through multimodal communication with the human user: places, furniture, and household objects, i.e., their properties, storage locations, and temporal associations. The purpose of this cognitive robot is to learn environmental information with the user in the loop ("learning by interacting with a human user") in terms of interactions with a set of household objects.
LAAS and IRIT are working together on user intentionality, in order to make the robot aware of the user's will to start or stop proximal interaction. This involves user tracking, audio-environment analysis, and speech analysis. A first user study has been carried out with the CHU Toulouse team. The next steps will be to integrate knowledge from the spatial map built by Magellium and to carry out object-seeking scenarios involving proximal interaction, object detection (visual and via RFID tags), and activity detection. The developed functionalities will be integrated on the PR2 platform (LAAS) as well as on Nao or Romeo from Aldebaran Robotics.
- LAAS RAP Team (Project Coordinator)
- LAAS MINC Team
- Aldebaran Robotics
- Magellium
- CHU Toulouse – Gerontopôle
People involved in the SAMoVA team
- ANR CONTINT programme (2011 call)
- Start date: 1 September 2012
- End date: 30 November 2015