PEPS CNRS S2IH INS2I 2018

Robots with a Theory of Mind: from logical formalization to implementation (RoToM)



Project summary

The RoToM project aims to exploit methods and techniques from epistemic logic, mainly developed in artificial intelligence (AI), in order to endow a robot with a sophisticated Theory of Mind (ToM) that will improve its social skills in interactions with humans. ToM is the general capacity of ascribing mental attitudes and mental operations to others and of predicting the behavior of others on the basis of this attribution. The interaction scenarios studied in the project will mainly concern healthcare robotics.

Positioning of the project

The RoToM project focuses on human-robot interaction. We envisage concrete situations in which the robot plays the role of an assistant for the elderly and has to take care of the human’s well-being. In order to interact with the human efficiently and to be believable and persuasive, the robot must have a Theory of Mind (ToM), that is, the general capacity of ascribing mental attitudes and mental operations to others and of predicting the behavior of others on the basis of this attribution (see Goldman 2012 for a philosophical introduction to ToM). The following is an example of a scenario that we expect to study during the project.

Scenario. A robotic assistant called 3PO has to take care of an old person called Cath. Cath has to take a medicine at regular times and the aim of 3PO is to ensure that Cath follows the medical prescription. The problem is that Cath is reluctant to take the medicine, as she believes that it is not necessary for her survival and that it causes stomach problems as a side effect. In this situation, 3PO has to play a tutor role: he has to ensure that Cath takes the medicine, in her own interest. To this aim, 3PO needs to use his persuasive capabilities in order to induce Cath to take it. This requires a proper understanding of Cath’s mind by 3PO and, in particular, of the relationship between her cognitive attitudes and her actions (i.e., the way Cath’s cognitive attitudes such as her beliefs, desires and preferences determine her actions). More generally, 3PO must have a theory of Cath’s mind in order to understand what she does and predict what she will do.
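Purely for illustration, and under assumptions of our own (the attribute names and the decision rule below are hypothetical and not part of the project), a toy sketch of how 3PO might represent Cath's attitudes and use them to predict her behavior could look as follows.

# Minimal, hypothetical sketch of 3PO's model of Cath's attitudes.
# The attribute names and the decision rule are illustrative only.

cath_beliefs = {
    "medicine_necessary_for_survival": False,   # Cath believes it is not necessary
    "medicine_causes_stomach_problems": True,   # Cath believes in the side effect
}
cath_desires = {"avoid_stomach_problems": 1.0, "stay_healthy": 0.8}

def predict_action(beliefs, desires):
    """Toy prediction: Cath takes the medicine only if she believes it is
    necessary, or if her desire to stay healthy outweighs her fear of the
    side effects she believes the medicine causes."""
    if beliefs["medicine_necessary_for_survival"]:
        return "take_medicine"
    if beliefs["medicine_causes_stomach_problems"] and \
       desires["avoid_stomach_problems"] > desires["stay_healthy"]:
        return "refuse_medicine"
    return "take_medicine"

print(predict_action(cath_beliefs, cath_desires))  # -> "refuse_medicine"

On this toy model, 3PO can anticipate that Cath will refuse the medicine unless her beliefs or preferences are changed, which is exactly where persuasive capabilities come into play.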

ToM has been studied by developmental and clinical psychologists (Baron-Cohen et al., 1985; Wellman et al., 2001). For instance, in their seminal work, Baron-Cohen et al. suggested that children with autism have a ToM deficit, as they have difficulties in understanding the beliefs of others.

More recently, economists have started to consider the role of ToM in strategic contexts. For instance, according to existing theories of level-k thinking (Camerer et al., 2004; Mohlin, 2012), each agent in a population is identified with a type which corresponds to the agent’s depth of reasoning in strategic situations: the higher the agent’s depth of reasoning, the higher the level of sophistication of the agent’s reasoning and the greater the effort that the agent will make to reach a decision. For instance, a level-0 agent chooses stochastically and her choice is not derived from strategic thinking, a level-1 agent chooses by best-responding to the other agents’ choices after assuming that all agents are level-0 agents, a level-2 agent chooses by best-responding to the other agents’ choices after assuming that all agents are level-k agents with k < 2, and so on. More generally, a level-k agent (with k > 0) chooses by best-responding to the other agents’ choices, after assuming that the others have levels of thinking lower than k.

Models of ToM have been implemented in real robots to increase their social capabilities (see, e.g., Lemaignan et al., 2017; Milliez et al., 2014; Scassellati, 2002). Nonetheless, these robotic implementations have some limitations. First of all, they only allow the representation of high-order beliefs of depth at most 2, where the depth of a high-order belief is defined inductively as follows: (i) an agent’s belief has depth 1 if and only if its content is an objective formula that does not mention the beliefs of others (e.g., an agent i’s belief that it is a sunny day); (ii) an agent’s belief has depth n if and only if it is a belief about a belief of depth n-1 of another agent (e.g., an agent i’s belief of depth 2 that another agent j believes that it is a sunny day). For instance, in the previous example, one could not represent 3PO’s belief that Cath believes that 3PO believes that Cath has to take the medicine at regular times. The latter belief is part of the common ground of the conversation between 3PO and Cath (Stalnaker, 2002) and is a necessary prerequisite for the communication between them to be successful. Secondly, the models of ToM used for robotic implementations lack the level of generality that would make them applicable to a variety of different scenarios and situations.

Sophisticated models of ToM have been developed in artificial intelligence (AI) with the aid of epistemic logic tools and techniques. Epistemic logic is the variant of modal logic at the intersection of philosophy (Hintikka, 1962), AI (Fagin et al., 1995) and economics (Lismont & Mongin, 1994) that is devoted to the formal representation of agents’ epistemic attitudes, including belief and knowledge. The relevance of epistemic logic for AI lies in two aspects. First of all, it is a very general framework that can be used to model a variety of complex interactive situations between artificial agents and humans. Secondly, it supports modeling artificial agents’ high-order beliefs of any depth.
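To make the inductive definition of belief depth given above concrete, here is a small illustrative sketch (not taken from the cited works) that represents nested belief formulas as tuples of the form ("B", agent, content) and computes their depth; this encoding is an assumption made purely for illustration.

# Hypothetical sketch: nested belief formulas and their depth.
# A formula is either an atomic string (an objective fact) or a triple
# ("B", agent, content) read as "agent believes content".

def Bel(agent, content):
    return ("B", agent, content)

def depth(formula):
    """Depth of a formula, following the inductive definition:
    an objective formula has depth 0; Bel(i, phi) has depth 1 + depth(phi),
    so a belief whose content is objective has depth 1."""
    if isinstance(formula, str):          # objective fact, e.g. "sunny_day"
        return 0
    _, _, content = formula
    return 1 + depth(content)

# 3PO's belief that Cath believes that 3PO believes that
# Cath has to take the medicine at regular times: depth 3.
phi = Bel("3PO", Bel("Cath", Bel("3PO", "cath_takes_medicine_regularly")))
print(depth(phi))  # -> 3

A ToM framework restricted to depth 2 could not even represent the formula phi above, which is why support for beliefs of arbitrary depth matters for modeling the common ground of a conversation.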

The general aim of the RoToM project is to make social robotics and epistemic logic converge in the formalization of ToM and its implementation in real robots. Specifically, the project is intended to bring together logicians from «Institut de Recherche en Informatique de Toulouse (IRIT)» and roboticians from «Laboratoire d’analyse et d’architecture de systèmes (LAAS)» in order to provide a better understanding of the way epistemic logic can be used in practice for endowing real robots with general ToM capacities.

References
- S. Baron-Cohen, A. M. Leslie, U. Frith. Does the autistic child have a "theory of mind"?. Cognition, 21(1):37-46, 1985.
- C. F. Camerer, T.-H. Ho and J.-K. Chong. A Cognitive Hierarchy Model of Games. The Quarterly Journal of Economics, 119(3):861-898, 2004.
- R. Fagin, J. Y. Halpern, Y. Moses, and M. Vardi. Reasoning about knowledge. MIT Press, Cambridge, 1995.
- A. Goldman. Theory of mind. In E. Margolis, R. Samuels, and S. Stich, editors, Oxford Handbook of Philosophy and Cognitive Science. Oxford University Press, 2012.
- J. Hintikka. Knowledge and belief: an introduction to the logic of the two notions. Cornell University Press, 1962.
- S. Lemaignan, M. Warnier, E.A. Sisbot, A. Clodic, R. Alami. Artificial cognition for social human-robot interaction: an implementation. Artificial Intelligence, 247:45-69, 2017.
- L. Lismont and P. Mongin. On the logic of common belief and common knowledge. Theory and Decision, 37:75-106, 1994.
- G. Milliez, M. Warnier, A. Clodic, R. Alami. A framework for endowing an interactive robot with reasoning capabilities about perspective-taking and belief management. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, pages 1103-1109. IEEE Press, 2014.
- E. Mohlin. Evolution of theories of mind. Games and Economic Behavior, 75(1):299-318, 2012.
- B. Scassellati. Theory of mind for a humanoid robot. Autonomous Robots, 12(1):13-24, 2002.
- R. Stalnaker. Common ground. Linguistics and Philosophy, 25(5-6):701-721, 2002.
- H. M. Wellman, D. Cross, J. Watson. Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72(3):655-684, 2001.

Project challenges

The RoToM project aims to provide a number of solutions concerning the use of epistemic logic for modeling and implementing ToM in real robots, in order to endow them with sophisticated social skills. Examples of such skills are the capacity to infer the human’s mental attitudes, including beliefs, knowledge, preferences and intentions, from a given set of observables, including verbal messages, bodily movements, posture and gaze direction, as well as the capacity to represent the human’s high-order beliefs of any depth. Specifically, we plan to develop a number of decision procedures for epistemic logic with a semantics exploiting the notion of belief base. This semantics is more compact than the semantics of standard epistemic logic and is particularly well suited to robotic implementations.
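As a rough, purely illustrative sketch of the belief-base idea (not the project's actual formal semantics), the agent below is described by a finite base of explicitly stored propositional formulas and is taken to implicitly believe every formula entailed by that base; entailment is checked here by brute-force enumeration of valuations, and all names and formulas are hypothetical.

from itertools import product

# Hypothetical sketch of a belief-base semantics: an agent has a finite
# base of explicit beliefs and implicitly believes every formula entailed
# by that base. Formulas: atoms are strings; ("not", f), ("and", f, g),
# ("or", f, g).

def atoms(f, acc=None):
    acc = set() if acc is None else acc
    if isinstance(f, str):
        acc.add(f)
    else:
        for sub in f[1:]:
            atoms(sub, acc)
    return acc

def holds(f, val):
    if isinstance(f, str):
        return val[f]
    op = f[0]
    if op == "not":
        return not holds(f[1], val)
    if op == "and":
        return holds(f[1], val) and holds(f[2], val)
    if op == "or":
        return holds(f[1], val) or holds(f[2], val)
    raise ValueError(op)

def believes(base, f):
    """Implicit belief: f holds in every valuation satisfying the whole base."""
    props = sorted(set().union(*(atoms(g) for g in base)) | atoms(f))
    for bits in product([True, False], repeat=len(props)):
        val = dict(zip(props, bits))
        if all(holds(g, val) for g in base) and not holds(f, val):
            return False
    return True

# 3PO's (hypothetical) explicit base about Cath: she is reluctant, and
# "if Cath is reluctant then she will not take the medicine unprompted".
base_3po = [
    "cath_reluctant",
    ("or", ("not", "cath_reluctant"), ("not", "cath_takes_medicine_unprompted")),
]
# Derived (implicit) belief, not explicitly stored in the base:
print(believes(base_3po, ("not", "cath_takes_medicine_unprompted")))  # -> True

The attraction of this style of semantics is that only the finite base needs to be stored on the robot, while all other beliefs are computed on demand, which is what makes decision procedures over belief bases interesting for robotic implementations.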

Participants

Results

Tool

We have implemented a tableau-based satisfiability checking procedure for epistemic logic with a semantics based on a belief base abstraction, which is exploitable for robotic applications. Available here
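The implemented procedure itself is not reproduced on this page. As a rough, hypothetical illustration of the tableau method it relies on, the following sketch handles only the propositional core: it decomposes the connectives branch by branch and declares a formula set satisfiable when some fully expanded branch contains no contradiction. The actual tool additionally handles belief modalities and the belief-base semantics; the function names here are illustrative.

# Minimal, hypothetical sketch of the tableau method for the propositional
# fragment only. Formulas: atoms are strings; ("not", f), ("and", f, g),
# ("or", f, g).

def satisfiable(branch):
    """branch is a list of formulas that must hold together."""
    # Pick a non-literal formula to expand, if any.
    for i, f in enumerate(branch):
        if isinstance(f, str) or (f[0] == "not" and isinstance(f[1], str)):
            continue  # literal: nothing to expand
        rest = branch[:i] + branch[i+1:]
        if f[0] == "and":                            # conjunction: extend the branch
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == "or":                             # disjunction: split the branch
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == "not":
            g = f[1]
            if g[0] == "not":                        # double negation
                return satisfiable(rest + [g[1]])
            if g[0] == "and":                        # De Morgan laws
                return satisfiable(rest + [("or", ("not", g[1]), ("not", g[2]))])
            if g[0] == "or":
                return satisfiable(rest + [("and", ("not", g[1]), ("not", g[2]))])
    # Only literals left: the branch is open iff no atom occurs both ways.
    positives = {f for f in branch if isinstance(f, str)}
    negatives = {f[1] for f in branch if not isinstance(f, str)}
    return positives.isdisjoint(negatives)

print(satisfiable([("and", "p", ("not", "p"))]))      # -> False (every branch closes)
print(satisfiable([("or", "p", "q"), ("not", "p")]))  # -> True  (an open branch exists)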