Surprises occur frequently in real-world environments, and being able to program agents that can adapt to these surprises is a challenge for artificial intelligence.
The aim of this internship project is first to situate the framework of Goal-Driven Autonomy (GDA) agents with regard to the scenario extrapolation framework (indeed, in both frameworks the aim is to find explanations for unexpected facts in a scenario). Then the student should propose a GDA algorithm that adapts the goals of an autonomous agent according to the answers given by its environment.
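To make the intended behaviour concrete, here is a minimal sketch of the classic GDA cycle (discrepancy detection, explanation, goal formulation). All names (`GDAAgent`, `detect`, `explain`, `formulate_goal`) are illustrative assumptions, not part of any existing implementation; the explanation step is a placeholder where a scenario extrapolation module would plug in.

```python
from dataclasses import dataclass, field

@dataclass
class GDAAgent:
    """Minimal Goal-Driven Autonomy loop: detect a discrepancy between
    expectation and observation, explain it, and derive a new goal."""
    expectations: dict                      # state variable -> expected value
    goals: list = field(default_factory=list)

    def detect(self, observation: dict) -> dict:
        # Discrepancies: observed values that differ from expectations.
        return {k: v for k, v in observation.items()
                if self.expectations.get(k) != v}

    def explain(self, discrepancy: dict) -> str:
        # Placeholder: a full system would query a scenario-extrapolation
        # module for a causal story about the unexpected facts.
        return f"unexpected facts: {sorted(discrepancy)}"

    def formulate_goal(self, explanation: str) -> str:
        # Hypothetical policy: adopt a goal to restore violated expectations.
        return f"restore({explanation})"

    def step(self, observation: dict) -> list:
        discrepancy = self.detect(observation)
        if discrepancy:
            self.goals.append(self.formulate_goal(self.explain(discrepancy)))
        return self.goals

agent = GDAAgent(expectations={"door": "open", "light": "on"})
print(agent.step({"door": "closed", "light": "on"}))
```

The loop illustrates the adaptation the project asks for: goals are not fixed in advance but generated in response to what the environment answers back.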
On the right-hand side of this figure we consider a human agent with her logical knowledge base (containing factual observations about the world and generic statements such as "Miradoux is a wheat variety" or "wheat contains proteins"), her own associations ("proteins are related to nutrition") and her opinions ("I like Miradoux", "I don't like spoiled wheat"). The logical knowledge is stored as a logical knowledge base (I in the figure) expressed in Datalog+/- (for reusability purposes).
The associations (II), which can relate two pieces of information as well as a piece of information to an appreciation (in that case the association is called an opinion), are more difficult to elicit and are not often handled in the literature. One of the main difficulties is that the associations depend on the profile of the person (expert, non-expert, etc.).
A third parameter is also required by our model, namely the cognitive availability (abbreviated ca) of the agent (III), which depends on the agent's interest in the particular argument and on the amount of attention she has to spend (its precise definition is not studied here; it may be based on the agent's mood, her knowledge, and sometimes on the speaker or on the topic of the argument). This cognitive availability is a parameter that we use to filter the possible reasoning the agent is able to do.
On the left-hand side of the figure, we show the proposed process of argument acceptance using the cognitive model. When the agent hears a new argument (step (1)), a number of critical questions are fired: "is the premise of the argument correct?", "do I agree with the conclusion?", "can I infer the conclusion from the premise based on what I know?" (step (2)).
Thanks to the proposed cognitive model, it is then possible to compute reasoning paths (i.e. sequences of logical rules and association rules constituting a chain of inferences that leads to a desired conclusion) for each critical question (step (3)). For a reasoning path we introduce the notion of effort (the cognitive effort required to use the rules of the path), which is compared with the cognitive availability of the agent. An association is usually effortless, while logical reasoning is considered more costly. The cognitive availability of the agent gives an upper bound on the effort the agent is able to put into her reasoning paths. The reasoning paths are selected based on the effort needed to carry them out (step (4)). Based on this selection we can accept or reject an argument. Note that the reasoning paths are constructed from the logical knowledge base and the associations that computationally represent the knowledge of the expert.
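The selection in steps (3)-(4) can be sketched as follows. The cost values, the rule-kind encoding, and the acceptance policy (at least one affordable path per critical question) are all assumptions made for illustration; the model described above leaves these choices open.

```python
# Hypothetical costs: associations are nearly effortless, logical rules costly.
ASSOCIATION_COST = 1
LOGICAL_COST = 5

def path_effort(path):
    """Total cognitive effort of a reasoning path (a list of rule kinds)."""
    return sum(ASSOCIATION_COST if kind == "association" else LOGICAL_COST
               for kind in path)

def select_paths(paths, cognitive_availability):
    """Keep only the reasoning paths whose effort does not exceed the
    agent's cognitive availability (step (4))."""
    return [p for p in paths if path_effort(p) <= cognitive_availability]

def accept(argument_paths, cognitive_availability):
    """Accept the argument iff every critical question is supported by
    at least one affordable reasoning path."""
    return all(select_paths(paths, cognitive_availability)
               for paths in argument_paths.values())

paths = {
    "premise_correct": [["association"]],                  # effort 1
    "conclusion_follows": [["logical", "logical"],         # effort 10
                           ["association", "logical"]],    # effort 6
}
print(accept(paths, cognitive_availability=6))   # affordable path exists
print(accept(paths, cognitive_availability=3))   # inference too costly
```

The upper-bound role of cognitive availability is visible here: lowering ca from 6 to 3 prunes every path answering the inference question, so the same argument flips from accepted to rejected.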
The aim is to develop a computational cognitive model for argument acceptance based on the dual-process model in cognitive psychology.
Supervision: Florence Dupin de Saint-Cyr, possibly with the help of some Montpellier researchers

Nowadays the transmission of ideas mainly goes through writing. Written language must meet syntactic and grammatical rules, with a reading direction for the words, which are the building blocks of sentences to be read in sequential order. Even if this linear mode allows us to express complex and subtle ideas, some representations go beyond it by adding a spatial dimension to information, possibly on an interactive support, or by combining a comprehensive view with a detailed one.
There already exist many visual representation frameworks that are more or less commonly used: Mind maps introduced by Tony Buzan, Concept maps, Venn diagrams, Historical timelines, Programming flowcharts, Geographical maps... However, they all have some drawbacks.
In order to overcome these drawbacks, we have formalized four principles and expressed eight postulates as a first step towards the definition of a theoretical system that could help characterize user-friendly and efficient visual representation languages. Moreover, we have proposed a new language called VTL, for Visual Typed representation Language, which can satisfy these postulates.
The next directions of research are: the development of a graphical user interface (GUI) specific to VTL; the study of the automatic translation of VTL into a formal non-visual language in order to propose inferences and consistency checks; and the study of how VTL can encompass links that were not handled by MOT, namely RCC8 relations, Allen intervals, or other relations between concepts (we could refer, for instance, to the linking-words typology written by Christian Barette).
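As a pointer to what the translation of temporal links would involve, here is a small sketch classifying Allen's thirteen basic interval relations between two closed intervals. The function name and the naming convention for inverse relations (an `_i` suffix) are illustrative assumptions; the thirteen relations themselves are standard in Allen's interval algebra.

```python
def allen_relation(a, b):
    """Classify the Allen relation between closed intervals a=(a1,a2), b=(b1,b2).
    Returns one of Allen's 13 basic relations; inverses carry an '_i' suffix."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "before_i"
    if a2 == b1: return "meets"
    if b2 == a1: return "meets_i"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "starts_i"
    if a2 == b2: return "finishes" if a1 > b1 else "finishes_i"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "during_i"
    return "overlaps" if a1 < b1 else "overlaps_i"

print(allen_relation((1, 3), (4, 6)))   # before
print(allen_relation((1, 4), (4, 6)))   # meets
print(allen_relation((2, 3), (1, 6)))   # during
```

A translation of VTL into a formal non-visual language could map each temporal link drawn between two concepts to one of these relations, making consistency checks over the diagram computable.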
The practical goal of this internship would be the development of a graphical user interface (GUI) specific to VTL, enabling users to read, write and reason in VTL. The theoretical goal would be to progress on the issue of formalising the characterisation of a rational visual language.
Supervision: Florence Dupin de Saint-Cyr and Denis Parade