IRIT, CNRS Researcher
It is generally agreed that trust is a key concept in today's information technologies, and that it applies not only in contexts where security is the focus: beyond a system's reliability, trust ensures its usability by both human and artificial agents. Numerous works in sociology, psychology, philosophy and cognitive science on the one hand, and in computer science on the other, show that trust is a complex notion with multiple facets. While the concept is now used in many applications, there is still no consensus on a clear-cut, unified definition.
In this project we propose to start from Castelfranchi et al.'s theory of social trust, which is certainly one of the best established theories among the above-mentioned disciplines. We will confront its analysis with the specific needs of security in order to extract the required key elements, and complement it with notions that are needed in implementations but a priori absent from the theory (such as trust dynamics and the link with the notion of topic). We will then formalize the resulting theory in logic and implement the properties thus laid bare within agent platforms. The latter step will be carried out at two levels, viz. the individual level and the collective level.
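To illustrate the kind of logical formalization envisaged (a sketch only; the exact operators and language are assumptions, in the modal-logic style commonly used to render Castelfranchi-style trust as a composite mental attitude), trust of agent i in agent j to bring about a goal φ by performing action α could be defined from i's goals and beliefs about j's capability and intention:

```latex
% Hypothetical sketch: trust as a composite mental attitude.
% Goal, Bel, Capable, Intends and After are assumed primitive operators,
% not a formalization fixed by the project itself.
\mathit{Trust}(i, j, \alpha, \varphi) \;\stackrel{\text{def}}{=}\;
   \mathit{Goal}_i\,\varphi \;\wedge\;
   \mathit{Bel}_i\bigl(\mathit{Capable}_j(\alpha) \;\wedge\;
   \mathit{Intends}_j(\alpha) \;\wedge\;
   \mathit{After}_{j:\alpha}\,\varphi\bigr)
```

Read: i trusts j to achieve φ through α if i has φ as a goal and believes that j is capable of α, intends to do α, and that j's doing α brings φ about. Such a definition would then need to be extended with the elements the project identifies as missing, e.g. trust dynamics and the relativization of trust to a topic.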