Workshop on Trust, Advice, and Reputation

Toulouse, October 22-24, 2009


Jonathan Ben-naim (IRIT-CNRS, Toulouse)
"Preliminary results on reputation systems: Balancing quantity and quality"

In many multi-agent systems (especially e-commerce applications such as eBay), users have to choose the agents with whom to interact. This choice is difficult, in particular because the agents are numerous, and it is impossible to call on an arbiter who could objectively judge every agent. However, the agents themselves provide a great deal of information about their peers (e.g. interaction feedback). Consequently, reputation systems have been developed. They are designed to exploit such information in order to produce an object that helps users make decisions about the agents. In the present preliminary work, we develop and partially analyze (from an axiomatic point of view) new reputation systems. The information available about the agents is modeled by a graph: the nodes represent the agents, and an arrow means that the source agent supports the destination agent. The object constructed by our reputation systems is a total ranking of the agents. When comparing two agents, there are essentially two criteria to consider: the number of supporters and the importance of the supporters. We prefer not to prioritize these criteria; similarly, we prefer not to attach weights to them and take some weighted mean. The reason is that such strategies allow certain agents suffering from serious flaws to reach a high position in the ranking. What we try to do here is to favor those agents that receive a balanced form of support.
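To make the two criteria concrete, here is a minimal illustrative sketch (not Ben-naim's actual system): supporters are counted for quantity, a PageRank-style score stands in for the importance of supporters, and the two normalized criteria are combined with a minimum rather than a weighted mean, so an agent weak on either criterion cannot rank highly. The function name and the combination rule are assumptions chosen for illustration.

```python
# Illustrative sketch of a "balanced support" ranking (assumed design,
# not the talk's actual system): combine number of supporters (quantity)
# and importance of supporters (quality) via min() instead of a weighted
# mean, so a big flaw on either criterion keeps an agent's rank low.

def rank_agents(support, damping=0.85, iters=50):
    """support: dict mapping each agent to the set of agents it supports."""
    agents = set(support) | {a for targets in support.values() for a in targets}
    n = len(agents)
    # Quantity criterion: in-degree, i.e. number of supporters.
    quantity = {a: 0 for a in agents}
    for src, targets in support.items():
        for dst in targets:
            quantity[dst] += 1
    # Quality criterion: PageRank-style importance over the support graph.
    importance = {a: 1.0 / n for a in agents}
    for _ in range(iters):
        new = {a: (1 - damping) / n for a in agents}
        for src, targets in support.items():
            if targets:
                share = damping * importance[src] / len(targets)
                for dst in targets:
                    new[dst] += share
        importance = new
    # Normalize each criterion to [0, 1]; combine with min() so neither
    # criterion can be traded off against the other.
    max_q = max(quantity.values()) or 1
    max_i = max(importance.values())
    score = {a: min(quantity[a] / max_q, importance[a] / max_i) for a in agents}
    return sorted(agents, key=lambda a: score[a], reverse=True)
```

With a weighted mean, an agent with many unimportant supporters could still overtake an agent with balanced support; the min() combination blocks exactly that.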

Robert Demolombe (IRIT, Toulouse)
"Trust: Definition, Motivation and Justification. An essay in Modal Logic"


Rino Falcone (ISTC-CNR, Rome)
"Trust networks: a path for the efficient cooperation"

Dependence networks among agents (describing how each agent is linked to the other agents able to satisfy its needs, goals, or desires) are really important for evaluating future collaborations among them. However, dependence networks alone are not sufficient for an actual allocation of tasks among the agents. For this allocation, it is also necessary that each agent can satisfy its own expectations about the trustworthiness of the other agents with respect to the specific tasks. We present in this paper a cognitive theory of trust as capital, which is, in our view, a good starting point for including this concept in the issue of negotiation power. That is to say, if an agent is (potentially) strongly useful to other agents (in the sense that a set of its skills is declared) but is not trusted, its negotiation power does not improve. Our claim is that, for a set of agents linked to each other, the competitive advantage lies not simply in being part of a network, but more precisely in being trusted in that network.

Nigel Harvey (University College London)
"Verbal and behavioural indicators of trust: How well are they associated?"

I shall discuss trust in different domains but focus primarily on studies of trust in risk communication. Models of trust in most areas include at least two main components: trust in the motives (benevolence, integrity, honesty) of an agent (e.g., a source of advice about risk) and trust in the agent’s competence (knowledge, abilities). These components converge on a final common pathway that determines trusting or cooperative intentions and actions. This implies that different expressions of trust should not dissociate. I shall discuss evidence from a number of experiments that suggests that people’s verbal expressions of trust are not consistently associated with the degree to which they trust different advisors when making judgments about degrees of risk associated with various activities. In some cases, there is no association; in other cases, there is a weak positive association; finally, there are cases in which a weak but negative association is observed. People may not have full insight into the degree to which they trust organizations or other people and task-specific factors may affect the degree of insight they have. When we need to assess how much people trust different sources of information, it may be unwise to rely wholly on results of polls, surveys, or questionnaires.

Sylvie Leblois (CLLE-LTC, Toulouse)
"Strategic and non strategic effects of benevolence when advisor-advisee interests are in conflict"

When advisors and advisees have conflicting interests, malevolent advisors may take advantage of their position to advance their own interests. In such a situation, advisees may take into account the perceived benevolence of their advisors, before they act upon the advice they receive. The present work aims at providing experimental evidence of these two phenomena.
METHOD. The study used a trivia-quiz task where advisors and advisees had either conflicting or non-conflicting interests. We assessed the advice-giving behavior of advisors as a function of their (genuine) self-rated benevolence, and we assessed the advice uptake of advisees as a function of the (rigged) declared benevolence of their advisors.
RESULTS. Self-declared malevolent advisors give less (but still accurate) advice when interests are in conflict, and advisees spend more time pondering the advice they receive when interests are in conflict. Advisees, however, appear to react non-strategically to the perceived malevolence of their advisor: recommendations from purportedly malevolent advisors are discarded regardless of whether interests are in conflict.

Emiliano Lorini (IRIT-CNRS, Toulouse)
"A logic of trust and reputation"

The aim of this work is to present a logical framework in which the concepts of trust and reputation can be formally characterized and their properties studied. We start from the definition of trust proposed by Castelfranchi & Falcone (C&F) and formalize this definition in a logic of time, action, beliefs and choices. Then, we provide a refinement of C&F's definition by distinguishing two general types of trust: occurrent trust and dispositional trust. In the second part of the talk we present a definition of reputation that is structurally similar to the definition of trust but shifts the basic concept of belief to the collective dimension of group belief.

Claudio Masolo (LOA-ISTC-CNR, Trento)
"Service-based organizations: some preliminary considerations"

I will discuss the following working hypotheses. (1) The design of (the structure of) an organization mirrors the plan the organization has to reach its goals. A parallel can be traced between the hierarchical structure of an organization - characterized by mechanisms of delegation of sub-plans - and the refinement between plans. Very roughly, an organization can thus be seen as a structured entity able to act and whose structure is determined by how its plan (to reach its goals) is distributed among its members and external collaborators. (2) Assuming the provider's promise to guarantee the achievement of a goal (under some constraints) in the interest of potential beneficiaries as a central aspect of services, it is possible to establish a link between the delegation of a sub-plan in organizations and the service agreement between a (sub-)organization and a member or an external collaborator. (3) Putting together these two hypotheses, an organization can be seen as a structured cluster of service agreements that allow it (at least potentially) to achieve its goals.

Manh-Hung Nguyen (IRIT and CLLE-LTC, Toulouse)
"The Effect of Trust/DisTrust on Social Emotions: A Logical Approach"

Trust and social emotions such as gratitude and anger are naturally related, and both play a key role in current research on interaction systems in the context of ambient intelligence and affective computing. This paper presents a logical approach to formalizing the relations between trust and anger on the one hand, and between distrust and gratitude on the other. Our formal framework is a multimodal logic that combines a logic of belief and choice, a logic of linear time, and a logic of norms. We also provide a behavioral validation of these relations.

Matthias Nickles (University of Bath)
"Modeling and Learning Contextualized Probabilistic Trust"

With open environments, such as electronic marketplaces or the Semantic Web, moving to the center of attention, there is a rapidly growing interest in computational trust. Trust models allow socially interacting agents to evaluate the trustworthiness of their peers and are hence particularly useful in areas where interaction mechanisms that aim to enforce fairness and reliability are ineffective, inefficient, or impossible. However, most existing approaches to computational trust either do not sufficiently take into account the interaction context of trust, or operate with one-dimensional trust values only. This talk presents an approach to trust modeling and learning that characterizes the context structure of trust situations and provides meaningful multi-dimensional trust assessments in many situations where older approaches turn out to be ineffective.
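The contrast with one-dimensional trust values can be illustrated by the following sketch, which keeps a separate probabilistic trust estimate per interaction context. This is an assumed, minimal Beta-distribution model written for illustration; it is not Nickles's actual approach, and the class and method names are invented.

```python
# Minimal sketch of context-sensitive probabilistic trust learning
# (assumed illustration, not the talk's model): each (peer, context)
# pair gets its own Beta posterior over the probability of a successful
# interaction, so trust in a peer can differ across contexts.
from collections import defaultdict

class ContextualTrust:
    def __init__(self):
        # Beta(1, 1) uniform prior per (peer, context):
        # stored as [successes + 1, failures + 1].
        self.params = defaultdict(lambda: [1, 1])

    def observe(self, peer, context, success):
        """Update the posterior for this peer in this context."""
        if success:
            self.params[(peer, context)][0] += 1
        else:
            self.params[(peer, context)][1] += 1

    def trust(self, peer, context):
        """Expected probability of success: mean of the Beta posterior."""
        alpha, beta = self.params[(peer, context)]
        return alpha / (alpha + beta)

model = ContextualTrust()
for _ in range(8):
    model.observe("seller", "electronics", True)
model.observe("seller", "electronics", False)
model.observe("seller", "books", False)
```

A single scalar trust value would average these contexts together; keeping one posterior per context preserves the information that the same peer is reliable for electronics but not (so far) for books.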

Jordi Sabater (IIIA-CSIC, Barcelona)
"Cognitive computational reputation models. Taming the beast."

In this presentation we will introduce our work on cognitive computational reputation models and their integration into cognitive (BDI) agents. For simple applications, a game-theoretical approach similar to that used in most reputation models is enough. However, if we want to tackle the problems found in socially complex virtual societies, we need more sophisticated trust and reputation systems. In this context, the reputation-based decisions that agents make take on special relevance and can be as important as the reputation model itself. We propose a possible integration of a cognitive reputation model, Repage, into a cognitive BDI agent. We define a BDI model as a multi-context system whose regular logical reasoning process incorporates reputation information. Then we show how reputation information can be used not only for selecting partners to interact with, but also in dialogical processes such as negotiation or persuasion. Some of these processes rely on arguments that support agents' points of view. Building on the integration of Repage into a BDI architecture, we highlight the elements necessary to build an argumentation framework.

Ilan Yaniv (Hebrew University of Jerusalem)
"Receiving advisory opinions: A bumpy road to the goal of making better decisions"

It is common practice to solicit other people's opinions prior to making a decision. An editor solicits three qualified reviewers for their opinions on a manuscript; a patient seeks a second opinion regarding a medical condition; a consumer considers the “word of mouth” of a dozen people for guidance in the purchase of an expensive product. Advice taking may well be the “oldest decision aid in history,” yet it has received, until recently, little attention in research. In this talk, I will review research findings and theories concerning the optimal way of using advice (accuracy gains) and the policies judges use in integrating others’ opinions (cognitive processes). I will discuss recent findings on the phenomenon of egocentric discounting and the impact of a cognitive procedure called “blindfolding” on decision makers’ performance. In addition, I will present results showing the detrimental effect of “spurious consensus” and how it could lead to confidence-accuracy dissociations. Finally, I will review recent findings on the utilization of advice on matters of taste and, in particular, the role of similarity and taste discrimination. The results will be discussed in connection with theories of belief revision, attitude change, and group decision making.

Mascha van 't Wout (Departments of Psychology; Cognitive & Linguistic Sciences; Psychiatry Brown University, Providence, USA and Neural Decision Sciences Laboratory, University of Arizona, Tucson, USA)
"Friend or Foe: the influence of implicit trust judgments on cooperation in social interactive decision-making"

A growing body of evidence demonstrates that emotions and social behavior play a crucial role in human social interactive decision-making. For instance, the expression and repayment of trust is an important social aspect that influences competitive and cooperative behavior. Given that the human face appears to play an important role in signaling social intentions and people can form reliable impressions on the basis of someone's facial appearance, facial signals could have a substantial influence on how people behave towards another person in a social interaction. One particular social cue that may be especially important in assessing how to interact with a partner is facial trustworthiness, a rapid, implicit assessment of the likelihood that a partner will reciprocate a generous gesture. Trustworthiness judgments are reliably associated with activity in brain areas important for emotion processing, in particular the amygdala. In the studies to be presented we examined the influence of perceived subjective trustworthiness on trust behavior when interacting with unfamiliar social partners. In addition, we tested whether neural activation in response to the automatic processing of trustworthiness influences decision-making behavior. Participants played single-shot Trust Games with different hypothetical partners who were previously rated on subjective trustworthiness. In each game, participants made a decision about how much to trust their partner, as measured by how much money they invested with that partner, with no guarantee of return. As predicted, people invested more money in partners who were subjectively rated as more trustworthy, despite no objective relationship between these factors. These data indicate that the implicit perception of trustworthiness influences social interactive decision-making, suggesting that it is an important social cue.