
“Social Validation” Experiment

Investigating human perception of consensus in argumentative debates

  • Detailed Description

    Guillaume Cabanac, Max Chevalier, Claude Chrisment, Christine Julien. Social validation of collective annotations: Definition and experiment. Journal of the American Society for Information Science and Technology, 61(2):271-287, 2010 (download preprint).


  • How to Launch the Experiment?

  • Requirements

    Launching the experiment requires Java 1.6 (at least) to be installed on your machine.

    If Java is installed but not registered with your browser, you can start the experiment from the command line: "javaws https://www.irit.fr/~Guillaume.Cabanac/expe/expe.jnlp".

  • Your Task as a Participant

    For this experiment, you will have to evaluate debates. As the following screenshot shows, a debate is represented as a hierarchy whose root ("Smoking should be banned at parties") is an original statement. The replies to this statement are expressed in the nodes of the hierarchy, e.g., "Smoking increases the chances ..." We define an "argument" as either the root statement or a reply node.


    [Screenshot: a debate displayed as a hierarchy of arguments]
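    To make this structure concrete, here is a minimal sketch of a debate as a tree of arguments, written in Java since the experiment itself is a Java application. The class, field, and reply texts below are our own illustration, not the experiment's actual code.

        import java.util.ArrayList;
        import java.util.List;

        // A debate is a tree: the root is the original statement, and every
        // other node is a reply to its parent. Any node is an "argument".
        class Argument {
            final String text;
            final List<Argument> replies = new ArrayList<>();

            Argument(String text) {
                this.text = text;
            }

            // Attach a reply to this argument and return it, so that
            // replies to replies can be chained.
            Argument reply(String replyText) {
                Argument child = new Argument(replyText);
                replies.add(child);
                return child;
            }
        }

        class DebateExample {
            public static void main(String[] args) {
                Argument root = new Argument("Smoking should be banned at parties");
                Argument health = root.reply("Smoking increases the chances ...");
                health.reply("That mostly concerns heavy smokers.");    // hypothetical reply
                root.reply("Guests can simply step outside to smoke."); // hypothetical reply
                System.out.println(root.text + " has " + root.replies.size() + " replies");
            }
        }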


    Your mission consists of:

    • Assigning opinion and comment values to each argument, using the buttons. For example, clicking the "red flag" means "this argument refutes its parent argument in the debate." Alternatively, clicking the "question mark" icon means that you identified a question in this argument ...


    • Evaluating the social validation of an argument with respect to its replies (if any), using the sliders. The slider value associated with a given argument represents a synthesis of the opinions expressed in its replies (a rough model of these evaluations is sketched at the end of this section).


    Please do your best to evaluate the arguments according to their argumentative content rather than your own personal opinion.


    Note that you must supply a value for each slider, which initially appears in gray. In addition, at least one opinion value (button) is required per argument. The remaining work is shown in the status bar of the application.
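    As promised above, here is a rough Java model of a complete evaluation. The opinion labels and the [-1, 1] slider scale are assumptions made for illustration; the application's actual icons and scale may differ.

        import java.util.Collection;
        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        // Hypothetical labels standing in for the application's buttons
        // (red flag, question mark, ...).
        enum Opinion { CONFIRM, REFUTE, QUESTION, COMMENT }

        class Evaluation {
            // Button values: opinion labels assigned to each argument.
            final Map<String, Set<Opinion>> opinions = new HashMap<>();
            // Slider values for arguments that have replies; a [-1, 1]
            // scale (refuted .. confirmed) is assumed here.
            final Map<String, Double> sliders = new HashMap<>();

            void addOpinion(String argumentId, Opinion o) {
                opinions.computeIfAbsent(argumentId, k -> new HashSet<>()).add(o);
            }

            // Mirrors the completeness rule above: every argument needs at
            // least one opinion, and every argument with replies needs a
            // slider value.
            boolean isComplete(Collection<String> allArguments,
                               Collection<String> argumentsWithReplies) {
                for (String a : allArguments) {
                    Set<Opinion> os = opinions.get(a);
                    if (os == null || os.isEmpty()) return false;
                }
                return sliders.keySet().containsAll(argumentsWithReplies);
            }
        }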



  • Example of a Debate Evaluation

    The figure below shows a debate containing 6 arguments. The participant has identified an opinion type for each argument and has positioned the sliders of 2 nodes by synthesizing the opinions of their replies.


    [Screenshot: an evaluated debate, with opinion icons set and two sliders positioned]


  • Extra Information about this Experiment

    • General Research Context

      On the Web, vast amounts of digital documents are freely accessible. With a classical Web browser, however, people remain rather passive, as they can only read documents: one cannot flag a mistake, ask a question, link the document to another one, or simply express one's thoughts.


      To enable people to interact with digital documents, "annotation systems" have been developed since the early 1990s. Such software makes it possible to annotate any digital document just as one would annotate paper, for personal purposes, e.g., critical reading, proofreading, learning, etc.


      Moreover, as modern computers are networked, digital annotations can be stored in a common database. This makes it possible to display a document along with its annotations, which may come from numerous readers all over the world. Readers can then reply to annotations, and to replies, forming "discussion threads", i.e., debates displayed in the context of the commented documents.


    • Purpose of this Experiment

      The central concept of this experiment is "social validation": it represents the global opinion expressed in the debate through replies. Replies may confirm or refute an annotation, with varying strength.


      The aim of this experiment is to determine whether our algorithm (i.e., a computer program) is able to compute a "social validation" value that is close to human perception. As an application, this would for instance enable readers to focus on annotations that have been confirmed by their replies.
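      As a loose illustration of the idea, and emphatically not the authors' actual algorithm (the JASIST 2010 paper below gives the real definition, based on gradual bipolar argumentation), one could compute such a value recursively, letting each reply's influence be tempered by its own social validation:

          import java.util.ArrayList;
          import java.util.List;

          // Each node carries the polarity of its reply link: +1 if it
          // confirms its parent, -1 if it refutes it (root's polarity unused).
          class Node {
              final double polarity;
              final List<Node> replies = new ArrayList<>();
              Node(double polarity) { this.polarity = polarity; }
          }

          class SocialValidationSketch {
              // Returns a value in [-1, 1]: negative means globally refuted,
              // positive means globally confirmed, 0 means balanced or unanswered.
              static double validate(Node argument) {
                  if (argument.replies.isEmpty()) return 0.0;
                  double sum = 0.0;
                  for (Node reply : argument.replies) {
                      // A reply that is itself refuted by the rest of the debate
                      // should weigh less: map its own validation from [-1, 1]
                      // to a weight in [0, 1] (an unanswered reply gets 0.5).
                      double weight = (validate(reply) + 1.0) / 2.0;
                      sum += reply.polarity * weight;
                  }
                  return sum / argument.replies.size();
              }

              public static void main(String[] args) {
                  Node root = new Node(0);
                  Node objection = new Node(-1);       // refutes the root...
                  objection.replies.add(new Node(-1)); // ...but is itself refuted
                  root.replies.add(objection);
                  root.replies.add(new Node(+1));      // a confirming reply
                  System.out.printf("social validation = %.2f%n", validate(root));
              }
          }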


    • Scientific References Related to this Experiment

      Guillaume Cabanac, Max Chevalier, Claude Chrisment, Christine Julien. Social validation of collective annotations: Definition and experiment. Journal of the American Society for Information Science and Technology, 61(2):271-287, 2010 (download preprint).


      Guillaume Cabanac. Annotation collective dans le contexte RI : définition d'une plate-forme pour expérimenter la validation sociale. In: CORIA/RJCRI'08: 3e Rencontres Jeunes Chercheurs en Recherche d'Information, Trégastel, France, March 12-14, 2008, Université de Rennes 1, March 2008. (in French)


      Guillaume Cabanac, Max Chevalier, Claude Chrisment, Christine Julien. Collective Annotation: Perspectives for Information Retrieval Improvement. In: RIAO 2007: Proceedings of the 8th International Conference on Information Retrieval - Large-Scale Semantic Access to Content (Text, Image, Video and Sound), Pittsburgh, PA, USA, May 30 - June 1, 2007, C.I.D. Editions, May 2007.


      Guillaume Cabanac, Max Chevalier, Claude Chrisment, Christine Julien. Validation sociale d'annotations collectives : argumentation bipolaire graduelle pour la théorie sociale de l'information. In: INFORSID'06: 24e congrès de l'INFormatique des Organisations et Systèmes d'Information et de Décision, Hammamet, Tunisia, June 1-3, 2006, INFORSID Editions, pp. 467-482, June 2006. (in French)


      Guillaume Cabanac, Max Chevalier, Claude Chrisment, Christine Julien. A Social Validation of Collaborative Annotations on Digital Documents. In: IWAC'05: International Workshop Annotation for Collaboration - Methods, Tools and Practices, Paris, France, November 23-24, 2005, Jean-François Boujut (Ed.), CNRS - programme société de l'information, pp. 31-40, November 2005.


    • Feedback From The Web

      On “Annotating the Web” by Bill Papantoniou, PhD.

      On “Psychological Research on the Net” by John H. Krantz, PhD.

      On “The Web Experiment List” by Ulf-Dietrich Reips, PhD.


    • Contact Information

      Guillaume Cabanac, PhD
      https://www.irit.fr/~Guillaume.Cabanac

      IRIT - Institute of Computer Science, Toulouse University, France
      Generalized Information Systems team




      Please note that any information obtained in this study in which you can be identified will remain confidential and will be disclosed only with your permission.