ECol 2015, Melbourne, Australia, October 24, 2015


The paradigm of CIS/CIR refers to methodologies and technologies that support collective knowledge sharing within a work team in order to solve a shared complex problem. One main challenge in CIS/CIR is to satisfy the mutually beneficial goals of both individual users and the collaborative group while maintaining a reasonable level of cognitive effort. In previous work, CIS/CIR evaluation has relied on simulation-based protocols, user-based or log-based studies, and metrics that leverage the individual vs. collaborative dimensions of search effectiveness. The most common strategy is to undertake both qualitative and quantitative evaluation according to the characteristics of the search tasks and the evaluation objectives, such as cognitive effort, usability, and individual vs. collective effectiveness. However, to date and to our knowledge, although substantial research advances in the evaluation of a wide range of information retrieval and seeking tasks have been achieved through international evaluation campaigns such as TREC, CLEF, and NTCIR, no comparable standardization effort exists for the evaluation of CIS/CIR. We therefore believe there is an important need to investigate the evaluation challenge in CIS/CIR, with the hope of building standardized evaluation frameworks that would foster the research area.


Both theoretical and practical research papers are welcome from the research and industrial communities. Submissions should address the main topic of the workshop, but related aspects will also be considered, including models, methods, techniques, and examples of CIS/CIR in theory and in practice. Original and unpublished papers are welcome on any aspects including:

  • CIS/CIR evaluation framework design and implementation
  • System-based vs. user-based evaluation approaches for CIS/CIR
  • Novel or extended evaluation measures, test collections, and operational evaluation methodologies
  • Models and methods of and for CIS/CIR
  • Impact of the temporal synchronicity of collaborative sessions on evaluation
  • Evaluation of domain- and application-oriented CIS/CIR: exploratory search, travel planning, education projects, legal and medical domains...
  • Evaluation of specific collaborative tasks (search, sense-making, intent understanding…)
  • Studies on collective relevance judging
  • Studies of collaborative behavior applicable to evaluation
  • Simulation vs. log-studies vs. user-studies for collaborative search
  • Evaluation of single vs. collaborative search sessions
  • Connections to related research in contextual and interactive information seeking and retrieval
  • Evaluation Concerns and Issues: Reliability, Repeatability, Reproducibility, Replicability
  • Proposed evaluation tasks and collections for CIS/CIR