Have a look at the workshop report!

Collaborative Information Seeking/Retrieval (CIS/CIR) has given rise to several challenges in terms of search behavior analysis, retrieval model formalization, and interface design. However, the major issue of evaluation in CIS/CIR is still underexplored. The goal of this workshop is to investigate the evaluation challenges in CIS/CIR with the hope of building standardized evaluation frameworks, methodologies, and task specifications that would foster and grow the research area (in a collaborative fashion). In recent years, and particularly since 2005, CIS and CIR have become emerging topics addressed in several IR and IS venues, including the CIKM and SIGIR conferences. While the potential of collaboration has been highlighted with respect to individual settings, other challenges remain and need to be thoroughly explored. Although most experimental evaluations have been conducted with the objective of highlighting the synergistic effect of the proposed contributions, there is an important need to discuss what should be evaluated in terms of collaboration aspects (e.g. cognitive effort, mutually beneficial goal satisfaction, collective relevance…). Moreover, no standard evaluation framework exists for CIS/CIR comparable to those established for ad-hoc information retrieval through evaluation campaigns such as TREC, INEX, and CLEF. Our workshop would both interest and benefit from researchers with complementary expertise covering all aspects of evaluation in CIS and CIR.

Topics of interest include, but are not limited to:

  • CIS/CIR evaluation framework design and implementation
  • System-based vs. user-based evaluation approaches for CIS/CIR
  • Novel evaluation measures or extensions of traditional ones, test collections, and methodologies for operational evaluation
  • Models and methods of and for CIS/CIR
  • Impact of the temporal synchronicity of collaborative sessions on evaluation
  • Evaluation of application-domain-oriented CIS/CIR: exploratory search, travel planning, education projects, legal and medical domains...
  • Evaluation of specific collaborative tasks (search, sense-making, intent understanding…)
  • Studies on collective relevance judging
  • Studies of collaborative behavior applicable to evaluation
  • Simulation vs. log-studies vs. user-studies for collaborative search
  • Evaluation of single vs. collaborative search sessions
  • Connections to related research in contextual and interactive information seeking and retrieval
  • Evaluation concerns and issues: reliability, repeatability, reproducibility, replicability
  • Proposed evaluation tasks and collections for CIS/CIR