ECol 2017, Oslo, Norway, March 11, 2017

Abstract Papers (1 page)

Position papers, open perspectives, and interesting discussion topics in the field are welcome. Feel free to let us know what you would like to talk about during the workshop!

Research Papers (No longer than 4 pages)

Both theoretical and practical research papers addressing the main workshop topic (evaluation frameworks) are welcome from the research and industrial communities. We will also consider related aspects, including models, methods, techniques, and examples of CIS/CIR in theory and in practice.

Collection Papers (No longer than 4 pages)

We are also seeking papers describing test collections usable for the experimental evaluation of contributions related to CIS/CIR. The collection should be publicly available, distinct from previously available collections and data sets, and should allow investigation of a variety of research questions arising from CIS/CIR challenges.


Evaluation and Methodologies

  • Studies on collective relevance judging.
  • Studies of collaborative behavior applicable to evaluation.
  • Simulation vs. log-studies vs. user-studies for collaborative search.
  • Evaluation of single vs. collaborative search sessions.
  • Novel or extended evaluation measures, test collections, and methodologies for operational evaluation.
  • Evaluation Concerns and Issues: Reliability, Repeatability, Reproducibility, Replicability.


  • Exploratory search (knowledge acquisition, multi-faceted search)
  • Recommending social collaborators (experts, answerers, sympathizers)
  • Collaborative ranking on social platforms
  • Collaborative intent understanding
  • New tasks

Application Areas

  • Medical CIS/CIR, e.g., systematic reviews
  • Legal CIS/CIR
  • Academic Search
  • E-science and digital libraries
  • Leisure Search
  • Security and Vetting