LivingLab '13: Proceedings of the 2013 workshop on Living labs for information retrieval evaluation
DOI: 10.1145/2513150
ACM 2013 Proceeding
Publisher:
Association for Computing Machinery, New York, NY, United States
Conference:
CIKM '13: 22nd ACM International Conference on Information and Knowledge Management, San Francisco, California, USA, 1 November 2013
ISBN:
978-1-4503-2420-5
Published:
01 November 2013
Abstract

It is our great pleasure to welcome you to the Workshop on Living Labs for Information Retrieval Evaluation -- LL'13, held at CIKM 2013 in San Francisco, on November 1, 2013.

In the past few years the information retrieval (IR) community has been exploring ways to move further away from the Cranfield-style evaluation paradigm and make evaluations more "realistic" (more centred on real users, their needs and behaviours). As part of this drive, living labs, which involve and integrate users in the research process, have been proposed. Living labs would offer major benefits to the community, such as: availability of potentially larger cohorts of real users and their behaviours; cross-comparability across research centres; and greater knowledge transfer between industry and academia, when industry partners are involved. The need for this methodology is further amplified by the increased reliance of IR approaches on proprietary data; living labs are a way to bridge the data divide between academia and industry. Progress towards realising actual living labs has nevertheless been limited. Many challenges must be overcome before the benefits associated with living labs for IR can be realised, including challenges associated with living lab architecture and design, hosting, maintenance, security, privacy, participant recruitment, and the development of use scenarios and tasks. This workshop brings together, for the first time, people interested in progressing the living labs for IR evaluation methodology. Our aim is to work together to identify natural use cases and barriers to success, and to share opinions on ways and means of addressing them.

The call for papers attracted 7 submissions, all of which were found acceptable by the program committee. These include 2 short papers, 2 position papers, and 3 demonstration papers. In addition, the workshop programme features an invited talk by Jan Pedersen (Microsoft Bing). The workshop is intended to be highly interactive, to encourage group discussion and active collaboration among attendees; multiple breakout sessions are scheduled throughout the day. A final discussion session wraps up the event with the objective of identifying and formulating specific action items for future research and development.

SESSION: Keynote address
Online metrics for web search relevance

Information Retrieval has a long tradition of being metrics driven. Ranking algorithms are assessed with respect to some utility measure that reflects the likelihood of satisfying an information need. Traditionally these metrics are based on offline ...
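To make the contrast concrete, the traditional offline utility measures the keynote refers to include graded-relevance metrics such as DCG/nDCG, computed from editorial judgements rather than user interactions. The following is a minimal illustrative sketch (not code from the talk); the function names and toy relevance grades are assumptions for the example.

```python
# Sketch of a classic offline relevance metric: Discounted Cumulative
# Gain (DCG) and its normalized form, computed from graded relevance
# labels assigned by human judges -- no user interaction data involved.
import math

def dcg_at_k(relevances, k):
    """DCG over the top-k graded relevance labels of a ranked list."""
    return sum(rel / math.log2(rank + 2)          # rank 0 -> log2(2)
               for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the ideal (descending-sorted) ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Toy example: a ranking whose top three documents have grades 3, 2, 1.
print(dcg_at_k([3, 2, 1], 3))        # ~4.7619
print(ndcg_at_k([3, 2, 1, 0], 4))    # 1.0 -- already ideally ordered
```

Online metrics, by contrast, are estimated from logged user behaviour (clicks, dwell time, abandonment) on live traffic rather than from such judgements.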

SESSION: Short papers
A private living lab for requirements based evaluation

A "Living Lab" is described as an open innovation space for the cooperation of users, researchers and even companies to participate in a common process to develop innovative solutions. An architecture for a living lab for IR has been proposed in [1]. In ...

A month in the life of a production news recommender system

During the last decade, recommender systems have become a ubiquitous feature in the online world. Research on systems and algorithms in this area has flourished, leading to novel techniques for personalization and recommendation. The evaluation of ...

SESSION: Position papers
Evaluation for operational IR applications: generalizability and automation

Black box information retrieval (IR) application evaluation allows practitioners to measure the quality of their IR application. Instead of evaluating specific components, e.g. solely the search engine, a complete IR application, including the user's ...

Factors affecting conditions of trust in participant recruiting and retention: a position paper

This paper contemplates some of the challenges faced in recruiting and developing a community of contributors (participants/subjects) for a living laboratory for IR evaluation. We briefly review several factors that may affect the efficacy of ...

DEMONSTRATION SESSION: Demo papers
Using CrowdLogger for in situ information retrieval system evaluation

A major hurdle faced by many information retrieval researchers---especially in academia---is evaluating retrieval systems in the wild. Challenges include tapping into large user bases, collecting user behavior, and modifying a given retrieval system. We ...

FindiLike: a preference driven entity search engine for evaluating entity retrieval and opinion summarization

We describe a novel preference-driven search engine (FindiLike) which allows users to find entities of interest based on preferences and also allows users to digest opinions about the retrieved entities easily. FindiLike leverages large amounts of ...

Lerot: an online learning to rank framework

Online learning to rank methods for IR allow retrieval systems to optimize their own performance directly from interactions with users via click feedback. In the software package Lerot, presented in this paper, we have bundled all ingredients needed for ...
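A core ingredient of click-based online evaluation of the kind Lerot bundles is interleaved comparison: merge the outputs of two rankers into one result list, show it to a user, and credit whichever ranker contributed the clicked documents. The sketch below illustrates team-draft interleaving under the assumption that both rankings cover the same candidate set; it is an illustrative example, not Lerot's actual API, and all names are our own.

```python
# Illustrative team-draft interleaving: two rankers take turns (coin
# flip on ties) picking their highest-ranked not-yet-placed document.
# Clicks on the merged list are then credited to the contributing team.
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random.Random(42)):
    """Merge two rankings of the same candidate set; remember which
    ranker ('A' or 'B') supplied each document."""
    result, team_of = [], {}
    count_a = count_b = 0
    pool = set(ranking_a) | set(ranking_b)
    while len(result) < len(pool):
        # Team with fewer picks goes next; coin flip breaks ties.
        pick_a = count_a < count_b or (count_a == count_b
                                       and rng.random() < 0.5)
        source = ranking_a if pick_a else ranking_b
        doc = next(d for d in source if d not in team_of)
        team_of[doc] = 'A' if pick_a else 'B'
        result.append(doc)
        if pick_a:
            count_a += 1
        else:
            count_b += 1
    return result, team_of

def credit(team_of, clicked_docs):
    """Declare the winner of one impression from the observed clicks."""
    wins_a = sum(team_of[d] == 'A' for d in clicked_docs)
    wins_b = sum(team_of[d] == 'B' for d in clicked_docs)
    return 'A' if wins_a > wins_b else 'B' if wins_b > wins_a else 'tie'
```

Aggregated over many impressions, such per-query outcomes give the click-based preference signal that online learning to rank methods optimize against.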

Contributors
  • University of Stavanger
  • University of Regensburg
  • University of Amsterdam
  • Maynooth University
  • University of Waterloo


      Acceptance Rates

Year            Submitted   Accepted   Rate
LivingLab '13   7           7          100%
Overall         7           7          100%