DOI: 10.1145/3213586.3226215
Short paper

Multi-method Evaluation in Scientific Paper Recommender Systems

Published: 02 July 2018

ABSTRACT

Recommendation techniques in scientific paper recommender systems (SPRS) have generally been evaluated in offline settings, with little user involvement. Nonetheless, the user-perceived relevance of recommended papers is as important as system relevance. In this paper, we present an SPRS prototype that was subject to both offline and user evaluations. We describe the lessons learned from these evaluation studies, and present the challenges and open questions for multi-method evaluation in SPRS.


Published in

UMAP '18: Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization
July 2018
349 pages
ISBN: 9781450357845
DOI: 10.1145/3213586
General Chairs: Tanja Mitrovic, Jie Zhang
Program Chairs: Li Chen, David Chin

        Copyright © 2018 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States




Acceptance Rates

UMAP '18 paper acceptance rate: 26 of 93 submissions (28%). Overall acceptance rate: 162 of 633 submissions (26%).

