ABSTRACT
Recommendation techniques in scientific paper recommender systems (SPRS) have generally been evaluated in offline settings, with little user involvement. However, the user-perceived relevance of recommended papers is as important as system-measured relevance. In this paper, we present an SPRS prototype that was subjected to both offline and user evaluations. We describe the lessons learnt from these evaluation studies, and present the challenges and open questions for multi-method evaluation in SPRS.
Index Terms
- Multi-method Evaluation in Scientific Paper Recommender Systems