A Systematic Literature Review on (Multi-method) Recommender Systems Evaluation

Thesis Type: Master
Thesis Status: Currently running
Student: Joschka Schepp
Start:
Thesis Supervisor:
Contact:
Research Field:

Recommender systems aim to mitigate the choice overload problem inherent in today's digital world by providing personalized recommendations of items to users. These recommendations are computed from previous user behavior, e.g., implicit feedback such as items purchased, viewed, or listened to, or explicit feedback such as ratings given to an item. Traditionally, recommender systems research has mostly focused on improving the prediction accuracy of recommendation algorithms. Recently, however, we observe a shift towards more user-centric evaluation methods, as purely accuracy-driven development of recommender systems has been shown not to capture all aspects relevant to a user's satisfaction with a given system. Kaminskas and Bridge find that the focus of recommender systems evaluation has shifted to include a wider range of "beyond-accuracy" objectives such as diversity, serendipity, novelty, and coverage.
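To make the contrast between accuracy-oriented and beyond-accuracy evaluation concrete, the following Python sketch computes precision@k alongside two beyond-accuracy measures, catalog coverage and intra-list diversity, for a small set of hypothetical top-k recommendation lists. All user IDs, item IDs, genre labels, and interaction data are invented for illustration and are not taken from the thesis or the cited survey.

    from itertools import combinations

    # Hypothetical top-5 recommendation lists per user (invented example data).
    recommendations = {
        "u1": ["i1", "i2", "i3", "i4", "i5"],
        "u2": ["i2", "i3", "i6", "i7", "i8"],
    }

    # Held-out items each user actually interacted with (invented test data).
    relevant = {
        "u1": {"i2", "i9"},
        "u2": {"i3", "i6", "i10"},
    }

    # Simple content features used for the diversity measure (invented genres).
    genres = {
        "i1": {"rock"}, "i2": {"pop"}, "i3": {"pop", "rock"}, "i4": {"jazz"},
        "i5": {"rock"}, "i6": {"classical"}, "i7": {"jazz"}, "i8": {"pop"},
        "i9": {"jazz"}, "i10": {"classical"},
    }

    catalog = set(genres)  # all recommendable items

    def precision_at_k(recs, rel, k=5):
        """Accuracy: fraction of top-k items that are relevant, averaged over users."""
        return sum(len(set(r[:k]) & rel[u]) / k for u, r in recs.items()) / len(recs)

    def catalog_coverage(recs, catalog):
        """Beyond accuracy: share of the catalog appearing in any recommendation list."""
        recommended = {i for r in recs.values() for i in r}
        return len(recommended & catalog) / len(catalog)

    def intra_list_diversity(recs, genres):
        """Beyond accuracy: average pairwise Jaccard distance within each list."""
        def dist(a, b):
            ga, gb = genres[a], genres[b]
            return 1 - len(ga & gb) / len(ga | gb)

        per_user = []
        for r in recs.values():
            pairs = list(combinations(r, 2))
            per_user.append(sum(dist(a, b) for a, b in pairs) / len(pairs))
        return sum(per_user) / len(per_user)

    print(f"precision@5:          {precision_at_k(recommendations, relevant):.2f}")
    print(f"catalog coverage:     {catalog_coverage(recommendations, catalog):.2f}")
    print(f"intra-list diversity: {intra_list_diversity(recommendations, genres):.2f}")

A recommender tuned only for precision@5 could score well while recommending near-duplicate items from a small part of the catalog; the coverage and diversity numbers make that trade-off visible, which is exactly the kind of gap beyond-accuracy evaluation is meant to expose.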

In this master thesis, we aim to perform a systematic literature review on evaluation methods for recommender systems. We are particularly interested in gaining a deeper understanding of multi-method (or mixed-method) evaluation approaches that combine a set of evaluation strategies to obtain a more profound picture of user satisfaction.

Marius Kaminskas and Derek Bridge: Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 7(1): 2:1-2:42 (2017)