Recommender System Validation Platform

Abstract

In most applications where recommender systems are deployed, it is important that they produce better results than a system with no recommender, or one with a previous recommender. Deploying an untested system, even to a small user sample, can be very costly if the system produces negative results. It is therefore often in a developer's interest to create several candidate systems and to have some way of comparing them before selecting one or a few to launch. While the methods of testing, and their statistical soundness, have been explored in other work, it is not obvious how to apply them in practice. This report describes the implementation of a modular and configurable evaluation framework and analyses this framework with two different cases. The experimentation shows how such a framework can be utilized to reduce overhead work when approaching the evaluation of a new recommender system.
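To make the idea of comparing candidate systems before launch concrete, the sketch below shows one possible shape of an offline comparison: several candidate recommenders are trained on the same interaction data and scored with a common metric (precision@k here). The class and function names are illustrative assumptions, not the framework's actual API.

    # Hypothetical sketch: offline comparison of candidate recommenders.
    # All names (evaluate, PopularityRecommender, RandomRecommender,
    # precision_at_k) are illustrative, not the report's actual API.
    import random
    from statistics import mean


    def precision_at_k(recommended, relevant, k=10):
        """Fraction of the top-k recommendations the user actually interacted with."""
        top_k = recommended[:k]
        if not top_k:
            return 0.0
        return len(set(top_k) & set(relevant)) / len(top_k)


    class PopularityRecommender:
        """Baseline candidate: recommend the globally most popular items."""
        def fit(self, interactions):
            counts = {}
            for _, item in interactions:
                counts[item] = counts.get(item, 0) + 1
            self.ranking = sorted(counts, key=counts.get, reverse=True)

        def recommend(self, user, k=10):
            return self.ranking[:k]


    class RandomRecommender:
        """Baseline candidate: random items, standing in for 'no recommender'."""
        def fit(self, interactions):
            self.items = list({item for _, item in interactions})

        def recommend(self, user, k=10):
            return random.sample(self.items, min(k, len(self.items)))


    def evaluate(candidates, train, test_by_user, k=10):
        """Train each candidate on the same data and report mean precision@k."""
        results = {}
        for name, model in candidates.items():
            model.fit(train)
            scores = [precision_at_k(model.recommend(user, k), relevant, k)
                      for user, relevant in test_by_user.items()]
            results[name] = mean(scores) if scores else 0.0
        return results


    if __name__ == "__main__":
        # Toy interaction data: (user, item) pairs held out per user for testing.
        train = [("u1", "a"), ("u1", "b"), ("u2", "a"), ("u3", "c"), ("u3", "a")]
        test_by_user = {"u1": ["a", "c"], "u2": ["b"], "u3": ["a"]}
        candidates = {"popularity": PopularityRecommender(),
                      "random": RandomRecommender()}
        print(evaluate(candidates, train, test_by_user, k=2))

Because the candidates share one fit/recommend interface and one evaluation loop, swapping in a new candidate or a new metric only touches configuration, which is the kind of overhead reduction the abstract refers to.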
