Personalised interactive systems such as recommender systems require
selecting relevant items from massive catalogues, depending on the context.
Reward-driven offline optimisation of these systems can be achieved by
relaxing the discrete selection problem, yielding policy learning or
REINFORCE-style algorithms. Unfortunately, this relaxation step requires
computing a sum over the entire catalogue, making the cost of evaluating the
gradient (and hence of each stochastic gradient descent iteration) linear in
the catalogue size. This computation is untenable in many real-world settings,
such as large-catalogue recommender systems, severely limiting the usefulness
of the method in practice. In this paper, we derive an approximation of these
policy learning algorithms that scales logarithmically with the catalogue
size. Our contribution is based upon combining three novel
ideas: a new Monte Carlo estimate of the policy gradient, the self-normalised
importance sampling estimator, and the use of fast maximum inner product
search at training time. Extensive experiments show that our algorithm
is an order of magnitude faster than naive approaches yet produces equally good
policies.

Comment: Accepted at AAAI 202
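To make the bottleneck and the remedy described above concrete, the following is a minimal sketch rather than the paper's exact formulation: it assumes a softmax policy parameterised by a scoring function $f_\theta$, and the proposal distribution $q$ and candidate count $n$ are illustrative notation introduced here. For a catalogue $\mathcal{A}$,
\[
\pi_\theta(a \mid x) = \frac{\exp f_\theta(x,a)}{\sum_{a' \in \mathcal{A}} \exp f_\theta(x,a')},
\qquad
\nabla_\theta \log \pi_\theta(a \mid x) = \nabla_\theta f_\theta(x,a) - \sum_{a' \in \mathcal{A}} \pi_\theta(a' \mid x)\, \nabla_\theta f_\theta(x,a'),
\]
so every REINFORCE-style gradient evaluation touches all $|\mathcal{A}|$ items. Drawing $n \ll |\mathcal{A}|$ candidates $a_1, \dots, a_n$ from a proposal $q$ (e.g. retrieved with fast maximum inner product search) gives a self-normalised importance sampling approximation of the expensive term,
\[
\sum_{a' \in \mathcal{A}} \pi_\theta(a' \mid x)\, \nabla_\theta f_\theta(x,a')
\approx \sum_{i=1}^{n} w_i \, \nabla_\theta f_\theta(x,a_i),
\qquad
w_i = \frac{\exp f_\theta(x,a_i) / q(a_i)}{\sum_{j=1}^{n} \exp f_\theta(x,a_j) / q(a_j)},
\]
where the self-normalisation cancels the intractable softmax denominator.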