10,936 research outputs found
Unified Off-Policy Learning to Rank: a Reinforcement Learning Perspective
Off-policy Learning to Rank (LTR) aims to optimize a ranker from data
collected by a deployed logging policy. However, existing off-policy LTR
methods often make strong assumptions about how users generate the click
data, i.e., the click model, and hence must be tailored to each specific
click model. In this paper, we unify the ranking process under general
stochastic click models as a Markov Decision Process (MDP), so that the
optimal ranking can be learned directly with offline reinforcement learning
(RL). Building on this, we leverage offline
RL techniques for off-policy LTR and propose the Click Model-Agnostic Unified
Off-policy Learning to Rank (CUOLR) method, which could be easily applied to a
wide range of click models. Through a dedicated formulation of the MDP, we show
that offline RL algorithms can adapt to various click models without complex
debiasing techniques and prior knowledge of the model. Results on various
large-scale datasets demonstrate that CUOLR consistently outperforms the
state-of-the-art off-policy learning to rank algorithms while maintaining
consistency and robustness under different click models.
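The abstract's core idea, casting sequential ranking as an MDP, can be sketched minimally. The state, action, and reward definitions below (state = query plus documents ranked so far, action = next document, reward = logged click) and the uniform logging policy are illustrative assumptions, not the paper's exact formulation:

```python
import random

def rank_episode(query, candidates, policy, clicks):
    """Roll out one ranking episode as an MDP trajectory.

    state  = (query, documents ranked so far)
    action = the next document to place
    reward = logged click feedback for that document (assumption)
    """
    state = (query, tuple())            # nothing ranked yet
    remaining = list(candidates)
    trajectory = []
    while remaining:
        action = policy(state, remaining)    # pick the next document
        reward = clicks.get(action, 0.0)     # click signal from the log
        next_state = (query, state[1] + (action,))
        trajectory.append((state, action, reward, next_state))
        remaining.remove(action)
        state = next_state
    return trajectory

# A uniformly random logging policy, standing in for the deployed ranker.
def logging_policy(state, remaining):
    return random.choice(remaining)

trajectory = rank_episode("q1", ["d1", "d2", "d3"], logging_policy,
                          clicks={"d2": 1.0})
```

Trajectories of this form are exactly what an off-the-shelf offline RL algorithm consumes, which is why the MDP view sidesteps click-model-specific debiasing.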
Speeding-up the decision making of a learning agent using an ion trap quantum processor
We report a proof-of-principle experimental demonstration of the quantum
speed-up for learning agents utilizing a small-scale quantum information
processor based on radiofrequency-driven trapped ions. The decision-making
process of a quantum learning agent within the projective simulation paradigm
for machine learning is implemented in a system of two qubits. The latter are
realized using hyperfine states of two frequency-addressed atomic ions exposed
to a static magnetic field gradient. We show that the deliberation time of this
quantum learning agent is quadratically improved with respect to comparable
classical learning agents. The performance of this quantum-enhanced learning
agent highlights the potential of scalable quantum processors taking advantage
of machine learning. Comment: 21 pages, 7 figures, 2 tables. Author names now
spelled correctly; sections rearranged; changes in the wording of the
manuscript.
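In projective simulation, classical deliberation is a random walk over a network of memory "clips" that ends when an action clip is reached; the quantum version quadratically reduces the expected number of hops. The tiny clip network and hop probabilities below are assumptions for illustration only:

```python
import random

def deliberate(transitions, actions, start, rng=random.random):
    """Classical projective-simulation deliberation: random-walk over
    clips until an action clip is hit; return (action, number of hops)."""
    clip, hops = start, 0
    while clip not in actions:
        hops += 1
        r, acc = rng(), 0.0
        # Sample the next clip from this clip's outgoing probabilities.
        for nxt, p in transitions[clip].items():
            acc += p
            if r < acc:
                clip = nxt
                break
    return clip, hops

# Hypothetical two-layer clip network (probabilities sum to 1 per clip).
transitions = {
    "percept": {"memory": 0.5, "act_left": 0.25, "act_right": 0.25},
    "memory":  {"act_left": 0.5, "act_right": 0.5},
}
action, hops = deliberate(transitions, {"act_left", "act_right"}, "percept")
```

The deliberation time reported in the abstract corresponds to the expected hop count of such a walk, which the trapped-ion implementation improves quadratically.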
Estimation of Markov Chain via Rank-Constrained Likelihood
This paper studies the estimation of low-rank Markov chains from empirical
trajectories. We propose a non-convex estimator based on rank-constrained
likelihood maximization. Statistical upper bounds are provided for the
Kullback-Leibler divergence and the risk between the estimator and the
true transition matrix. The estimator reveals a compressed state space of the
Markov chain. We also develop a novel DC (difference of convex function)
programming algorithm to tackle the rank-constrained non-smooth optimization
problem. Convergence results are established. Experiments show that the
proposed estimator achieves better empirical performance than other popular
approaches. Comment: Accepted at ICML 201
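A crude stand-in for the low-rank idea can be sketched as follows. Note this is not the paper's estimator: the paper maximizes the likelihood under a hard rank constraint via a DC programming algorithm, whereas this sketch merely counts empirical transitions, truncates the SVD to rank r, and re-normalizes, purely to illustrate the compressed state space:

```python
import numpy as np

def lowrank_transition_estimate(traj, n_states, r):
    """Rank-r surrogate estimate of a Markov transition matrix (sketch)."""
    counts = np.zeros((n_states, n_states))
    for s, t in zip(traj[:-1], traj[1:]):
        counts[s, t] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Empirical transition matrix; unvisited states get a uniform row.
    P_hat = np.divide(counts, rows,
                      out=np.full_like(counts, 1.0 / n_states),
                      where=rows > 0)
    # Truncate the SVD to rank r, then project back to row-stochastic form.
    U, S, Vt = np.linalg.svd(P_hat)
    P_r = U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]
    P_r = np.clip(P_r, 0, None)
    P_r /= P_r.sum(axis=1, keepdims=True)
    return P_r

traj = [0, 1, 0, 1, 2, 0, 1, 0, 2, 1, 0]
P = lowrank_transition_estimate(traj, n_states=3, r=2)
```

Spectral truncation like this lacks the statistical guarantees the paper proves for its rank-constrained MLE, but it shows why a low-rank structure compresses the chain's state space.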