Hyper-parameter Tuning for the Contextual Bandit
We study the problem of learning the exploration-exploitation trade-off in the
contextual bandit problem with a linear reward function. In traditional
algorithms that solve the contextual bandit problem, the exploration rate is a
parameter tuned by the user. Our proposed algorithm instead learns to choose
the right exploration parameter in an online manner, based on the observed
context and the immediate reward received for the chosen action. We present two
algorithms that use a bandit to find the optimal exploration parameter of the
contextual bandit algorithm, which we hope is a first step toward the
automation of multi-armed bandit algorithms.

Comment: arXiv admin note: text overlap with arXiv:1705.0382
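The idea of using an outer bandit to tune the exploration parameter of an inner contextual bandit can be sketched as follows. This is a minimal illustration only, not the paper's method: it assumes a LinUCB-style inner learner, a UCB1 outer bandit over a hypothetical grid of candidate exploration levels `alphas`, and synthetic linear-reward data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-reward environment (assumed for illustration).
d, n_actions, T = 5, 4, 2000
theta = rng.normal(size=(n_actions, d))       # hidden per-action reward parameters
alphas = [0.01, 0.1, 0.5, 1.0, 2.0]           # candidate exploration parameters

# Inner LinUCB statistics per action: A = I + sum x x^T, b = sum r x.
A = [np.eye(d) for _ in range(n_actions)]
b = [np.zeros(d) for _ in range(n_actions)]

# Outer bandit over alphas: plain UCB1 (a stand-in for the paper's algorithms).
counts = np.zeros(len(alphas))
sums = np.zeros(len(alphas))

def pick_alpha(t):
    # Play each candidate alpha once, then use the UCB1 index.
    for k in range(len(alphas)):
        if counts[k] == 0:
            return k
    bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
    return int(np.argmax(sums / counts + bonus))

total_reward = 0.0
for t in range(T):
    x = rng.normal(size=d)                    # observed context
    k = pick_alpha(t)
    alpha = alphas[k]
    # Inner LinUCB choice with the exploration level chosen by the outer bandit.
    scores = []
    for a in range(n_actions):
        A_inv = np.linalg.inv(A[a])
        mu_hat = A_inv @ b[a]
        scores.append(x @ mu_hat + alpha * np.sqrt(x @ A_inv @ x))
    a = int(np.argmax(scores))
    r = float(theta[a] @ x + 0.1 * rng.normal())
    # The same immediate reward updates both the inner and the outer bandit.
    A[a] += np.outer(x, x)
    b[a] += r * x
    counts[k] += 1
    sums[k] += r
    total_reward += r

print(round(total_reward / T, 3))
```

The key point the sketch captures is that the exploration parameter is no longer fixed by the user: the outer bandit receives the same immediate reward as the inner learner and shifts probability mass toward the alpha values that perform best online.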