    Dimension Reduction in Contextual Online Learning via Nonparametric Variable Selection

    We consider a contextual online learning (multi-armed bandit) problem with a high-dimensional covariate $\mathbf{x}$ and decision $\mathbf{y}$. The reward function to learn, $f(\mathbf{x},\mathbf{y})$, does not have a particular parametric form. The literature has shown that the optimal regret is $\tilde{O}(T^{(d_x+d_y+1)/(d_x+d_y+2)})$, where $d_x$ and $d_y$ are the dimensions of $\mathbf{x}$ and $\mathbf{y}$, and thus it suffers from the curse of dimensionality. In many applications, only a small subset of variables in the covariate affects the value of $f$, which is referred to as \textit{sparsity} in statistics. To take advantage of the sparsity structure of the covariate, we propose a variable selection algorithm called \textit{BV-LASSO}, which incorporates novel ideas such as binning and voting to apply LASSO in nonparametric settings. Our algorithm achieves the regret $\tilde{O}(T^{(d_x^*+d_y+1)/(d_x^*+d_y+2)})$, where $d_x^*$ is the effective covariate dimension. This matches the optimal regret when the covariate is $d_x^*$-dimensional and thus cannot be improved. Our algorithm may serve as a general recipe for achieving dimension reduction via variable selection in nonparametric settings.
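    The binning-and-voting idea can be illustrated with a minimal sketch (this is not the paper's BV-LASSO algorithm; the data-generating function, bin count, regularization strength, and voting threshold below are all illustrative assumptions): partition the problem into bins, run LASSO within each bin where the nonparametric reward is approximately linear in the relevant covariates, and keep a variable only if it is selected in a majority of bins.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    # Hypothetical setup: 10-dimensional covariate, but the reward depends
    # only on the first two coordinates (effective dimension d_x^* = 2).
    n, d = 2000, 10
    X = rng.uniform(size=(n, d))
    y_dec = rng.uniform(size=n)  # scalar decision variable (illustrative)
    reward = X[:, 0] ** 2 + 2 * X[:, 1] + 0.5 * y_dec + 0.1 * rng.normal(size=n)

    # Binning: partition the decision space; within each bin, fit LASSO on
    # the covariates (the local fit is roughly linear in the relevant ones).
    n_bins = 8
    bins = np.minimum((y_dec * n_bins).astype(int), n_bins - 1)

    votes = np.zeros(d, dtype=int)
    for b in range(n_bins):
        mask = bins == b
        lasso = Lasso(alpha=0.01).fit(X[mask], reward[mask])
        votes += (np.abs(lasso.coef_) > 1e-3).astype(int)

    # Voting: keep a variable if it is selected in a majority of bins.
    selected = np.where(votes > n_bins // 2)[0]
    print(selected)  # should recover the two relevant covariates, 0 and 1
    ```

    The LASSO penalty zeroes out the weak, noise-driven coefficients of the irrelevant covariates in each bin, while voting across bins guards against a single bin spuriously selecting (or missing) a variable.
    
    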