
    Stochastic Contextual Bandits with Graph-based Contexts

    We naturally generalize the online graph prediction problem to a version of stochastic contextual bandit problems where contexts are vertices in a graph and the structure of the graph provides information on the similarity of contexts. More specifically, we are given a graph $G=(V,E)$ whose vertex set $V$ represents contexts with {\em unknown} vertex labels $y$. In our stochastic contextual bandit setting, vertices with the same label share the same reward distribution. The standard notion of instance difficulty in graph label prediction is the cutsize $f$, defined as the number of edges whose endpoints have different labels. For line graphs and trees we present an algorithm with regret bound $\tilde{O}(T^{2/3}K^{1/3}f^{1/3})$, where $K$ is the number of arms. Our algorithm relies on the optimal stochastic bandit algorithm of Zimmert and Seldin~[AISTATS'19, JMLR'21]. When the best arm outperforms the other arms, the regret improves to $\tilde{O}(\sqrt{KT\cdot f})$. The regret bound in the latter case is comparable to other optimal contextual bandit results in more general settings, but our algorithm is easy to analyze, runs very efficiently, and does not require an i.i.d. assumption on the input context sequence. The algorithm also works with general graphs using a standard random spanning tree reduction.
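
    To make the difficulty measure concrete, here is a minimal sketch (Python; hypothetical helper names, not from the paper) that computes the cutsize $f$ of a labeled graph, i.e. the number of edges whose endpoints disagree:

        def cutsize(edges, labels):
            # Cutsize f: the number of edges whose endpoints carry
            # different labels (the instance-difficulty measure above).
            return sum(1 for u, v in edges if labels[u] != labels[v])

        # Line graph 0-1-2-3 labeled A, A, B, B: a single edge crosses
        # the label boundary, so f = 1.
        line_edges = [(0, 1), (1, 2), (2, 3)]
        print(cutsize(line_edges, {0: "A", 1: "A", 2: "B", 3: "B"}))  # -> 1

    Well-clustered labelings give small $f$, which is exactly the regime where the $\tilde{O}(T^{2/3}K^{1/3}f^{1/3})$ bound is strongest.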

    Sketch-based Randomized Algorithms for Dynamic Graph Regression

    A well-known problem in data science and machine learning is {\em linear regression}, which has recently been extended to dynamic graphs. Existing exact algorithms for updating the solution of the dynamic graph regression problem require at least linear time (in terms of $n$, the size of the graph). However, this time complexity might be intractable in practice. In the current paper, we utilize the {\em subsampled randomized Hadamard transform} and \textsf{CountSketch} to propose the first randomized algorithms. Suppose that we are given an $n \times m$ matrix embedding $M$ of the graph, where $m \ll n$. Let $r$ be the number of samples required for a guaranteed approximation error, which is a sublinear function of $n$. Our first algorithm reduces the time complexity of pre-processing to $O(n(m + 1) + 2n(m + 1) \log_2(r + 1) + rm^2)$. Then, after an edge insertion or an edge deletion, it updates the approximate solution in $O(rm)$ time. Our second algorithm reduces the time complexity of pre-processing to $O\left( \mathrm{nnz}(M) + m^3 \epsilon^{-2} \log^7(m/\epsilon) \right)$, where $\mathrm{nnz}(M)$ is the number of nonzero elements of $M$. Then, after an edge insertion, an edge deletion, a node insertion, or a node deletion, it updates the approximate solution in $O(qm)$ time, with $q = O\left(\frac{m^2}{\epsilon^2} \log^6(m/\epsilon) \right)$. Finally, we show that under some assumptions, if $\ln n < \epsilon^{-1}$ our first algorithm outperforms our second algorithm, and if $\ln n \geq \epsilon^{-1}$ our second algorithm outperforms our first.
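
    As a rough illustration of the CountSketch step (a minimal NumPy sketch under assumed toy dimensions, not the paper's implementation): each row of $M$ is hashed into one of $r$ buckets with a random sign, so the sketch $SM$ is formed in time proportional to $\mathrm{nnz}(M)$, after which the regression is solved on the much smaller $r \times m$ system.

        import numpy as np

        def countsketch(M, r, rng):
            # Hash each of the n rows of M into one of r buckets with a
            # random sign; forming S @ M touches each nonzero of M once.
            n = M.shape[0]
            h = rng.integers(0, r, size=n)       # bucket index per row
            s = rng.choice([-1.0, 1.0], size=n)  # random sign per row
            SM = np.zeros((r, M.shape[1]))
            np.add.at(SM, h, s[:, None] * M)
            return SM, h, s

        # Toy usage: sketch a tall least-squares problem M x ~ b to r rows.
        rng = np.random.default_rng(0)
        n, m, r = 10_000, 20, 400
        M = rng.standard_normal((n, m))
        b = M @ rng.standard_normal(m) + 0.01 * rng.standard_normal(n)
        SM, h, s = countsketch(M, r, rng)
        Sb = np.zeros(r)
        np.add.at(Sb, h, s * b)
        x_hat, *_ = np.linalg.lstsq(SM, Sb, rcond=None)  # approximate solution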

    Online Matrix Completion with Side Information

    We give an online algorithm and prove novel mistake and regret bounds for online binary matrix completion with side information. The mistake bounds we prove are of the form $\tilde{O}(D/\gamma^2)$. The term $1/\gamma^2$ is analogous to the usual margin term in SVM (perceptron) bounds. More specifically, if we assume that there is some factorization of the underlying $m \times n$ matrix into $P Q^\intercal$, where the rows of $P$ are interpreted as "classifiers" in $\mathcal{R}^d$ and the rows of $Q$ as "instances" in $\mathcal{R}^d$, then $\gamma$ is the maximum (normalized) margin over all factorizations $P Q^\intercal$ consistent with the observed matrix. The quasi-dimension term $D$ measures the quality of the side information. In the presence of vacuous side information, $D = m + n$. However, if the side information is predictive of the underlying factorization of the matrix, then in an ideal case $D \in O(k + \ell)$, where $k$ is the number of distinct row factors and $\ell$ is the number of distinct column factors. We additionally provide a generalization of our algorithm to the inductive setting. In this setting, we provide an example where the side information is not directly specified in advance; for this example, the quasi-dimension $D$ is now bounded by $O(k^2 + \ell^2)$.
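
    To make the margin term concrete, the toy sketch below (hypothetical Python/NumPy code; the exact normalization is an assumption on our part) evaluates the normalized margin of one candidate factorization $P Q^\intercal$ over the observed $\pm 1$ entries. The $\gamma$ in the bound is the maximum of this quantity over all consistent factorizations, not the value for a single candidate.

        import numpy as np

        def normalized_margin(P, Q, observed):
            # observed: iterable of (i, j, y) with y in {-1, +1}.
            # Margin of this one factorization; a larger value shrinks
            # the O~(D / gamma^2) mistake bound from the abstract.
            return min(
                y * (P[i] @ Q[j]) / (np.linalg.norm(P[i]) * np.linalg.norm(Q[j]))
                for i, j, y in observed
            )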