
    Bethe States of the integrable spin-s chain with generic open boundaries

    Based on the inhomogeneous T-Q relation and the associated Bethe Ansatz equations obtained via the off-diagonal Bethe Ansatz, we construct the Bethe-type eigenstates of the SU(2)-invariant spin-s chain with generic non-diagonal boundaries by employing a certain orthogonal basis of the Hilbert space. Comment: 16 pages, no figure, published version.
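    For orientation, a minimal sketch of the shape such a relation takes, written here in LaTeX for the simplest spin-1/2 case with generic coefficient functions (an assumption for illustration, not the paper's spin-s formula):

        % Inhomogeneous T-Q relation (schematic spin-1/2 form):
        % the third term is absent in the usual diagonal-boundary Bethe Ansatz.
        \Lambda(u) \;=\; a(u)\,\frac{Q(u-\eta)}{Q(u)}
                  \;+\; d(u)\,\frac{Q(u+\eta)}{Q(u)}
                  \;+\; c(u)\,\frac{a(u)\,d(u)}{Q(u)},
        \qquad Q(u) = \prod_{j=1}^{N} (u - \mu_j).

    Requiring the eigenvalue \Lambda(u) to be regular at the zeros \mu_j of Q(u) yields the associated Bethe Ansatz equations; the inhomogeneous third term is what accommodates generic non-diagonal boundaries and vanishes in the diagonal case.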

    Analysis of Nuclear Norm Regularization for Full-rank Matrix Completion

    In this paper, we provide a theoretical analysis of nuclear-norm regularized least squares for full-rank matrix completion. Although similar formulations have been examined in previous studies, their results are unsatisfactory because only additive upper bounds are provided. Under the assumption that the top eigenspaces of the target matrix are incoherent, we derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix. Our relative upper bound is tighter than the additive bounds of previous methods when the mass of the target matrix is concentrated on its top eigenspaces, and it also implies perfect recovery when the target matrix is low-rank. The analysis is built upon the optimality condition of the regularized formulation and existing guarantees for low-rank matrix completion. To the best of our knowledge, this is the first time such a relative bound has been proved for the regularized formulation of matrix completion.
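    To make the regularized formulation concrete, here is a minimal proximal-gradient (singular value thresholding) loop for min_X 0.5*||P_Omega(X - M)||_F^2 + lam*||X||_* — a standard generic solver for this objective, not the paper's contribution; the function names, unit step size, and toy data below are illustrative assumptions.

        import numpy as np

        def svt(Z, tau):
            """Singular value thresholding: prox operator of tau * nuclear norm."""
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            s = np.maximum(s - tau, 0.0)        # soft-threshold the singular values
            return (U * s) @ Vt

        def complete(M_obs, mask, lam, n_iters=500):
            """Proximal gradient for the nuclear-norm regularized least squares.
            mask marks observed entries; the gradient of the smooth term is
            1-Lipschitz, so a unit step size is safe."""
            X = np.zeros_like(M_obs)
            for _ in range(n_iters):
                grad = mask * (X - M_obs)       # gradient of the data-fit term
                X = svt(X - grad, lam)          # prox step on the nuclear norm
            return X

        # toy usage: a full-rank matrix whose mass sits on its top eigenspaces
        rng = np.random.default_rng(0)
        M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
        M += 0.01 * rng.standard_normal((50, 50))   # small full-rank tail
        mask = rng.random((50, 50)) < 0.5           # observe ~half the entries
        X_hat = complete(mask * M, mask, lam=0.1)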

    Recovering the Optimal Solution by Dual Random Projection

    Random projection has been widely used in data classification. It maps high-dimensional data into a low-dimensional subspace in order to reduce the computational cost of solving the related optimization problem. While previous studies have focused on analyzing the classification performance of using random projection, in this work we consider the recovery problem, i.e., how to accurately recover the optimal solution to the original optimization problem in the high-dimensional space based on the solution learned in the subspace spanned by random projections. We present a simple algorithm, termed Dual Random Projection, that uses the dual solution of the low-dimensional optimization problem to recover the optimal solution to the original problem. Our theoretical analysis shows that, with high probability, the proposed algorithm accurately recovers the optimal solution to the original problem, provided that the data matrix is of low rank or can be well approximated by a low-rank matrix. Comment: The 26th Annual Conference on Learning Theory (COLT 2013).
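    The recover-from-dual idea is easy to illustrate for the special case of l2-regularized least squares, where the dual solution has a closed form. This specialization, the Gaussian projection, and all names below are assumptions for the sketch; the paper treats a broader class of problems.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, m = 200, 5000, 100          # samples, original dim, projected dim
        lam = 1.0

        # low-rank data matrix, as required by the recovery guarantee
        X = rng.standard_normal((n, 10)) @ rng.standard_normal((10, d))
        y = rng.standard_normal(n)

        # 1. random projection into the m-dimensional subspace
        R = rng.standard_normal((d, m)) / np.sqrt(m)
        X_hat = X @ R

        # 2. solve the low-dimensional problem and keep its *dual* solution;
        #    for ridge regression: alpha = (X_hat X_hat^T + lam I)^{-1} y
        alpha = np.linalg.solve(X_hat @ X_hat.T + lam * np.eye(n), y)

        # 3. recover a primal solution in the original d-dimensional space
        w_recovered = X.T @ alpha

        # reference: exact ridge solution via the same dual identity
        w_exact = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)
        print(np.linalg.norm(w_recovered - w_exact) / np.linalg.norm(w_exact))

    The point of the scheme is in step 2: the expensive d-dimensional data enter only through the cheap m-dimensional Gram matrix, yet the dual variables live in the sample space and so can be mapped back through the original X in step 3.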