
    Bounded-Distortion Metric Learning

    Metric learning aims to embed one metric space into another to benefit tasks such as classification and clustering. Although a greatly distorted metric space has a high degree of freedom to fit training data, it is prone to overfitting and numerical inaccuracy. This paper presents bounded-distortion metric learning (BDML), a new metric learning framework that amounts to finding an optimal Mahalanobis metric space under a bounded-distortion constraint. An efficient solver based on the multiplicative weights update method is proposed. Moreover, we generalize BDML to pseudo-metric learning and devise a semidefinite relaxation and a randomized algorithm to solve it approximately. We further provide theoretical analysis showing that distortion is a key ingredient in the stability and generalization ability of our BDML algorithm. Extensive experiments on several benchmark datasets yield promising results.
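    The abstract above concerns learning a Mahalanobis metric. As background, the sketch below (illustrative only, not the paper's BDML solver or its bounded-distortion constraint) shows the basic identity such methods rest on: a Mahalanobis distance with PSD matrix M = L^T L equals the Euclidean distance after the linear embedding x -> Lx, and the "distortion" of that embedding is what BDML bounds.

```python
import numpy as np

# A Mahalanobis metric is parameterized by a positive semidefinite
# matrix M, with d_M(x, y) = sqrt((x - y)^T M (x - y)).
def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 3))
M = L.T @ L  # PSD by construction

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, 1.0])

# Two equivalent views of the same distance: the Mahalanobis form,
# and the Euclidean distance after the linear embedding x -> L x.
d1 = mahalanobis(x, y, M)
d2 = float(np.linalg.norm(L @ (x - y)))
assert np.isclose(d1, d2)
```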

    Convex Optimization In Identification Of Stable Non-Linear State Space Models

    A new framework for nonlinear system identification is presented in terms of optimal fitting of stable nonlinear state space equations to input/output/state data, with a performance objective defined as a measure of robustness of the simulation error with respect to equation errors. Basic definitions and analytical results are presented. The utility of the method is illustrated on a simple simulation example as well as experimental recordings from a live neuron.
    Comment: 9 pages, 2 figures; elaboration of the same-title paper in the 49th IEEE Conference on Decision and Control.

    Location of the spectrum of Kronecker random matrices

    For a general class of large non-Hermitian random block matrices X we prove that there are no eigenvalues away from a deterministic set with very high probability. This set is obtained from the Dyson equation of the Hermitization of X as the self-consistent approximation of the pseudospectrum. We demonstrate that the analysis of the matrix Dyson equation from [arXiv:1604.08188v4] offers a unified treatment of many structured matrix ensembles.
    Comment: 33 pages, 4 figures; some assumptions in Sections 3.1 and 3.2 relaxed, some typos corrected, and references updated.

    Author index to volumes 41–60 (1981–1984)


    Sherali-Adams Strikes Back

    Let G be any n-vertex graph whose random walk matrix has its nontrivial eigenvalues bounded in magnitude by 1/sqrt(Delta) (for example, a random graph G of average degree Theta(Delta) typically has this property). We show that the exp(c (log n)/(log Delta))-round Sherali-Adams linear programming hierarchy certifies that the maximum cut in such a G is at most 50.1% (in fact, at most 1/2 + 2^{-Omega(c)}). For example, in random graphs with n^{1.01} edges, O(1) rounds suffice; in random graphs with n * polylog(n) edges, n^{O(1/log log n)} = n^{o(1)} rounds suffice. Our results stand in contrast to the conventional beliefs that linear programming hierarchies perform poorly for max-cut and other CSPs, and that eigenvalue/SDP methods are needed for effective refutation. Indeed, our results imply that constant-round Sherali-Adams can strongly refute random Boolean k-CSP instances with n^{ceil(k/2) + delta} constraints; previously this had only been done with spectral algorithms or the SOS SDP hierarchy.
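    The abstract's spectral hypothesis can be checked empirically. The sketch below (an illustration of the eigenvalue condition only, not of the Sherali-Adams certificate itself; the graph size and degree are arbitrary choices, not values from the paper) samples an Erdos-Renyi graph of average degree roughly Delta and confirms that the nontrivial eigenvalues of its random walk matrix have magnitude on the order of 1/sqrt(Delta).

```python
import numpy as np

rng = np.random.default_rng(1)
n, Delta = 2000, 100
p = Delta / n  # edge probability giving average degree ~ Delta

# Sample a symmetric adjacency matrix with no self-loops.
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

deg = A.sum(axis=1)
deg[deg == 0] = 1.0  # guard against isolated vertices

# The random walk matrix W = D^{-1} A is similar to the symmetric
# normalized matrix S = D^{-1/2} A D^{-1/2}, so both share the same
# (real) eigenvalues; we diagonalize S instead.
S = A / np.sqrt(np.outer(deg, deg))
evals = np.sort(np.linalg.eigvalsh(S))

# Trivial eigenvalue is ~1; the largest nontrivial magnitude should be
# on the order of 1/sqrt(Delta) for a random graph of this density.
second = max(abs(evals[0]), abs(evals[-2]))
print(f"largest nontrivial eigenvalue magnitude: {second:.3f}, "
      f"1/sqrt(Delta) = {Delta ** -0.5:.3f}")
```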