
    On the Convergence Rate of Decomposable Submodular Function Minimization

    Submodular functions describe a variety of discrete problems in machine learning, signal processing, and computer vision. However, minimizing submodular functions poses a number of algorithmic challenges. Recent work introduced an easy-to-use, parallelizable algorithm for minimizing submodular functions that decompose as the sum of "simple" submodular functions. Empirically, this algorithm performs extremely well, but no theoretical analysis was given. In this paper, we show that the algorithm converges linearly, and we provide upper and lower bounds on the rate of convergence. Our proof relies on the geometry of submodular polyhedra and draws on results from spectral graph theory.
    Comment: 17 pages, 3 figures
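    The decomposable structure the abstract refers to can be made concrete with a toy example. The sketch below is a hypothetical illustration of the problem setup only (a graph-cut-plus-modular objective written as a sum of simple per-edge submodular pieces, minimized by brute force); it is not the paper's parallel projection algorithm, and all weights are made up.

```python
from itertools import combinations

# Hypothetical toy instance: F(S) = cut(S) + modular(S), a sum of "simple"
# submodular pieces (one per edge) plus a linear term.
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0)]   # (u, v, weight)
costs = [-2.5, 0.5, -1.0]                          # modular term, one value per element
n = 3                                              # ground set {0, 1, 2}

def edge_cut(S, u, v, w):
    """Simple submodular piece: weight w if the edge crosses the boundary of S."""
    return w if ((u in S) != (v in S)) else 0.0

def F(S):
    """Decomposable submodular objective: a sum of simple functions."""
    return sum(edge_cut(S, u, v, w) for u, v, w in edges) + sum(costs[i] for i in S)

# Brute force over all 2^n subsets -- feasible only for tiny n; scaling this
# up is exactly what a parallelizable decomposition algorithm is for.
subsets = [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]
best = min(subsets, key=F)
print(sorted(best), F(best))
```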

    Exponentially convergent data assimilation algorithm for Navier-Stokes equations

    The paper presents a new state estimation algorithm for a bilinear equation representing the Fourier-Galerkin (FG) approximation of the Navier-Stokes (NS) equations on a torus in $\mathbb{R}^2$. This state equation is subject to uncertain but bounded noise in the input (Kolmogorov forcing) and initial conditions, and its output is incomplete and contains bounded noise. The algorithm designs a time-dependent gain such that the estimation error converges to zero exponentially. The sufficient conditions for the existence of the gain are formulated in the form of algebraic Riccati equations. To demonstrate the results, we apply the proposed algorithm to the reconstruction of a chaotic fluid flow from incomplete and noisy data.
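    The Riccati-based gain design generalizes the classical linear observer. As a hedged illustration for a generic linear system (the paper treats a bilinear Fourier-Galerkin model with a time-dependent gain; the sketch below uses a constant gain and entirely hypothetical matrices), one can compute the gain from an algebraic Riccati equation via SciPy and run a simple observer loop:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.3]])   # hypothetical state matrix
C = np.array([[1.0, 0.0]])                 # incomplete observation: first state only
Q = np.eye(2)                              # weight on model uncertainty
R = np.array([[0.1]])                      # weight on output noise

# Filter-form algebraic Riccati equation: A P + P A^T - P C^T R^-1 C P + Q = 0.
P = solve_continuous_are(A.T, C.T, Q, R)
L = P @ C.T @ np.linalg.inv(R)             # constant estimator gain

# Observer dynamics x_hat' = A x_hat + L (y - C x_hat), integrated by Euler steps.
dt, T = 0.01, 5.0
x = np.array([1.0, 0.0])                   # true state
x_hat = np.zeros(2)                        # estimate, deliberately wrong at t = 0
for _ in range(int(T / dt)):
    y = C @ x + 0.01 * np.random.randn(1)  # noisy, incomplete output
    x = x + dt * (A @ x)
    x_hat = x_hat + dt * (A @ x_hat + L @ (y - C @ x_hat))
print(np.linalg.norm(x - x_hat))           # estimation error shrinks over time
```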

    A unified framework for solving a general class of conditional and robust set-membership estimation problems

    In this paper we present a unified framework for solving a general class of problems arising in the context of set-membership estimation/identification theory. More precisely, the paper provides an original approach for computing optimal conditional and robust projection estimates in a nonlinear estimation setting where the operator relating the data and the parameter to be estimated is a generic multivariate polynomial function and the uncertainties affecting the data belong to semialgebraic sets. By noticing that the computation of both the conditional and the robust projection optimal estimators requires solving min-max optimization problems that share the same structure, we propose a unified two-stage approach based on semidefinite-relaxation techniques for solving such estimation problems. The key idea of the proposed procedure is to recognize that the optimal functional of the inner optimization problems can be approximated to any desired precision by a multivariate polynomial function, by suitably exploiting recently proposed results in the field of parametric optimization. Two simulation examples are reported to show the effectiveness of the proposed approach.
    Comment: Accepted for publication in the IEEE Transactions on Automatic Control (2014)
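    To make the min-max structure concrete, here is a brute-force, one-dimensional stand-in for the robust projection estimate; the paper's semidefinite-relaxation machinery is what makes the multivariate polynomial case tractable, and the measurement map and bounds below are hypothetical.

```python
import numpy as np

def p(theta):                      # hypothetical polynomial measurement map
    return theta**3 + theta

y_meas, eps = 2.5, 0.3             # observed output and noise bound
grid = np.linspace(-3.0, 3.0, 20001)

# Feasible parameter set: every theta consistent with the bounded-noise model.
feasible = grid[np.abs(p(grid) - y_meas) <= eps]

# Robust (min-max) estimate: the Chebyshev center of the feasible set, i.e.
# the point minimizing the worst-case estimation error over that set.
theta_hat = 0.5 * (feasible.min() + feasible.max())
worst_case_err = 0.5 * (feasible.max() - feasible.min())
print(theta_hat, worst_case_err)
```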

    Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods

    In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing with a long line of work dating back to the 1960s. We provide algorithms for both problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, run in time $\widetilde{O}(m \log \kappa \log^2(1/\epsilon))$, where $\epsilon$ is the amount of error we are willing to tolerate and $\kappa$ is the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever $\kappa$ is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results with a separate algorithm that uses an interior-point method and runs in time $\widetilde{O}(m^{3/2} \log(1/\epsilon))$. To establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that, for the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient.
    Comment: To appear in FOCS 2017
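    For contrast with the second-order framework, the classical first-order baseline for matrix scaling is the Sinkhorn iteration, sketched below; this is the textbook method, not the paper's algorithm, and the test matrix is arbitrary.

```python
import numpy as np

def sinkhorn_scale(A, tol=1e-9, max_iter=10000):
    """Find positive diagonals x, y with diag(x) A diag(y) doubly stochastic."""
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(max_iter):
        y = 1.0 / (A.T @ x)          # normalize column sums to 1
        x = 1.0 / (A @ y)            # normalize row sums to 1
        B = np.diag(x) @ A @ np.diag(y)
        if abs(B.sum(axis=1) - 1).max() < tol and abs(B.sum(axis=0) - 1).max() < tol:
            break
    return x, y

A = np.random.rand(4, 4) + 0.1       # strictly positive, so a scaling exists
x, y = sinkhorn_scale(A)
B = np.diag(x) @ A @ np.diag(y)
print(B.sum(axis=1), B.sum(axis=0))  # both close to the all-ones vector
```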

    Volumetric Spanners: an Efficient Exploration Basis for Learning

    Numerous machine learning problems require an exploration basis: a mechanism to explore the action space. We define a novel geometric notion of exploration basis with low variance, called volumetric spanners, and give efficient algorithms to construct such a basis. We show how efficient volumetric spanners give rise to the first efficient and optimal regret algorithm for bandit linear optimization over general convex sets. Previously, such results were known only for specific convex sets or under special conditions such as the existence of an efficient self-concordant barrier for the underlying set.
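    Computing a true volumetric spanner is beyond a short sketch, but a simpler relative, a greedy volume-maximizing basis in the spirit of barycentric spanners, conveys the geometric idea of picking a few well-spread actions; the construction below is a hypothetical illustration, not the paper's algorithm.

```python
import numpy as np

def greedy_volume_basis(X):
    """Greedily pick d rows of X (n x d) approximately maximizing |det|."""
    n, d = X.shape
    chosen = []
    R = X.astype(float).copy()       # residuals orthogonal to the chosen span
    for _ in range(d):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))  # largest residual wins
        chosen.append(i)
        q = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ q, q)   # deflate the newly chosen direction
    return chosen

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))     # 50 candidate actions in R^3
print(greedy_volume_basis(X))        # indices of a well-spread exploration basis
```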