
    Randomized Extended Kaczmarz for Solving Least-Squares

    We present a randomized iterative algorithm that converges exponentially in expectation to the minimum Euclidean norm least-squares solution of a given linear system of equations. The expected number of arithmetic operations required to obtain an estimate of given accuracy is proportional to the squared condition number of the system multiplied by the number of non-zero entries of the input matrix. The proposed algorithm is an extension of the randomized Kaczmarz method analyzed by Strohmer and Vershynin.
    Comment: 19 pages, 5 figures; code is available at https://github.com/zouzias/RE
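
    A minimal NumPy sketch of this kind of row-and-column sampling iteration (the dimensions, sampling probabilities, and fixed iteration count below are illustrative assumptions, not the paper's exact algorithm or stopping rule):

```python
import numpy as np

def randomized_extended_kaczmarz(A, b, iters=5000, seed=0):
    """Sketch of a randomized extended Kaczmarz loop: a column step gradually
    removes the component of b outside the range of A, while a row step
    performs the usual Kaczmarz projection against the corrected right-hand
    side, steering x toward the minimum-norm least-squares solution."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    frob = np.sum(A**2)
    row_p = np.sum(A**2, axis=1) / frob   # rows sampled ~ squared row norms
    col_p = np.sum(A**2, axis=0) / frob   # columns sampled ~ squared column norms
    x, z = np.zeros(n), b.astype(float)
    for _ in range(iters):
        j = rng.choice(n, p=col_p)        # column step: project z off column j
        z -= (A[:, j] @ z) / (A[:, j] @ A[:, j]) * A[:, j]
        i = rng.choice(m, p=row_p)        # row step: project x onto hyperplane i
        x += (b[i] - z[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x

# Usage on an overdetermined, possibly inconsistent system
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 20)), rng.standard_normal(100)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(randomized_extended_kaczmarz(A, b) - x_ls))
```

    The column step is what distinguishes the extended method: it drives z toward the part of b orthogonal to the range of A, so the row step effectively works on a consistent system and the iterate approaches the least-squares solution even when b does not lie in the range of A.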

    Sparsity Constrained Inverse Problems - Application to Vibration-based Structural Health Monitoring

    Vibration-based structural health monitoring (SHM) seeks to detect, quantify, locate, and prognosticate damage by processing vibration signals measured while the structure is operational. The basic premise of vibration-based SHM is that damage will affect the stiffness, mass, or energy dissipation properties of the structure and in turn alter its measured dynamic characteristics. To make SHM a practical technology, it is necessary to perform damage assessment using only a minimal number of permanently installed sensors. Deducing damage at unmeasured regions of the structural domain requires solving an inverse problem that is underdetermined and/or ill-conditioned. In addition, the effects of local damage on global vibration response may be overshadowed by the effects of modelling error, environmental changes, sensor noise, and unmeasured excitation. These theoretical and practical challenges render the damage identification inverse problem ill-posed, and in some cases unsolvable with conventional inverse methods. This dissertation proposes and tests a novel interpretation of the damage identification inverse problem. Since damage is inherently local and strictly reduces stiffness and/or mass, the underdetermined inverse problem can be made uniquely solvable by imposing sparsity or non-negativity constraints on the solution space. The goal of this research is to leverage this concept to prove that damage identification can be performed in practical applications using significantly fewer measurements than conventional inverse methods require. The dissertation investigates two sparsity-inducing methods, L1-norm optimization and non-negative least squares, applied to identifying damage from eigenvalues, a minimal sensor-based feature that results in an underdetermined inverse problem. It presents necessary conditions for solution uniqueness and a method to quantify the bounds on the non-unique solution space. The proposed methods are investigated using a wide range of numerical simulations and validated using a four-story lab-scale frame and a full-scale 17 m long aluminum truss. The findings of this study suggest that combining the attributes of L1-norm optimization and non-negative least squares can provide significant improvement over their standalone applications and over other existing methods of damage detection.
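
    A hedged sketch of the core idea, that non-negativity and sparsity constraints can pin down an underdetermined damage pattern from a handful of eigenvalue shifts (the sensitivity matrix S, the synthetic damage vector, and the noiseless measurements below are placeholders, not the dissertation's structural models):

```python
import numpy as np
from scipy.optimize import linprog, nnls

rng = np.random.default_rng(1)

# Synthetic setup: 8 measured eigenvalue shifts, 30 candidate damage locations.
# S maps element-wise stiffness reductions (non-negative) to eigenvalue shifts.
m, n = 8, 30
S = rng.standard_normal((m, n))
true_damage = np.zeros(n)
true_damage[[4, 17]] = [0.3, 0.5]      # damage is sparse and non-negative
d_lambda = S @ true_damage             # measured feature: eigenvalue changes

# Non-negative least squares: min ||S x - d_lambda||_2  subject to  x >= 0
x_nnls, _ = nnls(S, d_lambda)

# L1-norm (sparsity-promoting) recovery with non-negativity reduces to a
# linear program: min sum(x)  subject to  S x = d_lambda,  x >= 0
res = linprog(c=np.ones(n), A_eq=S, b_eq=d_lambda, bounds=(0, None), method="highs")
x_l1 = res.x

print("NNLS support:", np.flatnonzero(x_nnls > 1e-6))
print("L1   support:", np.flatnonzero(x_l1 > 1e-6))
```

    Whether either recovered support matches the true damage locations depends on how informative the sensitivity matrix is; the uniqueness conditions studied in the dissertation address exactly that question.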

    Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings

    We tackle the multi-party speech recovery problem by modeling the acoustics of reverberant chambers. Our approach exploits structured sparsity models to perform room modeling and speech recovery. We propose a scheme for characterizing the room acoustics from the unknown competing speech sources, relying on localization of the early images of the speakers by sparse approximation of the spatial spectra of the virtual sources in a free-space model. The images are then clustered by exploiting the low-rank structure of the spectro-temporal components belonging to each source. This enables us to identify the early support of the room impulse response function and its unique map to the room geometry. To further tackle the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through a convex optimization exploiting a joint sparsity model formulated on the spatio-spectral sparsity of the concurrent speech representation. The acoustic parameters are then incorporated to separate the individual speech signals through either structured sparse recovery or inverse filtering of the acoustic channels. Experiments conducted on real data recordings demonstrate the effectiveness of the proposed approach for multi-party speech recovery and recognition.
    Comment: 31 pages
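
    The joint-sparsity ingredient can be illustrated with a generic row-sparse recovery program solved by proximal gradient descent; this is a stand-in sketch under simplified assumptions (a synthetic dictionary, a fixed step size, and hand-picked regularization), not the paper's room-acoustics formulation:

```python
import numpy as np

def group_ista(A, B, lam, step, iters=500):
    """Proximal gradient iteration for the joint-sparsity (row-sparse) program
        min_X  0.5 * ||A X - B||_F^2 + lam * sum_i ||X[i, :]||_2,
    i.e. the same few rows of X are active across all channels."""
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(iters):
        Z = X - step * A.T @ (A @ X - B)       # gradient step on the smooth term
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(1 - step * lam / np.maximum(norms, 1e-12), 0.0)
        X = shrink * Z                          # row-wise soft thresholding (group prox)
    return X

# Toy usage: 4 channels sharing the same two active rows
rng = np.random.default_rng(2)
A = rng.standard_normal((15, 40))
X_true = np.zeros((40, 4))
X_true[[3, 21], :] = rng.standard_normal((2, 4))
B = A @ X_true
X_hat = group_ista(A, B, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 1e-2))   # rows with non-negligible energy
```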

    Computational Methods for Sparse Solution of Linear Inverse Problems

    The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
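
    Orthogonal matching pursuit is one of the canonical greedy algorithms such a survey covers; a minimal sketch (the toy dictionary, dimensions, and noiseless target are assumptions for illustration):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k columns ("atoms") of D
    and refit y on the selected set by least squares after every selection."""
    support, r = [], y.astype(float)
    for _ in range(k):
        scores = np.abs(D.T @ r)
        if support:
            scores[support] = -np.inf       # do not reselect an atom
        support.append(int(np.argmax(scores)))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        r = y - Ds @ coef                   # residual after refitting
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy usage: recover a 3-sparse coefficient vector from 30 random measurements
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]
x_hat = omp(D, D @ x_true, k=3)
print(np.flatnonzero(x_hat))   # typically recovers the support {5, 42, 77} in this noiseless case
```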

    Oracle-order Recovery Performance of Greedy Pursuits with Replacement against General Perturbations

    Applying the theory of compressive sensing in practice always requires taking different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), in which the support estimate is re-evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, i.e., the least-squares solution computed with the locations of the largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms differ only in their coefficients from the lower bound of oracle recovery for certain signals and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions.
    Comment: 27 pages, 4 figures, 5 tables
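
    Iterative hard thresholding is the simplest of the three pursuits named above; the sketch below (step size, problem dimensions, and noise level are illustrative assumptions) shows how the support estimate is re-evaluated, and possibly replaced, at every iteration:

```python
import numpy as np

def iht(A, y, k, step=1.0, iters=200):
    """Iterative hard thresholding: take a gradient step on 0.5*||y - A x||^2,
    then keep only the k largest-magnitude entries, so the support estimate
    can change (entries are replaced) from one iteration to the next."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)      # gradient step
        keep = np.argsort(np.abs(g))[-k:]     # new support: k largest entries
        x = np.zeros_like(x)
        x[keep] = g[keep]
    return x

# Toy usage with a small additive perturbation of the measurement vector
rng = np.random.default_rng(3)
A = rng.standard_normal((100, 256)) / np.sqrt(100)   # roughly unit-norm columns
x_true = np.zeros(256)
x_true[rng.choice(256, 4, replace=False)] = rng.standard_normal(4)
y = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = iht(A, y, k=4)
print(np.linalg.norm(x_hat - x_true))
```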

    On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

    Within the framework of affine scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust region and the bound constraints while seeking an approximate minimizer of the model. Focusing on bound-constrained systems of nonlinear equations, an inexact affine scaling method for large-scale problems, employing the inexact dogleg procedure, is described. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and locally fast convergence is shown under standard assumptions. The convergence analysis is performed without specifying the scaling matrix used to handle the bounds, so a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are also given.
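
    The classical dogleg step that such methods build on can be sketched as follows; this shows only the unconstrained building block, whereas the paper's variant uses an inexact step and affine scaling to keep the iterates inside the bounds:

```python
import numpy as np

def dogleg_step(J, F, delta):
    """Classical dogleg step for the model min_p 0.5*||F + J p||^2 within a
    trust region of radius delta: blend the steepest-descent (Cauchy) point
    and the Newton step."""
    g = J.T @ F                                   # gradient of the model at p = 0
    p_newton = np.linalg.solve(J, -F)             # full Newton step
    if np.linalg.norm(p_newton) <= delta:
        return p_newton
    p_cauchy = -(g @ g) / (g @ (J.T @ (J @ g))) * g   # model minimizer along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)     # truncated steepest descent
    # Otherwise walk from the Cauchy point toward the Newton step to the boundary.
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * p_cauchy @ d, p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d

# One illustrative step on a 2x2 nonlinear system F(x) = 0 at the point x = (2, 2)
x = np.array([2.0, 2.0])
F = np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**2])
J = np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])
print(dogleg_step(J, F, delta=0.5))
```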

    Subsquares Approach - Simple Scheme for Solving Overdetermined Interval Linear Systems

    In this work we present a new, simple but efficient scheme, the subsquares approach, for developing algorithms that enclose the solution set of overdetermined interval linear systems. We present two algorithms based on this scheme: a simple algorithm that serves as motivation, followed by a sequential algorithm. Both algorithms can be easily parallelized, and their features are discussed and tested numerically.
    Comment: submitted to PPAM 201
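
    A simplified sketch of the subsquares idea, enclosing a few randomly chosen square subsystems and intersecting the resulting boxes; a Krawczyk iteration with midpoint-radius interval arithmetic stands in for whatever square-system solver the paper actually uses, and the random subsquare selection is an assumption:

```python
import numpy as np

def imul(Ac, Ar, xc, xr):
    """Midpoint-radius product of the interval matrix [Ac +/- Ar] with the
    interval vector [xc +/- xr]."""
    return Ac @ xc, np.abs(Ac) @ xr + Ar @ np.abs(xc) + Ar @ xr

def krawczyk_enclosure(Ac, Ar, bc, br, iters=50):
    """Enclose the solution set of a square interval system [A] x = [b] by
    iterating the Krawczyk operator and intersecting with the current box."""
    n = Ac.shape[0]
    C = np.linalg.inv(Ac)                          # preconditioner: inverse midpoint
    x0 = C @ bc                                    # midpoint solution
    res_c = C @ (bc - Ac @ x0)                     # C * ([b] - [A] x0), center ...
    res_r = np.abs(C) @ (br + Ar @ np.abs(x0))     # ... and radius
    Ec, Er = np.eye(n) - C @ Ac, np.abs(C) @ Ar    # interval matrix I - C [A]
    xc, xr = x0.copy(), np.full(n, 10.0)           # crude initial box around x0
    for _ in range(iters):
        ec, er = imul(Ec, Er, xc - x0, xr)         # (I - C [A]) * ([x] - x0)
        kc, kr = x0 + res_c + ec, res_r + er       # Krawczyk image K([x])
        lo = np.maximum(xc - xr, kc - kr)          # intersect [x] with K([x])
        hi = np.minimum(xc + xr, kc + kr)
        xc, xr = (lo + hi) / 2, (hi - lo) / 2
    return xc, xr

def subsquares(Ac, Ar, bc, br, n_subsquares=5, seed=0):
    """Subsquares-style scheme (simplified): enclose randomly chosen square
    subsystems of the overdetermined interval system and intersect the boxes."""
    rng = np.random.default_rng(seed)
    m, n = Ac.shape
    lo, hi = np.full(n, -np.inf), np.full(n, np.inf)
    for _ in range(n_subsquares):
        rows = rng.choice(m, size=n, replace=False)
        c, r = krawczyk_enclosure(Ac[rows], Ar[rows], bc[rows], br[rows])
        lo, hi = np.maximum(lo, c - r), np.minimum(hi, c + r)
    return lo, hi

# Toy usage: 3 interval equations in 2 unknowns, built around x = (1, 2)
Ac = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
Ar = np.full((3, 2), 0.01)
bc, br = Ac @ np.array([1.0, 2.0]), np.full(3, 0.05)
lo, hi = subsquares(Ac, Ar, bc, br, n_subsquares=3)
print(lo, hi)
```

    Each square subsystem's enclosure contains the solution set of the full overdetermined system, so intersecting the boxes can only tighten the result; since every subsquare is handled independently, the scheme parallelizes naturally.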

    Convergence Properties of the Randomized Extended Gauss-Seidel and Kaczmarz Methods

    The Kaczmarz and Gauss-Seidel methods both solve a linear system Xβ = y by iteratively refining the solution estimate. Recent interest in these methods has been sparked by a proof of Strohmer and Vershynin showing that the randomized Kaczmarz method converges linearly in expectation to the solution. Lewis and Leventhal then proved a similar result for the randomized Gauss-Seidel algorithm. However, the behavior of both methods depends heavily on whether the system is underdetermined or overdetermined, and on whether it is consistent or not. Here we provide a unified theory of both methods and their variants for these different settings, and we draw connections between the two approaches. In doing so, we also provide a proof that an extended version of randomized Gauss-Seidel converges linearly to the least-norm solution in the underdetermined case (where the usual randomized Gauss-Seidel fails to converge). We detail analytically and empirically the convergence properties of both methods and their extended variants in all possible system settings. With this result, a complete and rigorous theory of both methods is furnished.
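
    A minimal sketch contrasting the two randomized iterations on an overdetermined consistent system (written here as A x = b in place of Xβ = y; the squared-norm sampling probabilities, fixed iteration counts, and absence of the extended variants are simplifying assumptions):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Row-action method: project the iterate onto a randomly chosen
    equation's hyperplane; rows are sampled proportionally to their
    squared norms, as in the Strohmer-Vershynin analysis."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = np.sum(A**2, axis=1) / np.sum(A**2)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x

def randomized_gauss_seidel(A, b, iters=5000, seed=0):
    """Column-action method: update one coordinate at a time to reduce the
    least-squares residual; columns are sampled proportionally to their
    squared norms."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = np.sum(A**2, axis=0) / np.sum(A**2)
    x, r = np.zeros(n), b.astype(float)    # maintain the residual r = b - A x
    for _ in range(iters):
        j = rng.choice(n, p=p)
        delta = (A[:, j] @ r) / (A[:, j] @ A[:, j])
        x[j] += delta
        r -= delta * A[:, j]
    return x

# Overdetermined consistent system: both methods approach the true solution
rng = np.random.default_rng(4)
A = rng.standard_normal((120, 30))
x_star = rng.standard_normal(30)
b = A @ x_star
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_star),
      np.linalg.norm(randomized_gauss_seidel(A, b) - x_star))
```

    The Kaczmarz update touches one row (one equation) per step, while the Gauss-Seidel update touches one column (one coordinate) per step; the extended variants studied in the paper augment these basic loops so that convergence also holds in the inconsistent or underdetermined settings where the plain versions break down.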