
    Scalable Tensor Factorizations for Incomplete Data

    The problem of incomplete data - i.e., data with missing or unknown values - in multi-way arrays is ubiquitous in biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, communication networks, etc. We consider the problem of how to factorize data sets with missing values, with the goal of capturing the underlying latent structure of the data and possibly reconstructing the missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP). In the presence of missing data, CP can be formulated as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) that uses a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse, large-scale data, e.g., 1000 x 1000 x 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP-WOPT on two real-world applications: a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes, and the problem of modeling computer network traffic where data may be absent due to the expense of the data collection process.
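
    As a rough illustration of the kind of objective CP-WOPT works with, the sketch below sets up the weighted least-squares function f(A,B,C) = 0.5 ||W ∘ (X − [[A,B,C]])||² for a dense 3-way tensor, with W a 0/1 indicator of the known entries, and hands it to a generic first-order solver (L-BFGS). The dense NumPy implementation and all names are illustrative assumptions; the authors' code is designed for sparse, large-scale data.

        # Minimal sketch of a CP-WOPT-style objective and gradient (assumed dense
        # NumPy implementation; the paper's algorithm targets sparse data).
        import numpy as np
        from scipy.optimize import minimize

        def cp_wopt_fg(theta, X, W, R):
            """f(A,B,C) = 0.5*||W*(X - [[A,B,C]])||_F^2 and its gradient.
            W is a 0/1 mask of known entries (so W**2 == W)."""
            I, J, K = X.shape
            A = theta[:I*R].reshape(I, R)
            B = theta[I*R:(I+J)*R].reshape(J, R)
            C = theta[(I+J)*R:].reshape(K, R)
            E = W * (X - np.einsum('ir,jr,kr->ijk', A, B, C))  # residual on known entries
            f = 0.5 * np.sum(E * E)
            gA = -np.einsum('ijk,jr,kr->ir', E, B, C)
            gB = -np.einsum('ijk,ir,kr->jr', E, A, C)
            gC = -np.einsum('ijk,ir,jr->kr', E, A, B)
            return f, np.concatenate([gA.ravel(), gB.ravel(), gC.ravel()])

        # Example: factorize a small tensor with roughly 70% missing entries.
        rng = np.random.default_rng(0)
        I, J, K, R = 20, 20, 20, 3
        X = rng.standard_normal((I, J, K))
        W = (rng.random((I, J, K)) < 0.3).astype(float)
        theta0 = 0.1 * rng.standard_normal((I + J + K) * R)
        res = minimize(cp_wopt_fg, theta0, args=(X, W, R), jac=True, method='L-BFGS-B')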

    A quadrilateral inverse-shell element with drilling degrees of freedom for shape sensing and structural health monitoring

    The inverse Finite Element Method (iFEM) is a state-of-the-art methodology, originally introduced by Tessler and Spangler, for real-time reconstruction of full-field structural displacements in plate and shell structures that are instrumented with strain sensors. This inverse problem is commonly known as shape sensing. In this effort, a new four-node quadrilateral inverse-shell element, iQS4, is developed that expands the library of existing iFEM-based elements. This new element includes hierarchical drilling rotation degrees of freedom (DOF) and further extends the practical usefulness of iFEM for shape-sensing analysis of large-scale structures. The iFEM/iQS4 formulation is derived from a weighted-least-squares functional that has Mindlin theory as its kinematic framework. Two validation problems, (1) a cantilevered plate under a static transverse force near the free tip and (2) a short cantilever beam under shear loading, are solved and discussed in detail. Following the validation cases, the applicability of the iQS4 element to more complex structures is demonstrated by the analysis of a thin-walled cylinder. For this problem, the effects of noisy strain measurements on the accuracy of the iFEM solution are examined using strain measurements that involve five and ten percent random noise, respectively. Finally, the effects of sensor locations, the number of sensors, the discretization of the geometry, and noise in the strain measurements are assessed with respect to solution accuracy.
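
    In schematic form, an iFEM functional of the kind referenced here penalizes, in a weighted least-squares sense, the mismatch between the strain measures implied by the element's assumed (Mindlin) kinematics and their counterparts evaluated from the measured strains; the symbols and weights below are generic placeholders rather than the exact iQS4 notation.

        % Schematic weighted-least-squares functional (generic iFEM form)
        \Phi(\mathbf{u}) =
              w_e \,\lVert \mathbf{e}(\mathbf{u}) - \mathbf{e}^{\varepsilon} \rVert^2
            + w_k \,\lVert \mathbf{k}(\mathbf{u}) - \mathbf{k}^{\varepsilon} \rVert^2
            + w_g \,\lVert \mathbf{g}(\mathbf{u}) - \mathbf{g}^{\varepsilon} \rVert^2

    Here e, k, and g denote membrane, bending, and transverse-shear strain measures computed from the nodal DOF, the ε-superscripted quantities are their counterparts obtained from the sensor data, and the weights control the contribution of components that are not instrumented.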

    Significance Regression: Robust Regression for Collinear Data

    This paper examines robust linear multivariable regression from collinear data. A brief review of M-estimators discusses the strengths of this approach for tolerating outliers and/or perturbations in the error distributions. The review reveals that M-estimation may be unreliable if the data exhibit collinearity. Next, significance regression (SR) is discussed. SR is a successful method for treating collinearity but is not robust. A new significance regression algorithm for the weighted-least-squares error criterion (SR-WLS) is developed. Using the weights computed via M-estimation with the SR-WLS algorithm yields an effective method that robustly mollifies collinearity problems. Numerical examples illustrate the main points.
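
    A compact way to see how M-estimation weights feed a weighted least-squares criterion is the IRLS loop sketched below: Huber weights are recomputed from scaled residuals and passed to a weighted solve. In the paper the weighted solve is the SR-WLS algorithm (which also treats collinearity); here it is replaced by a plain weighted least-squares step, and all names are illustrative.

        # IRLS sketch: Huber M-estimation weights driving a weighted LS solve.
        # The collinearity-aware SR-WLS step is replaced by ordinary weighted LS.
        import numpy as np

        def huber_weights(u, c=1.345):
            """Huber psi(u)/u weights for residuals u already scaled by a robust sigma."""
            a = np.abs(u)
            return np.where(a > c, c / np.maximum(a, 1e-12), 1.0)

        def irls_weighted_ls(X, y, n_iter=20):
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            for _ in range(n_iter):
                r = y - X @ beta
                sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
                w = huber_weights(r / sigma)
                sw = np.sqrt(w)
                # weighted least-squares solve; SR-WLS would treat collinearity here
                beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
            return beta, w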

    Can coercive formulations lead to fast and accurate solution of the Helmholtz equation?

    A new, coercive formulation of the Helmholtz equation was introduced in [Moiola, Spence, SIAM Rev. 2014]. In this paper we investigate h-version Galerkin discretisations of this formulation, and the iterative solution of the resulting linear systems. We find that the coercive formulation behaves similarly to the standard formulation in terms of the pollution effect (i.e. to maintain accuracy as k → ∞, h must decrease with k at the same rate as for the standard formulation). We prove k-explicit bounds on the number of GMRES iterations required to solve the linear system of the new formulation when it is preconditioned with a prescribed symmetric positive-definite matrix. Even though the number of iterations grows with k, these are the first such rigorous bounds on the number of GMRES iterations for a preconditioned formulation of the Helmholtz equation where the preconditioner is a symmetric positive-definite matrix.
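
    For concreteness, the snippet below shows one generic way to count preconditioned GMRES iterations with SciPy on a 1D finite-difference Helmholtz system, using the symmetric positive-definite matrix K + k²M as an illustrative preconditioner. Both the discretisation and this choice of preconditioner are assumptions made for the example, not the formulation or the prescribed preconditioner analysed in the paper.

        # Illustrative iteration count for preconditioned GMRES on a 1D model problem.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def helmholtz_1d(n, k):
            h = 1.0 / (n + 1)
            K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') / h**2
            M = sp.identity(n, format='csc')
            return K - k**2 * M, (K + k**2 * M).tocsc()   # system matrix, SPD preconditioner

        k, n = 40.0, 400
        A, P = helmholtz_1d(n, k)
        b = np.ones(n)
        P_solve = spla.factorized(P)                      # sparse LU solve with the SPD matrix
        M_op = spla.LinearOperator(A.shape, matvec=P_solve)

        residuals = []
        x, info = spla.gmres(A, b, M=M_op,
                             callback=lambda rk: residuals.append(rk),
                             callback_type='pr_norm')
        print(info, len(residuals))                       # convergence flag, inner-iteration count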

    Weighted and Robust Archetypal Analysis

    Archetypal analysis represents observations in a multivariate data set as convex combinations of a few extremal points lying on the boundary of the convex hull. Data points that deviate from the majority have great influence on the solution; in fact, a single outlier can break down the archetype solution. This paper adapts the original algorithm to be a robust M-estimator and presents an iteratively reweighted least squares fitting algorithm. As a required first step, the weighted archetypal problem is formulated and solved. The algorithm is demonstrated using both an artificial and a real-world example.
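
    The outer loop of such an iteratively reweighted scheme can be sketched as follows: robust (Huber-type) observation weights are recomputed from the reconstruction residuals and passed back to a weighted archetypal fit. The function fit_weighted_archetypes below is a hypothetical placeholder for a solver of the weighted archetypal problem, not a real library routine, and the loop is only an illustration of the reweighting idea.

        # Illustrative IRLS outer loop around a (hypothetical) weighted archetypal solver.
        import numpy as np

        def robust_archetypes(X, k, fit_weighted_archetypes, n_iter=10, c=1.345):
            n = X.shape[0]
            w = np.ones(n)                                   # start from the unweighted problem
            for _ in range(n_iter):
                Z, alphas = fit_weighted_archetypes(X, k, w) # archetypes and mixture coefficients
                r = np.linalg.norm(X - alphas @ Z, axis=1)   # per-observation reconstruction residuals
                sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
                u = r / sigma
                w = np.where(u > c, c / u, 1.0)              # Huber weights: outliers are downweighted
            return Z, alphas, w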

    Robust state estimation using mixed integer programming

    This letter describes a robust state estimator based on the solution of a mixed integer program. A tolerance range is associated with each measurement, and an estimate is chosen to maximize the number of estimated measurements that remain within tolerance (or, equivalently, to minimize the number of measurements out of tolerance). Some small-scale examples are given which suggest that this approach is robust in the presence of gross errors, is not susceptible to leverage points, and can solve some pathological cases that have previously caused problems for robust estimation algorithms.
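
    One way to write down the "maximize the number of in-tolerance measurements" idea, assuming a linear measurement model y ≈ H x, is the big-M mixed integer program sketched below and solved with scipy.optimize.milp. The measurement model, tolerance vector, and big-M constant are illustrative assumptions; the letter's exact formulation may differ.

        # Big-M MILP sketch: minimize the number of measurements outside tolerance.
        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        def robust_state_estimate(H, y, tol, big_M=1e3):
            m, n = H.shape
            # Variables: [x (n, continuous, free), z (m, binary)];
            # z_i = 1 allows measurement i to fall outside its tolerance band.
            c = np.concatenate([np.zeros(n), np.ones(m)])    # minimize the number out of tolerance
            #   H x - M z <= y + tol   and   -H x - M z <= -(y - tol)
            A = np.block([[ H, -big_M * np.eye(m)],
                          [-H, -big_M * np.eye(m)]])
            ub = np.concatenate([y + tol, -y + tol])
            cons = LinearConstraint(A, -np.inf, ub)
            bounds = Bounds(np.concatenate([np.full(n, -np.inf), np.zeros(m)]),
                            np.concatenate([np.full(n,  np.inf), np.ones(m)]))
            integrality = np.concatenate([np.zeros(n), np.ones(m)])
            res = milp(c, constraints=cons, bounds=bounds, integrality=integrality)
            return res.x[:n], res.x[n:] > 0.5                # state estimate, out-of-tolerance flags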

    On a Problem of Weighted Low-Rank Approximation of Matrices

    We study a weighted low-rank approximation that is inspired by a problem of constrained low-rank approximation of matrices as initiated by the work of Golub, Hoffman, and Stewart (Linear Algebra and Its Applications, 88-89 (1987), 317-327). Our results reduce to those of Golub, Hoffman, and Stewart in the limiting cases. We also propose an algorithm based on the alternating direction method to solve our weighted low-rank approximation problem and compare it with state-of-the-art general algorithms such as weighted total alternating least squares and the EM algorithm.
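
    The objective being studied, min over X and Y of ||W ∘ (A − XY)||_F², can be made concrete with the simple alternating weighted least-squares sketch below, which updates the rows of X and the columns of Y in turn. This plain alternating scheme is only an illustration of the objective; it is not the alternating-direction algorithm proposed in the paper, and the names (A, W, rank r) are assumptions.

        # Alternating weighted least squares for min ||W .* (A - X Y)||_F^2 (illustration only).
        import numpy as np

        def weighted_low_rank(A, W, r, n_iter=50, seed=0):
            m, n = A.shape
            rng = np.random.default_rng(seed)
            X = rng.standard_normal((m, r))
            Y = rng.standard_normal((r, n))
            for _ in range(n_iter):
                for i in range(m):                           # row i of X: weighted LS against Y^T
                    sw = np.sqrt(W[i])
                    X[i] = np.linalg.lstsq(sw[:, None] * Y.T, sw * A[i], rcond=None)[0]
                for j in range(n):                           # column j of Y: weighted LS against X
                    sw = np.sqrt(W[:, j])
                    Y[:, j] = np.linalg.lstsq(sw[:, None] * X, sw * A[:, j], rcond=None)[0]
            return X, Y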