
    Perturbation Analysis and Randomized Algorithms for Large-Scale Total Least Squares Problems

    In this paper, we present perturbation analysis and randomized algorithms for the total least squares (TLS) problem. We derive a perturbation bound and check its sharpness by numerical experiments. Motivated by the recently popular probabilistic algorithms for low-rank approximations, we develop randomized algorithms for the TLS and the truncated total least squares (TTLS) solutions of large-scale discrete ill-posed problems, which greatly reduce the computational time while maintaining good accuracy. Comment: 27 pages, 10 figures, 8 tables.
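
    The baseline that randomized TLS methods accelerate is a single SVD of the augmented matrix [A b]. The sketch below shows that classical solution in NumPy on a toy problem with noise in both A and b; the variable names, tolerance, and toy data are illustrative, and the paper's randomized algorithms would replace the full SVD with a sketched one.

        import numpy as np

        def tls_solve(A, b):
            """Classical TLS: min ||[dA db]||_F subject to (A + dA) x = b + db."""
            m, n = A.shape
            C = np.column_stack([A, b])               # augmented matrix [A b]
            _, _, Vt = np.linalg.svd(C, full_matrices=False)
            v = Vt[-1]                                # right singular vector of the smallest singular value
            if abs(v[n]) < 1e-14:                     # illustrative tolerance
                raise ValueError("TLS solution does not exist (last component ~ 0)")
            return -v[:n] / v[n]

        # toy example with noise in both A and b
        rng = np.random.default_rng(0)
        x_true = np.array([1.0, -2.0, 0.5])
        A = rng.standard_normal((100, 3))
        A_noisy = A + 0.01 * rng.standard_normal(A.shape)
        b_noisy = A @ x_true + 0.01 * rng.standard_normal(100)
        print(tls_solve(A_noisy, b_noisy))            # close to x_true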

    De-Biasing the Dynamic Mode Decomposition for Applied Koopman Spectral Analysis

    The Dynamic Mode Decomposition (DMD), a popular method for performing data-driven Koopman spectral analysis, has gained increased adoption as a technique for extracting dynamically meaningful spatio-temporal descriptions of fluid flows from snapshot measurements. Often, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD regularly fails to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in the subspace projection step, as in total least-squares problems, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluid-flow examples.
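
    A hedged sketch of the two stages described here, assuming the standard DMD recipe (SVD of the first snapshot matrix, reduced operator, eigendecomposition) and a simplified reading of the de-biasing step: both snapshot matrices are projected onto the leading right singular subspace of the stacked matrix [X; Y] before DMD is applied. Function and variable names are mine, not the paper's.

        import numpy as np

        def dmd(X, Y, r):
            """Standard DMD of the snapshot pair (X, Y ~ A X), truncated to rank r."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            U, s, Vt = U[:, :r], s[:r], Vt[:r]
            Atilde = (U.conj().T @ Y @ Vt.conj().T) / s        # r x r reduced operator
            eigvals, W = np.linalg.eig(Atilde)
            modes = Y @ Vt.conj().T @ np.diag(1.0 / s) @ W     # exact DMD modes
            return eigvals, modes

        def tdmd(X, Y, r):
            """TLS-style de-biased DMD: project X and Y together, then apply DMD."""
            Z = np.vstack([X, Y])                              # augmented snapshot matrix
            _, _, Vt = np.linalg.svd(Z, full_matrices=False)
            P = Vt[:r].conj().T @ Vt[:r]                       # projector onto leading subspace
            return dmd(X @ P, Y @ P, r)

        # toy demo: noisy snapshots of x_{t+1} = A_true @ x_t
        rng = np.random.default_rng(0)
        A_true = np.array([[0.95, 0.08], [-0.08, 0.95]])
        snaps = np.empty((2, 41)); snaps[:, 0] = [1.0, 0.5]
        for t in range(40):
            snaps[:, t + 1] = A_true @ snaps[:, t]
        snaps += 0.002 * rng.standard_normal(snaps.shape)
        X, Y = snaps[:, :-1], snaps[:, 1:]
        print(np.linalg.eigvals(A_true))   # compare with dmd(X, Y, 2)[0] and tdmd(X, Y, 2)[0]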

    Analyzing the Quantum Annealing Approach for Solving Linear Least Squares Problems

    With the advent of quantum computers, researchers are exploring whether quantum mechanics can be leveraged to solve important problems in ways that may provide advantages not possible with conventional or classical methods. A previous work by O'Malley and Vesselinov in 2016 briefly explored using a quantum annealing machine to solve linear least squares problems for real numbers. They suggested that it is best suited for binary and sparse versions of the problem. In our work, we propose a more compact way to represent variables using two's and one's complement on a quantum annealer. We then give an in-depth theoretical analysis of this approach, showing the conditions under which this method may outperform traditional classical methods for solving general linear least squares problems. Finally, based on our analysis and observations, we discuss potentially promising areas of further research where quantum annealing can be especially beneficial. Comment: 16 pages, 2 appendices.
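
    The usual route from least squares to an annealer is to encode each real unknown as a weighted sum of bits and expand ||Ax - b||^2 into a QUBO. The sketch below does this with a two's-complement integer encoding, which is one plausible reading of the representation discussed here; the bit width, the toy system, and the brute-force check (a stand-in for annealer sampling) are illustrative choices.

        import numpy as np

        def twos_complement_encoding(n_vars, n_bits):
            """Encoding matrix E with x = E @ q, q binary, per-variable two's complement."""
            weights = np.array([2.0 ** k for k in range(n_bits - 1)] + [-2.0 ** (n_bits - 1)])
            E = np.zeros((n_vars, n_vars * n_bits))
            for j in range(n_vars):
                E[j, j * n_bits:(j + 1) * n_bits] = weights
            return E

        def least_squares_qubo(A, b, n_bits=4):
            """Return Q so that q^T Q q = ||A E q - b||^2 + const for binary q."""
            E = twos_complement_encoding(A.shape[1], n_bits)
            M = A @ E
            Q = M.T @ M
            Q[np.diag_indices_from(Q)] -= 2.0 * (M.T @ b)   # fold linear term into diagonal (q_i^2 = q_i)
            return Q

        # tiny instance with integer solution x = [3, -2]
        A = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 1.5]])
        b = A @ np.array([3.0, -2.0])
        Q = least_squares_qubo(A, b, n_bits=4)

        # brute-force minimization over all bit strings (stand-in for annealer sampling)
        bits = [np.array([(z >> i) & 1 for i in range(Q.shape[0])], dtype=float)
                for z in range(2 ** Q.shape[0])]
        q_best = min(bits, key=lambda q: q @ Q @ q)
        print(twos_complement_encoding(A.shape[1], 4) @ q_best)   # expected to recover [3, -2]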

    Efficient Algorithms for Positive Semi-Definite Total Least Squares Problems, Minimum Rank Problem and Correlation Matrix Computation

    We have recently presented a method to solve an overdetermined linear system of equations with multiple right-hand-side vectors, where the unknown matrix is required to be symmetric and positive definite. The coefficient and right-hand-side matrices are respectively named the data and target matrices. A more complicated problem is encountered when the unknown matrix is required to be positive semi-definite. The problem arises in estimating the compliance matrix to model deformable structures and in approximating correlation and covariance matrices in financial modeling. Several methods have been proposed for solving such problems under the unrealistic assumption that the data matrix is error free. Here, considering error in both the measured data and target matrices, we propose a new approach to solve a positive semi-definite constrained total least squares problem. We first consider solving the problem when the rank of the unknown matrix is known, by defining a new error formulation for the positive semi-definite total least squares problem and using optimization methods on Stiefel manifolds. We prove quadratic convergence of our proposed approach. We then describe how to generalize our proposed method to solve the general positive semi-definite total least squares problem. We further apply the proposed approach to solve the minimum rank problem and the problem of computing a correlation matrix. Comparative numerical results show the efficiency of our proposed algorithms. Finally, Dolan-Moré performance profiles are shown to summarize our comparative study. Comment: 22 pages, 16 tables and 4 figures.
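
    For orientation only, here is the naive baseline such methods improve upon: an unconstrained least-squares fit of the unknown matrix followed by a projection onto the positive semi-definite cone via eigenvalue clipping. This is not the paper's Stiefel-manifold algorithm and it ignores errors in the data matrix; names and the toy data are illustrative.

        import numpy as np

        def psd_ls_fit(D, T):
            """Rough PSD estimate X for D @ X ~ T with X symmetric positive semi-definite."""
            X, *_ = np.linalg.lstsq(D, T, rcond=None)    # unconstrained least squares
            X = 0.5 * (X + X.T)                           # symmetrize
            w, V = np.linalg.eigh(X)
            return (V * np.clip(w, 0.0, None)) @ V.T      # clip negative eigenvalues

        rng = np.random.default_rng(0)
        D = rng.standard_normal((50, 4))
        X_true = np.diag([2.0, 1.0, 0.5, 0.0])            # rank-deficient PSD target
        print(psd_ls_fit(D, D @ X_true + 0.01 * rng.standard_normal((50, 4))))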

    Condition numbers for the truncated total least squares problem and their estimations

    In this paper, we present explicit expressions for the mixed and componentwise condition numbers of the truncated total least squares (TTLS) solution of Ax ≈ b under the genericity condition, where A is an m×n real data matrix and b is a real m-vector. Moreover, we show that the normwise, componentwise and mixed condition numbers for the TTLS problem recover the previous corresponding counterparts for the total least squares (TLS) problem when the truncation level of the TTLS problem is n. When A is a structured matrix, structured perturbations for the structured truncated TLS (STTLS) problem are investigated, and the corresponding explicit expressions for the structured normwise, componentwise and mixed condition numbers are obtained. Furthermore, the relationships between the structured and unstructured normwise, componentwise and mixed condition numbers for the STTLS problem are studied. Based on small-sample statistical condition estimation (SCE), reliable estimation algorithms for both the unstructured and structured normwise, mixed and componentwise condition numbers are devised, which utilize the SVD of the augmented matrix [A b]. The proposed condition estimation algorithms are efficient and can be integrated into the SVD-based direct solver for small- and medium-size TTLS problems to give error estimates for the numerical TTLS solution. Numerical experiments are reported to illustrate the reliability of the proposed estimation algorithms, and the results coincide with our theoretical findings.
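
    The object whose conditioning is studied here, the TTLS solution, has a closed form in terms of the SVD of the augmented matrix [A b]: partition the right singular vector matrix after the first k columns and after the first n rows, and combine the two right-hand blocks. A minimal NumPy sketch of that formula follows; the truncation level k and the toy data are illustrative.

        import numpy as np

        def ttls_solve(A, b, k):
            """Truncated TLS solution x_k = -V12 pinv(V22) from the SVD of [A b]."""
            m, n = A.shape
            C = np.column_stack([A, b])
            _, _, Vt = np.linalg.svd(C, full_matrices=False)
            V = Vt.T
            V12 = V[:n, k:]                  # top-right block of the partitioned V
            V22 = V[n:, k:]                  # bottom-right block, shape (1, n + 1 - k)
            denom = np.dot(V22.ravel(), V22.ravel())
            if denom < 1e-14:
                raise ValueError("genericity condition violated (V22 ~ 0)")
            return -(V12 @ V22.ravel()) / denom

        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 5))
        b = A @ np.ones(5) + 0.01 * rng.standard_normal(30)
        print(ttls_solve(A, b, k=3))         # regularized solution; k = 5 gives the plain TLS solution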

    What can Lattice QCD theorists learn from NMR spectroscopists?

    Euclidean-time hadron correlation functions computed in Lattice QCD (LQCD) are modeled by a sum of decaying exponentials, reminiscent of the exponentially damped sinusoid models of free induction decay (FID) in Nuclear Magnetic Resonance (NMR) spectroscopy. We present our initial progress in studying how data modeling techniques commonly used in NMR perform when applied to LQCD data. Comment: 11 pages, svmult.cls. Minor changes in response to reviewers' comments. To appear in the Proceedings of the Third International Workshop on Numerical Analysis and Lattice QCD, Edinburgh, Scotland, 30 Jun - 04 Jul 200
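
    The shared modeling task is a multi-exponential fit. Below is a minimal sketch using an ordinary nonlinear least-squares fit of a two-exponential model rather than any specific NMR technique from the paper; the decay rates, noise level, and starting guesses are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_exp(t, a0, e0, a1, e1):
            """Two-state model: a0 * exp(-e0 t) + a1 * exp(-e1 t)."""
            return a0 * np.exp(-e0 * t) + a1 * np.exp(-e1 * t)

        t = np.arange(1, 32, dtype=float)
        rng = np.random.default_rng(1)
        data = two_exp(t, 1.0, 0.25, 0.4, 0.8) * (1 + 0.01 * rng.standard_normal(t.size))

        popt, pcov = curve_fit(two_exp, t, data, p0=[1.0, 0.2, 0.5, 1.0])
        print(popt)   # fitted amplitudes and decay rates ("energies")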

    Runtime Guarantees for Regression Problems

    We study theoretical runtime guarantees for a class of optimization problems that occur in a wide variety of inference problems. These problems are motivated by the lasso framework and have applications in machine learning and computer vision. Our work shows a close connection between these problems and core questions in algorithmic graph theory. While this connection demonstrates the difficulties of obtaining runtime guarantees, it also suggests an approach of using techniques originally developed for graph algorithms. We then show that most of these problems can be formulated as a grouped least squares problem, and we give efficient algorithms for this formulation. Our algorithms rely on routines for solving quadratic minimization problems, which in turn are equivalent to solving linear systems. Finally, we present some experimental results on applying our approximation algorithm to image processing problems.
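
    To make the reduction concrete: a grouped least squares objective of the form sum_i ||A_i x - b_i||_2 can be driven to a sequence of linear solves by iteratively reweighted least squares, a classical route that is not necessarily the authors' algorithm. The sketch below is illustrative; the damping parameter and iteration count are arbitrary.

        import numpy as np

        def grouped_ls_irls(groups, n, iters=50, eps=1e-8):
            """groups: list of (A_i, b_i) pairs; minimizes sum_i ||A_i x - b_i||_2 by IRLS."""
            x = np.zeros(n)
            for _ in range(iters):
                H = np.zeros((n, n))
                g = np.zeros(n)
                for A_i, b_i in groups:
                    w = 1.0 / max(np.linalg.norm(A_i @ x - b_i), eps)   # group weight
                    H += w * (A_i.T @ A_i)
                    g += w * (A_i.T @ b_i)
                x = np.linalg.solve(H + eps * np.eye(n), g)              # one linear system per sweep
            return x

        rng = np.random.default_rng(0)
        groups = [(rng.standard_normal((5, 3)), rng.standard_normal(5)) for _ in range(4)]
        print(grouped_ls_irls(groups, n=3))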

    Isogeometric Least-squares Collocation Method with Consistency and Convergence Analysis

    In this paper, we present the isogeometric least-squares collocation (IGA-L) method, which determines the numerical solution by making the approximate differential operator fit the real differential operator in a least-squares sense. The number of collocation points employed in IGA-L can be larger than the number of unknowns. Theoretical analysis and numerical examples presented in this paper show the superiority of IGA-L over state-of-the-art collocation methods. First, a small increase in the number of collocation points in IGA-L leads to a large improvement in the accuracy of its numerical solution. Second, the IGA-L method is more flexible and more stable because the number of collocation points is adjustable. Third, IGA-L is convergent in some cases of singular parameterization. Moreover, consistency and convergence analyses are also developed in this paper.
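
    The basic mechanism, more collocation points than unknowns with the residual minimized by least squares, can be shown on a one-dimensional model problem. The sketch below uses a sine basis as a stand-in for the isogeometric (spline/NURBS) basis of the paper, so it illustrates only the least-squares collocation step; the problem, basis size, and point count are arbitrary.

        import numpy as np

        def f(x):                                          # right-hand side for u(x) = sin(pi x)
            return -np.pi**2 * np.sin(np.pi * x)

        K, M = 8, 24                                       # unknowns vs. collocation points (M > K)
        x = np.linspace(0.0, 1.0, M + 2)[1:-1]             # interior collocation points
        k = np.arange(1, K + 1)
        D2 = -(k * np.pi)**2 * np.sin(np.outer(x, k) * np.pi)   # second derivatives of the basis
        c, *_ = np.linalg.lstsq(D2, f(x), rcond=None)      # overdetermined collocation system

        u_approx = np.sin(np.outer(x, k) * np.pi) @ c
        print(np.max(np.abs(u_approx - np.sin(np.pi * x))))     # error vs. exact solution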

    Distributed Least-Squares Iterative Methods in Networks: A Survey

    Many science and engineering applications involve solving a linear least-squares system formed from field measurements. In distributed cyber-physical systems (CPS), each sensor node used for measurement often knows only a few independent rows of the least-squares system. To compute the least-squares solution, the nodes would need to gather all measurements at a centralized location and compute the solution there. Such data collection and computation are inefficient because of bandwidth and time constraints, and are sometimes infeasible because of data privacy concerns. Distributed computation is therefore strongly preferred or even required in many real-world applications, e.g., smart grids and target tracking. For large sparse systems of linear equations, iterative methods are natural candidates for computing least-squares solutions, and there is a large body of work on them; however, most of it concerns the efficiency of centralized or parallel computation, and only a few methods are explicitly designed for distributed computation or have the potential to be applied in distributed networks. This paper surveys representative iterative methods from several research communities. Some of them were not originally designed for this setting, so we slightly modify them to suit our requirements and maintain consistency. For each method, we sketch the skeleton of the algorithm first and then analyze its time-to-completion and communication cost. To the best of our knowledge, this is the first survey of distributed least-squares methods in distributed networks.
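
    One of the simplest members of this family is decentralized gradient descent, where each node holds only its own rows (A_i, b_i), mixes its local iterate with its neighbors' through a doubly stochastic matrix, and then takes a local gradient step. The sketch below simulates this on a four-node ring; the topology, step size, and iteration count are illustrative, and the scheme shown is generic rather than any particular algorithm from the survey.

        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, rows_per_node, n = 4, 10, 3
        x_true = rng.standard_normal(n)
        A = [rng.standard_normal((rows_per_node, n)) for _ in range(n_nodes)]
        b = [A_i @ x_true + 0.01 * rng.standard_normal(rows_per_node) for A_i in A]

        # doubly stochastic mixing matrix for a ring of 4 nodes
        W = np.array([[0.50, 0.25, 0.00, 0.25],
                      [0.25, 0.50, 0.25, 0.00],
                      [0.00, 0.25, 0.50, 0.25],
                      [0.25, 0.00, 0.25, 0.50]])

        X = np.zeros((n_nodes, n))            # one local iterate per node
        alpha = 0.02                          # illustrative step size
        for _ in range(2000):
            grads = np.array([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_nodes)])
            X = W @ X - alpha * grads         # mix with neighbors, then local gradient step
        print(X.mean(axis=0), x_true)         # local iterates approach the least-squares solution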

    A New Error in Variables Model for Solving Positive Definite Linear System Using Orthogonal Matrix Decompositions

    The need to estimate a positive definite solution to an overdetermined linear system of equations with multiple right-hand-side vectors arises in several process control contexts. The coefficient and right-hand-side matrices are respectively named the data and target matrices. A number of optimization methods have been proposed for solving such problems, in which the data matrix is unrealistically assumed to be error free. Here, considering error in both the measured data and target matrices, we present an approach to solve a positive definite constrained linear system of equations based on a newly defined error function. To minimize the defined error function, we derive necessary and sufficient optimality conditions and outline a direct algorithm to compute the solution. We provide a comparison of our proposed approach and two existing methods, the interior point method and a method based on quadratic programming. Two important characteristics of our proposed method, as compared to the existing methods, are that it computes the solution directly and that it accounts for error in both the data and target matrices. Moreover, numerical test results show that the new approach leads to smaller standard deviations of error entries and a smaller effective rank, as desired in control problems. Furthermore, in a comparative study using the Dolan-Moré performance profiles, we show the approach to be more efficient. Comment: 22 pages, 10 figures, 10 tables.