20 research outputs found

    Algorithms for Toeplitz Matrices with Applications to Image Deblurring

    In this thesis, we present the O(n(log n)^2) superfast linear least squares Schur algorithm (ssschur). The algorithm provides a fast way of solving linear equations or linear least squares problems whose coefficient matrices have low displacement rank. It is based on the O(n^2) Schur algorithm, sped up via the FFT. The algorithm solves an ill-conditioned Toeplitz-like system using Tikhonov regularization; the regularized system is Toeplitz-like of displacement rank 4. We also show the effect of the choice of regularization parameter on the quality of the reconstructed image.
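
    As a rough illustration of the Tikhonov step (not of the superfast algorithm itself), the following Python sketch sets up a 1D Toeplitz blurring model and solves the regularized normal equations densely in O(n^3); the blur kernel, noise level, and parameter lam are illustrative stand-ins.

        import numpy as np
        from scipy.linalg import solve, toeplitz

        n = 256
        t = np.exp(-0.05 * np.arange(n) ** 2)           # illustrative blur kernel
        T = toeplitz(t)                                 # symmetric Toeplitz blur matrix

        rng = np.random.default_rng(0)
        x_true = rng.random(n)                          # 1D stand-in for an image
        b = T @ x_true + 1e-3 * rng.standard_normal(n)  # noisy blurred data

        # Tikhonov: minimize ||T x - b||^2 + lam^2 ||x||^2, i.e. solve the
        # regularized normal equations (T^T T + lam^2 I) x = T^T b.
        lam = 1e-2
        x_reg = solve(T.T @ T + lam**2 * np.eye(n), T.T @ b, assume_a="pos")
        print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))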

    A fast semi-direct least squares algorithm for hierarchically block separable matrices

    We present a fast algorithm for linear least squares problems governed by hierarchically block separable (HBS) matrices. Such matrices are generally dense but data-sparse and can describe many important operators, including those derived from asymptotically smooth radial kernels that are not too oscillatory. The algorithm is based on a recursive skeletonization procedure that exposes this sparsity and solves the dense least squares problem as a larger, equality-constrained, sparse one. It relies on a sparse QR factorization coupled with iterative weighted least squares methods. In essence, our scheme consists of a direct component, comprising matrix compression and factorization, followed by an iterative component to enforce certain equality constraints. At most two iterations are typically required for problems that are not too ill-conditioned. For an M × N HBS matrix with M ≥ N having bounded off-diagonal block rank, the algorithm has optimal O(M + N) complexity. If the rank increases with the spatial dimension, as is common for operators that are singular at the origin, then this becomes O(M + N) in 1D, O(M + N^(3/2)) in 2D, and O(M + N^2) in 3D. We illustrate the performance of the method on both over- and underdetermined systems in a variety of settings, with an emphasis on radial basis function approximation and efficient updating and downdating. Comment: 24 pages, 8 figures, 6 tables; to appear in SIAM J. Matrix Anal. Appl.
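
    The interplay of the direct and iterative components can be sketched with the classical weighting trick for equality-constrained least squares: the constraints are stacked with a large weight into an ordinary least squares problem, then the solution is refined by re-solving on the residual (cf. the "at most two iterations" reported above). The dense random A, C, b, d below are toy stand-ins for the paper's compressed sparse systems.

        import numpy as np

        # minimize ||A x - b||  subject to  C x = d, approximated by the
        # weighted problem  minimize || [tau*C; A] x - [tau*d; b] ||.
        rng = np.random.default_rng(1)
        m, n, p = 40, 20, 5
        A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
        C = rng.standard_normal((p, n)); d = rng.standard_normal(p)

        tau = 1e8
        M = np.vstack([tau * C, A])
        f = np.concatenate([tau * d, b])
        x = np.linalg.lstsq(M, f, rcond=None)[0]

        # One refinement solve on the residual to sharpen the constraint:
        r = np.concatenate([tau * (d - C @ x), b - A @ x])
        x += np.linalg.lstsq(M, r, rcond=None)[0]
        print(np.linalg.norm(C @ x - d))   # constraint violation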

    Detecting phase synchronization in coupled oscillators by combining multivariate singular spectrum analysis and fast factorization of structured matrices

    It is shown that a fast, reliable block Fourier algorithm for the factorization of structured matrices improves the computational efficiency of a known method for detecting phase synchronization in large systems of coupled oscillators, based on multivariate singular spectrum analysis. In this paper, a novel algorithm for the detection of cluster synchronization in a system of coupled oscillators is proposed. The block Toeplitz covariance matrix of the total trajectory matrix is efficiently block-diagonalized by means of the fast Fourier transform after first embedding it into a block circulant matrix. The synchronization structure of the underlying multivariate data set is determined from the 2D spatiotemporal eigenvalue spectrum. The benefits of the proposed method are illustrated by simulations of phase synchronization effects in a chain of coupled chaotic Rössler oscillators and by multichannel electroencephalogram (EEG) recordings from epilepsy patients.
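
    The embedding step can be sketched in the scalar case (the paper applies the same construction blockwise): a Toeplitz matrix embeds into a circulant of twice the size, and circulants are diagonalized by the FFT, giving O(n log n) matrix-vector products. A minimal Python check with a random Toeplitz matrix as a stand-in:

        import numpy as np
        from scipy.linalg import toeplitz

        n = 8
        rng = np.random.default_rng(2)
        c = rng.standard_normal(n)                 # first column
        r = rng.standard_normal(n); r[0] = c[0]    # first row

        # First column of the 2n-by-2n circulant that embeds the Toeplitz matrix.
        emb = np.concatenate([c, [0.0], r[:0:-1]])

        def toeplitz_matvec(x):
            """Multiply the n-by-n Toeplitz matrix by x via one 2n FFT."""
            xp = np.concatenate([x, np.zeros(len(emb) - n)])
            return np.fft.ifft(np.fft.fft(emb) * np.fft.fft(xp))[:n].real

        x = np.arange(1.0, n + 1)
        print(np.allclose(toeplitz(c, r) @ x, toeplitz_matvec(x)))  # True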

    Image reconstruction with multisensors.

    by Wun-Cheung Tang. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references; abstract also in Chinese. Contents: Abstracts; Introduction; Toeplitz and Circulant Matrices; Conjugate Gradient Method; Cosine Transform Preconditioner; Regularization; Summary; Paper A; Paper B.
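
    A minimal Python sketch of the machinery the contents list (Toeplitz systems, conjugate gradients, transform-based preconditioning): a symmetric positive definite Toeplitz system is solved by CG with Strang's circulant preconditioner applied via the FFT. The thesis itself develops a cosine transform preconditioner; the circulant variant is shown here only because it fits in a few lines, and the test matrix is illustrative.

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.sparse.linalg import LinearOperator, cg

        n = 128
        t = 0.5 ** np.arange(n)        # first column of an SPD Toeplitz matrix
        T = toeplitz(t)

        # Strang preconditioner: copy the central diagonals into a circulant.
        s = t.copy()
        s[n // 2 + 1:] = t[n - np.arange(n // 2 + 1, n)]
        eig = np.fft.fft(s).real       # circulant eigenvalues (real, positive here)
        M = LinearOperator((n, n),
                           matvec=lambda x: np.fft.ifft(np.fft.fft(x) / eig).real)

        b = np.ones(n)
        x, info = cg(T, b, M=M)
        print(info, np.linalg.norm(T @ x - b))   # info == 0 means converged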

    Matrices, moments and rational quadrature

    15 pages, no figures. MSC2000 code: 65D15. MR#: MR2456794 (2009h:65035). Zbl#: Zbl pre05362059.
    Many problems in science and engineering require the evaluation of functionals of the form F_u(A) = u^T f(A) u, where A is a large symmetric matrix, u a vector, and f a nonlinear function. A popular and fairly inexpensive approach to determining upper and lower bounds for such functionals is based on first carrying out a few steps of the Lanczos procedure applied to A with initial vector u, and then evaluating pairs of Gauss and Gauss–Radau quadrature rules associated with the tridiagonal matrix determined by the Lanczos procedure. The present paper extends this approach to allow the use of rational Gauss quadrature rules.
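
    The classical construction that the paper extends can be sketched as follows: run a few Lanczos steps on A with start vector u, then evaluate the Gauss rule through the eigendecomposition of the Lanczos tridiagonal matrix. The sketch below estimates u^T A^{-1} u for a random positive definite A; the rational Gauss rules that are the paper's actual contribution are not shown.

        import numpy as np
        from scipy.linalg import eigh_tridiagonal

        def lanczos(A, u, k):
            """k steps of Lanczos on symmetric A with start vector u."""
            alpha, beta = np.zeros(k), np.zeros(k - 1)
            q_prev, q = np.zeros(len(u)), u / np.linalg.norm(u)
            for j in range(k):
                w = A @ q
                alpha[j] = q @ w
                w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0)
                if j < k - 1:
                    beta[j] = np.linalg.norm(w)
                    q_prev, q = q, w / beta[j]
            return alpha, beta

        rng = np.random.default_rng(3)
        B = rng.standard_normal((200, 200))
        A = B @ B.T + 200 * np.eye(200)           # symmetric positive definite
        u = rng.standard_normal(200)

        alpha, beta = lanczos(A, u, k=8)
        theta, S = eigh_tridiagonal(alpha, beta)  # Gauss nodes and weight data
        f = lambda x: 1.0 / x                     # example: f(A) = A^{-1}
        gauss = (u @ u) * np.sum(S[0, :] ** 2 * f(theta))
        print(gauss, u @ np.linalg.solve(A, u))   # Gauss estimate vs. exact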

    Structural Variability from Noisy Tomographic Projections

    In cryo-electron microscopy, the 3D electric potentials of an ensemble of molecules are projected along arbitrary viewing directions to yield noisy 2D images. The volume maps representing these potentials typically exhibit a great deal of structural variability, which is described by their 3D covariance matrix. Typically, this covariance matrix is approximately low-rank and can be used to cluster the volumes or estimate the intrinsic geometry of the conformation space. We formulate the estimation of this covariance matrix as a linear inverse problem, yielding a consistent least-squares estimator. For n images of size N-by-N pixels, we propose an algorithm for calculating this covariance estimator with computational complexity O(nN^4 + √κ N^6 log N), where the condition number κ is empirically in the range 10–200. Its efficiency relies on the observation that the normal equations are equivalent to a deconvolution problem in 6D. This is then solved by the conjugate gradient method with an appropriate circulant preconditioner. The result is the first computationally efficient algorithm for consistent estimation of 3D covariance from noisy projections. It also compares favorably in runtime with previously proposed non-consistent estimators. Motivated by the recent success of eigenvalue shrinkage procedures for high-dimensional covariance matrices, we introduce a shrinkage procedure that improves accuracy at lower signal-to-noise ratios. We evaluate our methods on simulated datasets and achieve classification results comparable to state-of-the-art methods in shorter running time. We also present results on clustering volumes in an experimental dataset, illustrating the power of the proposed algorithm for practical determination of structural variability. Comment: 52 pages, 11 figures.
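
    The flavor of such shrinkage can be conveyed by a toy Python example: eigenvalues of a sample covariance that fail to emerge from the Marchenko–Pastur noise bulk are flattened to the noise floor. This hard-threshold variant with a known noise level is only a stand-in for the paper's actual procedure, and the spiked covariance model below is illustrative.

        import numpy as np

        rng = np.random.default_rng(4)
        p, n, sigma = 100, 300, 1.0           # dimension, samples, noise level
        U = np.linalg.qr(rng.standard_normal((p, 2)))[0]
        Sigma = sigma**2 * np.eye(p) + U @ np.diag([25.0, 9.0]) @ U.T

        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
        S = X.T @ X / n                       # sample covariance

        lam, V = np.linalg.eigh(S)
        edge = sigma**2 * (1 + np.sqrt(p / n)) ** 2   # Marchenko-Pastur bulk edge
        lam_shrunk = np.where(lam > edge, lam, sigma**2)  # flatten the noise bulk
        S_shrunk = (V * lam_shrunk) @ V.T

        err = lambda E: np.linalg.norm(E - Sigma, 2)
        print(err(S), err(S_shrunk))          # raw vs. shrunk operator-norm error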

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
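
    As a pointer to the underlying machinery, here is a minimal TT-SVD sketch in Python: a dense tensor is decomposed into tensor train cores by sequential truncated SVDs, the textbook construction on which TT-based methods build. The test tensor and tolerance are illustrative.

        import numpy as np

        def tt_svd(X, eps=1e-10):
            """Decompose a dense tensor into TT cores by sequential truncated SVDs."""
            dims = X.shape
            cores, rank = [], 1
            M = X.reshape(dims[0], -1)
            for k in range(len(dims) - 1):
                U, s, Vt = np.linalg.svd(M, full_matrices=False)
                r = max(1, int(np.sum(s > eps * s[0])))     # truncated TT rank
                cores.append(U[:, :r].reshape(rank, dims[k], r))
                M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
                rank = r
            cores.append(M.reshape(rank, dims[-1], 1))
            return cores

        def tt_reconstruct(cores):
            out = cores[0]
            for G in cores[1:]:
                out = np.tensordot(out, G, axes=1)   # contract adjacent TT ranks
            return out.reshape(out.shape[1:-1])

        # A tensor with small TT ranks: f(i,j,k) = sin(i) + cos(j) * exp(-0.1 k).
        i, j, k = np.meshgrid(np.arange(10), np.arange(11), np.arange(12),
                              indexing="ij")
        X = np.sin(i) + np.cos(j) * np.exp(-0.1 * k)
        cores = tt_svd(X)
        print([G.shape for G in cores])                  # core shapes show TT ranks
        print(np.abs(tt_reconstruct(cores) - X).max())   # reconstruction error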