
    A fast algorithm for LR-2 factorization of Toeplitz matrices

    In this paper a new order-recursive algorithm for the efficient QR^{-1} factorization of Toeplitz matrices is described. The proposed algorithm can be seen as a fast modified Gram-Schmidt method which recursively computes the orthonormal columns q_i, i = 1, 2, ..., p, of Q, as well as the elements of R^{-1}, of a Toeplitz matrix with dimensions L × p. The Q factor estimation requires 8Lp MADS (multiplications and divisions). Matrix R^{-1} is subsequently estimated using 3p^2 MADS. A faster algorithm, based on a mixed Q and R^{-1} updating scheme, is also derived; it requires 7Lp + 3.5p^2 MADS. The algorithm can be efficiently applied to batch least squares FIR filtering and system identification. When determination of the optimal filter is the desired task, it can be utilized to compute the least squares filter in an order-recursive way. The algorithm operates directly on the experimental data, overcoming the need for covariance estimates. An orthogonalized version of the proposed QR^{-1} algorithm is derived. Matlab code implementing the algorithm is also supplied.
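
    As a point of reference for what such a recursion computes, here is a minimal Python sketch of plain modified Gram-Schmidt that returns both Q and R^{-1} directly, using the fact that appending a column to an upper-triangular R simply appends a column to R^{-1}. It is an O(Lp^2) dense baseline, not the paper's fast 8Lp-MADS Toeplitz-exploiting recursion, and the sizes are arbitrary illustration values.

```python
import numpy as np
from scipy.linalg import toeplitz

def mgs_qr_inv(X):
    """Modified Gram-Schmidt returning Q and S = R^{-1} directly.

    Order recursive: after processing column i, Q[:, :i+1] and
    S[:i+1, :i+1] are valid factors of X[:, :i+1]. A plain O(L p^2)
    baseline, not the paper's fast Toeplitz-exploiting algorithm.
    """
    L, p = X.shape
    Q = np.zeros((L, p))
    S = np.zeros((p, p))              # upper triangular, S = R^{-1}
    for i in range(p):
        v = X[:, i].copy()
        r = np.zeros(i)
        for j in range(i):            # MGS: remove components one at a time
            r[j] = Q[:, j] @ v
            v -= r[j] * Q[:, j]
        rho = np.linalg.norm(v)
        Q[:, i] = v / rho
        # R grows as [[R_prev, r], [0, rho]], so R^{-1} grows by the
        # new column [-R_prev^{-1} r / rho; 1 / rho]:
        S[:i, i] = -S[:i, :i] @ r / rho
        S[i, i] = 1.0 / rho
    return Q, S

# Toy L x p Toeplitz data matrix, as in batch least squares FIR filtering
# (L = 64, p = 5 are hypothetical sizes)
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
X = toeplitz(x, x[:5])
Q, S = mgs_qr_inv(X)

# Least squares filter directly from the data, w = R^{-1} Q^T d,
# with no covariance estimates formed:
d = rng.standard_normal(64)
w = S @ (Q.T @ d)
print(np.allclose(w, np.linalg.lstsq(X, d, rcond=None)[0]))  # True
```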

    EIT Reconstruction Algorithms: Pitfalls, Challenges and Recent Developments

    We review developments, issues and challenges in Electrical Impedance Tomography (EIT), for the 4th Workshop on Biomedical Applications of EIT, Manchester 2003. We focus on the necessity for three-dimensional data collection and reconstruction, efficient solution of the forward problem, and present and future reconstruction algorithms. We also suggest common pitfalls or "inverse crimes" to avoid.
    Comment: A review paper for the 4th Workshop on Biomedical Applications of EIT, Manchester, UK, 2003
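
    To make the "inverse crime" pitfall concrete, the toy 1D deconvolution below (not from the paper, and far simpler than EIT) commits the crime by simulating data with the exact discretized forward model later used for inversion, then repeats the test with data generated on a finer grid plus noise. The blur width, grid sizes, and regularization weight are all arbitrary.

```python
import numpy as np

def blur_matrix(n, sigma):
    """Gaussian blur on an n-point grid: a stand-in forward operator."""
    t = np.linspace(0.0, 1.0, n)
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)

n = 100
A = blur_matrix(n, 0.03)                       # model used for inversion
f_true = (np.abs(np.linspace(0.0, 1.0, n) - 0.5) < 0.15).astype(float)

# Inverse crime: synthetic data from the *same* discretized model
g_crime = A @ f_true

# Fairer test: data simulated on a finer grid, subsampled, with noise
n_fine = 3 * n
f_fine = (np.abs(np.linspace(0.0, 1.0, n_fine) - 0.5) < 0.15).astype(float)
g_fair = (blur_matrix(n_fine, 0.03) @ f_fine)[::3]
g_fair += 1e-3 * np.random.default_rng(1).standard_normal(n)

for name, g in [("inverse crime", g_crime), ("fair test", g_fair)]:
    # Tikhonov-regularized inversion: f = (A^T A + lam I)^{-1} A^T g
    lam = 1e-6
    f = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
    err = np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
    print(f"{name}: relative error {err:.3f}")  # crime looks deceptively good
```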

    A fast semi-direct least squares algorithm for hierarchically block separable matrices

    We present a fast algorithm for linear least squares problems governed by hierarchically block separable (HBS) matrices. Such matrices are generally dense but data-sparse and can describe many important operators including those derived from asymptotically smooth radial kernels that are not too oscillatory. The algorithm is based on a recursive skeletonization procedure that exposes this sparsity and solves the dense least squares problem as a larger, equality-constrained, sparse one. It relies on a sparse QR factorization coupled with iterative weighted least squares methods. In essence, our scheme consists of a direct component, comprised of matrix compression and factorization, followed by an iterative component to enforce certain equality constraints. At most two iterations are typically required for problems that are not too ill-conditioned. For an M × N HBS matrix with M ≥ N having bounded off-diagonal block rank, the algorithm has optimal O(M + N) complexity. If the rank increases with the spatial dimension, as is common for operators that are singular at the origin, then this becomes O(M + N) in 1D, O(M + N^{3/2}) in 2D, and O(M + N^2) in 3D. We illustrate the performance of the method on both over- and underdetermined systems in a variety of settings, with an emphasis on radial basis function approximation and efficient updating and downdating.
    Comment: 24 pages, 8 figures, 6 tables; to appear in SIAM J. Matrix Anal. Appl.
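
    The iterative component can be illustrated in isolation. The dense Python sketch below (hypothetical sizes; np.linalg.lstsq stands in for the paper's sparse QR of the skeletonized system) solves an equality-constrained least squares problem by the method of weighting, with a few refinement sweeps that feed the constraint residual back into the right-hand side; consistent with the paper's observation, a couple of sweeps typically suffice.

```python
import numpy as np

def constrained_lstsq(A, b, C, d, tau=1e4, sweeps=3):
    """min ||A x - b||_2 subject to C x = d, by the method of weighting.

    Each sweep solves one weighted least squares problem and pushes the
    constraint residual back into the right-hand side, tightening the
    constraint geometrically (roughly a factor 1/tau^2 per sweep).
    """
    d_k = d.astype(float).copy()
    for _ in range(sweeps):
        M = np.vstack([tau * C, A])          # heavily weighted constraint rows
        rhs = np.concatenate([tau * d_k, b])
        x = np.linalg.lstsq(M, rhs, rcond=None)[0]
        d_k += d - C @ x                     # feed the residual back in
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 8))             # hypothetical sizes throughout
b = rng.standard_normal(50)
C = rng.standard_normal((3, 8))
d = rng.standard_normal(3)
x = constrained_lstsq(A, b, C, d)
print("||Cx - d|| =", np.linalg.norm(C @ x - d))  # close to machine precision
```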
