182 research outputs found

    Generalized Triangular Decomposition in Transform Coding

    Get PDF
    A general family of optimal transform coders (TCs) is introduced here based on the generalized triangular decomposition (GTD) developed by Jiang et al. This family includes the Karhunen-Loeve transform (KLT) and the generalized version of the prediction-based lower triangular transform (PLT) introduced by Phoong and Lin as special cases. The coding gain of the entire family, with optimal bit allocation, is equal to that of the KLT and the PLT. Even though the original PLT is not applicable to vectors that are not blocked versions of scalar wide-sense stationary processes, the GTD-based family includes members that are natural extensions of the PLT, and therefore also enjoy the so-called MINLAB structure of the PLT, which has the unit noise-gain property. Other special cases of the GTD-TC are the geometric mean decomposition (GMD) and the bidiagonal decomposition (BID) transform coders. The GMD-TC in particular has the property that the optimum bit allocation is a uniform allocation; this is because all its transform-domain coefficients have the same variance, implying that the dynamic ranges of the coefficients to be quantized are identical.
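    As a hedged illustration (not the authors' code) of the coding-gain figure shared by the whole family: under optimal bit allocation, the transform coding gain is the ratio of the arithmetic mean to the geometric mean of the transform-domain variances, and for the KLT those variances are the eigenvalues of the input covariance. A minimal NumPy sketch for a unit-variance AR(1) input, where the correlation coefficient `rho` and block size `N` are illustrative assumptions:

    ```python
    import numpy as np

    def klt_coding_gain(rho=0.9, N=8):
        # Covariance of a unit-variance AR(1) process: R[i, j] = rho**|i - j|.
        R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
        lam = np.linalg.eigvalsh(R)          # KLT coefficient variances
        # Coding gain under optimal bit allocation: arithmetic mean
        # over geometric mean of the transform-domain variances.
        return lam.mean() / np.exp(np.log(lam).mean())

    gain = klt_coding_gain()
    ```

    For an uncorrelated input (`rho = 0`) the covariance is the identity, all variances coincide, and the gain is exactly 1; correlation makes the eigenvalues spread out and the gain exceed 1.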

    Computation of generalized matrix functions

    Get PDF
    We develop numerical algorithms for the efficient evaluation of quantities associated with generalized matrix functions [J. B. Hawkins and A. Ben-Israel, Linear and Multilinear Algebra 1(2), 1973, pp. 163-171]. Our algorithms are based on Gaussian quadrature and Golub--Kahan bidiagonalization. Block variants are also investigated. Numerical experiments are performed to illustrate the effectiveness and efficiency of our techniques in computing generalized matrix functions arising in the analysis of networks. Comment: 25 pages, 2 figures
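    For reference, the Hawkins–Ben-Israel definition applies the scalar function to the singular values: f(A) = U f(Σ) Vᵀ. A naive dense sketch via the full SVD follows (the paper's contribution is precisely to avoid this full SVD using Golub–Kahan bidiagonalization and quadrature; this is only the textbook baseline):

    ```python
    import numpy as np

    def generalized_matrix_function(A, f):
        # Hawkins-Ben-Israel generalized matrix function: apply f to the
        # singular values, i.e. f(A) = U f(Sigma) V^T. Dense baseline via
        # the full SVD; large-scale methods avoid forming the SVD.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U @ np.diag(f(s)) @ Vt

    A = np.array([[3.0, 0.0], [0.0, 0.5]])
    B = generalized_matrix_function(A, np.sqrt)   # diag(sqrt(3), sqrt(0.5))
    ```

    Taking `f` to be the identity recovers `A` itself, which is a quick sanity check on any implementation.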

    Spectral Separation of Quantum Dots within Tissue Equivalent Phantom Using Linear Unmixing Methods in Multispectral Fluorescence Reflectance Imaging

    Get PDF
    Introduction: Non-invasive Fluorescence Reflectance Imaging (FRI) is used for assessing physiological and molecular processes in biological media. The aim of this article is to separate the overlapping emission spectra of quantum dots within a tissue-equivalent phantom using SVD, Jacobi SVD, and NMF methods in the FRI mode. Materials and Methods: A tissue-like phantom and an optical setup in reflectance mode were developed, and the multispectral imaging algorithm was written in the Matlab environment. The setup included diode-pumped solid-state lasers at 479 nm, 533 nm, and 798 nm, an achromatic telescope, a mirror, high-pass and low-pass filters, and an EMCCD camera. The FRI images were acquired by a CCD camera using a band-pass filter centered at 600 nm and a high-pass filter at 615 nm for the first region, and a high-pass filter at 810 nm for the second region. The SVD and Jacobi SVD algorithms were written in Matlab, compared with Non-negative Matrix Factorization (NMF), and applied to the acquired images. Results: The PSNR, SNR, and CNR of the SVD and NMF methods were obtained as 39 dB, 30.1 dB, and 0.7 dB, respectively. The results showed that the difference between the PSNR of the Jacobi SVD and that of the NMF and modified NMF algorithms was significant (p<0.0001), and that the Jacobi SVD was more accurate than the modified NMF. Conclusion: In this study, the Jacobi SVD was introduced as a powerful method for obtaining unmixed FRI images. An experimental evaluation of the algorithm will be performed in the near future.
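    The linear unmixing model underlying all three methods treats each measured spectrum as a nonnegative combination of known endmember emission spectra. A minimal sketch (not the authors' Matlab code; the Gaussian endmember shapes and mixing weights are illustrative assumptions) recovers abundances from a noiseless mixture with an SVD-based pseudoinverse:

    ```python
    import numpy as np

    # Illustrative endmember spectra for two quantum dots emitting
    # near 600 nm and 810 nm (Gaussian shapes are an assumption).
    wavelengths = np.linspace(550, 850, 61)
    qd600 = np.exp(-0.5 * ((wavelengths - 600) / 15) ** 2)
    qd810 = np.exp(-0.5 * ((wavelengths - 810) / 20) ** 2)
    S = np.column_stack([qd600, qd810])        # endmember matrix

    true_abundances = np.array([0.7, 0.3])
    y = S @ true_abundances                    # mixed measurement

    # SVD-based unmixing: least-squares solution via the pseudoinverse.
    abundances = np.linalg.pinv(S) @ y
    ```

    With noise, a nonnegativity-constrained solver (the role NMF plays in the paper) is preferable to the plain pseudoinverse, which can return negative abundances.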

    Decomposition of an updated correlation matrix via hyperbolic transformations

    Get PDF
    An algorithm for the hyperbolic singular value decomposition of a given complex matrix, based on hyperbolic Householder and Givens transformation matrices, is described in detail. The main application of this algorithm is the decomposition of an updated correlation matrix.
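    The elementary tool here is the hyperbolic Givens rotation: like an ordinary Givens rotation it zeros a vector component, but it preserves the indefinite form x₁² − x₂² (signature J = diag(1, −1)) rather than the Euclidean norm, which is what makes it suitable for downdating an updated correlation matrix. A minimal real-valued sketch, assuming |a| > |b| so the J-norm stays positive:

    ```python
    import math

    def hyperbolic_givens(a, b):
        # Hyperbolic rotation H = [[c, -s], [-s, c]] with c^2 - s^2 = 1,
        # chosen so that H @ [a, b] = [r, 0]. Requires |a| > |b|.
        r = math.sqrt(a * a - b * b)
        c, s = a / r, b / r
        return c, s, r

    a, b = 5.0, 3.0
    c, s, r = hyperbolic_givens(a, b)
    x1 = c * a - s * b       # rotated first component: equals r
    x2 = -s * a + c * b      # rotated second component: zeroed
    ```

    Because c² − s² = 1, the quantity x₁² − x₂² is invariant under the rotation, exactly as the Euclidean norm is invariant under an ordinary Givens rotation.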

    Minimizing Communication in Linear Algebra

    Full text link
    In 1981 Hong and Kung proved a lower bound on the amount of communication needed to perform dense matrix multiplication using the conventional O(n^3) algorithm, where the input matrices were too large to fit in the small, fast memory. In 2004 Irony, Toledo and Tiskin gave a new proof of this result and extended it to the parallel case. In both cases the lower bound may be expressed as Ω(#arithmetic operations / √M), where M is the size of the fast memory (or local memory in the parallel case). Here we generalize these results to a much wider variety of algorithms, including LU factorization, Cholesky factorization, LDL^T factorization, QR factorization, and algorithms for eigenvalues and singular values, i.e., essentially all direct methods of linear algebra. The proof works for dense or sparse matrices, and for sequential or parallel algorithms. In addition to lower bounds on the amount of data moved (bandwidth) we get lower bounds on the number of messages required to move it (latency). We illustrate how to extend our lower bound technique to compositions of linear algebra operations (like computing powers of a matrix), to decide whether it is enough to call a sequence of simpler optimal algorithms (like matrix multiplication) to minimize communication, or if we can do better. We give examples of both. We also show how to extend our lower bounds to certain graph theoretic problems. We point out recently designed algorithms for dense LU, Cholesky, QR, eigenvalue and SVD problems that attain these lower bounds; implementations of LU and QR show large speedups over conventional linear algebra algorithms in standard libraries like LAPACK and ScaLAPACK. Many open problems remain. Comment: 27 pages, 2 tables
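    The classical algorithm attaining the Ω(n³/√M) bound is tiled matrix multiplication: with tile size b chosen so three b×b tiles fit in fast memory (M ≈ 3b²), the words moved are O(n³/b) = O(n³/√M). A minimal sketch (illustrative, not from the paper) that counts tile transfers under that model and checks correctness:

    ```python
    import numpy as np

    def blocked_matmul(A, B, b):
        # Tiled multiplication: with three b-by-b tiles resident at once
        # (fast memory M ~ 3 b^2), words moved between slow and fast memory
        # scale as O(n^3 / b) = O(n^3 / sqrt(M)), matching the Hong-Kung
        # lower bound up to a constant factor.
        n = A.shape[0]
        C = np.zeros((n, n))
        words_moved = 0
        for i in range(0, n, b):
            for j in range(0, n, b):
                for k in range(0, n, b):
                    # load one tile each of A, B, C; store the C tile back
                    words_moved += 4 * b * b
                    C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
        return C, words_moved

    rng = np.random.default_rng(0)
    n, b = 64, 8
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    C, words = blocked_matmul(A, B, b)
    ```

    Doubling b (i.e. quadrupling M) halves `words`, which is the 1/√M scaling of the lower bound.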

    Numerical methods and accurate computations with structured matrices

    Get PDF
    This doctoral thesis is a compendium of 11 scientific articles. Its main subject is numerical linear algebra, with emphasis on two classes of structured matrices: totally positive matrices and M-matrices. For some subclasses of these matrices, it is possible to develop algorithms that solve several of the most common problems in linear algebra with high relative accuracy, independently of the condition number of the matrix. The key to achieving accurate computations lies in using a different parameterization that represents the special structure of the matrix, and in developing adapted algorithms that work with that parameterization. Nonsingular totally positive matrices admit a unique factorization as a product of nonnegative bidiagonal matrices, called the bidiagonal factorization. If this representation is known with high relative accuracy, it can be used to solve certain systems of equations and to compute the inverse, the eigenvalues, and the singular values with high relative accuracy. Our contribution in this field has been obtaining, with high relative accuracy, the bidiagonal factorization of collocation matrices of generalized Laguerre polynomials, of collocation matrices of Bessel polynomials, of classes of matrices generalizing the Pascal matrix, and of matrices of q-integers. We have also studied the extension of several optimal properties of collocation matrices of normalized B-bases (which in particular are totally positive matrices). In particular, we have proved optimality properties of collocation matrices of tensor products of normalized B-bases. If the row sums and the off-diagonal entries of a nonsingular diagonally dominant M-matrix are known with high relative accuracy, then its inverse, determinant, and singular values can also be computed with high relative accuracy.
We have sought new methods for accurate computations with new classes of M-matrices or related matrices. We have proposed a parameterization for Nekrasov Z-matrices with positive diagonal entries that can be used to compute their inverse and determinant with high relative accuracy. We have also studied the class of so-called B-matrices, which is closely related to the M-matrices: we have obtained a method for computing determinants of this class with high relative accuracy, and another for computing determinants of B-Nekrasov matrices, also with high relative accuracy. Based on two scaling matrices we have introduced, we have derived new bounds for the infinity norm of the inverse of a Nekrasov matrix and for the error of the linear complementarity problem whose associated matrix is a Nekrasov matrix. We have also obtained new bounds for the infinity norm of the inverses of Bpi-matrices, a class extending the B-matrices, and used them to derive new error bounds for the linear complementarity problem whose associated matrix is a Bpi-matrix. Some matrix classes have been generalized to higher dimensions to develop a theory for tensors extending the one known in the matrix case. For example, the definition of the class of B-matrices has been extended to B-tensors, yielding a simple criterion for identifying a new class of positive definite tensors. We have proposed an extension of the class of Bpi-matrices to Bpi-tensors, thus defining a new class of positive definite tensors that can be identified by a simple criterion involving only computations with the entries of the tensor.
Finally, we have characterized the cases in which tridiagonal Toeplitz matrices are P-matrices, and we have studied when they can be represented in terms of a bidiagonal factorization that serves as a parameterization for computations with high relative accuracy.
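    The bidiagonal factorization central to the thesis is computed by Neville elimination, which zeros each entry using the adjacent row rather than the pivot row; for a nonsingular totally positive matrix all the multipliers it produces are nonnegative, and they parameterize the factorization. A minimal sketch (not the thesis code), run on the 4×4 Pascal matrix, whose multipliers are famously all equal to 1:

    ```python
    import numpy as np
    from math import comb

    def neville_multipliers(A):
        # One sweep of Neville elimination on a copy of A: entry (i, j) is
        # zeroed using row i-1 (not the pivot row j). For a nonsingular
        # totally positive matrix all multipliers are nonnegative.
        A = A.astype(float).copy()
        n = A.shape[0]
        ops = []                       # (row index i, multiplier m)
        for j in range(n - 1):
            for i in range(n - 1, j, -1):
                m = A[i, j] / A[i - 1, j]
                A[i, :] -= m * A[i - 1, :]
                ops.append((i, m))
        return ops, A                  # A is now upper triangular

    def rebuild(ops, U):
        # Undo the eliminations in reverse order; each undo is an
        # elementary bidiagonal factor applied to the running product.
        A = U.copy()
        for i, m in reversed(ops):
            A[i, :] += m * A[i - 1, :]
        return A

    P = np.array([[comb(i + j, i) for j in range(4)] for i in range(4)], float)
    ops, U = neville_multipliers(P)
    A_rec = rebuild(ops, U)
    ```

    The high-relative-accuracy algorithms in the thesis work directly with these multipliers (and the pivots of U) instead of the matrix entries, which is what removes the dependence on the condition number.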

    Non-equispaced B-spline wavelets

    Full text link
    This paper has three main contributions. The first is the construction of wavelet transforms from B-spline scaling functions defined on a grid of non-equispaced knots. The new construction extends the equispaced, biorthogonal, compactly supported Cohen-Daubechies-Feauveau wavelets, and is based on the factorisation of wavelet transforms into lifting steps. The second and third contributions are new insights on how to use these and other wavelets in statistical applications. The second contribution is related to the bias of a wavelet representation: it is investigated how the fine scaling coefficients should be derived from the observations. In the context of equispaced data, it is common practice to simply take the observations as fine scale coefficients. It is argued in this paper that this is not acceptable for non-interpolating wavelets on non-equispaced data. Finally, the third contribution is the study of the variance in a non-orthogonal wavelet transform in a new framework, replacing the numerical condition as a measure for non-orthogonality. By controlling the variances of the reconstruction from the wavelet coefficients, the new framework allows us to design wavelet transforms on irregular point sets with a focus on their use for smoothing or other applications in statistics. Comment: 42 pages, 2 figures
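    In the lifting formulation, samples are split into evens and odds, each odd sample is predicted by interpolation from its even neighbours, and the prediction error is kept as a detail coefficient; on a non-equispaced grid the only change is that the prediction weights come from the actual knot positions. A minimal sketch of a single linear-prediction step (the update step is omitted, and the signal length is assumed odd so every odd sample has two even neighbours; this is an illustration, not the paper's construction):

    ```python
    import numpy as np

    def lifting_predict(x, t):
        # Predict each odd sample by linear interpolation between its even
        # neighbours, with weights taken from the knot positions t; the
        # detail coefficient is the prediction error.
        even, odd = x[::2], x[1::2]
        t_even, t_odd = t[::2], t[1::2]
        w = (t_odd - t_even[:-1]) / (t_even[1:] - t_even[:-1])
        detail = odd - ((1 - w) * even[:-1] + w * even[1:])
        return even, detail

    def lifting_inverse(even, detail, t):
        # Undo the predict step: lifting is trivially invertible.
        t_even, t_odd = t[::2], t[1::2]
        w = (t_odd - t_even[:-1]) / (t_even[1:] - t_even[:-1])
        odd = detail + (1 - w) * even[:-1] + w * even[1:]
        x = np.empty(len(even) + len(odd))
        x[::2], x[1::2] = even, odd
        return x

    t = np.array([0.0, 0.3, 1.0, 1.4, 2.1, 2.5, 3.2, 3.9, 4.0])  # irregular knots
    x = 2.0 * t + 1.0              # a linear signal is predicted exactly
    even, detail = lifting_predict(x, t)
    x_rec = lifting_inverse(even, detail, t)
    ```

    Because each lifting step is undone exactly by its inverse, perfect reconstruction holds for any signal, and linear signals yield zero detail coefficients (one vanishing moment).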