
    An adapted version of the element-wise weighted total least squares method for applications in chemometrics

    The Maximum Likelihood PCA (MLPCA) method has been devised in chemometrics as a generalization of the well-known PCA method in order to derive consistent estimators in the presence of errors with a known error distribution. For similar reasons, the Total Least Squares (TLS) method has been generalized in the field of computational mathematics and engineering to maintain consistency of the parameter estimates in linear models with measurement errors of known distribution. In a previous paper [M. Schuermans, I. Markovsky, P.D. Wentzell, S. Van Huffel, On the equivalence between total least squares and maximum likelihood PCA, Anal. Chim. Acta 544 (2005) 254–267], the tight equivalences between MLPCA and Element-wise Weighted TLS (EW-TLS) were explored. The purpose of this paper is to adapt the EW-TLS method in order to make it useful for problems in chemometrics. We present a computationally efficient algorithm and compare it with the standard EW-TLS algorithm and the MLPCA algorithm in terms of computation time and convergence behaviour on chemical data.
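    The common optimization problem underlying MLPCA and EW-TLS can be stated compactly; in the notation below (my own, not the paper's), X is the m-by-n data matrix, the sigma_ij are the known elementwise error standard deviations, and r is the target rank:

    ```latex
    \min_{\widehat{X}\,:\ \operatorname{rank}(\widehat{X}) \le r}
    \;\sum_{i=1}^{m}\sum_{j=1}^{n} w_{ij}\,\bigl(x_{ij} - \widehat{x}_{ij}\bigr)^{2},
    \qquad w_{ij} = \frac{1}{\sigma_{ij}^{2}}.
    ```

    When all sigma_ij are equal (i.i.d. errors), the weighted cost reduces to the ordinary Frobenius-norm low-rank approximation, i.e. classical PCA/TLS solved by the truncated SVD.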

    Two-way bidiagonalization scheme for downdating the singular-value decomposition

    We present a method that transforms the problem of downdating the singular-value decomposition into a problem of diagonalizing a diagonal matrix bordered by one column. The first step in this diagonalization involves bidiagonalization of a diagonal matrix bordered by one column. For updating the singular-value decomposition, a two-way chasing scheme was recently introduced, which reduces the total number of rotations by 50% compared to previously developed one-way chasing schemes. Here, a two-way chasing scheme is introduced for the bidiagonalization step in downdating the singular-value decomposition. We show how the matrix elements can be rearranged and how the nonzero elements can be chased away towards two corners of the matrix. The newly proposed scheme saves nearly 50% of the plane rotations required by one-way chasing schemes.
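    The algebra behind downdating can be checked numerically: deleting the last row of A = U diag(s) V^T leaves a matrix B whose squared singular values are the eigenvalues of the small matrix diag(s)(I - u u^T) diag(s), where u^T is the deleted row of U. This is a sketch of the underlying identity only, not the paper's chasing algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 6, 4
    A = rng.standard_normal((m, n))

    # Thin SVD of the full matrix.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Downdate: remove the last row a^T = u^T diag(s) V^T, with u^T the last
    # row of U.  Then B^T B = V diag(s)(I - u u^T) diag(s) V^T, so the
    # downdated singular values come from an n-by-n symmetric eigenproblem.
    u = U[-1]
    S = np.diag(s)
    M = S @ (np.eye(n) - np.outer(u, u)) @ S
    downdated = np.sqrt(np.sort(np.linalg.eigvalsh(M))[::-1])

    # Reference: singular values of A with its last row actually deleted.
    reference = np.linalg.svd(A[:-1], compute_uv=False)
    print(np.allclose(downdated, reference))
    ```

    The paper's contribution is doing this reduction efficiently with plane rotations (bidiagonalizing a bordered diagonal matrix) rather than forming and solving the dense eigenproblem as above.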

    Overview of total least squares methods

    We review the development and extensions of the classical total least squares method and describe algorithms for its generalization to weighted and structured approximation problems. In the generic case, the classical total least squares problem has a unique solution, which is given in analytic form in terms of the singular value decomposition of the data matrix. The weighted and structured total least squares problems have no such analytic solution and are currently solved numerically by local optimization methods. We explain how the special structure of the weight matrix and the data matrix can be exploited for efficient cost-function and first-derivative computation, which yields computationally efficient solution methods. The total least squares family of methods has a wide range of applications in system theory, signal processing, and computer algebra. We describe applications to deconvolution, linear prediction, and errors-in-variables system identification.
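    The analytic SVD solution of the classical TLS problem mentioned above is short enough to sketch. Assuming the generic case (unique solution, last component of the singular vector nonzero), the estimate comes from the right singular vector of [A b] associated with the smallest singular value:

    ```python
    import numpy as np

    def tls(A, b):
        """Classical total least squares for A x ~ b via the SVD of [A  b]."""
        C = np.column_stack([A, b])
        _, _, Vt = np.linalg.svd(C)
        v = Vt[-1]              # right singular vector of the smallest singular value
        # Generic case: the last component of v is nonzero.
        return -v[:-1] / v[-1]

    # Sanity check on a consistent, noise-free system: TLS reproduces the
    # exact solution, since [A  A x] has an exact rank deficiency.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    x_tls = tls(A, A @ x_true)
    print(np.allclose(x_tls, x_true))
    ```

    The weighted and structured variants surveyed in the paper have no such closed form, which is why they require the local optimization methods described above.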

    Separable nonlinear least squares fitting with linear bound constraints and its application in magnetic resonance spectroscopy data quantification

    An application in magnetic resonance spectroscopy quantification models a signal as a linear combination of nonlinear functions. This leads to a separable nonlinear least squares fitting problem, with linear bound constraints on some variables. The variable projection (VARPRO) technique can be applied to this problem, but needs to be adapted in several respects. If only the nonlinear variables are subject to constraints, then the Levenberg–Marquardt minimization algorithm that is classically used by the VARPRO method should be replaced with a version that can incorporate those constraints. If some of the linear variables are also constrained, then they cannot be projected out via a closed-form expression as is the case for the classical VARPRO technique. We show how quadratic programming problems can be solved instead, and we provide details on efficient function and approximate-Jacobian evaluations for the inequality-constrained VARPRO method.
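    The variable projection idea in the "only nonlinear variables constrained" case can be sketched on a toy model (a single decaying exponential plus a constant baseline, an assumed example rather than the paper's MRS model): the linear amplitudes are eliminated in closed form by linear least squares inside the residual, and a bound-constrained solver handles the nonlinear decay rate:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def projected_residual(theta, t, y):
        # Nonlinear basis for the current decay rate theta: one exponential
        # plus a constant baseline (toy model for illustration).
        Phi = np.column_stack([np.exp(-theta[0] * t), np.ones_like(t)])
        # Variable projection: eliminate the linear amplitudes in closed form.
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return Phi @ c - y

    t = np.linspace(0.0, 2.0, 60)
    y = 2.0 * np.exp(-3.0 * t) + 0.5        # noise-free synthetic signal

    # Bound-constrained minimization over the nonlinear variable only.
    fit = least_squares(projected_residual, x0=[1.0], bounds=(0.0, 10.0), args=(t, y))
    print(fit.x)  # close to [3.0]
    ```

    When the linear variables are themselves bounded, the `lstsq` step above no longer applies; as the abstract notes, it must be replaced by solving a small quadratic program at each residual evaluation.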

    Automatic artifact removal of resting-state fMRI with Deep Neural Networks

    Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique for studying brain activity. During an fMRI session, the subject executes a set of tasks (task-related fMRI study) or no tasks (resting-state fMRI), and a sequence of 3-D brain images is obtained for further analysis. In the course of fMRI, some sources of activation are caused by noise and artifacts. The removal of these sources is essential before the analysis of the brain activations. Deep Neural Network (DNN) architectures can be used for denoising and artifact removal. The main advantage of DNN models is the automatic learning of abstract and meaningful features, given the raw data. This work presents advanced DNN architectures for noise and artifact classification, using both spatial and temporal information in resting-state fMRI sessions. The highest performance is achieved by a voting scheme using information from all the domains, with an average accuracy of over 98% and a very good balance between the metrics of sensitivity and specificity (98.5% and 97.5%, respectively).
    Comment: Under Review, ICASSP 202
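    The voting scheme that combines the per-domain classifiers can be illustrated with a minimal soft-voting sketch; the probabilities below are invented illustrative numbers, not results from the paper:

    ```python
    import numpy as np

    # Hypothetical artifact probabilities for five fMRI components from two
    # domain classifiers, e.g. spatial and temporal (illustrative values only).
    p_spatial  = np.array([0.9, 0.2, 0.6, 0.1, 0.8])
    p_temporal = np.array([0.8, 0.3, 0.6, 0.2, 0.7])

    # Soft voting: average the per-domain probabilities, then threshold.
    p_vote = (p_spatial + p_temporal) / 2
    is_artifact = p_vote > 0.5
    print(is_artifact)
    ```

    Combining domains this way lets a confident prediction in one domain compensate for an ambiguous one in the other, which is one plausible reason an ensemble over all domains achieves the best reported balance of sensitivity and specificity.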