
    Randomized Matrix Decompositions Using R

    Matrix decompositions are fundamental tools in applied mathematics, statistical computing, and machine learning. In particular, low-rank matrix decompositions are vital, and are widely used for data analysis, dimensionality reduction, and data compression. Massive datasets, however, pose a computational challenge for traditional algorithms, placing significant constraints on both memory and processing power. Recently, the powerful concept of randomness has been introduced as a strategy to ease the computational load. The essential idea of probabilistic algorithms is to employ a degree of randomness to derive a smaller matrix from a high-dimensional data matrix; the smaller matrix is then used to compute the desired low-rank approximation. Such algorithms have been shown to be computationally efficient for approximating matrices with low-rank structure. We present the R package rsvd and provide a tutorial introduction to randomized matrix decompositions. Specifically, randomized routines for the singular value decomposition, (robust) principal component analysis, the interpolative decomposition, and the CUR decomposition are discussed. Several examples demonstrate the routines and show their computational advantage over other methods implemented in R.
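The two-stage idea sketched in the abstract, sampling the range of the matrix with a random test matrix and then decomposing the small sketch, can be illustrated in a few lines of NumPy. This is a minimal sketch of the generic randomized SVD scheme, not the rsvd package's actual implementation; the function and parameter names are illustrative.

```python
import numpy as np

def randomized_svd(A, k, p=5, seed=0):
    """Rank-k randomized SVD with oversampling parameter p."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Stage A: sketch the column space of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega                       # (m, k+p) random sketch
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sketch
    # Stage B: exact SVD of the much smaller projected matrix B = Q^T A.
    B = Q.T @ A                         # (k+p, n)
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small                     # lift left factors back to R^m
    return U[:, :k], s[:k], Vt[:k, :]

# Usage: approximate an exactly rank-10 matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 200))
U, s, Vt = randomized_svd(A, k=10)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

Because the target matrix has exact rank 10 and the sketch samples 15 directions, the relative error `err` is at machine-precision level.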

    Scalable learning for geostatistics and speaker recognition

    With improved data acquisition methods, the amount of data being collected has increased severalfold. One objective of data collection is to learn useful underlying patterns. To work with data at this scale, methods not only need to be effective on the underlying data, but also have to scale to larger data collections. This thesis focuses on developing scalable and effective methods targeted at different domains, geostatistics and speaker recognition in particular. We first focus on kernel-based learning methods and develop a GPU-based parallel framework for this class of problems. An improved numerical algorithm that exploits this GPU parallelization to further enhance the computational performance of kernel regression is proposed. These methods are then demonstrated on problems arising in geostatistics and speaker recognition. In geostatistics, data is often collected at scattered locations, and factors such as instrument malfunction lead to missing observations. Applications often require the ability to interpolate this scattered spatiotemporal data onto a regular grid, continuously over time. This problem can be formulated as a regression problem, and one of the most popular geostatistical interpolation techniques, kriging, is analogous to a standard kernel method: Gaussian process regression. Kriging is computationally expensive and needs major modifications and accelerations to be used practically. The GPU framework developed for kernel methods is extended to kriging, and the GPU's texture memory is further exploited for enhanced computational performance. Speaker recognition is the task of verifying a person's identity based on samples of his or her speech, or "utterances". This thesis focuses on the text-independent setting, and three new recognition frameworks are developed for this problem. We propose a kernelized Renyi-distance-based similarity scoring for speaker recognition.
    While its performance is promising, it does not generalize well for limited training data and therefore does not compare well to state-of-the-art recognition systems. These systems compensate for the variability in the speech data due to the message, channel variability, noise, and reverberation. State-of-the-art systems model each speaker as a Gaussian mixture model (GMM) and compensate for the variability (termed "nuisance"). We propose a novel discriminative framework using a latent variable technique, partial least squares (PLS), for improved recognition. A kernelized version of this algorithm is used to build a state-of-the-art speaker identification system that shows results competitive with the best systems reported in NIST's 2010 Speaker Recognition Evaluation.
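The kriging/Gaussian-process analogy mentioned above has a compact form: fit a kernel matrix over the scattered observation sites, solve one linear system, and predict anywhere by kernel-weighted combination. The sketch below uses a squared-exponential kernel and a small jitter term; it is an illustrative simple-kriging-style interpolator, not the thesis's GPU code, and the function names and parameters are assumptions.

```python
import numpy as np

def gp_interpolate(X_train, y_train, X_test, length_scale=0.3, jitter=1e-8):
    """Gaussian process (simple-kriging-style) interpolation of scattered data."""
    def kernel(A, B):
        # Squared-exponential covariance between all pairs of points.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)
    K = kernel(X_train, X_train) + jitter * np.eye(len(X_train))
    # Solve K alpha = y once; every prediction is then k(x*, X) @ alpha.
    alpha = np.linalg.solve(K, y_train)
    return kernel(X_test, X_train) @ alpha

# Scattered 2-D observations of a smooth field.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1])
pred = gp_interpolate(X, y, X)  # predict back at the observation sites
```

The dominant cost is the dense linear solve, which is cubic in the number of observations; this is exactly the bottleneck that motivates the GPU acceleration and algorithmic modifications discussed in the thesis.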

    Applications


    Advanced Multilinear Data Analysis and Sparse Representation Approaches and Their Applications

    Multifactor analysis plays an important role in data analysis, since most real-world datasets arise from a combination of numerous factors. These factors are usually not independent but interdependent. It is therefore a mistake for a method to consider only one aspect of the input data while ignoring the others. Although widely used, Multilinear PCA (MPCA), one of the leading multilinear analysis methods, still suffers from three major drawbacks. Firstly, it is very sensitive to outliers and noise and unable to cope with missing values. Secondly, since MPCA deals with huge multidimensional datasets, it is usually computationally expensive. Finally, it loses the original local geometric structure due to the averaging process. This thesis sheds new light on the tensor decomposition problem via the ideas of fast low-rank approximation by random projection and tensor completion from compressed sensing. We propose a novel approach called Compressed Submanifold Multifactor Analysis (CSMA) to solve the three problems mentioned above. Our approach deals with missing values and outliers via a novel sparse Higher-order Singular Value Decomposition, named the HOSVD-L1 decomposition. Random projection is used to obtain a fast low-rank approximation of a given multifactor dataset. In addition, our method preserves the geometry of the original data. In the second part of this thesis, we present a novel pattern classification approach, named Sparse Class-dependent Feature Analysis (SCFA), that connects the advantages of sparse representation in an overcomplete dictionary with a powerful nonlinear classifier. The classifier is based on the estimation of class-specific optimal filters, obtained by solving an L1-norm optimization problem using the Alternating Direction Method of Multipliers.
    Our method, as well as its Reproducing Kernel Hilbert Space (RKHS) version, is tolerant to the presence of noise and other variations in an image. Our proposed methods achieve very high classification accuracies in face recognition on two challenging face databases, namely the CMU Pose, Illumination, and Expression (PIE) database and the Extended Yale B database, which exhibit pose and illumination variations, as well as on the AR database, which contains occluded images. They also exhibit robustness in other evaluation modalities, such as object classification on the Caltech101 database. Our methods outperform state-of-the-art methods on all these databases, which shows their applicability to general computer vision and pattern recognition problems.
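The L1-norm optimization via the Alternating Direction Method of Multipliers mentioned in the abstract follows a standard three-step iteration: a quadratic solve, a soft-thresholding (L1 proximal) step, and a dual update. The following is a generic ADMM solver for a lasso-type problem, shown only to illustrate the iteration structure; it is not the SCFA filter-estimation code, and all names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    AtA = A.T @ A + rho * np.eye(n)    # factorizable once, reused every step
    Atb = A.T @ b
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic step
        z = soft_threshold(x + u, lam / rho)            # L1 proximal step
        u = u + x - z                                   # dual update
    return z

# Recover a 3-sparse vector from noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

The soft-thresholding step is what drives entries exactly to zero, producing the sparse class-specific representations the approach relies on.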

    Model Order Reduction

    The increasing complexity of models used to predict real-world systems leads to a need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This three-volume handbook covers methods as well as applications. The third volume focuses on applications in engineering, biomedical engineering, computational physics, and computer science.

    Approximation methodologies for explicit model predictive control of complex systems

    This thesis concerns the development of complexity reduction methodologies for the application of multi-parametric/explicit model predictive control (mp-MPC) to complex high-fidelity models. The main advantage of mp-MPC is the offline relocation of the optimization task, and of the associated computational expense, through the use of multi-parametric programming. This allows MPC to be applied to fast-sampling systems, or to systems for which online optimization is not possible due to cycle-time requirements. The application of mp-MPC to complex nonlinear systems is of critical importance and is the subject of this thesis. The first part is concerned with the adaptation and development of model order reduction (MOR) techniques for use in combination with mp-MPC algorithms. This part includes the mp-MPC-oriented use of existing MOR techniques as well as the development of new ones. The use of MOR for multi-parametric moving horizon estimation is also investigated. The second part of the thesis introduces a framework for the 'equation-free', surrogate-model-based design of explicit controllers as a possible alternative to multi-parametric methods. The methodology relies upon advanced data-classification approaches and surrogate modelling techniques, and is illustrated with different numerical examples.
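The offline/online split that makes mp-MPC attractive can be seen in the smallest possible example: a one-step horizon with a single box-constrained input. Because the cost is a scalar quadratic in the input, the explicit (piecewise-affine) control law is simply the unconstrained linear law clipped to the input bounds, so the online computation is a function evaluation rather than an optimization. The system matrices below are an illustrative toy, not from the thesis.

```python
import numpy as np

# Toy double-integrator-like system with a scalar input, |u| <= umax.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)     # state-cost weight
R = 0.01          # input-cost weight
umax = 1.0

# Offline: the one-step cost (Ax+Bu)'Q(Ax+Bu) + R*u^2 is quadratic in u,
# so its unconstrained minimizer is the linear law u = K x.
K = -np.linalg.solve(B.T @ Q @ B + R, B.T @ Q @ A)   # shape (1, 2)

def explicit_mpc(x):
    """Online evaluation of the piecewise-affine explicit control law."""
    # For a scalar quadratic, the box-constrained optimum is the
    # unconstrained optimum clipped to the admissible interval.
    return float(np.clip((K @ x).item(), -umax, umax))

# Cross-check the explicit law against brute-force search over inputs.
x = np.array([2.0, -0.5])
u = explicit_mpc(x)
grid = np.linspace(-umax, umax, 2001)
costs = [float((A @ x + B[:, 0] * ug) @ Q @ (A @ x + B[:, 0] * ug) + R * ug**2)
         for ug in grid]
u_grid = grid[int(np.argmin(costs))]
```

Real mp-MPC solves a multi-parametric program over longer horizons and state constraints, yielding many polyhedral regions rather than three, but the online step is the same: identify the region, evaluate an affine law.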

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing optimal generalized averaged Gaussian quadrature formulas is presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not, and can be used as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
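The error-estimation task the abstract refers to can be illustrated with the crudest version of the idea: compare a Gauss rule against a more accurate companion rule and take the difference as an error estimate. Here the companion is simply a higher-order Gauss-Legendre rule; the paper's averaged formulas (like Gauss-Kronrod extensions) are far more economical companions that reuse the original nodes, which this sketch does not attempt.

```python
import numpy as np

def gauss_legendre(f, n):
    """n-point Gauss-Legendre approximation of the integral of f over [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return float(w @ f(x))

f = lambda x: np.exp(x) * np.cos(x)
I_5 = gauss_legendre(f, 5)                # base rule
I_8 = gauss_legendre(f, 8)                # more accurate companion rule
err_estimate = abs(I_8 - I_5)             # a posteriori error estimate for I_5

# Antiderivative of exp(x)*cos(x) is exp(x)*(sin(x)+cos(x))/2.
exact = (np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))
         - np.exp(-1.0) * (np.sin(-1.0) + np.cos(-1.0))) / 2.0
true_err = abs(I_5 - exact)
```

For this smooth integrand the 5-point rule is already accurate to roughly ten digits, and the companion-rule difference tracks the true error closely; the point of averaged (and Gauss-Kronrod) formulas is to obtain such estimates without paying for a full independent higher-order rule.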