    Dimensionality reduction with image data

    A common objective in image analysis is dimensionality reduction. The most commonly used exploratory technique with this objective is principal component analysis. We propose a new method based on the projection of the images as matrices after a Procrustes rotation and show that it leads to a better reconstruction of the images.
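    A minimal sketch of the general idea described above, assuming the images are stored as a NumPy array; the function name, the choice of the first image as reference, and the SVD-based projection are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def procrustes_align_and_project(images, n_components=5):
    """images: array of shape (n_images, h, w). Illustrative sketch only."""
    reference = images[0]
    aligned = []
    for img in images:
        # R minimizes ||img @ R - reference||_F over orthogonal R
        R, _ = orthogonal_procrustes(img, reference)
        aligned.append(img @ R)
    aligned = np.stack(aligned)

    # Project each aligned image, kept as a matrix, onto the leading
    # left singular vectors of the mean aligned image.
    U, _, _ = np.linalg.svd(aligned.mean(axis=0), full_matrices=False)
    basis = U[:, :n_components]                      # (h, n_components)
    scores = np.einsum('hk,nhw->nkw', basis, aligned)  # (n_images, k, w)
    return scores, basis
```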

    Bayesian matching of unlabelled point sets using Procrustes and configuration models

    The problem of matching unlabelled point sets using Bayesian inference is considered. Two recently proposed models for the likelihood are compared, based on the Procrustes size-and-shape and the full configuration. Bayesian inference is carried out for matching point sets using Markov chain Monte Carlo simulation. An improvement to the existing Procrustes algorithm is proposed that improves convergence rates by using occasional large jumps in the burn-in period. The Procrustes and configuration methods are compared in a simulation study and using real data, where it is of interest to estimate the strengths of matches between protein binding sites. The performance of both methods is generally quite similar, and a connection between the two models is made using a Laplace approximation.
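    A generic Metropolis-Hastings sketch illustrating the idea of occasional large jumps in the burn-in period; this is not the paper's sampler, and the log-posterior, step sizes, and jump probability are all illustrative assumptions.

```python
import numpy as np

def metropolis_with_burnin_jumps(log_posterior, theta0, n_iter=10_000,
                                 n_burnin=2_000, step=0.1, big_step=2.0,
                                 jump_prob=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = []
    for it in range(n_iter):
        # During burn-in, occasionally propose a much larger move so the
        # chain can escape poor local modes (e.g. bad point-set matchings).
        scale = big_step if (it < n_burnin and rng.random() < jump_prob) else step
        proposal = theta + scale * rng.standard_normal(theta.shape)
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance
            theta, logp = proposal, logp_prop
        if it >= n_burnin:
            samples.append(theta.copy())
    return np.array(samples)
```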

    Disentangling Orthogonal Matrices

    Motivated by a certain molecular reconstruction methodology in cryo-electron microscopy, we consider the problem of solving a linear system with two unknown orthogonal matrices, which is a generalization of the well-known orthogonal Procrustes problem. We propose an algorithm based on a semi-definite programming (SDP) relaxation, and give a theoretical guarantee for its performance. Both theoretically and empirically, the proposed algorithm performs better than the naïve approach of solving the linear system directly without the orthogonal constraints. We also consider the generalization to linear systems with more than two unknown orthogonal matrices.
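    For reference, the classical single-matrix orthogonal Procrustes problem that this work generalizes, min over orthogonal Q of ||AQ - B||_F, has a closed-form SVD solution; the sketch below shows only that one-matrix case, not the paper's SDP relaxation.

```python
import numpy as np

def orthogonal_procrustes_svd(A, B):
    """Return the orthogonal Q minimizing ||A @ Q - B||_F (classical result)."""
    # If A^T B = U S V^T, the minimizer is Q = U V^T.
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt
```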

    Most Likely Separation of Intensity and Warping Effects in Image Registration

    This paper introduces a class of mixed-effects models for joint modeling of spatially correlated intensity variation and warping variation in 2D images. Both sources of variation are modeled as random effects, resulting in a nonlinear mixed-effects model that enables simultaneous estimation of template and model parameters by optimization of the likelihood function. We propose an algorithm for fitting the model which alternates estimation of variance parameters and image registration. This approach avoids the potential estimation bias in the template estimate that arises when treating registration as a preprocessing step. We apply the model to datasets of facial images and 2D brain magnetic resonance images to illustrate the simultaneous estimation and prediction of intensity and warp effects.
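    A schematic of the kind of alternating fitting loop the abstract describes, alternating variance-parameter estimation with re-registration to the current template; the helper callables and the simple mean-based template update are placeholders, not the paper's estimator.

```python
import numpy as np

def fit_alternating(images, estimate_variances, register_image, n_outer=10):
    """images: (n_images, h, w); register_image returns a warped image."""
    template = images.mean(axis=0)        # initial template estimate
    warped = images.copy()
    variances = None
    for _ in range(n_outer):
        # Step 1: update variance parameters given the current registration.
        variances = estimate_variances(warped, template)
        # Step 2: re-register each image to the template given the variances.
        warped = np.stack([register_image(img, template, variances)
                           for img in images])
        # Step 3: update the template from the registered images
        # (a simple mean is used here as a placeholder).
        template = warped.mean(axis=0)
    return template, warped, variances
```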

    Crosslingual Document Embedding as Reduced-Rank Ridge Regression

    There has recently been much interest in extending vector-based word representations to multiple languages, such that words can be compared across languages. In this paper, we shift the focus from words to documents and introduce a method for embedding documents written in any language into a single, language-independent vector space. For training, our approach leverages a multilingual corpus where the same concept is covered in multiple languages (but not necessarily via exact translations), such as Wikipedia. Our method, Cr5 (Crosslingual reduced-rank ridge regression), starts by training a ridge-regression-based classifier that uses language-specific bag-of-words features in order to predict the concept that a given document is about. We show that, when constraining the learned weight matrix to be of low rank, it can be factored to obtain the desired mappings from language-specific bags-of-words to language-independent embeddings. As opposed to most prior methods, which use pretrained monolingual word vectors, postprocess them to make them crosslingual, and finally average word vectors to obtain document vectors, Cr5 is trained end-to-end and is thus natively crosslingual as well as document-level. Moreover, since our algorithm uses the singular value decomposition as its core operation, it is highly scalable. Experiments show that our method achieves state-of-the-art performance on a crosslingual document retrieval task. Finally, although not trained for embedding sentences and words, it also achieves competitive performance on crosslingual sentence and word retrieval tasks. Comment: In The Twelfth ACM International Conference on Web Search and Data Mining (WSDM '19).
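    A simplified illustration of the reduced-rank ridge regression idea: fit a ridge weight matrix from bag-of-words features to concept targets, then truncate its SVD to rank k and use the left factor as the map into a k-dimensional shared space. This is a hedged sketch of the general technique, not the Cr5 implementation.

```python
import numpy as np

def reduced_rank_ridge(X, Y, rank, lam=1.0):
    """X: (n_docs, vocab) features, Y: (n_docs, n_concepts) targets."""
    d = X.shape[1]
    # Ordinary ridge solution W = (X^T X + lam I)^(-1) X^T Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    # Approximate the low-rank constraint by truncated SVD, factoring W = A @ B.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (vocab, rank): bag-of-words -> embedding
    B = Vt[:rank]                # (rank, n_concepts): embedding -> concepts
    return A, B

# Usage sketch: document embeddings are X @ A, comparable across languages
# when A is learned jointly over the multilingual corpus.
```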