Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM
We introduce a new rotationally invariant viewing angle classification method
for identifying, among a large number of Cryo-EM projection images, similar
views without prior knowledge of the molecule. Our rotationally invariant
features are based on the bispectrum. Each image is denoised and compressed
using steerable principal component analysis (PCA) such that rotating an image
is equivalent to phase shifting the expansion coefficients. Thus we are able to
extend the theory of bispectrum of 1D periodic signals to 2D images. The
randomized PCA algorithm is then used to efficiently reduce the dimensionality
of the bispectrum coefficients, enabling fast computation of the similarity
between any pair of images. The nearest neighbors provide an initial
classification of similar viewing angles. In this way, rotational alignment is
only performed for images with their nearest neighbors. The initial nearest
neighbor classification and alignment are further improved by a new
classification method called vector diffusion maps. Our pipeline for viewing
angle classification and alignment is experimentally shown to be faster and
more accurate than reference-free alignment with rotationally invariant K-means
clustering, MSA/MRA 2D classification, and their modern approximations.
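The rotation invariance of the bispectrum can be illustrated in the 1D setting the abstract extends from: for a periodic signal with Fourier coefficients f_k, the bispectrum b(k1, k2) = f_{k1} f_{k2} conj(f_{k1+k2}) is unchanged by any cyclic shift, because the induced phase factors cancel. A minimal NumPy sketch (the function name `bispectrum` and the toy signal are illustrative, not from the paper):

```python
import numpy as np

def bispectrum(fhat):
    """Bispectrum of a 1D periodic signal from its Fourier coefficients:
    b[k1, k2] = fhat[k1] * fhat[k2] * conj(fhat[k1 + k2]).
    A cyclic shift multiplies fhat[k] by exp(-2*pi*i*k*m/n); in the product
    the phases exp(-2*pi*i*(k1 + k2 - (k1 + k2))*m/n) cancel, so b is
    shift-invariant."""
    n = len(fhat)
    k1, k2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return fhat[k1] * fhat[k2] * np.conj(fhat[(k1 + k2) % n])

rng = np.random.default_rng(0)
f = rng.standard_normal(16)
fhat = np.fft.fft(f)
shifted = np.roll(f, 5)  # cyclic shift = pure phase shift in Fourier domain
print(np.allclose(bispectrum(fhat), bispectrum(np.fft.fft(shifted))))  # True
```

In the paper's 2D setting the same cancellation applies because steerable PCA makes in-plane rotation act as a phase shift on the expansion coefficients, with angular frequency playing the role of k.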
Interpretable Transformations with Encoder-Decoder Networks
Deep feature spaces have the capacity to encode complex transformations of
their input data. However, understanding the relative feature-space
relationship between two transformed encoded images is difficult. For instance,
what is the relative feature space relationship between two rotated images?
What is decoded when we interpolate in feature space? Ideally, we want to
disentangle confounding factors, such as pose, appearance, and illumination,
from object identity. Disentangling these is difficult because they interact in
very nonlinear ways. We propose a simple method to construct a deep feature
space, with explicitly disentangled representations of several known
transformations. A person or algorithm can then manipulate the disentangled
representation, for example, to re-render an image with explicit control over
parameterized degrees of freedom. The feature space is constructed using a
transforming encoder-decoder network with a custom feature transform layer,
acting on the hidden representations. We demonstrate the advantages of explicit
disentangling on a variety of datasets and transformations, and as an aid for
traditional tasks, such as classification. Comment: Accepted at ICCV 2017.
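The core mechanism, a feature transform layer acting on the hidden representation, can be sketched in miniature: treat consecutive channel pairs of the code as complex numbers and apply the transformation parameter (here an in-plane rotation angle) as a complex phase. This is a NumPy illustration of the idea, not the paper's network; the helper name `feature_transform` is ours:

```python
import numpy as np

def feature_transform(z, theta):
    """Rotate consecutive channel pairs of a code vector z by angle theta.

    Viewing (z[2i], z[2i+1]) as a complex number, a rotation in feature
    space is multiplication by exp(i*theta), so transformations compose
    additively in theta -- the property a feature transform layer exploits.
    """
    zc = z[0::2] + 1j * z[1::2]
    zc = zc * np.exp(1j * theta)
    out = np.empty_like(z)
    out[0::2], out[1::2] = zc.real, zc.imag
    return out

z = np.array([1.0, 0.0, 0.0, 1.0])
z90 = feature_transform(z, np.pi / 2)  # each channel pair rotated 90 degrees
print(np.round(z90, 6))                # pairs become (0, 1) and (-1, 0)
```

Because the layer is an explicit rotation, interpolating theta interpolates the transformation itself, which is what makes the disentangled representation manipulable by a person or algorithm.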
Semantic-Context-Based Augmented Descriptor For Image Feature Matching
This paper proposes an augmented version of local features that enhances the discriminative power of the feature without affecting its invariance to image deformations. The idea is to learn the semantics of local features and exploit them, in conjunction with the bag-of-words paradigm, to build an augmented feature descriptor. Any local descriptor can be cast in the proposed context, so the approach generalizes easily to any local approach. The semantic-context signature is a 2D histogram that accumulates the spatial distribution of the visual words around each local feature. The resulting semantic-context component is concatenated with the local feature to form the proposed descriptor, which is expected to handle ambiguities arising in images with multiple similar motifs and slight, complicated non-affine distortions, outliers, and detector errors. The approach is evaluated on two data sets. The first is deliberately selected to contain images with multiple similar regions and slight non-affine distortions; the second is the standard data set of Mikolajczyk. The evaluation results show that the approach performs significantly better than other methods.
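The semantic-context signature can be sketched as follows: around each keypoint, accumulate the visual-word labels of neighboring features into a 2D histogram (here binned by angular sector and word index; the exact binning, the helper name `semantic_context`, and parameters such as `n_angular` and `radius` are our assumptions, not the paper's), then concatenate it with the local descriptor:

```python
import numpy as np

def semantic_context(kp, neighbors_xy, neighbor_words, n_words,
                     n_angular=8, radius=50.0):
    """Illustrative 2D histogram (angular sector x visual word) of the
    words surrounding a keypoint, flattened and L1-normalized."""
    hist = np.zeros((n_angular, n_words))
    d = neighbors_xy - kp
    r = np.hypot(d[:, 0], d[:, 1])
    ang = np.arctan2(d[:, 1], d[:, 0])                      # in [-pi, pi]
    sector = ((ang + np.pi) / (2 * np.pi) * n_angular).astype(int) % n_angular
    for s, w, ri in zip(sector, neighbor_words, r):
        if ri <= radius:                                    # only nearby features
            hist[s, w] += 1
    total = hist.sum()
    return (hist / total).ravel() if total > 0 else hist.ravel()

# augmented descriptor = [local descriptor | semantic-context signature]
desc = np.random.default_rng(1).standard_normal(128)        # stand-in for e.g. SIFT
ctx = semantic_context(np.array([0.0, 0.0]),
                       np.array([[10.0, 0.0], [0.0, 20.0], [-5.0, -5.0]]),
                       np.array([3, 7, 3]), n_words=16)
augmented = np.concatenate([desc, ctx])
print(augmented.shape)  # (256,): 128 local + 8*16 context bins
```

The context component changes only when the surrounding word layout changes, which is how it disambiguates repeated motifs without touching the local descriptor's invariance.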
Mahalanobis Distance for Class Averaging of Cryo-EM Images
Single particle reconstruction (SPR) from cryo-electron microscopy (EM) is a
technique in which the 3D structure of a molecule needs to be determined from
its contrast transfer function (CTF) affected, noisy 2D projection images taken
at unknown viewing directions. One of the main challenges in cryo-EM is the
typically low signal to noise ratio (SNR) of the acquired images. 2D
classification of images, followed by class averaging, improves the SNR of the
resulting averages, and is used for selecting particles from micrographs and
for inspecting the particle images. We introduce a new affinity measure, akin
to the Mahalanobis distance, to compare cryo-EM images belonging to different
defocus groups. The new similarity measure is employed to detect similar
images, thereby leading to an improved algorithm for class averaging. We
evaluate the performance of the proposed class averaging procedure on synthetic
datasets, obtaining state-of-the-art classification. Comment: Final version accepted to the 14th IEEE International Symposium on Biomedical Imaging (ISBI 2017).
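The underlying distance can be illustrated generically: the Mahalanobis distance between two coefficient vectors whitens the difference by a covariance before measuring length. The paper's affinity additionally accounts for the per-group CTFs and noise model, which this NumPy sketch deliberately omits:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance sqrt((x - y)^T cov^{-1} (x - y)); a generic
    sketch only -- the paper's measure also models CTF differences
    between defocus groups."""
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 3))        # toy image-coefficient vectors
cov = np.cov(A, rowvar=False)
d = mahalanobis(A[0], A[1], cov)
print(d >= 0.0)                          # True: a valid (pseudo)metric value
```

With an identity covariance this reduces to the Euclidean distance; the covariance weighting is what lets the affinity compare images across different noise and defocus conditions on an equal footing.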
Discriminative learning of local image descriptors
In this paper, we explore methods for learning local image descriptors from training data. We describe a set of building blocks for constructing descriptors which can be combined together and jointly optimized so as to minimize the error of a nearest-neighbor classifier. We consider both linear and nonlinear transforms with dimensionality reduction, and make use of discriminant learning techniques such as Linear Discriminant Analysis (LDA) and Powell minimization to solve for the parameters. Using these techniques, we obtain descriptors that exceed state-of-the-art performance with low dimensionality. In addition to new experiments and recommendations for descriptor learning, we are also making available a new and realistic ground truth data set based on multiview stereo data.
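One of the building blocks named above, LDA-style discriminative dimensionality reduction, can be sketched from scratch: form within-class and between-class scatter matrices and keep the top generalized eigenvectors. This is textbook Fisher LDA in NumPy, not the paper's full pipeline; the helper name `lda_projection` and the toy data are ours:

```python
import numpy as np

def lda_projection(X, y, k):
    """Fisher LDA: project descriptors X (n x d) with labels y onto the
    top-k discriminative directions via the generalized eigenproblem
    Sb v = lambda Sw v (solved here as eig(Sw^{-1} Sb))."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                      # within-class scatter
    Sb = np.zeros((d, d))                      # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # small ridge keeps Sw invertible for toy data
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:k]]

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 8)) + 3,   # "matching" class, shifted
               rng.standard_normal((50, 8))])      # "non-matching" class
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, 1)
proj = X @ W
# the two classes separate cleanly along the single learned direction
print(abs(proj[:50].mean() - proj[50:].mean()) > 3 * proj[:50].std())
```

In descriptor learning the classes would be matching versus non-matching patch pairs, and the projection both discriminates and reduces dimensionality, which is the low-dimension/high-performance trade-off the abstract highlights.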