
    Structural Variability from Noisy Tomographic Projections

    In cryo-electron microscopy, the 3D electric potentials of an ensemble of molecules are projected along arbitrary viewing directions to yield noisy 2D images. The volume maps representing these potentials typically exhibit a great deal of structural variability, which is described by their 3D covariance matrix. Typically, this covariance matrix is approximately low-rank and can be used to cluster the volumes or estimate the intrinsic geometry of the conformation space. We formulate the estimation of this covariance matrix as a linear inverse problem, yielding a consistent least-squares estimator. For $n$ images of size $N$-by-$N$ pixels, we propose an algorithm for calculating this covariance estimator with computational complexity $\mathcal{O}(nN^4 + \sqrt{\kappa} N^6 \log N)$, where the condition number $\kappa$ is empirically in the range 10--200. Its efficiency relies on the observation that the normal equations are equivalent to a deconvolution problem in 6D. This is then solved by the conjugate gradient method with an appropriate circulant preconditioner. The result is the first computationally efficient algorithm for consistent estimation of 3D covariance from noisy projections. It also compares favorably in runtime with respect to previously proposed non-consistent estimators. Motivated by the recent success of eigenvalue shrinkage procedures for high-dimensional covariance matrices, we introduce a shrinkage procedure that improves accuracy at lower signal-to-noise ratios. We evaluate our methods on simulated datasets and achieve classification results comparable to state-of-the-art methods in shorter running time. We also present results on clustering volumes in an experimental dataset, illustrating the power of the proposed algorithm for practical determination of structural variability.
    Comment: 52 pages, 11 figures
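    The key computational step, solving convolution-structured normal equations by conjugate gradient with a circulant preconditioner, can be illustrated in one dimension. The Python sketch below is a toy analogue of that idea, not the paper's 6D solver: the operator, its non-circulant diagonal part, and all sizes are invented for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 256

# Toy SPD operator = circulant part (convolution with a PSF autocorrelation)
# plus a non-circulant diagonal, standing in for the 6D normal equations.
psf = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
symbol = np.abs(np.fft.fft(psf)) ** 2 + 1e-3   # Fourier symbol of the circulant part
d = 0.5 * (1.0 + np.sin(2 * np.pi * np.arange(n) / n) ** 2)

def apply_A(x):
    return np.real(np.fft.ifft(symbol * np.fft.fft(x))) + d * x

def apply_M(x):
    # Circulant preconditioner: invert the circulant part exactly in
    # Fourier space, replacing the diagonal by its mean.
    return np.real(np.fft.ifft(np.fft.fft(x) / (symbol + d.mean())))

A = LinearOperator((n, n), matvec=apply_A)
M = LinearOperator((n, n), matvec=apply_M)
b = apply_A(rng.standard_normal(n))

def run(Mop):
    k = [0]  # count CG iterations via the callback
    cg(A, b, M=Mop, maxiter=1000, callback=lambda xk: k.__setitem__(0, k[0] + 1))
    return k[0]

print("plain CG iterations:         ", run(None))
print("preconditioned CG iterations:", run(M))  # far fewer on this toy problem
```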

    Wavelets, ridgelets and curvelets on the sphere

    In this paper we present new multiscale transforms on the sphere, namely the isotropic undecimated wavelet transform, the pyramidal wavelet transform, the ridgelet transform and the curvelet transform. All of these transforms can be inverted, i.e., we can exactly reconstruct the original data from its coefficients in either representation. Several applications are described. We show how these transforms can be used in denoising, and especially in a Combined Filtering Method that uses both the wavelet and the curvelet transforms, thus benefiting from the advantages of each. We also describe an application to component separation from multichannel data mapped to the sphere, in which we take advantage of moving to a wavelet representation.
    Comment: Accepted for publication in A&A. Manuscript with all figures can be downloaded at http://jstarck.free.fr/aa_sphere05.pdf
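    As a flavor of how the isotropic undecimated wavelet transform achieves exact reconstruction, here is a minimal planar (not spherical) starlet / "a trous" sketch: the coarse band plus the sum of all wavelet bands reproduces the input to machine precision. The B3-spline kernel is the standard choice for this transform; the image and scale count below are arbitrary.

```python
import numpy as np
from scipy.ndimage import convolve1d

# B3-spline scaling kernel used by the starlet ("a trous") transform.
H = np.array([1, 4, 6, 4, 1]) / 16.0

def atrous_smooth(img, j):
    """Separable convolution with the B3 kernel upsampled by 2**j (holes)."""
    k = np.zeros(4 * 2**j + 1)
    k[:: 2**j] = H
    out = convolve1d(img, k, axis=0, mode="reflect")
    return convolve1d(out, k, axis=1, mode="reflect")

def starlet(img, scales):
    c, ws = img.astype(float), []
    for j in range(scales):
        c_next = atrous_smooth(c, j)
        ws.append(c - c_next)   # wavelet (detail) coefficients at scale j
        c = c_next
    return ws, c                # detail bands + coarse residual

img = np.random.default_rng(1).standard_normal((64, 64))
ws, coarse = starlet(img, 4)
recon = coarse + sum(ws)        # exact reconstruction: sum of all bands
print(np.max(np.abs(recon - img)))  # ~1e-15
```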

    Quantum State Tomography of a Single Qubit: Comparison of Methods

    The tomographic reconstruction of the state of a quantum-mechanical system is an essential component in the development of quantum technologies. We present an overview of different tomographic methods for determining the quantum-mechanical density matrix of a single qubit: (scaled) direct inversion, maximum likelihood estimation (MLE), minimum Fisher information distance, and Bayesian mean estimation (BME). We discuss the different prior densities in the space of density matrices, on which both MLE and BME depend, as well as ways of including experimental errors and of estimating tomography errors. As a measure of the accuracy of these methods, we average the trace distance between a given density matrix and the tomographic density matrices it can give rise to through experimental measurements. We find that BME provides the most accurate estimate of the density matrix, and suggest using either the pure-state prior, if the system is known to be in a rather pure state, or the Bures prior if any state is possible. MLE is found to be slightly less accurate. We comment on the extrapolation of these results to larger systems.
    Comment: 15 pages, 4 figures, 2 tables; replaced previous figure 5 by new table I. In Journal of Modern Optics, 201
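    A minimal sketch of the simplest method in the comparison, (scaled) direct inversion, together with the trace distance used as the accuracy measure. The measurement counts and the choice of the state |0> are illustrative, not from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2, dtype=complex)

def direct_inversion(ex, ey, ez):
    """Scaled direct inversion: rescale the Bloch vector back onto the
    Bloch ball if finite-sample noise pushed it outside."""
    r = np.array([ex, ey, ez])
    norm = np.linalg.norm(r)
    if norm > 1.0:          # unphysical estimate -> scale back
        r = r / norm
    return 0.5 * (I2 + r[0] * sx + r[1] * sy + r[2] * sz)

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * sum of singular values of (rho - sigma)."""
    return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

# Example: estimates from N measurements per Pauli basis on the state |0>.
N = 100
rng = np.random.default_rng(2)
ex = 2 * rng.binomial(N, 0.5) / N - 1   # <sigma_x> estimate (true value 0)
ey = 2 * rng.binomial(N, 0.5) / N - 1   # <sigma_y> estimate (true value 0)
ez = 2 * rng.binomial(N, 1.0) / N - 1   # <sigma_z> estimate (true value 1)

rho_hat = direct_inversion(ex, ey, ez)
rho_true = np.array([[1, 0], [0, 0]], complex)
print(trace_distance(rho_hat, rho_true))
```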

    Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty

    This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments extracted from an RGB-D camera. Noisy depth maps are processed by a probabilistic depth fusion framework based on Mixtures of Gaussians to denoise them and derive the depth uncertainty, which is then propagated throughout the visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are used to model the uncertainties of the feature parameters, and pose is estimated by combining the three types of primitives based on their uncertainties. Performance evaluation on RGB-D sequences collected in this work and on two public RGB-D datasets (TUM and ICL-NUIM) shows the benefit of using the proposed depth fusion framework and of combining the three feature types, particularly in scenes with low-textured surfaces, dynamic objects and missing depth measurements.
    Comment: Major update: more results, depth filter released as open source, 34 pages
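    A minimal sketch of one ingredient: first-order propagation of pixel and depth noise through pinhole back-projection to a 3D point covariance. The intrinsics and the quadratic depth-noise model are assumptions for illustration; the paper's Mixture-of-Gaussians fusion and probabilistic line/plane fitting are more involved.

```python
import numpy as np

# Hypothetical pinhole intrinsics (not from the paper).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def backproject_with_cov(u, v, z, sigma_z, sigma_px=0.5):
    """First-order propagation of pixel and depth noise to a 3D point.

    p = ((u-cx) z / fx, (v-cy) z / fy, z); cov = J diag(var_u, var_v, var_z) J^T.
    """
    p = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    J = np.array([
        [z / fx, 0.0,    (u - cx) / fx],
        [0.0,    z / fy, (v - cy) / fy],
        [0.0,    0.0,    1.0],
    ])
    noise = np.diag([sigma_px**2, sigma_px**2, sigma_z**2])
    return p, J @ noise @ J.T

# Common stereo/structured-light model: depth noise grows quadratically with z
# (an assumed sensor model, not taken from the paper).
z = 3.0
sigma_z = 0.0012 * z**2
p, cov = backproject_with_cov(400.0, 300.0, z, sigma_z)
print(p, np.sqrt(np.diag(cov)))   # point and its per-axis standard deviations
```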

    Efficient Calculation of Resolution and Covariance for Penalized-Likelihood Reconstruction in Fully 3-D SPECT

    Resolution and covariance predictors have been derived previously for penalized-likelihood estimators. These predictors can provide accurate approximations to the local resolution properties and covariance functions of tomographic systems, given a good estimate of the mean measurements. Although these predictors may be evaluated iteratively, circulant approximations are often made for practical computation times. However, when numerous evaluations are made repeatedly (as in penalty design or calculation of variance images), these predictors still require large amounts of computing time. In Stayman and Fessler (2000), we discussed methods for precomputing a large portion of the predictor for shift-invariant system geometries. In this paper, we generalize that efficient procedure to shift-variant single photon emission computed tomography (SPECT) systems. This generalization relies on a new attenuation approximation and several observations on the symmetries in SPECT systems. The new general procedures apply to both two-dimensional and fully three-dimensional (3-D) SPECT models, which may be either precomputed and stored or written in procedural form. We demonstrate the high accuracy of the predictions based on these methods using a simulated anthropomorphic phantom and a fully 3-D SPECT system. The evaluation of these predictors requires significantly less computation time than traditional prediction techniques, once the system-geometry-specific precomputations have been made.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85992/1/Fessler54.pdf
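    The circulant approximation at the heart of such predictors is easy to sketch in 1D: the local impulse response l_j = [A'WA + beta R]^{-1} A'WA e_j is evaluated with FFTs by treating both operators as circulant near voxel j. Everything below (PSF, penalty, beta) is a toy stand-in for the SPECT models in the paper.

```python
import numpy as np

n = 128
j = n // 2                              # voxel of interest

# Toy Fisher information A'WA: circular convolution with a PSF autocorrelation.
psf = np.exp(-0.5 * (np.arange(n) - j) ** 2 / 9.0)
fisher_col = np.real(np.fft.ifft(np.abs(np.fft.fft(psf)) ** 2))  # (A'WA) e_j, shifted to 0

# Quadratic roughness penalty R: circulant second-difference operator.
r_col = np.zeros(n)
r_col[[0, 1, -1]] = [2.0, -1.0, -1.0]   # R e_0 for the periodic 1D chain

beta = 2.0 ** 4
Ff = np.fft.fft(fisher_col)
Fr = np.fft.fft(r_col)

# Circulant approximation of the local impulse response at voxel j:
# l_j = [A'WA + beta R]^{-1} A'WA e_j, diagonalized by the DFT.
lir = np.real(np.fft.ifft(Ff / (Ff + beta * Fr)))
lir = np.roll(lir, j)                   # recenter on voxel j
print(np.sum(lir) / lir.max())          # equivalent width: crude resolution summary
```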

    Feasibility and performances of compressed-sensing and sparse map-making with Herschel/PACS data

    The Herschel Space Observatory of ESA was launched in May 2009 and has been in operation since. From its distant orbit around L2 it needs to transmit a huge quantity of information through a very limited bandwidth. This is especially true for the PACS imaging camera, which needs to compress its data far more than what can be achieved with lossless compression. This is currently solved by including lossy averaging and rounding steps on board. Recently, a new theory called compressed sensing emerged from the statistics community. This theory makes use of the sparsity of natural (or astrophysical) images to optimize the acquisition scheme of the data needed to estimate those images, and can thus lead to high compression factors. A previous article by Bobin et al. (2008) showed how the new theory could be applied to simulated Herschel/PACS data to solve the compression requirement of the instrument. In this article, we show that compressed-sensing theory can indeed be successfully applied to actual Herschel/PACS data and gives significant improvements over the standard pipeline. In order to fully use the redundancy present in the data, we perform full sky map estimation and decompression at the same time, which cannot be done in most other compression methods. We also demonstrate that the various artifacts affecting the data (pink noise and glitches, whose behavior is a priori not well suited to compressed sensing) can be handled in this new framework as well. Finally, we compare the results of the compressed-sensing scheme with data acquired using the standard compression scheme, and we discuss improvements that can be made on the ground when creating sky maps from the data.
    Comment: 11 pages, 6 figures, 5 tables, peer-reviewed article
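    The core compressed-sensing principle, recovering a sparse signal from a few random linear measurements, can be sketched with iterative soft thresholding (ISTA). This shows only the principle; the actual PACS pipeline jointly estimates sky maps while handling pink noise and glitches. The sizes, sparsity level, and sensing matrix below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 400, 120, 10                 # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) * 5

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x_true + 0.01 * rng.standard_normal(m)

# ISTA: proximal gradient descent on 0.5||y - Phi x||^2 + lam ||x||_1.
lam = 0.05
L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = Phi.T @ (Phi @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small relative error
```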

    PLVS: A SLAM System with Points, Lines, Volumetric Mapping, and 3D Incremental Segmentation

    This document presents PLVS: a real-time system that leverages sparse SLAM, volumetric mapping, and 3D unsupervised incremental segmentation. PLVS stands for Points, Lines, Volumetric mapping, and Segmentation. It supports RGB-D and stereo cameras, which may optionally be equipped with IMUs. The SLAM module is keyframe-based, and extracts and tracks sparse points and line segments as features. Volumetric mapping runs in parallel with the SLAM front-end and generates a 3D reconstruction of the explored environment by fusing point clouds back-projected from keyframes. Different volumetric mapping methods are supported and integrated in PLVS. We use a novel reprojection error to bundle-adjust line segments. This error exploits available depth information to stabilize the position estimates of line segment endpoints. An incremental, geometry-based segmentation method is implemented and integrated for RGB-D cameras in the PLVS framework. We present qualitative and quantitative evaluations of the PLVS framework on some publicly available datasets. The appendix details the adopted stereo line triangulation method and provides a derivation of the Jacobians we use for the line error terms. The software is available as open source.
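    For intuition, here is a generic point-to-line reprojection residual for line segment endpoints, of the kind used in line-based SLAM back-ends. The depth-dependent weighting is an assumed stand-in, not PLVS's actual error term, whose exact form and Jacobians are given in the paper's appendix; the intrinsics are also assumptions.

```python
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics

def project(p_cam):
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

def line_residuals(P, Q, p_obs, q_obs, sigma_z=0.01):
    """Point-to-infinite-line reprojection error for segment endpoints.

    P, Q: 3D endpoints in the camera frame; p_obs, q_obs: detected 2D endpoints.
    Returns the signed distances of the projected endpoints to the observed
    2D line, down-weighted by an assumed depth-dependent uncertainty.
    """
    # Normalized 2D line through the observed endpoints: l . (u, v, 1) = 0.
    l = np.cross(np.append(p_obs, 1.0), np.append(q_obs, 1.0))
    l /= np.linalg.norm(l[:2])
    rp = l @ np.append(project(P), 1.0)
    rq = l @ np.append(project(Q), 1.0)
    # Far endpoints are noisier in depth (assumed model), so weight them less.
    wp = 1.0 / (1.0 + sigma_z * P[2] ** 2)
    wq = 1.0 / (1.0 + sigma_z * Q[2] ** 2)
    return np.array([wp * rp, wq * rq])

res = line_residuals(np.array([0.1, 0.0, 2.0]), np.array([0.4, 0.2, 2.5]),
                     np.array([346.0, 240.0]), np.array([404.0, 282.0]))
print(res)   # small residuals: observations lie close to the projected line
```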

    Reduced and coded sensing methods for x-ray based security

    Current x-ray technologies provide security personnel with non-invasive sub-surface imaging and contraband detection in various portal screening applications, such as checked and carry-on baggage as well as cargo. Computed tomography (CT) scanners generate detailed 3D imagery of checked bags; however, these scanners often require significant power, cost, and space, making them impractical for many applications where space and power are limited, such as checkpoint areas. Reducing the amount of data acquired would ease the physical demands of these systems, but it unfortunately leads to the formation of artifacts in various applications, presenting significant challenges in reconstruction and classification. The goal, therefore, is to maintain a certain level of image quality while reducing the amount of data gathered. In the security domain this would allow faster and cheaper screening in existing systems, or enable screening options that were previously infeasible due to operational constraints. While our focus is predominantly on security applications, many of the techniques can be extended to other fields, such as the medical domain, where a reduction of dose can allow safer and more frequent examinations.
    This dissertation aims to advance data reduction algorithms for security-motivated x-ray imaging in three main areas: (i) development of a sensing-aware dimensionality reduction framework; (ii) creation of a linear-motion tomographic method of object scanning, with associated reconstruction algorithms, for carry-on baggage screening; and (iii) application of coded-aperture techniques to improve and extend the imaging performance of nuclear resonance fluorescence in cargo screening. The sensing-aware dimensionality reduction framework extends existing dimensionality reduction methods to include knowledge of the sensing mechanism underlying a latent variable; it provides an improved classification rate over classical methods on both a synthetic case and a popular face classification dataset. The linear tomographic method is based on non-rotational scanning of baggage moved by a conveyor belt, and can thus be simpler, smaller, and more reliable than existing rotational tomography systems, at the expense of more challenging image formation problems that require special model-based methods. The reconstructions from this approach are comparable to those of existing tomographic systems. Finally, our coded-aperture extension of existing nuclear resonance fluorescence cargo scanning provides improved observation signal-to-noise ratios. We analyze, discuss, and demonstrate the strengths and challenges of using coded-aperture techniques in this application, and provide guidance on regimes where these methods can yield gains over conventional methods.
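    The coded-aperture idea in (iii) can be sketched in 1D: a random binary mask multiplexes the source onto the detector, and the scene is recovered by regularized least squares. The mask, noise level, and sources below are invented for illustration; the dissertation's nuclear resonance fluorescence setting is far more constrained.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
mask = rng.integers(0, 2, n).astype(float)    # random binary aperture (~50% open)

# Circulant forward model: each detector bin sums the source through the
# open slots of a shifted copy of the mask.
A = np.stack([np.roll(mask, i) for i in range(n)])

x_true = np.zeros(n)
x_true[[40, 90, 150]] = [3.0, 5.0, 2.0]       # a few bright sources
y = A @ x_true + 0.5 * rng.standard_normal(n) # noisy coded measurement

# Regularized least-squares decoding (the ridge keeps the inversion stable).
lam = 1.0
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(x_hat[[40, 90, 150]])                   # peaks recovered near 3, 5, 2
```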