6,578 research outputs found

    Registration of Standardized Histological Images in Feature Space

    In this paper, we propose three novel methods for the registration of histological images for 3D reconstruction. First, intensity variations and nonstandardness in the images are corrected by an intensity standardization process that maps the image scale onto a standard scale in which similar intensities correspond to similar tissue meanings. Second, the 2D histological images are mapped into a feature space where continuous variables are used as high-confidence image features for accurate registration. Third, we propose an automatic best-reference-slice selection algorithm that improves reconstruction quality based on both the image entropy and the mean square error of the registration process. We demonstrate that the choice of reference slice has a significant impact on registration error, standardization, feature space, and entropy information. After the 2D histological slices are registered through an affine transformation with respect to the automatically chosen reference, the 3D volume is reconstructed by co-registering the 2D slices elastically.
    Comment: SPIE Medical Imaging 2008 - submission
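    The entropy-plus-MSE criterion for choosing the reference slice could be sketched as below; the weighting `alpha`, the `register_mse` callback, and the histogram representation are illustrative assumptions, not the paper's exact formulation.

    ```python
    import math

    def entropy(hist):
        # Shannon entropy (bits) of an intensity histogram
        total = sum(hist)
        return -sum((c / total) * math.log2(c / total) for c in hist if c > 0)

    def select_reference(slices, register_mse, alpha=0.5):
        """Pick the slice index that balances high entropy (information-rich
        content) against low mean registration MSE to the other slices.
        `slices` holds per-slice intensity histograms; `register_mse(i, j)`
        is a hypothetical callback returning the registration error between
        slices i and j."""
        best, best_score = None, float("inf")
        n = len(slices)
        for i in range(n):
            mse = sum(register_mse(i, j) for j in range(n) if j != i) / (n - 1)
            score = alpha * mse - (1 - alpha) * entropy(slices[i])
            if score < best_score:
                best, best_score = i, score
        return best
    ```

    A slice with a flat histogram (high entropy) and low average registration error wins; how the two terms are combined in the paper itself is not specified in the abstract.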

    Robust Non-Rigid Registration with Reweighted Position and Transformation Sparsity

    Non-rigid registration is challenging because it is ill-posed, has high degrees of freedom, and is therefore sensitive to noise and outliers. We propose a robust non-rigid registration method using reweighted sparsities on position and transformation to estimate the deformations between 3-D shapes. We formulate the energy function with position and transformation sparsity on both the data term and the smoothness term, and define the smoothness constraint using local rigidity. The double-sparsity-based non-rigid registration model is enhanced with a reweighting scheme and solved by decomposing the model into four alternately optimized subproblems that have exact solutions and guaranteed convergence. Experimental results on both public datasets and real scanned datasets show that our method outperforms the state-of-the-art methods and is more robust to noise and outliers than conventional non-rigid registration methods.
    Comment: IEEE Transactions on Visualization and Computer Graphics
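    The reweighting idea is in the spirit of iteratively reweighted least squares / reweighted-L1 schemes; a minimal sketch, with `eps` and the closed-form soft-threshold step as generic stand-ins rather than the paper's actual subproblem solvers:

    ```python
    def reweight(residuals, eps=1e-3):
        # Large residuals (likely noise/outliers) receive small weights, so
        # the next sparsity-penalized solve discounts them.
        return [1.0 / (abs(r) + eps) for r in residuals]

    def soft_threshold(x, lam):
        # Proximal operator of an L1 penalty: the kind of exact closed-form
        # update that makes an alternately optimized subproblem cheap.
        return max(abs(x) - lam, 0.0) * (1.0 if x >= 0 else -1.0)
    ```

    In each outer iteration one would re-solve the weighted subproblems, then refresh the weights from the new residuals; the abstract guarantees convergence for the paper's four-subproblem decomposition, not for this simplified sketch.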

    Automatic Spatial Calibration of Ultra-Low-Field MRI for High-Accuracy Hybrid MEG--MRI

    With a hybrid MEG--MRI device that uses the same sensors for both modalities, the co-registration of MRI and MEG data can be replaced by an automatic calibration step. Based on the highly accurate signal model of ultra-low-field (ULF) MRI, we introduce a calibration method that eliminates the error sources of traditional co-registration. The signal model includes the complex sensitivity profiles of the superconducting pickup coils. In ULF MRI, the profiles are independent of the sample and therefore well-defined. In the most basic form, the spatial information of the profiles, captured in parallel ULF-MR acquisitions, is used to find the exact coordinate transformation required. We assessed our calibration method by simulations assuming a helmet-shaped pickup-coil-array geometry. Using a carefully constructed objective function and sufficient approximations, sub-voxel and sub-millimeter calibration accuracy was achieved even with low-SNR images. After the calibration, distortion-free MRI and high spatial accuracy for MEG source localization can be achieved. Given an accurate sensor-array geometry, the co-registration and its associated errors are eliminated, and the positional error can be reduced to a negligible level.
    Comment: 11 pages, 8 figures. This work is part of the BREAKBEN project and has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 68686
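    The abstract does not reproduce the paper's objective function, but the final step, recovering a rigid coordinate transformation from corresponding spatial features, can be illustrated with a generic least-squares rigid alignment (the Kabsch algorithm). This is a stand-in under that assumption, not the paper's sensitivity-profile-based method.

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Least-squares rigid transform (R, t) with dst ~ src @ R.T + t,
        via the Kabsch algorithm. src and dst are (n, 3) arrays of
        corresponding points."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
        D = np.diag([1.0, 1.0, d])
        R = Vt.T @ D @ U.T
        t = c_dst - R @ c_src
        return R, t
    ```

    With noise-free correspondences this recovers the transform exactly; the paper's contribution is obtaining such spatial correspondences automatically from the ULF-MRI sensitivity profiles rather than from fiducials.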

    Interpretable Transformations with Encoder-Decoder Networks

    Deep feature spaces have the capacity to encode complex transformations of their input data. However, understanding the relative feature-space relationship between two transformed encoded images is difficult. For instance, what is the relative feature-space relationship between two rotated images? What is decoded when we interpolate in feature space? Ideally, we want to disentangle confounding factors, such as pose, appearance, and illumination, from object identity. Disentangling these is difficult because they interact in very nonlinear ways. We propose a simple method to construct a deep feature space with explicitly disentangled representations of several known transformations. A person or algorithm can then manipulate the disentangled representation, for example, to re-render an image with explicit control over parameterized degrees of freedom. The feature space is constructed using a transforming encoder-decoder network with a custom feature transform layer acting on the hidden representations. We demonstrate the advantages of explicit disentangling on a variety of datasets and transformations, and as an aid for traditional tasks such as classification.
    Comment: Accepted at ICCV 201
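    A toy version of a feature transform layer can make the idea concrete: apply an explicit rotation to pairs of hidden units, so that a known transformation of the input corresponds to an interpretable, parameterized operation in feature space. The pairing scheme and function names here are illustrative assumptions, not the paper's architecture.

    ```python
    import math

    def feature_transform(h, theta):
        """Rotate consecutive pairs of hidden units by angle theta.
        A rotation of the input then maps to this explicit rotation in
        feature space, leaving any unpaired unit untouched."""
        c, s = math.cos(theta), math.sin(theta)
        out = []
        for i in range(0, len(h) - 1, 2):
            x, y = h[i], h[i + 1]
            out += [c * x - s * y, s * x + c * y]
        if len(h) % 2:
            out.append(h[-1])  # pass through an unpaired unit unchanged
        return out
    ```

    Because rotations compose additively in the angle, interpolating theta interpolates the transformation smoothly, which is the interpretability property the abstract emphasizes.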

    A factorization approach to inertial affine structure from motion

    We consider the problem of reconstructing a 3-D scene from a moving camera at a high frame rate using the affine projection model. This problem is traditionally known as Affine Structure from Motion (Affine SfM) and can be solved using an elegant low-rank factorization formulation. In this paper, we assume that an accelerometer and gyroscope are rigidly mounted to the camera, so that synchronized linear acceleration and angular velocity measurements are available together with the image measurements. We extend the standard Affine SfM algorithm to integrate these measurements through the use of image derivatives.
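    The low-rank factorization the abstract refers to is the classical Tomasi-Kanade construction: stack the centered 2-D image measurements into a matrix and split it into rank-3 motion and shape factors. A minimal sketch of that baseline (without the paper's inertial extension):

    ```python
    import numpy as np

    def affine_sfm(W):
        """Rank-3 factorization of a (2F x P) measurement matrix W holding
        P tracked points over F frames (x-rows then y-rows per frame).
        Returns camera matrix M (2F x 3) and shape S (3 x P), recovered up
        to an affine ambiguity."""
        W0 = W - W.mean(axis=1, keepdims=True)   # remove per-row translation
        U, s, Vt = np.linalg.svd(W0, full_matrices=False)
        M = U[:, :3] * s[:3]                     # motion factor
        S = Vt[:3]                               # shape factor
        return M, S
    ```

    Under the affine projection model the centered measurement matrix has rank at most 3, so truncating the SVD at rank 3 recovers it exactly in the noise-free case; the paper's contribution is folding the accelerometer and gyroscope data into this formulation.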