
    MR image reconstruction using deep density priors

    Algorithms for Magnetic Resonance (MR) image reconstruction from undersampled measurements exploit prior information to compensate for missing k-space data. Deep learning (DL) provides a powerful framework for extracting such information from existing image datasets through learning, and then using it for reconstruction. Leveraging this, recent methods employed DL to learn mappings from undersampled to fully sampled images using paired datasets of undersampled images and their fully sampled counterparts, integrating prior knowledge implicitly. In this article, we propose an alternative approach that learns the probability distribution of fully sampled MR images using unsupervised DL, specifically Variational Autoencoders (VAE), and uses this as an explicit prior term in reconstruction, completely decoupling the encoding operation from the prior. The resulting reconstruction algorithm enjoys a powerful image prior to compensate for missing k-space data without requiring paired datasets for training or being prone to the associated sensitivities, such as deviations in undersampling patterns or coil settings between training and test time. We evaluated the proposed method with T1-weighted images from a publicly available dataset, multi-coil complex images acquired from healthy volunteers (N=8), and images with white matter lesions. The proposed algorithm, using the VAE prior, produced visually high-quality reconstructions and achieved low RMSE values, outperforming most of the alternative methods on the same dataset. On multi-coil complex data, the algorithm yielded accurate magnitude and phase reconstruction results. In the experiments on images with white matter lesions, the method faithfully reconstructed the lesions. Keywords: Reconstruction, MRI, prior probability, machine learning, deep learning, unsupervised learning, density estimation. Comment: Published in IEEE TMI. Main text and supplementary material, 19 pages total.
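The decoupled scheme described above amounts to a regularized inverse problem: enforce consistency with the measured k-space samples while a separately trained prior pulls the image toward plausible reconstructions. A minimal 1-D sketch of this idea, with a hand-crafted total-variation step standing in for the gradient of the learned VAE prior (the signal, mask, and step sizes are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "image" and a random k-space undersampling mask.
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                      # piecewise-constant ground truth
mask = rng.random(n) < 0.5               # keep ~50% of k-space samples
y = np.fft.fft(x_true)[mask]             # measured (undersampled) data

# Zero-filled reconstruction: inverse FFT with missing samples set to 0.
X0 = np.zeros(n, dtype=complex)
X0[mask] = y
x = np.real(np.fft.ifft(X0))
err_zero_filled = np.linalg.norm(x - x_true)

# Alternate a prior step (total-variation shrinkage, a stand-in for the
# learned VAE log-prior gradient) with a hard data-consistency projection.
for _ in range(400):
    g = np.sign(x - np.roll(x, 1)) - np.sign(np.roll(x, -1) - x)
    x = x - 0.02 * g                     # prior step
    X = np.fft.fft(x)
    X[mask] = y                          # re-impose measured k-space
    x = np.real(np.fft.ifft(X))

err_recon = np.linalg.norm(x - x_true)
```

Because the prior is decoupled from the encoding, the same prior step works unchanged if the sampling mask or coil model changes — the point the abstract makes about robustness to training/test mismatches.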

    Discrete Point Flow Networks for Efficient Point Cloud Generation

    Generative models have proven effective at modeling 3D shapes and their statistical variations. In this paper we investigate their application to point clouds, a 3D shape representation widely used in computer vision for which, however, only a few generative models have yet been proposed. We introduce a latent variable model that builds on normalizing flows with affine coupling layers to generate 3D point clouds of arbitrary size given a latent shape representation. To evaluate its benefits for shape modeling we apply this model to generation, autoencoding, and single-view shape reconstruction tasks. We improve over recent GAN-based models on most metrics that assess generation and autoencoding. Compared to recent work based on continuous flows, our model offers a significant speedup in both training and inference times for similar or better performance. For single-view shape reconstruction we also obtain results on par with state-of-the-art voxel, point cloud, and mesh-based methods. Comment: In ECCV'20.
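The affine coupling layer named in the abstract transforms one half of the input conditioned on the other half, which makes the Jacobian triangular (its log-determinant is just the sum of predicted log-scales) and the inverse exact. A self-contained sketch — the tiny conditioner "networks" here are illustrative stand-ins for the small MLPs a real flow would learn:

```python
import numpy as np

def coupling_forward(x, scale_net, shift_net):
    # Affine coupling: first half passes through unchanged; second half
    # is affinely transformed conditioned on the first half.
    x1, x2 = np.split(x, 2)
    s, t = scale_net(x1), shift_net(x1)
    y2 = x2 * np.exp(s) + t
    log_det = np.sum(s)                  # log|det Jacobian| of the layer
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y, scale_net, shift_net):
    # Exact inverse: recompute s, t from the untouched half and undo.
    y1, y2 = np.split(y, 2)
    s, t = scale_net(y1), shift_net(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

# Toy conditioners (assumed stand-ins for learned MLPs).
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))
scale_net = lambda h: np.tanh(W @ h)     # bounded log-scales for stability
shift_net = lambda h: W.T @ h

x = rng.normal(size=6)
y, log_det = coupling_forward(x, scale_net, shift_net)
x_rec = coupling_inverse(y, scale_net, shift_net)
```

Stacking such layers with alternating splits yields an invertible map whose exact likelihood is tractable, which is what lets the model be trained and sampled much faster than continuous (ODE-based) flows.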

    Diagnostic performance of the specific uptake size index for semi-quantitative analysis of I-123-FP-CIT SPECT: harmonized multi-center research setting versus typical clinical single-camera setting

    Introduction: The specific uptake size index (SUSI) of striatal FP-CIT uptake is independent of spatial resolution in the SPECT image, in contrast to the specific binding ratio (SBR). This suggests that the SUSI is particularly appropriate for multi-site/multi-camera settings in which camera-specific effects increase inter-subject variability of spatial resolution. However, the SUSI is sensitive to inter-subject variability of striatum size. Furthermore, it might be more sensitive to errors in the estimate of non-displaceable FP-CIT binding. This study compared SUSI and SBR in the multi-site/multi-camera (MULTI) setting of a prospective multi-center study and in a mono-site/mono-camera (MONO) setting representative of clinical routine. Methods: The MULTI setting included patients with Parkinson’s disease (PD, n = 438) and healthy controls (n = 207) from the Parkinson Progression Marker Initiative. The MONO setting included 122 patients from routine clinical patient care in whom FP-CIT SPECT had been performed with the same double-head SPECT system according to the same acquisition and reconstruction protocol. Patients were categorized as “neurodegenerative” (n = 84) or “non-neurodegenerative” (n = 38) based on follow-up data. FP-CIT SPECT images were stereotactically normalized to MNI space. SUSI and SBR were computed for caudate, putamen, and whole striatum using unilateral ROIs predefined in MNI space. The SUSI analysis was repeated in native patient space in the MONO setting. The area under the ROC curve (AUC) for identification of PD/“neurodegenerative” cases was used as the performance measure. Results: In both settings, the highest AUC was achieved by the putamen (minimum over both hemispheres), independent of the semi-quantitative method (SUSI or SBR). The putaminal SUSI provided slightly better performance with ROI analysis in MNI space compared to patient space (AUC = 0.969 vs. 0.961, p = 0.129). The SUSI (computed in MNI space) performed slightly better than the SBR in the MULTI setting (AUC = 0.993 vs. 0.991, p = 0.207) and slightly worse in the MONO setting (AUC = 0.969 vs. 0.976, p = 0.259). There was a trend toward a larger AUC difference between SUSI and SBR in the MULTI setting compared to the MONO setting (p = 0.073). Variability of voxel intensity in the reference region was larger in misclassified cases than in correctly classified cases for both SUSI and SBR (MULTI setting: p = 0.007 and p = 0.012, respectively). Conclusions: The SUSI is particularly useful in MULTI settings. SPECT images should be stereotactically normalized prior to SUSI analysis. The putaminal SUSI provides better diagnostic performance than the SUSI of the whole striatum. Errors in the estimate of non-displaceable count density in the reference region can cause misclassification by both SUSI and SBR, particularly in borderline cases. These cases might be identified by visually checking FP-CIT uptake in the reference region for particularly high variability.
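The key contrast above — SUSI is resolution-independent while SBR is not — follows from the fact that smoothing preserves total counts but lowers mean intensity in a tight ROI. A synthetic 1-D illustration, where a boxcar of "striatal" uptake sits on a flat background (the region boundaries, kernel widths, and the total-count SUSI formulation are simplifying assumptions for this sketch):

```python
import numpy as np

profile = np.ones(100)                   # non-displaceable background = 1
profile[40:50] += 4.0                    # "striatal" specific uptake

def smooth(x, width):
    # Normalized moving average as a stand-in for camera resolution (PSF).
    return np.convolve(x, np.ones(width) / width, mode="same")

def sbr(img, roi, ref):
    # Specific binding ratio: mean specific uptake relative to reference.
    c_ref = img[ref].mean()
    return (img[roi].mean() - c_ref) / c_ref

def susi(img, voi, ref):
    # Specific uptake size index (total-count style): TOTAL specific counts
    # in a generous VOI, normalized by the reference count density.
    c_ref = img[ref].mean()
    return (img[voi].sum() - len(img[voi]) * c_ref) / c_ref

roi = slice(40, 50)                      # tight striatal ROI (for SBR)
voi = slice(30, 60)                      # generous VOI capturing spill-out
ref = slice(5, 20)                       # reference (background) region

lo_res, hi_res = smooth(profile, 9), smooth(profile, 3)
# SBR drops at lower resolution; SUSI stays (nearly) constant, because the
# generous VOI recovers the counts that blurring spills out of the ROI.
```

This also shows why SUSI depends on an accurate reference estimate: any error in `c_ref` is multiplied by the full VOI size, which is consistent with the misclassification finding in the Results.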

    Image-based window detection: an overview

    Automated segmentation of building façades and detection of their elements is of high relevance in various fields of research as it, e.g., reduces the effort of 3D reconstruction of existing buildings and even entire cities, or may be used for navigation and localization tasks. In recent years, several approaches have been proposed concerning this issue. These can be classified mainly by their input data, which are either images or 3D point clouds. This paper provides a survey of image-based approaches. In particular, it focuses on window detection and therefore groups related papers into the three major detection strategies. We juxtapose grammar-based methods, pattern recognition, and machine learning, and contrast them with respect to their generality of application. As we found, machine learning approaches seem the most promising for window detection on generic façades, and we will thus pursue these in future work.

    Fat fraction mapping using bSSFP Signal Profile Asymmetries for Robust multi-Compartment Quantification (SPARCQ)

    Purpose: To develop a novel quantitative method for the detection of different tissue compartments based on bSSFP signal profile asymmetries (SPARCQ) and to provide a validation and proof-of-concept for voxel-wise water-fat separation and fat fraction mapping. Methods: The SPARCQ framework uses phase-cycled bSSFP acquisitions to obtain bSSFP signal profiles. For each voxel, the profile is decomposed into a weighted sum of simulated profiles with specific off-resonance and relaxation time ratios. From the obtained set of weights, voxel-wise estimations of the fractions of the different components and their equilibrium magnetization are extracted. For the entire image volume, component-specific quantitative maps as well as banding-artifact-free images are generated. A SPARCQ proof-of-concept was provided for water-fat separation and fat fraction mapping. Noise robustness was assessed using simulations. A dedicated water-fat phantom was used to validate fat fractions estimated with SPARCQ against gold-standard 1H MRS. Quantitative maps were obtained in the knees of six healthy volunteers, and SPARCQ repeatability was evaluated in scan-rescan experiments. Results: Simulations showed that fat fraction estimations are accurate and robust for signal-to-noise ratios above 20. Phantom experiments showed good agreement between SPARCQ and gold-standard (GS) fat fractions (fF(SPARCQ) = 1.02*fF(GS) + 0.00235). In volunteers, quantitative maps and banding-artifact-free water-fat-separated images obtained with SPARCQ demonstrated the expected contrast between fatty and non-fatty tissues. The coefficient of repeatability of the SPARCQ fat fraction was 0.0512. Conclusion: The SPARCQ framework was proposed as a novel quantitative mapping technique for detecting different tissue compartments, and its potential was demonstrated for quantitative water-fat separation. Comment: 20 pages, 7 figures, submitted to Magnetic Resonance in Medicine.
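The decomposition step described in the Methods — expressing each voxel's phase-cycled bSSFP profile as a weighted sum of simulated component profiles — can be sketched as a small dictionary fit. The elliptic profile model, off-resonance values, and two-component (water/fat) dictionary below are illustrative assumptions, and an unconstrained least-squares solve stands in for the constrained fit a real implementation would use:

```python
import numpy as np

# Phase-cycle increments (e.g., 8 equally spaced RF phase cycles).
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def bssfp_profile(off_resonance, ellipticity=0.8):
    # Toy bSSFP magnitude profile: an elliptic function of the phase-cycle
    # angle, shifted by the component's off-resonance phase.
    return 1.0 / (1.0 - ellipticity * np.cos(theta + off_resonance))

# Dictionary of simulated component profiles (water on-resonance, fat
# shifted off-resonance -- values chosen for illustration only).
D = np.column_stack([bssfp_profile(0.0), bssfp_profile(1.0)])

# Simulated voxel: 70% water, 30% fat.
signal = D @ np.array([0.7, 0.3])

# Fit the weights; component fractions follow by normalizing them.
w, *_ = np.linalg.lstsq(D, signal, rcond=None)
fat_fraction = w[1] / w.sum()
```

In the full method the dictionary would span many off-resonance and relaxation-time-ratio combinations per voxel, but the per-voxel fit-and-normalize structure is the same.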