    Improving full-cardiac cycle strain estimation from tagged CMR by accurate modeling of 3D image appearance characteristics

    To improve tagged cardiac magnetic resonance (CMR) image analysis, we propose a 3D (2D space + 1D time) energy minimization framework based on learning first- and second-order visual appearance models from voxel intensities. The first-order model approximates the marginal empirical distribution of intensities with two linear combinations of discrete Gaussians (LCDG). The second-order model treats an image as a sample from a translation–rotation invariant 3D Markov–Gibbs random field (MGRF) with multiple pairwise spatiotemporal interactions within and between adjacent temporal frames. The framework's ability to accurately recover strain slopes from noise-corrupted data was experimentally evaluated and validated on 3D geometric phantoms and independently on in vivo data. Under multiple noise and motion conditions, the proposed method outperformed comparative image filtering in restoring strain curves and reliably improved HARP strain tracking throughout the entire cardiac cycle. According to these results, our framework can augment popular spectral-domain techniques, such as HARP, by optimizing the spectral-domain characteristics and thereby providing more reliable estimates of strain parameters.
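
    The sketch below illustrates the idea of the first-order appearance model: approximating the marginal intensity distribution of a volume over its discrete grey-level grid with a mixture of Gaussians. It is a simplified stand-in, not the authors' LCDG estimation (scikit-learn's GaussianMixture allows only positive component weights and does not perform the paper's analytical EM refinement); the 8-bit grey-level range, component count, and volume shape are assumptions made for illustration.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_first_order_model(volume, n_components=4, n_levels=256):
        """Simplified first-order appearance model for a grey-level volume.

        Fits an ordinary Gaussian mixture to voxel intensities and evaluates
        it on the discrete grey-level grid, giving an approximate marginal
        density p(intensity) and per-level class posteriors p(class | intensity).
        """
        intensities = volume.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(intensities)
        grid = np.arange(n_levels).reshape(-1, 1)
        densities = np.exp(gmm.score_samples(grid))   # approximate p(intensity)
        posteriors = gmm.predict_proba(grid)          # p(class | intensity) per grey level
        return densities, posteriors

    if __name__ == "__main__":
        # Synthetic stand-in for a (2D space + time) tagged-CMR stack.
        fake_volume = np.random.randint(0, 256, size=(64, 64, 20))
        dens, post = fit_first_order_model(fake_volume)
        print(dens.shape, post.shape)  # (256,), (256, 4)
    ```

    In the paper's framework this per-intensity model is combined with the second-order MGRF energy; here it is shown in isolation only to make the first-order term concrete.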

    Infant Brain Extraction in T1-Weighted MR Images Using BET and Refinement Using LCDG and MGRF Models.

    In this paper, we propose a novel framework for the automated extraction of the brain from T1-weighted MR images. The proposed approach is based primarily on the integration of a stochastic model [a two-level Markov–Gibbs random field (MGRF)] that learns the visual appearance of the brain texture, and a geometric model (the brain isosurfaces) that preserves the brain geometry during the extraction process. The proposed framework consists of three main steps: 1) Following bias correction, a new three-dimensional (3-D) MGRF with a 26-pairwise interaction model is applied to enhance the homogeneity of the MR images and preserve the 3-D edges between different brain tissues. 2) The non-brain tissue in the MR images is initially removed using the brain extraction tool (BET), and the brain is then parceled into nested isosurfaces using a fast marching level set method. 3) Finally, a classification step is applied to accurately remove the remaining parts of the skull without distorting the brain geometry. Each voxel on the isosurfaces is classified based on first- and second-order visual appearance features. The first-order visual appearance is estimated using a linear combination of discrete Gaussians (LCDG) to model the intensity distribution of the brain signals. The second-order visual appearance is constructed using an MGRF model with analytically estimated parameters. The fusion of the LCDG and MGRF, along with their analytical estimation, allows the approach to be fast and accurate enough for clinical applications. The proposed approach was tested on in vivo data using 300 infant 3-D MR brain scans, which were qualitatively validated by an MR expert. In addition, it was quantitatively validated on 30 datasets using three metrics: the Dice coefficient, the 95% modified Hausdorff distance, and the absolute brain volume difference. Results showed that the proposed approach outperformed four widely used brain extraction tools: BET, BET2, the Brain Surface Extractor, and the Infant Brain Extraction and Analysis Toolbox. Experiments also demonstrated that the proposed framework generalizes to adult brain extraction.
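
    To make the quantitative validation concrete, the following Python sketch computes the three reported metrics for binary brain masks (1 = brain, 0 = background). It is an illustrative approximation rather than the paper's exact evaluation code: distances are taken over whole masks instead of extracted surfaces, and isotropic voxel spacing is assumed.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def dice_coefficient(pred, ref):
        """Overlap between predicted and reference masks: 2|A∩B| / (|A| + |B|)."""
        inter = np.logical_and(pred, ref).sum()
        return 2.0 * inter / (pred.sum() + ref.sum())

    def hausdorff_95(pred, ref):
        """95th-percentile symmetric Hausdorff distance, in voxel units.

        Uses Euclidean distance maps over the full masks (a simplification of
        the surface-based 95% modified Hausdorff distance in the paper).
        """
        d_to_ref = distance_transform_edt(~ref)[pred.astype(bool)]
        d_to_pred = distance_transform_edt(~pred)[ref.astype(bool)]
        return max(np.percentile(d_to_ref, 95), np.percentile(d_to_pred, 95))

    def abs_volume_difference(pred, ref):
        """Absolute difference in segmented volume, in voxels."""
        return abs(int(pred.sum()) - int(ref.sum()))

    if __name__ == "__main__":
        # Toy masks standing in for an extracted brain and its reference.
        ref = np.zeros((64, 64, 64), dtype=bool);  ref[16:48, 16:48, 16:48] = True
        pred = np.zeros_like(ref);                 pred[18:48, 16:48, 16:48] = True
        print(dice_coefficient(pred, ref),
              hausdorff_95(pred, ref),
              abs_volume_difference(pred, ref))
    ```

    Multiplying the voxel counts and distances by the scan's voxel dimensions would convert these toy numbers into millimetres and cubic millimetres.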