
    Fast and robust hybrid framework for infant brain classification from structural MRI: a case study for early diagnosis of autism.

    The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step toward this goal is accurate segmentation of the different brain structures: white matter, gray matter, and cerebrospinal fluid, which is the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted based on the integration of a stochastic model, which learns the visual appearance of the brain texture, and a geometric model, which preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built from a subset of co-aligned training images and adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results were evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: the Dice coefficient, the 95th-percentile Hausdorff distance, and the absolute volume difference. The proposed method was ranked first in terms of performance and speed.
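
    The three reported evaluation metrics are standard for this benchmark. Below is a minimal sketch of how they can be computed for a pair of binary 3D masks, assuming unit (isotropic) voxel spacing; the function names and toy masks are illustrative and not part of the original work.

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def surface_distances(seg: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Distances from each surface voxel of `seg` to the surface of `gt`."""
    seg_surf = seg & ~binary_erosion(seg)
    gt_surf = gt & ~binary_erosion(gt)
    dist_to_gt = distance_transform_edt(~gt_surf)  # unit voxel spacing assumed
    return dist_to_gt[seg_surf]

def hausdorff95(seg: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance."""
    d = np.concatenate([surface_distances(seg, gt), surface_distances(gt, seg)])
    return float(np.percentile(d, 95))

def abs_volume_difference(seg: np.ndarray, gt: np.ndarray) -> float:
    """Absolute volume difference as a percentage of the ground-truth volume."""
    return 100.0 * abs(int(seg.sum()) - int(gt.sum())) / gt.sum()

# Toy usage on random masks (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
seg = rng.random((32, 32, 32)) > 0.5
gt = rng.random((32, 32, 32)) > 0.5
print(dice(seg, gt), hausdorff95(seg, gt), abs_volume_difference(seg, gt))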

    Modeling small objects under uncertainties: novel algorithms and applications.

    Active Shape Models (ASM), Active Appearance Models (AAM), and Active Tensor Models (ATM) are common approaches to modeling elastic (deformable) objects. These models require an ensemble of shapes and textures, annotated by human experts, in order to identify the model order and parameters. A candidate object may then be represented by a weighted sum of basis functions generated by an optimization process. These methods have been very effective for modeling deformable objects in biomedical imaging, biometrics, computer vision, and graphics. They have been tried mainly on objects with known features that are amenable to manual (expert) annotation; they have not been examined on objects with ambiguities so severe that experts cannot uniquely characterize them. This dissertation presents a unified approach for modeling, detecting, segmenting, and categorizing small objects under uncertainty, with a focus on lung nodules that may appear in low-dose CT (LDCT) scans of the human chest. The AAM, ASM, and ATM approaches are used for the first time on this application. A new formulation of object detection by template matching as an energy optimization is introduced. Nine similarity measures for matching have been quantitatively evaluated for detecting nodules less than 1 cm in diameter. Statistical methods that combine intensity, shape, and spatial interaction are examined for the segmentation of small objects. Extensions of the intensity model using the linear combination of Gaussians (LCG) approach are introduced in order to estimate the number of modes in the LCG equation. The classical maximum a posteriori (MAP) segmentation approach has been adapted to handle segmentation of small lung nodules that are randomly located in the lung tissue. A novel empirical approach has been devised to simultaneously detect and segment the lung nodules in LDCT scans. The level set method was also applied to lung nodule segmentation, and a new formulation of the energy function controlling the level set propagation has been introduced that takes into account the specific properties of the nodules. Finally, a novel approach for classifying the segmented nodules into categories has been introduced. Geometric object descriptors such as SIFT, ASIFT, SURF, and LBP have been used for feature extraction and matching of small lung nodules; the LBP has been found to be the most robust. Categorization implies classification of detected and segmented objects into classes or types. The object descriptors have been deployed in the detection step for false-positive reduction, and in the categorization stage to assign a class and type to the nodules. The AAM/ASM/ATM models have been used for the categorization stage. The front-end processes of lung nodule modeling, detection, segmentation, and classification/categorization are model-based and data-driven. This dissertation is the first attempt in the literature at creating an entirely model-based approach for lung nodule analysis.
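
    As one concrete illustration of the descriptor-matching step mentioned above, the following sketch computes LBP histograms for two small patches and compares them with a chi-square distance. It is a generic example under assumed parameters (patch size, LBP radius, matching measure), not the dissertation's implementation.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Uniform LBP codes pooled into a normalized histogram descriptor."""
    codes = local_binary_pattern(patch, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one bin for non-uniform codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-10) -> float:
    """Chi-square distance between descriptor histograms (smaller = more similar)."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

# Toy usage: compare a candidate patch against a reference nodule patch
# (random uint8 patches stand in for real CT data).
rng = np.random.default_rng(1)
reference = (rng.random((21, 21)) * 255).astype(np.uint8)
candidate = (rng.random((21, 21)) * 255).astype(np.uint8)
print(chi_square(lbp_histogram(reference), lbp_histogram(candidate)))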

    Stochastic Algorithm For Parameter Estimation For Dense Deformable Template Mixture Model

    Estimating probabilistic deformable template models is a new approach in the fields of computer vision and probabilistic atlases in computational anatomy. A first coherent statistical framework modelling the variability as a hidden random variable was given by Allassonnière, Amit and Trouvé in [1] for simple and mixture-of-deformable-template models. A consistent stochastic algorithm was introduced in [2] to address the convergence problem encountered in [1] for the estimation algorithm for the one-component model in the presence of noise. We propose here to continue in this direction, using a "SAEM-like" algorithm to approximate the MAP estimator in the general Bayesian setting of the mixture of deformable template models. We also prove the convergence of this algorithm toward a critical point of the penalised likelihood of the observations and illustrate this with handwritten digit images.
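
    To illustrate the SAEM principle referred to above (simulate the hidden variables, update a stochastic approximation of the sufficient statistics, then maximize), here is a toy sketch on a two-component 1D Gaussian mixture with fixed unit variances. The deformable-template setting replaces this toy model with hidden deformations and an MCMC simulation step; everything below is an illustrative assumption, not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a two-component mixture (unit variances are fixed for brevity).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])
K = 2

# Initial parameters and stochastic-approximation statistics.
mu, pi = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
s_count, s_sum = np.zeros(K), np.zeros(K)

for it in range(1, 201):
    # Simulation step: draw hidden labels from their posterior given current parameters.
    resp = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    resp /= resp.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=p) for p in resp])

    # Stochastic-approximation step: update sufficient statistics with a decreasing step size.
    gamma = 1.0 / it
    onehot = np.eye(K)[z]
    s_count += gamma * (onehot.sum(axis=0) - s_count)
    s_sum += gamma * (onehot.T @ x - s_sum)

    # Maximization step: re-estimate parameters from the approximated statistics.
    pi = s_count / s_count.sum()
    mu = s_sum / np.maximum(s_count, 1e-12)

print("estimated weights:", pi, "estimated means:", mu)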

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too extensive to be fully exploited by radiologists and physicians. Therefore, the design of a computer-aided diagnostic (CAD) system, which can be used as an assistive tool for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy treatment. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases allow their elasticity, ventilation, and texture features to be estimated, providing discriminatory descriptors that can be used for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed that requires three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features for the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using the novel 7th-order Markov-Gibbs random field (MGRF) model, which accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
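
    A minimal numpy sketch of the two functionality measures described above, ventilation from the Jacobian determinant of the deformation and tissue strain from the gradient of the displacement field, is given below. The synthetic field, unit voxel spacing, and small-strain formulation are illustrative assumptions rather than the dissertation's exact formulation.

import numpy as np

def functionality_maps(u: np.ndarray):
    """u: displacement field with shape (3, Z, Y, X) in voxel units."""
    # Spatial gradient of each displacement component: grad[i, j] = d u_i / d x_j.
    grad = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)], axis=0)

    # Deformation gradient F = I + grad(u), voxel-wise.
    eye = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = eye + grad

    # Ventilation surrogate: det(F) - 1 > 0 indicates local expansion (inhalation).
    F_vox = np.moveaxis(F, (0, 1), (-2, -1))        # (..., 3, 3) for np.linalg.det
    ventilation = np.linalg.det(F_vox) - 1.0

    # Small-strain tensor: E = 0.5 * (grad(u) + grad(u)^T), voxel-wise.
    strain = 0.5 * (grad + np.swapaxes(grad, 0, 1))
    return ventilation, strain

# Toy usage on a smooth synthetic displacement field.
z, y, x = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16),
                      np.linspace(0, 1, 16), indexing="ij")
u = 0.05 * np.stack([np.sin(np.pi * z), np.zeros_like(y), np.cos(np.pi * x)])
vent, strain = functionality_maps(u)
print(vent.mean(), strain.shape)   # strain has shape (3, 3, Z, Y, X)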

    Probabilistic and geometric shape based segmentation methods.

    Image segmentation is one of the most important problems in image processing, object recognition, computer vision, medical imaging, and related fields. In general, the objective of segmentation is to partition the image into meaningful areas using the existing (low-level) information in the image and prior (high-level) information, which can be obtained from a number of features of an object. As stated in [1,2], the human vision system aims to extract and use as much information as possible from the image, including but not limited to the intensity, the possible motion of the object (in sequential images), and spatial relations (interactions) as the existing information, and the shape of the object, learnt from experience, as the prior information. The main objective of this dissertation is to couple the prior information with the existing information, since a machine vision system cannot infer the prior information unless it is given. To label the image into meaningful areas, the chosen information is modeled to progressively fit each of the regions through an optimization process. In this study, the intensity and spatial interaction (as the existing information) and the shape (as the prior information) are modeled to obtain the optimum segmentation. The intensity information is modeled using the Gaussian distribution. The spatial interaction, which describes the relation between neighboring pixels/voxels, is modeled by assuming that each pixel's intensity depends on the intensities of the neighboring pixels. The shape model is obtained from occurrence histograms of the training shape pixels or voxels; its main objective is to capture the shape variation of the object of interest. Each pixel in the image is thus assigned three probabilities of belonging to the object and background classes, based on the intensity, spatial interaction, and shape models. These probabilistic values guide the energy (cost) functionals in the optimization process. This dissertation proposes segmentation frameworks that are: i) original, solving some of the existing problems; ii) robust under various segmentation challenges; and iii) fast enough to be used in real applications. The models are integrated into two kinds of methods to obtain the optimum segmentation: 1) variational (spatially continuous) and 2) statistical (spatially discrete) methods. The proposed segmentation frameworks start by obtaining the initial segmentation using the intensity/spatial interaction models. The shape model, which is obtained from the training shapes, is registered to the image domain. Finally, the optimal segmentation is obtained by optimizing the energy functionals. Experiments show that the use of the shape prior considerably improves the accuracy over alternative methods that use only the existing information in the image. The proposed methods are tested on synthetic and clinical images/shapes and are shown to be robust under various noise levels, occlusions, and missing object information. Vertebral bodies (VBs) in clinical computed tomography (CT) are segmented using the proposed methods to support bone mineral density measurements and fracture analysis. Experimental results show that the proposed solutions eliminate some of the existing problems in VB segmentation. One of the most important contributions of this study is to offer a segmentation framework that is suitable for clinical work.
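
    The following sketch shows how the three per-pixel probability maps described above (Gaussian intensity, spatial interaction from neighboring labels, and a shape prior) can be fused, here by a simple product rule on a toy 2D image. In the dissertation these probabilities drive variational or statistical energy minimization rather than this naive fusion; the means, variances, and shape map below are illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

def fuse_probabilities(image, init_labels, shape_prior,
                       mu=(0.3, 0.8), sigma=(0.1, 0.1)):
    """Return the fused object-probability map for a 2D image with values in [0, 1]."""
    # 1) Intensity model: Gaussian likelihood of each class (assumed means/variances).
    p_int_bg = norm.pdf(image, mu[0], sigma[0])
    p_int_obj = norm.pdf(image, mu[1], sigma[1])

    # 2) Spatial interaction: fraction of object-labeled pixels in a 3x3 neighborhood.
    p_spat_obj = uniform_filter(init_labels.astype(float), size=3)
    p_spat_bg = 1.0 - p_spat_obj

    # 3) Shape prior: object probability learned from co-aligned training shapes.
    p_shape_obj, p_shape_bg = shape_prior, 1.0 - shape_prior

    obj = p_int_obj * p_spat_obj * p_shape_obj
    bg = p_int_bg * p_spat_bg * p_shape_bg
    return obj / (obj + bg + 1e-12)

# Toy usage: a bright disc on a dark background with a matching shape prior.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
image = np.where(disc, 0.8, 0.3) + 0.05 * np.random.default_rng(2).standard_normal((64, 64))
labels = fuse_probabilities(image, init_labels=(image > 0.55), shape_prior=disc.astype(float)) > 0.5
print("segmented object pixels:", int(labels.sum()))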

    Factorized Topic Models

    In this paper we present a modification to a latent topic model that makes the model exploit supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes and variance that is private to each class, through the introduction of a new prior over the topic space. The approach allows for more efficient inference and provides an intuitive interpretation of the data in terms of an informative signal together with structured noise. The factorized representation is shown to enhance inference performance for image, text, and video classification. Comment: ICLR 201
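
    A toy generative sketch of the factorization idea, where some topics are shared across classes and others are private to a single class so that a document can only draw on the shared topics plus its own class's private block, is shown below. The topic and vocabulary sizes and the Dirichlet parameters are arbitrary illustrative choices, not the paper's model or inference procedure.

import numpy as np

rng = np.random.default_rng(0)
n_classes, n_shared, n_private, vocab = 2, 2, 2, 50
n_topics = n_shared + n_classes * n_private

# Word distribution per topic (each row sums to 1).
topics = rng.dirichlet(np.ones(vocab) * 0.1, size=n_topics)

def topic_mask(c: int) -> np.ndarray:
    """Indicator of the topics available to class c: shared block + its private block."""
    mask = np.zeros(n_topics)
    mask[:n_shared] = 1.0
    start = n_shared + c * n_private
    mask[start:start + n_private] = 1.0
    return mask

def sample_document(c: int, length: int = 100) -> np.ndarray:
    """Sample a bag-of-words vector for one document of class c."""
    allowed = np.flatnonzero(topic_mask(c))
    theta = np.zeros(n_topics)
    theta[allowed] = rng.dirichlet(0.5 * np.ones(len(allowed)))  # structured prior
    word_dist = theta @ topics
    word_dist /= word_dist.sum()
    return rng.multinomial(length, word_dist)

docs = np.array([sample_document(c) for c in (0, 0, 1, 1)])
print(docs.shape)   # (4, vocab): shared topics explain cross-class structure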

    Segmentation of Infant Brain Using Nonnegative Matrix Factorization

    This study develops an atlas-based automated framework for segmenting infants' brains from magnetic resonance imaging (MRI). For the accurate segmentation of different structures of an infant's brain at the isointense age (6-12 months), our framework integrates features of diffusion tensor imaging (DTI) (e.g., the fractional anisotropy (FA)). A brain diffusion tensor (DT) image and its region map are considered samples of a Markov-Gibbs random field (MGRF) that jointly models visual appearance, shape, and spatial homogeneity of a goal structure. The visual appearance is modeled with an empirical distribution of the probability of the DTI features, fused by their nonnegative matrix factorization (NMF) and allocation to data clusters. Projecting an initial high-dimensional feature space onto a low-dimensional space of the significant fused features with the NMF allows for better separation of the goal structure and its background. The cluster centers in the latter space are determined at the training stage by K-means clustering. In order to adapt to large infant brain inhomogeneities and segment the brain images more accurately, appearance descriptors of both the first and second order are taken into account in the fused NMF feature space. Additionally, a second-order MGRF model is used to describe the appearance based on the voxel intensities and their pairwise spatial dependencies. An adaptive, spatially variant shape prior is constructed from a training set of co-aligned images, forming an atlas database. Moreover, the spatial homogeneity of the shape is described with a spatially uniform second-order 3D MGRF of region labels. In vivo experiments on nine infant datasets showed promising results in terms of accuracy, which was computed using three metrics: the 95th-percentile modified Hausdorff distance (MHD), the Dice similarity coefficient (DSC), and the absolute volume difference (AVD). Both the quantitative and visual assessments confirm that integrating the proposed NMF-fused DTI feature and intensity MGRF models of visual appearance, the adaptive shape prior, and the shape homogeneity MGRF model is promising for segmenting infant brain DTI.
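
    A minimal sketch of the NMF feature-fusion and K-means clustering steps described above, using scikit-learn on synthetic nonnegative voxel features, follows. The full framework additionally relies on the MGRF appearance and shape models and the adaptive shape prior; the feature counts and component numbers here are illustrative assumptions.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic stand-in for voxel-wise DTI features: rows = voxels, columns = features.
n_voxels, n_features = 5000, 6
X = np.abs(rng.normal(size=(n_voxels, n_features)))   # NMF requires nonnegative input

# Project the high-dimensional features onto a low-dimensional fused space.
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
fused = nmf.fit_transform(X)          # (n_voxels, 3) fused feature space

# Cluster the fused features; in the framework the cluster centers are learned
# at the training stage and correspond to the tissue classes of interest.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(fused)
print(np.bincount(labels))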