20 research outputs found

    A generative approach for image-based modeling of tumor growth

    22nd International Conference, IPMI 2011, Kloster Irsee, Germany, July 3-8, 2011. Proceedings.
    Extensive imaging is routinely used in brain tumor patients to monitor the state of the disease and to evaluate therapeutic options. A large number of multi-modal and multi-temporal image volumes is acquired in standard clinical cases, requiring new approaches for comprehensive integration of information from different image sources and different time points. In this work we propose a joint generative model of tumor growth and of image observation that naturally handles multi-modal and longitudinal data. We use the model for analyzing imaging data in patients with glioma. The tumor growth model is based on a reaction-diffusion framework. Model personalization relies only on a forward model for the growth process and on image likelihood. We take advantage of an adaptive sparse grid approximation for efficient inference via Markov Chain Monte Carlo sampling. The approach can be used for integrating information from different multi-modal imaging protocols and can easily be adapted to other tumor growth models.
    Funding: German Academy of Sciences Leopoldina (Fellowship Programme LPDS 2009-10); Academy of Finland (133611); National Institutes of Health (U.S.) (NIBIB NAMIC U54-EB005149); National Institutes of Health (U.S.) (NCRR NAC P41-RR13218); National Institutes of Health (U.S.) (NINDS R01-NS051826); National Institutes of Health (U.S.) (NIH R01-NS052585); National Institutes of Health (U.S.) (NIH R01-EB006758); National Institutes of Health (U.S.) (NIH R01-EB009051); National Institutes of Health (U.S.) (NIH P41-RR014075); National Science Foundation (U.S.) (CAREER Award 0642971).
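    For orientation, a minimal sketch of the reaction-diffusion growth framework referred to in the abstract above, assuming the commonly used Fisher-Kolmogorov form (the paper's exact parameterization may differ): the normalized tumor cell density c(x, t) evolves as
        \frac{\partial c}{\partial t} = \nabla \cdot \big( D(\mathbf{x})\, \nabla c \big) + \rho\, c\, (1 - c), \qquad D(\mathbf{x})\, \nabla c \cdot \mathbf{n} = 0 \ \text{on the brain boundary},
    where D(x) is a tissue-dependent diffusion tensor and \rho the proliferation rate; model personalization then amounts to inferring these parameters (and a seed location) from the image likelihood, e.g. via Markov Chain Monte Carlo sampling as described above.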

    Validation of a semi-automatic co-registration of MRI scans in patients with brain tumors during treatment follow-up.

    There is an expanding research interest in high-grade gliomas because of their significant population burden and poor survival despite extensive standard multimodal treatment. One of the obstacles is the lack of individualized monitoring of tumor characteristics and treatment response before, during and after treatment. We have developed a two-stage semi-automatic method to co-register MRI scans at different time points before and after surgical and adjuvant treatment of high-grade gliomas. This two-stage co-registration includes a linear co-registration of the semi-automatically derived mask of the preoperative contrast-enhancing area or postoperative resection cavity, the brain contour and the ventricles between different time points. The resulting transformation matrix was then applied in a non-linear manner to co-register conventional contrast-enhanced T1-weighted images. Targeted registration errors were calculated and compared with those of linear and non-linear co-registered images. Targeted registration errors were smaller for the semi-automatic non-linear co-registration than for both the non-linear and the linear co-registered images. This was further visualized using a three-dimensional structural similarity method. The semi-automatic non-linear co-registration allowed for optimal correction of the variable brain shift at different time points, as evaluated by the minimal targeted registration error. This proposed method allows for the accurate evaluation of treatment response, essential for the growing research area of brain tumor imaging and treatment response evaluation in large sets of patients. Copyright © 2016 John Wiley & Sons, Ltd.
    Funding: This research was funded by a National Institute of Health Clinician Scientist Fellowship [SJP], a Remmert Adriaan Laan Fund [AH], a René Vogels Fund [AH] and a grant from the Chang Gung Medical Foundation and Chang Gung Memorial Hospital, Keelung [JLY]. None of the authors have financial or other conflicts of interest related to the work presented in this paper. This paper presents independent research funded by the UK National Institute for Health Research (NIHR). The views expressed are those of the author(s) and not necessarily those of the UK NHS, the UK NIHR or the UK Department of Health.
    This is the author accepted manuscript. It is currently under an indefinite embargo pending publication by Wiley.
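    As a rough, hedged illustration of a two-stage (linear followed by non-linear) co-registration of longitudinal contrast-enhanced T1-weighted scans, the SimpleITK sketch below aligns a follow-up scan to a baseline scan. The file names, mask, B-spline grid size and optimizer settings are placeholders chosen for illustration; this is not the authors' pipeline.
        import SimpleITK as sitk

        # Placeholder file names; not from the paper.
        fixed  = sitk.ReadImage("preop_T1c.nii.gz",  sitk.sitkFloat32)
        moving = sitk.ReadImage("followup_T1c.nii.gz", sitk.sitkFloat32)
        mask   = sitk.ReadImage("preop_mask.nii.gz", sitk.sitkUInt8)  # e.g. enhancing area / cavity, brain contour, ventricles

        # Stage 1: linear (rigid) registration, with the metric restricted to the mask.
        rigid0 = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetMetricFixedMask(mask)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()
        reg.SetInitialTransform(rigid0, inPlace=False)
        reg.SetInterpolator(sitk.sitkLinear)
        linear_tx = reg.Execute(fixed, moving)
        moving_lin = sitk.Resample(moving, fixed, linear_tx, sitk.sitkLinear, 0.0)

        # Stage 2: non-linear (B-spline) refinement of the linearly aligned image.
        bspline0 = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])
        reg2 = sitk.ImageRegistrationMethod()
        reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg2.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
        reg2.SetInitialTransform(bspline0, inPlace=False)
        reg2.SetInterpolator(sitk.sitkLinear)
        nonlinear_tx = reg2.Execute(fixed, moving_lin)
        warped = sitk.Resample(moving_lin, fixed, nonlinear_tx, sitk.sitkLinear, 0.0)
    Resampling in two steps keeps the sketch simple at the cost of a second interpolation; composing the two transforms before a single resampling is the usual refinement.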

    Validation of a magnetic resonance imaging-based auto-contouring software tool for gross tumour delineation in head and neck cancer radiotherapy planning

    To perform statistical validation of a newly developed magnetic resonance imaging (MRI) auto-contouring software tool for gross tumour volume (GTV) delineation in head and neck tumours to assist in radiotherapy planning. Axial MRI baseline scans were obtained for 10 oropharyngeal and laryngeal cancer patients. The GTV was present on 102 axial slices and auto-contoured using the modified fuzzy c-means clustering integrated with level set method (FCLSM). Peer-reviewed (C-gold) manual contours were used as the reference standard to validate auto-contoured GTVs (C-auto) and mean manual contours (C-manual) from 2 expert clinicians (C1 and C2). Multiple geometrical metrics, including the Dice Similarity Coefficient (DSC), were used for quantitative validation. A DSC ≥0.7 was deemed acceptable. Inter- and intra-observer variability amongst the manual contours was also evaluated. The 2-dimensional (2D) contours were then reconstructed in 3D for GTV volume calculation, comparison and 3D visualisation. The mean DSC between C-gold and C-auto was 0.79. The mean DSC between C-gold and C-manual was 0.79 and that between C1 and C2 was 0.80. The average time for GTV auto-contouring per patient was 8 minutes (range 6-13 mins; mean 45 seconds per axial slice) compared to 15 minutes (range 6-23 mins; mean 88 seconds per axial slice) for C1. The average volume concordance between C-gold and C-auto volumes was 86.51%, compared to 74.16% between C-gold and C-manual. The average volume concordance between C1 and C2 volumes was 86.82%. This newly designed MRI-based auto-contouring software tool shows initial acceptable results in GTV delineation of oropharyngeal and laryngeal tumours using FCLSM. This auto-contouring software tool may help reduce inter- and intra-observer variability and can assist clinical oncologists with time-consuming, complex radiotherapy planning.
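    As a reminder of the main validation metric used above, here is a minimal NumPy sketch for computing the Dice Similarity Coefficient between two binary contour masks; the arrays are hypothetical and this is not the authors' implementation.
        import numpy as np

        def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
            """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of identical shape."""
            a = mask_a.astype(bool)
            b = mask_b.astype(bool)
            denom = a.sum() + b.sum()
            if denom == 0:
                return 1.0  # both masks empty: treat as perfect agreement
            return 2.0 * np.logical_and(a, b).sum() / denom

        # Example usage (placeholder slice masks); a DSC >= 0.7 was deemed acceptable above:
        # dsc = dice_coefficient(auto_contour_slice, gold_contour_slice)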

    An Optimised Linear Mechanical Model for Estimating Brain Shift Caused by Meningioma Tumours

    Segmentation of corpus callosum using diffusion tensor imaging: validation in patients with glioblastoma

    Background: This paper presents a three-dimensional (3D) method for segmenting the corpus callosum in normal subjects and in brain cancer patients with glioblastoma.
    Methods: Nineteen patients with histologically confirmed, treatment-naïve glioblastoma and eleven normal control subjects underwent diffusion tensor imaging (DTI) on a 3T scanner. Based on the information inherent in diffusion tensors, a similarity measure was proposed and used in the segmentation algorithm, with the diffusion pattern of the corpus callosum serving as prior information. Subsequently, the corpus callosum was automatically divided into Witelson subdivisions. We simulated the potential rotation of the corpus callosum under tumor pressure and studied the reproducibility of the proposed segmentation method in such cases.
    Results: Dice coefficients, estimated to compare automatic and manual segmentation results for the Witelson subdivisions, ranged from 94% to 98% for control subjects and from 81% to 95% for tumor patients, illustrating the closeness of automatic and manual segmentations. Studying the effect of corpus callosum rotation by different Euler angles showed that, although segmentation results were more sensitive to azimuth and elevation than to skew, rotations caused by brain tumors do not have major effects on the segmentation results.
    Conclusions: The proposed method and similarity measure segment the corpus callosum by propagating a hyper-surface inside the structure (resulting in high sensitivity) without penetrating into neighboring fiber bundles (resulting in high specificity).
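    The paper's own tensor similarity measure is not reproduced here; as a generic illustration only, one common way to compare two diffusion tensors is a normalized tensor (Frobenius) inner product, sketched below with NumPy under that assumption.
        import numpy as np

        def tensor_similarity(D1: np.ndarray, D2: np.ndarray, eps: float = 1e-12) -> float:
            """Normalized inner product <D1, D2> / (||D1|| ||D2||) of two 3x3 diffusion tensors.
            Values close to 1 indicate tensors of similar shape and orientation."""
            num = np.tensordot(D1, D2)                      # Frobenius inner product
            den = np.linalg.norm(D1) * np.linalg.norm(D2)   # Frobenius norms
            return float(num / (den + eps))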

    Deep ensemble learning of sparse regression models for brain disease diagnosis

    Recent studies on brain imaging analysis have highlighted the central role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of the various machine learning techniques, sparse regression models have proved effective at handling high-dimensional data with only a small number of training samples, a situation typical of medical problems. In the meantime, deep learning methods have achieved great success by outperforming state-of-the-art methods in various applications. In this paper, we propose a novel framework that combines these two conceptually different methods, sparse regression and deep learning, for Alzheimer’s disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each with a different value of the regularization control parameter. Thus, the sparse regression models potentially select different feature subsets from the original feature set and therefore have different powers to predict the response values, i.e., the clinical label and clinical scores in our work. By regarding the response values from the sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we call the ‘Deep Ensemble Sparse Regression Network.’ To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments on the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature.
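    A much-simplified sketch of the ensemble idea described above, assuming scikit-learn, synthetic data, and a logistic regression as a stand-in for the paper's convolutional network; feature dimensions and regularization values are illustrative only.
        import numpy as np
        from sklearn.linear_model import Lasso, LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins: X = subjects x imaging features, s = a clinical score, y = clinical label.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 93))
        w = np.zeros(93); w[:10] = 1.0
        s = X @ w + 0.1 * rng.normal(size=200)
        y = (s > np.median(s)).astype(int)
        X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, y, s, test_size=0.3, random_state=0)

        # 1) Multiple sparse regression models, one per regularization value; each selects a
        #    (potentially different) feature subset and yields its own response prediction.
        alphas = [0.01, 0.05, 0.1, 0.5, 1.0]
        models = [Lasso(alpha=a).fit(X_tr, s_tr) for a in alphas]

        # 2) Stack the models' predictions as a target-level representation ...
        R_tr = np.column_stack([m.predict(X_tr) for m in models])
        R_te = np.column_stack([m.predict(X_te) for m in models])

        # 3) ... and feed it to a second-level learner (a deep CNN in the paper; a logistic
        #    regression here purely to keep the sketch short).
        clf = LogisticRegression().fit(R_tr, y_tr)
        print("ensemble accuracy:", clf.score(R_te, y_te))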

    Generalized div-curl based regularization for physically constrained deformable image registration

    Variational image registration methods commonly employ a similarity metric and a regularization term that renders the minimization problem well-posed. However, many frequently used regularizations, such as smoothness or curvature, do not necessarily reflect the underlying physics of anatomical deformations. This, in turn, can make the accurate estimation of complex deformations particularly challenging. Here, we present a new, highly flexible regularization inspired by the physics of fluid dynamics, which allows independent penalties to be applied to the divergence and curl of the deformations and/or their nth-order derivatives. The complexity of the proposed generalized div-curl regularization renders the problem particularly challenging for conventional optimization techniques. To this end, we develop a transformation model and an optimization scheme that use the divergence and curl components of the deformation as control parameters for the registration. We demonstrate that the original unconstrained minimization problem reduces to a constrained problem, for which we propose the use of the augmented Lagrangian method. Doing so, the equations of motion simplify greatly and become manageable. Our experiments indicate that the proposed framework can be applied to a variety of registration problems and produces highly accurate deformations with the desired physical properties.
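    As a hedged illustration (not necessarily the exact functional of the paper), a div-curl regularizer of the kind described above penalizes the divergence and curl of the displacement field u over the image domain \Omega separately, e.g.
        R(u) = \frac{\alpha}{2} \int_{\Omega} \left\| \nabla^{(n)} (\nabla \cdot u) \right\|^2 \, d\mathbf{x} \;+\; \frac{\beta}{2} \int_{\Omega} \left\| \nabla^{(n)} (\nabla \times u) \right\|^2 \, d\mathbf{x},
    so that \alpha and \beta independently control local compressibility (divergence) and local rotation (curl), with n the order of the derivative applied to each component.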

    Robust anatomical landmark detection with application to MR brain image registration

    Comparison of human brain MR images is often challenged by large inter-subject structural variability. To determine correspondences between MR brain images, most existing methods typically perform a local neighborhood search based on certain morphological features. They are limited in two aspects: (1) pre-defined morphological features often have limited power in characterizing brain structures, leading to inaccurate correspondence detection, and (2) correspondence matching is often restricted to small local neighborhoods and fails to cope with images exhibiting large anatomical differences. To address these limitations, we propose a novel method to detect distinctive landmarks for effective correspondence matching. Specifically, we first annotate a group of landmarks in a large set of training MR brain images. Then, we use regression forests to simultaneously learn (1) the optimal set of features that best characterizes each landmark and (2) the non-linear mappings from the local patch appearances of image points to their 3D displacements towards each landmark. The learned regression forests are used as landmark detectors to predict the locations of these landmarks in new images. Because each detector is learned from features that best distinguish its landmark from other points, and because detection is performed over the entire image domain, our method addresses both limitations of conventional methods. The deformation field estimated from the alignment of these detected landmarks can then be used as an initialization for image registration. Experimental results show that our method provides good initialization even for images with large deformations, thus improving registration accuracy.
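    A minimal sketch of the regression-forest idea described above, using scikit-learn; the patch features, displacement targets and voting scheme are simplified placeholders rather than the authors' implementation.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Training: for voxels sampled around an annotated landmark, learn the mapping from
        # local patch appearance features to the 3D displacement towards that landmark.
        rng = np.random.default_rng(0)
        features = rng.normal(size=(5000, 64))        # placeholder patch descriptors (N x F)
        displacements = rng.normal(size=(5000, 3))    # placeholder offsets: landmark_pos - voxel_pos

        forest = RandomForestRegressor(n_estimators=50, max_depth=12, n_jobs=-1)
        forest.fit(features, displacements)

        # Detection in a new image: each candidate voxel votes for the landmark position by adding
        # its predicted displacement to its own coordinates; the vote centroid (or mode) gives the
        # detected landmark, which can then initialize a deformation field for registration.
        candidate_pos = rng.uniform(0, 180, size=(1000, 3))   # placeholder voxel coordinates (mm)
        candidate_feat = rng.normal(size=(1000, 64))
        votes = candidate_pos + forest.predict(candidate_feat)
        landmark_estimate = votes.mean(axis=0)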