
    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for understanding brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods developed all use Magnetic Resonance Imaging (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed; this causes voxels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method developed here is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for reconstructing the neonatal cortex. The performance of the method is investigated through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment.
Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
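The core of the segmentation step above is an EM fit of a mixture of tissue-intensity distributions. The following is a minimal two-class EM sketch of that idea in Python; the dissertation's actual method additionally corrects mislabelled partial volume voxels and uses spatial information, both omitted here.

```python
import numpy as np

def em_two_class(intensities, n_iter=50):
    """Fit a two-Gaussian mixture to voxel intensities with EM.

    A toy sketch of the expectation-maximization machinery behind
    tissue classification; partial-volume correction and spatial
    priors from the dissertation's method are not included.
    """
    x = np.asarray(intensities, dtype=float)
    # Initialise the two class means from intensity quantiles.
    mu = np.quantile(x, [0.25, 0.75])
    sigma = np.full(2, x.std())
    w = np.full(2, 0.5)  # mixture weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class per voxel.
        lik = np.stack([
            w[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        resp = lik / lik.sum(axis=0)
        # M-step: re-estimate mixture parameters from responsibilities.
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        w = nk / x.size
    return mu, sigma, resp.argmax(axis=0)
```

Hard labels come from the maximum posterior responsibility; a partial-volume-aware variant would instead model mixed-tissue classes explicitly.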

    Visual Exploration And Information Analytics Of High-Dimensional Medical Images

    Data visualization has transformed how we analyze increasingly large and complex data sets. Advanced visual tools logically represent data in a way that communicates the most important information within it and culminates the analysis in an insightful conclusion. Automated analysis disciplines such as data mining, machine learning, and statistics have traditionally dominated data analysis, complemented by near-ubiquitous adoption of specialized hardware and software environments that handle the storage, retrieval, and pre- and post-processing of digital data. The addition of interactive visualization tools allows an active human participant in the model creation process. The advantage is a data-driven approach in which the constraints and assumptions of the model can be explored and chosen based on human insight and confirmed on demand by the analytic system. This translates to a better understanding of the data and more effective knowledge discovery. This trend has become popular across many domains, including machine learning, simulation, computer vision, genetics, stock markets, data mining, and geography. In this dissertation, we highlight the role of visualization within the context of medical image analysis in the field of neuroimaging. The analysis of brain images has uncovered remarkable traits of the brain's underlying dynamics. Multiple image modalities capture qualitatively different internal brain mechanisms and abstract them within the information space of that modality. Computational studies based on these modalities help correlate high-level brain function measurements with abnormal human behavior. These functional maps are easily projected into physical space through accurate 3-D brain reconstructions and visualized in excellent detail from different anatomical vantage points.
Statistical models built for comparative analysis across subject groups test for significant variance within the features and localize abnormal behaviors, contextualizing the high-level brain activity. Currently, the task of identifying the features is based on empirical evidence, and preparing data for testing is time-consuming. Correlations among features are usually ignored for lack of insight. With a multitude of features available and new modalities emerging, identifying the salient features and their interdependencies becomes harder to perceive. This restricts the analysis to certain readily discernible features, limiting human judgment about the most important processes governing a symptom and hindering prediction. These shortcomings can be addressed by an analytical system that leverages data-driven techniques to guide the user toward discovering relevant hypotheses. The research contributions of this dissertation span multiple fields of study, including geometry processing, computer vision, and 3-D visualization. The principal achievement of this research, however, is the design and development of an interactive system for multimodality integration of medical images. The research proceeds in several stages, briefly described as follows. First, we develop a rigorous geometry computation framework for brain surface matching. The brain is a highly convoluted structure of closed topology. Surface parameterization explicitly captures the non-Euclidean geometry of the cortical surface and helps derive a more accurate registration of brain surfaces. We describe a technique based on conformal parameterization that creates a bijective mapping to a canonical domain, where surface operations can be performed with improved efficiency and feasibility.
Subdividing the brain into a finite set of anatomical elements provides the structural basis for a categorical division of anatomical viewpoints and a spatial context for statistical analysis. We present statistically significant results of our analysis of functional and morphological features for a variety of brain disorders. Second, we design and develop an intelligent, interactive system for visual analysis of brain disorders that utilizes the complete feature space across all modalities. Each subdivided anatomical unit is characterized by a vector of the features that overlap within that element. The analytical framework provides the interactivity needed to explore salient features and discover relevant hypotheses. It provides visualization tools for confirming model results and an easy-to-use interface for manipulating feature selection and filtering parameters. It provides coordinated display views for visualizing multiple features across multiple subject groups, visual representations that highlight interdependencies and correlations between features, and an efficient data-management solution for maintaining provenance and issuing formal data queries to the back end.
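The per-unit feature vectors and the interdependencies the system highlights can be illustrated with a small numerical sketch; the feature names and values below are purely hypothetical, and a Pearson correlation matrix stands in for whatever dependency measure the actual system uses.

```python
import numpy as np

# Hypothetical feature matrix for one anatomical unit: rows are
# subjects, columns are illustrative features (cortical thickness,
# regional volume, mean functional activation).
rng = np.random.default_rng(7)
n_subjects = 40
thickness = rng.normal(2.5, 0.3, n_subjects)
volume = 1000 * thickness + rng.normal(0, 50, n_subjects)  # correlated pair
activation = rng.normal(0.0, 1.0, n_subjects)              # independent
features = np.column_stack([thickness, volume, activation])

# Pairwise Pearson correlations between features -- the kind of
# interdependency a coordinated display view would surface visually.
corr = np.corrcoef(features, rowvar=False)
```

In the full system such a matrix would be computed per anatomical element and rendered interactively rather than inspected numerically.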

    Image registration and visualization of in situ gene expression images.

    In the age of high-throughput molecular biology techniques, scientists have adopted in situ hybridization to map spatial patterns of gene expression. To compare expression patterns within a common tissue structure, these images must be registered, i.e. brought into a common coordinate system for alignment to a reference or atlas image. We use three different image registration methodologies (manual, correlation-based, and mutual-information-based) to determine the common coordinate system for the reference and in situ hybridization images. All three methodologies are incorporated into a MATLAB tool that visualizes the results in a user-friendly way and saves them for future work. Our results suggest that the user-defined landmark method is best when considering images from different modalities; automated landmark detection is best when the images are expected to have a high degree of consistency; and the mutual information methodology is useful when the images are from the same modality.
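The third similarity measure named above, mutual information, can be computed from the joint intensity histogram of two images. This is a generic sketch of that measure, not the tool's exact implementation:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equal-shape images, estimated
    from their joint intensity histogram. Higher values indicate a
    stronger statistical dependency between intensities, which is why
    MI is a popular registration criterion."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()           # joint probability
    px = pxy.sum(axis=1, keepdims=True) # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True) # marginal of img_b
    nz = pxy > 0                        # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration loop would transform one image over a search space and keep the transform that maximizes this value.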

    Improving Dose-Response Correlations for Locally Advanced NSCLC Patients Treated with IMRT or PSPT

    The standard of care for locally advanced non-small cell lung cancer (NSCLC) is concurrent chemo-radiotherapy. Despite recent advances in radiation delivery methods, the median survival time of NSCLC patients remains below 28 months. Higher tumor dose has been found to increase survival but also to raise the rate of radiation pneumonitis (RP), which impairs breathing. For fear of such toxicity, less aggressive treatment plans are often preferred clinically, leading to metastasis and recurrence. Accurate RP prediction is therefore crucial to ensure tumor coverage and improve treatment outcomes. Current models associate RP with increased dose but with limited accuracy, as they lack a spatial correlation between an accurate dose representation and a quantitative RP representation. These models represent lung tissue damage with the radiation dose distribution planned pre-treatment, which assumes a fixed patient geometry and inevitably misrepresents the delivered dose because of intra-fractional breathing motion and inter-fractional anatomical response. Additionally, current models employ whole-lung dose metrics as the contributing factor to RP treated as a qualitative, binary outcome; these global metrics discard voxel-level (3-D pixel) information and prevent spatial correlation with a quantitative RP representation. To address these limitations, we developed advanced deformable image registration (DIR) techniques that register corresponding anatomical voxels between images, enabling dose to be tracked and accumulated throughout treatment. DIR also enabled voxel-level dose-response correlation when CT image density change (IDC) was used to quantify RP.
We hypothesized that more accurate estimates of the biologically effective dose distributions actually delivered, achieved through (a) dose accumulation using deformable registration of weekly 4DCT images acquired over the course of radiotherapy and (b) the incorporation of variable relative biological effectiveness (RBE), would lead to statistically and clinically significant improvement in the correlation of RP with biologically effective dose distributions. Our work resulted in a robust intra-4DCT and inter-4DCT DIR workflow whose accuracy meets the AAPM TG-132 recommendations for clinical implementation of DIR. The automated DIR workflow allowed us to develop a fully automated 4DCT-based dose accumulation pipeline in RayStation (RaySearch Laboratories, Stockholm, Sweden). In a sample of 67 IMRT patients, the accumulated dose was statistically different from the planned dose across the entire cohort, with an average MLD increase of ~1 Gy, and clinically different for individual patients: 16% showed a change in normal tissue complication probability (NTCP) score under an established, clinically used model, which could qualify those patients for treatment-plan re-evaluation. Lastly, we related dose differences to accuracy differences by establishing and comparing voxel-level dose-IDC correlations, and concluded that the accumulated dose better described the localized damage and was thus a closer representation of the delivered dose. Using the same dose-response correlation strategy, we plotted the dose-IDC relationships for photon patients (N = 51) and proton patients (N = 67) and measured variable proton RBE values ranging from 3.07 down to 1.27 across proton voxels receiving 9–52 Gy. With the measured RBE values, we fitted an established variable proton RBE model with a pseudo-R2 of 0.98.
Therefore, our results led to statistically and clinically significant improvement in the correlation of RP with accumulated and biologically effective dose distributions, and demonstrated the potential of incorporating the effects of anatomical change and biological damage into RP prediction models.
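The abstract does not name the clinically used NTCP model; a common choice for RP as a function of mean lung dose is the Lyman-Kutcher-Burman (LKB) form, sketched here with illustrative (not the study's) parameter values.

```python
import math

def lkb_ntcp(mean_lung_dose, td50=30.8, m=0.37):
    """Lyman-Kutcher-Burman NTCP as a function of mean lung dose (Gy).

    td50 is the dose giving 50% complication probability and m sets
    the slope; both values here are illustrative placeholders. With
    volume parameter n = 1, the generalized EUD reduces to the mean
    organ dose, so MLD can be used directly.
    """
    t = (mean_lung_dose - td50) / (m * td50)
    # Standard normal CDF evaluated via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Under such a model, the ~1 Gy average MLD shift between planned and accumulated dose reported above translates directly into a change in predicted complication probability for each patient.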

    Proceedings of the Third International Workshop on Mathematical Foundations of Computational Anatomy - Geometrical and Statistical Methods for Modelling Biological Shape Variability

    Computational anatomy is an emerging discipline at the interface of geometry, statistics and image analysis which aims at modeling and analyzing the biological shape of tissues and organs. The goal is to estimate representative organ anatomies across diseases, populations, species or ages, to model organ development over time (growth or aging), to establish their variability, and to correlate this variability with other functional, genetic or structural information. The Mathematical Foundations of Computational Anatomy (MFCA) workshop aims at fostering interaction between the mathematical community around shapes and the MICCAI community in view of computational anatomy applications. It targets in particular researchers investigating the combination of statistical and geometrical aspects in modeling the variability of biological shapes. The workshop is a forum for the exchange of theoretical ideas and aims at being a source of inspiration for new methodological developments in computational anatomy. A special emphasis is put on theoretical developments; applications and results are welcomed as illustrations. Following the successful first edition of this workshop in 2006 and the second edition in New York in 2008, the third edition was held in Toronto on September 22, 2011. Contributions were solicited in Riemannian and group-theoretical methods, geometric measurements of the anatomy, advanced statistics on deformations and shapes, metrics for computational anatomy, statistics of surfaces, and modeling of growth and longitudinal shape changes. 22 submissions were each reviewed by three members of the program committee. To guarantee a high-level program, only 11 papers were selected for oral presentation in 4 sessions. Two of these sessions regroup classical themes of the workshop: statistics on manifolds and diffeomorphisms for surface or longitudinal registration.
One session gathers papers exploring new mathematical structures beyond Riemannian geometry, while the last oral session deals with the emerging theme of statistics on graphs and trees. Finally, a poster session of 5 papers addresses more application-oriented work on computational anatomy.

    3d Face Reconstruction And Emotion Analytics With Part-Based Morphable Models

    3D face reconstruction and facial expression analytics using 3D facial data are new and active research topics in computer graphics and computer vision. In this proposal, we first review the background for emotion analytics using 3D morphable face models, including geometry-feature-based methods, statistical-model-based methods and more advanced deep-learning-based methods. Then, we introduce a novel 3D face modeling and reconstruction solution that robustly and accurately acquires 3D face models from a couple of images captured by a single smartphone camera. Two selfie photos of a subject, taken from the front and side, guide our Non-Negative Matrix Factorization (NMF) induced part-based face model to iteratively reconstruct an initial 3D face of the subject. An iterative detail-updating method is then applied to this initial 3D face to reconstruct facial details by optimizing lighting parameters and local depths. Our iterative 3D face reconstruction method permits fully automatic registration of a part-based face representation to the acquired face data, and uses the detailed 2D/3D features to build a high-quality 3D face model. The NMF part-based face representation, learned from a 3D face database, facilitates effective global and adaptive local detail data fitting in alternation. Our system is flexible and allows users to perform the capture in any uncontrolled environment. We demonstrate the capability of our method by allowing users to capture and reconstruct their 3D faces by themselves. Based on the reconstructed 3D face model, we can analyze facial expression and the related emotion in 3D space. We present a novel approach to analyzing facial expressions from images and a quantitative information visualization scheme for exploring this type of visual data.
From the result reconstructed using the NMF part-based morphable 3D face model, basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. Based on these features, two Support Vector Regressions (SVRs) are trained to determine fuzzy Valence-Arousal (VA) values that quantify the emotions. The continuously changing emotion status can be intuitively analyzed by visualizing the VA values in VA space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes and lighting conditions, and is fully automatic in computing VA values from images or video sequences with various facial expressions. To evaluate our novel method, we test our system on publicly available databases and evaluate the emotion analysis and visualization results. We also apply our method to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method. To improve expression recognition accuracy, we present a facial expression recognition approach with a 3D Mesh Convolutional Neural Network (3DMCNN) and a visual-analytics-guided 3DMCNN design and optimization scheme. The geometric properties of the surface are computed using the 3D face model of a subject with facial expressions. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using the 3DMCNN. We design a geodesic-distance-based convolution method to overcome the difficulties arising from the irregular sampling of the face surface mesh. We further present an interactive visual analytics scheme for designing and modifying the networks, analyzing the learned features and clustering similar nodes in the 3DMCNN.
By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with a regular CNN-based method by interactively visualizing each layer of the networks, and analyze the effectiveness of our method by studying representative cases. Tested on public datasets, our method achieves higher recognition accuracy than traditional image-based CNNs and other 3D CNNs. The presented framework, including the 3DMCNN and interactive visual analytics of the CNN, can be extended to other applications.
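The NMF machinery underlying the part-based face model factorizes a non-negative data matrix V into non-negative factors W (parts) and H (coefficients). A minimal sketch using the classic Lee-Seung multiplicative updates (a generic illustration, not the authors' training code):

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Non-negative matrix factorization V ≈ W @ H via Lee-Seung
    multiplicative updates minimizing Frobenius error. Non-negativity
    of W and H is what yields the additive, part-based decomposition
    exploited by the face model."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep every entry non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

For a face model, each column of V would be a vectorized face shape and the columns of W would emerge as localized parts; here the factorization is shown on generic data.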

    Optimization of Decision Making in Personalized Radiation Therapy using Deformable Image Registration

    Cancer has become one of the dominant diseases worldwide, especially in western countries, and radiation therapy is one of the primary treatment options for about 50% of all diagnosed patients. Radiation therapy involves radiation delivery and planning based on radiobiological models derived primarily from clinical trials. Since 2015, improvements in information technologies and data storage have allowed new models to be created from the large volumes of treatment data already available and to correlate the actually delivered treatment with outcomes. The goals of this thesis are to 1) construct models of patient outcomes after radiation therapy using available treatment and patient parameters, and 2) provide a method to determine the real accumulated radiation dose, including the impact of registration uncertainties. In Chapter 2, a model was developed predicting overall survival for patients with hepatocellular carcinoma or liver metastasis receiving radiation therapy. These models show which patients benefit from curative radiation therapy based on liver function, and quantify the survival benefit of increased radiation dose. In Chapter 3, a method was developed to routinely evaluate deformable image registration (DIR) with computer-generated landmark pairs using the scale-invariant feature transform. The method created landmark sets for comparing lung 4DCT images and provided the same evaluation of DIR as manual landmark sets. In Chapter 4, the impact of DIR error on dose accumulation was investigated using landmarked 4DCT images as the ground truth. The study demonstrated the relationship between dose gradient, DIR error and dose accumulation error, and presented a method to place error bars on the dose accumulation process. In Chapter 5, a method was presented to determine quantitatively when to update a treatment plan during the course of a multi-fraction radiation treatment of head and neck cancer.
This method investigated the ability to use only the planned dose, together with deformable image registration, to predict dose changes caused by anatomical deformations. This thesis presents the fundamental elements of a decision support system that combines patient pre-treatment parameters with the actual delivered dose computed using DIR while accounting for registration uncertainties.
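The dose accumulation step that DIR enables amounts to warping each fraction's dose grid onto a reference anatomy and summing. A minimal sketch, assuming each deformation vector field (DVF) gives, for every reference voxel, the displacement to its corresponding position in that fraction's anatomy; clinical pipelines add energy/mass weighting and DIR quality assurance that are omitted here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(doses, dvfs):
    """Accumulate per-fraction dose grids on a reference anatomy.

    doses: list of 3-D dose arrays, all on the reference grid shape.
    dvfs:  list of displacement fields, each of shape (3, z, y, x),
           mapping reference voxels into the fraction's geometry.
    Each fraction dose is resampled (trilinear, order=1) at the
    displaced coordinates and summed voxel-by-voxel.
    """
    ref_shape = doses[0].shape
    grid = np.indices(ref_shape).astype(float)  # (3, z, y, x) voxel coords
    total = np.zeros(ref_shape)
    for dose, dvf in zip(doses, dvfs):
        coords = grid + dvf  # displaced sampling positions
        total += map_coordinates(dose, coords, order=1, mode="nearest")
    return total
```

With zero displacement fields this reduces to a plain voxel-wise sum, which is a convenient sanity check for the registration-driven version.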

    Study on the Method of Constructing a Statistical Shape Model and Its Application to the Segmentation of Internal Organs in Medical Images

    In image processing, segmentation is one of the critical tasks for diagnostic analysis and image interpretation. This thesis investigates three problems related to segmentation algorithms for medical images: the active shape model algorithm, 3-dimensional (3-D) statistical shape model building, and organ segmentation experiments. For active shape models, the constraints of the statistical model make the algorithm difficult to apply to varied biological shapes. To overcome the coupling of parameters in the original algorithm, this thesis introduces a genetic algorithm to relax the shape limitation. How to construct a robust and effective 3-D point model remains a key step in statistical shape modeling. Generally the shape information is obtained from manually segmented voxel data. In this thesis, a two-step procedure for generating these models was designed. After transforming the voxel data into triangular polygonal meshes, the first step aligns the orientations of the objects of interest according to their surface features. We propose to represent the surface orientations by their Gauss maps, which are in turn mapped to the complex plane using stereographic projection. An experiment was run to align a set of left lung models. The second step identifies the positions of landmarks on the polygonal surfaces, which is solved by a surface parameterization method. We propose two methods to establish landmark correspondence. A semi-automatic method attempts to "copy" the positions of pre-placed landmarks to all the surfaces, which have been mapped to the same parameterization domain. An automatic method instead places the landmarks equidistantly. Finally, goodness-of-fit experiments were performed to measure the difference from manually established correspondences, and we compared the effect of different surface mapping methods on the correspondence.
The third part of this thesis applies segmentation algorithms to clinical problems. We did not restrict ourselves to model-based methods but chose the suitable method, or a combination of methods, according to the target objects. In the experiment on segmenting lung regions that include pulmonary nodules, we propose a complementary region growing method to deal with the unpredictable variation of image densities in lesion regions. In the liver experiments, instead of applying region growing in a 3-D manner, we adopt a slice-by-slice approach to reduce overflows. The image intensity of cardiac regions is distinguishable from lung regions in CT images, but the boundary zone where the heart and liver are adjacent is generally blurry; we utilized a shape-model-guided method to refine the segmentation results there. 3-D segmentation techniques have been applied widely not only in medical imaging but also in machine vision and computer graphics. In the last part of this thesis, we review some related topics such as 3-D visualization for medical interpretation, human face recognition, and object-grasping robots. Kyushu Institute of Technology doctoral dissertation; degree number: Doctor of Engineering (Kou) No. 353; degree conferred: September 27, 2013. Chapter 1: Introduction | Chapter 2: Framework of Medical Image Segmentation | Chapter 3: 2-D Organic Regions Using Active Shape Model and Genetic Algorithm | Chapter 4: Alignment of 3-D Models | Chapter 5: Correspondence of 3-D Models | Chapter 6: Experiments of Organic Segmentation | Chapter 7: Visualization Technology and Its Applications | Chapter 8: Conclusions and Future Works. Kyushu Institute of Technology, 2013.
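Region growing, the base technique the lung experiments build on, starts from a seed voxel and absorbs connected neighbours with similar intensity. A textbook 2-D sketch of the idea (not the thesis's complementary variant):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Breadth-first seeded region growing on a 2-D image: accept
    4-connected neighbours whose intensity lies within tol of the
    seed value, and return the grown region as a boolean mask."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

The slice-by-slice liver strategy described above amounts to running such a 2-D grower per CT slice rather than its 3-D (6-connected) counterpart, which limits leakage ("overflow") into neighbouring structures.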