
    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging, which complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. The paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated in a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
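    The cortical segmentation step above builds on an expectation-maximization fit of a Gaussian mixture to voxel intensities. As a rough illustration of the core E-step/M-step updates only (the explicit partial-volume correction that is the dissertation's actual contribution is omitted, and the function name and three-class default are assumptions of this sketch, not the authors' code), a minimal NumPy version might look as follows:

        import numpy as np

        def em_tissue_segmentation(intensities, n_classes=3, n_iter=50):
            """Minimal EM fit of a 1-D Gaussian mixture to voxel intensities.

            Returns per-voxel posterior probabilities (soft labels) for each
            tissue class; no partial-volume correction is applied here.
            """
            x = intensities.ravel().astype(float)

            # Crude initialisation: spread class means over the intensity range.
            mu = np.quantile(x, np.linspace(0.2, 0.8, n_classes))
            sigma = np.full(n_classes, x.std() / n_classes + 1e-6)
            w = np.full(n_classes, 1.0 / n_classes)

            for _ in range(n_iter):
                # E-step: responsibility of each class for each voxel.
                dens = np.stack([
                    w[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                    / (sigma[k] * np.sqrt(2.0 * np.pi))
                    for k in range(n_classes)
                ], axis=1)
                resp = dens / dens.sum(axis=1, keepdims=True)

                # M-step: update mixture weights, means and standard deviations.
                nk = resp.sum(axis=0)
                w = nk / x.size
                mu = (resp * x[:, None]).sum(axis=0) / nk
                sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6

            return resp.reshape(intensities.shape + (n_classes,))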

    UWG -TRNSYS Simulation Coupling for Urban Building Energy Modelling

    This paper presents a new methodology for carrying out building performance simulation at the district scale by integrating the building thermal model TRNSYS with the climate model ‘Urban Weather Generator’ (UWG). The integrated methodology is designed to include the microclimatic modifications induced by urban environments in the calculation of buildings’ cooling loads. The impact of shadows, air temperature increase and the urban radiant environment on building cooling performance is highlighted for a hot arid climate (Antofagasta, Chile). Results indicate that the impact of the urban context on the energy performance of buildings at the neighbourhood scale varies significantly with building typology and urban tissue density.
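    One simple way to realise such an integration is a chained workflow in which UWG first morphs a rural weather file into an urban microclimate file that then drives the TRNSYS cooling-load calculation. The sketch below only illustrates that idea; the two helper functions are placeholders standing in for the real UWG and TRNSYS interfaces, and the file names and parameters are assumptions, not the paper's actual coupling code.

        # Hypothetical chained coupling: UWG morphs the rural weather, TRNSYS
        # then simulates the building with the resulting urban weather file.

        def run_uwg(rural_epw: str, urban_params: dict) -> str:
            """Placeholder: morph a rural EPW weather file with UWG and return
            the path of the generated urban weather file."""
            raise NotImplementedError

        def run_trnsys(building_deck: str, weather_file: str) -> float:
            """Placeholder: run the TRNSYS building model with the given
            weather file and return the annual cooling load (kWh)."""
            raise NotImplementedError

        def coupled_cooling_load(rural_epw, building_deck, urban_params):
            urban_epw = run_uwg(rural_epw, urban_params)   # microclimate step
            return run_trnsys(building_deck, urban_epw)    # building energy step

        # Example comparison of rural vs. urban-context loads for one typology
        # (file names and parameters are illustrative):
        # rural_load = run_trnsys("office.dck", "antofagasta_rural.epw")
        # urban_load = coupled_cooling_load("antofagasta_rural.epw", "office.dck",
        #                                   {"site_coverage": 0.45})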

    Automatic Segmentation of the Left Ventricle in Cardiac CT Angiography Using Convolutional Neural Network

    Accurate delineation of the left ventricle (LV) is an important step in the evaluation of cardiac function. In this paper, we present an automatic method for segmentation of the LV in cardiac CT angiography (CCTA) scans. Segmentation is performed in two stages. First, a bounding box around the LV is detected using a combination of three convolutional neural networks (CNNs). Subsequently, to obtain the segmentation of the LV, voxel classification is performed within the defined bounding box using a CNN. The study included CCTA scans of sixty patients: fifty scans were used to train the CNNs for LV localization, five were used to train LV segmentation, and the remaining five were used to test the method. Automatic segmentation resulted in an average Dice coefficient of 0.85 and a mean absolute surface distance of 1.1 mm. The results demonstrate that automatic segmentation of the LV in CCTA scans using voxel classification with convolutional neural networks is feasible. Comment: This work has been published as: Zreik, M., Leiner, T., de Vos, B. D., van Hamersvelt, R. W., Viergever, M. A., Išgum, I. (2016, April). Automatic segmentation of the left ventricle in cardiac CT angiography using convolutional neural networks. In Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on (pp. 40-43). IEEE.
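    The Dice coefficient reported above is a voxel-overlap measure between the automatic and reference LV masks, 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (the function name and toy masks are illustrative, not the authors' evaluation code):

        import numpy as np

        def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
            """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
            pred = pred.astype(bool)
            ref = ref.astype(bool)
            intersection = np.logical_and(pred, ref).sum()
            denom = pred.sum() + ref.sum()
            return 2.0 * intersection / denom if denom > 0 else 1.0

        # Toy 2x2x2 masks; real masks would be full CCTA voxel volumes.
        pred = np.array([[[1, 0], [1, 1]], [[0, 0], [1, 0]]])
        ref  = np.array([[[1, 0], [1, 0]], [[0, 1], [1, 0]]])
        print(f"Dice = {dice_coefficient(pred, ref):.2f}")   # Dice = 0.75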

    Classification of Material Mixtures in Volume Data for Visualization and Modeling

    Material classification is a key step in creating computer graphics models and images from volume data. We present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). The algorithm assumes that voxels can contain more than one material, e.g. both muscle and fat; we wish to compute the relative proportion of each material in the voxels. Other classification methods have used Gaussian probability density functions to model the distribution of values within a dataset. These Gaussian basis functions work well for voxels containing unmixed materials, but not where materials are mixed together. We extend this approach by deriving non-Gaussian "mixture" basis functions. We treat a voxel as a volume, not as a single point, and use the distribution of values within each voxel-sized volume to identify the materials it contains using a probabilistic approach. The technique reduces the classification artifacts that occur along boundaries between materials. It is useful for making higher-quality geometric models and renderings from volume data, and has the potential to make more accurate volume measurements. It also classifies noisy, low-resolution data well.
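    To give a concrete flavour of the mixture idea, the sketch below estimates the relative proportion of two materials in a voxel from the values sampled inside it, assuming the voxel mean is a linear blend of the two pure-material means. The published method instead fits the derived non-Gaussian "mixture" basis functions to the within-voxel distribution; the function name and intensity values here are illustrative assumptions.

        import numpy as np

        def mixture_fraction(voxel_samples, mu_a, mu_b):
            """Fraction of material B in a voxel mixing materials A and B,
            assuming the voxel mean is a linear blend of the pure means."""
            mean_value = float(np.mean(voxel_samples))
            t = (mean_value - mu_a) / (mu_b - mu_a)
            return float(np.clip(t, 0.0, 1.0))

        # Example with CT-like values: fat around -90, muscle around 60
        # (illustrative numbers only).
        samples = np.array([-20.0, -5.0, -15.0, 0.0])   # values inside one voxel
        frac_muscle = mixture_fraction(samples, mu_a=-90.0, mu_b=60.0)
        print(f"muscle: {frac_muscle:.2f}, fat: {1.0 - frac_muscle:.2f}")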