
    A supervised texton-based approach for automatic segmentation and measurement of the fetal head and femur in 2D ultrasound images

    This paper presents a supervised texton-based approach for the accurate segmentation and measurement of the fetal head (BPD, OFD, HC) and femur (FL) in ultrasound images. The method consists of several steps. First, a non-linear diffusion technique is applied to reduce speckle noise. Then, based on the assumption that cross-sectional intensity profiles of the skull and femur can be approximated by Gaussian-like curves, a multi-scale, multi-orientation filter bank is designed to extract texton features specific to fetal anatomical structures in ultrasound. The extracted texton cues, together with multi-scale local brightness, are then combined in a unified framework for boundary detection of the fetal head and femur. Finally, for the fetal head, a direct least-squares ellipse fitting method is used to construct a closed head contour, whilst for the fetal femur a closed contour is produced by connecting the detected femur boundaries. The presented method is demonstrated to be promising for clinical applications. Overall, the fetal head segmentation and measurement results of our method are comparable with the inter-observer difference of experts, with a best average precision of 96.85%, a maximum symmetric contour distance (MSD) of 1.46 mm, and an average symmetric contour distance (ASD) of 0.53 mm; for the fetal femur, the overall performance of our method is better than the inter-observer difference of experts, with an average precision of 84.37%, an MSD of 2.72 mm and an ASD of 0.31 mm.
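    The closed head contour above is obtained by direct least-squares ellipse fitting. A minimal NumPy sketch of a Fitzgibbon-style direct fit follows; the function name and the eigenvector-selection details are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Direct least-squares ellipse fit (Fitzgibbon-style):
    minimise ||D a||^2 subject to the constraint 4AC - B^2 = 1."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # constraint matrix encoding 4AC - B^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    w, v = np.linalg.eig(np.linalg.solve(S, C))
    a = v[:, np.argmax(w.real)].real  # unique positive eigenvalue -> ellipse
    a /= np.sqrt(4 * a[0] * a[2] - a[1] ** 2)  # fix the scale
    return a if a[0] > 0 else -a      # fix the sign
```

    The returned conic coefficients [A, B, C, D, E, F] satisfy Ax² + Bxy + Cy² + Dx + Ey + F = 0; the ellipse centre and axes (and hence head measurements such as BPD and OFD) can be derived from them in closed form.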

    The ENIGMA Stroke Recovery Working Group: Big data neuroimaging to study brain–behavior relationships after stroke

    The goal of the Enhancing Neuroimaging Genetics through Meta‐Analysis (ENIGMA) Stroke Recovery working group is to understand brain and behavior relationships using well‐powered meta‐ and mega‐analytic approaches. ENIGMA Stroke Recovery has data from over 2,100 stroke patients collected across 39 research studies and 10 countries around the world, comprising the largest multisite retrospective stroke data collaboration to date. This article outlines the efforts taken by the ENIGMA Stroke Recovery working group to develop neuroinformatics protocols and methods to manage multisite stroke brain magnetic resonance imaging, behavioral, and demographic data. Specifically, the processes for scalable data intake and preprocessing, multisite data harmonization, and large‐scale stroke lesion analysis are described, and challenges unique to this type of big data collaboration in stroke research are discussed. Finally, future directions and limitations, as well as recommendations for improved data harmonization through prospective data collection and data management, are provided.

    The Borexino Thermal Monitoring & Management System and simulations of the fluid-dynamics of the Borexino detector under asymmetrical, changing boundary conditions

    A comprehensive monitoring system for the thermal environment inside the Borexino neutrino detector was developed and installed in order to reduce uncertainties in determining temperatures throughout the detector. A complementary thermal management system limits undesirable thermal couplings between the environment and Borexino's active sections. This strategy has brought improved radioactive background conditions to the region of interest for the physics signal, thanks to reduced fluid mixing induced in the liquid scintillator. Although fluid-dynamical equilibrium has not yet been fully reached, and further thermal fine-tuning is possible, the system has proven extremely effective at stabilizing the detector's thermal conditions while offering precise insights into its mechanisms of internal thermal transport. Furthermore, a computational fluid dynamics analysis, based on the empirical measurements provided by the thermal monitoring system, has been performed, providing insight into present and future thermal trends. A two-dimensional modeling approach was implemented in order to achieve a proper understanding of the thermal and fluid dynamics in Borexino. It was optimized for different regions and periods of interest, focusing on the most critical effects identified as influencing background concentrations. Experimental case studies from the literature were reproduced to benchmark the method and settings, and a Borexino-specific benchmark was implemented in order to validate the modeling approach for thermal transport. Finally, fully convective models were applied to understand general and specific fluid motions impacting the detector's Active Volume.

    Functional and structural MRI image analysis for brain glial tumors treatment

    Joint supervision (cotutela) with the Department of Biotechnology and Life Sciences, Università degli Studi dell'Insubria. This Ph.D. thesis is the outcome of a close collaboration between the Center for Research in Image Analysis and Medical Informatics (CRAIIM) of the Insubria University and the Operative Unit of Neurosurgery, Neuroradiology and Health Physics of the University Hospital "Circolo Fondazione Macchi", Varese. The project aim is to investigate new methodologies and, by means of these, develop an integrated framework able to enhance the use of magnetic resonance images, in order to support clinical experts in the treatment of patients with brain glial tumors. Both of the most common uses of MRI technology for non-invasive brain inspection were analyzed. From the functional point of view, the goal has been to provide tools for an objective, reliable and non-presumptive assessment of the locations of brain areas, to preserve them as much as possible during surgery. From the structural point of view, methodologies have been studied for fully automatic brain segmentation and recognition of tumoral areas, in order to evaluate the tumor volume and spatial distribution, infer correlations with other clinical data, and trace growth trends. Each of the proposed methods has been thoroughly assessed both qualitatively and quantitatively. All the medical imaging and pattern recognition algorithmic solutions studied for this Ph.D. thesis have been integrated in GliCInE: Glioma Computerized Inspection Environment, a MATLAB prototype of an integrated analysis environment that offers, in addition to all the functionality specifically described in this thesis, a set of tools needed to manage functional and structural magnetic resonance volumes and ancillary data related to the acquisition and the patient.

    Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics

    Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based stratification scheme for training/validation/testing, was investigated to improve the segmentation performance. The proposed methodology was evaluated both quantitatively with similarity metrics and clinically with physician reviews. In addition, external validation with an independent database was conducted. Our work addressed some of the major limitations that restricted clinical applicability of the existing approaches, and produced automatic segmentations that were consistent with the manually contoured ground truth and highly clinically acceptable according to both the quantitative and clinical evaluations. Both novel approaches, implementing a tumor volume-based training/validation/testing stratification strategy and incorporating voxel-wise radiomics feature images, were shown to improve the segmentation performance. The results showed that the proposed method was effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.
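    The abstract does not list the similarity metrics used for the quantitative evaluation; the Dice similarity coefficient is a common choice for comparing an automatic mask against a manually contoured ground truth. A generic sketch, not the authors' code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|P ∩ T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

    A Dice value of 1.0 means perfect overlap and 0.0 means no overlap; segmentation studies typically report the mean Dice over a test cohort.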

    Evaluation Methods of Accuracy and Reproducibility for Image Segmentation Algorithms

    Segmentation algorithms perform differently on different datasets. Sometimes we want to learn which segmentation algorithm is best for a specific task, and therefore need to rank the performance of segmentation algorithms and determine which one is most suitable. The performance of segmentation algorithms can be characterized from many aspects, such as accuracy and reproducibility. In many situations, the mean of the accuracies of individual segmentations is regarded as the accuracy of the segmentation algorithm that generated them. Sometimes a new algorithm is proposed and argued to be best based on mean segmentation accuracy alone, yet the distribution of accuracies of the segmentations generated by the new algorithm may not actually be better than those of other existing algorithms. There are cases where two groups of segmentations have the same mean accuracy but different distributions. This indicates that even when the mean accuracies of two groups of segmentations are the same, the corresponding segmentations may have different accuracy performance. In addition, the reproducibility of segmentation algorithms is measured by many different metrics, but few works have compared the properties of reproducibility measures based on real segmentation data. In this thesis, we illustrate how to evaluate and compare the accuracy performance of segmentation algorithms using a distribution-based method, and how to use the proposed extensive method to rank multiple segmentation algorithms according to their accuracy performance. Unlike the standard method, our extensive method combines the distribution information with the mean accuracy to evaluate, compare, and rank segmentation algorithms, instead of using mean accuracy alone. In addition, we used two sets of real segmentation data to demonstrate that the generalized Tanimoto coefficient is a superior reproducibility measure, insensitive to segmentation group size (number of raters), while other popular reproducibility measures exhibit sensitivity to group size.
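    As a concrete illustration of the reproducibility measure named above: one common formulation of the generalized Tanimoto coefficient pools intersections and unions over all pairs of raters. This is a generic binary-mask sketch, not the thesis implementation:

```python
import numpy as np
from itertools import combinations

def generalized_tanimoto(masks):
    """Pooled pairwise overlap across raters: the sum of all pairwise
    intersections divided by the sum of all pairwise unions."""
    inter = union = 0
    for a, b in combinations(masks, 2):
        a, b = a.astype(bool), b.astype(bool)
        inter += np.logical_and(a, b).sum()
        union += np.logical_or(a, b).sum()
    return inter / union
```

    Because every rater pair contributes to a single pooled ratio rather than being averaged pairwise, the value is comparatively stable as raters are added, which is consistent with the group-size insensitivity reported above.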

    Methodology for extensive evaluation of semiautomatic and interactive segmentation algorithms using simulated interaction models

    The performance of semiautomatic and interactive segmentation (SIS) algorithms is usually evaluated by employing a small number of human operators to segment the images. The human operators typically provide the approximate location of objects of interest and their boundaries in an interactive phase, followed by an automatic phase in which the segmentation is performed under the constraints of the operator-provided guidance. The segmentation results produced from this small set of interactions do not represent the true capability and potential of the algorithm being evaluated. For example, due to inter-operator variability, human operators may make choices that yield either overestimated or underestimated results. Moreover, their choices may not be realistic compared to how the algorithm is used in the field, since interaction may be influenced by operator fatigue and lapses in judgement. Other drawbacks to using human operators to assess SIS algorithms include human error, the lack of available expert users, and the expense. A methodology for evaluating segmentation performance is proposed here which uses simulated interaction models to programmatically generate large numbers of interactions, ensuring the presence of interactions throughout the object region. These interactions are used to segment the objects of interest, and the resulting segmentations are then analysed using statistical methods. The large number of interactions generated by simulated interaction models captures the variability existing in the set of user interactions by considering every pixel inside the object region as a potential location for an interaction, with equal probability. Because of the practical limitation imposed by the enormous amount of computation required for all possible interactions, uniform sampling of interactions at regular intervals is used to generate a subset that still represents the diverse pattern of the entire set. Categorizing interactions into groups, based on the position of the interaction inside the object region and the texture properties of the image region where the interaction is located, enables fine-grained analysis of algorithm performance by these two criteria. Application of statistical hypothesis testing makes the analysis more accurate, scientific and reliable than conventional evaluation of semiautomatic segmentation algorithms. The proposed methodology has been demonstrated in two case studies through the implementation of seven different algorithms with three different types of interaction modes, for a total of nine segmentation applications, to assess the efficacy of the methodology. Application of this methodology has revealed in-depth, fine-grained details about the performance of the segmentation algorithms that existing methods could not achieve, owing to the absence of a large, unbiased set of interactions. Practical application of the methodology to a number of algorithms and diverse interaction modes has shown its feasibility and generality, establishing it as an appropriate methodology. Its development into a tool for automatic evaluation of the performance of SIS algorithms looks very promising for users of image segmentation.
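    The uniform sampling of interactions at regular intervals described above can be sketched generically as grid-sampled seed points inside the object mask. The function name and the point-click interaction form are illustrative assumptions; the thesis's interaction models may be richer:

```python
import numpy as np

def grid_sampled_seeds(object_mask, step):
    """Enumerate candidate interaction points at a regular grid spacing,
    keeping only those that fall inside the binary object mask."""
    h, w = object_mask.shape
    return [(r, c)
            for r in range(0, h, step)
            for c in range(0, w, step)
            if object_mask[r, c]]
```

    Each returned seed would then initialize one run of the SIS algorithm, yielding a population of segmentations over which the statistical analysis is performed.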

    Assessment of algorithms for mitosis detection in breast cancer histopathology images

    The proliferative activity of breast tumors, routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole-slide images in pathology labs, automatic image analysis has been proposed as a potential solution to these issues. In this paper, the results of the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and evaluation results for eleven methods are presented. The top-performing method has an error rate comparable to the inter-observer agreement among pathologists.
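    Mitosis-detection challenges of this kind are typically scored with the F1 measure over detections matched to the annotated ground truth. A generic sketch of that computation, which is an assumption here and not necessarily the exact AMIDA13 criterion:

```python
def detection_f1(tp, fp, fn):
    """F1 score from counts of true-positive, false-positive and
    false-negative detections (harmonic mean of precision and recall)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

    An "error rate comparable to inter-observer agreement" means the top method's score against the consensus annotation approached the score one pathologist achieves against another.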

    Validation Strategies Supporting Clinical Integration of Prostate Segmentation Algorithms for Magnetic Resonance Imaging

    Segmentation of the prostate in medical images is useful for prostate cancer diagnosis and therapy guidance. However, manual segmentation of the prostate is laborious and time-consuming, with inter-observer variability. The focus of this thesis was on measuring accuracy, reproducibility and procedure time for prostate segmentation on T2-weighted endorectal magnetic resonance imaging, and on assessing the potential of a computer-assisted segmentation technique to be translated to clinical practice for prostate cancer management. We collected an image data set from prostate cancer patients, with prostate borders manually delineated by one observer on all the images and by two other observers on a subset of images. We used a complementary set of error metrics to measure the different types of observed segmentation errors. We compared expert manual segmentation as well as semi-automatic and automatic segmentation approaches before and after manual editing by expert physicians. We recorded the time needed for user interaction to initialize the semi-automatic algorithm, for algorithm execution, and for manual editing as necessary. The measured errors for the algorithms compared favourably with the observed differences between manual segmentations. The measured average editing times for computer-assisted segmentation were lower than the fully manual segmentation time, and the algorithms reduced inter-observer variability compared to manual segmentation. The accuracy of the computer-assisted approaches was near or within the range of observed variability in manual segmentation. The recorded procedure time for prostate segmentation was reduced using computer-assisted segmentation followed by manual editing, compared to the time required for fully manual segmentation.
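    Among complementary error metrics for boundary agreement, the average (ASD) and maximum (MSD) symmetric contour distances are standard. A minimal NumPy sketch over sampled boundary points; illustrative only, not the thesis code, and it assumes contours are supplied as point arrays:

```python
import numpy as np

def symmetric_contour_distances(a_pts, b_pts):
    """Average (ASD) and maximum (MSD) symmetric distance between two
    contours given as (N, 2) arrays of boundary points."""
    # pairwise Euclidean distances between every point on A and on B
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    # symmetric: nearest-neighbour distances in both directions
    nearest = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return nearest.mean(), nearest.max()
```

    The symmetric pooling matters: measuring only A-to-B would miss regions where B bulges far from A, which is exactly the kind of local error MSD is meant to expose.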