    Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting

    For patients suffering from a brain tumor, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and for the generation of clinical reports incorporating a wide range of tumor characteristics represents a major hurdle. In this study, we investigate the most frequently occurring brain tumor types: glioblastomas, lower grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performance was assessed in depth using a wide range of voxel-wise and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions have been developed, enabling easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segmentation performance was quite homogeneous across the four brain tumor types, with an average true positive Dice ranging between 80 and 90%, patient-wise recall between 88 and 98%, and patient-wise precision around 95%. In conjunction with Dice, the other metrics identified as most relevant were the relative absolute volume difference, the variation of information, and the Hausdorff, Mahalanobis, and object average symmetric surface distances. With our Raidionics software, running on a desktop computer with CPU support, tumor segmentation can be performed in 16–54 s depending on the dimensions of the MRI volume. For the generation of a standardized clinical report, including the tumor segmentation and feature computation, 5–15 min are necessary. All trained models have been made open-access together with the source code for both software solutions and for the validation metrics computation. In the future, a method to convert results from a set of metrics into a single final score would be highly desirable for easier ranking across trained models. In addition, an automatic classification of the brain tumor type would be necessary to replace manual user input. Finally, the inclusion of post-operative segmentation in both software solutions will be key for generating complete post-operative standardized clinical reports.
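    To make the reported evaluation concrete, the sketch below shows how two of the voxel-wise metrics mentioned above, the Dice score and the relative absolute volume difference, could be computed from binary tumor masks with NumPy. It is a minimal illustration, not code from the Raidionics software or its validation-metrics repository, and the function names are my own.

```python
# Minimal sketch (not from the Raidionics codebase): two voxel-wise metrics,
# Dice and relative absolute volume difference, over binary tumor masks.
import numpy as np


def dice_score(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) over boolean voxel masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    denom = gt.sum() + pred.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(gt, pred).sum() / denom


def relative_absolute_volume_difference(gt: np.ndarray, pred: np.ndarray) -> float:
    """|V_pred - V_gt| / V_gt, as a fraction of the reference tumor volume."""
    v_gt = int(gt.astype(bool).sum())
    v_pred = int(pred.astype(bool).sum())
    return abs(v_pred - v_gt) / max(v_gt, 1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.random((64, 64, 64)) > 0.9    # toy "ground truth" mask
    pred = rng.random((64, 64, 64)) > 0.9  # toy "prediction" mask
    print(f"Dice: {dice_score(gt, pred):.3f}")
    print(f"RAVD: {relative_absolute_volume_difference(gt, pred):.3f}")
```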

    On Non-Parallelizable Deterministic Client Puzzle Scheme with Batch Verification Modes

    A (computational) client puzzle scheme enables a client to prove to a server that a certain amount of computing resources (CPU cycles and/or memory look-ups) has been dedicated to solving a puzzle. Researchers have identified a number of potential applications, such as constructing timed cryptography, fighting junk email, and protecting critical infrastructure from DoS attacks. In this paper, we first revisit this concept and formally define two properties, namely deterministic computation and parallel computation resistance. Our analysis shows that both properties are crucial for the effectiveness of client puzzle schemes in most application scenarios. We prove that the RSW client puzzle scheme, which is based on the repeated squaring technique, achieves both properties. Secondly, we introduce two batch verification modes for the RSW client puzzle scheme in order to improve the verification efficiency of the server, and we investigate three methods for handling errors in batch verification. Lastly, we show that client puzzle schemes can be integrated with reputation systems to further improve their effectiveness in practice.
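    To illustrate the repeated squaring technique on which the RSW scheme relies, the sketch below implements a toy puzzle: the client performs t sequential modular squarings, while the server verifies the answer with a single exponentiation using the trapdoor phi(n). This is a minimal illustration under small, insecure parameters; the paper's batch verification modes are not reproduced, and helper names such as generate_puzzle are hypothetical.

```python
# Toy RSW-style repeated-squaring puzzle (illustrative only: parameters are
# far too small for real use, and the batch verification modes from the
# paper are not reproduced here).
import secrets

from sympy import randprime


def generate_puzzle(bits: int = 256, t: int = 1 << 12):
    """Server side: pick n = p*q, a random base x, and difficulty t."""
    p = randprime(2 ** (bits - 1), 2 ** bits)
    q = randprime(2 ** (bits - 1), 2 ** bits)
    n, phi = p * q, (p - 1) * (q - 1)
    x = secrets.randbelow(n - 2) + 2
    return (n, x, t), phi  # public puzzle, plus the server-side trapdoor


def solve_puzzle(n: int, x: int, t: int) -> int:
    """Client side: t sequential squarings, the inherently sequential workload."""
    y = x % n
    for _ in range(t):
        y = (y * y) % n
    return y


def verify_puzzle(n: int, x: int, t: int, y: int, phi: int) -> bool:
    """Server side: one modular exponentiation using the trapdoor phi(n)."""
    e = pow(2, t, phi)  # reduce the exponent 2^t modulo phi(n)
    return pow(x, e, n) == y % n


if __name__ == "__main__":
    (n, x, t), phi = generate_puzzle()
    solution = solve_puzzle(n, x, t)
    print("solution accepted:", verify_puzzle(n, x, t, solution, phi))
```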

    ETEA: A euclidean minimum spanning tree-Based evolutionary algorithm for multiobjective optimization

    Get PDF
    The Euclidean minimum spanning tree (EMST), widely used in a variety of domains, is a minimum spanning tree of a set of points in space, where the edge weight between each pair of points is their Euclidean distance. Since the generation of an EMST is entirely determined by the Euclidean distances between solutions (points), the properties of EMSTs have a close relation with the distribution and position information of the solutions. This paper explores the properties of EMSTs and proposes an EMST-based evolutionary algorithm (ETEA) to solve multiobjective optimization problems (MOPs). Unlike most EMO algorithms that focus on the Pareto dominance relation, the proposed algorithm mainly considers distance-based measures to evaluate and compare individuals during the evolutionary search. Specifically, four strategies are introduced in ETEA: 1) an EMST-based crowding distance (ETCD) is presented to estimate the density of individuals in the population; 2) a distance comparison approach incorporating ETCD is used to assign the fitness value of individuals; 3) a fitness adjustment technique is designed to avoid partial overcrowding in environmental selection; 4) three diversity indicators with regard to EMSTs (the minimum edge, degree, and ETCD) are applied to determine the survival of individuals in archive truncation. In a series of extensive experiments on 32 test instances with different characteristics, ETEA is found to be competitive against five state-of-the-art algorithms and its predecessor in providing a good balance among convergence, uniformity, and spread. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/K001310/1 and by the National Natural Science Foundation of China under Grant 61070088.
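    To make the central data structure concrete, the sketch below builds the EMST of a population in objective space with SciPy and derives a simple density estimate from the lengths of each individual's incident EMST edges. This is an illustrative approximation of the idea, not the ETCD definition used in ETEA, and the helper names are hypothetical.

```python
# Sketch: Euclidean minimum spanning tree of a population in objective space,
# plus a simple crowding estimate from incident edge lengths (an illustrative
# approximation, not the exact ETCD definition from the paper).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform


def emst_edges(points: np.ndarray):
    """Return EMST edges as (i, j, weight) triples for an (N, d) point set."""
    dist = squareform(pdist(points))           # dense pairwise Euclidean distances
    mst = minimum_spanning_tree(dist).tocoo()  # MST over the complete graph
    return list(zip(mst.row, mst.col, mst.data))


def emst_crowding(points: np.ndarray) -> np.ndarray:
    """Average incident EMST edge length per point; larger means less crowded."""
    total = np.zeros(points.shape[0])
    count = np.zeros(points.shape[0])
    for i, j, w in emst_edges(points):
        total[i] += w
        total[j] += w
        count[i] += 1
        count[j] += 1
    return total / np.maximum(count, 1)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    objectives = rng.random((20, 2))  # toy bi-objective population
    print(emst_crowding(objectives).round(3))
```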

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize the known results, and point to relevant references in the literature.