
    BrainNET: Inference of Brain Network Topology Using Machine Learning

    Background: To develop a new functional magnetic resonance imaging (fMRI) network inference method, BrainNET, that uses an efficient machine learning algorithm to quantify the contributions of various regions of interest (ROIs) in the brain to a specific ROI. Methods: BrainNET is based on extremely randomized trees and is modified to estimate network topology from fMRI data, generating an adjacency matrix that represents brain network topology without reliance on arbitrary thresholds. Open-source simulated fMRI data of 50 subjects in 28 different simulations under various confounding conditions, with known ground truth, were used to validate the method. Performance was compared with correlation and partial correlation (PC). Real-world performance was then evaluated on a publicly available attention-deficit/hyperactivity disorder (ADHD) data set, including 134 typically developing children (mean age: 12.03, males: 83), 75 ADHD inattentive (mean age: 11.46, males: 56), and 93 ADHD combined (mean age: 11.86, males: 77) subjects. Network topologies in ADHD were inferred using BrainNET, correlation, and PC, and graph metrics were extracted to determine differences between the ADHD groups. Results: BrainNET demonstrated excellent performance across all simulations and varying confounders in identifying the true presence of connections. In the ADHD data set, BrainNET identified significant changes (p < 0.05) in graph metrics between groups, whereas no significant changes between ADHD groups were identified using correlation and PC. Conclusion: We describe BrainNET, a new network inference method to estimate fMRI connectivity that was adapted from gene regulatory network methods. BrainNET outperformed Pearson correlation and PC on fMRI simulation data and on real-world ADHD data. BrainNET can be used independently or combined with other existing methods as a useful tool to understand network changes and to determine the true network topology of the brain under various conditions and disease states. Impact statement: We developed a new fMRI network inference method, named BrainNET, using machine learning. BrainNET outperformed Pearson correlation and partial correlation on fMRI simulation data and on real-world attention-deficit/hyperactivity disorder data. BrainNET does not need to be pretrained and can be applied to infer fMRI network topology independently for individual subjects and for varying numbers of nodes.
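    The abstract does not include implementation details, but a minimal sketch of the idea it describes (tree-ensemble feature importances used as directed edge weights, in the spirit of gene-regulatory-network methods such as GENIE3) might look like the following. The function name and the use of scikit-learn's ExtraTreesRegressor are illustrative assumptions, not the authors' code.

```python
# Minimal, illustrative sketch (not the authors' implementation): infer a
# directed adjacency matrix from ROI time series by regressing each ROI on
# all others with extremely randomized trees and reading off the fitted
# feature importances as edge weights, GENIE3-style.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def infer_network(ts, n_estimators=100, random_state=0):
    """ts: (n_timepoints, n_rois) array of ROI time series.
    Returns an (n_rois, n_rois) matrix W where W[i, j] is the estimated
    contribution of ROI i to ROI j."""
    n_rois = ts.shape[1]
    W = np.zeros((n_rois, n_rois))
    for j in range(n_rois):
        predictors = np.delete(ts, j, axis=1)  # all ROIs except the target
        model = ExtraTreesRegressor(n_estimators=n_estimators,
                                    random_state=random_state)
        model.fit(predictors, ts[:, j])
        W[np.arange(n_rois) != j, j] = model.feature_importances_
    return W

# Example with synthetic data: 200 time points, 10 ROIs.
rng = np.random.default_rng(0)
adjacency = infer_network(rng.standard_normal((200, 10)))
```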

    Automated Code generation for Information Technology Tasks in YAML through Large Language Models

    The recent improvement in code generation capabilities due to the use of large language models has mainly benefited general-purpose programming languages. Domain-specific languages, such as those used for IT automation, have received far less attention, despite involving many active developers and being an essential component of modern cloud platforms. This work focuses on the generation of Ansible-YAML, a widely used markup language for IT automation. We present Ansible Wisdom, a natural-language-to-Ansible-YAML code generation tool aimed at improving IT automation productivity. Ansible Wisdom is a transformer-based model, extended by training with a new dataset containing Ansible-YAML. We also develop two novel performance metrics for YAML and Ansible to capture the specific characteristics of this domain. Results show that Ansible Wisdom can accurately generate Ansible scripts from natural-language prompts, with performance comparable to or better than existing state-of-the-art code generation models.
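    The abstract does not define the two domain-specific metrics. As a loose illustration of the kind of check a YAML-aware metric could build on (an assumption for exposition, not the metrics proposed in the paper), one could first verify that generated output parses as YAML and is shaped like a list of Ansible tasks:

```python
# Hypothetical helper (not from the paper): check that a generated string is
# well-formed YAML and roughly shaped like an Ansible task list, i.e. a list
# of mappings that each contain at least one key besides common bookkeeping
# fields (which would be the invoked module).
import yaml

BOOKKEEPING_KEYS = {"name", "when", "tags", "register", "loop", "vars"}

def looks_like_ansible_tasks(generated: str) -> bool:
    try:
        doc = yaml.safe_load(generated)
    except yaml.YAMLError:
        return False
    if not isinstance(doc, list) or not doc:
        return False
    for task in doc:
        if not isinstance(task, dict):
            return False
        # Each task should invoke at least one module, i.e. have a key that
        # is not a bookkeeping field.
        if not (set(task) - BOOKKEEPING_KEYS):
            return False
    return True

print(looks_like_ansible_tasks(
    "- name: Install nginx\n"
    "  ansible.builtin.package:\n"
    "    name: nginx\n"
    "    state: present\n"
))  # True
```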

    A Fully Automated Deep Learning Network for Brain Tumor Segmentation

    We developed a fully automated method for brain tumor segmentation using deep learning; 285 brain tumor cases with multiparametric magnetic resonance images from the BraTS2018 data set were used. We designed 3 separate 3D-Dense-UNets to simplify the complex multiclass segmentation problem into individual binary-segmentation problems for each subcomponent. We implemented 3-fold cross-validation to assess the network's generalization performance. The mean cross-validation Dice scores for whole tumor (WT), tumor core (TC), and enhancing tumor (ET) segmentations were 0.92, 0.84, and 0.80, respectively. We then retrained the individual binary-segmentation networks using 265 of the 285 cases, with 20 cases held out for testing. We also tested the network on 46 cases from the BraTS2017 validation data set, 66 cases from the BraTS2018 validation data set, and 52 cases from an independent clinical data set. The average Dice scores for WT, TC, and ET were 0.90, 0.84, and 0.80, respectively, on the 20 held-out testing cases. The average Dice scores for WT, TC, and ET on the BraTS2017 validation data set, the BraTS2018 validation data set, and the clinical data set were as follows: 0.90, 0.80, and 0.78; 0.90, 0.82, and 0.80; and 0.85, 0.80, and 0.77, respectively. A fully automated deep learning method was developed to segment brain tumors into their subcomponents, which achieved high prediction accuracy on the BraTS data sets and on the independent clinical data set. This method is promising for implementation in a clinical workflow.
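    All results in this abstract are reported as Dice scores per tumor subcomponent. For reference, the standard Dice score for binary masks (a textbook definition, not code from the paper) can be computed as follows:

```python
# Standard Dice score for binary segmentation masks (illustrative; not the
# authors' evaluation code). Dice = 2*|P & G| / (|P| + |G|).
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Example on a toy 3D volume.
rng = np.random.default_rng(0)
pred = rng.random((16, 16, 16)) > 0.5
truth = rng.random((16, 16, 16)) > 0.5
print(round(dice_score(pred, truth), 3))
```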

    QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results

    Deep learning (DL) models have provided state-of-the-art performance in a wide variety of medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder the translation of DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way towards clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that assign high confidence to correct assertions and low confidence to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent teams participating in QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Our evaluation code is publicly available at https://github.com/RagMeh11/QU-BraTS
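    A much-simplified sketch of the behaviour the abstract describes (the official scoring code lives in the linked repository; the function below is an assumption for illustration only) is to sweep uncertainty thresholds, keep only voxels whose uncertainty falls below each threshold, and track both the Dice score of the retained voxels and the fraction of correctly predicted voxels that get filtered out. A good uncertainty map lets Dice rise with stricter filtering while discarding few correct predictions.

```python
# Simplified, illustrative uncertainty-filtering sweep (not the official
# QU-BraTS scoring code): at each threshold, evaluate Dice on the retained
# voxels and the share of correct predictions that were filtered out.
import numpy as np

def uncertainty_sweep(pred, truth, uncertainty,
                      thresholds=np.linspace(0.05, 1.0, 20), eps=1e-7):
    """pred, truth: binary masks; uncertainty: values in [0, 1], 1 = most uncertain.
    Returns a list of (threshold, dice_on_retained, filtered_correct_ratio)."""
    correct = pred == truth
    results = []
    for t in thresholds:
        keep = uncertainty <= t
        inter = np.logical_and(pred[keep], truth[keep]).sum()
        dice = 2.0 * inter / (pred[keep].sum() + truth[keep].sum() + eps)
        filtered_correct_ratio = (np.logical_and(~keep, correct).sum()
                                  / (correct.sum() + eps))
        results.append((float(t), float(dice), float(filtered_correct_ratio)))
    return results
```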
