
    Application of machine learning to automated analysis of cerebral edema in large cohorts of ischemic stroke patients

    Cerebral edema contributes to neurological deterioration and death after hemispheric stroke, but there remains no effective means of preventing or accurately predicting its occurrence. Big data approaches may provide insights into the biologic variability and genetic contributions to the severity and time course of cerebral edema. These methods require quantitative analyses of edema severity across large cohorts of stroke patients. We have proposed that changes in cerebrospinal fluid (CSF) volume over time may represent a sensitive and dynamic marker of edema progression that can be measured from routinely available CT scans. To facilitate and scale up such approaches, we have created a machine learning algorithm capable of segmenting and measuring CSF volume from serial CT scans of stroke patients. We now present results from our preliminary processing pipeline, which efficiently extracted CSF volumetrics from an initial cohort of 155 subjects enrolled in a prospective longitudinal stroke study. We demonstrate a high degree of reproducibility in total cranial volume registration between scans (R = 0.982) as well as a strong correlation between baseline CSF volume and patient age (as a surrogate of brain atrophy, R = 0.725). Reduction in CSF volume from baseline to final CT was correlated with infarct volume (R = 0.715) and degree of midline shift (quadratic model, p < 2.2 × 10^−16). We utilized generalized estimating equations (GEE) to model CSF volumes over time (using linear and quadratic terms), adjusting for age. This model demonstrated that CSF volume decreases over time (p < 2.2 × 10^−13) and is lower in those with cerebral edema (p = 0.0004). We are now fully automating this pipeline to allow rapid analysis of even larger cohorts of stroke patients from multiple sites using an XNAT (eXtensible Neuroimaging Archive Toolkit) platform. Data on the kinetics of edema across thousands of patients will facilitate precision approaches to prediction of malignant edema, as well as modeling of variability and further understanding of the genetic variants that influence edema severity.
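
    A minimal sketch of the longitudinal model described above, using the generalized estimating equations implementation in Python's statsmodels: repeated scans are grouped by subject, with linear and quadratic time terms and an age adjustment, as in the abstract. The column names (subject_id, csf_volume_ml, hours_from_onset, age, edema), the input file, and the exchangeable working correlation are illustrative assumptions, not details taken from the study.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # One row per CT scan, with repeated rows per subject (hypothetical file).
    scans = pd.read_csv("csf_volumes.csv")

    # GEE accounts for within-subject correlation of serial scans; the
    # quadratic term lets the CSF trajectory curve rather than fall linearly.
    model = smf.gee(
        "csf_volume_ml ~ hours_from_onset + I(hours_from_onset ** 2) + age + edema",
        groups="subject_id",
        data=scans,
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()
    print(result.summary())  # coefficients and p-values for time, time^2, age, edema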

    A Query Integrator and Manager for the Query Web

    We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save, and reuse queries over any web-accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g., SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions.
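
    The chaining condition is concrete enough to sketch: because every saved query returns XML, a second query can consume the first's results regardless of the languages involved. The endpoint URLs, element names, and query identifiers below are hypothetical, since the abstract does not specify QI's actual API; only the Python standard library is used.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    QI = "https://qi.example.org"  # hypothetical QI installation

    # Step 1: run a saved query (written in any language) that returns XML.
    with urllib.request.urlopen(f"{QI}/queries/brain-regions/run") as resp:
        regions = [el.text for el in ET.parse(resp).getroot().iter("region")]

    # Step 2: chain a SPARQL query off the first result set; the XML
    # contract is the only coupling between the two queries.
    values = " ".join(f'"{r}"' for r in regions)
    sparql = (
        "SELECT ?image WHERE { "
        "?image <http://example.org/annotatedWith> ?region . "
        f"VALUES ?region {{ {values} }} }}"
    )
    url = f"{QI}/sparql?" + urllib.parse.urlencode({"query": sparql})
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode())  # the second query's results, again as XML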

    Brain structure in pediatric Tourette syndrome

    Previous studies of brain structure in Tourette syndrome (TS) have produced mixed results, and most had modest sample sizes. In the present multicenter study, we used structural magnetic resonance imaging (MRI) to compare 103 children and adolescents with TS to a well-matched group of 103 children without tics. We applied voxel-based morphometry methods to test for gray matter (GM) and white matter (WM) volume differences between diagnostic groups, accounting for MRI scanner and sequence, age, sex, and total GM+WM volume. The TS group demonstrated lower WM volume bilaterally in orbital and medial prefrontal cortex, and greater GM volume in posterior thalamus, hypothalamus, and midbrain. These results provide evidence of abnormal brain structure in children and youth with TS, consistent with and extending previous findings, and they point to new target regions and avenues of study in TS. For example, as orbital cortex is reciprocally connected with hypothalamus, structural abnormalities in these regions may relate to abnormal decision making, reinforcement learning, or somatic processing in TS.
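
    The group comparison lends itself to a short sketch: a voxel-wise GLM over per-subject gray matter maps, with diagnosis as the effect of interest and scanner, age, sex, and total GM+WM volume as nuisance covariates. This is a generic VBM-style second-level model written with nilearn; the participants table, file paths, and single numeric scanner covariate are illustrative assumptions, not the study's actual pipeline.

    import pandas as pd
    from nilearn.glm.second_level import SecondLevelModel

    covars = pd.read_csv("participants.csv")  # one row per subject (hypothetical)

    design = pd.DataFrame({
        "intercept": 1.0,
        "ts_group": (covars["diagnosis"] == "TS").astype(float),  # TS vs. control
        "age": covars["age"].astype(float),
        "male": (covars["sex"] == "M").astype(float),
        "scanner": covars["scanner_code"].astype(float),  # one-hot if >2 scanners
        "total_gm_wm": covars["gm_wm_volume"].astype(float),
    })

    # Modulated gray matter maps, in the same row order as the design matrix.
    gm_maps = covars["gm_map_path"].tolist()

    model = SecondLevelModel(smoothing_fwhm=8.0).fit(gm_maps, design_matrix=design)
    zmap = model.compute_contrast("ts_group", output_type="z_score")
    zmap.to_filename("ts_vs_control_gm_z.nii.gz")  # voxel-wise group difference

    The WM comparison follows the same pattern, substituting white matter maps for gm_maps.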

    Towards structured sharing of raw and derived neuroimaging data across existing resources

    Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data are accumulating in distributed domain-specific databases, and there is currently neither an integrated access mechanism nor an accepted format for the critically important meta-data necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF), focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data, as well as associated meta-data and provenance, across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery.
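
    Component (4) can be made concrete with a small example. The sketch below records provenance for a derived image using the W3C PROV data model via the Python prov package; the abstract does not name the working group's actual provenance library or schema, so that choice, along with every namespace, identifier, and attribute here, is an assumption.

    from prov.model import ProvDocument

    doc = ProvDocument()
    doc.add_namespace("ex", "http://example.org/study1#")  # hypothetical namespace

    # Raw input, derived output, the analysis run, and the software agent.
    raw = doc.entity("ex:sub01_T1w", {"ex:imageType": "T1-weighted MRI"})
    derived = doc.entity("ex:sub01_gm_map", {"ex:imageType": "GM probability map"})
    tool = doc.agent("ex:segmentation_tool_v1")
    run = doc.activity("ex:segmentation_run_42")

    # Provenance links: the derived map points back to its input, to the
    # activity that produced it, and to the software responsible.
    doc.used(run, raw)
    doc.wasGeneratedBy(derived, run)
    doc.wasAssociatedWith(run, tool)
    doc.wasDerivedFrom(derived, raw)

    print(doc.serialize(indent=2))  # PROV-JSON, exchangeable across resources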

    Focal Spot, Winter 2005/2006


    A Fair Trial: When the Constitution Requires Attorneys to Investigate Their Clients' Brains

    The U.S. Constitution guarantees every criminal defendant the right to a fair trial. This fundamental right includes the right to defense counsel who provides effective assistance. To be effective, attorneys must sometimes develop specific types of evidence in crafting the best defense. In recent years, the U.S. Supreme Court has found that defense attorneys did not provide effective assistance when they failed to consider neuroscience. But when must defense attorneys develop neuroscience in order to provide effective assistance? This question is difficult because the standard for determining effective assistance is still evolving. There are two leading approaches. First, in Strickland v. Washington, the Court adopted a two-pronged “reasonableness” test, which, according to Justice O’Connor, may result in court decisions that fail to properly protect a criminal defendant’s rights. More recently, courts have adopted a second approach based on guidelines promulgated by the American Bar Association. This Note aims to answer that question. It first provides background on the right to effective assistance of counsel and briefly describes neuroscience evidence, opposition to and limitations on its use, and its admissibility in court. Second, this Note attempts to give some guidance to attorneys by exploring the American Bar Association and U.S. Supreme Court standards. Third, it summarizes the results of a statistical analysis conducted by the author, which helps further define when courts require attorneys to develop neuroscience evidence. It concludes by arguing that attorneys need guidance to ensure they are not violating the Sixth Amendment. This Note expands on the American Bar Association’s standard and suggests a framework attorneys may use to determine whether they should develop neuroscience evidence to ensure that their client has a fair trial.

    The Stroke Neuro-Imaging Phenotype Repository: An open data science platform for stroke research

    Stroke is one of the leading causes of death and disability worldwide. Reducing this disease burden through drug discovery and evaluation of stroke patient outcomes requires broader characterization of stroke pathophysiology, yet the underlying biologic and genetic factors contributing to outcomes are largely unknown. Remedying this critical knowledge gap requires deeper phenotyping, including large-scale integration of demographic, clinical, genomic, and imaging features. Such big data approaches will be facilitated by developing and running processing pipelines to extract stroke-related phenotypes at large scale. Millions of stroke patients undergo routine brain imaging each year, capturing a rich set of data on stroke-related injury and outcomes. The Stroke Neuroimaging Phenotype Repository (SNIPR) was developed as a multi-center centralized imaging repository of clinical computed tomography (CT) and magnetic resonance imaging (MRI) scans from stroke patients worldwide, based on the open-source XNAT imaging informatics platform. The aims of this repository are to: (i) store, manage, process, and facilitate sharing of high-value stroke imaging data sets; (ii) implement containerized automated computational methods to extract image characteristics and disease-specific features from contributed images; (iii) facilitate integration of imaging, genomic, and clinical data to perform large-scale analysis of complications after stroke; and (iv) develop SNIPR as a collaborative platform aimed at both data scientists and clinical investigators. Currently, SNIPR hosts research projects encompassing ischemic and hemorrhagic stroke, with data from 2,246 subjects and 6,149 imaging sessions from Washington University's clinical image archive, as well as contributions from collaborators in different countries, including Finland, Poland, and Spain. Moreover, we have extended the XNAT data model to include relevant clinical features, including subject demographics, stroke severity (NIH Stroke Scale), stroke subtype (using TOAST classification), and outcome [modified Rankin Scale (mRS)]. Image processing pipelines are deployed on SNIPR using containerized modules, which facilitate replicability at a large scale. The first such pipeline identifies axial brain CT scans from DICOM header data and image data using a meta deep learning scan classifier, registers serial scans to an atlas, segments tissue compartments, and calculates CSF volume. The resulting volume can be used to quantify the progression of cerebral edema after ischemic stroke. SNIPR thus enables the development and validation of pipelines to automatically extract imaging phenotypes and couple them with clinical data, with the overarching aim of enabling a broad understanding of stroke progression and outcomes.
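
    The first pipeline step can be sketched at the header level: pydicom reads DICOM metadata without loading pixel data, and slice orientation follows from the ImageOrientationPatient direction cosines. The production classifier described above also learns from the image data itself, so the pure header heuristic below is an illustrative simplification, and the file path is hypothetical.

    import numpy as np
    import pydicom

    def is_axial_ct(path: str) -> bool:
        """Flag a DICOM slice as axial CT using header fields only."""
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        if getattr(ds, "Modality", None) != "CT":
            return False
        iop = getattr(ds, "ImageOrientationPatient", None)  # six direction cosines
        if iop is None:
            return False
        # The slice normal is the cross product of the row and column
        # direction cosines; an axial slice has a normal near the z axis.
        row, col = np.array(iop[:3], float), np.array(iop[3:], float)
        return abs(np.cross(row, col)[2]) > 0.9

    print(is_axial_ct("sub01/series3/slice001.dcm"))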