
    Responsible Data Governance of Neuroscience Big Data

    Current discussions of the ethical aspects of big data are shaped by concerns regarding the social consequences of both the widespread adoption of machine learning and the ways in which biases in data can be replicated and perpetuated. We instead focus here on the ethical issues arising from the use of big data in international neuroscience collaborations. Neuroscience innovation relies upon neuroinformatics: large-scale data collection and analysis enabled by novel and emergent technologies. Each step of this work involves aspects of ethics, ranging from concerns for adherence to informed consent or animal protection principles and issues of data re-use at the stage of data collection, to data protection and privacy during data processing and analysis, and issues of attribution and intellectual property at the data-sharing and publication stages. Significant dilemmas and challenges with far-reaching implications are also inherent, including reconciling the ethical imperative for openness and validation with data protection compliance, and considering future innovation trajectories or the potential for misuse of research results. Furthermore, these issues are subject to local interpretations within different ethical cultures applying diverse legal systems that emphasise different aspects. Neuroscience big data therefore require a concerted approach to research across boundaries, wherein ethical aspects are integrated within a transparent, dialogical data governance process. We address this by developing the concept of "responsible data governance," applying the principles of Responsible Research and Innovation (RRI) to the challenges presented by the governance of neuroscience big data in the Human Brain Project (HBP).

    Development of quality standards for multi-center, longitudinal magnetic resonance imaging studies in clinical neuroscience

    Magnetic resonance imaging (MRI) data are generated by a complex procedure, and many possible sources of error can degrade the signal. For example, hidden defective components of an MRI scanner, changes in the static magnetic field caused by a person simply moving in the scanner room, and changes in the measurement sequences can all reduce the signal-to-noise ratio (SNR). A comprehensive, reproducible quality assurance (QA) procedure is therefore necessary to ensure reproducible results from both the MRI equipment and its human operator. There are two ways to examine the quality of MRI data. On the one hand, water- or gel-filled objects, so-called "phantoms", are measured regularly; based on this signal, which ideally should remain stable, the general performance of the MRI scanner can be tested. On the other hand, the data of actual interest, mostly human data, are checked directly for certain signal parameters (e.g., SNR, motion parameters). This thesis consists of two parts. In the first part, a study-specific QA protocol was developed for a large multicenter MRI study, FOR2107. The aim of FOR2107 is to investigate the causes and course of affective disorders, unipolar depression and bipolar disorders, taking clinical and neurobiological effects into account. The main aspect of FOR2107 is the MRI measurement of more than 2,000 subjects in a longitudinal design (currently repeated measurements after 2 years, with further measurements planned after 5 years). To bring MRI data and disease history together, the MRI data must provide stable results over the course of the study; ensuring this stability is dealt with in this part of the work. An extensive QA programme, based on phantom measurements, human-data analysis, protocol-compliance testing, etc., was set up. In addition to the development of parameters for the characterization of MRI data, the QA protocols in use were improved during the study. The differences between sites and the impact of these differences on human-data analysis were analyzed. The comprehensive quality assurance for the FOR2107 study showed significant differences in the MRI signal (for human and phantom data) between the centers. Problems that occurred could be recognized in time and corrected, and they must be taken into account in current and future analyses of human data. In the second part of this thesis, a QA protocol (and the freely available associated software "LAB-QA2GO") was developed and tested; it can be used for individual studies or to monitor the quality of an MRI scanner. This routine was developed because at many sites and in many studies no explicit QA is performed, even though suitable, freely available QA software for MRI measurements exists. With LAB-QA2GO, it is possible to set up a QA protocol for an MRI scanner or a study without much effort or IT knowledge. Both parts of the thesis deal with the implementation of QA procedures. High-quality data and study results can be achieved only through the use of appropriate QA procedures, as presented in this work. QA measures should therefore be implemented at all levels of a project and embedded permanently in project and evaluation routines.
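A minimal sketch of the phantom-based SNR check described above: the mean intensity of a central signal region is divided by the standard deviation of a background (air) region. The ROI layout and the synthetic image are hypothetical illustrations, not the actual FOR2107 or LAB-QA2GO procedure.

```python
import numpy as np

def phantom_snr(image, signal_roi, noise_roi):
    """Estimate SNR as mean intensity of a signal ROI divided by the
    standard deviation of a background ROI. ROIs are index ranges
    given as ((row0, row1), (col0, col1))."""
    (r0, r1), (c0, c1) = signal_roi
    (s0, s1), (t0, t1) = noise_roi
    return image[r0:r1, c0:c1].mean() / image[s0:s1, t0:t1].std()

# Synthetic phantom slice: a bright square "phantom" on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, size=(128, 128))  # background noise
img[32:96, 32:96] += 200.0                    # phantom signal
snr = phantom_snr(img, ((48, 80), (48, 80)), ((0, 16), (0, 16)))
```

In a longitudinal QA routine, such a value would be computed for every phantom measurement and tracked over time, so that drifts or sudden drops at a site become visible immediately.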

    Neuromeasure: A software package for quantification of cortical motor maps using frameless stereotaxic transcranial magnetic stimulation

    The recent enhanced sophistication of non-invasive mapping of the human motor cortex using MRI-guided Transcranial Magnetic Stimulation (TMS) has not been matched by refinement of the methods for generating maps from motor evoked potential (MEP) data or for quantifying map features. This is despite continued interest in understanding cortical reorganization, whether for natural adaptive processes such as skill learning or for motor recovery, such as after a lesion affecting the corticospinal system. Observing that TMS-MEP map calculation and quantification methods vary and that no readily available commercial or free software exists, we sought to establish and make freely available a comprehensive software package that advances existing methods and could be helpful to scientists and clinician-researchers. We therefore developed NeuroMeasure, an open-source interactive software application for the analysis of TMS motor cortex mapping data collected from Nexstim® and BrainSight®, two commonly used neuronavigation platforms. NeuroMeasure features four key innovations designed to improve motor mapping analysis: de-dimensionalization of the mapping data, fitting a predictive model, reporting measurements to characterize the motor map, and comparing those measurements between datasets. This software provides a powerful and easy-to-use workflow for characterizing and comparing motor maps generated with neuronavigated TMS. The software can be downloaded from our GitHub page: https://github.com/EdwardsLabNeuroSci/NeuroMeasure. Aim: This paper aims to describe a software platform for quantifying and comparing maps of the human primary motor cortex, using neuronavigated transcranial magnetic stimulation, for the purpose of studying brain plasticity in health and disease.
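As a rough illustration of the kind of summary measurement such packages report, the amplitude-weighted center of gravity of a motor map can be computed from stimulation coordinates and MEP amplitudes. This is a generic sketch of one common map measure, not NeuroMeasure's actual implementation; the coordinates and amplitudes are invented.

```python
import numpy as np

def map_center_of_gravity(coords, amplitudes):
    """Amplitude-weighted centroid of stimulation sites, a common
    summary measure of a TMS motor map."""
    coords = np.asarray(coords, dtype=float)      # (n_sites, 2) scalp positions
    w = np.asarray(amplitudes, dtype=float)       # (n_sites,) MEP amplitudes
    return (coords * w[:, None]).sum(axis=0) / w.sum()

# Three hypothetical stimulation sites; the high-amplitude site
# pulls the centroid toward it.
coords = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
amps = [1.0, 1.0, 2.0]
cog = map_center_of_gravity(coords, amps)
```

Tracking how such a centroid shifts between sessions is one simple way to quantify map reorganization over time.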

    Unsupervised Detection of Cell-Assembly Sequences by Similarity-Based Clustering

    Neurons that fire in a fixed temporal pattern (i.e., "cell assemblies") are hypothesized to be a fundamental unit of neural information processing. Several methods are available for detecting cell assemblies without a time structure. However, systematic detection of cell assemblies with time structure has been challenging, especially in large datasets, due to the lack of efficient methods for handling the time structure. Here, we show a method to detect a variety of cell-assembly activity patterns recurring in noisy neural population activities at multiple timescales. The key innovation is the use of a computer-science method for comparing strings ("edit similarity") to group spikes into assemblies. We validated the method using artificial data and experimental data previously recorded from the hippocampus of male Long-Evans rats and the prefrontal cortex of male Brown Norway/Fisher hybrid rats. From the hippocampus, we could simultaneously extract place-cell sequences occurring on different timescales during navigation and awake replay. From the prefrontal cortex, we could discover multiple spike sequences of neurons encoding different segments of a goal-directed task. Unlike conventional event-driven statistical approaches, our method detects cell assemblies without creating event-locked averages. Thus, the method offers a novel analytical tool for deciphering the neural code during arbitrary behavioral and mental processes.
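The string-comparison core of the edit-similarity idea can be illustrated with the classic Levenshtein distance, normalized into a similarity score in [0, 1]. The published method extends this to handle spike timing and noise, so the sketch below shows only the underlying principle, with spike sequences represented as strings of neuron labels.

```python
def edit_similarity(a, b):
    """Normalized edit similarity between two symbol sequences:
    1 - (Levenshtein distance / length of the longer sequence)."""
    m, n = len(a), len(b)
    if max(m, n) == 0:
        return 1.0
    dp = list(range(n + 1))  # one-row dynamic-programming table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,                        # deletion
                dp[j - 1] + 1,                    # insertion
                prev + (a[i - 1] != b[j - 1]),    # substitution / match
            )
    return 1.0 - dp[n] / max(m, n)

# Two hypothetical firing sequences differing in one neuron.
sim = edit_similarity("ABCD", "ABED")
```

Because insertions and deletions carry a fixed cost, the measure tolerates dropped or extra spikes, which is what makes it attractive for grouping noisy spike sequences into assemblies.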

    Gotta trace ‘em all: A mini-review on tools and procedures for segmenting single neurons toward deciphering the structural connectome

    Decoding the morphology and physical connections of all the neurons populating a brain is necessary for predicting and studying the relationships between its form and function, as well as for documenting structural abnormalities in neuropathies. Digitizing a complete and high-fidelity map of the mammalian brain at the micro-scale will allow neuroscientists to understand disease, consciousness, and ultimately what it is that makes us human. The critical obstacle to reaching this goal is the lack of robust and accurate tools able to deal with 3D datasets representing densely packed cells in their native arrangement within the brain. This obliges neuroscientists to identify manually the neurons populating an acquired digital image stack, a notably time-consuming procedure prone to human bias. Here we review the automatic and semi-automatic algorithms and software for neuron segmentation available in the literature, as well as the metrics purposely designed for their validation, highlighting their strengths and limitations. In this direction, we also briefly introduce recent advances in tissue clarification that enable significant improvements in both optical access to neural tissue and image-stack quality, and which could enable more efficient segmentation approaches. Finally, we discuss new methods and tools for processing tissues and acquiring images at sub-cellular scales, which will require new robust algorithms for identifying neurons and their sub-structures (e.g., spines, thin neurites). This will lead to a more detailed structural map of the brain, taking twenty-first-century cellular neuroscience to the next level: the Structural Connectome.

    Analytic Performance Modeling and Analysis of Detailed Neuron Simulations

    Big science initiatives are trying to reconstruct and model the brain by attempting to simulate brain tissue at larger scales and with more biological detail than previously thought possible. The exponential growth of parallel computer performance has supported these developments, and maintainers of neuroscientific simulation code have at the same time striven to exploit new hardware features optimally and efficiently. Current state-of-the-art software for the simulation of biological networks has so far been developed using performance engineering practices, but a thorough analysis and modeling of the computational and performance characteristics, especially for morphologically detailed neuron simulations, is lacking. Other computational sciences have successfully used analytic performance engineering and modeling methods to gain insight into the computational properties of simulation kernels, aid developers in performance optimization, and eventually drive co-design efforts, but to our knowledge a model-based performance analysis of neuron simulations has not yet been conducted. We present a detailed study of the shared-memory performance of morphologically detailed neuron simulations based on the Execution-Cache-Memory (ECM) performance model. We demonstrate that this model can deliver accurate predictions of the runtime of almost all the kernels that constitute the neuron models under investigation. The insight gained is used to identify the main mechanisms governing performance bottlenecks in the simulation. The implications of this analysis for the optimization of neural simulation software and eventually the co-design of future hardware architectures are discussed. In this sense, our work represents a valuable conceptual and quantitative contribution to understanding the performance properties of biological network simulations.
    Comment: 18 pages, 6 figures, 15 tables
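In broad strokes, a single-core ECM prediction combines in-core execution time with the data-transfer times between adjacent memory hierarchy levels; under the commonly used non-overlapping machine model, the runtime per unit of work is max(T_OL, T_nOL + sum of transfer times). The sketch below uses hypothetical cycle counts and is a simplified illustration of the model's structure, not the paper's actual analysis.

```python
def ecm_runtime(t_ol, t_nol, transfers):
    """Single-core ECM runtime estimate (non-overlapping machine model):
    data transfers between cache levels serialize with the
    non-overlapping in-core time t_nol, and that sum overlaps
    with the overlapping in-core time t_ol."""
    return max(t_ol, t_nol + sum(transfers))

# Hypothetical kernel, all times in cycles per cache line of work:
# 8 cy of overlapping in-core work, 4 cy of load/store instructions,
# and 3 + 5 + 10 cy for L1<-L2, L2<-L3, and L3<-memory transfers.
cycles = ecm_runtime(8.0, 4.0, [3.0, 5.0, 10.0])
```

Comparing such a prediction against measured cycles per iteration is how the model identifies whether a kernel is bound by in-core execution or by a specific level of the memory hierarchy.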