
    The MNI data-sharing and processing ecosystem

    Neuroimaging has been facing a data deluge characterized by the exponential growth of both raw and processed data. As a result, mining the massive quantities of digital data collected in neuroimaging studies offers unprecedented opportunities and has become paramount for today's research. As the neuroimaging community enters the world of “Big Data”, there has been a concerted push for enhanced sharing initiatives, whether within a multisite study, across studies, or federated and shared publicly. This article focuses on the database and processing ecosystem developed at the Montreal Neurological Institute (MNI) to support multicenter data acquisition both nationally and internationally, create database repositories, facilitate data-sharing initiatives, and leverage existing software toolkits for large-scale data processing.
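
    To make the multicenter-aggregation idea concrete, below is a minimal Python sketch of harmonizing subject records from two acquisition sites into a single repository schema. The site names, column mappings, and fields are hypothetical illustrations; they do not reflect the MNI's actual database design or APIs.

```python
# A minimal, illustrative sketch of multi-site record harmonization, assuming
# each site exports tabular subject data under its own column names. Nothing
# here reflects the MNI's actual repository schema or services.
import pandas as pd

# Hypothetical per-site column mappings onto a shared repository schema.
SITE_SCHEMAS = {
    "site_a": {"subj": "subject_id", "age_yrs": "age", "dx": "diagnosis"},
    "site_b": {"participant": "subject_id", "age": "age", "diag": "diagnosis"},
}

def harmonize(site: str, df: pd.DataFrame) -> pd.DataFrame:
    """Rename site-specific columns to the shared schema and tag provenance."""
    out = df.rename(columns=SITE_SCHEMAS[site])[["subject_id", "age", "diagnosis"]].copy()
    out["site"] = site  # keep acquisition-site provenance for later modeling
    return out

site_a = pd.DataFrame({"subj": ["A01"], "age_yrs": [34], "dx": ["control"]})
site_b = pd.DataFrame({"participant": ["B07"], "age": [41], "diag": ["patient"]})

repository = pd.concat(
    [harmonize("site_a", site_a), harmonize("site_b", site_b)], ignore_index=True
)
print(repository)
```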

    Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data

    Managing, processing, and understanding big healthcare data is challenging, costly, and demanding. Without a robust fundamental theory for representation, analysis, and inference, a roadmap for uniform handling and analysis of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods, and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic, and healthcare data, we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale, and optimize the management and processing of large, complex, and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure, and education will be critical to realize the huge potential of big data, to reap the expected information benefits, and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable, and efficient data-driven discovery and analytics. Big data will affect every sector of the economy, and its hallmark will be ‘team science’.
    http://deepblue.lib.umich.edu/bitstream/2027.42/134522/1/13742_2016_Article_117.pd
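
    As a concrete illustration of blending heterogeneous sources, the following Python sketch merges hypothetical imaging-derived features with clinical covariates on a shared subject ID and evaluates an automated classifier. All names and data are synthetic stand-ins, and scikit-learn is used purely for illustration, not because the article prescribes it.

```python
# A minimal sketch of blending heterogeneous healthcare data for automated
# classification, assuming imaging-derived features and clinical covariates
# keyed by a shared subject ID. The datasets and feature names are
# hypothetical; a weak group effect is planted so the classifier has signal.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
y_dx = rng.integers(0, 2, n)  # 0 = control, 1 = case

# Two heterogeneous sources sharing a subject identifier.
imaging = pd.DataFrame({
    "subject_id": range(n),
    "hippocampal_volume": rng.normal(3500, 300, n) - 150 * y_dx,  # planted effect
    "cortical_thickness": rng.normal(2.5, 0.2, n),
})
clinical = pd.DataFrame({
    "subject_id": range(n),
    "age": rng.integers(55, 90, n),
    "diagnosis": y_dx,
})

# Blend the sources, then evaluate a simple classifier with cross-validation.
blended = imaging.merge(clinical, on="subject_id")
X = blended[["hippocampal_volume", "cortical_thickness", "age"]]
y = blended["diagnosis"]
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```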

    Classifying migraine using PET compressive big data analytics of brain’s μ-opioid and D2/D3 dopamine neurotransmission

    Introduction: Migraine is a common and debilitating pain disorder associated with dysfunction of the central nervous system. Advanced magnetic resonance imaging (MRI) studies have reported relevant pathophysiologic states in migraine. However, its molecular mechanistic processes are still poorly understood in vivo. This study examined migraine patients with a novel machine learning (ML) method based on their central μ-opioid and dopamine D2/D3 profiles, the most critical neurotransmitters in the brain for pain perception and its cognitive-motivational interface.

    Methods: We employed compressive Big Data Analytics (CBDA) to identify migraineurs and healthy controls (HC) in a large positron emission tomography (PET) dataset. 198 PET volumes were obtained from 38 migraineurs and 23 HC during rest and thermal pain challenge. 61 subjects were scanned with the selective μ-opioid receptor (μOR) radiotracer [11C]Carfentanil, and 22 with the selective dopamine D2/D3 receptor (DOR) radiotracer [11C]Raclopride. PET scans were recast into a 1D array of 510,340 voxels with spatial and intensity filtering of non-displaceable binding potential (BPND), representing the receptor availability level. We then performed data reduction and CBDA to power rank the predictive brain voxels.

    Results: CBDA classified migraineurs from HC with accuracy, sensitivity, and specificity above 90% for whole-brain and region-of-interest (ROI) analyses. The most predictive ROIs for μOR were the anterior insula, the thalamus (pulvinar, medial-dorsal, and ventral lateral/posterior nuclei), and the putamen. The anterior putamen was also the most predictive ROI for migraine regarding DOR D2/D3 BPND levels.

    Discussion: CBDA of endogenous μ-opioid and D2/D3 dopamine dysfunction in the brain can accurately identify a migraine patient based on receptor availability across key sensory, motor, and motivational processing regions. Our ML-based findings in the migraineur's brain neurotransmission partly explain the severe impact of migraine and its associated neuropsychiatric comorbidities.
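
    The workflow described in the Methods (flattening volumes to voxel vectors, spatial and intensity filtering, data reduction, and power-ranking predictive voxels) can be sketched as follows. This is a toy reconstruction under stated assumptions: the data are synthetic at much smaller dimensions, and random-forest feature importance over repeated subsamples stands in for the authors' CBDA ranking. It is not their implementation.

```python
# A toy CBDA-style workflow: flatten 3D PET volumes to 1D voxel vectors,
# mask uninformative voxels, repeatedly fit models on random case/feature
# subsamples, and aggregate voxel importance into a "power rank". Dimensions
# are tiny stand-ins for the paper's 198 scans of 510,340 voxels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_scans, shape = 120, (10, 10, 10)
X = rng.normal(1.0, 0.3, (n_scans, np.prod(shape)))  # flattened BPND-like values
y = rng.integers(0, 2, n_scans)                      # 0 = control, 1 = migraineur
X[y == 1, :50] += 0.4                                # planted signal in 50 voxels

# Spatial/intensity filtering stand-in: keep voxels above a BPND threshold.
mask = X.mean(axis=0) > 0.9
X = X[:, mask]

# Compressive step: many small random subsamples of cases and voxels.
importance = np.zeros(X.shape[1])
for _ in range(200):
    rows = rng.choice(n_scans, size=40, replace=False)
    cols = rng.choice(X.shape[1], size=100, replace=False)
    clf = RandomForestClassifier(n_estimators=20, random_state=0)
    clf.fit(X[np.ix_(rows, cols)], y[rows])
    importance[cols] += clf.feature_importances_     # accumulate the power rank

# Keep the top-ranked voxels, then classify with a held-out split.
top = np.argsort(importance)[::-1][:50]
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:80, top], y[:80])
print("held-out accuracy:", accuracy_score(y[80:], clf.predict(X[80:, top])))
```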

    High-throughput neuroimaging-genetics computational infrastructure

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and metadata. Data mining refers to the process of automatically extracting data features, characteristics, and associations that are not readily visible through human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation, and reproducibility of findings. In this manuscript, we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners, along with an imaging-genetics database that stores the complete provenance of the raw and derived data and metadata. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web services. These pipeline workflows are represented as portable XML objects that transfer execution instructions and user specifications from the client machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's disease data, we provide several examples of translational applications built on this infrastructure.
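
    To illustrate the idea of a workflow as a portable XML object, here is a short Python sketch that builds and serializes a two-step protocol description. The element and attribute names are hypothetical and do not follow the actual LONI Pipeline schema; the point is only that execution instructions and user specifications can travel from client to server as XML.

```python
# A minimal sketch of serializing a workflow as a portable XML object, in the
# spirit of the Pipeline environment described above. Element and attribute
# names are hypothetical, not the actual LONI Pipeline format.
import xml.etree.ElementTree as ET

def build_workflow() -> ET.Element:
    """Describe a two-step processing protocol as an XML tree."""
    wf = ET.Element("workflow", name="demo-protocol", server="pipeline.example.org")
    step1 = ET.SubElement(wf, "module", id="skullstrip", executable="bet")
    ET.SubElement(step1, "input", name="t1", value="subject01_T1.nii.gz")
    step2 = ET.SubElement(wf, "module", id="segment", executable="fast")
    ET.SubElement(step2, "input", name="brain", source="skullstrip")  # dataflow edge
    return wf

# Serialize the execution instructions for transfer to a remote pipeline server.
xml_bytes = ET.tostring(build_workflow(), encoding="utf-8")
print(xml_bytes.decode())
```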