
    Towards a Multi-Subject Analysis of Neural Connectivity

    Directed acyclic graphs (DAGs) and associated probability models are widely used to model neural connectivity and communication channels. In many experiments, data are collected from multiple subjects whose connectivities may differ but are likely to share many features. In such circumstances it is natural to leverage similarity between subjects to improve statistical efficiency. The first exact algorithm for estimation of multiple related DAGs was recently proposed by Oates et al. (2014); in this letter we present examples and discuss implications of the methodology as applied to the analysis of fMRI data from a multi-subject experiment. Elicitation of tuning parameters requires care, and we illustrate how this may proceed retrospectively based on technical replicate data. In addition to joint learning of subject-specific connectivity, we allow for heterogeneous collections of subjects and simultaneously estimate relationships between the subjects themselves. This letter aims to highlight the potential for exact estimation in the multi-subject setting. Comment: to appear in Neural Computation 27:1-2.
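The joint-estimation idea can be sketched in miniature. The toy below exhaustively scores every pair of 3-node DAGs, combining a per-subject Gaussian data fit with a penalty on structural differences between the two subjects. The scoring rule, the brute-force search, and all names are illustrative assumptions, not the exact algorithm of Oates et al.

```python
import itertools
import numpy as np

def all_dags(p=3):
    """Enumerate adjacency matrices of all DAGs on p nodes (feasible for tiny p)."""
    edges = [(i, j) for i in range(p) for j in range(p) if i != j]
    dags = []
    for mask in itertools.product([0, 1], repeat=len(edges)):
        A = np.zeros((p, p), dtype=int)
        for bit, (i, j) in zip(mask, edges):
            A[i, j] = bit
        # A is acyclic iff every power of A has zero trace (no cycles of any length)
        M = A.astype(float)
        if all(np.trace(np.linalg.matrix_power(M, k)) == 0 for k in range(1, p + 1)):
            dags.append(A)
    return dags

def node_score(X, j, parents):
    """Gaussian log-likelihood of column j regressed on its parent columns."""
    n = X.shape[0]
    if parents:
        Pmat = X[:, parents]
        beta, *_ = np.linalg.lstsq(Pmat, X[:, j], rcond=None)
        resid = X[:, j] - Pmat @ beta
    else:
        resid = X[:, j] - X[:, j].mean()
    return -0.5 * n * np.log(resid.var() + 1e-9)

def dag_score(X, A):
    p = A.shape[1]
    return sum(node_score(X, j, list(np.flatnonzero(A[:, j]))) for j in range(p))

def joint_estimate(X1, X2, lam=5.0):
    """Pick the DAG pair maximizing fit minus a between-subject difference penalty."""
    dags = all_dags(X1.shape[1])
    best, best_pair = -np.inf, None
    for A1 in dags:
        s1 = dag_score(X1, A1)
        for A2 in dags:
            s = s1 + dag_score(X2, A2) - lam * np.abs(A1 - A2).sum()
            if s > best:
                best, best_pair = s, (A1, A2)
    return best_pair
```

With a large penalty the two subjects are forced to share a single structure; with a small penalty each subject's graph is fit independently, which is the knob the tuning-parameter elicitation discussed above would control.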

    Size Control in the Nanoprecipitation Process of Stable Iodine (127I) Using Microchannel Reactor—Optimization by Artificial Neural Networks

    In this study, a nanosuspension of stable iodine (127I) was prepared by a nanoprecipitation process in microfluidic devices, and the particle size was then optimized using artificial neural network (ANN) modeling. The size of the prepared particles was evaluated by dynamic light scattering. The response surfaces obtained from the ANN model illustrated the determining effect of the input variables (solvent and antisolvent flow rates, surfactant concentration, and solvent temperature) on the output variable (nanoparticle size). Comparing the 3D graphs revealed that the solvent and antisolvent flow rates had an inverse relation with nanoparticle size. Those graphs also indicated that at low values the solvent temperature had an inverse relation with the size of stable iodine (127I) nanoparticles, while at high values a direct relation was observed. In addition, it was found that the effect of surfactant concentration on particle size in the nanosuspension of stable iodine (127I) depended on the solvent temperature. © 2015, American Association of Pharmaceutical Scientists.
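As a hedged sketch of the modeling step, the snippet below fits a one-hidden-layer network in plain NumPy mapping the four process inputs (solvent flow rate, antisolvent flow rate, surfactant concentration, solvent temperature) to particle size. The training data and the assumed input-size relationship are synthetic stand-ins; the study itself fit ANN models to experimental dynamic-light-scattering measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "experiments": size decreases with both flow rates, with a
# temperature effect that changes sign, mimicking the trends the abstract
# describes qualitatively. Units and coefficients are invented.
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = (300.0 - 80.0 * X[:, 0] - 60.0 * X[:, 1] - 30.0 * X[:, 2]
     + 50.0 * (X[:, 3] - 0.5) ** 2 + rng.normal(0.0, 5.0, 200))

def train_mlp(X, y, hidden=16, lr=0.05, epochs=3000):
    """One-hidden-layer regression network trained by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden))
    b1 = rng.normal(0.0, 0.5, hidden)
    W2 = np.zeros(hidden)            # zero readout: training starts at the mean predictor
    b2 = 0.0
    ym, ys = y.mean(), y.std()
    t = (y - ym) / ys                # standardize the target for stable training
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)     # forward pass
        pred = H @ W2 + b2
        err = pred - t
        gW2 = H.T @ err / n          # backward pass (mean squared error loss)
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1.0 - H ** 2)
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    def predict(Xq):
        return (np.tanh(Xq @ W1 + b1) @ W2 + b2) * ys + ym
    return predict

predict = train_mlp(X, y)
mse = np.mean((predict(X) - y) ** 2)   # should beat the mean-only predictor
```

Response surfaces like those discussed in the abstract could then be drawn by sweeping two inputs through `predict` while holding the other two fixed.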

    Neural Substrates of Chronic Pain in the Thalamocortical Circuit

    Chronic pain (CP), a pathological condition with a large repertory of signs and symptoms, has no recognized common functional neural hallmark shared across its diverse expressions. The aim of the present research was to identify potential dynamic markers shared by CP models, using simultaneous electrophysiological extracellular recordings from the rat ventrobasal thalamus and the primary somatosensory cortex. We were able to extract a neural signature attributable solely to CP, independent of the originating conditions. This study showed disrupted functional connectivity and increased redundancy in firing patterns in CP models versus controls, and interpreted these signs as a neural signature of CP. From a clinical perspective, we envisage CP as a disconnection syndrome and hypothesize potential novel therapeutic approaches.
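One ingredient of such an analysis, estimating functional connectivity between simultaneously recorded channels, can be sketched with pairwise correlation. The simulated "thalamic" and "cortical" signals below are illustrative only; the study used extracellular recordings and richer measures (e.g. redundancy in firing patterns) than plain correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

def connectivity(signals):
    """signals: (channels, samples) array -> (channels, channels) |correlation| matrix."""
    return np.abs(np.corrcoef(signals))

# Simulate a "thalamic" driver, a "cortical" channel coupled to it,
# and an independent (decoupled) channel.
n = 1000
thal = rng.normal(size=n)
cortex = 0.8 * thal + 0.2 * rng.normal(size=n)   # strongly coupled channel
noise = rng.normal(size=n)                        # decoupled channel
C = connectivity(np.vstack([thal, cortex, noise]))
```

In this sketch `C[0, 1]` comes out high and `C[0, 2]` near zero; disrupted connectivity in a CP model would show up as a systematic change in such a matrix between conditions.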

    Shift Aggregate Extract Networks

    We introduce an architecture based on deep hierarchical decompositions to learn effective representations of large graphs. Our framework extends classic R-decompositions used in kernel methods, enabling nested "part-of-part" relations. Unlike recursive neural networks, which unroll a template on input graphs directly, we unroll a neural network template over the decomposition hierarchy, allowing us to deal with the high degree variability that typically characterizes social network graphs. Deep hierarchical decompositions are also amenable to domain compression, a technique that reduces both space and time complexity by exploiting symmetries. We show empirically that our approach is competitive with current state-of-the-art graph classification methods, particularly when dealing with social network datasets.
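A minimal sketch of unrolling a shared template over a decomposition hierarchy, assuming a toy two-level nodes → parts → graph decomposition (not the paper's actual R-decomposition or training procedure):

```python
import numpy as np

rng = np.random.default_rng(7)

def template(x, W, b):
    """Shared one-layer neural template applied at every level of the hierarchy."""
    return np.tanh(x @ W + b)

def sae_representation(node_feats, parts, W1, b1, W2, b2):
    """node_feats: (nodes, d) array; parts: list of node-index lists (may overlap)."""
    h_nodes = template(node_feats, W1, b1)                        # extract at node level
    h_parts = np.stack([h_nodes[p].sum(axis=0) for p in parts])   # aggregate nodes into parts
    h_parts = template(h_parts, W2, b2)                           # extract at part level
    return h_parts.sum(axis=0)                                    # aggregate parts into graph

d, h = 4, 8
W1, b1 = rng.normal(0.0, 0.5, (d, h)), np.zeros(h)
W2, b2 = rng.normal(0.0, 0.5, (h, h)), np.zeros(h)

feats = rng.normal(size=(5, d))        # 5 nodes with 4-dimensional features
parts = [[0, 1, 2], [2, 3, 4]]         # overlapping "part-of" decomposition
g = sae_representation(feats, parts, W1, b1, W2, b2)
```

Because each level aggregates by summation, the representation is invariant to the ordering of nodes within a part and of parts within the graph, which is what lets one shared template handle graphs of varying size and degree.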

    Learning to Discover Sparse Graphical Models

    We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. Popular methods rely on estimating a penalized maximum likelihood of the precision matrix. However, in these approaches structure recovery is an indirect consequence of the data-fit term, the penalty can be difficult to adapt for domain-specific knowledge, and the inference is computationally demanding. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose to leverage this latter source of information as training data to learn a function, parametrized by a neural network, that maps empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. Applying this framework, we find that our learnable graph-discovery method trained on synthetic data generalizes well, identifying relevant edges in both synthetic and real data that were completely unknown at training time. On genetics, brain imaging, and simulation data, we obtain performance generally superior to analytical methods.
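The train-on-synthetic-graphs idea can be sketched at a deliberately tiny scale. Below, the "network" is a single logistic unit reading the absolute empirical correlation of each node pair, trained on (covariance, graph) pairs generated from random sparse Gaussian models. The paper's architecture is far deeper; the graph sizes, edge weights, and learning rates here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
P, N = 5, 500                      # nodes per graph, samples per graph

def sample_graph_and_corr():
    """Random sparse graph -> Gaussian samples from its model -> empirical correlations."""
    A = np.triu(rng.random((P, P)) < 0.3, k=1)
    A = A | A.T
    Theta = np.eye(P) - 0.2 * A    # diagonally dominant, hence valid, precision matrix
    X = rng.multivariate_normal(np.zeros(P), np.linalg.inv(Theta), size=N)
    R = np.corrcoef(X, rowvar=False)
    iu = np.triu_indices(P, k=1)
    return np.abs(R[iu]), A[iu].astype(float)   # per-pair features, edge labels

def train(n_graphs=200, lr=2.0, epochs=20000):
    """Logistic regression by gradient descent: |correlation| -> edge probability."""
    feats, labels = zip(*(sample_graph_and_corr() for _ in range(n_graphs)))
    x, t = np.concatenate(feats), np.concatenate(labels)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - t) * x)
        b -= lr * np.mean(p - t)
    return w, b

w, b = train()   # predict an edge on unseen data where (w * |R_ij| + b) > 0
```

The key point from the abstract survives even at this scale: the edge detector is learned from simulated graphs with the desired sparsity, then applied to covariance matrices it never saw, rather than being derived from a penalized likelihood.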

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We finish by formulating recommendations for future directions in this area.
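Of the eight approaches, Granger causality is the easiest to illustrate in a few lines: x "Granger-causes" y if x's past improves prediction of y beyond y's own past. The lag-1 toy below runs on simulated time series; a real fMRI analysis would need model-order selection and care with haemodynamic confounds, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def ar_residual_var(target, regressors):
    """Least-squares fit of target on regressors (plus intercept); residual variance."""
    Z = np.column_stack([np.ones(len(target))] + regressors)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return np.var(target - Z @ beta)

def granger_ratio(x, y):
    """Ratio well above 1 suggests x helps predict y beyond y's own past (lag 1)."""
    y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]
    restricted = ar_residual_var(y_t, [y_lag])        # y's past only
    full = ar_residual_var(y_t, [y_lag, x_lag])       # y's past plus x's past
    return restricted / full

# Simulate x driving y with a one-step delay.
n = 2000
x = rng.normal(size=n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

r_xy = granger_ratio(x, y)   # large: x's past explains much of y
r_yx = granger_ratio(y, x)   # near 1: y's past adds nothing for x
```

The asymmetry between `r_xy` and `r_yx` is the directed evidence such methods extract; the review's caveats about whole-brain patterns apply because pairwise tests like this ignore common drivers.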