Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling
Identifying a coupled dynamical system out of many plausible candidates, each
of which could serve as the underlying generator of some observed measurements,
is a profoundly ill-posed problem that commonly arises when modelling
real-world phenomena. In this review, we detail a set of statistical procedures for
inferring the structure of nonlinear coupled dynamical systems (structure
learning), which has proved useful in neuroscience research. A key focus here
is the comparison of competing models of (i.e., hypotheses about) network
architectures and implicit coupling functions in terms of their Bayesian model
evidence. These methods are collectively referred to as dynamic causal
modelling (DCM). We focus on a relatively new approach that is proving
remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid
evaluation and comparison of models that differ in their network architecture.
We illustrate the usefulness of these techniques through modelling
neurovascular coupling (cellular pathways linking neuronal and vascular
systems), whose function is an active focus of research in neurobiology and the
imaging of coupled neuronal systems.
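The abstract above centres on scoring competing network architectures by their Bayesian model evidence. As a rough, generic illustration (not DCM or BMR themselves), the sketch below compares a coupled and an uncoupled two-node linear model using BIC, treating -BIC/2 as a crude log-evidence approximation; the coupling coefficients, noise levels, and simulated series are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-node coupled linear system: x2 is driven by x1 (the "true" model).
T = 400
x1 = np.zeros(T)
x2 = np.zeros(T)
for t in range(1, T):
    x1[t] = 0.8 * x1[t - 1] + rng.normal(scale=0.5)
    x2[t] = 0.6 * x2[t - 1] + 0.4 * x1[t - 1] + rng.normal(scale=0.5)

def bic_score(y, X):
    """Gaussian log-likelihood of an OLS fit with a BIC penalty;
    this quantity approximates the log model evidence."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1  # regression weights plus the noise variance
    return loglik - 0.5 * k * np.log(n)

y = x2[1:]
XA = np.column_stack([x2[:-1], x1[:-1]])  # model A: coupled (x1 drives x2)
XB = x2[:-1, None]                        # model B: uncoupled

log_bf = bic_score(y, XA) - bic_score(y, XB)  # approximate log Bayes factor
print(f"approx. log Bayes factor (coupled vs uncoupled): {log_bf:.1f}")
```

With the coupling switched on in the simulation, the log Bayes factor comes out strongly positive, favouring the coupled architecture; BMR performs analogous comparisons analytically, without refitting each reduced model.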
Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
In the past two decades, functional Magnetic Resonance Imaging has been used
to relate neuronal network activity to cognitive processing and behaviour.
Recently this approach has been augmented by algorithms that allow us to infer
causal links between component populations of neuronal networks. Multiple
inference procedures have been proposed to approach this research question but
so far, each method has limitations when it comes to establishing whole-brain
connectivity patterns. In this work, we discuss eight ways to infer causality
in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality,
Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and
Transfer Entropy. We close by formulating recommendations for future
directions in this area.
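Of the eight approaches listed, Granger causality is perhaps the simplest to sketch. The toy below fits full and restricted first-order autoregressive models by ordinary least squares and uses the log ratio of residual sums of squares as the causality statistic; the series, coupling strength, and lag order are illustrative assumptions, and a real fMRI analysis would add significance testing and account for haemodynamic confounds:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogate for two time series in which x drives y with a one-step lag.
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def granger_stat(source, target):
    """Log ratio of residual sums of squares of the restricted vs the full
    AR(1) model; values well above zero suggest the source Granger-causes
    the target."""
    tgt = target[1:]
    full = np.column_stack([target[:-1], source[:-1]])
    restricted = target[:-1, None]

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, tgt, rcond=None)
        r = tgt - X @ beta
        return r @ r

    return np.log(rss(restricted) / rss(full))

print(f"x -> y: {granger_stat(x, y):.3f}")  # clearly positive
print(f"y -> x: {granger_stat(y, x):.3f}")  # close to zero
```

The asymmetry between the two directions is the point: adding x's past meaningfully improves the prediction of y, but not the other way around.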
Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications
Wireless sensor networks monitor dynamic environments that change rapidly
over time. This dynamic behavior is either caused by external factors or
initiated by the system designers themselves. To adapt to such conditions,
sensor networks often adopt machine learning techniques to eliminate the need
for unnecessary redesign. Machine learning also inspires many practical
solutions that maximize resource utilization and prolong the lifespan of the
network. In this paper, we present an extensive literature review over the
period 2002-2013 of machine learning methods that were used to address common
issues in wireless sensor networks (WSNs). The advantages and disadvantages of
each proposed algorithm are evaluated against the corresponding problem. We
also provide a comparative guide to aid WSN designers in developing suitable
machine learning solutions for their specific application challenges.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
Multiscale Topological Properties Of Functional Brain Networks During Motor Imagery After Stroke
In recent years, network analyses have been used to evaluate brain
reorganization following stroke. However, many studies have often focused on
single topological scales, leading to an incomplete model of how focal brain
lesions affect multiple network properties simultaneously and how changes on
smaller scales influence those on larger scales. In an EEG-based experiment on
the performance of hand motor imagery (MI) in 20 patients with unilateral
stroke, we observed that the anatomic lesion affects the functional brain
network on multiple levels. In the beta (13-30 Hz) frequency band, the MI of
the affected hand (Ahand) elicited a significantly lower small-worldness and
local efficiency (Eloc) versus the unaffected hand (Uhand). Notably, the
abnormal reduction in Eloc significantly depended on the increase in
interhemispheric connectivity, which was in turn determined primarily by the
rise in regional connectivity in the parieto-occipital sites of the affected
hemisphere. Further, in contrast to the Uhand MI, in which significantly high
connectivity was observed for the contralateral sensorimotor regions of the
unaffected hemisphere, the regions that increased in connectivity during the
Ahand MI lay in the frontal and parietal areas of the contralateral, affected
hemisphere. Finally, the overall sensorimotor function of our
patients, as measured by the Fugl-Meyer Assessment (FMA) index, was significantly
predicted by the connectivity of their affected hemisphere. These results
increase our understanding of stroke-induced alterations in functional brain
networks.
Comment: NeuroImage, accepted manuscript (unedited version) available online 19-June-201
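The abstract above leans on graph metrics such as local efficiency (Eloc). A minimal, stdlib-only sketch of that metric may help: each node's score is the efficiency of its neighbourhood subgraph, averaged over nodes. The 5-node adjacency list is invented for the example; a real analysis would operate on thresholded EEG connectivity matrices:

```python
from collections import deque

def shortest_paths(adj, nodes, src):
    """BFS path lengths from src, restricted to the node subset `nodes`."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in nodes and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj, nodes):
    """Mean inverse shortest-path length over ordered node pairs in the subset."""
    nodes = list(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for s in nodes:
        dist = shortest_paths(adj, set(nodes), s)
        total += sum(1.0 / d for v, d in dist.items() if v != s)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """Average, over nodes, of the efficiency of each node's neighbourhood
    subgraph -- a standard proxy for the network's fault tolerance."""
    return sum(global_efficiency(adj, adj[i]) for i in adj) / len(adj)

# Toy 5-node functional network given as an adjacency list.
net = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}
print(f"local efficiency: {local_efficiency(net):.3f}")  # -> 0.600
```

Comparing this quantity between conditions (affected vs unaffected hand) is what underlies the Eloc reduction the study reports.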
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically-inspired
computing. Usually, inspiration is drawn from neural systems. This article
shows how to analyze neural systems using information theory to obtain
constraints that help identify the algorithms run by such systems and the
information they represent. Algorithms and representations identified
information-theoretically may then guide the design of biologically inspired
computing systems (BICS). The material covered includes the necessary
introduction to information theory and the estimation of information theoretic
quantities from neural data. We then show how to analyze the information
encoded in a system about its environment, and also discuss recent
methodological developments on the question of how much information each agent
carries about the environment uniquely, redundantly, or synergistically
together with others. Last, we introduce the framework of local
information dynamics, where information processing is decomposed into component
processes of information storage, transfer, and modification -- locally in
space and time. We close by discussing example applications of these measures
to neural data and other complex systems.
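The information-transfer component mentioned above can be made concrete with a small plug-in estimator of one-step transfer entropy on binary toy processes. The 90% copy probability and the series themselves are invented for the example, and real analyses would need bias correction and longer histories:

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(source, target):
    """Plug-in estimate of one-step transfer entropy (in bits) from source
    to target: I(target_next ; source_past | target_past)."""
    triples = list(zip(target[1:], target[:-1], source[:-1]))
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((y, z) for _, y, z in triples)
    p_xy = Counter((x, y) for x, y, _ in triples)
    p_y = Counter(y for _, y, _ in triples)
    te = 0.0
    for (x, y, z), c in p_xyz.items():
        # ratio of p(next | own past, source past) to p(next | own past)
        te += (c / n) * log2((c * p_y[y]) / (p_xy[(x, y)] * p_yz[(y, z)]))
    return te

random.seed(0)
# Toy binary processes: y copies x with a one-step lag, flipped 10% of the time.
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + [xi if random.random() < 0.9 else 1 - xi for xi in x[:-1]]

print(f"TE x->y: {transfer_entropy(x, y):.2f} bits")  # close to 1 - H(0.1), about 0.53
print(f"TE y->x: {transfer_entropy(y, x):.2f} bits")  # near zero
```

The local information dynamics framework goes further by decomposing such averages into pointwise, per-sample contributions, but the averaged quantity above is the usual starting point.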