41 research outputs found
Spatiotemporal information transfer pattern differences in motor selection
Analysis of information transfer between variables in brain images is currently a popular topic, e.g. [1]. Such work typically focuses on average information transfer (i.e. transfer entropy [2]), yet the dynamics of transfer from a source to a destination can also be quantified at individual time points using the local transfer entropy (TE) [3]. This local perspective is known to reveal dynamical structure that the average cannot capture. We present a method to quantify local TE values in time between source and destination regions of variables in brain-imaging data, combining: a. computation of inter-regional transfer between two regions of variables (e.g. voxels) [1], with b. the local perspective on the dynamics of such transfer in time [3]. Transfer is computed over samples from all variables; there is no training on, or subset selection of, the variables to use. We apply this method to a set of fMRI measurements where we could expect to see differences in local information transfer between two conditions at specific time steps. The fMRI data set analyzed (from [4]) contains brain activity recorded from 7 localized regions while 12 subjects (who gave informed written consent) were asked to freely decide whether to pus
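The local quantity underlying this method can be sketched in the simplest discrete setting. Below is a minimal plug-in estimator of local TE (binary series, history length 1 for source and destination), illustrating the quantity from [3] rather than the authors' inter-regional fMRI estimator; the series and coupling are invented for the example:

```python
import numpy as np
from collections import Counter

def local_transfer_entropy(x, y):
    """Plug-in local transfer entropy t(x -> y, n) in bits,
    with history length 1 for both source and destination."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))  # (y_next, y_past, x_past)
    c_full = Counter(triples)                   # counts of (y_next, y_past, x_past)
    c_src = Counter(zip(y[:-1], x[:-1]))        # counts of (y_past, x_past)
    c_dst = Counter(zip(y[1:], y[:-1]))         # counts of (y_next, y_past)
    c_past = Counter(y[:-1])                    # counts of y_past
    out = []
    for yn, yp, xp in triples:
        p_with_source = c_full[(yn, yp, xp)] / c_src[(yp, xp)]
        p_without = c_dst[(yn, yp)] / c_past[yp]
        out.append(np.log2(p_with_source / p_without))
    return np.array(out)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]  # destination copies the source with lag 1
lte = local_transfer_entropy(x, y)
print(float(lte.mean()))  # averages to ~1 bit for this perfect lag-1 coupling
```

Averaging the local values recovers the usual (average) transfer entropy; the local values themselves give the time-resolved profile the abstract refers to.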
RoboCup 2D Soccer Simulation League: Evaluation Challenges
We summarise the results of the RoboCup 2D Soccer Simulation League in 2016
(Leipzig), including the main competition and the evaluation round. The
evaluation round held in Leipzig confirmed the strength of RoboCup-2015
champion (WrightEagle, i.e. WE2015) in the League, with only the eventual
finalists of the 2016 competition capable of defeating WE2015. An extended,
post-Leipzig,
round-robin tournament which included the top 8 teams of 2016, as well as
WE2015, with over 1000 games played for each pair, placed WE2015 third behind
the champion team (Gliders2016) and the runner-up (HELIOS2016). This
establishes WE2015 as a stable benchmark for the 2D Simulation League. We then
contrast two ranking methods and suggest two options for future evaluation
challenges. The first one, "The Champions Simulation League", is proposed to
include 6 previous champions, directly competing against each other in a
round-robin tournament, with the view to systematically trace the advancements
in the League. The second proposal, "The Global Challenge", aims to
increase the realism of the environmental conditions during the simulated
games, by simulating specific features of different participating countries.
Comment: 12 pages, RoboCup-2017, Nagoya, Japan, July 201
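For illustration, a points-based round-robin ranking of the kind contrasted here can be sketched as follows. The team names are taken from the abstract, but the goal tallies are invented for the example:

```python
# Toy round-robin table: results[(a, b)] = (goals_a, goals_b).
# Team names from the abstract; the scores are hypothetical.
teams = ["Gliders2016", "HELIOS2016", "WE2015"]
results = {
    ("Gliders2016", "HELIOS2016"): (2, 1),
    ("Gliders2016", "WE2015"): (1, 0),
    ("HELIOS2016", "WE2015"): (2, 0),
}

points = {t: 0 for t in teams}     # standard 3/1/0 points scheme
goal_diff = {t: 0 for t in teams}  # tie-breaker: aggregate goal difference
for (a, b), (ga, gb) in results.items():
    goal_diff[a] += ga - gb
    goal_diff[b] += gb - ga
    if ga > gb:
        points[a] += 3
    elif gb > ga:
        points[b] += 3
    else:
        points[a] += 1
        points[b] += 1

ranking = sorted(teams, key=lambda t: (points[t], goal_diff[t]), reverse=True)
print(ranking)  # ['Gliders2016', 'HELIOS2016', 'WE2015']
```

With many games per pair (over 1000, as in the extended tournament), points-based and goal-difference-based orderings can diverge, which is exactly why the abstract contrasts two ranking methods.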
The diminishing role of hubs in dynamical processes on complex networks
It is notoriously difficult to predict the behaviour of a complex
self-organizing system, where the interactions among dynamical units form a
heterogeneous topology. Even if the dynamics of each microscopic unit is known,
a real understanding of their contributions to the macroscopic system behaviour
is still lacking. Here we develop information-theoretical methods to
distinguish the contribution of each individual unit to the collective
out-of-equilibrium dynamics. We show that for a system of units connected by a
network of interaction potentials with an arbitrary degree distribution, highly
connected units have less impact on the system dynamics as compared to
intermediately connected units. In an equilibrium setting, the hubs are often
found to dictate the long-term behaviour. However, we find both analytically
and experimentally that the instantaneous states of these units have a
short-lasting effect on the state trajectory of the entire system. We present
qualitative evidence of this phenomenon from empirical findings about a social
network of product recommendations, a protein-protein interaction network, and
a neural network, suggesting that it might indeed be a widespread property in
nature.
Comment: Published version
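The idea of probing how long a unit's instantaneous state affects the system trajectory can be illustrated with a toy perturbation experiment. This uses leaky averaging dynamics on a small ring, an invented stand-in for the potential-based dynamics studied in the paper, just to show the measurement:

```python
import numpy as np

# Toy network: ring of 6 units with leaky averaging dynamics (not the
# interaction-potential model of the paper; illustration only).
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

def step(x):
    return 0.5 * x + 0.3 * (W @ x)  # the leak ensures perturbations are forgotten

rng = np.random.default_rng(1)
x = rng.normal(size=n)
x_pert = x.copy()
x_pert[0] += 1.0  # perturb unit 0's instantaneous state

div = []  # divergence between perturbed and unperturbed trajectories
for _ in range(30):
    x, x_pert = step(x), step(x_pert)
    div.append(np.linalg.norm(x - x_pert))
print(div[-1] < 1e-2)  # True: the effect of the instantaneous state is short-lived
```

The paper's contribution is to quantify this decay information-theoretically per unit and relate it to degree; the sketch only shows the trajectory-divergence probe.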
Gliders2d: Source Code Base for RoboCup 2D Soccer Simulation League
We describe Gliders2d, a base code release for Gliders, a soccer simulation
team which won the RoboCup Soccer 2D Simulation League in 2016. We trace six
evolutionary steps, each of which is encapsulated in a sequential change of the
released code, from v1.1 to v1.6, starting from agent2d-3.1.1 (set as the
baseline v1.0). These changes improve performance by adjusting the agents'
stamina management, their pressing behaviour and the action-selection
mechanism, as well as their positional choice in both attack and defense, and
enabling riskier passes. The resultant behaviour, which is sufficiently generic
to be applicable to physical robot teams, increases the players' mobility and
achieves better control of the field. The last presented version,
Gliders2d-v1.6, approaches the strength of Gliders2013, and outperforms
agent2d-3.1.1 by four goals per game on average. The sequential improvements
demonstrate how the methodology of human-based evolutionary computation can
markedly boost the overall performance with even a small number of controlled
steps.
Comment: 12 pages, 1 figure, Gliders2d code release
Deep Randomized Neural Networks
Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical examples of such systems consist of multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights inherently characterizes the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is undoubtedly a striking advantage with respect to fully trained architectures. Moreover, despite the simplifications involved, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and theoretically, allowing one to analyze intrinsic properties of neural architectures (e.g. before training of the hidden layers' connections). In recent years, the study of Randomized Neural Networks has been extended towards deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains. This chapter surveys the major aspects of the design and analysis of Randomized Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to the case of recurrent systems (i.e., Reservoir Computing networks). For both, we focus specifically on recent results in the domain of deep randomized systems, and (for recurrent models) their application to structured domains.
When Two Become One: The Limits of Causality Analysis of Brain Dynamics
Biological systems often consist of multiple interacting subsystems, the brain being a prominent example. To understand the functions of such systems it is important to analyze whether and how the subsystems interact, and to describe the effect of these interactions. In this work we investigate the extent to which the cause-and-effect framework is applicable to such interacting subsystems. We base our work on a standard notion of causal effects and define a new concept called the natural causal effect. This new concept takes into account that when studying interactions in biological systems, one is often not interested in the effect of perturbations that alter the dynamics. The interest is instead in how the causal connections participate in the generation of the observed natural dynamics. We identify the constraints on the structure of the causal connections that determine the existence of natural causal effects. In particular, we show that the influence of the causal connections on the natural dynamics of the system often cannot be analyzed in terms of the causal effect of one subsystem on another. Only when the causing subsystem is autonomous with respect to the rest can this interpretation be made. We note that subsystems in the brain are often bidirectionally connected, which means that interactions should rarely be quantified in terms of cause-and-effect. We furthermore introduce a framework for how natural causal effects can be characterized when they exist. Our work also has important consequences for the interpretation of other approaches commonly applied to study causality in the brain. Specifically, we discuss how the notion of natural causal effects can be combined with Granger causality and Dynamic Causal Modeling (DCM). Our results are generic and the concept of natural causal effects is relevant in all areas where the effects of interactions between subsystems are of interest.
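The Granger causality mentioned here can be sketched in its simplest bivariate form: compare the residual variance of an autoregressive model of the destination with and without the source's history. The coupled system below is a toy unidirectional example (an assumption for the sketch, not the paper's framework of natural causal effects):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()  # x drives y, not vice versa

def granger_f(src, dst):
    """Log ratio of residual variances: restricted (dst history only)
    vs full (dst + src history) AR(1) models."""
    Z_full = np.column_stack([dst[:-1], src[:-1]])
    Z_rest = dst[:-1, None]
    tgt = dst[1:]
    res = lambda Z: tgt - Z @ np.linalg.lstsq(Z, tgt, rcond=None)[0]
    return float(np.log(res(Z_rest).var() / res(Z_full).var()))

print(granger_f(x, y) > granger_f(y, x))  # True: x Granger-causes y, not vice versa
```

Note that in this toy example the source is autonomous with respect to the destination; the paper's point is that with the bidirectional coupling typical of brain subsystems, such a cause-and-effect reading is generally not licensed.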
25th Annual Computational Neuroscience Meeting: CNS-2016
Abstracts of the 25th Annual Computational Neuroscience
Meeting: CNS-2016
Seogwipo City, Jeju-do, South Korea. 2–7 July 201