Effect of correlations on network controllability
A dynamical system is controllable if by imposing appropriate external
signals on a subset of its nodes, it can be driven from any initial state to
any desired state in finite time. Here we study the impact of various network
characteristics on the minimal number of driver nodes required to control a
network. We find that clustering and modularity have no discernible impact, but
the symmetries of the underlying matching problem can produce linear, quadratic
or no dependence on degree correlation coefficients, depending on the nature of
the underlying correlations. The results are supported by numerical simulations
and help narrow the observed gap between the predicted and the observed number
of driver nodes in real networks.
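The minimal driver-node count referred to above comes from a maximum-matching computation on the underlying matching problem: every node left unmatched must be driven directly, and at least one driver is always needed. A minimal sketch in the structural-controllability framework, using a simple augmenting-path matching and hypothetical toy graphs:

```python
# Minimum driver nodes of a directed network via maximum matching.
# A minimal sketch; the two toy graphs below are illustrative.

def max_bipartite_matching(n, edges):
    """Maximum matching of the bipartite graph whose left side holds
    out-copies and right side in-copies of the n nodes."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match_right = {}  # in-copy -> out-copy

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(n):
        if augment(u, set()):
            size += 1
    return size

def n_driver_nodes(n, edges):
    # Unmatched nodes must receive external signals; at least one driver.
    return max(n - max_bipartite_matching(n, edges), 1)

# A directed chain 0->1->2->3 is controllable from a single driver node.
print(n_driver_nodes(4, [(0, 1), (1, 2), (2, 3)]))  # 1
# A star 0->1, 0->2, 0->3 leaves two in-copies unmatched: three drivers.
print(n_driver_nodes(4, [(0, 1), (0, 2), (0, 3)]))  # 3
```

The dependence on degree correlations discussed in the abstract enters through how correlations change the size of this maximum matching.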
Different approaches to community detection
A precise definition of what constitutes a community in networks has remained
elusive. Consequently, network scientists have compared community detection
algorithms on benchmark networks with a particular form of community structure
and classified them based on the mathematical techniques they employ. However,
this comparison can be misleading because apparent similarities in their
mathematical machinery can disguise different reasons for why we would want to
employ community detection in the first place. Here we provide a focused review
of these different motivations that underpin community detection. This
problem-driven classification is useful in applied network science, where it is
important to select an appropriate algorithm for the given purpose. Moreover,
highlighting the different approaches to community detection also delineates
the many lines of research and points out open directions and avenues for
future research.
Comment: 14 pages, 2 figures. Written as a chapter for the forthcoming Advances in network clustering and blockmodeling, and based on an extended version of The many facets of community detection in complex networks, Appl. Netw. Sci. 2: 4 (2017) by the same author.
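One common quality function in this area is Newman-Girvan modularity, the objective behind one family of methods such a review would classify. A minimal sketch of evaluating it for a given partition; the two-triangle toy graph and its split are illustrative, not taken from the review:

```python
# Newman-Girvan modularity Q of a partition:
# Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * [c_i == c_j]
# A minimal sketch on a hypothetical toy graph.

def modularity(n, edges, labels):
    m = len(edges)
    deg = [0] * n
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] += 1
        A[v][u] += 1
        deg[u] += 1
        deg[v] += 1
    q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                q += A[i][j] - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by one bridge edge: the natural split scores high.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(6, edges, [0, 0, 0, 1, 1, 1]), 3))  # 0.357
```

As the review stresses, a high score under one such quality function is only meaningful relative to the purpose the analyst has in mind.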
Discovering Functional Communities in Dynamical Networks
Many networks are important because they are substrates for dynamical
systems, and their pattern of functional connectivity can itself be dynamic --
they can functionally reorganize, even if their underlying anatomical structure
remains fixed. However, the recent rapid progress in discovering the community
structure of networks has overwhelmingly focused on that constant anatomical
connectivity. In this paper, we lay out the problem of discovering functional
communities, and describe an approach to doing so. This method combines recent
work on measuring information sharing across stochastic networks with an
existing and successful community-discovery algorithm for weighted networks. We
illustrate it with an application to a large biophysical model of the
transition from beta to gamma rhythms in the hippocampus.
Comment: 18 pages, 4 figures, Springer "Lecture Notes in Computer Science" style. Forthcoming in the proceedings of the workshop "Statistical Network Analysis: Models, Issues and New Directions", at ICML 2006. Version 2: small clarifications, typo corrections, added references.
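The method combines an information-sharing measure with a weighted community-discovery algorithm. A minimal sketch of the first ingredient only: pairwise mutual information between binarized node time series, which can then serve as edge weights. The sequences below are synthetic, purely for illustration:

```python
# Pairwise mutual information (in bits) between binary time series -- a
# simple information-sharing measure to weight a functional network.
# A minimal sketch with synthetic data.

import math

def mutual_info(x, y):
    """MI in bits between two equal-length binary sequences."""
    n = len(x)
    pxy = {}
    for a, b in zip(x, y):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
    px = {0: x.count(0) / n, 1: x.count(1) / n}
    py = {0: y.count(0) / n, 1: y.count(1) / n}
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        mi += p * math.log2(p / (px[a] * py[b]))
    return mi

# Identical series share one full bit; this independent pair shares none.
x = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_info(x, x))                          # 1.0
print(mutual_info(x, [0, 0, 1, 1, 0, 0, 1, 1]))   # 0.0
```

The resulting weight matrix is what a weighted community-discovery algorithm, as in the paper's second ingredient, would operate on.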
Community structure in real-world networks from a non-parametrical synchronization-based dynamical approach
This work analyzes the problem of community structure in real-world networks
based on the synchronization of nonidentical coupled chaotic R\"{o}ssler
oscillators each one characterized by a defined natural frequency, and coupled
according to a predefined network topology. The interaction scheme involves a
uniformly increasing coupling force to simulate a society in which the
association between the agents grows over time. To enhance the stability of the
correlated states that could emerge from the synchronization process, we
propose a parameterless mechanism that adapts the characteristic frequencies of
coupled oscillators according to a dynamic connectivity matrix deduced from
correlated data. We show that the characteristic frequency vector that results
from the adaptation mechanism reveals the underlying community structure
present in the network.
Comment: 21 pages, 7 figures; Chaos, Solitons & Fractals (2012)
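The adaptation idea can be caricatured as diffusive averaging of characteristic frequencies over the connectivity matrix: connected oscillators pull each other's frequencies together, so after adaptation the frequency vector clusters by community. This sketch replaces the paper's Rössler oscillators and correlation-derived dynamic connectivity with a fixed toy graph, so it only illustrates why the frequency vector reveals communities:

```python
# Diffusive frequency adaptation on a fixed graph -- an illustrative
# stand-in for the paper's correlation-driven adaptation mechanism.

def adapt_frequencies(freqs, edges, eps=0.1, steps=500):
    """Relax each frequency toward those of its neighbours."""
    w = list(freqs)
    for _ in range(steps):
        delta = [0.0] * len(w)
        for i, j in edges:
            delta[i] += eps * (w[j] - w[i])
            delta[j] += eps * (w[i] - w[j])
        w = [wi + di for wi, di in zip(w, delta)]
    return w

# Two disconnected triangles: each community collapses to its own mean
# frequency, making the community structure readable from the vector.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
w = adapt_frequencies([1.0, 2.0, 3.0, 7.0, 8.0, 9.0], edges)
print([round(x, 3) for x in w])  # ~[2, 2, 2, 8, 8, 8]
```

In the paper the connectivity matrix is itself deduced dynamically from correlated data rather than fixed in advance, which is what makes the mechanism parameterless.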
Optimal modularity and memory capacity of neural reservoirs
The neural network is a powerful computing framework that has been exploited
by biological evolution and by humans for solving diverse problems. Although
the computational capabilities of neural networks are determined by their
structure, the current understanding of the relationships between a neural
network's architecture and function is still primitive. Here we reveal that a
neural network's modular architecture plays a vital role in determining the
neural dynamics and memory performance of networks of threshold neurons. In
particular, we demonstrate that there exists an optimal modularity for memory
performance, where a balance between local cohesion and global connectivity is
established, allowing optimally modular networks to remember longer. Our
results suggest that insights from dynamical analysis of neural networks and
information spreading processes can be leveraged to better design neural
networks and may shed light on the brain's modular organization.
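A sketch of the kind of setup such a study varies: a random directed network of binary threshold neurons whose modularity is tuned by the fraction of links placed within modules. The generator, parameters, and update rule below are illustrative, not the paper's exact construction:

```python
# Modular random network of binary threshold neurons. The p_within knob
# trades local cohesion against global connectivity; the specific sizes
# and thresholds here are hypothetical.

import random

def modular_network(n, n_modules, n_edges, p_within, seed=0):
    """Directed edge list over n nodes in equal-sized modules;
    p_within is the probability a link stays inside its module."""
    rng = random.Random(seed)
    module = [i * n_modules // n for i in range(n)]
    edges = set()
    while len(edges) < n_edges:
        i = rng.randrange(n)
        j = rng.randrange(n)
        if rng.random() < p_within:
            while module[j] != module[i] or j == i:
                j = rng.randrange(n)
        else:
            while module[j] == module[i]:
                j = rng.randrange(n)
        edges.add((i, j))
    return sorted(edges)

def step(state, edges, threshold=1):
    """One synchronous update: a neuron fires if its summed input
    from currently active presynaptic neurons reaches the threshold."""
    drive = [0] * len(state)
    for i, j in edges:
        drive[j] += state[i]
    return [1 if d >= threshold else 0 for d in drive]

# Fully modular network: every link stays within its module.
net = modular_network(12, 3, 20, p_within=1.0)
module = [i * 3 // 12 for i in range(12)]
print(all(module[i] == module[j] for i, j in net))  # True
```

Memory capacity would then be measured by iterating `step` on input-driven states and asking how far back past inputs can be decoded, which is where the optimal intermediate modularity reported in the abstract appears.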
Probabilistic Models of Motor Production
N. Bernstein defined the ability of the central nervous system (CNS) to control many degrees of freedom of a physical body, with all its redundancy and flexibility, as the main problem in motor control. He pointed out that man-made mechanisms usually have one, sometimes two degrees of freedom (DOF); when the number of DOF increases further, it becomes prohibitively hard to control them. The brain, however, seems to perform such control effortlessly. He suggested how the brain might deal with it: when a motor skill is being acquired, the brain artificially limits the degrees of freedom, leaving only one or two. As the skill level increases, the brain gradually "frees" the previously fixed DOF, applying control when needed and in the directions which have to be corrected, eventually arriving at a control scheme where all the DOF are "free". This approach of reducing the dimensionality of motor control remains relevant even today.
One of the possible solutions to Bernstein's problem is the hypothesis of motor primitives (MPs) - small building blocks that constitute complex movements and facilitate motor learning and task completion. Just as in the visual system, having a homogeneous hierarchical architecture built of similar computational elements may be beneficial.
When studying such a complicated object as the brain, it is important to define at which level of detail one works and which questions one aims to answer. David Marr suggested three levels of analysis: 1. computational, analysing which problem the system solves; 2. algorithmic, questioning which representation the system uses and which computations it performs; 3. implementational, finding how such computations are performed by neurons in the brain. In this thesis we stay at the first two levels, seeking the basic representation of motor output.
In this work we present a new model of motor primitives that comprises multiple interacting latent dynamical systems, and give it a full Bayesian treatment. Modelling within the Bayesian framework, in my opinion, must become the new standard in hypothesis testing in neuroscience. Only the Bayesian framework gives us guarantees when dealing with the inevitable plethora of hidden variables and uncertainty.
The special type of coupling of dynamical systems we propose, based on the Product of Experts, has many natural interpretations in the Bayesian framework. If the dynamical systems run in parallel, it yields Bayesian cue integration. If they are organized hierarchically due to serial coupling, we get hierarchical priors over the dynamics. If one of the dynamical systems represents the sensory state, we arrive at sensory-motor primitives. The compact representation that follows from the variational treatment allows learning of a motor-primitives library. When primitives are learned separately, a combined motion can be represented as a matrix of coupling values.
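For Gaussian experts, the parallel (product) coupling reduces to classic precision-weighted cue integration: the product of two Gaussian densities is, up to normalization, a Gaussian whose precision is the sum of the experts' precisions. A minimal numerical sketch of this special case, with illustrative values (not parameters from the thesis):

```python
# Product of two Gaussian experts N(mu1, var1) * N(mu2, var2)
# ~ N(mu, var) with 1/var = 1/var1 + 1/var2 and
# mu = var * (mu1/var1 + mu2/var2) -- Bayesian cue integration.

def product_of_gaussians(mu1, var1, mu2, var2):
    prec = 1.0 / var1 + 1.0 / var2
    var = 1.0 / prec
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# A precise expert (var=1) dominates a vague one (var=4), and the
# fused estimate is more certain than either expert alone.
mu, var = product_of_gaussians(0.0, 1.0, 5.0, 4.0)
print(mu, var)  # 1.0 0.8
```

This is why parallel coupling of the latent dynamical systems can be read directly as cue integration: each expert contributes in proportion to its precision.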
We performed a set of experiments to compare different models of motor primitives. In a series of two-alternative forced choice (2AFC) experiments, participants discriminated natural and synthesised movements, thus running a graphics Turing test. When available, the Bayesian model score predicted the naturalness of the perceived movements. For simple movements, like walking, Bayesian model comparison and psychophysics tests indicate that one dynamical system is sufficient to describe the data. For more complex movements, like walking and waving, motion can be better represented as a set of coupled dynamical systems. We also experimentally confirmed that a Bayesian treatment of model learning on motion data is superior to a simple point estimate of the latent parameters. Experiments with non-periodic movements show that they do not benefit from more complex latent dynamics, despite having high kinematic complexity.
By having fully Bayesian models, we could quantitatively disentangle the influence of motion dynamics and pose on the perception of naturalness. We confirmed that rich and correct dynamics are more important than the kinematic representation.
There are numerous further directions of research. In the multi-part models we devised, even though the latent dynamics was factorized into a set of interacting systems, the kinematic parts were completely independent. Thus, interaction between the kinematic parts could be mediated only by the latent dynamics. A more flexible model would allow dense interaction at the kinematic level too.
Another important problem relates to the representation of time in Markov chains. Discrete-time Markov chains form an approximation to continuous dynamics. As the time step is assumed to be fixed, we face the problem of time-step selection. Time is also not an explicit parameter in Markov chains, which prohibits explicit optimization of, and inference about, time. For example, in optimal control, boundary conditions are usually set at exact time points; this is not an ecological scenario, since in natural behaviour time is usually itself a parameter of the optimization. Making time an explicit parameter of the dynamics may alleviate these issues.
Grouping time series by pairwise measures of redundancy
A novel approach is proposed to group redundant time series in the framework
of causality. It assumes that (i) the dynamics of the system can be described
using just a small number of characteristic modes, and that (ii) a pairwise
measure of redundancy is sufficient to elicit the presence of correlated
degrees of freedom. We show the application of the proposed approach on fMRI
data from a resting human brain and gene expression profiles from HeLa cell
culture.
Comment: 4 pages, 8 figures
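A sketch of the grouping step, with absolute Pearson correlation standing in for the paper's causality-based pairwise redundancy measure; the series and the threshold below are illustrative:

```python
# Group time series whose pairwise redundancy exceeds a threshold,
# using union-find to merge transitively linked series. Correlation is
# a simple stand-in for the causal redundancy measure of the paper.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def group_by_redundancy(series, threshold=0.9):
    """Series i and j land in one group if |corr(i, j)| > threshold."""
    parent = list(range(len(series)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            if abs(pearson(series[i], series[j])) > threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(series)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Series 0-2 are scaled or sign-flipped copies; series 3 is unrelated.
s = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1], [1, -1, 1, -1]]
print(group_by_redundancy(s))  # [[0, 1, 2], [3]]
```

Each resulting group then corresponds to one of the small number of characteristic modes assumed in point (i) of the abstract.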