Towards a learning-theoretic analysis of spike-timing dependent plasticity
This paper suggests a learning-theoretic perspective on how synaptic
plasticity benefits global brain functioning. We introduce a model, the
selectron, that (i) arises as the fast time constant limit of leaky
integrate-and-fire neurons equipped with spike-timing-dependent plasticity
(STDP) and (ii) is amenable to theoretical analysis. We show that the selectron
encodes reward estimates into spikes and that an error bound on spikes is
controlled by a spiking margin and the sum of synaptic weights. Moreover, the
efficacy of spikes (their usefulness to other reward-maximizing selectrons)
also depends on total synaptic strength. Finally, based on our analysis, we
propose a regularized version of STDP, and show that the regularization improves
the robustness of neuronal learning when faced with multiple stimuli.
Comment: To appear in Adv. Neural Inf. Proc. Systems
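The abstract does not spell out the selectron's learning rule, but the flavour of pair-based STDP with an added weight penalty can be sketched as follows. This is a minimal illustration only: the function name, the parameters `a_plus`, `a_minus`, `tau`, and the weight-decay regularizer `lam` are assumptions, not the paper's actual regularized rule.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, lam=0.0):
    """Pair-based STDP update for spike-time difference dt = t_post - t_pre (ms).

    Positive dt (pre before post) potentiates, negative dt depresses.
    lam adds a simple weight-decay penalty standing in for the paper's
    regularizer, whose exact form is not given in the abstract.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)
    else:
        dw = -a_minus * np.exp(dt / tau)
    # keep the synaptic weight in a bounded range
    return float(np.clip(w + dw - lam * w, 0.0, 1.0))

w = 0.5
w_pot = stdp_update(w, dt=5.0)            # pre-before-post: potentiation
w_dep = stdp_update(w, dt=-5.0)           # post-before-pre: depression
w_reg = stdp_update(w, dt=5.0, lam=0.05)  # regularization shrinks the weight
```

The `lam * w` term caps the sum of synaptic weights over time, loosely echoing the abstract's point that error bounds and spike efficacy depend on total synaptic strength.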
Telling cause from effect in deterministic linear dynamical systems
Inferring a cause from its effect using observed time series data is a major
challenge in the natural and social sciences. Assuming the effect is generated
by the cause through a linear system, we propose a new approach based on the
hypothesis that nature chooses the "cause" and the "mechanism that generates
the effect from the cause" independently of each other. We therefore postulate
that the power spectrum of the cause time series is uncorrelated with the
squared transfer function of the linear filter generating the effect.
While most causal discovery methods for time series mainly rely on the noise,
our method relies on asymmetries of the power spectral density properties that
can be exploited even in the context of deterministic systems. We describe
mathematical assumptions in a deterministic model under which the causal
direction is identifiable with this approach. We also discuss the method's
performance under the additive noise model and its relationship to Granger
causality. Experiments show encouraging results on synthetic as well as
real-world data. Overall, this suggests that the postulate of Independence of
Cause and Mechanism is a promising principle for causal inference on empirical
time series.
Comment: This article is under review for a peer-reviewed conference
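The postulate can be illustrated numerically: when a cause is filtered by an independently chosen linear mechanism, the cause's power spectrum is nearly uncorrelated with the squared transfer function, while the effect's spectrum inherits the mechanism's spectral shape. The sketch below is an illustration under assumed toy signals, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096
# coloured "cause": white noise smoothed by a short moving average
x = np.convolve(rng.standard_normal(n), np.ones(4) / 4, mode="same")
h = rng.standard_normal(64)          # mechanism: random FIR filter, chosen independently
y = np.convolve(x, h, mode="same")   # "effect"

Sx = np.abs(np.fft.rfft(x)) ** 2     # periodogram of the cause
Sy = np.abs(np.fft.rfft(y)) ** 2     # periodogram of the effect
H2 = np.abs(np.fft.rfft(h, n)) ** 2  # squared transfer function, same grid

# Causal direction: cause spectrum and mechanism are (nearly) uncorrelated.
# Anticausal reading: the effect's spectrum tracks the mechanism's shape.
rho_causal = np.corrcoef(Sx, H2)[0, 1]
rho_effect = np.corrcoef(Sy, H2)[0, 1]
```

The asymmetry `|rho_causal| << |rho_effect|` is what makes the direction identifiable even for deterministic systems, since no noise term is needed.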
Group invariance principles for causal generative models
The postulate of independence of cause and mechanism (ICM) has recently led
to several new causal discovery algorithms. The interpretation of independence
and the way it is utilized, however, varies across these methods. Our aim in
this paper is to propose a group theoretic framework for ICM to unify and
generalize these approaches. In our setting, the cause-mechanism relationship
is assessed by comparing it against a null hypothesis through the application
of random generic group transformations. We show that the group theoretic view
provides a very general tool to study the structure of data generating
mechanisms with direct applications to machine learning.
Comment: 16 pages, 6 figures
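One concrete instance of such a comparison uses a trace-condition-style statistic, with random orthogonal matrices playing the role of generic group transformations. The statistic, dimensions, and null construction below are illustrative assumptions, not the paper's general framework.

```python
import numpy as np

rng = np.random.default_rng(1)

def tracial_score(A, Sigma):
    """Trace-condition-style dependence measure: close to zero when the
    cause covariance Sigma is 'generic' relative to the mechanism A."""
    d = A.shape[1]
    return np.log(np.trace(A @ Sigma @ A.T)) - np.log(
        np.trace(A @ A.T) * np.trace(Sigma) / d)

def random_orthogonal(d, rng):
    """Draw a Haar-distributed orthogonal matrix (a generic group element)."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

d = 10
A = rng.standard_normal((d, d))            # linear mechanism
Sigma = np.diag(rng.uniform(0.5, 2.0, d))  # generic cause covariance
observed = tracial_score(A, Sigma)

# Null hypothesis: scores after random group transformations of the cause.
null = []
for _ in range(200):
    U = random_orthogonal(d, rng)
    null.append(tracial_score(A, U @ Sigma @ U.T))
p_like = float(np.mean(np.abs(null) >= abs(observed)))

# A covariance aligned with A's right singular vectors violates ICM:
# by the power-mean inequality its score is provably non-negative.
_, s, Vt = np.linalg.svd(A)
aligned = tracial_score(A, Vt.T @ np.diag(s**2) @ Vt)
```

A generic `Sigma` yields a score typical of the null distribution, whereas the aligned covariance is systematically flagged, which is the cause-mechanism dependence the framework is designed to detect.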
Function Classes for Identifiable Nonlinear Independent Component Analysis
Unsupervised learning of latent variable models (LVMs) is widely used to
represent data in machine learning. When such models reflect the ground truth
factors and the mechanisms mapping them to observations, there is reason to
expect that they allow generalization in downstream tasks. It is however well
known that such identifiability guarantees are typically not achievable without
putting constraints on the model class. This is notably the case for nonlinear
Independent Component Analysis, in which the LVM maps statistically independent
variables to observations via a deterministic nonlinear function. In generic
settings, one can construct entire families of spurious solutions that fit the
data perfectly yet do not correspond to the ground-truth factors.
However, recent work suggests that constraining the function class of such
models may promote identifiability. Specifically, function classes with
constraints on their partial derivatives, gathered in the Jacobian matrix, have
been proposed, such as orthogonal coordinate transformations (OCT), which
impose orthogonality of the Jacobian columns. In the present work, we prove
that a subclass of these transformations, conformal maps, is identifiable and
provide novel theoretical results suggesting that OCTs have properties that
prevent families of spurious solutions from spoiling identifiability in a
generic setting.
Comment: 43 pages
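The defining property is easy to check numerically: a conformal map's Jacobian has orthogonal columns of equal norm, so its Gram matrix is a scalar multiple of the identity. The sketch below uses the textbook conformal map z ↦ z² on the plane with a finite-difference Jacobian; it is an illustration, not code from the paper.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^n at the point x."""
    n = x.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def square_map(p):
    """z -> z^2 viewed as a map of the plane, a textbook conformal map."""
    x, y = p
    return np.array([x**2 - y**2, 2 * x * y])

J = jacobian(square_map, np.array([0.7, -0.3]))
G = J.T @ J  # Gram matrix of the Jacobian columns
# OCT: the off-diagonal entries of G vanish (orthogonal columns).
# Conformal map: G is additionally a scalar multiple of the identity.
```

An OCT only needs the off-diagonal entries of `G` to vanish; conformality is the stricter condition that the diagonal entries agree as well, which is the subclass proved identifiable here.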
Causal Feature Selection via Orthogonal Search
The problem of inferring the direct causal parents of a response variable
among a large set of explanatory variables is of high practical importance in
many disciplines. Recent work in the field of causal discovery exploits
invariance properties of models across different experimental conditions for
detecting direct causal links. However, these approaches generally do not scale
well with the number of explanatory variables, are difficult to extend to
nonlinear relationships, and require data across different experiments.
Inspired by debiased machine learning methods, we study a
one-vs.-the-rest feature selection approach to discover the direct causal
parent of the response. We propose an algorithm that works for purely
observational data, while also offering theoretical guarantees, including the
case of partially nonlinear relationships. Requiring only one estimation for
each variable, we can apply our approach even to large graphs, demonstrating
significant improvements compared to established approaches.
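A caricature of the one-vs.-the-rest idea: for each candidate feature, partial the remaining features out of both the response and the candidate, then score the correlation of the two residuals, in the spirit of debiased/orthogonal estimation. The toy model and score below are assumptions for illustration, not the paper's algorithm or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(2)

def residual(target, covariates):
    """OLS residual of target after regressing out covariates (with intercept)."""
    Z = np.column_stack([np.ones(len(target)), covariates])
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return target - Z @ beta

# Toy linear model: X0 and X1 are the direct causal parents of Y;
# X2 is a non-parent proxy strongly correlated with X0.
n = 2000
X = rng.standard_normal((n, 3))
X[:, 2] = X[:, 0] + 0.5 * rng.standard_normal(n)
Y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * rng.standard_normal(n)

# One estimation per variable: residual-on-residual correlation.
scores = []
for j in range(3):
    rest = np.delete(X, j, axis=1)
    r_y = residual(Y, rest)        # response, other features partialled out
    r_j = residual(X[:, j], rest)  # candidate, other features partialled out
    scores.append(abs(np.corrcoef(r_y, r_j)[0, 1]))
# Parents X0 and X1 receive high scores; the proxy X2 scores near zero.
```

Because each candidate requires only one regression pair rather than a search over subsets or environments, this style of scoring scales linearly in the number of explanatory variables, which is the scalability advantage the abstract highlights.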
Sharp wave-ripple complexes in a reduced model of the hippocampal CA3-CA1 network of the macaque monkey
Sharp wave-ripple complexes (SPW-Rs) observed in the hippocampal CA1 local field potential (LFP) are thought to play a major role in memory reactivation, transfer and consolidation. SPW-Rs are known to result from a complex interplay between local and upstream hippocampal ensembles. However, the key mechanisms that underlie these events remain partly unknown. In this work, we introduce a reduced but realistic multi-compartmental model of the macaque monkey's hippocampal CA3-CA1 network. The model consists of two semi-linear layers, each comprising two-compartmental pyramidal neurons and one-compartmental perisomatic-targeting basket cells. Connections in the network were modeled as AMPA synapses, based on physiological and anatomical data. Notably, while auto-association fibers were prevalent in CA3, CA1 connectivity (inspired by recent findings) implemented a "feedback and reciprocal inhibition" scheme, dominated by recurrent inhibition and pyramidal cell-interneuron synapses. SPW-R episodes emerge spontaneously in the CA1 subfield LFP (assumed proportional to the transmembrane currents across all compartments and the medium resistivity): episodes of short-lived high-frequency oscillations (ripples, 80-180 Hz) on top of a massive dendritic depolarization (< 20 Hz), with the visual and quantitative characteristics observed experimentally [1]. Concomitantly, the CA3 subfield LFP presents episodes of quasi-synchronous neuronal bursting in the form of gamma episodes (25-75 Hz). The model reveals a lower bound on the minimal network that may generate SPW-R activity, and predicts a large number of features of in vivo hippocampal recordings in macaque monkeys [1]. Spike-LFP coherence analysis in CA1 displays reliable synchrony of spiking activity in the ripple LFP frequency band, suggesting that modeled SPW-R episodes reflect a genuine network oscillatory regime.
Interestingly, interneuronal firing shows coherence increases concomitant with the beginning and the end of the SPW-R event, together with increases over gamma frequencies. The model suggests that the activity of both pyramidal neurons and interneurons is critical for the local genesis and dynamics of physiological SPW-R activity. Unlike other models, we found that it is interneuronal silence, not interneuronal firing, that triggers these fast oscillatory events, in line with the fact that unbalanced excitability of selected pyramidal cells marks the beginning of single network episodes. Interneuronal silence quickly increases the population firing of pyramidal cells. The interneuronal population activity increases with some latency due to the unbalanced excitatory drive, becoming pivotal to pyramidal cell activity and further pacing pyramidal cells owing to the interneurons' fast kinetic properties. Our modeled data suggest that this effect is possibly mediated by a silencing-and-rebound-excitation mechanism, keeping the frequency of the field oscillation confined to the ripple range. The reduced model thus suggests a simple mechanism for the occurrence of SPW-Rs, in light of recent experimental evidence. We provide new insights into the dynamics of the hippocampal CA3-CA1 network during ripples, and into the relation between neuronal circuit activity at meso- and microscopic scales. Finally, our model exhibits characteristic cell type-specific activity that might be critical for the emergence of physiological SPW-R activity and therefore for the formation of hippocampus-dependent memory representations.
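The ripple band (80-180 Hz) referred to above can be isolated from an LFP trace with a simple zero-phase band-pass. The synthetic signal and FFT-based filter below are an illustration of that band definition, not the study's analysis pipeline; all signal parameters are assumptions.

```python
import numpy as np

fs = 1000.0                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(3)

# Synthetic LFP: background noise plus a 120 Hz burst riding on a
# slow depolarization between 0.9 s and 1.0 s.
lfp = 0.2 * rng.standard_normal(t.size)
burst = (t > 0.9) & (t < 1.0)
lfp += np.where(burst, np.sin(2 * np.pi * 120.0 * t) + 0.5, 0.0)

def bandpass_fft(x, fs, lo, hi):
    """Zero-phase band-pass by zeroing FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=x.size)

ripple = bandpass_fft(lfp, fs, 80.0, 180.0)             # ripple-band trace
envelope = bandpass_fft(np.abs(ripple), fs, 0.0, 20.0)  # crude amplitude envelope
peak_time = t[np.argmax(envelope)]                      # lands inside the burst
```

Thresholding such an envelope is one common way to mark candidate SPW-R episodes in recorded or simulated LFPs before computing spike-LFP coherence within the detected windows.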