What can topology tell us about the neural code?
Neuroscience is undergoing a period of rapid experimental progress and
expansion. New mathematical tools, previously unknown in the neuroscience
community, are now being used to tackle fundamental questions and analyze
emerging data sets. Consistent with this trend, the last decade has seen an
uptick in the use of topological ideas and methods in neuroscience. In this
talk I will survey recent applications of topology in neuroscience, and explain
why topology is an especially natural tool for understanding neural codes.
Note: This is a write-up of my talk for the Current Events Bulletin, held at
the 2016 Joint Mathematics Meetings in Seattle, WA. Comment: 16 pages, 9 figures
Blind Identification of Invertible Graph Filters with Multiple Sparse Inputs
This paper deals with the problem of blind identification of a graph filter and
its sparse input signal, thus broadening the scope of classical blind
deconvolution of temporal and spatial signals to irregular graph domains. While
the observations are bilinear functions of the unknowns, a mild requirement on
invertibility of the filter enables an efficient convex formulation, without
relying on matrix lifting that can hinder applicability to large graphs. On top
of scaling, it is argued that (non-cyclic) permutation ambiguities may arise
with some particular graphs. Deterministic sufficient conditions under which
the proposed convex relaxation can exactly recover the unknowns are stated,
along with those guaranteeing identifiability under the Bernoulli-Gaussian
model for the inputs. Numerical tests with synthetic and real-world networks
illustrate the merits of the proposed algorithm, as well as the benefits of
leveraging multiple signals to aid the (blind) localization of sources of
diffusion.
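As a toy numerical illustration of why the invertibility requirement matters (the graph, the filter taps, and all variable names below are our own assumptions, not from the paper): when the polynomial graph filter H is invertible, applying its inverse to the observation recovers the sparse input exactly, which is what the convex formulation estimates jointly when the filter is unknown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: a symmetric adjacency matrix as the graph-shift operator S.
n = 8
A = (rng.random((n, n)) < 0.4).astype(float)
S = np.triu(A, 1)
S = S + S.T

# Invertible polynomial graph filter H = h0*I + h1*S + h2*S^2.
# h(t) = 2 + 0.5 t + 0.1 t^2 has no real roots, so H is positive definite.
h = np.array([2.0, 0.5, 0.1])
H = h[0] * np.eye(n) + h[1] * S + h[2] * (S @ S)
assert np.linalg.cond(H) < 1e6            # invertible in practice

# Sparse input (two diffusion sources) passed through the filter.
x = np.zeros(n)
x[[1, 5]] = [1.0, -2.0]
y = H @ x

# With H known and invertible, the sparse input is recovered exactly;
# the paper's convex formulation estimates the inverse filter jointly.
x_hat = np.linalg.solve(H, y)
print(np.allclose(x_hat, x))              # True
```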
Deep Recurrent Neural Networks for Time Series Prediction
The ability of deep networks to extract high-level features and of recurrent
networks to perform time-series inference has been studied. Given that a
network with one hidden layer can universally approximate functions under weak
constraints, the benefit of multiple layers is to enlarge the space of
dynamical systems that can be approximated or, given the space, to reduce the
number of units required for a certain error. Traditionally, shallow networks
with manually engineered features are used, the backpropagation extent is
limited to one step, and a large number of hidden units is chosen in an attempt
to satisfy the Markov condition. In the case of Markov models, it has been
shown that many systems need to be modeled as higher order. In the present
work, we present deep recurrent networks with a longer
backpropagation-through-time extent as a solution to modeling high-order
systems and to predicting ahead. We study an epileptic seizure suppression
electro-stimulator. Extraction of manually engineered complex features and
prediction employing them has not allowed small low-power implementations
because, to avoid the possibility of further surgery, the extraction of any
feature that may be required has to be included. In our solution, a recurrent
neural network performs both feature extraction and prediction. We prove
analytically that adding hidden layers or increasing the backpropagation
extent increases the rate of decrease of the approximation error. A Dynamic
Programming (DP) training procedure employing matrix operations is derived. DP
and the use of matrix operations make the procedure efficient, particularly
when using data-parallel computing. The simulation studies show the geometry of
the parameter space, that the network learns the temporal structure, that the
parameters converge while the model output displays the same dynamic behavior
as the system, and a greater than 0.99 Average Detection Rate on all real
seizure data tried. Comment: Preliminary, submitted to IEEE TNNL
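A minimal numpy sketch of backpropagation through time over the full sequence extent (the Elman architecture, the sizes, and the sine-wave data are our illustrative choices, not the paper's model or seizure data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny Elman RNN trained with backpropagation through time (BPTT) over
# the whole sequence to predict the next sample of a sine wave.
T, H = 20, 8
seq = np.sin(np.linspace(0, 4 * np.pi, T + 1))
Wx = rng.normal(0, 0.3, (H, 1))
Wh = rng.normal(0, 0.3, (H, H))
Wy = rng.normal(0, 0.3, (1, H))

def forward(Wx, Wh, Wy):
    hs, ys = [np.zeros((H, 1))], []
    for t in range(T):
        h = np.tanh(Wx * seq[t] + Wh @ hs[-1])   # recurrent state update
        hs.append(h)
        ys.append((Wy @ h).item())
    loss = 0.5 * sum((y - d) ** 2 for y, d in zip(ys, seq[1:]))
    return hs, ys, loss

def bptt(Wx, Wh, Wy):
    hs, ys, loss = forward(Wx, Wh, Wy)
    gWx, gWh, gWy = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wy)
    dh_next = np.zeros((H, 1))
    for t in reversed(range(T)):                 # error flows back in time
        dy = ys[t] - seq[t + 1]
        gWy += dy * hs[t + 1].T
        dz = (1 - hs[t + 1] ** 2) * (Wy.T * dy + dh_next)  # through tanh
        gWx += dz * seq[t]
        gWh += dz @ hs[t].T
        dh_next = Wh.T @ dz
    return loss, gWx, gWh, gWy

loss0 = forward(Wx, Wh, Wy)[2]
for _ in range(300):
    loss, gx, gh, gy = bptt(Wx, Wh, Wy)
    Wx -= 0.01 * gx; Wh -= 0.01 * gh; Wy -= 0.01 * gy

print(loss < loss0)   # training reduced the one-step prediction error
```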
A Unified Perspective of Evolutionary Game Dynamics Using Generalized Growth Transforms
In this paper, we show that different types of evolutionary game dynamics
are, in principle, special cases of a dynamical system model based on our
previously reported framework of generalized growth transforms. The framework
shows that different dynamics arise as a result of minimizing a population
energy such that the population as a whole evolves to reach the most stable
state. By introducing a population-dependent time-constant in the generalized
growth transform model, the proposed framework can be used to explain a vast
repertoire of evolutionary dynamics, including some novel forms of game
dynamics with non-linear payoffs.
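As a concrete instance, the discrete replicator dynamics can be written as a multiplicative growth-transform-style update that preserves the probability simplex; the Hawk-Dove payoff matrix and the baseline-fitness constant below are standard textbook choices, not values from the paper.

```python
import numpy as np

# Discrete replicator dynamics as a growth-transform-style update:
# x_i <- x_i * f_i(x) / sum_j x_j f_j(x), which keeps x on the simplex.
# Hawk-Dove payoffs with V=2, C=4 (a textbook example, not the paper's).
A = np.array([[-1.0, 2.0],   # hawk vs (hawk, dove)
              [ 0.0, 1.0]])  # dove vs (hawk, dove)
b = 2.0                      # baseline fitness keeps f positive

x = np.array([0.9, 0.1])     # start hawk-heavy
for _ in range(500):
    f = A @ x + b            # fitness of each strategy
    x = x * f / (x @ f)      # multiplicative growth-transform update

print(np.round(x, 3))        # converges to the mixed ESS (0.5, 0.5)
```

The normalization by the mean fitness `x @ f` is what keeps the population state on the simplex at every step, mirroring the energy-minimizing interpretation in the abstract.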
Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation
Spiking neural networks (SNNs) have garnered a great amount of interest for
supervised and unsupervised learning applications. This paper deals with the
problem of training multi-layer feedforward SNNs. The non-linear
integrate-and-fire dynamics employed by spiking neurons make it difficult to
train SNNs to generate desired spike trains in response to a given input. To
tackle this, first the problem of training a multi-layer SNN is formulated as
an optimization problem such that its objective function is based on the
deviation in membrane potential rather than the spike arrival instants. Then,
an optimization method named Normalized Approximate Descent (NormAD),
hand-crafted for such non-convex optimization problems, is employed to derive
the iterative synaptic weight update rule. Next, it is reformulated to
efficiently train multi-layer SNNs, and is shown to be effectively performing
spatio-temporal error backpropagation. The learning rule is validated by
training -layer SNNs to solve a spike based formulation of the XOR problem
as well as training -layer SNNs for generic spike based training problems.
Thus, the new algorithm is a key step towards building deep spiking neural
networks capable of efficient event-triggered learning. Comment: 19 pages, 10 figures
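A minimal simulation of the leaky integrate-and-fire dynamics that make this training problem hard (all constants are illustrative assumptions, not the paper's): the hard reset at threshold is the non-differentiability that motivates an objective based on membrane-potential deviation rather than spike arrival instants.

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron. The hard reset at threshold
# makes spike-time objectives non-differentiable, which is why a cost
# on the membrane potential itself is attractive. Constants are toy
# values, not taken from the paper.
dt, tau, v_th, v_reset = 1e-3, 20e-3, 1.0, 0.0
T = 200
I = np.full(T, 1.5)               # constant suprathreshold input

v = 0.0
v_trace, spikes = [], []
for t in range(T):
    v += dt / tau * (-v + I[t])   # leaky integration toward I
    if v >= v_th:                 # threshold crossing -> spike
        spikes.append(t)
        v = v_reset               # hard reset (non-differentiable)
    v_trace.append(v)

print(f"{len(spikes)} spikes in {T} steps")
```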
Addressing Class Imbalance in Classification Problems of Noisy Signals by using Fourier Transform Surrogates
Randomizing the Fourier-transform (FT) phases of temporal-spatial data
generates surrogates that approximate examples from the data-generating
distribution. We propose such FT surrogates as a novel tool to augment and
analyze training of neural networks and explore the approach in the example of
sleep-stage classification. By computing FT surrogates of raw EEG, EOG, and EMG
signals of under-represented sleep stages, we balanced the CAPSLPDB sleep
database. We then trained and tested a convolutional neural network for sleep
stage classification, and found that our surrogate-based augmentation improved
the mean F1-score by 7%. As another application of FT surrogates, we formulated
an approach to compute saliency maps for individual sleep epochs. The
visualization is based on the response of inferred class probabilities under
replacement of short data segments by partial surrogates. To quantify how well
the distributions of the surrogates and the original data match, we evaluated a
trained classifier on surrogates of correctly classified examples, and
summarized these conditional predictions in a confusion matrix. We show how
such conditional confusion matrices can qualitatively explain the performance
of surrogates in class balancing. The FT-surrogate augmentation approach may
improve classification on noisy signals if carefully adapted to the data
distribution under analysis. Comment: 7 pages, 7 figures
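The phase-randomization step itself is compact; below is a sketch under the usual constraints that the DC and Nyquist bins stay real, with a toy sine-plus-noise signal standing in for a real EEG/EOG/EMG epoch.

```python
import numpy as np

rng = np.random.default_rng(0)

def ft_surrogate(x, rng):
    """Phase-randomized surrogate: same amplitude spectrum (hence the
    same autocorrelation), new uniformly random Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, X.shape)
    phases[0] = 0.0                    # DC bin must stay real
    if x.size % 2 == 0:
        phases[-1] = 0.0               # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

# Toy stand-in for a raw biosignal epoch: a sine plus noise.
x = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * rng.normal(size=1024)
s = ft_surrogate(x, rng)

# The amplitude spectrum is preserved, but the waveform is not.
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))  # True
print(np.allclose(x, s))                                            # False
```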
Enhancing Geometric Deep Learning via Graph Filter Deconvolution
In this paper, we incorporate a graph filter deconvolution step into the
classical geometric convolutional neural network pipeline. More precisely,
under the assumption that the graph domain plays a role in the generation of
the observed graph signals, we pre-process every signal by passing it through a
sparse deconvolution operation governed by a pre-specified filter bank. This
deconvolution operation is formulated as a group-sparse recovery problem, and
convex relaxations that can be solved efficiently are put forth. The
deconvolved signals are then fed into the geometric convolutional neural
network, yielding better classification performance than their unprocessed
counterparts. Numerical experiments showcase the effectiveness of the
deconvolution step on classification tasks on both synthetic and real-world
settings. Comment: 5 pages, 8 figures, to appear in the proceedings of the 2018 6th IEEE
Global Conference on Signal and Information Processing, November 26-29, 2018,
Anaheim, California, USA
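The sparse deconvolution pre-processing step can be sketched with iterative soft-thresholding (ISTA); for brevity we use a single pre-specified filter and a plain l1 penalty rather than the paper's group-sparse penalty over a filter bank, and the toy graph and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(2)

# ISTA for sparse deconvolution of a diffused graph signal
# (plain l1 here; the paper uses a group-sparse penalty).
n = 30
A = rng.random((n, n)) < 0.15
S = np.triu(A, 1).astype(float)
S = S + S.T                               # symmetric graph-shift operator
H = 2.0 * np.eye(n) + 0.2 * S             # pre-specified graph filter

x_true = np.zeros(n)
x_true[[3, 17, 25]] = [2.0, -1.5, 1.0]    # sparse sources
y = H @ x_true                            # observed diffused signal

lam = 0.05
L = np.linalg.norm(H, 2) ** 2             # step size from the Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    z = x - H.T @ (H @ x - y) / L         # gradient step on 0.5*||Hx - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold

support = np.argsort(-np.abs(x))[:3]
print(sorted(support.tolist()))           # largest entries sit at the true sources
```

The deconvolved `x` (rather than the raw `y`) would then be fed to the geometric convolutional network.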
Geometric Generalization Based Zero-Shot Learning Dataset Infinite World: Simple Yet Powerful
Raven's Progressive Matrices are one of the widely used tests in evaluating
the human test taker's fluid intelligence. Analogously, this paper introduces
geometric generalization based zero-shot learning tests to measure the rapid
learning ability and the internal consistency of deep generative models. Our
empirical analysis of state-of-the-art generative models discerns their
ability to generalize concepts across classes. In the process, we introduce
Infinite World, an evaluable, scalable, multi-modal, light-weight dataset and
a Zero-Shot Intelligence Metric (ZSI). The proposed tests condense human-level
spatial and numerical reasoning tasks to their simplest geometric forms. The
dataset is scalable to a theoretical limit of infinity in the numerical features
of the generated geometric figures, in image size, and in quantity. We
systematically analyze state-of-the-art models' internal consistency, identify
their bottlenecks, and propose a proactive optimization method for few-shot and
zero-shot learning.
Discovery and visualization of structural biomarkers from MRI using transport-based morphometry
Disease in the brain is often associated with subtle, spatially diffuse, or
complex tissue changes that may lie beneath the level of gross visual
inspection, even on magnetic resonance imaging (MRI). Unfortunately, current
computer-assisted approaches that examine pre-specified features, whether
anatomically-defined (e.g. thalamic volume, cortical thickness) or based on
pixelwise comparison (e.g. deformation-based methods), are prone to missing a
vast array of physical changes that are not well-encapsulated by these metrics.
In this paper, we have developed a technique for automated pattern analysis
that can fully determine the relationship between brain structure and
observable phenotype without requiring any a priori features. Our technique,
called transport-based morphometry (TBM), is an image transformation that maps
brain images losslessly to a domain where they become much more separable. The
new approach is validated on structural brain images of healthy older adult
subjects where even linear models for discrimination, regression, and blind
source separation enable TBM to independently discover the characteristic
changes of aging and highlight potential mechanisms by which aerobic fitness
may mediate brain health later in life. TBM is a generative approach that can
provide visualization of physically meaningful shifts in tissue distribution
through inverse transformation. The proposed framework is a powerful technique
that can potentially elucidate genotype-structural-behavioral associations in
myriad diseases.
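A 1D analogue conveys the core idea of the transport-domain representation (the Gaussian densities, the grid, and all names below are our illustrative choices, not the paper's MRI pipeline): each "image" is represented by the optimal transport map carrying a fixed reference density onto it, and in 1D that map is obtained by CDF matching.

```python
import numpy as np

grid = np.linspace(-6, 6, 400)

def density(mu):
    """Discretized Gaussian density on the grid (toy stand-in for an image)."""
    p = np.exp(-0.5 * (grid - mu) ** 2)
    return p / p.sum()

def transport_map(p, ref):
    """Monotone map pushing ref onto p in 1D: T = F_p^{-1} o F_ref."""
    cdf_p = np.cumsum(p)
    cdf_ref = np.cumsum(ref)
    return np.interp(cdf_ref, cdf_p, grid)

ref = density(0.0)
# Two "classes" of subjects: densities centered near -1 and near +1.
displacements = [transport_map(density(mu), ref) - grid
                 for mu in (-1.1, -0.9, 0.9, 1.1)]

# In the transport domain a mean shift becomes a near-constant
# displacement field, so even a linear rule separates the classes.
means = [d.mean() for d in displacements]
print(means[0] < 0 < means[2])   # True: the two classes have opposite signs
```

Because the map is invertible, moving along a discriminant direction in the transport domain and applying the inverse map is what yields the visualizations of tissue shifts mentioned in the abstract.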
Non-convex non-local flows for saliency detection
We propose and numerically solve a new variational model for automatic
saliency detection in digital images. Using a non-local framework we consider a
family of edge preserving functions combined with a new quadratic saliency
detection term. Such a term defines a constrained bilateral obstacle problem for
image classification driven by p-Laplacian operators, including the so-called
hyper-Laplacian case (0 < p < 1). The related non-convex non-local reactive
flows are then considered and applied for glioblastoma segmentation in magnetic
resonance fluid-attenuated inversion recovery (MRI-Flair) images. A fast
convolutional kernel based approximated solution is computed. The numerical
experiments show how the non-convexity related to the hyper-Laplacian operators
provides monotonically better results in terms of the standard metrics.
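A 1D sketch of an explicit p-Laplacian flow, including the hyper-Laplacian regime 0 < p < 1 studied here (the epsilon regularization, step sizes, and toy step-edge signal are our choices for a stable scheme, not the paper's convolutional-kernel solver):

```python
import numpy as np

# Explicit 1D p-Laplacian flow u_t = d/dx(|u_x|^(p-2) u_x) with a small
# epsilon to regularize the singular coefficient at u_x = 0. With p < 2
# the flow smooths flat (noisy) regions while preserving sharp edges;
# p = 0.8 is in the hyper-Laplacian range 0 < p < 1.
p, eps, dt, steps = 0.8, 1e-2, 1e-3, 200

u = np.zeros(100)
u[40:60] = 1.0                       # a bright "object" on a dark background
u += 0.05 * np.random.default_rng(4).normal(size=u.size)

for _ in range(steps):
    du = np.diff(u)                  # forward differences u_x
    flux = (np.abs(du) ** 2 + eps) ** ((p - 2) / 2) * du
    u[1:-1] += dt * np.diff(flux)    # divergence of the flux

print("flat-region std:", float(np.std(u[:30])))
```

The small-gradient coefficient `(|u_x|^2 + eps)^((p-2)/2)` is large where the signal is flat and near 1 at the edge, which is the edge-preserving behavior the non-convex functional encodes.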