Reconstructing a Graph from Path Traces
This paper considers the problem of inferring the structure of a network from
indirect observations. Each observation (a "trace") is the unordered set of
nodes which are activated along a path through the network. Since a trace does
not convey information about the order of nodes within the path, there are many
feasible orders for each trace observed, and thus the problem of inferring the
network from traces is, in general, ill-posed. We propose and analyze an
algorithm which inserts edges by ordering each trace into a path according to
which pairs of nodes in the path co-occur most frequently in the observations.
When all traces involve exactly 3 nodes, we derive necessary and sufficient
conditions for the reconstruction algorithm to exactly recover the graph.
Finally, for a family of random graphs, we present expressions for the
reconstruction error probabilities (false discoveries and missed detections).
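The co-occurrence idea above can be sketched for the 3-node case. In this hypothetical Python sketch (function and variable names are ours, not the paper's), each trace's least frequently co-occurring pair is taken as the path endpoints, and edges are inserted from the remaining (middle) node to each endpoint:

```python
from collections import Counter
from itertools import combinations

def reconstruct(traces):
    """Sketch of reconstruction from unordered 3-node traces: the two
    nodes of a trace that co-occur least often across all traces are
    assumed to be the path endpoints (hence not directly connected),
    and the other two pairs are inserted as edges."""
    co = Counter()
    for t in traces:
        co.update(combinations(sorted(t), 2))
    edges = set()
    for t in traces:
        pairs = list(combinations(sorted(t), 2))
        endpoints = min(pairs, key=co.__getitem__)  # least frequent pair
        edges.update(p for p in pairs if p != endpoints)
    return edges
```

For example, traces generated on the star graph with center 2 and leaves 1, 3, 4 are correctly resolved, because each leaf pair co-occurs less often than any leaf-center pair.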
Sparse neural networks with large learning diversity
Coded recurrent neural networks with three levels of sparsity are introduced.
The first level is related to the size of messages, much smaller than the
number of available neurons. The second one is provided by a particular coding
rule, acting as a local constraint in the neural activity. The third one is a
characteristic of the low final connection density of the network after the
learning phase. Though the proposed network is very simple since it is based on
binary neurons and binary connections, it is able to learn a large number of
messages and recall them, even in the presence of strong erasures. The
performance of the network is assessed as a classifier and as an associative
memory.
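A minimal sketch of this kind of binary clique-based memory follows; the class and its parameters are our own illustrative naming, and retrieval here is a single pass of neighbor counting rather than the full iterative decoding a real network would use. Each message activates one neuron ("fanal") per cluster, learning stores the binary clique between them, and erased clusters are recovered by picking the fanal with the most connections to the known ones:

```python
import itertools

class CliqueMemory:
    """Illustrative sketch of a sparse binary associative memory:
    messages are cliques of binary connections, one active unit per
    cluster."""

    def __init__(self, clusters, fanals):
        self.clusters, self.fanals = clusters, fanals
        self.w = set()  # binary connections between (cluster, fanal) units

    def store(self, message):
        # message[c] is the index of the active fanal in cluster c
        units = list(enumerate(message))
        for u, v in itertools.combinations(units, 2):
            self.w.add((u, v))
            self.w.add((v, u))

    def recall(self, partial):
        # partial[c] is None where the cluster's value was erased
        known = [(c, f) for c, f in enumerate(partial) if f is not None]
        out = []
        for c, f in enumerate(partial):
            if f is not None:
                out.append(f)
                continue
            # choose the fanal most connected to the known units
            scores = [sum(((c, g), u) in self.w for u in known)
                      for g in range(self.fanals)]
            out.append(max(range(self.fanals), key=scores.__getitem__))
        return out
```

Even with several messages stored, an erased cluster is recovered as long as the known units single out the right clique.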
Maximum Likelihood Associative Memories
Associative memories are structures that store data in such a way that it can
later be retrieved given only a part of its content -- a sort-of
error/erasure-resilience property. They are used in applications ranging from
caches and memory management in CPUs to database engines. In this work we study
associative memories built on the maximum likelihood principle. We derive
minimum residual error rates when the data stored comes from a uniform binary
source. Second, we determine the minimum amount of memory required to store the
same data. Finally, we bound the computational complexity for message
retrieval. We then compare these bounds with two existing associative memory
architectures: the celebrated Hopfield neural networks and a neural network
architecture introduced more recently by Gripon and Berrou.
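Under an erasure channel and a uniform source, maximum-likelihood retrieval has a simple form: every stored message consistent with the unerased positions is equally likely. The sketch below (our own formulation, not the paper's implementation) makes that explicit by returning the set of ML candidates, with exact recall when the set is a singleton:

```python
def ml_retrieve(stored, observed):
    """Sketch of ML retrieval under erasures: among stored binary
    messages, keep those matching every unerased position; for a
    uniform source all of them are equally likely."""
    return [m for m in stored
            if all(o is None or o == b for o, b in zip(observed, m))]
```

The paper's bounds concern how much memory and computation this retrieval fundamentally requires; a literal scan over the stored set, as above, achieves the error rate but not the complexity bound.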
Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs
We propose a simple and generic layer formulation that extends the properties
of convolutional layers to any domain that can be described by a graph. Namely,
we use the support of its adjacency matrix to design learnable weight sharing
filters able to exploit the underlying structure of signals in the same fashion
as for images. The proposed formulation makes it possible to learn the weights
of the filter as well as a scheme that controls how they are shared across the
graph. We perform validation experiments with image datasets and show that
these filters offer performance comparable to convolutional ones.
Comment: To appear in the 2017 5th IEEE Global Conference on Signal and
Information Processing, 5 pages, 3 figures, 3 tables
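The idea of a learnable weight-sharing scheme can be sketched as follows; the function and the `assign` mapping are our illustrative notation, and in the actual layer the assignment would itself be learned rather than given. Each node aggregates its neighbors through a small bank of shared weights, with the scheme deciding which shared weight applies to which neighbor, mimicking how an image convolution reuses the same kernel entry at the same relative position:

```python
import numpy as np

def graph_conv(x, adj, assign, weights):
    """Sketch of a convolution-like layer on a graph: adj[i] lists the
    neighbors of node i (taken from the adjacency support), and
    assign[i][j] maps its j-th neighbor to one of the shared filter
    weights, so the same weights are reused across the whole graph."""
    out = np.zeros(len(x))
    for i, nbrs in enumerate(adj):
        for j, n in enumerate(nbrs):
            out[i] += weights[assign[i][j]] * x[n]
    return out
```

On a grid graph with an assignment that encodes relative position, this reduces to an ordinary convolution, which is the sense in which the formulation extends convolutional layers.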
A Comparative Study of Sparse Associative Memories
We study various models of associative memories with sparse information, i.e.
a pattern to be stored is a random string of 0s and 1s containing only a few
1s. We compare different synaptic weights, architectures and retrieval
mechanisms to shed light on the influence of the various parameters on the
storage capacity.
Comment: 28 pages, 2 figures
Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction
Graph Signal Processing (GSP) is a promising framework to analyze
multi-dimensional neuroimaging datasets, while taking into account both the
spatial and functional dependencies between brain signals. In the present work,
we apply dimensionality reduction techniques based on graph representations of
the brain to decode brain activity from real and simulated fMRI datasets. We
introduce seven graphs obtained from a) geometric structure and/or b)
functional connectivity between brain areas at rest, and compare them when
performing dimension reduction for classification. We show that mixed graphs
using both a) and b) offer the best performance. We also show that graph
sampling methods perform better than classical dimension reduction including
Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
Comment: 5 pages, GlobalSIP 201
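One common GSP-style reduction, sketched below in our own minimal form (the paper compares several graph sampling methods, not necessarily this one), projects each brain signal onto the low-frequency eigenvectors of the graph Laplacian, so the retained components respect the graph structure rather than only the data covariance as in PCA:

```python
import numpy as np

def graph_spectral_reduce(X, A, k):
    """Sketch of graph-based dimensionality reduction: project each
    signal (row of X) onto the k Laplacian eigenvectors with smallest
    eigenvalues, i.e. keep its lowest graph frequencies."""
    d = A.sum(axis=1)
    L = np.diag(d) - A           # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)     # eigenvectors, ascending eigenvalues
    return X @ U[:, :k]          # reduced features, one row per signal
```

The first component is always the projection onto the constant eigenvector (graph frequency zero), i.e. a normalized sum of the signal.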