Maximum Likelihood Associative Memories
Associative memories are structures that store data in such a way that it can later be retrieved given only part of its content -- a form of error/erasure resilience. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data come from a uniform binary source. Second, we determine the minimum amount of memory required to store the same data. Finally, we bound the computational complexity of message retrieval. We then compare these bounds with two existing associative memory architectures: the celebrated Hopfield neural networks and a neural network architecture introduced more recently by Gripon and Berrou.
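To make the retrieval principle concrete, here is a minimal sketch, assuming the partial query is modeled as a message with erased positions; the function name, message length, and erasure encoding are our illustrative choices, not the paper's. For a uniform binary source, a maximum likelihood estimate is any stored message that agrees with the query on all non-erased positions.

```python
import numpy as np

def ml_retrieve(memory, query):
    """Maximum-likelihood retrieval of a stored message under erasures.

    memory: (M, n) array of stored binary messages.
    query:  length-n array over {0, 1, -1}, where -1 marks an erasure.
    For a uniform binary source observed through an erasure channel,
    an ML estimate is a stored message agreeing with the query on
    every non-erased position.
    """
    known = query != -1
    # Hamming distance to the query restricted to known positions;
    # an ML message attains distance 0.
    dists = (memory[:, known] != query[known]).sum(axis=1)
    return memory[np.argmin(dists)]

rng = np.random.default_rng(0)
memory = rng.integers(0, 2, size=(1000, 32))  # uniform binary messages
query = memory[42].copy()
query[:16] = -1                               # erase half the positions
retrieved = ml_retrieve(memory, query)
print(np.array_equal(retrieved[16:], memory[42, 16:]))  # True: known bits agree
```

The linear scan over all stored messages is the naive retrieval baseline; the complexity bounds mentioned in the abstract concern the cost of such retrieval.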
Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs
We propose a simple and generic layer formulation that extends the properties of convolutional layers to any domain that can be described by a graph. Namely, we use the support of the graph's adjacency matrix to design learnable weight-sharing filters able to exploit the underlying structure of signals in the same fashion as for images. The proposed formulation makes it possible to learn the weights of the filter as well as a scheme that controls how they are shared across the graph. We perform validation experiments with image datasets and show that these filters offer performance comparable with that of convolutional ones.
Comment: To appear in the 2017 5th IEEE Global Conference on Signal and Information Processing, 5 pages, 3 figures, 3 tables
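As a rough sketch of the formulation (under our own assumptions; the soft-assignment parameterization and all names are illustrative, not the paper's exact construction), each edge on the support of the adjacency matrix mixes a bank of K shared weights according to a learned assignment, so the sharing scheme itself is trainable:

```python
import numpy as np

def graph_layer_forward(x, adj, weights, assignment):
    """Forward pass of a graph layer with a learnable sharing scheme.

    x:          (n, c_in) input signal, one row per node.
    adj:        (n, n) binary adjacency matrix (the support).
    weights:    (K, c_in, c_out) bank of K shared filter weights.
    assignment: (n, n, K) learned soft assignment of each edge to
                the K weights; only entries on the support matter.
    """
    n, c_in = x.shape
    K, _, c_out = weights.shape
    y = np.zeros((n, c_out))
    for i in range(n):
        for j in np.nonzero(adj[i])[0]:
            # Mix the K shared weights by the edge's learned assignment,
            # then propagate neighbor j's features to node i.
            w_ij = np.tensordot(assignment[i, j], weights, axes=1)
            y[i] += x[j] @ w_ij
    return y

# Toy usage with random parameters (training is omitted).
rng = np.random.default_rng(0)
n, c_in, c_out, K = 6, 3, 4, 2
adj = (rng.random((n, n)) < 0.4).astype(int)
x = rng.normal(size=(n, c_in))
weights = rng.normal(size=(K, c_in, c_out))
assignment = rng.dirichlet(np.ones(K), size=(n, n))   # each edge sums to 1
y = graph_layer_forward(x, adj, weights, assignment)  # (n, c_out)
```

On a grid graph with one-hot assignments indexed by relative position, such a layer reduces to an ordinary convolution, which is the sense in which the formulation extends convolutional layers to arbitrary graphs.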
Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals
Many tools from the field of graph signal processing exploit knowledge of the
underlying graph's structure (e.g., as encoded in the Laplacian matrix) to
process signals on the graph. Consequently, when no graph is available, these tools cannot be applied. Researchers
have proposed approaches to infer a graph topology from observations of signals
on its nodes. Since the problem is ill-posed, these approaches make
assumptions, such as smoothness of the signals on the graph, or sparsity
priors. In this paper, we propose a characterization of the space of valid graphs, in the sense that they can explain stationary signals. To simplify the exposition, we focus on the case where the signals were i.i.d. at some initial time and are observed after diffusion on a graph. We
show that the set of graphs verifying this assumption has a strong connection
with the eigenvectors of the covariance matrix, and forms a convex set. Along
with a theoretical study in which these eigenvectors are assumed to be known,
we consider the practical case when the observations are noisy, and
experimentally observe how fast the set of valid graphs converges to the set
obtained when the exact eigenvectors are known, as the number of observations
grows. To illustrate how this characterization can be used for graph recovery,
we present two methods for selecting a particular point in this set under
chosen criteria, namely graph simplicity and sparsity. Additionally, we
introduce a measure to evaluate how well a graph is adapted to signals under a
stationarity assumption. Finally, we evaluate how state-of-the-art methods
relate to this framework through experiments on a dataset of temperatures.
Comment: Submitted to IEEE Transactions on Signal and Information Processing over Networks
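To sketch how the characterization might be used in practice (a toy illustration under our assumptions, not the paper's method): under the diffusion model, the covariance of the observations is diagonalized by the same eigenvectors as the graph operator, so those eigenvectors can be estimated from data, and choosing the eigenvalues under a criterion such as simplicity or sparsity selects one point of the convex set of valid graphs.

```python
import numpy as np

def valid_graph_basis(observations):
    """Eigenvector basis shared by all valid graph operators.

    observations: (m, n) array of m signals observed on n nodes.
    Under the stationarity/diffusion assumption, any valid graph
    operator is diagonalized by the eigenvectors of the covariance.
    """
    cov = np.cov(observations, rowvar=False)   # (n, n) sample covariance
    _, vecs = np.linalg.eigh(cov)
    return vecs

def operator_from_eigenvalues(vecs, eigvals):
    """One candidate operator V diag(eigvals) V^T in the valid set;
    choosing eigvals under a criterion (e.g., favoring a sparse or
    simple operator) picks a particular point of that set."""
    return vecs @ np.diag(eigvals) @ vecs.T

# Toy check: i.i.d. signals diffused by a known symmetric operator.
rng = np.random.default_rng(1)
n, m = 10, 5000
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))     # random eigenbasis
H = Q @ np.diag(np.linspace(0.5, 2.0, n)) @ Q.T  # diffusion operator
x = rng.normal(size=(m, n)) @ H                  # rows: diffused signals
V = valid_graph_basis(x)
# As m grows, the columns of V align with those of Q
# (up to sign and the ordering of close eigenvalues).
```

With finitely many noisy observations the estimated basis only approximates the true one, which is exactly the convergence behavior the abstract studies experimentally.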