Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for Polynomial Transforms Based on Induction
A polynomial transform is the multiplication of an input vector x\in\C^n by
a matrix \PT_{b,\alpha}\in\C^{n\times n}, whose (k,\ell)-th element is
defined as p_\ell(\alpha_k) for polynomials p_\ell(x)\in\C[x] from a list
b = (p_0(x),\dots,p_{n-1}(x)) and sample points \alpha_k\in\C from a list
\alpha = (\alpha_0,\dots,\alpha_{n-1}). Such transforms find applications in
the areas of signal processing, data compression, and function interpolation.
Important examples include the discrete Fourier and cosine transforms. In this
paper we introduce a novel technique to derive fast algorithms for polynomial
transforms. The technique uses the relationship between polynomial transforms
and the representation theory of polynomial algebras. Specifically, we derive
algorithms by decomposing the regular modules of these algebras as a stepwise
induction. As an application, we derive novel general-radix
algorithms for the discrete Fourier transform and the discrete cosine transform
of type 4.
Comment: 19 pages. Submitted to SIAM Journal on Matrix Analysis and Applications.
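As a concrete illustration (not taken from the paper), the sketch below builds the polynomial transform matrix entrywise and checks that the DFT arises from the choices p_\ell(x) = x^\ell and \alpha_k = e^{-2\pi i k/n}; the helper name and the use of NumPy are assumptions made for this example.

```python
import numpy as np

def polynomial_transform(polys, alphas):
    """Build P_{b,alpha} with (k, l)-th entry p_l(alpha_k)."""
    return np.array([[p(a) for p in polys] for a in alphas])

# DFT as a special case: p_l(x) = x**l and alpha_k = exp(-2j*pi*k/n)
n = 8
polys = [lambda x, l=l: x**l for l in range(n)]
alphas = np.exp(-2j * np.pi * np.arange(n) / n)
P = polynomial_transform(polys, alphas)

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
assert np.allclose(P @ x, np.fft.fft(x))   # matches the standard DFT
```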
Discrete Signal Processing on Graphs: Frequency Analysis
Signals and datasets that arise in physical and engineering applications, as
well as social, genetics, biomolecular, and many other domains, are becoming
larger and more complex. In contrast to traditional time and image
signals, data in these domains are supported by arbitrary graphs. Signal
processing on graphs extends concepts and techniques from traditional signal
processing to data indexed by generic graphs. This paper studies the concepts
of low and high frequencies on graphs, and low-, high-, and band-pass graph
filters. In traditional signal processing, these concepts are easily defined
because of a natural frequency ordering that has a physical interpretation. For
signals residing on graphs, in general, there is no obvious frequency ordering.
We propose a definition of total variation for graph signals that naturally
leads to a frequency ordering on graphs and defines low-, high-, and band-pass
graph signals and filters. We study the design of graph filters with specified
frequency response, and illustrate our approach with applications to sensor
malfunction detection and data classification.
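As a rough companion to this abstract (an illustrative assumption, not the paper's code), the sketch below uses one common adjacency-based definition of graph total variation, TV(s) = ||s - A s / |lambda_max| ||_1, to order the spectral components of a graph shift from low to high frequency; the toy random graph and variable names are made up.

```python
import numpy as np

# toy directed graph: adjacency matrix used as the graph shift
rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.3).astype(float)
np.fill_diagonal(A, 0)

eigvals, eigvecs = np.linalg.eig(A)
A_norm = A / np.max(np.abs(eigvals))          # normalise by the spectral radius

def total_variation(s):
    """Adjacency-based graph total variation of a signal s."""
    return np.sum(np.abs(s - A_norm @ s))

# order the spectral components by their total variation: small TV = low frequency
tv = np.array([total_variation(eigvecs[:, k]) for k in range(eigvecs.shape[1])])
order = np.argsort(tv)
```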
When Is Network Lasso Accurate?
The “least absolute shrinkage and selection operator” (Lasso) method has recently been adapted to network-structured datasets. In particular, this network Lasso method allows graph signals to be learned from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure that ensure the network Lasso is accurate. By leveraging concepts from compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set under which the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
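A minimal sketch of the kind of objective the abstract refers to (an empirical loss on the sampled nodes plus a total-variation penalty over the edges), written with CVXPY; the function name, the squared loss, and the data layout are assumptions for illustration rather than the paper's exact formulation.

```python
import cvxpy as cp

def network_lasso(n, edges, weights, y, sampled, lam=1.0):
    """Recover a graph signal on n nodes from noisy samples y[i] observed on
    the nodes in `sampled`, penalising the weighted total variation over `edges`."""
    x = cp.Variable(n)
    # empirical error on the sampled nodes (squared loss chosen for illustration)
    fit = cp.sum_squares(cp.hstack([x[i] - y[i] for i in sampled]))
    # weighted total variation of the graph signal across the edges
    tv = cp.sum(cp.hstack([w * cp.abs(x[i] - x[j])
                           for (i, j), w in zip(edges, weights)]))
    cp.Problem(cp.Minimize(fit + lam * tv)).solve()
    return x.value

# toy usage: a 4-node chain, signal observed on the two end nodes
x_hat = network_lasso(4, edges=[(0, 1), (1, 2), (2, 3)], weights=[1.0, 1.0, 1.0],
                      y={0: 0.1, 3: 0.9}, sampled=[0, 3], lam=0.5)
```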
Adaptation and learning over networks for nonlinear system modeling
In this chapter, we analyze nonlinear filtering problems in distributed
environments, e.g., sensor networks or peer-to-peer protocols. In these
scenarios, the agents in the environment receive measurements in a streaming
fashion, and they are required to estimate a common (nonlinear) model by
alternating local computations and communications with their neighbors. We
focus on the important distinction between single-task problems, where the
underlying model is common to all agents, and multitask problems, where each
agent might converge to a different model due to, e.g., spatial dependencies or
other factors. Currently, most of the literature on distributed learning in the
nonlinear case has focused on the single-task case, which may be a strong
limitation in real-world scenarios. After introducing the problem and reviewing
the existing approaches, we describe a simple kernel-based algorithm tailored
for the multitask case. We evaluate the proposal on a simulated benchmark task,
and we conclude by detailing currently open problems and lines of research.
Comment: To be published as a chapter in 'Adaptive Learning Methods for Nonlinear System Modeling', Elsevier Publishing, Eds. D. Comminiello and J.C. Principe (2018).
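To make the "alternating local computations and communications" concrete, here is a minimal sketch of a generic adapt-then-combine diffusion update with a random-Fourier-feature model for the nonlinearity; this is an illustrative assumption, not the kernel-based multitask algorithm described in the chapter, and all names and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_feat, mu = 5, 3, 50, 0.05

# combination weights over neighbours (row-stochastic; a dense toy network here)
W = rng.uniform(size=(n_agents, n_agents))
W = W / W.sum(axis=1, keepdims=True)

Omega = rng.normal(size=(n_feat, dim))
phase = rng.uniform(0, 2 * np.pi, n_feat)

def features(u):
    """Random Fourier features: one way to represent a nonlinear model linearly."""
    return np.sqrt(2.0 / n_feat) * np.cos(Omega @ u + phase)

theta = np.zeros((n_agents, n_feat))      # one parameter vector per agent
for t in range(1000):
    psi = np.empty_like(theta)
    for k in range(n_agents):             # adapt: local LMS step on a fresh sample
        u = rng.normal(size=dim)
        d = np.sin(u.sum()) + 0.01 * rng.normal()   # toy nonlinear target
        z = features(u)
        psi[k] = theta[k] + mu * (d - z @ theta[k]) * z
    theta = W @ psi                        # combine: average with neighbours
```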
The Complex Hierarchical Topology of EEG Functional Connectivity
Understanding the complex hierarchical topology of functional brain networks
is a key aspect of functional connectivity research. Such topics are obscured
by the widespread use of sparse binary network models which are fundamentally
different to the complete weighted networks derived from functional
connectivity. We introduce two techniques to probe the hierarchical complexity
of topologies. Firstly, a new metric to measure hierarchical complexity;
secondly, a Weighted Complex Hierarchy (WCH) model. To thoroughly evaluate our
techniques, we generalise sparse binary network archetypes to weighted forms
and explore the main topological features of brain networks (integration,
regularity and modularity) using curves over density. By controlling the
parameters of our model, the highest complexity is found to arise between a
random topology and a strict 'class-based' topology. Further, the model has
equivalent complexity to EEG phase-lag networks at peak performance.
Hierarchical complexity attains greater magnitude and range of differences
between different networks than the previous commonly used complexity metric
and our WCH model offers a much broader range of network topology than the
standard scale-free and small-world models at a full range of densities. Our
metric and model provide a rigorous characterisation of hierarchical
complexity. Importantly, our framework shows a scale of complexity arising
between 'all nodes are equal' topologies at one extreme and 'strict
class-based' topologies at the other.
Comment: 12 pages, 7 figures, accepted for publication in Journal of Neuroscience Methods, 8/11/201
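The "curves over density" evaluation can be pictured with a small sketch: threshold a weighted connectivity matrix at a sweep of edge densities and recompute a topological metric at each density. The thresholding rule, the toy matrix, and the mean-strength metric below are assumptions made for illustration, not the paper's procedure.

```python
import numpy as np

def threshold_at_density(W, density):
    """Keep only the strongest edges of a weighted undirected network so that
    roughly the requested fraction of possible edges survives."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)
    weights = W[iu]
    k = max(1, int(round(density * weights.size)))   # number of edges to keep
    thresh = np.sort(weights)[::-1][k - 1]
    A = np.zeros_like(W)
    A[iu] = np.where(weights >= thresh, weights, 0.0)
    return A + A.T

# toy weighted connectivity matrix and a sweep over densities
rng = np.random.default_rng(0)
W = rng.random((16, 16)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
densities = np.linspace(0.05, 0.5, 10)
mean_strength_curve = [threshold_at_density(W, d).sum(axis=1).mean() for d in densities]
```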
Locating Temporal Functional Dynamics of Visual Short-Term Memory Binding using Graph Modular Dirichlet Energy
Visual short-term memory binding tasks are a promising early marker for
Alzheimer's disease (AD). To uncover functional deficits of AD in these tasks
it is meaningful to first study unimpaired brain function. Electroencephalogram
recordings were obtained from encoding and maintenance periods of tasks
performed by healthy young volunteers. We probe the task's transient
physiological underpinnings by contrasting shape only (Shape) and shape-colour
binding (Bind) conditions, displayed in the left and right sides of the screen,
separately. Particularly, we introduce and implement a novel technique named
Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the
functional network with unprecedented temporal precision. We find that
connectivity in the Bind condition is less integrated with the global network
than in the Shape condition in occipital and frontal modules during the
encoding period of the right screen condition. Using MDE we are able to discern
driving effects in the occipital module between 100 and 140 ms, coinciding with the
P100 visually evoked potential, followed by a driving effect in the frontal
module between 140 and 180 ms, suggesting that the differences found constitute an
information processing difference between these modules. This provides
temporally precise information over a heterogeneous population in promising
tasks for the detection of AD.
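The abstract names Modular Dirichlet Energy without giving its formula; purely as a point of reference, the sketch below evaluates the standard graph Dirichlet energy of a signal within each module of a node partition. The per-module grouping, the lack of normalisation, and all names here are assumptions for illustration, not the paper's definition of MDE.

```python
import numpy as np

def dirichlet_energy(W, s, nodes):
    """Standard graph Dirichlet energy (1/2) * sum_{i,j in nodes} W_ij (s_i - s_j)^2,
    restricted to the node set of a single module."""
    idx = np.asarray(nodes)
    Wm, sm = W[np.ix_(idx, idx)], s[idx]
    diff = sm[:, None] - sm[None, :]
    return 0.5 * np.sum(Wm * diff ** 2)

def modular_energies(W, s, modules):
    """Per-module Dirichlet energies for a partition of the nodes into modules."""
    return {name: dirichlet_energy(W, s, nodes) for name, nodes in modules.items()}

# toy usage: 6-node weighted network split into two hypothetical modules
rng = np.random.default_rng(0)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
s = rng.standard_normal(6)
print(modular_energies(W, s, {"occipital": [0, 1, 2], "frontal": [3, 4, 5]}))
```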