A Sparsity-Aware Adaptive Algorithm for Distributed Learning
In this paper, a sparsity-aware adaptive algorithm for distributed learning
in diffusion networks is developed. The algorithm follows the set-theoretic
estimation rationale. At each time instance and at each node of the network, a
closed convex set, known as property set, is constructed based on the received
measurements; this defines the region in which the solution is searched for. In
this paper, the property sets take the form of hyperslabs. The goal is to find
a point that belongs to the intersection of these hyperslabs. To this end,
sparsity-encouraging variable metric projections onto the hyperslabs have been
adopted. Moreover, sparsity is also imposed by employing variable metric
projections onto weighted ℓ1 balls. A combine-adapt cooperation strategy
is adopted. Under some mild assumptions, the scheme enjoys monotonicity,
asymptotic optimality and strong convergence to a point that lies in the
consensus subspace. Finally, numerical examples verify the validity of the
proposed scheme in comparison with other algorithms developed in the context
of sparse adaptive learning.
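As a concrete illustration of the set-theoretic building block, the Euclidean
projection onto a single hyperslab {z : |⟨a, z⟩ − b| ≤ ε} has a simple closed
form. The sketch below uses the plain Euclidean metric; the paper's algorithm
employs sparsity-promoting variable metric projections instead.

```python
def project_onto_hyperslab(x, a, b, eps):
    """Euclidean projection of x onto the hyperslab {z : |<a, z> - b| <= eps}.

    Plain-Euclidean sketch; the paper's algorithm instead uses
    sparsity-promoting variable metric projections.
    """
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    r = dot - b          # residual of the linear measurement
    if r > eps:
        step = (r - eps) / norm2
    elif r < -eps:
        step = (r + eps) / norm2
    else:
        return list(x)   # x already lies inside the hyperslab
    return [xi - step * ai for xi, ai in zip(x, a)]
```

Iterating such projections over the hyperslabs constructed at each node drives
the estimate toward their intersection, the set-theoretic target described
above.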
Estimation of the Number of Sources in Unbalanced Arrays via Information Theoretic Criteria
Estimating the number of sources impinging on an array of sensors is a well
known and well investigated problem. A common approach for solving this problem
is to use an information theoretic criterion, such as Minimum Description
Length (MDL) or the Akaike Information Criterion (AIC). The MDL estimator is
known to be a consistent estimator, robust against deviations from the Gaussian
assumption, and non-robust against deviations from the point source and/or
temporally or spatially white additive noise assumptions. Over the years
several alternative estimation algorithms have been proposed and tested.
Usually, these algorithms are shown, using computer simulations, to have
improved performance over the MDL estimator, and to be robust against
deviations from the assumed spatial model. Nevertheless, these robust
algorithms have high computational complexity, requiring several
multi-dimensional searches.
In this paper, motivated by real life problems, a systematic approach toward
the problem of robust estimation of the number of sources using information
theoretic criteria is taken. An MDL type estimator that is robust against
deviation from assumption of equal noise level across the array is studied. The
consistency of this estimator, even when deviations from the equal noise level
assumption occur, is proven. A novel low-complexity implementation method
avoiding the need for multi-dimensional searches is presented as well, making
this estimator a favorable choice for practical applications.
Comment: To appear in the IEEE Transactions on Signal Processing.
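For reference, the classical (non-robust) MDL criterion of this family selects
the number of sources by balancing the spread of the smallest
sample-covariance eigenvalues against a log N penalty. The sketch below
implements the standard Wax–Kailath form, not the robust variant studied in
the paper.

```python
import math

def mdl_num_sources(eigvals, n_snapshots):
    """Classical Wax-Kailath MDL estimate of the number of sources.

    eigvals: eigenvalues of the sample covariance matrix (any order);
    n_snapshots: number of array snapshots N. For each candidate k, the
    criterion compares the geometric and arithmetic means of the p - k
    smallest eigenvalues and adds a 0.5 * k * (2p - k) * log N penalty.
    """
    p = len(eigvals)
    lam = sorted(eigvals, reverse=True)
    best_k, best_score = 0, float("inf")
    for k in range(p):          # k = 0 .. p - 1 sources
        tail = lam[k:]
        m = p - k
        geo = math.exp(sum(math.log(v) for v in tail) / m)
        ari = sum(tail) / m
        score = (-n_snapshots * m * math.log(geo / ari)
                 + 0.5 * k * (2 * p - k) * math.log(n_snapshots))
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```

With two dominant eigenvalues well separated from a flat noise floor, the
criterion returns 2; the robust estimator discussed above modifies this form
to tolerate unequal noise levels across the array.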
Information-based complexity, feedback and dynamics in convex programming
We study the intrinsic limitations of sequential convex optimization through
the lens of feedback information theory. In the oracle model of optimization,
an algorithm queries an {\em oracle} for noisy information about the unknown
objective function, and the goal is to (approximately) minimize every function
in a given class using as few queries as possible. We show that, in order for a
function to be optimized, the algorithm must be able to accumulate enough
information about the objective. This, in turn, puts limits on the speed of
optimization under specific assumptions on the oracle and the type of feedback.
Our techniques are akin to the ones used in statistical literature to obtain
minimax lower bounds on the risks of estimation procedures; the notable
difference is that, unlike in the case of i.i.d. data, a sequential
optimization algorithm can gather observations in a {\em controlled} manner, so
that the amount of information at each step is allowed to change in time. In
particular, we show that optimization algorithms often obey the law of
diminishing returns: the signal-to-noise ratio drops as the optimization
algorithm approaches the optimum. To underscore the generality of the tools, we
use our approach to derive fundamental lower bounds for a certain active
learning problem. Overall, the present work connects the intuitive notions of
information in optimization, experimental design, estimation, and active
learning to the quantitative notion of Shannon information.
Comment: final version; to appear in IEEE Transactions on Information Theory.
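The diminishing-returns phenomenon is easy to see in a toy experiment: running
noisy gradient descent on f(x) = x²/2 with fixed-variance oracle noise, the
per-query signal-to-noise ratio |f′(x)|/σ shrinks as the iterate approaches
the optimum. This is an illustrative toy, not the paper's general oracle
model.

```python
import random

def snr_along_descent(x0, sigma, steps, lr=0.2, seed=0):
    """Noisy gradient descent on f(x) = x^2 / 2, whose true gradient is x.

    The oracle returns x plus Gaussian noise of fixed std sigma; we record
    the per-query signal-to-noise ratio |x| / sigma, which decays as the
    iterate approaches the optimum at 0 -- the diminishing-returns effect.
    """
    rng = random.Random(seed)
    x, snrs = x0, []
    for _ in range(steps):
        snrs.append(abs(x) / sigma)
        noisy_grad = x + rng.gauss(0.0, sigma)
        x -= lr * noisy_grad
    return snrs
```

Early queries carry a strong gradient signal relative to the noise; near the
optimum the signal vanishes while the noise level stays fixed, so each
additional query yields less information.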
Random template placement and prior information
In signal detection problems, one is usually faced with the task of searching
a parameter space for peaks in the likelihood function which indicate the
presence of a signal. Random searches have proven to be very efficient as well
as easy to implement, compared e.g. to searches along regular grids in
parameter space. Knowledge of the parameterised shape of the signal being
searched for adds structure to the parameter space: there are usually regions
that need to be searched densely, while in other regions a coarser search is
sufficient. Prior information, on the other hand, identifies the regions in
which a search is likely to be promising and those in which it is likely to be
in vain. Defining
specific figures of merit allows one to combine both template metric and prior
distribution and devise optimal sampling schemes over the parameter space. We
show an example related to the gravitational wave signal from a binary inspiral
event. Here the template metric and prior information are particularly
contradictory, since signals from low-mass systems tolerate the least mismatch
in parameter space while high-mass systems are far more likely, as they imply a
greater signal-to-noise ratio (SNR) and hence are detectable to greater
distances. The derived sampling strategy is implemented in a Markov chain Monte
Carlo (MCMC) algorithm, where it improves convergence.
Comment: Proceedings of the 8th Edoardo Amaldi Conference on Gravitational
Waves. 7 pages, 4 figures.
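The tension between template metric and prior can be resolved by drawing
template locations from a combined figure of merit. Below is a generic 1-D
rejection-sampling sketch; the merit function, pairing a 1/m² metric density
with an m-proportional prior for a hypothetical mass-like parameter, is an
illustrative stand-in, not the paper's actual figure of merit.

```python
import random

def rejection_sample(merit, lo, hi, merit_max, n, seed=0):
    """Draw n points in [lo, hi] with density proportional to merit(theta)
    via rejection sampling; merit_max must upper-bound merit on [lo, hi]."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        theta = rng.uniform(lo, hi)
        if rng.random() * merit_max <= merit(theta):
            samples.append(theta)
    return samples

def merit(m):
    """Hypothetical figure of merit for a mass-like parameter m:
    a metric density ~ 1/m^2 (low-mass templates tolerate less mismatch,
    so need denser placement) times a prior ~ m (high-mass systems are
    detectable to greater distances), giving merit ~ 1/m."""
    return (1.0 / m ** 2) * m

templates = rejection_sample(merit, 1.0, 10.0, merit_max=1.0, n=2000)
```

Regions where the combined merit is high receive proportionally more
templates, realising the densely and coarsely searched regions described
above; the same density can serve as a proposal distribution inside an MCMC
sampler.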