A stochastic approximation algorithm for stochastic semidefinite programming
Motivated by applications to multi-antenna wireless networks, we propose a
distributed and asynchronous algorithm for stochastic semidefinite programming.
This algorithm is a stochastic approximation of a continous- time matrix
exponential scheme regularized by the addition of an entropy-like term to the
problem's objective function. We show that the resulting algorithm converges
almost surely to an ε-approximation of the optimal solution,
requiring only an unbiased estimate of the gradient of the problem's stochastic
objective. When applied to throughput maximization in wireless multiple-input
and multiple-output (MIMO) systems, the proposed algorithm retains its
convergence properties under a wide array of mobility impediments such as user
update asynchronicities, random delays and/or ergodically changing channels.
Our theoretical analysis is complemented by extensive numerical simulations
which illustrate the robustness and scalability of the proposed method in
realistic network conditions. Comment: 25 pages, 4 figures
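As a concrete sketch of the scheme described above, the dual accumulation and trace-normalized matrix exponential step can be written in a few lines of NumPy. The step size, power budget, and the exponential computed via eigendecomposition are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def mxl_step(Y, grad_est, step, power=1.0):
    """One (hypothetical) matrix exponential learning step.

    Y accumulates possibly noisy gradient estimates; the primal iterate
    is recovered through a trace-normalized matrix exponential, which
    keeps it positive semidefinite with trace equal to `power`.
    """
    Y = Y + step * grad_est
    # exp(Y) via eigendecomposition of the symmetrized dual matrix
    w, V = np.linalg.eigh((Y + Y.T) / 2)
    expw = np.exp(w - w.max())  # shift eigenvalues for numerical stability
    X = (V * expw) @ V.T
    X *= power / np.trace(X)    # rescale to meet the trace (power) budget
    return Y, X
```

Because the primal iterate is produced by a trace-normalized matrix exponential, it stays positive semidefinite and satisfies the trace constraint regardless of the noise in `grad_est`, which is what makes the unbiased-gradient requirement sufficient.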
Distributed stochastic optimization via matrix exponential learning
In this paper, we investigate a distributed learning scheme for a broad class
of stochastic optimization problems and games that arise in signal processing
and wireless communications. The proposed algorithm relies on the method of
matrix exponential learning (MXL) and only requires locally computable gradient
observations that are possibly imperfect and/or obsolete. To analyze it, we
introduce the notion of a stable Nash equilibrium and we show that the
algorithm is globally convergent to such equilibria - or locally convergent
when an equilibrium is only locally stable. We also derive an explicit linear
bound for the algorithm's convergence speed, which remains valid under
measurement errors and uncertainty of arbitrarily high variance. To validate
our theoretical analysis, we test the algorithm in realistic
multi-carrier/multiple-antenna wireless scenarios where several users seek to
maximize their energy efficiency. Our results show that learning allows users
to attain a net increase between 100% and 500% in energy efficiency, even under
very high uncertainty. Comment: 31 pages, 3 figures
Neuronal Synchronization Can Control the Energy Efficiency of Inter-Spike Interval Coding
The role of synchronous firing in sensory coding and cognition remains
controversial. While studies focusing on its mechanistic consequences in
attentional tasks suggest that synchronization dynamically boosts sensory
processing, others have failed to find significant synchronization levels in such
tasks. We attempt to understand both lines of evidence within a coherent
theoretical framework. We conceptualize synchronization as an independent
control parameter to study how the postsynaptic neuron transmits the average
firing activity of a presynaptic population, in the presence of
synchronization. We apply the Berger-Levy theory of energy efficient
information transmission to interpret simulations of a Hodgkin-Huxley-type
postsynaptic neuron model, where we varied the firing rate and synchronization
level in the presynaptic population independently. We find that for a fixed
presynaptic firing rate the simulated postsynaptic interspike interval
distribution depends on the synchronization level and is well-described by a
generalized extreme value distribution. For synchronization levels of 15% to
50%, we find that the mutual information per unit cost, optimized over the
distribution of presynaptic firing rates, is maximized at a synchronization
level of ~30%. These results suggest that the statistics and energy
efficiency of neuronal communication channels, through which the input rate is
communicated, can be dynamically adapted by the synchronization level. Comment: 47 pages, 14 figures, 2 Tables
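The fitting step described above can be illustrated with SciPy's generalized extreme value distribution. The synthetic interspike intervals below merely stand in for the Hodgkin-Huxley simulations; the shape, location, and scale values are illustrative, not the paper's parameters:

```python
import numpy as np
from scipy import stats

# Synthetic interspike intervals (ms), standing in for simulated data
rng = np.random.default_rng(1)
isi = stats.genextreme.rvs(c=-0.1, loc=50.0, scale=10.0,
                           size=2000, random_state=rng)

# Fit a generalized extreme value distribution to the ISI sample
shape, loc, scale = stats.genextreme.fit(isi)

# Downstream analyses (e.g. mutual information per unit cost) would
# then work with the fitted density stats.genextreme.pdf(t, shape, loc, scale)
```

Note that SciPy's `c` parameter is the negative of the shape parameter ξ in the usual GEV convention, so signs must be interpreted accordingly when comparing fitted shapes across sources.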
Model-Based Deep Learning
Signal processing, communications, and control have traditionally relied on
classical statistical modeling techniques. Such model-based methods utilize
mathematical formulations that represent the underlying physics, prior
information and additional domain knowledge. Simple classical models are useful
but sensitive to inaccuracies and may lead to poor performance when real
systems display complex or dynamic behavior. On the other hand, purely
data-driven approaches that are model-agnostic are becoming increasingly
popular as datasets become abundant and the power of modern deep learning
pipelines increases. Deep neural networks (DNNs) use generic architectures
which learn to operate from data, and demonstrate excellent performance,
especially for supervised problems. However, DNNs typically require massive
amounts of data and immense computational resources, limiting their
applicability for some signal processing scenarios. We are interested in hybrid
techniques that combine principled mathematical models with data-driven systems
to benefit from the advantages of both approaches. Such model-based deep
learning methods exploit both partial domain knowledge, via mathematical
structures designed for specific problems, and learning from limited
data. In this article we survey the leading approaches for studying and
designing model-based deep learning systems. We divide hybrid
model-based/data-driven systems into categories based on their inference
mechanism. We provide a comprehensive review of the leading approaches for
combining model-based algorithms with deep learning in a systematic manner,
along with concrete guidelines and detailed signal processing oriented examples
from recent literature. Our aim is to facilitate the design and study of future
systems at the intersection of signal processing and machine learning that
incorporate the advantages of both domains.