A strict error bound with separated contributions of the discretization and of the iterative solver in non-overlapping domain decomposition methods
This paper deals with the estimation of the distance between the solution of
a static linear mechanical problem and its approximation by the finite element
method solved with a non-overlapping domain decomposition method (FETI or BDD).
We propose a new strict upper bound of the error which separates the
contribution of the iterative solver and the contribution of the
discretization. Numerical assessments show that the bound is sharp and enables
us to define an objective stopping criterion for the iterative solver.
Comment: Computer Methods in Applied Mechanics and Engineering (2013), online
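Schematically, a separated bound of this kind can be written as follows (the notation here is illustrative, not the paper's own):

```latex
\| u - u_h^{(k)} \|_E \;\le\;
\underbrace{\eta_{\mathrm{disc}}}_{\text{discretization}}
\;+\;
\underbrace{\eta_{\mathrm{iter}}^{(k)}}_{\text{iterative solver}}
```

where $u$ is the exact solution, $u_h^{(k)}$ the finite element approximation produced by the FETI/BDD solver at iteration $k$, and $\|\cdot\|_E$ an energy norm. A separation of this form yields a natural stopping criterion: once $\eta_{\mathrm{iter}}^{(k)}$ falls below a fraction of $\eta_{\mathrm{disc}}$, further solver iterations no longer reduce the total error meaningfully.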
Superfast Line Spectral Estimation
A number of recent works have proposed to solve the line spectral estimation
problem by applying off-the-grid extensions of sparse estimation techniques.
These methods are preferable to classical line spectral estimation algorithms
because they inherently estimate the model order. However, they all have
computation times which grow at least cubically in the problem size, thus
limiting their practical applicability in cases with large dimensions. To
alleviate this issue, we propose a low-complexity method for line spectral
estimation, which also draws on ideas from sparse estimation. Our method is
based on a Bayesian view of the problem. The signal covariance matrix is shown
to have Toeplitz structure, allowing superfast Toeplitz inversion to be used.
We demonstrate that our method achieves estimation accuracy at least as good as
current methods and that it does so while being orders of magnitude faster.
Comment: 16 pages, 7 figures, accepted for IEEE Transactions on Signal Processing
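The structural point can be illustrated in a few lines: the covariance matrix of sinusoids in white noise is Hermitian Toeplitz, so linear systems in it can be solved by structure-exploiting recursions rather than generic dense factorization. The sketch below uses SciPy's Levinson-based solver, which is O(n^2); the paper goes further, to superfast O(n log^2 n) methods. The frequencies, powers, and noise level are made-up values.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Illustrative line-spectral covariance: R = sum_i p_i a(f_i) a(f_i)^H + sigma2 I,
# where a(f) is a complex sinusoid steering vector. R is Hermitian Toeplitz.
n = 64
freqs = np.array([0.11, 0.34])    # normalized frequencies (assumed values)
powers = np.array([1.0, 0.25])
sigma2 = 0.1                      # noise variance

# First column of R: r[k] = sum_i p_i exp(2j*pi*f_i*k), plus sigma2 at lag 0.
k = np.arange(n)
r = (powers[:, None] * np.exp(2j * np.pi * np.outer(freqs, k))).sum(axis=0)
r[0] += sigma2

rng = np.random.default_rng(0)
bvec = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Structure-exploiting solve (Levinson recursion) vs. generic dense solve.
x_fast = solve_toeplitz((r, r.conj()), bvec)
x_dense = np.linalg.solve(toeplitz(r, r.conj()), bvec)
print(np.allclose(x_fast, x_dense))
```

Because the matrix never has to be formed or factored densely, the cost and memory drop from cubic and quadratic, respectively, to quadratic and linear even in this simple stand-in.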
Slow Adaptive OFDMA Systems Through Chance Constrained Programming
Adaptive OFDMA has recently been recognized as a promising technique for
providing high spectral efficiency in future broadband wireless systems. The
research over the last decade on adaptive OFDMA systems has focused on adapting
the allocation of radio resources, such as subcarriers and power, to the
instantaneous channel conditions of all users. However, such "fast" adaptation
requires high computational complexity and excessive signaling overhead. This
hinders the deployment of adaptive OFDMA systems worldwide. This paper proposes
a slow adaptive OFDMA scheme, in which the subcarrier allocation is updated on
a much slower timescale than that of the fluctuation of instantaneous channel
conditions. Meanwhile, the data rate requirements of individual users are
accommodated on the fast timescale with high probability, so that the
requirements are met except for occasional outages. Such an objective has a natural chance
constrained programming formulation, which is known to be intractable. To
circumvent this difficulty, we formulate safe tractable constraints for the
problem based on recent advances in chance constrained programming. We then
develop a polynomial-time algorithm for computing an optimal solution to the
reformulated problem. Our results show that the proposed slow adaptation scheme
drastically reduces both computational cost and control signaling overhead when
compared with the conventional fast adaptive OFDMA. Our work can be viewed as
an initial attempt to apply the chance constrained programming methodology to
wireless system designs. Given that most wireless systems can tolerate an
occasional dip in the quality of service, we hope that the proposed methodology
will find further applications in wireless communications.
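A toy scalar version shows what a rate chance constraint looks like. This is illustrative only, not the paper's multi-user OFDMA formulation: one user, one subcarrier, Rayleigh fading with power gain g ~ Exp(1), and the constraint that the instantaneous rate meets a target except with outage probability eps. In this special case the constraint has a closed form, which a Monte Carlo check confirms; the target rate and outage level are assumed values.

```python
import numpy as np

R_min = 1.0   # target rate in bit/s/Hz (assumed value)
eps = 0.05    # allowed outage probability (assumed value)

# Chance constraint: Pr[log2(1 + p*g) >= R_min] >= 1 - eps, with g ~ Exp(1).
# Since Pr[g >= (2^R_min - 1)/p] = exp(-(2^R_min - 1)/p), the minimal power is
p_star = (2 ** R_min - 1) / (-np.log(1 - eps))

# Monte Carlo verification: the empirical outage at p_star should sit at eps.
rng = np.random.default_rng(1)
g = rng.exponential(1.0, size=200_000)
outage = np.mean(np.log2(1 + p_star * g) < R_min)
print(p_star, outage)   # outage should be close to 0.05
```

In realistic multi-user settings no such closed form exists, which is exactly why the paper resorts to safe tractable convex approximations of the chance constraint.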
Fast non-negative deconvolution for spike train inference from population calcium imaging
Calcium imaging for observing spiking activity from large populations of
neurons is quickly gaining popularity. While the raw data are fluorescence
movies, the underlying spike trains are of interest. This work presents a fast
non-negative deconvolution filter to infer the approximately most likely spike
train for each neuron, given the fluorescence observations. This algorithm
outperforms optimal linear deconvolution (Wiener filtering) on both simulated
and biological data. The performance gains come from restricting the inferred
spike trains to be positive (using an interior-point method), unlike the Wiener
filter. The algorithm is fast enough that even when imaging over 100 neurons,
inference can be performed on the set of all observed traces faster than
real-time. Performing optimal spatial filtering on the images further refines
the estimates. Importantly, all the parameters required to perform the
inference can be estimated using only the fluorescence data, obviating the need
to perform joint electrophysiological and imaging calibration experiments.
Comment: 22 pages, 10 figures
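The core idea, positivity-constrained deconvolution, can be sketched with an off-the-shelf non-negative least-squares solver. This is a simple stand-in, not the paper's fast interior-point algorithm, and the generative model below (spikes convolved with an exponentially decaying calcium kernel plus Gaussian noise, all numbers made up) is only illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# Toy calcium trace: sparse spikes convolved with an exponential kernel + noise.
rng = np.random.default_rng(2)
T, tau = 200, 10.0
kernel = np.exp(-np.arange(T) / tau)

true_spikes = np.zeros(T)
true_spikes[[30, 80, 81, 140]] = [1.0, 0.8, 0.6, 1.2]
fluor = np.convolve(true_spikes, kernel)[:T] + 0.05 * rng.standard_normal(T)

# Convolution as a lower-triangular Toeplitz matrix A, so fluor ~ A @ spikes.
A = np.zeros((T, T))
for j in range(T):
    A[j:, j] = kernel[:T - j]

# Non-negative deconvolution: argmin_{s >= 0} ||A s - fluor||_2.
s_hat, _ = nnls(A, fluor)

# The large entries of s_hat cluster near the true spike times.
print(np.flatnonzero(s_hat > 0.3))
```

The non-negativity constraint is what gives the gain over the Wiener filter: it rules out the negative "ringing" that an unconstrained linear deconvolution produces around each transient.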
Towards Fast-Convergence, Low-Delay and Low-Complexity Network Optimization
Distributed network optimization has been studied for well over a decade.
However, we still do not have a good idea of how to design schemes that can
simultaneously provide good performance across the dimensions of utility
optimality, convergence speed, and delay. To address these challenges, in this
paper, we propose a new algorithmic framework with all these metrics
approaching optimality. The salient features of our new algorithm are
three-fold: (i) fast convergence: its iteration complexity is the lowest among all existing algorithms; (ii)
low delay: it guarantees optimal utility with finite queue length; (iii) simple
implementation: the control variables of this algorithm are based on virtual
queues that do not require maintaining per-flow information. The new technique
builds on a kind of inexact Uzawa method within the Alternating Direction
Method of Multipliers (ADMM), and provides a new theoretical path to proving
the global linear convergence rate of such a method without requiring a
full-rank assumption on the constraint matrix.
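The inexact-Uzawa idea can be sketched on a toy equality-constrained problem. This is a generic illustration of the technique, not the paper's network-utility algorithm: the exact primal minimization of the augmented Lagrangian is replaced by a single gradient step, while the multiplier update plays the role the virtual queues play in the paper. Problem data and step sizes are made-up values.

```python
import numpy as np

# Toy problem: minimize 1/2 ||x - c||^2  subject to  A x = b.
rng = np.random.default_rng(3)
m, n = 5, 8
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2)   # normalize so the step sizes below are safe
b = rng.standard_normal(m)
c = rng.standard_normal(n)
rho = 1.0                   # augmented-Lagrangian penalty (assumed value)
alpha = 0.4                 # primal step size, < 1/(1 + rho*||A||^2)

x = np.zeros(n)
lam = np.zeros(m)
for _ in range(10_000):
    # Inexact primal update: ONE gradient step on L_rho(x, lam) rather than
    # an exact argmin -- the "inexact Uzawa" ingredient.
    grad = (x - c) + A.T @ (lam + rho * (A @ x - b))
    x = x - alpha * grad
    # Dual ascent on the multipliers (cf. virtual-queue updates).
    lam = lam + rho * (A @ x - b)

print(np.linalg.norm(A @ x - b))   # constraint residual shrinks to ~0
```

Replacing the exact inner minimization with a cheap step is what keeps per-iteration complexity low; the paper's contribution is showing that global linear convergence survives this relaxation even without a full-rank constraint matrix.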