Joint DOA Estimation and Array Calibration Using Multiple Parametric Dictionary Learning
This letter proposes a multiple parametric dictionary learning algorithm for
direction of arrival (DOA) estimation in the presence of array gain-phase
errors and mutual coupling. It jointly solves the DOA estimation and array
imperfection problems to yield a robust DOA estimate in the presence of both
array imperfection errors and off-grid mismatch. In the proposed method, a
multiple parametric dictionary learning-based algorithm with a steepest-descent
iteration is used to learn the parametric perturbation matrices and the
steering matrix simultaneously. It also exploits multiple-snapshot information
to enhance DOA estimation performance. Simulation results show the efficiency
of the proposed algorithm when both the off-grid problem and array
imperfections exist.
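For readers unfamiliar with the grid-based model these dictionary-learning methods build on, here is a minimal sketch (not the letter's algorithm; the array size, grid, and function names are illustrative assumptions) of a ULA steering dictionary and the correlation spectrum it induces:

```python
import numpy as np

def steering_matrix(grid_deg, n_sensors, spacing=0.5):
    # ULA steering vectors (columns) for candidate DOAs, half-wavelength spacing
    theta = np.deg2rad(np.asarray(grid_deg, dtype=float))
    n = np.arange(n_sensors)[:, None]
    return np.exp(2j * np.pi * spacing * n * np.sin(theta)[None, :])

# One source at 10 degrees, 50 noiseless snapshots; the correlation spectrum
# over the grid peaks at the true direction.
grid = np.arange(-60.0, 61.0, 1.0)
A = steering_matrix(grid, n_sensors=8)
X = steering_matrix([10.0], 8) @ np.ones((1, 50), dtype=complex)
spectrum = np.linalg.norm(A.conj().T @ X, axis=1)
doa_est = grid[np.argmax(spectrum)]
```

When the true DOA falls between grid points or the steering vectors are perturbed by gain-phase errors and mutual coupling, this fixed dictionary is wrong, which is the mismatch the letter's parametric dictionary learning corrects.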
Super-Resolution Compressed Sensing: A Generalized Iterative Reweighted L2 Approach
Conventional compressed sensing theory assumes signals have sparse
representations in a known, finite dictionary. Nevertheless, in many practical
applications such as direction-of-arrival (DOA) estimation and line spectral
estimation, the sparsifying dictionary is usually characterized by a set of
unknown parameters in a continuous domain. To apply the conventional compressed
sensing technique to such applications, the continuous parameter space has to
be discretized to a finite set of grid points, based on which a "presumed
dictionary" is constructed for sparse signal recovery. Discretization, however,
inevitably incurs errors since the true parameters do not necessarily lie on
the discretized grid. This error, also referred to as grid mismatch, may lead
to deteriorated recovery performance or even recovery failure. To address this
issue, in this paper, we propose a generalized iterative reweighted L2 method
which jointly estimates the sparse signals and the unknown parameters
associated with the true dictionary. The proposed algorithm is developed by
iteratively decreasing a surrogate function majorizing a given objective
function, resulting in a gradual and interwoven iterative process to refine
the unknown parameters and the sparse signal. A simple yet effective scheme is
developed for adaptively updating the regularization parameter that controls
the tradeoff between the sparsity of the solution and the data fitting error.
Extension of the proposed algorithm to the multiple measurement vector scenario
is also considered. Numerical results show that the proposed algorithm achieves
super-resolution accuracy and outperforms existing methods.
Comment: arXiv admin note: text overlap with arXiv:1401.431
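A minimal sketch of the reweighted-L2 idea with a fixed, known dictionary (the paper's joint estimation of the dictionary parameters is omitted; sizes, seed, and names below are illustrative assumptions):

```python
import numpy as np

def irls_l2(A, y, n_iter=50, eps=1e-6, lam=1e-8):
    # Iteratively reweighted L2: each pass solves a weighted least-squares
    # problem whose weights come from the previous estimate, shrinking small
    # coefficients toward zero (an L2 surrogate for a sparsity penalty).
    m, _ = A.shape
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        d = np.abs(x) ** 2 + eps          # inverse weights from current x
        AD = A * d                        # A @ diag(d)
        x = AD.conj().T @ np.linalg.solve(AD @ A.conj().T + lam * np.eye(m), y)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 17]] = [1.0, -2.0]
x_hat = irls_l2(A, A @ x_true)
```

The paper wraps such a reweighted-L2 step inside a majorization-minimization loop that also refines the unknown dictionary parameters at every iteration.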
From Bayesian Sparsity to Gated Recurrent Nets
The iterations of many first-order algorithms, when applied to minimizing
common regularized regression functions, often resemble neural network layers
with pre-specified weights. This observation has prompted the development of
learning-based approaches that purport to replace these iterations with
enhanced surrogates forged as DNN models from available training data. For
example, important NP-hard sparse estimation problems have recently benefitted
from this genre of upgrade, with simple feedforward or recurrent networks
ousting proximal gradient-based iterations. Analogously, this paper
demonstrates that more powerful Bayesian algorithms for promoting sparsity,
which rely on complex multi-loop majorization-minimization techniques, mirror
the structure of more sophisticated long short-term memory (LSTM) networks, or
alternative gated feedback networks previously designed for sequence
prediction. As part of this development, we examine the parallels between
latent variable trajectories operating across multiple time-scales during
optimization, and the activations within deep network structures designed to
adaptively model such characteristic sequences. The resulting insights lead to
a novel sparse estimation system that, when granted training data, can estimate
optimal solutions efficiently in regimes where other algorithms fail, including
practical direction-of-arrival (DOA) and 3D geometry recovery problems. The
underlying principles we expose are also suggestive of a learning process for a
richer class of multi-loop algorithms in other domains.
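The "iterations resemble network layers" observation can be sketched with a standard ISTA recursion (the paper's point is that richer multi-loop Bayesian iterations map onto gated, LSTM-like cells instead; everything below, including sizes and seeds, is an illustrative assumption):

```python
import numpy as np

def soft(z, t):
    # soft-thresholding, the proximal operator of the L1 norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_layer(x, y, W, S, t):
    # one ISTA iteration written as a recurrent layer:
    #   x_{k+1} = soft(W y + S x_k, t)
    # learned-iteration methods replace the analytic W, S, t with
    # parameters trained from data.
    return soft(W @ y + S @ x, t)

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[2, 8, 30]] = [1.0, -1.0, 0.5]
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data term
W = A.T / L                              # fixed "input weights"
S = np.eye(40) - A.T @ A / L             # fixed "recurrent weights"
t = 0.01 / L                             # threshold set by the L1 weight
x = np.zeros(40)
for _ in range(500):
    x = ista_layer(x, y, W, S, t)
```

Stacking a fixed number of such layers and training W, S, t end-to-end gives the feedforward/recurrent "upgrades" the abstract refers to.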
Super-Resolution Compressed Sensing: An Iterative Reweighted Algorithm for Joint Parameter Learning and Sparse Signal Recovery
In many practical applications such as direction-of-arrival (DOA) estimation
and line spectral estimation, the sparsifying dictionary is usually
characterized by a set of unknown parameters in a continuous domain. To apply
the conventional compressed sensing to such applications, the continuous
parameter space has to be discretized to a finite set of grid points.
Discretization, however, incurs errors and leads to deteriorated recovery
performance. To address this issue, we propose an iterative reweighted method
which jointly estimates the unknown parameters and the sparse signals.
Specifically, the proposed algorithm is developed by iteratively decreasing a
surrogate function majorizing a given objective function, which results in a
gradual and interwoven iterative process to refine the unknown parameters and
the sparse signal. Numerical results show that the algorithm provides superior
performance in resolving closely spaced frequency components.
A Block Alternating Optimization Method for Direction-of-Arrival Estimation with Nested Array
In this paper, direction-of-arrival estimation using nested array is studied
in the framework of sparse signal representation. With the vectorization
operator, a new real-valued nonnegative sparse signal recovery model which has
a wider virtual array aperture is built. To leverage celebrated compressive
sensing algorithms, the continuous parameter space has to be discretized to a
number of fixed grid points, which inevitably incurs modeling error caused by
off-grid gap. To remedy this issue, a block alternating optimization method is
put forth that jointly estimates the sparse signal and refines the locations of
grid points. Specifically, inspired by majorization-minimization, the
proposed method iteratively minimizes a surrogate function majorizing the given
objective function, where only a single block of variables is updated per
iteration while the remaining ones are kept fixed. The proposed method features
affordable computational complexity, and numerical tests corroborate its
superior performance relative to existing alternatives in both overdetermined
and underdetermined scenarios.
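A toy single-source sketch of the block-alternating idea (not the paper's nested-array method; the array, angles, and step rule are illustrative assumptions): alternate an amplitude block (least squares) with a grid block (one Gauss-Newton step on the grid-point location), each update holding the other block fixed.

```python
import numpy as np

def atom(theta, n=8, d=0.5):
    # ULA steering vector at angle theta (radians), half-wavelength spacing
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

def datom(theta, n=8, d=0.5):
    # derivative of the steering vector with respect to theta
    return atom(theta, n, d) * (2j * np.pi * d * np.arange(n) * np.cos(theta))

# One off-grid source at 10.3 degrees, initialized at the nearest 1-degree
# grid point 10.0; the grid block closes the off-grid gap.
theta_true = np.deg2rad(10.3)
y = 1.5 * atom(theta_true)
theta = np.deg2rad(10.0)
for _ in range(20):
    a = atom(theta)
    s = (a.conj() @ y) / np.real(a.conj() @ a)    # amplitude block
    r = y - s * a                                 # current residual
    g = s * datom(theta)                          # linearized sensitivity
    theta = theta + np.real(g.conj() @ r) / np.real(g.conj() @ g)  # grid block
```

Refining grid locations this way removes the modeling error of a fixed grid without making the grid impractically dense.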
Enhancing Sparsity and Resolution via Reweighted Atomic Norm Minimization
The mathematical theory of super-resolution developed recently by Candès
and Fernandez-Granda states that a continuous, sparse frequency spectrum can be
recovered with infinite precision via a (convex) atomic norm technique given a
set of uniform time-space samples. This theory was then extended to the cases
of partial/compressive samples and/or multiple measurement vectors via atomic
norm minimization (ANM), known as off-grid/continuous compressed sensing (CCS).
However, a major problem of existing atomic norm methods is that the
frequencies can be recovered only if they are sufficiently separated, which
precludes high resolution. In this paper, a novel (nonconvex) sparse metric is
proposed that promotes sparsity to a greater extent than the atomic norm.
Using this metric, an optimization problem is formulated and a locally
convergent iterative algorithm is implemented. The algorithm iteratively
carries out ANM with a sound reweighting strategy that enhances sparsity and
resolution, and is termed reweighted atomic-norm minimization (RAM). Extensive
numerical simulations demonstrate the advantageous performance of RAM with
application to direction of arrival (DOA) estimation.
Comment: 12 pages, double column, 5 figures, to appear in IEEE Transactions on Signal Processing
Online unsupervised deep unfolding for massive MIMO channel estimation
Massive MIMO communication systems have huge potential in terms of both
data rate and energy efficiency, although channel estimation becomes
challenging for a large number of antennas. Using a physical model eases the
problem by injecting a priori information based on the physics of propagation.
However, such a model rests on simplifying assumptions and requires precise
knowledge of the system configuration, which is unrealistic in practice. In
this letter, we propose to perform online learning for channel estimation in a
massive MIMO context, adding flexibility to physical channel models by
unfolding a channel estimation algorithm (matching pursuit) as a neural
network. This leads to a computationally efficient neural network structure
that can be trained online when initialized with an imperfect model. The
method allows a base station to automatically correct its channel estimation
algorithm based on incoming data, without the need for a separate offline
training phase. Applied to realistic millimeter-wave channels, the method
performs well, achieving a channel estimation error almost as low as one would
obtain with a perfectly calibrated system.
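The base algorithm that gets unfolded is plain matching pursuit; a minimal sketch (not the letter's unfolded network; sizes and names are illustrative assumptions):

```python
import numpy as np

def matching_pursuit(H, y, n_iter):
    # Greedy matching pursuit: at each step, pick the dictionary column most
    # correlated with the residual and subtract its contribution. Unfolding
    # treats each such iteration as one network layer whose dictionary
    # entries become trainable parameters.
    x = np.zeros(H.shape[1], dtype=complex)
    r = y.astype(complex)
    for _ in range(n_iter):
        c = H.conj().T @ r                       # correlations with residual
        k = np.argmax(np.abs(c))                 # best-matching atom
        step = c[k] / np.real(H[:, k].conj() @ H[:, k])
        x[k] += step
        r -= step * H[:, k]
    return x

rng = np.random.default_rng(1)
H = rng.standard_normal((16, 40))
H /= np.linalg.norm(H, axis=0)                   # unit-norm atoms
x_true = np.zeros(40)
x_true[3] = 2.0
x_hat = matching_pursuit(H, H @ x_true, n_iter=1)
```

With an imperfect physical dictionary H, online training of the unfolded layers can absorb the model mismatch that this fixed-dictionary version cannot.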
Sparse Bayesian Learning-Based Direction Finding Method With Unknown Mutual Coupling Effect
Array imperfections degrade direction finding performance. In this paper, we
investigate the direction finding problem in a uniform linear array (ULA)
system with unknown mutual coupling between antennas. By exploiting the
target sparsity in the spatial domain, a sparse Bayesian learning (SBL)-based
model is proposed that converts the direction finding problem into a sparse
reconstruction problem. In the sparse model, off-grid errors are introduced
by discretizing the direction area into grids. Therefore, an off-grid SBL
model with a mutual coupling vector is proposed to overcome both the mutual
coupling and the off-grid effects. With distribution assumptions on the
unknown parameters, including the noise variance, the off-grid vector, the
received signals, and the mutual coupling vector, a novel direction finding
method based on SBL with unknown mutual coupling effect, named DFSMC, is
proposed, in which an expectation-maximization (EM)-based step is adopted by
deriving the estimation expressions for all the unknown parameters
theoretically. Simulation results show that the proposed DFSMC method
significantly outperforms state-of-the-art direction finding methods in array
systems with unknown mutual coupling.
Sparse Bayesian learning with uncertainty models and multiple dictionaries
Sparse Bayesian learning (SBL) has emerged as a fast and competitive method
to perform sparse processing. The SBL algorithm, which is developed using a
Bayesian framework, approximately solves a non-convex optimization problem
using fixed point updates. It provides comparable performance and is
significantly faster than convex optimization techniques used in sparse
processing. We propose a signal model which accounts for dictionary mismatch
and the presence of errors in the weight vector at low signal-to-noise ratios.
A fixed point update equation is derived which incorporates the statistics of
mismatch and weight errors. We also process observations from multiple
dictionaries. Noise variances are estimated using stochastic maximum
likelihood. The derived update equations are studied quantitatively using
beamforming simulations applied to direction-of-arrival (DoA) estimation. The
performance of SBL using single- and multi-frequency observations, and in the
presence of aliasing, is evaluated. SwellEx-96 experimental data qualitatively
demonstrates the advantages of SBL.
Comment: 11 pages, 8 figures
Using the LASSO's Dual for Regularization in Sparse Signal Reconstruction from Array Data
Waves from a sparse set of sources hidden in additive noise are observed by a
sensor array. We treat the estimation of the sparse set of sources as a
generalized complex-valued LASSO problem. The corresponding dual problem is
formulated and it is shown that the dual solution is useful for selecting the
regularization parameter of the LASSO when the number of sources is given. The
solution path of the complex-valued LASSO is analyzed. For a given number of
sources, the corresponding regularization parameter is determined by an
order-recursive algorithm and two iterative algorithms that are based on a
further approximation. Using this regularization parameter, the DOAs of all
sources are estimated.
Comment: submitted to IEEE Transactions on Signal Processing, 09-Aug-201
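The endpoint of the LASSO solution path that such dual-based selection exploits can be sketched directly (a generic KKT fact, not the paper's order-recursive algorithm; sizes and names are illustrative assumptions): for lam at or above lam_max = ||A^H y||_inf, the all-zero vector is optimal, and decreasing lam below lam_max activates atoms one by one.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 25)) + 1j * rng.standard_normal((10, 25))
y = rng.standard_normal(10) + 1j * rng.standard_normal(10)
lam_max = np.max(np.abs(A.conj().T @ y))   # largest atom-observation correlation

def lasso_zero_is_optimal(A, y, lam):
    # KKT condition at x = 0 for min ||y - A x||^2 / 2 + lam ||x||_1:
    # every atom's correlation with the data must stay within lam.
    return np.max(np.abs(A.conj().T @ y)) <= lam + 1e-12
```

Tracking where atoms enter the active set as lam decreases is how a regularization parameter matching a prescribed number of sources can be found.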