Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling
Solving linear regression problems based on the total least-squares (TLS)
criterion has well-documented merits in various applications, where
perturbations appear both in the data vector as well as in the regression
matrix. However, existing TLS approaches do not account for sparsity possibly
present in the unknown vector of regression coefficients. On the other hand,
sparsity is the key attribute exploited by modern compressive sampling and
variable selection approaches to linear regression, which include noise in the
data, but do not account for perturbations in the regression matrix. The
present paper fills this gap by formulating and solving TLS optimization
problems under sparsity constraints. Near-optimum and reduced-complexity
suboptimum sparse (S-) TLS algorithms are developed to address the perturbed
compressive sampling (and the related dictionary learning) challenge, when
there is a mismatch between the true and adopted bases over which the unknown
vector is sparse. The novel S-TLS schemes also allow for perturbations in the
regression matrix of the least-absolute shrinkage and selection operator
(Lasso), and endow TLS approaches with the ability to cope with sparse,
under-determined "errors-in-variables" models. Interesting generalizations can
further exploit prior knowledge on the perturbations to obtain novel weighted
and structured S-TLS solvers. Analysis and simulations demonstrate the
practical impact of S-TLS in calibrating the mismatch effects of contemporary
grid-based approaches to cognitive radio sensing, and robust
direction-of-arrival estimation using antenna arrays.
Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing.
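The abstract describes an alternating treatment of the sparse vector and the matrix perturbation. A minimal numerical sketch under that reading (the rank-one E-update, the ISTA inner solver, and all parameter choices below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def s_tls(y, A, lam, outer=20, inner=200):
    """Block-coordinate sketch of sparse TLS:
    alternate a Lasso step in x with a closed-form step in the
    perturbation E, for 0.5||y-(A+E)x||^2 + 0.5||E||_F^2 + lam||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    E = np.zeros_like(A)
    for _ in range(outer):
        # x-step: ISTA on the currently estimated perturbed matrix A + E
        B = A + E
        L = np.linalg.norm(B, 2) ** 2  # Lipschitz constant of the gradient
        for _ in range(inner):
            x = soft(x - (B.T @ (B @ x - y)) / L, lam / L)
        # E-step: closed-form minimizer of ||y-(A+E)x||^2 + ||E||_F^2
        r = y - A @ x
        E = np.outer(r, x) / (1.0 + x @ x)
    return x, E
```

For fixed x, the E-subproblem has the rank-one minimizer used above, so each alternation can only decrease the combined objective.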
Sparse Packetized Predictive Control for Networked Control over Erasure Channels
We study feedback control over erasure channels with packet-dropouts. To
achieve robustness with respect to packet-dropouts, the controller transmits
data packets containing plant input predictions, which minimize a finite
horizon cost function. To reduce the data size of packets, we propose to adopt
sparsity-promoting optimizations, namely, ℓ1-ℓ2 and ℓ2-constrained
ℓ0 optimizations, for which efficient algorithms exist. We derive sufficient
conditions on design parameters, which guarantee (practical) stability of the
resulting feedback control systems when the number of consecutive
packet-dropouts is bounded.
Comment: IEEE Transactions on Automatic Control, Volume 59 (2014), Issue 7 (July), to appear.
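As a rough illustration of the ℓ1-ℓ2 idea: the controller minimizes a finite-horizon quadratic state cost plus an ℓ1 penalty that promotes zeros in the transmitted input predictions. The plant model, horizon stacking, and ISTA solver below are generic assumptions, not the paper's exact formulation:

```python
import numpy as np

def sparse_ppc(A, B, x0, N, lam, iters=500):
    """ell-1--ell-2 sketch: compute a sparse predicted input sequence
    over horizon N for the plant x_{k+1} = A x_k + B u_k (single input).
    Zeros in U shrink the packet that would be transmitted."""
    n = A.shape[0]
    # Stack predictions: X = G x0 + H U, with X = [x_1; ...; x_N]
    G = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    H = np.zeros((N * n, N))
    for k in range(1, N + 1):
        for j in range(k):
            H[(k - 1) * n:k * n, j] = (np.linalg.matrix_power(A, k - 1 - j) @ B).ravel()
    # ISTA on 0.5||G x0 + H U||^2 + lam ||U||_1
    L = np.linalg.norm(H, 2) ** 2 + 1e-12  # step-size from the Lipschitz constant
    U = np.zeros(N)
    c = G @ x0
    for _ in range(iters):
        g = H.T @ (c + H @ U)
        U = np.sign(U - g / L) * np.maximum(np.abs(U - g / L) - lam / L, 0.0)
    return U
```

Larger `lam` yields sparser packets at the price of a looser state cost, which is the trade-off the paper's stability conditions constrain.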
NUV-DoA: NUV Prior-based Bayesian Sparse Reconstruction with Spatial Filtering for Super-Resolution DoA Estimation
Achieving high-resolution Direction of Arrival (DoA) recovery typically
requires high Signal to Noise Ratio (SNR) and a sufficiently large number of
snapshots. This paper presents the NUV-DoA algorithm, which augments Bayesian
sparse reconstruction with spatial filtering for super-resolution DoA estimation. By
modeling each direction on the azimuth's grid with the sparsity-promoting
normal with unknown variance (NUV) prior, the non-convex optimization problem
is reduced to iteratively reweighted least-squares under a Gaussian model,
where the mean of the snapshots is a sufficient statistic. This approach not
only simplifies our solution but also accurately detects the DoAs. We utilize a
hierarchical approach for interference cancellation in multi-source scenarios.
Empirical evaluations show the superiority of NUV-DoA, especially in low SNRs,
compared to alternative DoA estimators.
Comment: 5 pages including references, 11 figures, submitted to ICASSP 2024 on Sep 6, 2023.
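A minimal real-valued sketch of the NUV mechanism: alternate a Gaussian (ridge-like) estimate of the coefficients with an EM-style update of the unknown per-coefficient variances; variances that shrink toward zero prune grid directions, which is what promotes sparsity. All modeling choices below are illustrative (the paper works with complex array snapshots and adds hierarchical spatial filtering on top):

```python
import numpy as np

def nuv_irls(y, A, sigma2, iters=100):
    """Sparse estimation with a NUV prior x_i ~ N(0, q_i), q_i unknown,
    for y = A x + N(0, sigma2 I), via EM-style variance updates."""
    m, n = A.shape
    q = np.ones(n)  # unknown prior variances, refined each iteration
    for _ in range(iters):
        Q = np.diag(q)
        S = A @ Q @ A.T + sigma2 * np.eye(m)
        # Posterior mean of x given the current variances
        x = Q @ A.T @ np.linalg.solve(S, y)
        # Posterior variances: diag(Q - Q A^T S^{-1} A Q)
        W = np.linalg.solve(S, A @ Q)
        post_var = q - np.einsum('ij,ji->i', Q @ A.T, W)
        # EM update: q_i <- E[x_i^2 | y]
        q = np.maximum(x ** 2 + post_var, 1e-12)
    return x, q
```

Each iteration is a weighted least-squares solve, which is the "iteratively reweighted least-squares" structure the abstract refers to.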
Low-complexity DCD-based sparse recovery algorithms
Sparse recovery techniques find applications in many areas, and their real-time implementation has recently become an important area of research. In this paper, we propose computationally efficient techniques based on dichotomous coordinate descent (DCD) iterations for recovery of sparse complex-valued signals. We first consider ℓ1 optimization that can incorporate \emph{a priori} information on the solution in the form of a weight vector. We propose a DCD-based algorithm for ℓ1 optimization with a fixed regularization parameter, and then efficiently incorporate it in reweighting iterations using a \emph{warm start} at each iteration. We then exploit homotopy by sampling the regularization parameter and arrive at an algorithm that, in each homotopy iteration, performs the optimization on the current support with a fixed regularization parameter and then updates the support by adding/removing elements. We propose efficient rules for adding and removing the elements. The performance of the homotopy algorithm is further improved with the reweighting. We then propose an algorithm for ℓ0 optimization that exploits homotopy for the regularization; it alternates between the least-squares (LS) optimization on the support and the support update, for which we also propose an efficient rule. The algorithm complexity is reduced when DCD iterations with a \emph{warm start} are used for the LS optimization, and, as most of the DCD operations are additions and bit-shifts, the algorithms are especially suited to real-time implementation. The proposed algorithms are investigated in channel estimation scenarios and compared with known sparse recovery techniques such as the matching pursuit (MP) and YALL1 algorithms. The numerical examples show that the proposed techniques achieve a mean-squared error smaller than that of the YALL1 algorithm and a complexity comparable to that of the MP algorithm.
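An exact-arithmetic sketch of the weighted ℓ1 coordinate-descent core with reweighting and warm starts (DCD replaces these exact coordinate updates with power-of-two step sizes so that only additions and bit-shifts are needed; the floating-point code below is a plain stand-in, with illustrative function names):

```python
import numpy as np

def cd_lasso(y, A, lam, w=None, x0=None, sweeps=100):
    """Coordinate descent for 0.5||y-Ax||^2 + lam * sum_i w_i |x_i|."""
    m, n = A.shape
    w = np.ones(n) if w is None else w
    x = np.zeros(n) if x0 is None else x0.copy()
    col_sq = np.sum(A ** 2, axis=0)
    r = y - A @ x  # residual, maintained coordinate by coordinate
    for _ in range(sweeps):
        for i in range(n):
            if col_sq[i] == 0:
                continue
            rho = A[:, i] @ r + col_sq[i] * x[i]
            xi = np.sign(rho) * max(abs(rho) - lam * w[i], 0.0) / col_sq[i]
            r += A[:, i] * (x[i] - xi)
            x[i] = xi
    return x

def reweighted_cd_lasso(y, A, lam, rounds=4, eps=1e-2):
    """Reweighting with a warm start: each round reuses the previous
    solution as x0 and sharpens the weights around its support."""
    x, w = None, None
    for _ in range(rounds):
        x = cd_lasso(y, A, lam, w=w, x0=x)
        w = 1.0 / (np.abs(x) + eps)
    return x
```

The warm start makes each reweighting round cheap, since only a few coordinates change once the support has stabilized.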
Recovery under Side Constraints
This paper addresses sparse signal reconstruction under various types of
structural side constraints with applications in multi-antenna systems. Side
constraints may result from prior information on the measurement system and the
sparse signal structure. They may involve the structure of the sensing matrix,
the structure of the non-zero support values, the temporal structure of the
sparse representation vector, and the nonlinear measurement structure. First,
we demonstrate how a priori information in the form of structural side
constraints influences recovery guarantees (null space properties) using
L1-minimization.
Furthermore, for constant modulus signals, signals with row-, block- and
rank-sparsity, as well as non-circular signals, we illustrate how structural
prior information can be used to devise efficient algorithms with improved
recovery performance and reduced computational complexity. Finally, we address
the measurement system design for linear and nonlinear measurements of sparse
signals. Moreover, we discuss the linear mixing matrix design based on
coherence minimization. Then we extend our focus to nonlinear measurement
systems where we design parallel optimization algorithms to efficiently compute
stationary points in the sparse phase retrieval problem with and without
dictionary learning.
Source localization via time difference of arrival
Accurate localization of a signal source, based on the signals collected by a number of receiving sensors deployed in the area surrounding the source, is a problem of interest in various fields. This dissertation aims at exploring different techniques to improve the localization accuracy of non-cooperative sources, i.e., sources for which the specific transmitted symbols and the time of the transmitted signal are unknown to the receiving sensors. For the localization of non-cooperative sources, the time difference of arrival (TDOA) of the signals received at pairs of sensors is typically employed.
A two-stage localization method in multipath environments is proposed. During the first stage, the TDOA of the signals received at pairs of sensors is estimated. In the second stage, the actual location is computed from the TDOA estimates. This latter stage is referred to as hyperbolic localization, and it generally involves a non-convex optimization. For the first stage, a TDOA estimation method that exploits the sparsity of multipath channels is proposed. This is formulated as an ℓ1-regularization problem, where the ℓ1-norm is used as a channel sparsity constraint. For the second stage, three methods are proposed that offer high accuracy at different computational costs. The first method takes a semi-definite relaxation (SDR) approach to relax the hyperbolic localization to a convex optimization. The second method follows a linearized formulation of the problem and seeks a biased estimate of improved accuracy. A third method is proposed to exploit the source sparsity. With this, the hyperbolic localization is formulated as an ℓ1-regularization problem, where the ℓ1-norm is used as a source sparsity constraint. The proposed methods compare favorably to other existing methods, each of them having its own advantages. The SDR method has the advantage of simplicity and low computational cost. The second method may perform better than the SDR approach in some situations, but at the price of a higher computational cost. The ℓ1-regularization method may outperform the first two, but is sensitive to the choice of a regularization parameter. The proposed two-stage localization approach is shown to deliver higher accuracy and robustness to noise, compared to existing TDOA localization methods.
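The second-stage linearization can be illustrated with the standard trick of treating the reference range as an extra unknown, which turns the hyperbolic TDOA equations into a linear least-squares problem (a generic sketch, not necessarily the dissertation's exact biased estimator):

```python
import numpy as np

def tdoa_linear_ls(sensors, d):
    """Linearized hyperbolic localization. sensors: (M, dim) positions,
    with sensor 0 as the reference; d[i-1] is the range difference
    ||s - p_i|| - ||s - p_0|| for i = 1..M-1 (TDOA times propagation speed).
    Treating r0 = ||s - p_0|| as an unknown gives, for each i:
        2 (p_i - p_0)^T s + 2 d_i r0 = ||p_i||^2 - ||p_0||^2 - d_i^2,
    a linear system in (s, r0)."""
    p0 = sensors[0]
    P = sensors[1:] - p0
    d = np.asarray(d, dtype=float)
    A = np.hstack([2 * P, 2 * d[:, None]])
    b = np.sum(sensors[1:] ** 2, axis=1) - np.sum(p0 ** 2) - d ** 2
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:-1], theta[-1]  # source estimate, range to reference
```

With noiseless range differences the linearized equations hold exactly, so least squares recovers the source; with noise, the estimate is biased, which motivates the improved second-stage methods described above.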
A single-stage source localization method is explored. The approach is coherent in the sense that, in addition to the TDOA information, it utilizes the relative carrier phases of the received signals among pairs of sensors. A location estimator is constructed based on a maximum likelihood metric. The potential accuracy improvement of the coherent approach is shown through the Cramér-Rao lower bound (CRB). However, the technique has to contend with high peak sidelobes in the localization metric, especially at low signal-to-noise ratio (SNR). Employing a small antenna array at each sensor is shown to lower the sidelobe level in the localization metric.
Finally, the performance of time delay and amplitude estimation from samples of the received signal taken at rates lower than the conventional Nyquist rate is evaluated. To this end, a CRB is developed and its variation with system parameters is analyzed. It is shown that while with noiseless low-rate sampling there is no estimation accuracy loss compared to Nyquist sampling, in the presence of additive noise the performance degrades significantly. However, increasing the low sampling rate by a small factor leads to significant performance improvement, especially for time delay estimation.
A Coordinate Descent Approach to Atomic Norm Minimization
Atomic norm minimization is of great interest in various applications of
sparse signal processing including super-resolution line-spectral estimation
and signal denoising. In practice, atomic norm minimization (ANM) is formulated
as a semi-definite programming (SDP) which is generally hard to solve. This
work introduces a low-complexity, matrix-free method for solving ANM. The
method uses the framework of coordinate descent and exploits the
sparsity-induced nature of atomic-norm regularization. Specifically, an
equivalent, non-convex formulation of ANM is first proposed. It is then proved
that applying the coordinate descent framework on the non-convex formulation
leads to convergence to the global optimal point. For the case of a single
measurement vector of length N in the discrete Fourier transform (DFT) basis,
the complexity of each iteration in the coordinate descent procedure is
O(N log N), rendering the proposed method efficient even for large-scale
problems. The
proposed coordinate descent framework can be readily modified to solve a
variety of ANM problems, including multi-dimensional ANM with multiple
measurement vectors. It is easy to implement and can essentially be applied to
any atomic set as long as a corresponding rank-1 problem can be solved.
Through extensive numerical simulations, it is verified that for solving sparse
problems the proposed method is much faster than the alternating direction
method of multipliers (ADMM) or a customized interior-point SDP solver.
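For context, the SDP that the abstract refers to can be written, for length-N line spectra, in the standard form from the general ANM literature (not this paper's non-convex reformulation); here T(u) denotes the Hermitian Toeplitz matrix with first column u:

```latex
\|x\|_{\mathcal{A}} \;=\;
\inf_{u,\,t}\;\Bigl\{\,\tfrac{1}{2N}\,\mathrm{Tr}\bigl(T(u)\bigr)
  + \tfrac{t}{2}
  \;:\;
  \begin{bmatrix} T(u) & x \\ x^{\mathsf H} & t \end{bmatrix} \succeq 0
\,\Bigr\}
```

Solving this with a generic interior-point method scales poorly in N, which is what motivates the matrix-free coordinate descent approach above.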