Robust Adaptive Beamforming for General-Rank Signal Model with Positive Semi-Definite Constraint via POTDC
The robust adaptive beamforming (RAB) problem for general-rank signal model
with an additional positive semi-definite constraint is considered. Using the
principle of worst-case performance optimization, such a RAB problem leads to
a difference-of-convex functions (DC) optimization problem. The existing
approaches for solving the resulting non-convex DC problem are based on
approximations and find only suboptimal solutions. Here we solve the non-convex
DC problem rigorously and give arguments suggesting that the solution is
globally optimal. Particularly, we rewrite the problem as the minimization of a
one-dimensional optimal value function whose corresponding optimization problem
is non-convex. Then, the optimal value function is replaced with another
equivalent one, for which the corresponding optimization problem is convex. The
new one-dimensional optimal value function is minimized iteratively via the
polynomial-time DC (POTDC) algorithm. We show that our solution satisfies the
Karush-Kuhn-Tucker (KKT) optimality conditions, and there is strong evidence
that such a solution is also globally optimal. Towards this conclusion, we
conjecture that the new optimal value function is convex. The new RAB method
shows superior performance compared to other state-of-the-art general-rank RAB
methods.
Comment: 29 pages, 7 figures, 2 tables, submitted to IEEE Trans. Signal Processing on August 201
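The linearization idea behind DC programming can be illustrated on a toy one-dimensional problem. The sketch below is a generic convex-concave procedure with assumed functions g(x) = x^4 and h(x) = x^2, not the paper's POTDC algorithm: at each iteration the subtracted convex part h is replaced by its linearization at the current iterate, and the resulting convex surrogate is minimized in closed form.

```python
import math

# Toy DC program: minimize f(x) = g(x) - h(x) with g(x) = x**4 and
# h(x) = x**2 (both convex; assumed toy choices, not the paper's problem).
# Linearizing h at x_k gives the convex surrogate x**4 - 2*x_k*x + const,
# whose minimizer solves 4*x**3 = 2*x_k, i.e. x = (x_k / 2) ** (1/3).
def dc_minimize(x0, n_iter=100):
    x = x0
    for _ in range(n_iter):
        x = (x / 2.0) ** (1.0 / 3.0)  # closed-form inner convex minimization
    return x

x_star = dc_minimize(1.0)
# the iterates converge to a stationary (KKT) point of f: 4x^3 - 2x = 0
```

In this toy case the stationary point x = 1/sqrt(2) happens to be the global minimizer; in general, as the abstract notes, such iterations only guarantee KKT points, and global optimality requires an extra argument.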
Sidelobe Control in Collaborative Beamforming via Node Selection
Collaborative beamforming (CB) is a power efficient method for data
communications in wireless sensor networks (WSNs) which aims at increasing the
transmission range in the network by radiating the power from a cluster of
sensor nodes in the directions of the intended base station(s) or access
point(s) (BSs/APs). The average CB beampattern exhibits deterministic behavior
and can be used for characterizing/controlling the transmission in the
intended direction(s), since the mainlobe of the CB beampattern is independent
of the particular random node locations. However, the CB for a cluster formed
by a limited number of collaborative nodes results in a sample beampattern with
sidelobes that strongly depend on the particular node locations. High
sidelobes can cause unacceptable interference when they occur in the directions
of unintended BSs/APs. Therefore, sidelobe control in CB has the potential to
increase the network capacity and wireless channel availability by decreasing
the interference. Traditional sidelobe control techniques are proposed for
centralized antenna arrays and, therefore, are not suitable for WSNs. In this
paper, we show that distributed, scalable, and low-complexity sidelobe control
techniques suitable for CB in WSNs can be developed based on a node selection
technique which makes use of the randomness of the node locations. A node
selection algorithm with low-rate feedback is developed to search over
different node combinations. The performance of the proposed algorithm is
analyzed in terms of the average number of trials required to select the
collaborative nodes and the resulting interference. Our simulation results
confirm the theoretical analysis and show that the interference is
significantly reduced when node selection is used with CB.
Comment: 30 pages, 10 figures, submitted to the IEEE Trans. Signal Processing
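The dependence of the CB beampattern on random node locations can be sketched numerically. The node count, cluster radius (in wavelengths), and random seed below are assumed toy values; the sketch only shows that phase pre-compensation places the mainlobe at the intended direction regardless of the random placement, while the sidelobes are realization-dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16    # number of collaborative nodes (assumed toy value)
R = 2.0   # cluster radius in wavelengths (assumed toy value)

# uniform random node positions in a disk of radius R
r = R * np.sqrt(rng.random(N))
psi = 2 * np.pi * rng.random(N)
x, y = r * np.cos(psi), r * np.sin(psi)

def beampattern(theta, theta0=0.0):
    # far-field array factor with node phases pre-compensated toward theta0
    k = 2 * np.pi  # wavenumber for unit wavelength
    phase = k * (x * (np.cos(theta) - np.cos(theta0))
                 + y * (np.sin(theta) - np.sin(theta0)))
    return np.abs(np.exp(1j * phase).sum()) ** 2 / N ** 2

thetas = np.linspace(-np.pi, np.pi, 721)
p = np.array([beampattern(t) for t in thetas])
# normalized pattern peaks with unit gain at the intended direction theta0 = 0;
# the sidelobe levels away from theta0 depend on this particular node draw
```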
Cramer-Rao Bound for Sparse Signals Fitting the Low-Rank Model with Small Number of Parameters
In this paper, we consider signals with a low-rank covariance matrix which
reside in a low-dimensional subspace and can be written in terms of a finite
(small) number of parameters. Although such signals do not necessarily have a
sparse representation in a finite basis, they possess a sparse structure which
makes it possible to recover the signal from compressed measurements. We study
the statistical performance bound for parameter estimation in the low-rank
signal model from compressed measurements. Specifically, we derive the
Cramer-Rao bound (CRB) for a generic low-rank model and we show that the number
of compressed samples needs to be larger than the number of sources for the
existence of an unbiased estimator with finite estimation variance. We further
consider the applications to direction-of-arrival (DOA) and spectral estimation
which fit into the low-rank signal model. We also investigate the effect of
compression on the CRB by considering numerical examples of the DOA estimation
scenario, and show how the CRB increases as the compression increases or,
equivalently, as the number of compressed samples is reduced.
Comment: 14 pages, 1 figure, submitted to IEEE Signal Processing Letters on December 201
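The effect of compression on a CRB can be sketched on a minimal single-parameter model. The sketch assumes a unit-amplitude complex sinusoid of unknown frequency observed through unnormalized Gaussian compression matrices in complex Gaussian noise; with fewer rows the matrix collects less signal energy, so the Fisher information drops and the CRB grows. This illustrates only the qualitative trend, not the paper's generic low-rank CRB analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64        # ambient signal length (assumed toy value)
omega = 0.5   # true frequency of the cisoid, rad/sample (assumed)
sigma2 = 1.0  # complex Gaussian noise variance
n = np.arange(N)
ds = 1j * n * np.exp(1j * omega * n)  # derivative of s(omega) w.r.t. omega

def crb(M):
    """CRB for omega from M compressed samples y = Phi @ s(omega) + noise."""
    Phi = rng.normal(size=(M, N))                   # random compression matrix
    g = Phi @ ds
    fim = (2.0 / sigma2) * np.real(np.vdot(g, g))   # scalar Fisher information
    return 1.0 / fim

crbs = [crb(M) for M in (64, 32, 16, 8)]
# fewer compressed samples -> less Fisher information -> larger CRB (on average)
```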
Segmented compressed sampling for analog-to-information conversion: Method and performance analysis
A new segmented compressed sampling method for analog-to-information
conversion (AIC) is proposed. An analog signal measured by a number of parallel
branches of mixers and integrators (BMIs), each characterized by a specific
random sampling waveform, is first segmented in time into several segments. Then
the sub-samples collected over different segments and different BMIs are reused
so that a larger number of samples than the number of BMIs is collected. This
technique is shown to be equivalent to extending the measurement matrix, which
consists of the BMI sampling waveforms, by adding new rows without actually
increasing the number of BMIs. We prove that the extended measurement matrix
satisfies the restricted isometry property with overwhelming probability if the
original measurement matrix of BMI sampling waveforms satisfies it. We also
show that the signal recovery performance can be improved significantly if our
segmented AIC is used for sampling instead of the conventional AIC. Simulation
results verify the effectiveness of the proposed segmented compressed sampling
method and the validity of our theoretical studies.
Comment: 32 pages, 5 figures, submitted to the IEEE Transactions on Signal Processing in April 201
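The sub-sample reuse can be sketched numerically. Assuming toy sizes (4 BMIs, 4 time segments, a length-32 discretized signal) and an illustrative cyclic-shift reuse schedule (the paper's actual scheme may differ), each reused combination of sub-samples is verified to equal a new row of the extended measurement matrix applied to the signal, i.e. extra measurements with no extra hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
K, L, N = 4, 4, 32  # K BMIs, L time segments, N-sample grid (assumed toy sizes)
seg = N // L
w = rng.choice([-1.0, 1.0], size=(K, N))  # each BMI's random sampling waveform
x = rng.normal(size=N)                    # the (discretized) analog input

# sub-sample of BMI k on segment l: correlate waveform and signal segments
sub = np.array([[w[k, l*seg:(l+1)*seg] @ x[l*seg:(l+1)*seg]
                 for l in range(L)] for k in range(K)])

# conventional AIC output: each BMI sums its own segments (K samples total)
y_conv = sub.sum(axis=1)

# segmented AIC: reuse sub-samples across BMIs -- here an illustrative
# cyclic-shift schedule: extra sample m combines segment l of BMI (l + m) mod K
y_ext = np.array([sum(sub[(l + m) % K, l] for l in range(L))
                  for m in range(1, K)])

# each reused combination equals a *new row* of the measurement matrix:
rows = np.array([np.concatenate([w[(l + m) % K, l*seg:(l+1)*seg]
                                 for l in range(L)]) for m in range(1, K)])
assert np.allclose(y_ext, rows @ x)
```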
Subspace Leakage Analysis and Improved DOA Estimation with Small Sample Size
Classical methods of DOA estimation such as the MUSIC algorithm are based on
estimating the signal and noise subspaces from the sample covariance matrix.
For a small number of samples, such methods are prone to performance
breakdown, as the sample covariance matrix can deviate significantly from the true
covariance matrix. In this paper, the problem of DOA estimation performance
breakdown is investigated. We consider the structure of the sample covariance
matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown
in the threshold region is associated with the subspace leakage where some
portion of the true signal subspace resides in the estimated noise subspace. In
this paper, the subspace leakage is theoretically derived. We also propose a
two-step method which improves the performance by modifying the sample
covariance matrix such that the amount of the subspace leakage is reduced.
Furthermore, we introduce a phenomenon termed root-swap, which occurs in the
root-MUSIC algorithm in the small-sample-size region and degrades the
performance of DOA estimation. A new method is then proposed to alleviate this problem.
Numerical examples and simulation results are given for uncorrelated and
correlated sources to illustrate the improvement achieved by the proposed
methods. Moreover, the proposed algorithms are combined with the pseudo-noise
resampling method to further improve the performance.
Comment: 37 pages, 10 figures, submitted to the IEEE Transactions on Signal Processing in July 201
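The standard root-MUSIC pipeline discussed above (sample covariance, noise-subspace estimate, polynomial rooting) can be sketched for a half-wavelength uniform linear array. The sensor count, snapshot count, source directions, and noise level below are assumed toy values, and the snippet is plain root-MUSIC, not the paper's improved two-step or root-swap-corrected estimators.

```python
import numpy as np

rng = np.random.default_rng(3)
M, T = 8, 200                      # sensors, snapshots (assumed toy values)
doas = np.deg2rad([-10.0, 20.0])   # true source directions (assumed)

# steering matrix of a half-wavelength-spaced ULA
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
S = (rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T))) / np.sqrt(2)
X = A @ S + noise

R = X @ X.conj().T / T             # sample covariance matrix
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, : M - 2]            # noise subspace (number of sources known)
C = En @ En.conj().T

# root-MUSIC polynomial: coefficients are the diagonal sums of En En^H
coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
roots = np.roots(coeffs)
roots = roots[np.abs(roots) < 1]   # keep roots inside the unit circle
# the two roots closest to the unit circle carry the DOA estimates
close = roots[np.argsort(1 - np.abs(roots))[:2]]
est = np.sort(np.rad2deg(np.arcsin(np.angle(close) / np.pi)))
```

With many snapshots this recovers the two directions accurately; the paper's threshold-region effects (subspace leakage, root-swap) appear when T is made small relative to M.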