Error Gradient-based Variable-Lp Norm Constraint LMS Algorithm for Sparse System Identification
Sparse adaptive filtering has gained much attention due to its wide
applicability in the field of signal processing. Among the main algorithm
families, sparse norm constraint adaptive filters have developed rapidly in
recent years. However, when applied to system identification, most prior work
on sparse norm constraint adaptive filtering struggles to adapt to the
sparsity of the systems to be identified. To address this
problem, we propose a novel variable p-norm constraint least mean square (LMS)
algorithm, which serves as a variant of the conventional Lp-LMS algorithm
established for sparse system identification. The parameter p is iteratively
adjusted by the gradient descent method applied to the instantaneous square
error. Numerical simulations show that this new approach achieves better
performance than the traditional Lp-LMS and LMS algorithms in terms of
steady-state error and convergence rate.
Comment: Submitted to 41st IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP 2016), 5 pages, 2 tables, 2 figures, 15
equations, 15 references
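The approach described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the regularization eps inside the attractor, the chained-gradient update of p, and all constants are assumptions.

```python
import numpy as np

def lp_lms(x, d, n_taps, mu=0.01, gamma=5e-4, mu_p=1e-3, eps=0.05, p0=1.0):
    """Sketch of an Lp-norm-constraint LMS with a gradient-adapted exponent p."""
    w = np.zeros(n_taps)
    p = p0
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, newest sample first
        e = d[n] - w @ u                     # a-priori estimation error
        a = np.abs(w)
        # regularized gradient of ||w||_p^p, acting as a zero attractor
        attractor = p * np.sign(w) / (eps + a) ** (1.0 - p)
        # chain-rule surrogate for d(e^2)/dp through the previous attractor step
        d_att = np.sign(w) / (eps + a) ** (1.0 - p) * (1.0 + p * np.log(eps + a))
        p = float(np.clip(p - mu_p * gamma * e * (u @ d_att), 0.1, 2.0))
        w = w + mu * e * u - gamma * attractor
    return w, p

# demo: identify a sparse 16-tap system
rng = np.random.default_rng(0)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, p_hat = lp_lms(x, d, 16)
```

With p fixed at 1 this reduces to a plain l1 zero attractor; letting p drift lets the penalty track how sparse the identified system actually is.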
Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm
As one of the recently proposed algorithms for sparse system identification,
the l0 norm constraint Least Mean Square (l0-LMS) algorithm modifies the cost
function of the traditional method with a penalty on tap-weight sparsity. The
performance of l0-LMS is quite attractive compared with its various
precursors. However, there has been no detailed study of its performance. This
paper presents a thorough, all-around theoretical performance analysis of
l0-LMS for white Gaussian input data based on some reasonable assumptions.
Expressions for steady-state mean square deviation (MSD) are derived and
discussed with respect to algorithm parameters and system sparsity. The
parameter selection rule is established for achieving the best performance.
Approximated with a Taylor series, the instantaneous behavior is also derived.
In addition, the relationship between l0-LMS and some previous works and the
sufficient conditions for l0-LMS to accelerate convergence are established.
Finally, all of the theoretical results are compared with simulations and are
shown to agree well over a large range of parameter settings.
Comment: 31 pages, 8 figures
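The recursion under analysis is standard enough to sketch. The piecewise-linear attractor below is the usual first-order approximation of the exponential l0 surrogate sum(1 - exp(-beta*|w_i|)); the constants are illustrative, not values from the paper.

```python
import numpy as np

def l0_lms(x, d, n_taps, mu=0.01, kappa=5e-5, beta=5.0):
    """Sketch of l0-LMS: LMS plus a piecewise-linear zero attractor."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        # attraction is active only in the band |w_i| <= 1/beta, so large
        # (truly nonzero) taps are left unbiased
        g = 2.0 * beta * np.sign(w) - 2.0 * beta ** 2 * w
        g[np.abs(w) > 1.0 / beta] = 0.0
        w = w + mu * e * u - kappa * g
    return w

rng = np.random.default_rng(1)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = l0_lms(x, d, 16)
```

The zero-attraction intensity kappa and the band width 1/beta are exactly the parameters whose selection rules the analysis above addresses.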
An Improved Variable Step-size Zero-point Attracting Projection Algorithm
This paper proposes an improved variable step-size (VSS) scheme for
zero-point attracting projection (ZAP) algorithm. The proposed VSS is
proportional to the sparseness difference between filter coefficients and the
true impulse response. Meanwhile, it works for both sparse and non-sparse
system identification, and simulation results demonstrate that the proposed
algorithm could provide both faster convergence rate and better tracking
ability than previous ones.Comment: 5 pages, ICASSP 2015. arXiv admin note: substantial text overlap with
arXiv:1312.261
Reweighted l1-norm Penalized LMS for Sparse Channel Estimation and Its Analysis
A new reweighted l1-norm penalized least mean square (LMS) algorithm for
sparse channel estimation is proposed and studied in this paper. Since the
standard LMS algorithm does not take into account the sparsity information
about the channel impulse response (CIR), sparsity-aware modifications of the
LMS algorithm aim at outperforming the standard LMS by introducing a penalty
term into the standard LMS cost function which forces the solution to be
sparse. Our reweighted l1-norm penalized LMS algorithm additionally introduces
a reweighting of the CIR coefficient estimates to promote a sparse solution
even more and to approximate the l0 pseudo-norm more closely. We provide an
in-depth quantitative analysis of the reweighted l1-norm penalized LMS
algorithm. An expression for the excess
mean square error (MSE) of the algorithm is also derived which suggests that
under the right conditions, the reweighted l1-norm penalized LMS algorithm
outperforms the standard LMS, which is expected. However, our quantitative
analysis also answers the question of what is the maximum sparsity level in the
channel for which the reweighted l1-norm penalized LMS algorithm is better than
the standard LMS. Simulation results showing the better performance of the
reweighted l1-norm penalized LMS algorithm compared to other existing LMS-type
algorithms are given.
Comment: 28 pages, 4 figures, 1 table, Submitted to Signal Processing on June
201
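The reweighting idea can be sketched directly: each tap's shrinkage is scaled by 1/(eps + |w_i|), so small taps are pushed toward zero much harder than large ones. A minimal numpy illustration with assumed constants:

```python
import numpy as np

def rl1_lms(x, d, n_taps, mu=0.01, rho=2e-5, eps=0.05):
    """Sketch of a reweighted l1-norm penalized LMS."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        # reweighted shrinkage: the pull on a tap decays with its magnitude,
        # so the penalty approximates the l0 pseudo-norm
        w = w + mu * e * u - rho * np.sign(w) / (eps + np.abs(w))
    return w

rng = np.random.default_rng(2)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7   # sparse CIR
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = rl1_lms(x, d, 16)
```

Setting eps large recovers ordinary ZA-LMS behaviour; the smaller eps is, the closer the penalty mimics an l0 count, which is the trade-off the quantitative analysis above characterizes.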
Adaptive Sparse Channel Estimation for Time-Variant MIMO-OFDM Systems
Accurate channel state information (CSI) is required for coherent detection
in time-variant multiple-input multiple-output (MIMO) communication systems
using orthogonal frequency division multiplexing (OFDM) modulation. One
low-complexity and stable adaptive channel estimation (ACE) approach is the
normalized least mean square (NLMS)-based ACE. However, it cannot exploit the
inherent sparsity of the MIMO channel, which is characterized by a few dominant
channel taps. In this paper, we propose two adaptive sparse channel estimation
(ASCE) methods to take advantage of such sparse structure information for
time-variant MIMO-OFDM systems. Unlike the traditional NLMS-based method, the
two proposed methods are implemented by introducing sparse penalties into the
cost function of the NLMS algorithm. Computer simulations confirm obvious
performance advantages of the proposed ASCEs over the traditional ACE.
Comment: 6 pages, 10 figures, conference paper
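The abstract does not name the specific penalties, so the following is only a generic illustration of the idea for a single channel: an NLMS update with an l1 zero attractor bolted onto it (zero-attracting NLMS), with assumed constants.

```python
import numpy as np

def za_nlms(x, d, n_taps, mu=0.5, rho=1e-4, delta=1e-2):
    """Sketch of a sparsity-penalized NLMS (zero-attracting NLMS)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        # normalization makes the step invariant to input scaling;
        # rho*sign(w) is the sparse penalty's zero attractor
        w = w + mu * e * u / (delta + u @ u) - rho * np.sign(w)
    return w

rng = np.random.default_rng(3)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7   # a few dominant channel taps
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = za_nlms(x, d, 16)
```

In a MIMO-OFDM setting one such estimator would run per transmit-receive antenna pair; that extension is omitted here.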
Adaptive Combination of l0 LMS Adaptive Filters for Sparse System Identification in Fluctuating Noise Power
Recently, the l0-least mean square (l0-LMS) algorithm has been proposed to
identify sparse linear systems by employing a sparsity-promoting continuous
function as an approximation of the l0 pseudonorm penalty. However, the
performance of this algorithm is sensitive to the appropriate choice of the
parameter responsible for the zero-attracting intensity. The optimum choice
for this parameter depends on the signal-to-noise ratio (SNR) prevailing in
the system. Thus, it becomes difficult to fix a suitable value for this
parameter, particularly in a situation where the SNR fluctuates over time. In
this work, we
propose several adaptive combinations of differently parameterized l0-LMS to
get an overall satisfactory performance independent of the SNR, and discuss
some issues relevant to these combination structures. We also demonstrate an
efficient partial update scheme which not only reduces the number of
computations per iteration, but also achieves some interesting performance gain
compared with the full update case. Then, we propose a new recursive least
squares (RLS)-type rule to update the combining parameter more efficiently.
Finally, we extend the combination of two filters to a combination of M
adaptive filters, which manifests further improvement for M > 2.
Comment: 15 pages, 15 figures
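A two-filter convex combination of this kind can be sketched as follows. The mixing weight lam = sigmoid(a) is adapted by stochastic gradient descent on the combined squared error; the RLS-type combiner update and the partial-update scheme from the abstract are not reproduced, and all constants are assumptions.

```python
import numpy as np

def combo_l0_lms(x, d, n_taps, mu=0.01, kappas=(1e-5, 2e-4), beta=5.0, mu_a=20.0):
    """Sketch of an adaptive convex combination of two l0-LMS filters."""
    w1, w2, a = np.zeros(n_taps), np.zeros(n_taps), 0.0
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        y1, y2 = w1 @ u, w2 @ u
        lam = 1.0 / (1.0 + np.exp(-a))              # mixing weight in (0, 1)
        e = d[n] - (lam * y1 + (1.0 - lam) * y2)    # combined output error
        # gradient step on the mixing parameter; clipping keeps the sigmoid
        # from saturating completely
        a = float(np.clip(a + mu_a * e * (y1 - y2) * lam * (1.0 - lam), -4.0, 4.0))
        for w, kappa in ((w1, kappas[0]), (w2, kappas[1])):
            ei = d[n] - w @ u                        # each filter adapts on its own error
            g = 2.0 * beta * np.sign(w) - 2.0 * beta ** 2 * w
            g[np.abs(w) > 1.0 / beta] = 0.0
            w += mu * ei * u - kappa * g             # l0-LMS component update
    lam = 1.0 / (1.0 + np.exp(-a))
    return lam * w1 + (1.0 - lam) * w2, lam

rng = np.random.default_rng(4)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_eq, lam = combo_l0_lms(x, d, 16)
```

Because the combiner tracks whichever zero-attraction strength currently fits the noise level, the combined filter stays close to the better component as the SNR fluctuates.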
Sparsity Aware Normalized Least Mean p-power Algorithms with Correntropy Induced Metric Penalty
For identifying systems in non-Gaussian impulsive noise, the normalized LMP
(NLMP) algorithm has been proposed to combat impulse-induced instability.
However, the standard algorithm does not consider the inherent sparse
structure of the unknown system. To exploit sparsity as well as to mitigate the
impulsive noise, this paper proposes a sparse NLMP algorithm, i.e., Correntropy
Induced Metric (CIM) constraint based NLMP (CIMNLMP) algorithm. Building on
this first algorithm, we further propose an improved CIM constraint variable
regularized NLMP (CIMVRNLMP) algorithm by utilizing a variable regularization
parameter (VRP) selection method, which can further adjust the convergence
speed and steady-state error. Numerical simulations are given to confirm the
proposed algorithms.
Comment: 5 pages, 4 figures, submitted for DSP201
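A rough sketch of the combination of the two ingredients: the least-mean-p-power error nonlinearity sign(e)|e|^(p-1), which tempers impulsive samples, and a Gaussian-kernel CIM-style attractor w*exp(-w^2/(2*sigma^2)), which pulls small taps to zero while leaving large taps essentially untouched. The normalization by input energy and every constant here are assumptions, not the paper's formulation.

```python
import numpy as np

def cim_nlmp(x, d, n_taps, p=1.2, mu=0.05, rho=1e-4, sigma=0.1, delta=1e-2):
    """Sketch of an NLMP filter with a CIM-style sparsity penalty."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        psi = np.sign(e) * np.abs(e) ** (p - 1.0)    # LMP error nonlinearity
        # Gaussian-kernel attractor: strong pull near zero, negligible on
        # large taps
        cim_grad = w * np.exp(-w ** 2 / (2.0 * sigma ** 2))
        w = w + mu * psi * u / (delta + u @ u) - rho * cim_grad
    return w

rng = np.random.default_rng(5)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7
x = rng.standard_normal(6000)
noise = 0.01 * rng.standard_normal(len(x))
impulses = rng.random(len(x)) < 0.01                 # 1% impulsive outliers
noise[impulses] += 5.0 * rng.standard_normal(impulses.sum())
d = np.convolve(x, h)[:len(x)] + noise
w_hat = cim_nlmp(x, d, 16)
```

With p < 2 the update magnitude grows only like |e|^(p-1), so an isolated outlier cannot destabilize the weights the way it would under plain LMS.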
Study of Distributed Spectrum Estimation Using Alternating Mixed Discrete-Continuous Adaptation
This paper proposes a distributed alternating mixed discrete-continuous
(DAMDC) algorithm to approach the oracle algorithm based on the diffusion
strategy for parameter and spectrum estimation over sensor networks. A least
mean squares (LMS) type algorithm that obtains the oracle matrix adaptively is
developed and compared with the existing sparsity-aware and conventional
algorithms. The proposed algorithm exhibits improved performance in terms of
mean square deviation and power spectrum estimation accuracy. Numerical results
show that the DAMDC algorithm achieves excellent performance.
Comment: 11 pages, 5 figures
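For readers unfamiliar with the diffusion strategy the abstract builds on, here is a minimal adapt-then-combine (ATC) diffusion LMS over a small network. It illustrates only the diffusion mechanism, not the mixed discrete-continuous oracle adaptation of DAMDC; topology and constants are assumptions.

```python
import numpy as np

def diffusion_lms(X, D, A, n_taps, mu=0.02):
    """Sketch of adapt-then-combine (ATC) diffusion LMS over a sensor network.

    A[l, k] is the combination weight node k assigns to node l's estimate;
    columns of A sum to one.
    """
    n_nodes = A.shape[0]
    W = np.zeros((n_nodes, n_taps))
    for n in range(n_taps - 1, X.shape[1]):
        Psi = np.empty_like(W)
        for k in range(n_nodes):                      # adapt: local LMS step
            u = X[k, n - n_taps + 1:n + 1][::-1]
            e = D[k, n] - W[k] @ u
            Psi[k] = W[k] + mu * e * u
        for k in range(n_nodes):                      # combine neighbour estimates
            W[k] = A[:, k] @ Psi
    return W

rng = np.random.default_rng(6)
h = np.zeros(8); h[1] = 0.9; h[5] = -0.5             # common parameter vector
n_nodes, n_samples = 4, 3000
X = rng.standard_normal((n_nodes, n_samples))
D = np.stack([np.convolve(X[k], h)[:n_samples] for k in range(n_nodes)])
D += 0.01 * rng.standard_normal(D.shape)
A = np.full((n_nodes, n_nodes), 1.0 / n_nodes)       # fully connected averaging
W = diffusion_lms(X, D, 8 * [None] and A, 8) if False else diffusion_lms(X, D, A, 8)
```

The combine step averages out each node's gradient noise, which is why diffusion estimates outperform any node running LMS in isolation.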
Convergence Analysis of l0-RLS Adaptive Filter
This paper presents a first- and second-order convergence analysis of the
sparsity-aware l0-RLS adaptive filter. Theorems 1 and 2 state the steady-state
values of the mean and mean square deviation of the adaptive filter weight
vector.
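The filter under analysis can be sketched as a standard exponentially weighted RLS recursion followed by an l0-style zero attraction; the attractor strength, forgetting factor, and initialization below are illustrative assumptions.

```python
import numpy as np

def l0_rls(x, d, n_taps, lam=0.995, delta=1e2, gamma=5e-5, beta=5.0):
    """Sketch of a sparsity-aware l0-RLS: RLS update plus a zero attractor."""
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)                        # inverse-correlation estimate
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        Pu = P @ u
        k = Pu / (lam + u @ Pu)                       # RLS gain vector
        e = d[n] - w @ u
        w = w + k * e
        P = (P - np.outer(k, Pu)) / lam
        # l0-style piecewise-linear zero attraction on the updated weights
        g = 2.0 * beta * np.sign(w) - 2.0 * beta ** 2 * w
        g[np.abs(w) > 1.0 / beta] = 0.0
        w = w - gamma * g
    return w

rng = np.random.default_rng(7)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7
x = rng.standard_normal(3000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = l0_rls(x, d, 16)
```

The analysis referenced above concerns exactly the steady-state mean and mean-square deviation of the weight vector w produced by this kind of recursion.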
Improved adaptive sparse channel estimation using mixed square/fourth error criterion
The sparse channel estimation problem is one of the challenging technical
issues in stable broadband wireless communications. Based on the square error
criterion
(SEC), adaptive sparse channel estimation (ASCE) methods, e.g., zero-attracting
least mean square error (ZA-LMS) algorithm and reweighted ZA-LMS (RZA-LMS)
algorithm, have been proposed to mitigate noise interferences as well as to
exploit the inherent channel sparsity. However, the conventional SEC-ASCE
methods are vulnerable to 1) random scaling of the input training signal; and
2) an imbalance between convergence speed and steady-state mean square error
(MSE) performance due to the fixed step-size of the gradient descent method.
In this paper, improved ASCE methods based on a mixed square/fourth error
criterion (SFEC) are proposed to avoid the aforementioned shortcomings.
Specifically, the improved
SFEC-ASCE methods are realized with zero-attracting least mean square/fourth
error (ZA-LMS/F) algorithm and reweighted ZA-LMS/F (RZA-LMS/F) algorithm,
respectively. Firstly, regularization parameters of the SFEC-ASCE methods are
selected by means of Monte-Carlo simulations. Secondly, lower bounds of the
SFEC-ASCE methods are derived and analyzed. Finally, simulation results are
given to show that the proposed SFEC-ASCE methods achieve better estimation
performance than the conventional SEC-ASCE methods.
Comment: 21 pages, 10 figures, submitted for journal
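The mixed criterion can be sketched with the classic LMS/F error nonlinearity e^3/(e^2 + delta), which behaves like LMS for large errors and like a fourth-power update for small ones, plus the ZA-LMS/F zero attractor. Constants are illustrative assumptions.

```python
import numpy as np

def za_lmsf(x, d, n_taps, mu=0.02, delta=0.1, rho=1e-5):
    """Sketch of zero-attracting LMS/F (mixed square/fourth error criterion)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        # e^3/(e^2 + delta): ~e for large errors (LMS-like, fast convergence),
        # ~e^3/delta for small errors (fourth-power, low steady-state MSE)
        w = w + mu * (e ** 3 / (e ** 2 + delta)) * u - rho * np.sign(w)
    return w

rng = np.random.default_rng(8)
h = np.zeros(16); h[3] = 1.0; h[10] = -0.7   # sparse channel
x = rng.standard_normal(8000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = za_lmsf(x, d, 16)
```

Replacing sign(w) with a reweighted attractor sign(w)/(eps + |w|) would give the RZA-LMS/F variant mentioned above.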