A Robust Zero-point Attraction LMS Algorithm on Near Sparse System Identification
The newly proposed l_1 norm constraint zero-point attraction Least Mean
Square algorithm (ZA-LMS) demonstrates excellent performance on exactly sparse
system identification. However, ZA-LMS offers little advantage over standard LMS
when the system is near sparse. Thus, in this paper, near sparse
systems are first modeled by the Generalized Gaussian Distribution, and
sparsity is defined accordingly. Second, two modifications to the ZA-LMS
algorithm are made. The l_1 norm penalty in the cost function is replaced by a partial
l_1 norm, enhancing robustness without increasing the
computational complexity. Moreover, the zero-point attraction term is weighted
by the magnitude of the estimation error, which adjusts the zero-point attraction
force dynamically. Combining the two improvements yields the Dynamic Windowing ZA-LMS
(DWZA-LMS) algorithm, which shows better performance on
near sparse system identification. In addition, the mean square performance of
the DWZA-LMS algorithm is analyzed. Finally, computer simulations demonstrate the
effectiveness of the proposed algorithm and verify the theoretical
analysis.
Comment: 20 pages, 11 figures
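The basic ZA-LMS recursion underlying this abstract — a standard LMS step plus a sign-based term (the gradient of an l_1 penalty) that attracts small coefficients toward zero — can be sketched as follows. This is a minimal illustration, not the paper's DWZA-LMS variant; the step sizes mu and rho and the test system are assumed values for demonstration only.

```python
import numpy as np

def za_lms_identify(x, d, num_taps, mu=0.01, rho=5e-4):
    """Sketch of the zero-point attraction (ZA-)LMS update: the standard
    LMS step plus a sign term that pulls small coefficients toward zero
    (the subgradient of an l_1 norm penalty on the tap vector)."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]    # most recent inputs first
        e = d[n] - w @ u                       # instantaneous estimation error
        w = w + mu * e * u - rho * np.sign(w)  # LMS step + zero attraction
    return w

# Identify an exactly sparse FIR system from noisy observations
rng = np.random.default_rng(0)
h = np.zeros(16)
h[[2, 7]] = [1.0, -0.5]                        # sparse true impulse response
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = za_lms_identify(x, d, 16)
```

The zero-attraction term shrinks inactive taps toward zero faster than plain LMS, at the cost of a small bias (roughly rho/mu per active tap) on the nonzero coefficients.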
Reweighted lp Constraint LMS-Based Adaptive Sparse Channel Estimation for Cooperative Communication System
This paper studies sparsity adaptive channel reconstruction in time-varying cooperative
communication networks using the amplify-and-forward transmission scheme. A new sparsity adaptive system
identification method is proposed, namely the reweighted l_p norm (0 < p < 1) penalized least mean square (LMS) algorithm.
The main idea of the algorithm is to add an l_p norm sparsity penalty to the cost function of the LMS algorithm. By doing
so, the weight factor becomes a balance parameter of the associated l_p norm adaptive sparse system identification.
Subsequently, the steady state of the coefficient misalignment vector is derived theoretically, with performance upper
bounds provided that serve as a sufficient condition for LMS channel estimation with the precise reweighted l_p norm.
With these upper bounds, we prove that the l_p (0 < p < 1) norm sparsity-inducing cost function is superior to the
reweighted l_1 norm. An optimal selection of p for the l_p norm problem is studied to recover various sparse channel
vectors. Several experiments verify that the simulation results agree well with the theoretical analysis, and thus
demonstrate that the proposed algorithm has faster convergence and better steady state behavior than other LMS
algorithms.
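The l_p-penalized LMS idea described above can be sketched as follows: the update adds a regularized gradient of the l_p penalty to the standard LMS recursion. This is an illustrative approximation, not the paper's exact reweighting scheme; all parameter names and values (mu, rho, p, eps) are assumptions.

```python
import numpy as np

def lp_lms(x, d, num_taps, mu=0.01, rho=5e-4, p=0.5, eps=0.05):
    """Sketch of an l_p norm (0 < p < 1) penalized LMS filter. The last
    term approximates the gradient of sum(|w_i|^p); eps regularizes the
    division so the attraction stays finite near zero coefficients."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]  # most recent inputs first
        e = d[n] - w @ u                     # instantaneous error
        # LMS step plus regularized l_p zero-attraction term
        w = w + mu * e * u - rho * p * np.sign(w) / (eps + np.abs(w)) ** (1 - p)
    return w

# Estimate a sparse channel from noisy observations
rng = np.random.default_rng(1)
h = np.zeros(32)
h[[3, 12, 20]] = [0.8, -0.6, 0.4]            # sparse channel taps
x = rng.standard_normal(8000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = lp_lms(x, d, 32)
```

Because the attraction force decays as |w_i| grows (for p < 1), large taps are penalized less than under an l_1 term, which is the intuition behind preferring l_p with 0 < p < 1 for sparse channels.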