Reweighted lp Constraint LMS-Based Adaptive Sparse Channel Estimation for Cooperative Communication System
This paper studies the issue of sparsity adaptive channel reconstruction in time-varying cooperative
communication networks through the amplify-and-forward transmission scheme. A new sparsity adaptive system
identification method is proposed, namely the reweighted lp-norm (0 < p < 1) penalized least mean square (LMS) algorithm.
The main idea of the algorithm is to add an lp-norm sparsity penalty to the cost function of the LMS algorithm. By doing
so, the weight factor becomes a balance parameter of the associated lp-norm adaptive sparse system identification.
Subsequently, the steady state of the coefficient misalignment vector is derived theoretically, and performance upper
bounds are provided which serve as a sufficient condition for precise channel estimation by the reweighted lp-norm LMS.
Using the upper bounds, we prove that the lp-norm (0 < p < 1) sparsity-inducing cost function is superior to the
reweighted l1 norm. An optimal selection of p for the lp-norm problem is studied to recover various sparse channel
vectors. Several experiments verify that the simulation results agree well with the theoretical analysis, and thus
demonstrate that the proposed algorithm achieves faster convergence and better steady-state behavior than other LMS
algorithms.
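The update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact algorithm: the step size mu, penalty weight rho, and smoothing constant eps below are illustrative values, and the zero-attracting term is the regularized gradient of an lp penalty commonly used in sparse LMS variants.

```python
import numpy as np

def lp_reweighted_lms(x, d, n_taps, mu=0.01, rho=5e-4, p=0.5, eps=0.05):
    """Sketch of an lp-norm (0 < p < 1) penalized LMS adaptive filter.

    Each iteration combines the standard LMS correction with a
    zero-attracting term derived from the (regularized) gradient of
    the lp penalty, which pulls small coefficients toward zero.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # tap-input vector [x[n], ..., x[n-N+1]]
        e = d[n] - w @ xn                    # a-priori estimation error
        # lp-penalty gradient: p * sign(w) / (eps + |w|)^(1 - p)
        zero_attract = p * np.sign(w) / (eps + np.abs(w)) ** (1.0 - p)
        w = w + mu * e * xn - rho * zero_attract
    return w
```

On a noise-free sparse channel, the zero-attracting term drives the inactive taps toward exactly zero while the LMS term fits the few active taps, which is the mechanism behind the faster convergence claimed for sparse channels.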
Non-convex Optimization for Machine Learning
A vast majority of machine learning algorithms train their models and perform
inference by solving optimization problems. In order to capture the learning
and prediction problems accurately, structural constraints such as sparsity or
low rank are frequently imposed or else the objective itself is designed to be
a non-convex function. This is especially true of algorithms that operate in
high-dimensional spaces or that train non-linear models such as tensor models
and deep networks.
The freedom to express the learning problem as a non-convex optimization
problem gives immense modeling power to the algorithm designer, but often such
problems are NP-hard to solve. A popular workaround to this has been to relax
non-convex problems to convex ones and use traditional methods to solve the
(convex) relaxed optimization problems. However, this approach can be lossy and,
even then, presents significant challenges for large-scale optimization.
On the other hand, direct approaches to non-convex optimization have met with
resounding success in several domains and remain the methods of choice for the
practitioner, as they frequently outperform relaxation-based techniques:
popular heuristics include projected gradient descent and alternating
minimization. However, these heuristics are often poorly understood in terms of
their convergence and other properties.
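As a concrete instance of such a heuristic, projected gradient descent onto a sparsity constraint (iterative hard thresholding) can be sketched as follows. The setup is illustrative and not taken from the monograph: the projection step, which keeps only the s largest-magnitude entries, is the non-convex part, yet the procedure recovers sparse solutions reliably in practice.

```python
import numpy as np

def projected_gradient_descent(A, b, s, step=None, iters=500):
    """Projected gradient descent for the non-convex sparse least-squares
    problem: minimize 0.5 * ||Ax - b||^2 subject to ||x||_0 <= s.

    The projection onto the (non-convex) set of s-sparse vectors is hard
    thresholding: zero out all but the s largest-magnitude entries.
    """
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part
    x = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of 0.5*||Ax - b||^2
        x = x - step * g                         # gradient step
        idx = np.argsort(np.abs(x))[:-s]         # all but the s largest entries
        x[idx] = 0.0                             # non-convex projection
    return x
```

Each iteration costs only a matrix-vector product and a partial sort, which is why such direct methods scale better than solving a convex relaxation with a general-purpose solver.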
This monograph presents a selection of recent advances that bridge a
long-standing gap in our understanding of these heuristics. The monograph will
lead the reader through several widely used non-convex optimization techniques,
as well as applications thereof. The goal of this monograph is both to
introduce the rich literature in this area and to equip the reader with
the tools and techniques needed to analyze these simple procedures for
non-convex problems.

Comment: The official publication is available from now publishers via
http://dx.doi.org/10.1561/220000005