Robust Linear Regression Analysis - A Greedy Approach
The task of robust linear estimation in the presence of outliers is of
particular importance in signal processing, statistics and machine learning.
Although the problem was posed decades ago and solved using methods that are
nowadays considered classical, it has recently attracted renewed attention in
the context of sparse modeling, where several notable contributions have been
made. In the present manuscript, a new approach is
considered in the framework of greedy algorithms. The noise is split into two
components: (a) the bounded inlier noise and (b) the outliers, which are
explicitly modeled by employing sparsity arguments. Based on this scheme, a
novel, efficient algorithm, the Greedy Algorithm for Robust Denoising (GARD),
is derived. GARD alternates between a least-squares optimization criterion and an
Orthogonal Matching Pursuit (OMP) selection step that identifies the outliers.
The case where only outliers are present is studied separately; there, bounds
on the Restricted Isometry Property guarantee that the recovery of the signal
via GARD is exact. Moreover, theoretical results concerning convergence, as
well as error bounds for the case of additional bounded noise, are discussed.
Finally, we provide extensive simulations, which demonstrate the comparative
advantages of the new technique.
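To make the alternating structure concrete, here is a minimal NumPy sketch of
such a greedy scheme, assuming the model y = X theta + eta + s with bounded
inlier noise eta and a sparse outlier vector s. The function name, the stopping
threshold eps, and the rule of selecting the largest-magnitude residual entry
are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def gard_sketch(X, y, eps, max_outliers=None):
    """Greedy robust regression sketch: alternate least-squares fits
    with an OMP-style selection of the worst-explained sample."""
    n, m = X.shape
    if max_outliers is None:
        max_outliers = n - m                       # cannot exceed the d.o.f.
    support = []                                   # samples flagged as outliers
    A = X.copy()                                   # active regression matrix
    z, *_ = np.linalg.lstsq(A, y, rcond=None)      # initial LS fit
    r = y - A @ z                                  # residual
    while np.linalg.norm(r) > eps and len(support) < max_outliers:
        k = int(np.argmax(np.abs(r)))              # OMP step: largest residual
        support.append(k)
        e_k = np.zeros((n, 1))
        e_k[k] = 1.0
        A = np.hstack([A, e_k])                    # model that entry explicitly
        z, *_ = np.linalg.lstsq(A, y, rcond=None)  # joint LS refit
        r = y - A @ z
    theta = z[:m]                                  # regression coefficients
    s = np.zeros(n)
    s[support] = z[m:]                             # estimated outlier amplitudes
    return theta, s
```

Each appended standard-basis column lets the least-squares step absorb one
suspected outlier exactly, so the refit of theta is driven only by the
remaining (inlier) samples.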
Non-Convex Rank Minimization via an Empirical Bayesian Approach
In many applications that require matrix solutions of minimal rank, the
underlying cost function is non-convex, leading to an intractable, NP-hard
optimization problem. Consequently, the convex nuclear norm is frequently used
as a surrogate penalty term for matrix rank. The problem is that in many
practical scenarios there is no longer any guarantee that we can correctly
estimate generative low-rank matrices of interest, theoretical special cases
notwithstanding. This paper therefore proposes an alternative empirical
Bayesian procedure built upon a variational approximation that, unlike the
nuclear norm, retains the same globally minimizing point estimate as the rank
function under many useful constraints. However, locally minimizing solutions
are largely smoothed away via marginalization, allowing the algorithm to
succeed when standard convex relaxations completely fail. While the proposed
methodology is generally applicable to a wide range of low-rank applications,
we focus our attention on the robust principal component analysis problem
(RPCA), which involves estimating an unknown low-rank matrix with unknown
sparse corruptions. Theoretical and empirical evidence is presented to show
that our method is potentially superior to related MAP-based approaches, for
which the convex principal component pursuit (PCP) algorithm (Candes et al.,
2011) can be viewed as a special case.
Comment: 10 pages, 6 figures, UAI 2012 paper
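For reference, below is a compact sketch of the convex PCP baseline mentioned
above, i.e. the MAP-style special case the paper compares against, not the
proposed variational Bayesian algorithm. It solves min ||L||_* + lam * ||S||_1
subject to Y = L + S with a standard augmented-Lagrangian iteration; the weight
lam = 1/sqrt(max(n1, n2)) follows Candes et al. (2011), while the step-size
heuristic and tolerance are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft thresholding: prox operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def pcp(Y, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit:
    min ||L||_* + lam * ||S||_1  subject to  Y = L + S."""
    n1, n2 = Y.shape
    lam = lam or 1.0 / np.sqrt(max(n1, n2))        # standard PCP weight
    mu = mu or 0.25 * n1 * n2 / np.abs(Y).sum()    # step-size heuristic
    L = np.zeros_like(Y); S = np.zeros_like(Y); Z = np.zeros_like(Y)
    for _ in range(max_iter):
        L = svt(Y - S + Z / mu, 1.0 / mu)          # low-rank update
        S = shrink(Y - L + Z / mu, lam / mu)       # sparse update
        R = Y - L - S                              # constraint residual
        Z = Z + mu * R                             # dual ascent
        if np.linalg.norm(R) <= tol * np.linalg.norm(Y):
            break
    return L, S
```

Running pcp on a matrix formed as a random low-rank term plus sparse
corruptions recovers the two components when the incoherence and sparsity
conditions of the convex theory hold; the paper's point is precisely that its
empirical Bayesian method can still succeed when they do not.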
Channel Protection: Random Coding Meets Sparse Channels
Multipath interference is a ubiquitous phenomenon in modern communication
systems. The conventional way to compensate for this effect is to equalize the
channel, estimating its impulse response from a transmitted set of training
symbols. The primary drawback of this approach is that it can be
unreliable if the channel is changing rapidly. In this paper, we show that
randomly encoding the signal can protect it against channel uncertainty when
the channel is sparse. Before transmission, the signal is mapped into a
slightly longer codeword using a random matrix. From the received signal, we
are able to simultaneously estimate the channel and recover the transmitted
signal. We discuss two recovery schemes, both of which exploit the
sparsity of the underlying channel. We show that if the channel impulse
response is sufficiently sparse, the transmitted signal can be recovered
reliably.
Comment: To appear in the Proceedings of the 2009 IEEE Information Theory Workshop (Taormina)
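As a toy illustration of the setting, the sketch below encodes a signal with a
random Gaussian matrix, passes it through a sparse FIR channel, and then
alternates OMP-based sparse channel estimation with least-squares signal
recovery. This alternating heuristic is an assumption made here for
illustration, not necessarily either of the paper's two schemes, and blind
recovery of this kind carries an inherent scale ambiguity between the channel
h and the signal x.

```python
import numpy as np

def conv_matrix(c, k):
    """Tall Toeplitz matrix T with T @ h == np.convolve(c, h) for len(h) == k."""
    T = np.zeros((len(c) + k - 1, k))
    for j in range(k):
        T[j:j + len(c), j] = c
    return T

def omp(Phi, y, s):
    """Orthogonal Matching Pursuit: greedy s-sparse fit of y on Phi's columns."""
    idx, r = [], y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(Phi.T @ r)))      # most correlated column
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        r = y - Phi[:, idx] @ coef                 # orthogonalized residual
    h = np.zeros(Phi.shape[1])
    h[idx] = coef
    return h

def joint_recover(A, y, k, s, n_iter=15):
    """Alternate sparse channel estimation with LS recovery of the signal."""
    n_code = A.shape[0]
    x, *_ = np.linalg.lstsq(A, y[:n_code], rcond=None)   # naive init: h ~ impulse
    for _ in range(n_iter):
        h = omp(conv_matrix(A @ x, k), y, s)       # fix x, fit s-sparse channel
        H = conv_matrix(h, n_code)                 # y ~= H @ (A @ x)
        x, *_ = np.linalg.lstsq(H @ A, y, rcond=None)    # fix h, refit signal
    return h, x
```

Here A (the random codebook), k (an upper bound on the channel length) and s
(the channel sparsity level) are assumed known at the receiver; the slight rate
loss from mapping the signal to the longer codeword A @ x is what buys
robustness to the unknown channel.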