Robust Principal Component Analysis?
This paper is about a curious phenomenon. Suppose we have a data matrix,
which is the superposition of a low-rank component and a sparse component. Can
we recover each component individually? We prove that under some suitable
assumptions, it is possible to recover both the low-rank and the sparse
components exactly by solving a very convenient convex program called Principal
Component Pursuit; among all feasible decompositions, simply minimize a
weighted combination of the nuclear norm and of the L1 norm. This suggests the
possibility of a principled approach to robust principal component analysis
since our methodology and results assert that one can recover the principal
components of a data matrix even though a positive fraction of its entries are
arbitrarily corrupted. This extends to the situation where a fraction of the
entries are missing as well. We discuss an algorithm for solving this
optimization problem, and present applications in the area of video
surveillance, where our methodology allows for the detection of objects in a
cluttered background, and in the area of face recognition, where it offers a
principled way of removing shadows and specularities in images of faces.
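Principal Component Pursuit recovers the decomposition by solving min ||L||_* + λ||S||_1 subject to L + S = M. A minimal sketch via an inexact augmented Lagrangian method (the default parameters follow common practice and are assumptions, not this paper's prescription):

```python
import numpy as np

def principal_component_pursuit(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Recover L (low rank) and S (sparse) with M = L + S via inexact ALM."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # standard weight for the L1 term
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())  # common ALM penalty heuristic
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                     # dual variable
    norm_M = np.linalg.norm(M, 'fro')
    for _ in range(max_iter):
        # Singular value thresholding: proximal step for the nuclear norm
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding: proximal step for the L1 norm
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual ascent on the constraint L + S = M
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S, 'fro') <= tol * norm_M:
            break
    return L, S
```

With a genuinely low-rank component and a sparse corruption pattern, the iterates typically converge to the true pair to high accuracy.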
Sparse and Low Rank Decomposition Based Batch Image Alignment for Speckle Reduction of Retinal OCT Images
Optical Coherence Tomography (OCT) is an emerging technique in the field of
biomedical imaging, with applications in ophthalmology, dermatology, coronary
imaging, and more. Due to the underlying physics, OCT images usually suffer
from a granular pattern, called speckle noise, which hinders interpretation.
Here, a sparse and low rank decomposition based method is used for speckle
reduction in retinal OCT images. The method takes as input several B-scans of
the same location, which are then batch-aligned using a sparse and low-rank
decomposition based technique. Finally, the denoised image is created by
median filtering of the low-rank component of the processed data. Simultaneous
decomposition and alignment of the images yield better performance than the
simple registration-based methods used in the literature for noise reduction
of OCT images.
Comment: Accepted for presentation at ISBI'1
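The final denoising step described above can be sketched as follows, assuming the B-scans are already aligned; a plain rank-r SVD truncation stands in here for the full sparse and low-rank decomposition, and the function name is hypothetical:

```python
import numpy as np

def denoise_bscan_stack(stack, rank=1):
    """Denoise a stack of aligned, co-located B-scans.

    stack: array of shape (n_scans, H, W), assumed already batch-aligned.
    """
    n, H, W = stack.shape
    X = stack.reshape(n, H * W)              # each row = one vectorized B-scan
    # Rank-r truncation as a stand-in for the low-rank component
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    # Median across the low-rank frames gives the final denoised image
    return np.median(L.reshape(n, H, W), axis=0)
```

Because speckle is roughly independent across repeated B-scans while the anatomy is shared, the low-rank component concentrates the shared structure and the median suppresses the residual noise.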
Manifold Constrained Low-Rank Decomposition
Low-rank decomposition (LRD) is a state-of-the-art method for visual data
reconstruction and modelling. However, it is a very challenging problem when
the image data contains significant occlusion, noise, illumination variation,
and misalignment from rotation or viewpoint changes. We leverage the specific
structure of data in order to improve the performance of LRD when the data are
not ideal. To this end, we propose a new framework that embeds manifold priors
into LRD. To implement the framework, we design an alternating direction
method of multipliers (ADMM) algorithm that efficiently integrates the
manifold constraints during the optimization process. The proposed approach is
successfully used to compute low-rank models from face images, hand-written
digits, and planar surface images. The results show a consistent increase in
performance over the state-of-the-art across a wide range of realistic image
misalignments and corruptions.
Modeling and performance evaluation of stealthy false data injection attacks on smart grid in the presence of corrupted measurements
The false data injection (FDI) attack cannot be detected by the traditional
anomaly detection techniques used in the energy system state estimators. In
this paper, we demonstrate how FDI attacks can be constructed blindly, i.e.,
without system knowledge, including topological connectivity and line reactance
information. Our analysis reveals that existing FDI attacks become detectable
(consequently unsuccessful) by the state estimator if the data contains grossly
corrupted measurements such as device malfunction and communication errors. The
proposed sparse-optimization-based stealthy attack construction strategy
overcomes this limitation by separating the gross errors from the measurement
matrix. Extensive theoretical modeling and experimental evaluation show that
the proposed technique is more stealthy (lower relative error) and more
efficient (fast enough to meet timing requirements) than other methods on IEEE
benchmark test systems.
Comment: Keywords: Smart grid, False data injection, Blind attack, Principal
component analysis (PCA), Journal of Computer and System Sciences, Elsevier,
201
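A blind attack of the kind described can be sketched by placing the attack vector in the principal subspace of the observed measurements, so that it mimics a feasible state change and evades residual-based bad-data detection; the function name and the use of a plain PCA subspace are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def stealthy_attack_vector(Z, r, magnitude=1.0, seed=0):
    """Build an attack vector lying in the principal r-dimensional subspace
    of the measurement matrix Z (rows = measurement snapshots), using no
    knowledge of the grid topology or line reactances."""
    rng = np.random.default_rng(seed)
    # Principal subspace of the measurements approximates the column space
    # of the (unknown) measurement Jacobian H, since each snapshot z = H x
    _, _, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
    basis = Vt[:r].T                        # (n_meas, r)
    a = basis @ rng.standard_normal(r)      # random direction in the subspace
    return magnitude * a / np.linalg.norm(a)
```

Because the vector lies (approximately) in the column space of H, the corrupted measurement z + a is consistent with some shifted state x + c, so the estimator's residual test does not flag it; gross errors in Z, as the abstract notes, would corrupt this subspace estimate unless separated out first.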