Enhancement of Image Resolution by Binarization
Image segmentation is one of the principal approaches of image processing, and choosing the most appropriate Binarization algorithm for each case is an interesting problem in its own right. In this paper, we present a comparative study of several Binarization algorithms and propose methodologies for validating them. We develop two novel algorithms to determine threshold values for the pixel values of a grayscale image. The performance of the algorithms is estimated on test images, using evaluation metrics for Binarization of textual and synthetic images. We achieve better image resolution by using a Binarization method based on optimum thresholding techniques.
Comment: 5 pages, 8 figures
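The abstract does not name its two threshold-selection algorithms. As a minimal sketch of optimum global thresholding, the classic Otsu criterion (choosing the threshold that maximizes between-class variance of the gray-level histogram) can stand in; the function names below are illustrative only, not the paper's algorithms:

```python
def otsu_threshold(pixels):
    # Build a histogram of 8-bit gray levels.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))

    sum_b = 0.0   # running sum of gray levels in the background class
    w_b = 0       # running count of background pixels
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_all - sum_b) / w_f     # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(pixels, t):
    # Map every pixel above the threshold to white, the rest to black.
    return [255 if p > t else 0 for p in pixels]
```

On a bimodal image this picks a threshold between the two modes, which is the sense in which global thresholding is "optimum" here.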
Global Thresholding and Multiple Pass Parsing
We present a variation on classic beam thresholding techniques that is up to
an order of magnitude faster than the traditional method, at the same
performance level. We also present a new thresholding technique, global
thresholding, which, combined with the new beam thresholding, gives an
additional factor of two improvement, and a novel technique, multiple pass
parsing, that can be combined with the others to yield yet another 50%
improvement. We use a new search algorithm to simultaneously optimize the
thresholding parameters of the various algorithms.
Comment: Fixed latex errors; fixed minor errors in published version
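As a toy illustration of the difference between per-cell beam thresholding and a single global threshold, the sketch below prunes chart-cell items by probability. It is a simplified stand-in for the paper's criteria, and all identifiers are hypothetical:

```python
def beam_prune(cell_items, beam=1e-4):
    # Classic beam thresholding: keep items whose probability is within
    # a factor `beam` of the best item in the SAME chart cell.
    if not cell_items:
        return cell_items
    best = max(cell_items.values())
    return {nt: p for nt, p in cell_items.items() if p >= best * beam}

def global_prune(cells, global_beam=1e-6):
    # Global thresholding (simplified): derive one threshold from the
    # best item across ALL cells and apply it everywhere.
    # Assumes at least one cell contains an item.
    best = max(p for cell in cells for p in cell.values())
    return [{nt: p for nt, p in cell.items() if p >= best * global_beam}
            for cell in cells]
```

A global threshold can discard whole low-probability cells that a per-cell beam would keep, which is one source of the speedup the abstract describes.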
A Multi-Grid Iterative Method for Photoacoustic Tomography
Inspired by the recent advances on minimizing nonsmooth or bound-constrained
convex functions on models using varying degrees of fidelity, we propose a line
search multigrid (MG) method for full-wave iterative image reconstruction in
photoacoustic tomography (PAT) in heterogeneous media. To compute the search
direction at each iteration, we choose between the gradient at the target
level and an approximate error correction at a coarser level, based on
predefined criteria. To incorporate absorption and dispersion,
we derive the analytical adjoint directly from the first-order acoustic wave
system. The effectiveness of the proposed method is tested on a total-variation
penalized Iterative Shrinkage Thresholding algorithm (ISTA) and its accelerated
variant (FISTA), which have been used in many studies of image reconstruction
in PAT. The results show the great potential of the proposed method in
improving the speed of iterative image reconstruction.
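ISTA's core step is a gradient step followed by soft-thresholding, the proximal operator of the l1 norm (the paper uses a total-variation penalty instead, but the shrinkage structure is the same). A minimal scalar sketch, with hypothetical names:

```python
def soft_threshold(v, t):
    # Proximal operator of t*|.|: shrink v toward zero by t.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista_1d(a, b, lam, steps=200):
    # Minimize 0.5*(a*x - b)**2 + lam*|x| by iterative
    # shrinkage-thresholding (scalar toy case).
    L = a * a          # Lipschitz constant of the smooth part's gradient
    x = 0.0
    for _ in range(steps):
        grad = a * (a * x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

For a = 1, b = 2, lam = 0.5 the penalized minimizer is x = 1.5, which the iteration reaches after a single step; FISTA adds a momentum term on top of this update to accelerate convergence.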
Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery
PCA is one of the most widely used dimension reduction techniques. A related
easier problem is "subspace learning" or "subspace estimation". Given
relatively clean data, both are easily solved via singular value decomposition
(SVD). The problem of subspace learning or PCA in the presence of outliers is
called robust subspace learning or robust PCA (RPCA). For long data sequences,
if one tries to use a single lower dimensional subspace to represent the data,
the required subspace dimension may end up being quite large. For such data, a
better model is to assume that it lies in a low-dimensional subspace that can
change over time, albeit gradually. The problem of tracking such data (and the
subspaces) while being robust to outliers is called robust subspace tracking
(RST). This article provides a magazine-style overview of the entire field of
robust subspace learning and tracking. In particular, solutions for three
problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition
(S+LR), RST via S+LR, and "robust subspace recovery (RSR)". RSR assumes that an
entire data vector is either an outlier or an inlier. The S+LR formulation
instead assumes that outliers occur on only a few data vector indices and hence
are well modeled as sparse corruptions.
Comment: To appear, IEEE Signal Processing Magazine, July 201
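A minimal sketch of the S+LR decomposition idea: alternately shrink singular values (recovering the low-rank part L) and matrix entries (recovering the sparse part S). This simple block-coordinate scheme is only a stand-in for the solvers the article surveys, and the parameter choices below are illustrative:

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: prox of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft-thresholding: prox of tau * l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_sl(M, lam=None, mu=None, iters=100):
    # Alternating minimization of
    #   0.5*||M - L - S||_F^2 + mu*||L||_* + lam*mu*||S||_1,
    # a sketch of the S+LR idea, not a specific published algorithm.
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))   # common default weighting
    mu = mu or 0.25 * np.abs(M).mean()      # illustrative shrinkage level
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, mu)        # low-rank update
        S = soft(M - L, lam * mu) # sparse update
    return L, S
```

On a low-rank matrix with a single large corrupted entry, the outlier migrates into S over the iterations while L retains the low-rank structure, which is exactly the behavior the S+LR formulation assumes.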
A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
Stochastic approximation techniques play an important role in solving many
problems encountered in machine learning or adaptive signal processing. In
these contexts, the statistics of the data are often unknown a priori or their
direct computation is too intensive, and they thus have to be estimated online
from the observed signals. For batch optimization of an objective function
being the sum of a data fidelity term and a penalization (e.g. a sparsity
promoting function), Majorize-Minimize (MM) methods have recently attracted
much interest since they are fast, highly flexible, and effective in ensuring
convergence. The goal of this paper is to show how these methods can be
successfully extended to the case when the data fidelity term corresponds to a
least squares criterion and the cost function is replaced by a sequence of
stochastic approximations of it. In this context, we propose an online version
of an MM subspace algorithm and we study its convergence by using suitable
probabilistic tools. Simulation results illustrate the good practical
performance of the proposed algorithm associated with a memory gradient
subspace, when applied to both non-adaptive and adaptive filter identification
problems.
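As a much simpler stand-in for the paper's MM subspace algorithm, an online penalized least-squares estimate can be sketched with stochastic proximal-gradient steps; everything below (names, step-size rule, penalty weight) is illustrative, not the authors' method:

```python
import random

def soft(v, t):
    # Scalar soft-threshold: prox of t*|.|.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def online_penalized_ls(stream, dim, lam=0.01):
    # Stochastic proximal-gradient sketch for
    #   min_x  E[0.5*(a.x - y)^2] + lam*||x||_1
    # where `stream` yields (a, y) pairs one at a time.
    x = [0.0] * dim
    for k, (a, y) in enumerate(stream, start=1):
        step = 0.5 / k ** 0.5                      # decaying step size
        err = sum(ai * xi for ai, xi in zip(a, x)) - y
        x = [soft(xi - step * err * ai, step * lam)
             for xi, ai in zip(x, a)]
    return x
```

Each sample contributes one cheap gradient-plus-shrinkage update, so the statistics of the data never need to be computed in batch, which is the setting the abstract describes.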