Adaptive Image Denoising by Targeted Databases
We propose a data-dependent denoising procedure to restore noisy images.
Unlike existing denoising algorithms, which search for patches in either
the noisy image or a generic database, the new algorithm finds patches
from a database that contains only relevant patches. We formulate the denoising
problem as an optimal filter design problem and make two contributions. First,
we determine the basis function of the denoising filter by solving a group
sparsity minimization problem. The optimization formulation generalizes
existing denoising algorithms and offers a systematic analysis of their
performance. Improvement methods are proposed to enhance the patch search
process. Second, we determine the spectral coefficients of the denoising filter
by considering a localized Bayesian prior. The localized prior leverages the
similarity of the targeted database, alleviates the intensive Bayesian
computation, and links the new method to the classical linear minimum mean
squared error estimation. We demonstrate applications of the proposed method in
a variety of scenarios, including text images, multiview images and face
images. Experimental results show the superiority of the new algorithm over
existing methods.
Comment: 15 pages, 13 figures, 2 tables, journal
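As a rough illustration of the linear minimum mean squared error estimation
the abstract connects to, here is a minimal sketch of LMMSE-style patch
denoising with a basis learned from reference patches. This is not the
authors' algorithm; the patch size, noise level, and synthetic "targeted
database" are all assumptions for illustration.

```python
import numpy as np

def lmmse_denoise_patch(noisy_patch, database_patches, sigma):
    """LMMSE-style denoising of a flattened patch.

    The basis functions are the eigenvectors of the database covariance;
    the spectral coefficients are shrunk by the classical Wiener factor
    lambda / (lambda + sigma^2).
    """
    mu = database_patches.mean(axis=0)
    centered = database_patches - mu
    cov = centered.T @ centered / len(database_patches)
    eigvals, eigvecs = np.linalg.eigh(cov)       # basis functions
    shrink = eigvals / (eigvals + sigma**2)      # spectral shrinkage factors
    coeffs = eigvecs.T @ (noisy_patch - mu)
    return mu + eigvecs @ (shrink * coeffs)

# Toy example: a "targeted database" of 500 similar 8x8 patches.
rng = np.random.default_rng(0)
clean = rng.normal(size=64)
database = clean + 0.05 * rng.normal(size=(500, 64))
noisy = clean + 0.3 * rng.normal(size=64)
denoised = lmmse_denoise_patch(noisy, database, sigma=0.3)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```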
Synthesis and Optimization of Reversible Circuits - A Survey
Reversible logic circuits have been historically motivated by theoretical
research in low-power electronics as well as practical improvement of
bit-manipulation transforms in cryptography and computer graphics. Recently,
reversible circuits have attracted interest as components of quantum
algorithms, as well as in photonic and nano-computing technologies where some
switching devices offer no signal gain. Research in generating reversible logic
distinguishes between circuit synthesis, post-synthesis optimization, and
technology mapping. In this survey, we review algorithmic paradigms
(search-based, cycle-based, transformation-based, and BDD-based) as well as
specific algorithms for reversible synthesis, both exact and heuristic. We
conclude the survey by outlining key open challenges in synthesis of reversible
and quantum logic, as well as the most common misconceptions.
Comment: 34 pages, 15 figures, 2 tables
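As background for readers new to reversible logic: a reversible circuit
computes a bijection on its input bits. The sketch below is only an
illustrative example of the standard building block, the Toffoli gate, and
not one of the surveyed synthesis algorithms.

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips c iff both controls a and b are 1.

    Reversible: applying it twice restores the input.
    """
    return a, b, c ^ (a & b)

# Every 3-bit input maps to a unique output, so the gate is a bijection.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
outputs = [toffoli(*s) for s in states]
assert sorted(outputs) == states                        # permutation of the state space
assert all(toffoli(*toffoli(*s)) == s for s in states)  # self-inverse
```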
Rapid three-dimensional multiparametric MRI with quantitative transient-state imaging
Novel methods for quantitative, transient-state multiparametric imaging are
increasingly being demonstrated for assessment of disease and treatment
efficacy. Here, we build on this work by assessing the most common non-Cartesian
readout trajectories (2D/3D radials and spirals), demonstrating efficient
anti-aliasing with a k-space view-sharing technique, and proposing novel
methods for parameter inference with neural networks that incorporate the
estimation of proton density. Our results show good agreement with gold
standard and phantom references for all readout trajectories at 1.5T and 3T.
Parameters inferred with the neural network were within 6.58% of those
inferred with a high-resolution dictionary. Concordance correlation
coefficients were above 0.92, and the normalized root mean squared error
ranged from 4.2% to 12.7% with respect to gold-standard phantom
references for T1 and T2. In vivo acquisitions demonstrate sub-millimetric
isotropic resolution in under five minutes, with reconstruction and inference
times under seven minutes. Our 3D quantitative transient-state imaging approach could
enable high-resolution multiparametric tissue quantification within clinically
acceptable acquisition and reconstruction times.
Comment: 43 pages, 12 figures, 5 tables
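The agreement metrics quoted above are standard; a minimal sketch of how a
concordance correlation coefficient and a normalized RMSE are typically
computed follows. This is our own illustration, not the paper's code, and
normalizing by the reference range is an assumption, since several NRMSE
conventions exist.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def nrmse(estimate, reference):
    """RMSE normalized by the reference dynamic range (one common convention)."""
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / (reference.max() - reference.min())

# Toy check: T1 estimates vs. a gold-standard phantom reference (in ms).
rng = np.random.default_rng(1)
reference = rng.uniform(300, 2000, size=100)
estimate = reference * 1.02 + rng.normal(0, 30, size=100)
print(lin_ccc(estimate, reference), nrmse(estimate, reference))
```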
Revisiting Complex Moments For 2D Shape Representation and Image Normalization
When comparing 2D shapes, a key issue is their normalization. Translation and
scale are easily taken care of by removing the mean and normalizing the energy.
However, defining and computing the orientation of a 2D shape is not so simple.
In fact, although for elongated shapes the principal axis can be used to define
one of two possible orientations, there is no such tool for general shapes. As
we show in the paper, previous approaches fail to compute the orientation of
even noiseless observations of simple shapes. We address this problem. In the
paper, we show how to uniquely define the orientation of an arbitrary 2D shape,
in terms of what we call its Principal Moments. We show that a small subset of
these moments suffices to represent the underlying 2D shape, and we propose a new
method to efficiently compute the shape orientation: Principal Moment Analysis.
Finally, we discuss how this method can further be applied to normalize
grey-level images. Besides the theoretical proof of correctness, we describe
experiments demonstrating robustness to noise and illustrating the method with
real images.
Comment: 69 pages, 20 figures
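For context, the classical principal-axis orientation that the abstract says
fails for general shapes is computed from second-order central moments. The
sketch below shows that baseline (not the paper's Principal Moment Analysis);
the point-set representation of the shape is an assumption.

```python
import numpy as np

def principal_axis_orientation(points):
    """Orientation of a 2D point set from second-order central moments.

    Classical principal-axis formula; as the abstract notes, it only
    yields one of two orientations and degrades for non-elongated shapes.
    """
    pts = points - points.mean(axis=0)              # remove translation
    pts /= np.sqrt((pts ** 2).sum(axis=1).mean())   # normalize the energy
    mu20 = np.mean(pts[:, 0] ** 2)
    mu02 = np.mean(pts[:, 1] ** 2)
    mu11 = np.mean(pts[:, 0] * pts[:, 1])
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# An elongated shape rotated by 30 degrees recovers ~30 degrees (mod 180).
rng = np.random.default_rng(2)
ellipse = rng.normal(size=(1000, 2)) * [3.0, 1.0]
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.degrees(principal_axis_orientation(ellipse @ R.T)))
```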
Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain
Real-world data typically contain repeated and periodic patterns. This
suggests that they can be effectively represented and compressed using only a
few coefficients of an appropriate basis (e.g., Fourier, Wavelets, etc.).
However, distance estimation when the data are represented using different sets
of coefficients is still a largely unexplored area. This work studies the
optimization problems related to obtaining the tightest lower/upper
bound on Euclidean distances when each data object is potentially compressed
using a different set of orthonormal coefficients. Our technique leads to
tighter distance estimates, which translates into more accurate search,
learning, and mining operations directly in the compressed domain.
We formulate the problem of estimating lower/upper distance bounds as an
optimization problem. We establish the properties of optimal solutions, and
leverage the theoretical analysis to develop a fast algorithm to obtain an
exact solution to the problem. The suggested solution provides the
tightest estimate of the $\ell_2$-norm or the correlation. We show that typical
data-analysis operations, such as k-NN search or k-Means clustering, can
operate more accurately using the proposed compression and distance
reconstruction technique. We compare it with many other prevalent compression
and reconstruction techniques, including random projections and PCA-based
techniques. We highlight a surprising result, namely that when the data are
highly sparse in some basis, our technique may even outperform PCA-based
compression.
The contributions of this work are generic as our methodology is applicable
to any sequential or high-dimensional data as well as to any orthogonal data
transformation used for the underlying data compression scheme.
Comment: 25 pages, 20 figures, accepted in VLDB
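To make the bounding problem concrete, here is a minimal sketch of a simple
(loose) lower bound on the Euclidean distance between two sequences
compressed with different sets of orthonormal Fourier coefficients. The
Fourier basis and the intersection-based bound are assumptions for
illustration; the paper derives the tightest such bound.

```python
import numpy as np

def topk_fourier(x, k):
    """Keep the k largest-magnitude orthonormal Fourier coefficients of x."""
    coeffs = np.fft.fft(x, norm="ortho")   # orthonormal: preserves the L2 norm
    keep = np.argsort(np.abs(coeffs))[-k:]
    return dict(zip(keep, coeffs[keep]))

def distance_lower_bound(cx, cy):
    """Lower bound on ||x - y||_2 from two compressed representations
    that may retain different coefficient sets.

    Restricting the sum of squares to coefficients known on both sides
    can only shrink it, so the result never exceeds the true distance.
    """
    common = cx.keys() & cy.keys()
    return np.sqrt(sum(abs(cx[i] - cy[i]) ** 2 for i in common))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=256)
y = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=256)
bound = distance_lower_bound(topk_fourier(x, 10), topk_fourier(y, 10))
print(bound, "<=", np.linalg.norm(x - y))
```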
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted-$\ell_2$ penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
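Of the families covered, proximal methods are the simplest to sketch. Below
is a minimal ISTA (proximal gradient) iteration for the $\ell_1$-penalized
least-squares problem, a standard textbook instance rather than any specific
algorithm from the paper; the step size 1/L, with L the largest eigenvalue of
A^T A, is the usual choice.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for min_w 0.5*||Aw - b||^2 + lam*||w||_1."""
    L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b)                   # gradient of the smooth part
        w = soft_threshold(w - grad / L, lam / L)  # proximal step
    return w

# Toy sparse recovery: a 20-dimensional signal with 3 nonzeros.
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 20))
w_true = np.zeros(20)
w_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
b = A @ w_true + 0.01 * rng.normal(size=50)
print(np.nonzero(ista(A, b, lam=0.5))[0])
```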
Is there anything new to say about SIFT matching?
SIFT is a classical hand-crafted, histogram-based descriptor that has
deeply influenced research on image matching for more than a decade. In
this paper, a critical review of the aspects that affect SIFT matching
performance is carried out, and novel descriptor design strategies are
introduced and individually evaluated. These encompass quantization,
binarization, and hierarchical cascade filtering as means to reduce data
storage and increase matching efficiency with no significant loss of
accuracy. An original contextual matching strategy based on a symmetrical
variant of the usual nearest-neighbor ratio is also discussed, which can
increase the discriminative power of any descriptor. The paper then
undertakes a comprehensive experimental evaluation of state-of-the-art
hand-crafted and data-driven descriptors, including the most recent deep
descriptors. Comparisons are carried out according to several performance
parameters, including accuracy and space-time efficiency. Results are
provided for both planar and non-planar scenes, the latter evaluated with
a new benchmark based on the concept of approximated patch overlap.
Experimental evidence shows that, despite their age, SIFT and other
hand-crafted descriptors, once enhanced through the proposed strategies,
are ready to meet future image matching challenges. We also believe that
the lessons learned from this work will inspire the design of better
hand-crafted and data-driven descriptors.
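The usual nearest-neighbor ratio referred to above is Lowe's ratio test. The
sketch below shows the standard test plus one plausible symmetric variant
that applies the ratio in both matching directions; this variant is our
assumption for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def nn_ratio_matches(desc_a, desc_b, ratio=0.8):
    """Lowe's nearest-neighbor ratio test: accept a match when the closest
    descriptor in desc_b is clearly closer than the second closest."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    nn1, nn2 = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    ok = dists[rows, nn1] < ratio * dists[rows, nn2]
    return {i: nn1[i] for i in np.nonzero(ok)[0]}

def symmetric_nn_ratio_matches(desc_a, desc_b, ratio=0.8):
    """One plausible symmetric variant (an assumption, not the paper's
    definition): keep pairs that pass the ratio test in both directions."""
    ab = nn_ratio_matches(desc_a, desc_b, ratio)
    ba = nn_ratio_matches(desc_b, desc_a, ratio)
    return {(i, j) for i, j in ab.items() if ba.get(j) == i}

rng = np.random.default_rng(5)
desc_a = rng.normal(size=(100, 128))                  # SIFT-like 128-d descriptors
desc_b = desc_a + 0.05 * rng.normal(size=(100, 128))  # perturbed copies
print(len(symmetric_nn_ratio_matches(desc_a, desc_b)))
```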