Dense Scattering Layer Removal
We propose a new model, together with advanced optimization, to separate a
thick scattering-media layer from a single natural image. It is able to handle
challenging underwater scenes as well as images taken in fog and sandstorms,
all of which exhibit significantly reduced visibility. Our method addresses a
critical issue -- that is, originally unnoticeable impurities are greatly
magnified after the scattering-media layer is removed -- with transmission-aware
optimization. We introduce non-local structure-aware regularization to
properly constrain transmission estimation without introducing halo artifacts.
A selective-neighbor criterion is presented to convert the unconventional
constrained optimization problem into an unconstrained one, which can then be
efficiently solved.
Comment: 10 pages, 10 figures, SIGGRAPH Asia 2013 Technical Brief
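For context, single-image scattering removal typically builds on the standard scattering (haze) formation model I = J*t + A*(1 - t), where I is the observed image, J the scene radiance, t the transmission, and A the airlight. A minimal numpy sketch of the inversion step (not the paper's optimization; the clamping threshold is an illustrative choice that reflects the noise-magnification issue the abstract mentions):

```python
import numpy as np

# Standard scattering image formation model:
#   I = J * t + A * (1 - t)
# Given estimates of transmission t and airlight A, the scene layer J
# is recovered by inverting the model.

def remove_scattering_layer(I, t, A, t_min=0.1):
    """Invert the scattering model; t is clamped from below to avoid
    magnifying noise and impurities where transmission is near zero."""
    t_safe = np.maximum(t, t_min)
    return (I - A) / t_safe + A

# Tiny synthetic check: build I from a known J, then invert.
J = np.array([0.2, 0.5, 0.8])
t = np.array([0.5, 0.5, 0.5])
A = 1.0
I = J * t + A * (1 - t)
J_rec = remove_scattering_layer(I, t, A)
```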
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
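To make the discrete group-sparse model concrete, the following numpy sketch implements the basic model projection -- keep the k groups with the largest Euclidean norm and zero out the rest -- which is the building block of discrete structured-sparsity algorithms. The data and group layout are hypothetical, not taken from the chapter:

```python
import numpy as np

# Discrete group-sparse projection: retain the k groups of coordinates
# with the largest energy (Euclidean norm), zero out all other groups.

def project_group_sparse(x, groups, k):
    energies = [np.linalg.norm(x[g]) for g in groups]
    keep = np.argsort(energies)[-k:]       # indices of the k strongest groups
    out = np.zeros_like(x)
    for i in keep:
        g = groups[i]
        out[g] = x[g]
    return out

x = np.array([3.0, 4.0, 0.1, 0.2, 10.0, 0.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
x_proj = project_group_sparse(x, groups, 2)   # keeps groups 0 and 2
```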
Bernoulli Factories and Black-Box Reductions in Mechanism Design
We provide a polynomial time reduction from Bayesian incentive compatible
mechanism design to Bayesian algorithm design for welfare maximization
problems. Unlike prior results, our reduction achieves exact incentive
compatibility for problems with multi-dimensional and continuous type spaces.
The key technical barrier preventing exact incentive compatibility in prior
black-box reductions is that repairing violations of incentive constraints
requires understanding the distribution of the mechanism's output. Reductions
that instead estimate the output distribution by sampling inevitably suffer
from sampling error, which typically precludes exact incentive compatibility.
We overcome this barrier by employing and generalizing the computational
model in the literature on Bernoulli Factories. In a Bernoulli factory problem,
one is given a function mapping the bias of an "input coin" to that of an
"output coin", and the challenge is to efficiently simulate the output coin
given sample access to the input coin. We generalize this to the "expectations
from samples" computational model, in which an instance is specified by a
function mapping the expected values of a set of input distributions to a
distribution over outcomes. The challenge is to give a polynomial time
algorithm that exactly samples from the distribution over outcomes given only
sample access to the input distributions. In this model, we give a polynomial
time algorithm for the exponential weights: expected values of the input
distributions correspond to the weights of alternatives and we wish to select
an alternative with probability proportional to an exponential function of its
weight. This algorithm is the key ingredient in designing an incentive
compatible mechanism for bipartite matching, which can be used to make the
approximately incentive compatible reduction of Hartline et al. (2015) exactly
incentive compatible.
Comment: To appear in Proc. 49th ACM Symposium on Theory of Computing (STOC 2017)
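To make the Bernoulli-factory setting concrete, here is the simplest classical instance, f(p) = p^2: flip the input coin twice and output heads iff both flips are heads. The output coin is exactly Bernoulli(p^2), with no estimation of p and hence no sampling error. (Illustrative sketch only; the exponential-weights factory in the paper is substantially more involved.)

```python
import random

# Bernoulli factory for f(p) = p^2: two flips of a coin of unknown
# bias p simulate one flip of a coin of bias p^2, exactly.

def input_coin(p):
    return random.random() < p

def squared_coin(flip):
    """flip: zero-argument callable returning a Bernoulli(p) sample.
    Returns a single exact Bernoulli(p^2) sample."""
    return flip() and flip()

random.seed(0)
p = 0.6
n = 200_000
hits = sum(squared_coin(lambda: input_coin(p)) for _ in range(n))
freq = hits / n   # empirical frequency, should be near p^2 = 0.36
```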
A survey of sparse representation: algorithms and applications
Sparse representation has attracted much attention from researchers in fields
of signal processing, image processing, computer vision and pattern
recognition. Sparse representation also has a good reputation in both
theoretical research and practical applications. Many different algorithms have
been proposed for sparse representation. The main purpose of this article is to
provide a comprehensive study and an updated review on sparse representation
and to supply guidance for researchers. The taxonomy of sparse representation
methods can be studied from various viewpoints. For example, in terms of the
different norm minimizations used in sparsity constraints, the methods can be
roughly categorized into five groups: sparse representation with $\ell_0$-norm
minimization, sparse representation with $\ell_p$-norm ($0<p<1$) minimization,
sparse representation with $\ell_1$-norm minimization, sparse representation
with $\ell_{2,1}$-norm minimization, and sparse representation with
$\ell_2$-norm minimization. In this paper, a comprehensive overview of
sparse representation is provided. The available sparse representation
algorithms can also be empirically categorized into four groups: greedy
strategy approximation, constrained optimization, proximity algorithm-based
optimization, and homotopy algorithm-based sparse representation. The
rationales of different algorithms in each category are analyzed and a wide
range of sparse representation applications are summarized, which could
sufficiently reveal the potential nature of the sparse representation theory.
Specifically, an experimental comparative study of these sparse
representation algorithms is presented. The Matlab code used in this paper is
available at: http://www.yongxu.org/lunwen.html.
Comment: Published in IEEE Access, Vol. 3, pp. 490-530, 2015
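As an illustration of the "greedy strategy approximation" category the survey discusses, the following numpy sketch implements Orthogonal Matching Pursuit, the canonical greedy sparse-coding algorithm. This is an illustrative sketch with synthetic data, not the survey's Matlab code:

```python
import numpy as np

# Orthogonal Matching Pursuit: greedily select dictionary atoms most
# correlated with the residual, refitting coefficients by least squares.

def omp(D, y, k):
    """Select k atoms of dictionary D to represent y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, 2)
```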
Bethe Learning of Conditional Random Fields via MAP Decoding
Many machine learning tasks can be formulated in terms of predicting
structured outputs. In frameworks such as the structured support vector machine
(SVM-Struct) and the structured perceptron, discriminative functions are
learned by iteratively applying efficient maximum a posteriori (MAP) decoding.
However, maximum likelihood estimation (MLE) of probabilistic models over these
same structured spaces requires computing partition functions, which is
generally intractable. This paper presents a method for learning discrete
exponential family models using the Bethe approximation to the MLE. Remarkably,
this problem also reduces to iterative (MAP) decoding. This connection emerges
by combining the Bethe approximation with a Frank-Wolfe (FW) algorithm on a
convex dual objective which circumvents the intractable partition function. The
result is a new single loop algorithm MLE-Struct, which is substantially more
efficient than previous double-loop methods for approximate maximum likelihood
estimation. Our algorithm outperforms existing methods in experiments involving
image segmentation, matching problems from vision, and a new dataset of
university roommate assignments.
Comment: 19 pages (9 supplementary), 10 figures (3 supplementary)
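The Frank-Wolfe idea underlying this reduction can be sketched on a toy problem: minimize a smooth objective over a polytope using only a linear-minimization oracle, the role that MAP decoding plays in MLE-Struct. The example below (a quadratic over the probability simplex, with the standard 2/(t+2) step size) is an illustrative sketch, not the paper's algorithm:

```python
import numpy as np

# Frank-Wolfe sketch: minimize f(x) = 0.5*||x - b||^2 over the
# probability simplex. Each iteration calls only a linear-minimization
# oracle (best vertex for the current gradient) -- no projections.

def frank_wolfe(b, iters=2000):
    n = b.size
    x = np.ones(n) / n                       # start at the simplex center
    for t in range(iters):
        grad = x - b
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # LMO: best simplex vertex
        gamma = 2.0 / (t + 2.0)              # standard FW step size
        x = (1 - gamma) * x + gamma * s
    return x

b = np.array([0.2, 0.5, 0.3])                # b lies in the simplex,
x_star = frank_wolfe(b)                      # so the minimizer is b itself
```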
Applications of Compressed Sensing in Communications Networks
This paper presents a tutorial on CS applications in communications
networks. Shannon's sampling theorem states that to recover a signal, the
sampling rate must be at least the Nyquist rate. Compressed sensing (CS) is
based on the surprising fact that a signal that is sparse in a certain
representation can be recovered from samples taken at a rate far below the
Nyquist rate. Since its inception in 2006, CS has attracted much interest in
the research community and
found wide-ranging applications from astronomy, biology, communications, image
and video processing, and medicine to radar. CS has also found successful
applications in communications networks: it has been applied to the detection
and estimation of wireless signals, source coding, multi-access channels, data
collection in sensor networks, and network monitoring. In many cases, CS was
shown to
bring performance gains on the order of 10X. We believe this is just the
beginning of CS applications in communications networks, and the future will
see even more fruitful applications of CS in our field.
Comment: 18 pages
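The sub-Nyquist recovery principle can be demonstrated in a few lines: take far fewer random measurements than the ambient dimension and recover a sparse signal by l1 minimization (basis pursuit), cast as a linear program. The sizes below are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Compressed-sensing toy: y = Phi x with m << n, recovery by
#   minimize ||x||_1  subject to  Phi x = y,
# written as an LP over z = [x, t] with |x_i| <= t_i.

rng = np.random.default_rng(1)
n, m = 40, 15                                # ambient dim, measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20]] = [1.0, -2.0]                # 2-sparse signal
y = Phi @ x_true

c = np.concatenate([np.zeros(n), np.ones(n)])          # minimize sum(t)
A_ub = np.block([[np.eye(n), -np.eye(n)],              #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])            # -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Phi, np.zeros((m, n))])              # Phi x = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * n))
x_hat = res.x[:n]
```

The solution is measurement-consistent and has l1 norm no larger than that of the true signal, which is the mechanism behind exact sparse recovery.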
Semi-supervised Ranking Pursuit
We propose a novel sparse preference learning/ranking algorithm. Our
algorithm approximates the true utility function by a weighted sum of basis
functions using the squared loss on pairs of data points, and is a
generalization of the kernel matching pursuit method. It can operate both in a
supervised and a semi-supervised setting and allows efficient search for
multiple, near-optimal solutions. Furthermore, we describe the extension of the
algorithm suitable for combined ranking and regression tasks. In our
experiments we demonstrate that the proposed algorithm outperforms several
state-of-the-art learning methods when taking into account unlabeled data and
performs comparably in a supervised learning scenario, while providing sparser
solutions.
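The pairwise squared loss that drives such ranking algorithms can be written down directly: the learned utility should score the preferred item of each pair higher by a unit margin, and greedy (matching-pursuit style) selection adds at each step the basis function that most reduces this loss. A minimal sketch of the loss with hypothetical data:

```python
import numpy as np

# Pairwise squared loss for preference learning: for each pair (i, j)
# with item i preferred over item j, penalize deviation of the score
# difference from a margin of 1.

def pairwise_squared_loss(scores, pairs):
    """pairs: list of (i, j) meaning item i is preferred over item j."""
    return sum((scores[i] - scores[j] - 1.0) ** 2 for i, j in pairs)

scores = np.array([2.0, 1.0, 0.0])           # item 0 > item 1 > item 2
pairs = [(0, 1), (1, 2), (0, 2)]
loss = pairwise_squared_loss(scores, pairs)  # only (0, 2) violates the margin
```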
Solving Jigsaw Puzzles By the Graph Connection Laplacian
We propose a novel mathematical framework to address the problem of
automatically solving large jigsaw puzzles. This problem assumes a large image,
which is cut into equal square pieces that are arbitrarily rotated and
shuffled, and asks to recover the original image given the transformed pieces.
The main contribution of this work is a method for recovering the rotations of
the pieces when both shuffles and rotations are unknown. A major challenge of
this procedure is estimating the graph connection Laplacian without the
knowledge of shuffles. We guarantee some robustness of the latter estimate to
measurement errors. A careful combination of our proposed method for estimating
rotations with any existing method for estimating shuffles results in a
practical solution for the jigsaw puzzle problem. Numerical experiments
demonstrate the competitive accuracy of this solution, its robustness to
corruption and its computational advantage for large puzzles.
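A graph connection Laplacian can be sketched in a few lines: each edge (i, j) carries an orthogonal matrix rho_ij estimating the relative rotation between pieces i and j, and when the ratios are consistent (rho_ij = R_i R_j^T), the vector stacking the true rotations lies in the Laplacian's null space, which is what spectral methods exploit. An illustrative 2D toy, not the paper's construction:

```python
import numpy as np

# Connection Laplacian of a 3-node triangle graph with 2x2 rotation
# blocks. L = D - W blockwise, with identity blocks on the diagonal
# per incident edge and -rho_ij on the off-diagonal blocks.

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R = [rot(0.0), rot(np.pi / 2), rot(np.pi)]   # ground-truth rotations
edges = [(0, 1), (1, 2), (0, 2)]
d, n = 2, len(R)

L = np.zeros((n * d, n * d))
for i, j in edges:
    rho = R[i] @ R[j].T                      # consistent edge ratio
    L[i*d:(i+1)*d, i*d:(i+1)*d] += np.eye(d)
    L[j*d:(j+1)*d, j*d:(j+1)*d] += np.eye(d)
    L[i*d:(i+1)*d, j*d:(j+1)*d] -= rho
    L[j*d:(j+1)*d, i*d:(i+1)*d] -= rho.T

V = np.vstack(R)                             # stacked true rotations, (n*d, d)
residual = np.linalg.norm(L @ V)             # zero for consistent ratios
```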
Simulating CRF with CNN for CNN
Combining a CNN with a CRF to model dependencies between pixel labels is a
popular research direction. This task is far from trivial, especially if
end-to-end training is desired. In this paper, we propose a novel simple
approach to CNN+CRF combination. In particular, we propose to simulate a CRF
regularizer with a trainable module that has standard CNN architecture. We call
this module a CRF Simulator. We can automatically generate an unlimited amount
of ground truth for training such a CRF Simulator without any user interaction,
provided we have an efficient algorithm for optimization of the actual CRF
regularizer. After our CRF Simulator is trained, it can be directly
incorporated as part of any larger CNN architecture, enabling a seamless
end-to-end training. In particular, the other modules can learn parameters that
are more attuned to the performance of the CRF Simulator module. We demonstrate
the effectiveness of our approach on the task of salient object segmentation
regularized with the standard binary CRF energy. In contrast to previous work,
we do not need to develop and implement the complex mechanics of optimizing a
specific CRF as part of CNN. In fact, our approach can be easily extended to
other CRF energies, including multi-label. To the best of our knowledge we are
the first to study the question of whether the output of CNNs can have
regularization properties of CRFs.
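The standard binary (Potts) CRF energy that such a simulator would be trained to mimic is simple to state: unary costs per pixel plus a pairwise penalty for each pair of neighboring pixels with different labels. An illustrative numpy sketch with a toy labeling:

```python
import numpy as np

# Binary Potts CRF energy on a 4-connected grid: sum of per-pixel
# unary costs plus a constant penalty per disagreeing neighbor pair.

def binary_crf_energy(labels, unary, pairwise_weight=1.0):
    """labels: HxW array in {0, 1}; unary: HxWx2 per-label costs."""
    H, W = labels.shape
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    e += pairwise_weight * np.sum(labels[1:, :] != labels[:-1, :])   # vertical pairs
    e += pairwise_weight * np.sum(labels[:, 1:] != labels[:, :-1])   # horizontal pairs
    return float(e)

labels = np.array([[0, 0],
                   [1, 1]])
unary = np.zeros((2, 2, 2))                  # uniform unaries for the toy check
energy = binary_crf_energy(labels, unary)    # two vertical disagreements
```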
HYDRA: Hybrid Deep Magnetic Resonance Fingerprinting
Purpose: Magnetic resonance fingerprinting (MRF) methods typically rely on
dictionary matching to map the temporal MRF signals to quantitative tissue
parameters. Such approaches suffer from inherent discretization errors, as well
as high computational complexity as the dictionary size grows. To alleviate
these issues, we propose a HYbrid Deep magnetic ResonAnce fingerprinting
approach, referred to as HYDRA.
Methods: HYDRA involves two stages: a model-based signal restoration phase
and a learning-based parameter restoration phase. Signal restoration is
implemented using low-rank based de-aliasing techniques while parameter
restoration is performed using a deep nonlocal residual convolutional neural
network. The designed network is trained on synthesized MRF data simulated with
the Bloch equations and fast imaging with steady state precession (FISP)
sequences. In test mode, it takes a temporal MRF signal as input and produces
the corresponding tissue parameters.
Results: We validated our approach on both synthetic data and anatomical data
generated from a healthy subject. The results demonstrate that, in contrast to
conventional dictionary-matching based MRF techniques, our approach
significantly improves inference speed by eliminating the time-consuming
dictionary matching operation, and alleviates discretization errors by
outputting continuous-valued parameters. We further avoid the need to store a
large dictionary, thus reducing memory requirements.
Conclusions: Our approach demonstrates advantages in terms of inference
speed, accuracy, and storage requirements over competing MRF methods.
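The dictionary-matching baseline that HYDRA replaces is straightforward to sketch: each measured fingerprint is matched to the dictionary atom with the largest normalized inner product, and that atom's precomputed tissue parameters are returned. An illustrative sketch with synthetic data (not the paper's pipeline):

```python
import numpy as np

# MRF dictionary matching: nearest-atom lookup by normalized
# correlation, returning the matched atom's tissue parameters.

def dictionary_match(signal, dictionary, params):
    """dictionary: (n_atoms, T) simulated fingerprints;
    params: (n_atoms, p) tissue parameters per atom."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    best = int(np.argmax(np.abs(d @ s)))     # best-correlated fingerprint
    return params[best]

rng = np.random.default_rng(2)
dictionary = rng.standard_normal((100, 50))
params = rng.uniform(size=(100, 2))          # e.g. (T1, T2) per atom
signal = 3.0 * dictionary[7]                 # scaled copy of atom 7
matched = dictionary_match(signal, dictionary, params)
```

Note the two drawbacks the paper targets: the answer is quantized to the dictionary grid, and matching cost grows with dictionary size.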