Statistics of Natural Images and Models
Large calibrated datasets of `random' natural images have recently become available. These make possible precise and intensive statistical studies of the local nature of images. We report results ranging from the simplest single-pixel intensity statistics to the joint distribution of three Haar wavelet responses. Some of these statistics shed light on old issues, such as the near scale-invariance of image statistics, and some are entirely new. We fit mathematical models to some of the statistics and explain others in terms of local image features.
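Two of the local statistics this abstract mentions can be sketched in a few lines. The following is a minimal illustration only, computed on a synthetic random field rather than the calibrated natural-image datasets the study uses; the image size, bin count, and the horizontal pairwise-difference stand-in for a Haar response are assumptions for the sketch.

```python
import numpy as np

# Hedged sketch: two local image statistics on a synthetic random field
# (a real study would use calibrated natural images).
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)

# 1) Single-pixel intensity distribution (normalized histogram).
hist, edges = np.histogram(img, bins=32, density=True)

# 2) Horizontal Haar-like response: differences of adjacent pixel pairs.
haar = img[:, 0::2] - img[:, 1::2]

# Natural-image wavelet responses are famously heavy-tailed (sample
# kurtosis well above the Gaussian value of 3); here we just compute it.
z = (haar - haar.mean()) / haar.std()
kurtosis = float(np.mean(z ** 4))
```

On real natural images the single-pixel histogram and the heavy-tailed wavelet-response distribution are exactly the kinds of statistics one would then fit models to.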
Revisiting loss-specific training of filter-based MRFs for image restoration
It is now well known that Markov random fields (MRFs) are particularly
effective for modeling image priors in low-level vision. Recent years have seen
the emergence of two main approaches for learning the parameters in MRFs: (1)
probabilistic learning using sampling-based algorithms and (2) loss-specific
training based on MAP estimation. Our investigation of existing training
approaches shows that the performance of loss-specific training has been
significantly underestimated in prior work. In this paper, we revisit
this approach and use techniques from bi-level optimization to solve it. We
show that we can get a substantial gain in the final performance by solving the
lower-level problem in the bi-level framework with high accuracy using our
newly proposed algorithm. As a result, our trained model is on par with highly
specialized image denoising algorithms and clearly outperforms
probabilistically trained MRF models. Our findings suggest that for the
loss-specific training scheme, solving the lower-level problem with higher
accuracy is beneficial. Our trained model has the additional advantage that
inference is extremely efficient: our GPU-based implementation takes less than
1 s to produce state-of-the-art performance.
Comment: 10 pages, 2 figures; appeared at the 35th German Conference, GCPR 2013, Saarbrücken, Germany, September 3-6, 2013 (proceedings).
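The bi-level structure described above can be illustrated on a toy problem. This sketch is not the paper's algorithm: it replaces the filter-based MRF prior with a simple quadratic smoothness prior and replaces gradient-based bi-level optimization with a grid search over one scalar weight; what it keeps is the key structure, a lower-level denoising problem solved exactly inside an upper-level loss minimization.

```python
import numpy as np

# Hedged sketch of loss-specific (bi-level) training on a toy 1-D signal.
# Lower level: exact minimizer of 0.5||x - y||^2 + 0.5*lam*||D x||^2,
# a quadratic stand-in for a filter-based MRF prior, solved accurately
# (mirroring the paper's point that an accurate lower-level solve helps).
# Upper level: pick the prior weight lam minimizing the loss vs. ground truth.
rng = np.random.default_rng(1)
n = 50
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
y = x_true + 0.3 * rng.standard_normal(n)        # noisy observation

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]         # finite-difference "filter"

def lower_level(lam):
    """Exact solve of the quadratic denoising problem."""
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

lams = np.logspace(-3, 2, 50)
losses = [float(np.sum((lower_level(l) - x_true) ** 2)) for l in lams]
lam_best = float(lams[int(np.argmin(losses))])
```

The real method differentiates through the lower-level solution to train many filter parameters jointly; the one-parameter grid search above is only meant to make the two nested optimization levels concrete.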
Deep Markov Random Field for Image Modeling
Markov Random Fields (MRFs), a formulation widely used in generative image
modeling, have long been plagued by the lack of expressive power. This issue is
primarily because conventional MRF formulations tend to use
simplistic factors to capture local patterns. In this paper, we move beyond
such limitations, and propose a novel MRF model that uses fully-connected
neurons to express the complex interactions among pixels. Through theoretical
analysis, we reveal an inherent connection between this model and recurrent
neural networks, and thereon derive an approximated feed-forward network that
couples multiple RNNs along opposite directions. This formulation combines the
expressive power of deep neural networks and the cyclic dependency structure of
MRF in a unified model, bringing the modeling capability to a new level. The
feed-forward approximation also allows it to be efficiently learned from data.
Experimental results on a variety of low-level vision tasks show notable
improvement over the state of the art.
Comment: Accepted at ECCV 201
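The structural idea of coupling recurrent scans along opposite directions can be sketched compactly. The cell below (a plain tanh RNN), the feature sizes, and additive coupling are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

# Hedged sketch: approximating cyclic MRF-style dependencies with RNNs
# scanned in opposite directions and then coupled.
rng = np.random.default_rng(0)
T, d = 16, 8                          # sequence of T pixel/column features
x = rng.standard_normal((T, d))
Wx = 0.1 * rng.standard_normal((d, d))
Wh = 0.1 * rng.standard_normal((d, d))

def scan(seq):
    """One directional pass of a simple tanh RNN cell."""
    h, out = np.zeros(d), []
    for t in range(len(seq)):
        h = np.tanh(seq[t] @ Wx + h @ Wh)
        out.append(h)
    return np.array(out)

fwd = scan(x)                         # left-to-right pass
bwd = scan(x[::-1])[::-1]             # right-to-left pass, realigned
coupled = fwd + bwd                   # couple the opposite directions
```

Each output position thus receives context from both sides of the sequence, which is the feed-forward stand-in for the cyclic dependencies of an MRF.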
Local Variation as a Statistical Hypothesis Test
The goal of image oversegmentation is to divide an image into several pieces,
each of which should ideally be part of an object. One of the simplest and yet
most effective oversegmentation algorithms is known as local variation (LV)
(Felzenszwalb and Huttenlocher 2004). In this work, we study this algorithm and
show that algorithms similar to LV can be devised by applying different
statistical models and decisions, thus providing further theoretical
justification and a well-founded explanation for the unexpectedly high
performance of the LV approach. Some of these algorithms are based on
statistics of natural images and on a hypothesis testing decision; we denote
these algorithms probabilistic local variation (pLV). The best pLV algorithm,
which relies on censored estimation, achieves state-of-the-art results while
retaining the computational complexity of the LV algorithm.
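The LV merge rule referenced above (Felzenszwalb and Huttenlocher 2004) can be sketched on a tiny 1-D "image". The k/|C| relaxation term shown is the original LV criterion; the pLV variants in this paper would replace that decision with a statistical hypothesis test. Values and k below are arbitrary illustration choices.

```python
# Hedged sketch of the local-variation (LV) merge rule with union-find.
def local_variation(values, k=2.0):
    n = len(values)
    parent = list(range(n))
    size = [1] * n
    internal = [0.0] * n          # max internal edge weight per component

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    # neighbor edges, processed in increasing weight order
    edges = sorted((abs(values[i + 1] - values[i]), i, i + 1)
                   for i in range(n - 1))
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        # merge iff the edge weight is small relative to both components'
        # internal variation plus the k/|C| relaxation
        if w <= min(internal[ri] + k / size[ri],
                    internal[rj] + k / size[rj]):
            parent[rj] = ri
            size[ri] += size[rj]
            internal[ri] = max(internal[ri], internal[rj], w)
    return [find(i) for i in range(n)]

labels = local_variation([0.0, 0.1, 0.05, 5.0, 5.1, 5.2], k=1.0)
```

On this toy input the rule merges the two low-contrast runs into two components and refuses to merge across the large jump, which is exactly the behavior the hypothesis-testing view of pLV seeks to explain.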
Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization
Solving for adversarial examples with projected gradient descent has been
demonstrated to be highly effective at fooling neural-network-based
classifiers. In the black-box setting, however, the attacker has only query
access to the network, and finding a successful adversarial example becomes
much more difficult. To this end, recent methods aim at
estimating the true gradient signal based on the input queries but at the cost
of excessive queries. We propose an efficient discrete surrogate to the
optimization problem that does not require estimating the gradient and
consequently has no first-order update hyperparameters to tune.
Our experiments on CIFAR-10 and ImageNet show state-of-the-art black-box
attack performance with significant reduction in the required queries compared
to a number of recently proposed methods. The source code is available at
https://github.com/snu-mllab/parsimonious-blackbox-attack.
Comment: Accepted and to appear at ICML 201
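The flavor of a discrete, gradient-free surrogate can be sketched as follows. This is not the authors' algorithm: the linear stand-in model, block size, and single greedy pass are assumptions for illustration; the shared idea is restricting the perturbation to ±eps per block and deciding flips purely from loss queries.

```python
import numpy as np

# Hedged sketch: greedy block-wise sign flipping under query-only access.
rng = np.random.default_rng(0)
w = rng.standard_normal(16)            # stands in for an unknown classifier

def blackbox_loss(x):                  # query access only; higher = better attack
    return float(w @ x)

x0 = rng.standard_normal(16)           # clean input
eps, block = 0.1, 4
signs = np.ones(16)                    # perturbation restricted to +/- eps

queries = 0
for b in range(0, 16, block):
    flipped = signs.copy()
    flipped[b:b + block] *= -1.0       # candidate: flip this block's sign
    if blackbox_loss(x0 + eps * flipped) > blackbox_loss(x0 + eps * signs):
        signs = flipped                # keep the flip only if queries say it helps
    queries += 2
adv = x0 + eps * signs
```

Because every accepted flip strictly increases the queried loss, the procedure needs no gradient estimate and no step-size tuning, which is the property the abstract emphasizes.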
Geometric approach to sampling and communication
Relationships between the classical, Shannon-type, and
geometric-based approaches to sampling are investigated. Some aspects of coding
and communication through a Gaussian channel are considered. In particular, a
constructive method to determine the quantizing dimension in Zador's theorem is
provided. A geometric version of Shannon's Second Theorem is introduced.
Applications to Pulse Code Modulation and Vector Quantization of Images are
addressed.
Comment: 19 pages, submitted for publication.
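Of the applications named above, Pulse Code Modulation is the simplest to illustrate. The sketch below shows only its uniform scalar quantizer building block (signal, range, and bit depths are arbitrary choices); the geometric theory the abstract describes is far more general.

```python
import numpy as np

# Hedged illustration: uniform scalar quantization, the PCM building block.
def pcm_quantize(x, bits, lo=-1.0, hi=1.0):
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1).astype(int)
    return lo + (idx + 0.5) * step          # mid-point reconstruction

t = np.linspace(0.0, 1.0, 200)
x = np.sin(2.0 * np.pi * 3.0 * t)
errs = {b: float(np.mean((x - pcm_quantize(x, b)) ** 2)) for b in (2, 4, 6)}
```

As expected for a uniform quantizer, the mean-squared distortion falls roughly with the square of the step size, i.e. each extra bit buys about a four-fold error reduction.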