Discriminative Transfer Learning for General Image Restoration
Recently, several discriminative learning approaches have been proposed for
effective image restoration, achieving a convincing trade-off between image
quality and computational efficiency. However, these methods require separate
training for each restoration task (e.g., denoising, deblurring, demosaicing)
and problem condition (e.g., noise level of input images). This makes it
time-consuming and difficult to encompass all tasks and conditions during
training. In this paper, we propose a discriminative transfer learning method
that incorporates formal proximal optimization and discriminative learning for
general image restoration. The method requires only a single training pass and
allows for reuse across various problems and conditions while achieving an
efficiency comparable to previous discriminative approaches. Furthermore, after
being trained, our model can be easily transferred to new likelihood terms to
solve untrained tasks, or be combined with existing priors to further improve
image restoration quality.
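To make the splitting idea concrete, below is a minimal plug-and-play-style sketch of proximal optimization where the task-specific likelihood is handled by a gradient step on a linear forward operator and the prior is handled by a learned denoising step. This is not the paper's exact solver; the operator names, the step size, and the use of a generic denoiser as the trained proximal operator are illustrative assumptions.

```python
import numpy as np

def restore(y, forward_op, adjoint_op, denoiser, rho=1.0, step=0.1, n_iters=30):
    """Illustrative proximal-splitting restoration loop (a sketch, not the paper's method).

    y          : observed image (noisy, blurred, or mosaicked)
    forward_op : task-specific linear operator A (identity for denoising, blur for deblurring)
    adjoint_op : A^T
    denoiser   : stand-in for a discriminatively trained proximal operator on the prior
    """
    x = adjoint_op(y)          # initial estimate
    z = x.copy()
    u = np.zeros_like(x)       # scaled dual variable
    for _ in range(n_iters):
        # likelihood step: gradient descent on ||A x - y||^2 + (rho/2)||x - (z - u)||^2
        grad = adjoint_op(forward_op(x) - y) + rho * (x - (z - u))
        x = x - step * grad
        # prior step: apply the learned operator; this part is shared across tasks
        z = denoiser(x + u)
        # dual update
        u = u + x - z
    return x
```

The point of the sketch is the separation of concerns: swapping `forward_op`/`adjoint_op` changes the restoration task and its condition, while the prior step stays fixed, which mirrors the abstract's claim that a trained model can be transferred to new likelihood terms without retraining.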
Zero-Shot Recognition with Unreliable Attributes
In principle, zero-shot learning makes it possible to train a recognition
model simply by specifying the category's attributes. For example, with
classifiers for generic attributes like \emph{striped} and \emph{four-legged},
one can construct a classifier for the zebra category by enumerating which
properties it possesses---even without providing zebra training images. In
practice, however, the standard zero-shot paradigm suffers because attribute
predictions in novel images are hard to get right. We propose a novel random
forest approach to train zero-shot models that explicitly accounts for the
unreliability of attribute predictions. By leveraging statistics about each
attribute's error tendencies, our method obtains more robust discriminative
models for the unseen classes. We further devise extensions to handle the
few-shot scenario and unreliable attribute descriptions. On three datasets, we
demonstrate the benefit for visual category learning with zero or few training
examples, a critical domain for rare categories or categories defined on the
fly.
Comment: NIPS 201
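As a rough illustration of how per-attribute error statistics can enter zero-shot scoring, here is a simple probabilistic sketch: unseen classes are scored by how well their binary attribute signatures explain noisy attribute detections, weighted by each detector's estimated true/false positive rates. This is not the paper's random forest construction; the function and parameter names (`tpr`, `fpr`, `class_signatures`) are assumptions for the example.

```python
import numpy as np

def zero_shot_scores(attr_probs, class_signatures, tpr, fpr):
    """Score unseen classes from unreliable attribute predictions (illustrative only).

    attr_probs       : (M,) predicted probability that each of M attributes is present
    class_signatures : (C, M) binary matrix of which attributes each unseen class possesses
    tpr, fpr         : (M,) estimated true/false positive rates of each attribute classifier,
                       i.e., the kind of error statistics the abstract refers to
    """
    scores = []
    for sig in class_signatures:
        # probability each detector fires, given the attribute value implied by this class
        p_fire = np.where(sig == 1, tpr, fpr)
        # log-likelihood of the soft detections under this class's attribute signature
        log_lik = np.sum(attr_probs * np.log(p_fire + 1e-9)
                         + (1 - attr_probs) * np.log(1 - p_fire + 1e-9))
        scores.append(log_lik)
    return np.array(scores)   # argmax over classes gives the zero-shot prediction
```

A class whose signature disagrees with a highly reliable detector is penalized heavily, while disagreement with an error-prone detector costs little, which is the intuition behind explicitly modeling attribute unreliability.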