Unsupervised learning of clutter-resistant visual representations from natural videos
Populations of neurons in inferotemporal cortex (IT) maintain an explicit
code for object identity that also tolerates transformations of object
appearance, e.g., position, scale, and viewing angle [1, 2, 3]. Though the learning
rules are not known, recent results [4, 5, 6] suggest the operation of an
unsupervised temporal-association-based method, e.g., Foldiak's trace rule [7].
Such methods exploit the temporal continuity of the visual world by assuming
that visual experience over short timescales will tend to have invariant
identity content. Thus, by associating representations of frames from nearby
times, a representation that tolerates whatever transformations occurred in the
video may be achieved. Many previous studies verified that such rules can work
in simple situations without background clutter, but the presence of visual
clutter has remained problematic for this approach. Here we show that temporal
association based on large class-specific filters (templates) avoids the
problem of clutter. Our system learns in an unsupervised way from natural
videos gathered from the internet, and is able to perform a difficult
unconstrained face recognition task on natural images: Labeled Faces in the
Wild [8].
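The temporal-association idea described above can be sketched in a few lines. This is a minimal numpy illustration of a Foldiak-style trace rule for a single linear unit, not the paper's system; the function name and constants are hypothetical:

```python
import numpy as np

def trace_rule_update(w, frames, alpha=0.1, delta=0.8):
    """One pass of a Foldiak-style trace rule over a frame sequence.

    The output trace y_bar mixes the current response with past
    responses, so frames that are close in time pull the weight
    vector toward a shared, transformation-tolerant feature.
    """
    y_bar = 0.0
    for x in frames:
        y = float(w @ x)                         # linear unit response
        y_bar = (1 - delta) * y + delta * y_bar  # temporally smoothed trace
        w = w + alpha * y_bar * (x - w)          # Hebbian pull toward x
    return w
```

Because adjacent frames share the smoothed trace `y_bar`, the weight vector is drawn toward a feature common to the whole short clip rather than to any single frame.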
Single-Frame Image Super-Resolution Using Learned Directionlets
In this paper, a new directionally adaptive, learning-based, single-image
super-resolution method using a multi-directional wavelet transform, called
Directionlets, is presented. The method uses directionlets to effectively
capture directional features and to extract edge information along different
directions from a set of available high-resolution images. This information is
used as the training set for super-resolving a low-resolution input image: the
Directionlet coefficients at finer scales of its high-resolution counterpart
are learned locally from this training set, and the inverse Directionlet
transform recovers the super-resolved high-resolution image. Simulation results
show that the proposed approach outperforms standard interpolation techniques
such as cubic-spline interpolation, as well as standard wavelet-based learning,
both visually and in terms of mean squared error (MSE). The method also gives
good results on aliased images.
Comment: 14 pages, 6 figures
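The local learning step above, looking up finer-scale detail coefficients for an input patch from a high-resolution training set, can be caricatured with a nearest-neighbour lookup. This sketch stands in for the actual Directionlet machinery; all names and shapes are hypothetical:

```python
import numpy as np

def nn_detail_lookup(lr_patches, train_lr, train_hr_detail):
    """For each low-resolution patch, copy the high-frequency detail
    of its nearest training patch -- a crude stand-in for learning
    directionlet coefficients locally from a training set.

    lr_patches      : (m, d) query patches, flattened
    train_lr        : (n, d) training LR patches, flattened
    train_hr_detail : (n, k) paired HR detail coefficients
    """
    out = []
    for p in lr_patches:
        d2 = ((train_lr - p) ** 2).sum(axis=1)  # squared distances to all examples
        out.append(train_hr_detail[int(np.argmin(d2))])
    return np.stack(out)
```

In the actual method the looked-up detail would be fed through the inverse transform; here it is only returned, to keep the example self-contained.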
Face Hallucination using Linear Models of Coupled Sparse Support
Most face super-resolution methods assume that the low-resolution and
high-resolution manifolds have similar local geometrical structure, and hence
learn local models on the low-resolution manifold (e.g., sparse or locally
linear embedding models), which are then applied on the high-resolution
manifold. However, the low-resolution manifold is distorted by the one-to-many
relationship between low- and high-resolution patches. This paper presents a
method which learns linear models based on the local geometrical structure of
the high-resolution manifold rather than of the low-resolution manifold. For
this, in a first step, the low-resolution patch is used to derive a globally
optimal estimate of the high-resolution patch. The approximated solution is
shown to be close in Euclidean space to the ground truth, but is generally
smooth and lacks the texture details needed by state-of-the-art face
recognizers. This first estimate allows us to find the support on the
high-resolution manifold using sparse coding (SC), which is then used as the
support for learning a local projection (or upscaling) model between the
low-resolution and high-resolution manifolds using Multivariate Ridge
Regression (MRR). Experimental results show that the proposed method
outperforms six face super-resolution methods in terms of both recognition and
quality. These results also reveal that recognition and quality are
significantly affected by the method used for stitching all super-resolved
patches together; quilting was found to better preserve the texture details,
which helps to achieve higher recognition rates.
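The MRR upscaling model mentioned above has a simple closed form. This is a generic multivariate ridge regression sketch in numpy, not the paper's exact pipeline, and the function name is hypothetical:

```python
import numpy as np

def multivariate_ridge(X, Y, lam=0.1):
    """Closed-form multivariate ridge regression.

    Finds W minimizing ||X W - Y||_F^2 + lam * ||W||_F^2, mapping
    LR patch features (rows of X, shape (n, d)) to HR patch targets
    (rows of Y, shape (n, k)). Returns W of shape (d, k).
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

At test time a new LR patch `x` would be upscaled as `x @ W`; in the paper's setting, X and Y would hold only the patches selected by the sparse-coding support.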
Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis
Photorealistic frontal view synthesis from a single face image has a wide
range of applications in the field of face recognition. Although data-driven
deep learning methods have been proposed to address this problem by seeking
solutions from ample face data, this problem is still challenging because it is
intrinsically ill-posed. This paper proposes a Two-Pathway Generative
Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by
simultaneously perceiving global structures and local details. Four
landmark-located patch networks are proposed to attend to local textures in
addition to the commonly used global encoder-decoder network. Beyond the novel
architecture, we make this ill-posed problem well constrained by introducing a
combination of adversarial loss, symmetry loss, and identity-preserving loss.
The combined loss function leverages both the frontal face distribution and
pre-trained discriminative deep face models to guide an identity-preserving
inference of frontal views from profiles. Different from previous deep learning
methods that mainly rely on intermediate features for recognition, our method
directly leverages the synthesized identity-preserving image for downstream
tasks such as face recognition and attribute estimation. Experimental results
demonstrate that our method not only presents compelling perceptual results but
also outperforms state-of-the-art results on large-pose face recognition.
Comment: accepted at ICCV 2017, main paper & supplementary material, 11 pages
Face Hallucination by Attentive Sequence Optimization with Reinforcement Learning
Face hallucination is a domain-specific super-resolution problem that aims to
generate a high-resolution (HR) face image from a low-resolution (LR) input. In
contrast to the existing patch-wise super-resolution models that divide a face
image into regular patches and independently apply LR to HR mapping to each
patch, we implement deep reinforcement learning and develop a novel
attention-aware face hallucination (Attention-FH) framework, which recurrently
learns to attend to a sequence of patches and performs facial-part enhancement by
fully exploiting the global interdependency of the image. Specifically, our
proposed framework incorporates two components: a recurrent policy network for
dynamically specifying a new attended region at each time step based on the
status of the super-resolved image and the past attended region sequence, and a
local enhancement network for selected patch hallucination and global state
updating. The Attention-FH model jointly learns the recurrent policy network
and local enhancement network through maximizing a long-term reward that
reflects the hallucination result with respect to the whole HR image. Extensive
experiments demonstrate that our Attention-FH significantly outperforms the
state-of-the-art methods on in-the-wild face images with large pose and
illumination variations.
Comment: To be published in TPAMI
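The long-term-reward idea above can be illustrated with two small helpers: a PSNR-style per-step quality measure against the whole HR image, and a discounted return over the attended sequence. Both are generic sketches, not the paper's exact reward:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio between two images with values
    in [0, peak]; a common proxy for hallucination quality."""
    mse = ((x - y) ** 2).mean()
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf

def discounted_return(rewards, gamma=0.9):
    """Discounted sum of per-step rewards -- the long-term quantity
    a recurrent policy would be trained to maximize."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

A per-step reward could then be the PSNR improvement of the super-resolved image after enhancing the currently attended patch.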
TextureNet: Consistent Local Parametrizations for Learning from High-Resolution Signals on Meshes
We introduce TextureNet, a neural network architecture designed to extract
features from high-resolution signals associated with 3D surface meshes (e.g.,
color texture maps). The key idea is to utilize a 4-rotational symmetric
(4-RoSy) field to define a domain for convolution on a surface. Though 4-RoSy
fields have several properties favorable for convolution on surfaces (low
distortion, few singularities, consistent parameterization, etc.), orientations
are ambiguous up to 4-fold rotation at any sample point. So, we introduce a new
convolutional operator invariant to the 4-RoSy ambiguity and use it in a
network to extract features from high-resolution signals on geodesic
neighborhoods of a surface. In comparison to alternatives, such as PointNet
based methods which lack a notion of orientation, the coherent structure given
by these neighborhoods results in significantly stronger features. As an
example application, we demonstrate the benefits of our architecture for 3D
semantic segmentation of textured 3D meshes. The results show that our method
outperforms all existing methods on the basis of mean IoU by a significant
margin in both geometry-only (6.4%) and RGB+Geometry (6.9-8.2%) settings.
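One simple way to build an operator invariant to the 4-fold rotational ambiguity described above is to pool a filter's response over the four 90-degree rotations of its kernel. This numpy sketch illustrates the invariance idea only; it is not the paper's exact convolutional operator:

```python
import numpy as np

def rosy4_response(patch, kernel):
    """Correlation response invariant to 4-fold rotation: take the
    max over the four 90-degree rotations of the kernel, so the
    result does not depend on which of the four ambiguous frame
    orientations was chosen at the sample point."""
    return max(float((np.rot90(kernel, k) * patch).sum()) for k in range(4))
```

Rotating the input patch by 90 degrees permutes the four candidate responses, so the max is unchanged.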
A Deep Journey into Super-Resolution: A Survey
Deep convolutional networks based super-resolution is a fast-growing field
with numerous practical applications. In this exposition, we extensively
compare 30+ state-of-the-art super-resolution Convolutional Neural Networks
(CNNs) over three classical and three recently introduced challenging datasets
to benchmark single image super-resolution. We introduce a taxonomy for
deep-learning based super-resolution networks that groups existing methods into
nine categories including linear, residual, multi-branch, recursive,
progressive, attention-based and adversarial designs. We also provide
comparisons between the models in terms of network complexity, memory
footprint, model input and output, learning details, the type of network losses
and important architectural differences (e.g., depth, skip-connections,
filters). The extensive evaluation performed shows consistent and rapid
growth in accuracy over the past few years, along with a corresponding boost
in model complexity and the availability of large-scale datasets. It is also
observed that the pioneering methods identified as the benchmark have been
significantly outperformed by the current contenders. Despite the progress in
recent years, we identify several shortcomings of existing techniques and
provide future research directions towards the solution of these open problems.
Comment: Accepted in ACM Computing Surveys
A survey of sparse representation: algorithms and applications
Sparse representation has attracted much attention from researchers in fields
of signal processing, image processing, computer vision and pattern
recognition. Sparse representation also has a good reputation in both
theoretical research and practical applications. Many different algorithms have
been proposed for sparse representation. The main purpose of this article is to
provide a comprehensive study and an updated review on sparse representation
and to supply a guidance for researchers. The taxonomy of sparse representation
methods can be studied from various viewpoints. For example, in terms of
different norm minimizations used in sparsity constraints, the methods can be
roughly categorized into five groups: sparse representation with l0-norm
minimization, sparse representation with lp-norm (0<p<1) minimization,
sparse representation with l1-norm minimization, sparse representation with
l2,1-norm minimization, and sparse representation with l2-norm minimization.
In this paper, a comprehensive overview of
sparse representation is provided. The available sparse representation
algorithms can also be empirically categorized into four groups: greedy
strategy approximation, constrained optimization, proximity algorithm-based
optimization, and homotopy algorithm-based sparse representation. The
rationales of different algorithms in each category are analyzed and a wide
range of sparse representation applications are summarized, which could
sufficiently reveal the potential nature of the sparse representation theory.
Specifically, an experimental comparative study of these sparse
representation algorithms is presented. The Matlab code used in this paper is
available at: http://www.yongxu.org/lunwen.html.
Comment: Published in IEEE Access, Vol. 3, pp. 490-530, 2015
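As an example of the greedy-strategy-approximation category mentioned above, Orthogonal Matching Pursuit can be written in a few lines of numpy. This is a textbook sketch, not the code distributed with the paper:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms
    (columns of dictionary D) to sparsely code the signal y.

    Each step picks the atom most correlated with the residual,
    then re-fits all selected atoms by least squares.
    """
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))  # best new atom
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)    # refit support
        r = y - Ds @ coef                                # update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

With an orthonormal dictionary the first selected atom already gives the exact coefficient; for overcomplete dictionaries the least-squares refit is what distinguishes OMP from plain matching pursuit.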
Neural Style Transfer: A Review
The seminal work of Gatys et al. demonstrated the power of Convolutional
Neural Networks (CNNs) in creating artistic imagery by separating and
recombining image content and style. This process of using CNNs to render a
content image in different styles is referred to as Neural Style Transfer
(NST). Since then, NST has become a trending topic both in academic literature
and industrial applications. It is receiving increasing attention and a variety
of approaches are proposed to either improve or extend the original NST
algorithm. In this paper, we aim to provide a comprehensive overview of the
current progress towards NST. We first propose a taxonomy of current algorithms
in the field of NST. Then, we present several evaluation methods and compare
different NST algorithms both qualitatively and quantitatively. The review
concludes with a discussion of various applications of NST and open problems
for future research. A list of papers discussed in this review, corresponding
codes, pre-trained models and more comparison results are publicly available at
https://github.com/ycjing/Neural-Style-Transfer-Papers.
Comment: Project page: https://github.com/ycjing/Neural-Style-Transfer-Papers
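The content/style separation underlying NST rests on the Gram-matrix style representation of Gatys et al. A minimal numpy sketch of the style term; the feature-map shape convention (channels, pixels) is an assumption for illustration:

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a (channels, pixels) feature map, normalized
    by the number of pixels; captures channel co-activation
    statistics, i.e. texture/style, discarding spatial layout."""
    return feat @ feat.T / feat.shape[1]

def style_loss(f_gen, f_style):
    """Squared Frobenius distance between the Gram matrices of the
    generated and style feature maps at one network layer."""
    return float(((gram_matrix(f_gen) - gram_matrix(f_style)) ** 2).sum())
```

In a full NST pipeline this loss is summed over several CNN layers and combined with a content loss on deeper-layer activations.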
Joint Maximum Purity Forest with Application to Image Super-Resolution
In this paper, we propose a novel random-forest scheme, namely Joint Maximum
Purity Forest (JMPF), for classification, clustering, and regression tasks. In
the JMPF scheme, the original feature space is transformed into a compactly
pre-clustered feature space, via a trained rotation matrix. The rotation matrix
is obtained through an iterative quantization process, where the input data
belonging to different classes are clustered to the respective vertices of the
new feature space with maximum purity. In the new feature space, orthogonal
hyperplanes, which are employed at the split-nodes of decision trees in random
forests, can tackle the clustering problems effectively. We evaluated our
proposed method on public benchmark datasets for regression and classification
tasks, and experiments showed that JMPF remarkably outperforms other
state-of-the-art random-forest-based approaches. Furthermore, we applied JMPF
to image super-resolution, because the transformed, compact features are more
discriminative for the clustering-regression scheme. Experimental results on
several public benchmark datasets also showed that the JMPF-based image
super-resolution scheme is consistently superior to recent state-of-the-art
image super-resolution algorithms.
Comment: 18 pages, 7 figures
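The iterative quantization process that learns the rotation matrix above can be sketched in the style of classic ITQ: alternate between snapping rotated features to hypercube vertices and solving an orthogonal Procrustes problem for the rotation. A generic numpy sketch, not the authors' code:

```python
import numpy as np

def itq_rotation(X, iters=20, seed=0):
    """Learn an orthogonal rotation R that pushes rows of X toward
    the vertices of the sign hypercube (maximum-purity clustering
    of the rotated feature space, ITQ-style)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal init
    for _ in range(iters):
        B = np.sign(X @ R)                  # assign each point to nearest vertex
        U, _, Vt = np.linalg.svd(X.T @ B)   # orthogonal Procrustes update
        R = U @ Vt
    return R
```

In the rotated space, the axis-aligned (orthogonal) hyperplanes used at random-forest split nodes separate the pre-clustered classes more cleanly, which is the motivation given in the abstract.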