Quality Aware Network for Set to Set Recognition
This paper targets the problem of set-to-set recognition, which learns the
metric between two image sets. Images in each set belong to the same identity.
Since images in a set can be complementary, they are expected to lead to higher
accuracy in practical applications. However, the quality of each sample cannot
be guaranteed, and samples of poor quality will hurt the metric. In this
paper, the quality aware network (QAN) is proposed to confront this problem,
where the quality of each sample can be automatically learned even though such
information is not explicitly provided in the training stage. The network has
two branches: the first branch extracts an appearance feature embedding for
each sample, and the other branch predicts a quality score for each sample.
Features and quality scores of all samples in a set are then aggregated to
generate the final feature embedding. We show that the two branches can be
trained in an end-to-end manner given only the set-level identity annotation.
Analysis of the gradient flow of this mechanism indicates that the quality
learned by the network is beneficial to set-to-set recognition and simplifies
the distribution that the network needs to fit. Experiments on both face
verification and person re-identification show the advantages of the proposed QAN.
The source code and network structure can be downloaded at
https://github.com/sciencefans/Quality-Aware-Network
Comment: Accepted at CVPR 201
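The quality-weighted aggregation described above can be sketched in a few lines: per-sample embeddings are combined with normalised quality scores, so poor-quality samples contribute less to the set embedding. The function name, shapes, and softmax normalisation below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def aggregate_set(features, quality_logits):
    """Quality-weighted set aggregation, in the spirit of QAN (sketch).

    features: (n, d) per-image appearance embeddings
    quality_logits: (n,) raw quality scores from the quality branch
    Names, shapes, and the softmax weighting are illustrative assumptions.
    """
    # Normalise quality scores so the set embedding is a convex combination.
    w = np.exp(quality_logits - quality_logits.max())
    w /= w.sum()
    # The weighted sum downplays poor-quality samples in the final embedding.
    return (w[:, None] * features).sum(axis=0)

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scores = np.array([2.0, -2.0, 0.0])   # sample 0 is judged highest quality
set_embedding = aggregate_set(feats, scores)
```

Because the weights come from a softmax, the set embedding stays on the same scale as the individual embeddings regardless of the set size, which is what allows sets of varying cardinality to be compared.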
Learning to Generate and Refine Object Proposals
Visual object recognition is a fundamental and challenging
problem in computer vision. To build a practical recognition
system, one is first confronted with high computational complexity
due to the enormous search space of an image, which is caused by
large variations in object appearance, pose and mutual occlusion,
as well as other environmental factors. To reduce the search
complexity, a moderate set of image regions that are likely to
contain an object, regardless of its category, is usually first
generated in modern object recognition systems. These possible
object regions are called object proposals, object hypotheses or
object candidates, and can be used for downstream
classification or global reasoning in many different vision tasks
such as object detection, segmentation and tracking.
This thesis addresses the problem of object proposal generation,
including bounding box and segment proposal generation, in
real-world scenarios. In particular, we investigate representation
learning for object proposal generation with 3D
cues and contextual information, aiming to propose higher-quality
object candidates that achieve higher object recall and better
boundary coverage with fewer proposals. We focus on three main
issues: 1) how can we incorporate additional geometric and
high-level semantic context information into proposal
generation for stereo images? 2) how do we generate object
segment proposals for stereo images with learned representations
and a learned grouping process? and 3) how can we learn a
context-driven representation to refine segment proposals
efficiently?
In this thesis, we propose a series of solutions to address each
of these problems. We first propose a semantic context and
depth-aware object proposal generation method. We design a set of
new cues to encode objectness, and then train an efficient
random forest classifier to re-rank the initial proposals and
linear regressors to fine-tune their locations. Next, we extend
the task to segment proposal generation in the same setting
and develop a learning-based segment proposal generation method
for stereo images. Our method makes use of learned deep features
and designed geometric features to represent a region, and learns
a similarity network to guide the superpixel grouping process. We
also learn a ranking network to predict the objectness score for
each segment proposal. To address the third problem, we take a
transformation-based approach to improve the quality of a given
segment candidate pool based on context information. We propose
an efficient deep network that learns affine transformations to
warp an initial object mask towards a nearby object region, based
on a novel feature pooling strategy. Finally, we extend our
affine warping approach to address the object-mask alignment
problem, and in particular the problem of refining a set of segment
proposals. We design an end-to-end deep spatial transformer
network that learns free-form deformations (FFDs) to non-rigidly
warp the shape mask towards the ground truth, based on a
multi-level dual mask feature pooling strategy. We evaluate all
our approaches on several publicly available object recognition
datasets and show superior performance.
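The affine-warping step at the heart of the refinement approach can be illustrated with a minimal sketch: a 2x2 matrix and an offset (which the thesis would regress with a deep network from pooled features) warp a binary object mask by inverse mapping. The function name and the nearest-neighbour interpolation are assumptions for illustration only.

```python
import numpy as np

def affine_warp_mask(mask, A, t):
    """Warp a binary object mask with an affine transform (illustrative sketch).

    In the thesis, the 2x2 matrix A and offset t would be predicted by a deep
    network; here they are supplied directly. Nearest-neighbour inverse
    mapping keeps the warped mask binary.
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    # Inverse-map each output pixel back into the input mask.
    src = np.rint((coords - t) @ np.linalg.inv(A).T).astype(int)
    out = np.zeros_like(mask)
    valid = ((src[:, 0] >= 0) & (src[:, 0] < h) &
             (src[:, 1] >= 0) & (src[:, 1] < w))
    out.ravel()[valid] = mask[src[valid, 0], src[valid, 1]]
    return out

mask = np.zeros((8, 8), dtype=int)
mask[2:5, 2:5] = 1                                   # a 3x3 object region
# Identity scale with a (1, 1) translation shifts the mask down and right.
shifted = affine_warp_mask(mask, np.eye(2), np.array([1.0, 1.0]))
```

A free-form deformation, as used in the final refinement network, generalises this by giving each control point its own displacement instead of sharing one global affine transform.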
SAFE: Scale Aware Feature Encoder for Scene Text Recognition
In this paper, we address the problem of recognizing characters of different
scales in scene text recognition. We propose a novel scale aware feature
encoder (SAFE) that is designed specifically for encoding characters of
different scales. SAFE is composed of a multi-scale convolutional encoder and a
scale attention network. The multi-scale convolutional encoder targets
extracting character features under multiple scales, and the scale attention
network is responsible for selecting features from the most relevant scale(s).
SAFE has two main advantages over the traditional single-CNN encoder used in
current state-of-the-art text recognizers. First, it explicitly tackles the
scale problem by extracting scale-invariant features from the characters. This
allows the recognizer to put more effort into handling other challenges in scene
text recognition, like those caused by view distortion and poor image quality.
Second, it can transfer the learning of feature encoding across different
character scales. This is particularly important when the training set has a
very unbalanced distribution of character scales, as training with such a
dataset will make the encoder biased towards extracting features from the
predominant scale. To evaluate the effectiveness of SAFE, we design a simple
text recognizer named scale-spatial attention network (S-SAN) that employs SAFE
as its feature encoder, and carry out experiments on six public benchmarks.
Experimental results demonstrate that S-SAN can achieve state-of-the-art (or,
in some cases, extremely competitive) performance without any post-processing.
Comment: ACCV201
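The scale attention idea can be sketched as soft selection over per-scale features. Since the abstract does not give the scoring network, a simple norm-based relevance score stands in for the learned one below; the function name and shapes are assumptions.

```python
import numpy as np

def scale_attention(scale_features):
    """Soft selection over multi-scale features (sketch of the SAFE idea).

    scale_features: (s, d) -- one feature vector per scale for a character.
    The real scale attention network is learned; a norm-based relevance
    score is used here purely as an illustrative stand-in.
    """
    scores = np.linalg.norm(scale_features, axis=1)  # stand-in relevance score
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # The attended feature is a convex combination over scales, so features
    # from the most relevant scale dominate without a hard choice.
    return w @ scale_features, w

feats = np.array([[0.1, 0.1],    # scale 0: weak response
                  [3.0, 0.0],    # scale 1: strong response
                  [0.5, 0.5]])   # scale 2: moderate response
fused, weights = scale_attention(feats)
```

Soft selection is what lets the encoder share feature learning across scales: gradients flow to every scale branch in proportion to its attention weight, rather than only to a single hard-selected one.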
Multicolumn Networks for Face Recognition
The objective of this work is set-based face recognition, i.e. to decide if
two sets of images of a face are of the same person or not. Conventionally, the
set-wise feature descriptor is computed as an average of the descriptors from
individual face images within the set. In this paper, we design a neural
network architecture that learns to aggregate based on both "visual" quality
(resolution, illumination), and "content" quality (relative importance for
discriminative classification). To this end, we propose a Multicolumn Network
(MN) that takes a set of images (the number in the set can vary) as input and
learns to compute a fixed-size feature descriptor for the entire set. To
encourage high-quality representations, each individual input image is first
weighted by its "visual" quality, determined by a self-quality assessment
module, and then dynamically recalibrated based on its "content" quality
relative to the other images within the set. Both of these qualities are learnt
implicitly during training for set-wise classification. Compared with the
previous state-of-the-art architectures trained on the same dataset
(VGGFace2), our Multicolumn Networks show an improvement of 2-6% on the
IARPA IJB face recognition benchmarks, exceeding the state of the art for all
methods on these benchmarks.
Comment: To appear in BMVC201
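The two-stage weighting ("visual" quality first, then "content" recalibration relative to the rest of the set) might be sketched as follows. The similarity-to-set-mean score used here is an illustrative stand-in for the learned recalibration module, not the paper's actual method, and all names and shapes are assumptions.

```python
import numpy as np

def multicolumn_descriptor(features, visual_quality):
    """Two-stage weighted set descriptor, loosely following the MN idea.

    features: (n, d) per-image embeddings; visual_quality: (n,) in [0, 1].
    The "content" stage scores each image by similarity to the
    visual-quality-weighted set mean -- an illustrative stand-in for the
    learned dynamic recalibration.
    """
    vq = visual_quality / visual_quality.sum()
    mean = vq @ features              # visual-quality-weighted set mean
    sims = features @ mean            # content relevance w.r.t. the set
    w = np.exp(sims - sims.max())
    w /= w.sum()
    w = w * visual_quality            # combine the two quality signals
    w /= w.sum()
    return w @ features               # fixed-size descriptor, any set size

feats = np.array([[1.0, 0.0],   # sharp frontal face (hypothetical)
                  [0.9, 0.1],   # similar, slightly blurred
                  [0.0, 1.0]])  # outlier / occluded image
desc = multicolumn_descriptor(feats, np.array([0.9, 0.8, 0.1]))
```

Note that, as in the abstract, the descriptor has a fixed size regardless of how many images the set contains, since both weighting stages reduce to a single convex combination over the set.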
Scaling Speech Enhancement in Unseen Environments with Noise Embeddings
We address the problem of speech enhancement generalisation to unseen
environments by performing two manipulations. First, we embed an additional
recording from the environment alone, and use this embedding to alter
activations in the main enhancement subnetwork. Second, we scale the number of
noise environments present at training time to 16,784 different environments.
Experimental results show that both manipulations reduce the word error rate of a
pretrained speech recognition system and improve enhancement quality according
to a number of performance measures. Specifically, our best model reduces the
word error rate from 34.04% on noisy speech to 15.46% on the enhanced speech.
Enhanced audio samples can be found at
https://speechenhancement.page.link/samples
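One common way to "alter activations" with an environment embedding, as the abstract describes, is a feature-wise affine (FiLM-style) modulation. The abstract does not specify the mechanism, so this sketch is an assumption; all names, shapes, and the projection matrices are hypothetical.

```python
import numpy as np

def condition_on_noise(activations, noise_embedding, W_gamma, W_beta):
    """FiLM-style conditioning of enhancement activations (assumed scheme).

    The noise-only recording is embedded once, then projected to a
    per-channel scale and shift that modulate the enhancement subnetwork's
    activations. The additive 1.0 makes zero projections a no-op.
    """
    gamma = noise_embedding @ W_gamma   # per-channel scale from the embedding
    beta = noise_embedding @ W_beta     # per-channel shift
    return activations * (1.0 + gamma) + beta

rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 8))       # (time, channels) activations
emb = rng.standard_normal(16)            # noise-environment embedding
W_g = rng.standard_normal((16, 8)) * 0.01
W_b = rng.standard_normal((16, 8)) * 0.01
out = condition_on_noise(acts, emb, W_g, W_b)
```

A scheme like this scales naturally to many training environments (16,784 in the paper), since each environment is summarised by a single embedding vector rather than a separate model.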