Lattice path proofs of extended Bressoud-Wei and Koike skew Schur function identities
Our recent paper provides extensions to two classical determinantal results of Bressoud and Wei, and of Koike. The proofs in that paper were algebraic. The present paper contains combinatorial lattice path proofs.
Non-parametric synthesis of laminar volumetric texture
The goal of this paper is to evaluate several extensions of Wei and Levoy's algorithm for the synthesis of laminar volumetric textures constrained only by a single 2D sample. We also review, in a unified form, the improved algorithm proposed by Kopf et al. and the particular histogram matching approach of Chen and Wang. Through a quantitative study, we compare the performance of these algorithms as applied to the synthesis of volumetric structures of dense carbons. The 2D samples are lattice fringe images obtained by high-resolution transmission electron microscopy (HRTEM).
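
As a concrete reference point, here is a minimal single-resolution 2D sketch of the per-pixel neighbourhood matching at the core of Wei and Levoy's non-parametric synthesis: each output pixel is copied from the exemplar pixel whose causal neighbourhood matches best. It is brute-force and grayscale-only, with illustrative names; the real algorithm adds multi-resolution pyramids and accelerated search.

```python
import numpy as np

def synthesize(exemplar, out_shape, half=2, seed=0):
    """Greedy Wei-Levoy-style synthesis sketch: scan the output in raster
    order and copy, for each pixel, the exemplar pixel whose already-visited
    (causal) neighbourhood is the closest match. Toroidal boundaries."""
    rng = np.random.default_rng(seed)
    out = rng.choice(exemplar.ravel(), size=out_shape)   # noise initialisation
    # Causal offsets of a (2*half+1)-wide window: rows above, plus left of centre.
    offs = [(dy, dx) for dy in range(-half, 1)
            for dx in range(-half, half + 1) if (dy, dx) < (0, 0)]
    # Neighbourhood vectors for every exemplar pixel, precomputed once.
    cand = np.stack([np.roll(exemplar, (-dy, -dx), axis=(0, 1)).ravel()
                     for dy, dx in offs], axis=1).astype(float)
    src = exemplar.ravel()
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            nb = np.array([out[(y + dy) % out_shape[0], (x + dx) % out_shape[1]]
                           for dy, dx in offs], dtype=float)
            out[y, x] = src[np.argmin(((cand - nb) ** 2).sum(axis=1))]
    return out
```

Extending this to laminar volumetric synthesis amounts to running such neighbourhood matching on slices of a 3D volume against the single 2D sample, which is where the surveyed variants chiefly differ.
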
Divide and Fuse: A Re-ranking Approach for Person Re-identification
As re-ranking is a necessary procedure for boosting person re-identification
(re-ID) performance on large-scale datasets, the diversity of features becomes
crucial to person re-ID, both for designing pedestrian
descriptors and for re-ranking based on feature fusion. However, in many
circumstances, only one type of pedestrian feature is available. In this paper,
we propose a "Divide and use" re-ranking framework for person re-ID. It
exploits the diversity from different parts of a high-dimensional feature
vector for fusion-based re-ranking, while no other features are accessible.
Specifically, given an image, the extracted feature is divided into
sub-features. Then the contextual information of each sub-feature is
iteratively encoded into a new feature. Finally, the new features from the same
image are fused into one vector for re-ranking. Experimental results on two
person re-ID benchmarks demonstrate the effectiveness of the proposed
framework. Especially, our method outperforms the state-of-the-art on the
Market-1501 dataset. Comment: Accepted by BMVC 2017.
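
A rough numpy sketch of the divide-and-fuse pipeline described above. The k-nearest-neighbour mean encoding used here is a simple stand-in for the paper's iterative contextual encoding, and all parameters are illustrative:

```python
import numpy as np

def divide_and_fuse(feats, n_parts=4, k=5, iters=2):
    """feats: (N, D) query+gallery features, D divisible by n_parts.
    Divide each feature into sub-features, encode each sub-feature from its
    neighbourhood context, then fuse the encoded parts for re-ranking."""
    encoded = []
    for p in np.split(feats, n_parts, axis=1):           # divide
        p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-12)
        for _ in range(iters):                           # contextual encoding
            nbrs = np.argsort(-(p @ p.T), axis=1)[:, :k] # k nearest neighbours
            p = p[nbrs].mean(axis=1)                     # average their parts
            p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-12)
        encoded.append(p)
    fused = np.concatenate(encoded, axis=1)              # fuse
    dist = 1.0 - fused @ fused.T / n_parts               # cosine distance
    return np.argsort(dist, axis=1)                      # re-ranked neighbours
```
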
Solving Visual Madlibs with Multiple Cues
This paper focuses on answering fill-in-the-blank style multiple choice
questions from the Visual Madlibs dataset. Previous approaches to Visual
Question Answering (VQA) have mainly used generic image features from networks
trained on the ImageNet dataset, despite the wide scope of questions. In
contrast, our approach employs features derived from networks trained for
specialized tasks of scene classification, person activity prediction, and
person and object attribute prediction. We also present a method for selecting
sub-regions of an image that are relevant for evaluating the appropriateness of
a putative answer. Visual features are computed both from the whole image and
from local regions, while sentences are mapped to a common space using a simple
normalized canonical correlation analysis (CCA) model. Our results show a
significant improvement over the previous state of the art, and indicate that
answering different question types benefits from examining a variety of image
cues and carefully choosing informative image sub-regions.
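
As a sketch of the sentence-image matching step, the following fits plain linear CCA by whitening plus SVD and ranks candidate answers by cosine similarity in the shared space (one reading of "normalized CCA"). It assumes mean-centred features and illustrative dimensions, not the paper's exact setup:

```python
import numpy as np

def fit_cca(X, Y, dim=64, reg=1e-4):
    """Linear CCA on paired, mean-centred rows of X (image features) and
    Y (sentence features): whiten each view, then SVD the cross-covariance.
    Returns projections Wx (Dx, dim) and Wy (Dy, dim) into a shared space."""
    def whiten(A):
        C = A.T @ A / len(A) + reg * np.eye(A.shape[1])  # regularised covariance
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T     # C^{-1/2}, symmetric
    Wx, Wy = whiten(X), whiten(Y)
    U, _, Vt = np.linalg.svd(Wx @ (X.T @ Y / len(X)) @ Wy)
    return Wx @ U[:, :dim], Wy @ Vt[:dim].T

def score_answers(Wx, Wy, img_feat, answer_feats):
    """Cosine similarity between one projected image feature and each
    projected candidate-answer feature ('normalized' CCA matching)."""
    x = img_feat @ Wx
    ys = answer_feats @ Wy
    x = x / np.linalg.norm(x)
    ys = ys / np.linalg.norm(ys, axis=1, keepdims=True)
    return ys @ x    # higher = more appropriate answer
```
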
Deep Learning for Detecting Multiple Space-Time Action Tubes in Videos
In this work, we propose an approach to the spatiotemporal localisation
(detection) and classification of multiple concurrent actions within temporally
untrimmed videos. Our framework is composed of three stages. In stage 1,
appearance and motion detection networks are employed to localise and score
actions from colour images and optical flow. In stage 2, the appearance network
detections are boosted by combining them with the motion detection scores, in
proportion to their respective spatial overlap. In stage 3, sequences of
detection boxes most likely to be associated with a single action instance,
called action tubes, are constructed by solving two energy maximisation
problems via dynamic programming. In the first pass, action paths
spanning the whole video are built by linking detection boxes over time using
their class-specific scores and their spatial overlap; in the second pass,
temporal trimming is performed by enforcing label consistency across all
constituent detection boxes. We demonstrate the performance of our algorithm
on the challenging UCF-101, J-HMDB-21 and LIRIS-HARL datasets, achieving new
state-of-the-art results across the board and significantly increasing
detection speed at test time. In particular, we report a 20% and 11% gain in
mAP (mean average precision) on the UCF-101 and J-HMDB-21 datasets,
respectively, compared to the previous state-of-the-art. Comment: Accepted by
British Machine Vision Conference 2016.
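
The first-pass linking step lends itself to a compact dynamic-programming sketch: per-frame detection boxes are linked over time by maximising class-specific score plus spatial overlap, Viterbi-style. The overlap weight `lam` and all names are assumptions rather than the paper's exact energy:

```python
import numpy as np

def iou(a, b):
    """IoU between one box a = (x1, y1, x2, y2) and an array of boxes b (K, 4)."""
    x1, y1 = np.maximum(a[0], b[:, 0]), np.maximum(a[1], b[:, 1])
    x2, y2 = np.minimum(a[2], b[:, 2]), np.minimum(a[3], b[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    areas = (a[2] - a[0]) * (a[3] - a[1]) + (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (areas - inter + 1e-12)

def link_action_path(boxes, scores, lam=1.0):
    """boxes: list over frames of (K_t, 4) arrays; scores: list of (K_t,) arrays.
    Returns the index of the chosen box in each frame for the best path."""
    acc, back = [scores[0]], []
    for t in range(1, len(boxes)):
        trans = np.stack([iou(b, boxes[t]) for b in boxes[t - 1]])  # (K_{t-1}, K_t)
        total = acc[-1][:, None] + lam * trans + scores[t][None, :]
        back.append(total.argmax(axis=0))    # best predecessor per current box
        acc.append(total.max(axis=0))
    path = [int(acc[-1].argmax())]           # backtrace the best-scoring path
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

The abstract's second pass is analogous: a dynamic program over the found path that enforces label consistency along it to trim the action temporally.
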
Cascaded Boundary Regression for Temporal Action Detection
Temporal action detection in long videos is an important problem.
State-of-the-art methods address this problem by applying action classifiers on
sliding windows. Although sliding windows may contain an identifiable portion
of the actions, they may not necessarily cover the entire action instance,
which would lead to inferior performance. We adopt a two-stage temporal action
detection pipeline with a Cascaded Boundary Regression (CBR) model.
Class-agnostic proposals and specific actions are detected respectively in the
first and the second stage. CBR uses temporal coordinate regression to refine
the temporal boundaries of the sliding windows. The salient aspect of the
refinement process is that, inside each stage, the temporal boundaries are
adjusted in a cascaded way by feeding the refined windows back to the system
for further boundary refinement. We test CBR on THUMOS-14 and TVSeries, and
achieve state-of-the-art performance on both datasets. The performance gain is
especially remarkable under high IoU thresholds, e.g. mAP@tIoU=0.5 on THUMOS-14
is improved from 19.0% to 31.0%.
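
A minimal sketch of the cascaded refinement loop: the same boundary regressor is re-applied to the window it has just refined. The offset parameterisation (length-normalised centre shift plus log-length scale, as in standard box regression) is an assumption, and `regressor` is a hypothetical callable:

```python
import numpy as np

def cascaded_refine(window, regressor, n_steps=2):
    """Feed each refined temporal window back into the regressor, which is
    assumed to return (d_center, d_length) offsets for the current window."""
    start, end = window
    for _ in range(n_steps):
        center, length = (start + end) / 2.0, end - start
        d_center, d_length = regressor(start, end)
        center = center + d_center * length      # length-normalised centre shift
        length = length * np.exp(d_length)       # log-scale length update
        start, end = center - length / 2.0, center + length / 2.0
    return start, end

# Toy regressor that nudges any window toward the ground-truth span [40, 90]:
toy = lambda s, e: (((40 + 90) / 2 - (s + e) / 2) / (e - s), np.log((90 - 40) / (e - s)))
print(cascaded_refine((0.0, 200.0), toy))        # -> (40.0, 90.0)
```
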
Face Alignment Assisted by Head Pose Estimation
In this paper we propose a supervised initialisation scheme for cascaded face
alignment based on explicit head pose estimation. We first investigate the
failure cases of most state-of-the-art face alignment approaches and observe
that these failures often share one common global property, i.e. the head pose
variation is usually large. Inspired by this, we propose a deep convolutional
network model for reliable and accurate head pose estimation. Instead of using
a mean face shape, or randomly selected shapes for cascaded face alignment
initialisation, we propose two schemes for generating initialisation: the first
one relies on projecting a mean 3D face shape (represented by 3D facial
landmarks) onto the 2D image under the estimated head pose; the second one
searches for the nearest-neighbour shapes in the training set according to head
pose distance.
By doing so, the initialisation gets closer to the actual shape, which enhances
the possibility of convergence and in turn improves the face alignment
performance. We demonstrate the proposed method on the benchmark 300W dataset
and show very competitive performance in both head pose estimation and face
alignment. Comment: Accepted by BMVC 2015.
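
The first initialisation scheme admits a short sketch: rotate a mean 3D landmark set by the estimated head pose, then project it with a weak-perspective (scaled orthographic) camera. The Euler-angle convention and projection model are assumptions, not necessarily the paper's:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """R = Rz(roll) @ Ry(yaw) @ Rx(pitch), angles in radians (one common convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_initialised_shape(mean_shape_3d, yaw, pitch, roll, scale, t2d):
    """Project the rotated mean 3D landmarks (e.g. (68, 3)) onto the image
    plane to obtain the 2D shape that initialises cascaded alignment."""
    rotated = mean_shape_3d @ rotation_matrix(yaw, pitch, roll).T
    return scale * rotated[:, :2] + t2d    # drop depth, then scale and translate
```
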
Multispectral Deep Neural Networks for Pedestrian Detection
Multispectral pedestrian detection is essential for around-the-clock
applications, e.g., surveillance and autonomous driving. We deeply analyze
Faster R-CNN for the multispectral pedestrian detection task and then model it into
a convolutional network (ConvNet) fusion problem. Further, we discover that
ConvNet-based pedestrian detectors trained by color or thermal images
separately provide complementary information in discriminating human instances.
Thus there is a large potential to improve pedestrian detection by using color
and thermal images in DNNs simultaneously. We carefully design four ConvNet
fusion architectures that integrate two-branch ConvNets at different DNN
stages, all of which yield better performance compared with the baseline
detector. Our experimental results on KAIST pedestrian benchmark show that the
Halfway Fusion model that performs fusion on the middle-level convolutional
features outperforms the baseline method by 11% and yields a miss rate 3.5%
lower than the other proposed architectures. Comment: 13 pages, 8 figures, BMVC 2016 oral.
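
A toy PyTorch sketch of the halfway-fusion pattern: separate colour and thermal stems up to mid-level feature maps, channel-wise concatenation, then a 1x1 convolution to reduce dimension before shared layers. Layer sizes are illustrative, not the paper's VGG16-based configuration:

```python
import torch
import torch.nn as nn

class HalfwayFusion(nn.Module):
    """Two-branch ConvNet fused at the middle convolutional stage."""
    def __init__(self, ch=64):
        super().__init__()
        def stem(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.color = stem(3)                      # RGB branch
        self.thermal = stem(1)                    # single-channel thermal branch
        self.reduce = nn.Conv2d(2 * ch, ch, 1)    # 1x1 conv after concatenation
        self.shared = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, rgb, thermal):
        fused = torch.cat([self.color(rgb), self.thermal(thermal)], dim=1)
        return self.shared(self.reduce(fused))

# e.g. HalfwayFusion()(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
```
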
Deep View-Sensitive Pedestrian Attribute Inference in an end-to-end Model
Pedestrian attribute inference is a demanding problem in visual surveillance
that can facilitate person retrieval, search and indexing. To exploit semantic
relations between attributes, recent research treats it as a multi-label image
classification task. The visual cues hinting at attributes can be strongly
localized, and the inference of person attributes such as hair, backpack, shorts,
etc. is highly dependent on the acquired view of the pedestrian. In this
paper we assert this dependence in an end-to-end learning framework and show
that a view-sensitive attribute inference is able to learn better attribute
predictions. Our proposed model jointly predicts the coarse pose (view) of the
pedestrian and learns specialized view-specific multi-label attribute
predictions. We show in an extensive evaluation on three challenging datasets
(PETA, RAP and WIDER) that our proposed end-to-end view-aware attribute
prediction model provides competitive performance and improves on the published
state-of-the-art on these datasets. Comment: Accepted by BMVC 2017.
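
One reading of the joint view/attribute model, sketched in PyTorch: a shared feature feeds a coarse view classifier plus one attribute head per view, and the final attribute scores are the view-posterior-weighted sum of the view-specific predictions. The weighting scheme and dimensions are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ViewSensitiveAttributes(nn.Module):
    """Coarse view prediction gating view-specific multi-label attribute heads."""
    def __init__(self, feat_dim=512, n_views=4, n_attrs=35):
        super().__init__()
        self.view_head = nn.Linear(feat_dim, n_views)
        self.attr_heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_attrs) for _ in range(n_views)])

    def forward(self, feat):                            # feat: (B, feat_dim)
        view_logits = self.view_head(feat)
        w = torch.softmax(view_logits, dim=1)           # view posterior (B, n_views)
        per_view = torch.stack([h(feat) for h in self.attr_heads], dim=1)
        attrs = (w.unsqueeze(-1) * per_view).sum(dim=1) # weighted fusion (B, n_attrs)
        return view_logits, attrs  # e.g. CE loss on views + BCE loss on attributes
```
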
Deformable Part-based Fully Convolutional Network for Object Detection
Existing region-based object detectors are limited to regions with fixed box
geometry for representing objects, even when the objects are highly non-rectangular. In
this paper we introduce DP-FCN, a deep model for object detection which
explicitly adapts to shapes of objects with deformable parts. Without
additional annotations, it learns to focus on discriminative elements and to
align them, and simultaneously brings more invariance for classification and
geometric information to refine localization. DP-FCN is composed of three main
modules: a Fully Convolutional Network to efficiently maintain spatial
resolution, a deformable part-based RoI pooling layer to optimize positions of
parts and build invariance, and a deformation-aware localization module
explicitly exploiting displacements of parts to improve accuracy of bounding
box regression. We experimentally validate our model and show significant
gains. DP-FCN achieves state-of-the-art performance of 83.1% and 80.9% on
PASCAL VOC 2007 and 2012 with VOC data only. Comment: Accepted to BMVC 2017 (oral).
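
A simplified numpy sketch of the deformable part-based pooling idea: the RoI is divided into a grid of parts, each part may shift locally to maximise its pooled response, and the chosen displacements are returned since the deformation-aware localisation module consumes them. The full model's position-sensitive score maps and deformation penalty are omitted:

```python
import numpy as np

def deformable_part_pool(fmap, parts=3, max_shift=2):
    """fmap: 2D feature-map crop for one RoI (and one class/channel).
    Returns (parts, parts) pooled responses and the (dy, dx) displacement
    each part chose while maximising its max-pooled response."""
    H, W = fmap.shape
    ph, pw = H // parts, W // parts
    pooled = np.full((parts, parts), -np.inf)
    disp = np.zeros((parts, parts, 2), dtype=int)
    for i in range(parts):
        for j in range(parts):
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    y0, x0 = i * ph + dy, j * pw + dx
                    if 0 <= y0 and y0 + ph <= H and 0 <= x0 and x0 + pw <= W:
                        v = fmap[y0:y0 + ph, x0:x0 + pw].max()
                        if v > pooled[i, j]:
                            pooled[i, j], disp[i, j] = v, (dy, dx)
    return pooled, disp
```
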