Exploring Human Vision Driven Features for Pedestrian Detection
Motivated by the center-surround mechanism in the human visual attention
system, we propose to use average contrast maps for the challenge of pedestrian
detection in street scenes due to the observation that pedestrians indeed
exhibit discriminative contrast texture. Our main contributions are, first, to
design a local, statistical multi-channel descriptor in order to incorporate
both color and gradient information. Second, we introduce a multi-direction and
multi-scale contrast scheme based on grid-cells in order to integrate
expressive local variations. To address the selection of the most
discriminative features for assessment and classification, we perform extensive
comparisons w.r.t. statistical descriptors, contrast measurements, and scale
structures. This way, we obtain reasonable results under various
configurations. Empirical findings from applying our optimized detector on the
INRIA and Caltech pedestrian datasets show that our features yield
state-of-the-art performance in pedestrian detection.
Comment: Accepted for publication in IEEE Transactions on Circuits and Systems
for Video Technology (TCSVT).
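The grid-cell contrast idea in the abstract above can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's actual descriptor: it averages pixel intensity per cell and measures absolute mean differences to the four axis-aligned neighbouring cells as a crude center-surround contrast signal.

```python
import numpy as np

def grid_cell_contrast(image, cell=8):
    """Toy multi-direction contrast over grid cells (illustrative only)."""
    h, w = image.shape
    gh, gw = h // cell, w // cell
    # Per-cell mean intensity.
    means = image[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell).mean(axis=(1, 3))
    # Edge-pad so border cells compare against themselves (zero contrast there).
    p = np.pad(means, 1, mode="edge")
    # Neighbouring cells in four directions: up, down, left, right.
    dirs = [p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]]
    # Average absolute contrast across directions.
    return np.mean([np.abs(means - d) for d in dirs], axis=0)

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                   # bright square on dark background
cmap = grid_cell_contrast(img, cell=8)  # high response on the square's border cells
```

A real descriptor along these lines would also use multiple scales and channels (color, gradient), as the abstract describes.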
The Perception of Globally Coherent Motion
How do human observers perceive a coherent pattern of motion from a disparate set of local motion measures? Our research has examined how ambiguous motion signals along straight contours are spatially integrated to obtain a globally coherent perception of motion. Observers viewed displays containing a large number of apertures, with each aperture containing one or more contours whose orientations and velocities could be independently specified. The total pattern of the contour trajectories across the individual apertures was manipulated to produce globally coherent motions, such as rotations, expansions, or translations. For displays containing only straight contours extending to the circumferences of the apertures, observers' reports of global motion direction were biased whenever the sampling of contour orientations was asymmetric relative to the direction of motion. Performance was improved by the presence of identifiable features, such as line ends or crossings, whose trajectories could be tracked over time. The reports of our observers were consistent with a pooling process involving a vector average of measures of the component of velocity normal to contour orientation, rather than with the predictions of the intersection-of-constraints analysis in velocity space.
Air Force Office of Scientific Research (90-0175, 89-0016); National Science Foundation, Office of Naval Research, Air Force Office of Scientific Research (BNS-8908426)
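The vector-average pooling rule described above can be sketched numerically. This is an illustrative model, not the study's exact analysis: due to the aperture problem, each contour only contributes the component of velocity normal to its orientation, and these normal-component vectors are averaged. With symmetric orientation sampling the averaged direction matches the true translation; asymmetric sampling biases it, mirroring the reported observer behaviour.

```python
import numpy as np

def vector_average_direction(true_v, orientations_deg):
    """Average the normal components of a translation across contour orientations."""
    v = np.asarray(true_v, dtype=float)
    out = np.zeros(2)
    for theta in np.deg2rad(orientations_deg):
        n = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the contour
        out += (v @ n) * n                              # measurable normal component
    return out / len(orientations_deg)

# Symmetric sampling: pooled direction stays rightward (magnitude halved).
sym = vector_average_direction([1.0, 0.0], [0, 45, 90, 135])
# Asymmetric sampling: pooled direction acquires a vertical bias.
asym = vector_average_direction([1.0, 0.0], [30, 45, 60])
```

An intersection-of-constraints model would instead solve for the unique velocity consistent with all constraint lines, recovering the true direction even under asymmetric sampling; that contrast is the abstract's central finding.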
PIXOR: Real-time 3D Object Detection from Point Clouds
We address the problem of real-time 3D object detection from point clouds in
the context of autonomous driving. Computation speed is critical as detection
is a necessary component for safety. Existing approaches, however, are
computationally expensive due to the high dimensionality of point clouds. We utilize
the 3D data more efficiently by representing the scene from the Bird's Eye View
(BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs
oriented 3D object estimates decoded from pixel-wise neural network
predictions. The input representation, network architecture, and model
optimization are specifically designed to balance high accuracy and real-time
efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection
benchmark, and a large-scale 3D vehicle detection benchmark. In both datasets
we show that the proposed detector surpasses other state-of-the-art methods
notably in terms of Average Precision (AP), while still running at >28 FPS.
Comment: Update of CVPR 2018 paper: correct timing, fix typos, add
acknowledgement.
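The Bird's Eye View input representation mentioned above can be sketched as a simple occupancy rasterisation. This is a minimal, assumed illustration; PIXOR's actual encoding uses multiple height and reflectance channels rather than a single occupancy plane.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.5):
    """Rasterise an (N, 3) point cloud into a 2D BEV occupancy grid."""
    pts = np.asarray(points, dtype=float)
    w = int((x_range[1] - x_range[0]) / res)   # cells along x (forward)
    h = int((y_range[1] - y_range[0]) / res)   # cells along y (lateral)
    grid = np.zeros((h, w), dtype=np.float32)
    xi = ((pts[:, 0] - x_range[0]) / res).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / res).astype(int)
    keep = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    grid[yi[keep], xi[keep]] = 1.0             # mark occupied cells; z discarded
    return grid

cloud = np.array([[10.0, 0.0, -1.2],   # two nearby points fall in one cell
                  [10.2, 0.1, -0.9],
                  [100.0, 0.0, 0.0]])  # out of range, dropped
bev = points_to_bev(cloud)
```

The appeal of this representation is that a standard 2D convolutional network can then predict oriented boxes densely over the grid, avoiding expensive 3D operations.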
FoveaBox: Beyond Anchor-based Object Detector
We present FoveaBox, an accurate, flexible, and completely anchor-free
framework for object detection. While almost all state-of-the-art object
detectors utilize predefined anchors to enumerate possible locations, scales
and aspect ratios when searching for objects, their performance and
generalization ability are also limited by the design of anchors. Instead,
FoveaBox directly learns the possibility of object existence and the bounding
box coordinates without anchor reference. This is achieved by: (a) predicting
category-sensitive semantic maps for the possibility of object existence, and
(b) producing a category-agnostic bounding box for each position that potentially
contains an object. The scales of target boxes are naturally associated with
feature pyramid representations. In FoveaBox, an instance is assigned to
adjacent feature levels to make the model more accurate. We demonstrate its
effectiveness on standard benchmarks and report extensive experimental
analysis. Without bells and whistles, FoveaBox achieves state-of-the-art single
model performance on the standard COCO and Pascal VOC object detection
benchmarks. More importantly, FoveaBox avoids all computation and
hyper-parameters related to anchor boxes, which are often sensitive to the
final detection performance. We believe this simple and effective approach will
serve as a solid baseline and ease future research on object detection.
The code has been made publicly available at
https://github.com/taokong/FoveaBox .
Comment: IEEE Transactions on Image Processing, code at:
https://github.com/taokong/FoveaBox
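The anchor-free idea above can be illustrated with a generic per-position box decoding sketch. This is an assumed simplification in the spirit of anchor-free detectors, not FoveaBox's exact transform (which applies additional log-space scaling): each feature-map cell predicts distances to the four box edges, which are decoded back to image coordinates without any anchor reference.

```python
import numpy as np

def decode_boxes(offsets, stride=8):
    """Decode per-cell (l, t, r, b) edge distances into (x1, y1, x2, y2) boxes.

    offsets: array of shape (H, W, 4), distances in input-image pixels.
    """
    h, w, _ = offsets.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = (xs + 0.5) * stride          # map each feature cell back to image coords
    cy = (ys + 0.5) * stride
    l, t, r, b = np.moveaxis(offsets, -1, 0)
    return np.stack([cx - l, cy - t, cx + r, cy + b], axis=-1)

off = np.zeros((2, 2, 4))
off[0, 0] = [4, 4, 4, 4]              # a box extending 4 px from cell (0, 0)'s center
boxes = decode_boxes(off, stride=8)
```

Because every position regresses its own box directly, there are no anchor scales or aspect ratios to tune, which is the sensitivity the abstract says FoveaBox eliminates.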