Speed/accuracy trade-offs for modern convolutional object detectors
The goal of this paper is to serve as a guide for selecting a detection
architecture that achieves the right speed/memory/accuracy balance for a given
application and platform. To this end, we investigate various ways to trade
accuracy for speed and memory usage in modern convolutional object detection
systems. A number of successful systems have been proposed in recent years, but
apples-to-apples comparisons are difficult due to different base feature
extractors (e.g., VGG, Residual Networks), different default image resolutions,
as well as different hardware and software platforms. We present a unified
implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016]
and SSD [Liu et al., 2015] systems, which we view as "meta-architectures" and
trace out the speed/accuracy trade-off curve created by using alternative
feature extractors and varying other critical parameters such as image size
within each of these meta-architectures. On one extreme end of this spectrum
where speed and memory are critical, we present a detector that achieves
real-time speeds and can be deployed on a mobile device. On the opposite end, in
which accuracy is critical, we present a detector that achieves
state-of-the-art performance measured on the COCO detection task.
Comment: Accepted to CVPR 201
Distance to Center of Mass Encoding for Instance Segmentation
Instance segmentation can be considered an extension of the object
detection problem, with bounding boxes replaced by object contours.
Strictly speaking, the problem requires identifying each pixel's instance and
class independently of the means used to do so. The advantage of
instance segmentation over the usual object detection lies in the precise
delineation of objects improving object localization. Additionally, object
contours allow the evaluation of partial occlusion with basic image processing
algorithms. This work approaches the instance segmentation problem as an
annotation problem and presents a novel technique to encode and decode ground
truth annotations. We propose a mathematical representation of instances that
any deep semantic segmentation model can learn and generalize. Each individual
instance is represented by a center of mass and a field of vectors pointing to
it. This encoding technique is termed Distance to Center of Mass Encoding
(DCME).
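A minimal sketch of the encoding described above: each instance pixel stores a vector pointing to the instance's center of mass, and decoding follows each vector back to a predicted center. Function names and the array layout here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def encode_dcme(instance_mask):
    """Encode a binary instance mask as a per-pixel field of vectors
    pointing to the instance's center of mass (the DCME idea)."""
    ys, xs = np.nonzero(instance_mask)
    cy, cx = ys.mean(), xs.mean()           # center of mass of the instance
    field = np.zeros(instance_mask.shape + (2,), dtype=np.float32)
    field[ys, xs, 0] = cy - ys              # vector components toward center
    field[ys, xs, 1] = cx - xs
    return (cy, cx), field

def decode_dcme(field, mask):
    """Follow each pixel's vector to its predicted center; pixels voting
    for the same center belong to the same instance."""
    ys, xs = np.nonzero(mask)
    votes = np.stack([ys + field[ys, xs, 0], xs + field[ys, xs, 1]], axis=1)
    return np.round(votes).astype(int)      # one predicted center per pixel
```

Because any dense segmentation network can regress a two-channel vector field, this representation lets standard semantic segmentation models produce instance-level output, as the abstract claims.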
Satellite Imagery Multiscale Rapid Detection with Windowed Networks
Detecting small objects over large areas remains a significant challenge in
satellite imagery analytics. Among the challenges is the sheer number of pixels
and geographical extent per image: a single DigitalGlobe satellite image
encompasses over 64 km2 and over 250 million pixels. Another challenge is that
objects of interest are often minuscule (on the order of pixels in extent even
for the highest resolution imagery), which complicates traditional computer
vision techniques.
To address these issues, we propose a pipeline (SIMRDWN) that evaluates
satellite images of arbitrarily large size at native resolution at a rate of >
0.2 km2/s. Building upon the TensorFlow Object Detection API paper, this
pipeline offers a unified approach to multiple object detection frameworks that
can run inference on images of arbitrary size. The SIMRDWN pipeline includes a
modified version of YOLO (known as YOLT), along with the models of the
TensorFlow Object Detection API: SSD, Faster R-CNN, and R-FCN. The proposed
approach allows comparison of the performance of these four frameworks, and can
rapidly detect objects of vastly different scales with relatively little
training data over multiple sensors. For objects of very different scales (e.g.
airplanes versus airports) we find that using two different detectors at
different scales is very effective with negligible runtime cost. We evaluate
large test images at native resolution and find mAP scores of 0.2 to 0.8 for
vehicle localization, with the YOLT architecture achieving both the highest mAP
and fastest inference speed.
Comment: 8 pages, 7 figures, 2 tables, 1 appendix. arXiv admin note:
substantial text overlap with arXiv:1805.0951
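The windowed-inference idea above, i.e. evaluating an arbitrarily large image by tiling it into fixed-size windows, can be sketched as follows. The `detect_fn` interface, window size, and overlap fraction are our own illustrative assumptions, not SIMRDWN's actual API.

```python
import numpy as np

def sliding_windows(h, w, win=416, overlap=0.2):
    """Return (y0, x0) offsets tiling an h x w image with overlapping
    fixed-size windows, including a final window flush with each edge."""
    step = int(win * (1 - overlap))
    ys = list(range(0, max(h - win, 0) + 1, step))
    xs = list(range(0, max(w - win, 0) + 1, step))
    if ys[-1] != max(h - win, 0):
        ys.append(max(h - win, 0))
    if xs[-1] != max(w - win, 0):
        xs.append(max(w - win, 0))
    return [(y, x) for y in ys for x in xs]

def detect_large_image(image, detect_fn, win=416, overlap=0.2):
    """Run detect_fn (a hypothetical per-window detector returning rows of
    [y0, x0, y1, x1, score]) on each window and shift boxes back into
    global image coordinates; a global NMS would then merge duplicates
    from overlapping windows."""
    h, w = image.shape[:2]
    boxes = []
    for y0, x0 in sliding_windows(h, w, win, overlap):
        for b in detect_fn(image[y0:y0 + win, x0:x0 + win]):
            boxes.append([b[0] + y0, b[1] + x0, b[2] + y0, b[3] + x0, b[4]])
    return boxes
```

Because each window is processed at native resolution, small objects keep their full pixel footprint, while the per-window detector (YOLT, SSD, Faster R-CNN, or R-FCN in the paper's comparison) never sees an input larger than its fixed size.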