SpaceNet MVOI: a Multi-View Overhead Imagery Dataset
Detection and segmentation of objects in overhead imagery is a challenging
task. The variable density, random orientation, small size, and
instance-to-instance heterogeneity of objects in overhead imagery call for
approaches distinct from existing models designed for natural scene datasets.
Though new overhead imagery datasets are being developed, they almost
universally comprise a single view taken from directly overhead ("at nadir"),
failing to address a critical variable: look angle. By contrast, views vary in
real-world overhead imagery, particularly in dynamic scenarios such as natural
disasters where first looks are often over 40 degrees off-nadir. This
represents an important challenge to computer vision methods, as changing view
angle adds distortions, alters resolution, and changes lighting. At present,
the impact of these perturbations on algorithmic detection and segmentation of
objects is untested. To address this problem, we present an open source
Multi-View Overhead Imagery dataset, termed SpaceNet MVOI, with 27 unique looks
from a broad range of viewing angles (-32.5 degrees to 54.0 degrees). Each of
these images covers the same 665 square km geographic extent, and the dataset is
annotated with 126,747 building footprint labels, enabling direct assessment of the
impact of viewpoint perturbation on model performance. We benchmark multiple
leading segmentation and object detection models on: (1) building detection,
(2) generalization to unseen viewing angles and resolutions, and (3)
sensitivity of building footprint extraction to changes in resolution. We find
that state-of-the-art segmentation and object detection models struggle to
identify buildings in off-nadir imagery and generalize poorly to unseen views,
presenting an important benchmark to explore the broadly relevant challenge of
detecting small, heterogeneous target objects in visually dynamic contexts.
Comment: Accepted into IEEE International Conference on Computer Vision (ICCV) 2019
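As a concrete illustration of the benchmark protocol above, the sketch below scores a segmentation model separately at each look angle so that viewpoint sensitivity can be read off directly; the masks are synthetic stand-ins and the angle list is an illustrative subset, not part of any SpaceNet release.

# A minimal per-look-angle evaluation sketch (not official SpaceNet tooling).
import numpy as np

def pixel_iou(pred, truth):
    """Pixel-wise IoU between two binary building-footprint masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union > 0 else 1.0

rng = np.random.default_rng(0)
look_angles = [-32.5, -10.0, 0.0, 25.0, 40.1, 54.0]  # illustrative subset of the 27 looks

scores = {}
for angle in look_angles:
    truth = rng.random((256, 256)) > 0.5            # stand-in ground-truth mask
    pred = truth ^ (rng.random((256, 256)) > 0.9)   # stand-in model output with label noise
    scores[angle] = pixel_iou(pred, truth)

# With real predictions, per-angle IoU exposes the off-nadir degradation reported above.
print({angle: round(iou, 3) for angle, iou in scores.items()})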
Lifting GIS Maps into Strong Geometric Context for Scene Understanding
Contextual information can have a substantial impact on the performance of
visual tasks such as semantic segmentation, object detection, and geometric
estimation. Data stored in Geographic Information Systems (GIS) offers a rich
source of contextual information that has been largely untapped by computer
vision. We propose to leverage such information for scene understanding by
combining GIS resources with large sets of unorganized photographs using
Structure from Motion (SfM) techniques. We present a pipeline to quickly
generate strong 3D geometric priors from 2D GIS data using SfM models aligned
with minimal user input. Given an image resectioned against this model, we
generate robust predictions of depth, surface normals, and semantic labels. We
show that the predicted geometry is substantially more accurate than that of
other single-image depth estimation methods. We then demonstrate the
utility of these contextual constraints for re-scoring pedestrian detections,
and use these GIS contextual features alongside object detection score maps to
improve a CRF-based semantic segmentation framework, boosting accuracy over
baseline models.
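One standard form of such geometric re-scoring, sketched below under stated assumptions, is to down-weight pedestrian boxes whose apparent height disagrees with the depth predicted from the GIS-aligned model; the focal length, height prior, and Gaussian weighting are illustrative choices, not necessarily the paper's exact formulation.

# Hedged sketch: geometry-consistent re-scoring of a pedestrian detection.
import math

FOCAL_PX = 1000.0        # assumed camera focal length in pixels
PERSON_HEIGHT_M = 1.7    # assumed average pedestrian height in metres

def rescore(det_score, box_height_px, depth_m, sigma=0.3):
    """Multiply the detector score by a Gaussian height-consistency term."""
    expected_px = FOCAL_PX * PERSON_HEIGHT_M / depth_m  # pinhole projection
    ratio = math.log(box_height_px / expected_px)       # 0 when perfectly consistent
    geom_weight = math.exp(-(ratio ** 2) / (2 * sigma ** 2))
    return det_score * geom_weight

# An 85 px-tall detection at a predicted depth of 20 m is exactly consistent,
# so its score is unchanged; an implausible height is suppressed.
print(rescore(0.9, box_height_px=85.0, depth_m=20.0))   # -> 0.9
print(rescore(0.9, box_height_px=300.0, depth_m=20.0))  # strongly down-weighted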
DOTA: A Large-scale Dataset for Object Detection in Aerial Images
Object detection is an important and challenging problem in computer vision.
Although the past decade has witnessed major advances in object detection in
natural scenes, such successes have been slow to transfer to aerial imagery, not only
because of the huge variation in the scale, orientation and shape of the object
instances on the earth's surface, but also due to the scarcity of
well-annotated datasets of objects in aerial scenes. To advance object
detection research in Earth Vision, also known as Earth Observation and Remote
Sensing, we introduce a large-scale Dataset for Object deTection in Aerial
images (DOTA). To this end, we collect 2,806 aerial images from different
sensors and platforms. Each image is roughly 4000-by-4000 pixels in size and
contains objects exhibiting a wide variety of scales, orientations, and shapes.
These DOTA images are then annotated by experts in aerial image interpretation
using 15 common object categories. The fully annotated DOTA images contain
188,282 instances, each of which is labeled with an arbitrary (8 d.o.f.)
quadrilateral. To build a baseline for object detection in Earth Vision, we
evaluate state-of-the-art object detection algorithms on DOTA. Experiments
demonstrate that DOTA well represents real Earth Vision applications and is
quite challenging.
Comment: Accepted to CVPR 2018
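To make the 8-d.o.f. annotation format concrete, the sketch below parses a single quadrilateral label and derives its enclosing axis-aligned box; the "x1 y1 x2 y2 x3 y3 x4 y4 category" field order mirrors common DOTA tooling but is an assumption here rather than a guarantee from the paper.

# Sketch of handling a DOTA-style quadrilateral annotation (field order assumed).
from typing import List, Tuple

Quad = List[Tuple[float, float]]

def parse_line(line: str) -> Tuple[Quad, str]:
    """Parse 'x1 y1 x2 y2 x3 y3 x4 y4 category ...' into vertices plus a label."""
    parts = line.split()
    coords = [float(v) for v in parts[:8]]
    quad = [(coords[i], coords[i + 1]) for i in range(0, 8, 2)]
    return quad, parts[8]

def quad_to_aabb(quad: Quad) -> Tuple[float, float, float, float]:
    """Smallest axis-aligned box (xmin, ymin, xmax, ymax) enclosing the quad."""
    xs, ys = zip(*quad)
    return min(xs), min(ys), max(xs), max(ys)

quad, category = parse_line("10 10 110 20 100 120 5 110 plane 0")
print(category, quad_to_aabb(quad))  # -> plane (5.0, 10.0, 110.0, 120.0)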