Satellite Imagery Multiscale Rapid Detection with Windowed Networks
Detecting small objects over large areas remains a significant challenge in
satellite imagery analytics. Among the challenges is the sheer number of pixels
and geographical extent per image: a single DigitalGlobe satellite image
encompasses over 64 km² and over 250 million pixels. Another challenge is that
objects of interest are often minuscule (on the order of pixels in extent even
for the highest resolution imagery), which complicates traditional computer
vision techniques. To address these issues, we propose a pipeline (SIMRDWN)
that evaluates satellite images of arbitrarily large size at native resolution
at a rate of > 0.2 km²/s. Building upon the TensorFlow Object Detection API
paper, this
pipeline offers a unified approach to multiple object detection frameworks that
can run inference on images of arbitrary size. The SIMRDWN pipeline includes a
modified version of YOLO (known as YOLT), along with the models of the
TensorFlow Object Detection API: SSD, Faster R-CNN, and R-FCN. The proposed
approach allows comparison of the performance of these four frameworks, and can
rapidly detect objects of vastly different scales with relatively little
training data over multiple sensors. For objects of very different scales (e.g.
airplanes versus airports) we find that using two different detectors at
different scales is very effective with negligible runtime cost. We evaluate
large test images at native resolution and find mAP scores of 0.2 to 0.8 for
vehicle localization, with the YOLT architecture achieving both the highest mAP
and fastest inference speed.
Comment: 8 pages, 7 figures, 2 tables, 1 appendix. arXiv admin note: substantial text overlap with arXiv:1805.0951
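The windowed-evaluation idea described above can be sketched as follows. This is a minimal illustration, not the SIMRDWN implementation: the `detector` callable, window size, and overlap fraction are hypothetical placeholders standing in for a trained model and its tiling parameters.

```python
import numpy as np

def windowed_detect(image, detector, win=416, overlap=0.2):
    """Slide fixed-size windows over an arbitrarily large image,
    run the detector on each window, and map window-local boxes
    back to global pixel coordinates."""
    stride = int(win * (1 - overlap))
    h, w = image.shape[:2]
    boxes = []
    for y in range(0, max(h - win, 0) + 1, stride):
        for x in range(0, max(w - win, 0) + 1, stride):
            window = image[y:y + win, x:x + win]
            for (bx, by, bw, bh, score) in detector(window):
                # offset each window-local box into image coordinates
                boxes.append((x + bx, y + by, bw, bh, score))
    return boxes
```

Overlapping windows avoid missing objects that straddle tile boundaries; a real pipeline would also merge duplicate detections from the overlap regions (e.g. via non-maximum suppression).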
DOTA: A Large-scale Dataset for Object Detection in Aerial Images
Object detection is an important and challenging problem in computer vision.
Although the past decade has witnessed major advances in object detection in
natural scenes, such successes have been slow to transfer to aerial imagery, not only
because of the huge variation in the scale, orientation and shape of the object
instances on the earth's surface, but also due to the scarcity of
well-annotated datasets of objects in aerial scenes. To advance object
detection research in Earth Vision, also known as Earth Observation and Remote
Sensing, we introduce a large-scale Dataset for Object deTection in Aerial
images (DOTA). To this end, we collect aerial images from different
sensors and platforms. Each image is about 4000-by-4000 pixels in size and
contains objects exhibiting a wide variety of scales, orientations, and shapes.
These DOTA images are then annotated by experts in aerial image interpretation
using common object categories. The fully annotated DOTA images contain
instances, each of which is labeled by an arbitrary (8 d.o.f.) quadrilateral.
To build a baseline for object detection in Earth Vision, we
evaluate state-of-the-art object detection algorithms on DOTA. Experiments
demonstrate that DOTA well represents real Earth Vision applications and is
quite challenging.
Comment: Accepted to CVPR 201
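For illustration, an 8-d.o.f. quadrilateral label of the kind used in DOTA can be reduced to a conventional horizontal bounding box as follows. This is a hypothetical helper, not part of the DOTA toolkit; the point order of the four vertices does not matter for this reduction.

```python
def quad_to_hbb(quad):
    """Collapse an 8-d.o.f. quadrilateral [(x1,y1), ..., (x4,y4)]
    to a horizontal bounding box (xmin, ymin, xmax, ymax).
    Orientation information is lost in the process."""
    xs = [p[0] for p in quad]
    ys = [p[1] for p in quad]
    return (min(xs), min(ys), max(xs), max(ys))
```

This is exactly the information a quadrilateral annotation carries beyond an axis-aligned box: for elongated, rotated objects (ships, vehicles), the horizontal box can be much larger than the object itself.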
Object Detection in 20 Years: A Survey
Object detection, as one of the most fundamental and challenging problems in
computer vision, has received great attention in recent years. Its development
in the past two decades can be regarded as an epitome of computer vision
history. If we think of today's object detection as a technical aesthetic
under the power of deep learning, then turning back the clock 20 years we would
witness the wisdom of the cold-weapon era. This paper extensively reviews 400+
papers of object detection in the light of its technical evolution, spanning
over a quarter-century's time (from the 1990s to 2019). A number of topics have
been covered in this paper, including the milestone detectors in history,
detection datasets, metrics, fundamental building blocks of the detection
system, speed-up techniques, and the recent state-of-the-art detection methods.
This paper also reviews some important detection applications, such as
pedestrian detection, face detection, text detection, etc., and makes an
in-depth analysis of their challenges as well as technical improvements in
recent years.
Comment: This work has been submitted to the IEEE TPAMI for possible publication
Airborne photogrammetry and LIDAR for DSM extraction and 3D change detection over an urban area : a comparative study
A digital surface model (DSM) extracted from stereoscopic aerial images, acquired in March 2000, is compared with a DSM derived from airborne light detection and ranging (lidar) data collected in July 2009. Three densely built-up study areas in the city centre of Ghent, Belgium, are selected, each covering approximately 0.4 km². The surface models, generated from the two different 3D acquisition methods, are compared qualitatively and quantitatively as to what extent they are suitable for modelling an urban environment, in particular for the 3D reconstruction of buildings. Then the data sets, which are acquired at two different epochs t₁ and t₂, are investigated as to what extent 3D (building) changes can be detected and modelled over the time interval. A difference model, generated by pixel-wise subtraction of both DSMs, indicates changes in elevation. Filters are proposed to differentiate 'real' building changes from false alarms provoked by model noise, outliers, vegetation, etc. A final 3D building change model maps all demolished and newly constructed buildings within the time interval t₂ − t₁. Based on the change model, the surface and volume of the building changes can be quantified.
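The difference-model step can be sketched in a few lines. This is a toy illustration, not the study's method: the height threshold and pixel size are hypothetical parameters, and the paper's actual filters (for vegetation, outliers, model noise) are reduced here to a single absolute-height cut.

```python
import numpy as np

def change_stats(dsm_t1, dsm_t2, pixel_size=0.5, height_thresh=2.5):
    """Pixel-wise subtraction of two DSMs, followed by a height
    threshold separating plausible building changes from noise.
    Returns construction/demolition masks and changed volumes (m^3)."""
    diff = dsm_t2 - dsm_t1                 # elevation change per pixel
    built = diff > height_thresh           # candidate new construction
    demolished = diff < -height_thresh     # candidate demolition
    area = pixel_size ** 2                 # ground area of one pixel (m^2)
    vol_built = float(diff[built].sum()) * area
    vol_demolished = float(-diff[demolished].sum()) * area
    return built, demolished, vol_built, vol_demolished
```

Summing the thresholded elevation differences times the per-pixel ground area gives the volume quantification mentioned at the end of the abstract.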
Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks
Semantic labeling (or pixel-level land-cover classification) in ultra-high
resolution imagery (< 10cm) requires statistical models able to learn high
level concepts from spatial data, with large appearance variations.
Convolutional Neural Networks (CNNs) achieve this goal by learning
discriminatively a hierarchy of representations of increasing abstraction.
In this paper we present a CNN-based system relying on a
downsample-then-upsample architecture. Specifically, it first learns a rough
spatial map of high-level representations by means of convolutions and then
learns to upsample them back to the original resolution by deconvolutions. By
doing so, the CNN learns to densely label every pixel at the original
resolution of the image. This results in many advantages, including i)
state-of-the-art numerical accuracy, ii) improved geometric accuracy of
predictions and iii) high efficiency at inference time.
We test the proposed system on the Vaihingen and Potsdam sub-decimeter
resolution datasets, involving semantic labeling of aerial images of 9cm and
5cm resolution, respectively. These datasets are composed of many large and
fully annotated tiles allowing an unbiased evaluation of models making use of
spatial information. We do so by comparing two standard CNN architectures to
the proposed one: standard patch classification, prediction of local label
patches by employing only convolutions and full patch labeling by employing
deconvolutions. All the systems compare favorably to or outperform a
state-of-the-art baseline relying on superpixels and powerful appearance
descriptors. The proposed full patch labeling CNN outperforms these models by a
large margin, also showing a very appealing inference time.
Comment: Accepted in IEEE Transactions on Geoscience and Remote Sensing, 201
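The downsample-then-upsample flow can be illustrated with a toy, non-learned stand-in: mean pooling in place of the strided convolutions and nearest-neighbour repetition in place of the learned deconvolutions. In the actual system both stages are learned; this sketch only shows how spatial resolution is reduced and then restored so every pixel receives a label.

```python
import numpy as np

def downsample(x, factor=2):
    """Rough spatial map: non-overlapping mean pooling reduces
    resolution by `factor` (stand-in for strided convolutions)."""
    h, w = x.shape
    return x[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor=2):
    """Nearest-neighbour repetition restores the original resolution
    (stand-in for learned deconvolutions) so labels are dense."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)
```

Chaining `upsample(downsample(x))` reproduces the shape flow of the architecture: a coarse map of high-level representations is computed first, then brought back to the input resolution for per-pixel prediction.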