Satellite Imagery Multiscale Rapid Detection with Windowed Networks
Detecting small objects over large areas remains a significant challenge in
satellite imagery analytics. Among the challenges is the sheer number of pixels
and geographical extent per image: a single DigitalGlobe satellite image
encompasses over 64 km2 and over 250 million pixels. Another challenge is that
objects of interest are often minuscule (~pixels in extent even for the highest
resolution imagery), which complicates traditional computer vision techniques.
To address these issues, we propose a pipeline (SIMRDWN) that evaluates
satellite images of arbitrarily large size at native resolution at a rate of >
0.2 km2/s. Building upon the tensorflow object detection API paper, this
pipeline offers a unified approach to multiple object detection frameworks that
can run inference on images of arbitrary size. The SIMRDWN pipeline includes a
modified version of YOLO (known as YOLT), along with the models of the
tensorflow object detection API: SSD, Faster R-CNN, and R-FCN. The proposed
approach allows comparison of the performance of these four frameworks, and can
rapidly detect objects of vastly different scales with relatively little
training data over multiple sensors. For objects of very different scales (e.g.
airplanes versus airports) we find that using two different detectors at
different scales is very effective with negligible runtime cost. We evaluate
large test images at native resolution and find mAP scores of 0.2 to 0.8 for
vehicle localization, with the YOLT architecture achieving both the highest mAP
and fastest inference speed.
Comment: 8 pages, 7 figures, 2 tables, 1 appendix. arXiv admin note:
substantial text overlap with arXiv:1805.0951
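The windowed evaluation described above (running a fixed-size detector over an arbitrarily large image and stitching detections back together) can be sketched as a simple tiling loop. The 416-pixel window and 20% overlap are illustrative assumptions, not SIMRDWN's actual defaults:

```python
import numpy as np

def sliding_windows(image, window=416, overlap=0.2):
    """Yield (chip, x0, y0) fixed-size tiles covering an arbitrarily large image."""
    step = max(1, int(window * (1 - overlap)))
    h, w = image.shape[:2]
    ys = list(range(0, max(h - window, 0) + 1, step))
    xs = list(range(0, max(w - window, 0) + 1, step))
    # Ensure the bottom/right edges are covered by a final clamped tile.
    if h > window and ys[-1] != h - window:
        ys.append(h - window)
    if w > window and xs[-1] != w - window:
        xs.append(w - window)
    for y0 in ys:
        for x0 in xs:
            yield image[y0:y0 + window, x0:x0 + window], x0, y0

def to_global(boxes, x0, y0):
    """Shift per-chip detections [x1, y1, x2, y2] back to global pixel coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    return boxes + np.array([x0, y0, x0, y0])
```

In practice a final non-maximum-suppression pass over the globally shifted boxes removes duplicate detections in the overlap regions.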
A Chandra Proper Motion for PSR J1809-2332
We report on a new Chandra exposure of PSR J1809-2332, the recently
discovered pulsar powering the bright EGRET source 3EG J1809-2328. By
registration of field X-ray sources in an archival exposure, we measure a
significant proper motion for the pulsar point source over an ~11 year
baseline. The shift of 0.30+/-0.06" (at PA = 153.3+/-18.4 deg) supports an
association with proposed SNR parent G7.5-1.7. Spectral analysis of diffuse
emission in the region also supports the interpretation as a hard wind nebula
trail pointing back toward the SNR.
Comment: To appear in the Astrophysical Journal, Sept 1 (v. 756
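The quoted shift converts directly into a proper motion; a quick arithmetic check, taking the ~11 yr baseline as exactly 11 yr (an assumption):

```python
# Proper motion mu = total shift / time baseline.
shift_arcsec, shift_err_arcsec = 0.30, 0.06   # measured displacement, arcsec
baseline_yr = 11.0                            # ~11 yr Chandra baseline, assumed exact

mu_mas_yr = shift_arcsec / baseline_yr * 1000.0       # milliarcsec per year
mu_err_mas_yr = shift_err_arcsec / baseline_yr * 1000.0
print(f"mu = {mu_mas_yr:.1f} +/- {mu_err_mas_yr:.1f} mas/yr")
```

This gives roughly 27 +/- 5 mas/yr, a plausible pulsar proper motion, consistent with a kick-driven displacement from the proposed SNR birth site.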
Rings and Jets around PSR J2021+3651: the `Dragonfly Nebula'
We describe recent Chandra ACIS observations of the Vela-like pulsar PSR
J2021+3651 and its pulsar wind nebula (PWN). This `Dragonfly Nebula' displays
an axisymmetric morphology, with bright inner jets, a double-ridged inner
nebula, and a ~30" polar jet. The PWN is embedded in faint diffuse emission: a
bow shock-like structure with standoff ~1' brackets the pulsar to the east and
emission trails off westward for 3-4'. Thermal (kT=0.16 +/-0.02 keV) and power
law emission are detected from the pulsar. The nebular X-rays show spectral
steepening from Gamma=1.5 in the equatorial torus to Gamma=1.9 in the outer
nebula, suggesting synchrotron burn-off. A fit to the `Dragonfly' structure
suggests a large (86 +/-1 degree) inclination with a double equatorial torus.
Vela is currently the only other PWN showing such double structure. The >12 kpc
distance implied by the pulsar dispersion measure is not supported by the X-ray
data; spectral, scale and efficiency arguments suggest a more modest 3-4 kpc.
Comment: 22 pages, 5 figures, 3 tables, accepted to Ap
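The scale argument hinges on how angular sizes map to physical ones at the two candidate distances. A minimal sketch of that conversion, using the ~1' bow-shock standoff and a representative 3.5 kpc for the 3-4 kpc range (both distances here are assumptions for illustration):

```python
ARCSEC_PER_RAD = 206265.0  # arcseconds per radian

def angular_to_physical_pc(theta_arcsec, d_kpc):
    """Physical extent (pc) of a feature with angular size theta at distance d."""
    return theta_arcsec / ARCSEC_PER_RAD * d_kpc * 1000.0

# The ~1' (60") standoff at the two candidate distances:
standoff_near = angular_to_physical_pc(60.0, 3.5)   # ~1 pc
standoff_far = angular_to_physical_pc(60.0, 12.0)   # ~3.5 pc
```

A ~1 pc standoff at 3-4 kpc is typical of known bow-shock PWNe, whereas the >12 kpc dispersion-measure distance would imply an unusually large nebula.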
SpaceNet MVOI: a Multi-View Overhead Imagery Dataset
Detection and segmentation of objects in overhead imagery is a challenging
task. The variable density, random orientation, small size, and
instance-to-instance heterogeneity of objects in overhead imagery call for
approaches distinct from existing models designed for natural scene datasets.
Though new overhead imagery datasets are being developed, they almost
universally comprise a single view taken from directly overhead ("at nadir"),
failing to address a critical variable: look angle. By contrast, views vary in
real-world overhead imagery, particularly in dynamic scenarios such as natural
disasters where first looks are often over 40 degrees off-nadir. This
represents an important challenge to computer vision methods, as changing view
angle adds distortions, alters resolution, and changes lighting. At present,
the impact of these perturbations for algorithmic detection and segmentation of
objects is untested. To address this problem, we present an open source
Multi-View Overhead Imagery dataset, termed SpaceNet MVOI, with 27 unique looks
from a broad range of viewing angles (-32.5 degrees to 54.0 degrees). Each of
these images covers the same 665 square km geographic extent and is annotated
with 126,747 building footprint labels, enabling direct assessment of the
impact of viewpoint perturbation on model performance. We benchmark multiple
leading segmentation and object detection models on: (1) building detection,
(2) generalization to unseen viewing angles and resolutions, and (3)
sensitivity of building footprint extraction to changes in resolution. We find
that state-of-the-art segmentation and object detection models struggle to
identify buildings in off-nadir imagery and generalize poorly to unseen views,
presenting an important benchmark to explore the broadly relevant challenge of
detecting small, heterogeneous target objects in visually dynamic contexts.
Comment: Accepted into IEEE International Conference on Computer Vision (ICCV)
201
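The resolution change with look angle mentioned above can be made concrete with a standard flat-earth, constant-altitude approximation: the ground footprint of a pixel stretches as 1/cos^2(theta) along the look direction (slant range times surface obliquity) and as 1/cos(theta) across it. The 0.5 m nadir GSD below is a nominal figure for illustration, not a dataset specification:

```python
import math

def off_nadir_gsd(gsd_nadir_m, look_angle_deg):
    """Approximate (along-look, across-look) GSD at a given off-nadir angle.

    Flat-earth, constant-altitude model: along-look GSD scales as
    1/cos^2(theta), across-look GSD as 1/cos(theta).
    """
    c = math.cos(math.radians(look_angle_deg))
    return gsd_nadir_m / c**2, gsd_nadir_m / c

# A nominal 0.5 m nadir GSD at the dataset's extreme 54-degree look angle:
along, across = off_nadir_gsd(0.5, 54.0)
```

At 54 degrees off-nadir the along-look GSD roughly triples relative to nadir, which is one concrete mechanism behind the generalization failures the benchmark exposes.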