    Object Image Linking of Earth Orbiting Objects in the Presence of Cosmics

    In survey series for unknown Earth-orbiting objects, no a priori orbital elements are available. In surveys with wide-field telescopes, many non-resolved object images may be present on the single frames of a series. Reliable methods are needed to associate the object images stemming from the same object with each other, so-called linking. The presence of cosmic ray events, so-called cosmics, complicates reliable linking of non-resolved images. The resulting tracklets of object images allow exact positions to be extracted for a first orbit determination. A two-step method is used and tested on observation frames from space debris surveys of the ESA Space Debris Telescope, located on Tenerife, Spain. In a first step, a cosmic filter is applied to the single observation frames; four different filter approaches are compared and tested for performance. In a second step, the detected object images are linked across the observation series based on the assumption of a linearly accelerated movement of the objects over the frame during the series; the motion model is updated with every object image that is successfully linked.
    Comment: Accepted for Publication; Advances in Space Research, 201
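The second step described above (link detections under a linearly accelerated motion model that is refined with every new link) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the quadratic polynomial fit and the pixel tolerance `tol` are assumptions.

```python
import numpy as np

def predict_position(track, t):
    """Fit x(t), y(t) as quadratics (linearly accelerated motion) to the
    positions linked so far and extrapolate to epoch t."""
    times = np.array([p[0] for p in track])
    xs = np.array([p[1] for p in track])
    ys = np.array([p[2] for p in track])
    deg = min(2, len(track) - 1)          # need 3+ points to fit acceleration
    px = np.polyfit(times, xs, deg)
    py = np.polyfit(times, ys, deg)
    return np.polyval(px, t), np.polyval(py, t)

def link(track, detections, t, tol):
    """Attach the detection closest to the predicted position, if within
    tol pixels; the motion model is then refined by the new point."""
    pred = predict_position(track, t)
    best, best_d = None, tol
    for d in detections:
        dist = np.hypot(d[0] - pred[0], d[1] - pred[1])
        if dist < best_d:
            best, best_d = d, dist
    if best is not None:
        track.append((t, best[0], best[1]))   # model updated with every link
    return best
```

Cosmics that survive the single-frame filter are naturally rejected here: an isolated spurious detection rarely lies within `tol` of any tracklet's prediction.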

    Infrared Non-detection of Fomalhaut b -- Implications for the Planet Interpretation

    The nearby A4-type star Fomalhaut hosts a debris belt in the form of an eccentric ring, which is thought to be caused by the dynamical influence of a giant planet companion. In 2008, a detection of a point source inside the inner edge of the ring was reported and interpreted as a direct image of the planet, named Fomalhaut b. The detection was made at ~600--800 nm, but no corresponding signatures were found in the near-infrared range, where the bulk emission of such a planet should be expected. Here we present deep observations of Fomalhaut with Spitzer/IRAC at 4.5 um, using a novel PSF subtraction technique based on ADI and LOCI, in order to substantially improve the Spitzer contrast at small separations. The results provide more than an order of magnitude improvement in the upper flux limit of Fomalhaut b and exclude the possibility that any flux from a giant planet surface contributes to the observed flux at visible wavelengths. This renders any direct connection between the observed light source and the dynamically inferred giant planet highly unlikely. We discuss several possible interpretations of the total body of observations of the Fomalhaut system, and find that the interpretation that best matches the available data for the observed source is scattered light from a transient or semi-transient dust cloud.
    Comment: 12 pages, 4 figures, ApJ 747, 166. V2: updated acknowledgments and reference
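The ADI/LOCI-based PSF subtraction mentioned above can be illustrated with the classical median-ADI step: subtract a median PSF model from each pupil-stabilized frame, derotate the residuals to a common sky orientation, and combine. This is a generic sketch under stated assumptions (LOCI's locally optimized references are omitted), not the authors' actual Spitzer/IRAC reduction.

```python
import numpy as np
from scipy.ndimage import rotate

def adi_subtract(frames, angles):
    """Classical ADI sketch: the quasi-static stellar PSF is estimated as the
    median over the stack (where the sky rotates but the PSF does not), then
    each residual is derotated by its parallactic angle so that any real
    companion adds up coherently in the final median combination."""
    psf = np.median(frames, axis=0)                  # quasi-static PSF model
    residuals = [rotate(f - psf, -a, reshape=False, order=1)
                 for f, a in zip(frames, angles)]    # align on-sky signal
    return np.median(residuals, axis=0)
```

A companion fixed on the sky moves through the stack as the field rotates, so it largely drops out of the median PSF model; the star, fixed in the pupil frame, is removed.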

    Detect to Track and Track to Detect

    Recent approaches for high-accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame-level detections based on our across-frame tracklets to produce high-accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset, where it achieves state-of-the-art results. Our approach provides better single-model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed.
    Comment: ICCV 2017. Code and models: https://github.com/feichtenhofer/Detect-Track Results: https://www.robots.ox.ac.uk/~vgg/research/detect-track
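Contribution (ii), correlation features relating feature maps of neighbouring frames, can be sketched as a local cross-correlation layer: each spatial position in frame t is compared with a small neighbourhood in frame t+1. The neighbourhood radius `d` and the 1/C normalization below are illustrative assumptions, not the paper's exact layer.

```python
import numpy as np

def correlation_features(feat_t, feat_tp1, d=2):
    """Local correlation map between (C, H, W) feature maps of frames t and
    t+1: for each position (i, j), dot-product the feature vector in frame t
    with the vectors in a (2d+1)^2 neighbourhood of frame t+1, yielding one
    output channel per displacement."""
    C, H, W = feat_t.shape
    pad = np.pad(feat_tp1, ((0, 0), (d, d), (d, d)))
    out = np.empty(((2 * d + 1) ** 2, H, W))
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = pad[:, d + dy:d + dy + H, d + dx:d + dx + W]
            out[k] = (feat_t * shifted).sum(axis=0) / C   # per-pixel dot product
            k += 1
    return out
```

The resulting displacement-indexed channels make object motion between frames explicit, which is what aids the across-frame track regression.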

    Watch and Learn: Semi-Supervised Learning of Object Detectors from Videos

    We present a semi-supervised approach that localizes multiple unknown object instances in long videos. We start with a handful of labeled boxes and iteratively learn and label hundreds of thousands of object instances. We propose criteria for reliable object detection and tracking that constrain the semi-supervised learning process and minimize semantic drift. Our approach does not assume exhaustive labeling of each object instance in any single frame, or any explicit annotation of negative data. Working in such a generic setting allows us to tackle multiple object instances in video, many of which are static. In contrast, existing approaches either do not consider multiple object instances per video, or rely heavily on the motion of the objects present. The experiments demonstrate the effectiveness of our approach by evaluating the automatically labeled data on a variety of metrics such as quality, coverage (recall), diversity, and relevance to training an object detector.
    Comment: To appear in CVPR 201
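The iterative learn-and-label loop with reliability criteria can be sketched as self-training gated by a tracker consistency check. The `detector`/`tracker` interfaces and both thresholds below are hypothetical placeholders, not the paper's actual criteria.

```python
def watch_and_learn(frames, seed_boxes, detector, tracker, n_iters=5,
                    det_thresh=0.9, track_agreement=0.5):
    """Self-training sketch: train on the seed boxes, run the detector on
    unlabeled frames, keep only detections that are both high-scoring and
    confirmed by a tracker across neighbouring frames (limiting semantic
    drift), add them to the label pool, and retrain."""
    labels = dict(seed_boxes)                 # frame index -> list of boxes
    for _ in range(n_iters):
        detector.train(labels)
        for i, frame in enumerate(frames):
            if i in labels:
                continue                      # already labeled
            for box, score in detector.detect(frame):
                if score < det_thresh:
                    continue                  # reliability criterion: detection
                if tracker.agreement(frames, i, box) < track_agreement:
                    continue                  # reliability criterion: tracking
                labels.setdefault(i, []).append(box)
    return labels
```

Because new labels must pass both gates, a spurious detection that cannot be tracked consistently never enters the training pool, which is the mechanism for minimizing semantic drift.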

    Interactive multiple object learning with scanty human supervision

    © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
    We present a fast and online human-robot interaction approach that progressively learns multiple object classifiers using scanty human supervision. Given an input video stream recorded during the human-robot interaction, the user needs to annotate only a small fraction of frames to compute object-specific classifiers based on random ferns which share the same features. The resulting methodology is fast (in a few seconds, complex object appearances can be learned), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated with each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in indoor and outdoor scenarios containing a multitude of different objects. We show that with little human assistance, we are able to build object classifiers robust to viewpoint changes, partial occlusions, varying lighting and cluttered backgrounds. (C) 2016 Elsevier Inc. All rights reserved.
    Peer Reviewed. Postprint (author's final draft)
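The shared-feature random-fern idea — one pool of binary pixel comparisons reused by every class, with a per-class histogram per fern — can be sketched as below. All names and parameters are illustrative, and the semi-naive Bayes scoring is a standard fern formulation rather than the authors' exact code.

```python
import numpy as np

class SharedFerns:
    """Random-fern classifiers sharing binary features across all object
    classes: each fern evaluates `depth` random pixel-pair comparisons on a
    patch, and every class keeps its own posterior histogram over the
    resulting bit patterns, so adding a class adds histograms, not features."""

    def __init__(self, n_ferns, depth, patch_shape, rng=None):
        rng = rng or np.random.default_rng(0)
        n = patch_shape[0] * patch_shape[1]
        # shared features: (fern, depth, 2) flat pixel indices to compare
        self.pairs = rng.integers(0, n, size=(n_ferns, depth, 2))
        self.depth = depth
        self.hists = {}                 # class -> (n_ferns, 2**depth) counts

    def _codes(self, patch):
        flat = patch.ravel()
        bits = flat[self.pairs[:, :, 0]] > flat[self.pairs[:, :, 1]]
        return bits.dot(1 << np.arange(self.depth))   # one code per fern

    def update(self, patch, label):
        """Online learning: bump the observed bit-pattern count per fern
        (histograms initialized to 1 as a uniform Dirichlet prior)."""
        h = self.hists.setdefault(
            label, np.ones((len(self.pairs), 2 ** self.depth)))
        h[np.arange(len(self.pairs)), self._codes(patch)] += 1

    def classify(self, patch):
        """Semi-naive Bayes: sum per-fern log posteriors, return best class."""
        codes = self._codes(patch)
        idx = np.arange(len(self.pairs))
        scores = {c: np.sum(np.log(h[idx, codes] / h.sum(axis=1)))
                  for c, h in self.hists.items()}
        return max(scores, key=scores.get)
```

Because the comparisons in `self.pairs` are evaluated once per patch regardless of how many classes exist, scaling to tens of classes (as in the reported experiments) adds only histogram lookups, which is what keeps the method online and fast.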