
    Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps

    Full text link
    Hyperspectral cameras can provide unique spectral signatures for consistently distinguishing materials, which can be used to solve surveillance tasks. In this paper, we propose a novel real-time hyperspectral likelihood maps-aided tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving object tracking system generally consists of registration, object detection, and tracking modules. We focus on the target detection part and remove the need to build any offline classifiers or tune a large number of hyperparameters, instead learning a generative target model in an online manner for hyperspectral channels ranging from visible to infrared wavelengths. The key idea is that our adaptive fusion method can combine likelihood maps from multiple bands of hyperspectral imagery into a single, more distinctive representation, increasing the margin between the mean values of foreground and background pixels in the fused map. Experimental results show that the HLT not only outperforms all established fusion methods but is on par with the current state-of-the-art hyperspectral target tracking frameworks. Comment: Accepted at the International Conference on Computer Vision and Pattern Recognition Workshops, 201
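    The fusion step described in this abstract lends itself to a compact illustration. The sketch below is not the paper's HLT algorithm; it only shows one plausible way, assumed here, of weighting per-band likelihood maps by their foreground/background margin and summing them into a single map. The names `fuse_likelihood_maps`, `maps`, and `fg_mask` are hypothetical.

    ```python
    import numpy as np

    def fuse_likelihood_maps(maps, fg_mask):
        """Fuse per-band likelihood maps into one map.

        Each band is weighted by the gap between the mean likelihood of foreground
        and background pixels (under the current target estimate `fg_mask`), then
        a normalized weighted sum is formed. Names and weighting are assumptions."""
        weights = []
        for m in maps:
            margin = m[fg_mask].mean() - m[~fg_mask].mean()
            weights.append(max(float(margin), 0.0))   # drop bands with no separation
        w = np.asarray(weights)
        if w.sum() == 0.0:                            # fall back to uniform weights
            w = np.ones(len(maps))
        w = w / w.sum()
        return sum(wi * m for wi, m in zip(w, maps))

    # Toy usage with synthetic likelihood maps and a rectangular foreground mask
    rng = np.random.default_rng(0)
    maps = [rng.random((64, 64)) for _ in range(5)]
    fg = np.zeros((64, 64), dtype=bool)
    fg[20:40, 20:40] = True
    fused = fuse_likelihood_maps(maps, fg)
    ```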

    Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery

    Full text link
    In this paper we discuss the potential and challenges of SAR-optical stereogrammetry for urban areas, using very-high-resolution (VHR) remote sensing imagery. Since we do this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. Then, we propose a strategy for simultaneous tie-point matching and 3D reconstruction, which exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine different established handcrafted similarity measures. For the experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible, with 3D positioning accuracies in the meter domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging. Keywords: Synthetic Aperture Radar (SAR), optical images, remote sensing, data fusion, stereogrammetry
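    As a rough illustration of matching under an epipolar-like search window constraint with combined handcrafted similarity measures, the sketch below scans a 1-D window and scores each candidate with a weighted sum of normalized cross-correlation and histogram-based mutual information. The function names, the fixed weighting `w`, and the 1-D search geometry are assumptions for illustration, not the paper's actual matching strategy.

    ```python
    import numpy as np

    def ncc(a, b):
        """Zero-mean normalized cross-correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

    def mutual_information(a, b, bins=16):
        """Histogram-based mutual information between two patches."""
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = h / h.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    def match_along_epipolar(opt_patch, sar_img, y0, x0, search=40, w=0.5):
        """Scan a 1-D epipolar-like search window in the SAR image and return the
        offset that maximizes a weighted sum of NCC and MI (weighting assumed)."""
        h, pw = opt_patch.shape
        best_score, best_dx = -np.inf, 0
        for dx in range(-search, search + 1):
            x = x0 + dx
            if x < 0 or x + pw > sar_img.shape[1]:
                continue                              # candidate leaves the image
            cand = sar_img[y0:y0 + h, x:x + pw]
            score = w * ncc(opt_patch, cand) + (1 - w) * mutual_information(opt_patch, cand)
            if score > best_score:
                best_score, best_dx = score, dx
        return best_dx, best_score
    ```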

    HeadOn: Real-time Reenactment of Human Portrait Videos

    Get PDF
    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show significant improvements in enabling much greater flexibility in creating realistic reenacted output videos. Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1
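    The view- and pose-dependent texturing idea can be sketched, very loosely, as pose-distance-weighted blending of candidate texture frames. The snippet below is only an assumed toy formulation (Gaussian falloff over head-pose distance; the names `blend_weights` and `composite` and the parameter `sigma_deg` are invented here), not the HeadOn rendering pipeline.

    ```python
    import numpy as np

    def blend_weights(target_pose, source_poses, sigma_deg=15.0):
        """Pose-dependent blending weights: source frames whose recorded head pose
        (3 rotation angles in degrees) is closer to the requested target pose get
        larger weight, with an assumed Gaussian falloff of width sigma_deg."""
        d = np.linalg.norm(np.asarray(source_poses) - np.asarray(target_pose), axis=1)
        w = np.exp(-0.5 * (d / sigma_deg) ** 2)
        return w / (w.sum() + 1e-12)

    def composite(frames, weights):
        """Weighted average of candidate texture frames (each an HxWx3 array)."""
        return np.tensordot(weights, np.stack(frames), axes=1)

    # Toy usage with random frames and poses
    rng = np.random.default_rng(0)
    frames = [rng.random((4, 4, 3)) for _ in range(3)]
    poses = [[0, 0, 0], [10, 5, 0], [30, 0, 0]]
    img = composite(frames, blend_weights([8, 4, 0], poses))
    ```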

    Joint Registration and Fusion of an Infra-Red Camera and Scanning Radar in a Maritime Context

    Get PDF
    The number of nodes in sensor networks is continually increasing, and maintaining accurate track estimates inside their common surveillance region is a critical necessity. Modern sensor platforms are likely to carry a range of different sensor modalities, all providing data at differing rates and with varying degrees of uncertainty. These factors complicate the fusion problem, as multiple observation models are required along with a dynamic prediction model. However, the problem is exacerbated when sensors are not registered correctly with respect to each other, i.e., when they are subject to a static or dynamic bias. In this case, measurements from different sensors may correspond to the same target but do not correlate with each other when transformed into the same Frame of Reference (FoR), which decreases track accuracy. This paper presents a method to jointly estimate the state of multiple targets in a surveillance region and to correctly register a radar and an Infrared Search and Track (IRST) system onto the same FoR to perform sensor fusion. Previous work using this type of parent-offspring process has been successful when calibrating a pair of cameras, but has never been attempted on a heterogeneous sensor network, nor in a maritime environment. This article presents results on both simulated scenarios and a segment of real data that show a significant increase in track quality in comparison to using incorrectly calibrated sensors or a single radar only.
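    One standard way to estimate track state and registration error together is to append the unknown sensor bias to the filter state. The sketch below is a 1-D toy with a linear Kalman filter and a static additive IRST bias; it is meant only to illustrate joint state/bias estimation and is not the parent-offspring method used in the paper. All names and parameter values are assumptions.

    ```python
    import numpy as np

    def make_augmented_filter(dt=1.0, q=0.1, r_radar=1.0, r_irst=0.5):
        """Constant-velocity Kalman filter with the IRST's unknown additive
        measurement bias appended to the state, so track state and registration
        error are estimated jointly. State: [position, velocity, irst_bias]."""
        F = np.array([[1.0, dt, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])          # the bias is modelled as static
        Q = np.diag([q, q, 1e-6])                # almost no process noise on the bias
        H_radar = np.array([[1.0, 0.0, 0.0]])    # radar observes true position
        H_irst = np.array([[1.0, 0.0, 1.0]])     # IRST observes position plus bias
        return F, Q, H_radar, H_irst, np.array([[r_radar]]), np.array([[r_irst]])

    def kf_step(x, P, z, F, Q, H, R):
        """One predict/update cycle of a linear Kalman filter."""
        x, P = F @ x, F @ P @ F.T + Q                     # predict
        S = H @ P @ H.T + R                               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
        x = x + K @ (z - H @ x)                           # update state
        P = (np.eye(len(x)) - K @ H) @ P                  # update covariance
        return x, P

    # Toy usage: radar and IRST alternately observe the same slowly moving target.
    F, Q, H_radar, H_irst, R_radar, R_irst = make_augmented_filter()
    x, P = np.zeros(3), np.eye(3) * 10.0
    x, P = kf_step(x, P, np.array([1.1]), F, Q, H_radar, R_radar)
    x, P = kf_step(x, P, np.array([2.6]), F, Q, H_irst, R_irst)   # reading includes the bias
    ```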

    MAMUD : contribution of HR satellite imagery to a better monitoring, modeling and understanding of urban dynamics

    Get PDF
    In this treatise, a methodology and results for semi-automatic city DSM extraction from an Ikonos triplet are presented. Built-up areas are known to be complex for photogrammetric purposes, partly because of the steep changes in elevation caused by buildings and other urban features. To make DSM extraction more robust and to cope with the specific problems of height displacement, concealed areas and shadow, a multi-image based approach is followed. For the VHR tri-stereoscopic study, an area extending from the centre of Istanbul to the urban fringe is chosen. Research will concentrate, in a first phase, on the development of methods to optimize the extraction of photogrammetric products from the bundled Ikonos triplet. Optimal methods need to be found to improve the radiometry and geometry of the imagery, to improve the semi-automatic derivation of DSMs and to improve the post-processing of the products. Secondly, we will also investigate the possibilities of creating stereo models out of images from the same sensor taken on a different date, e.g. one image of the stereo pair combined with the third image. Finally, the photogrammetric products derived from the Ikonos stereo pair, as well as the products created from the triplet and the constructed stereo models, will be evaluated by comparison with a 3D reference. This evaluation should show the increase in accuracy when multi-image data are used instead of stereo pairs.
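    The final comparison with a 3D reference typically reduces to statistics over height residuals. The sketch below shows one assumed form of that evaluation (signed bias, RMSE and LE90 over masked residuals); the function name and the chosen metrics are illustrative, not those used in the MAMUD study.

    ```python
    import numpy as np

    def dsm_accuracy(dsm, reference, mask=None):
        """Compare an extracted DSM with a co-registered 3D reference surface.

        Returns the mean signed error, the RMSE and the 90th-percentile absolute
        error (LE90) of the height residuals; `mask` can exclude shadowed or
        occluded pixels. Purely illustrative of the comparison described above."""
        diff = (dsm - reference) if mask is None else (dsm - reference)[mask]
        diff = diff[np.isfinite(diff)]
        return {
            "bias_m": float(diff.mean()),
            "rmse_m": float(np.sqrt((diff ** 2).mean())),
            "le90_m": float(np.percentile(np.abs(diff), 90)),
        }

    # Toy usage with a synthetic reference and a noisy, slightly biased DSM
    rng = np.random.default_rng(0)
    ref = rng.random((100, 100)) * 50.0
    dsm = ref + rng.normal(0.3, 1.2, ref.shape)
    print(dsm_accuracy(dsm, ref))
    ```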

    Message Passing and Hierarchical Models for Simultaneous Tracking and Registration

    Get PDF

    Towards improving driver situation awareness at intersections

    Full text link
    Providing safety-critical information to the driver is vital in reducing road accidents, especially at intersections. Intersections are complex to deal with due to the presence of a large number of vehicle and pedestrian activities, and possible occlusions. Information available from only the sensors on board a vehicle has limited value in this scenario. In this paper, we propose to utilize sensors on board the vehicle of interest as well as sensors mounted on nearby vehicles to enhance driver situation awareness. The resulting major research challenge of sensor registration with moving observers is solved using a mutual-information-based technique. The responses of the sensors to common causes are identified and exploited for computing their unknown relative locations. Experimental results for a mock-up traffic intersection, in which mobile robots equipped with laser range finders are used, are presented to demonstrate the efficacy of the proposed technique. ©2007 IEEE
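    The mutual-information-based registration idea can be illustrated with a brute-force search over candidate offsets between two occupancy grids built from the sensors' detections of the same scene activity. The grid representation, the integer shift search and the function names below are assumptions for illustration, not the estimator used in the paper.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=8):
        """Histogram-based mutual information between two equally sized grids."""
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = h / h.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    def register_by_mi(grid_a, grid_b, max_shift=10):
        """Search integer (dx, dy) shifts of sensor B's occupancy grid and keep the
        one that maximizes mutual information with sensor A's grid; the shared
        scene activity acts as the 'common cause' linking the two views."""
        best_mi, best_shift = -np.inf, (0, 0)
        for dx in range(-max_shift, max_shift + 1):
            for dy in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(grid_b, dx, axis=0), dy, axis=1)
                mi = mutual_information(grid_a, shifted)
                if mi > best_mi:
                    best_mi, best_shift = mi, (dx, dy)
        return best_shift, best_mi
    ```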