Georeferencing accuracy analysis of a single WorldView-3 image collected over Milan
The use of rational function models has become a standard for very high-resolution satellite imagery (VHRSI). On the other hand, the overall geolocation accuracy achievable via direct georeferencing from the on-board navigation components is much worse than the image ground sampling distance (predicted < 3.5 m CE90 for WorldView-3, whereas GSD = 0.31 m for panchromatic images at nadir). This paper presents the georeferencing accuracy results obtained from a single WorldView-3 image processed with a bias-compensated RPC camera model. Orientation results for an image collected over Milan are illustrated and discussed for both direct and indirect georeferencing strategies, as well as for different bias-correction parameters estimated from a set of ground control points. Results highlight that a correction based on two shift parameters is optimal for the considered dataset.
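The two-shift bias compensation can be illustrated with a minimal sketch (the residual values below are hypothetical; in practice the shifts are estimated by least squares from GCP reprojection residuals, which for a pure image-space shift reduces to the mean residual):

```python
import numpy as np

# Hypothetical residuals (pixels) between RPC-projected and measured GCP
# image coordinates: columns are (line, sample) offsets.
residuals = np.array([
    [3.1, -2.0],
    [3.3, -1.8],
    [2.9, -2.2],
    [3.0, -2.1],
])

# Two-parameter bias compensation: a constant shift in image space,
# estimated by least squares -- for a pure shift, the mean residual.
shift = residuals.mean(axis=0)

corrected = residuals - shift
print("estimated shift:", shift)
print("RMSE before:", np.sqrt((residuals**2).mean()))
print("RMSE after: ", np.sqrt((corrected**2).mean()))
```

Removing the constant bias leaves only the (much smaller) random component of the residuals, which is why two shift parameters already capture most of the direct-georeferencing error.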
Identification of dynamic textures using Dynamic Mode Decomposition
Dynamic Textures (DTs) are image sequences of moving scenes that exhibit stationary properties in time. In this paper, we apply Dynamic Mode Decomposition (DMD) and Dynamic Mode Decomposition with Control (DMDc) to identify a parametric model of dynamic textures. The identification results are compared with a benchmark method from the dynamic-texture literature, from both a mathematical and a computational-complexity point of view. Extensive simulations are carried out to assess the performance of the proposed algorithms for synthesis and denoising purposes, with different types of dynamic textures. Results show that DMD and DMDc yield lower error, lower residual noise, and lower variance than the benchmark approach.
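For readers unfamiliar with DMD, the core fitting step can be sketched in a few lines of NumPy (a generic exact-DMD sketch, not the paper's implementation; the toy data and optional truncation rank are illustrative):

```python
import numpy as np

def dmd(X, r=None):
    """Exact DMD: fit x_{k+1} ~ A x_k from a snapshot matrix X (n x m)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                       # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Reduced operator A_tilde = U^T X2 V S^{-1}
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)      # DMD eigenvalues
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes
    return eigvals, modes

# Toy check: snapshots generated by a known linear map are recovered.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
x = rng.normal(size=2)
snaps = [x]
for _ in range(20):
    x = A @ x
    snaps.append(x)
X = np.array(snaps).T
eigvals, _ = dmd(X)
print(np.sort(eigvals.real))  # recovers the eigenvalues 0.8 and 0.9 of A
```

For a dynamic texture, each column of `X` would be a vectorized video frame; the eigenvalues then encode the temporal dynamics and the modes the spatial structure.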
BIM from laser scans… not just for buildings: NURBS-based parametric modeling of a medieval bridge
Enhancing automatic maritime surveillance systems with visual information
Automatic surveillance systems for the maritime domain are becoming more and more important due to a constant increase in naval traffic and the simultaneous reduction of crews on decks. However, available technology still provides only limited support to this kind of application. In this paper, a modular system for intelligent maritime surveillance, capable of fusing information from heterogeneous sources, is described. The system is designed to enhance the functions of existing Vessel Traffic Services systems and to be deployable in populated areas, where radar-based systems cannot be used due to their high electromagnetic radiation emissions. A quantitative evaluation of the proposed approach has been carried out on a large, publicly available data set of images and videos, collected from multiple real sites with different light, weather, and traffic conditions.
Fisheye Photogrammetry to Survey Narrow Spaces in Architecture and a Hypogea Environment
Nowadays, the increasing computational power of commercial-grade processors has led to a wide diffusion of image-based reconstruction software and to its application in different disciplines. As a result, new frontiers in the use of photogrammetry across a broad range of investigation activities are being explored. This paper investigates the use of fisheye lenses in non-classical survey activities, together with the related problems. Fisheye lenses stand out because of their large field of view (FOV). This characteristic alone can be a game changer in reducing the amount of data required, thus speeding up the photogrammetric process when needed. Although these optics come at a cost, their FOV, speed, and manoeuvrability are key to their success, as shown by two of the presented case studies: the survey of a very narrow spiral staircase in the Duomo di Milano and the survey of a very narrow hypogeal structure in Rome. A third case study, which deals with low-cost sensors, presents the metric evaluation of a commercial spherical camera equipped with fisheye lenses.
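The reason a fisheye lens covers so much more of a narrow space can be seen by comparing projection models (an illustrative comparison of the idealized equidistant fisheye model against a rectilinear lens; the focal length is an assumption, and real lenses additionally require a calibrated distortion model):

```python
import math

# Equidistant fisheye model: image radius r = f * theta, so the radius
# grows linearly with the incidence angle. A rectilinear lens follows
# r = f * tan(theta), which diverges as the half-angle approaches 90 deg.
f = 8.0  # focal length in mm (assumed, for illustration only)

for deg in (30, 60, 85):
    theta = math.radians(deg)
    r_fisheye = f * theta
    r_rectilinear = f * math.tan(theta)
    print(f"{deg:2d} deg  fisheye r = {r_fisheye:5.1f} mm"
          f"   rectilinear r = {r_rectilinear:6.1f} mm")
```

At an 85-degree half-angle the rectilinear radius exceeds 90 mm while the fisheye radius stays under 12 mm, which is why a single fisheye frame can capture a whole staircase wall that would need many rectilinear frames.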
Counterfactual Reasoning about Intent for Interactive Navigation in Dynamic Environments
Many modern robotics applications require robots to function autonomously in dynamic environments that include other decision-making agents, such as people or other robots. This calls for fast and scalable interactive motion planning, which requires models that take the other agents' intended actions into consideration in one's own planning. We present a real-time motion planning framework that brings together several key components, including intention inference achieved by reasoning counterfactually about the potential motion of other agents as they work towards different goals. By using a lightweight motion model, we achieve efficient iterative planning for fluid motion when avoiding pedestrians, in parallel with goal inference for longer-range movement prediction. This inference framework is coupled with a novel distributed visual tracking method that provides reliable and robust models of the current belief state of the monitored environment. The combined approach represents a computationally efficient alternative to previously studied policy-learning methods, which often require significant offline training or calibration and do not yet scale to densely populated environments. We validate the framework with experiments involving multi-robot and human-robot navigation, and further validate the tracker component separately on much larger-scale unconstrained pedestrian data sets.
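The counterfactual goal-inference idea, scoring each candidate goal by how closely the observed motion matches what a goal-directed agent would have produced, can be sketched as follows (a simplified illustrative model, not the paper's planner; the goals, trajectory, and rationality parameter `beta` are all assumptions):

```python
import numpy as np

def goal_posterior(traj, goals, beta=2.0):
    """Infer P(goal | trajectory): compare the progress actually made
    toward each candidate goal with the counterfactual straight-line
    progress a goal-directed agent would have made over the same path
    length (a deliberately simplified sketch)."""
    start, current = traj[0], traj[-1]
    path_len = sum(np.linalg.norm(traj[i + 1] - traj[i])
                   for i in range(len(traj) - 1))
    scores = []
    for g in goals:
        # Net distance closed toward g; a perfectly efficient agent
        # heading for g would close exactly path_len.
        actual = np.linalg.norm(g - start) - np.linalg.norm(g - current)
        scores.append(beta * (actual - path_len))  # <= 0, max if efficient
    w = np.exp(scores)                             # softmax over goals
    return w / w.sum()

goals = [np.array([10.0, 0.0]), np.array([0.0, 10.0])]
traj = [np.array([0.0, 0.0]), np.array([2.0, 0.1]), np.array([4.0, 0.2])]
p = goal_posterior(traj, goals)
print(p)  # posterior mass concentrates on the first goal
```

An agent walking almost straight along the x-axis is nearly efficient with respect to the first goal and very inefficient with respect to the second, so the posterior strongly favors the first; in the full framework such an inference would run online, per tracked agent, alongside the motion planner.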
Predicting future agent motions for dynamic environments
Understanding the activities of people in a monitored environment is a topic of active research, motivated by applications requiring context awareness. Inferring future agent motion is useful not only for improving tracking accuracy, but also for planning in interactive motion tasks. Despite rapid advances in the area of activity forecasting, many state-of-the-art methods are still cumbersome to deploy on realistic robots. This is due to the requirement of good semantic scene and map labelling, as well as the assumptions made regarding possible goals and types of motion. Many emerging applications require robots with modest sensory and computational ability to robustly perform such activity forecasting in dense and dynamic environments. We address this by combining a novel multi-camera tracking method, efficient multi-resolution representations of state, and a standard Inverse Reinforcement Learning (IRL) technique, demonstrating performance better than the state of the art in the literature. In this framework, the IRL method uses agent trajectories from a distributed tracker and estimates a reward function within a Markov Decision Process (MDP) model. This reward function can then be used to predict the agent's motion in future novel task instances. We present empirical experiments using data gathered in our own lab and external corpora (VIRAT), based on which we find that our algorithm is not only efficiently implementable on a resource-constrained platform but also competitive in terms of accuracy with state-of-the-art alternatives (e.g., up to 20% better than the results reported in [1]).
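The MDP backbone that an IRL method relies on can be sketched with value iteration on a toy problem (a generic sketch, not the paper's formulation; the gridworld, reward placement, and discount factor are assumptions — in IRL the reward vector is the unknown, iteratively adjusted until the induced policy reproduces the demonstrated trajectories):

```python
import numpy as np

# Minimal MDP: a 1-D gridworld with states 0..4 and actions left/right.
# Given a reward vector, value iteration yields the optimal policy; an
# IRL method wraps this step, tuning `reward` to match demonstrations.
n, gamma = 5, 0.9
reward = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # assumed goal at state 4

def step(s, a):
    """Deterministic transition, clipped at the grid edges."""
    return min(max(s + a, 0), n - 1)

V = np.zeros(n)
for _ in range(100):  # value iteration to (near) convergence
    V = np.array([max(reward[step(s, a)] + gamma * V[step(s, a)]
                      for a in (-1, +1)) for s in range(n)])

# Greedy policy with respect to the converged values.
policy = [max((-1, +1),
              key=lambda a: reward[step(s, a)] + gamma * V[step(s, a)])
          for s in range(n)]
print(policy)  # every state moves right, toward the rewarded goal
```

Once a reward consistent with the tracked trajectories is found, the same value-iteration step predicts where an agent will go in a novel instance, which is how the learned reward transfers to forecasting.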