RGB-D Odometry and SLAM
The emergence of modern RGB-D sensors has had a significant impact on many
application fields, including robotics, augmented reality (AR) and 3D scanning.
They are low-cost, low-power and compact alternatives to traditional range
sensors such as LiDAR. Moreover, unlike RGB cameras, RGB-D sensors provide the
additional depth information that removes the need for frame-by-frame
triangulation for 3D scene reconstruction. These merits have made them very
popular in mobile robotics and AR, where it is of great interest to estimate
ego-motion and 3D scene structure. Such spatial understanding can enable robots
to navigate autonomously without collisions and allow users to insert virtual
entities consistent with the image stream. In this chapter, we review common
formulations of odometry and Simultaneous Localization and Mapping (known by
its acronym SLAM) using an RGB-D stream as input. The two topics are closely
related, as the former aims to track the incremental camera motion with respect
to a local map of the scene, while the latter aims to jointly estimate the
camera trajectory and a globally consistent map. In both cases, the standard
approaches minimize a cost function using nonlinear optimization techniques.
This chapter consists of three main parts: In the first part, we introduce the
basic concept of odometry and SLAM and motivate the use of RGB-D sensors. We
also give mathematical preliminaries relevant to most odometry and SLAM
algorithms. In the second part, we detail the three main components of SLAM
systems: camera pose tracking, scene mapping and loop closing. For each
component, we describe different approaches proposed in the literature. In the
final part, we provide a brief discussion of advanced research topics with
references to the state of the art.
Comment: This is the pre-submission version of the manuscript that was later
edited and published as a chapter in RGB-D Image Analysis and Processing.
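The chapter's recurring pattern of minimizing a cost function with nonlinear optimization can be illustrated with a minimal sketch. The toy 2D point-alignment problem, the Gauss-Newton loop, and all names below are illustrative assumptions, not taken from the chapter itself:

```python
import numpy as np

# Hedged sketch: a tiny Gauss-Newton loop minimizing a sum-of-squared-residuals
# cost, the basic pattern behind many odometry/SLAM back ends. Here the "pose"
# is a 2D rigid transform (rotation theta, translation tx, ty); the problem
# and all names are invented for illustration.

def transform(theta, tx, ty, pts):
    """Apply a 2D rigid transform to an (N, 2) point array."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([tx, ty])

def gauss_newton(src, dst, iters=20):
    """Estimate (theta, tx, ty) mapping src onto dst by Gauss-Newton."""
    x = np.zeros(3)  # initial guess: identity transform
    eps = 1e-6
    for _ in range(iters):
        r = (transform(*x, src) - dst).ravel()  # stacked residual vector
        # Numerical Jacobian of the residuals w.r.t. the 3 parameters.
        J = np.zeros((r.size, 3))
        for j in range(3):
            xp = x.copy(); xp[j] += eps
            J[:, j] = ((transform(*xp, src) - dst).ravel() - r) / eps
        # Normal equations: (J^T J) dx = -J^T r
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x += dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

rng = np.random.default_rng(0)
src = rng.normal(size=(10, 2))
dst = transform(0.3, 1.0, -2.0, src)  # ground-truth motion, noiseless
est = gauss_newton(src, dst)          # recovers the ground-truth parameters
```

Real systems replace the toy residual with photometric, geometric (e.g. ICP-style), or reprojection errors over 6-DoF poses, but the linearize-solve-update loop is the same.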
Decentralised particle filtering for multiple target tracking in wireless sensor networks
This paper presents algorithms for consistent joint localisation and tracking of multiple targets in wireless sensor networks under the decentralised data fusion (DDF) paradigm, where particle representations of the state posteriors are communicated. This work differs from previous work [1], [2] in that more generalised methods have been developed to account for correlated estimation errors that arise from common past information between two discrete particle sets. The particle sets are converted to continuous distributions for communication and inter-nodal fusion. Common past information is then removed by a division operation between two estimates, so that only new information is updated at the node. In previous work, the continuous distribution used was limited to a Gaussian kernel function. This new method is compared to the optimal centralised solution, in which each node sends all observation information to a central fusion node as it is received. Results presented include a real-time application of the DDF division operation on data logged from field trials.
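The divide-out-common-information step described above can be sketched in the simplest Gaussian case. This is an assumed illustration of the general DDF/channel-filter idea, not the paper's particle-based algorithm: for Gaussians, division subtracts information matrices and vectors, which removes the shared prior before fusion.

```python
import numpy as np

# Hedged sketch (not the paper's method): fusing two Gaussian approximations
# of node posteriors while dividing out their common prior, in information
# (canonical) form. All estimates and names here are invented toy values.

def to_info(mean, cov):
    """Convert (mean, covariance) to (information matrix, information vector)."""
    Y = np.linalg.inv(cov)
    return Y, Y @ mean

def ddf_fuse(est_a, est_b, common):
    """Fuse two node estimates (mean, cov), removing the shared prior `common`."""
    Ya, ya = to_info(*est_a)
    Yb, yb = to_info(*est_b)
    Yc, yc = to_info(*common)
    # Multiplication adds information; division by the common term subtracts
    # it, so the shared past information is not double-counted.
    Y = Ya + Yb - Yc
    y = ya + yb - yc
    P = np.linalg.inv(Y)
    return P @ y, P  # fused mean and covariance

# Toy 1-D example: both nodes started from the same prior N(0, 4).
prior  = (np.array([0.0]), np.array([[4.0]]))
node_a = (np.array([1.0]), np.array([[1.0]]))
node_b = (np.array([2.0]), np.array([[2.0]]))
mean, cov = ddf_fuse(node_a, node_b, prior)
```

The paper generalises this beyond a single Gaussian kernel by converting particle sets to continuous distributions before the division, but the remove-common-information principle is the same.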
Observability Analysis of Opportunistic Navigation with Pseudorange Measurements
The observability analysis of an opportunistic navigation (OpNav) environment whose states may be partially known is considered. An OpNav environment can be thought of as a radio frequency signal landscape within which a receiver locates itself in space and time by extracting information from ambient signals of opportunity (SOPs). Available SOPs may have a fully-known, partially-known, or unknown characterization. In the present work, the receiver is assumed to draw only pseudorange-type measurements from the SOP signals. Separate observations are fused to produce an estimate of the receiver's position, velocity, and time (PVT). Since not all SOP states in the OpNav environment may be known a priori, the receiver must estimate the unknown SOP states of interest simultaneously with its own PVT. The observability analysis presented here first evaluates various linear and nonlinear observability tools and identifies those that can be appropriately applied to OpNav observability. Subsequently, the appropriate tools are invoked to establish the minimal conditions under which the environment is observable. It is shown that a planar OpNav environment consisting of a receiver and n SOPs is observable if either the receiver's initial state is known or the receiver's initial position is known along with the initial state of one SOP. Simulation results are shown to agree with the theoretical observability conditions.
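The pseudorange measurement the paper relies on can be written as rho_i = ||p_r - p_si|| + c (dt_r - dt_si), i.e. the geometric range to the i-th SOP plus the clock-bias difference scaled by the speed of light. A minimal numerical sketch, with invented SOP positions and receiver state (this is not the paper's full dynamic observability analysis, just a snapshot Jacobian rank check for fully-known planar SOPs):

```python
import numpy as np

# Hedged sketch: pseudorange model rho_i = ||p_r - p_si|| + c*(dt_r - dt_si)
# and a numerical rank check on its Jacobian w.r.t. the receiver's planar
# position and clock bias. SOP positions, clocks, and the receiver state
# below are invented for illustration.

C = 299_792_458.0  # speed of light, m/s

def pseudorange(x, sop_pos, sop_clock):
    """x = [px, py, dt_r]; returns one pseudorange per SOP."""
    p_r, dt_r = x[:2], x[2]
    ranges = np.linalg.norm(sop_pos - p_r, axis=1)
    return ranges + C * (dt_r - sop_clock)

def numeric_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of f at x."""
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy(); xp[j] += eps
        J[:, j] = (f(xp) - y0) / eps
    return J

sop_pos = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # known SOPs
sop_clock = np.zeros(3)                                       # known SOP clocks
x = np.array([30.0, 40.0, 1e-6])                              # receiver state
J = numeric_jacobian(lambda s: pseudorange(s, sop_pos, sop_clock), x)
rank = np.linalg.matrix_rank(J)  # full rank: snapshot state is resolvable
```

The position columns of J are the unit line-of-sight vectors to the SOPs and the clock column is the constant c, so the rank drops only for degenerate geometry; the paper's analysis extends this intuition to dynamic states and partially-known SOPs.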
Real scale augmented reality. A novel paradigm for archaeological heritage fruition
3D contents have great potential for improving the communication and fruition of Cultural Heritage (CH). The visiting experience at an archaeological site or historical building can be improved by digital contents that help visitors discover and learn how these places once appeared. Augmented Reality (AR) is one of the technologies used today for CH exploitation, and it has the great quality of superimposing digital contents on elements of reality. This paper presents a new interactive application for the archaeological site of Forum Sempronii: thanks to SLAM technology, it allows visitors walking around the site to see several virtual reconstructions superimposed on the ruins, so that they can see how the Roman city once appeared. Great attention has been given to the creation of 3D contents: after the creation of high-poly models, they have been decimated and optimized in terms of number of polygons and textures so that they can be fluently managed on mobile devices. The fruition of real-scale contents in the real context increases the immersiveness of the user experience.
Dragoni, Aldo F.; Quattrini, Ramona; Sernani, Paolo; Ruggeri, Ludovico