14 research outputs found
OASIS: Optimal Arrangements for Sensing in SLAM
The number and arrangement of sensors on an autonomous mobile robot
dramatically influence its perception capabilities. Ensuring that sensors are
mounted in a manner that enables accurate detection, localization, and mapping
is essential for the success of downstream control tasks. However, when
designing a new robotic platform, researchers and practitioners alike usually
mimic standard configurations or maximize simple heuristics like field-of-view
(FOV) coverage to decide where to place exteroceptive sensors. In this work, we
conduct an information-theoretic investigation of this overlooked element of
mobile robotic perception in the context of simultaneous localization and
mapping (SLAM). We show how to formalize the sensor arrangement problem as a
form of subset selection under the E-optimality performance criterion. While
this formulation is NP-hard in general, we further show that a combination of
greedy sensor selection and fast convex relaxation-based post-hoc verification
enables the efficient recovery of certifiably optimal sensor designs in
practice. Results from synthetic experiments reveal that sensors placed with
OASIS outperform benchmarks in terms of mean squared error of visual SLAM
estimates.
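The combination described above can be sketched in a few lines. The snippet below is a minimal illustration of greedy subset selection under the E-optimality criterion (maximizing the minimum eigenvalue of the summed sensor information matrices); the candidate matrices and function names are illustrative assumptions, not the paper's actual formulation, and the convex-relaxation verification step is omitted.

```python
import numpy as np

def greedy_e_optimal(candidates, k):
    """Greedily pick k sensor information matrices whose sum
    maximizes the minimum eigenvalue (E-optimality criterion)."""
    chosen, total = [], np.zeros_like(candidates[0])
    remaining = list(range(len(candidates)))
    for _ in range(k):
        # Score each remaining sensor by the min eigenvalue after adding it.
        scores = [np.linalg.eigvalsh(total + candidates[i])[0] for i in remaining]
        best = remaining[int(np.argmax(scores))]
        chosen.append(best)
        total = total + candidates[best]
        remaining.remove(best)
    return chosen, total

# Toy example: four candidate placements with 2x2 information matrices.
cands = [np.diag([4.0, 0.1]), np.diag([0.1, 4.0]),
         np.diag([1.0, 1.0]), np.diag([0.2, 0.2])]
sel, info = greedy_e_optimal(cands, 2)
```

Note that on this toy instance greedy picks the two balanced sensors rather than the complementary pair {0, 1}, whose combined minimum eigenvalue would be much larger; E-optimality is not submodular in general, which is precisely why the paper pairs greedy selection with post-hoc optimality verification.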
Attention and Anticipation in Fast Visual-Inertial Navigation
We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to
estimate its state using an on-board camera and an inertial sensor, without any
prior knowledge of the external environment. We consider the case in which the
robot can allocate limited resources to VIN, due to tight computational
constraints. Therefore, we answer the following question: under limited
resources, what are the most relevant visual cues to maximize the performance
of visual-inertial navigation? Our approach has four key ingredients. First, it
is task-driven, in that the selection of the visual cues is guided by a metric
quantifying the VIN performance. Second, it exploits the notion of
anticipation, since it uses a simplified model for forward-simulation of robot
dynamics, predicting the utility of a set of visual cues over a future time
horizon. Third, it is efficient and easy to implement, since it leads to a
greedy algorithm for the selection of the most relevant visual cues. Fourth, it
provides formal performance guarantees: we leverage submodularity to prove that
the greedy selection cannot be far from the optimal (combinatorial) selection.
Simulations and real experiments on agile drones show that our approach ensures
state-of-the-art VIN performance while maintaining a lean processing time. In
the easy scenarios, our approach outperforms appearance-based feature selection
in terms of localization errors. In the most challenging scenarios, it enables
accurate visual-inertial navigation while appearance-based feature selection
fails to track the robot's motion during aggressive maneuvers.
Comment: 20 pages, 7 figures, 2 tables
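The greedy selection with a submodularity guarantee mentioned above can be illustrated with a standard information-gain objective. The sketch below uses the log-determinant of the posterior information matrix, a common monotone submodular surrogate for estimation performance; the specific objective, matrices, and function names are assumptions for illustration, not the paper's exact metric.

```python
import numpy as np

def greedy_logdet_selection(infos, prior, k):
    """Greedily pick k visual cues maximizing the log-determinant of the
    summed information matrix; this objective is monotone submodular, so
    greedy attains at least a (1 - 1/e) fraction of the optimal gain."""
    chosen, total = [], prior.astype(float).copy()
    remaining = list(range(len(infos)))
    for _ in range(k):
        base = np.linalg.slogdet(total)[1]
        # Marginal gain of adding each remaining cue.
        gains = [np.linalg.slogdet(total + infos[i])[1] - base for i in remaining]
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        total = total + infos[best]
        remaining.remove(best)
    return chosen

# Toy example: three candidate cues, each constraining different directions.
cues = [np.diag([10.0, 0.0]), np.diag([0.0, 10.0]), np.diag([1.0, 1.0])]
sel = greedy_logdet_selection(cues, np.eye(2), 2)
```

Because the objective rewards information in directions not yet constrained, greedy selects the two complementary cues here rather than two copies of the same direction.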
Trajectory Servoing: Image-Based Trajectory Tracking without Absolute Positioning
This thesis describes an image-based visual servoing (IBVS) system that enables a non-holonomic robot to follow a trajectory accurately without real-time robot pose information and without a known visual map of the environment. We call this approach trajectory servoing. Its critical component is a feature-based, indirect SLAM method that provides a pool of available features with estimated depth and covariance, so that they can be propagated forward in time to generate image-feature trajectories with uncertainty information for visual servoing. Short- and long-distance experiments show the benefits of trajectory servoing for navigating unknown areas without absolute positioning. Trajectory servoing is shown to be more accurate than SLAM pose-based feedback and is further improved by a weighted least-squares controller that uses covariance from the underlying SLAM system.
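The covariance-weighted controller mentioned in the abstract can be sketched as a single weighted least-squares step. The interaction matrix, error values, and variances below are illustrative assumptions, not the thesis's actual system: the point is only that features with small SLAM covariance dominate the velocity command.

```python
import numpy as np

def weighted_ibvs_step(J, e, variances, gain=0.5):
    """One weighted least-squares IBVS step: solve (J^T W J) v = J^T W e,
    with W built from inverse feature variances, so uncertain features
    (large SLAM covariance) contribute less to the velocity command."""
    W = np.diag(1.0 / np.asarray(variances, dtype=float))
    v = np.linalg.solve(J.T @ W @ J, J.T @ W @ e)
    return -gain * v  # proportional control toward zero image error

# Toy example: three image-feature error rows; the third feature is far
# less certain (variance 1.0 vs 0.01) and is therefore down-weighted.
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
e = np.array([0.2, -0.1, 0.1])
cmd = weighted_ibvs_step(J, e, [0.01, 0.01, 1.0])
```

With these numbers the noisy third feature barely perturbs the solution, which stays close to what the two well-estimated features alone would command.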
Non-Lambertian Surfaces and Their Challenges in Computer Vision (original title: Epälambertilaiset pinnat ja niiden haasteet konenäössä)
This thesis examines non-Lambertian surfaces in computer vision: the challenges they pose, proposed solutions, and how they have been studied. The physical theory needed to understand the phenomenon is built first, using the Lambertian reflectance model, which defines Lambertian surfaces as ideally diffuse surfaces whose luminance is isotropic and whose luminous intensity obeys Lambert's cosine law. By these two assumptions, non-Lambertian surfaces violate at least the cosine law and are consequently specularly reflecting surfaces whose perceived brightness depends on the viewpoint. Non-Lambertian surfaces therefore also violate brightness and colour constancy, which assume that the brightness and colour of the same real-world points stay constant across images. These assumptions are used, for example, in tracking and feature matching, so non-Lambertian surfaces complicate object reconstruction and navigation, among other computer-vision tasks.
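The Lambertian model described above can be made concrete in a few lines: reflected intensity depends only on the cosine of the angle between the surface normal and the incoming light direction, and not at all on the viewing direction. The vectors and albedo below are illustrative values.

```python
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=1.0):
    """Lambert's cosine law: I = albedo * max(0, n . l).
    The result is independent of the viewing direction, which is
    exactly the assumption that specular surfaces violate."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(n @ l))

# Brightness changes only with the angle of the incoming light,
# never with the observer's viewpoint.
head_on = lambertian_intensity(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
oblique = lambertian_intensity(np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]))
```

A specular (non-Lambertian) surface would require a viewing-direction argument here, which is why the constancy assumptions used by tracking and feature matching break down on such surfaces.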
After formulating the necessary physics and a more general reflectance model, the bidirectional reflectance distribution function (BRDF), a comprehensive literature review of significant studies on non-Lambertian surfaces is conducted. The primary topics of the survey are photometric stereo and navigation systems, with other relevant areas, such as fusion methods and illumination invariance, also considered. The goal of the survey is to give a detailed, in-depth answer to which methods can solve the challenges posed by non-Lambertian surfaces, what these methods' strengths and weaknesses are, which datasets are used, and what remains open for further research. After the survey, a dataset is collected and presented, and another dataset, to be published in an upcoming paper, is outlined. Finally, the survey and the study are discussed, and conclusions along with proposed future steps are presented.