Vision-based Situational Graphs Generating Optimizable 3D Scene Representations
3D scene graphs offer a more efficient representation of the environment by
hierarchically organizing diverse semantic entities and the topological
relationships among them. Fiducial markers, on the other hand, offer a valuable
mechanism for encoding comprehensive information pertaining to environments and
the objects within them. In the context of Visual SLAM (VSLAM), especially when
the reconstructed maps are enriched with practical semantic information, such
markers can enrich the map with valuable semantic information and foster
meaningful connections among the semantic objects. In this regard, this paper
exploits fiducial markers to build a VSLAM framework with hierarchical
representations that generates optimizable, multi-layered, vision-based
situational graphs. The framework
comprises a conventional VSLAM system with low-level feature tracking and
mapping capabilities bolstered by the incorporation of a fiducial marker map.
The fiducial markers aid in identifying walls and doors in the environment,
subsequently establishing meaningful associations with high-level entities,
including corridors and rooms. Experiments are conducted on a real-world
dataset collected using various legged robots and benchmarked against a Light
Detection And Ranging (LiDAR)-based framework (S-Graphs) as the ground truth.
Consequently, our framework not only crafts a richer, multi-layered
hierarchical map of the environment but also improves robot pose accuracy
compared with state-of-the-art methodologies.
Comment: 7 pages, 6 figures, 2 tables
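The layered structure this abstract describes (keyframes and markers at the low level, walls and doors linked to rooms and corridors above them) can be sketched as a multi-layer graph. The class and field names below are illustrative, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a situational graph: layer 0 = keyframes/markers,
    layer 1 = walls/doors, layer 2 = rooms/corridors."""
    name: str
    layer: int
    edges: list = field(default_factory=list)  # names of connected nodes

class SituationalGraph:
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, layer):
        self.nodes[name] = Node(name, layer)

    def connect(self, a, b):
        # undirected association, e.g. a wall belonging to a room
        self.nodes[a].edges.append(b)
        self.nodes[b].edges.append(a)

    def layer_nodes(self, layer):
        return [n.name for n in self.nodes.values() if n.layer == layer]

g = SituationalGraph()
g.add_node("marker_0", 0)
g.add_node("wall_A", 1)
g.add_node("room_1", 2)
g.connect("marker_0", "wall_A")   # a fiducial marker identifies a wall
g.connect("wall_A", "room_1")     # the wall is associated with a room
print(g.layer_nodes(2))  # ['room_1']
```

In the paper's actual framework these nodes and edges would carry poses and constraints and be jointly optimized; this sketch only shows the hierarchical bookkeeping.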
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Automatic reconstruction of 3D models from images using multi-view
Structure-from-Motion methods has been one of the most fruitful outcomes of
computer vision. These advances, combined with the growing popularity of Micro
Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools
ubiquitous for a large number of Architecture, Engineering and Construction
applications among audiences mostly unskilled in computer vision. However,
obtaining high-resolution and accurate reconstructions of a large-scale object
using SfM imposes many critical constraints on the quality of image data,
which often become sources of inaccuracy because current 3D reconstruction
pipelines do not let users assess the fidelity of the input data during image
acquisition. In this paper, we present and advocate a
closed-loop interactive approach that performs incremental reconstruction in
real-time and gives users online feedback on quality parameters such as
Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We
also propose a novel multi-scale camera network design to prevent scene drift
caused by incremental map building, and release the first multi-scale image
sequence dataset as a benchmark. Further, we evaluate our system on real
outdoor scenes, and show that our interactive pipeline combined with a
multi-scale camera network approach provides compelling accuracy in multi-view
reconstruction tasks when compared against the state-of-the-art methods.
Comment: 8 pages, 2015 IEEE International Conference on Robotics and
Automation (ICRA '15), Seattle, WA, US
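Ground Sampling Distance, one of the quality parameters the abstract mentions, follows directly from camera geometry for a nadir-looking camera. A minimal sketch (the sensor and flight parameters below are invented for illustration):

```python
def ground_sampling_distance(sensor_width_mm, image_width_px,
                             focal_length_mm, altitude_m):
    """GSD in metres per pixel: the ground footprint of one pixel
    for a nadir-looking camera at the given altitude."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

# e.g. a 6.17 mm wide sensor, 4000 px image width, 4.5 mm lens, flying at 30 m
gsd = ground_sampling_distance(6.17, 4000, 4.5, 30.0)
print(f"{gsd * 100:.2f} cm/px")
```

A feedback loop like the one the paper proposes would compare this value per surface region against a user-specified target and flag under-sampled areas during flight.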
MOMA: Visual Mobile Marker Odometry
In this paper, we present a cooperative odometry scheme based on the
detection of mobile markers in line with the idea of cooperative positioning
for multiple robots [1]. To this end, we introduce a simple optimization scheme
that realizes visual mobile marker odometry via accurate fixed marker-based
camera positioning and analyse the characteristics of errors inherent to the
method compared to classical fixed marker-based navigation and visual odometry.
In addition, we provide a specific UAV-UGV configuration that allows for
continuous movement of the UAV without stopping, and a minimal
caterpillar-like configuration that works with a single UGV. Finally, we
present a real-world implementation and evaluation of the proposed UAV-UGV
configuration.
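The alternating scheme described here (localize the camera against a fixed marker, then hand over to a mobile marker carried by the other robot) amounts to composing relative rigid-body transforms along a chain. A 2D sketch with invented poses, not the paper's actual estimator:

```python
import math

def compose(t1, t2):
    """Compose two 2D rigid transforms (x, y, theta): apply t1, then t2
    expressed in t1's frame."""
    x1, y1, a1 = t1
    x2, y2, a2 = t2
    return (x1 + math.cos(a1) * x2 - math.sin(a1) * y2,
            y1 + math.sin(a1) * x2 + math.cos(a1) * y2,
            a1 + a2)

# world pose of a fixed marker, camera pose relative to that marker,
# and mobile marker pose relative to the camera (all values illustrative)
world_T_fixed = (0.0, 0.0, 0.0)
fixed_T_cam = (2.0, 1.0, math.pi / 2)
cam_T_mobile = (1.0, 0.0, 0.0)

world_T_cam = compose(world_T_fixed, fixed_T_cam)
world_T_mobile = compose(world_T_cam, cam_T_mobile)
print(world_T_mobile)
```

Chaining transforms like this is also where the method's error characteristics come from: each hand-over adds the marker-detection error of that link, which is what the paper analyses against fixed-marker navigation and visual odometry.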
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid
components which are connected through one or more sliding or rotating
linkages. Examples include doors and drawers of cabinets and appliances;
laptops; and swivel office chairs. A robotic mobile manipulator would benefit
from the ability to acquire kinematic models of such objects from observation.
This paper describes a method by which a robot can acquire an object model by
capturing depth imagery of the object as a human moves it through its range of
motion. We envision that, in the future, a machine newly introduced to an
environment could be shown the articulated objects particular to that
environment by its human user, inferring from these "visual demonstrations"
enough information to actuate each object independently of the user.
Our method employs sparse (markerless) feature tracking, motion segmentation,
component pose estimation, and articulation learning; it does not require prior
object models. Using the method, a robot can observe an object being exercised,
infer a kinematic model incorporating rigid, prismatic and revolute joints,
then use the model to predict the object's motion from a novel vantage point.
We evaluate the method's performance, and compare it to that of a previously
published technique, for a variety of household objects.
Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN:
978-0-9923747-0-
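The articulation-learning step sketched in this abstract, deciding whether tracked motion is prismatic (sliding) or revolute (rotating), can be illustrated geometrically: a prismatic joint moves a point along a line, a revolute joint moves it along a circular arc. A toy 2D stand-in for that classification, not the paper's algorithm:

```python
import math

def classify_joint(traj, tol=1e-3):
    """Classify a 2D point trajectory as 'prismatic' (points lie on a
    line) or 'revolute' (points lie on a circle), else 'unknown'."""
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    # prismatic test: every point close to the line through the endpoints
    L = math.hypot(x1 - x0, y1 - y0)
    if all(abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) / L < tol
           for x, y in traj):
        return "prismatic"
    # revolute test: circumcenter of three samples, then check constant radius
    (ax, ay), (bx, by), (cx, cy) = traj[0], traj[len(traj) // 2], traj[-1]
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    r = math.hypot(ax - ux, ay - uy)
    if all(abs(math.hypot(x - ux, y - uy) - r) < tol for x, y in traj):
        return "revolute"
    return "unknown"

# a door swinging about the origin (revolute) vs a drawer sliding (prismatic)
door = [(math.cos(t), math.sin(t)) for t in (0.0, 0.4, 0.8, 1.2)]
drawer = [(0.1 * t, 0.0) for t in range(5)]
print(classify_joint(door), classify_joint(drawer))
```

The real method works on noisy, segmented 3D feature tracks and fits full kinematic models, but the same line-versus-arc distinction underlies the rigid/prismatic/revolute decision.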
UcoSLAM: Simultaneous Localization and Mapping by Fusion of KeyPoints and Squared Planar Markers
This paper proposes a novel approach for Simultaneous Localization and
Mapping by fusing natural and artificial landmarks. Most of the SLAM approaches
use natural landmarks (such as keypoints). However, these are unstable over
time, repetitive in many cases, or insufficient for robust tracking (e.g. in
indoor buildings). On the other hand, other approaches have employed artificial
landmarks (such as squared fiducial markers) placed in the environment to help
tracking and relocalization. We propose a method that integrates both
approaches in order to achieve long-term robust tracking in many scenarios.
Our method has been compared to the state-of-the-art methods ORB-SLAM2 and
LDSO on the public datasets Kitti, Euroc-MAV, TUM and SPM, obtaining better
precision, robustness and speed. Our tests also show that the combination of
markers and keypoints achieves better accuracy than each one of them
independently.
Comment: Paper submitted to Pattern Recognition
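Fusing the two landmark types amounts to minimizing a joint cost with keypoint terms and marker terms. A toy 1D illustration of such weighted least-squares fusion (the weights and observations below are invented, and the real system minimizes reprojection errors over full 6-DoF poses):

```python
def fuse(keypoint_obs, marker_obs, w_kp=1.0, w_mk=4.0):
    """Least-squares fusion of noisy 1D pose observations: the minimizer
    of sum(w_kp*(x - k)**2) + sum(w_mk*(x - m)**2) is a weighted mean.
    Marker terms get a higher weight here since markers are more stable
    landmarks than keypoints."""
    num = w_kp * sum(keypoint_obs) + w_mk * sum(marker_obs)
    den = w_kp * len(keypoint_obs) + w_mk * len(marker_obs)
    return num / den

# three keypoint-based estimates and two marker-based estimates of a pose
x = fuse([1.0, 1.2, 0.9], [1.05, 1.1])
print(round(x, 3))
```

The abstract's finding that the combination beats either landmark type alone is the expected behaviour of such a joint estimator: each term constrains the pose where the other is weak.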