Efficient moving point handling for incremental 3D manifold reconstruction
As incremental Structure from Motion algorithms become effective, a good
sparse point cloud representing the map of the scene becomes available
frame-by-frame. From the 3D Delaunay triangulation of these points,
state-of-the-art algorithms build a rough manifold model of the scene. These
algorithms incrementally integrate new points into the 3D reconstruction only if
their position estimates do not change. Indeed, whenever a point moves in a 3D
Delaunay triangulation, for instance because its estimate gets refined, a set
of tetrahedra has to be removed and replaced with new ones to maintain the
Delaunay property; managing the manifold reconstruction thus becomes
complex and entails a potentially large overhead. In this paper we investigate
different approaches and propose an efficient policy to handle moving
points in the manifold estimation process. We tested our approach on four
sequences of the KITTI dataset and show the effectiveness of our proposal in
comparison with state-of-the-art approaches.

Comment: Accepted at the International Conference on Image Analysis and Processing (ICIAP 2015).
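The cost of a moving point can be seen with a small experiment. The sketch below is not the authors' incremental algorithm; it uses SciPy's batch Delaunay wrapper (around Qhull) to rebuild the triangulation after perturbing one point, and counts the tetrahedra incident to that vertex, i.e. the ones that would have to be removed and replaced in an incremental update:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((30, 3))          # toy sparse point cloud
tri_before = Delaunay(points)

# Simulate an SfM refinement step: the estimate of point 5 moves.
moved = points.copy()
moved[5] += 0.2
tri_after = Delaunay(moved)

# Tetrahedra incident to the moved vertex may violate the Delaunay
# property and must be removed and replaced with new ones.
incident_before = {tuple(sorted(s)) for s in tri_before.simplices if 5 in s}
incident_after = {tuple(sorted(s)) for s in tri_after.simplices if 5 in s}
print(len(incident_before), "tetrahedra touched the vertex before the move")
```

Even a small displacement typically invalidates every tetrahedron around the vertex, which is the overhead the proposed policy tries to avoid paying on every frame.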
Mesh-based 3D Textured Urban Mapping
In the era of autonomous driving, urban mapping is a core step in enabling
vehicles to interact with the urban context. Successful mapping algorithms have
been proposed in the last decade that build the map by leveraging data from a
single sensor. The focus of the system presented in this paper is twofold: the
joint estimation of a 3D map from lidar data and images, based on a 3D mesh,
and its texturing. Indeed, even if most surveying vehicles for mapping are
equipped with both cameras and lidar, existing mapping algorithms usually rely on
either images or lidar data; moreover, both image-based and lidar-based systems
often represent the map as a point cloud, while a continuous textured mesh
representation would be more useful for visualization and navigation purposes. In
the proposed framework, we combine the accuracy of the 3D lidar data with the
dense information and appearance carried by the images, estimating a
visibility-consistent map from the lidar measurements and refining it
photometrically through the acquired images. We evaluate the proposed framework
on the KITTI dataset and show the performance improvement with respect
to two state-of-the-art urban mapping algorithms and two widely used surface
reconstruction algorithms from Computer Graphics.

Comment: accepted at IROS 201
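At the heart of any lidar-plus-camera texturing pipeline is projecting 3D points (lidar returns or mesh vertices) into the image plane using the camera intrinsics and extrinsics. A minimal pinhole-projection sketch; the matrices `K` and `T_cam_lidar` below are illustrative placeholders, not KITTI calibration values:

```python
import numpy as np

def project_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 lidar points into pixel coordinates.

    T_cam_lidar: 4x4 rigid transform from the lidar to the camera frame.
    K: 3x3 camera intrinsic matrix.
    Returns (uv, mask): pixel coordinates and a mask of points in front
    of the camera (only those can receive a texture color).
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    mask = pts_cam[:, 2] > 0            # discard points behind the camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective divide
    return uv, mask

# Illustrative intrinsics: a point 1 m straight ahead of the camera
# projects onto the principal point (50, 50).
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
uv, mask = project_to_image(np.array([[0.0, 0.0, 1.0]]), np.eye(4), K)
```

Sampling the image at `uv` for each visible mesh vertex is the basic operation behind both coloring the mesh and computing the photometric residuals used for refinement.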
C-blox: A Scalable and Consistent TSDF-based Dense Mapping Approach
In many applications, maintaining a consistent dense map of the environment
is key to enabling robotic platforms to perform higher-level decision making.
Several works have addressed the challenge of creating precise dense 3D maps
from visual sensors providing depth information. However, during operation over
longer missions, reconstructions can easily become inconsistent due to
accumulated camera tracking error and delayed loop closure. Without explicitly
addressing the problem of map consistency, recovery from such distortions tends
to be difficult. We present a novel system for dense 3D mapping that addresses
the challenge of building consistent maps while remaining scalable.
Central to our approach is the representation of the environment as a
collection of overlapping TSDF subvolumes. These subvolumes are localized
through feature-based camera tracking and bundle adjustment. Our main
contribution is a pipeline for identifying stable regions in the map and
fusing the contributing subvolumes. This approach allows us to reduce map growth
while still maintaining consistency. We evaluate the proposed system on a
publicly available dataset and simulation engine, showing the efficacy
of the proposed approach for building consistent and scalable maps. Finally, we
demonstrate our approach running in real time on board a lightweight MAV.

Comment: 8 pages, 5 figures, conference
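Within each TSDF subvolume, depth observations are fused per voxel with the standard truncation-and-weighted-average update used in TSDF integration. A single-voxel sketch, with an illustrative truncation distance and unit observation weight:

```python
def fuse_voxel(tsdf, weight, sdf_obs, w_obs=1.0, trunc=0.1):
    """Fuse one signed-distance observation into a voxel.

    The observed signed distance is clamped to the truncation band
    [-trunc, trunc], then blended into the stored value as a weighted
    running average, as in standard TSDF integration.
    """
    d = max(-trunc, min(trunc, sdf_obs))
    new_weight = weight + w_obs
    new_tsdf = (tsdf * weight + d * w_obs) / new_weight
    return new_tsdf, new_weight

# Fusing a 0.05 m observation into a voxel currently at 0.0 m with weight 1
tsdf, weight = fuse_voxel(0.0, 1.0, 0.05)   # -> (0.025, 2.0)
```

Because each subvolume accumulates its own weights independently, subvolumes can be re-localized by bundle adjustment and later fused voxel-by-voxel with the same averaging rule, which is what keeps overlapping regions from growing without bound.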
Learning Latent Space Dynamics for Tactile Servoing
To achieve dexterous robotic manipulation, we need to endow our robot with
tactile feedback capability, i.e. the ability to drive action based on tactile
sensing. In this paper, we specifically address the challenge of tactile
servoing: given the current tactile sensing and a target/goal tactile
sensing, memorized from a successful task execution in the past, what is the
action that will bring the current tactile sensing closer to the
target tactile sensing at the next time step? We develop a data-driven approach
to acquiring a dynamics model for tactile servoing by learning from
demonstration. Moreover, our method represents the tactile sensing information
as lying on a surface, or 2D manifold, and performs manifold learning,
making it applicable to any tactile skin geometry. We evaluate our method on a
contact-point tracking task using a robot equipped with a tactile finger. A
video demonstrating our approach can be seen at https://youtu.be/0QK0-Vx7WkI

Comment: Accepted for publication at the International Conference on Robotics
and Automation (ICRA) 2019. The final version for publication at ICRA 2019 is
7 pages (i.e. 6 pages of technical content, including text, figures, tables,
and acknowledgements, plus 1 page of bibliography/references), while this
arXiv version is 8 pages (added an Appendix and some extra details).
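The idea of a learned latent dynamics model can be caricatured in the linear case: embed the tactile frames into a low-dimensional latent space, then fit z_{t+1} ≈ A z_t + B u_t from demonstration data. The paper learns a nonlinear model from real tactile data; the sketch below fits a linear one to synthetic data by least squares, purely to illustrate the structure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth linear latent dynamics z' = A z + B u, standing in for
# the (unknown) dynamics of a learned tactile latent space.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.5], [1.0]])

Z = rng.standard_normal((200, 2))       # latent states z_t
U = rng.standard_normal((200, 1))       # actions u_t
Z_next = Z @ A_true.T + U @ B_true.T    # next latent states z_{t+1}

# Fit [A B] jointly: z' = [z u] M, so M stacks A^T over B^T.
X = np.hstack([Z, U])
M, *_ = np.linalg.lstsq(X, Z_next, rcond=None)
A_est, B_est = M.T[:, :2], M.T[:, 2:]
```

Given such a model, servoing amounts to choosing the action u_t that moves the predicted z_{t+1} toward the target latent state memorized from the demonstration.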
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid
components connected through one or more sliding or rotating
linkages. Examples include doors and drawers of cabinets and appliances,
laptops, and swivel office chairs. A robotic mobile manipulator would benefit
from the ability to acquire kinematic models of such objects from observation.
This paper describes a method by which a robot can acquire an object model by
capturing depth imagery of the object as a human moves it through its range of
motion. We envision that, in the future, a machine newly introduced to an
environment could be shown by its human user the articulated objects particular
to that environment, inferring from these "visual demonstrations" enough
information to actuate each object independently of the user.
Our method employs sparse (markerless) feature tracking, motion segmentation,
component pose estimation, and articulation learning; it does not require prior
object models. Using the method, a robot can observe an object being exercised,
infer a kinematic model incorporating rigid, prismatic, and revolute joints,
then use the model to predict the object's motion from a novel vantage point.
We evaluate the method's performance and compare it to that of a previously
published technique on a variety of household objects.

Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN:
978-0-9923747-0-
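Once component poses are tracked, distinguishing a prismatic joint (drawer) from a revolute one (door) comes down to how much the component rotates relative to its parent over the observed motion. A minimal sketch; the angle threshold is illustrative, and the paper's full pipeline additionally detects rigid links and estimates joint axes:

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle of a 3x3 rotation matrix, recovered from its trace."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def classify_joint(rel_rotations, angle_tol=0.05):
    """Label a joint prismatic if the component barely rotates while it
    moves, revolute otherwise (hypothetical 0.05 rad tolerance)."""
    max_angle = max(rotation_angle(R) for R in rel_rotations)
    return "prismatic" if max_angle < angle_tol else "revolute"

def Rz(a):
    """Rotation about the z axis, e.g. a vertical cabinet-door hinge."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

drawer = [np.eye(3) for _ in range(10)]     # slides out, no rotation
door = [Rz(0.1 * k) for k in range(10)]     # swings open about the hinge
```

For the revolute case, the (fixed) rotation axis between consecutive poses then gives the hinge axis, while for the prismatic case the translation direction gives the sliding axis.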