S-AVE Semantic Active Vision Exploration and Mapping of Indoor Environments for Mobile Robots
Semantic mapping is fundamental to enabling cognition and high-level planning in robotics. It is a difficult task because methods must generalize across different scenarios and sensory data types; as a result, most existing techniques do not obtain a rich and accurate semantic map of the environment and of the objects within it. To tackle this issue we present a novel approach that exploits active vision, driving environment exploration so as to improve the quality of the semantic map.
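The abstract does not detail how exploration is driven, but active-vision approaches of this kind commonly score candidate viewpoints by expected information gain. The sketch below is a hypothetical illustration, not the paper's method: it assumes a per-voxel semantic-label entropy map and user-supplied `visible_fn` and `travel_cost` helpers (all names are ours):

```python
def information_gain(candidate, voxel_entropy, visible_fn):
    """Sum semantic-label entropy over voxels visible from a candidate view."""
    return sum(voxel_entropy[v] for v in visible_fn(candidate))

def next_best_view(candidates, voxel_entropy, visible_fn, travel_cost):
    """Pick the viewpoint maximizing entropy reduction per unit of travel.

    The +1.0 in the denominator is an arbitrary regularizer so that a
    zero-cost candidate does not dominate by division alone.
    """
    return max(
        candidates,
        key=lambda c: information_gain(c, voxel_entropy, visible_fn)
                      / (1.0 + travel_cost(c)),
    )
```

Any monotone trade-off between gain and cost would do here; the ratio form is just one common choice.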
Autonomous 3D Exploration of Large Structures Using an UAV Equipped with a 2D LIDAR
This paper addresses the challenge of exploring large, unknown, and unstructured
industrial environments with an unmanned aerial vehicle (UAV). The resulting system combines
well-known components and techniques with a new manoeuvre that uses a low-cost 2D laser to measure
a 3D structure. Our approach combines frontier-based exploration, the Lazy Theta* path planner, and
a flyby sampling manoeuvre to create a 3D map of large scenarios. One of the novelties of our system
is that all the algorithms rely on the multi-resolution octomap for the world representation.
We used a Hardware-in-the-Loop (HitL) simulation environment to collect accurate measurements
of the capability of the open-source system to run online and on-board the UAV in real time. Our
approach is compared against different reference heuristics in this simulation environment, showing
better performance with regard to the amount of explored space. With the proposed approach, the UAV
is able to explore 93% of the search space in under 30 min, generating a path without repetition that
adjusts to the occupied space, covering indoor locations, irregular structures, and suspended obstacles.
Unión Europea Marie Sklodowska-Curie 64215. Unión Europea MULTIDRONE (H2020-ICT-731667). Unión Europea HYFLIERS (H2020-ICT-779411).
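Frontier-based exploration, which the system above builds on, selects goals on the boundary between known-free and unknown space. A minimal sketch on a 2D occupancy grid (the actual system operates on a 3D octomap; the grid, cell labels, and 4-connectivity here are illustrative assumptions):

```python
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontiers(grid):
    """Return frontier cells: FREE cells with at least one UNKNOWN 4-neighbour.

    `grid` is a list of rows of cell labels. Frontiers are the natural goal
    candidates, since reaching them is guaranteed to reveal unknown space.
    """
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```

Exploration then loops: pick a frontier (e.g. nearest, or highest expected gain), plan a path to it, fly, update the map, and recompute frontiers until none remain.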
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
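The "de-facto standard formulation" the survey refers to is maximum a posteriori estimation over a factor graph. In the notation commonly used for it (symbols follow the usual convention and, with Gaussian noise assumed, may differ slightly from the paper's exact presentation):

```latex
X^{\star} = \operatorname*{arg\,max}_{X} \; p(X \mid Z)
          = \operatorname*{arg\,min}_{X} \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}
```

Here $X$ stacks the robot trajectory and map variables, $z_k$ are the measurements, $h_k$ the corresponding measurement models acting on the subset $X_k$ of variables, and $\Omega_k$ the measurement information matrices; the nonlinear least-squares problem is typically solved with Gauss-Newton or Levenberg-Marquardt methods on the factor graph.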
From Monocular SLAM to Autonomous Drone Exploration
Micro aerial vehicles (MAVs) are strongly limited in their payload and power
capacity. In order to implement autonomous navigation, algorithms are therefore
desirable that use sensory equipment that is as small, low-weight, and
low-power consuming as possible. In this paper, we propose a method for
autonomous MAV navigation and exploration using a low-cost consumer-grade
quadrocopter equipped with a monocular camera. Our vision-based navigation
system builds on LSD-SLAM which estimates the MAV trajectory and a semi-dense
reconstruction of the environment in real-time. Since LSD-SLAM only determines
depth at high gradient pixels, texture-less areas are not directly observed so
that previous exploration methods that assume dense map information cannot
directly be applied. We propose an obstacle mapping and exploration approach
that takes the properties of our semi-dense monocular SLAM system into account.
In experiments, we demonstrate our vision-based autonomous navigation and
exploration system with a Parrot Bebop MAV.
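The key constraint above is that a semi-dense system like LSD-SLAM only recovers depth at high-gradient pixels, so texture-less regions must be treated as unknown rather than free. A hypothetical per-pixel labelling sketch (the threshold, clearance radius, and function names are our assumptions, not the paper's):

```python
def semidense_observability(gradient_mag, depth, grad_thresh=0.5, clearance=2.0):
    """Label pixels for exploration given semi-dense depth.

    gradient_mag: per-pixel image-gradient magnitudes.
    depth:        per-pixel depth estimates, None where unavailable.
    Pixels below `grad_thresh` carry no depth estimate (semi-dense SLAM
    skips them), so they must be labelled 'unknown' rather than 'free'.
    """
    labels = []
    for g, d in zip(gradient_mag, depth):
        if g < grad_thresh or d is None:
            labels.append('unknown')    # no depth here: do not assume free
        elif d < clearance:
            labels.append('occupied')   # obstacle within the clearance radius
        else:
            labels.append('free')
    return labels
```

An exploration planner built on such labels would only traverse regions whose free-space status has actually been observed, which is the property the abstract says dense-map exploration methods lack.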
PlaceRaider: Virtual Theft in Physical Spaces with Smartphones
As smartphones become more pervasive, they are increasingly targeted by
malware. At the same time, each new generation of smartphone features
increasingly powerful onboard sensor suites. A new strain of sensor malware has
been developing that leverages these sensors to steal information from the
physical environment (e.g., researchers have recently demonstrated how malware
can listen for spoken credit card numbers through the microphone, or feel
keystroke vibrations using the accelerometer). Yet the possibilities of what
malware can see through a camera have been understudied. This paper introduces
a novel visual malware called PlaceRaider, which allows remote attackers to
engage in remote reconnaissance and what we call virtual theft. Through
completely opportunistic use of the camera on the phone and other sensors,
PlaceRaider constructs rich, three-dimensional models of indoor environments.
Remote burglars can thus download the physical space, study the environment
carefully, and steal virtual objects from the environment (such as financial
documents, information on computer monitors, and personally identifiable
information). Through two human subject studies we demonstrate the
effectiveness of using mobile devices as powerful surveillance and virtual
theft platforms, and we suggest several possible defenses against visual
malware.