Indoor Localization Using Radio, Vision and Audio Sensors: Real-Life Data Validation and Discussion
This paper investigates indoor localization methods using radio, vision, and audio sensors in the same environment. The evaluation is based
on state-of-the-art algorithms and uses a real-life dataset. More specifically,
we evaluate a machine learning algorithm for radio-based localization with
massive MIMO technology, an ORB-SLAM3 algorithm for vision-based localization
with an RGB-D camera, and an SFS2 algorithm for audio-based localization with
microphone arrays. Aspects including localization accuracy, reliability,
calibration requirements, and potential system complexity are discussed to
analyze the advantages and limitations of using different sensors for indoor
localization tasks. The results can serve as a guideline and basis for further
development of robust and high-precision multi-sensory localization systems,
e.g., through sensor fusion and context and environment-aware adaptation.Comment: 6 pages, 6 figure
Improving the mobile robots indoor localization system by combining SLAM with fiducial markers
Autonomous mobile robot applications require a robust navigation system that ensures the proper movement of the robot while it performs its tasks. The key challenge in the navigation system is indoor localization. Simultaneous Localization and Mapping (SLAM) techniques combined with Adaptive Monte Carlo Localization (AMCL) are widely used to localize robots. However, this approach is susceptible to errors, especially in dynamic environments and in the presence of obstacles and objects. This paper presents an approach to improve the indoor pose estimation of a wheeled mobile robot. To this end, the proposed localization system integrates the AMCL algorithm with position updates and corrections based on the artificial-vision detection of fiducial markers scattered throughout the environment, reducing the errors accumulated by the AMCL position estimate. The proposed approach is based on the Robot Operating System (ROS) and was tested and validated in a simulation environment. As a result, the trajectory performed by the robot improved when the SLAM system combined with traditional AMCL was corrected by the vision-based detection of fiducial markers.
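The correction step, blending the AMCL estimate with an absolute pose derived from a detected marker, can be illustrated with a toy pose-fusion function. This is only a sketch of the general idea, not the paper's ROS implementation, and the blending weight `alpha` is an assumption for the example:

```python
import math

def fuse_pose(amcl_pose, marker_pose, alpha=0.8):
    """Blend an AMCL pose estimate (x, y, theta) with an absolute pose
    derived from a detected fiducial marker; alpha weights the marker."""
    x = alpha * marker_pose[0] + (1 - alpha) * amcl_pose[0]
    y = alpha * marker_pose[1] + (1 - alpha) * amcl_pose[1]
    # Blend headings on the unit circle to avoid angle wrap-around errors.
    theta = math.atan2(
        alpha * math.sin(marker_pose[2]) + (1 - alpha) * math.sin(amcl_pose[2]),
        alpha * math.cos(marker_pose[2]) + (1 - alpha) * math.cos(amcl_pose[2]))
    return (x, y, theta)
```

A full system would instead weight each source by its covariance, but the complementary structure (absolute marker fixes bounding the drift of the relative estimate) is the same.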
High-Precision Localization Using Ground Texture
Location-aware applications play an increasingly critical role in everyday
life. However, satellite-based localization (e.g., GPS) has limited accuracy
and can be unusable in dense urban areas and indoors. We introduce an
image-based global localization system that is accurate to a few millimeters
and performs reliable localization both indoors and outdoors. The key idea is to
capture and index distinctive local keypoints in ground textures. This is based
on the observation that ground textures including wood, carpet, tile, concrete,
and asphalt may look random and homogeneous, but all contain cracks, scratches,
or unique arrangements of fibers. These imperfections are persistent, and can
serve as local features. Our system incorporates a downward-facing camera to
capture the fine texture of the ground, together with an image processing
pipeline that locates the captured texture patch in a compact database
constructed offline. We demonstrate the capability of our system to robustly,
accurately, and quickly locate test images on various types of outdoor and
indoor ground surfaces.
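The core retrieval step, matching keypoint descriptors from a captured texture patch against an offline database, is commonly done with a nearest-neighbour search plus a ratio test. The sketch below shows that generic step with made-up low-dimensional descriptors; the paper's actual features, database layout, and ratio threshold are not specified here:

```python
import math

def match_patch(query_descs, db, ratio=0.8):
    """Match each query descriptor against a database of (id, descriptor)
    pairs; keep a match only if the best distance is clearly smaller than
    the second best (Lowe-style ratio test). Returns (query_idx, db_id)."""
    matches = []
    for qi, q in enumerate(query_descs):
        ranked = sorted(db, key=lambda item: math.dist(q, item[1]))
        best, second = ranked[0], ranked[1]
        if math.dist(q, best[1]) < ratio * math.dist(q, second[1]):
            matches.append((qi, best[0]))
    return matches

# Toy database of three indexed ground-texture keypoints.
db = [(0, (0.0, 0.0)), (1, (10.0, 0.0)), (2, (0.0, 10.0))]
found = match_patch([(0.5, 0.0)], db)
```

The ratio test discards ambiguous matches, which matters on near-homogeneous textures where many descriptors look alike; the surviving matches would then vote for a database location.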
Vision-based Assistive Indoor Localization
An indoor localization system is of significant importance to the visually impaired in their daily lives by helping them localize themselves and further navigate an indoor environment. In this thesis, a vision-based indoor localization solution is proposed and studied, with algorithms and their implementations that maximize the use of the visual information surrounding the user for optimal localization across multiple stages. The contributions of the work include the following: (1) Novel combinations of an everyday smartphone with a low-cost lens (GoPano) are used to provide an economical, portable, and robust indoor localization service for visually impaired people. (2) New omnidirectional features (omni-features), extracted from 360-degree field-of-view images, are proposed to represent visual landmarks of indoor positions and are then used as online query keys when a user asks for localization services. (3) A scalable and lightweight computation and storage solution is implemented by transferring the large database storage and the computationally heavy querying procedure to the cloud. (4) Real-time query performance of 14 fps is achieved over a Wi-Fi connection by identifying and implementing both data and task parallelism using many-core NVIDIA GPUs. (5) Refined localization via 2D-to-3D and 3D-to-3D geometric matching, and automatic path planning for efficient environmental modeling utilizing architectural AutoCAD floor plans.
This dissertation first provides a description of the assistive indoor localization problem, with its detailed connotations, as well as the overall methodology. Then, related work in indoor localization and automatic path planning for environmental modeling is surveyed. After that, the framework of omnidirectional-vision-based indoor assistive localization is introduced, followed by multiple localization refinement strategies such as the 2D-to-3D and 3D-to-3D geometric matching approaches. Finally, conclusions and a few promising future research directions are provided.
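The path-planning contribution above operates on environmental models derived from floor plans. As a minimal sketch of that idea, assuming the AutoCAD drawing has already been rasterized into an occupancy grid (the grid contents below are invented), a breadth-first search yields a shortest cell path:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest path on a floor-plan occupancy grid (0 = free, 1 = wall)
    via breadth-first search; returns a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # walk back through the predecessor map
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable

# Toy 3x3 floor plan with a wall segment in the middle column.
floor = [[0, 1, 0],
         [0, 1, 0],
         [2 * 0, 0, 0]]
route = plan_path(floor, (0, 0), (0, 2))
```

BFS suffices for uniform-cost grids; a practical system would likely use A* with a Euclidean heuristic on larger plans.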
Mobile Robot Self Localization based on Multi-Antenna-RFID Reader and IC Tag Textile
This paper presents a self-localization system using multiple RFID reader antennas and a High-Frequency RFID-tag textile floor for an indoor autonomous mobile robot. Conventional self-localization systems often use vision sensors and/or laser range finders together with an environment model. It is difficult to estimate the exact global location if the environment has many places with similar shape boundaries, or only a small number of landmarks to localize against. Once the self-localization estimate goes wrong, it tends to take a long time to recover. Vision sensors struggle in dark lighting conditions, and laser range finders often fail to measure the distance to a transparent wall. In addition, self-localization becomes unstable if obstacles occlude landmarks that are important for estimating the position of the robot. Door opening and closing conditions also affect self-localization performance.
A self-localization system based on reading RFID tags on the floor is robust against the lighting, obstacle, furniture, and door conditions in the environment. Even if the arrangement of obstacles or furniture in the environment changes, it is not necessary to update the map used for self-localization. The system can localize itself immediately and is free from the well-known kidnapped-robot problem because the RFID tags provide global position information. Conventional self-localization systems based on reading RFID tags on the floor often use only one RFID reader antenna and have difficulty estimating orientation. We have developed a self-localization system using multiple RFID reader antennas and a High-Frequency RFID-tag textile floor for an indoor autonomous mobile robot. Experimental results show
the validity of the proposed methods.
2013 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Shibaura Institute of Technology, Tokyo, Japan, November 7-9, 2013
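Using several antennas makes orientation observable: each antenna sits at a known body-frame offset, and each detects a floor tag whose world position is known, so the robot pose follows from a 2-D rigid alignment. The closed-form least-squares sketch below illustrates this; the antenna offsets and tag coordinates are invented, not taken from the paper:

```python
import math

def estimate_pose(antenna_offsets, tag_positions):
    """Recover the robot's 2-D pose (x, y, theta): each antenna at a known
    body-frame offset reads a tag at a known world position. Closed-form
    least-squares rigid alignment of the two point sets."""
    n = len(antenna_offsets)
    acx = sum(p[0] for p in antenna_offsets) / n
    acy = sum(p[1] for p in antenna_offsets) / n
    tcx = sum(p[0] for p in tag_positions) / n
    tcy = sum(p[1] for p in tag_positions) / n
    s = c = 0.0
    for (ax, ay), (tx, ty) in zip(antenna_offsets, tag_positions):
        ax, ay, tx, ty = ax - acx, ay - acy, tx - tcx, ty - tcy
        c += ax * tx + ay * ty  # cosine component of the rotation
        s += ax * ty - ay * tx  # sine component of the rotation
    theta = math.atan2(s, c)
    # Translation maps the rotated antenna centroid onto the tag centroid.
    x = tcx - (acx * math.cos(theta) - acy * math.sin(theta))
    y = tcy - (acx * math.sin(theta) + acy * math.cos(theta))
    return (x, y, theta)
```

With a single antenna the rotation terms vanish and heading is unobservable, which is exactly the limitation of the conventional one-antenna systems described above.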
Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these robots in GPS-denied environments. In this way, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios.
Comment: Accepted for publication in 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)
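At its simplest, the visual-servoing idea is a proportional controller that drives the MAV so the tracked ground robot stays centered in its camera image. The sketch below shows only that toy proportional law; the gain, frame conventions, and image size are assumptions, not details from the paper:

```python
def servo_command(target_px, image_size, gain=0.002):
    """Proportional image-based servo: return velocity commands that
    recenter the tracked ground robot in the MAV's camera image."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = target_px[0] - cx, target_px[1] - cy  # pixel error from center
    # Negative feedback: move opposite the error to shrink it.
    return (-gain * ex, -gain * ey)

# Robot detected 100 px right of center in a 640x480 frame.
cmd = servo_command((420, 240), (640, 480))
```

A real controller would add damping and map pixel errors through the camera model, but the negative-feedback structure is the essence of keeping the third-person view locked on the robot.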