Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
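To make the event output format concrete, here is a minimal sketch (not taken from the survey itself) that accumulates a stream of (timestamp, x, y, polarity) events into a signed 2D histogram, one of the simplest event representations that frame-based pipelines can consume; the 346x260 resolution is only an assumed example sensor size.

```python
import numpy as np

def accumulate_events(events, width=346, height=260):
    """Accumulate (t_us, x, y, polarity) events into a signed per-pixel
    histogram by summing event polarities at each pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t_us, x, y, polarity in events:  # polarity is +1 or -1
        frame[y, x] += polarity
    return frame

# Three synthetic events on the assumed 346x260 sensor.
events = [(1_000, 10, 20, +1), (1_050, 10, 20, +1), (1_120, 11, 20, -1)]
print(accumulate_events(events)[20, 10])  # -> 2
```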
Multisensor Data Fusion for Reliable Obstacle Avoidance
In this work, we propose a new approach that combines data from multiple
sensors for reliable obstacle avoidance. The sensors include two depth cameras
and a LiDAR arranged so that they can capture the whole 3D area in front of the
robot and a 2D slice around it. To fuse the data from these sensors, we first
use an external camera as a reference to combine data from two depth cameras. A
projection technique is then introduced to convert the 3D point cloud data of
the cameras to its 2D correspondence. An obstacle avoidance algorithm is then
developed based on the dynamic window approach. A number of experiments have
been conducted to evaluate our proposed approach. The results show that the
robot can effectively avoid static and dynamic obstacles of different shapes
and sizes in different environments.
Comment: In the 11th International Conference on Control, Automation and Information Sciences (ICCAIS 2022), Hanoi, Vietnam
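As an illustration of the projection step described in the abstract above, the following sketch (function name and parameters are hypothetical, not taken from the paper) collapses a 3D point cloud in the robot frame into a 2D laser-scan-like array of nearest obstacle distances per bearing, the kind of representation a dynamic-window-approach planner can consume.

```python
import numpy as np

def pointcloud_to_virtual_scan(points, z_min=0.05, z_max=1.5,
                               n_bins=360, max_range=10.0):
    """Collapse an (N, 3) point cloud in the robot frame into a 2D
    scan: nearest obstacle distance per angular bin."""
    scan = np.full(n_bins, max_range)
    points = np.asarray(points, dtype=float)
    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)  # obstacle-height band
    xy = points[mask, :2]
    ranges = np.hypot(xy[:, 0], xy[:, 1])
    bearings = np.arctan2(xy[:, 1], xy[:, 0])                 # [-pi, pi]
    bins = ((bearings + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    for b, r in zip(bins, ranges):
        scan[b] = min(scan[b], r)
    return scan

# Hypothetical usage: two obstacle points and one point above the height band.
pts = [[1.0, 0.0, 0.5], [0.0, 2.0, 0.3], [3.0, 0.0, 2.0]]
scan = pointcloud_to_virtual_scan(pts)
print(scan[180], scan[270])  # -> 1.0 2.0
```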
Loosely Coupled Odometry, UWB Ranging, and Cooperative Spatial Detection for Relative Monte-Carlo Multi-Robot Localization
As mobile robots become more ubiquitous, their deployments grow across use
cases where GNSS positioning is either unavailable or unreliable. This has led
to increased interest in multi-modal relative localization methods.
Complementing onboard odometry, ranging allows for relative state estimation,
with ultra-wideband (UWB) ranging having gained widespread recognition due to
its low cost and centimeter-level out-of-the-box accuracy. Infrastructure-free
localization methods allow for more dynamic, ad-hoc, and flexible deployments,
yet they have received less attention from the research community. In this
work, we propose a cooperative relative multi-robot localization method where we
leverage inter-robot ranging and simultaneous spatial detections of objects in
the environment. To achieve this, we equip robots with a single UWB transceiver
and a stereo camera. We propose a novel Monte-Carlo approach to estimate
relative states by either employing only UWB ranges or dynamically integrating
simultaneous spatial detections from the stereo cameras. We also address the
challenges for UWB ranging error mitigation, especially in non-line-of-sight,
with a study on different LSTM networks to estimate the ranging error. The
proposed approach has multiple benefits. First, we show that a single range is
enough to accurately estimate the relative states of two robots when fusing
odometry measurements. Second, our experiments also demonstrate that our
approach surpasses traditional methods such as multilateration in terms of
accuracy. Third, to increase accuracy even further, we allow for the
integration of cooperative spatial detections. Finally, we show how ROS 2 and
Zenoh can be integrated to build a scalable wireless communication solution for
multi-robot systems. The experimental validation includes real-time deployment
and autonomous navigation based on the relative positioning method.
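To illustrate the range-only branch of such a Monte-Carlo scheme, here is a minimal particle-filter sketch (parameters are illustrative assumptions; it ignores heading, NLOS error correction, and the cooperative spatial detections) that fuses a relative odometry increment with a single UWB range to estimate the relative 2D position of one robot in another's frame.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_particles(n=500, radius=10.0):
    """Uniform prior over the relative 2D position of robot B in robot A's frame."""
    return rng.uniform(-radius, radius, size=(n, 2))

def predict(particles, delta_xy, motion_std=0.05):
    """Propagate particles with robot B's odometry increment expressed in
    robot A's frame, plus Gaussian motion noise."""
    return particles + delta_xy + rng.normal(0.0, motion_std, particles.shape)

def update(particles, uwb_range, range_std=0.2):
    """Weight particles by the likelihood of the measured UWB range given
    the range each particle implies, then resample."""
    predicted = np.linalg.norm(particles, axis=1)
    weights = np.exp(-0.5 * ((predicted - uwb_range) / range_std) ** 2) + 1e-12
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# One predict/update step with hypothetical numbers; repeated steps with
# changing geometry are what make the single-range estimate converge.
particles = init_particles()
particles = predict(particles, delta_xy=np.array([0.10, 0.02]))
particles = update(particles, uwb_range=3.4)
print(particles.mean(axis=0))  # crude relative-position estimate
```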
A multi-modal perception based assistive robotic system for the elderly
In this paper, we present a multi-modal perception based framework to realize a non-intrusive domestic assistive robotic system. It is non-intrusive in that it only starts interaction with a user when it detects the user's intention to do so. All the robot's actions are based on multi-modal perceptions, which include user detection based on RGB-D data, detection of the user's intention for interaction from RGB-D and audio data, and communication via user-distance-mediated speech recognition. The use of multi-modal cues in different parts of the robotic activity paves the way to successful robotic runs (94% success rate). Each perceptual component is systematically evaluated using appropriate datasets and evaluation metrics. Finally, the complete system is fully integrated on the PR2 robotic platform and validated through system sanity-check runs and user studies with 17 volunteer elderly participants.
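A toy sketch of the non-intrusive gating described above (thresholds and function name are illustrative assumptions, not the paper's values): the robot only opens the speech-recognition channel when a user is detected, the intention-for-interaction classifier is confident, and the user is close enough for reliable audio.

```python
def should_start_interaction(user_detected, intention_score, user_distance_m,
                             intention_threshold=0.8, max_speech_distance_m=2.0):
    """Gate interaction on user presence, intention confidence, and distance."""
    return (user_detected
            and intention_score >= intention_threshold
            and user_distance_m <= max_speech_distance_m)

print(should_start_interaction(True, 0.92, 1.4))  # -> True
print(should_start_interaction(True, 0.55, 1.4))  # -> False
```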
Low cost underwater acoustic localization
Over the course of the last decade, the cost of marine robotic platforms has
significantly decreased. In part this has lowered the barriers to entry of
exploring and monitoring larger areas of the earth's oceans. However, these
advances have been mostly focused on autonomous surface vehicles (ASVs) or
shallow water autonomous underwater vehicles (AUVs). One of the main drivers
for high cost in the deep water domain is the challenge of localizing such
vehicles using acoustics. A low cost one-way travel time underwater ranging
system is proposed to assist in localizing deep water submersibles. The system
consists of location aware anchor buoys at the surface and underwater nodes.
This paper presents a comparison of methods together with details on the
physical implementation to allow its integration into a deep sea micro AUV
currently in development. Additional simulation results show error reductions
by a factor of three.
Comment: 73rd Meeting of the Acoustical Society of America
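For intuition, the following sketch (nominal sound speed, buoy layout, and function names are assumptions, not the paper's implementation) shows how a one-way travel time converts to a slant range and how a least-squares horizontal fix can be computed from location-aware surface buoys, with depth taken from a pressure sensor since it is unobservable from coplanar surface anchors alone.

```python
import numpy as np

SOUND_SPEED_WATER = 1500.0  # m/s, nominal; varies with temperature, salinity, depth

def owtt_range(t_transmit, t_receive):
    """One-way travel time slant range; assumes buoy and vehicle clocks
    are synchronized (e.g., GPS-disciplined buoys, low-drift vehicle clock)."""
    return SOUND_SPEED_WATER * (t_receive - t_transmit)

def horizontal_fix(buoys_xy, slant_ranges, depth):
    """2D least-squares fix from slant ranges to surface buoys, after
    reducing slant ranges to horizontal ranges with the known depth."""
    buoys = np.asarray(buoys_xy, dtype=float)
    horiz = np.sqrt(np.asarray(slant_ranges, dtype=float) ** 2 - depth ** 2)
    a0, r0 = buoys[0], horiz[0]
    A = 2.0 * (buoys[1:] - a0)                      # linearized sphere equations
    b = (r0**2 - horiz[1:]**2
         + np.sum(buoys[1:]**2, axis=1) - np.sum(a0**2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Hypothetical example: four buoys on a 1 km square, vehicle at 500 m depth.
buoys = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
true_xy, depth = np.array([400.0, 600.0]), 500.0
slant = [np.hypot(np.linalg.norm(true_xy - np.array(b)), depth) for b in buoys]
print(horizontal_fix(buoys, slant, depth))  # -> approx. [400. 600.]
```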