Blind as a bat: audible echolocation on small robots
For safe and efficient operation, mobile robots need to perceive their
environment, and in particular, perform tasks such as obstacle detection,
localization, and mapping. Although robots are often equipped with microphones
and speakers, the audio modality is rarely used for these tasks. Compared to
the localization of sound sources, for which many practical solutions exist,
algorithms for active echolocation are less developed and often rely on
hardware requirements that are out of reach for small robots. We propose an
end-to-end pipeline for sound-based localization and mapping that is targeted
at, but not limited to, robots equipped with only simple buzzers and low-end
microphones. The method is model-based, runs in real time, and requires no
prior calibration or training. We successfully test the algorithm on the e-puck
robot with its integrated audio hardware, and on the Crazyflie drone, for which
we design a reproducible audio extension deck. We achieve centimeter-level wall
localization on both platforms when the robots are static during the
measurement process. Even in the more challenging setting of a flying drone, we
can successfully localize walls, which we demonstrate in a proof-of-concept
multi-wall localization and mapping demo.
Comment: 8 pages, 10 figures, published in IEEE Robotics and Automation Letters.
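The core ranging idea in such sound-based pipelines can be illustrated with a matched filter: cross-correlate the recording with the emitted chirp, and the round-trip delay of the strongest echo gives the wall distance. This is only a minimal sketch, not the paper's model-based algorithm; the function name, chirp parameters, and synthetic setup are all assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def estimate_wall_distance(emitted, recorded, sample_rate):
    """Estimate wall distance from the round-trip delay of the strongest echo."""
    # Matched filter: cross-correlate the recording with the emitted chirp.
    corr = np.correlate(recorded, emitted, mode="full")
    # In 'full' mode, zero lag sits at index len(emitted) - 1.
    lag = int(np.argmax(np.abs(corr))) - (len(emitted) - 1)
    # The sound travels to the wall and back, hence the factor of two.
    return SPEED_OF_SOUND * (lag / sample_rate) / 2.0

# Synthetic check: a wall 1 m away delays the echo by (2 m) / (343 m/s).
fs = 44100
t = np.arange(0, 0.005, 1.0 / fs)
chirp = np.sin(2 * np.pi * (2000.0 + 4e5 * t) * t)      # short linear chirp
delay = int(round((2.0 / SPEED_OF_SOUND) * fs))         # echo delay in samples
recording = np.zeros(delay + len(chirp))
recording[delay:] += 0.3 * chirp                        # attenuated echo
distance = estimate_wall_distance(chirp, recording, fs)
```

On real hardware the echo is buried in noise and speaker crosstalk, which is what makes the model-based processing in the paper necessary; the matched-filter step above is only the textbook starting point.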
Flying Animal Inspired Behavior-Based Gap-Aiming Autonomous Flight with a Small Unmanned Rotorcraft in a Restricted Maneuverability Environment
This dissertation research shows that a small unmanned rotorcraft system with onboard processing and a vision sensor can produce autonomous, collision-free flight in a restricted maneuverability environment with no a priori knowledge by using a gap-aiming behavior inspired by flying animals. Current approaches to autonomous flight with small unmanned aerial systems (SUAS) concentrate on detecting and explicitly avoiding obstacles. In contrast, biology indicates that birds, bats, and insects do the opposite: they react to open spaces, or gaps in the environment, with a gap-aiming behavior. Using flying animals as inspiration, a behavior-based robotics approach is taken to implement and test their observed gap-aiming behavior in three dimensions. Because biological studies were unclear whether the flying animals were reacting to the largest gap perceived, the closest gap perceived, or all of the gaps, three approaches for the perceptual schema were explored in simulation: detect_closest_gap, detect_largest_gap, and detect_all_gaps. The results of these simulations were used in a proof-of-concept implementation on a 3DRobotics Solo quadrotor platform in an environment designed to represent the navigational difficulties found inside a restricted maneuverability environment. The motor schema is implemented with an artificial potential field to produce the action of aiming at the center of the gap. Through two sets of field trials totaling fifteen flights conducted with a small unmanned quadrotor, the gap-aiming behavior observed in flying animals is shown to produce repeatable autonomous, collision-free flight in a restricted maneuverability environment.
Additionally, using the distance from the starting location to perceived gaps, the horizontal and vertical distance traveled, and the distance from the center of the gap during traversal, the gap selection approach is shown to perform as intended, and the three-dimensional movement produced by the motor schema and the accuracy of the motor schema are demonstrated, respectively. This gap-aiming behavior provides the robotics community with the first known implementation of autonomous, collision-free flight on a small unmanned quadrotor without the explicit obstacle detection and avoidance seen in current implementations. Additionally, the testing environment, described by quantitative metrics, provides a benchmark for autonomous SUAS flight testing in confined environments. Finally, the success of the autonomous collision-free flight implementation on a small unmanned rotorcraft, field tested in a restricted maneuverability environment, could have important societal impact in both the public and private sectors.
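The motor schema described above, an attractive artificial potential field aiming at the gap center, can be sketched in a few lines. This is a toy illustration only; the gain, speed cap, and function name are assumptions, not values from the dissertation.

```python
import math

def gap_aim_velocity(position, gap_center, gain=0.8, v_max=1.5):
    """Attractive potential field: commanded 3D velocity toward the gap center."""
    dx = [g - p for p, g in zip(position, gap_center)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0:
        return [0.0, 0.0, 0.0]
    # Linear attractive well near the goal, saturated at v_max far from it.
    speed = min(gain * dist, v_max)
    return [speed * d / dist for d in dx]

# Aiming from the origin at a gap center 3 m ahead and 1 m up.
v = gap_aim_velocity([0.0, 0.0, 0.0], [3.0, 0.0, 1.0])
```

Saturating the speed far from the goal while keeping a linear well near it is a common way to avoid aggressive commands when a new gap is first perceived.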
Use of Pattern Classification Algorithms to Interpret Passive and Active Data Streams from a Walking-Speed Robotic Sensor Platform
In order to perform useful tasks for us, robots must have the ability to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, a thermal infrared camera, and a coffee-can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time-domain measurements, the Dynamic Wavelet Fingerprint process (DWFP) is used to create a time-frequency representation of the data. A small-dimensional feature vector is created for each measurement using an intelligent feature selection process for use in statistical pattern classification routines. Using our experimentally measured data from real vehicles at 50 m, this process is able to correctly classify vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.
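The measurement-to-label pipeline described above (time-frequency representation, small feature vector, statistical classifier) can be sketched generically. This is not the authors' DWFP code: the short-window FFT, the four summary statistics, and the nearest-centroid rule are illustrative stand-ins for the wavelet fingerprint, feature selection, and classification routines in the paper.

```python
import numpy as np

def time_frequency_features(signal, win=64):
    """Magnitude spectrogram via short windowed FFTs, reduced to 4 statistics."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
    spec = np.abs(np.fft.rfft(np.array(frames) * np.hanning(win), axis=1))
    bins = np.arange(spec.shape[1])
    p = spec.sum(axis=0) / spec.sum()            # spectral distribution
    centroid = float((bins * p).sum())           # mean frequency bin
    spread = float(np.sqrt((((bins - centroid) ** 2) * p).sum()))
    peak = float(np.argmax(spec.sum(axis=0)))    # dominant frequency bin
    return np.array([float(spec.sum()), centroid, spread, peak])

def nearest_centroid(x, centroids):
    """Assign the label of the closest class centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

# Toy check: two synthetic "echoes" with different dominant frequencies.
t = np.arange(1024)
centroids = {"car": time_frequency_features(np.sin(2 * np.pi * 0.05 * t)),
             "truck": time_frequency_features(np.sin(2 * np.pi * 0.25 * t))}
label = nearest_centroid(time_frequency_features(np.sin(2 * np.pi * 0.05 * t)),
                         centroids)
```

The point of the reduction is the same as in the paper: classification operates on a small feature vector rather than on the noisy raw waveform.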
Make robots be bats: Specializing robotic swarms to the bat algorithm
The bat algorithm is a powerful nature-inspired swarm intelligence method proposed by Prof. Xin-She Yang in 2010, with remarkable applications in industrial and scientific domains. However, to the best of the authors' knowledge, this algorithm has never been applied so far in the context of swarm robotics. With the aim of filling this gap, this paper introduces the first practical implementation of the bat algorithm in swarm robotics. Our implementation is performed at two levels: a physical level, where we design and build a real robotic prototype; and a computational level, where we develop a robotic simulation framework. A very important feature of our implementation is its high specialization: all (physical and logical) components are fully optimized to replicate the most relevant features of the real microbats and the bat algorithm as faithfully as possible. Our implementation has been tested by applying it to the problem of finding a target location within unknown static indoor 3D environments. Our experimental results show that the behavioral patterns observed in the real and the simulated robotic swarms are very similar. This makes our robotic swarm implementation an ideal tool to explore the potential and limitations of the bat algorithm for real-world practical applications and their computer simulations.
This research has been kindly supported by the Computer Science National Program of the Spanish Research Agency (Agencia Estatal de Investigación) and European Funds, Project #TIN2017-89275-R (AEI/FEDER, UE); the project EVOLFORMAS Ref. #JU12, jointly supported by the public body SODERCAN of the Regional Government of Cantabria and the European funds FEDER; the project PDE-GIR of the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Actions grant agreement #778035; Toho University (Funabashi, Japan); and the University of Cantabria (Santander, Spain). The authors are particularly grateful to the Department of Information Science of Toho University for all the facilities given to carry out this work. Special thanks are also due to the Editors and the three anonymous reviewers for their encouraging and constructive comments and very helpful feedback that allowed us to improve our paper significantly.
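For readers unfamiliar with the bat algorithm, its core update rules as published by Yang (frequency tuning, velocity and position updates, a local random walk, and loudness/pulse-rate adaptation) can be sketched on a toy minimization problem. The swarm-robotics specifics of the paper are omitted, and all parameter values and the local-walk step size are illustrative assumptions.

```python
import math
import random

def bat_algorithm(objective, dim=2, n_bats=15, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=1):
    """Core bat algorithm loop minimizing `objective`, starting in [-5, 5]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    loudness = [1.0] * n_bats    # A_i: acceptance probability, decays on success
    rate = [0.5] * n_bats        # r_i: pulse emission rate
    best = min(xs, key=objective)[:]
    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            # Frequency tuning: f_i = f_min + (f_max - f_min) * beta.
            f = f_min + (f_max - f_min) * rng.random()
            vs[i] = [v + (x - b) * f for v, x, b in zip(vs[i], xs[i], best)]
            cand = [x + v for x, v in zip(xs[i], vs[i])]
            if rng.random() > rate[i]:
                # Local random walk around the current best solution
                # (step size chosen for this toy problem).
                cand = [b + 0.1 * rng.gauss(0.0, 1.0) for b in best]
            if objective(cand) < objective(xs[i]) and rng.random() < loudness[i]:
                xs[i] = cand
                loudness[i] *= alpha               # quieter as it homes in
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))
            if objective(xs[i]) < objective(best):
                best = xs[i][:]
    return best

# Toy run on the sphere function, whose global minimum is at the origin.
best = bat_algorithm(lambda x: sum(c * c for c in x))
```

The loudness and pulse-rate schedules are what the paper's robots specialize physically: real microbats emit louder, sparser pulses while searching and quieter, denser pulses when closing in on a target.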
Visual echolocation concept for the colorophone sensory substitution device using virtual reality
Detecting characteristics of 3D scenes is considered one of the biggest challenges for visually impaired people. This ability is nonetheless crucial for orientation and navigation in the natural environment. Although there are several Electronic Travel Aids aiming at enhancing orientation and mobility for the blind, only a few of them convey both 2D and 3D information, including colour. Moreover, existing devices either focus on a small part of an image or allow interpretation of only a few points in the field of view. Here, we propose a concept of visual echolocation with integrated colour sonification as an extension of Colorophone, an assistive device for visually impaired people. The concept aims at mimicking the process of echolocation and thus provides 2D, 3D, and additionally colour information about the whole scene. Even though the final implementation will be realised with a 3D camera, it is first simulated, as a proof of concept, using VIRCO, a Virtual Reality training and evaluation system for Colorophone. The first experiments showed that it is possible to sonify the colour and distance of the whole scene, which opens up the possibility of implementing the developed algorithm on a hardware-based stereo camera platform. An introductory user evaluation of the system was conducted to assess the effectiveness of the proposed solution for perceiving the distance, position, and colour of objects placed in Virtual Reality.
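One simple way a colour-plus-distance sonification could be parameterized, purely as a hypothetical sketch and not the actual Colorophone mapping, is to let hue drive pitch and proximity drive loudness:

```python
import colorsys

def sonify(rgb, distance_m, f_low=220.0, f_high=880.0, d_max=5.0):
    """Map an RGB colour and a distance to a (frequency_hz, amplitude) pair."""
    r, g, b = (c / 255.0 for c in rgb)
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)              # hue in [0, 1)
    freq = f_low + (f_high - f_low) * hue                 # hue -> pitch
    amp = max(0.0, 1.0 - min(distance_m, d_max) / d_max)  # nearer -> louder
    return freq, amp

# A pure red object (hue 0) at 2.5 m maps to the lowest pitch at half amplitude.
freq, amp = sonify((255, 0, 0), 2.5)
```

Sweeping such a mapping across the field of view, as an echolocation-like scan, is one way to sonify a whole scene rather than a single point.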
The Audio-Visual BatVision Dataset for Research on Sight and Sound
Vision research showed remarkable success in understanding our world,
propelled by datasets of images and videos. Sensor data from radar, LiDAR and
cameras has supported research in robotics and autonomous driving for at least a
decade. However, visual sensors may fail in some conditions, and sound has
recently shown potential to complement them. Simulated room impulse responses
(RIRs) in 3D apartment models have become a benchmark dataset for the community,
fostering a range of audio-visual research. In simulation, depth is predictable
from sound by learning bat-like perception with a neural network.
Concurrently, the same was achieved in reality by using RGB-D images and echoes
of chirping sounds. Biomimicking bat perception is an exciting new direction
but needs dedicated datasets to explore the potential. Therefore, we collected
the BatVision dataset to provide large-scale echoes in complex real-world
scenes to the community. We equipped a robot with a speaker to emit chirps and
a binaural microphone to record their echoes. Synchronized RGB-D images from
the same perspective provide visual labels of the traversed spaces. We sampled
environments ranging from modern US office spaces to historic French university
grounds, indoor and outdoor, with large architectural variety. This dataset will
allow research on robot echolocation, general audio-visual tasks, and sound
phenomena unavailable in simulated data. We show promising results for audio-only depth
prediction and show how state-of-the-art work developed for simulated data can
also succeed on our dataset. Project page:
https://amandinebtto.github.io/Batvision-Dataset/
Comment: This version contains the camera-ready paper.
EGOR: design, development, and implementation of an entry in the 1994 AAAI robot competition
Journal Article. EGOR, an entry in the 1994 AAAI Robot Competition, was built by a team from the Department of Computer Science at the University of Utah. The constraints imposed by the competition rules, and by cost and time, led to the development of a system composed of off-the-shelf parts based on a mobile base built by Transitions Research Corporation and an Intel 486DX33-based laptop computer. The work included design, subsystem part procurement, fabrication, software development, testing, and system evaluation.