A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements
over the past decades. Despite important milestones, several challenges
remain to be addressed. Aggregating the achievements of the robotics
community in survey papers is vital to keep track of the current state of
the art and the challenges that must be tackled in the future. This paper
provides a comprehensive review of autonomous mobile robots, covering
topics such as sensor types, mobile robot platforms, simulation tools,
path planning and following, sensor fusion methods, obstacle avoidance,
and SLAM. The motivation for presenting a survey is twofold. First, the
autonomous navigation field evolves quickly, so writing survey papers
regularly is crucial to keep the research community aware of its current
status. Second, deep learning methods have revolutionized many fields,
including autonomous navigation; it is therefore necessary to give
appropriate treatment to the role of deep learning in autonomous
navigation, which this paper also covers. Future work and research gaps
are also discussed.
Q Learning Behavior on Autonomous Navigation of Physical Robot
Behavior-based architectures give a robot fast and reliable action. When a robot has many behaviors, behavior coordination is needed. Subsumption architecture is a behavior-coordination method that gives quick and robust responses. A learning mechanism improves the robot's performance in handling uncertainty. Q-learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. In this paper, Q-learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q-learning affects the robot's performance in the learning phase. As a result, the Q-learning algorithm is successfully implemented in a physical robot within its imperfect environment
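The off-policy update the abstract refers to can be stated in a few lines. The sketch below uses tabular Q-learning with placeholder states and actions (discretized sensor readings and steering commands are assumptions for illustration, not the paper's exact setup):

```python
import random

# Minimal tabular Q-learning sketch for an obstacle-avoidance behavior.
# States and actions are illustrative placeholders.
STATES = range(4)    # e.g. discretized distance-sensor readings
ACTIONS = range(3)   # e.g. turn left, go straight, turn right

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(s):
    """Epsilon-greedy policy over the current Q-table."""
    if random.random() < epsilon:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def update(s, a, reward, s_next):
    """Off-policy Q-learning: bootstrap from the best next action,
    regardless of which action the behavior policy actually takes."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
```

The learning rate `alpha` is the quantity whose effect on the learning phase the paper studies: larger values incorporate each experience faster but make the estimates noisier.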
Autonomous Optimization of Swimming Gait in a Fish Robot With Multiple Onboard Sensors
Autonomous gait optimization is an essential survival ability for mobile robots. However, it remains a challenging task for underwater robots. This paper addresses this problem for the locomotion of a bio-inspired robotic fish and aims at identifying fast swimming gaits autonomously. Our approach for learning locomotion controllers uses three main components: 1) the biological concept of a central pattern generator to produce specific gaits; 2) an onboard sensory processing center to perceive the environment and evaluate the swimming gait; and 3) an evolutionary algorithm, particle swarm optimization. A key aspect of our approach is that the robot optimizes its swimming gait autonomously: it navigates and evaluates its swimming gait in the environment with its onboard sensors while simultaneously running a built-in evolutionary algorithm to optimize its locomotion, all by itself. Forward speed optimization experiments conducted on the robotic fish demonstrate the effectiveness of the developed autonomous optimization system. The latest results show that our robotic fish attained a maximum swimming speed of 1.011 BL/s (40.42 cm/s) through autonomous gait optimization, faster than any of the robot's previously recorded speeds
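A minimal sketch of the particle swarm optimization loop such a system might run is shown below. The objective here is a hypothetical stand-in for the onboard speed measurement, and the two gait parameters (frequency, amplitude) are assumed for illustration:

```python
import random

def speed_estimate(gait):
    # Hypothetical stand-in for the onboard evaluation of a gait
    # parameter vector; peak "speed" at frequency 2.0, amplitude 0.5.
    freq, amp = gait
    return -(freq - 2.0) ** 2 - (amp - 0.5) ** 2

def pso(objective, dim=2, n_particles=10, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Classic PSO: each particle tracks its personal best and is pulled
    toward it and toward the swarm's global best."""
    pos = [[random.uniform(0, 3) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = pbest[max(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val > objective(g):
                    g = pos[i][:]
    return g
```

On the robot, each call to the objective would correspond to physically executing a candidate gait and scoring it with the onboard sensors, which is why the number of evaluations matters far more than in simulation.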
An Integrated Architecture for Learning of Reactive Behaviors based on Dynamic Cell Structures
In this contribution we want to draw the reader's attention to the advantages of controller architectures based on Dynamic Cell Structures (DCS) [5] for learning reactive behaviors of autonomous robots. These include incremental online learning, fast output calculation, flexible integration of different learning rules, and a close connection to fuzzy logic. The latter allows for the incorporation of prior knowledge and for interpreting learning with a DCS as fuzzy rule generation and adaptation. After successful applications of DCS to tasks involving supervised learning, feedback error learning, and incremental category learning, in this article we take reinforcement learning of reactive collision avoidance for an autonomous mobile robot as a further example to demonstrate the validity of our approach. More specifically, we employ a REINFORCE [23] algorithm in combination with an Adaptive Heuristic Critic (AHC) [21] to learn a continuous-valued sensory-motor mapping for obstacle avoidance with a TRC Labmate from delayed reinforcement. The sensory input consists of eight unprocessed sonar readings; the controller output is the continuous angular and forward velocity of the Labmate. The controller and the AHC are integrated within a single DCS network, and the resulting avoidance behavior of the robot can be analyzed as a set of fuzzy rules, each rule having an additional certainty value
Online Searching with an Autonomous Robot
We discuss online strategies for visibility-based searching for an object
hidden behind a corner, using Kurt3D, a real autonomous mobile robot. This task
is closely related to a number of well-studied problems. Our robot uses a
three-dimensional laser scanner in a stop, scan, plan, go fashion for building
a virtual three-dimensional environment. Besides planning trajectories and
avoiding obstacles, Kurt3D is capable of identifying objects like a chair. We
derive a practically useful and asymptotically optimal strategy that guarantees
a competitive ratio of 2, which differs remarkably from the well-studied
scenario in which stopping to survey the environment is not required. Our
strategy is used by Kurt3D, documented in a separate video.
Comment: 16 pages, 8 figures, 12 photographs, 1 table, LaTeX, submitted for publication
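The notion of a competitive ratio used above can be stated compactly: the worst case, over all scenarios, of the online strategy's cost divided by the offline optimum that knows the object's position in advance. The geometry of Kurt3D's corner-search strategy is beyond this sketch; the hypothetical example below only illustrates the definition:

```python
def competitive_ratio(online_cost, optimal_cost, scenarios):
    """Worst-case ratio of online path cost to the offline optimum."""
    return max(online_cost(s) / optimal_cost(s) for s in scenarios)

# Hypothetical example: an online searcher whose path is at most twice
# the shortest path to the object, for any hiding distance d, achieves
# a competitive ratio of 2.
ratio = competitive_ratio(
    online_cost=lambda d: 2 * d,
    optimal_cost=lambda d: d,
    scenarios=[1.0, 2.0, 5.0],
)
```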
Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot
Mobile manipulation tasks are one of the key challenges in the field of
search and rescue (SAR) robotics requiring robots with flexible locomotion and
manipulation abilities. Since the tasks are mostly unknown in advance, the
robot has to adapt to a wide variety of terrains and workspaces during a
mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and
an anthropomorphic upper body to carry out complex tasks in environments too
dangerous for humans. Due to its high number of degrees of freedom, controlling
the robot with direct teleoperation approaches is challenging and exhausting.
Supervised autonomy approaches are promising to increase quality and speed of
control while keeping the flexibility to solve unknown tasks. We developed a
set of operator assistance functionalities with different levels of autonomy to
control the robot for challenging locomotion and manipulation tasks. The
integrated system was evaluated in disaster response scenarios and showed
promising performance.
Comment: In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018
Two-Timescale Learning Using Idiotypic Behaviour Mediation For A Navigating Mobile Robot
A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to
solving mobile-robot navigation problems is presented and tested in both the
real and virtual domains. The LTL phase consists of rapid simulations that use
a Genetic Algorithm to derive diverse sets of behaviours, encoded as variable
sets of attributes, and the STL phase is an idiotypic Artificial Immune System.
Results from the LTL phase show that sets of behaviours develop very rapidly,
and significantly greater diversity is obtained when multiple autonomous
populations are used, rather than a single one. The architecture is assessed
under various scenarios, including removal of the LTL phase and switching off
the idiotypic mechanism in the STL phase. The comparisons provide substantial
evidence that the best option is the inclusion of both the LTL phase and the
idiotypic system. In addition, this paper shows that structurally different
environments can be used for the two phases without compromising
transferability.
Comment: 40 pages, 12 tables, Journal of Applied Soft Computing
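The LTL phase described above (a genetic algorithm deriving sets of behaviours in rapid simulation) can be sketched as follows. The fitness function is a hypothetical stand-in for the navigation simulation score, and the fixed-length real-valued genome is an assumption for illustration:

```python
import random

def fitness(genome):
    # Hypothetical stand-in for a rapid navigation-simulation score;
    # best value 0.0 when every attribute equals 0.7.
    return -sum((g - 0.7) ** 2 for g in genome)

def evolve(pop_size=30, genome_len=5, generations=60, mut_rate=0.2):
    """Truncation selection, one-point crossover, Gaussian mutation."""
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):            # per-gene mutation
                if random.random() < mut_rate:
                    child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Running several such populations independently, as the paper does, tends to preserve diversity because each population converges toward a different region of the behaviour space before the results are pooled.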