A fuzzy controller with supervised learning assisted reinforcement learning algorithm for obstacle avoidance
Fuzzy logic systems promise an efficient approach to obstacle avoidance. However, it is difficult to maintain the correctness, consistency, and completeness of a fuzzy rule base constructed and tuned by a human expert. Reinforcement learning can learn the fuzzy rules automatically, but it requires a heavy learning phase and may produce an insufficiently learned rule base owing to the curse of dimensionality. In this paper, we propose a neural fuzzy system with mixed coarse-learning and fine-learning phases. In the first phase, a supervised learning method determines the membership functions for the input and output variables simultaneously. After sufficient training, fine learning is applied, employing a reinforcement learning algorithm to fine-tune the membership functions for the output variables. To ensure sufficient learning, a new learning method based on a modified Sutton and Barto model is proposed to strengthen exploration. Through this two-step tuning approach, the mobile robot is able to perform collision-free navigation. To address the difficulty of acquiring a large amount of highly consistent training data for supervised learning, we develop a virtual environment (VE) simulator that provides both desktop virtual environment (DVE) and immersive virtual environment (IVE) visualization. By having a skilled human operator drive a mobile robot in the virtual environment (DVE/IVE), the training data are readily obtained and used to train the neural fuzzy system.
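The tuned controller described above ultimately maps sensor readings to steering commands through fuzzy rules. As a rough illustration of the inference step only (not the paper's learning scheme), here is a minimal Sugeno-style controller with two invented rules and triangular membership functions; all set boundaries and rule outputs are made up for this sketch:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises on [a, b], falls on [b, c]."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Invented fuzzy sets for a front-sonar distance reading (metres).
NEAR = (-0.1, 0.0, 1.0)   # full membership at 0 m, gone by 1 m
FAR = (0.5, 2.0, 3.5)     # full membership at 2 m

def steering(front_dist):
    """Sugeno-style weighted average of two invented rules:
    IF front is NEAR THEN turn hard (1.0); IF front is FAR THEN go straight (0.0)."""
    w_near = tri(front_dist, *NEAR)
    w_far = tri(front_dist, *FAR)
    if w_near + w_far == 0.0:
        return 0.0
    return (w_near * 1.0 + w_far * 0.0) / (w_near + w_far)
```

A close obstacle yields a full turn command, a clear path yields none, and intermediate distances blend the two rules; the learning phases in the paper tune exactly these membership-function parameters.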
Q Learning Behavior on Autonomous Navigation of Physical Robot
Behavior-based architecture gives a robot fast and reliable action. When a robot has many behaviors, behavior coordination is needed; subsumption architecture is a behavior coordination method that gives quick and robust responses. A learning mechanism improves the robot's performance in handling uncertainty. Q-learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. In this paper, Q-learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q-learning affects the robot's performance in the learning phase. As a result, the Q-learning algorithm is successfully implemented in a physical robot in its imperfect environment.
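The Q-learning mechanism referred to above follows the standard tabular update Q(s,a) <- Q(s,a) + alpha*[r + gamma*max_a' Q(s',a') - Q(s,a)]. The toy corridor world, rewards, and hyperparameters below are invented for illustration and are not the paper's setup:

```python
import random

def train_obstacle_policy(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy 1-D corridor (states 0-4, obstacle at 4).
    Actions: 0 = forward, 1 = turn away.  World and rewards are invented."""
    Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: Q[(s, act)])
            if a == 0:                         # forward: approach the obstacle
                s2 = min(s + 1, 4)
                r = -10.0 if s2 == 4 else 0.0  # collision penalty
            else:                              # turn away: safe retreat
                s2 = max(s - 1, 0)
                r = 1.0 if s == 3 else 0.0     # reward for avoiding a near-collision
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if s == 4:                         # collision ends the episode
                break
    return Q
```

After training, the learned Q-values near the obstacle prefer turning over moving forward; the learning rate alpha governs how quickly those values settle, which matches the abstract's remark about its effect on the learning phase.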
Quantum Robot: Structure, Algorithms and Applications
A brand-new kind of robot, the quantum robot, is proposed by fusing quantum theory with robot technology. A quantum robot is essentially a complex quantum system, generally composed of three fundamental parts: MQCU (multi quantum computing units), a quantum controller/actuator, and information acquisition units. Corresponding to this system structure, several learning control algorithms, including a quantum searching algorithm and quantum reinforcement learning, are presented for the quantum robot. The theoretical results show that the quantum robot can reduce the complexity of O(N^2) in a traditional robot to O(N^(3/2)) using the quantum searching algorithm, and the simulation results demonstrate that the quantum robot is also superior to the traditional robot in efficient learning through a novel quantum reinforcement learning algorithm. Considering the advantages of the quantum robot, some potentially important applications are also analyzed and prospected.
Comment: 19 pages, 4 figures, 2 tables
Discussion on Different Controllers Used for the Navigation of Mobile Robot
Robots that can comprehend and navigate their surroundings on their own are considered intelligent mobile robots (MRs). Using a sophisticated set of controllers, artificial intelligence (AI), deep learning (DL), machine learning (ML), sensors, and computation for navigation, MRs can understand and move around their environments without even being connected to a cabled power source. Mobility and intelligence are the fundamental drivers of autonomous robots intended for their planned operations. They are becoming popular in a variety of fields, including business, industry, healthcare, education, government, agriculture, military operations, and even domestic settings, to optimize everyday activities. We describe different controllers used in robotics, including proportional integral derivative (PID) controllers, model predictive controllers (MPCs), fuzzy logic controllers (FLCs), and reinforcement learning controllers. The main objective of this article is to present a comprehensive overview of the basic working principles of the controllers utilized by mobile robots for navigation. This work thoroughly surveys the available books and literature to provide a better understanding of the navigation strategies adopted by MRs. Future research trends and possible challenges in optimizing MR navigation systems are also discussed.
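Of the controllers surveyed, the PID law is the simplest to illustrate concretely. Below is a minimal discrete-time sketch; the gains, time step, setpoint, and the crude first-order plant model are all invented for the example, not drawn from the article:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Invented example: drive a robot heading from 0 rad toward a 1 rad setpoint,
# with a crude first-order model of the robot's response to the control signal.
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.1)
heading = 0.0
for _ in range(100):
    heading += pid.update(1.0, heading) * 0.1
```

The proportional term drives the heading toward the setpoint, the derivative term damps the approach, and the small integral term removes residual offset; tuning these three gains is the central practical task with PID navigation controllers.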
An adaptive fuzzy approach to obstacle avoidance
Reinforcement learning based on a previously reported training method guarantees convergence and an almost complete set of rules. However, two shortcomings remain: 1) the membership functions of the input sensor readings are determined manually and all take the same form; and 2) a small number of blank rules still need to be inserted manually. To address these two issues, this paper proposes an adaptive fuzzy approach that uses a supervised learning method based on backpropagation to determine the parameters of the membership functions for each sensor reading. Because each sensor reading has its own input fuzzy sets, each contributes differently to avoiding obstacles. Our simulations show that the proposed system converges rapidly to a complete set of rules and, provided there are no conflicting input-output data pairs in the training sets, performs collision-free obstacle avoidance.
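Gradient-based tuning of membership-function parameters, as described above, can be sketched for a single Gaussian fuzzy set. The shape of the set, the training data, and the target degrees below are invented for illustration; the paper's actual network and data differ:

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership function with centre c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def tune(samples, c=0.5, sigma=1.0, lr=0.1, epochs=500):
    """Gradient descent on squared error between the membership degree and a
    target degree, adapting the centre and width of one input fuzzy set."""
    for _ in range(epochs):
        for x, target in samples:
            mu = gauss(x, c, sigma)
            err = mu - target
            # partial derivatives of the Gaussian w.r.t. its parameters
            d_c = mu * (x - c) / sigma ** 2
            d_sigma = mu * (x - c) ** 2 / sigma ** 3
            c -= lr * err * d_c
            sigma -= lr * err * d_sigma
    return c, sigma

# Invented targets: full membership near a reading of 2.0, low membership far away.
data = [(2.0, 1.0), (0.0, 0.1), (4.0, 0.1)]
c, sigma = tune(data)
```

Run per sensor reading, this kind of update lets each input develop its own set shape, which is the mechanism by which different sensors come to contribute differently to obstacle avoidance.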
An intelligent mobile vehicle navigator based on fuzzy logic and reinforcement learning
In this paper, an alternative to the EEM-based training method is presented and a fuzzy reactive navigation architecture is described. The new training method is 270 times faster in learning speed and incurs only 4% of the learning cost of the EEM method. It also offers very reliable convergence of learning, a very high proportion of learned rules (98.8%), and high adaptability. Using the rule base learned with the new method, the proposed fuzzy reactive navigator fuses the obstacle avoidance and goal seeking behaviors to determine its control actions, with adaptability achieved with the aid of an environment evaluator. A comparison of this navigator using the rule bases obtained from the new training method and from the EEM method shows that the new navigator guarantees a solution and that its solution is more acceptable. © 1999 IEEE.
Adaptive and intelligent navigation of autonomous planetary rovers - A survey
The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, these systems still rely heavily on human tele-operators to drive them. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of the Martian terrain. Developing a truly autonomous rover capable of navigating effectively in such environments requires intelligent and adaptive methods suited to a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing ongoing challenges and promising future research directions from the authors' perspectives.
Decision tree learning for intelligent mobile robot navigation
The replication of human intelligence, learning and reasoning by means of computer algorithms is termed Artificial Intelligence (AI), and the interaction of such algorithms with the physical world can be achieved using robotics. The work described in this thesis investigates the application of concept learning (an approach which takes its inspiration from biological motivations, and from survival instincts in particular) to robot control and path planning. The methodology of concept learning has been applied using learning decision trees (DTs), which induce domain knowledge from a finite set of training vectors that systematically describe a physical entity and are used to train a robot to learn new concepts and to adapt its behaviour.
To achieve behaviour learning, this work introduces a novel approach of hierarchical learning and knowledge decomposition within the frame of the reactive robot architecture. Following the analogy with survival instincts, the robot is first taught how to survive in very simple and homogeneous environments, namely a world without any disturbances or any kind of "hostility". Once this simple behaviour, named a primitive, has been established, the robot is trained to acquire new knowledge to cope with increasingly complex environments by adding further worlds to its existing knowledge. The repertoire of robot behaviours, in the form of symbolic knowledge, is retained in a hierarchy of clustered decision trees (DTs) accommodating a number of primitives. To classify robot perceptions, control rules are synthesised using symbolic knowledge derived from searching the hierarchy of DTs.
A second novel concept is introduced, namely that of multi-dimensional fuzzy associative memories (MDFAMs). These are clustered fuzzy decision trees (FDTs) which are trained locally and accommodate specific perceptual knowledge. Fuzzy logic is incorporated to deal with inherent noise in sensory data and to merge conflicting behaviours of the DTs. In this thesis, the feasibility of the developed techniques is illustrated through robot applications, and their benefits and drawbacks are discussed.
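The decision-tree induction that underpins these behaviours can be sketched with the standard information-gain split (an ID3-style step; the perception vectors and action labels below are invented, not taken from the thesis):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def best_split(rows, labels):
    """Return the attribute index with the highest information gain."""
    base = entropy(labels)
    best, best_gain = None, 0.0
    for i in range(len(rows[0])):
        remainder = 0.0
        for v in set(r[i] for r in rows):
            sub = [lab for r, lab in zip(rows, labels) if r[i] == v]
            remainder += len(sub) / len(labels) * entropy(sub)
        if base - remainder > best_gain:
            best, best_gain = i, base - remainder
    return best

# Invented perception vectors: (front_clear, left_clear) -> action
rows = [(1, 1), (1, 0), (0, 1), (0, 0)]
labels = ["forward", "forward", "turn_left", "turn_right"]
```

On this toy data the chosen split is the front_clear attribute, which alone separates "forward" from the turning actions; repeating the split on each subset grows the tree, and a hierarchy of clustered DTs organises such trees at a larger scale.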