
    Realization of reactive control for multi purpose mobile agents

    Mobile robots are built for different purposes and have different physical sizes, shapes, mechanics and electronics. They are required to work in real time and to realize more than one goal simultaneously, and hence to communicate and cooperate with other agents. The approach proposed in this paper for mobile robot control is reactive and has a layered structure that supports multi-sensor perception. The potential field method is implemented for both obstacle avoidance and goal tracking. However, the imaginary forces of the obstacles and of the goal point are treated separately, and the resulting behaviors are then fused with the help of the geometry. The proposed control is tested in simulations where different scenarios are studied. The results confirm the high performance of the method.
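
    The potential field idea this abstract describes can be made concrete with a minimal sketch: an attractive force pulls the robot toward the goal and repulsive forces push it away from nearby obstacles. The function names, gain constants, and the plain vector-sum fusion below are assumptions for illustration; the paper itself fuses the two resulting behaviors geometrically rather than by simple summation.

    ```python
    import numpy as np

    def attractive_force(pos, goal, k_att=1.0):
        # Pull toward the goal, proportional to the offset vector.
        return k_att * (goal - pos)

    def repulsive_force(pos, obstacles, k_rep=1.0, influence=2.0):
        # Push away from every obstacle inside its influence radius.
        total = np.zeros_like(pos)
        for obs in obstacles:
            diff = pos - obs
            d = np.linalg.norm(diff)
            if 0.0 < d < influence:
                # Gradient of the classic repulsive potential (Khatib-style form).
                total += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
        return total

    def control_step(pos, goal, obstacles, step=0.05):
        # Fuse the two behaviors by plain vector summation and take a small step.
        force = attractive_force(pos, goal) + repulsive_force(pos, obstacles)
        return pos + step * force / (np.linalg.norm(force) + 1e-9)

    # One simulated step in a scene with a single obstacle.
    new_pos = control_step(np.array([0.0, 0.0]), np.array([5.0, 5.0]),
                           [np.array([2.5, 2.5])])
    ```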

    Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot

    We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience compared to previous methods. Comment: 14 pages, 8 figures
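
    A rough sketch of the trajectory-suffix nearest-neighbour estimate the abstract refers to follows. The data layout, the exponential decay over older steps, and the use of stored discounted returns as Q-value samples are assumptions for illustration, not the paper's exact formulation; the key point is that the step metric is supplied by the user, matching the "general metrics over state-action trajectories" extension.

    ```python
    import numpy as np

    def suffix_distance(hist_a, hist_b, step_metric, max_len=8, decay=0.5):
        # Distance between two trajectory suffixes (most recent step first),
        # weighting recent steps more heavily. step_metric compares two
        # (observation, action) pairs and returns a non-negative number.
        d = 0.0
        for n, (a, b) in enumerate(zip(hist_a, hist_b)):
            if n >= max_len:
                break
            d += (decay ** n) * step_metric(a, b)
        return d

    def nsm_q_estimate(current_suffix, memory, action, step_metric, k=3):
        # Estimate the value of taking `action` now from the k stored experiences
        # of that action whose preceding trajectories are nearest under the metric.
        # memory is a list of (suffix, action, discounted_return) triples.
        candidates = [(suffix_distance(current_suffix, suf, step_metric), ret)
                      for suf, act, ret in memory if act == action]
        if not candidates:
            return 0.0
        nearest = sorted(candidates, key=lambda c: c[0])[:k]
        return float(np.mean([ret for _, ret in nearest]))
    ```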

    Application of a Natural Language Interface to the Teleoperation of a Mobile Robot

    IFAC Intelligent Components for Vehicles, Seville, Spain, 1998. This paper describes the application of a natural language interface to the teleoperation of a mobile robot. Natural language communication with robots is a major goal, since it allows non-expert people to communicate with robots in their own language. This communication has to be flexible enough to allow the user to control the robot with a minimum of knowledge about its details. To this end, the user must be able to perform simple operations as well as high-level tasks that involve multiple elements of the system. For the latter, an adequate representation of the knowledge about the robot and its environment allows the creation of a plan of simple actions whose execution results in the accomplishment of the requested task.
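
    As a toy illustration of the distinction the abstract draws between simple operations and high-level tasks, the sketch below maps a command either to a stored plan of primitive actions or to a single primitive. The task library, verbs, and action names are hypothetical and stand in for the knowledge representation the paper describes.

    ```python
    # Hypothetical task library: high-level tasks decompose into primitive
    # actions the teleoperated robot already understands.
    TASK_LIBRARY = {
        "go to the door": ["locate(door)", "plan_path(door)", "follow_path()"],
        "come back":      ["plan_path(home)", "follow_path()"],
    }

    PRIMITIVES = {
        "forward": lambda arg: ["move({})".format(arg if arg is not None else 1.0)],
        "turn":    lambda arg: ["rotate({})".format(arg if arg is not None else 90.0)],
        "stop":    lambda arg: ["halt()"],
    }

    def interpret(utterance):
        # Map a natural-language command either to a stored high-level plan
        # or to a single primitive action; return [] if it is not understood.
        text = utterance.lower().strip()
        if text in TASK_LIBRARY:                     # high-level task
            return list(TASK_LIBRARY[text])
        verb, *rest = text.split()
        if verb in PRIMITIVES:                       # simple operation
            arg = float(rest[0]) if rest else None
            return PRIMITIVES[verb](arg)
        return []

    print(interpret("go to the door"))  # ['locate(door)', 'plan_path(door)', 'follow_path()']
    print(interpret("turn 45"))         # ['rotate(45.0)']
    ```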

    Quantum Robot: Structure, Algorithms and Applications

    A brand-new kind of robot, the quantum robot, is proposed by fusing quantum theory with robot technology. A quantum robot is essentially a complex quantum system and is generally composed of three fundamental parts: MQCU (multi quantum computing units), quantum controller/actuator, and information acquisition units. Corresponding to the system structure, several learning control algorithms, including a quantum searching algorithm and quantum reinforcement learning, are presented for the quantum robot. The theoretical results show that the quantum robot can reduce the complexity of O(N^2) in a traditional robot to O(N^(3/2)) using the quantum searching algorithm, and the simulation results demonstrate that the quantum robot is also superior to a traditional robot in efficient learning via a novel quantum reinforcement learning algorithm. Considering the advantages of the quantum robot, some of its potentially important applications are also analyzed and prospected. Comment: 19 pages, 4 figures, 2 tables
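
    The O(N^2) to O(N^(3/2)) claim can be made concrete with a query-count comparison. The sketch assumes one reading of the claim, namely N unstructured searches each over N items, so that replacing each classical search with Grover's algorithm (about (pi/4) * sqrt(N) oracle calls) gives the N^(3/2) scaling; the decomposition is an assumption for illustration, while the Grover query count itself is standard.

    ```python
    import math

    def classical_queries(n):
        # n unstructured searches, each over n items: on the order of n^2 queries.
        return n * n

    def quantum_queries(n):
        # Grover's algorithm finds a marked item among n with about
        # (pi / 4) * sqrt(n) oracle calls, so the same workload costs
        # on the order of n * sqrt(n) = n^(3/2) queries.
        return n * math.ceil((math.pi / 4) * math.sqrt(n))

    for n in (16, 256, 4096):
        print(n, classical_queries(n), quantum_queries(n))
    ```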

    Q Learning Behavior on Autonomous Navigation of Physical Robot

    Behavior-based architecture gives a robot fast and reliable action. If there are many behaviors in a robot, behavior coordination is needed. Subsumption architecture is a behavior coordination method that gives quick and robust responses. A learning mechanism improves the robot's performance in handling uncertainty. Q-learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. In this paper, Q-learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q-learning affects the robot's performance in the learning phase. As a result, the Q-learning algorithm is successfully implemented in a physical robot in its imperfect environment.
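
    A minimal tabular Q-learning sketch of the kind of obstacle-avoidance behavior described here is given below. The state discretization, action set, and constants are assumptions for illustration; the alpha parameter is the learning rate whose influence on the learning phase the abstract highlights.

    ```python
    import numpy as np

    # Tabular Q-learning for a toy obstacle-avoidance behavior.
    # States are assumed to be discretized sensor readings (integer indices);
    # actions: 0 = forward, 1 = turn left, 2 = turn right.
    N_STATES, N_ACTIONS = 64, 3
    Q = np.zeros((N_STATES, N_ACTIONS))

    def choose_action(state, epsilon=0.1):
        # Epsilon-greedy exploration over the current Q estimates.
        if np.random.rand() < epsilon:
            return np.random.randint(N_ACTIONS)
        return int(np.argmax(Q[state]))

    def q_update(state, action, reward, next_state, alpha=0.3, gamma=0.9):
        # Off-policy Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        td_target = reward + gamma * np.max(Q[next_state])
        Q[state, action] += alpha * (td_target - Q[state, action])
    ```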