32 research outputs found

    Reinforcement Learning Algorithms in Humanoid Robotics

    Humanoid Robots

    For many years, humans have tried in every way to recreate the complex mechanisms that make up the human body. The task is extremely complicated and the results are not yet fully satisfactory. However, with growing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate certain systems of the human body. This research aims not only to create humanoid robots, most of them autonomous systems, but also to provide deeper knowledge of the systems that make up the human body, with a view to possible applications in rehabilitation technology, drawing together studies related not only to robotics but also to biomechanics, biomimetics and cybernetics, among other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision and locomotion.

    Navigation of Mobile Robot using Fuzzy Logic

    In this paper, research has been carried out to develop a navigation technique for an autonomous robot operating in a real-world environment, one capable of identifying and avoiding obstacles even in very busy and demanding surroundings. An improved technique is developed for navigating a mobile robot in such an environment. The actions and reactions of the robot are governed by a fuzzy logic control system. The fuzzy input members are the turn angle between the robot heading and the target and the distances to obstacles all around the robot (left, right, front, back); these inputs are sensed by a series of infrared sensors. The presented FLC for robot navigation has been applied in a range of complex and adverse environments, and the results hold good under all of these conditions.
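
    To make the controller structure concrete, below is a minimal Mamdani-style fuzzy steering sketch in Python. The membership ranges, rule base, input names and the weighted-average defuzzification are illustrative assumptions, not the rule base reported in the paper.

    def ramp_down(x, a, b):
        """Membership that is 1 for x <= a, 0 for x >= b, linear in between."""
        if x <= a:
            return 1.0
        if x >= b:
            return 0.0
        return (b - x) / (b - a)

    def ramp_up(x, a, b):
        """Membership that is 0 for x <= a, 1 for x >= b, linear in between."""
        return 1.0 - ramp_down(x, a, b)

    def fuzzy_steering(front_dist_cm, target_angle_deg):
        """Return a crisp steering command in degrees (+ = turn left, - = turn right)."""
        # Fuzzify the two inputs (assumed universes: distance 0-100 cm, angle -90..90 deg).
        near = ramp_down(front_dist_cm, 20.0, 60.0)
        far = ramp_up(front_dist_cm, 20.0, 60.0)
        target_left = ramp_up(target_angle_deg, 0.0, 60.0)
        target_right = ramp_down(target_angle_deg, -60.0, 0.0)

        # Rule base: (firing strength, crisp consequent in degrees).
        rules = [
            (near, 60.0),                        # obstacle ahead -> turn hard away
            (min(far, target_left), 30.0),       # path clear, target to the left -> turn left
            (min(far, target_right), -30.0),     # path clear, target to the right -> turn right
        ]

        # Defuzzify with a weighted average of the rule consequents.
        total = sum(strength for strength, _ in rules)
        return sum(strength * out for strength, out in rules) / total if total else 0.0

    print(fuzzy_steering(front_dist_cm=25.0, target_angle_deg=-40.0))

    With an obstacle 25 cm ahead and the target 40 degrees to the right, this sketch steers about 49 degrees to the left, prioritising obstacle avoidance over target seeking.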

    Learning control of bipedal dynamic walking robots with neural networks

    Thesis (Elec.E.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 90-94). Stability and robustness are two important performance requirements for a dynamic walking robot, and learning and adaptation can improve both. This thesis explores such an adaptation capability through the use of neural networks. Three neural network models (BP, CMAC and RBF networks) are studied, and the RBF network is chosen as the best of the three, despite its weakness at covering high-dimensional input spaces. To overcome this problem, a self-organizing data-clustering scheme is explored. The system is applied successfully to a biped walking robot in a supervised learning mode. Generalized Virtual Model Control (GVMC), inspired by a biomechanical model of locomotion and an extension of ordinary Virtual Model Control, is also proposed in this thesis. Instead of adding virtual impedance components to the biped skeletal system in virtual Cartesian space, GVMC uses adaptation to approximately reconstruct the dynamics of the biped. The effectiveness of these approaches is proved both theoretically and experimentally (in simulation). By Jianjuen Hu.
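
    As a rough illustration of the kind of function approximator involved, here is a generic Gaussian RBF regressor in Python/NumPy. It picks its centres by random sampling of the data (a simple stand-in for the thesis's self-organizing clustering) and fits linear output weights by least squares; the input dimensions, basis width and target signal are illustrative assumptions, not the thesis's controller.

    import numpy as np

    def rbf_features(X, centers, width):
        """Gaussian basis activations for every input/center pair."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))          # stand-in inputs, e.g. joint angle and velocity
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]        # stand-in target signal to approximate

    centers = X[rng.choice(len(X), 20, replace=False)]   # centres chosen by random sampling
    Phi = rbf_features(X, centers, width=0.3)            # design matrix of basis activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # linear output weights by least squares

    pred = rbf_features(X[:5], centers, width=0.3) @ w   # predictions for a few inputs
    print(pred)
    print(y[:5])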

    Navigation of Real Mobile Robot by Using Fuzzy Logic Technique

    Nowadays robots play an important role in many applications, including medical, industrial, military and transportation, and navigation is a primary issue for mobile robots. Navigation is the process of detecting and avoiding the obstacles in the path and reaching the destination using information about the surroundings gathered from sensors. Successful navigation of a mobile robot means reaching the destination over a short distance in a short time while avoiding the obstacles in the path. For this, a fuzzy logic technique is used for the navigation of the mobile robot. In this project, a four-wheel mobile robot is built, and simulation and experimental results are obtained in the lab. The simulation and experimental results are compared and found to be in good agreement.

    Path planning and control of mobile robot using fuzzy logic

    In this paper, a study has been carried out to improve a steering technique for an autonomous robot operating in a real-world environment, one capable of classifying and evading obstacles even in very busy and challenging surroundings. An improved method is developed for navigating a mobile robot in such an environment. The actions and reactions of the robot are governed by a fuzzy logic control scheme. The fuzzy input members are the turn angle between the robot heading and the target and the distances to obstacles all around the robot (left, right, front, back); these inputs are sensed by a series of infrared sensors. The presented FLC for robot steering has been applied in a range of complex and hostile environments, and the outcomes hold good for all of these situations.

    Learning Control of Robotic Arm Using Deep Q-Neural Network

    Enabling robotic systems to act autonomously, as in driverless systems, is a very complex task in real-world scenarios due to uncertainty. Machine learning capabilities have quickly been making their way into autonomous systems and industrial robotics technology, and have found applications in every sector, including autonomous vehicles, humanoid robots, drones and many more. In this research we implement artificial intelligence on a robotic arm so that it can solve a complex balancing control problem from scratch, without any feedback loop, using the state-of-the-art deep reinforcement learning algorithm DQN. The benchmark problem considered as a case study is balancing an inverted pendulum upright with a six-degree-of-freedom robot arm. A very simple form of this problem has been solved recently using machine learning; in this thesis we build a much more complex inverted-pendulum system and implement it in the Robot Operating System (ROS), a very realistic simulation environment. We not only succeed in controlling the pendulum but also add disturbances to the learned model to study its robustness. We observe how the initially learned model is unstable in the presence of disturbances and how random disturbances help the system evolve into a more robust model. We also use the robust model in a different environment and show how it adapts to the new physical properties. Using an orientation sensor on the tip of the inverted pendulum to obtain angular velocity, simulating in ROS, and mounting the inverted pendulum on a ball joint are among the novelties of this thesis compared with previous publications.
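
    For readers unfamiliar with DQN, the following PyTorch fragment sketches the core temporal-difference update on one replay-buffer mini-batch. The network sizes, discount factor and random transitions are illustrative assumptions and do not reproduce the thesis's ROS arm-and-pendulum setup.

    import torch
    import torch.nn as nn

    obs_dim, n_actions, gamma = 6, 5, 0.99

    # Online Q-network and a separate target network with the same weights.
    q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target_net.load_state_dict(q_net.state_dict())
    optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    # One mini-batch of (s, a, r, s', done) transitions (random stand-ins for replay data).
    s = torch.randn(32, obs_dim)
    a = torch.randint(0, n_actions, (32, 1))
    r = torch.randn(32)
    s2 = torch.randn(32, obs_dim)
    done = torch.zeros(32)

    q_sa = q_net(s).gather(1, a).squeeze(1)              # Q(s, a) for the actions taken
    with torch.no_grad():                                # Bellman target from the target network
        td_target = r + gamma * (1 - done) * target_net(s2).max(1).values
    loss = nn.functional.smooth_l1_loss(q_sa, td_target)

    optim.zero_grad()
    loss.backward()
    optim.step()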

    Interactive Imitation Learning in Robotics: A Survey

    Interactive Imitation Learning (IIL) is a branch of Imitation Learning (IL) in which human feedback is provided intermittently during robot execution, allowing online improvement of the robot's behavior. In recent years, IIL has increasingly started to carve out its own space as a promising data-driven alternative for solving complex robotic tasks. The advantages of IIL are its data efficiency, as the human feedback guides the robot directly towards an improved behavior, and its robustness, as the distribution mismatch between the teacher and learner trajectories is minimized by providing feedback directly over the learner's trajectories. Nevertheless, despite the opportunities that IIL presents, its terminology, structure, and applicability are neither clear nor unified in the literature, slowing down its development and, therefore, research into innovative formulations and discoveries. In this article, we attempt to facilitate research in IIL and lower entry barriers for new practitioners by providing a survey of the field that unifies and structures it. In addition, we aim to raise awareness of its potential, what has been accomplished and what open research questions remain. We organize the most relevant works in IIL in terms of human-robot interaction (i.e., types of feedback), interfaces (i.e., means of providing feedback), learning (i.e., models learned from feedback and function approximators), user experience (i.e., human perception of the learning process), applications, and benchmarks. Furthermore, we analyze similarities and differences between IIL and RL, providing a discussion of how the concepts of offline, online, off-policy and on-policy learning should be transferred to IIL from the RL literature. We particularly focus on robotic applications in the real world and discuss their implications, limitations, and promising future areas of research.
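
    To illustrate the interaction pattern the survey describes, here is a toy interactive-imitation loop in Python/NumPy: the learner acts on its own trajectory, a simulated teacher intervenes only when the learner's action looks wrong, and the intermittently collected corrections are aggregated for periodic supervised updates. The one-dimensional task, intervention threshold and linear policy are illustrative assumptions and are not taken from the survey.

    import numpy as np

    def teacher(x):
        """Simulated human teacher: the corrective action it would demonstrate."""
        return -0.8 * x

    theta = 0.0        # learner policy a(x) = theta * x, initially untrained
    data = []          # aggregated (state, corrected action) pairs from interventions
    rng = np.random.default_rng(0)

    x = 5.0
    for step in range(200):
        a = theta * x
        if abs(a - teacher(x)) > 0.2:      # intermittent feedback: only on poor actions
            data.append((x, teacher(x)))   # correction is recorded on the learner's state
            a = teacher(x)                 # the correction overrides the learner's action
        x = x + a + rng.normal(0.0, 0.05)  # environment step along the learner's trajectory
        if data and step % 20 == 0:        # periodic supervised update from the feedback
            X, Y = map(np.array, zip(*data))
            theta = np.linalg.lstsq(X[:, None], Y, rcond=None)[0][0]

    print("learned gain:", theta)          # approaches the teacher's gain of -0.8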