
    Neural Controller for a Mobile Robot in a Nonstationary Environment

    A neural controller for a mobile robot that learns both the forward and inverse odometry of a differential-drive robot through an unsupervised learning-by-doing cycle was recently introduced. This article introduces an obstacle avoidance module that is integrated into that neural controller. The module uses sensory information to determine, at each instant, a desired angle and distance that cause the robot to navigate around obstacles on its way to a final target. Obstacle avoidance is performed reactively by representing the objects and the target in the robot's environment as Gaussian functions. However, the influence of the Gaussians is modulated dynamically on the basis of the robot's behavior, in a way that avoids problems with local minima. The proposed module enables the robot to operate successfully in different obstacle configurations, such as corridors, mazes, doors and even concave obstacles. Air Force Office of Scientific Research (F49620-92-J-0499)
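
    The Gaussian-based reactive scheme described above can be illustrated with a short potential-field-style sketch in Python. This is a simplified approximation rather than the paper's formulation: the function and parameter names (`desired_heading`, `sigma`, `obstacle_gain`) are hypothetical, and the dynamic modulation of the Gaussian influences that the abstract credits with avoiding local minima is omitted.

```python
import numpy as np

def desired_heading(robot_xy, target_xy, obstacles_xy,
                    sigma=0.5, obstacle_gain=1.5):
    """Steering angle toward a target with Gaussian-shaped obstacle repulsion.

    Illustrative sketch only: the target contributes an attractive unit
    vector, and each obstacle subtracts a vector whose weight is a Gaussian
    of its distance from the robot.
    """
    to_target = target_xy - robot_xy
    direction = to_target / (np.linalg.norm(to_target) + 1e-9)      # attraction
    for obs in obstacles_xy:
        to_obs = obs - robot_xy
        d = np.linalg.norm(to_obs)
        weight = obstacle_gain * np.exp(-d**2 / (2.0 * sigma**2))   # Gaussian influence
        direction -= weight * to_obs / (d + 1e-9)                   # repulsion
    return np.arctan2(direction[1], direction[0])                   # desired angle

# Example: target straight ahead, one obstacle slightly off the path.
angle = desired_heading(np.array([0.0, 0.0]),
                        np.array([5.0, 0.0]),
                        [np.array([2.5, 0.3])])
```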

    A Model of Operant Conditioning for Adaptive Obstacle Avoidance

    We have recently introduced a self-organizing adaptive neural controller that learns to control the movements of a wheeled mobile robot toward stationary or moving targets, even when the robot's kinematics are unknown or change unexpectedly during operation. The model has been shown to outperform other traditional controllers, especially in noisy environments. This article describes a neural network module for obstacle avoidance that complements our previous work. The obstacle avoidance module is based on a model of classical and operant conditioning first proposed by Grossberg (1971). It learns the patterns of ultrasonic sensor activation that predict collisions as the robot navigates in an unknown, cluttered environment. Along with our original low-level controller, this work illustrates the potential of applying biologically inspired neural networks to adaptive robotics and control. Office of Naval Research (N00014-95-1-0409, Young Investigator Award)
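
    The conditioning mechanism described above, in which sensor activation patterns come to predict collisions, can be sketched as a simple associative predictor in Python. This is an illustrative approximation under assumed names and a delta-rule-style update, not Grossberg's model or the authors' implementation.

```python
import numpy as np

class CollisionPredictor:
    """Conditioning-style collision predictor (illustrative sketch only).

    Ultrasonic sensor activations play the role of conditioned stimuli and an
    actual collision that of the unconditioned stimulus: weights grow for
    sensor patterns that reliably precede collisions, so the prediction can
    trigger avoidance before contact.
    """

    def __init__(self, n_sensors, lr=0.05, trace_decay=0.8):
        self.w = np.zeros(n_sensors)       # associative weights
        self.trace = np.zeros(n_sensors)   # eligibility trace of recent activations
        self.lr = lr
        self.trace_decay = trace_decay

    def step(self, sensor_activations, collided):
        # Decaying memory of which sensors were recently active.
        self.trace = self.trace_decay * self.trace + sensor_activations
        prediction = float(self.w @ sensor_activations)
        # Strengthen weights toward recently active sensors when a collision
        # occurs; weaken them when the prediction was a false alarm.
        error = float(collided) - prediction
        self.w += self.lr * error * self.trace
        return prediction

# Example: a strong echo pattern followed by a collision reinforces that pattern.
predictor = CollisionPredictor(n_sensors=8)
reading = np.array([0.1, 0.0, 0.9, 0.8, 0.0, 0.0, 0.0, 0.0])
p = predictor.step(reading, collided=True)
```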

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has achieved success on advanced robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has made great progress, but it still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme that goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic, time-critical maneuvers, complex contact control, and the handling of objects that are partly soft and partly rigid. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and the physical success of the nunchaku flipping challenge.