1,246 research outputs found

    Application of Biological Learning Theories to Mobile Robot Avoidance and Approach Behaviors

    Full text link
    We present a neural network that learns to control approach and avoidance behaviors in a mobile robot using the mechanisms of classical and operant conditioning. Learning, which requires no supervision, takes place as the robot moves around an environment cluttered with obstacles and light sources. The neural network requires no knowledge of the geometry of the robot or of the quality, number or configuration of the robot's sensors. In this article we provide a detailed presentation of the model, and show our results with the Khepera and Pioneer 1 mobile robots.
    Office of Naval Research (N00014-96-1-0772, N00014-95-1-0409)
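
    The abstract does not spell out the learning equations, so the following is only a rough Python sketch of the operant idea: sensor-driven approach and avoidance drives whose weights are strengthened or weakened by outcomes. All variable names, the reward scheme, and the update rule are our own assumptions, not the paper's model.

```python
import numpy as np

# Minimal sketch of operant conditioning for approach/avoidance.
# Sensor activations (e.g., light and proximity readings) drive two
# competing behaviors; the weights of the emitted behavior are
# reinforced when the outcome is rewarding and weakened when punishing.

rng = np.random.default_rng(0)
n_sensors = 8
w_approach = rng.normal(0, 0.1, n_sensors)
w_avoid = rng.normal(0, 0.1, n_sensors)
lr = 0.05  # learning rate (assumed value)

def act(sensors):
    """Pick the behavior with the stronger current drive."""
    return "approach" if sensors @ w_approach > sensors @ w_avoid else "avoid"

def reinforce(sensors, behavior, reward):
    """Operant update: scale the emitted behavior's weights by reward
    (a negative reward, e.g. a collision, weakens them)."""
    global w_approach, w_avoid
    if behavior == "approach":
        w_approach += lr * reward * sensors
    else:
        w_avoid += lr * reward * sensors

# One simulated trial: strong light ahead, approaching it pays off.
sensors = np.array([0.9, 0.7, 0.1, 0.0, 0.0, 0.0, 0.1, 0.6])
behavior = act(sensors)
reinforce(sensors, behavior, reward=1.0 if behavior == "approach" else -1.0)
```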

    A Model of Operant Conditioning for Adaptive Obstacle Avoidance

    Full text link
    We have recently introduced a self-organizing adaptive neural controller that learns to control movements of a wheeled mobile robot toward stationary or moving targets, even when the robot's kinematics are unknown, or when they change unexpectedly during operation. The model has been shown to outperform other traditional controllers, especially in noisy environments. This article describes a neural network module for obstacle avoidance that complements our previous work. The obstacle avoidance module is based on a model of classical and operant conditioning first proposed by Grossberg (1971). This module learns the patterns of ultrasonic sensor activation that predict collisions as the robot navigates in an unknown cluttered environment. Along with our original low-level controller, this work illustrates the potential of applying biologically inspired neural networks to the areas of adaptive robotics and control.
    Office of Naval Research (N00014-95-1-0409, Young Investigator Award)
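
    As a hedged illustration of what "learning the sensor patterns that predict collisions" could look like, here is a minimal sketch that trains a linear collision predictor with the delta (Widrow-Hoff) rule. The actual module uses Grossberg-style conditioning circuits, which this does not reproduce; the sonar layout and data are invented.

```python
import numpy as np

# Hypothetical sketch: learn which ultrasonic activation patterns
# predict a collision. A linear predictor is trained with the delta
# rule; a prediction above threshold would trigger avoidance.

rng = np.random.default_rng(1)
n_sonars = 8
w = np.zeros(n_sonars)
lr = 0.1

def predict_collision(sonar):
    return float(sonar @ w)  # > threshold => brake/turn before impact

def train_step(sonar, collided):
    """Delta rule: move the prediction toward the observed outcome (0/1)."""
    global w
    w += lr * (collided - predict_collision(sonar)) * sonar

# Simulated experience: front sonars saturate right before collisions.
for _ in range(200):
    near_obstacle = rng.random() < 0.5
    sonar = rng.random(n_sonars) * 0.2
    if near_obstacle:
        sonar[3:5] += 0.8  # strong front readings
    train_step(sonar, collided=1.0 if near_obstacle else 0.0)
```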

    An Unsupervised Neural Network for Real-Time Low-Level Control of a Mobile Robot: Noise Resistance, Stability, and Hardware Implementation

    Full text link
    We have recently introduced a neural network mobile robot controller (NETMORC). The controller is based on earlier neural network models of biological sensory-motor control. We have shown that NETMORC is able to guide a differential drive mobile robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. Furthermore, NETMORC is able to adapt in response to long-term changes in the robot's plant, such as a change in the radius of the wheels. In this article we first review the NETMORC architecture, and then we prove that NETMORC is asymptotically stable. After presenting a series of simulation results showing robustness to disturbances, we compare NETMORC performance on a trajectory-following task with the performance of an alternative controller. Finally, we describe preliminary results on the hardware implementation of NETMORC with the mobile robot ROBUTER.
    Sloan Fellowship (BR-3122), Air Force Office of Scientific Research (F49620-92-J-0499)
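
    For readers unfamiliar with the plant being controlled, a short sketch of standard differential-drive kinematics (not the NETMORC controller itself) may help. The wheel radius r and inter-wheel distance d below are exactly the plant parameters whose unexpected changes the controller is said to compensate for; all values are illustrative.

```python
import numpy as np

# Standard differential-drive kinematics: integrate the pose (x, y, theta)
# given left/right wheel speeds, wheel radius r, and wheel separation d.

def step(pose, v_l, v_r, r=0.05, d=0.3, dt=0.05):
    x, y, th = pose
    v = r * (v_r + v_l) / 2.0   # forward (linear) velocity
    w = r * (v_r - v_l) / d     # angular velocity
    return (x + v * np.cos(th) * dt,
            y + v * np.sin(th) * dt,
            th + w * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(100):            # unequal wheel speeds -> drive an arc
    pose = step(pose, v_l=8.0, v_r=10.0)
print(pose)
```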

    Determining robot actions for tasks requiring sensor interaction

    Get PDF
    The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems in coordinating activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed so as to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
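
    The proposed architecture can be caricatured in a few lines: expansion from a high-level task into primitive actions is deferred to execution time, when sensor readings are in hand. The action names and lookup table below are hypothetical stand-ins, not the system's actual vocabulary.

```python
# Architectural sketch (our own simplification): each high-level action
# expands into primitives only when called, using current sensor data.

PRIMS = {
    "goto_door": lambda s: ["rotate_to(door)", "drive_until(near_door)"],
    "pass_door": lambda s: (["drive(0.5m)"] if s["door_open"]
                            else ["wait_for(door_open)", "drive(0.5m)"]),
}

def expand(task, sensors):
    """Expand a high-level task into primitive actions at execution time."""
    plan = []
    for action in task:
        plan.extend(PRIMS[action](sensors))
    return plan

print(expand(["goto_door", "pass_door"], {"door_open": False}))
```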

    Simultaneous localization and map-building using active vision

    No full text
    An active approach to sensing can provide focused measurement capability over a wide field of view, allowing correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision and permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
    Published version
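
    One ingredient named in the abstract, uncertainty-based measurement selection, admits a compact sketch: fixate the feature whose predicted measurement is most uncertain, for instance the one maximizing the trace of the innovation covariance S = H P Hᵀ + R. The matrices below are toy stand-ins, not the paper's filter.

```python
import numpy as np

# Pick the feature whose predicted measurement carries the most
# uncertainty, so observing it yields the largest expected correction.

def pick_feature(P, Hs, R):
    """P: state covariance; Hs: per-feature measurement Jacobians;
    R: measurement noise covariance. Returns index of feature to fixate."""
    traces = [np.trace(H @ P @ H.T + R) for H in Hs]
    return int(np.argmax(traces))

P = np.diag([0.1, 0.1, 0.4, 0.4])          # feature 2 is poorly known
Hs = [np.eye(2, 4), np.eye(2, 4, k=2)]     # observe feature 1 or feature 2
R = 0.01 * np.eye(2)
print(pick_feature(P, Hs, R))              # -> 1 (the more uncertain one)
```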

    A natural-language interface to a mobile robot

    Get PDF
    The present work on robot instructability is based on an ongoing effort to apply modern manipulation technology to serve the needs of the handicapped. The Stanford/VA Robotic Aid is a mobile manipulation system that is being developed to assist severely disabled persons (quadriplegics) in performing simple activities of everyday living in a homelike, unstructured environment. It consists of two major components: a nine degree-of-freedom manipulator and a stationary control console. In the work presented here, only the motions of the Robotic Aid's omnidirectional motion base have been considered, i.e., the six degrees of freedom of the arm and gripper have been ignored. The goal has been to develop some basic software tools for commanding the robot's motions in an enclosed room containing a few objects such as tables, chairs, and rugs. In the present work, the environmental model takes the form of a two-dimensional map with objects represented by polygons. Admittedly, such a highly simplified scheme bears little resemblance to the elaborate cognitive models of reality that are used in normal human discourse. In particular, the polygonal model is given a priori and does not contain any perceptual elements: there is no polygon sensor on board the mobile robot.
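
    A polygonal 2-D map of this sort is easy to make concrete. The sketch below shows one plausible representation together with a standard ray-casting point-in-polygon test; the representation is our own assumption, as the abstract does not describe the system's internals.

```python
# Objects are stored as lists of (x, y) vertices. The even-odd rule
# tests whether a point lies inside a polygon by counting how many
# edges a rightward ray from the point crosses.

def inside(point, polygon):
    x, y = point
    hits = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hits += 1
    return hits % 2 == 1

table = [(1, 1), (3, 1), (3, 2), (1, 2)]              # a table, from above
print(inside((2, 1.5), table), inside((0, 0), table))  # True False
```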

    A Real-Time Unsupervised Neural Network for the Low-Level Control of a Mobile Robot in a Nonstationary Environment

    Full text link
    This article introduces a real-time, unsupervised neural network that learns to control a two-degree-of-freedom mobile robot in a nonstationary environment. The neural controller, which is termed neural NETwork MObile Robot Controller (NETMORC), combines associative learning and Vector Associative Map (VAM) learning to generate transformations between spatial and velocity coordinates. As a result, the controller learns the wheel velocities required to reach a target at an arbitrary distance and angle. The transformations are learned during an unsupervised training phase, during which the robot moves as a result of randomly selected wheel velocities. The robot learns the relationship between these velocities and the resulting incremental movements. Aside from being able to reach stationary or moving targets, the NETMORC structure also enables the robot to perform successfully in spite of disturbances in the environment, such as wheel slippage, or changes in the robot's plant, including changes in wheel radius, changes in inter-wheel distance, or changes in the internal time step of the system. Finally, the controller is extended to include a module that learns an internal odometric transformation, allowing the robot to reach targets when visual input is sporadic or unreliable.
    Sloan Fellowship (BR-3122), Air Force Office of Scientific Research (F49620-92-J-0499)
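
    The unsupervised training phase described above can be approximated in a few lines: the robot "babbles" random wheel velocities, records the resulting displacements, and fits the inverse map from desired displacement back to wheel velocities. We use an ordinary least-squares fit purely for illustration; NETMORC itself uses associative and VAM learning, and the plant parameters below are invented.

```python
import numpy as np

# Motor babbling: sample random wheel commands, observe displacements,
# then fit the inverse model displacement -> wheel velocities.

rng = np.random.default_rng(2)
r, d, dt = 0.05, 0.3, 0.1                      # "true" (unknown) plant

def plant(v_lr):                               # wheel speeds -> (ds, dtheta)
    v_l, v_r = v_lr
    return np.array([r * (v_l + v_r) / 2 * dt, r * (v_r - v_l) / d * dt])

V = rng.uniform(-10, 10, (500, 2))             # random babbling commands
D = np.array([plant(v) for v in V])            # observed movements
W, *_ = np.linalg.lstsq(D, V, rcond=None)      # inverse model: D @ W ≈ V

print(np.array([0.05, 0.2]) @ W)               # wheel speeds for a target move
```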

    Image-guided Landmark-based Localization and Mapping with LiDAR

    Get PDF
    Mobile robots must be able to determine their position to operate effectively in diverse environments. The presented work proposes a system that integrates LiDAR and camera sensors and utilizes the YOLO object detection model to identify objects in the robot's surroundings. The system, developed in ROS, groups detected objects into triangles and uses them as landmarks to determine the robot's position. A triangulation algorithm yields the robot's position by generating a set of nonlinear equations that are solved with the Levenberg-Marquardt algorithm. This work discusses the study, design, and implementation of the proposed system in detail. The investigation begins with an overview of current SLAM techniques. Next, the system design addresses the requirements of the localization and mapping tasks, followed by an analysis comparing the proposed approach to contemporary SLAM methods. Finally, we evaluate the system's effectiveness and accuracy through experimentation in the Gazebo simulation environment, which allows for controlling various disturbances that a real scenario can introduce.
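
    The triangulation step maps naturally onto an off-the-shelf Levenberg-Marquardt solver. The sketch below solves synthetic range equations with scipy.optimize.least_squares using method="lm"; the landmark positions, measured ranges, and the range-only measurement model are our assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Given known landmark positions and measured ranges, solve the
# nonlinear range equations for the robot position with Levenberg-Marquardt.

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
true_pos = np.array([1.5, 1.0])
ranges = np.linalg.norm(landmarks - true_pos, axis=1)  # ideal measurements

def residuals(p):
    """r_i(p) = ||landmark_i - p|| - measured_range_i"""
    return np.linalg.norm(landmarks - p, axis=1) - ranges

sol = least_squares(residuals, x0=[0.0, 0.0], method="lm")
print(sol.x)   # ~ [1.5, 1.0]
```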