
    Swarm-Based Techniques for Adaptive Navigation Primitives

    Adaptive Navigation (AN) has, in the past, been successfully accomplished by using mobile multi-robot systems (MMS) in highly structured formations known as clusters. Such multi-robot adaptive navigation (MAN) allows for real-time reaction to sensor readings and navigation to a goal location not known a priori. This thesis successfully reproduces MAN cluster techniques via swarm control, a less computationally expensive but less formalized approach to MMS control that achieves robot control through a combination of primitive robot behaviors. While powerful for large numbers of robots, swarm robotics often relies on “emergent” swarm behaviors resulting from robot-level behaviors, rather than top-down specification of swarm behaviors. For adaptive navigation purposes, it was desired to specify swarm-level behavior from a top-down perspective rather than experimenting with emergent behaviors. To this end, a simulation environment was developed to allow rapid development and vetting of swarm behaviors while easily interfacing with an existing testbed for validation on hardware. An initial suite of robot primitive and composite behaviors was developed and vetted using this simulator, and the behaviors were validated using the existing testbed in Santa Clara University’s Robotics System Laboratory (RSL). Of particular importance were the adaptive navigation primitives of extrema finding and contour finding and following. These AN primitives were tested over a variety of experimental parameters, yielding design guidelines for top-down specification of swarm robotic adaptive navigation. These design guidelines are presented, and their usefulness is demonstrated for a Contour Finding and Following application using the RSL’s testbed. Finally, possible future work to expand the capability of swarm-based adaptive navigation techniques is discussed.
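    As an illustration of the contour finding and following primitive mentioned above, the following is a minimal sketch for a single robot, assuming it can sample a scalar field at its own position and estimate the local gradient numerically. The example field, gains, and step sizes are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def scalar_field(pos):
    """Illustrative field: a radial 'hill' centered at the origin."""
    return 100.0 - np.linalg.norm(pos)

def field_gradient(pos, eps=1e-3):
    """Finite-difference gradient of the field at the robot's position."""
    dx = (scalar_field(pos + [eps, 0.0]) - scalar_field(pos - [eps, 0.0])) / (2 * eps)
    dy = (scalar_field(pos + [0.0, eps]) - scalar_field(pos - [0.0, eps])) / (2 * eps)
    return np.array([dx, dy])

def contour_follow_step(pos, target_value, speed=0.2, k=0.1):
    """Move along the contour while correcting toward the target field value."""
    g = field_gradient(pos)
    g_hat = g / (np.linalg.norm(g) + 1e-9)
    tangent = np.array([-g_hat[1], g_hat[0]])    # 90-degree rotation: follow the contour
    error = target_value - scalar_field(pos)     # >0: climb the gradient, <0: descend
    direction = tangent + k * error * g_hat
    direction /= np.linalg.norm(direction) + 1e-9
    return pos + speed * direction

# One robot converging onto and then circling the 70-value contour.
pos = np.array([40.0, 0.0])
for _ in range(500):
    pos = contour_follow_step(pos, target_value=70.0)
print("final field value:", round(scalar_field(pos), 2))
```

    A swarm version would add a weak cohesion term toward neighboring robots; the single-robot rule above only shows the contour-holding behavior itself.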

    Neural Controller for a Mobile Robot in a Nonstationary Environment

    A neural controller has recently been introduced for a mobile robot that learns both the forward and inverse odometry of a differential-drive robot through an unsupervised learning-by-doing cycle. This article introduces an obstacle avoidance module that is integrated into the neural controller. This module uses sensory information to determine at each instant a desired angle and distance that causes the robot to navigate around obstacles on the way to a final target. Obstacle avoidance is performed in a reactive manner by representing the objects and target in the robot's environment as Gaussian functions. However, the influence of the Gaussians is modulated dynamically on the basis of the robot's behavior in a way that avoids problems with local minima. The proposed module enables the robot to operate successfully with different obstacle configurations, such as corridors, mazes, doors and even concave obstacles. Air Force Office of Scientific Research (F49620-92-J-0499)
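    A minimal sketch of the reactive idea described above, assuming the robot knows its own position, the target position, and the positions of sensed obstacles. The Gaussian width and gains are illustrative, the sketch returns only a desired heading (the paper's module also outputs a desired distance), and the dynamic modulation that avoids local minima is not modeled here.

```python
import numpy as np

def desired_heading(robot, target, obstacles, sigma=1.0, k_rep=2.0):
    """Blend attraction toward the target with Gaussian repulsion from obstacles."""
    to_target = target - robot
    direction = to_target / (np.linalg.norm(to_target) + 1e-9)
    for obs in obstacles:
        away = robot - obs
        dist = np.linalg.norm(away)
        # Repulsion strength falls off as a Gaussian of the obstacle distance.
        weight = k_rep * np.exp(-(dist ** 2) / (2 * sigma ** 2))
        direction += weight * away / (dist + 1e-9)
    return np.arctan2(direction[1], direction[0])

robot = np.array([0.0, 0.0])
target = np.array([5.0, 0.0])
obstacles = [np.array([2.0, 0.2])]
print("desired heading (rad):", round(desired_heading(robot, target, obstacles), 3))
```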

    Neural network controller against environment: A coevolutive approach to generalize robot navigation behavior

    In this paper, a new coevolutive method, called Uniform Coevolution, is introduced to learn the weights of a neural network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. Introducing coevolution on top of the evolutionary strategy allows the environment itself to evolve, so that a general behavior able to solve the problem in different environments can be learned. Using a traditional evolutionary strategy without coevolution, the learning process obtains a specialized behavior. All the behaviors obtained, with and without coevolution, have been tested in a set of environments, and the capability of generalization is shown for each learned behavior. A simulator based on the Khepera mini-robot has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
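    The coevolutionary structure can be illustrated with a toy sketch in which controller weights evolve against a pool of environments that evolves in turn. The placeholder fitness function, encodings, and update rules below are illustrative only; they are not the Uniform Coevolution algorithm or the Khepera simulator used in the paper.

```python
import random

def fitness(weights, environment):
    """Placeholder fitness: reward weights that match the environment vector."""
    return -sum((w - e) ** 2 for w, e in zip(weights, environment))

def mutate(vec, sigma=0.1):
    """Gaussian perturbation of a parameter vector."""
    return [v + random.gauss(0, sigma) for v in vec]

weights = [0.0] * 4                                               # controller parameters
environments = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(5)]

for gen in range(200):
    # Evolve the controller against the hardest current environment.
    candidate = mutate(weights)
    worst = min(environments, key=lambda e: fitness(weights, e))
    if fitness(candidate, worst) > fitness(weights, worst):
        weights = candidate
    # Coevolve the environments: keep mutants the current controller handles worse.
    for i, env in enumerate(environments):
        challenger = mutate(env)
        if fitness(weights, challenger) < fitness(weights, env):
            environments[i] = challenger

print("evolved weights:", [round(w, 2) for w in weights])
```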

    A layered fuzzy logic controller for nonholonomic car-like robot

    A system for real-time navigation of a nonholonomic car-like robot in a dynamic environment is described. It consists of two layers: a Sugeno-type fuzzy motion planner and a modified proportional-navigation-based fuzzy controller. The system philosophy is inspired by human routing when moving between obstacles based on visual information, including right and left views, to identify the next step toward the goal. A Sugeno-type fuzzy motion planner with four inputs and one output is introduced to give a clear direction to the robot controller. The second stage is a modified proportional-navigation-based fuzzy controller, built on the proportional navigation guidance law and able to optimize the robot's behavior in real time, i.e. to avoid stationary and moving obstacles in its local environment while obeying kinematic constraints. The system intelligently combines the two behaviors to cope with obstacle avoidance as well as approaching a target along a proportional navigation path. The system was simulated and tested in different environments with various obstacle distributions. The simulation reveals that the system gives good results for various simple environments.
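    To make the Sugeno-type layer concrete, here is a minimal sketch of a zero-order Sugeno inference step: fuzzify a few range and bearing inputs, fire simple rules, and return the weighted average of crisp consequents as the steering output. The membership functions, rule base, and reduced number of inputs are assumptions for illustration, not the planner from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def steer(left_dist, right_dist, goal_bearing):
    """Zero-order Sugeno step: returns a crisp steering rate (rad/s)."""
    # Fuzzify: how "near" is each side, and is the goal to the left or right?
    near_left = tri(left_dist, 0.0, 0.0, 1.0)
    near_right = tri(right_dist, 0.0, 0.0, 1.0)
    goal_left = tri(goal_bearing, 0.0, 1.5, 3.0)
    goal_right = tri(goal_bearing, -3.0, -1.5, 0.0)
    # Rules as (firing strength, crisp consequent):
    rules = [
        (near_left, -1.0),    # obstacle close on the left  -> turn right
        (near_right, +1.0),   # obstacle close on the right -> turn left
        (goal_left, +0.5),    # goal is to the left         -> turn left
        (goal_right, -0.5),   # goal is to the right        -> turn right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den          # Sugeno weighted average of consequents

# Obstacle near on the left, goal slightly to the left: net command steers right, gently.
print(round(steer(left_dist=0.3, right_dist=2.0, goal_bearing=0.8), 3))
```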

    Navite: A Neural Network System For Sensory-Based Robot Navigation

    A neural network system, NAVITE, for incremental trajectory generation and obstacle avoidance is presented. Unlike other approaches, the system is effective in unstructured environments. Multimodal information from visual and range data is used for obstacle detection and to eliminate uncertainty in the measurements. Optimal paths are computed without explicitly optimizing cost functions, thereby reducing computational expense. Simulations of a planar mobile robot (including the dynamic characteristics of the plant) in obstacle-free and object avoidance trajectories are presented. The system can be extended to incorporate global map information into the local decision-making process. Defense Advanced Research Projects Agency (AFOSR 90-0083); Office of Naval Research (N00014-92-J-1309); Consejo Nacional de Ciencia y Tecnología (631462)

    Application of Biological Learning Theories to Mobile Robot Avoidance and Approach Behaviors

    We present a neural network that learns to control approach and avoidance behaviors in a mobile robot using the mechanisms of classical and operant conditioning. Learning, which requires no supervision, takes place as the robot moves around an environment cluttered with obstacles and light sources. The neural network requires no knowledge of the geometry of the robot or of the quality, number or configuration of the robot's sensors. In this article we provide a detailed presentation of the model, and show our results with the Khepera and Pioneer 1 mobile robots. Office of Naval Research (N00014-96-1-0772, N00014-95-1-0409)

    Experiments in cooperative human multi-robot navigation

    In this paper, we consider the problem of a group of autonomous mobile robots and a human moving in a coordinated fashion in a real-world implementation. The group moves through a dynamic and unstructured environment. The key problem to be solved is the inclusion of a human in a real multi-robot system and, consequently, the coordination of the motion of multiple robots. We present a set of performance metrics (system efficiency and percentage of time in formation) and a novel flexible formation definition, from which a formation control strategy is developed and evaluated both in simulation and in real-world experiments with a human multi-robot system. The proposed formation control is stable and effective by virtue of its uniform dispersion, cohesion and flexibility.
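    A minimal sketch of the two reported metrics, assuming position logs of shape (time steps, robots, 2) and a distance-tolerance test against each robot's assigned formation slot. The tolerance and the exact efficiency definition are assumptions; the paper's formal definitions are not reproduced here.

```python
import numpy as np

def percent_time_in_formation(positions, slots, tol=0.3):
    """positions, slots: arrays of shape (T, n_robots, 2); tol in meters."""
    errors = np.linalg.norm(positions - slots, axis=2)   # per-robot slot error, (T, n_robots)
    in_formation = np.all(errors <= tol, axis=1)         # was the formation held at step t?
    return 100.0 * np.mean(in_formation)

def system_efficiency(path_lengths, straight_line_dist):
    """Ratio of the ideal straight-line distance to the mean distance travelled."""
    return straight_line_dist / np.mean(path_lengths)

# Tiny demo with a perfectly held (static) formation.
T, n = 100, 3
positions = np.zeros((T, n, 2))
slots = np.zeros((T, n, 2))
print(percent_time_in_formation(positions, slots))        # -> 100.0
print(system_efficiency(path_lengths=[10.5, 11.0, 10.8], straight_line_dist=10.0))
```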

    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near-range path navigation for a mobile robot using a pair of biologically inspired visual neural networks – lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; as a result, the robot can navigate a visual environment naturally with the proposed vision system. Our experiments show that this bio-inspired system works well in different scenarios.
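    The comparison step can be sketched as a simple differential-drive rule, assuming each LGMD output has been reduced to a scalar excitation. The gain, base speed, and wheel mapping are illustrative assumptions rather than the controller used in the paper.

```python
def wheel_speeds(lgmd_left, lgmd_right, base_speed=0.2, gain=0.5):
    """Differential-drive command from the two LGMD excitations (arbitrary units)."""
    turn = gain * (lgmd_left - lgmd_right)   # stronger left signal -> steer right
    left_wheel = base_speed + turn           # speeding up the left wheel turns the robot right
    right_wheel = base_speed - turn
    return left_wheel, right_wheel

# Looming detected mostly on the left: the robot veers away to the right.
print(wheel_speeds(lgmd_left=0.8, lgmd_right=0.1))
```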