
    Determining robot actions for tasks requiring sensor interaction

    The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotics research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor-processing controls. For a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, necessitating coordination of sensor processing and actuator control. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection performs signal and control processing and serves as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems for coordinating activities so as to achieve a goal, but none have been fully applied to actual mobile robots because of the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
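
    A minimal sketch of the central idea, deferring the expansion of a high-level command into primitive actions until execution time, when sensor readings are available. The names (LowLevelPlanner, read_range_sensor, drive) and the example command are hypothetical illustrations, not the authors' interface.

    class LowLevelPlanner:
        """One low-level planning component: holds domain knowledge about a
        sensor/actuator pair and expands commands at execution time."""

        def __init__(self, read_range_sensor, drive):
            self.read_range_sensor = read_range_sensor  # callable -> distance (m)
            self.drive = drive                          # callable(speed_mps, duration_s)

        def execute(self, command):
            # Expand the abstract command using *current* sensor data
            # rather than a plan computed entirely in advance.
            if command == "approach_wall":
                while self.read_range_sensor() > 0.5:   # stop 0.5 m from the wall
                    self.drive(speed_mps=0.2, duration_s=0.1)
            else:
                raise ValueError(f"no expansion known for {command!r}")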

    A layered fuzzy logic controller for nonholonomic car-like robot

    A system for real-time navigation of a nonholonomic car-like robot in a dynamic environment is described. It consists of two layers: a Sugeno-type fuzzy motion planner and a modified proportional-navigation-based fuzzy controller. The system philosophy is inspired by human routing when moving between obstacles based on visual information, including right and left views, to identify the next step toward the goal. A Sugeno-type fuzzy motion planner with four inputs and one output is introduced to give a clear direction to the robot controller. The second stage is a modified fuzzy controller based on the proportional navigation guidance law that optimizes the robot's behavior in real time, i.e., avoids stationary and moving obstacles in its local environment while obeying kinematic constraints. The system intelligently combines the two behaviors to cope with obstacle avoidance as well as approaching a target along a proportional navigation path. The system was simulated and tested in different environments with various obstacle distributions. The simulations show that the system gives good results for various simple environments.
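
    As an illustration of the two layers, the sketch below shows a zero-order Sugeno inference step (a firing-strength-weighted average of constant rule consequents) and the proportional navigation law the controller is built on. The membership function shape, rule structure, and navigation constant are assumptions for illustration; the abstract does not specify them.

    import math

    def tri(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def sugeno(inputs, rules):
        """Zero-order Sugeno inference: inputs is a tuple of crisp values,
        rules is a list of (membership_functions, constant_consequent)."""
        num = den = 0.0
        for mfs, out in rules:
            w = math.prod(mf(x) for mf, x in zip(mfs, inputs))  # firing strength
            num += w * out
            den += w
        return num / den if den else 0.0

    def pro_nav_turn_rate(los_rate, nav_constant=3.0):
        """Classical proportional navigation: commanded turn rate is
        proportional to the line-of-sight rate toward the target."""
        return nav_constant * los_rate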

    Encoderless position estimation and error correction techniques for miniature mobile robots

    This paper presents an encoderless position estimation technique for miniature-sized mobile robots. Odometry techniques, which rely on hardware components, are commonly used for calculating the geometric location of mobile robots; the robot must therefore be equipped with an appropriate sensor to measure its motion. However, the hardware limitations of some robots make employing extra hardware impossible. Moreover, in swarm robotics research, which uses large numbers of mobile robots, equipping every robot with motion sensors can be costly. In this study, the trajectory of the robot is divided into several small displacements over short spans of time, and the position of the robot is calculated within each short period using the speed equations of the robot's wheels. In addition, an error correction function is proposed that estimates motion errors using a current-monitoring technique. The experiments demonstrate the feasibility of the proposed position estimation and error correction techniques for miniature-sized mobile robots without requiring an additional sensor.
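
    The position estimation step described above amounts to integrating differential-drive kinematics over short time spans, with wheel speeds taken from the robot's speed equations (e.g. its motor model) rather than from encoders. A minimal sketch with hypothetical parameter names; the paper's current-monitoring error correction is not reproduced here.

    import math

    def step_pose(x, y, theta, v_left, v_right, wheel_base, dt):
        """Advance a differential-drive pose estimate over one short span dt,
        using wheel speeds rather than encoder counts."""
        v = 0.5 * (v_left + v_right)             # forward speed of the body
        omega = (v_right - v_left) / wheel_base  # yaw rate
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        return x, y, theta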

    Neural Sensor Fusion for Spatial Visualization on a Mobile Robot

    An ARTMAP neural network is used to integrate visual information and ultrasonic sensory information on a B14 mobile robot. Training samples for the neural network are acquired without human intervention: sensory snapshots are retrospectively associated with the distance to the wall, provided by on-board odometry, as the robot travels in a straight line. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. The neural network effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
    Office of Naval Research and Naval Research Laboratory (00014-96-1-0772, 00014-95-1-0409, 00014-95-0657)
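
    The training scheme described above, retrospectively labeling each sensory snapshot with the odometry-derived distance to the wall, can be sketched as follows. The ARTMAP network itself is not reproduced, and all names here are illustrative.

    def collect_training_pairs(snapshots, odometry_positions, wall_position):
        """Pair each snapshot (visual + sonar features) with the distance to
        the wall implied by on-board odometry; no human labeling is needed.
        snapshots[i] was taken at odometry_positions[i], in meters along the
        robot's straight-line path toward the wall."""
        return [(snap, wall_position - pos)
                for snap, pos in zip(snapshots, odometry_positions)]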

    Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    Sensor-based robot motion planning research has primarily focused on mobile robots. Consider, however, a robot manipulator expected to operate autonomously in a dynamic environment where unexpected collisions can occur with many parts of the robot. Only a sensor-based system capable of generating collision-free paths would be acceptable in such situations. Recent work in this area has produced a deterministic solution for 2-DOF systems, in which the arm was sensitized with a 'skin' of infrared sensors. We have proposed a heuristic (potential-field-based) methodology for redundant robots with many degrees of freedom. The key concepts are: solving the path planning problem with cooperating global and local planning modules; using complete information from the sensors and partial (but appropriate) information from a world model; representing objects with hyper-ellipsoids in the world model; and using variational planning. We intend to sensitize the robot arm with a 'skin' of capacitive proximity sensors. These sensors were developed at NASA and are exceptionally well suited to the space application. The first part of the report discusses the development and modeling of the capacitive proximity sensor; the second part discusses the motion planning algorithm.
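
    Two of the key concepts, hyper-ellipsoid object models and potential-field planning, fit together naturally: the ellipsoid's level function indicates how close a point on the arm is to an obstacle's surface, and a repulsive potential grows as that level approaches one. The abstract does not give the report's exact potential; the Khatib-style form below is an illustrative stand-in under that assumption.

    import numpy as np

    def ellipsoid_level(p, center, semi_axes):
        """< 1 inside, 1 on the surface, > 1 outside the hyper-ellipsoid."""
        return float(np.sum(((p - center) / semi_axes) ** 2))

    def repulsive_potential(p, center, semi_axes, gain=1.0, influence=1.0):
        """Khatib-style repulsion: zero beyond an influence margin, growing
        without bound as the point approaches the ellipsoid surface."""
        d = ellipsoid_level(np.asarray(p, float), np.asarray(center, float),
                            np.asarray(semi_axes, float)) - 1.0
        if d <= 0.0:
            return float("inf")   # on or inside the obstacle
        if d >= influence:
            return 0.0            # outside the region of influence
        return 0.5 * gain * (1.0 / d - 1.0 / influence) ** 2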

    Research and development at ORNL/CESAR towards cooperating robotic systems for hazardous environments

    One of the frontiers in intelligent machine research is understanding how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and development using the CESAR robot system testbeds, which include three mobile robots and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and path planning for multiple robots in a common workspace.

    An Unsupervised Neural Network for Real-Time Low-Level Control of a Mobile Robot: Noise Resistance, Stability, and Hardware Implementation

    We have recently introduced a neural network mobile robot controller (NETMORC). The controller is based on earlier neural network models of biological sensory-motor control. We have shown that NETMORC is able to guide a differential-drive mobile robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. Furthermore, NETMORC is able to adapt in response to long-term changes in the robot's plant, such as a change in the radius of the wheels. In this article we first review the NETMORC architecture, and then we prove that NETMORC is asymptotically stable. After presenting a series of simulation results showing robustness to disturbances, we compare NETMORC's performance on a trajectory-following task with the performance of an alternative controller. Finally, we describe preliminary results on the hardware implementation of NETMORC with the mobile robot ROBUTER.
    Sloan Fellowship (BR-3122), Air Force Office of Scientific Research (F49620-92-J-0499)
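
    For context, the plant that NETMORC must learn, and re-learn when it changes, is essentially the differential-drive kinematic map sketched below: a change in wheel radius, the adaptation example cited above, alters this map. This is a plain kinematic sketch, not the article's neural model.

    def wheel_speeds(v, omega, wheel_radius, wheel_base):
        """Inverse kinematics of a differential-drive robot: the wheel angular
        velocities (rad/s) that realize forward speed v (m/s) and turn rate
        omega (rad/s). A controller must effectively invert this relation."""
        w_right = (v + 0.5 * wheel_base * omega) / wheel_radius
        w_left = (v - 0.5 * wheel_base * omega) / wheel_radius
        return w_left, w_right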

    A tesselated probabilistic representation for spatial robot perception and navigation

    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation, and some parallels are drawn between operations on Occupancy Grids and related image processing operations.
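
    The incremental Bayesian update is commonly implemented in log-odds form, where fusing one reading into one cell is a single addition (assuming a uniform 0.5 prior). The sketch below shows that update; the inverse sensor model supplying p(occupied | reading), which is where each sensor's uncertainty is captured, is assumed given and is not reproduced here.

    import math

    def update_cell(log_odds, p_occ_given_reading):
        """Fuse one range reading into one cell's occupancy estimate. With a
        uniform 0.5 prior, the Bayesian update is an addition in log-odds."""
        p = p_occ_given_reading
        return log_odds + math.log(p / (1.0 - p))

    def occupancy_probability(log_odds):
        """Recover P(occupied) from the accumulated log-odds."""
        return 1.0 - 1.0 / (1.0 + math.exp(log_odds))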