4,683 research outputs found

    Building Similarity Maps Of The Environment Using SONAR Information For The Navigation Of The Mobile Robots

    Get PDF
    The objective of this work is to present a representation called a similarity map, based on results from evaluating the SONAR system of a mobile robot. A corner of our lab is considered as the environment. At a set of reference positions the robot performs a complete rotation; a set of test positions is then used for the task of recognizing the environment, so that the robot's position can be identified. Using similarity measures based on Euclidean distance, similarity maps are defined and estimated. The results are useful for defining complex SONAR-based navigation strategies.
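    The abstract does not spell out the computation, but the core idea can be sketched as follows, assuming each scan is a fixed-length vector of range readings taken during the robot's rotation; the position names and readings below are invented for illustration:

```python
import numpy as np

def similarity(scan_a, scan_b):
    """Similarity of two sonar scans (one range reading per rotation
    step), derived from their Euclidean distance: 1.0 for identical
    scans, tending to 0 as the scans diverge."""
    d = np.linalg.norm(np.asarray(scan_a, float) - np.asarray(scan_b, float))
    return 1.0 / (1.0 + d)

def similarity_map(reference_scans, test_scan):
    """Score a test scan against the scan stored for every reference
    position; the resulting map supports position recognition."""
    return {pos: similarity(scan, test_scan)
            for pos, scan in reference_scans.items()}

# Invented 8-reading scans at two reference positions:
refs = {"corner": [1.2, 0.8, 0.8, 2.5, 3.0, 3.1, 2.9, 1.5],
        "wall":   [0.5, 0.5, 0.6, 0.5, 3.0, 3.1, 2.9, 2.8]}
scores = similarity_map(refs, [1.1, 0.9, 0.8, 2.4, 3.0, 3.0, 2.8, 1.6])
best = max(scores, key=scores.get)  # "corner": the closest reference
```

    Choosing the reference position with the highest similarity score then amounts to a nearest-neighbour decision in scan space.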

    Monocular Vision as a Range Sensor

    Get PDF
    One of the most important abilities for a mobile robot is detecting obstacles in order to avoid collisions. Building a map of these obstacles is the next logical step. Most robots to date have used sensors such as passive or active infrared, sonar or laser range finders to locate obstacles in their path. In contrast, this work uses a single colour camera as the only sensor, and consequently the robot must obtain range information from the camera images. We propose simple methods for determining the range to the nearest obstacle in any direction in the robot’s field of view, referred to as the Radial Obstacle Profile. The ROP can then be used to determine the amount of rotation between two successive images, which is important for constructing a 360° view of the surrounding environment as part of map construction.
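    As a rough illustration of how two Radial Obstacle Profiles can yield the rotation between successive images, the sketch below tries every circular shift of the second profile and keeps the best match; the bin count and range values are hypothetical, not taken from the paper:

```python
import numpy as np

def estimate_rotation(rop_a, rop_b):
    """Estimate the rotation (in profile bins) between two Radial
    Obstacle Profiles: try every circular shift of rop_b and return
    the one that best matches rop_a (least sum of squared error)."""
    a = np.asarray(rop_a, float)
    b = np.asarray(rop_b, float)
    errors = [np.sum((a - np.roll(b, -s)) ** 2) for s in range(len(a))]
    return int(np.argmin(errors))

# Hypothetical 12-bin profile (30 degrees per bin); the second image
# sees the same scene rotated by 3 bins, i.e. 90 degrees.
rop1 = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 2.5, 2.0, 1.5, 1.0, 0.8, 0.7, 0.9])
rop2 = np.roll(rop1, 3)
shift = estimate_rotation(rop1, rop2)  # 3 bins
```

    Converting the winning shift to an angle only needs the angular width of one bin (30° here).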

    A tesselated probabilistic representation for spatial robot perception and navigation

    Get PDF
    The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
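    The incremental Bayesian update of a single grid cell can be sketched in log-odds form, which is algebraically equivalent to the Bayes-rule formulation described above under the usual assumptions (conditionally independent readings, uniform 0.5 prior); the probabilities below are illustrative:

```python
import math

def update_cell(p_current, p_sensor):
    """Fuse a new sensor-derived occupancy estimate into a cell's
    current occupancy probability.  Working in log-odds turns the
    Bayesian update into a simple addition (uniform 0.5 prior
    assumed, readings assumed conditionally independent)."""
    logit = lambda p: math.log(p / (1.0 - p))
    l = logit(p_current) + logit(p_sensor)
    return 1.0 / (1.0 + math.exp(-l))

# A cell starts unknown (0.5); two readings each suggest 0.7 occupancy:
p = update_cell(0.5, 0.7)  # 0.7: the first reading replaces the prior
p = update_cell(p, 0.7)    # ~0.845: agreeing evidence raises confidence
```

    Repeating the update over all cells swept by each range reading, from every point of view, yields the incrementally refined map.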

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Full text link
    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles that are of very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames, and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrated that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. (Comment: appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.)
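    The paper's edge-based visual odometry is beyond a short sketch, but the triangulation underlying the stereo solution, which gives a matched edge point its depth, is simple; the focal length, baseline, and pixel columns below are hypothetical:

```python
def edge_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth of an edge point matched across a rectified stereo pair,
    via the standard triangulation relation Z = f * B / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("zero or negative disparity: no finite depth")
    return f_px * baseline_m / disparity

# A wire edge seen at column 412 in the left image and 404 in the
# right, with a 700 px focal length and a 12 cm baseline:
z = edge_depth(700.0, 0.12, 412.0, 404.0)  # 10.5 m
```

    The small disparity of a distant thin structure is exactly why sub-pixel edge localization matters for this task.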

    Appearance-based localization for mobile robots using digital zoom and visual compass

    Get PDF
    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or absence of reliable sensor data. The system has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.

    ARTMAP-FTR: A Neural Network for Object Recognition Through Sonar on a Mobile Robot

    Full text link
    ART (Adaptive Resonance Theory) neural networks for fast, stable learning and prediction have been applied in a variety of areas. Applications include automatic mapping from satellite remote sensing data, machine tool monitoring, medical prediction, digital circuit design, chemical analysis, and robot vision. Supervised ART architectures, called ARTMAP systems, feature internal control mechanisms that create stable recognition categories of optimal size by maximizing code compression while minimizing predictive error in an on-line setting. Special-purpose requirements of various application domains have led to a number of ARTMAP variants, including fuzzy ARTMAP, ART-EMAP, ARTMAP-IC, Gaussian ARTMAP, and distributed ARTMAP. A new ARTMAP variant, called ARTMAP-FTR (fusion target recognition), has been developed for the problem of multi-ping sonar target classification. The development data set, which lists sonar returns from underwater objects, was provided by the Naval Surface Warfare Center (NSWC) Coastal Systems Station (CSS), Dahlgren Division. The ARTMAP-FTR network has proven to be an effective tool for classifying objects from sonar returns. The system also provides a procedure for solving more general sensor fusion problems. (Office of Naval Research N00014-95-I-0409, N00014-95-I-0657)

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    Get PDF
    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people and of the advances in technology which will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance. It also discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.
