2,278 research outputs found

    Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated

Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to the image of an approaching object. These neurons are called the lobula giant movement detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the development of an LGMD model for use as an artificial collision detector in robotic applications. To date, robots have been equipped with only a single, central artificial LGMD sensor, which triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly, for a robot to behave autonomously, it must react differently to stimuli approaching from different directions. In this study, we implement a bilateral pair of LGMD models in Khepera robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD models using methodologies inspired by research on escape direction control in cockroaches. Using ‘randomised winner-take-all’ or ‘steering wheel’ algorithms for LGMD model integration, the Khepera robots could escape an approaching threat in real time and with a distribution of escape directions similar to that of real locusts. We also found that, by optimising these algorithms, we could use them to integrate the left and right DCMD responses of real jumping locusts offline and reproduce the actual escape directions that the locusts took in a particular trial. Our results significantly advance the development of an artificial collision detection and evasion system based on the locust LGMD by giving it reactive control over robot behaviour. The success of this approach may also indicate some important areas to be pursued in future biological research.
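
    As a rough illustration of the two integration schemes named above, the sketch below implements a ‘steering wheel’ mapping (turn away in proportion to the left/right response difference) and a ‘randomised winner-take-all’ choice (escape side drawn stochastically, biased by relative excitation). It is a minimal sketch assuming scalar left/right LGMD model outputs; the function names and gain are illustrative assumptions, not the authors' code.

```python
import random

def steering_wheel(lgmd_left: float, lgmd_right: float, gain: float = 1.0) -> float:
    """'Steering wheel' integration: turn away from the more strongly
    stimulated side, in proportion to the left/right response difference.
    Positive output = turn left (escape a threat looming on the right)."""
    return gain * (lgmd_right - lgmd_left)

def randomised_wta(lgmd_left: float, lgmd_right: float) -> str:
    """'Randomised winner-take-all': the escape side is drawn stochastically,
    biased by the relative LGMD excitation, so repeated trials yield a
    distribution of escape directions rather than a single fixed response."""
    total = lgmd_left + lgmd_right
    if total <= 0.0:
        return "none"  # no looming stimulus detected on either side
    p_escape_left = lgmd_right / total  # escape away from the stronger side
    return "left" if random.random() < p_escape_left else "right"
```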

    The Cyborg Astrobiologist: Testing a Novelty-Detection Algorithm on Two Mobile Exploration Systems at Rivas Vaciamadrid in Spain and at the Mars Desert Research Station in Utah

(ABRIDGED) In previous work, two platforms were developed for testing computer-vision algorithms for robotic planetary exploration (McGuire et al. 2004b, 2005; Bartolo et al. 2007). The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone-camera platform has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon color, (ii) integrate a field-capable digital microscope on the wearable-computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone-camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. To test the algorithm, we acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colors. The algorithm robustly recognized previously observed units by their color, while requiring only a single image or a few images to learn colors as familiar, demonstrating its fast learning capability.

    Comment: 28 pages, 12 figures, accepted for publication in the International Journal of Astrobiology
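
    The novelty-detection step can be pictured with the following minimal sketch: a Hopfield-style associative memory stores binarised colour histograms of familiar images, and an image whose histogram is poorly reconstructed by the memory is flagged as novel. The class, the 4x4x4 RGB binning, and the update schedule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

class ColorNoveltyDetector:
    """Hopfield-style associative memory over binarised colour histograms
    (illustrative sketch; not the authors' implementation)."""

    def __init__(self):
        self.n = 64                      # 4x4x4 RGB histogram bins
        self.W = np.zeros((self.n, self.n))

    def _encode(self, image_rgb):
        # Binarise a coarse RGB histogram to a +/-1 pattern on bin occupancy.
        hist, _ = np.histogramdd(image_rgb.reshape(-1, 3).astype(float),
                                 bins=(4, 4, 4), range=[(0, 256)] * 3)
        return np.where(hist.ravel() > 0, 1.0, -1.0)

    def learn(self, image_rgb):
        # One-shot Hebbian storage of the image's colour pattern.
        p = self._encode(image_rgb)
        self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0.0)

    def novelty(self, image_rgb, steps=5):
        # Relax the pattern through the network; the fraction of bits that
        # flip measures how unfamiliar the image's colours are (0 = familiar).
        p = self._encode(image_rgb)
        s = p.copy()
        for _ in range(steps):
            s = np.where(self.W @ s >= 0.0, 1.0, -1.0)
        return float(np.mean(s != p))
```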

    Improving situation awareness of a single human operator interacting with multiple unmanned vehicles: first results

In the context of the supervision of one or several unmanned vehicles by a human operator, the design of an adapted user interface is a major challenge. Within an existing experimental setup composed of a ground station and heterogeneous unmanned ground and air vehicles, we therefore aim to redesign the human-robot interactions to improve the operator's situation awareness. We base our new design on a classical user-centered approach.

    Knowledge-based control for robot self-localization

Autonomous robot systems are being proposed for a variety of missions, including the Mars rover/sample return mission. Before any other mission objective can be met, an autonomous robot must be able to determine its own location. This is especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors, because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem-solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise, but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called the Localization Control and Logic Expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to the high-level control aspects of the localization problem.
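
    The flavour of such expert-derived control steps can be suggested with a small rule-based sketch: evidence is gathered first, hypotheses are formed next, and ambiguity triggers further sensing. The state fields and rule names below are hypothetical illustrations, not LOCALE's actual representation.

```python
# Rule-based sketch of expert-style localization control (illustrative only;
# the state fields and rule names are hypothetical, not LOCALE's).

def choose_action(state):
    """Select the next problem-solving step the way a human expert might:
    gather evidence, form hypotheses, then discriminate between them."""
    if not state["observed_features"]:
        return "scan_surroundings"            # no evidence yet: look around
    if not state["candidate_locations"]:
        return "match_features_to_map"        # form location hypotheses
    if len(state["candidate_locations"]) > 1:
        return "seek_discriminating_feature"  # ambiguous: gather more evidence
    return "verify_and_commit"                # single hypothesis: confirm it

state = {"observed_features": ["ridge", "crater_rim"],
         "candidate_locations": ["site_A", "site_B"]}
print(choose_action(state))  # -> seek_discriminating_feature
```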

    Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot

Mobile manipulation tasks are one of the key challenges in the field of search and rescue (SAR) robotics, requiring robots with flexible locomotion and manipulation abilities. Since the tasks are mostly unknown in advance, the robot has to adapt to a wide variety of terrains and workspaces during a mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and an anthropomorphic upper body to carry out complex tasks in environments too dangerous for humans. Due to its high number of degrees of freedom, controlling the robot with direct teleoperation approaches is challenging and exhausting. Supervised autonomy approaches are promising to increase the quality and speed of control while keeping the flexibility to solve unknown tasks. We developed a set of operator assistance functionalities with different levels of autonomy to control the robot for challenging locomotion and manipulation tasks. The integrated system was evaluated in disaster response scenarios and showed promising performance.

    Comment: In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018
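
    One way to picture operator assistance at different levels of autonomy is a simple dispatch on the active level, as sketched below; the level names and behaviours are illustrative assumptions, not the Centauro operator interfaces described in the paper.

```python
from enum import Enum, auto

# Illustrative sketch only: the level names and behaviours are assumptions,
# not the Centauro control interfaces themselves.

class AutonomyLevel(Enum):
    DIRECT_TELEOPERATION = auto()  # operator commands every joint directly
    ASSISTED_CONTROL = auto()      # operator steers; robot enforces limits
    SUPERVISED_AUTONOMY = auto()   # operator sets goals; robot plans/executes

def execute(task, level, operator_cmd=None):
    if level is AutonomyLevel.DIRECT_TELEOPERATION:
        return f"apply raw operator command {operator_cmd!r}"
    if level is AutonomyLevel.ASSISTED_CONTROL:
        return f"blend command {operator_cmd!r} with stability/collision limits"
    return f"plan and execute '{task}' autonomously under operator supervision"

print(execute("grasp valve", AutonomyLevel.SUPERVISED_AUTONOMY))
```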

AutoNavi3AT software interface for autonomous navigation on urban roads using omnidirectional vision and a mobile robot

The design of efficient autonomous navigation systems for mobile robots or autonomous vehicles is fundamental for carrying out their programmed tasks. Two kinds of sensors are commonly used in urban road following: LIDAR and cameras. LIDAR sensors are highly accurate but expensive, and extra work is needed for humans to understand point-cloud scenes; visual content, by contrast, is more readily understood by human beings, and this should be exploited when developing human-robot interfaces. In this work, a computer-vision-based urban road following software tool for mobile robots and autonomous vehicles, called AutoNavi3AT, is presented. The urban road following scheme proposed in AutoNavi3AT uses vanishing point estimation and tracking on panoramic images to control the mobile robot's heading on the urban road. To do this, Gabor filters, region growing, and particle filters are used. In addition, laser range data are employed for local obstacle avoidance. Quantitative results were obtained in two kinds of tests: one using datasets acquired at the Universidad del Valle campus, and field tests using a Pioneer 3AT mobile robot. As a result, average improvements in vanishing point estimation of 68.26% and 61.46% were achieved, which is useful for mobile robots and autonomous vehicles moving on urban roads.
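
    The tracking-plus-control portion of such a scheme can be sketched as follows: a one-dimensional particle filter smooths the per-frame vanishing-point column estimate, and a proportional controller steers the robot toward it. The motion/measurement models and gains are illustrative assumptions, not AutoNavi3AT's code.

```python
import numpy as np

def pf_update(particles, weights, vp_meas, img_width,
              motion_std=5.0, meas_std=15.0):
    """One predict/update/resample cycle for the vanishing-point column."""
    # Predict: random-walk motion model for the VP column between frames.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    particles = np.clip(particles, 0.0, img_width - 1.0)
    # Update: Gaussian likelihood around the per-frame VP detection.
    weights = weights * np.exp(-0.5 * ((particles - vp_meas) / meas_std) ** 2)
    weights = (weights + 1e-12) / np.sum(weights + 1e-12)
    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

def steering_command(particles, weights, img_width, gain=0.005):
    """Proportional heading correction toward the tracked vanishing point
    (positive = turn left when the VP lies left of the image centre)."""
    vp_est = np.sum(particles * weights)
    return gain * (img_width / 2.0 - vp_est)

# Example: track a VP detected near column 400 in a 720-pixel panorama.
img_w = 720
particles = np.random.uniform(0.0, img_w, 200)
weights = np.full(200, 1.0 / 200)
particles, weights = pf_update(particles, weights, vp_meas=400.0, img_width=img_w)
print(steering_command(particles, weights, img_w))
```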

    Space and camera path reconstruction for omni-directional vision

In this paper, we address the inverse problem of reconstructing a scene, as well as the camera motion, from the image sequence taken by an omni-directional camera. Our structure-from-motion results give sharp conditions under which the reconstruction is unique. For example, if there are three points in general position and three omni-directional cameras in general position, a unique reconstruction is possible up to a similarity. We then look at the reconstruction problem with m cameras and n points, where n and m can be large and the over-determined system is solved by least-squares methods. The reconstruction is robust and generalizes to the case of a dynamic environment where landmarks can move during the movie capture. Possible applications of the result are computer-assisted scene reconstruction, 3D scanning, autonomous robot navigation, medical tomography, and city reconstructions.
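
    The least-squares step for the over-determined m-camera/n-point system can be illustrated with standard ray-based triangulation: with known camera centres and unit bearing vectors (the natural measurement of an omni-directional camera), the point minimising the summed squared perpendicular distances to the viewing rays solves a small linear system. This is a generic sketch under those assumptions, not the paper's formulation.

```python
import numpy as np

def triangulate(centres, bearings):
    """Triangulate one 3-D point from camera centres (m, 3) and unit bearing
    vectors (m, 3) by minimising the sum of squared perpendicular distances
    from the point to each viewing ray (a linear least-squares problem)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centres, bearings):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Two cameras observing the point (1, 1, 5) from different positions:
cs = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
ds = np.array([[1.0, 1.0, 5.0], [-1.0, 1.0, 5.0]])
print(triangulate(cs, ds))  # ~ [1. 1. 5.]
```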