6,543 research outputs found

    Estimating Sensor Motion from Wide-Field Optical Flow on a Log-Dipolar Sensor

    Full text link
    Log-polar image architectures, motivated by the structure of the human visual field, have long been investigated in computer vision for use in estimating motion parameters from an optical flow vector field. Practical problems with this approach have been: (i) dependence on assumed alignment of the visual and motion axes; (ii) sensitivity to occlusion from moving and stationary objects in the central visual field, where much of the numerical sensitivity is concentrated; and (iii) inaccuracy of the log-polar architecture (which is an approximation to the central 20°) for wide-field biological vision. In the present paper, we show that an algorithm based on a generalization of the log-polar architecture, termed the log-dipolar sensor, provides a large improvement in performance relative to the usual log-polar sampling. Specifically, our algorithm: (i) is tolerant of large misalignment of the optical and motion axes; (ii) is insensitive to significant occlusion by objects of unknown motion; and (iii) represents a more correct analogy to the wide-field structure of human vision. Using the Helmholtz-Hodge decomposition to estimate the optical flow vector field on a log-dipolar sensor, we demonstrate these advantages using synthetic optical flow maps as well as natural image sequences.
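    A minimal sketch of the Helmholtz-Hodge decomposition named in the abstract, applied to a dense 2D flow field via an FFT-based projection; the grid size and the synthetic test flow are assumptions for illustration, not the authors' implementation on the log-dipolar sampling grid.

```python
import numpy as np

def helmholtz_hodge(u, v):
    """Split a periodic 2D flow (u, v) into curl-free and divergence-free parts."""
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx).reshape(1, nx)
    ky = np.fft.fftfreq(ny).reshape(ny, 1)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid division by zero at the DC term
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    div = U * kx + V * ky                            # projection of the flow onto the wavevector
    u_cf = np.real(np.fft.ifft2(div * kx / k2))      # curl-free (irrotational) component
    v_cf = np.real(np.fft.ifft2(div * ky / k2))
    return (u_cf, v_cf), (u - u_cf, v - v_cf)        # remainder is divergence-free

if __name__ == "__main__":
    n = 128
    xs = np.linspace(0.0, 1.0, n, endpoint=False)
    x, _ = np.meshgrid(xs, xs)
    u = np.sin(2 * np.pi * x)                        # purely curl-free test component
    v = np.sin(2 * np.pi * x)                        # purely divergence-free test component
    (u_cf, v_cf), (u_df, v_df) = helmholtz_hodge(u, v)
    print("curl-free recovery error:", np.abs(u_cf - u).max(), np.abs(v_cf).max())
```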

    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    Get PDF
    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and the proximity of this object. It has been found to respond to looming stimuli very quickly and to trigger avoidance reactions, and it has been successfully applied in visual collision-avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
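    A hedged sketch of an LGMD-style looming detector in the spirit of the model described above: excitation from frame-to-frame change, a delayed and spatially spread inhibition layer, and a sigmoid-thresholded sum. The layer structure, constants, and threshold are illustrative assumptions, not the authors' FPGA design.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_step(frame, prev_frame, prev_inhibition, w_i=0.7, decay=0.4, threshold=0.88):
    """One time step of a simplified LGMD-like looming detector."""
    p = np.abs(frame.astype(float) - prev_frame.astype(float))               # P layer: change detection
    i = decay * prev_inhibition + (1 - decay) * uniform_filter(p, size=3)    # I layer: delayed, blurred inhibition
    s = np.maximum(p - w_i * i, 0.0)                                         # S layer: excitation minus inhibition
    k = s.sum() / s.size                                                     # membrane potential
    spike = 1.0 / (1.0 + np.exp(-k)) > threshold                             # sigmoid activation + threshold
    return spike, i

# Typical use on a stream of grayscale frames:
#   inhibition = np.zeros(frame_shape)
#   for frame in frames:
#       spike, inhibition = lgmd_step(frame, prev, inhibition)
#       prev = frame
```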

    Autonomous Navigation and Mapping using Monocular Low-Resolution Grayscale Vision

    Get PDF
    Vision has been a powerful tool for the navigation of intelligent and man-made systems ever since the cybernetics revolution of the 1970s. There have been two basic approaches to the navigation of computer-controlled systems: the self-contained, bottom-up development of sensorimotor abilities, namely perception and mobility, and the top-down approach, namely artificial intelligence, reasoning, and knowledge-based methods. The three-fold goal of autonomous exploration, mapping, and localization of a mobile robot, however, needs to be developed within a single framework. An algorithm is proposed to answer the challenges of autonomous corridor navigation and mapping by a mobile robot equipped with a single forward-facing camera. Using a combination of corridor ceiling lights, visual homing, and entropy, the robot is able to perform straight-line navigation down the center of an unknown corridor. Turning at the end of a corridor is accomplished using Jeffrey divergence and time-to-collision, while deflection from dead ends and blank walls uses a scalar entropy measure of the entire image. When combined, these metrics allow the robot to navigate in both textured and untextured environments. The robot can autonomously explore an unknown indoor environment, recovering from difficult situations like corners, blank walls, and an initial heading toward a wall. While exploring, the algorithm constructs a Voronoi-based topo-geometric map with nodes representing distinctive places like doors, water fountains, and other corridors. Because the algorithm is based entirely upon low-resolution (32 × 24) grayscale images, processing occurs at over 1000 frames per second.
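    A sketch of the two image statistics the abstract relies on: a scalar entropy measure of the whole image and the Jeffrey divergence between two image histograms (taken here in its common symmetrized-KL form). The 32 x 24 grayscale frames and the 64-bin histograms are assumptions for illustration.

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy (bits) of the grayscale histogram of one frame."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def jeffrey_divergence(img_a, img_b, bins=64, eps=1e-12):
    """Symmetrized Kullback-Leibler (Jeffrey) divergence between two frame histograms."""
    pa, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    pb, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    pa = pa / pa.sum() + eps
    pb = pb / pb.sum() + eps
    return float(((pa - pb) * np.log(pa / pb)).sum())

# Low entropy signals a blank wall ahead; a large divergence between the current
# frame and a stored "end of corridor" view can signal that a turn is due.
```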

    A visual attention mechanism for autonomous robots controlled by sensorimotor contingencies

    Get PDF
    Alexander Maye, Dari Trendafilov, Daniel Polani, Andreas Engel, ‘A visual attention mechanism for autonomous robots controlled by sensorimotor contingencies’, paper presented at the International Conference on Intelligent Robots and Systems (IROS) 2015 Workshop on Sensorimotor Contingencies for Robotics, Hamburg, Germany, 2 October 2015. Robot control architectures that are based on learning the dependencies between a robot's actions and the resulting change in sensory input face the fundamental problem that, for high-dimensional action and/or sensor spaces, the number of these sensorimotor dependencies can become huge. In this article we present a scenario of a robot that learns to avoid collisions with stationary objects from image-based motion flow and a collision detector. Following an information-theoretic approach, we demonstrate that the robot can infer image regions that facilitate the prediction of imminent collisions. This allows restricting the computation to the domain in the input space that is relevant for the given task, which enables learning sensorimotor contingencies in robots with high-dimensional sensor spaces.
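    A hedged sketch of the information-theoretic idea described above: score each image region by the mutual information between its discretized motion-flow magnitude and a binary collision signal, then keep only the highest-scoring regions. The data layout, bin counts, and region count are illustrative assumptions, not the paper's method in detail.

```python
import numpy as np

def mutual_information(x, y, x_bins=8):
    """Mutual information (bits) between a discretized continuous signal x and a binary signal y."""
    x_d = np.digitize(x, np.histogram_bin_edges(x, bins=x_bins))
    joint, _, _ = np.histogram2d(x_d, y, bins=(x_bins + 2, 2))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def relevant_regions(flow_per_region, collisions, top_k=10):
    """flow_per_region: (T, R) flow magnitudes per region; collisions: (T,) 0/1 collision flags."""
    scores = np.array([mutual_information(flow_per_region[:, r], collisions)
                       for r in range(flow_per_region.shape[1])])
    return np.argsort(scores)[::-1][:top_k], scores
```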

    Insect inspired behaviours for the autonomous control of mobile robots

    Full text link
    Animals navigate through various uncontrolled environments with seemingly little effort. Flying insects, especially, are quite adept at manoeuvring in complex, unpredictable and possibly hostile environments. Through both simulation and real-world experiments, we demonstrate the feasibility of equipping a mobile robot with the ability to navigate a corridor environment, in real time, using principles of insect-inspired visual guidance. In particular we have used the bees' navigational strategy of measuring object range in terms of image velocity. We have also shown the viability and usefulness of various other insect behaviours: (i) keeping walls equidistant, (ii) slowing down when approaching an object, (iii) regulating speed according to tunnel width, and (iv) using visual motion as a measure of distance travelled.
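    An illustrative sketch of the bee-inspired corridor-centring rule implied by behaviours (i) and (iii): steer so that the left and right image velocities balance, and slow down as their sum grows (a narrowing tunnel produces more total flow). Function names, gains, and the sign convention are assumptions.

```python
def centring_controller(flow_left, flow_right, k_turn=0.8, v_max=0.5, k_speed=0.1):
    """flow_left/right: mean optic-flow magnitude (rad/s) on each lateral half of the image.
    Returns (forward_speed, yaw_rate); positive yaw_rate turns toward the slower-flow
    (i.e. farther) side under the assumed sign convention."""
    total = max(flow_left + flow_right, 1e-6)
    yaw_rate = k_turn * (flow_right - flow_left) / total     # steer away from the nearer wall
    forward_speed = v_max / (1.0 + k_speed * total)           # more total flow -> narrower tunnel -> slow down
    return forward_speed, yaw_rate
```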

    Image-Based Flexible Endoscope Steering

    Get PDF
    Manually steering the tip of a flexible endoscope to navigate through an endoluminal path relies on the physician’s dexterity and experience. In this paper we present the realization of a robotic flexible endoscope steering system that uses the endoscopic images to control the tip orientation towards the direction of the lumen. Two image-based control algorithms are investigated: one based on optical flow and the other based on image intensity. Both are evaluated using simulations in which the endoscope was steered through the lumen. The RMS distance to the lumen center was less than 25% of the lumen width. An experimental setup was built using a standard flexible endoscope, and the image-based control algorithms were used to actuate the wheels of the endoscope for tip steering. Experiments were conducted in an anatomical model to simulate gastroscopy. The image intensity-based algorithm was capable of steering the endoscope tip through an endoluminal path from the mouth to the duodenum accurately. Compared to manual control, the robotically steered endoscope performed 68% better in terms of keeping the lumen centered in the image.
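    A sketch of the image-intensity idea only, not the authors' controller: the lumen is typically the darkest region of an endoscopic image, so the offset of the dark-region centroid from the image centre can serve as the error signal for tip steering. The threshold fraction and normalisation are assumptions.

```python
import numpy as np

def lumen_steering_error(gray, dark_fraction=0.1):
    """Return the (dx, dy) offset of the dark-region centroid from the image centre,
    normalised to [-1, 1]; this would feed the tip-actuation loop as its error signal."""
    h, w = gray.shape
    thresh = np.quantile(gray, dark_fraction)        # darkest ~10% of pixels approximate the lumen
    ys, xs = np.nonzero(gray <= thresh)
    if len(xs) == 0:
        return 0.0, 0.0                              # no clear lumen visible: hold orientation
    dx = (xs.mean() - w / 2) / (w / 2)
    dy = (ys.mean() - h / 2) / (h / 2)
    return float(dx), float(dy)
```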

    Airborne laser sensors and integrated systems

    Get PDF
    The underlying principles and technologies enabling the design and operation of airborne laser sensors are introduced, and a detailed review of state-of-the-art avionic systems for civil and military applications is presented. Airborne lasers including Light Detection and Ranging (LIDAR), Laser Range Finders (LRF), and Laser Weapon Systems (LWS) are extensively used today, and new promising technologies are being explored. Most laser systems are active devices that operate in a manner very similar to microwave radars but at much higher frequencies (e.g., LIDAR and LRF). Other devices (e.g., laser target designators and beam-riders) are used to precisely direct Laser Guided Weapons (LGW) against ground targets. The integration of both functions is often encountered in modern military avionics navigation-attack systems. The beneficial effects of airborne lasers, including the use of smaller components and remarkable angular resolution, have resulted in a host of manned and unmanned aircraft applications. On the other hand, laser sensor performance is much more sensitive to the vagaries of the atmosphere, and such sensors are thus generally restricted to shorter ranges than microwave systems. Hence it is of paramount importance to analyse the performance of laser sensors and systems in various weather and environmental conditions. Additionally, it is important to define airborne laser safety criteria, since several systems currently in service operate in the near infrared with considerable risk for the naked human eye. Therefore, appropriate methods for predicting and evaluating the performance of infrared laser sensors/systems are presented, taking into account laser safety issues. For aircraft experimental activities with laser systems, it is essential to define test requirements taking into account the specific conditions for operational employment of the systems in the intended scenarios, and to verify the performance in realistic environments at the test ranges. To support the development of such requirements, useful guidelines are provided for the test and evaluation of airborne laser systems, including laboratory, ground, and flight test activities.
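    A worked example of the basic ranging principle shared by the LIDAR and LRF devices mentioned above: range follows from the round-trip time of a laser pulse, R = c * Δt / 2. The pulse timing value is purely illustrative.

```python
C = 299_792_458.0          # speed of light in vacuum, m/s

def range_from_time_of_flight(round_trip_time_s):
    """Range to a target from the measured round-trip time of a laser pulse."""
    return C * round_trip_time_s / 2.0

# A return detected 6.67 microseconds after emission corresponds to roughly 1 km:
print(range_from_time_of_flight(6.67e-6))   # ~999.8 m
```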

    Taking Inspiration from Flying Insects to Navigate inside Buildings

    Get PDF
    These days, flying insects are seen as genuinely agile micro air vehicles fitted with smart sensors and parsimonious in their use of brain resources. They are able to visually navigate in unpredictable and GPS-denied environments. Understanding how such tiny animals work would help engineers to address various issues relating to drone miniaturization and navigation inside buildings. To turn a drone of ~1 kg into a robot, miniaturized conventional avionics can be employed; however, this results in a loss of flight autonomy. On the other hand, turning a drone with a mass between ~1 g (or less) and ~500 g into a robot requires an innovative approach taking inspiration from flying insects, both with regard to their flapping-wing propulsion system and their sensory system, which is based mainly on motion vision, in order to avoid obstacles in three dimensions or to navigate on the basis of visual cues. This chapter provides a snapshot of the current state of the art in the field of bioinspired optic flow sensors and optic flow-based direct feedback loops applied to micro air vehicles flying inside buildings.
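    A hedged sketch of the kind of optic flow-based direct feedback loop the chapter surveys: the ventral optic flow of a forward-flying vehicle is roughly its ground speed divided by its height, so holding that flow at a setpoint by commanding climb or descent regulates flight without measuring speed or height separately. The gain and setpoint are illustrative assumptions.

```python
def optic_flow_regulator(measured_flow, flow_setpoint=2.0, k_p=0.5):
    """Proportional feedback on ventral optic flow (rad/s); returns a climb-rate command.
    Flow above the setpoint (flying too low for the current speed) commands a climb,
    flow below it commands a descent."""
    return k_p * (measured_flow - flow_setpoint)
```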