
    A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes

    Bertrand O, Lindemann JP, Egelhaaf M. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes. PLoS Computational Biology. 2015;11(11):e1004339.

    Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and, thus, do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors in its input. Even then, the algorithm led successfully to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. The simulated agent then showed goal-directed behavior reminiscent of components of the navigation behavior of insects.
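    A minimal sketch of the two steps described above, assuming the spherical eye is discretized into an (N, 3) array of unit viewing directions and that the flow input is purely translational; the function names and the small regularization constants are illustrative, not the paper's implementation.

```python
import numpy as np

def nearness_from_flow(flow_mag, view_dirs, v_trans):
    """Relative nearness from purely translational optic flow.

    On a spherical eye translating with velocity v_trans, the flow
    magnitude along viewing direction d is |v| * sin(theta) / distance,
    with theta the angle between d and the heading. Inverting this gives
    a relative nearness (inverse distance) estimate per viewing direction.
    """
    v = np.linalg.norm(v_trans)
    cos_theta = view_dirs @ (v_trans / v)
    sin_theta = np.sqrt(np.clip(1.0 - cos_theta ** 2, 1e-6, 1.0))
    return flow_mag / (v * sin_theta)

def avoidance_direction(nearness, view_dirs):
    """Sum the nearness-weighted viewing directions and steer away.

    The weighted sum points toward the densest nearby clutter; its
    opposite is a safer heading, and its magnitude can serve as a
    measure of how urgently avoidance is needed.
    """
    weighted = (nearness[:, None] * view_dirs).sum(axis=0)
    urgency = np.linalg.norm(weighted)
    return -weighted / (urgency + 1e-12), urgency
```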

    Bioinspired event-driven collision avoidance algorithm based on optic flow

    Milde MB, Bertrand O, Benosman R, Egelhaaf M, Chicca E. Bioinspired event-driven collision avoidance algorithm based on optic flow. In: 2015 International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP). IEEE; 2015.

    Any mobile agent, whether biological or robotic, needs to avoid collisions with obstacles. Insects, such as bees and flies, use optic flow to estimate the relative nearness to obstacles. Optic flow induced by ego-motion is composed of a translational and a rotational component. The segregation of both components is computationally and thus energetically expensive. Flies and bees actively separate the rotational and translational optic flow components via behaviour, i.e. by employing a saccadic strategy of flight and gaze control. Although robotic systems are able to mimic this gaze strategy, the calculation of optic-flow fields from standard camera images remains time and energy consuming. To overcome this problem, we use a dynamic vision sensor (DVS), which provides event-based information about changes in contrast over time at each pixel location. To extract optic flow from this information, a plane-fitting algorithm estimating the relative velocity in a small spatio-temporal cuboid is used. The depth structure is derived from the translational optic flow by using local properties of the retina. A collision avoidance direction is then computed from the event-based depth structure of the environment. The system has been successfully tested on a robotic platform in open loop.
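    The plane-fitting step lends itself to a compact sketch: a least-squares plane fitted to the event cloud in a small spatio-temporal cuboid yields a spatial gradient whose inverse magnitude is the local flow speed. The (x, y, t) event layout and the degeneracy threshold are assumptions for illustration.

```python
import numpy as np

def fit_local_flow(events):
    """Fit the plane t = a*x + b*y + c to an (N, 3) array of (x, y, t) events.

    The fitted gradient (a, b) has units of seconds per pixel, so the
    local flow velocity points along the gradient with speed 1/|(a, b)|,
    i.e. v = (a, b) / (a^2 + b^2) in pixels per second.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, t, rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:          # near-flat plane: no measurable motion
        return None
    return np.array([a, b]) / g2
```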

    A Synthetic-Vision Based Steering Approach for Crowd Simulation

    In the everyday exercise of controlling their locomotion, humans rely on the optic flow of the perceived environment to achieve collision-free navigation. In crowds, despite the complexity of an environment made of numerous obstacles, humans demonstrate remarkable capacities in avoiding collisions. Cognitive science work on human locomotion states that relatively succinct information is extracted from the optic flow to achieve safe locomotion. In this paper, we explore a novel vision-based approach to collision avoidance between walkers that fits the requirements of interactive crowd simulation. By simulating humans based on cognitive science results, we detect future collisions as well as their level of danger from visual stimuli. The motor response is twofold: a reorientation strategy prevents future collisions, whereas a deceleration strategy prevents imminent collisions. Several examples of our simulation results show that the emergence of self-organized patterns of walkers is reinforced when using our approach. The emergent phenomena are visually appealing. More importantly, they improve the overall efficiency of the walkers' traffic and avoid improbable locking situations.
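    A sketch of the twofold motor response, assuming each perceived walker is summarized by a (bearing, bearing-rate, time-to-interaction) triple; the thresholds and gains below are invented for illustration, whereas the paper derives its control laws from the visual stimuli themselves.

```python
import numpy as np

TTI_IMMINENT = 1.0   # s: below this, a predicted collision counts as imminent
BEARING_EPS = 0.1    # rad/s: a near-constant bearing signals a collision course

def steer(speed, percepts):
    """Reorient to prevent future collisions; decelerate to prevent imminent ones."""
    turn = 0.0
    for bearing, bearing_rate, tti in percepts:
        if tti <= 0 or abs(bearing_rate) >= BEARING_EPS:
            continue                        # no collision predicted with this walker
        if tti < TTI_IMMINENT:
            speed *= 0.5                    # imminent: brake
        else:
            turn -= np.sign(bearing) / tti  # future: turn away, sooner means stronger
    return turn, speed
```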

    Collision-avoidance and landing responses are mediated by separate pathways in the fruit fly, Drosophila melanogaster

    Flies rely heavily on visual feedback for several aspects of flight control. As a fly approaches an object, the image projected across its retina expands, providing the fly with visual feedback that can be used either to trigger a collision-avoidance maneuver or a landing response. To determine how a fly decides whether to land on or avoid a looming object, we measured the behaviors generated in response to an expanding image during tethered flight in a visual closed-loop flight arena. During these experiments, each fly varied its wing-stroke kinematics to actively control the azimuth position of a 15°×15° square within its visual field. Periodically, the square expanded symmetrically in both the horizontal and vertical directions. We measured changes in the fly's wing-stroke amplitude and frequency in response to the expanding square while optically tracking the position of its legs to monitor stereotyped landing responses. Although this stimulus could elicit both landing responses and collision-avoidance reactions, separate pathways appear to mediate the two behaviors. For example, if the square is in the lateral portion of the fly's field of view at the onset of expansion, the fly increases stroke amplitude in one wing while decreasing amplitude in the other, indicative of a collision-avoidance maneuver. In contrast, frontal expansion elicits an increase in wing-beat frequency and leg extension, indicative of a landing response. To further characterize the sensitivity of these responses to expansion rate, we tested a range of expansion velocities from 100 to 10,000° s^(-1). Differences in the latency of both the collision-avoidance reactions and the landing responses with expansion rate supported the hypothesis that the two behaviors are mediated by separate pathways. To examine the effects of visual feedback on the magnitude and time course of the two behaviors, we presented the stimulus under open-loop conditions, such that the fly's response did not alter the position of the expanding square. From our results we suggest a model that takes into account the spatial sensitivities and temporal latencies of the collision-avoidance and landing responses and is sufficient to schematically represent how the fly integrates motion information in deciding whether to turn or land when confronted with an expanding object.
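    A toy decision rule abstracted from these findings: frontal expansion triggers landing, lateral expansion triggers an asymmetric turn. The 45° frontal/lateral boundary and the response magnitudes are invented for illustration and are not the model proposed in the paper.

```python
def respond_to_expansion(azimuth_deg, expansion_rate):
    """Turn-or-land decision based on where the expanding square sits.

    Frontal expansion: symmetric increase in wing-beat frequency plus leg
    extension (landing). Lateral expansion: opposite changes in left and
    right stroke amplitude (a turn away from the looming object).
    """
    if abs(azimuth_deg) < 45.0:                      # frontal field of view
        return {"behavior": "land",
                "wingbeat_freq_delta": expansion_rate * 1e-3,
                "legs_extended": True}
    side = 1 if azimuth_deg > 0 else -1              # which side the object is on
    return {"behavior": "turn",
            "stroke_amp_delta": (0.1 * side, -0.1 * side),  # (left, right) wings
            "legs_extended": False}
```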

    Texture dependence of motion sensing and free flight behavior in blowflies

    Lindemann JP, Egelhaaf M. Texture dependence of motion sensing and free flight behavior in blowflies. Frontiers in Behavioral Neuroscience. 2013;6:92.

    Many flying insects exhibit an active flight and gaze strategy: purely translational flight segments alternate with quick turns called saccades. To generate such a saccadic flight pattern, the animals decide the timing, direction, and amplitude of the next saccade during the preceding translatory intersaccadic interval. The information underlying these decisions is assumed to be extracted from the retinal image displacements (optic flow), which scale with the distance to objects during the intersaccadic flight phases. In an earlier study we proposed a saccade-generation mechanism based on the responses of large-field motion-sensitive neurons. In closed-loop simulations we achieved collision avoidance behavior in a limited set of environments but observed collisions in others. Here we show by open-loop simulations that the cause of this observation is the known texture dependence of elementary motion detection in flies, which is also reflected in the responses of the large-field neurons used in our model. We verified by electrophysiological experiments that this result is not an artifact of the sensory model. Even subtle changes in the texture may lead to qualitative differences in the responses of both our model cells and their biological counterparts in the fly's brain. Nonetheless, the free flight behavior of blowflies is only moderately affected by such texture changes. This divergent texture dependence of motion-sensitive neurons and behavioral performance suggests either mechanisms that compensate for the texture dependence of the visual motion pathway at the level of the circuits generating the saccadic turn decisions, or the involvement of a hypothetical parallel pathway in saccadic control that provides the information for collision avoidance independently of the textural properties of the environment.
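    For reference, a correlation-type (Hassenstein-Reichardt) elementary motion detector can be written in a few lines; the first-order low-pass filter serving as the delay and its time constant are standard modelling choices, not the specific parameters used in this study. Because the output is a product of filtered luminance signals, it inherits the texture and contrast dependence discussed above.

```python
import numpy as np

def reichardt_emd(left, right, dt, tau):
    """Opponent correlation detector over two neighbouring photoreceptor signals.

    Each half-detector multiplies the delayed (low-pass filtered) signal of
    one input with the undelayed signal of its neighbour; subtracting the
    mirror-symmetric half yields a direction-selective response.
    """
    alpha = dt / (tau + dt)                  # first-order low-pass coefficient
    lp_left = np.zeros_like(left, dtype=float)
    lp_right = np.zeros_like(right, dtype=float)
    for i in range(1, len(left)):            # causal filtering over time
        lp_left[i] = lp_left[i - 1] + alpha * (left[i] - lp_left[i - 1])
        lp_right[i] = lp_right[i - 1] + alpha * (right[i] - lp_right[i - 1])
    return lp_left * right - lp_right * left
```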

    A contribution to vision-based autonomous helicopter flight in urban environments

    A navigation strategy that exploits optic flow and inertial information to continuously avoid collisions with both lateral and frontal obstacles has been used to control a simulated helicopter flying autonomously in a textured urban environment. Experimental results demonstrate that the corresponding controller generates cautious behavior, whereby the helicopter tends to stay in the middle of narrow corridors, while its forward velocity is automatically reduced when the obstacle density increases. When confronted with a frontal obstacle, the controller is also able to generate a tight U-turn that ensures the UAV's survival. The paper provides comparisons with related work and discusses the applicability of the approach to real platforms.
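    The described behavior can be sketched as a simple flow-balancing controller: equalize lateral flow to stay centred, reduce forward speed as total lateral flow grows, and trigger a U-turn on strong frontal flow. The scalar flow summaries, gains, thresholds, and sign convention are assumptions, not the paper's controller.

```python
def of_navigation_step(flow_left, flow_right, flow_frontal,
                       k_yaw=1.0, v_max=2.0, frontal_limit=5.0):
    """One control step from lateral and frontal optic-flow magnitudes."""
    if flow_frontal > frontal_limit:
        return {"maneuver": "u_turn"}                     # frontal obstacle ahead
    yaw_rate = k_yaw * (flow_left - flow_right)           # turn away from the nearer wall
    v_forward = v_max / (1.0 + flow_left + flow_right)    # slow down as clutter increases
    return {"maneuver": "cruise", "yaw_rate": yaw_rate, "v_forward": v_forward}
```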

    Spatial organization of visuomotor reflexes in Drosophila

    In most animals, the visual system plays a central role in locomotor guidance. Here, we examined the functional organization of visuomotor reflexes in the fruit fly, Drosophila, using an electronic flight simulator. Flies exhibit powerful avoidance responses to visual expansion centered laterally. The amplitude of these expansion responses is three times larger than that of responses generated by image rotation. Avoidance of a laterally positioned focus of expansion emerges from an inversion of the optomotor response when motion is restricted to the rear visual hemisphere. Furthermore, motion restricted to the rear quarter-fields elicits turning responses that are independent of the direction of image motion about the animal's yaw axis. This spatial heterogeneity of visuomotor responses explains a seemingly peculiar behavior in which flies robustly fixate the contracting pole of a translating flow field.

    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and its proximity. It has been found to respond to looming stimuli very quickly and to trigger avoidance reactions, and it has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
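    A simplified, frame-based sketch of an LGMD-style looming detector with P (change), I (delayed, laterally spread inhibition), and S (excitation minus inhibition) layers feeding a summing membrane. The 3x3 mean filter standing in for the inhibitory fan-out, the weights, and the threshold are illustrative, not the paper's FPGA parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_step(frame, prev_frame, prev_excitation, w_inhib=0.7, threshold=0.05):
    """One time step of a simplified LGMD-like looming detector.

    P layer: per-pixel absolute luminance change between frames. I layer:
    the previous excitation, delayed by one step and spread laterally.
    S layer: half-wave rectified excitation minus weighted inhibition.
    The normalized sum of S drives the membrane; a spike marks looming.
    """
    p = np.abs(frame.astype(float) - prev_frame.astype(float))   # P layer
    i = uniform_filter(prev_excitation, size=3)                  # delayed, spread I layer
    s = np.maximum(p - w_inhib * i, 0.0)                         # S layer
    membrane = s.sum() / s.size                                  # normalized summation
    return membrane > threshold, p                               # spike flag, next excitation
```

    In use, the returned `p` is fed back as `prev_excitation` on the next frame, which implements the one-step delay of the inhibitory pathway.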