
    Go with the flow : visually mediated flight control in bumblebees

    Despite their small brains and tiny eyes, flying insects are capable of detecting and avoiding collisions with moving obstacles, and with remarkable precision they navigate through environments of different complexity. For this thesis, I have investigated how bumblebees use the pattern of apparent image motion that is generated in their eyes as they move through the world (known as optic flow), in order to control flight. I analysed the speed and position of bumblebee (Bombus terrestris) flight trajectories as they negotiated arenas of different dimensions and visual complexity. I also investigated the impact of optic flow on bumblebee learning flights, a special kind of flight designed to memorise the location of the nest or a newly discovered food source. The general aim of my research has been to understand how flying insects use vision to actively control their flight. The viewing angle at which optic flow is measured has important consequences for flight in densely cluttered environments, where timely control of position and speed are necessary for effective collision avoidance. I therefore investigated when, and how, bumblebees respond to sudden changes in the magnitude of optic flow. My results reveal that the visual region over which bumblebees measure optic flow is determined by the location in the frontal visual field where they experience the maximum magnitude of translational optic flow. This strategy ensures that bumblebees regulate their position and speed according to the nearest obstacles, allowing them to maximise flight efficiency and to minimise the risk of collision. My results further demonstrate that, when flying in narrow spaces, bumblebees use optic flow information from nearby surfaces in the lateral visual field to control flight, while in more open spaces they rely primarily on optic flow cues from the ventral field of view. 
This result strengthens the finding that bumblebees measure optic flow for flight control flexibly across their visual field, depending on where the maximum magnitude of translational optic flow occurs. It also adds another dimension by suggesting that bumblebees respond to optic flow cues in the ventral visual field if the magnitude is higher there than in the lateral visual field. Thus, the ability to flexibly use the surrounding optic flow field is of great importance for the control of cruising flight. For this thesis I also investigated the impact of ventral and panoramic optic flow on the control of learning flights in bumblebees. The results show that the presence of ventral optic flow is important for enabling bumblebees to perform well-controlled learning flights. Whether panoramic optic flow cues are present or not does not strongly affect the overall structure of the learning flight, although these cues might still be involved in fine-scale flight control. Finally, I found that, when the availability of ventral optic flow is limited to certain heights, bumblebees appear to adjust their flight parameters to maintain the perception of ventral optic flow cues. In summary, the results compiled in this thesis contribute to a better understanding of how insects use visual information to control their flight. Among other findings, my results emphasise the importance of being able to flexibly measure optic flow in different parts of the visual field, something that enhances bees’ ability to avoid collisions.
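The "maximum magnitude of translational optic flow" argument rests on simple viewing geometry. As an illustrative sketch (my own, not taken from the thesis): for flight at speed v parallel to a surface at perpendicular distance d, the apparent angular velocity of a surface point seen at angle theta from the flight direction is v·sin²(theta)/d, which peaks laterally and grows as obstacles get nearer.

```python
import numpy as np

def translational_flow_magnitude(v, d, theta):
    """Apparent angular velocity (rad/s) of a point on a flat surface seen
    at angle theta from the flight direction, for an observer translating
    at speed v parallel to the surface at perpendicular distance d."""
    return v * np.sin(theta) ** 2 / d

# The flow peaks at 90 degrees (lateral viewing) and scales with 1/d,
# so the nearest surface dominates the maximum translational flow.
thetas = np.linspace(0.0, np.pi, 181)
flow = translational_flow_magnitude(v=1.0, d=0.5, theta=thetas)
peak_angle = thetas[np.argmax(flow)]
```

Halving d doubles the peak flow, which is why regulating speed against the largest measured flow keeps the nearest obstacle at a safe margin.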

    The Intrinsic Structure of the Optic Flow Field

    In this paper, a generating equation for optic flow is proposed that generalises Horn and Schunck's Optic Flow Constraint Equation (OFCE). Whereas the OFCE has an interpretation as a pointwise conservation law, requiring grey-values associated with fixed-scale volume elements to be constant when co-moving with the flow, the new one can be regarded as a similar conservation requirement in which the flow elements have variable scale consistent with the field's divergence. Thus the equation gives rise to a definition of optic flow which is compatible with the scale-space paradigm. We emphasise the gauge invariant nature of optic flow due to the inherent ambiguity of its components, i.e. the well-known aperture problem. Since gauge invariance is intrinsic to any definition of optic flow based solely on the data, it is argued that the gauge should be fixed on the basis of extrinsic knowledge of the image formation process and of the physics of the scene. The optic flow field is replaced by an approximating field so as to allow for an order-by-order operational definition preserving gauge invariance, i.e. the approximation does not add spurious degrees of freedom to the field. One thus obtains a defining system of linear equations in the optic flow components up to arbitrary order, which remains decoupled from any physical considerations of gauge fixing. Such considerations are needed to derive a complementary system of gauge conditions that allows for a unique, physically sensible solution of the optic flow equations. The theory is illustrated by means of several examples.
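The two conservation laws contrasted above can be written out side by side; this is my reading of the abstract's description, so the exact notation in the paper may differ:

```latex
% Horn--Schunck's OFCE: the grey-value L(x,y,t) is conserved along the
% flow field \mathbf{v} (fixed-scale volume elements):
\frac{\partial L}{\partial t} + \nabla L \cdot \mathbf{v} = 0
% A divergence-aware conservation law (flow elements of variable scale)
% is the continuity equation, which adds one term:
\frac{\partial L}{\partial t} + \nabla \cdot (L\,\mathbf{v})
  = \frac{\partial L}{\partial t} + \nabla L \cdot \mathbf{v}
  + L\,(\nabla \cdot \mathbf{v}) = 0
```

When the flow is divergence-free, the extra term vanishes and the two equations coincide, which is consistent with the generalisation claim.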

    Species-Specific Flight Styles of Flies are Reflected in the Response Dynamics of a Homolog Motion-Sensitive Neuron

    Hoverflies and blowflies have distinctly different flight styles. Yet both species have been shown to structure their flight behavior in a way that facilitates extraction of 3D information from the image flow on the retina (optic flow). Neuronal candidates for analyzing the optic flow are the tangential cells in the third optical ganglion – the lobula complex. These neurons are directionally selective and integrate the optic flow over large parts of the visual field. Homologous tangential cells in hoverflies and blowflies have a similar morphology. Because blowflies and hoverflies have a similar neuronal layout but distinctly different flight behaviors, they are an ideal substrate for pinpointing potential neuronal adaptations to the different flight styles. In this article we describe the relationship between locomotion behavior and motion vision on three different levels: (1) We compare the different flight styles based on the categorization of flight behavior into prototypical movements. (2) We measure the species-specific dynamics of the optic flow under naturalistic flight conditions. We found the translational optic flow of the two species to be very different. (3) We describe possible adaptations of a homologous motion-sensitive neuron. We stimulate this cell in blowflies (Calliphora) and hoverflies (Eristalis) with naturalistic optic flow generated by both species during free flight. The characterized hoverfly tangential cell responds faster to transient changes in the optic flow than its blowfly homolog. It is discussed whether and how the different dynamic response properties aid optic flow analysis.

    Lucas/Kanade meets Horn/Schunck : combining local and global optic flow methods

    Differential methods are among the most widely used techniques for optic flow computation in image sequences. They can be classified into local methods, such as the Lucas-Kanade technique or Bigün's structure tensor method, and global methods, such as the Horn/Schunck approach and its extensions. Often local methods are more robust under noise, while global techniques yield dense flow fields. The goal of this paper is to contribute to a better understanding and the design of differential methods in four ways: (i) We juxtapose the role of the smoothing/regularisation processes that are required in local and global differential methods for optic flow computation. (ii) This discussion motivates us to describe and evaluate a novel method that combines important advantages of local and global approaches: it yields dense flow fields that are robust against noise. (iii) Spatiotemporal and nonlinear extensions to this hybrid method are presented. (iv) We propose a simple confidence measure for optic flow methods that minimise energy functionals. It allows one to sparsify a dense flow field gradually, depending on the reliability required for the resulting flow. Comparisons with experiments from the literature demonstrate the favourable performance of the proposed methods and the confidence measure.
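As context for the local/global distinction, a minimal single-window Lucas-Kanade estimator can be written in a few lines of numpy. This is an illustrative sketch of the classical local method (one displacement for a whole window), not the combined local-global approach proposed in the paper:

```python
import numpy as np

def lucas_kanade_window(I1, I2):
    """Estimate a single (u, v) displacement for a whole image window by
    least squares on the optic flow constraint Ix*u + Iy*v + It = 0."""
    Iavg = 0.5 * (I1 + I2)          # average frames to reduce linearisation bias
    Iy, Ix = np.gradient(Iavg)      # np.gradient returns d/drow, d/dcol
    It = I2 - I1
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)    # singular A = the aperture problem

# Synthetic check: a Gaussian blob shifted one pixel to the right.
y, x = np.mgrid[0:64, 0:64]
I1 = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 5.0 ** 2))
I2 = np.roll(I1, 1, axis=1)
u, v = lucas_kanade_window(I1, I2)  # u close to 1.0, v close to 0.0
```

When the structure tensor A is (near-)singular — e.g. a window containing only an edge — the system has no unique solution, which is exactly the failure mode the global regularisation term of Horn/Schunck-type methods repairs.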

    Misperception of rigidity from actively generated optic flow

    It is conventionally assumed that the goal of the visual system is to derive a perceptual representation that is a veridical reconstruction of the external world: a reconstruction that leads to optimal accuracy and precision of metric estimates, given sensory information. For example, 3-D structure is thought to be veridically recovered from optic flow signals in combination with egocentric motion information and assumptions of the stationarity and rigidity of the external world. This theory predicts veridical perceptual judgments under conditions that mimic natural viewing, while ascribing nonoptimality under laboratory conditions to unreliable or insufficient sensory information, for example, the lack of natural and measurable observer motion. In two experiments, we contrasted this optimal theory with a heuristic theory that predicts the derivation of perceived 3-D structure based on the velocity gradients of the retinal flow field without the use of egomotion signals or a rigidity prior. Observers viewed optic flow patterns generated by their own motions relative to two surfaces and later viewed the same patterns while stationary. When the surfaces were part of a rigid structure, static observers systematically perceived a nonrigid structure, consistent with the predictions of both an optimal and a heuristic model. Contrary to the optimal model, moving observers also perceived nonrigid structures in situations where retinal and extraretinal signals, combined with a rigidity assumption, should have yielded a veridical rigid estimate. The perceptual biases were, however, consistent with a heuristic model which is based only on an analysis of the optic flow.
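The "velocity gradients of the retinal flow field" that the heuristic theory works from can be summarised locally by the standard divergence/curl/deformation decomposition of the first-order flow gradient. The sketch below illustrates that decomposition; it is my choice of illustration, not necessarily the exact model used in the paper:

```python
import numpy as np

def flow_gradient_components(u, v, spacing=1.0):
    """Divergence, curl, and the two deformation (shear) components of a
    2-D flow field (u, v): the first-order velocity-gradient quantities a
    gradient-based heuristic could work from."""
    du_dy, du_dx = np.gradient(u, spacing)   # gradients along rows, cols
    dv_dy, dv_dx = np.gradient(v, spacing)
    div = du_dx + dv_dy
    curl = dv_dx - du_dy
    def1 = du_dx - dv_dy       # expansion along x, contraction along y
    def2 = du_dy + dv_dx       # shear along the diagonals
    return div, curl, def1, def2

# Pure expansion field u = 0.1*x, v = 0.1*y: div = 0.2, all else zero.
y, x = np.mgrid[0:32, 0:32].astype(float)
div, curl, def1, def2 = flow_gradient_components(0.1 * x, 0.1 * y)
```

A purely retinal model like this needs no egomotion signal, which is why it predicts the same (nonrigid) percept for moving and stationary observers viewing the same flow.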

    A Continuous-Time Nonlinear Observer for Estimating Structure from Motion from Omnidirectional Optic Flow

    Various insect species utilize certain types of self-motion to perceive structure in their local environment, a process known as active vision. This dissertation presents the development of a continuous-time formulated observer for estimating structure from motion that emulates the biological phenomenon of active vision. In an attempt to emulate the wide field of view of compound eyes and the neurophysiology of insects, the observer utilizes an omnidirectional optic flow field. Exponential stability of the observer is assured provided the persistency of excitation condition is met. Persistency of excitation is assured by altering the direction of motion sufficiently quickly. An equal convergence rate over the entire viewable area can be achieved by executing certain prototypical maneuvers. Practical implementation of the observer is accomplished both in simulation and via an actual flying quadrotor testbed vehicle. Furthermore, this dissertation presents the vehicular implementation of a complementary navigation methodology known as wide-field integration of the optic flow field. The implementation of the developed insect-inspired navigation methodologies on the physical testbed vehicles used in this research required the development of many subsystems that comprise a control and navigation suite, including avionics development and state sensing, model development via system identification, feedback controller design, and state estimation strategies. These requisite subsystems and their development are discussed.
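A hypothetical one-dimensional toy observer conveys the role of persistency of excitation (this is my own sketch, not the dissertation's omnidirectional formulation): for lateral translation at known speed v(t), a static point's optic flow is w = v·mu with mu = 1/Z the unknown inverse depth, and a gradient observer drives the error e = mu - mu_hat as e' = -k·v²·e, which decays only while v(t) keeps exciting the system.

```python
import numpy as np

# Toy gradient observer for inverse depth mu = 1/Z from optic flow w = v * mu.
# Update law: mu_hat' = k * v * (w - v * mu_hat), so e' = -k * v**2 * e.
dt, k, mu = 0.001, 5.0, 0.25       # true inverse depth: Z = 4 m
mu_hat = 0.0                       # initial estimate
for step in range(20000):          # 20 s of simulated self-motion
    t = step * dt
    v = np.sin(t)                  # oscillating motion: persistently exciting
    w = v * mu                     # noiseless flow measurement
    mu_hat += dt * k * v * (w - v * mu_hat)
# mu_hat has converged toward mu = 0.25
```

With constant v = 0 (no self-motion) the update is identically zero and mu_hat never converges, which is the intuition behind requiring sufficiently rich prototypical maneuvers.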

    The influence of optic flow on postural control

    The study of optic flow effects on postural control may explain how self-motion perception contributes to postural stability in young males and females, and how this function changes in the older population at risk of falls. Study I aimed to examine the effect of optic flow on postural control in young people (n=24), using stabilometry and surface electromyography. Subjects viewed expansion and contraction optic flow stimuli presented full field, in the foveal, or in the peripheral visual field. Results showed that optic flow stimulation causes an asymmetry in postural balance and a different lateralization of postural control in men and women. Gender differences evoked by optic flow were found both in muscle activity and in the prevalent direction of oscillation. COP spatial variability was reduced while viewing peripheral stimuli, which evoked a clustered prevalent direction of oscillation, whereas foveal and random stimuli induced non-clustered directions. Study II aimed to investigate the age-related mechanisms of postural stability while viewing optic flow stimuli in young (n=17) and old (n=19) people, using stabilometry and kinematics. Results showed that older people exerted greater effort than the young to maintain posture while viewing optic flow stimuli. The elderly appear to use a head-stabilization-on-trunk strategy. Visual stimuli evoke an excitatory input to postural muscles, but the stimulus structure produces different postural effects. Peripheral optic flow stabilizes postural sway, while random and foveal stimuli provoke larger sway variability, similar to that evoked at baseline. Postural control uses different mechanisms within each leg to produce the appropriate postural response to the extrapersonal environment.
Ageing reduces the ease of stabilizing posture during optic flow viewing, suggesting a decline in neuronal processing associated with difficulty integrating multisensory information for self-motion perception and an increasing risk of falls.

    Real-time Visual Flow Algorithms for Robotic Applications

    Vision offers important sensory cues to modern robotic platforms. Applications such as control of aerial vehicles, visual servoing, simultaneous localization and mapping, navigation and, more recently, learning are examples where visual information is fundamental to accomplishing tasks. However, the use of computer vision algorithms carries the computational cost of extracting useful information from the stream of raw pixel data. The most sophisticated algorithms use complex mathematical formulations, typically leading to computationally expensive and, consequently, slow implementations. Even with modern computing resources, high-speed and high-resolution video feeds can only be used for basic image processing operations. For a vision algorithm to be integrated on a robotic system, the output of the algorithm should be provided in real time, that is, at least at the same frequency as the control logic of the robot. With robotic vehicles becoming more dynamic and ubiquitous, this places higher requirements on the vision processing pipeline. This thesis addresses the problem of estimating dense visual flow information in real time. The contributions of this work are threefold. First, it introduces a new filtering algorithm for the estimation of dense optical flow at frame rates as fast as 800 Hz for 640x480 image resolution. The algorithm follows an update-prediction architecture to estimate dense optical flow fields incrementally over time. A fundamental component of the algorithm is the modeling of the spatio-temporal evolution of the optical flow field by means of partial differential equations. Numerical predictors can implement such PDEs to propagate the current estimate of the flow forward in time. Experimental validation of the algorithm is provided using a high-speed ground-truth image dataset as well as real-life video data at 300 Hz. The second contribution is a new type of visual flow named structure flow.
Mathematically, structure flow is the three-dimensional scene flow scaled by the inverse depth at each pixel in the image. Intuitively, it is the complete velocity field associated with image motion, including both optical flow and the scale change, or apparent divergence, of the image. Analogously to optic flow, structure flow provides a robotic vehicle with perception of the motion of the environment as seen by the camera. However, structure flow encodes the full 3D image motion of the scene, whereas optic flow only encodes the component on the image plane. An algorithm to estimate structure flow from image and depth measurements is proposed, based on the same filtering idea used to estimate optical flow. The final contribution is the spherepix data structure for processing spherical images. This data structure is the numerical back-end used for the real-time implementation of the structure flow filter. It consists of a set of overlapping patches covering the surface of the sphere. Each individual patch approximately preserves properties such as orthogonality and equidistance of points, thus allowing efficient implementations of low-level classical 2D convolution-based image processing routines such as Gaussian filters and numerical derivatives. These algorithms are implemented on GPU hardware and can be integrated into future Robotic Embedded Vision systems to provide fast visual information to robotic vehicles.
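The definition of structure flow above reduces to a per-pixel scaling. The snippet below is an illustrative reading of that definition (array shapes and units are my own assumptions, not the thesis's implementation):

```python
import numpy as np

# Structure flow = per-pixel 3-D scene flow (m/s) scaled by inverse depth
# (1/m), giving a field in 1/s that is independent of absolute scene scale.
H, W = 4, 6
scene_flow = np.zeros((H, W, 3))
scene_flow[..., 0] = 2.0                   # whole scene moving 2 m/s along x
depth = np.full((H, W), 5.0)               # every pixel 5 m from the camera

structure_flow = scene_flow / depth[..., None]   # shape (H, W, 3), units 1/s
# Per the description above, two components correspond to image-plane
# (optic flow) motion, and the remaining one carries the scale-change
# (apparent divergence) information that optic flow alone lacks.
```

The scaling explains the robustness motivation: a far-away object moving fast and a nearby object moving slowly can produce the same structure flow, exactly as they produce the same image motion.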

    Neural Action Fields for Optic Flow Based Navigation: A Simulation Study of the Fly Lobula Plate Network

    Optic flow based navigation is a fundamental mode of visual course control described in many different species, including man. In the fly, an essential part of optic flow analysis is performed in the lobula plate, a retinotopic map of motion in the environment. There, the so-called lobula plate tangential cells possess large receptive fields with different preferred directions in different parts of the visual field. Previous studies demonstrated an extensive connectivity between different tangential cells, providing, in principle, the structural basis for their large and complex receptive fields. We present a network simulation of the tangential cells, comprising most of the neurons studied so far (22 in each hemisphere) with all the known connectivity between them. On their dendrites, model neurons receive input from a retinotopic array of Reichardt-type motion detectors. Model neurons exhibit receptive fields much like their natural counterparts, demonstrating that the connectivity between the lobula plate tangential cells can indeed account for their complex receptive field structure. We describe the tuning of a model neuron to particular types of ego-motion (rotation as well as translation around/along a given body axis) by its ‘action field’. As we show for model neurons of the vertical system (VS-cells), each of them displays a different type of action field, i.e., responds maximally when the fly is rotating around a particular body axis. However, the tuning width of the rotational action fields is relatively broad, comparable to that obtained with dendritic input only. The additional intra-lobula-plate connectivity mainly reduces the amplitude of their translational action fields, i.e., their sensitivity to translational movements along any body axis of the fly.
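The Reichardt-type motion detectors used as model input are correlation detectors: each half multiplies a delayed signal from one photoreceptor with the undelayed signal from its neighbour, and the two half-detector outputs are subtracted, yielding a direction-selective response. A minimal sketch with sinusoidal inputs (the parameter values are arbitrary choices of mine):

```python
import numpy as np

def reichardt(s1, s2, delay):
    """Correlation-type (Reichardt) motion detector: delayed input 1 times
    input 2, minus delayed input 2 times input 1. np.roll implements the
    delay; it is exact here because the test signals are periodic over
    the whole record."""
    return np.roll(s1, delay) * s2 - np.roll(s2, delay) * s1

fs = 1000                          # samples per second
t = np.arange(0, 10, 1 / fs)       # exactly ten 1 Hz periods
s1 = np.sin(2 * np.pi * t)         # photoreceptor 1
s2 = np.roll(s1, 50)               # photoreceptor 2 lags by 50 ms: motion 1 -> 2
preferred = reichardt(s1, s2, delay=50)
null = reichardt(s2, s1, delay=50)     # same pattern moving the other way
# mean(preferred) is positive, mean(null) is negative
```

The sign of the time-averaged output thus encodes motion direction, which is what makes a retinotopic array of such detectors a suitable input stage for the direction-selective tangential cells.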