27 research outputs found

    Evidence for ventral optic flow regulation in honeybees

    No full text
    To better grasp the visuomotor control system underlying insects' height and speed control, we attempted to interfere with this system by producing a major perturbation on a free-flying insect and observing the effect of this perturbation. Honeybees were trained to fly along a high-roofed tunnel, part of which was equipped with a moving floor. The bees followed the stationary part of the floor at a given height. On encountering the moving part of the floor, which moved in the same direction as their flight, the honeybees descended and flew at a lower height. In so doing, the bees gradually restored their ventral optic flow (OF) to a value similar to the one they had perceived when flying over the stationary part of the floor. OF restoration therefore relied on lowering the ground height rather than increasing the ground speed. This result can be accounted for by the control system, called an optic flow regulator, that we proposed in previous studies. This visuomotor control scheme explains how honeybees can navigate safely along tunnels on the sole basis of OF measurements, without any need to measure either their speed or their clearance from the ground, the roof or the surrounding walls.
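
    The descent reported above follows directly from the definition of translational optic flow, assuming (as observed) that the bee keeps its ground speed roughly constant. A sketch of the reasoning, with v the bee's ground speed, v_f the floor speed in the flight direction, h the ground height and ω_set the regulator's set-point:

```latex
% Over the stationary floor (floor speed v_f = 0) the regulator holds
%   \omega_v = v / h_old = \omega_set.
% Over the moving floor the perceived ventral flow drops to (v - v_f)/h_old < \omega_set;
% restoring the set-point by descending at unchanged speed gives
\[
  \omega_v = \frac{v - v_f}{h_{\mathrm{new}}} = \omega_{\mathrm{set}}
  \quad\Longrightarrow\quad
  h_{\mathrm{new}} = \frac{v - v_f}{\omega_{\mathrm{set}}}
  \;<\;
  h_{\mathrm{old}} = \frac{v}{\omega_{\mathrm{set}}}
\]
```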

    Visual flight control in the honeybee: experiments and model

    Get PDF
    When an insect flies in its environment, the image of the surrounding objects moves across its retina. Several studies have shown that this angular motion, called optic flow, plays a major role in insect flight control. Flying insects seem to maintain the perceived optic flow at a preferred value, which leads them to adopt a "safe" position and a "safe" speed. We first designed a model based on the optic flow regulation principle recently proposed by our laboratory, which accounts for observations and results previously obtained on insects. We then performed behavioral experiments on free-flying bees in controlled environments, aimed at refuting the proposed model. Our results reveal a direct link between the ventral optic flow and the flight height. They also show that the honeybee is sensitive to the dorsal optic flow, and that it adjusts its speed to the clutter of the environment in both the vertical and horizontal planes. All these results support the proposed model. The results of a final experiment suggest, however, that beyond the "reflex" part of the flight control system, a learning process may play a role and modulate the flight behavior. This last point requires that a learning element be incorporated into the model. This thesis describes, for the first time in the form of an explicit functional scheme, the principles involved in the optic-flow-based 3D flight control of an insect. Because the model is explicit, it opens the way to new behavioral experiments liable to refute it or delineate its limits, and it is directly applicable to mobile, aerial or space robotics.

    3D Navigation With An Insect-Inspired Autopilot

    No full text
    ISBN: 978-2-9532965-0-1. Using computer-simulation experiments, we developed a vision-based autopilot that enables a 'simulated bee' to travel along a tunnel by controlling both its speed and its clearance from the right wall, the left wall, the ground, and the ceiling. The flying agent can translate along three directions (surge, sway, and heave); the agent is therefore fully actuated. The visuo-motor control system, called ALIS (AutopiLot using an Insect-based vision System), is a dual optic flow (OF) regulator consisting of two interdependent feedback loops, each of which has its own OF set-point. The experiments show that the simulated bee navigates safely along a straight tunnel, while reacting sensibly to the major OF perturbation caused by the presence of a tapered section. The visual system is minimalistic (only eight pixels), yet it suffices to control the clearance from the four walls and the forward speed jointly, without any need to measure speeds or distances. The OF sensors and the simple visuo-motor control system developed here are suitable for use on MAVs with avionic payloads as small as a few grams. Moreover, the ALIS autopilot accounts remarkably well for the quantitative results of ethological experiments performed on honeybees flying freely in straight or tapered corridors.

    A 3D insect-inspired visual autopilot for corridor-following

    Get PDF
    Motivated by the results of behavioral studies performed on bees over the last two decades, we have attempted to decipher the logic behind the bee's autopilot, with specific reference to the bees' use of optic flow (OF). Using computer-simulation experiments, we developed a vision-based autopilot that enables a 'simulated bee' to travel along a tunnel by controlling both its speed and its clearance from the walls, the ground, and the ceiling. The flying agent is fully actuated and can translate along three directions: surge, sway, and heave. The visuo-motor control system, called ALIS (AutopiLot using an Insect-based vision System), is a dual OF regulator consisting of intertwined feedback loops, each of which has its own OF set-point. The experiments show that the simulated bee navigates safely along a straight or tapered tunnel and reacts sensibly to major OF perturbations caused, e.g., by the lack of texture on one wall or by the tapering of the tunnel. The agent is equipped with a minimalistic visual system (comprising only eight pixels) that suffices to control the clearance from the four walls and the forward speed jointly, without any need to measure speeds or distances. The OF sensors and the simple visuo-motor control system developed here are suitable for use on MAVs with avionic payloads as small as a few grams. Moreover, the ALIS autopilot accounts remarkably well for the quantitative results of ethological experiments performed on honeybees flying freely in straight or tapered corridors.
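
    For illustration, the following minimal Python sketch implements a dual optic-flow regulator of the general kind described above; it is not the published ALIS controller. The function alis_like_step and all set-points and gains (OF_FWD_SET, OF_POS_SET, K_SPEED, K_POS) are hypothetical choices made for the example.

```python
# Minimal sketch of a dual optic-flow (OF) regulator, loosely inspired by the
# ALIS scheme described above; set-points and gains below are hypothetical.

OF_FWD_SET = 4.0   # rad/s, set-point of the speed-control loop (assumed)
OF_POS_SET = 2.5   # rad/s, set-point of the positioning loop (assumed)
K_SPEED = 0.5      # proportional gain, forward axis (assumed)
K_POS = 0.2        # proportional gain, sway/heave axes (assumed)

def alis_like_step(v, dists, dt=0.01):
    """One control step: returns the new forward speed and the commanded
    sway (>0 = rightward) and heave (>0 = upward) velocities.

    dists: distances to the 'left', 'right', 'ground' and 'ceiling' surfaces.
    """
    of = {k: v / d for k, d in dists.items()}        # translational OF (rad/s)
    lateral_sum = of['left'] + of['right']
    vertical_sum = of['ground'] + of['ceiling']

    # Speed loop: hold the larger of the two sums of opposite OFs at its set-point,
    # so the agent slows down whenever the tunnel cross-section narrows.
    v_next = max(0.0, v + K_SPEED * (OF_FWD_SET - max(lateral_sum, vertical_sum)) * dt)

    # Positioning loop: hold the largest single OF at its set-point by moving
    # away from the corresponding surface.
    axis, of_max = max(of.items(), key=lambda kv: kv[1])
    err = of_max - OF_POS_SET
    sway = {'left': K_POS * err, 'right': -K_POS * err}.get(axis, 0.0)
    heave = {'ground': K_POS * err, 'ceiling': -K_POS * err}.get(axis, 0.0)
    return v_next, sway, heave

# Example: a 1 m/s agent closer to the ground than to the ceiling climbs and slows.
print(alis_like_step(1.0, {'left': 0.5, 'right': 0.5, 'ground': 0.2, 'ceiling': 0.8}))
```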

    Decoding the retina with the first wave of spikes

    Get PDF
    Understanding how the retina encodes visual information remains an open question. Using multielectrode arrays (MEAs) on salamander retinas, Gollisch & Meister (2008) showed that the relative latencies between some neuron pairs carry sufficient information to identify the phase of square-wave gratings. Using gratings of varying phase, spatial frequency, and contrast on mouse retinas, we extended this idea by systematically considering the relative order of all spike latencies, i.e., the shape of the first wave of spikes after stimulus onset. The discrimination task was to identify the phase among gratings of identical spatial frequency. We compared the performance (fraction of correct predictions) of our approach, under classical Bayesian and LDA decoders, with that of the spike count and response latency of each recorded neuron. The best results were obtained at the lowest spatial frequency. There, the discrimination performance of the spike count was higher than that of the latency under both the Bayesian (0.95 ± 0.02 vs 0.75 ± 0.11) and the LDA (0.95 ± 0.01 vs 0.62 ± 0.03) decoders. The first-wave-of-spikes decoder (0.46 ± 0.06) is less efficient than the spike count, yet it still accounts for 50% of the overall performance. Interestingly, these results tend to confirm the rank order coding hypothesis (Thorpe & Gautrais, 1998).
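
    As a toy illustration of the decoder comparison described above (the data below are simulated, not the recorded mouse responses; all sizes and tuning parameters are invented), an LDA classifier can be trained on either spike counts or first-spike latencies and scored by cross-validation:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_cells, n_phases = 200, 30, 4          # toy dimensions (hypothetical)

# Hypothetical responses: spike counts and first-spike latencies of each cell,
# weakly modulated by the grating phase shown on the trial.
phase = rng.integers(0, n_phases, n_trials)
tuning = rng.normal(size=(n_phases, n_cells))
counts = rng.poisson(np.clip(5 + 2 * tuning[phase], 0.1, None))                      # spikes/trial
latencies = 0.12 - 0.02 * tuning[phase] + rng.normal(0, 0.01, (n_trials, n_cells))   # seconds

# Fraction of correct phase predictions (5-fold cross-validation), analogous to
# the "fraction correct predictions" performance measure used above.
for name, X in [("spike count", counts), ("latency", latencies)]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, phase, cv=5)
    print(f"{name:12s} LDA accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```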

    The wave of first spikes provides robust spatial cues for retinal information processing

    Get PDF
    How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Several coding strategies have been investigated, out of which two main views have emerged: considering RGCs as independent encoders or as synergistic encoders, i.e., when the concerted spiking of an RGC population carries more information than the sum of the information contained in the spiking of individual RGCs. Although RGCs considered as independent encoders convey the main information, there is a growing body of evidence that considering RGCs as synergistic encoders provides complementary and more precise information. Based on salamander retina recordings, it has been suggested [11] that a code based on differential spike latencies between RGC pairs could be a powerful mechanism. Here, we have tested this hypothesis in the mammalian retina. We recorded responses to stationary gratings from 469 RGCs in 5 mouse retinas. Interestingly, we did not find any RGC pairs exhibiting clear latency correlations (presumably due to the presence of spontaneous activity), showing that individual RGC pairs do not provide sufficient information under our conditions. However, considering the whole RGC population, we show that the shape of the wave of first spikes (WFS) successfully encodes spatial cues. To quantify its coding capabilities, we performed a discrimination task and showed that the WFS is more robust to spontaneous firing than absolute latencies are. We also investigated the impact of a post-processing neural layer: the recorded spikes were fed into an artificial lateral geniculate nucleus (LGN) layer. We found that the WFS is not only preserved but even refined through the LGN-like layer, while classical independent coding strategies become impaired. These findings suggest that, even at the level of the retina, the WFS provides a reliable strategy to encode spatial cues.
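
    A minimal sketch of a wave-of-first-spikes decoder of the kind evaluated above, run on synthetic first-spike latencies (the population size, noise level and helper names such as wfs_rank are invented for the example): each trial is reduced to the rank order in which the cells fire and assigned to the stimulus whose mean rank template it matches best (Spearman correlation).

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(1)
n_train, n_test, n_cells, n_stim = 100, 50, 40, 4     # toy sizes (hypothetical)

# Hypothetical first-spike latencies (s): each stimulus slightly shifts the
# firing order of the population; the noise term mimics spontaneous activity.
bias = rng.normal(size=(n_stim, n_cells))
def latencies(stim, n_trials):
    return 0.05 + 0.02 * bias[stim] + rng.normal(0, 0.008, (n_trials, n_cells))

def wfs_rank(lat):
    """Rank of each cell within the wave of first spikes (1 = fires first)."""
    return rankdata(lat)

# Templates: mean rank vector per stimulus, learned from training trials.
templates = {s: np.mean([wfs_rank(t) for t in latencies(s, n_train)], axis=0)
             for s in range(n_stim)}

def decode(trial):
    """Assign the trial to the stimulus whose template best matches its rank order."""
    r = wfs_rank(trial)
    return max(templates, key=lambda s: spearmanr(r, templates[s])[0])

hits = sum(decode(t) == s for s in range(n_stim) for t in latencies(s, n_test))
print(f"WFS rank decoder accuracy: {hits / (n_stim * n_test):.2f}")
```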

    ENAS: A new software for spike train analysis and simulation

    Get PDF
    As one gains more intuitions and results on the importance of concerted activity in spike trains, models are being developed to extract potential canonical principles underlying spike coding. These methods shed new light on spike train dynamics. However, they require time and expertise to be implemented efficiently, making them hard to use on a daily basis by neuroscientists or modelers. To bridge this gap, we developed ENAS, a free, multiplatform software package integrating tools for spike train analysis and simulation. These tools are accessible through a friendly graphical user interface that spares the user any scripting or code writing. Most of them have been implemented to run in parallel to reduce time and memory consumption. One of the main strengths of ENAS compared with competing software is that it provides statistical analysis with maximum-entropy Gibbs distributions taking both spatial and temporal correlations into account as constraints, which allows causality and memory to be introduced into the statistics. Conversely, given this analysis or other known statistics, ENAS can also generate new spike trains. These methods result from a series of studies and have already been applied to the analysis of retinal data. This comes in addition to basic visualizations and classical statistical analyses of spike trains. All these tools are generic and can be applied to any spike train. Interestingly, ENAS also includes tools dedicated to vision, and to the retina in particular. For example, one can jointly visualize stimulus and spiking activity, or estimate receptive fields. ENAS also includes a virtual retina simulator extending the former Virtual Retina simulator to include lateral connections in the IPL. We hope that ENAS will become a useful tool for neuroscientists to analyse spike trains, and we hope to improve it thanks to user feedback. Our goal is to progressively enrich ENAS with the latest research results, in order to facilitate the transfer of new methods to the community. It is downloadable from https://enas.inria.fr, where documentation, tutorials and sample spike trains are available.
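
    For a flavor of the kind of statistical analysis mentioned above, here is an illustrative toy, not the algorithm implemented in ENAS: a pairwise maximum-entropy (Gibbs) distribution fitted to binarized spike trains by matching firing rates and pairwise correlations, then used to generate new spike words. It handles only spatial correlations, whereas ENAS also supports temporal constraints.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N, T = 5, 5000                                       # toy population and recording length
spikes = (rng.random((T, N)) < 0.15).astype(float)   # hypothetical binarized spike train

# Empirical constraints: firing rates and spatial pairwise correlations.
mean_emp = spikes.mean(axis=0)
corr_emp = (spikes.T @ spikes) / T

states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)  # all 2^N words

def model_stats(h, J):
    """Means and pairwise moments under the Gibbs distribution P(s) ~ exp(h.s + s.J.s)."""
    energy = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    return p @ states, (states.T * p) @ states, p

# Fit fields h and (upper-triangular) couplings J by moment matching.
h, J = np.zeros(N), np.zeros((N, N))
for _ in range(2000):
    mean_mod, corr_mod, _ = model_stats(h, J)
    h += 0.1 * (mean_emp - mean_mod)
    J += 0.1 * np.triu(corr_emp - corr_mod, k=1)

# Generate new spike words from the fitted distribution, as a simulator would.
_, _, p = model_stats(h, J)
print(states[rng.choice(len(states), size=5, p=p)])
```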

    Rank order coding: a retinal information decoding strategy revealed by large-scale multielectrode array retinal recordings

    Get PDF
    How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in the salamander suggest that the relative latencies of an RGC pair encode spatial information. A population code based on this concerted spiking could thus be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in the mouse by recording simultaneous light-evoked responses from hundreds of RGCs at the pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in the mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike-count- or latency-based codes. In particular, we report for the first time that, when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes (WFS) is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that, already at the level of the retina, concerted spiking provides a reliable and fast strategy for transmitting new visual scenes.

    Honeybees' Speed Depends on Dorsal as Well as Lateral, Ventral and Frontal Optic Flows

    Get PDF
    Flying insects use the optic flow to navigate safely in unfamiliar environments, especially by adjusting their speed and their clearance from surrounding objects. It has not yet been established, however, which specific parts of the optical flow field insects use to control their speed. With a view to answering this question, freely flying honeybees were trained to fly along a specially designed tunnel including two successive tapering parts: the first part was tapered in the vertical plane and the second one, in the horizontal plane. The honeybees were found to adjust their speed on the basis of the optic flow they perceived not only in the lateral and ventral parts of their visual field, but also in the dorsal part. More specifically, the honeybees' speed varied monotonically, depending on the minimum cross-section of the tunnel, regardless of whether the narrowing occurred in the horizontal or vertical plane. The honeybees' speed decreased or increased whenever the minimum cross-section decreased or increased. In other words, the larger sum of the two opposite optic flows in the horizontal and vertical planes was kept practically constant thanks to the speed control performed by the honeybees upon encountering a narrowing of the tunnel. The previously described ALIS (“AutopiLot using an Insect-based vision System”) model nicely matches the present behavioral findings. The ALIS model is based on a feedback control scheme that explains how honeybees may keep their speed proportional to the minimum local cross-section of a tunnel, based solely on optic flow processing, without any need for speedometers or rangefinders. The present behavioral findings suggest how flying insects may succeed in adjusting their speed in their complex foraging environments, while at the same time adjusting their distance not only from lateral and ventral objects but also from those located in their dorsal visual field
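
    The near-constancy of the sum of opposite optic flows can be turned into the stated speed law with a short calculation, assuming for simplicity that the bee flies midway between the two relevant surfaces; v is the ground speed, d_1 and d_2 the distances to two opposite surfaces, and D = d_1 + d_2 the local width or height of the tunnel:

```latex
% Sum of the two opposite optic flows in one plane, for a bee flying midway
% between the two surfaces (d_1 = d_2 = D/2):
\[
  \omega_1 + \omega_2 = \frac{v}{d_1} + \frac{v}{d_2} = \frac{4v}{D}
\]
% Holding the larger of the horizontal and vertical sums at the set-point
% \omega_set therefore ties the speed to the smaller tunnel dimension:
\[
  v = \frac{\omega_{\mathrm{set}}}{4}\, D_{\min}
\]
```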

    Honeybee visual flight control (experiments and model)

    No full text