
    Spectral and ocellar inputs to honeybee motion-sensitive descending neurons

    Optomotor reflexes have been observed in many insects, and in some cases the neural pathways that mediate these reflexes have been identified physiologically and anatomically. In honeybees, Kaiser (1975) established that the spectral sensitivity of the optomotor response almost exactly matched that of the green photoreceptors, suggesting an exclusive input from green photoreceptors. However, physiological studies showed that the motion detectors in the optic lobes have a secondary response peak in the UV region of the spectrum, suggesting that more than one type of photoreceptor may be involved in the optomotor response. In this thesis, I therefore investigate the neural basis of motion and spectral wavelength processing in motion-sensitive descending neurons, which lie on the optomotor response pathway, to reveal the neural contributions from other spectral receptor types. Intracellular recording techniques were used. The stimuli consisted of a wide-field LED (light-emitting diode) display in which green (peak 530 nm) and short-wavelength (peak 380 nm) LEDs were mounted in pairs across a wide visual area. Six types of motion-sensitive descending neurons were recorded and anatomically identified, including two pitch-sensitive neurons (Locth3, DNII2), two roll-sensitive neurons (DNIV2 and DNIV3) and two yaw-sensitive neurons (DNVII1 and DNVII2). The results show that the vertically sensitive (pitch- and roll-sensitive) neurons have equal-sized excitatory responses to short-wavelength and green motion stimulation. For the horizontally sensitive (yaw-sensitive) neurons, however, excitatory responses occurred only for the green stimulus in the preferred direction; the short-wavelength stimulus induced clear inhibitory responses for all tested motion directions.
The results suggest that, besides green photoreceptors, the motion-sensitive descending neurons also receive input from short-wavelength photoreceptors, but only for motion detectors tuned to vertical motion. Honeybees, like most flying insects, have three ocelli (simple eyes) on top of the head, in addition to the compound eyes. However, the exact function of the bee ocelli, and how information is combined between the ocelli, compound eyes and central brain, remain unclear. In this thesis, I investigate the ocellar properties morphologically, anatomically and physiologically. Semi-thin sections and focal length measurements were performed on both median and lateral ocelli, and a 3-dimensional reconstruction of the honeybee ocellar lenses and retinas was developed to characterise the visual fields of the ocelli. Intracellular electrophysiology experiments were carried out on descending neurons to examine how information from the ocelli and compound eyes is combined. Cell responses to different stimuli were recorded with the ocelli covered and uncovered. It is shown that ocellar input provides a faster response to motion stimuli than compound-eye stimulation alone, and also increases the amplitude of responses to flashed stimuli. In the case of the DNII2 neuron, it is also shown that the ocelli provide a directional contribution to the responses.

    Investigation of visual pathways in honeybees (Apis mellifera) and desert locusts (Schistocerca gregaria): anatomical, ultrastructural, and physiological approaches

    Many insect species demonstrate sophisticated abilities in spatial orientation and navigation, despite their small brain size. The behaviors based on spatial orientation differ dramatically between insect species according to their lifestyle and habitat. Central-place foragers like bees and ants, for example, orient themselves in their surroundings and navigate back to the nest after foraging for food or water. Insects like some locust and butterfly species, on the other hand, use spatial orientation during migratory phases to keep a stable heading in a certain direction over a long period of time. In both scenarios, homing and long-distance migration, vision is the primary source of orientation cues, even though additional features like wind direction, the earth's magnetic field, and olfactory cues can be taken into account as well. Visual cues used for orientation range from landmarks and the panorama to celestial cues. In diurnal insects, the latter comprise the position of the sun itself, the sun-based polarization pattern, and the sky's intensity and spectral gradients, and are summarized as the sky-compass system. For reliable sky-compass orientation, the animal needs, in addition to perceiving celestial cues, to compensate for the daily movement of the sun across the sky. A connection from the circadian pacemaker system to the sky-compass network could provide the necessary circuitry for this time compensation. The present thesis focuses on the sky-compass system of honeybees and locusts. There is a large body of work on the navigational abilities of honeybees from a behavioral perspective, but the underlying neuronal anatomy and physiology have received less attention so far. The first two chapters of this thesis therefore reveal a large part of the anatomy of the anterior sky-compass pathway in the bee brain.
To this end, dye injections, immunohistochemical stainings, and ultrastructural examinations were conducted. The third chapter describes a novel methodological protocol for physiological investigation of neurons involved in the sky-compass system, using calcium imaging in behaving animals. The fourth chapter of this thesis deals with the anatomical basis of time compensation in the sky-compass system of locusts. For this purpose, the ultrastructure of synaptic connections was investigated in a brain region of the desert locust where contact between the two systems appears feasible.
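Conceptually, the time compensation discussed here amounts to adding a time-dependent correction to the observed sun bearing. The sketch below is a hypothetical illustration, not an algorithm from the thesis: it uses a linear ephemeris approximation in which the sun's azimuth drifts at the daily average rate of about 15° per hour, whereas the true solar ephemeris is nonlinear and depends on latitude and season.

```python
def sun_azimuth(ref_azimuth_deg, hours_since_ref, rate_deg_per_hour=15.0):
    # Linear ephemeris approximation: the sun's azimuth drifts by
    # ~15 deg/h on average. The true motion is nonlinear and
    # latitude- and season-dependent; this is illustrative only.
    return (ref_azimuth_deg + rate_deg_per_hour * hours_since_ref) % 360.0

def flight_angle_to_sun(goal_heading_deg, ref_azimuth_deg, hours_since_ref):
    # Angle the animal must hold relative to the sun to keep a fixed
    # compass heading despite the sun's movement across the sky.
    return (goal_heading_deg - sun_azimuth(ref_azimuth_deg, hours_since_ref)) % 360.0
```

For example, an animal heading toward 90° with the sun also at 90° initially flies straight at the sun (angle 0°); two hours later, under this approximation, it must hold 330° relative to the sun to keep the same compass course.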

    Vibration-Processing Interneurons in the Honeybee Brain

    The afferents of the Johnston's organ (JO) in the honeybee brain send their axons to three distinct areas, the dorsal lobe, the dorsal subesophageal ganglion (DL-dSEG), and the posterior protocerebral lobe (PPL), suggesting that vibratory signals detected by the JO are processed differentially in these primary sensory centers. The morphological and physiological characteristics of interneurons arborizing in these areas were studied by intracellular recording and staining. DL-Int-1 and DL-Int-2 have dense arborizations in the DL-dSEG and respond to vibratory stimulation applied to the JO with tonic excitatory, on-off phasic excitatory, or tonic inhibitory patterns. PPL-D-1 has dense arborizations in the PPL, sends axons into the ventral nerve cord (VNC), and responds with a long-lasting excitatory pattern to vibratory and olfactory stimulation applied simultaneously to the antennae. These results show that there are at least two parallel pathways for vibration processing, through the DL-dSEG and through the PPL. In this study, the Honeybee Standard Brain was used as the common reference, and the morphology of two types of interneurons (DL-Int-1 and DL-Int-2) and of the JO afferents was merged into the standard brain based on the boundaries of several neuropils, greatly supporting the understanding of the spatial relationship between these identified neurons and the JO afferents. Visualization of the region where the JO afferents lie in close apposition to these DL interneurons demonstrated the difference in putative synaptic regions between the JO afferents and the DL interneurons (DL-Int-1 and DL-Int-2) in the DL. The neural circuits related to the vibration-processing interneurons are discussed.

    Adaptations for nocturnal vision in insect apposition eyes

    Due to our own preference for bright light, we tend to forget that many insects are active in very dim light. The reasons for nocturnal activity are most easily seen in tropical areas of the world, where animals face severe competition for food and nocturnal insects are able to forage in a climate of reduced competition and predation. Nocturnal insects generally possess superposition compound eyes. This eye design is truly optimized for dim light, as photons can be gathered through large apertures comprising hundreds of lenses. In apposition eyes, on the other hand, the aperture consists of a single lens, resulting in a poor photon catch and unreliable vision in dim light. Apposition eyes are therefore typically found in day-active insects and, according to theoretical calculations, should render bees blind by mid-dusk. Nevertheless, the tropical bee Megalopta genalis and the wasp Apoica pallens have managed the transition to a nocturnal lifestyle while retaining their highly unsuitable apposition eye design. Far from being blind, these bees and wasps forage at extremely low light intensities. Moreover, M. genalis is the first insect shown to use landmark navigation at light intensities dimmer than starlight. How do their apposition eyes permit such complex visual behaviour in so little light? Optical adaptations can significantly enhance sensitivity in apposition eyes. In bees and wasps, the major effect comes from their extremely wide photoreceptors, which are able to trap light reaching the eye from a large visual angle. These optical adaptations lead to a 30-fold increase in sensitivity compared to diurnal bees and wasps. This alone, however, cannot bridge the 8 log unit difference in light intensity between day and night. Our hypothesis is that neural adaptations in the form of spatial and temporal summation must be involved.
By means of spatial summation, the eye could sum signals from large groups of visual units (ommatidia) in order to improve sensitivity at the cost of coarser spatial resolution. In nocturnal bees, spatial summation could be mediated by their wide, laterally spreading first-order interneurons (L-fibres) in the first optic ganglion (lamina). These L-fibres have significantly larger dendritic fields than equivalent neurons in diurnal bees, and the potential to sum photons from up to 18 visual units. Theoretical modelling further supports this hypothesis, as the optimal dendritic field size predicted by the model agrees well with the anatomical data.
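The trade-off described above follows directly from Poisson photon statistics: pooling N ommatidia multiplies the photon count by N, so the signal-to-noise ratio grows as √N while the effective sampling angle coarsens by roughly the same factor. A minimal sketch of this reasoning (function names and example numbers are illustrative, not values from the thesis):

```python
import math

def summed_snr(photons_per_ommatidium, n_summed):
    # Photon-limited signal-to-noise ratio after pooling n_summed
    # ommatidia. Photon arrival is Poisson, so SNR = sqrt(total count).
    return math.sqrt(photons_per_ommatidium * n_summed)

def resolution_penalty(n_summed):
    # Approximate linear coarsening of spatial resolution: pooling N
    # units widens the effective sampling angle by about sqrt(N).
    return math.sqrt(n_summed)
```

Summing the 18 units available to an L-fibre would thus raise the SNR by √18 ≈ 4.2, at the price of a roughly 4.2-fold coarser sampling angle. Note that this gain is still far short of the 8 log unit day-night range, which is why temporal summation is invoked as well.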

    Molecular analyses of visual processing and learning in honey bees

    Honey bees are endowed with the capacity of color vision as they possess three types of photoreceptors in their retina that are maximally sensitive in the ultraviolet, blue and green domains owing to the presence of corresponding opsin types. While the behavioral aspects of color vision have been intensively explored, based on the ease with which free-flying bee foragers can be trained to color stimuli paired with sucrose solution, the molecular underpinnings of this capacity have barely been explored. Here we developed studies that spanned the exploration of opsin properties and changes of gene expression in the bee brain during color learning and retention in controlled laboratory protocols to fill this void. We characterized opsin distribution in the honey bee visual system, focusing on the presence of two types of green opsins (Amlop1 and Amlop2), one of which (Amlop2) was discovered upon sequencing of the bee genome. We confirmed that Amlop1 is present in ommatidia of the compound eye but not in the ocelli, while Amlop2 is confined to the ocelli. We developed a CRISPR/Cas9 approach to determine possible functional differences between these opsins. We successfully created Amlop1 and Amlop2 adult mutant bees by means of CRISPR/Cas9 technology, and we also produced white-gene mutants as a control for the efficiency of our method. We tested our mutants using a conditioning protocol in which bees learn to inhibit attraction to chromatic light based on electric-shock punishment (Icarus protocol). White and Amlop2 mutants learned to inhibit spontaneous attraction to blue light while Amlop1 mutants failed to do so.
These results indicate that responses to blue light, which is also partially sensed by green receptors, are mediated mainly by compound-eye photoreceptors containing Amlop1 and not by the ocellar system, in which photoreceptors contain Amlop2. Accordingly, 24 hours later, white and Amlop2 mutants exhibited an aversive memory for the punished color that was comparable to control bees, but Amlop1 mutants exhibited no memory. We discuss these findings based on controls with the eyes or ocelli covered by black paint, and interpret our results in terms of chromatic vs. achromatic vision mediated via the compound eyes and the ocelli, respectively. Finally, we analyzed immediate early gene (IEG) expression in specific areas of the bee brain following color vision learning in a virtual reality (VR) environment. We changed the degrees of freedom of this environment and subjected bees to a 2D VR, in which only lateral movements of the stimuli were possible, and to a 3D VR, which provided a more immersive sensation. We analyzed levels of relative expression of three IEGs (kakusei, Hr38, and Egr1) in the calyces of the mushroom bodies, the optic lobes and the rest of the brain after color discrimination learning. In the 3D VR, successful learners exhibited Egr1 upregulation only in the calyces of the mushroom bodies, thus uncovering a privileged involvement of these brain regions in associative color learning. Yet, in the 2D VR, Egr1 was downregulated in the OLs while Hr38 and kakusei were coincidently downregulated in the calyces of the MBs in the learner group. Although both VR scenarios point towards specific activations of the calyces of the mushroom bodies (and of the visual circuits in the 2D VR), the difference in the type of expression detected suggests that the different constraints of the two VRs may lead to different kinds of neural phenomena. While 3D VR scenarios allowing for navigation and exploratory learning may lead to IEG upregulation, 2D VR scenarios in which movements are constrained may induce higher levels of inhibitory activity in the bee brain. Overall, we provide a series of new explorations of the visual system, including new functional analyses and the development of novel methods to study opsin function, which advance our understanding of honey bee vision and visual learning.

    Mechanisms, functions and ecology of colour vision in the honeybee.

    Notes: PMCID: PMC4035557. Type: Journal Article. © The Author(s) 2014. This is an open access article that is freely available in ORE or from Springerlink.com. Please cite the published version available at: http://link.springer.com/article/10.1007%2Fs00359-014-0915-1

Research in the honeybee has laid the foundations for our understanding of insect colour vision. The trichromatic colour vision of honeybees shares fundamental properties with primate and human colour perception, such as colour constancy, colour opponency, and the segregation of colour and brightness coding. Laborious efforts to reconstruct the colour vision pathway in the honeybee have provided detailed descriptions of neural connectivity and of the properties of photoreceptors and interneurons in the optic lobes of the bee brain. The modelling of colour perception advanced with the establishment of colour discrimination models based on experimental data, the Colour-Opponent Coding and Receptor Noise-Limited models, which are important tools for the quantitative assessment of bee colour vision and colour-guided behaviours. Major insights into the visual ecology of bees have been gained by combining behavioural experiments with quantitative modelling, and by asking how bee vision has influenced the evolution of flower colours and patterns. Recently, research has focussed on the discrimination and categorisation of coloured patterns, colourful scenes and various other groupings of coloured stimuli, highlighting the bees' behavioural flexibility. The identification of perceptual mechanisms remains of fundamental importance for the interpretation of their learning strategies and performance in diverse experimental tasks. Funding: Biotechnology and Biological Sciences Research Council (BBSRC).
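Of the two models named above, the Receptor Noise-Limited model has a compact closed form for a trichromat: chromatic distance between two stimuli is computed from log quantum-catch ratios weighted by per-receptor noise. Below is a minimal sketch of that standard formula; the quantum catches and noise values used in any call are placeholders, not measurements from this article.

```python
import math

def rnl_distance(q_a, q_b, noise):
    # Chromatic distance between stimuli A and B under the Receptor
    # Noise-Limited model for a trichromat (e.g. a bee with UV, blue
    # and green receptors). q_a, q_b: quantum catches per receptor;
    # noise: per-receptor Weber fractions e_i.
    # Receptor signals follow Weber's law: delta f_i = ln(q_A / q_B).
    f = [math.log(a / b) for a, b in zip(q_a, q_b)]
    e1, e2, e3 = noise
    num = (e1**2 * (f[2] - f[1])**2
           + e2**2 * (f[2] - f[0])**2
           + e3**2 * (f[0] - f[1])**2)
    den = (e1 * e2)**2 + (e1 * e3)**2 + (e2 * e3)**2
    return math.sqrt(num / den)
```

For identical stimuli the distance is 0; distances are expressed in "just noticeable difference" units, with a distance of about 1 conventionally taken as the discrimination threshold.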

    What makes a landmark a landmark? How active vision strategies help honeybees to process salient visual features for spatial learning

    Mertes M. Primary sensory processing of visual and olfactory signals in the bumblebee brain. Bielefeld: Bielefeld University; 2013.

For decades, honeybees have been used as an insect model system for answering scientific questions in a variety of areas, owing to their enormous behavioural repertoire paired with their learning capabilities. Similar learning capabilities are also evident in bumblebees, which are closely related to honeybees. Like honeybees, they are central-place foragers that commute between a reliable food source and their nest, and therefore need to remember particular facets of their environment to reliably find their way back to these places. Their flight style, which consists of fast head and body rotations (saccades) interspersed with flight segments with almost no rotational head movement (intersaccades), makes it possible to acquire distance information about objects in the environment. Depending on the structure of the environment, bumblebees as well as honeybees can use these objects as landmarks to guide their way between the nest and a particular food source. Landmark learning as a visual task depends, of course, on the visual input perceived by the animal's eyes. As this visual input changes rapidly during head saccades, in my first project we recorded bumblebees with high-speed cameras in an indoor flight arena while they were solving a navigation task that required them to orient according to landmarks. First we tracked head orientation during whole flight periods that served to learn the spatial arrangement of the landmarks. In this way we acquired detailed data on the fine structure of the head saccades that shape the visual input bumblebees perceive. Head saccades of bumblebees exhibit a consistent relationship between their duration, peak velocity and amplitude, resembling the human "saccadic main sequence" in its main characteristics.
We also found the bumblebees' saccadic sequence to be highly stereotyped, similar to many other animals. This hints at a common principle: reliably minimizing the time during which the eye is moved, through fast and precise motor control. In my first project I tested bumblebees with salient landmarks in front of a background covered with a random-dot pattern. In a previous study, honeybees had been trained with the same landmark arrangement and were additionally tested using landmarks that were camouflaged against the background. As the pattern of the landmark textures did not seem to affect their performance in finding the goal location, it had been assumed that the way they acquire information about the spatial relationships between objects is independent of the objects' texture. Our aim for the second project of my dissertation was therefore to record the activity of motion-sensitive neurons in the bumblebee, to analyse to what extent object information is contained in a navigation-related visual stimulus movie. We also wanted to clarify whether object texture is represented in the neural responses. As recording from neurons in free-flying bumblebees is not possible, we used one of the recorded bumblebee trajectories to reconstruct a three-dimensional flight path, including data on head orientation. We could therefore reconstruct ego-perspective movies of a bumblebee while it solved a navigational task. These movies were presented to motion-sensitive neurons in the bumblebee lobula. For two different classes of neurons, we found that object information was contained in the neuronal response traces. Furthermore, during the intersaccadic parts of flight the object's texture did not change the general response profile of these neurons, which nicely matches the behavioural findings. However, slight changes in the response profiles acquired for the saccadic parts of flight might allow texture information to be extracted from these neurons at later processing stages.
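The "main sequence" mentioned above is commonly summarized by a saturating dependence of peak velocity on saccade amplitude. The sketch below is purely illustrative: the functional form is a standard oculomotor-literature convention, and the parameter values are hypothetical, not fits from this dissertation.

```python
import math

def main_sequence_peak_velocity(amplitude_deg, v_max=500.0, c=15.0):
    # Saturating main-sequence relation: V_peak = V_max * (1 - exp(-A / c)).
    # v_max (deg/s) and c (deg) are hypothetical fit parameters chosen
    # for illustration only.
    return v_max * (1.0 - math.exp(-amplitude_deg / c))
```

Small saccades thus scale roughly linearly in peak velocity, while large saccades approach a velocity ceiling, which is the signature the behavioural data are compared against.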
In the final project of my dissertation I switched from exploring the coding of visual information to the coding of olfactory signals. For honeybees and bumblebees, olfaction is roughly as important for behaviour as vision. But whereas there is a solid knowledge base on honeybee olfaction, with detailed studies on the individual stages of olfactory information processing, such knowledge was missing for the bumblebee. In a first step, we conducted staining experiments and confocal microscopy to identify the input tracts conveying information from the antennae to the first processing stage of olfactory information, the antennal lobe (AL). Using three-dimensional reconstruction of the AL, we could further determine the typical number of its spheroidal subunits, which are called glomeruli. Odour molecules perceived by the bumblebee induce activation patterns characteristic of particular odours. By retrogradely staining the output tracts that connect the AL to higher-order processing stages with a calcium indicator, we were able to record the odour-dependent activation patterns of the AL glomeruli and to describe their basic coding principles. As in honeybees, we could show that the odours' carbon chain length as well as their functional groups are dimensions that the antennal lobe glomeruli encode in their spatial response patterns. Correlation analyses underlined the strong similarity of the glomerular activity patterns between honeybees and bumblebees.

    Visual learning in virtual reality in Apis mellifera

    Equipped with a brain smaller than one cubic millimeter and containing ~950,000 neurons, honeybees display a rich behavioral repertoire, among which appetitive learning and memory play a fundamental role in the context of foraging activities. Besides elemental forms of learning, where bees learn specific associations between environmental features, bees also master different forms of non-elemental learning, including categorization, contextual learning and rule abstraction. These characteristics make them an ideal model for the study of visual learning and its underlying neural mechanisms. In order to access the working brain of a bee during visual learning, the insect needs to be immobilized.
To do so, virtual reality (VR) setups have been developed that allow bees to behave within a virtual world while remaining stationary in the real world. During my PhD, I developed flexible, open-source 3D VR software to study visual learning, and used it to improve existing conditioning protocols and to investigate the neural mechanisms of visual learning. By developing a true 3D environment, we opened the possibility of adding frontal background cues, which were also subject to 3D updating based on the bee's movements. We thus studied if and how the presence of such motion cues affected visual discrimination in our VR landscape. Our results showed that the presence of frontal background motion cues impaired the bees' performance; whenever these cues were suppressed, color discrimination learning became possible. Our results point towards deficits in the attentional processes underlying color discrimination whenever motion cues from the background were frontally available in our VR setup. VR makes it possible to present insects with a tightly controlled visual experience during visual learning. We took advantage of this feature to perform ex-vivo analyses of immediate early gene (IEG) expression in specific brain areas, comparing learner and non-learner bees. Using both the 3D VR and a more restrictive 2D version of the same task, we tackled two questions: first, which brain regions are involved in visual learning? And second, does the pattern of brain activation depend on the modality of learning? Learner bees that solved the task in 3D showed increased activity of the mushroom bodies (MB), which is consistent with the role of the MB in sensory integration and learning. Surprisingly, we found a completely different pattern of IEG expression in the bees that solved the task in 2D conditions: a neural signature that spanned the optic lobes and MB calyces and was characterized by IEG downregulation, consistent with an inhibitory trace.
The study of the neural mechanisms of visual learning requires invasive approaches to access the brain of the insect, which induce stress in the animals and can thus themselves impair behavior. To potentially mitigate this effect, the bumble bee Bombus terrestris could constitute a good alternative to Apis mellifera, as bumble bees are more robust. Therefore, in the last part of this work we explored the performance of bumble bees in a differential learning task in VR and compared it to that of honey bees. We found that not only are bumble bees able to solve the task as well as honey bees, but they also engage more with the virtual environment, leading to a lower ratio of discarded individuals. We also found no correlation between the size of bumble bees and their learning performance. This is surprising, as larger bumble bees, which assume the role of foragers in the colony, have been shown in the literature to be better at learning visual tasks.

    Transgenic Tools in Ants and the Representation of Alarm Pheromones in the Ant Antennal Lobe

    For decades, ants have served as major study species for ethologists, theorists, geneticists, and chemical ecologists, who have been drawn to understand how their unique features contribute to the evolution and maintenance of insect societies. These features, especially their extreme morphological plasticity, collective behavior, and complex chemical communication, together with their small sizes and relatively simple brains, make ants intriguing model systems for many topics in neurobiology. While many ant species possess brains no larger than that of the well-characterized vinegar fly, their primary olfactory processing centers (antennal lobes) contain an order of magnitude more functional units than the vinegar fly's, presumably to facilitate detecting and discriminating between vast numbers of pheromones. The discovery of additional developmental differences between ants and flies led to a proposed but untested model in which ant olfactory sensory neurons target the appropriate antennal lobe compartment via receptor-dependent activity, as occurs in mammals. However, while transgenic tools have allowed dissection of the olfactory system's development and functional organization in some solitary insect species, these tools have so far been impossible to implement in ants. Taking advantage of the unusual experimental tractability of the clonal raider ant Ooceraea biroi, we implemented piggyBac transgenesis for the first time in ants, and generated a toolkit of transgenic lines. These protocols and transgenic tools greatly expand the space of feasible experiments in ants, and make clonal raider ants the most experimentally tractable model system among eusocial insects. We then used these transgenic lines to study outstanding questions in social insect olfaction.
