48 research outputs found

    How is an ant navigation algorithm affected by visual parameters and ego-motion?

    Ants typically use path integration and vision for navigation when the environment precludes the use of pheromone trails. Recent simulations have accurately mimicked the retinotopic navigation behaviour of these ants using simple models of movement and memory of unprocessed visual images. It is natural to test these navigation algorithms in more realistic circumstances: with actual route data from the ant, in an accurate facsimile of the ant's world, and with visual input that reflects the characteristics of the animal. While increasing the complexity of the visual processing to include skyline extraction, inhomogeneous sampling and motion processing was conjectured to improve the performance of the simulations, the reverse appears to be the case. Examining the assumptions about motion more closely, analysis of ants in the field shows that they experience considerable displacement of the head, which, when applied to the simulation, leads to significant degradation in performance. This family of simulations relies upon continuous visual monitoring of the scene to determine heading, so we tested whether the animals are similarly dependent on this input. A field study demonstrated that ants with only visual navigation cues can return to the nest while largely facing away from the direction of travel (moving backwards), so ant visual navigation does not appear to be a process of continuous retinotopic image matching. We conclude that ants may use vision to determine an initial heading by image matching and then continue to follow this direction using their celestial compass, or they may use a rotationally invariant form of the visual world for continuous course correction.
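The retinotopic image matching that these simulations rely on is commonly implemented as a rotational image difference function: the current panoramic view is compared against a stored snapshot at every candidate heading, and the best-matching rotation is taken as the direction of travel. Below is a minimal numpy sketch of that idea; the array sizes and data are illustrative, not taken from the paper.

```python
import numpy as np

def ridf(current, snapshot):
    """Rotational image difference function: for each candidate heading,
    rotate the current panoramic view column-wise and compute the RMS
    pixel difference against the stored snapshot."""
    n_cols = current.shape[1]
    diffs = np.empty(n_cols)
    for shift in range(n_cols):
        rotated = np.roll(current, shift, axis=1)  # yaw rotation = column shift
        diffs[shift] = np.sqrt(np.mean((rotated - snapshot) ** 2))
    return diffs

def best_heading(current, snapshot):
    """Heading (degrees) that best aligns the current view with the snapshot."""
    n_cols = current.shape[1]
    return np.argmin(ridf(current, snapshot)) * 360.0 / n_cols

# Toy panoramic views: 10 rows x 36 columns (10-degree azimuthal bins).
rng = np.random.default_rng(0)
snapshot = rng.random((10, 36))
current = np.roll(snapshot, -9, axis=1)  # agent rotated by 90 degrees
print(best_heading(current, snapshot))   # recovers the 90-degree offset
```

Because recovering a heading this way requires re-matching the scene at every step, head displacement of the kind measured in the field directly corrupts the input to the comparison.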

    A decentralised neural model explaining optimal integration of navigational strategies in insects

    Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each functions, and through which they are coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route following) and to return directly from novel to familiar terrain (homing), using different aspects of frequency-encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Body regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring attractor inspired by neural recordings. The resulting unified model of insect navigation reproduces behavioural data from a series of cue-conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information they provide, and coordinate their outputs to achieve the adaptive behaviours observed in the wild.
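The kind of optimal integration that a ring attractor approximates can be summarised as a certainty-weighted vector sum of the headings proposed by each guidance system. The sketch below illustrates only this principle; the function name and the example weightings are hypothetical, not the model's circuit.

```python
import numpy as np

def integrate_cues(cues):
    """Weighted vector-sum integration: each guidance system contributes a
    unit vector along its proposed heading, scaled by its certainty. A ring
    attractor receiving bump inputs of matching heights settles on roughly
    this direction (a sketch of the principle, not a CX circuit model)."""
    v = sum(w * np.exp(1j * theta) for theta, w in cues)
    return np.angle(v)

# Path integration points home at 0 rad with high certainty; visual homing
# suggests pi/2 with half the certainty (numbers are illustrative).
heading = integrate_cues([(0.0, 1.0), (np.pi / 2, 0.5)])
print(round(np.degrees(heading), 1))  # 26.6: a compromise biased to the stronger cue
```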

    A unified neural model explaining optimal multi-guidance coordination in insect navigation

    The robust navigation of insects arises from the coordinated action of concurrently functioning and interacting guidance systems. Computational models of specific brain regions can account for isolated behaviours such as path integration or route following, but the neural mechanisms by which their outputs are coordinated remain unknown. In this work, a functional modelling approach was taken to identify and model the elemental guidance subsystems required by homing insects. We then produced realistic adaptive behaviours by integrating the outputs of the different guidance systems in a biologically constrained unified model mapped onto identified neural circuits. Homing paths are compared quantitatively and qualitatively with real ant data in a series of simulation studies replicating key field experiments. Our analysis reveals that insects require independent visual homing and route-following capabilities, which we show can be realised by encoding panoramic skylines in the frequency domain, using image-processing circuits in the optic lobe and learning pathways through the Mushroom Bodies (MB) and the Anterior Optic Tubercle (AOTU) to Bulb (BU), respectively, before converging in the Central Complex (CX) steering circuit. Further, we demonstrate that a ring attractor network inspired by firing patterns recorded in the CX can optimally integrate the outputs of the path integration and visual homing systems, guiding simulated ants back to their familiar route, and that a simple non-linear weighting function driven by the output of the MB provides a context-dependent switch, allowing route-following strategies to dominate and the learned route to be retraced back to the nest when familiar terrain is encountered.
The resulting unified model of insect navigation reproduces behavioural data from a series of cue-conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information they provide, and coordinate their outputs to achieve the adaptive behaviours observed in the wild. These results strengthen the case for a distributed architecture of the insect navigational toolkit. The unified model is then further validated by modelling the olfactory navigation of flies and ants. With simple adaptations of the sensory inputs, the model reproduces the main characteristics of the observed behavioural data, further demonstrating the useful role played by the sensory-processing to CX to motor pathway in generating context-dependent coordination behaviours. In addition, this extension helps to complete the unified model of insect navigation by adding olfactory cues, which are among the most crucial cues for insects.
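The rotation-invariance property that makes frequency-encoded views useful for visual homing is easy to demonstrate: the amplitude spectrum of a panoramic skyline does not change when the view is circularly shifted, i.e. when the agent rotates on the spot. A minimal numpy illustration with a made-up skyline (not the authors' implementation):

```python
import numpy as np

def amplitude_spectrum(skyline, n_coeffs=8):
    """Rotation-invariant view encoding: the amplitude spectrum of a 1-D
    panoramic skyline is unchanged by circular shifts (rotations), because
    a shift only alters the phase of each Fourier coefficient."""
    return np.abs(np.fft.rfft(skyline))[:n_coeffs]

rng = np.random.default_rng(1)
skyline = rng.random(360)        # skyline height per degree of azimuth
rotated = np.roll(skyline, 120)  # same place, body rotated by 120 degrees
a, b = amplitude_spectrum(skyline), amplitude_spectrum(rotated)
print(np.allclose(a, b))         # True: the encoding ignores rotation
```

The discarded phase, by contrast, does depend on orientation, which is one reason different aspects of the same frequency-encoded view can feed different strategies.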

    Reinforcement Learning Approaches to Rapid Hippocampal Place Learning

    The ability to successfully navigate the physical environment is a vital skill for numerous species, including humans, to find food and shelter and to remember how to return to important locations. As environments are inherently variable, brains have evolved remarkable capabilities to adapt to new situations. In particular, animals and humans have the ability to return to specific locations based on as little as a single experience. The mechanisms underlying behavioural flexibility in spatial navigation are the focus of ongoing research with repercussions in the behavioural sciences, neuroscience, and artificial intelligence. In particular, the field of Reinforcement Learning (RL), which investigates how an organism, virtual or living, learns to generate actions based on the reception of rewards, has been extremely active since the 1970s in exploring the mechanisms of flexibility underlying decision making. In parallel, neuroscience has also significantly advanced in uncovering the neural basis of spatial navigation mechanisms, for example with the discovery of neurons underlying the computation of cognitive maps [O’Keefe and Dostrovsky, 1971, Hafting et al., 2005], an internal representation of space. Past RL model designs rely on representations that do not allow efficient flexibility in spatial navigation. However, models provide a theoretical framework that influences the interpretation of neural recordings. As recent recording technologies enable experimentalists to target an increasing number of neurons, there is a compelling need to develop new RL computational approaches for flexible spatial navigation, in particular to bridge the gap between neural population recordings and the production of behaviours. In this thesis, I consider RL approaches in which the known coding properties of the cognitive map are used as a basis to perform spatial navigation. 
Specifically, I investigate computational ideas which enable agents to be more flexible in virtual spatial navigation scenarios. In particular, this thesis focuses on the Morris watermaze, an experimental apparatus in which rodents have to find a hidden platform within a pool of cloudy water. Rapid place learning in the Morris watermaze, demonstrated by rodents requiring only one exposure to a new platform location to subsequently be able to retrieve its position, is an example of flexibility in spatial navigation. I present different RL-based architectures which generate flexible behaviours in a virtual watermaze equivalent, and compare them to behavioural observations. I discuss both their similarity in behavioural performance (i.e., how well they reproduce behavioural measures of rapid place learning) and their neurobiological realism (i.e., how well they map to neurobiological substrates involved in rapid place learning). I propose distinct biologically realistic computational properties which enable an agent to be more flexible towards changes in goal locations. Behavioural flexibility requires hierarchical and generalisable representations for flexible transfer of knowledge. Hierarchical control is useful to generalise action chains, such as selecting a trajectory, to fulfil different purposes, such as reaching different goal locations. It also enables the adjustment of ongoing behaviour to unforeseen situations, for example, adapting to a misprediction of the goal’s location. Continuous encoding of space, action and time permits smoother control and generalisation of experience, and removes the constraints caused by the choice of the representation’s granularity. Neural networks in which connections between neurons reflect predictions about the most likely future scenarios enable efficient planning of trajectories to adapt to novel situations. 
In a nutshell, flexibility requires efficient representations, and this thesis contributes to the investigation of their neural implementations.
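One concrete instance of the predictive representations described above is the successor representation, in which each connection stores the expected discounted future occupancy of one state from another; re-evaluating every location for a moved goal then needs only a change of reward vector, one way to capture one-shot flexibility in the watermaze. The sketch below illustrates that general idea, not the specific architectures proposed in the thesis.

```python
import numpy as np

# A 5x5 open arena discretised into states; the agent follows a random-walk policy.
n = 5
n_states = n * n

def neighbours(s):
    r, c = divmod(s, n)
    return [(r + dr) * n + (c + dc)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < n and 0 <= c + dc < n]

# Transition matrix of the random-walk policy.
T = np.zeros((n_states, n_states))
for s in range(n_states):
    nb = neighbours(s)
    T[s, nb] = 1.0 / len(nb)

# Successor representation: M[s, s2] is the expected discounted future
# occupancy of s2 when starting from s under this policy (Dayan, 1993).
gamma = 0.95
M = np.linalg.inv(np.eye(n_states) - gamma * T)

def value(goal_state):
    """Re-evaluate every location for a new goal with one matrix-vector
    product: only the reward vector changes, no relearning of the map."""
    r = np.zeros(n_states)
    r[goal_state] = 1.0
    return M @ r

# Moving the "platform" from one corner to the other updates values instantly.
v_old, v_new = value(0), value(24)
print(v_old.argmax(), v_new.argmax())  # each value map peaks at its own goal
```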

    The Neural Basis of a Cognitive Map

    It has been proposed that as animals explore their environment they build and maintain a cognitive map, an internal representation of their surroundings (Tolman, 1948). We tested this hypothesis using a task designed to assess the ability of rats to make a spatial inference, i.e. take a novel shortcut (Roberts et al., 2007). Our findings suggest that rats are unable to make a spontaneous spatial inference, and they resemble those of experiments that have similarly failed to replicate or support Tolman’s (1948) findings. An inability to take novel shortcuts suggests that rats do not possess a cognitive map (Bennett, 1996). However, we found evidence of alternative learning strategies, such as latent learning (Tolman & Honzik, 1930b), which suggests that rats may still be building such a representation, although it does not appear that they are able to use this information to make complex spatial computations. Neurons found in the hippocampus show remarkable spatial modulation of their firing rate and have been suggested as a possible neural substrate for a cognitive map (O'Keefe & Nadel, 1978). However, the firing of these place cells often appears to be modulated by features of an animal’s behaviour (Ainge, Tamosiunaite, et al., 2007; Wood, Dudchenko, Robitsek, & Eichenbaum, 2000). For instance, previous experiments have demonstrated that the firing rate of place fields in the start box of some mazes is predictive of the animal’s final destination (Ainge, Tamosiunaite, et al., 2007; Ferbinteanu & Shapiro, 2003). We sought to understand whether this prospective firing is in fact related to the goal the rat is planning to navigate to or the route the rat is planning to take. Our results provide strong evidence for the latter, suggesting that rats may not be aware of the location of specific goals and may not be aware of their environment in the form of a contiguous map. 
However, we also found behavioural evidence that rats are aware of specific goal locations, suggesting that place cells in the hippocampus may not be responsible for this representation and that it may reside elsewhere (Hok, Chah, Save, & Poucet, 2013). Unlike their typical activity in an open field, place cells often have multiple place fields in geometrically similar areas of a multicompartment environment (Derdikman et al., 2009; Spiers et al., 2013). For example, Spiers et al. (2013) found that in an environment composed of four parallel compartments, place cells often fired similarly in multiple compartments, despite the active movement of the rat between them. We were able to replicate this phenomenon; furthermore, we showed that if the compartments are arranged in a radial configuration this repetitive firing does not occur as frequently. We suggest that this place field repetition is driven by inputs from Boundary Vector Cells (BVCs) in neighbouring brain regions, which are in turn greatly modulated by inputs from the head direction system. This is supported by a novel BVC model of place cell firing which accurately predicts our observed results. If place cells form the neural basis of a cognitive map, one would predict spatial learning to be difficult in an environment where repetitive firing is observed frequently (Spiers et al., 2013). We tested this hypothesis by training animals on an odour discrimination task in the maze environments described above. We found that rats trained in the parallel version of the task were significantly impaired compared to those trained in the radial version. These results support the hypothesis that place cells form the neural basis of a cognitive map; in environments where it is difficult to discriminate compartments based on the firing of place cells, rats find it similarly difficult to discriminate these compartments, as shown by their behaviour. 
The experiments reported here are discussed in terms of a cognitive map, the likelihood that such a construct exists and the possibility that place cells form the neural basis of such a representation. Although the results of our experiments could be interpreted as evidence that animals do not possess a cognitive map, ultimately they suggest that animals do have a cognitive map and that place cells form a more than adequate substrate for this representation.
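The BVC account of place field repetition can be illustrated with a toy model: a place cell driven by a boundary vector cell tuned to a wall at a fixed distance in a fixed allocentric direction will fire at the same relative position in every geometrically identical, identically oriented compartment. All parameters below are invented for illustration and are not those of the model in the text.

```python
import numpy as np

def bvc(d, d_pref, sigma=0.05):
    """Boundary vector cell: Gaussian tuning to a wall at distance d_pref
    in a fixed allocentric direction (illustrative parameters)."""
    return np.exp(-(d - d_pref) ** 2 / (2 * sigma ** 2))

# A 1-D slice through two parallel compartments; west walls at x = 0.0 and 0.6 m.
west_walls = [0.0, 0.6]

def place_cell(x):
    """Place cell driven by a BVC preferring a wall 0.1 m to the west: it
    responds to the nearest west wall, so the field repeats at the same
    relative position in every identically oriented compartment."""
    d = min(x - w for w in west_walls if x >= w)  # distance to nearest west wall
    return bvc(d, d_pref=0.1)

xs = np.linspace(0, 1, 101)
rates = np.array([place_cell(x) for x in xs])
peaks = xs[rates > 0.99]
print(peaks)  # fields ~0.1 m into each compartment: x ≈ 0.1 and x ≈ 0.7
```

Rotating one compartment (so its west wall no longer lies to the allocentric west) would break this repetition, in line with the radial-configuration result.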

    Role of the hippocampus in goal representation: Insights from behavioural and electrophysiological approaches

    The hippocampus plays an important role in spatial cognition, as supported by the location-specific firing of hippocampal place cells. In random foraging tasks, each place cell fires at a specific position (‘place field’) while other hippocampal pyramidal neurons remain silent. A recent study demonstrated reliable extra-field activity in most CA1 place cells of rats waiting for reward delivery in an uncued goal zone. While the location-specific activity of place cells is thought to underlie a flexible representation of space, the nature of this goal-related signal remains unclear. To test whether hippocampal goal-related activity reflects a representation of goal location or a reward-related signal, we designed a two-goal navigation task in which rats were free to choose between two uncued spatial goals to receive a reward. The magnitude of the reward associated with each goal zone was manipulated, thereby changing the goal's value. We recorded CA1 and CA3 unit activity from rats performing this task. Behaviourally, rats were able to remember each goal location and flexibly adapt their choices to goal values. Electrophysiological data showed that a large majority of CA1-CA3 place and silent cells expressed goal-related activity. This activity was independent of goal value and of the rats’ behavioural choices. Importantly, a large proportion of cells expressed goal-related activity at one goal zone only. Altogether, our findings suggest that the hippocampus processes and stores relevant information about the spatial characteristics of the goal. This goal representation could be used in cooperation with structures involved in decision-making to optimise goal-directed navigation.

    Enacting Memoryscapes: Urban Assemblages and Embodied Memory in Post-Socialist Tashkent


    What makes a landmark a landmark? How active vision strategies help honeybees to process salient visual features for spatial learning

    Mertes M. Primary sensory processing of visual and olfactory signals in the bumblebee brain. Bielefeld: Bielefeld University; 2013.
    For decades, honeybees have been used as an insect model system for answering scientific questions in a variety of areas, owing to their enormous behavioural repertoire paired with their learning capabilities. Similar learning capabilities are also evident in bumblebees, which are closely related to honeybees. Like honeybees, they are central-place foragers that commute between a reliable food source and their nest and therefore need to remember particular facets of their environment to reliably find their way back to these places. Through their flight style, which consists of fast head and body rotations (saccades) interspersed with flight segments of almost no rotational head movement (intersaccades), they can acquire distance information about objects in the environment. Depending on the structure of the environment, bumblebees as well as honeybees can use these objects as landmarks to guide their way between the nest and a particular food source. Landmark learning as a visual task depends, of course, on the visual input perceived by the animal’s eyes. As this visual input changes rapidly during head saccades, in my first project we recorded bumblebees with high-speed cameras in an indoor flight arena while they were solving a navigation task that required them to orient according to landmarks. First, we tracked head orientation during whole flight periods that served to learn the spatial arrangement of the landmarks. In this way we acquired detailed data on the fine structure of their head saccades, which shape the visual input they perceive. Head saccades of bumblebees exhibit a consistent relationship between duration, peak velocity and amplitude, resembling the human so-called "saccadic main sequence" in its main characteristics. 
We also found the bumblebees’ saccadic sequence to be highly stereotyped, similar to many other animals. This hints at a common principle of reliably reducing the time during which the eye is moved, achieved by fast and precise motor control. In my first project I tested bumblebees with salient landmarks in front of a background covered with a random-dot pattern. In a previous study, honeybees were trained with the same landmark arrangement and were additionally tested using landmarks that were camouflaged against the background. As the pattern of the landmark textures did not seem to affect their performance in finding the goal location, it had been assumed that the way they acquire information about the spatial relationship between objects is independent of the objects’ texture. Our aim for the second project of my dissertation was therefore to record the activity of motion-sensitive neurons in the bumblebee, to analyse to what extent object information is contained in a navigation-related visual stimulus movie. We also wanted to clarify whether object texture is represented in the neural responses. As recording from neurons in free-flying bumblebees is not possible, we used one of the recorded bumblebee trajectories to reconstruct a three-dimensional flight path, including data on head orientation. We could therefore reconstruct ego-perspective movies of a bumblebee while it solved a navigational task. These movies were presented to motion-sensitive neurons in the bumblebee lobula. For two different classes of neurons we found that object information was contained in the neuronal response traces. Furthermore, during the intersaccadic parts of flight the object’s texture did not change the general response profile of these neurons, which matches the behavioural findings nicely. However, slight changes in the response profiles acquired during the saccadic parts of flight might allow texture information to be extracted from these neurons at later processing stages. 
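The saccade analysis described above, segmenting head-yaw traces into saccades and intersaccades and relating amplitude to peak velocity (the "main sequence"), can be sketched as follows. The trace is synthetic and every threshold and parameter is invented for illustration, not taken from the recordings.

```python
import numpy as np

# Synthetic head-yaw trace (degrees) sampled at 500 Hz: a hypothetical
# stand-in for high-speed camera tracking data.
fs = 500
yaw, angle = [], 0.0
for amp in (20, 40, 60):                 # three saccades of growing amplitude
    yaw += [angle] * fs                  # intersaccade: essentially no rotation
    ramp = angle + amp * (1 - np.cos(np.linspace(0, np.pi, 25))) / 2  # 50 ms turn
    yaw += list(ramp)
    angle += amp
yaw = np.array(yaw + [angle] * fs)

# Segment saccades: angular speed above a (made-up) threshold marks a saccade.
vel = np.diff(yaw) * fs                  # deg/s
moving = np.abs(vel) > 100
edges = np.flatnonzero(np.diff(moving.astype(int)))
starts, ends = edges[::2] + 1, edges[1::2] + 1

# Main sequence: amplitude and peak velocity rise together across saccades.
amps = [abs(yaw[e] - yaw[s]) for s, e in zip(starts, ends)]
peaks = [np.abs(vel[s:e]).max() for s, e in zip(starts, ends)]
print(len(amps), all(p2 > p1 for p1, p2 in zip(peaks, peaks[1:])))  # 3 True
```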
In the final project of my dissertation I switched from exploring the coding of visual information to the coding of olfactory signals. For honeybees and bumblebees, olfaction is roughly as important for behaviour as vision. But whereas there is a solid knowledge base on honeybee olfaction, with detailed studies on the individual stages of olfactory information processing, this knowledge was missing for the bumblebee. In a first step, we conducted staining experiments and confocal microscopy to identify the input tracts conveying information from the antennae to the first processing stage of olfactory information, the antennal lobe (AL). Using three-dimensional reconstruction of the AL, we could further elucidate typical numbers of the single spheroidal subunits of the AL, which are called glomeruli. Odour molecules that the bumblebee perceives induce activation patterns characteristic of particular odours. By retrogradely staining the output tracts that connect the AL to higher-order processing stages with a calcium indicator, we were able to record the odour-dependent activation patterns of the AL glomeruli and to describe their basic coding principles. As in honeybees, we could show that the odours’ carbon-chain length as well as their functional groups are dimensions that the antennal lobe glomeruli encode in their spatial response patterns. Correlation analyses underlined the strong similarity of the glomerular activity patterns between honeybees and bumblebees.