10 research outputs found

    A model of ant route navigation driven by scene familiarity

    In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be recast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test the proposed route navigation strategy in simulation by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the learnt routes show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
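The familiarity-based strategy described in this abstract can be sketched in a few lines of Python. This is a toy reconstruction, not the authors' code: `render(position, heading)` is a hypothetical stand-in for the agent's visual input, and views are flat lists of pixel intensities.

```python
import math

def familiarity(view, training_views):
    """Familiarity of a view: the negative of its smallest summed pixel
    difference to any view experienced during training (mirroring the
    exhaustive comparison of the first set of experiments)."""
    return -min(sum(abs(a - b) for a, b in zip(view, t))
                for t in training_views)

def best_heading(render, position, training_views, n_headings=36):
    """The scanning routine: sample candidate headings, render the view
    seen in each direction, and move in the most familiar one."""
    candidates = [i * 2 * math.pi / n_headings for i in range(n_headings)]
    return max(candidates,
               key=lambda h: familiarity(render(position, h), training_views))
```

Swapping `familiarity` for the output of a trained network would give the neural-network variant without changing the scanning loop.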

    Snapshots in ants? New interpretations of paradigmatic experiments

    Ants can use visual information to guide long idiosyncratic routes and accurately pinpoint locations in complex natural environments. It has often been assumed that the world knowledge of these foragers consists of multiple discrete views that are retrieved sequentially, breaking routes into sections and controlling approaches to a goal. Here we challenge this idea using a model of visual navigation that replicates the results of paradigmatic experiments without storing or using discrete views, even though those experiments have been taken as evidence that ants navigate using such discrete snapshots. Instead of sequentially retrieving views, the proposed architecture gathers information from all experienced views into a single memory network, and uses this network all along the route to determine the most familiar heading at a given location. This algorithm is consistent with the navigation of ants in both laboratory and natural environments, and provides a parsimonious solution for dealing with visual information from multiple locations.
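A minimal stand-in for such a single-memory architecture: all training views are distilled into one low-dimensional subspace, and familiarity at any point along the route is the (negative) reconstruction error of the current view. The PCA subspace, the class name, and the parameter `k` are illustrative choices, not the network actually proposed.

```python
import numpy as np

class FamiliarityNet:
    """One holistic memory built from all experienced views; no discrete
    snapshots are stored or retrieved."""
    def __init__(self, training_views, k=8):
        X = np.asarray(training_views, dtype=float)
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[:k]  # top-k principal directions of the views

    def familiarity(self, view):
        """Negative reconstruction error: high for views resembling the
        training set, low for novel views."""
        v = np.asarray(view, dtype=float) - self.mean
        recon = self.basis.T @ (self.basis @ v)
        return -float(np.linalg.norm(v - recon))
```

Queried with candidate headings at a given location, the most familiar heading is the one whose view reconstructs best.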

    The internal maps of insects


    Visual navigation in ants

    Navigating efficiently in the outside world requires many cognitive abilities, such as extracting, memorising, and processing information. The remarkable navigational abilities of insects are an existence proof of how small brains can produce exquisitely efficient, robust behaviour in complex environments. During their foraging trips, insects such as ants and bees are known to rely on both path integration and learnt visual cues to recapitulate a route or reach familiar places like the nest. The strategy of path integration is well understood, but much less is known about how insects acquire and use visual information. Field studies give good descriptions of visually guided routes, but our understanding of the underlying mechanisms comes mainly from simplified laboratory conditions using artificial, geometrically simple landmarks. My thesis proposes an integrative approach that combines (1) field and lab experiments on two visually guided ant species (Melophorus bagoti and Gigantiops destructor) and (2) an analysis of panoramic pictures recorded along the animal's route. The use of panoramic pictures allows an objective quantification of the visual information available to the animal.
Results from both species, in the lab and the field, converged, showing that ants do not segregate their visual world into objects, such as landmarks or discrete features, as a human observer might assume. Instead, efficient navigation seems to arise from the use of cues spread across the ants' panoramic visual field, encompassing both proximal and distal objects together. Such relatively unprocessed panoramic views, even at low resolution, provide remarkably unambiguous spatial information in natural environments. Using such a simple but efficient panoramic visual input, rather than focusing on isolated landmarks, seems an appropriate strategy for coping with the complexity of natural scenes and the poor resolution of insects' eyes. Panoramic pictures can also serve as a basis for running analytical models of navigation. The predictions of these models can be directly compared with the actual behaviour of real ants, allowing the iterative tuning and testing of different hypotheses. This integrative approach led me to the conclusion that ants do not rely on a single navigational technique, but might switch between strategies according to whether they are on or off their familiar terrain. For example, ants can robustly recapitulate a familiar route by simply aligning their body so that the current view best matches their memory. However, this strategy becomes ineffective when they are displaced away from the familiar route. In such a case, ants appear to head instead towards the regions where the skyline appears lower than the height recorded in their memory, which generally leads them closer to a familiar location. How ants choose between strategies at a given time might simply be based on the degree of familiarity of the panoramic scene currently perceived. Finally, this thesis raises questions about the nature of ant memories. Past studies proposed that ants memorise a succession of discrete 2D 'snapshots' of their surroundings.
In contrast, results obtained here show that knowledge from the end of a foraging route (15 m) strongly impacts behaviour at the beginning of the route, suggesting that the visual knowledge of a whole foraging route may be compacted into a single holistic memory. Accordingly, repetitive training on the exact same route clearly affects the ants' behaviour, suggesting that the memorised information is processed over time rather than 'obtained at once'. While ants navigate along their familiar route, their visual system is continually stimulated by a slowly evolving scene, and learning a general pattern of stimulation, rather than storing independent but very similar snapshots, appears a reasonable hypothesis to explain navigation on a natural scale; such learning works remarkably well with neural networks. Nonetheless, the precise nature of ants' visual memories and how elaborate they are remain wide open questions. Overall, my thesis tackles the nature of ants' perception and memory, as well as how both are processed together to output an appropriate navigational response. These results are discussed in the light of comparative cognition. Both vertebrates and insects have solved the same problem of navigating efficiently in the world. In light of Darwin's theory of evolution, there is no a priori reason to think that there is a clear division between the cognitive mechanisms of different species. The current gap between insect and vertebrate cognitive sciences may result from different approaches rather than from real differences. Research on insect navigation has been approached with a bottom-up philosophy, one that examines how simple mechanisms can produce seemingly complex behaviour. Such parsimonious solutions, like the ones explored in the present thesis, can provide useful baseline hypotheses for navigation in other, larger-brained animals, and thus contribute to a more truly comparative cognition.
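The off-route recovery rule described above, heading for where the skyline looks lower than remembered, reduces to an argmax over azimuths. A minimal sketch, assuming skylines are stored as one elevation value per azimuth bin:

```python
def skyline_heading(current, memorized, azimuths):
    """Return the azimuth where the perceived skyline falls furthest
    below the memorized one; moving that way tends to lead the agent
    back toward familiar terrain."""
    drop = [m - c for m, c in zip(memorized, current)]
    return azimuths[drop.index(max(drop))]
```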

    Theoretical Computational Models for the Cognitive Map

    Decades of research into the neural representation of physical space have uncovered a complex and distributed network of specialized cells in the mammalian brain. It is now clear that space is represented in some form, but its exact implementation remains debated. Accordingly, the overall aim of my thesis is to further the understanding of the neural representation of space, the cognitive map, with the aid of theoretical computational modeling (as opposed to data-driven modeling). The thesis consists of four separate publications which approach the problem from different but complementary perspectives: The first two publications consider goal-directed navigation with topological graph models, which encode the environment as a state-action graph of local positions connected by simple movement instructions. Graph models are often less constrained than coordinate-based metric maps and offer a variety of computational advantages; for example, graph search algorithms may be used to derive optimal routes between arbitrary positions. In the first model, places are encoded by population codes of low-level image features. For goal-directed navigation, a set of simultaneous paths is obtained between the start and goal populations, and the final trajectory follows the population average. This makes route following more robust and circumvents problems related to place recognition. The second model proposes a hierarchical place graph which subdivides the known environment into well-defined regions. The region knowledge is included in the graph as superordinate nodes.
During wayfinding, these nodes distort the resulting paths in a way that matches region-related biases observed in human navigation experiments. The third publication also considers region coding but focuses on a more concrete biological implementation in the form of place cell and grid cell activity. As opposed to unique nodes in a graph, place cells may express multiple firing fields in different contexts or regions. This phenomenon is known as “remapping” and may be fundamental to the encoding of region knowledge. The dynamics are modeled in a joint attractor neural network of place and grid cells: whenever a virtual agent moves into another region, the context changes and the model remaps the cell activity to an associated pattern from memory. The model is able to replicate experimental findings in a series of mazes and may therefore explain the observed activity in the biological brain. The fourth publication returns to graph models, joining the debate on the fundamental structure of the cognitive map: the internal representation of space has often been argued to take the form of either a non-metric topological graph or a Euclidean metric map in which places are assigned specific coordinates. While the Euclidean map is more powerful, human navigation in experiments often deviates strongly from a (correct) metric prediction, which has been taken as an argument for the non-metric alternative. However, it is also possible to find an alternative metric explanation for the non-metric graphs by embedding the latter into metric space. The method is demonstrated in a specific non-Euclidean example environment, where it explains subject behavior as well as the purely non-metric graph does; it is argued that it may therefore be a better model for spatial knowledge. Beyond the individual results, the thesis discusses the commonalities of the models and how they compare to current research on the cognitive map.
I also consider how the findings may be combined into more complex models to further the understanding of the cognitive neuroscience of space.
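The hierarchical place-graph idea from the second publication can be illustrated with a two-level planner: a superordinate region graph is planned first, and place-level search is then restricted to the planned region corridor. The function names and the BFS planner are invented for this sketch; the publication's actual algorithm may differ.

```python
from collections import defaultdict, deque

def bfs_path(graph, start, goal):
    """Plain breadth-first search over an adjacency mapping."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return None

def hierarchical_path(place_graph, region_of, start, goal):
    """Two-level wayfinding: derive the superordinate region graph, plan
    a region sequence, then search for places only inside those regions."""
    region_graph = defaultdict(set)
    for a in place_graph:
        for b in place_graph[a]:
            if region_of[a] != region_of[b]:
                region_graph[region_of[a]].add(region_of[b])
    allowed = set(bfs_path(region_graph, region_of[start], region_of[goal]))
    sub = {n: [m for m in place_graph[n] if region_of[m] in allowed]
           for n in place_graph if region_of[n] in allowed}
    return bfs_path(sub, start, goal)
```

Because the region nodes constrain the search corridor, detours that leave the planned regions are pruned, which is one way such superordinate knowledge can distort routes.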

    On the relationship between neuronal codes and mental models

    The superordinate aim of my work towards this thesis was a better understanding of the relationship between mental models and the underlying principles that lead to the self-organization of neuronal circuitry. The thesis consists of four individual publications, which approach this goal from differing perspectives. While the formation of sparse coding representations in neuronal substrate has been investigated extensively, many research questions on how sparse coding may be exploited for higher cognitive processing are still open. The first two studies, included as chapter 2 and chapter 3, asked to what extent representations obtained with sparse coding match mental models. We identified the following selectivities in sparse coding representations: with stereo images as input, the representation was selective for the disparity of image structures, which can be used to infer the distance of structures to the observer. Furthermore, it was selective for the predominant orientation in textures, which can be used to infer the orientation of surfaces. With optic flow from egomotion as input, the representation was selective for the direction of egomotion in 6 degrees of freedom. Due to the direct relation between selectivity and physical properties, these representations, obtained with sparse coding, can serve as early sensory models of the environment. The cognitive processes behind spatial knowledge rest on mental models that represent the environment. We presented a topological model for wayfinding in the third study, included as chapter 4. It describes a dual population code, where the first population code encodes places by means of place fields, and the second population code encodes motion instructions based on links between place fields. We did not focus on an implementation in biological substrate or on an exact fit to physiological findings.
The model is a biologically plausible, parsimonious method for wayfinding, which may be close to an intermediate step of emergent skills in an evolutionary navigational hierarchy. Our automated testing of visual performance in mice, included in chapter 5, is an example of behavioral testing in the perception-action cycle. The goal of this study was to quantify the optokinetic reflex. Due to the rich behavioral repertoire of mice, quantification required many elaborate steps of computational analysis. Animals and humans are embodied living systems and are therefore composed of strongly enmeshed modules or entities, which are also enmeshed with the environment. In order to study living systems as a whole, it is necessary to test hypotheses, for example on the nature of mental models, in the perception-action cycle. In summary, the studies included in this thesis extend our view of the character of early sensory representations as mental models, as well as of high-level mental models for spatial navigation. Additionally, the thesis contains an example of the evaluation of hypotheses in the perception-action cycle.
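The dual population code described above can be caricatured in a few lines: Gaussian place fields form the first population, and link cells between consecutive fields of a stored route form the second, whose activity-weighted directions yield a motion instruction. The names and the Gaussian tuning are assumptions of this sketch, not the study's exact formulation.

```python
import numpy as np

def place_activity(pos, centers, sigma=1.0):
    """First population: place cells with Gaussian place fields."""
    d = np.linalg.norm(np.asarray(centers, dtype=float)
                       - np.asarray(pos, dtype=float), axis=1)
    return np.exp(-(d / sigma) ** 2)

def route_instruction(pos, centers, route, sigma=1.0):
    """Second population: one link cell per pair of consecutive place
    fields on a stored route, each encoding the direction of its link.
    The motor output is the place-activity-weighted sum of link
    directions, so the instruction blends smoothly between fields."""
    centers = np.asarray(centers, dtype=float)
    act = place_activity(pos, centers, sigma)
    vec = np.zeros(2)
    for i, j in zip(route, route[1:]):
        vec += act[i] * (centers[j] - centers[i])
    n = np.linalg.norm(vec)
    return vec / n if n > 0 else vec
```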

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Game of Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
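A toy model of the strip-partitioned step: mappers key each live cell by its horizontal strip and duplicate boundary-row cells to the neighbouring strip as halo, so each reducer can compute the next generation of its strip independently. This illustrates only the partitioning idea; the paper's Elastic MR streaming details (key encoding, stdin/stdout I/O) are not reproduced.

```python
from collections import defaultdict

def life_step(live):
    """Direct Conway step on a set of live (row, col) cells."""
    counts = defaultdict(int)
    for r, c in live:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    counts[(r + dr, c + dc)] += 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def mr_life_step(live, strip_height=2):
    """One MapReduce-style round with strip partitioning."""
    # map phase: key = strip id, value = cell (own copy plus halo copies)
    shuffled = defaultdict(set)
    for r, c in live:
        s = r // strip_height
        shuffled[s].add((r, c))
        if r % strip_height == 0:
            shuffled[s - 1].add((r, c))          # halo for strip above
        if r % strip_height == strip_height - 1:
            shuffled[s + 1].add((r, c))          # halo for strip below
    # reduce phase: step each strip locally, keep only owned rows
    out = set()
    for s, cells in shuffled.items():
        for r, c in life_step(cells):
            if r // strip_height == s:
                out.add((r, c))
    return out
```

Each generation costs one map-shuffle-reduce round, and only the two boundary rows of each strip cross partition boundaries.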

    Linked local navigation for visual route guidance

    Insects are able to navigate reliably between food and nest using only visual information. This behavior has inspired many models of visual landmark guidance, some of which have been tested on autonomous robots. The majority of these models work by comparing the agent's current view with a view of the world stored when the agent was at the goal. The region from which agents can successfully reach home is therefore limited to the goal's visual locale, that is, the area around the goal where the visual scene is not radically different from the scene at the goal position. Ants are known to navigate over large distances using visually guided routes consisting of a series of visual memories. Taking inspiration from such route navigation, we propose a framework for linking together local navigation methods. We implement this framework on a robotic platform and test it in a series of environments in which individual local navigation methods fail. Finally, we show that the framework is robust to environments of varying complexity.
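The linking idea can be sketched abstractly: each stored memory defines a local navigation method that works only within its own visual locale, and the agent switches to the next memory on arrival. Here the local method is idealised as straight-line attraction toward a waypoint; the names and the `locale_radius` threshold are assumptions of this sketch, not the robotic implementation.

```python
import math

def linked_route(start, waypoints, step=0.2, locale_radius=0.5):
    """Chain local navigation methods: steer toward the current waypoint
    (a stand-in for a snapshot-based local method) and switch to the
    next stored memory once inside its visual locale."""
    x, y = start
    path = [(x, y)]
    for wx, wy in waypoints:
        while math.hypot(wx - x, wy - y) > locale_radius:
            d = math.hypot(wx - x, wy - y)
            x += step * (wx - x) / d
            y += step * (wy - y) / d
            path.append((x, y))
    return path
```

Any local homing method with a finite catchment area, such as snapshot matching, could replace the straight-line attraction without changing the switching logic.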