Use of Augmented Reality in Human Wayfinding: A Systematic Review
Augmented reality technology has emerged as a promising solution to assist
with wayfinding difficulties, bridging the gap between obtaining navigational
assistance and maintaining an awareness of one's real-world surroundings. This
article presents a systematic review of research literature related to AR
navigation technologies. An in-depth analysis of 65 salient studies was
conducted, addressing four main research topics: 1) current state-of-the-art of
AR navigational assistance technologies, 2) user experiences with these
technologies, 3) the effect of AR on human wayfinding performance, and 4)
impacts of AR on human navigational cognition. Notably, studies demonstrate
that AR can decrease cognitive load and improve cognitive map development, in
contrast to traditional guidance modalities. However, findings regarding
wayfinding performance and user experience were mixed. Some studies suggest
little impact of AR on improving outdoor navigational performance, and certain
information modalities may be distracting and ineffective. This article
discusses these nuances in detail, supporting the conclusion that AR holds
great potential in enhancing wayfinding by providing enriched navigational
cues, interactive experiences, and improved situational awareness.
On the relationship between neuronal codes and mental models
The superordinate aim of my work towards this thesis
was a better understanding
of the relationship between mental models
and the underlying principles that lead to the self-organization
of neuronal circuitry.
The thesis consists of four individual publications,
which approach this goal from differing perspectives.
While the formation of sparse coding representations in neuronal substrate
has been investigated extensively,
many research questions
on how sparse coding may be exploited for higher cognitive processing
are still open.
The first two studies,
included as chapter 2 and chapter 3,
asked to what extent representations obtained with sparse coding
match mental models.
We identified the following selectivities in sparse coding representations:
with stereo images as input,
the representation was selective for the disparity of image structures,
which can be used to infer the distance of structures to the observer.
Furthermore, it was selective for the predominant orientation in textures,
which can be used to infer the slant of surfaces.
With optic flow from egomotion as input,
the representation was selective for the direction of egomotion
in six degrees of freedom.
Due to the direct relation between selectivity and physical properties,
these representations, obtained with sparse coding,
can serve as early sensory models of the environment.
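The summary does not specify which sparse coding algorithm was used. Purely as an illustration of the technique, the sketch below infers a sparse code for a single synthetic input via ISTA over a random unit-norm dictionary; all names, sizes, and parameters are hypothetical.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=500):
    """Infer a sparse code a minimizing ||x - D a||^2 / 2 + lam * ||a||_1
    with the iterative shrinkage-thresholding algorithm (ISTA)."""
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L          # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft thresholding
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -0.5, 0.8]         # a sparse ground-truth code
x = D @ a_true                                  # synthetic "sensory" input
a_hat = ista(D, x)
print(np.argsort(-np.abs(a_hat))[:3])          # indices of the strongest coefficients
```

In a well-conditioned regime the strongest recovered coefficients coincide with the true support, which is what makes such codes selective for interpretable physical properties of the input.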
The cognitive processes behind spatial knowledge
rest on mental models that represent the environment.
We presented a topological model for wayfinding
in the third study,
included as chapter 4.
It describes a dual population code,
where the first population code encodes places
by means of place fields,
and the second population code encodes motion instructions
based on links between place fields.
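A minimal sketch of such a dual population code, with invented place-field centers, links, and parameters (this is not the model of chapter 4, only an illustration of the two codes):

```python
import numpy as np

# Hypothetical sketch of a dual population code for wayfinding:
# population 1 encodes the current place via Gaussian place fields,
# population 2 encodes a motion instruction on the link between two places.

centers = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])  # place-field centers
links = [(0, 1), (1, 2), (2, 3)]  # topological connectivity (a path graph)

def place_activity(pos, sigma=0.4):
    """Population 1: place-cell activity falls off with distance to each field center."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def motion_instruction(pos):
    """Population 2: one cell per link; the winning link points from the
    currently most active place field towards its successor on the path."""
    current = int(np.argmax(place_activity(pos)))
    for i, (a, b) in enumerate(links):
        if a == current:
            heading = centers[b] - centers[a]
            return i, heading / np.linalg.norm(heading)
    return None, np.zeros(2)  # at the goal: no outgoing link

link_id, heading = motion_instruction(np.array([0.1, -0.05]))
print(link_id, heading)  # nearest place is 0, so follow link (0, 1)
```

The point of the dual code is that the second population reads out behavior directly from the topology, without any metric map of the environment.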
We did not focus on an implementation in biological substrate
or on an exact fit to physiological findings.
The model is a biologically plausible, parsimonious method for wayfinding,
which may be close to an intermediate step
of emergent skills in an evolutionary navigational hierarchy.
Our automated testing for visual performance in mice,
included in chapter 5,
is an example of behavioral testing in the perception-action cycle.
The goal of this study was to quantify the optokinetic reflex.
Due to the rich behavioral repertoire of mice,
quantification required many elaborate steps of computational analyses.
Animals and humans are embodied living systems,
and therefore composed of strongly enmeshed modules or entities,
which are also enmeshed with the environment.
In order to study living systems as a whole,
it is necessary to test hypotheses,
for example on the nature of mental models,
in the perception-action cycle.
In summary,
the studies included in this thesis
extend our understanding of the character of early sensory representations
as mental models,
as well as of high-level mental models
for spatial navigation.
Additionally, the thesis contains an example
of the evaluation of hypotheses in the perception-action cycle.
Mobile Robots Navigation
Mobile robot navigation comprises several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors from all over the world. Research cases are documented in 32 chapters organized into seven categories, described next.
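As a toy illustration of the path-planning activity (not taken from any chapter of the book), breadth-first search on a small 4-connected occupancy grid finds a shortest obstacle-free path; the grid and cell names are invented for this sketch.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []                      # reconstruct by walking predecessors
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

BFS guarantees a shortest path in moves on an unweighted grid; weighted costs would call for Dijkstra or A* instead.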
Wholetoning: Synthesizing Abstract Black-and-White Illustrations
Black-and-white imagery is a popular and interesting depiction technique in the visual arts, in which varying tints and shades of a single colour are used. Within the realm of black-and-white images, there is a set of black-and-white illustrations that only depict salient features by ignoring
details, and reduce colour to pure black and white, with no intermediate tones. These illustrations hold tremendous potential to enrich decoration, human communication and entertainment. Producing abstract black-and-white illustrations by hand relies on a time-consuming and difficult process that requires both artistic talent and technical expertise. Previous work has not explored this style of illustration in much depth, and simple approaches such as thresholding are insufficient for stylization and artistic control.
I use the word wholetoning to refer to illustrations that feature a high degree of shape and tone abstraction. In this thesis, I explore computer algorithms for generating wholetoned illustrations. First, I offer a general-purpose framework, “artistic thresholding”, to control the generation of
wholetoned illustrations in an intuitive way. The basic artistic thresholding algorithm is an optimization framework based on simulated annealing to obtain the final bi-level result. I design an extensible objective function from my observations of many wholetoned images. The objective
function is a weighted sum over terms that encode features common to wholetoned illustrations.
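The actual terms of the thesis's objective function are not given in this abstract. The sketch below only illustrates the general recipe, a weighted sum of invented terms (fidelity to the grayscale input plus a smoothness penalty) minimized by single-pixel-flip simulated annealing on a stand-in random image.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(B, G, w_fid=1.0, w_smooth=0.3):
    """Weighted sum of two illustrative terms: fidelity to the grayscale
    input G, plus a smoothness term penalizing black/white transitions."""
    fidelity = np.mean((B - G) ** 2)
    smoothness = np.mean(np.abs(np.diff(B, axis=0))) + np.mean(np.abs(np.diff(B, axis=1)))
    return w_fid * fidelity + w_smooth * smoothness

def anneal(G, n_steps=20000, T0=0.1):
    B = (G > 0.5).astype(float)                 # start from a plain threshold
    E = energy(B, G)
    for t in range(n_steps):
        T = T0 * (1.0 - t / n_steps) + 1e-9     # linear cooling schedule
        i = rng.integers(G.shape[0])
        j = rng.integers(G.shape[1])
        B[i, j] = 1.0 - B[i, j]                 # propose flipping one pixel
        E_new = energy(B, G)
        if E_new > E and rng.random() >= np.exp((E - E_new) / T):
            B[i, j] = 1.0 - B[i, j]             # reject: undo the flip
        else:
            E = E_new                           # accept (always, if energy decreased)
    return B

G = np.clip(rng.normal(0.5, 0.25, size=(16, 16)), 0.0, 1.0)  # stand-in grayscale image
B = anneal(G)
print(round(energy(B, G), 4))
```

Swapping in other weighted terms (edge preservation, component counts, and so on) is what makes such an objective extensible.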
Based on the framework, I then explore two specific wholetoned styles: papercutting and representational calligraphy. I define a paper-cut design as a wholetoned image with connectivity constraints that ensure that it can be cut out from only one piece of paper. My computer generated papercutting technique can convert an original wholetoned image into a paper-cut design. It can also synthesize stylized and geometric patterns often found in traditional designs.
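The single-piece connectivity constraint can be verified with a flood fill. A minimal sketch, assuming 4-connectivity and a binary foreground (both illustrative choices, not necessarily those of the thesis):

```python
from collections import deque

def is_single_piece(img):
    """Check (4-connectivity) that all foreground pixels in a binary image
    form one connected component, so the design can be cut from one sheet."""
    rows, cols = len(img), len(img[0])
    fg = {(r, c) for r in range(rows) for c in range(cols) if img[r][c] == 1}
    if not fg:
        return True
    seen = {next(iter(fg))}                # flood fill from an arbitrary pixel
    frontier = deque(seen)
    while frontier:
        r, c = frontier.popleft()
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if n in fg and n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen == fg                      # reached every foreground pixel?

connected = [[1, 1, 0],
             [0, 1, 0],
             [0, 1, 1]]
broken = [[1, 0, 0],
          [0, 0, 0],
          [0, 0, 1]]
print(is_single_piece(connected), is_single_piece(broken))  # True False
```

A synthesis method would use such a check (or an equivalent constraint inside the optimization) to guarantee the cuttability of the output.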
Representational calligraphy is defined as a wholetoned image with the constraint that all depiction elements must be letters. The procedure of generating representational calligraphy designs is formalized as a “calligraphic packing” problem. I provide a semi-automatic technique that can warp a sequence of letters to fit a shape while preserving their readability.
Visual Learning in Virtual Reality in Apis mellifera
Equipped with a brain smaller than one cubic millimeter and containing ~950,000 neurons, honeybees display a rich behavioral repertoire, among which appetitive learning and memory play a fundamental role in the context of foraging activities. Besides elemental forms of learning, where bees learn specific associations between environmental features, bees also master different forms of non-elemental learning, including categorization, contextual learning and rule abstraction. These characteristics make them an ideal model for the study of visual learning and its underlying neural mechanisms. In order to access the working brain of a bee during visual learning, the insect needs to be immobilized. To do so, virtual reality (VR) setups have been developed to allow bees to behave within a virtual world, while remaining stationary within the real world. During my PhD, I developed flexible, open-source 3D VR software to study visual learning, and used it to improve existing conditioning protocols and to investigate the neural mechanisms of visual learning. By developing a true 3D environment, we opened the possibility of adding frontal background cues, which were also subjected to 3D updating based on the bee's movements. We thus studied if and how the presence of such motion cues affected visual discrimination in our VR landscape.
Our results showed that the presence of frontal background motion cues impaired the bees' performance. Whenever these cues were suppressed, color discrimination learning became possible. Our results point towards deficits in attentional processes underlying color discrimination whenever motion cues from the background were frontally available in our VR setup.
VR allows presenting insects with a tightly controlled visual experience during visual learning. We took advantage of this feature to perform ex-vivo analysis of immediate early gene (IEG) expression, focusing on kakusei, Hr38 and Egr1, in specific brain areas, comparing learner and non-learner bees. Using both 3D VR and a more restrictive 2D version of the same task, we tackled two questions: first, which brain regions are involved in visual learning? And second, does the pattern of brain activation depend on the modality of learning? Learner bees that solved the task in 3D showed increased activity of the Mushroom Bodies (MB), which is coherent with the role of the MB in sensory integration and learning. Surprisingly, we found a completely different pattern of IEG expression in the bees that solved the task in 2D conditions. We observed a neural signature that spanned the optic lobes and MB calyces and was characterized by IEG downregulation, consistent with an inhibitory trace.
The study of the neural mechanisms of visual learning requires invasive approaches to access the brain of the insects, which induces stress in the animals and can thus itself impair behavior. To mitigate this effect, the bumble bee Bombus terrestris could constitute a good alternative to Apis mellifera, as bumble bees are more robust. In the last part of this work, we therefore explored the performance of bumble bees in a differential learning task in VR and compared it to that of honey bees. We found that not only are bumble bees able to solve the task as well as honey bees, but they also engage more with the virtual environment, leading to a lower ratio of discarded individuals. We also found no correlation between the size of bumble bees and their learning performance. This is surprising, as larger bumble bees, which assume the role of foragers in the colony, have been reported in the literature to be better at learning visual tasks.
An Approach Based on Particle Swarm Optimization for Inspection of Spacecraft Hulls by a Swarm of Miniaturized Robots
The remoteness and hazards that are inherent to the operating environments of space infrastructures promote their need for automated robotic inspection. In particular, micrometeoroid and orbital debris impact and structural fatigue are common sources of damage to spacecraft hulls. Vibration sensing has been used to detect structural damage in spacecraft hulls as well as in structural health monitoring practices in industry by deploying static sensors. In this paper, we propose using a swarm of miniaturized vibration-sensing mobile robots realizing a network of mobile sensors. We present a distributed inspection algorithm based on the bio-inspired particle swarm optimization and evolutionary algorithm niching techniques to deliver the task of enumeration and localization of an a priori unknown number of vibration sources on a simplified 2.5D spacecraft surface. Our algorithm is deployed on a swarm of simulated cm-scale wheeled robots. These are guided in their inspection task by sensing vibrations arising from failure points on the surface which are detected by on-board accelerometers. We study three performance metrics: (1) proximity of the localized sources to the ground truth locations, (2) time to localize each source, and (3) time to finish the inspection task given a 75% inspection coverage threshold. We find that our swarm is able to successfully localize the present sources.
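The paper's distributed, niching-based variant is not reproduced here. As a simplified single-source illustration only, a standard global-best PSO can home in on the point of maximum vibration amplitude; the sensor model, source location, and all parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

SOURCE = np.array([3.0, -2.0])  # hypothetical failure point on the hull surface

def vibration_amplitude(pos):
    """Stand-in sensor model: amplitude decays with distance to the source."""
    return 1.0 / (1.0 + np.linalg.norm(pos - SOURCE))

def pso(n_robots=12, n_steps=150, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, size=(n_robots, 2))          # robot positions
    v = np.zeros_like(x)                                # robot velocities
    pbest = x.copy()                                    # personal best positions
    pbest_f = np.array([vibration_amplitude(p) for p in x])
    gbest = pbest[np.argmax(pbest_f)].copy()            # swarm-wide best
    for _ in range(n_steps):
        r1, r2 = rng.random((2, n_robots, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([vibration_amplitude(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest

estimate = pso()
print(np.round(estimate, 2))
```

Enumerating an unknown number of sources, as in the paper, additionally requires niching so that sub-swarms settle on distinct local maxima instead of all collapsing onto the strongest one.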