Snapshot navigation in the wavelet domain
Many animals rely on robust visual navigation, which can be explained by snapshot models: an agent is assumed to store egocentric panoramic images and subsequently to use them to recover a heading by comparing current views to the stored snapshots. Long-range route navigation can also be explained by such models, by storing multiple snapshots along a training route and comparing the current image to them. For such models, memory capacity and comparison time increase dramatically with route length, rendering them infeasible for small-brained insects and low-power robots, where computation and storage are limited. One way to reduce the requirements is to use a compressed image representation. Inspired by the filter-bank-like arrangement of the visual system, we here investigate how a frequency-based image representation influences the performance of a typical snapshot model. By decomposing views into wavelet coefficients at different levels and orientations, we achieve a compressed visual representation that remains robust when used for navigation. Our results indicate that route following based on wavelet coefficients is not only possible but gives increased performance over a range of other models.
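As a rough illustration of the idea (a minimal sketch, not the paper's actual pipeline, which retains wavelet coefficients at several decomposition levels and orientations), snapshot comparison over a coarse Haar approximation could look like the following; the function names and the level count are illustrative assumptions:

```python
import numpy as np

def haar2d(img, levels=3):
    """Coarse 2-D Haar approximation: repeatedly low-pass 2x2 blocks.

    Returns the approximation coefficients after `levels` averaging
    steps -- a heavily compressed version of the view. (Illustrative;
    a full wavelet decomposition also keeps detail coefficients.)
    """
    a = img.astype(float)
    for _ in range(levels):
        a = (a[0::2, :] + a[1::2, :]) / 2.0  # low-pass over rows
        a = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-pass over columns
    return a

def best_heading(current, snapshots, levels=3):
    """Index of the stored snapshot whose compressed representation is
    closest (sum of squared differences) to the current view, as in
    snapshot-model route following."""
    cur = haar2d(current, levels)
    costs = [np.sum((haar2d(s, levels) - cur) ** 2) for s in snapshots]
    return int(np.argmin(costs))
```

Keeping only the coarse approximation reduces a 16x64 view to 16 coefficients here; the paper's point is that such compressed frequency-based representations can still support reliable route following.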
Learning cognitive maps: Finding useful structure in an uncertain world
In this chapter we describe the central mechanisms that influence how people learn about large-scale space. We focus particularly on how these mechanisms enable people to cope effectively both with the uncertainty inherent in a constantly changing world and with the high information content of natural environments. The major lessons are that humans get by with a 'less is more' approach to building structure, and that they are able to adapt quickly to environmental changes thanks to a range of general-purpose mechanisms. By looking at abstract principles, instead of concrete implementation details, it is shown that the study of human learning can provide valuable lessons for robotics. Finally, these issues are discussed in the context of an implementation on a mobile robot. © 2007 Springer-Verlag Berlin Heidelberg
An autonomous navigational system using GPS and computer vision for futuristic road traffic
Navigational services are one of the most essential dependencies of any transport system, and at present various revolutionary approaches have contributed towards their improvement. This paper reviews navigational systems based on the global positioning system (GPS) and computer vision, and finds that there is a large gap between the actual demand for navigation and what currently exists. Therefore, the proposed study discusses a novel framework for an autonomous navigation system that uses GPS as well as computer vision, considering the case study of a futuristic road traffic system. An analytical model is built in which geo-referenced data from GPS are integrated with signals captured from the visual sensors to implement this concept. The simulated outcome shows that the proposed framework offers enhanced accuracy as well as faster processing in contrast to existing approaches.
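The abstract does not specify how the geo-referenced GPS data and visual-sensor signals are integrated. As a hedged illustration of one standard way such estimates can be combined, an inverse-variance weighted fusion might look like this; the function and all parameters are hypothetical, not the paper's model:

```python
def fuse(gps_pos, gps_var, vis_pos, vis_var):
    """Inverse-variance weighted fusion of a GPS fix and a vision-based
    position estimate, both given as (x, y) tuples with scalar variances.
    The more certain source (smaller variance) gets the larger weight."""
    w_gps = 1.0 / gps_var
    w_vis = 1.0 / vis_var
    x = (w_gps * gps_pos[0] + w_vis * vis_pos[0]) / (w_gps + w_vis)
    y = (w_gps * gps_pos[1] + w_vis * vis_pos[1]) / (w_gps + w_vis)
    return (x, y)
```

With equal variances this reduces to a plain average; as the vision estimate becomes more reliable its weight grows, which is the usual rationale for fusing GPS with visual odometry in urban canyons where GPS degrades.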
Biomimetic models of visual navigation - active sensing for embodied intelligence
Insects have developed small-scale search behaviours to pursue navigation-relevant stimuli more effectively. These often resemble a variation of Zig-Zagging: steering periodically to the left and right, thereby increasing sampling. In this context we investigate the role of a homologous insect brain structure, the Lateral Accessory Lobe (LAL), which has been described as a pre-motor centre but has received limited attention so far. Following a synthesis of the literature on the LAL, we developed a steering framework which proposes that, with lateralised stimuli as input, the LAL can initiate Zig-Zagging behaviour if the input is too weak, meaning unreliable, and targeted steering behaviours if the input is strong, thus reliable. Based on this framework we model a Spiking Neural Network (SNN), investigating a sensory-modulated Central Pattern Generator (CPG) as a possible neural mechanism enabling adaptive search behaviours. We investigated the parameter space of the model to discover both the range of possible behaviours and which parameter combinations lead to the previously described behaviour. We found that no single parameter combination accounts for the majority of observed behaviours. Furthermore, changing the computational noise levels does not lead to a break-down of this behaviour. We conclude that this neural architecture is robust in generating an adaptable Zig-Zagging behaviour. Additionally, we developed a more comprehensive network to explore the functions of known neuron types with regard to motor control. To investigate how this steering framework might work for view-based navigation, we examined how lateralised sensory input can be used for snapshot navigation. We used a 3D reconstruction from a LiDAR-scanned field site ("Antworld") to generate realistic visual stimuli. Instead of using the entire panorama, we subdivided it into two fields of view for snapshot generation and the later image comparisons.
The difference in image familiarity between the two sides was used to initiate a steering response towards the more familiar direction. We found that a bigger field of view alongside non-forward-facing memories generated the most correct steering responses towards the snapshot direction. This demonstrates that the LAL-inspired steering framework can be functional for a complex sensorimotor task that had previously not been implicated in LAL functionality. Finally, we modelled how bilateral sensory information and an SNN model of the LAL behave in a snapshot-navigation setup using Antworld. We compared the original snapshot-navigation model using a panoramic field of view with several combinations of the core network and bilateral vision models: a bilateral view, a bilateral view with the SNN, a panoramic view with the SNN, and other standard movement behaviours. We confirmed the findings of preliminary work in an abstract setup, which had shown that a bilateral view combined with an SNN performs best in recovering and approaching navigation-relevant locations. Introducing models based on the steering framework into this visually complex environment also improved the performance of agents performing snapshot navigation.
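A minimal sketch of the steering framework described above: weak bilateral input triggers an oscillatory search turn, strong input triggers a turn toward the more familiar side. The threshold, gain, and the sine-based CPG stand-in are illustrative assumptions, not the thesis's spiking model:

```python
import math

def steer(fam_left, fam_right, t, threshold=0.2, gain=1.0,
          zigzag_amp=0.8, zigzag_freq=1.0):
    """LAL-inspired steering sketch. Returns a turn command
    (positive = toward the left field of view).

    If the summed familiarity signal is below `threshold` (unreliable
    input), a CPG-like sine term produces periodic Zig-Zagging; otherwise
    the agent steers toward the more familiar side.
    """
    total = fam_left + fam_right
    if total < threshold:
        # unreliable signal: periodic left/right search turns
        return zigzag_amp * math.sin(2 * math.pi * zigzag_freq * t)
    # reliable signal: targeted steering from the bilateral difference
    return gain * (fam_left - fam_right)
```

In the thesis the oscillation arises from a sensory-modulated CPG in a spiking network rather than an explicit sine, but the input-strength-dependent switch between search and targeted steering is the same idea.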
Recent advances in evolutionary and bio-inspired adaptive robotics: exploiting embodied dynamics
This paper explores current developments in evolutionary and bio-inspired approaches to autonomous robotics, concentrating on research from our group at the University of Sussex. These developments are discussed in the context of advances in the wider fields of adaptive and evolutionary approaches to AI and robotics, focusing on the exploitation of embodied dynamics to create behaviour. Four case studies highlight various aspects of such exploitation. The first exploits the dynamical properties of a physical electronic substrate, demonstrating for the first time how component-level analog electronic circuits can be evolved directly in hardware to act as robot controllers. The second develops novel, effective and highly parsimonious navigation methods inspired by the way insects exploit the embodied dynamics of innate behaviours. Combining biological experiments with robotic modeling, it is shown how rapid route learning can be achieved with the aid of navigation-specific visual information that is provided and exploited by the innate behaviours. The third study focuses on the exploitation of neuromechanical chaos in the generation of robust motor behaviours. It is demonstrated how chaotic dynamics can be exploited to power a goal-driven search for desired motor behaviours in embodied systems using a particular control architecture based around neural oscillators. The dynamics are shown to be chaotic at all levels in the system, from the neural to the embodied mechanical. The final study explores the exploitation of the dynamics of brain-body-environment interactions for efficient, agile flapping-wing flight. It is shown how a multi-objective evolutionary algorithm can be used to evolve dynamical neural controllers for a simulated flapping-wing robot with feathered wings.
Results demonstrate that robust, stable and agile flight is achieved in the face of random wind gusts by exploiting complex asymmetric dynamics, partly enabled by continually changing wing and tail morphologies.
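As a toy illustration of the evolutionary approach behind these case studies (the paper uses a multi-objective algorithm on a simulated flapping-wing robot; this sketch is a generic single-objective (1+1) evolution strategy, and all names and parameters are illustrative):

```python
import random

def evolve(fitness, dim, generations=2000, sigma=0.2, seed=0):
    """Minimal (1+1) evolution strategy over a real-valued parameter
    vector (e.g. neural-controller weights): mutate the parent with
    Gaussian noise and keep the child if it is no worse."""
    rng = random.Random(seed)
    parent = [0.0] * dim
    best = fitness(parent)
    for _ in range(generations):
        child = [p + rng.gauss(0.0, sigma) for p in parent]
        f = fitness(child)
        if f >= best:  # selection: survival of the fitter
            parent, best = child, f
    return parent, best
```

Real evolutionary-robotics runs replace the scalar `fitness` with behavioural evaluations in simulation or hardware, and multi-objective variants keep a Pareto front of trade-offs (e.g. efficiency versus agility) instead of a single best individual.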
On the relationship between neuronal codes and mental models
The superordinate aim of my work towards this thesis
was a better understanding
of the relationship between mental models
and the underlying principles that lead to the self-organization
of neuronal circuitry.
The thesis consists of four individual publications,
which approach this goal from differing perspectives.
While the formation of sparse coding representations in neuronal substrate
has been investigated extensively,
many research questions
on how sparse coding may be exploited for higher cognitive processing
are still open.
The first two studies,
included as chapter 2 and chapter 3,
asked to what extent representations obtained with sparse coding
match mental models.
We identified the following selectivities in sparse coding representations:
with stereo images as input,
the representation was selective for the disparity of image structures,
which can be used to infer the distance of structures to the observer.
Furthermore, it was selective to the predominant orientation in textures,
which can be used to infer the orientation of surfaces.
With optic flow from egomotion as input,
the representation was selective to the direction of egomotion
in 6 degrees of freedom.
Due to the direct relation between selectivity and physical properties,
these representations, obtained with sparse coding,
can serve as early sensory models of the environment.
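A generic sketch of sparse-coding inference (ISTA, iterative soft-thresholding) to show what "representations obtained with sparse coding" means computationally; the dictionary `D` and the parameters are illustrative, and this is not the thesis's specific learning model:

```python
import numpy as np

def sparse_code(x, D, lam=0.1, steps=100):
    """Infer a sparse coefficient vector a with x ~= D @ a by ISTA:
    gradient steps on the reconstruction error 0.5*||x - D a||^2,
    each followed by soft-thresholding that drives most coefficients
    to exactly zero (the sparsity penalty lam*||a||_1)."""
    L = np.linalg.norm(D, ord=2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)           # gradient of the reconstruction term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return a
```

When `D` is learned from natural stereo images or optic-flow fields, the few active coefficients become selective for physical scene properties (disparity, texture orientation, egomotion direction), which is the sense in which such representations can serve as early sensory models.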
The cognitive processes behind spatial knowledge
rest on mental models that represent the environment.
We presented a topological model for wayfinding
in the third study,
included as chapter 4.
It describes a dual population code,
where the first population code encodes places
by means of place fields,
and the second population code encodes motion instructions
based on links between place fields.
We did not focus on an implementation in biological substrate
or on an exact fit to physiological findings.
The model is a biologically plausible, parsimonious method for wayfinding,
which may be close to an intermediate step
of emergent skills in an evolutionary navigational hierarchy.
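A hypothetical sketch of the dual population code described above: one population encodes places via Gaussian place fields, the other encodes motion instructions stored on links between place fields. The coordinates, names and parameters are invented for illustration and are not taken from the thesis:

```python
import numpy as np
from collections import deque

# Place population: each "cell" has a place-field centre (illustrative map).
places = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (1.0, 1.0)}
# Link population: which place fields are connected.
links = {("A", "B"), ("B", "C")}

def active_place(pos, sigma=0.5):
    """Winner among place cells: peak of the Gaussian place-field
    population activity at the current position."""
    acts = {p: np.exp(-((pos[0] - x) ** 2 + (pos[1] - y) ** 2)
                      / (2 * sigma ** 2))
            for p, (x, y) in places.items()}
    return max(acts, key=acts.get)

def motion_instruction(a, b):
    """Motion vector encoded on the link a -> b."""
    (x1, y1), (x2, y2) = places[a], places[b]
    return (x2 - x1, y2 - y1)

def route(start, goal):
    """Breadth-first search over linked place fields; returns the
    sequence of motion instructions leading to the goal."""
    adj = {}
    for u, v in links:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    prev, q = {start: None}, deque([start])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    path.reverse()
    return [motion_instruction(a, b) for a, b in zip(path, path[1:])]
```

The topological character of the model is visible here: navigation needs only place identities and the motion instructions attached to links, not a metric map of the whole environment.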
Our automated testing for visual performance in mice,
included in chapter 5,
is an example of behavioral testing in the perception-action cycle.
The goal of this study was to quantify the optokinetic reflex.
Due to the rich behavioral repertoire of mice,
quantification required many elaborate steps of computational analyses.
Animals and humans are embodied living systems,
and therefore composed of strongly enmeshed modules or entities,
which are also enmeshed with the environment.
In order to study living systems as a whole,
it is necessary to test hypotheses,
for example on the nature of mental models,
in the perception-action cycle.
In summary,
the studies included in this thesis
extend our view on the character of early sensory representations
as mental models,
as well as on high-level mental models
for spatial navigation.
Additionally, the thesis contains an example
of the evaluation of hypotheses in the perception-action cycle.
Low-Cost GNSS Simulators with Wireless Clock Synchronization for Indoor Positioning
In regions where global navigation satellite systems (GNSS) signals are
unavailable, such as underground areas and tunnels, GNSS simulators can be
deployed for transmitting simulated GNSS signals. Then, a GNSS receiver in the
simulator coverage outputs the position based on the received GNSS signals
(e.g., Global Positioning System (GPS) L1 signals in this study) transmitted by
the corresponding simulator. This approach provides periodic position updates
to GNSS users while deploying a small number of simulators without modifying
the hardware and software of user receivers. However, the simulator clock
should be synchronized to the GNSS satellite clock to generate signals
almost identical to the live-sky GNSS signals, which is necessary for
seamless indoor and outdoor positioning handover. The conventional clock synchronization method
based on the wired connection between each simulator and an outdoor GNSS
antenna causes practical difficulty and increases the cost of deploying the
simulators. This study proposes a wireless clock synchronization method based
on a private time server and time delay calibration. Additionally, we derived
the constraints for determining the optimal simulator coverage and separation
between adjacent simulators. The positioning performance of the proposed GPS
simulator-based indoor positioning system was demonstrated in the underground
testbed for a driving vehicle with a GPS receiver and a pedestrian with a
smartphone. The average position errors were 3.7 m for the vehicle and 9.6 m
for the pedestrian during the field tests with successful indoor and outdoor
positioning handovers. Since those errors are within the coverage of each
deployed simulator, it is confirmed that the proposed system with wireless
clock synchronization can effectively provide periodic position updates to
users where live-sky GNSS signals are unavailable.Comment: Submitted to IEEE Acces