179 research outputs found
A unified neural model explaining optimal multi-guidance coordination in insect navigation
The robust navigation of insects arises from the coordinated action of concurrently functioning and interacting guidance systems. Computational models of specific brain regions can account for isolated behaviours such as path integration or route following, but the neural mechanisms by which their outputs are coordinated remain unknown. In this work, a functional modelling approach was taken to identify and model the elemental guidance subsystems required by homing insects. We then produced realistic adaptive behaviours by integrating the outputs of the different guidance systems in a biologically constrained unified model mapped onto identified neural circuits. Homing paths are quantitatively and qualitatively compared with real ant data in a series of simulation studies replicating key field experiments.
Our analysis reveals that insects require independent visual homing and route following capabilities, which we show can be realised by encoding panoramic skylines in the frequency domain, using image-processing circuits in the optic lobe and learning pathways through the Mushroom Bodies (MB) and the Anterior Optic Tubercle (AOTU) to Bulb (BU) respectively, before converging in the Central Complex (CX) steering circuit.
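The frequency-domain encoding idea can be illustrated with a toy example (a sketch, not the authors' implementation): the amplitude spectrum of a one-dimensional panoramic skyline is invariant to horizontal rotation of the panorama, which makes it a compact, orientation-independent signal for visual familiarity.

```python
import cmath

def amplitude_spectrum(signal):
    """Discrete Fourier transform amplitudes of a 1-D panoramic signal."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Toy skyline: terrain elevation at 8 azimuths around the agent.
skyline = [0.1, 0.4, 0.9, 0.6, 0.2, 0.0, 0.3, 0.5]
# The same skyline seen after the agent rotates by two azimuth steps.
rotated = skyline[2:] + skyline[:2]

a, b = amplitude_spectrum(skyline), amplitude_spectrum(rotated)
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))  # rotation-invariant
```

Because circular shifts only change the phase, not the magnitude, of each Fourier component, the same amplitude vector is recovered regardless of the agent's heading.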
Further, we demonstrate that a ring attractor network inspired by firing patterns recorded in the CX can optimally integrate the outputs of the path integration and visual homing systems, guiding simulated ants back to their familiar route, and that a simple non-linear weighting function driven by the output of the MB provides a context-dependent switch, allowing route following strategies to dominate and the learned route to be retraced back to the nest when familiar terrain is encountered.
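One way to picture the optimal-integration property is as a population-vector sum (a minimal sketch assuming cosine-tuned cells, not the published circuit): each guidance system injects a cosine activity bump whose amplitude reflects its certainty, and the summed activity peaks at the certainty-weighted compromise direction.

```python
import math

N = 8  # number of direction-tuned cells around the ring
dirs = [2 * math.pi * i / N for i in range(N)]

def cue_bump(theta, weight):
    """Cosine-tuned population activity for one guidance cue."""
    return [weight * math.cos(d - theta) for d in dirs]

def decode(activity):
    """Direction of the population vector (the bump's peak)."""
    x = sum(a * math.cos(d) for a, d in zip(activity, dirs))
    y = sum(a * math.sin(d) for a, d in zip(activity, dirs))
    return math.atan2(y, x)

# Path integration says "steer at 0 rad" with high certainty; visual
# homing says "steer at pi/2 rad" with lower certainty.
pi_cue = cue_bump(0.0, 2.0)
vh_cue = cue_bump(math.pi / 2, 1.0)
combined = [a + b for a, b in zip(pi_cue, vh_cue)]

# The decoded heading equals the angle of the weighted vector sum.
assert abs(decode(combined) - math.atan2(1.0, 2.0)) < 1e-9
```

Summing cosine bumps is exactly vector summation, which is the statistically optimal way to combine directional estimates weighted by their reliability.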
The resultant unified model of insect navigation reproduces behavioural data from a series of cue conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide and coordinate their outputs to achieve the adaptive behaviours observed in the wild. These results forward the case for a distributed architecture of the insect navigational toolkit.
This unified model was then further validated by modelling the olfactory navigation of flies and ants. With simple adaptations of the sensory inputs, the model reproduces the main characteristics of the observed behavioural data, further demonstrating the useful role played by the sensory-processing to CX to motor pathway in generating context-dependent coordination behaviours. In addition, it helps to complete the unified model of insect navigation by adding olfactory cues, which are among the most crucial cues for insects.
Augmented reality device for first response scenarios
A prototype of a wearable computer system is proposed and implemented using commercial off-the-shelf components. The system is designed to allow the user to access location-specific information about an environment, and to provide capability for user tracking. Areas of applicability include primarily first response scenarios, with possible applications in maintenance or construction of buildings and other structures. Necessary preparation of the target environment prior to the system's deployment is limited to non-invasive labeling using optical fiducial markers. The system relies on computational vision methods for registration of labels and user position. With the system the user has access to on-demand information relevant to a particular real-world location. Team collaboration is assisted by user tracking and real-time visualizations of team member positions within the environment. The user interface and display methods are inspired by Augmented Reality1 (AR) techniques, incorporating a video see-through Head Mounted Display (HMD) and finger-bending sensor glove.*
1Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. At present, most AR research is concerned with the use of live video imagery which is digitally processed and augmented by the addition of computer-generated graphics. Advanced research includes the use of motion-tracking data, fiducial marker recognition using machine vision, and the construction of controlled environments containing any number of sensors and actuators. (Source: Wikipedia) *This dissertation is a compound document (contains both a paper copy and a CD as part of the dissertation). The CD requires the following: Adobe Acrobat; Microsoft Office; Windows Media Player or RealPlayer.
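The marker-based localisation the system relies on can be pictured as follows (a simplified illustration with invented marker data and a hypothetical `locate_user` helper, not the thesis's actual vision pipeline): once a fiducial marker with a known map position is recognised, the user's position follows from the marker's measured range and bearing.

```python
import math

# Hypothetical map of fiducial markers: id -> (x, y) position in metres.
MARKERS = {7: (10.0, 4.0), 12: (2.0, 9.0)}

def locate_user(marker_id, rng, bearing, heading):
    """User position inferred from one recognised marker.

    rng     -- measured distance to the marker (m)
    bearing -- marker direction relative to the user's heading (rad)
    heading -- user's absolute heading (rad)
    """
    mx, my = MARKERS[marker_id]
    world_angle = heading + bearing  # absolute direction to the marker
    return (mx - rng * math.cos(world_angle),
            my - rng * math.sin(world_angle))

# Marker 7 seen 5 m away, dead ahead, while facing along +x.
x, y = locate_user(7, 5.0, 0.0, 0.0)
assert abs(x - 5.0) < 1e-9 and abs(y - 4.0) < 1e-9
```

Positions computed this way per team member are what a real-time team visualization would plot.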
The augmented reality framework : an approach to the rapid creation of mixed reality environments and testing scenarios
Debugging errors during real-world testing of remote platforms can be time-consuming and expensive when the remote environment is inaccessible and hazardous, such as the deep sea. Pre-real-world testing facilities, such as Hardware-In-the-Loop (HIL), are often not available due to the time and expense necessary to create them. Testing facilities tend to be monolithic in structure and thus inflexible, making complete redesign necessary for slightly different uses. Redesign is simpler in the short term than creating the required architecture for a generic facility. This leads to expensive facilities, due to reinvention of the wheel, or worse, no testing facilities at all. Without adequate pre-real-world testing, integration errors can go undetected until real-world testing, where they are more costly to diagnose and rectify, especially when developing Unmanned Underwater Vehicles (UUVs).
This thesis introduces a novel framework, the Augmented Reality Framework (ARF), for rapid
construction of virtual environments for Augmented Reality tasks such as Pure Simulation, HIL,
Hybrid Simulation and real world testing. ARF's architecture is based on JavaBeans and is therefore
inherently generic, flexible and extendable. The aim is to increase the performance of constructing,
reconfiguring and extending virtual environments, and consequently enable more mature and stable
systems to be developed in less time due to previously undetectable faults being diagnosed earlier in
the pre-real-world testing phase. This is only achievable if test harnesses can be created quickly and
easily, which in turn allows the developer to visualise more system feedback making faults easier to
spot. Early fault detection and less wasted real world testing leads to a more mature, stable and
less expensive system.
ARF provides guidance on how to connect and configure user-made components, allowing rapid prototyping and enabling complex virtual environments to be created quickly and easily. In essence, ARF tries to provide intuitive construction guidance, similar in nature to LEGO® pieces, which can be so easily connected to form useful configurations.
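The LEGO-like connection guidance can be pictured as components with typed ports that refuse incompatible links (a Python sketch of the idea with invented component names; ARF itself is built on JavaBeans):

```python
class Component:
    """A pluggable block with a typed output and typed inputs."""

    def __init__(self, name, output_type, input_types):
        self.name = name
        self.output_type = output_type
        self.input_types = input_types
        self.sources = []

    def connect(self, source):
        """Attach `source`'s output to one of our inputs, if types match."""
        if source.output_type not in self.input_types:
            raise TypeError(
                f"{source.name} ({source.output_type}) does not fit "
                f"{self.name} (accepts {self.input_types})")
        self.sources.append(source)

sonar = Component("sonar-sim", "range_scan", [])
mapper = Component("mapper", "map", ["range_scan"])
display = Component("3d-view", "frame", ["map"])

mapper.connect(sonar)    # range_scan -> mapper: fits
display.connect(mapper)  # map -> 3d-view: fits
try:
    display.connect(sonar)  # range_scan -> 3d-view: rejected
except TypeError:
    pass
```

Type-checked connection points are what make snapping together a new test harness fast while still catching nonsensical configurations early.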
ARF is demonstrated through case studies which show the flexibility and applicability of ARF to testing techniques such as HIL for UUVs. In addition, an informal study was carried out to assess the performance increases attributable to ARF's core concepts. In comparison to classical programming methods, ARF's average performance increase was close to 200%. The study showed that ARF was remarkably intuitive, since the test subjects were novices in ARF but experts in programming. ARF provides key contributions in the field of HIL testing of remote systems by providing more accessible facilities that allow new or modified testing scenarios to be created where it might not have been feasible to do so before. In turn, this leads to early detection of faults which in some cases would never have been detected before.
Generating walking behaviours in legged robots
Many legged robots have been built with a variety of different abilities, from running
to hopping to climbing stairs. Despite this, however, there has been no consistency of
approach to the problem of getting them to walk. Approaches have included breaking
down the walking step into discrete parts and then controlling them separately, using
springs and linkages to achieve a passive walking cycle, and even working out the
necessary movements in simulation and then imposing them on the real robot. All of
these have limitations, although most were successful at the task for which they were
designed. However, all of them fall into one of two categories: either they alter the
dynamics of the robots physically so that the robot, whilst very good at walking, is
not as general purpose as it once was (as with the passive robots), or they control the
physical mechanism of the robot directly to achieve their goals, and this is a difficult
task.

In this thesis a design methodology is described for building controllers for 3D dynamically stable walking, inspired by the best walkers and runners around (ourselves), so the controllers produced are based on the vertebrate Central Nervous System. This
means that there is a low-level controller which adapts itself to the robot so that, when
switched on, it can be considered to simulate the springs and linkages of the passive
robots to produce a walking robot, and this now active mechanism is then controlled
by a relatively simple higher level controller. This is the best of both worlds: we
have a robot which is inherently capable of walking, and thus is easy to control like
the passive walkers, but also retains the general purpose abilities which makes it so
potentially useful.

This design methodology uses an evolutionary algorithm to generate low-level controllers for a selection of simulated legged robots. The thesis also looks in detail at previous
walking robots and their controllers and shows that some approaches, including staged
evolution and hand-coding designs, may be unnecessary, and indeed inappropriate, at
least for a general purpose controller. The specific algorithm used is evolutionary, using
a simple genetic algorithm to allow adaptation to different robot configurations, and
the controllers evolved are continuous time neural networks. These are chosen because
of their ability to entrain to the movement of the robot, allowing the whole robot and
network to be considered as a single dynamical system, which can then be controlled
by a higher level system.

An extensive program of experiments investigates the types of neural models and network structures which are best suited to this task, and it is shown that stateless and
simple dynamic neural models are significantly outperformed as controllers by more
complex, biologically plausible ones but that other ideas taken from biological systems,
including network connectivities, are not generally as useful and reasons for this are
examined.

The thesis then shows that this system, although only developed on a single robot,
is capable of automatically generating controllers for a wide selection of different test
designs. Finally it shows that high level controllers, at least to control steering and
speed, can be easily built on top of this now-active walking mechanism.
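The continuous-time neural networks used as low-level controllers follow the standard CTRNN state equation, tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + b_j) + I_i, which can be integrated with the Euler method (a generic textbook formulation; the weights below are placeholders, where the thesis's values would come from the genetic algorithm):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, w, bias, tau, inputs, dt=0.01):
    """One Euler step of a continuous-time recurrent neural network."""
    out = [sigmoid(yj + bj) for yj, bj in zip(y, bias)]
    return [yi + dt / ti * (-yi + sum(w[j][i] * out[j]
                                      for j in range(len(y))) + Ii)
            for i, (yi, ti, Ii) in enumerate(zip(y, tau, inputs))]

# Two-neuron placeholder network; such reciprocal coupling can entrain
# to a rhythmic load, the property exploited for walking.
w = [[0.0, 5.0], [-5.0, 0.0]]   # w[j][i]: weight from neuron j to i
bias, tau = [-2.5, -2.5], [0.5, 0.5]
y = [0.1, -0.1]
for _ in range(1000):
    y = ctrnn_step(y, w, bias, tau, inputs=[0.0, 0.0])
assert all(math.isfinite(v) for v in y)
```

Because the state evolves continuously with its own time constants, the network and the robot's body can be treated as one coupled dynamical system, as the abstract describes.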
Imitation learning through games: theory, implementation and evaluation
Despite a history of games-based research, academia has generally regarded
commercial games as a distraction from the serious business of AI, rather than as an
opportunity to leverage this existing domain to the advancement of our knowledge.
Similarly, the computer game industry still relies on techniques that were developed
several decades ago, and has shown little interest in adopting more progressive
academic approaches. In recent times, however, these attitudes have begun to change;
under- and post-graduate games development courses are increasingly common,
while the industry itself is slowly but surely beginning to recognise the potential
offered by modern machine-learning approaches, though games which actually
implement said approaches on more than a token scale remain scarce.
One area which has not yet received much attention from either academia or industry
is imitation learning, which seeks to expedite the learning process by exploiting data
harvested from demonstrations of a given task. While substantial work has been done
in developing imitation techniques for humanoid robot movement, there has been
very little exploration of the challenges posed by interactive computer games. Given
that such games generally encode reasoning and decision-making behaviours which
are inherently more complex and potentially more interesting than limb motion data,
that they often provide inbuilt facilities for recording human play, that the generation
and collection of training samples is therefore far easier than in robotics, and that
many games have vast pre-existing libraries of these recorded demonstrations, it is
fair to say that computer games represent an extremely fertile domain for imitation
learning research.
In this thesis, we argue in favour of using modern, commercial computer games to
study, model and reproduce humanlike behaviour. We provide an overview of the
biological and robotic imitation literature as well as the current status of game AI, highlighting techniques which may be adapted for the purposes of game-based
imitation. We then proceed to describe our contributions to the field of imitation
learning itself, which encompass three distinct categories: theory, implementation
and evaluation.
We first describe the development of a fully-featured Java API - the Quake2 Agent
Simulation Environment (QASE) - designed to facilitate both research and education
in imitation and general machine-learning, using the game Quake 2 as a testbed. We
outline our motivation for developing QASE, discussing the shortcomings of existing
APIs and the steps which we have taken to circumvent them. We describe QASE's
network layer, which acts as an interface between the local AI routines and the
Quake 2 server on which the game environment is maintained, before detailing the
API's agent architecture, which includes an interface to the MatLab programming
environment and the ability to parse and analyse full recordings of game sessions.
We conclude the chapter with a discussion of QASE's adoption by numerous
universities as both an undergraduate teaching tool and research platform.
We then proceed to describe the various imitative mechanisms which we have
developed using QASE and its MatLab integration facilities. We first outline a
behaviour model based on a well-known psychological model of human planning.
Drawing upon previous research, we also identify a set of believability criteria -
elements of agent behaviour which are of particular importance in determining the
"humanness" of its in-game appearance. We then detail a reinforcement-learning
approach to imitating the human player's navigation of his environment, centred
upon his pursuit of items as strategic goals. In the subsequent section, we describe
the integration of this strategic system with a Bayesian mechanism for the imitation
of tactical and motion-modelling behaviours. Finally, we outline a model for the
imitation of reactive combat behaviours; specifically, weapon-selection and aiming. Experiments are presented in each case to demonstrate the imitative mechanisms'
ability to accurately reproduce observed behaviours.
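Imitation of a discrete choice such as weapon selection can be illustrated by a conditional-frequency model with Laplace smoothing, learned from recorded (situation, choice) pairs (a toy stand-in for the thesis's actual Bayesian mechanism; the situations and weapons here are invented):

```python
from collections import Counter, defaultdict

class ChoiceImitator:
    """Estimates P(choice | situation) from demonstration data."""

    def __init__(self, choices, alpha=1.0):
        self.choices = choices
        self.alpha = alpha  # Laplace smoothing constant
        self.counts = defaultdict(Counter)

    def observe(self, situation, choice):
        self.counts[situation][choice] += 1

    def prob(self, situation, choice):
        c = self.counts[situation]
        total = sum(c.values()) + self.alpha * len(self.choices)
        return (c[choice] + self.alpha) / total

    def act(self, situation):
        return max(self.choices, key=lambda a: self.prob(situation, a))

# Demonstrations harvested from hypothetical recorded games.
demos = ([("long_range", "railgun")] * 8 + [("long_range", "shotgun")] * 2
         + [("close_range", "shotgun")] * 9 + [("close_range", "railgun")])
imitator = ChoiceImitator(["railgun", "shotgun"])
for situation, choice in demos:
    imitator.observe(situation, choice)

assert imitator.act("long_range") == "railgun"
assert imitator.act("close_range") == "shotgun"
```

The smoothing term keeps unseen situation/choice pairs from being assigned zero probability, so the agent still behaves sensibly off the demonstrated data.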
Finally, we criticise the lack of any existing methodology to formally gauge the
believability of game agents, and observe that the few previous attempts have been
extremely ad-hoc and informal. We therefore propose a generalised approach to such
testing; the Bot-Oriented Turing Test (BOTT). This takes the form of an anonymous
online questionnaire, an accompanying protocol to which examiners should adhere,
and the formulation of a believability index which numerically expresses each agent's
humanness as indicated by its observers, weighted by their experience and the
accuracy with which the agents were identified. To both validate the survey approach
and to determine the efficacy of our imitative models, we present a series of
experiments which use the believability test to evaluate our own imitation agents
against both human players and traditional artificial bots. We demonstrate that our
imitation agents perform substantially better than even a highly-regarded rule-based
agent, and indeed approach the believability of actual human players.
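Such a believability index can be pictured as a weighted average of humanness ratings, with each observer's vote scaled by their experience and identification accuracy (the product weighting here is an assumed illustration, not BOTT's published formula):

```python
def believability_index(ratings):
    """ratings: list of (humanness 0-1, experience 0-1, accuracy 0-1)."""
    weights = [e * a for _, e, a in ratings]
    if sum(weights) == 0:
        return 0.0
    return sum(h * w for (h, _, _), w in zip(ratings, weights)) / sum(weights)

# Three hypothetical observers rating one agent.
ratings = [(0.8, 1.0, 0.9),   # experienced, accurate observer
           (0.4, 0.2, 0.5),   # novice, often misidentifies agents
           (0.9, 0.7, 0.8)]
score = believability_index(ratings)
assert 0.0 <= score <= 1.0
```

Weighting by observer quality means a confident verdict from an expert who reliably tells bots from humans counts for more than a guess from a novice.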
Some suggestions for future directions in our research, as well as a broader
discussion of open questions, conclude this thesis.
MATLAB
This excellent book represents the final part of a three-volume series on MATLAB-based applications in almost every branch of science. The book consists of 19 insightful articles whose results readers will find very useful in their work. In particular, the book is organised in three parts: the first is devoted to mathematical methods in the applied sciences using MATLAB, the second to MATLAB applications of general interest, and the third to MATLAB for educational purposes. This collection of high-quality articles covers a large range of professional fields and can be used for science as well as for various educational purposes.
Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications
The abstract is in the attachment.
- …