Vision for navigation: what can we learn from ants?
The visual systems of all animals provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour, and we might reasonably ask
how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators.
Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly, and the visual cues that define them can be used for guidance independently of
other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that
robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours.
A Step Towards Autonomous, Biomimetic, Non-GPS Based Navigation Methodology
Global Positioning System (GPS) based navigation has an inherent weakness: it can easily be overridden and is not useful inside structures. One method of overcoming this problem is the use of a feature-based navigation system. Nature has perfected this to such a degree that copying nature is one of the best approaches available to scientists. In this study, the desert ant (Cataglyphis fortis) was imitated. Simple infra-red active beacons and a robot-mounted rotating receiver based on the TSOP31138 infra-red sensor were implemented, with the new Three Object Triangulation Algorithm (ToTAL) in the robot's firmware computing its pose. The robot with this triangulation algorithm was able to compute its pose such that, on a 6 m x 6 m grid, it could home to its base with a maximum error of 14.8 mm.
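The ToTAL algorithm itself is specified in its original paper; as an illustrative stand-in, the sketch below uses the classical Tienstra three-point resection, which likewise recovers a position from the angles subtended at the robot by three known beacons. The beacon coordinates and robot position are invented test values, not from the study.

```python
import math

def subtended_angle(p, q, r):
    """Angle at point p between the rays p->q and p->r (radians)."""
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(r[1] - p[1], r[0] - p[0])
    return abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)

def tienstra(A, B, C, alpha, beta, gamma):
    """Three-point resection: position from the angles subtended at the
    robot by beacon pairs (alpha subtends BC, beta subtends CA, gamma
    subtends AB). Valid when the robot lies inside triangle ABC."""
    ang_A = subtended_angle(A, B, C)   # interior angle of the triangle at A
    ang_B = subtended_angle(B, C, A)
    ang_C = subtended_angle(C, A, B)
    f1 = 1.0 / (1.0 / math.tan(ang_A) - 1.0 / math.tan(alpha))
    f2 = 1.0 / (1.0 / math.tan(ang_B) - 1.0 / math.tan(beta))
    f3 = 1.0 / (1.0 / math.tan(ang_C) - 1.0 / math.tan(gamma))
    s = f1 + f2 + f3
    return ((f1 * A[0] + f2 * B[0] + f3 * C[0]) / s,
            (f1 * A[1] + f2 * B[1] + f3 * C[1]) / s)

# Hypothetical 6 m x 6 m arena with three beacons and a robot at (2, 2).
A, B, C = (0.0, 0.0), (6.0, 0.0), (0.0, 6.0)
robot = (2.0, 2.0)
alpha = subtended_angle(robot, B, C)   # angles a rotating receiver would measure
beta = subtended_angle(robot, C, A)
gamma = subtended_angle(robot, A, B)
x, y = tienstra(A, B, C, alpha, beta, gamma)
```

Here the subtended angles are computed from the ground-truth pose purely to exercise the resection; on the robot they would come from the rotating infra-red receiver.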
Navigating a Robot through Big Visual Sensory Data
This paper describes a reinforcement learning architecture that is capable of incorporating a deeply learned feature representation of a robot's unknown working environment. An autoencoder is used along with convolutional and pooling layers to deduce a reduced feature representation from a set of images taken by the agent. This representation is used to discover and learn the best route to navigate to a goal. The features are fed to an actor layer that can learn from a value function calculated by a second output layer. The policy is ε-greedy, and the effect is similar to an actor-critic architecture in which the temporal difference error is back-propagated from the critic to the actor. This compact architecture helps reduce the overhead of setting up a fully fledged actor-critic architecture, which typically needs extra processing time. Hence, the model is well suited to the large volumes of data coming from visual sensors that need speedy processing. The processing is performed off-board due to the limitations of the robot used, but the resulting latency was compensated for by the speed of processing. Adaptability to different data sizes, critical for big data processing, is realized by the ability to shrink or expand the whole architecture to fit different deeply learned feature dimensions. This flexibility is crucial because the dimensionality of the feature space is not known before operating in the environment. Initial experimental results on a real robot show that the agent achieved a good level of accuracy and efficacy in reaching the goal.
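The paper's architecture is deep and convolutional, but the underlying loop it describes (an ε-greedy actor, a critic value function, and a temporal difference error driving both) can be sketched in tabular form. Everything below, including the five-state corridor environment and the learning rates, is invented for illustration and is not the paper's model.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4          # toy corridor of states 0..4, goal at the right end
ALPHA_V, ALPHA_H = 0.1, 0.1    # critic / actor learning rates
GAMMA, EPS = 0.9, 0.2          # discount factor, exploration rate

V = [0.0] * N_STATES                       # critic: state values
H = [[0.0, 0.0] for _ in range(N_STATES)]  # actor: preferences for (left, right)

def act(s):
    if random.random() < EPS:              # ε-greedy over actor preferences
        return random.randrange(2)
    return 0 if H[s][0] > H[s][1] else 1

for _ in range(1000):                      # training episodes
    s = 0
    for _ in range(50):
        a = act(s)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # TD error from the critic ...
        td = r + (0.0 if s2 == GOAL else GAMMA * V[s2]) - V[s]
        V[s] += ALPHA_V * td               # ... updates the critic itself
        H[s][a] += ALPHA_H * td            # ... and is propagated to the actor
        if s2 == GOAL:
            break
        s = s2

policy = ["left" if H[s][0] > H[s][1] else "right" for s in range(GOAL)]
```

After training, the greedy policy moves right (toward the goal) from every state, showing how a single TD error can train actor and critic together.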
Visual homing in field crickets and desert ants: a comparative behavioural and modelling study
Visually guided navigation represents a long-standing goal in robotics. Insights may
be drawn from various insect species for which visual information has been shown
sufficient for navigation in complex environments; however, the generality of visual
homing abilities across insect species remains unclear. Furthermore, various models
have been proposed as strategies employed by navigating insects, yet comparative
studies across models and species are lacking. This work addresses these questions
in two insect species not previously studied: the field cricket Gryllus bimaculatus
for which almost no navigational data is available; and the European desert ant
Cataglyphis velox, a relation of the African desert ant Cataglyphis bicolor which has
become a model species for insect navigation studies.
The ability of crickets to return to a hidden target using surrounding visual cues
was tested using an analogue of the Morris water-maze, a standard paradigm for
spatial memory testing in rodents. Crickets learned to re-locate the hidden target
using the provided visual cues, with the best performance recorded when a natural
image was provided as stimulus rather than clearly identifiable landmarks.
The role of vision in navigation was also observed for desert ants within their
natural habitat. Foraging ants formed individual, idiosyncratic, visually guided routes
through their cluttered surroundings as has been reported in other ant species inhabiting
similar environments. In the absence of other cues, ants recalled their route
even when displaced along their path, indicating that they recall previously visited
places rather than a sequence of manoeuvres.
Image databases were collected within the environments experienced by the insects
using custom panoramic cameras that approximated the insect's eye view of the
world. Six biologically plausible visual homing models were implemented and their
performance assessed across experimental conditions.
The models were first assessed on their ability to replicate the relative performance
across the various visual surrounds in which crickets were tested. That is,
best performance was sought with the natural scene, followed by blank walls and
then the distinct landmarks. Only two models were able to reproduce the pattern
of results observed in crickets: pixel-wise image difference with RunDown and the
centre of mass average landmark vector.
The efficacy of models was then assessed across locations in the ant habitat.
A 3D world was generated from the captured images, providing noise-free and high
spatial resolution images as model input. Best performance was found for optic flow and image difference based models. However, in many locations the centre of mass
average landmark vector failed to provide reliable guidance. This work shows that
two previously unstudied insect species can navigate using surrounding visual cues
alone. Moreover, six biologically plausible models of visual navigation were assessed
in the same environments as the insects, and only an image difference based model
succeeded in all experimental conditions.
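Of the models compared above, the average landmark vector (ALV) family is compact enough to sketch: each view is summarised by the mean of the unit vectors pointing at the landmarks, and the home vector is approximated by the difference between the current and stored ALVs. This is the plain, unweighted ALV rather than the centre-of-mass variant from the thesis, and the landmark layout and positions are invented test values.

```python
import math

def alv(pos, landmarks):
    """Average landmark vector: mean of unit vectors toward each landmark."""
    vx = vy = 0.0
    for lx, ly in landmarks:
        d = math.hypot(lx - pos[0], ly - pos[1])
        vx += (lx - pos[0]) / d
        vy += (ly - pos[1]) / d
    n = len(landmarks)
    return (vx / n, vy / n)

def alv_home_vector(current, home_alv, landmarks):
    """Approximate direction from `current` back toward the snapshot
    position: the difference of the two average landmark vectors."""
    cvx, cvy = alv(current, landmarks)
    return (cvx - home_alv[0], cvy - home_alv[1])

# Invented layout: four distant landmarks around a nest at the origin.
landmarks = [(10.0, 0.0), (0.0, 10.0), (-10.0, 0.0), (0.0, -10.0)]
home_alv = alv((0.0, 0.0), landmarks)          # snapshot stored at the nest
hx, hy = alv_home_vector((3.0, 0.0), home_alv, landmarks)
# From (3, 0) the recovered vector points in the -x direction, back toward the nest.
```

The appeal of the model is that only two numbers per view need storing; its weakness, noted above, is that guidance degrades where the landmark distribution gives ambiguous vectors.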
Self-reflective deep reinforcement learning
© 2016 IEEE. In this paper we present a new concept of self-reflection learning to support a deep reinforcement learning model. The self-reflective process occurs offline, between episodes, to help the agent learn to navigate towards a goal location and to boost its online performance. In particular, the best experience so far is recalled and compared with other similar but suboptimal episodes to re-emphasize worthy decisions and de-emphasize unworthy ones using eligibility and learning traces. At the same time, relatively bad experience is forgotten to remove its confusing effect. We set up a layer-wise deep actor-critic architecture and apply the self-reflection process to help train it. We show that the self-reflective model works well, and an initial experimental result on a real robot shows that the agent achieved a good success rate in reaching a goal location.
A model of ant route navigation driven by scene familiarity
In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate that, not only is the approach successful, but also that the routes that are learnt show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
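The scan-for-familiarity loop can be sketched with toy panoramic views: here a "view" is just a 36-bin occupancy histogram of landmark bearings relative to the current heading, familiarity is the best (lowest) difference against any stored training view, and the agent repeatedly scans a fan of headings and steps in the most familiar one. The landmark layout, route, and all parameters are invented for illustration, not taken from the paper.

```python
import math

LANDMARKS = [(4, 3), (7, -2), (11, 4), (2, -5), (9, 6), (13, -3)]
N_BINS = 36  # 10-degree azimuth bins

def view(x, y, heading_deg):
    """Toy panorama: binary occupancy of landmark bearings, robot-relative."""
    v = [0] * N_BINS
    for lx, ly in LANDMARKS:
        bearing = math.degrees(math.atan2(ly - y, lx - x)) - heading_deg
        v[int(bearing % 360.0) // 10] = 1
    return v

def familiarity(v, memory):
    """Lower is more familiar: best SSD against any remembered view."""
    return min(sum((a - b) ** 2 for a, b in zip(v, m)) for m in memory)

# Training: store the views experienced along a route from (0,0) to (10,0),
# facing east, without recording when or where they were taken.
memory = [view(float(xi), 0.0, 0.0) for xi in range(11)]

# Recapitulation: scan candidate headings, move in the most familiar one.
x, y = 0.0, 0.0
for _ in range(20):
    best = min(range(-60, 61, 10),
               key=lambda h: familiarity(view(x, y, h), memory))
    x += 0.5 * math.cos(math.radians(best))
    y += 0.5 * math.sin(math.radians(best))
```

Because views are stored as an unordered bag, the route emerges purely from "familiar view implies familiar movement direction", with no explicit waypoints, matching the holistic character of the model described above.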
How variation in head pitch could affect image matching algorithms for ant navigation
Desert ants are a model system for animal navigation, using visual memory to follow long routes across both sparse and cluttered environments. Most accounts of this behaviour assume retinotopic image matching, e.g. recovering heading direction by finding a minimum in the image difference function as the viewpoint rotates. But most models neglect the potential image distortion that could result from unstable head motion. We report that for ants running across a short section of natural substrate, the head pitch varies substantially: by over 20 degrees with no load, and by 60 degrees when carrying a large food item. There is no evidence of head stabilisation. Using a realistic simulation of the ant's visual world, we demonstrate that this range of head pitch significantly degrades image matching. The effect of pitch variation can be ameliorated by a memory bank of images densely sampled along a route, so that an image sufficiently similar in pitch and location is available for comparison. However, with large pitch disturbance, inappropriate memories sampled at distant locations are often recalled and navigation along a route can be adversely affected. Ignoring images obtained at extreme pitches, or averaging images over several pitches, does not significantly improve performance.
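The rotational image difference function at the heart of such retinotopic matching is simple to sketch: compare a rotated copy of the current panoramic view against a stored snapshot at every shift and take the shift with the minimum difference. The synthetic 1-D "panorama" below stands in for real images, and no pitch distortion is modelled; the point is only the heading-recovery mechanism itself.

```python
import math

WIDTH = 360  # one pixel column per degree of azimuth

# Synthetic stored snapshot: a smooth, rotationally asymmetric 1-D panorama.
snapshot = [math.sin(math.radians(3 * i)) + 0.5 * math.cos(math.radians(7 * i))
            for i in range(WIDTH)]

def ridf(current, stored):
    """Rotational image difference function: SSD at every rotation."""
    return [sum((current[(i + shift) % WIDTH] - stored[i]) ** 2
                for i in range(WIDTH))
            for shift in range(WIDTH)]

def recover_heading(current, stored):
    """Heading estimate: the rotation that minimises the image difference."""
    diffs = ridf(current, stored)
    return min(range(WIDTH), key=diffs.__getitem__)

# Simulate the agent facing 37 degrees away from the snapshot heading:
# its current view is the snapshot rotated by 37 columns.
current = [snapshot[(i - 37) % WIDTH] for i in range(WIDTH)]
heading = recover_heading(current, snapshot)  # recovers the 37-degree offset
```

With a clean rotation the minimum is exact; the finding reported above is that pitch distortion flattens and shifts this minimum, so the recovered heading becomes unreliable unless the memory bank is sampled densely enough.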