
    A model of ant route navigation driven by scene familiarity

    In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be recast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the learnt routes show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that neither specify when or what to learn, nor separate routes into sequences of waypoints.
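    The strategy described above can be sketched in outline: route recall becomes, at each step, a search over candidate headings for the most familiar view. The following minimal illustration uses an exhaustive pixel-wise comparison against stored training views; the renderer, the sum-of-squared-differences measure, and the 36-direction scan are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def view_familiarity(view, training_views):
    """Familiarity of a view = negative of its smallest pixel-wise
    sum-of-squared-differences to any view stored during training."""
    return -min(np.sum((view - t) ** 2) for t in training_views)

def scan_for_heading(render_view, position, training_views):
    """Mimic the ant's scanning routine: sample candidate headings, render
    the view each heading would give, and return the most familiar one."""
    headings = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)
    scores = [view_familiarity(render_view(position, h), training_views)
              for h in headings]
    return headings[int(np.argmax(scores))]
```

    In the paper the exhaustive comparison is later replaced by a neural network trained to output a familiarity score, which removes the need to store the training views explicitly.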

    Visual Place Recognition for Autonomous Mobile Robots

    Horst M, Möller R. Visual Place Recognition for Autonomous Mobile Robots. Robotics. 2017;6(2):9.
    Place recognition is an essential component of autonomous mobile robot navigation. It is used for loop-closure detection to maintain consistent maps, to localize the robot along a route, or in kidnapped-robot situations. Camera sensors provide rich visual information for this task. We compare different approaches to visual place recognition: holistic methods (visual compass and warping), signature-based methods (using Fourier coefficients or feature descriptors; Able for Binary-appearance Loop-closure Evaluation, ABLE), and feature-based methods (fast appearance-based mapping, FabMap). As new contributions, we investigate whether warping, a successful visual homing method, is suitable for place recognition. In addition, we extend the well-known visual compass to use multiple scale planes, a concept also employed by warping. To achieve tolerance against changing illumination conditions, we examine the NSAD distance measure (normalized sum of absolute differences) on edge-filtered images. To reduce the impact of illumination changes on the distance values, we suggest computing ratios of image distances to normalize these values to a common range. We test all methods on multiple indoor databases, as well as a small outdoor database, using images with constant or changing illumination conditions. ROC analysis (receiver operating characteristic) and the metric distance between best-matching image pairs are used as evaluation measures. Most methods perform well under constant illumination conditions, but fail under changing illumination. The visual compass using the NSAD measure on edge-filtered images with multiple scale planes, while being slower than signature methods, performs best in the latter case.
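    The illumination-tolerance idea can be illustrated with a small sketch. The paper's exact edge filter and NSAD formula are not reproduced here; the gradient-magnitude filter and the particular normalization below are assumptions:

```python
import numpy as np

def edge_filter(img):
    """Gradient-magnitude edge image; edge structure is more stable than
    raw intensities when illumination changes."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def nsad(a, b, eps=1e-9):
    """Normalized sum of absolute differences: the raw SAD divided by the
    total edge energy of both images, so distances from different image
    pairs fall into a comparable range (here 0 to 1)."""
    return np.abs(a - b).sum() / (np.abs(a).sum() + np.abs(b).sum() + eps)
```

    Comparing edge-filtered rather than raw images discards absolute brightness, which is exactly the component most affected by lighting changes.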

    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a very dynamic area in terms of theory and application. This book brings together many different aspects of current research on the several fields associated with RL, which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Comprising 24 chapters, it covers a very broad variety of topics in RL and their application in autonomous systems. A set of chapters provides a general overview of RL, while other chapters focus mostly on applications of RL paradigms: game theory, multi-agent theory, robotics, networking technologies, vehicular navigation, medicine and industrial logistics.

    Visual homing in field crickets and desert ants: a comparative behavioural and modelling study

    Visually guided navigation represents a long-standing goal in robotics. Insights may be drawn from various insect species for which visual information has been shown sufficient for navigation in complex environments; however, the generality of visual homing abilities across insect species remains unclear. Furthermore, various models have been proposed as strategies employed by navigating insects, yet comparative studies across models and species are lacking. This work addresses these questions in two insect species not previously studied: the field cricket Gryllus bimaculatus, for which almost no navigational data are available, and the European desert ant Cataglyphis velox, a relative of the African desert ant Cataglyphis bicolor, which has become a model species for insect navigation studies. The ability of crickets to return to a hidden target using surrounding visual cues was tested using an analogue of the Morris water maze, a standard paradigm for spatial memory testing in rodents. Crickets learned to re-locate the hidden target using the provided visual cues, with the best performance recorded when a natural image was provided as stimulus rather than clearly identifiable landmarks. The role of vision in navigation was also observed for desert ants within their natural habitat. Foraging ants formed individual, idiosyncratic, visually guided routes through their cluttered surroundings, as has been reported in other ant species inhabiting similar environments. In the absence of other cues, ants recalled their route even when displaced along their path, indicating that ants recall previously visited places rather than a sequence of manoeuvres. Image databases were collected within the environments experienced by the insects using custom panoramic cameras that approximated the insect-eye view of the world. Six biologically plausible visual homing models were implemented and their performance assessed across experimental conditions.
    The models were first assessed on their ability to replicate the relative performance across the various visual surrounds in which crickets were tested; that is, best performance was sought with the natural scene, followed by blank walls and then the distinct landmarks. Only two models were able to reproduce the pattern of results observed in crickets: pixel-wise image difference with RunDown, and the centre-of-mass average landmark vector. The efficacy of the models was then assessed across locations in the ant habitat. A 3D world was generated from the captured images, providing noise-free and high-spatial-resolution images as model input. Best performance was found for optic-flow and image-difference based models. However, in many locations the centre-of-mass average landmark vector failed to provide reliable guidance. This work shows that two previously unstudied insect species can navigate using surrounding visual cues alone. Moreover, six biologically plausible models of visual navigation were assessed in the same environments as the insects, and only an image-difference based model succeeded in all experimental conditions.
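    The pixel-wise image-difference model with RunDown can be sketched as a simple descent on the difference between the current view and the goal snapshot. The eight probe directions, the step size, and the RMS difference measure below are illustrative assumptions rather than the thesis's exact parameters:

```python
import numpy as np

def image_difference(a, b):
    """Root-mean-square pixel difference between two panoramic views."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def rundown_step(render_view, pos, snapshot, step=0.1):
    """One descent step: probe the view a small step away in each of eight
    compass directions and move to wherever the difference to the goal
    snapshot is smallest."""
    angles = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
    candidates = [(pos[0] + step * np.cos(a), pos[1] + step * np.sin(a))
                  for a in angles]
    return min(candidates,
               key=lambda p: image_difference(render_view(p), snapshot))
```

    Because the image difference tends to grow smoothly with distance from the goal in natural panoramic scenes, repeatedly taking such steps drives the agent toward the snapshot location.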

    Long-range visual homing

    Abstract — A biologically-inspired approach to robot route following is presented. Snapshot images of a robot's environment are captured while learning a route. Later, when retracing the route, the robot uses visual homing to move between the positions where snapshot images were captured. This general approach was inspired by experiments on route following in wood ants. The impact of odometric error and another key parameter is studied in relation to the number of snapshots captured by the learning algorithm. Tests in a photo-realistic simulated environment reveal that route following can succeed even on relatively sparse paths. A major change in illumination reduces, but does not eliminate, the robot's ability to retrace a route. Index Terms — visual homing, route following, robot navigation, insect navigation.
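    The learning side of such an approach can be sketched as difference-triggered snapshot capture along the training path. The threshold value and the mean-absolute-difference comparison below are assumptions standing in for the paper's actual parameters:

```python
import numpy as np

def image_difference(a, b):
    """Mean absolute pixel difference between two views."""
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

def learn_route(render_view, path, capture_thresh=0.25):
    """Walk the training path and store a new snapshot whenever the current
    view has drifted sufficiently far from the last stored one; raising the
    threshold yields the sparser snapshot sequences the paper tests."""
    snapshots = [render_view(path[0])]
    for pos in path[1:]:
        view = render_view(pos)
        if image_difference(view, snapshots[-1]) > capture_thresh:
            snapshots.append(view)
    return snapshots
```

    At retrace time, the robot would home to each stored snapshot in turn, advancing to the next once the current one is reached.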

    Long-Range Visual Homing

    Experiments which apply local visual homing to the problem of long-range homing are described. Local visual homing methods allow return to a goal position where there exists some commonality between the current image and the 'snapshot' image captured from the goal.

    Feature Optimization for Long-Range Visual Homing in Changing Environments

    This paper introduces a feature optimization method for long-range, feature-based robot visual homing in changing environments. To cope with changing environmental appearance, an optimization procedure is introduced to identify the most relevant features for feature-based visual homing, covering their spatial distribution, selection and updating. In previous research on feature-based visual homing, little effort has been spent on improving the feature distribution to obtain uniformly distributed features, which are closely related to homing performance. This paper presents a modified feature extraction algorithm to decrease the influence of anisotropic feature distribution. In addition, the feature selection and updating mechanisms, which have hardly drawn any attention in the domain of feature-based visual homing, are crucial for improving homing accuracy and for maintaining a representation of changing environments. To verify the feasibility of the proposal, several comprehensive evaluations are conducted. The results indicate that the feature optimization method can find optimal feature sets for feature-based visual homing and adapt the appearance representation to changing environments.