
    Interactions of visual odometry and landmark guidance during food search in honeybees

    How do honeybees use visual odometry and goal-defining landmarks to guide food search? In one experiment, bees were trained to forage in an optic-flow-rich tunnel with a landmark positioned directly above the feeder. Subsequent food-search tests indicated that bees searched much more accurately when both odometric and landmark cues were available than when only odometry was available. When the two cue sources were set in conflict, by shifting the position of the landmark in the tunnel during testing, bees overwhelmingly used landmark cues rather than odometry. In another experiment, odometric cues were removed by training and testing in axially striped tunnels. The data show that bees did not weight landmarks as highly as when odometric cues were available, tending to search in the vicinity of the landmark for shorter periods. A third experiment, in which bees were trained with odometry but without a landmark, showed that a novel landmark placed anywhere in the tunnel during testing prevented bees from searching beyond the landmark location. Two further experiments, involving training bees to relatively longer distances with a goal-defining landmark, produced results similar to those of the initial experiment. One caveat was that, with the removal of the familiar landmark, bees tended to overshoot the training location, relative to the case where bees were trained without a landmark. Taken together, the results suggest that bees assign appropriate significance to odometric and landmark cues in a more flexible and dynamic way than previously envisaged.

    Robust nonparametric detection of objects in noisy images

    We propose a novel statistical hypothesis testing method for the detection of objects in noisy images. The method uses results from percolation theory and random graph theory. We present an algorithm that detects objects of unknown shapes in the presence of nonparametric noise of unknown level and unknown distribution. No boundary shape constraints are imposed on the object; only a weak bulk condition on the object's interior is required. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. In this paper, we further develop the mathematical formalism of our method and explore important connections to the mathematical theory of percolation and statistical physics. We prove results on the consistency and algorithmic complexity of our testing procedure. In addition, we address not only the asymptotic behavior of the method but also the finite-sample performance of our test. Comment: This paper initially appeared in 2010 as EURANDOM Report 2010-049.
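    The flavor of such a cluster-based test can be sketched in a few lines: threshold the image, find the largest connected component of above-threshold pixels, and declare an object present if it is implausibly large under pure noise. This is an illustrative toy under assumed names and thresholds, not the authors' exact procedure; the breadth-first search gives the linear complexity mentioned above.

```python
from collections import deque

def detect_object(image, intensity_thr, cluster_thr):
    """Declare an object present if the largest 4-connected cluster of
    above-threshold pixels has at least cluster_thr pixels.
    Illustrative percolation-style test; thresholds are assumptions."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    largest = 0
    for i in range(h):
        for j in range(w):
            if image[i][j] > intensity_thr and not seen[i][j]:
                # BFS over the 4-connected cluster containing (i, j)
                size, queue = 0, deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and image[ny][nx] > intensity_thr):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                largest = max(largest, size)
    return largest >= cluster_thr
```

    Each pixel is visited a bounded number of times, so the scan is linear in image size; choosing `cluster_thr` appropriately is where the percolation theory enters, since it governs how large noise-only clusters can get.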

    How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation

    The visual systems of animals have to provide information to guide behaviour, and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators, it may be that their vision is optimised for navigation. Here we take a computational approach, asking how the details of the optical array influence the informational content of scenes used in simple view-matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
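    The core of a simple view-matching strategy is a rotational image-difference function: rotate the current panorama, compare each rotated version with a stored view, and take the rotation with the smallest difference as the heading estimate. The sketch below is illustrative (the array layout and RMS metric are assumptions, not the paper's exact pipeline); panoramas are arrays whose last axis is azimuth, so a column shift is a rotation.

```python
import numpy as np

def recover_heading(current, stored):
    """Rotational image-difference function: circularly shift the current
    panorama one column at a time and return the shift (in columns) that
    minimizes the RMS pixel difference with the stored view."""
    n_cols = current.shape[-1]
    diffs = [np.sqrt(np.mean((np.roll(current, s, axis=-1) - stored) ** 2))
             for s in range(n_cols)]
    return int(np.argmin(diffs))  # best-matching rotation = heading offset
```

    With low-resolution panoramas, `n_cols` is small, so the exhaustive scan over rotations stays cheap; the trade-off discussed above shows up as a broader, more forgiving minimum in the difference function.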

    Open environments to support systems engineering tool integration: A study using the Portable Common Tool Environment (PCTE)

    A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of the Emeraude environment over the project time frame is summarized, and several related areas for future research are identified.

    A model of ant route navigation driven by scene familiarity

    In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the learnt routes show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
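    The scanning routine combined with exhaustive nearest-neighbour familiarity (the paper's first variant) can be sketched as follows. Here `view_at` is an assumed world-rendering function that returns the panorama seen from a position in a given heading; the names and the Euclidean distance are illustrative, not the model's exact implementation.

```python
import numpy as np

def most_familiar_heading(view_at, position, headings, training_views):
    """Scanning routine sketch: sample the view in each candidate heading and
    choose the heading whose view best matches any stored training view.
    A lower distance to the nearest training view means a more familiar view."""
    def familiarity(view):
        return min(np.linalg.norm(view - t) for t in training_views)
    scores = [familiarity(view_at(position, h)) for h in headings]
    return headings[int(np.argmin(scores))]
```

    Moving a fixed step in the returned heading and scanning again reproduces the route-following loop: no waypoints are stored, and nothing in the mechanism specifies when or what to learn, only which views were experienced during training.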

    Ground-nesting insects could use visual tracking for monitoring nest position during learning flights

    Ants, bees and wasps are central place foragers. They leave their nests to forage and routinely return to their home-base. Most are guided by memories of the visual panorama and the visual appearance of the local nest environment when pinpointing their nest. These memories are acquired during highly structured learning walks or flights that are performed when leaving the nest for the first time or whenever the insects have had difficulty finding the nest during their previous return. Ground-nesting bees and wasps perform such learning flights daily when they depart for the first time. During these flights, the insects turn back to face the nest entrance and subsequently back away from the nest while flying along ever-increasing arcs that are centred on the nest. Flying along these arcs, the insects counter-turn in such a way that the nest entrance is always seen in the frontal visual field at slightly lateral positions. Here we ask how the insects keep track of the nest entrance location, given that it is a small, inconspicuous hole in the ground, surrounded by complex natural structures that undergo unpredictable perspective transformations as the insect pivots around the area and gains distance from it. We reconstructed the natural visual scene experienced by wasps and bees during their learning flights and applied a number of template-based tracking methods to these image sequences. We find that tracking with a fixed template fails very quickly in the course of a learning flight, but that continuously updating the template allows us to reliably estimate nest direction in reconstructed image sequences. This is true even for later sections of learning flights, when the insects are so far away from the nest that they cannot resolve the nest entrance as a visual feature.
    We discuss why visual goal-anchoring is likely to be important during the acquisition of visual-spatial memories and describe experiments to test whether insects indeed update nest-related templates during their learning flights. © 2014 Springer International Publishing Switzerland
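    The fixed- versus continuously-updated-template distinction can be illustrated with a minimal correlation tracker. This is a sketch under assumed names and parameters (search window, normalized cross-correlation score), far simpler than the reconstructed-scene methods above: the key line is the template replacement after every frame, which lets the tracker follow a target whose appearance changes under perspective.

```python
import numpy as np

def track_with_updating_template(frames, template, start=(0, 0), search=5):
    """Track a target across frames by normalized cross-correlation within a
    small search window around the last position, replacing the template with
    the best-matching patch after every frame (the 'updated template' strategy).
    Returns the estimated (row, col) position in each frame."""
    th, tw = template.shape
    pos, path = start, []
    for frame in frames:
        best, best_score = pos, -np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = pos[0] + dy, pos[1] + dx
                if 0 <= y <= frame.shape[0] - th and 0 <= x <= frame.shape[1] - tw:
                    patch = frame[y:y + th, x:x + tw]
                    a = patch - patch.mean()
                    b = template - template.mean()
                    denom = np.linalg.norm(a) * np.linalg.norm(b)
                    score = float((a * b).sum() / denom) if denom else -np.inf
                    if score > best_score:
                        best_score, best = score, (y, x)
        pos = best
        # Continuous update: the current best patch becomes the new template
        template = frame[pos[0]:pos[0] + th, pos[1]:pos[1] + tw].copy()
        path.append(pos)
    return path
```

    Dropping the template-update line turns this into the fixed-template tracker, which loses the target as soon as its appearance drifts too far from the original patch, mirroring the failure reported above.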

    How Ants Use Vision When Homing Backward

    Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3, 4, 5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward occasionally drop their food, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility in, and communication between, different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories.

    Framing vulnerability, risk and societal responses: the MOVE framework

    The paper deals with the development of a general, integrative and holistic framework to systematize and assess vulnerability, risk and adaptation. The framework is a thinking tool, a heuristic that outlines the key factors and different dimensions that need to be addressed when assessing vulnerability in the context of natural hazards and climate change. The approach underlines that the key factors of such a common framework are related to the exposure of a society or system to a hazard or stressor, the susceptibility of the system or community exposed, and its resilience and adaptive capacity. Additionally, it underlines the necessity to consider key factors and multiple thematic dimensions when assessing vulnerability in the context of natural and socio-natural hazards. In this regard, it shows key linkages between the different concepts used within disaster risk management (DRM) and climate change adaptation (CCA) research. The framework is also a tool for communicating complexity, and it stresses the need for societal change in order to reduce risk and to promote adaptation. With regard to this, the policy relevance of the framework and first results of its application are outlined. Overall, the framework presented enhances the discussion on how to frame and link vulnerability, disaster risk, risk management and adaptation concepts.

    A Preference for a Sexual Signal Keeps Females Safe

    Predation is generally thought to constrain sexual selection by female choice and limit the evolution of conspicuous sexual signals. Under high predation risk, females usually become less choosy, because they limit their exposure to predators by reducing the extent of their mate searching. However, predation need not weaken sexual selection if, under high predation risk, females exhibit stronger preferences for males that use conspicuous signals that help females avoid their predators. We tested this prediction in the fiddler crab Uca terpsichores by increasing females' perceived predation risk from crab-eating birds and measuring the attractiveness of a courtship signal that females use to find mates. The sexual signal is an arching mound of sand that males build at the openings of their burrows, to which they attract females for mating. We found that the greater the risk, the more attractive were males with those structures. The benefits of mate preferences for sexual signals are usually thought to be linked to males' reproductive contributions to females or their young. Our study provides the first evidence that a female preference for a sexual signal can yield direct survival benefits by keeping females safe as they search for mates.