    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near-range path navigation of a mobile robot using a pair of biologically inspired visual neural networks – lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands that control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step, so the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
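
    The steering rule described above can be summarised in a few lines. The following is a minimal sketch, not the authors' implementation: the LGMD model is reduced to a simple frame-differencing excitation measure, and all names and parameters (lgmd_excitation, base_speed, gain) are illustrative assumptions.

        import numpy as np

        def lgmd_excitation(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
            """Crude LGMD-like output: mean luminance change over a
            (grayscale) sub-image between consecutive frames."""
            diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
            return float(diff.mean())

        def wheel_speeds(prev_frame: np.ndarray, curr_frame: np.ndarray,
                         base_speed: float = 0.2, gain: float = 0.5) -> tuple[float, float]:
            """Split the wide field of view into left/right halves, feed each
            half to one LGMD, and steer away from the stronger response."""
            w = curr_frame.shape[1] // 2
            left = lgmd_excitation(prev_frame[:, :w], curr_frame[:, :w])
            right = lgmd_excitation(prev_frame[:, w:], curr_frame[:, w:])
            # Normalised bilateral difference: positive when the left LGMD
            # responds more strongly, which speeds up the left wheel and slows
            # the right wheel, turning the robot away from the left side.
            delta = gain * (left - right) / (left + right + 1e-9)
            return base_speed + delta, base_speed - delta  # (left_wheel, right_wheel)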

    An adaptive appearance-based map for long-term topological localization of mobile robots

    This work considers a mobile service robot that uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the robot's current position in the environment. Because the appearance of real-world environments such as houses and offices keeps changing, the internal representation may become out of date after some time. To solve this problem, the robot needs to be able to continually adapt its internal representation to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot using long-term and short-term memory concepts, with omni-directional vision as the external sensor.
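
    One plausible reading of the long-term/short-term memory idea is a rehearsal-and-decay update over the map's visual features. The sketch below is an assumption-laden illustration, not the paper's actual mechanism; the scores, promotion threshold, and decay rates are invented for clarity.

        from dataclasses import dataclass, field

        @dataclass
        class Feature:
            descriptor: tuple          # appearance descriptor for this feature
            stm_score: float = 1.0     # short-term memory strength
            ltm_score: float = 0.0     # long-term memory strength

        @dataclass
        class PlaceNode:
            features: list[Feature] = field(default_factory=list)

            def update(self, observed: set[int], promote_at: float = 3.0,
                       decay: float = 0.1) -> None:
                """Reinforce features matched in the current view, promote
                repeatedly seen ones into long-term memory, and let unseen
                features fade (long-term memory decays much more slowly)."""
                for i, f in enumerate(self.features):
                    if i in observed:
                        f.stm_score += 1.0
                        if f.stm_score >= promote_at:
                            f.ltm_score = min(1.0, f.ltm_score + 0.2)
                    else:
                        f.stm_score = max(0.0, f.stm_score - decay)
                        f.ltm_score = max(0.0, f.ltm_score - decay * 0.1)
                # Forget features that have faded from both memories.
                self.features = [f for f in self.features
                                 if f.stm_score > 0.0 or f.ltm_score > 0.0]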

    Bioinspired engineering of exploration systems for NASA and DoD

    A new approach called bioinspired engineering of exploration systems (BEES) and its value for solving pressing NASA and DoD needs are described. Insects (for example, honeybees and dragonflies) cope remarkably well with their world, despite possessing a brain containing less than 0.01% as many neurons as the human brain. Although most insects have immobile eyes with fixed-focus optics and lack stereo vision, they use a number of ingenious, computationally simple strategies for perceiving their world in three dimensions and navigating successfully within it. We are distilling selected insect-inspired strategies to obtain novel solutions for navigation, hazard avoidance, altitude hold, stable flight, terrain following, and gentle deployment of payload. Such functionality provides potential solutions for future autonomous robotic space and planetary explorers. A BEES approach to developing lightweight, low-power autonomous flight systems should be useful for the flight control of biomorphic flyers serving both NASA and DoD needs. Recent biological studies of mammalian retinas confirm that representations of multiple features of the visual world are systematically parsed and processed in parallel, with each feature mapped to a stack of cellular strata within the retina. Each of these representations can be efficiently modeled in semiconductor cellular nonlinear network (CNN) chips. We describe recent breakthroughs in exploring the feasibility of blending insect strategies of navigation with mammalian visual search, pattern recognition, and image understanding into hybrid biomorphic flyers for future planetary and terrestrial applications, and we outline a few future Mars exploration mission scenarios uniquely enabled by these newly developed biomorphic flyers.

    Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated

    Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to the image of an approaching object. These neurons are called the lobula giant movement detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the development of an LGMD model for use as an artificial collision detector in robotic applications. To date, robots have been equipped with only a single, central artificial LGMD sensor, which triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly, for a robot to behave autonomously, it must react differently to stimuli approaching from different directions. In this study, we implement a bilateral pair of LGMD models in Khepera robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD models using methodologies inspired by research on escape direction control in cockroaches. Using ‘randomised winner-take-all’ or ‘steering wheel’ algorithms for LGMD model integration, the Khepera robots could escape an approaching threat in real time and with a distribution of escape directions similar to that of real locusts. We also found that, by optimising these algorithms, we could use them to integrate the left and right DCMD responses of real jumping locusts offline and reproduce the actual escape directions that the locusts took in a particular trial. Our results significantly advance the development of an artificial collision detection and evasion system based on the locust LGMD by giving it reactive control over robot behaviour. The success of this approach may also indicate some important areas to be pursued in future biological research.
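
    For concreteness, here is a hedged sketch of the two named integration schemes applied to a pair of left/right LGMD outputs. The gain, noise scale, and the mapping from the winning side to an escape direction are illustrative assumptions, not the paper's tuned parameters.

        import random

        def steering_wheel(left: float, right: float, gain: float = 1.0) -> float:
            """Continuous 'steering wheel' scheme: the turn command is
            proportional to the bilateral difference, steering away from the
            more strongly excited side. Positive output means 'turn right'
            (away from a left-side threat)."""
            return gain * (left - right)

        def randomised_winner_take_all(left: float, right: float,
                                       noise: float = 0.2) -> str:
            """Discrete scheme: each side's response is perturbed by noise,
            the stronger side wins, and the robot escapes away from it. The
            noise spreads the escape directions across trials, as observed
            in real locusts."""
            if left + random.gauss(0.0, noise) > right + random.gauss(0.0, noise):
                return "escape-right"
            return "escape-left"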

    Multimodal interactions in insect navigation

    Animals travelling through the world receive input from multiple sensory modalities that could be important for the guidance of their journeys. Given the availability of a rich array of cues, from idiothetic information to input from sky compasses and visual information through to olfactory and other cues (e.g. gustatory, magnetic, anemotactic or thermal), it is no surprise to see multimodality in most aspects of navigation. In this review, we present the current knowledge of multimodal cue use during orientation and navigation in insects. Multimodal cue use is adapted to a species’ sensory ecology and shapes navigation behaviour both during the learning of environmental cues and when performing complex foraging journeys. The simultaneous use of multiple cues is beneficial because it provides redundant navigational information, and in general, multimodality increases robustness, accuracy and overall foraging success. We use examples from sensorimotor behaviours in mosquitoes and flies, as well as from large-scale navigation in ants, bees and insects that migrate seasonally over large distances, asking at each stage how multiple cues are combined behaviourally and what insects gain from using different modalities.

    Region of interest-based adaptive multimedia streaming scheme

    Adaptive multimedia streaming aims to adjust the transmitted content to the available bandwidth such that losses, which often severely affect end-user perceived quality, are minimized and transmission quality consequently increases. Current solutions affect the whole viewing area of the multimedia frames equally, despite research showing that viewers are more interested in some regions of the image than in others. This paper presents a novel region of interest-based adaptive scheme (ROIAS) for multimedia streaming that, when performing transmission-related quality adjustments, selectively affects the quality of those regions of the image the viewers are least interested in. As the quality of the regions the viewers are most interested in will not change (or will change very little), the proposed scheme provides higher overall end-user perceived quality than existing adaptive solutions.
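
    The core idea can be illustrated with a toy allocation rule: when bandwidth forces a quality cut, low-interest regions absorb most of it. This is a minimal sketch under assumed inputs (a per-region interest map in [0, 1] and a required average reduction), not ROIAS's actual adaptation algorithm.

        def allocate_quality(interest: list[float], reduction: float) -> list[float]:
            """interest[i] in [0, 1] is how much viewers care about region i;
            reduction in [0, 1] is the average quality cut demanded by the
            available bandwidth. Returns a per-region quality factor in
            [0, 1], cutting low-interest regions proportionally more."""
            inverse = [1.0 - w for w in interest]   # each region's share of the cut
            total = sum(inverse) or 1.0             # guard: all regions fully interesting
            n = len(interest)
            return [max(0.0, 1.0 - reduction * n * v / total) for v in inverse]

        # Example: four regions, viewers fixate on the first two. A 30%
        # average cut mostly degrades the last two regions.
        print(allocate_quality([0.9, 0.8, 0.2, 0.1], reduction=0.3))
        # -> approximately [0.94, 0.88, 0.52, 0.46]; the mean is 0.70,
        #    i.e. exactly the requested 30% average reduction.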