324 research outputs found

    Bio-inspired collision detector with enhanced selectivity for ground robotic vision system

    There are many ways of building collision-detecting systems. In this paper, we propose a novel collision-selective visual neural network inspired by LGMD2 neurons in juvenile locusts. This collision-sensitive neuron matures early, in first-aged or even hatching locusts, and is selective only to dark objects looming against a bright background, which represent swooping predators; a similar situation faces ground robots and vehicles. However, little has been done on modeling LGMD2, let alone on its potential applications in robotics and other vision-based areas. Compared to other collision detectors, our major contributions are: first, enhancing the collision selectivity in a bio-inspired way, by constructing a computationally efficient visual sensor that realizes the revealed specific characteristics of LGMD2; second, applying the neural network to the path navigation of an autonomous ground miniature robot in an arena. We also examined its neural properties through systematic experiments, challenging the model with image streams from the visual sensor of the micro-robot.
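    The LGMD2 selectivity described above (responding to dark objects looming against a bright background, but not to brightening) can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the sigmoid membrane stage, and the threshold are illustrative assumptions.

```python
import numpy as np

def lgmd2_response(prev_frame, curr_frame, threshold=0.5):
    """Single-step response of a highly simplified LGMD2-style detector.

    LGMD2 responds to darkening (OFF) contrast only, so we half-wave
    rectify the negative luminance change and sum it over the field.
    Hypothetical sketch: not the paper's actual network.
    """
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    off = np.maximum(-diff, 0.0)            # keep luminance decrements only
    excitation = off.sum() / off.size       # normalised field excitation
    membrane = 1.0 / (1.0 + np.exp(-excitation))  # sigmoid membrane potential
    return membrane, membrane > threshold
```

A dark object appearing in a bright frame drives the response above threshold, while the reverse (brightening) change produces no OFF excitation at all, which is the selectivity the paper exploits for ground vehicles.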

    Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation

    Shaping the collision selectivity in vision-based artificial collision-detecting systems is still an open challenge. This paper presents a novel neuron model of a locust looming detector, i.e. the lobula giant movement detector (LGMD1), in order to provide effective solutions for enhancing the selectivity for looming objects over other visual challenges. We propose an approach to model the biologically plausible mechanisms of ON and OFF pathways and a biophysical mechanism of spike frequency adaptation (SFA) in the proposed LGMD1 visual neural network. The ON and OFF pathways separate dark and light looming features for parallel spatiotemporal computations. This works effectively for perceiving a potential collision with an approaching dark or light object; such a bio-plausible structure can also separate the LGMD1's collision selectivity from that of its neighbouring looming detector -- the LGMD2. The SFA mechanism can enhance the LGMD1's selectivity for approaching objects over receding and translating stimuli, which is a significant improvement compared with similar LGMD1 neuron models. The proposed framework has been tested using off-line tests with synthetic and real-world stimuli, as well as on-line bio-robotic tests. The enhanced collision selectivity of the proposed model has been validated in systematic experiments. The computational simplicity and robustness of this work have also been verified by the bio-robotic tests, which demonstrates its potential for building neuromorphic sensors that detect collisions in both a fast and reliable manner.
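    The two mechanisms named above can be sketched in a few lines. This is a hedged illustration, not the paper's model: the channel split shows the ON/OFF idea (separate rectification of brightness increments and decrements), and the SFA stage is modelled here as a simple leaky adaptation state, one common way to realise spike frequency adaptation.

```python
import numpy as np

def split_on_off(prev_frame, curr_frame):
    """Split the luminance change into parallel ON and OFF channels."""
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    on = np.maximum(diff, 0.0)    # brightness increments (ON pathway)
    off = np.maximum(-diff, 0.0)  # brightness decrements (OFF pathway)
    return on, off

def sfa(excitations, tau=0.7):
    """Spike-frequency-adaptation sketch: a slow adaptation state is
    subtracted from the input, so sustained (translating) excitation
    decays while growing (looming) excitation keeps driving the output.
    `tau` is an assumed adaptation time constant."""
    adapted, state = [], 0.0
    for e in excitations:
        out = max(e - state, 0.0)
        state = tau * state + (1 - tau) * e   # slowly track the input level
        adapted.append(out)
    return adapted
```

Under this sketch a constant excitation sequence (a translating object) is progressively suppressed, while a ramping sequence (an approaching object) keeps producing output, which is the selectivity the SFA mechanism contributes.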

    Towards a Dynamic Vision System - Computational Modelling of Insect Motion Sensitive Neural Systems

    For motion perception, vision plays an irreplaceable role: compared to other sensing modalities, it can extract more abundant, useful movement features from an unpredictable dynamic environment. Nowadays, building a dynamic vision system for motion perception in a manner that is both reliable and efficient is still an open challenge. Millions of years of evolution have provided nature with animals that possess robust vision systems capable of motion perception across many aspects of life. Insects, in particular, have relatively few visual neurons compared to vertebrates and humans, but can still navigate smartly through visually cluttered and dynamic environments. Understanding the insects' visual processing pathways and methods is thus not only attractive to neural system modellers but also critical for providing effective solutions for future intelligent machines. Originating from biological research on insect visual systems, this thesis investigates computational modelling of motion-sensitive neural systems and their potential applications to robotics. It proposes novel models of the locust and fly visual systems for sensing looming and translating stimuli. Specifically, the proposed models comprise collision-selective neural networks of two lobula giant movement detectors (LGMD1 and LGMD2) in locusts, translation-sensitive neural networks of direction selective neurons (DSNs) in flies, and hybrid visual neural systems combining them. In all these models, the functionality of ON and OFF pathways, which separate visual processing into parallel computations, is highlighted. This works effectively to realise the neural characteristics of both the LGMD1 and the LGMD2 in locusts and plays a crucial role in separating the different looming selectivity of the two visual neurons.
    Such a biologically plausible structure can also implement the fly DSNs for perceiving translational movements and guide fast motion tracking with a behavioural response to visual fixation. The effectiveness and flexibility of the proposed motion-sensitive neural systems have been validated by systematic and comparative experiments, ranging from off-line synthetic and real-world tests to on-line bio-robotic tests. The underlying characteristics and functionality of the locust LGMDs and the fly DSNs are captured by the proposed models. All the proposed visual models have been successfully realised on the embedded system of a vision-based ground mobile robot. The robot tests have verified the computational simplicity and efficiency of the proposed bio-inspired methodologies, which hint at the great potential of building neuromorphic sensors in autonomous machines for motion perception in a fast, reliable and low-energy manner.

    Modeling direction selective visual neural network with ON and OFF pathways for extracting motion cues from cluttered background

    Nature endows animals with robust vision systems for extracting and recognizing different motion cues, detecting predators, and chasing prey or mates in dynamic and cluttered environments. Direction selective neurons (DSNs), which prefer visual stimuli of certain orientations, have been found in both vertebrates and invertebrates for decades. In this paper, drawing on recent biological research progress on motion-detecting circuitry, we propose a novel way to model DSNs for recognizing movements in the four cardinal directions. It is based on an architecture of ON and OFF visual pathways that underlies a theory of splitting motion signals into parallel channels, encoding brightness increments and decrements separately. To enhance the edge selectivity and the speed response to moving objects, we put forth a bio-plausible spatiotemporal network structure with multiple connections of same-polarity ON/OFF cells. Each pairwise combination is filtered with a dynamic delay depending on sampling distance. The proposed vision system was challenged with image streams from both synthetic and cluttered real physical scenarios. The results demonstrated three major contributions: first, the neural network fulfilled the characteristics of a postulated physiological map of conveying visual information through different neuropile layers; second, the DSNs model can extract useful directional motion cues from cluttered backgrounds robustly and in a timely manner, which hints at the potential for quick implementation in vision-based micro mobile robots; moreover, it also shows a better speed response compared to a state-of-the-art elementary motion detector.
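    The delay-and-correlate principle behind such direction selective models can be sketched with a classic Hassenstein-Reichardt-style elementary motion detector over two neighbouring receptor signals. This is a textbook baseline, not the paper's multi-connection ON/OFF network; the fixed one-step delay stands in for the dynamic, distance-dependent delays described above.

```python
def hr_correlator(signal_a, signal_b, delay=1):
    """Hassenstein-Reichardt-style elementary motion detector.

    `signal_a` and `signal_b` are time series from two neighbouring
    photoreceptors. Each arm correlates the delayed signal of one
    receptor with the undelayed signal of the other; subtracting the
    mirror arm makes the sign of the result encode the direction of
    motion (positive: A-to-B, negative: B-to-A).
    """
    response = 0.0
    for t in range(delay, len(signal_a)):
        response += signal_a[t - delay] * signal_b[t]   # preferred-direction arm
        response -= signal_b[t - delay] * signal_a[t]   # null-direction arm
    return response
```

Running one detector per cardinal axis, on the ON and OFF channels separately, gives the kind of four-direction motion cue extraction the paper describes.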

    Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot

    Developments in robotics inform research across a broad range of disciplines. In this paper, we study and compare two collision-selective neuron models via a vision-based autonomous micro robot. In the locust's visual brain, two lobula giant movement detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming-sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons perceive potential collisions in an efficient and reliable manner, and a few modeling works have demonstrated their effectiveness in robotic implementations. In this research, for the first time, we set up binocular neuronal models, combining the functionalities of the LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrated three contributions: (1) the arena tests involving multiple robots verified the robustness and efficiency of a reactive motion control strategy that integrates a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios; (2) we pinpointed the different collision selectivity of the LGMD1 and LGMD2 neuron models, consistent with corresponding biological research results; (3) the low-cost robot may also shed light on similar bio-inspired embedded vision systems and swarm robotics applications.
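    A bilateral reactive strategy of the kind described above can be sketched generically: compare the looming responses from the left and right detectors and turn away from the stronger side. The function name, threshold, and gain here are illustrative assumptions, not the paper's actual controller.

```python
def steer(left_response, right_response, threshold=0.6, turn_gain=1.0):
    """Reactive steering from a bilateral pair of collision detectors.

    Generic sketch of a bilateral avoidance strategy (hypothetical
    parameters): drive forward while both responses stay below the
    alert threshold; otherwise turn away from the side reporting the
    stronger looming response, with magnitude set by the imbalance.
    """
    if max(left_response, right_response) < threshold:
        return "forward", 0.0
    turn = turn_gain * (left_response - right_response)
    return ("turn_right" if turn > 0 else "turn_left"), abs(turn)
```

A stronger response on the left produces a right turn and vice versa, so a looming object on either flank is steered around rather than merely braked for.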

    Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic

    Collision prevention poses a major research and development obstacle for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models, inspired by the locust's LGMD-1 and LGMD-2 visual pathways, as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios, where real physical crashes happen, have never been systematically investigated, owing to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and with high flexibility. The proposed visual systems serve as the only collision-sensing modality in each micro mobile robot, which avoids collisions by abrupt braking. The simulated traffic resembles on-road sections, including intersection and highway scenes, in which the roadmaps are rendered by coloured, artificial pheromones on a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is laid on corroborating the robustness of the LGMD neural system models in alerting to potential crashes in a timely manner across different dynamic robot scenes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic, towards a reliable collision alert system under constrained computation power. This paper also exhibits a novel, tractable, and affordable robotic approach to evaluating online visual systems in dynamic scenes.