
    Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot

    The developments of robotics inform research across a broad range of disciplines. In this paper, we study and compare two collision-selective neuron models via a vision-based autonomous micro robot. In the locust's visual brain, two Lobula Giant Movement Detectors (LGMDs), i.e. LGMD1 and LGMD2, have been identified as looming-sensitive neurons responding to rapidly expanding objects, yet with different collision selectivity. Both neurons have been modelled for perceiving potential collisions in an efficient and reliable manner, and a few modelling works have demonstrated their effectiveness for robotic implementations. In this research, for the first time, we set up binocular neuronal models, combining the functionalities of the LGMD1 and LGMD2 neurons, in the visual modality of a ground mobile robot. The results of systematic on-line experiments demonstrate three contributions: (1) the arena tests involving multiple robots verified the robustness and efficiency of a reactive motion control strategy that integrates a bilateral pair of LGMD1 and LGMD2 models for collision detection in dynamic scenarios; (2) we pinpointed the different collision selectivity between the LGMD1 and LGMD2 neuron models, consistent with corresponding biological research results; (3) the low-cost robot may also shed light on similar bio-inspired embedded vision systems and swarm robotics applications.
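    The reactive motion control strategy described above can be sketched as a simple mapping from bilateral looming activations to wheel commands. The function below is a hypothetical illustration, not the paper's controller; the activation scale, spike threshold and wheel speeds are all assumptions made for the example.

```python
def reactive_steering(lgmd_left, lgmd_right, spike_threshold=0.7):
    """Map bilateral looming activations (in [0, 1]) to wheel speeds.

    Hypothetical sketch: the actual models fuse LGMD1/LGMD2 outputs per
    eye. Returns (left_wheel, right_wheel) speeds in [-1, 1].
    """
    if lgmd_left >= spike_threshold and lgmd_right >= spike_threshold:
        return (0.0, 0.0)  # imminent frontal collision: brake
    if lgmd_left >= spike_threshold:
        return (1.0, 0.2)  # threat on the left: steer right
    if lgmd_right >= spike_threshold:
        return (0.2, 1.0)  # threat on the right: steer left
    return (1.0, 1.0)      # no threat: keep going straight
```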

    Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review

    Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only provided effective solutions for artificial intelligence, but has also benefited the understanding of complicated biological visual systems. These biological mechanisms, honed through millions of years of evolution, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research on insects' visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-systems integration and hardware realisation of these bio-inspired motion perception models.
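    As a concrete illustration of the looming-sensitive LGMD family reviewed here, a single time step of a minimal detector can be sketched with a frame-difference excitation layer, a lateral-inhibition layer and a sigmoid membrane potential. This is a simplified generic sketch, not any specific published model; the 3x3 inhibition neighbourhood, inhibition weight and normalisation are assumptions, and real models delay the inhibitory signal in time.

```python
import numpy as np

def lgmd_response(prev_frame, curr_frame, w_i=0.6):
    """One time step of a minimal LGMD-style looming detector.

    Illustrative sketch only; published models add ON/OFF pathways,
    feed-forward inhibition, temporal delays and spiking mechanisms.
    """
    # P layer: absolute luminance change between consecutive frames
    excitation = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    # I layer: lateral inhibition, here a 3x3 neighbourhood average
    padded = np.pad(excitation, 1, mode="edge")
    h, w = excitation.shape
    inhibition = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    ) / 8.0
    # S layer: excitation suppressed by weighted inhibition, rectified
    s = np.maximum(excitation - w_i * inhibition, 0.0)
    # Membrane potential: sigmoid of the summed, cell-count-normalised activity
    k = s.sum() / s.size
    return 1.0 / (1.0 + np.exp(-k))
```

With no luminance change the response rests at 0.5; an expanding edge pattern pushes it towards 1.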

    Towards a Dynamic Vision System - Computational Modelling of Insect Motion Sensitive Neural Systems

    For motion perception, vision plays an irreplaceable role: compared to other sensing modalities, it can extract more abundant, useful movement features from an unpredictable dynamic environment. Building a dynamic vision system for motion perception that is both reliable and efficient is still an open challenge. Millions of years of evolution have produced animals with robust vision systems capable of motion perception across a variety of aspects of life. Insects, in particular, have relatively few visual neurons compared to vertebrates and humans, yet still navigate smartly through visually cluttered and dynamic environments. Understanding the insects' visual processing pathways and methods is thus not only attractive to neural system modellers but also critical for providing effective solutions for future intelligent machines. Grounded in biological research on insect visual systems, this thesis investigates computational modelling of motion sensitive neural systems and their potential applications in robotics. It proposes novel models of the locust and fly visual systems for sensing looming and translating stimuli. Specifically, the proposed models comprise collision-selective neural networks of two lobula giant movement detectors (LGMD1 and LGMD2) in locusts, translation-sensitive neural networks of direction selective neurons (DSNs) in flies, and hybrid visual neural systems combining them. All these models highlight the functionality of ON and OFF pathways, which separate visual processing into parallel computation. This separation works effectively to realise the neural characteristics of both the LGMD1 and the LGMD2 in locusts, and plays a crucial role in generating the different looming selectivity of the two visual neurons. Such a biologically plausible structure can also implement the fly DSNs for translational movement perception and guide fast motion tracking with a behavioural response to visual fixation. The effectiveness and flexibility of the proposed motion sensitive neural systems have been validated by systematic and comparative experiments ranging from off-line synthetic and real-world tests to on-line bio-robotic tests. The proposed models reproduce the underlying characteristics and functionality of the locust LGMDs and the fly DSNs. All the proposed visual models have been successfully realised on the embedded system of a vision-based ground mobile robot. The robot tests have verified the computational simplicity and efficiency of the proposed bio-inspired methodologies, which hint at the great potential of building neuromorphic sensors in autonomous machines for motion perception in a fast, reliable and low-energy manner.
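    The ON/OFF separation highlighted in the thesis can be illustrated with half-wave rectification of the inter-frame luminance change: the ON channel carries brightness increments and the OFF channel carries decrements. This is a generic sketch of the principle rather than the thesis implementation.

```python
import numpy as np

def split_on_off(prev_frame, curr_frame):
    """Split luminance change into parallel ON and OFF channels.

    Simplified sketch of the ON/OFF separation the models rely on;
    real pathways add temporal filtering downstream of this split.
    """
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    on = np.maximum(diff, 0.0)    # ON: luminance increase only
    off = np.maximum(-diff, 0.0)  # OFF: luminance decrease only
    return on, off
```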

    Complementary Visual Neuronal Systems Model for Collision Sensing

    Inspired by insects' visual brains, this paper presents original modelling of a complementary visual neuronal systems model for real-time and robust collision sensing. Two categories of wide-field motion sensitive neurons have been studied intensively: the lobula giant movement detectors (LGMDs) in locusts and the lobula plate tangential cells (LPTCs) in flies. The LGMDs are specifically selective to objects approaching in depth that threaten collision, whilst the LPTCs are only sensitive to objects translating in horizontal and vertical directions. Though each has been modelled and applied in various visual scenes including robot scenarios, little has been done to investigate their complementary functionality and selectivity when functioning together. To fill this vacancy, we introduce a hybrid model combining two LGMDs (LGMD1 and LGMD2) with horizontally sensitive LPTCs (rightward LPTC-R and leftward LPTC-L), specialising in fast collision perception. With coordination and competition between the differently activated neurons, the proximity feature of frontal approaching stimuli can be sharpened considerably by suppressing translating and receding motions. The proposed method has been implemented in ground micro-mobile robots as embedded systems. The multi-robot experiments have demonstrated the effectiveness and robustness of the proposed model for frontal collision sensing, which outperforms previous single-neuron-type computation methods against translating interference.
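    The coordination and competition between looming- and translation-sensitive neurons can be sketched as translation activity suppressing the collision signal, so a laterally passing object does not trigger a false alarm. The suppression weight and alert threshold below are illustrative assumptions, not values from the paper.

```python
def collision_alert(lgmd1, lgmd2, lptc_r, lptc_l, w=0.8, threshold=0.5):
    """Competition between looming (LGMD) and translation (LPTC) neurons.

    Hypothetical sketch: translation activity inhibits the proximity
    signal before thresholding. All activations assumed in [0, 1].
    """
    translation = max(lptc_r, lptc_l)          # strongest translation cue
    proximity = max(lgmd1, lgmd2) - w * translation  # suppressed looming cue
    return proximity > threshold               # True = collision alert
```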

    Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning

    Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness in more challenging scenarios such as various vehicle driving scenes, owing to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, featuring novel modelling of spatiotemporal inhibition dynamics with biological plausibility, including 1) lateral inhibitions with global biases defined by a variant of the Gaussian distribution, spatially, and 2) an adaptive feed-forward inhibition mediation pathway, temporally. Accordingly, the LGMD+ more effectively detects only approaching objects that threaten head-on collision, by appropriately suppressing motion distractors caused by vibrations, near-misses, or approaching stimuli deviating from the centre of view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness, outperforming the previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments on autonomous micro-mobile robots.
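    The spatially biased lateral inhibition of LGMD+ can be illustrated with a centre-peaked Gaussian weighting map that down-weights motion far from the centre of view. The map below is a hypothetical stand-in for the paper's Gaussian-variant distribution; the width parameter is assumed.

```python
import numpy as np

def gaussian_bias(height, width, sigma_frac=0.25):
    """Centre-peaked spatial bias map over the field of view.

    Hypothetical stand-in for an LGMD+-style global bias: motion near
    the centre (head-on threats) keeps full weight, while peripheral
    motion (near-misses, deviated stimuli) is attenuated.
    """
    ys = np.linspace(-1.0, 1.0, height)[:, None]  # vertical coordinates
    xs = np.linspace(-1.0, 1.0, width)[None, :]   # horizontal coordinates
    sigma = 2.0 * sigma_frac                      # assumed width of the bias
    return np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
```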

    A Robust Collision Perception Visual Neural Network with Specific Selectivity to Darker Objects

    Building an efficient and reliable collision perception visual system is a challenging problem for future robots and autonomous vehicles. The biological visual neural networks, which have evolved over millions of years and work perfectly in the real world, are ideal models for designing artificial vision systems. In the locust's visual pathways, a lobula giant movement detector, the LGMD2, has been identified as a looming perception neuron that responds most strongly to darker objects approaching against their backgrounds, a situation that many ground vehicles and robots often face. However, little has been done on modelling the LGMD2 and investigating its potential in robotics and vehicles. In this research, we build an LGMD2 visual neural network that possesses collision selectivity similar to that of an LGMD2 neuron in the locust, via the modelling of biased ON and OFF pathways splitting visual signals into parallel ON/OFF channels. With stronger inhibition (the bias) in the ON pathway, the model responds selectively to darker looming objects. The proposed model has been tested systematically with a range of stimuli including real-world scenarios. It has also been implemented in a micro mobile robot and tested in real-time experiments. The experimental results have verified the effectiveness and robustness of the proposed model for detecting darker looming objects against various dynamic and cluttered backgrounds.
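    The biased-ON mechanism can be illustrated by attenuating the ON (brightening) channel relative to the OFF (darkening) channel, so that darker looming stimuli dominate the response. The channel gains below are illustrative assumptions, not the model's parameters.

```python
import numpy as np

def lgmd2_response(prev_frame, curr_frame, w_on=0.2, w_off=1.0):
    """Darker-object-selective response via a biased ON/OFF split.

    Sketch only: the ON channel is strongly attenuated (w_on << w_off),
    mimicking LGMD2's preference for darkening stimuli. Real models
    implement the bias as stronger inhibition inside the ON pathway.
    """
    diff = curr_frame.astype(float) - prev_frame.astype(float)
    on = np.maximum(diff, 0.0)    # brightness increments
    off = np.maximum(-diff, 0.0)  # brightness decrements
    return w_on * on.sum() + w_off * off.sum()
```

Given equal-magnitude stimuli, a darkening (OFF-dominated) object elicits a larger response than a brightening one.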

    Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic

    Collision prevention poses a major research and development challenge for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models, inspired by the locust's LGMD1 and LGMD2 visual pathways, as fast and low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios, where real physical crashes happen, have never been systematically investigated, owing to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we apply a recently published robotic platform to test the LGMD-inspired visual systems in a physical implementation of critical traffic scenarios at low cost and high flexibility. The proposed visual systems serve as the only collision-sensing modality in each micro-mobile robot, which avoids crashes by abrupt braking. The simulated traffic resembles on-road sections including intersection and highway scenes, wherein the roadmaps are rendered by coloured artificial pheromones on a wide LCD screen acting as the ground of an arena. The robots, with light sensors at the bottom, can recognise the lanes and signals and tightly follow paths. The emphasis herein is on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes for timely alerts of potential crashes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. The paper also exhibits a novel, tractable and affordable robotic approach to evaluating on-line visual systems in dynamic scenes.

    Modelling Drosophila motion vision pathways for decoding the direction of translating objects against cluttered moving backgrounds

    Decoding the direction of translating objects in front of cluttered moving backgrounds, accurately and efficiently, is still a challenging problem. In nature, lightweight and low-powered flying insects apply motion vision to detect moving targets in highly variable environments during flight, making them excellent paradigms from which to learn motion perception strategies. This paper investigates the fruit fly Drosophila motion vision pathways and presents computational modelling based on cutting-edge physiological research. The proposed visual system model features bio-plausible ON and OFF pathways, and wide-field horizontal-sensitive (HS) and vertical-sensitive (VS) systems. The main contributions of this research are two-fold: (1) the proposed model articulates the formation of both direction-selective and direction-opponent responses, revealed as principal features of motion perception neural circuits, in a feed-forward manner; (2) it also shows robust direction selectivity to translating objects in front of cluttered moving backgrounds, via the modelling of spatiotemporal dynamics, including a combination of motion pre-filtering mechanisms and ensembles of local correlators inside both the ON and OFF pathways, which works effectively to suppress irrelevant background motion or distractors and to improve the dynamic response. Accordingly, the direction of translating objects is decoded as the global responses of both the HS and VS systems, with positive or negative output indicating preferred-direction or null-direction translation. The experiments have verified the effectiveness of the proposed neural system model and demonstrated its responsive preference to faster-moving, higher-contrast and larger targets embedded in cluttered moving backgrounds.
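    The local correlators inside the ON and OFF pathways are classically modelled as Hassenstein-Reichardt elementary motion detectors, whose mirror-symmetric subtraction yields the signed, direction-opponent output described above. The sketch below uses a one-frame delay as the temporal filter, a simplifying assumption in place of the model's low-pass filtering.

```python
import numpy as np

def reichardt_correlator(signal_t0, signal_t1):
    """1-D array of Hassenstein-Reichardt elementary motion detectors.

    Each detector correlates one photoreceptor's delayed (previous)
    sample with its neighbour's current sample, in two mirror arms;
    subtracting the arms gives a direction-opponent global output
    (positive = rightward motion, negative = leftward motion).
    """
    s0 = np.asarray(signal_t0, dtype=float)  # delayed (previous) samples
    s1 = np.asarray(signal_t1, dtype=float)  # current samples
    rightward = s0[:-1] * s1[1:]  # left input delayed, matches rightward motion
    leftward = s1[:-1] * s0[1:]   # mirror arm, matches leftward motion
    return float(np.sum(rightward - leftward))
```

A bright spot stepping rightward yields a positive output; the same spot stepping leftward yields a negative one.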