2 research outputs found

    Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology

    The extraction of accurate self-motion information from the visual world is a difficult problem that biological organisms solve very efficiently using non-linear processing. Previous bio-inspired motion detection models based on a correlation mechanism have been dogged by their sensitivity to undesired image properties, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear, dynamic, adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic-range conditions, we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore, the performance of the entire system exceeds the sum of the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. This algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors.
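    The correlation mechanism the abstract refers to is the classic Hassenstein-Reichardt elementary motion detector. A minimal sketch of it, and of the contrast sensitivity the adaptive stages are meant to suppress, might look like the following (the function name and the shift-based delay are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def hassenstein_reichardt(signal_a, signal_b, delay=1):
    """Correlation-based elementary motion detector (illustrative sketch).

    Each arm multiplies the delayed signal from one photoreceptor with
    the undelayed signal from its neighbour; subtracting the mirror arm
    yields a direction-selective output. The low-pass "delay" filter is
    approximated here by a simple sample shift.
    """
    delayed_a = np.roll(signal_a, delay)
    delayed_b = np.roll(signal_b, delay)
    return delayed_a * signal_b - delayed_b * signal_a

# A grating drifting from receptor A towards receptor B:
t = np.linspace(0, 4 * np.pi, 200)
a = np.sin(t)
b = np.sin(t - 0.5)                  # B sees the same pattern slightly later
out = hassenstein_reichardt(a, b)

# The raw correlator illustrates the contrast problem the paper targets:
# doubling the input contrast quadruples the mean response, since the
# output is a product of two input signals.
out_2x = hassenstein_reichardt(2 * a, 2 * b)
```

    This quadratic dependence on contrast is exactly the kind of undesired sensitivity that the model's adaptive non-linear stages are designed to remove.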

    Towards a Dynamic Vision System - Computational Modelling of Insect Motion Sensitive Neural Systems

    For motion perception, vision plays an irreplaceable role: compared to other sensing modalities, it can extract richer and more useful movement features from an unpredictable dynamic environment. Building a dynamic vision system for motion perception that is both reliable and efficient nevertheless remains an open challenge. Millions of years of evolution have produced animals with robust vision systems capable of motion perception across many aspects of life. Insects, in particular, have far fewer visual neurons than vertebrates and humans, yet still navigate adeptly through visually cluttered and dynamic environments. Understanding the insects' visual processing pathways is therefore not only attractive to neural system modellers but also critical in providing effective solutions for future intelligent machines. Grounded in biological research on insect visual systems, this thesis investigates computational modelling of motion sensitive neural systems and potential applications to robotics. It proposes novel models of the locust and fly visual systems for sensing looming and translating stimuli. Specifically, the proposed models comprise collision-selective neural networks of two lobula giant movement detectors (LGMD1 and LGMD2) in locusts, translation-sensitive neural networks of direction selective neurons (DSNs) in flies, and hybrid visual neural systems combining the two. All of these models highlight the functionality of the ON and OFF pathways, which separate visual processing into parallel computations. This separation effectively realises the neural characteristics of both the LGMD1 and the LGMD2 in locusts and plays a crucial role in producing the different looming selectivity of the two visual neurons. Such a biologically plausible structure can also implement the fly DSNs for translational movement perception and guide fast motion tracking with a behavioural response of visual fixation. The effectiveness and flexibility of the proposed motion sensitive neural systems have been validated by systematic and comparative experiments ranging from off-line synthetic and real-world tests to on-line bio-robotic tests. The proposed models reproduce the underlying characteristics and functionality of the locust LGMDs and the fly DSNs. All of the proposed visual models have been successfully realised on the embedded system of a vision-based ground mobile robot. The robot tests have verified the computational simplicity and efficiency of the proposed bio-inspired methodologies, which hint at the great potential of neuromorphic sensors in autonomous machines for motion perception in a fast, reliable and low-energy manner.
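    The ON/OFF pathway separation highlighted in the abstract is, at its core, a half-wave rectification of the temporal luminance change into two parallel channels. A minimal sketch of that step (the function name and signal shapes are illustrative assumptions, not the thesis's exact formulation) might be:

```python
import numpy as np

def split_on_off(luminance_change):
    """Split a per-pixel temporal luminance change into parallel
    ON (brightening) and OFF (darkening) channels via half-wave
    rectification, as in separated insect visual pathways."""
    on = np.maximum(luminance_change, 0.0)    # responds to brightening
    off = np.maximum(-luminance_change, 0.0)  # responds to darkening
    return on, off

# Difference between two successive frames at four example pixels:
frame_diff = np.array([0.3, -0.5, 0.0, 0.2])
on, off = split_on_off(frame_diff)
```

    Downstream, each pathway can then be filtered and recombined with its own weights, which is what allows a single front end to serve both the LGMD1/LGMD2 looming selectivities and the DSN translation selectivity.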