
    A rotational motion perception neural network based on asymmetric spatiotemporal visual information processing

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. While there are computational models of translation and expansion/contraction perception, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-clockwise (ccw), is to be perceived. Systematic experiments under various conditions and settings have been carried out, validating the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
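    A minimal sketch of that cyclic postsynaptic readout, assuming the N direction selective outputs are stored in angular order; the neighbourhood width, delay and toy stimulus below are illustrative choices, not the paper's exact formulation.

        import numpy as np

        def rotation_response(ds_excitation, delay=1, spread=2):
            """ds_excitation: (T, N) responses of N direction selective
            neurons arranged in cyclic (angular) order over T frames."""
            T, N = ds_excitation.shape
            cw, ccw = np.zeros(T), np.zeros(T)
            for t in range(delay, T):
                for i in range(N):
                    delayed = ds_excitation[t - delay, i]
                    # pooled current excitation of unilateral neighbours,
                    # indices wrapping around the cyclic arrangement
                    cw_side = sum(ds_excitation[t, (i + k) % N] for k in range(1, spread + 1))
                    ccw_side = sum(ds_excitation[t, (i - k) % N] for k in range(1, spread + 1))
                    cw[t] += delayed * cw_side    # delayed x gathered excitation
                    ccw[t] += delayed * ccw_side
            return cw, ccw

        # toy stimulus: a bump of excitation sweeping through the columns
        # in the direction we label clockwise (an index convention we assume)
        T, N = 20, 8
        resp = np.zeros((T, N))
        for t in range(T):
            resp[t, t % N] = 1.0
        cw, ccw = rotation_response(resp)
        print("cw energy:", cw.sum(), "ccw energy:", ccw.sum())  # cw dominates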

    Improved Collision Perception Neuronal System Model with Adaptive Inhibition Mechanism and Evolutionary Learning

    Accurate and timely perception of collision in highly variable environments is still a challenging problem for artificial visual systems. As a source of inspiration, the lobula giant movement detectors (LGMDs) in the locust's visual pathways have been studied intensively and modelled as quick collision detectors against challenges from various scenarios including vehicles and robots. However, the state-of-the-art LGMD models have not achieved acceptable robustness for more challenging scenarios, such as the various vehicle driving scenes, due to the lack of adaptive signal processing mechanisms. To address this problem, we propose an improved neuronal system model, called LGMD+, featuring novel modelling of spatiotemporal inhibition dynamics with biological plausibility, including 1) lateral inhibitions with global biases defined by a variant of the Gaussian distribution, spatially, and 2) an adaptive feedforward inhibition mediation pathway, temporally. Accordingly, the LGMD+ performs more effectively in detecting only those approaching objects that pose head-on collision risks, by appropriately suppressing motion distractors caused by vibrations, near misses or approaching stimuli deviating from the centre of view. Through evolutionary learning with a systematic dataset of various crash and non-collision driving scenarios, the LGMD+ shows improved robustness, outperforming previous related methods. After evolution, its computational simplicity, flexibility and robustness have also been well demonstrated by real-time experiments on autonomous micro mobile robots.
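    The spatially biased inhibition can be pictured with a small sketch; the kernel size, sigma and bias below are assumed values, and the simple subtract-and-rectify step stands in for the paper's full spatiotemporal dynamics.

        import numpy as np

        def gaussian_inhibition_kernel(size=5, sigma=1.5, bias=0.05):
            """A Gaussian lateral-inhibition kernel with a global bias,
            loosely following the 'variant of Gaussian distribution'
            wording in the abstract (the exact form is in the paper)."""
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            g[size // 2, size // 2] = 0.0   # no self-inhibition
            return bias + g / g.sum()       # globally biased, normalised

        def inhibit(excitation, kernel):
            """Each cell is suppressed by the kernel-weighted excitation
            of its neighbourhood, then rectified."""
            H, W = excitation.shape
            k = kernel.shape[0] // 2
            padded = np.pad(excitation, k)
            out = np.empty_like(excitation)
            for i in range(H):
                for j in range(W):
                    patch = padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
                    out[i, j] = max(excitation[i, j] - (patch * kernel).sum(), 0.0)
            return out

        frame_diff = np.abs(np.random.rand(32, 32) - np.random.rand(32, 32))
        print(inhibit(frame_diff, gaussian_inhibition_kernel()).mean())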

    Coping With Multiple Visual Motion Cues Under Extremely Constrained Computation Power of Micro Autonomous Robots

    The perception of different visual motion cues is crucial for autonomous mobile robots to react to or interact with the dynamic visual world. It is still a great challenge for a micro mobile robot to cope with dynamic environments due to its restricted computational resources and the limited functionality of its visual systems. In this study, we propose a compound visual neural system to automatically extract and fuse different visual motion cues in real time using the extremely constrained computation power of micro mobile robots. The proposed visual system contains multiple bio-inspired visual motion perceptive neurons, each with a unique role, for example extracting collision cues, darker-object collision cues and directional motion cues. In the embedded system, these multiple visual neurons share a similar presynaptic network to minimise the consumption of computational resources. In the postsynaptic part of the system, visual cues pass results to corresponding action neurons using a lateral inhibition mechanism. The translational motion cues, which are identified by comparing pairs of directional cues, are given the highest priority, followed by the darker colliding cues and the approaching cues. Systematic experiments with both virtual visual stimuli and real-world scenarios have been carried out to validate the system's functionality and reliability. The proposed methods have demonstrated that (1) with extremely limited computation power, it is still possible for a micro mobile robot to extract multiple visual motion cues robustly in a complex dynamic environment; and (2) the cues extracted can be fused with a laterally inhibited postsynaptic network, thus enabling the micro robot to respond effectively with different actions, according to different states, in real time. The proposed embedded visual system has been modularised and can be easily implemented on other autonomous mobile platforms for real-time applications. The system could also be used by neurophysiologists to test new hypotheses pertaining to biological visual neural systems.
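    The priority scheme among the fused cues can be caricatured as a winner-take-all arbitration; the cue names, firing threshold and strict ordering below are illustrative assumptions rather than the paper's network equations.

        def select_action(cues, threshold=0.5):
            """Toy arbitration: higher-priority cues laterally inhibit
            (here: pre-empt) lower-priority ones. Threshold is assumed."""
            for cue in ("translation", "darker_collision", "approach"):
                if cues.get(cue, 0.0) > threshold:
                    return cue              # winning cue suppresses the rest
            return "default_forward"

        # a detected translation outranks an even stronger approach cue
        print(select_action({"translation": 0.8, "approach": 0.9}))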

    Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review

    Motion perception is a critical capability determining a variety of aspects of insects' lives, including avoiding predators, foraging and so forth. A good number of motion detectors have been identified in insects' visual pathways. Computational modelling of these motion detectors has not only been providing effective solutions for artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms, shaped through millions of years of evolutionary development, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research on insects' visual systems. These motion perception models or neural networks comprise the looming sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-system integration and the hardware realisation of these bio-inspired motion perception models.
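    As background for the translation sensitive DSN models the review covers, the classic correlation-type elementary motion detector can be written in a few lines; the one-sample delay and the sinusoidal test stimulus here are illustrative.

        import numpy as np

        def hassenstein_reichardt(left, right, delay=1):
            """Correlation-type EMD: two neighbouring photoreceptor signals,
            each correlated with a delayed copy of the other; a positive
            output indicates left-to-right motion."""
            out = np.zeros(len(left))
            for t in range(delay, len(left)):
                out[t] = left[t - delay] * right[t] - right[t - delay] * left[t]
            return out

        t = np.arange(100)
        stim = np.sin(2 * np.pi * t / 20)
        # the right receptor sees the same waveform two samples later,
        # i.e. rightward motion, so the mean response is positive
        print(hassenstein_reichardt(stim, np.roll(stim, 2)).mean() > 0)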

    A Looming Spatial Localization Neural Network Inspired by MLG1 Neurons in the Crab Neohelice

    Similar to most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, to track prey and to select mates. It therefore needs specialized neurons to process visual information and determine the spatial location of looming objects. In the crab Neohelice granulata, the Monostratified Lobula Giant type 1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities for encoding spatial location information. The MLG1 neuronal ensemble can not only perceive the location of a looming stimulus, but is also thought to be able to influence the direction of movement continuously, for example when escaping from a threatening looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to normal looming detection neurons in invertebrates, which cannot localize looming stimuli spatially. Modeling the MLG1 ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision avoidance systems for robots and vehicles. However, little computational modeling has been done to implement looming spatial localization analogous to the specific functionality of the MLG1 ensemble. To bridge this gap, we propose a model of the MLG1s and their presynaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field, inspired by the natural arrangement of the 16 MLG1s' receptive fields, to encode and convey spatial information concerning looming objects with dynamically expanding edges in different locations of the visual field. The responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. Systematic experiments demonstrate that our proposed MLG1 model works effectively and robustly to perceive and localize looming information, making it a promising candidate for intelligent machines interacting with dynamic environments free of collision. This study also sheds light upon a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.
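    A minimal sketch of that 16-sector readout, assuming looming edge activity is simply binned by angle around the view centre; the pooling and winner-take-all localization below simplify the model's actual processing.

        import numpy as np

        def localise_looming(edge_map, n_sectors=16):
            """Pool expanding-edge activity into 16 angular sectors (one per
            modelled MLG1 receptive field); the most active sector gives the
            angular location of the looming object."""
            H, W = edge_map.shape
            ys, xs = np.mgrid[0:H, 0:W]
            angles = np.arctan2(ys - (H - 1) / 2, xs - (W - 1) / 2) % (2 * np.pi)
            sector = (angles / (2 * np.pi) * n_sectors).astype(int) % n_sectors
            activity = np.bincount(sector.ravel(), weights=edge_map.ravel(),
                                   minlength=n_sectors)
            return activity, int(activity.argmax())

        edges = np.zeros((64, 64))
        edges[10:20, 40:50] = 1.0           # edge activity in one region
        activity, winner = localise_looming(edges)
        print("strongest sector:", winner)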

    A bio-inspired computational model for motion detection

    Doctoral Thesis (Doctoral Programme in Biomedical Engineering). Recent years have witnessed considerable interest in research dedicated to showing that solutions to challenges in autonomous robot navigation can be found by taking inspiration from biology. Despite their small size and relatively simple nervous systems, insects have evolved vision systems able to perform the computations required for safe navigation in dynamic and unstructured environments, using simple, elegant and computationally efficient strategies. Thus, invertebrate neuroscience provides engineers with many neural circuit diagrams that can potentially be used to solve complicated engineering control problems. One major and yet unsolved problem encountered by visually guided robotic platforms is collision avoidance in complex, dynamic environments under inconstant lighting. In this dissertation, the main aim is to draw inspiration from recent and future findings on insects' collision avoidance in dynamic environments, and on the visual light-adaptation strategies applied by diurnal insects, to develop a computationally efficient model for robotic control able to work even in adverse light conditions. We first present a comparative analysis of three leading collision avoidance models based on a neural pathway responsible for signalling collisions, the Lobula Giant Movement Detector/Descending Contralateral Movement Detector (LGMD/DCMD), found in the locust visual system. The models are described and simulated, and the results are compared with biological data from the literature. Due to the lack of information on how this collision-detecting neuron deals with dynamic environments, new visual stimuli were developed. Locusts (Locusta migratoria) were stimulated with computer-generated discs that travelled along a combination of non-colliding and colliding trajectories, placed over a static and two distinct moving backgrounds, while the DCMD activity was simultaneously recorded extracellularly. Based on these results, an innovative model was developed. This model was tested in specially designed computer simulations, replicating the same visual conditions used for the biological recordings. The proposed model is shown to be sufficient to give rise to the experimentally observed neural insect responses. Using a different approach, and based on recent findings, we present a direct approach to estimate potential collisions through a sequential computation of the images' power spectra. This approach has been implemented on a real robotic platform, showing that distance-dependent variations in image statistics are likely to be functionally significant. Maintaining collision detection performance at lower light levels is not a trivial task. Nevertheless, some insect visual systems have developed several strategies to optimise visual performance over a wide range of light intensities. In this dissertation we address the neural adaptation mechanisms responsible for improving light capture in a day-active insect, the bumblebee Bombus terrestris. Behavioural analyses enabled us to investigate and infer the extent of the spatial and temporal neural summation applied by these insects to improve image reliability at different light levels.
    As future work, the collision avoidance model may be coupled with a bio-inspired light adaptation mechanism and used for robotic autonomous navigation.
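    The power-spectrum route to collision estimation mentioned above can be caricatured as follows; monitoring total spectral power across frames, and the rise threshold, are our assumptions about one simple way to track distance-dependent image statistics.

        import numpy as np

        def spectral_power(frame):
            """Mean power of the 2-D Fourier spectrum of one frame."""
            return (np.abs(np.fft.fft2(frame)) ** 2).mean()

        def collision_warning(frames, rise=1.2):
            """Flag a potential collision when spectral power grows by the
            assumed factor `rise` over the frame sequence."""
            energies = [spectral_power(f) for f in frames]
            return energies[-1] > rise * energies[0], energies

        # toy sequence whose contrast energy grows as an object approaches
        frames = [np.random.rand(64, 64) * s for s in (1.0, 1.1, 1.3)]
        warning, trace = collision_warning(frames)
        print(warning)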

    Bio-inspired Neural Networks for Angular Velocity Estimation in Visually Guided Flights

    Executing delicate flight maneuvers using visual information is a huge challenge for future robotic vision systems. As a source of inspiration, insects are quite adept at navigating in woods and landing on surfaces, which requires delicate visual perception and flight control. The exquisite sensitivity of insects to image motion speed, as revealed recently, comes from a class of specific neurons called descending neurons. Some of the descending neurons have demonstrated angular velocity selectivity as the image motion speed varies on the retina. Building a quantitative angular velocity detection model is the first step not only toward further understanding of the biological visual system, but also toward providing robust and economical solutions of visual motion perception for artificial visual systems. This thesis aims to explore biological image processing methods for motion speed detection in visually guided flights. The major contributions are summarized as follows. We have presented an angular velocity decoding model (AVDM), which estimates the visual motion speed by combining both textural and temporal information from input signals. The model consists of three parts: elementary motion detection circuits, a wide-field texture estimation pathway and an angular velocity decoding layer. The model estimates the angular velocity very well, with improved spatial frequency independence compared to the state-of-the-art angular velocity detecting models, when first tested with moving sinusoidal gratings. This spatial independence is vital to accounting for the honeybee's flight behaviors. We have also investigated the spatial and temporal resolutions of honeybees to obtain a bio-plausible parameter setting for explaining these behaviors. To investigate whether the model can account for observations of tunnel centering behaviors of honeybees, the model has been implemented in a virtual bee simulated with the game engine Unity. The simulation results of a series of experiments show that the agent can adjust its position to fly through patterned tunnels by balancing the angular velocities estimated by both eyes under several circumstances. All tunnel simulations reproduce behaviors similar to those of real bees, indicating that our model provides a possible explanation of how the image velocity is estimated and can be used for regulating a micro aerial vehicle's (MAV's) flight course in tunnels. Moreover, to further verify the robustness of the model, visually guided terrain following simulations have been carried out with a closed-loop control scheme that restores a preset angular velocity during flight. The simulation results of successfully flying over undulating terrain verify the feasibility and robustness of the AVDM in various application scenarios, showing its potential for MAV terrain following. In addition, we have also applied the AVDM to grazing landing using only visual information. An LGMD neuron is also introduced to avoid collision and to trigger the hover phase, which ensures a safe landing. Applying the honeybee's landing strategy of keeping a constant angular velocity, we have designed a closed-loop control scheme with an adaptive gain to control the landing dynamics using the AVDM response as input. A series of controlled trials has been designed in the Unity platform to demonstrate the effectiveness of the proposed model and control scheme for visual landing under various conditions.
    The proposed model could be implemented in real small robots to investigate its robustness in real landing scenarios in the near future.
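    The tunnel-centering behaviour reduces to balancing the two eyes' angular velocity estimates; a toy proportional version of such a control rule (gain and sign convention assumed, not taken from the thesis) looks like this.

        def centering_command(av_left, av_right, gain=0.5):
            """Steer away from the side reporting the larger angular
            velocity, i.e. the nearer tunnel wall; positive = steer left."""
            return gain * (av_right - av_left)

        # a nearer right wall yields faster image motion on the right eye,
        # so the command is positive: steer left, away from that wall
        print(centering_command(av_left=2.0, av_right=3.5))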

    Biomimetic vision-based collision avoidance system for MAVs.

    This thesis proposes a secondary collision avoidance algorithm for micro aerial vehicles based on the luminance-difference processing exhibited by the Lobula Giant Movement Detector (LGMD), a wide-field visual neuron located in the lobula layer of a locust's nervous system. In particular, we address the design, modulation, hardware implementation, and testing of a computationally simple yet robust collision avoidance algorithm based on the novel concept of quadfurcated luminance-difference processing (QLDP). The micro and nano classes of unmanned robots are the primary target applications of this algorithm; however, it could also be implemented on advanced robots as a fail-safe redundant system. The algorithm proposed in this thesis addresses some of the major detection challenges, such as obstacle proximity, collision threat potentiality, and contrast correction within the robot's field of view, to generate a precise yet simple collision-free motor control command in real time. Additionally, it has proven effective in detecting edges independently of background or obstacle colour, size, and contour. To achieve this, the proposed QLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level (spike), which determines whether the robot's field of view must be dissected into four quadrants, where each quadrant's response is analysed and interpreted against the others to determine the most secure path. Ultimately, the computational load and the performance of the model are assessed against an eclectic set of off-line as well as real-time real-world collision scenarios to validate the proposed model's capability to avoid obstacles at more than 670 mm prior to collision (real world) while moving at 1.2 m s⁻¹, with a successful avoidance rate of 90% and processing at a frequency of 120 Hz, which, to the best of our knowledge, is much superior to the results reported in the contemporary related literature. MSc by Research.
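    One plausible reading of the quadrant step, sketched under assumed thresholds (the thesis' actual QLDP stages, spike computation and escape mapping are more elaborate):

        import numpy as np

        def safest_quadrant(frame_a, frame_b, spike_threshold=0.1):
            """If the global luminance-difference 'spike' crosses the
            (assumed) threshold, split the field of view into four
            quadrants and return the least active one as the most
            secure path; otherwise report no threat."""
            diff = np.abs(frame_b.astype(float) - frame_a.astype(float))
            if diff.mean() / 255.0 < spike_threshold:
                return None                 # no imminent collision threat
            H, W = diff.shape
            quads = {"up-left": diff[:H // 2, :W // 2].mean(),
                     "up-right": diff[:H // 2, W // 2:].mean(),
                     "down-left": diff[H // 2:, :W // 2].mean(),
                     "down-right": diff[H // 2:, W // 2:].mean()}
            return min(quads, key=quads.get)

        a = np.random.randint(0, 255, (120, 160))
        b = a.copy(); b[:60, 80:] = 255     # strong change in the upper right
        print(safest_quadrant(a, b))        # picks a quadrant away from it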