
    Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review

    Motion perception is a critical capability underpinning many aspects of insect life, including predator avoidance and foraging. A number of motion detectors have been identified in insect visual pathways. Computational modelling of these motion detectors has not only provided effective solutions for artificial intelligence, but has also advanced the understanding of complex biological visual systems. Refined over millions of years of evolution, these biological mechanisms offer robust building blocks for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research on insect visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss the integration of multiple systems and the hardware realisation of these bio-inspired motion perception models.
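    To make the direction-selectivity methodology concrete, below is a minimal sketch of the classic Hassenstein-Reichardt correlator, the correlation-type mechanism underlying many of the direction selective neuron (DSN) models that reviews of this kind cover. The function name, filter time constant, and toy stimulus are illustrative assumptions, not parameters taken from the reviewed models.

```python
import numpy as np

def reichardt_correlator(left, right, dt=0.01, tau=0.05):
    """Minimal Hassenstein-Reichardt elementary motion detector (illustrative).

    left, right : luminance signals from two neighbouring photoreceptors,
                  1-D arrays sampled every dt seconds.
    tau         : time constant of the first-order low-pass "delay" filter.
    Returns the opponent output: positive for left-to-right motion,
    negative for right-to-left motion.
    """
    alpha = dt / (tau + dt)                      # low-pass filter coefficient
    d_left = np.zeros_like(left, dtype=float)
    d_right = np.zeros_like(right, dtype=float)
    for t in range(1, len(left)):                # recursive first-order low-pass
        d_left[t] = d_left[t - 1] + alpha * (left[t] - d_left[t - 1])
        d_right[t] = d_right[t - 1] + alpha * (right[t] - d_right[t - 1])
    # Each half-detector multiplies the delayed signal from one point with the
    # undelayed signal from its neighbour; subtracting the two gives opponency.
    return d_left * right - d_right * left


# Toy stimulus: a bright edge sweeping from the left receptor to the right one.
t = np.arange(0.0, 1.0, 0.01)
left = (t > 0.3).astype(float)
right = (t > 0.4).astype(float)                  # same edge, arriving 100 ms later
print(reichardt_correlator(left, right).sum() > 0)   # True: rightward motion
```

    The opponent subtraction of the two mirror-symmetric half-detectors is what produces a signed, direction-selective output; the size selectivity found in LGMD or STMD models requires additional spatial filtering stages not shown here.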

    Large scale retinal modeling for the design of new generation retinal prostheses

    With the help of modern technology, blindness caused by retinal diseases such as age-related macular degeneration or retinitis pigmentosa is now considered reversible. Scientists from fields such as neuroscience, electrical engineering, computer science, and bioscience have been collaborating to design and develop retinal prostheses, with the aim of replacing malfunctioning parts of the retina and restoring vision in the blind. Human trials of retinal prostheses have yielded encouraging results, showing the potential of this approach for vision recovery. However, retinal prostheses still face limitations in both their hardware and the biological functions they can reproduce, and various attempts have been made to overcome these limitations. This thesis focuses on the biological aspects of retinal prostheses: it analyses the processes occurring inside the retina and the corresponding limitations of current prostheses. Based on these analyses, three major findings regarding information processing inside the retina are presented and used to conceptualise retinal prostheses characterised by asymmetrical and separate-pathway stimulation. In the future, as nanotechnology matures and is fully integrated into the prosthesis, this concept could be used to restore useful visual information such as colour, depth, and contrast, achieving high-quality vision in the blind.

    Brain-Inspired Computing

    This open access book constitutes revised selected papers from the 4th International Workshop on Brain-Inspired Computing, BrainComp 2019, held in Cetraro, Italy, in July 2019. The 11 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They deal with research on brain atlasing, multi-scale models and simulation, HPC and data infrastructures for neuroscience, as well as artificial and natural neural architectures.

    An Insect-Inspired Target Tracking Mechanism for Autonomous Vehicles

    Target tracking is a complicated task from an engineering perspective, especially where targets are small and seen against complex natural environments. Due to the high demand for robust target tracking algorithms, a great deal of research has focused on this area. However, most engineering solutions developed for this purpose are unreliable in real-world conditions or too computationally expensive for real-time applications. While engineering methods try to solve target detection and tracking with high-resolution input images, fast processors, and typically computationally expensive methods, a quick glance at nature shows that practical real-world solutions for target tracking exist. Many animals track targets for predation, territorial or mating purposes, and with millions of years of evolution behind them, it seems reasonable to assume that their solutions are highly efficient. For instance, despite their low-resolution compound eyes and tiny brains, many flying insects have evolved superb abilities to track targets in visual clutter, even in the presence of other distracting stimuli such as swarms of prey and conspecifics. The accessibility of the dragonfly for stable electrophysiological recordings makes this insect an ideal and tractable model system for investigating the neuronal correlates of complex tasks such as target pursuit. Studies on dragonflies identified and characterised a set of neurons likely to mediate target detection and pursuit, referred to as ‘small target motion detector’ (STMD) neurons. These neurons are selective for tiny targets, are velocity-tuned and contrast-sensitive, and respond robustly to targets even against background motion. They exhibit several higher-order properties that may contribute to the dragonfly’s ability to pursue prey with a success rate of over 97%. These include the recent electrophysiological observations of response ‘facilitation’ (a slow build-up of response to targets that move on long, continuous trajectories) and ‘selective attention’, a competitive mechanism that selects one target from alternatives. In this thesis, I adopted a bio-inspired approach to develop a solution for the problem of target tracking and pursuit. Directly inspired by recent physiological breakthroughs in understanding the insect brain, I developed a closed-loop target tracking system that uses an active saccadic gaze fixation strategy inspired by insect pursuit. First, I tested this model in virtual-world simulations using MATLAB/Simulink. The results of these simulations show robust performance of this insect-inspired model, achieving high prey-capture success even with complex background clutter, low contrast and high relative speed of the pursued prey. Additionally, these results show that including facilitation not only substantially improves success even for short-duration pursuits, but also enhances the ability to ‘attend’ to one target in the presence of distracters. This insect-inspired system has a relatively simple image processing strategy compared to state-of-the-art trackers developed recently for computer vision applications. Traditional machine vision approaches incorporate elaborations to handle challenges and non-idealities of natural environments, such as local flicker and illumination changes, and non-smooth and non-linear target trajectories.
Therefore, the question arises as to whether this insect-inspired tracker can match their performance when given similar challenges. I investigated this question by testing both the efficacy and efficiency of this insect-inspired model in open loop, using a widely used set of videos recorded under natural conditions. I directly compared the performance of this model with several state-of-the-art engineering algorithms using the same hardware, software environment and stimuli. The insect-inspired model exhibits robust performance in tracking small moving targets even in very challenging natural scenarios, outperforming the best of the engineered approaches. Furthermore, it operates more efficiently than the other approaches, in some cases dramatically so. The computer vision literature traditionally tests target tracking algorithms only in open loop. However, one of the main purposes for developing these algorithms is implementation in real-time robotic applications. It is therefore still unclear how these algorithms might perform in closed-loop, real-world applications, where the inclusion of sensors and actuators on a physical robot adds latency that can affect the stability of the feedback process. Additionally, studies show that animals interact with the target by changing eye or body movements, which then modulate the visual inputs underlying the detection and selection task (via closed-loop feedback). This active vision system may be key to how the simple insect brain exploits visual information for complex tasks such as target tracking. Therefore, I implemented this insect-inspired model, along with insect active vision, on a robotic platform. I tested this robotic implementation in both indoor and outdoor environments against challenges that exist in real-world conditions, such as vibration, illumination variation, and distracting stimuli. The experimental results show that the robotic implementation is capable of handling these challenges and robustly pursuing a target even in highly challenging scenarios. Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 201
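    As a concrete illustration of the facilitation mechanism described above, the sketch below layers a leaky, trajectory-following gain onto a generic STMD-style response map. The Gaussian ‘spotlight’, the decay and gain constants, and the toy frames are assumptions made for illustration; they are not the implementation developed in the thesis.

```python
import numpy as np

def facilitate(stmd_frame, trace, predicted_xy, sigma=5.0, decay=0.8, gain=2.0):
    """Illustrative facilitation stage on top of an STMD-style response map.

    stmd_frame   : 2-D array of small-target responses for the current frame.
    trace        : facilitation trace carried over from the previous frame.
    predicted_xy : (row, col) where the tracked target is expected next.
    Responses near the predicted location are boosted, so a target moving on a
    long, continuous trajectory builds up an advantage over off-track responses.
    """
    rows, cols = np.indices(stmd_frame.shape)
    r0, c0 = predicted_xy
    # Gaussian "spotlight" centred on the predicted target position.
    spotlight = np.exp(-((rows - r0) ** 2 + (cols - c0) ** 2) / (2.0 * sigma ** 2))
    trace = decay * trace + spotlight            # leaky accumulation along the path
    facilitated = stmd_frame * (1.0 + gain * trace)
    return facilitated, trace


# Toy sequence: a weak target that keeps appearing on its predicted trajectory
# near (20, 20) versus a distracter of equal raw strength at (5, 40), far from
# the predicted position.
frame = np.zeros((64, 64))
trace = np.zeros_like(frame)
for _ in range(10):
    frame[:] = 0.0
    frame[20, 20] = 1.0                          # target on the attended trajectory
    frame[5, 40] = 1.0                           # equally strong off-track distracter
    out, trace = facilitate(frame, trace, predicted_xy=(20, 20))
print(out[20, 20] > out[5, 40])                  # True: the tracked target wins
```

    Because the facilitation trace accumulates only near the predicted position, a weak target on a long, continuous trajectory ends up out-competing an equally strong response that is not on the attended trajectory, mirroring the behaviour reported for the biological neurons.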

    Learning to Behave: Internalising Knowledge


    Computational Models of Perceptual Organization and Bottom-up Attention in Visual and Audio-Visual Environments

    Figure Ground Organization (FGO) - inferring the spatial depth ordering of objects in a visual scene - involves determining which side of an occlusion boundary (OB) is figure (closer to the observer) and which is ground (further away from the observer). Attention, the process that governs how only part of the sensory information is selected for further analysis based on behavioral relevance, can be exogenous, driven by stimulus properties such as an abrupt sound or a bright flash, the processing of which is purely bottom-up; or endogenous (goal-driven or voluntary), where top-down factors such as familiarity and aesthetic quality determine attentional selection. The two main objectives of this thesis are to develop computational models of (i) FGO in visual environments and (ii) bottom-up attention in audio-visual environments. In the visual domain, we first identify Spectral Anisotropy (SA), characterized by an anisotropic distribution of oriented high-frequency spectral power on the figure side and a lack of it on the ground side, as a novel FGO cue that can determine Figure/Ground (FG) relations at an OB with an accuracy exceeding 60%. Next, we show that a non-linear Support Vector Machine classifier trained on the SA features achieves an accuracy close to 70% in determining FG relations, the highest for a stand-alone local cue. We then show that SA can be computed in a biologically plausible manner by pooling Complex cell responses at different scales in a specific orientation, which also achieves an accuracy greater than or equal to 60% in determining FG relations. Next, we present a biologically motivated, feed-forward model of FGO incorporating convexity, surroundedness and parallelism as global cues, and SA and T-junctions as local cues, where SA is computed in a biologically plausible manner. Each local cue, when added alone, gives a statistically significant improvement in the model's performance. The model with both local cues achieves higher accuracy than the models with individual cues in determining FG relations, indicating that SA and T-junctions are not mutually contradictory. Compared to the model with no local cues, the model with both local cues achieves at least an 8.78% improvement in determining FG relations at every border location of images in the BSDS dataset. In the audio-visual domain, we first build a simple computational model to explain how visual search can be aided by providing concurrent, co-spatial auditory cues. Our model shows that adding a co-spatial, concurrent auditory cue can enhance the saliency of a weakly visible target among prominent visual distractors, the behavioral effect of which could be a faster reaction time and/or better search accuracy. Lastly, a bottom-up, feed-forward, proto-object based audio-visual saliency map (AVSM) for the analysis of dynamic natural scenes is presented. We demonstrate that the performance of the proto-object based AVSM in detecting and localizing salient objects/events is in agreement with human judgment. In addition, we show that the AVSM, computed as a linear combination of visual and auditory feature conspicuity maps, captures a higher number of valid salient events compared to unisensory saliency maps.
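    As an illustration of the final combination step, the sketch below forms an audio-visual saliency map as a weighted sum of normalised visual and auditory conspicuity maps, mirroring the linear combination described above. The min-max normalisation and the equal weights are assumptions of this sketch, not the parameters used in the thesis.

```python
import numpy as np

def audiovisual_saliency(visual_conspicuity, auditory_conspicuity, w_v=0.5, w_a=0.5):
    """Illustrative audio-visual saliency map (AVSM): a weighted, linear
    combination of normalised visual and auditory conspicuity maps."""
    def normalise(m):
        m = m - m.min()
        peak = m.max()
        return m / peak if peak > 0 else m
    return w_v * normalise(visual_conspicuity) + w_a * normalise(auditory_conspicuity)


# Toy scene: a weakly visible target in visual clutter becomes the most salient
# location once a concurrent, co-spatial auditory cue is added.
rng = np.random.default_rng(0)
vis = rng.random((48, 64)) * 0.5                 # cluttered visual conspicuity map
vis[10, 12] = 0.6                                # weak visual target
aud = np.zeros((48, 64))
aud[10, 12] = 1.0                                # co-spatial auditory cue
avsm = audiovisual_saliency(vis, aud)
print(np.unravel_index(avsm.argmax(), avsm.shape))   # (10, 12): the cued target
```

    With the auditory cue removed, the weak target is no longer guaranteed to be the global maximum of the map, which is the behavioural effect (faster, more accurate search with a co-spatial sound) the model is meant to capture.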

    Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations

    The study of low-dimensional, noisy manifolds embedded in a higher-dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of the manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold evolves through time, joint spatio-temporal modelling is needed to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a supernova.
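    A minimal sketch of the first-order Markovian idea follows, under the assumption that the spatial probabilistic model at each snapshot is a Gaussian mixture (the abstract does not specify the model family): the fit at one output time initialises the fit at the next, so the model is propagated between adjacent temporal stages rather than re-estimated from scratch. The component count, the use of scikit-learn's GaussianMixture, and the toy expanding-shell data are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def track_manifold(snapshots, n_components=8, seed=0):
    """Illustrative first-order Markovian propagation of a spatial model:
    a Gaussian mixture fitted at one snapshot initialises the fit at the next.

    snapshots : list of (n_particles, n_dims) arrays, one per output time.
    Returns one fitted mixture (weights, means, covariances) per snapshot.
    """
    gmm = GaussianMixture(n_components=n_components, warm_start=True,
                          random_state=seed)
    models = []
    for positions in snapshots:
        gmm.fit(positions)                       # warm start: previous fit is reused
        models.append({"weights": gmm.weights_.copy(),
                       "means": gmm.means_.copy(),
                       "covariances": gmm.covariances_.copy()})
    return models


# Toy data: a spherical shell of particles expanding over three snapshots,
# standing in for the wall of a supernova-driven cavity.
rng = np.random.default_rng(0)
def shell(radius, n=2000):
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return radius * v + 0.05 * rng.normal(size=(n, 3))

models = track_manifold([shell(r) for r in (1.0, 1.5, 2.0)])
# Mean distance of the component means from the origin grows with the shell.
print([float(np.linalg.norm(m["means"], axis=1).mean()) for m in models])
```

    Warm-starting each fit from the previous stage is what makes the propagation first-order Markovian: the model at one output time depends only on the data at that time and on the model at the preceding time.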