10 research outputs found

    A silicon implementation of the fly's optomotor control system

    Flies are capable of stabilizing their body during free flight by using visual motion information to estimate self-rotation. We have built a hardware model of this optomotor control system in a standard CMOS VLSI process. The result is a small, low-power chip that receives input directly from the real world through on-board photoreceptors and generates motor commands in real time. The chip was tested under closed-loop conditions typically used for insect studies. The silicon system exhibited stable control sufficiently analogous to the biological system to allow for quantitative comparisons
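
    The closed-loop behaviour described here can be illustrated with a small toy simulation (a sketch for illustration only, not the chip's circuit): the summed output of a wide-field array of motion detectors is read as an estimate of self-rotation and fed back, with opposite sign, as a steering command.

    def optomotor_step(emd_outputs, gain=0.8):
        """Convert wide-field EMD outputs into a yaw command opposing the rotation."""
        rotation_estimate = sum(emd_outputs) / max(len(emd_outputs), 1)
        return -gain * rotation_estimate   # negative feedback counters self-rotation

    # Toy closed loop: a disturbance kicks the yaw rate and the controller damps
    # it out using only the visually estimated rotation. Gains are illustrative.
    yaw_rate, dt = 0.0, 0.01
    for step in range(500):
        disturbance = 5.0 if step == 100 else 0.0
        emd_outputs = [yaw_rate] * 8       # idealised EMDs reporting self-rotation
        yaw_rate += (disturbance + optomotor_step(emd_outputs)) * dt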

    Fly-inspired VLSI vision sensors

    Engineers have long looked to nature for inspiration. The diversity of life produced by billions of years of evolution provides countless existence proofs of organic machines with abilities that far surpass those of our own relatively crude automata. We have learned how to harness large amounts of energy and thus far exceed the capabilities of biological systems in some ways (e.g., supersonic flight, space travel, and global communications). However, biological information processing systems (i.e., brains) far outperform today's most advanced computers at tasks involving real-time pattern recognition and perception in complex, uncontrolled environments. If we take energy efficiency into account, the performance gap widens. The human brain dissipates 12 W of power, independent of mental activity, while a modern microprocessor dissipates around 50 W yet delivers only a vanishingly small fraction of the brain's functionality.

    Eyes and ears: combining sensory motor systems modelled on insect physiology

    Integrating sensorimotor systems is still a difficult problem in robotics. Biological inspiration, which has been used effectively to address single sensorimotor tasks, could also be applied to this problem. Several studies on the cricket suggest that it integrates an optomotor response with its sound localization behaviour. We have taken two existing 'biorobots' - one that uses an aVLSI circuit to reproduce the optomotor behaviour and another that models in hardware and software the sound localization of the cricket - and combined their capabilities to investigate whether an additive combination will reproduce these effects. We report the initial results and discuss a number of issues raised by this investigation.
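
    As a rough sketch of the additive combination under test (hypothetical function names and gains, not the robots' code), the turn command from the sound-localization module and the stabilising correction from the optomotor module are simply summed:

    def phonotaxis_turn(left_ear, right_ear, gain=1.0):
        """Turn toward the louder ear (positive = turn right)."""
        return gain * (right_ear - left_ear)

    def optomotor_correction(perceived_rotation, gain=0.5):
        """Counteract perceived self-rotation (negative feedback)."""
        return -gain * perceived_rotation

    def combined_turn(left_ear, right_ear, perceived_rotation):
        """Additive combination of the two sensorimotor responses."""
        return phonotaxis_turn(left_ear, right_ear) + optomotor_correction(perceived_rotation)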

    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near range path navigation of a mobile robot using a pair of biologically inspired visual neural networks – lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; the robot can therefore navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
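
    The steering rule summarised above - push the robot away from whichever side produces the stronger LGMD output - might be sketched as follows (a toy illustration with made-up gains and thresholds; the LGMD outputs themselves come from the image-processing network the paper describes):

    def steer_from_lgmds(lgmd_left, lgmd_right, base_speed=0.2, gain=0.5, threshold=0.1):
        """Return (left_wheel, right_wheel) speeds from two LGMD excitation levels."""
        difference = lgmd_right - lgmd_left
        if abs(difference) < threshold:
            return base_speed, base_speed              # nothing looming: go straight
        if difference > 0:
            # stronger signal on the right: slow the left wheel to turn away (leftwards)
            return base_speed - gain * difference, base_speed
        # stronger signal on the left: slow the right wheel to turn away (rightwards)
        return base_speed, base_speed + gain * difference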

    Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated

    Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to the image of an approaching object. These neurons are called the lobula giant movement detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the development of an LGMD model for use as an artificial collision detector in robotic applications. To date, robots have been equipped with only a single, central artificial LGMD sensor, which triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly, for a robot to behave autonomously, it must react differently to stimuli approaching from different directions. In this study, we implement a bilateral pair of LGMD models in Khepera robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD models using methodologies inspired by research on escape direction control in cockroaches. Using ‘randomised winner-take-all’ or ‘steering wheel’ algorithms for LGMD model integration, the Khepera robots could escape an approaching threat in real time and with a distribution of escape directions similar to that of real locusts. We also found that, by optimising these algorithms, we could use them to integrate the left and right DCMD responses of real jumping locusts offline and reproduce the actual escape directions that the locusts took in a particular trial. Our results significantly advance the development of an artificial collision detection and evasion system based on the locust LGMD by giving it reactive control over robot behaviour. The success of this approach may also indicate some important areas to be pursued in future biological research.
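
    The ‘randomised winner-take-all’ integration can be caricatured as follows (illustrative probability and naming, not the parameters fitted to locust data in the paper): the side with the stronger LGMD response usually determines the perceived threat side, but the weaker side wins occasionally, which spreads the escape directions much as in real locusts.

    import random

    def escape_direction(lgmd_left, lgmd_right, p_upset=0.2, rng=random):
        """Randomised winner-take-all over a bilateral pair of LGMD responses."""
        threat_on_left = lgmd_left > lgmd_right    # stronger response marks the threat side
        if rng.random() < p_upset:                 # occasionally let the weaker side win
            threat_on_left = not threat_on_left
        return "escape right" if threat_on_left else "escape left"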

    Redundant neural vision systems: competing for collision recognition roles

    The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which one should play the collision recognition role or how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems – the LGMD, the DSNs and a hybrid system which combines the LGMD and DSNs subsystems – in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotic and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance of other types of neural networks playing the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
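
    The ‘switch gene’ arrangement can be sketched roughly as below (the subsystem functions are trivial placeholders invented for illustration, not the paper's neural networks): one evolved gene picks which of the three redundant subsystems supplies the collision signal for a given agent.

    def run_lgmd(frames):
        """Placeholder LGMD-like cue: total frame-to-frame change (expansion proxy)."""
        return sum(abs(b - a) for a, b in zip(frames[:-1], frames[1:]))

    def run_dsns(frames):
        """Placeholder grouped-DSN cue: magnitude of the net directional change."""
        return abs(sum(b - a for a, b in zip(frames[:-1], frames[1:])))

    def collision_signal(switch_gene, frames):
        """The switch gene selects which redundant subsystem plays the recognition role."""
        if switch_gene == 0:
            return run_lgmd(frames)
        if switch_gene == 1:
            return run_dsns(frames)
        return 0.5 * (run_lgmd(frames) + run_dsns(frames))   # hybrid of the two subsystems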

    Analog VLSI circuits for inertial sensory systems

    Supervised by Rahul Sarpeshkar. Also issued as Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (leaves 67-68). By Maziar Tavakoli Dastjerdi.

    Space-variant motion detection for active visual target tracking

    A biologically inspired approach to active visual target tracking is presented. The approach makes use of three strategies found in biological systems: space-variant sensing, a spatio-temporal frequency-based model of motion detection, and the alignment of sensory-motor maps. Space-variant imaging is used to create a 1-D array of elementary motion detectors (EMDs) that are tuned in such a way as to make it possible to detect motion over a wide range of velocities while still being able to detect motion precisely. The array is incorporated into an active visual tracking system. A method of analysis and design for such a tracking system is proposed. It makes use of a sensory-motor map which consists of a phase-plane plot of the continuous-time dynamics of the tracking system overlaid onto a map of the detection capabilities of the array of EMDs. This sensory-motor map is used to design a simple 1-D tracking system, and several simulations show how the method can be used to control tracking performance using such metrics as overshoot and settling time. A complete 1-D active vision system is implemented and a set of simple target tracking experiments is performed to demonstrate the effectiveness of the approach.
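
    The elementary motion detector at the heart of the array is a correlation-type (Reichardt) detector; a minimal discrete-time sketch is given below (illustrative filter constant and spacing rule, not the paper's exact design), with the space-variant tuning coming from pairing photoreceptors at wider spacings toward the periphery.

    def lowpass(signal, alpha=0.3):
        """First-order low-pass filter used as the EMD delay stage."""
        out, state = [], 0.0
        for x in signal:
            state += alpha * (x - state)
            out.append(state)
        return out

    def emd_response(left, right, alpha=0.3):
        """Opponent Reichardt correlator: positive for motion from left to right."""
        dl, dr = lowpass(left, alpha), lowpass(right, alpha)
        return [dl_t * r_t - dr_t * l_t for dl_t, r_t, dr_t, l_t in zip(dl, right, dr, left)]

    def space_variant_array(photoreceptors):
        """Pair receptor signals with wider spacing toward the periphery, so that
        different detectors are tuned to different image velocities."""
        centre = len(photoreceptors) // 2
        responses = []
        for i in range(len(photoreceptors)):
            spacing = 1 + abs(i - centre) // 4     # coarser sampling away from the centre
            j = i + spacing
            if j < len(photoreceptors):
                responses.append(emd_response(photoreceptors[i], photoreceptors[j]))
        return responses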

    Neuromorphic models for biological photoreceptors

    Biological visual processing is extremely flexible and provides pixel-by-pixel adaptation. Millennia of evolution and natural selection have provided inspiration for robust, efficient and elegant solutions in artificial visual system designs. Physiological studies have shown that non-linear adaptation of biological visual processing is evident even at the first stage of the visual pathway. Theory and modelling have shown that adaptation in early visual processing is required to compress the high-bandwidth visual environment into a sensible form prior to transmission via the limited-bandwidth neuron channels. However, many current bio-inspired visual systems have neglected the importance of having a reliable early stage of visual processing. A robust and reliable early-stage design not only provides a better mimic of the biology, but also allows better design and understanding of higher order neurons in the visual pathway.

    (Chapter 3: A Non-linear Adaptive Artificial Photoreceptor Circuit - Design and Implementation) The primary aim of this work was to design and implement an elaborated artificial photoreceptor circuit which faithfully mimics actual biological photoreceptors, using standard analogue discrete electronic components. I incorporated several key features of biological photoreceptors in the implementation, such as non-linear adaptation to background luminance, an adaptive frequency response and logarithmic encoding of luminance. Initial parameters for the key features of the model were based on existing literature, and fine tuning of the circuit was done after analysis of actual recordings from biological photoreceptors.

    (Chapter 2: Dimmable Voltage-Controlled High Current LED Driver System for Vision Science Experiments) The visual stimulus was a critical component of the vision experiments, and has historically been a limiting factor in experiments which ask critical questions about responses to complicated scenes, such as natural environments. The ability to reproduce the large dynamic range of real-world luminance was important to correctly test the performance of the model. I evaluated several existing light emitting diode (LED) drivers and commercial products and found that none of them provided adequate dynamic range and freedom from noise. I therefore designed and implemented a stable multi-channel, high-current LED driver, built from inexpensive analogue discrete electronic components, that allowed creation of the light stimuli used for the experiments described in this thesis. This LED driver, properly calibrated to real-world luminance, was used in conjunction with a standard commercial data acquisition card.

    (An Elaborated Electronic Prototype of a Biological Photoreceptor - Steady-state Analysis (Chapter 4) & Dynamic Analysis (Chapter 5)) I performed electrophysiological experiments measuring the responses of intact hoverfly photoreceptor cells (R1-6) using both characterised and dynamic (naturalistic) stimuli. The analysed data were used to fine tune the circuit parameters in order to realise a faithful mimic of actual biological photoreceptors. Similar experiments were performed on the artificial photoreceptor circuit to thoroughly evaluate its robustness and performance against actual biological photoreceptors. Correlation and coherence analyses were used to measure the performance of the circuit with respect to its biological counterpart in the time and frequency domains respectively.

    (Chapter 6: Early Visual Processing Maximises Information for Higher Order Neurons) The artificial photoreceptor circuit was then further evaluated against a complex natural movie scene in which the full dynamic range of the original scenario was maintained. Again, I performed experiments on both the circuit and actual biological photoreceptors. Correlation and coherence analyses of the circuit against the biological photoreceptors showed that the circuit was robust and reliable even under complex naturalistic conditions. I also designed and implemented an add-on electronic circuit for the elaborated photoreceptor circuit that crudely mimicked the temporal high-pass nature of the second-order Large Monopolar Cell (LMC), in order to observe how the non-linear features in the early stage of visual processing assist higher order neurons in efficiently coding visual information.

    Based on this research, I found that the first stage of visual processing consists of numerous non-linearities, which have been proven to provide optimal coding of visual information. The variable frequency response curve of the hoverfly Eristalis tenax was mapped out against a large range of background luminances. Previous studies have suggested that such variability in frequency response improves signal transmission quality in the insect visual pathway, although I did not make quantitative measurements of the improvement. I also found that high dynamic range images (32-bit floating point numbers) are better representations of real-world luminance for naturalistic visual experiments than conventional 8-bit images. In summary, I implemented a circuit that faithfully mimics biological photoreceptors and evaluated it against both characterised and dynamic stimuli. The circuit design proved far better than a plain linear phototransducer as the front-end of a vision system, as it is more capable of compressing visual information in a way that maximises the information content before transmission to higher order neurons.

    Thesis (Ph.D.) -- University of Adelaide, School of Molecular and Biomedical Sciences, Discipline of Physiology, 2007
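
    Two of the key features listed in the abstract, logarithmic encoding of luminance and slow adaptation to the background, can be captured in a simple discrete-time sketch (illustrative constants, not the thesis circuit values):

    import math

    def photoreceptor_response(luminance_trace, adapt_rate=0.01, gain=1.0):
        """Return the adapted, log-encoded response to a luminance time series."""
        background = max(luminance_trace[0], 1e-6)   # slowly tracked background estimate
        responses = []
        for lum in luminance_trace:
            # log encoding relative to the adapted background compresses dynamic range
            responses.append(gain * math.log((lum + 1e-6) / background))
            # the operating point adapts slowly toward the current luminance
            background += adapt_rate * (lum - background)
        return responses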