56 research outputs found

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    Full text link
    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128x64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 ”W at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 ”W and 277 ”W, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, yet still achieves 19x lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
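    The triggering scheme described above (a cheap changed-pixel count gating a more capable processor) can be sketched in a few lines. The threshold, polling rate and callback names below are illustrative assumptions, not values from the paper.

```python
import time

WAKE_THRESHOLD = 200  # changed pixels per frame (illustrative value)

def monitor(read_changed_pixel_count, wake_processing_unit):
    """Poll the imager's focal-plane output and wake the programmable
    engine only when enough pixels change (both callbacks are
    hypothetical placeholders for the real hardware interfaces)."""
    while True:
        if read_changed_pixel_count() > WAKE_THRESHOLD:
            # Hand the frame off for full context-aware analysis.
            wake_processing_unit()
        time.sleep(0.1)  # ~10 fps polling, matching the lowest-power mode
```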

    Event-based neuromorphic stereo vision

    Full text link

    High speed event-based visual processing in the presence of noise

    Get PDF
    Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel event-based, application-focused algorithms developed are primarily designed for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals and processing speed in embedded systems.
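    The abstract does not spell out its algorithms, but a common baseline for noise-robust event-based processing (and a useful reference point for this kind of work) is a spatiotemporal correlation filter that keeps only events supported by a recent neighbouring event. A minimal sketch, purely illustrative and not the thesis's method:

```python
import numpy as np

def correlation_filter(events, width, height, window=5e-3):
    """Baseline background-activity filter for event streams: keep an
    event only if some pixel in its 3x3 neighbourhood fired within
    `window` seconds. events: iterable of (t, x, y), t in seconds."""
    last_ts = np.full((height, width), -np.inf)  # last spike per pixel
    kept = []
    for t, x, y in events:
        x0, x1 = max(x - 1, 0), min(x + 2, width)
        y0, y1 = max(y - 1, 0), min(y + 2, height)
        if (t - last_ts[y0:y1, x0:x1]).min() <= window:
            kept.append((t, x, y))              # supported by a neighbour
        last_ts[y, x] = t
    return kept
```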

    Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review

    Get PDF
    Motion perception is a critical capability determining a variety of aspects of insects' lives, including avoiding predators and foraging. A good number of motion detectors have been identified in the insects' visual pathways. Computational modelling of these motion detectors has not only provided effective solutions to artificial intelligence, but has also benefited the understanding of complicated biological visual systems. These biological mechanisms, shaped through millions of years of evolution, form solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models in the literature that originate from biological research on insects' visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction selective neurons (DSNs) in fruit flies, bees and locusts, as well as the small target motion detectors (STMDs) in dragonflies and hover flies. We also review the applications of these models to robots and vehicles. Through these modelling studies, we summarise the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-systems integration and the hardware realisation of these bio-inspired motion perception models.
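    As a concrete illustration of the direction selectivity these models implement, the sketch below is a textbook discrete-time Hassenstein-Reichardt elementary motion detector, one of the classic correlation-type models this literature builds on; the time constant and signal names are illustrative assumptions.

```python
import numpy as np

def reichardt_emd(left, right, dt=1e-3, tau=20e-3):
    """Textbook Hassenstein-Reichardt correlator: low-pass (delay)
    each photoreceptor channel and correlate it with the neighbouring
    undelayed channel; the opponent difference is positive for
    left-to-right motion. left, right: 1-D arrays of samples."""
    alpha = dt / (tau + dt)          # first-order low-pass coefficient
    d_left = np.zeros(len(left))     # delayed left channel
    d_right = np.zeros(len(right))   # delayed right channel
    for t in range(1, len(left)):
        d_left[t] = d_left[t-1] + alpha * (left[t] - d_left[t-1])
        d_right[t] = d_right[t-1] + alpha * (right[t] - d_right[t-1])
    # Opponent subtraction of the two mirror-symmetric correlations
    return d_left * np.asarray(right) - d_right * np.asarray(left)
```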

    Biomimetic vision-based collision avoidance system for MAVs.

    Get PDF
    This thesis proposes a secondary collision avoidance algorithm for micro aerial vehicles based on the luminance-difference processing exhibited by the Lobula Giant Movement Detector (LGMD), a wide-field visual neuron located in the lobula layer of a locust's nervous system. In particular, we address the design, modulation, hardware implementation, and testing of a computationally simple yet robust collision avoidance algorithm based on the novel concept of quadfurcated luminance-difference processing (QLDP). Micro and nano classes of unmanned robots are the primary target applications of this algorithm; however, it could also be implemented on advanced robots as a fail-safe redundant system. The algorithm proposed in this thesis addresses some of the major detection challenges, such as obstacle proximity, collision threat potentiality, and contrast correction within the robot's field of view, to establish and generate a precise yet simple collision-free motor control command in real time. Additionally, it has proven effective in detecting edges independent of background or obstacle colour, size, and contour. To achieve this, the proposed QLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level (spike), which further determines whether the robot's field of view must be dissected into four quarters, where each quadrant's response is analysed and interpreted against the others to determine the most secure path. Ultimately, the computational load and performance of the model are assessed against an eclectic set of off-line as well as real-time real-world collision scenarios to validate the model's asserted capability to avoid obstacles at more than 670 mm prior to collision (real-world), moving at 1.2 m/s, with a successful avoidance rate of 90% while processing at a high frequency of 120 Hz; to the best of our knowledge, these results are much superior to those reported in the contemporary related literature. MSc by Research
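    To make the quadrant-comparison idea concrete: dissect the field of view into four quarters, score each by luminance-difference activity, and steer toward the quietest one. The sketch below is only this caricature under assumed grayscale inputs; it omits the thesis's image enhancement, contrast correction and spike estimation stages, so it should not be read as the actual QLDP pipeline.

```python
import numpy as np

def safest_quadrant(prev_frame, frame):
    """Illustrative quadrant comparison: score each quarter of the
    field of view by summed absolute luminance difference between
    consecutive grayscale frames, and return the least-active quadrant
    (0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right)."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape[0] // 2, diff.shape[1] // 2
    quadrants = [diff[:h, :w], diff[:h, w:], diff[h:, :w], diff[h:, w:]]
    scores = [q.sum() for q in quadrants]  # crude proxy for threat level
    return int(np.argmin(scores))          # steer toward lowest activity
```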

    Neuromorphic models for biological photoreceptors

    Get PDF
    Biological visual processing is extremely flexible and provides pixel-by-pixel adaptation. Millennia of evolution and natural selection have provided inspiration for robust, efficient and elegant solutions in artificial visual system designs. Physiological studies have shown that non-linear adaptation of biological visual processing is evident even at the first stage of the visual system pathway. Theory and modelling have shown that adaptation in early visual processing is required to compress the high-bandwidth visual environment into a sensible form prior to transmission via the limited-bandwidth neuron channels. However, many current bio-inspired visual systems have neglected the importance of having a reliable early stage of visual processing. Having a robust and reliable early-stage design not only provides a better mimic of the biology, but also allows better design and understanding of higher order neurons in the visual system pathway.

    (Chapter 3: A Non-linear Adaptive Artificial Photoreceptor Circuit - Design and Implementation) The primary aim of this work was to design and implement an elaborated artificial photoreceptor circuit which faithfully mimics actual biological photoreceptors, using standard analogue discrete electronic components. I incorporated several key features of biological photoreceptors in the implementation, such as non-linear adaptation to background luminance, adaptive frequency response and logarithmic encoding of luminance. Initial parameters for the key features of the model were based on existing literature, and fine tuning of the circuit was done after analysis of actual recordings from biological photoreceptors.

    (Chapter 2: Dimmable Voltage-Controlled High-Current LED Driver System for Vision Science Experiments) The visual stimulus was a critical component in performing the vision experiments, and has historically been a limiting factor in performing experiments which ask critical questions about responses to complicated scenes, such as natural environments. The ability to reproduce the large dynamic range of real-world luminance was important to correctly test the performance of the model. I evaluated the performance of several existing light emitting diode (LED) drivers and commercial products and found that none of them provided adequate dynamic range and freedom from noise. I therefore designed and implemented a stable multi-channel, high-current LED driver, built from inexpensive analogue discrete electronic components, that allowed creation of the light stimuli used for the experiments described in this thesis. This LED driver, which was properly calibrated against real-world luminance, was used in conjunction with a standard commercial data acquisition card.

    (An Elaborated Electronic Prototype of a Biological Photoreceptor - Steady-state Analysis (Chapter 4) & Dynamic Analysis (Chapter 5)) I performed electrophysiological experiments measuring the responses of intact hoverfly photoreceptor cells (R1-6) using both characterised and dynamic (naturalistic) stimuli. The analysed data were used to fine-tune the circuit parameters in order to realise a faithful mimic of actual biological photoreceptors. Similar experiments were performed on the artificial photoreceptor circuit to thoroughly evaluate its robustness and performance against actual biological photoreceptors. Correlation and coherence analyses were used to measure the performance of the circuit with respect to its biological counterpart in the time and frequency domains respectively.

    (Chapter 6: Early Visual Processing Maximises Information for Higher Order Neurons) The artificial photoreceptor circuit was then further evaluated against a complex natural movie scene in which the full dynamic range of the original scenario was maintained. Again, I performed experiments on both the circuit and actual biological photoreceptors. Correlation and coherence analyses of the circuit against the biological photoreceptors showed that the circuit was robust and reliable even under complex naturalistic conditions. I also designed and implemented an add-on electronic circuit to the elaborated photoreceptor circuit that crudely mimicked the temporal high-pass nature of the second-order Large Monopolar Cell (LMC), in order to observe how the non-linear features in the early stage of visual processing assist higher order neurons in efficiently coding visual information.

    Based on this research, I found that the first stage of visual processing consists of numerous non-linearities, which have been proven to provide optimal coding of visual information. The variable frequency response curve of the hoverfly, Eristalis tenax, was mapped out against a large range of background luminance. Previous studies have suggested that such variability in frequency response improves signal transmission quality in the insect visual pathway, although I have not made any quantitative measurements of the improvements. I also found that high dynamic range images (32-bit floating point numbers) are better representations of real-world luminance for naturalistic visual experiments than conventional 8-bit images. I successfully implemented a circuit that faithfully mimicked biological photoreceptors, and it was evaluated against characterised and dynamic stimuli. I found that my circuit design was far better than a normal linear phototransducer as the front-end of a vision system, as it is more capable of compressing visual information in a way which maximises the information content before transmission to higher order neurons. Thesis (Ph.D.) -- University of Adelaide, School of Molecular and Biomedical Sciences, Discipline of Physiology, 2007
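    A minimal numerical sketch of two of the behaviours described above, logarithmic encoding and adaptation to background luminance, is given below. It is a generic adapting-photoreceptor model, not the thesis's circuit, and it omits the adaptive frequency response; the adaptation time constant is an assumed value.

```python
import numpy as np

def adaptive_photoreceptor(luminance, dt=1e-3, tau_bg=0.5):
    """Generic sketch: encode luminance logarithmically relative to a
    slowly adapting low-pass estimate of the background level, so gain
    shifts with the mean light level (tau_bg is an assumed adaptation
    time constant, not a measured one)."""
    alpha = dt / (tau_bg + dt)              # slow low-pass coefficient
    background = max(luminance[0], 1e-6)    # background estimate
    out = np.empty(len(luminance))
    for t, I in enumerate(luminance):
        background += alpha * (I - background)
        # Log encoding of luminance relative to the adapted background:
        out[t] = np.log1p(I / max(background, 1e-6))
    return out
```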

    The Development of Bio-Inspired Cortical Feature Maps for Robot Sensorimotor Controllers

    Get PDF
    Full version unavailable due to 3rd party copyright restrictions. This project applies principles from the field of Computational Neuroscience to Robotics research, in particular to develop systems inspired by how nature manages to solve sensorimotor coordination tasks. The overall aim has been to build a self-organising sensorimotor system using biologically inspired techniques based upon human cortical development, which can in the future be implemented in neuromorphic hardware. This can then deliver the benefits of low power consumption and real-time operation, but with flexible learning onboard autonomous robots. A core principle is the Self-Organising Feature Map, which is based upon the theory of how 2D maps develop in real cortex to represent complex information from the environment. A framework for developing feature maps for both motor and visual directional selectivity, representing eight different directions of motion, is described, as well as how the maps can be coupled together to make a basic visuomotor system. In contrast to many previous works which use artificially generated visual inputs (for example, image sequences of oriented moving bars or mathematically generated Gaussian bars), a novel feature of the current work is that the visual input is generated by a DVS 128 silicon retina camera, a neuromorphic device that produces spike events in a frame-free way. One of the main contributions of this work has been to develop a method of autonomous regulation of the map development process which adapts the learning dependent upon input activity. The main results show that distinct directionally selective maps for both the motor and visual modalities are produced under a range of experimental scenarios. The adaptive learning process successfully controls the rate of learning in both motor and visual map development and is used to indicate when sufficient patterns have been presented, thus avoiding the need to define in advance the quantity and range of training data. The coupling training experiments show that the visual input learns to modulate the original motor map response, creating a new visual-motor topological map. EPSRC, University of Plymouth Graduate School
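    The map-development framework rests on the standard Self-Organising Feature Map learning rule: find the best-matching unit and pull its neighbourhood toward the input. A minimal sketch follows; the map shape, learning rate and neighbourhood width are arbitrary choices, and the thesis's adaptive regulation of learning is not reproduced here.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.5):
    """One standard SOM update. weights: array of shape
    (rows, cols, dim); x: one input vector of length dim.
    The best-matching unit (BMU) and its Gaussian neighbourhood
    move toward x; returns the BMU grid coordinates."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    for i in range(rows):
        for j in range(cols):
            grid_d2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            h = np.exp(-grid_d2 / (2 * sigma ** 2))  # neighbourhood kernel
            weights[i, j] += lr * h * (x - weights[i, j])
    return bmu
```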

    Pitt Momentum Fund 2020 Overview

    Get PDF
    In the 2019-2020 academic year, Provost and Senior Vice Chancellor Ann E. Cudd and Senior Vice Chancellor for Research (SVCR) Rob A. Rutenbar have collaborated to enhance and streamline internal funding opportunities for faculty research while continuing to support high-quality research, scholarship, and creative endeavors. The result is a jointly funded large-scale research development fund, the Pitt Momentum Funds, which restructures the University's suite of internal funding programs (Central Research Development Fund, Social Science Research Initiative, and Special Initiative to Promote Scholarly Activities in the Humanities) and adds a new SVCR/Provost Fund to provide allocations for research seeding, teaming, and scaling grants. The new $920,000 annual funding model provides large-scale, transformative scholarship support for interdisciplinary teams of faculty from at least three schools, with two new one-year planning (or "teaming") grants of $60,000 and two new two-year scaling grants at $400,000. This new suite of funds, which is new funding and does not reduce overall funding, will help advance Pitt's goal to engage in research of impact. The new structure for awards includes three tiers. Seeding Grants: one-year term with an award cap of $16,000, plus $2,000 supplements available for specific cases; awards are made in four tracks: STEM; Health & Life Science; Arts & Humanities; and Social Sciences, which includes business, policy, law, education, and social work; an additional Preventing Sexual Misconduct track is open to all faculty, including the School of Medicine. Seeding grants support significant and innovative scholarship by individual faculty or groups of faculty at all ranks at the University of Pittsburgh, with a particular focus on early-career faculty and areas where external funding is extremely limited. Teaming Grants: one-year term with an award cap of $60,000. Teaming grants support the formation of new multi-disciplinary collaborations to successfully pursue large-scale external funding. Scaling Grants: two-year term with an award cap of $400,000. Scaling grants enable multi-disciplinary teams to competitively scale their research efforts in targeted pursuit of large-scale external funding. More information on eligibility, application processes, evaluation criteria, exclusions, and participation requirements is available through Pitt's Office of Sponsored Programs. The application can be accessed at the Competition Space.

    A two-directional 1-gram visual motion sensor inspired by the fly's eye

    No full text
    Optic flow based autopilots for Micro-Aerial Vehicles (MAVs) need lightweight, low-power sensors to be able to fly safely through unknown environments. The new tiny 6-pixel visual motion sensor presented here meets these demanding requirements in terms of its mass, size and power consumption. This 1-gram, low-power, fly-inspired sensor accurately gauges visual motion using only its 6-pixel array, as demonstrated with two different panoramas under varying illuminance conditions. The new visual motion sensor's output results from a smart combination of the information collected by several 2-pixel Local Motion Sensors (LMSs), based on the "time of travel" scheme originally inspired by the common housefly's Elementary Motion Detector (EMD) neurons. The proposed sensory fusion method enables the new visual sensor to measure the visual angular speed and determine the main direction of the visual motion without any prior knowledge. By computing the median value of the output from several LMSs, we also obtained a more robust, more accurate and more frequently refreshed measurement of the 1-D angular speed.
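    The "time of travel" scheme reduces to a ratio: the fixed angle between two adjacent photoreceptors divided by the delay between their threshold crossings, with the median over several LMSs giving the robust fused estimate the abstract mentions. A sketch under assumed values (the inter-receptor angle here is illustrative, not the sensor's actual optics):

```python
import statistics

INTER_RECEPTOR_ANGLE_DEG = 4.0  # assumed angle between adjacent pixels

def lms_angular_speed(t_cross_a, t_cross_b):
    """One 2-pixel Local Motion Sensor: angular speed (deg/s) from the
    'time of travel' of a contrast feature between two adjacent pixels;
    the sign of the delay gives the direction of motion."""
    dt = t_cross_b - t_cross_a
    return INTER_RECEPTOR_ANGLE_DEG / dt

def fused_angular_speed(crossing_pairs):
    """Median over several LMS outputs for a robust 1-D estimate."""
    return statistics.median(lms_angular_speed(a, b)
                             for a, b in crossing_pairs)
```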
    • 
