    A spatial contrast retina with on-chip calibration for neuromorphic spike-based AER vision systems

    We present a 32 × 32 pixel contrast retina microchip that provides its output as an address event representation (AER) stream. Spatial contrast is computed as the ratio between each pixel's photocurrent and a local average of its neighbors' photocurrents, obtained with a diffuser network. This current-mode computation introduces a significant amount of mismatch between neighboring pixels, because the currents can be as low as a few picoamperes. Consequently, compact calibration circuitry has been included to trim each pixel. Measurements show a reduction in mismatch standard deviation from 57% to 6.6% (under indoor light). The paper describes the design of the pixel with its spatial contrast computation and calibration sections. About one third of the pixel area is used for a 5-bit calibration circuit. The pixel area is 58 μm × 56 μm, and its current consumption is about 20 nA at a 1-kHz event rate. Extensive experimental results are provided for a prototype fabricated in a standard 0.35-μm CMOS process. This work was supported by Spanish Research Grants TIC2003-08164-C03-01 (SAMANTA), TEC2006-11730-C03-01 (SAMANTA-II), and EU grant IST-2001-34124 (CAVIAR). JCS was supported by the I3P program of the Spanish Research Council. RSG was supported by a national grant from the Spanish Ministry of Education and Science. Peer reviewed.
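    As a rough software analogue of the per-pixel operation described above, the sketch below forms the ratio of each pixel's photocurrent to a local neighborhood average, with a simple box filter standing in for the chip's diffuser network. The function name, neighborhood size, and eps guard are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_contrast(photocurrents, neighborhood=3, eps=1e-12):
    """Ratio of each pixel's photocurrent to the local average of its
    neighborhood; the box filter stands in for the diffuser network."""
    local_avg = uniform_filter(photocurrents, size=neighborhood)
    # eps guards against division by zero in fully dark regions.
    return photocurrents / (local_avg + eps)
```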

    Real-time Neuromorphic Visual Pre-Processing and Dynamic Saliency

    The human brain is by far the most computationally complex, efficient, and reliable computing system operating under such low-power, small-size, and light-weight constraints. Within the field of neuromorphic engineering, we seek to design systems modeled on the human brain as a means of reaching its desirable properties. In this doctoral work, the focus is within the realm of vision, specifically visual saliency and related visual tasks with bio-inspired, real-time processing. The human visual system, from the retina through the visual cortical hierarchy, is responsible for extracting and processing visual information, forming our visual perception. This information is transmitted through the various layers of the visual system via spikes (action potentials), representing information in the temporal domain. Our objective is to exploit this neurological communication protocol and functionality within the systems we design. This approach is essential for the advancement of autonomous, mobile agents (e.g., drones/MAVs, cars) which must perform visual tasks under size and power constraints for which traditional CPU or GPU implementations do not suffice. Although the high-level objective is to design a complete visual processor with direct physical and functional correlates to the human visual system, we focus on three specific tasks.

    The first focus of this thesis is the integration of motion into a biologically plausible proto-object-based visual saliency model. Laurent Itti, one of the pioneers in the field, defines visual saliency as "the distinct subjective perceptual quality which makes some items in the world stand out from their neighbors and immediately grab our attention." From humans to insects, visual saliency is important for extracting only the interesting regions of visual stimuli for further processing. Prior to this doctoral work, Russell et al. (2014) designed a model of proto-object-based visual saliency with biological correlates. That model was designed for computing saliency only on static images. However, motion is a naturally occurring phenomenon that plays an essential role in both human and animal visual processing. Hence, an ideal model of visual saliency should also consider motion exhibited within the visual scene. In this work, a novel dynamic proto-object-based visual saliency model is described which extends the Russell et al. model to consider not only static but also temporal information. The model was validated using metrics that determine how accurately it predicts human eye fixations and saccades on a public dataset of videos with attached eye-tracking data, and it outperformed other state-of-the-art visual saliency models in computing dynamic visual saliency. A model that can accurately predict where humans look can serve as a front-end component to other visual processors performing tasks such as object detection, recognition, or tracking; in doing so, it can reduce the data throughput and increase the processing speed of such tasks. It also has more obvious applications in artificial intelligence, in mimicking the functionality of the human visual system.

    The second focus of this thesis is the implementation of this visual saliency model on an FPGA (field-programmable gate array) for real-time processing. Initially, the model was designed in MATLAB, a software-based approach running on a CPU, which limits processing speed and consumes unnecessary power due to overhead; this is detrimental for integration with an autonomous, mobile system that must operate in real time. The novel FPGA implementation allows for a low-power, high-speed approach to computing visual saliency. A few FPGA-based implementations of visual saliency exist, but none are based on the notion of proto-objects; this work presents, to our knowledge, the first FPGA implementation of an object-based visual saliency model. Such an implementation meets the low-power, light-weight, and small-size specifications sought within neuromorphic engineering. To validate the FPGA model, the same eye-fixation and saccade prediction metrics are used, and the hardware implementation is compared against the software model.

    The third focus of this thesis is the design of a generic neuromorphic platform, both on FPGA and in VLSI (very-large-scale integration) technology, for performing visual tasks, including those necessary for computing visual saliency. Visual processing tasks such as image filtering and image dewarping are demonstrated on this novel neuromorphic technology, which consists of an array of hardware-based generalized integrate-and-fire neurons and allows the visual saliency model's computation to be offloaded onto hardware. We first demonstrate an FPGA emulation of this neuromorphic system, showing its dewarping and filtering capabilities as well as its integration with a neuromorphic camera, the ATIS (Asynchronous Time-based Image Sensor). We then demonstrate the neuromorphic platform implemented in CMOS technology, designed specifically for low mismatch, high density, and low power. Such a VLSI platform further bridges the gap between engineering and biology and moves us closer to a complete neuromorphic visual processor.
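    The platform above is described as an array of generalized integrate-and-fire neurons. Below is a minimal software sketch of leaky integrate-and-fire dynamics of that general kind; the time constant, threshold, and reset values are illustrative assumptions, not the hardware's parameters.

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward zero,
    integrates the input, and emits a spike (then resets) at threshold."""
    v = v_reset
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (i_in - v)   # leaky integration of the input
        if v >= v_th:
            spike_times.append(step * dt)
            v = v_reset                # reset after the spike
    return spike_times

# A constant suprathreshold input produces a regular spike train.
print(lif_spike_times(np.full(100, 2.0)))
```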

    Optical Axons for Electro-Optical Neural Networks

    Recently, neuromorphic sensors, which convert analogue signals to spiking frequencies, have been reported for neurorobotics. In bio-inspired systems these sensors are connected to the main neural unit to perform post-processing of the sensor data. The performance of spiking neural networks has been improved using optical synapses, which offer parallel communication between distant neural areas but are sensitive to intensity variations of the optical signal. For systems with several neuromorphic sensors connected optically to the main unit, the use of optical synapses is not an advantage. To address this, in this paper we propose and experimentally verify optical axons with synapses activated optically by digital signals. The synaptic weights are encoded by the energy of the stimuli, which are then optically transmitted independently. We show that optical intensity fluctuations and link misalignment result in a delay in the activation of the synapses. For the proposed optical axon, we have demonstrated line-of-sight transmission over a maximum link length of 190 cm with a delay of 8 μs. Furthermore, we show the axon delay as a function of illuminance using a fitted model whose root-mean-square (RMS) similarity is 0.95.
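    As a rough illustration of characterizing axon delay against illuminance, the sketch below fits an inverse-law model to synthetic, made-up sample points. The functional form, the data, and all names are assumptions for illustration only; the paper's actual fitted model is not specified here.

```python
import numpy as np

# Synthetic illustration only: illuminance (lux) vs. activation delay (us).
illuminance = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
delay_us = np.array([60.0, 32.0, 18.0, 11.0, 8.0])

# Fit delay ~ a / E + b: delay shrinks as more light reaches the receiver.
a, b = np.polyfit(1.0 / illuminance, delay_us, 1)
predicted = a / illuminance + b
rms = np.sqrt(np.mean((predicted - delay_us) ** 2))
print(f"delay ~ {a:.1f}/E + {b:.2f} us, RMS error {rms:.2f} us")
```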

    On Real-Time AER 2-D Convolutions Hardware for Neuromorphic Spike-Based Cortical Processing

    In this paper, a chip that performs real-time image convolutions with programmable kernels of arbitrary shape is presented. The chip is a first experimental prototype of reduced size to validate the implemented circuits and system-level techniques. The convolution processing is based on the address-event representation (AER) technique, a spike-based, biologically inspired image and video representation technique that favors communication bandwidth for pixels carrying more information. As a first test prototype, a 16 × 16 pixel array has been implemented with a programmable kernel size of up to 16 × 16. The chip has been fabricated in a standard 0.35-μm complementary metal-oxide-semiconductor (CMOS) process. The technique also allows larger images to be processed by assembling 2-D arrays of such chips. Pixel operation exploits low-power mixed analog-digital circuit techniques. Because of the low currents involved (down to nanoamperes or even picoamperes), a significant amount of pixel area is devoted to mismatch calibration. The rest of the chip uses digital circuit techniques, both synchronous and asynchronous. The fabricated chip has been thoroughly tested, both at the pixel level and at the system level. Specific computer interfaces have been developed for generating AER streams from conventional computers and feeding them as inputs to the convolution chip, and for grabbing AER streams coming out of the convolution chip and storing and analyzing them on computers. Extensive experimental results are provided. At the end of this paper, we provide discussions and results on scaling up the approach for larger pixel arrays and multilayer cortical AER systems. Funding: Commission of the European Communities IST-2001-34124 (CAVIAR) and 216777 (NABAB); Ministerio de Educación y Ciencia TIC-2000-0406-P4, TIC-2003-08164-C03-01, and TEC2006-11730-C03-01; Junta de Andalucía TIC-141.
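    The event-driven principle behind such convolution chips can be sketched in software: each incoming address event stamps the kernel onto an accumulator array centered at the event's address, and any cell that crosses a threshold emits an output event and resets. This is a minimal sketch under assumed array size, threshold, and reset behavior, not a model of the chip's exact circuits.

```python
import numpy as np

def aer_convolve(events, kernel, shape=(16, 16), threshold=1.0):
    """Event-driven 2-D convolution: every input event adds the kernel,
    centered at the event address, to an accumulator; any accumulator cell
    crossing threshold emits an output event and resets."""
    acc = np.zeros(shape)
    kh, kw = kernel.shape
    out = []
    for x, y in events:
        x_lo, y_lo = x - kh // 2, y - kw // 2
        x0, x1 = max(0, x_lo), min(shape[0], x_lo + kh)
        y0, y1 = max(0, y_lo), min(shape[1], y_lo + kw)
        # Add the (border-clipped) kernel around the event address.
        acc[x0:x1, y0:y1] += kernel[x0 - x_lo:x1 - x_lo, y0 - y_lo:y1 - y_lo]
        # Fire and reset every cell that crossed threshold.
        for fx, fy in np.argwhere(acc >= threshold):
            out.append((fx, fy))
            acc[fx, fy] = 0.0
    return out
```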

    Biomimetic vision-based collision avoidance system for MAVs.

    This thesis proposes a secondary collision avoidance algorithm for micro aerial vehicles based on the luminance-difference processing exhibited by the Lobula Giant Movement Detector (LGMD), a wide-field visual neuron located in the lobula layer of a locust's nervous system. In particular, we address the design, modulation, hardware implementation, and testing of a computationally simple yet robust collision avoidance algorithm based on the novel concept of quadfurcated luminance-difference processing (QLDP). Micro and nano classes of unmanned robots are the primary target applications of this algorithm; however, it could also be implemented on advanced robots as a fail-safe redundant system. The algorithm addresses some of the major detection challenges, such as obstacle proximity, collision threat potentiality, and contrast correction within the robot's field of view, to generate a precise yet simple collision-free motor control command in real time. Additionally, it has proven effective in detecting edges independently of background or obstacle colour, size, and contour. To achieve this, the proposed QLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level (spike), which determines whether the robot's field of view must be dissected into four quarters, where each quadrant's response is analysed and weighed against the others to determine the most secure path. Finally, the computational load and performance of the model are assessed against an eclectic set of offline as well as real-time, real-world collision scenarios to validate the model's capability: in real-world tests it avoided obstacles at more than 670 mm before collision while moving at 1.2 m s⁻¹, with a 90% avoidance success rate, processing at 120 Hz, which, to the best of our knowledge, is much superior to the results reported in the contemporary related literature. MSc by Research.
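    A hypothetical software sketch of the quadrant-comparison idea follows. The frame-difference threat estimate, thresholding scheme, and all names are illustrative assumptions; the thesis's actual QLDP pipeline includes image enhancement and edge detection stages not reproduced here.

```python
import numpy as np

def qldp_steer(prev_frame, frame, spike_threshold=0.1):
    """Hypothetical sketch: estimate collision threat from frame-to-frame
    luminance change; above threshold, split the view into four quadrants
    and steer toward the one with the least change (the clearest path)."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    threat = diff.mean() / 255.0          # crude "spike" level in [0, 1]
    if threat < spike_threshold:
        return "hold_course"
    h, w = diff.shape
    quadrants = {
        "upper_left":  diff[:h // 2, :w // 2].mean(),
        "upper_right": diff[:h // 2, w // 2:].mean(),
        "lower_left":  diff[h // 2:, :w // 2].mean(),
        "lower_right": diff[h // 2:, w // 2:].mean(),
    }
    return min(quadrants, key=quadrants.get)
```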

    Saliency-driven image acuity modulation on a reconfigurable silicon array of spiking neurons

    We have constructed a system that uses an array of 9,600 spiking silicon neurons, a fast microcontroller, and digital memory to implement a reconfigurable network of integrate-and-fire neurons. The system is designed for rapid prototyping of spiking neural networks that require high-throughput communication with external address-event hardware. Arbitrary network topologies can be implemented by selectively routing address events to specific internal or external targets according to a memory-based projective field mapping. The utility and versatility of the system are demonstrated by configuring it as a three-stage network that accepts input from an address-event imager, detects salient regions of the image, and performs spatial acuity modulation around a high-resolution fovea centered on the location of highest salience.
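    A memory-based projective field mapping amounts to a lookup table from source addresses to lists of target addresses. The sketch below illustrates that routing idea only; the addresses and table contents are made up for illustration.

```python
# Illustrative projective-field table: source address -> target addresses.
projective_field = {
    42: [7, 101, 3050],   # neuron 42 fans out to three internal targets
    43: ["ext:12"],       # neuron 43 projects to an external AER device
}

def route_event(source_address, table):
    """Memory-based AER routing: look up the source address and fan the
    event out to every target in its projective field."""
    return table.get(source_address, [])

print(route_event(42, projective_field))  # -> [7, 101, 3050]
```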