
    Frequency Analysis of a 64x64 Pixel Retinomorphic System with AER Output to Estimate the Limits to Apply onto Specific Mechanical Environment

    The rods and cones of the human retina constantly sense light and transmit it in the form of spikes to the cortex of the brain, where the image is reconstructed. Delbruck's lab has designed and manufactured several generations of spike-based image sensors that mimic the human retina. In this paper we present an exhaustive timing analysis of the Address-Event-Representation (AER) output of a 64x64 pixel silicon retinomorphic system. Two scenarios are presented in order to establish the maximum frequency of light changes a pixel sensor can follow and the maximum frequency of requested addresses on the AER output; the results are 100 Hz and 1.66 MHz, respectively. We have also tested the upper spin limit of a rotating stimulus and found it to be approximately 6000 rpm (revolutions per minute); in some cases with high light contrast no events are lost. (Ministerio de Ciencia e Innovación TEC2009-10639-C04-0)
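    As a rough illustration of how the two measured limits interact, the sketch below is a hypothetical back-of-envelope check, not the authors' test code; edges_per_rev and active_fraction are assumed parameters. It asks whether a rotating stimulus stays within both the per-pixel limit and the AER bus limit.

        # Hypothetical check against the two limits reported above.
        PIXEL_LIMIT_HZ = 100.0   # max frequency of light changes per pixel
        BUS_LIMIT_HZ = 1.66e6    # max frequency of AER output requests
        N_PIXELS = 64 * 64

        def stimulus_ok(rpm, edges_per_rev=1, active_fraction=0.25):
            """edges_per_rev: light changes one pixel sees per revolution
            (assumed); active_fraction: share of pixels firing (assumed)."""
            rev_per_s = rpm / 60.0
            per_pixel_hz = rev_per_s * edges_per_rev
            bus_hz = per_pixel_hz * N_PIXELS * active_fraction
            return per_pixel_hz <= PIXEL_LIMIT_HZ and bus_hz <= BUS_LIMIT_HZ

        print(stimulus_ok(6000))   # True: at the reported ~6000 rpm limit
        print(stimulus_ok(9000))   # False: pixels cannot follow the changes

    Under these assumptions it is the 100 Hz per-pixel limit, not the 1.66 MHz bus, that caps the spin rate, which is consistent with the ~6000 rpm figure.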

    Operating Principles of Zero-Bias Retinomorphic Sensors

    Zero-bias retinomorphic sensors (ZBRSs) are a new type of optical sensor that produces a signal in response to changes in light intensity, but not to constant illumination. For this reason, they are hoped to enable much faster identification of moving objects than conventional sensing strategies. While recent proof-of-principle experimental demonstrations are significant, there does not yet exist a robust quantitative model for their behaviour, which impedes effective progress in this field. Here I report a mathematical framework to quantify and predict the behaviour of ZBRSs. A simple device-level model and a more detailed carrier-dynamics model are derived. Both models are tested computationally, yielding equivalent behaviour consistent with experimental observations. A figure of merit, Λ_0, is identified which is hoped to enable facile comparison of devices between different research groups. This work is hoped to serve as the foundation for a consistent description of ZBRSs.
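    The abstract does not reproduce the models themselves; the sketch below is an assumed minimal form of a device-level ZBRS model (gain and tau are invented lumped parameters, not the paper's Λ_0), showing the defining behaviour: a transient output on changes in intensity and no output under constant illumination.

        # Assumed first-order model: output driven by dI/dt, decaying to zero.
        import numpy as np

        def zbrs_response(intensity, dt=1e-4, gain=1.0, tau=1e-2):
            v = np.zeros_like(intensity)
            for i in range(1, len(intensity)):
                dI = (intensity[i] - intensity[i - 1]) / dt
                v[i] = v[i - 1] + dt * (gain * dI - v[i - 1] / tau)
            return v

        t = np.arange(0, 0.1, 1e-4)
        light = np.where(t > 0.05, 1.0, 0.0)   # step increase in illumination
        out = zbrs_response(light)
        # out spikes at the step and decays back to zero afterwards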

    A micropower centroiding vision processor


    Spike Events Processing for Vision Systems

    In this paper we briefly summarize the fundamental properties of spike-event processing applied to artificial vision systems. This sensing and processing technology is capable of very high-speed throughput, because it does not rely on sensing and processing sequences of frames, and because it allows for complex, hierarchically structured cortical-like layers for sophisticated processing. The paper includes a few examples that have demonstrated the potential of this technology for high-speed vision processing, such as a multilayer event-processing network of 5 sequential cortical-like layers, and a recognition system capable of discriminating propellers of different shape rotating at 5000 revolutions per second (300000 revolutions per minute).
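    To make the frame-free idea concrete, here is a toy sketch (assumed, not the systems cited above) in which each event is a pixel address plus a timestamp, and a rotation rate is recovered from the inter-event intervals at a single address.

        from collections import namedtuple

        Event = namedtuple("Event", ["x", "y", "t"])  # address-event + time

        def rotation_hz(events, x, y):
            """Mean rate at which an edge crosses pixel (x, y)."""
            ts = [e.t for e in events if e.x == x and e.y == y]
            if len(ts) < 2:
                return 0.0
            intervals = [b - a for a, b in zip(ts, ts[1:])]
            return 1.0 / (sum(intervals) / len(intervals))

        # An edge crossing pixel (3, 3) every 200 microseconds -> 5000 Hz,
        # i.e. the 5000 rev/s propeller rate mentioned above.
        evs = [Event(3, 3, i * 200e-6) for i in range(10)]
        print(round(rotation_hz(evs, 3, 3)))   # 5000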

    Role of blend ratio in bulk heterojunction organic retinomorphic sensors

    Conventional image sensors are designed to digitally reproduce every aspect of the visual field, in general representing brighter regions of a scene as brighter regions in an image. While the benefits of detecting and representing light in this way are obvious, limitations imposed by processing power and frame rate place a cap on the speed at which moving objects can be identified. An emerging alternative strategy is to use sensors which output a signal only in response to changes in light intensity, hence inherently identifying movement by design. These so-called retinomorphic sensors are hoped to outperform conventional sensors for certain tasks, such as identification of moving objects. In this report, the working mechanism of retinomorphic sensors based on organic semiconductors as the active layer is probed. It is observed that the sign of the voltage signal changes when the electrode connections are reversed, suggesting our previous description of device behaviour was incomplete. By systematically varying the ratio of poly(3-hexylthiophene-2,5-diyl) (P3HT) to phenyl-C61-butyric acid methyl ester (PCBM) in the absorption layer, a maximum in performance was observed at a ratio of 1:2 P3HT:PCBM, while pure P3HT and pure PCBM exhibited very weak signals.

    An Optoelectronic Stimulator for Retinal Prosthesis

    Retinal prostheses require the presence of a viable population of cells in the inner retina. Evaluations of retinas with Age-Related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP) have shown that a large number of cells remain in the inner retina compared with the outer retina. Therefore, vision loss caused by AMD and RP is potentially treatable with retinal prostheses. Photostimulation-based retinal prostheses have shown many advantages compared with retinal implants. In contrast to electrode-based stimulation, light does not require mechanical contact. Therefore, the system can be completely external and does not have the power and degradation problems of implanted devices. In addition, the stimulating point is flexible and does not require a prior decision on the stimulation location. Furthermore, a beam of light can be projected on tissue with both temporal and spatial precision. This thesis aims at finding a feasible solution to such a system. Firstly, a prototype of an optoelectronic stimulator was proposed and implemented using the Xilinx Virtex-4 FPGA evaluation board. The platform was used to demonstrate the possibility of photostimulation of photosensitized neurons. Meanwhile, with the aim of developing a portable retinal prosthesis, a system-on-chip (SoC) architecture was proposed and a wide-tuning-range sinusoidal voltage-controlled oscillator (VCO), the pivotal component of the system, was designed. The VCO is based on a newly designed Complementary Metal Oxide Semiconductor (CMOS) Operational Transconductance Amplifier (OTA) which achieves good linearity over a wide tuning range. Both the OTA and the VCO were fabricated in the AMS 0.35 µm CMOS process. Finally, a 9x9 CMOS image sensor with spiking pixels was designed. Each pixel acts as an independent oscillator whose frequency is controlled by the incident light intensity. The sensor was fabricated in the AMS 0.35 µm CMOS Opto process. Experimental validation and measured results are provided.
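    The spiking pixel described above can be summarised by a simple integrate-and-fire abstraction; the sketch below is a schematic model (assumed parameters and units, not the thesis's circuit) in which spike frequency scales with incident light intensity.

        def spike_times(photocurrent, threshold=1.0, t_end=1.0, dt=1e-5):
            """Integrate an assumed photocurrent (arbitrary units) and emit
            a spike each time the accumulated charge crosses threshold."""
            v, t, out = 0.0, 0.0, []
            while t < t_end:
                v += photocurrent * dt
                if v >= threshold:
                    out.append(t)
                    v = 0.0   # reset after each spike
                t += dt
            return out

        dim = spike_times(photocurrent=50.0)      # ~50 spikes in 1 s
        bright = spike_times(photocurrent=500.0)  # ~500 spikes in 1 s
        print(len(dim), len(bright))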

    A review of current neuromorphic approaches for vision, auditory, and olfactory sensors

    Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and, as a result, tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways similar to neural signals. This allows for much lower power consumption, owing to the ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and in advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state of the art in neuromorphic implementations of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing these key contributions together, we suggest a future research direction for further development of the neuromorphic sensing field.

    DDD17: End-To-End DAVIS Driving Dataset

    Event cameras, such as dynamic vision sensors (DVS) and dynamic and active-pixel vision sensors (DAVIS), can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in end-to-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car's on-board diagnostics interface. As an example application, we performed a preliminary end-to-end learning study using a convolutional neural network that is trained to predict the instantaneous steering angle from DVS and APS visual data. (Comment: Presented at the ICML 2017 Workshop on Machine Learning for Autonomous Vehicles)
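    A study like this implies a preprocessing step that turns asynchronous DVS events into a dense input a CNN can consume; the sketch below shows one common form of that step (an assumed array layout, not the DDD17 toolchain).

        import numpy as np

        H, W = 260, 346   # DAVIS346 resolution used in DDD17

        def events_to_frame(events, t0, t1):
            """events: iterable of (t, x, y, polarity) with polarity in
            {-1, +1}; returns a signed event-count frame for [t0, t1)."""
            frame = np.zeros((H, W), dtype=np.float32)
            for t, x, y, p in events:
                if t0 <= t < t1:
                    frame[int(y), int(x)] += p
            return frame

        # A CNN trained on such frames (optionally stacked with the APS
        # grayscale frame) would then regress the recorded steering angle.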