
    Retinal ganglion cell software and FPGA model implementation for object detection and tracking

    This paper describes the software and FPGA implementation of a Retinal Ganglion Cell (RGC) model which detects moving objects. It is shown how this processing, in conjunction with a Dynamic Vision Sensor as its input, can be used to extract information about object position. On the software side, a system based on an array of these RGCs has been developed that provides up to two trackers. These can track objects in a scene viewed by a still observer and are inhibited when saccadic camera motion occurs. The entire processing takes 1000 ns/event on average. A simplified version of this mechanism, with a mean latency of 330 ns/event at 50 MHz, has also been implemented on a Spartan6 FPGA. European Commission FP7-ICT-600954; Ministerio de Economía y Competitividad TEC2012-37868-C04-02; Junta de Andalucía P12-TIC-130.
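    As a rough illustration of the event-driven processing described above, the following Java sketch models an array of RGC-like cells that integrate DVS events in fixed receptive fields, fire above a count threshold, drive a simple tracker, and suppress tracking when the global event rate spikes (a crude stand-in for saccade inhibition). It is not the paper's implementation; the sensor resolution, cell size, thresholds and window length are assumptions.

```java
import java.util.Arrays;

/**
 * Minimal sketch (not the paper's implementation) of an RGC-like array driven
 * by DVS address-events. Each cell integrates events falling in its receptive
 * field and "fires" when a count threshold is reached within a time window;
 * a simple tracker follows the most active cell, and a global event-rate
 * spike is used as a crude saccade/ego-motion inhibition signal. The sensor
 * resolution, cell size, thresholds and window length are assumptions.
 */
public class RgcArrayTracker {
    static final int SENSOR_W = 128, SENSOR_H = 128; // DVS128-style resolution (assumption)
    static final int CELL = 16;                      // receptive field size in pixels
    static final int GRID_W = SENSOR_W / CELL, GRID_H = SENSOR_H / CELL;
    static final int FIRE_THRESHOLD = 20;            // events needed for a cell to fire
    static final long WINDOW_US = 20_000;            // integration window of 20 ms
    static final int SACCADE_RATE = 500;             // events per window that inhibit tracking

    final int[][] counts = new int[GRID_H][GRID_W];
    long windowStart = 0;
    int eventsInWindow = 0;
    int trackerCol = -1, trackerRow = -1;            // tracker position in cell coordinates

    /** Feed one DVS event (pixel address, timestamp in microseconds). */
    void onEvent(int x, int y, long tUs) {
        if (tUs - windowStart > WINDOW_US) {         // window elapsed: evaluate and restart
            update();
            for (int[] row : counts) Arrays.fill(row, 0);
            windowStart = tUs;
            eventsInWindow = 0;
        }
        counts[y / CELL][x / CELL]++;
        eventsInWindow++;
    }

    /** Decide which cells fired and move (or inhibit) the tracker. */
    void update() {
        if (eventsInWindow > SACCADE_RATE) {         // global motion: inhibit the tracker
            trackerCol = trackerRow = -1;
            return;
        }
        int best = FIRE_THRESHOLD;
        for (int r = 0; r < GRID_H; r++)
            for (int c = 0; c < GRID_W; c++)
                if (counts[r][c] >= best) { best = counts[r][c]; trackerCol = c; trackerRow = r; }
    }

    public static void main(String[] args) {
        RgcArrayTracker tracker = new RgcArrayTracker();
        // Synthetic burst of events around pixel (40, 72) to exercise the tracker.
        for (int i = 0; i < 60; i++) tracker.onEvent(40 + (i % 3), 72 + (i % 2), i * 100L);
        tracker.onEvent(0, 0, 40_000);               // a late event closes the first window
        System.out.println("tracker cell: (" + tracker.trackerCol + ", " + tracker.trackerRow + ")");
    }
}
```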

    Neuromorphic Approach Sensitivity Cell Modeling and FPGA Implementation

    Neuromorphic engineering takes inspiration from biology to solve engineering problems using the organizing principles of biological neural computation. This field has demonstrated success in sensor-based applications (vision and audition) as well as in cognition and actuators. This paper focuses on mimicking an interesting functionality of the retina that is computed by one type of Retinal Ganglion Cell (RGC): the early detection of approaching (expanding) dark objects. This paper presents the software and hardware logic FPGA implementation of this approach sensitivity cell. It can be used in later cognition layers as an attention mechanism. The input of this hardware-modeled cell comes from an asynchronous spiking Dynamic Vision Sensor, which leads to an end-to-end event-based processing system. The software model has been developed in Java and runs with an average processing time of 370 ns per event on a NUC embedded computer. The output firing rate for an approaching object depends on the cell parameters that represent the number of input events needed to reach the firing threshold. For the hardware implementation on a Spartan6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz. Ministerio de Economía y Competitividad TEC2016-77785-P; Unión Europea FP7-ICT-60095.
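    A minimal software sketch of the kind of threshold-and-fire behaviour described above (not the authors' model): OFF (darkening) events excite the cell, ON events weakly inhibit it, and a spike is emitted each time the accumulated activity reaches the configurable firing threshold. The weights and threshold below are illustrative assumptions.

```java
/**
 * Minimal sketch, not the authors' model: an "approach sensitivity cell" that
 * integrates OFF (darkening) DVS events over its receptive field and emits a
 * spike each time the accumulated excitation reaches a firing threshold. ON
 * events are treated as weak inhibition, so lateral motion (which produces
 * balanced ON/OFF activity) drives the cell far less than an expanding dark
 * edge. The threshold and weights are illustrative.
 */
public class ApproachCell {
    private final int firingThreshold;   // input events needed to reach threshold
    private double membrane = 0.0;       // accumulated activity
    private int spikes = 0;

    public ApproachCell(int firingThreshold) { this.firingThreshold = firingThreshold; }

    /** Feed one DVS event; polarityOff = true for a darkening (OFF) event. */
    public void onEvent(boolean polarityOff) {
        membrane += polarityOff ? 1.0 : -0.5;   // OFF excites, ON weakly inhibits
        if (membrane >= firingThreshold) {
            spikes++;                            // output spike: object likely approaching
            membrane = 0.0;                      // reset after firing
        }
        if (membrane < 0) membrane = 0;          // no negative accumulation
    }

    public int spikeCount() { return spikes; }

    public static void main(String[] args) {
        ApproachCell cell = new ApproachCell(100);
        // Expanding dark object: mostly OFF events -> high firing rate (10 spikes here).
        for (int i = 0; i < 1000; i++) cell.onEvent(true);
        System.out.println("approach spikes: " + cell.spikeCount());
        // Lateral motion: balanced ON/OFF events -> far fewer spikes.
        ApproachCell control = new ApproachCell(100);
        for (int i = 0; i < 1000; i++) control.onEvent(i % 2 == 0);
        System.out.println("lateral spikes: " + control.spikeCount());
    }
}
```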

    Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics

    Taking inspiration from biology to solve engineering problems using the organizing principles of biological neural computation is the aim of the field of neuromorphic engineering. This field has demonstrated success in sensor-based applications (vision and audition) as well as in cognition and actuators. This paper focuses on mimicking the approach detection functionality of the retina that is computed by one type of Retinal Ganglion Cell (RGC), and on its application to robotics. These RGCs transmit action potentials when an expanding object is detected. In this work we compare the software and hardware logic FPGA implementations of this approach function, and measure the hardware latency when it is applied to robots as an attention/reaction mechanism. The visual input for these cells comes from an asynchronous event-driven Dynamic Vision Sensor, which leads to an end-to-end event-based processing system. The software model has been developed in Java and runs with an average processing time of 370 ns per event on a NUC embedded computer. The output firing rate for an approaching object depends on the cell parameters that represent the number of input events needed to reach the firing threshold. For the hardware implementation on a Spartan 6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz. The entropy of the response has been calculated to demonstrate that the system is not totally deterministic in response to approaching objects, because of several bio-inspired characteristics. It has been measured that a Summit XL mobile robot can react to an approaching object in 90 ms, which can be used as an attentional mechanism. This is faster than similar event-based approaches in robotics and equivalent to human reaction latencies to visual stimuli. Ministerio de Economía y Competitividad TEC2016-77785-P; Comisión Europea FP7-ICT-60095.
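    One plausible way to picture the entropy measurement mentioned above is to compute the Shannon entropy of the distribution of output spike counts over repeated presentations of the same approaching stimulus: zero entropy would indicate a fully deterministic response. The short sketch below illustrates this; the trial data are invented for illustration and are not the paper's measurements.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch of one possible entropy measurement: Shannon entropy of
 * the empirical distribution of output spike counts over repeated
 * presentations of the same approaching stimulus. Zero entropy would mean a
 * fully deterministic response; the trial data here are made up.
 */
public class ResponseEntropy {
    /** H = -sum p_i * log2(p_i) over the empirical distribution of outcomes. */
    static double shannonEntropy(int[] spikeCountsPerTrial) {
        Map<Integer, Integer> histogram = new HashMap<>();
        for (int c : spikeCountsPerTrial) histogram.merge(c, 1, Integer::sum);
        double h = 0.0, n = spikeCountsPerTrial.length;
        for (int count : histogram.values()) {
            double p = count / n;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        int[] trials = {9, 10, 10, 11, 10, 9, 12, 10};  // hypothetical spike counts per trial
        System.out.printf("response entropy: %.3f bits%n", shannonEntropy(trials));
    }
}
```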

    Low Latency Event-Based Filtering and Feature Extraction for Dynamic Vision Sensors in Real-Time FPGA Applications

    Dynamic Vision Sensor (DVS) pixels produce an asynchronous, variable-rate address-event output that represents brightness changes at the pixel. Since these sensors produce frame-free output, they are ideal for dynamic vision applications with real-time latency and power system constraints. Event-based filtering algorithms have been proposed to post-process the asynchronous event output to reduce sensor noise, extract low-level features, and track objects, among other tasks. These post-processing algorithms help to increase the performance and accuracy of further processing for tasks such as classification using spike-based learning (e.g., ConvNets), stereo vision, and visually servoed robots. This paper presents an FPGA-based library of these post-processing event-based algorithms with implementation details; specifically, background activity (noise) filtering, pixel masking, object motion detection, and object tracking. The latencies of these filters on the Field Programmable Gate Array (FPGA) platform are below 300 ns, with an average latency reduction of 188% (maximum of 570%) over the software versions running on a desktop PC CPU. This open-source event-based filter IP library for FPGA has been tested on two different platforms and scenarios using different synthesis and implementation tools for the Lattice and Xilinx vendors.
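    As a concrete reference point for the first filter in the list above, the sketch below shows a common software formulation of background activity filtering: an event is kept only if a neighbouring pixel produced an event within a configurable time window, otherwise it is discarded as noise. The paper's FPGA IP cores may differ in detail; the resolution and time window here are assumptions.

```java
import java.util.Arrays;

/**
 * Software sketch of a background-activity (noise) filter in its common
 * spatiotemporal-correlation form: an event is kept only if one of its 8
 * neighbours produced an event within the last dtUs microseconds; isolated
 * events are discarded as noise. The FPGA implementations in the paper may
 * differ in detail; resolution and dtUs here are assumptions.
 */
public class BackgroundActivityFilter {
    private final long[][] lastTimestamp;   // most recent event time per pixel (us)
    private final long dtUs;
    private final int width, height;

    public BackgroundActivityFilter(int width, int height, long dtUs) {
        this.width = width;
        this.height = height;
        this.dtUs = dtUs;
        this.lastTimestamp = new long[height][width];
        for (long[] row : lastTimestamp) Arrays.fill(row, Long.MIN_VALUE / 2);
    }

    /** Returns true if the event at (x, y, tUs) is supported by recent neighbour activity. */
    public boolean accept(int x, int y, long tUs) {
        boolean supported = false;
        for (int dy = -1; dy <= 1 && !supported; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue;           // ignore the pixel itself
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                if (tUs - lastTimestamp[ny][nx] <= dtUs) { supported = true; break; }
            }
        }
        lastTimestamp[y][x] = tUs;                          // record this event either way
        return supported;
    }

    public static void main(String[] args) {
        BackgroundActivityFilter filter = new BackgroundActivityFilter(128, 128, 10_000);
        System.out.println(filter.accept(64, 64, 1_000));   // isolated event -> false (noise)
        System.out.println(filter.accept(65, 64, 2_000));   // neighbour fired 1 ms ago -> true
    }
}
```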

    The role of direction-selective visual interneurons T4 and T5 in Drosophila orientation behavior

    In order to safely move through the environment, visually guided animals use several types of visual cues for orientation. Optic flow provides faithful information about ego-motion and can thus be used to maintain a straight course. Additionally, local motion cues or landmarks indicate potentially interesting targets or signal danger, triggering approach or avoidance, respectively. The visual system must reliably and quickly evaluate these cues and integrate this information in order to orchestrate behavior. The underlying neuronal computations for this remain largely inaccessible in higher organisms, such as humans, but can be studied experimentally in simpler model species. The fly Drosophila, for example, heavily relies on such visual cues during its impressive flight maneuvers. Additionally, it is genetically and physiologically accessible. Hence, it can be regarded as an ideal model organism for exploring neuronal computations during visual processing. In my PhD studies, I have designed and built several autonomous virtual reality setups to precisely measure the visual behavior of walking flies. The setups run in open-loop and in closed-loop configuration. In an open-loop experiment, the visual stimulus is clearly defined and does not depend on the behavioral response. Hence, it allows mapping of how specific features of simple visual stimuli are translated into behavioral output, which can guide the creation of computational models of visual processing. In closed-loop experiments, the behavioral response is fed back onto the visual stimulus, which permits characterization of the behavior under more realistic conditions and thus allows for testing of the predictive power of the computational models. In addition, Drosophila’s genetic toolbox provides various strategies for targeting and silencing specific neuron types, which helps identify which cells are needed for a specific behavior. We have focused on visual interneuron types T4 and T5 and assessed their role in visual orientation behavior. These neurons build up a retinotopic array and cover the whole visual field of the fly. They constitute major output elements of the medulla and have long been speculated to be involved in motion processing. This cumulative thesis consists of three published studies. In the first study, we silenced both T4 and T5 neurons together and found that such flies were completely blind to any kind of motion. In particular, these flies could no longer perform an optomotor response, which means that they lost their normally innate following response to motion of large-field moving patterns. This was an important finding as it ruled out the contribution of another system to motion vision-based behaviors. However, these flies were still able to fixate a black bar. We could show that this behavior is mediated by a T4/T5-independent flicker detection circuitry which exists in parallel to the motion system. In the second study, T4 and T5 neurons were characterized via two-photon imaging, revealing that these cells are directionally selective and have temporal and orientation tuning properties very similar to those of direction-selective neurons in the lobula plate. T4 and T5 cells responded in a contrast polarity-specific manner: T4 neurons responded selectively to ON edge motion while T5 neurons responded only to OFF edge motion. When we blocked T4 neurons, behavioral responses to moving ON edges were more impaired than those to moving OFF edges, and the opposite was true for the T5 block. Hence, these findings confirmed that the contrast polarity-specific visual motion pathways, which start at the level of L1 (ON) and L2 (OFF), are maintained within the medulla and that motion information is computed twice independently within each of these pathways. Finally, in the third study, we used the virtual reality setups to probe the performance of an artificial microcircuit. The system was equipped with a camera and a spherical fisheye lens. Images were processed by an array of Reichardt detectors whose outputs were integrated in a way similar to what is found in the lobula plate of flies. We provided the system with several rotating natural environments and found that the fly-inspired artificial system could accurately predict the axes of rotation.
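    The Reichardt detector array used in the third study can be illustrated with a single-correlator sketch: each detector low-pass filters ("delays") two neighbouring luminance signals, cross-multiplies each delayed signal with its neighbour's undelayed signal, and subtracts the two half-detector outputs to obtain a signed, direction-selective response. The filter constant and the test stimulus below are illustrative choices, not taken from the thesis.

```java
/**
 * Minimal sketch of a single Hassenstein-Reichardt correlator of the kind
 * used in the artificial microcircuit described above: two neighbouring
 * luminance inputs are low-pass ("delay") filtered and cross-multiplied, and
 * the difference of the two half-detectors gives a signed, direction-selective
 * output. The delay constant and the test stimulus are illustrative.
 */
public class ReichardtDetector {
    private final double tau;        // low-pass filter constant in (0, 1)
    private double delayedLeft = 0, delayedRight = 0;

    public ReichardtDetector(double tau) { this.tau = tau; }

    /** One time step: raw luminance at the left and right photoreceptor. */
    public double step(double left, double right) {
        // Correlate each input with the delayed (low-pass) copy of its neighbour.
        double output = delayedLeft * right - delayedRight * left;
        delayedLeft += tau * (left - delayedLeft);
        delayedRight += tau * (right - delayedRight);
        return output;               // > 0 for left-to-right motion, < 0 for the reverse
    }

    public static void main(String[] args) {
        ReichardtDetector detector = new ReichardtDetector(0.3);
        double sum = 0;
        // Sinusoidal grating drifting from left to right (right input lags by 90 degrees).
        for (int t = 0; t < 200; t++) {
            double phase = 0.2 * t;
            sum += detector.step(Math.sin(phase), Math.sin(phase - Math.PI / 2));
        }
        System.out.println("mean response (positive = rightward): " + sum / 200);
    }
}
```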

    FPGA design and implementation of a framework for optogenetic retinal prosthesis

    PhD Thesis. There are 285 million people worldwide with a visual impairment, 39 million of whom are completely blind and 246 million partially blind, known as low vision patients. In the UK and other developed countries of the West, retinal dystrophy diseases represent the primary cause of blindness, especially Age-Related Macular Degeneration (AMD), diabetic retinopathy and Retinitis Pigmentosa (RP). There are various treatments and aids that can help these visual disorders, such as low vision aids, gene therapy and retinal prostheses. Retinal prostheses consist of four main stages: the input stage (image acquisition), the high-level processing stage (image preparation and retinal encoding), the low-level processing stage (stimulation controller) and the output stage (image display on the opto-electronic micro-LED array). Up to now, a limited number of full hardware implementations have been available for retinal prostheses. In this work, a photonic stimulation controller was designed and implemented. The main role of this controller is to improve the framework's results in terms of power and time. It involves, first, an even power distributor, which distributes power evenly across image sub-frames to avoid a large power surge, especially with large arrays; this improves the overall power results of the framework. Second, a pulse encoder is used to select different modes of operation for the opto-electronic micro-LED array, which improves the overall timing of the framework. The implementation is completed using reconfigurable hardware devices, i.e. Field Programmable Gate Arrays (FPGAs), to achieve high performance at an economical price. Moreover, this FPGA-based framework for an optogenetic retinal prosthesis aims to control the opto-electronic micro-LED array in an efficient way, and to interface and link between the opto-electronic micro-LED array hardware architecture and the previously developed high-level retinal prosthesis image processing algorithms. University of Jorda
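    The even power distribution idea can be sketched in a few lines: the active pixels of a frame are split into sub-frames so that no sub-frame drives more than a fixed number of micro-LEDs at once, spreading the current draw over time. The array size and per-sub-frame LED budget below are assumptions for illustration, not the thesis parameters.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the thesis hardware) of the even power
 * distribution idea: instead of driving every requested micro-LED at once,
 * the active pixels of a frame are split into sub-frames so that no
 * sub-frame lights more than a fixed number of LEDs, spreading the current
 * draw over time. Array size and the per-sub-frame LED budget are assumptions.
 */
public class EvenPowerDistributor {
    /** Split the indices of all 'on' pixels into sub-frames of at most maxLedsPerSubframe. */
    static List<List<Integer>> distribute(boolean[] frame, int maxLedsPerSubframe) {
        List<List<Integer>> subframes = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        for (int i = 0; i < frame.length; i++) {
            if (!frame[i]) continue;                    // LED not requested in this frame
            current.add(i);
            if (current.size() == maxLedsPerSubframe) { // budget reached: close this sub-frame
                subframes.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) subframes.add(current);
        return subframes;
    }

    public static void main(String[] args) {
        boolean[] frame = new boolean[16 * 16];         // 16x16 micro-LED array (assumption)
        for (int i = 0; i < frame.length; i += 3) frame[i] = true;   // 86 LEDs requested
        List<List<Integer>> subframes = distribute(frame, 32);
        System.out.println("sub-frames: " + subframes.size());       // 3 sub-frames of <= 32 LEDs
    }
}
```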

    Low-power dynamic object detection and classification with freely moving event cameras

    We present the first purely event-based, energy-efficient approach for dynamic object detection and categorization with a freely moving event camera. Compared to traditional cameras, event-based object recognition systems are considerably behind in terms of accuracy and algorithmic maturity. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching that takes advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional object representation when hardware resources are too limited to implement PCA. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device, leading to a high performance-to-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance compared to state-of-the-art algorithms. Additionally, we verified the real-time FPGA performance of the proposed object detection method, trained with limited data as opposed to deep learning methods, under a closed-loop aerial vehicle flight mode. We also compare the proposed object categorization framework to pre-trained convolutional neural networks using transfer learning and highlight the drawbacks of using frame-based sensors under dynamic camera motion. Finally, we provide critical insights into the effect of the feature extraction method and the classification parameters on system performance, which aids in understanding the framework and adapting it to various low-power (less than a few watts) application scenarios.
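    A sketch of the descriptor pipeline described above, under stated assumptions: events are accumulated into a per-pixel activity surface, the neighbourhood patch around each new event is L2-normalised, and the patch is projected onto a low-dimensional basis. The basis below is a deterministic placeholder standing in for the principal components that the paper would obtain from PCA trained offline; the patch size, sensor resolution and descriptor dimensionality are illustrative.

```java
/**
 * Sketch of an event-based descriptor pipeline: events are accumulated into a
 * per-pixel activity surface, the neighbourhood patch around each new event
 * is L2-normalised, and the patch is projected onto a low-dimensional basis.
 * The basis here is a made-up placeholder; in the paper it would come from
 * PCA trained offline. Patch size, sensor size and dimensionality are
 * illustrative assumptions.
 */
public class EventDescriptor {
    static final int W = 128, H = 128;     // sensor resolution (assumption)
    static final int R = 3;                // neighbourhood radius -> 7x7 patch
    static final int PATCH = (2 * R + 1) * (2 * R + 1);
    static final int DIMS = 4;             // descriptor dimensionality after projection

    final float[][] activity = new float[H][W];          // accumulated local activity
    final float[][] basis = new float[DIMS][PATCH];      // placeholder "PCA" basis

    EventDescriptor() {
        for (int d = 0; d < DIMS; d++)                    // deterministic dummy basis vectors
            for (int i = 0; i < PATCH; i++)
                basis[d][i] = (float) Math.cos((d + 1) * i * 0.1);
    }

    /** Accumulate an event and return its low-dimensional descriptor. */
    float[] onEvent(int x, int y) {
        activity[y][x] += 1f;
        // Gather and L2-normalise the neighbourhood patch around the event.
        float[] patch = new float[PATCH];
        float norm = 0f;
        int k = 0;
        for (int dy = -R; dy <= R; dy++)
            for (int dx = -R; dx <= R; dx++, k++) {
                int px = Math.min(W - 1, Math.max(0, x + dx));
                int py = Math.min(H - 1, Math.max(0, y + dy));
                patch[k] = activity[py][px];
                norm += patch[k] * patch[k];
            }
        norm = (float) Math.sqrt(norm);
        if (norm > 0) for (int i = 0; i < PATCH; i++) patch[i] /= norm;
        // Project onto the (here: placeholder) principal components.
        float[] descriptor = new float[DIMS];
        for (int d = 0; d < DIMS; d++)
            for (int i = 0; i < PATCH; i++)
                descriptor[d] += basis[d][i] * patch[i];
        return descriptor;
    }

    public static void main(String[] args) {
        EventDescriptor extractor = new EventDescriptor();
        float[] desc = extractor.onEvent(64, 64);
        System.out.println("descriptor[0] = " + desc[0]);
    }
}
```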

    Topics in Adaptive Optics

    Advances in adaptive optics technology and applications move forward at a rapid pace. The basic idea of real-time wavefront compensation has been around since the mid-1970s. The first widely used application of adaptive optics was compensating atmospheric turbulence effects in astronomical imaging and laser beam propagation. While some topics have been researched and reported for years, even decades, new applications and advances in the supporting technologies occur almost daily. This book brings together 11 original chapters related to adaptive optics, written by an international group of invited authors. Topics include atmospheric turbulence characterization, astronomy with large telescopes, image post-processing, high-power laser distortion compensation, adaptive optics and the human eye, wavefront sensors, and deformable mirrors.