Neuromorphic Approach Sensitivity Cell Modeling and FPGA Implementation
Neuromorphic engineering takes inspiration from biology to
solve engineering problems using the organizing principles of biological
neural computation. This field has demonstrated success in sensor-based
applications (vision and audition) as well as in cognition and actuation.
This paper is focused on mimicking an interesting functionality of the
retina that is computed by one type of Retinal Ganglion Cell (RGC).
It is the early detection of approaching (expanding) dark objects. This
paper presents the software and hardware logic FPGA implementation
of this approach sensitivity cell. It can be used in later cognition layers as
an attention mechanism. The input of this hardware-modeled cell comes from
an asynchronous spiking Dynamic Vision Sensor (DVS), which leads to an
end-to-end event-based processing system. The software model has been
developed in Java and runs with an average processing time of 370 ns per
event on a NUC embedded computer. The output firing rate
for an approaching object depends on the cell parameters that represent
the needed number of input events to reach the firing threshold. For the
hardware implementation on a Spartan6 FPGA, the processing time is
reduced to 160 ns/event with the clock running at 50 MHz.
Ministerio de Economía y Competitividad TEC2016-77785-P; Unión Europea FP7-ICT-60095
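The accumulate-to-threshold behaviour described above can be sketched in event-driven form. The event tuple format, the OFF-polarity convention (0 = darkening), and the threshold value below are illustrative assumptions, not the paper's exact cell parameters.

```python
from dataclasses import dataclass

@dataclass
class ApproachCell:
    """Toy approach-sensitivity cell: integrates OFF (darkening) events,
    which an expanding dark object produces at its growing edge, and
    emits a spike each time the configurable threshold is reached."""
    threshold: int = 8   # number of input events needed to fire (assumption)
    potential: int = 0
    spikes: int = 0

    def on_event(self, x: int, y: int, polarity: int) -> bool:
        if polarity == 0:                 # OFF event: pixel got darker
            self.potential += 1
            if self.potential >= self.threshold:
                self.spikes += 1
                self.potential = 0        # reset after firing
                return True
        return False                      # ON events do not drive the cell

cell = ApproachCell(threshold=4)
events = [(10, 12, 0), (11, 12, 0), (10, 13, 0), (12, 12, 0), (5, 5, 1)]
fired = [cell.on_event(*ev) for ev in events]
```

A lower threshold makes the cell fire earlier for a given expansion rate, which is the parameter dependence of the output firing rate described in the abstract.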
Spike-based VITE control with Dynamic Vision Sensor applied to an Arm Robot.
Spike-based motor control is important both in robotics and for the
neuromorphic engineering community, since it bridges the gap between
sensing/processing devices and motor control without losing the spike
philosophy, which enhances response speed and reduces power consumption.
This paper presents an accurate neuro-inspired spike-based system composed
of a DVS retina, a visual processing stage that detects and tracks objects,
and an SVITE motor controller, where everything follows the spike-based
philosophy. The control system is a spike-based version of the
neuro-inspired open-loop VITE control algorithm, implemented on two FPGA
boards: the first runs the algorithm and the second drives the motors with
spikes. The robotic platform is a low-cost arm with four degrees of freedom.
Ministerio de Ciencia e Innovación TEC2009-10639-C04-02/01; Ministerio de Economía y Competitividad TEC2012-37868-C04-02/0
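The open-loop VITE dynamics mentioned above can be sketched in discrete time. This scalar, rate-based formulation with illustrative gains is an assumption for exposition; the spike-based encoding and the agonist/antagonist channel rectification of the actual SVITE implementation are omitted.

```python
def vite_step(P, V, T, go, alpha=0.3, dt=1.0):
    """One discrete step of the VITE (Vector Integration To Endpoint)
    dynamics: the difference vector V relaxes toward the target error
    T - P, and the GO signal gates its integration into the present
    position command P. Gains alpha and go are illustrative values."""
    V = V + dt * alpha * (-V + (T - P))    # difference-vector dynamics
    P = P + dt * go * V                    # GO-gated outflow to the command
    return P, V

P, V = 0.0, 0.0                            # start at rest
for _ in range(200):
    P, V = vite_step(P, V, T=1.0, go=0.5)  # reach toward target T = 1.0
```

Raising the GO signal speeds up the reach without changing its endpoint, which is the property that makes VITE attractive for open-loop motor control.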
FPGA-based Anomalous trajectory detection using SOFM
A system for automatically classifying the trajectory of a moving object in a scene as usual or suspicious is presented. The system uses an unsupervised neural network (Self-Organising Feature Map, SOFM) fully implemented on a reconfigurable hardware architecture (Field Programmable Gate Array, FPGA) to cluster trajectories acquired over a period, in order to detect novel ones. First-order motion information, including first-order moving-average smoothing, is generated from the 2D image coordinates (trajectories). The classification is dynamic and achieved in real time using a SOFM and a probabilistic model. Experimental results show a classification error below 15%, demonstrating the robustness of our approach over others in the literature and the speed-up obtained by using an off-the-shelf FPGA prototyping board instead of a conventional microprocessor.
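The SOFM clustering step described above can be sketched as follows. The map size, feature dimensionality, learning rate, and neighbourhood width are illustrative assumptions, and uniform random vectors stand in for the real trajectory features.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((5, 5, 4))   # 5x5 map of 4-D motion-feature prototypes

def train_step(x, weights, lr=0.1, sigma=1.0):
    # Best-matching unit (BMU): node whose prototype is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # A Gaussian neighbourhood pulls the BMU and its neighbours toward x.
    ii, jj = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, :, None] * (x - weights)
    return bi, bj

def anomaly_score(x, weights):
    # Quantisation error at the BMU; a large value marks a novel trajectory.
    return np.linalg.norm(weights - x, axis=2).min()

for x in rng.random((100, 4)):    # stand-in for "usual" trajectory features
    train_step(x, weights)
```

A trajectory whose BMU distance exceeds a learned threshold would be flagged as suspicious by the probabilistic model the abstract describes.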
Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics
Taking inspiration from biology to solve engineering problems using the organizing
principles of biological neural computation is the aim of the field of neuromorphic engineering.
This field has demonstrated success in sensor-based applications (vision and audition) as well as in
cognition and actuation. This paper is focused on mimicking the approach-detection functionality
of the retina that is computed by one type of Retinal Ganglion Cell (RGC) and its application to
robotics. These RGCs transmit action potentials when an expanding object is detected. In this work
we compare the software and hardware logic FPGA implementations of this approaching function
and the hardware latency when applied to robots, as an attention/reaction mechanism. The visual
input for these cells comes from an asynchronous event-driven Dynamic Vision
Sensor, which leads to an end-to-end event-based processing system. The
software model has been developed in Java and runs with an average processing
time of 370 ns per event on a NUC embedded computer.
The output firing rate for an approaching object depends on the cell parameters that represent the
needed number of input events to reach the firing threshold. For the hardware implementation, on a
Spartan 6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz.
The entropy has been calculated to demonstrate that the system is not totally deterministic in response
to approaching objects because of several bioinspired characteristics. It has been measured that a
Summit XL mobile robot can react to an approaching object in 90 ms, which can be used as an
attentional mechanism. This is faster than similar event-based approaches in
robotics and comparable to human reaction latencies to visual stimuli.
Ministerio de Economía y Competitividad TEC2016-77785-P; Comisión Europea FP7-ICT-60095
Neuro-inspired system for real-time vision sensor tilt correction
Neuromorphic engineering tries to mimic biological
information processing. Address-Event-Representation (AER)
is an asynchronous protocol for transferring the information of
spiking neuro-inspired systems. Currently, AER systems are able to sense
visual and auditory stimuli, to process information, to learn, to control
robots, etc. In this paper we present an AER-based layer able to correct,
in real time, the tilt of an AER vision sensor using a high-speed
algorithmic mapping layer. A co-design platform (the AER-Robot platform),
with a Xilinx Spartan 3 FPGA and an 8051 USB microcontroller, has been used
to implement the system, which was tested with the help of the USBAERmini2
board and the jAER software.
Junta de Andalucía P06-TIC-01417; Ministerio de Educación y Ciencia TEC2006-11730-C03-02; Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
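The algorithmic mapping layer above amounts to rotating each incoming event address around the sensor centre by the measured tilt angle. A minimal sketch follows; the sensor resolution and the drop-out-of-range policy are assumptions, not details taken from the paper.

```python
import math

WIDTH = HEIGHT = 128   # DVS128-style retina resolution (assumption)

def correct_tilt(x, y, theta_deg):
    # Rotate the event address (x, y) around the sensor centre by the
    # tilt angle and re-emit the corrected address.
    cx, cy = WIDTH / 2, HEIGHT / 2
    t = math.radians(theta_deg)
    xr = cx + (x - cx) * math.cos(t) - (y - cy) * math.sin(t)
    yr = cy + (x - cx) * math.sin(t) + (y - cy) * math.cos(t)
    xi, yi = round(xr), round(yr)
    if 0 <= xi < WIDTH and 0 <= yi < HEIGHT:
        return xi, yi
    return None            # events rotated off the array are dropped
```

Because each event is remapped independently, this operation fits naturally into a per-event pipeline stage on the FPGA, which is what allows the correction to run in real time.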
FPGA Implementation of An Event-driven Saliency-based Selective Attention Model
Artificial vision systems of autonomous agents face very difficult
challenges, as their vision sensors are required to transmit vast amounts of
information to the processing stages, and to process it in real time. A first
approach to reducing data transmission is to use event-based vision sensors,
whose pixels produce events only when there are changes in the input. However,
even for event-based vision, transmission and processing of visual data can be
quite onerous. Currently, these challenges are solved by using high-speed
communication links and powerful machine vision processing hardware. But if
resources are limited, instead of processing all the sensory information in
parallel, an effective strategy is to divide the visual field into several
small sub-regions, choose the region of highest saliency, process it, and shift
serially the focus of attention to regions of decreasing saliency. This
strategy, also commonly used by the visual systems of many animals, is
typically referred to as "selective attention". Here we present a digital architecture
implementing a saliency-based selective visual attention model for processing
asynchronous event-based sensory information received from a DVS. For ease of
prototyping, we use a standard digital design flow and map the architecture on
an FPGA. We describe the architecture's block diagram, highlighting its
efficient use of the available hardware resources, which we demonstrate
through experimental results obtained with a hardware setup in which the
FPGA interfaced with the DVS camera.
Comment: 5 pages, 5 figures
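The serial attention strategy described above can be sketched as a winner-take-all over an event-driven saliency map with inhibition of return. The grid size, sensor resolution, and use of raw event counts as saliency are illustrative assumptions, not the model's actual saliency computation.

```python
import numpy as np

GRID = 8                       # 8x8 sub-regions of the visual field (assumption)
SENSOR = 128                   # DVS resolution (assumption)
saliency = np.zeros((GRID, GRID))

def accumulate(events):
    # Each event votes for the sub-region it falls into; the event count
    # stands in for the model's saliency computation in this sketch.
    cell = SENSOR // GRID
    for x, y, _pol in events:
        saliency[y // cell, x // cell] += 1.0

def next_focus():
    # Winner-take-all selects the most salient sub-region, then suppresses
    # it ("inhibition of return") so attention shifts serially to the next.
    idx = np.unravel_index(np.argmax(saliency), saliency.shape)
    saliency[idx] = 0.0
    return idx

accumulate([(5, 5, 1)] * 3 + [(120, 120, 1)] * 5)
first, second = next_focus(), next_focus()
```

Processing only the currently attended sub-region is what lets a resource-limited system avoid handling the whole visual field in parallel.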
Pavlov's dog associative learning demonstrated on synaptic-like organic transistors
In this letter, we present an original demonstration of an associative
learning neural network inspired by the famous Pavlov's dogs experiment. A
single nanoparticle organic memory field effect transistor (NOMFET) is used to
implement each synapse. We show how the physical properties of this dynamic
memristive device can be used to perform low-power write operations for
learning, and to implement short-term association using temporal coding and
spike-timing-dependent plasticity (STDP) based learning. An electronic
circuit was built to validate the proposed learning scheme with packaged
devices, with good reproducibility despite the complex synaptic-like
dynamics of the NOMFET in the pulse regime.
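The Pavlovian association demonstrated above can be sketched with two synapses (food, the unconditioned stimulus, and bell, the conditioned one) feeding a single output neuron. The weights, threshold, and potentiation step are illustrative values, not measured NOMFET parameters, and the coincidence rule is a simplified stand-in for the letter's STDP scheme.

```python
w = {"food": 1.0, "bell": 0.2}   # initially only "food" can fire the neuron
THRESHOLD = 0.8                  # output neuron firing threshold (assumption)
LTP = 0.25                       # potentiation per coincidence (assumption)

def present(stimuli):
    """Present a set of stimuli; return True if the output neuron fires.
    An input spike coinciding with an output spike potentiates that
    synapse, a simplified coincidence version of the STDP rule."""
    fired = sum(w[s] for s in stimuli) >= THRESHOLD
    if fired:
        for s in stimuli:
            w[s] = min(1.0, w[s] + LTP)
    return fired

before = present(["bell"])           # bell alone: no response yet
for _ in range(3):
    present(["food", "bell"])        # pairing trials (conditioning)
after = present(["bell"])            # bell alone now fires the neuron
```

After a few pairings the bell synapse alone exceeds threshold, which is the short-term association behaviour the NOMFET circuit reproduces in hardware.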