Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics
Taking inspiration from biology to solve engineering problems using the organizing
principles of biological neural computation is the aim of the field of neuromorphic engineering.
This field has demonstrated success in sensor-based applications (vision and audition) as well as in
cognition and actuators. This paper is focused on mimicking the approaching detection functionality
of the retina that is computed by one type of Retinal Ganglion Cell (RGC) and its application to
robotics. These RGCs transmit action potentials when an expanding object is detected. In this work
we compare the software and hardware logic FPGA implementations of this approaching function
and the hardware latency when applied to robots, as an attention/reaction mechanism. The visual
input for these cells comes from an asynchronous event-driven Dynamic Vision Sensor, which leads
to an end-to-end event-based processing system. The software model has been developed in Java,
achieving an average processing time per event of 370 ns on a NUC embedded computer.
The output firing rate for an approaching object depends on the cell parameters that represent the
needed number of input events to reach the firing threshold. For the hardware implementation, on a
Spartan 6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz.
The entropy has been calculated to demonstrate that the system is not totally deterministic in response
to approaching objects because of several bioinspired characteristics. It has been measured that a
Summit XL mobile robot can react to an approaching object in 90 ms, which can be used as an
attentional mechanism. This is faster than similar event-based approaches in robotics and equivalent
to human reaction latencies to visual stimuli.
Funding: Ministerio de Economía y Competitividad TEC2016-77785-P; Comisión Europea FP7-ICT-60095.
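The cell's integrate-to-threshold behaviour described above can be sketched as a minimal event-driven model. This is an illustrative reconstruction, not the authors' implementation: the `DVSEvent` tuple, the leak rate, and the single-cell scope are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class DVSEvent:
    """Hypothetical DVS event: pixel address, polarity, timestamp."""
    x: int
    y: int
    polarity: int   # +1 for ON, -1 for OFF events
    t_us: int       # timestamp in microseconds

class ApproachCell:
    """Toy approach-sensitivity cell: integrates OFF events (dark
    expanding edges) toward a firing threshold, with a simple leak
    applied between events."""
    def __init__(self, threshold=100, leak_per_us=0.001):
        self.threshold = threshold
        self.leak_per_us = leak_per_us
        self.membrane = 0.0
        self.last_t = None

    def process(self, ev: DVSEvent) -> bool:
        # Leak proportional to the time elapsed since the last event.
        if self.last_t is not None:
            dt = ev.t_us - self.last_t
            self.membrane = max(0.0, self.membrane - dt * self.leak_per_us)
        self.last_t = ev.t_us
        # Only OFF events (dark expansion) excite the cell.
        if ev.polarity < 0:
            self.membrane += 1.0
        if self.membrane >= self.threshold:
            self.membrane = 0.0   # reset after a spike
            return True
        return False
```

As in the abstract, the output firing rate is governed by the number of input events needed to reach the threshold: a fast-expanding object delivers OFF events quickly, outrunning the leak and driving spikes.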
Neuromorphic Approach Sensitivity Cell Modeling and FPGA Implementation
Neuromorphic engineering takes inspiration from biology to
solve engineering problems using the organizing principles of biological
neural computation. This field has demonstrated success in sensor-based
applications (vision and audition) as well as in cognition and actuators.
This paper is focused on mimicking a functionality of the retina that is
computed by one type of Retinal Ganglion Cell (RGC): the early detection
of approaching (expanding) dark objects. This
paper presents the software and hardware logic FPGA implementation
of this approach sensitivity cell. It can be used in later cognition layers as
an attention mechanism. The input of this hardware modeled cell comes
from an asynchronous spiking Dynamic Vision Sensor, which leads to an
end-to-end event-based processing system. The software model has been
developed in Java, achieving an average processing time per
event of 370 ns on a NUC embedded computer. The output firing rate
for an approaching object depends on the cell parameters that represent
the needed number of input events to reach the firing threshold. For the
hardware implementation on a Spartan-6 FPGA, the processing time is
reduced to 160 ns/event with the clock running at 50 MHz.
Funding: Ministerio de Economía y Competitividad TEC2016-77785-P; Unión Europea FP7-ICT-60095.
The role of direction-selective visual interneurons T4 and T5 in Drosophila orientation behavior
In order to safely move through the environment, visually-guided animals
use several types of visual cues for orientation. Optic flow provides faithful
information about ego-motion and can thus be used to maintain a straight
course. Additionally, local motion cues or landmarks indicate potentially
interesting targets or signal danger, triggering approach or avoidance, respectively.
The visual system must reliably and quickly evaluate these cues
and integrate this information in order to orchestrate behavior. The underlying
neuronal computations for this remain largely inaccessible in higher
organisms, such as humans, but can be studied experimentally in simpler
model species. The fly Drosophila, for example, heavily relies on
such visual cues during its impressive flight maneuvers. Additionally, it is
genetically and physiologically accessible. Hence, it can be regarded as an
ideal model organism for exploring neuronal computations during visual
processing.
In my PhD studies, I have designed and built several autonomous virtual
reality setups to precisely measure visual behavior of walking flies. The
setups run in open-loop and in closed-loop configuration. In an open-loop
experiment, the visual stimulus is clearly defined and does not depend on
the behavioral response. Hence, it allows mapping of how specific features
of simple visual stimuli are translated into behavioral output, which can
guide the creation of computational models of visual processing. In closed-loop
experiments, the behavioral response is fed back onto the visual stimulus,
which permits characterization of the behavior under more realistic
conditions and, thus, allows for testing of the predictive power of the computational
models.
In addition, Drosophila’s genetic toolbox provides various strategies for
targeting and silencing specific neuron types, which helps identify which
cells are needed for a specific behavior. We have focused on visual interneuron
types T4 and T5 and assessed their role in visual orientation behavior.
These neurons build up a retinotopic array and cover the whole visual field
of the fly. They constitute major output elements from the medulla and have
long been speculated to be involved in motion processing.
This cumulative thesis consists of three published studies: In the first
study, we silenced both T4 and T5 neurons together and found that such flies
were completely blind to any kind of motion. In particular, these flies could
not perform an optomotor response anymore, which means that they lost
their normally innate following responses to motion of large-field moving
patterns. This was an important finding as it ruled out the contribution
of another system for motion vision-based behaviors. However, these flies
were still able to fixate a black bar. We could show that this behavior is
mediated by a T4/T5-independent flicker detection circuitry which exists in
parallel to the motion system.
In the second study, T4 and T5 neurons were characterized via two-photon
imaging, revealing that these cells are directionally selective and
have very similar temporal and orientation tuning properties to direction-selective
neurons in the lobula plate. T4 and T5 cells responded in a
contrast polarity-specific manner: T4 neurons responded selectively to ON
edge motion while T5 neurons responded only to OFF edge motion. When
we blocked T4 neurons, behavioral responses to moving ON edges were
more impaired than those to moving OFF edges and the opposite was true
for the T5 block. Hence, these findings confirmed that the contrast polarity-specific
visual motion pathways, which start at the level of L1 (ON) and L2
(OFF), are maintained within the medulla and that motion information is
computed twice independently within each of these pathways.
Finally, in the third study, we used the virtual reality setups to probe the
performance of an artificial microcircuit. The system was equipped with a
camera and spherical fisheye lens. Images were processed by an array of
Reichardt detectors whose outputs were integrated in a similar way to what
is found in the lobula plate of flies. We provided the system with several rotating
natural environments and found that the fly-inspired artificial system
could accurately predict the axes of rotation.
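The Reichardt detector array used in the third study follows the classic Hassenstein-Reichardt correlation scheme, which can be sketched for a pair of photoreceptor signals as follows (the one-sample delay and discrete sampling are simplifying assumptions, not the thesis's exact filter stages):

```python
def reichardt_response(signal_a, signal_b, delay=1):
    """Correlation-type (Hassenstein-Reichardt) motion detector:
    the delayed signal from one photoreceptor is multiplied by the
    undelayed signal of its neighbour, and the mirror-symmetric
    product is subtracted, yielding a direction-selective output."""
    out = []
    for t in range(delay, len(signal_a)):
        forward = signal_a[t - delay] * signal_b[t]   # motion A -> B
        backward = signal_b[t - delay] * signal_a[t]  # motion B -> A
        out.append(forward - backward)
    return out
```

An edge moving from receptor A toward receptor B produces a positive summed response, and the reverse direction produces a negative one; summing many such detectors over the visual field, weighted as in the lobula plate, yields the rotation estimate described above.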
Redundant neural vision systems: competing for collision recognition roles
The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which one should play the collision recognition role, or how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system that combines the LGMD and DSN subsystems. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance of the other types of neural networks playing the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
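The LGMD's preference for looming stimuli is commonly modelled as excitation from luminance change opposed by delayed lateral inhibition. The following is a minimal frame-based sketch in that spirit, not the specific network evolved in this study; the 3 × 3 inhibition kernel and the weights are illustrative assumptions.

```python
import numpy as np

def lgmd_response(frames, inhibition_weight=0.5):
    """Toy LGMD-style looming detector on a grayscale frame sequence.
    Excitation is the absolute luminance change between consecutive
    frames; each cell is inhibited by the time-delayed excitation of
    its 3x3 neighbourhood, so only rapidly expanding edges survive
    the subtraction. Returns the summed membrane potential per step."""
    responses = []
    prev_exc = None
    for i in range(1, len(frames)):
        exc = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        if prev_exc is None:
            inh = np.zeros_like(exc)
        else:
            # Delayed lateral inhibition: 3x3 average of previous excitation.
            padded = np.pad(prev_exc, 1, mode="edge")
            h, w = exc.shape
            inh = sum(padded[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)) / 9.0
        membrane = np.maximum(exc - inhibition_weight * inh, 0.0)
        responses.append(membrane.sum())
        prev_exc = exc
    return responses
```

For an approaching dark object the edge perimeter (and thus the net excitation) grows from frame to frame, so the response builds up; a collision alarm would be raised when it crosses a tuned threshold, which is the quantity the evolutionary search in the study adjusts.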
Computational models of object motion detectors accelerated using FPGA technology
The detection of moving objects is a trivial task when performed by vertebrate retinas, yet a complex computer vision task. This PhD research programme has made three key contributions, namely: 1) a multi-hierarchical spiking neural network (MHSNN) architecture for detecting horizontal and vertical movements; 2) a Hybrid Sensitive Motion Detector (HSMD) algorithm for detecting object motion; and 3) the Neuromorphic Hybrid Sensitive Motion Detector (NeuroHSMD), a real-time neuromorphic implementation of the HSMD algorithm.
The MHSNN is a customised four-layer Spiking Neural Network (SNN) architecture designed to reflect the basic connectivity and canonical behaviours found in the majority of vertebrate retinas (including human retinas). The architecture was trained using images from a custom dataset generated in laboratory settings. Simulation results revealed that each cell model is sensitive to vertical and horizontal movements, with a detection error of 6.75% contrasted against the teaching signals (expected output signals) used to train the MHSNN. The experimental evaluation of the methodology showed that the MHSNN was not scalable because of the overall number of neurons and synapses, which led to the development of the HSMD.
The HSMD algorithm enhanced an existing Dynamic Background Subtraction (DBS) algorithm using a customised 3-layer SNN, which stabilises the foreground information of moving objects in the scene and thereby improves object motion detection. The algorithm was compared against existing background subtraction approaches available in the Open Computer Vision (OpenCV) library, specifically on the 2012 Change Detection (CDnet2012) and the 2014 Change Detection (CDnet2014) benchmark datasets. The accuracy results show that the HSMD was ranked first overall and performed better than all the other benchmarked algorithms in four of the categories, across all eight test metrics. Furthermore, the HSMD is the first algorithm to use an SNN to enhance an existing dynamic background subtraction algorithm without substantial degradation of the frame rate, being capable of processing 720 × 480 images at 13.82 frames per second (fps) (CDnet2014) and 13.92 fps (CDnet2012) on a high-performance computer (96 cores and 756 GB of RAM). Although the HSMD achieves a good Percentage of Correct Classifications (PCC) on CDnet2012 and CDnet2014, the 3-layer customised SNN was identified as the bottleneck in terms of speed, which could be addressed using dedicated hardware.
The NeuroHSMD is thus an adaptation of the HSMD algorithm whereby the SNN component has been fully implemented on dedicated hardware [a Terasic DE10-Pro Field-Programmable Gate Array (FPGA) board]. The Open Computing Language (OpenCL) was used to simplify the FPGA design flow and allow code portability to other devices such as FPGAs and Graphics Processing Units (GPUs). The NeuroHSMD was also tested against the CDnet2012 and CDnet2014 datasets, achieving an acceleration of 82% over the HSMD algorithm and processing 720 × 480 images at 28.06 fps (CDnet2012) and 28.71 fps (CDnet2014).
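The dynamic background subtraction stage that the HSMD builds on can be illustrated with a simple running-average background model; the SNN stabilisation layer is omitted here, and the learning rate and threshold values are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def update_background(frame, background, alpha=0.05):
    """Running-average dynamic background model: the background
    slowly tracks the scene, so gradual illumination changes are
    absorbed while fast-moving objects are not."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(frame, background, thresh=30.0):
    """Pixels differing from the background by more than `thresh`
    are classified as foreground (moving objects)."""
    return np.abs(frame - background) > thresh
```

In the HSMD, the raw foreground mask produced by a stage like this is noisy at object boundaries; the role of the customised 3-layer SNN described above is to stabilise that mask before motion detection.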
Advances in Stereo Vision
Stereopsis is a vision process whose geometrical foundation has been known for a long time, ever since the experiments by Wheatstone in the 19th century. Nevertheless, its inner workings in biological organisms, as well as its emulation by computer systems, have proven elusive, and stereo vision remains a very active and challenging area of research. In this volume we have attempted to present a limited but relevant sample of the work being carried out in stereo vision, covering significant aspects from both the applied and the theoretical standpoints.
Interfacing of neuromorphic vision, auditory and olfactory sensors with digital neuromorphic circuits
The conventional Von Neumann architecture imposes strict constraints on the development of intelligent adaptive systems. The requirement of substantial computing power to process and analyse complex data makes such an approach impractical for implementing smart systems.
Neuromorphic engineering has produced promising results in applications such as electronic sensing, networking architectures and complex data processing. This interdisciplinary field takes inspiration from neurobiological architecture and emulates its characteristics using analogue Very Large Scale Integration (VLSI). The unconventional approach of exploiting the non-linear current characteristics of transistors has aided the development of low-power adaptive systems that can be used in intelligent systems. The neuromorphic approach is widely applied in electronic sensing, particularly in vision, auditory, tactile and olfactory sensors. While conventional sensors generate a huge amount of redundant output data, neuromorphic sensors implement the biological concept of spike-based output to generate sparse output data corresponding to a given sensing event. The operating principle applied in these sensors supports reduced power consumption with efficiency comparable to conventional sensors. Although neuromorphic sensors such as the Dynamic Vision Sensor (DVS), the Dynamic and Active-pixel Vision Sensor (DAVIS) and AEREAR2 are steadily expanding their scope of application in real-world systems, the lack of spike-based data processing algorithms and the complexity of interfacing methods restrict their application in low-cost standalone autonomous systems.
This research addresses the issue of interfacing between neuromorphic sensors and digital neuromorphic circuits. Current interfacing methods for these sensors depend on computers for output data processing. This approach restricts the portability of the sensors, limits their application in standalone systems and increases the overall cost of such systems. The proposed methodology simplifies the interfacing of these sensors with digital neuromorphic processors by utilizing AER communication protocols and neuromorphic hardware developed under the Convolution AER Vision Architecture for Real-time (CAVIAR) project. The proposed interface is simulated using a Java model that emulates a typical spike-based output of a neuromorphic sensor, in this case an olfactory sensor, together with functions that process this data based on supervised learning. The successful implementation of this simulation suggests that the methodology is a practical solution that can be implemented in hardware. The Java simulation is compared to a similar model developed in Nengo, a standard large-scale neural simulation tool.
The successful completion of this research contributes towards expanding the scope of application of neuromorphic sensors in standalone intelligent systems. The simple interfacing method proposed in this thesis promotes the portability of these sensors by eliminating the dependency on computers for output data processing. The inclusion of a neuromorphic Field Programmable Gate Array (FPGA) board allows reconfiguration and deployment of learning algorithms to implement adaptable systems. These low-power systems can be widely applied in biosecurity and environmental monitoring. With this thesis, we suggest directions for future research in neuromorphic standalone systems based on neuromorphic olfaction.
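The AER protocol central to this interfacing work encodes each spike as an address word on a shared bus. The bit layout below is illustrative, modelled on a 128 × 128 DVS, and is not the exact CAVIAR word format:

```python
def aer_encode(x, y, polarity, x_bits=7, y_bits=7):
    """Pack a sensor event into a single AER address word.
    Illustrative field widths for a 128x128 sensor:
    [ y (7 bits) | x (7 bits) | polarity (1 bit) ]."""
    assert 0 <= x < (1 << x_bits) and 0 <= y < (1 << y_bits)
    return (y << (x_bits + 1)) | (x << 1) | (polarity & 1)

def aer_decode(word, x_bits=7, y_bits=7):
    """Unpack an AER address word back into (x, y, polarity)."""
    polarity = word & 1
    x = (word >> 1) & ((1 << x_bits) - 1)
    y = (word >> (x_bits + 1)) & ((1 << y_bits) - 1)
    return x, y, polarity
```

Because only the addresses of active pixels are transmitted (with timing implicit in when the word appears on the bus), the output stays sparse, which is exactly the property that lets such sensors be interfaced to digital neuromorphic processors without a host computer.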
Adaptive map alignment in the superior colliculus of the barn owl: a neuromorphic implementation
Adaptation is one of the basic phenomena of biology, and adaptability is an important
feature of neural networks. The young barn owl can adapt its visual and auditory
integration to environmental change, such as prism wearing.
First, a mathematical model is introduced, based on related biological experiments.
The model explains the mechanism of sensory map realignment
through axonogenesis and synaptogenesis. Simulation results of this model are consistent
with the biological data.
Thereafter, to test the model's application in hardware, the model is implemented
on a robot. Visual and auditory signals are acquired by the sensors of the robot
and transferred back to a PC through Bluetooth. Results of the robot experiment are
presented, showing that the SC model allows the robot to adjust visual and auditory
integration to counteract the effects of a prism.
Finally, based on the model, a silicon Superior Colliculus is designed as a VLSI circuit
and fabricated. The performance of the fabricated chip shows that synaptogenesis
and axonogenesis can be emulated in a VLSI circuit. The circuit of the neural model provides
a new method to update signals and reconfigure the switch network (the chip has an
automatically reconfigurable network which is used to correct the disparity between signals).
The chip is also the first Superior Colliculus VLSI circuit to emulate sensory
map realignment.
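The prism-adaptation behaviour that the model reproduces can be caricatured as a slow, error-driven recalibration of the auditory map toward the visual map. This toy rule only illustrates the realignment idea; it is not the thesis's axonogenesis/synaptogenesis model, and the learning rate is an arbitrary assumption.

```python
def adapt_map_offset(visual_deg, auditory_deg, offset=0.0, rate=0.1):
    """One adaptation step: shift the auditory map's calibration
    offset (in degrees of azimuth) a fraction of the way toward
    eliminating the visual-auditory mismatch, such as the mismatch
    induced by a prism worn over the eyes."""
    error = visual_deg - (auditory_deg + offset)
    return offset + rate * error
```

With repeated audiovisual stimuli, the offset converges geometrically to the prism displacement, after which visual and auditory localisation agree again, which is the behaviour reported for both the robot and the fabricated chip.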