Egocentric Perception using a Biologically Inspired Software Retina Integrated with a Deep CNN
We presented the concept of a software retina, capable
of significant visual data reduction in combination with
scale and rotation invariance, for applications in egocentric
and robot vision at the first EPIC workshop in Amsterdam
[9]. Our method is based on the mammalian retino-cortical
transform: a mapping between a pseudo-randomly tessellated
retina model (used to sample an input image) and a
CNN. The aim of this first pilot study was to demonstrate a
functional retina-integrated CNN implementation, which
produced the following results: a network using the full
retino-cortical transform yielded an F1 score of 0.80 on a
test set during a 4-way classification task, while an identical
network not using the proposed method yielded an F1
score of 0.86 on the same task. On a 40K node retina the
method reduced the visual data by a factor of 7, the input data to the
CNN by 40% and the number of CNN training epochs by
36%. These results demonstrate the viability of our method
and hint at the potential of exploiting functional traits of
natural vision systems in CNNs. In addition to the above
study, we present further recent developments: the porting of
the retina to an Apple iPhone, an implementation in CUDA
C for NVIDIA GPU platforms, and extensions of the retina
model we have adopted.
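The retino-cortical transform above can be pictured with a short sketch: receptive-field centres are laid out on a log-polar, pseudo-randomly jittered tessellation, the input image is sampled at those centres, and the resulting "cortical" array is what the CNN consumes. This is a minimal illustration under assumed parameters; the node counts, jitter, and nearest-neighbour sampling are placeholders, not the authors' actual retina model (which pools over receptive fields).

import numpy as np

def make_retina_nodes(n_rings=64, nodes_per_ring=128, max_radius=1.0, seed=0):
    # Place receptive-field centres on log-spaced rings with angular jitter,
    # approximating a pseudo-randomly tessellated retina.
    rng = np.random.default_rng(seed)
    nodes = []
    for r in np.geomspace(0.02, max_radius, n_rings):
        angles = np.linspace(0.0, 2.0 * np.pi, nodes_per_ring, endpoint=False)
        angles += rng.uniform(-0.5, 0.5, nodes_per_ring) * (2.0 * np.pi / nodes_per_ring)
        nodes.append(np.stack([r * np.cos(angles), r * np.sin(angles)], axis=1))
    return np.concatenate(nodes)  # (n_rings * nodes_per_ring, 2), in [-1, 1]^2

def sample_retina(image, nodes, n_rings=64, nodes_per_ring=128):
    # Nearest-neighbour sampling at each node centre (a stand-in for the
    # model's receptive-field pooling); the output is a ring-by-angle
    # "cortical image" suitable as CNN input.
    h, w = image.shape
    xs = np.clip(((nodes[:, 0] + 1.0) / 2.0 * (w - 1)).round().astype(int), 0, w - 1)
    ys = np.clip(((nodes[:, 1] + 1.0) / 2.0 * (h - 1)).round().astype(int), 0, h - 1)
    return image[ys, xs].reshape(n_rings, nodes_per_ring)

Because the rings are log-spaced, a rotation of the input becomes a shift along the angular axis of the cortical image and a scaling becomes a shift along the radial axis, which is the mechanism behind the claimed scale and rotation invariance.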
PyCUDA and PyOpenCL: A Scripting-Based Approach to GPU Run-Time Code Generation
High-performance computing has recently seen a surge of interest in
heterogeneous systems, with an emphasis on modern Graphics Processing Units
(GPUs). These devices offer tremendous potential for performance and efficiency
in important large-scale applications of computational science. However,
exploiting this potential can be challenging, as one must adapt to the
specialized and rapidly evolving computing environment currently exhibited by
GPUs. One way of addressing this challenge is to embrace better techniques and
develop tools tailored to their needs. This article presents one simple
technique, GPU run-time code generation (RTCG), along with PyCUDA and PyOpenCL,
two open-source toolkits that support this technique.
In introducing PyCUDA and PyOpenCL, this article proposes the combination of
a dynamic, high-level scripting language with the massive performance of a GPU
as a compelling two-tiered computing platform, potentially offering significant
performance and productivity advantages over conventional single-tier, static
systems. The concept of RTCG is simple and easily implemented using existing,
robust infrastructure. Nonetheless it is powerful enough to support (and
encourage) the creation of custom application-specific tools by its users. The
premise of the paper is illustrated by a wide range of examples where the
technique has been applied with considerable success.
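To make the RTCG idea concrete, here is a small sketch using PyCUDA's public API (SourceModule, drv.In, drv.Out): the kernel source is an ordinary Python string, so a constant can be specialized into the code immediately before the CUDA compiler is invoked at run time. The kernel and scale factor are illustrative, not an example taken from the paper.

import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

scale = 2.5  # value baked into the kernel source at run time
source = """
__global__ void scale_vec(float *dst, const float *src)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    dst[i] = %(scale)ff * src[i];
}
""" % {"scale": scale}

mod = SourceModule(source)       # the CUDA toolchain runs here, at run time
scale_vec = mod.get_function("scale_vec")

src = np.random.randn(1024).astype(np.float32)
dst = np.empty_like(src)
scale_vec(drv.Out(dst), drv.In(src), block=(256, 1, 1), grid=(4, 1))
assert np.allclose(dst, scale * src)

Because the source string is assembled in Python, the same pattern extends naturally to templating loop bounds, data types, or whole algorithm variants, which is the application-specific tooling the abstract alludes to.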
Asynchronous spiking neurons, the natural key to exploit temporal sparsity
Inference of deep neural networks for stream-signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm that exploits all of them when executing already-trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of operations required for sparse input data by one to two orders of magnitude. Additionally, because the processing is fully asynchronous, this type of inference can run on fully distributed and scalable neuromorphic hardware platforms.
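As a rough illustration of the idea, the sketch below implements delta-based, event-driven updates for one fully connected layer with stateful neurons: only inputs that changed since the previous time step trigger work, so computation scales with temporal sparsity rather than layer size. The threshold, update rule, and class name are assumptions for illustration, not the paper's inference algorithm.

import numpy as np

class DeltaLayer:
    def __init__(self, weights, threshold=1e-3):
        self.W = weights                         # (n_out, n_in)
        self.threshold = threshold
        self.last_in = np.zeros(weights.shape[1])
        self.accum = np.zeros(weights.shape[0])  # persistent neuron state

    def step(self, x):
        delta = x - self.last_in
        active = np.abs(delta) > self.threshold  # events: changed inputs only
        if active.any():
            # update the output state using only the weight columns of
            # active inputs; sub-threshold changes are skipped, trading a
            # bounded error for fewer operations
            self.accum += self.W[:, active] @ delta[active]
            self.last_in[active] = x[active]
        return np.maximum(self.accum, 0.0)       # ReLU on the accumulated state

# usage: feed frames one at a time; unchanged inputs cost nothing
layer = DeltaLayer(np.random.randn(16, 64) * 0.1)
out = layer.step(np.random.randn(64))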
Bio-Inspired Stereo Vision Calibration for Dynamic Vision Sensors
Many advances have been made in the field of computer vision. Several recent research trends
have focused on mimicking human vision by using a stereo vision system. In multi-camera systems, a
calibration process is usually implemented to improve the accuracy of the results. However, these systems
generate a large amount of data to be processed; a powerful computer is therefore required and, in many
cases, the processing cannot be done in real time. Neuromorphic Engineering attempts to create bio-inspired
systems that mimic the information processing that takes place in the human brain. This information is
encoded using pulses (or spikes), and the resulting systems are much simpler (in computational operations
and resources), which allows them to perform similar tasks with much lower power consumption; these
processes can therefore be deployed on specialized hardware with real-time processing. In this work, a
bio-inspired stereo vision system is presented, together with a calibration mechanism that is implemented
and evaluated using several tests. The result is a novel calibration technique for a neuromorphic stereo
vision system, implemented on specialized hardware (an FPGA, or Field-Programmable Gate Array), which
achieves reduced latencies in hardware implementations for stand-alone systems and works in real time.
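The abstract does not detail the calibration mechanism itself. As a loose illustration of one common strategy in event-based pipelines, the sketch below precomputes a per-sensor rectification lookup table so that each incoming event address is remapped to its calibrated position with a single table read, cheap enough for a real-time FPGA datapath. The homography, resolution, and function names are assumptions, not the paper's method.

import numpy as np

def build_rectify_lut(H, width=128, height=128):
    # Precompute, for every pixel address, its rectified address under the
    # (assumed) calibration homography H.
    lut = np.zeros((height, width, 2), dtype=np.int32)
    for y in range(height):
        for x in range(width):
            p = H @ np.array([x, y, 1.0])
            lut[y, x] = np.clip(np.round(p[:2] / p[2]), 0, [width - 1, height - 1])
    return lut

def rectify_event(event, lut):
    # event = (x, y, t, polarity); remap its address through the table
    x, y, t, pol = event
    xr, yr = lut[y, x]
    return (int(xr), int(yr), t, pol)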
Stereo Matching in Address-Event-Representation (AER) Bio-Inspired Binocular Systems in a Field-Programmable Gate Array (FPGA)
In stereo-vision processing, the image-matching step is essential to the results, although it
involves a very high computational cost. Moreover, the more information is processed, the more time
is spent by the matching algorithm, and the less efficient it becomes. Spike-based processing is a relatively
new approach that implements processing methods by manipulating spikes one by one as
they are transmitted, as the human brain does. The mammalian nervous system can solve much more complex
problems, such as visual recognition, by manipulating neuron spikes.
for visual information processing based on the neuro-inspired address-event-representation (AER)
is currently achieving very high performance. The aim of this work was to study the viability of a
matching mechanism in stereo-vision systems, using AER codification and its implementation in
a field-programmable gate array (FPGA). Some studies have been done before in an AER system
with data monitored on a computer; however, this kind of mechanism had not previously been implemented
directly in hardware. To this end, an epipolar-geometry basis applied to AER systems was studied
and implemented, together with other restrictions, in order to achieve good results in a real-time scenario.
The results and conclusions are shown, and the viability of the implementation is proven.
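A sketch of the kind of epipolar restriction described: a candidate pair of AER events is accepted only if the right-sensor event lies close to the epipolar line induced by the left-sensor event and the two events arrive within a short time window. The fundamental matrix F, the tolerances, and the function name are illustrative placeholders, not the paper's calibrated values or hardware pipeline.

import numpy as np

def epipolar_match(ev_left, ev_right, F, max_dist=2.0, max_dt=1e-3):
    # ev_* = (x, y, t). Accept the pair only if the right event is within
    # max_dist pixels of the left event's epipolar line and within max_dt
    # seconds in time.
    xl = np.array([ev_left[0], ev_left[1], 1.0])
    xr = np.array([ev_right[0], ev_right[1], 1.0])
    line = F @ xl                                  # epipolar line in right view
    dist = abs(xr @ line) / np.hypot(line[0], line[1])
    return dist <= max_dist and abs(ev_left[2] - ev_right[2]) <= max_dt

Restricting candidates to an epipolar band plus a temporal coincidence window is what keeps per-event matching work small enough for a real-time FPGA implementation.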