Event-Based Angular Velocity Regression with Spiking Networks
Spiking Neural Networks (SNNs) are bio-inspired networks that process
information conveyed as temporal spikes rather than numeric values. A spiking
neuron of an SNN only produces a spike whenever a significant number of spikes
occur within a short period of time. Due to their spike-based computational
model, SNNs can process the output of event-based, asynchronous sensors without
any pre-processing and at extremely low power, unlike standard artificial
neural networks. This is possible thanks to specialized neuromorphic hardware
that implements the highly parallelizable concept of SNNs in silicon. Yet, SNNs
have not enjoyed the same rise in popularity as artificial neural networks.
This stems not only from their rather unconventional input format but also
from the challenges of training spiking networks. Despite their temporal
nature and recent algorithmic advances, they have been mostly evaluated on
classification problems. We propose, for the first time, a temporal regression
problem of numerical values given events from an event camera. We specifically
investigate the prediction of the 3-DOF angular velocity of a rotating event
camera with an SNN. The difficulty of this problem arises from the prediction
of angular velocities continuously in time directly from irregular,
asynchronous event-based input. Directly utilising the output of event cameras
without any pre-processing ensures that we inherit all the benefits that they
provide over conventional cameras: high temporal resolution, high dynamic
range, and no motion blur. To assess the performance of SNNs on
this task, we introduce a synthetic event camera dataset generated from
real-world panoramic images and show that we can successfully train an SNN to
perform angular velocity regression.
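The spiking behaviour described above (a neuron fires only when enough input spikes arrive within a short window) can be sketched with a minimal leaky integrate-and-fire model. The time constant, threshold, and weight below are illustrative assumptions, not values from the paper.

```python
import math

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays between input spikes and jumps on each arrival; an output spike
# is emitted only when enough input spikes cluster in time.
# tau (ms), threshold, and weight are illustrative assumptions.

def lif_response(spike_times, tau=20.0, threshold=1.0, weight=0.4):
    """Return output spike times (ms) for sorted input spike times (ms)."""
    v, t_prev, out = 0.0, 0.0, []
    for t in sorted(spike_times):
        v *= math.exp(-(t - t_prev) / tau)  # leak since the last input
        v += weight                          # integrate the input spike
        t_prev = t
        if v >= threshold:                   # fire and reset
            out.append(t)
            v = 0.0
    return out

# Three spikes in quick succession cross the threshold; sparse ones do not.
print(lif_response([1.0, 2.0, 3.0]))     # -> [3.0]
print(lif_response([1.0, 50.0, 100.0]))  # -> []
```

The same leak-integrate-fire loop is what neuromorphic hardware parallelizes across many neurons in silicon.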
Neural reactivations during sleep determine network credit assignment.
A fundamental goal of motor learning is to establish the neural patterns that produce a desired behavioral outcome. It remains unclear how and when the nervous system solves this 'credit assignment' problem. Using neuroprosthetic learning, in which we could control the causal relationship between neurons and behavior, we found that sleep-dependent processing was required for credit assignment and for the establishment of task-related functional connectivity reflecting the causal neuron-behavior relationship. Notably, we observed a strong link between the microstructure of sleep reactivations and credit assignment, with downscaling of non-causal activity. Decoupling of spiking from slow oscillations using optogenetic methods eliminated rescaling. Thus, our results suggest that coordinated firing during sleep is essential for establishing sparse activation patterns that reflect the causal neuron-behavior relationship.
Egomotion from event-based SNN optical flow
We present a method for computing egomotion using event cameras with a pre-trained optical flow spiking neural network (SNN). To address the aperture problem encountered in the sparse and noisy normal flow of the initial SNN layers, our method includes a sliding-window bin-based pooling layer that computes a fused full-flow estimate. To add robustness to noisy flow estimates, instead of computing the egomotion from vector averages, our method optimizes the intersection of constraints. The method also includes a RANSAC step to robustly deal with outlier flow estimates in the pooling layer. We validate our approach on both simulated and real scenes and compare our results favorably to state-of-the-art methods. However, our method may be sensitive to datasets and motion speeds different from those used for training, limiting its generalizability. This work received support from projects EBCON (PID2020-119244GBI00) and AUDEL (TED2021-131759A-I00) funded by MCIN/AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR"; the Consolidated Research Group RAIG (2021 SGR 00510) of the Departament de Recerca i Universitats de la Generalitat de Catalunya; and by an FI AGAUR PhD grant to Yi Tian.
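The RANSAC step mentioned above can be illustrated in a deliberately simplified setting: assuming (hypothetically) a pure-translation flow field, every inlier flow vector should be nearly identical, so a one-point candidate model suffices. This sketch is not the paper's intersection-of-constraints solver; it only shows the sample-score-refit pattern.

```python
import random

# Simplified RANSAC over optical-flow vectors: sample one vector as a
# candidate model, count how many vectors agree within a tolerance, keep
# the candidate with the most inliers, then refit by averaging them.
# The pure-translation assumption and all values are illustrative.

def ransac_flow(flows, tol=0.5, iters=100, seed=0):
    """Return a robust flow estimate from noisy per-pixel flow vectors."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        fx, fy = rng.choice(flows)          # 1-point candidate model
        inliers = [(x, y) for x, y in flows
                   if (x - fx) ** 2 + (y - fy) ** 2 <= tol ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    n = len(best_inliers)                   # refit: average the inliers
    return (sum(x for x, _ in best_inliers) / n,
            sum(y for _, y in best_inliers) / n)

# Eight consistent vectors near (1, 0) plus two gross outliers.
flows = [(1.0, 0.0)] * 8 + [(10.0, -5.0), (-7.0, 3.0)]
print(ransac_flow(flows))  # -> (1.0, 0.0)
```

The outliers never gather more inliers than the consistent majority, so they are excluded from the final average.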
Linking Cellular Mechanisms to Behavior: Entorhinal Persistent Spiking and Membrane Potential Oscillations May Underlie Path Integration, Grid Cell Firing, and Episodic Memory
The entorhinal cortex plays an important role in spatial memory and episodic memory functions. These functions may result from cellular mechanisms for integration of the afferent input to entorhinal cortex. This article reviews physiological data on persistent spiking and membrane potential oscillations in entorhinal cortex, then presents models showing how both these cellular mechanisms could contribute to properties observed during unit recording, including grid cell firing, and how they could underlie behavioural functions including path integration. The interaction of oscillations and persistent firing could contribute to encoding and retrieval of trajectories through space and time as a mechanism relevant to episodic memory. Silvio O. Conte Center (NIMH MH71702, MH60450); National Institute of Mental Health Research (MH60013, MH61492); National Science Foundation (SLC SBE 0354378); National Institute of Drug Abuse (DA16454)
An On-chip Spiking Neural Network for Estimation of the Head Pose of the iCub Robot
In this work, we present a neuromorphic architecture for head pose estimation and scene representation for the humanoid iCub robot. The spiking neuronal network is fully realized in Intel's neuromorphic research chip, Loihi, and precisely integrates the issued motor commands to estimate the iCub's head pose in a neuronal path-integration process. The neuromorphic vision system of the iCub is used to correct for drift in the pose estimation. Positions of objects in front of the robot are memorized using on-chip synaptic plasticity. We present real-time robotic experiments using 2 degrees of freedom (DoF) of the robot's head and show precise path integration, visual reset, and object position learning on-chip. We discuss the requirements for integrating the robotic system and neuromorphic hardware with current technologies.
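The core of the path-integration idea above is accumulating issued velocity commands over time into a pose estimate. In the paper this runs as a spiking network on Loihi; the sketch below uses plain arithmetic, and the command values and time step are assumptions for illustration.

```python
# Dead-reckoning sketch of path integration: sum issued velocity
# commands over time to track a 2-DoF head pose (pan, tilt) in radians.
# Without a correction signal (the paper's visual reset), any command
# noise would accumulate as drift.

def integrate_pose(commands, dt=0.01, pose=(0.0, 0.0)):
    """commands: iterable of (pan_velocity, tilt_velocity) in rad/s."""
    pan, tilt = pose
    for wp, wt in commands:
        pan += wp * dt    # integrate pan velocity over one time step
        tilt += wt * dt   # integrate tilt velocity over one time step
    return pan, tilt

cmds = [(0.5, -0.2)] * 100  # constant-velocity commands for 1 second
print(integrate_pose(cmds))
```

This also makes clear why the visual reset matters: pure integration has no mechanism to cancel accumulated error.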
MSS-DepthNet: Depth Prediction with Multi-Step Spiking Neural Network
Event cameras are considered to have great potential for computer vision and
robotics applications because of their high temporal resolution and low power
consumption characteristics. However, the event stream output from event
cameras has asynchronous, sparse characteristics that existing computer vision
algorithms cannot handle. Spiking neural networks are a novel event-based
computational paradigm considered well suited to processing event camera
tasks. However, direct training of deep SNNs suffers from degradation
problems. This work addresses these problems with a spiking neural network
architecture that combines a novel residual block with multi-dimensional
attention modules, focusing on the problem of depth prediction. In addition,
a novel event stream representation method is proposed explicitly for SNNs.
The model outperforms previous ANNs of the same size on the MVSEC dataset and
shows great computational efficiency.
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
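The event encoding described above (each event carries a timestamp, pixel location, and brightness-change sign) is often converted into a dense grid before conventional algorithms can process it. A minimal accumulation sketch, with a hypothetical small sensor and hand-made events:

```python
# Each event is (t, x, y, p): timestamp in seconds, pixel location, and
# polarity (+1 brightness increase, -1 decrease). A common pre-processing
# step sums polarities per pixel over a time window to form a dense frame.
# Sensor size and the event list are hypothetical.

def accumulate(events, width, height, t_start, t_end):
    frame = [[0] * width for _ in range(height)]
    for t, x, y, p in events:
        if t_start <= t < t_end:   # keep only events inside the window
            frame[y][x] += p
    return frame

events = [(0.001, 1, 2, +1), (0.002, 1, 2, +1), (0.003, 3, 0, -1),
          (0.020, 0, 0, +1)]       # last event falls outside the window
frame = accumulate(events, width=4, height=3, t_start=0.0, t_end=0.010)
print(frame)  # -> [[0, 0, 0, -1], [0, 0, 0, 0], [0, 2, 0, 0]]
```

Learning-based methods often use richer representations (e.g. time-surface or voxel-grid variants that also keep timestamps), but the per-pixel accumulation above is the simplest instance of the idea.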
Event-driven Vision and Control for UAVs on a Neuromorphic Chip
Event-based vision sensors achieve up to three orders of magnitude better speed-versus-power-consumption trade-off in high-speed control of UAVs than conventional image sensors. Event-based cameras produce a sparse stream of events that can be processed more efficiently and with lower latency than images, enabling ultra-fast vision-driven control. Here, we explore how an event-based vision algorithm can be implemented as a spiking neuronal network on a neuromorphic chip and used in a drone controller. We show how seamless integration of event-based perception on chip leads to even faster control rates and lower latency. In addition, we demonstrate how online adaptation of the SNN controller can be realised using on-chip learning. Our spiking neuronal network on chip is the first example of a neuromorphic vision-based controller on chip solving a high-speed UAV control task. The excellent scalability of processing in neuromorphic hardware opens the possibility of solving more challenging visual tasks in the future and of integrating visual perception into fast control loops.