37,429 research outputs found
Prediction of Acoustic Residual Inhibition of Tinnitus using a Brain-Inspired Spiking Neural Network Model
Auditory Residual Inhibition (ARI) is a temporary suppression of tinnitus that occurs in some people following the presentation of masking sounds. Differences in neural response to ARI stimuli may enable classification of tinnitus and a tailored approach to intervention in the future. In an exploratory study, we investigated the use of a brain-inspired artificial neural network to examine the effects of ARI on electroencephalographic function, as well as the predictive ability of the model. Ten tinnitus patients underwent two auditory stimulation conditions (constant and amplitude-modulated broadband noise) at two time points and were then characterised as responders or non-responders, based on whether they experienced ARI or not. Using a spiking neural network model, we evaluated concurrent neural patterns generated across space and time from features of electroencephalographic data, capturing the neural dynamic changes before and after stimulation. Results indicated that the model may be used to predict the effect of auditory stimulation on tinnitus on an individual basis. This approach may aid in the development of predictive models for treatment selection.
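Spiking models like the one described first convert continuous EEG features into spike trains. A minimal sketch of one common front end for this, threshold-based temporal-contrast encoding, on a synthetic signal (the study's real data and parameters are not available here; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for one EEG channel: a noisy 10 Hz oscillation
# sampled at 256 Hz (illustrative only, not the study's data).
t = np.arange(0, 2, 1 / 256)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

def threshold_encode(signal, thr):
    """Temporal-contrast spike encoding: emit a +1/-1 spike whenever the
    sample-to-sample change exceeds a threshold. A common front end for
    brain-inspired SNNs; the threshold here is illustrative."""
    diff = np.diff(signal)
    spikes = np.zeros_like(diff, dtype=int)
    spikes[diff > thr] = 1
    spikes[diff < -thr] = -1
    return spikes

spikes = threshold_encode(eeg, thr=0.15)
rate = np.abs(spikes).mean()
print(f"{rate:.2f} of samples produced a spike")
```

The resulting sparse spike train, rather than the raw waveform, is what drives the spiking network's spatio-temporal dynamics.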
Photonic Delay Systems as Machine Learning Implementations
Nonlinear photonic delay systems present interesting implementation platforms
for machine learning models. They can be extremely fast, offer great degrees of
parallelism and potentially consume far less power than digital processors. So
far they have been successfully employed for signal processing using the
Reservoir Computing paradigm. In this paper we show that their range of
applicability can be greatly extended if we use gradient descent with
backpropagation through time on a model of the system to optimize the input
encoding of such systems. We perform physical experiments that demonstrate that
the obtained input encodings work well in reality, and we show that optimized
systems perform significantly better than the common Reservoir Computing
approach. The results presented here demonstrate that common gradient descent
techniques from machine learning may well be applicable to physical
neuro-inspired analog computers.
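A minimal sketch of the idea on a toy simulated delay system: virtual nodes with a shared feedback loop, a fixed random readout, and a trainable input mask whose gradient is accumulated through time and used for plain gradient descent. All names, parameters, and the task are illustrative; the paper's physical system and its model are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20       # virtual nodes in the delay line (illustrative)
T = 200      # input timesteps
alpha = 0.8  # feedback strength of the simulated delay loop

u = rng.uniform(-1, 1, T)             # input signal
target = 0.5 * np.roll(u, 1)          # toy task: scaled one-step memory
w = rng.normal(0, 1 / np.sqrt(N), N)  # fixed linear readout
mask = rng.normal(0, 0.1, N)          # trainable input encoding

def run(mask):
    """Simulate the delay system; return outputs and d(state)/d(mask)
    accumulated through time (the backprop-through-time sensitivities)."""
    s = np.zeros(N)
    g = np.zeros(N)  # ds[i]/dmask[i], carried through time
    ys, gs = [], []
    for t in range(T):
        s = np.tanh(alpha * s + mask * u[t])
        g = (1 - s**2) * (alpha * g + u[t])
        ys.append(w @ s)
        gs.append(g.copy())
    return np.array(ys), np.array(gs)

def loss_and_grad(mask):
    ys, gs = run(mask)
    err = ys - target
    loss = np.mean(err**2)
    grad = (2 / T) * (err[:, None] * gs * w).sum(axis=0)
    return loss, grad

loss0, _ = loss_and_grad(mask)
for _ in range(200):  # plain gradient descent on the input encoding
    loss, grad = loss_and_grad(mask)
    mask -= 0.02 * grad
print(f"loss before: {loss0:.4f}  after: {loss:.4f}")
```

The point of the paper's approach is that the optimized mask, found offline on the model, can then be applied to the physical system's input.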
On the Resilience of RTL NN Accelerators: Fault Characterization and Mitigation
Machine Learning (ML) is making a strong resurgence in tune with the massive
generation of unstructured data which in turn requires massive computational
resources. Due to the inherently compute- and power-intensive structure of
Neural Networks (NNs), hardware accelerators emerge as a promising solution.
However, with technology node scaling below 10nm, hardware accelerators become
more susceptible to faults, which in turn can impact the NN accuracy. In this
paper, we study the resilience aspects of the Register-Transfer Level (RTL) model
of NN accelerators, in particular, fault characterization and mitigation. By
following a High-Level Synthesis (HLS) approach, first, we characterize the
vulnerability of various components of RTL NN. We observed that the severity of
faults depends on both i) application-level specifications, i.e., NN data
(inputs, weights, or intermediate), NN layers, and NN activation functions, and
ii) architectural-level specifications, i.e., data representation model and the
parallelism degree of the underlying accelerator. Second, motivated by
characterization results, we present a low-overhead fault mitigation technique
that can efficiently correct bit flips, performing 47.3% better than
state-of-the-art methods.
Comment: 8 pages, 6 figures
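A minimal sketch of the kind of single-bit-flip fault injection the abstract describes, on a toy weight matrix, together with a range-clamping mitigation (a common low-overhead scheme, not necessarily the paper's exact technique; the model and data are synthetic stand-ins for an RTL accelerator):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny stand-in workload for an NN accelerator datapath: a fixed random
# linear classifier on synthetic data (illustrative only).
W = rng.normal(0, 0.5, (8, 2)).astype(np.float32)
X = rng.normal(0, 1, (500, 8)).astype(np.float32)
y = (X @ W).argmax(axis=1)  # labels taken from the fault-free model

def accuracy(Wf):
    return float(((X @ Wf).argmax(axis=1) == y).mean())

def flip_bit(w, bit):
    """Flip one bit of a float32 value (bit 31 = sign, bits 30-23 =
    exponent, bits 22-0 = mantissa)."""
    as_int = np.float32(w).view(np.uint32)
    return (as_int ^ np.uint32(1 << bit)).view(np.float32)

# Characterization: inject a single bit flip into one weight and compare
# a low mantissa bit against the top exponent bit.
results = {}
for bit in (2, 30):
    Wf = W.copy()
    Wf[0, 0] = flip_bit(Wf[0, 0], bit)
    results[bit] = accuracy(Wf)

# Mitigation sketch: squash non-finite values and clamp weights to the
# fault-free dynamic range, which mostly neutralizes exponent-bit flips.
bound = np.abs(W).max()
Wf = W.copy()
Wf[0, 0] = flip_bit(Wf[0, 0], 30)
acc_fixed = accuracy(np.clip(np.nan_to_num(Wf), -bound, bound))
print(results, acc_fixed)
```

This reproduces the abstract's qualitative finding that fault severity depends on the data representation: a low mantissa flip is benign, while an exponent flip corrupts the output unless clamped.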