465 research outputs found
Variable-rate data sampling for low-power microsystems using modified Adams methods
A method for variable-rate data sampling is proposed for the purpose of low-power data acquisition in a small-footprint microsystem. The procedure enables energy saving by utilizing dynamic power management techniques and is based on the Adams-Bashforth and Adams-Moulton multistep predictor-corrector methods for ordinary differential equations. Newton-Gregory backward difference interpolation formulae and past-value substitution are used to facilitate sample rate changes. It is necessary to store only 2m+1 equispaced past values of t and the corresponding values of y, where y=g(t), and m is the number of steps in the Adams methods. For the purposes of demonstrating the technique, fourth-order methods are used, but higher orders can be used to improve accuracy if required.
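The predictor-corrector pair at the core of the method can be sketched as follows. This is a minimal fixed-step illustration of the fourth-order Adams-Bashforth predictor and Adams-Moulton corrector (bootstrapped with Runge-Kutta), not the paper's variable-rate procedure: the Newton-Gregory interpolation used to change the sample rate is omitted, and all names are illustrative.

```python
import math

def rk4_step(f, t, y, h):
    # Classical Runge-Kutta step, used only to bootstrap the multistep method.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def abm4(f, t0, y0, h, n_steps):
    """Fourth-order Adams-Bashforth predictor / Adams-Moulton corrector
    (PECE scheme) for y' = f(t, y). Assumes n_steps >= 4."""
    ts = [t0 + i * h for i in range(4)]
    ys = [y0]
    for i in range(3):                       # bootstrap the first three points
        ys.append(rk4_step(f, ts[i], ys[i], h))
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for _ in range(n_steps - 3):
        t_next = ts[-1] + h
        # Adams-Bashforth 4-step predictor
        yp = ys[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2]
                                + 37 * fs[-3] - 9 * fs[-4])
        # Adams-Moulton 3-step corrector, one fixed-point iteration
        yc = ys[-1] + h / 24 * (9 * f(t_next, yp) + 19 * fs[-1]
                                - 5 * fs[-2] + fs[-3])
        ts.append(t_next)
        ys.append(yc)
        fs.append(f(t_next, yc))
    return ts, ys
```

Note that only the last few (t, f) pairs are ever referenced, which is what makes the 2m+1-value storage bound in the abstract possible on a memory-constrained microsystem.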
SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on
neuromorphic chips with high energy efficiency by introducing neural dynamics
and spike properties. As the emerging spiking deep learning paradigm attracts
increasing interest, traditional programming frameworks cannot meet the demands
of the automatic differentiation, parallel computation acceleration, and high
integration of processing neuromorphic datasets and deployment. In this work,
we present the SpikingJelly framework to address the aforementioned dilemma. We
contribute a full-stack toolkit for pre-processing neuromorphic datasets,
building deep SNNs, optimizing their parameters, and deploying SNNs on
neuromorphic chips. Compared to existing methods, the training of deep SNNs can
be accelerated, and the superior extensibility and flexibility of
SpikingJelly enable users to accelerate custom models at low costs through
multilevel inheritance and semiautomatic code generation. SpikingJelly paves
the way for synthesizing truly energy-efficient SNN-based machine intelligence
systems, which will enrich the ecology of neuromorphic computing. (Comment: Accepted in Science Advances, https://www.science.org/doi/10.1126/sciadv.adi1480)
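The neural dynamics and spike properties such frameworks simulate can be illustrated with a discrete-time leaky integrate-and-fire (LIF) neuron in plain Python. This is a generic sketch of the update rule commonly used in spiking deep learning, not SpikingJelly's actual API; the parameter names and defaults are assumptions.

```python
def lif_simulate(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Discrete-time LIF neuron: the membrane potential leaks toward v_reset,
    integrates the input each step, and emits a binary spike on crossing
    v_threshold, after which it is hard-reset."""
    v = v_reset
    spikes = []
    for x in inputs:
        v = v + (x - (v - v_reset)) / tau   # leaky integration step
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset                     # hard reset after a spike
        else:
            spikes.append(0)
    return spikes
```

A constant super-threshold input produces a regular spike train, while a sub-threshold input lets the potential settle below threshold and never spike; deep-learning frameworks for SNNs wrap exactly this kind of stateful update in differentiable, batched layers.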
Enhancement of automatic speech recognition by deep neural networks
The performance of speech recognition systems based on deep learning has improved dramatically in recent years through the use of different deep architectures and learning methodologies. A popular way to increase the amount of training data is Data Augmentation (DA), and research shows that DA is effective in teaching neural network models to make invariant predictions. Furthermore, EM approaches have attracted machine-learning researchers' attention as a means of improving classifier performance. In this study, we present a deep neural network speech recognition system that employs both EM and DA approaches to improve prediction accuracy. We first describe an existing approach based on vocal tract length perturbation, and then propose feature perturbation as an alternative Data Augmentation approach for amending the training data sets. This is followed by an integration of the posterior probabilities obtained from several DNN acoustic models trained on diverse datasets. The study's findings show that the proposed system's recognition performance has improved.
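The two ingredients described above, enlarging the training set by perturbing features and integrating the posteriors of several models, can be sketched as follows. Function names, the noise distribution, and parameters are illustrative assumptions, not the paper's exact scheme.

```python
import random

def perturb_features(features, scale=0.05, copies=2, seed=0):
    """Hypothetical feature-perturbation augmentation: each copy adds small
    uniform noise to every coefficient of every feature vector, so the
    augmented set contains the originals plus `copies` noisy variants."""
    rng = random.Random(seed)
    augmented = [list(vec) for vec in features]
    for _ in range(copies):
        for vec in features:
            augmented.append([x + rng.uniform(-scale, scale) for x in vec])
    return augmented

def average_posteriors(posterior_lists):
    """Integrate per-frame class posteriors from several acoustic models by
    simple averaging; each element of posterior_lists is one model's output,
    a list of per-frame probability vectors."""
    n_models = len(posterior_lists)
    n_frames = len(posterior_lists[0])
    return [
        [sum(model[i][k] for model in posterior_lists) / n_models
         for k in range(len(posterior_lists[0][i]))]
        for i in range(n_frames)
    ]
```

Averaging is only one way to integrate posteriors; weighted combinations are a common alternative when the component models differ in reliability.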
Bridging the Gap Between Neural Networks and Neuromorphic Hardware with A Neural Network Compiler
Different from developing neural networks (NNs) for general-purpose
processors, the development of NN chips usually faces some
hardware-specific restrictions, such as limited precision of network signals
and parameters, constrained computation scale, and limited types of non-linear
functions.
This paper proposes a general methodology to address the challenges. We
decouple the NN applications from the target hardware by introducing a compiler
that can transform an existing trained, unrestricted NN into an equivalent
network that meets the given hardware's constraints. We propose multiple
techniques to make the transformation adaptable to different kinds of NN chips,
and reliable under strict hardware constraints.
We have built such a software tool that supports both spiking neural networks
(SNNs) and traditional artificial neural networks (ANNs). We have demonstrated
its effectiveness with a fabricated neuromorphic chip and a
processing-in-memory (PIM) design. Tests show that the inference error caused
by this solution is insignificant and the transformation time is much shorter
than the retraining time. Also, we have studied the parameter-sensitivity
evaluations to explore the tradeoffs between network error and resource
utilization for different transformation strategies, which could provide
insights for co-design optimization of neuromorphic hardware and software. (Comment: Accepted by ASPLOS 201)
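One concrete instance of such a transformation is mapping a trained network's real-valued weights onto the limited-precision grid a target chip supports. The sketch below is a generic symmetric fixed-point quantizer under assumed names and an assumed 8-bit default, not the paper's compiler; it only illustrates why the resulting inference error tends to be small.

```python
def quantize(weights, bits=8):
    """Round real-valued weights to the nearest point of a symmetric
    b-bit fixed-point grid, as a compiler targeting limited-precision
    hardware parameters might. Rounding error per weight is at most
    half the grid spacing."""
    w_max = max(abs(w) for w in weights)
    if w_max == 0.0:
        return list(weights)
    levels = 2 ** (bits - 1) - 1          # e.g. 127 magnitude levels at 8 bits
    scale = w_max / levels                # grid spacing
    return [round(w / scale) * scale for w in weights]

def dot(xs, ws):
    """Inference primitive whose output error we can bound: the error of a
    dot product is at most sum(|x_i|) * scale / 2."""
    return sum(x * w for x, w in zip(xs, ws))
```

Because the per-weight error is bounded by half the grid spacing, the transformation's effect on an inference output is bounded and typically negligible, which is consistent with the small error and short transformation time reported above.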
Hybrid Wavelet-Support Vector Classifiers
The Support Vector Machine (SVM) represents a new and very promising technique for machine learning tasks involving classification, regression or novelty detection. Improvements in its generalization ability can be achieved by incorporating prior knowledge of the task at hand. We propose a new hybrid algorithm consisting of signal-adapted wavelet decompositions and SVMs for waveform classification. The adaptation of the wavelet decompositions is tailor-made for SVMs with radial basis functions as kernels. It allows the optimization of the representation of the data before training the SVM and does not suffer from computationally expensive validation techniques. We assess the performance of our algorithm against the background of current concerns in medical diagnostics, namely the classification of endocardial electrograms and the detection of otoacoustic emissions. Here the performance of SVMs can be significantly improved by our adapted preprocessing step.
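A minimal sketch of the pipeline's two ingredients, assuming a plain (unadapted) Haar wavelet decomposition feeding an RBF kernel; the paper's signal-adapted decomposition is not shown, and all names are illustrative.

```python
import math

def haar_level(signal):
    """One level of the orthonormal Haar transform: pairwise averages
    (approximation) and differences (detail), each scaled by 1/sqrt(2).
    Assumes an even-length input."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

def haar_features(signal, levels=2):
    """Multilevel decomposition: collect detail coefficients from each level
    plus the final approximation, as a feature vector for the SVM."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_level(approx)
        feats.extend(detail)
    feats.extend(approx)
    return feats

def rbf_kernel(u, v, gamma=1.0):
    """Gaussian radial basis function kernel used by the downstream SVM."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))
```

An instructive detail: because an orthonormal transform preserves Euclidean distances, an unadapted full decomposition leaves the RBF kernel values unchanged. The gain reported above must therefore come from adapting, weighting, or selecting coefficients, precisely the task-specific preprocessing step this sketch omits.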