90 research outputs found

    Overcoming device unreliability with continuous learning in a population coding based computing system

    The brain, which uses redundancy and continuous learning to overcome the unreliability of its components, provides a promising path to building computing systems that are robust to the unreliability of their constituent nanodevices. In this work, we illustrate this path with a computing system based on population coding, in which magnetic tunnel junctions implement both neurons and synaptic weights. We show that equipping such a system with continuous learning enables it to recover from the loss of neurons and makes it possible to use unreliable synaptic weights (i.e. low energy barrier magnetic memories). This creates a tradeoff between power consumption and precision, because low energy barrier memories consume less energy than high barrier ones but are less reliable. For a given precision, there is an optimal number of neurons and an optimal energy barrier for the weights that lead to minimum power consumption.
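
    The recovery mechanism lends itself to a compact illustration. Below is a minimal software sketch, not a model of the paper's magnetic-tunnel-junction hardware: a value is encoded by a population of neurons with Gaussian tuning curves, a linear readout is trained by gradient descent, part of the population is lost, and continued training lets the surviving neurons take over. All sizes, rates, and names are illustrative.

```python
# Minimal sketch (not the paper's MTJ hardware model): population coding of a
# scalar with Gaussian tuning curves, plus continuous learning that re-trains
# the readout after neurons are lost. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # population size (hypothetical)
centers = np.linspace(-1.0, 1.0, N)      # preferred value of each neuron

def encode(x, alive):
    """Population response: Gaussian tuning curves; dead neurons output 0."""
    r = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.1 ** 2))
    return r * alive                     # mask out lost neurons

def train_readout(w, x, y, alive, lr=0.05, steps=500):
    """Continuous learning: stochastic gradient descent on the readout."""
    for _ in range(steps):
        idx = rng.integers(0, len(x), 32)
        r = encode(x[idx], alive)
        err = r @ w - y[idx]
        w -= lr * r.T @ err / len(idx)
    return w

x = rng.uniform(-1, 1, 2000)
y = np.sin(np.pi * x)                    # target function to represent

alive = np.ones(N)
w = train_readout(np.zeros(N), x, y, alive)
print("MSE, full population:", np.mean((encode(x, alive) @ w - y) ** 2))

alive[rng.choice(N, N // 4, replace=False)] = 0.0   # lose 25% of neurons
print("MSE, after loss     :", np.mean((encode(x, alive) @ w - y) ** 2))

w = train_readout(w, x, y, alive)        # keep learning with the survivors
print("MSE, after recovery :", np.mean((encode(x, alive) @ w - y) ** 2))
```

    The redundancy of the population is what makes recovery possible: no single neuron is essential, so continued training can redistribute the lost neurons' roles across the survivors.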

    OvA-INN: Continual Learning with Invertible Neural Networks

    In the field of Continual Learning, the objective is to learn several tasks one after the other without access to the data from previous tasks. Several solutions have been proposed to tackle this problem, but they usually assume that the user knows which task to perform on a particular sample at test time, or they rely on small samples of previous data; most of them suffer from a substantial drop in accuracy when updated with batches of only one class at a time. In this article, we propose a new method, OvA-INN, which is able to learn one class at a time without storing any of the previous data. To achieve this, for each class we train a specific Invertible Neural Network to extract the features relevant to computing the likelihood of that class. At test time, we predict the class of a sample by identifying the network that outputs the highest likelihood. With this method, we show that we can take advantage of pretrained models by stacking an Invertible Network on top of a feature extractor. In this way, we outperform state-of-the-art approaches that rely on feature learning for Continual Learning on the MNIST and CIFAR-100 datasets. In our experiments, we reach 72% accuracy on CIFAR-100 after training our model one class at a time. Comment: to be published in IJCNN 202
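
    A toy sketch of the one-vs-all likelihood idea follows. A diagonal affine flow (essentially a per-class Gaussian) stands in for the paper's Invertible Neural Networks, but the protocol is the same: each class trains its own invertible model in isolation, with no stored data from other classes, and prediction picks the model with the highest likelihood. All names and parameters are illustrative.

```python
# Toy sketch of the OvA-INN protocol: one invertible model per class, trained
# only on that class's data; at test time, predict the class whose model
# assigns the highest likelihood. A diagonal affine flow stands in for the
# paper's Invertible Neural Networks; everything here is illustrative.
import numpy as np

class AffineFlow:
    """z = (x - mu) / sigma is invertible; by change of variables,
    log p(x) = log N(z; 0, I) - sum(log sigma)."""
    def fit(self, x):                    # trained on a single class's data
        self.mu, self.sigma = x.mean(0), x.std(0) + 1e-6
        return self

    def log_likelihood(self, x):
        z = (x - self.mu) / self.sigma
        log_prior = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(1)
        return log_prior - np.log(self.sigma).sum()  # Jacobian term

rng = np.random.default_rng(0)
# Two classes arriving sequentially, never stored together.
class_data = {0: rng.normal(0.0, 1.0, (500, 2)),
              1: rng.normal(3.0, 0.5, (500, 2))}

flows = {}
for label, x in class_data.items():      # continual: one class per step
    flows[label] = AffineFlow().fit(x)

def predict(x):
    scores = np.stack([flows[c].log_likelihood(x) for c in sorted(flows)])
    return scores.argmax(0)              # one-vs-all by likelihood

test = np.vstack([class_data[0][:10], class_data[1][:10]])
print(predict(test))                     # expect ten 0s then ten 1s
```

    Adding a new class never touches the networks already trained, which is what makes the approach rehearsal-free and immune to forgetting by construction.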

    Microwave neural processing and broadcasting with spintronic nano-oscillators

    Can we build small neuromorphic chips capable of training deep networks with billions of parameters? This challenge requires hardware neurons and synapses with nanometric dimensions that can be individually tuned and densely connected. While nanosynaptic devices have been pursued actively in recent years, much less has been done on nanoscale artificial neurons. In this paper, we show that spintronic nano-oscillators are promising candidates for implementing analog hardware neurons that can be densely interconnected through electromagnetic signals. We show how spintronic oscillators map onto the requirements of artificial neurons. We then show experimentally how an ensemble of four coupled oscillators can learn to classify all twelve American vowels, realizing one of the most complex tasks performed by nanoscale neurons so far.
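
    One way such oscillator ensembles classify signals is injection locking: an oscillator synchronizes to an input microwave tone that falls within its locking range, so the binary locking pattern of the ensemble encodes the input. The sketch below caricatures this mechanism with made-up frequencies; it is not a model of the experimental device.

```python
# Illustrative sketch (not the experimental setup): each oscillator "neuron"
# locks to an input tone that falls inside its locking range, and the
# ensemble's locking pattern serves as the classification output.
# All frequencies and bandwidths are made up.
import numpy as np

osc_freqs = np.array([300.0, 350.0, 400.0, 450.0])   # MHz, four oscillators
lock_range = 15.0                                    # locking bandwidth, MHz

def locking_pattern(input_tones):
    """1 if some input tone falls within an oscillator's locking range."""
    d = np.abs(osc_freqs[:, None] - np.asarray(input_tones)[None, :])
    return (d.min(axis=1) < lock_range).astype(int)

# Hypothetical "vowels" encoded as pairs of formant-derived frequencies.
vowels = {"a": [302.0, 447.0], "i": [351.0, 398.0], "u": [299.0, 352.0]}

for name, tones in vowels.items():
    print(name, locking_pattern(tones))  # a distinct pattern per class
```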

    Tunable Superconducting Properties of a-NbSi Thin Films and Application to Detection in Astrophysics

    We report on the superconducting properties of amorphous NbxSi1-x thin films. The normal-state resistance and critical temperature can be adjusted separately to suit the desired application. Notably, the relatively low electron-phonon coupling of these films makes them good candidates for an "all-electron bolometer" for Cosmic Microwave Background radiation detection. Moreover, such a device can be made to suit both high- and low-impedance readouts.

    Role of non-linear data processing on speech recognition task in the framework of reservoir computing

    The reservoir computing neural network architecture is widely used to test hardware systems for neuromorphic computing. One of the preferred tasks for benchmarking such devices is automatic speech recognition. However, this task requires acoustic transformations from sound waveforms with varying amplitudes to frequency-domain maps, which act as feature extraction; depending on the conversion method, these transformations may obscure the contribution of the neuromorphic hardware to the overall speech recognition performance. Here, we quantify and separate the contributions of the acoustic transformations and of the neuromorphic hardware to the speech recognition success rate. We show that the non-linearity in the acoustic transformation plays a critical role in feature extraction. We compute the gain in word success rate provided by a reservoir computing device over the acoustic transformation alone, and show that this gain is an appropriate benchmark for comparing different hardware. Finally, we experimentally and numerically quantify the impact of the different acoustic transformations on neuromorphic hardware based on magnetic nano-oscillators. Comment: 13 pages, 5 figures
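
    The benchmark logic can be sketched in a few lines: train the same linear readout once on the acoustic features alone and once on reservoir states driven by those features, and report the difference in success rate. In the sketch below, a software echo-state network and synthetic "acoustic features" stand in for the hardware and the speech corpus; everything is illustrative.

```python
# Minimal sketch of the benchmark idea: train the same linear readout on
# (a) the acoustic features alone and (b) echo-state reservoir states driven
# by those features, then report the gain in success rate. The synthetic
# "features" and the software reservoir stand in for the speech corpus and
# the hardware; all names and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(u, n=100, rho=0.9):
    """Echo-state reservoir; tanh is the non-linearity under test."""
    w_in = rng.uniform(-1, 1, (n, u.shape[1]))
    w = rng.normal(0, 1, (n, n))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))  # set spectral radius
    x, states = np.zeros(n), []
    for u_t in u:
        x = np.tanh(w_in @ u_t + w @ x)
        states.append(x.copy())
    return np.array(states)

def success_rate(X, labels, lam=1e-3):
    """Train a ridge-regression readout and report its (training) accuracy."""
    Y = np.eye(labels.max() + 1)[labels]             # one-hot targets
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return np.mean((X @ W).argmax(1) == labels)

# Stand-in "acoustic features": noisy class-dependent frequency tracks,
# with the label held constant over 30-step segments ("words").
T, n_classes, seg = 900, 3, 30
labels = np.repeat(rng.integers(0, n_classes, T // seg), seg)
feats = np.sin(np.outer(np.arange(T), [0.10, 0.23]) * (labels[:, None] + 1))
feats += 0.3 * rng.normal(size=feats.shape)

acc_feat = success_rate(feats, labels)
acc_res = success_rate(reservoir_states(feats), labels)
print(f"features only: {acc_feat:.2f}   reservoir: {acc_res:.2f}   "
      f"gain: {acc_res - acc_feat:+.2f}")
```

    Because the readout is identical in both runs, any gain in success rate can be attributed to the reservoir rather than to the feature extraction, which is the point of the proposed benchmark.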

    Spin Channels in Functionalized Graphene Nanoribbons

    We characterize the transport properties of functionalized graphene nanoribbons using extensive first-principles calculations based on density functional theory (DFT) that encompass monovalent and divalent ligands, hydrogenated defects, and vacancies. We find that the edge metallic states are preserved under a variety of chemical environments, while bulk conducting channels can easily be destroyed by hydrogenation or by ion or electron beams, resulting in devices that can exhibit spin-conductance polarization close to unity. Comment: 14 pages, 5 figures
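
    The robustness of the edge states has a textbook counterpart that is far lighter than DFT. The sketch below uses a nearest-neighbour tight-binding model of a zigzag graphene nanoribbon, a deliberate simplification of the paper's first-principles approach, to exhibit the nearly flat edge bands near zero energy.

```python
# Not the paper's DFT workflow: a standard nearest-neighbour tight-binding
# sketch of a zigzag graphene nanoribbon, showing the nearly flat edge-state
# bands at E ~ 0 that coexist with the dispersive bulk channels.
import numpy as np

t = 2.7        # nearest-neighbour hopping (eV), textbook graphene value
N = 12         # number of zigzag chains across the ribbon

def bands(k):
    """2N x 2N Bloch Hamiltonian: a tridiagonal chain with hoppings
    alternating between 2t*cos(k/2) and t (lattice constant set to 1)."""
    h = np.zeros((2 * N, 2 * N))
    for i in range(2 * N - 1):
        h[i, i + 1] = h[i + 1, i] = 2 * t * np.cos(k / 2) if i % 2 == 0 else t
    return np.linalg.eigvalsh(h)         # real symmetric -> eigvalsh

ks = np.linspace(0, np.pi, 200)
E = np.array([bands(k) for k in ks])     # (200, 2N) band energies
# Edge states: the two bands closest to E = 0 flatten for k > 2*pi/3.
print("two mid-gap energies at k = pi:", np.sort(np.abs(E[-1]))[:2])
```

    At k = pi the alternating hopping 2t*cos(k/2) vanishes, decoupling the two edge sites and pinning two states at exactly zero energy, which is the tight-binding shadow of the edge metallic states the DFT calculations find to be chemically robust.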