16,079 research outputs found
Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses
Spiking neural networks (SNN) are artificial computational models that have
been inspired by the brain's ability to naturally encode and process
information in the time domain. The added temporal dimension is believed to
render them more computationally efficient than conventional artificial
neural networks, though their full computational capabilities are yet to be
explored. Recently, computational memory architectures based on non-volatile
memory crossbar arrays have shown great promise to implement parallel
computations in artificial and spiking neural networks. In this work, we
experimentally demonstrate, for the first time, the feasibility of realizing
high-performance, event-driven, in-situ supervised learning systems using
nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize
audio signals of alphabets encoded using spikes in the time domain and to
generate spike trains at precise time instances to represent the pixel
intensities of their corresponding images. Moreover, with a statistical model
capturing the experimental behavior of the devices, we investigate
architectural and systems-level solutions for improving the training and
inference performance of our computational memory-based system. Combining the
computational potential of supervised SNNs with the parallel compute power of
computational memory, the work paves the way for the next generation of
efficient brain-inspired systems.
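To make the training principle concrete, the following is a minimal, illustrative sketch of spike-time-driven supervised learning with quantized, stochastic weight updates, loosely mimicking the granular and noisy conductance writes of phase-change devices. This is not the authors' algorithm: the network size, the toy LIF neuron model, the update rule, and all device parameters (granularity, write noise) are assumptions for illustration only.

```python
# Illustrative sketch only: spike-time supervised update with noisy, quantized synapses.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 16, 4
w = rng.uniform(0.0, 1.0, size=(n_out, n_in))    # synaptic conductances (weights)

def lif_first_spike(w, in_spike_times, threshold=2.0, tau=20.0, t_max=100.0, dt=1.0):
    """First-spike time of each output neuron of a toy leaky integrate-and-fire layer."""
    t_out = np.full(w.shape[0], t_max)
    v = np.zeros(w.shape[0])
    for t in np.arange(0.0, t_max, dt):
        v *= np.exp(-dt / tau)                                        # membrane leak
        v += w @ (np.abs(in_spike_times - t) < dt / 2).astype(float)  # spikes arriving now
        fired = (v >= threshold) & (t_out == t_max)
        t_out[fired] = t
        v[fired] = 0.0
    return t_out

# One supervised step: nudge weights so actual spike times move toward the targets.
in_times = rng.uniform(0.0, 50.0, size=n_in)     # input pattern encoded as spike times
target = np.array([20.0, 40.0, 60.0, 80.0])      # desired output spike times

lr, granularity, write_noise = 0.05, 0.02, 0.2   # assumed, illustrative device parameters
t_actual = lif_first_spike(w, in_times)
err = np.sign(t_actual - target)                 # spiking too late -> potentiate inputs
dw = lr * err[:, None] * np.exp(-in_times[None, :] / 50.0)
dw = granularity * np.round(dw / granularity)    # quantized conductance steps
dw *= 1.0 + write_noise * rng.standard_normal(dw.shape)   # stochastic device writes
w = np.clip(w + dw, 0.0, 1.0)
```

In a computational-memory setting, the matrix-vector product inside the neuron model would be performed in place on the crossbar array, and the quantized, noisy increments stand in for imperfect conductance updates of the phase-change cells.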
Hybrid computer Monte-Carlo techniques
Hybrid analog-digital computer systems for Monte Carlo method application
Engineering study for a mass memory system for advanced spacecraft: Final report, 1 Dec. 1969 - 1 Jul. 1970
Mass memory system for advanced spacecraft
Layered architecture for quantum computing
We develop a layered quantum computer architecture, which is a systematic
framework for tackling the individual challenges of developing a quantum
computer while constructing a cohesive device design. We discuss many of the
prominent techniques for implementing circuit-model quantum computing and
introduce several new methods, with an emphasis on employing surface code
quantum error correction. In doing so, we propose a new quantum computer
architecture based on optical control of quantum dots. The timescales of
physical hardware operations and logical, error-corrected quantum gates differ
by several orders of magnitude. By dividing functionality into layers, we can
design and analyze subsystems independently, demonstrating the value of our
layered architectural approach. Using this concrete hardware platform, we
provide resource analysis for executing fault-tolerant quantum algorithms for
integer factoring and quantum simulation, finding that the quantum dot
architecture we study could solve such problems on the timescale of days. Comment: 27 pages, 20 figures
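As a rough illustration of the kind of layered resource estimate described above (physical gate time, error-corrected logical cycle, total algorithm runtime), here is a back-of-envelope sketch. Every number below is an assumed placeholder for illustration, not a value taken from the paper.

```python
# Back-of-envelope runtime estimate across architectural layers (all values assumed).
physical_gate_time_s = 10e-9     # assumed duration of one physical control operation
syndrome_rounds      = 30        # assumed surface-code rounds per logical cycle
ops_per_round        = 8         # assumed physical operations per syndrome round
logical_cycle_time_s = physical_gate_time_s * ops_per_round * syndrome_rounds

logical_gates_for_algo = 1e12    # assumed logical-gate count for a large factoring instance

runtime_s = logical_cycle_time_s * logical_gates_for_algo
print(f"logical cycle: {logical_cycle_time_s * 1e6:.1f} us, "
      f"total runtime: {runtime_s / 86400:.1f} days")
```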
Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions
The increasing popularity of deep-learning-powered applications raises the issue
of the vulnerability of neural networks to adversarial attacks. In other words,
hardly perceptible changes in input data lead to output errors in a neural
network, hindering its utilization in applications that involve decisions with
security risks. A number of previous works have already thoroughly evaluated
the most commonly used configuration, Convolutional Neural Networks (CNNs),
against different types of adversarial attacks. Moreover, recent works
demonstrated the transferability of some adversarial examples across different
neural network models. This paper studies the robustness of new emerging models
such as SpinalNet-based neural networks and Compact Convolutional Transformers
(CCT) on the image classification problem of the CIFAR-10 dataset. Each
architecture was tested against four white-box attacks and three black-box
attacks. Unlike the VGG and SpinalNet models, the attention-based CCT
configuration demonstrated a large span between strong robustness and
vulnerability to adversarial examples. Finally, the transferability of attacks
between the VGG, VGG-inspired SpinalNet, and pretrained CCT 7/3x1 models was
studied. It was shown that the high effectiveness of an attack on a certain
individual model does not guarantee its transferability to other models. Comment: 18 pages
Pavlov's dog associative learning demonstrated on synaptic-like organic transistors
In this letter, we present an original demonstration of an associative learning
neural network inspired by Pavlov's famous dog experiment. A single
nanoparticle organic memory field-effect transistor (NOMFET) is used to
implement each synapse. We show how the physical properties of this dynamic
memristive device can be used to perform low-power write operations for
learning and to implement short-term association using temporal coding and
spike-timing-dependent-plasticity-based learning. An electronic circuit was
built to validate the proposed learning scheme with packaged devices, with good
reproducibility despite the complex synaptic-like dynamics of the NOMFET in the
pulse regime.
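A conceptual sketch of the described Pavlovian association follows: repeatedly pairing a weak "bell" input with a strong "food" input strengthens the bell synapse under a simple STDP-like rule with slow decay (short-term memory), until the bell alone triggers the output. The NOMFET device physics is not modeled here; all constants and the update rule are illustrative assumptions.

```python
# Conceptual Pavlovian conditioning sketch; constants and rule are assumptions only.
w_food, w_bell = 1.0, 0.1        # food synapse is strong, bell synapse starts weak
threshold, lr, decay = 0.8, 0.2, 0.02

def trial(bell, food, w_bell):
    drive = w_food * food + w_bell * bell
    output = drive >= threshold                    # "salivation" response
    if output and bell:                            # bell paired with response: potentiate
        w_bell += lr * (1.0 - w_bell)
    w_bell -= decay * w_bell                       # short-term: association slowly fades
    return output, w_bell

for _ in range(10):                                # conditioning: bell presented with food
    _, w_bell = trial(bell=1, food=1, w_bell=w_bell)

salivates, w_bell = trial(bell=1, food=0, w_bell=w_bell)
print(f"bell alone -> salivation: {salivates}, w_bell = {w_bell:.2f}")
```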
Structural dynamics branch research and accomplishments for fiscal year 1987
This publication contains a collection of fiscal year 1987 research highlights from the Structural Dynamics Branch at NASA Lewis Research Center. Highlights from the branch's four major work areas (Aeroelasticity, Vibration Control, Dynamic Systems, and Computational Structural Methods) are included in the report, as well as a complete listing of the FY87 branch publications.
- …