6,227 research outputs found
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. While Information and Communication Technologies
continue to address the need for increased computational power by raising the
number of cores within a digital processor, neuromorphic engineers and
scientists can complement this approach by building processor architectures
where memory is distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that implement
more biologically realistic models of neurons and synapses, together with a
suite of adaptation and learning mechanisms analogous to those found in
biological nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; a review of recently proposed
neuromorphic computing platforms and systems.
Non-classical computing: feasible versus infeasible
Physics sets certain limits on what is and is not computable. These limits are very far from having been reached by current technologies. While proposals for hypercomputation are almost certainly infeasible, a number of non-classical approaches do hold considerable promise. There is a range of possible architectures, distinctly different from the von Neumann model, that could be implemented in silicon. Beyond this, quantum simulators, which are the quantum equivalent of analogue computers, may be constructible in the near future.
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their metric performance. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures.
Universalities in cellular automata; a (short) survey
This reading guide aims to provide the reader with easy access to the study of universality in the field of cellular automata. To fulfill this goal, the approach taken here is organized in three parts: a detailed chronology of seminal papers, a discussion of the definition and main properties of universal cellular automata, and a broad bibliography.
Sparse Coding on Stereo Video for Object Detection
Deep Convolutional Neural Networks (DCNN) require millions of labeled
training examples for image classification and object detection tasks, which
restrict these models to domains where such datasets are available. In this
paper, we explore the use of unsupervised sparse coding applied to stereo-video
data to help alleviate the need for large amounts of labeled data. We show that
replacing a typical supervised convolutional layer with an unsupervised
sparse-coding layer within a DCNN allows for better performance on a car
detection task when only a limited number of labeled training examples is
available. Furthermore, the network that incorporates sparse coding allows for
more consistent performance over varying initializations and ordering of
training examples when compared to a fully supervised DCNN. Finally, we compare
activations between the unsupervised sparse-coding layer and the supervised
convolutional layer, and show that the sparse representation exhibits an
encoding that is depth selective, whereas encodings from the convolutional
layer do not exhibit such selectivity. These results indicate promise for using
unsupervised sparse-coding approaches in real-world computer vision tasks in
domains with limited labeled training data.
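The abstract's central idea is swapping a supervised convolutional layer for an unsupervised sparse-coding layer. The paper's exact formulation is not reproduced here, but the core operation, inferring a sparse code `a` that reconstructs an input `x` from a dictionary `D`, can be sketched with the standard ISTA algorithm (the dictionary, dimensions, and penalty `lam` below are illustrative, not the paper's settings):

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
x = D[:, 3] * 2.0 + D[:, 40] * -1.5        # signal built from two atoms
a = ista_sparse_code(D, x)
print(np.count_nonzero(np.abs(a) > 0.3))   # only a few atoms carry the energy
```

The sparsity penalty is what forces the representation to be selective: in the toy run above, the two generating atoms dominate the code, analogous to the depth-selective encodings the paper reports.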
Why Quantum Bit Commitment And Ideal Quantum Coin Tossing Are Impossible
There had been well-known claims of unconditionally secure quantum protocols
for bit commitment. However, we, and independently Mayers, showed that all
proposed quantum bit commitment schemes are, in principle, insecure because the
sender, Alice, can almost always cheat successfully by using an
Einstein-Podolsky-Rosen (EPR) type of attack and delaying her measurements. One
might wonder if secure quantum bit commitment protocols exist at all. We answer
this question by showing that the same type of attack by Alice will, in
principle, break any bit commitment scheme. The cheating strategy generally
requires a quantum computer. We emphasize the generality of this ``no-go
theorem'': Unconditionally secure bit commitment schemes based on quantum
mechanics---fully quantum, classical or quantum but with measurements---are all
ruled out by this result. Since bit commitment is a useful primitive for
building up more sophisticated protocols such as zero-knowledge proofs, our
results cast very serious doubt on the security of quantum cryptography in the
so-called ``post-cold-war'' applications. We also show that ideal quantum coin
tossing is impossible because of the EPR attack. This no-go theorem for ideal
quantum coin tossing may help to shed some light on the possibility of
non-ideal protocols.
Comment: We emphasize the generality of this "no-go theorem": all bit
commitment schemes---fully quantum, classical, and quantum but with
measurements---are shown to be necessarily insecure. Accepted for publication
in a special issue of Physica D. About 18 pages in elsart.sty. This is an
extended version of an earlier manuscript (quant-ph/9605026) which has
appeared in the proceedings of PHYSCOMP'9
Classification of Hybrid Quantum-Classical Computing
As quantum computers mature, the applicability in practice becomes more
important. Many uses of quantum computers will be hybrid, with classical
computers still playing an important role in operating and using the quantum
computer. The term "hybrid", however, is diffuse and open to multiple
interpretations. In this work we define two classes of hybrid
quantum-classical computing: vertical and horizontal. The first is
application-agnostic and concerns using quantum computers; the second is
application-specific and concerns running an algorithm. For both, we give a
further subdivision into different types of hybrid quantum-classical
computing and coin terms for them.
Efficient hardware implementations of bio-inspired networks
The human brain, with its massive computational capability and power efficiency in a small form factor, continues to inspire the ultimate goal of building machines that can perform tasks without being explicitly programmed. In an effort to mimic the natural information processing paradigms observed in the brain, several neural network generations have been proposed over the years. Among the neural networks inspired by biology, second-generation Artificial or Deep Neural Networks (ANNs/DNNs) use memoryless neuron models and have shown unprecedented success, surpassing humans in a wide variety of tasks. Unlike ANNs, third-generation Spiking Neural Networks (SNNs) closely mimic biological neurons by operating on discrete and sparse events in time called spikes, which are obtained by the time integration of previous inputs.
Implementation of data-intensive neural network models on computers based on the von Neumann architecture is mainly limited by the continuous data transfer between the physically separated memory and processing units. Hence, non-von Neumann architectural solutions are essential for processing these memory-intensive bio-inspired neural networks in an energy-efficient manner. Among the non-von Neumann architectures, implementations employing non-volatile memory (NVM) devices are most promising due to their compact size and low operating power. However, it is non-trivial to integrate these nanoscale devices on conventional computational substrates due to their non-idealities, such as limited dynamic range, finite bit resolution, programming variability, etc. This dissertation demonstrates the architectural and algorithmic optimizations of implementing bio-inspired neural networks using emerging nanoscale devices.
The first half of the dissertation focuses on the hardware acceleration of DNN implementations. A 4-layer stochastic DNN in a crossbar architecture with memristive devices at the cross point is analyzed for accelerating DNN training. This network is then used as a baseline to explore the impact of experimental memristive device behavior on network performance. Programming variability is found to have a critical role in determining network performance compared to other non-ideal characteristics of the devices. In addition, noise-resilient inference engines are demonstrated using stochastic memristive DNNs with 100 bits for stochastic encoding during inference and 10 bits for the expensive training.
The second half of the dissertation focuses on a novel probabilistic framework for SNNs using Generalized Linear Model (GLM) neurons to capture neuronal behavior. This work demonstrates that probabilistic SNNs have performance comparable to that of equivalent ANNs on two popular benchmarks: handwritten-digit classification and human activity recognition. Given the potential of SNNs for energy-efficient implementations, a hardware accelerator for inference is proposed, termed the Spintronic Accelerator for Probabilistic SNNs (SpinAPS). The learning algorithm is optimized for a hardware-friendly implementation and uses a first-to-spike decoding scheme for low-latency inference. With binary spintronic synapses and digital CMOS logic neurons for computation, SpinAPS achieves a performance improvement of 4x in terms of GSOPS/W/mm when compared to a conventional SRAM-based design.
Collectively, this work demonstrates the potential of emerging memory technologies for building energy-efficient hardware architectures for deep and spiking neural networks. The design strategies adopted in this work can be extended to other spike- and non-spike-based systems for building embedded solutions with power/energy constraints.
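The crossbar acceleration and programming-variability analysis described in the first half can be illustrated with a toy model: weights stored as differential pairs of device conductances, a matrix-vector product computed implicitly by Ohm's and Kirchhoff's laws, and multiplicative lognormal noise standing in for programming variability. The conductance range, noise level, and encoding below are illustrative assumptions, not the dissertation's actual device model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical crossbar: weights stored as conductances of memristive devices.
W = rng.uniform(-1, 1, size=(4, 8))        # target weights (4 rows x 8 columns)
g_max, g_min = 1.0, 0.01                   # assumed device conductance range

# Differential-pair encoding: W is represented as G_pos - G_neg.
G_pos = np.clip(np.maximum(W, 0) * (g_max - g_min) + g_min, g_min, g_max)
G_neg = np.clip(np.maximum(-W, 0) * (g_max - g_min) + g_min, g_min, g_max)

# Programming variability: multiplicative lognormal noise on every device.
sigma = 0.05
G_pos_actual = G_pos * rng.lognormal(0.0, sigma, G_pos.shape)
G_neg_actual = G_neg * rng.lognormal(0.0, sigma, G_neg.shape)

v = rng.uniform(0, 1, size=8)              # input voltages on the columns
i_ideal = (G_pos - G_neg) @ v              # ideal currents (Ohm + Kirchhoff)
i_actual = (G_pos_actual - G_neg_actual) @ v

print(np.max(np.abs(i_actual - i_ideal)))  # deviation caused by variability
```

Sweeping `sigma` in such a model is the kind of experiment that reveals why programming variability, rather than dynamic range or bit resolution, can dominate network-level accuracy.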
Random Projection using Random Quantum Circuits
The random sampling task performed by Google's Sycamore processor gave us a
glimpse of the "quantum supremacy" era and shed a spotlight on the power of
random quantum circuits in the abstract task of sampling outputs from
(pseudo-)random circuits. In this manuscript, we explore a
practical near-term use of local random quantum circuits in dimensionality
reduction of large low-rank data sets. We make use of the well-studied
dimensionality reduction technique called the random projection method. This
method has been extensively used in various applications such as image
processing, logistic regression, entropy computation of low-rank matrices, etc.
We prove that the matrix representations of local random quantum circuits with
sufficiently short depths serve as good candidates for random
projection. We demonstrate numerically that their projection abilities are not
far off from the computationally expensive classical principal components
analysis on MNIST and CIFAR-100 image data sets. We also benchmark the
performance of quantum random projection against the commonly used classical
random projection in the tasks of dimensionality reduction of image datasets
and computing von Neumann entropies of large low-rank density matrices.
Finally, using variational quantum singular value decomposition, we demonstrate
a near-term implementation of extracting the singular vectors with dominant
singular values after quantum random projecting a large low-rank matrix to
lower dimensions. All these numerical experiments demonstrate the ability of
local random circuits to randomize a large Hilbert space at sufficiently short
depths, with robust retention of the properties of large datasets in reduced
dimensions.
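The classical baseline this abstract benchmarks against, Gaussian random projection in the Johnson-Lindenstrauss sense, can be sketched as follows (this is the classical method only, not the quantum-circuit variant; dimensions and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

n, d, k = 200, 1024, 128                   # samples, original dim, projected dim
X = rng.standard_normal((n, d))

# Gaussian random projection: entries ~ N(0, 1/k) approximately preserve
# pairwise distances up to (1 +/- eps) with high probability (JL lemma).
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R

# Check the distance distortion on one pair of points.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(proj / orig)                         # ratio close to 1
```

The quantum proposal replaces the dense Gaussian matrix `R` with the matrix representation of a short-depth local random circuit; the distance-preservation check above is the property any candidate projection must retain.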