Power Optimizations in MTJ-based Neural Networks through Stochastic Computing
Artificial Neural Networks (ANNs) have found widespread applications in tasks
such as pattern recognition and image classification. However, hardware
implementations of ANNs using conventional binary arithmetic units are
computationally expensive, energy-intensive and have large area overheads.
Stochastic Computing (SC) is an emerging paradigm which replaces these
conventional units with simple logic circuits and is particularly suitable for
fault-tolerant applications. Spintronic devices, such as Magnetic Tunnel
Junctions (MTJs), are capable of replacing CMOS in memory and logic circuits.
In this work, we propose an energy-efficient use of MTJs, which exhibit
probabilistic switching behavior, as Stochastic Number Generators (SNGs), which
forms the basis of our NN implementation in the SC domain. Further, the
error-resilient target applications of NNs allow us to introduce Approximate
Computing, a framework wherein the accuracy of computations is traded off for
substantial reductions in power consumption. We propose approximating the
synaptic weights in our MTJ-based NN implementation, in ways brought about by
properties of our MTJ-SNG, to achieve energy-efficiency. We design an algorithm
that can perform such approximations within a given error tolerance in a
single-layer NN in an optimal way owing to the convexity of the problem
formulation. We then use this algorithm and develop a heuristic approach for
approximating multi-layer NNs. To give a perspective of the effectiveness of
our approach, a 43% reduction in power consumption was obtained with less than
1% accuracy loss on a standard classification problem, with 26% being brought
about by the proposed algorithm.Comment: Accepted in the 2017 IEEE/ACM International Conference on Low Power
Electronics and Desig
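As context for how an MTJ-SNG is used, stochastic computing encodes a value in [0, 1] as the fraction of 1s in a random bitstream, and multiplication of two such values reduces to a single AND gate over independent streams. The sketch below illustrates this in software, with a pseudo-random generator standing in for the MTJ's probabilistic switching; the function names and stream length are illustrative, not taken from the paper.

```python
import random

def sng(p, n, rng):
    """Stochastic number generator: emit an n-bit stream whose fraction
    of 1s approximates the probability p. In the paper this role is
    played by an MTJ's probabilistic switching; here a software PRNG
    stands in."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=4096, seed=0):
    """Multiply two values in [0, 1] by ANDing independent streams:
    P(x AND y) = P(x) * P(y) when the bits are independent."""
    rng = random.Random(seed)
    xs, ys = sng(a, n, rng), sng(b, n, rng)
    ones = sum(x & y for x, y in zip(xs, ys))
    return ones / n  # estimate of a * b

est = sc_multiply(0.5, 0.8)  # close to 0.4, up to sampling noise
```

The estimate's variance shrinks with stream length n, which is the usual SC trade-off between latency and precision.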
ASCEND: Accurate yet Efficient End-to-End Stochastic Computing Acceleration of Vision Transformer
Stochastic computing (SC) has emerged as a promising computing paradigm for
neural acceleration. However, how to accelerate the state-of-the-art Vision
Transformer (ViT) with SC remains unclear. Unlike convolutional neural
networks, ViTs introduce notable compatibility and efficiency challenges
because of their nonlinear functions, e.g., softmax and Gaussian Error Linear
Units (GELU). In this paper, for the first time, a ViT accelerator based on
end-to-end SC, dubbed ASCEND, is proposed. ASCEND co-designs the SC circuits
and ViT networks to enable accurate yet efficient acceleration. To overcome the
compatibility challenges, ASCEND proposes a novel deterministic SC block for
GELU and leverages an SC-friendly iterative approximate algorithm to design an
accurate and efficient softmax circuit. To improve inference efficiency, ASCEND
develops a two-stage training pipeline to produce accurate low-precision ViTs.
With extensive experiments, we show the proposed GELU and softmax blocks
achieve 56.3% and 22.6% error reduction compared to existing SC designs,
respectively, and reduce the area-delay product (ADP) by 5.29x and 12.6x,
respectively. Moreover, compared to the baseline low-precision ViTs, ASCEND
also achieves significant accuracy improvements on CIFAR10 and CIFAR100.
Comment: Accepted in DATE 202
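The abstract does not spell out ASCEND's iterative softmax algorithm, but the general flavor of a hardware-friendly softmax can be illustrated with a multiply-only approximation: replace exp(x) with (1 + x/2^k)^(2^k), computed by k repeated squarings. The sketch below is a generic illustration under that assumption, not ASCEND's actual circuit.

```python
def approx_exp(x, k=8):
    """Iterative exponential approximation: exp(x) ~ (1 + x/2^k)^(2^k),
    computed with k squarings, i.e. multiplies only, no transcendental
    ops. Assumed here for illustration; not the paper's algorithm."""
    y = 1.0 + x / (1 << k)
    for _ in range(k):
        y = y * y
    return y

def approx_softmax(xs, k=8):
    """Softmax built on the multiply-only exp; inputs are shifted by
    their max for numerical stability, as in a standard softmax."""
    m = max(xs)
    es = [approx_exp(x - m, k) for x in xs]
    s = sum(es)
    return [e / s for e in es]

probs = approx_softmax([1.0, 2.0, 3.0])  # close to the exact softmax
```

Larger k tightens the approximation at the cost of more iterations, mirroring the accuracy/efficiency trade-off the abstract describes.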
Gathering Statistics to Aspectually Classify Sentences with a Genetic Algorithm
This paper presents a method for large corpus analysis to semantically
classify an entire clause. In particular, we use cooccurrence statistics among
similar clauses to determine the aspectual class of an input clause. The
process examines linguistic features of clauses that are relevant to aspectual
classification. A genetic algorithm determines what combinations of linguistic
features to use for this task.
Comment: postscript, 9 pages, Proceedings of the Second International Conference on New Methods in Language Processing, Oflazer and Somers, eds.
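A genetic algorithm that determines which combinations of linguistic features to use, as described above, can be sketched as evolving binary masks over the candidate features. Everything below, the operators, parameters, and toy fitness function, is illustrative; the paper's actual encoding and fitness measure are not given in the abstract.

```python
import random

def ga_feature_select(fitness, n_features, pop_size=20, gens=30, seed=0):
    """Minimal genetic algorithm over binary feature masks.
    `fitness` scores a mask (higher is better); in the paper's setting
    it would measure aspectual-classification quality on the corpus."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_features)        # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# toy fitness: reward features 0 and 2, penalize every enabled feature
best = ga_feature_select(
    lambda m: 2 * (m[0] + m[2]) - sum(m), n_features=6)
```

Keeping the top half of each generation (elitism) guarantees the best mask found so far is never lost, which is a common choice when fitness evaluation is expensive, as corpus statistics would be.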
Restricted Value Iteration: Theory and Algorithms
Value iteration is a popular algorithm for finding near-optimal policies for
POMDPs. It is inefficient due to the need to account for the entire belief
space, which necessitates the solution of large numbers of linear programs. In
this paper, we study value iteration restricted to belief subsets. We show
that, together with properly chosen belief subsets, restricted value iteration
yields near-optimal policies and we give a condition for determining whether a
given belief subset would bring about savings in space and time. We also apply
restricted value iteration to two interesting classes of POMDPs, namely
informative POMDPs and near-discernible POMDPs.
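The Bellman backup underlying value iteration can be sketched on a finite-state MDP; in the POMDP setting the same backup runs over beliefs (distributions over states), which is where restricting attention to a well-chosen belief subset saves space and time. The code below is a generic illustration of the backup, not the paper's restricted algorithm.

```python
def value_iteration(n_states, actions, P, R, gamma=0.9, tol=1e-6):
    """Plain value iteration on a small MDP.
    P[a][s][t] = probability of moving s -> t under action a,
    R[a][s]    = expected immediate reward for taking a in s.
    Iterates the Bellman backup until the value change is below tol."""
    V = [0.0] * n_states
    while True:
        Vn = [max(R[a][s] + gamma * sum(P[a][s][t] * V[t]
                                        for t in range(n_states))
                  for a in actions)
              for s in range(n_states)]
        if max(abs(x - y) for x, y in zip(V, Vn)) < tol:
            return Vn
        V = Vn

# toy 2-state, 2-action MDP (illustrative, not from the paper)
P = {0: [[1.0, 0.0], [0.0, 1.0]],   # action 0: stay put
     1: [[0.0, 1.0], [1.0, 0.0]]}   # action 1: swap states
R = {0: [0.0, 1.0], 1: [0.0, 1.0]}  # reward 1 for being in state 1
V = value_iteration(2, [0, 1], P, R)
```

For this toy MDP the fixed point is V = [9, 10] with gamma = 0.9: state 1 collects reward 1 forever (1/(1-0.9) = 10), and state 0 reaches it in one step (0.9 * 10 = 9). In a POMDP each backup instead optimizes over belief points, which is why the paper's restriction to belief subsets cuts the number of linear programs solved.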