Power Optimizations in MTJ-based Neural Networks through Stochastic Computing
Artificial Neural Networks (ANNs) have found widespread applications in tasks
such as pattern recognition and image classification. However, hardware
implementations of ANNs using conventional binary arithmetic units are
computationally expensive, energy-intensive and have large area overheads.
Stochastic Computing (SC) is an emerging paradigm which replaces these
conventional units with simple logic circuits and is particularly suitable for
fault-tolerant applications. Spintronic devices, such as Magnetic Tunnel
Junctions (MTJs), are capable of replacing CMOS in memory and logic circuits.
In this work, we propose an energy-efficient use of MTJs, which exhibit
probabilistic switching behavior, as Stochastic Number Generators (SNGs); this
use forms the basis of our NN implementation in the SC domain. Further, the
error-resilient target applications of NNs allow us to introduce Approximate
Computing, a framework wherein computational accuracy is traded off for
substantial reductions in power consumption. We propose approximating the
synaptic weights in our MTJ-based NN implementation, in ways enabled by the
properties of our MTJ-SNG, to achieve energy efficiency. We design an
algorithm that, owing to the convexity of the problem formulation, performs
such approximations optimally for a single-layer NN within a given error
tolerance. We then build on this algorithm to develop a heuristic approach for
approximating multi-layer NNs. To illustrate the effectiveness of our
approach: on a standard classification problem, a 43% reduction in power
consumption was obtained with less than 1% accuracy loss, 26% of which was
brought about by the proposed algorithm.
Comment: Accepted in the 2017 IEEE/ACM International Conference on Low Power Electronics and Design.
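To make the stochastic-computing idea concrete, the following is a minimal Python sketch rather than the authors' hardware design: the sng function is a hypothetical software stand-in for the MTJ-SNG, and the stream length n is an illustrative choice. It shows unipolar stochastic multiplication, the operation that lets a single AND gate replace a full binary multiplier:

```python
import random

def sng(p, n, rng=random.random):
    """Hypothetical software stand-in for an MTJ-based stochastic number
    generator: emits an n-bit stream in which each bit is 1 with
    probability p, mimicking the MTJ's probabilistic switching."""
    return [1 if rng() < p else 0 for _ in range(n)]

def sc_multiply(x, y, n=4096):
    """Unipolar stochastic multiplication: for independent streams,
    a bitwise AND yields a stream whose density of ones is x * y."""
    sx, sy = sng(x, n), sng(y, n)
    product = [a & b for a, b in zip(sx, sy)]
    return sum(product) / n          # decode: fraction of ones

# Example: a synaptic weight times an activation, both in [0, 1].
print(sc_multiply(0.6, 0.5))         # ~0.30, with noise that shrinks as n grows
```

Longer streams reduce the sampling noise at the cost of latency; that precision-for-simplicity trade is what makes SC attractive for fault-tolerant, error-resilient workloads such as NN inference.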
Value-based argumentation frameworks as neural-symbolic learning systems
While neural networks have been successfully used in a number of machine learning applications, logical languages have been the standard for the representation of argumentative reasoning. In this paper, we establish a relationship between neural networks and argumentation networks, combining reasoning and learning in the same argumentation framework. We do so by presenting a new neural argumentation algorithm, responsible for translating argumentation networks into standard neural networks. We then show a correspondence between the two networks. The algorithm works not only for acyclic argumentation networks, but also for circular networks, and it enables the accrual of arguments through learning as well as the parallel computation of arguments.
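For intuition about the translation, here is a toy sketch under assumed conventions, not the paper's neural argumentation algorithm verbatim: each argument becomes a neuron with a fixed positive base support, each attack becomes a negative weight from attacker to attacked, and iterating the network to a fixed point computes which arguments survive.

```python
import numpy as np

# Toy argumentation network: A attacks B, B attacks C.
arguments = ["A", "B", "C"]
attacks = [("A", "B"), ("B", "C")]

idx = {a: i for i, a in enumerate(arguments)}
n = len(arguments)

bias = np.ones(n)                  # base support for every argument
W = np.zeros((n, n))
for attacker, attacked in attacks:
    W[idx[attacked], idx[attacker]] = -1.0   # attack as inhibitory weight

x = np.ones(n)                     # initially presume every argument acceptable
for _ in range(2 * n):             # relax toward a fixed point
    x = (bias + W @ x > 0).astype(float)

print({a: bool(x[idx[a]]) for a in arguments})
# {'A': True, 'B': False, 'C': True}: A defeats B, so C is reinstated.
```

Because attacks are just weights, standard neural learning can subsequently strengthen or weaken them, which is one way to read the accrual-of-arguments-through-learning claim above.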
Layerwise symbolic knowledge extraction from deep neural networks
We examine the feasibility of rule extraction as a method of explanation for neural networks, with an emphasis on deep neural networks. This is done by establishing a framework for neural-symbolic computing which gives precise meaning to notions such as fidelity, neural encoding, and rule extraction. Using this framework, we establish semantic and syntactic relationships between different classes of neural networks and different logical systems. This shows that there is nothing inherently different about the computations done by deep neural networks and logical systems. We use this to argue that complexity is the primary difference between neural and symbolic approaches. We develop a measure of complexity and two different rule extraction algorithms using M-of-N rules. The first is a fast decompositional algorithm for Deep Belief Networks that builds on the optimal confidence extraction algorithm. The second is a parallel search for optimal M-of-N rules, with a hyperparameter that controls the complexity of the extracted rules. We apply this algorithm to a variety of deep networks and find that although differences in architecture, dataset, and learning algorithm influence the complexity of extracted rules, generally only the final softmax layer can be represented simply and accurately with M-of-N rules. We conclude by experimenting with the combination of rule extraction from the final layer and importance methods to visualize the inputs to the final layer.
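For intuition, an M-of-N rule states that a neuron fires whenever at least M of a chosen set of N inputs are active. The following simplified check, a sketch for a step neuron with non-negative weights over binary inputs rather than either of the paper's extraction algorithms, tests whether an exact M-of-N rule over all N inputs exists:

```python
import numpy as np

def exact_m_of_n(weights, bias):
    """For a step neuron over 0/1 inputs with non-negative weights,
    return M such that the neuron fires iff at least M of its N inputs
    are active; return None when no exact M-of-N rule over all N inputs
    exists (the usual case, motivating approximate, lower-fidelity rules)."""
    w = np.sort(np.asarray(weights, dtype=float))   # ascending
    n = len(w)
    for m in range(n + 1):
        # Worst case for firing: the m active inputs have the smallest weights.
        fires_with_weakest_m = w[:m].sum() + bias > 0
        # Best case for staying silent: m-1 active inputs with the largest weights.
        silent_with_strongest = (w[n - (m - 1):].sum() if m > 1 else 0.0) + bias <= 0
        if fires_with_weakest_m and silent_with_strongest:
            return m
    return None

# Example: three equal weights and a firing threshold of 1.5 -> a 2-of-3 rule.
print(exact_m_of_n([1.0, 1.0, 1.0], bias=-1.5))   # 2
```

When the weights are uneven, no single M is exact, and an extractor must trade fidelity against rule complexity; that is roughly the trade-off the second algorithm's hyperparameter controls.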
DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
Deep learning (DL) defines a new data-driven programming paradigm that
constructs the internal system logic of a crafted neural network through a set
of training data. We have seen wide adoption of DL in many safety-critical
scenarios. However, a plethora of studies have shown that the state-of-the-art
DL systems suffer from various vulnerabilities which can lead to severe
consequences when applied to real-world applications. Currently, the testing
adequacy of a DL system is usually measured by its accuracy on test data. Given
the limited availability of high-quality test data, good accuracy on test data
can hardly provide confidence in the testing adequacy and generality of DL
systems. Unlike traditional software systems that have
clear and controllable logic and functionality, the lack of interpretability in
a DL system makes system analysis and defect detection difficult, which could
potentially hinder its real-world deployment. In this paper, we propose
DeepGauge, a set of multi-granularity testing criteria for DL systems, which
aims at rendering a multi-faceted portrayal of the testbed. The in-depth
evaluation of our proposed testing criteria is demonstrated on two well-known
datasets, across five DL systems, and against four state-of-the-art adversarial
attack techniques. The potential usefulness of DeepGauge sheds light on the
construction of more generic and robust DL systems.
Comment: The 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE 2018).
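One of the DeepGauge criteria, k-multisection neuron coverage, is straightforward to sketch. The NumPy version below is reconstructed from the criterion's description rather than taken from the authors' code, and the activation matrices are illustrative stand-ins: partition each neuron's activation range observed during training into k sections and measure what fraction of all sections the test inputs reach.

```python
import numpy as np

def k_multisection_coverage(train_acts, test_acts, k=10):
    """train_acts, test_acts: (num_inputs, num_neurons) activation matrices.
    Returns the fraction of the k-per-neuron sections hit by test inputs."""
    low = train_acts.min(axis=0)                 # per-neuron training range
    high = train_acts.max(axis=0)
    width = np.where(high > low, high - low, 1.0)
    num_neurons = train_acts.shape[1]
    covered = np.zeros((num_neurons, k), dtype=bool)
    for acts in test_acts:                       # one test input at a time
        for j in range(num_neurons):
            if low[j] <= acts[j] <= high[j]:     # in-range activations only
                sec = min(int((acts[j] - low[j]) / width[j] * k), k - 1)
                covered[j, sec] = True
    return covered.sum() / covered.size

# Example with random stand-in activations for a 100-neuron layer.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 100))
test = rng.normal(size=(50, 100))
print(k_multisection_coverage(train, test))      # fraction in [0, 1]
```

Test activations that fall outside a neuron's training-time range are deliberately skipped here; in DeepGauge those corner cases feed the separate neuron-boundary-coverage criteria.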
Can biological quantum networks solve NP-hard problems?
There is a widespread view that the human brain is so complex that it cannot
be efficiently simulated by universal Turing machines. Over the last few
decades, the question has therefore been raised of whether quantum effects must
be considered to explain the imagined cognitive power of a conscious mind.
This paper presents a personal view of several fields of philosophy and
computational neurobiology in an attempt to suggest a realistic picture of how
the brain might work as a basis for perception, consciousness and cognition.
The purpose is to be able to identify and evaluate instances where quantum
effects might play a significant role in cognitive processes.
Not surprisingly, the conclusion is that quantum-enhanced cognition and
intelligence are very unlikely to be found in biological brains. Quantum
effects may certainly influence the functionality of various components and
signalling pathways at the molecular level in the brain network, such as ion
channels, synapses, sensors, and enzymes. This may well influence the
functionality of some nodes, and perhaps even the overall intelligence of the
brain network, but it can hardly give the network dramatically enhanced
functionality. So,
the conclusion is that biological quantum networks can only approximately solve
small instances of NP-hard problems.
On the other hand, artificial intelligence and machine learning implemented
in complex dynamical systems based on genuine quantum networks can certainly be
expected to show enhanced performance and quantum advantage compared with
classical networks. Nevertheless, even quantum networks can only be expected to
efficiently solve NP-hard problems approximately. In the end it is a question
of precision - Nature is approximate.
Comment: 38 pages.