Approximating power by weights
Determining the power distribution of the members of a shareholder meeting or
a legislative committee is a well-known problem for many applications. In some
cases it turns out that power is nearly proportional to relative voting
weights, which is very beneficial for both theoretical considerations and
practical computations with many members. We present quantitative approximation
results with precise error bounds for several power indices as well as
impossibility results for such approximations between power and weights. Comment: 23 pages, 1 table, 1 figure
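The near-proportionality between power and relative weight that this abstract studies can be observed directly on a toy weighted voting game. The sketch below (an illustrative example, not taken from the paper) computes the normalized Banzhaf power index by brute-force enumeration of swing coalitions and compares it to relative voting weights; all names are hypothetical:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index of the weighted voting game [quota; weights].

    A player's raw score counts the coalitions of the other players that
    lose without the player but win once the player joins (the "swings").
    """
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coalition in combinations(others, r):
                total = sum(weights[j] for j in coalition)
                # i is critical: coalition loses alone, wins with i added
                if total < quota <= total + weights[i]:
                    swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings]

weights = [4, 3, 2, 1]
quota = 6                      # strict majority of the total weight 10
idx = banzhaf(weights, quota)  # [5/12, 1/4, 1/4, 1/12]
rel = [w / sum(weights) for w in weights]  # [0.4, 0.3, 0.2, 0.1]
```

Here power (5/12, 1/4, 1/4, 1/12) tracks the relative weights (0.4, 0.3, 0.2, 0.1) only roughly; quantifying how close such pairs can be, and when they cannot be close, is the subject of the paper.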
Power Optimizations in MTJ-based Neural Networks through Stochastic Computing
Artificial Neural Networks (ANNs) have found widespread applications in tasks
such as pattern recognition and image classification. However, hardware
implementations of ANNs using conventional binary arithmetic units are
computationally expensive, energy-intensive and have large area overheads.
Stochastic Computing (SC) is an emerging paradigm which replaces these
conventional units with simple logic circuits and is particularly suitable for
fault-tolerant applications. Spintronic devices, such as Magnetic Tunnel
Junctions (MTJs), are capable of replacing CMOS in memory and logic circuits.
In this work, we propose an energy-efficient use of MTJs, which exhibit
probabilistic switching behavior, as Stochastic Number Generators (SNGs), which
forms the basis of our NN implementation in the SC domain. Further, error
resilient target applications of NNs allow us to introduce Approximate
Computing, a framework wherein accuracy of computations is traded-off for
substantial reductions in power consumption. We propose approximating the
synaptic weights in our MTJ-based NN implementation, in ways brought about by
properties of our MTJ-SNG, to achieve energy-efficiency. We design an algorithm
that can perform such approximations within a given error tolerance in a
single-layer NN in an optimal way owing to the convexity of the problem
formulation. We then use this algorithm and develop a heuristic approach for
approximating multi-layer NNs. To give a perspective of the effectiveness of
our approach, a 43% reduction in power consumption was obtained with less than
1% accuracy loss on a standard classification problem, with 26% being brought
about by the proposed algorithm. Comment: Accepted in the 2017 IEEE/ACM International Conference on Low Power
Electronics and Design
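The core stochastic-computing primitive behind this abstract is easy to simulate: a stochastic number generator emits a bitstream whose density of 1s encodes a value, and a single AND gate then multiplies two such values. The sketch below (a software stand-in; the paper's SNG is a probabilistically switching MTJ, replaced here by ordinary pseudo-random bits) shows that primitive:

```python
import random

def sng(p, n, rng):
    """Stochastic number generator: a length-n bitstream with 1-density p.
    (In the paper this role is played by an MTJ whose switching probability
    is tuned to p; here we simply draw pseudo-random bits.)"""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=100_000, seed=0):
    """Unipolar stochastic multiplication: one AND gate per bit pair.
    The fraction of positions where both streams are 1 estimates a * b."""
    rng = random.Random(seed)
    sa, sb = sng(a, n, rng), sng(b, n, rng)
    return sum(x & y for x, y in zip(sa, sb)) / n

est = sc_multiply(0.8, 0.5)  # close to 0.4, up to sampling noise
```

The estimate's accuracy grows with stream length, which is exactly the accuracy/energy trade-off that approximate computing over such streams exploits.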
BEAUTY Powered BEAST
We study inference about the uniform distribution with the proposed binary
expansion approximation of uniformity (BEAUTY) approach. Through an extension
of the celebrated Euler's formula, we approximate the characteristic function
of any copula distribution with a linear combination of means of binary
interactions from marginal binary expansions. This novel characterization
enables a unification of many important existing tests through an approximation
from some quadratic form of symmetry statistics, where the deterministic weight
matrix characterizes the power properties of each test. To achieve a uniformly
high power, we study test statistics with data-adaptive weights through an
oracle approach, referred to as the binary expansion adaptive symmetry test
(BEAST). By utilizing the properties of the binary expansion filtration, we
show that the Neyman-Pearson test of uniformity can be approximated by an
oracle weighted sum of symmetry statistics. The BEAST with this oracle leads
all existing tests we considered in empirical power against all complex forms
of alternatives. This oracle therefore sheds light on the potential of
substantial improvements in power and on the form of optimal weights under each
alternative. By approximating this oracle with data-adaptive weights, we
develop the BEAST that improves the empirical power of many existing tests
against a wide spectrum of common alternatives while providing clear
interpretation of the form of non-uniformity upon rejection. We illustrate the
BEAST with a study of the relationship between the location and brightness of
stars.
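The building blocks of the BEAUTY characterization can be sketched for the one-dimensional case: recode each binary digit of the data as +/-1, and form standardized means of products of distinct digits. Under uniformity every such symmetry statistic is approximately standard normal, so large values signal non-uniformity. This toy version (hypothetical names; the paper's BEAST additionally chooses data-adaptive weights over these statistics, which is not shown here) illustrates the idea:

```python
import numpy as np
from itertools import combinations

def symmetry_stats(u, depth=2):
    """Binary-expansion symmetry statistics for testing uniformity on [0, 1).

    Under uniformity each binary digit (recoded as +/-1) and each product of
    distinct digits has mean 0, so each standardized statistic below is
    approximately N(0, 1)."""
    n = len(u)
    # bits[k-1] is +1 where the k-th binary digit of u is 0, -1 where it is 1
    bits = np.array([np.where(np.floor(u * 2**k).astype(int) % 2 == 0, 1.0, -1.0)
                     for k in range(1, depth + 1)])
    stats = {}
    for r in range(1, depth + 1):
        for idx in combinations(range(depth), r):
            stats[idx] = np.sqrt(n) * np.prod(bits[list(idx)], axis=0).mean()
    return stats

rng = np.random.default_rng(0)
stats = symmetry_stats(rng.random(10_000))  # all statistics small under H0
```

Feeding in skewed data (for example the squares of uniform draws) makes the first-digit statistic blow up, which is the kind of departure a weighted combination of these statistics is designed to detect.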
p-Adic valuation of weights in Abelian codes over ℤ(p^d)
Counting polynomial techniques introduced by Wilson are used to provide analogs of a theorem of McEliece. McEliece's original theorem relates the greatest power of p dividing the Hamming weights of words in cyclic codes over GF(p) to the length of the smallest unity-product sequence of nonzeroes of the code. Calderbank, Li, and Poonen presented analogs for cyclic codes over ℤ(2^d) using various weight functions (Hamming, Lee, and Euclidean weight, as well as counts of occurrences of a particular symbol). Some of these results were strengthened by Wilson, who also considered the alphabet ℤ(p^d) for p an arbitrary prime. These previous results, new strengthened versions, and generalizations are proved here in a unified and comprehensive fashion for the larger class of Abelian codes over ℤ(p^d) with p any prime. For Abelian codes over ℤ_4, combinatorial methods for use with counting polynomials are developed. These show that the analogs of McEliece's theorem obtained by Wilson (for Hamming weight, Lee weight, and symbol counts) and the analog obtained here for Euclidean weight are sharp in the sense that they give the maximum power of 2 that divides the weights of all the codewords whose Fourier transforms have a specified support.
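The quantity these McEliece-type theorems bound is the largest power of p dividing the weight of every nonzero codeword. A minimal sketch (my own toy example, not from the paper, with p = 2 and d = 1 rather than the abstract's general ℤ(p^d)) computes it by brute force for the binary [7,3] simplex code, whose nonzero codewords all have Hamming weight 4:

```python
from itertools import product

def v2(m):
    """2-adic valuation: the largest e such that 2**e divides m."""
    e = 0
    while m % 2 == 0:
        m //= 2
        e += 1
    return e

def cyclic_codewords(gen, n):
    """All codewords of the binary cyclic code of length n whose generator
    polynomial has coefficient list gen (lowest degree first)."""
    k = n - (len(gen) - 1)
    # rows encode x**s * g(x) for s = 0..k-1 (no wraparound: deg < n)
    rows = [[gen[j - s] if 0 <= j - s < len(gen) else 0 for j in range(n)]
            for s in range(k)]
    return [[sum(c * r[j] for c, r in zip(coeffs, rows)) % 2 for j in range(n)]
            for coeffs in product([0, 1], repeat=k)]

# [7,3] simplex code: g(x) = (x^7 + 1)/(x^3 + x + 1) = x^4 + x^2 + x + 1
words = cyclic_codewords([1, 1, 1, 0, 1], 7)
min_val = min(v2(sum(w)) for w in words if any(w))  # -> 2, since all weights are 4
```

McEliece's theorem predicts this exponent from the nonzeroes of the code without enumerating codewords; the enumeration here just makes the statement concrete.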
Dropout Inference in Bayesian Neural Networks with Alpha-divergences
To obtain uncertainty estimates with real-world Bayesian deep learning
models, practical inference approximations are needed. Dropout variational
inference (VI) for example has been used for machine vision and medical
applications, but VI can severely underestimate model uncertainty.
Alpha-divergences are alternative divergences to VI's KL objective, which are
able to avoid VI's uncertainty underestimation. But these are hard to use in
practice: existing techniques can only use Gaussian approximating
distributions, and require existing models to be changed radically, thus are of
limited use for practitioners. We propose a re-parametrisation of the
alpha-divergence objectives, deriving a simple inference technique which,
together with dropout, can be easily implemented with existing models by simply
changing the loss of the model. We demonstrate improved uncertainty estimates
and accuracy compared to VI in dropout networks. We study our model's epistemic
uncertainty far away from the data using adversarial images, showing that these
can be distinguished from non-adversarial images by examining our model's
uncertainty.
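The dropout-based inference this abstract builds on has a simple operational core: keep dropout active at test time and treat the spread of repeated stochastic forward passes as a measure of epistemic uncertainty. The toy numpy sketch below (untrained weights, hypothetical names) shows only that shared sampling mechanism, not the paper's alpha-divergence objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 1-hidden-layer regression net with fixed, pretend-trained weights.
W1 = rng.normal(size=(1, 50)); b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1)); b2 = np.zeros(1)

def forward(x, rng, p_drop=0.5):
    """One stochastic forward pass: dropout stays ON at test time."""
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop   # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=200):
    """Predictive mean and spread over T dropout samples; the spread is
    read as the model's epistemic uncertainty at x."""
    samples = np.stack([forward(x, rng) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

mean, std = mc_dropout_predict(np.array([[0.3]]))
```

The paper's contribution is a reparametrised training loss for this setup; at prediction time the sampling loop above is all that changes relative to a deterministic network.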