High-Speed Function Approximation using a Minimax Quadratic Interpolator
A table-based method for high-speed function approximation in single-precision floating-point format is presented in this paper. Our focus is the approximation of reciprocal, square root, square root reciprocal, exponentials, logarithms, trigonometric functions, powering (with a fixed exponent p), and special functions. The algorithm presented here combines table look-up, an enhanced minimax quadratic approximation, and an efficient evaluation of the second-degree polynomial (using a specialized squaring unit, redundant arithmetic, and multioperand addition). The execution times and area costs of an architecture implementing our method are estimated, showing that it achieves the fast execution times of linear approximation methods together with the reduced area requirements of other second-degree interpolation algorithms. Moreover, the enhanced minimax approximation, which through an iterative process takes into account the effect of rounding the polynomial coefficients to a finite size, allows a further reduction in the size of the look-up tables, making our method very suitable for the implementation of an elementary function generator in state-of-the-art DSPs or graphics processing units (GPUs).
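As a concrete, much-simplified sketch of the table-lookup-plus-quadratic scheme, the Python below approximates 1/x on [1, 2) with one coefficient-table lookup and a Horner evaluation of a degree-2 polynomial. Taylor coefficients at each interval midpoint stand in for the paper's enhanced minimax fit, and the iterative coefficient-rounding step is omitted:

```python
# Sketch of table-based quadratic interpolation for f(x) = 1/x on [1, 2).
# The paper uses an enhanced *minimax* fit with rounded coefficients; here,
# Taylor expansion at each interval midpoint is used as a simple stand-in.
K = 6                      # index bits: 2**K intervals
H = 2.0 ** -K              # interval width

def build_table():
    table = []
    for i in range(2 ** K):
        m = 1.0 + (i + 0.5) * H          # interval midpoint
        # Taylor coefficients of 1/x around m: 1/m - d/m^2 + d^2/m^3
        table.append((1.0 / m, -1.0 / m ** 2, 1.0 / m ** 3))
    return table

TABLE = build_table()

def recip(x):
    """Approximate 1/x for x in [1, 2) via table lookup + quadratic."""
    i = int((x - 1.0) / H)
    c0, c1, c2 = TABLE[i]
    d = x - (1.0 + (i + 0.5) * H)        # signed offset from the midpoint
    return c0 + d * (c1 + d * c2)        # Horner evaluation
```

With 2^6 intervals the worst-case error of this sketch is below 1e-6; the minimax fit with rounded coefficients would allow smaller tables for the same accuracy.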
(M,p,k)-friendly points: a table-based method for trigonometric function evaluation
We present a new way of approximating the sine and cosine functions by a few table look-ups and additions. It consists in first reducing the input range to a very small interval by using rotations with "(M, p, k)-friendly angles", proposed in this work, and then by using a bipartite table method on that small interval. An implementation of the method for the 24-bit case is described and compared with CORDIC. Roughly, the proposed scheme offers a speedup of 2 compared with an unfolded double-rotation radix-2 CORDIC.
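The rotate-then-evaluate structure can be sketched in Python. The rotation angles below are a plain uniform grid rather than the paper's (M, p, k)-friendly angles, and short polynomials replace the bipartite table on the small residual interval, but the angle-addition recombination is the same idea:

```python
import math

N = 256                                   # number of tabulated rotation angles
STEP = (math.pi / 2) / N
# Tabulated rotations: sin/cos of the angles theta_i = i * STEP.
SIN_T = [math.sin(i * STEP) for i in range(N + 1)]
COS_T = [math.cos(i * STEP) for i in range(N + 1)]

def sin_approx(x):
    """sin(x) for x in [0, pi/2]: rotate by the nearest tabulated angle,
    then evaluate the small remainder with short polynomials."""
    i = int(round(x / STEP))
    r = x - i * STEP                       # |r| <= STEP / 2: a tiny interval
    sr = r - r ** 3 / 6.0                  # sin on the small interval
    cr = 1.0 - r ** 2 / 2.0 + r ** 4 / 24.0  # cos on the small interval
    return SIN_T[i] * cr + COS_T[i] * sr   # angle-addition recombination
```

In hardware, the point of the friendly angles is that their sines and cosines have very short nonzero representations, so the "rotation" costs only a few additions.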
Oblivious Bounds on the Probability of Boolean Functions
This paper develops upper and lower bounds for the probability of Boolean
functions by treating multiple occurrences of variables as independent and
assigning them new individual probabilities. We call this approach dissociation
and give an exact characterization of optimal oblivious bounds, i.e. when the
new probabilities are chosen independent of the probabilities of all other
variables. Our motivation comes from the weighted model counting problem (or,
equivalently, the problem of computing the probability of a Boolean function),
which is #P-hard in general. By performing several dissociations, one can
transform a Boolean formula whose probability is difficult to compute, into one
whose probability is easy to compute, and which is guaranteed to provide an
upper or lower bound on the probability of the original formula by choosing
appropriate probabilities for the dissociated variables. Our new bounds shed
light on the connection between previous relaxation-based and model-based
approximations and unify them as concrete choices in a larger design space. We
also show how our theory allows a standard relational database management
system (DBMS) to both upper and lower bound hard probabilistic queries in
guaranteed polynomial time.
Comment: 34 pages, 14 figures; supersedes http://arxiv.org/abs/1105.281
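A minimal numeric illustration of dissociation by brute-force enumeration. The formula and probabilities below are our own toy example, and the choices for the dissociated probabilities follow the oblivious-bound recipe for a disjunctive dissociation as we read the paper (upper bound: keep p; lower bound: 1 - (1-p)^(1/d) for d occurrences):

```python
from itertools import product

def prob(formula, probs):
    """Exact probability of a Boolean formula by enumerating all assignments."""
    names = list(probs)
    total = 0.0
    for bits in product([0, 1], repeat=len(names)):
        a = dict(zip(names, bits))
        if formula(a):
            w = 1.0
            for v, b in a.items():
                w *= probs[v] if b else 1.0 - probs[v]
            total += w
    return total

# Toy formula (x AND y) OR (x AND z): x occurs in d = 2 disjuncts.
p = {'x': 0.5, 'y': 0.5, 'z': 0.5}
exact = prob(lambda a: (a['x'] and a['y']) or (a['x'] and a['z']), p)

def dissoc(p1, p2):
    """Probability after dissociating the two occurrences of x into x1, x2."""
    q = {'x1': p1, 'x2': p2, 'y': 0.5, 'z': 0.5}
    return prob(lambda a: (a['x1'] and a['y']) or (a['x2'] and a['z']), q)

upper = dissoc(0.5, 0.5)                       # oblivious upper bound: p_i = p
lower = dissoc(1 - 0.5 ** 0.5, 1 - 0.5 ** 0.5)  # lower: p_i = 1 - (1-p)^(1/2)
```

For p = 0.5 the exact probability is 0.375, the upper-bound dissociation gives 0.4375, and the lower-bound choice gives about 0.271, bracketing the true value as claimed.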
Cycles and 1-unconditional matrices
We characterize the 1-unconditional subsequences of the canonical basis
(e_rc) of elementary matrices in the Schatten-von-Neumann class S^p. The set I
of couples (r,c) must be the set of edges of a bipartite graph without cycles
of even length 4<=l<=p if p is an even integer, and without cycles at all if p
is a positive real number that is not an even integer. In the latter case, I is
even a Varopoulos set of V-interpolation of constant 1. We also study the
metric unconditional approximation property for the space S^p_I spanned by
(e_rc)_{(r,c)\in I} in S^p.
Comment: 29 pages. This new version computes explicitly certain unconditionality constants, shows how our results generalize Varopoulos' work on V-Sidon sets, and investigates the metric unconditional approximation property in the same context.
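When p is not an even integer, the condition reduces to the set I of matrix positions, viewed as edges of a bipartite graph on row and column indices, containing no cycle at all. That acyclicity test is easy to check with a union-find sketch (our own illustrative helper, not from the paper):

```python
def is_forest(edges):
    """Check that the bipartite graph on row/column indices is acyclic,
    i.e. the edge set (r, c) pairs form a forest."""
    parent = {}

    def find(u):
        while parent.setdefault(u, u) != u:
            parent[u] = parent[parent[u]]   # path compression
            u = parent[u]
        return u

    for r, c in edges:
        a, b = find(('r', r)), find(('c', c))
        if a == b:
            return False                    # this edge closes a cycle
        parent[a] = b                       # union the two components
    return True
```

For example, positions {(1,1), (1,2), (2,2)} are acyclic, but adding (2,1) creates a 4-cycle, the shortest cycle possible in a bipartite graph.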
A Fast and Low-Complexity Operator for the Computation of the Arctangent of a Complex Number
The computation of the arctangent of a complex number, i.e., the atan2 function, is frequently needed in hardware systems that could profit from an optimized operator. In this brief, we present a novel method to compute the atan2 function and a hardware architecture for its implementation. The method is based on a first stage that performs a coarse approximation of the atan2 function and a second stage that improves the output accuracy by means of a lookup table. We present results for fixed-point implementations in a field-programmable gate array device, all of them guaranteeing last-bit accuracy, which provide an advantage in latency, speed, and use of resources when compared with well-established fixed-point options.
This work was supported by the Spanish Ministerio de Economia y Competitividad and FEDER under Grant TEC2015-70858-C2-2-R.
Torres Carot, V.; Valls Coquillat, J. (2017). A Fast and Low-Complexity Operator for the Computation of the Arctangent of a Complex Number. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 25(9), 2663-2667. https://doi.org/10.1109/TVLSI.2017.2700519
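The two-stage structure (coarse approximation, then a lookup-table correction) can be imitated in software. The sketch below is not the brief's fixed-point architecture: it uses octant reduction, a linear coarse stage atan(t) ≈ t, and a residual correction table of our own choosing, just to show how a small LUT recovers most of the lost accuracy:

```python
import math

K = 8
N = 1 << K
# Correction LUT: residual atan(t) - t sampled at interval midpoints, t in [0, 1].
CORR = [math.atan((i + 0.5) / N) - (i + 0.5) / N for i in range(N)]

def atan2_approx(y, x):
    """atan2 via octant reduction + coarse linear term + LUT correction."""
    if x == 0 and y == 0:
        return 0.0
    ax, ay = abs(x), abs(y)
    t = min(ax, ay) / max(ax, ay)            # reduce to t in [0, 1]
    i = min(int(t * N), N - 1)
    a = t + CORR[i]                          # approx atan(t) in [0, pi/4]
    if ay > ax:
        a = math.pi / 2 - a                  # reflect across 45 degrees
    if x < 0:
        a = math.pi - a                      # left half-plane
    return -a if y < 0 else a                # lower half-plane
```

With an 8-bit correction table this stays within about 1e-3 rad of atan2; the brief's fixed-point design instead guarantees last-bit accuracy at the chosen word length.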
Multipartite table methods
A unified view of most previous table-lookup-and-addition methods (bipartite tables, SBTM, STAM, and multipartite methods) is presented. This unified view allows a more accurate computation of the error entailed by these methods, which enables a wider design-space exploration, leading to tables smaller than the best previously published ones by up to 50 percent. The synthesis of these multipartite architectures on Virtex FPGAs is also discussed. Compared to other methods involving multipliers, the multipartite approach offers the best speed/area tradeoff for precisions up to 16 bits. A reference implementation is available at www.ens-lyon.fr/LIP/Arenaire/
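For readers unfamiliar with table-lookup-and-addition methods, here is a small bipartite-style sketch with our own parameter choices, for f(x) = 2^x on [0, 1). The input's bits are split into three fields x0|x1|x2 and f is approximated as A(x0, x1) + B(x0, x2), so two tables of 2^8 entries each replace one table of 2^12 entries:

```python
import math

A_BITS, B_BITS, C_BITS = 4, 4, 4
N = A_BITS + B_BITS + C_BITS              # 12 input bits in total

f = lambda x: 2.0 ** x
fp = lambda x: math.log(2.0) * 2.0 ** x   # derivative of f

D2 = 2.0 ** -A_BITS / 2                   # midpoint of the x1+x2 segment
D3 = 2.0 ** -(A_BITS + B_BITS) / 2        # midpoint of the x2 segment

# Table A absorbs x0 and x1; table B holds a slope correction in x2,
# with the slope frozen at the midpoint of each x0 interval.
TA = [[f(i0 * 2.0 ** -A_BITS + i1 * 2.0 ** -(A_BITS + B_BITS) + D3)
       for i1 in range(1 << B_BITS)] for i0 in range(1 << A_BITS)]
TB = [[fp(i0 * 2.0 ** -A_BITS + D2) * (i2 * 2.0 ** -N - D3)
       for i2 in range(1 << C_BITS)] for i0 in range(1 << A_BITS)]

def bipartite(i):
    """Approximate 2**(i / 2**N) with two table lookups and one addition."""
    i0 = i >> (B_BITS + C_BITS)
    i1 = (i >> C_BITS) & ((1 << B_BITS) - 1)
    i2 = i & ((1 << C_BITS) - 1)
    return TA[i0][i1] + TB[i0][i2]
```

Multipartite methods generalize this by splitting the low bits into several slope tables; the paper's contribution is a unified error analysis that lets all the field widths be chosen optimally.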
On the number of matrices and a random matrix with prescribed row and column sums and 0-1 entries
We consider the set Sigma(R,C) of all mxn matrices having 0-1 entries and
prescribed row sums R=(r_1, ..., r_m) and column sums C=(c_1, ..., c_n). We
prove an asymptotic estimate for the cardinality |Sigma(R, C)| via the solution
to a convex optimization problem. We show that if Sigma(R, C) is sufficiently
large, then a random matrix D in Sigma(R, C) sampled from the uniform
probability measure in Sigma(R,C) with high probability is close to a
particular matrix Z=Z(R,C) that maximizes the sum of entropies of entries among
all matrices with row sums R, column sums C and entries between 0 and 1.
Similar results are obtained for 0-1 matrices with prescribed row and column
sums and assigned zeros in some positions.
Comment: 26 pages; proofs simplified, results strengthened.
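For small sizes, the cardinality |Sigma(R, C)| that the asymptotic estimate targets can be checked by brute force; a naive enumeration sketch:

```python
from itertools import product

def count_matrices(R, C):
    """Brute-force |Sigma(R, C)|: 0-1 matrices with row sums R, column sums C."""
    m, n = len(R), len(C)
    count = 0
    for bits in product([0, 1], repeat=m * n):
        rows = [bits[i * n:(i + 1) * n] for i in range(m)]
        if all(sum(r) == ri for r, ri in zip(rows, R)) and \
           all(sum(r[j] for r in rows) == C[j] for j in range(n)):
            count += 1
    return count
```

For example, with R = (2, 1) and C = (1, 1, 1) there are exactly 3 such 2x3 matrices (choose which two columns carry the first row's ones; the second row is then forced). The enumeration is exponential in mn, which is precisely why the paper's asymptotic estimate via convex optimization is useful.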
The measurement postulates of quantum mechanics are operationally redundant
Understanding the core content of quantum mechanics requires us to
disentangle the hidden logical relationships between the postulates of this
theory. Here we show that the mathematical structure of quantum measurements,
the formula for assigning outcome probabilities (Born's rule) and the
post-measurement state-update rule, can be deduced from the other quantum
postulates, often referred to as "unitary quantum mechanics", and the
assumption that ensembles on finite-dimensional Hilbert spaces are
characterised by finitely many parameters. This is achieved by taking an
operational approach to physical theories, and using the fact that the manner
in which a physical system is partitioned into subsystems is a subjective
choice of the observer, and hence should not affect the predictions of the
theory. In contrast to other approaches, our result does not assume that
measurements are related to operators or bases, it does not rely on the
universality of quantum mechanics, and it is independent of the interpretation
of probability.
Comment: This is a post-peer-review, pre-copyedit version of an article published in Nature Communications. The final authenticated version is available online at: http://dx.doi.org/10.1038/s41467-019-09348-
Crossing the transcendental divide: from translation surfaces to algebraic curves
We study constructing an algebraic curve from a Riemann surface given via a
translation surface, which is a collection of finitely many polygons in the
plane with sides identified by translation. We use the theory of discrete
Riemann surfaces to give an algorithm for approximating the Jacobian variety of
a translation surface whose polygon can be decomposed into squares. We first
implement the algorithm in the case of shaped polygons where the algebraic
curve is already known. The algorithm is also implemented in any genus for
specific examples of Jenkins-Strebel representatives, a dense family of
translation surfaces that, until now, lived squarely on the analytic side of
the transcendental divide between Riemann surfaces and algebraic curves. Using
Riemann theta functions, we give numerical experiments and resulting
conjectures up to genus 5.
Comment: final version; 33 pages, 7 figures, comments welcome.