28 research outputs found
On Sound Relative Error Bounds for Floating-Point Arithmetic
State-of-the-art static analysis tools for verifying finite-precision code
compute worst-case absolute error bounds on numerical errors. These are,
however, often not good estimates of accuracy, as they do not take into account
the magnitude of the computed values. Relative errors, which measure errors
relative to the value's magnitude, are thus preferable. While today's tools do
report relative error bounds, these are merely computed via absolute errors and
thus not necessarily tight or more informative. Furthermore, whenever the
computed value is close to zero on part of the domain, the tools do not report
any relative error estimate at all. Surprisingly, the quality of relative error
bounds computed by today's tools has not been systematically studied or
reported to date. In this paper, we investigate how state-of-the-art static
techniques for computing sound absolute error bounds can be used, extended and
combined for the computation of relative errors. Our experiments on a standard
benchmark set show that computing relative errors directly, as opposed to via
absolute errors, is often beneficial and can provide error estimates up to six
orders of magnitude tighter, i.e. more accurate. We also show that interval
subdivision, another commonly used technique to reduce over-approximations, has
less benefit when computing relative errors directly, but it can help to
alleviate the inherent difficulty of estimating relative errors when the
computed value is close to zero.
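The gap between the two approaches can be illustrated with a toy rounding model: a single final rounding with unit roundoff EPS. This is a deliberate simplification for illustration, not the sound static analysis the paper studies, and the function and domain below are made up for the example:

```python
import math

EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

def abs_error_bound(f, xs):
    # Worst-case absolute error under a one-rounding model:
    # |fl(f(x)) - f(x)| <= EPS * |f(x)| <= EPS * max |f(x)|.
    return EPS * max(abs(f(x)) for x in xs)

def rel_via_abs(f, xs):
    # Relative bound derived from the absolute bound: divide by the
    # smallest magnitude on the domain, which degenerates near zero.
    smallest = min(abs(f(x)) for x in xs)
    return abs_error_bound(f, xs) / smallest if smallest > 0 else math.inf

def rel_direct(f, xs):
    # Direct relative bound: maximize the pointwise ratio
    # |error(x)| / |f(x)| <= EPS, which stays tight even where f is small.
    return max(EPS * abs(f(x)) / abs(f(x)) for x in xs if f(x) != 0.0)

xs = [i / 100 for i in range(1, 200)]  # samples of the domain (0, 2)
f = lambda x: x - 1.0                  # crosses zero at x = 1
```

Here `rel_via_abs` degenerates to infinity because f vanishes at x = 1, while `rel_direct` still reports EPS; this mirrors the observation above that going through absolute errors can lose many orders of magnitude and fails entirely near zero.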
Implementation and Synthesis of Math Library Functions
Achieving speed and accuracy for math library functions like exp, sin, and
log is difficult. This is because low-level implementation languages like C do
not help math library developers catch mathematical errors, build
implementations incrementally, or separate high-level and low-level decision
making. This ultimately puts development of such functions out of reach for all
but the most experienced experts. To address this, we introduce MegaLibm, a
domain-specific language for implementing, testing, and tuning math library
implementations. MegaLibm is safe, modular, and tunable. Implementations in
MegaLibm can automatically detect mathematical mistakes like sign flips via
semantic wellformedness checks, and components like range reductions can be
implemented in a modular, composable way, simplifying implementations. Once the
high-level algorithm is done, low-level choices like working precisions and
evaluation schemes can be adjusted through orthogonal tuning parameters to
achieve the desired speed and accuracy. MegaLibm also enables math library
developers to work interactively, compiling, testing, and tuning their
implementations and invoking tools like Sollya and type-directed synthesis to
complete components and synthesize entire implementations. MegaLibm can express
8 state-of-the-art math library implementations with comparable speed and
accuracy to the original C code, and can synthesize 5 variations and 3
from-scratch implementations with minimal guidance.
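The range reductions mentioned above refer to a standard libm pattern; as a rough plain-Python illustration of that pattern (not MegaLibm's DSL, whose syntax is not shown here), an exp implementation splits into a reduction, a polynomial kernel, and a reconstruction step:

```python
import math

LN2 = math.log(2.0)

def exp_kernel(r):
    # Polynomial kernel for exp on the reduced domain |r| <= ln(2)/2.
    # A degree-6 Taylor polynomial is used here for simplicity; a real
    # library would substitute a minimax polynomial (e.g. from Sollya).
    return 1.0 + r * (1.0 + r * (1/2 + r * (1/6 + r * (1/24
        + r * (1/120 + r * (1/720))))))

def my_exp(x):
    # Range reduction: write x = k*ln(2) + r with |r| <= ln(2)/2,
    # then reconstruct exp(x) = 2**k * exp(r).
    k = round(x / LN2)
    r = x - k * LN2
    return math.ldexp(exp_kernel(r), k)
```

Each stage is independent: swapping the kernel or the reduction does not affect the others, which is the kind of modular, composable structure the abstract describes.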
Distributed Shared State with History Maintenance
Shared mutable state is challenging to maintain in a distributed environment. We develop a technique, based on the Operational Transform, that guides independent agents into producing consistent states through inconsistent but equivalent histories of operations. Our technique, history maintenance, extends and streamlines the Operational Transform for general distributed systems. We describe how to use history maintenance to create eventually-consistent, strongly-consistent, and hybrid systems whose correctness is easy to reason about.
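As background, the classical Operational Transform that history maintenance builds on can be sketched for the simplest case, two concurrent insertions into a string. This is a minimal sketch, not the paper's technique: the `has_priority` flag stands in for the usual site-id tie-break, and real systems must also handle deletions and longer histories:

```python
def transform_insert(op, against, has_priority):
    # Adjust insert `op` so it can be applied after the concurrent
    # insert `against` has already been applied. `has_priority`
    # breaks ties when both insert at the same position.
    pos, text = op
    other_pos, other_text = against
    if pos < other_pos or (pos == other_pos and has_priority):
        return op
    return (pos + len(other_text), text)

def apply_op(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

# Two agents start from "abc"; each applies its own op, then the
# other's op transformed against it.
a, b = (1, "X"), (2, "Y")
site1 = apply_op(apply_op("abc", a), transform_insert(b, a, False))
site2 = apply_op(apply_op("abc", b), transform_insert(a, b, True))
```

Both sites converge to "aXbYc" despite applying the operations in different orders; the histories are inconsistent but equivalent, in the sense used above.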
Small Proofs from Congruence Closure
Satisfiability Modulo Theory (SMT) solvers and equality saturation engines
must generate proof certificates from e-graph-based congruence closure
procedures to enable verification and conflict clause generation. Smaller proof
certificates speed up these activities. Though the problem of generating proofs
of minimal size is known to be NP-complete, existing proof minimization
algorithms for congruence closure generate unnecessarily large proofs and
introduce asymptotic overhead over the core congruence closure procedure. In
this paper, we introduce an O(n^5) time algorithm which generates optimal
proofs under a new relaxed "proof tree size" metric that directly bounds proof
size. We then relax this approach further to a practical O(n log n) greedy
algorithm which generates small proofs with no asymptotic overhead. We
implemented our techniques in the egg equality saturation toolkit, yielding the
first certifying equality saturation engine. We show that our greedy approach
in egg quickly generates substantially smaller proofs than the state-of-the-art
Z3 SMT solver on a corpus of 3760 benchmarks.
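For context, proofs come out of congruence closure via a proof forest layered over union-find: each union edge remembers the asserted equation that caused it, and an explain operation replays the edges between two terms. The sketch below shows only that underlying idea (roughly in the spirit of the classic proof-producing union-find), not the paper's proof-tree-size metric or its optimization algorithms:

```python
class ProofForest:
    # Union-find whose edges remember the asserted equation that merged
    # two classes; explain(a, b) replays the edges between a and b.
    def __init__(self):
        self.parent = {}  # node -> (next_node, reason), or None at a root

    def add(self, x):
        self.parent.setdefault(x, None)

    def root(self, x):
        while self.parent[x] is not None:
            x = self.parent[x][0]
        return x

    def union(self, a, b, reason):
        self.add(a)
        self.add(b)
        if self.root(a) == self.root(b):
            return
        # Re-root a's tree at a by reversing the path a -> old root,
        # then hang a under b with the new edge labelled `reason`.
        prev, edge = a, self.parent[a]
        self.parent[a] = None
        while edge is not None:
            nxt, r = edge
            nxt_edge = self.parent[nxt]
            self.parent[nxt] = (prev, r)
            prev, edge = nxt, nxt_edge
        self.parent[a] = (b, reason)

    def _path(self, x):
        # Nodes and edge labels from x up to its root.
        nodes, reasons = [x], []
        while self.parent[x] is not None:
            x, r = self.parent[x]
            nodes.append(x)
            reasons.append(r)
        return nodes, reasons

    def explain(self, a, b):
        na, ra = self._path(a)
        nb, rb = self._path(b)
        assert na[-1] == nb[-1], "terms are not known to be equal"
        # Trim the shared tail above the lowest common ancestor.
        while len(na) > 1 and len(nb) > 1 and na[-2] == nb[-2]:
            na.pop(); ra.pop()
            nb.pop(); rb.pop()
        return ra + rb[::-1]

uf = ProofForest()
uf.union("a", "b", "eq1")
uf.union("b", "c", "eq2")
```

Here `uf.explain("a", "c")` yields the certificate ["eq1", "eq2"]: the list of asserted equations a proof checker must chain to justify a = c. Smaller certificates of this kind are exactly what the paper's algorithms minimize.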
Superconducting Quantum Interference in Fractal Percolation Films. Problem of 1/f Noise
An oscillatory magnetic field dependence of the DC voltage is observed when a
low-frequency current flows through superconducting Sn-Ge thin-film composites
near the percolation threshold. The paper also studies the experimental
realisations of temporal voltage fluctuations in these films. Both the
structure of the voltage oscillations against the magnetic field and the time
series of the electric "noise" possess a fractal pattern. With the help of the
fractal analysis procedure, the fluctuations observed have been shown to be
neither a noise with a large number of degrees of freedom, nor the realisations
of a well-defined dynamic system. On the contrary, the model of voltage
oscillations induced by the weak fluctuations of a magnetic field of arbitrary
nature gives the most appropriate description of the phenomenon observed. The
imaging function of such a transformation possesses a fractal nature, thus
leading to power-law spectra of voltage fluctuations even for the simplest
types of magnetic fluctuations including the monochromatic ones. Thus, the
paper suggests a new universal mechanism of a "1/f noise" origin. It consists
in a passive transformation of any natural fluctuations with a fractal-type
transformation function.
Toward a Standard Benchmark Format and Suite for Floating-Point Analysis
We introduce FPBench, a standard benchmark format for validation and
optimization of numerical accuracy in floating-point computations. FPBench is a
first step toward addressing an increasing need in our community for
comparisons and combinations of tools from different application domains. To
this end, FPBench provides a basic floating-point benchmark format and accuracy
measures that allow different floating-point tools to be compared and composed.
We describe the FPBench format and measures and show that FPBench expresses
benchmarks from recent papers in the literature, by building an initial
benchmark suite drawn from those papers. We intend for FPBench to grow into a
standard benchmark suite for the members of the floating-point tools research
community.
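For concreteness, FPBench's core format expresses each benchmark as an S-expression carrying the computation together with metadata such as a name, a precondition, and a working precision. The benchmark below is illustrative only, written in that style rather than drawn from the actual suite:

```lisp
(FPCore (x)
  :name "one over x plus one"
  :precision binary64
  :pre (< 0 x)
  (/ 1 (+ x 1)))
```

The precondition and precision annotations are what let different tools agree on the domain and number format before comparing accuracy results.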