
    Theorems on Efficient Argument Reductions

    A commonly used argument reduction technique in elementary function computations begins with two positive floating point numbers α and γ that approximate (usually irrational, but not necessarily) numbers 1/C and C, e.g., C = 2π for trigonometric functions and C = ln 2 for e^x. Given an argument x to the function of interest, the technique extracts z defined by x·α = z + ς with z = k·2^(−N) and |ς| ≤ 2^(−N−1), where k and N are integers and N ≥ 0 is preselected, and then computes u = x − z·γ. Usually z·γ needs more bits than the working precision provides for its significand, and thus the exact value of x − z·γ may not be representable by a floating point number of the same precision. This causes a performance penalty when the working precision is the highest available on the underlying hardware, since considerable extra work is then needed to get all the bits of x − z·γ right. This paper presents theorems showing that, under mild conditions that are easily met on today's computer hardware and that still allow α ≈ 1/C and γ ≈ C to almost the full working precision, x − z·γ is a floating point number of the same precision. An algorithmic procedure based on the theorems is given. The results enhance performance, in particular on machines with hardware support for fused multiply-add (fma) instructions.
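
    The reduction step can be sketched in a few lines of Python (IEEE doubles); the choices C = 2π, N = 0 and the plain rounding below are illustrative assumptions, not the paper's algorithm, and the exactness check only demonstrates the property the theorems are about.

        # Illustrative sketch of the reduction step (not the paper's algorithm).
        # Python floats are IEEE doubles; C = 2*pi and N = 0 are arbitrary choices.
        import math
        from fractions import Fraction

        GAMMA = 2.0 * math.pi      # gamma ~ C
        ALPHA = 1.0 / GAMMA        # alpha ~ 1/C
        N = 0                      # z = k * 2**(-N); with N = 0, k = round(x * alpha)

        def reduce_argument(x):
            z = math.floor(x * ALPHA * 2 ** N + 0.5) / 2 ** N    # x*alpha = z + s, |s| <= 2**(-N-1)
            exact = Fraction(x) - Fraction(z) * Fraction(GAMMA)  # exact value of x - z*gamma
            # Is the exact value itself a double?  That is the property the theorems
            # guarantee under their hypotheses; an fma then computes it in one step.
            return z, float(exact), Fraction(float(exact)) == exact

        print(reduce_argument(1000.0))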

    Space Efficiency of Propositional Knowledge Representation Formalisms

    We investigate the space efficiency of Propositional Knowledge Representation (PKR) formalisms. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge A is the size of the shortest formula of F that represents A. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures and the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to these classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms with the same time complexity do not necessarily belong to the same space efficiency class.
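
    As a toy illustration (in Python, not one of the paper's formal measures) of the idea that the same knowledge can have representations of very different sizes, compare an explicit list of models with a short formula defining the same set:

        # Toy illustration only: the set of assignments with an even number of true
        # variables, represented (1) as an explicit list of models and (2) as a
        # linear-size formula.  The paper's compactness measures are formal
        # counterparts of this size comparison.
        from itertools import product

        n = 10
        variables = [f"x{i}" for i in range(n)]

        models = {m for m in product([False, True], repeat=n) if sum(m) % 2 == 0}
        formula = "not (" + " ^ ".join(variables) + ")"   # ^ is xor on booleans

        # Both representations define the same piece of knowledge.
        assert all(eval(formula, dict(zip(variables, m))) == (m in models)
                   for m in product([False, True], repeat=n))

        print(len(models), "explicit models versus", len(formula), "characters of formula")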

    Suszko's Problem: Mixed Consequence and Compositionality

    Suszko's problem is the problem of finding the minimal number of truth values needed to semantically characterize a syntactic consequence relation. Suszko proved that every Tarskian consequence relation can be characterized using only two truth values. Malinowski showed that this number can equal three if some of Tarski's structural constraints are relaxed. By so doing, Malinowski introduced a case of so-called mixed consequence, allowing the notion of a designated value to vary between the premises and the conclusions of an argument. In this paper we give a more systematic perspective on Suszko's problem and on mixed consequence. First, we prove general representation theorems relating structural properties of a consequence relation to their semantic interpretation, uncovering the semantic counterpart of substitution-invariance, and establishing that (intersective) mixed consequence is fundamentally the semantic counterpart of the structural property of monotonicity. We use those to derive maximum-rank results proved recently in a different setting by French and Ripley, as well as by Blasio, Marcos and Wansing, for logics with various structural properties (reflexivity, transitivity, none, or both). We strengthen these results into exact rank results for non-permeable logics (roughly, those which distinguish the role of premises and conclusions). We discuss the underlying notion of rank, and the associated reduction proposed independently by Scott and Suszko. As emphasized by Suszko, that reduction fails to preserve compositionality in general, meaning that the resulting semantics is no longer truth-functional. We propose a modification of that notion of reduction, allowing us to prove that over compact logics with what we call regular connectives, rank results are maintained even if we request the preservation of truth-functionality and additional semantic properties. Keywords: Suszko's thesis; truth value; logical consequence; mixed consequence; compositionality; truth-functionality; many-valued logic; algebraic logic; substructural logics; regular connective.
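
    A small self-contained illustration (not taken from the paper) of mixed consequence over three truth values: letting the designated values differ between premises and conclusions already changes which structural properties hold.

        # Illustration (not from the paper): mixed consequence over {0, 1/2, 1}.
        # Premises and conclusions may use different sets of designated values.
        from fractions import Fraction
        from itertools import product

        ZERO, HALF, ONE = Fraction(0), Fraction(1, 2), Fraction(1)
        VALUES = (ZERO, HALF, ONE)

        STRICT = {ONE}            # "strictly" designated
        TOLERANT = {HALF, ONE}    # "tolerantly" designated

        def valid(premises, conclusion, d_prem, d_conc, n_atoms):
            # An argument is valid iff every valuation designating all premises
            # (relative to d_prem) designates the conclusion (relative to d_conc).
            return all(conclusion(v) in d_conc
                       for v in product(VALUES, repeat=n_atoms)
                       if all(p(v) in d_prem for p in premises))

        p = lambda v: v[0]   # the bare atom p

        print(valid([p], p, STRICT, STRICT, 1))    # True:  strict-strict (Tarskian) consequence is reflexive
        print(valid([p], p, TOLERANT, STRICT, 1))  # False: tolerant-to-strict mixed consequence is not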

    Amortized Dynamic Cell-Probe Lower Bounds from Four-Party Communication

    This paper develops a new technique for proving amortized, randomized cell-probe lower bounds on dynamic data structure problems. We introduce a new randomized nondeterministic four-party communication model that enables "accelerated", error-preserving simulations of dynamic data structures. We use this technique to prove an $\Omega(n(\log n/\log\log n)^2)$ cell-probe lower bound for the dynamic 2D weighted orthogonal range counting problem (2D-ORC) with $n/\mathrm{poly}\log n$ updates and $n$ queries, that holds even for data structures with $\exp(-\tilde{\Omega}(n))$ success probability. This result not only proves the highest amortized lower bound to date, but is also tight in the strongest possible sense, as a matching upper bound can be obtained by a deterministic data structure with worst-case operational time. This is the first demonstration of a "sharp threshold" phenomenon for dynamic data structures. Our broader motivation is that cell-probe lower bounds for exponentially small success probability facilitate reductions from dynamic to static data structures. As a proof of concept, we show that a slightly strengthened version of our lower bound would imply an $\Omega((\log n/\log\log n)^2)$ lower bound for the static 3D-ORC problem with $O(n\log^{O(1)}n)$ space. Such a result would give a near-quadratic improvement over the highest known static cell-probe lower bound, and would break the long-standing $\Omega(\log n)$ barrier for static data structures.
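
    To fix ideas, here is a naive Python reference for the dynamic 2D weighted orthogonal range counting interface (one standard rectangle-query formulation, assumed here for illustration); each operation costs O(n), whereas the paper concerns cell-probe-efficient structures for the same interface.

        # Naive reference implementation of dynamic 2D weighted orthogonal range
        # counting (2D-ORC): updates insert a weighted point, queries return the
        # total weight inside an axis-aligned rectangle.  O(n) per query here;
        # the paper's lower and upper bounds concern far more efficient structures.
        class Naive2DORC:
            def __init__(self):
                self.points = []                     # (x, y, weight) triples

            def update(self, x, y, w):
                self.points.append((x, y, w))

            def query(self, x1, y1, x2, y2):
                return sum(w for px, py, w in self.points
                           if x1 <= px <= x2 and y1 <= py <= y2)

        ds = Naive2DORC()
        ds.update(1, 2, 5.0)
        ds.update(3, 4, 2.5)
        print(ds.query(0, 0, 3, 3))                  # 5.0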

    Normal form approach to unconditional well-posedness of nonlinear dispersive PDEs on the real line

    In this paper, we revisit the infinite iteration scheme of normal form reductions, introduced by the first and second authors (with Z. Guo), in constructing solutions to nonlinear dispersive PDEs. Our main goal is to present a simplified approach to this method. More precisely, we study normal form reductions in an abstract form and reduce multilinear estimates of arbitrarily high degrees to successive applications of basic trilinear estimates. As an application, we prove unconditional well-posedness of canonical nonlinear dispersive equations on the real line. In particular, we implement this simplified approach to an infinite iteration of normal form reductions in the context of the cubic nonlinear Schrödinger equation (NLS) and the modified KdV equation (mKdV) on the real line and prove unconditional well-posedness in $H^s(\mathbb{R})$ with (i) $s \geq \frac{1}{6}$ for the cubic NLS and (ii) $s > \frac{1}{4}$ for the mKdV. Our normal form approach also allows us to construct weak solutions to the cubic NLS in $H^s(\mathbb{R})$, $0 \leq s < \frac{1}{6}$, and distributional solutions to the mKdV in $H^{1/4}(\mathbb{R})$ (with some uniqueness statements). To appear in Ann. Fac. Sci. Toulouse Math.
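
    For reference, the two model equations discussed above, written in one common convention on the real line (sign and normalization of the nonlinearity vary across the literature):

        % Cubic nonlinear Schroedinger equation (NLS) on the real line
        i\,\partial_t u + \partial_x^2 u = \pm |u|^2 u, \qquad u : \mathbb{R}_t \times \mathbb{R}_x \to \mathbb{C};
        % Modified Korteweg--de Vries equation (mKdV)
        \partial_t u + \partial_x^3 u = \pm\, \partial_x\bigl(u^3\bigr), \qquad u : \mathbb{R}_t \times \mathbb{R}_x \to \mathbb{R}.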

    Efficient prediction for linear and nonlinear autoregressive models

    Conditional expectations given past observations in stationary time series are usually estimated directly by kernel estimators, or by plugging in kernel estimators for transition densities. We show that, for linear and nonlinear autoregressive models driven by independent innovations, appropriate smoothed and weighted von Mises statistics of residuals estimate conditional expectations at better parametric rates and are asymptotically efficient. The proof is based on a uniform stochastic expansion for smoothed and weighted von Mises processes of residuals. We consider, in particular, estimation of conditional distribution functions and of conditional quantile functions. Published at http://dx.doi.org/10.1214/009053606000000812 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
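
    A stripped-down Python sketch of the residual-based idea for a linear AR(1) model (unweighted and unsmoothed, so only a simplification of the estimators studied above; all function names are illustrative):

        # Simplified sketch: estimate E[h(X_{t+1}) | X_t = x] in a linear AR(1)
        # model X_t = phi * X_{t-1} + eps_t by averaging h over "fitted value plus
        # estimated residual".  The paper's estimators add smoothing and weighting.
        import random

        def simulate_ar1(n, phi=0.6, seed=0):
            random.seed(seed)
            x, path = 0.0, []
            for _ in range(n):
                x = phi * x + random.gauss(0.0, 1.0)
                path.append(x)
            return path

        def residual_based_estimate(path, x, h):
            lagged, current = path[:-1], path[1:]
            phi_hat = sum(a * b for a, b in zip(lagged, current)) / sum(a * a for a in lagged)
            residuals = [b - phi_hat * a for a, b in zip(lagged, current)]
            # Von Mises statistic of the residuals.
            return sum(h(phi_hat * x + e) for e in residuals) / len(residuals)

        path = simulate_ar1(2000)
        # Conditional distribution function: estimate P(X_{t+1} <= 1 | X_t = 0.5).
        print(residual_based_estimate(path, 0.5, lambda y: float(y <= 1.0)))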

    A No-Go Theorem for Derandomized Parallel Repetition: Beyond Feige-Kilian

    In this work we show a barrier towards proving a randomness-efficient parallel repetition, a promising avenue for achieving many tight inapproximability results. Feige and Kilian (STOC'95) proved an impossibility result for randomness-efficient parallel repetition for two prover games with small degree, i.e., when each prover has only few possibilities for the question of the other prover. In recent years, there have been indications that randomness-efficient parallel repetition (also called derandomized parallel repetition) might be possible for games with large degree, circumventing the impossibility result of Feige and Kilian. In particular, Dinur and Meir (CCC'11) construct games with large degree whose repetition can be derandomized using a theorem of Impagliazzo, Kabanets and Wigderson (SICOMP'12). However, obtaining derandomized parallel repetition theorems that would yield optimal inapproximability results has remained elusive. This paper presents an explanation for the current impasse by proving a limitation on derandomized parallel repetition. We formalize two properties which we call "fortification-friendliness" and "yields robust embeddings." We show that any proof of derandomized parallel repetition achieving almost-linear blow-up cannot both (a) be fortification-friendly and (b) yield robust embeddings. Unlike Feige and Kilian, we do not require the small degree assumption. Given that virtually all existing proofs of parallel repetition, including the derandomized parallel repetition result of Dinur and Meir, share these two properties, our no-go theorem highlights a major barrier to achieving almost-linear derandomized parallel repetition.

    Type classes for efficient exact real arithmetic in Coq

    Floating point operations are fast, but require continuous effort on the part of the user in order to ensure that the results are correct. This burden can be shifted away from the user by providing a library of exact analysis in which the computer handles the error estimates. Previously, we [Krebbers/Spitters 2011] provided a fast implementation of the exact real numbers in the Coq proof assistant. Our implementation improved on an earlier implementation by O'Connor by using type classes to describe an abstract specification of the underlying dense set from which the real numbers are built. In particular, we used dyadic rationals built from Coq's machine integers to obtain an immediate hundred-fold speed-up of the basic operations. This article is a substantially expanded version of [Krebbers/Spitters 2011] in which the implementation is extended in various ways. First, we implement and verify the sine and cosine functions. Secondly, we create an additional implementation of the dense set based on Coq's fast rational numbers. Thirdly, we extend the hierarchy to capture order on undecidable structures, whereas it was previously limited to decidable structures. This hierarchy, based on type classes, allows us to share theory on the naturals, integers, rationals, dyadics, and reals in a convenient way. Finally, we obtain another dramatic speed-up by avoiding evaluation of termination proofs at runtime.
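
    A toy model of the underlying idea, in Python rather than Coq (so none of this is the paper's library): a real number is represented by a function that, given a precision n, returns a rational approximation within 2^(-n), and the error bookkeeping for each operation is done once, inside the library.

        # Toy model of exact real arithmetic (Python, not Coq): a real is a function
        # from a precision n to a rational within 2**(-n) of the true value, so the
        # user never handles error estimates directly.
        from fractions import Fraction

        def from_fraction(q):
            q = Fraction(q)
            return lambda n: q                      # exact at every precision

        def add(x, y):
            # To be within 2**(-n) of the sum, query both terms at precision n + 1.
            return lambda n: x(n + 1) + y(n + 1)

        def sqrt2():
            def approx(n):
                lo, hi = Fraction(1), Fraction(2)   # bisection on [1, 2]
                while hi - lo > Fraction(1, 2 ** n):
                    mid = (lo + hi) / 2
                    lo, hi = (mid, hi) if mid * mid <= 2 else (lo, mid)
                return lo
            return approx

        x = add(sqrt2(), from_fraction(Fraction(1, 3)))   # sqrt(2) + 1/3 as an exact real
        print(float(x(50)))                               # approximation good to 2**(-50)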