The set theory of arithmetic decomposition
Journal Article
The Set Theory of Arithmetic Decomposition is a method for designing complex addition/subtraction circuits at any radix using strictly positional, sign-local number systems. The specification of an addition circuit is simply an equation that describes the inputs and the outputs as weighted digit sets. Design is done by applying a set of rewrite rules known as decomposition operators to the equation. The order in which, and the weight at which, each operator is applied maps directly to a physical implementation, including both multiple-level logic and connectivity. The method is readily automated and has been used to design several higher-radix arithmetic circuits. It is possible to compute the cost of a given adder before the detailed design is complete.
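The flavour of such weighted digit-set equations can be illustrated with a toy check. The helper below is my own sketch, not the paper's notation: a full adder realises the rewrite {0,1} + {0,1} + {0,1} → 2·{0,1} + {0,1}, and both sides of the equation denote the same set of representable values.

```python
from itertools import product

def weighted_sumset(*terms):
    """Set of values representable by a sum of (weight, digit_set) terms."""
    scaled = [[w * d for d in digits] for w, digits in terms]
    return {sum(combo) for combo in product(*scaled)}

# Full-adder equation: three weight-1 bits in, a weight-2 carry and a
# weight-1 sum out. Both sides represent exactly {0, 1, 2, 3}.
inputs = weighted_sumset((1, {0, 1}), (1, {0, 1}), (1, {0, 1}))
outputs = weighted_sumset((2, {0, 1}), (1, {0, 1}))
assert inputs == outputs == {0, 1, 2, 3}
```

A rewrite step is valid exactly when it preserves this value set, which is what makes the cost of a design computable before the gate-level details are fixed.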
Probabilistic Aspects of Dirichlet Series
We investigate and generalise some properties of a family of probability distributions
closely related to the Riemann zeta function. Random variables
that have the property that divisibility by a set of distinct primes occurs
as a set of independent events are characterised in terms of functions that
are well known in number theory. We refer to random variables with this
independence property as Khinchin random variables.
In characterising the collection of Khinchin random variables, we make a
connection between the probabilistic theory of discrete distributions and the
number-theoretic concept of Dirichlet series. We outline some interesting
correspondences between discrete probability distributions and arithmetic
functions. A subset of the Khinchin random variables have infinitely divisible
logarithms. We establish the necessity of a condition, already known to be
sufficient, that ensures infinite divisibility.
Some Khinchin random variables admit a multiplicative decomposition
into a product of random prime numbers. The number of terms in such
a product follows a Poisson distribution. We explore two instances of this
decomposition: one related to the zeta distribution, and the other related to
the so-called prime zeta function.
We use the zeta distribution to derive known results from number theory
via probabilistic methods, and provide a generalisation of the distribution
for other unique factorisation domains.
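The divisibility-independence property is easy to check numerically for the zeta distribution, where P(X = n) is proportional to n^{-s}. The sketch below is my own illustration (truncating the series at N terms, parameters chosen arbitrarily): it confirms that P(p | X) = p^{-s} and that divisibility by the distinct primes 2 and 3 behaves as a pair of independent events.

```python
def div_prob(d, s=2.0, N=100000):
    """P(d divides X) for X ~ zeta(s), estimated by truncating the series."""
    total = sum(n ** -s for n in range(1, N + 1))
    divisible = sum(n ** -s for n in range(1, N + 1) if n % d == 0)
    return divisible / total

p2, p3, p6 = div_prob(2), div_prob(3), div_prob(6)
assert abs(p2 - 1 / 4) < 1e-3    # P(p | X) = p^{-s} with s = 2
assert abs(p6 - p2 * p3) < 1e-3  # divisibility by 2 and by 3 is independent
```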
Sharply o-minimal structures and sharp cellular decomposition
Sharply o-minimal structures (denoted ♯o-minimal) are a strict subclass of
the o-minimal structures, aimed at capturing some finer features of structures
arising from algebraic geometry and Hodge theory. Sharp o-minimality associates
to each definable set a pair of integers known as its format and degree,
similar to the ambient dimension and degree in the algebraic case; gives
bounds on the growth of these quantities under the logical operations; and
allows one to control the geometric complexity of a set in terms of its format
and degree. These axioms have significant implications for the arithmetic
properties of definable sets -- for example, ♯o-minimality was recently used
by the authors to settle Wilkie's conjecture on rational points in
R_exp-definable sets.
In this paper we develop some basic theory of sharply o-minimal structures.
We introduce the notions of reduction and equivalence on the class of
♯o-minimal structures. We give three variants of the definition of
♯o-minimality, of increasing strength, and show that they all agree up to
reduction. We also consider the problem of ``sharp cell decomposition'', i.e.
cell decomposition with good control on the number of the cells and their
formats and degrees. We show that every ♯o-minimal structure can be reduced to
one admitting sharp cell decomposition, and use this to prove bounds on the
Betti numbers of definable sets in terms of format and degree.
Global semantic typing for inductive and coinductive computing
Inductive and coinductive types are commonly construed as ontological
(Church-style) types, denoting canonical data-sets such as natural numbers,
lists, and streams. For various purposes, notably the study of programs in the
context of global semantics, it is preferable to think of types as semantical
properties (Curry-style). Intrinsic theories were introduced in the late 1990s
to provide a purely logical framework for reasoning about programs and their
semantic types. We extend them here to data given by any combination of
inductive and coinductive definitions. This approach is of interest because it
fits tightly with syntactic, semantic, and proof theoretic fundamentals of
formal logic, with potential applications in implicit computational complexity
as well as extraction of programs from proofs. We prove a Canonicity Theorem,
showing that the global definition of program typing, via the usual (Tarskian)
semantics of first-order logic, agrees with their operational semantics in the
intended model. Finally, we show that every intrinsic theory is interpretable
in a conservative extension of first-order arithmetic. This means that
quantification over infinite data objects does not lead, on its own, to
proof-theoretic strength beyond that of Peano Arithmetic. Intrinsic theories
are perfectly amenable to formulas-as-types Curry-Howard morphisms, and were
used to characterize major computational complexity classes. Their extensions
described here have similar potential, which has already been applied.
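The Church/Curry contrast can be illustrated informally. The sketch below is my own analogy, not the paper's formalism: Python generators stand in for coinductive streams, ordinary values for inductive data, and plain predicates for Curry-style semantic types that a program may or may not satisfy.

```python
from itertools import islice

# Curry-style: a "type" is a semantic property that a value may satisfy,
# not an intrinsic label attached to the value at construction time.
def is_nat(x):
    return isinstance(x, int) and x >= 0

# Inductive data: finite values built bottom-up (here, plain naturals).
def double(n):
    return n + n

# Coinductive data: a stream, defined by what it produces on observation.
def nats():
    n = 0
    while True:
        yield n
        n += 1

# "double maps nat to nat" and "nats is a stream of nats" are semantic
# typing judgements, checked here by testing the property on observations.
assert all(is_nat(double(n)) for n in range(10))
assert all(is_nat(n) for n in islice(nats(), 10))
```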