Computability and Algorithmic Complexity in Economics
This is an outline of the origins and development of the way computability theory and algorithmic complexity theory were incorporated into economic and finance theories. We try to place, in the context of the development of computable economics, some of the classics of the subject as well as those that have, from time to time, been credited with having contributed to the advancement of the field. Speculative thoughts on where the frontiers of computable economics are, and how to move towards them, conclude the paper. In a precise sense - both historically and analytically - it would not be an exaggeration to claim that both the origins of computable economics and its frontiers are defined by two classics, both by Banach and Mazur: that one-page masterpiece by Banach and Mazur ([5]), built on the foundations of Turing's own classic, and the unpublished Mazur conjecture of 1928, and its unpublished proof by Banach ([38], ch. 6 & [68], ch. 1, #6). For the undisputed original classic of computable economics is Rabin's effectivization of the Gale-Stewart game ([42]; [16]); the frontiers, as I see them, are defined by recursive analysis and constructive mathematics, underpinning computability over the computable and constructive reals and providing computable foundations for the economist's Marshallian penchant for curve-sketching ([9]; [19]; and, in general, the contents of Theoretical Computer Science, Vol. 219, Issue 1-2). The former work has its roots in the Banach-Mazur game (cf. [38], especially p. 30), at least in one reading of it; the latter in ([5]), as well as other, earlier, contributions, not least by Brouwer.
Computational Complexity of Atomic Chemical Reaction Networks
Informally, a chemical reaction network is "atomic" if each reaction may be
interpreted as the rearrangement of indivisible units of matter. There are
several reasonable definitions formalizing this idea. We investigate the
computational complexity of deciding whether a given network is atomic
according to each of these definitions.
Our first definition, primitive atomic, which requires each reaction to
preserve the total number of atoms, is shown to be equivalent to mass
conservation. Since it is known that it can be decided in polynomial time
whether a given chemical reaction network is mass-conserving, the equivalence
gives an efficient algorithm to decide primitive atomicity.
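As a rough illustration of the efficient check mentioned above (a minimal sketch under assumed conventions, not the authors' algorithm or code): mass conservation can be decided as a linear-programming feasibility question, asking for a strictly positive mass assignment to the species under which every reaction preserves total mass. The example network, species names, and use of scipy below are invented for illustration.

import numpy as np
from scipy.optimize import linprog

# Hypothetical network: species X, Y, Z with reactions X + Y -> Z and Z -> 2X.
species = ["X", "Y", "Z"]
reactions = [
    ({"X": 1, "Y": 1}, {"Z": 1}),  # (reactants, products)
    ({"Z": 1}, {"X": 2}),
]

def is_mass_conserving(species, reactions):
    idx = {s: i for i, s in enumerate(species)}
    rows = []
    for reactants, products in reactions:
        # Net stoichiometry of the reaction; a mass vector m is conserved
        # by the reaction exactly when this row dotted with m is zero.
        row = np.zeros(len(species))
        for s, c in reactants.items():
            row[idx[s]] -= c
        for s, c in products.items():
            row[idx[s]] += c
        rows.append(row)
    # Feasibility LP: find m with (net stoichiometry) . m = 0 and m >= 1;
    # m >= 1 is equivalent to m > 0 here, since positive solutions scale.
    res = linprog(c=np.zeros(len(species)),
                  A_eq=np.array(rows), b_eq=np.zeros(len(reactions)),
                  bounds=[(1, None)] * len(species), method="highs")
    return res.success

print(is_mass_conserving(species, reactions))  # True for this toy network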
Another definition, subset atomic, further requires that all atoms are
species. We show that deciding whether a given network is subset atomic is in
NP, and the problem "is a network subset atomic with respect to a
given atom set" is strongly NP-complete.
A third definition, reachably atomic, studied by Adleman, Gopalkrishnan et
al., further requires that each species has a sequence of reactions splitting
it into its constituent atoms. We show that there is a polynomial-time
algorithm to decide whether a given network is reachably atomic, improving
upon the result of Adleman et al. that the problem is decidable. We
show that the reachability problem for reachably atomic networks is
PSPACE-complete.
Finally, we demonstrate equivalence relationships between our definitions and
some special cases of another existing definition of atomicity due to Gnacadja.
Model checking: Algorithmic verification and debugging
Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis, working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model (representing, for example, a hardware or software design) satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result, many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. (Moshe Y. Vardi, Editor-in-Chief)
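As a toy illustration of the idea (a minimal sketch, not the laureates' CTL/LTL algorithms or any production tool): checking a simple safety property over an explicit finite-state model reduces to a reachability search, and a violating path serves as the counterexample execution. The transition system and property below are invented for the example.

from collections import deque

def check_safety(initial, transitions, is_bad):
    # Breadth-first search over reachable states; returns None if no
    # reachable state violates the property, else a counterexample path.
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        state = queue.popleft()
        if is_bad(state):
            path = [state]
            while parent[path[-1]] is not None:  # rebuild the execution
                path.append(parent[path[-1]])
            return list(reversed(path))
        for nxt in transitions.get(state, ()):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None

# Hypothetical 4-state model; reaching "crash" violates the safety property.
transitions = {"init": ["run"], "run": ["run", "done", "crash"], "done": []}
print(check_safety(["init"], transitions, lambda s: s == "crash"))
# -> ['init', 'run', 'crash']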
Computations of Uniform Recurrence Equations Using Minimal Memory Size
We consider a system of uniform recurrence equations (URE) of dimension one. We show how its computation can be carried out using minimal memory size with several synchronous processors. This result is then applied to register minimization for digital circuits and parallel computation of task graphs.
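As a rough, purely sequential illustration of the memory question (a minimal sketch, not the paper's multiprocessor schedule or its optimality argument): a one-dimensional uniform recurrence can be evaluated in a circular buffer whose size equals its largest dependency distance, rather than storing the whole sequence. The recurrence and initial values below are invented for the example.

def evaluate(n_steps, init=(1, 1, 1)):
    # Toy recurrence x[n] = x[n-1] + x[n-3]; the largest dependency
    # distance is 3, so a buffer of 3 cells suffices.
    d = 3
    buf = list(init)                # holds x[n-3], x[n-2], x[n-1]
    for n in range(d, n_steps):
        buf[n % d] = buf[(n - 1) % d] + buf[(n - 3) % d]  # overwrite the stale cell
    return buf[(n_steps - 1) % d]

print(evaluate(10))  # -> 19 with these initial values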
Classical computing, quantum computing, and Shor's factoring algorithm
This is an expository talk written for the Bourbaki Seminar. After a brief introduction, Section 1 discusses in the categorical language the structure of classical deterministic computations. Basic notions of complexity, including the P/NP problem, are reviewed. Section 2 introduces the notion of quantum parallelism and explains the main issues of quantum computing. Section 3 is devoted to four quantum subroutines: initialization, quantum computing of classical Boolean functions, quantum Fourier transform, and Grover's search algorithm. The central Section 4 explains Shor's factoring algorithm. Section 5 relates Kolmogorov's complexity to the spectral properties of computable functions. The Appendix contributes to the prehistory of quantum computing.
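To make the factoring discussion a bit more concrete (a minimal sketch of the classical skeleton only, with invented helper names; it is not the talk's presentation, and the brute-force order search below is exactly what the quantum period-finding subroutine of Section 4 replaces):

import math
import random

def order(a, N):
    # Smallest r > 0 with a^r = 1 (mod N); brute force, hence exponential.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, attempts=20):
    for _ in range(attempts):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                  # lucky draw: a already shares a factor
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)     # a nontrivial square root of 1 mod N...
            if y != N - 1:            # ...unless it is the trivial root -1
                f = math.gcd(y - 1, N)
                if 1 < f < N:
                    return f
    return None

print(shor_classical_part(15))  # prints 3 or 5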
Non-Constructivity in Security Proofs
In the field of cryptography, one generally obtains assurances for the
security of a cryptographic protocol by giving a reductionist security
proof, which consists of a reduction from breaking a mathematical
problem (that is well-studied and widely believed to be intractable)
to the breaking of the cryptographic protocol. While such reductions
are generally constructive, some authors give non-constructive
reductions (also called non-uniform reductions) in order to reduce
the tightness gap of the reduction. However, in order to assess the
concrete security that the proof provides, one also needs to assess
the intractability of the underlying mathematical problem against
non-constructive attacks. Unfortunately, there has been very little
work in the literature on non-constructive attacks on these problems,
and sometimes non-constructive attacks are found that are much faster
than their constructive counterparts. Thus, it is sometimes very
difficult to obtain meaningful security assurances about a cryptographic
protocol from a non-constructive reductionist security proof.
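To make the tightness issue concrete (with invented numbers, not figures from the thesis): suppose the underlying problem is believed to require about 2^128 operations to solve, and the reduction turning a protocol attacker into a problem solver loses a multiplicative factor of 2^40. The proof then only rules out attacks on the protocol costing fewer than roughly 2^128 / 2^40 = 2^88 operations. If, moreover, a non-constructive attack solves the underlying problem in about 2^100 operations, the guaranteed level drops further, to roughly 2^100 / 2^40 = 2^60 operations, which may fall well short of the intended security target.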
In this thesis, we examine three instances of non-constructive security
proofs for cryptographic protocols in the literature:
(1) a password-based key derivation function; (2) an HMAC-related message
authentication code scheme; and (3) a
round-optimal blind signature scheme