Implicit complexity for coinductive data: a characterization of corecurrence
We propose a framework for reasoning about programs that manipulate
coinductive data as well as inductive data. Our approach is based on using
equational programs, which support a seamless combination of computation and
reasoning, and using productivity (fairness) as the fundamental assertion,
rather than bisimulation. The latter is expressible in terms of the former. As
an application to this framework, we give an implicit characterization of
corecurrence: a function is definable using corecurrence iff its productivity
is provable using coinduction for formulas in which data-predicates do not
occur negatively. This is an analog, albeit in weaker form, of a
characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar
induction, TCS 318, 2004].
Comment: In Proceedings DICE 2011, arXiv:1201.034
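The distinction the abstract draws between productivity and bisimulation can be illustrated with a minimal sketch, here in Python rather than an equational language (the translation into generators is an assumption for illustration only): a corecursive stream definition is productive if every element is delivered after finitely many computation steps.

```python
from itertools import islice

# Productive corecursive definition, in equational style:
#   fib = 0 : 1 : zipWith (+) fib (tail fib)
# Each next() performs finite work before yielding, so any finite
# prefix of the infinite stream can be observed: the stream is productive.
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# By contrast, a "definition" like s = tail s is not productive:
# asking for any element would diverge without ever yielding.

print(list(islice(fib(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Productivity is thus an assertion about observable finite prefixes, which is why bisimulation (equality of all observations) can be expressed in terms of it.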
Natural scene statistics mediate the perception of image complexity
Humans are sensitive to complexity and regularity in patterns. The subjective
perception of pattern complexity is correlated to algorithmic
(Kolmogorov-Chaitin) complexity as defined in computer science, but also to the
frequency of naturally occurring patterns. However, the possible mediational
role of natural frequencies in the perception of algorithmic complexity remains
unclear. Here we reanalyze Hsu et al. (2010) through a mediational analysis,
and complement their results with a new experiment. We conclude that human
perception of complexity seems partly shaped by natural scene statistics,
thereby establishing a link between the perception of complexity and the effect
of natural scene statistics.
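A mediational analysis of the kind described can be sketched as follows. The data and coefficients below are synthetic inventions for illustration (not the Hsu et al. data), and the sketch uses the standard linear decomposition: total effect = direct effect + indirect (mediated) effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical synthetic data: X = algorithmic complexity of a pattern,
# M = its frequency in natural scenes (candidate mediator),
# Y = subjectively perceived complexity.
X = rng.normal(size=n)
M = -0.7 * X + rng.normal(scale=0.5, size=n)      # complex patterns are rarer
Y = 0.2 * X - 0.6 * M + rng.normal(scale=0.5, size=n)

def slope(x, y):
    # simple OLS slope with intercept
    return np.polyfit(x, y, 1)[0]

c = slope(X, Y)   # total effect of X on Y
a = slope(X, M)   # effect of X on the mediator M

# direct effect of X (c_prime) and effect of M (b), controlling for each other
A = np.column_stack([X, M, np.ones(n)])
c_prime, b, _ = np.linalg.lstsq(A, Y, rcond=None)[0]

indirect = a * b  # mediated (indirect) effect
# For linear OLS models the decomposition is exact: c == c_prime + a*b
print(f"total={c:.3f} direct={c_prime:.3f} indirect={indirect:.3f}")
```

A large indirect effect relative to the direct effect is what a "mediational role of natural frequencies" would look like in this linear setting.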
Sublogarithmic uniform Boolean proof nets
Using a proofs-as-programs correspondence, Terui was able to compare two
models of parallel computation: Boolean circuits and proof nets for
multiplicative linear logic. Mogbil et al. gave a logspace translation
allowing us to compare their computational power as uniform complexity classes.
This paper presents a novel translation in AC0 and focuses on a simpler
restricted notion of uniform Boolean proof nets. We can then encode
constant-depth circuits and compare complexity classes below logspace, which
were out of reach with the previous translations.
Comment: In Proceedings DICE 2011, arXiv:1201.034
Ageing and the Tax Implied in Public Pension Schemes: Simulations for Selected OECD Countries
A key figure that can be applied to measuring inter-generational imbalances involved in existing public pension schemes is given by the "implicit tax" that is levied on each generation's life-time income through participation in these systems. The implicit tax arises from the fact that, quite generally, pension benefits received fall short of actuarial returns to contributions (i.e., "explicit" social security taxes) paid while actively working. If, in spite of large-scale demographic ageing, public pension schemes continue to be run based on current rules, implicit tax rates will increase sharply for generations who are currently young compared to those who are already approaching retirement. In the paper, this is illustrated for the cases of France, Germany, Italy, Japan, Sweden, the UK, and the US. The results are based on simulations covering representative individuals in all age cohorts born from 1940 to 2000. At the same time, there are striking differences across countries regarding both the level of implicit taxes and their time paths over successive age cohorts, which can be attributed to different ageing processes as well as to different institutional features of national pension systems. In addition, we study the impact of pension reforms that were recently enacted or are currently under way, thus demonstrating how effective the measures taken are in terms of smoothing the inter-generational profile of implicit tax rates.
Keywords: demographic ageing, public pensions, pension reform, inter-generational redistribution, international comparisons
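The implicit tax described above can be sketched numerically. Every figure below (discount rate, wage, contribution rate, benefit level, career and retirement lengths) is an invented illustration, not a value from the paper:

```python
# Stylized implicit tax of a pay-as-you-go pension scheme: the shortfall of
# the present value of benefits relative to the present value of
# contributions, expressed as a share of the present value of lifetime wages.
def present_value(flows, r):
    # t = 0 is the first working year
    return sum(f / (1 + r) ** t for t, f in enumerate(flows))

r = 0.03                              # market discount rate (assumption)
wage = [30_000] * 40                  # 40 working years, flat earnings
contrib = [0.2 * w for w in wage]     # 20% contribution rate
pension = [0] * 40 + [18_000] * 20    # benefits in 20 retirement years

pv_contrib = present_value(contrib, r)
pv_benefit = present_value(pension, r)
implicit_tax = (pv_contrib - pv_benefit) / present_value(wage, r)
print(f"implicit tax rate: {implicit_tax:.1%}")
```

With these invented numbers the benefits' present value falls short of the contributions', so the participant pays a positive implicit tax on lifetime income; demographic ageing worsens this gap for younger cohorts by raising contribution rates or cutting benefits.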
Global semantic typing for inductive and coinductive computing
Inductive and coinductive types are commonly construed as ontological
(Church-style) types, denoting canonical data-sets such as natural numbers,
lists, and streams. For various purposes, notably the study of programs in the
context of global semantics, it is preferable to think of types as semantical
properties (Curry-style). Intrinsic theories were introduced in the late 1990s
to provide a purely logical framework for reasoning about programs and their
semantic types. We extend them here to data given by any combination of
inductive and coinductive definitions. This approach is of interest because it
fits tightly with syntactic, semantic, and proof theoretic fundamentals of
formal logic, with potential applications in implicit computational complexity
as well as extraction of programs from proofs. We prove a Canonicity Theorem,
showing that the global definition of program typing, via the usual (Tarskian)
semantics of first-order logic, agrees with their operational semantics in the
intended model. Finally, we show that every intrinsic theory is interpretable
in a conservative extension of first-order arithmetic. This means that
quantification over infinite data objects does not lead, on its own, to
proof-theoretic strength beyond that of Peano Arithmetic. Intrinsic theories
are perfectly amenable to formulas-as-types Curry-Howard morphisms, and were
used to characterize major computational complexity classes. Their extensions
described here have similar potential, some of which has already been applied.
Three Puzzles on Mathematics, Computation, and Games
In this lecture I will talk about three mathematical puzzles involving
mathematics and computation that have preoccupied me over the years. The first
puzzle is to understand the amazing success of the simplex algorithm for linear
programming. The second puzzle is about errors made when votes are counted
during elections. The third puzzle is: are quantum computers possible?Comment: ICM 2018 plenary lecture, Rio de Janeiro, 36 pages, 7 Figure
- …