Complexity Hierarchies and Higher-order Cons-free Term Rewriting
Constructor rewriting systems are said to be cons-free if, roughly,
constructor terms in the right-hand sides of rules are subterms of the
left-hand sides; the computational intuition is that rules cannot build new
data structures. In programming language research, cons-free languages have
been used to characterize hierarchies of computational complexity classes; in
term rewriting, cons-free first-order TRSs have been used to characterize the
class PTIME.
We investigate cons-free higher-order term rewriting systems, the complexity
classes they characterize, and how these depend on the type order of the
systems. We prove that, for every K ≥ 1, left-linear cons-free systems
with type order K characterize E^K TIME if unrestricted evaluation is used
(i.e., the system does not have a fixed reduction strategy).
The main difference with prior work in implicit complexity is that (i) our
results hold for non-orthogonal term rewriting systems with no assumptions on
reduction strategy, (ii) we consequently obtain much larger classes for each
type order (E^K TIME versus EXP^(K-1) TIME), and (iii) results for cons-free
term rewriting systems have previously only been obtained for K = 1, and with
additional syntactic restrictions besides cons-freeness and left-linearity.
Our results are among the first implicit characterizations of the hierarchy E
= E^1 TIME ⊆ E^2 TIME ⊆ ... Our work confirms prior
results that having full non-determinism (via overlapping rules) does not
directly allow for characterization of non-deterministic complexity classes
like NE. We also show that non-determinism makes the classes characterized
highly sensitive to minor syntactic changes like admitting product types or
non-left-linear rules.
Comment: extended version of a paper submitted to FSCD 2016. arXiv admin note: substantial text overlap with arXiv:1604.0893
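To make the cons-free restriction above concrete, here is a minimal sketch in functional notation (an illustration under our own assumptions, not an example taken from the paper): no right-hand side builds a new constructor term, each rule only inspects subterms of its left-hand side, and the nullary values True and False serve as final answers.

    -- Cons-free style: leq only destructs its arguments; no rule builds a new S-term.
    data Nat = Z | S Nat

    leq :: Nat -> Nat -> Bool
    leq Z     _     = True      -- leq(0, y)       -> true
    leq (S _) Z     = False     -- leq(s(x), 0)    -> false
    leq (S x) (S y) = leq x y   -- leq(s(x), s(y)) -> leq(x, y)

    main :: IO ()
    main = print (leq (S Z) (S (S Z)))   -- 1 <= 2, prints True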
Complexity Hierarchies and Higher-Order Cons-Free Rewriting
Constructor rewriting systems are said to be cons-free if, roughly,
constructor terms in the right-hand sides of rules are subterms of constructor
terms in the left-hand side; the computational intuition is that rules cannot
build new data structures. It is well-known that cons-free programming
languages can be used to characterize computational complexity classes, and
that cons-free first-order term rewriting can be used to characterize the set
of polynomial-time decidable sets.
We investigate cons-free higher-order term rewriting systems, the complexity
classes they characterize, and how these depend on the order of the types used
in the systems. We prove that, for every k ≥ 1, left-linear cons-free
systems with type order k characterize E^k TIME if arbitrary evaluation is
used (i.e., the system does not have a fixed reduction strategy).
The main difference with prior work in implicit complexity is that (i) our
results hold for non-orthogonal term rewriting systems with possible rule
overlaps and no assumptions about reduction strategy, (ii) results for such
term rewriting systems have previously only been obtained for k = 1, and with
additional syntactic restrictions on top of cons-freeness and left-linearity.
Our results are apparently among the first implicit characterizations of the
hierarchy E = E^1 TIME ⊆ E^2 TIME ⊆ ... Our work
confirms prior results that having full non-determinism (via overlaps of rules)
does not directly allow characterization of non-deterministic complexity
classes like NE. We also show that non-determinism makes the classes
characterized highly sensitive to minor syntactic changes such as admitting
product types or non-left-linear rules.
Comment: Extended version (with appendices) of a paper published in FSCD 2016
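For intuition about two of the syntactic features mentioned above (these particular rules are standard textbook illustrations, not necessarily the ones used in the paper): a pair of overlapping rules yields non-deterministic choice, while a non-left-linear rule repeats a variable on its left-hand side.

    \mathsf{choose}(x, y) \to x \qquad \mathsf{choose}(x, y) \to y \qquad \text{(overlapping rules: either argument may be returned)}
    \mathsf{eq}(x, x) \to \mathsf{true} \qquad\qquad\qquad\qquad\quad \text{(non-left-linear: the variable } x \text{ occurs twice on the left)}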
Weak convergence and uniform normalization in infinitary rewriting
We study infinitary term rewriting systems containing finitely many rules. For these, we show that if a weakly convergent reduction is not strongly convergent, it contains a term that reduces to itself in one step (but the step itself need not be part of the reduction). Using this result, we prove the perhaps surprising result that, for any orthogonal system with finitely many rules, the system is weakly normalizing under weak convergence iff it is strongly normalizing under weak convergence iff it is weakly normalizing under strong convergence iff it is strongly normalizing under strong convergence.
As further corollaries, we derive a number of new results for weakly convergent rewriting: Systems with finitely many rules enjoy unique normal forms, and acyclic orthogonal systems are confluent. Our results suggest that it may be possible to recover some of the positive results for strongly convergent rewriting in the setting of weak convergence, if systems with finitely many rules are considered. Finally, we give a number of counterexamples showing failure of most of the results when infinite sets of rules are allowed.
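A standard illustration of the gap between the two convergence notions used above (not an example from the paper): with the single rule a -> a, the length-omega reduction

    a \to a \to a \to a \to \cdots

is weakly convergent, since the sequence of terms is constant and therefore has limit a, but it is not strongly convergent, because the depths of the contracted redexes stay at 0 rather than tending to infinity. Note that a is exactly a term that reduces to itself in one step, as guaranteed by the first result above.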
The Expressive Power of One Variable Used Once: The Chomsky Hierarchy and First-Order Monadic Constructor Rewriting
We study the implicit computational complexity of constructor term rewriting systems where every function and constructor symbol is unary or nullary. Surprisingly, adding simple and natural constraints to rule formation yields classes of systems that accept exactly the four classes of languages in the Chomsky hierarchy.
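As a small, hypothetical illustration of the monadic format (every symbol unary or nullary) rendered as a functional program, and not the paper's own formalism, the following system decides the regular language of strings over {a, b} containing an even number of a's:

    -- All constructors (A, B, End) and defined symbols (evenAs, oddAs) are unary or nullary.
    data Str = A Str | B Str | End

    evenAs, oddAs :: Str -> Bool
    evenAs (A xs) = oddAs xs    -- evenAs(a(x)) -> oddAs(x)
    evenAs (B xs) = evenAs xs   -- evenAs(b(x)) -> evenAs(x)
    evenAs End    = True        -- evenAs(end)  -> true
    oddAs  (A xs) = evenAs xs
    oddAs  (B xs) = oddAs xs
    oddAs  End    = False

    main :: IO ()
    main = print (evenAs (A (B (A End))))   -- the string "aba" has two a's, prints True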
Evaluation Measures for Relevance and Credibility in Ranked Lists
Recent discussions on alternative facts, fake news, and post-truth politics
have motivated research on creating technologies that allow people not only to
access information, but also to assess the credibility of the information
presented to them by information retrieval systems. Whereas technology is in
place for filtering information according to relevance and/or credibility, no
single measure currently exists for evaluating the accuracy or precision (and
more generally effectiveness) of both the relevance and the credibility of
retrieved results. One obvious way of doing so is to measure relevance and
credibility effectiveness separately, and then consolidate the two measures
into one. There are at least two problems with such an approach: (I) it is not
certain that the same criteria are applied to the evaluation of both relevance
and credibility (and applying different criteria introduces bias to the
evaluation); (II) many more and richer measures exist for assessing relevance
effectiveness than for assessing credibility effectiveness (hence risking
further bias).
Motivated by the above, we present two novel types of evaluation measures
that are designed to measure the effectiveness of both relevance and
credibility in ranked lists of retrieval results. Experimental evaluation on a
small human-annotated dataset (that we make freely available to the research
community) shows that our measures are expressive and intuitive in their
interpretation.
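The sketch below shows the "obvious" two-step approach that the abstract argues against (measure relevance and credibility effectiveness separately, then consolidate), not the measures proposed in the paper; the cutoff, the binary labels and the plain averaging are illustrative assumptions.

    -- Naive consolidation: precision at k computed twice, once per label, then averaged.
    data Doc = Doc { relevant :: Bool, credible :: Bool }

    precisionAt :: Int -> (Doc -> Bool) -> [Doc] -> Double
    precisionAt k label ranked =
      fromIntegral (length (filter label (take k ranked))) / fromIntegral k

    consolidated :: Int -> [Doc] -> Double
    consolidated k ranked =
      (precisionAt k relevant ranked + precisionAt k credible ranked) / 2

    main :: IO ()
    main = print (consolidated 3 [Doc True True, Doc True False, Doc False True])   -- prints 0.666...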
The exact hardness of deciding derivational and runtime complexity
For any class C of computable total functions satisfying some mild conditions, we prove that the following decision problems are complete for the existential part of the second level of the arithmetical hierarchy: (A) Deciding whether a term rewriting system (TRS for short) has runtime complexity bounded by a function in C. (B) Deciding whether a TRS has derivational complexity bounded by a function in C.
In particular, the problems of deciding whether a TRS has polynomially (exponentially) bounded runtime complexity (respectively derivational complexity) are complete for this level of the arithmetical hierarchy. This places deciding polynomial derivational or runtime complexity of TRSs at the same level as deciding nontermination or nonconfluence of TRSs. We proceed to show that the related problem of deciding for a single computable function f whether a TRS has runtime complexity bounded from above by f is complete for the universal part of the first level of the arithmetical hierarchy. We further prove that analysing the implicit complexity of TRSs is even more difficult: The problem of deciding whether a TRS accepts a language of terms accepted by some TRS with runtime complexity bounded by a function in C is complete for the existential part of the third level of the arithmetical hierarchy.
All of our results are easily extended to the notion of minimal complexity (where the length of shortest reductions to normal form is considered) and remain valid under any computable reduction strategy. Finally, all results hold both for unrestricted TRSs and for the class of orthogonal TRSs.
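For orientation, the levels named above correspond to the following standard notation (a gloss added here, not part of the abstract):

    \Sigma^0_2 (an \exists\forall form): the existential part of the second level; problems (A) and (B), and likewise nontermination and nonconfluence
    \Pi^0_1 (a \forall form): the universal part of the first level; runtime complexity bounded by a single fixed computable f
    \Sigma^0_3 (an \exists\forall\exists form): the existential part of the third level; the implicit-complexity problem in the final result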
Infinitary Combinatory Reduction Systems
We define infinitary Combinatory Reduction Systems (iCRSs), thus providing the first notion of infinitary higher-order rewriting. The systems defined are sufficiently general that ordinary infinitary term rewriting and infinitary λ-calculus are special cases. Furthermore, we generalise a number of known results from first-order infinitary rewriting and infinitary λ-calculus to iCRSs. In particular, for fully-extended, left-linear iCRSs we prove the well-known compression property, and for orthogonal iCRSs we prove that (1) if a set of redexes has a complete development, then all complete developments of that set end in the same term, and that (2) any tiling diagram involving strongly convergent reductions S and T can be completed iff at least one of the projections S/T and T/S is strongly convergent. We also prove an ancillary result of independent interest: a set of redexes in an orthogonal iCRS has a complete development iff the set has the so-called finite jumps property.
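The compression property referred to above is, in its usual formulation for infinitary rewriting (recalled here for reference; the iCRS result establishes it for fully-extended, left-linear systems):

    s \twoheadrightarrow^{\alpha} t \text{ strongly convergent, } \alpha \text{ an arbitrary ordinal}
      \quad\Longrightarrow\quad
      s \twoheadrightarrow^{\beta} t \text{ strongly convergent for some } \beta \le \omega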
A Complete Characterization of Infinitely Repeated Two-Player Games having Computable Strategies with no Computable Best Response under Limit-of-Means Payoff
It is well-known that for infinitely repeated games, there are computable
strategies that have best responses, but no computable best responses. These
results were originally proved for either specific games (e.g., Prisoner's
dilemma), or for classes of games satisfying certain conditions not known to be
both necessary and sufficient.
We derive a complete characterization in the form of simple necessary and
sufficient conditions for the existence of a computable strategy without a
computable best response under limit-of-means payoff. We further refine the
characterization by requiring the strategy profiles to be Nash equilibria or
subgame-perfect equilibria, and we show how the characterizations entail that
it is efficiently decidable whether an infinitely repeated game has a
computable strategy without a computable best response.
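For reference, the limit-of-means payoff used above is standardly defined as follows (the paper's exact convention, for instance lim inf versus lim, may differ): a player i who receives stage-game utilities u_i(a_t) along the infinite play a_1, a_2, ... obtains the payoff

    \pi_i \;=\; \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} u_i(a_t)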