Complexity Bounds for Ordinal-Based Termination
'What more than its truth do we know if we have a proof of a theorem in a
given formal system?' We examine Kreisel's question in the particular context
of program termination proofs, with an eye to deriving complexity bounds on
program running times.
Our main tools for this are length function theorems, which provide complexity
bounds on the use of well-quasi-orders. We illustrate how to prove such
theorems in the simple yet until now untreated case of ordinals. We show how to
apply this new theorem to derive complexity bounds on programs when they are
proven to terminate via a ranking function into some ordinal.
Comment: Invited talk at the 8th International Workshop on Reachability
Problems (RP 2014, 22-24 September 2014, Oxford).
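To make the idea concrete, here is a minimal Python sketch (a hypothetical
illustration, not an example from the paper): a loop whose inner counter may be
reset to an arbitrary input value is naturally handled by a ranking function
into the ordinal ω², mapping state (x, y) to ω·x + y, encoded below as a
lexicographically ordered pair that strictly decreases at every step.

```python
# Hypothetical illustration: termination witnessed by a ranking function
# into the ordinal omega^2. State (x, y) maps to omega*x + y, encoded as
# a pair compared lexicographically (which matches the ordinal order).

def rank(x: int, y: int) -> tuple[int, int]:
    """Ranking function into omega^2: (x, y) encodes omega*x + y."""
    return (x, y)

def run(x: int, y: int, n: int) -> None:
    while x > 0 or y > 0:
        before = rank(x, y)
        if y > 0:
            y -= 1        # rank drops by 1
        else:
            x -= 1        # rank drops below omega*x, so ...
            y = n         # ... the inner counter may be reset arbitrarily
        assert rank(x, y) < before  # strict ordinal descent at every step

run(2, 3, n=1000)  # terminates for any inputs; the assert never fires
```

A length function theorem of the kind proved in the paper bounds the length of
such strictly descending ordinal sequences (under suitable control on how fast
their terms can grow), and hence the program's running time.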
A Plausibility Semantics for Abstract Argumentation Frameworks
We propose and investigate a simple ranking-measure-based extension semantics
for abstract argumentation frameworks, built on their generic instantiation by
default knowledge bases and on the ranking construction semantics for default
reasoning. In this context, we consider the path from structured through
logical to shallow semantic instantiations. The resulting well-justified
JZ-extension semantics diverges from more traditional approaches.
Comment: Proceedings of the 15th International Workshop on Non-Monotonic
Reasoning (NMR 2014). This is an improved and extended version of the
author's ECSQARU 2013 paper.
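For readers new to the setting, here is a minimal sketch (illustrative
background only, assuming nothing from the paper itself): a Dung-style
abstract argumentation framework is a set of arguments with an attack
relation, and a traditional semantics selects extensions, i.e. jointly
acceptable argument sets. The code computes the grounded extension, one of
the classical semantics the proposed JZ-extension semantics diverges from;
the JZ construction itself, via ranking measures over default knowledge
bases, is not reproduced here.

```python
# Illustrative background, not the paper's JZ-extension semantics:
# the grounded extension of a Dung-style argumentation framework,
# i.e. the least fixed point of the characteristic function
# F(S) = {a | every attacker of a is attacked by some member of S}.

def grounded_extension(args: set[str],
                       attacks: set[tuple[str, str]]) -> set[str]:
    def defended(a: str, s: set[str]) -> bool:
        # every attacker b of a is counter-attacked from within s
        return all(any((d, b) in attacks for d in s)
                   for (b, t) in attacks if t == a)

    ext: set[str] = set()
    while True:
        nxt = {a for a in args if defended(a, ext)}
        if nxt == ext:          # fixed point reached
            return ext
        ext = nxt

# a attacks b, b attacks c: a is unattacked, a defends c, b is out.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
# -> {'a', 'c'}
```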
Efficient Regularized Least-Squares Algorithms for Conditional Ranking on Relational Data
In domains like bioinformatics, information retrieval and social network
analysis, one can find learning tasks where the goal consists of inferring a
ranking of objects, conditioned on a particular target object. We present a
general kernel framework for learning conditional rankings from various types
of relational data, where rankings can be conditioned on unseen data objects.
We propose efficient algorithms for conditional ranking by optimizing squared
regression and ranking loss functions. We show theoretically that learning
with the ranking loss is likely to generalize better than with the regression
loss. Further, we prove that symmetry or reciprocity properties of relations
can be efficiently enforced in the learned models. Experiments on synthetic and
real-world data illustrate that the proposed methods deliver state-of-the-art
performance in terms of predictive power and computational efficiency.
Moreover, we show empirically that incorporating symmetry or reciprocity
properties can improve generalization performance.
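As a concrete instance of the framework (a minimal sketch under assumed names
and toy data, not the paper's optimized algorithms): conditional ranking
learned by regularized least squares over object pairs with a Kronecker
product kernel K((u, v), (u', v')) = k(u, u')·k(v, v'). The paper additionally
treats a ranking loss and exploits the Kronecker structure for efficiency;
this naive version materializes the full pairwise kernel and uses the squared
regression loss only.

```python
# Minimal sketch (hypothetical names/data): conditional ranking via
# regularized least squares with a Kronecker pairwise kernel
# K((u, v), (u', v')) = k(u, u') * k(v, v').
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
U = rng.normal(size=(6, 3))                       # features of 6 objects
pairs = [(i, j) for i in range(6) for j in range(6) if i != j]
y = np.array([U[i] @ U[j] for i, j in pairs])     # toy relation scores

# Pairwise Kronecker kernel on training pairs, then RLS:
# alpha = (K + lambda*I)^{-1} y   (squared regression loss).
Ku = rbf(U, U)
K = np.array([[Ku[i, a] * Ku[j, b] for (a, b) in pairs]
              for (i, j) in pairs])
alpha = np.linalg.solve(K + 0.1 * np.eye(len(pairs)), y)

def score(t: int, c: int) -> float:
    """Predicted strength of relation (t, c): used to rank candidates c
    conditioned on target t. For unseen objects, the rows of Ku would be
    kernel evaluations against the new object's features."""
    k = np.array([Ku[t, a] * Ku[c, b] for (a, b) in pairs])
    return float(k @ alpha)

target = 0
ranking = sorted((c for c in range(6) if c != target),
                 key=lambda c: -score(target, c))
print(ranking)   # candidates ordered by predicted relation to object 0
```

Because the kernel factorizes over the two pair members, conditioning on an
unseen target only requires its kernel values against the training objects,
which is what lets rankings be conditioned on unseen data objects.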