18 research outputs found

    Acceptability with general orderings

    Full text link
    We present a new approach to termination analysis of logic programs. The essence of the approach is that we use general orderings (instead of level mappings), as is done in transformational approaches to logic program termination analysis, but we apply these orderings directly to the logic program rather than to the term-rewrite system obtained through some transformation. We define several variants of acceptability, based on general orderings, and show that they are equivalent to LD-termination. We develop a demand-driven, constraint-based approach to verify these acceptability variants. The advantage of the approach over standard acceptability is that in some cases where complex level mappings would be needed, fairly simple orderings can be generated easily. The advantage over transformational approaches is that it avoids the transformation step altogether. Keywords: termination analysis, acceptability, orderings. Comment: To appear in "Computational Logic: From Logic Programming into the Future"
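    The core idea — checking that recursive calls strictly decrease under some ordering — can be illustrated with a minimal, hypothetical sketch. It is not the paper's method; it only shows a decrease check under a simple list-length norm for the recursive clause of append/3, with Python lists standing in for Prolog lists:

```python
# Hypothetical encoding of append/3's recursive clause:
#   append([H|T], L, [H|R]) :- append(T, L, R).
# Norm: length of the first argument (a list-length norm).

def list_norm(term):
    """Size of a Python-list stand-in for a Prolog list."""
    return len(term)

def decreases(head_arg, body_arg, norm=list_norm):
    # Acceptability-style condition: the measured argument of the
    # recursive body call must be strictly smaller than in the head.
    return norm(body_arg) < norm(head_arg)

# Head argument [H|T] with T = [T1, T2], versus body argument T:
head_first_arg = ["H", "T1", "T2"]
body_first_arg = ["T1", "T2"]
assert decreases(head_first_arg, body_first_arg)
```

    A real analyser would derive such orderings automatically and handle arbitrary terms; the sketch only shows the shape of the decrease condition.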

    Termination proofs for logic programs with tabling

    No full text

    Foaming with ProFoam - The Simple Way to Fine-Celled Foamed Injection-Molded Parts

    No full text
    1 Introduction. In addition to their many other advantages, multiple-classifier systems hold the promise of learning methods that are robust to imperfections in the data: missing features, and noise in both the class labels and the features. Noisy training data tends to increase the variance in the results produced by a given classifier; by learning a committee of hypotheses and combining their decisions, this variance can be reduced. In particular, variance-reducing methods such as Bagging [2] have been shown to be robust to fairly high levels of noise, and can even benefit from low levels of noise [4]. Bagging is a fairly simple ensemble method that is generally outperformed by more sophisticated techniques such as AdaBoost [5, 14]. However, AdaBoost tends to overfit when there is significant noise in the training data, preventing it from learning an effective ensemble [4]. There is therefore a need for a general ensemble meta-learner that is at least as accurate as AdaBoost when there is little or no noise, but more robust to higher levels of random error in the training data.
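    The variance-reduction behaviour of Bagging described above can be sketched in a few lines. This is a toy illustration, not the paper's method: decision stumps (one fixed orientation, chosen here for brevity) are trained on bootstrap resamples of label-noisy 1-D data and combined by majority vote, and the ensemble is then scored against the clean underlying rule:

```python
import random

random.seed(0)

# Toy 1-D data: the true rule is "label 1 iff x > 0.5",
# with a fraction of training labels flipped at random (label noise).
def make_data(n, noise=0.15):
    xs = [random.random() for _ in range(n)]
    ys = [(1 if x > 0.5 else 0) ^ (1 if random.random() < noise else 0)
          for x in xs]
    return xs, ys

# Weak learner: a stump that picks the threshold minimising training error.
def train_stump(xs, ys):
    errs = [(sum(int((1 if x > t else 0) != y) for x, y in zip(xs, ys)), t)
            for t in xs]
    return min(errs)[1]

# Bagging: train each stump on a bootstrap resample of the training set.
def train_bagged(xs, ys, n_estimators=25):
    stumps = []
    for _ in range(n_estimators):
        idx = [random.randrange(len(xs)) for _ in range(len(xs))]
        stumps.append(train_stump([xs[i] for i in idx],
                                  [ys[i] for i in idx]))
    return stumps

# Combine the committee's decisions by majority vote.
def predict(stumps, x):
    votes = sum(1 if x > t else 0 for t in stumps)
    return 1 if 2 * votes > len(stumps) else 0

train_x, train_y = make_data(200)
stumps = train_bagged(train_x, train_y)

# Evaluate against the clean (noise-free) rule on fresh points.
test_x = [i / 100 for i in range(100)]
acc = sum(predict(stumps, x) == (1 if x > 0.5 else 0)
          for x in test_x) / len(test_x)
```

    Averaging many stumps trained on resampled noisy data pulls the effective decision boundary back toward 0.5, which is exactly the variance reduction the passage attributes to Bagging.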

    Coherent composition of distributed knowledge-bases through abduction

    No full text
    Abstract. We introduce an abductive method for coherent composition of distributed data. Our approach is based on an abductive inference procedure applied to a meta-theory that relates different, possibly inconsistent, input databases. Repairs of the integrated data are computed, resulting in a consistent output database that satisfies the meta-theory. Our framework is based on the A-system, an abductive system that implements SLDNFA-resolution. The outcome is a robust application that, to the best of our knowledge, is more expressive (and thus more general) than any other existing application for coherent data integration.