
    Self-Correcting Unsound Reasoning Agents (DARe 2017)

    No full text
    This paper introduces a formal framework for relating learning and deduction in reasoning agents. Our goal is to capture imperfect reasoning as well as the progress, through introspection, towards better reasoning ability. We capture the interleaving between the two by a reasoning/deduction connection, and we show how this definition, and related ones, apply to a setting in which agents are modeled by first-order logic theories. In this setting, we give a sufficient condition on the connection ensuring that, under fairness assumptions, the limit of the introspection steps is a sound and complete deduction system. Under the same assumptions, we prove that every falsehood is eventually refuted, hence the self-correction property.
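
    As an illustrative reading only (the notation below is assumed, not taken from the paper), one can picture a family of deduction relations indexed by introspection steps, with the abstract's two limit claims written as:

    % Illustrative notation; the paper's actual definitions may differ.
    % (\vdash_n)_{n \in \mathbb{N}} : the agent's deduction relation after n introspection steps,
    % \vdash_\infty : its limit under the fairness assumptions mentioned in the abstract.
    \[
      T \vdash_\infty \varphi \;\Longleftrightarrow\; T \models \varphi
      \qquad \text{(soundness and completeness in the limit)}
    \]
    \[
      T \models \lnot\varphi \;\Longrightarrow\; \exists n.\; T \vdash_n \lnot\varphi
      \qquad \text{(every falsehood is eventually refuted)}
    \]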