
    On a notion of abduction and relevance for first-order logic clause sets

    I propose techniques to help explain entailment and non-entailment in first-order logic, relying on deductive and abductive reasoning respectively. First, given an unsatisfiable clause set, one can ask which clauses are necessary for any possible deduction (\emph{syntactically relevant}), usable in some deduction (\emph{syntactically semi-relevant}), or unusable (\emph{syntactically irrelevant}). I propose a first-order formalization of this notion and demonstrate how it lifts to the explanation of an entailment w.r.t. an axiom set defined in certain description logic fragments. The notion is accompanied by a semantic characterization via \emph{conflict literals} (contradictory simple facts): from an unsatisfiable clause set, a pair of conflict literals is always deducible. A \emph{relevant} clause is necessary to derive any conflict literal, a \emph{semi-relevant} clause is necessary to derive some conflict literal, and an \emph{irrelevant} clause is not useful in deriving any conflict literal. This provides a picture of why an explanation holds that goes beyond what the predominant notion of a minimal unsatisfiable set offers. The need to test whether a clause is (syntactically) semi-relevant leads to a generalization of a well-known resolution strategy: resolution equipped with the set-of-support (SOS) strategy is refutationally complete on a clause set N and SOS M if and only if there is a resolution refutation from N ∪ M using a clause in M. This result non-trivially improves the original formulation. Second, abductive reasoning helps find extensions of a knowledge base that entail a missing consequence (called the observation). This is useful not only for repairing incomplete knowledge bases but also for explaining a possibly unexpected observation.
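    The set-of-support restriction mentioned above can be illustrated with a minimal propositional sketch (the thesis works in first-order logic; here clauses are frozensets of signed integer literals, and the function names are hypothetical). Every inference involves at least one clause descended from the support set M:

```python
def resolve(c1, c2):
    """All binary resolvents of two propositional clauses
    (frozensets of signed integer literals)."""
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def sos_refutes(n, m):
    """Given-clause loop with the set-of-support strategy: the given
    clause is always taken from the support set m or its descendants.
    Returns True iff the empty clause is derived."""
    usable = {frozenset(c) for c in n}
    sos = [frozenset(c) for c in m]
    seen = set(sos)
    while sos:
        given = sos.pop()
        for other in usable | seen:
            for r in resolve(given, other):
                if not r:          # empty clause: refutation found
                    return True
                if r not in seen:  # new SOS descendant
                    seen.add(r)
                    sos.append(r)
    return False
```

    With N = {{1}, {-1, 2}} and M = {{-2}} a refutation using a clause of M exists, so the SOS strategy finds it; with N = {{1}, {-1}} and M = {{2}}, N ∪ M is unsatisfiable but every refutation stays inside N, and SOS resolution cannot refute it, matching the completeness characterization above.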
    I particularly focus on TBox abduction in the \EL description logic (still a first-order logic fragment via a model-preserving translation scheme), which is rather lightweight but prevalent in practice. The solution space can be huge or even infinite, so different kinds of minimality notions can help separate the wheat from the chaff. I argue that existing ones are insufficient and introduce \emph{connection minimality}. This criterion offers an interpretation of Occam's razor in which hypotheses are accepted only when they help obtain the entailment without arbitrarily using axioms unrelated to the problem at hand. In addition, I provide a first-order technique to compute the connection-minimal hypotheses in a sound and complete way. The key technique relies on prime implicates. While the negation of a single prime implicate can already serve as a first-order hypothesis, a connection-minimal hypothesis that follows the \EL syntactic restrictions (a set of simple concept inclusions) requires a combination of them. Termination can be proven by bounding the term depth of the prime implicates, considering only those that are also subset-minimal. I also present an evaluation on ontologies from the medical domain, using a prototype implementation with SPASS as the prime implicate generation engine.
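    The prime-implicate machinery can be sketched in the propositional case (the thesis computes first-order prime implicates with SPASS; this simplified version merely saturates a clause set under resolution and keeps the subset-minimal, non-tautological clauses):

```python
from itertools import combinations

def prime_implicates(clauses):
    """Propositional prime implicates: saturate under binary resolution,
    then keep only subset-minimal clauses. Clauses are sets of signed
    integer literals."""
    cls = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(cls, 2):
            for lit in c1:
                if -lit in c2:
                    r = frozenset((c1 - {lit}) | (c2 - {-lit}))
                    # discard tautologies (clauses containing l and -l)
                    if not any(-x in r for x in r):
                        new.add(r)
        if new <= cls:   # fixpoint: saturation reached
            break
        cls |= new
    # a prime implicate is an entailed clause with no entailed proper subset
    return {c for c in cls if not any(d < c for d in cls)}
```

    For instance, from {1} and {-1, 2} the implicate {2} is derived and subsumes {-1, 2}, leaving {1} and {2} as the prime implicates. In the thesis's setting, negations of (combinations of) such prime implicates yield candidate hypotheses.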

    A core language for fuzzy answer set programming

    A number of different Fuzzy Answer Set Programming (FASP) formalisms have been proposed in recent years, which all differ in the language extensions they support. In this paper we investigate the expressivity of these frameworks. Specifically, we show how a variety of constructs in these languages can be implemented using a considerably simpler core language. These simulations are important because a compact and simple language is easier to implement and to reason about, while an expressive language offers more options when modeling problems.

    LOGIC AND CONSTRAINT PROGRAMMING FOR COMPUTATIONAL SUSTAINABILITY

    Computational Sustainability is an interdisciplinary field that aims to develop computational and mathematical models and methods for decision making concerning the management and allocation of resources, in order to help solve environmental problems. This thesis deals with a broad spectrum of such problems (energy efficiency, water management, limiting greenhouse gas emissions and fuel consumption), contributing towards their solution by means of Logic Programming (LP) and Constraint Programming (CP), well-established declarative paradigms from Artificial Intelligence. The problems described in this thesis were proposed by experts of the respective domains and tested on the real data instances they provided. The results are encouraging and show the aptness of the chosen methodologies and approaches. The overall aim of this work is twofold: to address real-world problems in order to achieve practical results, and to obtain, from the application of LP and CP technologies to complex scenarios, feedback and directions useful for their improvement.

    Cooperative Particle Swarm Optimization for Combinatorial Problems

    A particularly successful line of research for numerical optimization is the well-known computational paradigm of particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., R^n) seems to strongly predispose the PSO toward continuous optimization. However, while some attempts have been made towards developing PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in search space and rely on additional local search algorithms. In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than one single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, where each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism that guides them in assessing their individual contribution to the overall solution. Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies to split the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of parsimonious sets of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations.
    Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and the results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and diversity of the solutions obtained in a protein sequence design problem, a highly multimodal problem. These results provide evidence that the extended PSO algorithms are capable of dealing with combinatorial problems without having to hybridize the PSO with other local search techniques or sacrifice the concept of particles moving throughout a continuous search space.
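    The general cooperative scheme described above, one sub-swarm per solution component with feedback through a shared context vector, can be sketched as follows. This is an illustrative reconstruction, not the dissertation's shared-space or multi-space algorithms; the function name, coefficients, and splice-into-context scoring are assumptions:

```python
import random

def cooperative_pso(fitness, dim, swarm=10, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Cooperative PSO sketch: each coordinate of the solution is optimized
    by its own sub-swarm; a particle is scored by splicing its value into
    the shared context vector (the feedback mechanism)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(swarm)] for _ in range(dim)]
    vel = [[0.0] * swarm for _ in range(dim)]
    context = [row[0] for row in pos]      # assembled candidate solution

    def score(d, x):
        trial = context[:]
        trial[d] = x
        return fitness(trial)

    pbest = [row[:] for row in pos]
    pbest_val = [[score(d, p) for p in pbest[d]] for d in range(dim)]
    for _ in range(iters):
        for d in range(dim):
            best = min(range(swarm), key=lambda i: pbest_val[d][i])
            gbest = pbest[d][best]
            for i in range(swarm):
                r1, r2 = rng.random(), rng.random()
                vel[d][i] = (0.7 * vel[d][i]
                             + 1.4 * r1 * (pbest[d][i] - pos[d][i])
                             + 1.4 * r2 * (gbest - pos[d][i]))
                pos[d][i] += vel[d][i]
                v = score(d, pos[d][i])
                # personal bests are kept approximate as the context
                # drifts over time (a simplification of real CCPSO)
                if v < pbest_val[d][i]:
                    pbest_val[d][i], pbest[d][i] = v, pos[d][i]
            # feedback: the best component found joins the shared context
            best = min(range(swarm), key=lambda i: pbest_val[d][i])
            context[d] = pbest[d][best]
    return context, fitness(context)
```

    On a separable test function such as the 3-D sphere, each sub-swarm effectively optimizes its own coordinate and the assembled context converges toward the optimum.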

    Proceedings of the IJCAI-09 Workshop on Nonmonotonic Reasoning, Action and Change

    Copyright in each article is held by the authors. Please contact the authors directly for permission to reprint or use this material in any form for any purpose. The biennial workshop on Nonmonotonic Reasoning, Action and Change (NRAC) has an active and loyal community. Since its inception in 1995, the workshop has been held seven times in conjunction with IJCAI, and has experienced growing success. We hope to build on this success again this eighth year with an interesting and fruitful day of discussion. The areas of reasoning about action, non-monotonic reasoning and belief revision are among the most active research areas in Knowledge Representation, with rich inter-connections and practical applications including robotics, agent systems, commonsense reasoning and the semantic web. This workshop provides a unique opportunity for researchers from all three fields to be brought together at a single forum with the prime objectives of communicating important recent advances in each field and exchanging ideas. As these fundamental areas mature, it is vital that researchers maintain a dialog through which they can cooperatively explore common links. The goal of this workshop is to work against the natural tendency of such rapidly advancing fields to drift apart into isolated islands of specialization. This year, we have accepted ten papers authored by a diverse international community. Each paper has been subject to careful peer review on the basis of innovation, significance and relevance to NRAC. This high-quality selection of work could not have been achieved without the invaluable help of the international Program Committee. A highlight of the workshop will be our invited speaker, Professor Hector Geffner from ICREA and UPF in Barcelona, Spain, discussing representation and inference in modern planning.
    Hector Geffner is a world leader in planning, reasoning, and knowledge representation; in addition to his many important publications, he is a Fellow of the AAAI, an associate editor of the Journal of Artificial Intelligence Research, and won an ACM Distinguished Dissertation Award in 1990.

    Optimization-based multi-contact motion planning for legged robots

    For legged robots, generating dynamic and versatile motions is essential for interacting with complex and ever-changing environments. So far, robots that routinely operate reliably over rough terrain remain an elusive goal. Yet the primary promise of legged locomotion is to replace humans and animals in performing tedious and menial tasks, without requiring changes to the environment as wheeled robots do. A necessary step towards this goal is to endow robots with the capability to reason about contacts, but this vital skill is currently missing. An important reason is that contact phenomena are inherently non-smooth and non-convex; as a result, posing and solving problems involving contacts is non-trivial. Optimization-based motion planning constitutes a powerful paradigm to this end. Consequently, this thesis considers the problem of generating motions in contact-rich situations. Specifically, we introduce several methods that compute dynamic and versatile motion plans from a holistic optimization perspective based on trajectory optimization techniques. The advantage is that the user needs to provide only a high-level task description in the form of an objective function. The methods then output a detailed motion plan, including contact locations, timings, and gait patterns, that optimally achieves the high-level task. Initially, we assume that such a motion plan is available, and we investigate the relevant control problem: tracking a nominal motion plan as closely as possible under external disturbances by computing inputs for the robot. This stage typically follows the motion planning stage. Additionally, this thesis presents methods that do not necessarily require a separate control stage, by computing the controller structure automatically. Afterwards, we proceed to the main parts of this thesis.
    First, assuming a pre-specified contact sequence, we formulate a trajectory optimization method reminiscent of hybrid approaches. Its backbone is a high-accuracy integrator, enabling reliable long-term motion planning while satisfying both translational and rotational dynamics. We utilize it to compute motion plans for a hopper traversing rough terrains, with gaps and obstacles, and performing explosive motions, like a somersault. Subsequently, we discuss how to extend the method when the contact sequence is unspecified. In the next chapter, we increase the complexity of the problem in many aspects. First, we formulate the problem at the joint level, utilizing full dynamics and kinematics models. Second, we take a contact-implicit perspective, i.e., decisions about contacts are implicitly defined in the problem's formulation rather than defined as explicit contact modes. As a result, pre-specification of the contact interactions is not required, such as the order in which the feet of a quadruped robot model contact the ground and the respective timings. Finally, we extend the classical rigid contact model to surfaces with soft and slippery properties. We quantitatively evaluate our proposed framework through comparisons against the rigid model and an alternative contact-implicit framework. Furthermore, we compute motion plans for a high-dimensional quadruped robot in a variety of terrains exhibiting the enhanced properties. In the final study, we extend the classical Differential Dynamic Programming algorithm to handle systems defined by implicit dynamics. While this can be of interest in its own right, our particular application is computing motion plans in contact-rich settings. Compared to the method presented in the previous chapter, this formulation enables making contact with all body parts in a receding-horizon fashion, albeit with limited contact-discovery capabilities.
    We demonstrate the properties of our proposed extension by comparing implicit and explicit models and by generating motion plans for a single-legged robot with multiple contacts, both in trajectory optimization and in receding-horizon settings. We conclude this thesis by providing insights into the proposed methods and their limitations, as well as possible future directions that can improve and extend aspects of the presented work.
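    The basic structure of trajectory optimization, decision variables are the controls, the dynamics are enforced by rollout, and the task enters only through an objective, can be illustrated with a deliberately simple single-shooting sketch (1-D point mass with finite-difference gradients; this is not the thesis's contact-aware formulation, and all names and constants are assumptions):

```python
def rollout(u, dt=0.1):
    """Simulate a unit-mass 1-D point mass under control sequence u
    with semi-implicit Euler integration; returns the state trajectory."""
    x, v, traj = 0.0, 0.0, []
    for a in u:
        v += a * dt
        x += v * dt
        traj.append((x, v))
    return traj

def cost(u, target=1.0):
    """Task cost: reach the target at rest, with a small effort penalty."""
    xf, vf = rollout(u)[-1]
    return (100.0 * (xf - target) ** 2 + 10.0 * vf ** 2
            + 1e-3 * sum(a * a for a in u))

def optimize(n_steps=20, iters=800, lr=0.02, eps=1e-4):
    """Single-shooting trajectory optimization: gradient descent on the
    control sequence, with central finite differences for the gradient."""
    u = [0.0] * n_steps
    for _ in range(iters):
        grad = []
        for k in range(n_steps):
            up, um = u[:], u[:]
            up[k] += eps
            um[k] -= eps
            grad.append((cost(up) - cost(um)) / (2 * eps))
        u = [uk - lr * g for uk, g in zip(u, grad)]
    return u
```

    The optimized control sequence drives the mass to the target with near-zero final velocity. The thesis's methods share this structure but add contact variables, full-body dynamics, high-accuracy integration, and far more capable solvers.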
    • 
