1,071 research outputs found

    Logic-Based Decision Support for Strategic Environmental Assessment

    Strategic Environmental Assessment is a procedure aimed at introducing systematic assessment of the environmental effects of plans and programs. The procedure is based on so-called coaxial matrices that define dependencies between plan activities (infrastructures, plants, resource extractions, buildings, etc.) and positive and negative environmental impacts, and dependencies between these impacts and environmental receptors. Up to now, the procedure has been carried out manually by environmental experts to check the environmental effects of a given plan or program, but it has never been applied during plan/program construction. A decision support system with a clear logical semantics would be an invaluable tool not only for assessing a single, already defined plan, but also during the planning process, in order to produce an optimized, environmentally assessed plan and to study possible alternative scenarios. We propose two logic-based approaches to the problem, one based on Constraint Logic Programming and one on Probabilistic Logic Programming, which could in the future be conveniently merged to exploit the advantages of both. We test the proposed approaches on a real energy plan and discuss their limitations and advantages. Comment: 17 pages, 1 figure, 26th Int'l. Conference on Logic Programming (ICLP'10).
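    A minimal sketch (not taken from the paper; the activity, impact, and receptor names and the 0/1 entries below are hypothetical) of how coaxial matrices can be read as dependency tables and used to propagate a plan's activities to the environmental receptors they ultimately affect:

        # Hypothetical coaxial matrices: M1 links plan activities to impacts,
        # M2 links impacts to environmental receptors (1 = dependency exists).
        impacts   = ["air_emissions", "land_use"]
        receptors = ["air_quality", "wildlife"]

        M1 = {"power_plant": {"air_emissions": 1, "land_use": 0},
              "road":        {"air_emissions": 0, "land_use": 1}}
        M2 = {"air_emissions": {"air_quality": 1, "wildlife": 0},
              "land_use":      {"air_quality": 0, "wildlife": 1}}

        def affected_receptors(plan):
            """Receptors reached from the chosen activities via the two matrices."""
            hit_impacts = {i for a in plan for i in impacts if M1[a][i]}
            return {r for i in hit_impacts for r in receptors if M2[i][r]}

        print(affected_receptors({"power_plant", "road"}))  # {'air_quality', 'wildlife'}

    A constraint-logic or probabilistic-logic encoding, as proposed in the paper, would build on the same dependency structure while additionally constraining or scoring which activities may enter the plan.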

    Complexity of Non-Monotonic Logics

    Over the past few decades, non-monotonic reasoning has developed into one of the most important topics in computational logic and artificial intelligence. Different ways to introduce non-monotonic aspects into classical logic have been considered, e.g., extension with default rules, extension with modal belief operators, or modification of the semantics. In this survey we consider a logical formalism from each of these possibilities, namely Reiter's default logic, Moore's autoepistemic logic, and McCarthy's circumscription. Additionally, we consider abduction, where one is not interested in inferences from a given knowledge base but in computing possible explanations for an observation with respect to a given knowledge base. Complexity results for different reasoning tasks for propositional variants of these logics were already studied in the nineties. In recent years, however, a renewed interest in complexity issues can be observed. One current line of work is to consider parameterized problems and identify reasonable parameters that allow for FPT algorithms. In another approach, the emphasis lies on identifying fragments, i.e., restrictions of the logical language, that allow more efficient algorithms for the most important reasoning tasks. In this survey we focus on this second aspect. We describe complexity results for fragments of logical languages obtained either by restricting the allowed set of operators (e.g., by forbidding negation one might consider only monotone formulae) or by considering only formulae in conjunctive normal form but with generalized clause types. The algorithmic problems we consider are suitable variants of satisfiability and implication in each of the logics, but also counting problems, where one is not only interested in the existence of certain objects (e.g., models of a formula) but asks for their number. Comment: To appear in the Bulletin of the EATCS.
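    For readers less familiar with the formalisms, the textbook "birds fly" rule shows the shape of a Reiter-style default (a standard illustration, not an example drawn from this survey), written here in LaTeX:

        \frac{\mathit{bird}(x) \;:\; \mathit{fly}(x)}{\mathit{fly}(x)}
        % read: if bird(x) holds and fly(x) is consistent with what is known,
        % conclude fly(x); learning penguin(x) can block the justification and
        % retract the conclusion, which is where non-monotonicity enters.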

    Including Generative Mechanisms in Project scheduling using Hybrid Simulation

    Scheduling is central to the practice of project management and a topic of significant interest for the operations research and management science academic communities. However, a rigour-relevance gap has developed between the research and practice of scheduling that mirrors similar concerns currently voiced in management science. Closing this gap requires a more accommodative philosophy that can integrate both hard and soft factors in the construction of project schedules. This paper outlines one interpretation of how this can be achieved by combining discrete event simulation for schedule construction with system dynamics for variable resource productivity. An implementation was built in a readily available modelling environment and its scheduling capabilities were tested; they compare well with published results for commercial project scheduling packages. The use of system dynamics in schedule construction allows for the inclusion of generative mechanisms: models that describe the process by which some observed phenomenon is produced. They are powerful tools for answering questions about why things happen the way they do, a type of question very relevant to practice.
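    A minimal sketch (mine, not the paper's model; the task data, deadline, and feedback rule are hypothetical) of the kind of coupling described: a discrete-event task loop whose daily progress is scaled by a productivity variable updated by a simple system-dynamics-style feedback on schedule pressure:

        # Hypothetical hybrid loop: discrete-event task execution with productivity
        # eroded by schedule pressure (a system-dynamics-style feedback).
        tasks = {"design": 40.0, "build": 60.0}   # remaining work, person-days
        productivity = 1.0                        # fraction of nominal output per day
        deadline, day = 90, 0

        while any(work > 0 for work in tasks.values()):
            day += 1
            # Feedback: pressure above 1 slowly erodes productivity (clamped to [0.5, 1]).
            pressure = sum(tasks.values()) / max(deadline - day, 1)
            productivity = max(0.5, min(1.0, productivity - 0.01 * (pressure - 1.0)))
            # Discrete-event step: work the first unfinished task for one day.
            for name, work in tasks.items():
                if work > 0:
                    tasks[name] = max(0.0, work - productivity)
                    break

        print(f"finished on day {day} with productivity {productivity:.2f}")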

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, resulting in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice style) and search-based logic programming (Prolog style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature. Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
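    To make the algorithmic/search distinction concrete (a toy illustration in Python rather than Oz, chosen here for brevity): an algorithmic problem has a direct efficient procedure, while a search problem is solved by exploring alternatives with backtracking:

        # Algorithmic problem: gcd has a known efficient algorithm (Euclid).
        def gcd(a, b):
            while b:
                a, b = b, a % b
            return a

        # Search problem: subset-sum solved by backtracking over include/exclude choices.
        def subset_sum(items, target, chosen=()):
            if target == 0:
                return chosen                      # found a solution
            if not items or target < 0:
                return None                        # dead end, backtrack
            first, rest = items[0], items[1:]
            return (subset_sum(rest, target - first, chosen + (first,))
                    or subset_sum(rest, target, chosen))

        print(gcd(252, 105))                 # 21
        print(subset_sum((8, 5, 3, 2), 10))  # (8, 2)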

    Introduction to clarithmetic I

    "Clarithmetic" is a generic name for formal number theories similar to Peano arithmetic, but based on computability logic (see http://www.cis.upenn.edu/~giorgi/cl.html) instead of the more traditional classical or intuitionistic logics. Formulas of clarithmetical theories represent interactive computational problems, and their "truth" is understood as existence of an algorithmic solution. Imposing various complexity constraints on such solutions yields various versions of clarithmetic. The present paper introduces a system of clarithmetic for polynomial time computability, which is shown to be sound and complete. Sound in the sense that every theorem T of the system represents an interactive number-theoretic computational problem with a polynomial time solution and, furthermore, such a solution can be efficiently extracted from a proof of T. And complete in the sense that every interactive number-theoretic problem with a polynomial time solution is represented by some theorem T of the system. The paper is written in a semitutorial style and targets readers with no prior familiarity with computability logic

    Perfect zero knowledge for quantum multiprover interactive proofs

    In this work we consider the interplay between multiprover interactive proofs, quantum entanglement, and zero knowledge proofs - notions that are central pillars of complexity theory, quantum information and cryptography. In particular, we study the relationship between the complexity class MIP^*, the set of languages decidable by multiprover interactive proofs with quantumly entangled provers, and the class PZKMIP^*, the set of languages decidable by MIP^* protocols that furthermore possess the perfect zero knowledge property. Our main result is that the two classes are equal, i.e., MIP^* = PZKMIP^*. This result provides a quantum analogue of the celebrated result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC 1988), who showed that MIP = PZKMIP (in other words, all classical multiprover interactive protocols can be made zero knowledge). We prove our result by showing that every MIP^* protocol can be efficiently transformed into an equivalent zero knowledge MIP^* protocol in a manner that preserves the completeness-soundness gap. Combining our transformation with previous results by Slofstra (Forum of Mathematics, Pi 2019) and Fitzsimons, Ji, Vidick and Yuen (STOC 2019), we obtain the corollary that all co-recursively enumerable languages (which include undecidable problems as well as all decidable problems) have zero knowledge MIP^* protocols with a vanishing promise gap.

    Diagnosing interoperability problems and debugging models by enhancing constraint satisfaction with case-based reasoning

    Modeling, Diagnosis, and Model Debugging are the three main areas presented in this dissertation to automate the process of Interoperability Testing of networking protocols. The dissertation proposes a framework that uses the Constraint Satisfaction Problem (CSP) paradigm to define a modeling language and problem-solving mechanism for interoperability testing, and uses Case-Based Reasoning (CBR) for debugging interoperability test cases. The dissertation makes three primary contributions: (1) Definition of a new modeling language using CSP and Object-Oriented Programming. This language is simple, declarative, and transparent. It provides a tool for testers to implement models of interoperability test cases. The dissertation introduces the notions of metavariables, metavalues and optional metavariables to improve the modeling language capabilities. It proposes modeling of test cases from test suite specifications that are usually used in interoperability testing performed manually by testers. Test suite specifications are written by organizations or individuals and break down the testing into modules of test cases that make diagnosis of problems more meaningful to testers. (2) Diagnosis of interoperability problems using search supplemented by consistency inference methods in a CSP context to support explanations of the problem-solving behavior. These methods are adapted to the OO-based CSP context. Testers can then generate reports for individual test cases and for test groups from a test suite specification. (3) Detection and debugging of incompleteness and incorrectness in CSP models of interoperability test cases. This is done through the integration of two modes of reasoning, namely CBR and CSP. CBR manages cases that store information about updating models as well as cases that are related to interoperability problems where diagnosis fails to generate a useful explanation. For the latter cases, CBR recalls previous similar useful explanations.
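    A minimal sketch (my own brute-force illustration, not the dissertation's modeling language; the parameters and constraints are hypothetical) of casting an interoperability check as a CSP: two implementations expose parameter domains, and the test passes only if some assignment satisfies all compatibility constraints:

        from itertools import product

        # Hypothetical parameter domains for the two implementations under test.
        domains = {
            "client_version": (1, 2),
            "server_version": (2, 3),
            "cipher":         ("aes", "des"),
        }

        # Constraints an interoperable configuration must satisfy.
        constraints = [
            lambda s: s["client_version"] == s["server_version"],               # versions match
            lambda s: not (s["cipher"] == "des" and s["server_version"] >= 2),  # des retired
        ]

        def solve():
            names = list(domains)
            for values in product(*(domains[n] for n in names)):
                assignment = dict(zip(names, values))
                if all(check(assignment) for check in constraints):
                    return assignment
            return None   # no solution: a diagnosis step would explain which constraints failed

        print(solve())  # {'client_version': 2, 'server_version': 2, 'cipher': 'aes'}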

    A logical basis for constructive systems

    The work is devoted to Computability Logic (CoL) -- the philosophical/mathematical platform and long-term project for redeveloping classical logic after replacing truth by computability in its underlying semantics (see http://www.cis.upenn.edu/~giorgi/cl.html). This article elaborates some basic complexity theory for the CoL framework. It then proves soundness and completeness for the deductive system CL12 with respect to the semantics of CoL, including the version of the latter based on polynomial time computability instead of computability-in-principle. CL12 is a sequent calculus system in which the meaning of a sequent can intuitively be characterized as "the succedent is algorithmically reducible to the antecedent", and in which formulas are built from predicate letters, function letters, variables, constants, identity, negation, parallel and choice connectives, and blind and choice quantifiers. A case is made that CL12 is an adequate logical basis for constructive applied theories, including complexity-oriented ones.
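    Schematically (using a generic turnstile; this is illustrative notation, not necessarily the paper's own symbols), a CL12 sequent and the intuitive reading quoted above can be written in LaTeX as:

        E_1,\ \ldots,\ E_n \;\vdash\; F
        % read: the succedent F is algorithmically reducible to the antecedent
        % E_1, ..., E_n, i.e., a winning strategy for F can be constructed given
        % the problems E_1, ..., E_n as computational resources.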

    Enriching Solutions to Combinatorial Problems via Solution Engineering

    Existing approaches to identifying multiple solutions to combinatorial problems in practice are at best limited in their ability to simultaneously incorporate both diversity among generated solutions and problem-specific desires that may only be discovered or articulated by the user after further analysis of solver output. We propose a general framework for problems of a combinatorial nature that can generate a set of multiple (near-)optimal, diverse solutions that are further infused with desirable features. We call our approach solution engineering. A key novelty is that desirable solution properties need not be explicitly modeled in advance. We customize the framework to both the mathematical programming and constraint programming technologies, and subsequently demonstrate its practicality by implementing it and conducting computational experiments on existing test instances from the literature. Our computational results confirm the very real possibility of generating sets of solutions infused with features that might otherwise remain undiscovered.
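    A minimal sketch (not from the paper; the knapsack instance, optimality gap, and "feature" are hypothetical) of the generate-then-infuse idea: enumerate feasible solutions to a toy problem, keep those within a small gap of the optimum, and only afterwards filter by a property the user articulates after seeing the pool:

        from itertools import combinations

        # Hypothetical knapsack instance: item -> (value, weight), capacity 10.
        items = {"a": (6, 4), "b": (5, 4), "c": (4, 3), "d": (3, 3)}
        capacity, gap = 10, 2    # keep solutions within 2 of the optimal value

        def value(sol):  return sum(items[i][0] for i in sol)
        def weight(sol): return sum(items[i][1] for i in sol)

        feasible = [set(s) for r in range(len(items) + 1)
                    for s in combinations(items, r) if weight(s) <= capacity]
        best = max(value(s) for s in feasible)
        near_optimal = [s for s in feasible if value(s) >= best - gap]

        # Feature articulated only after inspecting the pool: avoid item "a".
        engineered = [s for s in near_optimal if "a" not in s]
        print(best, near_optimal, engineered)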

    Remote Sensing Information Sciences Research Group, Santa Barbara Information Sciences Research Group, year 3

    Research continues to focus on improving the type, quantity, and quality of information which can be derived from remotely sensed data. The focus is on remote sensing and applications for the Earth Observing System (Eos) and Space Station, including the associated polar and co-orbiting platforms. The remote sensing research activities are being expanded, integrated, and extended into the areas of global science, georeferenced information systems, machine-assisted information extraction from image data, and artificial intelligence. The accomplishments in these areas are examined.