137 research outputs found

    A Knowledge Compilation Map

    We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages.
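
    The two dimensions can be made concrete with a small sketch. The following Python fragment (illustrative only; the names, data layout, and variable order are my own, not the paper's) compiles a Boolean function into a reduced OBDD by Shannon expansion along a fixed variable order, then answers model counting, one of the polytime queries the map tracks for OBDD, in time linear in the size of the compiled diagram:

        TRUE, FALSE = "T", "F"  # terminal nodes

        def compile_obdd(f, order):
            """Shannon-expand f (a callable over a dict var -> bool) along `order`."""
            unique = {}  # (var, lo, hi) -> node, enforcing the reduction rules

            def rec(i, env):
                if i == len(order):
                    return TRUE if f(env) else FALSE
                v = order[i]
                lo = rec(i + 1, {**env, v: False})
                hi = rec(i + 1, {**env, v: True})
                if lo == hi:  # redundant-test reduction
                    return lo
                return unique.setdefault((v, lo, hi), (v, lo, hi))

            return rec(0, {})

        def model_count(root, order):
            """A polytime query on the compiled form: count satisfying assignments."""
            level = {v: i for i, v in enumerate(order)}

            def cnt(node, i):  # number of models over order[i:]
                if node == TRUE:
                    return 2 ** (len(order) - i)
                if node == FALSE:
                    return 0
                v, lo, hi = node
                gap = 2 ** (level[v] - i)  # variables skipped by the reduction rules
                return gap * (cnt(lo, level[v] + 1) + cnt(hi, level[v] + 1))

            return cnt(root, 0)

        # (x1 AND x2) OR x3 has 5 models over three variables
        root = compile_obdd(lambda e: (e["x1"] and e["x2"]) or e["x3"],
                            ["x1", "x2", "x3"])
        print(model_count(root, ["x1", "x2", "x3"]))  # -> 5

    The trade-off the map formalizes is visible even here: the diagram's size (succinctness) depends on the chosen variable order, while the cost of the query depends only on that size.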

    On Stratified Belief Base Compilation

    The Language of Search

    This paper is concerned with a class of algorithms that perform exhaustive search on propositional knowledge bases. We show that each of these algorithms defines and generates a propositional language. Specifically, we show that the trace of a search can be interpreted as a combinational circuit, and a search algorithm then defines a propositional language consisting of circuits that are generated across all possible executions of the algorithm. In particular, we show that several versions of exhaustive DPLL search correspond to such well-known languages as FBDD, OBDD, and a precisely defined subset of d-DNNF. By thus mapping search algorithms to propositional languages, we provide a uniform and practical framework in which successful search techniques can be harnessed for compiling knowledge into various languages of interest, and a new methodology whereby the power and limitations of search algorithms can be understood by looking up the tractability and succinctness of the corresponding propositional languages.
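
    The correspondence between search traces and languages can be illustrated with a small, hedged Python sketch (my own simplification, not the paper's construction): an exhaustive DPLL procedure over a clause set records its trace as nested if-then-else decisions, and because it branches on variables in a fixed order and merges identical subtraces, the recorded circuit is OBDD-like:

        def condition(cnf, var, val):
            """Condition the clause set on var = val (no unit propagation, for brevity)."""
            lit = var if val else -var
            out = []
            for clause in cnf:
                if lit in clause:
                    continue  # clause satisfied, drop it
                out.append([l for l in clause if l != -lit])
            return out

        def dpll_trace(cnf, order, i=0):
            """Exhaustive DPLL; returns the search trace as nested ('ite', var, hi, lo)."""
            if any(c == [] for c in cnf):
                return "F"  # conflict: this branch is unsatisfiable
            if not cnf:
                return "T"  # all clauses satisfied
            v = order[i]
            hi = dpll_trace(condition(cnf, v, True), order, i + 1)
            lo = dpll_trace(condition(cnf, v, False), order, i + 1)
            if hi == lo:
                return hi  # redundant decision, merged as in OBDD reduction
            return ("ite", v, hi, lo)

        # (x1 OR x2) AND (NOT x1 OR x3), variables encoded as integers 1..3
        print(dpll_trace([[1, 2], [-1, 3]], [1, 2, 3]))

    Roughly speaking, relaxing the fixed order gives FBDD-style traces, and adding formula caching moves toward the d-DNNF subset the paper characterizes, which is exactly the "look up the language" methodology the abstract describes.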

    Computing explanations for interactive constraint-based systems

    Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system he interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanation fail to provide sufficient information. We define new forms of explanation that aim to be more informative. Although explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity, and therefore we cannot afford long computation times. We introduce the concept of representative sets of relaxations: compact sets of relaxations that show the user at least one way to satisfy each of his requirements and at least one way to relax it, and we present an algorithm that efficiently computes such sets. We also introduce the concept of most soluble relaxations, which maximise the number of products they allow, and present algorithms that compute such relaxations in times compatible with interactivity, achieved by working interchangeably with different types of compiled representations. Finally, we propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be answered efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
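
    As a rough illustration of representative sets of relaxations (a brute-force Python sketch over invented data; the thesis's algorithms are far more efficient and operate over compiled representations): treat each requirement as a predicate over candidate products, enumerate the maximal satisfiable subsets of requirements (the relaxations), then greedily pick a set of relaxations in which every requirement is kept at least once and, where possible, relaxed at least once:

        from itertools import combinations

        def satisfiable(reqs, products):
            """True if some product meets every requirement in reqs."""
            return any(all(r(p) for r in reqs) for p in products)

        def maximal_relaxations(reqs, products):
            """All subset-maximal satisfiable subsets of requirement indices."""
            found = []
            for k in range(len(reqs), -1, -1):  # largest subsets first
                for combo in combinations(range(len(reqs)), k):
                    if satisfiable([reqs[i] for i in combo], products):
                        if not any(set(combo) <= s for s in found):
                            found.append(set(combo))
            return found

        def representative_set(relaxations, n):
            """Greedy cover: each requirement kept once and (if possible) relaxed once."""
            chosen, kept, dropped = [], set(), set()
            for s in relaxations:
                if not (s - kept) and not ((set(range(n)) - s) - dropped):
                    continue  # this relaxation adds no new information
                chosen.append(s)
                kept |= s
                dropped |= set(range(n)) - s
            return chosen

        # Toy catalogue: two products, three conflicting requirements
        products = [{"price": 900, "colour": "red"}, {"price": 400, "colour": "blue"}]
        reqs = [lambda p: p["price"] < 500,      # r0
                lambda p: p["colour"] == "red",  # r1
                lambda p: p["price"] > 300]      # r2

        print(representative_set(maximal_relaxations(reqs, products), len(reqs)))
        # -> [{0, 2}, {1, 2}]: keep r0+r2 (drop r1) or keep r1+r2 (drop r0)

    The exponential enumeration here is exactly what rules such a sketch out for interactive use; the compiled representations the abstract mentions exist to answer these queries within interactive response times.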

    Adapting propositional cases based on tableaux repairs using adaptation knowledge -- extended report

    Adaptation is a step of case-based reasoning that aims at modifying a source case (representing a problem-solving episode) in order to solve a new problem, called the target case. One approach to adaptation consists in applying a belief revision operator that minimally modifies the source case so that it becomes consistent with the target case. Another approach consists in using domain-dependent adaptation rules. These two approaches can be combined: a revision operator parametrized by the adaptation rules is introduced, and the corresponding revision-based adaptation uses the rules to modify the source case. This paper presents an algorithm for revision-based and rule-based adaptation based on tableaux repairs in propositional logic: when the conjunction of the source and target cases is inconsistent, the tableaux method leads to a set of branches, each ending in a clash; these clashes are then repaired (thus modifying the source case) with the help of the adaptation rules. This algorithm has been implemented in the REVISOR/PLAK tool, and some implementation issues are presented.
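
    A compact sketch of the tableaux-repair idea (a drastic Python simplification; formulas are NNF over integer literals, and the only "adaptation rule" applied is plain literal deletion, whereas REVISOR/PLAK supports genuine domain-dependent rules): open the tableau for the conjunction of source and target, and on each branch repair every clash by dropping the source's side of it, thereby minimally modifying the source case:

        def branches(formula, origin):
            """Expand an NNF formula into tableau branches of (literal, origin) pairs."""
            if isinstance(formula, int):  # a literal: positive or negative integer
                return [{(formula, origin)}]
            op, *args = formula
            if op == "or":  # branching rule: one branch per disjunct
                return [b for a in args for b in branches(a, origin)]
            out = [set()]  # "and": linear rule, combine branches of all conjuncts
            for a in args:
                out = [b | c for b in out for c in branches(a, origin)]
            return out

        def adapt(source, target):
            """Repair each clashing branch of (source AND target) in favour of the target."""
            results = []
            for bs in branches(source, "src"):
                for bt in branches(target, "tgt"):
                    branch = bs | bt
                    lits = {l for l, _ in branch}
                    repaired = {l for l, o in branch
                                if not (o == "src" and -l in lits)}  # drop clashing source literals
                    results.append(sorted(repaired))
            return results

        # source: a AND (b OR c); target: NOT a  (1 = a, 2 = b, 3 = c)
        print(adapt(("and", 1, ("or", 2, 3)), -1))  # -> [[-1, 2], [-1, 3]]

    Each repaired branch describes an adapted case: the source's clashing belief (a) is retracted, while its remaining structure (b or c) and the target (not a) survive.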

    Artificial Intelligence as Evidence

    This article explores issues that govern the admissibility of Artificial Intelligence (“AI”) applications in civil and criminal cases, from the perspective of a federal trial judge and two computer scientists, one of whom is also an experienced attorney. It provides a detailed yet intelligible discussion of what AI is and how it works, a history of its development, and a description of the wide variety of functions that it is designed to accomplish, stressing that AI applications are ubiquitous in both the private and public sectors. Applications today include health care, education, employment-related decision-making, finance, law enforcement, and the legal profession. The article underscores the importance of determining the validity of an AI application (i.e., how accurately the AI measures, classifies, or predicts what it is designed to), as well as its reliability (i.e., the consistency with which the AI produces accurate results when applied to the same or substantially similar circumstances), in deciding whether it should be admitted into evidence in civil and criminal cases. The article further discusses factors that can affect the validity and reliability of AI evidence, including bias of various types, “function creep,” lack of transparency and explainability, and the sufficiency of the objective testing of AI applications before they are released for public use. The article next provides an in-depth discussion of the evidentiary principles that govern whether AI evidence should be admitted in court cases, a topic which, at present, is not the subject of comprehensive analysis in decisional law. The focus of this discussion is on providing a step-by-step analysis of the most important issues and the factors that affect decisions on whether to admit AI evidence. Finally, the article concludes with practical suggestions intended to assist lawyers and judges as they are called upon to introduce, object to, or decide whether to admit AI evidence.

    Autonomous Exchanges: Human-Machine Autonomy in the Automated Media Economy

    Contemporary discourses and representations of automation stress the impending “autonomy” of automated technologies. From pop culture depictions to corporate white papers, the notion of autonomous technologies tends to enliven dystopic fears about threats to human autonomy or utopian hopes of helping humans experience unrealized forms of autonomy. This project offers a more nuanced perspective, rejecting contemporary notions of automation as inevitably vanquishing or enhancing human autonomy. Through a discursive analysis of industrial “deep texts” that offer considerable insight into the material development of automated media technologies, I argue that contemporary automation should be understood as a field for the exchange of autonomy, a human-machine autonomy in which autonomy is exchanged as cultural and economic value. Human-machine autonomy is a condition shared among humans and intelligent machines, shaped by economic, legal, and political paradigms with a stake in the cultural uses of automated media technologies. By understanding human-machine autonomy, this project illuminates complications of autonomy emerging from interactions with automated media technologies across a range of cultural contexts.

    Concerto for Laptop Ensemble and Orchestra: The Ship of Theseus and Problems of Performance for Electronics With Orchestra: Taxonomy and Nomenclature

    This dissertation is an examination of the problems faced when staging a work for electronics and orchestra. Part I is an original composition and a model for the exploration of those problems. Part II is a monograph reviewing those problems, concentrating on issues of taxonomy and nomenclature. Part I is a concerto for laptop ensemble and orchestra titled The Ship of Theseus. It is named after a philosophical paradox: if every component of an object (e.g., the boards of a ship) is replaced with newer parts, at what point does the original cease to exist? Likewise, if the music performed by an instrument or ensemble is sampled and played back on stage, is it still an orchestra, or is it a recording? The role of the soloists is also explored throughout the work. As in the dialogue of a Classical concerto, at times the soloist enhances the orchestra; at other times it clashes with it. Part II is an exploration of the etymology and nomenclature of electroacoustic music. In chapter 1, I explore broad problems and concerns specific to electronics and orchestra. In chapter 2, I break down the etymologies of both the orchestra and electroacoustic music, focusing on general issues surrounding the latter, and present a new taxonomy for electroacoustic music. In chapter 3, I investigate the nomenclature of three well-known terms: live electronic, real time, and interactive. Each of these terms is problematic and often misused; as a result, the new term transformational is introduced and defined. This term should not be associated with the general idea of a musical transformation (although such an association is not unwarranted), but with the flow of musical information in and out of a system. It is my hope that, by introducing a new classification based on musical information, I will not merely pad the decades-long discourse on the nomenclature of electroacoustic music, but rather provide a starting point for composers and technicians to reconcile technology with the music itself. The terms presented in this dissertation should not be considered definitive, but rather the inception of a new dialogue.