20 research outputs found

    Every normal logic program has a 2-valued semantics: theory, extensions, applications, implementations

    Work presented within the Doctoral Programme in Informatics as a partial requirement for obtaining the degree of Doctor in Informatics. After a very brief introduction to the general subject of Knowledge Representation and Reasoning with Logic Programs, we analyse the syntactic structure of a logic program and how it can influence the semantics. We outline the important properties of a 2-valued semantics for Normal Logic Programs (NLPs), proceed to define the new Minimal Hypotheses semantics with those properties, and explore how it can be used to benefit some knowledge representation and reasoning mechanisms. The main original contributions of this work, whose connections will be detailed in the sequel, are:
    • The Layering for generic graphs, which we then apply to NLPs, yielding the Rule Layering and the Atom Layering — a generalization of the stratification notion;
    • The Full shifting transformation of Disjunctive Logic Programs into (highly non-stratified) NLPs;
    • The Layer Support — a generalization of the classical notion of support;
    • The Brave Relevance and Brave Cautious Monotony properties of a 2-valued semantics;
    • The notions of Relevant Partial Knowledge Answer to a Query and Locally Consistent Relevant Partial Knowledge Answer to a Query;
    • The Layer-Decomposable Semantics family — the family of semantics that reflect the above-mentioned Layerings;
    • The Approved Models argumentation approach to semantics;
    • The Minimal Hypotheses 2-valued semantics for NLPs — a member of the Layer-Decomposable Semantics family, rooted in an approach of minimizing the assumption of positive hypotheses;
    • The definition and implementation of the Answer Completion mechanism in XSB Prolog — an essential component to ensure that XSB's WAM fully complies with the Well-Founded Semantics;
    • The definition of the Inspection Points mechanism for Abductive Logic Programs;
    • An implementation of the Inspection Points workings within the Abdual system [21].
    We recommend reading the chapters in this thesis in the sequence they appear. However, if the reader is not interested in all the subjects, or is keener on some topics than others, we provide the following alternative reading paths:
    • 1-2-3-4-5-6-7-8-9-12: definition of the Layer-Decomposable Semantics family and the Minimal Hypotheses semantics (chapters 1 and 2 are optional);
    • 3-6-7-8-10-11-12: all main contributions, assuming the reader is familiar with logic programming topics;
    • 3-4-5-10-11-12: focus on abductive reasoning and applications.
    FCT-MCTES (Fundação para a Ciência e Tecnologia do Ministério da Ciência, Tecnologia e Ensino Superior), grant no. SFRH/BD/28761/2006.
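    The claim in the title, that every normal logic program has a 2-valued semantics, is motivated by the fact that the classical stable model semantics leaves some non-stratified programs without any model. The sketch below (a brute-force Gelfond-Lifschitz check in Python, not the thesis's Minimal Hypotheses semantics) illustrates this on a tiny propositional program with an odd loop through negation.

```python
# Hedged sketch: brute-force stable-model check for a tiny propositional
# normal logic program. It shows why non-stratified programs may lack a
# 2-valued stable model, motivating semantics that assign a model to every NLP.
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body); the program is the classic
# non-stratified example:  a :- not b.   b :- not a.   c :- not c.
rules = [
    ("a", [], ["b"]),
    ("b", [], ["a"]),
    ("c", [], ["c"]),
]
atoms = {x for h, pb, nb in rules for x in [h, *pb, *nb]}

def least_model(definite_rules):
    """Least model of a negation-free program via naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if head not in model and all(p in model for p in pos):
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    """Gelfond-Lifschitz test: reduce the program by the candidate, then check
    that the candidate is exactly the least model of the reduct."""
    reduct = [(h, pb, []) for h, pb, nb in rules
              if not any(n in candidate for n in nb)]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), k)
                              for k in range(len(atoms) + 1))
stable = [set(s) for s in subsets if is_stable(set(s))]
print(stable)   # [] -- no stable model, because of the odd loop  c :- not c.
```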

    Debugging Type Errors with a Blackbox Compiler

    Type error debugging can be a laborious yet necessary process for programmers of statically typed functional programming languages. Often a compiler compounds this by inaccurately reporting the location of a type error, a problem that has been a subject of research for over thirty years. Despite this long history, the proposed solutions often rely on direct modifications to the compiler, typically distributed in the form of patches. These patches add another layer of arduous activity to the task of debugging: they must be kept up to date with the ever-changing programming language they support. This thesis investigates an additional option: the blackbox compiler. Split into three central parts, it shows the individual solutions involved in using a blackbox compiler to debug type errors in functional programming languages. First is a demonstration of how the combination of a blackbox compiler and a generic debugging algorithm can successfully locate type errors. Next, a side effect of this new combination, the introduction of extra errors, is tackled with a new speed-boosted algorithm and evaluated with a proposed framework, based on data science techniques, for quantifying the quality of a type error debugger. Lastly, the algorithms employed throughout this thesis, along with the blackbox compiler, are agnostic: they do not need language-specific knowledge. The final part therefore uses these agnostic abilities to build a language-agnostic debugger that locates type errors.
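    As an illustration of the blackbox idea (a naive sketch, not the thesis's algorithm), the driver below treats a type checker as an oracle that only reports success or failure, and flags source lines whose removal makes the error disappear. The ghc -fno-code invocation is an assumption; any compiler command that type-checks a file would serve.

```python
# Hedged sketch of blackbox type-error localisation: the compiler is only ever
# asked "does this variant still fail?", never inspected internally.
import subprocess, tempfile, pathlib

def has_type_error(lines):
    """Write the candidate program to a fresh file and run the (assumed)
    type-checking command; only the exit status is used."""
    src = pathlib.Path(tempfile.mkdtemp()) / "Probe.hs"
    src.write_text("\n".join(lines) + "\n")
    result = subprocess.run(["ghc", "-fno-code", str(src)],
                            capture_output=True, text=True)
    return result.returncode != 0

def locate(lines):
    """Return 1-based indices of lines that matter for the error: naively
    blanking the line makes the program type-check again."""
    suspects = []
    for i in range(len(lines)):
        variant = lines[:i] + [""] + lines[i + 1:]
        if not has_type_error(variant):
            suspects.append(i + 1)
    return suspects
```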

    Faculty Publications and Creative Works 2005

    Faculty Publications & Creative Works is an annual compendium of scholarly and creative activities of University of New Mexico faculty during the noted calendar year. Published by the Office of the Vice President for Research and Economic Development, it serves to illustrate the robust and active intellectual pursuits conducted by the faculty in support of teaching and research at UNM. In 2005, UNM faculty produced over 2,200 works, including 1,887 scholarly papers and articles, 57 books, 127 book chapters, 58 reviews, 68 creative works and 4 patented works. We are proud of the accomplishments of our faculty, which are in part reflected in this book and which illustrate the diversity of intellectual pursuits in support of research and education at the University of New Mexico.

    Resource Polymorphism

    We present a resource-management model for ML-style programming languages, designed to be compatible with the OCaml philosophy and runtime model. This is a proposal to extend the OCaml language with destructors, move semantics, and resource polymorphism, to improve its safety, efficiency, interoperability, and expressiveness. It builds on the ownership-and-borrowing models of systems programming languages (Cyclone, C++11, Rust) and on linear types in functional programming (Linear Lisp, Clean, Alms). It continues a synthesis of resources from systems programming and resources in linear logic initiated by Baker. It is a combination of many known and some new ideas. On the novel side, it highlights the good mathematical structure of Stroustrup's “Resource acquisition is initialisation” (RAII) idiom for resource management based on destructors, a notion sometimes confused with finalizers, and builds on it a notion of resource polymorphism, inspired by polarisation in proof theory, that mixes C++'s RAII and a tracing garbage collector (GC). In particular, it proposes to identify the types of GCed values with types with trivial destructor: from this definition it deduces a model in which GC is the default allocation mode, and where GCed values can be used without restriction both in owning and borrowing contexts. The proposal targets a new spot in the design space, with an automatic and predictable resource-management model, at the same time based on lightweight and expressive language abstractions. It is backwards-compatible: current code is expected to run with the same performance, the new abstractions fully combine with the current ones, and it supports a resource-polymorphic extension of libraries. It does so with only a few additions to the runtime, and it integrates with the current GC implementation. It is also compatible with the upcoming multicore extension, and suggests that the Rust model for eliminating data-races applies. Interesting questions arise for a safe and practical type system, many of which have already been thoroughly investigated in the languages and prototypes Cyclone, Rust, and Alms.
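    The distinction the abstract draws between destructors and finalizers can be illustrated outside OCaml. The Python analogy below is an assumption-laden sketch, not the proposed extension: it contrasts deterministic, scope-tied release in the RAII style with a finalizer whose timing is left to the garbage collector.

```python
# Hedged analogy only: __exit__ runs deterministically when the block ends
# (destructor/RAII-style), while __del__ is a finalizer whose invocation time
# is decided by the garbage collector.
class LogFile:
    def __init__(self, path):
        self.handle = open(path, "w")

    def __enter__(self):
        return self.handle

    def __exit__(self, *exc):            # destructor-like: deterministic release
        self.handle.close()

    def __del__(self):                   # finalizer: timing left to the GC
        if not self.handle.closed:
            self.handle.close()

with LogFile("trace.log") as f:          # resource held exactly for this block
    f.write("owned for the duration of this block\n")
```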

    Mechanized Reasoning About "How" Using Functional Programs and Embeddings

    Embedding describes the process of encoding a program's syntax and/or semantics in another language, typically a theorem prover in the context of mechanized reasoning. Among the different embedding styles, deep embeddings are generally preferred as they enable the most faithful modeling of the original language. However, deep embeddings are also the most complex, and working with them requires additional effort. In light of that, this dissertation aims to draw more attention to alternative styles, namely shallow and mixed embeddings, by studying their use in mechanized reasoning about properties of programs that are related to "how". More specifically, I present a simple shallow embedding for reasoning about the computation costs of lazy programs, and a class of mixed embeddings that are useful for reasoning about properties of general computation patterns in effectful programs. I show the usefulness of these embedding styles with examples based on real-world applications.
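    To make the deep-versus-shallow contrast concrete, the sketch below embeds a toy arithmetic language in Python, with the host standing in for the theorem prover; all names are illustrative assumptions. The deep embedding keeps syntax as data, so "how"-style analyses such as a crude operation count are possible, while the shallow embedding reuses host values directly and loses that ability.

```python
# Deep embedding: object-language syntax is data, so it can be both evaluated
# and analysed (here, a crude cost measure counting additions).
from dataclasses import dataclass

class Expr: pass

@dataclass
class Lit(Expr):
    value: int

@dataclass
class Add(Expr):
    left: Expr
    right: Expr

def eval_deep(e: Expr) -> int:
    return e.value if isinstance(e, Lit) else eval_deep(e.left) + eval_deep(e.right)

def cost(e: Expr) -> int:                 # possible only because syntax is data
    return 0 if isinstance(e, Lit) else 1 + cost(e.left) + cost(e.right)

# Shallow embedding: object-language terms are host values directly; simpler to
# use, but the syntax is gone, so cost-style analyses need extra instrumentation.
def lit(v): return v
def add(x, y): return x + y

print(eval_deep(Add(Lit(1), Lit(2))), cost(Add(Lit(1), Lit(2))))   # 3 1
print(add(lit(1), lit(2)))                                          # 3
```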

    Usage Policies for Decentralised Information Processing

    Owners impose usage restrictions on their information, which can be based, for example, on privacy laws, copyright law or social conventions. Often, information is processed in complex constellations without central control. In this work, we introduce technologies to formally express usage restrictions in a machine-interpretable way as so-called policies, which enable the creation of decentralised systems that provide, consume and process distributed information in compliance with its usage restrictions.
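    A purely hypothetical sketch of what a machine-interpretable usage restriction might look like is given below; the policy fields and the check are assumptions for illustration, not the thesis's policy language. The point is that each processing step can test its intended use against the owner's restrictions without any central control point.

```python
# Illustrative only: a usage restriction expressed as machine-readable data,
# checked by a processor before it consumes or passes on the information.
policy = {
    "owner": "alice",
    "allowed_purposes": {"research"},
    "prohibit_redistribution": True,
}

def may_process(policy, purpose, redistributes):
    """Return True only if the intended processing complies with the policy."""
    if purpose not in policy["allowed_purposes"]:
        return False
    if redistributes and policy["prohibit_redistribution"]:
        return False
    return True

print(may_process(policy, "research", redistributes=False))   # True
print(may_process(policy, "marketing", redistributes=False))  # False
```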

    Distributing abstract machines

    Today's distributed programs are often written using either explicit message passing or Remote Procedure Calls (RPCs) that are not natively integrated in the language. It is difficult to establish the correctness of programs written this way compared to programs written for a single computer. We propose a generalisation of RPCs that are natively integrated in a functional programming language, meaning that they support higher-order calls across node boundaries. Our focus is on how such languages can be compiled correctly and efficiently. We present four different solutions. Two of them are based on interaction semantics (the Geometry of Interaction and game semantics) and two are extensions of conventional abstract machines (the Krivine machine and the SECD machine). To target distributed systems that are as general as possible, our solutions support RPCs without sending code. We prove the correctness of the abstract machines with respect to their single-node execution, and show their viability for compilation by implementing prototype compilers based on them. The conventionally based machines are shown to enable efficient programs. Our intention is that these abstract machines can form the foundation for future programming languages that use the idea of higher-order RPCs.
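    For readers unfamiliar with the conventional machines being extended, the sketch below implements a single-node Krivine machine for the call-by-name lambda calculus in Python, with terms in de Bruijn form. It is only the baseline; the distributed transition rules developed in the thesis are not reproduced here.

```python
# Hedged sketch of a single-node Krivine machine. Terms use de Bruijn indices:
# ("var", n), ("lam", body), ("app", f, a); closures pair a term with its env.
def krivine(term):
    env, stack = [], []                  # env: list of closures, stack: pending args
    while True:
        tag = term[0]
        if tag == "app":                 # push the argument as a closure, enter the function
            stack.append((term[2], env))
            term = term[1]
        elif tag == "var":               # look the variable up in the environment
            term, env = env[term[1]]
        elif tag == "lam":
            if not stack:                # weak head normal form reached
                return term, env
            arg = stack.pop()            # bind the top of the stack and enter the body
            env = [arg] + env
            term = term[1]

# (\x. x) ((\y. y) (\z. z))  reduces to the identity closure
identity = ("lam", ("var", 0))
prog = ("app", identity, ("app", identity, identity))
print(krivine(prog))
```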

    An object-oriented modelling method for evolving the hybrid vehicle design space in a systems engineering environment

    A combination of environmental awareness, consumer demands and pressure from legislators has led automotive manufacturers to seek more environmentally friendly alternatives while still meeting the quality, performance and price demands of their customers. This has led to many complex powertrain designs being developed in order to produce vehicles with reduced carbon emissions. In particular, within the last decade most of the major automotive manufacturers have either developed or announced plans to develop one or more hybrid vehicle models. This means that to be competitive and offer the best HEV solutions to customers, manufacturers have to assess a multitude of complex design choices in the most efficient way possible. Even though the automotive industry is adept at dealing with the many complexities of modern vehicle development, the magnitude of design choices, the cross-coupling of multiple domains, the evolving technologies and the relative lack of experience compared with conventional vehicle development compound the complexities within the HEV design space. In order to meet the need for efficient and flexible HEV powertrain modelling within this design space, a parallel is drawn with the development of complex software systems. This parallel is drawn both from a programmatic viewpoint, where object-oriented techniques can be used for physical model development with new equation-oriented modelling environments, and from a systems methodology perspective, where the development approach encourages incremental development in order to minimize risk. This thesis proposes a modelling method that makes use of these new tools to apply OOM principles to the design and development of HEV powertrain models. Furthermore, it is argued that, together with an appropriate systems engineering approach within which the model development activities occur, the proposed method can provide a more flexible and manageable manner of exploring the HEV design space. The flexibility of the modelling method is shown by means of two separate case studies, where a hierarchical library of extendable and replaceable models is developed in order to model the different powertrains. Ultimately the proposed method leads to an intuitive manner of developing a complex system model through abstraction and incremental development of the abstracted subsystems. Having said this, the correct management of such an effort within the automotive industry is key to ensuring the reusability of models through enforced procedures for structuring, maintaining, controlling, documenting and protecting the model development. Further, in order to integrate the new methodology into the existing systems and practices, it is imperative to develop an efficient means of sharing information between all stakeholders involved. In this respect it is proposed that, together with an overall systems modelling activity for tracking stakeholder involvement and providing a central point for sharing data, CAE methods can be employed in order to automate the integration of data.
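    The notion of a hierarchical library of extendable and replaceable models can be sketched with ordinary object-oriented code. The Python example below is a heavily simplified, assumption-based illustration (the thesis itself targets equation-oriented, Modelica-style environments; all class names and figures are made up): swapping replaceable components explores a different point in the powertrain design space without touching the rest of the model.

```python
# Illustrative sketch: replaceable component models behind a common interface.
from abc import ABC, abstractmethod

class PowerSource(ABC):
    @abstractmethod
    def power_out_kw(self, demand_kw: float) -> float: ...

class CombustionEngine(PowerSource):
    def power_out_kw(self, demand_kw):
        return min(demand_kw, 90.0)          # crude power limit, illustrative value

class ElectricMotor(PowerSource):
    def power_out_kw(self, demand_kw):
        return min(demand_kw, 60.0)

class Powertrain:
    """A subsystem assembled from replaceable components: changing the list of
    sources yields a different powertrain variant without altering the rest."""
    def __init__(self, sources):
        self.sources = sources

    def power_out_kw(self, demand_kw):
        supplied = 0.0
        for source in self.sources:
            supplied += source.power_out_kw(demand_kw - supplied)
        return supplied

hybrid = Powertrain([CombustionEngine(), ElectricMotor()])
print(hybrid.power_out_kw(120.0))            # 120.0: engine 90 kW + motor 30 kW
```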