138 research outputs found

    Semantics of Negation in Extensional Higher-Order Logic Programming

    Get PDF
    We consider the two existing extensional approaches to the semantics of positive higher-order logic programming, originally introduced by W. W. Wadge and M. Bezem respectively. The former approach uses classical domain-theoretic tools, while the latter builds on a fixed-point construction defined on a syntactic instantiation of the source program. The relationships between these two approaches had not been investigated until now, and only Wadge's approach had been extended to apply to higher-order programs with negation. We show that Wadge's semantics and Bezem's semantics coincide for a broad and interesting class of programs, namely those that do not include existentially quantified predicate variables in the bodies of clauses. We also note that they have profound differences, which surface when we extend the source language to allow existential predicate variables. In addition, we focus on the less developed research direction of the two, namely Bezem's semantics, and adapt, for the first time, Bezem's technique to define an extensional semantics for higher-order logic programs with negation. For this purpose, we utilize the infinite-valued approach to negation-as-failure. On the other hand, we show that combining the technique with the well-founded or the stable model semantics does not, in general, lead to an extensional semantics. We analyse the reasons for this failure, arguing that a three-valued setting cannot distinguish certain predicates that behave differently inside a program context but happen to be identical as three-valued relations. As an application of our developments, we define for the first time the notions of stratification and local stratification for higher-order logic programs with negation. We prove that every stratified program has a distinguished extensional model, which can be equivalently obtained through the well-founded, stable, or infinite-valued semantics. Furthermore, we show that this model never assigns the unknown truth value. These results affirm the importance and the well-behaved nature of stratified programs, which was, until now, only known for the first-order case.
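
    A minimal sketch of the stratified evaluation described above (in Python, over an illustrative propositional program; the encoding and predicate names are not taken from the paper): strata are evaluated bottom-up, and within each stratum the immediate-consequence operator is iterated to a least fixpoint, with negative literals already settled by the lower strata.

    # Minimal sketch: the model of a stratified propositional program,
    # computed stratum by stratum. Rules and strata are illustrative,
    # not the paper's higher-order language.
    rules = [
        ("p", [], []),       # p.
        ("q", ["p"], []),    # q :- p.
        ("r", [], ["q"]),    # r :- not q.  (q lies in a lower stratum)
    ]
    strata = {"p": 0, "q": 0, "r": 1}

    def stratified_model(rules, strata):
        model = set()
        for s in sorted(set(strata.values())):
            changed = True
            while changed:  # least fixpoint for stratum s
                changed = False
                for head, pos, neg in rules:
                    if (strata[head] == s and head not in model
                            and all(a in model for a in pos)
                            and all(a not in model for a in neg)):
                        model.add(head)
                        changed = True
        return model

    print(stratified_model(rules, strata))  # {'p', 'q'}

    Every atom ends up simply true or false, mirroring the result above that the distinguished model of a stratified program never needs the unknown truth value.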

    Probabilistic Programming Semantics for Name Generation

    Full text link
    We make a formal analogy between random sampling and fresh name generation. We show that quasi-Borel spaces, a model for probabilistic programming, can soundly interpret Stark's ν-calculus, a calculus for name generation. Moreover, we prove that this semantics is fully abstract up to first-order types. This is surprising for an 'off-the-shelf' model, and requires a novel analysis of probability distributions on function spaces. Our tools are diverse and include descriptive set theory and normal forms for the ν-calculus. Comment: 29 pages, 1 figure; to be published in POPL 2021
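
    The analogy can be made concrete in a few lines of Python (an illustrative sketch only, not the paper's quasi-Borel construction): model each freshly generated name as an independent uniform sample, so two separate generations collide with probability zero, matching the observable behaviour of ν-calculus name generation.

    import random

    def nu():
        # A 'fresh name' as an independent uniform sample from [0, 1]:
        # two independent draws are distinct with probability one.
        return random.random()

    a, b = nu(), nu()
    print(a == a)   # True: a name equals itself
    print(a == b)   # False (almost surely): fresh names are distinct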

    A Reasonably Gradual Type Theory

    Full text link
    Gradualizing the Calculus of Inductive Constructions (CIC) involves dealing with subtle tensions between normalization, graduality, and conservativity with respect to CIC. Recently, GCIC has been proposed as a parametrized gradual type theory that admits three variants, each sacrificing one of these properties. For devising a gradual proof assistant based on CIC, normalization and conservativity with respect to CIC are key, but the tension with graduality needs to be addressed. Additionally, several challenges remain: (1) The presence of two wildcard terms at any type (the error and unknown terms) enables trivial proofs of any theorem, jeopardizing the use of a gradual type theory in a proof assistant; (2) Supporting general indexed inductive families, most prominently equality, is an open problem; (3) Theoretical accounts of gradual typing and graduality so far do not support handling type mismatches detected during reduction; (4) Precision and graduality are external notions not amenable to reasoning within a gradual type theory. All these issues manifest primally in CastCIC, the cast calculus used to define GCIC. In this work, we present an extension of CastCIC called GRIP. GRIP is a reasonably gradual type theory that addresses the issues above, featuring internal precision and general exception handling. GRIP features an impure (gradual) sort of types inhabited by errors and unknown terms, and a pure (non-gradual) sort of strict propositions for consistent reasoning about gradual terms. Internal precision supports reasoning about graduality within GRIP itself, for instance to characterize gradual exception-handling terms, and supports gradual subset types. We develop the metatheory of GRIP using a model formalized in Coq, and provide a prototype implementation of GRIP in Agda. Comment: 27 pages + 2 pages bibliography
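
    To make the two wildcard terms concrete, here is a heavily simplified sketch (in Python; the tags and behaviour are illustrative and far coarser than CastCIC or GRIP): the unknown term is consistent with every type, while a cast between two incompatible precise types steps to the error term.

    # Illustrative sketch of gradual casts: '?' (unknown) is consistent
    # with any type; a mismatch between precise types yields the error
    # term. A toy, not CastCIC's or GRIP's actual definitions.
    class CastError(Exception):
        pass

    def cast(value, source, target):
        if source == "?" or target == "?":
            return value            # the unknown type accepts any cast
        if source == target:
            return value            # precise types must match exactly
        raise CastError(f"{source} is not consistent with {target}")

    print(cast(3, "Nat", "?"))      # 3: casting into the unknown type
    try:
        cast(3, "Nat", "Bool")      # mismatch detected during reduction
    except CastError as e:
        print("error:", e)

    Since both wildcards inhabit every type, an unknown term at the type of a theorem would count as a trivial proof, which is exactly challenge (1) above.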

    Topics in Programming Languages, a Philosophical Analysis through the case of Prolog

    Get PDF
    Programming languages seldom find proper anchorage in philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed into programming languages topics. The logic programming paradigm and Prolog are, thus, the most adequate paradigm and programming language to work on this subject, combining natural language processing and linguistics, logic programming and construction methodology on both algorithms and procedures, on an overall philosophizing declarative status. Not only this, but the dimension of the Fifth Generation Computer System related to strong AI, wherein Prolog took a major role, and its historical frame in the very crucial dialectic between procedural and declarative paradigms, structuralist and empiricist biases, serves, in exemplar form, to treat straight ahead philosophy of logic, language and science in the contemporaneous age as well. In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes. We herein shall exemplify some: the mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, until reaching the ACE (Alan Turing) and the EDVAC (von Neumann), offering the backbone of computer architecture, and the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon, and others on computability, in parallel lines, thoroughly studied in detail, permit us to interpret the evolving realm of programming languages. The proper line from the lambda calculus to the Algol family, the declarative and procedural split with the C language and Prolog, and the ensuing branching and programming-language explosion and further delimitation, are thereupon inspected so as to relate them with the proper syntax, semantics and philosophical élan of logic programming and Prolog.

    A Reasonably Gradual Type Theory

    Get PDF
    Gradualizing the Calculus of Inductive Constructions (CIC) involves dealing with subtle tensions between normalization, graduality, and conservativity with respect to CIC. Recently, GCIC has been proposed as a parametrized gradual type theory that admits three variants, each sacrificing one of these properties. For devising a gradual proof assistant based on CIC, normalization and conservativity with respect to CIC are key, but the tension with graduality needs to be addressed. Additionally, several challenges remain: (1) The presence of two wildcard terms at any type (the error and unknown terms) enables trivial proofs of any theorem, jeopardizing the use of a gradual type theory in a proof assistant; (2) Supporting general indexed inductive families, most prominently equality, is an open problem; (3) Theoretical accounts of gradual typing and graduality so far do not support handling type mismatches detected during reduction; (4) Precision and graduality are external notions not amenable to reasoning within a gradual type theory. All these issues manifest primally in CastCIC, the cast calculus used to define GCIC. In this work, we present an alternative to CastCIC called GRIP. GRIP is a reasonably gradual type theory that addresses the issues above, featuring internal precision and general exception handling. For consistent reasoning about gradual terms, GRIP features an impure sort of types inhabited by errors and unknown terms, and a pure sort of strict propositions. By adopting a novel interpretation of the unknown term that carefully accounts for universe levels, GRIP satisfies graduality for a large and well-defined class of terms, in addition to being normalizing and a conservative extension of CIC. Internal precision supports reasoning about graduality within GRIP itself, for instance to characterize gradual exception-handling terms, and supports gradual subset types. We develop the metatheory of GRIP using a model formalized in Coq, and provide a prototype implementation of GRIP in Agda.

    Program Similarity Analysis for Malware Classification and its Pitfalls

    Get PDF
    Malware classification, specifically the task of grouping malware samples into families according to their behaviour, is vital in order to understand the threat they pose and how to protect against them. Recognizing whether one program shares behaviours with another is a task that requires semantic reasoning, meaning that it needs to consider what a program actually does. This is a famously uncomputable problem, due to Rice's theorem. As there is no one-size-fits-all solution, determining program similarity in the context of malware classification requires different tools and methods depending on what is available to the malware defender. When the malware source code is readily available (or at least, easy to retrieve), most approaches employ semantic "abstractions", which are computable approximations of the semantics of the program. We consider this the first scenario for this thesis: malware classification using semantic abstractions extracted from the source code in an open system. Structural features, such as the control-flow graphs of programs, can be used to classify malware reasonably well. To demonstrate this, we build a tool for malware analysis, R.E.H.A., which targets the Android system and leverages its openness to extract a structural feature from the source code of malware samples. This tool is first successfully evaluated against a state-of-the-art malware dataset and then on a newly collected dataset. We show that R.E.H.A. is able to classify the new samples into their respective families, often outperforming commercial antivirus software. However, abstractions have limitations by virtue of being approximations. We show that by increasing the granularity of the abstractions used to produce more fine-grained features, we can improve the accuracy of the results, as in our second tool, StranDroid, which generates fewer false positives on the same datasets. The source code of malware samples is not often available or easily retrievable. For this reason, we introduce a second scenario in which the classification must be carried out with only the compiled binaries of malware samples on hand. Program similarity in this context cannot be assessed using semantic abstractions as before, since it is difficult to create meaningful abstractions from zeros and ones. Instead, by treating the compiled programs as raw data, we transform them into images and build upon common image classification algorithms using machine learning. This leads us to develop novel deep learning models, a convolutional neural network and a long short-term memory network, to classify the samples into their respective families. To overcome the usual deep-learning obstacle of lacking sufficiently large and balanced datasets, we utilize obfuscations as a data augmentation tool to generate semantically equivalent variants of existing samples and expand the dataset as needed. Finally, to lower the computational cost of the training process, we use transfer learning and show that a model trained on one dataset can be used to successfully classify samples in different malware datasets. The third scenario explored in this thesis assumes that even the binary itself cannot be accessed for analysis, but it can be executed, and the execution traces can then be used to extract semantic properties. However, dynamic analysis lacks the formal tools and frameworks that exist in static analysis for proving the effectiveness of obfuscations. For this reason, the focus shifts to building a novel formal framework that is able to assess the potency of obfuscations against dynamic analysis. We validate the new framework by using it to encode known analyses and obfuscations, and show how these obfuscations actually hinder the dynamic analysis process.
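
    The binary-to-image transformation used in the second scenario is easy to sketch (in Python with NumPy; the fixed width and file name are illustrative choices, not the thesis's exact pipeline): read the compiled sample as raw bytes, zero-pad them to a rectangle, and reshape into a grayscale matrix that an off-the-shelf image classifier can consume.

    import numpy as np

    def binary_to_image(path, width=256):
        # Treat the compiled sample as raw data: one byte = one pixel.
        data = np.fromfile(path, dtype=np.uint8)
        height = -(-len(data) // width)          # ceiling division
        padded = np.zeros(height * width, dtype=np.uint8)
        padded[:len(data)] = data
        return padded.reshape(height, width)     # grayscale image

    img = binary_to_image("sample.bin")          # hypothetical file name
    print(img.shape)                             # (height, 256)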