
    Alʔilbīrī’s Book of the rational conclusions. Introduction, Critical Edition of the Arabic Text and Materials for the History of the Ḫawāṣṣic Genre in Early Andalus

    [eng] The Book of the rational conclusions, written perhaps sometime in the 10th century by a physician from Ilbīrah (Andalus), is a multi-section medical pandect. The author brings together, from a diversity of sources, materials dealing with drug-handling, natural philosophy, therapeutics, the medical applications of the specific properties of things, a regimen, and a dispensatory. This dissertation comprises three parts. First, the transmission of the text, its contents, and its possible context are discussed. Then a critical edition of the Arabic text is offered. Last, but certainly not least, the subject of the specific properties is approached from several points of view. The analysis of Section III of the original book leads to an exploration of the early Andalusī assimilation of this epistemic tradition and to the establishment of a well-defined textual family in which our text must be inscribed. Moreover, the very concept of ‘specific property’ is often misconstrued and treated as synonymous with magic and superstition. Upon closer inspection, however, the alleged irrationality of the knowledge of these properties appears to be largely the result of anachronistic interpretation. As a complement to this research and as an illustration of the genre, a sample from an ongoing integral commentary on this section of the book is presented.

    [cat, translated] The Book of the rational conclusions, by an unknown physician from Ilbīrah (Andalus), was probably compiled during the second half of the 10th century. It is a rudimentary yet remarkably complete kunnaix (an epistemic genre often defined as a ‘medical encyclopaedia’) in which the author gathers borrowed materials (often verbatim and without explicit acknowledgement) from several genres. The book opens with a section on the management of drugs (a kind of apothecaries’ manual) but then centres on the various branches of medicine. Following some philosophical prolegomena, the author copies, with minimal linguistic adaptation, an entire treatise on therapeutics, then another on the medical applications of the specific properties of things, a series of fragments related to dietetics (a regimen, in traditional terms), and, finally, a collection of medical recipes. Each of these sections shows clear intertextual links that point to an intense activity of synthesis of the several traditions allied to medicine in caliphal Andalus. The text is, in fact, a superb object on which to apply the methodology of textual and source criticism. The critical edition incorporates the chronological dimension into the apparatus, which thus becomes a contextualising element. As for the study of the sources, while it remains secondary throughout the first part of this thesis, it takes on an almost absolute prominence in the third part, especially in the chapter devoted to the individual analysis of each passage collected in the section on the specific properties of things

    A foundation for synthesising programming language semantics

    Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, the first step in developing well-founded analysis tools for these systems is to reverse-engineer a formal semantics. This can take months or years of effort. Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning desugaring translation rules, which map the language whose semantics we seek onto a simplified core version whose semantics are much easier to write. The present thesis contains an analysis of their challenge, as well as the first steps towards a solution. Scaling methods with the size of the language is very difficult due to state-space explosion, so this thesis proposes an incremental approach to learning the translation rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al. and re-formulates the problem, shifting the focus to the conditions for incremental learning. The central definition of the new formalisation is the desugaring extension problem, i.e. extending a set of established translation rules by synthesising new ones. In a synthesis algorithm, the choice of search space is important and non-trivial, as it needs to strike a good balance between expressiveness and efficiency. The rest of the thesis focuses on defining search spaces for translation rules via typing rules. Two prerequisites are required for comparing search spaces. The first is a series of benchmarks: a set of source and target languages equipped with intended translation rules between them. The second is an enumerative synthesis algorithm for efficiently enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties expected from a type system if typed programs are to be efficiently enumerable. The thesis presents and empirically evaluates two search spaces. A baseline search space yields the first practical solution to the challenge. The second search space is based on a natural heuristic for translation rules: requiring that each variable be used exactly once. I present a linear type system that enforces this heuristic and is designed for the efficient enumeration of translation rules. Through informal analysis and empirical comparison to the baseline, I then show that using linear types can speed up the synthesis of translation rules by an order of magnitude
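
    To make the idea of a desugaring translation rule concrete, the following is a minimal sketch in Haskell. The surface and core languages and all names here are hypothetical illustrations, not taken from the thesis: the rule translates a surface-level let-binding into lambda application in the core.

        -- Hypothetical surface language with a 'let' construct, and a
        -- core language containing only variables, lambdas and application.
        data Surface
          = SVar String
          | SLet String Surface Surface   -- let x = e1 in e2
          | SLam String Surface
          | SApp Surface Surface

        data Core
          = CVar String
          | CLam String Core
          | CApp Core Core

        -- One translation rule: (let x = e1 in e2) ~~> ((\x. e2) e1).
        -- Every other construct is translated homomorphically.
        desugar :: Surface -> Core
        desugar (SVar x)       = CVar x
        desugar (SLet x e1 e2) = CApp (CLam x (desugar e2)) (desugar e1)
        desugar (SLam x e)     = CLam x (desugar e)
        desugar (SApp e1 e2)   = CApp (desugar e1) (desugar e2)

    Note that in the SLet rule each metavariable (x, e1, e2) occurs exactly once on the right-hand side; this is precisely the linearity heuristic that the thesis's second search space enforces through its linear type system.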

    Language integrated relational lenses

    Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation on the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have these changes applied to the underlying database. This view update problem has been the subject of much previous work, but support from database servers is limited and only rarely available. Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem from databases to arbitrary data. However, perhaps surprisingly, lenses have seldom actually been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses. However, to the best of our knowledge this proposal had not been implemented or evaluated prior to the work reported in this thesis. This thesis proposes programming language support for relational lenses. Language integrated relational lenses support expressive and efficient view updates without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development becomes easier and less error-prone, avoiding the impedance mismatch of having two programming languages. Integrating relational lenses into the language poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. The other challenge is that some parts of the well-formedness conditions are too general for implementation: Bohannon et al. specify predicates using possibly infinite abstract sets and define the type checking rules using relational algebra. Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders-of-magnitude improvement over the non-incremental approach. This thesis introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates, which are fully known at compile time, and dynamic predicates, which are only known during execution, and introduce hybrid predicates that take inspiration from both approaches. This thesis adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that any well-typed functional relational lens expression can be translated into a well-typed sequential lens. We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. These two implementations attest to the possibility of turning relational lenses into a practical language feature
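
    As background, a lens pairs a view-extraction function with an update-propagation function. The Haskell sketch below shows the general shape of that interface and the round-tripping laws a well-behaved lens must satisfy; the record type and names are illustrative, not the API of the Links extension or the Haskell library described in the thesis.

        -- A minimal lens: 'get' extracts a view from a source,
        -- 'put' propagates an updated view back into the source.
        data Lens s v = Lens
          { get :: s -> v
          , put :: s -> v -> s
          }

        -- Well-behavedness laws (checked on paper, not by the compiler):
        --   get l (put l s v) == v    (PutGet: updates are reflected)
        --   put l s (get l s) == s    (GetPut: trivial updates do nothing)

        -- Example: a view exposing only the first component of a pair.
        fstLens :: Lens (a, b) a
        fstLens = Lens
          { get = fst
          , put = \(_, b) a -> (a, b)
          }

    A relational lens plays the same role with a database as the source and a query result as the view; the thesis's incremental semantics replace whole-state 'put' with change propagation.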

    Investigations into Proof Structures

    We introduce and elaborate a novel formalism for the manipulation and analysis of proofs as objects in a global manner. In this first approach the formalism is restricted to first-order problems characterized by condensed detachment. It is applied in an exemplary manner to a coherent and comprehensive formal reconstruction and analysis of historical proofs of a widely-studied problem due to Łukasiewicz. The underlying approach opens the door to new systematic ways of generating lemmas in the course of proof search, with the effect of reducing the search effort and finding shorter proofs. Among the numerous reported experiments along this line, a proof of Łukasiewicz's problem was automatically discovered that is much shorter than any proof found before by man or machine.
    Comment: This article is a continuation of arXiv:2104.1364
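
    For reference, condensed detachment can be stated as follows. This is the standard formulation of the rule, not quoted from the article: given a major premise that is an implication and a minor premise (with variables renamed apart), the rule infers the consequent under a most general unifier of the antecedent and the minor premise.

        \[
        \frac{\vdash \alpha \rightarrow \beta \qquad \vdash \gamma}
             {\vdash \sigma(\beta)}
        \qquad \text{where } \sigma = \mathrm{mgu}(\alpha, \gamma)
        \]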

    Exploiting Process Algebras and BPM Techniques for Guaranteeing Success of Distributed Activities

    The communications and collaborations among activities, processes, or systems in general are the basis of complex systems defined as distributed systems. Given the increasing complexity of their structure, interactions, and functionalities, many research areas are interested in providing modelling techniques and verification capabilities to guarantee their correctness and the satisfaction of properties. In particular, the formal methods community provides robust verification techniques to prove system properties. However, most approaches rely on manually designed formal models, making the analysis process challenging because it requires an expert in the field. On the other hand, the BPM community provides a widely used graphical notation (i.e., BPMN) to design the internal behaviour and interactions of complex distributed systems, which can be enhanced with additional features (e.g., privacy technologies). Furthermore, BPM uses process mining techniques to automatically discover these models from observed events. However, verifying properties and expected behaviour, especially in collaborations, still needs a solid methodology. This thesis aims at exploiting the features of the formal methods and BPM communities to provide approaches that enable formal verification over distributed systems. In this context, we propose two approaches. The modelling-based approach starts from BPMN models and produces process algebra specifications to enable formal verification of system properties, including privacy-related ones. The process mining-based approach starts from log observations to automatically generate process algebra specifications to enable verification capabilities
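
    As a rough illustration of what a process algebra specification looks like, here is a minimal CCS-like sketch in Haskell with a derived one-step transition relation. This is a generic textbook-style fragment (synchronization omitted for brevity), not the specification language actually produced by the thesis's approaches.

        type Action = String

        -- Processes: inert, action prefix a.P, choice P + Q, parallel P | Q.
        data Proc
          = Nil
          | Act Action Proc
          | Choice Proc Proc
          | Par Proc Proc
          deriving Show

        -- All one-step transitions (action, successor) a process can take.
        steps :: Proc -> [(Action, Proc)]
        steps Nil          = []
        steps (Act a p)    = [(a, p)]
        steps (Choice p q) = steps p ++ steps q
        steps (Par p q)    = [ (a, Par p' q) | (a, p') <- steps p ]
                          ++ [ (a, Par p q') | (a, q') <- steps q ]

    Verification tools explore exactly this kind of transition relation to check that a model satisfies its properties.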

    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools providing abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances within them. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process
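
    The property-based testing idea can be pictured with a small QuickCheck sketch: an executable functional specification is tested against a lower-level implementation before any proof is attempted. This is only an illustration of the general technique, with hypothetical names, not Cogent's actual testing framework.

        import Test.QuickCheck

        -- Abstract functional specification: reverse a list.
        specReverse :: [Int] -> [Int]
        specReverse = foldl (flip (:)) []

        -- Lower-level accumulator implementation, standing in for the
        -- compiled systems code whose behaviour we want confidence in.
        implReverse :: [Int] -> [Int]
        implReverse xs = go xs []
          where
            go []     acc = acc
            go (y:ys) acc = go ys (y : acc)

        -- Refinement property: the implementation agrees with the spec.
        prop_refines :: [Int] -> Bool
        prop_refines xs = implReverse xs == specReverse xs

        main :: IO ()
        main = quickCheck prop_refines

    Running the tests first gives quick feedback on whether a refinement statement is even plausible, which is what enables the progressive approach to assurance described above.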

    In search of a philosophy of computing education.

    In this paper, we present a preliminary description of the field of inquiry encompassed by the philosophy of computing education. We first attempt to identify a general framework for investigating characteristic questions of a philosophical nature that arise within the STEM application domains. We describe the categories such questions fall under and use the philosophy of computing to illustrate this process. We also consider an alternative approach to identifying philosophical issues within a practical field, using the philosophy of education as an example. We consider the related subject of the philosophy of engineering education and see how this has recently emerged as an object of study in its own right. We investigate the similarities and differences between this field and the philosophy of computing education, and provide an initial description of the latter subject area. We also discuss how it relates to the subject referred to as Computing Education Research. Finally, we draw some conclusions about the puzzling apparent lack of interest in this area in current work, and suggest reasons why the pursuit of philosophical inquiry into computing education should be an important aspect of scholarly study in the subject

    A Declarative Validator for GSOS Languages

    Rule formats can quickly establish meta-theoretic properties of process algebras. It is therefore desirable to identify domain-specific languages (DSLs) that can easily express rule formats. In prior work, we developed Lang-n-Change, a DSL that includes convenient features for browsing language definitions and retrieving information from them. In this paper, we use Lang-n-Change to write a validator for the GSOS rule format, augmenting Lang-n-Change with suitable macros along the way. Our GSOS validator is concise, amounting to only a few lines of code. We have used it to validate several concurrency operators as adhering to the GSOS format. Moreover, our code expresses the restrictions of the format declaratively.
    Comment: In Proceedings PLACES 2023, arXiv:2304.0543
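
    For context, the GSOS format (standard definition, not quoted from the paper) restricts every rule for an n-ary operator f to the shape below, where the variables x_i and y_{ij} are pairwise distinct and the target term t may mention only those variables:

        \[
        \frac{\{\, x_i \xrightarrow{\,a_{ij}\,} y_{ij} \,\}_{i,j}
              \;\cup\;
              \{\, x_i \stackrel{b_{ik}}{\nrightarrow} \,\}_{i,k}}
             {f(x_1,\ldots,x_n) \xrightarrow{\,c\,} t}
        \]

    A validator therefore has to check precisely these syntactic restrictions on each rule, which is what makes a declarative encoding natural.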

    Wiring Circuits Is Easy as {0,1,ω}, or Is It...

    Quantitative type systems support fine-grained reasoning about term usage in our programming languages. Hardware design languages are another style of language that would benefit from quantitative typing. When wiring components together we must ensure that there are no unused ports, dangling wires, or accidental fan-ins and fan-outs. Although many wire-usage checks are detectable using static analysis tools such as Verilator, quantitative typing makes these extrinsic checks an intrinsic aspect of the type system. With quantitative typing of bound terms, we can provide design-time checks that all wires and ports have been used, and ensure that all wiring decisions are made explicitly rather than implicitly or accidentally. We showcase the use of quantitative types in hardware design languages by detailing how we can retrofit quantitative types onto SystemVerilog netlists, and the impact that such a quantitative type system has when creating designs. Netlists are gate-level descriptions of hardware produced as the result of synthesis, and it is from these netlists that hardware is generated (fabless or fabbed). First, we present a simple structural type system for a featherweight version of SystemVerilog netlists, demonstrating how we can type netlists using standard structural techniques, and what it means for netlists to be type-safe yet still lead to ill-wired designs. We then detail how to retrofit the language with quantitative types, making the type system sub-structural, and show how our new type-safety result ensures that wires and ports are used exactly once. Our ideas have been proven both practically and formally by realising our work in Idris2, through which we construct a verified language implementation that can type-check existing designs. From this work we can look to promote quantitative typing back up the synthesis chain to a more comprehensive hardware description language, and to help develop new and better hardware description languages with quantitative typing
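
    To illustrate the extrinsic check that quantitative types make intrinsic, here is a Haskell sketch that counts how often each wire in a netlist is driven and read, and flags violations of the strict "used exactly once" discipline. The netlist representation is hypothetical, not the featherweight SystemVerilog of the paper, and it covers only the linear case (the paper's {0,1,ω} scheme also permits deliberate unrestricted usage).

        import qualified Data.Map.Strict as Map

        type Wire = String

        data Gate = Gate { inputs :: [Wire], output :: Wire }

        -- Count, for each wire, how many times it is driven and read.
        usage :: [Gate] -> Map.Map Wire (Int, Int)   -- (drives, reads)
        usage = foldr addGate Map.empty
          where
            addGate (Gate ins out) m =
              Map.insertWith plus out (1, 0)
                (foldr (\w -> Map.insertWith plus w (0, 1)) m ins)
            plus (a, b) (c, d) = (a + c, b + d)

        -- Ill-wired wires: dangling (driven but never read), floating
        -- (read but never driven), or accidental fan-in/fan-out.
        illWired :: [Gate] -> [Wire]
        illWired gs =
          [ w | (w, (d, r)) <- Map.toList (usage gs), d /= 1 || r /= 1 ]

    With quantitative types this check no longer runs as a separate pass: a design that uses a wire zero or several times simply fails to type-check.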