
    Realistic Correct Systems Implementation

    The present article and the forthcoming second part, on Trusted Compiler Implementation, address the correct construction and functioning of large computer-based systems. In view of so many annoying and dangerous system misbehaviors we ask: can informaticians rightly be held accountable for the incorrectness of systems, and will they be able to justify that systems work correctly as intended? We understand the word justification in this sense: the design of computer-based systems, the formulation of mathematical models of information flows, and the construction of controlling software are to be such that the expected system effects, the absence of internal failures, and the robustness towards misuse and malicious external attacks are foreseeable as logical consequences of the models. For more than 40 years, theoretical informatics, software engineering and compiler construction have made important contributions to correct specification and also to the correct high-level implementation of compilers. But the third step, the translation (bootstrapping) of high-level compiler programs to host machine code by existing host compilers, is just as important. So far there are no realistic recipes to close this correctness gap, although it has been known for some years that trust in executable code can be dangerously compromised by Trojan horses in compiler executables, even if they pass the strongest tests. In this first article we give a comprehensive motivation and develop a mathematical theory in order to conscientiously prove the correctness of an initial, fully trusted compiler executable. The task is modularized into three steps. The third step, machine-level compiler implementation verification, is the topic of the forthcoming second part on Trusted Compiler Implementation. It closes the implementation gap, not only for compilers but also for correct software-based systems in general. Thus the two articles together give a rather confident answer to the question raised in the title.

    On the Implementation of GNU Prolog

    GNU Prolog is a general-purpose implementation of the Prolog language which distinguishes itself from most other systems by being, above all else, a native-code compiler that produces standalone executables that do not rely on any byte-code emulator or meta-interpreter. Another aspect that stands out is the explicit organization of the Prolog system as a multipass compiler in the Unix compiler tradition, where intermediate representations are materialized. GNU Prolog also includes an extensible, high-performance finite-domain constraint solver, integrated with the Prolog language but implemented using independent lower-level mechanisms. This article discusses the main issues involved in designing and implementing GNU Prolog: requirements, system organization, performance and portability issues, as well as its position with respect to other Prolog system implementations and the ISO standardization initiative. Comment: 30 pages, 3 figures, to appear in Theory and Practice of Logic Programming (TPLP). Keywords: Prolog, logic programming system, GNU, ISO, WAM, native code compilation, Finite Domain constraint
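    The multipass organization described above can be illustrated with a toy sketch. Everything here is hypothetical (the pass names, the "WAM-like" instructions, the representations); it only mirrors the idea of materializing one intermediate representation between each pair of passes, not GNU Prolog's actual internals.

```python
# Hypothetical sketch of a multipass pipeline in the Unix tradition the
# abstract describes: each pass consumes one materialized intermediate
# representation and produces the next. Names are illustrative only.

def parse(source):
    """Pass 1: source text -> list of clause strings (toy 'AST')."""
    return [c.strip() for c in source.split(".") if c.strip()]

def to_wam(clauses):
    """Pass 2: clauses -> toy WAM-like instruction list."""
    code = []
    for clause in clauses:
        head = clause.split(":-")[0].strip()
        code.append(("try_me_else", head))
        code.append(("proceed",))
    return code

def to_asm(wam_code):
    """Pass 3: WAM instructions -> textual 'native' assembly lines."""
    return ["\t" + " ".join(map(str, instr)) for instr in wam_code]

# Each intermediate representation could be written to a file between
# passes, mirroring the materialized IRs mentioned in the abstract.
source = "parent(tom, bob). parent(bob, ann)."
asm = to_asm(to_wam(parse(source)))
print(len(asm))  # two instructions per clause -> 4
```

In the real system, separate tools handle the analogous stages and communicate through on-disk intermediate files, which is what makes the pipeline inspectable.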

    A New Verified Compiler Backend for CakeML

    We have developed and mechanically verified a new compiler backend for CakeML. Our new compiler features a sequence of intermediate languages that allows it to incrementally compile away high-level features and enables verification at the right levels of semantic detail. In this way, it resembles mainstream (unverified) compilers for strict functional languages. The compiler supports efficient curried multi-argument functions, configurable data representations, exceptions that unwind the call stack, register allocation, and more. The compiler targets several architectures: x86-64, ARMv6, ARMv8, MIPS-64, and RISC-V. In this paper, we present the overall structure of the compiler, including its 12 intermediate languages, and explain how everything fits together. We focus particularly on the interaction between the verification of the register allocator, the garbage collector, and the memory representations. The entire development has been carried out within the HOL4 theorem prover. Funding: Engineering and Physical Sciences Research Council
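    As a loose illustration of compiling away one high-level feature per intermediate language, consider a single pass that flattens curried application into multi-argument calls. This is a hypothetical sketch in Python over a toy AST, not CakeML's actual intermediate languages or data types.

```python
# Illustrative sketch (not CakeML's actual IRs): one pass in a chain of
# intermediate languages, compiling away a single high-level feature.
# Here, a curried application spine ('app' of 'app') is flattened into
# one multi-argument 'call' node, in the spirit the abstract describes.

def flatten_apps(expr):
    """Rewrite nested ("app", f, x) spines into ("call", f, [args])."""
    if not isinstance(expr, tuple):
        return expr
    if expr[0] == "app":
        args = []
        node = expr
        while isinstance(node, tuple) and node[0] == "app":
            args.append(flatten_apps(node[2]))  # collect argument
            node = node[1]                      # walk down the spine
        args.reverse()                          # innermost arg first
        return ("call", flatten_apps(node), args)
    return tuple(flatten_apps(e) for e in expr)

# ((f x) y) becomes a single two-argument call:
curried = ("app", ("app", "f", "x"), "y")
print(flatten_apps(curried))  # ('call', 'f', ['x', 'y'])
```

Each real pass of such a compiler comes with a semantics-preservation theorem; composing the per-pass theorems yields correctness of the whole chain.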

    Lessons from Formally Verified Deployed Software Systems (Extended version)

    The technology of formal software verification has made spectacular advances, but how much does it actually benefit the development of practical software? Considerable disagreement remains about the practicality of building systems with mechanically checked proofs of correctness. Is this prospect confined to a few expensive, life-critical projects, or can the idea be applied to a wide segment of the software industry? To help answer this question, the present survey examines a range of projects, in various application areas, that have produced formally verified systems and deployed them for actual use. It considers the technologies used, the form of verification applied, the results obtained, and the lessons that can be drawn for the software industry at large and its ability to benefit from formal verification techniques and tools. Note: a short version of this paper is also available, covering in detail only a subset of the considered systems. The present version is intended for full reference. Comment: arXiv admin note: text overlap with arXiv:1211.6186 by other authors

    Realistic correct systems implementation

    This first part of the article and its forthcoming second part are devoted to methods for the correct construction and functioning of large computer systems. The focus is on the problem of justification, understood as the formulation of a mathematical model of the information flows in a computer system and the construction of controlling software such that correct behavior, the absence of internal errors, and robustness against external attacks are obtained as logical consequences of the model. The first part of the article presents a mathematical theory for the provably correct construction of compilers.

    Will Informatics be able to Justify the Construction of Large Computer Based Systems? Part II. Trusted compiler implementation

    The present and the previous article, on Realistic Correct Systems Implementation, together address the correct construction and functioning of large computer-based systems. In view of so many annoying and dangerous system misbehaviors we ask: can informaticians rightly be held accountable for the incorrectness of systems, and will they be able to justify that systems work correctly as intended? We understand the word justification in this sense, i.e. the design of computer-based systems, the formulation of mathematical models of information flows, and the construction of controlling software are to be such that the expected system effects, the absence of internal failures, and the robustness towards misuse and malicious external attacks are foreseeable as logical consequences of the models. For more than 40 years, theoretical informatics, software engineering and compiler construction have made important contributions to correct specification and also to the correct high-level implementation of compilers. But the third step, the translation (bootstrapping) of high-level compiler programs into host machine code by existing host compilers, is just as important. So far there are no realistic recipes to close this gap, although it has been known for many years that trust in executable code can be dangerously compromised by Trojan horses in compiler executables, even if they pass the strongest tests. Our article shows how to close this low-level gap. We demonstrate the method of rigorous syntactic a-posteriori code inspection, which has been developed by the research group Verifix, funded by the Deutsche Forschungsgemeinschaft (DFG). For many years, theoretical informatics, software engineering and compiler construction have dealt with the correctness of specifications and of high-level compiler implementations. The second part of the article considers the problem of the correct and secure translation (bootstrapping) of high-level programs into machine code. It shows how the correctness problems of low-level programs are solved, and demonstrates the method of rigorous syntactic a-posteriori analysis developed by the Verifix research group at the University of Kiel (Germany).
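    The a-posteriori idea (check each compiled program after the fact rather than trusting the compiler) can be loosely illustrated as translation validation on a toy expression language. The Verifix method itself is rigorous syntactic code inspection; this sketch, with invented helpers, only conveys the flavor of checking one particular compilation result against the source semantics.

```python
# Hedged sketch: do not trust the compiler; independently check the
# emitted code against the source program's meaning after compilation.

def compile_expr(e):
    """Compile nested ('+', a, b) / int trees to stack-machine code."""
    if isinstance(e, int):
        return [("push", e)]
    op, a, b = e
    return compile_expr(a) + compile_expr(b) + [("add",)]

def run(code):
    """Execute the toy stack machine and return the top of stack."""
    stack = []
    for instr in code:
        if instr[0] == "push":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def interpret(e):
    """Reference semantics of the source language."""
    return e if isinstance(e, int) else interpret(e[1]) + interpret(e[2])

# A-posteriori check: compare the machine-level result with the
# source-level semantics for this particular compiled program.
expr = ("+", 1, ("+", 2, 3))
assert run(compile_expr(expr)) == interpret(expr)  # check passes
```

The key property is that the checker is much simpler than the compiler, so trust is concentrated in a small, inspectable component rather than in the full translation.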

    Converting web pages mockups to HTML using machine learning

    Converting Web page mockups to code is a task that developers typically perform. Due to the time required to accomplish this task, the time available to devote to application logic is reduced. The main goal of the present work was therefore to develop deep learning models that automatically convert mockups of Web graphical interfaces into HTML, CSS and Bootstrap code, with the trained model deployed as a Web application. Two deep learning models were built, resulting from two different approaches to integrate in the Web application. The first approach uses a hybrid architecture with a convolutional neural network (CNN) and two recurrent networks (RNNs), following the encoder-decoder architecture commonly adopted in image captioning. The second approach focuses on the spatial component of the problem and combines the YOLO network with a layout algorithm. Testing on the same dataset, the prediction correctness achieved with the first approach was 71.30%, while the second approach reached 88.28%. The first contribution of the present paper is the development of a rich dataset of Web page GUI sketches and their captions; no dataset with sufficiently complex GUI sketches existed before we started this work. A second contribution is the application of YOLO to detect and localize HTML elements, together with a layout algorithm that converts the YOLO result into code, a completely different approach from what is found in the related work. Finally, the YOLO-based architecture achieved a prediction correctness higher than reported in the literature. Funding: FCT - Fundação para a Ciência e Tecnologia, within the R&D Units Project Scope UIDB/00319/202
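    The layout step of the second approach can be imagined as follows: group the detector's bounding boxes into rows by vertical position and emit one Bootstrap row per group. This is a hypothetical sketch; the paper's actual layout algorithm, element vocabulary, and tolerances may differ.

```python
# Hypothetical layout step: object-detector output (label + bounding
# box) -> nested Bootstrap-style markup. Row grouping by vertical
# proximity is an assumption for illustration, not the paper's method.

def boxes_to_html(detections, row_tol=20):
    """detections: list of (label, x, y, w, h); returns Bootstrap HTML."""
    rows = []
    for det in sorted(detections, key=lambda d: (d[2], d[1])):
        # Start a new row unless this box is vertically close to the
        # first box of the current row.
        if rows and abs(det[2] - rows[-1][0][2]) <= row_tol:
            rows[-1].append(det)
        else:
            rows.append([det])
    html = []
    for row in rows:
        cells = "".join(
            f'<div class="col"><{label}></{label}></div>'
            for label, *_ in sorted(row, key=lambda d: d[1])  # left-to-right
        )
        html.append(f'<div class="row">{cells}</div>')
    return "\n".join(html)

dets = [("button", 200, 105, 80, 30), ("input", 10, 100, 150, 30),
        ("h1", 10, 10, 300, 40)]
print(boxes_to_html(dets))  # h1 on its own row; input and button share one
```

Sorting first by y and then by x gives a reading-order traversal, which is what makes the row/column nesting of the Bootstrap grid recoverable from flat box coordinates.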

    Mining State-Based Models from Proof Corpora

    Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine (EFSM) as output, which can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (they contain few false-positive sequences) and that representing existing proofs in this way can be very useful when guiding new ones. Comment: to appear at Conferences on Intelligent Computer Mathematics 201
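    The general workflow (learn a state-based model from example proof-step sequences, then use it to guide new proofs) can be sketched with a simple prefix-tree acceptor. The paper infers richer Extended Finite State Machines, so this stand-in, with invented tactic names, is only illustrative.

```python
# Hedged sketch: infer a state machine from example proof-step
# sequences (a prefix-tree acceptor, not the paper's EFSM inference)
# and use it to suggest plausible next tactics for a new proof.

def build_pta(traces):
    """Build a prefix-tree acceptor: state -> {tactic: next_state}."""
    delta = {0: {}}
    fresh = 1
    for trace in traces:
        state = 0
        for tactic in trace:
            if tactic not in delta[state]:
                delta[state][tactic] = fresh
                delta[fresh] = {}
                fresh += 1
            state = delta[state][tactic]
    return delta

def suggest(delta, prefix):
    """Tactics observed after this prefix in the training proofs."""
    state = 0
    for tactic in prefix:
        state = delta[state][tactic]
    return sorted(delta[state])

proofs = [["induct", "simp", "auto"],
          ["induct", "simp", "metis"],
          ["cases", "auto"]]
pta = build_pta(proofs)
print(suggest(pta, ["induct", "simp"]))  # ['auto', 'metis']
```

A real inference algorithm would additionally merge states that behave alike, generalizing beyond the exact training sequences; the prefix tree accepts only what it has seen.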