
    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness introduces yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of these systems and their many nuances. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which gives users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
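    The core idea behind the property-based testing framework mentioned above — checking a purely functional specification against a lower-level implementation on random inputs — can be sketched in a few lines. The sketch below is stdlib-only Python and all names in it are illustrative; it is not Cogent's actual framework.

```python
# A minimal sketch of progressive assurance via property-based testing:
# a purely functional specification is checked against an imperative,
# "low-level" implementation on randomly generated inputs.
# All names here are illustrative, not Cogent's actual API.
import random

def spec_sum(xs):
    """Purely functional specification: the sum of a list, defined recursively."""
    return 0 if not xs else xs[0] + spec_sum(xs[1:])

def impl_sum(xs):
    """'Low-level' implementation: an imperative loop over a buffer."""
    total = 0
    for i in range(len(xs)):
        total += xs[i]
    return total

def check_refinement(trials=500, seed=0):
    """Check that the implementation agrees with the specification on random data."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 50))]
        assert impl_sum(xs) == spec_sum(xs), f"counterexample: {xs}"
    return True
```

    A failing assertion here would surface a concrete counterexample long before any proof effort is invested, which is the "progressive approach" the abstract alludes to.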

    Energy efficiency in lighting: analysis of the ten-year energy efficiency plan and a case study

    Since prehistoric times, man has used intelligence to create mechanisms that reduce effort and increase comfort. By mastering fire, he improved his food, lighting and security. He discovered the power of water and wind and tamed animals, using the strength of horses and oxen for work. Thousands of years passed until one event marked the history of energy: the invention of the steam engine, the energetic symbol of the Industrial Revolution. It was just over 100 years ago that electrical energy emerged, the symbol of the Information Age. Through it, other forms of energy could be produced efficiently, such as heat, lighting and mechanical energy. Electricity is essential for the development of activities, but in view of the scarcity of natural energy resources and the growth in consumers driving price increases, the conscious use of energy is increasingly encouraged, seeking alternatives to improve the quality and efficiency of the energy supplied. With advances in technology and the growing number of consumers, concern about the energy efficiency of the energy distribution system grows, with a view to quality of supply and economy in the sector. For the year 2029, an energy saving of 23.1 million tons of oil equivalent is expected, of which 16% corresponds to electricity savings and 84% to fuel savings, around 10% higher than those foreseen in the PDE 2029, in order to adequately deal with implementation risks. For electricity savings in the year 2029, the industrial sector represents about 35% of the total, followed by the residential sector and commercial and public buildings, which represent 20% and 16% of the total, respectively. The service sector, including public lighting and sanitation but excluding commercial and public buildings, represents approximately 21% of the total.
In this context, the general objective of this work is to analyze the considerations of the PEDF together with the PDE 2029, taking into account its guidelines, studies and uncertainties, with a practical application in an industrial environment that shows the energy gain: a lighting retrofit that reduces consumption and generates economic gain through energy efficiency, which consists of replacing less efficient lighting with more efficient lighting. Depending on the area, this means replacing sealed IP67 hermetic luminaires and switching lamps to LED, yielding a total reduction of approximately 25% in system power together with lower maintenance costs and reduced risk.
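    The retrofit arithmetic behind a figure like the ~25% power reduction can be worked through directly. In the sketch below, the fixture counts and wattages are hypothetical examples, chosen only so the overall reduction lands near the 25% reported in the text.

```python
# Illustrative arithmetic for a lighting retrofit: replacing less efficient
# luminaires and lamps with LED equivalents. Counts and wattages are
# hypothetical; only the ~25% overall reduction mirrors the case study.
def retrofit_savings(fixtures):
    """fixtures: list of (count, old_watts, new_watts) tuples.
    Returns total old power, total new power, and fractional reduction."""
    old = sum(n * w_old for n, w_old, _ in fixtures)
    new = sum(n * w_new for n, _, w_new in fixtures)
    return old, new, 1 - new / old

fixtures = [
    (120, 250, 180),   # e.g. sealed IP67 high-bay luminaires re-lamped to LED
    (300, 36, 30),     # e.g. fluorescent tubes replaced with LED tubes
]
old_w, new_w, cut = retrofit_savings(fixtures)
# With these example figures: 40,800 W before, 30,600 W after, a 25% cut.
```

    Multiplying the power cut by operating hours and the local tariff then gives the economic gain the study refers to.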

    Towards a Benchmark for Scientific Understanding in Humans and Machines

    Scientific understanding is a fundamental goal of science, allowing us to explain the world. There is currently no good way to measure the scientific understanding of agents, whether these be humans or Artificial Intelligence systems. Without a clear benchmark, it is challenging to evaluate and compare different levels of and approaches to scientific understanding. In this Roadmap, we propose a framework for creating a benchmark for scientific understanding, utilizing tools from the philosophy of science. We adopt a behavioral notion according to which genuine understanding should be recognized as an ability to perform certain tasks. We extend this notion by considering a set of questions that can gauge different levels of scientific understanding, covering information retrieval, the capability to arrange information to produce an explanation, and the ability to infer how things would be different under different circumstances. The Scientific Understanding Benchmark (SUB), formed by a set of these tests, allows for the evaluation and comparison of different approaches. Benchmarking plays a crucial role in establishing trust, ensuring quality control, and providing a basis for performance evaluation. By aligning machine and human scientific understanding we can improve their utility, ultimately advancing scientific understanding and helping to discover new insights.
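    The three levels the Roadmap names — information retrieval, explanation, and counterfactual inference — suggest a simple benchmark layout. The toy sketch below shows one possible organization; the items, level names as identifiers, and scoring scheme are all invented for illustration, not the actual SUB design.

```python
# A toy sketch of SUB-style test items organized by the three levels of
# understanding described in the Roadmap. Items and scoring are illustrative.
LEVELS = ("retrieval", "explanation", "counterfactual")

items = [
    {"level": "retrieval",      "q": "What is the boiling point of water at 1 atm?"},
    {"level": "explanation",    "q": "Why does the boiling point drop at altitude?"},
    {"level": "counterfactual", "q": "How would it change if the pressure doubled?"},
]

def score(answers):
    """answers: dict mapping item index -> bool (graded correct/incorrect).
    Returns per-level accuracy, so agents can be compared level by level."""
    per_level = {lv: [] for lv in LEVELS}
    for i, item in enumerate(items):
        per_level[item["level"]].append(answers.get(i, False))
    return {lv: (sum(v) / len(v) if v else 0.0) for lv, v in per_level.items()}
```

    Reporting accuracy per level, rather than a single aggregate, is what would let the benchmark distinguish mere retrieval from deeper explanatory or counterfactual understanding.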

    A User Study for Evaluation of Formal Verification Results and their Explanation at Bosch

    Context: Ensuring safety for any sophisticated system is becoming more complex due to the rising number of features and functionalities. This calls for formal methods to instill confidence in such systems. Nevertheless, using formal methods in industry is demanding because of their lack of usability and the difficulty of understanding verification results. Objective: We evaluate the acceptance of formal methods by Bosch automotive engineers, particularly whether the difficulty of understanding verification results can be reduced. Method: We perform two different exploratory studies. First, we conduct a user survey to explore the challenges Bosch automotive engineers face in identifying inconsistent specifications and using formal methods. Second, we perform a one-group pretest-posttest experiment to collect impressions from Bosch engineers familiar with formal methods, evaluating whether our counterexample explanation approach simplifies the understanding of verification results. Results: The results of the user survey indicate that identifying refinement inconsistencies, understanding formal notations, and interpreting verification results are challenging. Nevertheless, engineers are still interested in using formal methods in real-world development processes because they could reduce the manual effort for verification. They also believe formal methods could make systems safer. Furthermore, the results of the one-group pretest-posttest experiment indicate that engineers are more comfortable understanding the counterexample explanation than the raw model checker output. Limitations: The main limitation of this study is its generalizability beyond the target group of Bosch automotive engineers. Comment: This manuscript is under review with the Empirical Software Engineering journal.

    The Path to Durable Linearizability


    Kater: Automating Weak Memory Model Metatheory and Consistency Checking


    Commercialization of Separated Human Body Parts - Unpacking Instrumentalization Approach

    The principle of non-commercialization, which prohibits trade in separated human body parts, has long been firmly embedded in many European legal orders and has become an integral part of them. However, many new uses for human biomaterials have now been discovered, and the need for them has reached a historic peak. This paper aims to explain the main tenets of non-commercialization theory, including such principles as human dignity and the need to protect human health, and to show that these categories have so far been understood in a very one-sided and visceral way, largely in contradiction to their true spirit. Rather than dwelling on a critique of the existing approach, we propose an instrumental approach to human health based primarily on the will of the individual. At the end of this paper, we describe possible legal constructs through which a market for separated human body parts could function, and the outcomes of adopting one model or another.

    On Interactive Proofs of Proximity with Proof-Oblivious Queries

    Interactive proofs of proximity (IPPs) offer ultra-fast approximate verification of assertions regarding their input, where ultra-fast means that only a small portion of the input is read, and approximate verification is analogous to the notion of approximate decision that underlies property testing. Specifically, in an IPP, the prover can make the verifier accept each input in the property, but cannot fool the verifier into accepting an input that is far from the property (except with small probability). The verifier in an IPP system engages in two very different types of activities: interacting with an untrusted prover, and querying its input. The definition allows for arbitrary coordination between these two activities, but keeping them separate is both conceptually interesting and necessary for important applications such as addressing temporal considerations (i.e., at what time each of the services is available) and facilitating the construction of zero-knowledge schemes. In this work we embark on a systematic study of IPPs with proof-oblivious queries, where the queries should not be affected by the interaction with the prover. We assign the query and interaction activities to separate modules and consider different limitations on their coordination. The strictest limitation requires these activities to be totally isolated from one another; they just feed their views to a separate deciding module. We show that such systems can be efficiently emulated by standard testers. Going to the other extreme, we only disallow information to flow from the interacting module to the querying module, but allow free information flow in the other direction. We show that extremely efficient one-round (i.e., two-message) systems of this type can be used to verify properties that are extremely hard to test (without the help of a prover). That is, the complexity of verifying can be polylogarithmic in the complexity of testing. This stands in contrast to MAPs (viewed as 1/2-round systems), in which proof-oblivious queries are as limited as in our isolated model. Our focus is on an intermediate model that allows shared randomness between the querying and interacting modules but no information flow between them. In this case we show that 1-round systems are efficiently emulated by standard testers, but 3/2-round systems of extremely low complexity exist for properties that are extremely hard to test. An additional result about this model is that it can efficiently emulate any IPP for any property of low-degree polynomials.
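    The module separation in the intermediate model can be sketched structurally: a querying module whose queries depend only on shared randomness, an interacting module that talks to the prover but never sees the queries, and a separate decider that combines their views. The toy below illustrates this architecture only; the "property" (the input is all zeros) is trivially testable and the prover plays no real role, so this is not a sound IPP, just the information-flow shape.

```python
# Structural sketch of a proof-oblivious-queries verifier: the querying and
# interacting modules share randomness (same seed) but exchange no messages;
# a separate deciding module combines their views. Illustrative only.
import random

def querying_module(x, shared_rng, num_queries=10):
    """Chooses query positions from shared randomness alone (proof-oblivious)."""
    positions = [shared_rng.randrange(len(x)) for _ in range(num_queries)]
    return [(i, x[i]) for i in positions]           # the querying view

def interacting_module(prover, shared_rng):
    """Talks to the untrusted prover; never sees the queries or their answers."""
    challenge = shared_rng.randrange(2 ** 16)
    return prover(challenge)                        # the interacting view

def decide(query_view, proof_view):
    """Separate decider: accepts iff all sampled bits are 0 and a proof arrived."""
    return all(bit == 0 for _, bit in query_view) and proof_view is not None

def verify(x, prover, seed=0):
    # Two generators with the same seed model shared randomness with no
    # other information flow between the two modules.
    rng_q, rng_i = random.Random(seed), random.Random(seed)
    return decide(querying_module(x, rng_q), interacting_module(prover, rng_i))
```

    In the isolated model even the shared seed would be removed, and each module would feed its view to the decider with fully independent randomness.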

    Graph Sequence Learning for Premise Selection

    Premise selection is crucial for large-theory reasoning, as the sheer size of the problems quickly leads to resource starvation. This paper proposes a premise selection approach inspired by the domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given problem. This is achieved by combining a pre-trained graph neural network with a language model. We evaluated different configurations of our method and achieved a 17.7% improvement over the baseline. Comment: 17 pages.
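    The captioning analogy amounts to step-by-step sequence decoding: given features of the problem, emit axioms one at a time, each conditioned on what has been emitted so far. The stdlib-only sketch below illustrates the decoding loop; the hard-coded scoring table is a stand-in for the paper's pre-trained graph neural network and language model, and all axiom names are invented.

```python
# Toy sketch of captioning-style premise selection: greedy decoding of an
# axiom sequence conditioned on the prefix emitted so far. The score table
# stands in for a trained GNN + language model; axiom names are invented.
def score(problem_features, prefix, axiom):
    """Hypothetical scorer; in the paper this comes from the trained model."""
    table = {
        ((), "ax_assoc"): 0.9, ((), "ax_comm"): 0.5,
        (("ax_assoc",), "ax_comm"): 0.8, (("ax_assoc",), "ax_ident"): 0.3,
    }
    return table.get((tuple(prefix), axiom), 0.0)

def decode_premises(problem_features, axioms, max_len=3, threshold=0.4):
    """Greedily append the best-scoring axiom until none clears the threshold."""
    prefix = []
    for _ in range(max_len):
        best = max(axioms, key=lambda a: score(problem_features, prefix, a))
        if score(problem_features, prefix, best) < threshold:
            break
        prefix.append(best)
    return prefix
```

    In practice the scores would be produced by beam search over model outputs rather than greedy lookup, but the sequential, prefix-conditioned structure is the same.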