
    Robustness Testing of Intermediate Verifiers

    Program verifiers are not exempt from the bugs that affect nearly every piece of software. In addition, they often exhibit brittle behavior: their performance changes considerably with details of how the input program is expressed, details that should be irrelevant, such as the order of independent declarations. Such a lack of robustness frustrates users, who have to spend considerable time figuring out a tool's idiosyncrasies before they can use it effectively. This paper introduces a technique to detect lack of robustness in program verifiers; the technique is lightweight and fully automated, as it is based on testing methods (such as mutation testing and metamorphic testing). The key idea is to generate many simple variants of a program that initially passes verification. All variants are, by construction, equivalent to the original program; thus, any variant that fails verification indicates a lack of robustness in the verifier. We implemented our technique in a tool called "mugie", which operates on programs written in the popular Boogie language for verification, used as an intermediate representation in numerous program verifiers. Experiments targeting 135 Boogie programs indicate that brittle behavior occurs fairly frequently (16 programs) and is not hard to trigger. Based on these results, the paper discusses the main sources of brittle behavior and suggests means of improving robustness.
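
    To make the metamorphic-testing idea concrete, here is a minimal Haskell sketch of such a robustness check. It is illustrative only, not mugie's actual implementation: the file names, the one-declaration-per-line simplification, and the scan of Boogie's summary output for "0 errors" are all assumptions.

        import Data.List (isInfixOf, permutations)
        import System.Process (readProcessWithExitCode)

        -- Run the verifier on a variant and scan its summary output.
        -- Assumes a "boogie" executable on PATH whose summary mentions
        -- "0 errors" on a fully verified run (an assumed output format).
        verifies :: [String] -> IO Bool
        verifies decls = do
          writeFile "variant.bpl" (unlines decls)
          (_, out, _) <- readProcessWithExitCode "boogie" ["variant.bpl"] ""
          pure ("0 errors" `isInfixOf` out)

        -- Metamorphic check: permuting independent top-level declarations
        -- preserves semantics, so every permutation of a verified program
        -- should verify too; any failure exposes brittleness in the tool.
        -- (Treating each line as one declaration is a simplification.)
        main :: IO ()
        main = do
          decls <- lines <$> readFile "original.bpl"
          mapM_ check (zip [(1 :: Int) ..] (take 20 (permutations decls)))
          where
            check (i, variant) = do
              ok <- verifies variant
              if ok
                then pure ()
                else putStrLn ("variant " ++ show i ++ " fails: brittle verifier")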

    Draft Regional Recommendations for the Pacific Northwest on Water Quality Trading

    In March 2013, water quality agency staff from Idaho, Oregon, and Washington, U.S. EPA Region 10, Willamette Partnership, and The Freshwater Trust convened a working group for the first of a series of four interagency workshops on water quality trading in the Pacific Northwest. Facilitated by Willamette Partnership through a USDA-NRCS Conservation Innovation Grant, those who assembled over the subsequent eight months discussed and evaluated water quality trading policies, practices, and programs across the country in an effort to better understand and draw from EPA's January 13, 2003, Water Quality Trading Policy and its 2007 Permit Writers' Toolkit, as well as existing state guidance and regulations on water quality trading. All documents presented at those conversations, along with meeting summaries, are posted on the Willamette Partnership's website. The final product is intended to be a set of recommended practices for each state to consider as it develops water quality trading. The goals of this effort are to help ensure that water quality trading programs have the quality, credibility, and transparency necessary to be consistent with the Clean Water Act (CWA), its implementing regulations, and state and local water quality laws.

    Can Carbon Sinks be Operational? An RFF Workshop Summary

    An RFF Workshop brought together experts from around the world to assess the feasibility of using biological sinks to sequester carbon as part of a global atmospheric mitigation effort. The chapters of these proceedings are a result of that effort. Although the intent of the workshop was not to generate a consensus, a number of studies suggest that sinks could be a relatively inexpensive and effective carbon management tool. The chapters cover a variety of aspects and topics related to the monitoring and measurement of carbon in biological systems. They tend to support the view that carbon sequestration using biological systems is technically feasible, with relatively good precision and at relatively low cost. Thus carbon sinks can be operational.
    Keywords: carbon, sinks, global warming, sequestration, forests

    Refinement Reflection: Complete Verification with SMT

    We introduce Refinement Reflection, a new framework for building SMT-based deductive verifiers. The key idea is to reflect the code implementing a user-defined function into the function's (output) refinement type. As a consequence, at uses of the function, the function definition is instantiated in the SMT logic in a precise fashion that permits decidable verification. Reflection allows the user to write equational proofs of programs just by writing other programs, using pattern matching and recursion to perform case-splitting and induction. Thus, via the propositions-as-types principle, we show that reflection permits the specification of arbitrary functional correctness properties. Finally, we introduce a proof-search algorithm called Proof by Logical Evaluation that uses techniques from model checking and abstract interpretation to completely automate equational reasoning. We have implemented reflection in Liquid Haskell and used it to verify that the widely used instances of the Monoid, Applicative, Functor, and Monad typeclasses actually satisfy key algebraic laws required to make their clients safe, and have used reflection to build the first library that actually verifies assumptions about associativity and ordering that are crucial for safe deterministic parallelism.
    Comment: 29 pages plus appendices, to appear in POPL 2018. arXiv admin note: text overlap with arXiv:1610.0464
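
    As an illustration of the style the abstract describes, here is a minimal Liquid Haskell sketch (freely adapted, not verbatim from the paper) proving associativity of a reflected list append. Pattern matching performs the case split and recursion supplies the induction hypothesis; the --ple flag enables Proof by Logical Evaluation, which discharges the equational steps automatically.

        {-@ LIQUID "--reflection" @-}
        {-@ LIQUID "--ple" @-}
        module AppendAssoc where

        import Language.Haskell.Liquid.ProofCombinators

        -- Reflect the definition of `app` into its refinement type,
        -- so the SMT solver can unfold it precisely at each use site.
        {-@ reflect app @-}
        app :: [a] -> [a] -> [a]
        app []     ys = ys
        app (x:xs) ys = x : app xs ys

        -- An equational proof written as an ordinary recursive program:
        -- the branches are the base and inductive cases, and the
        -- recursive call is the induction hypothesis; PLE fills in the
        -- intermediate rewriting steps.
        {-@ appAssoc :: xs:[a] -> ys:[a] -> zs:[a]
                     -> { app (app xs ys) zs == app xs (app ys zs) } @-}
        appAssoc :: [a] -> [a] -> [a] -> Proof
        appAssoc []     _  _  = trivial
        appAssoc (_:xs) ys zs = appAssoc xs ys zs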

    REST: Integrating Term Rewriting with Program Verification (Extended Version)

    We introduce REST, a novel term rewriting technique for theorem proving that uses online termination checking and can be integrated with existing program verifiers. REST enables flexible but terminating term rewriting for theorem proving by: (1) exploiting newly-introduced term orderings that are more permissive than standard rewrite simplification orderings; (2) dynamically and iteratively selecting orderings based on the path of rewrites taken so far; and (3) integrating external oracles that allow steps that cannot be justified with rewrite rules. Our REST approach is designed around an easily implementable core algorithm, parameterizable by choices of term orderings and their implementations; in this way our approach can be easily integrated into existing tools. We implemented REST as a Haskell library and incorporated it into Liquid Haskell's evaluation strategy, extending Liquid Haskell with rewriting rules. We evaluated our REST implementation by comparing it against both existing rewriting techniques and E-matching, and by showing that it can be used to supplant manual lemma application in many existing Liquid Haskell proofs.
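
    The core loop the abstract alludes to can be pictured with a short Haskell sketch of ordering-guarded rewriting. The types and names below are hypothetical illustrations of the idea, not REST's actual library API: a step is taken only if some rule applies and the path ordering accepts the extended rewrite path as still terminating.

        module RestSketch where

        -- A first-order term language (illustrative only).
        data Term = Var String | App String [Term]
          deriving (Eq, Show)

        -- A rewrite rule attempts one step at the root of a term.
        type Rule = Term -> Maybe Term

        -- The ordering oracle: given the path of terms rewritten so far
        -- and a candidate next term, decide whether taking the step can
        -- still be justified as terminating. REST selects such orderings
        -- dynamically; here it is an abstract parameter.
        type PathOrdering = [Term] -> Term -> Bool

        -- Enumerate every rewrite path the ordering accepts; a path ends
        -- when no rule application is both possible and permitted.
        rewritePaths :: [Rule] -> PathOrdering -> Term -> [[Term]]
        rewritePaths rules ok = go []
          where
            go path t
              | null next = [reverse (t : path)]
              | otherwise = concatMap (go (t : path)) next
              where
                next = [ t' | r <- rules
                            , Just t' <- [r t]
                            , ok (t : path) t' ]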

    A Survey of DeFi Security: Challenges and Opportunities

    DeFi, or Decentralized Finance, is built on a distributed ledger: blockchain technology. Using blockchain, DeFi can automate the execution of predetermined operations between parties. The DeFi system uses blockchain technology to execute user transactions, such as lending and exchanging. The total value locked in DeFi decreased from $200 billion in April 2022 to $80 billion in July 2022, indicating that security in this area remains problematic. In this paper, we address the deficiency in DeFi security studies. To the best of our knowledge, our paper is the first to make a systematic analysis of DeFi security. First, we summarize the DeFi-related vulnerabilities in each blockchain layer; application-level vulnerabilities are also analyzed. Then we classify and analyze real-world DeFi attacks according to the vulnerabilities they exploit. In addition, we collect optimization strategies from the data, network, consensus, smart contract, and application layers, and describe the weaknesses and technical approaches they address. On the basis of this comprehensive analysis, we summarize several challenges and possible future directions in DeFi to offer ideas for further research.

    Aligning Large Language Models with Human: A Survey

    Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks. Despite their notable performance, these models are prone to certain limitations such as misunderstanding human instructions, generating potentially biased content, or producing factually incorrect (hallucinated) information. Hence, aligning LLMs with human expectations has become an active area of interest within the research community. This survey presents a comprehensive overview of these alignment technologies, including the following aspects. (1) Data collection: the methods for effectively collecting high-quality instructions for LLM alignment, including the use of NLP benchmarks, human annotations, and leveraging strong LLMs. (2) Training methodologies: a detailed review of the prevailing training methods employed for LLM alignment. Our exploration encompasses Supervised Fine-tuning, both Online and Offline human preference training, along with parameter-efficient training mechanisms. (3) Model Evaluation: the methods for evaluating the effectiveness of these human-aligned LLMs, presenting a multifaceted approach towards their assessment. In conclusion, we collate and distill our findings, shedding light on several promising future research avenues in the field. This survey, therefore, serves as a valuable resource for anyone invested in understanding and advancing the alignment of LLMs to better suit human-oriented tasks and expectations. An associated GitHub link collecting the latest papers is available at https://github.com/GaryYufei/AlignLLMHumanSurvey.
    Comment: work in progress

    Cyber Peace

    Cyberspace is increasingly vital to the future of humanity, and managing it peacefully and sustainably is critical to both security and prosperity in the twenty-first century. These chapters and essays unpack the field of cyber peace by investigating historical and contemporary analogies, in a wide-ranging and accessible Open Access publication.