412 research outputs found

    Foundational extensible corecursion: a proof assistant perspective

    This paper presents a formalized framework for defining corecursive functions safely in a total setting, based on corecursion up-to and relational parametricity. The end product is a general corecursor that allows corecursive (and even recursive) calls under “friendly” operations, including constructors. Friendly corecursive functions can be registered as such, thereby increasing the corecursor’s expressiveness. The metatheory is formalized in the Isabelle proof assistant and forms the core of a prototype tool. The corecursor is derived from first principles, without requiring new axioms or extensions of the logic.
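The guardedness discipline behind such a corecursor can be illustrated outside Isabelle. The following Python generator sketch (all names hypothetical, and only an analogy to the paper's total setting) defines the Fibonacci stream corecursively, with pointwise addition playing the role of a "friendly" operation: it emits one output element per element consumed, so the corecursive calls remain productive.

```python
from itertools import islice

def fib_stream():
    # Corecursive definition: fib = 0 :: 1 :: (fib + tail fib).
    # Pointwise addition is "friendly": each element it produces
    # consumes only one element of each argument, so productivity
    # (guardedness) is preserved despite the self-reference.
    yield 0
    yield 1
    a = fib_stream()          # corecursive call
    b = fib_stream()
    next(b)                   # tail of the stream
    for x, y in zip(a, b):
        yield x + y

# Demand-driven evaluation: only the requested prefix is computed.
print(list(islice(fib_stream(), 8)))
```

Python's lazy generators approximate the demand-driven semantics here; in a total logic the corecursor itself must certify productivity rather than relying on laziness.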

    Competence Assessment Instruments in Perianesthesia Nursing Care: A Scoping Review of the Literature

    Purpose: To identify competence assessment instruments in perianesthesia nursing care and to describe the validity and reliability of the instruments. Design: A scoping review conducted in a systematic manner. Methods: A search in CINAHL, MEDLINE, and ERIC was carried out to identify empirical studies from 1994 to 2015. A narrative synthesis approach was undertaken to analyze the data. Findings: Nine competence assessment instruments in perianesthesia nursing care were identified. The instruments used three types of data collection methods: self-report, observation, and written examinations. The most commonly reported validity method was content validity involving expert panels; the most common reliability tests were for internal consistency and inter-rater consistency. Conclusions: Integrating more than one data collection method may help overcome some of the limitations, such as lack of objectivity and misinterpretation of assessment results. In an ever-changing environment, perianesthesia nursing competence requires constant reassessment from the perspective of content validity, scoring methods, and reliability.

    Modular Synthesis of Sketches Using Models

    One problem with the constraint-based approaches to synthesis that have become popular over the last few years is that they only scale to relatively small routines, on the order of a few dozen lines of code. This paper presents a mechanism for modular reasoning that allows us to break larger synthesis problems into small, manageable pieces. The approach builds on previous work in the verification community that uses high-level specifications and partially interpreted functions (we call them models) in place of more complex pieces of code in order to make the analysis modular. The main contribution of this paper is to show how to combine these techniques with the counterexample-guided synthesis approaches used to efficiently solve synthesis problems. Specifically, we present two new algorithms: one to efficiently synthesize functions that use models, and another to synthesize functions while ensuring that the behavior of the resulting function will be in the set of behaviors allowed by the model. We have implemented our approach on top of the open-source Sketch synthesis system, and we demonstrate its effectiveness on several Sketch benchmark problems. National Science Foundation (U.S.) (Grant NSF-1116362); National Science Foundation (U.S.) (Grant NSF-1139056); United States Dept. of Energy (Grant DE-SC0005372).
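The second algorithm's requirement, that a synthesized function's behavior stay within the set of behaviors a model allows, can be sketched in a few lines of Python. The names and the integer-square-root model below are illustrative assumptions, not the paper's formalism:

```python
import math

def sqrt_model(x, r):
    # A "model": the set of (input, output) behaviors a complex
    # integer-sqrt routine is permitted to exhibit.  Any r with
    # r*r <= x < (r+1)^2 is an allowed behavior.
    return r >= 0 and r * r <= x < (r + 1) ** 2

def conforms(candidate, model, domain):
    # A synthesized candidate is acceptable only if every behavior
    # it exhibits on the domain lies inside the model's allowed set.
    return all(model(x, candidate(x)) for x in domain)

print(conforms(math.isqrt, sqrt_model, range(100)))       # conforming
print(conforms(lambda x: x // 2, sqrt_model, range(100))) # not conforming
```

In the paper this conformance check is discharged symbolically by the synthesizer rather than by finite enumeration, but the containment relation being checked is the same.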

    On the equivalence of pairing correlations and intrinsic vortical currents in rotating nuclei

    The present paper establishes a link between pairing correlations in rotating nuclei and collective vortical modes in the intrinsic frame. We show that the latter can be embodied by a simple S-type coupling à la Chandrasekhar between rotational and intrinsic vortical collective modes. This results from a comparison between the solutions of microscopic calculations within the HFB and the HF Routhian formalisms. The HF Routhian solutions are constrained to have the same Kelvin circulation expectation value as the HFB ones. It is shown in several mass regions, pairing regimes, and for various spin values that this procedure yields moments of inertia, angular velocities, and current distributions which are very similar within both formalisms. We finally present perspectives for further studies. Comment: 8 pages, 4 figures, submitted to Phys. Rev.

    On Deciding Local Theory Extensions via E-matching

    Satisfiability Modulo Theories (SMT) solvers incorporate decision procedures for theories of data types that commonly occur in software. This makes them important tools for automating verification problems. A limitation frequently encountered is that verification problems are often not fully expressible in the theories supported natively by the solvers. Many solvers allow the specification of application-specific theories as quantified axioms, but their handling is incomplete outside of narrow special cases. In this work, we show how SMT solvers can be used to obtain complete decision procedures for local theory extensions, an important class of theories that are decidable using finite instantiation of axioms. We present an algorithm that uses E-matching to generate instances incrementally during the search, significantly reducing the number of generated instances compared to eager instantiation strategies. We have used two SMT solvers to implement this algorithm and conducted an extensive experimental evaluation on benchmarks derived from verification conditions for heap-manipulating programs. We believe that our results are of interest to both users and developers of SMT solvers.
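The instantiation idea can be sketched in plain Python: a minimal E-matching routine that instantiates an axiom's trigger only at ground subterms already present in the goal, which is what makes finite instantiation complete for local theory extensions. The term representation (nested tuples, variables prefixed with `?`) and all names are assumptions of this sketch, not the paper's implementation:

```python
def ematch(pattern, term, subst):
    """Try to extend subst so pattern matches term; None on failure."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:                       # variable already bound
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}            # bind variable
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and len(pattern) == len(term) and pattern[0] == term[0]):
        for p, t in zip(pattern[1:], term[1:]):    # match arguments
            subst = ematch(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None      # constants must coincide

def subterms(term):
    # Enumerate all ground subterms of a goal term.
    yield term
    if isinstance(term, tuple):
        for t in term[1:]:
            yield from subterms(t)

def instances(trigger, ground_terms):
    # Instantiate the axiom's trigger at every ground subterm present,
    # rather than eagerly over the whole (possibly infinite) term universe.
    out = []
    for g in ground_terms:
        for t in subterms(g):
            s = ematch(trigger, t, {})
            if s is not None:
                out.append(s)
    return out

# Trigger f(?x) matched against the goal term f(f(a)):
print(instances(('f', '?x'), [('f', ('f', 'a'))]))
```

A real solver performs this matching against the E-graph modulo congruence and interleaves it with the search; the sketch only shows the syntactic core.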

    Robustness Testing of Intermediate Verifiers

    Program verifiers are not exempt from the bugs that affect nearly every piece of software. In addition, they often exhibit brittle behavior: their performance changes considerably with details of how the input program is expressed, details that should be irrelevant, such as the order of independent declarations. Such a lack of robustness frustrates users, who have to spend considerable time figuring out a tool's idiosyncrasies before they can use it effectively. This paper introduces a technique to detect lack of robustness in program verifiers; the technique is lightweight and fully automated, as it is based on testing methods (such as mutation testing and metamorphic testing). The key idea is to generate many simple variants of a program that initially passes verification. All variants are, by construction, equivalent to the original program; thus, any variant that fails verification indicates a lack of robustness in the verifier. We implemented our technique in a tool called "mugie", which operates on programs written in the popular Boogie verification language, used as an intermediate representation in numerous program verifiers. Experiments targeting 135 Boogie programs indicate that brittle behavior occurs fairly frequently (16 programs) and is not hard to trigger. Based on these results, the paper discusses the main sources of brittle behavior and suggests means of improving robustness.
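The key idea admits a very small sketch. Assuming a verifier is a function from a program (modeled here as a list of declarations) to a verdict, every permutation of independent declarations is an equivalent variant, so a robust verifier must return one verdict across all of them. Names are illustrative; this is not mugie's implementation:

```python
from itertools import permutations

def variants(declarations):
    # Metamorphic variants: every reordering of independent declarations
    # is, by construction, equivalent to the original program.
    return [list(p) for p in permutations(declarations)]

def is_robust(verifier, declarations):
    # A robust verifier gives the same verdict on all equivalent variants;
    # more than one distinct verdict reveals brittle behavior.
    verdicts = {verifier(v) for v in variants(declarations)}
    return len(verdicts) == 1

# A verifier whose verdict depends on declaration order is brittle.
print(is_robust(lambda d: True, ['a', 'b', 'c']))   # robust
print(is_robust(lambda d: d[0] == 'a', ['a', 'b'])) # brittle
```

The paper's mutations go beyond permutations (and sample rather than exhaust the variant space), but the oracle is this one: equivalent inputs must not change the verdict.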

    Program analysis is harder than verification: A computability perspective

    We study static program analysis, namely detecting sound program assertions, and verification, namely sound checking of program assertions, from a computability perspective. We first design a general computability model for domains of program assertions and corresponding program analysers and verifiers. Next, we formalize and prove an instantiation of Rice's theorem for static program analysis and verification. Then, within this general model, we provide and prove a precise statement of the popular belief that program analysis is a harder problem than program verification: we prove that for finite domains of program assertions, program analysis and verification are equivalent problems, while for infinite domains, program analysis is strictly harder than verification.
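The finite-domain direction of the equivalence can be sketched directly: given a sound verifier and a finite domain of assertions, an analyser is obtained by exhaustively checking each assertion. This is a toy model with illustrative names, not the paper's formal construction:

```python
def analyser_from_verifier(verify, assertions):
    # For a *finite* domain of assertions, analysis reduces to
    # verification: check every assertion and keep the verified ones.
    # With infinitely many assertions this enumeration never halts,
    # which is where the strict separation arises.
    def analyse(program):
        return {a for a in assertions if verify(program, a)}
    return analyse

# Toy verifier: a "program" is modeled as the set of assertions true of it.
verify = lambda program, a: a in program
analyse = analyser_from_verifier(verify, {'x>=0', 'x>0'})
print(analyse({'x>=0'}))
```

The converse reduction is immediate in any domain: to verify one assertion, run the analyser and test membership.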

    Elongation mechanism of the ion shaping of embedded gold nanoparticles under swift heavy ion irradiation

    The elongation process under swift heavy ion irradiation (74 MeV Kr ions) of gold nanoparticles (NPs), with diameters in the range 10-30 nm, embedded in a silica matrix has been investigated by combining experiment and simulation techniques: three-dimensional thermal spike (3DTS) calculations, molecular dynamics (MD), and a phenomenological simulation code specially developed for this study. 3DTS simulations evidence the formation of a track in the host matrix and the melting of the NP after the passage of the impinging ion. MD simulations demonstrate that melted NPs have enough time to expand after each ion impact. Our phenomenological simulation relies on the expansion of the melted NP, which flows into the track in silica with modified (lower) density, followed by its recrystallization upon cooling. Finally, the elongation of the spherical NP into a cylindrical one, with a length proportional to its initial size and a width close to the diameter of the track, is the result of the superposition of the independent effects of each expansion/recrystallization process occurring at each ion impact. In agreement with experiment, the simulation shows the gradual elongation of spherical NPs in the ion-beam direction until their widths saturate in the steady state and reach a value close to the track diameter. Moreover, the simulations indicate that the expansion of the gold NP is incomplete at each ion impact. Peer reviewed.

    Critical care nurses' self-assessed patient observation skills: a cross-sectional survey study

    BACKGROUND: Observing a patient's clinical condition is an important responsibility of critical care nurses and an essential component of their competence. Critical care nurses' patient observation skills contribute to patient safety and quality of care. These observation skills have not been assessed or measured previously. AIM: The aim of this study was to measure the self-assessed level of critical care nurses' patient observation skills and to explore the factors associated with these skills. STUDY DESIGN: This was a multicentre cross-sectional survey conducted in Finland. METHODS: The sample consisted of critical care nurses working at Finnish university hospitals. The data were collected between September 2017 and January 2018 using an instrument developed for the study, Patient Observation Skills in Critical Care Nursing (visual analogue scale 0-100). Descriptive and inferential statistics were used to analyse the data. RESULTS: A total of 372 critical care nurses (49%) responded. Finnish critical care nurses assessed their patient observation skills overall as excellent. The bio-physiological foundation was assessed as good, whereas skills in using observation methods and skills in recognizing a changing clinical condition were assessed as excellent. Education for special tasks in intensive care units, information searching in scientific journals, working experience in critical care nursing, and critical care nurses' perception of critical care as a preferred field of nursing were factors promoting patient observation skills. CONCLUSIONS AND RELEVANCE TO CLINICAL PRACTICE: The study provided a novel instrument for measuring critical care nurses' patient observation skills. The instrument may be used as an assessment tool in clinical practice and education. Developing orientation and on-the-job training in intensive care units is essential to ensure critical care nurses' adequate patient observation skills. Patient observation skills could be developed during nursing education by providing students with opportunities for clinical training and applying patient cases in virtual learning environments.

    Invariant Synthesis for Incomplete Verification Engines

    We propose a framework for synthesizing inductive invariants for incomplete verification engines, which soundly reduce logical problems in undecidable theories to decidable theories. Our framework is based on the counterexample-guided inductive synthesis (CEGIS) principle and allows verification engines to communicate non-provability information to guide invariant synthesis. We show precisely how the verification engine can compute such non-provability information and how to build effective learning algorithms when invariants are expressed as Boolean combinations of a fixed set of predicates. Moreover, we evaluate our framework in two verification settings, one in which verification engines need to handle quantified formulas and one in which verification engines have to reason about heap properties expressed in an expressive but undecidable separation logic. Our experiments show that our invariant synthesis framework based on non-provability information can both effectively synthesize inductive invariants and adequately strengthen contracts across a large suite of programs.
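Stripped of the non-provability feedback, the underlying search space can be sketched as enumerate-and-check over Boolean combinations (here, only conjunctions) of a fixed predicate set against a toy transition system. All names are illustrative, and a real engine would prune with counterexamples and non-provability information rather than enumerate blindly:

```python
from itertools import combinations

def synthesize_invariant(preds, inits, step, bad, states):
    # Candidate invariants: conjunctions over the fixed predicate set,
    # strongest (largest) conjunctions tried first.
    for k in range(len(preds), 0, -1):
        for subset in combinations(preds, k):
            inv = lambda s, ps=subset: all(p(s) for p in ps)
            initiation = all(inv(s) for s in inits)                     # holds initially
            consecution = all(inv(step(s)) for s in states if inv(s))   # preserved by step
            safety = not any(inv(s) for s in bad)                       # excludes bad states
            if initiation and consecution and safety:
                return subset
    return None

# Toy system: counter starts at 0, increases by 2; odd states are bad.
even = lambda s: s % 2 == 0
nonneg = lambda s: s >= 0
result = synthesize_invariant([even, nonneg], [0], lambda s: s + 2, [1, 3], range(10))
print(result is not None)
```

The decidable-theory reductions in the paper replace the finite `states` enumeration with calls to the incomplete verification engine, which is exactly where non-provability information becomes necessary.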