
    An iterative approach to precondition inference using constrained Horn clauses

    We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretation to derive an over-approximation of the set of unsafe initial states. The precondition is then the constraint corresponding to the complement of that set, under-approximating the set of safe initial states. This idea of complementation is not new, but previous attempts to exploit it have suffered from loss of precision. Here we develop an iterative specialisation algorithm that gives more precise, and in some cases optimal, safety conditions. The algorithm combines existing transformations, namely constraint specialisation, partial evaluation and a trace elimination transformation. The last two of these transformations perform polyvariant specialisation, leading to disjunctive constraints which improve precision. The algorithm is implemented and tested on a benchmark suite of programs from the precondition-inference literature and from software verification competitions. Comment: Paper presented at the 34th International Conference on Logic Programming (ICLP 2018), Oxford, UK, July 14 to July 17, 2018. 18 pages, LaTeX.
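    As a toy illustration of the complementation idea (a hypothetical Python sketch, not the paper's CHC-based algorithm), consider the one-statement program `y := x + 1; assert y <= 10`: backward propagation of the violated assertion gives the unsafe initial states x > 9, and the inferred precondition is the complement, x <= 9.

```python
# Hypothetical sketch of precondition inference by complementation.
# Program under analysis:  y := x + 1; assert y <= 10

def violates_assertion(x):
    """Run the program from initial state x; True iff the assertion fails."""
    y = x + 1
    return not (y <= 10)

def in_unsafe_overapprox(x):
    """Over-approximation of the unsafe initial states (here it is exact):
    the assertion fails iff x + 1 > 10, i.e. x > 9."""
    return x > 9

def precondition(x):
    """Complement of the over-approximated unsafe set: an
    under-approximation of the safe initial states."""
    return not in_unsafe_overapprox(x)

# Every state satisfying the precondition is indeed safe.
for x in range(-5, 20):
    if precondition(x):
        assert not violates_assertion(x)
```

    Because the unsafe set is over-approximated, complementing it can only lose safe states, never admit unsafe ones; the paper's iterative specialisation targets exactly that loss of precision.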

    PRESENT 2020: Text Expanding on the Checklist for Proper Reporting of Evidence in Sport and Exercise Nutrition Trials.

    Some readers may not have access to the full paper, so a properly formatted and well-written abstract is imperative. Authors should give priority to information about the current study rather than using the abstract for an extensive background or rationale.
    2a Methods: Key information regarding the study design, methods, and population should be summarized to enable broad understanding of the study from the abstract.
    2b Results: Readers are interested in extracting key data that reflect the main findings of the study. The abstract should present data (e.g., the absolute magnitude of values and the size/precision of effects, specifying which measures of central tendency and variability are stated) rather than simply stating the presence, absence, or direction of effects. The presentation of p values or similar inferential statistics is no substitute for reporting actual data (Maughan, 2004).
    2c Conclusion: Priority should be given to the reporting of results as per the previous section, with only a brief concluding statement thereafter. A concise conclusion based on what was actually measured in the study is preferred to speculative interpretations, with cautious use of language to avoid hyperbole or improper inference of causality (Brown et al., 2013). It is not appropriate or necessary to identify further research priorities here.

    Profiling of OCR'ed Historical Texts Revisited

    In the absence of ground truth it is not possible to automatically determine the exact spectrum and occurrences of OCR errors in an OCR'ed text. Yet, for interactive postcorrection of OCR'ed historical printings it is extremely useful to have a statistical profile available that provides an estimate of error classes with associated frequencies, and that points to conjectured errors and suspicious tokens. The method introduced in Reffle (2013) computes such a profile, combining lexica, pattern sets and advanced matching techniques in a specialized Expectation Maximization (EM) procedure. Here we improve this method in three respects. First, the method in Reffle (2013) is not adaptive: user feedback obtained by actual postcorrection steps cannot be used to compute refined profiles. We introduce a variant of the method that is open to adaptivity, taking correction steps of the user into account. This leads to higher precision with respect to recognition of erroneous OCR tokens. Second, during postcorrection new historical patterns are often found. We show that adding new historical patterns to the linguistic background resources leads to a second kind of improvement, enabling even higher precision by telling historical spellings apart from OCR errors. Third, the method in Reffle (2013) does not make any active use of tokens that cannot be interpreted in the underlying channel model. We show that adding these uninterpretable tokens to the set of conjectured errors leads to a significant improvement of the recall for error detection, while at the same time improving precision.
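    The channel-model idea behind such a profile can be sketched in a few lines (a hypothetical toy with illustrative data and names, not Reffle's implementation): the latent variable is which lexicon word produced an observed OCR token, the parameters are probabilities of character substitution patterns, and EM alternates between attributing tokens to words and re-estimating pattern frequencies.

```python
from collections import defaultdict

# Hypothetical toy: estimate OCR substitution-pattern probabilities by EM.
LEXICON = ["long", "lang"]
TOKENS = ["lcng", "lcng", "lang"]  # observed OCR output

def mismatches(token, word):
    """Substitution patterns (true_char, ocr_char) needed to turn
    word into token; None if the lengths differ."""
    if len(token) != len(word):
        return None
    return [(w, t) for w, t in zip(word, token) if w != t]

def likelihood(token, word, p_sub, p_match=0.95):
    """Channel probability of reading `word` as `token`."""
    subs = mismatches(token, word)
    if subs is None:
        return 0.0
    lik = p_match ** (len(word) - len(subs))
    for pat in subs:
        lik *= p_sub[pat]
    return lik

def em(tokens, lexicon, iters=20):
    p_sub = defaultdict(lambda: 0.01)  # near-uniform start
    for _ in range(iters):
        counts, total = defaultdict(float), 0.0
        for tok in tokens:
            liks = [likelihood(tok, w, p_sub) for w in lexicon]
            z = sum(liks) or 1.0
            for w, lik in zip(lexicon, liks):
                post = lik / z                      # E-step: P(word | token)
                for pat in mismatches(tok, w) or []:
                    counts[pat] += post
                    total += post
        for pat in counts:                          # M-step: renormalise
            p_sub[pat] = counts[pat] / total
    return dict(p_sub)

probs = em(TOKENS, LEXICON)
# "lcng" is explained by the patterns ('o','c') and ('a','c'), while
# ('o','a') is driven towards zero by the correctly read token "lang".
```

    In this sketch, the adaptive variant described in the paper would correspond to feeding user-verified (token, word) corrections into the same M-step with their posterior fixed to 1.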

    An Algebraic Framework for Compositional Program Analysis

    The purpose of a program analysis is to compute an abstract meaning for a program which approximates its dynamic behaviour. A compositional program analysis accomplishes this task with a divide-and-conquer strategy: the meaning of a program is computed by dividing it into sub-programs, computing their meanings, and then combining the results. Compositional program analyses are desirable because they can yield scalable (and easily parallelizable) program analyses. This paper presents an algebraic framework for designing, implementing, and proving the correctness of compositional program analyses. A program analysis in our framework is defined by an algebraic structure equipped with sequencing, choice, and iteration operations. From the analysis design perspective, a particularly interesting consequence of this is that the meaning of a loop is computed by applying the iteration operator to the loop body. This style of compositional loop analysis can yield interesting ways of computing loop invariants that cannot be defined iteratively. We identify a class of algorithms, the so-called path-expression algorithms [Tarjan 1981, Scholz 2007], which can be used to efficiently implement analyses in our framework. Lastly, we develop a theory for proving the correctness of an analysis by establishing an approximation relationship between an algebra defining a concrete semantics and an algebra defining an analysis. Comment: 15 pages.
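    A minimal sketch of such an algebraic interface, using a hypothetical "may-modify" analysis (the names and the analysis instance are illustrative, not taken from the paper): the analysis supplies sequencing, choice, and iteration operators, and the meaning of a structured program is computed bottom-up from the meanings of its parts.

```python
from dataclasses import dataclass

# Hypothetical analysis algebra: the abstract meaning of a program
# fragment is the set of variables it may modify.

@dataclass(frozen=True)
class ModSet:
    vars: frozenset

def seq(a, b):      # a ; b
    return ModSet(a.vars | b.vars)

def choice(a, b):   # if ... then a else b
    return ModSet(a.vars | b.vars)

def iterate(a):     # while ... do a  -- zero or more runs modify no new vars
    return a

def assign(v):
    return ModSet(frozenset({v}))

# meaning of:  x := e1; while c { y := e2; if d { z := e3 } else { x := e4 } }
prog = seq(assign("x"),
           iterate(seq(assign("y"),
                       choice(assign("z"), assign("x")))))
```

    Note that the loop's meaning is obtained by applying the iteration operator to the body's meaning, not by fixpoint iteration over a control-flow graph; a richer abstract domain would pair `iterate` with widening or a closed-form star operator.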

    The metal-insulator transition in disordered solids: How theoretical prejudices influence its characterization. A critical review of analyses of experimental data

    In a recent experiment, Siegrist et al. [Nature Materials 10, 202 (2011)] investigated the metal-insulator transition (MIT) of GeSb_2Te_4 on increasing annealing temperature. The authors conclude that this material exhibits a discontinuous MIT with a finite minimum metallic conductivity. The striking contrast to reports on other disordered substances motivates the present in-depth study of the influence of the MIT criterion used on the characterization of the MIT. First, we discuss in detail the inherent biases of the various available approaches to locating the MIT. Second, reanalyzing the GeSb_2Te_4 measurements, we show that this material resembles other disordered solids to a large extent: according to a widely used approach, these data may also be interpreted in terms of a continuous MIT. Checking the justification of the respective fits, however, uncovers inconsistencies in the experimental data. Third, comparing with previous experimental studies of crystalline Si:As, Si:P, Si:B, Ge:Ga, CdSe:In, n-Cd_{0.95}Mn_{0.05}Se, Cd_{0.95}Mn_{0.05}Te_{0.97}Se_{0.03}:In, disordered Gd, and nano-granular Pt-C, we show that such inconclusive behavior occurs frequently: the analysis of the logarithmic temperature derivative of the conductivity highlights serious inconsistencies in the original interpretations in terms of a continuous MIT. In part, these inconsistencies are common to all these studies and seem to be generic; in part, they vary from experiment to experiment and may arise from measurement problems. Thus, the question of the character of the MIT in these materials has to be considered as yet open. The challenges now lie in improving the measurement precision and in developing a microscopic theory capable of explaining the seemingly generic features. Comment: Revtex file + 23 figures -> 51 pages. Revisions: some arguments completed; structure slightly modified: the mathematical part of former Subsection II.E is now presented as Appendix B. This version was accepted for publication by Critical Reviews in Solid State and Materials Sciences on July 18, 2017. It differs from that publication in citation style, abstract, and some very minor modifications.
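    The logarithmic-derivative criterion mentioned above can be sketched numerically (a hypothetical illustration with synthetic data, not the paper's analysis): w(T) = d ln(sigma) / d ln(T) is approximately constant and positive in a critical regime where sigma ~ T^x, while a metal with finite sigma(0) has w -> 0 as T -> 0.

```python
import math

# Hypothetical sketch: compute w(T) = d ln(sigma) / d ln(T) via central
# finite differences on a log-spaced temperature grid.

def log_derivative(T, sigma):
    return [(math.log(sigma[i + 1]) - math.log(sigma[i - 1]))
            / (math.log(T[i + 1]) - math.log(T[i - 1]))
            for i in range(1, len(T) - 1)]

# synthetic critical-regime data: sigma ~ T^0.5, so w(T) should be 0.5
T = [0.1 * 1.2 ** k for k in range(20)]
sigma = [t ** 0.5 for t in T]
w = log_derivative(T, sigma)
```

    For a metal, sigma(T) -> sigma(0) > 0 forces w -> 0 at low temperature, whereas persistently positive w signals critical or insulating behavior; this is why inconsistencies in the measured w(T) curves can undermine an interpretation in terms of a continuous MIT.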