58 research outputs found

    Theoretical error of luminosity cross section at LEP

    The aim of this note is to briefly characterize the main components of the theoretical error of the small-angle Bhabha measurement at LEP and to discuss critically how solid these estimates really are, from today's perspective. We conclude that the existing theoretical error of the LEP luminometer process (small-angle Bhabha) is rather solid, and we add some new discussion concerning the remaining uncertainties and the prospects of future improvements toward the 0.025% precision. Comment: Invited talk presented at the Mini-Workshop "Electroweak Physics Data and the Higgs Mass", DESY Zeuthen, Germany, February 28 - March 1, 200
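
    As a rough illustration of how such an error budget is assembled (a minimal sketch, not taken from the note: the component names and values below are hypothetical, and the components are assumed to be independent and combined in quadrature):

        import math

        # Hypothetical, independent components of the relative theoretical
        # error (in %); the real budget is discussed in the note itself.
        components = {
            "photonic corrections": 0.027,
            "vacuum polarization": 0.020,
            "light fermion pairs": 0.010,
            "Z exchange": 0.015,
        }

        # Assumption: independent components combine in quadrature.
        total = math.sqrt(sum(v ** 2 for v in components.values()))
        print(f"total theoretical error ~ {total:.3f}%")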

    Computer science and technology : historiography III (extras)

    Operating systems before or outside of modern BSD, GNU and Linux

    Dr. John Russell interviewed by William Wakeman and Kate Cronin

    Dr. John Russell was interviewed on April 1, 2004 by William Wakeman and Kate Cronin as part of their History 210 class project. Dr. Russell was a faculty member in Chemistry from 1956 to 1992.

    Exploring inconsistencies in genome-wide protein function annotations: a machine learning approach

    Background: Incorrectly annotated sequence data are becoming more commonplace as databases increasingly rely on automated techniques for annotation. Hence, there is an urgent need for computational methods for checking the consistency of such annotations against independent sources of evidence and detecting potential annotation errors. We show how a machine learning approach designed to automatically predict a protein's Gene Ontology (GO) functional class can be employed to identify potential gene annotation errors. Results: In a set of 211 previously annotated mouse protein kinases, we found that 201 of the GO annotations returned by AmiGO appear to be inconsistent with the UniProt functions assigned to their human counterparts. In contrast, 97% of the predicted annotations generated using a machine learning approach were consistent with the UniProt annotations of the human counterparts, as well as with available annotations for these mouse protein kinases in the Mouse Kinome database. Conclusion: We conjecture that most of our predicted annotations are, therefore, correct and suggest that the machine learning approach developed here could be routinely used to detect potential errors in GO annotations generated by high-throughput gene annotation projects. Editors' Note: Authors from the original publication (Okazaki et al.: Nature 2002, 420:563-73) have provided their response to Andorf et al., directly following the correspondence.
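
    A minimal sketch of the consistency check described above (the feature vectors, labels, and the k-nearest-neighbour classifier are illustrative assumptions, not the authors' actual model): train a predictor of GO classes on proteins assumed to be reliably annotated, then flag proteins whose recorded annotation disagrees with the prediction.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)

        # Hypothetical data: one feature vector per protein (e.g. derived
        # from sequence features) and the GO class recorded in the database.
        X = rng.normal(size=(211, 32))            # 211 proteins, 32 features
        db_labels = rng.integers(0, 5, size=211)  # recorded GO class IDs

        # Train on proteins assumed to be reliably annotated (first 150 here).
        clf = KNeighborsClassifier(n_neighbors=5).fit(X[:150], db_labels[:150])

        # Disagreements between the prediction and the recorded annotation
        # are reported as candidate annotation errors for manual review.
        predicted = clf.predict(X[150:])
        candidates = np.flatnonzero(predicted != db_labels[150:]) + 150
        print("potentially inconsistent annotations:", candidates)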

    An Automated Vulnerability Detection Framework for Smart Contracts

    With the increasing adoption of blockchain technology for decentralized solutions to various problems, smart contracts have become so popular that billions of US dollars are currently exchanged through them every day. Meanwhile, various vulnerabilities in smart contracts have been exploited by attackers to steal cryptocurrencies worth millions of dollars. Automatic detection of smart contract vulnerabilities is therefore an essential research problem. Existing solutions to this problem largely rely on human experts to define features or rules for detecting vulnerabilities. However, this often causes many vulnerabilities to be missed, and such solutions are inefficient at detecting new vulnerabilities. In this study, to overcome these challenges, we propose a framework to automatically detect vulnerabilities in smart contracts on the blockchain. More specifically, we first use novel feature vector generation techniques on smart contract bytecode, since the source code of smart contracts is rarely publicly available. Next, the collected vectors are fed into our novel metric learning-based deep neural network (DNN) to obtain the detection result. We conduct comprehensive experiments on large-scale benchmarks, and the quantitative results demonstrate the effectiveness and efficiency of our approach.
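
    A minimal sketch of the two stages described above (the opcode-histogram features and the tiny triplet-loss network are illustrative assumptions, not the paper's actual feature generation or DNN architecture): embed contracts from bytecode-derived vectors with a metric-learning objective, then classify a new contract by comparing its embedding to known examples.

        import torch
        import torch.nn as nn

        # Hypothetical feature generation: a normalized histogram of EVM
        # opcode byte values, computed directly from the contract bytecode.
        def opcode_histogram(bytecode: bytes) -> torch.Tensor:
            hist = torch.zeros(256)
            for b in bytecode:
                hist[b] += 1
            return hist / max(len(bytecode), 1)

        # Small embedding network trained with a triplet (metric-learning)
        # loss so that similar contracts end up close in embedding space.
        encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16))
        loss_fn = nn.TripletMarginLoss(margin=1.0)
        opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

        # One dummy training step on random anchor/positive/negative bytecode.
        anchor = opcode_histogram(bytes([0x60, 0x60, 0x52, 0x60])).unsqueeze(0)
        positive = opcode_histogram(bytes([0x60, 0x52, 0x60, 0x60])).unsqueeze(0)
        negative = opcode_histogram(bytes([0xF1, 0xF1, 0x3D, 0x3D])).unsqueeze(0)

        loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
        opt.zero_grad()
        loss.backward()
        opt.step()

        # At detection time, a new contract would be embedded the same way
        # and labelled by nearest-neighbour comparison with known vulnerable
        # and benign contracts in the embedding space.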

    Researches on Automatic Techniques for Specification-Based Testing and Fault Localization

    Existing specification-based testing techniques (SBT) have difficulty generating an appropriate test suite without knowledge of the code structure to trigger the different kinds of unintended behaviours hidden in programs. Symbolic execution, a powerful technique for automating software testing, instead takes advantage of the internal code design to detect many types of errors, such as out-of-memory errors and assertion violations. However, it can encounter a severe path explosion problem during exhaustive test data generation. Moreover, by relying only on assertions, it may miss some faulty paths because it does not check the functional correctness of each path in depth. To address these problems, this research proposes a specification-based incremental testing method with symbolic execution, called SIT-SE, which provides a much more rigorous way to automatically check the functional correctness of all discovered program paths within a limited time. In this method, we introduce theorems instead of assertions for checking path correctness, and we describe a Branch Sequence Coverage (BSC) algorithm together with checking levels to guide a moderate path exploration. The proposed method carefully treats the relationship between a path condition and the specification in a theorem to reduce monotonous path exploration, whereas traditional symbolic testing methods use assertions that are not sufficient to judge the correctness of a path during long and tedious path exploration. Moreover, we present a fault localization strategy called TRIACFL, supported by SIT-SE, which gives useful hints for pinpointing faults within a small set of statements. To make our testing methodology more useful in practice, we also describe a test data generation method that integrates the formal specification with a genetic algorithm as a supplement to SIT-SE, for cases where some code is not available to testers. We conduct two experiments with the proposed methods, and the results demonstrate that these methods together facilitate effective automatic bug detection.

    There are three main contributions in this work. First, we propose SIT-SE, a method that provides a systematic way to automatically verify the correctness of all representative program paths by integrating symbolic execution and formal specification. Second, we present a fault localization method built on SIT-SE, namely TRIACFL, that provides useful clues to the locations of real faults within a small set of statements in programs. Third, we propose a test data generation method using the formal specification and a genetic algorithm (GA) to cope with situations where SIT-SE is not applicable.

    Doctor of Science, Hosei University
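
    A minimal sketch of the core idea of checking a path against the specification rather than against ad-hoc assertions (the path condition, path effect, and postcondition below are hypothetical, and the z3 SMT solver is used only for illustration, not as the tool prescribed by the thesis): a path is reported as faulty if its path condition and effect are satisfiable together with the negation of the specified postcondition.

        from z3 import And, Ints, Not, Solver, sat

        x, y = Ints("x y")

        # Hypothetical program path: when x > 0, the code computes y = x - 1.
        path_condition = x > 0
        path_effect = y == x - 1

        # Hypothetical postcondition from the specification: y must be positive.
        postcondition = y > 0

        # The path is faulty if some input follows the path yet violates the
        # postcondition (here x == 1 gives y == 0, a counterexample).
        s = Solver()
        s.add(And(path_condition, path_effect, Not(postcondition)))
        if s.check() == sat:
            print("faulty path, counterexample:", s.model())
        else:
            print("path conforms to the specification")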