    Scaling Bounded Model Checking By Transforming Programs With Arrays

    Bounded Model Checking is one of the most successful techniques for finding bugs in programs. However, model checkers are resource hungry and are often unable to verify programs with loops iterating over large arrays. We present a transformation that enables bounded model checkers to verify a certain class of array properties. Our technique transforms an array-manipulating (ANSI-C) program into an array-free and loop-free (ANSI-C) program, thereby reducing the resource requirements of a model checker significantly. Model checking of the transformed program using an off-the-shelf bounded model checker simulates the loop iterations efficiently. The transformed program is a sound abstraction of the original program and is also precise in a large number of cases; we formally characterize the class of programs for which it is guaranteed to be precise. We demonstrate the applicability and usefulness of our technique on both industry code and academic benchmarks.
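    To make the idea concrete, here is a minimal sketch of the underlying intuition, not the paper's actual algorithm, using the z3 SMT solver's Python bindings: rather than unrolling every iteration of an array-initializing loop, a single nondeterministically chosen iteration is checked, which suffices for properties that hold uniformly across iterations. The loop body and property below are illustrative assumptions.

        # pip install z3-solver
        # Illustrative program: for (i = 0; i < N; i++) a[i] = 2*i;
        # Property to check: every element of a is even.
        from z3 import Int, Solver, sat

        i = Int("i")    # nondeterministic index: one arbitrary loop iteration
        N = Int("N")    # symbolic loop/array bound
        a_i = 2 * i     # value the loop body writes to a[i]

        s = Solver()
        s.add(0 <= i, i < N)    # i is a valid iteration
        s.add(a_i % 2 != 0)     # negation of the property: a[i] is odd

        # "unsat" means no iteration can violate the property, so it holds
        # for the whole loop without modeling the array or unrolling it.
        if s.check() == sat:
            print("potential violation:", s.model())
        else:
            print("property holds for every iteration")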

    Using job-title-based physical exposures from O*NET in an epidemiological study of carpal tunnel syndrome

    OBJECTIVE: We studied associations between job-title-based measures of force and repetition and incident carpal tunnel syndrome (CTS). BACKGROUND: Job exposure matrices (JEMs) are not commonly used in studies of work-related upper extremity disorders. METHODS: We enrolled newly hired workers into a prospective cohort study. We assigned a Standard Occupational Classification (SOC) code to each job held and extracted physical work exposure variables from the Occupational Information Network (O*NET). The CTS case definition required both characteristic symptoms and abnormal median nerve conduction. RESULTS: 751 (67.8%) of 1,107 workers completed follow-up evaluations. Thirty-one subjects (4.4%) developed CTS during an average of 3.3 years of follow-up. Repetitive Motion, Static Strength, and Dynamic Strength from the most recent job held were all significant predictors of CTS when included individually as physical exposures in models adjusting for age, gender, and BMI. Similar results were found using time-weighted exposure across all jobs held during the study. Repetitive Motion, Static Strength, and Dynamic Strength were correlated, precluding meaningful analysis of their independent effects. CONCLUSION: This study found strong relationships between workplace physical exposures assessed via a JEM and CTS, after adjusting for age, gender, and BMI. Although job-title-based exposures are likely to involve significant exposure misclassification, they can be useful for large population studies where more precise exposure data are not available. APPLICATION: JEMs can serve as a measure of workplace physical exposures in some studies of musculoskeletal disorders.
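    As a rough illustration of the job-exposure-matrix linkage described above, the sketch below joins an SOC-coded job history to O*NET physical-exposure variables and computes a time-weighted exposure per worker. The file and column names are hypothetical, not the study's actual data.

        # Hypothetical inputs:
        #   job_history.csv:    worker_id, soc_code, months_held
        #   onet_physical.csv:  soc_code, repetitive_motion,
        #                       static_strength, dynamic_strength
        import pandas as pd

        jobs = pd.read_csv("job_history.csv")
        onet = pd.read_csv("onet_physical.csv")

        # Assign O*NET exposures to each job via its SOC code (the JEM step).
        merged = jobs.merge(onet, on="soc_code", how="left")

        # Time-weighted average exposure across all jobs held, per worker.
        cols = ["repetitive_motion", "static_strength", "dynamic_strength"]
        weighted = merged.groupby("worker_id").apply(
            lambda g: g[cols].mul(g["months_held"], axis=0).sum()
                      / g["months_held"].sum()
        )
        print(weighted.head())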

    From opt-in to obligation?: Examining the regulation of globally operating tech companies through alternative regulatory instruments from a material and territorial viewpoint

    Modern society’s ever-increasing reliance on technology raises complex legal challenges. In the search for an efficient and effective regulatory response, more and more authorities – in particular the European Union – are relying on alternative regulatory instruments (ARIs) when engaging big tech companies. Materially, this is a natural fit: the tech industry is a complex and rapidly evolving sector, and – unlike the rigid classic legislative process – ARIs allow for meaningful ex ante anticipatory constructions and ex post enforcement thanks to their unique flexibility. From a territorial point of view, however, several complications arise. Although the use of codes of conduct to regulate transnational private actors has a rich history, the way in which such codes are set out under Articles 40 and 41 of the EU’s GDPR implies a ‘hardening’ of these soft-law instruments that has repercussions for their relationship to the principles of territorial jurisdiction. This contribution serves as a first step toward further research into the relationship between codes of conduct, the regulation of the tech industry, and the territorial aspects related thereto.

    Differentially Testing Soundness and Precision of Program Analyzers

    In the last decades, numerous program analyzers have been developed in both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing the soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
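    A minimal sketch of what such a differential-testing driver might look like is given below; the analyzer commands, output strings, and verdict protocol are hypothetical placeholders rather than the paper's actual tooling.

        import subprocess

        # Hypothetical analyzers, each mapping a program to a verdict.
        ANALYZERS = {
            "analyzerA": ["analyzerA", "--check"],
            "analyzerB": ["analyzerB", "--verify"],
        }

        def run_analyzer(cmd, program):
            """Run one analyzer and normalize its (hypothetical) output."""
            out = subprocess.run(cmd + [program], capture_output=True, text=True)
            if "VERIFIED" in out.stdout:
                return "safe"
            if "VIOLATION" in out.stdout:
                return "unsafe"
            return "unknown"

        def differential_test(program, ground_truth):
            """Flag disagreements with the known verdict of a generated
            benchmark: calling an unsafe program safe is a soundness issue;
            failing to verify a safe one suggests imprecision."""
            for name, cmd in ANALYZERS.items():
                verdict = run_analyzer(cmd, program)
                if ground_truth == "unsafe" and verdict == "safe":
                    print(f"{name}: soundness issue on {program}")
                elif ground_truth == "safe" and verdict != "safe":
                    print(f"{name}: possible precision issue on {program}")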

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of India's more than one billion citizens.

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.

    A survey of software development practices in the New Zealand software industry

    We report on the software development techniques used in the New Zealand software industry, paying particular attention to requirements gathering. We surveyed a selection of software companies with a general questionnaire and then conducted in-depth interviews with four companies. Our results show a wide variety in the kinds of companies undertaking software development, employing a wide range of software development techniques. Although our data are not sufficiently detailed to draw statistically significant conclusions, it appears that larger software development groups typically have better-defined software development processes, spend proportionally more time on requirements gathering, and follow more rigorous testing regimes.

    zfit: scalable pythonic fitting

    Statistical modeling is a key element in many scientific fields, and especially in High-Energy Physics (HEP) analysis. The standard framework for this task in HEP is the C++ ROOT/RooFit toolkit, whose Python bindings are only loosely integrated into the scientific Python ecosystem. In this paper, zfit, a new alternative to RooFit written in pure Python, is presented. Most of all, zfit provides a well-defined high-level API and workflow for advanced model building and fitting, together with an implementation on top of TensorFlow, allowing transparent usage of CPUs and GPUs. It is designed to be extendable in a very simple fashion, allowing cutting-edge developments from the scientific Python ecosystem to be used transparently. The main features of zfit are introduced, and its extension to data analysis, especially in the context of HEP experiments, is discussed.
    Comment: 12 pages, 2 figures
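    A short usage sketch of the workflow described above, based on zfit's documented high-level API (exact signatures may differ between versions): define an observable space and parameters, build a PDF, construct a loss, and minimize it.

        import numpy as np
        import zfit

        obs = zfit.Space("x", limits=(-10, 10))    # observable space

        mu = zfit.Parameter("mu", 1.0, -5, 5)      # free fit parameters
        sigma = zfit.Parameter("sigma", 1.0, 0.1, 10)
        gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)

        # Toy data; zfit accepts numpy arrays directly.
        data = zfit.Data.from_numpy(
            obs=obs, array=np.random.normal(1.2, 1.4, size=10_000)
        )

        nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
        minimizer = zfit.minimize.Minuit()   # gradients provided via TensorFlow
        result = minimizer.minimize(nll)
        print(result.params)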

    TRACEABILITY IN THE U.S. FOOD SUPPLY: ECONOMIC THEORY AND INDUSTRY STUDIES

    This investigation into the traceability baseline in the United States finds that private-sector food firms have developed a substantial capacity to trace. Traceability systems are a tool to help firms manage the flow of inputs and products to improve efficiency, product differentiation, food safety, and product quality. Firms balance the private costs and benefits of traceability to determine the efficient level of traceability. In cases of market failure, where the private-sector supply of traceability is not socially optimal, the private sector has developed a number of mechanisms to correct the problem, including contracting, third-party safety/quality audits, and industry-maintained standards. The best-targeted government policies for strengthening firms' incentives to invest in traceability are aimed at ensuring that unsafe or falsely advertised foods are quickly removed from the system, while allowing firms flexibility in how they do so. Possible policy tools include timed recall standards, increased penalties for distribution of unsafe foods, and increased foodborne-illness surveillance.
    Keywords: traceability, tracking, traceback, tracing, recall, supply-side management, food safety, product differentiation, Food Consumption/Nutrition/Food Safety, Industrial Organization