
    Optimizing compilation with preservation of structural code coverage metrics to support software testing

    Code-coverage-based testing is a widely used testing strategy that aims to provide a meaningful decision criterion for the adequacy of a test suite. Code-coverage-based testing is also mandated for the development of safety-critical applications; for example, the DO-178B document requires the application of modified condition/decision coverage (MC/DC). One critical issue of code-coverage-based testing is that structural code-coverage criteria are typically applied to source code, whereas the generated machine code may have a different code structure because of code optimizations performed by a compiler. In this work, we present the automatic calculation of coverage profiles describing which structural code-coverage criteria are preserved by which code optimization, independently of the concrete test suite. These coverage profiles make it easy to extend a compiler so that it preserves any given code-coverage criterion by enabling only those code optimizations that preserve it. Furthermore, we describe the integration of these coverage profiles into the GCC compiler. With these coverage profiles, we answer the question of how much code optimization is possible without compromising the error-detection likelihood of a given test suite. Experimental results show that the performance cost of preserving structural code coverage in GCC is rather low.
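    To make the idea concrete, here is a minimal sketch of what a coverage profile could look like as a data structure: a map from each optimization pass to the set of coverage criteria it preserves, which a compiler driver could query to enable only the safe passes. The pass and criterion names below are hypothetical illustrations, not GCC's actual pass names or the paper's measured profiles.

```python
# Hypothetical coverage profile: which structural coverage criteria each
# optimization pass preserves. Names are illustrative only.
COVERAGE_PROFILE = {
    "constant-folding":       {"statement", "branch", "mcdc"},
    "dead-code-elimination":  {"statement"},
    "loop-unrolling":         {"statement", "branch"},
    "branch-merging":         set(),  # restructures control flow; preserves nothing
}

def passes_preserving(criterion):
    """Optimizations that can be enabled without breaking `criterion`."""
    return [name for name, kept in COVERAGE_PROFILE.items() if criterion in kept]

print(passes_preserving("mcdc"))    # ['constant-folding']
print(passes_preserving("branch"))  # ['constant-folding', 'loop-unrolling']
```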

    FORTEST: Formal methods and testing

    Formal methods have traditionally been used for the specification and development of software. However, there are potential benefits for the testing stage as well. The panel session associated with this paper explores the usefulness, or otherwise, of formal methods in various contexts for improving software testing. A number of different possibilities for the use of formal methods are explored and questions raised. The contributors are all members of the UK FORTEST Network on formal methods and testing. Although the authors generally believe that formal methods are useful in aiding the testing process, this paper is intended to provoke discussion. Dissenters are encouraged to put their views to the panel or individually to the authors.

    Link Prediction by De-anonymization: How We Won the Kaggle Social Network Challenge

    This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo-sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine-learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated-annealing-based weighted graph-matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction (the latter is required to achieve good performance on the portion of the test set that was not de-anonymized), for example by training the predictor on the de-anonymized portion of the test set and combining probabilistic predictions from de-anonymization and link prediction.
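    As a rough illustration of the general technique named in the abstract (not the authors' actual algorithm), simulated-annealing graph matching perturbs a candidate node mapping by random swaps and accepts downhill moves with a temperature-dependent probability. The sketch below maximizes the number of edges of one graph whose images are edges of the other; all function and variable names are my own.

```python
import math
import random

def sa_graph_match(edges_a, edges_b, nodes_a, nodes_b, steps=20000, t0=1.0):
    """Toy simulated annealing for graph matching: look for a mapping from
    nodes_a to nodes_b (both lists) that maximizes the number of edges of
    graph A whose images are also edges of graph B. edges_a and edges_b
    are sets of node-pair tuples."""
    mapping = dict(zip(nodes_a, random.sample(nodes_b, len(nodes_a))))

    def score(m):
        return sum((m[u], m[v]) in edges_b or (m[v], m[u]) in edges_b
                   for u, v in edges_a)

    current = score(mapping)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9              # linear cooling
        u, v = random.sample(nodes_a, 2)
        mapping[u], mapping[v] = mapping[v], mapping[u]   # propose a swap
        new = score(mapping)
        # Always accept improvements; accept worse mappings with a
        # probability that shrinks as the temperature falls.
        if new >= current or random.random() < math.exp((new - current) / t):
            current = new
        else:
            mapping[u], mapping[v] = mapping[v], mapping[u]  # revert
    return mapping, current

# Tiny usage example: two isomorphic 4-node paths.
a_nodes, b_nodes = [0, 1, 2, 3], ["w", "x", "y", "z"]
a_edges = {(0, 1), (1, 2), (2, 3)}
b_edges = {("w", "x"), ("x", "y"), ("y", "z")}
print(sa_graph_match(a_edges, b_edges, a_nodes, b_nodes))
```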

    Proceedings from the Synthetic LBD International Seminar

    On May 9, 2017, we hosted a seminar to discuss with interested parties the conditions necessary to implement the SynLBD approach, with the goal of providing a straightforward toolkit for applying the same procedure to other data. These proceedings summarize the discussions during the workshop.

    PriPeARL: A Framework for Privacy-Preserving Analytics and Reporting at LinkedIn

    Preserving the privacy of users is a key requirement of web-scale analytics and reporting applications, and has witnessed a renewed focus in light of recent data breaches and new regulations such as GDPR. We focus on the problem of computing robust, reliable analytics in a privacy-preserving manner while satisfying product requirements. We present PriPeARL, a framework for privacy-preserving analytics and reporting, inspired by differential privacy. We describe the overall design and architecture and the key modeling components, focusing on the unique challenges associated with privacy, coverage, utility, and consistency. We perform an experimental study in the context of ads analytics and reporting at LinkedIn, thereby demonstrating the trade-offs between privacy and utility needs and the applicability of privacy-preserving mechanisms to real-world data. We also highlight the lessons learned from the production deployment of our system at LinkedIn.
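    The abstract says the framework is inspired by differential privacy; a standard building block in that space is the Laplace mechanism for count queries. The sketch below shows that textbook mechanism only, not PriPeARL's actual internals.

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponentials with mean `scale`
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count, epsilon):
    """A count query changes by at most 1 when one user is added or
    removed (sensitivity 1), so Laplace noise of scale 1/epsilon gives
    epsilon-differential privacy for this single query."""
    return true_count + laplace_noise(1.0 / epsilon)

print(private_count(1042, epsilon=0.5))  # e.g. 1039.7 (randomized)
```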

    Evolution of constrained layer damping using a cellular automaton algorithm

    Constrained layer damping (CLD) is a highly effective passive vibration control strategy if optimized adequately. The factors controlling CLD performance are well documented for the flexural modes of beams, but not for more complicated mode shapes or structures. The current paper introduces an approach that is suitable for locating CLD on any type of structure. It follows the cellular automaton (CA) principle and relies on finite element models to describe the vibration properties of the structure. The ability of the algorithm to reach the best solution is demonstrated by applying it to the bending and torsion modes of a plate. Configurations that give the most weight-efficient coverage for each type of mode are first obtained by adapting the existing 'optimum length' principle used for treated beams. Next, a CA algorithm is developed, which grows CLD patches one at a time on the surface of the plate according to a simple set of rules. The effectiveness of the algorithm is then assessed by comparing the generated configurations with the known optimum ones.
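    As a loose illustration of the cellular-automaton growth idea (a grid of cells, patches added one at a time by simple local rules), here is a toy sketch. The real method couples the rules to finite element vibration results; the `benefit` map and all names below are stand-ins I have invented for illustration.

```python
import random

def grow_patches(benefit, n_cells):
    """Toy CA-style growth: seed at the highest-benefit cell, then
    repeatedly add the neighbouring (frontier) cell with the largest
    benefit. `benefit` is a 2D list standing in for the per-cell damping
    payoff that a finite element modal analysis would supply."""
    rows, cols = len(benefit), len(benefit[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    placed = {max(cells, key=lambda rc: benefit[rc[0]][rc[1]])}
    while len(placed) < n_cells:
        frontier = {(r + dr, c + dc)
                    for r, c in placed
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols} - placed
        if not frontier:
            break
        placed.add(max(frontier, key=lambda rc: benefit[rc[0]][rc[1]]))
    return placed

# Usage: grow an 8-cell patch on a random 6x10 "benefit" map.
random.seed(0)
field = [[random.random() for _ in range(10)] for _ in range(6)]
print(sorted(grow_patches(field, 8)))
```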

    First-principles study of the atomic and electronic structure of the Si(111)-(5x2)-Au surface reconstruction

    We present a systematic study of the atomic and electronic structure of the Si(111)-(5x2)-Au reconstruction using first-principles electronic structure calculations based on density functional theory. We analyze the structural models proposed by Marks and Plass [Phys. Rev. Lett. 75, 2172 (1995)], those proposed recently by Erwin [Phys. Rev. Lett. 91, 206101 (2003)], and a completely new structure that was found during our structural optimizations. We study in detail the energetics and the structural and electronic properties of the different models. For the two most stable models, we also calculate the change in the surface energy as a function of the silicon adatom content for a realistic range of concentrations. Our new model is the most energetically favorable in the range of low adatom concentrations, while Erwin's "5x2" model becomes favorable at larger adatom concentrations. The crossing between the surface energies of the two structures is found close to 1/2 adatom per 5x2 unit cell, i.e., near the maximum adatom coverage observed in experiments. Both models, the new structure and Erwin's "5x2" model, seem to provide a good description of many of the available experimental data, particularly the angle-resolved photoemission measurements.
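    Comparisons of reconstructions with different adatom content are typically made via a grand-canonical surface energy; a generic form of that expression (the paper's exact convention is an assumption here) is:

```latex
% Generic grand-canonical surface energy for comparing reconstructions
% with different Si adatom content \theta; the paper's exact convention
% is not reproduced here.
\gamma(\theta) = \frac{1}{A}\left[\, E_{\mathrm{slab}}(\theta)
    - N_{\mathrm{Si}}(\theta)\,\mu_{\mathrm{Si}}
    - N_{\mathrm{Au}}\,\mu_{\mathrm{Au}} \,\right]
```

    At a given adatom content the model with the lower gamma is the stable one, so the crossing of the two gamma curves near 1/2 adatom per 5x2 cell marks where the preferred model switches.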