    Using Mutation Analysis to Evolve Subdomains for Random Testing

    Random testing is inexpensive, but it can also be inefficient. We apply mutation analysis to evolve efficient subdomains for the input parameters of eight benchmark programs that are frequently used in testing research. The evolved subdomains can be used for program analysis and regression testing. Test suites generated from the optimised subdomains outperform those generated from random subdomains at 10, 100 and 1000 test cases under uniform, Gaussian and exponential sampling. Our subdomains kill a large proportion of mutants for most of the programs we tested with just 10 test cases.
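
    A minimal sketch of the scoring step the abstract describes, assuming mutants and the original program are available as callables and a subdomain is a numeric interval (both my assumptions, not the authors' implementation):

        import random

        def mutation_score(subdomain, mutants, reference, n_tests=10):
            """Estimate a subdomain's fault-finding ability: sample test
            inputs uniformly from the interval and count the mutants whose
            output differs from the original program's (i.e. are killed)."""
            lo, hi = subdomain
            tests = [random.uniform(lo, hi) for _ in range(n_tests)]
            killed = sum(
                # A mutant is killed if any sampled test distinguishes it
                # from the reference implementation.
                any(mutant(t) != reference(t) for t in tests)
                for mutant in mutants
            )
            return killed / len(mutants)

    A score like this can then serve as the fitness an evolutionary search maximises when shaping the subdomains.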

    Subdomain-based test data generation

    Considerable effort is required to test software thoroughly. Even with automated test data generation tools, it is still necessary to evaluate the output of each test case and identify unexpected results. Manual effort can be reduced by restricting the range of inputs testers need to consider to regions that are more likely to reveal faults, reducing the number of test cases overall and therefore the effort needed to create oracles. This article describes and evaluates search-based techniques, using evolution strategies and subset selection, for identifying regions of the input domain (known as subdomains) such that test cases sampled at random from within these regions can be used efficiently to find faults. The fault-finding capability of each subdomain is evaluated using mutation analysis, a technique based on the faults programmers are likely to make. The resulting subdomains kill more mutants than random testing (up to six times as many in one case) with the same number of test cases or fewer. Optimised subdomains can be used as a starting point for program analysis and regression testing. They are easily comprehended by a human test engineer, so they may be used to provide information about the software under test and to design further highly efficient test suites.
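
    To make the search concrete, here is a minimal (1+1)-style evolution strategy over a single interval, in the spirit of the evolution strategies the article evaluates; the step size, acceptance rule and interval encoding are my assumptions:

        import random

        def evolve_subdomain(fitness, domain_lo, domain_hi,
                             generations=200, sigma=None):
            """Mutate the bounds of one interval [a, b] with Gaussian noise,
            keeping the child whenever its fitness (e.g. mutation score) is
            at least as good as the parent's."""
            if sigma is None:
                sigma = 0.05 * (domain_hi - domain_lo)
            a = random.uniform(domain_lo, domain_hi)
            b = random.uniform(a, domain_hi)
            best = fitness((a, b))
            for _ in range(generations):
                # Perturb both bounds, clamp them to the full domain, and
                # keep the interval well-formed.
                na = min(max(a + random.gauss(0, sigma), domain_lo), domain_hi)
                nb = min(max(b + random.gauss(0, sigma), domain_lo), domain_hi)
                na, nb = min(na, nb), max(na, nb)
                score = fitness((na, nb))
                if score >= best:
                    a, b, best = na, nb, score
            return (a, b), best

    Plugging in a mutation-analysis fitness such as the mutation_score sketch above (e.g. fitness = lambda sd: mutation_score(sd, mutants, reference)) gives the overall loop the abstract describes.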

    Learning Compositional Visual Concepts with Mutual Consistency

    Compositionality of semantic concepts in image synthesis and analysis is appealing because it can help in decomposing known data and generatively recomposing unknown data. For instance, we may learn concepts of changing illumination, geometry or albedo of a scene, and try to recombine them to generate physically meaningful but unseen data for training and testing. In practice, however, we often do not have samples from the joint concept space available: we may have data on illumination change in one data set and on geometric change in another, without complete overlap. We pose the following question: how can we learn two or more concepts jointly, with mutual consistency, from different data sets when we do not have samples from the full joint space? We present a novel answer in this paper based on cyclic consistency over multiple concepts, represented individually by generative adversarial networks (GANs). Our method, ConceptGAN, can be understood as a drop-in for data augmentation to improve resilience in real-world applications. Qualitative and quantitative evaluations demonstrate its efficacy in generating semantically meaningful images, as well as one-shot face verification as an example application.
    Comment: 10 pages, 8 figures, 4 tables, CVPR 201
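
    A compact sketch of the cyclic-consistency idea for two concepts; the add/remove generator pairing, the L1 reconstruction terms, and the omission of the per-GAN adversarial losses are my simplifications, not the paper's exact objective:

        import torch.nn.functional as F

        def concept_cycle_loss(x, add_a, remove_a, add_b, remove_b):
            """Cycle terms over two concept generators: traversing a concept
            and back should reconstruct the input, and applying the two
            concepts in either order should agree (mutual consistency)."""
            cyc_a = F.l1_loss(remove_a(add_a(x)), x)  # x -> +a -> -a -> x
            cyc_b = F.l1_loss(remove_b(add_b(x)), x)  # x -> +b -> -b -> x
            # Cross-concept consistency: +a then +b vs. +b then +a.
            commute = F.l1_loss(add_b(add_a(x)), add_a(add_b(x)))
            return cyc_a + cyc_b + commute

    Terms like these let the two generators be trained jointly even when no single data set covers the full joint concept space.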

    Preconditioning of weighted H(div)-norm and applications to numerical simulation of highly heterogeneous media

    In this paper we propose and analyze a preconditioner for a system arising from a finite element approximation of second order elliptic problems describing processes in highly heterogeneous media. Our approach combines multilevel methods with the recently proposed preconditioner of J. Kraus based on additive Schur complement approximation (see [8]). The main results are the design, and a theoretical and numerical justification, of an iterative method for such problems that is robust with respect to the contrast of the media, defined as the ratio between the maximum and minimum values of the coefficient (related to the permeability/conductivity).
    Comment: 28 pages
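
    In symbols, the robustness claim reads roughly as follows; the coefficient k(x), the domain \Omega, and the names A (system matrix) and B (preconditioner) are notation assumed here for illustration:

        % contrast of the medium: ratio of extreme coefficient values
        \kappa_{\mathrm{contrast}} = \frac{\max_{x \in \Omega} k(x)}{\min_{x \in \Omega} k(x)},
        \qquad
        \operatorname{cond}\bigl(B^{-1} A\bigr) \le C,
        \quad C \text{ independent of } \kappa_{\mathrm{contrast}},

    so iteration counts of the preconditioned solver do not grow as the media become more heterogeneous.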

    CharBot: A Simple and Effective Method for Evading DGA Classifiers

    Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names that can be used for command and control (C&C) purposes. Approaches based on machine learning have recently been developed to detect generated domain names automatically in real time. In this work, we present a novel DGA called CharBot that is capable of producing large numbers of unregistered domain names not detected by state-of-the-art real-time DGA classifiers, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach). CharBot is very simple and effective, and requires no knowledge of the targeted DGA classifiers. We show that retraining the classifiers on CharBot samples is not a viable defense strategy. We believe these findings show that DGA classifiers are inherently vulnerable to adversarial attacks if they rely only on the domain name string to make a decision; designing a robust DGA classifier may therefore require additional information besides the domain name alone. To the best of our knowledge, CharBot is the simplest and most efficient black-box adversarial attack against DGA classifiers proposed to date.
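
    The abstract stresses that CharBot is simple and needs no knowledge of the target classifier; one plausible reading is a character-level perturbation of benign domain names. The concrete rule below (replace a couple of characters in a dictionary domain) is a hypothetical sketch, not the paper's published algorithm:

        import random
        import string

        # Characters legal inside a DNS label (hyphen only in the interior).
        ALPHABET = string.ascii_lowercase + string.digits + "-"

        def perturb_domain(benign_domain, n_changes=2):
            """Hypothetical CharBot-style generator: replace a few characters
            of a benign second-level domain, producing a string that stays
            close to the benign character distribution."""
            label, _, tld = benign_domain.partition(".")
            chars = list(label)
            positions = random.sample(range(len(chars)),
                                      k=min(n_changes, len(chars)))
            for pos in positions:
                # Disallow hyphens at the label's first/last position.
                pool = ALPHABET if 0 < pos < len(chars) - 1 else ALPHABET[:-1]
                chars[pos] = random.choice(pool.replace(chars[pos], ""))
            return "".join(chars) + "." + tld

        # Example: perturb_domain("wikipedia.org") might yield "wikiped1a.org".

    Because such names sit close to benign strings, a classifier that sees only the domain string has little signal to separate them, which is the vulnerability the authors highlight.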