
    Badger: Complexity Analysis with Fuzzing and Symbolic Execution

    Hybrid testing approaches that combine fuzz testing and symbolic execution have shown promising results in achieving high code coverage and uncovering subtle errors and vulnerabilities in a variety of software applications. In this paper we describe Badger, a new hybrid approach for complexity analysis, with the goal of discovering vulnerabilities that occur when the worst-case time or space complexity of an application is significantly higher than the average case. Badger uses fuzz testing to generate a diverse set of inputs that aim to increase not only coverage but also a resource-related cost associated with each path. Since fuzzing may fail to execute deep program paths due to its limited knowledge of the conditions that influence these paths, we complement the analysis with symbolic execution, which is also customized to search for paths that increase the resource-related cost. Symbolic execution is particularly good at generating inputs that satisfy various program conditions but by itself suffers from path explosion. Badger therefore uses fuzzing and symbolic execution in tandem, to leverage their benefits and overcome their weaknesses. We implemented our approach for the analysis of Java programs, based on Kelinci and Symbolic PathFinder. We evaluated Badger on Java applications, showing that our approach is significantly faster in generating worst-case executions than fuzzing or symbolic execution on their own.
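    As a rough illustration of the cost-guided search described above, the sketch below (purely hypothetical: a toy costOf metric and a stubbed symbolicStep stand in for Badger's instrumented cost and its use of Symbolic PathFinder) alternates mutation-based fuzzing with an occasional symbolic step, keeping whichever input maximizes the observed cost.

```java
import java.util.*;

// Illustrative sketch only, not Badger's implementation: a cost-guided loop
// that keeps inputs which increase an observed resource cost, and periodically
// hands the current best input to a (stubbed) symbolic-execution step.
public class CostGuidedHybridSketch {

    // Hypothetical cost metric: steps of a toy insertion sort, standing in
    // for the instrumented resource-related cost of a program path.
    static long costOf(int[] input) {
        int[] a = input.clone();
        long steps = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; steps++; }
            a[j + 1] = key;
        }
        return steps;
    }

    // Random mutation, standing in for the fuzzer's input mutations.
    static int[] mutate(int[] input, Random rnd) {
        int[] copy = input.clone();
        copy[rnd.nextInt(copy.length)] = rnd.nextInt(100);
        return copy;
    }

    // Stub for the symbolic-execution step: Badger would run Symbolic
    // PathFinder to find inputs that increase the cost; here we simply
    // return an adversarial (descending) input of the same size.
    static int[] symbolicStep(int[] seed) {
        int[] worst = new int[seed.length];
        for (int i = 0; i < worst.length; i++) worst[i] = worst.length - i;
        return worst;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int[] best = rnd.ints(16, 0, 100).toArray();
        long bestCost = costOf(best);
        for (int iter = 1; iter <= 200; iter++) {
            int[] candidate = (iter % 50 == 0) ? symbolicStep(best) : mutate(best, rnd);
            long cost = costOf(candidate);
            if (cost > bestCost) { best = candidate; bestCost = cost; }
        }
        System.out.println("worst-case cost found: " + bestCost);
    }
}
```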

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs or junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    A Review on Web Application Testing and its Current Research Directions

    Testing is an important part of every software development process, one to which companies devote considerable time and effort. The rapid growth of web applications and their increasing economic significance in society have made web application testing an area of acute importance. Web applications also tend to have fast release cycles, which makes testing them very challenging. The main issues in testing are cost efficiency and bug-detection efficiency. Coverage-based testing is the process of ensuring that specific program elements are exercised; coverage measurement helps determine the "thoroughness" of the testing achieved. A wide range of tools, techniques, and frameworks has emerged to assess the quality of web applications. A comparative study of some of the prominent tools, techniques, and models for web application testing is presented. This work also highlights the current research directions of some of these web application testing techniques.
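    To make coverage measurement concrete, here is a minimal sketch (with hypothetical statement identifiers, not tied to any particular tool) of how statement coverage quantifies the "thoroughness" of a test suite.

```java
import java.util.*;

// Minimal sketch of statement-coverage measurement: given the executable
// statements of a unit under test and the subset actually executed by the
// test suite, coverage is the ratio of executed to total statements.
public class StatementCoverageSketch {
    public static void main(String[] args) {
        // Hypothetical statement identifiers, e.g. line numbers of the unit under test.
        Set<Integer> executableStatements = new HashSet<>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8));
        Set<Integer> executedByTests = new HashSet<>(Arrays.asList(1, 2, 3, 5, 8));

        double coverage = 100.0 * executedByTests.size() / executableStatements.size();
        System.out.printf("statement coverage: %.1f%%%n", coverage); // prints 62.5%
    }
}
```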

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply optimizations). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and therefore cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification among different approaches and, finally, the influential papers of the field.
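    To illustrate the optimization-selection problem, the sketch below (with an entirely simulated runtime function standing in for real compile-and-run measurements, and made-up flag names) exhaustively enumerates a tiny flag space; machine learning-based approaches become relevant precisely when the space is far too large for such enumeration.

```java
import java.util.*;

// Hedged sketch of optimization selection: search over subsets of compiler
// flags for the combination with the lowest (simulated) runtime. Real
// autotuners replace simulatedRuntime with actual measurements and often
// learn a model from program features to avoid exhaustive search.
public class FlagSelectionSketch {
    static final String[] FLAGS = {"inline", "unroll", "vectorize", "licm"};

    // Purely illustrative cost model, standing in for a measurement.
    static double simulatedRuntime(boolean[] on) {
        double t = 100.0;
        if (on[0]) t -= 8;                    // inlining helps
        if (on[1]) t -= on[2] ? 12 : 4;       // unrolling helps more when vectorized
        if (on[2]) t -= 10;
        if (on[3]) t -= 5;
        if (on[0] && on[1]) t += 6;           // interaction: code growth hurts
        return t;
    }

    public static void main(String[] args) {
        boolean[] best = null;
        double bestTime = Double.MAX_VALUE;
        // 4 flags -> only 16 configurations, so enumeration is feasible here.
        for (int mask = 0; mask < (1 << FLAGS.length); mask++) {
            boolean[] cfg = new boolean[FLAGS.length];
            for (int i = 0; i < FLAGS.length; i++) cfg[i] = ((mask >> i) & 1) == 1;
            double t = simulatedRuntime(cfg);
            if (t < bestTime) { bestTime = t; best = cfg; }
        }
        System.out.println("best flags: " + Arrays.toString(best) + ", runtime: " + bestTime);
    }
}
```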

    Assessing and improving quality of QVTo model transformations

    We investigate quality improvement in QVT operational mappings (QVTo) model transformations, one of the languages defined in the OMG standard on model-to-model transformations. Two research questions are addressed. First, how can we assess the quality of QVTo model transformations? Second, how can we develop higher-quality QVTo transformations? To address the first question, we take a bottom-up approach, starting with a broad exploratory study including QVTo expert interviews, a review of existing material, and introspection. We then formalize QVTo transformation quality into a QVTo quality model. The quality model is validated through a survey of a broader group of QVTo developers. We find that although many quality properties recognized as important for QVTo have counterparts in general-purpose languages, a number of them are specific to QVTo or model transformation languages. To address the second research question, we leverage the quality model to identify developer support tooling for QVTo. We then implement and evaluate one of the tools, namely a code test coverage tool. In designing the tool, we also identify code coverage criteria for QVTo model transformations. The primary contributions of this paper are a QVTo quality model relevant to QVTo practitioners and an open-source code coverage tool already usable by QVTo transformation developers. Secondary contributions are a bottom-up approach to building a quality model, a validation approach leveraging developer perceptions to evaluate quality properties, code test coverage criteria for QVTo, and numerous directions for future research and tooling related to QVTo quality.

    LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing

    LEGaTO is a three-year EU H2020 project that started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs, and dataflow engines. The aim is to attain an order of magnitude in energy savings from the edge to the converged cloud/HPC.