32 research outputs found

    Efficient Bounded Model Checking of Heap-Manipulating Programs using Tight Field Bounds

    Software model checkers can exhaustively explore the bounded program executions arising from various sources of nondeterminism. These tools provide statements that produce non-deterministic values for certain variables, forcing the model checker to consider all possible values for them during verification. While such statements offer an effective way of verifying programs that handle basic data types and simple structured types, they are inappropriate as a mechanism for nondeterministic generation of pointers, favoring instead the use of insertion routines to produce dynamic data structures when verifying, via model checking, programs that handle such data types. We present a technique that improves model checking of programs handling heap-allocated data types by taming the explosion of candidate structures that can be built when non-deterministically initializing heap object fields. The technique exploits precomputed relational bounds, which disregard values deemed invalid by the structure's type invariant, thus reducing the state space the model checker must explore. Precomputing the relational bounds is itself a challenging and costly task, for which we also present an efficient algorithm based on incremental SAT solving. We implement our approach on top of the CBMC bounded model checker and show that, for a number of data structure implementations, we can handle significantly larger input structures and detect faults that CBMC alone is unable to detect. (Sociedad Argentina de Informática e Investigación Operativa)
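The effect of tight field bounds can be illustrated with a toy enumeration (a hypothetical miniature for illustration only, not the paper's algorithm or its CBMC integration): for heap nodes forming a singly linked list, an acyclicity/contiguity invariant shrinks each node's feasible `next` targets from all n+1 possibilities to at most two, collapsing the candidate space the checker must consider.

```python
from itertools import product

def candidates(n, tight=False):
    # Feasible targets for each node's "next" field; None models null.
    if tight:
        # Tight bound: an acyclic, contiguous list invariant only
        # allows node i to point to null or to node i+1.
        bounds = [[None, i + 1] if i + 1 < n else [None] for i in range(n)]
    else:
        # Unconstrained nondeterminism: null or any of the n nodes.
        bounds = [[None] + list(range(n)) for _ in range(n)]
    return list(product(*bounds))

def is_valid_list(assign):
    # Valid iff following "next" from node 0 terminates without a cycle.
    seen, cur = set(), 0
    while cur is not None:
        if cur in seen:
            return False  # cycle detected
        seen.add(cur)
        cur = assign[cur]
    return True

raw = candidates(4)                 # 5^4 = 625 candidate heaps
tight = candidates(4, tight=True)   # 2^3 = 8 candidates, all valid
```

Even at four nodes the unconstrained space is nearly two orders of magnitude larger, and the gap widens exponentially with the bound, which is why pruning via precomputed bounds pays off.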

    Efficient Verification of Programs with Complex Data Structures Using SMT Solvers


    Deductive Synthesis and Repair

    In this thesis, we explore techniques for the development of recursive functional programs over unbounded domains that are proved correct according to their high-level specifications. We present algorithms for automatically synthesizing executable code, starting from the specification alone. We implement these algorithms in the Leon system. We augment relational specifications with a concise notation for symbolic tests, which are helpful for characterizing fragments of the functions' behavior. We build on our synthesis procedure to automatically repair invalid functions by generating alternative implementations. Our approach therefore formulates program repair in the framework of deductive synthesis and uses the existing program structure as a hint to guide synthesis. We rely on user-specified tests as well as automatically generated ones to localize the fault. This localization enables our procedure to repair functions that would otherwise be out of reach of our synthesizer, and ensures that most of the original behavior is preserved. We also investigate multiple ways of enabling Leon programs to interact with external, untrusted code. For that purpose, we introduce a precise inter-procedural effect analysis for arbitrary Scala programs with mutable state, dynamic object allocation, and dynamic dispatch. We analyzed the Scala standard library, containing 58,000 methods, and classified them into several categories according to their effects. Our analysis proves that over half of all methods are pure, identifies a number of conditionally pure methods, and computes summary graphs and regular expressions describing the side effects of non-pure methods. We implement the synthesis and repair algorithms within the Leon system and deploy them as part of a novel interactive development environment available as a web interface.
Our implementation is able to synthesize, within seconds, a number of useful recursive functions that manipulate unbounded numbers and data structures. Our repair procedure automatically locates various kinds of errors in recursive functions and fixes them by synthesizing alternative implementations.
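The idea of using the existing program structure as a hint, plus tests to validate candidates, can be sketched in miniature (a hypothetical toy, not Leon's deductive procedure or its Scala setting): mutate a small part of the faulty program and keep the first candidate that passes every test.

```python
import operator

# Toy test-driven repair: the original body "a if a < b else b" is kept
# as the structural hint; only the comparison operator is enumerated.

def faulty_max(a, b):
    return a if a < b else b  # bug: the comparison is inverted

# Input/output tests, standing in for user-specified and generated tests.
TESTS = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

CANDIDATES = [operator.lt, operator.le, operator.gt, operator.ge]

def repair():
    for cmp in CANDIDATES:
        candidate = lambda a, b, cmp=cmp: a if cmp(a, b) else b
        if all(candidate(*args) == expected for args, expected in TESTS):
            return candidate
    return None

fixed = repair()  # selects operator.gt, restoring max-like behavior
```

Restricting the search to mutations of the existing body is what keeps most of the original behavior intact, mirroring the repair-as-synthesis framing above.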

    Tools for efficient Deep Learning

    In the era of Deep Learning (DL), there is a fast-growing demand for building and deploying Deep Neural Networks (DNNs) on various platforms. This thesis proposes five tools that address the challenges of designing DNNs that are efficient in time, resources, and power consumption. We first present Aegis and SPGC, which address the challenges of improving the memory efficiency of DL training and inference. Aegis makes mixed precision training (MPT) more stable via layer-wise gradient scaling. Empirical experiments show that Aegis can improve MPT accuracy by up to 4%. SPGC focuses on structured pruning: replacing standard convolution with group convolution (GConv) to avoid irregular sparsity. SPGC formulates GConv pruning as a channel permutation problem and proposes a novel heuristic polynomial-time algorithm. Common DNNs pruned by SPGC achieve up to 1% higher accuracy than prior work. This thesis also addresses the challenges in the gap between DNN descriptions and executables, with Polygeist for software and POLSCA for hardware. Novel techniques, e.g. statement splitting and memory partitioning, are explored and used to extend polyhedral optimisation. Polygeist speeds up sequential and parallel software execution by 2.53 and 9.47 times, respectively, on Polybench/C. POLSCA achieves a 1.5 times speedup over hardware designs generated directly from high-level synthesis on Polybench/C. Moreover, this thesis presents Deacon, a framework that generates FPGA-based DNN accelerators with streaming architectures and advanced pipelining techniques, addressing the challenges posed by heterogeneous convolutions and residual connections. Deacon provides fine-grained pipelining, graph-level optimisation, and heuristic exploration by graph colouring. Compared with prior designs, Deacon improves resource/power consumption efficiency by 1.2x/3.5x for MobileNets and 1.0x/2.8x for SqueezeNets. All these tools are open source, and some have already gained public engagement.
We believe they can make efficient deep learning applications easier to build and deploy. (Open Access)
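The underflow problem that gradient scaling addresses can be shown with a crude numeric model (an illustrative sketch only, loosely in the spirit of Aegis; the thesis's actual per-layer mechanics differ): tiny fp32 gradients flush to zero when cast to fp16, but scaling them up before the cast and dividing the scale back out afterwards preserves them.

```python
# Smallest positive fp16 subnormal; values below it flush to zero
# in this crude model of the fp16 cast.
FP16_TINY = 2.0 ** -24

def cast_fp16(x):
    # Simplified underflow model of a float32 -> float16 cast.
    return 0.0 if abs(x) < FP16_TINY else x

def scaled_grads(grads, scale):
    # Scale up, cast to fp16, then unscale in full precision.
    return [cast_fp16(g * scale) / scale for g in grads]

layer_grads = [1e-9, 3e-8, 2e-3]           # one layer's gradients
lost = scaled_grads(layer_grads, 1.0)      # tiny gradients vanish
kept = scaled_grads(layer_grads, 2 ** 16)  # all gradients survive
```

Because the scale is a power of two, multiplying and dividing by it is exact in floating point, so the surviving gradients come back bit-identical; choosing the scale per layer (rather than one global scale) is what the layer-wise approach adds.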

    An Integrated Environment For Automated Benchmarking And Validation Of XML-Based Applications

    Testing is the dominant software verification technique used in industry; it is a critical and highly expensive process during software development. Along with the increase in software complexity, the costs of testing are rising rapidly. Faced with this problem, many researchers are working on automated testing, attempting to find methods that execute the testing process automatically and cut down its cost. Today's software systems are increasingly complicated: some are composed of several different components, and some projects even require different systems to work together and support each other. XML was developed to facilitate data exchange and enhance interoperability among software systems, and along with the development of XML technologies, XML-based systems are widely used in many domains. In this thesis we present a methodology, called XPT (XML-based Partition Testing), for testing XML-based applications automatically by deriving XML instances from an XML Schema automatically and systematically. The XPT methodology is inspired by the category-partition method, a well-known approach to black-box test generation. We follow a similar idea of applying partitioning to an XML Schema in order to generate a suite of conforming instances; in addition, since the number of generated instances soon becomes unmanageable, we introduce a set of heuristics for reducing the suite while optimizing XML Schema coverage. The aim of our research is not only to devise a technical method, but also to apply the XPT methodology in real applications. We have created a proof-of-concept tool, TAXI, which implements XPT. The tool has a graphical user interface that guides testers, and it can be customized for specific applications to build the test environment and automate the whole testing process.
The details of TAXI's design, and case studies using TAXI in different domains, are presented in this thesis. The case studies cover three test purposes. The first is functional correctness: we apply the methodology to XSLT testing, using TAXI to build an automatic environment for testing XSLT transformations. The second is robustness testing: we test a data transformation tool that maps and populates data from XML documents into an XML database. The third is performance testing: we show an XML benchmark that uses TAXI to benchmark XML-based applications.
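The partition-and-combine idea behind this kind of instance generation can be sketched as follows (a hypothetical miniature; the element names and partitions are invented, and XPT/TAXI work from real XML Schemas with reduction heuristics): each element gets a small set of representative and boundary values, and their cartesian product yields a suite of conforming instances.

```python
from itertools import product
import xml.etree.ElementTree as ET

# Invented partitions: representative/boundary values per element,
# standing in for categories derived from an XML Schema.
PARTITIONS = {
    "quantity": ["0", "1", "9999"],   # min, nominal, max
    "priority": ["low", "high"],
}

def generate_instances():
    # One XML instance per combination of partition values.
    names = list(PARTITIONS)
    suite = []
    for combo in product(*(PARTITIONS[n] for n in names)):
        order = ET.Element("order")
        for name, value in zip(names, combo):
            ET.SubElement(order, name).text = value
        suite.append(ET.tostring(order, encoding="unicode"))
    return suite

suite = generate_instances()  # 3 * 2 = 6 conforming instances
```

The product grows multiplicatively with each partitioned element, which is exactly why the abstract's reduction heuristics are needed once real schemas are involved.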

    Doctor of Philosophy

    In computer science, functional software testing is a method of ensuring that software gives the expected output on specific inputs. Software testing is conducted to ensure desired levels of quality in light of the uncertainty resulting from the complexity of software. Most of today's software is written by people, and software development is a creative activity. However, due to the complexity of computer systems and software development processes, this activity leads to a mismatch between the expected software functionality and the implemented one. If not addressed in a timely and proper manner, this mismatch can cause serious consequences for users of the software, such as security and privacy breaches, financial loss, and adverse effects on human health. Because of the manual effort involved, software testing is costly. Automatic software testing, performed without human intervention, is one way of addressing this issue. In this work, we build upon and extend several techniques for automatic software testing. The techniques do not require any guidance from the user. The goals achieved with these techniques are checking for yet-unknown errors, automatically testing object-oriented software, and detecting malicious software. To meet these goals, we explored several techniques and related challenges: automatic test case generation, runtime verification, dynamic symbolic execution, and the type and size of test inputs for efficient detection of malicious software via machine learning. Our work targets software written in the Java programming language, though the techniques are general and applicable to other languages. We performed an extensive evaluation on freely available Java software projects, a flight collision avoidance system, and thousands of applications for the Android operating system.
Evaluation results show to what extent dynamic symbolic execution is applicable to testing object-oriented software, demonstrate the correctness of the flight system on millions of automatically customized and generated test cases, and show that simple and relatively small inputs in random testing can lead to effective malicious software detection.
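The point about simple, small inputs being effective can be illustrated with a toy random-testing loop (a hypothetical sketch, far simpler than the dissertation's techniques): short random lists over a tiny value range quickly expose a bug that only manifests on inputs with non-adjacent repeats.

```python
import random

def buggy_dedup(xs):
    # Intended to remove all duplicates while preserving order,
    # but only compares against the previous element.
    out = []
    for x in xs:
        if not out or out[-1] != x:  # bug: ignores non-adjacent repeats
            out.append(x)
    return out

def find_counterexample(trials=1000, size=5, seed=0):
    # Random testing: small inputs, tiny value range, fixed seed.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 3) for _ in range(size)]
        out = buggy_dedup(xs)
        if len(out) != len(set(out)):  # property: no duplicates remain
            return xs
    return None

cex = find_counterexample()
```

Keeping the value range tiny makes collisions, and hence bug-revealing inputs, overwhelmingly likely within a few trials, echoing the result that relatively small inputs suffice for effective detection.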

    Develop a multi-functional green pervious concrete (MGPC) pavement with polycyclic aromatic hydrocarbons (PAHs) removal function.

    Polycyclic aromatic hydrocarbon (PAH) contamination induced by stormwater runoff increasingly imperils groundwater quality and the sustainable development of human society due to its potential carcinogenic risks. Pavement can be considered the first line of defense for removing contaminants from stormwater runoff, and new construction materials offering both stormwater quantity and quality control are in urgent demand for upgrading the existing pavement system. An innovative material called Multi-functional Green Pervious Concrete (MGPC) was developed in the Department of Civil and Environmental Engineering at the University of Louisville. This material uses organoclay as an amendment to enhance the PAH removal capacity of conventional pervious concrete. The objective of this study is to evaluate the potential implementation of MGPC as a pavement material with groundwater contamination remediation functions. The study was performed in five stages. First, the PAH remediation function of MGPC was tested by introducing organoclay [bis(hydrogenated tallow alkyl) dimethyl ammonium modified montmorillonite] into conventional pervious concrete. After testing and verification, the mix proportion of MGPC was designed to meet the compressive strength and hydraulic conductivity requirements of pervious concrete; a small amount of added organoclay was found not to adversely affect either property. The preliminary study of the PAH removal functions of MGPC was conducted in stage two: an isothermal batch sorption test quantified the sorption capacity of the organoclay-modified cement paste, and a column test investigated the transport mechanism and retardation behavior of PAHs in MGPC.
It was found that MGPC with a small addition of organoclay could substantially remove PAH contaminants, and that it has much stronger adsorption and retardation capacity than conventional pervious concrete. In stage three, a series of comprehensive laboratory-scale tests examined the effectiveness of MGPC pavement in removing stormwater-induced PAHs. The results indicated that the initial PAH concentrations and the flow rates affect the removal efficiency of MGPC, and that MGPC still maintained considerable sorption capacity after 50 PAH sorption and desorption cycles. In stage four, an ideal site under steady-state groundwater conditions was generated to simulate the long-term performance of MGPC on PAH removal using the finite element method. Laboratory experiments were used to determine the physicochemical parameters of MGPC, and three sorption isotherm models (linear, Freundlich, and Langmuir) were fitted to the sorption test data. The computer simulation revealed that MGPC had significant remediation efficiency for PAH contamination. Beyond the material properties of MGPC itself, the contaminant remediation efficiency was also found to be influenced by the permeability of the subbase and the initial concentration of PAHs, and the linear isotherm model was found to overestimate PAH removal efficiency for higher-concentration sources. In the fifth and final stage, a Pavement Environment and Performance Index (PEPI) was proposed to evaluate the environmental impacts of three different types of pavements (impervious concrete, conventional pervious concrete, and MGPC). Data from the experiments and the Environmental Footprint Database were used to calculate the PEPI.
Based on the Life Cycle Assessment (LCA) results, the MGPC pavement was found to be much more environmentally friendly, with relatively lower greenhouse gas emissions and energy consumption and better environmental performance compared with the other two types of pavements.
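The three sorption isotherm models named above have standard closed forms, sketched here with illustrative (not experimental) parameter values; q is the sorbed mass per unit sorbent and C the equilibrium solute concentration.

```python
def linear(C, Kd):
    # Linear isotherm: q = Kd * C (partition coefficient Kd).
    return Kd * C

def freundlich(C, Kf, n):
    # Freundlich isotherm: q = Kf * C^(1/n).
    return Kf * C ** (1.0 / n)

def langmuir(C, q_max, b):
    # Langmuir isotherm: q = q_max * b * C / (1 + b * C),
    # saturating at the monolayer capacity q_max.
    return q_max * b * C / (1.0 + b * C)
```

Because the Langmuir form saturates at q_max while the linear form grows without bound, the linear model predicts ever-increasing sorption at high concentrations, consistent with the observation above that it overestimates removal efficiency for higher-concentration sources.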

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 24th International Conference on Fundamental Approaches to Software Engineering, FASE 2021, which took place during March 27–April 1, 2021, and was held as part of the Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 16 full papers presented in this volume were carefully reviewed and selected from 52 submissions. The book also contains 4 Test-Comp contributions.