12 research outputs found

    A Failed Proof Can Yield a Useful Test

    A successful automated program proof is, in software verification, the ultimate triumph. In practice, however, the road to such success is paved with many failed proof attempts. Unlike a failed test, which provides concrete evidence of an actual bug in the program, a failed proof leaves the programmer in the dark. Can we instead learn something useful from it? The work reported here takes advantage of the rich internal information that some automatic provers collect about the program when attempting a proof. If the proof fails, the Proof2Test tool presented in this article uses the counterexample generated by the prover (specifically, the SMT solver underlying the proof environment Boogie, used in the AutoProof system to perform correctness proofs of contract-equipped Eiffel programs) to produce a failed test, which provides the programmer with immediately exploitable information to correct the program. The discussion presents the Proof2Test tool and demonstrates the application of the ideas and the tool to a collection of representative examples.
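
    A minimal sketch of the core idea, in Python rather than the Eiffel/Boogie toolchain the paper targets: the SMT solver's counterexample is a model assigning concrete values to the routine's arguments, and those values can be rendered directly as a test that replays the failure. The routine max2 and its model below are invented placeholders, not examples from the paper.

```python
# Hypothetical sketch: turn a prover counterexample (a model mapping
# argument names to concrete values) into the source of a failing test.

def counterexample_to_test(routine, model, postcondition):
    """Render a test that replays the counterexample found by the prover."""
    binds = "\n".join(f"    {name} = {value!r}" for name, value in model.items())
    call = f"{routine}({', '.join(model)})"
    return (
        f"def test_{routine}_counterexample():\n"
        f"    # Concrete values taken from the failed proof's SMT model.\n"
        f"{binds}\n"
        f"    result = {call}\n"
        f"    # Expected to FAIL, reproducing the bug behind the failed proof.\n"
        f"    assert {postcondition}\n"
    )

# Suppose the prover refuted the postcondition of max2 with a = 0, b = 1
# (e.g., against a buggy body that returns a unconditionally).
print(counterexample_to_test("max2", {"a": 0, "b": 1}, "result >= b"))
```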

    Intra-Class Testing of Abstract Class Features

    One of the characteristics of the increasingly widespread use of object-oriented libraries, and the resulting intensive use of inheritance, is the proliferation of dependencies on abstract classes. Such classes defer the implementation of some features and are typically used as a specification or design tool. However, since their features are not fully implemented, abstract classes cannot be instantiated and thus pose challenges for execution-based testing strategies. This paper presents a structured approach that supports the testing of features in abstract classes. Core to the approach is a series of static analysis steps that build a comprehensive view of the inter-class dependencies in the system under test. We then leverage this information to define a test order for the methods in an abstract class that minimizes the number of stubs required during testing and clearly identifies the required functionality of these stubs. Our approach is based on a comprehensive taxonomy of object-oriented classes that provides a framework for our analysis. First, we describe the algorithms to calculate the inter-class dependencies and the test order that minimizes stub creation. Then we give an overview of our tool, AbstractTestJ, which implements our approach by generating a test order for the methods in an abstract Java class. Finally, we harness this tool to provide an analysis of 12 substantial Java applications that demonstrates both the feasibility of our approach and the importance of this technique.
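
    The stub-minimizing test order can be illustrated as a topological sort over the dependency information that the static analysis produces. A minimal Python sketch (AbstractTestJ itself targets Java, and these feature names are hypothetical):

```python
# Sketch: order features so each is tested after the features it depends on,
# letting already-tested real code stand in where a stub would be needed.
from graphlib import TopologicalSorter  # Python 3.9+

def stub_minimizing_order(calls):
    """calls: dict mapping each feature to the features it depends on.
    A dependency cycle, which would force at least one stub no matter
    what, raises CycleError in this simplified sketch."""
    return list(TopologicalSorter(calls).static_order())

calls = {
    "template_method": {"step_a", "step_b"},  # concrete feature calling deferred steps
    "step_b": {"step_a"},
    "step_a": set(),
}
print(stub_minimizing_order(calls))  # ['step_a', 'step_b', 'template_method']
```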

    Reducing Software Testing Time with Combinatorial Testing and Test Automation

    The development of large software systems is a complex and error-prone process. Errors may occur at any stage of software development. These errors, sometimes referred to as bugs, can cause great losses in terms of both time and money if not identified and removed as early as possible. Testing a software product is costly, since it takes considerable time and requires exercising many combinations of its functions, integrity, performance, and so on; these combinations can be regarded as test cases. A company's goal is to reduce testing time so that it can save money and deliver the product to the customer faster. Testing time can be reduced in two main ways: first by reducing the number of test cases, and second by automating repeatedly tested areas. This paper discusses fundamentals of testing, such as the importance of and difference between verification and validation, testing throughout the software development life cycle, and testing methods, levels, and types. It then discusses the possibility of reducing the time spent on testing by reducing the number of test cases with combinatorial testing and by automating repeatedly tested areas with the Selenium tool. Finally, it sheds some light on a real-world test automation project with Selenium and two integrated development environments.
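
    The combinatorial part can be made concrete with a small greedy pairwise generator: rather than running the full cross product of parameter values, cover every pair of values at least once. This Python sketch is illustrative only (dedicated pairwise tools compute near-minimal suites far more efficiently), and each generated case could then parameterize an automated browser session.

```python
# Greedy pairwise (2-way combinatorial) test-case selection sketch.
from itertools import combinations, product

def pairwise_suite(params):
    """params: dict mapping parameter name -> list of values."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    candidates = [dict(zip(names, combo)) for combo in product(*params.values())]

    def pairs_of(case):
        return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

    suite = []
    while uncovered:  # greedily take the case covering the most new pairs
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

params = {"browser": ["chrome", "firefox"],
          "os": ["linux", "windows"],
          "locale": ["en", "de", "fr"]}
print(len(pairwise_suite(params)), "cases instead of", 2 * 2 * 3)  # 6 vs 12
```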

    A Testing Strategy for Abstract Classes [Draft]

    One of the characteristics of the increasingly widespread use of object-oriented libraries, and the resulting intensive use of inheritance, is the proliferation of dependencies on abstract classes. Since abstract classes cannot be instantiated, they cannot be tested in isolation using standard execution-based testing strategies. A standard approach to testing abstract classes is to instantiate a concrete descendant class and test the features that are inherited. This paper presents a structured approach that supports the testing of features in abstract classes, paying particular attention to ensuring that the features tested are those defined in the abstract class. Two empirical studies are performed on a suite of large Java programs and the results are presented. The first study analyzes the role of abstract classes from a testing perspective. The second study investigates the impact of the testing strategy on the programs in this suite to demonstrate its feasibility and to comment on the pragmatics of its use.
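
    A minimal sketch of the instantiation approach, using an invented abstract class: a test-only concrete descendant supplies just enough implementation that the feature being exercised is one defined in the abstract class itself rather than an overridden one.

```python
# Sketch: exercise a feature defined in an abstract class through a
# minimal, test-only concrete descendant. Class names are hypothetical.
import abc

class Buffer(abc.ABC):                        # abstract class under test
    @abc.abstractmethod
    def capacity(self) -> int: ...            # deferred feature

    def is_full(self, n_items: int) -> bool:  # concrete feature to test
        return n_items >= self.capacity()

class _MinimalBuffer(Buffer):                 # test-only descendant
    def capacity(self) -> int:                # stub satisfying the contract
        return 2

def test_is_full_defined_in_abstract_class():
    b = _MinimalBuffer()
    assert not b.is_full(1)
    assert b.is_full(2)

test_is_full_defined_in_abstract_class()
```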

    Extending an open source enterprise service bus for PostgreSQL statement transformation to enable cloud data access

    Cloud computing has enabled a new era in the IT industry, and many organizations are interested in moving their business operations to the Cloud. This can be realized by designing new applications that follow the prerequisites of the Cloud provider or by migrating existing applications to the Cloud. Each application follows a multi-layered architecture defined by its design approach. Application data is of utmost importance and is managed by the data layer, which is further divided into two sublayers: the Data Access Layer (DAL), which abstracts the data access functionality, and the Database Layer (DBL), which ensures data persistence and data manipulation. Application migration to the Cloud can be achieved by migrating all of its layers or only some of them; in many situations only the DBL is moved to the Cloud while the other layers remain on-premise. Ideally, the migration of the DBL should be transparent to the upper layers of the application, so that the effort and cost of the migration, especially concerning application refactoring, remain minimal. In this thesis, an open source Enterprise Service Bus (ESB), able to provide multi-tenant and transparent data access to the Cloud, is extended with PostgreSQL transformation functionality. Previously the ESB could support only MySQL source databases; after the integration of two new components, a PostgreSQL proxy and a PostgreSQL transformer, we provide support for PostgreSQL source databases and dialects. Furthermore, we validate and evaluate our approach with the TPC-H benchmark, in order to ensure results based on realistic SQL statements and appropriate example data. We show linear time complexity, O(n), for the developed PostgreSQL transformer.
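
    The flavor of such statement transformation can be sketched with two simplified rewrite rules; the thesis' transformer is an ESB component with a much more complete rule set, and both the rules and the query below (loosely styled after TPC-H) are illustrative only.

```python
# Sketch of SQL dialect transformation via pattern-based rewriting.
import re

RULES = [
    # PostgreSQL cast  expr::type  ->  standard CAST(expr AS type)
    (re.compile(r"(\w+)::(\w+)"), r"CAST(\1 AS \2)"),
    # PostgreSQL ILIKE -> portable LOWER(...) LIKE LOWER(...)
    (re.compile(r"(\S+)\s+ILIKE\s+(\S+)", re.IGNORECASE),
     r"LOWER(\1) LIKE LOWER(\2)"),
]

def transform(stmt: str) -> str:
    """Apply each rewrite rule across the statement; each pass is roughly
    linear in the statement length, in line with the O(n) result above."""
    for pattern, repl in RULES:
        stmt = pattern.sub(repl, stmt)
    return stmt

print(transform("SELECT p_name::text FROM part WHERE p_type ILIKE '%BRASS%'"))
# SELECT CAST(p_name AS text) FROM part WHERE LOWER(p_type) LIKE LOWER('%BRASS%')
```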

    Weighted Statistical Testing based on Active Learning and Formal Verification Techniques for Software Reliability Assessment

    This work developed an automatic approach to the assessment of software reliability that is both theoretically sound and practical. The developed approach extends and combines theoretically sound techniques in a novel manner to systematically reduce the overhead of reliability assessment.
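
    As a rough illustration of the weighted statistical testing ingredient (the paper's combination with active learning and formal verification goes well beyond this), test inputs can be drawn according to an assumed operational profile, so that the observed failure rate estimates reliability as the user would experience it. The profile and the toy system below are invented.

```python
# Sketch: operational-profile-weighted random testing for reliability.
import random

random.seed(0)
profile = {"login": 0.6, "search": 0.3, "admin": 0.1}  # assumed usage profile

def system_under_test(op: str) -> bool:
    """Toy stand-in: admin operations fail half the time."""
    return not (op == "admin" and random.random() < 0.5)

ops = random.choices(list(profile), weights=list(profile.values()), k=10_000)
failures = sum(not system_under_test(op) for op in ops)
print(f"estimated operational reliability: {1 - failures / len(ops):.4f}")
```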

    Profile- and Instrumentation-Driven Methods for Embedded Signal Processing

    Modern embedded systems for digital signal processing (DSP) run increasingly sophisticated applications that require expansive performance resources, while simultaneously requiring better power utilization to prolong battery life. Achieving such conflicting objectives requires innovative software/hardware design space exploration spanning a wide array of techniques and technologies that offer trade-offs among performance, cost, power utilization, and overall system design complexity. To save on non-recurring engineering (NRE) costs and to meet shorter time-to-market requirements, designers are increasingly using an iterative design cycle and adopting model-based computer-aided design (CAD) tools to facilitate analysis, debugging, profiling, and design optimization. In this dissertation, we present several profile- and instrumentation-based techniques that facilitate design and maintenance of embedded signal processing systems:

    1. We propose and develop a novel translation lookaside buffer (TLB) preloading technique. This technique, called context-aware TLB preloading (CTP), uses a synergistic relationship between (1) the compiler, for application-specific analysis of a task's context, and (2) the operating system (OS), for run-time introspection of the context and efficient identification of TLB entries for current and future usage. CTP works by (1) identifying application hotspots using compiler-enabled (or manual) profiling, and (2) exploiting well-understood memory access patterns, typical in signal processing applications, to preload the TLB at context switch time. The benefits of CTP in eliminating inter-task TLB interference and preemptively allocating TLB entries during context switch are demonstrated through extensive experimental results with signal processing kernels.

    2. We develop an instrumentation-driven approach to facilitate the conversion of legacy systems, not designed as dataflow-based applications, to dataflow semantics by automatically identifying the behavior of the core actors as instances of well-known dataflow models. This enables the application of powerful dataflow-based analysis and optimization methods to systems for which these methods have previously been unavailable. We introduce a generic method for instrumenting dataflow graphs that can be used to profile and analyze actors, and we use this instrumentation facility to instrument legacy designs being converted and then automatically detect the dataflow models of the core functions (a sketch of this model-detection idea follows this abstract). We also present an iterative actor partitioning process that can be used to partition complex actors into simpler entities that are more amenable to analysis. We demonstrate the utility of our proposed instrumentation-driven dataflow approach with several DSP-based case studies.

    3. We extend the instrumentation technique discussed in (2) to introduce a novel tool for model-based design validation called the dataflow validation framework (DVF). DVF addresses the problem of ensuring consistency between (1) dataflow properties that are declared or otherwise assumed as part of dataflow-based application models, and (2) the dataflow behavior that is exhibited by implementations derived from those models. The ability of DVF to identify disparities between an application's formal dataflow representation and its implementation is demonstrated through several signal processing application development case studies.
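
    The model-detection idea in (2) can be sketched under a strong simplification: treat an actor as a function from input tokens to output tokens, record its per-firing consumption and production rates through an instrumentation wrapper, and classify constant rates as synchronous dataflow (SDF) behavior. The actor below is an invented example, not one of the dissertation's case studies.

```python
# Sketch: instrument an actor to log (tokens consumed, tokens produced)
# per firing, then classify its dataflow model from the observed rates.

def instrument(actor):
    log = []                                  # (consumed, produced) per firing
    def fired(inputs):
        outputs = actor(inputs)
        log.append((len(inputs), len(outputs)))
        return outputs
    fired.log = log
    return fired

def detect_model(log):
    return "SDF" if len(set(log)) == 1 else "dynamic dataflow"

downsample2 = instrument(lambda xs: xs[::2])  # consumes 2 tokens, produces 1
stream = list(range(8))
for i in range(0, len(stream), 2):            # fire on successive 2-token blocks
    downsample2(stream[i:i + 2])
print(detect_model(downsample2.log))          # constant (2, 1) rates -> SDF
```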

    Model Transformation Approach to Automated Model Driven Development

    One of the contemporary challenges of software evolution is adapting a software system to changing requirements and demands from users and environments. An ultimate goal is to encapsulate these requirements in a high-level abstraction, giving the ability to achieve large-scale adaptation of the underlying software implementation. Model-Driven Engineering (MDE) is one of the enabling techniques that supports this objective. In MDE, the effective creation of models and their transformation are core activities that enable the conversion of source models to target models in order to change model structures or translate models into other software artifacts. The main goal is to provide automation and enable the automated development of a system from its corresponding models. Several high-level approaches address this matter; however, there is still an absence of clear methodology and results on how to apply MDE to a specific domain with specific requirements, such as the web domain. This research contributes toward the solution to automated model development by providing an overview of existing approaches and introducing a novel approach in the emerging field of web applications and services. To cope with the current trend of growing complexity of web services as programmatic backbones of modern distributed and cloud architectures, we present an approach using a domain-specific language for modeling web services as the solution to the challenge of scalability in web service modeling and development. We analyze the current state of the problem domain and implement a domain-specific language called Simple Web Service Modeling to support automated model-driven development of such web services. This approach addresses the problem in web service development of software-as-a-service systems that require support for tenant-specific architecture. In the domain of web application quality assurance, we build a modeling language for model-driven testing of web applications that focuses on automation and regression testing. Our techniques are based on building abstractions of web pages and modeling state-machine-based test behavior using the Web Testing Modeling Language, a domain-specific language that we developed for web page modeling. This methodology and these techniques aim at helping software developers as well as testers become more productive and reduce time-to-market, while maintaining high standards of web application quality. The proposed techniques answer the lack of concrete methods and tool support for applying model-driven development to specific areas such as web application testing and services. The results of this work can be applied for practical purposes and integrated into existing software development practices.
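
    The state-machine-based test modeling can be sketched as follows: pages are states, user actions are transitions, and regression test cases are paths through the model. The page names and actions are invented, and the actual Web Testing Modeling Language is a full DSL rather than a Python dictionary.

```python
# Sketch: a web application modeled as a state machine; test sequences
# are enumerated as simple (cycle-free) paths from the start page.

model = {  # page -> {action: next page}
    "Home":      {"click_login": "LoginPage", "search": "Results"},
    "LoginPage": {"submit_valid": "Dashboard", "submit_invalid": "LoginPage"},
    "Results":   {"open_item": "ItemPage"},
    "Dashboard": {},
    "ItemPage":  {},
}

def test_paths(state, path=()):
    """Yield action sequences; each would be replayed against the site."""
    transitions = model[state]
    if not transitions:
        yield path
    for action, target in transitions.items():
        if target in {s for s, _ in path}:    # skip already-visited states
            continue
        yield from test_paths(target, path + ((state, action),))

for p in test_paths("Home"):
    print(" -> ".join(f"{s}.{a}" for s, a in p))
```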

    An Integrated Environment For Automated Benchmarking And Validation Of XML-Based Applications

    Testing is the dominant software verification technique used in industry; it is a critical and highly expensive process during software development. Along with the increase in software complexity, the costs of testing are increasing rapidly. Faced with this problem, many researchers are working on automated testing, attempting to find methods that execute the processes of testing automatically and cut down the cost of testing. Today's software systems are becoming complicated: some are composed of several different components, and some projects even require different systems to work together and support each other. XML was developed to facilitate data exchange and enhance interoperability among software systems, and along with the development of XML technologies, XML-based systems are used widely in many domains. In this thesis we present a methodology, XPT (XML-based Partition Testing), for testing XML-based applications automatically by deriving XML instances from an XML Schema automatically and systematically. The XPT methodology is inspired by the category-partition method, a well-known approach to black-box test generation. We follow a similar idea of applying partitioning to an XML Schema in order to generate a suite of conforming instances; in addition, since the number of generated instances soon becomes unmanageable, we also introduce a set of heuristics for reducing the suite while optimizing XML Schema coverage. The aim of our research is not only to devise a technical method, but also to apply the XPT methodology in real applications. We have created a proof-of-concept tool, TAXI, which implements XPT. The tool has a graphical user interface that guides testers and is easy to use, and it can be customized for specific applications to build the test environment and automate the whole testing process. The details of the TAXI design and case studies using TAXI in different domains are presented in this thesis. The case studies cover three test purposes: the first is functional correctness, where we apply the methodology to XSLT testing, using TAXI to build an automated environment for testing XSLT transformations; the second is robustness testing, where we test an XML database mapping tool that maps and populates data from XML documents into an XML database; and the third is performance testing, where we present an XML benchmark that uses TAXI to benchmark XML-based applications.
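
    A toy sketch of the partitioning behind XPT (TAXI itself operates on real XML Schemas and adds suite-reduction heuristics): partition each constrained element's value space into boundary and interior representatives, then emit one conforming instance per representative. The mini-schema below is a hypothetical stand-in for an XSD with min/max facets.

```python
# Sketch: category-partition-style instance generation from facet bounds.
from xml.etree.ElementTree import Element, SubElement, tostring

schema = {  # element name -> (minInclusive, maxInclusive)
    "quantity": (1, 50),
    "discount": (0, 10),
}

def representatives(lo, hi):
    return [lo, (lo + hi) // 2, hi]   # boundaries plus one interior value

def instances(root_tag, schema):
    for name, (lo, hi) in schema.items():
        for value in representatives(lo, hi):
            root = Element(root_tag)
            for other, (olo, _) in schema.items():
                # Vary one element through its partition; hold others at
                # their lower bound so each instance stays conforming.
                SubElement(root, other).text = str(value if other == name else olo)
            yield tostring(root, encoding="unicode")

for doc in instances("lineitem", schema):
    print(doc)
```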