
    Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing

    Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or drawn from an existing source, which makes its generation laborious and hard to automate. Another limitation that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to both of these problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and the dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data, while state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates this problem as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns, such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing.
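
    As a loose illustration of the cost-aware Pareto-optimal minimisation idea described above, the following Python sketch selects sub-suites that trade off coverage, invocation cost and reliability. All test cases, costs and reliability values are invented, and brute-force enumeration stands in for the thesis's actual minimisation algorithm.

        # Hypothetical sketch of cost-aware Pareto-optimal test suite minimisation.
        # Test case names, costs and reliability values are invented for illustration.
        from itertools import combinations

        # Each candidate test case: the service operations it covers, the monetary
        # cost of the service invocations it makes, and an estimated reliability.
        TESTS = {
            "t1": {"covers": {"op1", "op2"}, "cost": 0.05, "reliability": 0.99},
            "t2": {"covers": {"op2", "op3"}, "cost": 0.20, "reliability": 0.95},
            "t3": {"covers": {"op1", "op3", "op4"}, "cost": 0.40, "reliability": 0.90},
            "t4": {"covers": {"op4"}, "cost": 0.01, "reliability": 0.99},
        }

        def objectives(suite):
            """Coverage (maximise), invocation cost (minimise), reliability (maximise)."""
            covered = set().union(*(TESTS[t]["covers"] for t in suite))
            cost = sum(TESTS[t]["cost"] for t in suite)
            rel = min(TESTS[t]["reliability"] for t in suite)
            return len(covered), cost, rel

        def dominates(a, b):
            """a dominates b: no worse on every objective, strictly better on one."""
            (ca, pa, ra), (cb, pb, rb) = a, b
            no_worse = ca >= cb and pa <= pb and ra >= rb
            better = ca > cb or pa < pb or ra > rb
            return no_worse and better

        # Enumerate all sub-suites (fine for a toy example) and keep the Pareto front.
        suites = [frozenset(c) for r in range(1, len(TESTS) + 1)
                  for c in combinations(TESTS, r)]
        front = [s for s in suites
                 if not any(dominates(objectives(o), objectives(s)) for o in suites)]

        for s in sorted(front, key=lambda s: objectives(s)[1]):
            print(sorted(s), objectives(s))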

    A Survey on Metamorphic Testing

    A test oracle determines whether a test execution reveals a fault, often by comparing the observed program output to the expected output. This is not always practical, for example when a program's input-output relation is complex and difficult to capture formally. Metamorphic testing provides an alternative, where correctness is not determined by checking an individual concrete output, but by applying a transformation to a test input and observing how the program output “morphs” into a different one as a result. Since the introduction of such metamorphic relations in 1998, many contributions on metamorphic testing have been made, and the technique has seen successful applications in a variety of domains, ranging from web services to computer graphics. This article provides a comprehensive survey on metamorphic testing: It summarises the research results and application areas, and analyses common practice in empirical studies of metamorphic testing as well as the main open challenges.
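
    To make the idea concrete, here is a minimal Python sketch of a metamorphic relation, using the classic sine example from the metamorphic-testing literature: the relation sin(x) = sin(π − x) lets two outputs be checked against each other without an oracle for either.

        # A minimal metamorphic-relation check. The program under test is a
        # stand-in for software whose exact expected output is hard to specify.
        import math
        import random

        def program_under_test(x):
            return math.sin(x)

        def check_metamorphic_relation(x, tol=1e-9):
            source_output = program_under_test(x)
            followup_output = program_under_test(math.pi - x)  # transformed input
            return abs(source_output - followup_output) <= tol

        random.seed(0)
        for _ in range(1000):
            x = random.uniform(-100, 100)
            assert check_metamorphic_relation(x), f"relation violated at x={x}"
        print("metamorphic relation held on 1000 random inputs")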

    Higher Order Mutation Testing

    Mutation testing is a fault-based software testing technique that has been studied widely for over three decades. To date, work in this field has focused largely on first order mutants because it is believed that higher order mutation testing is too computationally expensive to be practical. This thesis argues that some higher order mutants are potentially better able to simulate real world faults and to reveal insights into programming bugs than the restricted class of first order mutants. This thesis proposes a higher order mutation testing paradigm which combines valuable higher order mutants and non-trivial first order mutants for mutation testing. To overcome the exponential increase in the number of higher order mutants, a search process is proposed that seeks fit mutants (both first and higher order) in the space of all possible mutants. A fault-based higher order mutant classification scheme is introduced. Based on different types of fault interaction, this approach classifies higher order mutants into four categories: expected, worsening, fault masking and fault shifting. A search-based approach is then proposed for locating subsuming and strongly subsuming higher order mutants. These mutants are a subset of the fault masking and fault shifting classes of higher order mutants and are more difficult to kill than their constituent first order mutants. Finally, a hybrid test data generation approach is introduced, which combines dynamic symbolic execution and search-based software testing to generate strongly adequate test data that kills first and higher order mutants.
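
    The fault-masking idea can be illustrated with a small, invented Python example: each first-order mutant below is easy to kill on its own, but when the two mutations are combined, the faults nearly cancel, so the second-order mutant survives every integer test input and is much harder to kill.

        # Hypothetical illustration of fault masking in a higher order mutant.
        # The function and thresholds are invented for this example.

        def original(score):
            return "pass" if score >= 60 else "fail"

        def fom1(score):                      # first-order mutant: '>=' -> '>'
            return "pass" if score > 60 else "fail"

        def fom2(score):                      # first-order mutant: 60 -> 59
            return "pass" if score >= 59 else "fail"

        def hom(score):                       # second-order mutant: both combined
            return "pass" if score > 59 else "fail"

        for score in [0, 59, 60, 100]:        # typical integer test inputs
            assert hom(score) == original(score)   # the second-order mutant survives

        # ...yet each first-order mutant is killed, and only a fractional
        # input in (59, 60) kills the combined mutant:
        assert fom1(60) != original(60)
        assert fom2(59) != original(59)
        assert hom(59.5) != original(59.5)
        print("HOM masked on all integer inputs; killed only by e.g. 59.5")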

    Interdependent Security and Compliance in Service Selection

    Application development today is characterized by ever shorter release cycles and more frequent change requests. Hence development methods such as service composition are increasingly arousing interest as viable alternative approaches. While employing web services as building blocks rapidly reduces development times, it raises new challenges regarding security and compliance, since their implementation remains a black box which usually cannot be controlled. Security becomes even more challenging since some applications require domain-specific security objectives such as location privacy. Another important aspect is that security objectives are in general not independent of one another but subject to interdependence. Hence this thesis addresses the question of how to consider interdependent security and compliance in service composition. Current approaches for service composition consider neither interdependent security nor compliance. Selecting suitable services for a composition is a combinatorial problem which is known to be NP-hard. Often this problem is solved using genetic algorithms in order to obtain near-optimal solutions in reasonable time. This is particularly the case if multiple objectives have to be optimized simultaneously, such as price, runtime and data encryption strength. Security properties of compositions are usually verified using formal methods. However, none of the available methods supports interdependence effects or defining arbitrary security objectives. Similarly, no current approach ensures compliance of service compositions during service selection. Instead, compliance is verified afterwards, which might necessitate repeating the selection process in case of a non-compliant solution. In this thesis, novel approaches for considering interdependent security and compliance in service composition are presented and discussed. Since no formal methods exist that cover interdependence effects for security, this aspect is covered by a security assessment. An assessment method is developed which builds upon the notion of structural decomposition in order to assess the fulfillment of arbitrary security objectives in terms of a utility function. Interdependence effects are modeled as dependencies between utility functions. In order to enable compliance awareness, an approach is presented which checks compliance of compositions during service selection and marks non-compliant parts. This makes it possible to repair the corresponding parts during the selection process by replacing the current services, and hence avoids the need to repeat the selection process. It is demonstrated how to embed the presented approaches into a genetic algorithm in order to ease integration with existing approaches for service composition. The developed approaches are compared to state-of-the-art genetic algorithms using simulations.
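
    The following Python sketch illustrates, on invented data, how a genetic algorithm might combine a scalarised multi-objective fitness (price, runtime, encryption strength) with a repair step that swaps out non-compliant services during selection rather than restarting the search. It is a toy illustration of the general approach, not the thesis's algorithm; all services, weights and the compliance rule are assumptions.

        # Hypothetical multi-objective service selection with a genetic algorithm.
        import random

        # Candidate services per task: (price, runtime, encryption strength, compliant?)
        CANDIDATES = {
            "payment":  [(0.9, 120, 256, True), (0.4, 200, 128, False), (0.6, 150, 256, True)],
            "shipping": [(0.5,  80, 128, True), (0.2,  90,   0, False)],
        }
        TASKS = list(CANDIDATES)

        def fitness(genome):
            """Scalarised utility: cheap, fast, strong encryption (toy weighting)."""
            services = [CANDIDATES[t][g] for t, g in zip(TASKS, genome)]
            price = sum(s[0] for s in services)
            runtime = sum(s[1] for s in services)
            strength = min(s[2] for s in services)
            return -price - 0.01 * runtime + 0.005 * strength

        def repair(genome):
            """Replace non-compliant selections instead of restarting the search."""
            return tuple(
                g if CANDIDATES[t][g][3]
                else random.choice([i for i, s in enumerate(CANDIDATES[t]) if s[3]])
                for t, g in zip(TASKS, genome)
            )

        random.seed(1)
        population = [tuple(random.randrange(len(CANDIDATES[t])) for t in TASKS)
                      for _ in range(20)]
        for _ in range(50):                      # simple (mu + lambda)-style loop
            population = [repair(g) for g in population]
            population.sort(key=fitness, reverse=True)
            parents = population[:10]
            children = [tuple(random.choice(p) for p in zip(*random.sample(parents, 2)))
                        for _ in range(10)]      # uniform crossover
            population = parents + children
        best = max(population, key=fitness)
        print(best, fitness(best))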

    Interoperability of Enterprise Software and Applications

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis; its benefits and difficulties are compared with those of the finite-state-automata approach. This work extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts, and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite state automata (FSA) approach we have seen in previous work. As a base implementation language model, we use in this approach a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, MARFL, and JOOIP, bound by the underlying intensional programming paradigm. (PhD thesis; also available on Spectrum at http://spectrum.library.concordia.ca/977460)
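
    As a very loose illustration (in plain Python, not Forensic Lucid), the sketch below evaluates a claim against evidence indexed by context dimensions and combines credibility factors from corroborating sources. The observations, dimensions and the noisy-OR combination rule are all assumptions made for this example, not the thesis's formulation.

        # Each observation: a context (dimension -> value), a claimed fact, and the
        # credibility of its source (1.0 = fully trusted witness or log).
        OBSERVATIONS = [
            ({"host": "ws1", "time": 10}, "file_deleted", 0.9),   # system log
            ({"host": "ws1", "time": 10}, "file_deleted", 0.5),   # witness account
            ({"host": "ws1", "time": 12}, "user_logged_out", 0.8),
        ]

        def credibility_of(claim, context):
            """Combine credibility of all observations of `claim` at `context`.

            Independent corroborating sources raise overall credibility via
            1 - prod(1 - c_i), a noisy-OR style combination (an assumption).
            """
            disbelief = 1.0
            matched = False
            for obs_context, fact, cred in OBSERVATIONS:
                # Hierarchical match: the observation must agree with the
                # queried context on every queried dimension.
                if fact == claim and all(obs_context.get(d) == v for d, v in context.items()):
                    disbelief *= 1.0 - cred
                    matched = True
            return 1.0 - disbelief if matched else 0.0

        print(credibility_of("file_deleted", {"host": "ws1", "time": 10}))  # 0.95
        print(credibility_of("file_deleted", {"host": "ws2"}))              # 0.0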

    Descriptive business process models at run-time

    Today's competitive markets require organisations to react proactively to changes in their environment if financial and legal consequences are to be avoided. Since business processes are elementary parts of modern organisations, they too must adapt to these changes quickly and flexibly. This requirement demands a more dynamic handling of business processes, i.e. treating business processes as run-time artefacts rather than design-time artefacts. One general approach to this problem is provided by the models@run.time community, which promotes methodologies for self-adaptive systems in which models reflect the system's current state at any point in time and allow immediate reasoning and adaptation mechanisms. However, in contrast to common self-adaptive systems, the domain of business processes features two additional challenges: (i) a bigger than usual abstraction gap between the business process models and the actual run-time information of the enterprise system and (ii) the possibility of run-time deviations from the planned models. Developing an understanding of such processes is crucial for optimising business processes and dynamically adapting to changing demands. This thesis explores the potential of adopting and enhancing principles and mechanisms from the models@run.time domain for the purpose of run-time reasoning in the business process domain, i.e. it investigates the potential role of Descriptive Business Process Models at Run-time (DBPMRTs) in business process management. The DBPMRT is a model describing the enterprise system at run-time, enabling higher-level reasoning on the as-is state. Along with the specification of the DBPMRT, algorithms and an overall framework are proposed to establish and maintain a causal link from the enterprise system to the DBPMRT at run-time. Furthermore, it is shown that proactive higher-level reasoning on a DBPMRT in the form of performance prediction yields more accurate results. By taking these steps the thesis addresses general challenges of business process management, e.g. dealing with frequently changing processes and shortening the business process life cycle. At the same time the thesis contributes to research in models@run.time by providing a complex real-world use case as well as a reference approach for dealing with volatile models@run.time at a higher abstraction level.
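
    A minimal Python sketch of the causal-link idea, assuming an invented event format: every enterprise-system event immediately updates a descriptive run-time model, which can then be queried for simple performance predictions. This illustrates the concept only, not the framework proposed in the thesis.

        # A run-time model kept causally connected to a stream of system events.
        from collections import defaultdict

        class DescriptiveProcessModel:
            """Observed activities, transitions, and average durations."""

            def __init__(self):
                self.transitions = defaultdict(int)   # (from, to) -> count
                self.durations = defaultdict(list)    # activity -> durations
                self.current = {}                     # case id -> (activity, start)

            def observe(self, case, activity, timestamp):
                """Causal link: every system event immediately updates the model."""
                if case in self.current:
                    prev, started = self.current[case]
                    self.transitions[(prev, activity)] += 1
                    self.durations[prev].append(timestamp - started)
                self.current[case] = (activity, timestamp)

            def predict_remaining(self, case):
                """Toy prediction: expected duration of the current activity."""
                activity, _ = self.current[case]
                seen = self.durations[activity]
                return sum(seen) / len(seen) if seen else None

        model = DescriptiveProcessModel()
        for case, activity, t in [(1, "receive", 0), (1, "check", 5), (2, "receive", 6),
                                  (1, "approve", 9), (2, "check", 10)]:
            model.observe(case, activity, t)
        print(model.predict_remaining(2))   # average observed "check" duration: 4.0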
