    The Oracle Problem in Software Testing: A Survey

    Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behaviour is called the "test oracle problem". Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test oracle automation, the human has to determine whether observed behaviour is correct. The literature on test oracles has introduced techniques for oracle automation, including modelling, specifications, contract-driven development and metamorphic testing. When none of these is completely adequate, the final source of test oracle information remains the human, who may be aware of informal specifications, expectations, norms and domain-specific information that provide informal oracle guidance. All forms of test oracles, even the humble human, involve challenges of reducing cost and increasing benefit. This paper provides a comprehensive survey of current approaches to the test oracle problem and an analysis of trends in this important area of software testing research and practice.
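
    The survey names metamorphic testing as one oracle-automation technique. As a minimal illustration (not taken from the paper), the Python sketch below uses the relation sin(x) = sin(pi - x) as a partial oracle: violations expose faults without knowing the expected value of sin(x). The function under test and the chosen relation are illustrative assumptions.

        import math

        def sut_sine(x):
            # System under test: stand-in for an implementation whose exact
            # output we cannot check directly without a full oracle.
            return math.sin(x)

        def metamorphic_test(inputs, tolerance=1e-9):
            # Metamorphic relation: sin(x) == sin(pi - x) must hold for all x.
            failures = []
            for x in inputs:
                if abs(sut_sine(x) - sut_sine(math.pi - x)) > tolerance:
                    failures.append(x)
            return failures

        if __name__ == "__main__":
            print(metamorphic_test([0.0, 0.5, 1.2, 2.8]))  # expect [] for a correct sine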

    The Semantic Web as an Aid to the Execution Trace Analyst

    Execution trace analysis has become the tool of choice for debugging and optimizing application code on embedded systems. These systems have complex architectures based on integrated components called SoCs (Systems-on-Chip). The analyst's task (often that of an application developer) becomes a real challenge, because the traces produced by these systems are very large and the events they contain are low level. We propose to support this analysis work by using knowledge management tools to facilitate exploration of the trace. We propose a domain ontology that describes the main concepts and constraints for analysing traces from SoCs. This ontology follows lightweight-ontology paradigms so that the knowledge management scales. It relies on RDF "triple store" technologies and is queried with declarative SPARQL queries. We illustrate our approach by providing a higher-quality analysis of the traces of a real use case.
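
    As a hedged illustration of the querying style this abstract describes (not the authors' actual ontology), the sketch below stores two hypothetical SoC trace events as RDF triples using the rdflib Python package and explores them with a declarative SPARQL query; the trace namespace and event fields are invented for the example.

        from rdflib import Graph, Literal, Namespace, RDF, XSD

        TRACE = Namespace("http://example.org/trace#")  # hypothetical ontology namespace
        g = Graph()

        # Load two low-level trace events into the triple store.
        for eid, kind, ts in [("e1", "irq", 100), ("e2", "dma_start", 140)]:
            ev = TRACE[eid]
            g.add((ev, RDF.type, TRACE.Event))
            g.add((ev, TRACE.kind, Literal(kind)))
            g.add((ev, TRACE.timestamp, Literal(ts, datatype=XSD.integer)))

        # Declarative exploration: all events after t=120, newest first.
        q = """
        PREFIX trace: <http://example.org/trace#>
        SELECT ?e ?kind ?ts WHERE {
          ?e a trace:Event ; trace:kind ?kind ; trace:timestamp ?ts .
          FILTER (?ts > 120)
        } ORDER BY DESC(?ts)
        """
        for row in g.query(q):
            print(row.e, row.kind, row.ts)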

    DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization

    A DL compiler's primary function is to translate DNN programs written in high-level DL frameworks such as PyTorch and TensorFlow into portable executables. These executables can then be flexibly executed by the deployed host programs. However, existing DL compilers rely on a tracing mechanism, which involves feeding a runtime input to a neural network program and tracing the program execution paths to generate the computational graph necessary for compilation. Unfortunately, this mechanism falls short when dealing with modern dynamic neural networks (DyNNs) that possess varying computational graphs depending on the inputs. Consequently, conventional DL compilers struggle to accurately compile DyNNs into executable code. To address this limitation, we propose DyCL, a general approach that enables any existing DL compiler to successfully compile DyNNs. DyCL tackles the dynamic nature of DyNNs by introducing a compilation mechanism that redistributes the control and data flow of the original DNN programs during the compilation process. Specifically, DyCL develops program analysis and program transformation techniques to convert a dynamic neural network into multiple sub-neural networks. Each sub-neural network is devoid of conditional statements and is compiled independently. Furthermore, DyCL synthesizes a host module that models the control flow of the DyNNs and facilitates the invocation of the sub-neural networks. Our evaluation demonstrates the effectiveness of DyCL, achieving a 100% success rate in compiling all dynamic neural networks. Moreover, the compiled executables generated by DyCL exhibit significantly improved performance, running between 1.12x and 20.21x faster than the original DyNNs executed on general-purpose DL frameworks. Comment: This paper has been accepted to ISSTA 2023.
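
    The core mechanism, splitting a DyNN into branch-free sub-networks plus a host module that re-creates the control flow, can be shown in miniature. The Python sketch below is a toy analogue, not DyCL's actual transformation or API; the names are illustrative.

        def original_dynn(x):
            # Input-dependent control flow defeats tracing-based compilation:
            # a single traced path cannot represent both branches.
            if x > 0:
                return x * 2.0       # "branch A" computation
            return x * -0.5          # "branch B" computation

        # Branch-free sub-networks, each compilable in isolation.
        def sub_net_a(x):
            return x * 2.0

        def sub_net_b(x):
            return x * -0.5

        def host_module(x):
            # Host module: models the original control flow and dispatches
            # to the (separately compiled) sub-networks.
            return sub_net_a(x) if x > 0 else sub_net_b(x)

        assert host_module(3.0) == original_dynn(3.0)
        assert host_module(-3.0) == original_dynn(-3.0)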

    On the Security of Software Systems and Services

    This work investigates new methods for facing the security issues and threats arising from the composition of software. This task has been carried out through the formal modelling of both the software composition scenarios and the security properties, i.e., policies, to be guaranteed. Our research moves across three different modalities of software composition which are of central interest to some of the most sensitive aspects of the modern information society: mobile applications, trust-based composition and service orchestration. Mobile applications are programs designed to be deployable on remote platforms. Basically, they are the main channel for the distribution and commercialisation of software for mobile devices, e.g., smart phones and tablets. Here we study the security threats that affect the application providers and the hosting platforms. In particular, we present a programming framework for the development of applications with static and dynamic security support. We also implement an enforcement mechanism for applying fine-grained security controls to the execution of possibly malicious applications. In addition to security, trust represents a pragmatic and intuitive way of managing the interactions among systems. Currently, trust is one of the main factors that human beings take into account when deciding whether or not to accept a transaction. In our work we investigate the possibility of defining a fully integrated environment for security policies and trust, including a runtime monitor. Finally, Service-Oriented Computing (SOC) is the leading technology for business applications distributed over a network. The security issues related to service networks are many and multi-faceted. We mainly deal with the static verification of secure composition plans of web services. Moreover, we introduce the synthesis of dynamic security checks for protecting the services against illegal invocations.
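
    As an illustration of the runtime enforcement idea (a sketch only; the thesis' policy language and mechanism are richer), the Python fragment below intercepts security-relevant actions and blocks those a policy forbids. The action names and the policy format are assumptions made for the example.

        class PolicyViolation(Exception):
            pass

        class Monitor:
            def __init__(self, denied):
                self.denied = set(denied)   # actions the policy forbids
                self.trace = []             # observed history of actions

            def check(self, action, target):
                # Intercept each security-relevant action before it executes.
                self.trace.append((action, target))
                if action in self.denied:
                    raise PolicyViolation(f"policy forbids {action} on {target}")

        def run_app(monitor):
            monitor.check("read", "/tmp/data")       # allowed
            monitor.check("send_sms", "+123456789")  # blocked by the policy

        monitor = Monitor(denied={"send_sms"})
        try:
            run_app(monitor)
        except PolicyViolation as e:
            print("blocked:", e)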

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures draws more and more attention to adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is conducted at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised at a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general, and also provides insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We thus complement this highly optimized model with a message-passing-based component model which takes reconfigurability to be its central principle. Supporting reconfiguration in a framework is about relieving the programmer of the peculiarities as much as possible. We utilize the formal description of the component model to provide an algorithm for reconfiguration that retains as much flexibility as possible, while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples such as fault tolerance and automated resource control.
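
    The reconfiguration step described above, where stateful components keep their data and unprocessed messages and the swap is perceived as atomic, can be sketched as follows. This Python miniature ignores the concurrency issues the dissertation's algorithm handles, and its names are illustrative.

        from collections import deque

        class Component:
            def __init__(self, name, state=None):
                self.name = name
                self.state = state or {}
                self.inbox = deque()   # unprocessed messages
                self.blocked = False

        def reconfigure(old, make_new):
            old.blocked = True                      # 1. quiesce the old component
            new = make_new()
            new.state = old.state                   # 2. retain its data ...
            new.inbox = old.inbox                   # ... and its queued messages
            new.blocked = False                     # 3. resume under the new version
            return new

        v1 = Component("filter-v1", state={"count": 7})
        v1.inbox.append("msg-42")
        v2 = reconfigure(v1, lambda: Component("filter-v2"))
        print(v2.state, list(v2.inbox))  # state and messages survive the swap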

    Formal Firewall Conformance Testing: An Application of Test and Proof Techniques

    Firewalls are an important means to secure critical ICT infrastructures. As configurable off-the-shelf products, the effectiveness of a firewall crucially depends on both the correctness of the implementation itself and the correct configuration. While testing the implementation can be done once by the manufacturer, the configuration needs to be tested for each application individually. This is particularly challenging as the configuration, implementing a firewall policy, is inherently complex, hard to understand, administrated by different stakeholders and thus difficult to validate. This paper presents a formal model of both stateless and stateful firewalls (packet filters), including NAT, to which a specification-based conformance test case generation approach is applied. Furthermore, a verified optimisation technique for this approach is presented: starting from a formal model for stateless firewalls, a collection of semantics-preserving policy transformation rules and an algorithm that optimises the specification with respect to the number of test cases required for path coverage of the model are derived. We extend an existing approach that integrates verification and testing, that is, tests and proofs, to support conformance testing of network policies. The presented approach is supported by a test framework that allows actual firewalls to be tested using the test cases generated on the basis of the formal model. Finally, a report on several larger case studies is presented.
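
    As a much-simplified illustration of the two ingredients this abstract combines, the Python sketch below applies one semantics-preserving transformation (dropping shadowed rules) to a flat, stateless rule list and derives one test packet per remaining rule. The rule format is an assumption and stands in for the paper's formal model.

        def remove_shadowed(policy):
            # Each rule is (source_net, dest_port, action), matched first-hit.
            seen, optimized = set(), []
            for src, port, action in policy:
                if (src, port) in seen:        # an earlier rule already matches:
                    continue                   # this one can never fire, drop it
                seen.add((src, port))
                optimized.append((src, port, action))
            return optimized

        def test_cases(policy):
            # One packet per rule suffices for path coverage of this flat model.
            return [({"src": src, "dport": port}, action)
                    for src, port, action in policy]

        policy = [("10.0.0.0/8", 22, "accept"),
                  ("10.0.0.0/8", 22, "deny"),    # shadowed: never reachable
                  ("0.0.0.0/0", 80, "accept")]
        opt = remove_shadowed(policy)
        print(opt)                 # two rules remain; behaviour is unchanged
        print(test_cases(opt))     # fewer test cases after the optimisation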

    Studying a Virtual Testbed for Unverified Data

    It is difficult to fully know the effects a piece of software will have on your computer, particularly when the software is distributed by an unknown source. The research in this thesis focuses on malware detection, virtualization, and sandbox/honeypot techniques, with the goal of improving the security of installing useful, but unverifiable, software. With a combination of these techniques, it should be possible to install software in an environment where it cannot harm a machine, but can be tested to determine its safety. Testing for malware, performance, network connectivity, memory usage, and interoperability can be accomplished without allowing the program to access the base operating system of a machine. After the full effects of the software are understood and it is determined to be safe, it could then be run from, and given access to, the base operating system. This thesis investigates the feasibility of creating a system to verify the security of unknown software while ensuring it has no negative impact on the host machine.
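
    A bare-bones sketch of the observe-before-trust workflow (illustrative only; the thesis targets virtualization, not a directory sandbox): snapshot an isolated directory, run the untrusted program confined to it, and diff the snapshots to see what it created. The command run here is a harmless placeholder.

        import os, subprocess, tempfile

        def snapshot(root):
            # Record every file path currently under the sandbox directory.
            return {os.path.join(d, f)
                    for d, _, files in os.walk(root) for f in files}

        with tempfile.TemporaryDirectory() as sandbox:
            before = snapshot(sandbox)
            # Run the unverified program with the sandbox as its working directory.
            subprocess.run(["sh", "-c", "echo probe > created.txt"],
                           cwd=sandbox, timeout=30, check=False)
            created = snapshot(sandbox) - before
            print("files created by the program:", created)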

    Efficient Software Implementation of Stream Programs

    The way we use computers and mobile phones today requires large amounts of processing of data streams. Examples include digital signal processing for wireless transmission, audio and video coding for recording and watching videos, and noise reduction for phone calls. These tasks can be performed by stream programs: computer programs that process streams of data. Stream programs can be composed of other stream programs. Components of a composition are connected in a network, i.e. the output streams of one component are sent as input streams to other components. The components that perform the actual computation are called kernels. They can be described in different styles and programming languages. There are also formal models for describing the kernels and the networks. One such model is the actor machine. This dissertation evaluates the actor machine and how it facilitates creating efficient software implementations of stream programs. The evaluation is divided into four aspects: (1) analyzability of its structure, (2) generality in what languages and styles it can express, (3) efficient implementation of kernels, and (4) efficient implementation of networks. This dissertation demonstrates all four aspects through the implementation and evaluation of a stream program compiler based on actor machines.
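
    To make the composition idea concrete (a sketch under assumed names, not the actor machine formalism itself), the Python fragment below wires two kernels into a small network where the output stream of one kernel becomes the input stream of the next.

        from collections import deque

        def scale(inp, out, factor):        # kernel: multiply each token
            while inp:
                out.append(inp.popleft() * factor)

        def clip(inp, out, limit):          # kernel: bound each token
            while inp:
                out.append(min(inp.popleft(), limit))

        # Network: source -> scale -> clip -> sink
        source = deque([1, 5, 9])
        mid, sink = deque(), deque()
        scale(source, mid, factor=2)
        clip(mid, sink, limit=10)
        print(list(sink))                   # [2, 10, 10]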