
    The Oracle Problem in Software Testing: A Survey

    Testing involves examining the behaviour of a system in order to discover potential faults. Given an input for a system, the challenge of distinguishing the corresponding desired, correct behaviour from potentially incorrect behaviour is called the "test oracle problem". Test oracle automation is important to remove a current bottleneck that inhibits greater overall test automation. Without test oracle automation, the human has to determine whether observed behaviour is correct. The literature on test oracles has introduced techniques for oracle automation, including modelling, specifications, contract-driven development and metamorphic testing. When none of these is completely adequate, the final source of test oracle information remains the human, who may be aware of informal specifications, expectations, norms and domain-specific information that provide informal oracle guidance. All forms of test oracle, even the humble human, involve challenges of reducing cost and increasing benefit. This paper provides a comprehensive survey of current approaches to the test oracle problem and an analysis of trends in this important area of software testing research and practice.
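    One of the surveyed oracle-automation techniques, metamorphic testing, can be illustrated with a minimal Python sketch. The function under test and the metamorphic relation below are illustrative assumptions, not examples taken from the survey:

```python
import math
import random

def sin_under_test(x: float) -> float:
    """Stand-in for an implementation whose correctness we cannot
    judge output-by-output (no full oracle is available)."""
    return math.sin(x)

def check_metamorphic_relation(trials: int = 1000, tol: float = 1e-9) -> None:
    """Metamorphic oracle: sin(x) == sin(pi - x) must hold for any x,
    so correctness is checked as a relation between two executions
    rather than against a known expected output."""
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        if abs(sin_under_test(x) - sin_under_test(math.pi - x)) > tol:
            raise AssertionError(f"relation violated at x = {x}")

if __name__ == "__main__":
    check_metamorphic_relation()
    print("no violations in 1000 random trials")
```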

    LLM for Test Script Generation and Migration: Challenges, Capabilities, and Opportunities

    This paper investigates the application of large language models (LLMs) to mobile application test script generation. Test script generation is a vital component of software testing, enabling efficient and reliable automation of repetitive test tasks. However, existing generation approaches often encounter limitations, such as difficulties in accurately capturing and reproducing test scripts across diverse devices, platforms, and applications. These challenges arise from differences in screen sizes, input modalities, platform behaviors, API inconsistencies, and application architectures. Overcoming these limitations is crucial for achieving robust and comprehensive test automation. By leveraging the capabilities of LLMs, we aim to address these challenges and explore their potential as a versatile tool for test automation. We investigate how well LLMs can adapt to diverse devices and systems while accurately capturing and generating test scripts. We also evaluate their cross-platform generation capabilities by assessing their ability to handle operating system variations and platform-specific behaviors. Furthermore, we explore the application of LLMs to cross-app migration, where they generate test scripts across different applications and software environments based on existing scripts. Throughout the investigation, we analyze their adaptability to various user interfaces, app architectures, and interaction patterns, ensuring accurate script generation and compatibility. The findings of this research contribute to the understanding of LLMs' capabilities in test automation. Ultimately, this research aims to enhance software testing practices, empowering app developers to achieve higher levels of software quality and development efficiency.
    Comment: Accepted by the 23rd IEEE International Conference on Software Quality, Reliability, and Security (QRS 2023).
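    The paper's concrete pipeline is not reproduced here; the following Python sketch only illustrates the general shape of LLM-driven cross-platform script migration. The `complete` callable and the prompt wording are assumptions standing in for any prompt-in/text-out client, not an API from the paper:

```python
from typing import Callable

def migrate_test_script(source_script: str,
                        source_platform: str,
                        target_platform: str,
                        complete: Callable[[str], str]) -> str:
    """Ask an LLM to port a UI test script between platforms.
    `complete` is any callable mapping a prompt string to the model's
    text response (hypothetical; plug in whichever LLM client you use)."""
    prompt = (
        "You are migrating a mobile UI test script.\n"
        f"Source platform: {source_platform}\n"
        f"Target platform: {target_platform}\n"
        "Preserve the test intent, but adapt element selectors, "
        "input events, and platform-specific APIs.\n\n"
        f"Source script:\n{source_script}\n\n"
        "Return only the migrated script."
    )
    return complete(prompt)

# Usage: migrated = migrate_test_script(script, "Android", "iOS", my_llm_client)
```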

    Ways of Applying Artificial Intelligence in Software Engineering

    As Artificial Intelligence (AI) techniques have become more powerful and easier to use, they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs, it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI, but we lack methods to classify ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate them. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy, which categorises applications according to their point of AI application, the type of AI technology used, and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. Results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. We argue that this will be important for companies in deciding how to apply AI in their software applications and to create strategies for its use.
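    A hedged sketch of how such a three-dimensional classification could be represented in code; the dimension values below are illustrative placeholders, and the paper itself defines the actual AI-SEAL levels:

```python
from dataclasses import dataclass
from enum import Enum

class ApplicationPoint(Enum):
    """Where the AI is applied (illustrative values; see the paper)."""
    PROCESS = "development process"
    PRODUCT = "shipped software product"
    RUNTIME = "running system"

@dataclass(frozen=True)
class AISealClassification:
    point: ApplicationPoint
    ai_technology: str    # e.g. "supervised learning", "search-based"
    automation_level: int  # higher = more autonomy granted to the AI

# Classifying a hypothetical paper that ships an ML model inside the product:
example = AISealClassification(ApplicationPoint.PRODUCT,
                               "supervised learning",
                               automation_level=4)
print(example)
```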

    Exploring Domain Specific Approaches to Software Model Checking

    Model checking has proven to be an effective technology for verification and debugging in hardware domains and, more recently, in software domains. The major challenges in applying model checking to software systems are the mapping of software executables to the model checker's input language and the intrinsic complexity of ever-growing software systems. This thesis explores domain-specific model checking approaches for large systems in order to optimize state-space storage for specific domains. Bogor [Bogor 2003] is an extensible, customizable, and highly modular model checking framework that supports general as well as domain-specific software model checking. As part of the thesis, domain-specific extensions to Bogor's input language, the Bandera Intermediate Representation (BIR), were implemented as a plugin for Eclipse [Eclipse 2004]. Eclipse is a universal platform for tool integration, and its plugin development environment facilitates the addition of new plugins; Bogor exploits this extension mechanism. Bogor was installed as an Eclipse plugin and, with the help of Eclipse's Plugin Development Environment (PDE), new data types were integrated with the existing Bogor framework. Two case studies (a postfix calculator using the stack extension and a resource-allocation system using the multiset extension) were investigated. Metrics such as the number of states, the number of transitions, and the maximum depth were analyzed, and the complexity of the test cases was increased gradually to test the extensions for feasibility and scalability. The thesis also includes a comprehensive study of several well-known model checkers, covering their features, degree of automation, and input languages. It was observed that customizing the model checker to domain specifications helped achieve space reduction; the reduction is especially prominent in large domains, where it contributes to mitigating state-space explosion. Although developing extensions is achievable, it requires a working knowledge of Eclipse and specific knowledge of model checking. In conclusion, a domain-specific approach to software model checking was demonstrated to be a promising technique: language extensions to BIR were successfully built and tested for accuracy and scalability.
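    The thesis's BIR extensions are not reproduced here, but the state-space effect of a multiset extension can be illustrated with a short Python sketch: storing a collection of interchangeable resources in a canonical (sorted) form collapses all orderings of the same resources into a single stored state. The resource names are illustrative:

```python
def canonical_multiset(items):
    """Domain-specific state component: a multiset kept in sorted,
    hashable form so that all orderings of the same resources map
    to one stored state."""
    return tuple(sorted(items))

# Order-sensitive encoding: two interleavings, two stored states.
naive_states = {("printer", "scanner"), ("scanner", "printer")}

# Multiset encoding: both interleavings collapse to one state.
reduced_states = {canonical_multiset(["printer", "scanner"]),
                  canonical_multiset(["scanner", "printer"])}

print(len(naive_states), len(reduced_states))  # -> 2 1
```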

    Modeling, Simulation and Emulation of Intelligent Domotic Environments

    Intelligent Domotic Environments are a promising approach, based on semantic models and commercial off-the-shelf domotic technologies, to realizing new intelligent buildings, but their complexity requires innovative design methodologies and tools for ensuring correctness. Suitable simulation and emulation approaches and tools must be adopted to allow designers to experiment with their ideas and to incrementally verify designed policies in a scenario where the environment is partly emulated and partly composed of real devices. This paper describes a framework that exploits UML 2.0 state diagrams for the automatic generation of device simulators from ontology-based descriptions of domotic environments. The DogSim simulator may simulate a complete building automation system in software, or may be integrated in the Dog Gateway, allowing partial simulation of virtual devices alongside real devices. Experiments on a real home show that the approach is feasible and can easily address both simulation and emulation requirements.
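    A minimal sketch of the kind of device simulator such a framework could generate; the transition table below is written by hand for an imaginary lamp, whereas DogSim derives the equivalent table from the UML 2.0 state diagrams in the ontology:

```python
class DeviceSimulator:
    """Finite-state device simulator driven by a transition table
    mapping (state, command) -> next state."""

    def __init__(self, initial_state: str, transitions: dict):
        self.state = initial_state
        self.transitions = transitions

    def send(self, command: str) -> str:
        """Apply a command; unknown commands leave the state unchanged."""
        self.state = self.transitions.get((self.state, command), self.state)
        return self.state

# Hand-written table for an illustrative lamp device.
lamp = DeviceSimulator("off", {("off", "switch_on"): "on",
                               ("on", "switch_off"): "off"})
assert lamp.send("switch_on") == "on"
assert lamp.send("switch_off") == "off"
```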

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394, "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs and junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.