
    A new test framework for communications-critical large scale systems

    None of today’s large scale systems could function without the reliable availability of a varied range of network communications capabilities. Whilst software, hardware and communications technologies have been advancing throughout the past two decades, the methods commonly used by industry for testing large scale systems which incorporate critical communications interfaces have not kept pace. This paper argues for the need for a specifically tailored framework to achieve effective and precise testing of communications-critical large scale systems (CCLSSs). The paper briefly discusses how generic test approaches are leading to inefficient and costly test activities in industry. It then outlines the features of an alternative CCLSS domain-specific test framework and provides an example based on a real case study. The paper concludes with an evaluation of the benefits observed during the case study and an outline of the available evidence that such benefits can be realized with other comparable systems.

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure the delivery of high quality software to the end user. To ensure high quality software, it is required to test the software. Testing ensures that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues place demands on the effort, time and cost of testing. Different techniques and methodologies have been proposed for taking care of these issues. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this research paper, we present a survey of GA approaches for addressing the various issues encountered during software testing. (13 pages)
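
    A minimal sketch, in Python, of the kind of GA-based test generation the survey covers: candidate inputs evolve under a branch-distance fitness until a target branch is reached. The program under test, the fitness definition and all parameters below are assumptions made for this illustration, not taken from the paper.

    import random

    def program_under_test(x, y):
        # Hypothetical branch we would like some test input to reach.
        return "target" if x * 2 == y and x > 10 else "other"

    def fitness(individual):
        # Branch-distance style fitness: smaller is better, 0 means the branch is reached.
        x, y = individual
        return abs(x * 2 - y) + max(0, 11 - x)

    def evolve(pop_size=50, generations=100, mutation_rate=0.2):
        population = [(random.randint(-100, 100), random.randint(-100, 100))
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness)
            if fitness(population[0]) == 0:          # target branch covered
                return population[0]
            survivors = population[: pop_size // 2]  # elitist selection
            children = []
            while len(survivors) + len(children) < pop_size:
                (x1, y1), (x2, y2) = random.sample(survivors, 2)
                child = (x1, y2)                     # single-point crossover
                if random.random() < mutation_rate:
                    child = (child[0] + random.randint(-5, 5),
                             child[1] + random.randint(-5, 5))
                children.append(child)
            population = survivors + children
        return min(population, key=fitness)

    best = evolve()
    print("best input:", best, "->", program_under_test(*best))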

    Improving regression testing transparency and efficiency with history-based prioritization – an industrial case study

    Background: History-based regression testing was proposed as a basis for automating regression test selection, for the purpose of improving transparency and test efficiency at the function test level in a large scale software development organization. Aim: The study aims at investigating the current manual regression testing process as well as adopting, implementing and evaluating the effect of the proposed method. Method: A case study was launched, including identification of important factors for prioritization and selection of test cases, implementation of the method, and a quantitative and qualitative evaluation. Results: 10 different factors, of which two are history-based, are identified as important for selection. Most of the information needed is available in the test management and error reporting systems, while some is embedded in the process. Transparency is increased through a semi-automated method. Our quantitative evaluation indicates a possibility to improve efficiency, while the qualitative evaluation supports the general principles of history-based testing but suggests changes in implementation details.
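
    A minimal sketch, in Python, of history-based prioritization in the spirit of the method studied here: test cases are ranked by a weighted combination of recent failure rate and time since last execution. The two factors, their weights and the data layout are assumptions for illustration; the study itself identifies ten factors, of which two are history-based.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        name: str
        recent_verdicts: list = field(default_factory=list)  # True = failed in that run
        cycles_since_last_run: int = 0

    def priority(tc, fail_weight=0.7, staleness_weight=0.3):
        # Higher score = schedule earlier in the regression run.
        fail_rate = (sum(tc.recent_verdicts) / len(tc.recent_verdicts)
                     if tc.recent_verdicts else 0.0)
        staleness = min(tc.cycles_since_last_run / 10.0, 1.0)
        return fail_weight * fail_rate + staleness_weight * staleness

    def prioritize(test_cases):
        return sorted(test_cases, key=priority, reverse=True)

    suite = [
        TestCase("login",   recent_verdicts=[True, False, True], cycles_since_last_run=1),
        TestCase("billing", recent_verdicts=[False, False],      cycles_since_last_run=8),
        TestCase("search",  recent_verdicts=[],                  cycles_since_last_run=12),
    ]
    for tc in prioritize(suite):
        print(f"{tc.name}: {priority(tc):.2f}")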

    Dynamic Analysis can be Improved with Automatic Test Suite Refactoring

    Context: Developers design test suites to automatically verify that software meets its expected behaviors. Many dynamic analysis techniques rely on the exploitation of execution traces from test cases. However, in practice, there is only one trace that results from the execution of one manually-written test case. Objective: In this paper, we propose a new technique of test suite refactoring, called B-Refactoring. The idea behind B-Refactoring is to split a test case into small test fragments, each of which covers a simpler part of the control flow and thus provides better support for dynamic analysis. Method: For a given dynamic analysis technique, our test suite refactoring approach monitors the execution of test cases and identifies small test cases without loss of the test ability. We apply B-Refactoring to assist two existing analysis tasks: automatic repair of if-statement bugs and automatic analysis of exception contracts. Results: Experimental results show that test suite refactoring can effectively simplify the execution traces of the test suite. Three real-world bugs that could previously not be fixed with the original test suite are fixed after applying B-Refactoring; meanwhile, exception contracts are better verified by applying B-Refactoring to the original test suites. Conclusions: We conclude that applying B-Refactoring can effectively improve the purity of test cases, and that existing dynamic analysis tasks can be enhanced by test suite refactoring.
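
    A minimal sketch, in Python, of the splitting idea behind B-Refactoring: one monolithic test is cut into fragments so that each fragment exercises a simpler slice of the control flow. The splitting rule used here (one fragment per top-level assert, with the preceding setup statements repeated) and the toy test are assumptions for illustration; the paper's tool targets Java test suites and validates the split by monitoring execution traces.

    import ast
    import copy
    import textwrap

    SOURCE = textwrap.dedent("""
        def test_account():
            acc = Account(100)
            acc.deposit(50)
            assert acc.balance == 150
            acc.withdraw(200)
            assert acc.balance == 150
        """)

    def split_at_asserts(source):
        # Yield one fragment per assert statement, repeating the setup that precedes it.
        func = ast.parse(source).body[0]
        setup, fragments = [], []
        for stmt in func.body:
            if isinstance(stmt, ast.Assert):
                fragments.append(setup + [stmt])
            else:
                setup.append(stmt)
        for i, body in enumerate(fragments):
            frag = copy.deepcopy(func)
            frag.name = f"{func.name}_fragment_{i}"
            frag.body = body
            yield ast.unparse(ast.fix_missing_locations(frag))

    # The source is only refactored as text here, never executed.
    for fragment in split_at_asserts(SOURCE):
        print(fragment, end="\n\n")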

    SoC regression strategy development

    The objective of the hardware verification process is to ensure that the design does not contain any functional errors. Verifying the correct functionality of a large System-on-Chip (SoC) is a co-design process that is performed by running immature software on immature hardware. Among the key objectives is to ensure the completion of the design before proceeding to fabrication. Verification is performed using a mix of software simulations that imitate the hardware functions and emulations executed on reconfigurable hardware. Both techniques are time-consuming: software simulation may run at a billionth of the speed of the targeted system, and emulation thousands of times slower. A good verification strategy reduces the time to market without compromising the testing coverage. This thesis compares regression verification strategies for a large SoC project, including different techniques of test case selection and test case prioritization that have been researched in software projects. No single strategy performs well for an SoC throughout the whole development cycle. In the early stages of development, time-based test case prioritization provides the fastest convergence. Later, history-based test case prioritization and risk-based test case selection gave a good balance between coverage, error detection and execution time, and provided a foundation for predicting the time to completion.
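
    A minimal sketch, in Python, of the kind of trade-off such a strategy has to make: selecting regression tests under a nightly simulation-time budget using a simple risk score per test. The risk model (recent failures combined with churn in the covered design area), the runtimes and the test names are assumptions for illustration, not figures from the thesis.

    from dataclasses import dataclass

    @dataclass
    class RegressionTest:
        name: str
        runtime_hours: float        # simulation or emulation cost
        recent_failures: int        # history signal
        covered_area_churn: float   # 0..1, how much the exercised RTL changed

    def risk(t):
        return (1 + t.recent_failures) * t.covered_area_churn

    def select_within_budget(tests, budget_hours):
        # Greedy selection: best risk-per-hour first, until the budget is spent.
        chosen, spent = [], 0.0
        for t in sorted(tests, key=lambda t: risk(t) / t.runtime_hours, reverse=True):
            if spent + t.runtime_hours <= budget_hours:
                chosen.append(t)
                spent += t.runtime_hours
        return chosen

    suite = [
        RegressionTest("uart_smoke", 0.5, recent_failures=2, covered_area_churn=0.9),
        RegressionTest("ddr_full",   6.0, recent_failures=0, covered_area_churn=0.1),
        RegressionTest("pcie_dma",   2.0, recent_failures=1, covered_area_churn=0.6),
    ]
    for t in select_within_budget(suite, budget_hours=4.0):
        print(t.name)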

    Search based software engineering: Trends, techniques and applications

    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.
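
    A minimal sketch, in Python, of the SBSE idea as described above: an SE decision is encoded as a search problem and handed to a generic optimiser. Here a random-restart hill climber balances two competing objectives (delivered value versus cost) for a hypothetical release-planning instance; the problem data, the penalty scheme and the parameters are assumptions for illustration only.

    import random

    VALUE = [9, 4, 7, 2, 8, 5]   # stakeholder value of each candidate feature
    COST  = [6, 2, 5, 1, 7, 3]   # implementation cost of each candidate feature
    BUDGET = 12

    def fitness(selection):
        cost = sum(c for c, chosen in zip(COST, selection) if chosen)
        if cost > BUDGET:
            return -cost                         # penalise infeasible plans
        return sum(v for v, chosen in zip(VALUE, selection) if chosen)

    def hill_climb(restarts=20, steps=200):
        best = None
        for _ in range(restarts):
            current = [random.random() < 0.5 for _ in VALUE]
            for _ in range(steps):
                neighbour = current.copy()
                i = random.randrange(len(neighbour))
                neighbour[i] = not neighbour[i]  # flip one feature in or out
                if fitness(neighbour) >= fitness(current):
                    current = neighbour
            if best is None or fitness(current) > fitness(best):
                best = current
        return best

    plan = hill_climb()
    print("selected features:", [i for i, chosen in enumerate(plan) if chosen],
          "fitness:", fitness(plan))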

    Industrially Applicable System Regression Test Prioritization in Production Automation

    When changes are performed on an automated production system (aPS), new faults can be accidentally introduced into the system; such faults are called regressions. A common method for finding these faults is regression testing. In most cases, this regression testing process is performed under high time pressure and on-site in a very uncomfortable environment. Until now, there has been no automated support for finding and prioritizing system test cases for the fully integrated aPS that are suitable for finding regressions. Thus, the testing technician has to rely on personal intuition and experience, possibly choosing an inappropriate order of test cases and finding regressions only at a very late stage of the test run. Using a suitable prioritization, this iterative process of finding and fixing regressions can be streamlined, and a lot of time can be saved by executing test cases that are likely to identify new regressions earlier. Thus, an approach is presented in this paper that uses runtime data acquired from past test executions and performs change identification and impact analysis to prioritize test cases that have a high probability of unveiling regressions caused by side effects of a system change. The approach was developed in cooperation with reputable industrial partners active in the field of aPS engineering, ensuring a development in line with industrial requirements. An industrial case study and an expert evaluation were performed, showing promising results. (13 pages; https://ieeexplore.ieee.org/abstract/document/8320514)
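
    A minimal sketch, in Python, of the prioritization idea described above: system tests are ranked by how strongly their previously recorded execution traces overlap with the components flagged by change identification. The trace format, the component names and the single-factor impact score are assumptions for illustration; the paper's approach combines further impact information from past runtime data.

    # Test name -> set of aPS components exercised in the last recorded run.
    TRACES = {
        "test_conveyor_start": {"conveyor_ctrl", "safety_plc"},
        "test_pick_and_place": {"robot_arm", "gripper", "conveyor_ctrl"},
        "test_emergency_stop": {"safety_plc"},
    }

    def prioritize(changed_components, traces=TRACES):
        # Impact = share of a test's exercised components that were changed.
        def impact(test):
            exercised = traces[test]
            return len(exercised & changed_components) / len(exercised)
        return sorted(traces, key=impact, reverse=True)

    changed = {"conveyor_ctrl"}        # output of change identification
    for rank, test in enumerate(prioritize(changed), start=1):
        print(rank, test)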