3,922 research outputs found

    FAST Approaches to Scalable Similarity-based Test Case Prioritization

    Many test case prioritization criteria have been proposed for speeding up fault detection. Among them, similarity-based approaches give priority to the test cases that are most dissimilar from those already selected. However, the proposed criteria do not scale to the test suites of many thousands, or even millions, of test cases found in modern industrial systems, so simple heuristics are used instead. We introduce the FAST family of test case prioritization techniques, which radically changes this landscape by borrowing algorithms commonly used in the big data domain to find similar items. FAST techniques provide scalable similarity-based test case prioritization in both white-box and black-box fashion. Results from experiments on real-world C and Java subjects show that the fastest members of the family outperform other black-box approaches in efficiency with no significant impact on effectiveness, and also outperform white-box approaches, including greedy ones, if preparation time is not counted. A simulation study of scalability shows that one FAST technique can prioritize a million test cases in less than 20 minutes.
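    The core idea can be illustrated with a minimal sketch: represent each test case as a set of tokens, approximate pairwise Jaccard similarity with MinHash signatures (a standard big-data technique for finding similar items), and greedily pick the remaining test that is least similar to those already prioritized. This is an illustrative sketch under those assumptions, not the authors' FAST implementation.

```python
# Illustrative MinHash-based, similarity-driven prioritization (not FAST itself).
import random
import zlib

NUM_HASHES = 64
random.seed(0)
P = 2_147_483_647  # large prime for the hypothetical universal hash family
HASHES = [(random.randrange(1, P), random.randrange(0, P)) for _ in range(NUM_HASHES)]

def minhash(tokens):
    """MinHash signature of a set of string tokens."""
    return [min((a * zlib.crc32(t.encode()) + b) % P for t in tokens)
            for a, b in HASHES]

def est_similarity(sig1, sig2):
    """Estimated Jaccard similarity = fraction of matching signature slots."""
    return sum(x == y for x, y in zip(sig1, sig2)) / NUM_HASHES

def prioritize(test_suite):
    """Greedy dissimilarity-based ordering over {test_id: token_set}."""
    sigs = {tid: minhash(toks) for tid, toks in test_suite.items()}
    remaining = set(test_suite)
    order = [remaining.pop()]                  # arbitrary first pick
    while remaining:
        # next test = the one least similar to anything already selected
        nxt = min(remaining,
                  key=lambda t: max(est_similarity(sigs[t], sigs[s]) for s in order))
        remaining.remove(nxt)
        order.append(nxt)
    return order

suite = {
    "t_open_read":  {"open", "read", "close"},
    "t_open_write": {"open", "write", "close"},
    "t_network":    {"connect", "send", "recv"},
}
print(prioritize(suite))
```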

    Visualizing test diversity to support test optimisation

    Diversity has been used as an effective criterion to optimise test suites for cost-effective testing. In particular, diversity-based (also referred to as similarity-based) techniques have the benefit of being generic and applicable across different Systems Under Test (SUT), and have been used to automatically select or prioritise large sets of test cases. However, it is a challenge to feed diversity information back to developers and testers, since the results are typically many-dimensional. Furthermore, the generality of diversity-based approaches makes it harder to choose when and where to apply them. In this paper we address these challenges by investigating: i) the trade-offs in using different sources of diversity (e.g., diversity of test requirements or test scripts) to optimise large test suites, and ii) how visualisation of test diversity data can assist testers in test optimisation and improvement. We perform a case study on three industrial projects and present quantitative results on the fault detection capabilities and redundancy levels of different sets of test cases. Our key result is that test similarity maps, based on pair-wise diversity calculations, helped industrial practitioners identify issues with their test repositories and decide on actions for improvement. We conclude that visualisation of diversity information can assist testers in their maintenance and optimisation activities.
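    As an illustration of the kind of similarity map described above, the sketch below computes pairwise diversity (1 minus Jaccard similarity) over token sets of test scripts and projects it to 2D with multidimensional scaling; the token-set representation and the scikit-learn/matplotlib tooling are assumptions, not necessarily the paper's setup.

```python
# Illustrative test similarity map: pairwise Jaccard-based diversity projected to 2D.
import numpy as np
from sklearn.manifold import MDS
import matplotlib.pyplot as plt

tests = {
    "login_ok":       {"open_app", "enter_user", "enter_pass", "assert_home"},
    "login_bad_pass": {"open_app", "enter_user", "enter_pass", "assert_error"},
    "checkout":       {"add_item", "open_cart", "pay", "assert_receipt"},
}
names = list(tests)
n = len(names)

# distance = 1 - Jaccard similarity of the tests' step/token sets
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        a, b = tests[names[i]], tests[names[j]]
        dist[i, j] = 1.0 - len(a & b) / len(a | b)

# project the distance matrix to 2D so similar tests land close together
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), name in zip(coords, names):
    plt.annotate(name, (x, y))
plt.title("Test similarity map (illustrative)")
plt.savefig("similarity_map.png")
```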

    Efficient 2D-3D Matching for Multi-Camera Visual Localization

    Visual localization, i.e., determining the position and orientation of a vehicle with respect to a map, is a key problem in autonomous driving. We present a multi-camera visual inertial localization algorithm for large-scale environments. To efficiently and effectively match features against a pre-built global 3D map, we propose a prioritized feature matching scheme for multi-camera systems. In contrast to existing works, which were designed for monocular cameras, we (1) tailor the prioritization function to the multi-camera setup and (2) run feature matching and pose estimation in parallel. This significantly accelerates the matching and pose estimation stages and allows us to dynamically adapt the matching effort to the surrounding environment. In addition, we show how pose priors can be integrated into the localization system to increase efficiency and robustness. Finally, we extend our algorithm by fusing the absolute pose estimates with motion estimates from a multi-camera visual inertial odometry (VIO) pipeline. The result is a system that provides reliable and drift-free pose estimation. Extensive experiments show that our localization runs fast and robustly under varying conditions, and that our extended algorithm enables reliable real-time pose estimation. Comment: 7 pages, 5 figures
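    A conceptual sketch of prioritized 2D-3D matching with early termination is given below; the priority values, the descriptor ratio test and the stop threshold are illustrative assumptions, not the paper's actual prioritization function.

```python
# Conceptual sketch of prioritized 2D-3D matching with early termination.
import numpy as np

def match_with_priorities(features_2d, map_descriptors, map_points,
                          priorities, max_matches=100, ratio=0.8):
    """
    features_2d:     (N, D) query descriptors from all cameras
    map_descriptors: (M, D) descriptors of the pre-built 3D map
    map_points:      (M, 3) corresponding 3D points
    priorities:      (N,)   higher = matched earlier
    Returns {2D feature index: 3D point}, stopping once max_matches
    correspondences have been found (enough for robust pose estimation).
    """
    matches = {}
    for idx in np.argsort(-priorities):           # most promising features first
        d = features_2d[idx]
        dists = np.linalg.norm(map_descriptors - d, axis=1)
        best, second = np.partition(dists, 1)[:2]
        if best < ratio * second:                  # Lowe-style ratio test
            matches[idx] = map_points[np.argmin(dists)]
        if len(matches) >= max_matches:            # early termination
            break
    return matches

rng = np.random.default_rng(0)
m = match_with_priorities(rng.normal(size=(500, 32)), rng.normal(size=(2000, 32)),
                          rng.normal(size=(2000, 3)), rng.random(500))
print(len(m), "correspondences found")
```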

    LTM: Scalable and Black-box Similarity-based Test Suite Minimization based on Language Models

    Test suites tend to grow as software evolves, often making it infeasible to execute all test cases within the allocated testing budget, especially for large software systems. Test suite minimization (TSM) is therefore employed to improve the efficiency of software testing by removing redundant test cases, thus reducing testing time and resources while maintaining the fault detection capability of the test suite. Most TSM approaches rely on code coverage (white-box) or model-based features, which are not always available to test engineers. Recent TSM approaches that rely only on test code (black-box), such as ATM and FAST-R, have been proposed. To address scalability, we propose LTM (Language model-based Test suite Minimization), a novel, scalable, black-box similarity-based TSM approach based on large language models (LLMs). To support similarity measurement, we investigated three different pre-trained language models, CodeBERT, GraphCodeBERT, and UniXcoder, to extract embeddings of test code, on which we computed two similarity measures: Cosine Similarity and Euclidean Distance. Our goal is to find similarity measures that are not only computationally more efficient but also better guide a Genetic Algorithm (GA), thus reducing the overall search time. Experimental results, under a 50% minimization budget, show that the best configuration of LTM (UniXcoder with Cosine Similarity) outperformed the best two configurations of ATM in three key respects: (a) achieving a greater saving rate of testing time (40.38% versus 38.06%, on average); (b) attaining a significantly higher fault detection rate (0.84 versus 0.81, on average); and, more importantly, (c) minimizing test suites much faster (26.73 minutes versus 72.75 minutes, on average) in terms of both preparation time (up to two orders of magnitude faster) and search time (one order of magnitude faster).
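    The similarity-measurement step can be sketched as follows: embed each test method with a pre-trained code model and compare embeddings with cosine similarity. The mean pooling of token embeddings and the specific checkpoint below are plausible assumptions, not necessarily LTM's exact configuration.

```python
# Sketch of test-code embedding + cosine similarity with a pre-trained code model.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "microsoft/unixcoder-base"   # one of the models studied (CodeBERT, GraphCodeBERT, UniXcoder)
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
model.eval()

def embed(test_code: str) -> torch.Tensor:
    """Return a single embedding vector for a test method's source code."""
    inputs = tok(test_code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)             # mean-pool tokens -> (dim,)

t1 = "void testAdd() { assertEquals(4, calc.add(2, 2)); }"
t2 = "void testSub() { assertEquals(0, calc.sub(2, 2)); }"
sim = torch.nn.functional.cosine_similarity(embed(t1), embed(t2), dim=0)
print(f"cosine similarity: {sim.item():.3f}")
```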

    The Progress, Challenges, and Perspectives of Directed Greybox Fuzzing

    Most greybox fuzzing tools are coverage-guided, as code coverage is strongly correlated with bug coverage. However, since most covered code does not contain bugs, blindly extending code coverage is inefficient, especially for corner cases. Unlike coverage-guided greybox fuzzers, which extend code coverage in an undirected manner, a directed greybox fuzzer spends most of its time budget on reaching specific targets (e.g., bug-prone zones) without wasting resources stressing unrelated parts. Directed greybox fuzzing (DGF) is thus particularly suitable for scenarios such as patch testing, bug reproduction, and specialist bug hunting. This paper studies DGF from a broader view, taking into account not only the location-directed type that targets specific code parts, but also the behaviour-directed type that aims to expose abnormal program behaviours. It presents the first in-depth study of DGF, based on an investigation of 32 state-of-the-art fuzzers closely related to DGF (78% published after 2019). A thorough assessment of the collected tools is conducted to systematise recent progress in the field. Finally, the paper summarises the challenges and provides perspectives for future research. Comment: 16 pages, 4 figures
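    The location-directed idea can be sketched as a distance-based power schedule: seeds whose execution traces are closer to the target receive more mutation energy. The distance metric and the exponential schedule below are illustrative, not those of any specific fuzzer surveyed.

```python
# Illustrative distance-based power schedule for directed greybox fuzzing.
import math

def seed_distance(trace, target_distances):
    """Mean static distance-to-target over the basic blocks a seed covered."""
    ds = [target_distances[b] for b in trace if b in target_distances]
    return sum(ds) / len(ds) if ds else float("inf")

def assign_energy(seeds, target_distances, base_energy=16, max_energy=1024):
    """Map each seed to a mutation budget: closer to the target => more energy."""
    dists = {s: seed_distance(t, target_distances) for s, t in seeds.items()}
    finite = [d for d in dists.values() if d != float("inf")]
    if not finite:
        return {s: base_energy for s in seeds}
    lo, hi = min(finite), max(finite)
    energy = {}
    for s, d in dists.items():
        if d == float("inf"):
            energy[s] = base_energy            # never reached code near the target
        else:
            # normalise so the closest seed gets max_energy, the farthest gets base_energy
            norm = 0.0 if hi == lo else (d - lo) / (hi - lo)
            energy[s] = int(max_energy * math.pow(base_energy / max_energy, norm))
    return energy

# seeds map to the basic blocks their execution covered; distances would come
# from static call/control-flow-graph analysis (values here are made up)
seeds = {"seed_a": ["bb1", "bb7", "bb9"], "seed_b": ["bb1", "bb2"]}
target_distances = {"bb7": 2.0, "bb9": 1.0, "bb2": 8.0}
print(assign_energy(seeds, target_distances))
```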

    Scuba: Scalable kernel-based gene prioritization

    Background: The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. Results: We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of its scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Conclusions: Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when the input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba
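    The multiple-kernel idea can be sketched as follows: each data source yields a gene-by-gene kernel matrix, a weighted combination of the kernels is formed, and candidate genes are scored by their aggregate similarity to known disease genes. The fixed weights and the simple averaging score below stand in for Scuba's semi-supervised margin-distribution optimisation and are purely illustrative.

```python
# Illustrative weighted multiple-kernel combination for gene prioritization.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 6
genes = [f"g{i}" for i in range(n_genes)]

def random_psd_kernel(n):
    """Placeholder kernel from one data source (e.g. expression, PPI network)."""
    x = rng.normal(size=(n, 4))
    return x @ x.T

kernels = [random_psd_kernel(n_genes) for _ in range(3)]   # three data sources
weights = np.array([0.5, 0.3, 0.2])                        # learned in Scuba, fixed here
combined = sum(w * k for w, k in zip(weights, kernels))

known_disease = [0, 2]        # indices of known disease genes (seed set)
candidates = [i for i in range(n_genes) if i not in known_disease]

# score = mean combined-kernel similarity to the known disease genes
scores = {genes[i]: combined[i, known_disease].mean() for i in candidates}
for g, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{g}: {s:.2f}")
```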

    Evaluation of Software Product Line Test Case Prioritization Technique


    SRPTackle: A semi-automated requirements prioritisation technique for scalable requirements of software system projects

    Context: Requirement prioritisation (RP) is often used to select the most important system requirements as perceived by system stakeholders. RP plays a vital role in ensuring the development of a quality system within defined constraints. However, a closer look at existing RP techniques reveals that they suffer from key challenges such as limited scalability, lack of quantification, insufficient prioritisation of participating stakeholders, over-reliance on professional expertise, lack of automation and excessive time consumption. These challenges motivate the present research. Objective: This study proposes a new semi-automated, scalable prioritisation technique called SRPTackle to address these challenges. Method: SRPTackle provides a semi-automated process based on a combination of a requirement priority value formulation function using a multi-criteria decision-making method (the weighted sum model), clustering algorithms (K-means and K-means++) and a binary search tree, to minimise the need for expert involvement and increase efficiency. The effectiveness of SRPTackle is assessed in seven experiments using a benchmark dataset from a large real software project. Results: The experiments show that SRPTackle achieves minimum and maximum accuracy of 93.0% and 94.65%, respectively, better than alternative techniques. The findings also demonstrate the capability of SRPTackle to prioritise large-scale requirements with reduced time consumption and its effectiveness in addressing the key challenges compared with other techniques. Conclusion: Given its time effectiveness, ability to scale to numerous requirements, automation, and clear implementation guidelines, project managers can use SRPTackle to perform RP for large-scale requirements without extensive effort (e.g., tedious manual processes, expert involvement and heavy time workload).
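    Two of the ingredients described above, the weighted-sum priority value and K-means clustering into priority groups, can be sketched as follows; the stakeholder weights, rating scale and number of clusters are illustrative assumptions, not SRPTackle's calibrated values.

```python
# Illustrative weighted-sum priority values clustered into priority groups.
import numpy as np
from sklearn.cluster import KMeans

# rows = requirements, columns = stakeholder ratings (e.g. importance on 1-9)
ratings = np.array([
    [9, 8, 7],
    [3, 4, 2],
    [8, 9, 9],
    [5, 5, 6],
    [2, 1, 3],
])
stakeholder_weights = np.array([0.5, 0.3, 0.2])   # assumed, sums to 1

# weighted sum model: priority value_i = sum_j weight_j * rating_ij
priority_values = ratings @ stakeholder_weights

# group requirements into three priority clusters (e.g. high / medium / low)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    priority_values.reshape(-1, 1))

for i, (pv, lab) in enumerate(zip(priority_values, labels)):
    print(f"R{i+1}: priority value = {pv:.2f}, cluster = {lab}")
```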