Automatic Generation of Test Cases based on Bug Reports: a Feasibility Study with Large Language Models
Software testing is a core discipline in software engineering where a large
array of research results has been produced, notably in the area of automatic
test generation. Because existing approaches produce either test cases that
can be qualified as simple (e.g. unit tests) or test cases that require precise
specifications, most testing procedures still rely on test cases written by
humans to form test suites. Such test suites, however, are incomplete: they
cover only parts of the project, or they are produced after the bug is fixed.
Yet, several research challenges, such as automatic program repair, and
practitioner processes, build on the assumption that available test suites are
sufficient. There is thus a need to break existing barriers in automatic test
case generation. While prior work largely focused on random unit testing
inputs, we propose to consider generating test cases that realistically
represent complex user execution scenarios, which reveal buggy behaviour. Such
scenarios are informally described in bug reports, which should therefore be
considered as natural inputs for specifying bug-triggering test cases. In this
work, we investigate the feasibility of performing this generation by
leveraging large language models (LLMs) and using bug reports as inputs. Our
experiments include the use of ChatGPT, as an online service, as well as
CodeGPT, a code-related pre-trained LLM that was fine-tuned for our task.
Overall, we experimentally show that bug reports associated with up to 50% of
Defects4J bugs can prompt ChatGPT to generate an executable test case. We show
that even new bug reports can indeed be used as input for generating executable
test cases. Finally, we report experimental results which confirm that
LLM-generated test cases are immediately useful in software engineering tasks
such as fault localization and patch validation in automated program
repair.
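The pipeline this abstract describes (bug report in, executable test case out) can be illustrated with a minimal prompt-construction sketch; the `BugReport` fields, `build_prompt`, and the example report are all hypothetical illustrations, not taken from the paper's artifact:

```python
# Hypothetical sketch: turning a bug report into an LLM prompt that asks for a
# bug-triggering JUnit test. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BugReport:
    title: str
    description: str
    project: str

def build_prompt(report: BugReport) -> str:
    """Compose a prompt asking the model for an executable, bug-revealing test."""
    return (
        f"Project: {report.project}\n"
        f"Bug report title: {report.title}\n"
        f"Bug report description:\n{report.description}\n\n"
        "Write a JUnit test case that reproduces the behaviour described above. "
        "The test must compile and fail on the buggy version of the project."
    )

report = BugReport(
    title="NullPointerException in TimeSeries.createCopy",
    description="Calling createCopy(1, 0) raises an NPE instead of "
                "IllegalArgumentException.",
    project="JFreeChart (a Defects4J subject; example invented here)",
)
prompt = build_prompt(report)
print(prompt)
```

An LLM such as ChatGPT would then be asked to complete such a prompt, and the returned test would be compiled and executed against the buggy program version to check that it triggers the reported failure.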
CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model
Code Large Language Models (Code LLMs) have gained significant attention in
the industry due to their wide applications in the full lifecycle of software
engineering. However, the effectiveness of existing models in understanding
non-English inputs for multi-lingual code-related tasks remains understudied.
This paper introduces CodeFuse-13B, an open-source pre-trained code
LLM. It is specifically designed for code-related tasks with both English and
Chinese prompts and supports over 40 programming languages. CodeFuse achieves
its effectiveness by utilizing a high-quality pre-training dataset that is
carefully filtered by program analyzers and optimized during the training
process. Extensive experiments are conducted using real-world usage scenarios,
the industry-standard benchmark HumanEval-x, and the specially designed
CodeFuseEval for Chinese prompts. To assess the effectiveness of CodeFuse, we
actively collected valuable human feedback from Ant Group's software
development process where CodeFuse has been successfully deployed. The results
demonstrate that CodeFuse-13B achieves a HumanEval pass@1 score of 37.10%,
positioning it as one of the top multi-lingual code LLMs among models of
similar parameter size. In practical scenarios, such as code generation, code
translation, code comments, and test case generation, CodeFuse performs better
than other models when confronted with Chinese prompts. Comment: 10 pages, with 2 pages of references
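For context, the pass@1 figure quoted above follows the standard HumanEval pass@k estimator, which is not specific to CodeFuse; a minimal sketch:

```python
# Standard unbiased pass@k estimator used by HumanEval-style benchmarks:
# pass@k = 1 - C(n-c, k) / C(n, k), averaged over problems, where n samples
# are drawn per problem and c of them pass the tests.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem (n samples, c correct)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 the estimator reduces to the fraction of correct samples:
assert abs(pass_at_k(10, 4, 1) - 0.4) < 1e-12
```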
A model-based approach for the specification and refinement of streaming applications
Embedded systems can be found in a wide range of applications. Depending on the application, embedded systems must meet a wide range of constraints. Thus, designing and programming embedded systems is a challenging task. Here, model-based design flows can be a solution. This thesis proposes novel approaches for the specification and refinement of streaming applications. To this end, it focuses on dataflow models. As key result, the proposed dataflow model provides for a seamless model-based design flow from system level to the instruction/logic level for a wide range of streaming applications.
Racing to hardware-validated simulation
Processor simulators rely on detailed timing models of the processor pipeline to evaluate performance. The diversity in real-world processor designs mandates building flexible simulators that expose parts of the underlying model to the user in the form of configurable parameters. Consequently, the accuracy of modeling a real processor relies on both the accuracy of the pipeline model itself and the accuracy of adjusting the configuration parameters according to the modeled processor. Unfortunately, processor vendors publicly disclose only a subset of their design decisions, raising the probability of introducing specification inaccuracies when modeling these processors. Inaccurately tuning model parameters causes the simulated processor to deviate from the actual one. In the worst case, using improper parameters may lead to imbalanced pipeline models that compromise the simulation output. Therefore, simulation models should be hardware-validated before being used for performance evaluation. As processors increase in complexity and diversity, validating a simulator model against real hardware becomes increasingly challenging and time-consuming. In this work, we propose a methodology for validating simulation models against real hardware. We create a framework that relies on micro-benchmarks to collect performance statistics on real hardware, and machine learning-based algorithms to fine-tune the unknown parameters based on the accumulated statistics. We overhaul the Sniper simulator to support the ARM AArch64 instruction-set architecture (ISA), and introduce two new timing models for ARM-based in-order and out-of-order cores. Using our proposed simulator validation framework, we tune the in-order and out-of-order models to match the performance of real-world implementations of the Cortex-A53 and Cortex-A72 cores with an average error of 7% and 15%, respectively, across a set of SPEC CPU2017 benchmarks.
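The tuning loop described above (collect statistics on real hardware, then search for parameter values that reproduce them) can be sketched with a toy stand-in for the simulator; `simulate_cpi`, the parameter grid, and the random search are illustrative assumptions, not Sniper's actual interface or the paper's learning algorithm:

```python
# Toy sketch of a simulator-tuning loop: search unknown model parameters so
# that a simulated statistic (CPI) matches a value measured on hardware.
import random

def simulate_cpi(rob_size: int, issue_width: int) -> float:
    # Stand-in for running a micro-benchmark through a timing model.
    return 1.0 + 64.0 / rob_size + 2.0 / issue_width

# Pretend this CPI was measured on a real core by a micro-benchmark.
MEASURED_CPI = simulate_cpi(128, 3)

def tune(trials: int = 2000, seed: int = 0):
    """Random search over the parameter space, keeping the best match."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(trials):
        params = (rng.choice([32, 64, 128, 192, 256]), rng.randint(1, 8))
        err = abs(simulate_cpi(*params) - MEASURED_CPI)
        if err < best_err:
            best, best_err = params, err
    return best, best_err

params, err = tune()
print(params, err)
```

A real framework would replace `simulate_cpi` with full simulator runs over a micro-benchmark suite and the random search with a proper machine-learning-based optimizer.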
Software redundancy: what, where, how
Software systems have become pervasive in everyday life and are the core component of many crucial activities. An inadequate level of reliability may determine the commercial failure of a software product. Still, despite the commitment and the rigorous verification processes employed by developers, software is deployed with faults. To increase the reliability of software systems, researchers have investigated the use of various forms of redundancy. Informally, a software system is redundant when it performs the same functionality through the execution of different elements. Redundancy has been extensively exploited in many software engineering techniques, for example for fault-tolerance and reliability engineering, and in self-adaptive and self-healing programs. Despite the many uses, though, there is no formalization or study of software redundancy to support a proper and effective design of software. Our intuition is that a systematic and formal investigation of software redundancy will lead to more, and more effective, uses of redundancy. This thesis develops this intuition and proposes a set of ways to characterize redundancy qualitatively as well as quantitatively. We first formalize the intuitive notion of redundancy whereby two code fragments are considered redundant when they perform the same functionality through different executions. On the basis of this abstract and general notion, we then develop a practical method to obtain a measure of software redundancy. We prove the effectiveness of our measure by showing that it distinguishes between shallow differences, where apparently different code fragments reduce to the same underlying code, and deep code differences, where the algorithmic nature of the computations differs. We also demonstrate that our measure is useful for developers, since it is a good predictor of the effectiveness of techniques that exploit redundancy.
Besides formalizing the notion of redundancy, we investigate the pervasiveness of redundancy intrinsically found in modern software systems. Intrinsic redundancy is a form of redundancy that occurs as a by-product of modern design and development practices. We have observed that intrinsic redundancy is indeed present in software systems, and that it can be successfully exploited for good purposes. This thesis proposes a technique to automatically identify equivalent method sequences in software systems to help developers assess the presence of intrinsic redundancy. We demonstrate the effectiveness of the technique by showing that it identifies the majority of equivalent method sequences in a system with good precision and performance.
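The core notion (same functionality, different executions) can be illustrated with a toy measure that abstracts each fragment's execution as its sequence of traced line events; this is a deliberate simplification for illustration, not the thesis' actual measure:

```python
# Toy illustration of redundancy: two fragments are redundant when they
# compute the same result through different executions. Here an "execution"
# is abstracted as the sequence of line events observed while tracing.
import sys

def trace_lines(fn, *args):
    """Run fn(*args) and record the line numbers it executes."""
    events = []
    def tracer(frame, event, arg):
        if event == "line":
            events.append(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, events

def rev_slice(xs):   # fragment A
    return xs[::-1]

def rev_loop(xs):    # fragment B: same functionality, different execution
    out = []
    for x in xs:
        out.insert(0, x)
    return out

r1, t1 = trace_lines(rev_slice, [1, 2, 3])
r2, t2 = trace_lines(rev_loop, [1, 2, 3])
assert r1 == r2          # same functionality
print(len(t1), len(t2))  # different executions (different trace lengths)
```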
A case for code-representative microbenchmarks
Microbenchmarks are fundamental in the design of a microarchitecture. They allow rapid evaluation of the system, while incurring little exploration overhead. One key design aspect is the thermal design point (TDP), the maximum sustained power that a system will experience in typical conditions. Designers tend to use hand-coded microbenchmarks to provide an estimation for TDP. In this work, we make the case for a systematic methodology to automatically generate code-representative microbenchmarks that can be used to drive the TDP estimation.
Achieving High Speed CFD simulations: Optimization, Parallelization, and FPGA Acceleration for the unstructured DLR TAU Code
Today, large-scale parallel simulations are fundamental tools for handling complex problems. The number of processors in current computation platforms has recently increased, and it is therefore necessary to optimize application performance and enhance the scalability of massively-parallel systems. In addition, new heterogeneous architectures, which combine conventional processors with specific hardware such as FPGAs to accelerate the most time-consuming functions, are considered a strong alternative for boosting performance.
In this paper, the performance of the DLR TAU code is analyzed and optimized. The improvement of the code efficiency is addressed through three key activities: optimization, parallelization, and hardware acceleration. First, a profiling analysis of the most time-consuming processes of the Reynolds-Averaged Navier-Stokes flow solver on a three-dimensional unstructured mesh is performed. Then, the scalability of the code is studied, and new partitioning algorithms are tested to identify the most suitable ones for the selected applications. Finally, a feasibility study on the application of FPGAs and GPUs for the hardware acceleration of CFD simulations is presented.
Synthesizing Speech Test Cases with Text-to-Speech? An Empirical Study on the False Alarms in Automated Speech Recognition Testing
Recent studies have proposed the use of Text-To-Speech (TTS) systems to
automatically synthesise speech test cases at scale and uncover a large
number of failures in ASR systems. However, the failures uncovered by synthetic
test cases may not reflect the actual performance of an ASR system when it
transcribes human audio, which we refer to as false alarms. Given a failed test
case synthesised from TTS systems, which consists of TTS-generated audio and
the corresponding ground truth text, we feed the human audio stating the same
text to an ASR system. If human audio can be correctly transcribed, an instance
of a false alarm is detected. In this study, we investigate false alarm
occurrences in five popular ASR systems using synthetic audio generated from
four TTS systems and human audio obtained from two commonly used datasets. Our
results show that the fewest false alarms are identified when testing
Deepspeech, and the most when testing Wav2vec2. On average, false alarm rates
range from 21% to 34% across all five ASR systems. Among the four TTS systems
used, Google TTS produces the fewest false alarms (17%) and Espeak TTS the
most (32%). Additionally, we build a false alarm
estimator that flags potential false alarms, which achieves promising results:
a precision of 98.3%, a recall of 96.4%, an accuracy of 98.5%, and an F1 score
of 97.3%. Our study provides insight into the appropriate selection of TTS
systems to generate high-quality speech to test ASR systems. Additionally, a
false alarm estimator can be a way to minimise the impact of false alarms and
help developers choose suitable test inputs when evaluating ASR systems. The
source code used in this paper is publicly available on GitHub at
https://github.com/julianyonghao/FAinASRtest. Comment: 12 pages, Accepted at ISSTA202
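The false-alarm check described above can be sketched directly; `asr_transcribe` is a stub standing in for a real ASR system, and the sample cases are invented for illustration:

```python
# Sketch of the false-alarm check: a TTS-synthesised test case has failed, so
# we replay human audio of the same ground-truth text through the ASR system.
# If that transcription matches the text, the synthetic failure is a false alarm.
def normalise(text: str) -> str:
    return " ".join(text.lower().split())

def is_false_alarm(asr_transcribe, human_audio, ground_truth: str) -> bool:
    """A failed synthetic test is a false alarm if the same sentence,
    spoken by a human, is transcribed correctly."""
    return normalise(asr_transcribe(human_audio)) == normalise(ground_truth)

def false_alarm_rate(failed_cases, asr_transcribe) -> float:
    flags = [is_false_alarm(asr_transcribe, audio, text)
             for audio, text in failed_cases]
    return sum(flags) / len(flags)

# Stub ASR: "audio" is represented directly by its spoken text here.
stub_asr = lambda audio: audio
cases = [("turn the lights off", "turn the lights off"),   # false alarm
         ("set a timer", "set the timer")]                 # genuine failure
print(false_alarm_rate(cases, stub_asr))  # 0.5
```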
Systematic Model-based Design Assurance and Property-based Fault Injection for Safety Critical Digital Systems
With advances in sensing, wireless communications, computing, control, and automation technologies, we are witnessing the rapid uptake of Cyber-Physical Systems (CPS) across many applications, including connected vehicles, healthcare, energy, manufacturing, smart homes, etc. Many of these applications are safety-critical in nature, and they depend on the correct and safe execution of software and hardware that are intrinsically subject to faults. These faults can be design faults (software faults, specification faults, etc.) or physically occurring faults (hardware failures, single-event upsets, etc.). Both types of faults must be addressed during the design and development of these critical systems. Several safety-critical industries have widely adopted Model-Based Engineering paradigms to manage the design assurance processes of these complex CPSs. This thesis studies the application of an IEC 61508-compliant model-based design assurance methodology on a representative safety-critical digital architecture targeted for nuclear power generation facilities. The study presents detailed experiences and results to demonstrate the benefits of model testing in finding design flaws and its relevance to subsequent verification steps in the workflow. Additionally, to study the impact of physical faults on the digital architecture, we develop a novel property-based fault injection method that overcomes a few deficiencies of traditional fault injection methods. The model-based fault injection approach presented here guarantees high efficiency and near-exhaustive input/state/fault space coverage by utilizing formal model checking principles to identify fault activation conditions and prove the fault tolerance features. The fault injection framework facilitates automated integration of fault saboteurs throughout the model to enable exhaustive fault location coverage in the model.
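The property-based idea (enumerate fault activation conditions rather than sample them) can be illustrated on a toy model; a real flow would use a model checker over the design model, whereas this sketch directly enumerates the state and fault space of a two-bit counter:

```python
# Minimal sketch: exhaustively explore the state/fault space of a tiny model
# and report the conditions under which an injected fault is activated,
# i.e. changes observable behaviour. The model and saboteur are invented here.
def next_state(s: int) -> int:          # golden model: 2-bit counter
    return (s + 1) % 4

def saboteur(s: int, stuck_bit: int) -> int:
    return s & ~(1 << stuck_bit)        # stuck-at-0 fault on one state bit

def fault_activation_conditions():
    """Enumerate every (state, fault location) pair and keep those where
    the faulty next state differs from the golden one."""
    activated = []
    for s in range(4):                  # exhaustive state coverage
        for bit in range(2):            # exhaustive fault-location coverage
            golden = next_state(s)
            faulty = next_state(saboteur(s, bit))
            if faulty != golden:
                activated.append((s, bit, faulty))
    return activated

for s, bit, faulty in fault_activation_conditions():
    print(f"state={s}, stuck_bit={bit} -> faulty next state {faulty}")
```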