    What Am I Testing and Where? Comparing Testing Procedures based on Lightweight Requirements Annotations

    [Context] The testing of software-intensive systems is performed in different test stages, each having a large number of test cases. These test cases are commonly derived from requirements. Each test stage exhibits specific demands and constraints with respect to its degree of detail and what can be tested. Therefore, specific test suites are defined for each test stage. In this paper, the focus is on the domain of embedded systems, where typical test stages include Software- and Hardware-in-the-loop. [Objective] Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in terms of structured languages or models that would facilitate relating requirements to test executions. [Method] With our approach, we close the gap between requirements specifications and test executions. Previously, we proposed a lightweight markup language for requirements which provides a set of annotations that can be applied to natural language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be directly related to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another. [Results] We annotate 443 natural language requirements of a driver assistance system using our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers. Based on the annotations, we are able to analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation. [Conclusion] With our approach, the results of several test levels are linked to the requirements, enabling the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and, additionally, can reason about the expressiveness of the applied test stage.
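
    As a loose illustration of the annotation idea described above, the following sketch shows how inline annotations in a requirement might be extracted and checked against the signals and events logged in test executions. The markup syntax, signal names, and data are hypothetical placeholders, not the paper's actual notation.

        import re

        # A natural language requirement carrying hypothetical inline
        # annotations naming the signals/events a covering test must exhibit.
        requirement = (
            "When the driver sets a target speed @signal{target_speed}, "
            "the system shall reach it @signal{ego_speed} within 10 s "
            "@event{speed_reached}."
        )

        def annotated_items(text):
            # Extract the names inside @signal{...} and @event{...} annotations.
            return set(re.findall(r"@(?:signal|event)\{(\w+)\}", text))

        # Signals/events logged during two hypothetical test executions,
        # e.g. one Software-in-the-loop run and one test drive.
        executions = {
            "SiL_run_042": {"target_speed", "ego_speed", "speed_reached"},
            "test_drive_07": {"target_speed", "ego_speed"},
        }

        required = annotated_items(requirement)
        for run, logged in executions.items():
            missing = required - logged
            print(run, "covered" if not missing else f"missing {sorted(missing)}")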

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the more than one billion citizens of India.

    Sensor Analysis, Modeling, and Test for Robust Propulsion System Autonomy

    An approach is presented supporting analysis, modeling, and test validation of operational flight instrumentation (OFI) that facilitates critical functions for the Space Launch System (SLS) main propulsion system (MPS). Certain types of OFI sensors were shown to exhibit highly nonlinear and non-Gaussian noise characteristics during acceptance testing, motivating the development of advanced modeling and simulation (M&S) capability to support algorithm verification and flight certification. Hardware model and algorithm simulation fidelity was informed by a risk scoring metric; redesign of high-risk algorithms using test-validated sensor models significantly improved their expected performance as evaluated using Monte Carlo acceptance sampling methods. Autonomous functions include closed-loop ullage pressure regulation, pressurant leak detection, and fault isolation for automated safing and crew caution and warning (C&W).
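
    As a rough, hedged sketch of the Monte Carlo acceptance sampling idea mentioned above: simulate a sensor model with heavy-tailed (non-Gaussian) noise many times and estimate how often a detection algorithm meets its requirement. The noise model, detection rule, and numbers below are illustrative assumptions, not SLS flight values.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_sensor(true_pressure, n_samples):
            # Hypothetical heavy-tailed (Student-t) noise standing in for the
            # non-Gaussian behavior observed during acceptance testing.
            return true_pressure + 0.5 * rng.standard_t(df=3, size=n_samples)

        def leak_detected(readings, nominal=100.0, threshold=2.0):
            # Placeholder rule: flag a leak if the windowed mean reading
            # drops more than `threshold` below the nominal ullage pressure.
            return readings.mean() < nominal - threshold

        # Monte Carlo acceptance sampling: estimate the detection probability
        # for a true leak that lowers pressure by 3 units.
        n_trials = 10_000
        hits = sum(
            leak_detected(simulate_sensor(true_pressure=97.0, n_samples=50))
            for _ in range(n_trials)
        )
        print(f"estimated detection rate: {hits / n_trials:.3f}")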

    Influence of confirmation biases of developers on software quality: an empirical study

    The thought processes of people have a significant impact on software quality, as software is designed, developed, and tested by people. Cognitive biases, which are defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias, which is defined as the tendency of people to seek evidence that verifies a hypothesis rather than evidence that falsifies it. Due to this bias, developers tend to perform unit tests to make their program work rather than to break their code. Therefore, confirmation bias is believed to be one of the factors that lead to increased software defect density. In this research, we present a metric scheme that explores the impact of developers' confirmation bias on software defect density. To estimate the effectiveness of our metric scheme in quantifying confirmation bias within the context of software development, we performed an empirical study addressing the prediction of the defective parts of software. In our empirical study, we used confirmation bias metrics on five datasets obtained from two companies. Our results provide empirical evidence that human thought processes and cognitive aspects deserve further investigation to improve decision making in software development for effective process management and resource allocation.
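
    A minimal sketch of how a metric scheme like this might feed a defect predictor; the feature names, synthetic data, and classifier choice below are illustrative assumptions, not the study's actual metrics or datasets.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(42)
        n_modules = 200

        # Hypothetical confirmation bias metrics for the developers of each
        # module: a hypothesis-testing questionnaire score and the share of
        # "make it work" (confirmatory) unit tests they wrote.
        X = rng.uniform(0.0, 1.0, size=(n_modules, 2))

        # Toy labels mimicking the hypothesized relationship: modules touched
        # by more bias-prone developers are more likely to be defective.
        y = (X.sum(axis=1) + rng.normal(0.0, 0.3, n_modules) > 1.0).astype(int)

        scores = cross_val_score(GaussianNB(), X, y, cv=5, scoring="roc_auc")
        print(f"mean AUC over 5 folds: {scores.mean():.2f}")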

    Using Machine Learning Techniques to Improve Static Code Analysis Tools Usefulness

    This dissertation proposes an approach, based on Machine Learning (ML) techniques, to reduce as much as possible the cost of manually inspecting the large number of false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes a particular SCA tool nor depends on the specific programming language of the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a synthetic source code suite, the Juliet test suite. From this evaluation, we concluded that SCA tools report many false positive warnings that require manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate either true positive, false positive, or false negative warnings. These datasets were used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forest (RF) technique outperformed the rest. Lastly, using this classifier and an instance-based transfer learning technique, we ranked a number of warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of manually inspecting false positive warnings outperformed a random ranking algorithm and was highly correlated with the ranked list generated by the optimal ranking algorithm.
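
    The core classification-and-ranking step might look like the following sketch: train a Random Forest on labeled warnings, then rank unseen warnings by the predicted probability of being a true positive, so that likely false positives sink to the bottom of the inspection queue. The warning features and synthetic labels are invented placeholders; the dissertation's actual feature set is not given in this abstract.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(7)

        # Hypothetical per-warning features (e.g. warning-type id, function
        # length, nesting depth); label 1 = true positive, 0 = false positive.
        X_train = rng.uniform(0.0, 1.0, size=(500, 3))
        y_train = (
            X_train[:, 0] + 0.5 * X_train[:, 2] + rng.normal(0.0, 0.2, 500) > 0.8
        ).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)

        # Rank new warnings so the most-likely-true-positives are inspected first.
        X_new = rng.uniform(0.0, 1.0, size=(10, 3))
        p_true = clf.predict_proba(X_new)[:, 1]
        for rank, idx in enumerate(np.argsort(-p_true), start=1):
            print(f"{rank:2d}. warning {idx}: P(true positive) = {p_true[idx]:.2f}")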

    Users manual for the Automated Performance Test System (APTS)

    The characteristics of and user information for the Essex Automated Performance Test System (APTS), a computer-based portable performance assessment battery, are given. The battery was developed to provide a menu of performance tests tapping the widest possible variety of human cognitive and motor functions, implemented on a portable computer system suitable for use in both laboratory and field settings for studying the effects of toxic agents and other stressors. The manual gives guidance in selecting, administering, and scoring tests from the battery, and reviews the data and studies underlying the battery's development. Its main emphasis is on the users of the battery: the scientists, researchers, and technicians who wish to examine changes in human performance across time or as a function of changes in the conditions under which test data are obtained. First, the how-to information needed to make decisions about where and how to use the battery is given, followed by the research background supporting the battery's development. Finally, the account of the battery's development history focuses largely on the logical framework within which tests were evaluated.