
    Survey of information resources on newborn blood spot screening for parents and health professionals: a systematic review


    Machine Learning Data Suitability and Performance Testing Using Fault Injection Testing Framework

    Creating resilient machine learning (ML) systems has become necessary to ensure production-ready ML systems that earn user confidence. The quality of the input data and of the model strongly influences successful end-to-end testing in data-sensitive systems. However, approaches for testing input data are fewer and less systematic than those for model testing. To address this gap, this paper presents the Fault Injection for Undesirable Learning in input Data (FIUL-Data) testing framework, which tests the resilience of ML models to multiple intentionally triggered data faults. Data mutators explore vulnerabilities of ML systems to the effects of different fault injections. The proposed framework is designed around three main ideas: the mutators are not random, only one data mutator is applied at a time, and the selected ML models are optimized beforehand. This paper evaluates the FIUL-Data framework using data from analytical chemistry, comprising retention time measurements of anti-sense oligonucleotides. The empirical evaluation is carried out in a two-step process in which the responses of the selected ML models to data mutation are first analyzed individually and then compared with each other. The results show that the FIUL-Data framework allows the resilience of ML models to be evaluated. In most experiments, ML models show higher resilience with larger training datasets, and gradient boosting performed better than support vector regression on smaller training sets. Overall, the mean squared error metric is useful for evaluating the resilience of models due to its high sensitivity to data mutation. Comment: 18 pages
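    The framework's core design choices -- deterministic mutators applied one at a time, with resilience measured by the change in mean squared error -- can be sketched in a minimal, self-contained way. The mutator, the "retention time" values, and the stand-in mean-predictor model below are all hypothetical illustrations, not the paper's actual implementation:

```python
import random
import statistics

def gaussian_noise_mutator(values, sigma, seed=0):
    """One fault type: additive Gaussian noise on a copy of the data.
    Per the framework's design, only a single mutator is applied at a
    time, and a fixed seed keeps the mutation deterministic."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in values]

def mse(predictions, targets):
    """Mean squared error, the metric the paper found most sensitive
    to data mutation."""
    return statistics.fmean((p - t) ** 2 for p, t in zip(predictions, targets))

# Hypothetical retention-time measurements (the study used anti-sense
# oligonucleotide data; these numbers are invented for illustration).
clean = [10.2, 11.5, 9.8, 10.9, 11.1]

# Stand-in for a pre-optimized model: predict the training mean.
model_prediction = statistics.fmean(clean)

baseline = mse([model_prediction] * len(clean), clean)
mutated = gaussian_noise_mutator(clean, sigma=0.5)
degraded = mse([model_prediction] * len(mutated), mutated)

# Resilience is read off as how much one injected fault moves the metric.
delta = degraded - baseline
```

    Comparing `delta` across models and training-set sizes mirrors the paper's two-step evaluation: each model's response to a mutation is measured individually, then the responses are compared.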

    Industrial Surveys on Software Testing Practices: A Literature Review

    A US government agency estimated the national cost of inadequate software testing to be $60 billion annually, and that was 20 years ago. As the role of technology and software has been rapidly increasing worldwide for decades, it suffices to say that the worldwide fiscal effect of poor testing practices today is probably "quite a bit". An increasing number of industry-focused survey studies on testing have been published worldwide in recent years, signalling an increased need to characterize the testing practices of the software development industry. These types of studies can help guide future research efforts towards subjects that are meaningful to the industry, and give practitioners an opportunity to compare their own practice to that of their peers and recognize the main areas for improvement. As no secondary study devoted to these types of survey studies could be identified, the opportunity was seized to carry out a literature review to find out what the data from these studies can tell us when aggregated. The precise topics focused on were the usage of test levels, test types, test design techniques, test tools and test automation. Looking at these studies in aggregate reveals some general trends: unit testing, functional testing and regression testing are popular everywhere, and performance testing and usability testing are also quite popular regardless of the surveyed population. The popularity of the other test levels and test types varies from survey to survey and region to region. Black-box techniques and experience-based techniques are more popular than white-box techniques. Exploratory testing, error guessing, use case testing and boundary value analysis are some of the most popular test design techniques. Much of the industry relies on manual testing over automated testing and/or has inadequately adopted testing tools.
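    Boundary value analysis, one of the popular black-box techniques the surveys report, is easy to illustrate. The validator and its accepted range below are hypothetical, chosen only to show where the technique places its test inputs:

```python
def accept_quantity(qty: int) -> bool:
    """Hypothetical validator: accepts order quantities from 1 to 100."""
    return 1 <= qty <= 100

# Boundary value analysis picks inputs at and just around each boundary,
# where off-by-one defects tend to cluster.
boundary_cases = {
    0: False,    # just below the lower bound
    1: True,     # lower bound
    2: True,     # just above the lower bound
    99: True,    # just below the upper bound
    100: True,   # upper bound
    101: False,  # just above the upper bound
}

for value, expected in boundary_cases.items():
    assert accept_quantity(value) == expected
```

    Six targeted inputs exercise both edges of the range, which is why the technique is valued as a cheap complement to exploratory and experience-based testing.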

    Software Engineering 2021: conference held 22–26 February 2021, Braunschweig/virtual


    Genetic Services in Ontario: Mapping the Future


    Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development

    Mobile devices and platforms have become an established target for modern software developers due to performant hardware and a large and growing user base numbering in the billions. Despite their popularity, the software development process for mobile apps comes with a set of unique, domain-specific challenges rooted in program comprehension. Many of these challenges stem from developer difficulties in reasoning about different representations of a program, a phenomenon we define as a "language dichotomy". In this paper, we reflect upon the various language dichotomies that contribute to open problems in program comprehension and development for mobile apps. Furthermore, to help guide the research community towards effective solutions for these problems, we provide a roadmap of directions for future work. Comment: Invited Keynote Paper for the 26th IEEE/ACM International Conference on Program Comprehension (ICPC'18)

    A survey on software testability

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers in the last several decades. Reviewing and getting an overview of the entire state-of-the-art and state-of-the-practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to benefit the readers (both practitioners and researchers) in preparing, measuring and improving software testability. Method: To address the above need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers. Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring and improving testability are the most frequently addressed topics in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results could help practitioners measure and improve software testability in their projects.
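    Two of the improvement techniques the survey lists, adding assertions and improving observability, can be sketched in a few lines. The `normalize` function below is a hypothetical example, not drawn from the paper:

```python
def normalize(scores):
    """Scale scores into [0, 1].

    Assertions on the pre- and post-conditions surface failures at the
    point of fault (observability), and returning the intermediate
    (min, max) range instead of hiding it lets a test inspect internal
    state directly rather than inferring it.
    """
    assert scores, "precondition: at least one score required"
    lo, hi = min(scores), max(scores)
    span = hi - lo or 1  # guard against division by zero for constant input
    result = [(s - lo) / span for s in scores]
    assert all(0.0 <= r <= 1.0 for r in result), "postcondition violated"
    return result, (lo, hi)

values, observed_range = normalize([2, 4, 6])
```

    A test can now check both the output and the exposed range, rather than treating the unit as a black box -- the essence of the observability and controllability factors the survey highlights.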