
    Functional Requirements-Based Automated Testing for Avionics

    We propose and demonstrate a method for reducing testing effort in safety-critical software development under DO-178 guidance. We achieve this by applying Bounded Model Checking (BMC) to formal low-level requirements in order to automatically generate tests that are good enough to replace existing labor-intensive test-writing procedures while maintaining independence from implementation artefacts. Given that existing manual processes are often empirical and subjective, we begin by formally defining a metric, which extends recognized best practice from code-coverage analysis strategies, to generate tests that adequately cover the requirements. We then formulate the automated test-generation procedure and apply its prototype in case studies with industrial partners. In review, the method developed here is demonstrated to significantly reduce the human effort required to qualify software products under DO-178 guidance.
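    The abstract's core idea is to derive tests from requirements rather than from the implementation, using a bounded search for covering inputs. A toy sketch of that idea (hypothetical requirements and program, a brute-force search standing in for a real BMC/SMT engine, and not the authors' coverage metric) might look like:

```python
# Toy sketch of requirements-driven test generation via bounded search.
# A real implementation would run a BMC engine over formal low-level
# requirements; here we brute-force a small integer input space.

def program_under_test(x: int) -> int:
    # Hypothetical implementation artefact (tests must stay independent of it).
    return abs(x) + 1

# Hypothetical formal low-level requirements: (name, trigger, oracle).
REQUIREMENTS = [
    ("REQ-1: negative inputs", lambda x: x < 0, lambda x, y: y == -x + 1),
    ("REQ-2: non-negative inputs", lambda x: x >= 0, lambda x, y: y == x + 1),
]

def generate_tests(bound: int = 10):
    """For each requirement, find one input within the bound that
    activates it; the requirement itself supplies the test oracle."""
    tests = []
    for name, trigger, oracle in REQUIREMENTS:
        witness = next(x for x in range(-bound, bound + 1) if trigger(x))
        tests.append((name, witness, oracle))
    return tests

if __name__ == "__main__":
    for name, x, oracle in generate_tests():
        y = program_under_test(x)
        assert oracle(x, y), name
        print(f"{name}: input={x} output={y} PASS")
```

    Because the oracle comes from the requirement, not the code, the generated tests retain the independence from implementation artefacts that the paper emphasises.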

    Design of automatic vision-based inspection system for solder joint segmentation

    Purpose: Computer vision has been widely used in the inspection of electronic components. This paper proposes a computer vision system for the automatic detection, localisation, and segmentation of solder joints on Printed Circuit Boards (PCBs) under different illumination conditions. Design/methodology/approach: An illumination normalization approach is applied to an image, which can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image the same as in the corresponding image under normal lighting conditions. Consequently, the need for special lighting and instrumental setup to detect solder joints can be reduced. These normalised images are insensitive to illumination variations and are used for the subsequent solder-joint detection stages. In the segmentation approach, the PCB image is transformed from the RGB color space to the YIQ color space for the effective detection of solder joints against the background. Findings: The segmentation results show that the proposed approach improves performance significantly for images under varying illumination conditions. Research limitations/implications: This paper proposes a front-end system for the automatic detection, localisation, and segmentation of solder joint defects. Further research is required to complete the full system, including the classification of solder joint defects. Practical implications: The methodology presented in this paper can be an effective way to reduce cost and improve quality in PCB production in the manufacturing industry. Originality/value: This research proposes the automatic location, identification, and segmentation of solder joints under different illumination conditions.
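    The RGB-to-YIQ transform mentioned in the abstract separates luminance (Y) from chrominance (I, Q), which is what lets bright, specular solder joints be distinguished from the board background. A minimal sketch using the standard NTSC coefficients (the paper's exact matrix may differ) is:

```python
# Sketch of the RGB -> YIQ transform used to separate luminance (Y)
# from chrominance (I, Q). Coefficients are the standard NTSC values.

def rgb_to_yiq(r: float, g: float, b: float) -> tuple:
    """Convert normalised RGB components (0..1) to YIQ."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # orange-blue chrominance
    q = 0.211 * r - 0.523 * g + 0.312 * b   # purple-green chrominance
    return y, i, q

# Pure white carries full luminance and (near-)zero chrominance, which is
# why reflective solder joints stand out in the Y channel.
print(rgb_to_yiq(1.0, 1.0, 1.0))
```

    Python's standard library also ships `colorsys.rgb_to_yiq`, though with slightly different coefficients.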

    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of underlying decision problems. This position paper proposes sciduction, an approach to tackle these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses, and actively combines inductive and deductive reasoning: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software; (ii) synthesis of loop-free programs; and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
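    The inductive/deductive interplay the abstract describes can be illustrated with a minimal CEGIS-style loop on a hypothetical synthesis problem (not one of the paper's case studies): an inductive learner generalises a candidate from examples, and a deductive checker searches for counterexamples, each feeding the other.

```python
# Minimal CEGIS-style loop illustrating sciduction's combination of
# induction, deduction, and a structure hypothesis, on a toy problem:
# synthesise the constant c in the program family  f(x) = x + c.

SPEC = lambda x: x + 3          # the intended (hidden) behaviour
DOMAIN = range(-50, 50)         # deductive check is bounded to this domain

def learner(examples):
    """Inductive step: generalise a candidate c from (x, y) examples.
    Structure hypothesis: the artefact is an integer constant in 0..9."""
    for c in range(10):
        if all(x + c == y for x, y in examples):
            return c
    raise ValueError("no candidate fits the examples")

def verifier(c):
    """Deductive step: exhaustively search the domain for a counterexample."""
    for x in DOMAIN:
        if x + c != SPEC(x):
            return x                             # counterexample for learner
    return None                                  # candidate is correct

def cegis():
    examples = [(0, SPEC(0))]                    # seed example
    while True:
        c = learner(examples)
        cex = verifier(c)
        if cex is None:
            return c
        examples.append((cex, SPEC(cex)))        # feed counterexample back

print(cegis())
```

    The structure hypothesis (a single small constant) is what keeps both the inductive search and the deductive check tractable, mirroring the paper's argument.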

    Characterization of neurophysiologic and neurocognitive biomarkers for use in genomic and clinical outcome studies of schizophrenia.

    Background: Endophenotypes are quantitative, laboratory-based measures representing intermediate links in the pathways between genetic variation and the clinical expression of a disorder. Ideal endophenotypes exhibit deficits in patients, are stable over time and across shifts in psychopathology, and are suitable for repeat testing. Unfortunately, many leading candidate endophenotypes in schizophrenia have not been fully characterized simultaneously in large cohorts of patients and controls across these properties. The objectives of this study were to characterize the extent to which widely-used neurophysiological and neurocognitive endophenotypes are: 1) associated with schizophrenia, 2) stable over time, independent of state-related changes, and 3) free of potential practice/maturation or differential attrition effects in schizophrenia patients (SZ) and nonpsychiatric comparison subjects (NCS). Stability of clinical and functional measures was also assessed. Methods: Participants (SZ n = 341; NCS n = 205) completed a battery of neurophysiological (MMN, P3a, P50 and N100 indices, PPI, startle habituation, antisaccade) and neurocognitive (WRAT-3 Reading, LNS-forward, LNS-reorder, WCST-64, CVLT-II) measures. In addition, patients were rated on clinical symptom severity as well as functional capacity and status measures (GAF, UPSA, SOF). 223 subjects (SZ n = 163; NCS n = 58) returned for retesting after 1 year. Results: Most neurophysiological and neurocognitive measures exhibited medium-to-large deficits in schizophrenia, moderate-to-substantial stability across the retest interval, and independence from fluctuations in clinical status. Clinical symptoms and functional measures also exhibited substantial stability. A Longitudinal Endophenotype Ranking System (LERS) was created to rank neurophysiological and neurocognitive biomarkers according to their effect sizes across endophenotype criteria. Conclusions: The majority of neurophysiological and neurocognitive measures exhibited deficits in patients and stability over a 1-year interval, and did not demonstrate practice or time effects, supporting their use as endophenotypes in neural substrate and genomic studies. These measures hold promise for informing the "gene-to-phene gap" in schizophrenia research.
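    The LERS composite itself is not specified in the abstract; as a generic sketch of the kind of computation involved, ranking candidate measures by a standardised effect size (Cohen's d) between patient and control samples might look like this (all data below are made up for illustration):

```python
# Generic sketch of ranking candidate endophenotype measures by effect
# size (Cohen's d, pooled-SD standardised mean difference). The actual
# LERS combines several criteria; the scores here are hypothetical.
from statistics import mean, stdev

def cohens_d(patients, controls):
    """Absolute standardised mean difference using the pooled SD."""
    n1, n2 = len(patients), len(controls)
    s1, s2 = stdev(patients), stdev(controls)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return abs(mean(patients) - mean(controls)) / pooled

measures = {   # hypothetical per-measure scores: (patients, controls)
    "MMN":         ([-1.9, -2.1, -2.4, -1.7], [-2.8, -3.0, -2.6, -3.2]),
    "antisaccade": ([0.61, 0.55, 0.70, 0.58], [0.85, 0.90, 0.80, 0.88]),
}
ranking = sorted(measures, key=lambda m: cohens_d(*measures[m]), reverse=True)
print(ranking)
```

    A longitudinal ranking system would additionally weight test-retest stability and practice effects, per the criteria the study evaluates.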

    Six-man, self-contained carbon dioxide concentrator subsystem for Space Station Prototype (SSP) application

    A six-man, self-contained, electrochemical carbon dioxide concentrating subsystem for Space Station Prototype use was successfully designed, fabricated, and tested. A test program covering shakedown testing, design verification testing, and acceptance testing was successfully completed.

    The GoSam package: an overview

    The public code GOSAM for the computation of one-loop virtual corrections to scattering amplitudes in the Standard Model and beyond is presented. Particular emphasis is devoted to the interface with other public tools via the Binoth Les Houches Accord. We show with examples that doing LHC phenomenology with automatic Next-to-Leading-Order QCD corrections is now practical. Comment: 8 pages, 4 figures, presented at the 11th DESY workshop "Loops and Legs in Quantum Field Theory", April 2012, Wernigerode, Germany.

    You Cannot Fix What You Cannot Find! An Investigation of Fault Localization Bias in Benchmarking Automated Program Repair Systems

    Properly benchmarking Automated Program Repair (APR) systems should contribute to the development and adoption of the research outputs by practitioners. To that end, the research community must ensure that it reaches significant milestones by reliably comparing state-of-the-art tools for a better understanding of their strengths and weaknesses. In this work, we identify and investigate a practical bias caused by the fault localization (FL) step in a repair pipeline. We propose to highlight the different fault localization configurations used in the literature, and their impact on APR systems when applied to the Defects4J benchmark. Then, we explore the performance variations that can be achieved by 'tweaking' the FL step. Ultimately, we expect to create new momentum for (1) full disclosure of APR experimental procedures with respect to FL, (2) realistic expectations of repairing bugs in Defects4J, as well as (3) reliable performance comparison among the state-of-the-art APR systems, and against the baseline performance results of our thoroughly assessed kPAR repair tool. Our main findings include: (a) only a subset of Defects4J bugs can currently be localized by commonly-used FL techniques; (b) the current practice of comparing state-of-the-art APR systems (i.e., counting the number of fixed bugs) is potentially misleading due to the bias of FL configurations; and (c) APR authors do not properly qualify their performance achievements with respect to the different tuning parameters implemented in APR systems. Comment: Accepted by ICST 201
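    The FL step whose configuration the paper investigates is typically spectrum-based fault localization. A minimal sketch of one widely-used suspiciousness formula, Ochiai, applied to a made-up coverage spectrum (the paper evaluates real FL tool configurations, not this toy):

```python
# Sketch of spectrum-based fault localization using the Ochiai formula.
# The spectrum records, per statement, how many failing/passing tests
# cover it; higher scores mean more suspicious statements.
from math import sqrt

def ochiai(ef, ep, nf):
    """ef/ep: failing/passing tests covering the statement;
    nf: failing tests NOT covering it."""
    denom = sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Hypothetical spectrum: statement -> (ef, ep, nf)
spectrum = {
    "line 10": (2, 0, 0),   # covered by all failing tests, no passing ones
    "line 11": (2, 3, 0),   # covered by failing and passing tests
    "line 12": (0, 5, 2),   # covered only by passing tests
}
ranked = sorted(spectrum, key=lambda s: ochiai(*spectrum[s]), reverse=True)
print(ranked)
```

    An APR system then attempts patches at statements in this ranked order, which is why the choice and tuning of the FL formula can dominate how many bugs a tool appears to fix.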

    JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
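    JWalk itself operates on compiled Java classes via reflection; the core idea of discovering a class's methods by introspection and exhaustively exploring bounded call sequences can be sketched in Python with a hypothetical class under test:

```python
# Python analogue of bounded exhaustive protocol exploration: discover a
# class's public methods by introspection and run every method sequence
# up to a depth bound, recording each result as a candidate oracle value
# for a human tester to confirm. (JWalk does this for Java via reflection.)
import inspect
from itertools import product

class Stack:                    # hypothetical class under test
    def __init__(self):
        self._items = []
    def push(self):             # zero-argument protocol to keep the sketch small
        self._items.append(len(self._items))
    def pop(self):
        return self._items.pop() if self._items else None

def protocol_walk(cls, depth=2):
    """Run every public-method sequence up to `depth` on fresh instances."""
    methods = [n for n, m in inspect.getmembers(cls, inspect.isfunction)
               if not n.startswith("_")]
    observations = {}
    for k in range(1, depth + 1):
        for seq in product(methods, repeat=k):
            obj = cls()                      # fresh instance per sequence
            result = None
            for name in seq:
                result = getattr(obj, name)()
            observations[seq] = result       # candidate oracle value
    return observations

obs = protocol_walk(Stack)
print(obs[("push", "pop")])
```

    In a JWalk-style tool, each recorded result would be shown to the tester once; confirmed values become saved oracles, and predictive rules spare the tester from confirming values the tool can infer.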