
    Avoiding coincidental correctness in boundary value analysis

    In partition analysis the input domain is divided into subdomains on which the system's behaviour should be uniform. Boundary value analysis produces test inputs near each subdomain's boundaries in order to find failures caused by incorrect implementation of those boundaries. However, boundary value analysis can be adversely affected by coincidental correctness: the system produces the expected output, but for the wrong reason. This article shows how boundary value analysis can be adapted to reduce the likelihood of coincidental correctness. The main contribution applies to cases of automated test data generation, in which we cannot rely on the expertise of a tester.
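
    A minimal sketch of the problem the article targets, assuming a hypothetical specification/implementation pair (not taken from the paper): a boundary fault that a standard boundary-value test point fails to expose because the output happens to be right for the wrong reason.

```python
# Hypothetical example of coincidental correctness at a boundary.

def spec_abs(x):
    """Specification: negate only strictly negative inputs."""
    return -x if x < 0 else x

def buggy_abs(x):
    """Faulty implementation: boundary shifted from '< 0' to '<= 0'."""
    return -x if x <= 0 else x

# The on-point test input x = 0 exercises the wrong branch in the buggy
# version, yet still yields the expected output (-0 == 0), so the shifted
# boundary goes undetected: a coincidental correctness case.
for x in (-1, 0, 1):
    assert buggy_abs(x) == spec_abs(x)  # all pass, including the faulty boundary
print("boundary fault masked by coincidental correctness at x = 0")
```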

    Towards Automated Boundary Value Testing with Program Derivatives and Search

    A natural and widely used strategy when testing software is to use input values at boundaries, i.e. where behavior is expected to change the most, an approach often called boundary value testing or analysis (BVA). Even though this has long been a key testing idea, it has been hard to define and formalize clearly, and consequently hard to automate. In this research note we propose one such formalization of BVA by considering (software) program derivatives, in a similar way to how the derivative of a function is defined in mathematics. Critical to our definition is the notion of distance between inputs and between outputs, which we can formalize and then quantify based on ideas from information theory. However, for our (black-box) approach to be practical, one must search for test inputs with specific properties. Coupling it with search-based software engineering is thus required, and we discuss how program derivatives can be used as, and within, fitness functions. This brief note does not allow a deeper, empirical investigation, but we use a simple illustrative example throughout to introduce the main ideas. By combining program derivatives with search, we thus propose a practical as well as theoretically interesting technique for automated boundary value (analysis and) testing.
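
    A minimal sketch of the program-derivative idea: a difference quotient formed from an output distance over an input distance. The distance functions and the program under test below are illustrative assumptions; the note itself argues for information-theoretic distances and for searching over inputs rather than enumerating them.

```python
# Sketch of a "program derivative" as output-distance / input-distance.

def output_distance(a, b):
    # Crude stand-in for an information-theoretic distance on outputs.
    return abs(len(str(a)) - len(str(b))) + (0 if str(a) == str(b) else 1)

def input_distance(x1, x2):
    return abs(x1 - x2)

def program_derivative(program, x1, x2):
    """Difference quotient of the program between two nearby inputs."""
    return output_distance(program(x1), program(x2)) / input_distance(x1, x2)

# Example program under test: large derivative values flag candidate boundaries.
def classify(x):
    return "negative" if x < 0 else "non-negative"

print(program_derivative(classify, -1, 0))  # behaviour changes: high derivative
print(program_derivative(classify, 5, 6))   # same sub-domain: derivative is 0
```

    In a search-based setting, such a derivative could serve as (part of) a fitness function that rewards input pairs with small input distance but large output distance, driving the search towards boundaries.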

    Boundary Value Exploration for Software Analysis

    For software to be reliable and resilient, it is widely accepted that tests must be created and maintained alongside the software itself. One safeguard against vulnerabilities and failures in code is to ensure correct behavior on the boundaries between sub-domains of the input space. So-called boundary value analysis (BVA) and boundary value testing (BVT) techniques aim to exercise those boundaries and increase test effectiveness. However, the concepts of BVA and BVT themselves are not clearly defined, and it is not clear how to identify the relevant sub-domains, and thus the boundaries delineating them, given a specification. This has limited adoption and hindered automation. We clarify BVA and BVT and introduce Boundary Value Exploration (BVE) to describe techniques that support them by helping to detect and identify boundary inputs. Additionally, we propose two concrete BVE techniques based on information-theoretic distance functions: (i) an algorithm for boundary detection and (ii) the use of software visualization to explore the behavior of the software under test and identify its boundary behavior. As an initial evaluation, we apply these techniques to a widely used and well-tested date-handling library. Our results reveal questionable behavior at boundaries highlighted by our techniques. In conclusion, we argue that the boundary value exploration our techniques enable is a step towards automated boundary value analysis and testing, which can foster their wider use and improve test effectiveness and efficiency.
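
    A rough sketch in the spirit of the first BVE technique: score neighbouring inputs by an information-theoretic distance between their outputs and flag the highest-scoring pairs as boundary candidates. The compression-based distance and the toy date-like routine below are assumptions for illustration, not the paper's implementation or subject library.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def days_in_month(month: int) -> str:
    # Toy stand-in for a date-handling routine under test.
    if month in (1, 3, 5, 7, 8, 10, 12):
        return "31 days"
    if month == 2:
        return "28 or 29 days"
    return "30 days"

# Score each pair of neighbouring inputs; large jumps suggest a boundary.
pairs = [(m, m + 1) for m in range(1, 12)]
scored = [(ncd(days_in_month(a).encode(), days_in_month(b).encode()), a, b)
          for a, b in pairs]
for score, a, b in sorted(scored, reverse=True)[:3]:
    print(f"candidate boundary between month {a} and {b}: distance {score:.2f}")
```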

    Precise propagation of fault-failure correlations in program flow graphs

    Statistical fault localization techniques find suspicious faulty program entities by comparing passed and failed executions. Existing studies show that such techniques can be promising in locating program faults. However, coincidental correctness and execution crashes may make program entities indistinguishable in the execution spectra under study, or cause inaccurate counting, severely affecting the precision of existing fault localization techniques. In this paper, we propose a BlockRank technique, which calculates, contrasts, and propagates the mean edge profiles between passed and failed executions to alleviate the impact of coincidental correctness. To address the issue of execution crashes, BlockRank identifies suspicious basic blocks by modeling how each basic block contributes to failures, apportioning their fault relevance to surrounding basic blocks in terms of the rate of successful transitions observed in passed and failed executions. BlockRank is empirically shown to be more effective than nine representative techniques on four real-life medium-sized programs. © 2011 IEEE. Proceedings of the 35th IEEE Annual International Computer Software and Applications Conference (COMPSAC 2011), Munich, Germany, 18-22 July 2011, p. 58-6
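
    A simplified sketch of the statistical fault localization setting that BlockRank builds on: per-block coverage spectra from passed and failed runs are combined into a suspiciousness score. The well-known Ochiai formula is used here only as a stand-in; BlockRank's edge-profile contrast and propagation are more involved and are not reproduced.

```python
from math import sqrt

# coverage[i] = set of basic-block ids executed by test i; verdicts[i] = pass/fail.
coverage = [{1, 2, 4}, {1, 3, 4}, {1, 2, 3, 4}, {1, 3}]
verdicts = ["pass", "pass", "fail", "fail"]

total_failed = verdicts.count("fail")
blocks = set().union(*coverage)

def ochiai(block: int) -> float:
    """Suspiciousness of a block from its coverage in failed vs. passed runs."""
    failed_cov = sum(1 for cov, v in zip(coverage, verdicts)
                     if v == "fail" and block in cov)
    passed_cov = sum(1 for cov, v in zip(coverage, verdicts)
                     if v == "pass" and block in cov)
    denom = sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# Rank blocks for inspection; coincidental correctness in passed runs is
# exactly what distorts these counts and motivates techniques like BlockRank.
for block in sorted(blocks, key=ochiai, reverse=True):
    print(f"block {block}: suspiciousness {ochiai(block):.2f}")
```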

    Beyond the Appraisal Framework: Evaluation of Can and May in Introductions and Conclusions to Computing Research Articles

    This paper analyses the presence of the modal auxiliaries can and may as markers of authorial evaluation in a corpus of introductions and conclusions to computing research articles. Bearing in mind the semantic similarity of these two modals, we start from Martin and White's Appraisal framework, whose focus is on the interpersonal in language, the subjective presence of authors in their texts, and the stances they take both towards those texts and their readers. In particular, we extend Martin and White's notions of epistemic modality and evidentiality, which they interpret from a co-textual and contextual point of view, and use Alonso-Almeida's views on epistemicity as a pragmatic effect of evidential strategies. An important conclusion points to functional variation of epistemic and evidential readings in these two sections of research articles, with a predominant occurrence of epistemic attributions in introductions and evidential interpretations in conclusions. This result is in consonance with the type of genre selected and its authors' aims.

    Finding failures from passed test cases: Improving the pattern classification approach to the testing of mesh simplification programs

    Mesh simplification programs create three-dimensional polygonal models that are similar to an original polygonal model yet use fewer polygons. Such programs produce different graphics even when they are based on the same original polygonal model, which results in a test oracle problem. To address the problem, our previous work developed a technique that uses a reference model of the program under test to train a classifier. Such an approach may mistakenly mark a failure-causing test case as passed, lowering its effectiveness at revealing failures. This paper suggests piping the test cases marked as passed by a statistical pattern classification module to an analytical metamorphic testing (MT) module. We evaluate our approach empirically using three subject programs with over 2700 program mutants. The results show that, using a resembling reference model to train a classifier, the integrated approach can significantly improve the failure-detection effectiveness of the pattern classification approach. We also explain how MT in our design trades specificity for sensitivity. Copyright © 2009 John Wiley & Sons, Ltd.
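
    A minimal sketch of the pipeline's metamorphic-testing stage: test cases the classifier marks as passed are re-checked against a metamorphic relation rather than a direct oracle. The toy program, relation, and classifier stub below are illustrative assumptions, not the paper's mesh-simplification subjects.

```python
def program_under_test(points, budget):
    """Toy 'simplification': keep every k-th point to meet a size budget."""
    step = max(1, len(points) // budget)
    return points[::step][:budget]

def classifier_says_passed(output):
    # Stand-in for the trained pattern classifier; assume it accepted the output.
    return True

def metamorphic_relation_holds(points, budget):
    # Illustrative MR: asking for a smaller budget must never yield a
    # larger output than the original budget did.
    larger = program_under_test(points, budget)
    smaller = program_under_test(points, budget // 2)
    return len(smaller) <= len(larger)

source_input = (list(range(100)), 20)
output = program_under_test(*source_input)
if classifier_says_passed(output) and not metamorphic_relation_holds(*source_input):
    print("classifier-passed test case violates the metamorphic relation: report failure")
else:
    print("no violation detected by the MT stage")
```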