
    Software robustness: A survey, a theory, and prospects

    If a software execution is disrupted, an observer of the execution at a later point may or may not see evidence of the disruption. If not, we say the disruption failed to propagate. One name for this phenomenon is software robustness, but it appears in different contexts in software engineering under different names. Contexts include testing, security, reliability, and automated code improvement or repair. Names include coincidental correctness, correctness attraction, and transient error reliability. As witnessed, it is a dynamic phenomenon, but any explanation with predictive power must necessarily take a static view. Given this dynamic/static duality, it is convenient to take a statistical view of the phenomenon, which we do by way of information theory. We theorise that a necessary condition for failed disruption propagation is that the code region where the disruption occurs is composed with, or succeeded by, a subsequent code region that suffers entropy loss over all executions. The higher the entropy loss, the higher the likelihood that a disruption in the first region fails to propagate to the downstream observation point. We survey the different research silos that address this phenomenon and explain how the theory might be exploited in software engineering.
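    A minimal sketch of the entropy-loss condition described above, assuming a toy downstream code region modelled as a Python function and a small, uniform set of example inputs (both hypothetical): the loss is measured as the entropy of the inputs minus the entropy of the observed outputs, and a region with positive loss is one in which a disruption may fail to propagate.

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (bits) of the empirical distribution of a sequence of values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def entropy_loss(region, inputs):
    """Entropy lost by a code region over a set of executions: H(inputs) - H(outputs).
    A lossy (non-injective) region can absorb an upstream disruption, i.e. allow
    failed disruption propagation."""
    outputs = [region(x) for x in inputs]
    return entropy(inputs) - entropy(outputs)

def clamp(x):
    # Hypothetical downstream region: clamping discards information, so a
    # disrupted value such as 7 instead of 9 maps to the same output as 5 does.
    return min(x, 5)

print(entropy_loss(clamp, list(range(10))))  # ~1.16 bits lost over these executions
```

    The larger this loss, the more input values collide on the same output, and hence the more ways a corrupted upstream state can be squeezed onto the same downstream observation.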

    An information theoretic notion of software testability

    CONTEXT: In software testing, Failed Error Propagation (FEP) is the situation in which a faulty program state occurs during the execution of the system under test (SUT) but does not lead to incorrect output. It is known that FEP can adversely affect software testing, and this has resulted in associated information theoretic measures. OBJECTIVE: To devise measures that can be used to assess the testability of the SUT. By testability, we mean how likely it is that a faulty program state that occurs during testing will lead to incorrect output. Previous work has considered a single program point rather than an entire program. METHOD: New, more fine-grained measures were devised. Experiments were used to evaluate these and the previously defined measures (Squeeziness and Normalised Squeeziness). The experiments assessed how well these measures correlated with an estimate of the probability of FEP occurring during testing. Mutants were used to estimate this probability. RESULTS: A strong rank correlation was found between several of the measures and the probability of FEP. Importantly, this included the Normalised Squeeziness of the whole SUT, which is simpler to compute, or estimate, than most of the other measures considered. Additional experiments found that the measures were relatively insensitive to the choice of mutants and of test suite. CONCLUSION: There is scope to use information theoretic measures to estimate how prone an SUT is to FEP. As a result, there is potential to use such measures to prioritise testing or to estimate how much testing an SUT might require.
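    A rough sketch of how the whole-SUT measure might be estimated, assuming the usual formulation of Squeeziness as the entropy a function loses over a test input set and of Normalised Squeeziness as that loss divided by the input entropy (the SUT and test inputs below are hypothetical):

```python
from collections import Counter
from math import log2

def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def normalised_squeeziness(sut, test_inputs):
    """Normalised Squeeziness of a whole SUT over a test input set, assuming the
    formulation Sq = H(inputs) - H(outputs) and NSq = Sq / H(inputs). Values
    closer to 1 indicate heavier information loss, i.e. an SUT that is more
    prone to Failed Error Propagation."""
    h_in = entropy(test_inputs)
    h_out = entropy([sut(x) for x in test_inputs])
    return (h_in - h_out) / h_in if h_in > 0 else 0.0

def parity(x):
    # Hypothetical SUT: only the last bit of the input survives to the output.
    return x % 2

print(normalised_squeeziness(parity, list(range(16))))  # 0.75: three of the four input bits are lost
```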

    Measuring information-transfer delays

    In complex networks such as gene networks, traffic systems, or brain circuits, it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
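    On discretised signals, the delay-sensitive estimator can be approximated as a conditional mutual information scanned over candidate delays. The sketch below uses a target embedding of order one, a simplification of the paper's general embedding, together with hypothetical synthetic binary processes; the true lag is recovered as the delay that maximises the estimate.

```python
import numpy as np
from collections import Counter
from math import log2

def cond_mutual_info(a, b, c):
    """I(A; B | C) in bits from three aligned sequences of discrete symbols."""
    n = len(a)
    p_abc = Counter(zip(a, b, c))
    p_ac = Counter(zip(a, c))
    p_bc = Counter(zip(b, c))
    p_c = Counter(c)
    cmi = 0.0
    for (ai, bi, ci), n_abc in p_abc.items():
        cmi += (n_abc / n) * log2((n_abc * p_c[ci]) / (p_ac[(ai, ci)] * p_bc[(bi, ci)]))
    return cmi

def delayed_transfer_entropy(source, target, delay):
    """TE_{source->target}(delay) = I(target_t ; source_{t-delay} | target_{t-1}):
    transfer entropy with an explicit source-target delay, while keeping the
    conditioning on the target's immediately preceding state (a target embedding
    of order one, simplifying the paper's general embedding)."""
    t_now = target[delay:]
    t_past = target[delay - 1:-1]
    s_past = source[:-delay]
    return cond_mutual_info(t_now, s_past, t_past)

# Hypothetical coupled binary processes: the target copies the source with a lag of 3.
rng = np.random.default_rng(0)
src = rng.integers(0, 2, 5000)
tgt = np.roll(src, 3)
te = {d: delayed_transfer_entropy(src.tolist(), tgt.tolist(), d) for d in range(1, 6)}
print(max(te, key=te.get))  # the delay that maximises TE recovers the true lag (3)
```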

    Using Squeeziness to test component-based systems defined as Finite State Machines

    Context: Testing is the main validation technique used to increase the reliability of software systems. The effectiveness of testing can be strongly reduced by Failed Error Propagation. This situation arises when the System Under Test executes a faulty statement and the state of the system is affected by this fault, yet the expected output is still observed. Squeeziness is an information theoretic measure designed to quantify the likelihood of Failed Error Propagation, and previous work has shown that Squeeziness correlates strongly with Failed Error Propagation in white-box scenarios. Despite its usefulness, this measure, in its current formulation, cannot be used in a black-box scenario where we do not have access to the source code of the components. Objective: The main goal of this paper is to adapt Squeeziness to a black-box scenario and evaluate whether it can be used to estimate the likelihood that a component of a software system introduces Failed Error Propagation. Method: First, we defined our black-box scenario. Specifically, we considered the Failed Error Propagation that a component introduces when it receives its input from another component. We were interested in this because such fault masking makes it more difficult to find faults in the previous component when testing. Second, we defined our notion of Squeeziness in this framework. Finally, we carried out experiments in order to evaluate our measure. Results: Our experiments showed a strong correlation between the likelihood of Failed Error Propagation and Squeeziness. Conclusion: We can conclude that our new notion of Squeeziness can be used as a measure that estimates the probability of Failed Error Propagation being introduced by a component. As a result, it has the potential to be used as a measure of testability, allowing testers to assess how easy it is to test either the whole system or a single component. We considered a simple model (Finite State Machines), but the notions and results can be extended/adapted to deal with more complex state-based models, in particular those containing data.
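    An illustrative, simplified reading of the black-box measure on a deterministic Mealy machine (the machine, its alphabet, and the set of incoming input sequences below are hypothetical, and the definition is a simplification of the paper's): the component's Squeeziness is the entropy of the input sequences it receives minus the entropy of the output sequences it produces.

```python
from collections import Counter
from itertools import product
from math import log2

def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def fsm_output(transitions, initial_state, input_seq):
    """Run a deterministic Mealy machine; transitions maps
    (state, input symbol) -> (next state, output symbol)."""
    state, outputs = initial_state, []
    for symbol in input_seq:
        state, out = transitions[(state, symbol)]
        outputs.append(out)
    return tuple(outputs)

def blackbox_squeeziness(transitions, initial_state, input_seqs):
    """Simplified black-box Squeeziness of an FSM component: entropy of the
    incoming input sequences minus entropy of the produced output sequences.
    Higher values suggest the component is more likely to mask errors
    introduced by the upstream component."""
    outputs = [fsm_output(transitions, initial_state, seq) for seq in input_seqs]
    return entropy(input_seqs) - entropy(outputs)

# Hypothetical two-state component over alphabet {a, b}: it outputs 0 until it
# has seen a 'b', then outputs 1 forever, so many input sequences collapse onto
# the same output sequence.
T = {('s0', 'a'): ('s0', 0), ('s0', 'b'): ('s1', 1),
     ('s1', 'a'): ('s1', 1), ('s1', 'b'): ('s1', 1)}
seqs = [tuple(s) for s in product('ab', repeat=3)]
print(blackbox_squeeziness(T, 's0', seqs))  # 1.25 bits squeezed out of the incoming sequences
```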

    Aplicaciones de la teoría de la información y la inteligencia artificial al testing de software [Applications of information theory and artificial intelligence to software testing]

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería de Sistemas Informáticos y de Computación, defended 4 May 2022. Software Testing is a critical field for the software industry, as it provides the main tools used to ensure the reliability of the software produced. Currently, more than 50% of the time and resources needed to create a software product are devoted to testing tasks, from unit testing to system testing. Moreover, there is a huge interest in automating this field, as software grows larger and the amount of required testing increases. However, Software Testing is not only an industry-oriented field; it is also a genuinely interesting field with a noble goal (improving the reliability of software systems) that at the same time is full of problems to solve...

    Test Case Purification for Improving Fault Localization

    Finding and fixing bugs are time-consuming activities in software development. Spectrum-based fault localization aims to identify the faulty position in source code based on the execution traces of test cases. Failing test cases and their assertions form test oracles for the failing behavior of the system under analysis. In this paper, we propose a novel concept of spectrum-driven test case purification for improving fault localization. The goal of test case purification is to separate existing test cases into small fractions (called purified test cases) and to enhance the test oracles so as to further localize faults. Combined with an original fault localization technique (e.g., Tarantula), test case purification results in a better ranking of the program statements. Our experiments on 1800 faults in six open-source Java programs show that test case purification can effectively improve existing fault localization techniques.
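    For reference, the Tarantula baseline mentioned above ranks statements by a suspiciousness score computed from pass/fail coverage spectra. A minimal sketch with hypothetical spectra (the purification step itself is not shown):

```python
def tarantula_suspiciousness(coverage, outcomes):
    """Tarantula suspiciousness for each program statement.
    coverage[t] is the set of statements executed by test t;
    outcomes[t] is True if test t passed and False if it failed."""
    total_pass = sum(outcomes)
    total_fail = len(outcomes) - total_pass
    statements = set().union(*coverage)
    scores = {}
    for s in statements:
        passed = sum(1 for cov, ok in zip(coverage, outcomes) if ok and s in cov)
        failed = sum(1 for cov, ok in zip(coverage, outcomes) if not ok and s in cov)
        fail_ratio = failed / total_fail if total_fail else 0.0
        pass_ratio = passed / total_pass if total_pass else 0.0
        denom = fail_ratio + pass_ratio
        scores[s] = fail_ratio / denom if denom else 0.0
    return scores

# Hypothetical spectra: statement 3 is executed only by the failing test,
# so it receives the highest suspiciousness and is ranked first.
coverage = [{1, 2, 3}, {1, 2}, {1, 4}]
outcomes = [False, True, True]
print(max(tarantula_suspiciousness(coverage, outcomes).items(), key=lambda kv: kv[1]))
```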

    Building Bayesian Networks: Elicitation, Evaluation, and Learning

    As a compact graphical framework for the representation of multivariate probability distributions, Bayesian networks are widely used for efficient reasoning under uncertainty in a variety of applications, from medical diagnosis to computer troubleshooting and airplane fault isolation. However, the construction of Bayesian networks is often considered the main difficulty when applying this framework to real-world problems. In real-world domains, Bayesian networks are often built by a knowledge engineering approach. Unfortunately, eliciting knowledge from domain experts is a very time-consuming process and can result in poor-quality graphical models when not performed carefully. Over the last decade, the research focus has been shifting more towards learning Bayesian networks from data, especially with the increasing volumes of data available in various applications, such as biomedical, internet, and e-business applications, among others. Aiming to solve this bottleneck problem of building Bayesian network models, this research work focuses on the elicitation, evaluation, and learning of Bayesian networks. Specifically, the contribution of this dissertation involves research in the following five areas: a) graphical user interface tools for efficient elicitation and navigation of probability distributions, b) systematic and objective evaluation of elicitation schemes for probabilistic models, c) valid evaluation of the performance robustness, i.e., sensitivity, of Bayesian networks, d) the sensitivity-inequivalence characteristic of Markov equivalent networks, and the appropriateness of using sensitivity for model selection in learning Bayesian networks, and e) selective refinement for learning the probability parameters of Bayesian networks from limited data when expert knowledge is available. In addition, an efficient algorithm for fast sensitivity analysis is developed based on a relevance reasoning technique. The implemented algorithm runs very fast and makes d) and e) more affordable for real domain practice.
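    The sensitivity analysis referred to in contributions c) and d) can be illustrated on a toy two-node network with hypothetical CPT values (this is not the dissertation's relevance-reasoning algorithm): for a query without evidence, the query probability is a linear function of a single network parameter, so its sensitivity is a constant slope.

```python
import numpy as np

def query_prob(p_rain, p_wet_given_rain=0.9, p_wet_given_dry=0.2):
    """P(Wet = true) in a two-node network Rain -> Wet, as a function of the
    parameter P(Rain = true). The CPT values are hypothetical."""
    return p_rain * p_wet_given_rain + (1 - p_rain) * p_wet_given_dry

# For a query without evidence, the sensitivity function is linear in the
# parameter, so its derivative is constant across the parameter's range.
xs = np.linspace(0.0, 1.0, 5)
ys = np.array([query_prob(x) for x in xs])
print(np.gradient(ys, xs))  # constant slope 0.7 = P(Wet|Rain) - P(Wet|no Rain)
```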