Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study
Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is usually done manually, which is widely considered a hindrance to adopting model-based analysis and development techniques. To overcome this problem, researchers have proposed automatically "learning" models from sample system executions, and have shown that the learned models can sometimes be useful. Many questions remain to be answered, however. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Would an analysis result based on the learned model be more accurate than the estimate we could obtain by sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited.
Comment: 15 pages, plus 2 reference pages; accepted by FASE 2017 in ETAPS.
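The paper's subject, estimating a probabilistic model from observed executions and then analysing it, can be illustrated with a minimal sketch (Python; the trace data, state names, and the simple frequency-counting learner are toy assumptions, not the algorithms studied in the paper). It learns a discrete-time Markov chain from sample traces and compares a reachability probability computed on the learned model with the direct estimate from the same samples:

```python
from collections import Counter, defaultdict

def learn_dtmc(traces):
    """Estimate DTMC transition probabilities by frequency counting.

    Each trace is a sequence of observed states; this learner
    generalizes only as far as the empirical frequencies."""
    counts = defaultdict(Counter)
    for trace in traces:
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

def reach_prob(dtmc, start, goal, horizon=50):
    """Bounded reachability P(reach goal within `horizon` steps),
    computed by iterating the state distribution (no sampling)."""
    dist = {start: 1.0}
    reached = dist.pop(goal, 0.0)
    for _ in range(horizon):
        nxt = defaultdict(float)
        for s, p in dist.items():
            for t, q in dtmc.get(s, {}).items():
                nxt[t] += p * q
        reached += nxt.pop(goal, 0.0)
        dist = nxt
    return reached

# Toy traces from a hypothetical system with states a, b, c (c = "error").
traces = [list("aabac"), list("aabab"), list("abac"), list("aabb")]
model = learn_dtmc(traces)
print("model checking on learned model:", reach_prob(model, "a", "c"))
# Direct estimate from the same samples, for comparison:
print("direct sample estimate:", sum("c" in t for t in traces) / len(traces))
```

The gap between the two printed numbers is exactly the kind of question the paper studies empirically: whether the generalization performed by learning buys accuracy over plain sampling.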
Test generation for high coverage with abstraction refinement and coarsening (ARC)
Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test inputs and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior; the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring the number of code elements executed by a test suite. The code elements not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit high failure detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and to detect infeasible elements. ARC includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites achieving high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We evaluated ARC-B on industry-relevant software and exposed previously unknown failures in a safety-critical software component.
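ARC itself couples DSE with abstraction refinement and coarsening; as a rough caricature of the goal-oriented loop underneath, the sketch below (Python; the toy subject program, branch IDs, and random input search are illustrative stand-ins, not ARC's machinery) tracks branch-coverage goals and keeps only inputs that reach a new one:

```python
import random

# Toy subject: branch IDs are recorded as the function executes.
def subject(x, covered):
    if x > 100:
        covered.add("b1-true")
        if x % 7 == 0:
            covered.add("b2-true")
        else:
            covered.add("b2-false")
    else:
        covered.add("b1-false")

GOALS = {"b1-true", "b1-false", "b2-true", "b2-false"}

def generate_suite(budget=10_000):
    """Goal-oriented loop: retain an input only if it covers a new branch.

    Random search stands in for the symbolic/abstraction machinery;
    goals still uncovered at the end of the budget are reported (they
    may be hard to reach, or infeasible, which ARC detects via
    refinement rather than by exhausting a budget)."""
    suite, covered = [], set()
    for _ in range(budget):
        if covered == GOALS:
            break
        x = random.randint(-1000, 1000)
        before = len(covered)
        subject(x, covered)
        if len(covered) > before:
            suite.append(x)
    return suite, GOALS - covered

suite, missed = generate_suite()
print("suite:", suite, "uncovered goals:", missed)
```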
Life of occam-Pi
This paper considers some questions prompted by a brief review of the history of computing. Why is programming so hard? Why is concurrency considered an “advanced” subject? What’s the matter with Objects? Where did all the Maths go? In searching for answers, the paper looks at some concerns over fundamental ideas within object orientation (as represented by modern programming languages), before focussing on the concurrency model of communicating processes and its particular expression in the occam family of languages. In that focus, it looks at the history of occam, its underlying philosophy (Ockham’s Razor), its semantic foundation on Hoare’s CSP, its principles of process-oriented design and its development over almost three decades into occam-π (which blends in the concurrency dynamics of Milner’s π-calculus). Also presented is an urgent need for rationalisation: occam-π is an experiment that has demonstrated significant results, but now needs time to be spent on careful review and implementing the conclusions of that review. Finally, the future is considered. In particular, is there a future?
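For readers unfamiliar with the process-oriented model the paper champions, here is a minimal approximation of occam-style channels and PAR composition using Python threads and bounded queues (occam channels are synchronous, compiler-checked rendezvous; queues with capacity 1 only approximate that):

```python
import threading, queue

def producer(out):
    """occam-style process: sends values down a channel, then a sentinel."""
    for i in range(5):
        out.put(i)          # roughly: c ! i  in occam notation
    out.put(None)

def doubler(inp, out):
    """Process connected purely by channels: receive, transform, forward."""
    while (v := inp.get()) is not None:   # roughly: c ? v
        out.put(2 * v)
    out.put(None)

def consumer(inp):
    while (v := inp.get()) is not None:
        print(v)

# PAR-like composition: the processes run concurrently, sharing nothing
# but the channels that connect them.
a, b = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
procs = [threading.Thread(target=f, args=args) for f, args in
         [(producer, (a,)), (doubler, (a, b)), (consumer, (b,))]]
for p in procs: p.start()
for p in procs: p.join()
```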
Propagators and Solvers for the Algebra of Modular Systems
To appear in the proceedings of LPAR 21.
Solving complex problems can involve non-trivial combinations of distinct knowledge bases and problem solvers. The Algebra of Modular Systems is a knowledge representation framework that provides a method for formally specifying such systems in purely semantic terms. Formally, an expression of the algebra defines a class of structures. Many expressive formalisms used in practice solve the model expansion task, where a structure is given as input and an expansion of that structure within the defined class of structures is sought (this practice overcomes the common undecidability problem for expressive logics). In this paper, we construct a solver for the model expansion task for a complex modular system from an expression in the algebra and black-box propagators or solvers for the primitive modules. To this end, we define a general notion of propagators equipped with an explanation mechanism, an extension of the algebra to propagators, and a lazy conflict-driven learning algorithm. The result is a framework for seamlessly combining solving technology from different domains to produce a solver for a combined system.
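What a propagator with an explanation mechanism might look like can be sketched as follows (Python; the interface, the clause propagator, and the fixpoint loop are illustrative stand-ins, not the paper's formal definitions). Each propagator returns implied assignments together with the part of the current assignment that forced them, which is the raw material a conflict-driven learner needs:

```python
class Propagator:
    """Interface for a module's black-box propagator: given a partial
    assignment (var -> bool), return implied (var, value, reason) triples,
    where `reason` is the subset of the assignment forcing the implication."""
    def propagate(self, assignment):
        raise NotImplementedError

class ClauseProp(Propagator):
    """Propagator for one clause: unit propagation, with the falsified
    literals serving as the explanation of the implied literal."""
    def __init__(self, clause):          # clause: list of (var, sign)
        self.clause = clause
    def propagate(self, assignment):
        unassigned = [(v, s) for v, s in self.clause if v not in assignment]
        falsified = [(v, s) for v, s in self.clause
                     if v in assignment and assignment[v] != s]
        if len(unassigned) == 1 and len(falsified) == len(self.clause) - 1:
            v, s = unassigned[0]
            reason = {v2: assignment[v2] for v2, _ in falsified}
            return [(v, s, reason)]
        return []

def propagate_all(props, assignment):
    """Run all module propagators to fixpoint over a shared assignment,
    recording reasons so a conflict-driven learner could extract a
    conflict clause from them on failure."""
    reasons, changed = {}, True
    while changed:
        changed = False
        for p in props:
            for v, val, why in p.propagate(assignment):
                if v in assignment:
                    if assignment[v] != val:          # conflict detected:
                        return assignment, reasons, (v, why)  # learn from reasons
                    continue
                assignment[v] = val
                reasons[v] = why
                changed = True
    return assignment, reasons, None

props = [ClauseProp([("a", True)]),                 # a
         ClauseProp([("a", False), ("b", True)])]   # a -> b
print(propagate_all(props, {}))
```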
Synthetic Gene Circuits: Design with Directed Evolution
Synthetic circuits offer great promise for generating insights into nature's underlying design principles or forward engineering novel biotechnology applications. However, construction of these circuits is not straightforward. Synthetic circuits generally consist of components optimized to function in their natural context, not in the context of the synthetic circuit. Combining mathematical modeling with directed evolution offers one promising means for addressing this problem. Modeling identifies mutational targets and limits the evolutionary search space for directed evolution, which alters circuit performance without the need for detailed biophysical information. This review examines strategies for integrating modeling and directed evolution and discusses the utility and limitations of available methods.
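The division of labour the review describes, a model that narrows the mutational search space and an evolutionary loop that explores it, can be caricatured in a few lines (Python; the fitness function and parameter names are toy assumptions, not a published circuit model):

```python
import random

# Toy model of circuit performance: hypothetical fitness as a function of
# two parameters the model flags as sensitive (binding affinity,
# degradation rate); all other parameters are held fixed.
def model_fitness(params):
    affinity, degradation = params["affinity"], params["degradation"]
    return -(affinity - 2.0) ** 2 - (degradation - 0.5) ** 2

MUTATION_TARGETS = ["affinity", "degradation"]   # identified by the model

def directed_evolution(pop_size=20, generations=50, sigma=0.2):
    """Mutate only model-identified targets; select the fittest half."""
    pop = [{"affinity": random.uniform(0, 5),
            "degradation": random.uniform(0, 2)} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=model_fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        for parent in survivors:
            child = dict(parent)
            key = random.choice(MUTATION_TARGETS)
            child[key] += random.gauss(0, sigma)   # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=model_fitness)

print(directed_evolution())
```

In a wet-lab setting the fitness call would be an actual screen or selection; the point of the sketch is only that restricting `MUTATION_TARGETS` shrinks the search space the evolutionary loop must cover.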
Executable cancer models: successes and challenges
Making decisions on how best to treat cancer patients requires the integration of different data sets, including genomic profiles, tumour histopathology, radiological images, proteomic analysis and more. This wealth of biological information calls for novel strategies to integrate such information in a meaningful, predictive and experimentally verifiable way. In this Perspective we explain how executable computational models meet this need. Such models provide a means for comprehensive data integration, can be experimentally validated, are readily interpreted both biologically and clinically, and have the potential to predict effective therapies for different cancer types and subtypes. We explain what executable models are and how they can be used to represent the dynamic biological behaviours inherent in cancer, and demonstrate how such models, when coupled with automated reasoning, facilitate our understanding of the mechanisms by which oncogenic signalling pathways regulate tumours. We explore how executable models have impacted the field of cancer research and argue that extending them to represent a tumour in a specific patient (that is, an avatar) will pave the way for improved personalized treatments and precision medicine. Finally, we highlight some of the ongoing challenges in developing executable models and stress that effective cross-disciplinary efforts are key to forward progress in the field.
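To make "executable" concrete, the sketch below (Python) encodes a toy Boolean network for a fragment of a signalling pathway and exhaustively executes it from every initial state, a crude stand-in for the automated reasoning the authors describe. The wiring is illustrative only, not a validated cancer model:

```python
from itertools import product

# Toy Boolean network: each node's next value is a function of the
# current state. The pathway logic below is illustrative only.
RULES = {
    "EGFR":      lambda s: s["EGFR"],        # input signal, held fixed
    "RAS":       lambda s: s["EGFR"],
    "ERK":       lambda s: s["RAS"],
    "apoptosis": lambda s: not s["ERK"],     # active ERK blocks apoptosis
}

def step(state):
    """Execute one synchronous update of the model."""
    return {node: rule(state) for node, rule in RULES.items()}

def reachable_attractors():
    """Automated-reasoning stand-in: from every initial state, execute
    until a state repeats, and collect that recurring state."""
    attractors = set()
    for values in product([False, True], repeat=len(RULES)):
        state, seen = dict(zip(RULES, values)), set()
        while (key := tuple(state.values())) not in seen:
            seen.add(key)
            state = step(state)
        attractors.add(tuple(sorted(state.items())))
    return attractors

for att in reachable_attractors():
    print(dict(att))
# One can then check, e.g., whether any EGFR-active state still reaches
# an apoptotic attractor, mirroring the queries posed to real models.
```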