Boost the Impact of Continuous Formal Verification in Industry
Software model checking has made significant progress over the last two
decades; however, scalability and adaptability remain major bottlenecks for
practical applications. Here, we describe an approach to
integrate software model checking techniques into the DevOps culture by
exploiting practices such as continuous integration and regression tests. In
particular, our proposed approach looks at the modifications to the software
system since its last verification, and submits them to a continuous formal
verification process, guided by a set of regression test cases. Our vision is
to focus on the developer, integrating formal verification techniques into the
developer workflow through their main software development methodologies and
tools.
Comment: 7 pages
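
The abstract describes the approach only at a high level. Below is a minimal
sketch of what such an incremental, test-guided verification step might look
like in a CI job; the file-level diff granularity, the `run_model_checker`
wrapper, and the choice of CBMC as the backend are assumptions of this sketch,
not the authors' implementation.

```python
import subprocess

def changed_units(last_verified_rev: str, head_rev: str) -> set[str]:
    """List the C files touched since the last verified revision.

    Simplification: a real implementation would map changed diff hunks
    back to the enclosing functions rather than to whole files.
    """
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{last_verified_rev}..{head_rev}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {path for path in out.splitlines() if path.endswith(".c")}

def run_model_checker(source_file: str, entry: str) -> bool:
    """Check one entry point with a bounded model checker.

    Uses CBMC's `--function` option to select the entry point; treating
    exit code 0 as "verified" is a simplification of this sketch.
    """
    result = subprocess.run(["cbmc", source_file, "--function", entry])
    return result.returncode == 0

def verify_increment(last_verified_rev: str, head_rev: str,
                     regression_tests: dict[str, list[str]]) -> bool:
    """Re-verify only what changed, guided by the regression test suite."""
    ok = True
    for unit in changed_units(last_verified_rev, head_rev):
        # Only the regression tests that exercise the changed unit are
        # submitted to the (expensive) formal verification step.
        for entry in regression_tests.get(unit, []):
            ok &= run_model_checker(unit, entry)
    return ok
```

In a CI setup, `verify_increment` would run on every push, with
`last_verified_rev` persisted between runs so each job only pays for the
modifications made since the last successful verification.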
Enhancing Reuse of Constraint Solutions to Improve Symbolic Execution
Constraint solution reuse is an effective approach to saving constraint-solving
time in symbolic execution. Most existing reuse approaches are based on
syntactic or semantic equivalence of constraints; for example, the Green
framework can reuse constraints that have different representations but are
semantically equivalent, by canonizing them into syntactically equivalent
normal forms. However, syntactic or semantic equivalence is not a necessary
condition for reuse: some constraints are neither syntactically nor
semantically equivalent, but their solutions still have potential for reuse.
Existing approaches are unable to recognize and reuse such constraints.
In this paper, we present GreenTrie, an extension to the Green framework,
which supports constraint reuse based on the logical implication relations
among constraints. GreenTrie provides a component, called L-Trie, which stores
constraints and solutions into tries, indexed by an implication partial order
graph of constraints. L-Trie is able to carry out logical reduction and logical
subset and superset querying for given constraints, to check for reuse of
previously solved constraints. We report the results of an experimental
assessment of GreenTrie against the original Green framework, which shows that
our extension achieves better reuse of constraint-solving results and saves
significant symbolic execution time.
Comment: this paper has been submitted to conference ISSTA 201
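
To make the implication-based reuse concrete, here is a toy sketch of the idea,
assuming constraints restricted to single-variable bounds and a flat list in
place of GreenTrie's trie indexed by an implication partial-order graph;
`Atom`, `SolutionStore`, and the encoding of the reuse rules are illustrative
names for this sketch, not GreenTrie's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """A single bound constraint, e.g. Atom('x', '>=', 5) means x >= 5."""
    var: str
    op: str   # '>=' or '<='
    k: int

def implies(a: Atom, b: Atom) -> bool:
    """Does atom `a` logically imply atom `b`? (Same variable and kind.)"""
    if a.var != b.var or a.op != b.op:
        return False
    return a.k >= b.k if a.op == ">=" else a.k <= b.k

class SolutionStore:
    """Toy stand-in for the L-Trie: a flat list instead of a trie, but
    applying the same implication-based reuse rules."""

    def __init__(self):
        self.sat = []    # (frozenset of atoms, satisfying assignment)
        self.unsat = []  # frozensets of atoms known to be unsatisfiable

    def record_sat(self, atoms, model):
        self.sat.append((frozenset(atoms), dict(model)))

    def record_unsat(self, atoms):
        self.unsat.append(frozenset(atoms))

    def lookup(self, query):
        """Try to answer `query` (a set of atoms) without a solver call."""
        q = frozenset(query)
        # Superset-style reuse: a stored *stronger* SAT set implies the
        # query, so its model also satisfies the query.
        for stored, model in self.sat:
            if all(any(implies(s, a) for s in stored) for a in q):
                return ("sat", model)
        # Subset-style reuse: if the query implies a stored UNSAT set,
        # the query is unsatisfiable too.
        for stored in self.unsat:
            if all(any(implies(a, s) for a in q) for s in stored):
                return ("unsat", None)
        return None  # no reuse possible: fall back to the solver

store = SolutionStore()
store.record_sat({Atom("x", ">=", 5)}, {"x": 5})
store.record_unsat({Atom("x", ">=", 3), Atom("x", "<=", 1)})

print(store.lookup({Atom("x", ">=", 3)}))                      # ('sat', {'x': 5})
print(store.lookup({Atom("x", ">=", 4), Atom("x", "<=", 0)}))  # ('unsat', None)
```

Neither query is syntactically or semantically equivalent to a stored
constraint set, yet both are answered without invoking the solver, which is
precisely the kind of reuse the paper targets.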
Finding Regressions in Projects under Version Control Systems
Version Control Systems (VCS) are frequently used to support development of
large-scale software projects. A typical VCS repository of a large project can
contain various intertwined branches consisting of a large number of commits.
If some kind of unwanted behaviour (e.g. a bug in the code) is found in the
project, it is desirable to find the commit that introduced it. Such a commit
is called a regression point. There are two main issues regarding regression
points. First, detecting whether the project after a certain commit is correct
can be very expensive as it may include large-scale testing and/or some other
forms of verification. It is thus desirable to minimise the number of such
queries. Second, there can be several regression points preceding the actual
commit; perhaps a bug was introduced in a certain commit, inadvertently fixed
several commits later, and then reintroduced in a yet later commit. In order to
fix the actual commit it is usually desirable to find the latest regression
point.
Currently used distributed VCSs contain methods for regression identification;
see, e.g., the git bisect tool. In this paper, we present a new regression
identification algorithm that outperforms current tools by decreasing the
number of validity queries. At the same time, our algorithm tends to find the
latest regression points, a feature missing from state-of-the-art algorithms.
The paper provides an experimental evaluation of the proposed algorithm and
compares it to the state-of-the-art tool git bisect on a real data set.
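
For contrast with the proposed algorithm, here is a sketch of the baseline
bisection idea on a linear history; the function names and the toy history are
illustrative, and git bisect itself additionally handles branched histories.

```python
def find_regression(commits, is_good):
    """Bisect a linear history for a regression point.

    `commits` is ordered oldest to newest; `is_good(c)` is the expensive
    validity query (a test suite run, a verification pass, ...). Assumes
    commits[0] is good and commits[-1] is bad. With several good/bad
    transitions in between, plain bisection returns *some* regression
    point, not necessarily the latest one: exactly the gap the paper's
    algorithm addresses, alongside its lower query count.
    """
    lo, hi = 0, len(commits) - 1   # invariant: lo is good, hi is bad
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1
        if is_good(commits[mid]):
            lo = mid
        else:
            hi = mid
    return commits[hi], queries

# Toy history: a bug enters at index 6, so commits 6.. are all bad.
history = list(range(10))
regression, n = find_regression(history, lambda c: c < 6)
print(regression, n)  # 6, found with ~log2(10) queries instead of 9 linear ones
```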
Neural nets - their use and abuse for small data sets
Neural nets can be used for non-linear classification and regression models. They have a big advantage
over conventional statistical tools in that it is not necessary to assume any mathematical form for the
functional relationship between the variables. However, they also have a few associated problems, chief of
which are probably the risk of over-parametrization in the absence of P-values, the lack of appropriate
diagnostic tools, and the difficulties associated with model interpretation. The first of these problems is
particularly important in the case of small data sets. These problems are investigated in the context of real
market research data involving non-linear regression and discriminant analysis. In all cases we compare
the results of the non-linear neural net models with those of conventional linear statistical methods. Our
conclusion is that the theory and software for neural networks have some way to go before the above
problems are solved.
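
A small illustration of the over-parametrization risk on a small data set: the market research data is
not public, so the data below is synthetic, and scikit-learn models stand in for the nets used in the
paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Small data set: 30 observations of a noisy, mildly non-linear response.
n = 30
X = rng.uniform(-2, 2, size=(n, 3))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)

linear = LinearRegression()
# Even a modest hidden layer gives (3 + 1) * 16 + (16 + 1) = 81 parameters,
# nearly three per observation, with no P-values to flag the excess.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)

for name, model in [("linear", linear), ("neural net", net)]:
    # Cross-validation exposes how well each model generalizes beyond
    # the handful of observations it was fitted on.
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:10s} mean CV R^2 = {scores.mean():+.2f}")
```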