A Symbolic Execution Algorithm for Constraint-Based Testing of Database Programs
In so-called constraint-based testing, symbolic execution is a common
technique used as part of the process to generate test data for imperative
programs. Databases are ubiquitous in software, and testing of programs that
manipulate databases is thus essential to enhance the reliability of
software. This work proposes and experimentally evaluates a symbolic
execution algorithm for constraint-based testing of database programs. First, we
describe SimpleDB, a formal language that offers a minimal and well-defined
syntax and semantics for modeling common interaction scenarios between
programs and databases. Second, we detail the proposed algorithm for symbolic
execution of SimpleDB models. This algorithm treats a SimpleDB program as a
sequence of operations over a set of relational variables, modeling both the
database tables and the program variables. By integrating this relational
model of the program with classical static symbolic execution, the algorithm
can generate a set of path constraints for any finite path to test in the
control-flow graph of the program. Solutions of these constraints are test
inputs for the program, including an initial content for the database. When the
program is executed with these inputs, it is guaranteed to follow the
path from which the constraints were generated. Finally, the
algorithm is evaluated experimentally using representative SimpleDB models.
Comment: 12 pages - preliminary work
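The core step described above, collecting a path constraint for one chosen path and solving it to obtain a test input, can be sketched as a toy. This sketch is ours, not the paper's algorithm: the function names, the example program, and the brute-force finite-domain "solver" are all illustrative assumptions, and the relational/database side of SimpleDB is not modeled here.

```python
# Toy path-constraint generation: symbolically execute one fixed path of
# a tiny program (if x > 10: y = x - 10) and solve the collected
# constraints by brute force over a finite domain. Illustrative only.

def path_constraints_for_true_branch():
    """Collect constraints along the path taking the 'then' branch."""
    constraints = []
    constraints.append(lambda x: x > 10)       # branch condition must hold
    constraints.append(lambda x: x - 10 >= 0)  # a postcondition on y = x - 10
    return constraints

def solve(constraints, domain=range(-100, 100)):
    """Naive 'constraint solver': search a finite domain for a model."""
    for x in domain:
        if all(c(x) for c in constraints):
            return x
    return None

# Any solution is a test input guaranteed to drive execution down the
# chosen path of the control-flow graph.
test_input = solve(path_constraints_for_true_branch())
assert test_input is not None and test_input > 10
```

A real symbolic executor would of course hand the constraints to an SMT solver rather than enumerate a domain; the point here is only the path-to-constraints-to-input pipeline.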
Static Application-Level Race Detection in STM Haskell using Contracts
Writing concurrent programs is a hard task, even when using high-level
synchronization primitives such as transactional memories together with a
functional language with well-controlled side effects such as Haskell, because
the interferences between processes can occur at different levels and in very
subtle ways. The problem occurs when a thread leaves or exposes the shared
data in a state that is inconsistent with respect to the application logic or
the real meaning of the data. In this paper, we propose to associate contracts
with transactions, and we define a program transformation that makes it
possible to extend static contract checking to the context of STM Haskell. As
a result, we are able to check statically that each transaction of an STM
Haskell program handles the shared data in such a way that a given consistency
property, expressed in the form of a user-defined boolean function, is
preserved. This ensures that bad interference will not occur during the
execution of the concurrent program.
Comment: In Proceedings PLACES 2013, arXiv:1312.2218
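The idea of a per-transaction consistency contract can be conveyed with a small runtime sketch. Note the difference in kind: the paper checks contracts statically on STM Haskell programs, whereas this Python toy (names and structure our own) enforces the user-defined boolean property dynamically at commit time.

```python
# Dynamic analogue of transaction contracts: a "transaction" runs on a
# copy of the shared data and commits only if a user-defined boolean
# consistency predicate holds afterwards. Illustrative only; the paper's
# check is static, not a runtime guard.

def consistent(account):
    """User-defined consistency property on the shared data."""
    return account["balance"] >= 0

def atomically(shared, body, contract):
    snapshot = dict(shared)        # work on a copy, like a transaction log
    body(snapshot)
    if not contract(snapshot):
        raise ValueError("transaction violates its contract")
    shared.update(snapshot)        # commit only if the contract holds

account = {"balance": 50}

def withdraw(amount):
    return lambda a: a.__setitem__("balance", a["balance"] - amount)

atomically(account, withdraw(20), consistent)
assert account["balance"] == 30

try:
    atomically(account, withdraw(100), consistent)  # would go negative
except ValueError:
    pass
assert account["balance"] == 30    # failed transaction did not commit
```

The static approach of the paper rules out such violations before the program ever runs, rather than aborting them at commit time.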
Offline Specialisation in Prolog Using a Hand-Written Compiler Generator
The so-called "cogen approach" to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both functional and imperative languages. This paper demonstrates that the "cogen" approach is also applicable to the specialisation of logic programs (called partial deduction when applied to pure logic programs) and leads to effective specialisers. Moreover, using good binding-time annotations, the speed-ups of the specialised programs are comparable to the speed-ups obtained with online specialisers. The paper first develops a generic approach to offline partial deduction and then a specific offline partial deduction method, leading to the offline system LIX for pure logic programs. While this is a usable specialiser by itself, its specialisation strategy is used to develop the "cogen" system LOGEN. Given a program, a specification of what inputs will be static, and an annotation specifying which calls should be unfolded, LOGEN generates a specialised specialiser for the program at hand. Running this specialiser with particular values for the static inputs results in the specialised program. While this requires two steps instead of one, the efficiency of the specialisation process is improved in situations where the same program is specialised multiple times. The paper also presents and evaluates an automatic binding-time analysis that is able to derive the annotations. While the derived annotations are still suboptimal compared to hand-crafted ones, they enable non-expert users to use the LOGEN system in a fully automated way. Finally, LOGEN is extended so as to directly support a large part of Prolog's declarative and non-declarative features and so as to be able to perform so-called mixline specialisations. In mixline specialisation some unfolding decisions depend on the outcome of tests performed at specialisation time instead of being hardwired into the specialiser.
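The two-step flow described above, first produce a specialiser for the program, then run it on particular static inputs, can be illustrated with a classic toy. This is our own Python sketch of staged specialisation on the textbook power example; LOGEN itself works on logic programs, not Python, and none of these names come from the paper.

```python
# Toy two-stage specialisation: the exponent n is static, so it is
# bound once at specialisation time, leaving a residual function that
# is then run many times on dynamic inputs. Illustrative only.

def specialise_power(n):
    """Stage 1: a 'specialiser' for power with static exponent n.
    It fixes the loop bound now and returns the residual program."""
    def residual(x):
        result = 1
        for _ in range(n):   # bound fixed at specialisation time
            result *= x
        return result
    return residual

cube = specialise_power(3)   # step 1: specialise once
assert cube(2) == 8          # step 2: reuse the residual program
assert cube(5) == 125
```

As in the abstract, the payoff of the extra stage appears when the same program is specialised (or the residual is reused) many times: the static work is paid for once.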
Ultraviolet asymptotics for quasiperiodic AdS_4 perturbations
Spherically symmetric perturbations in AdS-scalar field systems of small
amplitude epsilon, approximately periodic on time scales of order 1/epsilon^2
(in the sense that no significant transfer of energy between the AdS normal
modes occurs), have played an important role in considerations of AdS stability.
They are seen as anchors of stability islands where collapse of small
perturbations to black holes does not occur. (This collapse, if it happens,
typically develops on time scales of the order 1/epsilon^2.) We construct an
analytic treatment of the frequency spectra of such quasiperiodic
perturbations, paying special attention to the large frequency asymptotics. For
the case of a self-interacting phi^4 scalar field in a non-dynamical AdS
background, we arrive at a fairly complete analytic picture involving
quasiperiodic spectra with an exponential suppression modulated by a power law
at large mode numbers. For the case of dynamical gravity, the structure of the
large frequency asymptotics is more complicated. We give analytic explanations
for the general qualitative features of quasiperiodic solutions localized
around a single mode, in close parallel to our discussion of the probe scalar
field, and find numerical evidence for logarithmic modulations in the
gravitational quasiperiodic spectra existing on top of the formulas previously
reported in the literature.
Comment: 18 pages; v3: minor improvements, published version
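Schematically, the large-mode-number behaviour described above for the probe scalar, exponential suppression modulated by a power law, can be written as follows; the constants C, gamma, and rho are placeholders of ours, not values taken from the paper:

```latex
% Schematic form only: spectral amplitudes at large mode number n fall
% off exponentially, modulated by a power law. \gamma and \rho are
% placeholder constants, not results quoted from the paper.
\varepsilon_n \;\sim\; C \, n^{-\gamma} \, e^{-\rho n}
\qquad (n \to \infty)
```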
Using school performance feedback: perceptions of primary school principals
The present study focuses on the perception of primary school principals of school performance feedback (SPF) and of the actual use of this information. This study is part of a larger project which aims to develop a new school performance feedback system (SPFS). The study builds on an eclectic framework that integrates the literature on SPFSs. Through in-depth interviews with 16 school principals, 4 clusters of factors influencing school feedback use were identified: context, school and user, SPFS, and support. This study refines the description of feedback use in terms of phases and types of use and effects on school improvement. Although school performance feedback can be seen as an important instrument for school improvement, no systematic use of feedback by school principals was observed. This was partly explained by a lack of skills, time, and support.
Spin-density wave in Cr: nesting versus low-lying thermal excitations
It is well known that present versions of density functional theory do not predict the experimentally observed spin-density wave state to be the ground state of Cr. Recently, a so-called "nodon model" has been proposed as an alternative way to reconcile theory and experiment: the ground state of Cr is truly antiferromagnetic, and the spin-density wave appears due to low-lying thermal excitations ("nodons"). We examine in this paper whether the postulated properties of these nodons are reproduced by ab initio calculations.
Using Sensitivity as a Method for Ranking the Test Cases Classified by Binary Decision Trees
Usually, data mining projects that rely on decision trees for classifying test cases use the
probabilities provided by these trees to rank the classified test cases. A better method is needed
for ranking test cases that have already been classified by a binary decision tree, because these
probabilities are not always accurate and reliable enough. One reason for this is that the probability
estimates computed by existing decision tree algorithms are identical for all the different cases in a
particular leaf of the tree. This alone means that the probability estimates given by decision tree
algorithms cannot be used as an accurate means of deciding whether a test case has been correctly
classified. Isabelle Alvarez has proposed a new method that can be used to rank the test cases
classified by a binary decision tree [Alvarez, 2004]. In this paper we report the results of a
comparison of different ranking methods based on the probability estimate, on the sensitivity of a
particular case, or on both.
