Exact Gap Computation for Code Coverage Metrics in ISO-C
Test generation and test data selection are difficult tasks for model-based testing. Tests for a program can be merged into a test suite, and a great deal of research addresses quantifying and improving the quality of such suites. Code coverage metrics estimate this quality: a suite is considered good if its coverage value is high, ideally 100%. Unfortunately, achieving 100% code coverage may be impossible, for example because of dead code. This leaves a gap between the feasible and the theoretical maximal code coverage value. Our review of the literature indicates that no current research is concerned with computing this gap exactly. This paper presents a framework to compute such gaps exactly for an ISO-C compatible semantics and similar languages, and describes an efficient approximation of the gap in all other cases. Thus, a tester can decide whether additional tests are possible or necessary to achieve better coverage.
Comment: In Proceedings MBT 2012, arXiv:1202.582
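As a back-of-the-envelope illustration of the gap the paper targets (not the paper's actual framework), the sketch below treats coverage goals whose infeasibility is already known, e.g. branches inside dead code, as excluded from the feasible maximum; all identifiers are hypothetical.

```python
# Toy illustration of the coverage "gap": the difference between the
# theoretical maximum (all goals covered) and the feasible maximum
# (goals not known to be infeasible, e.g. branches in dead code).

def coverage_gap(all_goals, infeasible_goals):
    """Return (feasible_max, gap) as fractions of the theoretical 100%."""
    total = len(all_goals)
    feasible = total - len(infeasible_goals & all_goals)
    feasible_max = feasible / total
    return feasible_max, 1.0 - feasible_max

# Example: 10 branches, 2 of which sit in dead code and can never be taken.
branches = {f"b{i}" for i in range(10)}   # hypothetical branch-goal IDs
dead = {"b3", "b7"}                       # assumed infeasible branches
feasible_max, gap = coverage_gap(branches, dead)
print(f"feasible maximum = {feasible_max:.0%}, gap = {gap:.0%}")  # 80%, 20%
```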
CHIMP: A SIMPLE POPULATION MODEL FOR USE IN INTEGRATED ASSESSMENT OF GLOBAL ENVIRONMENTAL CHANGE
We present the Canberra-Hamburg Integrated Model for Population (CHIMP), a new global population model for long-term projections. Distinguishing features of this model, compared to other models for secular population projections, are that (a) mortality, fertility, and migration are partly driven by per capita income; (b) large parts of the model have been estimated rather than calibrated; and (c) the model is in the public domain. Scenario experiments show similarities but also differences with other models. Similarities include rapid aging of the population and an eventual reversal of global population growth. The main difference is that CHIMP projects substantially higher populations, particularly in Africa, primarily because our data indicate a slower fertility decline than assumed elsewhere. Model runs show a strong interaction between population growth and economic growth, and a weak feedback of climate change on population growth.
Keywords: population model, long-term projections, global change, integrated assessment
GPU Accelerated counterexample generation in LTL model checking
Strongly Connected Component (SCC) based searching is one of the most popular LTL model checking algorithms. When the SCCs are huge, the counterexample generation process can be time-consuming, especially when dealing with fairness assumptions. In this work, we propose a GPU accelerated counterexample generation algorithm, which improves performance by parallelizing the Breadth First Search (BFS) used in counterexample generation. BFS work is irregular, which makes it hard to allocate resources and can lead to load imbalance. We make use of the features of the latest CUDA compute architecture, NVIDIA Kepler GK110, namely dynamic parallelism and its memory hierarchy, to handle the irregular searching pattern of BFS. We build dynamic queue management, a task scheduler, and path recording such that the counterexample generation process can be completed entirely on the GPU without involving the CPU. We have implemented the proposed approach in the PAT model checker. Our experiments show that our approach is effective and scalable.
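For orientation, the following host-side Python sketch shows the predecessor-recording BFS that underlies witness-path extraction; the paper's contribution is running this kind of search on the GPU with dynamic parallelism, which the sketch does not attempt to reproduce. The graph encoding and function names are hypothetical.

```python
from collections import deque

# Sketch: BFS that records predecessors so a witness path (one segment of a
# lasso-shaped counterexample) can be reconstructed once an accepting state
# inside the SCC is reached.

def bfs_witness(succ, source, is_accepting):
    """succ: state -> iterable of successors; return a path source..accepting or None."""
    parent = {source: None}
    frontier = deque([source])
    while frontier:
        s = frontier.popleft()
        if is_accepting(s):
            path = []
            while s is not None:          # walk parent links back to the source
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in succ(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None                           # no accepting state reachable
```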
Efficient Emptiness Check for Timed B\"uchi Automata (Extended version)
The B\"uchi non-emptiness problem for timed automata refers to deciding if a
given automaton has an infinite non-Zeno run satisfying the B\"uchi accepting
condition. The standard solution to this problem involves adding an auxiliary
clock to take care of the non-Zenoness. In this paper, it is shown that this
simple transformation may sometimes result in an exponential blowup. A
construction avoiding this blowup is proposed. It is also shown that in many
cases, non-Zenoness can be ascertained without extra construction. An
on-the-fly algorithm for the non-emptiness problem, using non-Zenoness
construction only when required, is proposed. Experiments carried out with a
prototype implementation of the algorithm are reported.
Comment: Published in the Special Issue on Computer Aided Verification - CAV 2010; Formal Methods in System Design, 201
SAT-based Explicit LTL Reasoning
We present here a new explicit reasoning framework for linear temporal logic
(LTL), which is built on top of propositional satisfiability (SAT) solving. As
a proof-of-concept of this framework, we describe a new LTL satisfiability
tool, Aalta\_v2.0, which is built on top of the MiniSAT SAT solver. We test the
effectiveness of this approach by demonstrating that Aalta\_v2.0 significantly
outperforms all existing LTL satisfiability solvers. Furthermore, we show that
the framework can be extended from propositional LTL to assertional LTL (where
we allow theory atoms), by replacing MiniSAT with the Z3 SMT solver, and
demonstrating that this can yield an exponential improvement in performance.
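To illustrate the flavour of such explicit, solver-backed LTL reasoning (this is not Aalta's implementation), the sketch below applies the standard expansion law p U q ≡ q ∨ (p ∧ X(p U q)), replaces the temporal obligation X(p U q) by a fresh Boolean, and lets an off-the-shelf solver enumerate the possible "present" choices. It assumes the z3-solver Python package is installed.

```python
from z3 import Bool, Solver, Or, And, sat, is_true

p, q, n = Bool("p"), Bool("q"), Bool("n")   # n abbreviates the obligation X(p U q)
present = Or(q, And(p, n))                  # one-step expansion of p U q

s = Solver()
s.add(present)
while s.check() == sat:
    m = s.model()
    # complete the model so every variable gets a concrete truth value
    vals = {v: m.evaluate(v, model_completion=True) for v in (p, q, n)}
    print({str(v): is_true(vals[v]) for v in (p, q, n)})
    # block this assignment so the next check returns a different one
    s.add(Or(*[v != vals[v] for v in (p, q, n)]))
```

Each printed assignment corresponds to one way of satisfying the formula "now", with n recording whether the until-obligation is passed on to the next state; an explicit framework explores these choices as transitions.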
The concept of the direct consumer and the case law of the Superior Tribunal de Justiça
- Text authored by a Justice of the Superior Tribunal de Justiça. It addresses, from a legal and economic perspective, the concept of the direct consumer, placing it in context, on the one hand, with the two schools of thought formulated on the subject and, on the other, with recent case-law developments at the Superior Tribunal de Justiça (STJ). It notes that the Subjective School holds that acquiring or using a good or service for the exercise of an economic, civil, or business activity (CC/02, art. 966, caput and sole paragraph) negates an essential requirement for the formation of a consumer relationship, namely that the consumer be the final recipient of the enjoyment of the good. It reports that the line of precedents adopted by the Fourth and Sixth Panels of the STJ is consistent with the premises of the subjective (or finalist) theory, restricting the interpretation of art. 2 of the CDC to the factual and also economic final recipient of the good or service. It also presents the theory of the Objective School, which holds that acquiring or using a good or service as its factual final recipient characterizes a consumer relationship, by virtue of the objective element, namely the act of consumption. It further reports that the line of precedents adopted by the First and Third Panels of the STJ follows the premises of the objective (or maximalist) theory, regarding as a consumer the factual final recipient of the good or service, even if it is used in the exercise of a profession or business. It points to the STJ's jurisprudential tendency toward the prevalence of the Objective School. It presents a recent precedent (Conflito de Competência No. 41056/SP, judged on 23/06/2004) in which the Second Section of the STJ adopted, by majority, the concept of the direct consumer chosen by the Objective School.
A Fully Verified Executable LTL Model Checker
We present an LTL model checker whose code has been completely verified using the Isabelle theorem prover. The checker consists of over 4000 lines of ML code. The code is produced using recent Isabelle technology called the Refinement Framework, which allows us to split its correctness proof into (1) the proof of an abstract version of the checker, consisting of a few hundred lines of “formalized pseudocode”, and (2) a verified refinement step in which mathematical sets and other abstract structures are replaced by implementations of efficient structures like red-black trees and functional arrays. This leads to a checker that, while still slower than unverified checkers, can already be used as a trusted reference implementation against which advanced implementations can be tested. We report on the structure of the checker, the development process, and some experiments on standard benchmarks.
Senescent ground tree rewrite systems
Ground Tree Rewrite Systems with State are known to have an undecidable
control state reachability problem. Taking inspiration from the recent
introduction of scope-bounded multi-stack pushdown systems, we define Senescent
Ground Tree Rewrite Systems. These are a restriction of ground tree rewrite
systems with state such that nodes of the tree may no longer be rewritten after
having witnessed an a priori fixed number of control state changes. As well as
generalising scope-bounded multi-stack pushdown systems, we show, via reductions
to and from reset Petri nets, that these systems have an Ackermann-complete
control state reachability problem. However, reachability of a regular set of
trees remains undecidable.
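For intuition, a minimal sketch of a single ground tree rewrite step is given below; it omits control states and the senescence restriction (the bound on witnessed control state changes), and the tree encoding is purely illustrative.

```python
# Minimal sketch (illustrative only): a ground tree rewrite rule replaces one
# occurrence of a ground subtree `lhs` with `rhs`.
# Trees are nested tuples: (label, child, child, ...).

def rewrite_once(tree, lhs, rhs):
    """Return a copy of tree with one occurrence of lhs replaced by rhs, or None."""
    if tree == lhs:
        return rhs
    label, *children = tree
    for i, child in enumerate(children):
        new_child = rewrite_once(child, lhs, rhs)
        if new_child is not None:
            return (label, *children[:i], new_child, *children[i + 1:])
    return None

t = ("f", ("a",), ("g", ("a",)))
print(rewrite_once(t, ("a",), ("b",)))   # ('f', ('b',), ('g', ('a',)))
```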
Variations on Multi-Core Nested Depth-First Search
Recently, two new parallel algorithms for on-the-fly model checking of LTL
properties were presented at the same conference: Automated Technology for
Verification and Analysis, 2011. Both approaches extend Swarmed NDFS, which
runs several sequential NDFS instances in parallel. While parallel random
search already speeds up detection of bugs, the workers must share some global
information in order to speed up full verification of correct models. The two
algorithms differ considerably in the global information shared between
workers, and in the way they synchronize.
Here, we provide a thorough experimental comparison between the two
algorithms, by measuring the runtime of their implementations on a multi-core
machine. Both algorithms were implemented in the same framework of the model
checker LTSmin, using similar optimizations, and have been subjected to the
full BEEM model database.
Because both algorithms have complementary advantages, we constructed an
algorithm that combines both ideas. This combination clearly has an improved
speedup. We also compare the results with the alternative parallel algorithm
for accepting cycle detection OWCTY-MAP. Finally, we study a simple statistical
model for input models that do contain accepting cycles. The goal is to
distinguish the speedup due to parallel random search from the speedup that can
be attributed to clever work sharing schemes.
Comment: In Proceedings PDMC 2011, arXiv:1111.006
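For reference, the sketch below shows the classic sequential nested depth-first search for accepting cycles that the swarmed and multi-core variants build on; how the parallel algorithms share colouring information between workers is not modelled here, and the graph interface is hypothetical.

```python
# Sketch of sequential NDFS for accepting-cycle detection: an outer (blue)
# DFS launches, in post-order, an inner (red) DFS from each accepting state,
# looking for a cycle back to that seed state.

def ndfs(succ, init, accepting):
    """succ: state -> iterable of successors; accepting: set of accepting states."""
    blue, red = set(), set()

    def dfs_red(s, seed):
        red.add(s)
        for t in succ(s):
            if t == seed:                       # cycle through the accepting seed
                return True
            if t not in red and dfs_red(t, seed):
                return True
        return False

    def dfs_blue(s):
        blue.add(s)
        for t in succ(s):
            if t not in blue and dfs_blue(t):
                return True
        # post-order: only now start the nested search from accepting states
        return s in accepting and dfs_red(s, s)

    return dfs_blue(init)                       # True iff an accepting cycle exists
```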