On disjunctions, confluences, and the centre of gravity of philosophical logic
In the words of Burgess [B2009], the centre of gravity of Philosophical Logic is, nowadays, the scientific field known in English as Theoretical Computer Science. I am not quite sure how to translate this designation, but, if this is so, the field it names seems relevant to a panel on Mathematics and Computer Science within a colloquium on the disjunctions and confluences between the Humanities and the Sciences. The small technical piece of work I shall write about here belongs to Proof Theory; it belongs, therefore, to one of the main compartments of Mathematical Logic, among those closest to Theoretical Computer Science, and it concerns the distinctions between classical and non-classical logics. The study of non-classical logics is, in turn, another definition of Philosophical Logic.
The technical work in question occupies the second part of this article and deals (curiously enough) with the logical operation of disjunction, in the context of classical logic. The aim of the exercise is not so much to publicize the final result as to illustrate how, through technical exercises of this kind, which make up the day-to-day work of some logicians, one indirectly touches questions that may be of interest to some colleagues in the Humanities. This attempted confluence will occupy the third part of this article.
With the article's aim made explicit, its tone becomes evident: the scientist will try, in a language that is not his own, to speak to the humanist about problems foreign to the latter, without losing hope of being transparent. We shall see.

Fundação para a Ciência e a Tecnologia (FCT)
Strengthening Model Checking Techniques with Inductive Invariants
This paper describes optimized techniques to efficiently compute and reap benefits from inductive invariants within SAT-based model checking. We address sequential circuit verification, and we consider both equivalences and implications between pairs of nodes in the logic networks. First, we present a very efficient dynamic procedure, based on equivalence classes and incremental SAT, specifically oriented to reduce the set of checked invariants. Then, we show how to effectively integrate the computation of inductive invariants within state-of-the-art SAT-based model checking procedures. Experiments (on more than 600 designs) show the robustness of our approach on verification instances on which stand-alone techniques fail.
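As a toy illustration of the drop-and-retry scheme behind this kind of invariant computation (not the paper's incremental-SAT implementation), the sketch below checks a set of candidate node relations for inductiveness on a hypothetical 3-bit circuit, with brute-force enumeration standing in for the SAT solver; all names (`step`, `CANDIDATES`, `inductive_subset`) are ours:

```python
from itertools import product

# Hypothetical 3-bit sequential circuit standing in for a logic network:
# next-state function over state bits (x0, x1, x2).
def step(s):
    x0, x1, x2 = s
    return (x0 ^ x1, x0 ^ x1, x1 & x2)

INIT = [(0, 0, 0)]                        # initial state(s)
STATES = list(product((0, 1), repeat=3))  # all 8 states

# Candidate invariants between "nodes": one equivalence, one implication.
CANDIDATES = {
    "x0 == x1": lambda s: s[0] == s[1],
    "x2 -> x1": lambda s: (not s[2]) or bool(s[1]),
}

def inductive_subset(cands):
    """Largest subset of candidates whose conjunction is inductive:
    check base case and inductive step for each candidate, drop any
    failing one, and repeat until a fixpoint is reached."""
    live = dict(cands)
    changed = True
    while changed:
        changed = False
        conj = lambda s: all(p(s) for p in live.values())
        for name, pred in list(live.items()):
            base_ok = all(pred(s) for s in INIT)
            # preserved from every state satisfying all surviving candidates
            step_ok = all(pred(step(s)) for s in STATES if conj(s))
            if not (base_ok and step_ok):
                del live[name]
                changed = True
    return set(live)

# Here the implication holds on all reachable states but is not
# inductive, so the refinement loop drops it and keeps the equivalence.
print(inductive_subset(CANDIDATES))
```

A SAT-based engine plays the same game, except that each `all(...)` over `STATES` becomes one incremental solver query, and equivalence classes keep the number of candidate pairs manageable.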
Machine-Checked Proofs For Realizability Checking Algorithms
Virtual integration techniques focus on building architectural models of
systems that can be analyzed early in the design cycle to try to lower cost,
reduce risk, and improve quality of complex embedded systems. Given appropriate
architectural descriptions, assume/guarantee contracts, and compositional
reasoning rules, these techniques can be used to prove important safety
properties about the architecture prior to system construction. For these
proofs to be meaningful, each leaf-level component contract must be realizable;
i.e., it is possible to construct a component such that for any input allowed
by the contract assumptions, there is some output value that the component can
produce that satisfies the contract guarantees. We have recently proposed (in
[1]) a contract-based realizability checking algorithm for assume/guarantee
contracts over infinite theories supported by SMT solvers such as linear
integer/real arithmetic and uninterpreted functions. In that work, we used an
SMT solver and an algorithm similar to k-induction to establish the
realizability of a contract, and justified our approach via a hand proof. Given
the central importance of realizability to our virtual integration approach, we
wanted additional confidence that our approach was sound. This paper describes
a complete formalization of the approach in the Coq proof and specification
language. During formalization, we found several small mistakes and missing
assumptions in our reasoning. Although these did not compromise the correctness
of the algorithm used in the checking tools, they point to the value of
machine-checked formalization. In addition, we believe this is the first
machine-checked formalization of a realizability algorithm.

Comment: 14 pages, 1 figure
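The notion of realizability described above can be shown in miniature. The sketch below checks one-step (memoryless) realizability over small finite domains by enumeration; the paper's algorithm handles stateful contracts over infinite SMT theories via a k-induction-like scheme, so this is only a simplified analogue, and the example contracts are hypothetical:

```python
# Hypothetical assume/guarantee contracts over a small finite domain.
# One-step (memoryless) realizability: for every input allowed by the
# assumption there must exist an output satisfying the guarantee.
# Enumeration stands in for the SMT solver used by the real algorithm.

INPUTS = range(5)
OUTPUTS = range(5)

def realizable(assume, guarantee, inputs, outputs):
    return all(any(guarantee(i, o) for o in outputs)
               for i in inputs if assume(i))

# Realizable: for every allowed input, the component can answer in + 1.
a1 = lambda i: i <= 3
g1 = lambda i, o: o == i + 1

# Unrealizable: for input 0 there is no output strictly below it.
a2 = lambda i: True
g2 = lambda i, o: o < i
```

Note the quantifier alternation (for all inputs, there exists an output); it is this alternation, over infinite theories and across execution steps, that makes the full problem hard and the hand proof worth machine-checking.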
The Normalization Theorem for the First-Order Classical Natural Deduction with Disjunctive Syllogism
In the present paper, we prove the normalization theorem and the consistency of first-order classical logic with disjunctive syllogism. First, we propose the natural deduction system SCD for classical propositional logic, with rules for conjunction, implication, negation, and disjunction; the rules for disjunctive syllogism are regarded as the rules for disjunction.
After proving the normalization theorem and the consistency of SCD, we extend SCD to the system SPCD for first-order classical logic with disjunctive syllogism. It can be shown that SPCD is a conservative extension of SCD. The normalization theorem and the consistency of SPCD are then given.
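For readers unfamiliar with the rule, disjunctive syllogism licenses the inference from A ∨ B and ¬A to B. A machine-checked statement of the propositional rule (here in Lean 4, independent of the paper's systems SCD and SPCD) looks like this:

```lean
-- Disjunctive syllogism: from A ∨ B and ¬A, infer B.
example (A B : Prop) (h : A ∨ B) (na : ¬A) : B :=
  h.resolve_left na

-- The same inference spelled out by case analysis on the disjunction:
example (A B : Prop) (h : A ∨ B) (na : ¬A) : B :=
  match h with
  | Or.inl a => absurd a na   -- the left case contradicts ¬A
  | Or.inr b => b             -- the right case gives B directly
```

The case-analysis version makes visible why the paper can treat disjunctive syllogism as a disjunction rule: it is an elimination of ∨ in which one branch is closed by the negated disjunct.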
An Overview of Backtrack Search Satisfiability Algorithms
Propositional Satisfiability (SAT) is often used as the underlying model for a significan
Generating Property-Directed Potential Invariants By Backward Analysis
This paper addresses the issue of lemma generation in a k-induction-based
formal analysis of transition systems, in the linear real/integer arithmetic
fragment. A backward analysis, powered by quantifier elimination, is used to
output preimages of the negation of the proof objective, viewed as unauthorized
states, or gray states. Two heuristics are proposed to take advantage of this
source of information. First, a thorough exploration of the possible
partitionings of the gray state space discovers new relations between state
variables, representing potential invariants. Second, an inexact exploration
regroups and over-approximates disjoint areas of the gray state space, also to
discover new relations between state variables. k-induction is used to isolate
the invariants and check if they strengthen the proof objective. These
heuristics can be used on the first preimage of the backward exploration, and
each time a new one is output, refining the information on the gray states. In
our context of critical avionics embedded systems, we show that our approach is
able to outperform other academic or commercial tools on examples of interest
in our application field. The method is introduced and motivated through two
main examples, one of which was provided by Rockwell Collins, in a
collaborative formal verification framework.

Comment: In Proceedings FTSCS 2012, arXiv:1212.657
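To make the role of k-induction concrete, the sketch below runs plain k-induction on a small deterministic toy system of our own, with enumeration standing in for the SMT solver. The property fails at k = 1 and k = 2 because of unreachable counterexamples to induction (the "gray states" above) and is proved at k = 3; strengthening with a generated invariant would rule such states out at a smaller k:

```python
# Toy deterministic transition system of our own: states 0..5,
# initial state 0, next state is x + 2 below 4, reset to 0 otherwise.
STATES = range(6)
INIT = [0]

def step(x):
    return x + 2 if x < 4 else 0

def prop(x):          # proof objective: the "bad" state 5 is never reached
    return x != 5

def k_induction(k):
    # Base case: the property holds on the first k+1 states from init.
    for s0 in INIT:
        x = s0
        for _ in range(k + 1):
            if not prop(x):
                return False
            x = step(x)
    # Inductive step: no k-step path of property states may end in a
    # violation (starting states are NOT restricted to reachable ones).
    for s0 in STATES:
        trace = [s0]
        for _ in range(k):
            trace.append(step(trace[-1]))
        if all(prop(s) for s in trace[:-1]) and not prop(trace[-1]):
            return False  # counterexample to induction (a "gray" path)
    return True
```

Here the unreachable paths 3 → 5 and 1 → 3 → 5 defeat k = 1 and k = 2 respectively. A potential invariant such as "x is even", discovered from the gray states, would make the property 1-inductive, which is the effect the paper's backward analysis aims for.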
Linear Temporal Logic and Propositional Schemata, Back and Forth (extended version)
This paper relates the well-known Linear Temporal Logic with the logic of
propositional schemata introduced by the authors. We prove that LTL is
equivalent to a class of schemata in the sense that polynomial-time reductions
exist from one logic to the other. Some consequences regarding complexity are given. We report on first experiments and analyze the consequences for possible improvements in existing implementations.

Comment: Extended version of a paper submitted at TIME 2011: contains proofs, additional examples and figures, an additional comparison between classical LTL/schemata algorithms up to the provided translations, and an example of how to do model checking with schemata; 36 pages, 8 figures
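For readers less familiar with LTL itself, the sketch below evaluates a small LTL fragment (X, G, F, U) over a finite trace by directly unfolding the semantics. The encoding and names are ours, and finite-trace evaluation is a simplification of the infinite-word semantics that the LTL/schemata reductions actually work with:

```python
# Minimal LTL evaluator over a finite trace: a trace is a list of sets
# of atomic propositions, and a formula is a nested tuple.
def holds(phi, trace, i=0):
    op = phi[0]
    if op == "ap":    # atomic proposition
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":     # next (false at the last position of a finite trace)
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "G":     # globally: holds at every remaining position
        return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "F":     # eventually: holds at some remaining position
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "U":     # until: phi2 eventually holds, phi1 holds before it
        return any(holds(phi[2], trace, j)
                   and all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

trace = [{"p"}, {"p"}, {"q"}]   # p, p, then q
```

For example, "p until q" holds on this trace, while "globally p" does not; a schema-based encoding expresses the same unfoldings as parameterized propositional formulas.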
Answer Set Solving with Generalized Learned Constraints
Conflict learning plays a key role in modern Boolean constraint solving. First advanced in satisfiability testing, it has since become a base technology in many neighboring fields, among them answer set programming (ASP). However, learned constraints are only valid for the problem instance currently being solved and do not carry over to similar instances. We address this issue in ASP and introduce a framework featuring an integrated feedback loop that allows for reusing conflict constraints. The idea is to extract (propositional) conflict constraints, generalize and validate them, and reuse them as integrity constraints. Although we explore our approach in the context of dynamic applications based on transition systems, it is driven by the ultimate objective of overcoming the issue that learned knowledge is bound to specific problem instances. We implemented this workflow in two systems: a variant of the ASP solver clasp that extracts integrity constraints, along with a downstream system for generalizing and validating them.
Linear Temporal Logic and Propositional Schemata, Back and Forth
Session: p-Automata and Obligation Games - http://www.isp.uni-luebeck.de/time11/

This paper relates the well-known Linear Temporal Logic with the logic of propositional schemata introduced elsewhere by the authors. We prove that LTL is equivalent to a class of schemata in the sense that polynomial-time reductions exist from one logic to the other. Some consequences regarding complexity are given. We report on first experiments and analyze the consequences for possible improvements in existing implementations.