Solving categorical syllogisms with singular premises
We elaborate on the approach to syllogistic reasoning based on "case identification" (Stenning & Oberlander, 1995; Stenning & Yule, 1997). It is shown that this can be viewed as the formalisation of a method of proof that dates back to Aristotle, namely proof by exposition (ecthesis), and that there are traces of this method in the strategies described by a number of psychologists, from Störring (1908) to the present day. It was hypothesised that rendering individual cases explicit in the premises would increase the chance that reasoners engage in a proof by exposition, and thus improve performance. To do so, we used syllogisms with singular premises (e.g., "this X is Y"). This resulted in a uniform increase in performance compared to performance on the associated standard syllogisms. These results cannot be explained by the main theories of syllogistic reasoning in their current state.
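The notion of syllogistic validity behind such experiments can be made concrete with a brute-force checker. The sketch below is a hypothetical illustration, not the authors' case-identification method: it tests a categorical syllogism by enumerating every interpretation of the terms X, Y, Z as subsets of a small domain (three elements suffice here), and ignores Aristotelian existential import.

```python
from itertools import product

DOMAIN = range(3)  # a small domain is enough for these examples

def all_are(a, b):
    # "All A are B" (vacuously true when A is empty; Aristotelian
    # existential import is deliberately not modeled)
    return all(b[i] for i in DOMAIN if a[i])

def some_are(a, b):
    # "Some A are B"
    return any(a[i] and b[i] for i in DOMAIN)

def valid(premise1, premise2, conclusion):
    # Valid iff no interpretation of X, Y, Z makes both premises
    # true while the conclusion is false.
    n = len(DOMAIN)
    for bits in product([False, True], repeat=3 * n):
        X, Y, Z = bits[:n], bits[n:2 * n], bits[2 * n:]
        if premise1(X, Y, Z) and premise2(X, Y, Z) and not conclusion(X, Y, Z):
            return False
    return True

# Barbara: All Y are Z; All X are Y; therefore All X are Z.
barbara = valid(lambda X, Y, Z: all_are(Y, Z),
                lambda X, Y, Z: all_are(X, Y),
                lambda X, Y, Z: all_are(X, Z))

# An invalid form: All Y are Z; Some X are Y; therefore All X are Z.
invalid_form = valid(lambda X, Y, Z: all_are(Y, Z),
                     lambda X, Y, Z: some_are(X, Y),
                     lambda X, Y, Z: all_are(X, Z))
```

A singular premise such as "this X is Y" asserts membership of one named individual, which is what pins down the explicit case that a proof by exposition starts from.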
Profiling cyber attackers using case-based reasoning
Computer security would arguably benefit from more information on the characteristics of the particular human attacker behind a security incident. Nevertheless, technical security mechanisms have always focused on the attack's characteristics rather than the attacker's. The latter is a challenging problem, as relevant data cannot easily be found. We argue that the cyber traces left by a human attacker during an intrusion attempt can help towards building a profile of the particular person. To illustrate this concept, we have developed an approach using case-based reasoning that indirectly measures an attacker's characteristics for given attack scenarios. Our results reveal that case-based reasoning has the potential of being used to assist security and forensic investigators in profiling human attackers.
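The core retrieve step of case-based reasoning can be sketched in a few lines. Everything below is invented for illustration (the feature names, the profiles, and the flat similarity measure are assumptions, not the paper's actual case representation):

```python
from dataclasses import dataclass

# Hypothetical case base: each past incident pairs observed attack
# features with the characteristic attributed to the attacker.
@dataclass
class Case:
    features: dict   # e.g. {"tool": "nmap", "speed": "fast"}
    profile: str     # attributed attacker characteristic

def similarity(a: dict, b: dict) -> float:
    """Fraction of feature keys on which the two cases agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, new_features):
    """CBR 'retrieve' step: return the most similar past case."""
    return max(case_base, key=lambda c: similarity(c.features, new_features))

cases = [
    Case({"tool": "nmap", "speed": "fast", "persistence": "low"}, "opportunist"),
    Case({"tool": "custom", "speed": "slow", "persistence": "high"}, "targeted"),
]
best = retrieve(cases, {"tool": "custom", "speed": "slow", "persistence": "low"})
```

A full CBR cycle would follow retrieval with reuse, revision, and retention of the new case; only the first step is shown here.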
Bounded Model Checking for Asynchronous Hyperproperties
Many types of attacks on confidentiality stem from the nondeterministic nature of the environment that computer programs operate in (e.g., schedulers and asynchronous communication channels). In this paper, we focus on verification of confidentiality in nondeterministic environments by reasoning about asynchronous hyperproperties. First, we generalize the temporal logic A-HLTL to allow nested trajectory quantification, where a trajectory determines how different execution traces may advance and stutter. We propose a bounded model checking algorithm, based on QBF solving, for a fragment of the generalized A-HLTL, and evaluate it on case studies covering concurrent programs, scheduling attacks, compiler optimization, speculative execution, and cache timing attacks. We also rigorously analyze the complexity of model checking for different fragments of A-HLTL.
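A rough intuition for why trajectories matter can be given by brute-force checking over a finite trace set. The sketch below is a toy, not the paper's QBF-based algorithm: `destutter` is a crude stand-in for trajectory-based alignment, and the property checked is an observational-determinism-style hyperproperty (hypothetical state encoding as dicts with a low-observable key).

```python
from itertools import product, groupby

def destutter(obs):
    """Collapse consecutive repeats; a crude stand-in for A-HLTL
    trajectories, which align traces advancing at different rates."""
    return tuple(k for k, _ in groupby(obs))

def bounded_check(traces, low):
    """Check, on a finite set of bounded traces, that any two traces
    agreeing on the initial low value have stutter-equivalent low
    observations (a simple confidentiality hyperproperty)."""
    for t1, t2 in product(traces, repeat=2):
        if t1[0][low] == t2[0][low]:
            obs1 = destutter(tuple(s[low] for s in t1))
            obs2 = destutter(tuple(s[low] for s in t2))
            if obs1 != obs2:
                return False
    return True

# Two schedules of the same computation: one stutters, but the
# destuttered low observations agree, so the property holds.
t_fast = ({"l": 0}, {"l": 1}, {"l": 1})
t_slow = ({"l": 0}, {"l": 0}, {"l": 1})
secure_ok = bounded_check([t_fast, t_slow], "l")

# Adding a trace with a different low outcome from the same low
# input violates the property.
t_leak = ({"l": 0}, {"l": 0}, {"l": 0})
leaky_ok = bounded_check([t_fast, t_slow, t_leak], "l")
```

Enumerating trace pairs like this blows up quickly; the point of the paper's QBF encoding is precisely to avoid such explicit enumeration.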
Atomic Action Refinement in Model Based Testing
In model based testing (MBT) test cases are derived from a specification of the system that we want to test. In general the specification is more abstract than the implementation. This may result in (1) test cases that are not executable, because their actions are too abstract (the implementation does not understand them); or (2) test cases that are incorrect, because the specification abstracts from relevant behavior. The standard approach to remedy this problem is to rewrite the specification by hand to the required level of detail and regenerate the test cases. This is error-prone and time-consuming. Another approach is to perform some translation during test execution, but this solution has no basis in the theory of MBT. We propose a framework to add the required level of detail automatically to the abstract specification and/or abstract test cases.

This paper focuses on general atomic action refinement, meaning that an abstract action is replaced by more complex behavior (expressed as a labeled transition system). By general we mean that we impose as few restrictions as possible; by atomic we mean that the actions being refined behave as if they were atomic, i.e., no other actions are allowed to interfere.
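The basic shape of action refinement can be sketched on a transition system encoded as (source, label, target) triples. This is a hypothetical simplification: it only refines an action into a linear sequence of concrete actions, whereas the paper's framework handles arbitrary labeled transition systems as refinements.

```python
def refine(lts, refinements):
    """Replace each transition whose label has a refinement with a
    chain of fresh intermediate states performing the refined actions.
    Atomicity holds trivially here because no interleaving is modeled."""
    fresh = max(max(s, t) for s, _, t in lts) + 1
    out = []
    for s, a, t in lts:
        steps = refinements.get(a)
        if not steps:
            out.append((s, a, t))
            continue
        cur = s
        for act in steps[:-1]:
            out.append((cur, act, fresh))
            cur, fresh = fresh, fresh + 1
        out.append((cur, steps[-1], t))
    return out

# Abstract spec: state 0 --login--> 1 --logout--> 2, where the
# abstract "login" is refined into three concrete protocol actions
# (action names are invented for illustration).
abstract = [(0, "login", 1), (1, "logout", 2)]
refined = refine(abstract, {"login": ["send_user", "send_pass", "ack"]})
```

Test cases generated from `refined` now speak the implementation's concrete action vocabulary while still tracing back to the abstract specification.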
Learning-assisted Theorem Proving with Millions of Lemmas
Large formal mathematical libraries consist of millions of atomic inference steps that give rise to a corresponding number of proved statements (lemmas). Analogously to informal mathematical practice, only a tiny fraction of such statements is named and re-used in later proofs by formal mathematicians. In this work, we suggest and implement criteria for estimating the usefulness of HOL Light lemmas for proving further theorems. We use these criteria to mine the large inference graph of the lemmas in the HOL Light and Flyspeck libraries, adding up to millions of the best lemmas to the pool of statements that can be re-used in later proofs. We show that, in combination with learning-based relevance filtering, such methods significantly strengthen automated theorem proving of new conjectures over large formal mathematical libraries such as Flyspeck. (Journal version of arXiv:1310.2797, which was submitted to the LPAR conference.)
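One crude proxy for a lemma's usefulness is how much of the inference graph is reachable from it. The sketch below is a hypothetical simplification (the graph and scoring are invented; the paper's criteria also weigh factors such as proof size, which are not modeled here):

```python
from collections import defaultdict

# Hypothetical inference graph: an edge (a, b) means statement b
# was proved using statement a.
edges = [
    ("l1", "l2"), ("l1", "l3"), ("l2", "t1"),
    ("l3", "t1"), ("l3", "t2"), ("l2", "t2"),
]

def usefulness(edges):
    """Score each lemma with outgoing edges by the number of distinct
    statements reachable from it in the inference graph."""
    graph = defaultdict(set)
    for src, dst in edges:
        graph[src].add(dst)

    def reach(node, seen):
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                reach(nxt, seen)
        return seen

    return {n: len(reach(n, set())) for n in list(graph)}

scores = usefulness(edges)
```

On real libraries this graph has millions of nodes, so the actual mining needs far more efficient traversal than this recursive sketch.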
Mining State-Based Models from Proof Corpora
Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps leading to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes examples of successful proofs as input and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (they contain few false-positive sequences) and that representing existing proofs in this way can be very useful when guiding new ones. (To appear at Conferences on Intelligent Computer Mathematics 201.)
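A first step of such model inference can be sketched as building a prefix tree acceptor from example tactic sequences. This is a hypothetical illustration with invented tactic names; real state-based model inference (such as EFSM learning) would go on to merge similar states and handle data parameters.

```python
def build_pta(sequences):
    """Build a prefix tree acceptor: a deterministic transition table
    (state, tactic) -> state covering every training sequence."""
    trans = {}
    next_state = 1  # state 0 is the shared initial state
    for seq in sequences:
        state = 0
        for tac in seq:
            key = (state, tac)
            if key not in trans:
                trans[key] = next_state
                next_state += 1
            state = trans[key]
    return trans

def accepts_prefix(trans, seq):
    """Check whether a tactic sequence follows a path seen in training."""
    state = 0
    for tac in seq:
        if (state, tac) not in trans:
            return False
        state = trans[(state, tac)]
    return True

# Hypothetical successful proofs as tactic sequences.
proofs = [["intro", "induction", "simp"], ["intro", "simp"]]
pta = build_pta(proofs)
ok = accepts_prefix(pta, ["intro", "simp"])
```

Guiding a new proof then amounts to walking the model and proposing only tactics that leave the current state, which is what keeps false-positive sequences rare.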