
    Saturation-based decision procedures for fixed domain and minimal model validity

    Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal Herbrand model for the theory. This raises the question to what extent superposition calculi can be employed for reasoning about such minimal models. This is indeed often possible when existential properties are considered. However, proving universal properties directly leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired because it changes the problem. In this thesis, I propose the first superposition calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given fixed domain. It does not eliminate existential variables by Skolemization, but handles them using additional constraints with which each clause is annotated. This calculus is sound and refutationally complete in the limit for a fixed-domain semantics. For saturated Horn theories and classes of positive formulas, the calculus is even complete for proving properties of the minimal model itself, going beyond the scope of known superposition-based approaches. The calculus is applicable to every set of clauses with equality and does not rely on any syntactic restrictions of the input. Extensions of the calculus lead to various new decision procedures for minimal model validity. A main feature of these decision procedures is that even the validity of queries containing one quantifier alternation can be decided. In particular, I prove that the validity of any formula with at most one quantifier alternation is decidable in models represented by a finite set of atoms, and that the validity of several classes of such formulas is decidable in models represented by so-called disjunctions of implicit generalizations. Moreover, I show that the decision of minimal model validity can be reduced to the superposition-based decision of first-order validity for models of a class of predicative Horn clauses where all function symbols are at most unary.
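
    As a small illustration of why Skolemization conflicts with a fixed term-generated domain (an example of my own, not taken from the thesis): consider the clause set N = {P(a), P(b)} over the signature {a, b}. Its minimal Herbrand model has the domain {a, b} and satisfies the universal property ∀x. P(x). Proving this property by refutation means adding the negated conjecture ∃x. ¬P(x), which Skolemizes to ¬P(c) for a fresh constant c. The term-generated domain thereby grows to {a, b, c}, the set N ∪ {¬P(c)} becomes satisfiable, and no refutation is found, although the conjecture does hold over the fixed domain {a, b}. A fixed-domain calculus instead keeps the existential variable, recording it in a constraint attached to the clause ¬P(x); under the fixed-domain semantics this constrained clause together with P(a) and P(b) is unsatisfiable, so a refutation can be derived.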

    Combining decision procedures for the reals

    We address the general problem of determining the validity of Boolean combinations of equalities and inequalities between real-valued expressions. In particular, we consider methods of establishing such assertions using only restricted forms of distributivity. At the same time, we explore ways in which "local" decision or heuristic procedures for fragments of the theory of the reals can be amalgamated into global ones. Let T_add[Q] be the first-order theory of the real numbers in the language of ordered groups, with negation, a constant 1, and function symbols for multiplication by rational constants. Let T_mult[Q] be the analogous theory for the multiplicative structure, and let T[Q] be the union of the two. We show that although T[Q] is undecidable, the universal fragment of T[Q] is decidable. We also show that terms of T[Q] can fruitfully be put in a normal form. We prove analogous results for theories in which Q is replaced, more generally, by suitable subfields F of the reals. Finally, we consider practical methods of establishing quantifier-free validities that approximate our (impractical) decidability results. Comment: will appear in Logical Methods in Computer Science.
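
    As a hedged illustration of the kind of universal assertion such a combination is meant to handle (the inequality below is my own example, not one from the paper):

        0 < x \wedge x < y \wedge 0 < u \wedge u < v \;\longrightarrow\; x \cdot u < y \cdot v

    The ordering hypotheses 0 < x < y and 0 < u < v are handled by additive/order reasoning, the monotonicity steps x·u < y·u (from x < y, u > 0) and y·u < y·v (from u < v, y > 0) by multiplicative reasoning, and chaining them by transitivity proves the implication without ever invoking distributivity, which is the kind of restricted, amalgamated reasoning the abstract describes.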

    Optimal Population Coding, Revisited

    Cortical circuits perform the computations underlying rapid perceptual decisions within a few dozen milliseconds, with each neuron emitting only a few spikes. Under these conditions, the theoretical analysis of neural population codes is challenging, as the most commonly used theoretical tool – Fisher information – can lead to erroneous conclusions about the optimality of different coding schemes. Here we revisit the effect of tuning function width and correlation structure on neural population codes based on ideal observer analysis in both a discrimination and a reconstruction task. We show that the optimal tuning function width and the optimal correlation structure in both paradigms strongly depend on the available decoding time in a very similar way. In contrast, population codes optimized for Fisher information do not depend on decoding time and are severely suboptimal when only a few spikes are available. In addition, we use the neurometric functions of the ideal observer in the classification task to investigate the differential coding properties of these Fisher-optimal codes for fine and coarse discrimination. We find that the discrimination error for these codes does not decrease to zero with increasing population size, even in simple coarse discrimination tasks. Our results suggest that quite different population codes may be optimal for rapid decoding in cortical computations than those inferred from the optimization of Fisher information.
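
    The role of Fisher information criticized here can be made concrete with a minimal sketch (my own illustration under textbook assumptions of independent Poisson neurons with Gaussian tuning curves; the function name and all parameter values are placeholders, not the authors' model or code). Because the Fisher information of independent Poisson spike counts scales linearly with the decoding time T, the tuning width that maximizes it is the same for long and short T, which is precisely the time-independence the abstract identifies as misleading at short decoding times:

        import numpy as np

        def fisher_information(s, centers, sigma, r_max=50.0, T=0.02):
            # Mean firing rates f_i(s) of Gaussian tuning curves (in Hz).
            f = r_max * np.exp(-(s - centers) ** 2 / (2.0 * sigma ** 2))
            # Derivatives f_i'(s) with respect to the stimulus s.
            df = f * (centers - s) / sigma ** 2
            # Fisher information of independent Poisson spike counts over time T:
            # I_F(s) = T * sum_i f_i'(s)^2 / f_i(s).
            return T * np.sum(df ** 2 / f)

        centers = np.linspace(-10.0, 10.0, 100)   # preferred stimuli of 100 model neurons
        for sigma in (0.5, 1.0, 2.0, 4.0):
            i_short = fisher_information(0.0, centers, sigma, T=0.02)
            i_long = fisher_information(0.0, centers, sigma, T=0.2)
            # The ratio i_long / i_short is the same for every sigma, so the
            # Fisher-optimal tuning width does not change with decoding time.
            print(sigma, i_short, i_long, i_long / i_short)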

    Naming the Pain in Requirements Engineering: A Design for a Global Family of Surveys and First Results from Germany

    For many years, we have observed industry struggling to define high-quality requirements engineering (RE) and researchers trying to understand industrial expectations and problems. Although we are investigating the discipline with a plethora of empirical studies, they still do not allow for empirical generalisations. To lay an empirical and externally valid foundation about the state of the practice in RE, we aim at a series of open and reproducible surveys that allow us to steer future research in a problem-driven manner. We designed a globally distributed family of surveys in joint collaboration with different researchers and completed the first run in Germany. The instrument is based on a theory in the form of a set of hypotheses inferred from our experiences and available studies. We test each hypothesis in our theory and identify further candidates to extend the theory by correlation and Grounded Theory analysis. In this article, we report on the design of the family of surveys, its underlying theory, and the full results obtained from Germany with participants from 58 companies. The results reveal, for example, a tendency to improve RE via internally defined qualitative methods rather than relying on normative approaches like CMMI. We also discovered various RE problems that are statistically significant in practice. For instance, we could corroborate communication flaws or moving targets as problems in practice. Our results are not yet fully representative but already give first insights into current practices and problems in RE, and they allow us to draw lessons learnt for future replications. Our results obtained from this first run in Germany make us confident that the survey design and instrument are well suited to be replicated and, thereby, to create a generalisable empirical basis of RE in practice.

    Disproving in First-Order Logic with Definitions, Arithmetic and Finite Domains

    This thesis explores several methods which enable a first-order reasoner to conclude satisfiability of a formula modulo an arithmetic theory. The most general method requires restricting certain quantifiers to range over finite sets; such assumptions are common in the software verification setting. In addition, the use of first-order reasoning allows for an implicit representation of those finite sets, which can avoid scalability problems that affect other quantified reasoning methods. These new techniques form a useful complement to existing methods that are primarily aimed at proving validity. The Superposition calculus for hierarchic theory combinations provides a basis for reasoning modulo theories in a first-order setting. The recent account of ‘weak abstraction’ and related improvements make an implementation of the calculus practical. Also, for several logical theories of interest, Superposition is an effective decision procedure for the quantifier-free fragment. The first contribution is an implementation of that calculus (Beagle), including an optimized implementation of Cooper’s algorithm for quantifier elimination in the theory of linear integer arithmetic. This includes a novel means of extracting values for quantified variables in satisfiable integer problems. Beagle won an efficiency award at the CADE ATP System Competition CASC-J7, and won the arithmetic non-theorem category at CASC-25. This implementation is the starting point for solving the ‘disproving with theories’ problem. Some hypotheses can be disproved by showing that, together with the axioms, the hypothesis is unsatisfiable. Often this is relative to other axioms that enrich a base theory by defining new functions. In that case, the disproof is contingent on the satisfiability of the enrichment. Satisfiability in this context is undecidable. Instead, general characterizations of definition formulas, which do not alter the satisfiability status of the main axioms, are given. These general criteria apply to recursive definitions, definitions over lists, and to arrays. This allows proving some non-theorems which are otherwise intractable, and justifies similar disproofs of non-linear arithmetic formulas. When the hypothesis is contingently true, disproof requires proving the existence of a model. If the Superposition calculus saturates a clause set, then a model exists, but only when the clause set satisfies a completeness criterion. This requires each instance of an uninterpreted, theory-sorted term to have a definition in terms of theory symbols. The second contribution is a procedure that creates such definitions, given that a subset of the quantifiers ranges over finite sets. Definitions are produced in a counterexample-driven way via a sequence of over- and under-approximations to the clause set. Two descriptions of the method are given: the first uses the component solver modularly, but has an inefficient counterexample heuristic. The second is more general, correcting many of the inefficiencies of the first, yet it requires tracking clauses through a proof. This latter method is shown to apply also to lists and to problems with unbounded quantifiers. Together, these tools give new ways for applying successful first-order reasoning methods to problems involving interpreted theories.
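
    As a rough illustration of the flavour of quantifier elimination involved (a toy example of my own, not taken from the thesis), a Cooper-style step replaces an integer existential by a finite case analysis over lower bounds and divisibility constraints; with a single lower bound, no divisibility constraint, and an integer parameter b, this collapses to a bound comparison:

        \exists x \in \mathbb{Z}.\ (0 < x \wedge 3x < b)
        \;\equiv\; \exists x \in \mathbb{Z}.\ (x \geq 1 \wedge 3x \leq b - 1)
        \;\equiv\; 3 \cdot 1 \leq b - 1
        \;\equiv\; b \geq 4

    Since 3x ≤ b - 1 only gets harder to satisfy as x grows, it suffices to test the smallest admissible x, here x = 1; when the formula is satisfiable, that witness can be read off directly, which is the kind of value extraction for quantified variables mentioned above.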

    New results on rewrite-based satisfiability procedures

    Program analysis and verification require decision procedures to reason about theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability, and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple, but also has important practical implications. Comment: to appear in the ACM Transactions on Computational Logic, 49 pages.
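
    A minimal sketch of what such a T-satisfiability problem looks like (my own example, assuming the usual list axioms car(cons(x, y)) ≈ x and cdr(cons(x, y)) ≈ y): to decide the ground literal set {a ≈ cons(b, c), car(a) ≉ b}, equational reasoning with the literal a ≈ cons(b, c) and the car axiom yields car(a) ≈ b, which contradicts car(a) ≉ b, so the set is T-unsatisfiable (the exact superposition inferences depend on the term ordering). On satisfiable inputs, the point of the termination results above is that a fair rewrite-based derivation is guaranteed to saturate after finitely many steps rather than run forever.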

    Robustness - a challenge also for the 21st century: A review of robustness phenomena in technical, biological and social systems as well as robust approaches in engineering, computer science, operations research and decision aiding

    Notions of robustness exist in many facets. They come from different disciplines and reflect different worldviews. Consequently, they often contradict each other, which makes the term less applicable in a general context. Robustness approaches are often limited to the specific problems for which they have been developed. This means that notions and definitions may turn out to be wrong if put into another domain of validity, i.e. another context. A definition might be correct in a specific context but need not hold in another. Therefore, in order to be able to speak of robustness we need to specify the domain of validity, i.e. the system, property and uncertainty of interest. As proved by Ho et al. in an optimization context with finite and discrete domains, without prior knowledge about the problem there exists no solution whatsoever that is more robust than any other. Similar to the results of the No Free Lunch Theorems of Optimization (NFLTs), we have to exploit the problem structure in order to make a solution more robust. This optimization problem is directly linked to a robustness/fragility tradeoff that has been observed in many contexts, e.g. the 'robust, yet fragile' property of HOT (Highly Optimized Tolerance) systems. Another issue is that robustness is tightly bound to other phenomena, such as complexity, for which no clear definition or theoretical framework exists either. Consequently, this review tries to find common aspects across many different approaches and phenomena rather than to build a general theory of robustness, which might not exist anyway, because complex phenomena often need to be described from a pluralistic view to address as many aspects of a phenomenon as possible. First, many different robustness problems from many different disciplines are reviewed. Second, common aspects are discussed, in particular the relationship between functional and structural properties. This paper argues that robustness phenomena are also a challenge for the 21st century. Robustness is a useful quality of a model or system in terms of the 'maintenance of some desired system characteristics despite fluctuations in the behaviour of its component parts or its environment' (see [Carlson and Doyle, 2002], p. 2). We define robustness phenomena as solutions with balanced tradeoffs, and robust design principles and robustness measures as means to balance tradeoffs.

    Exploring differences in individual and group judgements in standard setting

    Context: Standard setting is critically important to assessment decisions in medical education. Recent research has demonstrated variations between medical schools in the standards set for shared items. Despite the centrality of judgement to criterion-referenced standard setting methods, little is known about the individual or group processes that underpin them. This study aimed to explore the operation and interaction of these processes in order to illuminate potential sources of variability.

    Methods: Using qualitative research, we purposively sampled across UK medical schools that set a low, medium or high standard on nationally shared items, collecting data by observation of graduation-level standard-setting meetings and semi-structured interviews with standard-setting judges. Data were analysed using thematic analysis based on the principles of grounded theory.

    Results: Standard setting occurred through the complex interaction of institutional context, judges’ individual perspectives and group interactions. Schools’ procedures, panel members and atmosphere produced unique contexts. Individual judges formed varied understandings of the clinical and technical features of each question, relating these to their differing (sometimes contradictory) conceptions of minimally competent students, by balancing information and making suppositions. Conceptions of minimal competence variously comprised: limited attendance; limited knowledge; poor knowledge application; emotional responses to questions; ‘test-savviness’, or a strategic focus on safety. Judges experienced tensions trying to situate these abstract conceptions in reality, revealing uncertainty. Groups constructively revised scores through debate, sharing information and often constructing detailed clinical representations of cases. Groups frequently displayed conformity, illustrating a belief that outlying judges were likely to be incorrect. Less frequently, judges resisted change, using emphatic language, bargaining or, rarely, ‘polarisation’ to influence colleagues.

    Conclusions: Despite careful conduct through well-established procedures, standard setting is judgementally complex and involves uncertainty. Understanding whether or how these varied processes produce the previously observed variations in outcomes may offer routes to enhance equivalence of criterion-referenced standards.

    Formal methods for modeling and analysis of hybrid systems

    A technique is presented that uses a quantifier-elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata. The resulting abstractions are always discrete transition systems, which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool, built in the SAL environment over the theorem prover PVS, is detailed. The technique scales well to large and complex hybrid systems.
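
    As a hedged sketch of what such a qualitative abstraction can look like (a toy example of my own, not the construction used in the paper): take a single-mode automaton with continuous dynamics dx/dt = -x and track only the sign of the polynomial p = x. Where p > 0 we have dp/dt = -x < 0, and where p < 0 we have dp/dt > 0, so a conservative discrete abstraction has the three states p < 0, p = 0, p > 0 with, besides self-loops, only the transitions p > 0 to p = 0 and p < 0 to p = 0. In this abstraction the state p < 0 is unreachable from p > 0, so the safety property "a trajectory started with x > 0 never reaches x < 0" holds in the abstraction and therefore, by conservativeness, in the original system; deciding the sign conditions that justify such transitions is the kind of question handled by the quantifier-elimination procedure.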