586 research outputs found
A theory of resolution
We review the fundamental resolution-based methods for first-order theorem proving and present them in a uniform framework. We show that these calculi can be viewed as specializations of non-clausal resolution with simplification. Simplification techniques are justified with the help of a rather general notion of redundancy for inferences. As simplification and other techniques for the elimination of redundancy are indispensable for acceptable behaviour of any practical theorem prover, this work is the first uniform treatment of resolution-like techniques in which the avoidance of redundant computations receives the attention it deserves. In many cases our presentation of a resolution method indicates new ways to improve the method over what was known previously. We also give answers to several open problems in the area.
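The underlying resolution rule can be sketched executably at the propositional level. The following is a minimal sketch, assuming clauses are frozensets of integer literals (a negative integer encodes a negated atom); the function names are illustrative, and the calculi surveyed above are first-order and non-clausal:

```python
def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on a complementary pair."""
    out = set()
    for lit in c1:
        if -lit in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def saturate(clauses):
    """Naively saturate under resolution (no redundancy elimination);
    return True iff the empty clause is derived, i.e. the set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:
                        return True   # empty clause derived
                    if r not in clauses:
                        new.add(r)
        if not new:
            return False  # saturated without the empty clause
        clauses |= new

# {p}, {~p or q}, {~q} is unsatisfiable; {p}, {q} is satisfiable.
print(saturate([frozenset({1}), frozenset({-1, 2}), frozenset({-2})]))  # True
print(saturate([frozenset({1}), frozenset({2})]))                       # False
```

The naive saturation loop here is exactly what redundancy elimination is meant to tame: without deletion of redundant clauses, the set of derived clauses can grow explosively.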
Rewrite-based equational theorem proving with selection and simplification
We present various refutationally complete calculi for first-order clauses with equality that allow for arbitrary selection of negative atoms in clauses. Refutation completeness is established via the use of well-founded orderings on clauses for defining a Herbrand model for a consistent set of clauses. We also formulate an abstract notion of redundancy and show that the deletion of redundant clauses during the theorem proving process preserves refutation completeness. It is often possible to compute the closure of nontrivial sets of clauses under application of non-redundant inferences. The refutation of goals for such complete sets of clauses is simpler than for arbitrary sets of clauses; in particular, one can restrict attention to proofs that have support from the goals without compromising refutation completeness. Additional syntactic properties make it possible to restrict the search space even further, as we demonstrate for so-called quasi-Horn clauses. The results in this paper contain as special cases or generalize many known results about Knuth-Bendix-like completion procedures (for equations, Horn clauses, and Horn clauses over built-in Booleans), completion of first-order clauses by clausal rewriting, and inductive theorem proving for Horn clauses.
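The simplification step this framework justifies, rewriting with oriented equations (demodulation), can be illustrated on ground terms. A minimal sketch, assuming terms are nested tuples `('f', arg, ...)` and rules map a left-hand side to a smaller right-hand side; it ignores variables and the ordering conditions that the abstract redundancy notion handles in general:

```python
def rewrite_once(term, rules):
    """Apply one rule at the outermost applicable position; return (term, changed)."""
    if term in rules:
        return rules[term], True
    if isinstance(term, tuple):
        head, *args = term
        for i, a in enumerate(args):
            new, changed = rewrite_once(a, rules)
            if changed:
                return (head, *args[:i], new, *args[i + 1:]), True
    return term, False

def normalize(term, rules):
    """Rewrite until no rule applies (terminates because rules are size-decreasing)."""
    changed = True
    while changed:
        term, changed = rewrite_once(term, rules)
    return term

# Orient f(a) -> b and g(b) -> c; then g(f(a)) simplifies to c.
rules = {('f', 'a'): 'b', ('g', 'b'): 'c'}
print(normalize(('g', ('f', 'a')), rules))  # c
```

Deleting a clause after replacing it by its normal form is the prototypical redundancy-preserving simplification.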
Weakly Equivalent Arrays
The (extensional) theory of arrays is widely used to model systems. Hence,
efficient decision procedures are needed to model check such systems. Current
decision procedures for the theory of arrays saturate the read-over-write and
extensionality axioms originally proposed by McCarthy. Various filters are used
to limit the number of axiom instantiations while preserving completeness. We
present an algorithm that lazily instantiates lemmas based on weak equivalence
classes. These lemmas are easier to interpolate as they only contain existing
terms. We formally define weak equivalence and show correctness of the
resulting decision procedure.
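The McCarthy axioms that such decision procedures instantiate can be stated as a tiny executable model. A sketch assuming arrays are Python dicts with a default value; `store`/`select` mirror the standard write/read operators, not the paper's implementation:

```python
def store(a, i, v):
    """write: a new array equal to a except that index i holds v"""
    b = dict(a)
    b[i] = v
    return b

def select(a, i, default=0):
    """read; McCarthy's read-over-write axiom states:
    select(store(a, i, v), i) = v
    select(store(a, i, v), j) = select(a, j)   for i != j"""
    return a.get(i, default)

a = {}
b = store(a, 3, 42)
print(select(b, 3))                   # 42 (read-over-write, same index)
print(select(b, 7) == select(a, 7))   # True (read-over-write, distinct index)
# a and b differ at only finitely many indices -- intuitively, the
# "weak equivalence" the abstract's lazily instantiated lemmas reason about.
```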
Superposition with simplification as a decision procedure for the monadic class with equality
We show that strict superposition, a restricted form of paramodulation, can be combined with specifically designed simplification rules such that it becomes a decision procedure for the monadic class with equality. The completeness of the method follows from a general notion of redundancy for clauses and superposition inferences.
Basic paramodulation
We introduce a class of restrictions for the ordered paramodulation and superposition calculi (inspired by the basic strategy for narrowing), in which paramodulation inferences are forbidden at terms introduced by substitutions from previous inference steps. In addition, we introduce restrictions based on term selection rules and redex orderings, which are general criteria for delimiting the terms available for inferences. These refinements are compatible with standard ordering restrictions and are complete without paramodulation into variables or the use of functional reflexivity axioms. We prove refutational completeness in the context of deletion rules, such as simplification by rewriting (demodulation) and subsumption, and of techniques for eliminating redundant inferences.
Data integration in biomedical research networks based on service-oriented architectures
In biomedical research networks there is a need to share research data within the network and beyond. To this end, a requirements model is first created and then consolidated and abstracted. This yields a reference model for requirements that can serve other research networks as a basis for the accelerated construction of their own SOA system.
As a concrete instance of the reference model, a requirements model is described for the Collaborative Research Centre/Transregio 77 "Liver Cancer - from Molecular Pathogenesis to Targeted Therapy", funded by the Deutsche Forschungsgemeinschaft (DFG). From the requirements model, an IT architecture model for the network is derived, consisting of a component model, a deployment model, and the security architecture.
The architecture is implemented using the Cancer Biomedical Informatics Grid (caBIG). The data produced in the projects are converted into data services and thus made accessible within an SOA. The data services largely satisfy the projects' requirement to retain control over their own data: the services can be equipped with individual access permissions and operated in a decentralized fashion, if necessary within the projects' own areas of responsibility. The system is accessed via a web browser, with which members of the network log in to a central portal using individual credentials. For simple and secure exchange of documents within the network, a document management system is integrated into the SOA.
To integrate research data from different sources at the semantic level as well, metadata systems are developed. A controlled vocabulary is created from the terminologies used by the projects, using a method developed for this purpose. The collected terms are matched against standardized vocabularies from the Unified Medical Language System (UMLS); a software tool is built to support this matching.
Furthermore, this work revealed that no ontology existed for comprehensively modeling the cell lines frequently used in biomedical research, including their growth conditions. Therefore, a new ontology for cell lines, the Cell Culture Ontology (CCONT), is developed, taking care to integrate established ontologies of the field as far as possible.
Thus, a complete SOA-based IT architecture for exchanging and integrating research data within research networks is described. The reference model for requirements, the IT architecture, and the metadata specifications are available to other research networks and beyond as foundations for their own developments, as are the software tools for UMLS matching of vocabularies and for automated model generation for caBIG data services.
Superposition with Equivalence Reasoning and Delayed Clause Normal Form Transformation
This report describes a superposition calculus in which quantifiers are eliminated lazily. Superposition and simplification inferences may employ equivalences that have arbitrary formulas on their smaller side. A closely related calculus is implemented in the Saturate system and has proved useful on many examples, in particular in set theory. The report presents a completeness proof and reports on practical experience obtained with the Saturate system.
Communication
The geometry of reaction compartments can affect the local outcome of interface-restricted reactions. Giant unilamellar vesicles (GUVs) are commonly used to generate cell-sized, membrane-bound reaction compartments, which are, however, always spherical. Herein, we report the development of a microfluidic chip to trap and reversibly deform GUVs into cigar-like shapes. When trapping and elongating GUVs that contain the primary protein of the bacterial Z ring, FtsZ, we find that membrane-bound FtsZ filaments align preferentially with the short GUV axis. When GUVs are released from this confinement and membrane tension is relaxed, FtsZ reorganizes reversibly from filaments into dynamic rings that stabilize membrane protrusions, a process that allows reversible GUV deformation. We conclude that microfluidic traps are useful for manipulating both the geometry and tension of GUVs, and for investigating how both affect the outcome of spatially sensitive reactions inside them, such as protein self-organization. We acknowledge the MPIB Biochemistry Core Facility for assistance in protein purification.
Acceptability with general orderings
We present a new approach to termination analysis of logic programs. The
essence of the approach is that we make use of general orderings (instead of
level mappings), as is done in transformational approaches to logic
program termination analysis, but we apply these orderings directly to the
logic program and not to the term-rewrite system obtained through some
transformation. We define some variants of acceptability, based on general
orderings, and show how they are equivalent to LD-termination. We develop a
demand driven, constraint-based approach to verify these
acceptability-variants.
The advantage of the approach over standard acceptability is that in some
cases, where complex level mappings are needed, fairly simple orderings may be
easily generated. The advantage over transformational approaches is that it
avoids the transformation step altogether.
Keywords: termination analysis, acceptability, orderings.
Comment: To appear in "Computational Logic: From Logic Programming into the Future".
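The core idea, prove termination by exhibiting a well-founded ordering that strictly decreases at every recursive call, can be illustrated on a tiny example. A hypothetical sketch: the "ordering" is plain list length and `append/3` is run as a Python function, whereas the paper applies general term orderings directly to logic programs and relates them to LD-termination:

```python
def append(xs, ys, trace):
    """append/3 as a function, recording the measure |xs| at each call."""
    trace.append(len(xs))
    if not xs:
        return ys
    return [xs[0]] + append(xs[1:], ys, trace)

trace = []
result = append([1, 2, 3], [4], trace)
print(result)  # [1, 2, 3, 4]
print(trace)   # [3, 2, 1, 0]
# The recorded measures strictly decrease along the call chain, and list
# length is well-founded, so the recursion must terminate -- the same
# argument an acceptability condition makes for the logic program.
```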
Hierarchic Superposition Revisited
Many applications of automated deduction require reasoning in first-order
logic modulo background theories, in particular some form of integer
arithmetic. A major unsolved research challenge is to design theorem provers
that are "reasonably complete" even in the presence of free function symbols
ranging into a background theory sort. The hierarchic superposition calculus of
Bachmair, Ganzinger, and Waldmann already supports such symbols, but, as we
demonstrate, not optimally. This paper aims to rectify the situation by
introducing a novel form of clause abstraction, a core component in the
hierarchic superposition calculus for transforming clauses into a form needed
for internal operation. We argue for the benefits of the resulting calculus and
provide two new completeness results: one for the fragment where all
background-sorted terms are ground and another one for a special case of linear
(integer or rational) arithmetic as a background theory.
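The clause-abstraction step can be illustrated with a toy example. A hypothetical sketch, assuming terms are nested tuples and the background sort is the integers: background-sorted subterms are pulled out of free-symbol positions and replaced by fresh variables with equality constraints handed to the background prover. The names and encoding are illustrative, not the calculus's notation:

```python
from itertools import count

def abstract(term, counter=None, constraints=None):
    """Return (abstracted_term, constraints): each integer subterm is
    replaced by a fresh variable x0, x1, ... constrained to equal it."""
    if counter is None:
        counter, constraints = count(), []
    if isinstance(term, int):
        var = f"x{next(counter)}"
        constraints.append((var, term))
        return var, constraints
    if isinstance(term, tuple):
        head, *args = term
        new_args = [abstract(a, counter, constraints)[0] for a in args]
        return (head, *new_args), constraints
    return term, constraints

# f(3, g(5)) becomes f(x0, g(x1)) with constraints x0 = 3, x1 = 5.
t, cs = abstract(('f', 3, ('g', 5)))
print(t)   # ('f', 'x0', ('g', 'x1'))
print(cs)  # [('x0', 3), ('x1', 5)]
```

After abstraction, the free-symbol part and the arithmetic constraints can be handled by the superposition calculus and the background theory reasoner, respectively; how aggressively to abstract is exactly the design point the paper revisits.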
- …