Strategic programming on graph rewriting systems
We describe a strategy language to control the application of graph rewriting
rules, and show how this language can be used to write high-level declarative
programs in several application areas. This language is part of a graph-based
programming tool built within the port-graph transformation and visualisation
environment PORGY.
Comment: In Proceedings IWS 2010, arXiv:1012.533
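As a rough sketch of what a strategy language buys you, the Python fragment
below implements three classic strategy combinators over a toy rewrite step;
the combinator names and the integer "terms" are illustrative only and do not
reflect PORGY's actual syntax or its port-graph rules.

    # A minimal sketch of strategy combinators; a strategy takes a term
    # and returns a rewritten term, or None on failure.
    def seq(s1, s2):
        # Apply s1, then s2 to its result; fail (None) if either fails.
        def strat(t):
            r = s1(t)
            return s2(r) if r is not None else None
        return strat

    def choice(s1, s2):
        # Try s1; if it fails, try s2 on the original term.
        def strat(t):
            r = s1(t)
            return r if r is not None else s2(t)
        return strat

    def repeat_(s):
        # Apply s for as long as it succeeds, returning the last result.
        def strat(t):
            while True:
                r = s(t)
                if r is None:
                    return t
                t = r
        return strat

    # Toy rewrite steps standing in for graph rewrite rules:
    halve = lambda n: n // 2 if n > 0 and n % 2 == 0 else None
    dec = lambda n: n - 1 if n > 0 else None

    print(seq(halve, dec)(10))               # 4: halve 10 to 5, then decrement
    print(repeat_(choice(halve, dec))(40))   # 0: exhaustively halve or decrement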
Separation of Test-Free Propositional Dynamic Logics over Context-Free Languages
For a class L of languages let PDL[L] be an extension of Propositional
Dynamic Logic which allows programs to be in a language of L rather than just
to be regular. If L contains a non-regular language, PDL[L] can express
non-regular properties, in contrast to pure PDL.
  For regular, visibly pushdown and deterministic context-free languages, the
separation of the respective PDLs can be proven by automata-theoretic
techniques. However, these techniques introduce non-determinism on the automata
side. As non-determinism is also the difference between DCFL and CFL, these
techniques appear unsuitable for separating PDL[DCFL] from PDL[CFL].
Nevertheless, we establish this separation, albeit only for programs without
test operators.
Comment: In Proceedings GandALF 2011, arXiv:1106.081
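To make the non-regular modality concrete, here is a hedged Python sketch of
checking a diamond formula <L>p for the deterministic context-free language
L = { a^n b^n : n >= 1 } over a finite edge-labelled structure, by pairing the
structure with a one-counter device; the bound max_n and the example structure
are illustrative, not part of the paper.

    from collections import deque

    def diamond_anbn(edges, p_states, start, max_n=50):
        # edges: dict (state, letter) -> set of successor states.
        # Explore configurations (state, phase, count): the structure is
        # paired with a one-counter device recognising a^n b^n, n >= 1.
        seen = set()
        queue = deque([(start, 'a', 0)])
        while queue:
            state, phase, count = queue.popleft()
            if (state, phase, count) in seen:
                continue
            seen.add((state, phase, count))
            if phase == 'b' and count == 0 and state in p_states:
                return True          # a full a^n b^n word reached a p-state
            if phase == 'a' and count < max_n:
                for s in edges.get((state, 'a'), ()):
                    queue.append((s, 'a', count + 1))
            if count >= 1:
                for s in edges.get((state, 'b'), ()):
                    queue.append((s, 'b', count - 1))
        return False

    # a-loop on state 0, b-edge to state 1, b-loop on 1; p holds at 1
    edges = {(0, 'a'): {0}, (0, 'b'): {1}, (1, 'b'): {1}}
    print(diamond_anbn(edges, p_states={1}, start=0))   # True, e.g. via "ab"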
Automated Synthesis of Tableau Calculi
This paper presents a method for synthesising sound and complete tableau
calculi. Given a specification of the formal semantics of a logic, the method
generates a set of tableau inference rules that can then be used to reason
within the logic. The method guarantees that the generated rules form a
calculus which is sound and constructively complete. If the logic can be shown
to admit finite filtration with respect to a well-defined first-order semantics,
then adding a general blocking mechanism provides a terminating tableau
calculus. The process of generating tableau rules can be completely automated
and produces, together with the blocking mechanism, an automated procedure for
generating tableau decision procedures. For illustration we show the
workability of the approach for a description logic with transitive roles and
propositional intuitionistic logic.
Comment: 32 pages
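The following is a hedged illustration of the rule-driven style such a
synthesis method targets: a tiny tableau engine whose inference rules are
plain data, so a generated rule set could be substituted. The rules below are
the standard hand-written ones for the propositional connectives, not rules
produced by the paper's method.

    # Formulas: ('var', name), ('not', f), ('and', f, g), ('or', f, g)
    RULES = {
        'and': lambda f: [[f[1], f[2]]],            # one branch, both conjuncts
        'or':  lambda f: [[f[1]], [f[2]]],          # two branches
        ('not', 'and'): lambda f: [[('not', f[1][1])], [('not', f[1][2])]],
        ('not', 'or'):  lambda f: [[('not', f[1][1]), ('not', f[1][2])]],
        ('not', 'not'): lambda f: [[f[1][1]]],
    }

    def closed(branch):
        # A branch closes if it contains a literal and its negation.
        lits = set(b for b in branch if b[0] == 'var')
        return any(f == ('not', l) for f in branch for l in lits)

    def satisfiable(branch):
        if closed(branch):
            return False
        for i, f in enumerate(branch):
            key = f[0] if f[0] != 'not' else ('not', f[1][0])
            if key in RULES:
                rest = branch[:i] + branch[i + 1:]
                return any(satisfiable(rest + ext) for ext in RULES[key](f))
        return True   # only literals remain: the branch is open

    p, q = ('var', 'p'), ('var', 'q')
    print(satisfiable([('and', p, ('not', p))]))   # False: p and not-p closes
    print(satisfiable([('or', p, q)]))             # True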
Quantifier-Free Interpolation of a Theory of Arrays
The use of interpolants in model checking is becoming an enabling technology
for fast and robust verification of hardware and software. The application
of encodings based on the theory of arrays, however, is limited by the
impossibility of deriving quantifier-free interpolants in general. In this
paper, we show that it is possible to obtain quantifier-free interpolants for a
Skolemized version of the extensional theory of arrays. We prove this in two
ways: (1) non-constructively, by using the model theoretic notion of
amalgamation, which is known to be equivalent to admitting quantifier-free
interpolation for universal theories; and (2) constructively, by designing an
interpolating procedure, based on solving equations between array updates.
(Interestingly, rewriting techniques are used in the key steps of the solver
and its proof of correctness.) To the best of our knowledge, this is the first
successful attempt at computing quantifier-free interpolants for a variant of
the theory of arrays with extensionality.
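As a hedged glimpse of the kind of rewriting involved, the sketch below
normalises a read over a stack of array updates using the standard
read-over-write rules; the term encoding is ad hoc, and this is only one
ingredient of such a solver, not the paper's full interpolating procedure.

    # Terms: ('wr', a, i, v) for updates, ('rd', a, i) for reads,
    # strings for array/index/value constants.
    def reduce_read(t, neq):
        # Normalise rd(wr(a, i, v), j): return v when i = j, skip the
        # write when i and j are known distinct (neq holds frozensets of
        # index pairs), and otherwise stop, since a case split is needed.
        if isinstance(t, tuple) and t[0] == 'rd':
            a, j = t[1], t[2]
            while isinstance(a, tuple) and a[0] == 'wr':
                _, base, i, v = a
                if i == j:                     # syntactically equal indices
                    return v
                if frozenset((i, j)) in neq:   # indices known distinct
                    a = base
                else:
                    return ('rd', a, j)        # undecided: case split needed
            return ('rd', a, j)
        return t

    t = ('rd', ('wr', ('wr', 'a', 'i', 'v'), 'k', 'w'), 'i')
    print(reduce_read(t, neq={frozenset(('i', 'k'))}))   # 'v'
    print(reduce_read(t, neq=set()))   # stuck: needs an i = k / i != k split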
New results on rewrite-based satisfiability procedures
Program analysis and verification require decision procedures to reason on
theories of data structures. Many problems can be reduced to the satisfiability
of sets of ground literals in a theory T. If a sound and complete inference
system for first-order logic is guaranteed to terminate on T-satisfiability
problems, any theorem-proving strategy with that system and a fair search plan
is a T-satisfiability procedure. We prove termination of a rewrite-based
first-order engine on the theories of records, integer offsets, integer offsets
modulo and lists. We give a modularity theorem stating sufficient conditions
for termination on a combination of theories, given termination on each. The
above theories, as well as others, satisfy these conditions. We introduce
several sets of benchmarks on these theories and their combinations, including
both parametric synthetic benchmarks to test scalability, and real-world
problems to test performance on huge sets of literals. We compare the
rewrite-based theorem prover E with the validity checkers CVC and CVC Lite.
Contrary to the folklore that a general-purpose prover cannot compete with
reasoners with built-in theories, the experiments are overall favorable to the
theorem prover, showing that the rewriting approach is not only elegant and
conceptually simple but also has important practical implications.
Comment: To appear in the ACM Transactions on Computational Logic, 49 pages
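A hedged sketch of the workflow: the theory axioms and the ground literals are
encoded as a first-order problem (here in TPTP syntax, which E accepts), and
the prover's saturation then settles satisfiability. The axiom selection for
lists and the literal set below are illustrative.

    # Emit a TPTP problem for a ground-literal satisfiability check
    # against an illustrative axiomatisation of lists.
    axioms = [
        ("car_cons", "! [X, Y] : (car(cons(X, Y)) = X)"),
        ("cdr_cons", "! [X, Y] : (cdr(cons(X, Y)) = Y)"),
        ("cons_ext", "! [X] : (cons(car(X), cdr(X)) = X)"),
    ]
    # Ground literals to test: the prover reports unsatisfiable exactly
    # when their conjunction is inconsistent with the theory.
    literals = ["cons(a, b) = c", "car(c) != a"]

    lines = [f"fof({name}, axiom, ({body}))." for name, body in axioms]
    lines += [f"fof(lit_{k}, axiom, ({l}))." for k, l in enumerate(literals)]
    print("\n".join(lines))   # save as problem.p, then:  eprover --auto problem.p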
Infectious Disease Ontology
Technological developments have resulted in tremendous increases in the volume and diversity of the data and information that must be processed in the course of biomedical and clinical research and practice. Researchers are at the same time under ever greater pressure to share data and to take steps to ensure that data resources are interoperable. The use of ontologies to annotate data has proven successful in supporting these goals and in providing new possibilities for the automated processing of data and information. In this chapter, we describe different types of vocabulary resources and emphasize those features of formal ontologies that make them most useful for computational applications. We describe current uses of ontologies and discuss future goals for ontology-based computing, focusing on its use in the field of infectious diseases. We review the largest and most widely used vocabulary resources relevant to the study of infectious diseases and conclude with a description of the Infectious Disease Ontology (IDO) suite of interoperable ontology modules that together cover the entire infectious disease domain.
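As a minimal sketch of ontology-based annotation, the Python fragment below
uses rdflib to attach ontology terms to a data record as RDF triples; the
namespaces and class names are placeholders, not actual IDO identifiers.

    from rdflib import Graph, Literal, Namespace, RDF

    IDO = Namespace("http://example.org/ido#")       # placeholder namespace
    EX = Namespace("http://example.org/records#")

    g = Graph()
    record = EX["patient42_specimen1"]
    g.add((record, RDF.type, IDO["InfectiousDisease"]))    # illustrative class
    g.add((record, IDO["hasPathogen"], IDO["Influenza_A"]))
    g.add((record, EX["collectedOn"], Literal("2011-03-15")))

    # Once records carry shared ontology terms, one SPARQL query can
    # retrieve matching data across independently produced datasets.
    print(g.serialize(format="turtle"))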
A comparative study of fragment screening methods on the p38α kinase: new methods, new insights
The stress-activated kinase p38α was used to evaluate a fragment-based drug discovery approach using the BioFocus fragment library. Compounds were screened by surface plasmon resonance (SPR) on a Biacore™ T100 against p38α and two selectivity targets. A sub-set of our library was the focus of detailed follow-up analyses that included hit confirmation, affinity determination on 24 confirmed, selective hits and competition assays of these hits with respect to a known ATP binding site inhibitor. In addition, functional activity against p38α was assessed in a biochemical assay using a mobility shift platform (LC3000, Caliper LifeSciences). A selection of fragments was also evaluated using fluorescence lifetime (FLEXYTE™) and microscale thermophoresis (Nanotemper) technologies. A good correlation between the data for the different assays was found. Crystal structures were solved for four of the small molecules complexed to p38α. Interestingly, as determined both by X-ray analysis and SPR competition experiments, three of the complexes involved the fragment at the ATP binding site, while the fourth compound bound in a distal site that may offer potential as a novel drug target site. A first round of optimization around the remotely bound fragment has led to the identification of a series of triazole-containing compounds. This approach could form the basis for developing novel and active p38α inhibitors. More broadly, it illustrates the power of applying a range of biophysical and biochemical techniques to the discovery of fragments that facilitate the development of novel modulators of kinase and other drug targets.
Astrometry and geodesy with radio interferometry: experiments, models, results
Summarizes current status of radio interferometry at radio frequencies
between Earth-based receivers, for astrometric and geodetic applications.
Emphasizes theoretical models of VLBI observables that are required to extract
results at the present accuracy levels of 1 cm and 1 nanoradian. Highlights the
achievements of VLBI during the past two decades in reference frames, Earth
orientation, atmospheric effects on microwave propagation, and relativity.
Comment: 83 pages, 19 Postscript figures. To be published in Rev. Mod. Phys., Vol. 70, Oct. 1998
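For orientation, the leading term in those theoretical models is the purely
geometric delay tau = -(b . s_hat)/c; the back-of-envelope Python sketch below
computes it, omitting the Earth-orientation, atmospheric, and relativistic
corrections that the full models need to reach the centimetre and nanoradian
level.

    import math

    C = 299_792_458.0   # speed of light, m/s

    def geometric_delay(baseline_m, ra_rad, dec_rad):
        # Delay (s) for a baseline vector (m, same frame as the source
        # direction) and a source at right ascension/declination.
        s_hat = (math.cos(dec_rad) * math.cos(ra_rad),
                 math.cos(dec_rad) * math.sin(ra_rad),
                 math.sin(dec_rad))
        dot = sum(b * s for b, s in zip(baseline_m, s_hat))
        return -dot / C

    # ~6000 km baseline, source 30 degrees above the equator
    tau = geometric_delay((6.0e6, 0.0, 0.0), ra_rad=0.0,
                          dec_rad=math.radians(30.0))
    print(f"{tau * 1e3:.3f} ms")   # about -17 ms; 1 cm is roughly 33 ps of delay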
The Model-Measuring Problem
We define the model-measuring problem: given a model M and a specification φ, what is the maximal distance ρ such that all models M′ within distance ρ of M satisfy (or violate) φ? The model-measuring problem presupposes a distance function on models. We concentrate on automatic distance functions, which are defined by weighted automata. The model-measuring problem subsumes several generalizations of the classical model-checking problem, in particular, quantitative model-checking problems that measure the degree of satisfaction of a specification, and robustness problems that measure how much a model can be perturbed without violating the specification. We show that for automatic distance functions and ω-regular linear-time and branching-time specifications, the model-measuring problem can be solved. We use automata-theoretic model-checking methods for model measuring, replacing the emptiness question for standard word and tree automata with the optimal-weight question for the weighted versions of these automata. We consider weighted automata that accumulate weights by maximizing, summing, discounting, and limit averaging. We give several examples of using the model-measuring problem to compute various notions of robustness and quantitative satisfaction for temporal specifications.
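As a hedged sketch of the central reduction, the fragment below answers the
optimal-weight question for a weighted automaton with summing semantics (the
least total weight of an accepted word) via Dijkstra's algorithm; in the
model-measuring setting this automaton would be a product of the distance
automaton with an automaton for the specification, which is not constructed
here.

    import heapq

    def min_accepted_weight(transitions, start, accepting):
        # transitions: dict state -> list of (letter, weight, next_state),
        # weights non-negative.  Returns (weight, word) or None.
        heap = [(0, start, "")]
        best = {}
        while heap:
            w, q, word = heapq.heappop(heap)
            if q in best and best[q] <= w:
                continue
            best[q] = w
            if q in accepting:
                return w, word
            for letter, cost, nxt in transitions.get(q, ()):
                heapq.heappush(heap, (w + cost, nxt, word + letter))
        return None

    # Weight 0 for letter 'a', weight 1 per perturbation step 'b'
    aut = {0: [('a', 0, 0), ('b', 1, 1)], 1: [('a', 0, 2), ('b', 1, 2)]}
    print(min_accepted_weight(aut, start=0, accepting={2}))   # (1, 'ba')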
