754 research outputs found
A Unifying Theory for Graph Transformation
The field of graph transformation studies the rule-based transformation of graphs. An important branch is the algebraic graph transformation tradition, in which approaches are defined and studied using the language of category theory. Most algebraic graph transformation approaches (such as DPO, SPO, SqPO, and AGREE) are opinionated about the local contexts that are allowed around matches for rules, and about how replacement in context should work exactly. The approaches also differ considerably in their underlying formal theories and their general expressiveness (e.g., not all frameworks allow duplication). This dissertation proposes an expressive algebraic graph transformation approach, called PBPO+, which is an adaptation of PBPO by Corradini et al. The central contribution is a proof that PBPO+ subsumes (under mild restrictions) DPO, SqPO, AGREE, and PBPO in the important categorical setting of quasitoposes. This result allows for a more unified study of graph transformation metatheory, methods, and tools. A concrete example of this is found in the second major contribution of this dissertation: a graph transformation termination method for PBPO+, based on decreasing interpretations, and defined for general categories. By applying the proposed encodings into PBPO+, this method can also be applied to DPO, SqPO, AGREE, and PBPO.
Fragments and frame classes: Towards a uniform proof theory for modal fixed point logics
This thesis studies the proof theory of modal fixed point logics. In particular, we construct proof systems for various fragments of the modal mu-calculus, interpreted over various classes of frames. With an emphasis on uniform constructions and general results, we aim to bring the relatively underdeveloped proof theory of modal fixed point logics closer to the well-established proof theory of basic modal logic. We employ two main approaches. First, we seek to generalise existing methods for basic modal logic to accommodate fragments of the modal mu-calculus. We use this approach for obtaining Hilbert-style proof systems. Secondly, we adapt existing proof systems for the modal mu-calculus to various classes of frames. This approach yields proof systems which are non-well-founded, or cyclic. The thesis starts with an introduction and some mathematical preliminaries. In Chapter 3 we give hypersequent calculi for modal logic with the master modality, building on work by Ori Lahav. This is followed by an Intermezzo, where we present an abstract framework for cyclic proofs, in which we give sufficient conditions for establishing the bounded proof property. In Chapter 4 we generalise existing work on Hilbert-style proof systems for PDL to the level of the continuous modal mu-calculus. Chapter 5 contains a novel cyclic proof system for the alternation-free two-way modal mu-calculus. Finally, in Chapter 6, we present a cyclic proof system for Guarded Kleene Algebra with Tests and take a first step towards using it to establish the completeness of an algebraic counterpart.
Formally verified animation for RoboChart using interaction trees
RoboChart is a core notation in the RoboStar framework. It is a timed and probabilistic domain-specific and state machine-based language for robotics. RoboChart supports shared variables and communication across entities in its component model. It has a formal denotational semantics given in CSP. The semantic technique of Interaction Trees (ITrees) represents behaviours of reactive and concurrent programs interacting with their environments. Recent mechanisations of ITrees, an ITree-based CSP semantics, and a Z mathematical toolkit in Isabelle/HOL bring new applications of verification and animation for state-rich process languages, such as RoboChart. In this paper, we use ITrees to give RoboChart a novel operational semantics, implement it in Isabelle, and use Isabelle’s code generator to generate verified and executable animations. We illustrate our approach using autonomous chemical detector and patrol robot models, exhibiting nondeterminism and using shared variables. With animation, we show two concrete scenarios for the chemical detector, when the robot encounters different environmental inputs, and three for the patrol robot, when its calibrated position is in different corridor sections. We also verify that the animated scenarios are trace refinements of the CSP denotational semantics of the RoboChart models using FDR, a refinement model checker for CSP. This ensures that our approach to resolving nondeterminism using CSP operators with priority is sound and correct.
Complete and easy type inference for first-class polymorphism
The Hindley-Milner (HM) typing discipline is remarkable in that it allows programs to be statically typed without requiring the programmer to annotate them with types. This is due to the HM system offering complete type inference, meaning that if a program is well typed, the inference algorithm is able to determine all the necessary typing information. Let bindings implicitly perform generalisation, allowing a let-bound variable to receive the most general possible type, which in turn may be instantiated appropriately at each of the variable’s use sites. As a result, the HM type system has become the foundation for type inference in programming languages such as Haskell as well as the ML family of languages, and has been extended in a multitude of ways.
The original HM system only supports prenex polymorphism, where type variables are universally quantified only at the outermost level. This precludes many useful programs, such as passing a data structure to a function in the form of a fold function, which would need to be polymorphic in the type of the accumulator. However, this would require a nested quantifier in the type of the overall function. As a result, one direction of extending the HM system is to add support for first-class polymorphism, allowing arbitrarily nested quantifiers and instantiating type variables with polymorphic types. In such systems, restrictions are necessary to retain decidability of type inference.
This work presents FreezeML, a novel approach for integrating first-class polymorphism into the HM system, focused on simplicity. It eschews sophisticated yet hard to grasp heuristics in the type systems or extending the language of types, while still requiring only modest amounts of annotations. In particular, FreezeML leverages the mechanisms for generalisation and instantiation that are already at the heart of ML. Generalisation and instantiation are performed by let bindings and variables, respectively, but extended to types beyond prenex polymorphism. The defining feature of FreezeML is the ability to freeze variables, which prevents the usual instantiation of their types, allowing them instead to keep their original, fully polymorphic types.
We demonstrate that FreezeML is as expressive as System F by providing a translation from the latter to the former; the reverse direction is also shown. Further, we prove that FreezeML is indeed a conservative extension of ML: when considering only ML programs, FreezeML accepts exactly the same programs as ML itself.
We show that type inference for FreezeML can easily be integrated into HM-like type systems by presenting a sound and complete inference algorithm for FreezeML that extends Algorithm W, the original inference algorithm for the HM system.
Since the inception of Algorithm W in the 1970s, type inference for the HM system and its descendants has been modernised by approaches that involve constraint solving, which proved to be more modular and extensible. In such systems, a term is translated to a logical constraint, whose solutions correspond to the types of the original term. A solver for such constraints may then be defined independently. To this end, we demonstrate such a constraint-based inference approach for FreezeML.
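The constraint-based style can be sketched concretely. The following is a generic illustration for a tiny lambda calculus, not FreezeML's actual constraint language or solver; the term encoding, the `t0`, `t1`, ... variable names, and the omission of let-generalisation and the occurs check are simplifications for the example.

```python
import itertools

# A minimal sketch of constraint-based type inference: a term is first
# translated into equality constraints over type variables, which an
# independent unification-based solver then resolves. Types are "int",
# type variables ("t0", "t1", ...), or function types ("fun", dom, cod).
# Terms are ("lit", n), ("var", x), ("lam", x, body), ("app", f, arg).

def infer(term):
    """Infer the type of a closed term: generate constraints, then solve."""
    constraints = []
    fresh = map("t{}".format, itertools.count())

    def generate(term, env):
        kind = term[0]
        if kind == "lit":
            return "int"
        if kind == "var":
            return env[term[1]]          # assumes the variable is in scope
        if kind == "lam":
            a = next(fresh)
            return ("fun", a, generate(term[2], {**env, term[1]: a}))
        if kind == "app":
            fun_ty = generate(term[1], env)
            arg_ty = generate(term[2], env)
            result = next(fresh)
            constraints.append((fun_ty, ("fun", arg_ty, result)))
            return result

    # First-order unification over the collected constraints.
    subst = {}

    def walk(ty):
        while isinstance(ty, str) and ty in subst:
            ty = subst[ty]
        return ty

    def unify(a, b):
        a, b = walk(a), walk(b)
        if a == b:
            return
        if isinstance(a, str) and a.startswith("t"):
            subst[a] = b                 # occurs check omitted for brevity
        elif isinstance(b, str) and b.startswith("t"):
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple):
            unify(a[1], b[1])
            unify(a[2], b[2])
        else:
            raise TypeError(f"cannot unify {a} and {b}")

    root = generate(term, {})
    for lhs, rhs in constraints:
        unify(lhs, rhs)

    def resolve(ty):
        ty = walk(ty)
        if isinstance(ty, tuple):
            return ("fun", resolve(ty[1]), resolve(ty[2]))
        return ty

    return resolve(root)
```

The key design point the paragraph above describes is visible here: `generate` and `unify` are entirely independent, so the solver could be swapped out or extended without touching constraint generation.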
We also discuss the effects of integrating the value restriction into FreezeML and provide detailed comparisons with other approaches to first-class polymorphism in ML, alongside a collection of examples found in the literature.
Language integrated relational lenses
Relational databases are ubiquitous. Such monolithic databases accumulate large
amounts of data, yet applications typically only work on small portions of the data
at a time. A subset of the database defined as a computation on the underlying
tables is called a view. Querying views is helpful, but it is also desirable to update
them and have these changes be applied to the underlying database. This view
update problem has been the subject of much previous work, but support
by database servers is limited and only rarely available.
Lenses are a popular approach to bidirectional transformations, a generalization
of the view update problem in databases to arbitrary data. However, perhaps surprisingly, lenses have seldom actually been used to implement updatable views in
databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses. However, to the best of our knowledge this
proposal has not been implemented or evaluated prior to the work reported in
this thesis.
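For readers unfamiliar with lenses, the get/put pattern and its round-tripping laws can be sketched generically. This is a toy Python illustration, not the thesis's relational lens definitions; the `ProjectionLens` class and the example rows are invented.

```python
# A minimal lens over "rows": get projects a view out of a source, put
# propagates an updated view back into the source. Well-behaved lenses
# satisfy GetPut (putting back an unchanged view changes nothing) and
# PutGet (a put is visible in the next get). This projection lens assumes
# the update preserves the number and order of rows.

class ProjectionLens:
    """View = the chosen columns of each row (rows are dicts)."""

    def __init__(self, columns):
        self.columns = columns

    def get(self, source):
        return [{c: row[c] for c in self.columns} for row in source]

    def put(self, source, view):
        # Merge each updated view row back over the corresponding source row,
        # leaving the hidden columns (here: salary) untouched.
        return [{**row, **view_row} for row, view_row in zip(source, view)]


employees = [{"name": "ada", "salary": 90}, {"name": "bob", "salary": 70}]
lens = ProjectionLens(["name"])

# GetPut: put(s, get(s)) == s
assert lens.put(employees, lens.get(employees)) == employees
# PutGet: get(put(s, v)) == v
new_view = [{"name": "eve"}, {"name": "bob"}]
assert lens.get(lens.put(employees, new_view)) == new_view
```

Relational lenses generalise exactly this round-tripping discipline from in-memory rows to database tables defined by relational queries.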
This thesis proposes programming language support for relational lenses. Language integrated relational lenses support expressive and efficient view updates,
without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development
becomes easier and less error-prone, avoiding the impedance mismatch of having
two programming languages. Integrating relational lenses into the language poses
additional challenges. As defined by Bohannon et al., relational lenses completely
recompute the database, making them inefficient as the database scales. The
other challenge is that some parts of the well-formedness conditions are too general for implementation. Bohannon et al. specify predicates using possibly infinite
abstract sets and define the type checking rules using relational algebra.
Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes
to the source tables. We prove that our incremental semantics are functionally
equivalent to the non-incremental semantics, and our experimental results show
orders of magnitude improvement over the non-incremental approach. This thesis introduces a concrete predicate syntax and shows how the required checks
are performed on these predicates and that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates that are fully
known at compile time and dynamic predicates that are only known during execution, and introduce hybrid predicates taking inspiration from both approaches.
This thesis adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that any well-typed
functional relational lens expression can be translated into a well-typed sequential lens.
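The change-propagating idea behind the incremental semantics can be illustrated with a deliberately simplified selection view. This is a Python sketch with invented names; the thesis's incremental semantics covers the full relational lens language and its well-formedness conditions.

```python
# Incremental change propagation for a selection view (the rows satisfying
# a predicate). A small delta on the view -- inserted and deleted rows --
# maps directly to the same delta on the source table, so the cost scales
# with the size of the change rather than the size of the database.

def select_get(predicate, table):
    return [row for row in table if predicate(row)]

def propagate(predicate, table, inserted, deleted):
    """Apply view deltas to the source. Inserted rows must satisfy the
    predicate, otherwise they would not be visible in the view."""
    for row in inserted:
        if not predicate(row):
            raise ValueError(f"row {row!r} falls outside the view")
    return [row for row in table if row not in deleted] + inserted


high_earners = lambda row: row["salary"] > 80
employees = [{"name": "ada", "salary": 90}, {"name": "bob", "salary": 70}]

updated = propagate(high_earners, employees,
                    inserted=[{"name": "eve", "salary": 95}], deleted=[])
assert select_get(high_earners, updated) == [{"name": "ada", "salary": 90},
                                             {"name": "eve", "salary": 95}]
```

The functional equivalence the thesis proves corresponds here to the observation that applying the delta yields the same table as recomputing the source from the whole updated view.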
We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written
in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler.
These two implementations attest to the possibility of turning relational lenses
into a practical language feature.
Investigations into Proof Structures
We introduce and elaborate a novel formalism for the manipulation and
analysis of proofs as objects in a global manner. In this first approach the
formalism is restricted to first-order problems characterized by condensed
detachment. It is applied in an exemplary manner to a coherent and
comprehensive formal reconstruction and analysis of historical proofs of a
widely-studied problem due to Łukasiewicz. The underlying approach opens the
door towards new systematic ways of generating lemmas in the course of proof
search, with the effect of reducing the search effort and finding shorter proofs.
Among the numerous reported experiments along this line, a proof of
Łukasiewicz's problem was automatically discovered that is much shorter than
any proof found before by man or machine.
Comment: This article is a continuation of arXiv:2104.1364
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged in recent years since the appearance of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
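As a flavour of the PCR family, the basic two-source PCR5 rule, as commonly stated in the DSmT literature, can be sketched as follows. This is an illustrative Python rendering with invented names; the volume's own Matlab and RUST codes are the authoritative implementations.

```python
# Two-source PCR5 combination. The conjunctive consensus assigns the mass
# m1(A)*m2(B) to each non-empty intersection A∩B; for conflicting pairs
# (A∩B = ∅) the product is redistributed back to A and B proportionally
# to m1(A) and m2(B). Focal elements are frozensets over the frame.

from collections import defaultdict

def pcr5(m1, m2):
    combined = defaultdict(float)
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            if inter:
                combined[inter] += x * y            # conjunctive consensus
            elif x + y > 0:
                combined[a] += x * x * y / (x + y)  # A's share of the conflict
                combined[b] += y * y * x / (x + y)  # B's share of the conflict
    return dict(combined)


m1 = {frozenset({"A"}): 0.6, frozenset({"B"}): 0.4}
m2 = {frozenset({"A"}): 0.7, frozenset({"B"}): 0.3}
fused = pcr5(m1, m2)
assert abs(sum(fused.values()) - 1.0) < 1e-9   # total mass is preserved
```

Unlike Dempster's rule, which renormalises the conflict away globally, PCR5 returns each conflicting product to the very hypotheses that generated it, which is the behaviour the improved PCR5/PCR6 variants above refine further.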
Optimality and Complexity in Measured Quantum-State Stochastic Processes
If an experimentalist observes a sequence of emitted quantum states via
either projective or positive-operator-valued measurements, the outcomes form a
time series. Individual time series are realizations of a stochastic process
over the measurements' classical outcomes. We recently showed that, in general,
the resulting stochastic process is highly complex in two specific senses: (i)
it is inherently unpredictable to varying degrees that depend on measurement
choice and (ii) optimal prediction requires using an infinite number of
temporal features. Here, we identify the mechanism underlying this
complicatedness as generator nonunifilarity -- the degeneracy between sequences
of generator states and sequences of measurement outcomes. This makes it
possible to quantitatively explore the influence that measurement choice has on
a quantum process' degrees of randomness and structural complexity using
recently introduced methods from ergodic theory. Progress in this, though,
requires quantitative measures of structure and memory in observed time series.
And, success requires accurate and efficient estimation algorithms that
overcome the requirement to explicitly represent an infinite set of predictive
features. We provide these metrics and associated algorithms, using them to
design informationally-optimal measurements of open quantum dynamical systems.
Comment: 31 pages, 6 appendices, 22 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/qdic.ht
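One of the simplest quantitative measures of randomness in an observed time series can be sketched as follows: a block-entropy estimate of the entropy rate, assuming a long stationary realization. This is textbook material rather than the paper's own estimators, and all names are invented.

```python
# Estimate the entropy rate h of a symbol sequence as a difference of
# block entropies, h ≈ H(L) - H(L-1), with block statistics taken from a
# single long realization. For an i.i.d. fair coin the estimate should be
# close to 1 bit per symbol.

from collections import Counter
from math import log2
import random

def block_entropy(seq, L):
    """Shannon entropy (bits) of the empirical distribution of length-L blocks."""
    blocks = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    total = sum(blocks.values())
    return -sum((n / total) * log2(n / total) for n in blocks.values())

def entropy_rate_estimate(seq, L=4):
    return block_entropy(seq, L) - block_entropy(seq, L - 1)


random.seed(0)
coin = [random.randint(0, 1) for _ in range(100_000)]
h = entropy_rate_estimate(coin)
assert abs(h - 1.0) < 0.02   # fair coin: about 1 bit per symbol
```

For processes requiring infinitely many temporal features, such finite-L estimates converge only slowly, which is one way to see why the accurate and efficient estimation algorithms the abstract mentions are needed.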
LIPIcs, Volume 261, ICALP 2023, Complete Volume