On elementary equivalence in fuzzy predicate logics
Our work is a contribution to the model theory of fuzzy predicate logics. In this paper we characterize elementary equivalence between models of fuzzy predicate logic using elementary mappings. Refining the method of diagrams, we give a solution to an open problem of Hájek and Cintula (J Symb Log 71(3):863-880, 2006, Conjectures 1 and 2). We also investigate the properties of elementary extensions in witnessed and quasi-witnessed theories, generalizing some results of Section 7 of Hájek and Cintula (J Symb Log 71(3):863-880, 2006) and of Section 4 of Cerami and Esteva (Arch Math Log 50(5/6):625-641, 2011) to non-exhaustive models.
New Directions in Categorical Logic, for Classical, Probabilistic and Quantum Logic
Intuitionistic logic, in which the double negation law not-not-P = P fails,
is dominant in categorical logic, notably in topos theory. This paper follows a
different direction in which double negation does hold. The algebraic notions
of effect algebra/module that emerged in theoretical physics form the
cornerstone. It is shown that under mild conditions on a category, its maps of
the form X -> 1+1 carry such effect module structure, and can be used as
predicates. Predicates are identified in many different situations, and capture
for instance ordinary subsets, fuzzy predicates in a probabilistic setting,
idempotents in a ring, and effects (positive elements below the unit) in a
C*-algebra or Hilbert space. In quantum foundations the duality between states
and effects plays an important role. It appears here in the form of an
adjunction, where we use maps 1 -> X as states. For such a state s and a
predicate p, the validity probability s |= p is defined, as an abstract Born
rule. It captures many forms of (Boolean or probabilistic) validity known from
the literature. Measurement from quantum mechanics is formalised categorically
in terms of `instruments', using Lüders' rule in the quantum case. These
instruments are special maps associated with predicates (more generally, with
tests), which perform the act of measurement and may have a side-effect that
disturbs the system under observation. This abstract description of
side-effects is one of the main achievements of the current approach. It is
shown that in the special case of C*-algebras, side-effects appear exclusively
in the non-commutative case. Also, these instruments are used for test
operators in a dynamic logic that can be used for reasoning about quantum
programs/protocols. The paper describes four successive assumptions, towards a
categorical axiomatisation of quantitative logic for probabilistic and quantum
systems.
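As a rough illustration of the abstract Born rule mentioned above (the notation is ours, instantiating the paper's general maps in two familiar settings):

```latex
% Sketch: validity  s |= p  as an abstract Born rule.
% In the Kleisli category of the distribution monad, a state
% s : 1 -> X is a probability distribution on X, and a predicate
% p : X -> 1+1 amounts to a fuzzy predicate p : X -> [0,1].
\[
  s \models p \;=\; \sum_{x \in X} s(x)\, p(x)
\]
% In the quantum case, with a density matrix \rho as state and an
% effect 0 \leq E \leq I as predicate, the same rule instantiates
% to the usual Born rule:
\[
  \rho \models E \;=\; \operatorname{tr}(\rho E)
\]
```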
Dual Logic Concepts based on Mathematical Morphology in Stratified Institutions: Applications to Spatial Reasoning
Several logical operators are defined as dual pairs, in different types of
logics. Such dual pairs of operators also occur in other algebraic theories,
such as mathematical morphology. Based on this observation, this paper proposes
to define, at the abstract level of institutions, a pair of abstract dual and
logical operators as morphological erosion and dilation. Standard quantifiers
and modalities are then derived from these two abstract logical operators.
These operators are studied both on sets of states and sets of models. To cope
with the lack of explicit set of states in institutions, the proposed abstract
logical dual operators are defined in an extension of institutions, the
stratified institutions, which take into account the notion of open sentences,
the satisfaction of which is parametrized by sets of states. A hint on the
potential interest of the proposed framework for spatial reasoning is also
provided.
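The erosion/dilation pair underlying the abstract can be made concrete on powersets. The following toy sketch is our own (it is not taken from the paper): dilation and erosion on subsets of Z_n with a structuring element, exhibiting both the adjunction (dilation is left adjoint to erosion) and their duality under complement for a symmetric structuring element.

```python
# Illustrative sketch: erosion and dilation as a dual/adjoint pair of
# operators on subsets of Z_n (integers mod n). Names and data are ours.
n = 12
U = set(range(n))

def dilation(A, B):
    """delta_B(A) = {a + b : a in A, b in B}, computed mod n."""
    return {(a + b) % n for a in A for b in B}

def erosion(A, B):
    """eps_B(A) = {x : x + b in A for every b in B}, computed mod n."""
    return {x for x in U if all((x + b) % n in A for b in B)}

A = {3, 4, 5}
B = {-1, 0, 1}          # symmetric structuring element: B = -B

# Adjunction: delta_B(A) <= C  iff  A <= eps_B(C), for every C <= U.
for bits in range(2 ** n):
    C = {i for i in range(n) if bits >> i & 1}
    assert (dilation(A, B) <= C) == (A <= erosion(C, B))

# Duality under complement (B symmetric): eps_B(U \ A) = U \ delta_B(A).
assert erosion(U - A, B) == U - dilation(A, B)
```

The adjunction is exactly the pattern that makes quantifiers and modalities derivable from the two operators, as the abstract indicates.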
From probability to sequences and back
This is a survey covering sequential structures and their applications to the foundations of probability theory. Sequential convergence, convergence groups and the extension of sequentially continuous maps belong to general topology, and Trieste has long been a center of sequential topology. We begin with some personal reflections, continue with topological problems motivated by the extension of probability measures, and close with some recent results related to the categorical foundations of probability theory.
Quantitative Methods for Similarity in Description Logics
Description Logics (DLs) are a family of logic-based knowledge representation languages used to describe the knowledge of an application domain and reason about it in a formally well-defined way. They allow users to describe the important notions and classes of the knowledge domain as concepts, which formalize the necessary and sufficient conditions for individual objects to belong to that concept. A variety of different DLs exist, differing in the set of properties one can use to express concepts, the so-called concept constructors, as well as the set of axioms available to describe the relations between concepts or individuals. However, all classical DLs have in common that they can only express exact knowledge, and correspondingly only allow exact inferences. Either we can infer that some individual belongs to a concept, or we cannot; there is no in-between. In practice, though, knowledge is rarely exact. Many definitions have their exceptions or are vaguely formulated in the first place, and people might not only be interested in exact answers, but also in alternatives that are "close enough".
This thesis is aimed at tackling how to express that something is "close enough", and how to integrate this notion into the formalism of Description Logics. To this end, we will use the notion of similarity and dissimilarity measures as a way to quantify how close exactly two concepts are. We will look at how useful measures can be defined in the context of DLs, and how they can be incorporated into the formal framework in order to generalize it. In particular, we will look closer at two applications of such measures to DLs: Relaxed instance queries will incorporate a similarity measure in order to not just give the exact answer to some query, but all answers that are reasonably similar. Prototypical definitions, on the other hand, use a measure of dissimilarity or distance between concepts in order to allow the definition of, and reasoning with, concepts that capture not just those individuals that satisfy exactly the stated properties, but also those that are "close enough".
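The abstract does not spell out a concrete measure, so the following is only a toy sketch under our own assumptions: concepts and individuals are flattened to sets of atomic features, similarity is Jaccard-style, and a "relaxed instance query" returns every individual whose description is similar enough to the query concept.

```python
# Toy illustration (our own example; the thesis develops proper
# similarity measures for DL concepts): Jaccard similarity on
# feature sets, used for a relaxed instance query.

def similarity(c, d):
    """Jaccard similarity of two feature sets: |c & d| / |c | d|."""
    if not c and not d:
        return 1.0
    return len(c & d) / len(c | d)

# Hypothetical ABox: individuals described by sets of atomic concepts.
individuals = {
    "rex":   {"Animal", "Dog", "Trained"},
    "felix": {"Animal", "Cat"},
    "rock":  {"Mineral"},
}

def relaxed_instances(query, threshold):
    """All individuals at least `threshold`-similar to the query."""
    return {name for name, feats in individuals.items()
            if similarity(query, feats) >= threshold}

# An exact instance check would only accept individuals whose
# features include the whole query; the relaxed query also returns
# "close enough" answers such as felix (similarity 1/3).
query = {"Animal", "Dog"}
print(sorted(relaxed_instances(query, 0.3)))
```

Raising the threshold recovers stricter answers: at threshold 0.5 only "rex" (similarity 2/3) remains.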