A probabilistic framework for analysing the compositionality of conceptual combinations
Conceptual combination plays a fundamental role in creating the broad
range of compound phrases utilised in everyday language. This article provides
a novel probabilistic framework for assessing whether the semantics of conceptual
combinations are compositional, and so can be considered as a function of
the semantics of the constituent concepts, or not. While the systematicity and
productivity of language provide a strong argument in favor of assuming compositionality,
this very assumption is still regularly questioned in both cognitive
science and philosophy. Additionally, the principle of semantic compositionality
is underspecified, which means that notions of both "strong" and "weak"
compositionality appear in the literature. Rather than adjudicating between
different grades of compositionality, the framework presented here contributes
formal methods for determining a clear dividing line between compositional and
non-compositional semantics. In addition, we suggest that the distinction between
these is contextually sensitive. Compositionality is equated with a joint probability distribution modeling how the constituent concepts in the combination
are interpreted. Marginal selectivity is introduced as a pivotal probabilistic
constraint for the application of the Bell/CH and CHSH systems of inequalities.
Non-compositionality is equated with a failure of marginal selectivity, or violation
of either system of inequalities in the presence of marginal selectivity. This
means that the conceptual combination cannot be modeled in a joint probability
distribution, the variables of which correspond to how the constituent concepts
are being interpreted. The formal analysis methods are demonstrated by applying
them to an empirical illustration of twenty-four non-lexicalised conceptual
combinations.
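The framework's two-step test can be illustrated with a small sketch: first check marginal selectivity across the four measurement contexts, then evaluate the CHSH quantity. This is a minimal illustration, not the paper's own analysis; the four joint distributions below are invented toy data (here a maximally non-compositional "PR-box"-style pattern), and the variable names are assumptions.

```python
def marginal(p, var_index, value):
    """Marginal probability that variable var_index takes `value`
    in a joint distribution p over (+1, -1) outcome pairs."""
    return sum(prob for pair, prob in p.items() if pair[var_index] == value)

def correlation(p):
    """Expectation E[A*B] under joint distribution p."""
    return sum(a * b * prob for (a, b), prob in p.items())

# Four joint distributions, one per pairing of interpretation conditions
# (A, B), (A, B'), (A', B), (A', B').  Outcomes +1 / -1 stand for two
# candidate senses of each constituent concept (illustrative data only).
p_AB   = {(1, 1): 0.5, (1, -1): 0.0, (-1, 1): 0.0, (-1, -1): 0.5}
p_ABp  = {(1, 1): 0.5, (1, -1): 0.0, (-1, 1): 0.0, (-1, -1): 0.5}
p_ApB  = {(1, 1): 0.5, (1, -1): 0.0, (-1, 1): 0.0, (-1, -1): 0.5}
p_ApBp = {(1, 1): 0.0, (1, -1): 0.5, (-1, 1): 0.5, (-1, -1): 0.0}

# Marginal selectivity: the marginal of A must not depend on whether it
# is paired with B or B' (and symmetrically for B against A and A').
selective = all(
    abs(marginal(p_AB, 0, v) - marginal(p_ABp, 0, v)) < 1e-9 and
    abs(marginal(p_ApB, 0, v) - marginal(p_ApBp, 0, v)) < 1e-9 and
    abs(marginal(p_AB, 1, v) - marginal(p_ApB, 1, v)) < 1e-9 and
    abs(marginal(p_ABp, 1, v) - marginal(p_ApBp, 1, v)) < 1e-9
    for v in (1, -1)
)

# CHSH quantity: given marginal selectivity, a single joint distribution
# over all four variables exists only if |S| <= 2.
S = (correlation(p_AB) + correlation(p_ABp)
     + correlation(p_ApB) - correlation(p_ApBp))

print(selective, S)  # -> True 4.0: selective, yet |S| > 2, so non-compositional
```

With these toy distributions marginal selectivity holds, but S = 4 exceeds the classical bound of 2, so no joint distribution over the four interpretation variables exists; by the framework's criterion the combination would be classed as non-compositional.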
Polysemy and word meaning: an account of lexical meaning for different kinds of content words
There is an ongoing debate about the meaning of lexical words, i.e., words that contribute content to the meaning of sentences. This debate has coincided with a renewal in the study of polysemy, which has taken place mainly in the psycholinguistics camp. There is already a fruitful interbreeding between two lines of research: the theoretical study of lexical word meaning, on the one hand, and the models of polysemy that psycholinguists present, on the other. In this paper I aim to deepen this ongoing interbreeding: I examine what is said about polysemy, particularly in the psycholinguistics literature, and then show how what we seem to know about the representation and storage of polysemous senses affects the models that we have of lexical word meaning.
Z Logic and its Consequences
This paper provides an introduction to the specification language Z from a logical perspective. The possibility of presenting Z in this way is a consequence of a number of joint publications on Z logic that Henson and Reeves have co-written since 1997. We provide an informal as well as formal introduction to Z logic and show how it may be used, and extended, to investigate issues such as equational logic, the logic of preconditions, the issue of monotonicity, and both operation and data refinement.
Dynamic Tableaux for Dynamic Modal Logics
In this dissertation we present proof systems for several modal logics. These proof systems are based on analytic (or semantic) tableaux.
Modal logics are logics for reasoning about possibility, knowledge, beliefs, preferences, and other modalities. Their semantics are almost always based on Saul Kripke’s possible world semantics. In Kripke semantics, models are represented by relational structures or, equivalently, labeled graphs. Syntactic formulas that express statements about knowledge and other modalities are evaluated in terms of such models.
This dissertation focuses on modal logics with dynamic operators for public announcements, belief revision, preference upgrades, and so on. These operators are defined in terms of mathematical operations on Kripke models. Thus, for example, a belief revision operator in the syntax would correspond to a belief revision operation on models.
The ‘dynamic’ semantics of dynamic modal logics are a clever way of extending languages without compromising on intuitiveness. We present ‘dynamic’ tableau proof systems for these dynamic semantics, with the express aim to make them conceptually simple, easy to use, modular, and extensible. This we do by reflecting the semantics as closely as possible in the components of our tableau system. For instance, dynamic operations on Kripke models have counterpart dynamic relations between tableaux.
Soundness, completeness, and decidability are three of the most important properties that a proof system may have. A proof system is sound if and only if any formula for which a proof exists is true in every model. A proof system is complete if and only if for any formula that is true in all models, a proof exists. A proof system is decidable if and only if any formula can be proved to be a theorem or not a theorem in a finite number of steps. All proof systems in this dissertation are sound, complete, and decidable.
Part of our strategy to create modular tableau systems is to delay concerns over decidability until after soundness and completeness have been established. Decidability is attained through the operations of folding and through operations on ‘tableau cascades’, which are graphs of tableaux.
Finally, we provide a proof-of-concept implementation of our dynamic tableau system for public announcement logic in the Clojure programming language
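The correspondence between syntactic dynamic operators and operations on Kripke models can be sketched for the simplest case, public announcement: announcing a fact restricts the model to the worlds where it holds. This is a minimal illustration under assumed encodings (worlds, a single-agent accessibility relation, and a valuation as plain Python sets and dicts), not the dissertation's Clojure implementation.

```python
# Toy Kripke model: three worlds, one agent's accessibility relation,
# and a valuation mapping each world to the atomic facts true there.
worlds = {"w1", "w2", "w3"}
relation = {("w1", "w1"), ("w1", "w2"), ("w2", "w2"),
            ("w2", "w3"), ("w3", "w3")}
valuation = {"w1": {"p"}, "w2": {"p"}, "w3": set()}

def announce(worlds, relation, valuation, fact):
    """Publicly announcing an atomic `fact` restricts the model to the
    worlds where the fact holds, mirroring the dynamic semantics: the
    syntactic announcement operator corresponds to this model operation."""
    kept = {w for w in worlds if fact in valuation[w]}
    new_relation = {(u, v) for (u, v) in relation if u in kept and v in kept}
    new_valuation = {w: valuation[w] for w in kept}
    return kept, new_relation, new_valuation

w2, r2, v2 = announce(worlds, relation, valuation, "p")
print(sorted(w2))  # -> ['w1', 'w2']: w3, where p fails, is eliminated
```

In a dynamic tableau system in the dissertation's sense, this model operation would have a counterpart relation between tableaux, so that reasoning about the announced model reduces to reasoning about a transformed tableau.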
Don't Blame Distributional Semantics if it can't do Entailment
Distributional semantics has had enormous empirical success in Computational
Linguistics and Cognitive Science in modeling various semantic phenomena, such
as semantic similarity, and distributional models are widely used in
state-of-the-art Natural Language Processing systems. However, the theoretical
status of distributional semantics within a broader theory of language and
cognition is still unclear: What does distributional semantics model? Can it
be, on its own, a fully adequate model of the meanings of linguistic
expressions? The standard answer is that distributional semantics is not fully
adequate in this regard, because it falls short on some of the central aspects
of formal semantic approaches: truth conditions, entailment, reference, and
certain aspects of compositionality. We argue that this standard answer rests
on a misconception: These aspects do not belong in a theory of expression
meaning, they are instead aspects of speaker meaning, i.e., communicative
intentions in a particular context. In a slogan: words do not refer, speakers
do. Clearing this up enables us to argue that distributional semantics on its
own is an adequate model of expression meaning. Our proposal sheds light on the
role of distributional semantics in a broader theory of language and cognition,
its relationship to formal semantics, and its place in computational models.Comment: To appear in Proceedings of the 13th International Conference on
Computational Semantics (IWCS 2019), Gothenburg, Swede
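The kind of phenomenon the abstract credits distributional semantics with modeling, graded semantic similarity, can be sketched with cosine similarity over co-occurrence vectors. The toy count vectors and context words below are invented for illustration, not drawn from any real corpus or from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two co-occurrence count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical co-occurrence counts with three context words,
# e.g. ("bark", "purr", "leash").
vec = {
    "dog":   [10, 1, 8],
    "cat":   [2, 9, 1],
    "puppy": [9, 1, 7],
}

# A distributional model captures that "dog" is more similar to "puppy"
# than to "cat", without saying anything about truth or reference.
print(cosine(vec["dog"], vec["puppy"]) > cosine(vec["dog"], vec["cat"]))  # -> True
```

On the paper's view, such a model is an adequate account of expression meaning; truth conditions, entailment, and reference belong to speaker meaning and so fall outside what this computation is asked to deliver.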