Positive Logic with Adjoint Modalities: Proof Theory, Semantics and Reasoning about Information
We consider a simple modal logic whose non-modal part has conjunction and
disjunction as connectives and whose modalities come in adjoint pairs, but are
not in general closure operators. Despite the absence of negation and implication,
and of axioms corresponding to the characteristic axioms of (e.g.) T, S4 and
S5, such logics are useful, as shown in previous work by Baltag, Coecke and the
first author, for encoding and reasoning about information and misinformation
in multi-agent systems. For such a logic we present an algebraic semantics,
using lattices with agent-indexed families of adjoint pairs of operators, and a
cut-free sequent calculus. The calculus exploits operators on sequents, in the
style of "nested" or "tree-sequent" calculi; cut-admissibility is shown by
constructive syntactic methods. The applicability of the logic is illustrated
by reasoning about the muddy children puzzle, for which the calculus is
augmented with extra rules to express the facts of the muddy children scenario.
Comment: This paper is the full version of the article that is to appear in the ENTCS proceedings of the 25th conference on the Mathematical Foundations of Programming Semantics (MFPS), April 2009, University of Oxford.
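The key structural ingredient of the abstract above is a pair of adjoint operators on a lattice. The following is a minimal sketch (not the paper's calculus): for a relation R between two sets, the direct image operator is left adjoint to the universal preimage on the powerset lattices, and the adjunction law f(X) ⊆ Y ⇔ X ⊆ g(Y) can be checked exhaustively. The sets and relation below are illustrative stand-ins for an agent's accessibility relation.

```python
from itertools import chain, combinations

# A toy adjoint pair on powerset lattices ordered by inclusion.
# For a relation R ⊆ A×B, the direct image f(X) = {b : ∃a∈X. aRb}
# is left adjoint to g(Y) = {a : ∀b. aRb → b∈Y}.

A = {0, 1, 2}
B = {'x', 'y'}
R = {(0, 'x'), (1, 'x'), (1, 'y')}  # hypothetical accessibility relation

def f(X):  # left adjoint: direct image (a "diamond"-like operator)
    return {b for (a, b) in R if a in X}

def g(Y):  # right adjoint: universal preimage (a "box"-like operator)
    return {a for a in A if all(b in Y for (a2, b) in R if a2 == a)}

def powerset(S):
    S = list(S)
    return [set(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

# Check the adjunction law  f(X) ⊆ Y  ⇔  X ⊆ g(Y)  on all pairs of subsets.
assert all((f(X) <= Y) == (X <= g(Y)) for X in powerset(A) for Y in powerset(B))
print("adjunction law holds on all", len(powerset(A)) * len(powerset(B)), "pairs")
```

Note that nothing forces g(f(X)) = X here, matching the abstract's remark that the modalities need not be closure operators.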
Phase diagram of J1-J2 transverse field Ising model on the checkerboard lattice: a plaquette-operator approach
We study the effect of quantum fluctuations, induced by a transverse magnetic field, on the J1-J2 antiferromagnetic Ising model on the checkerboard lattice, the two-dimensional version of the pyrochlore lattice.
The zero-temperature phase diagram of the model has been obtained by employing
a plaquette operator approach (POA). The plaquette operator formalism bosonizes
the model, in which a single boson is associated to each eigenstate of a
plaquette and the inter-plaquette interactions define an effective Hamiltonian.
The excitations of a plaquette represent anharmonic fluctuations of the model, which not only lower the excitation energy compared with a single-spin flip but also lift the extensive degeneracy in favor of a plaquette-ordered solid (RPS) state, which breaks lattice translational symmetry, in addition to a unique collinear phase. The bosonic
excitation gap vanishes at the critical points to the Néel and collinear ordered phases, which defines the critical phase boundaries. At the homogeneous coupling (J1 = J2) and in its close neighborhood, the (canted) RPS state, established by anharmonic fluctuations, persists at low fields and is followed by a transition to the
quantum paramagnet (polarized) phase at high fields. The transition from the RPS state to the Néel phase is either a deconfined quantum phase transition or a first-order one, whereas a continuous transition occurs between the RPS and collinear phases.
Comment: To appear in EPJB, 12 pages, 15 figures, 1 table.
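The starting point of a plaquette-operator approach is the exact spectrum of one plaquette. The following is a minimal sketch (not the paper's bosonization machinery): exact diagonalisation of a single crossed plaquette of the checkerboard lattice, i.e. four Ising spins with all-to-all antiferromagnetic σ^z σ^z bonds plus a transverse field. The coupling and field values are illustrative.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)   # Pauli x
sz = np.array([[1, 0], [0, -1]], dtype=float)  # Pauli z
I2 = np.eye(2)

def site_op(op, i, n=4):
    """Embed a single-site operator at position i of an n-spin system."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def plaquette_hamiltonian(J=1.0, gamma=0.5):
    """H = J * sum_{i<j} sz_i sz_j - gamma * sum_i sx_i on four spins."""
    n = 4
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        for j in range(i + 1, n):  # all six bonds of the crossed plaquette
            H += J * site_op(sz, i) @ site_op(sz, j)
        H -= gamma * site_op(sx, i)
    return H

evals = np.linalg.eigvalsh(plaquette_hamiltonian())
print("plaquette spectrum (lowest 3):", np.round(evals[:3], 4))
```

At zero field the plaquette ground level sits at energy -2J with a six-fold degeneracy (the total-S^z = 0 manifold), illustrating the extensive degeneracy the abstract refers to; a small transverse field splits this manifold, which is the fluctuation effect the plaquette bosons encode.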
A Generalised Quantifier Theory of Natural Language in Categorical Compositional Distributional Semantics with Bialgebras
Categorical compositional distributional semantics is a model of natural language; it combines the statistical vector space models of words with the compositional models of grammar. We formalise in this model the generalised quantifier theory of natural language, due to Barwise and Cooper. The underlying setting is a compact closed category with bialgebras. We start from a generative grammar formalisation and develop an abstract categorical compositional semantics for it, then instantiate the abstract setting to sets and relations and to finite-dimensional vector spaces and linear maps. We prove the equivalence of the relational instantiation to the truth theoretic semantics of generalised quantifiers. The vector space instantiation formalises the statistical usages of words and enables us to, for the first time, reason about quantified phrases and sentences compositionally in distributional semantics.
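The truth-theoretic side of the equivalence above can be sketched directly: a Barwise-Cooper generalised quantifier maps a noun denotation (a subset of the universe) to a family of subsets, and a sentence "Q N VP" is true iff the VP denotation belongs to that family. The universe and word denotations below are illustrative, not from the paper.

```python
# A minimal sketch of Barwise-Cooper generalised quantifiers in their
# sets-and-relations (truth-theoretic) form.

U = {'felix', 'tom', 'rex', 'spot'}      # universe of entities
cats = {'felix', 'tom'}
dogs = {'rex', 'spot'}
sleeps = {'felix', 'tom', 'rex'}

def every(noun):  # "every N": true of the sets that contain N
    return lambda vp: noun <= vp

def some(noun):   # "some N": true of the sets that meet N
    return lambda vp: bool(noun & vp)

def no(noun):     # "no N": true of the sets disjoint from N
    return lambda vp: not (noun & vp)

print(every(cats)(sleeps))  # True: both cats sleep
print(every(dogs)(sleeps))  # False: spot does not sleep
print(some(dogs)(sleeps))   # True: rex sleeps
```

The bialgebra (copying/merging) structure in the categorical model is what lets this set-membership test be expressed diagrammatically and then transported to vector spaces.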
Classical Copying versus Quantum Entanglement in Natural Language: The Case of VP-ellipsis
In Proceedings CAPNS 2018, arXiv:1811.02701
Evaluating Composition Models for Verb Phrase Elliptical Sentence Embeddings
Ellipsis is a natural language phenomenon where part of a sentence is missing and its information must be recovered from its surrounding context, as in “Cats chase dogs and so do foxes.” Formal semantics has different methods for resolving ellipsis and recovering the missing information, but the problem has not been considered for distributional semantics, where words have vector embeddings and combinations thereof provide embeddings for sentences. In elliptical sentences these combinations go beyond linear, as copying of elided information is necessary. In this paper, we develop different models for embedding VP-elliptical sentences. We extend existing verb disambiguation and sentence similarity datasets to ones containing elliptical phrases and evaluate our models on these datasets for a variety of non-linear combinations and their linear counterparts. We compare results of these compositional models to state-of-the-art holistic sentence encoders. Our results show that non-linear addition and a non-linear tensor-based composition outperform the naive non-compositional baselines and the linear models, and that sentence encoders perform well on sentence similarity, but not on verb disambiguation.
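The contrast between linear and copying-based composition can be sketched in a few lines. The following is a toy illustration (not the paper's exact models): the elided verb phrase of “Cats chase dogs and so do foxes” is explicitly copied to both subjects, with elementwise multiplication standing in for a non-linear (Frobenius-style) copying operation. The vectors are random stand-ins for corpus-derived embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
cats, chase, dogs, foxes = (rng.normal(size=dim) for _ in range(4))

vp = chase * dogs                 # verb phrase "chase dogs"
linear = cats + vp + foxes        # linear model: the ellipsis is ignored
copying = cats * vp + foxes * vp  # non-linear model: the VP is copied to both subjects

print("linear  :", np.round(linear[:4], 3))
print("copying :", np.round(copying[:4], 3))
```

The copying model distributes the elided VP over both conjuncts, which is precisely the step a purely linear combination cannot express.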
Enhancing Personalised Recommendations with the Use of Multimodal Information
Whenever we watch a TV show or movie, we process a substantial amount of information that is conveyed to us via various multimedia mediums, in particular: visual, textual, and audio. These data signify distinctive properties that aid in creating a unique motion picture experience. In an effort not only to produce a more personalised recommender system, but also to tackle the problem of popularity bias, we develop a system that incorporates the use of multimodal information. Specifically, we investigate the correlation between features that are extracted using state-of-the-art techniques and deep learning models from visual characteristics, audio patterns and subtitles. The framework is evaluated on a dataset comprising 145 BBC TV programmes against genre and user baselines. We demonstrate that personalised recommendations can not only be improved with the use of multimodal information, but also outperform genre and user-based models in terms of diversity, whilst maintaining matching levels of accuracy.
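One simple way to combine per-modality features, sketched below purely for illustration (the names, dimensions, and fusion scheme are assumptions, not the paper's pipeline): late fusion by concatenating L2-normalised modality vectors, with candidates ranked by cosine similarity to a user profile averaged over watched programmes.

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse(visual, audio, subtitle):
    """Late fusion: concatenate L2-normalised per-modality feature vectors."""
    parts = [v / np.linalg.norm(v) for v in (visual, audio, subtitle)]
    return np.concatenate(parts)

# Synthetic stand-ins for extracted visual / audio / subtitle features.
programmes = {name: fuse(rng.normal(size=16), rng.normal(size=8), rng.normal(size=12))
              for name in ['doc_nature', 'drama_1', 'quiz_show', 'doc_history']}

watched = ['doc_nature']
profile = np.mean([programmes[p] for p in watched], axis=0)  # user profile

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = sorted((p for p in programmes if p not in watched),
                 key=lambda p: cosine(profile, programmes[p]), reverse=True)
print("recommendations:", ranking)
```

Normalising each modality before concatenation keeps any single high-dimensional modality from dominating the similarity score.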
Linguistic Matrix Theory
32 pages, 3 figures
Recent research in computational linguistics has developed algorithms which associate matrices with adjectives and verbs, based on the distribution of words in a corpus of text. These matrices are linear operators on a vector space of context words. They are used to construct the meaning of composite expressions from that of the elementary constituents, forming part of a compositional distributional approach to semantics. We propose a Matrix Theory approach to this data, based on permutation symmetry along with Gaussian weights and their perturbations. A simple Gaussian model is tested against word matrices created from a large corpus of text. We characterize the cubic and quartic departures from the model, which we propose, alongside the Gaussian parameters, as signatures for comparison of linguistic corpora. We propose that perturbed Gaussian models with permutation symmetry provide a promising framework for characterizing the nature of universality in the statistical properties of word matrices. The matrix theory framework developed here exploits the view of statistics as zero dimensional perturbative quantum field theory. It perceives language as a physical system realizing a universality class of matrix statistics characterized by permutation symmetry.
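The core statistical test can be sketched as follows. This is a minimal illustration (not the paper's permutation-invariant observables): pool matrix entries, fit the two Gaussian parameters (mean and variance), and use the standardised third and fourth cumulants as signatures of cubic and quartic departures from Gaussianity. The matrices here are synthetic stand-ins for corpus-derived word matrices.

```python
import numpy as np

rng = np.random.default_rng(2)

def entry_cumulants(mats):
    """Standardised 3rd and 4th cumulants (skewness, excess kurtosis)
    of the pooled matrix entries; both vanish for a Gaussian."""
    x = np.concatenate([m.ravel() for m in mats])
    x = (x - x.mean()) / x.std()
    return float(np.mean(x**3)), float(np.mean(x**4) - 3.0)

gaussian_mats = [rng.normal(size=(20, 20)) for _ in range(50)]
skewed_mats = [rng.exponential(size=(20, 20)) for _ in range(50)]  # non-Gaussian control

g_skew, g_kurt = entry_cumulants(gaussian_mats)
s_skew, s_kurt = entry_cumulants(skewed_mats)
print(f"gaussian ensemble : skew={g_skew:+.3f}  excess kurtosis={g_kurt:+.3f}")
print(f"skewed ensemble   : skew={s_skew:+.3f}  excess kurtosis={s_kurt:+.3f}")
```

For the Gaussian ensemble both signatures hover near zero, while the skewed control shows the large positive cumulants that, in the paper's framework, would be read as perturbations of the Gaussian matrix model.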