A categorical semantics for causal structure
We present a categorical construction for modelling causal structures within
a general class of process theories that include the theory of classical
probabilistic processes as well as quantum theory. Unlike prior constructions
within categorical quantum mechanics, the objects of this theory encode
fine-grained causal relationships between subsystems and give a new method for
expressing and deriving consequences for a broad class of causal structures. We
show that this framework enables one to define families of processes which are
consistent with arbitrary acyclic causal orderings. In particular, one can
define one-way signalling (a.k.a. semi-causal) processes, non-signalling
processes, and quantum n-combs. Furthermore, our framework is general enough
to accommodate recently-proposed generalisations of classical and quantum
theory where processes only need to have a fixed causal ordering locally, but
globally allow indefinite causal ordering.
To illustrate this point, we show that certain processes of this kind, such
as the quantum switch, the process matrices of Oreshkov, Costa, and Brukner,
and a classical three-party example due to Baumeler, Feix, and Wolf are all
instances of a certain family of processes in
the appropriate category of higher-order causal processes. After defining these
families of causal structures within our framework, we give derivations of
their operational behaviour using simple, diagrammatic axioms.
Comment: Extended version of a LICS 2017 paper with the same title.
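As a hedged illustration of one notion from the abstract above (not the paper's categorical construction itself): for classical probabilistic processes, a bipartite process P(a,b|x,y) is non-signalling precisely when Alice's output marginal does not depend on Bob's input y, and vice versa. A minimal numerical check, with toy channels of my own choosing:

```python
import numpy as np

def is_nonsignalling(P, tol=1e-9):
    """P has shape (A, B, X, Y): P[a, b, x, y] = P(a, b | x, y)."""
    pa = P.sum(axis=1)  # shape (A, X, Y): Alice's output marginal
    pb = P.sum(axis=0)  # shape (B, X, Y): Bob's output marginal
    a_ok = np.allclose(pa, pa[:, :, :1], atol=tol)  # independent of y
    b_ok = np.allclose(pb, pb[:, :1, :], atol=tol)  # independent of x
    return a_ok and b_ok

# A product of two local channels is trivially non-signalling:
rng = np.random.default_rng(0)
ca = rng.dirichlet(np.ones(2), size=2)  # P(a|x), shape (X=2, A=2)
cb = rng.dirichlet(np.ones(2), size=2)  # P(b|y), shape (Y=2, B=2)
P = np.einsum('xa,yb->abxy', ca, cb)
print(is_nonsignalling(P))  # True
```

One-way signalling (semi-causal) processes would relax exactly one of the two marginal conditions, which is the fine-grained distinction the paper's objects are built to encode.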
Monoidal computer III: A coalgebraic view of computability and complexity
Monoidal computer is a categorical model of intensional computation, where
many different programs correspond to the same input-output behavior. The
upshot of yet another model of computation is that a categorical formalism
should provide a much-needed high-level language for the theory of computation,
flexible enough to allow abstracting away the low level implementation details
when they are irrelevant, or taking them into account when they are genuinely
needed. A salient feature of the approach through monoidal categories is the
formal graphical language of string diagrams, which supports visual reasoning
about programs and computations.
In the present paper, we provide a coalgebraic characterization of monoidal
computer. It turns out that the availability of interpreters and specializers,
that make a monoidal category into a monoidal computer, is equivalent to the
existence of a *universal state space*, that carries a weakly final state
machine for any pair of input and output types. Being able to program state
machines in monoidal computers allows us to represent Turing machines, to
capture their execution, and to count resources such as their steps and the memory cells
that they use. The coalgebraic view of monoidal computer thus provides a
convenient diagrammatic language for studying computability and complexity.
Comment: 34 pages, 24 figures; in this version: added the Appendix.
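A hedged toy version of the "universal state space" statement, with Python standing in for the categorical setting: a Mealy-style machine on inputs A and outputs B is a coalgebra step : S -> (A -> (B, S)), and a universal state space is one machine into which every such (S, step) embeds. The encoding below (states as code/state pairs, the `counter` example, and all names) is my own illustration, not the paper's construction:

```python
from typing import Callable, Tuple

# A machine is a step function (s, a) -> (b, s').
Step = Callable[[object, object], Tuple[object, object]]

def universal_step(u, a):
    """One machine that simulates them all: its states are
    (stored step function, current state) pairs."""
    step, s = u
    b, s2 = step(s, a)
    return b, (step, s2)

# Example machine embedded into the universal one:
# a running-total counter with S = A = B = int.
def counter(s, a):
    return s + a, s + a

u = (counter, 0)
outs = []
for a in [1, 2, 3]:
    b, u = universal_step(u, a)
    outs.append(b)
print(outs)  # [1, 3, 6]
```

The interpreter/specializer structure of a monoidal computer corresponds, in this miniature, to storing the machine's code inside the universal state and running it step by step, which is what lets one count steps and memory cells.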
Lexical typology through similarity semantics: Toward a semantic map of motion verbs
This paper discusses a multidimensional probabilistic semantic map of lexical motion verb stems based on data collected from parallel texts (viz. translations of the Gospel according to Mark) for 100 languages from all continents. The crosslinguistic diversity of lexical semantics in motion verbs is illustrated in detail for the domain of `go', `come', and `arrive' type contexts. It is argued that the theoretical bases underlying probabilistic semantic maps from exemplar data are the isomorphism hypothesis (given any two meanings and their corresponding forms in any particular language, more similar meanings are more likely to be expressed by the same form in any language), similarity semantics (similarity is more basic than identity), and exemplar semantics (exemplar meaning is more fundamental than abstract concepts).
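The isomorphism hypothesis can be made concrete with a small sketch (toy data of my own invention, not the paper's 100-language sample): two exemplar contexts count as semantically close when many languages colexify them, i.e. use the same verb form for both, and the resulting similarity matrix is the input to a probabilistic semantic map:

```python
import numpy as np

# Rows = languages, columns = exemplar contexts; entries are form labels.
# Toy data loosely mimicking GO/COME/ARRIVE contexts (illustrative only).
forms = np.array([
    # ctx:  go-1    go-2    come-1   come-2   arrive
    ["ir",    "ir",    "venir", "venir", "llegar"],   # Spanish-like
    ["go",    "go",    "come",  "come",  "arrive"],   # English-like
    ["aller", "aller", "venir", "venir", "arriver"],  # French-like
    ["iku",   "iku",   "kuru",  "kuru",  "tsuku"],    # Japanese-like
])

n_ctx = forms.shape[1]
# sim[i, j] = proportion of languages using one form for contexts i and j
sim = np.array([[np.mean(forms[:, i] == forms[:, j]) for j in range(n_ctx)]
                for i in range(n_ctx)])
print(sim.round(2))
```

Feeding such a matrix into a dimensionality-reduction method (e.g. multidimensional scaling) yields the spatial "map" in which colexified exemplars cluster together.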
Conditional Random Field Autoencoders for Unsupervised Structured Prediction
We introduce a framework for unsupervised learning of structured predictors
with overlapping, global features. Each input's latent representation is
predicted conditional on the observable data using a feature-rich conditional
random field. Then a reconstruction of the input is (re)generated, conditional
on the latent structure, using models for which maximum likelihood estimation
has a closed-form. Our autoencoder formulation enables efficient learning
without making unrealistic independence assumptions or restricting the kinds of
features that can be used. We illustrate insightful connections to traditional
autoencoders, posterior regularization and multi-view learning. We show
competitive results with instantiations of the model for two canonical NLP
tasks: part-of-speech induction and bitext word alignment, and show that
training our model can be substantially more efficient than comparable
feature-rich baselines.
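The two-stage shape of the model (a feature-rich encoder predicting the latent label, then a reconstruction whose maximum-likelihood estimate is closed-form) can be sketched in miniature. This toy works per token rather than with a structured CRF over sequences, holds the encoder weights fixed, and all names are illustrative, so it is a sketch of the decomposition only, not the paper's training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 6, 2                       # vocabulary size, number of latent labels
data = rng.integers(0, V, 200)    # toy token stream

W = rng.normal(size=(V, K)) * 0.1       # encoder scores, one per (token, label)
theta = np.full((K, V), 1.0 / V)        # decoder p(reconstruction | label)

for _ in range(20):
    # Encoder posterior: q(y|x) proportional to exp(W[x, y]) * p(x_hat = x | y)
    logits = W[data] + np.log(theta[:, data].T)
    q = np.exp(logits - logits.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)
    # Decoder update: closed-form multinomial MLE from expected counts;
    # this closed form is what keeps the reconstruction step cheap.
    counts = np.zeros((K, V))
    np.add.at(counts.T, data, q)        # counts[y, x] += q(y | token x)
    theta = (counts + 1e-6) / (counts + 1e-6).sum(axis=1, keepdims=True)

print(theta.sum(axis=1))  # each decoder row remains a distribution
```

In the full model the encoder is a conditional random field with overlapping global features trained by gradient methods; only the reconstruction side needs the closed-form MLE, which is the asymmetry the autoencoder formulation exploits.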