Probabilistic Programming Concepts
A multitude of different probabilistic programming languages exists today,
all extending a traditional programming language with primitives to support
modeling of complex, structured probability distributions. Each of these
languages employs its own probabilistic primitives, and comes with a particular
syntax, semantics and inference procedure. This makes it hard to understand the
underlying programming concepts and to appreciate the differences between the
languages. To obtain a better understanding of probabilistic
programming, we identify a number of core programming concepts underlying the
primitives used by various probabilistic languages, discuss the execution
mechanisms that they require and use these to position state-of-the-art
probabilistic languages and their implementation. While doing so, we focus on
probabilistic extensions of logic programming languages such as Prolog, which
have been developed for more than 20 years.
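As a minimal illustration of the kind of primitive the abstract describes (a plain-Python sketch, not taken from the paper; the alarm model, its probabilities, and all function names are assumptions chosen for illustration), a probabilistic "fact" is an independent random choice, a rule combines facts deterministically, and inference answers queries over the induced distribution:

```python
import random

# Sketch of a probabilistic-logic-style program: probabilistic "facts"
# are independent random choices; a deterministic rule combines them.
def sample_world():
    burglary = random.random() < 0.1    # plays the role of a fact 0.1::burglary.
    earthquake = random.random() < 0.2  # plays the role of a fact 0.2::earthquake.
    alarm = burglary or earthquake      # rule: alarm :- burglary ; earthquake.
    return burglary, alarm

def prob_burglary_given_alarm(n=100_000):
    # Rejection sampling: estimate P(burglary | alarm) from sampled worlds.
    hits = total = 0
    for _ in range(n):
        burglary, alarm = sample_world()
        if alarm:
            total += 1
            hits += burglary
    return hits / total if total else float("nan")

print(prob_burglary_given_alarm())  # approx. 0.1 / 0.28 = 0.357
```

Dedicated probabilistic logic languages express the same model declaratively and replace the naive sampler with their own inference procedure; the sketch only shows the shared concept of a probabilistic choice plus logical consequence.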
Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature
Over the past 50 years many have debated what representation should be used
to capture the meaning of natural language utterances. Recently, new needs for
such representations have been raised in research. Here I survey some of the
interesting representations suggested to address these new needs.
A Fully Attention-Based Information Retriever
Recurrent neural networks are now the state-of-the-art in natural language
processing because they can build rich contextual representations and process
texts of arbitrary length. However, recent developments on attention mechanisms
have equipped feedforward networks with similar capabilities, enabling faster
computation because more of the operations can be parallelized. We explore this
new type of architecture in the domain of
question-answering and propose a novel approach that we call Fully Attention
Based Information Retriever (FABIR). We show that FABIR achieves competitive
results in the Stanford Question Answering Dataset (SQuAD) while having fewer
parameters and being faster at both learning and inference than rival methods.
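As a hedged illustration of the attention building block the abstract relies on (a NumPy sketch; FABIR's actual architecture is not reproduced here, and the function name, shapes, and toy data below are assumptions), scaled dot-product attention lets a feedforward model relate every token to every other token in parallel, rather than stepping through the sequence recurrently:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Each output row is a weighted sum over all value rows, so the whole
    sequence is processed in one parallel matrix operation.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

# Toy usage: self-attention over 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```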