Holographic Reduced Representations for Oscillator Recall: A Model of Phonological Production
This paper describes a new computational
model of phonological production, Holographic
Reduced Representations for Oscillator Recall, or HORROR. HORROR's
architecture accounts
for phonological speech error patterns by combining
the hierarchical oscillating context signal of the OSCAR serial-order
model~\cite{VousdenEtAl:2000,BrownEtAl:2000} with a holographic associative
memory~\cite{Plate:1995}.
The resulting model is novel in a number of
ways.
Most importantly, all of the noise needed to generate errors is intrinsic
to the system, instead of being generated by an external process. The
model features
fully-distributed hierarchical phoneme
representations and a single distributed associative memory.
Using
fewer parameters and a more parsimonious design than OSCAR, HORROR accounts
for error type proportions, the syllable-position constraint, and other
constraints seen in the human speech error data.
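The binding scheme the model inherits from Plate's holographic memory can be sketched directly. Below is a minimal NumPy sketch, assuming random unit-variance vectors stand in for the oscillator context and phoneme representations (the paper's actual hierarchy and parameters are not reproduced here): circular convolution binds each context-phoneme pair into a single superposed trace, and circular correlation retrieves a noisy phoneme whose crosstalk with the other stored pairs is exactly the kind of intrinsic noise the model exploits.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution (binding), computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation (approximate unbinding)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

d = 512                                    # illustrative dimensionality
rng = np.random.default_rng(0)
phonemes = rng.normal(0, 1 / np.sqrt(d), size=(5, d))  # random phoneme vectors
contexts = rng.normal(0, 1 / np.sqrt(d), size=(5, d))  # stand-ins for the oscillator context

# One distributed trace holds every (context, phoneme) pair superposed.
trace = sum(cconv(c, p) for c, p in zip(contexts, phonemes))

# Recall with context 2: crosstalk from the other pairs is the intrinsic noise.
noisy = ccorr(contexts[2], trace)
best = np.argmax(phonemes @ noisy)         # cleanup by nearest phoneme
print(best)                                # 2 with high probability; lowering d makes crosstalk errors likelier
```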
Towards a Finite-N Hologram
We suggest that holographic tensor models related to SYK are viable
candidates for exactly (i.e., non-perturbatively in $N$) solvable holographic
theories. The reason is that in these theories, the Hilbert space is a spinor
representation, and the Hamiltonian (at least in some classes) can be arranged
to commute with the Clifford level. This makes the theory solvable level by
level. We demonstrate this for the specific case of the uncolored $O(n)^3$
tensor model with arbitrary even $n$, and reduce the question of determining
the spectrum and eigenstates to an algebraic equation relating Young tableaux.
Solving this reduced problem is conceptually trivial and amounts to matching
the representations on either side, as we demonstrate explicitly at low levels.
At high levels, representations become bigger, but should still be tractable.
None of our arguments require any supersymmetry.
Comment: 16 pages
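In our own schematic notation (the paper's precise Clifford-level construction is more detailed, and the operator $L$ below merely stands in for the Clifford level), the solvability claim rests on a standard block-diagonalization fact:

```latex
% Hilbert space graded by the Clifford level L (schematic notation)
\mathcal{H} = \bigoplus_{k} \mathcal{H}_k, \qquad
L\,\psi = k\,\psi \quad \text{for } \psi \in \mathcal{H}_k.

% [H, L] = 0 implies H preserves each level subspace, so the spectrum
% can be determined one finite block at a time:
[H, L] = 0 \;\Longrightarrow\; H = \bigoplus_{k} H\big|_{\mathcal{H}_k}.
```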
Holographic Embeddings of Knowledge Graphs
Learning embeddings of entities and relations is an efficient and versatile
method to perform machine learning on relational data such as knowledge graphs.
In this work, we propose holographic embeddings (HolE) to learn compositional
vector space representations of entire knowledge graphs. The proposed method is
related to holographic models of associative memory in that it employs circular
correlation to create compositional representations. By using correlation as
the compositional operator, HolE can capture rich interactions but
simultaneously remains efficient to compute, easy to train, and scalable to
very large datasets. In extensive experiments we show that holographic
embeddings are able to outperform state-of-the-art methods for link prediction
in knowledge graphs and relational learning benchmark datasets.
Comment: To appear in AAAI-16
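A minimal sketch of the scoring function as the abstract describes it: circular correlation composes the subject and object embeddings, and the relation embedding scores the result through a sigmoid. The FFT identity and the dimensionality below are illustrative assumptions; only the scoring idea is sketched, not the paper's training setup.

```python
import numpy as np

def ccorr(a, b):
    """Circular correlation via FFT: [a * b]_k = sum_i a_i b_{(k+i) mod d}."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(e_s, e_o, r):
    """Plausibility of a triple: sigmoid(r . (e_s ccorr e_o))."""
    return 1.0 / (1.0 + np.exp(-r @ ccorr(e_s, e_o)))

d = 100                                    # illustrative embedding size
rng = np.random.default_rng(0)
e_s, e_o, r = rng.normal(size=(3, d))      # subject, object, relation embeddings
print(hole_score(e_s, e_o, r))             # score in (0, 1)
```

Because circular correlation is non-commutative, the same $d$-dimensional relation vector can score asymmetric relations, which is where the efficiency advantage over a full tensor layer comes from.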
Dynamics for holographic codes
We describe how to introduce dynamics for the holographic states and codes
introduced by Pastawski, Yoshida, Harlow and Preskill. This task requires the
definition of a continuous limit of the kinematical Hilbert space which we
argue may be achieved via the semicontinuous limit of Jones. Dynamics is then
introduced by building a unitary representation of a group known as Thompson's
group T, which is closely related to the conformal group in 1+1 dimensions. The
bulk Hilbert space is realised as a special subspace of the semicontinuous
limit Hilbert space spanned by a class of distinguished states which can be
assigned a discrete bulk geometry. The analogue of the group of large bulk
diffeomorphisms is given by a unitary representation of the Ptolemy group Pt
on the bulk Hilbert space, thus realising a toy model of the AdS/CFT
correspondence which we call the Pt/T correspondence.
Comment: 40 pages (revised version submitted to journal). See video of related
talk: https://www.youtube.com/watch?v=xc2KIa2LDF
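A rough schematic of the semicontinuous-limit construction, in our own paraphrase (the paper's definitions via Jones' work are more careful): finite binary trees index a directed system of Hilbert spaces, refinement gives the connecting isometries, and Thompson's group T acts on the direct limit by pairs of trees.

```latex
% Directed system indexed by finite binary trees t, with isometries
% induced by refining a leaf; the kinematical space is the direct limit.
\mathcal{H}_{t} \xrightarrow{\;\iota_{t,t'}\;} \mathcal{H}_{t'} \quad (t \le t'),
\qquad
\mathcal{H}_{\mathrm{kin}} \;=\; \varinjlim_{t} \mathcal{H}_{t}.

% Elements of Thompson's group T are represented by pairs of trees with
% equal leaf number, giving unitaries that play the role of conformal maps.
```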
Learning to Rank Question Answer Pairs with Holographic Dual LSTM Architecture
We describe a new deep learning architecture for learning to rank question
answer pairs. Our approach extends the long short-term memory (LSTM) network
with holographic composition to model the relationship between question and
answer representations. As opposed to the neural tensor layer that has been
adopted recently, holographic composition provides the benefits of a scalable
and rich representation-learning approach without incurring huge parameter
costs. Overall, we present Holographic Dual LSTM (HD-LSTM), a unified
architecture for both deep sentence modeling and semantic matching.
Essentially, our model is trained end-to-end whereby the parameters of the LSTM
are optimized in a way that best explains the correlation between question and
answer representations. In addition, our proposed deep learning architecture
requires no extensive feature engineering. Via extensive experiments, we show
that HD-LSTM outperforms many other neural architectures on two popular
benchmark QA datasets. Empirical studies confirm the effectiveness of
holographic composition over the neural tensor layer.
Comment: SIGIR 2017 Full Paper
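A minimal sketch of the composition step only: given question and answer vectors (which HD-LSTM would produce with its two LSTMs; random vectors stand in here), circular correlation yields a d-dimensional interaction vector with no extra parameters, whereas a neural tensor layer would need on the order of d^2 weights per slice. The variable names and the final scoring layer are illustrative assumptions.

```python
import numpy as np

def ccorr(q, a):
    """Circular correlation: d-dim interaction vector, zero extra parameters."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(q)) * np.fft.fft(a)))

d = 128
rng = np.random.default_rng(0)
q_vec = rng.normal(size=d)     # stand-in for the question LSTM's final state
a_vec = rng.normal(size=d)     # stand-in for the answer LSTM's final state

w = rng.normal(size=d)         # scoring weights (learned end-to-end in the paper)
score = 1.0 / (1.0 + np.exp(-w @ ccorr(q_vec, a_vec)))
print(score)                   # match probability used to rank answer candidates
```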