Turing machines based on unsharp quantum logic
In this paper, we consider Turing machines based on unsharp quantum logic.
For a lattice-ordered quantum multiple-valued (MV) algebra E, we introduce
E-valued non-deterministic Turing machines (ENTMs) and E-valued deterministic
Turing machines (EDTMs). We discuss the E-valued recursively enumerable
languages obtained from width-first and depth-first recognition, finding that
width-first recognition is, in general, no more powerful than depth-first
recognition; the two coincide only when the underlying truth-value lattice E
degenerates into an MV algebra. We also study variants of ENTMs. ENTMs with a classical
initial state and ENTMs with a classical final state have the same power as
ENTMs with quantum initial and final states. In particular, the latter can be
simulated by ENTMs with classical transitions under a certain condition. Using
these findings, we prove that ENTMs and EDTMs are not equivalent; indeed, ENTMs
are strictly more powerful than EDTMs, a notable difference from classical
Turing machines.
Comment: In Proceedings QPL 2011, arXiv:1210.029
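For readers unfamiliar with the algebraic setting, a textbook example (not taken from the paper itself) is the standard MV algebra on the unit interval with Łukasiewicz operations; the quantum (QMV) generalization referred to above weakens some of its equational laws:

```latex
% Standard MV algebra on [0,1] with Lukasiewicz operations -- a
% textbook example of a truth-value structure E, not a definition
% drawn from the paper.
\[
  E = [0,1], \qquad x \oplus y = \min(1,\, x + y), \qquad \neg x = 1 - x,
\]
\[
  x \vee y = \max(x, y), \qquad x \wedge y = \min(x, y), \qquad
  x \otimes y = \neg(\neg x \oplus \neg y) = \max(0,\, x + y - 1).
\]
```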
Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation
Natural language generation (NLG) is a critical component of spoken dialogue
systems and can be divided into two phases: (1) sentence planning, which decides
the overall sentence structure, and (2) surface realization, which determines
specific word forms and flattens the sentence structure into a string. With the rise
of deep learning, most modern NLG models are based on a sequence-to-sequence
(seq2seq) model, which basically contains an encoder-decoder structure; these
NLG models generate sentences from scratch by jointly optimizing sentence
planning and surface realization. However, such a simple encoder-decoder
architecture often fails to generate long and complex sentences, because the
decoder has difficulty learning all of the required grammar and diction. This
paper introduces an NLG model with a hierarchical attentional decoder, where
the hierarchy focuses on leveraging linguistic knowledge in a specific order.
The experiments show that the proposed method significantly outperforms the
traditional seq2seq model with a smaller model size, and the design of the
hierarchical attentional decoder can be applied to various NLG systems.
Furthermore, different generation strategies based on linguistic patterns are
investigated and analyzed in order to guide future NLG research.
Comment: accepted by the 7th IEEE Workshop on Spoken Language Technology (SLT 2018). arXiv admin note: text overlap with arXiv:1808.0274
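As a rough illustration of the hierarchical-decoder idea, the following PyTorch sketch stacks decoding stages so that each stage attends over the outputs of the previous one. This is a minimal sketch under assumed dimensions, stage count, and attention mechanism, not the authors' SLT 2018 architecture:

```python
# A minimal sketch of a hierarchical attentional decoder: each stage
# is a GRU that attends over the previous stage's hidden states, so
# later stages can refine the structure produced earlier. All sizes
# (hidden=128, stages=3, vocab=1000) are illustrative assumptions.
import torch
import torch.nn as nn

class AttnStage(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, x, memory):
        # Attend over the previous stage's outputs, then recur over time.
        ctx, _ = self.attn(x, memory, memory)
        out, _ = self.gru(x + ctx)
        return out

class HierarchicalDecoder(nn.Module):
    def __init__(self, vocab=1000, hidden=128, stages=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.stages = nn.ModuleList([AttnStage(hidden) for _ in range(stages)])
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens, encoder_states):
        x = self.embed(tokens)
        memory = encoder_states      # encoder output, shape (B, S, H)
        for stage in self.stages:    # each stage attends to the one before it
            x = stage(x, memory)
            memory = x
        return self.out(x)           # per-token vocabulary logits

# Smoke test with random data.
dec = HierarchicalDecoder()
logits = dec(torch.randint(0, 1000, (2, 7)), torch.randn(2, 5, 128))
print(logits.shape)  # torch.Size([2, 7, 1000])
```

Each stage can be read as handling one linguistic level (e.g., content words before function words), with later stages refining the partial representation produced earlier; the ordering of those levels is the design dimension the paper investigates.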