Neural Responding Machine for Short-Text Conversation
We propose Neural Responding Machine (NRM), a neural-network-based response generator for Short-Text Conversation. NRM adopts the general encoder-decoder framework: it formalizes response generation as a decoding process based on the latent representation of the input text, with both encoding and decoding realized by recurrent neural networks (RNNs). NRM is trained on a large amount of one-round conversation data collected from a microblogging service. An empirical study shows that NRM generates grammatically correct and content-wise appropriate responses to over 75% of the input texts, outperforming state-of-the-art methods in the same setting, including retrieval-based and SMT-based models.
Comment: accepted as a full paper at ACL 201
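The encoder-decoder dataflow the abstract describes can be sketched minimally as below. This is an illustrative toy with random, untrained weights and made-up dimensions (VOCAB, HIDDEN), not the NRM architecture itself; it only shows how an encoder RNN compresses the input into a latent vector that then conditions a decoder RNN.

```python
# Toy encoder-decoder sketch (assumed dimensions, random untrained weights).
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 20, 8  # hypothetical toy sizes

# Shared token embeddings and simple (vanilla) RNN parameters.
E = rng.normal(scale=0.1, size=(VOCAB, HIDDEN))
W_enc = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
U_enc = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_dec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
U_dec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
V_out = rng.normal(scale=0.1, size=(HIDDEN, VOCAB))

def encode(tokens):
    """Run the encoder RNN; the final hidden state is the latent summary."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        h = np.tanh(E[t] @ U_enc + h @ W_enc)
    return h

def decode(latent, max_len=5, bos=0):
    """Greedily emit tokens, conditioned on the latent representation."""
    h, tok, out = latent, bos, []
    for _ in range(max_len):
        h = np.tanh(E[tok] @ U_dec + h @ W_dec)
        tok = int(np.argmax(h @ V_out))  # most likely next token
        out.append(tok)
    return out

latent = encode([3, 7, 5])   # "encode" an input post into a latent vector
response = decode(latent)    # "decode" a response token sequence
print(len(response))         # -> 5
```

In the actual NRM, these weights are learned from the conversation data and the decoder attends to the encoder states; the sketch keeps only the encode-to-latent-then-decode skeleton.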
A Compression-Based Toolkit for Modelling and Processing Natural Language Text
A novel compression-based toolkit for modelling and processing natural language text is described. The design of the toolkit adopts an encoding perspective: applications are treated as problems of searching for the best encoding of different transformations of the source text into the target text. This paper describes the two-phase 'noiseless channel model' architecture that underpins the toolkit, which models text processing as lossless communication down a noise-free channel. The transformation and encoding performed in the first phase must be both lossless and reversible. The role of the second phase, verification and decoding, is to verify the correctness of the communication of the target text produced by the application. This paper argues that this encoding approach has several advantages over the decoding approach of the standard noisy channel model. The concepts abstracted by the toolkit's design are explained, together with details of the library calls. Pseudo-code is also given for the applications that the toolkit implements, including encoding, decoding, classification, training (model building), parallel sentence alignment, word segmentation and language segmentation. Experimental results, implementation details, memory usage and execution speeds are also discussed for these applications.
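The classification application mentioned above can be illustrated with a crude stand-in: the toolkit uses its own compression models, but the same idea — assign a sample to the class whose training text helps compress it most — can be sketched with off-the-shelf zlib. The corpora, labels, and cost heuristic below are assumptions of this sketch, not the toolkit's API.

```python
# Compression-based classification sketch: zlib as a rough proxy for the
# toolkit's own compression models (illustrative only).
import zlib

def compressed_size(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), 9))

def classify(sample: str, corpora: dict) -> str:
    # C(corpus + sample) - C(corpus) approximates how "cheap" the sample
    # is under that class's model; pick the class with the lowest cost.
    def cost(corpus: str) -> int:
        return compressed_size(corpus + sample) - compressed_size(corpus)
    return min(corpora, key=lambda label: cost(corpora[label]))

corpora = {
    "english": "the quick brown fox jumps over the lazy dog " * 20,
    "digits": "0123456789 9876543210 1357924680 " * 20,
}
print(classify("jumps over the dog", corpora))   # -> english
print(classify("24680 13579", corpora))          # -> digits
```

A sample sharing substrings with a class's corpus compresses cheaply against it, which is the intuition behind encoding-based classification.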
Meaning in Translation
Translation is a process rather than a product. The process of translation consists of decoding the source-language text to find its meaning and then encoding that meaning in the target-language text. Decoding and encoding meaning in translation is not a simple activity; there are many considerations to take into account. Meaning equivalence has to be maintained well, so decoding and encoding must be done properly. In finding the meaning of the source-language text, it is not enough to attend merely to the referential meaning; the connotative meaning plays an important role in arriving at the right meaning. Once the meaning of the source-language text has been found, maintaining meaning equivalence in the target language can still be difficult. Several problems can give rise to this difficulty: differences between language systems, differences between cultures, the multiple meanings carried by a single word, and the lack of generic-specific word relationships.
Long-distance quantum communication over noisy networks without long-time quantum memory
The problem of sharing entanglement over large distances is crucial for implementations of quantum cryptography. A possible scheme for long-distance entanglement sharing and quantum communication exploits networks whose nodes share Einstein-Podolsky-Rosen (EPR) pairs. In Perseguers et al. [Phys. Rev. A 78, 062324 (2008)] the authors put forward an important isomorphism between storing quantum information in D dimensions and transmitting quantum information in a (D+1)-dimensional network. We show that it is possible to obtain long-distance entanglement in a noisy two-dimensional (2D) network, even when taking into account that encoding and decoding of a state are exposed to error. For 3D networks we propose a simple encoding and decoding scheme based solely on syndrome measurements on the 2D Kitaev topological quantum memory. Our procedure constitutes an alternative scheme of state injection that can be used for universal quantum computation on the 2D Kitaev code. It is shown that the encoding scheme is equivalent to teleporting the state from a specific node into the whole two-dimensional network, through a virtual EPR pair existing within the rest of the network qubits. We present an analytic lower bound on the fidelity of the encoding and decoding procedure, using as our main tool a modified metric on the space-time lattice, which deviates from a taxicab metric at the first and last time slices.
Comment: 15 pages, 10 figures; title modified; appendix included in main text; section IV extended; minor mistakes removed
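For context, the taxicab metric that the abstract's modified metric deviates from assigns to two lattice points the sum of the absolute differences of their coordinates. With coordinates (x, y, t) on a 2D spatial lattice plus time (a labelling assumed here, not taken from the paper), it reads:

```latex
% Taxicab (Manhattan) metric on a 2D space-time lattice;
% coordinate labels are illustrative.
d\bigl((x_1, y_1, t_1),\,(x_2, y_2, t_2)\bigr)
  = |x_1 - x_2| + |y_1 - y_2| + |t_1 - t_2|
```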