Hybrid Approach to English-Hindi Name Entity Transliteration
Machine translation (MT) research in Indian languages is still in its
infancy, and little work has been done on the proper transliteration of named
entities in this domain. In this paper we address this issue. We use the
English-Hindi language pair for our experiments and adopt a hybrid approach:
first, a rule-based step extracts individual phonemes from each English word;
then a statistical step converts each English phoneme into its equivalent
Hindi phoneme and, in turn, the corresponding Hindi word. Through this
approach we attain 83.40% accuracy.
Comment: Proceedings of IEEE Students' Conference on Electrical, Electronics
and Computer Sciences 201
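The two-stage pipeline this abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the authors' system: the segmentation rules, the probability table, and the example mappings below are hypothetical stand-ins for the learned resources the paper would use.

# Stage 1: rule-based segmentation of an English word into phoneme-like
# units, longest match first. SEGMENT_RULES is a hypothetical placeholder.
SEGMENT_RULES = ["sh", "ch", "th", "aa", "ee", "oo"]

def segment(word):
    """Greedily split a word into phoneme-like units."""
    units, i = [], 0
    word = word.lower()
    while i < len(word):
        for rule in SEGMENT_RULES:
            if word.startswith(rule, i):
                units.append(rule)
                i += len(rule)
                break
        else:
            units.append(word[i])
            i += 1
    return units

# Stage 2: statistical mapping from English units to Hindi phonemes.
# The paper learns this mapping from data; this tiny table of
# P(hindi_unit | english_unit) is invented purely for illustration.
PHONEME_TABLE = {
    "sh": [("श", 0.9), ("ष", 0.1)],
    "a":  [("अ", 0.7), ("आ", 0.3)],
    "r":  [("र", 1.0)],
    "m":  [("म", 1.0)],
}

def transliterate(word):
    """Map each English unit to its most probable Hindi phoneme."""
    out = []
    for unit in segment(word):
        candidates = PHONEME_TABLE.get(unit, [])
        if candidates:
            out.append(max(candidates, key=lambda c: c[1])[0])
    return "".join(out)

# Naive concatenation; real Devanagari output also needs matra and
# conjunct handling, which this sketch deliberately omits.
print(transliterate("sharma"))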
Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning
Recently there has been a lot of interest in learning common representations
for multiple views of data. Typically, such common representations are learned
using a parallel corpus between the two views (say, 1M images and their English
captions). In this work, we address a real-world scenario where no direct
parallel data is available between two views of interest (say, V1 and V2)
but parallel data is available between each of these views and a pivot view
(V3). We propose a model for learning a common representation for V1, V2
and V3 using only the parallel data available between V1-V3 and V2-V3. The
proposed model is generic and even works when there are n views of interest
and only one pivot view which acts as a bridge between them. There are two
specific downstream applications that we focus on: (i) transfer learning
between languages L1, L2, ..., Ln using a pivot language L and (ii) cross
modal access between images and a language L1 using a pivot language L2.
Our model achieves state-of-the-art performance in multilingual document
classification on the publicly available multilingual TED corpus and promising
results in multilingual multimodal retrieval on a new dataset created and
released as a part of this work.
Comment: Published at NAACL-HLT 201
Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation
We introduce a model for constructing vector representations of words by
composing characters using bidirectional LSTMs. Relative to traditional word
representation models that have independent vectors for each word type, our
model requires only a single vector per character type and a fixed set of
parameters for the compositional model. Despite the compactness of this model
and, more importantly, the arbitrary nature of the form-function relationship
in language, our "composed" word representations yield state-of-the-art results
in language modeling and part-of-speech tagging. Benefits over traditional
baselines are particularly pronounced in morphologically rich languages (e.g.,
Turkish).
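The core construction, a word vector composed from character embeddings by a bidirectional LSTM, can be sketched as follows. The vocabulary size, the dimensions, and the choice to concatenate the final forward and backward hidden states are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

CHAR_VOCAB = 128          # e.g., ASCII code points (assumption)
CHAR_DIM, WORD_DIM = 32, 64

char_emb = nn.Embedding(CHAR_VOCAB, CHAR_DIM)
bilstm = nn.LSTM(CHAR_DIM, WORD_DIM // 2, bidirectional=True, batch_first=True)

def word_vector(word):
    """Compose a word representation from single-character embeddings."""
    ids = torch.tensor([[ord(c) % CHAR_VOCAB for c in word]])
    chars = char_emb(ids)                     # (1, len, CHAR_DIM)
    _, (h_n, _) = bilstm(chars)               # h_n: (2, 1, WORD_DIM // 2)
    return torch.cat([h_n[0, 0], h_n[1, 0]])  # (WORD_DIM,)

# Any word, including one never seen in training, gets a vector from the
# shared character parameters rather than a per-word lookup table.
print(word_vector("cats").shape)        # torch.Size([64])
print(word_vector("catenation").shape)  # same parameters, open vocabulary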
Learning Character-level Compositionality with Visual Features
Previous work has modeled the compositionality of words by creating
character-level models of meaning, reducing problems of sparsity for rare
words. However, in many writing systems compositionality has an effect even at
the character level: the meaning of a character is derived from the sum of its
parts. In this paper, we model this effect by creating embeddings for
characters based on their visual characteristics, creating an image for the
character and running it through a convolutional neural network to produce a
visual character embedding. Experiments on a text classification task
demonstrate that such a model allows for better processing of instances with rare
characters in languages such as Chinese, Japanese, and Korean. Additionally,
qualitative analyses demonstrate that our proposed model learns to focus on the
parts of characters that carry semantic content, resulting in embeddings that
are coherent in visual space.
Comment: Accepted to ACL 201
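The rendering-plus-CNN step this abstract describes can be sketched as below. The font path, canvas size, and CNN layout are illustrative assumptions, not the paper's architecture; the point is only that visually similar characters (e.g., those sharing a radical) can receive nearby embeddings through shared convolutional features.

import torch
import torch.nn as nn
from PIL import Image, ImageDraw, ImageFont

SIZE = 36  # render each character on a SIZE x SIZE canvas (assumption)

def render_char(ch, font_path="NotoSansCJK-Regular.ttc"):  # hypothetical font path
    """Rasterize a single character into a 1 x SIZE x SIZE tensor."""
    img = Image.new("L", (SIZE, SIZE), color=255)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, SIZE - 6)
    draw.text((3, 3), ch, fill=0, font=font)
    pixels = torch.tensor(list(img.getdata()), dtype=torch.float32)
    return pixels.view(1, SIZE, SIZE) / 255.0

cnn = nn.Sequential(  # tiny CNN producing a 64-dim visual embedding
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 64),
)

# Characters sharing the water radical can share visual features.
emb_river = cnn(render_char("河").unsqueeze(0))
emb_lake = cnn(render_char("湖").unsqueeze(0))
print(emb_river.shape)  # torch.Size([1, 64])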