Sanskrit Sandhi Splitting using seq2(seq)^2
In Sanskrit, small words (morphemes) are combined to form compound words
through a process known as Sandhi. Sandhi splitting is the process of splitting
a given compound word into its constituent morphemes. Although rules governing
word splitting exist in the language, identifying the location of the splits in
a compound word is highly challenging. Existing Sandhi splitting systems
incorporate these pre-defined splitting rules, yet they have low accuracy
because the same compound word can be broken down in multiple ways, each
yielding a syntactically correct split.
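The ambiguity described above can be illustrated with a toy sketch: reversing even a single vowel-sandhi rule (a/ā + a/ā → ā) already produces several plausible splits for one compound. The rule table, helper name, and example word below are simplified assumptions for illustration, not the paper's method.

```python
# Toy illustration of reversing one vowel-sandhi rule (a/ā + a/ā -> ā).
# Real Sandhi involves many more rules and context-sensitive choices;
# this sketch only shows why rule reversal alone is ambiguous.

# Map a surface character to the (left-final, right-initial) pairs it may replace.
REVERSE_RULES = {
    "ā": [("a", "a"), ("a", "ā"), ("ā", "a"), ("ā", "ā")],
}

def candidate_splits(compound):
    """Enumerate possible (left, right) morpheme pairs at each split point."""
    splits = []
    for i, ch in enumerate(compound):
        for left_end, right_start in REVERSE_RULES.get(ch, []):
            left = compound[:i] + left_end
            right = right_start + compound[i + 1:]
            splits.append((left, right))
    return splits

# "devālaya" (temple) = "deva" (god) + "ālaya" (abode), via a + ā -> ā.
print(candidate_splits("devālaya"))
# -> [('deva', 'alaya'), ('deva', 'ālaya'), ('devā', 'alaya'), ('devā', 'ālaya')]
```

All four candidates are produced by the same rule, but only one pair corresponds to attested words; choosing among such candidates is exactly what a learned model must do.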
In this research, we propose a novel deep learning architecture called Double
Decoder RNN (DD-RNN), which (i) predicts the location of the split(s) with 95%
accuracy, and (ii) predicts the constituent words (learning the Sandhi
splitting rules) with 79.5% accuracy, outperforming the state of the art by 20%.
Additionally, we demonstrate the generalization capability of our deep learning
model by reporting competitive results on Chinese word segmentation.

Comment: Accepted in EMNLP 201
Linguistically-Informed Neural Architectures for Lexical, Syntactic and Semantic Tasks in Sanskrit
The primary focus of this thesis is to make Sanskrit manuscripts more
accessible to the end-users through natural language technologies. The
morphological richness, compounding, free word order, and low-resource
nature of Sanskrit pose significant challenges for developing deep learning
solutions. We identify four fundamental tasks, which are crucial for developing
a robust NLP technology for Sanskrit: word segmentation, dependency parsing,
compound type identification, and poetry analysis. The first task, Sanskrit
Word Segmentation (SWS), is a fundamental text processing step for other
downstream applications. However, it is challenging due to the sandhi
phenomenon that modifies characters at word boundaries. Similarly, the existing
dependency parsing approaches struggle with morphologically rich and
low-resource languages like Sanskrit. Compound type identification is also
challenging for Sanskrit due to the context-sensitive semantic relation between
components. All these challenges result in sub-optimal performance in NLP
applications like question answering and machine translation. Finally, Sanskrit
poetry has not been extensively studied in computational linguistics.
While addressing these challenges, this thesis makes various contributions:
(1) The thesis proposes linguistically-informed neural architectures for these
tasks. (2) We showcase the interpretability and multilingual extension of the
proposed systems. (3) Our proposed systems report state-of-the-art performance.
(4) Finally, we present a neural toolkit named SanskritShala, a web-based
application that provides real-time analysis of input for various NLP tasks.
Overall, this thesis contributes to making Sanskrit manuscripts more accessible
by developing robust NLP technology and releasing various resources, datasets,
and a web-based toolkit.

Comment: Ph.D. dissertation