Morphological annotation of Korean with Directly Maintainable Resources
This article describes an exclusively resource-based method of morphological
annotation of written Korean text. Korean is an agglutinative language. Our
annotator is designed to process text before the operation of a syntactic
parser. In its present state, it annotates one-stem words only. The output is a
graph of morphemes annotated with accurate linguistic information. The
granularity of the tagset is 3 to 5 times finer than that of usual tagsets. A
comparison with a reference annotated corpus showed that it achieves 89% recall
without any corpus training. The language resources used by the system are
lexicons of stems, transducers of suffixes and transducers of generation of
allomorphs. All can be easily updated, which allows users to control the
evolution of the system's performance. It has been claimed that
morphological annotation of Korean text could only be performed by a
morphological analysis module accessing a lexicon of morphemes. We show that it
can also be performed directly with a lexicon of words and without applying
morphological rules at annotation time, which speeds up annotation to 1,210
words/s. The lexicon of words is obtained from the maintainable language
resources through a fully automated compilation process.
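The compile-then-look-up idea in this abstract can be illustrated with a toy sketch (the stems, suffixes and tag names below are hypothetical, and allomorph generation is omitted): suffix combinations are enumerated offline into a flat word lexicon, so annotation time is a pure dictionary lookup.

```python
# Toy sketch (not the paper's actual resources): annotation by direct
# lookup in a precompiled word lexicon, instead of running
# morphological rules at annotation time.

# Maintainable resources: a stem lexicon and a suffix table standing in
# for the suffix transducers described in the abstract.
stems = {"mek": "V"}                       # hypothetical verb stem
suffixes = {"ess": "PAST", "ta": "DECL"}   # hypothetical suffixes

def compile_word_lexicon(stems, suffixes):
    """Offline compilation: enumerate stem+suffix combinations into a
    flat word -> analysis dictionary (allomorphy omitted in this toy)."""
    lexicon = {}
    for stem, pos in stems.items():
        for suffix, tag in suffixes.items():
            lexicon[stem + suffix] = f"{stem}/{pos}+{suffix}/{tag}"
    return lexicon

def annotate(text, lexicon):
    """Annotation time is a pure dictionary lookup per word."""
    return [lexicon.get(w, w + "/UNKNOWN") for w in text.split()]

lexicon = compile_word_lexicon(stems, suffixes)
print(annotate("mekess mekta", lexicon))
# → ['mek/V+ess/PAST', 'mek/V+ta/DECL']
```

The trade-off sketched here is the one the abstract names: a larger precompiled lexicon in exchange for avoiding morphological analysis at annotation time.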
Letter to Sound Rules for Accented Lexicon Compression
This paper presents trainable methods for generating letter to sound rules
from a given lexicon for use in pronouncing out-of-vocabulary words and as a
method for lexicon compression.
As the relationship between a string of letters and a string of phonemes
representing its pronunciation for many languages is not trivial, we discuss
two alignment procedures, one fully automatic and one hand-seeded, which
produce reasonable alignments of letters to phones.
Top Down Induction Tree models are trained on the aligned entries. We show
that combined phoneme/stress prediction outperforms separate prediction
processes, and improves further when the model also includes the last phonemes
transcribed and part-of-speech information. For the lexicons we have tested,
our models have a word accuracy (including stress) of 78% for OALD, 62% for CMU
and 94% for BRULEX. The extremely high scores on the training sets allow
substantial size reductions (more than 1/20).
WWW site: http://tcts.fpms.ac.be/synthesis/mbrdico
Comment: 4 pages, 1 figure
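The fully automatic alignment procedure mentioned above can be sketched as a standard dynamic-programming alignment in which each letter maps either to one phone or to nothing (a "silent" letter). This is a minimal illustration with made-up costs, not the paper's actual scoring:

```python
# Minimal sketch of automatic letter-to-phone alignment by dynamic
# programming: each letter aligns to exactly one phone or to epsilon
# (a silent letter). The unit costs here are illustrative only.

def align(letters, phones):
    n, m = len(letters), len(phones)
    INF = float("inf")
    # cost[i][j] = cheapest alignment of first i letters to first j phones
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(1, n + 1):
        for j in range(m + 1):
            best = cost[i - 1][j] + 1           # letter i is silent
            if j > 0:                           # letter i emits phone j
                sub = 0 if letters[i - 1] == phones[j - 1] else 1
                best = min(best, cost[i - 1][j - 1] + sub)
            cost[i][j] = best
    # backtrack, preferring letter-to-phone links over silent letters
    pairs, i, j = [], n, m
    while i > 0:
        if j > 0 and cost[i][j] == cost[i - 1][j - 1] + (
            0 if letters[i - 1] == phones[j - 1] else 1
        ):
            pairs.append((letters[i - 1], phones[j - 1]))
            j -= 1
        else:
            pairs.append((letters[i - 1], ""))  # silent letter
        i -= 1
    pairs.reverse()
    return pairs

print(align(list("cake"), ["k", "ey", "k"]))
# → [('c', 'k'), ('a', 'ey'), ('k', 'k'), ('e', '')]
```

Once every lexicon entry is aligned this way, each (letter, context) pair becomes a classified training instance for the decision-tree models.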
A finite-state approach to Arabic broken noun morphology
In this paper, a finite-state computational approach to Arabic broken plural noun morphology is introduced. The paper considers the derivational aspect of the approach, and how generalizations about dependencies in the broken plural noun derivational system of Arabic are captured and handled computationally in this finite-state approach. The approach will be implemented using the Xerox finite-state tools.
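The root-and-pattern interdigitation that underlies broken plural derivation can be illustrated with a toy function (ASCII vowels and a deliberately simplified template notation; the paper encodes this with Xerox finite-state machinery, not Python):

```python
# Toy illustration of root-and-pattern (templatic) morphology: the
# consonantal root fills the C slots of a vocalic template. Real broken
# plural derivation involves many more dependencies than shown here.

def apply_template(root, template):
    """Interdigitate a consonantal root into a template whose 'C'
    positions stand for successive root consonants."""
    consonants = iter(root)
    return "".join(next(consonants) if ch == "C" else ch for ch in template)

# root k-t-b ("write"); long /a:/ written as 'aa' for ASCII simplicity
singular = apply_template("ktb", "CiCaaC")  # kitaab "book"
plural = apply_template("ktb", "CuCuC")     # kutub  "books" (a broken plural)
print(singular, plural)
```

In a finite-state setting, the dependency this captures is that the choice of plural template is conditioned on the singular's template, which is the kind of generalization the abstract says the approach handles.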
HFST runtime format: A compacted transducer format allowing for fast lookup
University of Pretoria; ISBN 978-1-86854-743-2. Peer reviewed.
Automated Morphological Segmentation and Evaluation
In this paper we introduce (i) a new method for morphological segmentation of part-of-speech labelled German words and (ii) some measures related to the MDL principle for the evaluation of morphological segmentations. The segmentation algorithm is capable of discovering hierarchical structure and retrieving new morphemes. It achieved 75% recall and 99% precision. Regarding MDL-based evaluation, a linear combination of the vocabulary size and the size of a reduced deterministic finite-state automaton exactly matching the segmentation output turned out to be an appropriate measure for ranking segmentation models according to their quality.
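The MDL-style ranking measure can be sketched as a linear combination of morph-vocabulary size and automaton size. As a simplification, the sketch below counts states of a trie over the segmented words, where the paper uses reduced (minimized) deterministic automata; the weights and example words are made up:

```python
# Hedged sketch of an MDL-style ranking measure in the spirit of the
# abstract: score = a * |morph vocabulary| + b * |automaton states|.
# A trie stands in for the reduced DFA used in the paper.

def trie_states(segmented_words):
    """Insert each word as its sequence of morphs into a trie and
    count the states created."""
    root = {}
    states = 1
    for morphs in segmented_words:
        node = root
        for morph in morphs:
            if morph not in node:
                node[morph] = {}
                states += 1
            node = node[morph]
    return states

def mdl_score(segmented_words, a=1.0, b=1.0):
    """Lower is better: reward small morph vocabularies and compact
    automata over the segmentation output."""
    vocab = {m for morphs in segmented_words for m in morphs}
    return a * len(vocab) + b * trie_states(segmented_words)

seg_a = [("haus",), ("haus", "tuer"), ("tuer",)]  # morphemes are reused
seg_b = [("haus",), ("haustuer",), ("tuer",)]     # no sharing
# the segmentation that reuses morphemes scores lower (better)
```

The intuition matches MDL: a segmentation that reuses morphemes compresses both the vocabulary and the automaton describing the corpus.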
Speech Recognition by Composition of Weighted Finite Automata
We present a general framework based on weighted finite automata and weighted
finite-state transducers for describing and implementing speech recognizers.
The framework allows us to represent uniformly the information sources and data
structures used in recognition, including context-dependent units,
pronunciation dictionaries, language models and lattices. Furthermore, general
but efficient algorithms can be used for combining information sources in actual
recognizers and for optimizing their application. In particular, a single
composition algorithm is used both to combine in advance information sources
such as language models and dictionaries, and to combine acoustic observations
and information sources dynamically during recognition.
Comment: 24 pages, uses psfig.sty
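The single composition algorithm the abstract refers to can be sketched over the tropical semiring (weights add along a path, minimum over paths). The dict-based encoding below is hypothetical and omits epsilon handling, which real composition, including the paper's, must treat carefully:

```python
# Hedged sketch of weighted transducer composition in the tropical
# semiring. A transducer is {"start", "finals", "arcs"} with
# arcs: state -> [(input, output, weight, next_state)]. No epsilons.

def compose(t1, t2):
    """Pair up states; keep an arc when t1's output label matches t2's
    input label; path weights add (tropical semiring)."""
    arcs = {}
    for s1, arcs1 in t1["arcs"].items():
        for s2, arcs2 in t2["arcs"].items():
            arcs[(s1, s2)] = [
                (i1, o2, w1 + w2, (n1, n2))
                for (i1, o1, w1, n1) in arcs1
                for (i2, o2, w2, n2) in arcs2
                if o1 == i2
            ]
    return {
        "start": (t1["start"], t2["start"]),
        "finals": {(f1, f2) for f1 in t1["finals"] for f2 in t2["finals"]},
        "arcs": arcs,
    }

def best_weight(t, inp):
    """Minimum weight over paths that read inp and end in a final state."""
    def rec(state, i):
        if i == len(inp):
            return 0.0 if state in t["finals"] else float("inf")
        return min(
            (w + rec(n, i + 1)
             for (a, o, w, n) in t["arcs"].get(state, []) if a == inp[i]),
            default=float("inf"),
        )
    return rec(t["start"], 0)

# a toy "dictionary" a:b and "language model" b:c composed into a:c
t1 = {"start": 0, "finals": {1}, "arcs": {0: [("a", "b", 1.0, 1)], 1: []}}
t2 = {"start": 0, "finals": {1}, "arcs": {0: [("b", "c", 0.5, 1)], 1: []}}
ac = compose(t1, t2)
print(best_weight(ac, ["a"]))  # → 1.5
```

The same `compose` works whether the operands are built offline (dictionary with language model) or produced during decoding (acoustic observations with the recognition network), which is the uniformity the framework exploits.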
Building and Using Existing Hunspell Dictionaries and TeX Hyphenators as Finite-State Automata
There are numerous formats for writing spell-checkers for open-source systems, and there are many descriptions of languages written in these formats. Similarly, for word hyphenation by computer there are TeX rules for many languages. In this paper we demonstrate a method for converting these spell-checking lexicons and hyphenation rule sets into finite-state automata, and present a new finite-state based system for writer's tools used in current open-source software such as Firefox, OpenOffice.org and enchant via the spell-checking library voikko. Peer reviewed.
Open Source Natural Language Processing
Our MQP aimed to introduce finite-state machine based techniques for natural language processing into Hunspell, the world's premiere Open Source spell checker used in several prominent projects such as Firefox and Open Office. We created compact machine-readable finite-state transducer representations of 26 of the most commonly used languages on Wikipedia. We then created an automaton-based spell checker. In addition, we implemented a transducer-based stemmer, which will serve as a basis for future transducer-based morphological analysis.
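An automaton-based spell checker of the kind described can be sketched with a trie acceptor plus naive edit-distance-1 candidate generation (a production system would traverse the automaton and the edit operations simultaneously; the word list below is illustrative):

```python
# Minimal sketch of automaton-based spell checking: a trie acts as the
# acceptor for the word list, and suggestions are edit-distance-1
# candidates filtered through the acceptor. Word list is illustrative.

def build(words):
    """Build a trie; '$' marks an accepting (end-of-word) state."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def accepts(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

def suggest(trie, word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Generate all edit-distance-1 variants and keep the accepted ones.
    Real systems walk the automaton and the edits together instead."""
    cands = set()
    for i in range(len(word) + 1):
        for ch in alphabet:
            cands.add(word[:i] + ch + word[i:])          # insertion
        if i < len(word):
            cands.add(word[:i] + word[i + 1:])           # deletion
            for ch in alphabet:
                cands.add(word[:i] + ch + word[i + 1:])  # substitution
    return sorted(c for c in cands if accepts(trie, c))

trie = build(["color", "colour", "colon"])
print(accepts(trie, "color"), suggest(trie, "colr"))
# → True ['color']
```

A transducer-based stemmer fits the same picture: instead of a boolean acceptor, the automaton outputs the stem along each accepting path.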