BEA – A multifunctional Hungarian spoken language database
In diverse areas of linguistics, the demand for studying actual language use is increasing. The aim of developing BEA, a phonetically based, multi-purpose database of Hungarian spontaneous speech, is to accumulate a large amount of spontaneous speech of various types, together with sentence repetition and reading. At present, the recorded material of BEA amounts to 260 hours produced by 280 present-day Budapest speakers (aged between 20 and 90; 168 female, 112 male), also providing annotated materials for various types of research and practical applications.
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
What are the limits of automated Twitter sentiment classification? We analyze
a large set of manually labeled tweets in different languages, use them as
training data, and construct automated classification models. It turns out that
the quality of classification models depends much more on the quality and size
of training data than on the type of the model trained. Experimental results
indicate that there is no statistically significant difference between the
performance of the top classification models. We quantify the quality of
training data by applying various annotator agreement measures, and identify
the weakest points of different datasets. We show that the model performance
approaches the inter-annotator agreement when the size of the training set is
sufficiently large. However, it is crucial to regularly monitor the self- and
inter-annotator agreements since this improves the training datasets and
consequently the model performance. Finally, we show that there is strong
evidence that humans perceive the sentiment classes (negative, neutral, and
positive) as ordered.
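The abstract does not name which annotator agreement measures were applied; as an illustration of the general idea, one widely used measure, Cohen's kappa, can be sketched as follows (the sentiment labels and data are invented):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n**2
    return (p_o - p_e) / (1 - p_e)

a = ["pos", "neg", "neu", "pos", "neg", "pos"]
b = ["pos", "neg", "neu", "neu", "neg", "pos"]
print(cohens_kappa(a, b))  # → 0.75
```

Note that kappa, like most standard agreement measures, treats the classes as unordered; the ordering of sentiment classes reported above would call for a weighted variant.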
How Many Topics? Stability Analysis for Topic Models
Topic modeling refers to the task of discovering the underlying thematic
structure in a text corpus, where the output is commonly presented as a report
of the top terms appearing in each topic. Despite the diversity of topic
modeling algorithms that have been proposed, a common challenge in successfully
applying these techniques is the selection of an appropriate number of topics
for a given corpus. Choosing too few topics will produce results that are
overly broad, while choosing too many will result in the "over-clustering" of a
corpus into many small, highly-similar topics. In this paper, we propose a
term-centric stability analysis strategy to address this issue, the idea being
that a model with an appropriate number of topics will be more robust to
perturbations in the data. Using a topic modeling approach based on matrix
factorization, evaluations performed on a range of corpora show that this
strategy can successfully guide the model selection process.
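The paper's term-centric stability measure is not reproduced here; as a simplified sketch of the underlying intuition, one can score how well the top-term lists from two runs of a topic model agree, with high agreement across perturbed runs suggesting an appropriate number of topics (the topics below are invented, and the paper's actual metric differs in detail):

```python
def jaccard(s, t):
    """Jaccard similarity between two term sets."""
    s, t = set(s), set(t)
    return len(s & t) / len(s | t)

def stability(run_a, run_b):
    """Average best-match Jaccard similarity between the top-term lists of two
    topic-model runs. A model with an appropriate number of topics should score
    consistently high under perturbations of the data."""
    return sum(max(jaccard(t_a, t_b) for t_b in run_b) for t_a in run_a) / len(run_a)

run_a = [["game", "team", "player"], ["market", "stock", "bank"]]
run_b = [["team", "player", "coach"], ["stock", "market", "trade"]]
print(stability(run_a, run_b))  # → 0.5
```

In practice one would compute such a score for each candidate number of topics k, over many resampled runs, and pick the k with the highest average stability.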
Morphological annotation of Korean with Directly Maintainable Resources
This article describes an exclusively resource-based method of morphological
annotation of written Korean text. Korean is an agglutinative language. Our
annotator is designed to process text before the operation of a syntactic
parser. In its present state, it annotates one-stem words only. The output is a
graph of morphemes annotated with accurate linguistic information. The
granularity of the tagset is 3 to 5 times higher than that of usual tagsets. A
comparison with a reference annotated corpus showed that it achieves 89% recall
without any corpus training. The language resources used by the system are
lexicons of stems, transducers of suffixes and transducers of generation of
allomorphs. All can be easily updated, which allows users to control the
evolution of the performances of the system. It has been claimed that
morphological annotation of Korean text could only be performed by a
morphological analysis module accessing a lexicon of morphemes. We show that it
can also be performed directly with a lexicon of words and without applying
morphological rules at annotation time, which speeds up annotation to 1,210 words/s. The lexicon of words is obtained from the maintainable language resources through a fully automated compilation process.
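A minimal sketch of the compilation idea, with plain dictionaries standing in for the lexicons of stems and the transducers of suffixes and allomorphs (the stems, suffixes, and tag format below are invented for illustration, not the system's actual resources):

```python
# Toy stand-ins for the maintainable resources: a stem lexicon and a table of
# suffix allomorphs. Real transducers also encode which allomorph attaches to
# which stem (e.g. by vowel harmony); here all combinations are generated.
stems = {"먹": "VV", "가": "VV"}            # verb stems with POS tags
suffix_allomorphs = {"past": ["었", "았"]}  # allomorphs of the past-tense suffix

def compile_word_lexicon(stems, suffixes):
    """Pre-compile a lexicon of fully inflected words, so that annotation
    becomes a direct lookup with no morphological rules applied at run time."""
    lexicon = {}
    for stem, pos in stems.items():
        for name, forms in suffixes.items():
            for form in forms:
                lexicon[stem + form] = f"{stem}/{pos}+{form}/E.{name}"
    return lexicon

lex = compile_word_lexicon(stems, suffix_allomorphs)
print(lex["먹었"])  # → 먹/VV+었/E.past
```

Updating a stem or suffix entry and recompiling regenerates the word lexicon automatically, which is what lets users control the evolution of the system's performance.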
Introducing a Romanian Frequency List and the Romanian Vocabulary Levels Test
Vocabulary is considered essential to language learning; thus, English word lists and tests based on frequency information have become the centre of attention for researchers, teachers and learners alike. As a result, it is argued here that frequency-based word lists and tests should be adapted and regarded as key elements for teaching and learning Romanian as an additional language as well.
Since there are currently no reliable frequency lists and lexical tests in Romanian, this paper aims to bridge this gap by introducing the first Romanian Word List and the Romanian Vocabulary Levels Test. The list contains the 10,000 most frequent Romanian words and is based on the Romanian Balanced Annotated Corpus (ROMBAC, Ion, Irimia, Ștefănescu, Tufiș 2012).
The primary objective of the paper is to elaborate on the compilation criteria, the challenges involved, and the benefits of such a list for teaching, learning and curriculum design for Romanian as an additional language. The secondary objective is to present a practical application of the word list by introducing an exemplary Romanian lexical test, the Romanian Vocabulary Levels Test, and to examine its reliability and validity.
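As a rough illustration of the raw frequency-counting step behind such a list (the actual list is built from ROMBAC and involves lemmatization and careful curation; the tokenizer and sample text below are simplified assumptions):

```python
import re
from collections import Counter

def frequency_list(corpus_text, top_n=10000):
    """Build a frequency-ranked word list from raw corpus text.
    A naive sketch: lowercase, split on Unicode word characters, count."""
    tokens = re.findall(r"\w+", corpus_text.lower())
    return [word for word, _ in Counter(tokens).most_common(top_n)]

sample = "limba română este o limbă romanică ; română este vorbită în România"
print(frequency_list(sample, top_n=3))  # → ['română', 'este', 'limba']
```

A real compilation would additionally merge inflected forms under their lemmas and filter proper nouns, which is part of what the paper's compilation criteria address.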
A Robust Transformation-Based Learning Approach Using Ripple Down Rules for Part-of-Speech Tagging
In this paper, we propose a new approach to construct a system of
transformation rules for the Part-of-Speech (POS) tagging task. Our approach is
based on an incremental knowledge acquisition method where rules are stored in
an exception structure and new rules are only added to correct the errors of
existing rules; thus allowing systematic control of the interaction between the
rules. Experimental results on 13 languages show that our approach is fast in
terms of training time and tagging speed. Furthermore, our approach obtains
very competitive accuracy in comparison to state-of-the-art POS and
morphological taggers. (To appear in AI Communications, the European Journal on Artificial Intelligence; accepted for publication on 3/12/2015.)
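A minimal sketch of the exception-structure idea, in which each new rule is attached under the rule whose errors it corrects, so adding a rule cannot disturb cases its ancestors already handle correctly (the rules and tags below are invented for illustration, not the paper's):

```python
class Rule:
    """A tagging rule with exception children: a child is consulted only when
    its parent's condition fired, so a new rule corrects its parent's errors
    without interacting with the rest of the rule system."""
    def __init__(self, condition, tag):
        self.condition, self.tag, self.exceptions = condition, tag, []

    def apply(self, word, prev_word):
        if not self.condition(word, prev_word):
            return None
        for exc in self.exceptions:
            tag = exc.apply(word, prev_word)
            if tag is not None:
                return tag  # the deepest matching exception wins
        return self.tag

# Toy English rules: default everything to NOUN; words ending in "-ly" are
# adverbs; except "fly" after "a", which is a noun again.
root = Rule(lambda w, p: True, "NOUN")
ly_rule = Rule(lambda w, p: w.endswith("ly"), "ADV")
fly_rule = Rule(lambda w, p: w == "fly" and p == "a", "NOUN")
ly_rule.exceptions.append(fly_rule)
root.exceptions.append(ly_rule)

print(root.apply("quickly", "ran"))  # → ADV
print(root.apply("fly", "a"))        # → NOUN
```

Because each rule only ever fires in the context where its parent fired, the interaction between rules stays systematically controlled, which is the property the approach above relies on.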
Narrative Language as an Expression of Individual and Group Identity
Scientific Narrative Psychology integrates quantitative methodologies into the study of identity. Its methodology, Narrative Categorical Analysis, and its toolkit, NarrCat, were both originally developed by the Hungarian Narrative Psychology Group. NarrCat automatically transforms sentences in self-narratives into psychologically relevant, statistically processable narrative categories. The main body of this flexible and comprehensive system is formed by Psycho-Thematic modules, such as Agency, Evaluation, Emotion, Cognition, Spatiality, and Temporality. The Relational Modules include Social References, Semantic Role Labeling (SRL), and Negation. Certain elements can be combined into Hypermodules, such as Psychological Perspective and Spatio-Temporal Perspective, which allow for even more complex, higher-level exploration of composite psychological processes. Drawing on up-to-date developments in corpus linguistics and Natural Language Processing (NLP), a unique feature of NarrCat is its capacity for SRL. The structure of NarrCat, as well as empirical results in group identity research, is discussed.
A new ParlaMint corpus for Hungarian: 30m tokens of annotated parliamentary data
Parliamentary data constitute a rich source of research material for academic fields in the social sciences and humanities (SSH). To facilitate such research, comparable, high-quality parliamentary corpora are needed. The ParlaMint project, funded by CLARIN-ERIC, aims to create such corpora for languages spoken in European parliaments in a shared framework consisting of uniform encoding schemas, metadata structure, and Universal Dependencies-type linguistic annotation. The newly built Hungarian corpus of ParlaMint II covers the minutes of the Hungarian National Assembly between May 2014 and June 2022 and can be considered a major improvement over the Hungarian corpus of ParlaMint I: it has a wider time frame, more extensive metadata on speakers and their affiliations, and more sophisticated linguistic analysis. The Hungarian ParlaMint II corpus is openly available, just as all the ParlaMint corpora for other languages. Some potential applications of ParlaMint corpora in SSH research are also discussed.