
    Beyond Facets: Semantic Roots and Modifiers as Elements of a Conceptual Morphology

    This paper presents initial ideas on a conceptual morphology in which concepts such as Fermentation, Fermented, and Fermentable are represented as combinations of a semantic root, in this example Ferment, with a modifier, in this example process, state/condition, and susceptible to process, respectively. This makes it possible to generate a large number of concepts from a much smaller list of semantic roots and modifiers. It also allows for great flexibility in indexing and searching. The paper gives a preliminary scheme of modifiers and invites ideas from classification researchers, logicians, and linguists.
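The combinatorial idea in this abstract can be sketched in a few lines: pairing a small list of roots with a small scheme of modifiers yields a much larger concept inventory. The root and modifier names below are illustrative only, not the paper's actual scheme.

```python
# Illustrative roots and modifiers (our own examples, not the paper's scheme).
ROOTS = ["Ferment", "Oxidize", "Crystallize"]
MODIFIERS = ["process", "state/condition", "susceptible to process"]

def generate_concepts(roots, modifiers):
    """Pair every semantic root with every modifier to form a concept."""
    return [(root, mod) for root in roots for mod in modifiers]

concepts = generate_concepts(ROOTS, MODIFIERS)
print(len(concepts))  # 3 roots x 3 modifiers = 9 concepts
print(("Ferment", "process") in concepts)  # the 'Fermentation'-like concept
```

The inventory grows multiplicatively: n roots and m modifiers yield n·m concepts, which is the flexibility argument the abstract makes for indexing and searching.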

    A New Approach of Grammar Teaching: Pre-modifiers in Noun Phrases

    Teaching grammar has always constituted a major part of language education in curricula around the world. This paper investigates pre-modifiers in noun phrases in English, focusing on their definition and classification. Previous scholars have had different focuses and have given various definitions and classifications of pre-modifiers. Through a thorough evaluation and comparison of the theories given by previous grammarians and linguists, this study redefines modifiers from semantic, formal, and syntactic perspectives and constructs a new classification based on word classes, dividing pre-modifiers into six categories: 1) adjectival pre-modifiers; 2) nominal pre-modifiers; 3) participle pre-modifiers; 4) genitive pre-modifiers; 5) adverb-phrase pre-modifiers; 6) sentence pre-modifiers. The implications of this paper may provide new insights into grammar teaching in English classes.
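The six-way classification can be illustrated with a small lookup of example noun phrases, one per category. The example phrases are ours, not drawn from the paper.

```python
# One illustrative noun phrase per pre-modifier category (our own examples).
PRE_MODIFIER_EXAMPLES = {
    "adjectival":    "a red car",
    "nominal":       "a stone wall",
    "participle":    "a running engine",
    "genitive":      "the teacher's desk",
    "adverb phrase": "the then president",
    "sentence":      "a do-it-yourself manual",
}

for category, phrase in PRE_MODIFIER_EXAMPLES.items():
    print(f"{category}: {phrase}")
```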

    An Empirical Analysis of the Role of Amplifiers, Downtoners, and Negations in Emotion Classification in Microblogs

    The effect of amplifiers, downtoners, and negations has been studied in general and particularly in the context of sentiment analysis. However, there is only limited work that aims at transferring these results and methods to discrete classes of emotions, e.g., joy, anger, fear, sadness, surprise, and disgust. For instance, it is not straightforward to interpret which emotion the phrase "not happy" expresses. With this paper, we aim at obtaining a better understanding of such modifiers in the context of emotion-bearing words and their impact on document-level emotion classification, namely of microposts on Twitter. We select an appropriate scope detection method for modifiers of emotion words, incorporate it into a document-level emotion classification model as an additional bag of words, and show that this approach improves the performance of emotion classification. In addition, we build a term-weighting approach based on the different modifiers into a lexical model for the analysis of the semantics of modifiers and their impact on emotion meaning. We show that amplifiers separate emotions expressed with an emotion-bearing word more clearly from other secondary connotations. Downtoners have the opposite effect. In addition, we discuss the meaning of negations of emotion-bearing words. For instance, we show empirically that "not happy" is closer to sadness than to anger and that fear-expressing words in the scope of downtoners often express surprise.

    Comment: Accepted for publication at The 5th IEEE International Conference on Data Science and Advanced Analytics (DSAA), https://dsaa2018.isi.it
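The "additional bag of words" idea from the abstract can be sketched as emitting extra tokens like NEG_happy whenever an emotion word falls in a modifier's scope. The word lists and the fixed-window scope heuristic below are our assumptions, standing in for the scope detection method the paper actually selects.

```python
# Tiny illustrative lexicons (our assumptions, not the paper's resources).
AMPLIFIERS = {"very", "extremely"}
DOWNTONERS = {"slightly", "somewhat"}
NEGATIONS = {"not", "never"}
EMOTION_WORDS = {"happy", "angry", "sad", "afraid"}

def modifier_features(tokens, window=2):
    """Return the plain bag of words plus extra tokens such as 'NEG_happy'
    when an emotion word appears within a small window after a modifier.
    The fixed window is a simple stand-in for real scope detection."""
    features = list(tokens)
    for i, tok in enumerate(tokens):
        if tok in AMPLIFIERS | DOWNTONERS | NEGATIONS:
            prefix = ("AMP" if tok in AMPLIFIERS
                      else "DOWN" if tok in DOWNTONERS
                      else "NEG")
            for other in tokens[i + 1:i + 1 + window]:
                if other in EMOTION_WORDS:
                    features.append(f"{prefix}_{other}")
    return features

print(modifier_features("i am not happy today".split()))
# → ['i', 'am', 'not', 'happy', 'today', 'NEG_happy']
```

A downstream classifier then sees NEG_happy as a feature distinct from happy, which is how the combined token can pull "not happy" toward sadness rather than joy.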

    All mixed up? Finding the optimal feature set for general readability prediction and its application to English and Dutch

    Readability research has a long and rich tradition, but there has been too little focus on general readability prediction that does not target a specific audience or text genre. Moreover, though NLP-inspired research has focused on adding more complex readability features, there is still no consensus on which features contribute most to the prediction. In this article, we investigate in close detail the feasibility of constructing a readability prediction system for English and Dutch generic text using supervised machine learning. Based on readability assessments by both experts and a crowd, we implement different types of text characteristics, ranging from easy-to-compute superficial text characteristics to features requiring deep linguistic processing, resulting in ten different feature groups. Both a regression and a classification setup are investigated, reflecting the two possible readability prediction tasks: scoring individual texts or comparing two texts. We show that going beyond correlation calculations and optimizing the feature set with a wrapper-based genetic algorithm is a promising approach that provides considerable insight into which feature combinations contribute to the overall readability prediction. Since we also have gold-standard information available for the features requiring deep processing, we are able to investigate the true upper bound of our Dutch system. Interestingly, we observe that the performance of our fully automatic readability prediction pipeline is on par with the pipeline using gold-standard deep syntactic and semantic information.
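A wrapper-based genetic algorithm over feature groups, as described in the abstract, can be sketched as evolving binary masks over the ten groups. The fitness function below is a dummy stand-in (a hypothetical set of "useful" groups plus a size penalty); in the real wrapper setting it would train and evaluate a readability model on each candidate feature set.

```python
import random

random.seed(42)
N_GROUPS = 10           # the abstract's ten feature groups
USEFUL = {0, 2, 3, 7}   # hypothetical 'truly informative' groups (our assumption)

def fitness(mask):
    """Dummy wrapper fitness: reward selecting useful groups, lightly
    penalize larger feature sets. A real wrapper would train and score
    a readability model on the selected groups instead."""
    return sum(1 for i, bit in enumerate(mask) if bit and i in USEFUL) \
        - 0.1 * sum(mask)

def evolve(pop_size=20, generations=30, p_mut=0.1):
    """Elitist GA over binary feature-group masks."""
    pop = [[random.randint(0, 1) for _ in range(N_GROUPS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_GROUPS)  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < p_mut)  # bit-flip mutation
                             for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the parents survive each generation, the best fitness never decreases, and the surviving mask shows directly which feature-group combinations the wrapper considers worth keeping.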