
    A Cognitive Linguistic Study of Categorisation and Uncertain Reasoning in the Representation of Degree Modifiers.

    Degree modifiers (such as very and really) are common features of written and spoken language. In general, their effect is to moderate the perceived strength of the linguistic form on which they act, making them a useful and versatile tool of expression and emphasis. However, the cognitive mechanisms that underlie the conceptualisation of degree modifiers, and the linguistic aspects of their use in combination with other classes of words, are extremely complex. The ease and fluency with which they are used, and the extent to which their effect is commonly understood, are good evidence that, like many aspects of meaning, degree modifiers rely on commonly held beliefs and knowledge about the world around us. For this reason, linguistic categorisation and prototypes are central to understanding the role of degree modifiers, particularly given that assumptions about the prototypical strengths of adjectives are exactly what degree modifiers seek to alter. A core part of this study is the consideration of the role of uncertainty: not only uncertainty relating to the strength of the degree modifier, but also of the linguistic forms on which it acts. More specifically, the inter-relationship between the perceived strength of degree modifiers and the certainty (or uncertainty) of the belief they express is a relatively unexplored yet intriguing area of linguistic research. The human mind constantly seeks to process as much information as possible for the least possible cognitive effort, yet this is difficult to achieve when reasoning with uncertain knowledge. By exploring the role and characteristics of degree modifiers, my aim is to illuminate how uncertain reasoning permeates many aspects of cognitive linguistic processing and how it relates to the conceptualisation and use of uncertain concepts in language.

    Mapping Collocational Properties into Machine Learning Features

    This paper investigates interactions between collocational properties and methods for organizing them into features for machine learning. In experiments performing an event categorization task, Wiebe et al. (1997a) found that different organizations are best for different properties. This paper presents a statistical analysis of the results across different machine learning algorithms. In the experiments, the relationship between property and organization was strikingly consistent across algorithms. This prompted further analysis of this relationship, and an investigation of criteria for recognizing beneficial ways to include collocational properties in machine learning experiments. While many types of collocational properties and methods of organizing them into features have been used in NLP, systematic investigations of their interaction are rare.

    1 Introduction

    Properties can be mapped to features in a machine learning algorithm in different ways, potentially yielding different results..
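    As a toy illustration of the kind of choice the abstract describes, a single collocational property (here, the word immediately preceding a target word) can be organized into features in more than one way. The property, function names, and vocabulary below are invented for the sketch; they are not taken from the paper.

    ```python
    # Two hypothetical organizations of one collocational property
    # (the preceding word) into machine-learning features.

    def as_single_feature(prev_word, vocabulary):
        """One multi-valued feature whose value is the property itself,
        collapsed to a catch-all token when unseen."""
        value = prev_word if prev_word in vocabulary else "<OTHER>"
        return {"prev_word": value}

    def as_binary_features(prev_word, vocabulary):
        """One binary (0/1) feature per candidate value of the property."""
        return {f"prev_word={w}": int(prev_word == w) for w in vocabulary}

    vocab = {"very", "quite", "not"}
    print(as_single_feature("very", vocab))   # one feature, value "very"
    print(as_binary_features("very", vocab))  # three 0/1 features
    ```

    Both encodings carry the same information, but learning algorithms can behave quite differently depending on which organization they are given, which is the kind of interaction the paper analyses.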