    UNDERSTANDING PREPOSITIONS THROUGH COGNITIVE GRAMMAR. A CASE OF IN

    The polysemous nature of prepositions has long been discussed in the linguistic literature and is confirmed by language data. Most research within cognitive linguistics has approached prepositions as predicates organising entities in space, with less attention paid to the search for a meaning schema sanctioning their numerous uses. The analytic tools of Cognitive Grammar make it possible to identify one meaning schema sanctioning the uses of the English preposition in. The present analysis is based on the assumption that the meaning schema of in profiles a relation of conceptual enclosure between two symbolic structures, one of which conceptually fits in the other. Accordingly, I argue that the speaker employs in to structure a real scene not because one element of the scene can physically enclose the other, but because a conceptual ‘fitting in’ holds between the predication preceding the preposition and the one following it. In formal terms, the use of in is conditioned and sanctioned by the compatibility of active zones in the predications that form the complex expression. Peculiarities of the physical organisation of real-world objects may be ignored in such a conceptualisation, though the speaker can choose to encode them with other linguistic devices.
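    As a toy illustration of the ‘active zone compatibility’ idea (a hypothetical Python sketch; the zone labels and function names are mine, not the paper’s), sanctioning in reduces to checking that the zones one predication profiles fit within those the other makes available:

        def fits_in(trajector_zones: set, landmark_zones: set) -> bool:
            # "in" is sanctioned when the trajector's active zones are
            # conceptually contained in the landmark's available zones.
            return trajector_zones <= landmark_zones

        # "the crack in the vase": the crack's active zone is the vase's
        # surface, one of the zones the vase predication makes available,
        # even though no physical enclosure is involved.
        crack = {"surface"}
        vase = {"surface", "interior", "material"}
        print(fits_in(crack, vase))  # True: conceptual 'fitting in' holds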

    Lightweight Ontologies

    Ontologies are explicit specifications of conceptualizations. They are often thought of as directed graphs whose nodes represent concepts and whose edges represent relations between concepts. The notion of concept is understood as defined in Knowledge Representation, i.e., as a set of objects or individuals. This set is called the concept extension or the concept interpretation. Concepts are often lexically defined, i.e., they have natural language names which are used to describe the concept extensions (e.g., the concept mother denotes the set of all female parents). Therefore, when ontologies are visualized, their nodes are often shown with the corresponding natural language concept names. The backbone structure of the ontology graph is a taxonomy in which the relations are “is-a”, whereas the remaining structure of the graph supplies auxiliary information about the modeled domain and may include relations like “part-of”, “located-in”, “is-parent-of”, and many others.
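    The graph structure described above is easy to make concrete. A minimal Python sketch (the class and relation names are illustrative, not from the paper) stores labelled directed edges and walks the “is-a” backbone to compute a concept’s ancestors:

        from collections import defaultdict

        class Ontology:
            def __init__(self):
                # edges[relation][source] is the set of target concepts
                self.edges = defaultdict(lambda: defaultdict(set))

            def add(self, source, relation, target):
                self.edges[relation][source].add(target)

            def ancestors(self, concept):
                # transitive closure along the "is-a" taxonomy backbone
                seen, stack = set(), [concept]
                while stack:
                    for parent in self.edges["is-a"][stack.pop()]:
                        if parent not in seen:
                            seen.add(parent)
                            stack.append(parent)
                return seen

        onto = Ontology()
        onto.add("mother", "is-a", "parent")         # taxonomy backbone
        onto.add("parent", "is-a", "person")
        onto.add("mother", "is-parent-of", "child")  # auxiliary relation
        print(onto.ancestors("mother"))              # {'parent', 'person'}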

    Discovery of Linguistic Relations Using Lexical Attraction

    This work has been motivated by two long-term goals: to understand how humans learn language and to build programs that can understand language. Using a representation that makes the relevant features explicit is a prerequisite for successful learning and understanding; therefore, I chose to represent relations between individual words explicitly in my model. Lexical attraction is defined as the likelihood of such relations. I introduce a new class of probabilistic language models, named lexical attraction models, which can represent long-distance relations between words, and I formalize this new class of models using information theory. Within the framework of lexical attraction, I developed an unsupervised language acquisition program that learns to identify linguistic relations in a given sentence. The only explicitly represented linguistic knowledge in the program is lexical attraction: there is no built-in grammar or lexicon, and the only input is raw text. Learning and processing are interdigitated. The processor uses the regularities detected by the learner to impose structure on the input, and this structure enables the learner to detect higher-level regularities. Using this bootstrapping procedure, the program was trained on 100 million words of Associated Press material and achieved 60% precision and 50% recall in finding relations between content words. Using knowledge of lexical attraction, the program can identify the correct relations in syntactically ambiguous sentences such as “I saw the Statue of Liberty flying over New York.”
    Comment: dissertation, 56 pages
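    The dissertation formalizes lexical attraction information-theoretically. As a rough Python sketch (the window size and the pointwise-mutual-information estimate are my simplifications, not the dissertation’s parser), the attraction between two words can be scored from raw co-occurrence counts:

        import math
        from collections import Counter

        def lexical_attraction(tokens, window=5):
            word_counts = Counter(tokens)
            pair_counts = Counter()
            for i, w in enumerate(tokens):
                for v in tokens[i + 1 : i + 1 + window]:
                    pair_counts[(w, v)] += 1
            n, total = len(tokens), sum(pair_counts.values())

            def pmi(w, v):
                # pointwise mutual information of the ordered pair (w, v)
                if pair_counts[(w, v)] == 0:
                    return float("-inf")
                p_pair = pair_counts[(w, v)] / total
                return math.log2(p_pair / ((word_counts[w] / n) * (word_counts[v] / n)))

            return pmi

        pmi = lexical_attraction("the dog chased the cat and the dog barked".split())
        print(pmi("dog", "barked"))  # positive: the pair attracts

    An unsupervised parser in the spirit of the thesis would then link word pairs so as to maximise the total attraction of the resulting structure.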

    Language, logic and ontology: uncovering the structure of commonsense knowledge

    The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering that structure. In addition to suggesting a systematic method for discovering the structure of commonsense knowledge, the method we propose also seems to explain a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious: nothing less than the systematic ‘discovery’ of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.
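    To hint at what a ‘well-typed ontology’ buys, here is a purely hypothetical Python sketch (the types and relations are invented for illustration, not taken from the paper) in which the admissible interpretation of a nominal compound is recovered from the types of its nouns:

        TYPES = {"brick": "Substance", "house": "Artifact", "dog": "Animal"}

        # relation name -> (required head type, required modifier type)
        RELATIONS = {
            "made-of":      ("Artifact", "Substance"),
            "inhabited-by": ("Artifact", "Animal"),
        }

        def interpret_compound(modifier, head):
            # a relation is licensed only if its type signature fits the nouns
            return [r for r, (h, m) in RELATIONS.items()
                    if TYPES[head] == h and TYPES[modifier] == m]

        print(interpret_compound("brick", "house"))  # ['made-of']
        print(interpret_compound("dog", "house"))    # ['inhabited-by']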