
    Inquiries into the lexicon-syntax relations in Basque

    Index:
    - Foreword. B. Oyharçabal.
    - Morphosyntactic disambiguation and shallow parsing in computational processing in Basque. I. Aduriz, A. Díaz de Ilarraza.
    - The transitivity of borrowed verbs in Basque: an outline. X. Alberdi.
    - Patrixa: a unification-based parser for Basque and its application to the automatic analysis of verbs. I. Aldezabal, M. J. Aranzabe, A. Atutxa, K. Gojenola, K. Sarasola.
    - Learning argument/adjunct distinction for Basque. I. Aldezabal, M. J. Aranzabe, K. Gojenola, K. Sarasola, A. Atutxa.
    - Analyzing verbal subcategorization aimed at its computational application. I. Aldezabal, P. Goenaga.
    - Automatic extraction of verb patterns from "Hauta-lanerako euskal hiztegia". J. M. Arriola, X. Artola, A. Soroa.
    - The case of an enlightening, provoking and admirable Basque derivational suffix with implications for the theory of argument structure. X. Artiagoitia.
    - Verb-deriving processes in Basque. J. C. Odriozola.
    - Lexical causatives and causative alternation in Basque. B. Oyharçabal.
    - Causation and semantic control: diagnosis of incorrect use in minorized languages. I. Zabala.
    - Subject index.
    - Contributions

    Processing Coordinated Verb Phrases: The Relevance of Lexical-Semantic, Conceptual, and Contextual Information towards Establishing Verbal Parallelism.

    This dissertation examines the influence of lexical-semantic representations, conceptual similarity, and contextual fit on the processing of coordinated verb phrases. The study integrates information gleaned from current linguistic theory with current psycholinguistic approaches to examining the processing of coordinated verb phrases. It has been claimed that in coordinated phrases, one conjunct may influence the processing of a second conjunct if they are sufficiently similar. For example, the likelihood of adopting an intransitive analysis for the optionally transitive verb of a subordinated clause in sentences like "Although the pirate ship sank the nearby British vessel did not send out lifeboats" may be increased if the ambiguous verb ("sank") is coordinated with a preceding, intransitively biased verb ("halted and sank"). Similarly, processing of the second conjunct may be facilitated when it is coordinated with a similar first conjunct. Such effects, and others in this vein, have often been designated "parallelism effects." However, the notions of similarity underlying such effects have long been ill-defined. Many existing studies rely on relatively shallow features, such as syntactic category information or argument structure generalizations (e.g., transitive vs. intransitive), as a basis for structural comparison. But it may be that deeper levels of lexical-semantic representation and more varied semantic or conceptual sources of information are also relevant to establishing similarity between conjuncts. In addition, little has been done to integrate parallelism effects with theories of the processing architecture underlying them, particularly for studies involving syntactic ambiguity resolution.
    Using two word-by-word reading experiments and three eyetracking-while-reading experiments, I investigate what contribution detailed lexical-semantic representations, as well as conceptual and contextual information, make towards establishing parallel coordination in the online processing of coordinated verb phrases. The five studies demonstrate that parallelism effects are indeed sensitive to deeper representational information, conceptual similarity, and contextual fit. Furthermore, by controlling for deeper representational information, it is demonstrated that the expected facilitatory patterns arising from coordination of similar conjuncts may be disrupted. Implications for the architecture of the processing system are discussed, and it is argued that constraint-based/competition models of processing best accommodate the pattern of results.
    Ph.D. Linguistics. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78841/1/damont_1.pd

    Natural language understanding: instructions for (Present and Future) use

    In this paper I look at Natural Language Understanding, an area of Natural Language Processing aimed at making sense of text, through the lens of a visionary future: what do we expect a machine should be able to understand, and what are the key dimensions that require the attention of researchers to make this dream come true?

    Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity

    Recent advances in large language models have prompted researchers to examine their abilities across a variety of linguistic tasks, but little has been done to investigate how models handle the interactions in meaning across words and larger syntactic forms -- i.e. phenomena at the intersection of syntax and semantics. We present the semantic notion of agentivity as a case study for probing such interactions. We created a novel evaluation dataset by utilizing the unique linguistic properties of a subset of optionally transitive English verbs. This dataset was used to prompt varying sizes of three model classes to see if they are sensitive to agentivity at the lexical level, and if they can appropriately employ these word-level priors given a specific syntactic context. Overall, GPT-3 text-davinci-003 performs extremely well across all experiments, outperforming all other models tested by far. In fact, its results are even better correlated with human judgements than both syntactic and semantic corpus statistics. This suggests that LMs may potentially serve as more useful tools for linguistic annotation, theory testing, and discovery than select corpora for certain tasks

    Acquiring and processing verb argument structure: distributional learning in a miniature language

    Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings

    An Annotation Scheme for Reichenbach's Verbal Tense Structure

    In this paper we present RTMML, a markup language for the tenses of verbs and temporal relations between verbs. There is a richness to tense in language that is not fully captured by existing temporal annotation schemata. Following Reichenbach we present an analysis of tense in terms of abstract time points, with the aim of supporting automated processing of tense and temporal relations in language. This allows for precise reasoning about tense in documents, and the deduction of temporal relations between the times and verbal events in a discourse. We define the syntax of RTMML, and demonstrate the markup in a range of situations

    Transitive phrasal verbs with the particle "out": A lexicon-grammar analysis

    Using a lexicon-grammar approach developed by Maurice Gross (1992), this project involved systematically mapping the structural properties of over 550 transitive phrasal verbs with the particle "out", "PV out". The data is analyzed in terms of two main tables or matrices. The first table illustrates the morpho-syntactic properties of purely simple "PV out" expressions, like "freak out the kid" ↔ "freak the kid out". The second table illustrates the morpho-syntactic combinations of complex "PV out" expressions, as in "take the boxer out of the fight". The research shows that "PV out" expressions may involve up to 25 syntactic features, including N2 promotion, as in "The girl spilled the water out of the glass" → "The girl spilled the glass out", complex-neutral constructions, like "The water spilled out of the glass", and reversed constructions, like "The company farmed the oil out of the land" → "The company farmed the land out of oil". The research shows that these syntactic combinations are highly lexical, in that a unique combination of features applies to each individual phrasal verb

    Acts of killing, acts of meaning: an application of corpus pattern analysis to language of animal-killing

    We are currently witnessing unprecedented levels of ecological destruction and violence visited upon nonhumans. Study of the more-than-human world is now being enthusiastically taken up across a range of disciplines, in what has been called the 'scholarly animal turn'. This thesis brings together concerns of Critical Animal Studies – along with related threads of posthumanism and new materialist thinking – and Corpus Linguistics, specifically Corpus Pattern Analysis (CPA), to produce a data-driven, lexicocentric study of the discourse of animal-killing. CPA, which has been employed predominantly in corpus lexicography, provides a robust and empirically well-founded basis for the analysis of verbs. Verbs are chosen because they act as the pivot of a clause; analysing them also uncovers their arguments – in this case, participants in material-discursive 'killing' events. This project analyses 15 'killing' verbs using CPA as a basis, in what I term a corpus-lexicographical discourse analysis. The data is sampled from an animal-themed corpus of around 9 million words of contemporary British English, and the British National Corpus is used for reference. The findings are both methodological and substantive. CPA is found to be a reliable empirical starting point for discourse analysis, and the lexicographical practice of establishing linguistic 'norms' is critical to the identification of anomalous uses. The thesis presents evidence of anthropocentrism inherent in the English lexicon, and demonstrates several ways in which distance is created between participants of 'killing' constructions. The analysis also reveals specific ways that verbs can obfuscate, deontologise and deindividualise their arguments. The recommendations, for discourse analysts, include the adoption of CPA and a critical analysis of its resulting patterns in order to demonstrate the precise mechanisms by which verb use can either oppress or empower individuals. Social justice advocates are also alerted to potentially harmful language that might undermine their cause

    Polysemy and word meaning: an account of lexical meaning for different kinds of content words

    There is an ongoing debate about the meaning of lexical words, i.e., words that contribute content to the meaning of sentences. This debate has coincided with a renewal in the study of polysemy, which has taken place mainly in the psycholinguistics camp. There is already a fruitful interbreeding between two lines of research: the theoretical study of lexical word meaning, on the one hand, and the models of polysemy that psycholinguists present, on the other. In this paper I aim to deepen this ongoing interbreeding: I examine what is said about polysemy, particularly in the psycholinguistics literature, and then show how what we seem to know about the representation and storage of polysemous senses affects the models we have of lexical word meaning