Empirical Studies on the Disambiguation of Cue Phrases
Cue phrases are linguistic expressions such as now and well that function as explicit indicators of the structure of a discourse. For example, now may signal the beginning of a subtopic or a return to a previous topic, while well may mark subsequent material as a response to prior material, or as an explanatory comment. However, while cue phrases may convey discourse structure, each also has one or more alternate uses. While incidentally may be used sententially as an adverbial, for example, the discourse use initiates a digression. Although distinguishing discourse and sentential uses of cue phrases is critical to the interpretation and generation of discourse, the question of how speakers and hearers accomplish this disambiguation is rarely addressed. This paper reports results of empirical studies on discourse and sentential uses of cue phrases, in which both text-based and prosodic features were examined for disambiguating power. Based on these studies, it is proposed that discourse versus sentential usage may be distinguished by intonational features, specifically, pitch accent and prosodic phrasing. A prosodic model that characterizes these distinctions is identified. This model is associated with features identifiable from text analysis, including orthography and part of speech, to permit the application of the results of the prosodic analysis to the generation of appropriate intonational features for discourse and sentential uses of cue phrases in synthetic speech.
Classifying Cue Phrases in Text and Speech Using Machine Learning
Cue phrases may be used in a discourse sense to explicitly signal discourse
structure, but also in a sentential sense to convey semantic rather than
structural information. This paper explores the use of machine learning for
classifying cue phrases as discourse or sentential. Two machine learning
programs (Cgrendel and C4.5) are used to induce classification rules from sets
of pre-classified cue phrases and their features. Machine learning is shown to
be an effective technique for not only automating the generation of
classification rules, but also for improving upon previous results. Comment: 8 pages, PostScript file, to appear in the Proceedings of AAAI-9
The development of play-texts: From manuscript to print
It is an axiom of historical linguistics, and indeed historical studies generally, that our present-day assumptions are not a reliable basis for the analysis and interpretation of language data from earlier periods. Assumptions, not just about language but about any kind of human experience, help people make sense of the world in a cognitively efficient way. But those very assumptions interact with the phenomena to which they pertain, and together they change over time. Present-day assumptions form the endpoint of diachronic change. The first task for the historian is to describe earlier states of the language and its contexts, including the likely assumptions of contemporaries, and begin to understand why it is as it is. The second task is to explain the processes of change that have led to the current situation. This paper aims to show how present-day assumptions about early modern play-texts are inappropriate or misleading. It explores how the dialogue of earlier plays was shaped by particular manuscript practices, and compares this with the dialogue of present-day plays, which is shaped by the context of printing.
Cue Phrase Classification Using Machine Learning
Cue phrases may be used in a discourse sense to explicitly signal discourse
structure, but also in a sentential sense to convey semantic rather than
structural information. Correctly classifying cue phrases as discourse or
sentential is critical in natural language processing systems that exploit
discourse structure, e.g., for performing tasks such as anaphora resolution and
plan recognition. This paper explores the use of machine learning for
classifying cue phrases as discourse or sentential. Two machine learning
programs (Cgrendel and C4.5) are used to induce classification models from sets
of pre-classified cue phrases and their features in text and speech. Machine
learning is shown to be an effective technique for not only automating the
generation of classification models, but also for improving upon previous
results. When compared to manually derived classification models already in the
literature, the learned models often perform with higher accuracy and contain
new linguistic insights into the data. In addition, the ability to
automatically construct classification models makes it easier to comparatively
analyze the utility of alternative feature representations of the data.
Finally, the ease of retraining makes the learning approach more scalable and
flexible than manual methods. Comment: 42 pages, uses jair.sty, theapa.bst, theapa.st
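The classification task described in this abstract can be made concrete with a small sketch. The rules and features below are purely illustrative, loosely modelled on the kinds of prosodic and textual features the studies above discuss (phrase position, pitch accent, orthography); they are not the actual rules induced by CGRENDEL or C4.5 in the paper, and the function name is hypothetical.

```python
# Illustrative rule-based classifier for cue phrase senses.
# Features and rules are hypothetical examples of the style of model
# such learners induce, NOT the rules reported in the paper.

def classify_cue_phrase(phrase_initial: bool, accented: bool,
                        preceded_by_comma: bool) -> str:
    """Return 'discourse' or 'sentential' for one cue phrase token.

    phrase_initial    -- token begins an intonational phrase
    accented          -- token bears a pitch accent
    preceded_by_comma -- orthographic cue in the transcript
    """
    # Hypothetical rule 1: a phrase-initial, deaccented token tends to
    # signal discourse structure ("Now, let's turn to ...").
    if phrase_initial and not accented:
        return "discourse"
    # Hypothetical rule 2: an orthographic comma before a phrase-initial
    # token also suggests a discourse use, even when accented.
    if preceded_by_comma and phrase_initial:
        return "discourse"
    # Otherwise treat the use as sentential ("I feel well").
    return "sentential"

print(classify_cue_phrase(True, False, False))   # discourse
print(classify_cue_phrase(False, True, False))   # sentential
```

A learner such as C4.5 would induce a decision tree of exactly this shape automatically from pre-classified examples, which is what makes it easy to retrain on alternative feature representations, as the abstract notes.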
Age-Related Changes to the Production of Linguistic Prosody
The production of speech prosody (the rhythm, pausing, and intonation associated with natural speech) is critical to effective communication. The current study investigated the impact of age-related changes to physiology and cognition on the production of two types of linguistic prosody: lexical stress and the disambiguation of syntactically ambiguous utterances. Analyses of the acoustic correlates of stress (speech intensity or sound-pressure level, SPL; fundamental frequency, F0; key word/phrase duration; and pause duration) revealed that both young and older adults effectively use these acoustic features to signal linguistic prosody, although the relative weighting of cues differed by group. Differences in F0 were attributed to age-related physiological changes in the laryngeal subsystem, while group differences in duration measures were attributed to relative task complexity and the cognitive-linguistic load of these respective tasks. The current study provides normative acoustic data for older adults that inform the interpretation of clinical findings as well as research pertaining to dysprosody resulting from disease processes.
Gesture Facilitates the Syntactic Analysis of Speech
Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the Event Related Brain Potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are integrated systems. Unlike previous studies, which have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, including prosodic emphasis, or a moving visual stimulus with the same trajectory as the gesture. This suggests that only visual emphasis produced with a communicative intention in mind (that is, beat gestures) influences language comprehension, but not a simple visual movement lacking such an intention.
Juncture prosody across languages: Similar production but dissimilar perception
How do speakers of languages with different intonation systems produce and perceive prosodic junctures in sentences with identical structural ambiguity? Native speakers of English and of Mandarin produced potentially ambiguous sentences with a prosodic juncture either earlier in the utterance (e.g., “He gave her # dog biscuits,” “他给她#狗饼干”), or later (e.g., “He gave her dog # biscuits,” “他给她狗#饼干”). These production data showed that prosodic disambiguation is realised very similarly in the two languages, despite some differences in the degree to which individual juncture cues (e.g., pausing) were favoured. In perception experiments with a new disambiguation task, requiring speeded responses to select the correct meaning for structurally ambiguous sentences, language differences in disambiguation response time appeared: Mandarin speakers correctly disambiguated sentences with earlier juncture faster than those with later juncture, while English speakers showed the reverse. Mandarin speakers with L2 English did not show their native-language response-time pattern when they heard the English ambiguous sentences. Thus even with identical structural ambiguity and identically cued production, prosodic juncture perception across languages can differ.