
    Zipf’s law revisited: Spoken dialog, linguistic units, parameters, and the principle of least effort

    The ubiquitous inverse relationship between word frequency and word rank is commonly known as Zipf’s law. The theoretical underpinning of this law states that the inverse relationship yields decreased effort in both the speaker and hearer, the so-called principle of least effort. Most research has demonstrated this inverse relationship only for written monolog, only for frequencies and ranks of one linguistic unit, generally word unigrams, with strong correlations of the power law to the observed frequency distributions, and with limited to no attention to psychological mechanisms such as the principle of least effort. The current paper extends the existing findings by not focusing on written monolog but on a more fundamental form of communication, spoken dialog; by investigating not only word unigrams but also units quantified on syntactic, pragmatic, utterance, and nonverbal communicative levels; by showing that the adequacy of Zipf’s formula seems ubiquitous but the exponent of the power law curve is not; and by placing these findings in the context of Zipf’s principle of least effort through redefining effort in terms of cognitive resources available for communication. Our findings show that Zipf’s law also applies to a more natural form of communication, that of spoken dialog; that it applies to a range of linguistic units beyond word unigrams; that the general good fit of Zipf’s law needs to be revisited in light of the parameters of the formula; and that the principle of least effort is a useful theoretical framework for the findings of Zipf’s law.
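    The law the abstract refers to is the power-law relation f(r) ∝ 1/r^α between a unit’s frequency f and its frequency rank r. The sketch below is a minimal illustration of estimating the exponent α from a tokenized transcript via a least-squares fit in log-log space; the file name and the fitting choice are assumptions for illustration, not the procedure used in the paper.

```python
# Minimal sketch of estimating a Zipf exponent from a word-frequency
# distribution; the corpus file and fitting choices are illustrative
# assumptions, not the procedure used in the paper.
from collections import Counter

import numpy as np


def zipf_exponent(tokens):
    """Fit log(frequency) = c - alpha * log(rank) by least squares."""
    counts = Counter(tokens)
    freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    # Linear fit in log-log space; the magnitude of the slope is the exponent.
    slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope


if __name__ == "__main__":
    # Hypothetical transcript file (assumption for illustration).
    with open("dialog_transcript.txt", encoding="utf-8") as f:
        tokens = f.read().lower().split()
    print(f"Estimated Zipf exponent: {zipf_exponent(tokens):.2f}")
```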

    Surface and Contextual Linguistic Cues in Dialog Act Classification: A Cognitive Science View

    What role do linguistic cues on a surface and contextual level have in identifying the intention behind an utterance? Drawing on the wealth of studies and corpora from the computational task of dialog act classification, we studied this question from a cognitive science perspective. We first reviewed the role of linguistic cues in dialog act classification studies that evaluated model performance on three of the most commonly used English dialog act corpora. Findings show that frequency-based, machine learning, and deep learning methods all yield similar performance. Classification accuracies, moreover, generally do not explain which specific cues yield high performance. Using a cognitive science approach, in two analyses, we systematically investigated the role of cues in the surface structure of the utterance and cues of the surrounding context, individually and combined. By comparing the explained variance, rather than the prediction accuracy, of these cues in a logistic regression model, we found (1) that while surface and contextual linguistic cues can complement each other, surface linguistic cues form the backbone of human dialog act identification, (2) that word frequency statistics are particularly important for dialog act identification, and (3) that these trends are similar across corpora, despite differences in the type of dialog, corpus setup, and dialog act tagset. The importance of surface linguistic cues in dialog act classification sheds light on how both computers and humans take advantage of these cues in speech act recognition.
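    As a rough illustration of the kind of comparison described above, the sketch below fits two logistic-regression models on toy data, one on surface cues (unigram counts of the utterance) and one on a contextual cue (the previous dialog act), and compares their explained variance via McFadden’s pseudo-R²; the toy utterances and cue definitions are assumptions, not the corpora or feature sets used in the study.

```python
# Hedged sketch: compare the explained variance (McFadden's pseudo-R^2)
# of surface vs. contextual cues in a logistic-regression dialog act model.
# Toy utterances, labels, and cue definitions are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

utterances = ["do you like it", "i really like it", "what time is it",
              "it starts at nine", "could you repeat that", "sure no problem"]
previous_acts = ["statement", "question", "statement",
                 "question", "statement", "question"]   # contextual cue
acts = ["question", "statement", "question", "statement", "question", "statement"]

# Surface cues: unigram counts of the utterance itself.
X_surface = CountVectorizer().fit_transform(utterances).toarray()
# Contextual cue: whether the preceding utterance was a question (one column).
X_context = (np.array(previous_acts) == "question").astype(float).reshape(-1, 1)
y = np.array(acts)


def mcfadden_r2(X, y):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    ll_model = -log_loss(y, model.predict_proba(X), normalize=False)
    # Null model: chance level for this balanced two-class toy set.
    ll_null = -log_loss(y, np.full((len(y), 2), 0.5), normalize=False)
    return 1 - ll_model / ll_null


print("surface cues  pseudo-R2:", round(mcfadden_r2(X_surface, y), 2))
print("context cue   pseudo-R2:", round(mcfadden_r2(X_context, y), 2))
```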

    Lingualyzer: A computational linguistic tool for multilingual and multidimensional text analysis

    Most natural language models and tools are restricted to one language, typically English. For researchers in the behavioral sciences investigating languages other than English, and for those who would like to make cross-linguistic comparisons, hardly any computational linguistic tools exist, particularly for researchers who lack deep computational linguistic knowledge or programming skills. Yet for interdisciplinary researchers in a variety of fields, ranging from psycholinguistics, social psychology, and cognitive psychology to education and literary studies, there certainly is a need for such a cross-linguistic tool. In the current paper, we present Lingualyzer (https://lingualyzer.com), an easily accessible tool that analyzes text at three different text levels (sentence, paragraph, document) and includes 351 multidimensional linguistic measures available in 41 different languages. This paper gives an overview of Lingualyzer, categorizes its hundreds of measures, demonstrates how it distinguishes itself from other text quantification tools, explains how it can be used, and provides validations. Lingualyzer is freely accessible for scientific purposes through an intuitive and easy-to-use interface.
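    To make the idea of multilevel measures concrete, the sketch below computes a few simple measures (token count, type-token ratio, mean word length) at the sentence, paragraph, and document level; the measures and splitting rules are simplified assumptions and do not reflect Lingualyzer’s actual measure set or interface.

```python
# Illustrative sketch of computing a few text measures at the three levels
# Lingualyzer distinguishes (sentence, paragraph, document); the measures
# and splitting rules are simplified assumptions, not the tool's own.
import re
from statistics import mean


def measures(text):
    tokens = re.findall(r"\w+", text.lower())
    return {
        "n_tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / len(tokens) if tokens else 0.0,
        "mean_word_length": mean(len(t) for t in tokens) if tokens else 0.0,
    }


def analyze(document):
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    return {
        "document": measures(document),
        "paragraph": [measures(p) for p in paragraphs],
        "sentence": [measures(s) for s in sentences],
    }


print(analyze("Most tools analyze English. Few cover other languages.\n\nA cross-linguistic tool helps."))
```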

    A realistic, multimodal virtual agent for the healthcare domain

    We introduce an interactive embodied conversational agent for deployment in the healthcare sector. The agent is operated by a software architecture that integrates speech recognition, dialog management, and speech synthesis, and is embodied by a virtual human face developed using photogrammetry techniques. Together, these features allow for real-time, face-to-face interactions with human users. Although the developed software architecture is domain-independent and highly customizable, the virtual agent will initially be applied to the healthcare domain. Here we give an overview of the different components of the architecture.
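    The turn-taking loop such an architecture implies can be sketched as a pipeline from speech recognition through dialog management to speech synthesis; the placeholder classes below are illustrative stand-ins, not the authors’ implementation.

```python
# Placeholder sketch of the speech pipeline implied by the architecture:
# speech recognition -> dialog management -> speech synthesis.
# All classes and responses here are illustrative stand-ins.

class Recognizer:
    def transcribe(self, audio: bytes) -> str:
        # A real system would run an ASR engine on the audio stream.
        return "hello, I would like to book an appointment"


class DialogManager:
    def respond(self, user_text: str) -> str:
        # A real dialog manager would track state and consult a healthcare backend.
        if "appointment" in user_text:
            return "Certainly. What day works best for you?"
        return "Could you tell me a bit more about what you need?"


class Synthesizer:
    def speak(self, text: str) -> None:
        # Stand-in for speech synthesis plus facial animation of the virtual human.
        print(f"[agent says] {text}")


def handle_turn(audio: bytes, asr: Recognizer, dm: DialogManager, tts: Synthesizer) -> None:
    tts.speak(dm.respond(asr.transcribe(audio)))


handle_turn(b"", Recognizer(), DialogManager(), Synthesizer())
```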

    Identifying Linguistic Cues that Distinguish Text Types: A Comparison of First and Second Language Speakers

    The authors examine the degree to which first (L1) and second language (L2) speakers of English are able to distinguish between simplified and authentic reading texts from L2 instructional books, and whether L1 and L2 speakers differ in their ability to process linguistic cues related to this distinction. These human judgments are also compared to computational judgments based on indices inspired by cognitive theories of reading processing. Results demonstrate that both L1 and L2 speakers of English are able to identify linguistic cues within both text types, but only L1 speakers are able to successfully distinguish between simplified and authentic texts. In addition, the performance of a computational tool was comparable to that of human performance. These findings have important implications for second language text processing and readability, as well as for material development for second language instruction.
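    As a loose illustration of a computational judgment of text type, the sketch below trains a classifier on a few simple surface indices; the toy texts and indices are stand-ins for the cognitively inspired indices used in the study.

```python
# Sketch of a computational judgment of text type using simple surface
# indices; the toy texts and indices are simplified stand-ins for the
# cognitively inspired reading indices used in the study.
import re

import numpy as np
from sklearn.linear_model import LogisticRegression


def indices(text):
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    tokens = re.findall(r"\w+", text.lower())
    return [
        np.mean([len(t) for t in tokens]),     # mean word length
        len(tokens) / max(len(sentences), 1),  # mean sentence length
        len(set(tokens)) / len(tokens),        # type-token ratio
    ]


texts = [
    "The cat sat on the mat. It was warm. The cat slept.",                                  # simplified
    "Basking in the afternoon warmth, the cat curled contentedly atop the weathered mat.",  # authentic
    "He went to the shop. He bought bread. He went home.",                                  # simplified
    "Having procured a loaf from the corner bakery, he meandered homeward at dusk.",        # authentic
]
labels = ["simplified", "authentic", "simplified", "authentic"]

clf = LogisticRegression(max_iter=1000).fit([indices(t) for t in texts], labels)
print(clf.predict([indices("The dog ran fast. It was happy.")]))
```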

    Prosodic marking of contrasts in information structure

    Successful dialogue requires cultivation of common ground (Clark, 1996), shared information, which changes as the conversation proceeds. Dialogue partners can maintain common ground by using different modalities like eye gaze, facial expressions, gesture, content information, or intonation. Here, we focus on intonation and investigate how contrast in information structure is prosodically marked in spontaneous speech. Combinatory Categorial Grammar (CCG, Steedman 2000) distinguishes theme and rheme as elements of information structure. In some cases they can be distinguished by the pitch accent with which the corresponding words are realised. We experimentally evoke instances o