46,793 research outputs found

    Timing is Stressful: Do Listeners Combine Meaning and Rhythm to Predict Speech?

    English and other languages such as German are stress-timed languages: the timing of speech is determined by stressed and unstressed syllables, providing structure for sentences. While natural speech is not generally metrically regular in the way Shakespearean poetry is, it still conveys timing cues through stress. Prior research has found that metric regularity enhances the processing of words (Rothermich et al., 2012), potentially because it attunes listeners’ attention to the predictability of stressed, and therefore important, syllables. Other work (e.g., Rogers, 2017) has suggested that predictability in the form of semantic associations (e.g., hearing “barn” facilitates understanding of “hay”) is a driving force for speech understanding, so much so that people falsely “hear” words predicted by semantic context (e.g., hearing “barn” leads to hearing “hay”, even if “pay” was presented). In the current study, we aimed to examine how stress patterns and semantic associations may interact in listeners’ understanding of speech, as both provide bases for predictions on the part of the listener. We measured speech understanding by masking the final word of a sentence in noise, then asking participants to identify that word (e.g., Jake visits the park to walk his DOG). We manipulated each sentence’s rhythmic predictability (whether the sentence was presented as natural speech, as speech with its rhythm emphasized, or as rhythmically emphasized speech preceded by a matching drum beat) and semantic predictability (whether the final word made sense in the sentence, e.g., Jake visits the park to walk his dog/log). There was also a baseline condition for each rhythmic condition in which sentence predictability was low. The results indicated that the beat prime improved processing of the rhythmic speech in conditions where expectancy effects played a role (semantically congruent and incongruent) but had a negligible impact in the baseline condition.

    Computational coverage of type logical grammar: The Montague test

    It is nearly half a century since Montague made his contributions to the field of logical semantics. In this time, computational linguistics has taken an almost entirely statistical turn and mainstream linguistics has adopted an almost entirely non-formal methodology. But in a minority approach reaching back before the linguistic revolution, and to the origins of computing, type logical grammar (TLG) has continued championing the flags of symbolic computation and logical rigor in discrete grammar. In this paper, we aim to concretise a measure of progress for computational grammar in the form of the Montague Test. This is the challenge of providing a computational cover grammar of the Montague fragment. We formulate this Montague Test and show how the challenge is met by the type logical parser/theorem-prover CatLog2.
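
    The abstract mentions CatLog2 but includes no code. As a purely illustrative aside, the toy Python sketch below shows the general idea behind a type logical / categorial grammar: lexical items carry directional types, and a string parses as a sentence when forward and backward application reduce its types to S. The lexicon, names, and notation here are invented for the example; this is not CatLog2 and not the actual Montague fragment.

```python
# Toy categorial-grammar sketch (illustration only; NOT CatLog2 and not the
# actual Montague fragment). Types combine by forward/backward application.

from dataclasses import dataclass


@dataclass(frozen=True)
class Basic:
    name: str  # atomic category, e.g. NP or S

    def __str__(self):
        return self.name


@dataclass(frozen=True)
class Slash:
    direction: str  # "/" seeks its argument to the right, "\" to the left
    result: object
    argument: object

    def __str__(self):
        return f"({self.result}{self.direction}{self.argument})"


NP, S = Basic("NP"), Basic("S")

# Hypothetical mini-lexicon (invented for this example).
LEXICON = {
    "john": NP,
    "mary": NP,
    "walks": Slash("\\", S, NP),                   # S\NP: intransitive verb
    "loves": Slash("/", Slash("\\", S, NP), NP),   # (S\NP)/NP: transitive verb
}


def combine(left, right):
    """Backward application Y, X\\Y => X and forward application X/Y, Y => X."""
    if isinstance(right, Slash) and right.direction == "\\" and right.argument == left:
        return right.result
    if isinstance(left, Slash) and left.direction == "/" and left.argument == right:
        return left.result
    return None


def parse(words):
    """Greedy adjacent reduction; sufficient for this tiny fragment."""
    types = [LEXICON[w] for w in words]
    while len(types) > 1:
        for i in range(len(types) - 1):
            reduced = combine(types[i], types[i + 1])
            if reduced is not None:
                types[i:i + 2] = [reduced]
                break
        else:
            return None  # no reduction possible: the string does not parse
    return types[0]


print(parse(["john", "walks"]))          # S
print(parse(["john", "loves", "mary"]))  # S
```

    A grammar meeting the Montague Test would go far beyond this applicative core, adding the logical rules and compositional semantics of the full fragment; the sketch only isolates the type-combination step.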

    Who Cares How Congress Really Works?

    Legislative intent is a fiction. Courts and scholars accept this, by and large. As this Article shows, however, both are confused as to why legislative intent is a fiction and as to what this fiction entails. This Article first argues that the standard explanation—that Congress is a “they,” not an “it”—rests on an unduly simple conception of shared agency. Drawing from contemporary scholarship in the philosophy of action, it contends that Congress has no collective intention, not because of difficulties in aggregating the intentions of individual members, but rather because Congress lacks the sort of delegatory structure that one finds in, for example, a corporation. Second, this Article argues that—contrary to a recent, influential wave of scholarship—the fictional nature of legislative intent leaves interpreters of legislation with little reason to care about the fine details of legislative process. It is a platitude that legislative text must be interpreted in “context.” Context, however, consists of information salient to author and audience alike. This basic insight from the philosophy of language necessitates what this Article calls the “conversation” model of interpretation. Legislation is written by legislators for those tasked with administering the law—for example, courts and agencies—and those on whom the law operates—for example, citizens. Almost any interpreter thus occupies the position of conversational participant, reading legislative text in a context consisting of information salient both to members of Congress and to citizens (as well as agencies, courts, etc.). The conversation model displaces what this Article calls the “eavesdropping” model of interpretation—the prevailing paradigm among both courts and scholars. When asking what sources of information an interpreter should consider, courts and scholars have reliably privileged the epistemic position of members of Congress. The result is that legislation is erroneously treated as having been written by legislators exclusively for other legislators. This tendency is plainest in recent scholarship urging greater attention to legislative process—the nuances of which are of high salience to legislators but plainly not to citizens.

    Walking across Wikipedia: a scale-free network model of semantic memory retrieval.

    Semantic knowledge has been investigated using both online and offline methods. One common online method is category recall, in which members of a semantic category like "animals" are retrieved in a given period of time. The order, timing, and number of retrievals are used as assays of semantic memory processes. One common offline method is corpus analysis, in which the structure of semantic knowledge is extracted from texts using co-occurrence or encyclopedic methods. Online measures of semantic processing, as well as offline measures of semantic structure, have yielded data resembling inverse power law distributions. The aim of the present study is to investigate whether these patterns in data might be related. A semantic network model of animal knowledge is formulated on the basis of Wikipedia pages and their overlap in word probability distributions. The network is scale-free, in that node degree is related to node frequency as an inverse power law. A random walk over this network is shown to simulate a number of results from a category recall experiment, including power law-like distributions of inter-response intervals. Results are discussed in terms of theories of semantic structure and processing.
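
    The paper's model is derived from Wikipedia pages, but the retrieval mechanism it describes can be sketched generically. The Python snippet below is a simplified stand-in, not the authors' implementation: it substitutes a synthetic Barabási–Albert graph (which is scale-free) for the Wikipedia-derived network, runs a random walk over it, and records the number of steps between first visits to new nodes as a rough analogue of inter-response intervals in category recall. All function and parameter names are invented for the illustration.

```python
# Simplified sketch (invented names; not the paper's Wikipedia-derived model):
# a random walk over a synthetic scale-free graph, recording how many steps
# pass between first visits to new nodes, a rough analogue of inter-response
# intervals in category recall.

import random

import networkx as nx


def walk_retrieval_intervals(n_nodes=2000, attach_m=2, n_steps=20000, seed=1):
    rng = random.Random(seed)
    # Barabasi-Albert preferential attachment gives a scale-free degree distribution.
    graph = nx.barabasi_albert_graph(n_nodes, attach_m, seed=seed)

    node = rng.choice(list(graph.nodes))
    visited = {node}
    intervals = []              # steps between successive "retrievals" of new nodes
    steps_since_last_new = 0

    for _ in range(n_steps):
        node = rng.choice(list(graph.neighbors(node)))  # uniform step to a neighbour
        steps_since_last_new += 1
        if node not in visited:  # the walk "retrieves" a not-yet-produced item
            visited.add(node)
            intervals.append(steps_since_last_new)
            steps_since_last_new = 0
    return intervals


if __name__ == "__main__":
    intervals = walk_retrieval_intervals()
    print("items retrieved:", len(intervals) + 1)  # +1 for the starting node
    print("first 10 inter-retrieval intervals:", intervals[:10])
```

    Because the walk repeatedly revisits high-degree hubs, successive new nodes take longer and longer to reach, so the recorded intervals lengthen as the walk proceeds; this is the qualitative slowing that the paper connects to power law-like inter-response intervals.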

    Questions and Answers in a Context-Dependent Montague Grammar


    Phonological constraints and overextensions
