The source ambiguity problem: Distinguishing the effects of grammar and processing on acceptability judgments
Judgments of linguistic unacceptability may theoretically arise from either grammatical deviance or significant processing difficulty. Acceptability data are thus naturally ambiguous in theories that explicitly distinguish formal and functional constraints. Here, we consider this source ambiguity problem in the context of Superiority effects: the dispreference for ordering a wh-phrase in front of a syntactically “superior” wh-phrase in multiple wh-questions, e.g., What did who buy? More specifically, we consider the acceptability contrast between such examples and so-called D-linked examples, e.g., Which toys did which parents buy? Evidence from acceptability and self-paced reading experiments demonstrates that (i) judgments and processing times for Superiority violations vary in parallel, as determined by the kind of wh-phrases they contain, (ii) judgments increase with exposure, while processing times decrease, (iii) reading times are highly predictive of acceptability judgments for the same items, and (iv) the effects of the complexity of the wh-phrases combine in both acceptability judgments and reading times. This evidence supports the conclusion that D-linking effects are likely reducible to independently motivated cognitive mechanisms whose effects emerge in a wide range of sentence contexts. This in turn suggests that Superiority effects, in general, may owe their character to differential processing difficulty.
Understanding acceptability judgments: Additivity and working memory effects
Linguists build theories of grammar based largely on acceptability contrasts. But these contrasts can reflect grammatical constraints and/or constraints on language processing. How can theorists determine the extent to which the acceptability of an utterance depends on functional constraints? In a series of acceptability experiments, we consider two factors that might indicate processing contributions to acceptability contrasts: (1) the way constraints combine (i.e., additively or super-additively), and (2) the way a comprehender’s working memory resources influence acceptability judgments. Results suggest that multiple sources of processing difficulty combine to produce super-additive effects, but multiple grammatical violations do not. Furthermore, when acceptability judgments improve with higher working memory scores, this appears to be due to functional constraints. We conclude that tests of (super)-additivity and of differences in working memory can help to identify the effects of processing difficulty (due to functional constraints).
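The additivity test described in this abstract amounts to checking for an interaction in a factorial design: if two sources of difficulty combine additively, the penalty for having both is the sum of the individual penalties, and the interaction term is zero; a super-additive pattern surfaces as an extra interaction penalty. A minimal sketch of that logic, using simulated ratings with made-up coefficients (not the paper's data or analysis pipeline):

```python
# Toy (super-)additivity check for a 2x2 design crossing two
# processing-cost factors. Additive effects predict an interaction
# coefficient near zero; here we simulate a super-additive penalty
# of -0.8 and recover it by least squares. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ratings(n, beta1=-1.0, beta2=-1.0, interaction=-0.8, noise=0.3):
    """Acceptability ratings for n items per cell of a 2x2 design."""
    f1 = np.repeat([0, 0, 1, 1], n)   # factor 1 present?
    f2 = np.repeat([0, 1, 0, 1], n)   # factor 2 present?
    y = 5.0 + beta1 * f1 + beta2 * f2 + interaction * f1 * f2
    return f1, f2, y + rng.normal(0, noise, 4 * n)

def interaction_estimate(f1, f2, y):
    """Least-squares estimate of the interaction coefficient."""
    X = np.column_stack([np.ones_like(y, dtype=float), f1, f2, f1 * f2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[3]

f1, f2, y = simulate_ratings(50)
print(round(float(interaction_estimate(f1, f2, y)), 2))  # near the simulated -0.8
```

In practice the papers summarized here use mixed-effects analyses over participants and items rather than a plain regression, but the diagnostic quantity is the same interaction term.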
Cognitive constraints and island effects
Competence-based theories of island effects play a central role in generative grammar, yet the graded nature of many syntactic islands has never been properly accounted for. Categorical syntactic accounts of island effects have persisted in spite of a wealth of data suggesting that island effects are not categorical in nature and that nonstructural manipulations that leave island structures intact can radically alter judgments of island violations. We argue here, building on work by Paul Deane, Robert Kluender, and others, that processing factors have the potential to account for this otherwise unexplained variation in acceptability judgments.
We report the results of self-paced reading experiments and controlled acceptability studies that explore the relationship between processing costs and judgments of acceptability. In each of the three self-paced reading studies, the data indicate that the processing cost of different types of island violations can be significantly reduced to a degree comparable to that of nonisland filler-gap constructions by manipulating a single nonstructural factor. Moreover, this reduction in processing cost is accompanied by significant improvements in acceptability. This evidence favors the hypothesis that island-violating constructions involve numerous processing pressures that aggregate to drive processing difficulty above a threshold, resulting in unacceptability. We examine the implications of these findings for the grammar of filler-gap dependencies.
Laboratory study of effects of sonic boom shaping on subjective loudness and acceptability
A laboratory study was conducted to determine the effects of sonic boom signature shaping on subjective loudness and acceptability. The study utilized the sonic boom simulator at the Langley Research Center. A wide range of symmetrical, front-shock-minimized signature shapes were investigated together with a limited number of asymmetrical signatures. Subjective loudness judgments were obtained from 60 test subjects by using an 11-point numerical category scale. Acceptability judgments were obtained using the method of constant stimuli. Results were used to assess the relative predictive ability of several noise metrics, determine the loudness benefits of detailed boom shaping, and derive laboratory sonic boom acceptability criteria. These results indicated that the A-weighted sound exposure level, the Stevens Mark 7 Perceived Level, and the Zwicker Loudness Level metrics all performed well. Significant reductions in loudness were obtained by increasing front-shock rise time and/or decreasing front-shock overpressure of the front-shock minimized signatures. In addition, the asymmetrical signatures were rated to be slightly quieter than the symmetrical front-shock-minimized signatures of equal A-weighted sound exposure level. However, this result was based on a limited number of asymmetric signatures. The comparison of laboratory acceptability results with acceptability data obtained in more realistic situations also indicated good agreement.
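Of the metrics compared above, the A-weighted level is the simplest: it applies a standardized frequency weighting (IEC 61672) that discounts low and very high frequencies the way human hearing does. A minimal sketch of that weighting curve alone; computing a full A-weighted sound exposure level for a boom signature would additionally require the measured pressure spectrum, which is not reproduced here:

```python
# A-weighting gain in dB as a function of frequency in Hz, per the
# standard IEC 61672 transfer function. By construction the curve is
# ~0 dB at 1 kHz and strongly attenuates low frequencies, which is why
# metric choice matters for low-frequency signals like sonic booms.
import math

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz)."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

print(round(a_weight_db(1000.0), 2))  # ~0.0 dB at 1 kHz
```

The heavy attenuation below a few hundred hertz explains why the study compared A-weighting against loudness-based metrics (Stevens Mark 7, Zwicker) rather than assuming any one of them in advance.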
SNAP judgments: A small N acceptability paradigm (SNAP) for linguistic acceptability judgments
While published linguistic judgments sometimes differ from the judgments found in large-scale formal experiments with naive participants, there is not a consensus as to how often these errors occur nor as to how often formal experiments should be used in syntax and semantics research. In this article, we first present the results of a large-scale replication of the Sprouse et al. 2013 study on 100 English contrasts randomly sampled from Linguistic Inquiry 2001–2010 and tested in both a forced-choice experiment and an acceptability rating experiment. Like Sprouse, Schütze, and Almeida, we find that the effect sizes of published linguistic acceptability judgments are not uniformly large or consistent but rather form a continuum from very large effects to small or nonexistent effects. We then use this data as a prior in a Bayesian framework to propose a small n acceptability paradigm for linguistic acceptability judgments (SNAP Judgments). This proposal makes it easier and cheaper to obtain meaningful quantitative data in syntax and semantics research. Specifically, for a contrast of linguistic interest for which a researcher is confident that sentence A is better than sentence B, we recommend that the researcher should obtain judgments from at least five unique participants, using at least five unique sentences of each type. If all participants in the sample agree that sentence A is better than sentence B, then the researcher can be confident that the result of a full forced-choice experiment would likely be 75% or more agreement in favor of sentence A (with a mean of 93%). We test this proposal by sampling from the existing data and find that it gives reliable performance.
American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship.
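The recommendation above is essentially a unanimity criterion, and its force can be seen from elementary binomial reasoning. The sketch below ignores participant and item structure and the paper's actual Bayesian prior (which is estimated from the replication data); it shows only the core point that unanimous small samples are improbable unless the underlying preference is strong:

```python
# Toy binomial core of the SNAP unanimity criterion: if each
# forced-choice judgment independently favors sentence A with
# probability p, the chance that all n of n judgments favor A is p**n.
# Weak contrasts therefore rarely survive a unanimity filter.
def prob_unanimous(p, n=5):
    """P(all n judgments favor A) when each favors A with probability p."""
    return p ** n

print(round(prob_unanimous(0.55), 3))  # 0.05  (weak contrast: unanimity is rare)
print(round(prob_unanimous(0.93), 3))  # 0.696 (strong contrast: unanimity is common)
```

The paper's stronger guarantee, that a unanimous 5-of-5 sample predicts at least 75% agreement in a full experiment, comes from inverting this logic through the empirical prior over effect sizes, which the toy calculation does not model.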
Processing effects in linguistic judgment data: (super-)additivity and reading span scores
Linguistic acceptability judgments are widely agreed to reflect constraints on real-time language processing. Nonetheless, very little is known about how processing costs affect acceptability judgments. In this paper, we explore how processing limitations are manifested in acceptability judgment data. In a series of experiments, we consider how two factors relate to judgments for sentences with varying degrees of complexity: (1) the way constraints combine (i.e., additively or super-additively), and (2) the way a comprehender’s memory resources influence acceptability judgments. Results indicate that multiple sources of processing difficulty can combine to produce super-additive effects, and that there is a positive linear relationship between reading span scores and judgments for sentences whose unacceptability is attributable to processing costs. These patterns do not hold for sentences whose unacceptability is attributable to factors other than processing costs, e.g., grammatical constraints. We conclude that tests of (super)-additivity and of relationships to reading span scores can help to identify the effects of processing difficulty on acceptability judgments, although these tests cannot be used in contexts of extreme processing difficulty.
How do individual cognitive differences relate to acceptability judgments?: A reply to Sprouse, Wagers, and Phillips
Sprouse, Wagers, and Phillips (2012) carried out two experiments in which they measured individual differences in memory to test processing accounts of island effects. They found that these individual differences failed to predict the magnitude of island effects, and they construe these findings as counterevidence to processing-based accounts of island effects. Here, we take up several problems with their methods, their findings, and their conclusions.
First, the arguments against processing accounts are based on null results using tasks that may be ineffective or inappropriate measures of working memory (the n-back and serial-recall tasks). The authors provide no evidence that these two measures predict judgments for other constructions that are difficult to process and yet are clearly grammatical. They assume that other measures of working memory would have yielded the same result, but provide no justification that they should. We further show that whether a working-memory measure relates to judgments of grammatical, hard-to-process sentences depends on how difficult the sentences are. In this light, the stimuli used by the authors present processing difficulties other than the island violations under investigation and may have been particularly hard to process. Second, the Sprouse et al. results are statistically in line with the hypothesis that island sensitivity varies with working memory. Three out of the four island types in their experiment 1 show a significant relation between memory scores and island sensitivity, but the authors discount these findings on the grounds that the variance accounted for is too small to have much import. This interpretation, however, runs counter to standard practices in linguistics, psycholinguistics, and psychology.
Correlates of Sophisticated Listener Judgments of Esophageal Air Intake Noise
The literature on esophageal speech has identified the problem of extraneous air intake noise, suggested its possible etiology, and provided practical advice for clinical management. Documentation on the efficacy of specific methodology is lacking in the literature. Such documentation would be simplified if objective criteria were used to rate the severity of intake noise. The present study was prompted by the lack of basic data regarding listener evaluation of intake noise.
The purpose of this study was to identify physical and perceptual correlates of acceptability of esophageal air intake noise. A primary and a secondary question were asked:
Are selected objective measures of esophageal speech significantly correlated with sophisticated listener judgments of air intake noise acceptability? The measures used were:
- the mean intensity of air intake noise
- the mean intensity of speech
- the ratio of mean speech intensity to mean intake noise intensity
- the number of syllables uttered per intake
- the rate of speech (in syllables per second)
Secondarily, are sophisticated listener judgments of overall esophageal speech proficiency significantly correlated with sophisticated listener judgments of air intake noise acceptability?
Islands in the grammar? Standards of evidence
When considering how a complex system operates, the observable behavior depends upon both architectural properties of the system and the principles governing its operation. As a simple example, the behavior of computer chess programs depends upon both the processing speed and resources of the computer and the programmed rules that determine how the computer selects its next move. Despite having very similar search techniques, a computer from the 1990s might make a move that its 1970s forerunner would overlook simply because it had more raw computational power. From the naïve observer’s perspective, however, it is not superficially evident if a particular move is dispreferred or overlooked because of computational limitations or the search strategy and decision algorithm. In the case of computers, evidence for the source of any particular behavior can ultimately be found by inspecting the code and tracking the decision process of the computer. But with the human mind, such options are not yet available. The preference for certain behaviors and the dispreference for others may theoretically follow from cognitive limitations or from task-related principles that preclude certain kinds of cognitive operations, or from some combination of the two. This uncertainty gives rise to the fundamental problem of finding evidence for one explanation over the other. Such a problem arises in the analysis of syntactic island effects.
