
    Deriving a complex BIN through adverbial BIN complexes

    Work by Green (1998) discusses three sub-types of stressed BIN in African American English (AAE): stative, habitual, and completive. BIN constructions that co-occur with temporal adverbials exhibit limited grammaticality, with each sub-type differing in how it interacts with these adverbials. Non-BIN constructions that involve multiple adverbials of the same class in a single clause exhibit restrictions that resemble the BIN + adverbial data. Drawing on work that analyzes BIN as a remote past marker (Rickford 1975, Green 1998) and on work connecting adverbial position to interpretation (Ernst 2020), I argue that BIN is itself an adverbial that situates the initiation of an eventuality in the remote past. This adverbial BIN, in concert with certain combinations of tense and aspect, forms a complex that makes up the canonical BIN construction.

    Investigation of the Effect of Contextual Factors on BIN Production in AAE

    Treatments of African American English (AAE) in the literature have focused primarily on morphosyntactic differences from mainstream American English. One of these differences is found in the tense and aspect system. While both dialects have the present perfect use of “been”, AAE also has a stressed variant of “been”, termed BIN. This aspectual marker is featured in the literature, but the main focus has been on its prosodic qualities. It differs from present perfect been in that it has the semantics of a remote past marker (Rickford 1973, Rickford 1975, Green 1998). For a comprehensive understanding of AAE’s tense-aspect system, the syntactic-semantic and discourse-pragmatic properties of these markers need to be studied as well. We conduct a production experiment with members of an AAE-speaking community in Southwest Louisiana, followed by an acceptability judgement task. The purpose of the experiment is twofold. First, it allows us to examine BIN production in canonical BIN environments and non-BIN environments. Second, by paying close attention to the contexts these environments occur in, we can also examine the influence of discourse-pragmatic factors (LONG-TIME, TEMPORAL JUST, POLAR QUESTIONS) on BIN production in unambiguous as well as ambiguous environments. The factors LONG-TIME and TEMPORAL JUST are found to be significant predictors of BIN production. Furthermore, there is a significant effect of ambiguity, such that unambiguous contexts predicted BIN slightly less. Overall, the results of the experiment suggest that speakers are consistent in their BIN production in expected BIN environments, but more variable in non-BIN environments in both unambiguous and ambiguous contexts. This raises the interesting questions of why speakers are more variable in non-BIN environments and of what the discourse-pragmatic factors are actually capturing. Together, however, the results suggest that a variety of components can influence BIN production. Future work could investigate these components further.

    Does training with amplitude modulated tones affect tone-vocoded speech perception?

    Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli that contain temporal-envelope cues but no speech content can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is largely preserved. Two groups of listeners were trained on different amplitude-modulation (AM) tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials; AM rates: 4 Hz, 8 Hz, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not differ significantly from that observed for the controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
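    A rough sense of what tone vocoding does can be given with a short Python sketch: split the signal into analysis bands, extract and smooth each band's temporal envelope, and use the envelopes to modulate sine carriers at the band centre frequencies, discarding the temporal fine structure. The function name, band edges, filter order, and envelope cutoff below are illustrative assumptions, not the parameters used in the study.

    # A minimal tone-vocoder sketch (an illustrative reconstruction, not the
    # authors' processing chain). Assumes numpy and scipy; the band edges must
    # lie below half the sampling rate.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def tone_vocode(signal, fs, band_edges=(100, 400, 1000, 2400, 6000),
                    env_cutoff=30.0, order=4):
        """Keep the temporal envelope of each analysis band but replace the
        temporal fine structure with a sine carrier at the band centre."""
        out = np.zeros(len(signal))
        t = np.arange(len(signal)) / fs
        # Low-pass filter used to smooth the Hilbert envelope in each band.
        b_env, a_env = butter(order, env_cutoff / (fs / 2), btype='low')
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
            band = filtfilt(b, a, signal)
            env = filtfilt(b_env, a_env, np.abs(hilbert(band)))
            env = np.maximum(env, 0.0)           # envelopes cannot be negative
            fc = np.sqrt(lo * hi)                # geometric centre frequency
            out += env * np.sin(2 * np.pi * fc * t)
        # Match the overall RMS level of the input for a fair comparison.
        out *= np.sqrt(np.mean(signal ** 2) / (np.mean(out ** 2) + 1e-12))
        return out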

    The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker’s voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using both open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues.
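    The “optimal combination” idea in the final sentence can be sketched as maximum-likelihood cue integration, in which each cue is weighted by its reliability (inverse variance). The Gaussian-reliability framing, the function name, and the example numbers below are illustrative assumptions, not the specific model fitted in the study.

    def optimal_combination(est_a, var_a, est_v, var_v):
        """Combine an auditory and a visual estimate of the same quantity,
        weighting each cue by its reliability (inverse variance)."""
        w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
        w_v = 1 - w_a
        combined_est = w_a * est_a + w_v * est_v
        # The combined variance is never larger than that of the better cue.
        combined_var = 1 / (1 / var_a + 1 / var_v)
        return combined_est, combined_var

    # Example: a noisy auditory estimate plus a more reliable visual estimate;
    # the combined estimate leans toward the visual cue.
    est, var = optimal_combination(est_a=0.4, var_a=0.09, est_v=0.6, var_v=0.04)
    print(est, var)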

    Wavelet-Based Speech Enhancement For Hearing Aids

    Several wavelet-based methods have been applied to process the speech signal to improve intelligibility for a common hearing impairment known as recruitment of loudness, a sensorineural hearing loss of cochlear origin. The most complete of these methods performs both denoising and amplitude compression, using the same wavelet coefficients for both stages. Patients with sensorineural losses generally experience a high-frequency loss, resulting in a reduced dynamic range of hearing. In addition, many listeners experience reduced spectral resolution related to the phenomenon of upward spread of masking, so speech discrimination is adversely affected. If a listener suffers from recruitment of loudness, perceived loudness grows more rapidly with an increase in sound intensity than it does in the normal ear. Thus, for sensorineural hearing losses with severely restricted dynamic ranges, linear processing has limitations. The amplitude compression approach allows fast adj…
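    As a hedged illustration of the two-stage idea described above, the following Python sketch (using the PyWavelets package) denoises and then amplitude-compresses the same set of wavelet coefficients. The wavelet, threshold rule, and compression exponent are assumptions made for illustration, not those of the original method.

    import numpy as np
    import pywt

    def wavelet_denoise_compress(signal, wavelet='db4', level=5, exponent=0.6):
        """Stage 1: soft-threshold denoising; stage 2: power-law amplitude
        compression, both applied to the same wavelet detail coefficients."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Estimate the noise level from the finest-scale detail coefficients
        # and derive a universal threshold.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))
        processed = [coeffs[0]]                  # keep approximation coefficients
        for c in coeffs[1:]:
            c = pywt.threshold(c, thresh, mode='soft')   # denoising
            c = np.sign(c) * np.abs(c) ** exponent       # amplitude compression
            processed.append(c)
        return pywt.waverec(processed, wavelet)[:len(signal)]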

    Shifting Fundamental Frequency in Simulated Electric-Acoustic Listening
