
    Spoken word recognition of novel words, either produced or only heard during learning

    This document is the Accepted Manuscript Version of the following article: Tania S. Zamuner, Elizabeth Morin-Lessard, Stephanie Strahm, and Michael P. A. Page, 'Spoken word recognition of novel words, either produced or only heard during learning', Journal of Memory and Language, Vol. 89, August 2016, pp. 55-67, doi: 10.1016/j.jml.2015.10.003. Under embargo. Embargo end date: 1 December 2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/

    Psycholinguistic models of spoken word production differ in how they conceptualize the relationship between lexical, phonological, and output representations, making different predictions about the role of production in language acquisition and language processing. This work examines the impact of production on spoken word recognition of newly learned non-words. In Experiment 1, adults were trained on non-words with visual referents; during training, they produced half of the non-words, while the other half were heard only. In a visual world paradigm at test, eye-tracking results indicated faster recognition of non-words that were produced rather than only heard during training. In Experiment 2, non-words were correctly pronounced or mispronounced at test. Participants showed a different pattern of recognition for mispronunciations of non-words that were produced compared with those only heard during training. Together, these results indicate that production affects the representations of newly learned words. Peer reviewed. Final Accepted Version.

    Gradient phonological relationships: Evidence from vowels in French

    No full text
    The dichotomy of contrastive and allophonic phonological relationships has a long-standing tradition in phonology, but there is growing research that points to phonological relationships falling between contrastive and allophonic. Measures of lexical distinction (minimal pair counts) and predictability of distribution were applied to Laurentian French vowels to quantify three degrees of contrast between pairs: high, mid, and low contrast. According to traditional definitions, both the high and mid contrast pairs are classified as phonologically contrastive, and low contrast pairs as allophonic. As such, a binary view of contrast (contrastive vs. non-contrastive) predicted that high and mid contrast pairs would pattern together on tasks of speech perception, while low contrast pairs would show a different pattern. The gradient view predicted that all vowel pairs would fall along a continuum. Thirty-two speakers of Laurentian French participated in two experiments: an AX task and a similarity rating task. The results did not support a strict binary interpretation of contrast, since the high, mid, and low contrast vowel pairs patterned differently across the experiments. Instead, the results support a gradient view of phonological relationships. This article is part of the special collection: Marginal Contrasts.
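    One of the measures named above, the minimal pair count, is straightforward to compute. The sketch below (a toy illustration with a hypothetical lexicon, not the study's actual Laurentian French data or phoneme coding) counts word pairs distinguished solely by swapping one vowel for another:

    ```python
    def minimal_pairs(lexicon, v1, v2):
        """Count word pairs differing only in v1 vs. v2 in a single slot.

        Words are tuples of segment symbols; each unordered pair is
        counted once via a frozenset.
        """
        words = set(lexicon)
        pairs = set()
        for word in words:
            for i, seg in enumerate(word):
                if seg == v1:
                    swapped = word[:i] + (v2,) + word[i + 1:]
                    if swapped in words:
                        pairs.add(frozenset((word, swapped)))
        return len(pairs)

    # Hypothetical toy lexicon; "E" stands in for a lax vowel symbol.
    lexicon = [
        ("p", "E", "t"),
        ("p", "e", "t"),   # minimal pair with the word above for E/e
        ("s", "E", "l"),
        ("s", "a", "l"),   # minimal pair with the word above for E/a
    ]
    print(minimal_pairs(lexicon, "E", "e"))  # 1
    print(minimal_pairs(lexicon, "E", "a"))  # 1
    ```

    On this view, a high count signals a heavy functional load for the pair (more contrastive), while a count near zero is consistent with an allophonic or marginal relationship; the predictability-of-distribution measure complements it by asking how well the phonological environment predicts which member of the pair occurs.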

    The role of long-distance phonological processes in spoken word recognition: A preliminary investigation

    No full text
    Previous work has demonstrated that during spoken word recognition, listeners can use a variety of cues to anticipate an upcoming sound before the sound is encountered. However, this vein of research has largely focused on local phenomena that hold between adjacent sounds. To address this gap, we combine the Visual World Paradigm with an Artificial Language Learning methodology to investigate whether knowledge of a long-distance pattern of sibilant harmony can be utilized during spoken word recognition. The hypothesis was that participants trained on sibilant harmony could more quickly identify a target word from among a set of competitors when that target contained a prefix which had undergone regressive sibilant harmony. Participants tended to behave as expected for the subset of items that they saw during training, but the effect did not reach statistical significance and did not extend to novel items. This suggests that participants did not learn the rule of sibilant harmony and may instead have been memorizing which base went with which alternant. Failure to learn the pattern may have been due to certain aspects of the design, which will be addressed in future iterations of the experiment.
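    The alternation described above can be sketched concretely. The following toy implementation (the sibilant inventory and prefix shapes are hypothetical, not the study's actual artificial language) applies regressive sibilant harmony: a sibilant in the prefix assimilates to the first sibilant of the stem that follows it:

    ```python
    # Hypothetical two-sibilant inventory: /s/ and /ʃ/.
    SIBILANTS = {"s", "ʃ"}

    def harmonize_prefix(prefix, stem):
        """Apply regressive sibilant harmony to a prefix.

        Any sibilant in the prefix is replaced by the first sibilant
        of the stem; if the stem has no sibilant, the prefix surfaces
        unchanged.
        """
        trigger = next((seg for seg in stem if seg in SIBILANTS), None)
        if trigger is None:
            return prefix
        return "".join(trigger if seg in SIBILANTS else seg
                       for seg in prefix)

    print(harmonize_prefix("si", "kaʃu"))  # ʃi  (prefix /s/ -> [ʃ])
    print(harmonize_prefix("si", "kasu"))  # si  (already harmonic)
    print(harmonize_prefix("si", "kalu"))  # si  (no trigger in stem)
    ```

    The hypothesized processing advantage is the mirror image of this rule: hearing the alternant [ʃi] lets a listener who has internalized the pattern anticipate a ʃ-containing stem before its sibilant is encountered.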

    Combining and integrating multiple linguistic cues during spoken language comprehension: A focus on semantics and coarticulation

    No full text
    This research examines how adults process and integrate a combination of higher-level semantic cues (i.e., semantic context) followed by lower-level acoustic cues (i.e., coarticulatory cues) during online spoken comprehension. Previous studies investigating cue integration (e.g., Martin, 2016) found that listeners can flexibly use and integrate a variety of available cues across linguistic representations. The current pre-registered study used an eye-tracking paradigm and tested a previously unstudied combination of cues: specifically, how listeners process coarticulation (a lower-level cue) in the presence of preceding semantic information (a higher-level cue). Adult listeners were sensitive to both semantic and coarticulatory cues; moreover, adults' processing of later acoustic cues varied depending on the earlier semantic context. These results demonstrate that listeners can flexibly use and weigh cues across multiple levels of linguistic representation during language comprehension. Earlier semantic information can be maintained over time and can influence the processing of later lower-level acoustic cues.

    Developmental change in children’s speech processing of auditory and visual cues: An eyetracking study

    No full text
    This study investigates how children aged two to eight years (N = 129) and adults (N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between the apparent successes of visual speech processing shown by young children in visual-looking tasks and the apparent difficulties shown by older children on explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only), or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier on /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood. Accepted: Journal of Child Language.

    Infants track word forms in early word-object associations

    No full text
    A central component of language development is word learning. One characterization of this process is that language learners discover objects and then look for word forms to associate with these objects (Macnamara, 1984; Smith, 2000). Another possibility is that word forms themselves are also important, such that once learned, hearing a familiar word form will lead young word learners to look for an object to associate with it (Jusczyk, 1997). This research investigates the relative weighting of word forms and objects in early word-object associations using the anticipatory eye-movement paradigm (AEM; McMurray & Aslin, 2004). Eighteen-month-old infants and adults were taught novel word-object associations and then tested on ambiguous stimuli that pitted word forms and objects against each other. Results revealed a change in the weighting of these components across development. For 18-month-old infants, word forms carried more weight in early word-object associative learning, while for adults, objects were more salient. Our results suggest that infants preferentially use word forms to guide the process of word-object association. Peer reviewed.