21 research outputs found

    Generalization of auditory sensory and cognitive learning in typically developing children

    Despite the well-established involvement of both sensory (“bottom-up”) and cognitive (“top-down”) processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported “far-transfer” to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 × 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially the older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with the highest span in the MG relative to the other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research is required to investigate the effects of various stimuli and lengths of training on the generalization of sensory and cognitive learning to literacy skills.

    Profile-analysis models applied in two-tone complexes

    Thresholds for changes in the amplitude of one or both components of a two-tone complex have been measured as a function of frequency separation. Thresholds have been measured while keeping the overall level fixed, or while using a within-trial roving-level paradigm. Changes in the spectral shape of two-tone complexes were generally perceived better than changes in the overall level. Thresholds for the former changes are lowest at frequency separations near one semitone; thresholds for the latter hardly depend on frequency separation. Two models have been applied to the data. For narrow-bandwidth signals (frequency separations smaller than ~1 ST), the EWAIF model [Feth, Percept. & Psychophys. 15, 375-378 (1974)] can account for the obtained data. For (partially) resolved signals, a two-channel model [Durlach, Braida, and Ito, J. Acoust. Soc. Am. 80, 63-72 (1986)] is able to model the data. The values of certain model parameters show that the auditory system does not always process the changes optimally, in the sense that amplitude changes in one component contribute more to the detection process than those in the other component. This is especially true for large frequency separations. The values of some other model parameters are more difficult to interpret, but suggest that central noise is involved.
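
    The EWAIF model referenced above computes an envelope-weighted average of instantaneous frequency. A minimal sketch of the usual definition is given below; the symbols E(t), f(t) and T are assumed notation for this illustration and are not taken from the abstract.

        % EWAIF: envelope-weighted average of instantaneous frequency (Feth, 1974)
        % E(t): temporal envelope of the two-tone complex
        % f(t): instantaneous frequency of the complex
        % T:    stimulus duration
        \mathrm{EWAIF} \;=\; \frac{\int_{0}^{T} E(t)\, f(t)\, \mathrm{d}t}{\int_{0}^{T} E(t)\, \mathrm{d}t}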

    Intensity perception in two-tone complexes


    The presence of a social other motivates to invest effort while listening to speech-in-noise

    Background: Mental effort has been gaining attention as an important facet of listening. A relevant factor influencing mental effort is motivation, which in turn can be influenced by reward. Reward has been found to enhance the mental effort spent while listening, as shown by an increased peak pupil dilation. Furthermore, social interactions have been suggested to be rewarding and may also increase the motivation to spend effort while listening. However, how social aspects influence listening effort has not been examined until now. In this study, we examined the influence of a social presence on listening effort. Objectives: The aim of this study was to modify existing speech-in-noise paradigms to assess whether a social presence influences the amount of effort spent while listening. More specifically, we aimed to assess whether doing a speech-in-noise task together with another individual, rather than alone, affected the task-evoked pupil dilation response. Furthermore, we examined whether any potential effects were influenced by the difficulty of the task and the requirement to repeat the sentence. Method: Thirty-four young, normal-hearing participants (10 males, 24 females) listened to Dutch sentences that were masked with a stationary noise masker and presented through a loudspeaker. The participants’ task alternated between repeating sentences (active condition) and not repeating sentences (passive condition). The participant did this either alone or together with another individual in the booth; when together, they repeated sentences in turn. The participant and the other individual did not know each other before the study. Participants performed the task at three intelligibility levels (20%, 50% and 80% of sentences correct) in a blockwise fashion. During testing, pupil size was recorded as an objective outcome measure of listening effort. Results: Both task difficulty and doing the task in the presence of another individual significantly increased the peak pupil dilation (PPD). There was no interaction between task difficulty and the presence/absence of another individual on PPD. Furthermore, PPD was significantly lower in the passive conditions; this effect interacted with intelligibility. Lastly, performance on the listening task was affected by task difficulty, but not by the physical presence/absence of another individual. Conclusion: Increased PPD values suggest an increase in mental effort during listening when another participant is present, but only in the active condition (i.e., when the participants had to repeat the sentence). The effect of a social presence on pupil dilation seems to be independent of task difficulty.
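
    As an illustration of the peak pupil dilation (PPD) outcome measure described above, the sketch below shows one common way to derive a baseline-corrected PPD from a single-trial pupil trace. The function name, sampling rate, baseline window and simulated trace are assumptions made for this example, not details reported in the abstract.

        import numpy as np

        def peak_pupil_dilation(pupil_trace, fs=60, baseline_s=1.0):
            """Baseline-corrected peak pupil dilation (PPD) for one trial.

            pupil_trace: 1-D array of pupil-size samples; the first baseline_s
            seconds are taken as the pre-sentence baseline window (assumed layout).
            fs: sampling rate in Hz (assumed value).
            """
            n_base = int(baseline_s * fs)
            baseline = np.mean(pupil_trace[:n_base])    # mean pupil size before the sentence
            dilation = pupil_trace[n_base:] - baseline  # baseline-corrected trace
            return float(np.max(dilation))              # peak pupil dilation

        # Example: one simulated 5-s trial sampled at 60 Hz
        trial = np.concatenate([
            np.full(60, 3.0),                 # 1-s baseline at 3.0 mm
            3.0 + 0.4 * np.hanning(240),      # dilation peaking ~0.4 mm above baseline
        ])
        print(peak_pupil_dilation(trial))     # ~0.4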

    Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening

    In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean age = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross-validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
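
    A rough sketch of the group-level classification pipeline described above is shown below, using scikit-learn. The synthetic feature matrix, the choice of five neighbors, the z-scoring step and the participant grouping are illustrative assumptions; the abstract does not report these implementation details.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import StratifiedKFold, GroupKFold, cross_val_score

        rng = np.random.default_rng(0)

        # Seven per-trial features, as in the study: baseline pupil size, peak pupil
        # dilation, mean pupil dilation, interbeat interval, blood volume pulse
        # amplitude, pre-ejection period, pulse arrival time (values here are synthetic).
        n_trials = 400
        X = rng.normal(size=(n_trials, 7))
        y = rng.integers(0, 2, size=n_trials)               # e.g. observers present vs. absent
        participants = rng.integers(0, 29, size=n_trials)   # hypothetical participant IDs

        knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

        # Group-level accuracy: k-fold cross-validation pooling trials from all participants.
        pooled = cross_val_score(knn, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))

        # Generalization to novel participants: leave whole participants out of each fold.
        across = cross_val_score(knn, X, y, groups=participants, cv=GroupKFold(5))

        print(f"pooled CV accuracy: {pooled.mean():.2f}")
        print(f"across-participant CV accuracy: {across.mean():.2f}")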