Stimuli and the four speaker-listener groups in the current study.
<p>(A) An example stimulus. Visual (upper panel) and auditory (lower panel) speech cues of the sentence “<i>The girl loved the sweet coffee</i>” produced by a native American English speaker and a native Korean speaker. (B) The four speaker-listener groups. E-E: English sentences were produced by a native American English speaker, and then presented to a group of native American English listeners. E-K: English sentences were produced by a native American English speaker, and then presented to a group of native Korean listeners. K-E: English sentences were produced by a native Korean speaker, and then presented to a group of native American English listeners. K-K: English sentences were produced by a native Korean speaker, and then presented to a group of native Korean listeners.</p>
Results of the linear mixed effects logistic regression on intelligibility data, examining the three-way interaction among modality, speaker, and listener group in Experiment 1.
<p>Results of the linear mixed effects logistic regression on intelligibility data, examining the three-way interaction among modality, speaker, and listener group in Experiment 1.</p>
Visual enhancement scores as a function of self-reported English proficiency in native Korean listeners.
<p>(A) Sentences were produced by a native American English speaker. (B) Sentences were produced by a native Korean speaker. The straight line represents the best-fitting line for the data points. <i>r</i> represents the correlation coefficient between visual enhancement scores and English proficiency. * denotes <i>p</i> < .05.</p>
Mean proportion of correctly identified keywords as a function of signal-to-noise ratio in audio-only and audiovisual conditions across the four speaker-listener groups (see <b>Figure 1</b> for detailed descriptions of the four speaker-listener groups).
<p>Error bars represent standard error.</p>
Psychometric function analysis on visual enhancement data in Experiment 1.
<p>Psychometric function analysis on visual enhancement data in Experiment 1.</p>
Mean visual enhancement scores as a function of signal-to-noise ratio in the four speaker-listener groups (see <b>Figure 1</b> for detailed descriptions of the four speaker-listener groups).
<p>Error bars represent standard errors.</p>
Signal-to-noise ratio (SNR) values were transformed for use as independent variables in the linear mixed effects modeling analysis, in order to test specific patterns in the relationship between visual enhancement (VE) and SNR.
<p>(A) Mean-centered and scaled raw SNR values. This variable presupposes a linear relationship between VE and SNR, and was used to test the pattern that the AV benefit varies linearly with SNR, being greatest at either the highest or the lowest SNRs. (B) The second-degree polynomial of the linear SNR values. This variable presupposes a quadratic relationship between VE and SNR, and was used to test the pattern that the AV benefit peaks at an intermediate SNR of −12 and drops for SNRs above and below.</p>
Association between load of the <i>ASPM-G</i> allele and neural repetition suppression in the tone-repeat (1) and tone-change (2) conditions.
*<p>Covariates included were IQ, Sound blending, and Auditory working memory scores.</p>
Waveforms and spectrograms of one meaningful sentence (top panels) and one anomalous sentence (bottom panels), each produced in both conversational (left panels) and clear (right panels) speaking styles.
<p>Each panel displays 2.5 seconds.</p>
Average d′ scores in both testing blocks for semantically anomalous and meaningful sentences produced in clear and conversational speaking styles.
<p>Error bars represent standard error.</p>