32 research outputs found

    Across-talker effects on non-native listeners' vowel perception in noise

    This study explored how across-talker differences influence non-native vowel perception. American English (AE) and Korean listeners were presented with recordings of 10 AE vowels in /bVd/ context. The stimuli were mixed with noise and presented for identification in a 10-alternative forced-choice task. The two listener groups heard recordings of the vowels produced by 10 talkers at three signal-to-noise ratios. Overall, the AE listeners identified the vowels 22% more accurately than the Korean listeners. There was a wide range of identification accuracy scores across talkers for both AE and Korean listeners. At each signal-to-noise ratio, the across-talker intelligibility scores were highly correlated for AE and Korean listeners. Acoustic analysis was conducted for two vowel pairs that exhibited variable accuracy across talkers for Korean listeners but high identification accuracy for AE listeners. Results demonstrated that Korean listeners' error patterns for these four vowels were strongly influenced by variability in vowel production that was within the normal range for AE talkers. These results suggest that non-native listeners are strongly influenced by across-talker variability, perhaps because of the difficulty they have forming native-like vowel categories.
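    As an illustration of the across-talker analysis described above, the sketch below correlates per-talker identification accuracy between the two listener groups at a single signal-to-noise ratio. The scores, array names, and use of a Pearson correlation are assumptions for illustration, not data or methods taken from the study.

```python
# Hypothetical sketch of an across-talker intelligibility correlation.
# The scores below are invented placeholders, not values from the study.
import numpy as np
from scipy.stats import pearsonr

# Per-talker vowel identification accuracy (proportion correct) for each
# listener group at one signal-to-noise ratio, for the same 10 talkers.
ae_scores = np.array([0.91, 0.85, 0.88, 0.79, 0.93, 0.82, 0.87, 0.90, 0.84, 0.86])
korean_scores = np.array([0.70, 0.58, 0.66, 0.52, 0.75, 0.55, 0.63, 0.71, 0.57, 0.62])

# Correlate intelligibility across talkers between the two listener groups.
r, p = pearsonr(ae_scores, korean_scores)
print(f"Across-talker correlation: r = {r:.2f}, p = {p:.3f}")
```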

    Evaluating the Effectiveness of Teaching Assistants in Active Learning Classrooms

    Active learning classrooms (ALCs) support teaching approaches that foster greater interaction and student engagement. However, a common challenge for instructors who teach in ALCs is to provide adequate assistance to students while implementing collaborative activities. This study examined the impact of teaching assistants in a large ALC. The results showed that incorporating teaching assistants increases students’ access to expert advice during small group activities; further, students view the teaching assistants as supportive of their success in the classroom. Therefore, the availability of teaching assistants for instructors teaching in large ALCs must be considered along with classroom design and pedagogical approach.

    Intelligibility of medically related sentences in quiet, speech-shaped noise, and hospital noise

    Noise in healthcare settings, such as hospitals, often exceeds levels recommended by health organizations. Although researchers and medical professionals have raised concerns about the effect of these noise levels on spoken communication, objective measures of behavioral intelligibility in hospital noise are lacking. Further, no studies of intelligibility in hospital noise have used medically relevant terminology, which may differentially impact intelligibility compared to the standard terminology used in speech perception research and is essential for ensuring ecological validity. Here, intelligibility was measured using online testing for 69 young adult listeners in three listening conditions (i.e., quiet, speech-shaped noise, and hospital noise; 23 listeners per condition) for four sentence types. Three sentence types included medical terminology with varied lexical frequency and familiarity characteristics. A final sentence set included non-medically related sentences. Results showed that intelligibility was negatively impacted by both noise types, with no significant difference between the hospital and speech-shaped noise. Medically related sentences were not less intelligible overall, but word recognition accuracy was significantly positively correlated with both lexical frequency and familiarity. These results support the need for continued research on how noise levels in healthcare settings, in concert with less familiar medical terminology, impact communication and ultimately health outcomes.
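    For context on the noise conditions mentioned above, the sketch below shows one common way to generate speech-shaped noise (noise whose long-term spectrum matches a speech recording) and mix it with a sentence at a target signal-to-noise ratio. The file names, the 0 dB SNR, and the phase-randomization method are illustrative assumptions, not the study's stimulus-preparation procedure.

```python
# Minimal sketch: build speech-shaped noise by keeping a speech signal's
# magnitude spectrum and randomizing its phase, then mix at a target SNR.
# "sentence.wav" is a hypothetical mono recording.
import numpy as np
import soundfile as sf

speech, fs = sf.read("sentence.wav")

# Speech-shaped noise: same magnitude spectrum as the speech, random phase.
spectrum = np.fft.rfft(speech)
random_phase = np.exp(1j * np.random.uniform(0, 2 * np.pi, spectrum.shape))
noise = np.fft.irfft(np.abs(spectrum) * random_phase, n=len(speech))

# Scale the noise so the mixture reaches the desired signal-to-noise ratio.
target_snr_db = 0.0
speech_rms = np.sqrt(np.mean(speech ** 2))
noise_rms = np.sqrt(np.mean(noise ** 2))
noise *= speech_rms / (noise_rms * 10 ** (target_snr_db / 20))

mixture = speech + noise
sf.write("sentence_in_ssn.wav", mixture, fs)
```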

    Reproducible radiomics through automated machine learning validated on twelve clinical applications

    Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, in a new clinical application, finding the optimal radiomics method out of the wide range of available options has to be done manually through a heuristic trial-and-error process. In this study we propose a framework for automatically optimizing the construction of radiomics workflows per application. To this end, we formulate radiomics as a modular workflow and include a large collection of common algorithms for each component. To optimize the workflow per application, we employ automated machine learning using a random search and ensembling. We evaluate our method in twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal stromal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer’s disease (0.87); and 12) head and neck cancer (0.84). We show that our framework has a competitive performance compared to human experts, outperforms a radiomics baseline, and performs similarly to or better than Bayesian optimization and more advanced ensemble approaches. Concluding, our method fully automatically optimizes the construction of radiomics workflows, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of our framework, and the code to reproduce this study.
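    As a rough illustration of the general approach described above (a random search over a modular workflow followed by ensembling of the best candidates), the sketch below uses scikit-learn on synthetic data. It is not the released framework: every component choice, parameter grid, and dataset here is a placeholder.

```python
# Hedged sketch: random search over a modular workflow, then a simple
# probability-averaging ensemble of the top candidates. Synthetic data and
# scikit-learn components stand in for the actual radiomics framework.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for a radiomics feature matrix (patients x imaging features).
X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# Modular workflow: scaling -> feature selection -> classifier.
workflow = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Random search samples both the algorithm for a component and its settings.
param_space = {
    "select__k": [5, 10, 20, 50],
    "clf": [LogisticRegression(max_iter=1000),
            SVC(probability=True),
            RandomForestClassifier(n_estimators=200)],
}
search = RandomizedSearchCV(workflow, param_space, n_iter=10,
                            scoring="roc_auc", cv=5, random_state=0)
search.fit(X_train, y_train)

# Ensemble: average the predicted probabilities of the top-ranked workflows.
results = search.cv_results_
top = np.argsort(results["rank_test_score"])[:3]
member_probas = []
for i in top:
    member = clone(workflow).set_params(**results["params"][i])
    member.fit(X_train, y_train)
    member_probas.append(member.predict_proba(X_test)[:, 1])
print("Ensemble test AUC:", roc_auc_score(y_test, np.mean(member_probas, axis=0)))
```

    Averaging the probabilities of several well-ranked workflows, rather than keeping only the single best one, is one simple way to reduce the variance of a configuration chosen by random search; the published framework may implement this differently.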

    Development of unfamiliar accent comprehension continues through adolescence

    School-age children's understanding of unfamiliar accents is not adult-like, and the age at which this ability fully matures is unknown. To address this gap, eight- to fifteen-year-old children's (n = 74) understanding of native- and non-native-accented sentences in quiet and noise was assessed. Children's performance was adult-like by eleven to twelve years for the native accent in noise and by fourteen to fifteen years for the non-native accent in quiet. However, fourteen- to fifteen-year-olds' performance was not adult-like for the non-native accent in noise. Thus, adult-like comprehension of unfamiliar accents may require greater exposure to linguistic variability or additional cognitive–linguistic growth.

    Shhh… I Need Quiet! Children’s Understanding of American, British, and Japanese-accented English Speakers

    Children’s ability to understand speakers with a wide range of dialects and accents is essential for efficient language development and communication in a global society. Here, the impact of regional dialect and foreign-accent variability on children’s speech understanding was evaluated in both quiet and noisy conditions. Five- to seven-year-old children (n = 90) and adults (n = 96) repeated sentences produced by three speakers with different accents—American English, British English, and Japanese-accented English—in quiet or noisy conditions. Adults had no difficulty understanding any speaker in quiet conditions. Their performance declined for the nonnative speaker with a moderate amount of noise; their performance only substantially declined for the British English speaker (i.e., below 93% correct) when their understanding of the American English speaker was also impeded. In contrast, although children showed accurate word recognition for the American and British English speakers in quiet conditions, they had difficulty understanding the nonnative speaker even under ideal listening conditions. With a moderate amount of noise, their perception of British English speech declined substantially and their ability to understand the nonnative speaker was particularly poor. These results suggest that although school-aged children can understand unfamiliar native dialects under ideal listening conditions, their ability to recognize words in these dialects may be highly susceptible to the influence of environmental degradation. Fully adult-like word identification for speakers with unfamiliar accents and dialects may exhibit a protracted developmental trajectory.

    The Role of Semantic Predictability in Adaptation to Nonnative Speech

    Project files comprise a one-page PDF and a presentation recording in MP4 format. Nonnative-accented speech is more difficult for native listeners to understand than native-accented speech. However, listeners can improve their ability to understand nonnative-accented speech through exposure and training. The goal of this project is to explore whether exposing native listeners to different sentence types affects listeners' adaptation to nonnative speech. Listeners will be trained on high-predictability sentences (e.g., "The color of a lemon is yellow"), low-predictability sentences (e.g., "Mom said that it is yellow"), or semantically anomalous sentences (e.g., "The green week did the page"). Previous research has demonstrated that semantic predictability impacts speech perception, but its influence on adaptation to nonnative speech is unknown. Will training with low-predictability or anomalous stimuli require listeners to focus more attention on the acoustic-phonetic properties of the accent and thus lead to greater adaptation and generalizable learning? Or will training with high-predictability stimuli provide valuable semantic information that allows listeners to create a better framework for improving perception? The data from this experiment will shed light on perceptual mechanisms, including how semantic predictability interacts with adaptation and learning. VPRI, CUR.
