949 research outputs found

    Oral History Interview: B'Alma Epps Jones

    Get PDF
    This interview is one of a series conducted concerning oral histories of African-American women who taught in West Virginia public schools. B'Alma Epps Jones began teaching at Washington High School in London, West Virginia, in the 1930s. She gives us detailed information about her family throughout the interview (including her father, who held many jobs, such as candy-maker), Christmas in her family, her husband and her married life, social activities she and her husband participated in, the deaths of her mother and husband, and a white relative in her family. She also gives us detailed information about her education, which included a one-room school, Salem College, West Virginia State, and West Virginia University. She was a member of a sorority and also tells us about her activities during high school and teachers she knew. Her employment history is another important topic, and she gives us detailed information about her teaching career, including her teaching methods, segregation in education and the effects of desegregation in schools, coming to work at a predominantly white school (Summit Park High School), race relations at that school, her job and duties at Kelley Miller School and Summit Park High School, the PTA (Parent Teachers Association), and how teaching and disciplining students have become harder for teachers in recent years. She also taught Sunday school at her church. There are numerous other discussion points as well, such as why she moved to West Virginia; her church; brief information on childhood punishments; her social life in Tennessee, the state where she was born; a brief section on the Great Depression; a newspaper article that featured her; a women's study club she was a member of; World War II; interracial dating; her shyness; equality in marriage; her self-perceptions; how being an African-American woman helped shape her life and the discrimination she has faced; and many other subjects. She ends with more information on her family and slavery.

    Differential performance of automatic speech-based depression classification across smartphones

    Full text link
    Smartphones offer exciting new prospects for measuring and monitoring mental health. While research into speech-based analysis of depression shows promise for smartphone app deployment, systematic studies investigating how smartphone device variability affects the performance of speech-based classification of health conditions like depression have yet to be reported. Differences in audio acquisition between devices introduce variability into the speech-based depression classification problem. Experiments reported herein reveal dissimilarities in depression classification performance among Android™ smartphones, particularly for spectral features. This preliminary study on speech-based depression classification shows that unwanted performance variability can be mitigated by using smartphone version-specific models, relatively channel-independent features, and/or normalization methods, producing significant improvement over a channel-agnostic feature approach.
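
    As a minimal sketch of the normalization idea mentioned above, the following assumes acoustic features are tabulated per recording alongside a device-model column and z-scores each feature within each device group; the column names and the z-score scheme are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch of per-device (channel) normalization of acoustic features.
# Assumptions: a pandas DataFrame with one row per recording, a "device_model"
# column, and numeric feature columns; z-scoring within device is illustrative.
import pandas as pd

def normalize_per_device(df: pd.DataFrame, feature_cols, device_col="device_model"):
    """Z-score each acoustic feature within each device group so classifiers
    see features on a comparable scale across phone models."""
    grouped = df.groupby(device_col)[feature_cols]
    means = grouped.transform("mean")
    stds = grouped.transform("std").replace(0, 1.0)  # guard against zero variance
    normalized = df.copy()
    normalized[feature_cols] = (df[feature_cols] - means) / stds
    return normalized

# Toy usage: two hypothetical phone models with differently scaled features
df = pd.DataFrame({
    "device_model": ["A", "A", "B", "B"],
    "mfcc_1": [1.0, 2.0, 10.0, 12.0],
    "spectral_centroid": [500.0, 520.0, 800.0, 840.0],
})
print(normalize_per_device(df, ["mfcc_1", "spectral_centroid"]))
```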

    Paradigms lost and gained: stakeholder experiences of crisis distance learning during the Covid-19 pandemic

    Get PDF
    The physical distancing requirements designed to slow the contagion of COVID-19 instigated sweeping changes to the education sector. School closures in 193 countries brought significant disruption to education and to the lives of children, parents, and teachers. This study explored the experiences of school stakeholders during this period of crisis distance learning (DL). The perspectives of participants in six discrete focus groups of pupils, parents, and teachers at a private school in Dubai, United Arab Emirates, were subjected to thematic analysis. Researchers identified three key themes: ‘a need for stakeholder support’, ‘curriculum delivery implications’, and ‘educational outcomes of crisis distance learning’. Conclusions and recommendations will be of interest to researchers, teachers, school leaders, and teacher education providers.

    Analysis of phonetic markedness and gestural effort measures for acoustic speech-based depression classification

    Full text link
    While acoustic-based links between clinical depression and abnormal speech have been established, little is yet known about which kinds of phonological content are most affected. Moreover, for automatic speech-based depression classification and depression assessment elicitation protocols, even less is understood about which phonemes or phoneme transitions provide the best basis for analysis. In this paper we analyze articulatory measures to gain further insight into how articulation is affected by depression. In our investigative experiments, by partitioning acoustic speech data based on low to high densities of specific phonetic markedness and gestural effort, we demonstrate improvements in depressed/non-depressed classification accuracy and F1 scores.
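
    A rough sketch of the partitioning idea described above: given phoneme sequences for utterances, compute the density of phonetically 'marked' content and split utterances into low- and high-density subsets. The marked-phoneme set and threshold below are placeholders, not the paper's definitions of markedness or gestural effort.

```python
# Illustrative partitioning of utterances by density of "marked" phonemes.
# The marked set and threshold are hypothetical placeholders for this sketch.
HYPOTHETICAL_MARKED_PHONEMES = {"CH", "JH", "TH", "DH", "ZH"}  # assumption

def markedness_density(phonemes):
    """Fraction of phonemes in an utterance that belong to the marked set."""
    if not phonemes:
        return 0.0
    return sum(p in HYPOTHETICAL_MARKED_PHONEMES for p in phonemes) / len(phonemes)

def partition_utterances(utterances, threshold=0.15):
    """Split utterances into low- and high-markedness subsets so each subset
    can be analyzed (or classified) separately."""
    low = [u for u in utterances if markedness_density(u) < threshold]
    high = [u for u in utterances if markedness_density(u) >= threshold]
    return low, high

# Toy usage with two phoneme sequences
low, high = partition_utterances([["DH", "AH", "K", "AE", "T"], ["S", "IY"]])
print(len(low), len(high))
```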

    Automatic elicitation compliance for short-duration speech based depression detection

    Full text link
    Detecting depression from the voice in naturalistic environments is challenging, particularly for short-duration audio recordings. This heightens the need to interpret and make optimal use of elicited speech. The rapid consonant-vowel syllable combination 'pataka' has frequently been selected as a clinical motor-speech task. However, there is significant variability in elicited recordings, which remains to be investigated. In this multi-corpus study of over 25,000 'pataka' utterances, it was discovered that speech landmark-based features were sensitive to the number of 'pataka' utterances per recording. This landmark feature sensitivity was newly exploited to automatically estimate 'pataka' count and rate, achieving root mean square errors nearly three times lower than chance level. Leveraging count-rate knowledge of the elicited speech for depression detection, results show that the estimated 'pataka' number and rate are important for normalizing evaluative 'pataka' speech data. Count- and/or rate-normalized 'pataka' models produced relative reductions in depression classification error of up to 26% compared with non-normalized models.
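
    The following is a hedged sketch of count/rate normalization for 'pataka' recordings, assuming an estimated utterance count and clip duration are already available; the feature names and the simple divide-by-rate scheme are illustrative, and the landmark-based count estimator itself is not shown.

```python
# Illustrative count/rate normalization of per-recording 'pataka' features.
# Assumptions: the count has already been estimated (e.g., from landmark
# features), and only some features are rate-sensitive.
from dataclasses import dataclass

@dataclass
class PatakaRecording:
    duration_s: float          # length of the audio clip in seconds
    est_pataka_count: float    # estimated number of 'pataka' repetitions
    features: dict             # hypothetical per-recording feature values

def rate_normalize(rec: PatakaRecording, rate_sensitive_keys) -> dict:
    """Divide rate-sensitive features by the estimated 'pataka' rate
    (repetitions per second) so recordings with different repetition
    counts become more comparable."""
    rate = max(rec.est_pataka_count / rec.duration_s, 1e-6)
    return {
        k: (v / rate if k in rate_sensitive_keys else v)
        for k, v in rec.features.items()
    }

# Toy usage with made-up feature values
rec = PatakaRecording(
    duration_s=4.0,
    est_pataka_count=10.0,
    features={"syllable_energy_var": 3.2, "mean_f0": 140.0},
)
print(rate_normalize(rec, {"syllable_energy_var"}))
```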

    Depression Classification Using n-Gram Speech Errors from Manual and Automatic Stroop Color Test Transcripts

    Full text link
    While the psychological Stroop color test has frequently been used to analyze response delays in temporal cognitive processing, minimal research has examined the incorrect/correct verbal test response pattern differences exhibited by healthy control and clinically depressed populations. Further, the development of speech error features with an emphasis on sequential Stroop test responses has been unexplored for automatic depression classification. In this study, which uses speech recorded via a smart device, an analysis of n-gram error sequence distributions shows that participants with clinical depression produce more Stroop color test errors, especially sequential errors, than healthy controls. By utilizing n-gram error features derived from multisession manual transcripts, experimentation shows that trigram error features generate up to 95% depression classification accuracy, whereas an acoustic feature baseline achieves only upwards of 75%. Moreover, n-gram error features using ASR transcripts produced up to 90% depression classification accuracy.
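
    A minimal sketch of n-gram error-sequence features from Stroop responses, under the assumption that each response is reduced to correct/error before counting length-n patterns; the binary encoding and feature layout are illustrative rather than the paper's exact feature set.

```python
# Illustrative n-gram error-sequence features from a list of Stroop responses.
# Each response is encoded as 'C' (correct) or 'E' (error); the feature vector
# is the normalized count of every possible C/E pattern of length n.
from collections import Counter
from itertools import product

def error_ngram_features(responses, n=3):
    """Return normalized counts of all length-n correct/error patterns."""
    seq = "".join("C" if r else "E" for r in responses)
    ngrams = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(ngrams)
    total = max(len(ngrams), 1)
    return {"".join(p): counts["".join(p)] / total for p in product("CE", repeat=n)}

# Example: a participant who makes two consecutive errors mid-test
responses = [True, True, False, False, True, True, True]
print(error_ngram_features(responses, n=3))
```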

    Automatic Detection of COVID-19 Based on Short-Duration Acoustic Smartphone Speech Analysis

    Full text link
    Currently, there is an increasing global need for COVID-19 screening to help reduce the rate of infection and at-risk patient workload at hospitals. Smartphone-based screening for COVID-19 along with other respiratory illnesses offers excellent potential due to its rapid-rollout remote platform, user convenience, symptom tracking, comparatively low cost, and prompt result processing timeframe. In particular, speech-based analysis embedded in smartphone app technology can measure physiological effects relevant to COVID-19 screening that are not yet digitally available at scale in the healthcare field. Using a selection of the Sonde Health COVID-19 2020 dataset, this study examines the speech of COVID-19-negative participants exhibiting mild and moderate COVID-19-like symptoms as well as that of COVID-19-positive participants with mild to moderate symptoms. Our study investigates the classification potential of acoustic features (e.g., glottal, prosodic, spectral) from short-duration speech segments (e.g., held vowel, pataka phrase, nasal phrase) for automatic COVID-19 classification using machine learning. Experimental results indicate that certain feature-task combinations can produce COVID-19 classification accuracy of up to 80% as compared with the all-acoustic feature baseline (68%). Further, with brute-forced n-best feature selection and speech task fusion, automatic COVID-19 classification accuracies of 82–86% were achieved, depending on whether the COVID-19-negative participant had mild or moderate COVID-19-like symptom severity.
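
    In the spirit of the n-best feature selection and classification described above, the sketch below sweeps feature-subset sizes with scikit-learn on synthetic data; the scoring function, classifier, and data are assumptions made purely for illustration.

```python
# Illustrative k-best acoustic-feature selection swept over subset sizes,
# scored by cross-validated classification accuracy on synthetic data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))          # 120 recordings x 40 acoustic features (toy)
y = rng.integers(0, 2, size=120)        # 0 = negative, 1 = positive (toy labels)

best_k, best_score = None, -np.inf
for k in (5, 10, 20, 40):               # sweep candidate feature-subset sizes
    pipe = make_pipeline(SelectKBest(f_classif, k=k),
                         LogisticRegression(max_iter=1000))
    score = cross_val_score(pipe, X, y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score

print(f"best k = {best_k}, mean CV accuracy = {best_score:.2f}")
```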