
    Speech and language therapy versus placebo or no intervention for speech problems in Parkinson's disease

    Parkinson's disease patients commonly suffer from speech and vocal problems including dysarthric speech, reduced loudness and loss of articulation. These symptoms increase in frequency and intensity as the disease progresses. Speech and language therapy (SLT) aims to improve the intelligibility of speech with behavioural treatment techniques or instrumental aids.

    Grammatical errors in spoken English of university students in oral communication course

    The present study examines the grammatical errors in spoken English of university students who are less proficient in English. The specific objectives of the study are to determine the types of errors and the changes in grammatical accuracy over the duration of the English for Social Purposes course focussing on oral communication. The language data were obtained from the simulated oral interactions of 42 students participating in five role-play situations during the 14-week semester. Error analysis of 126 oral interactions showed that the six common grammar errors made by the learners concern prepositions, questions, articles, plural forms of nouns, subject-verb agreement and tense. Based on Dulay, Burt and Krashen's (1982) surface structure taxonomy, the main ways in which students modify the target forms are misformation and omission, with addition of elements or misordering being less frequent. The results also showed an increase in grammatical accuracy in the students' spoken English towards the end of the course.
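    As a rough illustration of the error-analysis procedure described above, the sketch below tallies annotated errors by grammatical target and by Dulay, Burt and Krashen's surface-structure categories (omission, addition, misformation, misordering). The error records are hypothetical examples, not data from the study.

```python
from collections import Counter

# Hypothetical annotated errors: (grammatical target, surface-structure category),
# following Dulay, Burt and Krashen's (1982) taxonomy. These records are
# illustrative only, not drawn from the study's 126 oral interactions.
errors = [
    ("preposition", "omission"),
    ("article", "omission"),
    ("tense", "misformation"),
    ("subject-verb agreement", "misformation"),
    ("question", "misordering"),
    ("plural form of noun", "addition"),
    ("preposition", "misformation"),
]

by_target = Counter(target for target, _ in errors)
by_category = Counter(category for _, category in errors)

print("Errors by grammatical target:")
for target, count in by_target.most_common():
    print(f"  {target}: {count}")

print("Errors by surface-structure category:")
for category, count in by_category.most_common():
    print(f"  {category}: {count}")
```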

    Talking the Talk: The Effect of Vocalics in an Interview

    Our voices carry more than just content. People continually make assumptions about a speaker's intelligence, credibility, personality, and other characteristics based merely on the way that person talks. As the diversity of individuals in the workplace increases, so too do the differences in how those individuals talk. It is important to understand how these different ways of speaking are perceived in the workplace. More specifically, how are individuals perceived, via the interview process, before they are hired? This Honors Capstone project aims to understand the impact that an individual's vocal characteristics have on the interviewer's perception of the interviewee, and how that perception affects the hiring process. The project will offer professionals of all ages tangible advice on ways to increase their chances of receiving a job offer simply by altering aspects of their voice.

    Atypical audiovisual speech integration in infants at risk for autism

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously and the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ − audio /ba/ and the congruent visual /ba/ − audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ − audio /ga/ display than in the congruent visual /ga/ − audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays × fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays × conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays × conditions × low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
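    The statistics above come from a within-subject design crossing display type with audiovisual congruence, with looking time as the dependent variable. Below is a minimal sketch of how such a repeated-measures ANOVA could be run with statsmodels' AnovaRM; the infant count, factor labels and looking-time values are synthetic placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Synthetic, illustrative data: 20 infants, each contributing one mean looking
# time (in seconds) per cell of a 2 x 2 within-subject design
# (display: fusible vs. mismatched; pairing: congruent vs. incongruent).
rows = []
for infant in range(20):
    for display in ("fusible", "mismatched"):
        for pairing in ("congruent", "incongruent"):
            rows.append({
                "infant": infant,
                "display": display,
                "pairing": pairing,
                "looking_time": rng.normal(loc=2.0, scale=0.5),
            })
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA; the display x pairing interaction term is
# the analogue of the interactions reported in the abstract.
result = AnovaRM(df, depvar="looking_time", subject="infant",
                 within=["display", "pairing"]).fit()
print(result)
```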

    Effects of Gesture on Recollection and Description of Auditory and Visual Stimuli


    Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech

    We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
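    To make the modeling idea concrete, here is a minimal sketch of the hidden Markov view described above: dialogue acts as hidden states, a bigram dialogue grammar as transition probabilities, and per-utterance likelihoods (which the paper derives from lexical, collocational and prosodic models) as emissions, decoded with the Viterbi algorithm. All probabilities and utterance scores below are invented for illustration; they are not the paper's trained models.

```python
import math

# Hidden states: a small subset of dialogue act labels.
ACTS = ["Statement", "Question", "Backchannel"]

# Illustrative dialogue-act bigram (transition) probabilities P(act_t | act_{t-1}).
TRANS = {
    "Statement":   {"Statement": 0.6, "Question": 0.2, "Backchannel": 0.2},
    "Question":    {"Statement": 0.7, "Question": 0.1, "Backchannel": 0.2},
    "Backchannel": {"Statement": 0.7, "Question": 0.2, "Backchannel": 0.1},
}
INIT = {"Statement": 0.6, "Question": 0.3, "Backchannel": 0.1}


def viterbi(evidence_likelihoods):
    """Return the most likely dialogue act sequence given per-utterance
    likelihoods P(evidence_t | act); in the paper these would come from the
    lexical and prosodic models."""
    n = len(evidence_likelihoods)
    score = [{a: math.log(INIT[a]) + math.log(evidence_likelihoods[0][a])
              for a in ACTS}]
    back = [{}]
    for t in range(1, n):
        score.append({})
        back.append({})
        for a in ACTS:
            prev, best = max(
                ((p, score[t - 1][p] + math.log(TRANS[p][a])) for p in ACTS),
                key=lambda x: x[1])
            score[t][a] = best + math.log(evidence_likelihoods[t][a])
            back[t][a] = prev
    # Trace back the best-scoring path.
    last = max(score[-1], key=score[-1].get)
    path = [last]
    for t in range(n - 1, 0, -1):
        last = back[t][last]
        path.append(last)
    return list(reversed(path))


# Hypothetical per-utterance likelihoods for a three-utterance exchange.
utterance_evidence = [
    {"Statement": 0.5, "Question": 0.4, "Backchannel": 0.1},
    {"Statement": 0.2, "Question": 0.7, "Backchannel": 0.1},
    {"Statement": 0.1, "Question": 0.05, "Backchannel": 0.85},
]
print(viterbi(utterance_evidence))  # ['Statement', 'Question', 'Backchannel']
```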

    Oral motor deficits in speech-impaired children with autism

    Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but, at least in a subpopulation, it may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive and expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living, as well as receptive and expressive speech, were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status.