Investigation of Multiple Recognitions Used for EFL Writing in Authentic Contexts
Recognition technologies have become prevalent and are widely used for EFL learning. We investigated different recognitions used for EFL writing based on image-to-text, translated speech-to-text, and location-to-text recognition (ITR, TSTR, and LTR). A quasi-experiment was conducted for 12 weeks in a vocational high school with experimental and control groups in two stages. A pre-test, post-tests 1 and 2, questionnaires, and interviews were conducted and analyzed. Experimental learners, who wrote based on ITR and TSTR, outperformed control learners who wrote based on TSTR only. Likewise, the experimental learners, who wrote based on ITR, TSTR, and LTR, outperformed the control learners who wrote based on ITR and TSTR. In particular, LTR was beneficial for identifying controlling ideas and addressing the writing topics; ITR was beneficial for brainstorming and generating more ideas; and TSTR was beneficial for producing and transferring writing content into words. The multiple recognitions were beneficial for most EFL writers, especially for low-ability language writers. Most writers were interested in writing descriptions based on authentic-context learning. However, they complained about the low accuracy of LTR and TSTR and the difficulty of ITR texts when writing. Accordingly, an LTR database covering various categories of places, the generation of ITR texts matched to learners' language abilities, and higher TSTR accuracy should be carefully considered when applying multiple recognitions for EFL writing.
The contribution of verbal working memory to deaf children's oral and written production
Arfé, Barbara; Rossi, Cristina; Sicoli, Silvia
Word Importance Modeling to Enhance Captions Generated by Automatic Speech Recognition for Deaf and Hard of Hearing Users
People who are deaf or hard-of-hearing (DHH) benefit from sign-language interpreting or live captioning (with a human transcriptionist) to access spoken information. However, such services are not legally required, affordable, or available in many settings, e.g., impromptu small-group meetings in the workplace or online video content that has not been professionally captioned. As Automatic Speech Recognition (ASR) systems improve in accuracy and speed, it is natural to investigate the use of these systems to assist DHH users in a variety of tasks. However, ASR systems are still not perfect, especially in realistic conversational settings, which raises issues of trust in and acceptance of these systems within the DHH community. To overcome these challenges, our work focuses on: (1) building metrics for accurately evaluating the quality of automatic captioning systems, and (2) designing interventions for improving the usability of captions for DHH users.
The first part of this dissertation describes our research on methods for identifying words that are important for understanding the meaning of a conversational turn within transcripts of spoken dialogue. Such knowledge about the relative importance of words in spoken messages can be used in evaluating ASR systems (in part 2 of this dissertation) or creating new applications for DHH users of captioned video (in part 3 of this dissertation). We found that models which consider both the acoustic properties of spoken words as well as text-based features (e.g., pre-trained word embeddings) are more effective at predicting the semantic importance of a word than models that utilize only one of these types of features.
The second part of this dissertation describes studies to understand DHH users' perception of the quality of ASR-generated captions; the goal of this work was to validate the design of automatic metrics for evaluating captions in real-time applications for these users. Such a metric could facilitate comparison of various ASR systems, for determining the suitability of specific ASR systems for supporting communication for DHH users. We designed experimental studies to elicit feedback on the quality of captions from DHH users, and we developed and evaluated automatic metrics for predicting the usability of automatically generated captions for these users. We found that metrics that consider the importance of each word in a text are more effective at predicting the usability of imperfect text captions than the traditional Word Error Rate (WER) metric.
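The contrast the abstract draws between WER and importance-aware metrics can be sketched in code. The functions below are an illustrative assumption, not the dissertation's actual metric: `wer` is the standard edit-distance error rate, and `weighted_wer` is a simplified importance-weighted variant in which missing an important word costs more than missing a filler word.

```python
# Illustrative sketch only: the weighting scheme and function names are
# assumptions, not the metric developed in the dissertation.

def wer(ref, hyp):
    """Standard Word Error Rate: word-level edit distance / reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / len(r)

def weighted_wer(ref, hyp, importance):
    """Importance-weighted error: errors on important words cost more.
    `importance` maps reference words to weights in [0, 1]."""
    r, h = ref.split(), hyp.split()
    hyp_words = set(h)
    # Penalize each reference word absent from the hypothesis by its weight.
    errors = sum(importance.get(w, 0.5) for w in r if w not in hyp_words)
    total = sum(importance.get(w, 0.5) for w in r)
    return errors / total if total else 0.0

# Two caption errors with identical WER but very different impact:
imp = {"the": 0.1, "meeting": 0.8, "starts": 0.5, "at": 0.1, "noon": 1.0}
ref = "the meeting starts at noon"
print(wer(ref, "the meeting starts at night"))            # 0.2
print(wer(ref, "meeting starts at noon"))                 # 0.2
print(weighted_wer(ref, "the meeting starts at night", imp))  # 0.4
print(weighted_wer(ref, "meeting starts at noon", imp))       # 0.04
```

The usage example shows the motivation: dropping "noon" and dropping "the" are equally bad under WER, but an importance-weighted metric scores the loss of "noon" as far more damaging, which matches the dissertation's finding that importance-aware metrics track caption usability better.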
The final part of this dissertation describes research on importance-based highlighting of words in captions as a way to enhance the usability of captions for DHH users. Similar to highlighting in static texts (e.g., textbooks or electronic documents), highlighting in captions involves changing the appearance of some text in captions so that readers can quickly attend to the most important pieces of information. Despite the known benefits of highlighting in static texts, the usefulness of highlighting in captions for DHH users is largely unexplored. For this reason, we conducted experimental studies with DHH participants to understand the benefits of importance-based highlighting in captions and their preferences among different design configurations for highlighting. We found that DHH users subjectively preferred highlighting in captions, and they reported higher readability and understandability scores and lower task-load scores when viewing videos with captions containing highlighting compared to videos without highlighting. Further, in partial contrast to recommendations in prior research on highlighting in static texts (which had not been based on experimental studies with DHH users), we found that DHH participants preferred boldface, word-level, non-repeating highlighting in captions.
Mobile-assisted language learning [Revised and updated version]
Mobile-assisted language learning (MALL) is the use of smartphones and other mobile technologies in language learning, especially in situations where portability and situated learning offer specific advantages. A key attraction of mobile learning is the ubiquity of mobile phones. Typical applications can support learners in reading, listening, speaking and writing in the target language, either individually or in collaboration with one another. Increasingly, MALL applications relate language learning to a person's physical context when mobile, primarily to provide access to location-specific language material or to enable learners to capture aspects of language use in situ and share it with others. Mobile learning can be formal or informal, and mobile devices may form a bridge connecting in-class and out-of-class learning. When learning takes place outside the classroom, it is often beyond the reach and control of the teacher. This can be perceived as a threat, but it is also an opportunity to revitalize and rethink current approaches to teaching and learning. Mobile learning appeals to a wide range of people for a variety of reasons. It may exclude some learners but it is often a mechanism for inclusion. It is likely that the next generation of mobile learning will be more ubiquitous, which means that there will be smart systems everywhere for digital learning. Mobile learning is proving its potential to address authentic learner needs at the point at which they arise, and to deliver more flexible models of language learning.
Intelligent Learning Systems for Inclusive Education: A Focus on Dyslexia
Undergraduate thesis submitted to the Department of Computer Science and Information Systems, Ashesi University, in partial fulfillment of the Bachelor of Science degree in Computer Science, May 2022.
As children grow, they learn how to read and write. Reading involves recognizing, distinguishing, and understanding words and characters to make sense of a text. By the age of 7, a child should be able to read and understand simple texts. For some people, this is not the case: they struggle to read and write. Reading is fundamental because it is needed everywhere; for instance, a person needs to read road signs to know their current location, or the operating manual for a new device they have bought. Some people struggle to read and write because of a learning disability called dyslexia, which makes them unable to identify words and make sense of them. Some people overcome dyslexia by third grade, but others struggle even in university, and students fall behind academically because of this learning disability. This thesis undertakes research to identify what students with dyslexia go through and which strategies work best to help them study effectively and at a reasonable pace.
Robustness of Cognitive Performance to Irrelevant Speech Effects
This thesis investigates the impact of the babble-speech effect on cognitive processes such as reading, understanding, and remembering. These abilities were evaluated via comprehension questions. The project is related to project CoEn, but with participants who are 20+ years old and study at our university with English as their second language. The main questions of the research are what the impact of noise in a language other than the mother tongue would be, and what the hardest challenge during cognitive tasks would be.