    LINGUISTIC DIVERSITY, EQUITY AND HEALTH: ‘DO YOU SPEAK COVID-19?’

    With many languages of the world becoming marginalized, discriminated against, and at times even facing extinction, the linguistic landscape of medical and health-care contexts faces many challenges. Most prominently, when medical and health-care staff and their patients do not speak the same language, health-care disparities arise. With COVID-19 sweeping through the world, language barriers multiplied: access to information became a privilege conditional on competence in English or another major world language, people’s perception of the pandemic became confused, and health care was adversely affected. This paper reviews research on the impact of language obstacles on patients and the health care they receive, and on the impact of linguistic inequities on health care during the COVID-19 pandemic. Emphasizing the indispensable need for linguistic rights, the paper calls for a redefinition of health in linguistic terms, proposing the term “linguistic health” to address the relevant issues. Strategies for multilingual health care, and medical responsiveness coupled with linguistic responsiveness, are deemed essential prerequisites for an all-inclusive global health culture.

    A Review of Deep Learning Techniques for Speech Processing

    The field of speech processing has undergone a transformative shift with the advent of deep learning. The use of multiple processing layers has enabled the creation of models capable of extracting intricate features from speech data. This development has paved the way for unparalleled advances in automatic speech recognition, text-to-speech synthesis, and emotion recognition, propelling the performance of these tasks to unprecedented heights. The power of deep learning techniques has opened up new avenues for research and innovation in the field of speech processing, with far-reaching implications for a range of industries and applications. This review paper provides a comprehensive overview of the key deep learning models and their applications in speech-processing tasks. We begin by tracing the evolution of speech processing research, from early approaches, such as MFCC features and HMMs, to more recent advances in deep learning architectures, such as CNNs, RNNs, transformers, conformers, and diffusion models. We categorize the approaches and compare their strengths and weaknesses for solving speech-processing tasks. Furthermore, we extensively cover the speech-processing tasks, datasets, and benchmarks used in the literature and describe how different deep-learning networks have been utilized to tackle them. Additionally, we discuss the challenges and future directions of deep learning in speech processing, including the need for more parameter-efficient, interpretable models and the potential of deep learning for multimodal speech processing. By examining the field's evolution, comparing and contrasting different approaches, and highlighting future directions and challenges, we hope to inspire further research in this exciting and rapidly advancing field.
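
    As a concrete anchor for the "early approaches" the review mentions, the following is a minimal sketch (not taken from the paper) of the classical MFCC front end that pre-deep-learning MFCC+HMM pipelines relied on; it assumes librosa is installed and uses a hypothetical input file speech.wav:

        import librosa

        # Load a 16 kHz mono waveform (file name is hypothetical)
        y, sr = librosa.load("speech.wav", sr=16000)

        # 13 Mel-frequency cepstral coefficients per analysis frame:
        # the classical hand-crafted features used before deep learning
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        print(mfcc.shape)  # (13, n_frames)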

    Automatic Diagnosis of Distortion Type of Arabic /r/ Phoneme Using Feed Forward Neural Network

    This paper addresses the recognition not of normally formed speech but of distorted speech, examining the ability of feed-forward Artificial Neural Networks (ANNs) to recognize speech flaws. As a case study, we take distortion of the Arabic /r/ phoneme, which is somewhat common among native speakers. To do this, the r-Distype program was developed as a script for the Praat speech-processing software tool. r-Distype automatically builds a feed-forward ANN that tests a spoken word containing the /r/ phoneme to detect any possible type of distortion. Multiple feed-forward ANNs with different architectures were developed, and their performance is reported. The training and testing data for the developed ANNs are sets of spoken Arabic words containing the /r/ phoneme in different positions, so that they cover all distortion types of the Arabic /r/ phoneme; the words were produced by speakers of different genders and ages. The results obtained from the developed ANNs were used to draw conclusions about automating the detection of pronunciation problems in general. Such a computerised system would be a useful tool for diagnosing speech flaws and a great help in speech therapy; the idea may also open a new research subarea of speech recognition, namely automatic speech therapy. Keywords: distortion, Arabic /r/ phoneme, articulation disorders, Artificial Neural Network, Praat
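
    The paper's networks are implemented as Praat scripts (r-Distype); purely as an illustration, a comparable feed-forward classifier can be sketched in PyTorch. The feature dimension and the number of distortion classes below are assumptions, not the paper's settings:

        import torch
        import torch.nn as nn

        NUM_FEATURES = 13  # acoustic features per spoken word (assumed)
        NUM_CLASSES = 4    # e.g., normal + three /r/ distortion types (assumed)

        # A small feed-forward ANN: one hidden layer, one output per class
        model = nn.Sequential(
            nn.Linear(NUM_FEATURES, 32),
            nn.ReLU(),
            nn.Linear(32, NUM_CLASSES),
        )

        x = torch.randn(8, NUM_FEATURES)  # a batch of 8 feature vectors
        pred = model(x).argmax(dim=1)     # predicted distortion type per word
        print(pred)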

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate its latest developments and applications across these disciplines. However, the literature lacks a survey of deep-learning applications across all potential sectors. This paper therefore extensively investigates the potential applications of deep learning across all major fields of study, together with the associated benefits and challenges. As evidenced in the literature, DL achieves high accuracy in prediction and analysis, making it a powerful computational tool, and it can learn representations and optimize itself, making it effective for processing data without hand-crafted features. At the same time, deep learning requires massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, neurons shared across all tasks combined with neurons specialized for particular tasks are necessary, as sketched below.

    Comment: 64 pages, 3 figures, 3 tables
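
    The shared-plus-specialized-neuron idea for multimodal and multi-task learning can be made concrete with a minimal PyTorch sketch; the layer sizes and the two tasks are assumptions for illustration, not the paper's design:

        import torch
        import torch.nn as nn

        class SharedTrunkNet(nn.Module):
            def __init__(self, in_dim=64, hidden=128, n_a=10, n_b=5):
                super().__init__()
                # Shared neurons: used by every task
                self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
                # Specialized neurons: one head per task
                self.head_a = nn.Linear(hidden, n_a)
                self.head_b = nn.Linear(hidden, n_b)

            def forward(self, x):
                h = self.trunk(x)             # shared representation
                return self.head_a(h), self.head_b(h)

        net = SharedTrunkNet()
        out_a, out_b = net(torch.randn(4, 64))  # two task outputs, one input
        print(out_a.shape, out_b.shape)          # (4, 10) and (4, 5)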

    A Detailed Study on Aggregation Methods used in Natural Language Interface to Databases (NLIDB)

    Historically, databases have been the most crucial component in the study of information systems, and they constitute an essential part of all information-management systems. However, querying them is complicated, which restricts the number of potential users, particularly non-expert users who must understand the database structure to submit queries. A natural language interface (NLI), the simplest method for retrieving information, is one option for interacting with a database. The transformation of a natural-language query into a Structured Query Language (SQL) query over a database is known as a Natural Language Interface to Databases (NLIDB). This study examines how NLIDB systems handle queries involving various aggregations, with aggregate functions, a GROUP BY clause, and a HAVING clause, and carefully reviews the numerous systematic aggregation approaches utilized in NLIDB. The review provides extensive information about the many methods, including query-based, pattern-based, general, keyword-based, and grammar-based NLIDB systems, for extracting data from a generic module for use in systems that support query execution with aggregations.
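
    To make the aggregation problem concrete, here is a toy keyword-based mapping in Python, in the spirit of the keyword-based systems surveyed; it is not any surveyed system, and the table and column names (orders, region, amount) and the threshold are hypothetical:

        AGG_KEYWORDS = {"average": "AVG", "total": "SUM", "count": "COUNT",
                        "maximum": "MAX", "minimum": "MIN"}

        def build_query(question: str) -> str:
            q = question.lower()
            # Pick the SQL aggregate function from a keyword cue
            func = next((sql for word, sql in AGG_KEYWORDS.items()
                         if word in q), "COUNT")
            sql = f"SELECT region, {func}(amount) FROM orders GROUP BY region"
            # Crude HAVING trigger (threshold assumed for illustration)
            if "more than 1000" in q:
                sql += f" HAVING {func}(amount) > 1000"
            return sql

        print(build_query("Show the average amount per region, more than 1000"))
        # SELECT region, AVG(amount) FROM orders GROUP BY region
        #     HAVING AVG(amount) > 1000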

    Multimodal Based Audio-Visual Speech Recognition for Hard-of-Hearing: State of the Art Techniques and Challenges

    Multimodal Integration (MI) is the study of merging the knowledge acquired by the nervous system through sensory modalities such as speech, vision, touch, and gesture. Applications of MI span Audio-Visual Speech Recognition (AVSR), Sign Language Recognition (SLR), Emotion Recognition (ER), Biometrics Applications (BMA), Affect Recognition (AR), Multimedia Retrieval (MR), etc. Fusions of modalities such as hand gesture with facial expression, or lip with hand position, are the sensory combinations mainly used in developing multimodal systems for the hearing impaired. This paper provides an overview of the multimodal systems in the literature for hearing-impaired studies, and also discusses some studies on hearing-impaired acoustic analysis. It is observed that far fewer algorithms have been developed for hearing-impaired AVSR than for normal hearing. The study of audio-visual speech recognition systems for the hearing impaired is therefore in high demand for people trying to communicate in natively spoken languages. The paper also highlights state-of-the-art techniques in AVSR and the challenges researchers face in developing AVSR systems.
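
    As a sketch of the audio-visual fusion these systems perform, the following minimal late-fusion model in PyTorch encodes each modality separately and concatenates the embeddings before classification; all dimensions and the vocabulary size are assumptions, and real AVSR systems use far richer encoders:

        import torch
        import torch.nn as nn

        class LateFusionAVSR(nn.Module):
            def __init__(self, audio_dim=40, visual_dim=512, n_words=100):
                super().__init__()
                self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
                self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 64), nn.ReLU())
                self.classifier = nn.Linear(128, n_words)  # fused embedding -> word

            def forward(self, audio, visual):
                # Late fusion: concatenate per-modality embeddings
                fused = torch.cat([self.audio_enc(audio),
                                   self.visual_enc(visual)], dim=-1)
                return self.classifier(fused)

        model = LateFusionAVSR()
        logits = model(torch.randn(2, 40), torch.randn(2, 512))
        print(logits.shape)  # torch.Size([2, 100])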