3 research outputs found

    A Cross-Lingual Mobile Medical Communication System Prototype for Foreigners and Subjects with Speech, Hearing, and Mental Disabilities Based on Pictograms

    People with speech, hearing, or mental impairments require special communication assistance, especially in medical settings. Automatic solutions for speech recognition and text-to-speech synthesis are poor fits for communication in the medical domain because they depend on error-prone statistical models, and systems that rely on manual text input are insufficient. Recently introduced systems for automatic sign language recognition likewise depend on statistical models as well as on image and gesture quality; such systems remain in early development and are based mostly on minimal hand gestures unsuitable for medical purposes. Furthermore, solutions that rely on the Internet cannot be used after disasters that require humanitarian aid. We propose a high-speed, intuitive, Internet-free, voice-free, and text-free tool suited for emergency medical communication. Our solution is a pictogram-based application that provides easy communication for individuals who have speech or hearing impairments or mental health issues that impair communication, as well as for foreigners who do not speak the local language. It supports and clarifies communication through intuitive icons and interactive symbols that are easy to use on a mobile device. Such pictogram-based communication can be quite effective and can ultimately make people's lives happier, easier, and safer.
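    The core idea of such a tool can be sketched as an offline lookup from language-independent pictogram IDs to localized phrases, so that no network connection, speech, or typing is needed. This is an illustrative sketch only: the pictogram names, languages, and phrases below are invented and are not taken from the prototype described in the abstract.

    ```python
    # Hypothetical offline pictogram-to-phrase table; language-independent
    # pictogram IDs map to localized phrases. All entries are invented.
    PHRASES = {
        "pain_head": {
            "en": "I have a headache",
            "de": "Ich habe Kopfschmerzen",
        },
        "allergy_penicillin": {
            "en": "I am allergic to penicillin",
            "de": "Ich bin allergisch gegen Penicillin",
        },
    }

    def compose(pictogram_ids, lang):
        """Render a sequence of tapped pictograms as sentences in `lang`."""
        return ". ".join(PHRASES[p][lang] for p in pictogram_ids) + "."

    print(compose(["pain_head", "allergy_penicillin"], "en"))
    # I have a headache. I am allergic to penicillin.
    ```

    Because the table ships with the app, the same tap sequence can be rendered in any supported language, which is what makes the approach cross-lingual and Internet-free.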

    The Possibilities of Smart Clothing in Adult Speech Therapy: Speech Therapists' Visions for the Future

    The potential of technology in healthcare has been closely explored in recent years, and increasingly innovative technology-assisted rehabilitation methods for various customer groups are being developed. However, the possibilities of smart clothing in adult speech rehabilitation have not been previously studied. The purpose of this study was to discover speech therapists' visions of the possibilities of smart clothing in adult rehabilitation. We organized an ideation workshop in December 2020 with four speech therapists who had worked in adult rehabilitation for at least five years. The workshop was held online on the Zoom platform. In the workshop we presented three questions to the speech therapists: 1) Which adult speech therapy clients could benefit from smart clothing? 2) What could smart clothing be used for in speech therapy rehabilitation for adults? and 3) How could smart clothing be used in speech therapy rehabilitation for adults? Qualitative data from this research were analyzed by thematic analysis. The main results were that patients with dysphagia and patients with voice disorders were seen as the groups with the greatest potential to use smart clothing, and continuous registration of various physiological functions of voice and swallowing was voted the most usable application of smart clothing. The most discussed topics were using smart clothing to monitor rehabilitation and using the clothing to activate and motivate the client by giving feedback. Finally, body movements, gestures, and touch were seen as the easiest ways to control smart clothing.
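    The "continuous registration plus motivating feedback" idea the therapists envisioned could be sketched as a monitor over a garment sensor stream that counts threshold crossings (e.g. swallow events) and emits feedback when a target is reached. This is a hedged illustration, not anything from the study: the sensor values, threshold, and message are invented.

    ```python
    # Illustrative monitor over a hypothetical swallow-sensor stream:
    # count rising threshold crossings and yield a motivating message
    # each time the target count is reached. All numbers are invented.
    def feedback_events(samples, threshold, target_count):
        """Yield a feedback message for every `target_count` crossings."""
        count, above = 0, False
        for s in samples:
            if s >= threshold and not above:
                count += 1  # rising edge = one registered event
                if count % target_count == 0:
                    yield f"Great job: {count} swallows registered!"
            above = s >= threshold

    stream = [0, 4, 0, 4, 0, 4]          # synthetic sensor readings
    print(list(feedback_events(stream, threshold=3, target_count=3)))
    # ['Great job: 3 swallows registered!']
    ```

    The same loop structure would work for voice-use monitoring; only the sensor channel and threshold would change.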

    An assistive interpreter tool using glove-based hand gesture recognition

    An assistive tool (InterpreterGlove) for hearing- and speech-impaired people is created, enabling them to communicate easily with the non-disabled using hand gestures and sign language. An integrated hardware and software solution is built to improve their standard of living, consisting of sensor-network-based motion-capture gloves, a low-level signal processing unit, and a mobile application for high-level natural language processing. This paper introduces the overall system architecture and describes our automatic sign language interpreter software, which processes the gesture descriptor stream from the motion-capture gloves, produces understandable text, and reads it out as audible speech. The main logic of our automatic sign language interpreter consists of two algorithms: sign descriptor stream segmentation and text auto-correction. The software architecture of this time-sensitive, complex application and the semantics of the developed hand gesture descriptor are described. We also present how feedback from beta testers in the deaf community influenced our work and achievements.
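    The two algorithms named in the abstract can be sketched in simplified form: segmentation that emits a symbol once the per-frame descriptor has been stable for a few consecutive frames, and auto-correction that snaps the segmented string to the closest word in a vocabulary. This is an assumption-laden sketch, not the InterpreterGlove implementation: the hold length, the letter-valued descriptors, and the tiny vocabulary are invented.

    ```python
    # Hedged sketch of descriptor-stream segmentation and auto-correction.
    import difflib

    def segment(stream, hold=3):
        """Emit a symbol when the same descriptor repeats `hold` frames in a row."""
        out, prev, run = [], None, 0
        for d in stream:
            run = run + 1 if d == prev else 1
            prev = d
            if run == hold:      # stable long enough: treat as a held sign
                out.append(d)
        return "".join(out)

    def autocorrect(word, vocab):
        """Snap a noisy segmented word to the closest known word, if any."""
        match = difflib.get_close_matches(word, vocab, n=1, cutoff=0.6)
        return match[0] if match else word

    frames = list("hhheeelllppp")          # noisy frame-by-frame descriptors
    print(segment(frames))                 # help
    print(autocorrect("halp", ["help", "hello"]))  # help
    ```

    Holding each sign for a few frames is what separates deliberate gestures from transitions between them; the auto-correction step then absorbs residual recognition errors, mirroring the two-stage design the paper describes.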