
    Design and evaluation of a multimodal assistive technology using tongue commands, head movements, and speech recognition for people with tetraplegia

    People with high-level (C1-C4) spinal cord injury (SCI) cannot use their limbs to perform activities of daily living without assistance. Current assistive technologies (ATs) use remaining capabilities (tongue, muscle, brain, speech, sniffing) as input methods to help them control devices such as computers and smartphones. However, these ATs are much less efficient than the gold-standard interfaces (mouse and keyboard, touch interfaces, joysticks, and so forth) used in everyday life. Therefore, in this work, a novel multimodal assistive system is designed to provide better and more intuitive accessibility. The multimodal Tongue Drive System (mTDS) utilizes three key remaining abilities (speech, tongue, and head movements) to help people with tetraplegia control their environments, such as accessing computers and smartphones or driving wheelchairs. Tongue commands serve as discrete, switch-like inputs; head movements serve as proportional, continuous inputs; and speech recognition enables typing text faster than any keyboard emulation, so that the system as a whole emulates a combined mouse-keyboard interface for computers and smartphones. Novel signal processing algorithms are developed and implemented in the wearable unit to provide universal, wireless access to multiple devices from the mTDS. Non-disabled subjects participated in multiple studies to assess the efficacy of the mTDS against the gold standards, and people with tetraplegia participated to evaluate how readily the technology is learned. Significant improvements in accuracy and speed are observed across different computer-access and wheelchair-mobility tasks. Thus, with sufficient learning of the mTDS, it is feasible to narrow the performance gap between non-disabled people and people with tetraplegia relative to existing ATs. Ph.D.
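    The mTDS signal-processing algorithms themselves are not given in the abstract, but the stated division of labour (tongue commands as discrete, switch-like events; head movements as proportional cursor control) can be sketched. The Python toy below is a minimal illustration under that assumption; the command names, dead zone, and gain are invented, not taken from the paper.

```python
# Hypothetical sketch (not the paper's algorithm): tongue commands act as
# discrete switch events, head movements as proportional cursor velocity.

from dataclasses import dataclass

@dataclass
class HeadPose:
    pitch_deg: float  # nod forward/back
    yaw_deg: float    # turn left/right

def head_to_cursor_velocity(pose: HeadPose,
                            dead_zone_deg: float = 3.0,
                            gain: float = 8.0) -> tuple:
    """Map head tilt to proportional cursor velocity (px/s); a dead zone
    keeps small involuntary movements from moving the cursor."""
    def axis(angle_deg: float) -> float:
        if abs(angle_deg) < dead_zone_deg:
            return 0.0
        sign = 1.0 if angle_deg > 0 else -1.0
        return sign * (abs(angle_deg) - dead_zone_deg) * gain
    return axis(pose.yaw_deg), axis(pose.pitch_deg)

# Discrete tongue commands behave like switch/button presses.
TONGUE_COMMANDS = {"left_touch": "LEFT_CLICK", "right_touch": "RIGHT_CLICK"}

if __name__ == "__main__":
    vx, vy = head_to_cursor_velocity(HeadPose(pitch_deg=-6.0, yaw_deg=10.0))
    print(f"cursor velocity: ({vx:.0f}, {vy:.0f}) px/s")   # (56, -24)
    print("tongue 'left_touch' ->", TONGUE_COMMANDS["left_touch"])
```

    The dead zone is a common design choice in proportional head control, since resting head posture is never perfectly still; the 3-degree value here is purely illustrative.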

    Multimodal interface for an intelligent wheelchair

    Integrated master's dissertation. Informatics and Computing Engineering. Universidade do Porto. Faculdade de Engenharia. 201

    Gyro-Accelerometer based Control of an Intelligent Wheelchair

    This paper presents a hands-free interface for controlling an electric wheelchair with head gestures, aimed at people with severe disabilities (e.g. multiple sclerosis or quadriplegia) and elderly users. The patient's head acceleration and rotation rate are used to control the intelligent wheelchair. Head gestures are detected with the accelerometer and gyroscope sensors embedded on a single MPU6050 board. The MEMS sensor outputs are fused with a Kalman filter to build a highly accurate orientation sensor. The system uses an Arduino Mega microcontroller to perform data processing, sensor fusion, and joystick emulation to control the intelligent wheelchair, and HC-SR04 ultrasonic sensors to provide safe navigation. The wheelchair can be controlled in two modes. In the first mode, the wheelchair is controlled by the usual joystick. In the second mode, the patient uses head motion to control the wheelchair. The principal advantage of the proposed approach is that switching between the two control modes is smooth, straightforward, and transparent to the user.
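    The paper's filter parameters and the MPU6050/Arduino code are not included in the abstract, but the core fusion idea (the gyro rate drives the prediction, the accelerometer-derived tilt corrects it) can be sketched with a one-state Kalman-style filter on simulated data. The noise values and simulated head motion below are illustrative assumptions, not figures from the paper.

```python
# One-state Kalman-style accel/gyro fusion, assuming illustrative noise
# parameters; register access and joystick emulation are omitted.

import math

class TiltKalman1D:
    def __init__(self, q_angle=0.001, r_measure=0.03):
        self.angle = 0.0      # filtered tilt estimate (degrees)
        self.p = 1.0          # estimate variance
        self.q = q_angle      # process noise (gyro drift)
        self.r = r_measure    # measurement noise (accelerometer)

    def update(self, gyro_rate_dps: float, accel_angle_deg: float, dt: float) -> float:
        # Predict: integrate the gyro rate over the time step.
        self.angle += gyro_rate_dps * dt
        self.p += self.q * dt
        # Correct: blend in the accelerometer-derived angle.
        k = self.p / (self.p + self.r)          # Kalman gain
        self.angle += k * (accel_angle_deg - self.angle)
        self.p *= (1.0 - k)
        return self.angle

if __name__ == "__main__":
    kf, dt = TiltKalman1D(), 0.01
    est = 0.0
    for step in range(300):                          # simulate a 15-degree head nod
        t = step * dt
        true_angle = 15.0 * min(t, 1.0)
        gyro = (15.0 if t < 1.0 else 0.0) + 0.5      # gyro rate with constant bias
        accel = true_angle + 2.0 * math.sin(step)    # noisy accelerometer angle
        est = kf.update(gyro, accel, dt)
    print(f"estimated tilt: {est:.1f} deg (true 15.0)")
```

    The filter trusts the gyro over short horizons (smooth, low-lag tracking) while the accelerometer correction removes the accumulated gyro drift, which is the usual motivation for fusing these two MEMS sensors.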

    A Flexible, Open, Multimodal System of Computer Control Based on Infrared Light

    In this paper, a system architecture for computer control that can be adapted to an individual's motor capacity and preferences is presented. The system uses two different transducers based on the emission and reflection of infrared light. These detect voluntary blinks, winks, saccadic or head movements, and sequences of them. Transducer selection and operational mode can be configured. The signal provided by the transducer is conditioned, processed, and sent to a computer by external hardware. The computer runs a row-column scanned, switch-controlled Virtual Keyboard (VK), which sends commands to the operating system to control the computer, making it possible to run any application such as a web browser. The main system characteristics are flexibility and relatively low-cost hardware.
    Junta de Andalucía p08-TIC-363
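    Row-column scanning is the selection mechanism named above: rows are highlighted in turn, a switch event (here, a detected blink or wink) latches a row, then the columns of that row are scanned until a second event picks the key. The sketch below shows only that selection logic; the layout and tick-based timing are assumptions made for illustration.

```python
# Toy row-column scanning selection, assuming a fixed tick per highlight.

KEYS = [["A", "B", "C", "D"],
        ["E", "F", "G", "H"],
        ["I", "J", "K", "L"]]

def scan_select(switch_events: list, dwell_ticks: int = 1) -> str:
    """switch_events: scan ticks at which the switch fired.
    The first event latches a row; column scanning then restarts,
    and the second event latches a column."""
    first, second = switch_events[0], switch_events[1]
    row = (first // dwell_ticks) % len(KEYS)
    col = ((second - first - 1) // dwell_ticks) % len(KEYS[0])
    return KEYS[row][col]

if __name__ == "__main__":
    # Switch fires at tick 1 (row "E F G H") and two ticks later (column 1).
    print(scan_select([1, 3]))  # -> "F"
```

    Scanning trades speed for minimal input demands: a single reliable switch signal, of exactly the kind the infrared transducers provide, is enough to type.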

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles from human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.
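    One of the directions named above, hybrid BCI architectures with confidence measures, can be pictured abstractly: fuse the EEG classifier's output with a second input channel and only issue a command when the combined confidence is high. The sketch below is a naive illustration of that idea, not an implementation from the review; the probabilities, the multiplicative fusion rule, and the threshold are all invented.

```python
# Toy hybrid-BCI fusion with an abstention ("reject") option.

def fuse(eeg_probs: dict, secondary_probs: dict, threshold: float = 0.7):
    """Multiply per-command probabilities from the two channels (a naive
    independence assumption), renormalise, and gate on confidence."""
    combined = {c: eeg_probs[c] * secondary_probs[c] for c in eeg_probs}
    total = sum(combined.values())
    best = max(combined, key=combined.get)
    confidence = combined[best] / total if total else 0.0
    return best if confidence >= threshold else None  # None = abstain

if __name__ == "__main__":
    eeg = {"left": 0.55, "right": 0.45}      # weak EEG evidence alone
    switch = {"left": 0.9, "right": 0.1}     # confident secondary AT input
    print(fuse(eeg, switch))                 # -> "left"
```

    Abstaining when confidence is low is one way a hybrid system can stay usable despite the limited reliability of current EEG decoding.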

    The development of a SmartAbility Framework to enhance multimodal interaction for people with reduced physical ability.

    Assistive technologies are an evolving market due to the number of people worldwide who have conditions resulting in reduced physical ability (also known as disability). Various classification schemes exist to categorise disabilities, as well as government legislation to ensure equal opportunities within the community. However, there is a notable absence of a process to map physical conditions to technologies in order to improve Quality of Life for this user group. This research is characterised primarily under the Human Computer Interaction (HCI) domain, although aspects of Systems of Systems (SoS) and Assistive Technologies have been applied. The thesis focuses on examples of multimodal interactions leading to the development of a SmartAbility Framework that aims to assist people with reduced physical ability by utilising their abilities to suggest interaction mediums and technologies. The framework was developed through a predominantly Interpretivism methodology approach consisting of a variety of research methods, including state-of-the-art literature reviews, requirements elicitation, feasibility trials and controlled usability evaluations to compare multimodal interactions. The developed framework was subsequently validated through the involvement of the intended user community and domain experts, and supported by a concept demonstrator incorporating the SmartATRS case study. The aim and objectives of this research were achieved through the following key outputs and findings:
    - A comprehensive state-of-the-art literature review focussing on physical conditions and their classifications, HCI concepts relevant to multimodal interaction (Ergonomics of human-system interaction, Design For All and Universal Design), SoS definition and analysis techniques involving System of Interest (SoI), and currently-available products with potential uses as assistive technologies.
    - A two-phased requirements elicitation process applying surveys and semi-structured interviews to elicit the daily challenges for people with reduced physical ability, their interests in technology and the requirements for assistive technologies obtained through collaboration with a manufacturer.
    - Findings from feasibility trials involving monitoring brain activity using an electroencephalograph (EEG), tracking facial features through Tracking Learning Detection (TLD), applying iOS Switch Control to track head movements and investigating smartglasses.
    - Results of controlled usability evaluations comparing multimodal interactions with the technologies deemed to be feasible from the trials. The user community of people with reduced physical ability were involved during the process to maximise the usefulness of the data obtained.
    - An initial SmartDisability Framework developed from the results and observations ascertained through requirements elicitation, feasibility trials and controlled usability evaluations, which was validated through an approach of semi-structured interviews and a focus group.
    - An enhanced SmartAbility Framework to address the SmartDisability validation feedback by reducing the number of elements, using simplified and positive terminology and incorporating concepts from Quality Function Deployment (QFD).
    - A final consolidated version of the SmartAbility Framework that has been validated through semi-structured interviews with additional domain experts and addressed all key suggestions.
The results demonstrated that it is possible to map technologies to people with physical conditions by considering the abilities that they can perform independently, without external support or the exertion of significant physical effort. This led to the realisation that the term ‘disability’ has a negative connotation that can be avoided through use of the phrase ‘reduced physical ability’. It is important to promote this rationale to the wider community through exploitation of the framework. This requires a SmartAbility smartphone application to be developed that allows users to input their abilities so that recommendations of interaction mediums and technologies can be provided.
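    The ability-to-technology mapping described above can be pictured as a small lookup pipeline from reported abilities, to interaction mediums, to candidate technologies. The entries below are invented examples that only echo technology kinds mentioned in the abstract; the real mappings are the substance of the framework itself.

```python
# Hypothetical sketch of the SmartAbility mapping idea; all table entries
# are illustrative placeholders, not the framework's actual content.

ABILITY_TO_MEDIUM = {
    "head_movement": ["head tracking"],
    "eye_blink":     ["switch input"],
    "speech":        ["voice control"],
    "touch":         ["touchscreen"],
}

MEDIUM_TO_TECH = {
    "head tracking": ["iOS Switch Control (head mode)", "camera-based TLD tracker"],
    "switch input":  ["single-switch scanning keyboard"],
    "voice control": ["speech-recognition dictation"],
    "touchscreen":   ["smartphone app"],
}

def recommend(user_abilities: set) -> dict:
    """Return medium -> technologies for every ability the user reports
    being able to perform independently."""
    recs = {}
    for ability in user_abilities:
        for medium in ABILITY_TO_MEDIUM.get(ability, []):
            recs[medium] = MEDIUM_TO_TECH.get(medium, [])
    return recs

if __name__ == "__main__":
    print(recommend({"head_movement", "speech"}))
```

    Keying the lookup on abilities rather than conditions mirrors the thesis's positive-terminology rationale: the input is what the user can do, not a diagnosis.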