60 research outputs found

    The Tongue Enables Computer and Wheelchair Control for People with Spinal Cord Injury

    The Tongue Drive System (TDS) is a wireless, wearable assistive technology designed to allow individuals with severe motor impairments, such as tetraplegia, to access their environment using voluntary tongue motion. Previous TDS trials used a magnetic tracer temporarily attached to the top surface of the tongue with tissue adhesive. We investigated TDS efficacy for controlling a computer and driving a powered wheelchair in two groups of able-bodied subjects and a group of volunteers with spinal cord injury (SCI) at C6 or above. All participants received a magnetic tongue barbell and used the TDS for five to six consecutive sessions. Group performance was compared for TDS versus keypad and for TDS versus a sip-and-puff device (SnP) using accepted measures of speed and accuracy. All performance measures improved over the course of the trial, and the gap between keypad and TDS performance narrowed for able-bodied subjects. Although the participants with SCI were already familiar with the SnP, their performance measures were up to three times better with the TDS than with the SnP and continued to improve. TDS flexibility and the inherent characteristics of the human tongue enabled individuals with high-level motor impairments to access computers and drive wheelchairs at speeds faster than traditional assistive technologies allow, with comparable accuracy.
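    As a rough illustration of the "accepted measures of speed and accuracy" such access-method trials report, one common speed metric is Fitts'-law throughput: the index of difficulty of a pointing task divided by the movement time. The sketch below is generic; the target distance, width, and timing values are hypothetical, not data from this trial.

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Fitts'-law throughput: index of difficulty (Shannon form) over movement time.

    distance and width share a unit (e.g. pixels); movement_time is in seconds.
    Returns bits per second.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time

# Hypothetical pointing task: target 160 px away, 20 px wide, reached in 1.5 s
tp = fitts_throughput(distance=160, width=20, movement_time=1.5)
```

    Comparing mean throughput across devices (TDS vs. keypad vs. SnP) at matched accuracy is one way such speed comparisons are commonly made.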

    Qualitative assessment of Tongue Drive System by people with high-level spinal cord injury

    The Tongue Drive System (TDS) is a minimally invasive, wireless, and wearable assistive technology (AT) that enables people with severe disabilities to control their environments using tongue motion. TDS translates specific tongue gestures into commands by sensing the magnetic field created by a small magnetic tracer applied to the user's tongue. We have previously evaluated the TDS quantitatively for accessing computers and powered wheelchairs, demonstrating its usability. In this study, we focused on its qualitative evaluation by people with high-level spinal cord injury who each received a magnetic tongue piercing and used the TDS for 6 weeks. We used two questionnaires, an after-scenario and a poststudy questionnaire, designed to evaluate the tongue-piercing experience and TDS usability compared with that of the sip-and-puff device and the users' current ATs. After study completion, 73% of the participants were positive about keeping the magnetic tongue barbell in order to use the TDS. All were satisfied with the TDS performance, and most said that they were able to do more things using the TDS than their current ATs (4.22/5).

    Detecting head movement using gyroscope data collected via in-ear wearables

    Head movement is an effective, natural, and simple way to indicate pointing toward an object. Head-movement detection has significant potential across diverse applications, including user interaction with computers, external control of devices, powered-wheelchair operation, driver-drowsiness detection, and video surveillance. Reflecting this diversity of applications, detection methods are equally wide-ranging: over the years, researchers have introduced acoustic-based, video-based, computer-vision-based, and inertial-sensor-based approaches. Inertial sensor data can be generated by various wearables, such as wrist bands, smartwatches, and head-mounted devices. This thesis employs eSense, a representative earable device with a built-in inertial sensor that provides gyroscope data. The eSense device is a True Wireless Stereo (TWS) earbud equipped with a 6-axis inertial measurement unit, a microphone, and dual-mode Bluetooth (Bluetooth Classic and Bluetooth Low Energy). Features are extracted from the gyroscope data collected via the eSense device, and four machine learning models are applied to detect head movement: Random Forest (RF), Support Vector Machine (SVM), Naïve Bayes, and Perceptron. Model performance is evaluated with four metrics: Accuracy, Precision, Recall, and F1 score. The results show that all four models can detect head movement. Random Forest performs best, detecting head movement with approximately 77% accuracy, while Support Vector Machine, Naïve Bayes, and Perceptron achieve about 42%, 40%, and 39% accuracy, respectively. The Precision, Recall, and F1 results further confirm that these models can distinguish head directions such as left, right, and straight.
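    The pipeline described above (windowed gyroscope samples, feature extraction, classification) can be sketched in miniature. The thesis trains Random Forest and other learned models; the snippet below substitutes a deliberately simple threshold rule on the mean yaw rate so it stays self-contained, and the axis convention (z as yaw) and threshold are assumptions.

```python
import statistics

def extract_features(gyro_window):
    """Per-axis mean and standard deviation for a window of (x, y, z) gyro samples (deg/s)."""
    features = []
    for axis in zip(*gyro_window):
        features.append(statistics.fmean(axis))
        features.append(statistics.pstdev(axis))
    return features  # [mean_x, std_x, mean_y, std_y, mean_z, std_z]

def classify_turn(features, threshold=20.0):
    """Toy stand-in for the learned classifier: the sign of the mean yaw rate picks the label."""
    mean_yaw = features[4]  # assumes z is the head's vertical (yaw) axis
    if mean_yaw > threshold:
        return "left"
    if mean_yaw < -threshold:
        return "right"
    return "straight"

window = [(0.1, -0.2, 48.0), (0.0, 0.1, 55.0), (0.2, 0.0, 61.0)]
label = classify_turn(extract_features(window))  # -> "left"
```

    In the thesis, the same feature vectors would instead be fed to a trained classifier such as Random Forest; the window-then-featurize structure is unchanged.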

    The development of a SmartAbility Framework to enhance multimodal interaction for people with reduced physical ability.

    Assistive technologies are an evolving market because of the number of people worldwide who have conditions resulting in reduced physical ability (also known as disability). Various classification schemes exist to categorise disabilities, as well as government legislation to ensure equal opportunities within the community. However, there is a notable absence of a process for mapping physical conditions to technologies in order to improve quality of life for this user group. This research sits primarily in the Human Computer Interaction (HCI) domain, although aspects of Systems of Systems (SoS) and assistive technologies have also been applied. The thesis focuses on examples of multimodal interaction leading to the development of a SmartAbility Framework that aims to assist people with reduced physical ability by utilising their abilities to suggest interaction mediums and technologies. The framework was developed through a predominantly interpretivist methodology consisting of a variety of research methods, including state-of-the-art literature reviews, requirements elicitation, feasibility trials, and controlled usability evaluations to compare multimodal interactions. The developed framework was subsequently validated through the involvement of the intended user community and domain experts, supported by a concept demonstrator incorporating the SmartATRS case study. The aim and objectives of this research were achieved through the following key outputs and findings:
    - A comprehensive state-of-the-art literature review focussing on physical conditions and their classifications; HCI concepts relevant to multimodal interaction (ergonomics of human-system interaction, Design For All, and Universal Design); SoS definition and analysis techniques involving a System of Interest (SoI); and currently available products with potential uses as assistive technologies.
    - A two-phased requirements elicitation process applying surveys and semi-structured interviews to elicit the daily challenges of people with reduced physical ability, their interests in technology, and the requirements for assistive technologies obtained through collaboration with a manufacturer.
    - Findings from feasibility trials involving monitoring brain activity with an electroencephalograph (EEG), tracking facial features through Tracking Learning Detection (TLD), applying iOS Switch Control to track head movements, and investigating smartglasses.
    - Results of controlled usability evaluations comparing multimodal interactions with the technologies deemed feasible from the trials. The user community of people with reduced physical ability was involved throughout to maximise the usefulness of the data obtained.
    - An initial SmartDisability Framework developed from the results and observations of the requirements elicitation, feasibility trials, and controlled usability evaluations, validated through semi-structured interviews and a focus group.
    - An enhanced SmartAbility Framework addressing the SmartDisability validation feedback by reducing the number of elements, using simplified and positive terminology, and incorporating concepts from Quality Function Deployment (QFD).
    - A final consolidated version of the SmartAbility Framework, validated through semi-structured interviews with additional domain experts, that addressed all key suggestions.
    The results demonstrated that it is possible to map technologies to people with physical conditions by considering the abilities they can perform independently, without external support or significant physical effort. This led to the realisation that the term 'disability' has a negative connotation that can be avoided through the phrase 'reduced physical ability'. It is important to promote this rationale to the wider community through exploitation of the framework. This requires a SmartAbility smartphone application that allows users to input their abilities so that recommendations of interaction mediums and technologies can be provided.
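    The framework's central idea, mapping the abilities a user can perform independently to candidate interaction mediums, can be sketched as a simple lookup. The ability names and medium lists below are illustrative placeholders, not the framework's actual content.

```python
# Hypothetical ability -> interaction-medium table (illustrative only)
MEDIUMS_BY_ABILITY = {
    "head movement": ["head tracking (iOS Switch Control)", "smartglasses"],
    "facial expression": ["face tracking (TLD)"],
    "speech": ["voice control"],
    "touch": ["touchscreen", "joystick"],
}

def recommend(user_abilities):
    """Return candidate interaction mediums for the abilities a user can perform independently."""
    recommendations = []
    for ability in user_abilities:
        for medium in MEDIUMS_BY_ABILITY.get(ability, []):
            if medium not in recommendations:  # keep first-seen order, drop duplicates
                recommendations.append(medium)
    return recommendations
```

    A smartphone application of the kind the thesis proposes would gather the `user_abilities` list from the user and present the resulting recommendations.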

    Navigation assistant with a laser pointer for driving robotized wheelchairs

    Advisor: Eric Rohmer. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Assistive robotics solutions help people recover mobility and autonomy lost in their daily lives. This document presents a low-cost navigation assistant designed to let people paralyzed from the neck down drive a robotized wheelchair using a combination of head posture and facial expressions (smile and eyebrows up) to send commands to the chair. The assistant provides two navigation modes: manual and semi-autonomous. In manual navigation, a regular webcam with the OpenFace algorithm detects the user's head orientation and facial expressions (smile, eyebrows up) to compose commands that act directly on the wheelchair movements (stop, go forward, turn right, turn left). In semi-autonomous mode, the user controls a pan-tilt laser with his or her head to point at the desired destination on the ground and validates it with the eyebrows-up command, which makes the robotized wheelchair perform a rotation followed by a linear displacement to the chosen target. Although the assistant needs improvements, the results show that this solution may be a promising technology enabling people paralyzed from the neck down to control a robotized wheelchair.
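    The semi-autonomous mode described above reduces to simple planar geometry once the laser dot's ground position is known: rotate the chair toward the target, then translate. A minimal sketch, assuming the chair pose and the target are already expressed in the same world frame:

```python
import math

def rotate_then_translate(chair_pose, target):
    """Decompose a move into one in-place rotation and one straight-line displacement.

    chair_pose: (x, y, heading_rad); target: (x, y) of the laser dot on the ground.
    Returns (rotation_rad wrapped to [-pi, pi), forward_distance).
    """
    cx, cy, heading = chair_pose
    tx, ty = target
    bearing = math.atan2(ty - cy, tx - cx)  # world-frame direction to the target
    rotation = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    distance = math.hypot(tx - cx, ty - cy)
    return rotation, distance
```

    Converting the laser's pan-tilt angles into the ground coordinates of the dot (not shown) would require the laser's mounting height and pose relative to the chair.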

    Challenges and Solutions on Assistive Technologies: Electronic systems design for people with disabilities

    This PhD thesis focuses on technology dedicated to persons with disabilities. This category of devices is known in academia and on the market as Assistive Technology, a term covering technological solutions that can assist people with disabilities in everyday life and often restore to the user one or more abilities, such as walking, talking, playing, or simply changing channels while watching television. The thesis presents some of the candidate's major contributions to the field of Assistive Technology. To better frame the dynamics and the worldwide state of Assistive Technology, it also describes the field's most important current issues, both technological and economic.

    Patient classification for intelligent wheelchair adaptation

    PhD in Informatics Engineering. The importance and concern given to the autonomy and independence of elderly people and of patients suffering from some kind of disability has grown significantly over the last few decades. Intelligent wheelchairs (IW) are technologies that can increase the autonomy and independence of this population and are nowadays a very active research area. However, adaptation to users' specific characteristics and experiments with real users are topics that still lack deeper study. The intelligent wheelchair developed in the context of the IntellWheels project is controlled at a high level through a flexible multimodal interface, using voice commands, facial expressions, head movements, and a joystick as its main input modalities. This work intended to develop a system enabling the automatic adaptation of the previously developed intelligent wheelchair to the user's characteristics. A methodology was created for building a user model. The research was based on a data-gathering system that collects and stores voice commands, facial expressions, and head and body movements from patients with distinct disabilities such as cerebral palsy. The wheelchair can be used in different situations in real and simulated environments, and a serious game was developed in which users perform a specified set of tasks. The data were analysed using knowledge-discovery methods to create an automatic patient classification system. Based on this classification system, a methodology was developed to select the best wheelchair interface and command language for each patient. Evaluation was performed in the context of Project FCT/RIPD/ADA/109636/2009, "IntellWheels - Intelligent Wheelchair with Flexible Multimodal Interface". Experiments were conducted with a large set of patients suffering from severe physical constraints, in close collaboration with Escola Superior de Tecnologia de Saúde do Porto and Associação do Porto de Paralisia Cerebral. The intelligent wheelchair experiments were followed by user questionnaires, and the results were statistically analysed to assess the effectiveness and usability of adapting the intelligent wheelchair's multimodal interface to the user's characteristics. In a simulated environment, the system obtained a score of 67 on the System Usability Scale, based on the opinions of a sample of cerebral palsy patients at the most severe levels (IV and V) of gross motor function. It was also statistically demonstrated that the interface assigned automatically by the tool was rated higher than the one suggested by occupational therapists, showing that a command language adapted to each user can be assigned automatically. Experiments with distinct control modes revealed the users' preference for shared control, with an aid level matched to the patient's level of constraint. In conclusion, this work demonstrates that it is possible to automatically adapt an intelligent wheelchair to its user, with clear usability and safety benefits.
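    The usability score of 67 reported above comes from the System Usability Scale (SUS), a standard 10-item questionnaire scored 0-100. Its scoring rule is fixed and easy to reproduce; the responses in the example below are hypothetical, not data from the study.

```python
def sus_score(responses):
    """Score the System Usability Scale: 10 items rated 1-5.

    Odd-numbered items contribute (response - 1), even-numbered items (5 - response);
    the sum of contributions is multiplied by 2.5, giving a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

score = sus_score([4, 2, 4, 2, 3, 2, 4, 1, 4, 2])  # hypothetical respondent -> 75.0
```

    Scores around 68 are conventionally read as average usability, which puts the reported 67 near that benchmark.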

    Development and Evaluation of Tongue Operated Robotic Rehabilitation Paradigm for Stroke Survivors with Upper Limb Paralysis

    Stroke is a devastating condition that may cause upper limb paralysis. Robotic rehabilitation with self-initiated, assisted movements is a promising technology that could help restore upper limb function. The objective of this research is to develop and evaluate a tongue-operated exoskeleton that harnesses the intention of stroke survivors with upper limb paralysis, expressed via tongue motion, to control a robotic exoskeleton during rehabilitation, with the goals of functional restoration and improved quality of life. Specifically, a tongue-operated assistive technology, the Tongue Drive System (TDS), translates tongue gestures into commands, and the generated commands control rehabilitation robots: the wrist-based exoskeleton Hand Mentor Pro (HM) and the upper-limb exoskeleton KINARM. Through a pilot experiment with 3 healthy participants, we demonstrated the functionality of an enhanced TDS-HM with pressure-sensing capability; the system can add a programmable load force to increase exercise intensity in isotonic mode. Through experiments with healthy and stroke subjects, we demonstrated that the TDS-KINARM system could accurately translate tongue commands into exoskeleton arm movements, quantify upper limb function, and perform rehabilitation training. All healthy subjects and stroke survivors successfully performed target reaching and tracking tasks in all control modes, and one of the stroke patients showed clinically significant improvement. We also analysed the arm-reaching kinematics of healthy subjects in 4 modes of TDS-KINARM operation (active, active viscous, discrete tongue, and proportional tongue). The results indicated that the proportional tongue mode was a better candidate than the discrete tongue mode for tongue-assisted rehabilitation. This study also provided initial insights into possible kinematic similarities between tongue-operated and voluntary arm movements. Furthermore, the results showed that viscous resistance to arm motion did not significantly affect the kinematics of arm-reaching movements. Finally, in an experiment with 6 healthy subjects, we observed a tendency toward a facilitatory effect of adding tongue movement to limb movement on event-related desynchronization in EEG, implying enhanced brain excitability. This effect may contribute to enhanced rehabilitation outcomes in stroke survivors using the TDS with motor rehabilitation.
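    The distinction between the discrete and proportional tongue modes compared above can be made concrete: discrete commands select from a fixed set of velocities, while proportional control scales velocity continuously with the tongue-signal amplitude. The mapping below is a hypothetical sketch; the gains, limits, and command names are not from the study.

```python
def discrete_velocity(command, step=0.1):
    """Discrete mode: each tongue command maps to one fixed velocity (m/s)."""
    return {"forward": step, "backward": -step}.get(command, 0.0)

def proportional_velocity(signal, gain=0.5, max_speed=0.3):
    """Proportional mode: a normalized tongue-signal amplitude in [-1, 1]
    scales velocity continuously, clipped to the exoskeleton's speed limit."""
    return max(-max_speed, min(max_speed, gain * signal))
```

    Continuous scaling is one plausible reason the proportional mode produced arm kinematics closer to voluntary movement than the stepwise discrete mode.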

    Assessments used in assistive technology research: a state of the art

    The purpose of the undergraduate thesis "State of the art: Assessments used in assistive technology research" was to identify, analyse, and interpret the assistive technology literature with regard to assessments, since no compilation existed that covered the contents and trends in this research topic to guide professionals from different disciplines in the process of delivering assistive technology services. A qualitative, exploratory study with a 12-month documentary research design was carried out across 8 databases available at Universidad del Valle: EBSCO, DOAJ, SCIENCE, Springer Link, IEEE, Wiley Journals, PubMed, and ISI Web of Science. A theoretical framework was developed covering the key concepts of the research topic: disability, assistive technology (AT), the Human Activity Assistive Technology (HAAT) model, the Matching Person and Technology (MPT) model, the International Classification of Functioning, Disability and Health (ICF), the Assistive Technology Assessment (ATA) service-delivery model, assistive technology service delivery, and assessments, among others. Trends in the research were identified and future needs determined, consolidating a body of knowledge on the topic through the analysis of 134 articles, most obtained from the EBSCO database, published in 97 journals spanning 50 knowledge areas and 27 countries. From these articles, 274 assessments used in assistive technology research were compiled.