
    Adapted control methods for cerebral palsy users of an intelligent wheelchair

    The main objective of this project is the development of an intelligent wheelchair (IW) platform that can easily be adapted to any commercial electric-powered wheelchair and assist any person with special mobility needs. To achieve this objective, three distinct control methods were implemented in the IW: manual, shared, and automatic. Several algorithms were developed for each of these control methods; this paper presents three of the most significant, with emphasis on the shared control method. To validate the approach, experiments were performed with users suffering from cerebral palsy in a realistic simulator. The experiments revealed the importance of shared (aided) control for users with severe disabilities. Patients still felt they had complete control over the wheelchair's movement when using shared control at a 50% level, so this control type was very well accepted. It may thus be used in intelligent wheelchairs, since it corrects the direction in case of involuntary movements while still giving the user a sense of complete control over the IW's movement.
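
    A minimal sketch of the 50% shared-control blend described above (not the paper's implementation): the wheelchair's output is a convex combination of the user's joystick command and a safe command from an obstacle-avoidance module. The function names and the (linear, angular) velocity representation are illustrative assumptions.

    ```python
    import numpy as np

    def shared_control(user_cmd, safe_cmd, alpha=0.5):
        """Blend the user's joystick command with an autonomous correction.

        user_cmd, safe_cmd: (linear velocity, angular velocity) pairs.
        alpha: sharing level; 0.5 corresponds to the 50% level the paper
        reports as well accepted by users with cerebral palsy.
        """
        user = np.asarray(user_cmd, dtype=float)
        safe = np.asarray(safe_cmd, dtype=float)
        # Convex combination: involuntary jerks are pulled toward the safe
        # direction while the user retains a sense of control.
        return (1.0 - alpha) * user + alpha * safe

    # Example: an involuntary jerk to the left is halved by the blend.
    print(shared_control(user_cmd=(0.6, -1.2), safe_cmd=(0.6, 0.0)))  # [0.6 -0.6]
    ```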

    Patient classification for intelligent wheelchair adaptation (Classificação de pacientes para adaptação de cadeira de rodas inteligente)

    Doctoral thesis in Informatics Engineering. The importance and concern given to the autonomy and independence of elderly people and of patients suffering from some kind of disability have grown significantly over the last few decades. Intelligent wheelchairs (IWs) are technologies that can increase the autonomy and independence of this population, and they are nowadays a very active research area. However, the adaptation of IWs to users' specific characteristics and experiments with real users are topics that still lack deeper study. The intelligent wheelchair developed in the context of the IntellWheels project is controlled at a high level through a flexible multimodal interface, using voice commands, facial expressions, head movements, and a joystick as its main input modalities. This work aimed to develop a system enabling the automatic adaptation of the previously developed intelligent wheelchair to the user's characteristics. A methodology was created for building a user model. The research was based on the development of a data-gathering system enabling the collection and storage of voice, facial-expression, and head- and body-movement data from several patients with distinct disabilities, such as cerebral palsy. The wheelchair can be used in different situations in real and simulated environments, and a serious game was developed in which different tasks may be performed by users. The data were analysed using knowledge-discovery methods in order to create an automatic patient-classification system. Based on the classification system, a methodology was developed to select the best wheelchair interface and command language for each patient. Evaluation was performed in the context of Project FCT/RIPD/ADA/109636/2009 – “IntellWheels – Intelligent Wheelchair with Flexible Multimodal Interface”. Experiments were conducted with a large set of patients suffering from severe physical constraints, in close collaboration with Escola Superior de Tecnologia de Saúde do Porto and Associação do Porto de Paralisia Cerebral. The experiments using the intelligent wheelchair were followed by user questionnaires, and the results were statistically analysed to assess the effectiveness and usability of adapting the intelligent wheelchair's multimodal interface to the user's characteristics. In a simulated environment, the system obtained a score of 67 on the System Usability Scale, based on the opinions of a sample of cerebral palsy patients at levels IV and V (the most severe) of the Gross Motor Function scale. It was also statistically demonstrated that the interface assigned automatically by the data-analysis system received a higher evaluation than the one suggested by the occupational therapists, showing the usefulness of defining a command language adapted to each user. Experiments conducted with distinct control modes revealed the users' preference for a shared control with an aid level that takes the patient's level of constraint into account. In conclusion, this work demonstrates that it is possible to automatically adapt an intelligent wheelchair to the user, with clear usability and safety benefits.
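
    The classify-then-recommend step can be illustrated with a small, hypothetical sketch: a decision tree trained on per-user capability scores predicts which input modality suits a new user. The feature names, scores, and labels below are synthetic assumptions, not data from the IntellWheels project.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Columns: speech clarity, head-motion range, hand tremor (0..1 scores
    # such as might come from the data-gathering sessions; synthetic here).
    X = np.array([
        [0.9, 0.7, 0.1],   # clear speech, steady hands
        [0.2, 0.8, 0.6],   # poor speech, good head control
        [0.1, 0.2, 0.9],   # severe motor constraints
        [0.8, 0.5, 0.2],
    ])
    y = ["voice", "head_movement", "facial_expression", "voice"]

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    # Recommend an interface for a new user with poor speech but good
    # head control; the tree suggests head movements.
    print(model.predict([[0.25, 0.75, 0.55]]))
    ```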

    Empowering and assisting natural human mobility: The Simbiosis Walker

    This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems to acquire user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used for the identification and separation of the different components found in the human-machine interaction forces. This technique allowed isolating the components related to the navigational commands and developing a fuzzy logic controller to guide the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, and showed great acceptability among spinal cord injury patients and clinical staff.
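
    A toy sketch of the separate-then-steer idea, under stated assumptions rather than the authors' algorithm: an exponential moving average stands in for the adaptive filter, splitting the handlebar force into a slow steering-intent component and an oscillatory gait component, and a tiny Takagi-Sugeno-style rule base maps the intent to a turn rate. All constants are illustrative.

    ```python
    import numpy as np

    def separate_command(force, alpha=0.05):
        """EMA stand-in for the adaptive filter: the slow component tracks
        steering intent; the residual is the gait-related oscillation."""
        intent = np.zeros_like(force)
        for i in range(1, len(force)):
            intent[i] = (1 - alpha) * intent[i - 1] + alpha * force[i]
        return intent, force - intent

    def fuzzy_turn_rate(lateral_force):
        """Two fuzzy rules: membership-weighted 'turn left'/'turn right'."""
        left = max(0.0, min(1.0, -lateral_force / 10.0))
        right = max(0.0, min(1.0, lateral_force / 10.0))
        return 0.5 * right - 0.5 * left  # rad/s

    t = np.linspace(0, 10, 500)
    force = 4.0 * (t > 5) + 2.0 * np.sin(2 * np.pi * 1.8 * t)  # intent + gait
    intent, gait = separate_command(force)
    print(fuzzy_turn_rate(intent[-1]))  # small rightward turn command
    ```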

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users' mental states for BCI reliability and confidence measures, the incorporation of principles of human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology, including better EEG devices.
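
    One of the reliability ideas mentioned above, using a confidence measure to gate BCI output, can be sketched in a few lines. The threshold and probabilities are illustrative assumptions, not values from the paper.

    ```python
    def gate_command(class_probs, threshold=0.7):
        """Issue the most probable command only when the classifier is
        confident; otherwise abstain (None) rather than risk a wrong action."""
        best_cmd, best_p = max(class_probs.items(), key=lambda kv: kv[1])
        return best_cmd if best_p >= threshold else None

    print(gate_command({"left": 0.55, "right": 0.30, "rest": 0.15}))  # None
    print(gate_command({"left": 0.82, "right": 0.10, "rest": 0.08}))  # left
    ```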

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots that complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may experience confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, in turn, requires an awareness of human "intent", and so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental-model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality, and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
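
    The embed-then-cluster pattern behind the intention-inference framework can be illustrated with a lightweight stand-in: the thesis uses a deep generative model, but PCA followed by k-means on synthetic joystick trajectories shows the same idea of discovering interpretable intent groups. The data and cluster count below are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # 60 joystick-angle trajectories of 50 timesteps: two latent "intents"
    # (hold straight vs. ramp into a turn) plus sensor noise.
    straight = rng.normal(0.0, 0.05, size=(30, 50))
    turning = np.linspace(0, 1, 50) + rng.normal(0.0, 0.05, size=(30, 50))
    trajectories = np.vstack([straight, turning])

    latent = PCA(n_components=2).fit_transform(trajectories)   # embed
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latent)
    print(labels[:5], labels[-5:])  # the two intents fall into separate clusters
    ```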

    Multimodal fuzzy fusion for enhancing the motor-imagery-based brain computer interface

    Brain-computer interface technologies, such as steady-state visually evoked potentials, P300, and motor imagery, are methods of communication between the human brain and external devices. Motor imagery-based brain-computer interfaces are popular because they avoid unnecessary external stimuli. Although feature extraction methods have been illustrated in several machine intelligence systems in motor imagery-based brain-computer interface studies, the performance remains unsatisfactory. There is increasing interest in the use of fuzzy integrals, the Choquet and Sugeno integrals, which are appropriate for applications in which the fusion of data must consider possible data interactions. To enhance the classification accuracy of brain-computer interfaces, we adopted fuzzy integrals, after employing the classification method of traditional brain-computer interfaces, to account for possible links between the data. We then proposed a novel classification framework called the multimodal fuzzy fusion-based brain-computer interface system. Ten volunteers performed a motor imagery-based brain-computer interface experiment while we simultaneously acquired electroencephalography signals. The multimodal fuzzy fusion-based brain-computer interface system enhanced performance compared with traditional brain-computer interface systems. Furthermore, when the motor imagery-relevant electroencephalography alpha and beta frequency bands were used as input features, the system achieved its highest accuracies, up to 78.81% and 78.45% with the Choquet and Sugeno integrals, respectively. Herein, we present a novel concept for enhancing brain-computer interface systems that adopts fuzzy integrals, especially in the fusion for classifying brain-computer interface commands.
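
    The discrete Choquet integral at the heart of the proposed fusion is easy to state concretely. Below is a minimal, self-contained sketch fusing three classifier confidences; the fuzzy measure is a made-up example that rewards the interaction between classifiers "a" and "b", whereas the paper derives its measures from data.

    ```python
    def choquet(scores, measure):
        """scores: dict name -> confidence in [0, 1].
        measure: dict frozenset(names) -> capacity, monotone, with
        measure[empty set] = 0 and measure[all names] = 1."""
        items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending
        total, prev, remaining = 0.0, 0.0, set(scores)
        for name, value in items:
            # Weight each increment by the capacity of the coalition of
            # classifiers whose score is at least this value.
            total += (value - prev) * measure[frozenset(remaining)]
            prev = value
            remaining.discard(name)
        return total

    measure = {frozenset(): 0.0, frozenset("a"): 0.3, frozenset("b"): 0.3,
               frozenset("c"): 0.2, frozenset("ab"): 0.8, frozenset("ac"): 0.5,
               frozenset("bc"): 0.5, frozenset("abc"): 1.0}
    print(choquet({"a": 0.7, "b": 0.6, "c": 0.2}, measure))  # -> 0.55
    ```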

    BNCI Horizon 2020 - Towards a Roadmap for Brain/Neural Computer Interaction

    In this paper, we present BNCI Horizon 2020, an EU Coordination and Support Action (CSA) that will provide a roadmap for brain-computer interaction research for the coming years, starting in 2013 and aiming at research efforts until 2020 and beyond. The project is a successor of the earlier EU-funded Future BNCI CSA, which started in 2010 and produced a roadmap for a shorter time period. We present how we, a consortium of the main European BCI research groups as well as companies and end-user representatives, expect to tackle the problem of designing a roadmap for BCI research. In this paper, we define the field and its recent developments, in particular by considering publications and EU-funded research projects, and we discuss how we plan to involve research groups, companies, and user groups in our effort to pave the way for useful and fruitful EU-funded BCI research for the next ten years.