1,258 research outputs found
Evaluation of distinct input methods of an intelligent wheelchair in simulated and real environments: a performance and usability study
This paper evaluates the usability of an Intelligent Wheelchair (IW) in both real and simulated environments. The wheelchair is controlled at a high level by a flexible multimodal interface, using voice commands, facial expressions, head movements, and a joystick as its main inputs. A quasi-experimental design was applied, with a deterministic sample and a questionnaire based on the System Usability Scale. The subjects were divided into two independent samples: 46 individuals performed the experiment with an Intelligent Wheelchair in a simulated environment (28 using the different commands in a fixed sequence and 18 free to choose the command), and 12 individuals performed the experiment with a real IW. The main conclusion of this study is that the usability of the Intelligent Wheelchair is higher in the real environment than in the simulated one. However, there was no statistical evidence of differences between the real and simulated wheelchairs in terms of safety and control. Moreover, most users considered the multimodal way of driving the wheelchair very practical and satisfactory. Thus, it may be concluded that the multimodal interface enables easy and safe control of the IW in both simulated and real environments.
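The System Usability Scale mentioned above is scored by a fixed rule: ten Likert items (1–5), odd items contribute (response − 1), even items contribute (5 − response), and the sum is scaled by 2.5 to a 0–100 range. A minimal sketch of the standard scoring (illustrative, not code from the paper):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses, each an integer from 1 (strongly disagree)
    to 5 (strongly agree)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded:
        # contribution is (response - 1). Even-numbered items are
        # negatively worded: contribution is (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# A neutral respondent (all 3s) lands exactly at the midpoint:
print(sus_score([3] * 10))  # → 50.0
```

Scores above roughly 68 are conventionally read as above-average usability, which gives context to SUS values reported in studies like this one.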
Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges
In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles from human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.
Explainable shared control in assistive robotics
Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency.
There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference.
Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent.
This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
Multimodal interface for an intelligent wheelchair
Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201
Classificação de pacientes para adaptação de cadeira de rodas inteligente (Patient classification for intelligent wheelchair adaptation)
Doctoral thesis in Informatics Engineering. The importance and concern given to the autonomy and independence of
elderly people and patients suffering from some kind of disability has been
growing significantly in the last few decades. Intelligent wheelchairs (IW) are
technologies that can increase the autonomy and independence of this kind of
population and are nowadays a very active research area. However, the adaptation of such wheelchairs to users' specific characteristics, and experimentation with real users, remain topics that lack deeper study.
The intelligent wheelchair, developed in the context of the IntellWheels project,
is controlled at a high-level through a flexible multimodal interface, using voice
commands, facial expressions, head movements and joystick as its main input
modalities. This work aimed to develop a system that automatically adapts the previously developed intelligent wheelchair to the characteristics of its users.
A methodology was created enabling the creation of a user model. The
research was based on the development of a data gathering system, enabling
the collection and storage of data from voice commands, facial expressions,
head and body movements from several patients with distinct disabilities such
as Cerebral Palsy. The wheelchair can be used in different situations in real
and simulated environments and a serious game was developed where
different tasks may be performed by users.
Data was analysed using knowledge discovery methods in order to create an
automatic patient classification system. Based on the classification system, a
methodology was developed for selecting the best wheelchair interface and command language for each patient.
Evaluation was performed in the context of Project FCT/RIPD/ADA/109636/
2009 – “IntellWheels – Intelligent Wheelchair with Flexible Multimodal
Interface”. Experiments were conducted, using a large set of patients suffering
from severe physical constraints in close collaboration with Escola Superior de
Tecnologia de Saúde do Porto and Associação do Porto de Paralisia Cerebral.
The experiments using the intelligent wheelchair were followed by user
questionnaires. The results were statistically analysed in order to prove the
effectiveness and usability of the adaptation of the Intelligent Wheelchair
multimodal interface to the user characteristics. The results obtained in a simulated environment showed a score of 67 on the System Usability Scale, based on the opinions of a sample of cerebral palsy patients at the most severe levels (IV and V) of the Gross Motor Function Classification System. It was also statistically demonstrated that the interface recommended by the data analysis system received a higher evaluation than the one suggested by the occupational therapists, showing the usefulness of defining a command language adapted to each user. Experiments conducted with distinct control modes revealed the users' preference for shared control with an aid level matched to the patient's level of constraint. In conclusion, this work demonstrates that it is possible to adapt an intelligent wheelchair to the user with clear usability and safety benefits.
Autonomous open-source electric wheelchair platform with internet-of-things and proportional-integral-derivative control
This study aims to improve a working model of autonomous wheelchair navigation for disabled patients using the Internet of Things (IoT). A proportional-integral-derivative (PID) control algorithm steers the autonomous wheelchair based on position coordinates and orientation provided by a global positioning system (GPS) receiver and a digital compass sensor. The system is operated through an IoT layer accessible from a web browser. The wheelchair follows routes using a waypoint algorithm; an ESP8266 microcontroller acts as a bridge, transmitting the data obtained by the sensors and driving the direct current (DC) motors as actuators. The proposed system and the autonomous wheelchair gave satisfactory results, with a longitude and latitude error of 1.1 to 4.5 meters. This error stems from the accuracy limitations of the u-blox NEO-M8N GPS module. As a starting point for further research, a mathematical model of the wheelchair was created and a pure pursuit control algorithm was used to simulate its movement. An open-source autonomous IoT platform for electric wheelchairs has been successfully created; this platform can help nurses and caretakers.
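The control loop described above can be sketched in a few lines: a textbook discrete PID acting on the heading error between the compass orientation and the GPS bearing to the next waypoint. The gains, time step, and helper function below are illustrative assumptions, not the authors' implementation:

```python
import math

class PID:
    """Generic discrete PID controller (a sketch, not the paper's code)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and approximate the derivative
        # by finite differences over one time step.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def heading_error(x, y, heading, wx, wy):
    """Signed angle (rad) between the current heading (digital compass)
    and the bearing to the waypoint (from GPS coordinates), wrapped
    to [-pi, pi] via atan2."""
    bearing = math.atan2(wy - y, wx - x)
    return math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))

# One control step: steering command toward a waypoint east of the robot.
pid = PID(kp=1.2, ki=0.0, kd=0.1, dt=0.1)
steer = pid.update(heading_error(0.0, 0.0, 0.0, 1.0, 0.0))
```

In the waypoint algorithm this update would run every cycle, and the wheelchair would switch to the next waypoint once the GPS position falls within some arrival radius; with metre-level GPS error, that radius has to be at least a few metres.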
Research on Application of Cognitive-Driven Human-Computer Interaction
Human-computer interaction is an important research topic in human factors engineering for intelligent manufacturing. Natural human-computer interaction conforms to users' habitual cognition and can efficiently handle imprecise information exchange, thus improving user experience and reducing cognitive load. Through an analysis of the information interaction process, users' interaction-experience cognition, and human-computer interaction principles, a cognitive-driven information transmission model for human-computer interaction is established. The main interaction modes in current human-computer interaction systems are surveyed, and their application status, technical requirements, and open problems are discussed. The paper then examines methods for analysing and evaluating interaction modes at three levels (subjective evaluation, physiological measurement, and mathematical evaluation) so as to promote the understanding of imprecise information, achieve adaptive interaction, and guide the design and optimisation of human-computer interaction systems. Finally, based on the state of human-computer interaction in intelligent environments, research hotspots, open problems, and development trends are put forward.