
    Virtual Reality-Assisted Physiotherapy for Visuospatial Neglect Rehabilitation: A Proof-of-Concept Study

    This study explores a VR-based intervention for visuospatial neglect (VSN), a post-stroke condition. It aims to develop a VR task that uses interactive visual-audio cues to improve sensory-motor training and to assess its impact on VSN patients' engagement and performance. Collaboratively designed with physiotherapists, the VR task uses directional visual and auditory stimuli to alert and direct patients, and was tested over 12 sessions with two individuals. Results show a consistent decrease in task-completion variability and positive patient feedback, highlighting the VR task's potential for enhancing engagement and suggesting its feasibility in rehabilitation. The study underlines the significance of collaborative design in healthcare technology and advocates further research with a larger sample size to confirm the benefits of VR in VSN treatment, as well as its applicability to other multimodal disorders.
    Comment: 29 pages, 8 figures, 5 tables
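The reported decrease in task-completion variability can be quantified in several ways. The minimal sketch below is not the authors' analysis code; it uses purely hypothetical trial times and computes a per-session coefficient of variation, one simple measure of consistency across trials.

```python
import numpy as np

def session_variability(completion_times):
    """Coefficient of variation (std/mean) of task completion times
    within one session -- a simple measure of trial-to-trial consistency."""
    times = np.asarray(completion_times, dtype=float)
    return times.std(ddof=1) / times.mean()

# Hypothetical per-session trial times (seconds) for one participant;
# a downward trend in CV would mirror the reported drop in variability.
sessions = [
    [14.2, 19.8, 11.5, 22.0],   # early session: widely spread
    [13.1, 15.4, 12.0, 16.8],
    [12.5, 13.0, 11.8, 13.6],   # later session: tighter spread
]
for i, s in enumerate(sessions, start=1):
    print(f"session {i}: CV = {session_variability(s):.2f}")
```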

    Investigating motor skill in closed-loop myoelectric hand prostheses: through speed-accuracy trade-offs


    Gaze-tracking-based interface for robotic chair guidance

    This research focuses on solutions to enhance the quality of life of wheelchair users, specifically by applying a gaze-tracking-based interface to the guidance of a robotized wheelchair. The interface was applied in two approaches to the wheelchair control system. The first was an assisted control in which the user was continuously involved in controlling the movement of the wheelchair in the environment and the inclination of the different parts of the seat, through gaze and eye blinks obtained with the interface. The second approach took the first steps towards autonomous wheelchair control, in which the wheelchair moves autonomously, avoiding collisions, towards the position defined by the user. To this end, this project developed the basis for obtaining the gaze position relative to the wheelchair and for object detection, so that the optimal route for the wheelchair can be calculated in the future. The integration of a robotic arm into the wheelchair for manipulating objects was also considered: this work identifies, among the detected objects, the object of interest indicated by the user's gaze, so that in the future the robotic arm could select and pick up the object the user wants to manipulate. In addition to the two approaches, an attempt was made to estimate the user's gaze without the interface software, obtaining gaze from pupil-detection libraries, a calibration procedure, and a mathematical model that relates pupil positions to gaze. The results of the implementations are analysed in this work, including some limitations encountered, and future improvements are proposed with the aim of increasing the independence of wheelchair users.
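As an illustration of the kind of mapping described above, the sketch below fits a simple polynomial calibration from pupil coordinates to gaze coordinates and then selects the detected object closest to the estimated gaze point. The calibration points, object positions, and model form are assumptions for illustration, not the dissertation's actual implementation.

```python
import numpy as np

# Calibration: fit a second-order polynomial mapping from pupil
# coordinates (px, py) to scene gaze coordinates (gx, gy).
def design(p):
    px, py = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

# Hypothetical calibration data: pupil positions and the gaze targets
# the user was asked to fixate.
pupil = np.array([[0.30, 0.40], [0.55, 0.42], [0.31, 0.62],
                  [0.56, 0.63], [0.43, 0.52], [0.40, 0.45]])
gaze = np.array([[0.10, 0.10], [0.90, 0.10], [0.10, 0.90],
                 [0.90, 0.90], [0.50, 0.50], [0.40, 0.25]])

coeffs, *_ = np.linalg.lstsq(design(pupil), gaze, rcond=None)  # shape (6, 2)

def pupil_to_gaze(p):
    """Predict a gaze point from a single pupil position."""
    return (design(np.atleast_2d(p)) @ coeffs)[0]

# Object selection: pick the detected object whose centre is closest
# to the estimated gaze point (object list is hypothetical).
objects = {"cup": (0.80, 0.85), "book": (0.15, 0.20), "phone": (0.55, 0.50)}
g = pupil_to_gaze([0.54, 0.60])
target = min(objects, key=lambda k: np.hypot(*(np.array(objects[k]) - g)))
print("estimated gaze:", g.round(2), "-> selected object:", target)
```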

    A Scenario Analysis of Wearable Interface Technology Foresight

    Although the importance and value of wearable interfaces have gradually been recognized, the related technologies and the priority of adopting them have so far not been clearly identified. To fill this gap, this paper focuses on the technology planning strategy of organizations interested in developing or adopting wearable interface technologies. Based on a scenario analysis approach, a technology planning strategy is proposed. In this analysis, thirty wearable interface technologies are classified into six categories, and the importance and risk factors of these categories are then evaluated under two possible scenarios. The main findings are that most brain-based wearable interface technologies are rated of high to medium importance and high risk in all scenarios, and that scenario changes have less impact on voice-based and gesture-based wearable interface technologies. These results provide a reference for organizations and vendors interested in adopting or developing wearable interface technologies.
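The per-category evaluation under alternative scenarios can be thought of as a small rating table. The sketch below uses purely hypothetical ratings for three of the categories named in the abstract to show how scenario sensitivity could be summarized; it is not the paper's data or method.

```python
# Hypothetical ratings (1 = low, 3 = high) for three interface categories
# under two illustrative scenarios.
ratings = {
    "brain-based":   {"scenario A": {"importance": 3, "risk": 3},
                      "scenario B": {"importance": 2, "risk": 3}},
    "voice-based":   {"scenario A": {"importance": 2, "risk": 1},
                      "scenario B": {"importance": 2, "risk": 1}},
    "gesture-based": {"scenario A": {"importance": 2, "risk": 2},
                      "scenario B": {"importance": 2, "risk": 2}},
}

for category, scenarios in ratings.items():
    # Spread of importance across scenarios: 0 means the category's rating
    # is insensitive to scenario changes.
    imps = [s["importance"] for s in scenarios.values()]
    print(f"{category:14s} importance spread across scenarios: {max(imps) - min(imps)}")
```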

    Electroencephalography (EEG), electromyography (EMG) and eye-tracking for astronaut training and space exploration

    The ongoing push to send humans back to the Moon and to Mars is giving rise to a wide range of novel technical solutions in support of prospective astronaut expeditions. Against this backdrop, the European Space Agency (ESA) has recently launched an investigation into unobtrusive interface technologies as a potential answer to such challenges. Three technologies have shown promise in this regard: EEG-based brain-computer interfaces (BCI) provide a non-invasive method of utilizing the recorded electrical activity of a user's brain, electromyography (EMG) enables monitoring of the electrical signals generated by the user's muscle contractions, and eye tracking enables, for instance, the user's gaze direction to be tracked via camera recordings to convey commands. Beyond simply improving the usability of prospective technical solutions, our findings indicate that EMG, EEG, and eye tracking could also serve to monitor and assess a variety of cognitive states, including attention, cognitive load, and mental fatigue, while EMG could furthermore be used to monitor the physical state of the astronaut. In this paper, we elaborate on the key strengths and challenges of these three enabling technologies and, in light of ESA's latest findings, reflect on their applicability in the context of human space flight. Furthermore, a timeline of technological readiness is provided. In so doing, this paper feeds into the growing discourse on emerging technology and its role in paving the way for a human return to the Moon and expeditions beyond the Earth's orbit.
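As an example of how EEG might feed a cognitive-state monitor of the kind mentioned above, the sketch below estimates theta and alpha band power from a synthetic single-channel signal and forms a theta/alpha ratio, a commonly cited workload index. This is a generic illustration, not ESA's pipeline; the sampling rate and signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 256                                  # hypothetical sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic single-channel EEG: mixed theta (6 Hz) and alpha (10 Hz) plus noise.
eeg = (0.8 * np.sin(2 * np.pi * 6 * t)
       + 0.5 * np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.random.randn(t.size))

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 13)
print(f"theta/alpha ratio (one simple workload index): {theta / alpha:.2f}")
```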

    Behavioural attentiveness patterns analysis – detecting distraction behaviours

    The capacity to remain focused on a task can be crucial in some circumstances. In general, this ability is intrinsic to human social interaction and is naturally used in any social context. Nevertheless, some individuals have difficulty remaining concentrated on an activity, resulting in a short attention span. Children with Autism Spectrum Disorder (ASD) are a notable example of such individuals. ASD is a group of complex developmental disorders of the brain; affected individuals are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already proved to encourage the development of social interaction skills that children with ASD often lack. However, most of these systems are controlled remotely and cannot adapt automatically to the situation, and even those that are more autonomous still cannot perceive whether or not the user is paying attention to the instructions and actions of the robot. Following this trend, this dissertation is part of a research project that has been under development for some years, in which the robot ZECA (Zeno Engaging Children with Autism) from Hanson Robotics is used to promote interaction with children with ASD, helping them to recognize emotions and to acquire new knowledge, in order to foster social interaction and communication with others. The main purpose of this dissertation is to determine whether the user is distracted during an activity. In the future, the objective is to interface this system with ZECA so that its behaviour can be adapted according to the user's affective state during an emotion-imitation activity. In order to recognize human distraction behaviours and capture the user's attention, several patterns of distraction, as well as systems to detect them automatically, have been developed. One of the most widely used methods for detecting distraction patterns is based on measuring head pose and eye gaze. This dissertation proposes a system based on a Red Green Blue (RGB) camera capable of detecting distraction patterns (head pose, eye gaze, blink frequency, and the user's position relative to the camera) during an activity and then classifying the user's state using a machine-learning algorithm. Finally, the proposed system is evaluated in a controlled laboratory environment to verify whether it can detect the patterns of distraction. The results of these preliminary tests revealed some system constraints and validated the system's adequacy for later use in an intervention setting.
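A minimal sketch of the final classification step described above: per-frame head-pose, gaze, and blink features are fed to an off-the-shelf classifier. The feature values, labels, and the choice of a random forest are assumptions for illustration, not the dissertation's actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-frame features extracted from an RGB camera:
# [head_yaw_deg, head_pitch_deg, gaze_x, gaze_y, blinks_per_min]
X = np.array([
    [  2.0,  -1.0, 0.50, 0.48,  9.0],   # facing the robot -> attentive
    [  4.0,   3.0, 0.52, 0.55, 11.0],
    [ 35.0,  -5.0, 0.85, 0.40, 22.0],   # head turned away -> distracted
    [-40.0,  10.0, 0.10, 0.60, 25.0],
    [  1.0,   0.0, 0.49, 0.50,  8.0],
    [ 30.0, -12.0, 0.90, 0.30, 30.0],
])
y = np.array([0, 0, 1, 1, 0, 1])         # 0 = attentive, 1 = distracted

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

new_frame = [[28.0, -8.0, 0.80, 0.35, 20.0]]
print("predicted state:", "distracted" if clf.predict(new_frame)[0] else "attentive")
```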