
    Touchless Typing using Head Movement-based Gestures

    Physical contact-based typing interfaces are not suitable for people with upper-limb disabilities such as quadriplegia. This paper therefore proposes a touchless typing interface that uses an on-screen QWERTY keyboard and a front-facing smartphone camera mounted on a stand. The keys of the keyboard are grouped into nine color-coded clusters, and users point to the letters they want to type simply by moving their head. The head movements are recorded by the camera, and the recorded gestures are then translated into a cluster sequence. The translation module is implemented using CNN-RNN, Conv3D, and a modified GRU-based model that uses a pre-trained embedding rich in head-pose features. The performance of these models was evaluated under four different scenarios on a dataset of 2234 video sequences collected from 22 users. The modified GRU-based model outperforms the standard CNN-RNN and Conv3D models in three of the four scenarios. The results are encouraging and suggest promising directions for future research. Comment: The two lead authors contributed equally. The dataset and code are available upon request; please contact the last author.
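    As a rough illustration of the kind of sequence model described above, the sketch below (not the authors' code; the feature dimensionality, hidden size and layer count are assumptions) shows a GRU that maps a sequence of per-frame head-pose features to one of the nine key clusters.

        # Minimal sketch: GRU classifier from head-pose sequences to key clusters.
        # Hypothetical dimensions; the paper's pre-trained head-pose embedding
        # would replace the raw yaw/pitch/roll input assumed here.
        import torch
        import torch.nn as nn

        class HeadGestureGRU(nn.Module):
            def __init__(self, feat_dim=3, hidden=128, n_clusters=9):
                super().__init__()
                self.gru = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
                self.head = nn.Linear(hidden, n_clusters)

            def forward(self, x):              # x: (batch, frames, feat_dim)
                _, h_n = self.gru(x)           # h_n: (layers, batch, hidden)
                return self.head(h_n[-1])      # logits over the nine clusters

        # Usage: classify a batch of four 30-frame gestures.
        model = HeadGestureGRU()
        logits = model(torch.randn(4, 30, 3))
        predicted_cluster = logits.argmax(dim=1)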

    Improved Hands-Free Text Entry System

    An input device is a hardware device used to send input data to a computer or to control and interact with a computer system. Contemporary input mechanisms can be categorized by the input medium: keyboards and mice are hand-operated, Siri and Alexa are voice-based, and so on. The objective of this project was to develop a head-movement-based input system that improves upon earlier such systems. Input entry based on head movements may help people with disabilities interact with computers more easily. The system developed provides the flexibility to capture both rigid and non-rigid motions. Unlike prior work, the organization of alphabet symbols in our design is based on the frequency of the characters in the English dictionary. We conducted experiments on our system and compared it to previous head-movement systems.
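    The frequency-based organization of symbols can be illustrated with a small sketch (hypothetical, not the project's actual arrangement) that sorts the alphabet by approximate English letter frequency and fills a grid so the most common letters occupy the easiest-to-reach positions.

        # Minimal sketch: build a keyboard layout ordered by letter frequency.
        # The frequency table is a standard approximation; rows/cols are assumptions.
        ENGLISH_FREQ = {
            'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
            's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
            'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
            'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
            'q': 0.10, 'z': 0.07,
        }

        def frequency_layout(rows=3, cols=9):
            """Fill a rows x cols grid, most frequent letters first."""
            letters = sorted(ENGLISH_FREQ, key=ENGLISH_FREQ.get, reverse=True)
            return [letters[r * cols:(r + 1) * cols] for r in range(rows)]

        for row in frequency_layout():
            print(' '.join(row))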

    Studies on the impact of assistive communication devices on the quality of life of patients with amyotrophic lateral sclerosis

    Doctoral thesis, Biomedical Sciences (Neurosciences), Universidade de Lisboa, Faculdade de Medicina, 2016.
    Amyotrophic Lateral Sclerosis (ALS) is a progressive neuromuscular disease with rapid and generalized degeneration of motor neurons. Patients with ALS experience a relentless decline in functions that affect the performance of most activities of daily living (ADL), such as speaking, eating, walking or writing. For this reason, dependence on caregivers grows as the disease progresses. Management of the respiratory system is one of the main concerns of medical support, since respiratory failure is the most common cause of death in ALS. Due to increasing muscle weakness, most patients experience a dramatic decrease in speech intelligibility and difficulties in using the upper limbs (UL) for writing. There is growing evidence that mild cognitive impairment is common in ALS, but most patients are self-conscious of their difficulties in communicating and, in very severe stages, locked-in syndrome can occur. When no resources other than speech and writing are used to assist communication, patients are deprived of expressing needs or feelings, making decisions and keeping social relationships. Further, caregivers perceive increased dependence due to difficulties in communicating with others and become frustrated by difficulties in understanding their partners' needs. Support for communication is therefore very important to improve the quality of life of both patients and caregivers; however, this has been poorly investigated in ALS. Assistive communication devices (ACD) can support patients by providing a diversity of tools for communication as they progressively lose speech. ALS, in common with other degenerative conditions, introduces an additional challenge for the field of ACD: as the disease progresses, technologies must adapt to the changing condition of the user. In early stages, patients may need speech synthesis on a mobile device, if dysarthria is one of the initial symptoms, or keyboard modifications, as weakness in the UL increases. When upper-limb dysfunction is severe, different input technologies may be adapted to capture voluntary control (for example, eye-tracking devices). Despite the enormous advances in the field of assistive technologies in the last decade, difficulties in clinical support for the use of ACD persist. Among the main reasons for these difficulties are the lack of assessment tools to evaluate communication needs, determine proper input devices, and indicate changes over disease progression, and the absence of clinical evidence that ACD have a relevant impact on the quality of life of affected patients. For this set of reasons, support with communication tools is delayed to stages where patients are severely disabled. Often in these stages, patients face additional clinical complications and increased dependence on their caregivers' decisions, which increases the difficulty of adapting to new communication tools. This thesis addresses the role of assistive technologies in the quality of life of early-affected patients with ALS. It also includes the study of assessment tools that can improve the longitudinal evaluation of the communication needs of patients with ALS. We longitudinally evaluated a group of 30 patients with bulbar-onset ALS and 17 caregivers, over 2 to 29 months. Patients were assessed during their regular clinical appointments at the Hospital de Santa Maria, Centro Hospitalar Lisboa Norte.
Evaluation of patients was based on validated instruments for assessing the Quality of Life (QoL) of patients and caregivers, and on methodologies for recording communication and measuring its performance (including speech, handwriting and typing). We tested the impact of early support with ACD on the QoL of patients with ALS, using a randomized, prospective, longitudinal design. Patients were able to learn and improve their skills in using communication tools based on electronic assistive devices. We found a positive impact of ACD in the psychological and wellbeing domains of quality of life in patients, as well as in the support and psychological domains in caregivers. We also studied communication performance (words per minute) using the UL. Performance in handwriting may decline faster than performance in typing, supporting the idea that touchscreen-based ACD sustain communication for longer than handwriting. From longitudinal recordings of speech and typing activity we observed that ACD can provide tools to detect early markers of bulbar and UL dysfunction in ALS. The methodologies used in this research for recording and assessing communication function can be replicated in the home environment and form part of the original contributions of this research. The implementation of remote monitoring tools in the daily use of ACD, based on these methodologies, is discussed. For patients who receive late support for the use of ACD, lack of time or of daily support to learn how to control complex input devices may hinder their use. We developed a novel device to explore the detection and control of various residual movements, based on accelerometry, electromyography and force sensors, as input signals for communication. The aim of this input device was to provide a tool for exploring new communication channels in patients with generalized muscle weakness. This research contributed novel tools from the engineering field to the study of assistive communication in patients with ALS. The methodologies developed in this work can be further applied to study the impact of ACD in other neurodegenerative diseases that affect speech and motor control of the UL.
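    The thesis tracked communication performance in words per minute across speech, handwriting and typing. As a minimal sketch of what such a typing measure could look like (an assumption, not the thesis' actual protocol), the standard convention divides the number of characters typed by five and by the elapsed minutes.

        # Minimal sketch: words-per-minute from a timestamped typing session.
        from datetime import datetime

        def words_per_minute(text, start, end):
            """Standard WPM: characters typed / 5, divided by elapsed minutes."""
            minutes = (end - start).total_seconds() / 60.0
            return (len(text) / 5.0) / minutes if minutes > 0 else 0.0

        # Usage with a hypothetical session log.
        t0 = datetime(2016, 3, 1, 10, 0, 0)
        t1 = datetime(2016, 3, 1, 10, 2, 30)
        print(round(words_per_minute("the quick brown fox jumps over the lazy dog", t0, t1), 1))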

    An end-to-end review of gaze estimation and its interactive applications on handheld mobile devices

    In recent years we have witnessed an increasing number of interactive systems on handheld mobile devices that utilise gaze as a single or complementary interaction modality. This trend is driven by the enhanced computational power of these devices, the higher resolution and capacity of their cameras, and improved gaze estimation accuracy obtained from advanced machine learning techniques, especially deep learning. As the literature is fast progressing, there is a pressing need to review the state of the art, delineate the boundary, and identify the key research challenges and opportunities in gaze estimation and interaction. This paper aims to serve this purpose by presenting an end-to-end holistic view of this area, from gaze-capturing sensors, to gaze estimation workflows, to deep learning techniques, and to gaze interactive applications.
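    As an illustration of the appearance-based estimation step such workflows rely on, the sketch below (an assumption, not any specific surveyed system) regresses a 2D on-screen gaze point from a cropped eye image with a small convolutional network.

        # Minimal sketch: CNN regression from a 64x64 grayscale eye crop to (x, y).
        import torch
        import torch.nn as nn

        class EyeGazeCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.regress = nn.Sequential(
                    nn.Flatten(), nn.Linear(32 * 13 * 13, 128), nn.ReLU(),
                    nn.Linear(128, 2),       # predicted gaze point in screen coordinates
                )

            def forward(self, eye):          # eye: (batch, 1, 64, 64)
                return self.regress(self.features(eye))

        gaze_xy = EyeGazeCNN()(torch.randn(1, 1, 64, 64))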

    Applications of the electric potential sensor for healthcare and assistive technologies

    The work discussed in this thesis explores the possibility of employing the Electric Potential Sensor for use in healthcare and assistive technology applications with accuracy equal to, and in some cases better than, that of conventional technologies. The Electric Potential Sensor is a generic and versatile sensing technology capable of working in both contact and non-contact (remote) modes. New versions of the active sensor were developed for specific surface electrophysiological signal measurements. The requirements in terms of frequency range, electrode size and gain varied with the type of signal measured for each application. Real-time applications based on electrooculography, electroretinography and electromyography are discussed, as well as an application based on human movement. A three-sensor electrooculography eye-tracking system was developed, which is of interest for eye-controlled assistive technologies. The system described achieved an accuracy at least as good as conventional wet gel electrodes for both horizontal and vertical eye movements. Surface recording of the electroretinogram, used to monitor eye health and diagnose degenerative diseases of the retina, was achieved and correlated with both corneal fibre and wet gel surface electrodes. The main signal components of electromyography lie in a higher frequency band; surface electromyography signals of the deltoid muscle were recorded over the course of rehabilitation of a subject with an injured arm, and signals of the bicep were recorded and correlated with the joint dynamics of the elbow. A related non-contact application of interest to assistive technologies was also developed: hand movement within a defined area was mapped and used to control a mouse cursor and a predictive text interface.
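    To illustrate the final cursor-control step, the following sketch (assumed, not the thesis' signal chain; the gain and threshold values are hypothetical) maps horizontal and vertical EOG amplitudes from such a three-sensor montage to relative cursor movement, with a deadband so fixation does not move the pointer.

        # Minimal sketch: EOG amplitudes (microvolts) to relative cursor motion.
        def eog_to_cursor(h_uv, v_uv, gain=0.5, deadband_uv=20.0):
            """Ignore small drift below the deadband; scale saccades into pixels."""
            dx = gain * h_uv if abs(h_uv) > deadband_uv else 0.0
            dy = gain * v_uv if abs(v_uv) > deadband_uv else 0.0
            return dx, dy

        # Usage: a rightward saccade with little vertical component.
        print(eog_to_cursor(h_uv=120.0, v_uv=5.0))   # -> (60.0, 0.0)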

    Controlling a Mouse Pointer with a Single-Channel EEG Sensor

    (1) Goals: The purpose of this study was to analyze the feasibility of using the information obtained from a one-channel electroencephalography (EEG) signal to control a mouse pointer. We used a low-cost headset, with one dry sensor placed at the FP1 position, to steer a mouse pointer and make selections through a combination of the user’s attention level with the detection of voluntary blinks. There are two types of cursor movement: spinning and linear displacement. A sequence of blinks allows switching between these movement types, while the attention level modulates the cursor’s speed. The influence of the attention level on performance was studied. Additionally, Fitts’ model and the evolution of the emotional states of participants, among other trajectory indicators, were analyzed. (2) Methods: Twenty participants distributed into two groups (Attention and No-Attention) performed three runs, on different days, in which 40 targets had to be reached and selected. Target positions and distances from the cursor’s initial position were chosen to provide eight different indices of difficulty (IDs). A self-assessment manikin (SAM) test and a final survey provided information about the system’s usability and the emotions of participants during the experiment. (3) Results: The performance was similar to some brain–computer interface (BCI) solutions found in the literature, with an average information transfer rate (ITR) of 7 bits/min. Concerning cursor navigation, some trajectory indicators showed our proposed approach to be as good as common pointing devices such as joysticks and trackballs. Only one of the 20 participants reported difficulty in managing the cursor and, according to the tests, most of them assessed the experience positively. Movement times and hit rates were significantly better for participants in the Attention group. (4) Conclusions: The proposed approach is a feasible low-cost solution for managing a mouse pointer.
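    The described control scheme can be sketched as a small state machine (an assumption, not the study's implementation): a detected blink toggles between the spinning and linear modes, and the attention level scales the linear speed.

        # Minimal sketch: blink-toggled spin/linear cursor with attention-scaled speed.
        import math

        class AttentionCursor:
            def __init__(self, base_speed=2.0, spin_rate=0.1):
                self.mode = "spin"             # spin until a blink locks a heading
                self.angle = 0.0
                self.x, self.y = 0.0, 0.0
                self.base_speed = base_speed
                self.spin_rate = spin_rate

            def step(self, attention, blink):
                """attention in [0, 1] modulates speed; a blink toggles the mode."""
                if blink:
                    self.mode = "linear" if self.mode == "spin" else "spin"
                if self.mode == "spin":
                    self.angle += self.spin_rate           # rotate the pointing direction
                else:
                    speed = self.base_speed * attention    # faster when more attentive
                    self.x += speed * math.cos(self.angle)
                    self.y += speed * math.sin(self.angle)
                return self.x, self.y

        # Usage: spin for 10 frames, blink once, then move along the locked heading.
        cursor = AttentionCursor()
        for frame in range(50):
            cursor.step(attention=0.8, blink=(frame == 10))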

    Cognitive state monitoring and the design of adaptive instruction in digital environments: lessons learned from cognitive workload assessment using a passive brain-computer interface approach

    According to Cognitive Load Theory (CLT), one of the crucial factors for successful learning is the type and amount of working-memory load (WML) learners experience while studying instructional materials. Optimal learning conditions are characterized by providing challenges for learners without inducing cognitive over- or underload. Thus, presenting instruction in a way that WML is constantly held within an optimal range with regard to learners' working-memory capacity might be a good method to provide these optimal conditions. The current paper elaborates on how digital learning environments that achieve this goal can be developed by combining approaches from Cognitive Psychology, Neuroscience, and Computer Science. One of the biggest obstacles that needs to be overcome is the lack of an unobtrusive method of continuously assessing learners' WML in real time. We propose to solve this problem by applying passive Brain-Computer Interface (BCI) approaches to realistic learning scenarios in digital environments. In this paper we discuss the methodological and theoretical prospects and pitfalls of this approach based on results from the literature and from our own research. We present a strategy for meeting several inherent challenges of applying BCIs to WML and learning by refining the psychological constructs behind WML, exploring their neural signatures, using these insights for sophisticated task designs, and optimizing algorithms for analyzing electroencephalography (EEG) data. Based on this strategy, we applied machine-learning algorithms for cross-task classification of different levels of WML to tasks that involve studying realistic instructional materials. We obtained very promising results that yield several recommendations for future work.
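    A minimal sketch of cross-task workload classification (assumed, not the authors' pipeline; the features and labels below are synthetic placeholders) trains a linear classifier on band-power-style features from one task and evaluates it on trials from a different task.

        # Minimal sketch: train on task A, test on task B (synthetic placeholder data).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 8))      # trials x (channels x bands), task A
        y_train = rng.integers(0, 2, size=200)   # two WML levels
        X_test = rng.normal(size=(100, 8))       # trials from a different task B
        y_test = rng.integers(0, 2, size=100)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        clf.fit(X_train, y_train)
        print("cross-task accuracy:", clf.score(X_test, y_test))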