
    EYECOM: an innovative approach for computer interaction

    The world is innovating rapidly, and there is a continuous need to interact with technology. Unfortunately, there are few promising options for paralyzed people to interact with machines such as laptops, smartphones, and tablets. The few commercial solutions, such as Google Glass, are costly and cannot be afforded by every paralyzed person. Toward this end, the thesis proposes a retina-controlled device called EYECOM. The proposed device is constructed from cost-effective yet robust off-the-shelf IoT components (Arduino microcontrollers, XBee wireless modules, IR diodes, and an accelerometer). The device can easily be mounted onto eyeglasses; a paralyzed person using it can interact with a machine through simple head movements and eye blinks. An IR diode located in front of the eye illuminates the eye region, and a detector converts the reflected IR light into an electrical signal; when the eyelids close, the reflection off the eye surface is disrupted, and this change in the reflected value is recorded. To enable cursor movement on the computer screen, an accelerometer is used. The accelerometer is a small device, about the size of a thumb phalanx (bone), that operates on the principle of axis-based motion sensing and can be worn as a ring by the paralyzed person. A microcontroller processes the inputs from the IR sensor and the accelerometer and transmits them wirelessly via an XBee radio to a second microcontroller attached to the computer. Using the proposed algorithm, the receiving microcontroller moves the cursor on the computer screen and performs actions ranging from opening a document to operating text-to-speech software. EYECOM has features that can help paralyzed persons continue contributing to the technological world and remain active members of society; they will be able to perform a number of tasks without depending on others, from reading a newspaper on the computer to activating text-to-speech software.
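
    A minimal sketch of the blink-detection and cursor-mapping logic the abstract describes, written as a host-side Python simulation. The thresholds, gain, and packet format are illustrative assumptions; the actual EYECOM firmware and algorithm are not reproduced here.

```python
# Illustrative simulation of the signal chain the abstract describes: an IR
# reflectance drop marks a blink (click), accelerometer tilt moves the cursor.
# All constants and the packet layout are hypothetical.

BLINK_THRESHOLD = 0.4   # normalized IR reflectance below which the eyelid is "closed"
BLINK_SAMPLES = 3       # consecutive low samples that count as a deliberate blink
GAIN = 25.0             # accelerometer tilt (g) -> cursor pixels per update

def detect_blink(ir_samples):
    """Return True if a run of BLINK_SAMPLES low-reflectance readings occurs."""
    run = 0
    for value in ir_samples:
        run = run + 1 if value < BLINK_THRESHOLD else 0
        if run >= BLINK_SAMPLES:
            return True
    return False

def cursor_delta(ax, ay):
    """Map accelerometer x/y tilt to a cursor displacement in pixels."""
    return round(ax * GAIN), round(ay * GAIN)

if __name__ == "__main__":
    # One simulated radio packet: IR readings plus a tilt sample.
    packet = {"ir": [0.9, 0.85, 0.3, 0.2, 0.25, 0.9], "accel": (0.12, -0.05)}
    if detect_blink(packet["ir"]):
        print("blink -> click")            # blink acts as a mouse click
    dx, dy = cursor_delta(*packet["accel"])
    print(f"move cursor by ({dx}, {dy})")  # head/ring tilt moves the pointer
```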

    Biosignal‐based human–machine interfaces for assistance and rehabilitation : a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey aims to review the large literature of the last two decades regarding biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application, considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance; however, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.
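
    The survey's two-axis classification can be pictured as a small data structure: each included study is tagged with one signal macrocategory and one target application, and the reported percentages fall out of simple tallies. The category names below come from the abstract; the example records are invented placeholders, not the survey's data.

```python
# A sketch of the survey's two-axis classification. Category names are taken
# from the abstract; the study records are invented placeholders.
from collections import Counter

SIGNAL_CATEGORIES = {"biopotential", "muscle mechanical motion",
                     "body motion", "hybrid"}
TARGET_CATEGORIES = {"prosthetic control", "robotic control",
                     "virtual reality control", "gesture recognition",
                     "communication", "smart environment control"}

studies = [  # placeholders standing in for the 181 included papers
    {"signal": "biopotential", "target": "communication"},
    {"signal": "muscle mechanical motion", "target": "prosthetic control"},
    {"signal": "hybrid", "target": "prosthetic control"},
]
assert all(s["signal"] in SIGNAL_CATEGORIES and
           s["target"] in TARGET_CATEGORIES for s in studies)

# Tally target applications, as the survey does when reporting trends.
for target, n in Counter(s["target"] for s in studies).most_common():
    print(f"{target}: {n} ({100 * n / len(studies):.0f}%)")
```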

    Essential Elements for Assessment of Persons with Severe Neurological Impairments for Computer Access Using Assistive Technology Devices: A Delphi Study

    This study was undertaken to determine potential elements for inclusion in an assessment of persons with disabilities for computer access using assistive technology (AT). There is currently a lack of guidelines regarding the areas that constitute a comprehensive and valid measure of a person's need for AT devices to enable computer access, resulting in substandard services. A list of criteria for elements that should be incorporated into an instrument for determining AT for computer access was compiled from a literature review in the areas of neuroscience, rehabilitation, and education, and from a Delphi study using an electronic survey form e-mailed to a panel of experts in the field of AT. The initial Delphi survey contained 22 categories (54 subcategories) and elicited 33 responses. The second round of the survey completed the Delphi process, resulting in a consensus by the panel of experts on the inclusion of 39 subcategories, or elements, that could be utilized in an assessment instrument. Only those areas rated as essential to the assessment process (very important or important by 80% of the respondents) were chosen as criteria for the instrument. Many of the non-selected elements fell just short of this threshold, were supported in the literature, or drew favorable comments from the expert panelists; other areas may be redundant or could be subsumed under another category. There are inherent obstacles to prescribing the proper AT device to assist disabled persons with computer access due to the complexity of their conditions, and there are numerous technological devices to aid persons in accomplishing diverse tasks. This study reveals the complexity of the assessment process, especially for persons with severe disabilities associated with neurological conditions. An assessment instrument should be broad-ranging, considering the multidimensional nature of AT prescription for computer access; both intrinsic and extrinsic factors affect the provision of AT.
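
    The inclusion rule described above is easy to state precisely: an element is retained when at least 80% of panelists rate it "very important" or "important". A small sketch of that rule, with invented ratings standing in for the panel's responses:

```python
# Sketch of the stated inclusion rule: keep an element when at least 80% of
# panelists rate it "very important" or "important". The ratings below are
# invented placeholders, not the study's data.

CONSENSUS = 0.80
ESSENTIAL = {"very important", "important"}

def retained(ratings):
    """True if the share of essential ratings meets the consensus threshold."""
    return sum(r in ESSENTIAL for r in ratings) / len(ratings) >= CONSENSUS

votes = {
    "hypothetical element A": ["very important"] * 28 + ["somewhat important"] * 5,
    "hypothetical element B": ["important"] * 20 + ["not important"] * 13,
}
for element, ratings in votes.items():
    print(element, "->", "include" if retained(ratings) else "exclude")
```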

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological condition. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature suggesting that physiological measurements are needed: we show that it is possible to use a software-only method to estimate user emotion.
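
    To give a flavor of the approach, here is a toy fuzzy-appraisal rule in the spirit of the FLAME-derived model: game events are fuzzified through membership functions and combined with a min-AND into an emotion intensity. The membership functions, variables, and rule below are illustrative assumptions, not the authors' actual rule base.

```python
# A toy fuzzy-appraisal step in the spirit of the FLAME-based model the paper
# adapts; the membership functions and the single rule here are illustrative.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def frustration(damage_rate, kill_rate):
    """Fuzzy rule: high damage taken AND low kills -> frustrated (min-AND)."""
    high_damage = tri(damage_rate, 0.3, 1.0, 1.7)  # "high damage" membership
    low_kills = tri(kill_rate, -0.7, 0.0, 0.7)     # "low kills" membership
    return min(high_damage, low_kills)             # rule firing degree in [0, 1]

print(frustration(damage_rate=0.9, kill_rate=0.1))  # ~0.86: likely frustrated
```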

    Selection strategies in gaze interaction

    This thesis deals with selection strategies in gaze interaction, specifically in a context where gaze is the sole input modality for users with severe motor impairments. The goal has been to contribute to the subfield of assistive technology in which gaze interaction is necessary for the user to achieve autonomous communication and environmental control. From a theoretical point of view, research has been done on the physiology of the gaze and on eye-tracking technology, and a taxonomy of existing selection strategies has been developed. Empirically, two overall approaches have been taken. First, end-user research was conducted through interviews and observation, exploring the capabilities, requirements, and wants of end-users. Second, several applications were developed to explore the selection strategy of single stroke gaze gestures (SSGG) and aspects of complex gaze gestures. The main finding is that single stroke gaze gestures can successfully be used as a selection strategy. Among their observed properties: horizontal single stroke gaze gestures are faster than vertical ones; there is a significant difference in completion time depending on gesture length; single stroke gaze gestures can be completed without visual feedback; the gaze-tracking equipment has a significant effect on completion times and error rates; and there is no significantly greater chance of making selection errors with single stroke gaze gestures than with dwell selection. The overall conclusion is that the future of gaze interaction should focus on developing multi-modal interactions for mono-modal input.
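
    A single stroke gaze gesture reduces to one roughly straight gaze movement whose length and direction select a command. A minimal sketch of such a recognizer, assuming a hypothetical pixel-length threshold and 4-way direction quantization (the thesis's own detection parameters are not reproduced here):

```python
# A minimal sketch of recognizing a single stroke gaze gesture (SSGG) from raw
# gaze samples: one straight stroke whose length and direction select a command.
import math

MIN_LENGTH = 200  # px: shorter movements are treated as noise/fixation drift

def classify_stroke(samples):
    """samples: [(x, y), ...] gaze points; return 'left/right/up/down' or None."""
    (x0, y0), (x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < MIN_LENGTH:
        return None                        # too short to be a deliberate stroke
    if abs(dx) >= abs(dy):                 # horizontal component dominates
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(classify_stroke([(100, 300), (180, 305), (420, 310)]))  # -> "right"
```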

    Evaluation Of The Matadoc And Comparison Of Auditory Musical, Non-Musical, And Live Music Therapy Stimuli To Increase Awareness And Sense Of Self In Patients With Moderate And Severe Dementia: An Exploratory Case Study

    Background: The severe stage of dementia (SSD) can cause a loss of self-awareness, affecting proper assessment, treatment, and care. The Music Therapy Assessment Tool for Awareness of Disorders of Consciousness (MATADOC) is a validated and reliable tool for measuring awareness in DOC populations, and it might be able to track awareness levels in people with SSD. There is also a need to identify effective treatments for people with SSD, since pharmacological treatments have shown limited and even negative results. Both live music therapy and listening to recorded songs have evidence of positive effects. Purpose: The purpose of this study is twofold: 1) to explore the use of the MATADOC for the assessment of patients with advanced dementia, and 2) to compare the effects of live music therapy, recorded songs, and simulated presence therapy on increasing MATADOC scores and signs of an enhanced sense of self. Method: A case study with four participants was conducted by a graduate student. Participants underwent four sessions of baseline assessment with the MATADOC. Afterward, each participant completed a 30-minute session of listening to recorded songs, live music therapy, and auditory simulated presence therapy, each condition in a different order and on a different day. Each condition was immediately followed by a single MATADOC session as a post-test. All sessions were recorded on video for behavioral/thematic analysis, and caregivers were interviewed to provide reports. Results: Most items of the MATADOC showed consistency with the level of deterioration of dementia. Two items, intentional behavior and non-verbal communication, were consistently high across the four participants, while the vocalization and emotional response items varied with the type of dementia, vocal/speech health, or location of brain damage. The protocol appeared to increase arousal, verbalizations, and/or mood. The researcher identified 18 adaptations or considerations to better fit the MATADOC to the dementia population. The musical conditions produced a better response than the control condition in all four participants: live music therapy produced a better response in three of the four participants, and listening to recorded songs was better for the remaining participant. Conclusion: The MATADOC might be able to identify awareness deficits in people with SSD, but it could be improved by including cognitive, sensory, and decline-related factors appropriate for the dementia population. The positive effects of live music therapy could be attributed to its flexibility and multimodal approach, suited to the individual strengths and needs of the participants. Listening to recorded songs appeared to be an important treatment, but one with risks of harm. Five recommendations for future research were identified and outlined.

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications.
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive, and empowering applications and environments that enable older, impaired, or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security, and safety. The application scenarios addressed by AAL are complex, owing to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive, and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronize with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Cameras and microphones are far less obtrusive than other wearable sensors, which may hinder one's activities, and a single camera placed in a room can record most of the activities performed there, replacing many other non-visual sensors. Currently, video-based applications are effective in recognizing and monitoring the activities, movements, and overall condition of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems: they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). As the other side of the coin, however, cameras and microphones are often perceived as the most intrusive technologies with respect to the privacy of the monitored individuals, owing to the richness of the information they convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance, and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring and activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products, and research projects; the open challenges are also highlighted. The report ends with an overview of the challenges, hindrances, and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability, and trust in AAL technology by surveying strategies for co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.