EYECOM: an innovative approach for computer interaction
The world is innovating rapidly, and there is a need for continuous interaction with technology. Unfortunately, few promising options exist for paralyzed people to interact with machines such as laptops, smartphones, and tablets. The few commercial solutions, such as Google Glass, are costly and cannot be afforded by every paralyzed person. Towards this end, this thesis proposes a retina-controlled device called EYECOM. The proposed device is constructed from cost-effective yet robust off-the-shelf IoT components (i.e., Arduino microcontrollers, Xbee wireless modules, IR diodes, and an accelerometer). The device can easily be mounted onto glasses; a paralyzed person using it can interact with a machine through simple head movements and eye blinks. An IR diode positioned in front of the eye illuminates the eye region, and the reflected IR light is recorded as an electrical signal; as the eyelids close, the light reflected from the eye surface is disrupted, and this change in the recorded value registers a blink. To enable cursor movement on the computer screen, an accelerometer is used. The accelerometer is a small device, roughly the size of a thumb phalanx, that operates on the principle of axis-based motion sensing and can be worn as a ring by the paralyzed person. A microcontroller processes the inputs from the IR sensor and the accelerometer and transmits them wirelessly via an Xbee module (i.e., a radio) to a second microcontroller attached to the computer. Using the proposed algorithm, the receiving microcontroller moves the cursor on the computer screen and facilitates actions ranging from something as simple as opening a document to operating text-to-speech software.
EYECOM has features which can help paralyzed persons continue contributing to the technological world and become an active part of society. As a result, they will be able to perform a number of tasks without depending on others, from something as simple as reading a newspaper on the computer to activating text-to-speech software.
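The host-side logic the abstract describes can be pictured in a minimal sketch: blinks are detected as dips in IR reflectance, and accelerometer tilt is mapped to cursor displacement. All thresholds, names, and the unit conventions below are illustrative assumptions, not the thesis's actual algorithm or packet format.

```python
# Hypothetical sketch of the receiving microcontroller's logic: readings
# arrive over the Xbee radio; a blink is a dip in IR reflectance, and
# accelerometer tilt becomes cursor motion. Thresholds are assumed values.

BLINK_THRESHOLD = 300   # assumed ADC reading below which the eyelid is closed
SENSITIVITY = 10        # assumed pixels of cursor travel per unit of tilt (g)

def detect_blink(ir_value, prev_closed):
    """Return (blink_completed, eye_now_closed).

    A blink is registered on the closed -> open transition, so holding
    the eyes shut does not fire repeated click events.
    """
    closed = ir_value < BLINK_THRESHOLD
    return (prev_closed and not closed), closed

def tilt_to_cursor_delta(ax, ay, dead_zone=0.05):
    """Map accelerometer tilt (in g) to a cursor displacement in pixels,
    ignoring small readings so a resting hand does not drift the cursor."""
    dx = 0 if abs(ax) < dead_zone else int(ax * SENSITIVITY)
    dy = 0 if abs(ay) < dead_zone else int(ay * SENSITIVITY)
    return dx, dy
```

Separating blink detection (clicks) from tilt mapping (pointer motion) mirrors the two sensing channels the device combines.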
Cognition in action C-i-A: Rethinking gesture in neuro-atypical young people: A conceptual framework for embodied, embedded, extended and enacted intentionality
The three aims of my interdisciplinary thesis are:
-To develop a conceptual framework for re-thinking the gestures of neuro-atypical young people, that is non-traditional and non-representational
-To develop qualitative analytical tools for the annotation and interpretation of gesture that can be applied inclusively to both neuro-atypical and neuro-typical young people
-To consider the conceptual framework in terms of its theoretical implications and practical applications
Learning to communicate and work with neuro-atypical young people provides the rationale and continued impetus for my work. My approach is influenced by the limited social, physical and communicative experiences of young people with severe speech and motor impairment due to cerebral palsy (SSMI-CP). CP describes a range of non-progressive syndromes of posture and motor impairment. The aetiology is thought to result from damage to the developing central nervous system during gestation or in the neonate. Brain lesions involve the basal ganglia and the cerebellum; both sites are known to support motor control and integration.
However, gaps in theoretical research and empirical data in the study of corporeal expression in young people with SSMI-CP necessitated the development of both an alternative theoretical framework and new tools. Biological Dynamic Systems Theory is proposed as the best candidate structure for the reconsideration of gesture. It encompasses the global, synthetic and embodied nature of gesture. Gesture is redefined and considered part of an emergent dynamic, complex, non-linear and self-organizing system.
My construct of Cognition-in-Action (C-i-A) is derived from the notion of knowing-as-doing influenced by socio-biological paradigms; it places the Action-Ready-Body centre stage. It is informed by a theoretical synthesis of knowledge from the domains of Philosophy, Science and Technology, including practices in the clinical, technology design and performance arts arenas. The C-i-A is a descriptive, non-computational feature-based framework. Its development centred around two key questions that served as operational starting points: What can gestures reveal about children’s cognition-in-action? and Is there the potential to influence gestural capacity in children? These are supported by my research objectives.
Three case studies are presented that focus on the annotation and interpretative analysis of corporeal exemplars from two adolescent males aged 16.9 and 17.9 years and one girl aged 10.7 years. These exemplars were contributed to the Child Gesture Corpus by these young people with SSMI-CP. The Gesture-Action-Entity (GAE) is proposed as a unit of interest for the analysis of procedural, semantic and episodic aspects of our corporeal knowledge. A body-based-action-annotation-system (G-ABAS) and an Interpretative Phenomenological Analysis methodology, applied for the first time to gesture (G-IPA), are used together. These tools facilitate fine-grained analyses of corporeal dynamics and of narrative gesture features.
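In code terms, an annotation unit along the lines of the GAE could be represented as a simple record combining timing, body-part, dynamic and interpretative fields. The field names below are illustrative assumptions; the thesis's G-ABAS coding scheme is far richer than this sketch.

```python
# Illustrative data structure for a gesture annotation unit, loosely
# modelled on the Gesture-Action-Entity (GAE) described above.
# All field names are hypothetical, chosen only to show the shape of
# combining dynamic (G-ABAS-style) and interpretative (G-IPA-style) data.
from dataclasses import dataclass, field

@dataclass
class GestureActionEntity:
    participant: str                 # anonymised participant code
    start_s: float                   # gesture onset in the video, seconds
    end_s: float                     # gesture offset, seconds
    body_parts: list = field(default_factory=list)  # e.g. ["head", "left arm"]
    dynamics: str = ""               # coarse movement quality, e.g. "sustained"
    interpretation: str = ""         # interpretative narrative note

    @property
    def duration_s(self) -> float:
        return self.end_s - self.start_s
```

A corpus of such records would support both the fine-grained dynamic analyses and the narrative interpretation the tools are designed for.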
Phenomenal data reveal that these young people have latent resources, capacities and capabilities that they can express corporeally. Iteration of these interpretative findings with the Cognition-in-Action framework allows for the inference of processes that may underlie the strategies they use to achieve such social-motor-cognitive functions. In summary, their Cognition-in-Action is brought-forth, carried forward and has the potential to be culturally embodied.
The utility of the C-i-A framework lies in its explanatory power to contribute to a deeper understanding of child gesture. Furthermore, I discuss and illustrate its potential to influence practice in the domains of pedagogy, rehabilitation and the design of future intimate, assistive and perceptually sensitive technologies. Such technologies are increasingly mediating our social interactions. My work offers an ecologically valid alternative to traditional conceptualizations of perception, cognition and action. My thesis contributes new knowledge and carries implications across the domains of movement science, gesture studies, and applied participatory performance arts and health practices.
Sensorimotor experience in virtual environments
The goal of rehabilitation is to reduce impairment and provide functional improvements resulting in quality participation in activities of life. Plasticity and motor learning principles provide inspiration for therapeutic interventions, including movement repetition in a virtual reality environment. The objective of this research was to investigate function-specific measurements (kinematic, behavioral) and neural correlates of the motor experience of hand gesture activities in virtual environments (VEs) stimulating sensory experience, using a hand agent model. The fMRI-compatible Virtual Environment Sign Language Instruction (VESLI) system was designed and developed to provide a number of rehabilitation and measurement features, to identify optimal learning conditions for individuals, and to track changes in performance over time. Therapies and measurements incorporated into VESLI target and track specific impairments underlying dysfunction. The goal of improved measurement is to develop targeted interventions embedded in higher-level tasks and to accurately track specific gains, in order to understand responses to treatment and the impact a response may have upon higher-level function such as participation in life. To further clarify the biological model of motor experience, and to understand the added value and role of virtual sensory stimulation and feedback, which includes seeing one's own hand movement, functional brain mapping was conducted with simultaneous kinematic analysis in healthy controls and in stroke subjects. It is believed that, through an understanding of these neural activations, rehabilitation strategies exploiting the principles of plasticity and motor learning will become possible. The present research assessed practice conditions that successfully promote gesture learning in the individual.
For the first time, functional imaging experiments mapped neural correlates of human interaction with complex virtual reality hand avatars moving synchronously with the subject's own hands. Findings indicate that healthy control subjects learned intransitive gestures in virtual environments using first- and third-person avatars, picture and text definitions, and while viewing visual feedback of their own hands, virtual hand avatars, or, in the control condition, hidden hands. Moreover, exercise in a virtual environment with a first-person hand avatar recruited insular cortex activation over time, which might indicate that this activation is associated with a sense of agency. Sensory augmentation in virtual environments modulated activations of important brain regions associated with action observation and action execution. The quality of the visual feedback was modulated, and brain areas were identified in which the amount of activation was positively or negatively correlated with the visual feedback. When subjects moved the right hand and saw an unexpected response (the left virtual avatar hand moved), neural activation increased in the motor cortex ipsilateral to the moving hand. This visual modulation might provide a helpful rehabilitation therapy for people with paralysis of a limb through visual augmentation of skills. A model was developed to study the effects of sensorimotor experience in virtual environments, yielding findings on the effect of such experience upon brain activity and related behavioral measures. The research model represents a significant contribution to neuroscience research and translational engineering practice. A model of neural activations correlated with kinematics and behavior can profoundly influence the delivery of rehabilitative services in the coming years by giving clinicians a framework for engaging patients in a sensorimotor environment that can optimally facilitate neural reorganization.
The role of HG in the analysis of temporal iteration and interaural correlation
How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers
Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Sciences in 1970, these centers were intended to take a “total approach to rehabilitation”, combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies.
RERC work also now often views users as integrated into an interdependent society through technologies that people both with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as able to improve outcomes through learning, exercise, and plasticity (rather than being static), which can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversifying scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.
A flexible object orientated design approach for the realisation of assistive technology
This thesis contributes to a growing body of research conducted by the Interactive Systems Research Group (ISRG) at Nottingham Trent University within the fields of accessibility and accessible technologies. Core to this research is the exploration of how interactive technologies can be developed and applied as platforms for education, rehabilitation and social inclusion. To this end, the group has been actively evolving the User Sensitive and Inclusive Design (USID) methodology for the design, development and evaluation of accessible software and related technologies. This thesis contributes to the further development of the USID methodology, with a focus on its application to the design of assistive technology.
Auditory and haptic feedback to train basic mathematical skills of children with visual impairments
Physical manipulatives, such as rods or tiles, are widely used for mathematics learning, as they support embodied cognition, enable the execution of epistemic actions, and foster conceptual metaphors. Counting them, children explore, rearrange, and reinterpret the environment through the haptic channel. Vision generally complements physical actions, which makes traditional manipulatives of limited use for children with visual impairments (VIs). Digitally augmenting manipulatives with feedback through alternative modalities might improve them. We specifically discuss conveying number representations to children with VIs using haptic and auditory channels within an environment that encourages exploration and supports active touch counting strategies while promoting reflection. This paper presents LETSMath, a tangible system for training basic mathematical skills of children with VIs, developed through Design-Based Research with three iterations in which we involved 19 children with VIs and their educators. We discuss how the system may support training skills in the composition of numbers, and the impact the different system features have on slowing down the interaction pace to trigger reflection, on understanding, and on incorporation.
Funding: Universitat Pompeu Fabra (Spain) through MIREGAMIS (2018 LLAV 00009); Agencia Nacional de Investigación e Innovación (ANII); Fundación Ceibal; Centro Interdisciplinario en Cognición para la Enseñanza y el Aprendizaje (CICEA), Universidad de la República; Universitat Oberta de Catalunya (Spain) through the Ministry of Science, Innovation, and Universities (IJCI-2017-32162); LASIGE Research Unit (Portugal) through FCT project mIDR (AAC02/SAICT/2017, project 30347, co-funded by COMPETE/FEDER/FNR) and the LASIGE Research Unit, refs. UIDB/00408/2020 and UIDP/00408/2020.
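One way a tangible system could convey a count non-visually, in the spirit of the haptic and auditory channels described above, is as a rhythmic pattern of pulses grouped in fives so larger numbers stay countable by ear or touch. The grouping size and timings below are assumptions for illustration, not LETSMath's actual design.

```python
# Hedged sketch: encode a number as a sequence of audio/haptic pulses,
# with a longer silent gap after every `group` pulses so the listener can
# chunk the count. Grouping size and millisecond timings are assumed values.

def number_to_pulse_pattern(n, group=5, pulse_ms=120, gap_ms=80, group_gap_ms=400):
    """Return a list of (duration_ms, is_pulse) pairs encoding n pulses."""
    pattern = []
    for i in range(1, n + 1):
        pattern.append((pulse_ms, True))         # one pulse per counted unit
        if i == n:
            break                                # no trailing gap after the last pulse
        # longer pause at group boundaries, short pause otherwise
        pattern.append((group_gap_ms if i % group == 0 else gap_ms, False))
    return pattern
```

Rendering 7 as "five pulses, a long pause, two pulses" mirrors how grouped manipulatives support composing numbers from parts.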
User-centered design of a dynamic-autonomy remote interaction concept for manipulation-capable robots to assist elderly people in the home
In this article, we describe the development of a human-robot interaction concept for service robots to assist elderly people in the home with physical tasks. Our approach is based on the insight that robots are not yet able to handle all tasks autonomously with sufficient reliability in the complex and heterogeneous environments of private homes. We therefore employ remote human operators to assist with tasks a robot cannot handle completely autonomously. Our development methodology was user-centric and iterative, with six user studies carried out at various stages involving a total of 241 participants. The concept is under implementation on the Care-O-bot 3 robotic platform. The main contributions of this article are (1) the results of a survey in the form of a ranking of the demands of elderly people and informal caregivers for a range of 25 robot services, (2) the results of an ethnography investigating the suitability of emergency teleassistance and telemedical centers for incorporating robotic teleassistance, and (3) a user-validated human-robot interaction concept with three user roles and three corresponding user interfaces, designed as a solution to the problem of engineering reliable service robots for home environments.
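The dynamic-autonomy idea at the heart of the concept can be sketched as a simple escalation rule: the robot attempts a task autonomously and hands over to a remote operator when its confidence is too low or the attempt fails. The threshold and names below are assumptions for illustration, not the Care-O-bot 3 implementation.

```python
# Illustrative sketch of dynamic-autonomy dispatch: try autonomy first,
# escalate to a remote human operator on low confidence or failure.
# The threshold value and role names are assumed, not from the article.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for attempting autonomous execution

def dispatch(task, confidence, autonomous_attempt):
    """Decide who handles a task: the robot alone, or a remote operator.

    `autonomous_attempt` is a callable returning True on success; it is
    only invoked when confidence clears the threshold.
    """
    if confidence >= CONFIDENCE_THRESHOLD and autonomous_attempt(task):
        return "autonomous"
    return "remote_operator"
```

The same structure generalizes to the three user roles in the concept: each escalation step corresponds to a different interface and level of human involvement.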