
    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA, to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT, and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.
    Comment: 43 pages, 13 figures

    Spatial distribution of HD-EMG improves identification of task and force in patients with incomplete spinal cord injury

    Background: Recent studies show that the spatial distribution of high-density surface EMG (HD-EMG) maps improves the identification of tasks and their corresponding contraction levels. However, in patients with incomplete spinal cord injury (iSCI), some of the nerves that control muscles are damaged, leaving parts of the muscle without innervation. HD-EMG maps in patients with iSCI are therefore affected by the injury and can differ from patient to patient. The objective of this study is to investigate the spatial distribution of intensity in HD-EMG recordings to distinguish co-activation patterns for different tasks and effort levels in patients with iSCI. These patterns are evaluated for use in extracting motion intention. Method: HD-EMG was recorded in patients during four isometric tasks of the forearm at three different effort levels. A linear discriminant classifier based on intensity and spatial features of the HD-EMG maps of five upper-limb muscles was used to identify the attempted tasks. Task and force identification were evaluated for each patient individually, and the reliability of the identification was tested with respect to muscle fatigue and the time interval between training and identification. Three feature sets were analyzed: 1) intensity of the HD-EMG map, 2) intensity and center of gravity of the HD-EMG map, and 3) intensity of a single differential EMG channel (gold standard). Results: The combination of intensity and spatial features identifies tasks and effort levels reliably (Acc = 98.8 %; S = 92.5 %; P = 93.2 %; SP = 99.4 %) and significantly outperforms the other two feature sets (p < 0.05). Conclusion: In spite of limited motor functionality, a specific co-activation pattern exists for each patient in both the intensity and the spatial distribution of myoelectric activity. The spatial distribution is less sensitive than intensity to myoelectric changes caused by fatigue and other time-dependent influences.
    Peer Reviewed. Postprint (published version).
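The intensity and centre-of-gravity features named in the abstract can be sketched as follows. This is a minimal illustration, not the study's implementation: the grid size, the use of the mean as the intensity feature, and the amplitude-weighted centroid as the spatial feature are assumptions based on the abstract's wording.

```python
import numpy as np

def hdemg_features(emg_map):
    """Intensity and centre-of-gravity features from a 2-D HD-EMG
    activation map (rows x cols of per-channel RMS amplitudes).
    Illustrative only; the paper's exact feature definitions may differ."""
    intensity = emg_map.mean()                      # global map intensity
    total = emg_map.sum()
    rows, cols = np.indices(emg_map.shape)
    cog_row = (rows * emg_map).sum() / total        # amplitude-weighted centroid
    cog_col = (cols * emg_map).sum() / total
    return np.array([intensity, cog_row, cog_col])

# Example: activity concentrated at one electrode of a hypothetical 8x8 grid
m = np.zeros((8, 8))
m[6, 6] = 1.0
print(hdemg_features(m))  # centre of gravity at (6, 6)
```

Feature vectors of this form (one per muscle) could then be fed to a linear discriminant classifier, as the study describes.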

    Body-Borne Computers as Extensions of Self

    The opportunities for wearable technologies go well beyond always-available information displays or health-sensing devices. The concept of the cyborg introduced by Clynes and Kline, along with work in various fields of research and the arts, presents a vision of what technology integrated with the body can offer. This paper identifies different categories of research aimed at augmenting humans, focusing specifically on three areas of augmentation of the human body and its sensorimotor capabilities: physical morphology, skin display, and somatosensory extension. We discuss how such digital extensions relate to the malleable nature of our self-image, and argue that body-borne devices are no longer simply functional apparatus but offer a direct interplay with the mind. Finally, we showcase some of our own projects in this area and shed light on future challenges.

    Skin-Mounted RFID Sensing Tattoos for Assistive Technologies

    UHF RFID technology is presented that can facilitate new passive assistive technologies. Tongue control for human-computer interfaces is first discussed: a tag is attached to the hard palate of the mouth, and the tag's turn-on power is observed to vary in response to tongue proximity. Secondly, a stretchable tag is fabricated from Lycra fabric containing conducting silver fibres. Applying strain to this elastic tag likewise causes the power required at the reader to activate the tag to vary in proportion. The elastic tag is proposed as a temporary skin-mounted strain gauge that could detect muscle twitch in the face or neck of an otherwise physically incapacitated person. Either design might be applied to the steering function of a powered wheelchair, or to facilitate the control of a computer mouse. Better than 3 dB of isolation is achieved in the tongue-switching case, and a sensitivity of approximately 0.25 dBm per percent stretch is observed for the strain gauge.
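The two figures reported at the end of the abstract could be used to decode the tags roughly as sketched below. The constants come from the abstract; the decision margin for the tongue switch and the linear strain model are assumptions for illustration, not part of the paper.

```python
# Reported sensitivities (from the abstract); decoding logic is an assumption.
STRAIN_SENSITIVITY_DBM_PER_PCT = 0.25  # ~0.25 dBm turn-on power shift per % stretch
TONGUE_ISOLATION_DB = 3.0              # >3 dB separation between tongue states

def strain_percent(p_on_dbm, p_rest_dbm):
    """Estimate % stretch of the elastic tag from the change in reader
    power needed to activate it, assuming a linear response."""
    return (p_on_dbm - p_rest_dbm) / STRAIN_SENSITIVITY_DBM_PER_PCT

def tongue_near(p_on_dbm, p_baseline_dbm, margin_db=TONGUE_ISOLATION_DB / 2):
    """Classify tongue proximity from the turn-on power shift, using half
    the reported isolation as a decision margin (an assumption)."""
    return (p_on_dbm - p_baseline_dbm) > margin_db

print(strain_percent(-8.0, -10.0))  # a 2 dBm shift -> 8.0 % stretch
print(tongue_near(-5.0, -10.0))     # a 5 dB shift -> True
```

A real reader would also need to sweep transmit power to locate the turn-on threshold, which this sketch takes as given.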

    Passive wireless tags for tongue controlled assistive technology interfaces

    Tongue control with low-profile, passive mouth tags is demonstrated as a human–device interface by communicating values of tongue–tag separation over a wireless link. Confusion matrices are provided to demonstrate user accuracy in targeting by tongue position. Accuracy is found to increase dramatically after short training sequences, with error rates falling to approximately 1% and zero missed targets. The rate at which users learn accurate targeting indicates that this is an intuitive device to operate. The significance of the work is that innovative, very unobtrusive wireless tags can provide intuitive human–computer interfaces based on low-cost, disposable mouth-mounted technology. With the development of an appropriate reading system, control of assistive devices such as computer mice or wheelchairs could be possible for tetraplegics and others who retain fine motor control of their tongues. The tags contain no battery and are intended to fit directly on the hard palate, detecting tongue position in the mouth with no need for tongue piercings.
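The accuracy figures reported from the confusion matrices can be computed as sketched below. The matrix values are hypothetical (the paper's own matrices are not reproduced here); only the metric definitions are standard.

```python
import numpy as np

def targeting_metrics(cm):
    """Overall accuracy and per-target recall from a confusion matrix
    whose rows are intended targets and columns are selected targets."""
    cm = np.asarray(cm, dtype=float)
    accuracy = np.trace(cm) / cm.sum()          # fraction of correct selections
    per_target = np.diag(cm) / cm.sum(axis=1)   # recall for each tongue position
    return accuracy, per_target

# Hypothetical post-training matrix for four tongue positions, 25 trials each
cm = [[25, 0, 0, 0],
      [1, 24, 0, 0],
      [0, 0, 25, 0],
      [0, 0, 0, 25]]
acc, per = targeting_metrics(cm)
print(round(acc, 3))  # 0.99, i.e. a 1% error rate
```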

    Lipreading with Long Short-Term Memory

    Lipreading, i.e. speech recognition from visual-only recordings of a speaker's face, can be achieved with a processing pipeline based solely on neural networks, yielding significantly better accuracy than conventional methods. Feed-forward and recurrent neural network layers (namely Long Short-Term Memory, LSTM) are stacked to form a single structure which is trained by back-propagating error gradients through all the layers. The performance of such a stacked network was experimentally evaluated and compared to a standard Support Vector Machine classifier using conventional computer vision features (Eigenlips and Histograms of Oriented Gradients). The evaluation was performed on data from 19 speakers of the publicly available GRID corpus. With 51 different words to classify, we report a best word accuracy on held-out evaluation speakers of 79.6% using the end-to-end neural network-based solution (an 11.6% improvement over the best feature-based solution evaluated).
    Comment: Accepted for publication at ICASSP 201
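To illustrate the recurrent layer type the paper stacks, here is a single forward step of a standard LSTM cell in plain numpy. This is a generic textbook cell, not the paper's architecture; the feature size, hidden size, and weight initialisation are arbitrary assumptions.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a standard LSTM cell. W maps the concatenated
    [x; h_prev] to the four gate pre-activations; shapes are illustrative."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b   # gate pre-activations, shape (4H,)
    i = 1 / (1 + np.exp(-z[:H]))              # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))           # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))         # output gate
    g = np.tanh(z[3*H:])                      # candidate cell state
    c = f * c_prev + i * g                    # new cell state
    h = o * np.tanh(c)                        # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 6, 4                                   # toy feature and hidden sizes
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.standard_normal((5, D)):         # e.g. per-frame mouth features
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (4,)
```

In an end-to-end lipreading pipeline, per-frame visual features would feed such recurrent layers, with a final classification layer over the word vocabulary.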